\begin{document}
\title{ Derivation and Analysis of Simplified Filters for Complex Dynamical Systems }
\author{ Wonjung Lee \thanks { Mathematics Institute and Centre for Predictive Modelling, School of Engineering, University of Warwick, U.K. (W.Lee@warwick.ac.uk). } \and Andrew Stuart \thanks { Mathematics Institute, University of Warwick, U.K. (A.M.Stuart@warwick.ac.uk). } }
\pagestyle{myheadings} \markboth{Derivation and Analysis of Simplified Filters for Complex Dynamical Systems}{Wonjung Lee and Andrew Stuart} \maketitle
\begin{abstract} Filtering is concerned with the sequential estimation of the state, and uncertainties, of a Markovian system, given noisy observations. It is particularly difficult to achieve accurate filtering in complex dynamical systems, such as those arising in turbulence, in which effective low-dimensional representation of the desired probability distribution is challenging. Nonetheless, recent advances have shown considerable success in filtering based on certain carefully chosen simplifications of the underlying system, which allow closed form filters. This leads to filtering algorithms with significant, but judiciously chosen, model error. The purpose of this article is to analyze the effectiveness of these simplified filters, and to suggest modifications of them which lead to improved filtering in certain time-scale regimes. We employ a Markov switching process for the true signal underlying the data, rather than working with a fully resolved DNS PDE model. Such Markov switching models have been demonstrated to provide an excellent surrogate test-bed for the turbulent bursting phenomena which make filtering of complex physical models, such as those arising in atmospheric sciences, so challenging. \end{abstract}
\begin{keywords}
Sequential filtering, Bayesian statistics, Complex dynamical systems, Model error.
{\bf AMS subject classifications.} 60G35, 93E11, 94A12 \end{keywords}
\section{Introduction} \label{sec:intro}
\subsection{Overview} Filtering is concerned with the sequential updating of Markovian systems, given noisy, partial observations of the system state \cite{law2014data,majda2012filtering,reich2015probabilistic}. Due to the increasing prevalence of data in all areas of science and engineering, and due to the inherent complexity of physical models developed for the description of many phenomena arising in science and engineering, the need for accurate and speedy filters is paramount. However in its full form filtering requires the description of a time-evolving probability distribution on the system state, conditioned on data, which for many systems can be hard to represent in a computationally tractable way. This is a particular challenge for the complex physical models arising in areas such as atmospheric sciences \cite{kalnay2003atmospheric}, oceanography \cite{bennett1992inverse} and oil reservoir simulation \cite{oliver2008inverse}. However a recent body of work by Majda and coworkers \cite{harlim2008filtering, majda2010mathematical, majda2012filtering, castronovo2008mathematical, keating2011new, branicki2013dynamic, branicki2012filtering, sapsis2013statistically, harlim2013test} has demonstrated the possibility of using drastic simplifications of the models for complex turbulent phenomena in order to construct effective filters which are computationally tractable in real-time. The underlying philosophy of this work is to replace the true underlying Markovian model (often deterministic, but chaotic) with a simplified stochastic model which captures the key physical phenomena at the statistical level yet is amenable to closed form expressions for the purpose of filtering. It is possible to interpret this work as providing an important step towards the adoption of {\em physically informed machine learning}, going beyond traditional machine learning methodologies which often attempt to build models from the data alone \cite{bishop2006pattern, murphy2012machine}. 
The purpose of our work is to shed further light on this body of work, through analysis, through the derivation of new methods in the same spirit, and through careful numerical experiments.
In order to carry out this program we do not work with a full complex model of turbulence for our true signal, but rather work with a simple switching stochastic model (SSM), a stochastic differential equation driven by a sign-alternating two-state Markov process \cite{mao2006stochastic,walter2006conditional}. The system is either forced or dissipated depending on the sign of the driving signal, and as a consequence admits intermittent bursting phenomena, similar to what is seen in real turbulent signals \cite{zakharov1992kolmogorov, frisch1996turbulence, bohr2005dynamical}. The use of this model as a simplified model for turbulent bursting, and demonstration of its effectiveness in this context, may be seen from the papers \cite{gershgorin2010test, gershgorin2010improving}. This SSM, then, is viewed as the ``true'' Markov model whose signals generate the data. Our objective is to find simplified models, amenable to filtering, which capture the essential features of the SSM. We now define the filtering problem and outline the simplified models that we consider.
\subsection{The True Model and Model Error} Consider an $\mathbb{R}^d$-valued Markov process $x(t)$ where $t \geq 0$. The process is hidden and we only have access to $y_n$, $n \in \mathbb{N}$, which is a (partial) noisy observation of $x_n \equiv x(nT)$ for some $T > 0$. For $Y_{n}:=\{y_1,\cdots,y_{n}\}$ the key objective in probabilistic
filtering is the sequential updating of $\mathbb{P}(x_n|Y_n)$ \cite{kushner1967approximations, jazwinski1970stochastic, anderson1979optimal, doucet2001sequential, majda2012filtering}.
To perform filtering, the standard approach adopted in large scale geophysical applications is to alternate the uncertainty propagation $\mathbb{P}\left( x_{n-1} \vert Y_{n-1} \right) \mapsto \mathbb{P}\left( x_n \vert Y_{n-1}\right)$, and the data acquisition $\mathbb{P}\left( x_{n} \vert Y_{n-1} \right) \mapsto \mathbb{P}\left( x_n \vert Y_{n}\right)$ in a sequential manner. The former step corresponds to probabilistic solution of the governing equation for $x(t)$, while the latter step is accomplished by Bayes' rule $\mathbb{P}( x_n \vert Y_n ) \propto \mathbb{P}(x_n \vert Y_{n-1}) \mathbb{P}(y_n \vert x_n)$, which asserts that the posterior distribution is proportional to the product of the prior distribution and the likelihood (viewed as function of $x_n$). Examples of Bayesian filters include the Kalman filter \cite{Kalman60, kalman1961new}, the extended Kalman filter \cite{gelb1974applied}, the ensemble Kalman filter \cite{evensen2009data}, the particle filter \cite{gordon1993novel} and the Gaussian mixture filter \cite{sorenson1971recursive, chen2000mixture, stordal2011bridging}.
In this paper the true model $x_n$ underlying the data will be found from discrete time sampling of the following switching stochastic model, or SSM for short: \begin{equation} \begin{split} \label{eq:fm} \textbf{(SSM)}\qquad \begin{cases} du &= -\gamma u dt + \sigma_u dB_u \\ \gamma &\in \,\,\,\{\gamma_{+} ,\gamma_{-} \} \end{cases} \end{split} \end{equation} where $\gamma(t)$ is a Markov process, alternately taking constant values of $\{\gamma_{+}>0,\gamma_{-}<0\}$. The distribution functions of the random variables \begin{equation*} \begin{split} \tau^{\gamma_{+}} &= \inf \{ t: \gamma(t)=\gamma_{-} \vert \gamma(0)=\gamma_{+}\}\\ \tau^{\gamma_{-}} &= \inf \{ t: \gamma(t)=\gamma_{+} \vert \gamma(0)=\gamma_{-}\} \end{split} \end{equation*} are given by \begin{equation*} \begin{split} \mathbb{P}(\tau^{\gamma_{+}} < t) &= 1-e^{-\frac{\lambda_{+}}{\epsilon} t} \\ \mathbb{P}(\tau^{\gamma_{-}} < t) &= 1-e^{-\frac{\lambda_{-}}{\epsilon} t} \end{split} \end{equation*} respectively. The positive parameter $\epsilon$ determines the transition rates, accounting for the time-scale separation between input signal $\gamma$ and output response $u$. In case of small $\epsilon$, there is rapid switching between $\gamma_{+}$ and $\gamma_{-}$. On the other hand, switching is a rare event when $\epsilon$ is large. In the general notation above we have $x=(u,\gamma).$
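To make the dynamics concrete, the SSM can be simulated directly: holding times for $\gamma$ are drawn from the exponential distributions above, and $u$ is advanced by Euler--Maruyama between switches. The following sketch illustrates this; the function name and the Euler--Maruyama discretization are our own choices for illustration, not part of the paper's methodology.

```python
import numpy as np

def simulate_ssm(T_end, dt, gamma_plus, gamma_minus, lam_plus, lam_minus,
                 eps, sigma_u, u0=0.0, rng=None):
    """Simulate du = -gamma u dt + sigma_u dB, where gamma alternates
    between gamma_plus and gamma_minus at exponential holding times
    with rates lam_plus/eps and lam_minus/eps (mean eps/lam_pm)."""
    rng = np.random.default_rng() if rng is None else rng
    n = int(T_end / dt)
    u = np.empty(n + 1); u[0] = u0
    gam = np.empty(n + 1)
    state = gamma_plus                              # start in the damped regime
    hold = rng.exponential(eps / lam_plus)          # absolute time of next switch
    t = 0.0
    for k in range(n):
        if t >= hold:                               # switching event
            if state == gamma_plus:
                state, hold = gamma_minus, t + rng.exponential(eps / lam_minus)
            else:
                state, hold = gamma_plus, t + rng.exponential(eps / lam_plus)
        gam[k] = state
        # Euler-Maruyama step for the conditionally linear SDE
        u[k + 1] = u[k] - state * u[k] * dt + sigma_u * np.sqrt(dt) * rng.standard_normal()
        t += dt
    gam[n] = state
    return u, gam
```

For small $\epsilon$ the sampled $\gamma$ path switches many times per unit time, while for large $\epsilon$ it stays constant over long stretches, producing the intermittent bursts described above.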
For $x_n=(u_n,\gamma_n)=(u(nT),\gamma(nT))$ we assume the noisy observations are of the form \begin{equation} \label{eq:obs} \qquad y_n = u_n + \eta_n, \qquad \eta_n \sim \mathcal{N}({0},R_n) \end{equation} where $\{\eta_n\}_{n \ge 0}$ is an independent and identically distributed centred Gaussian. The filtering distribution
$\mathbb{P}(x_n|Y_n)$, determined by (\ref{eq:fm}) and (\ref{eq:obs}), does not allow for a closed-form representation. In the following, we address the problem through {\em filtering with model error}: that is, instead of applying the filter directly to the true system, we replace the process $x$ by a different Markov model which is more amenable to explicit filtering than is the SSM. We tune the parameters of the new models to maximize their statistical resemblance to the SSM. It is important to note that, due to the low dimensionality of the SSM, the introduction of reduced models for filtering presumably does not lead to a significant saving of computational cost in this paper. However the aim is to understand the application of the methodology developed by Majda and coworkers, which is targeted at situations where the true signal is very expensive to simulate, whilst the models used for filtering are orders of magnitude cheaper. Furthermore we investigate a new theory-based conceptual framework to illustrate this body of work, and to develop generalizations of it, working in a simple setting where the true signal of interest comes from the SSM.
There are four forms of filters with model error considered in this paper (acronyms explained later). The MSM and DSM are particularly relevant when $\epsilon$ is smaller, while the dMSM and dDSM are designed especially for larger $\epsilon$. The MSM is found from the SSM by replacing the switching process $\gamma$ by its mean (constant in time) value, giving rise to a process ${\bar u}$ instead of $u$. The DSM is found by replacing the switching process $\gamma$ by the solution of an Ornstein-Uhlenbeck (OU) process, giving rise to a process ${\widehat u}$ instead of $u.$ The dMSM is found by replacing $u$ by a process with a constant $\gamma$ in time, but choosing that constant randomly, according to carefully chosen weights. This leads to replacement of $u$ by a process ${\bar u}'.$ And finally the dDSM is found by replacing $u$ by ${\widehat u}'$ in which $\gamma$ is given by one of two OU processes for all time, but choosing the OU process randomly, according to carefully chosen weights. From now on, it will help to keep in mind that MSM and DSM are approximations of SSM for smaller $\epsilon$, and dMSM and dDSM are approximations of SSM for larger $\epsilon$.
\subsection{Our Contributions} Existing and extensive numerical studies naturally give rise to two fundamental questions about filtering with model error: (i) what are the precise conditions under which a given filter with model error is the best choice out of some class of filters; and (ii) how to choose the free parameters so as to maximize the consequent filtering accuracy. To address these questions we investigate the accuracy of the filters with model error via careful numerical experiments, and introduce a systematic approach for parameter determination. Specifically, our contributions in the present paper are as follows:
\begin{itemize}
\item in addition to studying the filters with model error MSM and DSM, introduced in \cite{gershgorin2010test, gershgorin2010improving}, we also introduce our own filters with model error: dMSM and dDSM;
\item we build a Gaussian filter and a Gaussian mixture filter for SSM;
\item we show the consistency of the reduced models in the extremely small (large) $\epsilon$ regime by proving limit theorems that connect the filter signal models MSM (dMSM) and DSM (dDSM) with the true signal model SSM;
\item we use asymptotic analysis in the small (large) $\epsilon$ regime to obtain analytic formulae for the adaptive parameters of the simplified models MSM (dMSM) and DSM (dDSM);
\item we employ optimization to solve a minimization problem that yields suitable parameters for the simplifications when the scale-separation is not extreme but moderate or weak;
\item we perform direct numerical simulations to show the accuracy and feasibility of the methods.
\end{itemize}
\subsection{Organization of the Paper} The paper is organized as follows. We precisely define the models used for filtering in section~\ref{sec:appssm}. Our main results are in section~\ref{sec:DSMparamters}, where various tools, tuned to the relevant parameter regime for $\epsilon$, are deployed to improve filtering accuracy. We perform numerical experiments in section~\ref{sec:ns} and draw conclusions in section~\ref{sec:conclusions}. Lengthy calculations concerning the analysis of models are gathered in the appendices, in order to improve accessibility of the paper.
\section{Filtering With Model Error: Simplifications of SSM} \label{sec:appssm} Here we define four adaptive approximate models for SSM, based on the analysis of the qualitative behaviors of the switching process, and use them to build filters. Subsection~\ref{subsec:ssr} is concerned with the case when $\epsilon$ is small (scale-separation regime) and subsection~\ref{subsec:rer} is when $\epsilon$ is large (rare-event regime).
\subsection{Scale-Separation Regime} \label{subsec:ssr}
\subsubsection{Mean Stochastic Model (MSM)} In many multi-scale problems, the governing equation in which the driving signal is significantly faster is replaced by an equation with non-oscillatory coefficient found as a limit (usually in a weak sense) of scale-separation \cite{pavliotis2008multiscale, bensoussan2011asymptotic, cioranescu2000introduction}. This work suggests that, when $\epsilon$ is sufficiently small, the mean stochastic model (MSM) \begin{equation} \begin{split} \label{eq:msm} \textbf{(MSM)}\qquad \left\{ \begin{array}{ll} d\bar{u} &= - \bar{\gamma} \bar{u}\,dt+\sigma_u dB_u \\ \bar{\gamma} &= \text{const} \end{array} \right. \end{split} \end{equation} can be a good approximation of SSM. Using MSM for filtering we note that, provided $\bar{u}_0$ is Gaussian, all distributions $\mathbb{P}(\bar{u}_{n-1}\vert Y_{n-1}) \mapsto \mathbb{P}(\bar{u}_{n}\vert Y_{n-1}) \mapsto \mathbb{P}(\bar{u}_n\vert Y_n)$ are Gaussians and may be updated by the Kalman filter \cite{Kalman60}.
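Since the MSM is linear with constant coefficients, the map $\bar{u}_{n-1} \mapsto \bar{u}_n$ is exactly Gaussian and one filter cycle reduces to scalar Kalman algebra. The following is a minimal sketch of one predict/update cycle; the function name and the scalar-state restriction are our own illustrative choices.

```python
import numpy as np

def msm_kalman_step(m, P, y, gamma_bar, sigma_u, T, R):
    """One predict/update cycle of the scalar Kalman filter for the MSM.
    The discrete-time dynamics are exact: u_n = a u_{n-1} + noise with
    a = exp(-gamma_bar*T) and noise variance q = sigma_u^2 (1-a^2)/(2 gamma_bar),
    valid for any nonzero gamma_bar."""
    a = np.exp(-gamma_bar * T)
    q = sigma_u**2 * (1.0 - a**2) / (2.0 * gamma_bar)
    # prediction: P(u_n | Y_{n-1})
    m_pred = a * m
    P_pred = a**2 * P + q
    # Bayes update with y_n = u_n + eta_n, eta_n ~ N(0, R)
    K = P_pred / (P_pred + R)          # Kalman gain
    m_post = m_pred + K * (y - m_pred)
    P_post = (1.0 - K) * P_pred
    return m_post, P_post
```

As expected, a very accurate observation ($R \to 0$) pins the posterior mean to the datum, while a very noisy one leaves the prediction essentially unchanged.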
\begin{figure}
\caption{ Regularized modeling of the qualitative behavior of the switching process. }
\label{fig:ssm}
\end{figure}
\subsubsection{Diffusive Stochastic Model (DSM)} The diffusive stochastic model (DSM) is given by \begin{equation*} \begin{split} \textbf{(DSM)}\qquad \left\{ \begin{array}{ll} d\widehat{u} &= - \widehat{\gamma} \widehat{u}\,dt+\sigma_u dB_u \\ d\widehat{\gamma} & = -\frac{\nu}{\epsilon}(\widehat{\gamma}-\mu) \,dt+ \frac{\sigma}{\sqrt{\epsilon}} dB_{\gamma} \\ \end{array} \right. \end{split} \end{equation*} Note that $\widehat{\gamma}$ is an Ornstein-Uhlenbeck process: the solution of the Langevin equation \begin{equation} \begin{split} \label{eq:le} d{\widetilde{\gamma}} & = -\frac{1}{\epsilon}\nabla U({\widetilde{\gamma}})dt+ \frac{\sigma}{\sqrt{\epsilon}} dB_{\gamma} \end{split} \end{equation} with the potential \begin{equation} \label{eq:pot} \begin{split} U(x)=U_1(x):= \frac{\nu}{2}(x-\mu)^2. \end{split} \end{equation} We aim to tune this process to match the response of the system, in the observed variable $u$. The reason for interest in this model is that, although exact filtering is not possible, it is possible to compute an approximate Gaussian filter, based on exact propagation of the first two moments. Indeed, provided $(\widehat{u}(0),\widehat{\gamma}(0))$ is jointly Gaussian, the mean and covariance of $(\widehat{u}(T),\widehat{\gamma}(T))$ are exactly solvable. Denoting $\widehat{\gamma}_n \equiv \widehat{\gamma}(n T)$, the resultant moment mapping can be used for uncertainty propagation: $\mathbb{P}(\widehat{u}_{n-1}, \widehat{\gamma}_{n-1}\vert Y_{n-1}) \mapsto \mathbb{P}(\widehat{u}_{n}, \widehat{\gamma}_{n}\vert Y_{n-1}) $. Under this Gaussian approximation, the Kalman filter may be applied to obtain $\mathbb{P}(\widehat{u}_{n}, \widehat{\gamma}_{n}\vert Y_{n-1}) \mapsto \mathbb{P}(\widehat{u}_n, \widehat{\gamma}_{n}\vert Y_n)$. The resulting filter is named the stochastic parametrization extended Kalman filter (SPEKF) in \cite{gershgorin2008nonlinear}, where it was introduced.
Finally, a proper marginalization at every step yields the object of interest: $\mathbb{P}(\widehat{u}_n\vert Y_n)$.
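For the $\widehat{\gamma}$-marginal, the exact moment propagation is elementary and can be written down explicitly; the following sketch (our own notation) shows the mean and variance of the OU process at time $t$ given a Gaussian initial condition. The joint $(\widehat{u},\widehat{\gamma})$ moments used by SPEKF are more involved and are not reproduced here.

```python
import numpy as np

def ou_moments(m0, P0, t, mu, nu, sigma, eps):
    """Exact mean/variance of d g = -(nu/eps)(g - mu) dt + (sigma/sqrt(eps)) dB
    at time t, given a Gaussian initial condition N(m0, P0)."""
    decay = np.exp(-nu * t / eps)
    mean = mu + (m0 - mu) * decay
    # the stationary variance is sigma^2/(2 nu); note that eps cancels
    var = P0 * decay**2 + (sigma**2 / (2.0 * nu)) * (1.0 - decay**2)
    return mean, var
```

In the long-time limit this recovers the stationary distribution $\mathcal{N}(\mu,\sigma^2/2\nu)$ appearing in Lemma~\ref{thm:dsmmgf}.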
\subsection{Rare-Event Regime} \label{subsec:rer}
\subsubsection{Dual-mode Mean Stochastic Model (dMSM)} \label{sec:dsm} When $\epsilon$ is large enough, transitions in $\gamma$ are rare. To study this case, we build the following dual-mode mean stochastic model (dMSM): \begin{equation} \begin{split} \label{eq:msmd} \textbf{(dMSM)}\qquad \left\{ \begin{array}{ll} d\bar{{u}}' &= - \bar{{\gamma}}' \bar{{u}}'\,dt+\sigma_u dB_u \\ \bar{{\gamma}}' &= \left\{ \begin{array}{ll} {\gamma}_{+} \;\text{with probability} \;\bar{\rho}_{+} \;\text{for}\; t \geq 0 \\ {\gamma}_{-} \;\text{with probability} \;\bar{\rho}_{-}= 1-\bar{\rho}_{+}\;\text{for}\; t \geq 0 \\ \end{array} \right. \end{array} \right. \end{split} \end{equation} as the reduced modeling of SSM. This can be viewed as an example of the more general switching linear dynamical system model \cite{ghahramani2000variational}.
If the probability distribution of $\bar{u}'(0)$ is the sum of weighted Gaussian kernels, then note that \begin{equation} \label{eq:probmsmd} \begin{split} \mathbb{P}(\bar{u}'(T)) =\mathbb{P}(\bar{\gamma}'=\gamma_{+})\mathbb{P}(\bar{u}'(T)\vert \bar{\gamma}'=\gamma_{+}) +\mathbb{P}(\bar{\gamma}'=\gamma_{-})\mathbb{P}(\bar{u}'(T)\vert \bar{\gamma}'=\gamma_{-}). \end{split} \end{equation} Under this assumption on $\bar{u}'(0)$, then, we may use the Gaussian mixture filter to obtain the exact filtering solution of dMSM. The procedure $\mathbb{P}(\bar{u}'_{n-1}\vert Y_{n-1}) \mapsto \mathbb{P}(\bar{u}'_{n}\vert Y_{n-1})$ is performed using (\ref{eq:probmsmd}), and a parallel application of the Kalman filter to each Gaussian kernel, along with updating of the weights of each kernel, completes the update $\mathbb{P}(\bar{u}'_{n}\vert Y_{n-1}) \mapsto \mathbb{P}(\bar{u}'_n\vert Y_n)$.
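One cycle of this mixture filter amounts to a Kalman predict/update per kernel plus a likelihood-weighted reweighting. A minimal sketch for scalar kernels, assuming each kernel carries its own fixed damping value (the function name and the scalar restriction are ours):

```python
import numpy as np

def dmsm_mixture_step(weights, means, covs, y, gammas, sigma_u, T, R):
    """One step of the Gaussian mixture filter for the dMSM.
    Kernel k is propagated under its own fixed damping gammas[k], and its
    weight is multiplied by the marginal likelihood N(y; m_pred, P_pred+R)."""
    new_w, new_m, new_P = [], [], []
    for w, m, P, g in zip(weights, means, covs, gammas):
        a = np.exp(-g * T)
        q = sigma_u**2 * (1.0 - a**2) / (2.0 * g)   # valid for any nonzero g
        m_pred, P_pred = a * m, a**2 * P + q
        S = P_pred + R                               # innovation variance
        lik = np.exp(-0.5 * (y - m_pred)**2 / S) / np.sqrt(2.0 * np.pi * S)
        K = P_pred / S                               # per-kernel Kalman gain
        new_w.append(w * lik)
        new_m.append(m_pred + K * (y - m_pred))
        new_P.append((1.0 - K) * P_pred)
    new_w = np.asarray(new_w)
    return new_w / new_w.sum(), np.asarray(new_m), np.asarray(new_P)
```

Note that the transition-noise formula remains positive for $\gamma_{-}<0$ as well, since numerator and denominator change sign together.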
In practice, the geometric growth in the number of kernels with the number of prediction steps prevents tractable exact inference as data is accumulated sequentially. The resolution we adopt here is to project the filtering solution onto a space of tractable distributions. Following the idea of assumed density filtering \cite{maybeck1982stochastic}, a large mixture of Gaussians is replaced by a smaller mixture of Gaussians at regular time-intervals, while filtering progresses \cite{5977695}.
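One simple instance of such a projection, sketched below under our own conventions (the paper's reduction scheme, following \cite{5977695}, may differ in detail), keeps the heaviest kernels and merges the remainder into a single moment-matched Gaussian:

```python
import numpy as np

def collapse_mixture(weights, means, covs, max_kernels):
    """Reduce a scalar Gaussian mixture (numpy arrays) by keeping the
    max_kernels-1 heaviest kernels and moment-matching the discarded
    ones into a single Gaussian, preserving the overall mean."""
    order = np.argsort(weights)[::-1]
    keep, drop = order[:max_kernels - 1], order[max_kernels - 1:]
    w_d = weights[drop].sum()
    if w_d > 0:
        m_d = np.sum(weights[drop] * means[drop]) / w_d
        # law of total variance for the merged kernel
        P_d = np.sum(weights[drop] * (covs[drop] + (means[drop] - m_d)**2)) / w_d
        w = np.append(weights[keep], w_d)
        m = np.append(means[keep], m_d)
        P = np.append(covs[keep], P_d)
    else:
        w, m, P = weights[keep], means[keep], covs[keep]
    return w / w.sum(), m, P
```

Moment matching guarantees that the reduced mixture reproduces the mean (and, through the total-variance term, the variance) of the discarded kernels.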
\subsubsection{Dual-mode Diffusive Stochastic Model (dDSM)} As in the DSM we now try to use a diffusion process to model the switching process $\gamma$, in order to benefit from the possibility of propagating second moments exactly, as is done in the DSM. When $\epsilon$ is large, however, the process (\ref{eq:le}) with the single-well potential \eqref{eq:pot} is not suitable for mimicking rare transitions. We instead consider a double-well potential $U_2(x)$ for $U(x)$ (illustrated in the right-panel of Fig.~\ref{fig:ssm}). In this scenario, the motion of $\widetilde{\gamma}$ is captured within either of the potential wells for significant time periods, but random perturbations allow it to effectively jump over the potential barrier and enter the parallel metastable state.
Based upon the quadratic expansions \begin{equation*} \begin{split} U_2(x) & \simeq \left\{ \begin{array}{ll} U_2(\mu_{+})+\frac{\nu_{+}}{2}(x-\mu_{+})^2 \qquad
\text{when} \;\;|x-\mu_{+}| \;\; \text{is small} \\ U_2(\mu_{-})+\frac{\nu_{-}}{2}(x-\mu_{-})^2 \qquad
\text{when} \;\;|x-\mu_{-}| \;\; \text{is small}\\ \end{array} \right. \end{split} \end{equation*} we build a new model \begin{equation} \begin{split} \label{eq:dsmdeq} \textbf{(dDSM)}\qquad \left\{ \begin{array}{ll} d\widehat{{u}}' &= - \widehat{{\gamma}}' \widehat{{u}}'\,dt+\sigma_u dB_u \\ d\widehat{{\gamma}}' & = \left\{ \begin{array}{ll} -\frac{\nu_{+}}{\epsilon}(\widehat{{\gamma}}' -\mu_{+}) \,dt+ \frac{\sigma_{+}}{\sqrt{\epsilon}} dB_{\gamma} \;\text{with probability} \;\widehat{\rho}_{+}\\ -\frac{\nu_{-}}{\epsilon}(\widehat{{\gamma}}' -\mu_{-}) \,dt+ \frac{\sigma_{-}}{\sqrt{\epsilon}} dB_{\gamma} \;\text{with probability} \;\widehat{\rho}_{-}= 1-\widehat{\rho}_{+} \end{array} \right. \end{array} \right. \end{split} \end{equation} where the uncertainty is separately delivered by two independent sets of SDEs. Eq.~(\ref{eq:dsmdeq}) is named the dual-mode diffusive stochastic model (dDSM).
When $(\widehat{u}'(0),\widehat{\gamma}'(0))$ is a Gaussian mixture, utilizing the exact solvability of the first two moments of the propagated distributions (as for DSM), the distribution of $(\widehat{u}'(T),\widehat{\gamma}'(T))$ can be approximated as a Gaussian mixture with the number of kernels doubled, similarly to dMSM. As for dMSM, we may reduce the number of mixture components to retain computational tractability. In this way, the approximate Gaussian mixture filter $\mathbb{P}(\widehat{u}'_{n-1}, \widehat{\gamma}'_{n-1}\vert Y_{n-1}) \mapsto \mathbb{P}(\widehat{u}'_{n}, \widehat{\gamma}'_{n}\vert Y_{n-1}) \mapsto \mathbb{P}(\widehat{u}'_n, \widehat{\gamma}'_{n}\vert Y_n)$ is established.
\section{Model Validations} \label{sec:DSMparamters} In this section, we proceed (i) to validate the proposed models, and (ii) to determine the adaptive parameters. We classify the $\epsilon$ parameter regime into six regions: the scale-separation limit $\{\epsilon \to 0\}$, the sharp scale-separation regime $\{\epsilon \ll 1\}$, the imprecise scale-separation regime $\{\epsilon <1 \}$, the moderately rare-event regime $\{\epsilon >1\}$, the extremely rare-event regime $\{\epsilon \gg 1\}$, and the rare-event limit $\{\epsilon \to \infty\}$. Subsection~\ref{sec:equivalence} is devoted to the study of the cases $\{\epsilon \to 0, \epsilon \to \infty \}$, subsection~\ref{subsec:dsmcoeff} to $\{\epsilon \ll 1, \epsilon \gg 1 \}$, and subsection~\ref{subsec:mini} to $\{\epsilon < 1, \epsilon > 1\}$.
\subsection{Convergence Results} \label{sec:equivalence} Here we demonstrate the consistency of the simplified models by showing that $u_T, \widehat{u}_T \to \bar{u}_T$ (we abuse subscript notation, writing $u_T = u(T)$) as $\epsilon \to 0$, and that $u_T, \widehat{u}'_T \to \bar{u}'_T$ as $\epsilon \to \infty$, in senses elucidated in what follows. All proofs are deferred to Appendix~\ref{sec:equivalenceproof}.
\subsubsection{Scale-Separation Limit} The main results here are the following theorem and corollary; the constants are defined in the developments following their statements.
\begin{Theorem} \label{ssmthm} Assume that ${u}_0$, $\bar{u}_0$ and $\widehat{u}_0$ are identically distributed Gaussian random variables, and assume that $({u}_0,{\gamma}_0)$ and $(\widehat{u}_0,\widehat{\gamma}_0)$ are independent pairs of random variables. If $\bar{\gamma}$ and $\mu$ are equal to $\bar{\gamma}_\infty \equiv \frac{\lambda_{-}\gamma_{+}+\lambda_{+}\gamma_{-}}{\lambda_{-} +\lambda_{+}}$, then, for any fixed $T>0$, as $\epsilon \to 0$ the mean and variance of $u_T$ and $\widehat{u}_T$ converge to those of $\bar{u}_T$. \end{Theorem}
\begin{proof}[of Theorem \ref{ssmthm}] This follows from Lemmas~\ref{thm:uconv2}, \ref{thm:ssmmgf}, \ref{thm:dsmmgf}, using the explicit calculations which are presented after the corollary below. \end{proof}
\begin{Corollary} \label{cor1} Under the conditions in Theorem~\ref{ssmthm}, the mean and variance in the Gaussian filters for $u_n\vert Y_n$ (defined in Appendix~\ref{subsec:ssmgaussianfilter}) and
$\widehat{u}_n\vert Y_n$, converge to those of $\bar{u}_n|Y_n$, for fixed $n > 0$, as $\epsilon \to 0$. \end{Corollary}
\begin{proof}[of Corollary \ref{cor1}] This follows from the data assimilation formula of the Kalman filter \cite{anderson1979optimal}. \end{proof}
Let ${u}^\epsilon$ solve \begin{equation} \begin{split} \label{eq:langevin} d{u}^\epsilon &= -{\gamma}^\epsilon {u}^\epsilon dt + \sigma_u dB_u \end{split} \end{equation} for a random process ${\gamma}^\epsilon$. For $\Gamma^\epsilon_t \equiv \int^t_0 {\gamma}^\epsilon (s) ds$, the integral process of ${\gamma}^\epsilon$, the variation-of-constants formula yields \begin{equation} \begin{split} \label{eq:voc} u_T^\epsilon & = e^{-\Gamma^\epsilon_T}u_0^\epsilon + \sigma_u \int^T_0 e^{-(\Gamma^\epsilon_T-\Gamma^\epsilon_t)}dB_u(t). \end{split} \end{equation} Application of the It\^o formula shows that the mean and covariance are given by \begin{equation} \begin{split} \label{eq:uqmom} \left\langle {u}^\epsilon_T \right\rangle & = \left\langle e^{-{\Gamma}^\epsilon_T}{u}^\epsilon_0 \right\rangle \\ \text{Var}(u^\epsilon_T) &= \left\langle \left( e^{-{\Gamma}^\epsilon_T}u_0^\epsilon \right)^2 \right\rangle -\left\langle e^{-{\Gamma}^\epsilon_T}u^\epsilon_0 \right\rangle^2 + \sigma_u^2 \int^T_0 \left\langle e^{-2({\Gamma}^\epsilon_T-{\Gamma}^\epsilon_t)} \right\rangle dt. \end{split} \end{equation} Here and henceforth, $\langle \cdots\rangle$ denotes the statistical average. Eq.~(\ref{eq:uqmom}) reveals that the moment generating function (MGF) of the integral process of $\gamma^\epsilon$ is particularly relevant to the propagation of the first two moments of $u^\epsilon$ governed by (\ref{eq:langevin}).
\begin{Lemma} \label{thm:uconv2} Let $\bar{u}_t$ satisfy MSM (\ref{eq:msm}). If \begin{equation} \label{eq:mgfconvs} \left\langle e^{\alpha( \Gamma^\epsilon_T-\Gamma^\epsilon_t )} \right\rangle \to e^{\alpha \bar{\gamma}(T-t) } \quad \text{for} \;\;\; \alpha=-1, -2 \;\;\;\text{and} \;\;\; 0\leq t\leq T \end{equation} and if \begin{equation} \begin{split} \label{eq:mcconvconds} \left\langle \left( e^{-{\Gamma}^\epsilon_T}u_0^\epsilon \right)^m \right\rangle \to \left\langle \left( e^{-\bar{\gamma} T} \bar{u}_0 \right)^m \right\rangle \quad \text{for} \;\;\; m=1,2 \end{split} \end{equation} as $\epsilon \to 0$ then the mean and variance of ${u}^\epsilon_T$ converge to those of $\bar{u}_T$. Further if \begin{equation} \begin{split} \label{eq:l2convconds} \left\langle \left( e^{-\Gamma^\epsilon_T}{u}^\epsilon_0 - e^{-\bar{\gamma}T}\bar{u}_0 \right)^2 \right\rangle \to 0 \end{split} \end{equation} as $\epsilon \to 0$ then ${u}^\epsilon_T$ converges to $\bar{u}_T$ in $L^2(\Omega;{\mathbb R}).$ The convergence rates are determined by those associated with Eqs.~(\ref{eq:mgfconvs}), (\ref{eq:mcconvconds}) and (\ref{eq:l2convconds}). \end{Lemma}
Let $\Gamma_t \equiv \int^t_0 \gamma(s) ds$ and $\widehat{\Gamma}_t \equiv \int^t_0 \widehat{\gamma}(s) ds$ be the integral processes associated with SSM and DSM respectively. Because $\bar{\Gamma}_t \equiv \int_0^t \bar{{\gamma}}(s) ds=\bar{{\gamma}} t$ ($\bar{{\gamma}}(s)=\bar{{\gamma}}$ is constant) in the case of the MSM, we expect $u_T, \widehat{u}_T \to \bar{u}_T$ provided both integral processes, $\Gamma_t$ and $\widehat{\Gamma}_t $, behave like the probability distribution $\delta_{\bar{\gamma}t}$ in the small $\epsilon$ limit. It turns out this is indeed the case due to averaging. The next two lemmas highlight this behavior.
\begin{Lemma}[SSM] \label{thm:ssmmgf} Let $\gamma_t \sim \rho_{+}(t)\delta_{\gamma_{+}} +\rho_{-}(t)\delta_{\gamma_{-}}$ then, for any fixed $t>0$, $\rho_{\pm}(t)\to \frac{\lambda_{\mp}}{\lambda_{-} +\lambda_{+}}$ as $\epsilon \to 0$. Let $\gamma_\infty \sim \frac{\lambda_{-}}{\lambda_{-} +\lambda_{+}}\delta_{\gamma_{+}} +\frac{\lambda_{+}}{\lambda_{-} +\lambda_{+}}\delta_{\gamma_{-}}$ and $\bar{\gamma}_\infty \equiv \langle \gamma_\infty\rangle = \frac{\lambda_{-}\gamma_{+}+\lambda_{+}\gamma_{-}}{\lambda_{-} +\lambda_{+}}$ then, for any fixed $T>t>0$, we have $ \left\langle e^{\alpha( {\Gamma}_T-{\Gamma}_t )} \right\rangle \to e^{\alpha \bar{\gamma}_\infty(T-t) } $ as $\epsilon \to 0$. \end{Lemma}
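The limiting weights $\rho_{\pm}(t)\to \lambda_{\mp}/(\lambda_{-}+\lambda_{+})$ in Lemma~\ref{thm:ssmmgf} can be read off from the stationary flux balance of the two-state chain: equating the probability fluxes out of the two states gives
\begin{equation*}
\pi_{+}\,\frac{\lambda_{+}}{\epsilon} = \pi_{-}\,\frac{\lambda_{-}}{\epsilon}
\quad\Longrightarrow\quad
\pi_{\pm}=\frac{\lambda_{\mp}}{\lambda_{-}+\lambda_{+}},
\qquad
\langle \gamma_\infty \rangle = \pi_{+}\gamma_{+}+\pi_{-}\gamma_{-}
=\frac{\lambda_{-}\gamma_{+}+\lambda_{+}\gamma_{-}}{\lambda_{-}+\lambda_{+}}
=\bar{\gamma}_\infty.
\end{equation*}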
\begin{Lemma}[DSM] \label{thm:dsmmgf} Let $\widehat{\gamma}_\infty \sim \mathcal{N}(\mu,\sigma^2/2\nu)$ then, for any fixed $t>0$, the mean and variance of $\widehat{\gamma}_t$ converge to those of $\widehat{\gamma}_\infty$ as $\epsilon \to 0$. Furthermore, we have, for any fixed $T>t>0$, $ \left\langle e^{\alpha( \widehat{\Gamma}_T-\widehat{\Gamma}_t )} \right\rangle \to e^{\alpha \mu(T-t) } $ as $\epsilon \to 0$. \end{Lemma}
\subsubsection{Rare-Event Limit}
The main results in this regime are the following theorem and corollary.
\begin{Theorem} \label{ssmthm2} Assume that ${u}_0$, $\bar{u}'_0$ and $\widehat{u}'_0$ are identically distributed Gaussian random variables, and assume that $({u}_0,{\gamma}_0)$ and $(\widehat{u}'_0,\widehat{\gamma}'_0)$ are independent pairs of random variables. Then, for any fixed $T>0$, the mean and variance of $\mathbb{P}({u}_T\vert {\gamma}_0=\gamma_{\pm})$, $\mathbb{P}(\widehat{u}'_T\vert \widehat{\gamma}'_0=\gamma_{\pm})$ converge to those of $\mathbb{P}(\bar{u}'_T\vert \bar{\gamma}' =\gamma_{\pm})$ as $\epsilon \to \infty$.
Furthermore, let ${\gamma}_0 \triangleq \gamma_\infty$, and if $\bar{\gamma}'$ and $\widehat{\gamma}'_0$ are identically distributed with $\gamma_\infty$, then the weight, mean and variance of components in the Gaussian mixture approximation for ${u}_T$, $\widehat{u}'_T$ converge to those of $\bar{u}'_T$ as $\epsilon \to \infty$.
\end{Theorem}
\begin{proof} This follows from Lemmas~\ref{thm:uconv3}, \ref{lem:ssmd}, \ref{lem:dsmd} below. \end{proof}
\begin{Corollary} Under the conditions in Theorem~\ref{ssmthm2}, the weight, mean and variance of mixture components in the Gaussian mixture filters for $u_n\vert Y_n$ (defined in Appendix~\ref{subsec:ssmgaussiansumfilter}) and $\widehat{u}'_n\vert Y_n$, converge to those of
$\bar{u}'_n|Y_n$, for fixed $n > 0$, as $\epsilon \to \infty$. \end{Corollary}
\begin{proof} This follows from parallel application of the Kalman filter update to the mixture components. \end{proof}
\begin{Lemma} \label{thm:uconv3} Let $\bar{u}'_t$ solve dMSM (\ref{eq:msmd}). If, for each fixed $T>t>0$, \begin{equation} \label{eq:mgfconvd} \left\langle e^{\alpha( \Gamma^\epsilon_T-\Gamma^\epsilon_t )} \vert \gamma^\epsilon_0=\gamma_{\pm}\right\rangle \to e^{\alpha {\gamma_{\pm}}(T-t) } \quad \text{for} \;\;\; \alpha=-1, -2 \;\;\;\text{and} \;\;\; 0\leq t\leq T \end{equation} and if \begin{equation} \begin{split} \label{eq:mcconvcondd} \left\langle \left( e^{-{\Gamma}^\epsilon_T}u_0^\epsilon \right)^m \vert \gamma^\epsilon_0=\gamma_{\pm}\right\rangle \to \left\langle \left( e^{-{\gamma}_{\pm} T} \bar{u}'_0 \right)^m \right\rangle \quad \text{for} \;\;\; m=1,2 \end{split} \end{equation} as $\epsilon \to \infty,$ then the mean and variance of ${u}^\epsilon_T\vert \gamma^\epsilon_0=\gamma_{\pm}$ converge to those of $\bar{u}'_T\vert \bar{\gamma}'=\gamma_{\pm}$. The convergence rates are determined by those associated with Eqs.~(\ref{eq:mgfconvd}), (\ref{eq:mcconvcondd}).
Furthermore, if ${\gamma}^\epsilon_0\triangleq \bar{\gamma}'$, then the weight, mean and variance of components in the Gaussian mixture approximation for ${u}^\epsilon_T$ converge to those of $\bar{u}'_T$ from $\mathbb{P}({u}^\epsilon_T) =\mathbb{P}({\gamma}^\epsilon_0=\gamma_{+})\mathbb{P}({u}^\epsilon_T\vert {\gamma}^\epsilon_0=\gamma_{+}) +\mathbb{P}({\gamma}^\epsilon_0=\gamma_{-})\mathbb{P}({u}^\epsilon_T\vert {\gamma}^\epsilon_0=\gamma_{-})$.
\end{Lemma}
To ensure the convergence of SSM and dDSM to dMSM, as $\epsilon$ grows, both ${\Gamma}_t$ and $\widehat{\Gamma}'_t \equiv \int^t_0 \widehat{\gamma}'(s)ds$ need to converge to $\bar{\Gamma}'_t \equiv \int_0^t \bar{{\gamma}}'(s) ds \sim \bar{\rho}_{+}\delta_{\gamma_{+}t}+ \bar{\rho}_{-}\delta_{\gamma_{-}t}$.
\begin{Lemma} [SSM] \label{lem:ssmd} For fixed $T>t>0$ $\left\langle e^{\alpha (\Gamma_T-\Gamma_t)} \vert {\gamma}_0=\gamma_{\pm} \right\rangle \to e^{ \alpha \gamma_{\pm}(T-t)}$ as $\epsilon \to \infty$. \end{Lemma}
\begin{Lemma} [dDSM] For fixed $T>t>0$ \label{lem:dsmd} $\left\langle e^{\alpha (\widehat{\Gamma}'_T-\widehat{\Gamma}'_t)} \vert \widehat{\gamma}'_0=\gamma_{\pm} \right\rangle \to e^{ \alpha \gamma_{\pm}(T-t)}$ as $\epsilon \to \infty$. \end{Lemma}
\subsection{Asymptotic Matching} \label{subsec:dsmcoeff} The convergence results in the preceding subsection demonstrate that the filtering performance of the approximate filters will be similar to that of the exact filter provided that $\bar{\gamma} = \bar{\gamma}_\infty$ and $\mu = \bar{\gamma}_\infty$ (when $\epsilon \ll 1$), and $\bar{\gamma}'\triangleq {\gamma}_\infty$, $\widehat{\rho}_{\pm} \propto {\lambda_{\mp}}$ and $\mu_{\pm} = \gamma_{\pm}$ (when $\epsilon \gg 1$). The former result relates to the robustness of the DSM filter inherited from the adaptive parameters $\{ \mu,\sigma\}$, demonstrated here when $\epsilon$ is small, and demonstrated through extensive numerical simulations in \cite{gershgorin2010test, gershgorin2010improving}.
However, when $\epsilon$ deviates considerably from the two extreme values ($\epsilon =0$ and $\epsilon =\infty$), the choice of the associated parameters in the filtering models is a critical factor for successful filtering with model error. The current and next subsections concern the determination of $\Theta\equiv\{\mu,\nu,\sigma\}$ for DSM and $\Theta'\equiv\{\widehat{\rho}_{\pm},\mu_{\pm},\nu_{\pm},\sigma_{\pm}\}$ for dDSM. Unlike earlier works in this area, where these parameters are chosen from a number of parallel direct numerical simulations comparing the original dynamics and its simplifications, our approach specifies the parameters in a systematic, analysis-based manner.
\subsubsection{Sharp Scale-Separation Regime} In this parameter regime, because DSM is associated with a nonlinear approximate Kalman filter, we attempt to equate the first and second order statistics of SSM and DSM, \begin{equation} \begin{split} \label{eq:momcriteria} \left\{ \begin{array}{ll} \langle u_T \rangle & = \langle \widehat{u}_T \rangle \\ \text{Var}(u_T) & = \text{Var}(\widehat{u}_T) \end{array} \right. \end{split} \end{equation} for high accuracy. It is worth noting that, in view of (\ref{eq:uqmom}), if the MGFs agree with one another, that is if \begin{subnumcases} {\label{eq:criteria22}} \label{eq:criteria} \qquad \; \left\langle e^{\alpha \Gamma_T}\right\rangle = \left\langle e^{\alpha \widehat{\Gamma}_T}\right\rangle \qquad \quad \text{for} \;\;\; \alpha=-1, -2 \\ \label{eq:mcriteria2} \left\langle e^{\alpha (\Gamma_T-\Gamma_t)}\right\rangle
= \left\langle e^{\alpha (\widehat{\Gamma}_T-\widehat{\Gamma}_t)}\right\rangle \quad \text{for}\; \; \; \alpha=-2 \;\;\;\text{and}\;\;\; 0\leq t \leq T \end{subnumcases} and if $({u}_0,{\gamma}_0)$ and $(\widehat{u}_0,\widehat{\gamma}_0)$ are uncorrelated, and if ${u}_0 \triangleq \widehat{u}_0$, then Eq.~(\ref{eq:momcriteria}) holds. Motivated by convergence to the common limit, as demonstrated above, we here strive to asymptotically satisfy (\ref{eq:criteria22}) when $\epsilon \ll 1$.
To that end, we derive the approximation \begin{equation}
\begin{split} \label{eq:fmcharasy1} \left\langle e^{\alpha (\Gamma_T-\Gamma_t)} \right\rangle & \simeq \exp\Bigg( \alpha \bar{\gamma}_\infty(T-t) +\alpha^2\frac{3}{8}\frac{ (\gamma_{-}-\gamma_{+})^2(\lambda_{-}^2+\lambda_{+}^2)}{\lambda_{-}\lambda_{+} (\lambda_{+}+ \lambda_{-})} (T-t) \epsilon\\ & + \alpha \left( \mathbb{P}(\gamma_0=\gamma_{+}) \frac{(\gamma_{+}-\gamma_{-})}{4\lambda_{+}} + \mathbb{P}(\gamma_0=\gamma_{-}) \frac{(\gamma_{-}-\gamma_{+})}{4\lambda_{-}} \right) \epsilon +\mathcal{O}(\epsilon^2) \Bigg) \quad \epsilon < T-t
\end{split} \end{equation} in the Appendix~\ref{subsec:scaleseparation}. We also derive the approximations \begin{subequations} \label{eq:spmcharasy0} \begin{align} \label{eq:spmcharasy1} \left\langle e^{\alpha \widehat{\Gamma}_T}\right\rangle & = \exp\left( \alpha \left( \mu T + \langle \widehat{\gamma}_0-\mu\rangle \frac{\epsilon}{\nu}\right) +\alpha^2 \frac{\sigma^2}{2\nu^2}T\epsilon +\mathcal{O}(\epsilon^2) \right) \quad \epsilon < T \\ \label{eq:spmcharasy2} \left\langle e^{\alpha (\widehat{\Gamma}_T-\widehat{\Gamma}_t)} \right\rangle & = \exp\left( \alpha \mu(T-t) +\alpha^2 \frac{\sigma^2}{2\nu^2}(T-t)\epsilon + \mathcal{O}(\epsilon^2) \right) \quad \epsilon < t \end{align} \end{subequations} in the Appendix~\ref{subsec:dsmssr}. Importantly, the exponents of the MGFs are second order in $\alpha$ up to $\mathcal{O}(\epsilon)$, indicating that both $\Gamma_T$ and $\widehat{\Gamma}_T$ are close to Gaussian in this parameter regime.
From a comparison between (\ref{eq:fmcharasy1}) and (\ref{eq:spmcharasy0}), we realize Eq.~(\ref{eq:criteria22}) is asymptotically met provided $\bar{\gamma}_\infty=\mu$ and \begin{equation} \begin{split} \label{eq:sc1} \frac{3}{8}\frac{ (\gamma_{-}-\gamma_{+})^2(\lambda_{-}^2+\lambda_{+}^2)}{\lambda_{-}\lambda_{+} (\lambda_{+}+ \lambda_{-})} & =\frac{\sigma^2}{2\nu^2} \end{split} \end{equation} and \begin{equation} \begin{split} \label{eq:sc2} \mathbb{P}(\gamma_0=\gamma_{+}) \frac{(\gamma_{+}-\gamma_{-})}{4\lambda_{+}} + \mathbb{P}(\gamma_0=\gamma_{-}) \frac{(\gamma_{-}-\gamma_{+})}{4\lambda_{-}} =\frac{\langle \widehat{\gamma}_0-\mu \rangle}{\nu}. \end{split} \end{equation}
Eqs.~(\ref{eq:sc1}) and (\ref{eq:sc2}) can be solved to determine a unique pair $\{\nu,\sigma^2\}$, but this may result in $\nu < 0$, which is unphysical. To avoid this possibility, we impose equality of the variances of the stationary processes $\gamma_\infty$ and $\widehat{\gamma}_\infty$, \begin{equation} \begin{split} \label{eq:varmatch} \frac{\lambda_{+}\lambda_{-}(\gamma_{+}-\gamma_{-})^2}{(\lambda_{-} +\lambda_{+})^2} & = \frac{\sigma^2}{2\nu} \\ \end{split} \end{equation} instead of Eq.~(\ref{eq:sc2}). From Eqs.~(\ref{eq:sc1}) and (\ref{eq:varmatch}), we obtain \begin{equation} \begin{split} \label{eq:choice} \Theta_{\text{naive}} \equiv \begin{cases} \mu & = \bar{\gamma}_\infty \\ \nu &= \frac{8}{3}\frac{ \lambda_{-}^2\lambda_{+}^2 }{ (\lambda_{-}+\lambda_{+}) (\lambda_{+}^2+ \lambda_{-}^2)} \qquad \text{when} \quad \epsilon \ll 1 \\ \sigma^2 &= \frac{16}{3}\frac{ \lambda_{-}^3\lambda_{+}^3 (\gamma_{-}-\gamma_{+})^2 }{ (\lambda_{-}+\lambda_{+})^3 (\lambda_{+}^2+ \lambda_{-}^2)}\\ \end{cases} \end{split} \end{equation} which we term the naive set of DSM parameters, valid when $\epsilon \ll 1$.
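The parameter set (\ref{eq:choice}) is straightforward to evaluate; the sketch below (illustrative names) computes $\Theta_{\text{naive}}$, with $\mu=\bar{\gamma}_\infty$ evaluated as the stationary mean of $\gamma_\infty$, and can be checked against the identities (\ref{eq:sc1}) and (\ref{eq:varmatch}).

```python
def theta_naive(lam_p, lam_m, g_p, g_m):
    """Naive DSM parameters (mu, nu, sigma^2), valid for eps << 1.
    mu is the stationary mean of gamma; nu and sigma^2 follow Eq. (choice)."""
    mu = (lam_m * g_p + lam_p * g_m) / (lam_p + lam_m)
    nu = (8.0 / 3.0) * lam_m**2 * lam_p**2 / (
        (lam_m + lam_p) * (lam_p**2 + lam_m**2))
    sigma2 = (16.0 / 3.0) * lam_m**3 * lam_p**3 * (g_m - g_p)**2 / (
        (lam_m + lam_p)**3 * (lam_p**2 + lam_m**2))
    return mu, nu, sigma2

# parameter values used later in the numerical section
mu, nu, sigma2 = theta_naive(1.0, 2.0, 2.27, -0.04)
```

By construction $\sigma^2/(2\nu)$ reproduces the stationary variance of $\gamma_\infty$ and $\sigma^2/(2\nu^2)$ the diffusion coefficient of (\ref{eq:sc1}).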
\subsubsection{Extremely Rare-Event Regime} Using a similar argument to that employed in the case of DSM, we set ${\rho}_{\pm}=\widehat{\rho}_{\pm}=\frac{\lambda_{\mp}}{\lambda_{-} +\lambda_{+}}$ and attempt to satisfy \begin{equation*} \begin{split} \left\{ \begin{array}{ll} \langle u_T \vert \gamma_0=\gamma_{\pm} \rangle & = \langle \widehat{u}_T \vert \widehat{\gamma}'_0=\gamma_{\pm} \rangle \\ \text{Var}(u_T\vert \gamma_0=\gamma_{\pm}) & = \text{Var}(\widehat{u}_T\vert \widehat{\gamma}'_0=\gamma_{\pm}) \end{array} \right. \end{split} \end{equation*} hence \begin{subnumcases} {\label{eq:Dcriteria}} \label{eq:Dcriteria1} \quad\; \quad\left\langle e^{\alpha \Gamma_T} \vert \gamma_0=\gamma_{\pm} \right\rangle
= \left\langle e^{\alpha \widehat{\Gamma}'_T}\vert \widehat{\gamma}'_0=\gamma_{\pm}\right\rangle
\qquad \text{for} \;\; \alpha=-1, -2 \\ \label{eq:Dcriteria2}
\left\langle e^{\alpha (\Gamma_T-\Gamma_t)}\vert {\gamma}_0=\gamma_{\pm}\right\rangle
= \left\langle e^{\alpha (\widehat{\Gamma}'_T-\widehat{\Gamma}'_t)} \vert \widehat{\gamma}'_0=\gamma_{\pm}\right\rangle \; \text{for}\; \; \alpha=-2 \;\;\text{and}\;\; 0\leq t \leq T \end{subnumcases} for dDSM.
In the case $\epsilon \gg 1$, we derive \begin{equation}
\begin{split} \label{eq:ssmmgfjt5} \left\langle e^{\alpha \Gamma_T} \vert {\gamma}_0=\gamma_{\pm} \right\rangle & \simeq \exp\left( \alpha \gamma_{\pm}T-\frac{1}{\epsilon}\lambda_{\pm}T\right)\\
\end{split} \end{equation} in the Appendix~\ref{subsec:ssmrer} and \begin{equation} \begin{split} \label{eq:dsmmgfp} \left\langle e^{\alpha \widehat{\Gamma}'_T} \vert \widehat{\gamma}'_0=\gamma_{\pm} \right\rangle & \simeq \exp\left( \alpha \left( {\gamma}_{\pm}T - \frac{1}{2\epsilon} ( {\gamma}_{\pm}-\mu_{\pm}) \nu_{\pm} T^2 \right) +\frac{\alpha^2}{2} \frac{(\sigma_{\pm})^2}{3\epsilon}T^3 \right)\\ \end{split} \end{equation} in the Appendix~\ref{subsec:dsmrer}. Note the exponents in (\ref{eq:ssmmgfjt5}) and (\ref{eq:dsmmgfp}) are of different forms, indicating both ${\Gamma}_T$ and $\widehat{\Gamma}'_T$ are distant from Gaussian in this parameter regime.
In contrast to the case of DSM, here we are only able to asymptotically satisfy Eq.~(\ref{eq:Dcriteria1}), yielding \begin{equation} \begin{split} \label{eq:ddsmpara} \Theta'_{\text{naive}} \equiv \begin{cases} \widehat{\rho}_{\pm}=\frac{\lambda_{\mp}}{\lambda_{-} +\lambda_{+}}\\ \mu_{\pm}=2T\frac{\lambda_{+}\lambda_{-}(\gamma_{+}-\gamma_{-})^2}{(\lambda_{-} +\lambda_{+})^2} +\gamma_{\pm} \qquad \text{when} \quad \epsilon \gg 1 \\ \nu_{\pm}= \frac{3\lambda_{\pm}}{2T^2} \frac{(\lambda_{-} +\lambda_{+})^2}{\lambda_{+}\lambda_{-}(\gamma_{+}-\gamma_{-})^2} \\ (\sigma_{\pm})^2 = \frac{3\lambda_{\pm}}{T^2}\\ \end{cases} \end{split} \end{equation} which we term the naive set of dDSM parameters, valid when $\epsilon \gg 1$. Unlike Eq.~(\ref{eq:choice}), due to the dependence on $T$, the set of parameters (\ref{eq:ddsmpara}) is valid only for fixed-time prediction. The Gaussian mixture from dDSM with $\Theta'_{\text{naive}}$ leads to accurate mean approximations, but the accuracy of the variance approximation is not guaranteed in view of Eq.~(\ref{eq:uqmom}), where integration over $[0,T]$ is involved.
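As with Eq.~(\ref{eq:choice}), the set (\ref{eq:ddsmpara}) is explicit; a sketch of its evaluation (illustrative names) is given below. Note that, by construction, $(\sigma_{\pm})^2/(2\nu_{\pm})$ equals the stationary variance of $\gamma_\infty$.

```python
def theta_prime_naive(lam_p, lam_m, g_p, g_m, T):
    """Naive dDSM parameters, valid for eps >> 1 and a fixed prediction time T.
    Each returned pair is ordered (plus component, minus component)."""
    Lam = lam_p + lam_m
    var_inf = lam_p * lam_m * (g_p - g_m)**2 / Lam**2     # Var(gamma_inf)
    rho = (lam_m / Lam, lam_p / Lam)                      # weights rho_+-
    mu = (2.0 * T * var_inf + g_p, 2.0 * T * var_inf + g_m)
    nu = (3.0 * lam_p / (2.0 * T**2 * var_inf),
          3.0 * lam_m / (2.0 * T**2 * var_inf))
    sigma2 = (3.0 * lam_p / T**2, 3.0 * lam_m / T**2)
    return rho, mu, nu, sigma2

rho, mu, nu, sigma2 = theta_prime_naive(1.0, 2.0, 2.27, -0.04, 1.0)
```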
\subsection{Minimizing Sum-of-Squares} \label{subsec:mini} In the parameter regime $\epsilon \sim O(1)$, due to the absence of small or large parameters allowing for asymptotic analysis, we invoke a minimization principle to determine the set of parameters $\Theta$ and $\Theta'$.
\subsubsection{Imprecise Scale-Separation Regime} When $\epsilon < 1$, we aim to find $\Theta$ which minimizes the sum-of-squares \begin{equation} \begin{split} \label{eq:qdf} J(\epsilon) \equiv \kappa \big\vert \langle u_T \rangle - \langle \widehat{u}_T\rangle \big\vert^2 + \big\vert \text{Var}(u_T) - \text{Var}( \widehat{u}_T) \big\vert^2 \end{split} \end{equation} where $\kappa \geq 0$ is introduced to ensure appropriate scaling of the two terms in the objective function. To be more precise, given $(\widehat{u}_0, \widehat{\gamma}_0)$, Eq.~(\ref{eq:qdf}) is an algebraic relation in terms of $\Theta$ once we impose ${u}_0:= \widehat{u}_0$ and ${\gamma}_0:=\gamma_\infty$ (see Appendices \ref{app:ssm} and \ref{app:dsm}). Note that a minimizer of $J(\epsilon)$ comes as close as possible to fulfilling Eq.~(\ref{eq:momcriteria}). It is worth mentioning that, unlike the MGF matching (\ref{eq:criteria22}), which is valid only when $(\widehat{u}_0, \widehat{\gamma}_0)$ are at most weakly correlated, the minimization methodology can be used even when they are strongly correlated.
We identify a (local) minimizer by taking $\Theta_{\text{naive}}$ as a starting point and applying an optimizer such as gradient descent. This minimization can be performed using continuation in $\epsilon$, starting from $\epsilon \ll 1$ where the initial guess is accurate. Because the solution of this minimization is computed at each assimilation time step, we name it {\em dynamic calibration} and denote the resulting time-dependent parameters by $\Theta_{\text{dynamic}}$. Of course, the key issue in sequential filtering that we are addressing is to maintain an accurate description of the evolving probability distribution at reasonable computational cost, and in this context it is impractical to compute $\Theta_{\text{dynamic}}$ at every observation time. In practice, one can take a time average over a range of dynamic calibrations. We refer to this as {\em static calibration} and denote the resulting parameters by $\Theta_{\text{static}}$.
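The dynamic calibration loop can be sketched with a simple gradient-free pattern search standing in for the optimizer. The objective below is a smooth placeholder for Eq.~(\ref{eq:qdf}), since evaluating the true $J$ requires the moment formulas of Appendices~\ref{app:ssm} and \ref{app:dsm}; the minimum location and starting point are purely illustrative.

```python
def calibrate_dynamic(J, theta0, step=0.1, iters=200):
    """Gradient-free pattern search minimizing J from the starting point
    theta0 (e.g. Theta_naive); a stand-in for any local optimizer."""
    theta, best = list(theta0), J(theta0)
    h = step
    for _ in range(iters):
        improved = False
        for i in range(len(theta)):
            for d in (h, -h):            # probe each coordinate direction
                trial = list(theta)
                trial[i] += d
                val = J(trial)
                if val < best:
                    theta, best, improved = trial, val, True
        if not improved:
            h *= 0.5                     # shrink the stencil when stuck
    return theta, best

# placeholder objective with (illustrative) minimizer at (1.0, 0.5)
J_toy = lambda th: (th[0] - 1.0)**2 + (th[1] - 0.5)**2
theta, best = calibrate_dynamic(J_toy, [0.7111, 1.2986])
```

Continuation in $\epsilon$ amounts to reusing the returned `theta` as `theta0` for the next, slightly larger, value of $\epsilon$.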
\subsubsection{Moderately Rare-Event Regime} As in the extremely rare-event regime, we carry out the same procedure for each of the stable and unstable Gaussian kernels. As in the imprecise scale-separation regime, we minimize an expression analogous to Eq.~(\ref{eq:qdf}), in which the conditioned mean and covariance are used instead. We first find $\Theta'_{\text{dynamic}}$ from $\Theta'_{\text{naive}}$, and next find $\Theta'_{\text{static}}$ from $\Theta'_{\text{dynamic}}$. Unlike the method based on matching MGF asymptotics, where the variance approximation may be inaccurate, this method simultaneously accounts for accuracy of both the mean and covariance approximations.
\section{Numerical Simulations} \label{sec:ns} Having obtained three different versions of adaptive parameters (naive set, static calibration, dynamic calibration) for DSM and dDSM, we here investigate the filtering performances of the suggested models using numerical simulations.
One distinctive advantage of the framework adopted here lies in the analytic tractability of the state space model. In Appendix~\ref{subsec:selecton}, we derive the closed form solution (when $\lambda_{+} = \lambda_{-}$) and the series solution (when $\lambda_{+} \neq \lambda_{-}$) for the MGFs of the SSM integral process. In Appendix~\ref{subsec:ssmfilter}, we use them to design the Gaussian filter (suitable when $\epsilon$ is small) and the Gaussian sum filter (suitable when $\epsilon$ is large) for SSM. The results from this direct filtering of SSM are then used as reference solutions in the subsequent experiments. We emphasize that the availability of these reference probability distributions enables a very careful examination of filter accuracy, beyond measuring the distance between a realization of the truth signal and the mean of an approximate filtering solution, and beyond what is seen in most other computational evaluations of filters; this in turn gives further depth to our demonstrations.
In all our experiments, we use the following parameter values to specify the SSM truth model: $\sigma_u=0.1549$, $\gamma_{+}=2.27$, $\gamma_{-}=-0.04$, $\lambda_{+}=1$ and $\lambda_{-}=2$ (these choices follow those in \cite{gershgorin2010test}). Fixing the inter-observation time $T=1$, we study the cases $\epsilon=10^{-1}$, $10^{0}$, $10^{1}$, $10^{2}$, selected as representative of the sharp scale-separation, imprecise scale-separation, moderately rare-event and extremely rare-event parameter regimes, in the order given. Since $\mathbb{E}(\tau_k) = 1/r$ for $\tau_k \sim \exp(r)$, the mean holding time in the stable mode ($\gamma = \gamma_{+}$) is $\epsilon/\lambda_{+}$; with $\lambda_{+}=1$, the reciprocal of $\epsilon$ is thus the average number of transitions to the unstable mode ($\gamma = \gamma_{-}$) per unit time spent in the stable mode. As $\lambda_{-}$ is twice $\lambda_{+}$ in this example, the average holding time in the stable mode is twice that in the unstable mode.
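The stationary quantities implied by these parameter values follow directly from $\lambda_{\pm}$; a short sketch (illustrative names):

```python
lam_p, lam_m, g_p, g_m = 1.0, 2.0, 2.27, -0.04
# stationary distribution of the switching process: rho_+- = lam_-+/(lam_+ + lam_-)
rho_p = lam_m / (lam_p + lam_m)          # P(gamma_inf = gamma_+) = 2/3
rho_m = lam_p / (lam_p + lam_m)          # P(gamma_inf = gamma_-) = 1/3
gbar_inf = rho_p * g_p + rho_m * g_m     # stationary mean, equal to 1.5 here

def mean_holding_times(eps):
    """Mean holding times E(tau) = eps/lam in the stable and unstable modes."""
    return eps / lam_p, eps / lam_m

tau_p, tau_m = mean_holding_times(0.1)   # the eps = 10^{-1} case
```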
We take the initial condition of SSM according to $u_0 \triangleq \mathcal{N}(0.1, 0.0016)$ and $\gamma_0 \triangleq \gamma_\infty$, independently of each other. For MSM (dMSM), we take $\bar{u}_0( \bar{u}'_0) \triangleq u_0$. We also take $\bar{\gamma}=\bar{\gamma}_\infty (= 1.5) $ and $\bar{\gamma}'\triangleq {\gamma}_\infty$. For DSM (dDSM), we take the independent Gaussian $(\widehat{u}_0,\widehat{\gamma}_0)$ (or $(\widehat{u}'_0,\widehat{\gamma}'_0)$) where $\widehat{u}_0 \triangleq u_0$ and $\widehat{\gamma}_0 \triangleq \mathcal{N}( 1.2 \bar{\gamma}_\infty, \text{Var}(\gamma_\infty))$. We set $\widehat{\rho}_{\pm}(0)=\bar{\rho}_{\pm}$. For the observational process in Eq.~(\ref{eq:obs}), we use $R_n=0.25 E$ where $E \equiv \sigma_u^2/(2\bar{\gamma})$ (in this case the variance of $\bar{u}_n \vert Y_n$ is independent of $n$).
\subsection{Performances of Simplified Filters} \label{sec:psf} \subsubsection{Sharp Scale-Separation Regime} We first study the case $\epsilon=10^{-1}$. For the implementation of DSM with dynamic calibration, taking $\Theta_{\text{naive}}$ as a starting point, a local minimizer $\Theta_{\text{dynamic}}$ of Eq.~(\ref{eq:qdf}) is computed at every observation time. The choice of $\kappa$ in $J(\epsilon)$ plays a substantial role in this problem. Here and hereafter, the value of $\kappa$ is set to zero for simplicity and consistency of presentation; this allows the prior mean from dynamic and static calibrations to be inaccurate but, in filtering, the posterior is the main object of interest. The time average of these parameters for $1\leq n \leq50$ is taken as $\Theta_{\text{static}}$.
In addition to the DSM filters, we apply Gaussian filters for MSM and SSM. For the latter, because $\lambda_{+} \neq \lambda_{-}$, we need to truncate the series solution of the MGF. Hereafter, the first $30$ terms of the series solution are kept; this ensures accuracy since $\mathbb{P}(N_T > 30)< 10^{-5}$.
In Fig.~\ref{fig1ep1}, we depict the relative errors of the prior and posterior approximations in terms of mean and variance. We see that the approximations of DSM with the parameters tuned by our methods are significantly more accurate than the MSM approximation. As expected, the overall errors of the mean and variance relative to those from the SSM filtering solution are ordered as: DSM (dynamic calibration) $\lesssim$ DSM (static calibration) $<$ DSM (naive set) $<$ MSM. Admittedly, this result is for a single realization of the observation process; however, we now show that it is robust with respect to the chosen observational data set.
At each observation time step, the posterior distributions of the approximate models are determined by the instance of observation, which is drawn from a Gaussian. In Fig.~\ref{fig2ep1}, we depict the dependence of the corresponding filter accuracy on $y_n$ for $n=20$ and $n=40$. It is observed that, for most values of $y_n$, Gaussian filters for DSM with dynamic and static calibrations significantly outperform MSM, leading to highly accurate posterior approximations. In Fig.~\ref{fig2ep1}, we also depict the statistical average of the posterior error with respect to $y_n$ for each $n$. There, one can see the ordering of the accuracies is exactly the same as in the single realization experiment.
\subsubsection{Imprecise Scale-Separation Regime} Taking $\epsilon=10^{0}$, it is not immediately clear which of the Gaussian and Gaussian mixture descriptions better approximates the SSM. It turns out that, in this case, the Gaussian filter for SSM is more suitable as the reference solution; our investigation of this issue can be found in subsection~\ref{sec:sa}. Accordingly, we find dynamic and static calibrations, and implement Gaussian filters for DSM, MSM and SSM. We depict Fig.~\ref{fig1e1} and Fig.~\ref{fig2e1}, which correspond, respectively, to Fig.~\ref{fig1ep1} and Fig.~\ref{fig2ep1}. The scenario interpreted from the figures is essentially the same as the one arising when $\epsilon=10^{-1}$, with the one exception that the naive DSM is less accurate than MSM. This is no surprise, because $\epsilon$ is no longer small and $\Theta_{\text{naive}}$ is no longer expected to be valid. Therefore, the overall errors are ordered as: DSM (dynamic calibration) $\lesssim$ DSM (static calibration) $<$ MSM $<$ DSM (naive set).
\subsubsection{Moderately Rare-Event Regime} When $\epsilon=10^1$, it is shown in subsection~\ref{sec:sa} that the Gaussian sum filter for SSM, made efficient by merging the mixture approximation of the posterior into a Gaussian at every observation time, is a better choice of reference solution than the Gaussian filter. We apply the same kind of Gaussian sum filters for dMSM and dDSM. For the dDSM implementations, taking $\Theta'_{\text{naive}}$ as a starting point, we solve dynamic calibrations for each of the two evolving Gaussian kernels. We then individually average them to obtain a static calibration.
In Fig.~\ref{fig1e10}, we depict the relative error for each of the Gaussian kernel approximations. Combining these two cases, we plot Fig.~\ref{fig2e10} and Fig.~\ref{fig3e10}, which correspond to Fig.~\ref{fig1e1} and Fig.~\ref{fig2e1} respectively. Importantly, for comparison, we additionally plot the result from DSM with $(\mu=\bar{\gamma}_\infty, \nu=0.1\bar{\gamma}_\infty,\sigma=5\sigma_u)$. These parameters are the ones used in \cite{gershgorin2010test}; they were selected as suitable from direct numerical simulations in this parameter regime, and are interestingly very close to $\Theta_{\text{naive}}$. Here the DSM remains a reasonable approximation of SSM, but its Gaussian filter is significantly less accurate than the remaining Gaussian sum filters.
Our simulations further indicate that, in this case, the dependence of filter accuracy on the observation is much more complicated than in the previous Gaussian filtering examples (Fig.~\ref{fig3e10}). The overall errors are ordered as: dDSM (dynamic calibration) $\lesssim$ dDSM (static calibration) $<$ dMSM $\lesssim$ dDSM (naive set) $<$ DSM (naive set).
\subsubsection{Extremely Rare-Event Regime} As in the preceding case, we take as reference the Gaussian sum filter for SSM with projection of the posterior into the set of Gaussian distributions. The overall scenario when $\epsilon=10^2$ is similar to the case $\epsilon=10^1$, except that dMSM becomes more accurate. We plot Fig.~\ref{fig1e100}, Fig.~\ref{fig2e100} and Fig.~\ref{fig3e100}, which correspond to Fig.~\ref{fig1e10}, Fig.~\ref{fig2e10} and Fig.~\ref{fig3e10} respectively. We see the overall errors are ordered as: dDSM (dynamic calibration) $\simeq$ dMSM $<$ dDSM (static calibration) $<$ dDSM (naive set). Note dMSM is quite accurate in this case because $\epsilon$ is very large.
\subsubsection{Summary} To summarize we plot the root mean square errors of mean and variance between the reference and approximations for all four choices of $\epsilon$ in Fig.~\ref{fig:rmse}.
\subsection{Supplementary Analysis} \label{sec:sa} This subsection discusses choices made while performing the numerical simulations in subsection~\ref{sec:psf}, especially the choice of reference solution; it can be skipped without harming the understanding of the main messages of the paper.
\subsubsection{Imprecise Scale-Separation Regime} In this case, where we take $\epsilon=10^{0}$, to determine whether the Gaussian or the Gaussian sum description is the better approximation of the SSM, we compare the distance between MSM (whose derivation corresponds to $\epsilon=0$) and the Gaussian approximation of SSM with the distance between dMSM (which corresponds to $\epsilon=\infty$) and the Gaussian sum approximation of SSM.
To that end, we plot the prior distributions from all four cases, when $n=10$ (and we do the same in the remaining examples), in the left panel of Fig.~\ref{fig:gsae1}. We see that the dMSM has a one-sided fat tail, due to the contribution of the Gaussian kernel evolved while $\gamma$ is in the unstable mode. However, this feature is not apparent in the mixture approximation of SSM (in fact both the Gaussian and Gaussian mixture approximations of SSM are very similar and unimodal). Furthermore, the $L^1$ distance between MSM and SSM (Gaussian) is significantly smaller than that between dMSM and SSM (Gaussian mixture), as shown in the right panel of Fig.~\ref{fig:gsae1}. This demonstrates that, in this parameter regime, the Gaussian filter for SSM is more suitable as a reference solution than the Gaussian sum filter.
\subsubsection{Moderately Rare-Event Regime} When $\epsilon=10^1$, we plot the four relevant prior distributions in the top-left panel of Fig.~\ref{fig:gsae10100} . While MSM and SSM (Gaussian) are distant from one another, both dMSM and SSM (Gaussian mixture) are characterized by a one-sided fat tail, in contrast to the case of $\epsilon=10^0$, and further are very close to one another. Therefore SSM with the Gaussian sum filter is chosen as the appropriate reference solution.
We turn our attention to the validity of Gaussian approximation of the Gaussian mixture posterior. The top-right panel of Fig.~\ref{fig:gsae10100} depicts the posterior of SSM (Gaussian mixture), which consists of two kernels. The distribution is well approximated by a single Gaussian that has the same mean and variance. This is due to the sharpness of the likelihood we choose (discussed shortly). We may thus approximate the filtering solution by a Gaussian at every observation time, and we can apply Gaussian sum filters in a computationally tractable way without harming accuracy.
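The merge used here, collapsing the two-kernel posterior to the single Gaussian with the same mean and variance, is standard moment matching; a minimal sketch (illustrative names):

```python
def merge_to_gaussian(weights, means, variances):
    """Collapse a Gaussian mixture to the single Gaussian with identical
    first and second moments (moment matching)."""
    mean = sum(w * m for w, m in zip(weights, means))
    second_moment = sum(w * (v + m * m)
                        for w, m, v in zip(weights, means, variances))
    return mean, second_moment - mean**2

# two symmetric kernels: the merged Gaussian has mean 0 and inflated variance
m, v = merge_to_gaussian([0.5, 0.5], [-1.0, 1.0], [0.25, 0.25])
```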
\subsubsection{Extremely Rare-Event Regime} With regard to the SSM filter, the scenario when $\epsilon=10^2$ is the same as in the case $\epsilon=10^1$. In the bottom of Fig.~\ref{fig:gsae10100}, the priors of dMSM and SSM (Gaussian sum) are almost indistinguishable, and the SSM posterior is accurately approximated by a Gaussian.
We conclude the current section with a further study of the Gaussian approximation of the posterior. Recall that we have fixed $R_n=0.25E$ thus far. In this case, the Gaussian approximation of the posterior can be performed without losing accuracy, but this may not hold when $R_n$ is larger. In Fig.~\ref{fig:gsa2e100}, we plot the prior and posterior with $R_n=0.75E$. Due to the flatter likelihood, the two-kernel posterior deviates significantly from its Gaussian approximation. In this case, the Gaussian approximation of the posterior cannot guarantee the accuracy of the filtering solution.
\begin{figure}
\caption{
The relative errors of the mean and variance approximations of the prior $u_{n}|Y_{n-1}$ (top) and posterior $u_{n}|Y_{n}$ (bottom) distributions when $\epsilon = 10^{-1}$. The dashed lines denote the time averages over $0 \leq n \leq 50$. }
\label{fig1ep1}
\caption{
The relative errors of the approximations of the posterior $u_n|Y_n$ distributions that depend on the realization of $y_n= u_n + \eta_n$ (top and bottom-left), and their statistical averages with respect to the law of $y_n$ (bottom-right) when $\epsilon = 10^{-1}$. In Gaussian filters, the accuracy of the posterior variance does not depend on the instance of $y_n$ (top-right). }
\label{fig2ep1}
\end{figure}
\begin{figure}
\caption{
The relative errors of the mean and variance approximations of the prior $u_{n}|Y_{n-1}$ (top) and posterior $u_{n}|Y_{n}$ (bottom) distributions when $\epsilon = 10^{0}$. }
\label{fig1e1}
\caption{
The relative errors of the approximations of the posterior $u_n|Y_n$ distributions that depend on the realization of $y_n= u_n + \eta_n$, and their statistical averages with respect to the law of $y_n$ when $\epsilon = 10^{0}$. }
\label{fig2e1}
\end{figure}
\begin{figure}
\caption{ The relative errors of the mean and variance approximations of each Gaussian kernels of the prior distributions when $\epsilon = 10^{1}$. }
\label{fig1e10}
\caption{
The relative errors of the mean and variance approximations of the prior $u_{n}|Y_{n-1}$ (top) and posterior $u_{n}|Y_{n}$ (bottom) distributions when $\epsilon = 10^{1}$. }
\label{fig2e10}
\end{figure}
\begin{figure}
\caption{
The relative errors of the approximations of the posterior $u_n|Y_n$ distributions that depend on the realization of $y_n= u_n + \eta_n$, and their statistical averages with respect to the law of $y_n$ when $\epsilon = 10^{1}$. In Gaussian sum filters, the accuracy of the posterior variance depends on the instance of $y_n$ (top-right and middle-right). }
\label{fig3e10}
\end{figure}
\begin{figure}
\caption{ The relative errors of the mean and variance approximations of each Gaussian kernels of the prior distributions when $\epsilon = 10^{2}$. }
\label{fig1e100}
\caption{
The relative errors of the mean and variance approximations of the prior $u_{n}|Y_{n-1}$ (top) and posterior $u_{n}|Y_{n}$ (bottom) distributions when $\epsilon = 10^{2}$. }
\label{fig2e100}
\end{figure}
\begin{figure}
\caption{
The relative errors of the approximations of the posterior $u_n|Y_n$ distributions that depend on the realization of $y_n= u_n + \eta_n$, and their statistical averages with respect to the law of $y_n$ when $\epsilon = 10^{2}$. }
\label{fig3e100}
\end{figure}
\begin{figure}
\caption{ The root mean square errors between the references from SSM filters and the approximations from MSM and DSM filters. }
\label{fig:rmse}
\end{figure}
\begin{figure}
\caption{ The Gaussian and Gaussian mixture approximations of SSM (labeled SSM(G) and SSM(GM) respectively) together with MSM and dMSM when $\epsilon = 10^{0}$. The dashed lines represent the two Gaussian kernels constituting dMSM (left). }
\label{fig:gsae1}
\caption{ The Gaussian and Gaussian mixture approximations of the SSM prior (left) and Gaussian approximations of the SSM posterior (right) when $\epsilon = 10^{1}$ (top) and $\epsilon = 10^{2}$ (bottom). The dashed lines are the two Gaussian kernels of the SSM approximation (top-left) and the Gaussian approximation of the Gaussian mixture SSM (right). }
\label{fig:gsae10100}
\caption{ The Gaussian and Gaussian sum approximations of SSM prior (left) and Gaussian approximations of SSM posterior (right) when $\epsilon = 10^{2}$ and $R=0.75E$. }
\label{fig:gsa2e100}
\end{figure}
\section{Conclusions} \label{sec:conclusions} In this paper we have employed simplified models for the estimation of a partially observed turbulent signal. Our test bed, the switching stochastic model (SSM), is a stochastic differential equation driven by a sign-alternating two-state Markov process. The system is either forced or dissipated depending on the sign of the driving signal, and as a consequence exhibits intermittent turbulent bursting. It is a cheap surrogate for turbulent signal generation, allowing rapid prototyping of a variety of approximate filters -- filters with model error. Two approximate models (MSM, DSM) for SSM have been constructed via simplification of the switching process underlying the turbulent bursting, leading to a Gaussian description for the filtering solution. We study the moment generating function (MGF) of the time integral of the switching process to reveal that these two models precisely mimic the SSM behavior when the switching frequency is relatively high. In addition to these two models, based on the same argument, we also build two models (dMSM, dDSM) whose regime of validity is rare switching of the driving signal. We associate these two models with Gaussian sum filters.
We first use the ergodicity of the switching processes to prove that MSM (dMSM) is the high (low) switching frequency limit of SSM and DSM (dDSM). Besides verifying the consistency of the proposed approximate models, the convergence results give rise to an analytic determination of the DSM (dDSM) parameters when the time-scales of the driving input and the system output are well separated. We achieve this by comparing the asymptotics of the MGFs in each of the two opposing parameter regimes, because their matching implies that the lower order moments of the corresponding DSM (dDSM) are very close to those of SSM. The result again gives rise to a determination of the DSM (dDSM) parameters when the two time-scales are weakly separated. In this case, we numerically find a minimizer of the sum-of-squares error function between the mean and variance of SSM and DSM (dDSM), for which the previous analytic solution is used as the initial guess. In our numerical simulations, the filtering results utilizing DSM (dDSM) with the parameters tuned according to our suggestions show significant improvements in accuracy in all the parameter regimes that we examined. Furthermore, when the time-scale separation is weak, the cost of performing the minimizations can be alleviated by averaging the parameters calculated for only a number of observation time steps, while maintaining the accuracy of the filtering solution to a considerable extent.
We have used tools from three different research areas, namely limit theorems, asymptotic analysis and computational optimization, to complete this program. These methods are not applied in isolation but are carefully chained together, each feeding the next, to provide a significant step in the analysis and development of filters that utilize approximate models suggested by a rigorous analysis of the underlying system. As the ultimate goal of filtering with model error is to estimate the state, and associated uncertainties, of real-world turbulence at tractable cost, our future work will include the development of these algorithms, and their benchmarking, in the case where the true signal is generated not by SSM, but rather by a real turbulence model.
\begin{appendix} \numberwithin{equation}{section}
\section{Switching Stochastic Model} \label{app:ssm} Here we analyze SSM. Subsection~\ref{subsec:selecton} concerns the computation of the MGFs of the integral process of the driving signal, and subsection~\ref{subsec:amgfip} their asymptotic behavior. We develop SSM filters in subsection~\ref{subsec:ssmfilter}. Note that the $\epsilon$ in $\lambda_{+}/\epsilon$ and $\lambda_{-}/\epsilon$ is dropped for notational economy.
\subsection{Moment generating function (MGF) of integral process} \label{subsec:selecton} Here we aim to analytically compute \begin{equation}
\begin{split} \label{eq:ssmmgfjt20} & \left\langle e^{\alpha (\Gamma_T-\Gamma_t )} \right\rangle \\ & \left\langle e^{\alpha (\Gamma_T-\Gamma_t )} \vert \gamma_0=\gamma_{\pm} \right\rangle
\end{split} \end{equation} where $\Gamma_t = \int^t_0 {\gamma}(s) ds $.
We first point out that it suffices to provide the formula for \begin{equation}
\begin{split} \label{eq:condssmmgf} \left\langle e^{\alpha \Gamma_t} \vert \gamma_0 =\gamma_{+} \right\rangle.
\end{split} \end{equation} Once this is done, that of $\left\langle e^{\alpha \Gamma_t} \vert \gamma_0 =\gamma_{-} \right\rangle$ is immediate from the exchange $(\lambda_{+},\gamma_{+}) \leftrightarrow (\lambda_{-},\gamma_{-})$. Because $\rho_\gamma(t)\equiv ( \mathbb{P}( \gamma_t =\gamma_{+} ), \mathbb{P}( \gamma_t =\gamma_{-} ) )^t$ solves $d{\rho}_\gamma(t)/{dt}=L^t\rho_\gamma(t)$, where the superscript $t$ denotes transpose and \begin{equation*} L \equiv \left( \begin{array}{cc} -\lambda_{+} &\lambda_{+} \\ \lambda_{-} & -\lambda_{-} \end{array} \right), \end{equation*} it satisfies \begin{equation}
\begin{split} \label{eq:probgamma} \left( \begin{array}{c} \mathbb{P}( \gamma_t =\gamma_{+} ) \\ \mathbb{P}( \gamma_t =\gamma_{-} ) \end{array} \right) &= \frac{1}{ \lambda_{+} +\lambda_{-} }
\left( \begin{array}{cc} \lambda_{-} +\lambda_{+}e^{-(\lambda_{-} +\lambda_{+})t} & \; \lambda_{-} -\lambda_{-} e^{-(\lambda_{-} +\lambda_{+})t}\\ \lambda_{+} -\lambda_{+} e^{-(\lambda_{-} +\lambda_{+})t}& \; \lambda_{+} +\lambda_{-}e^{-(\lambda_{-} +\lambda_{+})t} \end{array} \right) \left( \begin{array}{c} \mathbb{P}( \gamma_0 =\gamma_{+} ) \\ \mathbb{P}( \gamma_0 =\gamma_{-} ) \end{array} \right).
\end{split} \end{equation} Then from substituting (\ref{eq:probgamma}) into \begin{equation}
\begin{split} \label{eq:ssmmgfjt} \left\langle e^{\alpha (\Gamma_T-\Gamma_t )} \right\rangle &= \mathbb{P}( \gamma_t =\gamma_{+} ) \left\langle e^{\alpha (\Gamma_T-\Gamma_t )} \vert \gamma_t =\gamma_{+} \right\rangle + \mathbb{P}( \gamma_t =\gamma_{-} ) \left\langle e^{\alpha (\Gamma_T-\Gamma_t )} \vert \gamma_t =\gamma_{-} \right\rangle \\ &= \mathbb{P}( \gamma_0 =\gamma_{+} ) \left\langle e^{\alpha (\Gamma_T-\Gamma_t )} \vert \gamma_0 =\gamma_{+} \right\rangle + \mathbb{P}( \gamma_0 =\gamma_{-} ) \left\langle e^{\alpha (\Gamma_T-\Gamma_t )} \vert \gamma_0 =\gamma_{-} \right\rangle
\end{split} \end{equation} it follows that \begin{equation}
\begin{split} \label{eq:probgammats}
\left\langle e^{\alpha (\Gamma_T-\Gamma_t )} \vert \gamma_0=\gamma_{+} \right\rangle
= \frac{1}{ \lambda_{+} +\lambda_{-} } & \Bigg( \left(\lambda_{-} +\lambda_{+}e^{-(\lambda_{-} +\lambda_{+})t} \right) \left\langle e^{\alpha (\Gamma_T-\Gamma_t )} \vert \gamma_t =\gamma_{+} \right\rangle \\ & \quad + \left(\lambda_{+} -\lambda_{+} e^{-(\lambda_{-} +\lambda_{+})t}\right) \left\langle e^{\alpha (\Gamma_T-\Gamma_t )} \vert \gamma_t =\gamma_{-} \right\rangle \Bigg).
\end{split} \end{equation} Making use of $\left\langle e^{\alpha (\Gamma_T-\Gamma_t )} \vert \gamma_t =\gamma_{+} \right\rangle =\left\langle e^{\alpha \Gamma_{T-t}} \vert \gamma_0 =\gamma_{+} \right\rangle$, Eqs.~(\ref{eq:ssmmgfjt20}) are expressed in terms of (\ref{eq:condssmmgf}) through (\ref{eq:ssmmgfjt}), (\ref{eq:probgammats}).
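As a quick sanity check on (\ref{eq:probgamma}), the closed-form transition matrix can be compared with a direct exponentiation of $L^t$; the following Python sketch (variable names are ours and purely illustrative) does this for arbitrary rates.

```python
import numpy as np

def transition_matrix(lam_p, lam_m, t):
    """Closed-form solution of d(rho)/dt = L^T rho for the two-state chain,
    i.e. the matrix appearing in Eq. (eq:probgamma)."""
    s = lam_p + lam_m
    e = np.exp(-s * t)
    return np.array([[lam_m + lam_p * e, lam_m - lam_m * e],
                     [lam_p - lam_p * e, lam_p + lam_m * e]]) / s

def transition_matrix_ref(lam_p, lam_m, t):
    """Reference: exponentiate the transposed generator via eigendecomposition."""
    L = np.array([[-lam_p, lam_p], [lam_m, -lam_m]])
    w, V = np.linalg.eig(L.T)
    return np.real((V * np.exp(w * t)) @ np.linalg.inv(V))
```

The two computations agree to machine precision, and each column of the matrix sums to one, as befits a map between probability vectors.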
We next attempt to compute Eq.~(\ref{eq:condssmmgf}). In what follows, we will show that (\ref{eq:condssmmgf}) is given by (\ref{eq:ssmcharasyhomo}) for identical $\lambda_{+}=\lambda_{-}$, and that (\ref{eq:condssmmgf}) can be computed with the help of (\ref{eq:ssmcharasy}), (\ref{eq:pdfNt}) for distinct $\lambda_{+} \neq \lambda_{-}$. When $\gamma(0)=\gamma_{+}$, let $\gamma_k$ denote the value of $\gamma(s)$ after undergoing $k$ transitions, i.e., $\gamma_k=\gamma_{+}$ for even $k$ and $\gamma_k=\gamma_{-}$ for odd $k$. Let $\tau_k \triangleq \tau^{\gamma_{+}}$ for even $k$ and $\tau_k \triangleq \tau^{\gamma_{-}}$ for odd $k$. Let $T_{n}=\sum_{k=0}^{n-1} \tau_k$ and let $N_t= \max\lbrace n\in \mathbb{N} : T_n \leq t\rbrace$ denote the number of transitions of $\gamma(s)$ in the interval $s\in (0, t]$. From $\tau_k = T_{k+1}-T_k$, we have \begin{equation*} \Gamma_t = \int^t_0 {\gamma}(s) ds = \sum_{k=0}^{N_t-1} \gamma_k \tau_k + \gamma_{N_t}(t-T_{N_t}) = \sum_{k=0}^{N_t-1} (\gamma_k-\gamma_{N_t}) \tau_k + \gamma_{N_t}t. \end{equation*} Note that if $\tau \sim \exp(r)$ then $\mathbb{E}(e^{\alpha \tau})=\int_0^\infty e^{\alpha t} re^{-rt} dt = (1-{\alpha \over r})^{-1}$ for $\alpha< r$. Since $\lbrace \tau_k \rbrace_{k=0}^{n-1}$ are mutually independent and each $\tau_k$ is exponentially distributed, Eq.~(\ref{eq:condssmmgf}) allows for a formal expansion \begin{equation}
\begin{split} \label{eq:ssmcharasy} \left\langle e^{\alpha \Gamma_t} \vert \gamma_0 =\gamma_{+} \right\rangle & = \sum_{n=0}^\infty \mathbb{P}(N_t=n\vert \gamma_0 =\gamma_{+} ) \left\langle e^{\alpha \left(\sum_{k=0}^{n-1} (\gamma_k-\gamma_{n})\tau_k +\gamma_{n}t\right)} \vert \gamma_0 =\gamma_{+} \right\rangle \\ & = \sum_{\text{n:even}} \mathbb{P}(N_t=n\vert \gamma_0 =\gamma_{+} ) e^{\alpha \gamma_{+}t} \left( 1-\frac{\alpha (\gamma_{-}-\gamma_{+})}{\lambda_{-}}\right)^{-\frac{n}{2}} \\ & \quad + \sum_{\text{n:odd}} \mathbb{P}(N_t=n\vert \gamma_0 =\gamma_{+} )
e^{\alpha \gamma_{-}t} \left( 1-\frac{\alpha (\gamma_{+}-\gamma_{-})}{\lambda_{+}}\right)^{-\frac{n+1}{2}}
\end{split} \end{equation} for $ {\lambda_{-} \over \gamma_{-}-\gamma_{+} } < \alpha < {\lambda_{+} \over \gamma_{+}-\gamma_{-} } $.
To compute $\mathbb{P}(N_t=n\vert \gamma_0 =\gamma_{+} )$, notice \begin{equation*}
\begin{split} \mathbb{P}(N_t=n) & = \mathbb{P}(N_t \geq n)-\mathbb{P}(N_t \geq n+1) \\ & = \mathbb{P}(T_n \leq t)-\mathbb{P}(T_{n+1} \leq t)
\end{split} \end{equation*} which is, in other words, \begin{equation}
\begin{split} \label{eq:pdfofNt} \begin{cases} \mathbb{P}(N_t=0) & =1-\int^t_0 f_1(s)ds\\ \mathbb{P}(N_t=n) & = \int^t_0 f_n(s)ds -\int^t_0 f_{n+1}(s)ds, \quad n\geq 1 \end{cases}
\end{split} \end{equation} where $f_n$ denotes the probability density function of $T_n$. Let $f_{+}(t) = \lambda_{+}e^{-\lambda_{+}t}$ and $f_{-}(t) = \lambda_{-}e^{-\lambda_{-}t}$, and let $*$ denote the convolution, then \begin{equation*}
\begin{split} f_{m+n} (t) & = (f_{+})^{*m}*(f_{-})^{*n} (t) \\ & = \frac{\lambda_{+}^m\lambda_{-}^n}{(m-1)!(n-1)!} \int^t_0 s^{m-1}e^{-\lambda_{+}s} (t-s)^{n-1}e^{-\lambda_{-}(t-s)}ds\\ & = \frac{\lambda_{+}^m\lambda_{-}^n}{(m-1)!(n-1)!} e^{-\lambda_{-}t}t^{m+n-1} \int^1_0 s^{m-1}(1-s)^{n-1}e^{-(\lambda_{+}-\lambda_{-})ts}ds\\
\end{split} \end{equation*}
with $m-n \in \lbrace 0,1 \rbrace$.
\subsubsection{$\lambda_{+} = \lambda_{-}$} In case of $\lambda_{+} = \lambda_{-}$, both are denoted by $\lambda$. Note the beta function is given by \begin{equation*} \int^1_0 s^{m-1}(1-s)^{n-1}ds = \frac{(m-1)!(n-1)!}{(m+n-1)!} \end{equation*} and the use of integration by parts yields \begin{equation*}
\begin{split} \int^t_0 {f}_n ds = \int^t_0 \lambda^n \frac{s^{n-1}}{(n-1)!}e^{-\lambda s}ds =\frac{\lambda^n}{n!}t^n e^{-\lambda t} + \int^t_0 f_{n+1}ds.
\end{split} \end{equation*} Therefore, from (\ref{eq:pdfofNt}), $N_t$ is Poisson distributed with parameter ${\lambda} t$: \begin{equation*} \mathbb{P}(N_t = n ) = e^{-{\lambda}t}\frac{({\lambda}t)^n}{n!}. \end{equation*} In this case, $\mathbb{P}(N_t=n\vert \gamma_0 =\gamma_{+} ) = \mathbb{P}(N_t=n)$ and Eq.~(\ref{eq:ssmcharasy}) is summed into the closed form \begin{equation}
\begin{split} \label{eq:ssmcharasyhomo} \left\langle e^{\alpha \Gamma_t} \vert \gamma_0 =\gamma_{+} \right\rangle & = e^{-\lambda t}\Bigg( e^{\alpha \gamma_{+}t} \cosh \left( \lambda t \left( 1-\frac{\alpha (\gamma_{-}-\gamma_{+})}{\lambda_{-}}\right)^{-\frac{1}{2}}\right) \\ & \quad + e^{\alpha \gamma_{-}t} \sinh \left( \lambda t \left( 1-\frac{\alpha (\gamma_{+}-\gamma_{-})}{\lambda_{+}}\right)^{-\frac{1}{2}}\right) \left( 1-\frac{\alpha (\gamma_{+}-\gamma_{-})}{\lambda_{+}}\right)^{-\frac{1}{2}} \Bigg)
\end{split} \end{equation} where $\lambda$ denotes the common value of $\lambda_{+} = \lambda_{-}$.
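The closed form (\ref{eq:ssmcharasyhomo}) can be checked against a direct truncation of the Poisson-weighted series (\ref{eq:ssmcharasy}); the following sketch (illustrative parameter values within the convergence range of $\alpha$; our notation) confirms agreement to machine precision.

```python
import numpy as np

def mgf_closed(alpha, t, lam, gp, gm):
    """Closed form (eq. ssmcharasyhomo) for lambda_+ = lambda_- = lam."""
    a = (1.0 - alpha * (gm - gp) / lam) ** -0.5
    b = (1.0 - alpha * (gp - gm) / lam) ** -0.5
    return np.exp(-lam * t) * (np.exp(alpha * gp * t) * np.cosh(lam * t * a)
                               + np.exp(alpha * gm * t) * b * np.sinh(lam * t * b))

def mgf_series(alpha, t, lam, gp, gm, nmax=80):
    """Truncation of the Poisson-weighted series (eq. ssmcharasy)."""
    a = (1.0 - alpha * (gm - gp) / lam) ** -0.5
    b = (1.0 - alpha * (gp - gm) / lam) ** -0.5
    pmf, total = np.exp(-lam * t), 0.0       # Poisson weights, built iteratively
    for n in range(nmax):
        if n > 0:
            pmf *= lam * t / n
        if n % 2 == 0:                        # gamma_n = gamma_+ for even n
            total += pmf * np.exp(alpha * gp * t) * a ** n
        else:                                 # gamma_n = gamma_- for odd n
            total += pmf * np.exp(alpha * gm * t) * b ** (n + 1)
    return total
```

For example, with $\alpha=-1$, $t=0.5$, $\lambda=3$, $\gamma_{\pm}=\pm 1$ the two evaluations coincide to roughly $10^{-15}$.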
\subsubsection{$\lambda_{+} \neq \lambda_{-}$} Consider the case where $\lambda_{+}$ and $\lambda_{-}$ are different. Let $B(m,n;\alpha) \equiv \int^1_0 s^{m-1}(1-s)^{n-1}e^{\alpha s}ds$ then integration by parts yields \begin{equation*} B(m,n;\alpha)=\frac{1}{\alpha} \left[ (m-1)B(m-1,n;\alpha)-(n-1)B(m,n-1;\alpha)\right]. \end{equation*} Using $B(m-1,n;\alpha) = e^{\alpha} B(n,m-1;-\alpha)$ and \begin{equation*} \int t^n e^{\alpha t}dt=\frac{t^n e^{\alpha t}}{\alpha}-\frac{n}{\alpha}\int t^{n-1}e^{\alpha t}dt \end{equation*} we recursively obtain $f_n$ and the probability distribution of $N_t$ \begin{equation}
\begin{split} \label{eq:pdfNt} \begin{cases} \mathbb{P}(N_t=0 \vert \gamma_0=\gamma_{+}) & = e^{-\lambda_{+}t}\\ \mathbb{P}(N_t=1\vert \gamma_0=\gamma_{+}) & = \frac{\lambda_{+} (e^{-\lambda_{+}t}-e^{-\lambda_{-}t})}{-(\lambda_{+}-\lambda_{-})}\\ \mathbb{P}(N_t=2\vert \gamma_0=\gamma_{+}) &= \frac{ \lambda_{-}\lambda_{+}\exp(- \lambda_{-}t) - \lambda_{-}\lambda_{+}\exp(- \lambda_{+}t) + t \lambda_{-}\lambda_{+}(\lambda_{-}-\lambda_{+}) \exp(- \lambda_{+}t) }{ (\lambda_{-}-\lambda_{+})^2} \\ & \cdots \end{cases}
\end{split} \end{equation} from (\ref{eq:pdfofNt}). One can make use of (\ref{eq:pdfNt}) to compute a truncation of (\ref{eq:ssmcharasy}). We conclude this subsection with the following theorem, which justifies such truncations.
\begin{Theorem} \label{thm:unif} The RHS of Eq.~(\ref{eq:ssmcharasy}) converges uniformly. \end{Theorem} \begin{proof} Let $f^{+}_n(s) = (\lambda_{+})^n \frac{s^{n-1}}{(n-1)!}e^{-\lambda_{+} s}$ and $f^{-}_n(s) = (\lambda_{-})^n \frac{s^{n-1}}{(n-1)!}e^{-\lambda_{-} s}$. Assuming without loss of generality $\lambda_{+} \leq \lambda_{-}$, the inequalities \begin{equation*} \left( \frac{\lambda_{+}}{\lambda_{-}}\right)^m {f}^{-}_{m+n}(t) \leq f_{m+n}(t) \leq \left( \frac{\lambda_{-}}{\lambda_{+}}\right)^n {f}^{+}_{m+n}(t) \end{equation*} hold from $1 \leq e^{-(\lambda_{+}-\lambda_{-})ts}\leq e^{-(\lambda_{+}-\lambda_{-})t}$ for $s \in [0,1]$. Let $ f_{m+1+n} (t) = (f_{+})^{*(m+1)}*(f_{-})^{*n} (t) $; then, using \begin{equation*}
\begin{split} \int^t_0 \lambda^n \frac{s^{n-1}}{(n-1)!}e^{-\lambda s}ds \leq \int^t_0 \lambda^n \frac{s^{n-1}}{(n-1)!}ds =\frac{(\lambda t)^n}{n!}
\end{split} \end{equation*} we obtain \begin{equation*} \begin{split} \int^t_0 \left( f_{m+n}(s)-f_{m+1+n}(s) \right) ds & \leq \int^t_0 \left( \left( \frac{\lambda_{-}}{\lambda_{+}}\right)^n {f}^{+}_{m+n}(s) - \left( \frac{\lambda_{+}}{\lambda_{-}}\right)^{m+1} {f}^{-}_{m+1+n}(s) \right) ds \\ & \leq \left( \frac{\lambda_{-}}{\lambda_{+}}\right)^n \frac{(\lambda_{+}t)^{m+n}}{(m+n)!} + \left( \frac{\lambda_{+}}{\lambda_{-}}\right)^{m+1} \frac{(\lambda_{-}t)^{m+1+n}}{(m+1+n)!}. \end{split} \end{equation*} Therefore when $n$ is even \begin{equation*} \mathbb{P}(N_t = n \vert \gamma_0=\gamma_{+}) \leq \left( \frac{\lambda_{-}}{\lambda_{+}}\right)^{\frac{n}{2}} \frac{(\lambda_{+}t)^{n}}{n!} + \left( \frac{\lambda_{+}}{\lambda_{-}}\right)^{\frac{n}{2}+1} \frac{(\lambda_{-}t)^{n+1}}{(n+1)!} \end{equation*} is satisfied and a similar bound holds when $n$ is odd. The application of the Weierstrass M-test then leads to the uniform convergence, in view of Eq.~(\ref{eq:ssmcharasyhomo}). \end{proof}
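The low-order probabilities in (\ref{eq:pdfNt}) can also be verified numerically from (\ref{eq:pdfofNt}); a sketch with illustrative rates (variable names are ours) computes $\mathbb{P}(N_t=1\vert\gamma_0=\gamma_{+})$ by direct quadrature of the convolution $f_2=f_{+}*f_{-}$.

```python
import numpy as np

lp, lm, t = 1.0, 2.0, 0.7            # lambda_+, lambda_-, horizon (illustrative)

def trapz(y, x):
    """Composite trapezoidal rule."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# closed forms from (eq:pdfNt), conditioning on gamma_0 = gamma_+
p0 = np.exp(-lp * t)
p1 = lp * (np.exp(-lp * t) - np.exp(-lm * t)) / (-(lp - lm))
p2 = (lm * lp * np.exp(-lm * t) - lm * lp * np.exp(-lp * t)
      + t * lm * lp * (lm - lp) * np.exp(-lp * t)) / (lm - lp) ** 2

# numerical check: P(N_t = 1) = int_0^t f_1 - int_0^t f_2 with f_2 = f_+ * f_-
s = np.linspace(0.0, t, 2001)
f1 = lp * np.exp(-lp * s)
f2 = np.array([trapz(lp * np.exp(-lp * s[:k + 1]) * lm * np.exp(-lm * (u - s[:k + 1])),
                     s[:k + 1]) if k > 0 else 0.0
               for k, u in enumerate(s)])
p1_num = trapz(f1, s) - trapz(f2, s)
```

The quadrature agrees with the closed form to the accuracy of the trapezoidal rule, and the first three probabilities sum to a value below one, as they must.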
\subsection{Asymptotics for MGF of integral process} \label{subsec:amgfip} Here we derive asymptotic formulae for the MGFs (\ref{eq:ssmmgfjt20}), (\ref{eq:condssmmgf}) when $\epsilon \ll 1$ (scale-separation regime) and $\epsilon \gg 1$ (rare-event regime).
\subsubsection{Scale-separation regime} \label{subsec:scaleseparation} When $\lambda_{+}$ and $\lambda_{-}$ are distinct, and $\epsilon$ is small, we replace $\lambda$ in Eq.~(\ref{eq:ssmcharasyhomo}) by the harmonic mean $\bar{\lambda} \equiv \frac{2\lambda_{+}\lambda_{-}}{\lambda_{+}+\lambda_{-} }$ to derive an approximation. This is because the density function of $T_n$ is the convolution of $f_{+}=\lambda_{+}e^{-\lambda_{+}t}$ and $f_{-}=\lambda_{-}e^{-\lambda_{-}t}$, $n/2$ times each, and from \begin{equation*} \begin{split} \left( 1- \frac{\alpha}{\lambda_{-}}\right)^{-\frac{n}{2}} \left( 1- \frac{\alpha}{\lambda_{+}}\right)^{-\frac{n}{2}} \simeq \left( 1- \frac{\alpha}{\bar{\lambda}}\right)^{-{n}} \end{split} \end{equation*} we see $T_n \sim f_{-}^{* n/2}* f_{+}^{* n/2} \simeq f$ where $f$ is the gamma density with shape $n$ and rate $\bar{\lambda}$. Note that $P(N_t = n ) \simeq e^{-\bar{\lambda}t}(\bar{\lambda}t)^n/n!$ from $P(N_t \geq n ) = P(T_n \leq t)$. Therefore we get \begin{equation*}
\begin{split} \left\langle e^{\alpha \Gamma_t} \vert \gamma_0 =\gamma_{+}\right\rangle & \simeq e^{-\bar{\lambda} t}\Bigg( e^{\alpha \gamma_{+}t} \cosh \left( \bar{\lambda} t \left( 1-\frac{\alpha (\gamma_{-}-\gamma_{+})}{\lambda_{-}}\right)^{-\frac{1}{2}}\right)\\ & \quad + e^{\alpha \gamma_{-}t} \sinh \left( \bar{\lambda} t \left( 1-\frac{\alpha (\gamma_{+}-\gamma_{-})}{\lambda_{+}}\right)^{-\frac{1}{2}}\right) \left( 1-\frac{\alpha (\gamma_{+}-\gamma_{-})}{\lambda_{+}}\right)^{-\frac{1}{2}} \Bigg) \\ & \simeq \exp\left( \alpha \bar{\gamma}_\infty t +\alpha^2\frac{3}{8}\frac{ (\gamma_{-}-\gamma_{+})^2(\lambda_{-}^2+\lambda_{+}^2)}{\lambda_{-}\lambda_{+} (\lambda_{+}+ \lambda_{-})}t + \frac{\alpha (\gamma_{+}-\gamma_{-})}{4\lambda_{+}} \right)
\end{split} \end{equation*} and the rescaling $\lambda_{\pm} \to {\lambda_{\pm}}/{\epsilon}$ yields \begin{equation*}
\begin{split} \left\langle e^{\alpha \Gamma_t} \vert \gamma_0 =\gamma_{+}\right\rangle \simeq \exp\left( \alpha \bar{\gamma}_\infty t +\alpha^2\frac{3}{8}\frac{ (\gamma_{-}-\gamma_{+})^2(\lambda_{-}^2+\lambda_{+}^2)}{\lambda_{-}\lambda_{+} (\lambda_{+}+ \lambda_{-})}t \epsilon + \frac{\alpha (\gamma_{+}-\gamma_{-})}{4\lambda_{+}} \epsilon +\mathcal{O}(\epsilon^2) \right)
\end{split} \end{equation*} where $(1+x)^k = 1+kx+(k(k-1)/2!)x^2+\cdots$ and $\frac{1}{2}e^{x+\alpha \epsilon}+\frac{1}{2}e^{x+\beta \epsilon}(1+\gamma \epsilon) \simeq e^{x+ ((\alpha+\beta+\gamma)/2)\epsilon}$ are used.
Further we use $c e^{A+\alpha \epsilon} + (1-c)e^{A+\beta \epsilon} \simeq e^{A+( c\alpha+(1-c) \beta )\epsilon}$ to obtain \begin{equation*}
\begin{split} \left\langle e^{\alpha (\Gamma_T-\Gamma_t)} \right\rangle & \simeq \exp\Bigg( \alpha \bar{\gamma}_\infty (T-t) +\alpha^2\frac{3}{8}\frac{ (\gamma_{-}-\gamma_{+})^2(\lambda_{-}^2+\lambda_{+}^2)}{\lambda_{-}\lambda_{+} (\lambda_{+}+ \lambda_{-})} (T-t) \epsilon\\ & \qquad + \alpha \left( \mathbb{P}(\gamma_0=\gamma_{+}) \frac{(\gamma_{+}-\gamma_{-})}{4\lambda_{+}} + \mathbb{P}(\gamma_0=\gamma_{-}) \frac{(\gamma_{-}-\gamma_{+})}{4\lambda_{-}} \right) \epsilon +\mathcal{O}(\epsilon^2) \Bigg)
\end{split} \end{equation*} when $\epsilon < T-t$.
\subsubsection{Rare-event regime} \label{subsec:ssmrer} When $\epsilon$ is large, we retain only the $n=0$ term in Eq.~(\ref{eq:ssmcharasy}) to obtain \begin{equation}
\begin{split} \label{eq:rer} \left\langle e^{\alpha \Gamma_T} \right\rangle & = \mathbb{P}( \gamma_0 =\gamma_{+} ) \left\langle e^{\alpha \Gamma_T} \vert \gamma_0 =\gamma_{+} \right\rangle + \mathbb{P}( \gamma_0 =\gamma_{-} ) \left\langle e^{\alpha \Gamma_T} \vert \gamma_0 =\gamma_{-} \right\rangle\\ & \simeq \mathbb{P}( \gamma_0 =\gamma_{+} ) \exp\left( -\frac{\lambda_{+}}{\epsilon}T+\alpha \gamma_{+}T\right) + \mathbb{P}( \gamma_0 =\gamma_{-} ) \exp\left( -\frac{\lambda_{-}}{\epsilon}T+\alpha \gamma_{-}T\right).
\end{split} \end{equation}
\subsection{Filters for SSM} \label{subsec:ssmfilter} Here we make use of the calculations given in subsection~\ref{subsec:selecton} to define a Gaussian filter and a Gaussian sum filter for SSM.
\subsubsection{Gaussian filter} \label{subsec:ssmgaussianfilter} We design the filter as an assumed-density filter. Accordingly we assume $u_0$ to be Gaussian, and further assume the independence of $(u_0,\gamma_0)$, hence that of $(u_0,\Gamma_T )$. Then, from Eq.~(\ref{eq:uqmom}), the moments satisfy \begin{equation} \begin{split} \label{eq:ssmmomuq} \left\langle u_T \right\rangle & = \left\langle e^{-\Gamma_T}\right\rangle \langle u_0 \rangle \\ \text{Var}(u_T) & = \left\langle e^{-2\Gamma_T}\right\rangle \Big( \text{Var}(u_0) + \langle u_0 \rangle^2 \Big)
- \left\langle e^{-\Gamma_T}\right\rangle^2 \langle u_0 \rangle^2 \\ & \quad + \sigma_u^2 \int^T_0 \left\langle e^{-2(\Gamma_T-\Gamma_t)} \right\rangle dt \end{split} \end{equation} for SSM. Using either the closed-form solution in the case of identical $\lambda_{\pm}$ or a truncation of the series solution in the case of distinct $\lambda_{\pm}$, one can compute \begin{equation*} \left\langle e^{\alpha (\Gamma_T-\Gamma_t)} \right\rangle \end{equation*} for $\alpha=-1,-2$ and $t\in [0,T]$, and hence Eq.~(\ref{eq:ssmmomuq}). Together with Eq.~(\ref{eq:probgamma}) for the prediction of $\gamma_t$, this yields the mapping of the first two moments for $(u_0,\gamma_0) \to (u_T,\gamma_T)$. To complete the filter, we apply Kalman data assimilation for $u_T$ and keep $\gamma_T$ unchanged, as this is consistent with Bayes' rule when $u_T$ and $\gamma_T$ are independent \cite{doucet2001sequential}.
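The Kalman assimilation step for the scalar state $u_T$ takes the standard form; a minimal sketch under the linear observation model $y = u + \eta$, $\eta \sim N(0,r)$ (this observation model and the names are our illustrative assumptions):

```python
def kalman_update(mean, var, y, r):
    """Scalar Kalman analysis step for a prior N(mean, var) and an
    observation y = u + N(0, r); returns the posterior mean and variance."""
    gain = var / (var + r)                      # Kalman gain
    return mean + gain * (y - mean), (1.0 - gain) * var
```

For instance, with prior $N(0,1)$, observation $y=2$ and noise variance $r=1$ the posterior is $N(1,1/2)$, the familiar precision-weighted average.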
\subsubsection{Gaussian sum filter} \label{subsec:ssmgaussiansumfilter} Let $u_0$ be a Gaussian mixture and let the independence of $(u_0,\gamma_0)$ be assumed. Using \begin{equation*} \begin{split} \mathbb{P}({u}_T) =\mathbb{P}({\gamma}_0=\gamma_{+})\mathbb{P}({u}_T\vert {\gamma}_0=\gamma_{+}) +\mathbb{P}({\gamma}_0=\gamma_{-})\mathbb{P}({u}_T\vert {\gamma}_0=\gamma_{-}) \end{split} \end{equation*} we approximate $u_T$ as a Gaussian mixture with twice the number of Gaussian kernels. Similarly to Eq.~(\ref{eq:ssmmomuq}), the mean and variance of $u_T\vert \gamma_0=\gamma_{\pm}$ are determined by \begin{equation*} \left\langle e^{\alpha (\Gamma_T-\Gamma_t)} \vert \gamma_0=\gamma_{\pm}\right\rangle \end{equation*} for $\alpha=-1,-2$ and $t\in [0,T]$. Using the prior calculations, the conditioned mean and variance of each kernel are obtained. Then, using Eq.~(\ref{eq:probgamma}) for the prediction of $\gamma_t$, the moment mapping $(u_0,\gamma_0) \to (u_T,\gamma_T)$ is established.
To complete the filter, we apply Kalman data assimilation to each Gaussian kernel of $u_T$, taking care of the weights, while preserving the law of $\gamma_T$. Because the latter procedure preserves the number of Gaussians, a total of $2^n$ weighted Gaussian kernels describes the posterior distribution after $n$ inter-observation time steps, provided $u_0$ is a single Gaussian.
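The mixture analysis step can be sketched as follows: each kernel is Kalman-updated and its weight is multiplied by the marginal likelihood of the observation under that kernel (again assuming the illustrative scalar model $y = u + N(0,r)$; names are ours):

```python
import math

def gaussian_sum_update(weights, means, variances, y, r):
    """Analysis step for a Gaussian-mixture prior under y = u + N(0, r):
    Kalman-update each kernel and reweight it by N(y; mean_i, var_i + r)."""
    new_w, new_m, new_v = [], [], []
    for w, m, v in zip(weights, means, variances):
        s = v + r                                  # innovation variance
        like = math.exp(-0.5 * (y - m) ** 2 / s) / math.sqrt(2 * math.pi * s)
        gain = v / s                               # kernel-wise Kalman gain
        new_w.append(w * like)
        new_m.append(m + gain * (y - m))
        new_v.append((1.0 - gain) * v)
    z = sum(new_w)                                 # renormalize the weights
    return [w / z for w in new_w], new_m, new_v
```

Kernels near the observation gain weight while those far from it lose weight, which is exactly the mechanism by which the dMSM/dDSM filters track the regime of the switching signal.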
\section{Diffusive Stochastic Models} \label{app:dsm}
This section is concerned with DSM and dDSM. The moment mapping formulae of DSM are derived in subsection~\ref{sec:spekfmomentmapping}. We study the computation of the MGFs of the integral process in subsection~\ref{subsec:dsmmgf} and their asymptotic behavior in subsection~\ref{subsec:asmgfip}.
\subsection{DSM moments mapping} \label{sec:spekfmomentmapping} In this subsection we provide the moment mapping for \begin{equation*} \begin{split} \begin{cases} du &= - \gamma u\,dt+\sigma_u dB_u \\ d\gamma & = -d_\gamma(\gamma-\bar{\gamma}) \,dt+\sigma_\gamma dB_{\gamma} \end{cases} \end{split} \end{equation*} when $(u_0,\gamma_0)$ is Gaussian. Let $\Gamma_\gamma(t) \equiv \int^t_0 \gamma(s) ds $; then the path-wise solutions read \begin{equation*} \begin{split} u_t & = e^{-\Gamma_\gamma(t)}u_0 + \sigma_u \int^t_0 e^{-(\Gamma_\gamma(t)-\Gamma_\gamma(s))}dB_u(s) \equiv {A}_t+{B}_t \\ \gamma_t & = \bar{\gamma}+(\gamma_0-\bar{\gamma})e^{-d_\gamma t} + \sigma_\gamma \int^t_0 e^{-d_\gamma (t-s)} dB_\gamma(s). \end{split} \end{equation*} Define \begin{equation*}
\begin{split}
b_\gamma(t) &
\equiv (1-e^{-d_\gamma t})/ {d_\gamma}
\\ \mathcal{B}_\gamma(t) & \equiv \sigma_\gamma \int^t_0 ds \int^s_0 e^{-d_\gamma(s-s')} dB_\gamma(s')
\end{split} \end{equation*} then $ \Gamma_\gamma(t) = \bar{\gamma}(t-b_\gamma(t))+b_\gamma(t) \gamma_0 + \mathcal{B}_\gamma(t) $ and we have \begin{equation}
\begin{split} \label{eq:mommaping} \langle u_t \rangle & = \langle A_t \rangle \\ \langle \gamma_t \rangle & = \bar{\gamma}+(\langle\gamma_0\rangle-\bar{\gamma})e^{-d_\gamma t} \\ \text{Var}(u_t ) & = \langle u_t^2 \rangle - \langle u_t \rangle^2 = \langle A_t^2 \rangle +\langle B_t^2 \rangle - \langle A_t \rangle^2 \\ \text{Var}(\gamma_t ) & = e^{-2d_\gamma t} \text{Var}( \gamma_0 )+ \frac{\sigma_\gamma^2}{2d_\gamma}\left( 1-e^{-2d_\gamma t}\right) \\ \text{Cov}(u_t,\gamma_t ) & = \langle u_t \gamma_t\rangle - \langle u_t\rangle \langle \gamma_t\rangle = \bar{\gamma}\left( 1- e^{-d_\gamma t}\right)\langle A_t \rangle + e^{-d_\gamma t} \langle A_t \gamma_0 \rangle \\ & \qquad \qquad \qquad \qquad \qquad + \langle A_t \dot{\mathcal{B}}_\gamma(t) \rangle - \langle A_t \rangle \langle \gamma_t \rangle \\
\end{split} \end{equation} where the overdot denotes the time derivative.
Using \begin{equation*}
\begin{split}
\langle \Gamma_\gamma(t)-\Gamma_\gamma(s) \rangle & = \left( b_\gamma(t)-b_\gamma(s) \right) \langle
\gamma_0 \rangle +\bar{\gamma}((t-s)-(b_\gamma(t)-b_\gamma(s)))
\\
\text{Var}\left( \Gamma_\gamma(t)-\Gamma_\gamma(s) \right) & = \left( b_\gamma(t)-b_\gamma(s) \right)^2
\text{Var}( \gamma_0 ) + \text{Var}\left( \mathcal{B}_\gamma(t)-\mathcal{B}_\gamma(s) \right) \\
\langle \mathcal{B}_\gamma(t) \rangle & = 0 \\
\text{Var}(\mathcal{B}_\gamma(t)) & = -\frac{\sigma_\gamma^2}{2d_\gamma^3}
\left( 3-4e^{-d_\gamma t}+e^{-2d_\gamma t}-2d_\gamma t\right) \\ \text{Var}(\mathcal{B}_\gamma(t)-\mathcal{B}_\gamma(s)) & = -\frac{\sigma_\gamma^2}{d_\gamma^3} \left( 1+d_\gamma(s-t)+e^{-d_\gamma(s+t)}\times\left( -1-e^{2d_\gamma s}+\cosh(d_\gamma(s-t)) \right) \right) \\ \left\langle e^{- \mathcal{B}_\gamma(t)} \dot{\mathcal{B}}_\gamma(t)\right\rangle & = -\frac{1}{2} \partial_t \left( \text{Var}(\mathcal{B}_\gamma(t)) \right) \left\langle e^{- \mathcal{B}_\gamma(t)} \right\rangle
\end{split} \end{equation*} and using \begin{equation*}
\begin{split} \langle e^{z} \rangle & = e^{ \langle z \rangle + \frac{1}{2}\text{Var}(z) } \\ \langle e^{z}x \rangle & = e^{ \langle z \rangle + \frac{1}{2}\text{Var}(z) } \big( \langle x\rangle + \text{Cov}(x,z ) \big) \\ \langle e^{z} xy \rangle & = e^{ \langle z \rangle + \frac{1}{2}\text{Var}(z) } \Big( \text{Cov}(x,y) + \big( \langle x \rangle + \text{Cov}(x,z ) \big)
\big( \langle y \rangle + \text{Cov}(y,z ) \big) \Big)
\end{split} \end{equation*} where $(x,y,z)$ is joint Gaussian, we can compute \begin{equation*}
\begin{split} \left\langle A_t \right\rangle = \left\langle e^{- \Gamma_\gamma(t) }u_0 \right\rangle & = e^{- \bar{\gamma}(t- b_\gamma(t))} \left\langle e^{- b_\gamma(t) \gamma_0 } u_0 \right\rangle \left\langle e^{- \mathcal{B}_\gamma(t) }\right\rangle \\ \left\langle A_t\gamma_0 \right\rangle = \left\langle e^{- \Gamma_\gamma(t) }u_0\gamma_0 \right\rangle & = e^{-\bar{\gamma}(t- b_\gamma(t))} \left\langle e^{-b_\gamma(t) \gamma_0} u_0\gamma_0 \right\rangle \left\langle e^{- \mathcal{B}_\gamma(t) }\right\rangle \\ \left\langle A_t\dot{\mathcal{B}}_\gamma(t) \right\rangle = \left\langle e^{- \Gamma_\gamma(t) }u_0\dot{\mathcal{B}}_\gamma(t) \right\rangle & = e^{-\bar{\gamma}(t- b_\gamma(t))} \left\langle e^{-b_\gamma(t) \gamma_0} u_0 \right\rangle \left\langle e^{-\mathcal{B}_\gamma(t)} \dot{\mathcal{B}}_\gamma(t)\right\rangle \\ \left\langle A_t^2 \right\rangle = \left\langle e^{- 2\Gamma_\gamma(t) }u_0^2 \right\rangle & = e^{-2 \bar{\gamma}(t- b_\gamma(t))} \left\langle e^{-2 b_\gamma(t) \gamma_0 } u_0^2 \right\rangle \left\langle e^{- 2\mathcal{B}_\gamma(t) }\right\rangle \\ \langle B_t^2 \rangle & = \sigma_u^2 \int^t_0 \left \langle e^{- 2(\Gamma_\gamma(t)-\Gamma_\gamma(s))} \right\rangle ds \\ & = \sigma_u^2 \int^t_0 e^{-2 \langle \Gamma_\gamma(t)-\Gamma_\gamma(s) \rangle+2 \text{Var}\left( \Gamma_\gamma(t)-\Gamma_\gamma(s) \right) } ds
\end{split} \end{equation*} and thereby Eq.~(\ref{eq:mommaping}). Here a numerical integrator such as the trapezoidal rule can be employed for the computation of $\langle B_t^2 \rangle$. As a consequence, the analytic moment mapping $(u_0,\gamma_0 ) \to (u_t,\gamma_t)$ is obtained.
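The two special functions entering the mapping admit the simple small-$t$ behavior $b_\gamma(t) = t - d_\gamma t^2/2 + \mathcal{O}(t^3)$ and $\text{Var}(\mathcal{B}_\gamma(t)) = \sigma_\gamma^2 t^3/3 + \mathcal{O}(t^4)$, which gives a quick numerical check of the formulae above (a sketch; variable names are ours):

```python
import numpy as np

def b_gamma(t, d):
    """b_gamma(t) = (1 - exp(-d t)) / d, with d = d_gamma."""
    return (1.0 - np.exp(-d * t)) / d

def var_B(t, d, sigma):
    """Var(B_gamma(t)) from the closed form in the text, sigma = sigma_gamma."""
    return -(sigma ** 2 / (2.0 * d ** 3)) * (3.0 - 4.0 * np.exp(-d * t)
                                             + np.exp(-2.0 * d * t) - 2.0 * d * t)
```

Evaluating at a small time confirms both leading-order behaviors, and in particular that the closed form for $\text{Var}(\mathcal{B}_\gamma(t))$ is positive despite its leading minus sign.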
\subsection{Moment generating function of integral process} \label{subsec:dsmmgf} Recall \begin{equation*} \begin{split} \textbf{(DSM)}\qquad \left\{ \begin{array}{ll} d\widehat{u} &= - \widehat{\gamma} \widehat{u}\,dt+\sigma_u dB_u \\ d\widehat{\gamma} & = -\frac{\nu}{\epsilon}(\widehat{\gamma}-\mu) \,dt+ \frac{\sigma}{\sqrt{\epsilon}} dB_{\gamma} \\ \end{array} \right. \end{split} \end{equation*} and $\widehat{\Gamma}_t = \int^t_0 \widehat{\gamma}(s) ds$ then, from the preceding subsection, we have \begin{equation*} \begin{split}
\langle \widehat{\Gamma}_t\rangle & =\mu t +\left\langle \widehat{\gamma}_0 -\mu \right\rangle b_\gamma(t)\\ \text{Var}(\widehat{\Gamma}_t) & = \text{Var}(\widehat{\gamma}_0)b_\gamma(t)^2 +\text{Var}( \mathcal{B}_\gamma(t)) \\
\langle \widehat{\Gamma}_t-\widehat{\Gamma}_s \rangle & = b_\gamma(t-s) \langle \widehat{\gamma}_s \rangle +\mu((t-s)-b_\gamma(t-s)) \\
\text{Var}( \widehat{\Gamma}_t-\widehat{\Gamma}_s ) & = (b_\gamma(t-s))^2
\text{Var}( \widehat{\gamma}_s ) + \text{Var}\left( \mathcal{B}_\gamma(t-s) \right) \\ \langle \widehat{\gamma}_t \rangle & = \mu+(\langle\widehat{\gamma}_0\rangle-\mu)e^{-\nu t/\epsilon} \\ \text{Var}(\widehat{\gamma}_t ) & = e^{-2\nu t/\epsilon} \text{Var}( \widehat{\gamma}_0 )+ \frac{\sigma^2}{2\nu}\left( 1-e^{-2\nu t/\epsilon}\right) \\
\end{split} \end{equation*} where \begin{equation*} \begin{split}
b_\gamma(t) & = \epsilon (1-e^{-\nu t/\epsilon})/ {\nu} \\
\text{Var}(\mathcal{B}_\gamma(t)) & = -\epsilon^2 \frac{\sigma^2}{2\nu^3}
\left( 3-4e^{-\nu t/\epsilon}+e^{-2\nu t/\epsilon}-2\nu t/\epsilon\right). \end{split} \end{equation*} Let $\widehat{\gamma}_0$ be Gaussian then $\widehat{\Gamma}_t$ is Gaussian as well, and the MGFs \begin{equation} \begin{split} \label{eq:spmcharasy} \left\langle e^{\alpha \widehat{\Gamma}_t}\right\rangle & = \exp\left( {\alpha \langle \widehat{\Gamma}_t\rangle + \frac{\alpha^2}{2}\text{Var}(\widehat{\Gamma}_t)} \right)\\ \left\langle e^{\alpha (\widehat{\Gamma}_t-\widehat{\Gamma}_s)} \right\rangle & = \exp \left( {\alpha \langle \widehat{\Gamma}_t-\widehat{\Gamma}_s\rangle + \frac{\alpha^2}{2}\text{Var}(\widehat{\Gamma}_t-\widehat{\Gamma}_s)} \right) \end{split} \end{equation} can be computed.
\subsection{Asymptotics of MGFs of integral process} \label{subsec:asmgfip}
\subsubsection{Scale-separation regime} \label{subsec:dsmssr} For small $\epsilon$, from substituting $b_\gamma(t) = \frac{\epsilon}{\nu} + \mathcal{O}(\epsilon^2)$ and $\text{Var}(\mathcal{B}_\gamma(t)-\mathcal{B}_\gamma(s)) =\frac{\sigma^2}{\nu^2}(t-s)\epsilon +\mathcal{O}(\epsilon^2)$ into (\ref{eq:spmcharasy}), we obtain \begin{equation*} \begin{split} \left\langle e^{\alpha \widehat{\Gamma}_t}\right\rangle & = \exp\left( \alpha \left( \mu t + \langle \widehat{\gamma}_0-\mu\rangle \frac{\epsilon}{\nu}\right) +\alpha^2 \frac{\sigma^2}{2\nu^2}t\epsilon +\mathcal{O}(\epsilon^2) \right) \quad \epsilon < t\\ \left\langle e^{\alpha (\widehat{\Gamma}_t-\widehat{\Gamma}_s)} \right\rangle & = \exp\left( \alpha \mu(t-s) +\alpha^2 \frac{\sigma^2}{2\nu^2}(t-s)\epsilon + \mathcal{O}(\epsilon^2) \right) \quad \epsilon < s. \end{split} \end{equation*}
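This expansion can be checked against the exact Gaussian MGF (\ref{eq:spmcharasy}): for small $\epsilon$ the two log-MGFs agree up to $\mathcal{O}(\epsilon^2)$. A sketch with illustrative parameter values (names are ours):

```python
import numpy as np

def log_mgf_exact(alpha, t, mu, nu, sigma, eps, m0, v0):
    """log <exp(alpha * Gamma_t)> from (eq. spmcharasy), gamma_0 ~ N(m0, v0)."""
    x = nu * t / eps
    b = eps * (1.0 - np.exp(-x)) / nu            # b_gamma(t)
    varB = -(eps ** 2 * sigma ** 2 / (2.0 * nu ** 3)) * (
        3.0 - 4.0 * np.exp(-x) + np.exp(-2.0 * x) - 2.0 * x)
    mean = mu * t + (m0 - mu) * b
    var = v0 * b ** 2 + varB
    return alpha * mean + 0.5 * alpha ** 2 * var

def log_mgf_asym(alpha, t, mu, nu, sigma, eps, m0, v0):
    """Leading-order small-eps expansion displayed above."""
    return (alpha * (mu * t + (m0 - mu) * eps / nu)
            + 0.5 * alpha ** 2 * sigma ** 2 * t * eps / nu ** 2)
```

With $\epsilon = 10^{-3}$ the discrepancy between the two log-MGFs is of the order $10^{-6}$, consistent with the claimed $\mathcal{O}(\epsilon^2)$ remainder.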
\subsubsection{Rare-event regime} \label{subsec:dsmrer} When $\epsilon$ is large, we use $b_\gamma(t)=t-\frac{1}{2}\nu t^2/\epsilon+\mathcal{O}(1/\epsilon^2)$ and $\text{Var}(\mathcal{B}_\gamma(t))=\frac{\sigma^2}{3}t^3/\epsilon+\mathcal{O}(1/\epsilon^2)$ to obtain \begin{equation*} \begin{split} \left\langle e^{\alpha \widehat{\Gamma}_t} \right\rangle & =\exp \Bigg( \alpha \left( \mu t + \langle \widehat{\gamma}_0-\mu\rangle \left(t-\frac{\nu}{2} \frac{t^2}{\epsilon}\right) \right)+\frac{\alpha^2}{2} \left( \text{Var}(\widehat{\gamma}_0) \left(t^2-\nu \frac{t^3}{\epsilon} \right) + \frac{\sigma^2}{3} \frac{t^3}{\epsilon} \right) \\ & \qquad \qquad + \mathcal{O}\left(\frac{1}{\epsilon^2} \right) \Bigg) \end{split} \end{equation*} for DSM. Therefore, in case of dDSM, we have \begin{equation} \begin{split} \label{eq:dsmdasym} \left\langle e^{\alpha \widehat{\Gamma}'_t} \vert \widehat{\gamma}_0'=\gamma_{\pm} \right\rangle & =\exp\left( \alpha \left( \mu_{\pm} t + ( {\gamma}_{\pm}-\mu_{\pm}) \left(t-\frac{\nu_{\pm}}{2} \frac{ t^2}{\epsilon} \right) \right)+\frac{\alpha^2}{2} \frac{(\sigma_{\pm})^2}{3} \frac{t^3}{\epsilon} + \mathcal{O}\left(\frac{1}{\epsilon^2} \right) \right)\\ & =\exp\left( \alpha \left( {\gamma}_{\pm}t - \frac{1}{2\epsilon} ( {\gamma}_{\pm}-\mu_{\pm}) \nu_{\pm} t^2 \right) +\frac{\alpha^2}{2} \frac{(\sigma_{\pm})^2}{3\epsilon}t^3 + \mathcal{O}\left(\frac{1}{\epsilon^2} \right) \right). \end{split} \end{equation}
\section{Proofs of Theorems} \label{sec:equivalenceproof}
\subsection{Scale-Separation Limit}
\begin{proof} [of Lemma~\ref{thm:uconv2}] The convergence of the mean and variance follows from Eq.~(\ref{eq:uqmom}) and the bounded convergence theorem.
To show $L^2(\Omega;\mathbb{R})$ convergence, from Eq.~(\ref{eq:voc}) we obtain \begin{equation*} \begin{split} u_T^\epsilon -\bar{u}_T & = \left(e^{-\Gamma^\epsilon_T}u_0^\epsilon- e^{-\bar{\gamma}T}\bar{u}_0 \right) + \sigma_u \int^T_0 \left( e^{-(\Gamma^\epsilon_T-\Gamma^\epsilon_t)} - e^{-\bar{\gamma}(T-t)} \right) dB_u(t) \\ \end{split} \end{equation*} and \begin{equation*} \begin{split}
|u_T^\epsilon -\bar{u}_T|^2 & \leq 2 \left\lvert e^{-\Gamma^\epsilon_T}u_0^\epsilon- e^{-\bar{\gamma}T}\bar{u}_0 \right\rvert^2 +2 \sigma_u^2 \left\lvert \int^T_0 \left( e^{-(\Gamma^\epsilon_T-\Gamma^\epsilon_t)} - e^{-\bar{\gamma}(T-t)} \right) dB_u(t) \right\rvert^2. \end{split} \end{equation*} Taking expectations and using the It\^o isometry, we obtain \begin{equation*} \begin{split}
\left\langle |u_T^\epsilon -\bar{u}_T|^2 \right\rangle & \leq 2 \left\langle \left\lvert e^{-\Gamma^\epsilon_T}u_0^\epsilon - e^{-\bar{\gamma}T}\bar{u}_0 \right\rvert^2
\right\rangle +2 \sigma_u^2
\int^T_0 \left\langle \left\lvert e^{-(\Gamma^\epsilon_T-\Gamma^\epsilon_t)} - e^{-\bar{\gamma}(T-t)} \right\rvert^2 \right\rangle dt. \end{split} \end{equation*} Here note that the term \begin{equation} \begin{split} \label{eq:integral} \left\langle e^{-2(\Gamma^\epsilon_T-\Gamma^\epsilon_t)}\right\rangle -2 \left\langle e^{-(\Gamma^\epsilon_T-\Gamma^\epsilon_t)} \right\rangle e^{-\bar{\gamma}(T-t)} + e^{-2\bar{\gamma}(T-t)} \end{split} \end{equation} converges to zero as $\epsilon \to 0$. Then the bounded convergence theorem ensures that the integral of (\ref{eq:integral}) over $t\in[0,T]$ also converges to zero as $\epsilon \to 0$. Therefore the convergence \begin{equation*} \begin{split}
\left\langle |u_T^\epsilon - \bar{u}_T|^2 \right\rangle \to 0 \end{split} \end{equation*} as $\epsilon \to 0$ holds for any $\gamma^\epsilon$. \end{proof}
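The ergodic averaging underlying this limit is easy to observe numerically: an exact event-driven simulation of the two-state chain shows $\Gamma^\epsilon_T$ concentrating on $\bar{\gamma}_\infty T$ as $\epsilon \to 0$, with fluctuations of size $\mathcal{O}(\sqrt{\epsilon})$ consistent with the scale-separation asymptotics of subsection~\ref{subsec:scaleseparation}. A sketch with illustrative rates (names are ours):

```python
import numpy as np

def integral_of_switching(T, lp, lm, gp, gm, eps, seed=0):
    """Exact event-driven simulation of Gamma_T = int_0^T gamma(s) ds for the
    two-state chain with switching rates lambda_+/eps, lambda_-/eps,
    started in the state gamma_+."""
    rng = np.random.default_rng(seed)
    t, g, rate, G = 0.0, gp, lp / eps, 0.0
    while True:
        tau = rng.exponential(1.0 / rate)       # exact holding time
        if t + tau >= T:                        # no further switch before T
            return G + g * (T - t)
        G += g * tau
        t += tau
        g, rate = ((gm, lm / eps) if g == gp else (gp, lp / eps))
```

For symmetric rates and $\gamma_{\pm}=\pm 1$ the stationary mean is $\bar{\gamma}_\infty = 0$, and with $\epsilon = 10^{-4}$ a single realization of $\Gamma^\epsilon_1$ is already close to zero.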
Now we state and prove Lemma~\ref{thm:weakconv} and Lemma~\ref{lem:mgf}, which will be used to prove Lemma~\ref{thm:ssmmgf} and Lemma~\ref{thm:dsmmgf}.
\begin{Lemma} \label{thm:weakconv} Let $\mathcal{Y}$ be a Markov chain or a diffusion process associated with generator $\frac{1}{\epsilon}Q_0 $. We assume $\mathcal{Y}$ is an ergodic process with invariant measure $\rho^\infty_\mathcal{Y}$ satisfying $\text{Null}({Q}_0)=\text{span}\{ \mathbf{1} \}$, $\text{Null}({Q}_0^*)=\text{span}\{ \rho^\infty_\mathcal{Y} \}$. Let $\mathcal{X}$ satisfy the ODE \begin{equation*} \begin{split} \frac{d\mathcal{X}}{dt}=f(\mathcal{X},\mathcal{Y}) \end{split} \end{equation*} and let the generator of the combined process $(\mathcal{X},\mathcal{Y})$ be of the form \begin{equation*} \begin{split} Q = \frac{1}{\epsilon}Q_0+Q_1. \end{split} \end{equation*} Let $\bar{\mathcal{X}}$ satisfy the ODE \begin{equation} \begin{split} \label{eq:aode} \frac{d\bar{\mathcal{X}}}{dt}= \bar{Q}_1(\bar{\mathcal{X}}) = \int f(\bar{\mathcal{X}},\cdot) d\rho^\infty_\mathcal{Y} (\cdot) \end{split} \end{equation} then, for any $t>0$, $\mathcal{X}(t)$ converges weakly, i.e., in distribution, to $\bar{\mathcal{X}}(t)$ as $\epsilon \to 0$ (recall that $X_\epsilon$ is said to converge weakly to $X$, written $X_\epsilon \rightharpoonup X$, provided $\mathbb{E}(f(X_\epsilon))\to \mathbb{E}(f(X))$ for every bounded continuous function $f$). \end{Lemma}
\begin{proof} [of Lemma~\ref{thm:weakconv}] The first step is to show that the averaged ODE is given by Eq.~(\ref{eq:aode}). Let $\Phi$ be a bounded continuous function and let \begin{equation*} \begin{split} v(x,y, t) &= \mathbb{E}( \Phi(\mathcal{X}_t, \mathcal{Y}_t) \vert \mathcal{X}_0 =x,\mathcal{Y}_0=y). \end{split} \end{equation*} Then $v$ satisfies the backward equation \begin{equation} \begin{split} \label{eq:bwdeq} \partial_t {v}(x,y, t) =Q {v}(x,y,t) =\left( \frac{1}{\epsilon} Q_0 +Q_1 \right){v}(x,y,t). \end{split} \end{equation} We seek a solution $v=v(x,y,t)$ in the form of the multi-scale expansion \begin{equation*} \begin{split} v=v_0+\epsilon v_1 + \mathcal{O}(\epsilon^2). \end{split} \end{equation*} Substituting the expansion and equating coefficients of equal powers of $\epsilon$ to zero, we find \begin{equation} \begin{split} \label{eq:multieq} \mathcal{O}\left(\frac{1}{\epsilon}\right): & \qquad Q_0v_0=0 \\ \mathcal{O}(1): & \qquad Q_0v_1 = -Q_1v_0 +\frac{dv_0}{dt} \end{split} \end{equation} and we see $v_0$ is independent of $y$ since $\text{Null}(Q_0)=\text{span}\{ \mathbf{1} \}$. The operator $Q_0$ is singular and, for the $\mathcal{O}(1)$ equation in (\ref{eq:multieq}) to have a solution, the Fredholm alternative implies the solvability condition $$ -Q_1v_0+\frac{dv_0}{dt} \perp \text{Null}(Q_0^*).$$ For arbitrary $c(x)$, we find \begin{equation*} \begin{split} \int \int c(x) \left( \frac{dv_0}{dt}- Q_1 v_0 \right)\,dx d\rho^\infty_\mathcal{Y}(y) =\int c(x) \left( \frac{dv_0}{dt}- \bar{Q}_1 v_0 \right) \,dx = 0 \end{split} \end{equation*} implying that $$\frac{dv_0}{dt}- \bar{Q}_1 v_0=0 .$$
The second step is to show the weak convergence. Substituting $$ v=v_0+\epsilon v_1 + r$$ into Eq.~(\ref{eq:bwdeq}) yields \begin{equation*} \begin{split} \frac{dr}{dt} &= \left( \frac{1}{\epsilon}Q_0+Q_1 \right) r+ \epsilon q \\ q &= Q_1 v_1-\frac{dv_1}{dt} \end{split} \end{equation*} and \begin{equation*} \begin{split} r(t)=e^{Qt}r(0)+\epsilon \int^t_0 e^{Q(t-s)}q(s)ds \end{split} \end{equation*} by the variation-of-constants formula. From $v(t)=e^{Qt}v(0)$ we obtain
$|e^{Qt}|_\infty \leq 1$, since $e^{Qt}$ is a conditional-expectation (Markov) semigroup and hence a contraction in the supremum norm. Noting that $r(0)=v(0)-v_0(0)-\epsilon v_1(0)=-\epsilon v_1(0)$ when the initial data match, we then have \begin{equation*} \begin{split}
|r(t)|_\infty
& \leq |e^{Qt}|_\infty |r(0)|_\infty +\epsilon \int^t_0 |e^{Q(t-s)}|_\infty |q(s)|_\infty ds \\
& \leq \epsilon |v_1(0)|_\infty +\epsilon \int^t_0 |q(s)|_\infty ds \\
& \leq \epsilon \left( |v_1(0)|_\infty +t \sup_{0 \leq s \leq t} |q(s)|_\infty \right) \end{split} \end{equation*} and obtain \begin{equation*} \begin{split}
|v(t)-v_0(t)|_\infty \leq C(T)\epsilon \end{split} \end{equation*} for $0 \leq t \leq T$. \end{proof}
\begin{Lemma} \label{lem:mgf} Let $F_{X_\epsilon}(\cdot) \equiv \mathbb{P}(X_\epsilon \leq \cdot)$ and $F_{X}$ be the distribution functions of $X_\epsilon$ and of the non-random variable $X$, respectively. If \begin{subequations} \label{eq:covcond} \begin{align} \label{eq:covcond1} &\lim_{ b\to \infty} e^{\alpha b}(F_{X_\epsilon}(b)-1) \to 0 \\ \label{eq:covcond2} &\lim_{a\to -\infty} e^{\alpha a }F_{X_\epsilon}(a) \to 0\\ \label{eq:covcond3} &\lim_{a\to -\infty, b\to \infty} \int^b_a \left( F_{X_\epsilon}(x)-F_{X}(x)\right) e^{\alpha x}dx \to 0 \end{align} \end{subequations} as $\epsilon \to 0$, then $\left\langle e^{\alpha X_{\epsilon}}\right\rangle \to e^{\alpha X}$. The rate of convergence is the slowest of the rates in Eq.~(\ref{eq:covcond}). \end{Lemma}
\begin{proof} [of Lemma~\ref{lem:mgf}] Use integration by parts to obtain \begin{equation*} \begin{split} \left\langle e^{\alpha X_\epsilon} \right\rangle &= \lim_{a\to -\infty, b\to \infty} \int^b_a e^{\alpha x}dF_{X_\epsilon}(x) \\ & = \lim_{a\to -\infty, b\to \infty} \left( \left[ e^{\alpha x}F_{X_\epsilon}(x) \right]^b_a-\int^b_a F_{X_\epsilon}(x) \alpha e^{\alpha x}dx \right) \\ & = \lim_{ b\to \infty} e^{\alpha b}(F_{X_\epsilon}(b)-1)- \lim_{a\to -\infty} e^{\alpha a }F_{X_\epsilon}(a) +e^{\alpha X}\\ & \quad - \lim_{a\to -\infty, b\to \infty} \int^b_a \left( F_{X_\epsilon}(x)-F_{X}(x)\right) \alpha e^{\alpha x}dx, \end{split} \end{equation*} where the third equality uses $\int^b_a F_{X}(x)\, \alpha e^{\alpha x}dx = e^{\alpha b}-e^{\alpha X}$ for $a<X<b$. Letting $\epsilon \to 0$ and invoking (\ref{eq:covcond}) completes the proof. \end{proof}
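As a concrete illustration of the mechanics of Lemma~\ref{lem:mgf} (not part of the original argument), the following sketch takes $X_\epsilon$ uniform on $[\mu-\epsilon,\mu+\epsilon]$, for which the MGF is known in closed form, and checks numerically that the distribution-function identity from the proof reproduces it and that $\left\langle e^{\alpha X_\epsilon}\right\rangle \to e^{\alpha\mu}$; all parameter values are illustrative.

```python
import math

def mgf_uniform(alpha, mu, eps):
    # closed-form MGF of X_eps ~ Uniform[mu - eps, mu + eps]:
    # E exp(alpha X_eps) = exp(alpha mu) sinh(alpha eps) / (alpha eps)
    return math.exp(alpha * mu) * math.sinh(alpha * eps) / (alpha * eps)

def correction_integral(alpha, mu, eps, n=100000):
    # alpha * int (F_{X_eps}(x) - F_X(x)) e^{alpha x} dx over [mu-eps, mu+eps];
    # outside this interval the integrand vanishes
    a = mu - eps
    h = 2 * eps / n
    total = 0.0
    for k in range(n):
        x = a + (k + 0.5) * h
        F_eps = (x - a) / (2 * eps)      # uniform CDF
        F_X = 1.0 if x >= mu else 0.0    # step CDF of the constant mu
        total += (F_eps - F_X) * math.exp(alpha * x)
    return alpha * total * h

alpha, mu = 1.3, 0.7
for eps in (0.5, 0.05, 0.005):
    lhs = mgf_uniform(alpha, mu, eps)
    rhs = math.exp(alpha * mu) - correction_integral(alpha, mu, eps)
    print(eps, lhs, rhs, math.exp(alpha * mu))
```

The boundary terms of the integration by parts vanish here since the support of $X_\epsilon$ is bounded, so the correction integral alone accounts for the gap between the two MGFs.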
\begin{proof} [of Lemma~\ref{thm:ssmmgf}] For a bounded continuous function $\Phi$, let \begin{equation*} \begin{split} v(x,y_i, t) &= \mathbb{E}( \Phi(\Gamma_t, \gamma_t) \vert \Gamma_0 =x,\gamma_0=y_i) \\ \end{split} \end{equation*} where $y_1=\gamma_{+}$ and $y_2=\gamma_{-}$. It satisfies the backward equation \begin{equation*} \begin{split} \partial_t v(x,y_i, t) &= \sum_j L_{ij} v(x,y_j,t)+y_i \partial_x v(x,y_i,t) \\ \end{split} \end{equation*} or \begin{equation*} \begin{split} \partial_t {v}(x, t) & = Q {v}(x,t)\\ Q &= \frac{1}{\epsilon}Q_0+Q_1 = \frac{1}{\epsilon}
\left( \begin{array}{cc} -\lambda_{+} &\lambda_{+} \\ \lambda_{-} & -\lambda_{-} \end{array} \right) +
\left( \begin{array}{cc} y_1 \partial_x & 0 \\ 0 & y_2 \partial_x \end{array} \right) \end{split} \end{equation*} in vector notation. The generator of $\gamma$ is then given by \begin{equation*} L= \frac{1}{\epsilon} \left( \begin{array}{cc} -\lambda_{+} &\lambda_{+} \\ \lambda_{-} & -\lambda_{-} \end{array} \right) \end{equation*} and $\gamma$ is an ergodic process \cite{pavliotis2008multiscale}.
From Eq.~(\ref{eq:probgamma}), the time invariant measure of $\gamma$ is \begin{equation*} \begin{split} \rho_\gamma^\infty
= \frac{1}{\lambda_{-} +\lambda_{+}} \left( \begin{array}{c} \lambda_{-} \\ \lambda_{+} \end{array} \right) \end{split} \end{equation*} or \begin{equation*} \begin{split} \rho_\gamma^\infty & \triangleq \frac{\lambda_{-}}{\lambda_{-} +\lambda_{+}}\delta_{\gamma_{+}} +\frac{\lambda_{+}}{\lambda_{-} +\lambda_{+}}\delta_{\gamma_{-}} \end{split} \end{equation*} on $\mathbb{R}$. Averaging \begin{equation*} \begin{split} \frac{d{\Gamma}}{dt} = {\gamma} \end{split} \end{equation*} yields \begin{equation*} \begin{split} \frac{d\bar{\Gamma}}{dt} = \int \gamma d\rho^\infty_\gamma = \frac{\lambda_{-}\gamma_{+}+\lambda_{+}\gamma_{-}}{\lambda_{-} +\lambda_{+}} \equiv \gamma_\infty. \end{split} \end{equation*} Let \begin{equation*} \begin{split} v_0(x,t) &= \mathbb{E}( \phi(\bar{\Gamma}_t) \vert \bar{\Gamma}_0 =x) \end{split} \end{equation*} where $\phi(\cdot)=\Phi(\cdot,y)$; then \begin{equation*} \begin{split} \partial_t v_0(x,t) ={\gamma}_\infty\partial_x v_0(x,t) = \bar{Q}_1 v_0(x,t) \end{split} \end{equation*} and Lemma~\ref{thm:weakconv} ensures $v(x,y_i,t)\to v_0(x,t)$ as $\epsilon \to 0$. In this case the weak convergence of $\Gamma_t$ to $\gamma_\infty t$ implies $\Gamma_T-\Gamma_t \rightharpoonup {\gamma}_\infty(T-t) $ by Slutsky's theorem, which states that if $X_\epsilon \rightharpoonup X$ and $Y_\epsilon \rightharpoonup Y$ as $\epsilon \to 0$, where $Y$ is non-random, then $X_\epsilon+Y_\epsilon \rightharpoonup X+Y$ as $\epsilon \to 0$.
Let the distribution function of $\Gamma_T-\Gamma_t$ be denoted by $F_{\Gamma_T-\Gamma_t}(x) \equiv \mathbb{P}(\Gamma_T-\Gamma_t \leq x)$; then \begin{equation*} \begin{split} F_{\Gamma_T-\Gamma_t}(x) = \begin{cases} & 0 \qquad \text{for} \quad x < \gamma_{-} (T-t)\\ & 1 \qquad \text{for} \quad x \geq \gamma_{+} (T-t). \end{cases} \end{split} \end{equation*} Taking $a<\gamma_{-} (T-t)$ and $b>\gamma_{+} (T-t)$, Eqs.~(\ref{eq:covcond1}), (\ref{eq:covcond2}) are satisfied. Note $\Gamma_T-\Gamma_t \rightharpoonup {\gamma}_\infty(T-t)$ is equivalent to $F_{\Gamma_T-\Gamma_t}(x) \to F_{{\gamma}_\infty(T-t)}(x)$ for every $x$ that is a continuity point of $F_{{\gamma}_\infty(T-t)}$, given by \begin{equation*} \begin{split} F_{{\gamma}_\infty(T-t)}(x)= \begin{cases} & 0 \qquad \text{for} \quad x < {\gamma}_\infty(T-t)\\ & 1 \qquad \text{for} \quad x \geq {\gamma}_\infty(T-t), \end{cases} \end{split} \end{equation*} by the L\'{e}vy--Cram\'{e}r continuity theorem. Then Eq.~(\ref{eq:covcond3}) is satisfied by the bounded convergence theorem, and Lemma~\ref{lem:mgf} ensures the MGF convergence. The rate of the convergence $\left\langle e^{\alpha (\Gamma_T-\Gamma_t)}\right\rangle \to e^{\alpha \gamma_\infty(T-t)}$ is equal to the rate in \begin{equation*} \begin{split} \lim_{\epsilon \to 0} \int^{\gamma_{+}(T-t)}_{\gamma_{-}(T-t)} \left( F_{\Gamma_T-\Gamma_t}(x)-F_{\gamma_\infty(T-t)}(x)\right) e^{\alpha x}dx = 0. \end{split} \end{equation*} \end{proof}
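The MGF convergence established above can also be observed numerically. The following Monte Carlo sketch (our illustration, with parameter values not taken from the paper) simulates the two-state switching process with rates $\lambda_\pm/\epsilon$, starting from the invariant measure $\rho_\gamma^\infty$, and compares an empirical estimate of $\left\langle e^{\alpha\Gamma_T}\right\rangle$ with $e^{\alpha\gamma_\infty T}$ as $\epsilon$ decreases.

```python
import math, random

def sample_Gamma(T, eps, lam_p, lam_m, g_p, g_m, rng):
    # one path of Gamma_T = int_0^T gamma_s ds, where gamma switches
    # between g_p and g_m with rates lam_p/eps and lam_m/eps
    t, G = 0.0, 0.0
    in_plus = rng.random() < lam_m / (lam_m + lam_p)  # start from rho_gamma^infty
    while t < T:
        rate = (lam_p if in_plus else lam_m) / eps
        dt = min(rng.expovariate(rate), T - t)
        G += (g_p if in_plus else g_m) * dt
        t += dt
        in_plus = not in_plus
    return G

lam_p, lam_m, g_p, g_m = 1.0, 2.0, 1.0, -1.0               # illustrative values
gamma_inf = (lam_m * g_p + lam_p * g_m) / (lam_m + lam_p)  # averaged rate, 1/3 here
alpha, T = 1.0, 1.0
target = math.exp(alpha * gamma_inf * T)
rng = random.Random(0)
for eps in (0.1, 0.002):
    mgf = sum(math.exp(alpha * sample_Gamma(T, eps, lam_p, lam_m, g_p, g_m, rng))
              for _ in range(2000)) / 2000
    print(eps, mgf, target)
```

The empirical MGF moves towards the averaged value as $\epsilon$ shrinks; the residual gap at fixed $\epsilon$ reflects the $\mathcal{O}(\epsilon)$ fluctuations of $\Gamma_T$ about $\gamma_\infty T$ together with Monte Carlo error.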
\begin{proof} [of Lemma~\ref{thm:dsmmgf}] The generator of the system \begin{equation*} \begin{split} \left\{ \begin{array}{ll} d\widehat{\Gamma} &= \widehat{\gamma} dt \\ d\widehat{\gamma} & = -\frac{1}{\epsilon}\nabla U(\widehat{\gamma})dt+ \frac{1}{\sqrt{\epsilon}} \beta( \widehat{\gamma})dB_{\gamma} \\ \end{array} \right. \end{split} \end{equation*} is given by \begin{equation*} \begin{split} y \partial_x + \frac{1}{\epsilon} \left( -\nabla U(y)\partial_y +\frac{1}{2}\beta(y)^2\partial_y^2 \right) =Q_1+\frac{1}{\epsilon}Q_0. \end{split} \end{equation*} If $\widehat{\gamma}$ is an ergodic process with invariant measure $\rho_{\widehat{\gamma}}^\infty$ then Lemma~\ref{thm:weakconv} ensures $\widehat{\Gamma}(t) \rightharpoonup \overline{\widehat{\Gamma}}(t)$ solving \begin{equation*} \begin{split} \frac{d\overline{\widehat{\Gamma}}}{dt} = \int \widehat{\gamma} d\rho_{\widehat{\gamma}}^\infty. \end{split} \end{equation*}
In the case of the DSM, the generator reads \begin{equation*} \begin{split} y \partial_x + \frac{1}{\epsilon} \left( -\nu(y-\mu)\partial_y +\frac{\sigma^2}{2}\partial_y^2 \right) =Q_1+\frac{1}{\epsilon}Q_0 \end{split} \end{equation*} and the invariant measure for $\widehat{\gamma}$ is \begin{equation*} \begin{split} \rho_{\widehat{\gamma}}^\infty & = \mathcal{N}\left(\mu, \frac{\sigma^2}{2\nu} \right) \end{split} \end{equation*} because it solves $Q_0^*\rho_{\widehat{\gamma}}^\infty=0$. Therefore we obtain $\widehat{\Gamma}_t\rightharpoonup\mu t$ and further $\widehat{\Gamma}_T-\widehat{\Gamma}_t \rightharpoonup \mu(T-t) $ by Slutsky's theorem.
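That the Gaussian measure above is invariant can be checked directly: the stationary Fokker--Planck equation $Q_0^*\rho = \partial_y\left[\nu(y-\mu)\rho\right]+\frac{\sigma^2}{2}\partial_y^2\rho = 0$ integrates once (with decay at infinity) to the zero-flux identity $\nu(y-\mu)\rho+\frac{\sigma^2}{2}\rho'=0$, which the density of $\mathcal{N}(\mu,\sigma^2/(2\nu))$ satisfies identically. A small numerical check with illustrative parameter values:

```python
import math

nu, mu, sigma = 1.5, 0.4, 0.8      # illustrative OU parameters
var = sigma**2 / (2 * nu)          # stationary variance sigma^2 / (2 nu)

def rho(y):
    # density of the claimed invariant measure N(mu, sigma^2 / (2 nu))
    return math.exp(-(y - mu)**2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def drho(y):
    # exact derivative of the Gaussian density
    return -(y - mu) / var * rho(y)

# the once-integrated stationary Fokker-Planck equation states that the
# probability flux nu (y - mu) rho + (sigma^2 / 2) rho' vanishes identically
max_flux = max(abs(nu * (y - mu) * rho(y) + 0.5 * sigma**2 * drho(y))
               for y in [-3 + 0.1 * k for k in range(61)])
print("largest flux on the grid:", max_flux)
```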
Since $\widehat{\gamma}$ is a Gaussian process, we use the Chernoff bound $F_{\mathcal{N}(0,1)}(x) \leq \frac{1}{2}e^{-x^2/2}$, valid for $x \leq 0$, to verify Eqs.~(\ref{eq:covcond1}), (\ref{eq:covcond2}). Eq.~(\ref{eq:covcond3}) is satisfied by the bounded convergence theorem. As a consequence, the MGF convergence follows and the analysis of the convergence rate is the same as in the SSM case. \end{proof}
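The tail bound invoked here, $F_{\mathcal{N}(0,1)}(x)\leq \frac{1}{2}e^{-x^2/2}$ for $x\leq 0$, is easy to confirm numerically via the error function; a minimal check (ours, for illustration):

```python
import math

def Phi(x):
    # standard normal distribution function via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

for x in (0.0, -0.5, -1.0, -2.0, -4.0, -8.0):
    bound = 0.5 * math.exp(-x * x / 2.0)
    assert Phi(x) <= bound
print("F_N(0,1)(x) <= exp(-x^2/2)/2 holds at all sampled x <= 0")
```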
\subsection{Rare-Event Limit} \begin{proof} [of Lemma~\ref{thm:uconv3}] This follows directly from \begin{equation*} \begin{split}
\left\langle {u}^\epsilon_t |\gamma^\epsilon_0=\gamma_{\pm}\right\rangle
& = \left\langle e^{-{\Gamma}^\epsilon_t}{u}^\epsilon_0 |\gamma^\epsilon_0=\gamma_{\pm}\right\rangle \\
\text{Var}(u^\epsilon_t|\gamma^\epsilon_0=\gamma_{\pm}) &= \left\langle \left( e^{-{\Gamma}^\epsilon_t}u_0^\epsilon \right)^2
|\gamma^\epsilon_0=\gamma_{\pm} \right\rangle
-\left\langle e^{-{\Gamma}^\epsilon_t}u^\epsilon_0 |\gamma^\epsilon_0=\gamma_{\pm}\right\rangle^2\\ & \quad + \sigma_u^2 \int^t_0 \left\langle e^{-2({\Gamma}^\epsilon_t-{\Gamma}^\epsilon_s)}
|\gamma^\epsilon_0=\gamma_{\pm}\right\rangle ds. \end{split} \end{equation*} \end{proof}
\begin{proof} [of Lemma~\ref{lem:ssmd}] We take $\epsilon\to \infty$ in Eq.~(\ref{eq:ssmcharasy}) and use Theorem~\ref{thm:unif}. In view of (\ref{eq:rer}), this yields the case $t=0$. We then invoke (\ref{eq:probgammats}) to complete the proof.
\end{proof}
\begin{proof} [of Lemma~\ref{lem:dsmd}] We take $\epsilon\to \infty$ in Eq.~(\ref{eq:dsmdasym}) for the case $t=0$. Direct computation of (\ref{eq:spmcharasy}) leads to the result.
\end{proof}
\end{appendix}
\end{document}
\begin{document}
\title{The 1--2--3 Conjecture almost holds for regular graphs}
\author[agh]{Jakub Przyby{\l}o\fnref{MNiSW}} \ead{jakubprz@agh.edu.pl}
\fntext[MNiSW]{This work was partially supported by the Faculty of Applied Mathematics AGH UST statutory tasks within subsidy of Ministry of Science and Higher Education.}
\address[agh]{AGH University of Science and Technology, al. A. Mickiewicza 30, 30-059 Krakow, Poland}
\begin{abstract} The well-known 1--2--3 Conjecture asserts that the edges of every graph without isolated edges can be weighted with $1$, $2$ and $3$ so that adjacent vertices receive distinct weighted degrees. The conjecture is open in general, while it is known that the weight set $\{1,2,3,4,5\}$ always suffices. We show that for regular graphs it is sufficient to use weights $1$, $2$, $3$, $4$. Moreover, we prove the conjecture to hold for every $d$-regular graph with $d\geq 10^8$. \end{abstract}
\begin{keyword} 1--2--3 Conjecture \sep weighted degree of a vertex \sep regular graph \end{keyword}
\maketitle
\section{Introduction}
One of the most basic observations in graph theory implies that there are no antonyms of regular graphs, understood as graphs all of whose vertices have pairwise distinct degrees, except in the trivial one-vertex case.
Potential alternative definitions of an irregular graph were studied in the paper of Chartrand, Erd\H{o}s and Oellermann~\cite{ChartrandErdosOellermann}. Chartrand et al.~\cite{Chartrand}, on the other hand, turned towards measuring the level of irregularity of graphs, rather than defining their irregular representatives. Suppose we admit multiplying edges of a given simple graph $G$; what then is the minimum $k$ such that we may obtain an irregular multigraph (a multigraph in which all degrees are pairwise distinct) from $G$ by replacing every edge $e$ by at most $k$ parallel copies of $e$? This value was called the \emph{irregularity strength} of $G$; see details and exemplary results concerning this graph invariant in~\cite{Aigner,Lazebnik,Faudree,Frieze,KalKarPf,Lehel,MajerskiPrzybylo2,Nierhoff}. It was investigated in numerous further papers and gave rise to a wide list of related problems. Perhaps the most closely associated with the irregularity strength itself is its local variant, oriented towards differentiating the degrees of adjacent vertices only. Note that rather than multiplying edges of a given graph $G=(V,E)$, we may consider its \emph{edge $k$-weighting}, i.e. an assignment $\omega:E\to\{1,2,\ldots,k\}$, and instead of focusing on the degree of a vertex $v$ in the corresponding multigraph, consider its so-called \emph{weighted degree} in $G$, defined as $\sigma_\omega(v):=\sum_{u\in N(v)}\omega(uv)$. If this causes no ambiguity, we also write $\sigma(v)$ instead of $\sigma_\omega(v)$ and call it simply the
\emph{sum at $v$}.
We say $\omega$ is \emph{vertex-colouring} if $\sigma(u)\neq\sigma(v)$ for every edge $uv\in E$ -- we shall then write that $u$ and $v$ are \emph{sum-distinguished} or that there is \emph{no sum conflict} between $u$ and $v$. This concept gained equally considerable attention in the combinatorial community as its precursor, largely due to the following intriguing conjecture. \begin{conjecture}[1--2--3 Conjecture]\label{Conjecture123Conjecture} Every graph without isolated edges admits a ver\-tex-co\-louring edge $3$-weighting. \end{conjecture} This remarkable presumption originates in the paper~\cite{123KLT} of Karo\'nski, {\L}uczak and Thomason, who confirmed it in particular for $3$-colourable graphs. The first general constant upper bound was however shown by Addario-Berry, Dalal, McDiarmid, Reed and Thomason~\cite{Louigi30}, who designed powerful and widely applicable theorems on so-called degree-constrained subgraphs (cf. e.g.~\cite{Louigi2}) to prove that every graph without isolated edges admits a vertex-colouring edge $30$-weighting. The same technique was further developed by Addario-Berry, Dalal and Reed~\cite{Louigi} to decrease $30$ to $16$, and by Wang and Yu~\cite{123with13}, who pushed it further down to $13$. A major breakthrough was later achieved due to research devoted to a total variant of the same concept, introduced in~\cite{12Conjecture}, and especially thanks to the result of Kalkowski~\cite{Kalkowski12}, later generalized via an algebraic approach to the list setting by Wong and Zhu~\cite{WongZhu23Choos}. See also e.g.~\cite{BarGrNiw,PrzybyloWozniakChoos,WongZhuChoos} for other results, concerning in particular list versions of both problems.
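For very small graphs the conjecture can be confirmed by exhaustive search over all edge weightings. The following sketch (ours, for illustration only) does so for the $5$-cycle and for $K_4$, and also confirms that a single edge $K_2$, excluded by the assumption of no isolated edges, admits no such weighting.

```python
from itertools import product

def vertex_colouring_weighting_exists(n, edges, k=3):
    """Search all edge weightings from {1,...,k} for one in which
    adjacent vertices receive distinct weighted degrees (sums)."""
    for w in product(range(1, k + 1), repeat=len(edges)):
        sums = [0] * n
        for weight, (u, v) in zip(w, edges):
            sums[u] += weight
            sums[v] += weight
        if all(sums[u] != sums[v] for u, v in edges):
            return True
    return False

C5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]          # 2-regular
K4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]  # 3-regular
assert vertex_colouring_weighting_exists(5, C5)
assert vertex_colouring_weighting_exists(4, K4)
assert not vertex_colouring_weighting_exists(2, [(0, 1)])  # isolated edge fails
print("C5 and K4 admit vertex-colouring 3-weightings; K2 does not")
```

Brute force is of course hopeless beyond toy cases -- with $m$ edges the search space has size $3^m$ -- which is precisely why the structural arguments of the paper are needed.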
A modification and development of a surprisingly simple algorithm designed by Kalkowski in~\cite{Kalkowski12} allowed Kalkowski, Karo\'nski and Pfender~\cite{KalKarPf_123} to achieve the best general bound thus far towards Conjecture~\ref{Conjecture123Conjecture}, implying that weights $1,2,3,4,5$ are always sufficient. It is moreover known that the 1--2--3 Conjecture holds for very dense and large enough graphs, i.e. that there exists a constant $n'$ such that every graph with $n\geq n'$ vertices and minimum degree $\delta(G)>0.99985n$ admits a vertex-colouring edge $3$-weighting, as proved recently by Zhong in~\cite{123dense-Zhong}, and that even just the weights $1,2$ are asymptotically almost surely sufficient for a random graph (chosen from $G_{n,p}$ for a constant $p\in(0,1)$), see~\cite{Louigi}. On the other hand, it was proved by Dudek and Wajc~\cite{DudekWajc123complexity} that determining whether a particular graph admits a vertex-colouring edge $2$-weighting is NP-complete, while Thomassen, Wu and Zhang~\cite{ThoWuZha} showed that the same problem is polynomial in the family of bipartite graphs. In this paper we provide two results drawing us very close to a complete solution of the 1--2--3 Conjecture in the case of regular graphs, a class which might appear particularly obstinate in this context in view of the exactly equal degrees of all its vertices (though obviously the regularities within such graphs might be, and are, an asset while analysing them).
\section{Main Results}
\begin{theorem}\label{1234regTh} Every $d$-regular graph with $d\geq 2$ admits a vertex-colouring edge $4$-weighting. \end{theorem} This was earlier known for $d\leq 3$ by~\cite{123KLT} and possibly for $d=4$ -- see e.g.~\cite{Seamon123survey}. Recently this was also confirmed for $d=5$ by Bensmail~\cite{Julien5regular123}. Though our proof of Theorem~\ref{1234regTh} was obtained independently, its generic idea partly resembles the one from~\cite{Julien5regular123}, but extends to all regular graphs. Apart from this, we moreover prove that the 1--2--3 Conjecture holds for regular graphs with sufficiently large degree by showing the following. \begin{theorem}\label{123largeregTh} Every $d$-regular graph with $d\geq 10^8$ admits a vertex-colouring edge $3$-weighting. \end{theorem} The proof of this result is completely different and based on the probabilistic method. The two approaches nevertheless share two common features. Firstly, both exploit at some point modifications of Kalkowski's algorithm from~\cite{Kalkowski12} in order to get rid of a part of the possible sum conflicts, but in a different manner -- this is used as one of the main tools in the first proof, and only as a kind of final cleaning device in the second one. Secondly, in both approaches we single out a special, usually small subset of vertices, and use the edges between this set and the rest of the vertices to adjust the sums in the graph (in the first proof such a set, $I$, is stable, while in the second the set $V_0$ is chosen randomly, and is thus usually not stable); all details follow.
We shall apply the following rather standard notation for any given graph $G=(V,E)$, $v\in V$, $E'\subseteq E$ and $V',V''\subseteq V$ where $V'\cap V''=\emptyset$. By $G[V']$ we understand the graph induced by $V'$ in $G$, by $N_{V'}(v)$ -- the set of neighbours of $v$ in $V'$ (i.e., the set of vertices $u\in V'$ with $uv\in E$), by $d_{V'}(v)$ -- the number of such neighbours (i.e., $d_{V'}(v):=|N_{V'}(v)|$), by $d_{E'}(v)$ -- the number of edges in $E'$ incident with $v$, by $E(V')$ -- the set of edges from $E$ with both ends in $V'$, by $E(V',V'')$ -- the set of edges from $E$ with one end in $V'$ and the other in $V''$, and finally, by $G-v$ we mean the graph obtained from $G$ by removing $v$ and all its incident edges. Moreover, the sum of graphs $G_1=(V_1,E_1), G_2=(V_2,E_2)$ is understood as $G_1\cup G_2:=(V_1\cup V_2, E_1\cup E_2)$.
\section{Proof of Theorem~\ref{1234regTh}}
Let $G=(V,E)$ be a $d$-regular graph, $d\geq 2$, and let $I\subset V$ be an arbitrary maximal independent set in $G$. Denote $R:=V\smallsetminus I$. Let $R_1\subseteq R$ be the set of isolated vertices in $G[R]$, and set $R_2:=R\smallsetminus R_1$. Denote by $G_1,\ldots,G_p$ the components of $G[R_2]$ (each of which contains at least one edge). For every $i=1,\ldots,p$, we order the vertices of $G_i$ into a sequence $v_1,\ldots,v_{n}$ so that each $v_j$ with $j<n$ has a \emph{forward neighbour} in $G_i$, that is a neighbour $v_k$ of $v_j$ in $G_i$ with $k>j$ (this can be achieved by denoting any vertex of $G_i$ as $v_n$ and using e.g. the BFS algorithm to find a spanning tree of $G_i$ rooted at $v_n$, denoting consecutive vertices encountered within the algorithm: $v_{n-1},v_{n-2},\ldots$); we call the edge joining $v_j$ with such a $v_k$ of least index $k$ ($k>j$) the \emph{first forward edge of} $v_j$. Analogously we define \emph{backward neighbours} of a given vertex in $G_i$. The vertex $v_n$ shall moreover be called the \emph{last vertex} of $G_i$. By definition, every vertex in $R$ is incident with an edge joining it with $I$; for every $v\in R_2$ which is not the last vertex (in some component of $G[R_2]$) choose arbitrarily one such edge and denote it $e_v$ -- we shall call $e_v$ the \emph{supporting edge} of $v$. We shall first assign initial weights $\omega(e)$ to all the edges $e$ of $G$. These shall be modified so that at the end of our construction: \begin{itemize} \item[(a)] $\sigma(v)<3d$ for every $v\in R_2$; \item[(b)] $\sigma(v)\geq 3d$ for every $v\in I$; \item[(b')] $\sigma(v)<4d$ for every $v\in I$ with a neighbour in $R_1$; \item[(b'')] $\sigma(v)\in\{3d-1,4d\}$ for every $v\in R_1$, \end{itemize} where by $\sigma(v)$ we mean the sum at a given vertex $v$ in $G$. 
Note that since $I$ and $R_1$ are stable sets and there are no edges between $R_1$ and $R_2$ in $G$, by (a), (b), (b') and (b''), potential sum conflicts shall then only be possible between adjacent vertices in $R_2$. We shall also require that throughout the whole construction: \begin{itemize} \item[(c)] $\omega(e)\in\{1,2,3\}$ for $e\in E(R_2)$,\\ $\omega(e)\in\{3,4\}$ for $e\in E(I,R_2)$,\\ $\omega(e)\in\{2,3,4\}$ for $e\in E(I,R_1)$. \end{itemize} The main concern of our weight-modifying algorithm shall be distinguishing adjacent vertices in $R_2$. Only in its final stage shall we adjust the sums in $R_1$ (still consistently with (a), (b), (b'), (b'') and (c)). Initially we assign the weight: \begin{itemize} \item[(i)] $\omega(e)=1$, if $e$ is the first forward edge of some vertex; \item[(ii)] $\omega(e)=2$, if $e$ is an edge of $G[R_2]$ which is not the first forward edge of any vertex; \item[(iii)] $\omega(e)=3$, if $e$ is incident with a vertex in $I$ and is not a supporting edge; \item[(iv)] $\omega(e)=4$, if $e$ is a supporting edge. \end{itemize} Note that these weights are consistent with (c).
In the following main part of our modifying procedure we analyse and alter the sums at consecutive vertices in all the components of $G[R_2]$. Thus suppose we have already analysed all vertices in $G_1,\ldots,G_{i-1}$, and within $G_i$ -- the vertices $v_1,\ldots,v_{j-1}$ (following the rules $(1^\circ)$ -- $(3^\circ)$ specified below), hence we are about to consider the vertex $v_j$ (consistently with the vertex ordering fixed in $G_i$). While analysing this vertex: \begin{itemize} \item[$(1^\circ)$] we are not allowed to modify the sums at already analysed vertices (which are fixed and shall not change till the end of the construction). \end{itemize} On the other hand we wish to make some weight alterations so that: \begin{itemize} \item[$(2^\circ)$] the obtained sum at $v_j$ is distinct from the sums at all the already analysed neighbours of $v_j$ in $G[R_2]$ (i.e. those in $\{v_1,\ldots,v_{j-1}\}$); \end{itemize} while for this aim: \begin{itemize} \item[$(3^\circ)$] we are allowed to modify by $1$ the weights of the edges joining $v_j$ with its backward neighbours in $G_i$ and the weights of their supporting edges so that (c) still holds. \end{itemize}
Before we show that we can indeed perform our modifying procedure in accordance with $(1^\circ)$ -- $(3^\circ)$, let us observe the following.
\begin{observation}\label{a_b_Observation} After analysing all vertices of $R_2$ consistently with requirements $(1^\circ)$ -- $(3^\circ)$, the conditions (a), (b) and (b') shall hold. \end{observation}
\begin{pf} By (iii) and (iv) all edges incident with a vertex $v\in I$ are initially weighted $3$ or $4$, while by $(3^\circ)$ the weight of an edge $e$ incident with $v$ can only be altered if $e$ is a supporting edge -- by $(3^\circ)$ and (c) we however must still have $\omega(e)\in\{3,4\}$ afterwards, thus (b) follows. If $v$ has moreover a neighbour $u\in R_1$, then by (iii) we must have $\omega(uv)=3$, and this weight is not modified within our procedure (cf. $(3^\circ)$), and thus (b') is fulfilled as well.
To see that (a) must also hold, note first that each edge $e$ of $G[R_2]$ can be modified at most once (consistently with $(3^\circ)$) within the algorithm, when it joins the currently analysed vertex with its backward neighbour. Therefore, for every vertex $v\in R_2$ which is not the last vertex of some component of $G[R_2]$, immediately after analysing $v$, the first forward edge of $v$ still has unchanged weight $1$ (cf. (i)). By (i) -- (iv) and $(3^\circ)$, all its remaining incident edges have in turn weights at most $3$, except for $e_v$, which has weight $4$. Therefore, $\sigma(v)\leq 3d-1$, and by $(1^\circ)$ this does not change till the end of the construction. In order to prove the same holds also in the case when $v\in R_2$ is the last vertex of some component of $G[R_2]$, it is sufficient to note that then, by our construction: $\omega(e)\leq 3$ for every edge incident with $v$, as only supporting edges can be at this point weighted $4$. Thus (a) follows, as by (i) and $(3^\circ)$ the edge joining $v$ with the vertex $u$ directly preceding it in the corresponding ordering cannot have weight greater than $2$ (as according to the main feature of the previously fixed orderings, this has to be the first forward edge of $u$). \qed \end{pf}
Now we explain how we can perform every consecutive step of our modifying procedure, associated with a currently analysed vertex $v_j$ from component $G_i$, so that $(1^\circ)$ -- $(3^\circ)$ hold (provided that the previous steps were consistent with these rules). For this aim note first that while analysing $v_j$, the weight of every \emph{backward edge} of $v_j$ (i.e. an edge joining it with its backward neighbour in $G_i$) \textbf{can} be modified by $1$ if necessary. Indeed, suppose $e=v_kv_j$ is such an edge (i.e. $k<j$). If $e$ is not the first forward edge of $v_k$, then by (ii), $\omega(e)=2$ and by (c), $\omega(e_{v_k})\in\{3,4\}$. Thus, so that $(1^\circ)$ is obeyed, according to $(3^\circ)$, if $\omega(e_{v_k})=3$, we may change the weights of $e$ and $e_{v_k}$ to $1$ and $4$, resp., while if $\omega(e_{v_k})=4$, we may change the weights of $e$ and $e_{v_k}$ to $3$ and $3$, respectively. On the other hand, if $e$ is the first forward edge of $v_k$, then neither $\omega(e)$ nor $\omega(e_{v_k})$ have been modified thus far, hence we may modify their respective current values $1$ and $4$ to $2$ and $3$ respectively. Suppose now that $v_j$ has $b$ backward neighbours, hence also $b$ backward edges; then, as each of these provides one more possible alteration of the sum at $v_j$, we altogether have $b+1$ available options for this sum (which do not influence the sums at the backward neighbours of $v_j$). Thus we may choose an admissible alteration that results in $\sigma(v_j)$ distinct from the sums fixed for all $b$ backward neighbours of $v_j$ in $R_2$, i.e. consistent with $(2^\circ)$.
After analysing in this manner all vertices in $R_2$ we obtain a weighting of $G$ for which, by $(1^\circ)$, $(2^\circ)$ and Observation~\ref{a_b_Observation} (which guarantees (a) and (b)), $\sigma(u)\neq \sigma(v)$ for every $uv\in E(R_2\cup I)$. Now we modify the sums in $R_1$ so that (b'') holds. Recall that by definition, each vertex $v\in R_1$ is only adjacent with vertices in $I$, and thus all edges incident with such $v$ are weighted $3$ by (iii) and (c). One after another, for every $v\in R_1$ we proceed as follows. If $\sigma(u')\geq 3d+1$ for any neighbour $u'$ of $v$ ($u'\in I$), then we change the weight of exactly one edge, namely $u'v$, from $3$ to $2$. Otherwise, i.e. when due to (b) we have $\sigma(u)=3d$ for every neighbour $u$ of $v$ ($u\in I$), we change the weight of $uv$ from $3$ to $4$ for all $u\in N(v)\subseteq I$. Note that in both cases none of (a), (b) and (b') shall be violated, while we shall attain:
$\sigma(v)\in\{3d-1,4d\}$. After processing in this manner consecutively all vertices in $R_1$, all neighbours in $G$ shall finally be sum-distinguished, as vertices in $R_1$ are only adjacent with those in $I$, cf. (b), (b') and (b''). \qed
\section{Proof of Theorem~\ref{123largeregTh}\label{SectionProofLargeD}}
\subsection{Tools}
The proof of Theorem~\ref{123largeregTh} relies heavily on a random distribution of the vertices and edges of a given graph into subsets with carefully predefined proportions. For this aim we shall however also make use of Corollary~\ref{QuarterDecompositionLemma} below, implied by the following straightforward deterministic observation from~\cite{PrzybyloStandard22}, and possibly many other sources. \begin{observation} \label{EvenDecomposition} Every graph $G=(V,E)$ can be edge-decomposed into two subgraphs $G_1, G_2$ so that for each $v\in V$ and $i\in\{1,2\}$: \begin{equation}\label{EQ_EulerianDecomposition} d_{G_i}(v)\in \left[\frac{d_G(v)}{2}-1,\frac{d_G(v)}{2}+1\right]. \end{equation} \end{observation}
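One standard way to realize Observation~\ref{EvenDecomposition} algorithmically -- a sketch of the usual Eulerian argument, not necessarily the construction used in~\cite{PrzybyloStandard22} -- is to join all odd-degree vertices to an auxiliary vertex, walk an Eulerian circuit of each component, and place the circuit edges alternately into the two parts; discarding the auxiliary edges perturbs each degree by at most one.

```python
from collections import defaultdict

def even_decomposition(edges):
    """Split an edge list into (E1, E2) with d_{G_i}(v) within
    d_G(v)/2 +/- 1, via alternation along Eulerian circuits."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    AUX = "__aux__"  # sentinel; assumes no real vertex uses this label
    all_edges = list(edges) + [(AUX, v) for v in deg if deg[v] % 2 == 1]
    adj = defaultdict(list)
    for i, (u, v) in enumerate(all_edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    used = [False] * len(all_edges)
    E1, E2 = [], []
    # start at AUX first so any circuit-closing imbalance lands on AUX
    starts = ([AUX] if AUX in adj else []) + sorted(v for v in adj if v != AUX)
    for start in starts:
        if not any(not used[i] for _, i in adj[start]):
            continue
        # Hierholzer's algorithm; circuit collects the Euler tour edge ids
        stack, circuit = [(start, None)], []
        while stack:
            v, ein = stack[-1]
            while adj[v] and used[adj[v][-1][1]]:
                adj[v].pop()
            if adj[v]:
                w, i = adj[v].pop()
                used[i] = True
                stack.append((w, i))
            else:
                stack.pop()
                if ein is not None:
                    circuit.append(ein)
        # each visit of a vertex uses two consecutive circuit edges,
        # so alternating by position balances its degree between parts
        for pos, i in enumerate(circuit):
            u, v = all_edges[i]
            if AUX in (u, v):
                continue  # auxiliary edges are discarded
            (E1 if pos % 2 == 0 else E2).append((u, v))
    return E1, E2

K5 = [(u, v) for u in range(5) for v in range(u + 1, 5)]  # 4-regular
E1, E2 = even_decomposition(K5)
deg1, deg2 = defaultdict(int), defaultdict(int)
for u, v in E1:
    deg1[u] += 1; deg1[v] += 1
for u, v in E2:
    deg2[u] += 1; deg2[v] += 1
assert len(E1) + len(E2) == len(K5)
assert all(1 <= deg1[v] <= 3 and 1 <= deg2[v] <= 3 for v in range(5))
print("K5 split into two parts with all degrees in [1, 3]")
```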
\begin{corollary}\label{QuarterDecompositionLemma} Every graph $G_1=(V_1,E_1)$ has a subgraph $G'_1$ such that for each $v\in V_1$: \begin{equation}\label{dG'1} d_{G'_1}(v)\in\left[\frac{9}{16}d_{G_1}(v)-3,\frac{9}{16}d_{G_1}(v)+3\right]. \end{equation} \end{corollary}
\begin{pf} Such a subgraph can be constructed via four successive applications of Observation~\ref{EvenDecomposition}: first to $G_1$ to obtain say $G^{(1)}_2$ and $G^{(2)}_2$, then to $G^{(2)}_2$ to obtain $G^{(1)}_3$ and $G^{(2)}_3$, next to $G^{(2)}_3$ to obtain $G^{(1)}_4$ and $G^{(2)}_4$, and finally to $G^{(2)}_4$ to get $G^{(1)}_5$ and $G^{(2)}_5$. It is then straightforward to verify that~(\ref{EQ_EulerianDecomposition}) implies that $G'_1:=G^{(1)}_2\cup G^{(2)}_5$ complies with our requirements. \qed \end{pf} For random arguments we shall mostly use the symmetric variant of the Lov\'asz Local Lemma, see e.g.~\cite{AlonSpencer}, and the Chernoff Bound, see e.g.~\cite{JansonLuczakRucinski}. \begin{theorem}[\textbf{The Local Lemma}] \label{LLL-symmetric} Let $A_1,A_2,\ldots,A_n$ be events in an arbitrary pro\-ba\-bi\-li\-ty space. Suppose that each event $A_i$ is mutually independent of a set of all the other events $A_j$ but at most $D$, and that $\mathbf{Pr}(A_i)\leq p$ for all $1\leq i \leq n$. If $$ p \leq \frac{1}{e(D+1)},$$ then $ \mathbf{Pr}\left(\bigcap_{i=1}^n\overline{A_i}\right)>0$. \end{theorem} \begin{theorem}[\textbf{Chernoff Bound}]\label{ChernofBoundTh} For any $0\leq t\leq np$,
$$\mathbf{Pr}\left(\left|{\rm BIN}(n,p)-np\right|>t\right)<2e^{-\frac{t^2}{3np}}$$ where ${\rm BIN}(n,p)$ is the sum of $n$ independent Bernoulli variables, each equal to $1$ with probability $p$ and $0$ otherwise. \end{theorem}
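As a quick sanity check of the stated bound (our illustration, with arbitrary values of $n$, $p$ and $t$), one can compare the exact binomial tail with the right-hand side:

```python
import math

def binom_tail(n, p, t):
    # exact P(|BIN(n, p) - np| > t) computed from the binomial pmf
    lo, hi = n * p - t, n * p + t
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n + 1) if k < lo or k > hi)

n, p = 1000, 0.05  # so np = 50; any 0 <= t <= np is admissible
for t in (10, 20, 30):
    exact = binom_tail(n, p, t)
    bound = 2 * math.exp(-t * t / (3 * n * p))
    assert exact < bound
    print(t, exact, bound)
```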
Finally, the following technical observation shall be useful repeatedly throughout the proof of Theorem~\ref{123largeregTh} while applying the local lemma.
\begin{observation} For every $x\geq 10^8$, \begin{eqnarray} &&2e^{-\frac{x}{2.45\cdot 10^6}} <\frac{1}{2ex^2}; \label{TechIneq1}\\ &&2e^{-\frac{x}{4.9\cdot 10^6}} < \frac{1}{ex}. \label{TechIneq2} \end{eqnarray} \end{observation}
\begin{pf} Note first that (\ref{TechIneq1}) is directly implied by inequality (\ref{TechIneq2}) -- it is sufficient to square both sides of (\ref{TechIneq2}). To prove inequality~(\ref{TechIneq2}) it is in turn sufficient to show that $f(x)>0$ for $x\geq 10^8$, where $$f(x):=\frac{x}{4.9\cdot 10^6}-\ln(2ex).$$ This holds since $f'(x) = \frac{1}{4.9\cdot 10^6}-\frac{1}{x}>0$ for $x > 4.9\cdot 10^6$ and $f(10^8) = \frac{100}{4.9}-\ln\left(2e10^8\right) = \frac{20\cdot4.9+2}{4.9}-\ln2-1-8\ln10 > 20.4-0.7-1-8\cdot 2.31> 0$. \qed \end{pf}
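The two inequalities can also be confirmed directly in floating point at $x=10^8$ and beyond (a trivial check, included only for completeness):

```python
import math

def lhs1(x): return 2 * math.exp(-x / 2.45e6)
def rhs1(x): return 1 / (2 * math.e * x * x)
def lhs2(x): return 2 * math.exp(-x / 4.9e6)
def rhs2(x): return 1 / (math.e * x)

for x in (1e8, 1e9, 1e10):
    assert lhs1(x) < rhs1(x) and lhs2(x) < rhs2(x)
print("both technical inequalities hold at the sampled points")
```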
\subsection{Random vertex and edge partitions}
Let $G=(V,E)$ be a $d$-regular graph with $d\geq 10^8$. \begin{claim}\label{ClaimV0} We can choose a subset $V_0\subseteq V$ such that for every $v\in V$: \begin{equation}\label{dV0}
\left|d_{V_0}(v)-0.05d\right|\leq 3\cdot10^{-4}d. \end{equation} \end{claim}
\begin{pf} Randomly and independently we place every vertex from $V$ in $V_0$ with probability $0.05$. Denote by $A_{1,v}$ the event that (\ref{dV0}) does not hold for a given $v\in V$. By the Chernoff Bound, i.e. Theorem~\ref{ChernofBoundTh}, and inequality~(\ref{TechIneq1}): $$\mathbf{Pr}\left(A_{1,v}\right) < 2e^{-\frac{(3\cdot10^{-4}d)^2}{3\cdot0.05d}} = 2e^{-\frac{d}{\frac53\cdot 10^6}} <\frac{1}{ed^2}.$$ As every event $A_{1,v}$ is mutually independent of all other events $A_{1,u}$ except those where $u$ shares a common neighbour with $v$, i.e. of all except at most $d(d-1)<d^2-1$ events, by the Lov\'asz Local Lemma, i.e. Theorem~\ref{LLL-symmetric}, with positive probability none of the events $A_{1,v}$ holds, and thus $V_0$ as desired must exist. \qed \end{pf}
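The constants used in this proof can be verified mechanically; the following Python check (our addition, at the threshold $d=10^8$) confirms the simplification of the Chernoff exponent and the condition of the Local Lemma:

```python
import math

d = 1e8
t, mean = 3e-4 * d, 0.05 * d            # deviation t and mean np in the Chernoff Bound
assert abs(t**2 / (3 * mean) - d / ((5 / 3) * 1e6)) < 1e-6  # exponent simplification
p = 2 * math.exp(-t**2 / (3 * mean))    # upper bound on Pr(A_{1,v})
assert p < 1 / (math.e * d**2)
D = d * (d - 1)                         # each A_{1,v} depends on at most d(d-1) others
assert p <= 1 / (math.e * (D + 1))      # condition of the Local Lemma
```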
Fix any $V_0$ consistent with Claim~\ref{ClaimV0}, and denote: $$V_1:=V\smallsetminus V_0,~~~~ G_1:=G[V_1] {\rm~~~~and~~~~} G_0:=G[V_0],$$ hence by~(\ref{dV0}), for every $v\in V$: \begin{equation}\label{dV1}
\left|d_{V_1}(v) - 0.95d\right|\leq 3\cdot10^{-4}d. \end{equation}
We shall first fix sums for all vertices in $V_1$ (containing the great majority of all the vertices), keeping these relatively small and using the weights of some of the edges between $V_1$ and $V_0$ for the necessary adjustments. By Corollary~\ref{QuarterDecompositionLemma} there is a subgraph $G'_1$ of $G_1$ such that~(\ref{dG'1}) holds for every $v\in V_1$. By~(\ref{dG'1}) and~(\ref{dV1}) we thus obtain that for every $v\in V_1$: \begin{equation}\label{dG'1_2} 0.5d < \frac{9}{16}(0.95-0.0003)d-3\leq d_{G'_1}(v) \leq \frac{9}{16}(0.95+0.0003)d+3 < 0.54 d. \end{equation}
Let
$$c_1:V_1\to\{1,2,\ldots,10^4\}$$ be an auxiliary assignment of integers to the vertices of $G_1$. Denote: \begin{eqnarray} E': &=& \left\{uv\in E(G'_1): c_1(u)+c_1(v)\geq 10^4+2\right\}, \label{DefinitionOfE'}\\ E'': &=& \left\{uv\in E(G'_1): c_1(u)+c_1(v)\leq 10^4+1\right\} \label{DefinitionOfE''} \end{eqnarray} (the edges in $E'$ shall be the only ones in $G_1$ assigned weight $3$, while the remaining ones shall be weighted $1$ -- this, combined with Claim~\ref{ClaimV1iE'} below, shall assure a convenient distribution of sums in the neighbourhoods of all vertices in $V_1$), and for each $i=1,2,\ldots,10^4$: $$V_{1,i}: = \left\{v\in V_1: c_1(v)=i\right\}.$$
\begin{claim}\label{ClaimV1iE'} We may choose $c_1$ so that for every $i\in \{1,2,\ldots,10^4\}$ and each $v\in V_{1,i}$: \begin{eqnarray}
&& \left| d_{V_{1,i}}(v) - 10^{-4}d_{V_1}(v) \right| \leq 11\cdot 10^{-6}d; \label{dV1i}\\
&& \left| d_{E'}(v) - (i-1)10^{-4}d_{G'_1}(v) \right| \leq 6\cdot 10^{-4}d. \label{dE'} \end{eqnarray} \end{claim}
\begin{pf} We choose $c_1:V_1\to\{1,2,\ldots,10^4\}$ randomly by independently assigning every vertex $v\in V_1$ its value $c_1(v)$ from the set $\{1,2,\ldots,10^4\}$, each with equal probability. For every $v\in V_1$ we denote by $A_{2,v}$ and $A_{3,v}$ the events that (\ref{dV1i}) and (\ref{dE'}), respectively, do not hold. Let $v\in V_1$. As by~(\ref{dV1}) we have $11\cdot 10^{-6}d\leq 10^{-4}d_{V_1}(v)$, by the Chernoff Bound, (\ref{dV1}) and (\ref{TechIneq1}) we obtain that: \begin{eqnarray} \mathbf{Pr}\left(A_{2,v}\right)<2e^{-\frac{(11\cdot 10^{-6}d)^2}{3\cdot 10^{-4}d_{V_1}(v)}} \leq 2e^{-\frac{121\cdot 10^{-12}d^2}{3\cdot 10^{-4}(0.95+3\cdot10^{-4})d}} = 2e^{-\frac{d}{\frac{285.09}{121}\cdot 10^{6}}} < \frac{1}{2ed^2}. \label{PrA2v} \end{eqnarray}
Note further that for every $i\geq 2$, $i\leq 10^4$, by~(\ref{DefinitionOfE'}) and~(\ref{DefinitionOfE''}): \begin{eqnarray}
&&\mathbf{Pr}\left(A_{3,v}~|~c_1(v)=i\right) \nonumber\\
&=& \mathbf{Pr}\left(\left|{\rm BIN}\left(d_{G'_1}(v),(i-1)10^{-4}\right) - (i-1)10^{-4}d_{G'_1}(v)\right|>6\cdot 10^{-4}d\right) \nonumber\\
&=&\mathbf{Pr}\left(\left|d_{E'}(v)-(i-1)10^{-4}d_{G'_1}(v)\right| > 6\cdot 10^{-4}d ~|~ c_1(v)=i\right) \nonumber\\
&=&\mathbf{Pr}\left(\left|d_{G'_1}(v)-d_{E''}(v)-(i-1)10^{-4}d_{G'_1}(v)\right| > 6\cdot 10^{-4}d ~|~ c_1(v)=i\right) \nonumber\\
&=& \mathbf{Pr}\left(\left|(10^4-i+1)10^{-4}d_{G'_1}(v)-d_{E''}(v)\right| > 6\cdot 10^{-4}d ~|~ c_1(v)=i\right) \nonumber\\
&=& \mathbf{Pr}\left(\left|{\rm BIN}\left(d_{G'_1}(v),(10^4-i+1)10^{-4}\right) -
(10^4-i+1)10^{-4}d_{G'_1}(v)\right|>6\cdot 10^{-4}d\right) \nonumber\\
&=& \mathbf{Pr}\left(A_{3,v}~|~c_1(v)=10^4-i+2\right). \label{A3vSymmetry} \end{eqnarray} Now for every fixed $13\leq i\leq 0.5\cdot 10^4+1$ (as then by~(\ref{dG'1_2}), $(i-1)10^{-4}d_{G'_1}(v) > 12\cdot 10^{-4} \cdot 0.5d = 6\cdot 10^{-4}d$), by the Chernoff Bound, (\ref{dG'1_2}) and (\ref{TechIneq1}) we obtain: \begin{equation}
\mathbf{Pr}\left(A_{3,v}~|~c_1(v)=i\right) < 2e^{-\frac{(6\cdot 10^{-4}d)^2}{3\cdot (i-1)10^{-4}d_{G'_1}(v)}} < 2e^{-\frac{(6\cdot 10^{-4}d)^2}{3\cdot 0.5 \cdot 0.54d}} = 2e^{-\frac{d}{2.25 \cdot 10^6}} < \frac{1}{2ed^2}. \label{A3vMostCases} \end{equation} For $i=1$, by the definition of $E'$ and $c_1$, we trivially have: \begin{equation}\label{A3vCase0}
\mathbf{Pr}\left(A_{3,v}~|~c_1(v)=1\right) = 0. \end{equation} For $i\in\{2,3,\ldots,12\}$ in turn (as then by~(\ref{dG'1_2}), $(i-1)10^{-4}d_{G'_1}(v) > 0.5\cdot 10^{-4}d$), by the Chernoff Bound and (\ref{TechIneq1}): \begin{eqnarray}
\mathbf{Pr}\left(A_{3,v}~|~c_1(v)=i\right)
&\leq& \mathbf{Pr}\left(\left|d_{E'}(v)-(i-1)10^{-4}d_{G'_1}(v)\right| > 0.5\cdot 10^{-4}d ~|~ c_1(v)=i\right) \nonumber\\ & <& 2e^{-\frac{( 0.5\cdot 10^{-4}d)^2}{3\cdot (i-1)10^{-4}d_{G'_1}(v)}} < 2e^{-\frac{( 0.5\cdot 10^{-4}d)^2}{3\cdot 11\cdot 10^{-4}d}} = 2e^{-\frac{d}{\frac{33}{25}\cdot 10^{6}}} < \frac{1}{2ed^2}. \label{A3vSmallCases}
\end{eqnarray} By (\ref{A3vMostCases}), (\ref{A3vSmallCases}), (\ref{A3vSymmetry}), (\ref{A3vCase0}) and the law of total probability, \begin{equation}\label{PrA3v} \mathbf{Pr}\left(A_{3,v}\right) < \frac{1}{2ed^2}. \end{equation} Let $\Delta_1$ be the maximum degree of $G_1$. As every event $A_{2,v}$ and every event $A_{3,v}$ is mutually independent of all other events $A_{2,u}$ and $A_{3,u}$ except possibly those where $u$ is at distance at most $2$ from $v$ in $G_1$, i.e. all except at most $2\Delta_1^2+1<2d^2-1$, by~(\ref{PrA2v}), (\ref{PrA3v}) and the Lov\'asz Local Lemma, with positive probability none of the events $A_{2,v}$, $A_{3,v}$ holds, and thus $c_1$ as desired must exist. \qed \end{pf}
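Since (\ref{TechIneq1}) is applied three times above, it is worth confirming that each exponent denominator indeed stays below $2.45\cdot 10^6$; the short check below is our own addition:

```python
# exponent denominators (the constants C in 2e^{-d/C}) used in the proof of Claim 2
assert 3e-4 * 0.9503 / 121e-12 <= 2.45e6        # (PrA2v): (285.09/121) * 10^6
assert 3 * 0.5 * 0.54 / (6e-4) ** 2 <= 2.45e6   # (A3vMostCases): 2.25 * 10^6
assert 3 * 11e-4 / (0.5e-4) ** 2 <= 2.45e6      # (A3vSmallCases): (33/25) * 10^6
# the case split at i = 13 relies on (i-1)*10^-4 * 0.5d exceeding 6*10^-4 d there
assert abs(12e-4 * 0.5 - 6e-4) < 1e-12
```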
\begin{claim}\label{ClaimE1inV0andV1} We may choose a set of edges $E_1\subseteq E(V_1,V_0)$ such that: \begin{eqnarray}
&& \left| d_{E_1}(u) - 0.08d_{V_0}(u) \right| \leq 5\cdot10^{-5}d {\rm ~~~~for~~~~} u\in V_1; \label{dE1inV1}\\
&& \left| d_{E_1}(v) - 0.08d_{V_1}(v) \right| \leq 10^{-3}d {\rm ~~~~for~~~~} v\in V_0. \label{dE1inV0} \end{eqnarray} \end{claim}
\begin{pf} Randomly and independently we place every edge from $E(V_0,V_1)$ in $E_1$ with probability $0.08$. For every $u\in V_1$ and $v\in V_0$, denote by $A_{4,u}$ and $A_{5,v}$ the events that (\ref{dE1inV1}) and (\ref{dE1inV0}), respectively, do not hold. Then by the Chernoff Bound, (\ref{dV0}), (\ref{dV1}) and (\ref{TechIneq2}) we obtain that for every $u\in V_1$: \begin{equation}\label{PrA4v} \mathbf{Pr}\left(A_{4,u}\right) < 2e^{-\frac{(5\cdot 10^{-5}d)^2}{3\cdot 0.08d_{V_0}(u)}} \leq 2e^{-\frac{(5\cdot 10^{-5}d)^2}{3\cdot0.08\cdot0.0503d}} = 2e^{-\frac{d}{4.8288 \cdot 10^{6}}} < \frac{1}{ed}, \end{equation} while for each $v\in V_0$: \begin{equation}\label{PrA5v} \mathbf{Pr}\left(A_{5,v}\right) < 2e^{-\frac{(10^{-3}d)^2}{3\cdot0.08d_{V_1}(v)}} \leq
2e^{-\frac{(10^{-3}d)^2}{3\cdot0.08\cdot0.9503d}} =
2e^{-\frac{d}{24\cdot 9503}} < \frac{1}{ed}. \end{equation} As every event $A_{4,u}$ and every event $A_{5,v}$ is mutually independent of all other such events except possibly those corresponding to vertices at distance at most $1$ from it in the graph induced by the edges of $E(V_0,V_1)$, i.e. (by (\ref{dV0}) and (\ref{dV1})) of all except at most $\max_{v\in V} d_{E(V_0,V_1)}(v) < d-1$ events, by~(\ref{PrA4v}), (\ref{PrA5v}) and the Lov\'asz Local Lemma, with positive probability none of the events $A_{4,u}$, $A_{5,v}$ holds, and thus $E_1$ as desired must exist. \qed \end{pf}
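The concentration phenomenon exploited in Claims~\ref{ClaimV0}--\ref{ClaimE1inV0andV1} can be observed directly by simulation, albeit at degrees far below the regime $d\geq 10^8$ of the theorem; the following small Monte Carlo experiment (our illustration) mimics the random choice of $E_1$ at a single vertex of degree $5000$:

```python
import random

random.seed(1)
deg, p, trials = 5000, 0.08, 2000
# d_{E_1}(u): how many of the deg incident edges are kept, each with probability 0.08
devs = [abs(sum(random.random() < p for _ in range(deg)) - p * deg)
        for _ in range(trials)]
# Chernoff-type concentration: deviations are of order sqrt(deg), far below deg itself
assert max(devs) < 5 * (3 * p * deg) ** 0.5
```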
Carefully designed weight adjustments of the edges in $E_1$ shall be used to supplement the roughly even distribution of sums assured by Claim~\ref{ClaimV1iE'}, and consequently to provide sum distinction of the neighbours in $V_1$. Out of the remaining edges in $E(V_0,V_1)$ we shall next choose a set $E_0$ of edges with special features, which shall partition $V_0$ into $5$ subsets (with decreasing sums, all however larger than the ones in $V_1$), and shall facilitate the use of a modification of Kalkowski's algorithm in $G_0$ to distinguish the remaining neighbours in $G$. Denote: \begin{equation}\label{E*definition} E^*:= E(V_0,V_1)\smallsetminus E_1. \end{equation} Consider an assignment:
$$c_0:V_0\to\{0,1,2,3,4\}$$ and the following partition of $V_0$ induced by it: $$V_{0,j}:=\left\{v\in V_0: c_0(v)=j\right\}, ~~~~j=0,1,2,3,4.$$
\begin{claim}\label{ClaimE0inV0andV1anddV0i} We may choose $E_0\subseteq E^*$ and $c_0:V_0 \to \{0,1,2,3,4\}$ such that: \begin{eqnarray}
&& \left| d_{E_0}(v) - 0.2c_0(v)d_{E^*}(v) \right| \leq 10^{-3}d {\rm ~~~~for~~~~} v\in V_0; \label{dE0inV0}\\
&& \left| d_{V_{0,c_0(v)}}(v)- 0.2d_{V_0}(v)\right| \leq 10^{-3}d {\rm ~~~~for~~~~} v\in V_0; \label{dV0i}\\
&& \left| d_{E_0}(u) - 0.4d_{E^*}(u) \right| \leq 2\cdot10^{-4}d {\rm ~~~~for~~~~} u\in V_1. \label{dE0inV1} \end{eqnarray} \end{claim}
\begin{pf} First for every $v\in V_0$ choose randomly and independently an integer in $\{0,1,2,3,4\}$, each with equal probability, and denote it by $c_0(v)$. Then include every edge $uv\in E^*$ with $v\in V_0$ in $E_0$ randomly and independently with probability $0.2c_0(v)$. For every $v\in V_0$ and $u\in V_1$, denote by $A_{6,v}$, $A_{7,v}$ and $A_{8,u}$ the events that (\ref{dE0inV0}), (\ref{dV0i}) and (\ref{dE0inV1}), respectively, do not hold. Let $v\in V_0$. First note that by the Chernoff Bound, (\ref{dE1inV0}), (\ref{dV1}) and (\ref{TechIneq1}), for every fixed $j\in\{1,2,3,4\}$: \begin{eqnarray}
\mathbf{Pr}\left(A_{6,v} ~|~ c_0(v)=j\right) &<& 2e^{-\frac{(10^{-3}d)^2}{3\cdot 0.2j\cdot d_{E^*}(v)}} \leq 2e^{-\frac{(10^{-3}d)^2}{0.6 j (0.92d_{V_1}(v)+ 10^{-3}d)}} \nonumber\\ &\leq& 2e^{-\frac{(10^{-3}d)^2}{2.4\cdot (0.92\cdot 0.9503d+ 10^{-3}d)}} = 2e^{-\frac{d}{2100662.4}} < \frac{1}{2ed^2}. \label{PrA6v1234} \end{eqnarray} Moreover, if $c_0(v)=0$, then we trivially have $d_{E_0}(v)=0$, and hence: \begin{equation}
\mathbf{Pr}\left(A_{6,v} ~|~ c_0(v)=0\right) = 0. \label{PrA6v0} \end{equation} By (\ref{PrA6v1234}), (\ref{PrA6v0}) and the law of total probability we thus obtain that: \begin{equation} \mathbf{Pr}\left(A_{6,v}\right) < \frac{1}{2ed^2}. \label{PrA6v} \end{equation}
Further, by the Chernoff Bound, (\ref{dV0}) and (\ref{TechIneq1}), \begin{eqnarray} \mathbf{Pr}\left(A_{7,v} \right) &<& 2e^{-\frac{(10^{-3}d)^2}{3\cdot 0.2d_{V_0}(v)}} \leq 2e^{-\frac{(10^{-3}d)^2}{ 0.6\cdot 0.0503 d}} = 2e^{-\frac{ d}{30180}} < \frac{1}{2ed^2}. \label{PrA7v} \end{eqnarray}
Let now $u\in V_1$. Note that as our choices are independent, every edge from $E^*$ which is incident with $u$ is in fact independently chosen to $E_0$ with probability $$\frac{1}{5}\cdot(0+0.2+0.4+0.6+0.8)=0.4,$$ and hence, by the Chernoff Bound, (\ref{dE1inV1}), (\ref{dV0}) and (\ref{TechIneq1}), \begin{eqnarray} \mathbf{Pr}\left(A_{8,u}\right) &<& 2e^{-\frac{(2\cdot10^{-4}d)^2}{3\cdot 0.4d_{E^*}(u)}} \leq 2e^{-\frac{(2\cdot10^{-4}d)^2}{1.2\cdot (0.92d_{V_0}(u)+5\cdot 10^{-5}d)}} \leq 2e^{-\frac{(2\cdot10^{-4}d)^2}{1.2\cdot (0.92\cdot 0.0503d+5\cdot 10^{-5}d)}} \nonumber\\ & = &2e^{-\frac{d}{1389780}} < \frac{1}{2ed^2}. \label{PrA8v} \end{eqnarray}
It is easy to notice that each of the events $A_{6,v}$, $A_{7,v}$, $A_{8,u}$ is mutually independent of all but (much) fewer than $2d^2$ other events of these types, and thus by (\ref{PrA6v}), (\ref{PrA7v}), (\ref{PrA8v}) and the Lov\'asz Local Lemma, with positive probability none of the events $A_{6,v}$, $A_{7,v}$, $A_{8,u}$ holds, and hence there must exist $c_0$ and $E_0$ as required. \qed \end{pf}
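The averaging step in the bound on $\mathbf{Pr}(A_{8,u})$, namely that the two-stage choice of $E_0$ keeps every edge of $E^*$ with effective probability $0.4$, can be illustrated by a quick simulation (ours):

```python
import random

random.seed(2)
n = 200_000
# an edge uv in E* with v in V_0 enters E_0 with probability 0.2*c_0(v),
# where c_0(v) is uniform on {0,1,2,3,4}
hits = sum(random.random() < 0.2 * random.randrange(5) for _ in range(n))
freq = hits / n
assert abs(freq - 0.4) < 0.01   # (0 + 0.2 + 0.4 + 0.6 + 0.8)/5 = 0.4
```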
\subsection{Initial weighting}
We define an initial edge 3-weighting $\omega_0$ of $G$ as follows: $$\omega_0(e):=\left\{ \begin{array}{lll} 1,&{\rm if} &e\in E(V_1)\smallsetminus E';\\ 2,&{\rm if}& e\in E_1\cup E_0\cup E(V_0);\\ 3,&{\rm if}& e\in E'\cup \left(E(V_0,V_1)\smallsetminus(E_0\cup E_1)\right). \end{array}\right.$$ We shall further modify only the weights of the edges in $E_1$ (increasing some of these to $3$ in order to adjust the sums in $V_1$) and of the edges in $E(V_0)$ (possibly changing some of them by $1$ in order to distinguish sums within $V_0$ in the final part of our construction).
Note that at this point for every $v\in V_1$: \begin{eqnarray} \sigma(v) &=& 3\cdot d_{E'}(v)+1\cdot \left(d_{V_1}(v)-d_{E'}(v)\right)+2\cdot \left(d_{E_0}(v)+d_{E_1}(v)\right)+3\cdot \left(d_{V_0}(v)-d_{E_0}(v)-d_{E_1}(v)\right)\nonumber\\ &=& d+2d_{E'}(v)+1\cdot \left(d_{E_0}(v)+d_{E_1}(v)\right)+2\cdot \left(d_{V_0}(v)-d_{E_0}(v)-d_{E_1}(v)\right) \nonumber\\ &=& d+2d_{E'}(v)+2d_{V_0}(v)-d_{E_0}(v)-d_{E_1}(v), \nonumber \end{eqnarray} and hence, if $v\in V_{1,i}$ for some $i\in\{1,2,\ldots,10^4\}$, then by (\ref{dE'}), (\ref{dE0inV1}), (\ref{E*definition}), (\ref{dG'1}), (\ref{dE1inV1}), (\ref{dV0}): \begin{eqnarray} &&\sigma(v) \in\nonumber\\ &&\Big[d+2\left((i-1)10^{-4}d_{G'_1}(v)-6\cdot 10^{-4}d\right)+2d_{V_0}(v) -\left(0.4\left(d_{V_0}(v)-d_{E_1}(v)\right)+2\cdot10^{-4}d\right)-d_{E_1}(v), \nonumber\\ &&d+2\left((i-1)10^{-4}d_{G'_1}(v)+6\cdot 10^{-4}d\right)+2d_{V_0}(v) -\left(0.4\left(d_{V_0}(v)-d_{E_1}(v)\right)-2\cdot10^{-4}d\right)-d_{E_1}(v)\Big]\nonumber\\ &\subseteq& \bigg[d+2(i-1)10^{-4}\left(\frac{9}{16}(d-d_{V_0}(v))-3\right)+1.6d_{V_0}(v)-0.6d_{E_1}(v) -0.0014d, \nonumber\\ &&d+2(i-1)10^{-4}\left(\frac{9}{16}(d-d_{V_0}(v))+3\right)+1.6d_{V_0}(v)-0.6d_{E_1}(v) +0.0014d\bigg] \nonumber\\ &\subseteq& \bigg[d+2(i-1)10^{-4}\left(\frac{9}{16}(d-d_{V_0}(v))-3\right)+1.6d_{V_0}(v) -0.6\left(0.08d_{V_0}(v)+5\cdot10^{-5}d\right) -0.0014d, \nonumber\\ &&d+2(i-1)10^{-4}\left(\frac{9}{16}(d-d_{V_0}(v))+3\right)+1.6d_{V_0}(v) -0.6\left(0.08d_{V_0}(v)-5\cdot10^{-5}d\right) +0.0014d\bigg] \nonumber\\ &=& \bigg[\left(1+\frac{9}{8}(i-1)10^{-4}\right)d+\left(1.552-\frac{9}{8}(i-1)10^{-4}\right)d_{V_0}(v) -0.00143d - 6(i-1)10^{-4}, \nonumber\\ &&\left(1+\frac{9}{8}(i-1)10^{-4}\right)d+\left(1.552-\frac{9}{8}(i-1)10^{-4}\right)d_{V_0}(v) + 0.00143d + 6(i-1)10^{-4}\bigg]\nonumber\\ &\subseteq& \bigg[\left(1+\frac{9}{8}(i-1)10^{-4}\right)d+\left(1.552-\frac{9}{8}(i-1)10^{-4}\right)\left(0.05d-0.0003d\right) -0.00143d - 6(i-1)10^{-4}, \nonumber\\ && 
\left(1+\frac{9}{8}(i-1)10^{-4}\right)d+\left(1.552-\frac{9}{8}(i-1)10^{-4}\right)\left(0.05d+0.0003d\right) + 0.00143d + 6(i-1)10^{-4}\bigg] \nonumber\\ &=& \big[\left(1.0776+1.06875(i-1)10^{-4}\right)d - 0.0018956d + 0.0003375(i-1)10^{-4}d - 6(i-1)10^{-4}, \nonumber \\ && \left(1.0776+1.06875(i-1)10^{-4}\right)d + 0.0018956d - 0.0003375(i-1)10^{-4}d + 6(i-1)10^{-4}\big] \nonumber\\ &\subseteq& \big[\left(1.0776+1.06875(i-1)10^{-4}\right)d - 0.0018956d, \left(1.0776+1.06875(i-1)10^{-4}\right)d + 0.0018956d\big]. \nonumber\\\label{FirstSv} \end{eqnarray}
Let $\Delta_2:=\max_{1\leq i \leq 10^4} \Delta(G[V_{1,i}])$. By~(\ref{dV1i}) and~(\ref{dV1}), \begin{eqnarray} \Delta_2 &\leq& 10^{-4}\cdot 0.9503d+11\cdot 10^{-6}d = 10^{-4}\cdot 1.0603 d. \label{Delta2} \end{eqnarray}
For every $i=1,2,\ldots,10^4$ we arbitrarily choose a proper vertex colouring of $G[V_{1,i}]$: \begin{equation}\label{DefinitionOfc1i} c_{1,i}: V_{1,i}\to\{0,1,\ldots,\Delta_2\}. \end{equation}
Now we modify weights of some of the edges in $E_1$ by adding $1$ to them (hence switching their weights from $2$ to $3$) so that for every $i\in \{1,2,\ldots,10^4\}$ and each $v\in V_{1,i}$: \begin{equation} \sigma(v)=\left\lfloor \left(1.0776+1.06875(i-1)10^{-4}\right)d + 0.0018956d\right\rfloor+c_{1,i}(v). \label{SecondSv} \end{equation} This is feasible by (\ref{FirstSv}), as by~(\ref{dE1inV1}), (\ref{dV0}) and~(\ref{Delta2}): \begin{eqnarray} d_{E_1}(v) &\geq& 0.08d_{V_0}(v) -5\cdot 10^{-5}d \geq 0.08 \cdot \left(0.05d - 3\cdot 10^{-4}d\right) -5\cdot 10^{-5}d \nonumber\\ &=& 0.003926 d> 0.00389723 d \geq 2\cdot 0.0018956d + \Delta_2. \nonumber \end{eqnarray} We denote the obtained weighting of the edges of $G$ by $\omega_1$. As a result, by (\ref{SecondSv}) and the definitions of $c_{1,i}$, neighbours are sum-distinguished within every $V_{1,i}$ in $G$, i.e. for each $i=1,2,\ldots,10^4$ and every edge $uv\in E(V_{1,i})$ we have $\sigma(u)\neq \sigma(v)$. In fact however, all neighbours in $V_1$ are at this point sum-distinguished, as no conflicts are possible between distinct sets $V_{1,i}$. To see this it is sufficient to observe that for each $1\leq i < 10^4$ and any $u\in V_{1,i}$ and $v\in V_{1,i+1}$, by (\ref{SecondSv}) and~(\ref{Delta2}) we now have: \begin{eqnarray} \sigma(u) &\leq& \left(1.0776+1.06875(i-1)10^{-4}\right)d + 0.0018956d + 10^{-4}\cdot 1.0603 d \nonumber\\ &<& \left(1.0776+1.06875(i-1)10^{-4}\right)d + 0.0018956d + 1.06875\cdot 10^{-4}d - 1 \nonumber\\ &<& \left\lfloor\left(1.0776+1.06875\cdot i\cdot 10^{-4}\right)d + 0.0018956d\right\rfloor\leq \sigma(v). \label{V1iSmallerV1i+1} \end{eqnarray}
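The feasibility of prescribing the sums (\ref{SecondSv}), as well as the separation between consecutive classes $V_{1,i}$, rests on the numerical comparisons above; the following check (our addition, at $d=10^8$) confirms them:

```python
d = 1e8
Delta2 = 1.0603e-4 * d                         # bound (Delta2) on the degrees within any V_{1,i}
dE1_min = 0.08 * (0.05 - 3e-4) * d - 5e-5 * d  # lower bound on d_{E_1}(v), by (dE1inV1) and (dV0)
assert abs(dE1_min - 0.003926 * d) < 1e-3
assert dE1_min > 2 * 0.0018956 * d + Delta2    # enough adjustable edges to realize (SecondSv)
assert 1.06875e-4 * d > Delta2 + 1             # sums in V_{1,i} stay below those in V_{1,i+1}
```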
\subsection{Setting sums in $V_0$}
We shall now modify the sums in $V_0$ by altering weights of some of the edges in $E(V_0)$ so that there are no sum conflicts within the sets $V_{0,i}$. First however we choose for every vertex $v\in V_0$ a set $E_v\subset E(V_0)$ of its \emph{personal} incident edges, whose weights' modifications shall settle the final sum at $v$ (up to an additive factor of $1$). These sets shall satisfy the following two features: \begin{eqnarray} E_u\cap E_v = \emptyset &{\rm ~~for~~}& u,v\in V_0, u\neq v; \label{emptyintersectionFeature}\\
|E_v| \geq 0.5d_{V_0}(v)-1 &{\rm ~~for~~}& v\in V_0. \label{halfFeature} \end{eqnarray} We define these for each component of $G_0$ separately. Suppose $H$ is any such component. We add a new vertex $u$ (if necessary) and join it by single edges to all vertices of odd degree in $H$, thus obtaining an Eulerian supergraph $F$ of $H$. Then we traverse the edges of $F$ along any of its Eulerian tours, starting at $u$ (or at any other vertex if $H$ was itself Eulerian), and temporarily direct these edges consistently with our direction of movement along the Eulerian tour. We then remove the vertex $u$ (if we previously had to add it) and for every vertex $v$ we define $E_v$ as the set of edges in $H$ outgoing from $v$. It is straightforward to verify that such sets $E_v$ meet our requirements~(\ref{emptyintersectionFeature}) and~(\ref{halfFeature}).
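The construction of the personal edge sets $E_v$ can be sketched algorithmically; the Python function below (our illustration, assuming a connected component given by its list of undirected edges) builds an Eulerian tour with Hierholzer's algorithm and orients the edges along it:

```python
from collections import defaultdict

def personal_edge_sets(edges):
    # For one connected component H of G_0 (a list of undirected edges):
    # join an auxiliary vertex to all odd-degree vertices, walk an Eulerian
    # tour (Hierholzer's algorithm), orient the edges along the tour, and
    # collect in E_v the edges of H oriented out of v.
    AUX = ('aux',)
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    odd = [v for v in list(adj) if len(adj[v]) % 2 == 1]
    for v in odd:
        adj[AUX].add(v)
        adj[v].add(AUX)
    start = AUX if odd else next(iter(adj))
    stack, tour = [start], []
    while stack:                       # Hierholzer: extend, backtrack, splice
        v = stack[-1]
        if adj[v]:
            w = adj[v].pop()
            adj[w].discard(v)
            stack.append(w)
        else:
            tour.append(stack.pop())
    E = defaultdict(set)
    for a, b in zip(tour, tour[1:]):   # consecutive tour vertices = oriented edges
        if a != AUX and b != AUX:      # discard the auxiliary edges again
            E[a].add((a, b))
    return E

# small example: a 4-cycle with one chord; vertices 1 and 3 have odd degree
E = personal_edge_sets([(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)])
oriented = [e for v in E for e in E[v]]
assert len({frozenset(e) for e in oriented}) == len(oriented) == 5  # disjoint, every edge assigned once
for v, dv in {1: 3, 2: 2, 3: 3, 4: 2}.items():
    assert len(E.get(v, set())) >= 0.5 * dv - 1                     # property (halfFeature)
```

Each edge of $H$ is traversed exactly once along the tour and assigned to the set of its tail, which yields the disjointness (\ref{emptyintersectionFeature}) and the size bound (\ref{halfFeature}).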
We now arbitrarily arrange the vertices of $V_0$ into a sequence $v_1,v_2,\ldots, v_k$ and analyse them one after another. Once we reach a given vertex $v$ we associate to it a set $$S_v\in \mathbb{S}:=\left\{\left\{2i,2i+1\right\}:i\in\mathbb{Z}\right\}$$ distinct from all sets $S_u$ already assigned to neighbours $u$ of $v$ from $V_{0,c_0(v)}$ (hence also disjoint from them, due to the definition of $\mathbb{S}$) and guarantee that from this moment on the sum at $v$ belongs to this set (note this shall guarantee that, at the end of our algorithm, neighbours within every $V_{0,j}$ are sum-distinguished for $j=0,1,2,3,4$). To achieve such a goal for the currently analysed vertex $v$, we admit, if necessary, one of the following two modifications of the weight of every edge $uv\in E_v$: \begin{itemize} \item increasing the weight of $uv$ to $3$ (from $2$) if $u$ was not yet analysed, i.e. $v$ precedes $u$ in the chosen ordering; \item otherwise, changing the weight of $uv$ to $1$ or $3$ so that as a result we still have $\sigma(u)\in S_u$. \end{itemize} Note that exactly one of the two options of changing the weight of $uv$ (either to $1$ or to $3$) from the second point above is always available. Note moreover that in order to achieve our goal we cannot set the sum at $v$ to be equal to a number from any two-element set $S_u$ already associated to a neighbour $u$ of $v$ from $V_{0,c_0(v)}$, i.e. we must avoid at most $2d_{V_{0,c_0(v)}}(v)$ integers, while the admitted modifications of weights of the edges in $E_v$ yield \begin{eqnarray}
|E_v|+1 \geq 0.5d_{V_0}(v) &=& 2\left(0.2 d_{V_0}(v) + 10^{-3}d\right) + \left(0.1 d_{V_0}(v) - 2\cdot 10^{-3}d\right) \nonumber\\ &\geq&2d_{V_{0,c_0(v)}}(v) +\left(0.1\cdot \left(0.05d-3\cdot 10^{-4}d\right) - 2\cdot 10^{-3}d\right) >
2d_{V_{0,c_0(v)}}(v) \nonumber \end{eqnarray} potential options for the sum at $v$ (cf. (\ref{halfFeature}), (\ref{dV0i}) and (\ref{dV0})). We then choose any of these options, say $s^*$, which does not belong to any (already fixed) $S_u$ with $u\in N_{V_{0,c_0(v)}}(v)$ and which requires modifications of at most $2d_{V_{0,c_0(v)}}(v)$ weights of edges incident with $v$. We then perform these at most $2d_{V_{0,c_0(v)}}(v)$ admitted weight modifications in $E_v$ so that $\sigma(v)=s^*$ (note the edges whose weights are modified in this step are the only ones incident with $v$ that may carry weight $1$) and choose as $S_v$ the only element of $\mathbb{S}$ containing $s^*$. After analysing all vertices in $V_0$ we obtain our final edge $3$-weighting $\omega_2$ of $G$, such that there are no sum conflicts between neighbours within any $V_{0,i}$, $i=0,1,2,3,4$. Moreover, as within the algorithm above, for every $v\in V_0$ we might have had at most $2d_{V_{0,c_0(v)}}(v)$ edges weighted $1$ immediately after fixing $S_v$ and assuring that the sum at $v$ belongs to this set, and this sum at $v$ could change only by $1$ throughout the rest of the algorithm (so that $\sigma(v)\in S_v$), hence at the end of its execution we still have for every $v\in V_{0,i}$: \begin{equation}\label{FirstSvEstimation} \sigma(v)\geq 1 \cdot 2d_{V_{0,i}}(v) + 2 \cdot\left(d_{V_0}(v)-2d_{V_{0,i}}(v)\right) -1 + \sum_{u\in N_{V_1}(v)}\omega_2(uv). \end{equation}
\subsection{Final calculations}
It remains to show that no sum conflicts are possible between neighbours from different sets $V_{0,i}$, nor between neighbours from $V_0$ and $V_1$. Note first that for every $i=0,1,2,3,4$ and each $v\in V_{0,i}$, by (\ref{FirstSvEstimation}), (\ref{dE0inV0}), (\ref{dV0i}), (\ref{dE1inV0}) and (\ref{dV0}): \begin{eqnarray} \sigma(v) &\geq& 2 d_{V_0}(v)-2d_{V_{0,i}}(v) -1+ 2 \left(d_{E_1}(v)+d_{E_0}(v)\right)+ 3\left(d_{V_1}(v) - d_{E_1}(v) - d_{E_0}(v)\right) \nonumber\\ &=& 2d+d_{V_1}(v) - d_{E_1}(v) - d_{E_0}(v) -2d_{V_{0,i}}(v) -1 \nonumber\\ &\geq& 2d +d_{V_1}(v) - d_{E_1}(v) - 0.2i\left(d_{V_1}(v) - d_{E_1}(v)\right) - 10^{-3}d - 2\left(0.2d_{V_0}(v)+10^{-3}d\right) -1 \nonumber\\ &=& 2d - 0.4 d_{V_0}(v) + \left(1-0.2i\right) d_{V_1}(v) - \left(1-0.2i\right)d_{E_1}(v) - 0.003d - 1 \nonumber\\ &\geq& 2d - 0.4 d_{V_0}(v) + \left(1-0.2i\right)d_{V_1}(v) - \left(1-0.2i\right)\left(0.08d_{V_1}(v)+ 10^{-3}d\right) - 0.003d - 1 \nonumber\\ &=& 2d - 0.4 d_{V_0}(v) + \left(1-0.2i\right)\cdot 0.92d_{V_1}(v) - 0.004d + 2i\cdot 10^{-4}d - 1 \nonumber\\ &=& 2d - 0.4 d_{V_0}(v) + \left(1-0.2i\right)\cdot 0.92\left(d-d_{V_0}(v)\right) - 0.004d + 2i\cdot 10^{-4}d - 1 \nonumber\\ &=& \left(2.916 - 0.184i\right)d - \left(1.32 - 0.184i\right)d_{V_0}(v)+ 2i\cdot 10^{-4}d - 1 \nonumber\\ &\geq& \left(2.916 - 0.184i\right)d - \left(1.32 - 0.184i\right) \cdot 0.0503 d + 2i\cdot 10^{-4}d - 1 \nonumber\\ &=& \left(2.849604 - 0.1745448i\right)d - 1, \label{LowerSvBoundinV0i} \end{eqnarray} while by~(\ref{dE0inV0}), (\ref{dE1inV0}) and (\ref{dV1}), also for every $i=0,1,2,3,4$ and each $v\in V_{0,i}$: \begin{eqnarray} \sigma(v) &\leq& 2d_{E_0}(v)+3\left(d-d_{E_0}(v)\right) = 3d-d_{E_0}(v) \leq 3d - 0.2i\left(d_{V_1}(v)-d_{E_1}(v)\right) + 10^{-3}d \nonumber\\ &\leq& 3d - 0.2i\left(d_{V_1}(v)-0.08d_{V_1}(v) - 10^{-3}d\right) + 10^{-3}d = \left(3.001 +2\cdot 10^{-4}i\right)d - 0.184i \cdot d_{V_1}(v) \nonumber\\ &\leq& \left(3.001 +2\cdot 10^{-4}i\right)d - 0.184i \left(0.95d - 
0.0003d \right) = \left(3.001 - 0.1745448 i\right)d. \label{UpperSvBoundinV0i} \end{eqnarray} Thus by~(\ref{UpperSvBoundinV0i}) and~(\ref{LowerSvBoundinV0i}), for every $i=0,1,2,3$ and $u\in V_{0,i}$, $v\in V_{0,i+1}$: \begin{equation}\label{V0i+1SmallerV0i} \sigma(v) \leq \left[3.001 - 0.1745448 \left(i+1\right)\right]d = \left(2.8264552 - 0.1745448i\right)d < \sigma(u). \end{equation} Finally, to justify that there are no sum conflicts between vertices from $V_0$ and $V_1$, by~(\ref{V1iSmallerV1i+1}) and~(\ref{V0i+1SmallerV0i}) it is sufficient to show that sums in $V_{1,10^4}$ are smaller than sums in $V_{0,4}$. To see that this is actually true, note that by~(\ref{SecondSv}), (\ref{DefinitionOfc1i}), (\ref{Delta2}) and~(\ref{LowerSvBoundinV0i}), for any $u\in V_{1,10^4}$ and $v\in V_{0,4}$: \begin{eqnarray} \sigma(u) &\leq& \left(1.0776+1.06875\cdot \left(10^4-1\right)\cdot 10^{-4}\right)d + 0.0018956d + 10^{-4}\cdot 1.0603 d \nonumber\\ &=& 2.148244755 d < 2.1514248 d - 1 = \left(2.849604 - 0.1745448\cdot 4\right)d - 1 \leq \sigma(v). \nonumber \end{eqnarray} This finishes the proof of Theorem~\ref{123largeregTh}, as the obtained weighting $\omega_2$ is thus indeed a $3$-weighting of the edges of $G$ such that there are no sum conflicts between neighbours in $G$. \qed
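The decimal constants in the two final chains of estimates are easy to mistype, so we verified them mechanically; the short Python check below is our own addition:

```python
# constants of (LowerSvBoundinV0i): (2.916 - 0.184i)d - (1.32 - 0.184i)*0.0503d + 2i*10^-4 d
assert abs((2.916 - 1.32 * 0.0503) - 2.849604) < 1e-12            # constant term
assert abs((0.184 - 0.184 * 0.0503 - 2e-4) - 0.1745448) < 1e-12   # coefficient of i
# the final comparison between V_{1,10^4} and V_{0,4} (coefficients of d):
sigma_u_max = 1.0776 + 1.06875 * (1e4 - 1) * 1e-4 + 0.0018956 + 1.0603e-4
sigma_v_min = 2.849604 - 0.1745448 * 4
assert abs(sigma_u_max - 2.148244755) < 1e-9
assert abs(sigma_v_min - 2.1514248) < 1e-9
assert sigma_u_max < sigma_v_min
```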
\section{Concluding Remarks}
The constant $10^8$ above could still be improved, but at the cost of the clarity of the presentation of the proof of Theorem~\ref{123largeregTh}. Nevertheless, we were far from being able to push it down to $10^7$. Actually, introducing the special subgraph $G'_1$ of $G_1$, based on Corollary~\ref{QuarterDecompositionLemma}, served merely optimization purposes; these also required using only $1$'s and $3$'s as weights in $G_1$. Setting aside this optimization of the lower bound for $d$ might however be beneficial in another direction. Namely, using mostly $2$'s and $1$'s in $G_1$, a similar argument to the one in Section~\ref{SectionProofLargeD} would assure an arbitrarily small fraction of all edges weighted $3$ in a vertex-colouring $3$-weighting of any $d$-regular graph with $d$ large enough. Apart from this, our approach can also be relatively easily extended to graphs which are not regular, but whose minimum degree $\delta$ is slightly larger (by an arbitrary $\epsilon>0$) than half of the maximum degree $\Delta$ (with $\Delta$ large enough) -- this greatly improves the mentioned result from~\cite{123dense-Zhong} that the 1--2--3 Conjecture holds if $\delta>0.99985n$, where $n$ is the (sufficiently large) order of the graph. We omit the details here, as we believe that this can in fact be strengthened further to a result for general graphs, for which it would suffice that $\delta$ is at least a very small function of $\Delta$, of order much less than $\Delta$. This shall however require a few extra ideas, as our approach does not directly transfer to such a case.
\end{document}
\begin{document}
\title{Complete characterization of quantum correlations by randomized measurements} \author{Nikolai Wyderka} \affiliation{Institut für Theoretische Physik III, Heinrich-Heine-Universität Düsseldorf, Universitätsstr.~1, 40225 Düsseldorf, Germany} \author{Andreas Ketterer} \affiliation{Fraunhofer Institute for Applied Solid State Physics IAF, Tullastr.~72, 79108 Freiburg, Germany} \author{Satoya Imai} \affiliation{Naturwissenschaftlich-Technische Fakultät, Universität Siegen, Walter-Flex-Str.~3, 57068 Siegen, Germany} \author{Jan Lennart Bönsel} \affiliation{Naturwissenschaftlich-Technische Fakultät, Universität Siegen, Walter-Flex-Str.~3, 57068 Siegen, Germany} \author{Daniel E. Jones} \affiliation{DEVCOM Army Research Laboratory, Adelphi, Maryland 20783, USA} \author{Brian T. Kirby} \affiliation{DEVCOM Army Research Laboratory, Adelphi, Maryland 20783, USA} \affiliation{Tulane University, New Orleans, Louisiana 70118, USA} \author{Xiao-Dong Yu} \affiliation{Naturwissenschaftlich-Technische Fakultät, Universität Siegen, Walter-Flex-Str.~3, 57068 Siegen, Germany} \author{Otfried Gühne} \affiliation{Naturwissenschaftlich-Technische Fakultät, Universität Siegen, Walter-Flex-Str.~3, 57068 Siegen, Germany} \date{\today}
\begin{abstract} The fact that quantum mechanics predicts stronger correlations than classical physics is an essential cornerstone of quantum information processing. Indeed, these quantum correlations are a valuable resource for various tasks, such as quantum key distribution or quantum teleportation, but characterizing these correlations in an experimental setting is a formidable task. By definition, quantum correlations are invariant under local transformations; this physically motivated invariance implies, however, a dedicated mathematical structure and, therefore, constitutes a roadblock for an efficient analysis of these correlations in experiments. Here we provide a method to directly measure any locally invariant property of quantum states using locally randomized measurements, and we present a detailed toolbox to analyze these correlations for two quantum bits. We implement these methods experimentally using pairs of entangled photons, characterizing their usefulness for quantum teleportation and their potential to display quantum nonlocality in its simplest form. Our results can be applied to various quantum computing platforms, allowing simple analysis of correlations between arbitrary distant qubits in the architecture.
\end{abstract}
\maketitle
\section{Introduction} Quantum mechanics contains a plethora of fascinating nonlocal effects that are useful in various applications of quantum technologies. Such effects are, by definition, invariant under changes of the local reference systems or, mathematically speaking, of the choice of the local bases of the Hilbert space. This naturally leads to the expectation that they should be describable by quantities which are invariant under such transformations. A quantum state of a composite system is described by a density matrix $\rho_{AB}$ in the tensor product space of the individual systems, so the invariance under local basis changes of any function $f(\rho_{AB})$ of the state can, in the case of bipartite systems, be expressed as \begin{align}
f(\rho_{AB}) = f(U_A\otimes U_B \rho_{AB} U_A^\dagger \otimes U_B^\dagger), \end{align} where $U_A$ and $U_B$ are unitary matrices governing the basis change of the first and second system, respectively. Due to this invariance, an average over all such transformations yields \begin{align}
f(\rho_{AB}) = \iint \text{d}U_A\text{d}U_B f(U_A\otimes U_B \rho_{AB} U_A^\dagger \otimes U_B^\dagger). \end{align} On the other hand, any physical function may be expanded in terms of powers of expectation values of certain observables, yielding \begin{align} f(\rho_{AB}) = \sum_{\vec{t}} c_{\vec{t}}\, \langle \mathcal{M}_1 \rangle^{t_1} \langle \mathcal{M}_2 \rangle^{t_2} \ldots \end{align} for appropriately chosen bipartite observables $\mathcal{M}_i$ and coefficients $c_{\vec{t}}$, where $\vec{t} = (t_1, t_2, \dots)$ denotes all possible multi-indices with positive integers $t_i$ and a varying number of entries $k$. Combining this with local unitary invariance yields \begin{align} \label{eq:moments} f(\rho_{AB}) = \sum_{\vec{t}} c_{\vec{t}}\, \RR{t_1}{\mathcal{M}_1}(\rho_{AB})\RR{t_2}{\mathcal{M}_2}(\rho_{AB})\ldots, \end{align} where the quantities \begin{align}\label{eq:mom}
\RR{t}{\mathcal{M}}(\rho) := \!\!\iint \text{d}U_A \text{d}U_B \{\operatorname{Tr}[(U_A\otimes U_B) \rho (U_A^\dagger \otimes U_B^\dagger) \mathcal{M}]\}^t \end{align} are the $t$-th moments of the probability distribution of measurement results for the bipartite observable $\mathcal{M}$ under random local basis changes. For the case of product observables, the quantities $\RR{t}{A\otimes B}$ have been studied as randomized measurements, see, e.g., Refs.~\cite{PhysRevA.92.050301,Knips:2020aa,Knips2020momentrandom,elben2022randomized}. The advantages of these schemes include the possibility of obtaining the data without shared reference frames, with only limited control over the measurements, and in the presence of uncharacterized local unitary noise. With sufficient experimental control, the random unitaries may even be selected from a finite unitary $t$-design instead \cite{gross2007evenly,Ketterer2020entanglement}.
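To illustrate the last remark (this example is ours and not taken from the paper), one may replace the Haar average in Eq.~(\ref{eq:mom}) by an exact average over a finite design: the single-qubit Clifford group is a unitary 3-design, so for the two-qubit singlet and the observable $\mathcal{M}=\sigma_z\otimes\sigma_z$ the $24\times 24$ Clifford average reproduces the Haar value $\RR{2}{\mathcal{M}}=1/3$ of the second moment exactly:

```python
import numpy as np
from itertools import product

# generate the 24 single-qubit Cliffords (modulo global phase) from H and S
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]], dtype=complex)

def key(U):
    nz = np.flatnonzero(np.abs(U.ravel()) > 1e-6)[0]
    U = U * (abs(U.ravel()[nz]) / U.ravel()[nz])   # canonical global phase
    return (np.round(U, 6) + 0.0).tobytes()        # +0.0 maps -0.0 to +0.0

group = {key(np.eye(2, dtype=complex)): np.eye(2, dtype=complex)}
frontier = list(group.values())
while frontier:                                    # breadth-first closure
    new = []
    for U in frontier:
        for g in (H, S):
            V = g @ U
            k = key(V)
            if k not in group:
                group[k] = V
                new.append(V)
    frontier = new
cliffords = list(group.values())
assert len(cliffords) == 24

M = np.kron(np.diag([1.0, -1.0]), np.diag([1.0, -1.0]))  # sigma_z (x) sigma_z
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)       # two-qubit singlet
rho = np.outer(psi, psi)

total = 0.0
for UA, UB in product(cliffords, repeat=2):
    W = np.kron(UA, UB)
    total += np.trace(W @ rho @ W.conj().T @ M).real ** 2
avg = total / 24**2
assert abs(avg - 1 / 3) < 1e-6   # design average equals the Haar second moment
```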
Here, we go beyond the standard randomized measurement schemes by allowing for non-product observables. While this may appear to be a disadvantage for practical implementations, we stress that it is possible to obtain the moment data for non-product observables from the data of multiple product observables by classical post-processing.
As the moments of these distributions can be measured directly, they form the main objects of interest in order to describe the local unitary invariant functions. In principle, it is possible to expand any (polynomial) local unitary invariant (LU invariant) in this manner. However, so far only a small subset of these moments has been exploited for tasks like entanglement detection \cite{brydges2019probing, elben2020mixed,Ketterer_2019,imai2021bound} or fidelity estimation~\cite{PhysRevLett.106.230501,Elben_2020}.
In this paper, we develop a general framework linking the moments of randomized measurements and the set of LU invariants. For fixed local dimension $d$, the set of polynomial invariants is finitely generated \cite{grassl1998computing, springer2006invariant}, so it suffices to consider the generators. Indeed, complete sets of generators have been found in the case of two-qubit states and certain classes of higher-dimensional cases \cite{makhlin2002nonlocal, sun2017local}. In the following, we develop concrete schemes to measure {\it all} of the relevant two-qubit invariants, but naturally our theory can be extended to known invariants in higher-dimensional or multiparticle systems. As an application, we experimentally implement a randomized measurement scheme to measure some of the invariants and use it to certify the presence of Bell nonlocality and the usefulness of the prepared states for teleportation schemes.
\section{Local unitary invariants from randomized measurements} \label{sec:invariants}
\subsection{Randomized measurements}
In the framework of randomized measurements, a multiparticle quantum state undergoes random local unitary transformations before a fixed observable $\mathcal{M}$ is measured. The experiment is repeated a number of times for different choices of local unitaries. From the statistics and the moments of the resulting probability distribution, one then aims to infer properties of the underlying quantum state.
More formally, the quantities obtained in the experiment for a bipartite quantum state $\rho$ are those given in Eq.~(\ref{eq:mom}), where the integrals are evaluated with respect to the Haar measure over the unitary group $\mathcal U(d)$, $\mathcal{M}$ denotes the measured observable, and $t$ denotes the order of the moment of the resulting probability distribution.
In this paper, we are mainly concerned with two-qubit states, for which a complete generating set of 18 polynomial invariants has been characterized before \cite{makhlin2002nonlocal}. Of these invariants, six are needed only to distinguish certain specific states by the signs of these invariants, thus we do not expect to extract relevant information in terms of entanglement or nonlocality from these. The remaining twelve invariants are of degree up to six. In order to express them properly, let us decompose the bipartite quantum state in terms of the Bloch representation, i.e., we write \begin{align}
\rho = \frac14\!\!\left[ \mathds{1}\otimes \mathds{1} + \vec{\alpha}\cdot \vec{\sigma} \otimes \mathds{1} + \mathds{1} \otimes \vec{\beta}\cdot\vec{\sigma} + \! \sum_{i,j=1}^3\! T_{ij} \sigma_i \otimes \sigma_j\right], \end{align} where $\sigma_{1,2,3}$ denote the usual Pauli matrices. Thus, the state is determined by its local Bloch vectors $\vec{\alpha}$ and $\vec{\beta}$, and the real correlation matrix $T$. In terms of these, the invariants can be expressed conveniently, and we give a complete list in Appendix~\ref{app:luinvs}. For our purposes, we will focus on the invariants $I_1 = \det(T)$, $I_2 = \operatorname{Tr}(TT^T)$ and $I_3 = \operatorname{Tr}(TT^\text{T}TT^\text{T})$. With the help of these three invariants, it is possible to decide whether the state can violate a CHSH-like Bell inequality. Furthermore, it is possible to bound the teleportation fidelity of the state.
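As a concrete illustration of this decomposition, the Bloch vectors, the correlation matrix $T$, and the invariants $I_1$, $I_2$, $I_3$ can be read off directly from a density matrix. The following sketch (our own illustrative code, not the paper's analysis software) does this for the Bell state $\ket{\phi^+}$:

```python
# Extract the Bloch decomposition of a two-qubit state and compute
# I_1 = det(T), I_2 = Tr(T T^T), I_3 = Tr(T T^T T T^T).
import numpy as np

PAULIS = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def bloch_decomposition(rho):
    alpha = np.array([np.real(np.trace(rho @ np.kron(s, np.eye(2)))) for s in PAULIS])
    beta = np.array([np.real(np.trace(rho @ np.kron(np.eye(2), s))) for s in PAULIS])
    T = np.array([[np.real(np.trace(rho @ np.kron(si, sj)))
                   for sj in PAULIS] for si in PAULIS])
    return alpha, beta, T

def invariants_I123(rho):
    _, _, T = bloch_decomposition(rho)
    return np.linalg.det(T), np.trace(T @ T.T), np.trace(T @ T.T @ T @ T.T)

# Bell state |phi+>: T = diag(1, -1, 1), hence I_1 = -1, I_2 = I_3 = 3.
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
I1, I2, I3 = invariants_I123(np.outer(psi, psi.conj()))
```

Note that $I_1 = -1$ here reflects the sign flip under partial transposition mentioned below, which is characteristic of entangled states.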
Two of the invariants, including $I_1$, are special in the sense that they flip signs under partial transposition of $\rho$, whereas all other invariants do not. This has consequences for how to measure them: While the other invariants can be obtained from the statistics of product observables, $I_1$ and $I_{14}$ require non-product observables. In turn, they are linked to the entanglement of the state and can be used to obtain the negativity, an entanglement measure, from randomized measurements \cite{vidal2002computable}, see Appendix~\ref{app:luinvs} for more details.
\subsection{Expressions for the two-qubit LU invariants} \label{sec:expressions}
Let us now state explicitly how to measure the LU invariants in a randomized measurement scheme. To that end, recall that in order to observe the moments in Eq.~(\ref{eq:mom}), one has to choose an appropriate observable. Here, we show how to choose it in order to obtain the invariants $I_1$, $I_2$ and $I_3$.
As a first example, we explore the moments in case of the choice $\mathcal{M} = Z \otimes Z$. Note that choosing any other combination of Pauli matrices yields the same results, as they are related by local unitary rotations. For this choice, the first moment vanishes and we obtain as the second moment \begin{align}
\RR{2}{Z\otimes Z} &= \frac19 \operatorname{Tr}(TT^T) = \frac19 I_2,
\label{eq:R2ZZ} \end{align} where the occurring integrals can be solved using Weingarten calculus \cite{collins2022weingarten}.
Next, $t=3$ yields again zero (as well as any odd moment). For $t=4$, we obtain \begin{align}
\RR{4}{Z\otimes Z} &= \frac{1}{75}[2\operatorname{Tr}(TT^TTT^T) +\operatorname{Tr}(TT^T)^2] \\
&= \frac1{75}[2I_3 + I_2^2],
\label{eq:R4ZZ} \end{align} giving access to the invariant $I_3$.
Finally, as $I_1 = \det(T)$ flips sign under partial transposition, we consider the non-product observable $\mathcal{M}_{\det} = \sum_{i=1}^3 \sigma_i\otimes \sigma_i$ and $t=3$. Note that even though the observable is non-product, the moments can still be obtained by local measurements, as the expectation value can be obtained from the three measurements $X\otimes X$, $Y\otimes Y$, $ Z\otimes Z$ for a fixed choice of unitaries. The corresponding moment yields \begin{align}
\RR{3}{\mathcal{M}_{\det}}(\rho) = \det(T) = I_1.
\label{eq:R3Adet} \end{align}
Before turning to the experimental implementation of the randomized measurement scheme, some statistical considerations are in order. For any fixed choice of local unitaries, multiple measurements are needed to obtain an estimate of the expectation value. Furthermore, a large number of random unitaries have to be chosen. We denote the number of random unitary choices by $M$, and the number of measurements per choice to obtain the expectation value by $K$, such that the total number of measurements is given by $MK$.
Central to this scheme is the generation of Haar random unitaries. We certify the randomness of the unitaries in our setup by calculating their so-called frame potential, as detailed in Appendix~\ref{app:randomunitaries}.
\begin{figure*}
\caption{
(a) Schematic diagram of the experimental setup for performing the randomized measurement protocol with polarization-entangled photon pairs. DS: detector station. DSF: dispersion-shifted fiber. EPS: entangled photon source. PA: polarization analyzer. SCR: polarization scrambler. SPD: single photon detector.
(b, c, d) Experimentally determined unbiased estimators for the invariants $I_{1}$, $I_{2}$, and $I_{3}$ for 25 different runs consisting of measurements with 200 different random unitaries. The black lines show the expected values of each parameter calculated from the density matrix of the experimental system.
}
\label{fig:exp_data}
\end{figure*}
\section{Experimental methods} \label{sec:experiment}
We experimentally verify the functionality of the proposed randomized measurement method for the two-qubit case using polarization-entangled photon pairs. A schematic of our experimental setup is shown in Fig. \ref{fig:exp_data}(a). The entangled photon source (EPS) generates signal and idler photon pairs via four-wave mixing in a dispersion-shifted fiber (DSF) \cite{fiorentino2002all}. The DSF is pumped with a 50 MHz pulsed fiber laser centered at 1552.52 nm and is arranged in a Sagnac loop with a polarizing beam splitter (PBS) to entangle the signal and idler in polarization. The photons are spectrally demultiplexed into 100 GHz-spaced channels on the International Telecommunication Union (ITU) grid after the Sagnac loop, resulting in photons with a temporal duration of about 15 ps \cite{wang2009robust, nucrypt}. For the experiment described here, ITU channels 27 (1555.75 nm) and 35 (1549.32 nm) are used. The source is tunable and typically outputs $\mu = 0.001 - 0.1$ pairs per pump pulse. Each photon is detected with gated InGaAs detectors with detection efficiencies of $\eta \sim 20\%$ and dark count probabilities of $ \sim 4 \times 10^{-5}$ per gate \cite{jones2017joint,jones2017situ}.
\subsection{Measuring the LU invariants} Given the polarization-entangled state generated by our source, we must implement random local unitary rotations in the form of random polarization state rotations. Therefore, we utilize the scrambling function of automated polarization controllers in order to apply random polarization rotations (for the remainder of the paper, a polarization controller operating in scrambling mode will be referred to as a polarization scrambler).
After verifying that the polarization scramblers can be used to apply sufficiently random unitaries, we measured unbiased estimators (see Appendix~\ref{app:unbiased_estimators}) for the $I_{1}$, $I_{2}$, and $I_{3}$ invariants described in Section \ref{sec:invariants}. Each polarization scrambler was set to rotate incident light to a random polarization state (thereby acting as a random unitary), and coincidences were measured in different bases. To that end, we define for each of the two parties $i=1,2$ the local bases $\{\ket{H}_i, \ket{V}_i\}$ of horizontally and vertically polarized light, $\{\ket{D}_i, \ket{A}_i\}$ of diagonal and anti-diagonal polarized light and $\{\ket{L}_i, \ket{R}_i\}$ of left circular and right circular polarized light. Note that while we associate these bases to polarization states, the unitary invariance of the measured quantities allows us to choose any local bases, as long as they are rotated by $\frac\pi2$ on the Bloch sphere w.r.t.~each other. In particular, the bases for measuring photons 1 and 2 do not need to be aligned, i.e. $|H\rangle_{1}$ and $|H\rangle_{2}$ do not need to be equivalent on their respective Bloch spheres. We then measured in each combination of these local bases repeatedly for $M=200$ different settings of the polarization scramblers, i.e. 200 different random unitaries were applied, where for each of these settings, $K\approx 1500$ repetitions were used to measure the expectation value.
The method to estimate $I_{1}$, $I_{2}$, and $I_{3}$ from finite measurement results is discussed in detail in Appendix~\ref{app:unbiased_estimators}. Although we collect measurement results in the $\{\ket{H}_i, \ket{V}_i\}$, $\{\ket{D}_i, \ket{A}_i\}$ and $\{\ket{L}_i, \ket{R}_i\}$ bases described above, the estimators for $I_{2}$ and $I_{3}$
only require projective measurements in a single joint basis. Therefore, those estimators are calculated using only a subset of the data, for example, the results for $|H\rangle_{1}|H\rangle_{2}$, $|H\rangle_{1}|V\rangle_{2}$, $|V\rangle_{1}|H\rangle_{2}$, and $|V\rangle_{1}|V\rangle_{2}$. On the other hand, the estimator for $I_{1}$ requires projective measurements in all three of the measured joint bases. After calculating the invariants, the experiment described above was repeated 25 times to allow for a statistical analysis of the results.
The experimentally determined estimators of the $I_{1}$, $I_{2}$, and $I_{3}$ invariants for all 25 runs (each run is shown in a different color) are shown in Fig. \ref{fig:exp_data}(b), (c), and (d), respectively. The black line in all plots corresponds to the expected value of each invariant calculated from the density matrix of the experimental system, measured by quantum state tomography, to allow for comparison with our method. The experimentally determined invariants converge near the expected values, therefore validating our randomized measurement protocol.
\section{Applications to the detection of Bell nonlocality and teleportation fidelity}\label{sec:invariantsandnonloc}
\subsection{Methods}
The most straightforward application is the evaluation of $I_2 = \operatorname{Tr}(TT^\text{T})$, also known as the two-body sector length \cite{wyderka2020characterizing}. A quantum state is entangled if $I_{2}>1$, and the maximal value is $I_{2}=3$ for Bell states. Note that additional knowledge of $I_3=\operatorname{Tr}(TT^\text{T}TT^\text{T})$ allows for the detection of many more entangled states \cite{imai2021bound}.
Combined knowledge of $I_1 = \det(T)$, $I_2 = \operatorname{Tr}(TT^T)$ and $I_3 = \operatorname{Tr}(TT^\text{T}TT^\text{T})$ is useful for completely determining if a state's nonlocality can be detected by a CHSH-like inequality: Given the observable \cite{verstraete2002entanglement} \begin{align}
\mathcal{B} = \sum_{i,j=1}^3 [a_i(c_j+d_j) + b_i(c_j-d_j)]\sigma_i \otimes \sigma_j, \end{align} where $\vec{a}$, $\vec{b}$, $\vec{c}$ and $\vec{d}$ are real, normalized vectors, its expectation value is bounded by 2 for local states. For a given quantum state, the maximum expectation value that one can observe by varying the vectors that define the observable is given by $2\sqrt{\lambda_1^2 + \lambda_2^2}$, where $\lambda_1$ and $\lambda_2$ are the two largest singular values of the correlation matrix $T$ \cite{horodecki1995violating}. Thus, the quantity \begin{align}\label{eq:chshviolation} \text{CHSH}(\rho) = 2\sqrt{\lambda_1^2 + \lambda_2^2} - 2 \end{align} quantifies the attainable violation and is positive exactly when the state can violate a CHSH-like inequality.
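The Horodecki criterion above is a one-line computation once $T$ is known. A minimal sketch, with the Bell state and a visibility-$0.6$ Werner state as our own illustrative examples:

```python
# CHSH(rho) = 2*sqrt(s1^2 + s2^2) - 2 from the two largest singular
# values of the correlation matrix T (Horodecki criterion).
import numpy as np

def chsh_value(T):
    s = np.linalg.svd(T, compute_uv=False)  # singular values, descending
    return 2.0 * np.sqrt(s[0] ** 2 + s[1] ** 2) - 2.0

T_bell = np.diag([1.0, -1.0, 1.0])   # |phi+>: maximal violation 2*sqrt(2)-2
T_werner = 0.6 * T_bell              # Werner state below the 1/sqrt(2) threshold
c_bell = chsh_value(T_bell)
c_werner = chsh_value(T_werner)
```

The Werner example illustrates that a state can be entangled (visibility above $1/3$) while still admitting no CHSH violation.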
As the squares of the singular values of $T$ coincide with the eigenvalues of $TT^\text{T}$, we can obtain them by measuring the coefficients of the characteristic polynomial \begin{multline}
p_T(x) = x^3 - \operatorname{Tr}(TT^\text{T})x^2- \\ - \frac12[\operatorname{Tr}(TT^\text{T}TT^\text{T})-\operatorname{Tr}(TT^\text{T})^2]x-\det(T)^2,\label{eq:charpol} \end{multline} which are LU invariants, and calculating its roots. However, some care is needed to properly transfer statistical errors from finite statistics of the invariants to the roots of this polynomial; we explain the data analysis methods in Appendix~\ref{app:xiaodong}.
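The route from the invariants to the singular values can be checked numerically: build $p_T$ from $(I_1, I_2, I_3)$ alone and compare its roots with a direct SVD of $T$. The matrix below is an arbitrary illustrative choice, not experimental data.

```python
# Squared singular values of T recovered as roots of the characteristic
# polynomial p_T(x) = x^3 - I2 x^2 + (I2^2 - I3)/2 x - I1^2, cross-checked
# against a direct SVD.
import numpy as np

T = np.array([[0.8, 0.1, 0.0],
              [0.0, 0.5, 0.2],
              [0.1, 0.0, 0.3]])
I1 = np.linalg.det(T)
I2 = np.trace(T @ T.T)
I3 = np.trace(T @ T.T @ T @ T.T)

mu = np.roots([1.0, -I2, 0.5 * (I2 ** 2 - I3), -I1 ** 2])
mu = np.sort(np.real(mu))[::-1]                        # eigenvalues of T T^T
sv2 = np.sort(np.linalg.svd(T, compute_uv=False) ** 2)[::-1]
```

In an experiment only noisy estimates of the invariants are available, so the root-finding step must be accompanied by the error propagation discussed in the appendix; the sketch above only verifies the algebraic identity.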
As a second figure of merit, we can decide whether a given two-qubit state is useful in a teleportation protocol. There, the maximal fidelity $f_\text{max}$ of the teleported state is given by \cite{horodecki1999general} \begin{align}\label{eq:telfid}
f_\text{max} = \frac{F_\text{max}d+1}{d+1}, \end{align} where in our case $d=2$ and $F_\text{max}$ is the maximal overlap of the distributed state with the maximally entangled state $\ket{\phi^+} = \frac{1}{\sqrt{2}}(\ket{00}+\ket{11})$ under local operations and classical communication. As local unitary rotations constitute a subset of these, we can lower bound $F_\text{max}$ by optimizing over LUs instead, yielding \cite{guhne2021geometry} \begin{align}\label{eq:maxfidelity}
F_\text{max} \!\geq\! F_\text{max}^{U} \!:=\! \frac14\max\{&1\!-\!\lambda_1\!-\!\lambda_2\!-\!\lambda_3, 1\!-\!\lambda_1\!+\!\lambda_2\!+\!\lambda_3,\nonumber\\
&1\!+\!\lambda_1\!-\!\lambda_2\!+\!\lambda_3, 1\!+\!\lambda_1\!+\!\lambda_2\!-\!\lambda_3\}. \end{align} By examining the invariants $I_1$, $I_2$ and $I_3$, we can minimize $F_\text{max}^U$ over all eigenvalues $\lambda_i$ which are compatible with the observed data, giving a lower bound on the teleportation fidelity of the prepared quantum state.
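A point-estimate sketch of this bound: recover the singular values of $T$ from the invariants, then minimize $F^U_\text{max}$ over all sign assignments $\lambda_i = \pm s_i$ whose product matches $\operatorname{sign}(\det T)$. The minimization strategy is our reading of the text, and the code omits the finite-statistics confidence regions treated in the appendix.

```python
# Lower bound on the teleportation overlap F^U_max from (I1, I2, I3) only.
import itertools
import numpy as np

def fmax_u_lower_bound(I1, I2, I3):
    # Squared singular values of T from the characteristic polynomial.
    mu = np.roots([1.0, -I2, 0.5 * (I2 ** 2 - I3), -I1 ** 2])
    s = np.sqrt(np.clip(np.real(mu), 0.0, None))
    target = np.sign(I1)
    best = np.inf
    for signs in itertools.product([1.0, -1.0], repeat=3):
        if target != 0 and np.prod(signs) != target:
            continue  # sign pattern incompatible with det(T)
        lam = np.array(signs) * s
        f = 0.25 * max(1 - lam[0] - lam[1] - lam[2],
                       1 - lam[0] + lam[1] + lam[2],
                       1 + lam[0] - lam[1] + lam[2],
                       1 + lam[0] + lam[1] - lam[2])
        best = min(best, f)  # worst case over compatible eigenvalues
    return best

# Bell state: I1 = -1, I2 = I3 = 3 gives F^U_max = 1 and hence f_max = 1.
F = fmax_u_lower_bound(-1.0, 3.0, 3.0)
f_tel = (2 * F + 1) / 3   # Eq. for f_max with d = 2
```

For a maximally entangled state every compatible sign pattern already yields perfect overlap, so the bound is tight in this case.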
\subsection{Results}
Using the methods described above and under the assumption that 25 repetitions of the experiment yield results which are well described by the Gaussian approximation, we extract the following experimental values for the invariants: \begin{align}
\operatorname{Tr}(TT^T) &= \phantom{-}2.41 \pm 0.15,\\
\operatorname{Tr}(TT^TTT^T) &= \phantom{-}2.21 \pm 0.21,\\
\det(T) &= -0.62\pm 0.15, \end{align} where the confidence regions correspond to $3\sigma$, i.e., 99.73\% confidence levels.
From these values, we obtain a potential CHSH violation of \begin{align}
\text{CHSH}(\rho) &\geq 0.46. \end{align} The confidence of this violation is given by $0.991 \approx 2.6\sigma$, as detailed in Appendix~\ref{app:xiaodong}. Note that the maximal observable value is given by $2\sqrt{2} - 2 \approx 0.83$.
Similarly, by requiring a higher confidence level of $5\sigma$ for the invariants, the bound $\text{CHSH}(\rho) \geq 0.42$ with confidence $0.999998\approx 4.7\sigma$ can be obtained.
Using either method, our results clearly show that the randomized measurement protocol successfully determines that the state output by our EPS has the potential to violate a CHSH inequality.
For the teleportation fidelity, a confidence level of $3\sigma$ of the LU invariants yields a fidelity of at least \begin{align}
F_{\text{max}}^{U} &=0.88, \end{align} or, via Eq.~(\ref{eq:telfid}), $f_\text{max} = 0.92$, with a confidence level of $0.991 \approx 2.6\sigma$. By raising the confidence of the invariants to $5\sigma$, the lower bound decreases to $F_{\text{max}}^{U}=0.86$, or $f_\text{max} = 0.90$, with confidence $0.999998 \approx 4.7\sigma$.
A detailed derivation of these values can be found in Appendix~\ref{app:dataanalysis}, where we also give values for these quantities without the assumption of Gaussian distribution, by using the Hoeffding inequality instead.
\section{Conclusions}\label{sec:Conclusions}
We showed that any local unitary invariant characterizing the quantum correlations in quantum states of two or more particles can be directly measured using the moments of randomized measurements. We exemplified this for two-qubit states, where we showed how all relevant LU invariants can be inferred from randomized measurements of appropriately chosen observables. We proceeded to demonstrate the practicality of the introduced methods by conducting an experiment with entangled photon pairs leading to an efficient measurement of the LU invariants $I_1$, $I_2$ and $I_3$. The latter allowed us to directly certify important properties of the state, i.e., its Bell nonlocality as well as its usefulness for quantum teleportation. Furthermore, as a necessary by-product of our investigations, we devised methods allowing for a characterization of the degree of randomness of a set of experimentally implemented unitary transformations.
We emphasize the simplicity of the presented scheme, which allows one to infer several important properties of the underlying quantum state through a number of randomly assorted measurements. For this reason, it will be an interesting direction of future research to extend the present explicit constructions for two-qubit states also to higher-dimensional systems, which will likely find ample applications in quantum communication tasks. Also, it would be desirable to extend our approach to the characterization of nonlocal quantum channels and multiparticle quantum correlations such as multi-setting Bell nonlocality or spin squeezing.
\section*{Acknowledgments} \begin{acknowledgments} N.W.~acknowledges support by the QuantERA project QuICHE via the German Ministry of Education and Research (BMBF Grant No.~16KIS1119K). A.K.~acknowledges funding from the Ministry of Economic Affairs, Labour and Tourism Baden-Württemberg, under the project QORA. S.I.~acknowledges support by the DAAD. J.L.B.~acknowledges support from the House of Young Talents of the University of Siegen. O.G. acknowledges support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation, project numbers 447948357 and 440958198), the Sino-German Center for Research Promotion (Project M-0294), the ERC (Consolidator Grant 683107/TempoQ) and the German Ministry of Education and Research (Project QuKuK, BMBF Grant No.~16KIS1618K).
\end{acknowledgments}
\appendix
\onecolumngrid \section{Two-qubit LU invariants} \label{app:luinvs}
We expand a two-qubit state $\rho$ in the Bloch basis as \begin{align}
\rho = \frac14\left[ \mathds{1}\otimes \mathds{1} + \vec{\alpha}\cdot \vec{\sigma} \otimes \mathds{1} + \mathds{1} \otimes \vec{\beta}\cdot\vec{\sigma} + \sum_{i,j=1}^3 T_{ij} \sigma_i \otimes \sigma_j\right], \end{align} such that it is determined by the local Bloch vectors $\vec{\alpha}$ and $\vec{\beta}$, and the correlation matrix $T$.
In terms of these, the Makhlin invariants read \cite{makhlin2002nonlocal} \begin{align}
& I_4 = \alpha^2, &&I_7 = \beta^2, && I_2 = \operatorname{Tr}(TT^T), \nonumber\\
\,\nonumber \\
& I_{12} = \vec{\alpha}T\vec{\beta}, && I_1 = \det(T), &&\nonumber\\
\,\nonumber \\
& I_5 = [\vec{\alpha}T]^2, && I_8 = [T\vec{\beta}]^2, && I_3 = \operatorname{Tr}(TT^TTT^T),\nonumber\\
& I_{14} = \operatorname{Tr}[(\star\vec{\alpha})T(\star \vec{\beta})^TT^T], && \nonumber\\
\,\nonumber\\
&I_{13} = \vec{\alpha}TT^TT\vec{\beta}, &&I_6 = [\vec{\alpha} T T^T]^2, &&I_9 = [T^TT\vec{\beta}]^2. \end{align} Here, the first row contains all invariants of degree two, the second row those of degree three, the third and fourth row the degree four invariants and the last row displays the degree five and six invariants. Furthermore, the Hodge star $\star$ maps a vector to a skew-symmetric matrix via $(\star \vec{\alpha})_{ij} = \sum_{k}\epsilon_{ijk}\alpha_k$. Also, we sometimes write scalar products of vectors as $I_{12} = \vec{\alpha}T\vec{\beta}$, instead of the more formal $I_{12} = \vec{\alpha}^T T\vec{\beta}$. The invariants $I_1$ and $I_{14}$ flip sign under partial transposition of $\rho$, while the others are invariant under this map.
Before we start, notice that we aim to measure the moments \begin{align}\label{eq:momappendix}
\RR{t}{\mathcal{M}}(\rho) := \iint \text{d}U_A \text{d}U_B \{\operatorname{Tr}[(U_A\otimes U_B) \rho (U_A^\dagger \otimes U_B^\dagger) \mathcal{M}]\}^t. \end{align} As an observable, we choose $\mathcal{M} = (k_A\mathds{1} + l_A Z) \otimes (k_B\mathds{1} + l_B Z)$. Note that this is the most general choice for product observables, as all other choices can be obtained by local rotations, which are averaged out in the integral.
\subsection{\texorpdfstring{The invariants $\alpha^2$, $\beta^2$ and $\operatorname{Tr}(TT^T)$}{The invariants a**2, b**2 and Tr(TT**T)}}
It is clear that the result of Eq.~(\ref{eq:momappendix}) must be expressible by local invariants, and for $t=2$, one can readily check that \begin{align}
\RR{2}{Z\otimes\mathds{1}}(\rho) &= \int \text{d}U_A \{\operatorname{Tr}[\rho_A U_A Z U_A^\dagger ]\}^2 \nonumber\\
&=\frac14 \sum_{i,j=1}^3 \alpha_i \alpha_j \underbrace{\int \text{d}U \operatorname{Tr}[\sigma_i U Z U^\dagger]\operatorname{Tr}[\sigma_j U Z U^\dagger]}_{\frac43\delta_{ij}} \nonumber\\
&= \frac13 \alpha^2. \end{align} Here, the integral can be solved using Weingarten calculus \cite{collins2022weingarten}.
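The Weingarten integral used in the underbrace can be checked by direct Monte Carlo sampling over Haar-random single-qubit unitaries. The sketch below (sample size and seed are arbitrary illustrative choices) verifies $\int \text{d}U\, \operatorname{Tr}[\sigma_i U Z U^\dagger]\operatorname{Tr}[\sigma_j U Z U^\dagger] = \frac43\delta_{ij}$ for one diagonal and one off-diagonal index pair:

```python
# Monte Carlo sanity check of the single-qubit second-moment Weingarten
# integral: average of Tr[s_i U Z U^dag] Tr[s_j U Z U^dag] over Haar U.
import numpy as np

PAULIS = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], dtype=complex)]
Z = PAULIS[2]

def haar_unitary(d, rng):
    z = (rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

def avg_trace_product(i, j, n_samples, rng):
    acc = 0.0
    for _ in range(n_samples):
        u = haar_unitary(2, rng)
        uzu = u @ Z @ u.conj().T
        acc += np.real(np.trace(PAULIS[i] @ uzu)) * np.real(np.trace(PAULIS[j] @ uzu))
    return acc / n_samples

rng = np.random.default_rng(7)
diag_val = avg_trace_product(2, 2, 20000, rng)   # expect 4/3
off_val = avg_trace_product(0, 2, 20000, rng)    # expect 0
```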
Likewise, $\beta^2$ can be obtained by measuring $\RR{2}{\mathds{1}\otimes Z}$. Next, setting $\mathcal{M} = Z\otimes Z$, we obtain \begin{align}
\RR{2}{Z\otimes Z} &= \frac1{16} \sum_{ijkl=1}^3 T_{ij}T_{kl}\int\text{d}U_A \operatorname{Tr}[\sigma_iU_A ZU_A^\dagger]\operatorname{Tr}[\sigma_kU_AZU_A^\dagger]\int \text{d}U_B \operatorname{Tr}[\sigma_jU_BZU_B^\dagger]\operatorname{Tr}[\sigma_lU_BZU_B^\dagger]\nonumber \\
&= \frac19 \operatorname{Tr}(TT^T),
\label{eq:sector_length_infinite} \end{align} thus, all degree-two invariants can be obtained in this way.
\subsection{\texorpdfstring{The invariants $\operatorname{Tr}(TT^TTT^T)$, $\vec{\alpha}T\vec{\beta}$, $[\vec{\alpha}T]^2$ and $[T\vec{\beta}]^2$}{The invariants Tr(TT**TTT**T), aTb, [aT]**2 and [Tb]**2}}
Next, for $t=4$ and $\mathcal{M} = Z\otimes Z$, we obtain integrals like \begin{align}
\int \text{d}U \operatorname{Tr}[\sigma_{i_1}UZU^\dagger]\operatorname{Tr}[\sigma_{i_2}UZU^\dagger]\operatorname{Tr}[\sigma_{i_3}UZU^\dagger]\operatorname{Tr}[\sigma_{i_4}UZU^\dagger] = \frac{16}{15}[\delta_{i_1i_2}\delta_{i_3i_4} + \delta_{i_1i_3}\delta_{i_2i_4}+\delta_{i_1i_4}\delta_{i_2i_3}], \end{align} leading to \begin{align}
\RR{4}{Z\otimes Z} = \frac{1}{75}[2\operatorname{Tr}(TT^TTT^T) +\operatorname{Tr}(TT^T)^2], \end{align} giving access to invariant $I_3$. For the degree-three invariant $\vec{\alpha}T\vec{\beta}$, however, marginal terms are needed, i.e., $k_A\neq0\neq k_B$. Therefore, we set $k_A=k_B\equiv k$, $l_A = l_B \equiv l$, and $t=3$. We obtain
\begin{align}
\RR{3}{(k\mathds{1} + l Z)^{\otimes 2}} = k^6 + k^4l^2[\alpha^2+\beta^2] + \frac13 k^2l^4[\operatorname{Tr}(TT^T) + 2\vec{\alpha}T\vec{\beta}]. \end{align} Thus, given knowledge of the degree-two invariants and choosing $k\neq0\neq l$, one can extract the degree-three invariant $I_{12} = \vec{\alpha}T\vec{\beta}$.
Finally, we use the same technique to obtain the degree-four invariants $[\vec{\alpha}T]^2$ and $[T\vec{\beta}]^2$:
\begin{align}
\RR{4}{(k\mathds{1} + l Z)^{\otimes 2}} = k^8 &+ 2k^6l^2[\alpha^2+\beta^2] \nonumber\\
&+ \frac{2}{3}k^4l^4[\frac3{10}(\alpha^4+\beta^4) + \alpha^2\beta^2 +\operatorname{Tr}(TT^T) +4\vec{\alpha}T\vec{\beta}] \nonumber \\
&+ \frac{2}{15}k^2l^6[(\alpha^2+\beta^2)\operatorname{Tr}(TT^T) + 2( [\vec{\alpha}T]^2 + [T\vec{\beta}]^2)] \nonumber\\
&+ \frac1{75} l^8[2\operatorname{Tr}(TT^TTT^T) + \operatorname{Tr}(TT^T)^2]. \end{align} Thus, the combination $[\vec{\alpha}T]^2 + [T\vec{\beta}]^2$ can be obtained from this symmetric measurement. If, instead, one chooses $k_A\neq k_B$, $l_A \neq l_B$, the individual terms can also be measured.
\subsection{\texorpdfstring{The invariants $\vec{\alpha}TT^TT\vec{\beta}$, $[\vec{\alpha} T T^T]^2$, and $[T^TT\vec{\beta}]^2$}{The invariants aTT**TTb, [aTT**T]**2, and [T**TTb]**2}}
We find \begin{align}
\RR{5}{(k\mathds{1} + l Z)^{\otimes 2}}&=k^{10} + \frac{10}{3} k^8 l^2[\alpha^2+\beta^2] \nonumber\\
&+ \frac{10}{3}k^6l^4[\frac{3}{10}(\alpha^4+\beta^4) + \alpha^2\beta^2 +\frac{1}{3} \operatorname{Tr}(TT^T) +2\vec{\alpha}T\vec{\beta}] \nonumber\\
&+ \frac{2}{3}k^4l^6[(\alpha^2+\beta^2)(\operatorname{Tr}(TT^T)+2\vec{\alpha}T\vec{\beta}) + 2( [\vec{\alpha}T]^2 + [T\vec{\beta}]^2)] \nonumber\\
&+ \frac1{15} k^2l^8[2\operatorname{Tr}(TT^TTT^T)
+ \operatorname{Tr}(TT^T)(\operatorname{Tr}(TT^T)+4\vec{\alpha}T\vec{\beta})+8\vec{\alpha}TT^TT\vec{\beta}], \end{align} which allows us to extract $\vec{\alpha}TT^TT\vec{\beta}$.
Moreover, Weingarten calculus yields the useful expansion \begin{multline}
\int \text{d}U
\operatorname{Tr}[\sigma_{i_1}U ZU^\dagger]
\operatorname{Tr}[\sigma_{i_2}U ZU^\dagger]
\operatorname{Tr}[\sigma_{i_3}U ZU^\dagger]
\operatorname{Tr}[\sigma_{i_4}U ZU^\dagger]
\operatorname{Tr}[\sigma_{i_5}U ZU^\dagger]
\operatorname{Tr}[\sigma_{i_6}U ZU^\dagger]\\
= \frac{64}{105}\{
\delta_{i_1i_2} [\delta_{i_3i_4}\delta_{i_5i_6} +\delta_{i_3i_5}\delta_{i_4i_6}+\delta_{i_3i_6}\delta_{i_4i_5}]
+\delta_{i_1i_3} [\delta_{i_2i_4}\delta_{i_5i_6} +\delta_{i_2i_5}\delta_{i_4i_6}+\delta_{i_2i_6}\delta_{i_4i_5}]\\
+\delta_{i_1i_4} [\delta_{i_2i_3}\delta_{i_5i_6} +\delta_{i_2i_5}\delta_{i_3i_6}+\delta_{i_2i_6}\delta_{i_3i_5}]
+\delta_{i_1i_5} [\delta_{i_2i_3}\delta_{i_4i_6} +\delta_{i_2i_4}\delta_{i_3i_6}+\delta_{i_2i_6}\delta_{i_3i_4}]\\
+\delta_{i_1i_6} [\delta_{i_2i_3}\delta_{i_4i_5} +\delta_{i_2i_4}\delta_{i_3i_5}+\delta_{i_2i_5}\delta_{i_3i_4}]\}, \end{multline} which can readily be used to calculate \begin{align}
\RR{6}{(k\mathds{1} + l Z)^{\otimes 2}}&=k^{12} + 5 k^{10} l^2[\alpha^2+\beta^2] \nonumber\\
&+ \frac{1}{3}k^8l^4[9(\alpha^4+\beta^4) + 30\alpha^2\beta^2 +5\operatorname{Tr}(TT^T) +40\vec{\alpha}T\vec{\beta}] \nonumber\\
&+k^6l^6[(\alpha^2+\beta^2)(\alpha^2\beta^2+2\operatorname{Tr}(TT^T)+8\vec{\alpha}T\vec{\beta})
+ 4( [\vec{\alpha}T]^2 + [T\vec{\beta}]^2)
+\frac{1}{7}(\alpha^6+\beta^6)] \nonumber\\
&+ \frac1{5} k^4l^8[2\operatorname{Tr}(TT^TTT^T)
+ \operatorname{Tr}(TT^T)(\operatorname{Tr}(TT^T)+8\vec{\alpha}T\vec{\beta}
+\frac{5}{7}(\alpha^4+\beta^4)+2\alpha^2 \beta^2)]\nonumber\\
&+\frac1{5} k^4l^8[16\vec{\alpha}TT^TT\vec{\beta}
+(\frac{20}{7}\alpha^2+4\beta^2)[\vec{\alpha}T]^2
+(\frac{20}{7}\beta^2+4\alpha^2) [T\vec{\beta}]^2
+8[\vec{\alpha}T\vec{\beta}]^2]\nonumber\\
&+\frac{1}{35}k^2l^{10}[(\alpha^2+\beta^2)(2\operatorname{Tr}(TT^TTT^T)
+ \operatorname{Tr}(TT^T)^2)+4([\vec{\alpha}T]^2 + [T\vec{\beta}]^2)\operatorname{Tr}(TT^T)]\nonumber\\
&+\frac{8}{35}k^2l^{10}[[\vec{\alpha} T T^T]^2 + [T^TT\vec{\beta}]^2]\nonumber\\
&+\frac{1}{735}l^{12}[
8\operatorname{Tr}(TT^TTT^TTT^T)+\operatorname{Tr}(TT^T)(6\operatorname{Tr}(TT^TTT^T)+\operatorname{Tr}(TT^T)^3)]. \end{align} This expression yields $[\vec{\alpha} T T^T]^2$ and $[T^TT\vec{\beta}]^2$.
\subsection{Invariants from non-product observables}\label{app:nonprod}
There are two invariants left, namely $I_1 = \det(T)$ and $I_{14} = \operatorname{Tr}[(\star\vec{\alpha})T(\star \vec{\beta})^TT^T]$. Before we continue, note that for the moments of the partially transposed state $\rho^{T_B} = (\text{id} \otimes \hat{T})(\rho)$, where $\hat{T}$ denotes the usual transposition map, it holds that \begin{align} \RR{t}{A\otimes B}(\rho^{T_B}) &:= \iint \text{d}U_A \text{d}U_B \{\operatorname{Tr}[(U_A\otimes U_B) \rho^{T_B} (U_A^\dagger \otimes U_B^\dagger) (A\otimes B)]\}^t \nonumber\\
&= \iint \text{d}U_A \text{d}U_B \{\operatorname{Tr}[(U_A\otimes U_B) \rho (U_A^\dagger \otimes U_B^\dagger) (A\otimes B^T)]\}^t\nonumber\\
&=\RR{t}{A\otimes B^T}(\rho). \end{align} However, if we write for qubits $B = k\mathds{1} + l_X X + l_Y Y + l_Z Z$, then $B^T = k\mathds{1} + l_X X - l_Y Y + l_Z Z$, which is related to $B$ via a simple unitary rotation. Therefore, $\RR{t}{A\otimes B}(\rho^{T_B}) = \RR{t}{A\otimes B}(\rho)$, as long as the observable is product. As noted before, however, invariants $I_1$ and $I_{14}$ flip sign under partial transposition, and can therefore not occur in these product moments.
To circumvent this problem, we consider instead non-product observables. We start by using $\mathcal{M}_{\det} = \sum_{i=1}^3 \sigma_i\otimes \sigma_i$ and $t=3$. Note that even though the observable is non-product, the moments can still be obtained by local measurements, as \begin{align}
\RR{t}{\mathcal{M}_{\det}} = \iint \text{d}U_A \text{d}U_B \{\operatorname{Tr}[(U_A\otimes U_B) \rho (U_A^\dagger \otimes U_B^\dagger) \sum_{i}\sigma_i \otimes \sigma_i]\}^t, \end{align} and the expectation value can be obtained from the three measurements $ X\otimes X$, $ Y\otimes Y$, $ Z\otimes Z$ for a fixed choice of unitaries.
We now evaluate explicitly \begin{multline}
\RR{3}{\mathcal{M}_{\det}}(\rho) = \iint \text{d}U_A \text{d}U_B\left[\sum_i \operatorname{Tr}(\rho_U \sigma_i\otimes\sigma_i)^3 + 3\sum_{i\neq j}\operatorname{Tr}(\rho_U \sigma_i\otimes\sigma_i)^2 \operatorname{Tr}(\rho_U \sigma_j\otimes \sigma_j) \right.\\
\left.\phantom{\sum_i} + 6\operatorname{Tr}(\rho_U X\otimes X)\operatorname{Tr}(\rho_U Y\otimes Y)\operatorname{Tr}(\rho_U Z\otimes Z) \right]. \end{multline} Here, we used the abbreviation $\rho_U := (U_A\otimes U_B)\rho (U_A^\dagger \otimes U_B^\dagger)$.
Using Weingarten calculus again, one can quickly see that only the last term is non-vanishing with
\begin{align} \int \text{d}U\operatorname{Tr}(\sigma_{i}U X U^\dagger)\operatorname{Tr}(\sigma_{j}U Y U^\dagger)\operatorname{Tr}(\sigma_{k}U Z U^\dagger) = \frac43\epsilon_{ijk}. \end{align} Thus, \begin{align}
\RR{3}{\mathcal{M}_{\det}}(\rho) &= 6\iint \text{d}U_A \text{d}U_B\operatorname{Tr}(\rho_U X\otimes X)\operatorname{Tr}(\rho_U Y\otimes Y)\operatorname{Tr}(\rho_U Z\otimes Z) \nonumber\\
&=\frac{6\cdot 4\cdot 4}{4^3 \cdot3\cdot 3}\sum_{i_1i_2i_3j_1j_2j_3} T_{i_1j_1}T_{i_2j_2}T_{i_3j_3}\epsilon_{i_1i_2i_3}\epsilon_{j_1j_2j_3} \nonumber\\
&= \det(T) = I_1. \end{align}
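The single-qubit Haar integral used above can be checked numerically without Monte Carlo sampling: the 24 single-qubit Clifford gates form a unitary 3-design, so averaging this degree-$(3,3)$ polynomial over them reproduces the Haar value $\frac43\epsilon_{ijk}$ exactly. A minimal sketch in Python (numpy only; the group-generation and phase-fixing routines are our own construction, not part of the derivation):

```python
import numpy as np
from itertools import product

# Pauli matrices and the Clifford generators H and S.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]])

def canon(U):
    """Fix the global phase (first sizable entry made positive real)
    so that group elements can be compared up to phase."""
    z = U.flatten()[np.argmax(np.abs(U.flatten()) > 0.3)]
    return np.round(U * (z.conjugate() / abs(z)), 9) + 0.0

# Close {H, S} under multiplication: the 24 single-qubit Cliffords.
group, frontier = {}, [np.eye(2, dtype=complex)]
while frontier:
    nxt = []
    for U in frontier:
        key = canon(U).tobytes()
        if key not in group:
            group[key] = U
            nxt += [H @ U, S @ U]
    frontier = nxt
cliffords = list(group.values())
assert len(cliffords) == 24

# Levi-Civita tensor.
eps = np.zeros((3, 3, 3))
for (i, j, k), sgn in [((0, 1, 2), 1), ((1, 2, 0), 1), ((2, 0, 1), 1),
                       ((0, 2, 1), -1), ((2, 1, 0), -1), ((1, 0, 2), -1)]:
    eps[i, j, k] = sgn

sig = [X, Y, Z]

def avg(i, j, k):
    """Group average of Tr(s_i U X U+) Tr(s_j U Y U+) Tr(s_k U Z U+)."""
    return np.mean([np.trace(sig[i] @ U @ X @ U.conj().T)
                    * np.trace(sig[j] @ U @ Y @ U.conj().T)
                    * np.trace(sig[k] @ U @ Z @ U.conj().T)
                    for U in cliffords]).real

# Since the Cliffords form a unitary 3-design, the average equals the
# Haar integral (4/3) * eps_{ijk} for every index choice.
for i, j, k in product(range(3), repeat=3):
    assert abs(avg(i, j, k) - 4 / 3 * eps[i, j, k]) < 1e-9
```

In particular, any repeated index, e.g. $(i,j,k)=(1,1,3)$, averages to zero, reproducing the antisymmetry of $\epsilon_{ijk}$.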
For invariant $I_{14}$, we need an expression of order four. We choose $\mathcal{M}_\text{Hodge} = \mathds{1}\otimes X + X \otimes \mathds{1} + Y\otimes Z + Z\otimes Y$. To evaluate the moments of this operator, we can apply the unitary average to the operator itself. To that end, we write \begin{align}
\RR{4}{\mathcal{M}} &= \iint \text{d}U_A \text{d}U_B \operatorname{Tr}(\rho_U^{\otimes 4} \mathcal{M}^{\otimes 4}) \nonumber\\
&= \iint \text{d}U_A \text{d}U_B \operatorname{Tr}[\rho^{\otimes 4} ((U_A\otimes U_B) \mathcal{M} (U_A^\dagger \otimes U_B^\dagger))^{\otimes 4}] \nonumber\\
&= \operatorname{Tr}\left[\rho^{\otimes 4} \iint \text{d}U_A \text{d}U_B ((U_A\otimes U_B) \mathcal{M} (U_A^\dagger \otimes U_B^\dagger))^{\otimes 4}\right]. \end{align} Thus, we can twirl the operator instead of the state. To evaluate this, we first rearrange the parties in the four-copy space as $A_1, A_2, A_3, A_4, B_1, B_2, B_3, B_4$. Then, before the averaging, we get for $\mathcal{M}_\text{Hodge}$ \begin{align}\label{eq:hodgetwirl}
\iint \text{d}U_A \text{d}U_B U_A^{\otimes 4} U_B^{\otimes 4} & (\mathds{1}\mathds{1}\mathds{1}\mathds{1} \otimes XXXX + XXXX\otimes \mathds{1}\mathds{1}\mathds{1}\mathds{1} + \mathds{1}\mathds{1} XX\otimes XX\mathds{1}\mathds{1} + \text{perms.} + \nonumber \\
&+\mathds{1}\mathds{1} YY\otimes XXZZ + \text{perms.} + \mathds{1}\mathds{1} ZZ\otimes XXYY + \text{perms.} +\nonumber \\
&+XXZZ\otimes \mathds{1}\mathds{1} YY + \text{perms.} + XXYY \otimes \mathds{1}\mathds{1} ZZ+ \text{perms.} +\nonumber \\
&+YYZZ\otimes ZZYY + \text{perms.} + YYYY\otimes ZZZZ + ZZZZ \otimes YYYY + \nonumber \\
&+\mathds{1} XYZ \otimes X\mathds{1} ZY + \text{perms.} + \ldots)(U_A^\dagger)^{\otimes 4}(U_B^\dagger)^{\otimes 4}. \end{align} Here, perms. denotes all permutations among the A and B parties of the preceding term. For instance, $\mathds{1} \mathds{1} XX\otimes XX\mathds{1}\mathds{1} + \text{perms.} = \mathds{1} \mathds{1} XX\otimes XX\mathds{1}\mathds{1} + \mathds{1} X \mathds{1} X\otimes X\mathds{1} X\mathds{1}+\mathds{1} XX\mathds{1} \otimes X \mathds{1} \mathds{1} X + X\mathds{1} \mathds{1} X \otimes \mathds{1} XX\mathds{1} + X\mathds{1} X \mathds{1} \otimes \mathds{1} X \mathds{1} X + XX \mathds{1} \mathds{1} \otimes \mathds{1} \mathds{1} XX$, yielding six terms in total. Note that we have already omitted all those terms that yield zero after applying the integral.
In the following, we concentrate only on those terms yielding invariant $I_{14}$: These are the terms in the last row of Eq.~(\ref{eq:hodgetwirl}), where we use \begin{align}
\int \text{d}U U^{\otimes 4} \mathds{1} X Y Z (U^\dagger)^{ \otimes 4}=\sum_{\pi\in S_3} \operatorname{sgn}(\pi) \mathds{1} \sigma_{\pi_1}\sigma_{\pi_2}\sigma_{\pi_3}. \end{align} This yields terms in the fourth moment of the following form: \begin{align}
\RR{4}{\mathcal{M}_{\text{Hodge}}}(\rho) &= \ldots + 24\iint \text{d}U_A \text{d}U_B \operatorname{Tr}(\rho_U \mathds{1}\otimes X)\operatorname{Tr}(\rho_U X\otimes\mathds{1})\operatorname{Tr}(\rho_U Y\otimes Z)\operatorname{Tr}(\rho_U Z\otimes Y) \nonumber\\
&= \ldots + \frac{24}{4^4}\sum_{i_2i_3i_4j_1j_3j_4}\beta_{j_1}\alpha_{i_2}T_{i_3j_3}T_{i_4j_4}\times \nonumber\\
&\phantom{=\ldots + 2} \times \int \text{d}U_A \operatorname{Tr}(\sigma_{i_2}U_A XU_A^\dagger)\operatorname{Tr}(\sigma_{i_3}U_A YU_A^\dagger) \operatorname{Tr}(\sigma_{i_4}U_A ZU_A^\dagger) \times \nonumber\\
&\phantom{=\ldots + 2} \times \int\text{d}U_B \operatorname{Tr}(\sigma_{j_1}U_B XU_B^\dagger)\operatorname{Tr}(\sigma_{j_3}U_B ZU_B^\dagger)\operatorname{Tr}(\sigma_{j_4}U_B YU_B^\dagger) \nonumber\\
&=\ldots - \frac{24\cdot4\cdot4}{4^4\cdot 3\cdot 3}\sum_{i_2i_3i_4j_1j_3j_4}\beta_{j_1}\alpha_{i_2}T_{i_3j_3}T_{i_4j_4} \epsilon_{i_2i_3i_4}\epsilon_{j_1j_3j_4}\nonumber\\
&=\ldots-\frac16\sum_{ijklmn}\epsilon_{ijk}\epsilon_{lmn} \alpha_i \beta_l T_{jm}T_{kn} = \ldots - \frac16\operatorname{Tr}[(\star\vec{\alpha})T(\star \vec{\beta})^TT^T]. \end{align} Finally, in order to get rid of the undetermined contributions, note that when measuring $\mathcal{M}_\text{Hodge}^\prime = \mathds{1}\otimes X + X \otimes \mathds{1} + Y\otimes Z - Z\otimes Y$ instead, the only term in Eq.~(\ref{eq:hodgetwirl}) that changes sign is the Hodge term. Thus, the difference of the measured fourth moments of $\mathcal{M}_\text{Hodge}$ and $\mathcal{M}_\text{Hodge}^\prime$ is equal to one third of the invariant in question.
\subsection{Expressing the invariants via fidelities}
Instead of measuring expectation values of abstract operators, we can think of these as average fidelities. Indeed, coming back to invariant $I_1 = \det(T)$, we can make use of \begin{align}
\mathcal{M}_\text{det} = \mathds{1}\otimes\mathds{1} - 4\ket{\psi^-}\bra{\psi^-}, \end{align} which follows from $\ket{\psi^-}\!\bra{\psi^-} = \frac14\big(\mathds{1}\otimes\mathds{1} - \sum_{i=1}^3\sigma_i\otimes\sigma_i\big)$. Thus, we can extract the determinant from the fidelity moments \begin{align}
\RR{t}{\ket{\psi^-}\!\bra{\psi^-}}(\rho) = \iint \text{d}U_A \text{d}U_B \braket{\psi^-|\rho_U|\psi^-}^t, \qquad t \leq 3, \end{align} where $\rho_U = (U_A\otimes U_B) \rho (U_A^\dagger \otimes U_B^\dagger)$.
For $\mathcal{M}_\text{Hodge}$, we can use similar tricks. By noticing that \begin{align}
\ket{\nu_X} &= \frac12(1, e^{i\pi/4}, e^{i\pi/4}, 1)^T \end{align} can be expanded as \begin{align}
\ket{\nu_X} \bra{\nu_X}&=\frac14[\mathds{1} \otimes \mathds{1} + \frac1{\sqrt{2}}(\mathds{1} \otimes X + X \otimes \mathds{1} + Y\otimes Z + Z \otimes Y) + X\otimes X], \end{align} we see that \begin{align}
\frac1{\sqrt{2}}\mathcal{M}_\text{Hodge} = 4\ket{\nu_X}\bra{\nu_X} - \mathds{1}\otimes\mathds{1} - X\otimes X, \end{align} which allows us to extract $I_{14}$ in terms of the fourth moment of the overlap between $\rho$ and $\ket{\nu_X}$. Note that, due to local unitary invariance, we can instead also use \begin{align}
\ket{\nu_Z} &= (a,0,0,\sqrt{1-a^2})^T \end{align} with $a=\sqrt{\frac{1+\sqrt{2}}{2\sqrt{2}}} = \cos(\pi/8)$. To be more precise, measuring \begin{align}
\RR{4}{\ket{\nu_Z}\!\bra{\nu_Z}}(\rho) &= \iint \text{d}U_A \text{d}U_B \braket{\nu_Z|\rho_U|\nu_Z}^4 \nonumber\\ &=\frac{1}{75\cdot 2^{10}}\left[ 300(1-\alpha^2 - \beta^2) +400\operatorname{Tr}(TT^T) - 600\det(T)+400 \vec{\alpha}T\vec{\beta} \right. \nonumber \\ & \phantom{-----}+15(\alpha^4 + \beta^4)+50\alpha^2\beta^2+60(\alpha^2+\beta^2)\operatorname{Tr}(TT^T) \nonumber\\ & \phantom{-----}\left.+20[(\vec{\alpha}T)^2 +(T\vec{\beta})^2] -23\operatorname{Tr}(TT^TTT^T)+51\operatorname{Tr}(TT^T)^2-50I_{14}\right]. \end{align}
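The operator identities underlying this construction are easy to check numerically. A minimal sketch (numpy; the state vectors are copied from the text) verifying the Pauli expansion of $\ket{\nu_X}\!\bra{\nu_X}$, the resulting relation to $\mathcal{M}_\text{Hodge}$, and the stated value of $a$:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
kron = np.kron

# |nu_X> as given in the text, and its projector.
nu = 0.5 * np.array([1, np.exp(1j * np.pi / 4), np.exp(1j * np.pi / 4), 1])
P = np.outer(nu, nu.conj())

# Pauli expansion of |nu_X><nu_X| claimed in the text.
M_hodge = kron(I2, X) + kron(X, I2) + kron(Y, Z) + kron(Z, Y)
expansion = 0.25 * (kron(I2, I2) + M_hodge / np.sqrt(2) + kron(X, X))
assert np.allclose(P, expansion)

# Equivalent operator identity used for the fidelity moments.
assert np.allclose(M_hodge / np.sqrt(2), 4 * P - kron(I2, I2) - kron(X, X))

# The locally equivalent |nu_Z> = (a, 0, 0, sqrt(1-a^2)) uses a = cos(pi/8).
a = np.sqrt((1 + np.sqrt(2)) / (2 * np.sqrt(2)))
assert np.isclose(a, np.cos(np.pi / 8))
```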
\subsection{Quantification of negativity}\label{app:negativity} A typical entanglement measure in two-qubit systems is the negativity \cite{vidal2002computable}, which is defined as \begin{align}
N(\rho) = -2\min\{0,\mu(\rho^{T_B})\}, \end{align} where $\mu(\rho^{T_B})$ is the minimal eigenvalue of $\rho^{T_B}$; we note that in the case of two entangled qubits, exactly one eigenvalue is negative \cite{sanpera1998local}.
With the help of Newton's identities, this eigenvalue can be calculated from the moments $p_k=\operatorname{Tr}[(\rho^{T_B})^k]$. In fact, it has been shown that the negativity can be obtained by solving the following fourth-degree polynomial for $N$ \cite{bartkiewicz2015quantifying}: \begin{align}
48\det(\rho^{T_B})+3N^4+6N^3-6N^2(p_2-1)-4N(3p_2-2p_3-1)=0. \end{align} We remark that the determinant $\det(\rho^{T_B})$ can be rewritten in terms of the moments $p_k$ via \cite{augusiak2008universal} \begin{align}
\det(\rho^{T_B})
= \frac{1}{24}
(1-6p_4+8p_3+3p_2^2-6p_2). \end{align} Therefore, to quantify the negativity, it is sufficient to use known relations between the $p_k$ and LU invariants \cite{bartkiewicz2015method}: \begin{align}
p_2 &= \frac{1}{4}(1+x_1),\\
p_3 &= \frac{1}{16}(1+3x_1+6x_2),\\
p_4 &= \frac{1}{64}(1+6x_1+24x_2+x_1^2+2x_3+4x_4), \end{align} where \begin{align}
x_1& = I_2+I_4+I_7,\\
x_2& = I_1+I_{12} ,\\
x_3& = I_2^2 -I_3,\\
x_4& = I_5+I_8+I_{14}+I_4 I_7. \end{align}
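The moment formula for the determinant and the quartic for $N$ can be checked on any concrete state. A minimal sketch, using a Werner state $\rho = p\,\ket{\psi^-}\!\bra{\psi^-} + (1-p)\mathds{1}/4$ as a hypothetical example (numpy; this state is an illustration of ours, not a state from the experiment):

```python
import numpy as np

# Hypothetical example: Werner state, entangled (NPT) for p > 1/3.
p = 0.8
psi_minus = np.array([0, 1, -1, 0]) / np.sqrt(2)
rho = p * np.outer(psi_minus, psi_minus) + (1 - p) * np.eye(4) / 4

# Partial transpose on the second qubit.
rho_TB = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

# Moments p_k = Tr[(rho^TB)^k].
mom = lambda k: np.trace(np.linalg.matrix_power(rho_TB, k)).real
p2, p3, p4 = mom(2), mom(3), mom(4)

# Determinant from the moments via Newton's identities.
det_TB = (1 - 6*p4 + 8*p3 + 3*p2**2 - 6*p2) / 24
assert np.isclose(det_TB, np.linalg.det(rho_TB).real)

# Negativity: -2 times the single negative eigenvalue.
neg = -2 * min(0.0, np.linalg.eigvalsh(rho_TB).min())
assert np.isclose(neg, 0.7)   # equals (3p - 1)/2 for the Werner state

# ...and it is a root of the quartic quoted above.
quartic = (48*det_TB + 3*neg**4 + 6*neg**3
           - 6*neg**2*(p2 - 1) - 4*neg*(3*p2 - 2*p3 - 1))
assert abs(quartic) < 1e-10
```

The quartic is, up to a factor, the characteristic polynomial of $\rho^{T_B}$ evaluated at $\mu = -N/2$, which is why the physical negativity always solves it.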
\section{Randomness of unitary gates} \label{app:randomunitaries}
\subsection{Frame potential} A major concern in the experimental setup is the generation of random local unitary rotations. Since the framework of randomized measurements requires the unitaries to be distributed according to the Haar measure, it is important to check that the experimentally applied unitaries are indeed Haar random.
In order to check this, we resort to the so-called frame potential and unitary designs \cite{gross2007evenly, scott2008optimizing, hunter2019unitary}. A set $\mathcal{U}=\{U_i\}_{i=1}^N$ of $N$ unitary gates is called a unitary $t$-design, if \begin{equation}
\intop p(U)\,\text{d}U = \frac1N\sum_{i=1}^N p(U_i) \end{equation} for every polynomial $p$ of degree at most $t$ in the entries of the unitaries and their complex conjugates. That is, one can replace the integration over the Haar measure by averaging over a finite set of (carefully chosen) unitaries. The number of elements of such designs, which are known to exist for every dimension $d$ and order $t$, naturally grows with the order $t$ of the polynomial one wishes to average over. As the criterion we want to check involves invariants of order up to four, we are mainly interested in four-designs.
In order to check whether a given set $\mathcal{U}$ of $N$ unitaries constitutes a four-design, one can use the frame potential. It is defined for a set of unitaries via \begin{equation}\label{eq:framepotential}
F_t(\mathcal{U}) := \frac1{N^2} \sum_{U,V\in \mathcal{U}} \vert\operatorname{Tr}(UV^\dagger)\vert^{2t}. \end{equation} Interestingly, it is minimized by unitary $t$-designs as well as in the limit of infinite sets of Haar random unitaries. The minimal value for $d=2$, i.e., qubit systems, is given by $F_t^\text{Haar} = \frac{(2t)!}{t!(t+1)!}$ \cite{gessel1990symmetric}.
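For qubits, these minimal values can be checked against a known design: the 24 single-qubit Clifford gates form a unitary 3-design, so their frame potential attains $F_t^\text{Haar}$ for $t\leq 3$ but exceeds it at $t=4$. A sketch (numpy; the group generation and phase-fixing are our own construction):

```python
import math
import numpy as np

# Generate the 24 single-qubit Cliffords (up to phase) from H and S.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]])

def canon(U):
    """Fix the global phase so elements can be compared up to phase."""
    z = U.flatten()[np.argmax(np.abs(U.flatten()) > 0.3)]
    return np.round(U * (z.conjugate() / abs(z)), 9) + 0.0

group, frontier = {}, [np.eye(2, dtype=complex)]
while frontier:
    nxt = []
    for U in frontier:
        key = canon(U).tobytes()
        if key not in group:
            group[key] = U
            nxt += [H @ U, S @ U]
    frontier = nxt
cliffords = list(group.values())

def frame_potential(us, t):
    """F_t of a finite set of unitaries (phase-independent)."""
    N = len(us)
    return sum(abs(np.trace(U @ V.conj().T)) ** (2 * t)
               for U in us for V in us) / N**2

def f_haar(t):
    """Minimal value (2t)!/(t!(t+1)!) for d = 2 (the Catalan numbers)."""
    return math.factorial(2 * t) // (math.factorial(t) * math.factorial(t + 1))

# The Clifford group is even a 3-design: it attains the minimum for
# t = 2 and t = 3, but not for t = 4.
assert np.isclose(frame_potential(cliffords, 2), f_haar(2))  # = 2
assert np.isclose(frame_potential(cliffords, 3), f_haar(3))  # = 5
assert frame_potential(cliffords, 4) > f_haar(4)             # > 14
```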
Now, if an experiment could draw $N\rightarrow \infty$ unitaries, one could evaluate the frame potential for $t=4$ and compare it to the minimum; if they match, one can be sure to have drawn the unitaries correctly (at least for the purposes of the task at hand; otherwise, $t$ has to be adjusted accordingly). For finite $N$, however, one expects a deviation from the minimal value even if the distribution is perfectly Haar random. To quantify this, we introduce the excess quantity \begin{align}
G_t(U_1,\ldots,U_N) := \frac{F_t(U_1,\ldots,U_N) }{F_t^\text{Haar}}. \end{align} Then, the expected excess $\mathbb{E}_{U_1,\ldots,U_N}\big[G_t(U_1,\ldots,U_N)\big]$ is given by \begin{align}
\mathbb{E}_{U_1,\ldots,U_N}\big[G_t(U_1,\ldots,U_N)\big] &= \intop\text{d}U_1\ldots\text{d}U_N \frac{F_t(U_1,\ldots,U_n) }{F_t^\text{Haar}}\nonumber\\
&= \frac1{N^2F_t^\text{Haar}}\sum_{i,j=1}^N \intop \text{d}U_1\ldots \text{d}U_N \vert\operatorname{Tr}(U_i U_j^\dagger)\vert^{2t}\nonumber\\
&= \frac1{N^2F_t^\text{Haar}}\left[Nd^{2t} + \sum_{i\neq j}\intop \text{d}U\text{d}V \vert\operatorname{Tr}(UV^\dagger)\vert^{2t}\right] \nonumber\\
&= \frac1{N^2F_t^\text{Haar}}\left[Nd^{2t} + N(N-1)F_t^\text{Haar}\right]\nonumber\\
&= \frac{d^{2t}}{NF_t^\text{Haar}} + \frac{N-1}{N},\label{eq:expft} \end{align} where the integral in the third line can either be solved via Weingarten calculus, or by noting that it coincides precisely with the value of the frame potential for Haar random unitaries, i.e., it yields its minimum $F_t^\text{Haar}$. From Eq.~(\ref{eq:expft}) it can be directly seen that the expectation value approaches one only in the limit $N\rightarrow \infty$, and otherwise is larger than one. In order to quantify the probability to observe an excess of this expectation value larger than $\delta$, we make use of the Cantelli inequality, stating that for a set of randomly drawn unitaries $\{U_1, \ldots, U_N\}$ \cite{ghosh2002probability}, \begin{align}
p(G_t(\{U_i\}) \geq \mathbb{E}(G_t) + \delta) \leq \frac{\operatorname{Var}(G_t)}{\delta^2 + \operatorname{Var}(G_t)} \end{align} with $\operatorname{Var}(G_t) = \mathbb{E}(G_t^2) - \mathbb{E}(G_t)^2$, and the abbreviation $\mathbb{E}(F) = \mathbb{E}_{U_1,\ldots,U_N}F(U_1,\ldots,U_N)$. We calculate \begin{align}
\mathbb{E}(G_t^2) = \frac1{N^4 (F_t^\text{Haar})^2} \sum_{ijkl=1}^N \intop \text{d}U_1\ldots\text{d}U_N \vert\operatorname{Tr}(U_iU_j^\dagger)\vert^{2t}\vert\operatorname{Tr}(U_kU_l^\dagger)\vert^{2t}. \end{align} In order to proceed, several cases concerning the summation variables have to be distinguished: \begin{align}
(A)~i=j=k=l, && (B)~i=j=k\neq l, && (C)~i=j\neq k=l, \nonumber\\
(D)~i=k\neq j=l, && (E)~i=j\neq k\neq l, && (F)~i=k\neq j\neq l,\nonumber\\
(G)~i\neq j \neq k \neq l. \end{align} (Note that here, $j\neq k \neq l$ is understood to also imply $j\neq l$.) Each of the cases can be solved individually, but occurs multiple times. We go through the expressions one at a time. \begin{align}
(A) &= \frac{d^{4t}}{N^4 (F_t^\text{Haar})^2} = (C), \\
(B) &= \frac1{N^4 (F_t^\text{Haar})^2} d^{2t} \intop \text{d}U \text{d}V \vert \operatorname{Tr}(UV^\dagger)\vert^{2t} = \frac{d^{2t}}{N^4 F_t^\text{Haar}} = (E), \\
(D) &= \frac1{N^4 (F_t^\text{Haar})^2} \intop \text{d}U \text{d}V \vert \operatorname{Tr}(UV^\dagger)\vert^{4t} = \frac{F_{2t}^\text{Haar}}{N^4 (F_t^\text{Haar})^2},\\
(F) &= \frac1{N^4 (F_t^\text{Haar})^2} \intop \text{d}U \text{d}V \text{d}W \vert \operatorname{Tr}(UV^\dagger)\vert^{2t}\vert \operatorname{Tr}(UW^\dagger)\vert^{2t} \nonumber\\
&= \frac1{N^4 (F_t^\text{Haar})^2} \intop \text{d}U \left(\intop \text{d}V \vert \operatorname{Tr}(UV^\dagger)\vert ^{2t}\right)^2 \nonumber\\
&= \frac1{N^4 (F_t^\text{Haar})^2} \intop \text{d}U \left(\intop \text{d}(VU^\dagger) \vert \operatorname{Tr}(UV^\dagger)\vert ^{2t}\right)^2 \nonumber\\
&= \frac1{N^4 (F_t^\text{Haar})^2} \intop \text{d}U (F_t^\text{Haar})^2 =\frac1{N^4}, \\
(G) &= \frac1{N^4 (F_t^\text{Haar})^2} \intop \text{d}U \text{d}V \text{d}W \text{d}X \vert \operatorname{Tr}(UV^\dagger)\vert^{2t}\vert \operatorname{Tr}(WX^\dagger)\vert^{2t} = \frac1{N^4} = (F). \end{align} Note that each term occurs multiple times, depending on the number of configurations. Thus, we collect all of them to obtain \begin{align}
\mathbb{E}(G_t^2) &= N(A) + 4N(N-1)(B) + N(N-1)(C) + 2N(N-1)(D) +\nonumber\\
&\phantom{=}+ 2N(N-1)(N-2)(E) + 4N(N-1)(N-2)(F) +\nonumber \\
&\phantom{=}+N(N-1)(N-2)(N-3)(G). \end{align} Inserting the results and subtracting $\mathbb{E}(G_t)^2$ yields \begin{align}
\operatorname{Var}(G_t) = \frac{2N(N-1)}{N^4}\left[\frac{F_{2t}^\text{Haar}}{(F_t^\text{Haar})^2}-1\right]. \end{align}
Plugging this into Cantelli's inequality, we obtain the plot in Fig.~\ref{fig:framepotential}. It is to be understood as follows: Setting the right-hand side to 10\%, we obtain an error band that allows us to conclude that for a set of correctly drawn unitaries of a certain size, the value of the frame potential will lie with 90\% probability within the marked area. Conversely, if one observes a value above that threshold, then with high probability the process was not Haar random.
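The band itself is pure arithmetic once $\mathbb{E}(G_t)$, $\operatorname{Var}(G_t)$, and the Cantelli bound are combined. A sketch of this bookkeeping for $d=2$ (the function names are ours):

```python
import math

def f_haar(t):
    """Minimal frame potential (2t)!/(t!(t+1)!) for d = 2."""
    return math.factorial(2 * t) // (math.factorial(t) * math.factorial(t + 1))

def expected_excess(N, t, d=2):
    """E[G_t] for N Haar-random unitaries, Eq. (expft)."""
    return d ** (2 * t) / (N * f_haar(t)) + (N - 1) / N

def var_excess(N, t):
    """Var(G_t) for N Haar-random unitaries (d = 2)."""
    return 2 * N * (N - 1) / N**4 * (f_haar(2 * t) / f_haar(t) ** 2 - 1)

def cantelli_band(N, t, prob=0.9):
    """One-sided excess delta with P(G_t >= E[G_t] + delta) <= 1 - prob."""
    v = var_excess(N, t)
    return math.sqrt(v * prob / (1 - prob))

# The expected excess is larger than one for finite N and tends to one.
assert expected_excess(60, 4) > 1
assert expected_excess(10**9, 4) < 1 + 1e-6

# Example: upper edge of the 90% band for a set of 60 unitaries at t = 4.
upper = expected_excess(60, 4) + cantelli_band(60, 4)
```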
\begin{figure}
\caption{The expected value of the frame potential in Eq.~(\ref{eq:framepotential}) for Haar-randomly drawn sets of unitaries as a function of the set size for $t=2$ (blue) and $t=4$ (red). The colored areas above the curve denote the $1\sigma$ and $2\sigma$ confidence regions that are used to check whether the generated unitaries are compatible with the assumption of Haar-randomness. Note that values below the curves are attainable, but our main concern are sets of points whose frame potential values are higher than expected. Thus, we give one-sided confidence regions here.}
\label{fig:framepotential}
\end{figure}
\subsection{Spherical frames} In the experiment at hand, we characterize the randomly drawn unitaries by applying them to the fixed input state $\ket{0}$ and measuring the resulting state. Rather than the random unitaries themselves, this yields a set of random states $\{\ket{\psi_i}\}_{i=1}^N$, from which the underlying unitary can only be reconstructed up to a relative phase.
We can remedy this by analyzing the randomness of the output states instead. A unitary $t$-design generates a spherical $t$-design by applying the unitaries to some fixed state. Analogously to the case of unitary designs, a spherical $t$-design is a set of vectors $\mathcal{S} = \{\ket{\psi_i}\}_{i=1}^N$, such that \begin{align}
\intop p(\ket{\psi}) \text{d}\!\ket{\psi} = \frac1N\sum_{i=1}^N p(\ket{\psi_i}) \end{align} for all polynomials of degree $t$ or less in the entries of the vector $\ket{\psi}$. Also in this case, one can define the (spherical) frame potential, \begin{align}
\tilde{F}_t(\mathcal{S}) := \frac1{N^2} \sum_{\ket{\phi}, \ket{\psi} \in \mathcal{S}} \vert\braket{\psi|\phi}\vert^{2t}, \end{align} which is minimized iff $\mathcal{S}$ is a spherical $t$-design. In contrast to the unitary case, the minimal value is given by $\tilde{F}_t^\text{Haar} = \frac{t!(d-1)!}{(t+d-1)!}$ \cite{renes2004frames}.
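As in the unitary case, the minimal value can be checked against a known design: the six single-qubit stabilizer states, i.e., the octahedron on the Bloch sphere, form a projective 3-design but not a 4-design. A minimal sketch (numpy; using the stabilizer states as test set is our choice):

```python
import numpy as np
from math import factorial

# The six single-qubit stabilizer states (eigenstates of X, Y, Z):
# the octahedron on the Bloch sphere.
s = np.sqrt(0.5)
states = [np.array(v, dtype=complex) for v in
          [(1, 0), (0, 1),              # Z eigenstates
           (s, s), (s, -s),             # X eigenstates
           (s, 1j * s), (s, -1j * s)]]  # Y eigenstates

def spherical_fp(S, t):
    """Spherical frame potential of a set of pure states."""
    N = len(S)
    return sum(abs(np.vdot(a, b)) ** (2 * t) for a in S for b in S) / N**2

def fp_haar(t, d=2):
    """Minimal value t!(d-1)!/(t+d-1)!."""
    return factorial(t) * factorial(d - 1) / factorial(t + d - 1)

# The octahedron is a projective 3-design ...
assert np.isclose(spherical_fp(states, 2), fp_haar(2))   # = 1/3
assert np.isclose(spherical_fp(states, 3), fp_haar(3))   # = 1/4
# ... but not a 4-design: 5/24 > 1/5.
assert spherical_fp(states, 4) > fp_haar(4)
```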
In complete analogy to the unitary case, we can define, for a given set of vectors $\mathcal{S}$, the excess quantity $\tilde{G}_t(\mathcal{S}) = \frac{\tilde{F}_t(\mathcal{S})}{\tilde{F}_t^\text{Haar}}$.
Calculating expectation values and confidence bounds for finite $N$ follows exactly the same steps as in the unitary case and yields \begin{align}\label{eq:expGt}
\mathbb{E}(\tilde{G}_t) &= \frac{1}{N\tilde{F}_t^\text{Haar}} + \frac{N-1}N,\\
\operatorname{Var}(\tilde{G}_t) &= \frac{2N(N-1)}{N^4}\left[\frac{\tilde{F}_{2t}^\text{Haar}}{(\tilde{F}_t^\text{Haar})^2}-1\right]. \end{align} We convert these again into confidence bounds using Cantelli's inequality and plot the results in Fig.~\ref{fig:frame_data}.
\subsection{Experimentally applying random unitaries} As stated in Section \ref{sec:experiment}, we implement random local unitary rotations in the form of random polarization state rotations using polarization scramblers. One can naively expect that a polarization scrambler which can completely cover the Bloch sphere in a non-periodic fashion would indeed act as a random unitary rotation in polarization. To test this, we used a polarized light source and a standard polarimeter to inspect the polarization states output by the polarization scrambler over 60 scrambling runs. The resulting polarization states are shown in Fig.~\ref{fig:frame_data}(a). Each polarization state (denoted by a point on the sphere) was measured after scrambling for 10 seconds.
After measuring the polarization state output by the scrambler, we calculated the spherical frame potential. Figs. \ref{fig:frame_data}(b, c) show the resulting value of $F_{t}/F_{t}^{\text{Haar}}$ for a set of 60 randomly scrambled polarization states along with the $1\sigma$ and $2\sigma$ confidence regions described earlier. The values of $F_{t}/F_{t}^{\text{Haar}}$ clearly fall within the confidence bounds, confirming that, with high probability, the polarization scrambler produces nearly-Haar random unitary rotations.
\begin{figure*}
\caption{(a) Bloch sphere depicting the Stokes vectors resulting from 60 different unitary rotations applied with the polarization scrambler.
(b, c) Experimentally measured spherical frame potential (points) along with the expected value (lines) given by Eq.~(\ref{eq:expGt}) for Haar-randomly drawn sets of unitaries as a function of the set size for t = 2 (b) and t = 4 (c). The colored areas above the curve denote the $1\sigma$ and $2\sigma$ confidence regions. As our main concern is a distribution of unitaries that yields larger frame potentials than expected, we give one-sided confidence regions.}
\label{fig:frame_data}
\end{figure*}
\section{Construction of unbiased estimators} \label{app:unbiased_estimators} The experimental data allow us to estimate the expectation values of the chosen observables. However, we want to average certain powers of this quantity to form the moments, which requires the use of unbiased estimators. \subsection{General results} Suppose that we perform randomized measurements on two particles using the local observable $A\otimes A$ with \begin{align}
A = \text{diag}(k_1,k_2,\cdots,k_n). \end{align} In a single trial, we obtain one measurement outcome $X_i$, which is an element of the set $\{k_jk_l : 1\leq j \leq l \leq n\}$ and occurs with probability $p_i$; this is repeated for $N$ independent trials. Then one can write \begin{align}
E =\mathrm{tr} \left[(U_A \otimes U_B) \rho_{AB} (U_A^{\dagger} \otimes U_B^{\dagger})(A \otimes A)\right] = \sum_i p_i X_i. \end{align}
First, the unbiased estimator of $E$ is given by \begin{align} \widetilde{E} = \sum_i X_i \widetilde{p_i}, \end{align} where $\widetilde{p_i} = {N_i}/{N}$ and $\mathbb{E}[\widetilde{p_i}]=p_i$. Here, $N_i$ is the number of times the outcome $X_i$ is observed over the $N$ trials, with $\sum_i N_i = N$. Notice that the $N_i$ are random variables following a multinomial distribution with parameters $(N, \{p_i\})$. One can immediately check that $\widetilde{E}$ is an unbiased estimator of $E$, that is, $\mathbb{E} \big[\widetilde{E}\big]=E$, by recalling that the $N$ trials are independent and $\mathbb{E} [N_i] = Np_i$.
Next, let us create an unbiased estimator $\widetilde{E^2}$ such that $\mathbb{E}\big[\widetilde{E^2}\big] =E^2$. It should be noted that $\widetilde{E^2}$ is not given by $\big(\widetilde{E}\big)^2$. In fact, it can be written as \begin{align}
\widetilde{E^2} = \sum_i X_i^2 \widetilde{p_i^2}
+2\sum_{i < j} X_i X_j \widetilde{p_ip_j},
\label{eq:uE2} \end{align} where the unbiased estimators $\widetilde{p_i^2}$ and $\widetilde{p_ip_j}$ such that $\mathbb{E}\big[\widetilde{p_i^2}\big]=p_i^2$ and $\mathbb{E}\big[\widetilde{p_ip_j}\big]=p_ip_j$ are given by \begin{align}
\widetilde{p_i^2} = \frac{N(\widetilde{p_i})^2-\widetilde{p_i}}{N-1},\ \ \
\widetilde{p_ip_j}=\frac{N}{N-1}\widetilde{p_i} \widetilde{p_j}. \end{align} This can be straightforwardly shown using results from Ref.~\cite{newcomer2008computation}.
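Unbiasedness of these estimators can be verified exactly, without sampling, by enumerating all multinomial outcomes for a small example. A sketch (stdlib only; $N$, the probabilities $p_i$, and the outcome values $X_i$ are hypothetical example values of ours):

```python
from math import comb, prod

# Hypothetical example values.
N = 5
p = [0.5, 0.3, 0.2]
X = [1.0, -1.0, 0.5]

def outcomes(total, k):
    """All count vectors (N_1, ..., N_k) with sum `total`."""
    if k == 1:
        yield (total,)
        return
    for n in range(total + 1):
        for rest in outcomes(total - n, k - 1):
            yield (n,) + rest

def pmf(counts):
    """Multinomial probability of a count vector."""
    c, rem = 1, N
    for n in counts:
        c *= comb(rem, n)
        rem -= n
    return c * prod(q**n for q, n in zip(p, counts))

E_est_pi2, E_est_pipj, E_est_E2 = 0.0, 0.0, 0.0
for counts in outcomes(N, len(p)):
    w = pmf(counts)
    pt = [n / N for n in counts]                        # \tilde{p}_i
    est_pi2 = [(N * t**2 - t) / (N - 1) for t in pt]    # est. of p_i^2
    est_pipj = lambda i, j: N / (N - 1) * pt[i] * pt[j] # est. of p_i p_j
    E_est_pi2 += w * est_pi2[0]
    E_est_pipj += w * est_pipj(0, 1)
    # The estimator of E^2 built from them, as in the text.
    E2 = sum(X[i]**2 * est_pi2[i] for i in range(3)) \
        + 2 * sum(X[i] * X[j] * est_pipj(i, j)
                  for i in range(3) for j in range(i + 1, 3))
    E_est_E2 += w * E2

E_true = sum(x * q for x, q in zip(X, p))
assert abs(E_est_pi2 - p[0]**2) < 1e-12
assert abs(E_est_pipj - p[0] * p[1]) < 1e-12
assert abs(E_est_E2 - E_true**2) < 1e-12
```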
Similarly, we can create the unbiased estimators $\widetilde{E^3}$ and $\widetilde{E^4}$ such that $\mathbb{E}\big[\widetilde{E^3}\big] =E^3$ and $\mathbb{E}\big[\widetilde{E^4}\big] =E^4$. Using again Ref.~\cite{newcomer2008computation}, a straightforward calculation leads to the expressions \begin{align}
\widetilde{E^3}
&= \sum_i X_i^3 \widetilde{p_i^3}
+ 3\sum_{i<j}
\left(
X_i^2 X_j \widetilde{p_i^2 p_j}
+ X_i X_j^2 \widetilde{p_i p_j^2}
\right)
+6\sum_{i<j<k}
\left(
X_i X_j X_k
\widetilde{p_i p_j p_k}
\right),\\
\widetilde{E^4}
&= \sum_i X_i^4 \widetilde{p_i^4}
+ 4\sum_{i<j}
\left(
X_i^3 X_j \widetilde{p_i^3 p_j}
+ X_i X_j^3 \widetilde{p_i p_j^3}
\right)
+6\sum_{i<j}
\left(
X_i^2 X_j^2 \widetilde{p_i^2 p_j^2}
\right) \nonumber\\
\quad &+12\sum_{i<j<k}
\left(
X_i^2 X_j X_k \widetilde{p_i^2 p_j p_k}
+X_i X_j^2 X_k \widetilde{p_i p_j^2 p_k}
+X_i X_j X_k^2 \widetilde{p_i p_j p_k^2}
\right)
+24\sum_{i<j<k<l}
\left(
X_i X_j X_k X_l
\widetilde{p_i p_j p_k p_l}
\right), \end{align} where the unbiased estimators $\widetilde{q} \in \Big\{ \widetilde{p_i^3}, \widetilde{p_i p_j^2}, \widetilde{p_ip_jp_k}, \widetilde{p_i^4}, \widetilde{p_i^3 p_j}, \widetilde{p_i p_j^3}, \widetilde{p_i^2 p_j^2}, \widetilde{p_i^2 p_j p_k}, \widetilde{p_i p_j^2 p_k}, \widetilde{p_i p_j p_k^2}, \widetilde{{p_i p_j p_k p_l}} \Big\}$ such that $\mathbb{E}\big[\widetilde{q}\big]=q$ are given by \begin{align}
\widetilde{p_i^3}
&= \frac{N^2\widetilde{p_i}^3-3N\widetilde{p_i}^2+2\widetilde{p_i}}{(N-1)(N-2)},\\
\widetilde{p_i p_j^2}
&= \frac{N^2\widetilde{p_i}\widetilde{p_j}^2-N\widetilde{p_i}\widetilde{p_j}}{(N-1)(N-2)},\\
\widetilde{p_ip_jp_k}
&= \frac{N^2\widetilde{p_i}\widetilde{p_j}\widetilde{p_k}}{(N-1)(N-2)},\\
\widetilde{p_i^4}
&= \frac{N^3\widetilde{p_i}^4-6N^2\widetilde{p_i}^3+11N\widetilde{p_i}^2-6\widetilde{p_i}}{(N-1)(N-2)(N-3)},\\
\widetilde{p_i^3 p_j}
&= \frac{N^3\widetilde{p_i}^3\widetilde{p_j}-3N^2\widetilde{p_i}^2\widetilde{p_j}+2N\widetilde{p_i}\widetilde{p_j}}{(N-1)(N-2)(N-3)},\\
\widetilde{p_i p_j^3}
&=\frac{N^3\widetilde{p_i}\widetilde{p_j}^3-3N^2\widetilde{p_i}\widetilde{p_j}^2+2N\widetilde{p_i}\widetilde{p_j}}{(N-1)(N-2)(N-3)},\\
\widetilde{p_i^2 p_j^2}
&=\frac{N^3\widetilde{p_i}^2\widetilde{p_j}^2-N^2(\widetilde{p_i}^2\widetilde{p_j}+\widetilde{p_i}\widetilde{p_j}^2)
+N\widetilde{p_i}\widetilde{p_j}}{(N-1)(N-2)(N-3)},\\
\widetilde{p_i^2 p_j p_k}
&=\frac{N^3\widetilde{p_i}^2\widetilde{p_j}\widetilde{p_k}-N^2\widetilde{p_i}\widetilde{p_j}\widetilde{p_k}}{(N-1)(N-2)(N-3)},\\
\widetilde{p_i p_j^2 p_k}
&=\frac{N^3\widetilde{p_i}\widetilde{p_j}^2\widetilde{p_k}-N^2\widetilde{p_i}\widetilde{p_j}\widetilde{p_k}}{(N-1)(N-2)(N-3)},\\
\widetilde{p_i p_j p_k^2}
&=\frac{N^3\widetilde{p_i}\widetilde{p_j}\widetilde{p_k}^2-N^2\widetilde{p_i}\widetilde{p_j}\widetilde{p_k}}{(N-1)(N-2)(N-3)},\\
\widetilde{p_i p_j p_k p_l}
&=\frac{N^3\widetilde{p_i}\widetilde{p_j}\widetilde{p_k}\widetilde{p_l}}{(N-1)(N-2)(N-3)}. \end{align}
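All of these expressions follow from the multinomial factorial-moment identity $\mathbb{E}\big[\prod_i N_i^{(r_i)}\big] = N^{(\sum_i r_i)}\prod_i p_i^{r_i}$, where $x^{(r)} = x(x-1)\cdots(x-r+1)$ denotes the falling factorial. A sketch verifying a few of the third- and fourth-order estimators by exact enumeration (stdlib only; the probabilities are hypothetical example values of ours):

```python
from math import comb, prod

# Hypothetical example values.
N = 6
p = [0.4, 0.35, 0.25]

def outcomes(total, k):
    """All count vectors (N_1, ..., N_k) with sum `total`."""
    if k == 1:
        yield (total,)
        return
    for n in range(total + 1):
        for rest in outcomes(total - n, k - 1):
            yield (n,) + rest

def pmf(counts):
    """Multinomial probability of a count vector."""
    c, rem = 1, N
    for n in counts:
        c *= comb(rem, n)
        rem -= n
    return c * prod(q**n for q, n in zip(p, counts))

def expectation(est):
    """Exact expectation of an estimator over the multinomial law."""
    return sum(pmf(c) * est([n / N for n in c]) for c in outcomes(N, len(p)))

# Estimators written in terms of t[i] = \tilde{p}_i = N_i / N.
D3 = (N - 1) * (N - 2)
D4 = (N - 1) * (N - 2) * (N - 3)
est_pi3    = lambda t: (N**2*t[0]**3 - 3*N*t[0]**2 + 2*t[0]) / D3
est_pipj2  = lambda t: (N**2*t[0]*t[1]**2 - N*t[0]*t[1]) / D3
est_pi4    = lambda t: (N**3*t[0]**4 - 6*N**2*t[0]**3
                        + 11*N*t[0]**2 - 6*t[0]) / D4
est_pi2pj2 = lambda t: (N**3*t[0]**2*t[1]**2
                        - N**2*(t[0]**2*t[1] + t[0]*t[1]**2)
                        + N*t[0]*t[1]) / D4

assert abs(expectation(est_pi3) - p[0]**3) < 1e-12
assert abs(expectation(est_pipj2) - p[0]*p[1]**2) < 1e-12
assert abs(expectation(est_pi4) - p[0]**4) < 1e-12
assert abs(expectation(est_pi2pj2) - p[0]**2*p[1]**2) < 1e-12
```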
\subsection{Specific unbiased estimators for our experiment}
In our experimental investigation, we estimate the values of $I_{1}$, $I_{2}$, and $I_{3}$ from finite measurement results for the observables $\mathcal{M}_{z}=\sigma_{3}\otimes\sigma_{3}$ and $\mathcal{M}_{det}=\sum_{i=1}^{3}\sigma_{i}\otimes\sigma_{i}$. In our experiment, we collect $N$ measurement results for each of $M$ pairwise random local unitary rotations. Each local projective measurement has two possible outcomes $X_{i}\in\{-1,1\}$; hence, we have four possible pairwise measurement outcomes. For simplicity we will use the notation \begin{align}
E^{t}\left(\mathcal{M}_{i}\right) =\{\mathrm{tr} \left[(U_A \otimes U_B) \rho_{AB} (U_A^{\dagger} \otimes U_B^{\dagger})\mathcal{M}_{i}\right]\}^{t}. \end{align}
From the above results we can create an unbiased estimator for Eq.~(\ref{eq:R2ZZ}) in the main text as \begin{align} \widetilde{I_{2}}=\widetilde{\RR{2}{Z\otimes Z}}=\frac{1}{M}\sum^{M}_{m=1}\left[\widetilde{E^{2}_{m}\left(\mathcal{M}_{z}\right)}\right] \label{eq:app_I2_estimator} \end{align} where $m$ indicates the $m$-th set of local unitary rotations used to evaluate $E^{t}$. Similarly, $\widetilde{I}_{3}$ can be accessed through a combination of $\widetilde{I_{2}}$ and the unbiased estimator of Eq.~(\ref{eq:R4ZZ}) in the main text which has the form \begin{align} \widetilde{\RR{4}{Z\otimes Z}}=\frac{1}{M}\sum^{M}_{m=1}\left[\widetilde{E^{4}_{m}(\mathcal{M}_{z})}\right]. \label{eq:app_R4_estimator} \end{align} Finally, we estimate $I_{1}$ directly through the unbiased estimator of Eq.~(\ref{eq:R3Adet}) in the main text which has the form \begin{align}
\widetilde{I_{1}}=\widetilde{\mathcal{R}_{\mathcal{M}_{det}}^{(3)}}=\frac{1}{M}\sum_{m=1}^{M}\left[\widetilde{E_{m}^{3}(\mathcal{M}_{det})}\right].
\label{eq:app_I1_estimator} \end{align}
\section{Statistical bounds}\label{app:xiaodong}
In order to obtain confidence levels for the quantities that we estimate, we apply Hoeffding's inequality, which states that for a set of $n$ statistically independent random variables $\{X_1, \ldots, X_n\}$, where each $X_i \in [a_i, b_i]$, the probability that their sum deviates by more than $\delta$ from its mean value is bounded by \cite{hoeffding1994probability} \begin{align}
P\left(\vert \sum_i X_i - \mathbb{E}[\sum_i X_i] \vert \geq \delta\right) \leq 2\exp\left(-\frac{2\delta^2}{\sum_i (b_i - a_i)^2}\right). \end{align} We will use this inequality for independent, subsequent runs of the same experiment, i.e., $a_i \equiv a$, $b_i \equiv b$ for all $i$, and are interested in the mean, which demands a rescaling $X_i \rightarrow X_i / n$. By fixing the right-hand side to a desired error probability $1 - \gamma$, where $\gamma$ denotes the confidence, we obtain the following upper bound for the deviation $\delta$: \begin{align}
\delta = \frac{b-a}{\sqrt{2n}} \sqrt{\ln(\frac{2}{1-\gamma})}. \end{align} This allows us, for any given estimator of an experimental quantity which yields numbers in the range $[a,b]$, to derive the maximal deviation of the true mean value given a certain confidence level of $\gamma$. We will use this to bound the LU invariants $\operatorname{Tr}(TT^\text{T})$, $\operatorname{Tr}(TT^\text{T}TT^\text{T})$ and $\det(T)$, needed to determine the roots of the characteristic polynomial in Eq.~(\ref{eq:charpol}) in the main text.
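This bound is straightforward to tabulate. A minimal sketch (stdlib only; the example values $[a,b]=[0,9]$, $n=5000$, and $\gamma = 0.9973$ correspond to the ranges used in the data analysis of Appendix~\ref{app:dataanalysis}):

```python
import math

def hoeffding_delta(a, b, n, gamma):
    """Two-sided Hoeffding bound on the deviation of the sample mean of
    n i.i.d. variables in [a, b], at confidence level gamma."""
    return (b - a) / math.sqrt(2 * n) * math.sqrt(math.log(2 / (1 - gamma)))

# Example: X_i in [0, 9], n = 5000 runs, 3-sigma confidence.
delta = hoeffding_delta(0, 9, 5000, 0.9973)
assert 0.23 < delta < 0.24

# The bound shrinks like 1/sqrt(n) and scales with the range b - a.
assert math.isclose(hoeffding_delta(0, 9, 20000, 0.9973), delta / 2)
```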
What is left is to translate the confidence regions of these coefficients into confidence regions of the roots. For that, we first give a naive bound on the probability that all of the measured quantities lie within their individual confidence intervals.
To that end, consider first two random variables $x$ and $y$, whose experimental values lie with confidence level $\gamma$ in the regions $[x_0, x_1]$ and $[y_0,y_1]$, respectively. This is depicted in Fig.~\ref{fig:confidences}, where the confidence intervals of the two variables split the graph into $9$ different regions, and the symbols $a,b,\ldots,i$ denote the probability to find a pair of measurement results in the corresponding region. We have $d+e+f = b+e+h = \gamma$ and $a+b+c+g+h+i = a+c+d+f+g+i = 1-\gamma$. The probability to find an experimental value outside of at least one of the confidence intervals is given by $a+b+c+d+f+g+h+i = 2(1-\gamma) - (a+c+g+i) \leq 2(1-\gamma)$. Therefore, the probability to have measurement results within both confidence bands is at least $1-2(1-\gamma)$. The same argument can be applied to more than two variables, $x^{(1)}, \ldots, x^{(n)}$, where each variable $x^{(i)}$ lies in the interval $[x^{(i)}_0,x^{(i)}_1]$ with confidence $\gamma$, yielding \begin{align}\label{eq:pbound}
p(\forall i\,:\,x^{(i)}_0 \leq x^{(i)} \leq x^{(i)}_1) \geq 1-n(1-\gamma). \end{align}
\begin{figure}
\caption{Illustration of the estimate in the text for two random variables: The probability to find experimental values inside of region $e$ is lower-bounded by the expression in Eq.~(\ref{eq:pbound}). }
\label{fig:confidences}
\end{figure}
Finally, we determine the expression in Eq.~(\ref{eq:chshviolation}) in the main text for each choice of coefficients in their corresponding confidence regions to obtain a range of violations, in which the true violation lies with confidence level $1-3(1-\gamma)$.
\section{Data analysis}\label{app:dataanalysis}
\subsection{Calculating CHSH violations from the measured data}
In order to certify the achieved violation of the CHSH inequality using Eq.~(\ref{eq:chshviolation}) in the main text, we first have to determine appropriate confidence intervals for the three LU invariants $\operatorname{Tr}(TT^T)$, $\det(T)$ and $\operatorname{Tr}(TT^TTT^T)$, respectively. To do so, we follow two approaches, the first of which assumes that the coarse-graining of all $25 \times 200$ data points into $25$ groups is enough to justify treating the resulting data as normally distributed. Calculating the mean values and standard deviations based on this assumption yields \begin{align}
\operatorname{Tr}(TT^T) &= \phantom{-}2.41 \pm 0.15,\\
\operatorname{Tr}(TT^TTT^T) &= \phantom{-}2.21 \pm 0.21,\\
\det(T) &= -0.62\pm 0.15, \end{align} with $3\sigma$, i.e., 99.73\% confidence levels.
Alternatively, we can drop the assumption of normally distributed data and use the Hoeffding inequality to determine appropriate error bounds \cite{hoeffding1994probability}. For instance, the sector length $\operatorname{Tr}(TT^T)$ can be directly expressed as a sample mean $\sum_{i=1}^M X_i/M$ of the squared correlation function $X_i =\langle {U_A}^\dagger Z U_A \otimes {U_B}^\dagger Z U_B\rangle^2= 9 (p^{(i)}_{00} - p^{(i)}_{01} - p^{(i)}_{10} + p^{(i)}_{11})^2$, with $X_i\in[0, 9]$. Applying the Hoeffding inequality to this case allows us to assign, with confidence $\gamma$, the following two-sided error bound: \begin{align}
\delta = \frac{9}{\sqrt{2M}} \sqrt{\ln(\frac{2}{1-\gamma})}, \end{align} which for $M=25\times 200 = 5000$ and $\gamma = 0.9973$ ($3\sigma$) leads to \begin{align}
\operatorname{Tr}(TT^T) = 2.41 \pm 0.24. \end{align} Using the Hoeffding inequality for the determinant, we obtain \begin{align}
\det(T) = -0.62\pm 1.09, \end{align} which is significantly worse than the Gaussian estimate. Lastly, for $\operatorname{Tr}(TT^TTT^T)$, we cannot apply the same arithmetic, as this quantity does not originate directly from a sample average over the runs but instead also involves the square of the respective sector length $\operatorname{Tr}(TT^T)^2$. As a workaround, we exploit the insight that the range of physically allowed values of the quantity $\operatorname{Tr}(TT^TTT^T)$ is constrained by the value of $\operatorname{Tr}(TT^T)$. Thus, we can derive a region of compatible values of $\operatorname{Tr}(TT^TTT^T)$ from the confidence region of $\operatorname{Tr}(TT^T)$, i.e., $\operatorname{Tr}(TT^T) = 2.41 \pm 0.24$. Following this procedure, we obtain \begin{align}
\operatorname{Tr}(TT^TTT^T) = 2.00 \pm 0.42. \end{align}
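As a sanity check on the numbers above, the two-sided Hoeffding bound can be evaluated directly. This is a minimal sketch; the function name is ours, and the range $[0,9]$ of the $X_i$ is taken from the text:

```python
import math

def hoeffding_delta(m, gamma, value_range=9.0):
    """Two-sided Hoeffding error bound at confidence gamma for the
    sample mean of m i.i.d. variables confined to an interval of
    length value_range: delta = range * sqrt(ln(2/(1-gamma)) / (2m))."""
    return value_range * math.sqrt(math.log(2.0 / (1.0 - gamma)) / (2.0 * m))

delta_3sigma = hoeffding_delta(m=5000, gamma=0.9973)
delta_5sigma = hoeffding_delta(m=5000, gamma=0.9999994)
```

For $M=5000$ and $\gamma = 0.9973$ this evaluates to $\delta \approx 0.231$, of the same size as the quoted $\pm 0.24$ (differences in the last digit can come from rounding conventions).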
We now have two sets of results, each with $3\sigma$ confidence levels, for the three invariants under consideration. We can thus proceed to calculate the roots of the characteristic polynomial, Eq.~(\ref{eq:charpol}) in the main text, and from these the achievable violation of the CHSH inequality. To determine the best permissible value of the latter, we scan the whole range of allowed values of the three invariants and calculate the corresponding roots and CHSH violation for each combination. Finally, we use the largest violation that is still compatible with the observed data. The confidence of this violation is then given by $1-3(1-\gamma) = 0.991 \approx 2.6\sigma$, as detailed in Appendix~\ref{app:xiaodong}.
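Since Eq.~(\ref{eq:charpol}) is in the main text and not reproduced here, the sketch below assumes it is the characteristic polynomial of $TT^T$, with coefficients given by the elementary symmetric functions of its eigenvalues expressed through the three invariants, and it assumes the Horodecki criterion $\mathrm{CHSH}_{\max} = 2\sqrt{\lambda_1+\lambda_2}$ for the two largest eigenvalues $\lambda_1,\lambda_2$ of $TT^T$. Function names are ours:

```python
import numpy as np

def ttT_eigenvalues(tr, tr2, det_t):
    """Eigenvalues of T T^T recovered from the three LU invariants
    Tr(TT^T), Tr(TT^T TT^T) and det(T), via the characteristic
    polynomial lam^3 - e1 lam^2 + e2 lam - e3 with
    e1 = Tr(TT^T), e2 = (Tr(TT^T)^2 - Tr(TT^TTT^T))/2, e3 = det(T)^2."""
    e1, e2, e3 = tr, 0.5 * (tr**2 - tr2), det_t**2
    return np.sort(np.roots([1.0, -e1, e2, -e3]).real)

def chsh_value(tr, tr2, det_t):
    """Horodecki CHSH value 2*sqrt(lam1 + lam2), with lam1, lam2 the
    two largest eigenvalues of T T^T; a value above 2 is a violation."""
    lam = ttT_eigenvalues(tr, tr2, det_t)
    return 2.0 * np.sqrt(lam[-1] + lam[-2])
```

For a Werner-like state with correlation matrix $T = -p\,\mathbb{1}$, so that the invariants are $(3p^2,\,3p^4,\,-p^3)$, this returns $2\sqrt{2}\,p$; the scan described above then amounts to maximizing `chsh_value` over the allowed invariant ranges.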
Following the above procedure, we obtain the following CHSH violations: \begin{align}
\text{CHSH}_\text{Gauss} &\geq 0.46, \\
\text{CHSH}_\text{Hoeff} &\geq 0.40. \end{align}
Note that the maximal observable value is given by $2\sqrt{2} - 2 \approx 0.83$.
Similarly, by requiring a higher confidence level of $5\sigma$ for the invariants, or equivalently $\gamma = 0.9999994$, we obtain \begin{align}
\text{CHSH}_\text{Gauss} \geq 0.42, \\
\text{CHSH}_\text{Hoeff} \geq 0.34, \end{align} with confidence $1-3(1-\gamma)=0.999998\approx 4.7\sigma$. Using either method, our results clearly show that the randomized measurement protocol successfully determines that the state output by our EPS has the potential to violate a CHSH inequality.
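The combined confidence quoted here and above follows from a union bound over the three invariant estimates, with no independence assumption needed: if each confidence interval individually fails with probability at most $1-\gamma$, then \begin{equation*} P(\text{all three intervals hold}) \;\ge\; 1 - \sum_{k=1}^{3} P(\text{interval $k$ fails}) \;\ge\; 1 - 3(1-\gamma), \end{equation*} which gives $0.9919 \approx 2.6\sigma$ for $\gamma = 0.9973$ and $0.999998 \approx 4.7\sigma$ for $\gamma = 0.9999994$.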
\subsection{Bounding the maximal teleportation fidelity from below} The same error analysis can be used to obtain the lower bound $F_{\text{max}}^{U}$ in Eq.~(\ref{eq:maxfidelity}) in the main text from the confidence intervals of the LU invariants. For a confidence level of $3\sigma$ for the LU invariants, we obtain the lower bounds \begin{align}
\left(F_{\text{max}}^{U}\right)_{\text{Gauss}}&=0.88,\\
\left(F_{\text{max}}^{U}\right)_{\text{Hoeff}}&=0.85, \end{align} with a confidence level of $1-3(1-\gamma) = 0.991 \approx 2.6\sigma$.
Similarly, we obtain the lower bounds \begin{align}
\left(F_{\text{max}}^{U}\right)_{\text{Gauss}}&=0.86,\\
\left(F_{\text{max}}^{U}\right)_{\text{Hoeff}}&=0.60, \end{align} for a confidence level of $5\sigma$ for the LU invariants. The confidence level of the bounds is $1-3(1-\gamma)=0.999998 \approx 4.7\sigma$.
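Eq.~(\ref{eq:maxfidelity}) is in the main text and not reproduced here; the sketch below assumes the standard relation $F_{\max} = \tfrac12\bigl(1 + \tfrac13 N(\rho)\bigr)$ of Horodecki et al., with $N(\rho)$ the sum of the singular values of $T$, recovered from the invariants as before. Names are ours and this is an illustration, not the analysis code:

```python
import numpy as np

def teleportation_fidelity(tr, tr2, det_t):
    """Maximal teleportation fidelity F = (1 + N/3)/2, where N is the
    sum of the singular values of T, i.e. of the square roots of the
    eigenvalues of T T^T, recovered from the three LU invariants."""
    e1, e2, e3 = tr, 0.5 * (tr**2 - tr2), det_t**2
    lam = np.roots([1.0, -e1, e2, -e3]).real
    lam = np.clip(lam, 0.0, None)   # guard against small negative noise
    return 0.5 * (1.0 + np.sqrt(lam).sum() / 3.0)
```

For the Werner-like invariants $(3p^2,\,3p^4,\,-p^3)$ this gives $(1+p)/2$, e.g.\ $0.95$ at $p=0.9$.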
\end{document} |
\begin{document}
\renewcommand{\pagedeclaration}[1]{, #1} \setlength{\nomitemsep}{0pt}
\def\e#1\e{\begin{equation}#1\end{equation}} \def\ea#1\ea{\begin{align}#1\end{align}} \def\eq#1{{\rm(\ref{#1})}}
\theoremstyle{plain} \newtheorem{thm}{Theorem}[section] \newtheorem{prop}[thm]{Proposition} \newtheorem{lem}[thm]{Lemma} \newtheorem{cor}[thm]{Corollary} \newtheorem{quest}[thm]{Question} \newtheorem{ass}[thm]{Assumption}
\theoremstyle{definition} \newtheorem{dfn}[thm]{Definition} \newtheorem{ex}[thm]{Example} \newtheorem{rem}[thm]{Remark} \newtheorem{conj}[thm]{Conjecture}
\numberwithin{equation}{section}
\title{{\bf{Generalized Donaldson--Thomas theory \\ over fields ${\mathbin{\mathbb K}} \neq {\mathbin{\mathbb C}}$}}} \author{
\\ Vittoria Bussi \\ \small{The Mathematical Institute,}\\ \small{Andrew Wiles Building, Radcliffe Observatory Quarter,}\\ \small{Woodstock Road, Oxford, OX1 3LB, U.K.} \\ \small{E-mail: \tt bussi@maths.ox.ac.uk}\\
} \date{\small{}} \maketitle
\begin{abstract}
{\it Generalized Donaldson--Thomas invariants\/} $\bar{DT}{}^\alpha(\tau)$ defined by Joyce and Song \cite{JoSo} are rational numbers which `count' both $\tau$-stable and $\tau$-semistable coherent sheaves with Chern character $\alpha$ on a Calabi--Yau 3-fold $X$, where $\tau$ denotes Gieseker stability for some ample line bundle on $X$. The $\bar{DT}{}^\alpha(\tau)$ are defined for all classes $\alpha$, and are equal to the classical $DT^\alpha(\tau)$ defined by Thomas \cite{Thom} whenever the latter is defined. They are unchanged under deformations of $X$, and transform by a wall-crossing formula under change of stability condition~$\tau$. Joyce and Song use gauge theory and transcendental complex analytic methods, so their theory of generalized Donaldson--Thomas invariants is valid only in the complex case. This also forces them to place constraints on the Calabi--Yau $3$-folds for which they can define generalized Donaldson--Thomas invariants.
This paper will propose a new algebraic method extending the theory to algebraically closed fields ${\mathbin{\mathbb K}}$ of characteristic zero, and in part to triangulated categories and to not necessarily compact Calabi--Yau 3-folds under suitable hypotheses.
It will describe the local structure of the moduli stack ${\mathbin{\mathfrak M}}$ of (complexes of) coherent sheaves on $X$, showing that an atlas for ${\mathbin{\mathfrak M}}$ carries the structure of a $\mathop{\rm GL}(n,{\mathbin{\mathbb K}})$-invariant d-critical locus in the sense of \cite{Joyc2}, and thus may be written locally as the zero locus of a regular function defined on an \'etale neighborhood in the tangent space of ${\mathbin{\mathfrak M}}$; this is then used to deduce identities on the Behrend function $\nu_{\mathbin{\mathfrak M}}$.
Moreover, when ${\mathbin{\mathbb K}} = {\mathbin{\mathbb C}}$, \cite[Thm.~4.9]{JoSo} uses the integral Hodge conjecture result of Voisin for Calabi--Yau $3$-folds over ${\mathbin{\mathbb C}}$ to show that the numerical Grothendieck group $K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ is unchanged under deformations of $X$. This is needed even for the statement that $\bar{DT}{}^\alpha(\tau)$ with $\alpha \in K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ is invariant under deformations of $X$ to make sense. We will provide an algebraic proof of that result, characterizing the numerical Grothendieck group of a Calabi--Yau $3$-fold in terms of a globally constant lattice described using the Picard scheme.
\end{abstract}
\tableofcontents
\section*{Introduction} \markboth{Introduction}{Introduction} \addcontentsline{toc}{section}{Introduction} \label{dt1}
In the following we summarize some background material on Donaldson--Thomas theory, which allows us to place our problem in context and to state the main result. After that, we outline the contents of the sections. Expert readers can skip this introductory part.
\begin{center} \textbf{Notations and conventions} \end{center}
Let ${\mathbin{\mathbb K}}$ be an algebraically closed field of characteristic zero. A {\it Calabi--Yau\/ $3$-fold\/} is a smooth projective 3-fold $X$ over ${\mathbin{\mathbb C}}$ or ${\mathbin{\mathbb K}}$, with trivial canonical bundle\index{canonical bundle} $K_X$ and $H^1({\mathbin{\cal O}}_X)=0$. Fix a very ample line bundle ${\mathbin{\cal O}}_X(1)$ on $X$, and let $\tau$ be Gieseker stability on the abelian category of coherent sheaves $\mathop{\rm coh}\nolimits(X)$ on $X$ with respect to ${\mathbin{\cal O}}_X(1)$. If $E$ is a coherent sheaf on $X$, then the class $[E] \in K^{{\rm num}}(\mathop{\rm coh}\nolimits(X))$ is in effect the Chern character $\mathop{\rm ch}\nolimits(E)$ of $E$ in the Chow ring $A^*(X)_{{\mathbin{\mathbb Q}}}$, as in \cite{Fult}. For a class $\alpha$ in the numerical Grothendieck group $K^{\rm num}(\mathop{\rm coh}\nolimits(X))$, write ${\mathbin{\mathcal M}}_{\rm ss}^\alpha(\tau),{\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)$ for the coarse moduli schemes of $\tau$-(semi)stable sheaves $E$ with class $[E]=\alpha$. Then ${\mathbin{\mathcal M}}_{\rm ss}^\alpha(\tau)$ is a projective ${\mathbin{\mathbb C}}$- or ${\mathbin{\mathbb K}}$-scheme whose points correspond to S-equivalence classes of $\tau$-semistable sheaves, and ${\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)$ is an open subscheme of ${\mathbin{\mathcal M}}_{\rm ss}^\alpha(\tau)$ whose points correspond to isomorphism classes of $\tau$-stable sheaves. Write ${\mathbin{\mathfrak M}}$ for the moduli stack of coherent sheaves $E$ on $X$. It is an Artin ${\mathbin{\mathbb C}}$- or ${\mathbin{\mathbb K}}$-stack, locally of finite type, with affine geometric stabilizers. For $\alpha\in K^{{\rm num}}(\mathop{\rm coh}\nolimits(X))$, write ${\mathbin{\mathfrak M}}^\alpha$ for the open and closed substack of $E$ with $[E]=\alpha$ in $K^{{\rm num}}(\mathop{\rm coh}\nolimits(X))$. 
Write ${\mathbin{\mathfrak M}}_{\rm ss}^\alpha(\tau), {\mathbin{\mathfrak M}}_{\rm st}^\alpha(\tau)$ for the substacks of $\tau$-(semi)stable sheaves $E$ in class $[E]=\alpha$, which are finite type open substacks of ${\mathbin{\mathfrak M}}^\alpha$.
\index{Artin stack!affine geometric stabilizers} \index{Artin stack!locally of finite type} \nomenclature[M al st]{${\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)$}{coarse moduli scheme of $\tau$-stable sheaves $E$ with class $[E]=\alpha$} \nomenclature[M al rss]{${\mathbin{\mathcal M}}_{\rm ss}^\alpha(\tau)$}{coarse moduli scheme of $\tau$-semistable sheaves $E$ with class $[E]=\alpha$} \index{Gieseker stability} \nomenclature[Knum]{$K^{\rm num}(\mathop{\rm coh}\nolimits(X))$}{numerical Grothendieck group of the abelian category $\mathop{\rm coh}\nolimits(X)$} \index{Grothendieck group!numerical}
\begin{center} \textbf{Historical overview} \end{center}
\index{Donaldson--Thomas invariants!original $DT^\alpha(\tau)$}\nomenclature[DTa]{$DT^\alpha(\tau)$}{original Donaldson--Thomas invariants defined in \cite{Thom}} In 1998, Thomas \cite{Thom}, following his proposal with Donaldson \cite{DoTh}, motivated a {\it holomorphic Casson invariant} and defined the {\it Donaldson--Thomas invariants\/} $DT^\alpha(\tau)$, which are integers `counting' $\tau$-stable coherent sheaves with Chern character $\alpha$ on a Calabi--Yau 3-fold $X$ over ${\mathbin{\mathbb K}}$, where $\tau$
denotes Gieseker stability for some ample line bundle on $X$. Mathematically, and in `modern' terms, he found that ${\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)$ is endowed with a symmetric obstruction theory and defined \begin{equation*} DT^\alpha(\tau) = \int_{[{\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)]^{\rm vir}} 1, \end{equation*} which is a mathematical reflection of the heuristic that views ${\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)$ as the critical locus of the {\it holomorphic Chern--Simons functional}, and as the shadow of a deeper `derived' geometry. A crucial result is that the invariants are unchanged under deformations of the underlying geometry of $X$. Finally, we remark that the conventional definition of Thomas \cite{Thom} works only for classes $\alpha$ containing no strictly $\tau$-semistable sheaves; this makes it possible to work with schemes rather than stacks, as the stable moduli scheme itself already encodes all the information about the $\mathop{\rm Ext}\nolimits$ groups, and thus about the tangent-obstruction complex of the moduli functor.
In 2005, Behrend \cite{Behr} proved a {\it virtual Gauss--Bonnet theorem}, which in particular yields that Donaldson--Thomas type invariants can be written as a weighted Euler characteristic $$DT^\alpha(\tau)=\chi\bigl({\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau), \nu_{{\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)}\bigr)$$ of the stable moduli scheme ${\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)$ \nomenclature[M al tau]{${\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)$}{coarse moduli scheme of $\tau$-stable objects in class $\alpha$} weighted by a constructible function $\nu_{{\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)}$, known in the literature as the {\it Behrend function}. It depends only on the scheme structure of ${\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)$, and it is convenient to think of it as a multiplicity function. An important moral is that it is better to `count' points in a moduli scheme by the weighted Euler characteristic rather than the unweighted one, as the former often gives answers unchanged under deformations of the underlying geometry. It is worth pointing out that this equation is local and `motivic', and makes sense even for non-proper finite type ${\mathbin{\mathbb K}}$-schemes. However, using this formula to generalize the classical picture by defining the Donaldson--Thomas invariants as $\chi\bigl({\mathbin{\mathcal M}}_{\rm ss}^\alpha(\tau), \nu_{{\mathbin{\mathcal M}}_{\rm ss}^\alpha(\tau)}\bigr)$ when ${\mathbin{\mathcal M}}_{\rm ss}^\alpha(\tau)\ne{\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)$ is not a good idea: when there are strictly $\tau$-semistable sheaves, the moduli scheme ${\mathbin{\mathcal M}}_{\rm ss}^\alpha(\tau)$ is no longer a good model, which suggests that schemes are no longer `enough' to extend the theory.
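For intuition, we recall as background Behrend's Milnor fibre description of this function: if locally $X=\mathop{\rm Crit}(f)$ for a holomorphic $f:U\rightarrow{\mathbin{\mathbb C}}$ on a complex manifold $U$, then \begin{equation*} \nu_X(x) = (-1)^{\mathop{\rm dim}\nolimits U}\bigl(1-\chi(MF_f(x))\bigr), \end{equation*} where $MF_f(x)$ is the Milnor fibre of $f$ at $x$. For example, $f(x)=x^{k+1}$ on $U={\mathbin{\mathbb C}}$ has $\mathop{\rm Crit}(f)=\mathop{\rm Spec}\nolimits{\mathbin{\mathbb C}}[x]/(x^k)$, and the Milnor fibre at $0$ consists of $k+1$ points, so $\nu(0)=(-1)^1\bigl(1-(k+1)\bigr)=k$: the weighted Euler characteristic `counts' the fat point with multiplicity $k$.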
The crucial work by Behrend \cite{Behr} suggests that Donaldson--Thomas invariants can be written as motivic invariants, like those studied by Joyce in \cite{Joyc.1,Joyc.2,Joyc.3,Joyc.4,Joyc.5,Joyc.6}, and so it raises the possibility that one can extend the results of \cite{Joyc.1,Joyc.2,Joyc.3,Joyc.4,Joyc.5,Joyc.6} to Donaldson--Thomas invariants by including Behrend functions as weights.
\index{Donaldson--Thomas invariants!generalized $\bar{DT}{}^\alpha(\tau)$} \nomenclature[DTb]{$\bar{DT}{}^\alpha(\tau)$}{generalized Donaldson--Thomas invariants defined in \cite{JoSo}} Thus, in 2005, Joyce and Song \cite{JoSo} proposed a theory of {\it generalized Donaldson--Thomas invariants\/} $\bar{DT}{}^\alpha(\tau)$. They are rational numbers which `count' both $\tau$-stable and $\tau$-semistable coherent sheaves with Chern character $\alpha$ on a compact Calabi--Yau 3-fold $X$ over ${\mathbin{\mathbb C}}$; strictly $\tau$-semistable sheaves must be counted with complicated rational weights. The $\bar{DT}{}^\alpha(\tau)$ are defined for all classes $\alpha$, and are equal to $DT^\alpha(\tau)$ when it is defined. They are unchanged under deformations of $X$, and transform by a wall-crossing formula under change of stability condition~$\tau$. The theory is valid also for compactly supported coherent sheaves on {\it compactly embeddable} noncompact Calabi--Yau 3-folds in the complex analytic topology.
To prove all this they study the local structure of the moduli stack \index{moduli stack!local structure} ${\mathbin{\mathfrak M}}$ of coherent sheaves on $X.$ They first show that ${\mathbin{\mathfrak M}}$ is Zariski locally isomorphic to the moduli stack ${\mathbin{\mathfrak{Vect}}}$ of algebraic vector bundles on $X$. Then they use {\it gauge theory} on complex vector bundles and transcendental complex analytic methods to show that an atlas for ${\mathbin{\mathfrak M}}$ may be written locally in the complex analytic topology as $\mathop{\rm Crit}(f)$ for $f:U\rightarrow{\mathbin{\mathbb C}}$ a \index{analytic topology} holomorphic function on a complex manifold $U$. They use this to deduce identities on the Behrend function $\nu_{\mathbin{\mathfrak M}}$ through the Milnor fibre description of Behrend functions. These identities
\begin{equation*} \nu_{{\mathbin{\mathfrak M}}}(E_1\oplus E_2)=(-1)^{\bar\chi([E_1],[E_2])} \nu_{{\mathbin{\mathfrak M}}}(E_1)\nu_{{\mathbin{\mathfrak M}}}(E_2), \end{equation*}
\begin{equation*} \displaystyle \int\limits_{\small{\begin{subarray}{l} [\lambda]\in\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1)):\\
\lambda\; \Leftrightarrow\; 0\rightarrow E_1\rightarrow F\rightarrow E_2\rightarrow 0\end{subarray}}}\!\!\!\!\! \!\!\!\! \!\!\!\! \!\!\!\! \nu_{{\mathbin{\mathfrak M}}}(F)\,{\rm d}\chi \quad - \!\!\!\! \!\!\!\! \!\!\!\! \displaystyle \int\limits_{\small{\begin{subarray}{l}[\mu]\in\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_1,E_2)):\\ \mu\; \Leftrightarrow\; 0\rightarrow E_2\rightarrow D\rightarrow E_1\rightarrow 0\end{subarray}}}\!\!\!\!\! \!\!\!\! \!\!\!\! \!\!\!\! \nu_{{\mathbin{\mathfrak M}}}(D)\,{\rm d}\chi \;\; = \;\; (e_{21}-e_{12})\;\; \nu_{{\mathbin{\mathfrak M}}}(E_1\oplus E_2), \end{equation*}
where $e_{21}=\mathop{\rm dim}\nolimits\mathop{\rm Ext}\nolimits^1(E_2,E_1)$ and $e_{12}=\mathop{\rm dim}\nolimits\mathop{\rm Ext}\nolimits^1(E_1,E_2)$ for $E_1,E_2\in\mathop{\rm coh}\nolimits(X),$ are crucial for the whole program of Joyce and Song, which is based on the idea that Behrend's approach should be integrated with Joyce's theory \cite{Joyc.1,Joyc.2,Joyc.3,Joyc.4,Joyc.5,Joyc.6}. As the proof uses gauge theory and transcendental methods, it works only over~${\mathbin{\mathbb C}}$ and forces them to put constraints on the Calabi--Yau 3-folds for which they can define generalized Donaldson--Thomas invariants. Finally, in \cite[\S 4.5]{JoSo}, when ${\mathbin{\mathbb K}}={\mathbin{\mathbb C}},$ the Chern character embeds $K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ in $H^{\rm even}(X;{\mathbin{\mathbb Q}})$, and the Hodge conjecture result of Voisin \cite{Vois} for Calabi--Yau 3-folds over ${\mathbin{\mathbb C}}$ completely characterizes its image. They use this to show that $K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ is unchanged under deformations of $X$. This is necessary even for the statement that the $\bar{DT}{}^\alpha(\tau)$ with $\alpha\in K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ are invariant under deformations of $X$ to make sense. \index{deformation-invariance}\index{Chern character}
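As an elementary consistency check on these identities (immediate from the statements above, not a new result), suppose that $\mathop{\rm Ext}\nolimits^1(E_2,E_1)=\mathop{\rm Ext}\nolimits^1(E_1,E_2)=0$. Then every short exact sequence $0\rightarrow E_1\rightarrow F\rightarrow E_2\rightarrow 0$ and $0\rightarrow E_2\rightarrow D\rightarrow E_1\rightarrow 0$ splits, both integrals run over empty projective spaces, and $e_{21}=e_{12}=0$, so the second identity reduces to the trivial statement
\begin{equation*}
0-0=(0-0)\,\nu_{{\mathbin{\mathfrak M}}}(E_1\oplus E_2)=0.
\end{equation*}
The first identity, in contrast, retains nontrivial content even in this degenerate case: it determines $\nu_{{\mathbin{\mathfrak M}}}(E_1\oplus E_2)$ from $\nu_{{\mathbin{\mathfrak M}}}(E_1)$ and $\nu_{{\mathbin{\mathfrak M}}}(E_2)$ up to the sign $(-1)^{\bar\chi([E_1],[E_2])}$.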
In 2008 and 2010, in two subsequent papers \cite{KoSo1,KoSo}, Kontsevich and Soibelman also studied generalizations of Donaldson--Thomas invariants, in the directions of motivic and of categorified Donaldson--Thomas invariants.
In \cite{KoSo1}, they proposed a very general version of the theory. Very roughly speaking, supposing for simplicity that ${\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)={\mathbin{\mathcal M}}_{\rm ss}^\alpha(\tau)$, their idea is to define {\it motivic Donaldson--Thomas invariants} $$DT^\alpha_{\textrm{mot}}=\Upsilon({\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau),\nu_{\textrm{mot}}),$$ where $\Upsilon$ is a general motivic invariant and $\nu_{\textrm{mot}}$ is a complicated constructible function which we may call the {\it motivic Behrend function}. Their construction is closely related to Joyce and Song's, even though they work in a more general context: they consider derived categories of coherent sheaves, Bridgeland stability conditions, and general motivic invariants, whereas Joyce and Song work with abelian categories of coherent sheaves, Gieseker stability, and the Euler characteristic. The price of working in this greater generality is that most results depend on conjectures (motivic Behrend function identities, existence of orientation data, absence of poles). In particular, the passages in \cite[\S 4.4 \& \S 6.3]{KoSo1} parallel to Joyce and Song's proof of the Behrend function identities work over a field ${\mathbin{\mathbb K}}$ of characteristic zero, and say that the formal completion $\hat{\mathbin{\mathfrak M}}_{[E]}$ of ${\mathbin{\mathfrak M}}$ at $[E]$ can be written in terms of $\mathop{\rm Crit}(f)$ for $f$ a formal power series on $\smash{\mathop{\rm Ext}\nolimits^1(E,E)}$, with no convergence criteria. Their analogue \cite[Conj.\,4]{KoSo1} concerns the {\it motivic Milnor fibre} of the formal power series $f$. So the Behrend function identities are related to the conjecture \cite[Conj.\,4]{KoSo1} of Kontsevich and Soibelman and its application in \cite[\S 6.3]{KoSo1}, and could probably be deduced from them.
In any case, Joyce and Song's approach \cite{JoSo} is not wholly algebro-geometric: it uses gauge theory and transcendental complex analytic methods. It will therefore not suffice to prove the parallel conjectures of Kontsevich and Soibelman \cite[Conj.\,4]{KoSo1}, which are supposed to hold over general fields ${\mathbin{\mathbb K}}$ as well as ${\mathbin{\mathbb C}}$, and for general motivic invariants of algebraic ${\mathbin{\mathbb K}}$-schemes as well as for the \index{motivic invariant} topological Euler characteristic. Recently, in 2012, Le Quy Thuong \cite{Thuong} gave a proof of this conjecture using deep results from motivic integration.
In \cite{KoSo}, Kontsevich and Soibelman presented the categorified version of Donaldson--Thomas theory. To fix ideas, suppose again that ${\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)={\mathbin{\mathcal M}}_{\rm ss}^\alpha(\tau).$ Following Thomas' argument \cite{Thom}, one can heuristically think of $\nu_{{\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)}$ as the Euler characteristic of the {\it perverse sheaf of vanishing cycles} $\cal P$ of the holomorphic Chern--Simons functional. Following this philosophy, in which perverse sheaves are categorifications of constructible functions, the hypercohomology $${\mathbb H}^*\bigl({\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau);{\cal P}\vert_{{\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)}\bigr)$$ would be a natural cohomology group of ${\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)$ whose Euler characteristic is the Donaldson--Thomas invariant. Thus, the basic idea in Kontsevich and Soibelman's paper is to define a kind of `generalized cohomology' for the moduli stack ${\mathbin{\mathfrak M}}$ as a kind of Ringel--Hall algebra.
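In this heuristic picture, taking Euler characteristics should recover the numerical invariant: since the pointwise Euler characteristic of $\cal P$ is the Behrend function, one expects, at least when ${\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)$ is a proper fine moduli scheme,
\begin{equation*}
\sum_{i\in{\mathbin{\mathbb Z}}}(-1)^i\mathop{\rm dim}\nolimits{\mathbb H}^i\bigl({\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau);{\cal P}\vert_{{\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)}\bigr)
=\chi\bigl({\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau),\nu_{{\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)}\bigr)=DT^\alpha(\tau),
\end{equation*}
so that the hypercohomology above is a genuine categorification of the Donaldson--Thomas invariant.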
In 2013, a sequence of five papers \cite{Joyc2,BBJ,BBDJS,BJM,BBBJ} developed a theory of d-critical loci, a new class of geometric objects introduced by Joyce, and used this theory to apply powerful results of derived algebraic geometry, as in \cite{Toen,Toen2,Toen3,Toen4,ToVe1,ToVe2,PTVV}, to Donaldson--Thomas theory. It is shown that the moduli stack of (complexes of) coherent sheaves on a Calabi--Yau 3-fold carries the structure of an algebraic d-critical stack, and is given locally in the Zariski topology as the critical locus of a regular function. Moreover, using the notion of {\it orientation data}, these papers construct a natural perverse sheaf and a natural motive on the moduli stacks, thus answering a long-standing question in the problem of categorification. See \S\ref{ourpapers} for a detailed discussion.
\begin{center} \textbf{The main result and its implications} \end{center}
Following Joyce and Song's proposal, the aim of this paper is to provide an extension of the theory of generalized Donaldson--Thomas invariants in \cite{JoSo} to algebraically closed fields ${\mathbin{\mathbb K}}$ of characteristic zero. Our argument provides the algebraic analogues of \cite[Thm 5.5]{JoSo}, \cite[Thm 5.11]{JoSo} and \cite[Cor. 5.28]{JoSo}, which are enough to extend \cite{JoSo} at least to compact Calabi--Yau 3-folds. Unfortunately, to extend the whole project to complexes of sheaves and to compactly supported sheaves on a noncompact quasi-projective Calabi--Yau 3-fold, we would need further results from derived algebraic geometry which we do not have at present. We hope to return to this point in future work.
We will show that an atlas for ${\mathbin{\mathfrak M}}$ near $[E]\in{\mathbin{\mathfrak M}}({\mathbin{\mathbb K}})$ may be written locally in the \'etale topology as the zero locus ${\rm d} f^{-1}(0)$ for a $G$-invariant regular function $f$ defined on an \'etale neighborhood of $0\in U({\mathbin{\mathbb K}})$ in the affine ${\mathbin{\mathbb K}}$-space $\mathop{\rm Ext}\nolimits^1(E,E),$ \index{\'etale topology} where $G$ is a maximal torus of $\mathop{\rm Aut}(E).$ \index{reductive group!maximal}
Based on this picture, we give an algebraic proof of the Behrend function identities. We point out that our approach is in fact valid much more generally, for any stack which is locally a global quotient; we do not use any particular properties of coherent sheaves on Calabi--Yau 3-folds. In the past, the author pursued a picture in which the moduli stack of coherent sheaves was locally described as the zero locus of an algebraic almost closed 1-form in the sense of \cite{Behr}, which later turned out to be the wrong direction to follow.
Finally, we will study the deformation invariance of $\bar{DT}{}^\alpha(\tau)$ under changes of the underlying geometry of $X,$ characterizing a globally constant lattice which contains the image of $K^{{\rm num}}(\mathop{\rm coh}\nolimits(X))$ under the Chern character and in which the classes $\alpha$ vary.
The implications are quite exciting and far-reaching. Our algebraic method could lead to the extension of generalized Donaldson--Thomas theory to the derived categorical context. The plan to extend from abelian to derived categories the theory of Joyce and Song \cite{JoSo} starts by reinterpreting the series of papers by Joyce \cite{Joyc.1,Joyc.2,Joyc.3,Joyc.4,Joyc.5,Joyc.6,Joyc.7,Joyc.8} in this new general setup. In particular:
\begin{itemize}
\item[$(a)$]Defining configurations in triangulated categories $\mathcal{T}$ requires replacing exact sequences by distinguished triangles.
\item[$(b)$]Constructing moduli stacks of objects and configurations in $\mathcal{T}$: here the theory of derived algebraic geometry \cite{Toen,Toen2,Toen3,Toen4,ToVe1,ToVe2,PTVV} can give us a satisfactory answer.
\item[$(c)$]Defining stability conditions on triangulated categories can be approached using Bridgeland's results and their extension by Gorodentsev et al., which combines Bridgeland's ideas with Rudakov's definition for abelian categories \cite{Rud}. Since Joyce's stability conditions \cite{Joyc.3} are based on Rudakov's, the modifications should be straightforward.
\end{itemize}
\begin{itemize} \item[$(d)$]The `nonfunctoriality of the cone' in triangulated categories means that the triangulated category versions of some operations on configurations are defined only up to isomorphism, not canonically, so that the corresponding diagrams may be commutative but not Cartesian, as they are in the abelian case. In particular, one loses the associativity of the Ringel--Hall algebra of stack functions, which is a crucial object in Joyce and Song's framework. We expect that the derived Hall algebra approach of To\"en \cite{Toen3} resolves this issue. See also \cite{PL}. \end{itemize}
We expect that a well-behaved theory of invariants counting $\tau$-semistable objects in triangulated categories, in the style of Joyce's theory, exists, and we hope to return to this in future work.
\begin{center} \textbf{Outstanding problems and recent research} \end{center}
Donaldson--Thomas theory as depicted in this picture is promising, and the literature based on the milestones sketched above \cite{Thom,Behr,JoSo,KoSo1,KoSo} is vast. Although several interesting developments have been achieved, many outstanding problems remain, and a complete final picture resolving these problems and the related conjectures is still far off.
In 2003, Maulik, Nekrasov, Okounkov and Pandharipande \cite{MNOP1,MNOP2} stated the celebrated {\it MNOP conjecture}, in which rank one Donaldson--Thomas invariants are conjectured to have deep connections with the Gromov--Witten theory of Calabi--Yau 3-folds, and also with Gopakumar--Vafa invariants and Pandharipande--Thomas invariants \cite{PaTh}. Although there are results on this conjectural equivalence of theories of curve counting invariants (Bridgeland \cite{Brid2,Brid3}, Stoppa and Thomas \cite{StTh}, Toda \cite{Toda}), the MNOP conjecture remains unproved. Moreover, very little is known about the `meaning' of higher rank Donaldson--Thomas invariants. In the same work, \cite[Conj.\,1]{MNOP1}, they formulated a conjecture on the values of the virtual count of $\mathop{\rm Hilb}\nolimits^dX$ (the Donaldson--Thomas count of dimension zero sheaves), which has since been established, with different proofs given by Behrend and Fantechi \cite{BeFa2}, Levine and Pandharipande~\cite{LePa} and Li \cite{Li}.
In \cite[Questions 4.18, 5.7, 5.10, 5.12, 6.29]{JoSo} Joyce and Song pointed out some outstanding problems of their theory and suggested new methods to deal with them. Some of those questions have since been answered with new methods, as in \cite{Joyc2,BBJ,BBDJS,BJM,BBBJ}. However, the main limitation of Joyce and Song's approach is that they use gauge theory and transcendental complex analytic methods, which means the theory is valid only over the complex numbers and puts restrictions on the Calabi--Yau 3-folds for which they can define the theory; moreover, they deal with abelian rather than triangulated categories. This limits the usefulness of their theory since, for many applications, especially to physics, one needs triangulated categories.
Moreover, in \cite[\S 6]{JoSo}, Joyce and Song, following Kontsevich and Soibelman \cite[\S 2.5 \& \S 7.1]{KoSo1}, and using ideas similar to the Aspinwall--Morrison computation for a Calabi--Yau 3-fold, defined the {\it BPS invariants} $\hat{DT}{}^\alpha(\tau),$ which are also generalizations of Donaldson--Thomas invariants, and conjectured them to be integers for certain $\tau.$ There is some evidence for this \cite[\S 6]{JoSo}, but the problem is still open. Finally, in \cite[\S 7]{JoSo}, they extended their generalized Donaldson--Thomas theory to abelian categories of representations of a quiver $Q$ with relations coming from a superpotential on $Q$, and connected their ideas with the existing literature on noncommutative Donaldson--Thomas invariants and on invariants counting quiver representations (to cite some names: Bryan, Ginzburg, Hanany, Nagao, Nakajima, Reineke, Szendr\H oi, and Young). This is an active area of research in representation theory.
There is a large, active area of research which aims to extend Donaldson--Thomas theory to the derived categorical framework. For a long time it was an open problem to prove that the moduli space of complexes of sheaves can be given as a critical locus, as for the moduli space of sheaves. In 2006, Behrend and Getzler \cite{BeGe} announced a development in this direction, to which various papers in the literature refer (e.g. Toda \cite{Toda,Toda2}), but the paper has not yet been published. It says that the formal potential function $f$ for the cyclic dg Lie algebra $L$ coming from the Schur objects in the derived category of coherent sheaves on a Calabi--Yau 3-fold can be made convergent on a local neighborhood of the origin. In \cite[Conj.\,1.2]{Toda2}, Toda formulates the derived categorical analogue of \cite[Thm.\,5.5]{JoSo}, and Hua announced in \cite{Hua} a joint work with Behrend \cite{BeHua} on the construction of the derived moduli space of complexes of coherent sheaves. In \cite{Hua}, Hua gives a construction of global Chern--Simons functions for toric Calabi--Yau stacks of dimension three using strong exceptional collections; the moduli spaces of sheaves on such stacks can be identified with the critical loci of these functions. Still in the derived categorical direction, Chang and Li \cite{ChangLi} recently defined a semi-perfect obstruction theory and used it to construct virtual cycles of moduli of derived objects on Calabi--Yau 3-folds. In another paper, with Kiem \cite{KiLi2}, Li studied stable objects in the derived category using a `${\mathbin{\mathbb C}}^*$-intrinsic blowup' strategy. Finally, in 2013, the author et al.
in \cite{BBBJ} completely settled the question of presenting the moduli stack as a critical locus. This now opens the question of extending the whole project of \cite{JoSo} to triangulated categories, the main difficulty of which would be to provide a generalization of wall-crossing formulae from abelian to triangulated categories, in the style of Joyce \cite{Joyc.1,Joyc.2,Joyc.3,Joyc.4,Joyc.5,Joyc.6,Joyc.7,Joyc.8}.
This discussion highlights the fact that behind this theory there is some deeper `derived' geometry: since the deformation theory of coherent sheaves is governed by the $\mathop{\rm Ext}\nolimits$ groups, one way to compare different geometric structures on moduli spaces is to ask what information they store about the $\mathop{\rm Ext}\nolimits$ groups. For instance, in Kontsevich and Soibelman's context, an interesting problem, among others, is to find what kind of geometric structure on moduli spaces of coherent sheaves on a Calabi--Yau 3-fold $X$ would be the most appropriate for doing motivic and categorified Donaldson--Thomas theory. A natural related question is whether derived algebraic geometry has something to say about a theory of Donaldson--Thomas invariants for Calabi--Yau $m$-folds for $m>3,$ and what is the most suitable geometric structure for developing such a theory; see Corollary \ref{da5cor2}.
Finally, thanks both to many unproved conjectures and to exciting results, Kontsevich and Soibelman's motivic and categorified theory has given rise to a lively area of research (to cite some investigators: Behrend, Bryan, Davison, Dimca, Mozgovoy, Nagao and Szendr\H oi). In the present work we will not discuss this area much further, but we will return to Kontsevich and Soibelman's theory later.
\begin{center} \textbf{Outline of the paper} \end{center}
The paper begins with a section of background material on obstruction theories and the conventional definition of Donaldson--Thomas theory, on Behrend functions and Behrend's approach to Donaldson--Thomas theory, and, finally, on Joyce and Song's and Kontsevich and Soibelman's generalizations of Donaldson--Thomas theory. This mainly aims to provide a gentle introduction to Donaldson--Thomas theory, and more specifically to Joyce's theory, and to set the scene in which the following sections take place.
Subsection \ref{dt2} will briefly recall material from \cite{BeFa}, \cite{LiTi} and then \cite{Thom}. This should provide a general picture of obstruction theories and the classical Donaldson--Thomas invariants. To say that a scheme $X$ has an {\it obstruction theory} \index{obstruction theory} means, very roughly speaking, that one is given locally on $X$ an equivalence class of morphisms of vector bundles such that at each point the kernel of the induced linear map of vector spaces is the tangent space to $X$, and the cokernel is a space of obstructions. Following Donaldson and Thomas \cite[\S 3]{DoTh}, Thomas \cite{Thom} constructs a holomorphic Casson invariant \index{holomorphic Casson invariant} counting bundles on a Calabi--Yau 3-fold. He develops the deformation theory necessary to obtain virtual moduli cycles in moduli spaces of stable sheaves whose higher obstruction groups vanish, which allows him to define the holomorphic Casson invariant of a Calabi--Yau 3-fold $X$ and prove it is deformation invariant. Heuristically, the Donaldson--Thomas moduli space is the critical set of the holomorphic Chern--Simons functional \index{holomorphic Chern--Simons functional} and the Donaldson--Thomas invariant is a holomorphic analogue of the Casson invariant.
Subsection \ref{dt3} provides a more eclectic presentation of the Behrend function\index{Behrend function}. The first part reviews the microlocal approach to defining it, with a discussion of the attempt to categorify Donaldson--Thomas theory. In particular the section describes the bridge between perverse sheaves and vanishing cycles on one hand, and Milnor fibres and Behrend functions on the other. Thus, if ${\mathbin{\mathfrak M}}$ is the Donaldson--Thomas moduli space of stable sheaves, one can heuristically think of $\nu_{{\mathbin{\mathfrak M}}}$ as the Euler characteristic of the perverse sheaf of vanishing cycles of the holomorphic Chern--Simons functional. Following this philosophy, in which perverse sheaves are categorifications of constructible functions, the section outlines the categorification program for Donaldson--Thomas theory. In the second part, the Euler characteristic weighted by the Behrend function is compared to the unweighted Euler characteristic, motivating the introduction of the Behrend function as a multiplicity function. Finally, some properties are listed, in particular Behrend's approach to Donaldson--Thomas invariants as weighted Euler characteristics, and the formula for the Behrend function in the complex setting through {\it linking numbers}, which provides a usable expression even when it is not known whether a scheme admitting a symmetric obstruction theory can locally be written as the critical locus of a regular function on a smooth scheme. This is done by introducing the definition of almost closed $1$-forms. We point out that Pandharipande and Thomas \cite{PaTh} give examples which are zeroes of almost closed 1-forms but are not locally critical loci, which is the main indication that almost closed 1-forms are not `enough' to develop our whole program.
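For later orientation we record three standard properties of Behrend functions from \cite{Behr}, which the reader may keep in mind throughout. If $X$ is smooth of dimension $n$ then $\nu_X\equiv(-1)^n$. If $X=\mathop{\rm Crit}(f)$ for $f:U\rightarrow{\mathbin{\mathbb C}}$ a holomorphic function on a complex manifold $U$ of dimension $n$, then
\begin{equation*}
\nu_X(x)=(-1)^{n}\bigl(1-\chi(MF_f(x))\bigr),
\end{equation*}
where $MF_f(x)$ denotes the Milnor fibre of $f$ at $x$. Finally, if $X$ is a proper scheme with a symmetric obstruction theory, then Behrend's theorem expresses the virtual count as a weighted Euler characteristic:
\begin{equation*}
\int_{[X]^{\rm vir}}1=\chi(X,\nu_X)=\sum_{k\in{\mathbin{\mathbb Z}}}k\,\chi\bigl(\nu_X^{-1}(k)\bigr).
\end{equation*}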
Subsection \ref{dt4} combines some results of Joyce's series of papers \cite{Joyc.1,Joyc.2,Joyc.3,Joyc.4,Joyc.5,Joyc.6} with Behrend's approach to Donaldson--Thomas theory and describes how Joyce and Song developed the theory of generalized Donaldson--Thomas invariants in \cite{JoSo}. The idea behind the entire project is that one should insert the Behrend function $\nu_{{\mathbin{\mathfrak M}}}$ of the moduli stack ${\mathbin{\mathfrak M}}$ of coherent sheaves as a weight in Joyce's program. A good introduction to the book is provided by Joyce in \cite{Joyc.8}. A concluding remark then presents a sketch of Kontsevich and Soibelman's generalization of Donaldson--Thomas theory. As the present paper concentrates mainly on Joyce and Song's approach, the remark focuses on analogies and differences between the two theories rather than going into a detailed explanation of Kontsevich and Soibelman's program, both because it is beyond the author's competence and because it is not directly involved in the results presented here.
Sections \ref{dcr}--\ref{ourpapers} present briefly the main applications to Donaldson--Thomas theory coming from the vast project developed in the series of papers \cite{Joyc2,BBJ,BBDJS,BJM,BBBJ}. We first summarize the theory of d-critical schemes and stacks introduced by Joyce \cite{Joyc2}. They form a new class of spaces, which should be regarded as classical truncations of the $-1$-shifted symplectic derived schemes of \cite{PTVV}, and which are simpler than their derived analogues. In \cite{BBJ}, we prove a Darboux theorem for derived schemes with symplectic forms of degree $k<0$, in the sense of Pantev, To\"en, Vaqui\'e and Vezzosi \cite{PTVV}. More precisely, we show that a derived scheme ${\bs X}$ with symplectic form $\tilde\omega$ of degree $k$ is locally equivalent to $(\mathop{\boldsymbol{\rm Spec}}\nolimits A,\omega)$ for $\mathop{\boldsymbol{\rm Spec}}\nolimits A$ an affine derived scheme in which the cdga $A$ has Darboux-like coordinates with respect to which the symplectic form $\omega$ is standard, and in which the differential in $A$ is given by a Poisson bracket with a Hamiltonian function $H$ of degree $k+1$. When $k=-1$, this implies that a $-1$-shifted symplectic derived scheme $({\bs X},\tilde\omega)$ is Zariski locally equivalent to the derived critical locus $\boldsymbol\mathop{\rm Crit}(H)$ of a regular function $H:U\rightarrow{\mathbin{\mathbb A}}^1$ on a smooth scheme $U$. We use this to show that the classical scheme $X=t_0({\bs X})$ has the structure of an {\it algebraic d-critical locus}, in the sense of Joyce~\cite{Joyc2}. In the sequels \cite{BBBJ,BBDJS,BJM} we extend these results to (derived) Artin stacks, and discuss applications to categorified and motivic Donaldson--Thomas theory of Calabi--Yau 3-folds.
Section \ref{main.1} states our main results, including the description of the local structure of the moduli stack of coherent sheaves on a Calabi--Yau 3-fold, the Behrend function identities and the deformation invariance of the theory. The section explains why and where Joyce and Song use the restriction ${\mathbin{\mathbb K}}={\mathbin{\mathbb C}}$ in \cite{JoSo} and how our results overcome this restriction: \S\ref{dt.1} provides algebraic analogues of \cite[Thm. 5.5]{JoSo} and \cite[Thm. 5.11]{JoSo}. Finally, \S\ref{def} provides the analogue of \cite[Cor. 5.28]{JoSo}, which yields the deformation invariance over ${\mathbin{\mathbb K}}$ of the generalized Donaldson--Thomas invariants $\bar{DT}{}^\alpha(\tau)$, defined for classes $\alpha$ varying in a deformation invariant lattice $\Lambda_X$, described below, into which the numerical Grothendieck group injects via the Chern character map. The section culminates in Theorem \ref{mainthm}, which summarizes all these ideas.
Subsection \ref{dt.1} proves the Behrend function identities \index{Behrend function!Behrend identities} above using the existence of a $T$-equivariant $d$-critical chart in the sense of \cite{Joyc2} for each given point $[E]$ of ${\mathbin{\mathfrak M}},$ where $T\subset G$ is a maximal torus in $G,$ a maximal torus of $\mathop{\rm Aut}(E).$ This gives us the local description of the stack as the critical locus of a $T$-invariant regular function $f$ defined on a smooth scheme $U\subset \mathop{\rm Ext}\nolimits^1(E,E).$ This method is valid for every stack which is locally a global quotient, and in particular it provides the required local description of the moduli stack (Theorem \ref{dt5thm2}).
\index{moduli stack!local structure} Note that we would not actually need the assumption of the local quotient structure if we wanted to restrict just to sheaves on Calabi--Yau 3-folds, as this would follow from the standard method for constructing coarse moduli schemes of semistable coherent sheaves as in Huybrechts and Lehn \cite{HuLe2}. More precisely, one can find a `good' local atlas for ${\mathbin{\mathfrak M}}$ which is a $G$-invariant, locally closed ${\mathbin{\mathbb K}}$-subscheme of Grothendieck's Quot scheme\index{Quot scheme} $\mathop{\rm Quot}_X\bigl({\mathbin{\mathbb K}}^{P(n)}\otimes{\mathbin{\cal O}}_X(-n),P\bigr)$, explained in \cite[\S 2.2]{HuLe2}, which parametrizes quotients ${\mathbin{\mathbb K}}^{P(n)}\otimes{\mathbin{\cal O}}_X(-n)\allowbreak\twoheadrightarrow E'$, where $E'$ has Hilbert polynomial $P,$ and which is acted on by the ${\mathbin{\mathbb K}}$-group $\mathop{\rm GL}(P(n),{\mathbin{\mathbb K}}).$ From \cite{JoSo} it turns out that the proof of the first Behrend identity reduces to an identity between the Behrend function of the zero locus of ${\rm d} f$, which is a ${\mathbin{\mathbb K}}^*$-scheme, and the Behrend function of the fixed part of this zero locus, namely $$ \nu_{{\rm d} f^{-1}(0)}(p)=(-1)^{\mathop{\rm dim}\nolimits(T_p {\rm d} f^{-1}(0))-\mathop{\rm dim}\nolimits(T_p ({\rm d} f^{-1}(0))^T)}\nu_{({\rm d} f^{-1}(0))^T}(p), $$ where $p$ is a point in the fixed point locus $({\rm d} f^{-1}(0))^T.$ This relation generalizes the result in \cite{BeFa2} to the case where $p$ is not necessarily an isolated fixed point of the action and ${\mathbin{\mathbb K}}$ is a general algebraically closed field of characteristic zero. This argument is a different approach from the one suggested in work of Li--Qin \cite{LiQin}, where they use properties of Thom classes of vector bundles.
The first Behrend function identity over algebraically closed fields ${\mathbin{\mathbb K}}$ of characteristic zero follows from a trick in the proof of the second Behrend function identity, which is proved directly over ${\mathbin{\mathbb K}}$ and is based on Theorem \ref{blowup}, the algebraic version of \cite[Thm 4.11]{JoSo}.
Subsection \ref{def} shows that it is possible to extend \cite[Cor. 5.28]{JoSo} on the deformation invariance of the generalized Donaldson--Thomas invariants in the compact case to algebraically closed fields ${\mathbin{\mathbb K}}$ of characteristic zero. First of all, using existence results of Grothendieck and Artin, and smoothness and properness properties of the {\it relative Picard scheme} in a family of Calabi--Yau 3-folds, one proves that the Picard groups form a local system. Moreover, it is a local system with finite monodromy, so it can be made trivial after passing to a finite \'etale cover of the base scheme, as formulated in a theorem which is the algebraic generalization of \cite[Thm. 4.21]{JoSo} and which studies the monodromy of the Picard scheme in a family instead of that of the numerical Grothendieck group. Finally, Theorem \ref{definv}, a substitute for \cite[Thm. 4.19]{JoSo}, which does not need the integral Hodge conjecture result of \cite{Vois} for Calabi--Yau 3-folds and which is valid over ${\mathbin{\mathbb K}},$ characterizes the numerical Grothendieck group of a Calabi--Yau 3-fold in terms of a globally constant lattice described using the Picard scheme: \begin{equation*} \Lambda_X=\textstyle\bigl\{ (\lambda_0,\lambda_1,\lambda_2,\lambda_3) \textrm{ where } \lambda_0,\lambda_3\in{\mathbin{\mathbb Q}}, \; \lambda_1\in\mathop{\rm Pic}(X)\otimes_{{\mathbin{\mathbb Z}}}{\mathbin{\mathbb Q}}, \; \lambda_2\in \mathop{\rm Hom}\nolimits(\mathop{\rm Pic}(X),{\mathbin{\mathbb Q}}) \textrm{ such that } \end{equation*} \begin{equation*} \lambda_0\in {\mathbin{\mathbb Z}},\;\> \lambda_1\in \mathop{\rm Pic}(X)/ _{\textrm{torsion}}, \;
\lambda_2-{\ts\frac{1}{2}}\lambda_1^2\in \mathop{\rm Hom}\nolimits(\mathop{\rm Pic}(X),{\mathbin{\mathbb Z}}),\;\> \lambda_3+\textstyle\frac{1}{12}\lambda_1 c_2(TX)\in {\mathbin{\mathbb Z}}\bigr\}, \end{equation*} where $\lambda_1^2$ is defined as the map $\alpha\in\mathop{\rm Pic}(X)\rightarrow \frac{1}{2}c_1(\lambda_1)\cdot c_1(\lambda_1)\cdot c_1(\alpha)\in A^3(X)_{{\mathbin{\mathbb Q}}}\cong {\mathbin{\mathbb Q}},$ and $\frac{1}{12}\lambda_1 c_2(TX)$ is defined as $\frac{1}{12}c_1(\lambda_1)\cdot c_2(TX)\in A^3(X)_{{\mathbin{\mathbb Q}}}\cong{\mathbin{\mathbb Q}}.$ Theorem \ref{definv} proves that $\Lambda_X$ is deformation invariant and the Chern character gives an injective morphism $\mathop{\rm ch}\nolimits:K^{\rm num}(\mathop{\rm coh}\nolimits(X))\!\hookrightarrow\!\Lambda_X$. Our $\bar{DT}{}^\alpha(\tau)$ will be defined for classes $\alpha\in\Lambda_X.$
Section \ref{dt7} sketches some implications of the theory and proposes new ideas for further research, in particular in the direction of the derived categorical framework, trying to establish a theory of generalized Donaldson--Thomas invariants for objects in the derived category of coherent sheaves, and for not necessarily compact Calabi--Yau 3-folds.
\noindent{\bf Acknowledgements.} I would like to thank Tom Bridgeland, Daniel Huybrechts, Frances Kirwan, Jun Li, Balazs Szendr\H oi, Richard Thomas and Bertrand To\"en for useful discussions and especially my supervisor Dominic Joyce for his continuous support, for many enlightening suggestions and valuable discussions. This research is part of my D.Phil. project funded by an EPSRC Studentship.
\section{Donaldson--Thomas theory: background material}
This section should be read as the background picture into which the new material of the following sections fits. The expert reader can skip directly to \S\ref{main.1}.
\subsection{Obstruction theories and Donaldson--Thomas type invariants} \label{dt2}
This section will briefly recall material from \cite{BeFa}, \cite{LiTi} and then \cite{Thom} which provide both important notions used in the sequel and a hopefully interesting picture of Donaldson--Thomas theory.
\subsubsection{Obstruction theories}
\label{dt2.1} \index{obstruction theory|(}
Suppose that $X$ is a subscheme of a smooth scheme $M$ of dimension $n,$ cut out by a section $s$ of a rank $r$
vector bundle $E\rightarrow M.$ Then the {\it expected dimension}, or virtual dimension, of $X$ is $n-r,$ the dimension it would have if the section $s$ were transverse. If it is not transverse, one wants to construct a suitable $(n-r)$-cycle on $X.$ As the section $s$ induces a cone in $E_{|_X},$ one may then intersect this cone with the zero section of $E_{|_X}$ to get a cycle of the expected dimension on $X.$ The key observation is that one works entirely on $X$ and not in the ambient scheme $M.$ The deformation theory of the moduli problem is often encoded by the infinitesimal version of $s:M\rightarrow E$ on $X,$ namely the linearization of $s,$ yielding the following exact sequence: \begin{equation*}
\xymatrix@C=20pt@R=10pt{0\ar[r] & TX \ar[r] & TM_{|_X} \ar[r]^{{\rm d} s} & E_{|_X} \ar[r] & Ob \ar[r] & 0, } \end{equation*} for some cokernel $Ob,$ which in the moduli problem becomes the {\it obstruction sheaf}.\index{obstruction sheaf}
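A toy example, not taken from the references, illustrating how the actual dimension can exceed the expected one: let $M={\mathbin{\mathbb K}}^2$ with coordinates $x,y,$ let $E={\mathbin{\cal O}}_M^{\oplus 2},$ and let $s=(x,x),$ which is nowhere transverse. Then
\begin{equation*}
X=Z(s)=\{x=0\},\qquad \mathop{\rm dim}\nolimits X=1,\qquad\text{while the expected dimension is}\quad \mathop{\rm dim}\nolimits M-\mathop{\rm rank}\nolimits E=0.
\end{equation*}
In the exact sequence above ${\rm d} s$ has rank $1$ along $X,$ so both $TX$ and the obstruction sheaf $Ob$ are line bundles on $X,$ consistent with the alternating sum of ranks $1-2+2-1=0$.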
Moduli spaces in algebraic geometry often have an expected dimension \index{virtual dimension} at each point, which is a lower bound for the dimension at that point. Sometimes it may not coincide with the actual dimension of the moduli space and sometimes it is not possible to get a space of the expected dimension. When one has a moduli space $X$ one obtains {\it numerical invariants}\index{numerical invariants} by integrating certain cohomology classes over the virtual moduli cycle, a class of the expected dimension in its Chow ring.
One example is the moduli space of torsion-free, semi-stable vector bundles on a surface which yields the {\it Donaldson theory} \index{Donaldson invariants} and which provides a set of differential invariants of 4-manifolds. Another one is the moduli space of stable maps from curves of genus $g$ to a fixed projective variety which yields the {\it Gromov--Witten invariants}\index{Gromov--Witten invariants}, a kind of generalization of the classical enumerative invariant which counts the number of algebraic curves with appropriate constraints in a variety. In both cases, these invariants are intersection theories on the moduli spaces, respectively, of vector bundles over the surfaces, and of stable maps from curves to a variety. The fundamental class is the core of an intersection theory. However, for Gromov--Witten invariants for example, one cannot take the fundamental class of the whole moduli space directly. The virtual moduli cycle, roughly speaking, plays the role of the fundamental class in an appropriate ``good'' intersection theory.
\index{cycle!of correct dimension}\index{excess intersection theory} A nice picture to start with is the following situation: when the expected dimension does not coincide with the actual dimension of the moduli space, one may view this as if the moduli space were a subspace of an `ambient' space cut out by a set of `equations' whose vanishing loci do not meet transversely. Such a situation is well understood in the following setting described in the Introduction of \cite{LiTi}: let $X$, $Y$ and $W$ be smooth varieties with morphisms $X,Y\rightarrow W$, and let $Z=X\times_W Y.$ Then $[X]\cdot [Y]$, the intersection of the cycles $[X]$ and $[Y]$, is a cycle in $A_* W$ of dimension $\mathop{\rm dim}\nolimits X+\mathop{\rm dim}\nolimits Y-\mathop{\rm dim}\nolimits W$. When $\mathop{\rm dim}\nolimits Z=\mathop{\rm dim}\nolimits X+\mathop{\rm dim}\nolimits Y-\mathop{\rm dim}\nolimits W$, then $[Z]=[X]\cdot [Y]$. Otherwise, $[Z]$ may not be $ [X]\cdot[Y]$. {\it Excess intersection theory} shows that one can find a cycle in $A_* Z$ whose pushforward to $A_* W$ is $[X]\cdot[Y]$. One may view this cycle as the virtual cycle of $Z$ representing $[X]\cdot[Y]$. Following Fulton--MacPherson's normal cone \index{normal cone!Fulton--MacPherson's construction} construction (in \cite{Fult,FuMacP1,FuMacP2}), this cycle is the image of the cycle of the normal cone to $Z$\ in $X$, denoted by $C_{Z/X}$, under the Gysin homomorphism $s^*: A_*( C_{Y/W}\times _YZ) \rightarrow A_* Z$, where $s: Z\rightarrow C_{Y/W}\times_YZ$\ is the zero section. This theory does not apply directly to moduli schemes, since, except for some isolated cases, it is impossible to find pairs $X\rightarrow W$\ and $Y\rightarrow W$ for smooth $X,Y$ and $W$ so that $X\times_WY$\ is the moduli space and $[X]\cdot[Y]$ so defined is the virtual moduli cycle one needs.
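A standard instance of this, not spelled out in \cite{LiTi}, is the self-intersection of a divisor: take $X=Y=C$ a smooth curve inside a smooth surface $W,$ so that $Z=X\times_WY=C$ has dimension $1,$ while $\mathop{\rm dim}\nolimits X+\mathop{\rm dim}\nolimits Y-\mathop{\rm dim}\nolimits W=0.$ The excess intersection construction produces the $0$-cycle
\begin{equation*}
[X]\cdot[Y]=c_1(N_{C/W})\cap[C]\in A_0(C),
\end{equation*}
whose degree is the self-intersection number $C\cdot C$: here $C_{Y/W}\times_YZ=N_{C/W}$ is a line bundle, the cone $C_{Z/X}$ is its zero section, and applying $s^*$ yields the Euler class of $N_{C/W}.$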
Behrend and Fantechi \cite{BeFa} and Li and Tian \cite{LiTi} give two different approaches to deal with this. Very briefly, the strategy of Li and Tian's approach in \cite{LiTi} is that rather than trying to find an embedding of the moduli space into some ambient space, they construct a cone in a vector bundle directly, say $C\subset V$, over the moduli space and then define the virtual moduli cycle to be $s^*[C]$, where $s$ is the zero section of $V$. The pair $C\subset V$ is constructed based on a choice of the {\it tangent-obstruction complex} \index{tangent-obstruction complex} of the moduli functor. The construction commutes with Gysin maps and carries a good invariance property.
In \cite{BeFa} Behrend and Fantechi introduce the notion of {\it cone stacks} \index{cone!cone stack} over a scheme $X$ (or more generally for Deligne--Mumford stacks). These are \index{Deligne--Mumford stack} Artin stacks which are locally the quotient of a cone by a vector bundle acting on it. They call a cone {\it abelian} \index{cone!abelian} if it is defined as \nomenclature[SpecSym]{$\mathop{\rm Spec}\nolimits\mathop{\rm Sym}\nolimits {\mathbin{\scr F}}$}{abelian cone associated to a coherent sheaf ${\mathbin{\scr F}}$} $\mathop{\rm Spec}\nolimits\mathop{\rm Sym}\nolimits {\mathbin{\scr F}}$, where ${\mathbin{\scr F}}$ is a coherent sheaf on $X$. Every cone is contained as a closed subcone in a minimal abelian one, which is called its {\it abelian hull}. \index{cone!abelian hull}The notions of being abelian and of abelian hull generalize immediately to cone stacks. Then, for a complex $E^\bullet$ in the derived category $D(X)$ of quasicoherent sheaves on $X$ \nomenclature[DX]{$D(X)$}{derived category of quasicoherent sheaves on $X$} which satisfies some suitable assumptions (denoted by ($*$), see Definition \ref{dt1def1}), there is an associated abelian cone stack $h^1/h^0((E^\bullet)^\vee)$. \nomenclature[hh]{$h^1/h^0((E^\bullet)^\vee)$}{abelian cone stack associated to a complex $E^\bullet$} In particular the {\it cotangent complex} \index{cotangent complex} $L_X ^\bullet$ of $X$ constructed by Illusie \cite{Illu1} (a helpful review is given in Illusie \cite[\S 1]{Illu2}) satisfies condition ($*$), so one can define the abelian cone stack ${\mathfrak N}_X:=h^1/h^0((L_X^\bullet)^\vee)$, the {\it intrinsic normal sheaf}. \index{intrinsic normal sheaf} \nomenclature[N X]{${\mathfrak N}_X$}{intrinsic normal sheaf over a scheme $X$}
More directly, ${\mathfrak N}_X$ is constructed as follows: \'etale locally on $X$, embed an open set $U$ of $X$ in a smooth scheme $W$, and take the stack quotient of the {\it normal sheaf} \index{normal sheaf} (viewed as abelian cone) $N_{U/W}$ by the natural action of $TW_{|_{U}}$. One can glue these abelian cone stacks together to get ${\mathfrak N}_X$. The {\it intrinsic normal cone} \index{intrinsic normal cone} ${\mathfrak C}_X$ is the closed \nomenclature[C X]{${\mathfrak C}_X$}{intrinsic normal cone associated to a scheme $X$} subcone stack of ${\mathfrak N}_X$ defined by replacing $N_{U/W}$ by the {\it normal cone} \index{normal cone} $C_{U/W}$ in the previous construction. In particular, the intrinsic normal sheaf ${\mathfrak N}_X$ of $X$ carries the obstructions for deformations of affine $X$-schemes. With this motivation, they introduce the notion of {\it obstruction theory} \index{obstruction theory!definition} for $X$. To say that $X$ has an obstruction theory means, very roughly speaking, that one is given locally on $X$ an equivalence class of morphisms of vector bundles such that at each point the kernel of the induced linear map of vector spaces is the tangent space to $X$, and the cokernel is a space of obstructions. That is, this is an object $E^\bullet$ in the derived category together with a morphism $E^\bullet\rightarrow L_X^\bullet$, satisfying Condition ($*$) and such that the induced map ${\mathfrak N}_X\rightarrow h^1/h^0((E^\bullet)^\vee)$ is a closed immersion. One denotes the sheaf $h^1({E^\bullet}^\vee)$ by $Ob,$ the obstruction sheaf of the obstruction theory. It contains the obstructions to the smoothness of $X.$ When an obstruction theory $E^\bullet$ is {\it perfect}, \index{obstruction theory!perfect} ${\mathfrak E}=h^1/h^0((E^\bullet)^\vee)$ is a vector bundle stack. 
Once an obstruction theory is given, with the additional technical assumption that it admits a global resolution, one can define a virtual fundamental class of the expected dimension: one has a vector bundle stack ${\mathfrak E}$ with a closed subcone stack ${\mathfrak C}_X$, and to define the virtual fundamental class of $X$ with respect to $E^\bullet$ one simply intersects ${\mathfrak C}_X$ with the zero section of ${\mathfrak E}$. To get around the problem of dealing with Chow groups for Artin stacks, Behrend and Fantechi choose to assume that $E^\bullet$ is globally given by a homomorphism of vector bundles $F^{-1}\rightarrow F^0$. Then ${\mathfrak C}_X$ gives rise to a cone $C$ in $F_1={F^{-1}}^\vee$ and one intersects $C$ with the zero section of $F_1$ (see \cite{Kre} for a statement without this assumption).
So, recall the following definitions from Behrend and Fantechi~\cite{Behr,BeFa,BeFa2}:
\begin{dfn} Let $Y$ be a ${\mathbin{\mathbb K}}$-scheme, and $D(Y)$ the derived category of quasicoherent sheaves on $Y$. \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \item[{\bf(a)}] A complex $E^\bullet\in D(Y)$ is {\it perfect of perfect amplitude contained in\/} $[a,b]$, if \'{e}tale locally on $Y$, $E^\bullet$ is quasi-isomorphic to a complex of locally free sheaves of finite rank in degrees $a,a+1,\ldots,b$. \item[{\bf(b)}] A complex $E^\bullet\in D(Y)$ {\it satisfies condition\/} $(*)$ if \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \item[(i)] $h^i(E^\bullet)=0$ for all $i>0$, \item[(ii)] $h^i(E^\bullet)$ is coherent for $i=0,-1$. \end{itemize} \item[{\bf(c)}] An {\it obstruction theory\/}\index{obstruction theory!definition} for $Y$ is a morphism $\varphi:E^\bullet\rightarrow L_Y$ in $D(Y)$, where $L_Y=L_{Y/\mathop{\rm Spec}\nolimits{\mathbin{\mathbb K}}}$ is the cotangent complex of $Y$, such that $E^\bullet$ satisfies condition $(*)$, $h^0(\varphi)$ is an isomorphism, and $h^{-1}(\varphi)$ is an epimorphism. \item[{\bf(d)}] An obstruction theory $\varphi:E^\bullet\rightarrow L_Y$ is called {\it perfect\/}\index{obstruction theory!perfect} if $E^\bullet$ is perfect of perfect amplitude contained in $[-1,0]$. \item[{\bf(e)}] A perfect obstruction theory $\varphi:E^\bullet\rightarrow L_Y$ on $Y$ is called {\it symmetric\/}\index{obstruction theory!symmetric}\index{symmetric obstruction theory!definition} if there exists an isomorphism $\vartheta:E^\bullet\rightarrow E^{\bullet\vee}[1]$, such that $\vartheta^{\vee}[1]=\vartheta$. Here $E^{\bullet\vee}\!=\!R\mathbin{\mathcal{H}om}(E^\bullet,{\mathbin{\cal O}}_Y)$ is the {\it dual\/} of $E^\bullet$, and $\vartheta^\vee$ the dual morphism of~$\vartheta$. 
\item[{\bf(f)}] If moreover $Y$ is a scheme with a $G$-action, where $G$ is an algebraic group, an {\it equivariant }perfect obstruction theory \index{obstruction theory!equivariant} is a morphism $E^\bullet\rightarrow L_Y$ in \index{algebraic group} the category $D(Y)^G$, which is a perfect obstruction theory as a \nomenclature[DG]{$D(X)^G$}{derived category of equivariant quasicoherent ${\mathbin{\cal O}}_X$-modules} morphism in $D(Y)$ (this definition is originally due to Graber--Pandharipande~\cite{GP}). Here $D(Y)^G$ denotes the derived category of the abelian category of $G$-equivariant quasicoherent ${\mathbin{\cal O}}_Y$-modules. \item[{\bf(g)}] A {\it symmetric equivariant }obstruction theory (or an {\it equivariant symmetric }obstruction theory) \index{obstruction theory!equivariant symmetric} is a pair $(E^\bullet\rightarrow L_Y,E^\bullet\rightarrow E^{\bullet\vee}[1])$ of morphisms in the category $D(Y)^G$, such that $E^\bullet\rightarrow L_Y$ is an equivariant perfect obstruction theory and $\vartheta:E^\bullet\rightarrow E^{\bullet\vee}[1]$ is an isomorphism satisfying $\vartheta^\vee[1]=\vartheta$ in $D(Y)^G.$ Note that this is more than requiring that the obstruction theory be equivariant and symmetric, separately, as said in \cite{BeFa2}. \end{itemize}
If instead $Y\stackrel{\psi}{\longrightarrow}U$ is a morphism of ${\mathbin{\mathbb K}}$-schemes, so $Y$ is a $U$-scheme, we define {\it relative} \index{obstruction theory!relative} perfect obstruction theories $\phi:E^\bullet\rightarrow L_{Y/U}$ in the obvious way. \label{dt1def1} \end{dfn}
Behrend and Fantechi \cite[Th.~4.5]{BeFa} prove the following theorem, which both explains the term obstruction theory and provides a criterion for verification in practice:
\begin{thm} The following two conditions are equivalent for $E^\bullet \in D(Y)$ satisfying condition $(*)$. \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \item[{\bf(a)}] The morphism $\phi: E^\bullet \rightarrow L_Y$ is an obstruction theory. \item[{\bf(b)}] Suppose that we are given a square-zero extension $\overline{T}$ of $T$ with ideal sheaf $J$, with $T,\overline T$ affine, and a morphism $g:T \rightarrow Y.$ The morphism $\phi$ induces an element $\phi^*(\omega(g))\in \mathop{\rm Ext}\nolimits^1(g^*E^\bullet, J)$ from $\omega(g)\in\mathop{\rm Ext}\nolimits^1(g^*L_Y, J)$ by composition. Then $\phi^*(\omega(g))$ vanishes if and only if there exists an extension $\overline{g}$ of\/ $g$. If it vanishes, then the set of extensions forms a torsor under~$\mathop{\rm Hom}\nolimits(g^*E^\bullet,J)$. \end{itemize} \end{thm}
Some examples can be found in \cite{BeFa2}: Lagrangian intersections, sheaves on Calabi--Yau $3$-folds, stable maps to Calabi--Yau $3$-folds. The next section concentrates on the Donaldson--Thomas obstruction theory, following \cite{Thom}. \index{obstruction theory|)}
\subsubsection{Donaldson--Thomas invariants of Calabi--Yau 3-folds}
\label{dt2.2}\index{Calabi--Yau 3-fold|(}\index{Donaldson--Thomas invariants!original $DT^\alpha(\tau)$|(}
{\it Donaldson--Thomas invariants\/} $DT^\alpha(\tau)$ were defined by Richard Thomas \cite{Thom}, following a proposal of Donaldson and Thomas~\cite[\S 3]{DoTh}. They are virtual counts of stable sheaves on a Calabi--Yau 3-fold $X.$ Starting from the formal picture in which a Calabi--Yau $n$-fold is the complex analogue of an oriented real $n$-manifold, and a Fano with a fixed smooth anticanonical divisor is the analogue of a manifold with boundary, Thomas motivates a holomorphic Casson invariant counting bundles on a Calabi--Yau 3-fold. He develops the deformation theory necessary to obtain virtual moduli cycles in moduli spaces of stable sheaves whose higher obstruction groups vanish; this allows him to define the holomorphic Casson invariant of a Calabi--Yau 3-fold $X,$ prove that it is deformation invariant, and compute it explicitly in some examples. Thus, heuristically, the Donaldson--Thomas moduli space is the critical set of the holomorphic Chern--Simons functional and the Donaldson--Thomas invariant is a holomorphic analogue of the Casson invariant. \index{holomorphic Chern--Simons functional} \index{holomorphic Casson invariant}
Mathematically, Donaldson--Thomas invariants are constructed as follows. Deformation theory gives rise to a perfect obstruction theory \cite{BeFa} (or a tangent-obstruction complex \index{obstruction theory} in the language of \cite{LiTi}) on the moduli space of stable sheaves ${\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau).$ Recall that Thomas supposes ${\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)={\mathbin{\mathcal M}}_{\rm ss}^\alpha(\tau),$ that is, there are no strictly semistable sheaves\/ $E$ in class\/ $\alpha,$ which implies the properness of ${\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau).$ As Thomas points out in \cite{Thom}, the obstruction sheaf is equal to $\Omega_{{\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)}$, the sheaf of K\"ahler differentials, and hence the tangents $T_{{\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)}$ are dual to the obstructions. This expresses a certain symmetry of the obstruction theory on ${\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)$ and is a mathematical reflection of the heuristic that views ${\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)$ as the critical locus of a holomorphic functional. Associated to the perfect obstruction theory is the virtual fundamental class\index{virtual moduli cycle}, an element of the Chow group $A_*({\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau))$ of \index{Chow group} algebraic cycles modulo rational equivalence on ${\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau).$ One implication \index{algebraic cycles} of the symmetry of the obstruction theory is the fact that the virtual fundamental class $[{\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)]^{\rm vir}$ is of degree zero. It can hence be integrated over the proper space of stable sheaves to an integer, the Donaldson--Thomas invariant or `virtual count' of ${\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)$ \e DT^\alpha(\tau)\quad=\textstyle\displaystyle \int_{\small{[{\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)]^{\rm vir}}}1. 
\label{dt2eq1} \e In fact Thomas did not define invariants $DT^\alpha(\tau)$ counting sheaves with fixed class $\alpha\in K^{\rm num}(\mathop{\rm coh}\nolimits(X))$, but coarser invariants $DT^P(\tau)$ \nomenclature[DTP]{$DT^P(\tau)$}{Donaldson--Thomas invariants counting sheaves with fixed Hilbert polynomial} counting sheaves with fixed Hilbert polynomial\index{Hilbert polynomial} $P(t)\in{\mathbin{\mathbb Q}}[t]$. \nomenclature[PHilb]{$P(t)$}{Hilbert polynomial $\in {\mathbin{\mathbb Q}}[t]$} Thus $$ {\mathbin{\mathcal M}}_{\rm ss}^P(\tau)\;\; = \displaystyle\coprod_{\small{\alpha:P_\alpha=P}}{\mathbin{\mathcal M}}_{\rm ss}^\alpha(\tau) \quad\leadsto\quad DT^P(\tau) \quad= \!\!\!\!\!\!\!\!\! \textstyle\displaystyle \sum_{\small{\alpha\in K^{\rm num}(\mathop{\rm coh}\nolimits(X)):P_\alpha=P}} \!\!\!\!\!\!\!\!\! DT^\alpha(\tau), $$ which gives the relationship with Joyce and Song's version $DT^\alpha(\tau)$ reviewed in \S\ref{dt4}; the sum on the right hand side has only finitely many nonzero terms. Here is Thomas' main result \cite[\S 3]{Thom}: \index{deformation-invariance} \begin{thm} For each Hilbert polynomial $P(t),$ the invariant\/ $DT^P(\tau)$ is unchanged by continuous deformations of the underlying Calabi--Yau $3$-fold~$X$ over ${\mathbin{\mathbb K}}.$ \label{dt2thm1} \end{thm} The same proof shows that $DT^\alpha(\tau)$ for $\alpha\in K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ is deformation-invariant, {\it provided\/} it is known that the group $K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ is deformation-invariant, so that this statement makes sense. This issue is discussed in \cite[\S 4.5]{JoSo}. 
There, it is shown that when ${\mathbin{\mathbb K}}={\mathbin{\mathbb C}}$ one can describe $K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ in terms of cohomology groups \index{Grothendieck group!numerical} $H^*(X;{\mathbin{\mathbb Z}}),$ $H^*(X;{\mathbin{\mathbb Q}})$, so that $K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ is manifestly deformation-invariant, and therefore $DT^\alpha(\tau)$ is also deformation-invariant. \index{deformation-invariance} The proof of \cite[Thm.~4.19]{JoSo} crucially uses the integral Hodge conjecture result of Voisin \cite{Vois} for Calabi--Yau 3-folds over ${\mathbin{\mathbb C}}.$ \index{Hodge conjecture}\index{Picard scheme} In \cite[Rmk 4.20(e)]{JoSo}, Joyce and Song propose to extend that description over an algebraically closed base field ${\mathbin{\mathbb K}}$ of characteristic zero by replacing $H^*(X;{\mathbin{\mathbb Q}})$ by the {\it algebraic de Rham cohomology\/}\index{algebraic de Rham cohomology} $H^*_{\rm dR}(X)$ \nomenclature[HalgDR]{$H^*_{\rm dR}(X)$}{algebraic de Rham cohomology of a smooth projective ${\mathbin{\mathbb K}}$-scheme $X$} of Hartshorne \cite{Hart1}. For $X$ a smooth projective ${\mathbin{\mathbb K}}$-scheme, $H^*_{\rm dR}(X)$ is a finite-dimensional vector space over ${\mathbin{\mathbb K}}$. There is a Chern character map \index{Chern character} $\mathop{\rm ch}\nolimits:K^{\rm num}(\mathop{\rm coh}\nolimits(X))\hookrightarrow H^{\rm even}_{\rm dR}(X)$. In \cite[\S 4]{Hart1}, Hartshorne considers how $H^*_{\rm dR}(X_t)$ varies in families $X_t:t\in T$, and defines a {\it Gauss--Manin connection}, which \index{Gauss--Manin connection} makes sense of $H^*_{\rm dR}(X_t)$ being locally constant in~$t$. In \S\ref{def} we will use another idea to characterize the numerical Grothendieck group of a Calabi--Yau 3-fold in terms of a globally constant lattice described using the Picard scheme. \index{Picard scheme}
The next section introduces the Behrend function and the work done by Behrend in \cite{Behr},
which has been crucial for the development of Donaldson--Thomas theory. \index{Calabi--Yau 3-fold|)}\index{Donaldson--Thomas invariants!original $DT^\alpha(\tau)$|)}
\subsection{Microlocal geometry and the Behrend function}
\label{dt3} \index{microlocal geometry|(}\index{Behrend function|(}
This section briefly explains Behrend's approach \cite{Behr} to Donaldson--Thomas invariants as Euler characteristics of moduli schemes weighted by the Behrend function. The Behrend function was introduced in \cite{Behr} for finite type ${\mathbin{\mathbb C}}$-schemes $X$; in \cite[\S 4.1]{JoSo} it has been generalized to Artin ${\mathbin{\mathbb K}}$-stacks. Behrend functions are also defined for complex analytic spaces $X_{\rm an}$, and the Behrend function of a ${\mathbin{\mathbb C}}$-scheme $X$ coincides with that of the underlying complex analytic space~$X_{\rm an}$. The theory is also valid for ${\mathbin{\mathbb K}}$-schemes acted on by a reductive linear algebraic group. Good references for this section, other than the original paper by Behrend \cite{Behr}, are \cite[\S4]{JoSo} and, for the equivariant version, \cite{O}.
\subsubsection{Microlocal approach to the Behrend function} \label{dt3.1}
In \cite{Behr}, Behrend suggests a microlocal approach to the problem. The first part of the discussion describes how the Behrend function is defined, while the second part, although not detailed and not directly involved in the rest of the paper, aims to give a more complete picture.
\paragraph{The definition of the Behrend function.} \label{dt3.1.1}
Let ${\mathbin{\mathbb K}}$ be an algebraically closed field of characteristic zero, and $X$ a finite type ${\mathbin{\mathbb K}}$-scheme. Suppose $X\hookrightarrow M$ is an embedding of $X$ as a closed subscheme of a smooth ${\mathbin{\mathbb K}}$-scheme $M$. Then one has a commutative diagram \e \begin{gathered} \xymatrix@C=70pt@R=20pt{ Z_*(X) \ar[dr]_{c_0^M} \ar[r]_\cong^\mathop{\rm Eu}\nolimits & \mathop{\rm CF}\nolimits_{\mathbin{\mathbb Z}}(X) \ar[d]^{c_0^{SM}} \ar[r]_\cong^{\mathop{\rm Ch}} & {\mathbin{\cal L}}_X(M) \ar[dl]^{0^!}\\
& A_0(X) & } \label{dt3eq1} \end{gathered} \e\index{algebraic cycles}\nomenclature[CF]{$\mathop{\rm CF}\nolimits_{\mathbin{\mathbb Z}}(X)$}{group of ${\mathbin{\mathbb Z}}$-valued constructible functions on $X$ as in \cite{Joyc.1}}\index{constructible function}\index{local Euler obstruction}\nomenclature[Eu]{$\mathop{\rm Eu}\nolimits$}{the `local Euler obstruction', an isomorphism $Z_*(X)\rightarrow\mathop{\rm CF}\nolimits_{\mathbin{\mathbb Z}}(X)$} where the two horizontal arrows are isomorphisms. Here $Z_*(X)$ denotes the group of {\it algebraic cycles\/} on $X$, as in Fulton \cite{Fult}, and $\mathop{\rm CF}\nolimits_{\mathbin{\mathbb Z}}(X)$ the group of ${\mathbin{\mathbb Z}}$-valued constructible functions on $X$ in the sense of \cite{Joyc.1}. The {\it local Euler obstruction\/} is a group isomorphism $\mathop{\rm Eu}\nolimits:Z_*(X)\rightarrow\mathop{\rm CF}\nolimits_{\mathbin{\mathbb Z}}(X)$. The local Euler obstruction was first defined by MacPherson \cite{MacP} to solve the problem of the existence of covariantly functorial Chern classes, thus answering a Deligne--Grothendieck conjecture when ${\mathbin{\mathbb K}}={\mathbin{\mathbb C}}$, using complex \index{Deligne--Grothendieck conjecture} analysis, but Gonzalez--Sprinberg \cite{GS} provides an alternative algebraic definition which works over any algebraically closed field ${\mathbin{\mathbb K}}$ of characteristic zero. It is the obstruction to extending a certain section of the tautological bundle on the {\it Nash blowup}. 
\index{Nash blowup} More precisely, if $V$ is a prime cycle on $X$, the constructible function $\mathop{\rm Eu}\nolimits(V)$ is given by \begin{equation*} \textstyle\mathop{\rm Eu}\nolimits(V):x\longmapsto\displaystyle \int_{\mu^{-1}(x)}c(\tilde T)\cap s(\mu^{-1}(x),\tilde V), \end{equation*} where $\mu:\tilde V\rightarrow V$ is the Nash blowup of $V$, $\tilde T$ the dual of the universal quotient bundle, $c$ the total Chern class and $s$ the Segre class of the normal cone to a closed immersion. Kennedy \cite[Lem. 4]{Kenn} proves that $\mathop{\rm Eu}\nolimits(V)$ is constructible.\index{constructible function}
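Two sanity checks on this definition, both standard facts: at a smooth point $x\in V$ the Nash blowup is an isomorphism near $x$ and $\mathop{\rm Eu}\nolimits(V)(x)=1,$ so $\mathop{\rm Eu}\nolimits(V)$ differs from the indicator function of $V$ only along the singular locus; and for $V$ a curve, $\mathop{\rm Eu}\nolimits(V)(x)$ equals the multiplicity of $V$ at $x,$ for instance
\begin{equation*}
\mathop{\rm Eu}\nolimits(V)(0)=2\qquad\text{for both the node }V=\{xy=0\}\subset{\mathbin{\mathbb K}}^2\text{ and the cusp }V=\{y^2=x^3\}\subset{\mathbin{\mathbb K}}^2.
\end{equation*}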
As pointed out in the next section, it is worth observing that, independently and at about the same time, Kashiwara proved an {\it index theorem} over ${\mathbin{\mathbb C}}$ for a holonomic $\mathcal{D}$-module, relating its local Euler characteristic to the local Euler obstruction with respect to an appropriate stratification (see \cite{Gin} for details). The local Euler obstruction appearing there coincides with the one defined above, which is equivalent to saying that the diagram \eq{dt3eq2} below commutes. \index{index theorem!microlocal index theorem}\index{Euler characteristic}
Observe that this part of the diagram exists without the embedding into $M$ and is sufficient to give the definition of the Behrend function as follows. Let $C_{X/M}$ be the {\it normal cone\/} of $X$ in $M$, as in \cite[p.73]{Fult}, and $\pi:C_{X/M}\rightarrow X$ the projection. As in \cite[\S 1.1]{Behr}, define a cycle ${\mathfrak C}_{X/M}\in Z_*(X)$ by $$ {\mathfrak C}_{X/M}=\textstyle\displaystyle\sum_{C'}(-1)^{\mathop{\rm dim}\nolimits\pi(C')}{\rm mult}(C')\pi(C'), $$ where the sum is over all irreducible components $C'$ of $C_{X/M}$. It turns out that ${\mathfrak C}_{X/M}$ depends only on $X$, and not on the embedding $X\hookrightarrow M$. Behrend \cite[Prop. 1.1]{Behr} proves that given a finite type ${\mathbin{\mathbb K}}$-scheme $X$, there exists a unique cycle ${\mathfrak C}_X\in Z_*(X)$, such that for any \'etale map $\varphi:U\rightarrow X$ for a ${\mathbin{\mathbb K}}$-scheme $U$ and any closed embedding $U\hookrightarrow M$ into a smooth ${\mathbin{\mathbb K}}$-scheme $M$, one has $\varphi^*({\mathfrak C}_X)={\mathfrak C}_{U/M}$ in $Z_*(U)$. If $X$ is a subscheme of a smooth $M$ one takes $U=X$ and gets ${\mathfrak C}_X={\mathfrak C}_{X/M}$. Behrend calls ${\mathfrak C}_X$ the {\it signed support of the intrinsic normal cone}, or the {\it distinguished cycle} of~$X$. For each finite type ${\mathbin{\mathbb K}}$-scheme $X$, define the {\it Behrend function} $\nu_X$ in $\mathop{\rm CF}\nolimits_{{\mathbin{\mathbb Z}}}(X)$ by $\nu_X=\mathop{\rm Eu}\nolimits({\mathfrak C}_X)$, as in Behrend~\cite[\S 1.2]{Behr}.\index{Behrend function!definition}\index{intrinsic normal cone!signed support}\index{distinguished cycle}
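Two basic computations, both standard (see \cite{Behr} or \cite{JoSo}), give a feeling for $\nu_X$: if $X$ is smooth of dimension $n$ then ${\mathfrak C}_X=(-1)^n[X]$ and so $\nu_X\equiv(-1)^n$; and if ${\mathbin{\mathbb K}}={\mathbin{\mathbb C}}$ and $X={\rm Crit}(f)$ for a holomorphic function $f$ on a complex manifold $M$, then
\begin{equation*}
\nu_X(x)=(-1)^{\mathop{\rm dim}\nolimits M}\bigl(1-\chi(MF_f(x))\bigr),
\end{equation*}
where $MF_f(x)$ is the Milnor fibre of $f$ at $x$. The second formula is one reason why Behrend functions are so well suited to Donaldson--Thomas theory, where moduli spaces are heuristically critical loci.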
\index{conormal bundle} \index{characteristic cycle map} \nomenclature[char]{$\mathop{\rm Ch}$}{the characteristic cycle map $\mathop{\rm Ch}:\mathop{\rm CF}\nolimits_{{\mathbin{\mathbb Z}}}(U)\rightarrow {\mathbin{\cal L}}(U)$}
For completeness, the section now describes the other side of the diagram \eq{dt3eq1}, which yields another possible way to define the Behrend function. Write ${\mathbin{\cal L}}_X(M)$ for the free abelian \index{Lagrangian cycle!conical Lagrangian cycle} \nomenclature[L]{${\mathbin{\cal L}}_X(M)$}{free abelian group generated by closed, irreducible, reduced, conical Lagrangian ${\mathbin{\mathbb K}}$-subvarieties in $T^*M$ lying over cycles contained in $X$} group generated by closed, irreducible, reduced, conical Lagrangian ${\mathbin{\mathbb K}}$-subvarieties in $\Omega_M$ lying over cycles contained in $X.$ The isomorphism $\mathop{\rm Ch}:\mathop{\rm CF}\nolimits_{{\mathbin{\mathbb Z}}}(X)\rightarrow{\mathbin{\cal L}}_X(M)$ maps a constructible function to its characteristic cycle, which is a conic Lagrangian cycle on \index{characteristic cycle map} $\Omega_M$ supported over $X$, defined in the following way. Consider the commutative diagram of group isomorphisms that fits in the diagram \eq{dt3eq1}: \e \xymatrix@R=10pt@C=50pt{ Z_*(M) \ar[r]^\mathop{\rm Eu}\nolimits \ar@/_.7pc/[rr]_L & \mathop{\rm CF}\nolimits_{{\mathbin{\mathbb Z}}}(M) \ar[r]^\mathop{\rm Ch} & {\mathbin{\cal L}}(M).} \label{dt6eqq1} \e Here $L:Z_*(M)\rightarrow {\mathbin{\cal L}}(M)$ is defined on any prime cycle $V$ by $L:V\rightarrow (-1)^{\mathop{\rm dim}\nolimits(V)}\ell(V),$ where $\ell(V)$ is the closure of the conormal bundle of any nonsingular dense open subset of $V.$ Then $\mathop{\rm Eu}\nolimits,$ $L$ are isomorphisms, and the {\it characteristic cycle map} $\mathop{\rm Ch}:\mathop{\rm CF}\nolimits_{{\mathbin{\mathbb Z}}}(M)\rightarrow {\mathbin{\cal L}}(M)\subset Z_{\mathop{\rm dim}\nolimits M}(\Omega_M)$ is defined to be the unique isomorphism making (\ref{dt6eqq1}) commute. In the complex case Ginsburg \cite{Gin} describes the inverse of this map as {\it intersection multiplicity} between two conical Lagrangian cycles. 
This description is crucial in \cite[\S 4.3]{Behr}, where Behrend gives an expression for the Behrend function in terms of linking numbers, which \index{linking number} remains valid even when it is not known whether a scheme admitting a symmetric obstruction theory can locally be written as the critical locus of a regular function on a smooth scheme (Theorem \ref{dt6prop1}). See also \cite[Ex. 19.2.4]{Fult}.\index{characteristic cycle map!inverse}\index{intersection multiplicity}
\index{Chern-Mather class} \index{Schwartz-MacPherson Chern class} \nomenclature[cm]{$c^M(V)$}{Mather class of an algebraic cycle $V$} \index{algebraic cycles}
The maps to $A_0(X)$ are the degree zero {\it Chern-Mather class}, the degree zero {\it Schwartz-MacPherson Chern class}, and the intersection with the zero section, respectively. The Mather class is a homomorphism $c^M:Z_*(X)\rightarrow A_*(X),$ whose definition is a globalization of the construction of the local Euler obstruction. One has $c^M(V)=\mu_*\big(c(\widetilde T)\cap[\widetilde V]\big)\,,$ for a prime cycle $V$ of degree $p$ on $X$, with the same notation as above. For the expression in terms of normal cones, see for example \cite[\S 1]{Sabb}. Applying $c^M$ to the cycle ${\mathfrak C}_X$, one obtains the {\it Aluffi class} \index{Aluffi class} $\alpha_X=c^M({\mathfrak C}_X)\in A_*(X)$ defined in \cite{Alu}. \nomenclature[11]{$\alpha_X$}{Aluffi class of $X$} If $X$ is smooth, its Aluffi class equals $\alpha_X=c(\Omega_X)\cap[X]\,.$
Now given a symmetric obstruction theory on $X$, the {\it cone of curvilinear obstructions} $cv\hookrightarrow ob=\Omega_X$, pulls back to a cone in \index{cone!of curvilinear obstructions} \nomenclature[cv]{cv}{cone of curvilinear obstructions}
$\Omega_{M_{|_{X}}}$ via the epimorphism $\Omega_{M_{|_{X}}}\rightarrow \Omega_X.$ Via the embedding $\Omega_{M_{|_{X}}}\hookrightarrow \Omega_M$ one obtains a conic subscheme $C\hookrightarrow \Omega_M$, the {\it obstruction cone }for \index{cone!obstruction cone} the embedding $X\hookrightarrow M$. Behrend proves that the virtual fundamental class is $[X]^{\rm vir}=0^![C]$. \nomenclature[csm]{$c^{SM}(f)$}{Schwartz-MacPherson Chern class of a constructible function $f$} The key fact is that $C$ is {\it Lagrangian}. Because of this, there exists a unique constructible function $\nu_X$ on $X$ such that $\mathop{\rm Ch}(\nu_X)=[C]$ and $c_0^{SM}(\nu_X)=[X]^{\rm vir}$. Then Theorem \ref{dt3thm4} below follows as an application of MacPherson's theorem~\cite{MacP} (or equivalently from the microlocal index theorem of \index{index theorem!microlocal index theorem} Kashiwara~\cite{KaSc}), which one can think of as a kind of generalization of the {\it Gauss--Bonnet theorem} to singular schemes. \index{Gauss--Bonnet theorem} See Theorem \ref{dt3thm4} below for its validity over ${\mathbin{\mathbb K}}.$ The cycle ${\mathfrak C}_X$ such that $\mathop{\rm Eu}\nolimits({\mathfrak C}_X)=\nu_X$ is as defined above, the (signed) support of the intrinsic normal cone of $X$. The Aluffi class $\alpha_X=c^M({\mathfrak C}_X)=c^{SM}(\nu_X)$ thus has the property that its degree zero component is the virtual fundamental class of any symmetric obstruction theory on $X.$ \index{virtual moduli cycle} \index{obstruction theory!symmetric}
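In this language the virtual count of a proper scheme $X$ with a symmetric obstruction theory becomes a weighted Euler characteristic (this is the content of Theorem \ref{dt3thm4} below):
\begin{equation*}
\textstyle\int_{[X]^{\rm vir}}1\;=\;\chi(X,\nu_X):=\sum_{n\in{\mathbin{\mathbb Z}}}n\,\chi\bigl(\nu_X^{-1}(n)\bigr),
\end{equation*}
so that the invariant can be computed pointwise from $\nu_X,$ without further reference to the obstruction theory.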
\index{Zariski topology}\nomenclature[Xan]{$X_{\rm an}$}{complex analytic space $X_{\rm an}$ underlying $X$} \index{Behrend function!algebraic} \index{Behrend function!analytic}
In the case ${\mathbin{\mathbb K}}={\mathbin{\mathbb C}}$, using MacPherson's complex analytic definition of the local Euler obstruction \cite{MacP}, the definition of $\nu_X$ makes sense in the framework of complex analytic geometry, and so Behrend functions can be defined for complex analytic spaces $X_{\rm an}$.\index{complex analytic space} Thus, as in \cite[Prop. 4.2]{JoSo} one has that if $X$ is a finite type ${\mathbin{\mathbb K}}$-scheme, then the Behrend function $\nu_X$ is a well-defined\/ ${\mathbin{\mathbb Z}}$-valued constructible function on $X,$ in the Zariski topology. If $Y$ is a complex analytic space then the Behrend function $\nu_Y$ is a well-defined\/ ${\mathbin{\mathbb Z}}$-valued locally constructible function on $Y,$ in the analytic topology. \index{analytic topology} Finally, if $X$ is a finite type ${\mathbin{\mathbb C}}$-scheme, with underlying complex analytic space $X_{\rm an},$ then the algebraic Behrend function $\nu_X$ and the analytic Behrend function $\nu_{\smash{X_{\rm an}}}$ coincide. In particular, $\nu_X$ depends only on the complex analytic space $X_{\rm an}$ underlying $X,$ locally in the analytic topology. Moreover, the definition of Behrend functions is valid over ${\mathbin{\mathbb K}}$-schemes, algebraic ${\mathbin{\mathbb K}}$-spaces and Artin ${\mathbin{\mathbb K}}$-stacks, locally of finite type (see \cite[Prop. 4.4]{JoSo}). \index{Behrend function!of Artin stack}
\paragraph{Categorifying the theory.}
\label{dt3.1.2} \index{categorification|(}
What follows will not be needed to understand the rest of the paper. We include this material both for completeness, as it underlies the theory of Behrend functions, and also because it is one of the main applications of the whole program \cite{Joyc2,BBJ,BBDJS,BJM,BBBJ} explained in \S\ref{ourpapers}.
For this paragraph, restrict to ${\mathbin{\mathbb K}}={\mathbin{\mathbb C}}$ for simplicity. There exists a sophisticated modern theory of linear partial differential equations on a smooth complex algebraic variety $X,$ sometimes called {\it microlocal analysis}, because it involves analysis on the cotangent bundle $T^* X;$ this yields a theory which is invariant with respect to the action of the whole group of canonical transformations of $T^* X,$ while the usual theory is only invariant under the subgroup induced by diffeomorphisms of $X.$ It is sometimes called $\mathcal{D}$-{\it module theory,} \index{sheaf of rings of holomorphic linear partial differential operators of finite order} because it involves sheaves of modules $\mathcal{M}$ over the sheaf of rings of holomorphic linear partial differential operators of finite order $\mathcal{D}=\mathcal{D}_X;$ \nomenclature[D]{$\mathcal{D}_X$}{sheaf of rings of holomorphic linear partial differential operators of finite order} these rings are noncommutative, left and right Noetherian, and have finite global homological dimension. It is also sometimes called {\it algebraic analysis} because it involves such algebraic constructions as $\mathop{\rm Ext}\nolimits^i_\mathcal{D}(\mathcal{M},\mathcal{N}).$ The theory as it is known today grew out of the work done in the 1960s by the school of Mikio Sato in Japan. During the 1970s, one of the central themes in $\mathcal{D}$-module theory was David Hilbert's twenty-first problem, now called the \index{Riemann-Hilbert problem} {\it Riemann-Hilbert problem.} A generalization of it may be stated as the problem of establishing the {\it Riemann-Hilbert correspondence,} which, roughly speaking, \index{Riemann-Hilbert correspondence} describes the nature of the correspondence between a system of differential equations and its solutions.
A comprehensive reference is the book of Kashiwara and Schapira \cite{KaSc}, while an interesting eclectic vision of the subject is provided by Ginzburg \cite{Gin}. One has the following commutative diagram: \e \begin{gathered} \xymatrix@C=60pt@R=25pt{ \textrm{(perverse) constructible sheaves} \ar[d]_{\chi} & \ar[l]_\sim^{DR} \textrm{(regular) holonomic modules} \ar[d]^{SS}\\ \textrm{constructible functions} \ar[r]_\sim^{\mathop{\rm Ch}} & \textrm{Lagrangian cycles in }T^* X.} \label{dt3eq2} \end{gathered} \e \index{constructible sheaf!perverse} \index{holonomic modules!regular} \index{characteristic cycle map} \nomenclature[SS]{$SS$}{characteristic cycle map} \index{characteristic cycle} \index{characteristic variety} Recall that here $SS$ denotes the {\it characteristic cycle map} which to a $\mathcal{D}$-module $\mathcal{M}$ associates its {\it characteristic cycle.} It is the formal linear combination of irreducible components of the {\it characteristic variety} (the support of the graded sheaf gr$\mathcal{M}$ associated to $\mathcal{M}$) counted with their multiplicities. It looks like $$SS(\mathcal{M})=\sum m_\alpha(\mathcal{M})\cdot \overline{T^*_{X_\alpha}X}$$ for a stratification $\{X_\alpha \}$ of $X,$ where $m_\alpha(\mathcal{M})$ are positive integers and $ \overline{T^*_{X_\alpha}X}$ is the closure of the conormal bundle $T^*_{X_\alpha}X.$ Each component of the characteristic variety has dimension at least $\mathop{\rm dim}\nolimits(X).$ A $\mathcal{D}$-module $\mathcal{M}$ is called {\it holonomic} if its characteristic variety is pure of dimension $\mathop{\rm dim}\nolimits(X).$ To have also {\it regular singularities} means, very roughly speaking, that the system is determined by its principal symbol.
So, to a holonomic system one associates an object of microlocal nature, the characteristic cycle. On the other side, the Riemann-Hilbert correspondence associates to a holonomic system $\mathcal{M}$ its {\it De Rham complex,} \index{De Rham complex} \nomenclature[DRM]{$\textrm{DR}\mathcal{M}$}{De Rham complex of a holonomic system} $$ \xymatrix@C=40pt@R=10pt{\textrm{DR}(\mathcal{M}): \; 0 \ar[r] & \Omega^0(\mathcal{M}) \ar[r]^{d\;} & \Omega^1(\mathcal{M}) \ar[r]^{\;d} & \ldots \ar[r]^{d\;\;\;\;\;\;\;\;} & \Omega^{\mathop{\rm dim}\nolimits(X)}(\mathcal{M}) \ar[r]^{\;\;\;\;\;\;\; d} & 0,} $$ where $ \Omega^p(\mathcal{M})$ is the sheaf of $\mathcal{M}$-valued $p$-forms on $X$ and $d$ is the differential defined by the Cartan formula. As an object in the derived category it can be expressed as $\textrm{DR}(\mathcal{M})= \textrm{R}\mathbin{\mathcal{H}om}_{\mathcal{D}_X}({\mathbin{\cal O}}_X,\mathcal{M})[\mathop{\rm dim}\nolimits(X)].$ If $\mathcal{M}$ is holonomic, $\textrm{DR}(\mathcal{M})$ is constructible and determines $\mathcal{M}$ provided that the latter has regular singularities. Recall the following definition (see also \cite[\S 4]{JoSo}):
\begin{dfn} Let $X$ be a complex analytic space. Consider sheaves of ${\mathbin{\mathbb Q}}$-modules $\cal C$ on $X$. Note that these are {\it not\/} coherent sheaves, which are sheaves of ${\mathbin{\cal O}}_X$-modules. A sheaf $\cal C$ is called {\it constructible\/}\index{sheaf!constructible}\index{constructible sheaf} if there is a locally finite stratification $X=\bigcup_{j\in J}X_j$ of $X$ in the complex analytic topology, such that ${\cal C}\vert_{X_j}$ is a ${\mathbin{\mathbb Q}}$-local system for all $j\in J$, and all the \index{analytic topology} stalks ${\cal C}_x$ for $x\in X$ are finite-dimensional ${\mathbin{\mathbb Q}}$-vector spaces. A complex ${\cal C}^\bullet$ of sheaves of ${\mathbin{\mathbb Q}}$-modules on $X$ is called {\it constructible\/} if all its cohomology sheaves $H^i({\cal C}^\bullet)$ for $i\in{\mathbin{\mathbb Z}}$ are constructible. \index{constructible complex} Write $D^b_{\scriptscriptstyle{\rm Con}}(X)$\nomenclature[DbCon(X)]{$D^b_{\scriptscriptstyle{\rm Con}}(X)$}{bounded derived category of constructible complexes on $X$} for the bounded derived category of constructible complexes on $X$. It is a triangulated category. By \cite[Thm. 4.1.5]{Dimc}, $D^b_{\scriptscriptstyle{\rm Con}}(X)$ is closed under Grothendieck's ``six operations on sheaves''\index{sheaf!Grothendieck's six operations} $R\varphi_*,R\varphi_!,\varphi^*,\varphi^!,{\cal RH}om,\smash{\mathop{\otimes}\limits^{\scriptscriptstyle L}}$. The {\it perverse sheaves\/} on $X$ are a particular abelian subcategory $\mathop{\rm Per}(X)$\nomenclature[Per(X)]{$\mathop{\rm Per}(X)$}{abelian category of perverse sheaves on $X$} in $D^b_{\scriptscriptstyle{\rm Con}}(X)$, which is the heart of a t-structure on $D^b_{\scriptscriptstyle{\rm Con}}(X)$. So perverse sheaves are actually complexes of sheaves, not sheaves, on $X$. 
The category $\mathop{\rm Per}(X)$ is noetherian\index{noetherian}\index{abelian category!noetherian} and locally artinian, and is artinian\index{artinian}\index{abelian category!artinian} if $X$ is of finite type, so every perverse sheaf \index{perverse sheaf} has (locally) a unique filtration whose quotients are {\it simple} \index{perverse sheaf!simple} perverse sheaves; and the simple perverse sheaves can be described completely in terms of irreducible local systems on irreducible subvarieties in~$X$. \label{dt3def1} \end{dfn}
Now, to a constructible complex ${\cal C}^\bullet$ one associates a constructible function on $X$: define a map \nomenclature[1wchX]{$\chi_X$}{constructible function on $X$ associated to a constructible sheaf} $\chi_X: \mathop{\rm Obj\kern .1em}\nolimits(D^b_{\scriptscriptstyle{\rm Con}}(X))\rightarrow\mathop{\rm CF}\nolimits_{\mathbin{\mathbb Z}}^{\rm an}(X)$ by taking Euler characteristics of the cohomology of stalks of complexes, given by \begin{equation*} \chi_X({\cal C}^\bullet):x\longmapsto \textstyle\displaystyle \sum_{k\in{\mathbin{\mathbb Z}}}(-1)^k\mathop{\rm dim}\nolimits{\cal H}^k({\cal C}^\bullet)_x. \end{equation*} Since distinguished triangles in $D^b_{\scriptscriptstyle{\rm Con}}(X)$ give long exact sequences on cohomology of stalks ${\cal H}^k(-)_x$, this $\chi_X$ is additive over distinguished triangles, and so descends to a group morphism $\chi_X:K_0(D^b_{\scriptscriptstyle{\rm Con}}(X))\rightarrow \mathop{\rm CF}\nolimits_{\mathbin{\mathbb Z}}^{\rm an}(X)$. These maps $\chi_X:\mathop{\rm Obj\kern .1em}\nolimits(D^b_{\scriptscriptstyle{\rm Con}}(X))\rightarrow\mathop{\rm CF}\nolimits_{\mathbin{\mathbb Z}}^{\rm an}(X)$ and $\chi_X: K_0(D^b_{\scriptscriptstyle{\rm Con}}(X))\rightarrow \mathop{\rm CF}\nolimits_{\mathbin{\mathbb Z}}^{\rm an}(X)$ are surjective, since $\mathop{\rm CF}\nolimits_{\mathbin{\mathbb Z}}^{\rm an}(X)$ is spanned by the characteristic functions of closed analytic cycles $Y$ in $X$, and each such $Y$ lifts to a perverse sheaf in $D^b_{\scriptscriptstyle{\rm Con}}(X)$.
In category-theoretic terms, $X\mapsto D^b_{\scriptscriptstyle{\rm Con}}(X)$ is a functor $D^b_{\scriptscriptstyle{\rm Con}}$ from complex analytic spaces to triangulated categories, and $X\mapsto \mathop{\rm CF}\nolimits_{\mathbin{\mathbb Z}}^{\rm an}(X)$ is a functor $\mathop{\rm CF}\nolimits_{\mathbin{\mathbb Z}}^{\rm an}$ from complex analytic spaces to abelian groups, and $X\mapsto\chi_X$ is a natural transformation $\chi$ from $D^b_{\scriptscriptstyle{\rm Con}}$ \index{natural transformation} to~$\mathop{\rm CF}\nolimits_{\mathbin{\mathbb Z}}^{\rm an}$.
For a holonomic $\mathcal{D}$-module $\mathcal{M}$ one sets $\chi(x,\mathcal{M})=\chi(x,\textrm{DR}(\mathcal{M})).$ Thus, if $\mathcal{M}$ is a regular holonomic $\mathcal{D}$-module on $X\subset M,$ with $M$ smooth, whose characteristic cycle is $[C_{X/M}]$, then $$\nu_X(P)=\sum_i (-1)^i \mathop{\rm dim}\nolimits_{\mathbin{\mathbb C}} H^i_{\{P\}}(X,\mathcal{M}_{DR})\,,$$ for any point $P\in M$. Here $H^i_{\{P\}}$ denotes cohomology with supports in the subscheme $\{P\}\hookrightarrow M$ and $\mathcal{M}_{DR}$ denotes the perverse sheaf associated to \nomenclature[MDR]{$\mathcal{M}_{DR}$}{perverse sheaf associated to a regular holonomic $\mathcal{D}$-module $\mathcal{M}$ via the Riemann-Hilbert correspondence} $\mathcal{M}$ via the Riemann-Hilbert correspondence, as incarnated, for example, by the De~Rham complex $\textrm{DR}(\mathcal{M}).$ At the moment, Kai Behrend is attempting to give explicit constructions in some cases (see \cite{GeBaViLa}).
In the case $X$ is the critical scheme of a regular function $f$ on a smooth scheme $M,$ Behrend \cite{Behr} gives the following expression for the Behrend function due to Parusi\'nski and Pragacz \cite{PaPr}. This formula has been crucial in \cite{JoSo}. For the definition of the {\it Milnor fibres\/} \index{Milnor fibre} of holomorphic functions on complex \index{vanishing cycle} analytic spaces and a review of {\it vanishing cycles,\/} a survey paper on the subject is Massey \cite{Mass}, and three books are Kashiwara and Schapira \cite{KaSc}, Dimca \cite{Dimc}, and Sch\"urmann \cite{Schur}. Over the field ${\mathbin{\mathbb C}}$, Saito's theory of {\it mixed Hodge modules\/}\index{mixed Hodge module} \cite{Sait} provides a generalization of the theory of perverse sheaves with more structure, which may also be a context in which to generalize Donaldson--Thomas theory.
\begin{thm} Let\/ $U$ be a complex manifold of dimension $n,$ and\/ $f:U\rightarrow{\mathbin{\mathbb C}}$ a holomorphic function, and define $X$ to be the complex analytic space $\mathop{\rm Crit}(f)$ contained in $U_0=f^{-1}(\{0\}).$ Then the Behrend function $\nu_X$ of\/ $X$ is given by \e \nu_X(x)=(-1)^{\mathop{\rm dim}\nolimits U}\bigl(1-\chi(MF_f(x))\bigr) \qquad\text{for $x\in X$.} \label{dt3eq3} \e Moreover, the perverse sheaf of vanishing cycles\index{vanishing cycle!perverse sheaf}\index{perverse sheaf!of vanishing cycles} $\phi_f(\underline{{\mathbin{\mathbb Q}}}[n-1])$ on $U_0$ is supported on $X,$ and \nomenclature[1vphif]{$\phi_f$}{vanishing cycle functor on derived category of constructible sheaves} \nomenclature[MF]{$MF_f(x)$}{Milnor fibre of a holomorphic function $f$ at point $x$} \e \chi_{U_0}\bigl(\phi_f(\underline{{\mathbin{\mathbb Q}}}[n-1])\bigr)(x)=\begin{cases} \nu_X(x), & x\in X, \\ 0, & x\in U_0\setminus X, \end{cases} \label{dt3eq4} \e where $\nu_X$ is the Behrend function of the complex analytic space~$X$. \label{dt3thm1} \end{thm}
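To illustrate Theorem \ref{dt3thm1} in the simplest case (a routine consistency check, not drawn from \cite{Behr}), take $U={\mathbin{\mathbb C}}$ and $f(z)=z^{k+1}$ for $k\geqslant 1$. Then $X=\mathop{\rm Crit}(f)=\mathop{\rm Spec}\nolimits\bigl({\mathbin{\mathbb C}}[z]/(z^k)\bigr)$ is the $k$-fold point, and the Milnor fibre $MF_f(0)=\{z:z^{k+1}=\delta\}$, for $0<\md{\delta}\ll 1$, consists of $k+1$ points, so \eq{dt3eq3} gives
\begin{equation*}
\nu_X(0)=(-1)^{1}\bigl(1-(k+1)\bigr)=k.
\end{equation*}
Similarly, if $f\equiv 0$ then $X=U$ is smooth, $MF_f(x)=\emptyset$ for every $x,$ and \eq{dt3eq3} returns $\nu_X\equiv(-1)^{\mathop{\rm dim}\nolimits U},$ as expected for a smooth space.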
Thus, if $X$ is the Donaldson--Thomas moduli space of stable sheaves, one can, heuristically, think of $\nu_X$ as the {\it Euler characteristic of the perverse sheaf of vanishing cycles of the holomorphic Chern-Simons functional.}
In \cite[Question 4.18, 5.7]{JoSo}, Joyce and Song ask the following question. \begin{quest}{\bf(a)} Let\/ $X$ be a Calabi--Yau\/ $3$-fold over\/ ${\mathbin{\mathbb C}},$ and write\/ ${\mathbin{\mathcal M}}_{\rm si}$ \nomenclature[M al si]{${\mathbin{\mathcal M}}_{\rm si}$}{coarse moduli space of simple coherent sheaves on $X$} \index{coherent sheaf!simple} for the coarse moduli space of simple coherent sheaves on\/ $X$. Does there exist a natural perverse \index{perverse sheaf} sheaf\/ $\cal P$ on ${\mathbin{\mathcal M}}_{\rm si},$ with\/ $\chi_{{\mathbin{\mathcal M}}_{\rm si}}({\cal P})=\nu_{{\mathbin{\mathcal M}}_{\rm si}},$ which is locally isomorphic to $\phi_f(\underline{{\mathbin{\mathbb Q}}}[\mathop{\rm dim}\nolimits U-1])$ for $f,U$ as in \cite[Thm. 5.4]{JoSo}?
\noindent{\bf(b)} Is there also some Artin stack version of\/ $\cal P$ in\/ {\bf(a)} for the moduli stack\/ ${\mathbin{\mathfrak M}},$ locally isomorphic to $\phi_f(\underline{{\mathbin{\mathbb Q}}}[\mathop{\rm dim}\nolimits U-1])$ for $f,U$ as in Theorem {\rm\ref{dt5thm1}} below?
\noindent{\bf(c)} Let $M$ be a complex manifold, $\omega$ an almost closed holomorphic $(1,0)$-form on $M$ as defined below, and $X = \omega^{-1}(0)$ as a complex analytic subspace of $M.$ Can one define a natural perverse sheaf $\cal P$ supported on $X$, with $\chi_X({\cal P})=\nu_X$, such that ${\cal P}\cong\phi_f(\underline{{\mathbin{\mathbb Q}}}[\mathop{\rm dim}\nolimits U-1])$ when $\omega={\rm d} f$ for $f:M\rightarrow {\mathbin{\mathbb C}}$ holomorphic? Are there generalizations to the algebraic setting?
\label{dt5quest1} \end{quest}
One can also ask Question \ref{dt5quest1} for Saito's mixed Hodge modules~\cite{Sait}.\index{mixed Hodge module} If the answer to Question \ref{dt5quest1}(a) is yes, it would provide a way of {\it categorifying\/} (conventional) Donaldson--Thomas invariants $DT^\alpha(\tau)$. That is ${\mathbb H}^*\bigl({\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau);{\cal P}\vert_{{\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)}\bigr)$ would be a natural cohomology \index{perverse sheaf!hypercohomology} \nomenclature[HPM]{${\mathbb H}^*\bigl(\cal M; \cal P\bigr)$}{hypercohomology of a perverse sheaf $\cal P$ on a scheme $\cal M$} group of the stable moduli scheme ${\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)$ whose Euler characteristic is the Donaldson--Thomas invariant. This question is also crucial for the programme of Kontsevich--Soibelman \cite{KoSo1} to extend Donaldson--Thomas invariants of Calabi--Yau 3-folds to other motivic invariants.\index{motivic invariant}
This is discussed in \cite[Remark 5.8]{JoSo}.\index{categorification|)} We will explain in \S\ref{ourpapers} how this question has been resolved.
\subsubsection{The Behrend function and its characterization} \label{dt3.2}
Here we collect some important remarks on, and properties of, the Behrend function.
\paragraph{Behrend function as a multiplicity function in the weighted Euler characteristic.}
\label{dt3.2.1} \index{Behrend function!multiplicity function|(} \index{Euler characteristic} It is worth reporting here \cite[\S 1.2]{JoSo}, which provides a good way to think of Behrend functions as {\it multiplicity functions}. If $X$ is a finite type ${\mathbin{\mathbb C}}$-scheme then the Euler characteristic $\chi(X)$ `counts' points without multiplicity, so that each point of $X({\mathbin{\mathbb C}})$ contributes 1 to $\chi(X)$. \nomenclature[xred]{$X^{\rm red}$}{underlying reduced ${\mathbin{\mathbb C}}$-scheme of the ${\mathbin{\mathbb C}}$-scheme $X$} If $X^{\rm red}$ is the underlying reduced ${\mathbin{\mathbb C}}$-scheme then $X^{\rm red}({\mathbin{\mathbb C}})=X({\mathbin{\mathbb C}})$, so $\chi(X^{\rm red})=\chi(X)$, and $\chi(X)$ does not see non-reduced behaviour in $X$. However, the {\it weighted Euler characteristic} \index{Euler characteristic!weighted} $\chi(X,\nu_X)$ `counts' each $x\in X({\mathbin{\mathbb C}})$ weighted by its multiplicity $\nu_X(x)$. The Behrend function $\nu_X$ detects non-reduced behaviour, so in general $\chi(X,\nu_X)\ne\chi(X^{\rm red},\nu_{X^{\rm red}})$. For example, let $X$ be the $k$-fold point $\mathop{\rm Spec}\nolimits\bigl({\mathbin{\mathbb C}}[z]/(z^k)\bigr)$ for $k\geqslant 1$. Then $X({\mathbin{\mathbb C}})$ is a single point $x$ with $\nu_X(x)=k$, so $\chi(X)=1=\chi(X^{\rm red},\nu_{X^{\rm red}})$, but~$\chi(X,\nu_X)=k$.
An important moral of \cite{Behr} is that (at least in moduli problems with symmetric obstruction theories, such as Donaldson--Thomas theory) it is better to `count' points in a moduli scheme ${\mathbin{\mathcal M}}$ by the weighted Euler characteristic $\chi({\mathbin{\mathcal M}},\nu_{\mathbin{\mathcal M}})$ than by the unweighted Euler characteristic $\chi({\mathbin{\mathcal M}})$. One reason is that $\chi({\mathbin{\mathcal M}},\nu_{\mathbin{\mathcal M}})$ often gives answers unchanged under deformations of the underlying geometry, but $\chi({\mathbin{\mathcal M}})$ does not. For example, consider the family of ${\mathbin{\mathbb C}}$-schemes $X_t=\mathop{\rm Spec}\nolimits\bigl({\mathbin{\mathbb C}}[z]/(z^2-t^2)\bigr)$ for $t\in{\mathbin{\mathbb C}}$. Then $X_t$ is two reduced points $\pm t$ for $t\ne 0$, and a double point when $t=0$. So as above we find that $\chi(X_t,\nu_{X_t})=2$ for all $t$, which is deformation-invariant, but $\chi(X_t)$ is 2 for $t\ne 0$ and 1 for $t=0$, which is not deformation-invariant. \index{deformation-invariance}
\index{Behrend function!multiplicity function|)}
\paragraph{Properties of the Behrend function.} \label{dt3.2.2}
Here are some important properties of Behrend functions. They are proved by Behrend \cite[\S 1.2 \& Prop. 1.5]{Behr} when ${\mathbin{\mathbb K}}={\mathbin{\mathbb C}}$, but his proof is valid for general~${\mathbin{\mathbb K}}$. \begin{thm} Let $X,Y$ be Artin ${\mathbin{\mathbb K}}$-stacks locally of finite type. Then: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \item[{\rm(i)}] If\/ $X$ is smooth of dimension\/ $n$ then~$\nu_X\equiv(-1)^n$. \item[{\rm(ii)}] If\/ $\varphi:X\!\rightarrow\! Y$ is smooth with relative dimension $n$ then\/~$\nu_X\!\equiv\!(-1)^n\varphi^*(\nu_Y)$. \item[{\rm(iii)}] $\nu_{X\times Y}\equiv\nu_X\boxdot\nu_Y,$ where $(\nu_X\boxdot\nu_Y)(x,y)=\nu_X(x)\nu_Y(y)$. \end{itemize} \label{dt3thm3} \end{thm}
Let us recall \cite[Thm 4.11]{JoSo}. It is stated using the Milnor fibre, but its proof works algebraically over ${\mathbin{\mathbb K}}$.
\begin{thm} Let\/ $U$ be a smooth ${\mathbin{\mathbb K}}$-variety, $f:U\rightarrow {\mathbin{\mathbb A}}^1_{{\mathbin{\mathbb K}}}$ a regular function over $U,$ and $V$ a smooth ${\mathbin{\mathbb K}}$-subvariety of $U,$ and\/ $v\in V\cap\mathop{\rm Crit}(f)$. Define $\tilde U$ to be the blowup of\/ $U$ along\/ $V,$ with blowup map $\pi:\tilde U\rightarrow U,$ and set\/ $\tilde f=f\circ\pi:\tilde U\rightarrow {\mathbin{\mathbb A}}^1_{\mathbin{\mathbb K}}$. Then $\pi^{-1}(v)={\mathbb P}(T_vU/T_vV)$ is contained in $\mathop{\rm Crit}(\tilde f),$ and $$ \nu_{\mathop{\rm Crit}(f)}(v)\quad =\,\displaystyle \int\limits_{w\in {\mathbb P}(T_vU/T_vV)}\nu_{\mathop{\rm Crit}(\tilde f)}(w)\,{\rm d}\chi\quad+\quad (-1)^{\mathop{\rm dim}\nolimits U-\mathop{\rm dim}\nolimits V}\bigl(1-\mathop{\rm dim}\nolimits U+\mathop{\rm dim}\nolimits V\bigr)\nu_{\mathop{\rm Crit}(f\vert_V)}(v), $$ where $w\mapsto\nu_{\mathop{\rm Crit}(f)}(w)$ is a constructible function\index{constructible function} on ${\mathbb P}(T_vU/T_vV),$ and the integral is the Euler characteristic of\/ ${\mathbb P}(T_vU/T_vV)$ weighted by this. \label{blowup} \end{thm}
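As a sanity check of Theorem \ref{blowup} in a simple case (a sketch computed over ${\mathbin{\mathbb C}}$ via \eq{dt3eq3}, not taken from \cite{JoSo}), let $U={\mathbin{\mathbb A}}^2$, $f=x^2+y^2$, $V=\{0\}$ and $v=0$. Since the Milnor fibres of $x^2+y^2$, of $x^2$, and of $x^2s$ in two variables have $\chi=0$, $2$ and $0$ respectively, one finds $\nu_{\mathop{\rm Crit}(f)}(v)=1$ and $\nu_{\mathop{\rm Crit}(f\vert_V)}(v)=1$; and in the chart $y=tx$ of the blowup one has $\tilde f=x^2(1+t^2),$ so $\nu_{\mathop{\rm Crit}(\tilde f)}$ equals $-1$ on ${\mathbb P}(T_vU)\setminus\{t=\pm i\},$ where $\tilde f$ is equivalent to $x^2,$ and $+1$ at the two points $t=\pm i,$ where it is equivalent to $x^2s$. The formula then reads
\begin{equation*}
\underbrace{(-1)\cdot\chi\bigl({\mathbb P}^1\setminus\{\pm i\}\bigr)+1\cdot 2}_{=\,2}\;+\;\underbrace{(-1)^{2}\bigl(1-2+0\bigr)\cdot 1}_{=\,-1}\;=\;1\;=\;\nu_{\mathop{\rm Crit}(f)}(v),
\end{equation*}
as claimed, using $\chi\bigl({\mathbb P}^1\setminus\{\pm i\}\bigr)=0$.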
One can see the next result as a kind of {\it virtual Gauss--Bonnet formula}. It is crucial for Donaldson--Thomas theory. It is proved by Behrend \cite[Th. 4.18]{Behr} when ${\mathbin{\mathbb K}}={\mathbin{\mathbb C}}$, but his proof is valid for general~${\mathbin{\mathbb K}}$. It depends crucially on \cite[Prop.\,1.12]{Behr}, which again depends on an application of MacPherson's theorem \cite{MacP} over ${\mathbin{\mathbb C}}$ but is valid over general ${\mathbin{\mathbb K}}$ thanks to Kennedy \cite{Kenn} and the definition of the Euler characteristic over an algebraically closed field ${\mathbin{\mathbb K}}$ of characteristic zero given by Joyce \cite{Joyc.1}. See also an independent construction of the Schwartz--MacPherson Chern class given by Aluffi \cite{Alu1}.
\begin{thm} Let $X$ be a proper ${\mathbin{\mathbb K}}$-scheme with a symmetric obstruction theory, and\/ $[X]^{\rm vir}\in A_0(X)$ the corresponding virtual class. Then \begin{equation*} \textstyle\displaystyle \int_{[X]^{\rm vir}}1=\chi(X,\nu_X)\in{\mathbin{\mathbb Z}}, \end{equation*} where $\chi(X,\nu_X)=\int_{X({\mathbin{\mathbb K}})}\nu_X\,{\rm d}\chi$ is the Euler characteristic of\/ $X$ weighted by the Behrend function $\nu_X$ of\/ $X$. In particular, $ \int_{[X]^{\rm vir}}1$ depends only on the\/ ${\mathbin{\mathbb K}}$-scheme structure of\/ $X,$ not on the choice of symmetric obstruction theory. \label{dt3thm4} \end{thm}
Theorem \ref{dt3thm4} implies that $DT^\alpha(\tau)$ in \eq{dt2eq1} is given by \index{Donaldson--Thomas invariants!original $DT^\alpha(\tau)$} \e DT^\alpha(\tau)=\chi\bigl({\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau),\nu_{{\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)}\bigr). \label{dt3eq5} \e There is a big difference between the two equations \eq{dt2eq1} and \eq{dt3eq5} defining Donaldson--Thomas invariants. Equation \eq{dt2eq1} is non-local, and non-motivic, and makes sense only if ${\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)$ is a proper ${\mathbin{\mathbb K}}$-scheme. But \eq{dt3eq5} is local, and (in a sense) motivic, and makes sense for arbitrary finite type ${\mathbin{\mathbb K}}$-schemes ${\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)$. In fact, one could take \eq{dt3eq5} to be the definition of Donaldson--Thomas invariants even when ${\mathbin{\mathcal M}}_{\rm ss}^\alpha(\tau)\ne{\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)$, but in \cite [\S 6.5]{JoSo} Joyce and Song argued that this is not a good idea, as then $DT^\alpha(\tau)$ would not be unchanged under deformations of~$X$. In \cite[\S 6.5]{JoSo} Joyce and Song say: \begin{quotation} `Equation \eq{dt3eq5} was the inspiration for this book. It shows that Donaldson--Thomas invariants $DT^\alpha(\tau)$ can be written as {\it motivic\/} invariants,\index{motivic invariant} like those studied in \cite{Joyc.3,Joyc.4,Joyc.5,Joyc.6,Joyc.7}, and so it raises the possibility that we can extend the results of \cite{Joyc.3,Joyc.4,Joyc.5,Joyc.6,Joyc.7} to Donaldson--Thomas invariants by including Behrend functions as weights.' \end{quotation}
\paragraph{Almost closed 1-forms.}
\label{dt3.2.3} \index{almost closed $1$-form|(}
In \cite{PaTh} Pandharipande and Thomas give a counterexample to the idea that every scheme admitting a symmetric obstruction theory can locally be written as the critical locus of a regular function on a smooth scheme. This limits the usefulness of the above formula for $\nu_X(x)$ in terms of the Milnor fibre. Here is the more general approach due to Behrend \cite{Behr}, which the author tried to use to give a strictly algebraic proof of the Behrend function identities; this proof, however, later turned out to be not completely correct.
\begin{dfn} Let ${\mathbin{\mathbb K}}$ be an algebraically closed field, and $M$ a smooth ${\mathbin{\mathbb K}}$-scheme. Let $\omega$ be an algebraic 1-form on $M$, that is, $\omega\in H^0(T^*M)$. Call $\omega$ {\it almost closed\/} if ${\rm d}\omega$ is a section of $I_\omega\cdot\Lambda^2T^*M$, where $I_\omega$ is the ideal sheaf of the zero locus $\omega^{-1}(0)$ of $\omega$. Equivalently, ${\rm d}\omega\vert_{\omega^{-1}(0)}$ is zero as a section of $\Lambda^2T^*M\vert_{\omega^{-1}(0)}$. In (\'etale) local coordinates $(z_1,\ldots,z_n)$ on $M$, if $$\omega=f_1{\rm d} z_1+\cdots+f_n{\rm d} z_n,$$ then $\omega$ is almost closed provided \begin{equation*} \frac{\partial f_j}{\partial z_k}\equiv\frac{\partial f_k}{\partial z_j} \;\>\mod (f_1,\ldots,f_n). \end{equation*} \label{dt3def2} \end{dfn} Let $M$ be a smooth Deligne--Mumford stack and $\omega$ an almost \index{Deligne--Mumford stack} closed 1-form on $M$ with zero locus $X=Z(\omega)$. It is a general principle, that a section of a vector bundle defines a perfect obstruction theory for the zero locus of the section. This obstruction theory is given by \e \begin{gathered}
\xymatrix@C=30pt@R=30pt{ [T_{M_{|_{X}}} \ar[r]^{d\circ\omega^\vee}\ar[d]_{\omega^\vee}
& \Omega_{M_{|_{X}}}]\ar[d]^1\\
[I/I^2\ar[r]^{d} & \Omega_{M_{|_{X}}}]} \label{dt3example} \end{gathered} \e
This obstruction theory is symmetric, in a canonical way, because \index{obstruction theory!symmetric} under the assumption that $\omega$ is almost closed one has that $d\circ\omega^\vee$ is self-dual, as a homomorphism of vector bundles over $X$.
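For a concrete instance of Definition \ref{dt3def2} (an illustrative example, not taken from \cite{Behr}), take $M={\mathbin{\mathbb A}}^2$ with coordinates $(x,y)$ and
\begin{equation*}
\omega=x\,{\rm d} x+x^2\,{\rm d} y, \qquad\text{so that}\qquad {\rm d}\omega=2x\,{\rm d} x\wedge{\rm d} y.
\end{equation*}
Here $I_\omega=(x,x^2)=(x),$ and ${\rm d}\omega$ is a section of $I_\omega\cdot\Lambda^2T^*M$; equivalently, with $f_1=x$ and $f_2=x^2$ one has $\partial f_2/\partial x=2x\equiv 0=\partial f_1/\partial y \mod (x)$. Thus $\omega$ is almost closed but not closed, with zero locus $X=\{x=0\}$.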
Behrend \cite[Prop. 3.14]{Behr} proves a kind of converse of this, by a proof valid for general~${\mathbin{\mathbb K}}$: at least locally, every symmetric obstruction theory is given in this way by an almost closed $1$-form.
\begin{prop} Let\/ ${\mathbin{\mathbb K}}$ be an algebraically closed field, and\/ $X$ a ${\mathbin{\mathbb K}}$-scheme with a symmetric obstruction theory. Then $X$ may be covered by Zariski open sets $Y\subseteq X$ such that there exists a smooth\/ ${\mathbin{\mathbb K}}$-scheme $M,$ an almost closed\/ $1$-form $\omega$ on $M,$ and an isomorphism of\/ ${\mathbin{\mathbb K}}$-schemes\/~$Y\cong\omega^{-1}(0)$. \label{dt3prop5} \end{prop} Restricting to ${\mathbin{\mathbb K}}={\mathbin{\mathbb C}}$, Behrend \cite[Prop. 4.22]{Behr} gives an expression for the Behrend function of the zero locus of an almost closed 1-form as a {\it linking number}\index{linking number}. It is possible to use it to give an algebraic proof of the first Behrend identity over ${\mathbin{\mathbb C}}.$
\begin{prop} Let\/ $M$ be a smooth scheme and\/ $\omega$ an almost closed $1$-form on $M,$ and let\/ $Y=\omega^{-1}(0)$ be the scheme-theoretic zero locus of\/ $\omega$. Fix\/ $p$ a closed point in $Y$, choose \'etale coordinates $(x_1,\ldots,x_n)$ on $M$ around $p$ with\/ $(x_1,\ldots,x_n,p_1,\ldots, p_n)$ the associated canonical coordinates for $T^*M.$ Write $\omega=\displaystyle\sum_{i=1}^{n}f_i {\rm d} x_i$ in these coordinates. One can identify $T^*M$ near $p$ with\/~${\mathbin{\mathbb C}}^{2n}$. Then for all\/ $\eta\in{\mathbin{\mathbb C}}$ and\/ $\epsilon\in{\mathbin{\mathbb R}}$ with\/ $0<\md{\eta}\ll\epsilon\ll 1$ one has \e \nu_Y(p)=L_{{\cal S}_\epsilon}\bigl(\Gamma_{\eta^{-1}\omega}\cap{\cal S}_\epsilon, \Delta\cap {\cal S}_\epsilon\bigr), \label{dt6eq8} \e where \nomenclature[sphere]{${\cal S}_\epsilon$}{sphere of radius $\epsilon$ in ${\mathbin{\mathbb C}}^{2n}$} \nomenclature[1b]{$\Gamma_{\eta^{-1}\omega}$}{graph of $\eta^{-1}\omega$ for $\omega$ almost closed $1$-form and $\eta\in{\mathbin{\mathbb C}}$} \nomenclature[1c]{$\Delta$}{graph of the section given by the square of the distance function} \nomenclature[LS]{$L_{{\cal S}_\epsilon}(\,,\,)$}{linking number of two disjoint, closed, oriented $(n-1)$-submanifolds in ${\cal S}_\epsilon$} \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \item ${\cal S}_\epsilon\!=\!\bigl\{(x_1,\ldots,p_n)\!\in\!{\mathbin{\mathbb C}}^{2n}: \ms{x_1}\!+\!\cdots\!+\!\ms{p_n}\!=\!\epsilon^2\bigr\}$ is the sphere of radius $\epsilon$ in ${\mathbin{\mathbb C}}^{2n},$ \item $\Gamma_{\eta^{-1}\omega}$ is the graph of\/ $\eta^{-1}\omega$ regarded locally as a complex submanifold of\/ ${\mathbin{\mathbb C}}^{2n}$ of real dimension $2n$ oriented so that $M\longrightarrow \Omega_M$ is orientation preserving and defined by the equations $\{\eta p_i=f_i(x)\},$ \item $\Delta=\bigl\{(x_1,\ldots,p_n)\!\in\!{\mathbin{\mathbb C}}^{2n}:p_j\!=\!\bar x_j,$ $j\!=\!1,\ldots,n\bigr\},$ i.e. 
the image of the smooth map $M\longrightarrow\Omega_M$ given by the section ${\rm d}\varrho$ of $\Omega_M,$ with $$\varrho=\displaystyle \sum_{i} x_{i} \bar x_{i}+\displaystyle \sum_{i} p_{i} \bar p_{i}$$ the square of the distance function defined on $\Omega_M$ by the choice of coordinates of real dimension $2n,$ \item $L_{{\cal S}_\epsilon}(\,,\,)$ is the linking number of two disjoint, closed, oriented\/ $(n\!-\!1)$-submanifolds in~${\cal S}_\epsilon$. \end{itemize} \label{dt6prop1} \end{prop}
We remark here that $\Delta$ is not a complex submanifold, but only a real submanifold. Thus, there are no good generalizations of $\Delta$ to other fields ${\mathbin{\mathbb K}}.$
\index{almost closed $1$-form|)}
\index{microlocal geometry|)}\index{Behrend function|)}
\subsection{Generalizations of Donaldson--Thomas theory} \label{dt4}
Next we briefly review how the theory of generalized Donaldson--Thomas invariants has been developed, starting from the series of papers \cite{Joyc.1,Joyc.2,Joyc.3,Joyc.4,Joyc.5,Joyc.6,Joyc.7} on constructible functions, stack functions, Ringel--Hall algebras, counting invariants for Calabi--Yau 3-folds, and wall-crossing, and then summarizing the main results of \cite{JoSo}, including the definition of generalized Donaldson--Thomas invariants $\bar{DT}{}^\alpha(\tau) \in{\mathbin{\mathbb Q}}$, their deformation-invariance, and wall-crossing formulae under change of stability condition~$\tau$. There follow two paragraphs with statements and sketch proofs of the theorems \cite[Thm 5.5]{JoSo} and \cite[Thm 5.11]{JoSo} on which this paper concentrates. We conclude with a brief remark on Kontsevich and Soibelman's parallel approach to Donaldson--Thomas theory \cite{KoSo1}, focusing on analogies and differences with Joyce and Song's construction \cite{JoSo} rather than going into a detailed exposition, since the present paper does not need it. \index{constructible function} \index{stack function} \index{Ringel--Hall algebra} \index{counting invariants for Calabi--Yau 3-folds} \index{wall crossing} \index{deformation-invariance} \index{configurations}
\subsubsection[Brief sketch of background from $\text{\cite{Joyc.1,Joyc.2,Joyc.3,Joyc.4,Joyc.5,Joyc.6,Joyc.7}}$]{Brief sketch of background from \cite{Joyc.1,Joyc.2,Joyc.3,Joyc.4,Joyc.5,Joyc.6,Joyc.7}} \label{dt4.1}
We recall here a few important ideas from \cite{Joyc.1,Joyc.2,Joyc.3,Joyc.4, Joyc.5,Joyc.6,Joyc.7}. These papers deal with {\it Artin stacks} rather than coarse moduli \index{Artin stack} schemes,\index{coarse moduli scheme}\index{moduli scheme!coarse} as in \cite{Thom}. Let $X$ be a Calabi--Yau 3-fold over ${\mathbin{\mathbb C}}$, and write ${\mathbin{\mathfrak M}}$ for the moduli stack of all coherent sheaves $E$ on $X$. It is an Artin ${\mathbin{\mathbb C}}$-stack.
\index{stack function} \nomenclature[SFfF]{$\mathop{\rm SF}\nolimits({\mathbin{\mathfrak F}})$}{vector space of `stack functions' on an Artin stack ${\mathbin{\mathfrak F}}$, defined using representable 1-morphisms} The ring of {\it stack functions} $\mathop{\rm SF}\nolimits({\mathbin{\mathfrak M}})$ in \cite{Joyc.2} is basically the Grothendieck group $K_0(\mathop{\rm Sta}_{\mathbin{\mathfrak M}})$ of the \index{2-category} \index{Grothendieck group} 2-category $\mathop{\rm Sta}_{\mathbin{\mathfrak M}}$ of stacks over ${\mathbin{\mathfrak M}}$. That is, \nomenclature[Sta]{$\mathop{\rm Sta}_{\mathbin{\mathfrak M}}$}{2-category of stacks over ${\mathbin{\mathfrak M}}$} $\mathop{\rm SF}\nolimits({\mathbin{\mathfrak M}})$ is generated by isomorphism classes $[({\mathbin{\mathfrak R}},\rho)]$ of representable 1-morphisms $\rho:{\mathbin{\mathfrak R}}\rightarrow{\mathbin{\mathfrak M}}$ for ${\mathbin{\mathfrak R}}$ a finite type Artin ${\mathbin{\mathbb C}}$-stack, with the relation \index{1-morphism!representable} \index{Artin stack!of finite type} \begin{equation*} [({\mathbin{\mathfrak R}},\rho)]=[({\mathbin{\mathfrak S}},\rho\vert_{\mathbin{\mathfrak S}})]+[({\mathbin{\mathfrak R}}\setminus{\mathbin{\mathfrak S}},\rho\vert_{{\mathbin{\mathfrak R}}\setminus{\mathbin{\mathfrak S}}})] \end{equation*} when ${\mathbin{\mathfrak S}}$ is a closed ${\mathbin{\mathbb C}}$-substack of ${\mathbin{\mathfrak R}}$. In \cite{Joyc.2} Joyce studies different kinds of stack function spaces with other choices of generators and relations, and operations on these spaces. These include projections \nomenclature[1pivi]{$\Pi^{\rm vi}_n$}{projection to stack functions of `virtual rank $n$'} $\Pi^{\rm vi}_n:\mathop{\rm SF}\nolimits({\mathbin{\mathfrak M}})\rightarrow\mathop{\rm SF}\nolimits({\mathbin{\mathfrak M}})$ to stack functions of {\it virtual rank} $n$, which act on $[({\mathbin{\mathfrak R}},\rho)]$ by modifying ${\mathbin{\mathfrak R}}$ depending on its stabilizer groups. 
\index{stabilizer group} \index{virtual rank}\index{stack function!virtual rank} \nomenclature[SFai]{$\mathop{\rm SF}\nolimits_{\rm al}^{\rm ind}({\mathbin{\mathfrak M}})$}{Lie subalgebra of $\mathop{\rm SF}\nolimits_{\rm al}({\mathbin{\mathfrak M}})$ of stack functions `supported on virtual indecomposables'} \index{virtual indecomposable}\index{stack function!supported on virtual indecomposables}
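The defining cut-and-paste relation above can be illustrated by a standard motivic example (not specific to \cite{Joyc.2}; the 1-morphism $\rho$ here is an arbitrary illustrative choice):

```latex
% Take \mathfrak{R} = \mathbb{P}^1 (a scheme, viewed as an Artin stack), with
% any representable 1-morphism \rho : \mathbb{P}^1 \to \mathfrak{M}, and let
% \mathfrak{S} = \{\infty\} be a closed point. The relation gives
[(\mathbb{P}^1,\rho)]
   = [(\{\infty\},\rho\vert_{\{\infty\}})]
   + [(\mathbb{A}^1,\rho\vert_{\mathbb{A}^1})],
% the stack-function analogue of the familiar scissor relation
% [\mathbb{P}^1] = [\mathrm{pt}] + [\mathbb{A}^1] in the Grothendieck
% group K_0(\mathrm{Var}_{\mathbb{C}}) of varieties.
```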
In \cite[\S 5.2]{Joyc.4} he defines a {\it Ringel--Hall} type algebra\index{Ringel--Hall algebra} $\mathop{\rm SF}\nolimits_{\rm al}({\mathbin{\mathfrak M}})$ of stack \nomenclature[SFal]{$\mathop{\rm SF}\nolimits_{\rm al}({\mathbin{\mathfrak M}})$}{Ringel--Hall algebra of stack functions with algebra stabilizers} functions\index{stack function!with algebra stabilizers} {\it with algebra stabilizers} on ${\mathbin{\mathfrak M}}$, with an associative, non-commutative multiplication $*$, and in \cite[\S 5.2]{Joyc.4} he defines a Lie subalgebra $\mathop{\rm SF}\nolimits_{\rm al}^{\rm ind}({\mathbin{\mathfrak M}})$ of stack functions {\it supported on virtual indecomposables}. In \cite[\S 6.5]{Joyc.4} he defines an explicit Lie algebra $L(X)$ to be the \nomenclature[LX]{$L(X)$}{Lie algebra depending on a Calabi--Yau $3$-fold $X$} \nomenclature[1laal]{$\lambda^\alpha$}{basis element of Lie algebra $L(X)$} ${\mathbin{\mathbb Q}}$-vector space with basis of symbols $\lambda^\alpha$ for $\alpha\in K^{\rm num}(\mathop{\rm coh}\nolimits(X))$, with Lie bracket \e [\lambda^\alpha,\lambda^\beta]=\bar\chi(\alpha,\beta)\lambda^{\alpha+\beta}, \label{dt4eq1} \e for $\alpha,\beta\in K^{\rm num}(\mathop{\rm coh}\nolimits(X))$, where $\bar\chi(\,,\,)$ is the \nomenclature[1wchbar]{$\bar\chi(\,,\,)$}{Euler form on $K^{\rm num}(\mathop{\rm coh}\nolimits(X))$} {\it Euler form} \index{Euler form} on $K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ defined as follows: \e \bar{\chi}([E],[F])=\displaystyle\sum_{i\geq 0} (-1)^i \mathop{\rm dim}\nolimits\mathop{\rm Ext}\nolimits^i(E,F) \label{eu} \e for all $E,F\in \mathop{\rm coh}\nolimits(X).$ As $X$ is a Calabi--Yau 3-fold, $\bar\chi$ is antisymmetric, so \eq{dt4eq1} satisfies the Jacobi identity and makes $L(X)$ into an infinite-dimensional Lie algebra over~${\mathbin{\mathbb Q}}$.
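As a quick check of the last assertion, one can expand the cyclic sum for \eq{dt4eq1} directly, writing $\bar\chi_{\alpha\beta}=\bar\chi(\alpha,\beta)$:

```latex
\bigl[[\lambda^\alpha,\lambda^\beta],\lambda^\gamma\bigr]
 + \bigl[[\lambda^\beta,\lambda^\gamma],\lambda^\alpha\bigr]
 + \bigl[[\lambda^\gamma,\lambda^\alpha],\lambda^\beta\bigr]
 = \bigl(\bar\chi_{\alpha\beta}\,\bar\chi_{(\alpha+\beta)\gamma}
 + \bar\chi_{\beta\gamma}\,\bar\chi_{(\beta+\gamma)\alpha}
 + \bar\chi_{\gamma\alpha}\,\bar\chi_{(\gamma+\alpha)\beta}\bigr)
   \lambda^{\alpha+\beta+\gamma}.
% Expanding each \bar\chi_{(\alpha+\beta)\gamma} = \bar\chi_{\alpha\gamma}
% + \bar\chi_{\beta\gamma} by bilinearity gives six terms, which cancel in
% pairs by antisymmetry \bar\chi_{\beta\alpha} = -\bar\chi_{\alpha\beta}.
% The coefficient therefore vanishes, which is the Jacobi identity.
```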
Then in \cite[\S 6.6]{Joyc.4} Joyce defines a \nomenclature[1vPsSfL]{$\Psi$}{Lie algebra morphism $\Psi:\mathop{\rm SF}\nolimits_{\rm al}^{\rm ind}({\mathbin{\mathfrak M}})\rightarrow L(X)$} {\it Lie algebra morphism\/} $\Psi:\mathop{\rm SF}\nolimits_{\rm al}^{\rm ind}({\mathbin{\mathfrak M}})\rightarrow L(X)$, which, roughly speaking, is of the form \e \Psi(f)\;\;\;\;=\textstyle\displaystyle \sum_{\alpha\in K^{\rm num}(\mathop{\rm coh}\nolimits(X))}\chi^{\rm stk} \bigl(f\vert_{{\mathbin{\mathfrak M}}^\alpha}\bigr)\lambda^{\alpha}, \label{dt4eq2} \e where $f=\sum_{i=1}^mc_i[({\mathbin{\mathfrak R}}_i,\rho_i)]$ is a stack function on ${\mathbin{\mathfrak M}}$, and ${\mathbin{\mathfrak M}}^\alpha$ is the substack in ${\mathbin{\mathfrak M}}$ of sheaves $E$ with \nomenclature[Mal]{${\mathbin{\mathfrak M}}^\alpha$}{the substack in ${\mathbin{\mathfrak M}}$ of sheaves $E$ with class $\alpha$} class $\alpha$, and $\chi^{\rm stk}$ is a kind of stack-theoretic Euler \nomenclature[1wchistk]{$\chi^{\rm stk}$}{stack-theoretic Euler characteristic} \index{Euler characteristic!stack-theoretic} characteristic. But in fact the definition of $\Psi$, and the proof that $\Psi$ is a Lie algebra morphism, are highly nontrivial, and use many ideas from \cite{Joyc.1,Joyc.2,Joyc.4}, including those of `virtual rank' and `virtual indecomposable'.\index{virtual indecomposable} The problem is that the obvious definition of $\chi^{\rm stk}$ usually involves dividing by zero, so defining \eq{dt4eq2} in a way that makes sense is quite subtle. The proof that $\Psi$ is a Lie algebra morphism uses {\it Serre duality} and the \index{Serre duality} assumption that $X$ is a Calabi--Yau 3-fold.
\nomenclature[M bal ssalt]{${\mathbin{\mathfrak M}}_{\rm ss}^\alpha(\tau)$}{open, finite type substack in ${\mathbin{\mathfrak M}}$ of $\tau$-semistable sheaves $E$ in class $\alpha$, for all $\alpha\in K^{\rm num}(\mathop{\rm coh}\nolimits(X))$} \nomenclature[M bal stalt]{${\mathbin{\mathfrak M}}_{\rm st}^\alpha(\tau)$}{open, finite type substack in ${\mathbin{\mathfrak M}}$ of $\tau$-stable sheaves $E$ in class $\alpha$, for all $\alpha\in K^{\rm num}(\mathop{\rm coh}\nolimits(X))$} Now let $\tau$ be a stability condition on $\mathop{\rm coh}\nolimits(X)$, such as Gieseker stability. Then one has open, finite type substacks \index{Gieseker stability} ${\mathbin{\mathfrak M}}_{\rm ss}^\alpha(\tau),{\mathbin{\mathfrak M}}_{\rm st}^\alpha(\tau)$ in ${\mathbin{\mathfrak M}}$ of $\tau$-(semi)stable sheaves $E$ in class $\alpha$, for all $\alpha\in K^{\rm num}(\mathop{\rm coh}\nolimits(X))$. Write $\bar\delta_{\rm ss}^\alpha(\tau)$ for the \index{stack function!characteristic function} \nomenclature[1cc]{$\bar\delta_{\rm ss}^\alpha(\tau)$}{element of the Ringel--Hall Lie algebra $\mathop{\rm SF}\nolimits_{\rm al}({\mathbin{\mathfrak M}})$ that `counts' $\tau$-semistable objects in class $\alpha$} characteristic function of ${\mathbin{\mathfrak M}}_{\rm ss}^\alpha(\tau)$, in the sense of stack functions \cite{Joyc.2}. Then $\bar\delta_{\rm ss}^\alpha(\tau)\in\mathop{\rm SF}\nolimits_{\rm al}({\mathbin{\mathfrak M}})$. In \cite[\S 8]{Joyc.5}, Joyce defines elements $\bar\epsilon^\alpha(\tau)$ in $\mathop{\rm SF}\nolimits_{\rm al}({\mathbin{\mathfrak M}})$ by \nomenclature[1ccc]{$\bar\epsilon^\alpha(\tau)$}{element of the Ringel--Hall Lie algebra $\mathop{\rm SF}\nolimits_{\rm al}^{\rm ind}({\mathbin{\mathfrak M}})$ that `counts' $\tau$-semistable objects in class $\alpha$} \e \bar\epsilon^\alpha(\tau)\quad= \!\!\!\!\!\!\! 
\sum_{\begin{subarray}{l}\small{n\geqslant 1,\;\alpha_1,\ldots,\alpha_n\in K^{\rm num}(\mathop{\rm coh}\nolimits(X)):}\\ \small{\alpha_1+\cdots+\alpha_n=\alpha,\; \tau(\alpha_i)=\tau(\alpha),\text{ all $i$}}\end{subarray}} \!\!\! \frac{(-1)^{n-1}}{n}\,\,\bar\delta_{\rm ss}^{\alpha_1}(\tau)*\bar \delta_{\rm ss}^{\alpha_2}(\tau)* \cdots*\bar\delta_{\rm ss}^{\alpha_n}(\tau), \label{dt4eq3} \e where $*$ is the Ringel--Hall multiplication in $\mathop{\rm SF}\nolimits_{\rm al}({\mathbin{\mathfrak M}})$. Then \cite[Thm. 8.7]{Joyc.5} shows that $\bar\epsilon^\alpha(\tau)$ lies in the Lie subalgebra $\mathop{\rm SF}\nolimits_{\rm al}^{\rm ind}({\mathbin{\mathfrak M}})$, a nontrivial result. Thus one can apply the Lie algebra morphism $\Psi$ to $\bar\epsilon^\alpha(\tau)$. In \cite[\S 6.6]{Joyc.6} he defines invariants $J^\alpha(\tau)\in{\mathbin{\mathbb Q}}$ for all $\alpha\in K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ by \nomenclature[Jalt]{$J^\alpha(\tau)$}{invariant counting $\tau$-semistable sheaves in class $\alpha$ on a Calabi--Yau $3$-fold, introduced in \cite{Joyc.7}} \e \Psi\bigl(\bar\epsilon^\alpha(\tau)\bigr)=J^\alpha(\tau)\lambda^\alpha. \label{dt4eq4} \e
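For orientation, here is the simplest nontrivial instance of \eq{dt4eq3}, with $\beta,\gamma$ purely illustrative classes: suppose the only decompositions of $\alpha$ into classes of the same slope are $\alpha$ itself and $\alpha=\beta+\gamma$ with $\tau(\beta)=\tau(\gamma)=\tau(\alpha)$ and $\beta\ne\gamma$. Then

```latex
\bar\epsilon^\alpha(\tau)
  = \bar\delta_{\rm ss}^{\alpha}(\tau)
  - \tfrac{1}{2}\,\bar\delta_{\rm ss}^{\beta}(\tau)
      *\bar\delta_{\rm ss}^{\gamma}(\tau)
  - \tfrac{1}{2}\,\bar\delta_{\rm ss}^{\gamma}(\tau)
      *\bar\delta_{\rm ss}^{\beta}(\tau),
% the n=1 term plus the two ordered n=2 terms
% (\alpha_1,\alpha_2)=(\beta,\gamma) and (\gamma,\beta),
% each weighted by (-1)^{n-1}/n = -1/2.
```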
These $J^\alpha(\tau)$ are rational numbers `counting' \index{invariant $J^\alpha(\tau)$} $\tau$-semistable sheaves $E$ in class $\alpha$. When ${\mathbin{\mathcal M}}_{\rm ss}^\alpha(\tau)={\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)$ then $J^\alpha(\tau)=\chi({\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau))$, that is, $J^\alpha(\tau)$ is the na\"\i ve Euler characteristic of the moduli space \index{Euler characteristic!na\"\i ve} ${\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)$. This is {\it not\/} weighted by the Behrend function $\nu_{{\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)}$, and so in general does not coincide with the Donaldson--Thomas invariant $DT^\alpha(\tau)$ in~\eq{dt3eq5}. \index{Donaldson--Thomas invariants!original $DT^\alpha(\tau)$} \nomenclature[1wchinaive]{$\chi^{\textrm{na}}(C)$}{na\"\i ve Euler characteristic of a constructible set $C$ in a stack as in \cite{Joyc.1}} As the $J^\alpha(\tau)$ do not include Behrend functions, they do not count semistable sheaves with multiplicity, and so they will not in general be unchanged under deformations of the underlying Calabi--Yau 3-fold, as Donaldson--Thomas invariants are. However, the $J^\alpha(\tau)$ do have very good properties under change of stability condition. In \cite{Joyc.6} Joyce shows that if $\tau,\tilde\tau$ are two stability conditions on $\mathop{\rm coh}\nolimits(X)$, then it is possible to write $\bar\epsilon^\alpha(\tilde\tau)$ in terms of a (complicated) explicit formula involving the $\bar\epsilon^\beta(\tau)$ for $\beta\in K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ and the Lie bracket in~$\mathop{\rm SF}\nolimits_{\rm al}^{\rm ind}({\mathbin{\mathfrak M}})$. Applying the Lie algebra morphism $\Psi$ shows that $J^\alpha(\tilde\tau)\lambda^\alpha$ may be written in terms of the $J^\beta(\tau)\lambda^\beta$ and the Lie bracket in $L(X)$, and hence \cite[Thm. 6.28]{Joyc.6} yields an explicit transformation law for the $J^\alpha(\tau)$ under change of stability condition. 
In \cite{Joyc.7} he shows how to encode invariants $J^\alpha(\tau)$ satisfying a transformation law in generating functions on a complex manifold of stability conditions, which are both holomorphic and continuous, despite the discontinuous wall-crossing behaviour of the $J^\alpha(\tau)$.
\subsubsection[Summary of the main results from $\text{\cite{JoSo}}$]{Summary of the main results from \cite{JoSo}} \index{Donaldson--Thomas invariants!generalized $\bar{DT}{}^\alpha(\tau)$|(} \label{dt4.2}
The basic idea behind the project developed in \cite{JoSo} is that the Behrend function $\nu_{\mathbin{\mathfrak M}}$ of the moduli stack ${\mathbin{\mathfrak M}}$ of coherent \index{Behrend function} sheaves in $X$ should be inserted as a weight in the programme of \cite{Joyc.1,Joyc.2,Joyc.3,Joyc.4,Joyc.5,Joyc.6,Joyc.7} summarized in \S\ref{dt4.1}. Thus one will obtain weighted versions $\tilde\Psi$ of the Lie algebra morphism $\Psi$ of \eq{dt4eq2}, and $\bar{DT}{}^\alpha(\tau)$ of the counting invariant $J^\alpha(\tau)\in{\mathbin{\mathbb Q}}$ in \eq{dt4eq4}. Here is how this is worked out in \cite{JoSo}.
Joyce and Song define a modification $\tilde L(X)$ of the Lie algebra $L(X)$ above, \nomenclature[LXt]{$\tilde L(X)$}{Lie algebra depending on a Calabi--Yau $3$-fold $X$, variant of $L(X)$} the ${\mathbin{\mathbb Q}}$-vector space with basis of symbols $\tilde \lambda^\alpha$ for \nomenclature[1lati]{$\tilde \lambda^\alpha$}{basis element of Lie algebra $\tilde L(X)$} $\alpha\in K^{\rm num}(\mathop{\rm coh}\nolimits(X))$, with Lie bracket \begin{equation*} [\tilde\lambda^\alpha,\tilde\lambda^\beta]=(-1)^{\bar\chi(\alpha,\beta)} \bar\chi(\alpha,\beta)\tilde \lambda^{\alpha+\beta}, \end{equation*} which is \eq{dt4eq1} with a sign change. Then they define a {\it Lie algebra morphism\/} $\tilde\Psi:\mathop{\rm SF}\nolimits_{\rm al}^{\rm ind}({\mathbin{\mathfrak M}})\rightarrow\tilde L(X)$. Roughly speaking this is of the form \nomenclature[1vPsiti]{$\tilde\Psi$}{Lie algebra morphism $\tilde\Psi:\mathop{\rm SF}\nolimits_{\rm al}^{\rm ind}({\mathbin{\mathfrak M}})\rightarrow\tilde L(X)$} \e \tilde\Psi(f)\;\;\;\;=\textstyle\displaystyle \sum_{\alpha\in K^{\rm num}(\mathop{\rm coh}\nolimits(X))}\chi^{\rm stk} \bigl(f\vert_{{\mathbin{\mathfrak M}}^\alpha},\nu_{{\mathbin{\mathfrak M}}}\bigr)\tilde \lambda^{\alpha}, \label{dt4eq5} \e that is, in \eq{dt4eq2} we replace the stack-theoretic Euler characteristic $\chi^{\rm stk}$ with a stack-theoretic Euler \index{Euler characteristic!stack-theoretic} \index{Euler characteristic!weighted} characteristic weighted by the Behrend function $\nu_{{\mathbin{\mathfrak M}}}$. The proof that $\tilde\Psi$ is a Lie algebra morphism combines the proof in \cite{Joyc.4} that $\Psi$ is a Lie algebra morphism with the two {\it Behrend function identities} \index{Behrend function!Behrend identities} \eq{dt6eq1}--\eq{dt6eq2} proved in \cite[Thm.~5.11]{JoSo} and reported below. 
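To see that the sign change is harmless, note first that antisymmetry survives: since $\bar\chi(\beta,\alpha)=-\bar\chi(\alpha,\beta)$ and $(-1)^{-k}=(-1)^k$,

```latex
[\tilde\lambda^\beta,\tilde\lambda^\alpha]
  = (-1)^{\bar\chi(\beta,\alpha)}\,\bar\chi(\beta,\alpha)\,
    \tilde\lambda^{\alpha+\beta}
  = -(-1)^{\bar\chi(\alpha,\beta)}\,\bar\chi(\alpha,\beta)\,
    \tilde\lambda^{\alpha+\beta}
  = -[\tilde\lambda^\alpha,\tilde\lambda^\beta].
% The Jacobi identity then follows as for L(X): each term of the cyclic
% sum carries the common sign
% (-1)^{\bar\chi(\alpha,\beta)+\bar\chi(\alpha,\gamma)+\bar\chi(\beta,\gamma)},
% which is well defined mod 2 by antisymmetry of \bar\chi, so it factors
% out and the computation reduces to the untwisted case.
```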
Proving \eq{dt6eq1}--\eq{dt6eq2} requires a deep understanding of the local structure of the moduli stack ${\mathbin{\mathfrak M}}$, which is of interest \index{moduli stack!local structure} in itself. First they show using a composition of {\it Seidel--Thomas twists} by ${\mathbin{\cal O}}_X(-n)$ for $n\gg 0$ that ${\mathbin{\mathfrak M}}$ is \index{Seidel--Thomas twist} locally 1-isomorphic to the moduli stack ${\mathbin{\mathfrak{Vect}}}$ of vector bundles \index{vector bundle!moduli stack} on $X$. Then they prove that near $[E]\in{\mathbin{\mathfrak{Vect}}}({\mathbin{\mathbb C}})$, an atlas for ${\mathbin{\mathfrak{Vect}}}$ can be written locally in the complex analytic \index{analytic topology} topology in the form $\mathop{\rm Crit}(f)$ for $f:U\rightarrow{\mathbin{\mathbb C}}$ a holomorphic function on an open set $U$ in $\mathop{\rm Ext}\nolimits^1(E,E)$. These $U,f$ are {\it not algebraic}: they are constructed using gauge theory on the \index{gauge theory} complex vector bundle $E$ over $X$ and transcendental methods. Finally, they deduce \eq{dt6eq1}--\eq{dt6eq2} using the Milnor fibre expression \eq{dt3eq3} for Behrend functions \index{Milnor fibre} applied to these~$U,f$.
Before continuing with the review of Joyce and Song's programme, it is worth pausing on some details of \cite[Thm.~5.5]{JoSo} and \cite[Thm.~5.11]{JoSo}: the statements of the theorems and how they are proved. This will be useful later in \S\ref{dt5}.
\paragraph[Gauge theory and transcendental complex analysis from $\text{\cite{JoSo}}$]{Gauge theory and transcendental complex analytic geometry from \cite{JoSo}.} \label{dt5.1} \index{gauge theory} In \cite[Thm. 5.5]{JoSo} Joyce and Song give a local characterization of an atlas\index{Artin stack!atlas} for the moduli stack ${\mathbin{\mathfrak M}}$ as the critical points of a holomorphic function on a complex manifold. The statement and a sketch of its proof are reported below. Some background references are Kobayashi \cite[\S VII.3]{Koba}, L\"ubke and Teleman \cite[\S 4.1 \& \S 4.3]{LuTe}, Friedman and Morgan \cite[\S 4.1--\S 4.2]{FrMo} and Miyajima \cite{Miya}.
\index{moduli stack} \begin{thm} Let\/ $X$ be a Calabi--Yau $3$-fold over\/ ${\mathbin{\mathbb C}},$ and\/ ${\mathbin{\mathfrak M}}$ the moduli stack of coherent sheaves on\/ $X$. Suppose\/ $E$ is a coherent sheaf on\/ $X,$ so that\/ $[E]\in{\mathbin{\mathfrak M}}({\mathbin{\mathbb C}})$. Let\/ $G$ be a maximal reductive subgroup\index{reductive group!maximal} in $\mathop{\rm Aut}(E),$ and\/ $G^{\scriptscriptstyle{\mathbin{\mathbb C}}}$ its complexification. Then\/ $G^{\scriptscriptstyle{\mathbin{\mathbb C}}}$ is an algebraic ${\mathbin{\mathbb C}}$-subgroup of\/ $\mathop{\rm Aut}(E),$ a maximal reductive subgroup,\index{reductive group!maximal} and\/ $G^{\scriptscriptstyle{\mathbin{\mathbb C}}}=\mathop{\rm Aut}(E)$ if and only if\/ $\mathop{\rm Aut}(E)$ is reductive. There exists a quasiprojective ${\mathbin{\mathbb C}}$-scheme $S,$ an action of\/ $G^{\scriptscriptstyle{\mathbin{\mathbb C}}}$ on $S,$ a point\/ $s\in S({\mathbin{\mathbb C}})$ fixed by $G^{\scriptscriptstyle{\mathbin{\mathbb C}}},$ and a $1$-morphism of Artin ${\mathbin{\mathbb C}}$-stacks $\Phi:[S/G^{\scriptscriptstyle{\mathbin{\mathbb C}}}]\rightarrow{\mathbin{\mathfrak M}},$ \index{1-morphism} which is smooth of relative dimension $\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E)-\mathop{\rm dim}\nolimits G^{\scriptscriptstyle{\mathbin{\mathbb C}}},$ \index{quotient stack} where $[S/G^{\scriptscriptstyle{\mathbin{\mathbb C}}}]$ is the quotient stack, such that\/ $\Phi(s\,G^{\scriptscriptstyle{\mathbin{\mathbb C}}})=[E],$ the induced morphism on stabilizer groups \index{stabilizer group} $\Phi_*:\mathop{\rm Iso}\nolimits_{[S/G^{\scriptscriptstyle{\mathbin{\mathbb C}}}]}(s\,G^{\scriptscriptstyle{\mathbin{\mathbb C}}})\rightarrow\mathop{\rm Iso}\nolimits_{{\mathbin{\mathfrak M}}}([E])$ is the natural morphism $G^{\scriptscriptstyle{\mathbin{\mathbb C}}}\hookrightarrow\mathop{\rm Aut}(E)\cong\mathop{\rm Iso}\nolimits_{{\mathbin{\mathfrak M}}}([E]),$ and\/ ${\rm 
d}\Phi\vert_{s\,G^{\scriptscriptstyle{\mathbin{\mathbb C}}}}:T_sS\cong T_{s\,G^{\scriptscriptstyle{\mathbin{\mathbb C}}}} [S/G^{\scriptscriptstyle{\mathbin{\mathbb C}}}]\rightarrow T_{[E]}{\mathbin{\mathfrak M}}\cong \mathop{\rm Ext}\nolimits^1(E,E)$ is an isomorphism. \index{versal family} Furthermore, $S$ parametrizes a formally versal family $(S,{\cal D})$ of coherent sheaves on $X,$ equivariant under the action of\/ $G^{\scriptscriptstyle{\mathbin{\mathbb C}}}$ on $S,$ with fibre\/ ${\cal D}_s\cong E$ at\/ $s$. If\/ $\mathop{\rm Aut}(E)$ is reductive then $\Phi$ is \'etale.
Write $S_{\rm an}$ for the complex analytic space \index{complex analytic space} underlying the ${\mathbin{\mathbb C}}$-scheme $S$. Then there exists an open neighbourhood\/ $U$ of\/ $0$ in\/ $\mathop{\rm Ext}\nolimits^1(E,E)$ in the analytic topology, a holomorphic \index{analytic topology} function $f:U\rightarrow{\mathbin{\mathbb C}}$ with\/ $f(0)={\rm d} f\vert_0=0,$ an open neighbourhood\/ $V$ of\/ $s$ in $S_{\rm an},$ and an isomorphism of complex analytic spaces $\Xi:\mathop{\rm Crit}(f)\rightarrow V,$ such that\/ $\Xi(0)=s$ and\/ ${\rm d}\Xi\vert_0:T_0\mathop{\rm Crit}(f)\rightarrow T_sV$ is the inverse of\/ ${\rm d}\Phi\vert_{s\,G^{\scriptscriptstyle{\mathbin{\mathbb C}}}}:T_sS\rightarrow\mathop{\rm Ext}\nolimits^1(E,E)$. Moreover we can choose $U,f,V$ to be $G^{\scriptscriptstyle{\mathbin{\mathbb C}}}$-invariant, and\/ $\Xi$ to be $G^{\scriptscriptstyle{\mathbin{\mathbb C}}}$-equivariant. \label{dt5thm1} \end{thm}
In \cite{JoSo}, Theorem \ref{dt5thm1} allows Joyce and Song to use the Milnor fibre\index{Milnor fibre} formula \eq{dt3eq3} for the Behrend function of $\mathop{\rm Crit}(f)$ to study the Behrend function $\nu_{\mathbin{\mathfrak M}}$, which is crucial in proving the Behrend function identities. \index{Behrend function!Behrend identities} The proof of Theorem \ref{dt5thm1} comes in two parts. First it is shown in \cite[\S 8]{JoSo} that ${\mathbin{\mathfrak M}}$ near $[E]$ is locally isomorphic, as an Artin ${\mathbin{\mathbb C}}$-stack, to the moduli stack ${\mathbin{\mathfrak{Vect}}}$ of {\it algebraic vector bundles\/}\index{vector bundle!algebraic} on $X$ near $[E']$ for some vector bundle $E'\rightarrow X$. The proof uses algebraic geometry, and is valid for $X$ a Calabi--Yau $m$-fold for any $m>0$ over any algebraically closed field ${\mathbin{\mathbb K}}$. The local morphism ${\mathbin{\mathfrak M}}\rightarrow{\mathbin{\mathfrak{Vect}}}$ is the composition of shifts and $m$ {\it Seidel--Thomas twists\/}\index{Seidel--Thomas twist} by ${\mathbin{\cal O}}_X(-n)$ for~$n\gg 0$. Thus, it is enough to prove Theorem \ref{dt5thm1} with ${\mathbin{\mathfrak{Vect}}}$ in place of ${\mathbin{\mathfrak M}}$. This is done in \cite[\S 9]{JoSo} using gauge theory on vector bundles over $X$. An interesting motivation for this approach can be found in \cite[\S 3]{DoTh} and \cite[\S 2]{Thom}. Let $E\rightarrow X$ be a fixed complex (not holomorphic) vector bundle over $X$. Write ${\mathbin{\scr A}}$ for the infinite-dimensional \index{semiconnection} \nomenclature[As]{${\mathbin{\scr A}}$}{affine space of smooth semiconnections on a vector bundle} affine space of smooth {\it semiconnections} (${\bar\partial}$-operators) on $E$, and ${\mathbin{\scr G}}$ for the infinite-dimensional Lie group of \index{gauge theory!gauge group} \nomenclature[G]{${\mathbin{\scr G}}$}{gauge group of smooth gauge transformations of a vector bundle} {\it smooth gauge transformations} of $E$. 
Then ${\mathbin{\scr G}}$ acts on ${\mathbin{\scr A}}$, and ${\mathbin{\scr B}}={\mathbin{\scr A}}/{\mathbin{\scr G}}$ \nomenclature[B]{${\mathbin{\scr B}}$}{space of gauge-equivalence classes of semiconnections on a vector bundle} is the space of gauge-equivalence classes of semiconnections on~$E$. Fix ${\bar\partial}_E$ in ${\mathbin{\scr A}}$ coming from a holomorphic vector bundle structure on $E$. Then points in ${\mathbin{\scr A}}$ are of the form ${\bar\partial}_E+A$ for $A\in C^\infty\bigl(\mathop{\rm End}\nolimits(E)\otimes_{\mathbin{\mathbb C}} \Lambda^{0,1}T^*X\bigr)$, and ${\bar\partial}_E+A$ makes $E$ into a holomorphic vector bundle if $F_A^{0,2}={\bar\partial}_EA+A\wedge A$ is zero in $\smash{C^\infty\bigl(\mathop{\rm End}\nolimits(E)\otimes_{\mathbin{\mathbb C}} \Lambda^{0,2}T^*X\bigr)}$. Thus, the moduli space (stack) of holomorphic vector bundle structures on $E$ is isomorphic to $\{{\bar\partial}_E+A\in{\mathbin{\scr A}}: F_A^{0,2}=0\}/{\mathbin{\scr G}}$. In \cite{Thom}, it is observed that when $X$ is a Calabi--Yau 3-fold, there is a natural holomorphic function $CS:{\mathbin{\scr A}}\rightarrow{\mathbin{\mathbb C}}$ called the {\it holomorphic Chern--Simons functional}, invariant under \nomenclature[CS]{$CS$}{holomorphic Chern--Simons functional} ${\mathbin{\scr G}}$ up to addition of constants, such that $\{{\bar\partial}_E+A\in{\mathbin{\scr A}}:F_A^{0,2}=0\}$ is the critical locus of $CS$. Thus, ${\mathbin{\mathfrak{Vect}}}$ is (informally) locally the critical points of a holomorphic function $CS$ on an infinite-dimensional complex stack ${\mathbin{\scr B}}={\mathbin{\scr A}}/{\mathbin{\scr G}}$. To prove Theorem \ref{dt5thm1} Joyce and Song show that one can find a finite-dimensional complex submanifold $U$ in ${\mathbin{\scr A}}$ and a finite-dimensional complex Lie subgroup $G^{\scriptscriptstyle{\mathbin{\mathbb C}}}$ in ${\mathbin{\scr G}}$ preserving $U$ such that the theorem holds with~$f=CS\vert_U$. 
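For concreteness, up to an overall normalization constant (a sketch, not Joyce and Song's precise conventions), the holomorphic Chern--Simons functional of \cite{Thom} can be written, with $\Omega$ a fixed holomorphic volume form on $X$:

```latex
CS\bigl({\bar\partial}_E+A\bigr)
  = \int_X \mathop{\rm tr}\Bigl(\tfrac{1}{2}\,A\wedge{\bar\partial}_E A
    + \tfrac{1}{3}\,A\wedge A\wedge A\Bigr)\wedge\Omega.
% Varying A and integrating by parts (using {\bar\partial}\Omega = 0 and
% cyclicity of the trace) gives
%   \delta CS = \int_X \mathop{\rm tr}\bigl(\delta A\wedge
%     ({\bar\partial}_E A + A\wedge A)\bigr)\wedge\Omega,
% which vanishes for all \delta A exactly when F_A^{0,2} =
% {\bar\partial}_E A + A\wedge A = 0, so Crit(CS) is the locus of
% holomorphic structures, as stated above.
```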
\paragraph[The Behrend function identities from $\text{\cite{JoSo}}$]{The Behrend function identities from \cite{JoSo}.}
\label{dtBehid} \index{Behrend function!Behrend identities|(} In \cite[Thm.~5.11]{JoSo} the Behrend function identities are proved; they are the crucial step in defining the Lie algebra morphism $\tilde\Psi$ and then the generalized Donaldson--Thomas invariants:
\begin{thm} Let\/ $X$ be a Calabi--Yau $3$-fold over\/ ${\mathbin{\mathbb C}},$ and\/ ${\mathbin{\mathfrak M}}$ the moduli stack of coherent sheaves on\/ $X$. The Behrend function $\nu_{{\mathbin{\mathfrak M}}}: {\mathbin{\mathfrak M}}({\mathbin{\mathbb C}})\rightarrow{\mathbin{\mathbb Z}}$ is a natural locally constructible function on ${\mathbin{\mathfrak M}}$. For all\/ $E_1,E_2\in\mathop{\rm coh}\nolimits(X),$ it satisfies:
\begin{equation} \nu_{{\mathbin{\mathfrak M}}}(E_1\oplus E_2)=(-1)^{\bar\chi([E_1],[E_2])} \nu_{{\mathbin{\mathfrak M}}}(E_1)\nu_{{\mathbin{\mathfrak M}}}(E_2), \label{dt6eq1} \end{equation}
\begin{equation} \displaystyle \int\limits_{\small{\begin{subarray}{l} [\lambda]\in\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1)):\\
\lambda\; \Leftrightarrow\; 0\rightarrow E_1\rightarrow F\rightarrow E_2\rightarrow 0\end{subarray}}}\!\!\!\!\! \!\!\!\! \!\!\!\! \!\!\!\! \nu_{{\mathbin{\mathfrak M}}}(F)\,{\rm d}\chi \quad - \!\!\!\! \!\!\!\! \!\!\!\! \displaystyle \int\limits_{\small{\begin{subarray}{l}[\mu]\in\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_1,E_2)):\\ \mu\; \Leftrightarrow\; 0\rightarrow E_2\rightarrow D\rightarrow E_1\rightarrow 0\end{subarray}}}\!\!\!\!\! \!\!\!\! \!\!\!\! \!\!\!\! \nu_{{\mathbin{\mathfrak M}}}(D)\,{\rm d}\chi \;\; = \;\; (e_{21}-e_{12})\;\; \nu_{{\mathbin{\mathfrak M}}}(E_1\oplus E_2), \label{dt6eq2} \end{equation}
where $e_{21}=\mathop{\rm dim}\nolimits\mathop{\rm Ext}\nolimits^1(E_2,E_1)$ and $e_{12}=\mathop{\rm dim}\nolimits\mathop{\rm Ext}\nolimits^1(E_1,E_2)$ for $E_1,E_2\in\mathop{\rm coh}\nolimits(X).$ Here\/ $\bar\chi([E_1],[E_2])$ in \eq{dt6eq1} is the Euler form as in \eq{eu}, \index{Euler form} and in \eq{dt6eq2} the correspondence between\/ $[\lambda]\in\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1))$ and\/ $F\in\mathop{\rm coh}\nolimits(X)$ is that\/ $[\lambda]\in\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1))$ lifts to some\/ $0\ne\lambda\in\mathop{\rm Ext}\nolimits^1(E_2,E_1),$ which corresponds to a short exact sequence\/ $0\rightarrow E_1\rightarrow F\rightarrow E_2\rightarrow 0$ in\/ $\mathop{\rm coh}\nolimits(X)$ in the usual way. The function $[\lambda]\mapsto\nu_{{\mathbin{\mathfrak M}}}(F)$ is a constructible function\/ $\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1))\rightarrow{\mathbin{\mathbb Z}},$ and the integrals in \eq{dt6eq2} are integrals of constructible functions using the Euler characteristic as measure. \index{constructible function} \index{Euler characteristic} \label{dt6thm1} \end{thm}
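A quick consistency check of \eq{dt6eq2}: take $E_1=E_2=E$. Then the two projectivized extension spaces coincide, so the two integrals are identical, while $e_{21}=e_{12}$, and both sides vanish:

```latex
\int_{[\lambda]\in\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E,E))}
   \nu_{{\mathbin{\mathfrak M}}}(F)\,{\rm d}\chi
 \;-\;
\int_{[\mu]\in\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E,E))}
   \nu_{{\mathbin{\mathfrak M}}}(D)\,{\rm d}\chi
 \;=\; 0
 \;=\; (e_{21}-e_{12})\,\nu_{{\mathbin{\mathfrak M}}}(E\oplus E).
% Similarly, if Ext^1(E_2,E_1) = Ext^1(E_1,E_2) = 0 then both integrals
% run over empty projective spaces and both sides are again zero.
```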
Joyce and Song prove Theorem \ref{dt6thm1} using Theorem \ref{dt5thm1} and the Milnor fibre description of Behrend functions from~\S\ref{dt4}. They apply Theorem \ref{dt5thm1} to $E=E_1\oplus E_2$, and take the maximal reductive subgroup $G$ of $\mathop{\rm Aut}(E)$ to contain \index{reductive group!maximal} the subgroup $\bigl\{{\mathop{\rm id}\nolimits}_{E_1}+\lambda{\mathop{\rm id}\nolimits}_{E_2}: \lambda\in\mathop{\textstyle\rm U}(1)\bigr\}$, so that $G^{\scriptscriptstyle{\mathbin{\mathbb C}}}$ contains $\bigl\{{\mathop{\rm id}\nolimits}_{E_1}+\lambda{\mathop{\rm id}\nolimits}_{E_2}:\lambda\in {\mathbin{\mathbb G}}_m\bigr\}$. Equations \eq{dt6eq1} and \eq{dt6eq2} are proved by a kind of localization using this ${\mathbin{\mathbb G}}_m$-action on~$\mathop{\rm Ext}\nolimits^1(E_1\oplus E_2,E_1\oplus E_2)$. More precisely, Theorem \ref{dt5thm1} gives an atlas for ${\mathbin{\mathfrak M}}$ near $E$ as $\mathop{\rm Crit}(f)$ near $0$, \index{Milnor fibre} where $f$ is a holomorphic function defined near $0$ on~$\mathop{\rm Ext}\nolimits^1(E_1\oplus E_2,E_1\oplus E_2)$ and $f$ is invariant under the action of $T=\bigl\{{\mathop{\rm id}\nolimits}_{E_1}+\lambda{\mathop{\rm id}\nolimits}_{E_2}: \lambda\in\mathop{\textstyle\rm U}(1)\bigr\}$ on $\mathop{\rm Ext}\nolimits^1(E_1\oplus E_2,E_1\oplus E_2)$ by conjugation. The fixed points of $T$ on $\mathop{\rm Ext}\nolimits^1(E_1\oplus E_2,E_1\oplus E_2)$ are $\mathop{\rm Ext}\nolimits^1(E_1,E_1)\oplus\mathop{\rm Ext}\nolimits^1(E_2,E_2)$ and heuristically one can say that the restriction of $f$ to these fixed points is $f_1 + f_2,$ where $f_j$ is defined near $0$ in $\mathop{\rm Ext}\nolimits^1(E_j,E_j)$ and $\mathop{\rm Crit}(f_j)$ is an atlas for ${\mathbin{\mathfrak M}}$ near $E_j$. 
\index{moduli stack!atlas} The Milnor fibre $MF_f(0)$ is invariant under $T$, so by localization one has $$\chi (MF_f(0))=\chi(MF_f(0)^T)=\chi(MF_{f_1 + f_2}(0)).$$ A product property of Behrend functions, which may be seen as a kind of {\it Thom--Sebastiani theorem}, gives \index{Thom--Sebastiani theorem} $$1-\chi(MF_{f_1 + f_2}(0))=(1-\chi(MF_{f_1}(0)))(1-\chi(MF_{f_2}(0))).$$ Then the identity \eq{dt6eq1} follows from Theorem \ref{dt3thm1}: $$\nu_{\mathbin{\mathfrak M}} (E)=(-1)^{ \mathop{\rm dim}\nolimits\mathop{\rm Ext}\nolimits^1(E,E)-\mathop{\rm dim}\nolimits\mathop{\rm Hom}\nolimits(E,E)}(1-\chi (MF_f(0))),$$ and the analogues for $E_1$ and $E_2$. Equation \eq{dt6eq2} uses a more involved argument to do with the Milnor fibres of $f$ at non-fixed points of the $U(1)$-action. The proof of Theorem \ref{dt6thm1} uses gauge theory, and transcendental complex analytic \index{gauge theory} geometry methods, and is valid only over~${\mathbin{\mathbb K}}={\mathbin{\mathbb C}}$. However, as pointed out in \cite[Question 5.12]{JoSo}, Theorem \ref{dt6thm1} makes sense as a statement in algebraic geometry, for Calabi--Yau 3-folds over \index{field ${\mathbin{\mathbb K}}$} ${\mathbin{\mathbb K}}$. \index{constructible function} \index{conical Lagrangian cycle}
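The sign in \eq{dt6eq1} can be traced explicitly (a sketch; $\hom$ and $\mathop{\rm ext}\nolimits^1$ denote dimensions of $\mathop{\rm Hom}\nolimits$ and $\mathop{\rm Ext}\nolimits^1$). Using Serre duality $\mathop{\rm Ext}\nolimits^{3-i}(E_1,E_2)\cong\mathop{\rm Ext}\nolimits^i(E_2,E_1)^*$ on the Calabi--Yau 3-fold, the Euler form \eq{eu} becomes

```latex
\bar\chi([E_1],[E_2])
  = \hom(E_1,E_2)-\mathop{\rm ext}\nolimits^1(E_1,E_2)
  + \mathop{\rm ext}\nolimits^1(E_2,E_1)-\hom(E_2,E_1),
% so the cross terms in the sign exponent for E = E_1 \oplus E_2 satisfy
\bigl(\mathop{\rm ext}\nolimits^1(E_1,E_2)-\hom(E_1,E_2)\bigr)
 + \bigl(\mathop{\rm ext}\nolimits^1(E_2,E_1)-\hom(E_2,E_1)\bigr)
 \equiv \bar\chi([E_1],[E_2]) \pmod 2.
% Combining this parity with the displayed formula for \nu_{\mathfrak M}(E)
% and the Thom--Sebastiani product property yields the factor
% (-1)^{\bar\chi([E_1],[E_2])} in the first Behrend identity.
```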
\index{Behrend function!Behrend identities|)}
In \cite[\S 5]{JoSo}, Joyce and Song then define {\it generalized Donaldson--Thomas invariants\/} $\bar{DT}{}^\alpha(\tau)\in{\mathbin{\mathbb Q}}$ by\index{Donaldson--Thomas invariants!generalized $\bar{DT}{}^\alpha(\tau)$} \e \tilde\Psi\bigl(\bar\epsilon^\alpha(\tau)\bigr)=-\bar{DT}{}^\alpha(\tau)\tilde \lambda^\alpha, \label{dt4eq8} \e as in \eq{dt4eq4}. When ${\mathbin{\mathcal M}}_{\rm ss}^\alpha(\tau)={\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)$ then $\bar\epsilon^\alpha(\tau)=\bar\delta_{\rm ss}^\alpha(\tau)$, and \eq{dt4eq5} gives \e \tilde\Psi\bigl(\bar\epsilon^\alpha(\tau)\bigr)=\chi^{\rm stk}\bigl( {\mathbin{\mathfrak M}}_{\rm st}^\alpha (\tau),\nu_{{\mathbin{\mathfrak M}}_{\rm st}^\alpha(\tau)}\bigr)\tilde\lambda^\alpha. \label{dt4eq9} \e The projection $\pi:{\mathbin{\mathfrak M}}_{\rm st}^\alpha(\tau)\rightarrow{\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)$ from the moduli stack to the coarse moduli scheme\index{coarse moduli scheme}\index{moduli scheme!coarse} is smooth of relative dimension $-1$, so $\nu_{{\mathbin{\mathfrak M}}_{\rm st}^\alpha(\tau)}=-\pi^*(\nu_{{\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)})$ by (ii) in \S\ref{dt3.2.2}, and comparing \eq{dt3eq5}, \eq{dt4eq8}, \eq{dt4eq9} shows that $\bar{DT}{}^\alpha(\tau)=DT^\alpha(\tau)$. But the new invariants $\bar{DT}{}^\alpha(\tau)$ are also defined for $\alpha$ with ${\mathbin{\mathcal M}}_{\rm ss}^\alpha(\tau)\ne{\mathbin{\mathcal M}}_{\rm st}^\alpha(\tau)$, when conventional Donaldson--Thomas invariants $DT^\alpha(\tau)$ are not defined. \index{Donaldson--Thomas invariants!original $DT^\alpha(\tau)$}
Thanks to Theorem \ref{dt5thm1} and Theorem \ref{dt6thm1}, $\tilde\Psi$ is a Lie algebra morphism \cite[\S 5.3]{JoSo}, thus the change of stability condition formula for the $\bar\epsilon^\alpha(\tau)$ in \cite{Joyc.6} implies a formula for the elements $-\bar{DT}{}^\alpha(\tau)\tilde \lambda^\alpha$ in $\tilde L(X)$, and thus a transformation law for the invariants $\bar{DT}{}^\alpha(\tau)$, \index{wall crossing} using combinatorial coefficients. \index{Euler form} \nomenclature[V]{$V(I,\Gamma,\kappa;\tau,\tilde\tau)$}{combinatorial coefficient used in wall-crossing formulae}
\nomenclature[PI]{$PI^{\alpha,n}(\tau')$}{invariants counting stable pairs $s:{\mathbin{\cal O}}_X(-n)\rightarrow E$} \nomenclature[PT]{$PT_{n,\beta}$}{Pandharipande--Thomas invariants}\index{Pandharipande--Thomas invariants}\index{stable pair} \nomenclature[M al stp]{${\mathbin{\mathcal M}}_{\rm stp}^{\alpha,n}(\tau')$}{the moduli space of stable pairs $s:{\mathbin{\cal O}}_X(-n)\rightarrow E$ with $[E]=\alpha$}
To study the new invariants $\bar{DT}{}^\alpha(\tau)$, it is helpful to introduce another family of invariants $PI^{\alpha,n}(\tau')$,\index{stable pair invariants $PI^{\alpha,n}(\tau')$} similar to Pandharipande--Thomas invariants \cite{PaTh}. Let $n\gg 0$ be fixed. A {\it stable pair} is a nonzero morphism $s:{\mathbin{\cal O}}_{X}(-n)\rightarrow E$ in $\mathop{\rm coh}\nolimits(X)$ such that $E$ is $\tau$-semistable, and if $\mathop{\rm Im} s\subset E'\subset E$ with $E'\ne E$ then $\tau([E'])<\tau([E])$. For $\alpha\in K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ and $n\gg 0$, the moduli space ${\mathbin{\mathcal M}}_{\rm stp}^{\alpha,n}(\tau')$ of stable pairs $s:{\mathbin{\cal O}}_X(-n)\rightarrow E$ with $[E]=\alpha$ is a fine moduli scheme,\index{fine moduli scheme}\index{moduli scheme!fine} which is proper and has a symmetric obstruction theory.\index{symmetric obstruction theory}\index{obstruction theory!symmetric} Joyce and Song define \e PI^{\alpha,n}(\tau')=\int_{[{\mathbin{\mathcal M}}_{\rm stp}^{\alpha,n}(\tau')]^{\rm vir}}1= \chi\bigl( {\mathbin{\mathcal M}}_{\rm stp}^{\alpha,n}(\tau'),\nu_{{\mathbin{\mathcal M}}_{\rm stp}^{\alpha,n}(\tau')} \bigr)\in{\mathbin{\mathbb Z}}, \label{dt4eq11} \e where the second equality follows from Theorem \ref{dt3thm4}. By a proof similar to that for Donaldson--Thomas invariants in \cite{Thom}, Joyce and Song show that $PI^{\alpha,n}(\tau')$ is unchanged under deformations of the underlying Calabi--Yau 3-fold~$X$. \index{deformation-invariance} By a wall-crossing argument similar to that for $\bar{DT}{}^\alpha(\tau)$, they show that $PI^{\alpha,n}(\tau')$ can be written in terms of the $\bar{DT}{}^\beta(\tau)$. As $PI^{\alpha,n}(\tau')$ is deformation-invariant, one deduces from this relation, by induction on $\mathop{\rm rank}\nolimits\alpha$ with $\mathop{\rm dim}\nolimits\alpha$ fixed, that $\bar{DT}{}^\alpha(\tau)$ is also deformation-invariant.
\index{deformation-invariance}
The pair invariants $PI^{\alpha,n}(\tau')$ are a useful tool for computing the $\bar{DT}{}^\alpha(\tau)$ in examples in \cite[\S 6]{JoSo}. The method is to describe the moduli spaces ${\mathbin{\mathcal M}}_{\rm stp}^{\alpha,n}(\tau')$ explicitly, and then use \eq{dt4eq11} to compute $PI^{\alpha,n}(\tau')$, and their relation with $\bar{DT}{}^\alpha(\tau)$ to deduce the values of $\bar{DT}{}^\alpha(\tau)$. Their point of view is that the $\bar{DT}{}^\alpha(\tau)$ are of primary interest, and the $PI^{\alpha,n}(\tau')$ are secondary invariants, of less interest in themselves.
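To give a flavour of the kind of answer this method produces, we recall, schematically and referring to \cite[\S 6]{JoSo} for the precise statement and hypotheses, the computation for dimension-zero sheaves: if $\alpha=(0,0,0,d)$ is the class of dimension-zero sheaves of length $d\geq 1$ on the Calabi--Yau 3-fold $X$, then
\begin{equation*}
\bar{DT}{}^{\alpha}(\tau)\;=\;-\chi(X)\sum_{k\geq 1,\ k\mid d}\frac{1}{k^2},
\end{equation*}
where $\chi(X)$ is the Euler characteristic of $X$. In particular $\bar{DT}{}^{\alpha}(\tau)$ is in general not an integer, in contrast to the pair invariants $PI^{\alpha,n}(\tau')\in{\mathbin{\mathbb Z}}$.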
\paragraph[Motivic Donaldson--Thomas invariants: Kontsevich and Soibelman's approach from $\text{\cite{KoSo1}}$]{Motivic Donaldson--Thomas invariants: Kontsevich and Soibelman's approach from \cite{KoSo1}.} \index{Donaldson--Thomas invariants!motivic|(} \label{KoSo}
Kontsevich and Soibelman in \cite{KoSo1} also studied generalizations of Donaldson--Thomas invariants. They work in a more general context, but their results rely in large part on conjectures. They consider derived categories of coherent sheaves, Bridgeland stability conditions \cite{Brid1}, and general motivic invariants, whereas Joyce and Song work with abelian categories of coherent sheaves, Gieseker stability, and the Euler characteristic. Kontsevich and Soibelman's motivic functions in the equivariant setting \cite[\S 4.2]{KoSo1}, motivic Hall algebra \cite[\S 6.1]{KoSo1}, motivic quantum torus \cite[\S 6.2]{KoSo1} and the algebra morphism they use to define Donaldson--Thomas invariants \cite[Thm.\,8]{KoSo1} all have analogues in Joyce and Song's program.
It is worth noting some points here (see \cite[\S 1.6]{JoSo} for the full discussion). \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \item[{\bf(a)}] Joyce was probably the first to approach Donaldson--Thomas type invariants in an abstract categorical setting. He developed the technique of motivic stack functions and understood the relevance of motives to the counting problem \cite{Joyc.1,Joyc.2,Joyc.3,Joyc.4,Joyc.5,Joyc.6}. The main limitation of his approach was that he worked with abelian rather than triangulated categories; for many applications, especially to physics, one needs triangulated categories. The more recent theory of Joyce and Song \cite{JoSo} fixes some of these gaps and fits well with the general philosophy of \cite{KoSo1} (and indeed Joyce and Song use some ideas from Kontsevich and Soibelman). They deal with concrete examples of categories (e.g.\ the category of coherent sheaves) and construct numerical invariants via the Behrend approach. It is difficult to prove that these are in fact invariants of triangulated categories, a property which is manifest in \cite{KoSo1}. \item[{\bf(b)}] Kontsevich and Soibelman write their wall-crossing formulae in terms of products in a pro-nilpotent Lie group, while Joyce and Song's formulae are written in terms of combinatorial coefficients. \item[{\bf(c)}] Equations \eq{dt6eq1}--\eq{dt6eq2} are related to a conjecture of Kontsevich and Soibelman \cite[Conj.\,4]{KoSo1} and its application in \cite[\S 6.3]{KoSo1}, and could probably be deduced from it. Joyce and Song got the idea of proving \eq{dt6eq1}--\eq{dt6eq2} by localization using the ${\mathbin{\mathbb G}}_m$-action on $\mathop{\rm Ext}\nolimits^1(E_1\oplus E_2, E_1\oplus E_2)$ from \cite{KoSo1}. However, Kontsevich and Soibelman approach \cite[Conj.\,4]{KoSo1} via formal power series and non-Archimedean geometry; their analogue concerns the `motivic Milnor fibre' \index{motivic Milnor fibre} of the formal power series $f$.
Instead, in Theorem \ref{dt5thm1} Joyce and Song in effect first prove that they can choose the formal power series to be convergent, and then use ordinary differential geometry and Milnor fibres. \item[{\bf(d)}] While Joyce's series of papers \cite{Joyc.1,Joyc.2,Joyc.3,Joyc.4,Joyc.5,Joyc.6} develops the difficult ideas of `virtual rank' and `virtual indecomposables', Kontsevich and Soibelman have no analogue of these. The problem this technology was designed to solve (specialization from the virtual Poincar\'e polynomial to the Euler characteristic) reappears for them in the `absence of poles conjecture' \cite[\S7]{KoSo1}. \end{itemize}\index{non-Archimedean geometry}\index{formal power series}
Section \ref{dt7} also proposes new ideas for further research in the direction of Kontsevich and Soibelman's paper \cite{KoSo1}.
\index{Donaldson--Thomas invariants!motivic|)}
\index{Donaldson--Thomas invariants!generalized $\bar{DT}{}^\alpha(\tau)$|)}
\section{D-critical loci} \label{dcr}
We summarize the theory of d-critical schemes and stacks introduced by Joyce \cite{Joyc.2}. There are two versions of the theory, complex analytic and algebraic d-critical loci; sometimes we give results for both versions simultaneously, and otherwise we briefly indicate the differences between the two, referring to \cite{Joyc.2} for details.
\subsection{D-critical schemes} \label{dcr.1}
Let $X$ be a complex analytic space or a ${\mathbin{\mathbb K}}$-scheme. Then \cite[Th.~2.1 \& Prop.~2.3]{Joyc.2} associates a natural sheaf ${\mathbin{\cal S}}_X$ to $X$ such that, very briefly, sections of ${\mathbin{\cal S}}_X$ parametrize different ways of writing $X$ as $\mathop{\rm Crit}(f)\subseteq U$, for $U$ a complex manifold or smooth ${\mathbin{\mathbb K}}$-scheme and $f:U\rightarrow{\mathbin{\mathbb C}}$ holomorphic or $f:U\rightarrow{\mathbin{\mathbb A}}^1$ regular. We refer to \cite[Th.~2.1 \& Prop.~2.3]{Joyc.2} for details. To give a clearer idea, we point out the following:
\begin{rem} Suppose $U$ is a complex manifold, $f:U\rightarrow{\mathbin{\mathbb C}}$ is holomorphic, and $X=\mathop{\rm Crit}(f)$, as a closed complex analytic subspace of $U$. Write $i:X\hookrightarrow U$ for the inclusion, and $I_{X,U}\subseteq i^{-1}({\mathbin{\cal O}}_U)$ for the sheaf of ideals vanishing on $X\subseteq U$. Then we obtain a natural section $s\in H^0({\mathbin{\cal S}}_X)$. Essentially $s=f+I_{{\rm d} f}^2$, where $I_{{\rm d} f}\subseteq{\mathbin{\cal O}}_U$ is the ideal generated by ${\rm d} f$. Note that $f\vert_X=f+I_{{\rm d} f}$, so $s$ determines $f\vert_X$. Basically, $s$ remembers all of the information about $f$ which makes sense intrinsically on $X$, rather than on the ambient space~$U$. \label{dc2ex1} \end{rem}
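To make Remark \ref{dc2ex1} concrete, here is a minimal example (our own illustration, in the spirit of \cite{Joyc.2}). Take $U={\mathbin{\mathbb C}}$ and $f(x)=x^3$. Then ${\rm d} f=3x^2\,{\rm d} x$, so $I_{{\rm d} f}=(x^2)$, and $X=\mathop{\rm Crit}(f)$ is the non-reduced point with ${\mathbin{\cal O}}_X={\mathbin{\cal O}}_U/(x^2)$. The associated section is
\begin{equation*}
s\;=\;f+I_{{\rm d} f}^2\;=\;x^3+(x^4).
\end{equation*}
Here $f\vert_X=x^3+(x^2)=0$, since $x^3\in(x^2)$, but $s\neq 0$, since $x^3\notin(x^4)$: the section $s$ remembers strictly more about $f$ than the restriction $f\vert_X$ does.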
Following \cite[Def.~2.5]{Joyc.2} we define algebraic d-critical loci:
\begin{dfn} An ({\it algebraic\/}) {\it d-critical locus\/} over a field ${\mathbin{\mathbb K}}$ is a pair $(X,s)$, where $X$ is a ${\mathbin{\mathbb K}}$-scheme and $s\in H^0({\mathbin{\cal S}\kern -0.1em}^{\kern .1em 0}_X)$, such that for each $x\in X$, there exists a Zariski open neighbourhood $R$ of $x$ in $X$, a smooth ${\mathbin{\mathbb K}}$-scheme $U$, a regular function $f:U\rightarrow{\mathbin{\mathbb A}}^1={\mathbin{\mathbb K}}$, and a closed embedding $i:R\hookrightarrow U$, such that $i(R)=\mathop{\rm Crit}(f)$ as ${\mathbin{\mathbb K}}$-subschemes of $U$, and $\iota_{R,U}(s\vert_R)=i^{-1}(f)+I_{R,U}^2$. We call the quadruple $(R,U,f,i)$ a {\it critical chart\/} on~$(X,s)$. If $U'\subseteq U$ is a Zariski open, and $R'=i^{-1}(U')\subseteq R$, $i'=i\vert_{R'}:R'\hookrightarrow U'$, and $f'=f\vert_{U'}$, then $(R',U',f',i')$ is a critical chart on $(X,s)$, and we call it a {\it subchart\/} of $(R,U,f,i)$, and we write~$(R',U',f',i')\subseteq (R,U,f,i)$.
Let $(R,U,f,i),(S,V,g,j)$ be critical charts on $(X,s)$, with $R\subseteq S\subseteq X$. An {\it embedding\/} of $(R,U,f,i)$ in $(S,V,g,j)$ is a locally closed embedding $\Phi:U\hookrightarrow V$ such that $\Phi\circ i=j\vert_R$ and $f=g\circ\Phi$. As a shorthand we write $\Phi: (R,U,f,i)\hookrightarrow(S,V,g,j)$. If $\Phi:(R,U,f,i)\hookrightarrow (S,V,g,j)$ and $\Psi:(S,V,g,j)\hookrightarrow(T,W,h,k)$ are embeddings, then $\Psi\circ\Phi:(R,U,f,i)\hookrightarrow(T,W,h,k)$ is also an embedding.
A {\it morphism\/} $\phi:(X,s)\rightarrow (Y,t)$ of d-critical loci $(X,s),(Y,t)$ is a ${\mathbin{\mathbb K}}$-scheme morphism $\phi:X\rightarrow Y$ with $\phi^\star(t)=s$. This makes d-critical loci into a category.
\label{sa3def1} \end{dfn}
\begin{rem} {\bf(a)} For $(X,s)$ to be a (complex analytic or algebraic) d-critical locus places strong local restrictions on the singularities of $X$. For example, Behrend \cite{Behr} notes that if $X$ has reduced local complete intersection singularities then locally it cannot be the zeroes of an almost closed 1-form on a smooth space, and hence cannot locally be a critical locus; Pandharipande and Thomas \cite{PaTh} give examples which are zeroes of almost closed 1-forms, but are not locally critical loci.
\noindent{\bf(b)} If $X=\mathop{\rm Crit}(f)$ for holomorphic $f:U\rightarrow{\mathbin{\mathbb C}}$, then $f\vert_{X^{\rm red}}$ is locally constant, and we can write $f=f^0+c$ uniquely near $X$ in $U$ for $f^0:U\rightarrow{\mathbin{\mathbb C}}$ holomorphic with $\mathop{\rm Crit}(f^0)=X=\mathop{\rm Crit}(f)$, $f^0\vert_{X^{\rm red}}=0$, and $c:U\rightarrow{\mathbin{\mathbb C}}$ locally constant with~$c\vert_{X^{\rm red}}=f\vert_{X^{\rm red}}$. Defining d-critical loci using $s\in H^0({\mathbin{\cal S}\kern -0.1em}^{\kern .1em 0}_X)$ corresponds to remembering only the function $f^0$ near $X$ in $U$, and forgetting the locally constant function $f\vert_{X^{\rm red}}:X^{\rm red}\rightarrow{\mathbin{\mathbb C}}$.
\noindent{\bf(c)} In \cite[Ex.~2.16]{Joyc.2}, Joyce gives an example in which the algebraic d-critical locus remembers more information, locally, than the symmetric obstruction theory. In \cite[Ex.~2.17]{Joyc.2}, Joyce shows that the (symmetric) obstruction theory remembers global, non-local information which is forgotten by the algebraic d-critical locus.
\noindent{\bf(d)} One can think of critical charts as analogous to Kuranishi neighbourhoods on a topological space, and of embeddings as analogous to coordinate changes between Kuranishi neighbourhoods. \end{rem}
Here are~\cite[Props.~2.8 \& 2.30, Thms.~2.20 \& 2.28, Def.~2.31, Rem.~2.32 \& Cor.~2.33]{Joyc.2}:
\begin{prop} Let\/ $\phi:X\rightarrow Y$ be a smooth morphism of\/ ${\mathbin{\mathbb K}}$-schemes. Suppose $t\in H^0({\mathbin{\cal S}\kern -0.1em}^{\kern .1em 0}_Y),$ and set\/ $s:=\phi^\star(t)\in H^0({\mathbin{\cal S}\kern -0.1em}^{\kern .1em 0}_X)$. If\/ $(Y,t)$ is a d-critical locus, then\/ $(X,s)$ is a d-critical locus, and\/ $\phi:(X,s)\rightarrow(Y,t)$ is a morphism of d-critical loci. Conversely, if also $\phi:X\rightarrow Y$ is surjective, then $(X,s)$ a d-critical locus implies $(Y,t)$ is a d-critical locus. \label{sa3prop1} \end{prop}
\begin{thm} Suppose\/ $(X,s)$ is an algebraic d-critical locus, and\/ $(R,U,f,i),\allowbreak(S,V,g,j)$ are critical charts on $(X,s)$. Then for each\/ $x\in R\cap S\subseteq X$ there exist subcharts $(R',U',f',i')\subseteq(R,U,f,i),$ $(S',V',g',j')\subseteq (S,V,g,j)$ with\/ $x\in R'\cap S'\subseteq X,$ a critical chart\/ $(T,W,h,k)$ on $(X,s),$ and embeddings $\Phi:(R',U',f',i')\hookrightarrow (T,W,h,k),$ $\Psi:(S',V',g',j')\hookrightarrow(T,W,h,k)$. \label{sa3thm2} \end{thm}
\begin{thm} Let\/ $(X,s)$ be an algebraic d-critical locus, and\/ $X^{\rm red}\subseteq X$ the associated reduced\/ ${\mathbin{\mathbb K}}$-subscheme. Then there exists a line bundle $K_{X,s}$ on $X^{\rm red}$ which we call the \begin{bfseries}canonical bundle\end{bfseries} of\/ $(X,s),$ which is natural up to canonical isomorphism, and is characterized by the following properties: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \item[{\bf(a)}] For each $x\in X^{\rm red},$ there is a canonical isomorphism \e \kappa_x:K_{X,s}\vert_x\,{\buildrel\cong\over\longrightarrow}\, \bigl(\Lambda^{\rm top}T_x^*X\bigr){}^{\otimes^2}, \label{sa3eq3} \e where $T_xX$ is the Zariski tangent space of\/ $X$ at\/~$x$. \item[{\bf(b)}] If\/ $(R,U,f,i)$ is a critical chart on $(X,s),$ there is a natural isomorphism \e \iota_{R,U,f,i}:K_{X,s}\vert_{R^{\rm red}}\longrightarrow i^*\bigl(K_U^{\otimes^2}\bigr)\vert_{R^{\rm red}}, \label{sa3eq4} \e where $K_U=\Lambda^{\mathop{\rm dim}\nolimits U}T^*U$ is the canonical bundle of\/ $U$ in the usual sense.
\item[{\bf(c)}] In the situation of\/ {\bf(b)\rm,} let\/ $x\in R$. Then we have an exact sequence \e \begin{gathered} {}\!\!\!\!\xymatrix@C=22pt@R=15pt{ 0 \ar[r] & T_xX \ar[r]^(0.4){{\rm d} i\vert_x} & T_{i(x)}U \ar[rr]^(0.53){\mathop{\rm Hess}\nolimits_{i(x)}f} && T_{i(x)}^*U \ar[r]^{{\rm d} i\vert_x^*} & T_x^*X \ar[r] & 0, } \end{gathered} \label{sa3eq5} \e and the following diagram commutes: \begin{equation*} \xymatrix@C=150pt@R=13pt{ *+[r]{K_{X,s}\vert_x} \ar[dr]_{\iota_{R,U,f,i}\vert_x} \ar[r]_(0.55){\kappa_x} & *+[l]{\bigl(\Lambda^{\rm top}T_x^*X\bigr){}^{\otimes^2}} \ar[d]_(0.45){\alpha_{x,R,U,f,i}} \\ & *+[l]{K_U\vert_{i(x)}^{\otimes^2},\!\!\!} } \end{equation*} where $\alpha_{x,R,U,f,i}$ is induced by taking top exterior powers in\/~\eq{sa3eq5}. \end{itemize} \label{sa3thm3} \end{thm}
\begin{prop} Suppose $\phi:(X,s)\rightarrow(Y,t)$ is a morphism of d-critical loci with\/ $\phi:X\rightarrow Y$ smooth, as in Proposition\/ {\rm\ref{sa3prop1}}. The \begin{bfseries}relative cotangent bundle\end{bfseries} $T^*_{X/Y}$ is a vector bundle of mixed rank on $X$ in the exact sequence of coherent sheaves on $X\!:$ \e \xymatrix@C=35pt{0 \ar[r] & \phi^*(T^*Y) \ar[r]^(0.55){{\rm d}\phi^*} & T^*X \ar[r] & T^*_{X/Y} \ar[r] & 0. } \label{sa3eq6} \e There is a natural isomorphism of line bundles on $X^{\rm red}\!:$ \e \Upsilon_\phi:\phi\vert_{X^{\rm red}}^* (K_{Y,t})\otimes\bigl(\Lambda^{\rm top}T^*_{X/Y}\bigr)\big\vert_{X^{\rm red}}^{\otimes^2} \,{\buildrel\cong\over\longrightarrow}\,K_{X,s}, \label{sa3eq7} \e such that for each\/ $x\in X^{\rm red}$ the following diagram of isomorphisms commutes: \e \begin{gathered} \xymatrix@C=160pt@R=17pt{ *+[r]{K_{Y,t} \vert_{\phi(x)}\otimes\bigl(\Lambda^{\rm top}T^*_{X/Y}\vert_x\bigr)^{\otimes^2}} \ar[r]_(0.7){\Upsilon_\phi\vert_x} \ar[d]^{\kappa_{\phi(x)}\otimes{\mathop{\rm id}\nolimits}} & *+[l]{K_{X,s}\vert_x} \ar[d]_{\kappa_x} \\ *+[r]{\bigl(\Lambda^{\rm top}T_{\phi(x)}^*Y\bigr)^{\otimes^2}\otimes \bigl(\Lambda^{\rm top}T^*_{X/Y}\vert_x\bigr)^{\otimes^2}} \ar[r]^(0.7){\upsilon_x^{\otimes^2}} & *+[l]{\bigl(\Lambda^{\rm top}T_x^*X\bigr)^{\otimes^2},\!\!{}} } \end{gathered} \label{sa3eq8} \e where $\kappa_x,\kappa_{\phi(x)}$ are as in {\rm\eq{sa3eq3},} and\/ $\upsilon_x:\Lambda^{\rm top}T_{\phi(x)}^*Y\otimes \Lambda^{\rm top}T^*_{X/Y} \vert_x\rightarrow\Lambda^{\rm top}T_x^*X$ is obtained by restricting \eq{sa3eq6} to $x$ and taking top exterior powers. \label{sa3prop2} \end{prop}
\begin{dfn} Let $(X,s)$ be an algebraic d-critical locus, and $K_{X,s}$ its canonical bundle from Theorem \ref{sa3thm3}. An {\it orientation\/} on $(X,s)$ is a choice of square root line bundle $K_{X,s}^{1/2}$ for $K_{X,s}$ on $X^{\rm red}$. That is, an orientation is a line bundle $L$ on $X^{\rm red}$, together with an isomorphism $L^{\otimes^2}=L\otimes L\cong K_{X,s}$. A d-critical locus with an orientation will be called an {\it oriented d-critical locus}. \label{sa3def2} \end{dfn}
\begin{rem} In view of equation \eq{sa3eq3}, one might hope to define a canonical orientation $K_{X,s}^{1/2}$ for a d-critical locus $(X,s)$ by $K_{X,s}^{1/2}\big\vert_x=\Lambda^{\rm top}T_x^*X$ for $x\in X^{\rm red}$. However, {\it this does not work}, as the spaces $\Lambda^{\rm top}T_x^*X$ do not vary continuously with $x\in X^{\rm red}$ if $X$ is not smooth. An example in \cite[Ex.~2.39]{Joyc.2} shows that d-critical loci need not admit orientations. \label{sa3rem1} \end{rem}
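On the other hand, a canonical orientation does exist in the simplest case (a standard observation, which we state without proof): if $X$ is a smooth ${\mathbin{\mathbb K}}$-scheme then $(X,0)$ is a d-critical locus with critical chart $(X,X,0,{\mathop{\rm id}\nolimits}_X)$, since $X=\mathop{\rm Crit}(0)$ for the zero function $0:X\rightarrow{\mathbin{\mathbb A}}^1$, and \eq{sa3eq4} then gives an isomorphism
\begin{equation*}
K_{X,0}\;\cong\;K_X^{\otimes^2},
\end{equation*}
so that $K_{X,0}^{1/2}:=K_X$ is a canonical orientation on~$(X,0)$.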
In the situation of Proposition \ref{sa3prop2}, the factor $(\Lambda^{\rm top}T^*_{X/Y})\vert_{X^{\rm red}}^{\otimes^2}$ in \eq{sa3eq7} has a natural square root $(\Lambda^{\rm top}T^*_{X/Y})\vert_{X^{\rm red}}$. Thus we deduce:
\begin{cor} Let\/ $\phi:(X,s)\rightarrow(Y,t)$ be a morphism of d-critical loci with\/ $\phi:X\rightarrow Y$ smooth. Then each orientation $K_{Y,t}^{1/2}$ for\/ $(Y,t)$ lifts to a natural orientation $K_{X,s}^{1/2}=\phi\vert_{X^{\rm red}}^*(K_{Y,t}^{1/2})\otimes(\Lambda^{\rm top}T^*_{X/Y}) \vert_{X^{\rm red}}$ for~$(X,s)$. \label{sa3cor1} \end{cor}
\subsection{D-critical stacks} \label{dcr.2}
In \cite[\S 2.7--\S 2.8]{Joyc.2} Joyce extends the material of \S\ref{dcr.1} from ${\mathbin{\mathbb K}}$-schemes to Artin ${\mathbin{\mathbb K}}$-stacks. We work in the context of the theory of {\it sheaves on Artin stacks\/} due to Laumon and Moret-Bailly \cite{LaMo}.
\begin{prop}[Laumon and Moret-Bailly \cite{LaMo}] Let\/ $X$ be an Artin\/ ${\mathbin{\mathbb K}}$-stack. The category of sheaves of sets on $X$ in the lisse-\'etale topology is equivalent to the category ${\mathop{\rm Sh}\nolimits}(X)$ defined as follows:
\noindent{\bf(A)} Objects ${\mathbin{\cal A}}$ of\/ ${\mathop{\rm Sh}\nolimits}(X)$ comprise the following data: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \item[{\bf(a)}] For each\/ ${\mathbin{\mathbb K}}$-scheme $T$ and smooth\/ $1$-morphism $t:T\rightarrow X$ in $\mathop{\bf Art}\nolimits_{\mathbin{\mathbb K}},$ we are given a sheaf of sets ${\mathbin{\cal A}}(T,t)$ on $T,$ in the \'etale topology. \item[{\bf(b)}] For each\/ $2$-commutative diagram in $\mathop{\bf Art}\nolimits_{\mathbin{\mathbb K}}\!:$ \e \begin{gathered} \xymatrix@C=50pt@R=6pt{ & U \ar[ddr]^u \\ \rrtwocell_{}\omit^{}\omit{^{\eta}} && \\ T \ar[uur]^{\phi} \ar[rr]_t && X, } \end{gathered} \label{sa3eq9} \e where $T,U$ are schemes and\/ $t: T\rightarrow X,$ $u:U\rightarrow X$ are smooth\/ $1$-morphisms in $\mathop{\bf Art}\nolimits_{\mathbin{\mathbb K}},$ we are given a morphism ${\mathbin{\cal A}}(\phi,\eta):\phi^{-1} ({\mathbin{\cal A}}(U,u)) \rightarrow{\mathbin{\cal A}}(T,t)$ of \'etale sheaves of sets on $T$. \end{itemize} This data must satisfy the following conditions: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \item[{\bf(i)}] If\/ $\phi:T\rightarrow U$ in {\bf(b)} is \'etale, then ${\mathbin{\cal A}}(\phi,\eta)$ is an isomorphism. 
\item[{\bf(ii)}] For each\/ $2$-commutative diagram in $\mathop{\bf Art}\nolimits_{\mathbin{\mathbb K}}\!:$ \begin{equation*} \xymatrix@C=70pt@R=6pt{ & V \ar[ddr]^v \\ \rrtwocell_{}\omit^{}\omit{^{\zeta}} && \\ U \ar[uur]^{\psi} \ar[rr]_(0.3)u && X, \\ \urrtwocell_{}\omit^{}\omit{^{\eta}} && \\ T \ar[uu]_{\phi} \ar@/_/[uurr]_t } \end{equation*} with $T,U,V$ schemes and\/ $t,u,v$ smooth, we must have \begin{align*} {\mathbin{\cal A}}\bigl(\psi\circ\phi,(\zeta*{\mathop{\rm id}\nolimits}_{\phi})\odot\eta\bigr) &={\mathbin{\cal A}}(\phi,\eta)\circ\phi^{-1}({\mathbin{\cal A}}(\psi,\zeta))\quad\text{as morphisms}\\ (\psi\circ\phi)^{-1}({\mathbin{\cal A}}(V,v))&=\phi^{-1}\circ \psi^{-1}({\mathbin{\cal A}}(V,v))\longrightarrow{\mathbin{\cal A}}(T,t). \end{align*} \end{itemize}
\noindent{\bf(B)} Morphisms $\alpha:{\mathbin{\cal A}}\rightarrow{\mathbin{\cal B}}$ of\/ ${\mathop{\rm Sh}\nolimits}(X)$ comprise a morphism $\alpha(T,t):{\mathbin{\cal A}}(T,t)\rightarrow{\mathbin{\cal B}}(T,t)$ of \'etale sheaves of sets on a scheme $T$ for all smooth\/ $1$-morphisms $t:T\rightarrow X,$ such that for each diagram \eq{sa3eq9} in {\bf(b)} the following commutes: \begin{equation*} \xymatrix@C=120pt@R=25pt{*+[r]{\phi^{-1}({\mathbin{\cal A}}(U,u))} \ar[d]^{\phi^{-1}(\alpha(U,u))} \ar[r]_(0.55){{\mathbin{\cal A}}(\phi,\eta)} & *+[l]{{\mathbin{\cal A}}(T,t)} \ar[d]_{\alpha(T,t)} \\ *+[r]{\phi^{-1}({\mathbin{\cal B}}(U,u))} \ar[r]^(0.55){{\mathbin{\cal B}}(\phi,\eta)} & *+[l]{{\mathbin{\cal B}}(T,t).\!{}} } \end{equation*}
\noindent{\bf(C)} Composition of morphisms ${\mathbin{\cal A}}\,{\buildrel\alpha \over\longrightarrow}\,{\mathbin{\cal B}}\,{\buildrel\beta\over\longrightarrow}\,{\mathbin{\cal C}}$ in ${\mathop{\rm Sh}\nolimits}(X)$ is $(\beta\circ\alpha)(T,t)=\allowbreak\beta(T,t)\allowbreak\circ\allowbreak\alpha(T,t)$. Identity morphisms ${\mathop{\rm id}\nolimits}_{\mathbin{\cal A}}:{\mathbin{\cal A}}\rightarrow{\mathbin{\cal A}}$ are ${\mathop{\rm id}\nolimits}_{\mathbin{\cal A}}(T,t)={\mathop{\rm id}\nolimits}_{{\mathbin{\cal A}}(T,t)}$.
The analogue of all the above also holds for (\'etale) sheaves of\/ ${\mathbin{\mathbb K}}$-vector spaces, sheaves of\/ ${\mathbin{\mathbb K}}$-algebras, and so on, in place of (\'etale) sheaves of sets. Furthermore, the analogue of all the above holds for quasi-coherent sheaves, (or coherent sheaves, or vector bundles, or line bundles) on $X,$ where in {\bf(a)} ${\mathbin{\cal A}}(T,t)$ becomes a quasi-coherent sheaf (or coherent sheaf, or vector bundle, or line bundle) on $T,$ in {\bf(b)} we replace $\phi^{-1}({\mathbin{\cal A}}(U,u))$ by the pullback\/ $\phi^*({\mathbin{\cal A}}(U,u))$ of quasi-coherent sheaves (etc.), and\/ ${\mathbin{\cal A}}(\phi,\eta),\allowbreak\alpha(T,t)$ become morphisms of quasi-coherent sheaves (etc.) on\/~$T$.
We can also describe \begin{bfseries}global sections\end{bfseries} of sheaves on Artin ${\mathbin{\mathbb K}}$-stacks in the above framework: a global section $s\in H^0({\mathbin{\cal A}})$ of\/ ${\mathbin{\cal A}}$ in part\/ {\bf(A)} assigns a global section $s(T,t)\in H^0({\mathbin{\cal A}}(T,t))$ of\/ ${\mathbin{\cal A}}(T,t)$ on\/ $T$ for all smooth\/ $t:T\rightarrow X$ from a scheme $T,$ such that\/ ${\mathbin{\cal A}}(\phi,\eta)^*(s(U,u))=s(T,t)$ in $H^0({\mathbin{\cal A}}(T,t))$ for all\/ $2$-commutative diagrams \eq{sa3eq9} with\/ $t,u$ smooth. \label{sa3prop3} \end{prop}
In \cite[Cor.~2.52]{Joyc.2} Joyce generalizes the sheaves ${\mathbin{\cal S}}_X,{\mathbin{\cal S}\kern -0.1em}^{\kern .1em 0}_X$ in \S\ref{dcr.1} to Artin ${\mathbin{\mathbb K}}$-stacks:
\begin{prop} Let\/ $X$ be an Artin ${\mathbin{\mathbb K}}$-stack, and write\/ ${\mathop{\rm Sh}\nolimits}(X)_{\text{\rm $\K$-alg}}$ and\/ ${\mathop{\rm Sh}\nolimits}(X)_{\text{\rm $\K$-vect}}$ for the categories of sheaves of\/ ${\mathbin{\mathbb K}}$-algebras and\/ ${\mathbin{\mathbb K}}$-vector spaces on $X$ defined in Proposition\/ {\rm\ref{sa3prop3}}. Then: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \item[{\bf(a)}] We may define canonical objects\/ ${\mathbin{\cal S}}_X$ in both\/ ${\mathop{\rm Sh}\nolimits}(X)_{\text{\rm $\K$-alg}}$ and\/ ${\mathop{\rm Sh}\nolimits}(X)_{\text{\rm $\K$-vect}}$ by ${\mathbin{\cal S}}_X(T,t):={\mathbin{\cal S}}_T$ for all smooth morphisms $t:T\rightarrow X$ for $T\in\mathop{\rm Sch}\nolimits_{\mathbin{\mathbb K}},$ for ${\mathbin{\cal S}}_T$ as in\/ {\rm\S\ref{dcr.1}} taken to be a sheaf of\/ ${\mathbin{\mathbb K}}$-algebras (or ${\mathbin{\mathbb K}}$-vector spaces) on $T$ in the \'etale topology, and\/ ${\mathbin{\cal S}}_X(\phi,\eta) :=\phi^\star:\phi^{-1}({\mathbin{\cal S}}_X(U,u))=\phi^{-1}({\mathbin{\cal S}}_U)\rightarrow{\mathbin{\cal S}}_T ={\mathbin{\cal S}}_X(T,t)$ for all\/ $2$-commutative diagrams \eq{sa3eq9} in $\mathop{\bf Art}\nolimits_{\mathbin{\mathbb K}}$ with\/ $t,u$ smooth, where $\phi^\star$ is as in {\rm\S\ref{dcr.1}}. \item[{\bf(b)}] There is a natural decomposition ${\mathbin{\cal S}}_X\!=\!{\mathbin{\mathbb K}}_X\!\oplus\!{\mathbin{\cal S}\kern -0.1em}^{\kern .1em 0}_X$ in ${\mathop{\rm Sh}\nolimits}(X)_{\text{\rm $\K$-vect}}$ induced by the splitting ${\mathbin{\cal S}}_X(T,t)\!=\!{\mathbin{\cal S}}_T\!=\!{\mathbin{\mathbb K}}_T\oplus{\mathbin{\cal S}\kern -0.1em}^{\kern .1em 0}_T$ in {\rm\S\ref{dcr.1},} where ${\mathbin{\mathbb K}}_X$ is a sheaf of\/ ${\mathbin{\mathbb K}}$-subalgebras of\/ ${\mathbin{\cal S}}_X$ in ${\mathop{\rm Sh}\nolimits}(X)_{\text{\rm $\K$-alg}},$ and\/ ${\mathbin{\cal S}\kern -0.1em}^{\kern .1em 0}_X$ a sheaf of ideals~in\/~${\mathbin{\cal S}}_X$. 
\end{itemize} \label{sa3prop4} \end{prop} Here \cite[Def. 2.53]{Joyc.2} is the generalization of Definition \ref{sa3def1} to Artin stacks.
\begin{dfn} A {\it d-critical stack\/} $(X,s)$ is an Artin ${\mathbin{\mathbb K}}$-stack $X$ and a global section $s\in H^0({\mathbin{\cal S}\kern -0.1em}^{\kern .1em 0}_X)$, where ${\mathbin{\cal S}\kern -0.1em}^{\kern .1em 0}_X$ is as in Proposition \ref{sa3prop4}, such that $\bigl(T,s(T,t)\bigr)$ is an algebraic d-critical locus in the sense of Definition \ref{sa3def1} for all smooth morphisms $t:T\rightarrow X$ with~$T\in\mathop{\rm Sch}\nolimits_{\mathbin{\mathbb K}}$. \label{sa3def3} \end{dfn}
Here is a convenient way to understand d-critical stacks $(X,s)$ in terms of d-critical structures on an atlas $t:T\rightarrow X$ for~$X$ from \cite[Prop.~2.54]{Joyc.2}.
\begin{prop} Let\/ $X$ be an Artin ${\mathbin{\mathbb K}}$-stack, and\/ $t: T\rightarrow X$ a smooth atlas for $X$. Then $T\times_{t,X,t}T$ is equivalent to a ${\mathbin{\mathbb K}}$-scheme $U$ as $t$ is representable and\/ $T$ a scheme, so we have a $2$-Cartesian diagram \e \begin{gathered} \xymatrix@C=90pt@R=21pt{ *+[r]{U} \ar[d]^{\pi_1} \ar[r]_(0.3){\pi_2} \drtwocell_{}\omit^{}\omit{^{\eta}} & *+[l]{ T} \ar[d]_t \\ *+[r]{T} \ar[r]^(0.7)t & *+[l]{X} } \end{gathered} \label{sa3eq10} \e in $\mathop{\bf Art}\nolimits_{\mathbin{\mathbb K}},$ with\/ $\pi_1,\pi_2:U\rightarrow T$ smooth morphisms in $\mathop{\rm Sch}\nolimits_{\mathbin{\mathbb K}}$. Also $T,U,\pi_1,\pi_2$ can be naturally completed to a smooth groupoid in $\mathop{\rm Sch}\nolimits_{\mathbin{\mathbb K}},$ and\/ $X$ is equivalent in $\mathop{\bf Art}\nolimits_{\mathbin{\mathbb K}}$ to the associated groupoid stack\/ $[U\rightrightarrows T]$. \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \item[{\bf(i)}] Let\/ ${\mathbin{\cal S}}_X$ be as in Proposition\/ {\rm\ref{sa3prop4},} and\/ ${\mathbin{\cal S}}_T,{\mathbin{\cal S}}_U$ be as in {\rm\S\ref{dcr.1},} regarded as sheaves on $T,U$ in the \'etale topology, and define $\pi_i^\star:\pi_i^{-1}({\mathbin{\cal S}}_T)\rightarrow {\mathbin{\cal S}}_U$ as in {\rm\S\ref{dcr.1}} for $i=1,2$. Consider the map $t^*:H^0({\mathbin{\cal S}}_X)\rightarrow H^0({\mathbin{\cal S}}_T)$ mapping $t^*:s\mapsto s(T,t)$. This is injective, and induces a bijection \e t^*:H^0({\mathbin{\cal S}}_X)\,{\buildrel\cong\over\longrightarrow}\,\bigl\{s'\in H^0({\mathbin{\cal S}}_T):\text{$\pi_1^\star(s')= \pi_2^\star(s')$ in $H^0({\mathbin{\cal S}}_U)$}\bigr\}. \label{sa3eq11} \e The analogue holds for ${\mathbin{\cal S}\kern -0.1em}^{\kern .1em 0}_X,{\mathbin{\cal S}\kern -0.1em}^{\kern .1em 0}_T,{\mathbin{\cal S}\kern -0.1em}^{\kern .1em 0}_U$. 
\item[{\bf(ii)}] Suppose $s\in H^0({\mathbin{\cal S}\kern -0.1em}^{\kern .1em 0}_X),$ so that\/ $t^*(s)\in H^0({\mathbin{\cal S}\kern -0.1em}^{\kern .1em 0}_T)$ with\/ $\pi_1^\star\circ t^*(s)=\pi_2^\star\circ t^*(s)$. Then $(X,s)$ is a d-critical stack if and only if\/ $\bigl(T,t^*(s)\bigr)$ is an algebraic d-critical locus, and then\/ $\bigl(U,\pi_1^\star\circ t^*(s)\bigr)$ is also an algebraic d-critical locus. \end{itemize} \label{sa3prop5} \end{prop}
In \cite[Ex.~2.55]{Joyc.2} we consider quotient stacks $X=[T/G]$.
\begin{ex} Suppose an algebraic ${\mathbin{\mathbb K}}$-group $G$ acts on a ${\mathbin{\mathbb K}}$-scheme $T$ with action $\mu:G\times T\rightarrow T$, and write $X$ for the quotient Artin ${\mathbin{\mathbb K}}$-stack $[T/G]$. Then as in \eq{sa3eq10} there is a natural 2-Cartesian diagram \begin{equation*} \xymatrix@C=110pt@R=21pt{ *+[r]{G\times T} \ar[d]^{\pi_T} \ar[r]_(0.3){\mu} \drtwocell_{}\omit^{}\omit{^{\eta}} & *+[l]{T} \ar[d]_t \\ *+[r]{T} \ar[r]^(0.6)t & *+[l]{X=[T/G],\!{}} } \end{equation*} where $t:T\rightarrow X$ is a smooth atlas for $X$. If $s'\in H^0({\mathbin{\cal S}\kern -0.1em}^{\kern .1em 0}_T)$ then $\pi_1^\star(s')=\pi_2^\star(s')$ in \eq{sa3eq11} becomes $\pi_T^\star(s')=\mu^\star(s')$ on $G\times T$, that is, $s'$ is $G$-invariant. Hence, Proposition \ref{sa3prop5} shows that d-critical structures $s$ on $X=[T/G]$ are in 1-1 correspondence with $G$-invariant d-critical structures $s'$ on~$T$. \label{sa3ex} \end{ex}
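A toy instance of Example \ref{sa3ex} (our own illustration): let $G={\mathbin{\mathbb G}}_m$ act on $U={\mathbin{\mathbb A}}^2$ by $\lambda\cdot(x,y)=(\lambda x,\lambda^{-1}y)$, and take the invariant regular function $f=xy$. Then $T=\mathop{\rm Crit}(f)=\{(0,0)\}$ carries the induced d-critical structure $s'=f+I_{{\rm d} f}^2$, and since $f$ is $G$-invariant, so is $s'$ (indeed here $s'=0$, as $xy\in I_{{\rm d} f}^2=(x,y)^2$). By the correspondence above, $s'$ descends to a d-critical structure on the quotient stack~$X=[T/{\mathbin{\mathbb G}}_m]$.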
Here \cite[Th.~2.56]{Joyc.2} is an analogue of Theorem~\ref{sa3thm3}.
\begin{thm} Let\/ $(X,s)$ be a d-critical stack. Using the description of quasi-coherent sheaves on $X^{\rm red}$ in Proposition {\rm\ref{sa3prop3}} there is a line bundle $K_{X,s}$ on the reduced\/ ${\mathbin{\mathbb K}}$-substack\/ $X^{\rm red}$ of\/ $X$ called the \begin{bfseries}canonical bundle\end{bfseries} of\/ $(X,s),$ unique up to canonical isomorphism, such that: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \item[{\bf(a)}] For each point\/ $x\in X^{\rm red}\subseteq X$ we have a canonical isomorphism \e \kappa_x:K_{X,s}\vert_x\,{\buildrel\cong\over\longrightarrow}\, \bigl(\Lambda^{\rm top}T_x^*X\bigr)^{\otimes^2} \otimes\bigl(\Lambda^{\rm top}{\mathop{\mathfrak{Iso}}\nolimits}_x(X)\bigr)^{\otimes^2}, \label{sa3eq12} \e where $T_x^*X$ is the Zariski cotangent space of\/ $X$ at\/ $x,$ and\/ ${\mathop{\mathfrak{Iso}}\nolimits}_x(X)$ the Lie algebra of the isotropy group (stabilizer group) $\mathop{\rm Iso}\nolimits_x(X)$ of\/ $X$ at\/~$x$. \item[{\bf(b)}] If\/ $T$ is a ${\mathbin{\mathbb K}}$-scheme and\/ $t: T\rightarrow X$ a smooth $1$-morphism, so that\/ $t^{\rm red}:T^{\rm red}\rightarrow X^{\rm red}$ is also smooth, then there is a natural isomorphism of line bundles on $T^{\rm red}\!:$ \e \Gamma_{T,t}:K_{X,s}(T^{\rm red},t^{\rm red})\,{\buildrel\cong\over\longrightarrow}\, K_{T,s(T,t)}\otimes \bigl(\Lambda^{\rm top}T^*_{ T/X}\bigr)\big\vert_{T^{\rm red}}^{\otimes^{-2}}. \label{sa3eq13} \e Here $\bigl(T,s(T,t)\bigr)$ is an algebraic d-critical locus by Definition\/ {\rm\ref{sa3def3},} and\/ $K_{T,s(T,t)}\rightarrow T^{\rm red}$ is its canonical bundle from Theorem\/~{\rm\ref{sa3thm3}}. 
\item[{\bf(c)}] If\/ $t:T\rightarrow X$ is a smooth $1$-morphism, we have a distinguished triangle in~$D_{{\rm qcoh}}(T)\!:$ \e \xymatrix@C=40pt{ t^*({\mathbin{\mathbb L}}_X) \ar[r]^(0.55){{\mathbin{\mathbb L}}_t} & {\mathbin{\mathbb L}}_T \ar[r] & T^*_{T/X} \ar[r] & t^*({\mathbin{\mathbb L}}_X)[1], } \label{sa3eq14} \e where ${\mathbin{\mathbb L}}_T,{\mathbin{\mathbb L}}_X$ are the cotangent complexes of\/ $T,X,$ and\/ $T^*_{T/X}$ the relative cotangent bundle of\/ $t: T\rightarrow X,$ a vector bundle of mixed rank on\/ $T$. Let\/ $p\in T^{\rm red}\subseteq T,$ so that\/ $t(p):=t\circ p\in X$. Taking the long exact cohomology sequence of\/ \eq{sa3eq14} and restricting to $p\in T$ gives an exact sequence \e 0 \longrightarrow T^*_{t(p)}X \longrightarrow T^*_pT \longrightarrow T^*_{ T/X}\vert_p \longrightarrow {\mathop{\mathfrak{Iso}}\nolimits}_{t(p)}(X)^* \longrightarrow 0. \label{sa3eq15} \e Then the following diagram commutes: \begin{equation*} \xymatrix@!0@C=108pt@R=50pt{*+[r]{K_{X,s}\vert_{t(p)}} \ar[d]^{\kappa_{t(p)}} \ar@{=}[r] & K_{X,s}(T^{\rm red},t^{\rm red})\vert_p \ar[rr]_(0.3){\Gamma_{T,t}\vert_p} && *+[l]{K_{T,s(T,t)}\vert_p\otimes \bigl(\Lambda^{\rm top}T^*_{\smash{ T/X}}\bigr)\big\vert_p^{\otimes^{-2}}} \ar[d]_{\kappa_p\otimes{\mathop{\rm id}\nolimits}} \\ *+[r]{\bigl(\Lambda^{\rm top}T_{t(p)}^*X\bigr)^{\otimes^2}\!\!\otimes\!\bigl(\Lambda^{\rm top}{\mathop{\mathfrak{Iso}}\nolimits}_{t(p)}(X)\bigr)^{\otimes^2}} \ar[rrr]^(0.54){\alpha_p^2} &&& *+[l]{\bigl(\Lambda^{\rm top}T^*_pT\bigr)^{\otimes^2}\!\!\otimes\! 
\bigl(\Lambda^{\rm top}T^*_{T/X}\bigr) \big\vert_p^{\otimes^{-2}},} } \end{equation*} where $\kappa_p,\kappa_{t(p)},\Gamma_{T,t}$ are as in {\rm\eq{sa3eq3}, \eq{sa3eq12}} and\/ {\rm\eq{sa3eq13},} respectively, and\/ $$\alpha_p:\Lambda^{\rm top}T_{t(p)}^*X\otimes\Lambda^{\rm top}{\mathop{\mathfrak{Iso}}\nolimits}_{t(p)}(X)\,{\buildrel\cong\over\longrightarrow}\,\Lambda^{\rm top}T^*_pT\otimes\Lambda^{\rm top}T^*_{T/X}\vert^{-1}_p$$ is induced by taking top exterior powers in\/~\eq{sa3eq15}. \end{itemize} \label{sa3thm5} \end{thm}
Here \cite[Def.~2.57]{Joyc.2} is the analogue of Definition~\ref{sa3def2}:
\begin{dfn} Let $(X,s)$ be a d-critical stack, and $K_{X,s}$ its canonical bundle from Theorem \ref{sa3thm5}. An {\it orientation\/} on $(X,s)$ is a choice of square root line bundle $K_{X,s}^{1/2}$ for $K_{X,s}$ on $X^{\rm red}$. That is, an orientation is a line bundle $L$ on $X^{\rm red}$, together with an isomorphism $L^{\otimes^2}=L\otimes L\cong K_{X,s}$. A d-critical stack with an orientation will be called an {\it oriented d-critical stack}. \label{sa3def4} \end{dfn}
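For instance (a model example in the scheme case, included here for illustration): if $(X,s)$ is the algebraic d-critical locus of a global critical locus, say $X=\mathop{\rm Crit}(f)\subseteq U$ for $f:U\rightarrow{\mathbin{\mathbb A}}^1$ a regular function on a smooth ${\mathbin{\mathbb K}}$-scheme $U$, then the critical-chart description of canonical bundles in Theorem \ref{sa3thm3}, applied to the chart $(X,U,f,\mathop{\rm inc})$, gives a canonical isomorphism
\begin{equation*}
K_{X,s}\,\cong\,K_U^{\otimes^2}\big\vert_{X^{\rm red}},\qquad K_U=\Lambda^{\rm top}T^*U,
\end{equation*}
so $K_U\vert_{X^{\rm red}}$ is a natural square root for $K_{X,s}$, and such an $(X,s)$ carries a canonical orientation.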
Let $(X,s)$ be an oriented d-critical stack. Then for each smooth $t:T\rightarrow X$ we have a square root $K_{X,s}^{1/2}(T^{\rm red},t^{\rm red})$. Thus by \eq{sa3eq13}, $K_{X,s}^{1/2}(T^{\rm red},t^{\rm red})\otimes (\Lambda^{\rm top}T^*_{\smash{T/X}})\vert_{T^{\rm red}}$ is a square root for $K_{T,s(T,t)}$. This proves~\cite[Lem.~2.58]{Joyc.2}:
\begin{lem} Let\/ $(X,s)$ be a d-critical stack. Then an orientation $K_{X,s}^{1/2}$ for $(X,s)$ determines a canonical orientation\/ $K_{T,s(T,t)}^{1/2}$ for the algebraic d-critical locus $\bigl(T,s(T,t)\bigr),$ for all smooth\/ $t: T\rightarrow X$ with\/ $T$ a ${\mathbin{\mathbb K}}$-scheme. \label{sa3lem1} \end{lem}
\subsection{Equivariant d-critical loci} \label{eqdcr}
Here we summarize some results on group actions on algebraic d-critical loci from \cite{Joyc2}.
\begin{dfn} Let $(X,s)$ be an algebraic d-critical locus over ${\mathbin{\mathbb K}}$, and $\mu:G\times X\rightarrow X$ an action of an algebraic ${\mathbin{\mathbb K}}$-group $G$ on the ${\mathbin{\mathbb K}}$-scheme $X$. We also write the action as $\mu(\gamma):X\rightarrow X$ for $\gamma\in G$. We say that $(X,s)$ is $G$-{\it invariant\/} if $\mu(\gamma)^\star(s)=s$ for all $\gamma\in G$, or equivalently, if $\mu^\star(s)=\pi_X^\star(s)$ in $H^0({\mathbin{\cal S}\kern -0.1em}^{\kern .1em 0}_{G\times X})$, where $\pi_X:G\times X\rightarrow X$ is the projection.
Let $\chi:G\rightarrow{\mathbin{\mathbb G}}_m$ be a morphism of algebraic ${\mathbin{\mathbb K}}$-groups, that is, a character of $G$, where ${\mathbin{\mathbb G}}_m={\mathbin{\mathbb K}}\setminus\{0\}$ is the multiplicative group. We say that $(X,s)$ is $G$-{\it equivariant, with character\/} $\chi,$ if $\mu(\gamma)^\star(s)=\chi(\gamma)\cdot s$ for all $\gamma\in G$, or equivalently, if $\mu^\star(s)=(\chi\circ\pi_G)\cdot(\pi_X^\star(s))$ in $H^0({\mathbin{\cal S}\kern -0.1em}^{\kern .1em 0}_{G\times X})$, where $H^0({\mathbin{\cal O}}_G)\ni\chi$ acts on $H^0({\mathbin{\cal S}\kern -0.1em}^{\kern .1em 0}_{G\times X})$ by multiplication, as $G$ is a smooth ${\mathbin{\mathbb K}}$-scheme.
Suppose $(X,s)$ is $G$-invariant or $G$-equivariant, with $\chi=1$ in the $G$-invariant case. We call a critical chart $(R,U,f,i)$ on $(X,s)$ with a $G$-action $\rho:G\times U\rightarrow U$ a $G$-{\it equivariant critical chart\/} if $R\subseteq X$ is a $G$-invariant open subscheme, and $i:R\hookrightarrow U$, $f:U\rightarrow{\mathbin{\mathbb A}}^1$ are equivariant with respect to the actions $\mu\vert_{G\times R},\rho,\chi$ of $G$ on $R,U,{\mathbin{\mathbb A}}^1$, respectively.
We call a subchart $(R',U',f',i')\subseteq(R,U,f,i)$ a $G$-{\it equivariant subchart\/} if $R'\subseteq R$ and $U'\subseteq U$ are $G$-invariant open subschemes. Then $(R',U',f',i'),\rho'$ is a $G$-equivariant critical chart, where~$\rho'=\rho\vert_{G\times U'}$. \label{dc2def7} \end{dfn}
Note that $X$ may not be covered by $G$-equivariant critical charts without extra assumptions on~$X,G$. We will restrict to the case when $G$ is a torus, with a `good' action on~$X$:
\begin{dfn} Let $X$ be a ${\mathbin{\mathbb K}}$-scheme, $G$ an algebraic ${\mathbin{\mathbb K}}$-torus, and $\mu:G\times X\rightarrow X$ an action of $G$ on $X$. We call $\mu$ a {\it good action\/} if $X$ admits a Zariski open cover by $G$-invariant affine open ${\mathbin{\mathbb K}}$-subschemes $U\subseteq X$. \label{dc2def8} \end{dfn}
A torus-equivariant d-critical locus $(X,s)$ admits an open cover by equivariant critical charts if and only if the torus action is good:
\begin{prop} Let\/ $(X,s)$ be an algebraic d-critical locus which is invariant or equivariant under the action $\mu:G\times X\rightarrow X$ of an algebraic torus $G$.
\noindent{\bf(a)} If\/ $\mu$ is good then for all\/ $x\in X$ there exists a $G$-equivariant critical chart\/ $(R,U,f,i),\rho$ on $(X,s)$ with\/ $x\in R,$ and we may take $\mathop{\rm dim}\nolimits U=\mathop{\rm dim}\nolimits T_xX$.
\noindent{\bf(b)} Conversely, if for all\/ $x\in X$ there exists a $G$-equivariant critical chart\/ $(R,U,f,i),\rho$ on $(X,s)$ with\/ $x\in R,$ then $\mu$ is good. \label{dc2prop14} \end{prop}
\section{Derived symplectic structures in Donaldson--Thomas theory} \label{ourpapers}
We now use derived algebraic geometry from \cite{PTVV} to summarize the main results of the series of papers \cite{BBJ,BBDJS,BJM,BBBJ}, together with their consequences in Donaldson--Thomas theory. Some of these results will not be used to prove our main results stated in \S\ref{main.1}, but we present them as they contribute to a complete picture of the theory.
\subsection{Symplectic derived schemes and critical loci}
Here we summarize the main results from \cite{BBJ}. The following is \cite[Thm.~5.18]{BBJ}.
\begin{thm} Let\/ ${\bs X}$ be a derived\/ ${\mathbin{\mathbb K}}$-scheme with\/ $k$-shifted symplectic form $\tilde\omega$ for $k<0,$ and\/ $x\in{\bs X}$. Then there exists a standard form cdga $A$ over ${\mathbin{\mathbb K}}$ which is minimal at\/ $p\in\mathop{\rm Spec}\nolimits H^0(A)$ in the sense of \cite[\S 4]{BBJ}, a $k$-shifted symplectic form\/ $\omega$ on $\mathop{\boldsymbol{\rm Spec}}\nolimits A,$ and a morphism $\boldsymbol f:{\bs U}=\mathop{\boldsymbol{\rm Spec}}\nolimits A\rightarrow{\bs X}$ with\/ $\boldsymbol f(p)=x$ and\/ $\boldsymbol f^*(\tilde\omega)\sim\omega,$ such that if\/ $k$ is odd or divisible by $4,$ then $\boldsymbol f$ is a Zariski open inclusion, and\/ $A,\omega$ are in Darboux form, and if\/ $k\equiv 2\mod 4,$ then $\boldsymbol f$ is \'etale, and\/ $A,\omega$ are in strong Darboux form, as in \cite[\S 5]{BBJ}. \label{sa2thm3} \end{thm}
Let $Y$ be a Calabi--Yau $m$-fold over ${\mathbin{\mathbb K}}$, that is, a smooth projective ${\mathbin{\mathbb K}}$-scheme with $H^i({\mathbin{\cal O}}_Y)={\mathbin{\mathbb K}}$ for $i=0,m$ and $H^i({\mathbin{\cal O}}_Y)=0$ for $0<i<m$. Suppose ${\mathbin{\cal M}}$ is a classical moduli ${\mathbin{\mathbb K}}$-scheme of simple coherent sheaves in $\mathop{\rm coh}\nolimits(Y)$, where we call $F\in\mathop{\rm coh}\nolimits(Y)$ {\it simple\/} if $\mathop{\rm Hom}\nolimits(F,F)={\mathbin{\mathbb K}}$. More generally, suppose ${\mathbin{\cal M}}$ is a moduli ${\mathbin{\mathbb K}}$-scheme of simple complexes of coherent sheaves in $D^b\mathop{\rm coh}\nolimits(Y)$, where we call $F^\bullet\in D^b\mathop{\rm coh}\nolimits(Y)$ {\it simple\/} if $\mathop{\rm Hom}\nolimits(F^\bullet,F^\bullet)={\mathbin{\mathbb K}}$ and $\mathop{\rm Ext}^{<0}(F^\bullet,F^\bullet)=0$. Such moduli spaces ${\mathbin{\cal M}}$ are only known to be algebraic ${\mathbin{\mathbb K}}$-spaces in general, but we assume ${\mathbin{\cal M}}$ is a ${\mathbin{\mathbb K}}$-scheme. Then ${\mathbin{\cal M}}=t_0(\boldsymbol{\mathbin{\cal M}})$, for $\boldsymbol{\mathbin{\cal M}}$ the corresponding derived moduli ${\mathbin{\mathbb K}}$-scheme. To make ${\mathbin{\cal M}},\boldsymbol{\mathbin{\cal M}}$ into schemes rather than stacks, we consider moduli of sheaves or complexes with fixed determinant. Then Pantev et al.\ \cite[\S 2.1]{PTVV} prove $\boldsymbol{\mathbin{\cal M}}$ has a $(2-m)$-shifted symplectic structure $\omega$, so Theorem \ref{sa2thm3} shows that $(\boldsymbol{\mathbin{\cal M}},\omega)$ is Zariski locally modelled on $(\mathop{\boldsymbol{\rm Spec}}\nolimits A,\omega)$, and ${\mathbin{\cal M}}$ is Zariski locally modelled on $\mathop{\rm Spec}\nolimits H^0(A)$. In the case $m=3$, so that $k=-1$, we get \cite[Cor. 5.19]{BBJ}:
\begin{cor} Suppose $Y$ is a Calabi--Yau\/ $3$-fold over a field\/ ${\mathbin{\mathbb K}},$ and\/ ${\mathbin{\cal M}}$ is a classical moduli ${\mathbin{\mathbb K}}$-scheme of simple coherent sheaves, or simple complexes of coherent sheaves, on $Y$. Then for each\/ $[F]\in{\mathbin{\cal M}},$ there exist a smooth\/ ${\mathbin{\mathbb K}}$-scheme $U$ with\/ $\mathop{\rm dim}\nolimits U=\mathop{\rm dim}\nolimits\mathop{\rm Ext}\nolimits^1(F,F),$ a regular function $f:U\rightarrow{\mathbin{\mathbb A}}^1,$ and an isomorphism from $\mathop{\rm Crit}(f)\subseteq U$ to a Zariski open neighbourhood of\/ $[F]$ in\/~${\mathbin{\cal M}}$. \label{da5cor1} \end{cor}
Here $\mathop{\rm dim}\nolimits U=\mathop{\rm dim}\nolimits\mathop{\rm Ext}\nolimits^1(F,F)$ comes from $A$ minimal at $p$ and $\boldsymbol f(p)=[F]$ in Theorem \ref{sa2thm3}. This is a new result in Donaldson--Thomas theory. We already explained that when ${\mathbin{\mathbb K}}={\mathbin{\mathbb C}}$ and ${\mathbin{\cal M}}$ is a moduli space of simple coherent sheaves on $Y$, using gauge theory and transcendental complex methods, Joyce and Song \cite[Th.~5.4]{JoSo} prove that the underlying complex analytic space ${\mathbin{\cal M}}^{\rm an}$ of ${\mathbin{\cal M}}$ is locally of the form $\mathop{\rm Crit}(f)$ for $U$ a complex manifold and $f:U\rightarrow{\mathbin{\mathbb C}}$ a holomorphic function. Behrend and Getzler announced the analogue of \cite[Th.~5.4]{JoSo} for moduli of complexes in $D^b\mathop{\rm coh}\nolimits(Y)$, but the proof has not yet appeared. Over general ${\mathbin{\mathbb K}}$, as in Kontsevich and Soibelman \cite[\S 3.3]{KoSo1} the formal neighbourhood $\hat{\mathbin{\cal M}}_{[F]}$ of ${\mathbin{\cal M}}$ at any $[F]\in{\mathbin{\cal M}}$ is isomorphic to the critical locus $\mathop{\rm Crit}(\hat f)$ of a formal power series $\hat f$ on $\mathop{\rm Ext}\nolimits^1(F,F)$ with only cubic and higher terms.
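As a toy illustration of these critical-locus local models (ours, not drawn from the papers cited): take $U={\mathbin{\mathbb A}}^1$ over ${\mathbin{\mathbb K}}$ of characteristic zero and $f(x)=x^3$, which has only cubic and higher terms, as in the minimal case above. Then ${\rm d} f=3x^2\,{\rm d} x$, so
\begin{equation*}
\mathop{\rm Crit}(f)=\mathop{\rm Spec}\nolimits{\mathbin{\mathbb K}}[x]/(x^2),
\end{equation*}
a non-reduced point whose Zariski tangent space at the origin is $1$-dimensional. Thus $\mathop{\rm dim}\nolimits U=\mathop{\rm dim}\nolimits T_x\mathop{\rm Crit}(f)$, as in the equality $\mathop{\rm dim}\nolimits U=\mathop{\rm dim}\nolimits\mathop{\rm Ext}\nolimits^1(F,F)$ of Corollary \ref{da5cor1} coming from minimality.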
Here are \cite[Thm. 6.6 \& Cor. 6.7]{BBJ}:
\begin{thm} Suppose\/ $({\bs X},\tilde\omega)$ is a $-1$-shifted symplectic derived\/ ${\mathbin{\mathbb K}}$-scheme, and let\/ $X=t_0({\bs X})$ be the associated classical\/ ${\mathbin{\mathbb K}}$-scheme of\/ ${{\bs X}}$. Then $X$ extends uniquely to an algebraic d-critical locus\/ $(X,s),$ with the property that whenever\/ $(\mathop{\boldsymbol{\rm Spec}}\nolimits A,\omega)$ is a $-1$-shifted symplectic derived\/ ${\mathbin{\mathbb K}}$-scheme in Darboux form with Hamiltonian $H\in A(0),$ as in \cite[Ex.s 5.8 \& 5.15]{BBJ}, and\/ $\boldsymbol f:\mathop{\boldsymbol{\rm Spec}}\nolimits A\rightarrow{\bs X}$ is an equivalence in $\mathop{\bf dSch}\nolimits_{\mathbin{\mathbb K}}$ with a Zariski open derived\/ ${\mathbin{\mathbb K}}$-subscheme\/ ${\bs R}\subseteq{\bs X}$ with\/ $\boldsymbol f^*(\tilde\omega)\sim\omega,$ writing\/ $U=\mathop{\rm Spec}\nolimits A(0),$ $R=t_0({\bs R}),$ $f=t_0(\boldsymbol f)$ so that\/ $H:U\rightarrow{\mathbin{\mathbb A}}^1$ is regular and\/ $f:\mathop{\rm Crit}(H)\rightarrow R$ is an isomorphism, for $\mathop{\rm Crit}(H)\subseteq U$ the classical critical locus of\/ $H,$ then $(R,U,H,f^{-1})$ is a critical chart on\/~$(X,s)$. The canonical bundle $K_{X,s}$ from Theorem\/ {\rm\ref{sa3thm3}} is naturally isomorphic to the determinant line bundle $\det({\mathbin{\mathbb L}}_{{\bs X}})\vert_{X^{\rm red}}$ of the cotangent complex\/ ${\mathbin{\mathbb L}}_{{\bs X}}$ of\/~${\bs X}$. 
\label{da6thm4} \end{thm} We can think of Theorem \ref{da6thm4} as defining a {\it truncation functor} \e \begin{split} F:\bigl\{&\text{category of $-1$-shifted symplectic derived ${\mathbin{\mathbb K}}$-schemes $({\bs X},\omega)$}\bigr\}\\ &\longrightarrow\bigl\{\text{category of algebraic d-critical loci $(X,s)$ over ${\mathbin{\mathbb K}}$}\bigr\}, \end{split} \label{da6eq3} \e where the morphisms $\boldsymbol f:({\bs X},\omega)\rightarrow({\bs Y},\omega')$ in the first line are (homotopy classes of) \'etale maps $\boldsymbol f:{\bs X}\rightarrow{\bs Y}$ with $\boldsymbol f^*(\omega')\sim\omega$, and the morphisms $f:(X,s)\rightarrow (Y,t)$ in the second line are \'etale maps $f:X\rightarrow Y$ with~$f^*(t)=s$. In \cite[Ex.~2.17]{Joyc2} Joyce gives an example of $-1$-shifted symplectic derived schemes $({\bs X},\omega),({\bs Y},\omega')$, both global critical loci, such that ${\bs X},{\bs Y}$ are not equivalent as derived ${\mathbin{\mathbb K}}$-schemes, but their truncations $F({\bs X},\omega),F({\bs Y},\omega')$ are isomorphic as algebraic d-critical loci. Thus, the functor $F$ in \eq{da6eq3} is not full.
Suppose again $Y$ is a Calabi--Yau 3-fold over ${\mathbin{\mathbb K}}$ and ${\mathbin{\cal M}}$ a classical moduli ${\mathbin{\mathbb K}}$-scheme of simple coherent sheaves in $\mathop{\rm coh}\nolimits(Y)$. Then Thomas \cite{Thom} defined a natural {\it perfect obstruction theory\/} $\phi:{\mathbin{\cal E}}^\bullet\rightarrow{\mathbin{\mathbb L}}_{\mathbin{\cal M}}$ on ${\mathbin{\cal M}}$ in the sense of Behrend and Fantechi \cite{BeFa}, and Behrend \cite{Behr} showed that $\phi:{\mathbin{\cal E}}^\bullet\rightarrow{\mathbin{\mathbb L}}_{\mathbin{\cal M}}$ can be made into a {\it symmetric obstruction theory}. More generally, if ${\mathbin{\cal M}}$ is a moduli ${\mathbin{\mathbb K}}$-scheme of simple complexes of coherent sheaves in $D^b\mathop{\rm coh}\nolimits(Y)$, then Huybrechts and Thomas \cite{HuTh} defined a natural symmetric obstruction theory on~${\mathbin{\cal M}}$. Now in derived algebraic geometry ${\mathbin{\cal M}}=t_0(\boldsymbol{\mathbin{\cal M}})$ for $\boldsymbol{\mathbin{\cal M}}$ the corresponding derived moduli ${\mathbin{\mathbb K}}$-scheme, and the obstruction theory $\phi:{\mathbin{\cal E}}^\bullet\rightarrow{\mathbin{\mathbb L}}_{\mathbin{\cal M}}$ from \cite{HuTh,Thom} is ${\mathbin{\mathbb L}}_{t_0}:{\mathbin{\mathbb L}}_{\boldsymbol{\mathbin{\cal M}}}\vert_{\mathbin{\cal M}}\rightarrow{\mathbin{\mathbb L}}_{\mathbin{\cal M}}$. Pantev et al.\ \cite[\S 2.1]{PTVV} prove $\boldsymbol{\mathbin{\cal M}}$ has a $-1$-shifted symplectic structure $\omega$, and the symmetric structure on $\phi:{\mathbin{\cal E}}^\bullet\rightarrow{\mathbin{\mathbb L}}_{\mathbin{\cal M}}$ from \cite{Behr} is $\omega^0\vert_{\mathbin{\cal M}}$. So as for Corollary \ref{da5cor1}, Theorem \ref{da6thm4} implies:
\begin{cor} Suppose $Y$ is a Calabi--Yau\/ $3$-fold over\/ ${\mathbin{\mathbb K}},$ and\/ ${\mathbin{\cal M}}$ is a classical moduli\/ ${\mathbin{\mathbb K}}$-scheme of simple coherent sheaves in $\mathop{\rm coh}\nolimits(Y),$ or simple complexes of coherent sheaves in $D^b\mathop{\rm coh}\nolimits(Y),$ with perfect obstruction theory\/ $\phi:{\mathbin{\cal E}}^\bullet\rightarrow{\mathbin{\mathbb L}}_{\mathbin{\cal M}}$ as in Thomas\/ {\rm\cite{Thom}} or Huybrechts and Thomas\/ {\rm\cite{HuTh}}. Then ${\mathbin{\cal M}}$ extends naturally to an algebraic d-critical locus $({\mathbin{\cal M}},s)$. The canonical bundle $K_{{\mathbin{\cal M}},s}$ from Theorem\/ {\rm\ref{sa3thm3}} is naturally isomorphic to $\det({\mathbin{\cal E}}^\bullet)\vert_{{\mathbin{\cal M}}^{\rm red}}$. \label{da6cor1} \end{cor}
\subsection{Categorification using perverse sheaves and motives}
Here we summarize the main results from \cite{BBDJS}. This section is not used in the sequel, but it completes the discussion started in \S\ref{dt3.1.2}. The following theorems are \cite[Cor.~6.10 \& Cor.~6.11]{BBDJS}:
\begin{thm} Let\/ $(\boldsymbol X,\omega)$ be a $-1$-shifted symplectic derived scheme over\/ ${\mathbin{\mathbb C}}$ in the sense of Pantev et al.\ {\rm\cite{PTVV},} and\/ $X=t_0(\boldsymbol X)$ the associated classical\/ ${\mathbin{\mathbb C}}$-scheme. Suppose we are given a square root\/ $\smash{\det({\mathbin{\mathbb L}}_{\boldsymbol X})\vert_X^{1/2}}$ for $\det({\mathbin{\mathbb L}}_{\boldsymbol X})\vert_X$. Then we may define $P_{\boldsymbol X,\omega}^\bullet\in\mathop{\rm Perv}\nolimits(X),$ uniquely up to canonical isomorphism, and isomorphisms $\Sigma_{\boldsymbol X,\omega}:P_{\boldsymbol X,\omega}^\bullet\rightarrow {\mathbin{\mathbb D}}_X(P_{\boldsymbol X,\omega}^\bullet),$ ${\rm T}_{\boldsymbol X,\omega}:P_{\boldsymbol X,\omega}^\bullet\rightarrow P_{\boldsymbol X,\omega}^\bullet$. The same applies for ${\mathbin{\scr D}}$-modules and mixed Hodge modules on $X,$ and for $l$-adic perverse sheaves and\/ ${\mathbin{\scr D}}$-modules on $X$ if\/ $\boldsymbol X$ is over ${\mathbin{\mathbb K}}$ with\/~$\mathop{\rm char}{\mathbin{\mathbb K}}=0$. \label{sm6cor2} \end{thm}
\begin{thm} Let\/ $Y$ be a Calabi--Yau\/ $3$-fold over\/ ${\mathbin{\mathbb C}},$ and\/ ${\mathbin{\cal M}}$ a classical moduli\/ ${\mathbin{\mathbb K}}$-scheme of simple coherent sheaves in $\mathop{\rm coh}\nolimits(Y),$ or simple complexes of coherent sheaves in $D^b\mathop{\rm coh}\nolimits(Y),$ with natural (symmetric) obstruction theory\/ $\phi:{\mathbin{\cal E}}^\bullet\rightarrow{\mathbin{\mathbb L}}_{\mathbin{\cal M}}$ as in Behrend\/ {\rm\cite{Behr},} Thomas\/ {\rm\cite{Thom},} or Huybrechts and Thomas\/ {\rm\cite{HuTh}}. Suppose we are given a square root\/ $\det({\mathbin{\cal E}}^\bullet)^{1/2}$ for $\det({\mathbin{\cal E}}^\bullet)$. Then we may define $P_{\mathbin{\cal M}}^\bullet\in\mathop{\rm Perv}\nolimits({\mathbin{\cal M}}),$ uniquely up to canonical isomorphism, and isomorphisms $\Sigma_{\mathbin{\cal M}}:P_{\mathbin{\cal M}}^\bullet\rightarrow {\mathbin{\mathbb D}}_{\mathbin{\cal M}}(P_{\mathbin{\cal M}}^\bullet),$ ${\rm T}_{\mathbin{\cal M}}:P_{\mathbin{\cal M}}^\bullet\rightarrow P_{\mathbin{\cal M}}^\bullet$. The same applies for ${\mathbin{\scr D}}$-modules and mixed Hodge modules on ${\mathbin{\cal M}},$ and for $l$-adic perverse sheaves and\/ ${\mathbin{\scr D}}$-modules on ${\mathbin{\cal M}}$ if\/ $Y,{\mathbin{\cal M}}$ are over ${\mathbin{\mathbb K}}$ with\/~$\mathop{\rm char}{\mathbin{\mathbb K}}=0$.
\label{sm6cor3} \end{thm}
Theorem \ref{sm6cor3} is relevant to the {\it categorification\/} of Donaldson--Thomas theory as discussed in \S\ref{dt3.1.2}. As in \cite[\S 1.2]{Behr}, the perverse sheaf $P_{{\mathbin{\cal M}}_{\rm st}^\alpha(\tau)}^\bullet$ has pointwise Euler characteristic $\chi\bigl(P_{{\mathbin{\cal M}}_{\rm st}^\alpha(\tau)}^\bullet\bigr)=\nu$. This implies that when $A$ is a field, say $A={\mathbin{\mathbb Q}}$, the (compactly-supported) hypercohomologies ${\mathbin{\mathbb H}}^*\bigl(P_{{\mathbin{\cal M}}_{\rm st}^\alpha(\tau)}^\bullet\bigr), {\mathbin{\mathbb H}}^*_{\rm cs}\bigl(P_{{\mathbin{\cal M}}_{\rm st}^\alpha(\tau)}^\bullet\bigr)$ satisfy \begin{align*} \textstyle\sum\limits_{k\in{\mathbin{\mathbb Z}}}(-1)^k\mathop{\rm dim}\nolimits {\mathbin{\mathbb H}}^k\bigl(P_{{\mathbin{\cal M}}_{\rm st}^\alpha(\tau)}^\bullet\bigr)&= \textstyle\sum\limits_{k\in{\mathbin{\mathbb Z}}}(-1)^k\mathop{\rm dim}\nolimits {\mathbin{\mathbb H}}^k_{\rm cs}\bigl(P_{{\mathbin{\cal M}}_{\rm st}^\alpha(\tau)}^\bullet\bigr)=\chi\bigl({\mathbin{\cal M}}_{\rm st}^\alpha(\tau),\nu\bigr)=DT^\alpha(\tau), \end{align*} where ${\mathbin{\mathbb H}}^k\bigl(P_{{\mathbin{\cal M}}_{\rm st}^\alpha(\tau)}^\bullet\bigr) \cong {\mathbin{\mathbb H}}^{-k}_{\rm cs}\bigl(P_{{\mathbin{\cal M}}_{\rm st}^\alpha(\tau)}^\bullet\bigr){}^*$ by Verdier duality. That is, we have produced a natural graded ${\mathbin{\mathbb Q}}$-vector space ${\mathbin{\mathbb H}}^*\bigl(P_{{\mathbin{\cal M}}_{\rm st}^\alpha(\tau)}^\bullet\bigr)$, thought of as some kind of generalized cohomology of ${\mathbin{\cal M}}_{\rm st}^\alpha(\tau)$, whose graded dimension is $DT^\alpha(\tau)$. This gives a new interpretation of the Donaldson--Thomas invariant~$DT^\alpha(\tau)$.
In fact, as discussed at length in \cite[\S 3]{Szen}, the first natural ``refinement'' or ``quantization'' direction of a Donaldson--Thomas invariant $DT^\alpha(\tau)\in{\mathbin{\mathbb Z}}$ is not the Poincar\'e polynomial of this cohomology, but its weight polynomial \begin{equation*} w\bigl({\mathbin{\mathbb H}}^*(P_{{\mathbin{\cal M}}_{\rm st}^\alpha(\tau)}^\bullet), t\bigr) \in{\mathbin{\mathbb Z}}\bigl[t^{\pm\frac{1}{2}}\bigr], \end{equation*} defined using the mixed Hodge structure on the cohomology of the mixed Hodge module version of $P_{{\mathbin{\cal M}}_{\rm st}^\alpha(\tau)}^\bullet$, which exists assuming that ${\mathbin{\cal M}}_{\rm st}^\alpha(\tau)$ is projective.
The material above is related to work by other authors. The idea of categorifying Donaldson--Thomas invariants using perverse sheaves or ${\mathbin{\scr D}}$-modules is probably first due to Behrend \cite{Behr}, and for Hilbert schemes $\mathop{\rm Hilb}^n(Y)$ of a Calabi--Yau 3-fold $Y$ is discussed by Dimca and Szendr\H oi \cite{DiSz} and Behrend, Bryan and Szendr\H oi \cite[\S 3.4]{BBS}, using mixed Hodge modules. Corollary \ref{sm6cor3} answers a question of Joyce and Song~\cite[Question~5.7(a)]{JoSo}.
As in \cite{JoSo,KoSo1} representations of {\it quivers with superpotentials\/} $(Q,W)$ give 3-Calabi--Yau triangulated categories, and one can define Donaldson--Thomas type invariants $DT^\alpha_{Q,W}(\tau)$ `counting' such representations, which are simple algebraic `toy models' for Donaldson--Thomas invariants of Calabi--Yau 3-folds. Kontsevich and Soibelman \cite{KoSo2} explain how to categorify these quiver invariants $DT^\alpha_{Q,W}(\tau)$, and define an associative multiplication on the categorification to make a {\it Cohomological Hall Algebra}. This paper was motivated by the aim of extending \cite{KoSo2} to define Cohomological Hall Algebras for Calabi--Yau 3-folds.
The square root $\det({\mathbin{\cal E}}^\bullet)^{1/2}$ required in Corollary \ref{sm6cor3} corresponds roughly to {\it orientation data\/} in the work of Kontsevich and Soibelman \cite[\S 5]{KoSo1}, \cite{KoSo2}.
Finally, we point out that Kiem and Li \cite{KiLi} have recently proved an analogue of Corollary \ref{sm6cor3} by complex analytic methods, beginning from Joyce and Song's result \cite[Th.~5.4]{JoSo}, proved using gauge theory, that ${\mathbin{\cal M}}_{\rm st}^\alpha(\tau)$ is locally isomorphic to $\mathop{\rm Crit}(f)$ as a complex analytic space, for $V$ a complex manifold and $f:V\rightarrow{\mathbin{\mathbb C}}$ holomorphic.
Next we summarize the main results from \cite{BJM}. The following theorems are \cite[Cor.~5.12 \& Cor.~5.13]{BJM}:
\begin{thm} Let\/ $(\boldsymbol X,\omega)$ be a $-1$-shifted symplectic derived scheme over\/ ${\mathbin{\mathbb K}}$ in the sense of Pantev et al.\ {\rm\cite{PTVV},} and\/ $X=t_0(\boldsymbol X)$ the associated classical\/ ${\mathbin{\mathbb K}}$-scheme, assumed of finite type. Suppose we are given a square root\/ $\smash{\det({\mathbin{\mathbb L}}_{\boldsymbol X})\vert_X^{1/2}}$ for $\det({\mathbin{\mathbb L}}_{\boldsymbol X})\vert_X$. Then we may define a natural motive $MF_{\boldsymbol X,\omega}\in {\mathbin{\smash{\,\,\overline{\!\!\mathcal M\!}\,}}}^{\hat\mu}_X$. \label{mo5cor3} \end{thm}
\begin{thm} Suppose $Y$ is a Calabi--Yau\/ $3$-fold over\/ ${\mathbin{\mathbb K}},$ and\/ ${\mathbin{\cal M}}$ is a finite type moduli\/ ${\mathbin{\mathbb K}}$-scheme of simple coherent sheaves in $\mathop{\rm coh}\nolimits(Y),$ or simple complexes of coherent sheaves in $D^b\mathop{\rm coh}\nolimits(Y),$ with obstruction theory\/ $\phi:{\mathbin{\cal E}}^\bullet\rightarrow{\mathbin{\mathbb L}}_{\mathbin{\cal M}}$ as in Thomas\/ {\rm\cite{Thom}} or Huybrechts and Thomas\/ {\rm\cite{HuTh}}. Suppose we are given a square root\/ $\det({\mathbin{\cal E}}^\bullet)^{1/2}$ for $\det({\mathbin{\cal E}}^\bullet)$. Then we may define a natural motive $MF_{\mathbin{\cal M}}\in{\mathbin{\smash{\,\,\overline{\!\!\mathcal M\!}\,}}}^{\hat\mu}_{\mathbin{\cal M}}.$ \label{mo5cor4} \end{thm}
Kontsevich and Soibelman define a motive over ${\mathbin{\cal M}}_{\rm st}^\alpha(\tau)$, by associating a formal power series to each (not necessarily closed) point, and taking its motivic Milnor fibre. The question of how these formal power series and motivic Milnor fibres vary in families over the base ${\mathbin{\cal M}}_{\rm st}^\alpha(\tau)$ is not really addressed in \cite{KoSo1}. Corollary \ref{mo5cor4} answers this question, showing that Zariski locally in ${\mathbin{\cal M}}_{\rm st}^\alpha(\tau)$ we can take the formal power series and motivic Milnor fibres to all come from a regular function $f:U\rightarrow{\mathbin{\mathbb A}}^1$ on a smooth ${\mathbin{\mathbb K}}$-scheme~$U$. As before, the square root $\det({\mathbin{\cal E}}^\bullet)^{1/2}$ required in Corollary \ref{mo5cor4} corresponds roughly to {\it orientation data\/} in Kontsevich and Soibelman \cite[\S 5]{KoSo1}, \cite{KoSo2}.
\subsection{Generalization to symplectic derived stacks}
Here we summarize the main results from \cite{BBBJ}. The following theorems are \cite[Cor.~2.11 \& Cor.~2.12]{BBBJ}:
\begin{thm} Let\/ $({\bs X},\omega_{\bs X})$ be a $-1$-shifted symplectic derived Artin ${\mathbin{\mathbb K}}$-stack, and\/ $X=t_0({\bs X})$ the corresponding classical Artin ${\mathbin{\mathbb K}}$-stack. Then for each\/ $p\in X$ there exist a smooth\/ ${\mathbin{\mathbb K}}$-scheme $U$ with dimension $\mathop{\rm dim}\nolimits H^0\bigl({\mathbin{\mathbb L}}_X\vert_p\bigr),$ a point\/ $t\in U,$ a regular function $f:U\rightarrow{\mathbin{\mathbb A}}^1$ with\/ ${\rm d}_{dR} f\vert_t=0,$ so that\/ $T:=\mathop{\rm Crit}(f)\subseteq U$ is a closed\/ ${\mathbin{\mathbb K}}$-subscheme with\/ $t\in T,$ and a morphism $\varphi:T\rightarrow X$ which is smooth of relative dimension $\mathop{\rm dim}\nolimits H^1\bigl({\mathbin{\mathbb L}}_X\vert_p\bigr),$ with\/ $\varphi(t)=p$. We may take\/~$f\vert_{T^{\rm red}}=0$. \label{sa2cor1} \end{thm}
Thus, the underlying classical stack $X$ of a $-1$-shifted symplectic derived stack $({\bs X},\omega_{\bs X})$ admits an atlas consisting of critical loci of regular functions on smooth schemes.
Now let $Y$ be a Calabi--Yau 3-fold over ${\mathbin{\mathbb K}}$, and ${\mathbin{\cal M}}$ a classical moduli stack of coherent sheaves $F$ on $Y$, or complexes $F^\bullet$ in $D^b\mathop{\rm coh}\nolimits(Y)$ with $\mathop{\rm Ext}\nolimits^{<0}(F^\bullet,F^\bullet)=0$. Then ${\mathbin{\cal M}}=t_0(\boldsymbol{\mathbin{\cal M}})$, for $\boldsymbol{\mathbin{\cal M}}$ the corresponding derived moduli stack. The (open) condition $\mathop{\rm Ext}\nolimits^{<0}(F^\bullet,F^\bullet)=0$ is needed to make $\boldsymbol{\mathbin{\cal M}}$ $1$-geometric and $1$-truncated (that is, a derived Artin stack, in our terminology); without it, ${\mathbin{\cal M}}$ and $\boldsymbol{\mathbin{\cal M}}$ would be higher (derived) stacks. Pantev et al.\ \cite[\S 2.1]{PTVV} prove $\boldsymbol{\mathbin{\cal M}}$ has a $-1$-shifted symplectic structure $\omega_{\boldsymbol{\mathbin{\cal M}}}$. Applying Theorem \ref{sa2cor1} and using $H^i\bigl({\mathbin{\mathbb L}}_{\boldsymbol{\mathbin{\cal M}}}\vert_{[F]}\bigr)\cong \mathop{\rm Ext}\nolimits^{1-i}(F,F)^*$ yields a new result on classical 3-Calabi--Yau moduli stacks, the statement of which involves no derived geometry:
\begin{cor} Suppose $Y$ is a Calabi--Yau\/ $3$-fold over\/ ${\mathbin{\mathbb K}},$ and\/ ${\mathbin{\cal M}}$ a classical moduli ${\mathbin{\mathbb K}}$-stack of coherent sheaves $F,$ or more generally of complexes $F^\bullet$ in $D^b\mathop{\rm coh}\nolimits(Y)$ with $\mathop{\rm Ext}\nolimits^{<0}(F^\bullet,F^\bullet)=0$. Then for each\/ $[F]\in{\mathbin{\cal M}},$ there exist a smooth\/ ${\mathbin{\mathbb K}}$-scheme $U$ with\/ $\mathop{\rm dim}\nolimits U=\mathop{\rm dim}\nolimits\mathop{\rm Ext}\nolimits^1(F,F),$ a point\/ $u\in U,$ a regular function $f:U\rightarrow{\mathbin{\mathbb A}}^1$ with\/ ${\rm d}_{dR} f\vert_u=0,$ and a morphism $\varphi:\mathop{\rm Crit}(f)\rightarrow{\mathbin{\cal M}}$ which is smooth of relative dimension $\mathop{\rm dim}\nolimits\mathop{\rm Hom}\nolimits(F,F),$ with\/~$\varphi(u)=[F]$. \label{sa2cor2} \end{cor}
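Explicitly, the dimensions in Corollary \ref{sa2cor2} follow by taking $i=0,1$ in the isomorphism $H^i\bigl({\mathbin{\mathbb L}}_{\boldsymbol{\mathbin{\cal M}}}\vert_{[F]}\bigr)\cong\mathop{\rm Ext}\nolimits^{1-i}(F,F)^*$, noting $\mathop{\rm Ext}\nolimits^0(F,F)=\mathop{\rm Hom}\nolimits(F,F)$:
\begin{equation*}
\mathop{\rm dim}\nolimits H^0\bigl({\mathbin{\mathbb L}}_{\boldsymbol{\mathbin{\cal M}}}\vert_{[F]}\bigr)=\mathop{\rm dim}\nolimits\mathop{\rm Ext}\nolimits^1(F,F),\qquad
\mathop{\rm dim}\nolimits H^1\bigl({\mathbin{\mathbb L}}_{\boldsymbol{\mathbin{\cal M}}}\vert_{[F]}\bigr)=\mathop{\rm dim}\nolimits\mathop{\rm Hom}\nolimits(F,F),
\end{equation*}
which give the dimension of $U$ and the relative dimension of $\varphi$ in Theorem \ref{sa2cor1}, respectively.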
This is an analogue of \cite[Cor.~5.19]{BBJ}. When ${\mathbin{\mathbb K}}={\mathbin{\mathbb C}}$, a related result for coherent sheaves only, with $U$ a complex manifold and $f$ a holomorphic function, was proved by Joyce and Song \cite[Th.~5.5]{JoSo} using gauge theory and transcendental complex methods.
Here is \cite[Thm. 3.18]{BBBJ}, a stack version of Theorem \ref{da6thm4}.
\begin{thm} Let\/ ${\mathbin{\mathbb K}}$ be an algebraically closed field of characteristic zero, $({\bs X},\omega_{\bs X})$ a $-1$-shifted symplectic derived Artin ${\mathbin{\mathbb K}}$-stack, and\/ $X=t_0({\bs X})$ the corresponding classical Artin ${\mathbin{\mathbb K}}$-stack. Then there exists a unique d-critical structure $s\in H^0({\mathbin{\cal S}\kern -0.1em}^{\kern .1em 0}_X)$ on $X,$ making $(X,s)$ into a d-critical stack, with the following properties: \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \item[{\bf(a)}] Let\/ $U,$ $f:U\rightarrow{\mathbin{\mathbb A}}^1,$ $T=\mathop{\rm Crit}(f)$ and\/ $\varphi:T\rightarrow X$ be as in Corollary\/ {\rm\ref{sa2cor1},} with\/ $f\vert_{T^{\rm red}}=0$. There is a unique $s_T\in H^0({\mathbin{\cal S}\kern -0.1em}^{\kern .1em 0}_T)$ on $T$ with\/ $\iota_{T,U}(s_T)=i^{-1}(f)+I_{T,U}^2,$ and\/ $(T,s_T)$ is an algebraic d-critical locus. Then $s(T,\varphi)=s_T$ in $H^0({\mathbin{\cal S}\kern -0.1em}^{\kern .1em 0}_T)$. \item[{\bf(b)}] The canonical bundle $K_{X,s}$ of\/ $(X,s)$ from Theorem\/ {\rm\ref{sa3thm5}} is naturally isomorphic to the restriction $\det({\mathbin{\mathbb L}}_{\bs X})\vert_{X^{\rm red}}$ to $X^{\rm red}\subseteq X\subseteq{\bs X}$ of the determinant line bundle $\det({\mathbin{\mathbb L}}_{\bs X})$ of the cotangent complex\/ ${\mathbin{\mathbb L}}_{\bs X}$ of\/~${\bs X}$. \end{itemize} \label{sa3thm6} \end{thm}
We can think of Theorem \ref{sa3thm6} as defining a {\it truncation functor} \begin{align*} F:\bigl\{&\text{$\infty$-category of $-1$-shifted symplectic derived Artin ${\mathbin{\mathbb K}}$-stacks $({\bs X},\omega_{\bs X})$}\bigr\}\\ &\longrightarrow\bigl\{\text{2-category of d-critical stacks $(X,s)$ over ${\mathbin{\mathbb K}}$}\bigr\}. \end{align*}
Let $Y$ be a Calabi--Yau 3-fold over ${\mathbin{\mathbb K}}$, and ${\mathbin{\cal M}}$ a classical moduli ${\mathbin{\mathbb K}}$-stack of coherent sheaves in $\mathop{\rm coh}\nolimits(Y)$, or complexes of coherent sheaves in $D^b\mathop{\rm coh}\nolimits(Y)$. There is a natural obstruction theory $\phi:{\mathbin{\cal E}}^\bullet\rightarrow{\mathbin{\mathbb L}}_{\mathbin{\cal M}}$ on ${\mathbin{\cal M}}$, where ${\mathbin{\cal E}}^\bullet\in D_{{\rm qcoh}}({\mathbin{\cal M}})$ is perfect in the interval $[-2,1]$, and $h^i({\mathbin{\cal E}}^\bullet)\vert_F\cong\mathop{\rm Ext}\nolimits^{1-i}(F,F)^*$ for each ${\mathbin{\mathbb K}}$-point $F\in{\mathbin{\cal M}}$, regarding $F$ as an object in $\mathop{\rm coh}\nolimits(Y)$ or $D^b\mathop{\rm coh}\nolimits(Y)$. Now in derived algebraic geometry ${\mathbin{\cal M}}=t_0({\mathbin{\bs{\cal M}}})$ for ${\mathbin{\bs{\cal M}}}$ the corresponding derived moduli ${\mathbin{\mathbb K}}$-stack, and $\phi:{\mathbin{\cal E}}^\bullet\rightarrow{\mathbin{\mathbb L}}_{\mathbin{\cal M}}$ is ${\mathbin{\mathbb L}}_{t_0}:{\mathbin{\mathbb L}}_{{\mathbin{\bs{\cal M}}}} \vert_{\mathbin{\cal M}}\rightarrow{\mathbin{\mathbb L}}_{\mathbin{\cal M}}$. Pantev et al.\ \cite[\S 2.1]{PTVV} prove ${\mathbin{\bs{\cal M}}}$ has a $-1$-shifted symplectic structure $\omega$. Thus Theorem \ref{sa3thm6} implies \cite[Cor. 3.19]{BBBJ}:
\begin{cor} Suppose $Y$ is a Calabi--Yau\/ $3$-fold over\/ ${\mathbin{\mathbb K}}$ of characteristic zero, and\/ ${\mathbin{\cal M}}$ a classical moduli\/ ${\mathbin{\mathbb K}}$-stack of coherent sheaves $F$ in $\mathop{\rm coh}\nolimits(Y),$ or complexes of coherent sheaves $F^\bullet$ in $D^b\mathop{\rm coh}\nolimits(Y)$ with $\mathop{\rm Ext}\nolimits^{<0}(F^\bullet,F^\bullet)=0,$ with obstruction theory\/ $\phi:{\mathbin{\cal E}}^\bullet\rightarrow{\mathbin{\mathbb L}}_{\mathbin{\cal M}}$. Then ${\mathbin{\cal M}}$ extends naturally to an algebraic d-critical locus $({\mathbin{\cal M}},s)$. The canonical bundle $K_{{\mathbin{\cal M}},s}$ from Theorem\/ {\rm\ref{sa3thm5}} is naturally isomorphic to $\det({\mathbin{\cal E}}^\bullet)\vert_{{\mathbin{\cal M}}^{\rm red}}$. \label{sa3cor2} \end{cor}
Here is \cite[Cor. 4.13]{BBBJ}, the stack version of Theorem \ref{sm6cor2}:
\begin{thm} Let\/ ${\mathbin{\mathbb K}}$ be an algebraically closed field of characteristic zero, $({\bs X},\omega)$ a $-1$-shifted symplectic derived Artin ${\mathbin{\mathbb K}}$-stack, and\/ $X=t_0({\bs X})$ the associated classical Artin\/ ${\mathbin{\mathbb K}}$-stack. Suppose we are given a square root\/ $\smash{\det({\mathbin{\mathbb L}}_{\bs X})\vert_X^{1/2}}$. Then working in $l$-adic perverse sheaves on stacks \cite[\S 4]{BBBJ} we may define a perverse sheaf\/ $\check P_{{\bs X},\omega}^\bullet$ on $X$ uniquely up to canonical isomorphism, and Verdier duality and monodromy isomorphisms $\check\Sigma_{{\bs X},\omega}:\check P_{{\bs X},\omega}^\bullet\rightarrow {\mathbin{\mathbb D}}_X(\check P_{{\bs X},\omega}^\bullet)$ and\/ $\check{\rm T}_{{\bs X},\omega}:\check P_{{\bs X},\omega}^\bullet\rightarrow\check P_{{\bs X},\omega}^\bullet$. These are characterized by the fact that given a diagram \begin{equation*} \xymatrix@C=60pt{ {\bs U}=\boldsymbol\mathop{\rm Crit}(f:U\rightarrow{\mathbin{\mathbb A}}^1) & {\bs V} \ar[l]_(0.3){\boldsymbol i} \ar[r]^{\boldsymbol\varphi} & {\bs X} } \end{equation*} such that\/ $U$ is a smooth\/ ${\mathbin{\mathbb K}}$-scheme, $\boldsymbol\varphi$ smooth of dimension $n,$ ${\mathbin{\mathbb L}}_{{\bs V}/{\bs U}} \simeq {\mathbin{\mathbb T}}_{{\bs V}/{\bs X}}[2],$ $\boldsymbol\varphi^*(\omega_{\bs X})\sim \boldsymbol i^*(\omega_{\bs U})$ for $\omega_{\bs U}$ the natural\/ $-1$-shifted symplectic structure on ${\bs U}=\boldsymbol\mathop{\rm Crit}(f:U\rightarrow{\mathbin{\mathbb A}}^1),$ and\/ $\varphi^*(\det({\mathbin{\mathbb L}}_{\bs X})\vert_X^{1/2})\!\cong\! 
i^*(K_U) \otimes \Lambda^n{\mathbin{\mathbb T}}_{{\bs V}/{\bs X}},$ then $\varphi^*(\check P_{{\bs X},\omega}^\bullet)[n],$ $\varphi^*(\check\Sigma_{{\bs X},\omega}^\bullet)[n],$ $\varphi^*(\check{\rm T}_{{\bs X},\omega}^\bullet)[n]$ are canonically isomorphic to $i^*({\mathbin{\cal{PV}}}_{U,f}),$ $i^*(\sigma_{U,f}),$ $i^*(\tau_{U,f}),$ for\/ ${\mathbin{\cal{PV}}}_{U,f},\sigma_{U,f},\tau_{U,f}$ as in \cite{BBBJ}. The same applies in the other theories of perverse sheaves and\/ ${\mathbin{\scr D}}$-modules on stacks. \label{sa4cor1} \end{thm}
Here is \cite[Cor. 4.14]{BBBJ}, the stack version of Theorem \ref{sm6cor3}:
\begin{thm} Let\/ $Y$ be a Calabi--Yau\/ $3$-fold over an algebraically closed field\/ ${\mathbin{\mathbb K}}$ of characteristic zero, and\/ ${\mathbin{\cal M}}$ a classical moduli\/ ${\mathbin{\mathbb K}}$-stack of coherent sheaves $F$ in $\mathop{\rm coh}\nolimits(Y),$ or of complexes $F^\bullet$ in $D^b\mathop{\rm coh}\nolimits(Y)$ with\/ $\mathop{\rm Ext}\nolimits^{<0}(F^\bullet,F^\bullet)=0,$ with obstruction theory\/ $\phi:{\mathbin{\cal E}}^\bullet\rightarrow{\mathbin{\mathbb L}}_{\mathbin{\cal M}}$. Suppose we are given a square root\/ $\det({\mathbin{\cal E}}^\bullet)^{1/2}$. Then working in $l$-adic perverse sheaves on stacks \cite[\S 4]{BBBJ}, we may define a natural perverse sheaf\/ $\check P_{\mathbin{\cal M}}^\bullet\in\mathop{\rm Perv}\nolimits({\mathbin{\cal M}}),$ and Verdier duality and monodromy isomorphisms $\check\Sigma_{\mathbin{\cal M}}:\check P_{\mathbin{\cal M}}^\bullet\rightarrow{\mathbin{\mathbb D}}_{\mathbin{\cal M}}(\check P_{\mathbin{\cal M}}^\bullet)$ and\/ $\check{\rm T}_{\mathbin{\cal M}}:\check P_{\mathbin{\cal M}}^\bullet\rightarrow\check P_{\mathbin{\cal M}}^\bullet$. The pointwise Euler characteristic of\/ $\check P_{\mathbin{\cal M}}^\bullet$ is the Behrend function $\nu_{\mathbin{\cal M}}$ of\/ ${\mathbin{\cal M}}$ from Joyce and Song {\rm\cite[\S 4]{JoSo},} so that\/ $\check P_{\mathbin{\cal M}}^\bullet$ is in effect a categorification of the Donaldson--Thomas theory of ${\mathbin{\cal M}}$. The same applies in the other theories of perverse sheaves and\/ ${\mathbin{\scr D}}$-modules on stacks. \label{sa4cor2} \end{thm}
Here is \cite[Cor. 5.16]{BBBJ}, the stack version of Theorem \ref{mo5cor3}:
\begin{thm} Let\/ $({\bs X},\omega)$ be a $-1$-shifted symplectic derived Artin ${\mathbin{\mathbb K}}$-stack in the sense of Pantev et al.\ {\rm\cite{PTVV},} and\/ $X=t_0({\bs X})$ the associated classical Artin\/ ${\mathbin{\mathbb K}}$-stack, assumed of finite type and locally a global quotient. Suppose we are given a square root\/ $\det({\mathbin{\mathbb L}}_{\bs X})\vert_X^{1/2}$ for $\det({\mathbin{\mathbb L}}_{\bs X}) \vert_X$. Then we may define a natural motive $MF_{{\bs X},\omega}\in{\mathbin{\smash{\,\,\overline{\!\!\mathcal M\!}\,}}}^{{\rm st},\hat\mu}_X,$ which is characterized by the fact that given a diagram \begin{equation*} \xymatrix@C=60pt{ {\bs U}=\boldsymbol\mathop{\rm Crit}(f:U\rightarrow{\mathbin{\mathbb A}}^1) & {\bs V} \ar[l]_(0.3){\boldsymbol i} \ar[r]^{\boldsymbol\varphi} & {\bs X} } \end{equation*} such that\/ $U$ is a smooth\/ ${\mathbin{\mathbb K}}$-scheme, $\boldsymbol\varphi$ is smooth of dimension $n,$ ${\mathbin{\mathbb L}}_{{\bs V}/{\bs U}} \simeq {\mathbin{\mathbb T}}_{{\bs V}/{\bs X}}[2],$ $\boldsymbol\varphi^*(\omega_{\bs X})\sim \boldsymbol i^*(\omega_{\bs U})$ for $\omega_{\bs U}$ the natural\/ $-1$-shifted symplectic structure on ${\bs U}=\boldsymbol\mathop{\rm Crit}(f:U\rightarrow{\mathbin{\mathbb A}}^1),$ and\/ $\varphi^*(\det({\mathbin{\mathbb L}}_{\bs X})\vert_X^{1/2})\cong i^*(K_U)\otimes\Lambda^n{\mathbin{\mathbb T}}_{{\bs V}/{\bs X}},$ then~$\varphi^*(MF_{{\bs X},\omega})={\mathbin{\mathbb L}}^{n/2}\odot i^*(MF^{{\rm mot},\phi}_{U,f})$ in ${\mathbin{\smash{\,\,\overline{\!\!\mathcal M\!}\,}}}^{{\rm st},\hat\mu}_V$. \label{sa5cor1} \end{thm}
Here is \cite[Cor. 5.17]{BBBJ}, the stack version of Theorem \ref{mo5cor4}:
\begin{thm} Let\/ $Y$ be a Calabi--Yau\/ $3$-fold over\/ ${\mathbin{\mathbb K}},$ and\/ ${\mathbin{\cal M}}$ a finite type classical moduli\/ ${\mathbin{\mathbb K}}$-stack of coherent sheaves in $\mathop{\rm coh}\nolimits(Y),$ with natural obstruction theory\/ $\phi:{\mathbin{\cal E}}^\bullet\rightarrow{\mathbin{\mathbb L}}_{\mathbin{\cal M}}$. Suppose we are given a square root\/ $\det({\mathbin{\cal E}}^\bullet)^{1/2}$ for $\det({\mathbin{\cal E}}^\bullet)$. Then we may define a natural motive $MF_{\mathbin{\cal M}}\in{\mathbin{\smash{\,\,\overline{\!\!\mathcal M\!}\,}}}^{{\rm st},\hat\mu}_{\mathbin{\cal M}}$. \label{sa5cor2} \end{thm}
Theorem \ref{sa5cor2} is relevant to Kontsevich and Soibelman's theory of {\it motivic Donaldson--Thomas invariants\/} \cite{KoSo1}. Again, our square root $\det({\mathbin{\cal E}}^\bullet)^{1/2}$ roughly coincides with their {\it orientation data\/} \cite[\S 5]{KoSo1}. In \cite[\S 6.2]{KoSo1}, given a finite type moduli stack ${\mathbin{\cal M}}$ of coherent sheaves on a Calabi--Yau 3-fold $Y$ with orientation data, they define a motive $\int_{\mathbin{\cal M}} 1$ in a ring $D^\mu$ isomorphic to our ${\mathbin{\smash{\,\,\overline{\!\!\mathcal M\!}\,}}}^{{\rm st},\hat\mu}_{\mathbin{\mathbb K}}$. We expect this should agree with $\pi_*(MF_{\mathbin{\cal M}})$ in our notation, with $\pi:{\mathbin{\cal M}}\rightarrow\mathop{\rm Spec}\nolimits{\mathbin{\mathbb K}}$ the projection. This $\int_{\mathbin{\cal M}} 1$ is roughly the motivic Donaldson--Thomas invariant of ${\mathbin{\cal M}}$. Their construction involves expressing ${\mathbin{\cal M}}$ near each point in terms of the critical locus of a formal power series. Kontsevich and Soibelman's constructions were partly conjectural, and our results may fill some gaps in their theory.
\section{The main results} \markboth{Statements of main results}{Statements of main results} \label{main.1}
We will prove and use the algebraic analogue of Theorem \ref{dt5thm1}, which we can state as follows:
\begin{thm} Let\/ $X$ be a Calabi--Yau $3$-fold over ${\mathbin{\mathbb K}},$ and write ${\mathbin{\mathfrak M}}$ for the moduli stack of coherent sheaves on $X$. Then for each\/ $[E]\in{\mathbin{\mathfrak M}}({\mathbin{\mathbb K}}),$ there exists a smooth affine ${\mathbin{\mathbb K}}$-scheme $U,$ a point\/ $p\in U({\mathbin{\mathbb K}}),$ an \'etale morphism $u:U\rightarrow\mathop{\rm Ext}\nolimits^1(E,E)$ with $u(p)=0,$ a regular function $f:U\rightarrow{\mathbin{\mathbb A}}^1$ with\/ $f\vert_p=\partial f\vert_p=0,$ and a $1$-morphism $\xi:\mathop{\rm Crit}(f)\rightarrow{\mathbin{\mathfrak M}}$ smooth of relative dimension $\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E),$ with $\xi(p)=[E]\in{\mathbin{\mathfrak M}}({\mathbin{\mathbb K}}),$ such that if $\iota:\mathop{\rm Ext}\nolimits^1(E,E)\rightarrow T_{[E]}{\mathbin{\mathfrak M}}$ is the natural isomorphism, then ${\rm d}\xi\vert_p=\iota\circ{\rm d} u\vert_p:T_pU\rightarrow T_{[E]}{\mathbin{\mathfrak M}}$.
Moreover, let $G$ be a maximal algebraic torus in $\mathop{\rm Aut}(E)$, acting on $\mathop{\rm Ext}\nolimits^1(E,E)$ by $\gamma:\epsilon\mapsto\gamma\circ\epsilon\circ\gamma^{-1}$. Then we can choose $U,p,u,f,\xi$ and a $G$-action on $U$ such that $u$ is $G$-equivariant and $p,f$ are $G$-invariant, so that $\mathop{\rm Crit}(f)$ is $G$-invariant, and\/ $\xi:\mathop{\rm Crit}(f) \rightarrow{\mathbin{\mathfrak M}}$ factors through the projection $\mathop{\rm Crit}(f)\rightarrow[\mathop{\rm Crit}(f)/G]$. \label{dt5thm2} \end{thm} \index{reductive group!maximal} \index{almost closed $1$-form} \index{Zariski topology} \nomenclature[1oi]{$\xi$}{the $1$-morphism $\xi:\omega^{-1}(0)\rightarrow{\mathbin{\mathfrak M}}$}\index{coarse moduli scheme}\index{moduli scheme!coarse}
Note that one can regard $u:U\rightarrow\mathop{\rm Ext}\nolimits^1(E,E)$ as an \'etale open neighbourhood of $0$ in $\mathop{\rm Ext}\nolimits^1(E,E).$ Theorem \ref{dt5thm2} will be proved in \S\ref{localdes}, using \S\ref{dcr}. Next, we will use this to prove the algebraic analogue of Theorem \ref{dt6thm1}:
\begin{thm} Let\/ $X$ be a Calabi--Yau $3$-fold over ${\mathbin{\mathbb K}},$ and ${\mathbin{\mathfrak M}}$ the moduli stack of coherent sheaves on\/ $X$. The Behrend function $\nu_{{\mathbin{\mathfrak M}}}: {\mathbin{\mathfrak M}}({\mathbin{\mathbb K}})\rightarrow{\mathbin{\mathbb Z}}$ is a natural locally constructible function on ${\mathbin{\mathfrak M}}$. For all\/ $E_1,E_2\in\mathop{\rm coh}\nolimits(X),$ it satisfies:
\begin{equation} \nu_{{\mathbin{\mathfrak M}}}(E_1\oplus E_2)=(-1)^{\bar\chi([E_1],[E_2])} \nu_{{\mathbin{\mathfrak M}}}(E_1)\nu_{{\mathbin{\mathfrak M}}}(E_2), \label{dt6eq1.1} \end{equation}
\begin{equation} \displaystyle \int\limits_{\small{\begin{subarray}{l} [\lambda]\in\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1)):\\
\lambda\; \Leftrightarrow\; 0\rightarrow E_1\rightarrow F\rightarrow E_2\rightarrow 0\end{subarray}}}\!\!\!\!\! \!\!\!\! \!\!\!\! \!\!\!\! \nu_{{\mathbin{\mathfrak M}}}(F)\,{\rm d}\chi \quad - \!\!\!\! \!\!\!\! \!\!\!\! \displaystyle \int\limits_{\small{\begin{subarray}{l}[\mu]\in\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_1,E_2)):\\ \mu\; \Leftrightarrow\; 0\rightarrow E_2\rightarrow D\rightarrow E_1\rightarrow 0\end{subarray}}}\!\!\!\!\! \!\!\!\! \!\!\!\! \!\!\!\! \nu_{{\mathbin{\mathfrak M}}}(D)\,{\rm d}\chi \;\; = \;\; (e_{21}-e_{12})\;\; \nu_{{\mathbin{\mathfrak M}}}(E_1\oplus E_2), \label{dt6eq2.1} \end{equation}
where $e_{21}=\mathop{\rm dim}\nolimits\mathop{\rm Ext}\nolimits^1(E_2,E_1)$ and $e_{12}=\mathop{\rm dim}\nolimits\mathop{\rm Ext}\nolimits^1(E_1,E_2)$ for $E_1,E_2\in\mathop{\rm coh}\nolimits(X).$ Here\/ $\bar\chi([E_1],[E_2])$ in \eq{dt6eq1.1} is the Euler form as in \eq{eu}, \index{Euler form} and in \eq{dt6eq2.1} the correspondence between\/ $[\lambda]\in\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1))$ and\/ $F\in\mathop{\rm coh}\nolimits(X)$ is that\/ $[\lambda]\in\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1))$ lifts to some\/ $0\ne\lambda\in\mathop{\rm Ext}\nolimits^1(E_2,E_1),$ which corresponds to a short exact sequence\/ $0\rightarrow E_1\rightarrow F\rightarrow E_2\rightarrow 0$ in\/ $\mathop{\rm coh}\nolimits(X)$ in the usual way. The function $[\lambda]\mapsto\nu_{{\mathbin{\mathfrak M}}}(F)$ is a constructible function\/ $\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1))\rightarrow{\mathbin{\mathbb Z}},$ and the integrals in \eq{dt6eq2.1} are integrals of constructible functions using the Euler characteristic as measure. \index{constructible function} \index{Euler characteristic} \label{dt6thm1.1} \end{thm}
As in \S\ref{dt4}, the identities \eq{dt6eq1.1}--\eq{dt6eq2.1} are crucial for the whole program in \cite{JoSo}, and will be proved in \S\ref{dt.1}.
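As a quick sanity check of \eq{dt6eq2.1} (an illustration, not part of the proof), consider the degenerate case in which both $\mathop{\rm Ext}\nolimits^1$ groups between $E_1$ and $E_2$ vanish. Then both projective spaces are empty, so both Euler-characteristic integrals vanish, and $e_{21}=e_{12}=0$, so the identity reduces to $0=0$:

```latex
% Degenerate case of \eq{dt6eq2.1}: if Ext^1(E_2,E_1) = Ext^1(E_1,E_2) = 0
% then P(Ext^1(E_2,E_1)) = P(Ext^1(E_1,E_2)) = \emptyset, both integrals
% are zero, and e_21 = e_12 = 0, giving
0 - 0 \;=\; (0-0)\,\nu_{{\mathbin{\mathfrak M}}}(E_1\oplus E_2) \;=\; 0.
```

In this case \eq{dt6eq1.1} is the only nontrivial constraint, relating $\nu_{{\mathbin{\mathfrak M}}}(E_1\oplus E_2)$ to the product $\nu_{{\mathbin{\mathfrak M}}}(E_1)\nu_{{\mathbin{\mathfrak M}}}(E_2)$ up to the sign $(-1)^{\bar\chi([E_1],[E_2])}$.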
In the next theorem, the condition that $\mathop{\rm Ext}\nolimits^{<0}(E^\bullet,E^\bullet)=0$ is necessary for $\tilde{\mathbin{\mathfrak M}}$ to be an Artin stack, rather than a higher stack. Note that this condition is automatically satisfied by complexes $E^\bullet$ which are semistable with respect to any stability condition, for example Bridgeland stability conditions \cite{Brid1}. Therefore to prove wall-crossing formulae for Donaldson--Thomas invariants in the derived category $D^b\mathop{\rm coh}\nolimits(X)$ under change of stability condition by the ``dominant stability condition'' method of \cite{Joyc.4,Joyc.5,Joyc.6,Joyc.7,KaSc}, it is enough to know the Behrend function identities \eq{dt6eq1.1}--\eq{dt6eq2.1} for complexes $E^\bullet$ with $\mathop{\rm Ext}\nolimits^{<0}(E^\bullet,E^\bullet)=0$, and we do not need to deal with complexes $E^\bullet$ with $\mathop{\rm Ext}\nolimits^{<0}(E^\bullet,E^\bullet)\ne 0$, or with higher stacks.
\begin{thm} Let\/ $X$ be a Calabi--Yau 3-fold over ${\mathbin{\mathbb K}},$ and write $\tilde{\mathbin{\mathfrak M}}$ for the moduli stack of complexes $E^\bullet$ in $D^b\mathop{\rm coh}\nolimits(X)$ with $\mathop{\rm Ext}\nolimits^{<0}(E^\bullet,E^\bullet)=0$. This is an Artin stack by \cite{HuTh}. Let\/ $[E^\bullet]\in\tilde{\mathbin{\mathfrak M}}({\mathbin{\mathbb K}}),$ and suppose that a Zariski open neighbourhood of $[E^\bullet]$ in $\tilde{\mathbin{\mathfrak M}}({\mathbin{\mathbb K}})$ is equivalent to a global quotient $[S/\mathop{\rm GL}(n,{\mathbin{\mathbb K}})]$ for $S$ a ${\mathbin{\mathbb K}}$-scheme with a $\mathop{\rm GL}(n,{\mathbin{\mathbb K}})$-action. Then the analogues of Theorems \ref{dt5thm2} and \ref{dt6thm1.1}
hold with $\tilde{\mathbin{\mathfrak M}},E^\bullet$ in place of\/ ${\mathbin{\mathfrak M}},E$. \label{dt6thm1.1.bis} \end{thm}
The condition on $\tilde{\mathbin{\mathfrak M}}$ that it should be {\it locally a global quotient\/} is known for the moduli stack ${\mathbin{\mathfrak M}}$ of coherent sheaves, using Quot schemes. A proof can be found in \cite[\S 9.3]{JoSo}, where Joyce and Song use the standard method for constructing coarse moduli schemes\index{coarse moduli scheme}\index{moduli scheme!coarse} of semistable coherent sheaves in Huybrechts and Lehn \cite{HuLe2}, adapting it for Artin stacks, together with an argument similar to parts of Luna's Etale Slice Theorem~\cite[\S III]{Luna}.\index{Luna's Etale Slice Theorem} However, this is not known for the moduli stack of complexes. The author expects Theorem \ref{dt6thm1.1.bis} to hold without this technical assumption, but cannot currently prove it.
The proof of Theorem \ref{dt6thm1.1.bis} is the same as that of Theorem \ref{dt6thm1.1}, replacing sheaves by complexes of sheaves and making the obvious modifications.
Finally, in \S\ref{def} we will characterize the numerical Grothendieck group of a Calabi--Yau 3-fold in terms of a deformation invariant lattice described using the Picard group. First, using existence results and the smoothness and properness properties of the relative Picard scheme in a family of Calabi--Yau 3-folds, one proves that the Picard groups form a local system. In fact it is a local system with finite monodromy, so it can be trivialized after passing to a finite \'etale cover of the base scheme, as formulated in the analogue of \cite[Thm. 4.21]{JoSo}, which studies the monodromy of the Picard scheme instead of the numerical Grothendieck group in a family. Then Theorem \ref{definv}, a substitute for \cite[Thm. 4.19]{JoSo} which does not need the integral Hodge conjecture result of Voisin \cite{Vois} for Calabi--Yau 3-folds over ${\mathbin{\mathbb C}}$ and which is valid over ${\mathbin{\mathbb K}},$ characterizes the numerical Grothendieck group of a Calabi--Yau 3-fold in terms of a globally constant lattice described using the Picard scheme:
\begin{thm} Let\/ $X$ be a Calabi--Yau $3$-fold over ${\mathbin{\mathbb K}}$ with\/ $H^1({\mathbin{\cal O}}_X)\!=\!0$. Define\nomenclature[1l]{$\Lambda_X$}{lattice associated to a Calabi--Yau 3-fold $X$} \begin{equation*} \Lambda_X=\textstyle\bigl\{ (\lambda_0,\lambda_1,\lambda_2,\lambda_3) \textrm{ where } \lambda_0,\lambda_3\in{\mathbin{\mathbb Q}}, \; \lambda_1\in\mathop{\rm Pic}(X)\otimes_{{\mathbin{\mathbb Z}}}{\mathbin{\mathbb Q}}, \; \lambda_2\in \mathop{\rm Hom}\nolimits(\mathop{\rm Pic}(X),{\mathbin{\mathbb Q}}) \textrm{ such that } \end{equation*} \begin{equation*} \lambda_0\in {\mathbin{\mathbb Z}},\;\> \lambda_1\in \mathop{\rm Pic}(X)/ {\textrm{torsion}}, \;
\lambda_2-{\ts\frac{1}{2}}\lambda_1^2\in \mathop{\rm Hom}\nolimits(\mathop{\rm Pic}(X),{\mathbin{\mathbb Z}}),\;\> \lambda_3+\textstyle\frac{1}{12}\lambda_1 c_2(TX)\in {\mathbin{\mathbb Z}}\bigr\}, \end{equation*}
where $\lambda_1^2$ is defined as the map $\alpha\in\mathop{\rm Pic}(X)\mapsto c_1(\lambda_1)\cdot c_1(\lambda_1)\cdot c_1(\alpha)\in A^3(X)_{{\mathbin{\mathbb Q}}}\cong {\mathbin{\mathbb Q}},$ and $\frac{1}{12}\lambda_1 c_2(TX)$ is defined as $\frac{1}{12}c_1(\lambda_1)\cdot c_2(TX)\in A^3(X)_{{\mathbin{\mathbb Q}}}\cong{\mathbin{\mathbb Q}}.$ Then\/ for any family of Calabi--Yau 3-folds $\pi : {\cal X} \rightarrow S$ over a connected base $S$ with $X=\pi^{-1}(s_0),$ the lattices $\Lambda_{X_s}$ form a local system of abelian groups over $S$ with fibre $\Lambda_X$. Furthermore, the monodromy of this system lies in a finite subgroup of $\mathop{\rm Aut}(\Lambda_X)$, so after passing to an \'etale cover $\tilde S\rightarrow S$ of $S$, we can take the local system to be trivial, and coherently identify $\Lambda_{X_{\tilde s}}\cong \Lambda_X$ for all $\tilde s\in \tilde S$. Finally, the Chern character gives an injective morphism $\mathop{\rm ch}\nolimits:K^{\rm num}(\mathop{\rm coh}\nolimits(X))\!\hookrightarrow\!\Lambda_X$. \label{definv} \end{thm}
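As an illustration (not part of the theorem), consider a quintic $3$-fold $X\subset\mathbb{P}^4$, for which $\mathop{\rm Pic}(X)={\mathbin{\mathbb Z}} H$ with $H^3=5$ and $c_2(TX)\cdot H=50$. Writing $\lambda_1=aH$ with $a\in{\mathbin{\mathbb Z}}$, and evaluating the $\mathop{\rm Hom}\nolimits(\mathop{\rm Pic}(X),{\mathbin{\mathbb Q}})$ conditions on the generator $H$ (taking $\lambda_1^2(H)=a^2H^3=5a^2$), the integrality conditions defining $\Lambda_X$ become:

```latex
\lambda_0\in{\mathbin{\mathbb Z}},\qquad
\lambda_2(H)-{\ts\frac{1}{2}}\,\lambda_1^2(H)
  =\lambda_2(H)-{\ts\frac{5}{2}}\,a^2\in{\mathbin{\mathbb Z}},\qquad
\lambda_3+{\ts\frac{1}{12}}\,c_1(\lambda_1)\cdot c_2(TX)
  =\lambda_3+{\ts\frac{25}{6}}\,a\in{\mathbin{\mathbb Z}}.
```

In particular $\lambda_2(H)\in{\ts\frac{1}{2}}{\mathbin{\mathbb Z}}$ and $\lambda_3\in{\ts\frac{1}{6}}{\mathbin{\mathbb Z}}$, reflecting the fact that the components $\mathop{\rm ch}\nolimits_2,\mathop{\rm ch}\nolimits_3$ of Chern characters of coherent sheaves need not be integral.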
Following \cite{JoSo}, this yields
\begin{thm} The generalized Donaldson--Thomas invariants $\bar{DT}{}^\alpha(\tau)$ over ${\mathbin{\mathbb K}}$ for $\alpha\in\Lambda_X$ are unchanged under deformations of the underlying Calabi--Yau 3-fold $X$, by which we mean the following: let ${\mathbin{\mathfrak X}}\stackrel{\varphi}{\longrightarrow}T$ be a smooth projective morphism of algebraic ${\mathbin{\mathbb K}}$-varieties ${\mathbin{\mathfrak X}},T$, with $T$ connected. Let ${\mathbin{\cal O}}_{\mathbin{\mathfrak X}}(1)$ be a relative very ample line bundle for ${\mathbin{\mathfrak X}}\stackrel{\varphi}{\longrightarrow}T$. For each $t\in T({\mathbin{\mathbb K}})$, write $X_t$ for the fibre ${\mathbin{\mathfrak X}}\times_{\varphi,T,t}\mathop{\rm Spec}\nolimits{\mathbin{\mathbb K}}$ of $\varphi$ over $t$, and ${\mathbin{\cal O}}_{X_t}(1)$ for ${\mathbin{\cal O}}_{\mathbin{\mathfrak X}}(1)\vert_{X_t}$. Suppose that $X_t$ is a smooth Calabi--Yau 3-fold over ${\mathbin{\mathbb K}}$ for all $t\in T({\mathbin{\mathbb K}})$, with $H^1({\mathbin{\cal O}}_{X_t})= 0$. Then the generalized Donaldson--Thomas invariants $\bar{DT}{}^\alpha(\tau)_t$ are independent of $t\in T({\mathbin{\mathbb K}}).$ \label{defthm} \end{thm}
More precisely, the isomorphism $\Lambda_{X_t} \cong \Lambda_X$ is canonical up to the action of a finite group $\Gamma,$ the monodromy on $T,$ and the $\bar{DT}{}^\alpha(\tau)_t$ are unchanged by the action of $\Gamma$ on $\alpha,$ so whichever identification $\Lambda_{X_t}\cong\Lambda_X$ is chosen, it remains true that $\bar{DT}{}^\alpha(\tau)_t$ is independent of $t.$
Now, recall that in \cite{JoSo} Joyce and Song used the assumption that the base field is the field of complex numbers ${\mathbin{\mathbb K}}={\mathbin{\mathbb C}}$
for the Calabi--Yau 3-fold $X$ in three main ways:\index{field ${\mathbin{\mathbb K}}$|(} \index{field ${\mathbin{\mathbb K}}$!algebraically closed} \begin{itemize}\index{gauge theory}\index{Behrend function!Behrend identities} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \item[\bf(a)] Theorem \ref{dt5thm1} in \S\ref{dt4} is proved using gauge theory and transcendental complex analytic methods, and works only over ${\mathbin{\mathbb K}}={\mathbin{\mathbb C}}$. It is used to prove the Behrend function identities \eq{dt6eq1}--\eq{dt6eq2}, which are vital for many of their results, including the wall crossing formula for the $\bar{DT}{}^\alpha(\tau)$, and the relation between $PI^{\alpha,n}(\tau')$ and $\bar{DT}{}^\alpha(\tau)$. \item[\bf(b)] In \cite[\S 4.5]{JoSo}, when ${\mathbin{\mathbb K}}={\mathbin{\mathbb C}}$ the Chern character\index{Chern character} embeds $K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ in $H^{\rm even}(X;{\mathbin{\mathbb Q}})$, and they use this to show $K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ is unchanged under deformations of $X$. This is needed even for the statements that $\bar{DT}{}^\alpha(\tau)$ and $PI^{\alpha,n}(\tau')$ for $\alpha\in K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ are invariant under deformations of $X$ to make sense. \item[\bf(c)] Their notion of `compactly embeddable' noncompact Calabi--Yau 3-folds \index{compactly embeddable} in \cite[\S 6.7]{JoSo} is complex analytic and does not make sense for general~${\mathbin{\mathbb K}}$. This constrains the noncompact Calabi--Yau 3-folds for which they can define generalized Donaldson--Thomas invariants. \end{itemize}
Now Theorems \ref{dt5thm2} and \ref{dt6thm1.1} extend the results in {\bf(a)} to an algebraically closed field ${\mathbin{\mathbb K}}$ of characteristic zero. As noted in \cite{Joyc.1}, constructible functions\index{constructible function!in positive characteristic} methods fail for ${\mathbin{\mathbb K}}$ of positive characteristic.\index{field ${\mathbin{\mathbb K}}$!positive characteristic} Because of this, the alternative descriptions \eq{dt3eq5} and \eq{dt4eq11} of $DT^\alpha(\tau)$ and $PI^{\alpha,n}(\tau')$ as weighted Euler characteristics, and the definition of $\bar{DT}{}^\alpha(\tau)$ in \S\ref{dt4}, cannot work in positive characteristic, so working over an algebraically closed field of characteristic zero is about as general as is reasonable.
Point {\bf(a)} above also has consequences for {\bf(c)}, since Joyce and Song need the notion of `compactly embeddable' only because their complex analytic proof of \eq{dt6eq1}--\eq{dt6eq2} requires $X$ to be compact. Unfortunately the algebraic version of \eq{dt6eq1}--\eq{dt6eq2} given in Theorem \ref{dt6thm1.1} uses results from derived algebraic geometry, and the author does not know whether these also apply to compactly supported sheaves
\index{coherent sheaf!compactly supported} on a noncompact~$X$.\index{field ${\mathbin{\mathbb K}}$|)} \index{Calabi--Yau 3-fold!noncompact} We can prove a version of this under some technical assumptions, as stated in \S\ref{dt7}. Observe also that in the noncompact case one cannot expect the deformation invariance property, except in particular cases in which the moduli space is proper. The extension of {\bf(b)} to ${\mathbin{\mathbb K}}$ is given in \S\ref{def}, which yields Theorem \ref{defthm}; this makes it possible to extend \cite[Cor. 5.28]{JoSo}, on the deformation invariance of the generalized Donaldson--Thomas invariants in the compact case, to algebraically closed fields ${\mathbin{\mathbb K}}$ of characteristic zero. This proves our main theorem:
\begin{thm}The theory of generalized Donaldson--Thomas invariants defined in \cite{JoSo} is valid over algebraically closed fields of characteristic zero. \label{mainthm} \end{thm}
We will prove Theorems \ref{dt5thm2}, \ref{dt6thm1.1} and \ref{definv} in \S\ref{localdes}, \S\ref{dt.1} and \S\ref{def} respectively.
\subsection{Local description of the Donaldson--Thomas moduli space} \label{localdes}
Let us fix a moduli stack ${\mathbin{\mathfrak M}}$ which is locally a global quotient. In particular, ${\mathbin{\mathfrak M}}$ can be the moduli stack of coherent sheaves on a Calabi--Yau 3-fold $X$, so that the theory developed in \S\ref{dcr} and \S\ref{ourpapers} applies.
The first step in proving Theorem \ref{dt5thm2} is to show the existence of a quasiprojective ${\mathbin{\mathbb K}}$-scheme $S,$ an action of\/ $G$ on $S,$ a point\/ $s\in S({\mathbin{\mathbb K}})$ fixed by $G,$ and a $1$-morphism of Artin ${\mathbin{\mathbb K}}$-stacks $\xi:[S/G]\rightarrow{\mathbin{\mathfrak M}},$ which is smooth of relative dimension $\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E)-\mathop{\rm dim}\nolimits G,$ \index{Artin stack}\index{quotient stack}\index{stabilizer group} where $[S/G]$ is the quotient stack, such that\/ $\xi(s\,G)=[E],$ the induced morphism on stabilizer groups $\xi_*:\mathop{\rm Iso}\nolimits_{[S/G]}(s\,G)\rightarrow\mathop{\rm Iso}\nolimits_{{\mathbin{\mathfrak M}}}([E])$ is the natural morphism $G\hookrightarrow\mathop{\rm Aut}(E)\cong\mathop{\rm Iso}\nolimits_{{\mathbin{\mathfrak M}}}([E]),$ and\/ ${\rm d}\xi\vert_{s\,G}:T_sS\cong T_{s\,G} [S/G]\rightarrow T_{[E]}{\mathbin{\mathfrak M}}\cong \mathop{\rm Ext}\nolimits^1(E,E)$ is an isomorphism.
As ${\mathbin{\mathfrak M}}$ is locally a global quotient, say ${\mathbin{\mathfrak M}}$ is locally $[Q/H]$ with $H=\mathop{\rm GL}(n,{\mathbin{\mathbb K}})$ and $Q$ a ${\mathbin{\mathbb K}}$-scheme with an $H$-action, so that the projection $[Q/H]\rightarrow{\mathbin{\mathfrak M}}$ is a 1-isomorphism with an open ${\mathbin{\mathbb K}}$-substack ${\mathfrak Q}$ of ${\mathbin{\mathfrak M}}$. This $1$-isomorphism identifies the stabilizer groups $\mathop{\rm Iso}\nolimits_{{\mathbin{\mathfrak M}}}([E])=\mathop{\rm Aut}(E)$ and\/ $\mathop{\rm Iso}\nolimits_{[Q/H]}(sH)=\mathop{\rm Stab}\nolimits_H(s),$ and the Zariski tangent spaces $T_{[E]}{\mathbin{\mathfrak M}}\cong\mathop{\rm Ext}\nolimits^1(E,E)$ and\/ $T_{sH}[Q/H]\cong T_sQ/T_s(sH),$ so one has natural isomorphisms $\mathop{\rm Aut}(E)\cong\mathop{\rm Stab}\nolimits_H(s)$ and\/~$\mathop{\rm Ext}\nolimits^1(E,E)\cong T_sQ/T_s(sH)$, and $G$ is identified with a subgroup of $H.$
To obtain the 1-morphism with the required properties, following \cite[\S 9.3]{JoSo} and Luna's Etale Slice Theorem~\cite[\S III]{Luna}, we obtain an atlas $S$ as a $G$-invariant, locally closed\/ ${\mathbin{\mathbb K}}$-subscheme of $Q$ with\/ $s\in S({\mathbin{\mathbb K}}),$ such that\/ $T_sQ=T_sS\oplus T_s(sH),$ and the morphism $\mu: S\times H\rightarrow Q$ induced by the inclusion $S\hookrightarrow Q$ and the $H$-action on $Q$ is smooth of relative dimension $\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E).$ Here $s\in Q({\mathbin{\mathbb K}})$ projects to the point\/ $sH$ in ${\mathfrak Q}({\mathbin{\mathbb K}})$ identified with\/ $[E]\in{\mathbin{\mathfrak M}}({\mathbin{\mathbb K}})$ under the $1$-isomorphism\/ ${\mathfrak Q}\cong [Q/H]$, and $G$, a ${\mathbin{\mathbb K}}$-subgroup of the ${\mathbin{\mathbb K}}$-group $H$, is as in the statement of Theorem \ref{dt5thm2}, that is, a maximal torus in $\mathop{\rm Aut}(E).$ Since $S$ is invariant under the ${\mathbin{\mathbb K}}$-subgroup $G$ of the ${\mathbin{\mathbb K}}$-group $H$ acting on $Q$, the inclusion $i:S\hookrightarrow Q$ induces a representable 1-morphism of quotient stacks $i_*:[S/G]\rightarrow [Q/H]$. In \cite{JoSo}, Joyce and Song showed that $i_*$ is smooth of relative dimension $\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E)-\mathop{\rm dim}\nolimits G$. Combining the 1-morphism $i_*:[S/G]\rightarrow [Q/H]$, the 1-isomorphism ${\mathfrak Q}\cong [Q/H]$, and the open inclusion ${\mathfrak Q}\hookrightarrow{\mathbin{\mathfrak M}}$ yields a 1-morphism $\xi:[S/G]\rightarrow{\mathbin{\mathfrak M}}$, as required for Theorem \ref{dt5thm2}. This $\xi$ is smooth of relative dimension $\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E)-\mathop{\rm dim}\nolimits G$, as $i_*$ is.
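The relative dimension of $i_*$ can be recovered by the following dimension count (a sketch, using that $\mu:S\times H\rightarrow Q$ is smooth of relative dimension $\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E)$, and that a quotient stack satisfies $\mathop{\rm dim}\nolimits[S/G]=\mathop{\rm dim}\nolimits S-\mathop{\rm dim}\nolimits G$):

```latex
% \mu smooth of relative dimension dim Aut(E) gives
\mathop{\rm dim}\nolimits S+\mathop{\rm dim}\nolimits H-\mathop{\rm dim}\nolimits Q
   =\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E),
% so the relative dimension of i_* : [S/G] --> [Q/H] is
(\mathop{\rm dim}\nolimits S-\mathop{\rm dim}\nolimits G)
   -(\mathop{\rm dim}\nolimits Q-\mathop{\rm dim}\nolimits H)
   =\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E)-\mathop{\rm dim}\nolimits G.
```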
If $\mathop{\rm Aut}(E)$ is reductive, so that $G=\mathop{\rm Aut}(E)$, then $\xi$ is smooth of relative dimension $0$, that is, $\xi$ is \'etale.\index{etale morphism@\'etale morphism} The conditions that $\xi(s\,G)=[E]$ and that $\xi_*:\mathop{\rm Iso}\nolimits_{[S/G]}(s\,G)\rightarrow\mathop{\rm Iso}\nolimits_{{\mathbin{\mathfrak M}}}([E])$ is the natural $G\hookrightarrow\mathop{\rm Aut}(E) \cong\mathop{\rm Iso}\nolimits_{{\mathbin{\mathfrak M}}}([E])$ are immediate from the construction. That ${\rm d}\xi\vert_{s\,G}:T_sS\cong T_{s\,G} [S/G]\rightarrow T_{[E]}{\mathbin{\mathfrak M}}\cong \mathop{\rm Ext}\nolimits^1(E,E)$ is an isomorphism follows from $T_{[E]}{\mathbin{\mathfrak M}}\cong T_{sH}[Q/H]\cong T_sQ/T_s(sH)$ and
$T_sQ=T_sS\oplus T_s(sH)$.\index{Artin stack!atlas|)}\index{moduli stack!atlas|)}
In conclusion, we can summarize as follows: given a point $[E]\in {\mathbin{\mathfrak M}}({\mathbin{\mathbb K}})$, that is, an equivalence class of (complexes of) coherent sheaves, we denote by $G$ a maximal torus in $\mathop{\rm Aut}(E).$ As ${\mathbin{\mathfrak M}}$ is locally a global quotient, there exists an atlas $S$, which is a scheme over ${\mathbin{\mathbb K}}$, and a smooth morphism $\pi : S \rightarrow {\mathbin{\mathfrak M}}$ of relative dimension $\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E).$ If $x \in S$ is the point corresponding to $[E]\in {\mathbin{\mathfrak M}}({\mathbin{\mathbb K}}),$ then this relative dimension is {\it minimal\/} near $[E],$ in the sense that $T_x S = \mathop{\rm Ext}\nolimits^1(E,E).$ Moreover, the atlas $S$ is endowed with a $G$-action, so that $\pi$ descends to a morphism $[S/G] \rightarrow {\mathbin{\mathfrak M}}.$
Note next that the maximal torus $G$ acts on $S$ preserving $s$ and fixing $x.$ By replacing $S$ by a $G$-equivariant \'etale open neighbourhood $S'$ of $s,$ we can suppose $S$ is affine. Then, from the material in \S\ref{dcr} and \S\ref{ourpapers}, we deduce that the atlas $S'$ in the sense of Theorems \ref{sa2cor1} and \ref{sa3thm6} for the moduli stack ${\mathbin{\mathfrak M}}$ carries a d-critical locus structure $(S',s_{S'})$ which is $\mathop{\rm GL}(n,{\mathbin{\mathbb K}})$-equivariant in the sense of \S\ref{eqdcr}.
Using Proposition \ref{dc2prop14}, there exists a $G$-invariant critical chart $(R,U,f,i)$ in the sense of \S\ref{dcr} for $(S,s)$ with $x\in R,$ and with $\mathop{\rm dim}\nolimits U$ minimal, so that $T_{i(x)}U = T_x R = \mathop{\rm Ext}\nolimits^1(E,E)$.
Making $U$ smaller if necessary, we can choose $G$-equivariant \'etale coordinates $U \rightarrow {\mathbin{\mathbb A}}^n = \mathop{\rm Ext}\nolimits^1(E,E)$ near $i(x),$ sending $i(x)$ to $0,$ and with $T_{i(x)}U = \mathop{\rm Ext}\nolimits^1(E,E)$ the given identification. Then we can regard $U \rightarrow \mathop{\rm Ext}\nolimits^1(E,E)$ as a $G$-equivariant \'etale open neighbourhood of $0$ in $\mathop{\rm Ext}\nolimits^1(E,E),$ which concludes the proof of Theorem \ref{dt5thm2}.
\subsection{Behrend function identities} \label{dt.1}
Now we are ready to prove Theorem \ref{dt6thm1.1}. Let $X$ be a Calabi--Yau $3$-fold over an algebraically closed field ${\mathbin{\mathbb K}}$ of characteristic zero, ${\mathbin{\mathfrak M}}$ the moduli stack of coherent sheaves on $X$, and $E_1,E_2$ be coherent sheaves on $X$. Set $E=E_1\oplus E_2$. Using the splitting \e \mathop{\rm Ext}\nolimits^1(E,E)\!=\!\mathop{\rm Ext}\nolimits^1(E_1,E_1)\!\oplus\!\mathop{\rm Ext}\nolimits^1(E_2,E_2)\!\oplus\!\mathop{\rm Ext}\nolimits^1(E_1,E_2) \!\oplus\!\mathop{\rm Ext}\nolimits^1(E_2,E_1), \label{dt6eq3} \e write elements of $\mathop{\rm Ext}\nolimits^1(E,E)$ as $(\epsilon_{11},\epsilon_{22},\epsilon_{12},\epsilon_{21})$ with $\epsilon_{ij}\in\mathop{\rm Ext}\nolimits^1(E_i,E_j)$. For simplicity, we will write $e_{ij}=\mathop{\rm dim}\nolimits\mathop{\rm Ext}\nolimits^1(E_i,E_j).$ Choose a maximal torus \index{ reductive group!maximal} $G$ of $\mathop{\rm Aut}(E)$ which contains the subgroup
$T=\bigl\{{\mathop{\rm id}\nolimits}_{E_1}+\lambda{\mathop{\rm id}\nolimits}_{E_2}: \lambda\in{\mathbin{\mathbb G}}_m\bigr\}$, which acts on $\mathop{\rm Ext}\nolimits^1(E,E)$ by \e \lambda:(\epsilon_{11},\epsilon_{22},\epsilon_{12},\epsilon_{21}) \mapsto(\epsilon_{11},\epsilon_{22},\lambda^{-1}\epsilon_{12},\lambda\epsilon_{21}). \label{dt6eq4} \e
Apply Theorem \ref{dt5thm2} with these $E$ and $G$. This gives an \'etale morphism $u:U\rightarrow \mathop{\rm Ext}\nolimits^1(E,E)$ with $U$ a smooth affine ${\mathbin{\mathbb K}}$-scheme, and $u(p)=0,$ for $p\in U({\mathbin{\mathbb K}}),$ a $G$-invariant regular function $f:U\rightarrow {\mathbin{\mathbb A}}^1_{\mathbin{\mathbb K}}$ on $U$ with $f\vert_p=\partial f\vert_p=0,$ an open neighbourhood\/ $V$ of\/ $s$ in $S,$ and a $1$-morphism $\xi:\mathop{\rm Crit}(f) \rightarrow{\mathbin{\mathfrak M}}$ smooth of relative dimension $\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E),$ with\/ $\xi(p)=[E]\in{\mathbin{\mathfrak M}}({\mathbin{\mathbb K}})$ and\/ ${\rm d}\xi\vert_p:T_p(\mathop{\rm Crit}(f))=\mathop{\rm Ext}\nolimits^1(E,E)\rightarrow T_{[E]}{\mathbin{\mathfrak M}}$ the natural isomorphism. Then the Behrend function $\nu_{\mathbin{\mathfrak M}}$ at $[E]=[E_1\oplus E_2]$ satisfies \index{moduli stack!local structure} \e \nu_{{\mathbin{\mathfrak M}}}(E_1\oplus E_2)=(-1)^{\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E)} \nu_{\mathop{\rm Crit}(f)}(0), \label{dt6eq5} \e where one uses that $\xi$ is smooth of relative dimension $\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E),$ and Theorem \ref{dt3thm3} to say that $$\nu_{\mathop{\rm Crit}(f)}=(-1)^{\mathop{\rm dim}\nolimits(\mathop{\rm Aut}(E))}\xi^*(\nu_{{\mathbin{\mathfrak M}}}).$$
On the other hand, the last part of the proof of \eq{dt6eq1.1} in \cite[Section 10.1]{JoSo} uses algebraic methods and gives \e \nu_{\mathbin{\mathfrak M}}(E_1)\nu_{\mathbin{\mathfrak M}}(E_2)=\nu_{{\mathbin{\mathfrak M}}\times{\mathbin{\mathfrak M}}}(E_1,E_2)=(-1)^{\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E_1)+\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E_2)}\nu_{\mathop{\rm Crit}(f^G)}(0), \label{dt6eq6} \e where $\nu_{\mathop{\rm Crit}(f^G)}(0)=\nu_{\mathop{\rm Crit}(f)^G}(0)=\nu_{\mathop{\rm Crit}(f\vert_{U\cap\mathop{\rm Ext}\nolimits^1(E,E)^G})}(0),$ with $U$ as in Theorem \ref{dt5thm2} and $\mathop{\rm Ext}\nolimits^1(E,E)^G$ the fixed point locus of $\mathop{\rm Ext}\nolimits^1(E,E)$ for the $G$-action. Thus, to establish identity \eq{dt6eq1.1}, it remains to prove \e \nu_{\mathop{\rm Crit}(f)}(0)=(-1)^{\mathop{\rm dim}\nolimits\mathop{\rm Ext}\nolimits^1(E_1,E_2)+\mathop{\rm dim}\nolimits\mathop{\rm Ext}\nolimits^1(E_2,E_1)}\nu_{\mathop{\rm Crit}(f^G)}(0). \label{dt6eq7} \e
This is a generalization of a result in \cite{BeFa2} over ${\mathbin{\mathbb C}}$ in the case of an isolated ${\mathbin{\mathbb C}}^*$-fixed point. Combining equations \eq{dt6eq5}, \eq{dt6eq6} and \eq{dt6eq7} and sorting out the signs as in \cite[Section 10.1]{JoSo} proves equation \eq{dt6eq1.1}. Equation \eq{dt6eq7} will be crucial also for the proof of the second Behrend identity \eq{dt6eq2.1}.
Let us start by recalling an easy result similar to \cite[Prop. 10.1]{JoSo}, but now in the \'etale topology. Let $u:U \rightarrow \mathop{\rm Ext}\nolimits^1(E,E)$ be the \'etale map as in \S\ref{localdes}, and $p\in U$ such that $u(p)=0.$ We will regard points $(0,0,\epsilon_{12},0),(0,0,0,\epsilon_{21})\in \mathop{\rm Ext}\nolimits^1(E,E)$ essentially as points of $U$. This is because there is a unique lift $\alpha(\epsilon_{12})$ of $(0,0,\epsilon_{12},0)\in \mathop{\rm Ext}\nolimits^1(E,E)$ to $U$, such that $u(\alpha(\epsilon_{12}))=(0,0,\epsilon_{12},0)$ and $\lim_{\lambda \rightarrow 0} \lambda . \alpha(\epsilon_{12}) =p,$ using that $\lim_{\lambda\rightarrow 0} (0,0,\lambda^{-1}\epsilon_{12},0) = (0,0,0,0),$ and similarly for $(0,0,0,\epsilon_{21}).$ So we can state the following result, for the proof of which we cite \cite[Prop. 10.1]{JoSo}, with the appropriate obvious modifications, working in the \'etale topology.
\begin{prop} Let\/ $\epsilon_{12}\in\mathop{\rm Ext}\nolimits^1(E_1,E_2)$ and\/ $\epsilon_{21}\in\mathop{\rm Ext}\nolimits^1(E_2,E_1)$. Then \begin{itemize} \setlength{\itemsep}{0pt} \setlength{\parsep}{0pt} \item[{\bf(i)}] $(0,0,\epsilon_{12},0),(0,0,0,\epsilon_{21})\in\mathop{\rm Crit}(f) \subseteq U\subseteq\mathop{\rm Ext}\nolimits^1(E,E),$ and\/ $(0,0,\epsilon_{12},0),\allowbreak (0,\allowbreak 0,\allowbreak 0,\allowbreak\epsilon_{21})\in V\subseteq S({\mathbin{\mathbb K}})\subseteq\mathop{\rm Ext}\nolimits^1(E,E);$ \item[{\bf(ii)}] $\xi$ maps $(0,0,\epsilon_{12},0) \mapsto (0,0,\epsilon_{12},0)$ and\/ $(0,0,0,\epsilon_{21})\mapsto(0,0,0,\epsilon_{21});$ and \item[{\bf(iii)}] the induced morphism on closed points $[S/\mathop{\rm Aut}(E)] ({\mathbin{\mathbb K}})\rightarrow{\mathbin{\mathfrak M}}({\mathbin{\mathbb K}})$ maps $[(0,\allowbreak 0,\allowbreak 0,\allowbreak\epsilon_{21})]\mapsto[F]$ and\/ $[(0,0,\epsilon_{12},0)]\mapsto [F'],$ where the exact sequences $0\rightarrow E_1\rightarrow F\rightarrow E_2\rightarrow 0$ and\/ $0\rightarrow E_2\rightarrow F'\rightarrow E_1\rightarrow 0$ in $\mathop{\rm coh}\nolimits(X)$ correspond to $\epsilon_{21}\in\mathop{\rm Ext}\nolimits^1(E_2,E_1)$ and\/ $\epsilon_{12}\in\mathop{\rm Ext}\nolimits^1(E_1,E_2),$ respectively. \end{itemize} \label{dt10prop2} \end{prop}
Now use the idea in \cite[\S10.2]{JoSo}. Set $U'=\bigl\{(\epsilon_{11},\epsilon_{22},\epsilon_{12},\epsilon_{21})\in U:\epsilon_{21}\ne 0\bigr\}$, an open set in $U$, and write $V'$ for the closed subscheme of points $(\epsilon_{11},\epsilon_{22},\epsilon_{12},\epsilon_{21})\in U'$ with $\epsilon_{12}=0$. Let $\tilde U'$ be the blowup of $U'$ along $V'$, with projection $\pi':\tilde U'\rightarrow U'$. Points of $\tilde U'$ may be written $(\epsilon_{11},\epsilon_{22},[\epsilon_{12}],\lambda\epsilon_{12},\epsilon_{21})$, where $[\epsilon_{12}]\in\mathbb{P} (\mathop{\rm Ext}\nolimits^1(E_1,E_2))$, and $\lambda\in{\mathbin{\mathbb K}}$, and $\epsilon_{21}\ne 0$. Write $f'=f\vert_{U'}$ and $\tilde f'=f'\circ\pi'$. Then applying Theorem \ref{blowup} to $U',V',f',\tilde U',\pi',\tilde f'$ at the point $(0,0,0,\epsilon_{21})\in U'$ gives \e \begin{split} \nu_{\mathop{\rm Crit}(f)}(0,0,0,\epsilon_{21}) \quad=\displaystyle\int\limits_{\small{[\epsilon_{12}] \in\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_1,E_2))}}\!\!\!\!\!\!\nu_{\mathop{\rm Crit}(\tilde f')}(0,0,[\epsilon_{12}],0,\epsilon_{21})\,{\rm d}\chi + (-1)^{e_{12}}\bigl(1- e_{12}\bigr) \nu_{\mathop{\rm Crit}(f\vert_{V'})}(0,0,0,\epsilon_{21}). \label{dt6eq26} \end{split} \e Here $\nu_{\mathop{\rm Crit}(f)}(0,0,0,\epsilon_{21})$ is independent of the choice of $\epsilon_{21}$ representing the point $[\epsilon_{21}]\in\mathbb{P} (\mathop{\rm Ext}\nolimits^1(E_2,E_1))$, and is a constructible function\index{constructible function} of $[\epsilon_{21}]$, so the integrals in \eq{dt6eq26} are well-defined. Note that $\nu_{\mathop{\rm Crit}(f)}$ and the other Behrend functions in the sequel are nonzero only on the critical loci of the corresponding functions, so here and in the sequel the integrals over the whole of $\mathbb{P}(\mathop{\rm Ext}\nolimits^1(\ldots))$ are really just over the points that lie in these critical loci. We adopt this convention for the whole section.
Similarly consider the analogous situation exchanging the role of $\epsilon_{12}$ and $\epsilon_{21}.$ Set $U''=\bigl\{(\epsilon_{11},\epsilon_{22},\epsilon_{12},\epsilon_{21})\in U :\epsilon_{12}\ne 0\bigr\}$, an open set in $U$, and write $V'' =\bigl\{(\epsilon_{11},\epsilon_{22},\epsilon_{12},\epsilon_{21})\in U'' : \epsilon_{21}=0\bigr\}$. Let $\tilde U''$ be the blowup of $U''$ along $V''$, with projection $\pi'':\tilde U''\rightarrow U''$. Points of $\tilde U''$ may be written $(\epsilon_{11},\epsilon_{22},\epsilon_{12},[\epsilon_{21}],\lambda\epsilon_{21})$, where $[\epsilon_{21}]\in\mathbb{P} (\mathop{\rm Ext}\nolimits^1(E_2,E_1))$, and $\lambda\in{\mathbin{\mathbb K}}$, and $\epsilon_{12}\ne 0$. Write $f''=f\vert_{U''}$ and $\tilde f''=f''\circ\pi''$. Similarly to the previous situation, we can apply Theorem \ref{blowup} to $U'',V'',f'',\tilde U'',\pi'',\tilde f''$ at the point $(0,0,\epsilon_{12},0)\in U''$ which gives \e \begin{split} \nu_{\mathop{\rm Crit}(f)}(0,0,\epsilon_{12},0)\quad =\displaystyle\int\limits_{\small{[\epsilon_{21}] \in\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1))}}\!\!\!\!\!\!\nu_{\mathop{\rm Crit}(\tilde f'')}(0,0,\epsilon_{12},0,[\epsilon_{21}])\,{\rm d}\chi + (-1)^{e_{21}}\bigl(1- e_{21}\bigr) \nu_{\mathop{\rm Crit}(f\vert_{V''})}(0,0,\epsilon_{12},0). \label{dt6eq27} \end{split} \e Let $L_{12}\rightarrow\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_1,E_2))$ and $L_{21}\rightarrow\mathbb{P} (\mathop{\rm Ext}\nolimits^1(E_2,E_1))$ be the tautological line bundles, so that the fibre of $L_{12}$ over a point $[\epsilon_{12}]$ in $\mathbb{P} (\mathop{\rm Ext}\nolimits^1(E_1,E_2))$ is the 1-dimensional subspace $\{\lambda\,\epsilon_{12}: \lambda\in{\mathbin{\mathbb K}}\}$ in $\mathop{\rm Ext}\nolimits^1(E_1,E_2)$. 
Consider the fibre product $$ \xymatrix@C=70pt@R=25pt{Z\; \ar[r]^{\textrm{\'etale}\quad\quad\quad\quad\quad\quad\quad\quad\quad} \ar[d] & \mathop{\rm Ext}\nolimits^1(E_1,E_1)\times \mathop{\rm Ext}\nolimits^1(E_2,E_2)\times (L_{12}\oplus L_{21}) \ar[d] \\ U \; \ar[r]^{\textrm{\'etale}\quad} & \mathop{\rm Ext}\nolimits^1(E,E)} $$ where the horizontal maps are \'etale morphisms. Informally, this defines $Z\subseteq\mathop{\rm Ext}\nolimits^1(E_1,E_1)\times \mathop{\rm Ext}\nolimits^1(E_2,E_2)\times (L_{12}\oplus L_{21})$ to be the \'etale open subset of points $\bigl(\epsilon_{11},\epsilon_{22},[\epsilon_{12}],\lambda_{1} \, \epsilon_{12}, [\epsilon_{21}],\lambda_{2} \, \epsilon_{21}\bigr)$ for $\lambda_i \in {\mathbin{\mathbb K}},$ for which $(\epsilon_{11},\epsilon_{22},\lambda_{1}\,\epsilon_{12},\lambda_{2} \, \epsilon_{21})$ lies in $U.$ Observe that $Z$ contains both $\tilde U'$ and $\tilde U'',$ which respectively contain $\mathop{\rm Crit}(\tilde f')$ and $\mathop{\rm Crit}(\tilde f'')$ as subschemes.
Define also an \'etale open set of points $W\subseteq\mathop{\rm Ext}\nolimits^1(E_1,E_1)\times \mathop{\rm Ext}\nolimits^1(E_2,E_2)\times (L_{12}\otimes L_{21})$ fitting into the following cartesian square: $$ \xymatrix@C=70pt@R=25pt{Z\; \ar[r]^{\textrm{\'etale}\quad\quad\quad\quad\quad\quad\quad\quad\quad} \ar[d]_\Pi & \mathop{\rm Ext}\nolimits^1(E_1,E_1) \times \mathop{\rm Ext}\nolimits^1(E_2,E_2)\times (L_{12}\oplus L_{21}) \ar[d]^{\Pi'} \\ W \; \ar[r]^{\textrm{\'etale}\quad\quad\quad\quad\quad\quad\quad\quad\quad} & \mathop{\rm Ext}\nolimits^1(E_1,E_1)\times \mathop{\rm Ext}\nolimits^1(E_2,E_2)\times (L_{12}\otimes L_{21}) } $$ where the line bundle $$L_{12}\otimes L_{21}\rightarrow\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_1,E_2))\times \mathbb{P} (\mathop{\rm Ext}\nolimits^1(E_2,E_1))$$ has fibre over $([\epsilon_{12}],[\epsilon_{21}])$ which is $\{\lambda\,\epsilon_{12}\otimes\epsilon_{21}: \lambda\in{\mathbin{\mathbb K}}\}$. Write points of the total space of $L_{12}\otimes L_{21}$ as $\bigl([\epsilon_{12}],[\epsilon_{21}],\lambda\,\epsilon_{12}\otimes\epsilon_{21}\bigr)$. Informally, $W$ is defined as the open subset of points $\bigl(\epsilon_{11},\epsilon_{22},[\epsilon_{12}], [\epsilon_{21}],\lambda\,\epsilon_{12}\otimes\epsilon_{21}\bigr)$ for which $(\epsilon_{11},\epsilon_{22},\lambda\,\epsilon_{12},\epsilon_{21})$ lies in $U$. Since $U$ is $G$-invariant, this definition is independent of the choice of representatives $\epsilon_{12},\epsilon_{21}$ for $[\epsilon_{12}], [\epsilon_{21}]$, since any other choice would replace $(\epsilon_{11},\epsilon_{22},\lambda\,\epsilon_{12},\epsilon_{21})$ by $(\epsilon_{11},\epsilon_{22}, \lambda\mu\,\epsilon_{12},\mu^{-1}\epsilon_{21})$ for some $\mu\in{\mathbin{\mathbb G}}_m$. 
The map $\Pi: Z\rightarrow W$ is \'etale equivalent to $$\Pi':(\epsilon_{11},\epsilon_{22},[\epsilon_{12}],\lambda_{1} \, \epsilon_{12}, [\epsilon_{21}],\lambda_{2} \, \epsilon_{21})\mapsto(\epsilon_{11},\epsilon_{22},\allowbreak [\epsilon_{12}],\allowbreak[\epsilon_{21}],\lambda_{1}\lambda_{2}\,\epsilon_{12}\otimes\epsilon_{21}),$$ which is a smooth projection of relative dimension $1$ except at the points where $\lambda_{1} =\lambda_{2} =0.$ However it is smooth at $(0 ,\lambda_{2} )$ with $\lambda_{2} \neq 0$ and similarly at $(\lambda_{1},0 )$ with $\lambda_{1} \neq 0;$ that is, the two restrictions of $\Pi$ to $\tilde U'$ and $\tilde U''$ are both smooth of relative dimension $1.$ \index{almost closed $1$-form!invariant}
Here is the crucial point: $\mathop{\rm Crit}(\tilde f')\subset \tilde U'$ and $\mathop{\rm Crit}(\tilde f'')\subset \tilde U''$ are ${\mathbin{\mathbb G}}_m$-invariant subschemes, so there exists a subscheme $Q$ of $W$ such that $\mathop{\rm Crit}(\tilde f')=\Pi^{-1}(Q)\cap\tilde U'$ and $\mathop{\rm Crit}(\tilde f'')=\Pi^{-1}(Q)\cap\tilde U''$ and both $\Pi: \mathop{\rm Crit}(\tilde f')\rightarrow Q$ and $\Pi: \mathop{\rm Crit}(\tilde f'')\rightarrow Q$ are smooth of relative dimension $1.$ Thus Theorem \ref{dt3thm3} yields that $\nu_{\mathop{\rm Crit}(\tilde f')}=-\Pi^*(\nu_Q)$ and $\nu_{\mathop{\rm Crit}(\tilde f'')}=-\Pi^*(\nu_Q)$ and then \e \nu_{\mathop{\rm Crit}(\tilde f')}(0,0,[\epsilon_{12}],0,\epsilon_{21})=-\nu_{Q}(0,0,[\epsilon_{12}],[\epsilon_{21}],0)=\nu_{\mathop{\rm Crit}(\tilde f'')}(0,0,\epsilon_{12},0,[\epsilon_{21}]), \label{dt6eq29} \e where the sign comes from the fact that the map $\Pi$ is smooth of relative dimension $1.$ Moreover observe that \e \nu_{\mathop{\rm Crit}(f\vert_{V'})}(0,0,0,\epsilon_{21})=(-1)^{ e_{21}}\nu_{\mathop{\rm Crit}(f)^G}(0,0,0,0). \label{dt6eq30} \e
This is because the $T$-invariance of $f$ implies that its values on $(\epsilon_{11},\epsilon_{22},0,\epsilon_{21})$ and $(\epsilon_{11},\epsilon_{22},0,0)$ are the same, and the projection $\mathop{\rm Crit}(f\vert_{V'}) \rightarrow\mathop{\rm Crit}(f\vert_{U^T})$ is smooth of relative dimension $e_{21}.$ For the same reason, one has \e \nu_{\mathop{\rm Crit}(f\vert_{V''})}(0,0,\epsilon_{12},0)=(-1)^{e_{12}}\nu_{\mathop{\rm Crit}(f)^G}(0,0,0,0). \label{dt6eq31} \e
Now, substitute equations (\ref{dt6eq29}), (\ref{dt6eq30}) and (\ref{dt6eq31}) into (\ref{dt6eq26}) and (\ref{dt6eq27}). One gets \e \begin{split} \nu_{\mathop{\rm Crit}(f)}(0,0,0,\epsilon_{21})\quad=\quad-\displaystyle\int\limits_{\small{[\epsilon_{12}] \in\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_1,E_2))}}\!\!\!\!\!\!\nu_{Q}(0,0,[\epsilon_{12}],[\epsilon_{21}],0) \,{\rm d}\chi +(-1)^{e_{12}+e_{21}}\bigl(1-e_{12}\bigr) \nu_{\mathop{\rm Crit}(f)^G}(0,0,0,0), \end{split} \label{dt6eq32} \e \e \begin{split} \nu_{\mathop{\rm Crit}(f)}(0,0,\epsilon_{12},0)\quad=\quad-\displaystyle\int\limits_{\small{[\epsilon_{21}] \in\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1))}}\!\!\!\!\!\!\nu_{Q}(0,0,[\epsilon_{12}],[\epsilon_{21}],0) \,{\rm d}\chi +(-1)^{e_{12}+e_{21}}\bigl(1-e_{21}\bigr) \nu_{\mathop{\rm Crit}(f)^G}(0,0,0,0). \end{split} \label{dt6eq33} \e
Finally, integrating \eq{dt6eq32} over $[\epsilon_{21}]\in\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1))$ and \eq{dt6eq33} over $[\epsilon_{12}] \in\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_1,E_2))$ yields
\e \begin{split} \int\limits_{\small{[\epsilon_{21}]\in\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1))}} & \!\!\!\!\!\!\!\!\!\nu_{\mathop{\rm Crit}(f)}(0,0,0,\epsilon_{21}) \,{\rm d}\chi\quad =\quad - \int\limits_{\small{([\epsilon_{12}],[\epsilon_{21}])\in\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_1,E_2))\times \mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1))}} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\nu_{Q}(0,0,[\epsilon_{12}],[\epsilon_{21}],0) \,{\rm d}\chi \\ & +(-1)^{e_{12}+e_{21}}\bigl(1-e_{12}\bigr) e_{21} \nu_{\mathop{\rm Crit}(f)^G}(0), \label{dt6eq34} \end{split} \e \e \begin{split} \int\limits_{\small{[\epsilon_{12}]\in\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_1,E_2))}} & \!\!\!\!\!\!\!\!\!\nu_{\mathop{\rm Crit}(f)}(0,0,\epsilon_{12},0) \,{\rm d}\chi\quad=\quad - \int\limits_{\small{([\epsilon_{12}],[\epsilon_{21}])\in\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_1,E_2))\times \mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1))}} \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\nu_{Q}(0,0,[\epsilon_{12}],[\epsilon_{21}],0) \,{\rm d}\chi \\& +(-1)^{e_{12}+e_{21}}\bigl(1-e_{21}\bigr) e_{12} \nu_{\mathop{\rm Crit}(f)^G}(0), \label{dt6eq35} \end{split} \e
since $\chi\bigl(\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1))\bigr)=e_{21}$ and $\chi\bigl(\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_1,E_2))\bigr)=e_{12}.$ Subtracting \eq{dt6eq35} from \eq{dt6eq34} gives
\e \begin{split} \int\limits_{\small{[\epsilon_{21}]\in\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1))}} & \!\!\!\!\!\!\!\!\!\nu_{\mathop{\rm Crit}(f)}(0,0,0,\epsilon_{21}) \,{\rm d}\chi\quad - \int\limits_{\small{[\epsilon_{12}]\in\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_1,E_2))}} \!\!\!\!\!\!\!\!\!\nu_{\mathop{\rm Crit}(f)}(0,0,\epsilon_{12},0) \,{\rm d}\chi= \\ & (-1)^{e_{12}+e_{21}}\bigl(e_{21}-e_{12}\bigr) \nu_{\mathop{\rm Crit}(f)^G}(0). \end{split} \label{dt6eq25} \e
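Here the Euler characteristics $\chi\bigl(\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1))\bigr)=e_{21}$ and $\chi\bigl(\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_1,E_2))\bigr)=e_{12}$ used above follow from the standard cell decomposition of projective space: since $\chi$ is additive over decompositions into locally closed subschemes and $\chi({\mathbin{\mathbb A}}^k)=1,$ we have $$\chi\bigl(\mathbb{P}^{\,n-1}\bigr)=\chi\Bigl(\,\textstyle\coprod_{k=0}^{n-1}{\mathbin{\mathbb A}}^k\Bigr)=\sum_{k=0}^{n-1}\chi({\mathbin{\mathbb A}}^k)=n,$$ applied with $n=e_{21}$ and $n=e_{12}$ respectively.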
Consider equation \eq{dt6eq25} with $\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1)\oplus{\mathbin{\mathbb K}})$ in place of $\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1)).$ This adds one dimension to $\mathop{\rm Ext}\nolimits^1(E,E).$ Denote by $\tilde{\tilde f}$ the lift of $f$ to $\mathop{\rm Ext}\nolimits^1(E,E)\oplus{\mathbin{\mathbb K}}.$ In this case equation \eq{dt6eq25} becomes \e \begin{split} \int\limits_{\small{[\epsilon_{21}]\in\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1)\oplus{\mathbin{\mathbb K}})}} & \!\!\!\!\!\!\!\!\!\nu_{\mathop{\rm Crit}({\tilde{\tilde f}})}(0,0,0,\epsilon_{21}\oplus\lambda) \,{\rm d}\chi\quad - \int\limits_{\small{[\epsilon_{12}]\in\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_1,E_2))}} \!\!\!\!\!\!\!\!\!\nu_{\mathop{\rm Crit}({\tilde{\tilde f}})}(0,0,\epsilon_{12},0) \,{\rm d}\chi= \\ &(-1)^{1+e_{12}+e_{21}}\bigl(1+e_{21}-e_{12}\bigr) \nu_{\mathop{\rm Crit}({\tilde{\tilde f}})^G}(0). \end{split} \label{dt6eq25.1.1} \e
Now, observe that $\nu_{\mathop{\rm Crit}(f)}=-\nu_{\mathop{\rm Crit}({\tilde{\tilde f}})}$ from Theorem \ref{dt3thm3} and $\nu_{\mathop{\rm Crit}({\tilde{\tilde f}})^G}(0)=\nu_{\mathop{\rm Crit}(f)^G}(0)$ as $(\mathop{\rm Ext}\nolimits^1(E,E)\oplus{\mathbin{\mathbb K}})^G=\mathop{\rm Ext}\nolimits^1(E,E)^G\oplus 0$ and the map $\mathop{\rm Crit}({\tilde{\tilde f}})^G \rightarrow \mathop{\rm Crit}(f)^G$ is \'etale. Thus \e \begin{split} -\!\!\!\!\!\!\!\!\!\int\limits_{\small{[\epsilon_{21}]\in\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1))}} & \!\!\!\!\!\!\!\!\!\nu_{\mathop{\rm Crit}(f)}(0,0,0,\epsilon_{21}) \,{\rm d}\chi \; -\; \nu_{\mathop{\rm Crit}(f)}(0,0,0,0) \quad + \int\limits_{\small{[\epsilon_{12}]\in\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_1,E_2))}} \!\!\!\!\!\!\!\!\!\nu_{\mathop{\rm Crit}(f)}(0,0,\epsilon_{12},0) \,{\rm d}\chi= \\ & (-1)^{1+e_{12}+e_{21}}\bigl(1+e_{21}-e_{12}\bigr) \nu_{\mathop{\rm Crit}(f)^G}(0). \end{split} \label{dt6eq25.1} \e
Here, the term $\nu_{\mathop{\rm Crit}(f)}(0,0,0,0)$ on the left hand side comes from the fact that the ${\mathbin{\mathbb G}}_m$-action on $\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1)\oplus{\mathbin{\mathbb K}})$ fixes $\mathbb{P}(\mathop{\rm Ext}\nolimits^1(E_2,E_1))$ and the point $[0,1],$ while the free orbits of the ${\mathbin{\mathbb G}}_m$-action contribute zero to the weighted Euler characteristic. Then one uses that $\nu_{\mathop{\rm Crit}({\tilde{\tilde f}})}$ evaluated at $[0,1]$ equals $-\nu_{\mathop{\rm Crit}(f)}(0).$ Adding \eq{dt6eq25} and \eq{dt6eq25.1} yields \eq{dt6eq7}, which concludes the proof of identity \eq{dt6eq1.1}.
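Explicitly, when \eq{dt6eq25} and \eq{dt6eq25.1} are added the two pairs of integrals cancel, leaving $$-\nu_{\mathop{\rm Crit}(f)}(0)=(-1)^{e_{12}+e_{21}}\bigl[(e_{21}-e_{12})-(1+e_{21}-e_{12})\bigr]\nu_{\mathop{\rm Crit}(f)^G}(0) =(-1)^{1+e_{12}+e_{21}}\nu_{\mathop{\rm Crit}(f)^G}(0),$$ that is, $\nu_{\mathop{\rm Crit}(f)}(0)=(-1)^{e_{12}+e_{21}}\nu_{\mathop{\rm Crit}(f)^G}(0),$ which is \eq{dt6eq7}.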
The conclusion of the proof of identity \eq{dt6eq2.1} is now easy. Let $0\ne\epsilon_{21}\in\mathop{\rm Ext}\nolimits^1(E_2,E_1)$ correspond to the short exact sequence $0\rightarrow E_1\rightarrow F\rightarrow E_2\rightarrow 0$ in $\mathop{\rm coh}\nolimits(X)$. Then \e \nu_{{\mathbin{\mathfrak M}}}(F)=(-1)^{\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E)}\nu_{\mathop{\rm Crit}(f)}(0,0,0,\epsilon_{21}) \label{dt6eq24} \e using $\xi_*:[(0,0,0,\epsilon_{21})]\mapsto[F]$ from Proposition \ref{dt10prop2}, the fact that $\xi$ is smooth of relative dimension $\mathop{\rm dim}\nolimits(\mathop{\rm Aut}(E)),$ and the properties of Behrend functions in Theorem \ref{dt3thm3}. Substituting \eq{dt6eq24} and its analogue for $F'$ in place of $F$ into \eq{dt6eq2.1}, using equation \eq{dt6eq5} and identity \eq{dt6eq7} to substitute for $\nu_{{\mathbin{\mathfrak M}}}(E_1\oplus E_2)$, and cancelling factors of $(-1)^{\mathop{\rm dim}\nolimits\mathop{\rm Aut}(E)},$ one gets that \eq{dt6eq2.1} is equivalent to \eq{dt6eq25}, which concludes the proof.
\subsection{Deformation invariance issue} \markboth{Deformation invariance issue}{Deformation invariance issue} \label{def} \index{Picard scheme}\index{Picard scheme!relative}
Thomas' original definition \eq{dt2eq1} of \index{Donaldson--Thomas invariants!original $DT^\alpha(\tau)$} $DT^\alpha(\tau)$, and Joyce and Song's definition \eq{dt4eq11} of the pair \index{stable pair invariants $PI^{\alpha,n}(\tau')$} invariants $PI^{\alpha,n}(\tau')$, are both valid over ${\mathbin{\mathbb K}}$. In \cite[Rmk 4.20 (e)]{JoSo}, Joyce and Song suggest solving problem (b) in \S\ref{main.1} by replacing $H^*(X;{\mathbin{\mathbb Q}})$ by the {\it algebraic de Rham cohomology\/}\index{algebraic de Rham cohomology} $H^*_{\rm dR}(X)$ of Hartshorne \cite{Hart1}. \index{cohomology} Here we suggest another argument, based on the theory of {\it Picard schemes} by Grothendieck \cite{Grot4,Grot5}; other references are \cite{Art,Kle}. Our argument will not prove that the numerical Grothendieck groups are deformation invariant, as this last fact depends deeply on an integral Hodge conjecture type result \cite{Vois} which we are not able to prove in this more general context. We will, however, find a deformation invariant lattice $\Lambda_{X_t}$ containing the image of the numerical Grothendieck group under the Chern character map, and define $\bar{DT}{}^\alpha(\tau)_t$ for $\alpha\in \Lambda_{X_t}$, which will be deformation invariant.
\nomenclature[Pic]{$\mathop{\rm Pic}_{{\mathbin{\mathfrak X}}/T}$}{relative Picard scheme of a family ${\mathbin{\mathfrak X}}\rightarrow T$} \nomenclature[Picc]{$\mathop{\rm Pic}(X)$}{Picard scheme of a ${\mathbin{\mathbb K}}$-scheme}
To prove deformation-invariance we need to work not with a single Calabi--Yau 3-fold $X$ over ${\mathbin{\mathbb K}}$, but with a {\it family\/} of Calabi--Yau 3-folds ${\mathbin{\mathfrak X}}\stackrel{\varphi}{\longrightarrow}T$ over a base ${\mathbin{\mathbb K}}$-scheme $T$. Taking $T=\mathop{\rm Spec}\nolimits{\mathbin{\mathbb K}}$ recovers the case of one Calabi--Yau 3-fold. Here are our assumptions and notation for such families. Let ${\mathbin{\mathfrak X}}\stackrel{\varphi}{\longrightarrow}T$ be a smooth projective morphism of algebraic ${\mathbin{\mathbb K}}$-varieties ${\mathbin{\mathfrak X}},T$, with $T$ connected. Let ${\mathbin{\cal O}}_{\mathbin{\mathfrak X}}(1)$ be a relative very ample line bundle for ${\mathbin{\mathfrak X}}\stackrel{\varphi}{\longrightarrow}T$. For each $t\in T({\mathbin{\mathbb K}})$, write $X_t$ for the fibre ${\mathbin{\mathfrak X}}\times_{\varphi,T,t}\mathop{\rm Spec}\nolimits{\mathbin{\mathbb K}}$ of $\varphi$ over $t$, and ${\mathbin{\cal O}}_{X_t}(1)$ for ${\mathbin{\cal O}}_{\mathbin{\mathfrak X}}(1)\vert_{X_t}$. Suppose that $X_t$ is a smooth Calabi--Yau 3-fold over ${\mathbin{\mathbb K}}$ for all $t\in T({\mathbin{\mathbb K}})$, with $H^1({\mathbin{\cal O}}_{X_t})= 0$.
There are some important existence theorems which refine Grothendieck's original theorem \cite[Thm. 3.1]{Grot4}. In \cite[Thm. 7.3]{Art}, Artin proves that if $f:X\rightarrow S$ is a flat, proper, and finitely presented morphism of algebraic spaces which is cohomologically flat in dimension zero, then the relative Picard scheme $\mathop{\rm Pic}_{X/S}$ exists as an algebraic space which is locally of finite presentation over $S$. Its fibres are the Picard schemes $\mathop{\rm Pic}(X_s)$ of the fibres of $f$; they form a family whose total space is $\mathop{\rm Pic}_{X/S}.$
In \cite[Prop. 2.10]{Grot5} Grothendieck shows that if $H^2(X_s,{\mathbin{\cal O}}_{X_{s}})=0$ for some $s\in S,$ then there exists an open neighbourhood $U$ of $s$ such that the scheme $\mathop{\rm Pic}_{X/S}\vert_U$ is smooth, and in this case $\mathop{\rm dim}\nolimits(\mathop{\rm Pic}(X_s))=\mathop{\rm dim}\nolimits(H^1(X_s,{\mathbin{\cal O}}_{X_s})).$
In our case, $\mathop{\rm Pic}_{{\mathbin{\mathfrak X}}/T}$ exists and is smooth with $0$-dimensional fibres which are the Picard schemes $\mathop{\rm Pic}(X_t).$ Moreover the morphism $(\pi,P): \mathop{\rm Pic}_{{\mathbin{\mathfrak X}}/T}\longrightarrow T\times {\mathbin{\mathbb Q}}[s],$ where $\pi$ is the projection to the base scheme and $P$ assigns to an isomorphism class of a line bundle $[L]$ its Hilbert polynomial $P_L(s)$ with respect to ${\mathbin{\cal O}}_{\mathbin{\mathfrak X}}(1),$ is proper. This implies an upper semicontinuity result for $t\mapsto\mathop{\rm dim}\nolimits(\mathop{\rm Pic}(X_t))$ \cite[Cor. 2.7]{Grot5}. These results yield that the Picard schemes $\mathop{\rm Pic}(X_t)$ for $t\in T({\mathbin{\mathbb K}})$ are canonically isomorphic {\it locally} in $T({\mathbin{\mathbb K}})$. Observe that at the moment we do not have canonical isomorphisms $\mathop{\rm Pic}(X_t)\cong \mathop{\rm Pic}(X)$ for all $t\in T({\mathbin{\mathbb K}})$, which would mean the $\mathop{\rm Pic}(X_t)$ are canonically isomorphic {\it globally} in $T({\mathbin{\mathbb K}})$. Instead, we mean that the groups $\mathop{\rm Pic}(X_t)$ for $t\in T({\mathbin{\mathbb K}})$ form a {\it local system of abelian groups\/} over $T({\mathbin{\mathbb K}})$, with fibre $\mathop{\rm Pic}(X)$.
When ${\mathbin{\mathbb K}}={\mathbin{\mathbb C}}$, Joyce and Song proved \cite[\S 4]{JoSo} that the $K^{\rm num}(\mathop{\rm coh}\nolimits(X_t))$ form a local system of abelian groups over $T({\mathbin{\mathbb K}})$, with fibre $K^{\rm num}(\mathop{\rm coh}\nolimits(X))$. This means that in simply-connected regions of $T({\mathbin{\mathbb C}})$ in the complex analytic topology the $K^{\rm num}(\mathop{\rm coh}\nolimits(X_t))$ are all canonically isomorphic, and isomorphic to $K^{\rm num}(\mathop{\rm coh}\nolimits(X))$. But around loops in $T({\mathbin{\mathbb C}})$, this isomorphism with $K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ can change by {\it monodromy},\index{monodromy} that is, by an automorphism $\mu:K^{\rm num}(\mathop{\rm coh}\nolimits(X))\rightarrow K^{\rm num}(\mathop{\rm coh}\nolimits(X))$. In \cite[Thm 4.21]{JoSo} they showed that the group of such monodromies $\mu$ is finite, and so it is possible to make it trivial by passing to a finite cover $\tilde T$ of $T$. If they had worked instead with invariants $PI^{P,n}(\tau')$ counting pairs $s:{\mathbin{\cal O}}_{X}(-n)\rightarrow E$ in which $E$ has fixed Hilbert polynomial\index{Hilbert polynomial} $P$, rather than fixed class $\alpha\in K^{\rm num}(\mathop{\rm coh}\nolimits(X))$, as in Thomas' original definition of Donaldson--Thomas invariants \cite{Thom}, then they could have dropped the assumption on $K^{\rm num}(\mathop{\rm coh}\nolimits(X_t))$ in \cite[Thm. 5.25]{JoSo}.
Similarly, we now study monodromy phenomena for $\mathop{\rm Pic}(X_t)$ in families of smooth ${\mathbin{\mathbb K}}$-schemes ${\mathbin{\mathfrak X}}\rightarrow T$ following the idea of \cite[Thm. 4.21]{JoSo}. We find that we can always eliminate such monodromy by passing to a finite cover $\tilde T$ of $T$. This is crucial to prove deformation-invariance of the~$\bar{DT}{}^\alpha(\tau),PI^{\alpha,n}(\tau')$ in \cite[\S 12]{JoSo}.
\begin{thm} Let\/ ${\mathbin{\mathbb K}}$ be an algebraically closed field of characteristic zero, $\varphi:{\mathbin{\mathfrak X}}\rightarrow T$ a smooth projective morphism of\/ ${\mathbin{\mathbb K}}$-schemes with\/ $T$ connected, and\/ ${\mathbin{\cal O}}_{\mathbin{\mathfrak X}}(1)$ a relative very ample line bundle on ${\mathbin{\mathfrak X}},$ so that for each\/ $t\in T({\mathbin{\mathbb K}}),$ the fibre $X_t$ of\/ $\varphi$ is a smooth projective ${\mathbin{\mathbb K}}$-scheme with very ample line bundle\/ ${\mathbin{\cal O}}_{X_t}(1)$. Suppose the Picard schemes $\mathop{\rm Pic}(X_t)$ are locally constant in $T({\mathbin{\mathbb K}}),$ so that\/ $t\mapsto \mathop{\rm Pic}(X_t)$ is a local system of abelian groups on\/~$T$. Fix a base point\/ $s\in T({\mathbin{\mathbb K}}),$ and let\/ $\Gamma$ be the monodromy group of $\mathop{\rm Pic}(X_s).$ Then $\Gamma$ is a finite group. There exists a finite \'etale cover $\pi:\tilde T\rightarrow T$ of degree $\md{\Gamma},$ with\/ $\tilde T$ a connected ${\mathbin{\mathbb K}}$-scheme, such that writing $\tilde {\mathbin{\mathfrak X}}={\mathbin{\mathfrak X}}\times_T\tilde T$ and $\tilde\varphi:\tilde {\mathbin{\mathfrak X}}\rightarrow\tilde T$ for the natural projection, with fibre $\tilde X_{\tilde t}$ at $\tilde t\in\tilde T({\mathbin{\mathbb K}}),$ then $\mathop{\rm Pic}(\tilde X_{\tilde t})$ for all $\tilde t\in\tilde T({\mathbin{\mathbb K}})$ are all globally canonically isomorphic to $\mathop{\rm Pic}(X_s)$. That is, the local system $\tilde t\mapsto \mathop{\rm Pic}(\tilde X_{\tilde t})$ on $\tilde T$ is trivial. \label{finmon} \end{thm}
\begin{proof} As $\mathop{\rm Pic}(X_s)$ is finitely generated, one can choose classes $[L_1],\ldots, [L_k]\in \mathop{\rm Pic}(X_s)$ as generators. Let $P_1,\ldots, P_k$ be the Hilbert polynomials respectively of $[L_1],\ldots, [L_k]$ with respect to ${\mathbin{\cal O}}_{X_s}(1)$. Let $\gamma\in\Gamma$, and consider the images $\gamma\cdot [L_i] \in \mathop{\rm Pic}(X_s)$ for $i=1,\ldots,k$. As we assume ${\mathbin{\cal O}}_{\mathbin{\mathfrak X}}(1)$ is globally defined on $T$ and does not change under monodromy, it follows that the Hilbert polynomials $P_1,\ldots, P_k$ do not change under monodromy. Hence $\gamma\cdot [L_i]$ has Hilbert polynomial $P_i$. Again one uses properness to show that for each $i=1,\ldots,k$ the set $\mathop{\rm Pic}^{P_i}(X_s)$ consisting of isomorphism classes of line bundles in $\mathop{\rm Pic}(X_s)$ with Hilbert polynomial $P_i$ is finite, that is, every $P_i$ is the Hilbert polynomial of only finitely many classes $[R_1],\ldots,[R_{n_i}]$ in $\mathop{\rm Pic}(X_s).$ It follows that for each $\gamma\in\Gamma$ we have $\gamma\cdot [L_i]\in\{[R_1],\ldots,[R_{n_i}]\}$. So there are at most $n_1\cdots n_k$ possibilities for $(\gamma\cdot [L_1],\ldots, \gamma\cdot [L_k])$. But $(\gamma\cdot [L_1],\ldots,\gamma\cdot [L_k])$ determines $\gamma$ as $[L_1],\ldots, [L_k]$ generate $\mathop{\rm Pic}(X_s)$. Hence $\md{\Gamma}\leqslant\nobreak n_1\cdots n_k$, and $\Gamma$ is finite.
We can now construct an \'etale cover $\pi:\tilde T\rightarrow T$ which is a principal $\Gamma$-bundle, and so has degree $\md{\Gamma}$, such that the ${\mathbin{\mathbb K}}$-points of $\tilde T$ are pairs $(t,\iota)$ where $t\in T({\mathbin{\mathbb K}})$ and $\iota:\mathop{\rm Pic}(X_t)\rightarrow \mathop{\rm Pic}(X_s)$ is an isomorphism arising from the properness and smoothness argument above, and $\Gamma$ acts freely on $\tilde T({\mathbin{\mathbb K}})$ by $\gamma:(t,\iota)\mapsto (t,\gamma\circ\iota)$, so that the $\Gamma$-orbits correspond to points $t\in T({\mathbin{\mathbb K}})$. Then for $\tilde t=(t,\iota)$ we have $\tilde X_{\tilde t}=X_t$, with canonical isomorphism~$\iota:\mathop{\rm Pic}(\tilde X_{\tilde t})\rightarrow \mathop{\rm Pic}(X_s)$. \end{proof} \index{Grothendieck group!numerical}\index{monodromy} \index{cohomology}\index{Picard scheme}
So the conclusion is that, by the properness and smoothness argument, the $\mathop{\rm Pic}(X_t)$ are canonically isomorphic locally in $T({\mathbin{\mathbb K}})$. But by Theorem \ref{finmon}, one can pass to a finite cover $\tilde T$ of $T$, so that the $\mathop{\rm Pic}(\tilde X_{\tilde t})$ are canonically isomorphic globally in $\tilde T({\mathbin{\mathbb K}})$. So, replacing ${\mathbin{\mathfrak X}},T$ by $\tilde {\mathbin{\mathfrak X}},\tilde T$, we will assume from here on that the Picard schemes $\mathop{\rm Pic}(X_t)$ for $t\in T({\mathbin{\mathbb K}})$ are all canonically isomorphic globally in $T({\mathbin{\mathbb K}})$, and we write $\mathop{\rm Pic}(X)$ for this group $\mathop{\rm Pic}(X_t)$ up to canonical isomorphism.
In \cite[Thm. 4.19]{JoSo} Joyce and Song showed that when ${\mathbin{\mathbb K}}={\mathbin{\mathbb C}}$ and $H^1({\mathbin{\cal O}}_X)=0$ the numerical Grothendieck group $K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ is unchanged under small deformations of $X$, up to canonical isomorphism. As we said, here we will not prove this result. Instead, the idea is to use the global constancy of the Picard schemes to construct a globally constant lattice $\Lambda_{X}$ together with an inclusion $K^{\rm num}(\mathop{\rm coh}\nolimits(X))\hookrightarrow\Lambda_{X}.$ It could happen that the image of the numerical Grothendieck group varies with $t$, since this is related to the integral Hodge conjecture as in \cite[Thm. 4.19]{JoSo}, but this does not affect the deformation invariance of $\bar{DT}{}^\alpha(\tau)$: for these to be deformation invariant it is enough to find a deformation invariant lattice in which the classes $\alpha$ vary. Next, we describe such a lattice $\Lambda_X$ and explain how the numerical Grothendieck group $K^{\rm num}(\mathop{\rm coh}\nolimits(X))$ is contained in it. Our approach follows \cite[Thm. 4.19]{JoSo}.
\index{Chern character}
Let $X$ be a Calabi--Yau 3-fold over ${\mathbin{\mathbb K}}$, with $H^1({\mathbin{\cal O}}_X)=0$ and consider the {\it Chern character}, as in Hartshorne \cite{Hart2}: for each $E\in\mathop{\rm coh}\nolimits(X)$ we have the rank $r(E)\in A^0(X)\cong {\mathbin{\mathbb Z}},$ and the Chern classes $c_i(E)\in A^{i}(X)$ for $i=1,2,3$. It is useful to organize these into the Chern character $\mathop{\rm ch}\nolimits(E)$\nomenclature[ch(E)]{$\mathop{\rm ch}\nolimits(E)$}{Chern character of a coherent sheaf $E$} \nomenclature[chi(E)]{$\mathop{\rm ch}\nolimits_i(E)$}{$i^{\rm th}$ component of Chern character of $E$} in $A^{*}(X)_{\mathbin{\mathbb Q}}$, \nomenclature[Heven(X)]{$H_{dR}^{\rm even}(X)$}{even cohomology of a ${\mathbin{\mathbb K}}$-scheme $X$} where $\mathop{\rm ch}\nolimits(E)=\mathop{\rm ch}\nolimits_0(E)+\mathop{\rm ch}\nolimits_1(E)+\mathop{\rm ch}\nolimits_2(E)+\mathop{\rm ch}\nolimits_3(E)$ with $\mathop{\rm ch}\nolimits_i(E)\in A^{i}(X)_{\mathbin{\mathbb Q}}:$ \e \mathop{\rm ch}\nolimits_0(E)=r(E),\quad \mathop{\rm ch}\nolimits_1(E)=c_1(E),\quad \mathop{\rm ch}\nolimits_2(E)=\textstyle\frac{1}{2}\bigl(c_1(E)^2-2c_2(E)\bigr),\quad \mathop{\rm ch}\nolimits_3(E)=\textstyle\frac{1}{6}\bigl(c_1(E)^3-3c_1(E)c_2(E)+3c_3(E)\bigr). \label{chernch} \e
By the Hirzebruch--Riemann--Roch Theorem\index{Hirzebruch--Riemann--Roch Theorem} \cite[Th.~A.4.1]{Hart2}, the Euler form\index{Euler form} on coherent sheaves $E,F$ is given in terms of their Chern characters by \e \bar\chi\bigl([E],[F]\bigr)=\deg\bigl(\mathop{\rm ch}\nolimits(E)^\vee\cdot\mathop{\rm ch}\nolimits(F)\cdot{\rm td}(TX)\bigr){}_3, \label{euform} \e where $(\cdot)_3$ denotes the component of degree $3$ in $A^*(X)_{\mathbin{\mathbb Q}}$ and where ${\rm td}(TX)$\nomenclature[td(TX)]{${\rm td}(TX)$}{Todd class of $TX$ in $H_{dR}^{\rm even}(X)$} is the {\it Todd class\/}\index{Todd class} of $TX$, which is $1+\frac{1}{12}c_2(TX)$ as $X$ is a Calabi--Yau 3-fold, and $(\lambda_0,\lambda_1,\lambda_2,\lambda_3)^\vee=(\lambda_0,-\lambda_1,\lambda_2,-\lambda_3)$, writing $(\lambda_0,\ldots,\lambda_3)\in A^{*}(X)$ with $\lambda_i\in A^{i}(X)$. Define:
\begin{align*} \Lambda_X=\bigl\{ (\lambda_0,\lambda_1,\lambda_2,\lambda_3):{}& \lambda_0,\lambda_3\in{\mathbin{\mathbb Q}}, \; \lambda_1\in\mathop{\rm Pic}(X)\otimes_{{\mathbin{\mathbb Z}}}{\mathbin{\mathbb Q}}, \; \lambda_2\in \mathop{\rm Hom}\nolimits(\mathop{\rm Pic}(X),{\mathbin{\mathbb Q}}), \text{ such that}\\ & \lambda_0\in {\mathbin{\mathbb Z}},\;\> \lambda_1\in \mathop{\rm Pic}(X)/ {\textrm{torsion}}, \;\> \lambda_2-{\ts\frac{1}{2}}\lambda_1^2\in \mathop{\rm Hom}\nolimits(\mathop{\rm Pic}(X),{\mathbin{\mathbb Z}}),\;\> \lambda_3+\textstyle\frac{1}{12}\lambda_1 c_2(TX)\in {\mathbin{\mathbb Z}}\bigr\}, \end{align*}
where $\lambda_1^2$ is defined as the map $\alpha\in\mathop{\rm Pic}(X)\mapsto c_1(\lambda_1)\cdot c_1(\lambda_1)\cdot c_1(\alpha)\in A^3(X)_{{\mathbin{\mathbb Q}}}\cong {\mathbin{\mathbb Q}},$ and $\frac{1}{12}\lambda_1 c_2(TX)$ is defined as $\frac{1}{12}c_1(\lambda_1)\cdot c_2(TX)\in A^3(X)_{{\mathbin{\mathbb Q}}}\cong{\mathbin{\mathbb Q}}.$ Theorem \ref{definv} states that $\Lambda_X$ is deformation invariant and the Chern character gives an injective morphism $\mathop{\rm ch}\nolimits:K^{\rm num}(\mathop{\rm coh}\nolimits(X))\!\hookrightarrow\!\Lambda_X$. \index{Picard scheme} \index{deformation invariance} The proof of Theorem \ref{definv} is straightforward:
\begin{proof} The proof follows exactly as in \cite[Thm. 4.19]{JoSo}: the fact, established above, that the Picard scheme $\mathop{\rm Pic}(X)$ is globally constant in families yields that the lattice $\Lambda_X$ is deformation invariant. Moreover, the proof that $\mathop{\rm ch}\nolimits\bigl(K^{\rm num}(\mathop{\rm coh}\nolimits(X))\bigr)\subseteq\Lambda_X$ is again as in \cite[Thm. 4.19]{JoSo}. Observe that we do not prove that $\mathop{\rm ch}\nolimits\bigl(K^{\rm num}(\mathop{\rm coh}\nolimits(X))\bigr)=\Lambda_X,$ a fact which uses Voisin's proof of the Hodge conjecture for Calabi--Yau 3-folds over ${\mathbin{\mathbb C}}$ \cite{Vois}. \end{proof}
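As an illustration of the definition of $\Lambda_X$ (this worked example is ours, modelled on the pattern of \cite[Thm. 4.19]{JoSo}): if $L$ is a line bundle on $X$, then \eq{chernch} gives $\mathop{\rm ch}\nolimits(L)=\bigl(1,c_1(L),\textstyle\frac{1}{2}c_1(L)^2,\frac{1}{6}c_1(L)^3\bigr)$, so $\lambda_0=1\in{\mathbin{\mathbb Z}}$, the condition on $\lambda_2$ holds since $\mathop{\rm ch}\nolimits_2(L)-\frac{1}{2}c_1(L)^2=-c_2(L)=0$, and
\begin{equation*}
\lambda_3+\textstyle\frac{1}{12}\lambda_1 c_2(TX)=\textstyle\frac{1}{6}c_1(L)^3+\textstyle\frac{1}{12}c_1(L)\cdot c_2(TX)=\chi(X,L)\in{\mathbin{\mathbb Z}}
\end{equation*}
by the Hirzebruch--Riemann--Roch Theorem, since ${\rm td}(TX)=1+\frac{1}{12}c_2(TX)$ as $X$ is a Calabi--Yau 3-fold. Hence $\mathop{\rm ch}\nolimits(L)\in\Lambda_X$.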
\begin{quest} Does Voisin's result \cite{Vois} work over ${\mathbin{\mathbb K}}$ in terms of $\mathop{\rm Hom}\nolimits(\mathop{\rm Pic}(X),{\mathbin{\mathbb Z}})$? \end{quest}
This concludes the discussion of problem (b) in \S\ref{main.1} and yields the deformation-invariance of $DT^\alpha(\tau),$ $PI^{\alpha,n}(\tau')$ over ${\mathbin{\mathbb K}}.$
\section{Implications and conjectures} \markboth{Implications and conjectures}{Implications and conjectures} \label{dt7}\index{non-Archimedean geometry}\index{formal power series}
In this final section we sketch some far-reaching implications of the theory and propose new directions for further research. One proposal is to extend Donaldson--Thomas invariants to compactly supported coherent sheaves on noncompact quasi-projective Calabi--Yau 3-folds. A second is to establish, in the derived categorical framework, a theory of generalized Donaldson--Thomas invariants for objects in the derived category of coherent sheaves. Here we describe the problems and illustrate some possible approaches where known.
\subsection{Noncompact Calabi--Yau 3-folds}
We start by recalling the following definition from \cite[Def. 6.27]{JoSo}:
\begin{dfn} Let $X$ be a noncompact Calabi-Yau $3$-fold over ${\mathbin{\mathbb C}}.$ We call $X$ compactly embeddable if whenever $K \subset X$ is a compact subset, in the analytic topology, there exists an open neighbourhood $U$ of $K$ in $X$ in the analytic topology, a compact Calabi-Yau 3-fold $Y$ over ${\mathbin{\mathbb C}}$ with $H^1({\mathbin{\cal O}}_Y ) = 0,$ an open subset $V$ of $Y$ in the analytic topology, and an isomorphism of complex manifolds $\varphi : U \rightarrow V.$ \end{dfn}
Joyce and Song only need the notion of `compactly embeddable' because their complex analytic proof of \eq{dt6eq1}--\eq{dt6eq2}, recalled in \S\ref{dtBehid}, requires $X$ to be compact; but unfortunately the algebraic version of \eq{dt6eq1}--\eq{dt6eq2} given in Theorem \ref{dt6thm1.1} uses results from derived algebraic geometry \cite{ToVe1,ToVe2,PTVV,Toen,Toen2,Toen3,Toen4}, and the author does not know whether they also apply to compactly supported sheaves
\index{coherent sheaf!compactly supported} on a noncompact~$X$.\index{field ${\mathbin{\mathbb K}}$|)} \index{Calabi--Yau 3-fold!noncompact}
More precisely, in \cite{PTVV} it is shown that if $X$ is a projective Calabi-Yau $m$-fold then the derived moduli stack ${\mathbin{\mathfrak M}}_{\mathop{\rm Perf}\nolimits(X)}$ of perfect complexes of coherent sheaves on $X$ is $(2-m)$-shifted symplectic. It is not obvious that if $X$ is a quasi-projective Calabi-Yau $m$-fold, possibly noncompact, then the derived moduli stack ${\mathbin{\mathfrak M}}_{\mathop{\rm Perf}\nolimits_{cs}(X)}$ of perfect complexes on $X$ with compactly-supported cohomology is also $(2-m)$-shifted symplectic.
At present, we can state the following result. We thank Bertrand To\"en for explaining this to us.
\begin{thm} Suppose $Z$ is smooth projective of dimension $m,$ and $s \in H^0(K_Z^{-1}),$ and $X \subset Z$ is Zariski open with $s$ nonvanishing on $X,$ so that $X$ is a (generally non compact) quasi-projective Calabi-Yau $m$-fold. Then the derived moduli stack ${\mathbin{\mathfrak M}}_{\mathop{\rm Perf}\nolimits_{cs}}(X)$ of compactly-supported coherent sheaves on $X,$ or of perfect complexes on $X$ with compactly-supported cohomology, is $(2-m)$-shifted symplectic. \label{noncpt} \end{thm}
\begin{proof} Let $Z$ be smooth and projective of dimension $m,$ and let $s$ be any section of $K_Z^{-1}$. Let $Y$ be the derived scheme of zeros of $s$ and $X=Z\setminus Y.$ Then $Y$ is equipped with a canonical $O$-orientation of dimension $m-1$ in the sense of \cite{PTVV}, so ${\mathbin{\mathfrak M}}_{\mathop{\rm Perf}\nolimits}(Y)$ is $(2-(m-1))=(3-m)$-shifted symplectic, even if $Y$ is not smooth. The restriction map ${\mathbin{\mathfrak M}}_{\mathop{\rm Perf}\nolimits}(Z) \rightarrow {\mathbin{\mathfrak M}}_{\mathop{\rm Perf}\nolimits}(Y)$ is moreover Lagrangian. The map $\ast \rightarrow {\mathbin{\mathfrak M}}_{\mathop{\rm Perf}\nolimits}(Y)$ corresponding to the zero object is \'etale, and thus its pull-back provides a Lagrangian map ${\mathbin{\mathfrak M}}_{\mathop{\rm Perf}\nolimits_{cs}}(X) \rightarrow\ast$, or, equivalently, a $(2-m)$-shifted symplectic structure on ${\mathbin{\mathfrak M}}_{\mathop{\rm Perf}\nolimits_{cs}}(X).$ Now if $X'$ is open in $X,$ then ${\mathbin{\mathfrak M}}_{\mathop{\rm Perf}\nolimits_{cs}}(X') \rightarrow {\mathbin{\mathfrak M}}_{\mathop{\rm Perf}\nolimits_{cs}}(X)$ is an open immersion, so ${\mathbin{\mathfrak M}}_{\mathop{\rm Perf}\nolimits_{cs}}(X')$ is also $(2-m)$-shifted symplectic. \end{proof}
We make the following remarks:
\begin{itemize} \item[$(a)$] We point out that the condition of Theorem \ref{noncpt} is similar to the compactly-embeddable condition in \cite[Def. 6.27]{JoSo}, but more general, as we do not require $Z$ to be a Calabi-Yau.
\item[$(b)$] Observe that in the non-compact case we cannot expect the deformation invariance property to hold, except in some particular cases in which the moduli space is proper.
\item[$(c)$] Note that we need the noncompact Calabi--Yau to be quasi-projective in order to have a quasi-projective Quot scheme \cite[Thm. 6.3]{NN}. \end{itemize}
We conclude the section with the following:
\begin{conj}The theory of generalized Donaldson--Thomas invariants defined in \cite{JoSo} is valid over algebraically closed fields of characteristic zero for compactly supported coherent sheaves on noncompact quasi-projective Calabi--Yau 3-folds. In this case, one can define $\bar{DT}{}^\alpha(\tau)$ and prove the wall-crossing formulae, and the relation with $PI^{\alpha,n}(\tau')$ remains valid, while one loses the deformation invariance property and the properness of moduli spaces. \end{conj}
\subsection{Derived categorical framework} \label{dercat}
Our algebraic method could lead to the extension of generalized Donaldson--Thomas theory to the derived categorical context. The plan to extend from abelian to derived categories the theory of Joyce and Song \cite{JoSo} starts by reinterpreting the series of papers by Joyce \cite{Joyc.1,Joyc.2,Joyc.3,Joyc.4,Joyc.5,Joyc.6,Joyc.7,Joyc.8} in this new general setup. In particular:
\begin{itemize}
\item[$(a)$]Defining configurations in triangulated categories $\mathcal{T}$ requires replacing exact sequences by distinguished triangles.
\item[$(b)$]Constructing moduli stacks of objects and configurations in $\mathcal{T}$. Again, the theory of derived algebraic geometry \cite{Toen,Toen2,Toen3,Toen4,ToVe1,ToVe2,PTVV} can give us a satisfactory answer.
\item[$(c)$]Defining stability conditions on triangulated categories can be approached using Bridgeland's results, and their extension by Gorodentsev et al., which combines Bridgeland's ideas with Rudakov's definition for abelian categories \cite{Rud}. Since Joyce's stability conditions \cite{Joyc.3} are based on Rudakov's, the modifications should be straightforward.
\end{itemize}
\begin{itemize} \item[$(d)$]The `nonfunctoriality of the cone' in triangulated categories means that the triangulated category versions of some operations on configurations are defined up to isomorphism, but not canonically, so that the corresponding diagrams may be commutative, but not Cartesian as in the abelian case. In particular, one loses the associativity of the Ringel--Hall algebra of stack functions, which is a crucial object in the Joyce--Song framework. We expect that the derived Hall algebra approach of To\"en \cite{Toen3} resolves this issue. See also \cite{PL}. \end{itemize}
The list above does not represent a major difficulty. The main issues are: proving the existence of Bridgeland (or other) stability conditions on the derived category; proving that semistable moduli schemes and stacks are of finite type (permissible); and proving that any two stability conditions can be joined by a path of permissible stability conditions.
Theorem \ref{dt6thm1.1.bis} is just one of the steps in developing this program. The author thus expects that a well-behaved theory of invariants counting $\tau$-semistable objects in triangulated categories in the style of Joyce's theory exists, that is, Theorem \ref{mainthm} should be valid also in the derived categorical context:
\begin{conj} The theory of generalized Donaldson--Thomas invariants defined in \cite{JoSo} is valid for complexes of coherent sheaves on Calabi-Yau $3$-folds over algebraically closed fields of characteristic zero. \end{conj}
\end{document} |
\begin{document}
\title{Finite state automata and homeomorphism of self-similar sets}
\author{Liangyi Huang} \address{College of Mathematics and Statistics, Chongqing University, Chongqing, 401331, China} \email{liangyihuang@cqu.edu.cn}
\author{Zhiying Wen} \address{Department of Mathematical Sciences, Tsinghua University, Beijing, 100084, China} \email{wenzy@tsinghua.edu.cn}
\author{Yamin Yang$^*$} \address{Institute of applied mathematics, College of Science, Huazhong Agricultural University, Wuhan,430070, China} \email{yangym09@mail.hzau.edu.cn}
\author{Yunjie Zhu} \address{School of Mathematics and Physics, Hubei Polytechnic University, Huangshi, 435003, China} \email{yjzhu\_ccnu@sina.com}
\thanks{* The correspondence author.}
\maketitle
\begin{abstract} The topological and metrical equivalence of fractals is an important topic in analysis. In this paper, we use a class of finite state automata, called $\Sigma$-automata, to construct pseudo-metric spaces, and then apply them to the study of the classification of self-similar sets. We first introduce a notion of the topology automaton of a fractal gasket, which is a simplified version of the neighbor automaton; we show that a fractal gasket is homeomorphic to the pseudo-metric space induced by its topology automaton. Then we construct a universal map to show that pseudo-metric spaces induced by different automata can be bi-Lipschitz equivalent. As an application, we obtain a rather general sufficient condition for two fractal gaskets to be homeomorphic or Lipschitz equivalent. \end{abstract}
\section{ Introduction}
To determine whether two fractal sets are homeomorphic, quasi-symmetric or Lipschitz equivalent is important
in analysis. The study of homeomorphism of fractal sets dates back to
Whyburn \cite{Why58}. For studies of quasi-symmetric equivalence of fractal sets,
see \cite{Solomyak10, Bonk}.
The study of Lipschitz equivalence of fractal sets dates from the 1990s and has become a very active topic in recent years \cite{DS,FM,FanRZ15,LuoL13,RRX06,RuanWX14,RaoZ15,XiXi20}, where most of the studies focus on self-similar sets which are totally disconnected.
For self-similar sets which are not totally disconnected, to construct homeomorphisms, quasi-symmetric maps or bi-Lipschitz maps is very difficult and there are few results (\cite{Why58, Bonk, YZ18, RaoZhu16}).
Whyburn \cite{Why58} proved that all the Sierpinski curves are homeomorphic, which can be applied to a class of connected fractal squares.
Solomyak \cite{Solomyak10} proved that a Julia set is always quasi-symmetric equivalent to a planar self-similar set with two branches.
Bonk and Merenkov \cite{Bonk} proved that a quasi-symmetric map from the Sierpinski carpet to itself must be an isometry.
There are several works devoted to
the Lipschitz classification of non-totally disconnected fractal squares with contraction ratio $1/3$, that is, a kind of Sierpinski carpet (\cite{RuanW17,LuoL16,RWW17,YZ18}), but the problem is unsolved in the case of fractal squares with 5 branches.
Using neighbor automaton, Rao-Zhu \cite{RaoZhu16} proved that $F_1\simeq F_2$ in Figure \ref{compare}, but it is not known whether $F_j, j=2,3,4,5$ are Lipschitz equivalent or homeomorphic.
\begin{figure}
\caption{Fractal squares with $5$ branches.}
\label{compare}
\end{figure}
In this paper, we develop a more systematic theory to study the topological and metrical equivalence of self-similar sets by finite state automata, which is mostly inspired by Rao and Zhu \cite{RaoZhu16}. First, we recall the definition of a finite state automaton.
\begin{defn}[\cite{JEH79}]\label{FA} \emph{A \emph{finite state automaton} is a 5-tuple $(Q,\mathcal{A},\delta,q_0,P)$, where $Q$ is a finite set of states, $\mathcal{A}$ is a finite input alphabet, $q_0$ in $Q$ is the initial state, $P\subset Q$ is the set of final states, and $\delta$ is the transition function mapping $Q\times\mathcal{A}$ to $Q$. That is, $\delta(q,a)$ is a state for each state $q$ and input symbol $a$.} \end{defn}
Let $\Sigma=\{1,\dots,N\}$, which we call the alphabet. Inspired by the neighbor automaton of self-similar sets, we define
\begin{defn}\emph{A finite state automaton $M$ is called a \emph{$\Sigma$-automaton} if \begin{equation}\label{sigma-auto} M=(Q,\Sigma^2,\delta,Id, Exit), \end{equation}
where the state set is $Q=Q_0\cup \{Id, Exit\}$, the input alphabet is $\Sigma^2$, the initial state is $Id$, the final state is $Exit$, and the transition function $\delta$ satisfies
\begin{equation}\label{eq:id}
\delta(Id,(i,j))=Id \Leftrightarrow i=j.
\end{equation}
}
\end{defn}
By inputting a symbol string $({\mathbf{x}},{\mathbf{y}})\in\Sigma^{\infty}\times\Sigma^{\infty}$ to $M$, we obtain a sequence of states $(S_{i})_{i\ge 0}$, which we call the \emph{itinerary} of $({\mathbf{x}},{\mathbf{y}})$. If we arrive at the state $Exit$, then we stop there and the itinerary is finite; otherwise, it is infinite.
We define the \emph{surviving time} of $({\mathbf{x}},{\mathbf{y}})$ to be \begin{equation} T_M({\mathbf{x}},{\mathbf{y}})=(\text{length of the itinerary})-1. \end{equation}
Let $0<\xi<1$. We define a function $\rho_{M,\xi}$ on $\Sigma^\infty\times \Sigma^\infty$ by \begin{equation}\label{eq:metric} \rho_{M,\xi}({\mathbf{x}}, {\mathbf{y}})=\xi^{T_{M}({\mathbf{x}}, {\mathbf{y}})}, \end{equation} with the convention that $\rho_{M,\xi}({\mathbf{x}},{\mathbf{y}})=0$ when the itinerary is infinite.
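The mechanics of the surviving time $T_M$ and the function $\rho_{M,\xi}$ can be sketched in code. The transition table below is a hypothetical toy example over $\Sigma=\{1,2\}$, not an automaton from this paper, and finite prefixes stand in for infinite words; conventions for counting the $Exit$ state in the itinerary may shift the value by one.

```python
# Illustrative sketch: a toy Sigma-automaton over Sigma = {1, 2}
# with states Id, S, Exit, its surviving time T_M, and rho_{M,xi}.
ID, S, EXIT = "Id", "S", "Exit"

# Hypothetical transition table delta(q, (i, j)); any pair not listed
# falls through to Exit.  It satisfies delta(Id, (i, j)) = Id iff i == j.
DELTA = {
    (ID, (1, 1)): ID, (ID, (2, 2)): ID,
    (ID, (1, 2)): S,  (ID, (2, 1)): S,
    (S,  (2, 1)): S,  (S,  (1, 2)): S,
}

def surviving_time(x, y):
    """Number of input pairs read before reaching Exit, for finite
    prefixes x, y; returns len(x) if Exit is never reached (this
    approximates an infinite itinerary)."""
    q = ID
    for t, pair in enumerate(zip(x, y)):
        q = DELTA.get((q, pair), EXIT)
        if q == EXIT:
            return t
    return len(x)

def rho(x, y, xi=0.5):
    # rho_{M,xi}(x, y) = xi ** T_M(x, y)
    return xi ** surviving_time(x, y)
```

Note that, in agreement with the property $T_M({\mathbf{x}},{\mathbf{y}})\geq |{\mathbf{x}}\wedge{\mathbf{y}}|$ below, the automaton cannot exit while the two words agree, since the state stays at $Id$.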
We are interested in $\Sigma$-automata such that $(\Sigma^\infty, \rho_{M,\xi})$ is a pseudo-quasimetric space for some $0<\xi<1$ (see Section 2 for the precise definition). In this case, we define ${\mathbf{x}}\sim {\mathbf{y}}$ if $\rho_{M,\xi}({\mathbf{x}},{\mathbf{y}})=0$; then $\sim$ is an equivalence relation. Set $${\mathcal A}_M=\Sigma^\infty/\sim,$$
then $(\mathcal{A}_M,\rho_{M,\xi})$ is a pseudo-metric space (Lemma \ref{lem:induce}), and we call it the \emph{pseudo-metric space induced by $M$}.
Our purpose is to use such spaces to study the homeomorphism or Lipschitz equivalence of fractal sets. (We remark that to construct bi-Lipschitz maps between two totally disconnected self-similar sets, a crucial idea is to use symbolic spaces with suitable metrics as intermediate metric spaces, see \cite{FM, RRX06, XiXi10, RuanWX14, RaoZ15}; in Luo and Lau \cite{LuoL13}, even a certain hyperbolic tree is used for this purpose.)
\begin{defn}\label{def:Holder}\emph{Two pseudo-metric spaces $(X,d_X)$ and $(Y,d_Y)$ are said to be \emph{H\"{o}lder equivalent} if there is a bijection $f:~X\to Y$, a number $s>0$ and a constant $C>0$ such that \begin{equation} C^{-1}d_X(x_1,x_2)^{1/s} \leq d_Y \big( f(x_1),f(x_2)\big ) \leq C d_X(x_1,x_2)^s,\ \forall x_1,x_2\in X; \end{equation} in this case we say $f$ is a \emph{bi-H\"{o}lder map} with index $s$. }
\emph{If $s=1$, we say $X$ and $Y$ are \emph{Lipschitz equivalent}, denoted by $X\simeq Y$, and call $f$ a \emph{bi-Lipschitz map}. } \end{defn}
In this paper, we confine ourselves to a special class of finite state automata called \emph{gasket automata},
where the state set $Q$ contains eight states; see Definitions \ref{triangle-auto} and \ref{gasket-auto}. We show that
\begin{thm}\label{lem:feasible}
Let $M$ be a gasket automaton. Then $(\Sigma^\infty, \rho_{M,\xi})$ is a pseudo-metric space for every $0<\xi<1$.
\end{thm}
To construct bi-H\"older maps between induced psuedo-metric spaces of different automata is a difficult problem. To this end, we define a \emph{$\gamma$-isolated} condition.
Moreover, for a gasket automaton $M$, we define the one-step simplification of $M$ (see Section \ref{sec:gamma}). The key result of this paper is the following.
\begin{thm}\label{spaceLip} Let $M$ be a gasket automaton satisfying the $\gamma$-isolated condition, and let $M'$ be a one-step simplification of $M$. Then $(\mathcal{A}_M,\rho_{M,\xi})\simeq(\mathcal{A}_{M'},\rho_{M',\xi})$ for every $\xi\in(0,1)$, \textit{i.e.}, they are Lipschitz equivalent. \end{thm}
Next, we apply the above results to study the classification of fractal gaskets, defined as follows. An \emph{iterated function system} (IFS) is a family of contractions $\{\varphi_j\}_{j=1}^N$ on $\mathbb{R}^{d}$; the \emph{attractor} of the IFS is the unique nonempty compact set $K$ satisfying $K=\bigcup_{j=1}^N\varphi_j(K)$, and it is called a \emph{self-similar set} \cite{Hutchinson1981} if all the $\varphi_j$ are similitudes.
Let $\triangle\subset\mathbb{R}^2$ be the regular triangle with vertices $(0,0),$ $(1,0)$, $\omega=(1/2,\sqrt{3}/2)$. \begin{defn}[Fractal gasket]\label{fractalgasket} \emph{ Let $(r_1,\dots, r_N)\in (0,1)^N$ and $\{d_1,\dots, d_N\}\subset {\mathbb R}^2$.
Let $K$ be a self-similar set generated by the IFS $\{\varphi_j\}_{j=1}^N$ where $\varphi_j(z)=r_j(z+d_j)$. We call $K$ a \emph{fractal gasket} if }
\emph{ (i) $\bigcup_{i=1}^N\varphi_i(\triangle)\subset\triangle$;}
\emph{(ii) for any $i\neq j$, $\varphi_i(\triangle)$ and $\varphi_j(\triangle)$ can only intersect at their vertices.} \end{defn}
\textbf{Notations of $\alpha, \beta, \gamma$}. If $(0,0)\not\in K$, we set $\alpha=-1$, otherwise, we set $\alpha$ to be the symbol in $\Sigma=\{1,\dots, N\}$ such that $\varphi_\alpha((0,0))=(0,0)$. Similarly, we set $\beta=-2$ or $\varphi_\beta$ is the map with fixed point $(1,0)$, and set $\gamma=-3$ or $\varphi_\gamma$ is the map with fixed point $\omega$. Hereafter, we will denote $(0,0), (1,0)$ and $\omega$ by $\omega_\alpha$, $\omega_\beta$ and $\omega_\gamma$, respectively.
For a fractal gasket $K$, we introduce a notion of \emph{topology automaton} of $K$, which we denote by $M_K$, in Section \ref{sec:structure}.
Compared to the neighbor automaton of self-similar sets, the topology automaton records less information,
since the information about size is ignored.
Denote $r_*=\min\{r_1,\dots, r_N\}$ and $r^*=\max\{r_1,\dots, r_N\}$. Let $\pi: \Sigma^\infty\to K$ be the well-known coding map,
then $\pi$ induces a bijection from $\mathcal{A}_{M_K}$ to $K$ which we still denote by $\pi$
(see Section \ref{sec:structure} for details). Actually, $\pi$ is a bi-H\"older map.
\begin{thm}\label{thm:Holder} Let $K$ be a fractal gasket. Let $s=\sqrt{\log r^*/\log r_*}$ and $\xi=(r_*)^s$. Then $\pi: (\mathcal{A}_{M_K},\rho_{M_K,\xi})\to K$ is a bi-H\"{o}lder map with index $s$. \end{thm}
\begin{remark}\emph{ We remark that if two of $\{\omega_\alpha,\omega_\beta,\omega_\gamma\}$ do not belong to $K$, then $\varphi_i(K)\cap \varphi_j(K)=\emptyset$ whenever $i\neq j$. In this case $K$ is totally disconnected and we will deal with it in another paper. From now on, we will always assume that $\omega_\alpha,\omega_\beta\in K$ without loss of generality, or equivalently, $\alpha, \beta\in \Sigma$. } \end{remark}
A fractal gasket $K$ is said to satisfy the \emph{top isolated condition}, if $\omega_\gamma\in K$, and $\varphi_{\gamma}(\triangle)\cap \varphi_j(\triangle)=\emptyset$ provided $j\neq \gamma$. This condition corresponds to the $\gamma$-isolated condition in Theorem \ref{spaceLip}.
We remark that if a fractal gasket satisfies the top isolated condition, then its non-trivial connected components are horizontal line segments. See Appendix A.
\begin{defn}[Horizontal block]\label{Hblock} \emph{Let $K$ be a fractal gasket.
We call $$ I=\{i_1,i_2,\dots,i_k\}\subset\Sigma $$ a \emph{horizontal block} of $K$ if $\varphi_{i_{j}}(\omega_\beta)=\varphi_{i_{j+1}}(\omega_\alpha)$ for $1\le j\le k-1$ and $I$ is maximal with this property. We call $k$ the size of $I$. } \end{defn}
If $\alpha$ and $\beta$ belong to the same horizontal block, then we call this block the $\alpha\beta$-block.
If $K$ satisfies the top isolated condition or $\omega_\gamma\not\in K$, and $\alpha$ and $\beta$ do not belong to the same horizontal block, then $K$ is totally disconnected, and it is out of our consideration. Let ${\mathcal F}_{T,\alpha\beta}$ denote the collection of fractal gaskets $K$ satisfying
(i) $K$ satisfies the top isolated condition or $\omega_\gamma\not\in K$;
(ii) $\alpha$ and $\beta$ belong to a same horizontal block of $K$.
\begin{thm}\label{thm:Lip} Let $E, F\in {\mathcal F}_{T,\alpha\beta}$. If there is a size-preserving bijection from the collection of horizontal blocks of $E$ to that of $F$, and the $\alpha\beta$-block of $E$ has the same size as that of $F$, then $E$ is bi-H\"{o}lder equivalent (and homeomorphic) to $F$.
If in addition that both $E$ and $F$ are of uniform contraction ratio $r$, then $E\simeq F$. \end{thm}
\begin{figure}
\caption{Fractal gaskets in Example 1.1. $E$ is homeomorphic to $F$.}
\label{example}
\end{figure}
Let us briefly explain the strategy of the proof of Theorem \ref{thm:Lip}. Let $E$ and $F$ be two fractal gaskets in Theorem \ref{thm:Lip}. Let $M_E$ be the topology automaton of $E$. We construct a sequence of automata $$ M_E=M_{E,0}, M_{E,1}, \dots, M_{E,p}=M_E^* $$ such that $M_{E,i+1}$ is a one-step simplification of $M_{E,i}$ for each $i$,
and $M_E^*$ only records the horizontal connective relations among $\varphi_j(\triangle), j\in\Sigma$.
By Theorem \ref{spaceLip}, we have $\mathcal{A}_{M_E}\simeq \mathcal{A}_{M_E^*}$.
We do the same thing for $F$ and we obtain an automaton $M_F^*$. The assumptions in Theorem \ref{thm:Lip} guarantee that $\mathcal{A}_{M_E^*}\simeq \mathcal{A}_{M_F^*}$. Therefore, $\mathcal{A}_{M_E}\simeq \mathcal{A}_{M_F}$. Finally, by Theorem \ref{thm:Holder}, we obtain that
$E$ is bi-H\"{o}lder equivalent to $F$.
\begin{exam}\label{exam:1} \emph{Let $E$ and $F$ be two fractal gaskets indicated by Figure \ref{example}. There are four horizontal-blocks in both $E$ and $F$, and the sizes of the blocks are $1, 2, 3, 5$ respectively. By Theorem \ref{thm:Lip}, $E$ is homeomorphic to $F$. (In the fractal $E$, there are 3 points connecting $\varphi_j(\triangle)$ in different horizontal blocks, so we need $3$ one-step simplifications to obtain $M^*_E$.)} \end{exam}
\noindent \textbf{Open question:} \emph{Can we replace the $\gamma$-isolated condition by the following condition: $\varphi_\alpha(\triangle)$ and $\varphi_\beta(\triangle)$ belong to a same connected component of $\bigcup_{j=1}^N \varphi_j(\triangle)$, but $\varphi_\gamma(\triangle)$ does not?}
This article is organized as follows: In Section \ref{sec:psuedo}, we discuss the pseudo-metric space induced by a finite state automaton. In Section 3, we introduce the gasket automaton, and Theorem \ref{lem:feasible} is proved there. In Section \ref{sec:structure}, we define the topology automaton of a fractal gasket, and Theorem \ref{thm:Holder} is proved there. In Section \ref{sec:gamma}, we discuss the one-step simplification of a gasket automaton.
In Section \ref{sec:proof}, we prove Theorem \ref{spaceLip} and Theorem \ref{thm:Lip}
by assuming Theorem \ref{main}. In Section \ref{sec:mapg}, we construct a universal map $g$ on the symbolic space $\Omega$. Finally, in Section \ref{sec:time}, we prove Theorem \ref{main} which is technical.
\section{\textbf{Pseudo-metric space induced by $\Sigma$-automaton}}\label{sec:psuedo}
Let us recall the definition of a pseudo-metric space; see for instance \cite{Pepo90, Bao18}.
\begin{defn}\label{def:psuedo}\emph{
A \emph{pseudo-quasimetric space} is a pair $(\mathcal{A},\rho)$, where $\mathcal{A}$ is a set and $\rho:\mathcal{A}\times\mathcal{A}\rightarrow\mathbb{R}_{\geq 0}$ satisfies, for all $x,y,z\in \mathcal{A}$: }
\emph{(i) $\rho(x,x)=0$;}
\emph{(ii) $\rho(x,y)=\rho(y,x)$;}
\emph{(iii) (pseudo-triangle inequality) $\rho(x,z)\le C(\rho(x,y)+\rho(y,z)),$
where $C\ge 1$ is a constant independent of $x,y,z$.}
\emph{If in addition $x\neq y$ implies $\rho(x,y)>0$, then we call $({\mathcal A}, \rho)$ a \emph{pseudo-metric space}. } \end{defn}
Let $({\mathcal A},\rho)$ be a pseudo-quasimetric space. Define $x\sim y$ if $\rho(x,y)=0$; then clearly $\sim$ is an equivalence relation. Denote the equivalence class containing $x$ by $[x]$. Set $\widetilde {\mathcal A}:={\mathcal A}/\sim$ to be the quotient space. For $[x],[y]\in\widetilde {\mathcal A}$, define \begin{equation}\label{metric}
\widetilde{\rho}([x],[y])=\inf\{\rho(a,b); a\in [x], b\in [y]\}. \end{equation}
\begin{lem}\label{lem:induce} The quotient space $(\widetilde {\mathcal A}, \tilde \rho)$ is a pseudo-metric space. \end{lem}
\begin{proof} The assertions $\tilde \rho([x],[x])=0$ and $\tilde \rho([x],[y])=\tilde \rho([y],[x])$ are obvious.
If $a,a'\in [x]$ and $b,b'\in [y]$, then since $\rho(a,a')=\rho(b,b')=0$, two applications of the pseudo-triangle inequality give $\rho(a,b)\le C(\rho(a,a')+\rho(a',b))\le C^2(\rho(a',b')+\rho(b',b))=C^2\rho(a',b')$. Hence, if $a\in [x]$ and $b\in [y]$, we have \begin{equation}\label{eq:xi2} C^{-2} \rho(a,b)\leq \widetilde \rho([x],[y]). \end{equation} It follows that $\widetilde{\rho}([x],[y])>0$ if $[x]\neq [y]$, and \begin{equation}\label{eq:xi3} \widetilde{\rho}([x],[z])\leq \rho(x,z)\leq C(\rho(x,y)+\rho(y,z))\leq C^3( \widetilde{\rho}([x],[y])+\widetilde{\rho}([y],[z])). \end{equation}
The lemma is proved. \end{proof}
Let $(\mathcal{A},\rho)$ be a pseudo-metric space. In the same manner as for metric spaces, we can define convergence of sequences, dense subsets and completeness of ${\mathcal A}$. (See \cite{Pepo90, Bao18}.) The following lemma is obvious.
\begin{lem}\label{extendLip} Let $(\mathcal{A},\rho)$ and $(\mathcal{A}',\rho')$ be two complete pseudo-metric spaces. Suppose $B\subset\mathcal{A}$ is $\rho$-dense in $\mathcal{A}$ and $B'\subset\mathcal{A}'$ is $\rho'$-dense in $\mathcal{A}'$. If $B\simeq B'$, then $\mathcal{A}\simeq\mathcal{A}'$. \end{lem}
Let $\Sigma=\{1,\dots, N\}$. For $a\in\Sigma$, we use $a^k$ to denote the word consisting of $k$ copies of $a$. Let $\Sigma^{\infty}$ and $\Sigma^{k}$ be the sets of infinite words and words of length $k$ over $\Sigma$, respectively. Let $\Sigma^*=\bigcup_{k\geq0} \Sigma^{k}$.
Let $M$ be a $\Sigma$-automaton. If $(\Sigma^\infty, \rho_M)$ is a pseudo-quasimetric space, then we can define a pseudo-metric space by \eqref{metric}, which we denote by $({\mathcal A}_M, \rho_M)$. Denote by ${\mathbf{x}}\wedge{\mathbf{y}}$ the maximal common prefix of ${\mathbf{x}}$ and ${\mathbf{y}}$. By \eqref{eq:id}, we see that
$$T_M({\mathbf{x}},{\mathbf{y}})\geq |{\mathbf{x}} \wedge {\mathbf{y}}|,$$
where $|W|$ denotes the length of a word $W$.
\begin{lem}\label{complete} If $(\Sigma^\infty, \rho_M)$ is a pseudo-quasimetric space, then the induced pseudo-metric space $(\mathcal{A}_M,\rho_M)$ is complete. \end{lem}
\begin{proof}We equip $\Sigma^\infty$ with the following metric: For any ${\mathbf{x}},{\mathbf{y}}\in\Sigma^\infty$, define $d({\mathbf{x}},{\mathbf{y}})=2^{-|{\mathbf{x}}\wedge{\mathbf{y}}|}$. It is folklore that $(\Sigma^\infty,d)$ is a compact metric space.
For any Cauchy sequence $\{[{\mathbf{x}}_k]\}_{k=1}^\infty$ of $\mathcal{A}_M$, let $\{{\mathbf{x}}_{k_p}\}_{p=1}^\infty$ be a subsequence of $\{{\mathbf{x}}_k\}_{k=1}^\infty$ which converges to ${\mathbf{y}}\in\Sigma^\infty$ in the metric $d$. Then $\lim_{p\rightarrow\infty}|{\mathbf{x}}_{k_p}\wedge{\mathbf{y}}|=+\infty$, so we have $[{\mathbf{x}}_{k_p}]\to[{\mathbf{y}}]$ with respect to $\rho_M$. Since $\{[{\mathbf{x}}_k]\}_{k=1}^\infty$ is Cauchy in $\rho_M$, we conclude that $[{\mathbf{x}}_k]\to[{\mathbf{y}}]$ in $\rho_M$. The lemma is proved. \end{proof}
\begin{lem}\label{Omegadense} Suppose $(\mathcal{A}_M, \rho_M)$ is a pseudo-metric space. Let $\kappa\in \Sigma$. Then the set $\Omega=\{[\omega\kappa^{\infty}];~\omega\in\Sigma^*\}$ is $\rho_M$-dense in $\mathcal{A}_M$. \end{lem} \begin{proof} Pick $[{\mathbf{x}}]\in\mathcal{A}_M$, denote ${\mathbf{x}}=(x_i)_{i=1}^\infty$ and let ${\mathbf{x}}_k=x_1\dots x_k\kappa^\infty$; then $[{\mathbf{x}}_k]\in\Omega$. Clearly, $[{\mathbf{x}}_k]\to[{\mathbf{x}}]$. This finishes the proof. \end{proof}
\section{\textbf{Triangle automaton and gasket automaton}} In this section, we introduce the triangle automaton and gasket automaton.
\subsection{Triangle automaton} \ \\ \indent Let $\Sigma=\{1,\cdots, N\}$. Let $\{\alpha,\beta,\gamma\}$ be a subset of $\Sigma\cup \{-1,-2,-3\}$. To each ordered pair $u,v\in \{\alpha,\beta,\gamma\}$ with $u\ne v$, we associate a state, denoted $S_{uv}$. We set the state set $Q$ to be \begin{equation}\label{states} Q=\{S_{\alpha\gamma},S_{\beta\gamma},S_{\alpha\beta},S_{\beta\alpha}, S_{\gamma\beta},S_{\gamma\alpha}\}\cup \{Id, Exit\}. \end{equation}
\begin{defn}[Triangle automaton]\label{triangle-auto} \emph{A $\Sigma$-automaton $M=\{Q,\Sigma^2,\delta,Id,Exit\}$ is called a \emph{triangle automaton} if
$Q$ is given by \eqref{states}, and the transition function $\delta$ satisfies the following conditions: Let $u,v\in\{\alpha,\beta,\gamma\}$ and $u\ne v$.}
\emph{ (i) (Symmetry) If $\delta(Id,(i,j))=S_{uv}$, then $\delta(Id,(j,i))=S_{vu}$.}
\emph{ (ii) (Loop) $\delta(S_{uv},(i,j))=\left \{ \begin{array}{ll} S_{uv}, &\text{ if } (i,j)=(v,u);\\ Exit, & \text{ otherwise.} \end{array} \right .$ } \end{defn}
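As a concrete illustration (not taken from the paper), the transition function of a triangle automaton and the resulting surviving time can be sketched in Python. The transition table below is a hypothetical choice over $\Sigma=\{1,2,3\}$ modelled on the Sierpi\'nski gasket, with $\alpha=1$, $\beta=2$, $\gamma=3$; the loop condition (ii) is exactly the rule that $S_{uv}$ survives only on the pair $(v,u)$.

```python
# Hypothetical triangle automaton over Sigma = {1, 2, 3}, modelled on the
# Sierpinski gasket with alpha = 1, beta = 2, gamma = 3.  States are the
# strings "Id", "Exit", or tuples ("S", u, v) standing for S_uv.

ALPHA, BETA, GAMMA = 1, 2, 3

# delta(Id, (i, j)) for i != j: an assumed table matching the Sierpinski
# gasket, where phi_i(K) and phi_j(K) meet at the midpoint of side ij.
# It satisfies the symmetry condition (i) of the definition.
ID_TRANSITIONS = {
    (1, 2): ("S", ALPHA, BETA),  (2, 1): ("S", BETA, ALPHA),
    (1, 3): ("S", ALPHA, GAMMA), (3, 1): ("S", GAMMA, ALPHA),
    (2, 3): ("S", BETA, GAMMA),  (3, 2): ("S", GAMMA, BETA),
}

def delta(state, pair):
    """Transition function of the (hypothetical) triangle automaton."""
    if state == "Exit":
        return "Exit"
    if state == "Id":
        if pair[0] == pair[1]:
            return "Id"                      # equal symbols: stay in Id
        return ID_TRANSITIONS.get(pair, "Exit")
    # loop condition (ii): S_uv survives exactly on the pair (v, u)
    _, u, v = state
    return state if pair == (v, u) else "Exit"

def surviving_time(x, y):
    """Largest k with S_k != Exit, computed on finite prefixes x, y."""
    state, k = "Id", 0
    for i, j in zip(x, y):
        state = delta(state, (i, j))
        if state == "Exit":
            return k
        k += 1
    return k  # no Exit on this prefix: a lower bound for T_M(x, y)
```

For instance, for the prefixes $x=132\cdots$ and $y=312\cdots$ the itinerary is $id\to S_{\alpha\gamma}\to S_{\alpha\gamma}\to Exit$, so the surviving time of this prefix is $2$; and the sketch also exhibits the bound $T_M({\mathbf{x}},{\mathbf{y}})\geq |{\mathbf{x}}\wedge{\mathbf{y}}|$.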
Let us call $S_{uv}$ the \emph{mirror state} of $S_{vu}$, and let the mirror states of $Id$ and $Exit$ be themselves. Clearly, if $S$ is translated to $S'$ by $(i,j)$, then the mirror state of $S$ is translated to the mirror state of $S'$ by $(j,i)$. Therefore, we have
\begin{lem}\label{lem:symmetry} Let $(S_k)_{k\geq 1}$ and $(S'_k)_{k\geq 1}$ be the itineraries of $({\mathbf{x}},{\mathbf{y}})$ and $({\mathbf{y}},{\mathbf{x}})$, respectively. Then $S_k'$ is the mirror state of $S_k$. Consequently, $T_M({\mathbf{x}},{\mathbf{y}})=T_M({\mathbf{y}},{\mathbf{x}})$. \end{lem}
We denote $$ \mathcal{P}_{uv}=\{(i,j)\in\Sigma^2;\delta(Id,(i,j))=S_{uv}\}. $$ Then by symmetry of the automaton, $\mathcal{P}_{vu}=\{(i,j);(j,i)\in\mathcal{P}_{uv}\}$. The transition diagram of a triangle automaton $M$ is illustrated in Figure \ref{diagram}. Clearly, $M$ is completely determined by $\mathcal{P}_{\alpha\beta},\mathcal{P}_{\alpha\gamma},\mathcal{P}_{\beta\gamma}$.
\begin{figure}
\caption{The transition diagram of triangle automaton $M$.}
\label{diagram}
\end{figure}
If $(i,j)\in\mathcal{P}_{\alpha\gamma}$, then we denote $i\lhd_{\alpha\gamma}j$ and say $i$ is the \emph{$\alpha\gamma$-predecessor} of $j$ and $j$ is the \emph{$\alpha\gamma$-successor} of $i$; similarly, we define $i\lhd_{\beta\gamma}j$, $i\lhd_{\alpha\beta}j$. Moreover, according to Definition \ref{triangle-auto}(i), we make the convention that $i\lhd_{uv}j$ if and only if $j\lhd_{vu}i$.
\subsection{Gasket automaton}
\ \\ \indent Firstly, we recall some notions of graph theory, see \cite{Bal2000}. Let $G=(V, \mathcal{E})$ be a directed graph, where $V$ is the {vertex set} and $\mathcal{E}$ is the {edge set}. Each edge $\mathbf{e}$ is associated with an ordered pair $(u,v)$ of vertices; we say $\mathbf{e}$ is \emph{incident out} of $u$ and \emph{incident into} $v$.
The number of edges incident out of a vertex $v$ is the \emph{outdegree} of $v$ and is denoted by $\deg^+(v)$. The number of edges incident into a vertex $v$ is the \emph{indegree} of $v$ and is denoted by $\deg^-(v)$. If $\deg^-(v)=0$, then we say $v$ is \emph{minimal}; if $\deg^+(v)=0$, then we say $v$ is \emph{maximal}. If $v$ is both minimal and maximal, then we say $v$ is \emph{isolated}.
A \emph{directed walk} joining vertex $v_1$ to vertex $v_k$ in $G$ is a sequence $(v_1, v_2, \dots, v_k)$ with $(v_{i},v_{i+1})\in\mathcal{E}$. In addition, if all $v_i (1\le i\le k)$ are distinct, then we call it a \emph{path}. If all $v_i (1\le i\le k-1)$ are distinct and $v_k=v_1$, then we call it a \emph{cycle}.
A path $(v_1, v_2, \dots, v_k)$ is called a \emph{chain}, if $v_1$ is minimal and $v_k$ is maximal.
For a triangle automaton $M$, we will regard $(\Sigma,\mathcal{P}_{\alpha\gamma})$, $(\Sigma,\mathcal{P}_{\beta\gamma})$ and $(\Sigma,\mathcal{P}_{\alpha\beta})$ as three graphs.
A symbol $j\in \Sigma$ is said to be \emph{$\alpha\beta$-minimal} (resp. maximal) if it is minimal (resp. maximal) in $(\Sigma,\mathcal{P}_{\alpha\beta})$. Similarly, we can define $\alpha\gamma$-minimal (maximal) and $\beta\gamma$-minimal (maximal).
Moreover, we make the convention that $j$ is $uv$-minimal
if it is $vu$-maximal.
\begin{defn}[Gasket automaton]\label{gasket-auto} \emph{A triangle automaton $M=\{Q,\Sigma^2,\delta,Id,Exit\}$ is called a \emph{gasket automaton} if
$\delta$ satisfies the following conditions: Let $u,v\in\{\alpha,\beta,\gamma\}$ and $u\ne v$.}
\emph{(i) (Uniqueness) If $i\lhd_{uv} j$ and $i\lhd_{uv} j'$, then $j=j'$.}
\emph{(ii) (Gathering condition) Any two of the following statements imply the third one: \ding{172} $a\lhd_{\alpha\gamma}c$; \ding{173} $a\lhd_{\beta\gamma}b$; \ding{174} $b\lhd_{\alpha\beta}c$.}
\emph{(iii) (Boundary condition) If $\alpha\in \Sigma$, then $\alpha$ is $\alpha\gamma$-minimal and $\alpha\beta$-minimal; similarly, if $\beta\in \Sigma$, then $\beta$ is $\beta\gamma$-minimal and $\beta\alpha$-minimal; if
$\gamma\in \Sigma$, then $\gamma$ is $\gamma\alpha$-minimal and $\gamma\beta$-minimal.} \end{defn}
By the uniqueness property, we see that for any $u,v\in\{\alpha,\beta,\gamma\}$, the graph $(\Sigma,\mathcal{P}_{uv})$ is a union of disjoint chains and cycles.
\begin{figure}
\caption{An illustration of the boundary condition: the six positions with crosses are forbidden.}
\label{fig_boundary}
\end{figure}
\begin{remark} \emph{Let $K$ be a fractal gasket. If $\varphi_a(\omega_\gamma)=\varphi_b(\omega_\beta)=\varphi_c(\omega_\alpha)$, then $\varphi_a(\triangle)$, $\varphi_b(\triangle)$ and $\varphi_c(\triangle)$ gather at one point. This is the motivation for the gathering condition. } \end{remark}
\subsection{Induced pseudo-metric space}
\ \\ \indent Let $M$ be a triangle automaton. For $({\mathbf{x}},{\mathbf{y}})\in\Sigma^{\infty}\times\Sigma^{\infty}$,
let $(S_{M,i})_{i\ge 0}$ be the itinerary of $({\mathbf{x}},{\mathbf{y}})$; recall that the surviving time
$T_M({\mathbf{x}},{\mathbf{y}})$ is the largest $k$ such that $S_{M,k}\neq Exit.$
We will use $S\stackrel{(i,j)}\longrightarrow S'$ as an alternative notation for $\delta(S,(i,j))=S'$,
and denote the initial state by $id$ instead of $Id$ for clarity.
\begin{prop}\label{dis} Let $M$ be a gasket automaton. For any ${\mathbf{x}},{\mathbf{y}},{\mathbf{z}}\in\Sigma^\infty$ we have \begin{equation}\label{dis1} \min\{T_M({\mathbf{x}},{\mathbf{y}}), T_M({\mathbf{x}},{\mathbf{z}})\}\le T_M({\mathbf{y}},{\mathbf{z}})+1. \end{equation} \end{prop} \begin{proof} We will denote $T:=T_M$ for simplicity. Clearly, \eqref{dis1} holds if either $T({\mathbf{y}},{\mathbf{z}})=\infty$ or any two of ${\mathbf{x}},{\mathbf{y}},{\mathbf{z}}$ are equal.
So we assume that $T({\mathbf{y}},{\mathbf{z}})<\infty$ and ${\mathbf{x}},{\mathbf{y}},{\mathbf{z}}$ are distinct.
Denote $\ell:=|{\mathbf{y}}\wedge{\mathbf{z}}|$. Then $T({\mathbf{y}},{\mathbf{z}})\ge\ell$, and at least one of $|{\mathbf{x}}\wedge{\mathbf{y}}|\le\ell$ and $|{\mathbf{x}}\wedge{\mathbf{z}}|\le\ell$ holds. Without loss of generality, assume that $$
k=|{\mathbf{x}}\wedge{\mathbf{y}}|\le |{\mathbf{x}}\wedge{\mathbf{z}}|. $$ Suppose on the contrary that \eqref{dis1} is false, then $T({\mathbf{x}},{\mathbf{y}})>\ell+1$ and $T({\mathbf{x}},{\mathbf{z}})>\ell+1$.
\textit{Case 1.} $k<\ell$.
In this case we have
$|{\mathbf{x}}\wedge{\mathbf{y}}|=|{\mathbf{x}}\wedge{\mathbf{z}}|=k$,
which together with $T({\mathbf{x}},{\mathbf{y}})>\ell+1$ implies that the first $\ell+3$ states (including $id$) of the itinerary of $({\mathbf{x}},{\mathbf{y}})$ are \begin{equation}\label{dis2} id\to(Id)^k\to (S_{uv})^{\ell+2-k}, \quad \text{ where }u,v\in\{\alpha,\beta,\gamma\}. \end{equation} So $(x_{k+1}, y_{k+1})\in \mathcal{P}_{uv}$ and $$ (\sigma^k({\mathbf{x}}),\sigma^k({\mathbf{y}}))=(x_{k+1}v^{\ell+1-k}\cdots, y_{k+1}u^{\ell+1-k}\cdots). $$
By $|{\mathbf{y}}\wedge{\mathbf{z}}|=\ell$,
the first $\ell+1$ states (including $id$) of the itinerary of $({\mathbf{x}},{\mathbf{z}})$ are the same as that of $({\mathbf{x}},{\mathbf{y}})$; in particular, the $(\ell+1)$-th state is $S_{uv}$. Moreover,
since $T({\mathbf{x}},{\mathbf{z}})>\ell+1$, a prefix of the itinerary of $({\mathbf{x}},{\mathbf{z}})$ is also given by \eqref{dis2}.
It follows that
$\sigma^k({\mathbf{z}})=y_{k+1}u^{\ell+1-k}\cdots$ and $|{\mathbf{y}}\wedge{\mathbf{z}}|\geq \ell+2$, which is a contradiction.
Hence \eqref{dis1} holds in this case.
\textit{Case 2.} $k=\ell=|{\mathbf{x}}\wedge{\mathbf{z}}|$.
Then $x_1\dots x_k=y_1\dots y_k=z_1\dots z_k$ and $x_{k+1},y_{k+1}$ and $z_{k+1}$ are distinct. So the itinerary of $({\mathbf{x}},{\mathbf{y}})$ begins with \begin{equation}\label{dis3} id \to(Id)^k\to (S_{uv})^2, \text{ where } u,v\in\{\alpha,\beta,\gamma\}, \end{equation}
and the itinerary of $({\mathbf{x}},{\mathbf{z}})$ begins with \begin{equation}\label{dis4} id\to(Id)^k\to (S_{wv'})^2,\text{ where } w,v'\in\{\alpha,\beta,\gamma\}. \end{equation} The $(k+2)$-th transitions of \eqref{dis3} and \eqref{dis4} imply $(x_{k+2},y_{k+2})=(v,u)$ and $(x_{k+2}, z_{k+2})=(v', w)$, and it follows that $v=v'$. The $(k+1)$-th transitions imply
$x_{k+1}\lhd_{uv}y_{k+1}$ and $x_{k+1}\lhd_{wv}z_{k+1}$, then $w\neq u$ by the uniqueness property, and $y_{k+1}\lhd_{wu}z_{k+1}$ by the gathering condition.
Let $p$ be the largest integer such that $$ (\sigma^k({\mathbf{x}}),\sigma^k({\mathbf{y}}),\sigma^k({\mathbf{z}}))=(x_{k+1}v^p\cdots, y_{k+1}u^p\cdots, z_{k+1}w^p\cdots). $$ Then all of $T({\mathbf{x}},{\mathbf{y}})$, $T({\mathbf{y}},{\mathbf{z}})$, $T({\mathbf{x}},{\mathbf{z}})$ are no less than $k+p+1$, and two of them are no larger than $k+p+2$ since $(x_{k+p+2},y_{k+p+2},z_{k+p+2})\neq (v,u,w)$. The proposition holds in this case.
\textit{Case 3.} $k=\ell<|{\mathbf{x}}\wedge{\mathbf{z}}|$.
Since $x_{k+1}\neq y_{k+1}$, equation \eqref{dis3} still holds. Let $p$ be the largest integer such that $$ (\sigma^k({\mathbf{x}}),\sigma^k({\mathbf{y}}))=(x_{k+1}v^p\cdots, y_{k+1} u^p\cdots). $$ Then $T({\mathbf{x}},{\mathbf{y}})=k+1+p$.
Let $q$ be the largest integer such that $\sigma^k({\mathbf{z}})=x_{k+1}v^q\cdots$. If $q\geq p-1$, then $T({\mathbf{y}},{\mathbf{z}})\geq k+p$ since $|{\mathbf{x}}\wedge {\mathbf{z}}|\geq k+p$, and the proposition holds in this case. If $q\leq p-2$, then $T({\mathbf{y}},{\mathbf{z}})=k+q+1$ and $$(\sigma^{k+q+1}({\mathbf{x}}), \sigma^{k+q+1}({\mathbf{z}}))=(v^2\cdots, \tilde v \eta \cdots), \quad \text{ where } \tilde v\neq v.$$ Suppose $Id$ is not translated to $Exit$ by $(v,\tilde v)$, say, $Id\stackrel{(v,\tilde v)}\longrightarrow S$. If $S=S_{wv}$ for some $w\in\{\alpha,\beta,\gamma\}$, then $\tilde v\lhd_{vw} v$, which violates the boundary condition.
Thus $S\stackrel{(v,\eta)}\longrightarrow Exit$ and $T({\mathbf{x}},{\mathbf{z}})\leq k+q+2$. The proposition holds in this scenario. This finishes the proof of \eqref{dis1}. \end{proof}
Now we can prove Theorem \ref{lem:feasible}.
\begin{proof}[\textbf{Proof of Theorem \ref{lem:feasible}}] Let ${\mathbf{x}},{\mathbf{y}},{\mathbf{z}}\in\Sigma^\infty$. Denote $\rho:=\rho_{M,\xi}$.
First, it is obvious that $T_M({\mathbf{x}},{\mathbf{x}})=\infty$, so $\rho({\mathbf{x}},{\mathbf{x}})=0$.
Secondly, $T_M({\mathbf{x}},{\mathbf{y}})=T_M({\mathbf{y}},{\mathbf{x}})$ by Lemma \ref{lem:symmetry}, so $\rho({\mathbf{x}},{\mathbf{y}})=\rho({\mathbf{y}},{\mathbf{x}})$.
Thirdly, by Proposition \ref{dis}, we have $$ \rho({\mathbf{x}},{\mathbf{y}})+\rho({\mathbf{y}},{\mathbf{z}})=\xi^{T_M({\mathbf{x}},{\mathbf{y}})}+\xi^{T_M({\mathbf{y}},{\mathbf{z}})} \geq \xi^{T_M({\mathbf{x}},{\mathbf{z}})+1}=\xi\rho({\mathbf{x}},{\mathbf{z}}). $$ The theorem is proved. \end{proof}
Hence a gasket automaton $M$ induces a pseudo-metric space, which we denote by $({\mathcal A_M}, \rho_M)$. By \eqref{eq:xi2} and \eqref{eq:xi3}, we have \begin{equation}\label{eq:xi22} \rho_M({\mathbf{x}},{\mathbf{y}})\leq \xi^{-2}\rho_M([{\mathbf{x}}],[{\mathbf{y}}]) \text{ and } \end{equation} \begin{equation}\label{eq:xi33} \rho_M([{\mathbf{x}}],[{\mathbf{z}}])\leq \xi^{-3} (\rho_M([{\mathbf{x}}],[{\mathbf{y}}])+\rho_M([{\mathbf{y}}],[{\mathbf{z}}])). \end{equation}
\section{\textbf{Topology automaton of fractal gasket}}\label{sec:structure}
The neighbor graph (automaton) of self-similar sets is an important tool in fractal geometry, see \cite{BanM09,YZ18,RaoZhu16}. The topology automaton
is a simplified version of the neighbor automaton.
\begin{defn}[Topology automaton]\emph{Let $K$ be a fractal gasket generated by $\{\varphi_j\}_{j=1}^N$. Let $\{\alpha,\beta,\gamma\}$ be a subset of $\Sigma\cup\{-1,-2,-3\}$ defined in Section 1. Let $M_K$ be a triangle automaton satisfying: For $i\neq j$, $$ \delta(Id,(i,j))=\left \{ \begin{array}{ll} S_{uv}, & \text{ if } u,v\in\Sigma \text{ and } \varphi_i(\omega_v)=\varphi_j(\omega_u),\\ Exit, &\text{ if } \varphi_i(K)\cap \varphi_j(K)=\emptyset. \end{array} \right . $$ We call $M_K$ the \emph{topology automaton} of $K$. } \end{defn}
\begin{lem} The topology automaton of a fractal gasket is always a gasket automaton. \end{lem}
\begin{proof} Let $K$ be a fractal gasket generated by $\{\varphi_j\}_{j=1}^N$.
Since the functions $\varphi_i$ are all distinct, $\varphi_i(K)$ has at most one neighbor in the $\theta$-direction for each $\theta\in\{\exp(2\pi \mathbf{i}k/6);~k=0,1,\dots, 5\}$.
This verifies the uniqueness property of the gasket automaton.
The gathering condition and the boundary condition are obvious. \end{proof}
For $A,B\subset\mathbb{R}^2$, let $\mathop{\rm dist}\nolimits(A,B)=\min\{\|a-b\|;~a\in A, b\in B\}$. Zhu and Yang \cite{YZ18} defined the sharp separation condition for self-similar sets with uniform contraction ratio. We extend it to general self-similar sets.
\begin{defn}[Sharp separation condition]\emph{ A self-similar set $K$ is said to satisfy the \emph{sharp separation condition},
if there exists a constant $C'>0$ such that for any $k\geq 1$ and $I, J\in\Sigma^k$, $\varphi_I(K)\cap\varphi_J(K)=\emptyset$ implies that
$$dist(\varphi_I(K),\varphi_J(K))\ge C'\min\{\text{diam } \varphi_I(K), \text{diam } \varphi_J(K)\}.$$
}
\end{defn}
\begin{lem}\label{lem:sharp} A fractal gasket always satisfies the sharp separation condition. \end{lem} \begin{proof} Let $K$ be a fractal gasket with IFS $\Phi=\{\varphi_i\}_{i=1}^N$. Denote $K_i=\varphi_i(K)$ for $1\le i\le N$. Let $$C_1=\min\{dist(K_i,K_j); i,j\in \Sigma \text{ and } K_i\cap K_j=\emptyset\};$$ $$ C_2=\min\{dist(K_i,z); i\in \Sigma, z\in \{\omega_\alpha,\omega_\beta,\omega_\gamma\} \text{ and } z\not\in K_i\}. $$
For any $I=x_1\dots x_k, J=y_1\dots y_k\in\Sigma^k$, suppose that $\varphi_I(K)\cap\varphi_J(K)=\emptyset$. Let $0\le\ell\le k-1$ be the largest integer such that $\varphi_{x_1\dots x_\ell}(K)\cap\varphi_{y_1\dots y_\ell}(K)\ne\emptyset$.
If $x_1\dots x_{\ell}=y_1\dots y_{\ell}$, then \begin{align*} &dist(\varphi_I(K),\varphi_J(K)) = r_{x_1\dots x_{\ell}} dist(K_{x_{\ell+1}}, K_{y_{\ell+1}})\ge \frac{C_1}{\text{diam}(K)} \text{diam}(\varphi_I(K)). \end{align*}
If $x_1\dots x_{\ell}\neq y_1\dots y_{\ell}$, let $z_0$ be the unique element of $\varphi_{x_1\dots x_\ell}(K)\cap \varphi_{y_1\dots y_\ell}(K)$. Let us assume that $z_0\not\in \varphi_{x_1\dots x_{\ell+1}}(K)$ without loss of generality. Then \begin{align*} dist(\varphi_I(K),\varphi_J(K))&\ge dist(\varphi_{x_1\dots x_{\ell+1}}(K),z_0)\ge \frac{C_2}{\text{diam}(K)} \text{diam}(\varphi_I(K)). \end{align*} The lemma is proved. \end{proof}
Define $\pi:~\Sigma^\infty\to K$, which we call the \emph{coding map}, by \begin{equation} \big\{\pi({\mathbf{x}})\big\}=\bigcap_{i\geq1} \varphi_{x_1\cdots x_i}(K). \end{equation} If $\pi({\mathbf{x}})=x\in K$, then the sequence ${\mathbf{x}}$ is called a \emph{coding} of $x$.
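The coding map can be illustrated numerically. The Python sketch below is an assumption, not part of the paper: it takes the Sierpi\'nski gasket with corners $\omega_1,\omega_2,\omega_3$ and the maps $\varphi_j(z)=(z+\omega_j)/2$, and approximates $\pi({\mathbf{x}})$ from a finite prefix of ${\mathbf{x}}$.

```python
import math

# Corners of the unit triangle (an assumed layout for the Sierpinski gasket).
OMEGA = {1: (0.0, 0.0), 2: (1.0, 0.0), 3: (0.5, math.sqrt(3) / 2)}

def phi(j, z):
    """Contraction phi_j(z) = (z + omega_j) / 2 toward the corner omega_j."""
    w = OMEGA[j]
    return ((z[0] + w[0]) / 2, (z[1] + w[1]) / 2)

def coding_map(word, z0=(0.0, 0.0)):
    """Approximate pi(x) via the finite prefix `word` of the coding x:
    compose phi_{x_1} o ... o phi_{x_n} applied to an arbitrary start point;
    the contractions make the result independent of z0 in the limit."""
    z = z0
    for j in reversed(word):   # phi_{x_1} is applied last (outermost)
        z = phi(j, z)
    return z
```

For example, the two codings $12^\infty$ and $21^\infty$ are both mapped (approximately, using long finite prefixes) to the midpoint $\varphi_1(\omega_2)=\varphi_2(\omega_1)=(1/2,0)$, illustrating that a point of $K$ may have more than one coding.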
\begin{proof}[\textbf{Proof of Theorem \ref{thm:Holder}}] Take $x,y\in K$. Let ${\mathbf{x}}$ and ${\mathbf{y}}$ be a coding of $x$ and $y$, respectively. Let $k=T({\mathbf{x}},{\mathbf{y}})$ be the surviving time in the topology automaton $M_K$. Then the $k$-th cylinders containing $x$ and that containing $y$ either coincide or have non-empty intersection. It follows that $$
\|x-y\|\leq 2(r^*)^{k}. $$ On the other hand, the $(k+1)$-th cylinders containing $x$ and that containing $y$ are disjoint, so by the sharp separation condition, we have $$
\|x-y\|\geq C'(r_*)^{k+1}, $$ where $C'$ is the constant in the sharp separation condition. Recall that $s=\sqrt{\log r^*/\log r_*}$ and $\xi=(r_*)^s$. So $$\rho_{M_K, \xi}({\mathbf{x}},{\mathbf{y}})=\xi^k=(r_*)^{sk}=(r^*)^{k/s}.$$ Setting $C=\max\{2, 1/(r_*C')\}$, we obtain the theorem. \end{proof}
\section{\textbf{$\gamma$-isolated gasket automaton and simplification}}\label{sec:gamma} In this section we impose additional conditions on the gasket automaton so that it can be simplified.
\begin{defn}[$\gamma$-isolated condition]\label{gamma-auto} \emph{ Let $M$ be a gasket automaton. We say $M$ satisfies the \emph{$\gamma$-isolated condition} if \\ \indent (i) $\{\alpha,\beta,\gamma\}\subset \Sigma$; \\ \indent (ii) The graph $(\Sigma,\mathcal{P}_{\alpha\gamma}\cup\mathcal{P}_{\beta\gamma})$ has no cycle;\\
\indent (iii) $\gamma$ is isolated, in the sense that it is $\alpha\gamma$-isolated, $\beta\gamma$-isolated and $\alpha\beta$-isolated. } \end{defn}
\begin{lem}If $K$ is a fractal gasket such that
$\{\omega_\alpha, \omega_\beta,\omega_\gamma\}\subset K$ and $K$ satisfies the top isolated condition, then the topology automaton $M_K$ satisfies the $\gamma$-isolated condition. \end{lem}
\begin{proof} Let $K$ be a fractal gasket with IFS $\Phi=\{\varphi_i\}_{i=1}^N$. If $i\lhd_{\alpha\gamma}j$, then $\varphi_j(\omega_\alpha)=\varphi_i(\omega_\gamma)$; denoting by $c$ the contraction ratio of $\varphi_i$, we have $$\varphi_j(\omega_\alpha)-\varphi_i(\omega_\alpha)= \varphi_i(\omega_\gamma)-\varphi_i(\omega_\alpha)=c(\omega_\gamma-\omega_\alpha),$$ and it follows that $\varphi_j(0,0)$ has a larger second coordinate than $\varphi_i(0,0)$.
The same conclusion holds if $i\lhd_{\beta\gamma}j$. This verifies (ii) in Definition \ref{gamma-auto}. Clearly the top isolated condition implies that $\gamma$ is isolated.
\end{proof}
In the rest of this section, we always assume that $M$ is a gasket automaton satisfying the $\gamma$-isolated condition such that \begin{equation} \mathcal{P}_{\alpha\gamma}\cup\mathcal{P}_{\beta\gamma}\ne\emptyset. \end{equation}
For any $b\in\Sigma$, we say $b$ is \emph{double-maximal} in $M$ if $b$ is both $\alpha\gamma$-maximal and $\beta\gamma$-maximal.
\begin{lem}\label{haschain} There exists $(\tau,\kappa)\in\mathcal{P}_{\alpha\gamma}\cup\mathcal{P}_{\beta\gamma}$ such that $\kappa$ is double-maximal. \end{lem} \begin{proof} Since $\mathcal{P}_{\alpha\gamma}\cup\mathcal{P}_{\beta\gamma}\ne\emptyset$, there exists $(b_1,b_2)\in\mathcal{P}_{\alpha\gamma}\cup\mathcal{P}_{\beta\gamma}$. Let $(a_1,\dots, a_k)$ be the chain in $\mathcal{P}_{\alpha\gamma}\cup\mathcal{P}_{\beta\gamma}$ containing $(b_1,b_2)$, then $(\tau, \kappa)=(a_{k-1},a_k)$ is the desired edge. \end{proof}
From now on, we fix a pair $(\tau, \kappa)$ satisfying Lemma \ref{haschain}. Moreover, we assume that $(\tau,\kappa)\in\mathcal{P}_{\alpha\gamma}$ without loss of generality.
If $\kappa$ has no $\alpha\beta$-predecessor, we set \begin{equation}\label{fig_break1} \mathcal{P}'_{\alpha\beta}=\mathcal{P}_{\alpha\beta}, \ \mathcal{P}'_{\alpha\gamma}=\mathcal{P}_{\alpha\gamma}\setminus\{(\tau,\kappa)\}, \text{ and } \mathcal{P}'_{\beta\gamma}=\mathcal{P}_{\beta\gamma}. \end{equation} If $\kappa$ has an $\alpha\beta$-predecessor, we denote it by $\lambda$ and set \begin{equation}\label{fig_break2} \mathcal{P}'_{\alpha\beta}=\mathcal{P}_{\alpha\beta}, \ \mathcal{P}'_{\alpha\gamma}=\mathcal{P}_{\alpha\gamma}\setminus\{(\tau,\kappa)\}, \text{ and } \mathcal{P}'_{\beta\gamma}=\mathcal{P}_{\beta\gamma}\setminus\{(\tau,\lambda)\}. \end{equation}
Let $M'$ be the triangle automaton determined by $\mathcal{P}'_{\alpha\beta}, \mathcal{P}'_{\alpha\gamma}$ and $\mathcal{P}'_{\beta\gamma}$, and we call it a \emph{one-step simplification} of $M$. If \eqref{fig_break1} holds, we call $M'$ a $(\tau, \kappa)$-simplification, otherwise, we call $M'$ a $(\tau,\kappa,\lambda)$-simplification.
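Since a triangle automaton is determined by $\mathcal{P}_{\alpha\beta},\mathcal{P}_{\alpha\gamma},\mathcal{P}_{\beta\gamma}$, the one-step simplification \eqref{fig_break1}--\eqref{fig_break2} amounts to deleting one or two edges. A minimal Python sketch (the function name, argument names and the concrete edge sets in the usage below are hypothetical, not from the paper) is:

```python
def one_step_simplification(P_ab, P_ag, P_bg, tau, kappa):
    """Return the edge sets of the one-step simplification M':
    drop (tau, kappa) from P_ag; if kappa has an alpha-beta predecessor
    lam, additionally drop (tau, lam) from P_bg."""
    P_ag_new = set(P_ag) - {(tau, kappa)}
    preds = [i for (i, j) in P_ab if j == kappa]  # alpha-beta predecessors
    if preds:
        lam = preds[0]            # unique by the uniqueness property
        P_bg_new = set(P_bg) - {(tau, lam)}       # (tau,kappa,lam)-case
    else:
        P_bg_new = set(P_bg)                      # (tau,kappa)-case
    return set(P_ab), P_ag_new, P_bg_new
```

For instance, with the made-up edge sets $\mathcal{P}_{\alpha\beta}=\{(5,4)\}$, $\mathcal{P}_{\alpha\gamma}=\{(2,4)\}$, $\mathcal{P}_{\beta\gamma}=\{(2,5)\}$ and $(\tau,\kappa)=(2,4)$, the symbol $4$ has the $\alpha\beta$-predecessor $5$, so both $(2,4)$ and $(2,5)$ are deleted, as in \eqref{fig_break2}.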
\begin{figure}
\caption{(a)$\Rightarrow$(b) illustrates a $(\tau,\kappa)$-simplification, while (c)$\Rightarrow$(d) illustrates a $(\tau,\kappa,\lambda)$-simplification.}
\label{break}
\end{figure}
\begin{lem}\label{M'gasketauto} Let $M'$ be the one-step simplification of $M$, then $M'$ is also a gasket automaton satisfying the $\gamma$-isolated condition. \end{lem} \begin{proof} Notice that $(\Sigma,\mathcal{P}'_{\alpha\beta})=(\Sigma,\mathcal{P}_{\alpha\beta})$; moreover, $(\Sigma,\mathcal{P}'_{\alpha\gamma})$ and $(\Sigma,\mathcal{P}'_{\beta\gamma})$ are subgraphs of $(\Sigma,\mathcal{P}_{\alpha\gamma})$ and $(\Sigma,\mathcal{P}_{\beta\gamma})$ respectively, so $M'$ satisfies the uniqueness property and the boundary condition in the definition of gasket automaton. Also, item (ii) in Definition \ref{gamma-auto} holds.
If three edges with vertices $a,b,c$ satisfy the assumptions \ding{172}\ding{173}\ding{174} in the gathering condition, then we call the set $\{(a,b),(b,c),(c,a)\}$ a \emph{family} of $M$. Since one edge in a family determines the other two edges, we obtain that any two families are edge-disjoint.
If $M'$ is a $(\tau,\kappa)$-simplification, then $(\tau,\kappa)$ does not belong to any family, so the simplification does not affect any family, and $M'$ satisfies the gathering condition. If $M'$ is a $(\tau,\kappa,\lambda)$-simplification, then the simplification deletes two members of a family, so $M'$ still satisfies the gathering condition. The lemma is proved. \end{proof}
\begin{figure}
\caption{A family in $M$.}
\label{three}
\end{figure}
\begin{lem}\label{max_element} Let $M'$ be the one-step simplification of $M$. Then\\ \indent \emph{(i)} both $\tau$ and $\kappa$ are double-maximal in $M'$, and $\kappa$ is $\alpha\gamma$-isolated in $M'$.\\ \indent \emph{(ii)} $\kappa\notin\{\alpha,\gamma\}$. \end{lem} \begin{proof} (i) $\kappa$ is double-maximal in $M'$ since it is double-maximal in $M$. The one-step simplification
deletes the edges connecting $\tau$ to its $\alpha\gamma$-successor and $\beta\gamma$-successor (if it exists),
so the other assertions in (i) hold. (See Figure \ref{break} (b) or (d) for an illustration.)
(ii) Since $\tau\lhd_{\alpha\gamma} \kappa$, that $\alpha$ is $\alpha\gamma$-minimal and $\gamma$ is $\alpha\gamma$-isolated imply $\kappa\notin\{\alpha, \gamma\}$.
\end{proof}
The following theorem plays a crucial role in this paper.
\begin{thm}\label{main} Let $M'$ be the one-step simplification of $M$. Then there exists a bijection $g:\Omega\to \Omega$, where $\Omega:=\{\omega\kappa^{\infty};\omega\in\Sigma^*\}$, such that for any ${\mathbf{x}},{\mathbf{y}}\in\Omega$, \begin{equation}\label{eq-key}
|T_M({\mathbf{x}},{\mathbf{y}})-T_{M'}(g({\mathbf{x}}),g({\mathbf{y}}))|\le 5. \end{equation} \end{thm}
In Section \ref{sec:mapg}, we give the construction of the map $g$, and we prove Theorem \ref{main} in Section \ref{sec:time}.
\section{\textbf{Proof of Theorem \ref{spaceLip} and Theorem \ref{thm:Lip}}}\label{sec:proof}
Let $M$ be a gasket automaton satisfying the $\gamma$-isolated condition.
We denote by $M^*$ the gasket automaton determined by $\mathcal{P}^*_{\alpha\gamma}=\mathcal{P}^*_{\beta\gamma}=\emptyset$ and $\mathcal{P}^*_{\alpha\beta}=\mathcal{P}_{\alpha\beta}$, and we call it the \emph{final-simplification} of $M$.
\begin{proof}[\textbf{Proof of Theorem \ref{spaceLip}}] Recall that $\Omega=\{\omega\kappa^{\infty};\omega\in\Sigma^*\}.$ Let $g:\Omega\rightarrow\Omega$ be the map given in Theorem \ref{main}. For any ${\mathbf{x}},{\mathbf{y}}\in\Omega$ we have $$ T_M({\mathbf{x}},{\mathbf{y}})-5\le T_{M'}(g({\mathbf{x}}),g({\mathbf{y}}))\le T_M({\mathbf{x}},{\mathbf{y}})+5, $$ which implies that $\xi^5\rho_M({\mathbf{x}},{\mathbf{y}})\le\rho_{M'}(g({\mathbf{x}}),g({\mathbf{y}}))\le\xi^{-5}\rho_M({\mathbf{x}},{\mathbf{y}}).$ Hence $g$ is bi-Lipschitz.
Define $[{\mathbf{x}}]_M=\{{\mathbf{y}}\in \Sigma^\infty;~ \rho_M({\mathbf{x}},{\mathbf{y}})=0\}$ and set $\Omega_M=\{[{\mathbf{x}}]_M; ~{\mathbf{x}}\in \Omega\}$; similarly we define $[{\mathbf{x}}]_{M'}$ and $\Omega_{M'}$. Define $\widetilde g: \Omega_M\to \Omega_{M'}$ by $\widetilde g([{\mathbf{x}}]_M)=[g({\mathbf{x}})]_{M'}$. We claim that $\widetilde g$ is bi-Lipschitz. First, $$ \begin{array}{rl} & \rho_{M'}(\widetilde g([{\mathbf{x}}]_M), \widetilde g([{\mathbf{y}}]_M))=\rho_{M'}([g({\mathbf{x}})]_{M'}, [g({\mathbf{y}})]_{M'})\\ \leq &\rho_{M'}(g({\mathbf{x}}), g({\mathbf{y}}))\leq \xi^{-5} \rho_M({\mathbf{x}},{\mathbf{y}})\\ \leq & (\xi^{-5})(\xi^{-2}) \rho_M([{\mathbf{x}}]_M, [{\mathbf{y}}]_M). \quad \text{(By \eqref{eq:xi22}.)} \end{array} $$ In the same manner, we can prove the inequality in the other direction. The claim is proved.
Moreover, by Lemma \ref{Omegadense}, $\Omega_M$ is dense in $({\mathcal A}_M,\rho_M)$ and $\Omega_{M'}$ is dense in $({\mathcal A}_{M'},\rho_{M'})$. So ${\mathcal A}_M\simeq {\mathcal A}_{M'}$ by Lemma \ref{extendLip}. \end{proof}
Using Theorem \ref{spaceLip} repeatedly, we obtain
\begin{coro}\label{finalspaceLip} Let $M$ be a gasket automaton and $M^*$ be the final-simplification of $M$. Then $(\mathcal{A}_M,\rho_M)\simeq(\mathcal{A}_{M^*},\rho_{M^*})$. \end{coro}
\begin{proof} Notice that a gasket automaton $M$ admits a one-step simplification provided $\mathcal{P}_{\alpha\gamma}\cup\mathcal{P}_{\beta\gamma}\ne\emptyset$ (see Section 5). Then there exists a sequence $$ M=M_0,\ M_1,\ \dots,\ M_q=M^* $$ such that $M_{j+1}$ is the one-step simplification of $M_j$ for each $0\le j\le q-1$. So the result is a consequence of Theorem \ref{spaceLip}. \end{proof}
Let $E$ and $F$ be the fractal gaskets defined in Theorem \ref{thm:Lip}, and let
$M_E$ and $M_F$ be the topology automata of $E$ and $F$, respectively.
Without loss of generality, we assume that $\alpha=1$ and $\beta=2$ for both $E$ and $F$.
If $\omega_\gamma\in E$, we set $M^*_E=(M_E)^*$ to be the final-simplification of $M_E$, otherwise, we set $M_E^*=M_E.$ Similarly we define $M^*_F$.
\begin{remark}\emph{ Actually, if $\omega_\gamma\not\in E$, then the topology automaton $M_E$ records only the horizontal connectivity relations among $\varphi_j(E), j\in\Sigma$; more precisely, for any $i,j\in\Sigma$, $\varphi_i(E)\cap\varphi_j(E)\ne\emptyset$ if and only if $i\lhd_{\alpha\beta}j$ or $j\lhd_{\alpha\beta}i$. Thus $\mathcal{P}_{\alpha\gamma}\cup\mathcal{P}_{\beta\gamma}=\emptyset$. } \end{remark}
\begin{lem}\label{finalEFLip} There exists an isometry $f:(\mathcal{A}_{M^*_E},\rho_{M^*_E})\to(\mathcal{A}_{M^*_F},\rho_{M^*_F})$. \end{lem} \begin{proof} Let $I=\{a_1,a_2,\dots,a_k\}\subset\Sigma$ be a horizontal-block of $E$. By the definition of horizontal-block and the topology automaton $M_E$, we have $$ a_j\lhd_{\alpha\beta}a_{j+1}\text{ in }M_E,\ 1\le j\le k-1. $$ By the assumptions of Theorem \ref{thm:Lip}, there is a size-preserving bijection from the collection of horizontal-blocks of $E$ to that of $F$, which we denote by $\widehat{h}$. That is, \begin{equation}\label{finalEFLip1} \widehat{h}(I)=\{b_1,b_2,\dots,b_k\} \end{equation} is a horizontal-block of $F$. Define $h:\Sigma\to\Sigma$ by $h(a_j)=b_j$, that is, if $a_j$ is the $j$-th element of a horizontal block $I$ of $E$, then we define $h(a_j)$ to be the $j$-th element of $\widehat{h}(I)$. Then, for any $r,s\in\Sigma$, \begin{equation}\label{finalEFLip2} r\lhd_{\alpha\beta}s\text{ in }M_E^*\quad\text{if and only if}\quad h(r)\lhd_{\alpha\beta}h(s)\text{ in }M_F^*. \end{equation}
Now we define $f:\Sigma^\infty\to\Sigma^\infty$ by $ f((x_i)_{i=1}^\infty)=(h(x_i))_{i=1}^\infty. $ Clearly, $f$ is a bijection and for any ${\mathbf{x}},{\mathbf{y}}\in\Sigma^\infty$,
$T_{M_E^*}({\mathbf{x}},{\mathbf{y}})=T_{M_F^*}(f({\mathbf{x}}),f({\mathbf{y}})).$ It follows that $[{\mathbf{x}}]\mapsto [f({\mathbf{x}})]$ is an isometry from $\mathcal{A}_{M^*_E}$ to $\mathcal{A}_{M^*_F}$. \end{proof}
\begin{proof}[\textbf{Proof of Theorem \ref{thm:Lip}}] Fix $\xi\in (0,1)$. We have $$ \mathcal{A}_{M_E}\simeq \mathcal{A}_{M^*_E} \simeq \mathcal{A}_{M^*_F} \simeq \mathcal{A}_{M_F}, $$ where the first and third relations are due to Corollary \ref{finalspaceLip}, and the second relation is by Lemma \ref{finalEFLip}. Next, by Theorem \ref{thm:Holder}, we have that $E$ is bi-H\"{o}lder equivalent to $\mathcal{A}_{M_E}$ and $F$ is bi-H\"{o}lder equivalent to $\mathcal{A}_{M_F}.$ Thus $E$ is bi-H\"{o}lder equivalent to $F$. (Here we use the fact that $({\mathcal A_M}, \rho_{M,\xi})$ is bi-H\"older equivalent to $({\mathcal A_M}, \rho_{M,\xi'})$ provided $\xi'\in (0,1)$.)
In particular, if both $E$ and $F$ have uniform contraction ratio $r$, then setting $\xi=r$, we obtain $E\simeq F$. \end{proof}
\section{\textbf{A universal map from $\Omega$ to $\Omega$}}\label{sec:mapg}
Let $\Sigma=\{1,2,\dots,N\}$ with $N\ge 4$. Let $\tau,\kappa,\alpha,\gamma\in\Sigma$ be distinct except that
$\tau=\alpha$ is allowed. Set \begin{equation}\label{Omega} \Omega=\{\omega\kappa^{\infty};\omega\in\Sigma^*\}.
\end{equation}
In this section, we construct a bijection $g:\Omega\to\Omega$, which can be realized by a transducer (see Appendix B). In the next section, we will show that the map $g$ is the desired map in Theorem \ref{main}.
\begin{remark} \emph{(i) The discussion in this section is purely symbolic: it is independent of any metric or automaton. }
\emph{(ii) If $\tau$ and $\kappa$ come from a one-step simplification, then
$\kappa\not\in\{ \alpha, \gamma\}$ by Lemma \ref{max_element}, and $\tau\neq \gamma$ since $\gamma$ is isolated.}
\end{remark}
\subsection{Segment decomposition}
\ \\ \indent First we introduce two decompositions of sequences in $\Omega$. Set \begin{equation}\label{CM} \mathcal{C}_M:=\{\tau\gamma^k;k\ge 2\}\cup\{\kappa\alpha^{k}\kappa\gamma;k\ge 0\}. \end{equation}
\begin{defn}[$M$-decomposition]\label{M-segment} \emph{Let ${\mathbf{x}}=(x_i)_{i=1}^\infty\in\Omega$. The longest prefix $X_1$ of ${\mathbf{x}}$ satisfying $X_1\in {\mathcal C}_M\cup \Sigma$ is called the \emph{$M$-initial segment} of ${\mathbf{x}}$.}
\emph{Inductively, each ${\mathbf{x}}=(x_i)_{i=1}^{\infty}\in\Omega$ can be uniquely written as ${\mathbf{x}}=\prod_{j=1}^\infty X_j:=X_1X_2\cdots X_k \cdots,$ where $X_k$ is the $M$-initial segment of $\prod_{j\geq k} X_j$. We call $(X_j)_{j\geq 1}$ the \emph{$M$-decomposition} of ${\mathbf{x}}$.} \end{defn}
Next we define the $M'$-decomposition. Set \begin{equation}\label{CM'} \mathcal{C}_{M'}=\{\kappa\alpha^k\kappa\gamma;k\ge 0\}\cup\{\kappa\alpha^{k}\kappa\gamma\gamma;k\ge 0\}\cup\{\tau\gamma\gamma\}. \end{equation}
\begin{defn}[$M'$-decomposition]\label{M'-segment} \emph{Let ${\mathbf{u}}=(u_i)_{i=1}^\infty\in\Omega$. A word $U_1$ is called the $M'$-initial segment of ${\mathbf{u}}$ if it is the longest prefix of ${\mathbf{u}}$ such that $U_1\in {\mathcal C}_{M'}\cup \Sigma$. Similarly to the above, we define the \emph{$M'$-decomposition} of ${\mathbf{u}}$.
} \end{defn}
Two words are said to be \emph{comparable}, if one is a prefix of the other.
\begin{remark}\label{rem:observe} \emph{Here are two useful observations.}
\emph{ (i) If two elements in ${\mathcal C}_M$ are comparable, then both of them are of the form $\tau\gamma^k$. If two elements in ${\mathcal C}_{M'}$ are comparable, then one of them is $\kappa\alpha^k\kappa\gamma$ and the other is $\kappa\alpha^k\kappa\gamma\gamma$. }
\emph{(ii) Let $W\in {\mathcal C}_M\cup {\mathcal C}_{M'}$. Then $W$ is initialled by a word in $\{\kappa \alpha, \kappa\kappa, \tau\gamma\}$. Moreover, these words cannot appear in $W$ except as a prefix. } \end{remark}
\subsection{Construction of $g$}
First we define $g_0:\mathcal{C}_M\cup\Sigma\rightarrow\mathcal{C}_{M'}\cup\Sigma$ by $$ g_0:\left\{\begin{array}{rl} \tau\gamma^k&\mapsto\kappa\alpha^{k-2}\kappa\gamma,\ k\ge 2;\\ \kappa\alpha^k\kappa\gamma&\mapsto\kappa\alpha^{k-1}\kappa\gamma\gamma,\ k\ge 1;\\ \kappa\kappa\gamma&\mapsto\tau\gamma\gamma;\\ i&\mapsto i, \forall i\in\Sigma. \end{array} \right. $$
Clearly $g_0:\mathcal{C}_M\cup\Sigma\rightarrow\mathcal{C}_{M'}\cup\Sigma$ is a bijection. We define $g:\Omega\rightarrow\Omega$ by \begin{equation}\label{gxdecomposition} g({\mathbf{x}})=\prod_{j=1}^\infty g_0(X_j), \end{equation} where $(X_j)_{j=1}^\infty$ is the $M$-decomposition of ${\mathbf{x}}$. (Notice that any ${\mathbf{x}}\in \Omega$ has an $M$-decomposition of the form $(X_j)_{j=1}^\ell(\kappa)^\infty$, so $g({\mathbf{x}})=(\prod_{j=1}^\ell g_0(X_j))(\kappa)^\infty\in\Omega$.)
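For concreteness, the action of $g$ can be prototyped on finite words. The following sketch is our own illustration, not part of the paper: the letters \texttt{T}, \texttt{K}, \texttt{A}, \texttt{G} stand for $\tau,\kappa,\alpha,\gamma$ (so $N=4$ and $\tau\neq\alpha$), and a finite word $w$ represents the sequence $w\kappa^\infty\in\Omega$. Since no element of $\mathcal{C}_M$ ends with $\kappa$, segments never reach into the $\kappa$-tail, and each branch of $g_0$ preserves length, so $g$ maps length-$n$ words to length-$n$ words.

```python
# Sketch (ours, not from the paper): g_0 and g on finite words over T,K,A,G,
# standing for tau, kappa, alpha, gamma; a word w represents w kappa^infty.

def in_CM(w):
    # C_M = {tau gamma^k : k >= 2}  u  {kappa alpha^k kappa gamma : k >= 0}
    if len(w) >= 3 and w[0] == "T" and set(w[1:]) == {"G"}:
        return True
    if len(w) >= 3 and w[0] == "K" and w.endswith("KG") and set(w[1:-2]) <= {"A"}:
        return True
    return False

def decompose(x):
    """Greedy M-decomposition: each segment is the longest prefix in C_M u Sigma."""
    segs, i = [], 0
    while i < len(x):
        best = 1
        for j in range(i + 3, len(x) + 1):  # elements of C_M have length >= 3
            if in_CM(x[i:j]):
                best = j - i
        segs.append(x[i:i + best])
        i += best
    return segs

def g0(w):
    if len(w) == 1:
        return w
    if w[0] == "T":                      # tau gamma^k -> kappa alpha^{k-2} kappa gamma
        return "K" + "A" * (len(w) - 3) + "KG"
    k = len(w) - 3                       # w = kappa alpha^k kappa gamma
    if k == 0:
        return "TGG"                     # kappa kappa gamma -> tau gamma gamma
    return "K" + "A" * (k - 1) + "KGG"   # -> kappa alpha^{k-1} kappa gamma gamma

def g(x):
    return "".join(g0(s) for s in decompose(x))
```

For instance, `g("TGGG")` returns `"KAKG"`, matching $g(\tau\gamma^3\kappa^\infty)=\kappa\alpha\kappa\gamma\kappa^\infty$; on all words of a fixed length, $g$ acts as a bijection, and the prefix bound of Lemma \ref{lem-length} can be checked exhaustively.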
\begin{remark}\emph{Rao and Zhu \cite{RaoZhu16} constructed the first map of this type, relating two neighbor automata of fractal squares, based on geometric observations. The map $g$ above is an improvement of the map in \cite{RaoZhu16}.
} \end{remark}
\begin{prop}\label{coincide} Let ${\mathbf{x}}=x_1x_2\dots, {\mathbf{u}}=u_1u_2\dots=g({\mathbf{x}})$.\\ \indent \emph{(i)} If $(X_j)_{j\geq 1}$ is the $M$-decomposition of ${\mathbf{x}}$, then the $M'$-decomposition of $g({\mathbf{x}})$ is $\left (g_0(X_j)\right )_{j\geq 1}$.\\ \indent \emph{(ii)} Similarly, if $(U_j)_{j\geq 1}$ is the $M'$-decomposition of ${\mathbf{u}}$, then the $M$-decomposition of $h({\mathbf{u}})$ is $\left ( g^{-1}_0(U_j) \right )_{j\geq 1}$, where $h({\mathbf{u}})=\prod_{j=1}^\infty g_0^{-1}(U_j)$.\\ \indent \emph{(iii)} The map $g:\Omega\rightarrow\Omega$ is a bijection. \end{prop}
\begin{proof} We denote $U\lhd W$ if $U$ is a prefix of $W$.
(i) To prove the first statement, we only need to show that $U_1=g_0(X_1)$.
Suppose on the contrary that $U_1\neq g_0(X_1)$. First, $U_1\lhd g_0(X_1)$ is impossible, since $g_0(X_1)\in {\mathcal C}_{M'}$ and we always choose the longest prefix to be the initial segment. Let $p$ be the least integer such that $U_1\lhd g_0(X_1)\cdots g_0(X_p)$; then $p\geq 2$.
If $|X_j|=1$ for all $j=1,\dots, p$, then ${\mathbf{x}}$ is initialled by $x_1\dots x_{|U_1|}=U_1$, which forces that $U_1\in {\mathcal C}_{M'}\setminus {\mathcal C}_M$, so $U_1=\kappa \alpha^k\kappa\gamma\gamma$ is the only choice. But then the initial segment of ${\mathbf{x}}$ should be $\kappa\alpha^{k}\kappa\gamma$, a contradiction.
So at least one $X_j\in {\mathcal C}_M$. Suppose this happens for $j=j_0$. Since elements in ${\mathcal C}_M$ are initialled by $\kappa\kappa$, $\kappa\alpha$ or $\tau\gamma$, we deduce that $j_0=1$. Thus $g_0(X_1)$ is a proper prefix of $U_1$, which forces that $g_0(X_1)=\kappa\alpha^k\kappa\gamma$ and $U_1=\kappa \alpha^k\kappa\gamma\gamma$. But then $X_1=\tau \gamma^{k+2}$ and $x_{k+4}=\gamma$, which contradicts that $X_1$ is an initial segment.
Item (i) is proved.
(ii) For the second assertion, we only need to show that $X_1=g_0^{-1}(U_1)$, which is almost the same as the proof of (i).
Suppose on the contrary that $X_1\neq g_0^{-1}(U_1)$. Then $X_1\lhd g_0^{-1}(U_1)$ is impossible since $g_0^{-1}(U_1)\in {\mathcal C}_{M}$. Let $p\geq 2$ be the least integer such that $X_1\lhd g_0^{-1}(U_1)\cdots g_0^{-1}(U_p)$.
If $|U_j|=1$ for all $j=1,\dots, p$, then ${\mathbf{u}}$ is initialled by $X_1$, so $X_1\in {\mathcal C}_M\setminus {\mathcal C}_{M'}$ and $X_1=\tau\gamma^k (k\ge 3)$ is the only choice. But then the initial segment of ${\mathbf{u}}$ should be $\tau\gamma\gamma$, a contradiction.
For the same reason as in item (i), we have $U_{1}\in {\mathcal C}_{M'}$. Thus $g_0^{-1}(U_1)$ is a proper prefix of $X_1$, which forces that $g_0^{-1}(U_1)=\tau\gamma^k (k\geq 2)$ and $X_1=\tau\gamma^\ell (\ell>k)$. But then $U_1=\kappa\alpha^{k-2}\kappa\gamma$ and $u_{k+2}=\gamma$,
which is a contradiction. Item (ii) is proved.
(iii) From (i) and (ii) we have $h\circ g=g\circ h=id$, so $g$ is a bijection. \end{proof}
Let $\sigma:\Sigma^\infty\to\Sigma^\infty$ be the shift operator defined by $\sigma((x_k)_{k\ge 1})=(x_k)_{k\geq 2}.$
\begin{lem}\label{lem-length} Let ${\mathbf{x}}=(x_k)_{k\geq 1}, {\mathbf{y}}=(y_k)_{k\geq 1}\in\Omega$. Then
$$|g({\mathbf{x}})\wedge g({\mathbf{y}})|\geq |{\mathbf{x}}\wedge{\mathbf{y}}|-2.$$ In other words, $u_1\cdots u_k$ is determined by $x_1\cdots x_{k+2}$, where $k\ge 1$. \end{lem} \begin{proof} Denote ${\mathbf{u}}=g({\mathbf{x}}), {\mathbf{v}}=g({\mathbf{y}})$. Let $(X_j)_{j=1}^\infty$ and $(Y_j)_{j=1}^\infty$ be the $M$-decompositions of ${\mathbf{x}}$ and ${\mathbf{y}}$ respectively. Denote $U_1=g_0(X_1)$ and $V_1=g_0(Y_1)$.
Let $k=|{\mathbf{x}}\wedge {\mathbf{y}}|-2\geq 1$.
We first prove the lemma in case of $X_1\neq Y_1$. Without loss of generality, we assume that $|Y_1|\geq |X_1|$.
\ding{172} Suppose $|X_1|=1$. In this case we must have $Y_1=\kappa\alpha^q\kappa\gamma$ and $q\geq k$.
Moreover, we have $|X_i|=1$ as long as $i\leq |Y_1|-2$, since a word in ${\mathcal C}_M$ is initialled by
$\kappa\kappa$, $\kappa\alpha$ or $\tau\gamma$. Since $k+1\leq |Y_1|-2$, this implies that $x_1\dots x_{k+1}$ (which is the same as $y_1\dots y_{k+1})$ is a prefix of ${\mathbf{u}}$. Since $V_1=\kappa\alpha^{q-1}\kappa\gamma\gamma$ is initialled by $y_1\dots y_q$, we have
$|{\mathbf{u}}\wedge{\mathbf{v}}|\geq |y_1\dots y_{k+1}\wedge y_1\dots y_{q}|=\min\{k+1,q\}\geq k.$
\ding{173} Suppose $|X_1|>1$. If $X_1=\tau\gamma^{\ell}$, then $Y_1=\tau \gamma^{q}$ with $q>\ell$, so $|{\mathbf{u}}\wedge{\mathbf{v}}|=|U_1\wedge V_1|=|\kappa\alpha^{\ell-2}\kappa\gamma\wedge \kappa\alpha^{q-2}\kappa\gamma|= \ell-1=k$. If $X_1=\kappa\alpha^{\ell}\kappa\gamma$, then $Y_1=\kappa\alpha^q\kappa\gamma$ with $q>\ell$, so
$|{\mathbf{u}}\wedge {\mathbf{v}}|=|\kappa \alpha^{\ell-1}\kappa\gamma\wedge \kappa \alpha^{q-1}\kappa\gamma |= \ell=|{\mathbf{x}} \wedge {\mathbf{y}}|-1 =k+1$.
Hence the lemma is valid if $X_1\neq Y_1$.
If $X_1=Y_1$, denote $p=|X_1|$, and set ${\mathbf{a}}=\sigma^p({\mathbf{x}})$ and ${\mathbf{b}}=\sigma^p({\mathbf{y}})$. Then the lemma holds for ${\mathbf{x}}$ and ${\mathbf{y}}$ if and only if it holds for ${\mathbf{a}}$ and ${\mathbf{b}}$.
So the lemma can be proved by induction. \end{proof}
In Appendix B, we give a transducer which can realize the map $g$. The transducer provides an alternative proof of Lemma \ref{lem-length}.
\section{\textbf{Proof of Theorem \ref{main}}}\label{sec:time}
Let $M$ be a gasket automaton satisfying the $\gamma$-isolated condition, and let $M'$ be a one-step simplification of $M$. For ${\mathbf{x}}, {\mathbf{y}}\in\Omega=\{\omega\kappa^{\infty};\omega\in\Sigma^*\}$, denote ${\mathbf{u}}=g({\mathbf{x}})~{\rm and}~ {\mathbf{v}}=g({\mathbf{y}})$. Let $(X_j)_{j=1}^{\infty}$, $(Y_j)_{j=1}^{\infty}$ be the $M$-decompositions of ${\mathbf{x}}, {\mathbf{y}}$ respectively, and $(U_j)_{j=1}^{\infty}$, $(V_j)_{j=1}^{\infty}$ be the $M'$-decompositions of ${\mathbf{u}}, {\mathbf{v}}$ respectively. Clearly, we always have \begin{equation}\label{eq:Less} T_{M'}({\mathbf{x}},{\mathbf{y}})\leq T_M({\mathbf{x}},{\mathbf{y}}). \end{equation}
Recall that $\mathcal{C}_M=\{\tau\gamma^k;k\ge 2\}\cup\{\kappa\alpha^{k}\kappa\gamma;k\ge 0\}$ and
$\mathcal{C}_{M'}=\{\kappa\alpha^k\kappa\gamma;k\ge 0\}\cup\{\kappa\alpha^{k}\kappa\gamma\gamma;k\ge 0\}\cup\{\tau\gamma\gamma\}$. One should keep in mind that \begin{equation}\label{eq:cycle} S_{uv}\stackrel{(i,j)}\longrightarrow S_{uv} \text{ if and only if } (i,j)=(v,u) \end{equation} in both $M$ and $M'$. Since $\gamma$ is isolated, if $\tilde \gamma\in \Sigma\setminus\{\gamma\}$, then \begin{equation}\label{eq:gammaExit} Id\stackrel{(\gamma,\tilde \gamma)}\longrightarrow Exit \text{ and } Id\stackrel{(\tilde \gamma,\gamma)}\longrightarrow Exit \end{equation} in both $M$ and $M'$.
\begin{lem}\label{gotoExit}
Let ${\mathbf{a}},{\mathbf{b}}\in\Omega$ such that $a_1\neq b_1$. If ${\mathbf{a}}=\alpha^k\kappa\gamma\cdots (k\geq 0)$,
then
$T_M({\mathbf{a}},{\mathbf{b}})\leq 2.$ \end{lem}
\begin{proof} Suppose $(a_1,b_1)\in\mathcal{P}_M$, then $Id\stackrel{(a_1,b_1)}\longrightarrow S_{uv}$ for some $u,v\in\{\alpha,\beta,\gamma\}$. To prove the lemma, by \eqref{eq:cycle}, we only need to show that $a_2\neq v$ or $a_3\neq v$.
If $k=0$, then $(a_1,b_1)=(\kappa,b_1)$ and $a_2=\gamma$. Since $\kappa$ is double-maximal in $M$, that is, $\kappa$ is $\alpha\gamma$-maximal and $\beta\gamma$-maximal, we have $v\ne\gamma$. So $a_2\neq v$.
If $k=1$, then $a_2=\kappa$ and $a_3=\gamma$.
Since $\kappa\neq \gamma$, clearly either $a_2\ne v$ or $a_3\neq v$.
If $k>1$, then $(a_1,b_1)=(\alpha,b_1)$ and $a_2=\alpha$. Since $\alpha$ is $\alpha\gamma$-minimal and $\alpha\beta$-minimal in $M$, we have $v\neq \alpha$. The lemma is proved. \end{proof}
\begin{lem}\label{case-id} \emph{(i)} Let ${\mathbf{x}},{\mathbf{y}}\in\Omega$. If $x_1=y_1$ and $X_1\neq Y_1$, then
$$
T_M({\mathbf{x}},{\mathbf{y}})\leq |{\mathbf{x}}\wedge{\mathbf{y}}|+2. $$
\emph{(ii)} Let ${\mathbf{u}},{\mathbf{v}}\in\Omega$. If $u_1= v_1$ and $U_1\neq V_1$, then
$$
T_{M'}({\mathbf{u}},{\mathbf{v}})\leq |{\mathbf{u}}\wedge{\mathbf{v}}| +2. $$
\end{lem}
\begin{proof}
(i) Let $k=|{\mathbf{x}}\wedge{\mathbf{y}}|$, then $k\ge 1$ and
$x_{k+1}\ne y_{k+1}.$
By $X_1\neq Y_1$ we know that at least one of $X_1$ and $Y_1$ is in $\mathcal{C}_M$, say $X_1\in\mathcal{C}_M$. We consider two cases according to $X_1$.
\ding{172} $X_1=\tau\gamma^\ell(\ell\ge 2)$. In this case, we have $k\le\ell+1$, for otherwise $Y_1=X_1$.
If $k\le\ell$, then $x_{k+1}=\gamma$, so $(x_{k+1},y_{k+1})=(\gamma,\widetilde{\gamma})$, where $\widetilde{\gamma}\in\Sigma\setminus\{\gamma\}$; if $k=\ell+1$, then $Y_1=\tau\gamma^s$ with $s>\ell$, which implies $(x_{k+1},y_{k+1})=(\widetilde{\gamma},\gamma)$. Hence, by formula \eqref{eq:gammaExit}, $Id$ is transferred to $Exit$ by $(x_{k+1},y_{k+1})$,
so $T_M({\mathbf{x}},{\mathbf{y}})=k$.
\ding{173} $X_1=\kappa\alpha^\ell\kappa\gamma(\ell\ge 0)$. In this case, we have $k\le\ell+2$, for otherwise $X_1=Y_1$.
If $k\le\ell+1$, then $x_{k+1}x_{k+2}\cdots=\alpha^p\kappa\gamma\cdots (p\geq 0)$,
and we have $T_M({\mathbf{x}},{\mathbf{y}})\leq k+2$ by Lemma \ref{gotoExit}. If $k=\ell+2$, then $x_{k+1}=\gamma$ and $(x_{k+1},y_{k+1})=(\gamma,\widetilde{\gamma})$, so $T_M({\mathbf{x}},{\mathbf{y}})=k$. This completes the proof of (i).
(ii) Using item (i) we have just proved, we have $$
|{\mathbf{u}}\wedge {\mathbf{v}}|\leq T_{M'}({\mathbf{u}},{\mathbf{v}})\leq T_M({\mathbf{u}},{\mathbf{v}})\leq |{\mathbf{u}}\wedge {\mathbf{v}}|+2. $$
The lemma is proved. \end{proof}
\begin{lem}\label{tool1} Let ${\mathbf{x}}=x_1s^{k}\cdots$, where $k\ge 2$ and $s\in\{\alpha,\beta,\gamma\}$. Then $$ g({\mathbf{x}})=\left \{ \begin{array}{ll} \kappa\alpha^{k-2}\cdots, &\text{ if } x_1s^{k}=\tau\gamma^{k} ;\\ x_1s^{k-2}\cdots, &\text{ otherwise.} \end{array} \right . $$ \end{lem} \begin{proof} By Lemma \ref{lem-length}, $u_1\cdots u_{k-1}$ is determined by $x_1\dots x_{k+1}$. So the lemma holds since $g(\tau\gamma^{k}\kappa^\infty)=\kappa\alpha^{k-2}\kappa\gamma\kappa^\infty$ and $g(x_1s^{k}\kappa^\infty)=x_1s^{k}\kappa^\infty$. \end{proof}
\begin{lem}\label{lem:key-1} Let ${\mathbf{x}},{\mathbf{y}}\in\Omega$. If $X_1\ne Y_1$, then \begin{equation}\label{eq-key-1} T_M({\mathbf{x}},{\mathbf{y}})-T_{M'}({\mathbf{u}},{\mathbf{v}})\le 5. \end{equation} \end{lem}
\begin{proof} Let $S_{M,1}$ be the first state of the itinerary of $({\mathbf{x}},{\mathbf{y}})$ in $M$ (after the initial state $id$). If $S_{M,1}=Exit$, then \eqref{eq-key-1} obviously holds.
If $S_{M,1}=Id$, then $x_1=y_1$, so
$T_M({\mathbf{x}},{\mathbf{y}}) \leq |{\mathbf{x}}\wedge{\mathbf{y}}|+2$ by Lemma \ref{case-id}. By Lemma \ref{lem-length}, we have $T_{M'}({\mathbf{u}},{\mathbf{v}})\geq |{\mathbf{u}}\wedge {\mathbf{v}}|\geq |{\mathbf{x}}\wedge{\mathbf{y}}|-2$. Hence \eqref{eq-key-1} holds in this case.
Finally, we deal with the case $S_{M,1}\in Q\setminus\{Id, Exit\}$.
Suppose $\tau$ has a $\beta\gamma$-successor $\lambda$ in $M$, then $M'$ is the $(\tau,\kappa,\lambda)$-simplification of $M$ with $\mathcal{P}'_{\alpha\beta}=\mathcal{P}_{\alpha\beta}, \ \mathcal{P}'_{\alpha\gamma}=\mathcal{P}_{\alpha\gamma}\setminus\{(\tau,\kappa)\} \text{ and } \mathcal{P}'_{\beta\gamma}=\mathcal{P}_{\beta\gamma}\setminus\{(\tau,\lambda)\}.$ (See Figure \ref{xy}.)
\begin{figure}
\caption{$(\tau,\kappa,\lambda)$-simplification}
\label{xy}
\end{figure}
Denote $k=T_M({\mathbf{x}},{\mathbf{y}})$. If $k\le 5$, \eqref{eq-key-1} holds trivially. Now we assume $k\ge 6$. Denote $S_{M,1}=S_{rs}$, where $r,s\in\{\alpha,\beta,\gamma\}$. Then the itinerary of $({\mathbf{x}},{\mathbf{y}})$ is $id\to (S_{rs})^{k}\to Exit$, so $(x_1,y_1)\in\mathcal{P}_{rs}$ in $M$ and $$ {\mathbf{x}}=x_1s^{k-1}\cdots,\quad {\mathbf{y}}=y_1r^{k-1}\cdots. $$
\textit{Case 1. } $x_1s^{k-1},y_1r^{k-1}\ne\tau\gamma^{k-1}$.
By Lemma \ref{tool1} we have ${\mathbf{u}}=x_1s^{k-3}\cdots, \quad {\mathbf{v}}=y_1r^{k-3}\cdots.$
We claim that \begin{equation}\label{lem-key-2eq2} (x_1,y_1)\notin\{(\tau,\kappa),(\tau,\lambda)\}\cup\{ (\kappa,\tau),(\lambda,\tau)\}. \end{equation} If $(x_1,y_1)=(\tau,\kappa)$, then $x_1\lhd_{\alpha\gamma}y_1$, which implies that $s=\gamma$, which contradicts $x_1s^{k-1}\neq \tau\gamma^{k-1}$. Similarly, we can eliminate the other scenarios. The claim is proved.
It follows that $(u_1,v_1)=(x_1,y_1)\in\mathcal{P}'_{rs}$. Therefore, the first $k-1$ states of the itinerary of $({\mathbf{u}},{\mathbf{v}})$ are $id\to (S_{rs})^{k-2}$, so $T_{M'}({\mathbf{u}},{\mathbf{v}})\ge k-2=T_M({\mathbf{x}},{\mathbf{y}})-2$.
\textit{Case 2. }$x_1s^{k-1}=\tau\gamma^{k-1}$ or $y_1r^{k-1}=\tau\gamma^{k-1}$.
Without loss of generality, we assume $x_1s^{k-1}=\tau\gamma^{k-1}$. Then $\tau\lhd_{r\gamma}y_1$ in $M$. By Lemma \ref{tool1} we have ${\mathbf{u}}=\kappa\alpha^{k-3}\cdots$ and ${\mathbf{v}}=y_1r^{k-3}\cdots.$
If $r=\alpha$, then $\tau\lhd_{\alpha\gamma}y_1$, which forces $y_1=\kappa$ and ${\mathbf{v}}=\kappa\alpha^{k-3}\cdots$. Hence
$T_{M'}({\mathbf{u}},{\mathbf{v}})\geq |{\mathbf{u}}\wedge{\mathbf{v}}|\ge k-2$. Similarly, if $r=\beta$, then $\tau\lhd_{\beta\gamma}y_1$, which forces $y_1=\lambda$ and ${\mathbf{v}}=\lambda\beta^{k-3}\cdots$. So $T_{M'}({\mathbf{u}},{\mathbf{v}})=T_{M'}(\kappa\alpha^{k-3}\cdots ,\lambda\beta^{k-3}\cdots)\ge k-2$.
If $M'$ is a $(\kappa,\tau)$-simplification of $M$, we can prove the lemma in the same manner as above. The proof is finished. \end{proof}
For ${\mathbf{u}},{\mathbf{v}}\in\Omega$, we claim that if $U_1\ne V_1$, then \begin{equation}\label{eq-key-2} T_{M'}({\mathbf{u}},{\mathbf{v}})-T_M({\mathbf{x}},{\mathbf{y}})\leq 5. \end{equation} Let $S_{M',1}$ be the first state of the itinerary of $({\mathbf{u}},{\mathbf{v}})$ in $M'$ (after the initial state). We will prove \eqref{eq-key-2} in Lemmas \ref{lem-key-5} and \ref{lem-key-6}.
\begin{lem}\label{tool2} Let ${\mathbf{v}}=v_1\eta^\ell\widetilde{\eta}\cdots$ where $\widetilde{\eta}\neq \eta$. If $V_1=v_1$, then $g^{-1}({\mathbf{v}})=v_1\eta^{\ell-2}\cdots$. \end{lem}
\begin{proof} Notice that if $\eta^p$ is a prefix of an element of ${\mathcal C}_{M'}$, then we must have $p=1$ or $2$ (in the latter case the element must be $\kappa\kappa\gamma$). Hence, we have
$|V_i|=1$ holds for $1\le i\le \ell-1$, so $g^{-1}({\mathbf{v}})=v_1\eta^{\ell-2}\cdots$. \end{proof}
\begin{lem}\label{lem-key-5} Equation \eqref{eq-key-2} holds if $U_1\neq V_1$ and $S_{M',1}=\text{Id}$. \end{lem}
\begin{proof} That $S_{M',1}=Id$ implies $u_1=v_1$, so at least one of $U_1$ and $V_1$ is in $\mathcal{C}_{M'}$, say $U_1\in\mathcal{C}_{M'}$. Now we divide the proof into two cases.
\textit{Case 1.} $U_1=\tau\gamma\gamma$.
In this case, we have $|{\mathbf{u}}\wedge{\mathbf{v}}|\le 2$, so $T_{M'}({\mathbf{u}},{\mathbf{v}})\leq 4$ and \eqref{eq-key-2} follows.
\textit{Case 2.} $U_1=\kappa\alpha^k\kappa\gamma$ or $\kappa\alpha^k\kappa\gamma\gamma$($k\ge 0$).
\ding{172} If $V_1=\kappa\alpha^\ell\kappa\gamma$ or $\kappa\alpha^\ell \kappa\gamma\gamma$($\ell\ge 0$), then $\ell \neq k$ when $U_1$ and $V_1$ have the same form. It is easy to see that
$
|{\mathbf{u}}\wedge{\mathbf{v}}|\leq 3+\min\{k,\ell\}. $
Applying $g_0^{-1}$ to $U_1$ and $V_1$, we have $$ X_1\in\{\tau\gamma^{k+2},\kappa\alpha^{k+1}\kappa\gamma\} \text{ and } Y_1\in\{\tau\gamma^{\ell+2},\kappa\alpha^{\ell+1}\kappa\gamma\}. $$ Therefore, $T_M({\mathbf{x}},{\mathbf{y}})\ge 1+\min\{k+1,\ell+1\}$. Thus
$$T_{M'}({\mathbf{u}},{\mathbf{v}})-T_M({\mathbf{x}},{\mathbf{y}})\leq |{\mathbf{u}}\wedge{\mathbf{v}}|+2-T_M({\mathbf{x}},{\mathbf{y}})\leq 3.$$
\ding{173} If $V_1=\kappa$, write ${\mathbf{v}}$ as $\kappa\alpha^\ell\widetilde{\alpha}\dots$ where $\widetilde{\alpha} \neq \alpha$, then
$
|{\mathbf{u}}\wedge{\mathbf{v}}|\leq 2+\min\{k,\ell\}. $
Applying $g_0^{-1}$ to $U_1$ and Lemma \ref{tool2} to ${\mathbf{v}}$, we get $$ X_1\in\{\tau\gamma^{k+2},\kappa\alpha^{k+1}\kappa\gamma\},\quad {\mathbf{y}}=\kappa\alpha^{\ell-2}\cdots. $$ Therefore, $T_M({\mathbf{x}},{\mathbf{y}})\ge 1+\min\{k+1,\ell-2\}$. So \eqref{eq-key-2} holds for the same reason as above. The lemma is proved. \end{proof}
\begin{lem}\label{lem-key-6} Equation \eqref{eq-key-2} holds if $U_1\neq V_1$ and $S_{M',1}\in Q\setminus\{Id,Exit\}$. \end{lem}
\begin{proof} Let $k=T_{M'}({\mathbf{u}},{\mathbf{v}})$. If $k\le 5$, \eqref{eq-key-2} holds trivially. So we assume $k\ge 6$. Denote $S_{M',1}=S_{rs}$, where $r,s\in\{\alpha,\beta,\gamma\}$. Then the itinerary of $({\mathbf{u}},{\mathbf{v}})$ is $id\to (S_{rs})^{k}\to Exit$, so $(u_1,v_1)\in\mathcal{P}_{rs}$ and $$ {\mathbf{u}}=u_1s^{k-1}\dots,\ {\mathbf{v}}=v_1r^{k-1}\dots. $$ Since $\tau$ is double-maximal in $M'$ (Lemma \ref{max_element}), we have $u_1s^{k-1}, v_1r^{k-1}\ne\tau\gamma^{k-1}$.
\textit{Case 1. }$u_1s^{k-1}, v_1r^{k-1}\ne\kappa\alpha^{k-1}$.
In this case, neither ${\mathbf{u}}$ nor ${\mathbf{v}}$ can be initialled by $\tau\gamma$ or $\kappa\alpha$, so $|U_1|=|V_1|=1$.
Hence, by Lemma \ref{tool2} we have ${\mathbf{x}}=u_1s^{k-3}\cdots,\ {\mathbf{y}}=v_1r^{k-3}\cdots.$ Thus $$T_M({\mathbf{x}},{\mathbf{y}})\geq T_{M'}({\mathbf{x}},{\mathbf{y}})\ge k-2=T_{M'}({\mathbf{u}},{\mathbf{v}})-2.$$
\textit{Case 2. }$u_1s^{k-1}=\kappa\alpha^{k-1}$ or $v_1r^{k-1}=\kappa\alpha^{k-1}$.
Without loss of generality, assume that $u_1s^{k-1}=\kappa\alpha^{k-1}$, then $v_1\lhd_{\alpha r}\kappa$.
Since $\kappa$ is $\alpha\gamma$-isolated in $M'$ (Lemma \ref{max_element}), we have $v_1=\lambda$ and $r=\beta$. So $M'$ is a $(\tau,\kappa, \lambda)$-simplification of $M$. On one hand, ${\mathbf{y}}=g^{-1}(\lambda\beta^{k-1}\cdots)=\lambda\beta^{k-1}\cdots$. On the other hand, if $U_1=\kappa\alpha^\ell\kappa\gamma(\ell\ge k-1)$, then $X_1=g_0^{-1}(U_1)=\tau\gamma^{\ell+2}$; otherwise, by Lemma \ref{tool2} we have ${\mathbf{x}}=\kappa\alpha^{k-3}\cdots$
regardless of whether $U_1=\kappa\alpha^\ell\kappa\gamma\gamma(\ell\ge k-1)$ or $|U_1|=1$. Thus $$ {\mathbf{x}}\in\{\tau\gamma^{k+1}\cdots,\kappa\alpha^{k-3}\cdots\} \text{ and } {\mathbf{y}}=\lambda\beta^{k-1}\dots, $$ which imply that $T_M({\mathbf{x}},{\mathbf{y}})\ge k-2$. The lemma is proved. \end{proof}
\begin{proof}[\textbf{Proof of Theorem \ref{main}}]
For ${\mathbf{x}},{\mathbf{y}}\in\Omega$, if $X_1\dots X_k=Y_1\dots Y_k$ and $X_{k+1}\ne Y_{k+1}$, then $U_1\dots U_k=V_1\dots V_k$ and $U_{k+1}\ne V_{k+1}$. Let $\ell=|X_1\dots X_k|$, then \begin{equation}\label{proofofmain1} T_M({\mathbf{x}},{\mathbf{y}})-T_{M'}({\mathbf{u}},{\mathbf{v}})=T_M(\sigma^\ell({\mathbf{x}}),\sigma^\ell({\mathbf{y}}))- T_{M'}(\sigma^\ell({\mathbf{u}}),\sigma^\ell({\mathbf{v}})), \end{equation} where $\sigma$ is the shift operator. By \eqref{eq-key-1} and \eqref{eq-key-2}, Theorem \ref{main} holds for $\sigma^\ell({\mathbf{x}})$ and $\sigma^\ell({\mathbf{y}})$, so it also holds for ${\mathbf{x}}$ and ${\mathbf{y}}$. \end{proof}
\begin{appendix}
\section{\textbf{Connected components of a fractal gasket}}\label{app:A}
Define $\pi(x,y)=y$.
\begin{lem}\label{component} Let $K$ be a fractal gasket. If $K$ satisfies the top isolated condition or $\omega_\gamma\not\in K$, then any nontrivial connected component of $K$ is a line segment. If $\alpha$ and $\beta$ are not in the same horizontal block in addition, then $K$ is totally disconnected. \end{lem}
\begin{proof}
Let $U$ be a connected component of $K$. Notice that the collection of the vertices of $\varphi_i(\triangle)$, $i\in\Sigma$, forms a cut set of $K$.
For a horizontal block $I$ of $\Sigma$ (see Definition \ref{Hblock}), we denote $B(I)=\bigcup_{i\in I} \varphi_i(\triangle)$.
If $\omega_\gamma\not \in K$, then the sets $B(I)$ are pairwise disjoint, so for each $I$, either $U\subset B(I)$ or $U\cap B(I)=\emptyset$; if $K$ satisfies the top isolated condition, it is easy to show that the same conclusion still holds.
It follows that $|\pi(U)|\leq r^*$.
Now we regard $K$ as the invariant set of the IFS
$\{\varphi_I\}_{I\in\Sigma^k}$. By the same argument as above, we obtain $|\pi(U)|\leq (r^*)^k$. Letting $k \rightarrow\infty$, we see that $\pi(U)$ is a single point. So $U$ is a horizontal line segment.
Suppose $\alpha$ and $\beta$ are not in the same horizontal block. Let $p$ be the size of the block containing $\alpha$, and $q$ be the size of the block containing $\beta$. Then for any $k\geq 1$, the size of any horizontal block
of $\Sigma^k$ is bounded by $p+q$. Hence the diameter of $U$ is bounded by $(p+q)(r^*)^k$. Letting $k\to \infty$,
we obtain that $U$ is a single point. \end{proof}
\section{\textbf{The transducer of $g$}} The map $g$ defined in \eqref{gxdecomposition} can be realized by the transducer indicated in Figure \ref{transducer}. The state set is $\{Id,\tau,\tau',\tau'',\kappa,\kappa',\kappa'',\kappa'''\}$, where `$Id$' is the initial state. Each edge is labeled by $x_i/\omega_i$, where $x_i\in\Sigma$ is the input letter and $\omega_i\in\Sigma^*$ is the output word. For any input sequence ${\mathbf{x}}=(x_i)_{i=1}^\infty\in\Sigma^\infty$, the transducer determines a unique output sequence, denoted by $\omega_1\omega_2\dots=g({\mathbf{x}})$.
\begin{figure}
\caption{The transducer of $g$, where $a\in\Sigma\setminus\{\tau,\kappa\}$, $b\in\Sigma\setminus\{\tau,\kappa,\gamma\}$, $c\in\Sigma\setminus\{\tau,\kappa,\alpha\}$, $d\in\Sigma\setminus\{\tau,\kappa,\alpha,\gamma\}$.}
\label{transducer}
\end{figure}
\end{appendix}
\end{document}
\begin{document}
\title{A Unified Framework for Symmetry Handling}
\begin{abstract}
Handling symmetries in optimization problems is essential
for devising efficient solution methods.
In this article, we present a general framework that captures many of the
already existing symmetry handling methods.
While these methods are mostly discussed independently from each other, our
framework allows different methods to be applied simultaneously, thus
outperforming their individual effects.
Moreover, most existing symmetry handling methods only apply to binary variables.
Our framework allows these methods to be easily generalized to
general variable types.
Numerical experiments confirm that our novel framework is superior to the
state-of-the-art symmetry handling methods as implemented in the solver
{\tt SCIP}\xspace on a broad set of instances.
\noindent
{\bfseries Keywords:}\
symmetries, branch-and-bound, integer programming, propagation,
lexicographic order. \end{abstract}
\section{Introduction} We consider optimization problems~$\mathrm{OPT}(f,\ensuremath{\mathrm\Phi}) \ensuremath{\coloneqq} \min \{ f(x) : x \in \ensuremath{\mathrm\Phi}\}$ for a real-valued function~$f\colon\mathds{R}^n \to \mathds{R}$ and feasible region~$\ensuremath{\mathrm\Phi} \subseteq \mathds{R}^n$ such that~$\mathrm{OPT}(f,\ensuremath{\mathrm\Phi})$ can be solved by (spatial) branch-and-bound (B\&{}B\xspace)~\cite{LandDoig1960,HorstTuy1996}. This class of problems is very rich and captures problems such as mixed-integer linear programs and mixed-integer nonlinear programs. The core of B\&{}B\xspace-methods is to repeatedly partition the feasible region~$\ensuremath{\mathrm\Phi}$ into smaller subregions~$\ensuremath{\mathrm\Phi}'$ and to solve the reduced problem~$\mathrm{OPT}(f, \ensuremath{\mathrm\Phi}')$. Subregions do not need to be explored further if it is known that they do not contain optimal or improving solutions (i.e., pruning by bound), or if the region becomes empty (i.e., pruning by feasibility). The sketched mechanism makes it possible to routinely solve problems with thousands of variables and constraints. If symmetries are present, however, plain B\&{}B\xspace usually struggles with solving optimization problems, as we explain next.
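As a toy illustration of the two pruning rules just described (a minimal sketch of our own, not taken from this article or any solver), consider minimizing $c^\top x$ over $x\in\{0,1\}^n$ subject to $a^\top x\ge b$:

```python
# Minimal B&B sketch (ours): min c.x over x in {0,1}^n subject to a.x >= b,
# with pruning by bound and pruning by feasibility.
def branch_and_bound(c, a, b):
    n = len(c)
    best = [float("inf"), None]  # incumbent objective value and solution

    def lower_bound(partial):
        # cost of the fixed prefix plus an optimistic completion of free variables
        fixed = sum(v * ci for v, ci in zip(partial, c))
        return fixed + sum(min(0.0, ci) for ci in c[len(partial):])

    def may_be_feasible(partial):
        # largest achievable left-hand side over all completions of the prefix
        lhs = sum(v * ai for v, ai in zip(partial, a))
        lhs += sum(max(0.0, ai) for ai in a[len(partial):])
        return lhs >= b

    def explore(partial):
        if not may_be_feasible(partial):      # pruning by feasibility
            return
        if lower_bound(partial) >= best[0]:   # pruning by bound
            return
        if len(partial) == n:                 # leaf: feasible and improving
            best[0] = lower_bound(partial)
            best[1] = list(partial)
            return
        for v in (0, 1):                      # partition the subregion
            explore(partial + [v])

    explore([])
    return best[0], best[1]
```

Branching partitions $\{0,1\}^n$ into the subregions with $x_i=0$ and $x_i=1$, and whole subtrees are discarded by the two pruning tests; if two variables were symmetric, both branches below them would still be explored, which is exactly the redundancy that symmetry handling aims to remove.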
A \emph{symmetry} of the optimization problem is a bijection~$\gamma\colon\mathds{R}^n \to \mathds{R}^n$ that maps solution vectors $x \in \mathds{R}^n$ to solution vectors $\gamma(x)$ while preserving the objective value and feasibility state, i.e., $f(x) = f(\gamma(x))$ holds for all~$x \in \mathds{R}^n$ and $x \in \ensuremath{\mathrm\Phi}$ if and only if $\gamma(x) \in \ensuremath{\mathrm\Phi}$. By this definition, the set of all symmetries of an optimization problem forms a group~$\bar{\Gamma}$. When enumerating the subregions~$\ensuremath{\mathrm\Phi}'$ in B\&{}B\xspace, it might happen that several subregions at different parts of the branch-and-bound tree contain equivalent copies of (optimal) solutions. This results in unnecessarily large B\&{}B\xspace trees. By exploiting the presence of symmetries, one could enhance B\&{}B\xspace by finding more reductions and pruning rules that further restrict the (sub-)regions without sacrificing the ability to find optimal solutions to the original problem \cite{KouyialisEtAl2019,Liberti2012,pfetsch2019computational}.
To handle symmetries, different approaches have been discussed in the literature. Two very popular classes of symmetry handling methods are \emph{symmetry handling constraints} (\SHC{}s\xspace)~\cite{BendottiEtAl2021,Friedman2007,Hojny2020,hojny2019polytopes,KaibelEtAl2011,KaibelPfetsch2008,Liberti2008,Liberti2012,Liberti2012a,LibertiOstrowski2014} and \emph{variable domain reductions} (\VDR{}s\xspace) derived from the B\&{}B\xspace-tree~\cite{Margot2003,ostrowski2009symmetry,OstrowskiEtAl2011}. Both approaches remove symmetric solutions from the search space without eliminating all optimal solutions. As we detail in Section~\ref{sec:overview}, \SHC{}s\xspace and \VDR{}s\xspace come with a different flavor. \SHC{}s\xspace usually use a static scheme for symmetry reductions, whereas \VDR{}s\xspace dynamically find reductions based on decisions in the B\&{}B\xspace-tree. In their textbook form, \SHC{}s\xspace and \VDR{}s\xspace are thus incompatible, i.e., they cannot be combined, which might leave some potential for symmetry reductions unexploited. Moreover, many \SHC{}s\xspace and \VDR{}s\xspace have only been studied for binary variables and for symmetries corresponding to permutations of variables, which restricts their applicability.
The goal of this article is to overcome these drawbacks. We therefore devise a unified framework for symmetry handling. The contributions of our framework are that it
\begin{enumerate}[label={{(C\arabic*)}},ref={C\arabic*}] \item\label{contrib1} resolves incompatibilities between \SHC{}s\xspace and \VDR{}s\xspace, \item\label{contrib2} applies for general variable types, and \item\label{contrib3} can handle symmetries of arbitrary finite groups, which are not necessarily permutation groups. \end{enumerate}
Due to~\ref{contrib1}, one is thus no longer restricted to either using \SHC{}s\xspace or VDR\xspace methods. In particular, we show that many popular VDR\xspace techniques for binary variables such as orbital fixing~\cite{OstrowskiEtAl2011} and isomorphism pruning~\cite{Margot2002,ostrowski2009symmetry}, but also \SHC{}s\xspace can be simultaneously cast into our framework. That is, our framework unifies the application of these techniques. To fully exploit our framework regarding~\ref{contrib2}, the second contribution of this paper is a generalization of many symmetry handling techniques from binary variables to general variable types. This allows for handling symmetries in more classes of optimization problems, in particular classes with non-binary variables.
Regarding~\ref{contrib3}, we stress that this result is not based on the observation that every finite group is isomorphic to a permutation group by Cayley's theorem~\cite{AlperinBell1995}, because the space in which the isomorphic permutation group is acting might differ from~$\mathds{R}^n$.
\paragraph{Outline} After providing basic notations and definitions, Section~\ref{sec:overview} provides an overview of existing symmetry handling methods. In particular, we illustrate the techniques that we will later cast into our unified framework. The framework itself is introduced in Section~\ref{sec:framework}. Section~\ref{sec:cast} shows how existing symmetry handling methods can be used in our framework and how these methods can be generalized from binary to general variables. We conclude this article in Section~\ref{sec:num} with an extensive numerical study of our new framework, both for specific applications and benchmarking instances. The study reveals that our novel framework is substantially faster than the state-of-the-art \SHC{}s\xspace and \VDR{}s\xspace as implemented in the solver {\tt SCIP}\xspace.
\paragraph{Notation and Definitions} Throughout the article, we assume that we have access to a group~$\Gamma$ consisting of (not necessarily all) symmetries of the optimization problem $\mathrm{OPT}(f, \ensuremath{\mathrm\Phi})$. That is, $\Gamma$ is a subgroup of $\bar{\Gamma}$, which we denote by~$\Gamma \leq \bar{\Gamma}$. We refer to~$\Gamma$ as a symmetry group of the problem. For solution vectors~$x \in \mathds{R}^n$, the set of symmetrically equivalent solutions is its~\emph{$\Gamma$-orbit}~$\{\gamma(x) : \gamma\in\Gamma\}$.
Let~$\symmetricgroup{n}$ be the \emph{symmetric group} of~$[n] \ensuremath{\coloneqq} \{1,\dots,n\}$. Moreover, let~$[n]_0 \ensuremath{\coloneqq} [n] \cup \{0\}$. Being in line with the existing literature on symmetry handling, we assume that permutations~$\gamma \in \symmetricgroup{n}$ act on vectors~$x \in \mathds{R}^n$ by permuting their index sets, i.e., $\gamma(x) \ensuremath{\coloneqq} ( x_{\gamma^{-1}(i)} )_{i=1}^n$. We call such symmetries \emph{permutation symmetries}. The identity permutation is denoted by~$\ensuremath{\mathrm{id}}$. To represent permutations~$\gamma$, we use their disjoint cycle representation, i.e., $\gamma$ is the composition of disjoint cycles $(i_1, \dots, i_r)$ such that $\gamma(i_k) = i_{k + 1}$ for $k \in \{1,\dots, r-1\}$ and $\gamma(i_r) = i_1$.
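The cycle notation and the action $\gamma(x) = ( x_{\gamma^{-1}(i)} )_{i=1}^n$ can be made concrete with a small sketch (our own illustration, not part of the article):

```python
# Sketch (ours): a permutation given in disjoint-cycle notation acting on a
# vector via gamma(x) = (x_{gamma^{-1}(i)})_i, i.e. entry x_j moves to slot gamma(j).

def perm_from_cycles(n, cycles):
    """Build gamma on {1,...,n} from disjoint cycles such as [(1, 2, 3)]."""
    gamma = {i: i for i in range(1, n + 1)}
    for cyc in cycles:
        for k, i in enumerate(cyc):
            gamma[i] = cyc[(k + 1) % len(cyc)]
    return gamma

def act(gamma, x):
    """Return gamma(x) for a vector x (0-indexed list, 1-indexed permutation)."""
    y = [None] * len(x)
    for j in range(1, len(x) + 1):
        y[gamma[j] - 1] = x[j - 1]
    return y

def orbit(x, generators):
    """Gamma-orbit of x under the group generated by the given permutations."""
    seen, frontier = {tuple(x)}, [list(x)]
    while frontier:
        z = frontier.pop()
        for gm in generators:
            w = act(gm, z)
            if tuple(w) not in seen:
                seen.add(tuple(w))
                frontier.append(w)
    return seen
```

For $\gamma = (1,2,3)$ acting on $x = (10,20,30)$, we get $\gamma(x) = (30,10,20)$, since $\gamma^{-1}(1) = 3$; the $\Gamma$-orbit of $(1,0,0)$ under the group generated by this cycle consists of the three unit vectors.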
In practice, the symmetry group $\Gamma$ is either provided by a user or found using detection methods such as in~\cite{pfetsch2019computational,salvagnin2005dominance}. Detecting the full permutation symmetry group for binary problems, however, is \ensuremath{\mathrm{NP}}-hard~\cite{margot2010symmetry}. For non-linear problems, depending on how the feasible region~$\ensuremath{\mathrm\Phi}$ is given, already verifying if~$\gamma$ is a symmetry might be undecidable~\cite{Liberti2012}.
To handle symmetries, we will, among others, make use of variable domain propagation. Given a symmetry reduction rule and domains for all variables, the idea of propagation approaches is to derive reduced domains for some variables such that every solution adhering to the rule is still contained in the reduced domains. More concretely, let~$\ensuremath{\mathrm\Phi}'$ be the feasible region of some subproblem encountered during branch-and-bound. For every variable $x_i$, $i \in [n]$, let~$\ensuremath{\mathcal{D}}_i \subseteq \mathds{R}$ be its domain, which covers the projection of $\ensuremath{\mathrm\Phi}'$ onto $x_i$, i.e., $\ensuremath{\mathcal{D}}_i \supseteq \{ v \in \mathds{R} : x_i = v \text{ for some } x \in \ensuremath{\mathrm\Phi}'\}$. In an integer programming context, the domain $\ensuremath{\mathcal{D}}_i$ corresponds to an interval in practice. A symmetry reduction rule is encoded as a set~$\mathcal{C} \subseteq \mathds{R}^n$, which consists of all solution vectors that adhere to the rule. The goal of \emph{variable domain propagation} is to find sets~$\ensuremath{\mathcal{D}}'_i \subseteq \ensuremath{\mathcal{D}}_i$, $i \in [n]$, such that $ \mathcal{C} \cap \bigtimes_{i = 1}^n \ensuremath{\mathcal{D}}'_i = \mathcal{C} \cap \bigtimes_{i = 1}^n \ensuremath{\mathcal{D}}_i $. In this case, the domain of variable~$x_i$ can be reduced to~$\ensuremath{\mathcal{D}}'_i$. We say that propagation is \emph{complete} if, for every~$i \in [n]$, the domain~$\ensuremath{\mathcal{D}}'_i$ is inclusionwise minimal.
Throughout this article, we denote \emph{full} B\&{}B\xspace-trees by~$\ensuremath{\mathcal B} = (\ensuremath{\mathcal V}, \ensuremath{\mathcal E})$, i.e., we do not prune nodes by their objective value and do not apply enhancements such as cutting planes or bound propagation. This is only required to prove \emph{theoretical} statements about symmetry handling and does not restrict their practical applicability, as we discuss below. Unless mentioned otherwise, we assume~$\ensuremath{\mathcal B}$ to be finite, which might not be the case for spatial branch-and-bound; the case of infinite \bb-trees will be discussed separately. For~$\beta \in \ensuremath{\mathcal V}$, let~$\chi_\beta$ be the set of its children and let~$\ensuremath{\mathrm\Phi}(\beta) \subseteq \ensuremath{\mathrm\Phi}$ be the feasible solutions at $\beta$, i.e., the intersection of $\ensuremath{\mathrm\Phi}$ and the branching decisions. If~$\beta$ is not a leaf, we assume that~$\ensuremath{\mathrm\Phi}(\omega)$, $\omega \in \chi_\beta$, partitions~$\ensuremath{\mathrm\Phi}(\beta)$. In our definitions, this is even the case for spatial branch-and-bound, meaning that the partitioned feasible regions are not necessarily closed sets. We will discuss the practical consequences of this assumption below.
\section{Overview of symmetry handling methods for binary programs} \label{sec:overview}
This section provides an overview of symmetry handling methods. The methods lexicographic fixing, orbitopal fixing, isomorphism pruning, and orbital fixing are described in detail, because we will later on show that these methods can be cast into our framework and can be generalized from binary to arbitrary variable domains. Further symmetry handling methods will only be mentioned briefly. We illustrate the different methods using the following running example.
\begin{problem}[NDB] \label{prob:NDbinary} Sherali and Smith~\cite{sherali2001models} consider the \emph{Noise Dosage Problem} (ND). There are $p$ machines, and on every machine a number of tasks must be executed. For machine $i \in [p]$, there are $d_i$ work cycles, each requiring $t_i$ hours of operation, and each such work cycle induces $\alpha_i$ units of noise. There are $q$ workers to be assigned to the machines, each of whom is limited to $H$ hours of work. The problem is to minimize the noise dosage of the worker that receives the most units of noise. We extend this problem definition with the requirement that each worker can be assigned at most once to the same machine, which makes the problem a binary problem (NDB), namely to \begin{subequations} \makeatletter \def\theparentequation{NDB} \makeatother \renewcommand{\theequation}{NDB\arabic{equation}} \label{eq:NDB} \begin{align} \text{minimize}\ \eta &, \\ \text{subject to}\ \eta &\geq \sum\nolimits_{i \in [p]} \alpha_i \vartheta_{i,j} && \text{for all}\ j \in [q], \\ \sum\nolimits_{j \in [q]} \vartheta_{i,j} &= d_i && \text{for all}\ i \in [p], \label{eq:ndb:demand} \\ \sum\nolimits_{i \in [p]} t_i \vartheta_{i, j} &\leq H && \text{for all}\ j \in [q], \label{eq:ndb:time} \\ \vartheta &\in \ensuremath{\{0, 1\}}^{p \times q}, \label{prob:ND:binary} \\ \eta &\geq 0. \end{align} \end{subequations} For a solution, $\vartheta$ represents the worker schedules as a~$p \times q$ binary matrix: variable $\vartheta_{i, j}$ indicates whether worker $j$ performs a work cycle on machine $i$. Since all workers have the same properties in this model, symmetrically equivalent solutions are found by permuting the worker schedules, which corresponds to permuting the columns of the $\vartheta$-matrix. As such, a symmetry group of this problem is the group $\Gamma$ consisting of all column permutations of this $p \times q$ matrix. \end{problem}
For illustration purposes, we focus on an NDB instance with ${p=3}$ machines and ${q=5}$ workers. We stress that the symmetry handling methods work even if variable domain reductions inferred by the model constraints are applied. For ease of presentation, however, we assume that no such reductions are made in the NDB problem instance. For this reason, we do not specify~$d_i$, $t_i$, and~$H$.
\subsection{Symmetry handling constraints based on lexicographic order} \label{sec:shcbin}
The philosophy of \emph{symmetry handling constraints} (\SHC{}s\xspace) is to restrict the feasible region of an optimization problem to representatives of the~$\Gamma$-orbits of feasible solutions. A common way to do this is to enforce that feasible solutions must be \emph{lexicographically maximal} in their $\Gamma$-orbit~\cite{Friedman2007}.
Let $x, y \in \mathds{R}^n$. We say $x$ is \emph{lexicographically larger} than $y$, denoted $x \succ y$, if for some $k \in [n]$ we have $x_i = y_i$ for $i < k$, and $x_k > y_k$. If $x \succ y$ or $x = y$, we write $x \succeq y$. Since the lexicographic order specifies a total ordering on~$\mathds{R}^n$, to solve the optimization problem $\mathrm{OPT}(f, \ensuremath{\mathrm\Phi})$, it is sufficient to consider only those solutions $x$ that are lexicographically maximal in their $\Gamma$-orbit. Let \[
\ensuremath{\mathcal{X}}\xspace \ensuremath{\coloneqq} \{ x \in \ensuremath{\{0, 1\}}^n : x \succeq \gamma(x)
\text{ for all } \gamma \in \Gamma \}. \] Then, solving~$\mathrm{OPT}(f, \ensuremath{\mathrm\Phi} \cap \ensuremath{\mathcal{X}}\xspace)$ yields the same optimal objective and the same feasibility state as the original problem. Note, however, that deciding whether a vector~$x \in \ensuremath{\{0, 1\}}^n$ is contained in $\ensuremath{\mathcal{X}}\xspace$ is \ensuremath{\mathrm{coNP}}-complete~\cite{babai1983canonical}. Complete propagation of the \SHC{}s\xspace~\ensuremath{\mathcal{X}}\xspace is thus \ensuremath{\mathrm{coNP}}-hard for general groups. In practice, one therefore either neglects the group structure or applies specialized algorithms for particular groups~\cite{BendottiEtAl2021,doornmalenhojny2022cyclicsymmetries}. We discuss lexicographic fixing and orbitopal fixing as representatives for these two approaches.
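For intuition, membership in~\ensuremath{\mathcal{X}}\xspace can be tested by brute force when the group is small and given explicitly, as in the following Python sketch (didactic only; the helper names are ours, and the group is passed as a list of maps $i \mapsto \gamma(i)$ with 0-based indices):

```python
from itertools import product

def lex_ge(x, y):
    """x >= y in the lexicographic order."""
    for a, b in zip(x, y):
        if a != b:
            return a > b
    return True  # x == y

def is_lex_max_in_orbit(x, group):
    """Membership test for X: is x lexicographically maximal in its orbit?
    Each gamma in 'group' is a tuple g with g[i] = gamma(i) (0-based);
    gamma(x) places entry x_i at position gamma(i)."""
    for g in group:
        gx = [None] * len(x)
        for i, v in enumerate(x):
            gx[g[i]] = v
        if not lex_ge(x, gx):
            return False
    return True

# Gamma = {id, transposition of the first two coordinates} acting on {0,1}^3:
group = [(0, 1, 2), (1, 0, 2)]
reps = [x for x in product((0, 1), repeat=3) if is_lex_max_in_orbit(x, group)]
```

Here the representatives are exactly the vectors with first entry at least as large as the second, so 6 of the 8 binary vectors remain.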
\subsubsection{Lexicographic fixing (LexFix\xspace)} \label{sec:lexfix} Instead of handling $x \succeq \gamma(x)$ for all $\gamma \in \Gamma$, one can handle this SHC\xspace for a single permutation~$\gamma$ only. For binary problems, \cite{friedman2007fundamental} shows that~$x \succeq \gamma(x)$ is equivalent to the linear inequality~$\sum_{k=1}^n 2^{n-k} x_k \geq \sum_{k=1}^n 2^{n-k} \gamma(x)_k$. Due to their large coefficients, however, these inequalities might cause numerical instabilities. To circumvent this issue, \cite{hojny2019polytopes} presents an alternative family of linear inequalities modeling~$x \succeq \gamma(x)$ in which all variable coefficients are either~$0$ or~$\pm 1$ and which can be separated efficiently.
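The equivalence rests on reading a binary vector as a binary number with the most significant digit first. It can be checked exhaustively for a small permutation, as in the following sketch (illustration only; the cycle $(1,2,3)$ is our arbitrary choice):

```python
from itertools import product

def lex_ge(x, y):
    """Lexicographic comparison x >= y."""
    for a, b in zip(x, y):
        if a != b:
            return a > b
    return True

def as_number(x):
    """sum_{k=1}^n 2^(n-k) x_k, i.e., x read as a binary number."""
    n = len(x)
    return sum(2 ** (n - k) * x[k - 1] for k in range(1, n + 1))

def gamma(x):
    """The cycle (1,2,3): gamma(x) = (x_3, x_1, x_2) since gamma(x)_i = x_{gamma^{-1}(i)}."""
    return (x[2], x[0], x[1])

# x >= gamma(x) lexicographically  <=>  the linear inequality of Friedman holds
checks = [lex_ge(x, gamma(x)) == (as_number(x) >= as_number(gamma(x)))
          for x in product((0, 1), repeat=3)]
```

All $2^3$ checks succeed, since for binary vectors the lexicographic order coincides with the numeric order of their binary encodings.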
Alternatively, $x \succeq \gamma(x)$ can also be enforced using a complete propagation algorithm that runs in linear time~\cite{BestuzhevaEtal2021OO,doornmalenhojny2022cyclicsymmetries}. Since a variable domain reduction in the binary setting corresponds to fixing a variable, we refer to this algorithm as the \emph{lexicographic fixing algorithm}, or LexFix\xspace in short.
Using our running example NDB, we illustrate the idea of LexFix\xspace for the permutation~$\gamma$ that exchanges column~$2$ and~$3$ and fixes all remaining columns. Since the lexicographic order depends on a specific variable ordering, we assume that the variables of the~$\vartheta$-matrix are sorted row-wise. That is, $x = (\vartheta_{1,1}, \dots, \vartheta_{1, 5}; \dots; \vartheta_{3,1}, \dots, \vartheta_{3, 5})$. We omit $\eta$ from the vector, since the orbit of $\eta$ is trivial with respect to $\Gamma$.
After removing the fixed points of~$\gamma$ from the solution vector, enforcement of~$x \succeq \gamma(x)$ corresponds to $ (\vartheta_{1,2}, \vartheta_{1,3}, \vartheta_{2,2}, \vartheta_{2,3}, \vartheta_{3,2}, \vartheta_{3,3}) \succeq (\vartheta_{1,3}, \vartheta_{1,2}, \vartheta_{2,3}, \vartheta_{2,2}, \vartheta_{3,3}, \vartheta_{3,2}) $, which in turn corresponds to $ (\vartheta_{1,2}, \vartheta_{2,2}, \vartheta_{3,2}) \succeq (\vartheta_{1,3}, \vartheta_{2,3}, \vartheta_{3,3}) $. Complete propagation of this constraint for the running example is shown in Figure~\ref{fig:branching:lexfix}. For instance, in the leftmost node, $\vartheta_{1,3}$ can be fixed to~0, because any solution with~$\vartheta_{1,3} = 1$ that satisfies the remaining local variable domains violates the lexicographic order constraint as~$(\vartheta_{1,2},\vartheta_{2,2},\vartheta_{3,2}) = (0,\vartheta_{2,2},\vartheta_{3,2}) \nsucceq (1,0,\vartheta_{3,3}) = (\vartheta_{1,3},\vartheta_{2,3},\vartheta_{3,3})$.
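The fixing in the leftmost node can be reproduced by complete propagation in the sense of the definition above. The sketch below simply enumerates all assignments, which is exponential and for illustration only (the propagation algorithms from the literature run in linear time); the function names are ours:

```python
from itertools import product

def lex_ge(x, y):
    """Lexicographic comparison x >= y."""
    for a, b in zip(x, y):
        if a != b:
            return a > b
    return True

def propagate_lex(dom_l, dom_r):
    """Complete propagation of (l_1,...,l_k) >= (r_1,...,r_k) in the lexicographic
    order, for disjoint variables with per-variable domains dom_l and dom_r:
    enumerate all feasible pairs and project onto each variable."""
    feas = [(l, r)
            for l in product(*(sorted(d) for d in dom_l))
            for r in product(*(sorted(d) for d in dom_r))
            if lex_ge(l, r)]
    new_l = [{l[i] for l, _ in feas} for i in range(len(dom_l))]
    new_r = [{r[i] for _, r in feas} for i in range(len(dom_r))]
    return new_l, new_r

# Leftmost node: theta_{1,2} = 0 and theta_{2,3} = 0 are branched; propagate
# (theta_{1,2}, theta_{2,2}, theta_{3,2}) >= (theta_{1,3}, theta_{2,3}, theta_{3,3})
new_l, new_r = propagate_lex([{0}, {0, 1}, {0, 1}], [{0, 1}, {0}, {0, 1}])
```

As claimed, the only reduction is that the domain of $\vartheta_{1,3}$ shrinks to $\{0\}$.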
\begin{figure}
\caption{Branch-and-bound tree for the NDB problem. Fixings by LexFix\xspace are drawn red.}
\label{fig:branching:lexfix}
\end{figure}
\subsubsection{Orbitopal fixing} \label{sec:orbitopalfixing}
Note that LexFix\xspace neglects the entire group structure and thus might not find variable domain reductions that are based on the interplay of different symmetries. Since propagation for~$\ensuremath{\mathcal{X}}\xspace$ is \ensuremath{\mathrm{coNP}}-hard, special cases of groups have been investigated that appear very frequently in practice. One of these groups corresponds to the symmetries present in NDB, i.e., the symmetry group~$\Gamma$ acts on $p \times q$ matrices of binary variables by exchanging their columns. We refer to such matrix symmetries as \emph{orbitopal} symmetries. Besides NDB, orbitopal symmetries arise in many other applications such as graph coloring or unit commitment problems~\cite{BendottiEtAl2021,KaibelEtAl2011,margot2007symmetric}.
For the variable ordering discussed in Section~\ref{sec:lexfix} and~$\Gamma$ being a group of orbitopal symmetries, one can show that enforcing~$x \succeq \gamma(x)$ for all~$\gamma \in \Gamma$ is equivalent to sorting the columns of the variable matrix in lexicographically non-increasing order. Bendotti et al.~\cite{BendottiEtAl2021} present a propagation algorithm for such symmetries, so-called \emph{orbitopal fixing}. Kaibel et~al.\@\xspace~\cite{KaibelEtAl2011} discuss a propagation algorithm for the case that each row of the variable matrix has at most one~$1$-entry. Both algorithms are complete and run in linear time. Moreover, Kaibel and Pfetsch~\cite{KaibelPfetsch2008} derive a facet description of all binary matrices with lexicographically sorted columns and at most (or exactly) one~1-entry per row. That is, the SHC\xspace~$\ensuremath{\mathcal{X}}\xspace$ can be replaced by the facet description in this case.
Given initial variable domains $\ensuremath{\mathcal{D}} \subseteq \ensuremath{\{0, 1\}}^{p \times
q}$, the algorithm of Bendotti et~al.\@\xspace finds the tightest variable domains as follows. First, the lexicographically minimal and maximal matrices in $\ensuremath{\mathcal{X}}\xspace \cap \ensuremath{\mathcal{D}}$ are computed. Then, for each column in the variable matrix, the associated variables can be fixed to the value of the lexicographically extreme matrices up to the first row where these extremal matrices differ. If the columns of the extremal matrices are identical, the whole column can be fixed.
For the running example, Figure~\ref{fig:branching:orbitopalfixing} presents the branch-and-bound tree with variable fixings by orbitopal fixing. For instance, if~$(\vartheta_{2,3}, \vartheta_{1,2}) \gets (0, 0)$, the lexicographically minimal and maximal matrices are {\footnotesize $ \left[\begin{array}{*5{@{}wc{2mm}@{}}} 0&\underline 0&0&0&0\\ 0&0&\underline 0&0&0\\ 0&0&0&0&0 \end{array}\right] $} and {\footnotesize $ \left[\begin{array}{*5{@{}wc{2mm}@{}}} 1&\underline 0&0&0&0\\ 1&1&\underline 0&0&0\\ 1&1&1&1&1 \end{array}\right] $}, with the branching decisions underlined, respectively. Applying orbitopal fixing then leads to the leftmost matrix in Figure~\ref{fig:branching:orbitopalfixing}.
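The two extremal matrices can be recomputed by brute force over all $2^{15}$ binary matrices whose columns are lexicographically non-increasing (illustration only; the algorithm of Bendotti et~al.\@\xspace avoids this enumeration and runs in linear time):

```python
from itertools import product

P, Q = 3, 5  # machines x workers

def flatten(M):
    """Row-wise variable order, matching the ordering of Section on LexFix."""
    return [M[i][j] for i in range(P) for j in range(Q)]

def columns_sorted(M):
    """Columns lexicographically non-increasing, each column read top to bottom."""
    cols = [[M[i][j] for i in range(P)] for j in range(Q)]
    return all(cols[j] >= cols[j + 1] for j in range(Q - 1))

# Branching decisions of the node: theta_{2,3} = 0 and theta_{1,2} = 0
fixed = {(1, 2): 0, (0, 1): 0}  # 0-based (row, column) pairs

feasible = []
for bits in product((0, 1), repeat=P * Q):
    M = [list(bits[i * Q:(i + 1) * Q]) for i in range(P)]
    if columns_sorted(M) and all(M[i][j] == v for (i, j), v in fixed.items()):
        feasible.append(M)

# Lexicographic extremes with respect to the row-wise flattened vector
lex_min = min(feasible, key=flatten)
lex_max = max(feasible, key=flatten)
```

The computed extremes coincide with the two matrices displayed above: the all-zero matrix and the matrix with rows $(1,0,0,0,0)$, $(1,1,0,0,0)$, $(1,1,1,1,1)$.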
\begin{figure}
\caption{Branch-and-bound tree for the NDB problem. Fixings by orbitopal fixing are drawn red.}
\label{fig:branching:orbitopalfixing}
\end{figure}
Note that orbitopal fixing does not find any variable domain reduction after the first branching decision in Figure~\ref{fig:branching:orbitopalfixing}. The reason is that branching occurred for a variable in the second row. To still benefit from some symmetry reductions in this case, Bendotti et~al.\@\xspace~\cite{BendottiEtAl2021} also discuss a variant of orbitopal fixing that adapts the order of rows based on the branching decisions. They empirically show that this adapted algorithm performs better than the original algorithm for the unit commitment problem. We discuss the adapted variant, in more detail and with more flexibility, in Example~\ref{ex:orbitopalfixing} in Section~\ref{sec:framework}.
\subsection{Symmetry reductions based on branching tree structure} \label{sec:overviewtree}
Recall that the \SHC{}s\xspace discussed in the previous section restrict the feasible region of an optimization problem. That is, already before solving the optimization problem, it is determined which symmetric solutions are discarded. A second family of symmetry handling techniques uses a more dynamic approach, which prevents the creation of symmetric copies of subproblems already present in the branch-and-bound tree. The motivation is that symmetry reductions can be carried out earlier than in a static setting as described in the previous section.
Throughout this section, let~$\ensuremath{\mathcal B} = (\ensuremath{\mathcal V}, \ensuremath{\mathcal E})$ be a branch-and-bound tree, where branching is applied on a single variable. For a node $\beta \in \ensuremath{\mathcal V}$, let $B_0^\beta$ (resp.~$B_1^\beta$) be the set of variable indices of a solution vector that are fixed to~0 (resp.~1) by the branching decisions on the rooted tree path to $\beta$. Moreover, we assume~$\ensuremath{\mathrm\Phi} \subseteq \ensuremath{\{0, 1\}}^n$ as the techniques that we describe next have mostly been discussed for binary problems.
\subsubsection{Isomorphism pruning} \label{sec:isomorphismpruning}
In classical branch-and-bound approaches, a node can be pruned if the corresponding subproblem is infeasible (pruning by infeasibility) or if the subproblem cannot contain an improving solution (pruning by bound). In the presence of symmetry, Margot~\cite{Margot2002,Margot2003} and Ostrowski~\cite{ostrowski2009symmetry} discuss another pruning rule that discards symmetric or isomorphic subproblems, so-called \emph{isomorphism
pruning}.
The least restrictive version of isomorphism pruning is due to Ostrowski~\cite{ostrowski2009symmetry}. Note that the way we phrase isomorphism pruning differs from the notation in~\cite{ostrowski2009symmetry}; our notation, however, is more suitable for the unified framework that we discuss in Section~\ref{sec:framework}.
Let~$\beta \in \ensuremath{\mathcal V}$ be a branch-and-bound tree node at depth~$m$, and suppose that every branching decision corresponds to a single variable fixing. Let $i_k$ be the index of the branching variable at depth~$k \in [m]$ on the rooted path to~$\beta$. Then, $B_0^\beta \cup B_1^\beta = \{i_1, \dots, i_m\}$. Let~$\pi_\beta \in \symmetricgroup{n}$ be any permutation with $\pi_\beta(i_k) = k$ for $k \in [m]$ and let~$y \in \ensuremath{\{0, 1\}}^n$ such that~$y_i = 1$ if and only if~$i \in B_1^\beta$. For a vector~$x \in \mathds{R}^n$ and~$A \subseteq [n]$, we denote by~$\restrict{x}{A}$ its restriction to the entries in~$A$.
\begin{theorem}[Isomorphism Pruning]
\label{thm:isoprune}
Let~$\beta \in \ensuremath{\mathcal V}$ be a node at depth~$m$.
Node~$\beta$ can be pruned if there exists~$\gamma \in \Gamma$ such that
$\restrict{\pi_\beta(y)}{[m]} \prec \restrict{\pi_\beta(\gamma(y))}{[m]}$. \end{theorem}
Testing whether a vector is lexicographically maximal in its orbit is a \ensuremath{\mathrm{coNP}}-complete problem~\cite{babai1983canonical}. As such, deciding whether $\beta$ can be pruned by isomorphism is an \ensuremath{\mathrm{NP}}-complete problem.
\begin{remark}
Margot~\cite{margot2007symmetric} also describes a variant of isomorphism pruning that can be used to
handle symmetries of general integer variables.
Margot's variant assumes a specific branching rule.
We do not describe it in more detail as our framework can also
handle general integer variables while not relying on any assumptions on the
branching rule such as Ostrowski's version for binary variables. \end{remark}
Figure \ref{fig:branching:isomorphismpruning} shows the branch-and-bound tree after applying isomorphism pruning. The only node~$\beta$ that can be pruned is where $(\vartheta_{2,3},\vartheta_{1,2},\vartheta_{1,3}) = (0, 0, 1)$. This is due to the symmetry $\gamma$ swapping columns~$2$ and~$3$. For this node, $\pi_\beta(x) = (\vartheta_{2,3},\vartheta_{1,2},\vartheta_{1,3})$ and $(\pi_\beta \circ \gamma)(x) = (\vartheta_{2,2},\vartheta_{1,3},\vartheta_{1,2})$, and as such $ \pi_\beta(y) = (0, 0, 1, \dots) \prec (0, 1, 0, \dots) = \pi_\beta(\gamma(y)) $.
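The pruning condition of Theorem~\ref{thm:isoprune} for this node can be checked mechanically, as in the following sketch with 1-based row-wise variable indices (didactic only; the helper names are ours):

```python
P, Q = 3, 5

def flat(i, j):
    """Row-wise 1-based index of theta_{i,j}."""
    return Q * (i - 1) + j

# Branching path to the node: theta_{2,3} = 0, theta_{1,2} = 0, theta_{1,3} = 1
path = [flat(2, 3), flat(1, 2), flat(1, 3)]
B1 = {flat(1, 3)}

# y: incidence vector of the 1-branchings (1-based; entry 0 unused)
y = [0] * (P * Q + 1)
for i in B1:
    y[i] = 1

def gamma_of(v):
    """gamma swapping columns 2 and 3: gamma(v)_i = v_{gamma^{-1}(i)} (self-inverse)."""
    w = v[:]
    for i in range(1, P + 1):
        w[flat(i, 2)], w[flat(i, 3)] = v[flat(i, 3)], v[flat(i, 2)]
    return w

def restrict_pi(v):
    """First m entries of pi_beta(v): the branching variables, in branching order."""
    return [v[i] for i in path]

# Prune if pi_beta(y)|[m] is lexicographically smaller than pi_beta(gamma(y))|[m]
prune = restrict_pi(y) < restrict_pi(gamma_of(y))
```

The comparison yields $(0,0,1) \prec (0,1,0)$, so the node is indeed pruned.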
\begin{figure}
\caption{Branch-and-bound tree for the NDB problem with pruned nodes by isomorphism pruning.}
\label{fig:branching:isomorphismpruning}
\end{figure}
Note that isomorphism pruning is a pruning method, which means that it does not find reductions. However, isomorphism pruning can be enhanced by fixing rules that allow additional variable fixings to be found early in the branch-and-bound tree, as we discuss next.
\subsubsection{Orbital fixing} \label{sec:orbitalfixing} Orbital fixing (OF\xspace) refers to a family of variable domain reductions (\VDR{}s\xspace), whose common idea is to fix variables within orbits of already fixed variables. The exact definition of OF\xspace differs between authors~\cite{Margot2003,OstrowskiEtAl2011}. The main difference is whether fixings found by branching decisions are distinguished from fixings found by orbital fixing. We describe the variant of~\cite[Theorem~3]{OstrowskiEtAl2011}, which is compatible with isomorphism pruning. Let~$\beta \in \ensuremath{\mathcal V}$. The group consisting of all permutations that stabilize the~1-branchings up to node~$\beta$ is denoted by~$\Delta^\beta \ensuremath{\coloneqq} \stab{\Gamma}{B_1^\beta} \ensuremath{\coloneqq} \{ \gamma \in \Gamma : \gamma(B_1^\beta) = B_1^\beta \}$.
\begin{theorem}[Orbital Fixing]
Let~$\beta \in \ensuremath{\mathcal V}$.
If~$i \in B_0^\beta$, all variables in the $\Delta^\beta$-orbit
of~$i$ can be fixed to~0. \end{theorem}
Figure~\ref{fig:branching:orbitalfixing} shows the branch-and-bound tree for applying this orbital fixing rule to the running example. Note that, if no variable has been branched to one up to node~$\beta$, then $\Delta^\beta$ coincides with the symmetry group $\Gamma$. This means that, for every zero-branching, the entire $\Gamma$-orbit of the branched variable (the corresponding row of the $\vartheta$-matrix) can be fixed to zero. \begin{figure}
\caption{Branch-and-bound tree for the NDB problem with fixings by orbital fixing.}
\label{fig:branching:orbitalfixing}
\end{figure}
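For the running example, the rule can be verified by brute force over the full column-permutation group (illustration only; practical implementations compute stabilizers and orbits via group generators rather than by enumeration):

```python
from itertools import permutations

P, Q = 3, 5

def column_perm(pi):
    """A column permutation pi of [Q], acting on 0-based (row, column) pairs."""
    return {(i, j): (i, pi[j]) for i in range(P) for j in range(Q)}

# Gamma: all 5! = 120 column permutations of the P x Q variable matrix
Gamma = [column_perm(pi) for pi in permutations(range(Q))]

# Node beta after the single 0-branching theta_{2,3} = 0:
B1 = set()       # no 1-branchings yet
B0 = {(1, 2)}    # 0-based index pair of theta_{2,3}

# Delta^beta: all permutations in Gamma that stabilize B1 as a set
Delta = [g for g in Gamma if {g[i] for i in B1} == B1]

# Orbital fixing: the whole Delta^beta-orbit of a 0-branched index is fixed to 0
orbit = {g[i] for g in Delta for i in B0}
```

Since no 1-branching exists yet, $\Delta^\beta = \Gamma$ and the orbit is the entire second row of the $\vartheta$-matrix, as stated.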
Since~\cite{OstrowskiEtAl2011} does not distinguish variables fixed to~1 by branching or other decisions, $\Delta^\beta$ can be replaced by all permutations that stabilize the variables that are fixed to~1 (as opposed to just branched to~1). Note that neither definition of~$\Delta^\beta$ contains the other, i.e., neither version of OF\xspace dominates the other in terms of the number of fixings that can be found. Another variant of OF\xspace that also finds $1$-fixings is presented in~\cite{ostrowski2009symmetry}, see also~\cite{pfetsch2019computational}.
\subsection{Further symmetry handling methods}
Liberti and Ostrowski~\cite{LibertiOstrowski2014} as well as Salvagnin~\cite{Salvagnin2018} present symmetry handling inequalities that can be derived from the Schreier-Sims table of the group. Further symmetry handling inequalities are described by Liberti~\cite{Liberti2012a}. In contrast to the constraints from Section~\ref{sec:shcbin}, they are also able to handle symmetries of non-binary variables; their symmetry handling effect is limited though. Another class of inequalities, so-called orbital conflict inequalities, has been proposed in~\cite{LinderothEtAl2021}. Moreover, symmetry handling inequalities for specific problem classes are discussed, among others, by~\cite{GhoniemSherali2011,Hojny2020,KaibelPfetsch2008,MendezDiazZabala2001,sherali2001models}.
Besides the propagation approaches discussed above, also tailored complete algorithms that can handle special cyclic groups exist~\cite{doornmalenhojny2022cyclicsymmetries}. Moreover, Ostrowski~\cite{ostrowski2009symmetry} presents smallest-image fixing, a propagation algorithm for binary variables. Instead of exploiting symmetries in a propagation framework, symmetries can also be handled by tailored branching rules~\cite{OstrowskiEtAl2011,OstrowskiAnjosVannelli2015}. Furthermore, orbital shrinking~\cite{FischettiLiberti2012} is a method that handles symmetries by aggregating variables contained in a common orbit, which results in a relaxation of the problem. Finally, core points~\cite{Bodi2013,Herr2013,HerrRehnSchuermann2013} can be used to restrict the feasible region of problems to a subset of solutions. This latter approach does not coincide with lexicographically maximal representatives.
\section{Unified framework for symmetry handling} \label{sec:framework}
As the literature review shows, different symmetry handling methods use different paradigms to derive symmetry reductions. For instance, \SHC{}s\xspace remove symmetric solutions from the initial problem formulation, whereas methods such as orbital fixing remove symmetric solutions based on the branching history. At first glance, these methods thus are not necessarily compatible.
To overcome this seeming incompatibility, we present a unified framework for symmetry handling that makes it easy to check whether symmetry handling methods are compatible. It turns out that, via our framework, isomorphism pruning and OF\xspace can be made compatible with a variant of LexFix\xspace. Moreover, in contrast to many symmetry handling methods discussed in the literature, our framework also applies to non-binary problems and is not restricted to permutation symmetries. Before we present our framework in Section~\ref{sec:theframework}, it will be useful to first provide an interpretation of isomorphism pruning through the lens of symmetry handling constraints.
\subsection{Isomorphism pruning and orbital fixing revisited} \label{sec:isopruneRevisited}
Let~$\beta \in \ensuremath{\mathcal V}$ be a node at depth~$m$ and let~$y$ be the incidence vector of 1-branching decisions as described in Section~\ref{sec:isomorphismpruning}. Due to Theorem~\ref{thm:isoprune}, node~$\beta$ can be pruned by isomorphism if solution vector~$y$ violates~$\restrict{\pi_\beta(x)}{[m]} \succeq \restrict{\pi_\beta(\gamma(x))}{[m]}$ for some~$\gamma \in \Gamma$. The latter condition looks very similar to classical \SHC{}s\xspace; there are, however, some differences: the variable order is changed via~$\pi_\beta$, not all variables are present in this constraint due to the restriction, and, most importantly, every node has a potentially different reordering and restriction. Nevertheless, these modified \SHC{}s\xspace can be used to remove all symmetries from a binary problem in the sense that, for every solution~$x$ of a binary problem, there exists exactly one node of~$\ensuremath{\mathcal B}$ at depth~$n$ that contains a symmetric counterpart of~$x$, see~\cite[Thm.~4.5]{ostrowski2009symmetry}.
Based on the modified \SHC{}s\xspace, it is easy to show that isomorphism pruning and orbital fixing are compatible with each other: it suffices to show that both methods are compatible with the modified \SHC{}s\xspace.
\begin{lemma}
Let~$\beta \in \ensuremath{\mathcal V}$ be a node at depth~$m$.
If~$\beta$ gets pruned by isomorphism, there is no~$x \in \ensuremath{\{0, 1\}}^n$
that is feasible for the subproblem at~$\beta$ and that satisfies
$\restrict{\pi_\beta(x)}{[m]} \succeq
\restrict{\pi_\beta(\gamma(x))}{[m]}$ for all~$\gamma \in \Gamma$. \end{lemma}
\begin{proof}
As in Section~\ref{sec:isomorphismpruning}, let~$y \in \ensuremath{\{0, 1\}}^n$ be such
that~$y_i = 1$ if and only if~$i \in B_1^\beta$.
If IsoPr\xspace prunes~$\beta$, there is~$\gamma\in\Gamma$
with~$\restrict{\pi_\beta(y)}{[m]} \prec \restrict{\pi_\beta(\gamma(y))}{[m]}$.
As the first $m$ entries of $\pi_\beta(y)$ are branching variables
and the remaining entries are~0, we find~$\restrict{\pi_\beta(\gamma(x))}{[m]}
\geq \restrict{\pi_\beta(\gamma(y))}{[m]}$
(componentwise) for each~$x \in \ensuremath{\mathrm\Phi}(\beta)$.
Thus, $\restrict{\pi_\beta(x)}{[m]} = \restrict{\pi_\beta(y)}{[m]}
\prec \restrict{\pi_\beta(\gamma(y))}{[m]}
\leq \restrict{\pi_\beta(\gamma(x))}{[m]}$,
which means every solution in~$\ensuremath{\mathrm\Phi}(\beta)$
violates~$\restrict{\pi_\beta(x)}{[m]} \succeq
\restrict{\pi_\beta(\gamma(x))}{[m]}$. \end{proof}
\begin{lemma}
Let~$\beta \in \ensuremath{\mathcal V}$ be a node at depth~$m$.
Every fixing found by~OF\xspace at node~$\beta$ is implied by
$\restrict{\pi_\beta(x)}{[m]} \succeq
\restrict{\pi_\beta(\gamma(x))}{[m]}$ for all~$\gamma \in \Gamma$. \end{lemma}
\begin{proof}
Assume OF\xspace is not compatible with~$\restrict{\pi_\beta(x)}{[m]} \succeq
\restrict{\pi_\beta(\gamma(x))}{[m]}$, $\gamma \in \Gamma$.
Then, there exists a node~$\beta \in \ensuremath{\mathcal V}$, a solution~$\bar{x} \in
\ensuremath{\mathrm\Phi}(\beta)$ that satisfies~$\restrict{\pi_\beta(\bar{x})}{[m]} \succeq
\restrict{\pi_\beta(\gamma(\bar{x}))}{[m]}$ for all $\gamma \in \Gamma$, and
an index~$j \in [m]$
with $i_j \in B_0^\beta$
such that~$\bar{x}_{\ell} = 1$ for some~$\ell$ in
the~$\Delta^\beta$-orbit of~$i_j$.
Suppose~$j$ is minimal.
Since~$\ell$ is contained in the~$\Delta^\beta$-orbit of~$i_j$, there
exists~$\gamma \in \Delta^\beta$ with~$\gamma(\ell) = i_j$.
By definition of~$\Delta^\beta$,
for all~$k \in B_1^\beta$, $\gamma(k) \in B^\beta_1$.
Moreover, $\pi_\beta(\gamma(\bar x))_k = \bar x_{\gamma^{-1}(i_k)} = 0$
for all $k \in [j-1]$ with $i_k \in B_0^\beta$,
because~$j$ is selected minimally.
Consequently, since $B_0^\beta \cup B_1^\beta = [m]$ holds,
$\pi_\beta(\bar x)$ and~$\pi_\beta(\gamma(\bar x))$
coincide on the first~$j-1$
entries, and~$1 = \bar x_\ell = \bar x_{\gamma^{-1}(i_j)} =
\pi_\beta(\gamma(\bar x))_j
> \pi_\beta(\bar x)_j = \bar x_{i_j} = 0$.
That is, $\restrict{\pi_\beta(\bar x)}{[m]}
\prec \restrict{\pi_\beta(\gamma(\bar x))}{[m]}$,
contradicting that~$\bar{x}$ satisfies all \SHC{}s\xspace.
OF\xspace is thus compatible with the \SHC{}s\xspace. \end{proof}
Isomorphism pruning and orbital fixing are thus compatible. While isomorphism pruning can become active as soon as one can show that no lexicographically maximal solution w.r.t.\ the modified \SHC{}s\xspace is feasible at a node~$\beta$, orbital fixing might not be able to find all symmetry related variable reductions.
\begin{example}\label{ex:OFweak} Let~$\Gamma \leq \symmetricgroup{4}$ be generated by a cyclic right shift, i.e., the non-trivial permutations in~$\Gamma$ are~$\gamma_1 = (1,2,3,4)$, $\gamma_2 = (1,3)(2,4)$, and~$\gamma_3 = (1,4,3,2)$. Consider the branch-and-bound tree in Figure~\ref{fig:exBB}. At node~$\beta_0$, no reductions can be found by OF\xspace as no non-trivial shift fixes the $1$-branching variable~$x_3$. For~$\gamma_1$, the SHC~$\restrict{\pi_{\beta_0}(x)}{[2]} \succeq \restrict{\pi_{\beta_0}(\gamma_1(x))}{[2]}$ reduces to $(x_3, x_4) \succeq (x_2, x_3)$. Due to the variable bounds at~$\beta_0$, the constraint simplifies to~$(1,0) \succeq (x_2,1)$. This constraint is violated if $x_2$ has value~1, so $x_2$ can be fixed to~0. \end{example}
\begin{figure}
\caption{Branch-and-bound tree for Example~\ref{ex:OFweak}.}
\label{fig:exBB}
\end{figure}
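The claims of Example~\ref{ex:OFweak} can be verified directly, as in the following sketch (indices 1-based as in the example; variable names are ours):

```python
# Gamma: the cyclic shifts gamma_1^s on 4 coordinates, as 1-based maps i -> i + s (cyclically)
perms = {s: {i: (i - 1 + s) % 4 + 1 for i in range(1, 5)} for s in range(4)}

# Node beta_0: x_3 branched to 1 and x_4 branched to 0
B1, B0 = {3}, {4}

# Orbital fixing may only use shifts stabilizing B1 ...
Delta = [s for s in range(4) if {perms[s][i] for i in B1} == B1]
of_orbit = {perms[s][i] for s in Delta for i in B0}  # ... so OF finds no new fixing

# The SHC for gamma_1 reads (x_3, x_4) >= (x_2, x_3); with x_3 = 1 and x_4 = 0
# it becomes (1, 0) >= (x_2, 1), which only x_2 = 0 satisfies:
surviving_x2 = {x2 for x2 in (0, 1) if (1, 0) >= (x2, 1)}
```

Only the identity stabilizes $B_1 = \{3\}$, so the $\Delta^{\beta_0}$-orbit of the 0-branched index is $\{4\}$ and OF\xspace fixes nothing new, while the SHC fixes $x_2 = 0$.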
Consequently, symmetry handling by isomorphism pruning and orbital fixing can be improved by identifying further symmetry handling methods that are compatible with the modified \SHC{}s\xspace.
\subsection{The framework} \label{sec:theframework}
In this section, we present our unified framework for symmetry handling with the following goals: It should
\begin{enumerate}[label={{(G\arabic*)}},ref={G\arabic*}] \item\label{G1} allow to check whether different symmetry
handling methods are compatible.
In particular, it should ensure compatibility of LexFix\xspace, isomorphism
pruning, and OF\xspace. \item\label{G2} generalize the modified \SHC{}s\xspace by
Ostrowski~\cite{ostrowski2009symmetry}. \item\label{G3} apply to general variable types and general symmetries (not
necessarily permutations). \end{enumerate}
To achieve these goals, we define a more general class of \SHC{}s\xspace~$\sigma_\beta(x) \succeq \sigma_\beta(\gamma(x))$, where~$\gamma \in \Gamma$ and~$\beta \in \ensuremath{\mathcal V}$, that are not necessarily based on branching decisions.
Let~$\ensuremath{\mathrm\Phi} \subseteq \mathds{R}^n$ and let~$f\colon \ensuremath{\mathrm\Phi} \to \mathds{R}$ be such that~$\mathrm{OPT}(f,\ensuremath{\mathrm\Phi})$ can be solved by (spatial) branch-and-bound. Let~$\Gamma$ be a group of symmetries of~$\mathrm{OPT}(f,\ensuremath{\mathrm\Phi})$. Let~$\ensuremath{\mathcal B} = (\ensuremath{\mathcal V}, \ensuremath{\mathcal E})$ be a branch-and-bound tree and let~$\beta \in \ensuremath{\mathcal V}$. In our modified \SHC{}s\xspace, the map~$\sigma_\beta\colon \mathds{R}^n \to \mathds{R}^{m_\beta}$ will be parameterized via a permutation~$\pi_\beta \in \symmetricgroup{n}$, a symmetry~$\varphi_\beta \in \Gamma$, and an integer~$m_\beta \in \{0, \dots, n\}$ as~$\sigma_\beta(\cdot) \ensuremath{\coloneqq} \restrict{\left( \pi_\beta \circ \varphi_\beta (\cdot) \right)}{[m_\beta]}$. As in Ostrowski's approach, $\pi_\beta$ selects a variable ordering and~$m_\beta$ allows restricting the \SHC{}s\xspace to a subset of variables. In contrast to~\cite{ostrowski2009symmetry}, however, $\pi_\beta$ does not necessarily correspond to the branching order. Moreover, $\varphi_\beta$ provides additional degrees of freedom as it allows changing the variable order imposed by~$\pi_\beta$. We refer to the structure~$(m_\beta, \pi_\beta, \varphi_\beta)_{\beta \in \ensuremath{\mathcal V}}$ as a \emph{symmetry prehandling structure} for $\ensuremath{\mathcal B}$. Note that this definition already achieves goal~\eqref{G2} by setting~$\varphi_\beta = \ensuremath{\mathrm{id}}$, using the same~$\pi_\beta$ as in Section~\ref{sec:isopruneRevisited}, and setting~$m_\beta$ to be the number of different branching variables at node~$\beta$.
\begin{theorem}
\label{thm:main}
Let~$\ensuremath{\mathrm\Phi} \subseteq \mathds{R}^n$ and let~$f\colon \ensuremath{\mathrm\Phi} \to \mathds{R}$ be
such that~$\mathrm{OPT}(f,\ensuremath{\mathrm\Phi})$ can be solved by (spatial)
branch-and-bound.
Let~$\Gamma$ be a finite group of symmetries of~$\mathrm{OPT}(f,\ensuremath{\mathrm\Phi})$.
Suppose that the branch-and-bound method used for
solving~$\mathrm{OPT}(f,\ensuremath{\mathrm\Phi})$ generates a finite full
B\&{}B\xspace-tree~$\ensuremath{\mathcal B} = (\ensuremath{\mathcal V}, \ensuremath{\mathcal E})$.
For each node~$\beta \in \ensuremath{\mathcal V}$,
let~$(m_\beta,\pi_\beta,\varphi_\beta) \in [n]_0 \times \symmetricgroup{n}
\times \Gamma$.
Let~$\sigma_\beta(\cdot) = \restrict{(\pi_\beta \circ
\varphi_\beta(\cdot))}{[m_\beta]}$.
Suppose that we enforce, for every~$\beta \in \ensuremath{\mathcal V}$,
\begin{equation}
\label{eq:main}
\sigma_\beta(x) \succeq \sigma_\beta (\gamma(x))\
\text{for all}\
\gamma \in \Gamma.
\end{equation} If~$(m_\beta,\pi_\beta,\varphi_\beta)$ satisfies the so-called \emph{correctness conditions}~\eqreffromto{cond:ffunc}{cond:permutationcondition} for all nodes~$\beta \in \ensuremath{\mathcal V}$: \begin{enumerate}[ label={{(C\arabic*)}}, ref={C\arabic*}, itemsep={.5em} ] \item \label{cond:ffunc} If $\beta$ has a parent $\alpha \in \ensuremath{\mathcal V}$, then $m_\beta \geq m_\alpha$ and $\pi_\alpha(x)_i = \pi_\beta(x)_i$ holds for all $i \leq m_\alpha$ and $x \in \mathds{R}^n$;
\item \label{cond:symmetricallypermute} If $\beta$ has a parent $\alpha \in \ensuremath{\mathcal V}$, then $\varphi_\beta =\varphi_\alpha \circ \psi_\alpha$ for some \[
\psi_\alpha \in \stab{\Gamma}{\ensuremath{\mathrm\Phi}(\alpha)}
\ensuremath{\coloneqq} \{ \gamma \in \Gamma :
\ensuremath{\mathrm\Phi}(\alpha) = \gamma(\ensuremath{\mathrm\Phi}(\alpha)) \}; \]
\item \label{cond:sibl} If $\beta$ has a sibling $\beta' \in \ensuremath{\mathcal V}$, then $m_\beta = m_{\beta'}$, $\pi_\beta = \pi_{\beta'}$, and $\varphi_\beta = \varphi_{\beta'}$, i.e., $\psi_\alpha$ in \eqref{cond:symmetricallypermute} does not depend on $\beta$;
\item \label{cond:permutationcondition} If $\beta$ has a feasible solution $x \in \ensuremath{\mathrm\Phi}(\beta)$, then for all permutations $\xi \in \Gamma$ with $\sigma_\beta(x) = \sigma_\beta(\xi(x))$ also the permuted solution $\xi(x)$ is feasible in $\ensuremath{\mathrm\Phi}(\beta)$; \end{enumerate} then, for each~$\tilde{x} \in \ensuremath{\mathrm\Phi}$, there is exactly one leaf~$\nu$ of the B\&{}B\xspace-tree con\-taining a solution symmetric to~$\tilde{x}$, i.e., for which there is~$\xi \in \Gamma$ with~$\xi(\tilde{x}) \in \ensuremath{\mathrm\Phi}(\nu)$. \end{theorem}
Before we apply and prove this theorem, we interpret the correctness conditions and provide some implications and consequences. We start with the latter.
\begin{itemize} \item Enforcing~\eqref{eq:main} handles
symmetries by excluding feasible solutions from the search space while
guaranteeing that exactly one representative solution per class of
symmetric solutions remains feasible (recall that~$\ensuremath{\mathcal B}$ does not
prune nodes by bound).
Note that by enforcing~\eqref{eq:main}, symmetry reductions can only take
place on variables ``seen'' by~$\sigma_\beta(x) \succeq
\sigma_\beta(\gamma(x))$ for some~$\gamma\in\Gamma$.
We stress that it is not immediate how~\eqref{eq:main} can be enforced
efficiently.
We will turn to this question in Section~\ref{sec:cast}.
\item If we prune nodes by bound, \eqref{eq:main} can still be used to handle symmetries.
But not necessarily all~$x \in \ensuremath{\mathrm\Phi}$ have
a symmetric counterpart feasible at some leaf (e.g., if~$x$ is
suboptimal).
\item If not all constraints of type~\eqref{eq:main} are completely enforced, we still find valid symmetry reductions, but not necessarily exactly one representative solution.
\item If two symmetry handling methods can be expressed in terms of~\eqref{eq:main} with the same choice of the symmetry prehandling structure~$(m_\beta, \pi_\beta, \varphi_\beta)_{\beta \in \ensuremath{\mathcal V}}$, then both methods can be applied at the same time, i.e., they are \emph{compatible}.
\item In practice, \bb is enhanced by cutting planes or domain
propagation such as reduced cost fixing.
Both also work in our framework if their reductions are
\emph{symmetry compatible}, i.e., if, for~$\beta \in \ensuremath{\mathcal V}$, the
domain of a variable~$x_i$ is reduced, the same reduction can be applied
to all symmetric variables w.r.t.\ symmetries at~$\beta$.
Margot~\cite[Section~4]{Margot2003} discusses this in detail for
IsoPr\xspace. He refers to this as \emph{strict setting
algorithms}.
\item For spatial branch-and-bound, the children of a node~$\alpha$ do not
necessarily partition~$\ensuremath{\mathrm\Phi}(\alpha)$ (the feasible regions of
children can overlap on their boundary).
In this case, \eqref{eq:main} can still be used to handle symmetries, but
there might exist several leaves containing a symmetrically equivalent
solution. \end{itemize}
\begin{remark} \label{remark:improper} As propagating \SHC{}s\xspace cuts off feasible solutions, such propagations are not symmetry-compatible. Therefore, we consider SHC\xspace reductions in our framework as special branching decisions, called \emph{improper}: For a SHC\xspace reduction $C \subseteq \mathds{R}^n$ at node~$\beta \in \ensuremath{\mathcal V}$, two children $\omega, \omega'$ are introduced with $\ensuremath{\mathrm\Phi}(\omega) = \ensuremath{\mathrm\Phi}(\beta) \cap C$ and~ $\ensuremath{\mathrm\Phi}(\omega') = \ensuremath{\mathrm\Phi}(\beta) \setminus \ensuremath{\mathrm\Phi}(\omega)$. Node $\omega'$ can then be pruned by symmetry. Complementing this, traditional (standard) branching decisions are called \emph{proper}. \end{remark}
\paragraph{Interpretation} Theorem~\ref{thm:main} iteratively builds \SHC{}s\xspace~$\sigma_\beta(x) \succeq \sigma_\beta(\gamma(x))$ that do not necessarily build upon a common lexicographic order for different nodes~$\beta \in \ensuremath{\mathcal V}$. The map~${\sigma_\beta(\cdot) = \restrict{(\pi_\beta \circ \varphi_\beta(\cdot))}{[m_\beta]}}$ accepts an~$n$-dimensional vector, considers a symmetrically equivalent representative solution hereof ($\varphi_\beta$), reorders its entries ($\pi_\beta$), and afterwards restricts them to the first~$m_\beta$ coordinates. This way, $\sigma_\beta$ selects~$m_\beta$ expressions (and their images) that appear in the \SHC{}s\xspace~\eqref{eq:main}. To ensure that consistent \SHC{}s\xspace are derived, sufficient information needs to be passed on to a node's children in the B\&{}B\xspace-tree, which is achieved as follows.
For ease of explanation, let us first assume~$\varphi_\beta$ is the identity~$\ensuremath{\mathrm{id}}$. Then, \eqref{cond:ffunc} guarantees that a child has no less information than its parent. Moreover, siblings must not be too different, i.e., new information at one child also needs to be known to its siblings~\eqref{cond:sibl}. Condition~\eqref{cond:permutationcondition} ensures that if two solutions~$x$ and~$\xi(x)$ appear identical for the \SHC{}s\xspace in the sense~$\sigma_\beta(x) = \sigma_\beta(\xi(x))$, feasibility of~$x$ implies feasibility of~$\xi(x)$. In other words, if $x$ and $\xi(x)$ are identical with respect to $\sigma_\beta$, it cannot be that one solution is feasible at $\beta$ while the other is not.
Conditions~\eqref{cond:ffunc}, \eqref{cond:sibl}, and~\eqref{cond:permutationcondition} describe how $\sigma_\beta(x)$ ``grows'' as nodes $\beta$ follow a rooted path, and that siblings are handled in the same way. If $\varphi_\beta = \ensuremath{\mathrm{id}}$, for a node $\beta$ with ancestor $\mu$ all variables and expressions of $\sigma_\mu(x)$ also occur in the first $m_\mu$ elements of $\sigma_\beta(x)$. Condition~\eqref{cond:symmetricallypermute} allows for more flexibility in this. Let $\alpha$ be the parent of $\beta$. If there is a symmetry $\gamma \in \Gamma$ that leaves the feasible region of $\alpha$ invariant (i.e., $\ensuremath{\mathrm\Phi}(\alpha) = \gamma(\ensuremath{\mathrm\Phi}(\alpha))$), one can choose to handle the symmetries with respect to this symmetrically equivalent solution space from node $\beta$ onward.
This degree of freedom might help a solver find more symmetry reductions in comparison to just ``growing'' the considered representatives. For example, in Figure~\ref{fig:branching:orbitopalfixing} at node $\alpha$ with~$(\vartheta_{2,3},\vartheta_{1,2},\vartheta_{1,3}) \gets (0,1,0)$ the feasible region $\ensuremath{\mathrm\Phi}(\alpha)$ is identical when permuting the first two columns or the last three columns. Suppose that one branches next on variable $\vartheta_{3,3}$; then the zero-branch finds two reductions (namely $\vartheta_{3,4},\vartheta_{3,5} \gets 0$) and the one-branch finds none. If the solver prefers to balance the number of reductions found across the siblings, one can exchange columns 3 and 4 for the sake of symmetry handling. Effectively, this moves the branching variable to the fourth column. Applying orbitopal fixing on the matrix where these columns are exchanged leads to one fixing in either child.
\paragraph{Examples} Let~$\ensuremath{\mathcal B} = (\ensuremath{\mathcal V}, \ensuremath{\mathcal E})$ be a B\&{}B\xspace-tree, in which each branching decision partitions the domain of exactly one variable. We will show that there are many possible symmetry prehandling structures~$(m_\beta, \pi_\beta, \varphi_\beta)_{\beta \in \ensuremath{\mathcal V}}$ that satisfy the correctness conditions of Theorem~\ref{thm:main}. Hence, this gives many degrees of freedom to handle symmetries. In the following, we discuss choices that resemble three symmetry handling techniques: static \SHC{}s\xspace, Ostrowski's branching variable ordering, and a variant of orbitopal fixing that is more flexible than the setting of Bendotti~et~al.\@\xspace.
\begin{example}[Static \SHC{}s\xspace]
\label{ex:staticsetting}
The static \SHC{}s\xspace~$x \succeq \gamma(x)$ for all~$\gamma \in \Gamma$ can be
derived in our framework by setting, for each~$\beta \in \ensuremath{\mathcal V}$, the
parameters~$m_\beta = n$, $\pi_\beta = \varphi_\beta = \psi_\beta = \ensuremath{\mathrm{id}}$.
\eqreffromto{cond:ffunc}{cond:sibl} are satisfied trivially.
As any~$x \in \mathds{R}^n$ satisfies
$\sigma_\beta(x) = \restrict{\pi_\beta \varphi_\beta(x)}{[n]} = x$,
we find $\sigma_\beta(x) = \sigma_\beta \gamma(x)$
if and only if~$x = \gamma(x)$. Hence, also
\eqref{cond:permutationcondition} holds. \end{example}
Next, we mimic Ostrowski's rank for binary variables and generalize it to arbitrary variable types. In the latter case, only considering the branching order is not sufficient as one might branch several times on the same variable.
\begin{example}[Branching-based]
\label{ex:vardynamic}
Let~$\beta \in \ensuremath{\mathcal V}$.
If~$\beta$ is the root node, let~$m_\beta = 0$, i.e., $\sigma_\beta$ is
void.
Otherwise, let~$\alpha$ be the parent of~$\beta$.
If~$\beta$ arises from~$\alpha$ by a proper branching decision on
variable~$x_{\ensuremath{\hat\imath}}$ and~$\ensuremath{\hat\imath}$ has not been used for branching
before, i.e., $\ensuremath{\hat\imath} \notin (\pi_\alpha \varphi_\alpha)^{-1}([m_\alpha])$,
then set~$m_\beta = m_\alpha + 1$, $\varphi_\beta = \ensuremath{\mathrm{id}}$ and
select~$\pi_\beta \in \symmetricgroup{n}$ with~$\pi_\beta(i) =
\pi_\alpha(i)$ for $i \leq m_\alpha$ and $\pi_\beta(m_\beta) = \ensuremath{\hat\imath}$.
Otherwise, inherit the symmetry prehandling structure from~$\alpha$,
i.e.,
$\pi_\beta = \pi_\alpha$, $m_\beta = m_\alpha$, and $\varphi_\beta =
\ensuremath{\mathrm{id}}$. \end{example}
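Since~$\varphi_\beta = \ensuremath{\mathrm{id}}$ throughout this example, the evolving data~$(m_\beta, \pi_\beta)$ can be represented by the list of distinct branching variables in order of first use. A minimal sketch under this assumption (the helper name is ours, not part of the example):

```python
def update_on_branch(order, branch_var):
    """Branching-based prehandling structure with phi = id: 'order' lists
    the first m_beta variable indices of the ordering pi_beta. A proper
    branching on a fresh variable extends it; otherwise the parent's
    structure is inherited unchanged."""
    if branch_var in order:
        return list(order)           # inherit (m, pi) from the parent
    return list(order) + [branch_var]  # m_beta = m_alpha + 1, pi extended
```

Condition~\eqref{cond:ffunc} holds by construction because the list is only ever extended at the end.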
\begin{proof}[Example~\ref{ex:vardynamic} satisfies \eqreffromto{cond:ffunc}{cond:permutationcondition}] \eqreffromto{cond:ffunc}{cond:sibl} hold trivially. To show~\eqref{cond:permutationcondition}, let~$x \in \ensuremath{\mathrm\Phi}(\beta)$ and $\xi \in \Gamma$ such that~$\sigma_\beta(x) = \sigma_\beta(\xi(x))$. By definition of~$(m_\beta,\pi_\beta,\varphi_\beta)$, $\sigma_\beta(x)$ restricts~$x$ to all (resorted) variables used for branching up to node~$\beta$. Note that the feasible region $\ensuremath{\mathrm\Phi}(\beta)$ is the intersection of
\begin{enumerate*}[label={(\roman*)}, ref={(\roman*)}] \item \label{ex:vardynamic:pr:1} $\ensuremath{\mathrm\Phi}$, \item \label{ex:vardynamic:pr:2} proper branching decisions, and \item \label{ex:vardynamic:pr:3} symmetry reductions due to~\eqref{eq:main}. \end{enumerate*}
It is thus sufficient to show~$\xi(x)$ is contained in each of these sets. Since~$\xi$ is a problem symmetry and~$x \in \ensuremath{\mathrm\Phi}$, also~$\xi(x) \in \ensuremath{\mathrm\Phi}$. Moreover, as all branching variables are represented in~$\sigma_\beta$ and~$x$ respects the branching decisions, $\sigma_\beta(x) = \sigma_\beta(\xi(x))$ implies that~$\xi(x)$ satisfies the branching decisions. Thus, \ref{ex:vardynamic:pr:1} and~\ref{ex:vardynamic:pr:2} hold. Finally, the \SHC{}s\xspace~\eqref{eq:main} for~$\beta$ dominate the \SHC{}s\xspace for its ancestors~$\alpha$ since~\eqref{cond:ffunc} and~$\varphi_\alpha = \ensuremath{\mathrm{id}}$ hold, i.e., if~$\xi(x)$ satisfies the \SHC{}s\xspace for~$\beta$, then also all previous \SHC{}s\xspace. As~$\Gamma \circ \xi = \Gamma$, each~$\gamma \in \Gamma$ can be written as~$\gamma' \circ \xi$ for some~$\gamma' \in \Gamma$. Therefore, for all~$\gamma' \in \Gamma$, we conclude $\sigma_\beta(\xi(x)) = \sigma_\beta(x) \succeq \sigma_\beta(\gamma(x)) = \sigma_\beta(\gamma'(\xi(x)))$, i.e., \ref{ex:vardynamic:pr:3} and thus~\eqref{cond:permutationcondition} holds. \end{proof}
By adapting the variable order used in LexFix\xspace to the order imposed by~$\sigma_\beta$, LexFix\xspace is thus compatible with isomorphism pruning and OF\xspace, i.e., the framework achieves goal~\eqref{G1}. In particular, the statement is true for non-binary problems if these methods can be generalized to arbitrary variable domains. We will discuss this in more detail in the next section.
The last symmetry prehandling structure accommodates orbitopal fixing. Bendotti et~al.\@\xspace~\cite{BendottiEtAl2021} already discussed a dynamic variant of orbitopal fixing, which reorders the rows of the orbitope matrix similar to Ostrowski's rank; columns, however, are not reordered. As described above, allowing also column reorderings might lead to more balanced branch-and-bound trees, which can be achieved as follows.
\begin{example}[Specialized for orbitopal fixing]
\label{ex:orbitopalfixing}
Let $M$ be the $p \times q$ orbitope matrix corresponding to the problem
variables via $M_{i,j} = x_{q(i-1) + j}$.
That is, $x$ is filled row-wise with the entries of~$M$.
Let~$\beta \in \ensuremath{\mathcal V}$.
If~$\beta$ is the root node, define $(m_\beta, \pi_\beta, \varphi_\beta)
=
(0, \ensuremath{\mathrm{id}}, \ensuremath{\mathrm{id}})$.
Otherwise, let~$\alpha$ be the parent of~$\beta$.
If~$\beta$ arises from~$\alpha$ by a proper branching decision on
variable~$M_{\ensuremath{\hat\imath},\hat{\jmath}}$ and no variable in the~$\ensuremath{\hat\imath}$-th row
has been
used for branching before,
set $m_\beta = m_\alpha + q$, select~$\pi_\beta \in \symmetricgroup{n}$
with
$\pi_\beta(k) = \pi_\alpha(k)$ for~$k \in [m_\alpha]$,
and, for~$k \in [q]$, define $\pi_\beta(m_\alpha + k) = q(\ensuremath{\hat\imath}-1) + k$.
Also choose
$\psi_\alpha \in \stab{\Gamma}{\ensuremath{\mathrm\Phi}(\alpha)}$
yielding $\varphi_\beta = \varphi_\alpha\circ \psi_\alpha$.
Consistent with Condition~\eqref{cond:sibl},
the choice of $\psi_\alpha$ is the same for all children
sharing the same parent $\alpha$.
If the variable is already included in the variable ordering
or if the branching decision is improper,
inherit~$(m_\beta, \pi_\beta, \varphi_\beta)
= (m_\alpha, \pi_\alpha, \varphi_\alpha)$.
Effectively, this creates a new matrix in which the rows are sorted based
on branching decisions and columns can be permuted as long as this does
not affect symmetrically feasible solutions. \end{example}
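For illustration, the row-wise index arithmetic~$M_{i,j} = x_{q(i-1)+j}$ and the row-dynamic part of the ordering can be sketched as follows ($0$-based indices, $\varphi_\beta = \ensuremath{\mathrm{id}}$; the function is our own example and omits the column permutations provided by~$\psi_\alpha$):

```python
def orbitope_sigma_order(q, branched_rows):
    """Row-dynamic ordering for a p x q orbitope matrix M with
    M[i][j] = x[q*i + j] (0-based): sigma exposes whole rows, in the
    order in which each row was first used for branching."""
    order = []
    for i in branched_rows:                    # rows by first branching
        order.extend(q * i + j for j in range(q))
    return order                  # m_beta = q * len(branched_rows) indices
```

Branching on any variable of a fresh row thus appends all $q$ variables of that row at once, matching the update~$m_\beta = m_\alpha + q$ in the example.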
Completely handling \SHC{}s\xspace~\eqref{eq:main} on $\beta$ corresponds to using orbitopal fixing on the $(m_\beta / q) \times q$-matrix filled row-wise with the variables with indices in~$(\pi_\beta \varphi_\beta)^{-1}(i)$ for~$i \in [m_\beta]$. Bendotti et~al.\@\xspace~\cite{BendottiEtAl2021} introduce this without the freedom of permuting the matrix columns, i.e., for all $\beta \in \ensuremath{\mathcal V}$ they choose~$\varphi_\beta = \ensuremath{\mathrm{id}}$. We call their setting \emph{row-dynamic}, whereas we refer to our setting as \emph{row- and column-dynamic}.
\begin{proof}[Example~\ref{ex:orbitopalfixing} satisfies
\eqreffromto{cond:ffunc}{cond:permutationcondition}]
Obviously, \eqreffromto{cond:ffunc}{cond:sibl} hold.
To show~\eqref{cond:permutationcondition}, we use induction.
As~\eqref{cond:permutationcondition} holds at the root node, the
induction base holds.
So, assume~\eqref{cond:permutationcondition} holds at node $\alpha$
with child~$\beta$~{\itshape (IH)}.
We show \eqref{cond:permutationcondition} also holds at $\beta$.
Let $x \in \ensuremath{\mathrm\Phi}(\beta)$ and $\xi \in \Gamma$
with $\sigma_\beta(x) = \sigma_\beta(\xi(x))$.
To show~$\xi(x) \in \ensuremath{\mathrm\Phi}(\beta)$, we distinguish if the branching
decision from $\alpha$ to $\beta$ is proper or not.
Note that both proper and improper branching decisions only happen on
variables present in~$\sigma_\beta$ by construction
of~$(m_\beta,\pi_\beta,\varphi_\beta)$.
Hence, if~${\xi(x) \in \ensuremath{\mathrm\Phi}(\alpha)}$ holds,
$\sigma_\beta(x) = \sigma_\beta(\xi(x))$
implies $\xi(x) \in \ensuremath{\mathrm\Phi}(\beta)$.
Thus, it suffices to prove~${\xi(x) \in \ensuremath{\mathrm\Phi}(\alpha)}$.
For improper branching decisions,
$\sigma_\beta = \sigma_\alpha$ and \SHC{}s\xspace~\eqref{eq:main} are propagated.
As ${x \in \ensuremath{\mathrm\Phi}(\beta) \subseteq \ensuremath{\mathrm\Phi}(\alpha)}$
and $\sigma_\alpha(x) = \sigma_\alpha(\xi(x))$,
{\itshape (IH)} yields $\xi(x) \in \ensuremath{\mathrm\Phi}(\alpha)$.
For proper branching decisions,
we observe that
$\sigma_\beta(\cdot) =
\restrict{(\pi_\beta \varphi_\alpha \psi_\alpha(\cdot))}{[m_\beta]}$,
$\sigma_\alpha(\cdot) =
\restrict{(\pi_\beta \varphi_\alpha(\cdot))}{[m_\alpha]}$
and~$m_\beta \geq m_\alpha$.
Thus,
$\sigma_\beta(x) = \sigma_\beta(\xi(x))$
implies
$\sigma_\alpha(\psi_\alpha(x)) = \sigma_\alpha(\psi_\alpha\xi(x))$.
As~$x \in \ensuremath{\mathrm\Phi}(\beta) \subseteq \ensuremath{\mathrm\Phi}(\alpha)$
and $\psi_\alpha \in \stab{\Gamma}{\ensuremath{\mathrm\Phi}(\alpha)}$,
we find~$\psi_\alpha(x) \in \ensuremath{\mathrm\Phi}(\alpha)$.
By~{\itshape (IH)},
$\sigma_\alpha(\psi_\alpha(x)) = \sigma_\alpha(\psi_\alpha\xi(x))$
yields $\psi_\alpha \xi(x) \in \ensuremath{\mathrm\Phi}(\alpha)$.
Again, since $\psi_\alpha \in \stab{\Gamma}{\ensuremath{\mathrm\Phi}(\alpha)}$,
by applying $\psi_\alpha^{-1}$ left
we find $\xi(x) \in \ensuremath{\mathrm\Phi}(\alpha)$. \end{proof}
\paragraph{Proof of Theorem~\ref{thm:main}} The examples illustrate that many symmetry prehandling structures are compatible with the correctness conditions, which shows that there are potentially many variants to handle symmetries based on Theorem~\ref{thm:main}. We proceed to prove this theorem. To this end, we make use of the following lemma.
\begin{lemma} \label{lem:gen:main} \begin{subequations}
Let~$\ensuremath{\mathrm\Phi} \subseteq \mathds{R}^n$ and let~$f\colon \ensuremath{\mathrm\Phi} \to \mathds{R}$ be
such that~$\mathrm{OPT}(f,\ensuremath{\mathrm\Phi})$ can be solved by (spatial)
branch-and-bound.
Let~$\Gamma$ be a finite group of symmetries of~$\mathrm{OPT}(f,\ensuremath{\mathrm\Phi})$.
Suppose that the branch-and-bound method used for
solving~$\mathrm{OPT}(f,\ensuremath{\mathrm\Phi})$ generates a full
B\&{}B\xspace-tree~$\ensuremath{\mathcal B} = (\ensuremath{\mathcal V}, \ensuremath{\mathcal E})$.
Let $\beta \in \ensuremath{\mathcal V}$ be not a leaf of the B\&{}B\xspace-tree.
If there is a feasible solution $\tilde x \in \ensuremath{\mathrm\Phi}(\beta)$ with
\begin{equation}
\label{eq:sigmalexmax}
\sigma_\beta(\tilde x) \succeq \sigma_\beta \gamma (\tilde x)\
\text{for all}\ \gamma \in \Gamma,
\end{equation}
then $\beta$ has exactly one child $\omega \in \chi_\beta$ for which
there is~$\xi \in \Gamma$ such that
\begin{gather}
\label{eq:xiinfeas}
\xi(\tilde x) \in \ensuremath{\mathrm\Phi}(\omega),\
\\
\text{and}\
\label{eq:sigmaxilexmax}
\sigma_\omega \xi (\tilde x) \succeq \sigma_\omega \gamma \xi (\tilde x)\
\text{for all}\ \gamma \in \Gamma.
\end{gather} \end{subequations} \end{lemma}
\begin{proof} Let $\tilde x \in \ensuremath{\mathrm\Phi}(\beta)$ respect~\eqref{eq:sigmalexmax}. First, we show the existence of $\omega \in \chi_\beta$ satisfying~\eqref{eq:xiinfeas} and~\eqref{eq:sigmaxilexmax}. Thereafter, we show that $\omega$ is unique.
\noindent \emph{Existence:}\quad By Condition~\eqref{cond:sibl}, the maps $\sigma_\omega$ for all children~$\omega \in \chi_\beta$ are the same. Let~$\xi \in \Gamma$ be such that~$\sigma_\omega \xi(\tilde x)$ is lexicographically maximal. Note that~$\xi$ exists, since $\Gamma$ is a finite group. Then, $\sigma_\omega \xi(\tilde x) \succeq \sigma_\omega \gamma \xi(\tilde x)$, because~$\xi, \gamma \in \Gamma$ implies~$\gamma\xi \in \Gamma$. Thus, $\xi$ satisfies~\eqref{eq:sigmaxilexmax}. We show that~$\xi(\tilde x) \in \ensuremath{\mathrm\Phi}(\omega)$ for some $\omega \in \chi_\beta$.
Recall that the branching decision at $\beta$ partitions its feasible region, i.e., $\{ \ensuremath{\mathrm\Phi}(\omega) : \omega \in \chi_\beta \}$ partitions $\ensuremath{\mathrm\Phi}(\beta)$. As such, there is exactly one child $\omega \in \chi_\beta$ with $\xi(\tilde x) \in \ensuremath{\mathrm\Phi}(\omega)$ if $\xi(\tilde x) \in \ensuremath{\mathrm\Phi}(\beta)$. To show \eqref{eq:xiinfeas}, it thus suffices to prove~$\xi(\tilde x) \in \ensuremath{\mathrm\Phi}(\beta)$.
For any child $\omega \in \chi_\beta$, vector~$x$, and~$i \leq m_\beta$, we have \begin{equation} \label{eq:sigmabetasubstitutesigmaomega} (\sigma_\omega(x))_i = (\pi_\omega \varphi_\omega (x))_i \stackrel{\eqref{cond:ffunc}}= (\pi_\beta \varphi_\omega (x))_i \stackrel{\eqref{cond:symmetricallypermute}}= (\pi_\beta \varphi_\beta \psi_\beta (x))_i = (\sigma_\beta \psi_\beta (x))_i . \end{equation} Recall that $\xi \in \Gamma$ satisfies~\eqref{eq:sigmaxilexmax}. Substituting \eqref{eq:sigmabetasubstitutesigmaomega} yields $\sigma_\beta \psi_\beta \xi (\tilde x) \succeq \sigma_\beta \psi_\beta \gamma \xi (\tilde x)$ for all $\gamma \in \Gamma$. In particular, for $\gamma = \psi_\beta^{-1}\xi^{-1} \in \Gamma$, we find $\sigma_\beta \psi_\beta \xi (\tilde x) \succeq \sigma_\beta(\tilde x)$. Then~\eqref{eq:sigmalexmax} yields $\sigma_\beta \psi_\beta \xi (\tilde x) = \sigma_\beta(\tilde x)$. By~\eqref{cond:permutationcondition}, we thus have $\psi_\beta \xi(\tilde x) \in \ensuremath{\mathrm\Phi}(\beta)$. Since $\psi_\beta \in \stab{\Gamma}{\ensuremath{\mathrm\Phi}(\beta)}$, applying $\psi_\beta^{-1}$ left on this solution yields $\xi(\tilde x) \in \ensuremath{\mathrm\Phi}(\beta)$, herewith completing the first part.
\noindent \emph{Uniqueness:}\quad Suppose $\xi, \xi' \in \Gamma$ satisfy~\eqref{eq:sigmaxilexmax}. For~$\gamma = \xi' \xi^{-1}$, \eqref{eq:sigmaxilexmax} for~$\xi$ implies $\sigma_\omega \xi(\tilde x) \succeq \sigma_\omega \xi'(\tilde x)$. Analogously, for $\xi'$ we choose $\gamma = \xi (\xi')^{-1}$ to find $\sigma_\omega \xi'(\tilde x) \succeq \sigma_\omega \xi(\tilde x)$. As a result, $\sigma_\omega \xi(\tilde x) = \sigma_\omega \xi'(\tilde x)$.
Suppose $\xi(\tilde x) \in \ensuremath{\mathrm\Phi}(\omega)$. Let $x = \xi(\tilde x)$ and $\gamma = \xi' \xi^{-1} \in \Gamma$. Then, we find \[
\sigma_\omega(x)
=
\sigma_\omega \xi(\tilde x)
=
\sigma_\omega \xi'(\tilde x)
=
\sigma_\omega \xi'(\xi^{-1}(x))
=
\sigma_\omega \gamma(x), \] and Condition~\eqref{cond:permutationcondition} yields $\xi'(\tilde x) = \gamma(x) \in \ensuremath{\mathrm\Phi}(\omega)$. As the children $\chi_\beta$ partition $\ensuremath{\mathrm\Phi}(\beta)$ and $\xi'(\tilde x) \in \ensuremath{\mathrm\Phi}(\omega)$, there is no other child of $\beta$ where $\xi'(\tilde x)$ is feasible. Thus, independent from $\xi \in \Gamma$ satisfying~\eqref{eq:sigmaxilexmax}, there is exactly one child~$\omega \in \chi_\beta$ with~$\xi(\tilde x) \in \ensuremath{\mathrm\Phi}(\omega)$. \end{proof}
We are now able to prove Theorem~\ref{thm:main}.
\begin{proof}[Proof of Theorem~\ref{thm:main}]
Recall that we assumed~$\ensuremath{\mathcal B} = (\ensuremath{\mathcal V}, \ensuremath{\mathcal E})$ to be finite
and that we do not prune nodes by bound.
Let~$\ensuremath{\mathcal B}_d$ be the tree arising from~$\ensuremath{\mathcal B}$ by pruning all nodes
at depth larger than~$d$.
Let~$(m_\beta,\pi_\beta,\varphi_\beta)_{\beta \in \ensuremath{\mathcal V}}$ satisfy the
correctness conditions.
Let~$\check{x} \in \ensuremath{\mathrm\Phi}$ be any feasible solution to the original
problem.
We proceed by induction and show that, for every depth~$d$ of the tree,
there is exactly one leaf node in~$\ensuremath{\mathcal B}_d$ for which a permutation
of~$\check{x}$ is feasible and that does not violate the local
\SHC{}s\xspace~\eqref{eq:main}.
Let~$d = 0$.
The only node at depth~$d$ is the root node~$\alpha$.
Any feasible solution~$\check x \in \ensuremath{\mathrm\Phi}$
is feasible in the root node~$\alpha \in \mathcal V$
of the branch-and-bound tree~$\mathcal B$.
In particular, we can permute~$\check x$ by any $\xi \in \Gamma$,
and have a feasible symmetrical solution.
For the root node, choose $\xi \in \Gamma$ such that~$\sigma_\alpha
\xi(\check x) \succeq \sigma_\alpha \gamma \xi(\check x)$
for all~$\gamma \in \Gamma$.
That is, $\xi(\check{x})$ is not cut off by~\eqref{eq:main} at~$\alpha$.
Now let~$d > 0$.
By induction, we may assume that there is exactly one leaf node~$\beta$
of~$\ensuremath{\mathcal B}_d$ at which a permutation~$\xi(\check{x})$ is feasible and
that is not cut off by~\eqref{eq:main}.
If~$\beta$ is also a leaf in~$\ensuremath{\mathcal B}$, we are done.
Otherwise, since~$\xi(\check{x})$ is not cut off by~\eqref{eq:main}, we
can apply Lemma~\ref{lem:gen:main} and find that~$\beta$ has exactly one
child~$\omega$ at which a permutation of~$\xi(\check{x})$ is feasible and
is not cut off by~\eqref{eq:main} at node~$\omega$.
This concludes the proof. \end{proof}
\begin{remark}
For spatial branch-and-bound algorithms, two subtleties arise.
On the one hand, there might not exist a finite branch-and-bound tree.
If all branching decisions partition a subproblem's feasible region,
Theorem~\ref{thm:main} holds true for all trees pruned at a certain depth
level.
On the other hand, branching decisions do not necessarily partition the
feasible region.
In this case, \eqref{eq:main} can still be used to handle symmetries.
However, in the depth-pruned tree there might exist more than one leaf
containing a symmetric copy of a feasible solution. \end{remark}
\begin{remark}
Theorem~\ref{thm:main} still holds in case of some infinite groups.
The only place where finiteness is used is in the proof of
Lemma~\ref{lem:gen:main}, where it implies that a symmetry~$\xi \in \Gamma$
exists such that $\sigma_{\omega}\xi(\tilde x)$ is lexicographically maximal
for a fixed solution vector $\tilde x \in \ensuremath{\mathrm\Phi}(\beta)$.
For instance, for infinite groups of rotational symmetries, such a
symmetry always exists. \end{remark}
\section{Applying the framework to generic optimization problems} \label{sec:cast}
Due to Theorem~\ref{thm:main}, we can completely handle all symmetries of an arbitrary problem~$\mathrm{OPT}(f, \ensuremath{\mathrm\Phi})$, provided we know how to handle Constraints~\eqref{eq:main}. The aim of this section is therefore to find symmetry handling methods that can deal with non-binary variables. Since handling Constraints~\eqref{eq:main} is already difficult for binary problems, we cannot expect to handle all symmetries efficiently. Instead, we revisit the efficient methods LexFix\xspace, orbitopal fixing, and OF\xspace for binary variables and provide proper generalizations for non-binary problems, which allows us to partially enforce Constraints~\eqref{eq:main}. We refer to these generalizations as lexicographic reduction, orbitopal reduction, and orbital symmetry handling, respectively.
Throughout this section, we assume that~$\Gamma \leq \symmetricgroup{n}$.
\subsection{Lexicographic reduction} \label{sec:gen:lexred}
\subsubsection{The static setting}
Assume the symmetry prehandling structure of Example~\ref{ex:staticsetting} is used in Theorem~\ref{thm:main}. Then, the \SHC{}s\xspace~$x \succeq \gamma(x)$ for all~$\gamma \in \Gamma$ are enforced at each node of the branch-and-bound tree. For all~$i \in [n]$, let~$\ensuremath{\mathcal{D}}_i \subseteq \mathds{R}$ be the domain of variable~$x_i$ at a node of the branch-and-bound tree and let~$\ensuremath{\mathcal{D}} = (\ensuremath{\mathcal{D}}_i)_{i \in [n]}$ be the vector of variable domains. The aim of the lexicographic reduction (LexRed\xspace) algorithm is to find, for a fixed permutation~$\gamma \in \Gamma$, the smallest domains~$\ensuremath{\mathcal{D}}'_i$, $i \in [n]$, such that $\left\{ x \in \bigtimes_{i = 1}^n \ensuremath{\mathcal{D}}_i : x \succeq \gamma(x)\right\} = \left\{ x \in \bigtimes_{i = 1}^n \ensuremath{\mathcal{D}}'_i : x \succeq \gamma(x)\right\}$.
If~$\ensuremath{\mathcal{D}}_i \subseteq \ensuremath{\{0, 1\}}$ for all~$i \in [n]$, the reductions found by LexRed\xspace are equivalent to the reductions found by LexFix\xspace. For non-binary domains, similar ideas as for LexFix\xspace, which are described in~\cite{BestuzhevaEtal2021OO,doornmalenhojny2022cyclicsymmetries}, can be used: We iterate over the variables~$x_i$ with indices in increasing order. If~$x_j = \gamma(x)_j$ for all indices~$j < i$, we enforce~$x_i \geq \gamma(x)_i$, and we check if a solution with~$x_i = \gamma(x)_i$ exists. Before we provide a rigorous algorithm, we illustrate the idea.
\begin{example}
\label{ex:lexred} Let $\ensuremath{\mathrm\Phi} = [-1, 1]^4 \cap \mathds Z^4$ and $\gamma = (1,3,2,4)$. Consider a node with relaxed region $x \in \{ 0 \} \times [-1, 0] \times \{ 1 \} \times [-1, 1]$. Propagating $x \succeq \gamma(x)$, we find {\footnotesize \begin{equation*} \begin{bmatrix} x_1& =& 0\\ x_2& \in& [-1,0]\\ x_3& =& 1 \\ x_4& \in& [-1, 1]\\ \end{bmatrix} \succeq \begin{bmatrix} x_4& \in& [-1, 1]\\ x_3& =& 1 \\ x_1& =& 0\\ x_2& \in& [-1,0]\\ \end{bmatrix} \stackrel{\text{(}\dagger\text{)}}{\leadsto} \begin{bmatrix} x_1& =& 0\\ x_2& \in& [-1,0]\\ x_3& =& 1 \\ x_4& \in& [-1, 0]\\ \end{bmatrix} \succeq \begin{bmatrix} x_4& \in& [-1, 0]\\ x_3& =& 1 \\ x_1& =& 0\\ x_2& \in& [-1,0]\\ \end{bmatrix} \!. \end{equation*} } In ($\dagger$), we restrict the domain of $x_4$ by propagating $0 = x_1 \geq x_4$, resulting in $x_4 \in [-1, 0]$. If~${x_1 = x_4 = 0}$, then SHC\xspace $x \succeq \gamma(x)$ implies the contradiction ${[-1, 0] \ni x_2 \geq x_3 = 1}$, so we must have $x_1 > x_4$. Since $x_4 \in \mathds Z$, $x_4$ must be fixed to $-1$. No further domain reductions can be derived from $x \succeq \gamma(x)$. \end{example}
We now proceed with our generalization of LexFix\xspace. To enforce~$x \succeq \gamma(x)$ for general variable domains~$\ensuremath{\mathcal{D}}$, some artifacts need to be taken into account. For example, if~$n = 3$ and~$\gamma$ is the cyclic right-shift, then~$y^\epsilon \ensuremath{\coloneqq}(1+\epsilon,0,1) \succeq \gamma(y^\epsilon) = (1,1+\epsilon,0)$ for every~$\epsilon > 0$, but~$y^0 \prec \gamma(y^0)$, i.e., $\{x \in \mathds{R}^n : x \succeq \gamma(x)\}$ is not necessarily closed. Since optimization software usually can only handle closed sets, we propose the following solution. We extend~$\mathds{R}$ by an infinitesimal symbol~$\varepsilon$ that we can add to or subtract from any real number to represent a strict difference. This results in a symbolically correct algorithm that is as strong as possible. For example, $\min\{ 1 + x : x > 1 \} = 2 + \varepsilon$, $\min\{1 + x + \varepsilon : x > 1 \} = 2 + \varepsilon$, $\max\{ 1 + x : x < 2 \} = 3 - \varepsilon$, and we do not allow further arithmetic with the $\varepsilon$ symbol. In practice, however, we cannot enforce strict inequalities. We thus replace~$\varepsilon$ by~0, which will lead to slightly weaker but still correct reductions. This is no problem for our purposes: since we only ever apply either the $\min$-operator or the $\max$-operator, the sign of $\varepsilon$ is always the same; namely, if $\varepsilon$ appears, it has a positive sign in minimization operations and a negative sign in maximization operations.
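The $\varepsilon$-extension admits a simple encoding as pairs~$(a, s)$ representing~$a + s\varepsilon$ with~$s \in \{-1, 0, +1\}$, compared lexicographically. The following sketch is our own (the paper does not prescribe a data structure); \texttt{relax} models the practical replacement of~$\varepsilon$ by~$0$:

```python
# Encode a + s*epsilon as the tuple (a, s) with s in {-1, 0, +1}. Since
# epsilon is infinitesimally small, tuple (lexicographic) comparison
# orders these symbolic values correctly.
def eps_add(u, c):
    """Add a real constant c to a + s*epsilon (no epsilon arithmetic
    beyond a single additive symbol is allowed)."""
    return (u[0] + c, u[1])

def relax(u):
    """Replace epsilon by 0, yielding weaker but still correct bounds."""
    return u[0]
```

For instance, the infimum of~$\{x : x > 1\}$ is encoded as~$(1, +1)$, and adding the constant~$1$ reproduces~$\min\{1 + x : x > 1\} = 2 + \varepsilon$.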
Now, we turn to the generalization of LexFix\xspace to arbitrary variable domains. We introduce a timestamp $t$. At every time $t$, the current domain is denoted by~$\ensuremath{\mathcal{D}}^t$. We initialize~$\ensuremath{\mathcal{D}}_i^0 = \ensuremath{\mathcal{D}}_i$ for all~$i \in [n]$, and for two timestamps~$t > t'$, we will possibly strengthen the domains, i.e., $\ensuremath{\mathcal{D}}_i^{t} \subseteq \ensuremath{\mathcal{D}}_i^{t'}$.
The core of LexRed\xspace is the observation that if~$\restrict{x}{[t-1]} = \restrict{\gamma(x)}{[t-1]}$ holds for some~$t \geq 1$, then constraint~$x \succeq \gamma(x)$ can only hold when~$x_t \geq \gamma(x)_t = x_{\gamma^{-1}(t)}$. This observation is exploited in a two-stage approach. In the first stage, LexRed\xspace performs the following steps for all~$t = 1,\dots,n$:
\begin{enumerate} \item The algorithm propagates~$x_t \geq \gamma(x)_t$ by
updating the variable domains via
\begin{equation} \label{eq:VDR} \begin{aligned} \ensuremath{\mathcal{D}}_t^t &= \{ z \in \ensuremath{\mathcal{D}}_t^{t-1} : z \geq \min(\ensuremath{\mathcal{D}}_{\gamma^{-1}(t)}^{t-1}) \} ,\\ \ensuremath{\mathcal{D}}_{\gamma^{-1}(t)}^t &= \{ z \in \ensuremath{\mathcal{D}}_{\gamma^{-1}(t)}^{t-1} : z \leq \max(\ensuremath{\mathcal{D}}_{t}^{t-1}) \} , \text{ and}\\ \ensuremath{\mathcal{D}}_i^t &= \ensuremath{\mathcal{D}}_i^{t-1}\ \text{for}\ i \in [n] \setminus \{ t, \gamma^{-1}(t) \}. \end{aligned} \end{equation} \item Then, it checks whether~$\ensuremath{\mathcal{D}}^t_i \neq \emptyset$ for all~$i \in
[n]$ and whether~$x \in \ensuremath{\mathcal{D}}^t$ guarantees~$\restrict{x}{[t]} =
\restrict{\gamma(x)}{[t]}$.
If this is the case, the algorithm continues with iteration~$t+1$.
Otherwise, the first phase of LexRed\xspace terminates, say at time~$t^\star$. \end{enumerate}
Of course, all variable domain reductions found during phase one are correct based on the previously mentioned observation.
At the end of phase one, three cases can occur: a variable domain is empty; phase one has propagated all variables, i.e., $x = \gamma(x)$ for all~$x \in \ensuremath{\mathcal{D}}^n$; or~$\restrict{x}{[t^\star-1]} = \restrict{\gamma(x)}{[t^\star-1]}$ and there exists~$(v,w) \in \ensuremath{\mathcal{D}}^{t^\star}_{t^\star} \times \ensuremath{\mathcal{D}}^{t^\star}_{\gamma^{-1}(t^\star)}$ with~$v \neq w$. In either of the first two cases, the algorithm stops because it either has shown that no solution~$x \in \ensuremath{\mathcal{D}}^0$ with~$x \succeq \gamma(x)$ exists or all variables are fixed. In the last case, note that~$v > w$ holds due to the domain reductions at time~$t^\star$. Since~$\restrict{x}{[t^\star-1]} = \restrict{\gamma(x)}{[t^\star-1]}$ holds for all~$x \in \ensuremath{\mathcal{D}}^{t^\star}$, the relation~$v > w$ shows that there exists~$x \in \ensuremath{\mathcal{D}}^{t^\star}$ such that~$\restrict{x}{[t^\star]} \succ \restrict{\gamma(x)}{[t^\star]}$. Consequently, the domains of variables~$x_{t^\star+1},\dots,x_n$ cannot be tightened. It might be possible, however, that the domains of~$x_{t^\star}$ and~$x_{\gamma^{-1}(t^\star)}$ can be reduced further, namely if~$x_{t^\star} = \min \ensuremath{\mathcal{D}}^{t^\star}_{\gamma^{-1}(t^\star)}$ or~$x_{\gamma^{-1}(t^\star)} = \max \ensuremath{\mathcal{D}}^{t^\star}_{t^\star}$ is possible. In either case, the other variable necessarily attains the same value, so a solution with~$\restrict{x}{[t^\star]} = \restrict{\gamma(x)}{[t^\star]}$ is created, which might lead to a contradiction with~$x \succeq \gamma(x)$, as illustrated in Example~\ref{ex:lexred}.
In the second stage of LexRed\xspace, it is checked whether one of these cases indeed leads to a contradiction. If this is the case, $\min \ensuremath{\mathcal{D}}^{t^\star}_{\gamma^{-1}(t^\star)}$ can be removed from the domain of~$x_{t^\star}$ or~$\max \ensuremath{\mathcal{D}}^{t^\star}_{t^\star}$ can be removed from the domain of~$x_{\gamma^{-1}(t^\star)}$. To detect whether a contradiction occurs, the second phase hypothetically fixes~$x_{t^\star}$ or~$x_{\gamma^{-1}(t^\star)}$ to the respective value and continues with stage one, since~$\restrict{x}{[t^\star]} = \restrict{\gamma(x)}{[t^\star]}$ now holds. If phase one then terminates because a variable domain becomes empty, this shows that the domain of~$x_{t^\star}$ or~$x_{\gamma^{-1}(t^\star)}$ can be reduced. Otherwise, no further variable domain reductions can be derived.
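For integer interval domains~$\ensuremath{\mathcal{D}}_i = [lo_i, hi_i] \cap \mathds{Z}$, the two phases can be sketched as follows. This Python fragment is our simplified illustration (the function names are ours), not the implementation evaluated later.

```python
# Hedged sketch of LexRed for integer interval domains D_i = [lo[i], hi[i]];
# ginv[t] = gamma^{-1}(t), 0-indexed.  Ours, not the paper's implementation.

def lexred_phase1(lo, hi, ginv, start=0):
    """Propagate x_t >= x_{gamma^{-1}(t)} while the prefix is forced equal.
    Returns ('empty', t), ('free', t*), or ('fixed', n)."""
    n = len(lo)
    for t in range(start, n):
        j = ginv[t]
        if j == t:
            continue                  # gamma fixes t: x_t = gamma(x)_t trivially
        lo[t] = max(lo[t], lo[j])     # x_t >= min D_{gamma^{-1}(t)}
        hi[j] = min(hi[j], hi[t])     # x_{gamma^{-1}(t)} <= max D_t
        if lo[t] > hi[t] or lo[j] > hi[j]:
            return "empty", t         # a variable domain became empty
        if not (lo[t] == hi[t] == lo[j] == hi[j]):
            return "free", t          # t* found: x_t > x_{gamma^{-1}(t)} possible
    return "fixed", n                 # x = gamma(x) is forced on D

def _contradiction_if_equal(lo, hi, ginv, t, v):
    """Phase-two probe: fix x_t = x_{gamma^{-1}(t)} = v, rerun phase one."""
    lo2, hi2 = lo[:], hi[:]
    lo2[t] = hi2[t] = lo2[ginv[t]] = hi2[ginv[t]] = v
    return lexred_phase1(lo2, hi2, ginv, t + 1)[0] == "empty"

def lexred(lo, hi, ginv):
    """Both phases; mutates lo and hi in place, returns the final state."""
    state, t = lexred_phase1(lo, hi, ginv)
    if state != "free":
        return state
    j = ginv[t]
    if lo[t] == lo[j] and _contradiction_if_equal(lo, hi, ginv, t, lo[j]):
        lo[t] += 1                    # cut min D_{gamma^{-1}(t*)} from D_{t*}
    if hi[j] == hi[t] and _contradiction_if_equal(lo, hi, ginv, t, hi[t]):
        hi[j] -= 1                    # cut max D_{t*} from D_{gamma^{-1}(t*)}
    return "free"
```

On Example~\ref{ex:lexred} (with $\gamma^{-1} = (4,3,1,2)$, zero-indexed in the code), phase one shrinks the domain of~$x_4$ to~$[-1,0]$, and the second phase-two probe then fixes~$x_4 = -1$, as derived above.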
\begin{proposition}
Let~$\tau$ be the time needed to perform one variable domain reduction
in~\eqref{eq:VDR}.
Then, LexRed\xspace finds all possible variable domain reductions for~$x
\succeq \gamma(x)$ in~$\ensuremath{\mathop{\mathcal{O}}}(n \cdot \tau)$ time. \end{proposition}
\begin{proof}
Completeness of LexRed\xspace follows from the previous discussion.
The running time holds as the first stage computes at most~$n$
domain reductions and the second stage triggers phase
one at most twice. \end{proof}
In many cases, for instance if variable domains are continuous or discrete intervals, $\tau = \ensuremath{\mathop{\mathcal{O}}}(1)$, making lexicographic reduction a linear-time algorithm.
\subsubsection{Dynamic settings}
Theorem~\ref{thm:main} shows that $\sigma_\beta(x) \succeq \sigma_\beta \gamma(x)$ is a valid symmetry handling constraint for certain symmetry prehandling structures~$(m_\beta,\pi_\beta,\varphi_\beta)_{\beta \in \ensuremath{\mathcal V}}$. If~$\Gamma$ is a permutation group, $\sigma_\beta(x)$ and $\sigma_\beta(\gamma(x))$ are just restrictions of permuted solution vectors. In this case, lexicographic reduction can, of course, also propagate these \SHC{}s\xspace; one merely changes the order in which the solution vector entries are iterated over.
In particular, in the binary case and the symmetry prehandling structure of Example~\ref{ex:vardynamic}, the adapted version of LexRed\xspace is compatible with IsoPr\xspace and OF\xspace as we have seen in Section~\ref{sec:framework} that the latter two methods propagate~$\sigma_\beta(x) \succeq \sigma_\beta \gamma(x)$ for all~$\gamma\in\Gamma$.
\subsection{Orbitopal reduction} \label{sec:gen:orbitopalfixing} Bendotti et~al.\@\xspace~\cite{BendottiEtAl2021} present a complete propagation algorithm to handle orbitopal symmetries on binary variables. In this section, we generalize their algorithm to arbitrary variable domains. We call the generalization of orbitopal fixing \emph{orbitopal reduction} as it does not necessarily fix variables.
\subsubsection{The static setting}
Suppose that~$\Gamma$ is the group that contains all column permutations of a $p \times q$ variable matrix~$X$. Further, assume that Theorem~\ref{thm:main} uses the symmetry prehandling structure from Example~\ref{ex:staticsetting}, where we assume that the variable vector~$x$ associated with the~$p \cdot q$ variables in~$X$ is such that enforcing~$x \succeq \gamma(x)$ for all~$\gamma \in \Gamma$ corresponds to sorting the columns of~$X$ in lexicographic order. With slight abuse of notation, for $\gamma \in \Gamma$, we write $X \succeq \gamma(X)$ if and only if the corresponding vector~$x \in \mathds{R}^{pq}$ satisfies~$x \succeq \gamma(x)$.
We use the following notation. For any~$M \in \mathds{R}^{p \times q}$ and~$(i,j) \in [p] \times [q]$, we denote by $M_i$ the $i$-th row of~$M$, by~$M^j$ the $j$-th column of~$M$, and by~$M_i^j$ the entry at position $(i, j)$. For every variable~$X_i^j$, we denote its domain by $\ensuremath{\mathcal{D}}_i^j \subseteq \mathds{R}$. Using the same matrix notation, $\ensuremath{\mathcal{D}} \subseteq \mathds{R}^{p \times q}$ denotes the $p \times q$-matrix where entry $(i, j)$ corresponds to~$\ensuremath{\mathcal{D}}_i^j$. For given domain~$\ensuremath{\mathcal{D}} \subseteq \mathds{R}^{p \times q}$, we denote by~$\underline{M}(\ensuremath{\mathcal{D}})$ and~$\overline{M}(\ensuremath{\mathcal{D}})$ the lexicographically smallest and largest element in~$\ensuremath{\mathcal{D}}$, respectively. Whenever the domain~$\ensuremath{\mathcal{D}}$ is clear from the context, we just write~$\underline{M}$ and~$\overline{M}$. Moreover, let \[
O_{p \times q} \ensuremath{\coloneqq} \{ X \in \mathds{R}^{p \times q} : X \succeq \gamma(X)\ \text{for all}\ \gamma \in \Gamma \} \] be the set of all matrices with lexicographically sorted columns. Our goal is to find all possible \VDR{}s\xspace of the \SHC{}s\xspace $X \succeq \gamma(X)$ for $\gamma \in \Gamma$, i.e., we want to find the smallest~$\hat{\ensuremath{\mathcal{D}}} \subseteq \mathds{R}^{p \times q}$ such that \[
\hat{\ensuremath{\mathcal{D}}} \cap O_{p \times q} = \ensuremath{\mathcal{D}} \cap O_{p \times q}. \] It turns out that, as for the binary case~\cite{BendottiEtAl2021}, the matrices~$\underline{M}(\ensuremath{\mathcal{D}})$ and~$\overline{M}(\ensuremath{\mathcal{D}})$ contain sufficient information for finding~$\hat{\ensuremath{\mathcal{D}}}$. In the following, recall that we (implicitly) use the infinitesimal notation introduced in the previous section to represent strict inequalities.
\begin{theorem}
\label{thm:genorbitopalfixing}
Let~$\ensuremath{\mathcal{D}} \subseteq \mathds{R}^{p \times q}$.
For~$j \in [q]$, let~$i_j \ensuremath{\coloneqq} \min(\{ i \in [p] :
\underline{M}(\ensuremath{\mathcal{D}})_i^j \neq \overline{M}(\ensuremath{\mathcal{D}})_i^j \} \cup \{ p +
1 \})$.
Then, the smallest~$\hat{\ensuremath{\mathcal{D}}} \subseteq \mathds{R}^{p \times q}$ for
which~$\hat{\ensuremath{\mathcal{D}}} \cap O_{p \times q} = \ensuremath{\mathcal{D}} \cap O_{p \times q}$
holds satisfies, for every~$(i,j) \in [p] \times [q]$,
\[
\hat{\ensuremath{\mathcal{D}}}^j_i
=
\begin{cases}
\ensuremath{\mathcal{D}}^j_i \cap [\underline{M}(\ensuremath{\mathcal{D}})_i^j,
\overline{M}(\ensuremath{\mathcal{D}})_i^j],
& \text{if } i \leq i_j,\\
\ensuremath{\mathcal{D}}^j_i, &\text{otherwise}.
\end{cases}
\] \end{theorem}
This theorem is proven by the following two lemmas. The first lemma shows that no tighter \VDR{}s\xspace can be achieved: for every $(i, j) \in [p] \times [q]$ and $v \in \hat\ensuremath{\mathcal{D}}_i^j$, a solution matrix $\tilde X \in O_{p \times q} \cap \ensuremath{\mathcal{D}}$ exists with $\tilde X_i^j = v$. The second lemma shows that the \VDR{}s\xspace are valid: for every~$(i, j) \in [p] \times [q]$ and $v \in \ensuremath{\mathcal{D}}_i^j \setminus \hat\ensuremath{\mathcal{D}}_i^j$, no matrix $\tilde X \in O_{p \times q} \cap \ensuremath{\mathcal{D}}$ with $\tilde X_i^j = v$ exists.
\begin{lemma} \label{lem:genorbitopalfixing:tight} Suppose that $O_{p \times q} \cap \ensuremath{\mathcal{D}} \neq \emptyset$. Let $i' \in [p]$ and $j' \in [q]$ with $i' \leq i_{j'}$. For all $x \in \ensuremath{\mathcal{D}}_{i'}^{j'}$ with~$\underline M _{i'}^{j'} \leq x \leq \overline M _{i'}^{j'}$ there is $X \in O_{p \times q} \cap \ensuremath{\mathcal{D}}$ with $X_{i'}^{j'} = x$. \end{lemma}
\begin{proof} We define two matrices $A, B \in \ensuremath{\mathcal{D}}$, for which entries~$(i,j) \in [p] \times [q]$ are \[ A_i^j = \begin{cases} \overline M_i^j & \text{if}\ j < j', \\ \underline M_i^j & \text{if}\ j > j', \\ \overline M_i^j = \underline M_i^j & \text{if}\ j = j', i < i_j, \\ \overline M_i^j (> \underline M_i^j) & \text{if}\ j = j', i = i_j,\\ \min(\ensuremath{\mathcal{D}}_i^j) & \text{if}\ j = j', i > i_j, \end{cases} \ \text{and}\ B_i^j = \begin{cases} \overline M_i^j & \text{if}\ j < j', \\ \underline M_i^j & \text{if}\ j > j', \\ \overline M_i^j = \underline M_i^j & \text{if}\ j = j', i < i_j, \\ \underline M_i^j (< \overline M_i^j) & \text{if}\ j = j', i = i_j, \\ \max(\ensuremath{\mathcal{D}}_i^j) & \text{if}\ j = j', i > i_j. \end{cases} \] From these two matrices, we show that for any~$x \in \ensuremath{\mathcal{D}}_{i'}^{j'}$ with~$\underline M _{i'}^{j'} \leq x \leq \overline M _{i'}^{j'}$ there is~$X \in O_{p \times q} \cap \ensuremath{\mathcal{D}}$ with $X_{i'}^{j'} = x$. We call such a matrix~$X$ a certificate for~$x$. In the following, we first provide a construction for these certificates, and after that we show that they are contained in~$\ensuremath{\mathcal{D}} \cap O_{p \times q}$.
If $i' < i_{j'}$, then $x = \overline M_{i'}^{j'} = \underline M_{i'}^{j'}$. Thus, $X = A$ is a certificate. If $i' = i_{j'}$, then there are three options: If $x = \underline M_{i'}^{j'}$, then $X = B$ is a certificate; if $x = \overline M_{i'}^{j'}$, then $X = A$ is a certificate; and if~$\underline M_{i'}^{j'} < x < \overline M_{i'}^{j'}$, then construct $X$ with $X_i^j = A_i^j$ if $(i, j) \neq (i', j')$, and $X_{i'}^{j'} = x$.
Note that $X \in \ensuremath{\mathcal{D}}$. We finally show that $X \in O_{p \times q}$, concluding the proof. The first $j' - 1$ columns of $X$ correspond to $\overline M$. That is, they satisfy $X^j \succeq X^{j + 1}$ for all $1 \leq j < j' - 1$. Similarly, the columns after column $j'$ correspond to~$\underline M$. Hence, $X^j \succeq X^{j + 1}$ for all $j' < j < q$. By the definition of $A$ and $B$, $\overline M^{j' - 1} \succeq \overline M^{j'} \succeq A^{j'}$ and $B^{j'} \succeq \underline M^{j'} \succeq \underline M^{j' + 1}$. As the columns of~$X$ are either columns of~$A$ or~$B$, or equal to~$A^{j'}$ up to one entry while remaining lexicographically larger than $B^{j'}$, we find~$\overline M^{j' - 1} = A^{j' - 1} = X^{j' - 1} \succeq A^{j'} \succeq X^{j'} \succeq B^{j'} \succeq B^{j' + 1} = X^{j' +
1} = \underline M^{j' + 1}$. So, for all consecutive $j\in [q - 1]$, we have~$X^j \succeq X^{j + 1}$, and hence $X \in O_{p \times q} \cap \ensuremath{\mathcal{D}}$. \end{proof}
\begin{lemma} \label{lem:genorbitopalfixing:valid} Suppose that $O_{p \times q} \cap \ensuremath{\mathcal{D}} \neq \emptyset$. Let $i' \in [p]$ and $j' \in [q]$ with $i' \leq i_{j'}$. For all $X \in O_{p \times q} \cap \ensuremath{\mathcal{D}}$, we have $\underline M _{i'}^{j'} \leq X_{i'}^{j'} \leq \overline M _{i'}^{j'}$. \end{lemma}
\begin{proof} Suppose the contrary, i.e., there is $X \in O_{p \times q} \cap \ensuremath{\mathcal{D}}$ with either $X_{i'}^{j'} < \underline M _{i'}^{j'}$ or $X_{i'}^{j'} > \overline M _{i'}^{j'}$. Without loss of generality, let~$i'$ be minimal, i.e., there is no $i'' < i'$ with $X_{i''}^{j'} < \underline M _{i''}^{j'}$ or $X_{i''}^{j'} > \overline M _{i''}^{j'}$. By symmetry, it suffices to consider the case $X_{i'}^{j'} < \underline M _{i'}^{j'}$.
If $X_{i}^{j'} \leq \underline M _{i}^{j'}$ holds for all $i < i'$, then $X^{j'} \prec \underline M^{j'}$, which contradicts that $\underline M$ is the lexicographically minimal solution of $O_{p \times q} \cap \ensuremath{\mathcal{D}}$. Hence, there is a row $i'' < i'$ with $X_{i''}^{j'} > \underline M_{i''}^{j'}$. Since~$i'' < i' \leq i_{j'}$ yields $\underline M_{i''}^{j'} = \overline M_{i''}^{j'}$, we have $X_{i''}^{j'} > \overline M_{i''}^{j'}$. This contradicts the minimality of~$i'$, since~$i'' < i'$ satisfies the second condition. \end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:genorbitopalfixing}]
Lemmas~\ref{lem:genorbitopalfixing:tight}
and~\ref{lem:genorbitopalfixing:valid} prove the assertion for $i' \leq
i_{j'}$.
Since the domains for~${i' > i_{j'}}$ are not restricted in comparison
to~$\ensuremath{\mathcal{D}}$, domain~$\hat{\ensuremath{\mathcal{D}}}^{j'}_{i'}$ is valid.
To show that it is as tight as possible, we can reconsider in the proof of
Lemma~\ref{lem:genorbitopalfixing:tight} the matrix $A \in \ensuremath{\mathcal{D}} \cap
O_{p \times q}$.
Replacing entry~$(i', j')$ in $A$ with any value in $\ensuremath{\mathcal{D}}_{i'}^{j'}$
yields a matrix $\tilde A$.
If $i' > i_{j'}$, this change does not affect the lexicographic order
constraint, so $\tilde A \in \ensuremath{\mathcal{D}} \cap O_{p \times q}$ is a certificate
of tightness.
Combining these statements shows correctness of
Theorem~\ref{thm:genorbitopalfixing}. \end{proof}
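Given the extremal matrices~$\underline M$ and~$\overline M$ (their computation is the subject of the remainder of this section), applying Theorem~\ref{thm:genorbitopalfixing} is a simple clamping step. The following Python sketch is our illustration for interval domains; the $2 \times 2$ binary instance and its extremal matrices were determined by inspection, not by the algorithm below.

```python
# Hedged sketch (ours) of the reduction in the theorem, for interval
# domains D[i][j] = (lo, hi); Mlo and Mhi are the lexicographically
# smallest and largest matrices in D with lex-sorted columns.

def orbitopal_tighten(D, Mlo, Mhi):
    p, q = len(D), len(D[0])
    for j in range(q):
        # i_j: first row where the extremal matrices disagree in column j
        i_j = next((i for i in range(p) if Mlo[i][j] != Mhi[i][j]), p)
        for i in range(min(i_j + 1, p)):        # clamp rows i <= i_j
            lo, hi = D[i][j]
            D[i][j] = (max(lo, Mlo[i][j]), min(hi, Mhi[i][j]))
    return D

# 2x2 binary example: all entries free except X_1^2 = 1 (row 1, column 2).
D = [[(0, 1), (1, 1)],
     [(0, 1), (0, 1)]]
# Extremal lex-sorted-column matrices for this domain (by inspection):
Mlo = [[1, 1], [0, 0]]
Mhi = [[1, 1], [1, 1]]
orbitopal_tighten(D, Mlo, Mhi)
assert D[0][0] == (1, 1)   # X_1^1 is now fixed to 1
```

Here fixing~$X_1^2 = 1$ forces~$X_1^1 = 1$, since the first column must be lexicographically at least the second.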
We conclude this section with an analysis of the time needed to find~$\hat{\ensuremath{\mathcal{D}}}$. The crucial step is to find the matrices~$\underline{M}$ and~$\overline{M}$. To find these matrices, we adapt the idea from~\cite{BendottiEtAl2021} for the binary case. Denote by~$\lexmin(\cdot)$ and~$\lexmax(\cdot)$ the operators that determine the lexicographically minimal and maximal elements of a set, respectively. We claim that for the lexicographically minimal element~$\underline{M}$, the~$j$-th column is \[ \underline M^j = \begin{cases} \lexmin \{ X \in \ensuremath{\mathcal{D}}^j : X \succeq \underline M^{j+1} \}, & \text{if}\ j < q, \\ \lexmin \{ X \in \ensuremath{\mathcal{D}}^j \}, & \text{otherwise}. \end{cases} \] This can be computed columnwise, starting with the last column $j = q$ and then decreasing~$j$ down to the first column. The arguments for correctness are the same as in~\cite[Thm.~1, Lem.~2]{BendottiEtAl2021}. For this reason, we only describe how to compute the~$j$-th column. Due to this iterative approach, when computing column $\underline{M}^{j}$ for $j < q$, column $\underline{M}^{j+1}$ is known. The idea is to choose the entries of $\underline{M}^{j}$ minimally such that $\underline{M}^{j} \succeq \underline{M}^{j+1}$ holds. This resembles the propagation method of the previous section (LexRed\xspace): the entries are chosen minimally such that the constraint holds when restricted to the first elements, and the considered vector length is then increased one entry at a time. If this leads to a contradiction with the constraint, the algorithm returns to the last step where the entry was not forced and increases this entry to restore feasibility of the constraint.
More precisely, $\underline M^{j}$ is found by iterating~$i$ from $1$ to~$p$ as follows. If there is a row index~${i' < i}$ with $\underline M_{i'}^j > \underline M_{i'}^{j + 1}$, let~$\underline M_i^j \gets \min ( \ensuremath{\mathcal{D}}_i^j )$. This is possible, because row~$i'$ already guarantees that~$\underline{M}^j \succ \underline{M}^{j+1}$. If no such index exists, we may assume that all preceding rows $i' < i$ have~${\underline M_{i'}^j = \underline M_{i'}^{j+1}}$ (otherwise, the~$j$-th column cannot be lexicographically larger than column~$j+1$, as becomes clear in the following). In this case, denote~$S^i \ensuremath{\coloneqq} \{ x \in \ensuremath{\mathcal{D}}_i^j : x \geq \underline M_i^{j+1} \}$. On the one hand, if $\card{S^i} > 0$, set $\underline M_i^j \gets \min(S^i)$. If this yields $\underline M_i^j > \underline M_i^{j + 1}$, then stop the iteration, and for all $i'' > i$ set~$\underline M_{i''}^{j} \gets \min(\ensuremath{\mathcal{D}}_{i''}^j)$. This makes sure that~$\underline{M}^j$ is lexicographically strictly larger than~$\underline{M}^{j+1}$.
On the other hand, if $\card{S^i} = 0$, we cannot enforce~$\underline{M}^j \succ \underline{M}^{j+1}$ in row~$i$. To ensure~$\underline{M}$ becomes the lexicographically smallest element in~$O_{p \times q} \cap \ensuremath{\mathcal{D}}$, we return to the largest $i' < i$ with~$\card{S^{i'}} > 1$ and enforce a lexicographic difference by setting $\underline M_{i'}^j \gets \min\{ x \in \ensuremath{\mathcal{D}}_{i'}^j : x > \underline M_{i'}^{j+1} \}$, and, for all~$i'' > i'$, we assign~$\underline M_{i''}^{j} \gets \min(\ensuremath{\mathcal{D}}_{i''}^j)$. If no $i' < i$ exists with~$\card{S^{i'}} > 1$, column~$j$ cannot become lexicographically at least as large as column~$j+1$. That is, $\ensuremath{\mathcal{D}} \cap O_{p \times q} = \emptyset$.
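For integer interval domains, the column computation just described can be sketched as follows (our illustration, not the paper's code; the backtracking step uses that, for such domains, the last row~$i'$ with $\card{S^{i'}} > 1$ always admits the value $\underline M_{i'}^{j+1} + 1$).

```python
def lexmin_column(dom, nxt):
    """Smallest column with entries in integer intervals dom[i] = (lo, hi)
    that is lexicographically >= the already-computed column nxt of the
    next column j+1, or None if no such column exists.  Hedged sketch."""
    p = len(dom)
    col = [None] * p
    strict = False   # has a strictly larger entry been placed already?
    slack = -1       # last row i' with |S^{i'}| > 1 (room to go above)
    for i in range(p):
        lo, hi = dom[i]
        if strict:
            col[i] = lo                  # prefix already strictly larger
            continue
        if hi < nxt[i]:                  # S^i is empty: backtrack
            if slack < 0:
                return None              # column j cannot reach column j+1
            col[slack] = nxt[slack] + 1  # enforce strictness at row `slack`
            for k in range(slack + 1, p):
                col[k] = dom[k][0]
            return col
        col[i] = max(lo, nxt[i])         # min S^i
        if col[i] > nxt[i]:
            strict = True
        elif hi > nxt[i]:
            slack = i                    # could still go strictly above here
    return col

# Binary examples: equality, a backtracking step, and infeasibility.
assert lexmin_column([(0, 1)] * 3, [1, 0, 1]) == [1, 0, 1]
assert lexmin_column([(0, 1), (0, 1), (0, 0)], [1, 0, 1]) == [1, 1, 0]
assert lexmin_column([(0, 0)] * 3, [0, 0, 1]) is None
```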
Analogously, one computes $\overline M$ by \[ \overline{M}^j = \begin{cases} \lexmax\{ X \in \ensuremath{\mathcal{D}}^j : \overline M^{j - 1} \succeq X \} & \text{if}\ j > 1,\ \text{and} \\ \lexmax\{ X \in \ensuremath{\mathcal{D}}^j \} & \text{otherwise}. \end{cases} \] Determining the~$j$-th column of~$\underline{M}$ and~$\overline{M}$ requires iterating over its elements only a constant number of times, and for each element a constant number of comparisons and variable domain reductions is executed. The time for finding~$\underline{M}$ and~$\overline{M}$ is therefore~$\ensuremath{\mathop{\mathcal{O}}}(pq\tau)$, where~$\tau$ is again the time needed to reduce variable domains. Combining all arguments thus yields the following result regarding orbitopal reduction.
\begin{theorem}
Let~$\ensuremath{\mathcal{D}} \subseteq \mathds{R}^{p \times q}$.
Orbitopal reduction finds the smallest~$\hat{\ensuremath{\mathcal{D}}} \subseteq \mathds{R}^{p
\times q}$ such that~$\hat{\ensuremath{\mathcal{D}}} \cap O_{p \times q} = \ensuremath{\mathcal{D}} \cap
O_{p \times q}$ holds in~$\ensuremath{\mathop{\mathcal{O}}}(pq\tau)$ time.
In particular, if all variable domains are intervals, orbitopal reduction
can be implemented to run in~$\ensuremath{\mathop{\mathcal{O}}}(pq)$ time. \end{theorem}
\subsubsection{Dynamic settings}
Similar to LexRed\xspace, also orbitopal reduction can be used to propagate \SHC{}s\xspace~$\sigma_\beta(x) \succeq \sigma_\beta\gamma(x)$ for permutations~$\gamma$ from a group~$\Gamma$ of orbitopal symmetries. The only requirement is that~$\sigma_\beta$ is compatible with the matrix interpretation of a solution~$x$, which can be achieved by using the symmetry prehandling structure of Example~\ref{ex:orbitopalfixing}. In this case, the static orbitopal reduction algorithm is only applied to the variables ``seen'' by~$\sigma_\beta(x) \succeq \sigma_\beta\gamma(x)$.
Note that this symmetry prehandling structure admits some degrees of freedom in selecting~$\varphi_\beta$. If~$\varphi_\beta = \ensuremath{\mathrm{id}}$ for all $\beta \in \ensuremath{\mathcal V}$, this resembles the adapted version of orbitopal fixing as mentioned in Section~\ref{sec:orbitopalfixing}. But other choices are also possible, as we discuss in Section~\ref{sec:num}.
\subsection{Variable ordering derived from branch-and-bound} \label{sec:gen:orbitalreduction} A natural question is whether generalizations of isomorphism pruning and OF\xspace also exist. The main challenge is that branching on general variables does not necessarily fix them (in contrast to the binary setting). Thus, stabilizer computations as discussed in Section~\ref{sec:overview} might not apply in the generalized setting. Inspired by OF\xspace, we present a way to reduce variable domains of arbitrary variables based on symmetry, called \emph{orbital reduction}.
For vectors $x$ and $y$ of equal length $m$, we write $x \leq y$ if $x_i \leq y_i$ for all $i \in [m]$. Let~$\beta \in \ensuremath{\mathcal V}$ be a node of the branch-and-bound tree, $W^\beta \ensuremath{\coloneqq} \{ x \in \mathds R^n : \sigma_\beta(x) \succeq \sigma_\beta(\delta(x))\ \text{for all}\ \delta \in \Gamma \}$, and~$\Delta^\beta \ensuremath{\coloneqq} \{ \gamma \in \Gamma : \sigma_\beta(x) \leq \sigma_\beta \gamma(x)\ \text{for all}\ x \in \ensuremath{\mathrm\Phi}(\beta) \cap W^\beta \} $ be a group of symmetries. Similar to Section~\ref{sec:overviewtree}, we intend to use $\Delta^\beta$ to find \VDR{}s\xspace. We first show that this is indeed a group.
\begin{lemma} Let~$\beta$ be a node of a B\&{}B\xspace-tree using single variable branching with symmetry prehandling structure of Example~\ref{ex:vardynamic}. Then, $\Delta^\beta$ is a group. \end{lemma}
\begin{proof}
Recall that we assume~$\Gamma \leq \symmetricgroup{n}$ in this
section.
Therefore, both~$\Gamma$ and~$\Delta^\beta \subseteq \Gamma$ are finite. Since a finite, nonempty set of permutations that is closed under composition is a group, it suffices to show that compositions of elements of $\Delta^\beta$ are also contained therein; the identity and inverses then follow implicitly.
Let $\gamma_1, \gamma_2 \in \Delta^\beta$, and suppose $x \in \ensuremath{\mathrm\Phi}(\beta) \cap W^\beta$. By definition of $\Delta^\beta$, we have $\sigma_\beta(x) \leq \sigma_\beta(\gamma_2(x))$. Since $\gamma_2 \in \Delta^\beta \leq \Gamma$ and $x \in W^\beta$, we have $\sigma_\beta(x) \succeq \sigma_\beta(\gamma_2(x))$. Note that $\sigma_\beta(x) \succeq \sigma_\beta(\gamma_2(x))$ and~ $\sigma_\beta(x) \leq \sigma_\beta(\gamma_2(x))$ imply~ $\sigma_\beta(x) = \sigma_\beta(\gamma_2(x))$. Since the correctness conditions are satisfied for Example~\ref{ex:vardynamic}, due to Condition~\eqref{cond:permutationcondition}, the properties $x \in \ensuremath{\mathrm\Phi}(\beta)$ and $\sigma_\beta(x) = \sigma_\beta(\gamma_2(x))$ imply that~$\gamma_2(x) \in \ensuremath{\mathrm\Phi}(\beta)$ holds.
Since $x \in W^\beta$, for all~$\delta \in \Gamma$ we have~ $\sigma_\beta(\gamma_2(x)) = \sigma_\beta(x) \succeq \sigma_\beta(\delta(x))$. Because $\gamma_2$ is a group element of $\Gamma$, we thus also have $\sigma_\beta(\gamma_2(x)) \succeq \sigma_\beta(\delta \circ \gamma_2(x))$ for all $\delta \in \Gamma$, meaning that $\gamma_2(x) \in W^\beta$. Summarizing, we have $\sigma_\beta(x) = \sigma_\beta(\gamma_2(x))$ and $\gamma_2(x) \in \ensuremath{\mathrm\Phi}(\beta) \cap W^\beta$. By analogy, the same results hold for~$\gamma_1(x)$.
Since $\gamma_2(x) \in \ensuremath{\mathrm\Phi}(\beta) \cap W^\beta$ and $\gamma_1 \in \Delta^\beta$, the definition of $\Delta^\beta$ yields $\sigma_\beta(\gamma_2(x)) \leq \sigma_\beta(\gamma_1 \circ \gamma_2(x))$. Using the same reasoning as above, $\gamma_2(x) \in W^\beta$ implies~ $\sigma_\beta(\gamma_2(x)) \succeq \sigma_\beta(\gamma_1 \circ \gamma_2(x))$, so~$\sigma_\beta(\gamma_2(x)) = \sigma_\beta(\gamma_1 \circ \gamma_2(x))$. We thus find that $\sigma_\beta(x) = \sigma_\beta(\gamma_2(x)) = \sigma_\beta(\gamma_1 \circ \gamma_2(x))$, which implies $\gamma_1 \circ \gamma_2 \in \Delta^\beta$. \end{proof}
We show two feasible reductions that are based on $\Delta^\beta$. The first reduction shows that the variable domains of the variables in the $\Delta^\beta$-orbit of the branching variable can possibly be tightened. To this end, denote the orbit of $i$ in $\Delta^\beta$ by~$O_i^\beta \ensuremath{\coloneqq} \{ \gamma(i) : \gamma \in \Delta^\beta \}$. If the branching decision after node $\beta$ decreased the upper bound of variable~$x_i$ for some~$i \in [n]$, a valid VDR\xspace is to decrease the upper bounds of~$x_j$ for all $j \in O^\beta_i$ to the same value as we show next.
\begin{lemma}[Orbital symmetry handling] \label{lem:branchorbit} Let $\ensuremath{\mathcal B} = (\ensuremath{\mathcal V}, \ensuremath{\mathcal E})$ be a B\&{}B\xspace-tree using single variable branching with symmetry prehandling structure of Example~\ref{ex:vardynamic}. Let $\omega \in \ensuremath{\mathcal V}$ be a child of $\beta \in \ensuremath{\mathcal V}$ where~$x_i$ is the branching variable for some $i \in [n]$. Then, at node~$\omega$, each solution~$x \in \ensuremath{\mathrm\Phi}(\omega)$ satisfying~${\sigma_\omega(x) \succeq \sigma_\omega(\delta(x))}$ for all $\delta \in \Gamma$ (i.e., \eqref{eq:main} for node $\omega$) also satisfies~$x_i \geq x_j$ for all~$j \in O_i^\beta$. \end{lemma}
\begin{proof} Let $\gamma \in \Delta^\beta$ and let $x \in \ensuremath{\mathrm\Phi}(\omega)$ satisfy $\sigma_\omega(x) \succeq \sigma_\omega(\delta(x))$ for all $\delta \in \Gamma$. Since $\omega$ is a child of $\beta$, we have~$x \in \ensuremath{\mathrm\Phi}(\omega) \subseteq \ensuremath{\mathrm\Phi}(\beta)$. Also, for all $\delta \in \Gamma$ we have $\sigma_\omega(x) \succeq \sigma_\omega(\delta(x))$, so due to the symmetry prehandling structure of Example~\ref{ex:vardynamic}, we also have~$\sigma_\beta(x) \succeq \sigma_\beta(\delta(x))$, meaning that $x \in W^\beta$. Hence, we have $x \in \ensuremath{\mathrm\Phi}(\beta) \cap W^\beta$. By definition of $\Delta^\beta$, we thus have $\sigma_\beta(x) \leq \sigma_\beta(\gamma(x))$. Recall that due to Example~\ref{ex:vardynamic}, we have for all $\delta \in \Gamma$ that $\sigma_\beta(x) \succeq \sigma_\beta(\delta(x))$. Since $\gamma \in \Delta^\beta \leq \Gamma$, therefore~$\sigma_\beta(x) \leq \sigma_\beta(\gamma(x))$ and $\sigma_\beta(x) \succeq \sigma_\beta(\gamma(x))$ hold, implying $\sigma_\beta(x) = \sigma_\beta(\gamma(x))$. Denote this result by ($\dagger$).
Due to Example~\ref{ex:vardynamic}, we have \[ \sigma_\omega(x) = \begin{cases} \sigma_\beta(x), & \text{if}\ i \in (\pi_\beta \varphi_\beta)^{-1}([m_\beta]) \ \text{(i.e., variable $x_i$ appears in $\sigma_\beta(x)$), and}\\ \binom{\sigma_\beta(x)}{x_i}, & \text{otherwise.} \end{cases} \] As such, SHC\xspace $\sigma_\omega (x) \succeq \sigma_\omega (\gamma(x))$ is equivalent to either $\sigma_\beta(x) \succ \sigma_\beta (\gamma(x))$, or both~$\sigma_\beta(x) = \sigma_\beta (\gamma(x))$ and $x_i \geq \gamma(x)_i$. Note that this statement holds independently of whether entry~$i$ has been branched on before (i.e., whether $i \in (\pi_\beta\varphi_\beta)^{-1}([m_\beta])$ or not). Using ($\dagger$), the first of the two options cannot hold, so we must have~$x_i \geq \gamma(x)_i = x_{\gamma^{-1}(i)}$. Consequently, $x_i \geq x_{\gamma^{-1}(i)}$ is a valid SHC\xspace for~$\gamma \in \Delta^\beta$. Thus, for all $j \in O_i^\beta$, we can propagate $x_i \geq x_j$. \end{proof}
Second, recall our assumption that any VDR\xspace that is not based on our symmetry framework needs to be symmetry compatible, see Section~\ref{sec:framework}. In practice, however, a solver might not find all symmetric \VDR{}s\xspace, e.g., due to iteration limits. The following lemma allows us to find missing (but not necessarily all) \VDR{}s\xspace based on symmetries, which corresponds to orbital fixing as discussed in~\cite{pfetsch2019computational} without the restriction to binary variables.
\begin{lemma} \label{lem:intersection} Let~$\beta$ be a node of a B\&{}B\xspace-tree using single variable branching with symmetry prehandling structure of Example~\ref{ex:vardynamic}. Then, when \SHC{}s\xspace~\eqref{eq:main} are enforced (i.e., solutions are in $W^\beta$), a valid VDR\xspace is to reduce the domain of~$x_i$ to the intersection of all variable domains $x_j$ for $j \in O_i^\beta$. \end{lemma}
\begin{proof} Let $x \in \ensuremath{\mathrm\Phi}(\beta)$ and let~$i \in [n]$. Let~$j \in O_i^\beta$, i.e., there exists~$\gamma \in \Delta^\beta$ with~$\gamma(i) = j$. As $\gamma \in \Delta^\beta$, $\sigma_\beta(x) \leq \sigma_\beta(\gamma(x))$ holds. Since \SHC{}s\xspace~\eqref{eq:main} are enforced, it must as well hold that $\sigma_\beta(x) \succeq \sigma_\beta(\gamma(x))$, and thus $\sigma_\beta(x) = \sigma_\beta(\gamma(x))$.
Since Example~\ref{ex:vardynamic} satisfies the correctness conditions, Condition~\eqref{cond:permutationcondition} yields~$\gamma(x) \in \ensuremath{\mathrm\Phi}(\beta)$. Thus, $x_i$ is not only contained in the domain of variable~$x_i$, but also in the domain of variable~$x_{\gamma(i)}$. For this reason, the domain of~$x_i$ can be restricted to the intersection of the domains of~$x_j$ for all $j \in O_i^\beta$. \end{proof}
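The orbit computation and the reduction of Lemma~\ref{lem:intersection} can be sketched as follows for interval domains. The Python fragment is our illustration only, with generators of (a subgroup of)~$\Delta^\beta$ given as permutation lists.

```python
def orbits(n, generators):
    """Orbits of {0,...,n-1} under the group generated by `generators`,
    each given as a list g with g[i] = image of i.  Simple BFS sketch."""
    seen = [False] * n
    result = []
    for s in range(n):
        if seen[s]:
            continue
        seen[s] = True
        orbit, stack = [s], [s]
        while stack:
            i = stack.pop()
            for g in generators:
                if not seen[g[i]]:
                    seen[g[i]] = True
                    orbit.append(g[i])
                    stack.append(g[i])
        result.append(sorted(orbit))
    return result

def orbit_intersection(lo, hi, generators):
    """Sketch of the orbit-intersection reduction for interval domains:
    every variable in an orbit is reduced to the orbit-wide intersection
    of the domains.  Mutates lo and hi in place."""
    for orbit in orbits(len(lo), generators):
        L = max(lo[i] for i in orbit)
        H = min(hi[i] for i in orbit)
        for i in orbit:
            lo[i], hi[i] = L, H
    return lo, hi
```

For example, with a single generator swapping variables $x_1$ and~$x_2$, both domains shrink to the intersection of the two original intervals, while all other domains are untouched.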
In practice, Lemmas~\ref{lem:branchorbit} and~\ref{lem:intersection} cannot be used immediately, as the orbits depend on~$\Delta^\beta$, which cannot be computed easily since it depends on~$\ensuremath{\mathrm\Phi}(\beta)$ and $W^\beta$. Instead, we rely on a suitably determined subgroup~$\tilde \Delta^\beta \leq \Delta^\beta$ and apply the reductions induced by that subgroup. Because the reductions are based on variables in the same orbit, and orbits of the subgroup are subsets of the orbits of the larger group, \VDR{}s\xspace found by~$\tilde \Delta^\beta$ would also be found by~$\Delta^\beta$.
For all $i \in [n]$, let~$\ensuremath{\mathcal{D}}_i^\beta \subseteq \mathds R$ be the (known) domain of variable~$x_i$ at node~$\beta$. In particular, we thus have~$ \ensuremath{\mathrm\Phi}(\beta) \cap W^\beta \subseteq \ensuremath{\mathrm\Phi}(\beta) \subseteq \bigtimes_{i \in [n]} \ensuremath{\mathcal{D}}_i^\beta $. By replacing $\ensuremath{\mathrm\Phi}(\beta) \cap W^\beta$ in the definition of $\Delta^\beta$ by $\bigtimes_{i \in [n]} \ensuremath{\mathcal{D}}_i^\beta$, we get a subset of symmetries: $ \{ \gamma \in \Gamma : \sigma_\beta(x) \leq \sigma_\beta(\gamma(x))\ \text{for all}\ x \in \bigtimes_{i \in[n]} \ensuremath{\mathcal{D}}_i^\beta \} \subseteq \Delta^\beta $. In particular, using (a subset of) the left set as generating set, one finds a permutation group that is a subgroup of~$\Delta^\beta$. The subset selection procedure chosen in our implementation is discussed together with the computational results in Section~\ref{sec:num}.
We finish this section by showing that, in binary problems, \VDR{}s\xspace yielded by OF\xspace from Section~\ref{sec:overviewtree} are also yielded by the generalized setting of Lemmas~\ref{lem:branchorbit} and~\ref{lem:intersection}.
\begin{lemma} Let $\Delta^{\smash\beta}_{\smash{\rm bin}}$ denote the group $\Delta^{\smash\beta}$ defined in Section~\ref{sec:overviewtree}, and $\Delta^{\smash\beta}_{\smash{\rm gen}}$ the group with the same symbol defined here. If the symmetry group acts exclusively on binary variables, then $\Delta^\beta_{\rm bin} \leq \Delta^\beta_{\rm gen}$. \end{lemma} \begin{proof} In binary problems, branching on variables fixes their values. As such, because the vector $\sigma_\beta(x)$ contains all branched variables, it is the same for all~${x \in \ensuremath{\mathrm\Phi}(\beta)}$. Suppose $\gamma \in \Delta^\beta_{\rm bin}$, i.e., $\gamma \in \Gamma$ and~$\gamma(B_1^\beta) = B_1^\beta$. For all $i \in [m_\beta]$ with $\sigma_\beta(x)_i = 1$, we have $\sigma_\beta(\gamma(x))_i = 1$ for all $x \in \ensuremath{\mathrm\Phi}(\beta)$. Similarly, for all $i \in [m_\beta]$ with $\sigma_\beta(x)_i = 0$, we have $\sigma_\beta(\gamma(x))_i \geq 0$. This means that $\sigma_\beta(x) \leq \sigma_\beta(\gamma(x))$ for all~$x \in \ensuremath{\mathrm\Phi}(\beta)$. This holds in particular for all $x \in \ensuremath{\mathrm\Phi}(\beta) \cap W^\beta$, i.e., $\gamma \in \Delta^\beta_{\rm gen}$. \end{proof}
By the OF\xspace rule described in Section~\ref{sec:overviewtree}, for every variable index $i$ where $x_i$ is branched to zero, all variables $x_j$ with $j$ in the orbit of $i$ in $\Delta^\beta_{\smash{\rm bin}}$ can be fixed to zero as well. Because $\Delta^\beta_{\smash{\rm bin}} \leq \Delta^\beta_{\smash{\rm gen}}$, every orbit of $\Delta^\beta_{\smash{\rm bin}}$ is contained in an orbit of $\Delta^\beta_{\smash{\rm gen}}$. Hence, if $x_i$ is the branching variable at the present node, this fixing is implied by~$x_i \geq x_j$ in Lemma~\ref{lem:branchorbit}; otherwise, it is implied by Lemma~\ref{lem:intersection}.
\section{Computational study} \label{sec:num}
To assess the effectiveness of our methods, we compare the running times of the implementations of the various dynamic symmetry handling methods of Section~\ref{sec:cast} (in the regime of Examples~\ref{ex:vardynamic} and~\ref{ex:orbitopalfixing}) to similar existing methods. To this end, we make use of three diverse testsets. \begin{itemize} \item Symmetric benchmark instances from MIPLIB~2010~\cite{KochEtAl2011MIPLIB} and MIPLIB~2017~\cite{Gleixner2021MIPLIB}. \item Existence of minimum $t\text{-}(v,k,\lambda)$-covering designs. \item Noise dosage problem instances as discussed by Sherali and Smith~\cite{sherali2001models} (cf.\ Problem~\ref{prob:NDbinary}). \end{itemize} The MIPLIB instances offer a diverse set of instances that contain symmetries, but these symmetries predominantly operate on binary variables. To evaluate the effectiveness of our framework for non-binary problems, we consider the covering design and noise dosage instances. The symmetries of the noise dosage instances are orbitopal, whereas the covering design instances have no orbitopal symmetries.
Although our framework allows us to handle more general symmetries, we restrict the numerical experiments to permutation symmetries. On the one hand, \code{SCIP} can currently only detect permutation symmetries. On the other hand, most symmetry handling methods discussed in the literature only apply to permutation symmetries. The development of methods for other kinds of symmetries is beyond the scope of this article.
\subsection{Solver components and configurations} We use a development version of the solver \code{SCIP~8.0.3.5}~\cite{BestuzhevaEtal2021OO}, commit \texttt{8443db2}\footnote{Public mirror: \url{https://github.com/scipopt/scip/tree/8443db213892153ff2e9d6e70c343024fb26968c}}, with LP-solver \code{Soplex~6.0.3}. \code{SCIP} contains implementations of the state-of-the-art methods LexFix\xspace, orbitopal fixing, and OF\xspace to which we compare our methods. We have extended the code with our dynamic methods. Our modified code is available on GitHub\footnote{Project page: \url{https://github.com/JasperNL/scip-unified}}. This repository also contains the instance generators and problem instances for the noise dosage and covering design problems.
For all settings, symmetries are detected by finding automorphisms of a suitable graph~\cite{pfetsch2019computational,salvagnin2005dominance} using \code{bliss~0.77}~\cite{JunttilaKaski2015bliss}. We make use of the readily implemented symmetry detection code of \code{SCIP}, which finds a set of permutations $\Pi$ that generate a symmetry group $\Gamma$ of the problem, namely the symmetries implied by its formulation~\cite{pfetsch2019computational,salvagnin2005dominance,margot2010symmetry}. This is a permutation group acting on the solution vector index space, so the setting of Section~\ref{sec:overview} and~\ref{sec:cast} applies.
If~$\Gamma$ is a product group consisting of $k$ components, i.e., $\Gamma = \bigtimes_{i \in [k]} \Gamma_i$, then by arguments similar to~\cite[Proposition~5]{hojny2019polytopes}, the symmetries of the different components~$\Gamma_i$, $i \in [k]$, can be handled independently; compositions of permutations from different components do not need to be taken into account. In particular, it is possible to select a different symmetry prehandling structure for each component. We therefore decompose the generating set of permutations~$\Pi$ found by \code{SCIP} into components, yielding generating sets~$\Pi_1, \dots, \Pi_k$ for components~$\Gamma_1, \dots, \Gamma_k$. Symmetry in each component is handled separately.
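The decomposition of a generating set into components can be sketched as follows (an illustrative reimplementation, not \code{SCIP}'s actual code): two generators belong to the same component precisely if their supports are linked through shared moved points, which a union--find structure over the variable indices detects directly.

```python
def decompose_components(generators, n):
    # Union-find over the n variable indices; generators whose supports
    # (moved points) are transitively linked end up in the same component.
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a

    supports = [[i for i in range(n) if gamma[i] != i]
                for gamma in generators]
    for sup in supports:
        for i in sup[1:]:
            parent[find(sup[0])] = find(i)

    components = {}
    for idx, sup in enumerate(supports):
        if sup:  # skip identity generators
            components.setdefault(find(sup[0]), []).append(idx)
    return list(components.values())
```

For example, two generators acting on overlapping index sets form one component, while a generator moving disjoint indices forms its own.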
For all settings, we disable restarts to ensure that all methods exploit the same symmetry information. We compare our newly implemented methods to the methods originally implemented in \code{SCIP}.
\paragraph{Our configurations} For every component $\Gamma_i$, we handle symmetries as follows, where we skip some steps if the corresponding symmetry handling method is disabled. \begin{enumerate} \item\label{step:orbitope} If a \code{SCIP}-internal heuristic,
cf.~\cite[Sec.~4.1]{hojny2019polytopes}, detects that~$\Gamma_i$ consists
of orbitopal symmetries:
\begin{enumerate}
\item\label{step:cons} If the orbitope matrix is a single row
$[x_1 \cdots x_\ell]$, add linear constraints
$x_1 \geq \dots \geq x_\ell$.
\item Otherwise, if the orbitope matrix contains only two columns
(i.e., is generated by a single permutation $\gamma$),
then use lexicographic reduction using the dynamic
variable ordering of Example~\ref{ex:vardynamic},
as described in Section~\ref{sec:gen:lexred}.
\item\label{step:pack} Otherwise, if there are at least 3 rows with
binary variables whose sum is at most 1 (so-called packing-partitioning type)
then use the complete static propagation method for packing-partitioning
orbitopes as described by Kaibel and Pfetsch~\cite{KaibelPfetsch2008},
where the orbitope matrix is restricted to the rows with this structure.
\item\label{step:dyn} Otherwise, use dynamic orbitopal reduction
as described in Section~\ref{sec:gen:orbitopalfixing}
using the dynamic variable ordering of Example~\ref{ex:orbitopalfixing}.
We select $\varphi_\beta$ such that it
swaps the column containing the branched variable
to the middlemost (or leftmost) symmetrically equivalent column
when propagating the \SHC{}s\xspace~\eqref{eq:main}
using static orbitopal reduction.
\end{enumerate} \item Otherwise (i.e., if the symmetries are not orbitopal),
use the symmetry prehandling structure of Example~\ref{ex:vardynamic}
and use two compatible methods simultaneously:
\begin{enumerate}
\item Lexicographic reduction as described
in Section~\ref{sec:gen:lexred}.
\item Orbital reduction as described
in Section~\ref{sec:gen:orbitalreduction}.
Since computing $\Delta^\beta$ is non-trivial,
we work with a subgroup of $\Delta^\beta$,
namely the group generated by all permutations $\gamma \in \Pi_i$
for which $\sigma_\beta(x) \leq \sigma_\beta(\gamma(x))$
for all $x \in \bigtimes_{j \in [n]} \ensuremath{\mathcal{D}}_j^\beta$.
\end{enumerate} \end{enumerate}
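The per-component case distinction above can be summarized by the following dispatch sketch; the queried attributes are hypothetical placeholders for illustration, not \code{SCIP} API, and the returned strings merely name the selected methods.

```python
def choose_symmetry_handlers(component):
    # Per-component selection rule; attribute names are hypothetical.
    if component.is_orbitopal:
        rows, cols = component.orbitope_shape
        if rows == 1:
            # Single-row orbitope: full handling by ordering constraints.
            return ["linear ordering constraints"]
        if cols == 2:
            # Group generated by one permutation of order 2.
            return ["dynamic lexicographic reduction"]
        if component.packing_partitioning_rows >= 3:
            # Exploit packing-partitioning substructure.
            return ["static packing-partitioning propagation"]
        return ["dynamic orbitopal reduction"]
    # Non-orbitopal components: two compatible methods combined.
    return ["dynamic lexicographic reduction", "orbital reduction"]
```

The order of the tests mirrors the enumeration: the cheap, complete handlers are preferred, and the general dynamic methods serve as the fallback.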
We also compare settings where orbitopal reduction, orbital reduction and lexicographic reduction are turned off. If orbitopal symmetries are not handled, we always resort to the second setting, where lexicographic reduction (if enabled) and/or orbital reduction (if enabled) are applied.
For orbitopal symmetries, we have chosen to handle certain common cases before falling back on the dynamic orbitopal reduction code that we devised. First, for single-row orbitopes, the symmetry is completely handled by the linear constraints; since linear constraints are strong and interact well with other components of the solver, we handle this case this way. Second, if an orbitope only has two columns, the underlying symmetry group is generated by a single permutation of order~2, and the symmetry is completely handled by lexicographic reduction. Third, it is well known that exploiting problem-specific information can greatly assist symmetry handling; if a packing-partitioning structure is detected, we therefore apply the specialized methods discussed above. Otherwise, we use orbitopal reduction as discussed in Section~\ref{sec:gen:orbitopalfixing}. In Step~\ref{step:dyn}, the choice of~$\varphi_\beta$ is inspired by the discussion in Section~\ref{sec:theframework} (``Interpretation''). Moving the branching variable to the middlemost possible column creates balanced subproblems, whereas the leftmost possible column might lead to more reductions in one child than in the other. Below, we investigate which technique is more favorable.
For the components that do not consist of orbitopal symmetries, we settle on the setting of Example~\ref{ex:vardynamic} and use both compatible methods. Since the \code{SCIP} version that we compare to uses either orbital fixing or (static) lexicographic fixing for such components, our setting allows us to assess the impact of adapting lexicographic fixing to make it compatible with orbital fixing.
\paragraph{Comparison base} We compare to similar, readily implemented methods in \code{SCIP}. In \code{SCIP} jargon, these methods are called \emph{polyhedral} and \emph{orbital fixing} and can be enabled/disabled independently. The polyhedral methods consist of LexFix\xspace for the \SHC{}s\xspace~$x \succeq \gamma(x)$ for~$\gamma \in \Pi_i$ and methods to handle orbitopal symmetries; orbital fixing uses OF\xspace from~\cite{pfetsch2019computational}. Note that only symmetries of binary variables are handled.
If a component consists of orbitopal symmetries, the polyhedral methods handle these symmetries by carrying out the check of Step~\ref{step:pack}. If the check succeeds, methods exploiting packing-partitioning structures are applied as described above. Otherwise, orbitopal symmetries of binary variables are handled by a variant of row-dynamic orbitopal fixing. The remaining components are handled either by static LexFix\xspace or by orbital fixing, depending on whether the polyhedral methods or orbital fixing is enabled. Moreover, if the polyhedral methods are disabled, orbital fixing is also applied to components consisting of orbitopal symmetries.
\subsection{Results} All experiments have been run in parallel on the Dutch National supercomputer Snellius ``thin'' consisting of compute nodes with dual AMD Rome 7H12 processors providing a total of \num{128} physical CPU cores, and~\SI{256}{\giga\byte} memory. Each process has an allocation of \num{4} physical CPU cores and~\SI{8}{\giga\byte} of memory. In the results below we report the running times (column \emph{time}) and time spent on symmetry handling (column \emph{sym}) in shifted geometric mean~$\prod_{i = 1}^n (t_i + 1)^{\frac{1}{n}} - 1$ to reduce the impact of outliers. We also report the number of instances solved within the time limit of~\SI{1}{\hour} (column~\emph{\#S}). If the time limit is reached, the solving time of that instance is reported as \SI{1}{\hour}. None of the instances failed or exceeded the memory limit. We report the aggregated results for all instances, for all instances for which at least one setting solved the instance within the time limit, and for all instances solved by all settings. For each of these classes, we provide their size below.
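For reference, the shifted geometric mean used in all tables can be computed with the following stdlib-only helper (the log-space formulation is our implementation detail):

```python
import math

def shifted_geometric_mean(times, shift=1.0):
    # prod_i (t_i + shift)^(1/n) - shift, evaluated in log space
    # to avoid overflow for long runs of large times.
    n = len(times)
    return math.exp(sum(math.log(t + shift) for t in times) / n) - shift
```

For instance, running times of 1 and 7 seconds give a shifted geometric mean of $\sqrt{2 \cdot 8} - 1 = 3$, whereas the arithmetic mean is 4; large outliers are damped in the same way.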
We use abbreviations for the settings. We compare no symmetry handling (\emph{Nosym}), traditional polyhedral methods (\emph{Polyh}), traditional orbital fixing (\emph{OF}), dynamic orbitopal reduction (\emph{OtopRed}), dynamic lexicographic reduction (\emph{LexRed}), orbital reduction (\emph{OR}), and combinations hereof. Note that even setting \emph{Nosym} reports a small symmetry time: due to \code{SCIP}'s architecture, managing the corresponding plug-in requires some time even if it is not used. Moreover, if symmetries are handled by linear constraints in the model (cf.\ Step~\ref{step:cons}), these are not reported in the symmetry handling figures. Recall from Step~\ref{step:dyn} that we consider two variants of selecting~$\varphi_\beta$ for dynamic orbitopal reduction. We refer to these variants as \emph{first} and \emph{median} for the leftmost and middlemost column, respectively. As the noise dosage testset exclusively consists of orbitopal symmetries, we test both parameterizations there. For the remaining testsets, we restrict ourselves to \emph{median} as it performs better on average for the noise dosage instances.
Since the covering design and noise dosage testsets are relatively small and contain many easy instances, we reduce performance variability by repeating each configuration-instance pair three times with different global random seed shifts. Due to the large number of instances and settings, and the large time requirements, only one seed is used for the MIPLIB instances.
\subsubsection{MIPLIB} \label{sec:results:miplib} To compose our testset, we presolved all instances from the MIPLIB~2010~\cite{KochEtAl2011MIPLIB} and MIPLIB~2017~\cite{Gleixner2021MIPLIB} benchmark testsets and selected those for which \code{SCIP} could detect symmetries. This results in~129 instances. We excluded instance \texttt{mspp16} as it exceeds the memory limit during presolving.
The goal of our experiments for MIPLIB instances is twofold. On the one hand, we investigate whether our framework allows us to solve symmetric mixed-integer programs faster than the state-of-the-art methods implemented in \code{SCIP}. On the other hand, we are interested in the effect of adapting different symmetry handling methods for~$\sigma_\beta(x) \succeq \sigma_\beta(\gamma(x))$ in comparison with their static counterparts. Table~\ref{tab:results:miplib} shows the aggregated results of our experiments.
\begin{table}[t] \caption{Results for MIPLIB 2010 and MIPLIB 2017} \label{tab:results:miplib} \centering \footnotesize \begin{tabular}{@{}L{3.2cm}*{3}{R{1cm}R{1cm}R{.6cm}}@{}} \toprule \multicolumn{1}{c}{Setting} & \multicolumn{3}{c}{All instances (128)} & \multicolumn{3}{c}{Solved by some setting (75)} & \multicolumn{3}{c}{Solved by all settings (51)} \\ \cmidrule(lr){1-1} \cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10} & time (s) & sym (s) & \#S & time (s) & sym (s) & \#S & time (s) & sym (s) & \#S \\ \cmidrule{2-10} Nosym & 970.73 & 0.18 & 59 & 384.01 & 0.17 & 59 & 157.61 & 0.08 & 51 \\ Polyh & 747.43 & 1.82 & 67 & 245.46 & 1.02 & 67 & 138.09 & 0.69 & 51 \\ OF & 822.26 & 1.34 & 66 & 289.16 & 0.93 & 66 & 134.34 & 0.48 & 51 \\ Polyh + OF & 728.88 & 1.97 & 69 & 235.17 & 0.98 & 69 & 130.08 & 0.58 & 51 \\ \midrule OtopRed & 811.90 & 1.34 & 62 & 282.95 & 0.75 & 62 & 129.08 & 0.49 & 51 \\ LexRed & 799.82 & 2.22 & 67 & 275.78 & 1.30 & 67 & 142.30 & 0.73 & 51 \\ OR & 807.75 & 4.94 & 66 & 280.46 & 3.29 & 66 & 140.62 & 1.40 & 51 \\ OR + LexRed & 788.29 & 5.44 & 68 & 269.01 & 3.43 & 68 & 138.18 & 1.57 & 51 \\ OR + OtopRed & 727.04 & 2.38 & 66 & 234.14 & 1.34 & 66 & 128.62 & 0.57 & 51 \\ OtopRed + LexRed & 691.22 & 1.90 & 68 & 214.84 & 1.01 & 68 & 123.68 & 0.56 & 51 \\ OR + OtopRed + LexRed & 708.63 & 2.57 & 67 & 224.18 & 1.43 & 67 & 125.73 & 0.63 & 51 \\ \bottomrule \end{tabular} \end{table}
Regarding the first question, we observe that using any type of symmetry handling is vastly superior to handling no symmetries at all. Considering all instances, the best of the traditional settings reports an average running time improvement of \SI{24.9}{\percent} over \emph{Nosym}, and the best of the dynamified methods reports \SI{28.8}{\percent}. Our framework thus allows us to improve on \code{SCIP}'s state-of-the-art by~\SI{5.2}{\percent}. On the instances that can be solved by at least one setting, this effect is even more pronounced: our framework improves on \code{SCIP}'s best setting by \SI{8.6}{\percent}. We believe that this is a substantial improvement, because the MIPLIB instances are rather diverse. In particular, some of these instances contain only very few symmetries.
Regarding the second question, we compare the \code{SCIP} settings \emph{Polyh}, \emph{OF}, and \emph{Polyh + OF} with their counterparts in our framework, being \emph{OtopRed + LexRed}, \emph{OR}, and \emph{OR + OtopRed}, respectively. The running time of the pure polyhedral setting \emph{Polyh} can be improved in our framework by~\SI{7.5}{\percent} when considering all instances, and by~\SI{12.4}{\percent} when considering only the instances solved by some setting within the time limit. Consequently, adapting symmetry handling to the branching order via the symmetry prehandling structures of Examples~\ref{ex:vardynamic} and~\ref{ex:orbitopalfixing} yields substantial performance improvements. Our explanation for this behavior is that symmetry reductions can be found much earlier in branch-and-bound than in the static setting (cf.\ Figure~\ref{fig:branching:orbitopalfixing}, where no reductions can be found at depth~1 if no adaptation is used). Thus, symmetric parts of the branch-and-bound tree can be pruned earlier.
Comparing \emph{OF} and \emph{OR}, we observe that \emph{OR} is slightly faster than \emph{OF}. Both methods, however, are much slower than \emph{Polyh} and \emph{OtopRed
+ LexRed}. A possible explanation is that the latter methods make use of orbitopal fixing, which can handle entire symmetry groups, whereas \emph{OF} and \emph{OR} only find some reductions based on orbits.
Among \code{SCIP}'s methods, \emph{Polyh + OF} performs best. Its counterpart \emph{OR + OtopRed} in our framework performs comparably on all instances and on the solvable instances; however, three fewer instances are solved. A possible explanation for the comparable running time is that traditional orbital fixing and the variant of row-dynamic orbitopal fixing are already dynamic methods, so a comparable running time can be expected (although this does not explain the difference in the number of solved instances).
Lastly, we discuss settings that are not possible in the traditional setting, i.e., combining LexRed\xspace and orbital reduction. Enhancing orbital reduction by LexRed\xspace indeed leads to an improvement of~\SI{2.4}{\percent} and allows us to solve two more instances. The best setting in our framework, however, does not enable all available methods. Indeed, the running time of \emph{OR + OtopRed + LexRed} can be improved by \SI{4.9}{\percent} by disabling orbital reduction. We explain this phenomenon by the fact that orbitopal reduction already handles a lot of group structure: combining LexRed\xspace and orbital reduction on the remaining components only finds a few more reductions, and the time needed for finding them is not compensated by the symmetry reduction effect. If no group structure is handled via orbitopal reduction, LexRed\xspace can indeed be enhanced by orbital reduction.
Recall that symmetries in MIPLIB instances act predominantly on binary variables. As opposed to the traditional settings that we compare to, the generalized setting can handle symmetries on non-binary variable domains. Thus, potentially more reductions can follow from larger symmetry groups that include non-binary variables. As shown in Appendix~\ref{app:tables}, a larger group is detected in only 10 out of the 128 instances, and only 2 of these can be solved by some setting. When considering the subset of instances where symmetry is handled based on the same group, we report similar results as before. This shows that even if we only consider problems where symmetries act on the binary variables, our generalized methods outperform similar state-of-the-art methods.
\subsubsection{Minimum $\boldsymbol{t\text{-}(v, k, \lambda)}$-covering
designs}
Since the symmetries of MIPLIB instances predominantly act on binary variables, we turn our focus in the following to symmetric problems without binary variables to assess our framework in this regime. Let~$v \geq k \geq t \geq 0$ and $\lambda > 0$ be integers. Let $V$ be a set of cardinality $v$, and let~$\mathcal K$ (resp.\@~$\mathcal T$) be the collection of all subsets of~$V$ of size~$k$ (resp.\@~$t$). A~\emph{$t\text{-}(v, k, \lambda)$-covering design} is a multiset~$\mathcal{C} \subseteq \mathcal{K}$ such that every set $T \in \mathcal T$ is contained in at least $\lambda$ sets of $\mathcal C$, counting with multiplicity. A covering design is \emph{minimum} if no smaller covering design with these parameters exists; finding minimum covering designs is of interest, e.g., in~\cite{margot2003coveringdesigns,nurmela1999covering,fadlaoui2011tabucoveringdesigns}. Margot~\cite{margot2003coveringdesigns} gives an ILP-formulation with decision variables~ $\nu \in \{0, \dots, \lambda \}^{\mathcal K}$ specifying the multiplicities of the sets $K \in \mathcal K$ in the minimum $t\text{-}(v, k, \lambda)$-covering design sought after. The problem is to \begin{subequations} \renewcommand{\theequation}{CD\arabic{equation}} \label{eq:CD} \begin{align} \text{minimize}\ \sum_{K \in \mathcal K} \nu_{K}&, \\ \text{subject to}\ \sum_{K \in \mathcal K : T \subseteq K} \nu_K &\geq \lambda &&\text{for all}\ T \in \mathcal T,\\ \nu &\in \{ 0, \dots, \lambda \}^{\mathcal K}. \end{align} \end{subequations} Symmetries in this problem re-label the elements of $V$, and they are detected by the symmetry detection routine of \code{SCIP}. Note that, although the underlying group is the symmetric group, these symmetries are not orbitopal in terms of the variables $\nu$.
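The data of model~\eqref{eq:CD} can be enumerated directly. The following sketch (for illustration only; it is not our actual instance generator) lists the variables and, for each $t$-subset, the $k$-subsets appearing in its covering constraint:

```python
from itertools import combinations

def covering_design_data(v, k, t):
    # One variable per k-subset (domain {0, ..., lambda}), and one
    # constraint per t-subset T: sum of nu_K over K containing T >= lambda.
    ground = range(v)
    Ks = list(combinations(ground, k))
    constraints = {T: [K for K in Ks if set(T) <= set(K)]
                   for T in combinations(ground, t)}
    return Ks, constraints
```

For small parameters such as $v=4$, $k=2$, $t=1$, this yields six variables and four covering constraints, each with three terms; instance sizes grow binomially in $v$.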
Margot~\cite{margot2003coveringdesigns} considers an instance with $\lambda = 1$, which is a binary problem. We consider all non-binary instances with parameters $\lambda \in \{ 2, 3 \}$ and $12 \geq v \geq k \geq t \geq 1$, restricted to instances that were solved within \SI{7200}{\second} in preliminary runs and that require at least \SI{10}{\second} for some setting. This way, 58 instances remain. With 3 seeds per instance, we end up with~174 results. The aggregated results are shown in Table~\ref{tab:results:coveringdesigns}. Because the instances are non-binary, none of the considered traditional methods can be applied to handle the symmetries. As such, we can only compare to no symmetry handling. The best of our settings reports an improvement of~\SI{77.8}{\percent} over no symmetry handling. \begin{table}[t] \caption{Results for finding minimum $t\text{-}(v,k,\lambda)$ covering designs} \label{tab:results:coveringdesigns} \centering \footnotesize \begin{tabular}{@{}L{2.4cm}*{3}{R{1cm}R{1cm}R{.6cm}}@{}} \toprule \multicolumn{1}{c}{Setting} & \multicolumn{3}{c}{All instances (174)} & \multicolumn{3}{c}{Solved by some setting (171)} & \multicolumn{3}{c}{Solved by all settings (126)} \\ \cmidrule(lr){1-1} \cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10} & time (s) & sym (s) & \#S & time (s) & sym (s) & \#S & time (s) & sym (s) & \#S \\ \cmidrule{2-10} Nosym & 211.13 & 0.16 & 138 & 200.85 & 0.15 & 138 & 100.65 & 0.08 & 126 \\ LexRed & 80.98 & 2.19 & 154 & 75.72 & 2.01 & 154 & 30.67 & 0.60 & 126 \\ OR & 42.75 & 0.72 & 159 & 39.49 & 0.64 & 159 & 18.52 & 0.20 & 126 \\ OR + LexRed & 39.92 & 1.51 & 159 & 36.83 & 1.35 & 159 & 17.00 & 0.50 & 126 \\ \bottomrule \end{tabular} \end{table}
If orbital reduction and LexRed\xspace are not combined, orbital reduction is the more competitive method as it improves upon LexRed\xspace by~\SI{47.2}{\percent}. Although LexRed\xspace is much faster than \emph{Nosym}, this comparison shows that more symmetries can be handled when the group structure is exploited via orbits. Nevertheless, our framework allows to further improve orbital reduction by another~\SI{6.6}{\percent} if it is combined with LexRed\xspace. That is, orbital reduction is not able to capture the entire group structure and missing information can be handled by LexRed\xspace efficiently via our framework.
\subsubsection{Noise dosage} \label{sec:results:noisedosage} To assess the effectiveness of orbitopal reduction in isolation from other symmetry handling methods, we consider the noise dosage (ND) problem~\cite{sherali2001models}, which has orbitopal symmetries as explained in Problem~\ref{prob:NDbinary}. However, we replace the binary constraint~\eqref{prob:ND:binary} by~${\vartheta \in \mathds Z_{\geq 0}^{p \times q}}$ to be able to evaluate the effect of orbitopal reduction on non-binary problems. In particular, we are interested in whether the choice between the \emph{first} and \emph{median} variants of the parameter~$\varphi_\beta$ matters.
For each of the parameters $(p, q) \in \{ (3, 8), (4, 9), (5, 10) \}$, Sherali and Smith have generated four instances~\cite{sherali2001models}. We thank J.\@ Cole Smith for providing us these instances. It turns out that these instances are dated and very easy to solve even without symmetry handling methods. As such, we have extended the testset. For each parameterization $(p, q) \in \{ (6, 11), (7, 12), \dots, (11, 16) \}$, we generate five instances. The details of our generator are in Appendix~\ref{app:noisedosage}.
Symmetries in the ND problem can be handled by adding $\sum_{i=1}^p M^{p-i} \vartheta_{i,j} \geq \sum_{i=1}^p M^{p-i} \vartheta_{i,j+1}$ for $j \in \{1, \dots, q-1\}$, where $M$ is an upper bound on the maximal number of tasks that one worker can perform on a machine, as described by Sherali and Smith~\cite{sherali2001models}. This is similar to fundamental domain inequalities~\cite{friedman2007fundamental} and the symmetry handling constraints of Ostrowski~\cite{ostrowski2009symmetry}. Although these constraints can be used to handle symmetries, it is folklore that constraints with coefficients of vastly different magnitudes can lead to numerical instabilities. They work well for instances with a small number of machines $p$, such as the original instances of Sherali and Smith. However, for our instance \texttt{noise11\_16\_480\_s2}, such a constraint has minimal absolute coefficient~1 and maximal absolute coefficient~$11^{10}$. Various warnings in the log files confirm the presence of numerical instabilities.
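The coefficient spread of these ordering inequalities is easy to quantify. The following sketch (with parameters $p = M = 11$ matching the coefficient range reported above; the helper name is ours) computes the coefficient vector and the largest-to-smallest ratio:

```python
def sherali_smith_coefficients(p, M):
    # Coefficients M^(p-i), i = 1, ..., p, of one ordering inequality.
    return [M ** (p - i) for i in range(1, p + 1)]

coeffs = sherali_smith_coefficients(11, 11)
ratio = max(coeffs) // min(coeffs)  # 11**10 = 25937424601
```

A ratio of roughly $2.6 \cdot 10^{10}$ far exceeds what floating-point LP solvers handle reliably, which is consistent with the instabilities observed in the logs.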
In fact, we observe incorrect results. For instance, when adding these linear constraints, instance \texttt{noise11\_16\_480\_s1} is declared infeasible during presolving, whereas it is feasible, as the runs without symmetry handling and with dynamic orbitopal reduction illustrate. Moreover, for instance \texttt{noise9\_14\_480\_s3}, no infeasibility is detected, but the solver reports a wrong optimal solution. Thus, there is a need to replace these inequalities by numerically more stable methods.
\begin{table}[t] \caption{Results for finding optimal solutions to the noise dosage problems} \label{tab:results:noisedosage} \centering \footnotesize \begin{tabular}{@{}L{2.4cm}*{3}{R{1cm}R{1cm}R{.6cm}}@{}} \toprule \multicolumn{1}{c}{Setting} & \multicolumn{3}{c}{All instances (165)} & \multicolumn{3}{c}{Solved by some setting (132)} & \multicolumn{3}{c}{Solved by all settings (90)} \\ \cmidrule(lr){1-1} \cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10} & time (s) & sym (s) & \#S & time (s) & sym (s) & \#S & time (s) & sym (s) & \#S \\ \cmidrule{2-10} Nosym & 152.97 & 1.33 & 90 & 69.02 & 0.87 & 90 & 10.13 & 0.19 & 90 \\ Sherali-Smith & 26.45 & 0.66 & 129 & 7.11 & 0.18 & 129 & 1.30 & 0.02 & 90 \\ OtopRed (first) & 35.55 & 4.11 & 123 & 10.60 & 1.29 & 123 & 2.10 & 0.20 & 90 \\ OtopRed (median) & 33.89 & 3.84 & 117 & 9.95 & 1.25 & 117 & 1.73 & 0.13 & 90 \\ \bottomrule \end{tabular} \end{table}
We have removed these two instances from our testset as they evidently lead to wrong results. The aggregated results for the remaining instances are presented in Table~\ref{tab:results:noisedosage}. We observe that the symmetry handling inequalities perform~\SI{22.0}{\percent} better than orbitopal reduction. However, the numbers for the inequality-based approach need to be interpreted carefully: reporting a correct objective value does not necessarily mean that the branch-and-bound algorithm worked correctly. For instance, nodes might have been pruned because of numerical inaccuracies although a numerically correct algorithm would not have pruned them. These issues do not occur for the propagation algorithm of orbitopal reduction, as it only reduces variable domains.
Comparing the two parameterizations of orbitopal fixing, we see that the \emph{median} variant performs \SI{4.7}{\percent} better than the \emph{first} variant. This shows that the choice of the column-reordering symmetry~$\varphi_\beta$ has a measurable impact on the running time of dynamic orbitopal reduction. A possible explanation for the median rule performing better than the first rule is that it creates more balanced branch-and-bound trees, cf.\ Section~\ref{sec:theframework}. Consequently, the right choice of~$\varphi_\beta$ might significantly change the performance of orbitopal reduction. We leave it as future research to find a good selection rule for~$\varphi_\beta$, as such a rule might be based on the structure of the underlying problem, e.g., the size of the variable domains and the number of rows and columns.
\section{Conclusions and future research}
Symmetry handling is an important component of modern solver technology. One of the main issues, however, is the selection and combination of different symmetry handling methods. Since the latter is non-trivial, we have proposed a flexible framework that makes it easy to check whether different methods are compatible and that applies to arbitrary variable domains. Numerical results show that our framework is substantially faster than symmetry handling in the state-of-the-art solver \code{SCIP}. In particular, we benefit from combining different symmetry handling methods, which is possible in our framework, but only in a limited way in \code{SCIP}. Moreover, due to our generalization of symmetry handling algorithms from binary problems to general variable domains, our framework allows us to reliably handle symmetries in various non-binary applications.
Due to its flexibility, our framework is not only applicable to the methods discussed in this article, but also accommodates methods that will be developed in the future (provided they are compatible with \SHC{}s\xspace~\eqref{eq:main}). This opens, among others, the following directions for future research.
In this article, we experimentally evaluated our framework only for permutation symmetries. As the framework also supports other types of symmetries such as rotational and reflection symmetries, further research could involve devising symmetry handling methods for such symmetries. Moreover, to handle permutation symmetries, we only used propagation techniques. In the future, these methods can be complemented in two ways. On the one hand, other techniques such as separation routines can be applied to handle \SHC{}s\xspace~\eqref{eq:main}. On the other hand, our symmetry handling methods have not exploited additional problem structure such as packing-partitioning structures in orbitopal fixing. Further research can focus on the incorporation of problem constraints in handling the symmetry handling constraint of Theorem~\ref{thm:main}. This includes a dynamification of packing-partitioning orbitopes, as well as introducing a way to handle overlapping orbitopal subgroups within a component.
Last, in the computational results we described decision rules for enabling or disabling certain symmetry handling methods. If new symmetry handling methods are cast into our framework, however, these rules need to be updated. Future research could thus encompass the derivation of good rules for handling symmetries in our framework.
\paragraph{Acknowledgment} We thank J.\ Cole Smith for providing us the instances of the noise dosage problem used in~\cite{sherali2001models}.
\appendix \section{Further results for MIPLIB} \label{app:tables}
In this appendix, we investigate in more detail the performance gains achieved by our methods for MIPLIB instances in comparison to the traditional methods. The traditional methods only work for symmetries on binary variables, whereas our generalized methods can also handle non-binary variables. As the symmetric MIPLIB benchmark instances mostly exhibit binary symmetries, the question arises whether the observed gains are due to a few instances for which our methods can handle more (non-binary) symmetries.
To investigate this effect, we have partitioned these instances into two classes. The first class contains instances where more symmetries than in the traditional setting arise; the second class contains the remaining instances, i.e., instances where both settings handle the same symmetries. The results for these two subsets are shown in Tables~\ref{tab:miplib:diff} and~\ref{tab:miplib:same}, respectively.
\begin{table}[b!] \caption{MIPLIB results where the detected symmetry group is different for the traditional setting and our generalized setting.} \label{tab:miplib:diff} \centering \footnotesize \begin{tabular}{@{}L{3.2cm}*{3}{R{1.1cm}R{1cm}R{.6cm}}@{}} \toprule \multicolumn{1}{c}{Setting} & \multicolumn{3}{c}{All instances (10)} & \multicolumn{3}{c}{Solved by some setting (2)} & \multicolumn{3}{c}{Solved by all settings (1)} \\ \cmidrule(lr){1-1} \cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10} & time (s) & sym (s) & \#S & time (s) & sym (s) & \#S & time (s) & sym (s) & \#S \\ \cmidrule{2-10} Nosym & 2397.54 & 0.15 & 1 & 468.79 & 0.02 & 1 & 60.29 & 0.01 & 1 \\ Polyh & 2369.31 & 4.29 & 1 & 436.46 & 0.01 & 1 & 52.14 & 0.00 & 1 \\ OF & 2396.04 & 3.09 & 1 & 469.20 & 0.00 & 1 & 60.39 & 0.00 & 1 \\ Polyh + OF & 2341.74 & 5.69 & 1 & 413.65 & 0.01 & 1 & 46.72 & 0.00 & 1 \\ \midrule OtopRed & 2395.33 & 2.63 & 1 & 468.48 & 0.06 & 1 & 60.20 & 0.02 & 1 \\ LexRed & 2321.62 & 7.21 & 2 & 400.65 & 0.30 & 2 & 46.62 & 0.00 & 1 \\ OR & 2390.39 & 15.94 & 1 & 463.55 & 12.32 & 1 & 58.93 & 0.17 & 1 \\ OR + LexRed & 2383.85 & 17.78 & 2 & 456.75 & 12.60 & 2 & 59.05 & 0.24 & 1 \\ OR + OtopRed & 2402.87 & 3.62 & 1 & 467.93 & 0.04 & 1 & 60.06 & 0.02 & 1 \\ OtopRed + LexRed & 2399.78 & 3.50 & 1 & 472.57 & 0.03 & 1 & 61.24 & 0.00 & 1 \\ OR + OtopRed + LexRed & 2395.61 & 3.62 & 1 & 468.22 & 0.07 & 1 & 60.13 & 0.03 & 1 \\ \bottomrule \end{tabular} \end{table}
\begin{table}[b!] \caption{MIPLIB results where the detected symmetry group is the same for the traditional setting and our generalized setting.} \label{tab:miplib:same} \centering \footnotesize \begin{tabular}{@{}L{3.2cm}*{3}{R{1.1cm}R{1cm}R{.6cm}}@{}} \toprule \multicolumn{1}{c}{Setting} & \multicolumn{3}{c}{All instances (118)} & \multicolumn{3}{c}{Solved by some setting (73)} & \multicolumn{3}{c}{Solved by all settings (50)} \\ \cmidrule(lr){1-1} \cmidrule(lr){2-4} \cmidrule(lr){5-7} \cmidrule(lr){8-10} & time (s) & sym (s) & \#S & time (s) & sym (s) & \#S & time (s) & sym (s) & \#S \\ \cmidrule{2-10} Nosym & 899.10 & 0.18 & 58 & 381.92 & 0.17 & 58 & 160.65 & 0.08 & 50 \\ Polyh & 677.77 & 1.67 & 66 & 241.61 & 1.06 & 66 & 140.79 & 0.71 & 50 \\ OF & 750.98 & 1.23 & 65 & 285.34 & 0.97 & 65 & 136.50 & 0.49 & 50 \\ Polyh + OF & 660.19 & 1.77 & 68 & 231.55 & 1.02 & 68 & 132.76 & 0.59 & 50 \\ \midrule OtopRed & 740.73 & 1.25 & 61 & 279.06 & 0.77 & 61 & 131.06 & 0.50 & 50 \\ LexRed & 730.72 & 1.98 & 65 & 272.97 & 1.34 & 65 & 145.50 & 0.75 & 50 \\ OR & 736.76 & 4.44 & 65 & 276.63 & 3.16 & 65 & 143.08 & 1.43 & 50 \\ OR + LexRed & 717.69 & 4.88 & 66 & 265.14 & 3.29 & 66 & 140.54 & 1.61 & 50 \\ OR + OtopRed & 656.95 & 2.29 & 65 & 229.74 & 1.40 & 65 & 130.59 & 0.59 & 50 \\ OtopRed + LexRed & 621.98 & 1.79 & 67 & 210.24 & 1.04 & 67 & 125.43 & 0.57 & 50 \\ OR + OtopRed + LexRed & 639.09 & 2.49 & 66 & 219.69 & 1.48 & 66 & 127.59 & 0.64 & 50 \\ \bottomrule \end{tabular} \end{table}
There are only 10 instances in the first class, and only 2 of these can be solved by any of the settings considered. So, indeed, as mentioned at the beginning of Section~\ref{sec:num}, the symmetries of the symmetric MIPLIB instances act predominantly on binary variables. Since only a few instances handle symmetries based on a different group, this has only a small effect on the results reported in Section~\ref{sec:results:miplib}. When restricting to the instances with the same symmetries (Table~\ref{tab:miplib:same}), the best of our methods on all instances achieves an improvement of~\SI{5.8}{\percent} over the best traditional method, as opposed to the figure of~\SI{5.2}{\percent} from Section~\ref{sec:results:miplib}. This shows that our generalized methods outperform similar state-of-the-art methods even for problems whose symmetries act only on binary variables.
\section{Testset generation for Noise Dosage instances} \label{app:noisedosage}
In this appendix, we describe how we generate the instances of the noise dosage problem used in our experiments. We have extended the testset used by Sherali and Smith~\cite{sherali2001models} by including instances with more workers and more machines. Since the generator of their instances is not available to us, we have analyzed the instances from~\cite{sherali2001models} provided by J.\ Cole Smith and extracted their characteristic features.
Most importantly, we have observed that the total worker time is about half the total time required by the machines, and that the number of tasks per machine ranges from 4 to 9. The noise dosage units and the time per machine are floating point numbers. Given $p$ machines and $q$ workers, where each worker works at most $H = 480$ hours, we sample the number of tasks $d_i$ per machine uniformly at random from the integers between 4 and 10, then choose $\mu = \frac{1}{2H} \sum_{i=1}^p d_i$ and sample the time per task from the normal distribution with mean $\mu$ and standard deviation $\frac15 \mu$, i.e., $\mathcal N(\mu, \frac15 \mu)$. Last, the noise units are sampled from $\mathcal N(18, 4)$. When sampling from a normal distribution, we discard negative sample values.
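The sampling procedure above can be sketched as follows. This is an illustrative sketch that follows the description in the text, not the actual generator from our repository; all function and variable names are our own.

```python
import random

def generate_noise_dosage_instance(p, q, H=480, seed=0):
    """Sample a noise dosage instance with p machines and q workers,
    following the sampling procedure described in the text."""
    rng = random.Random(seed)

    def positive_normal(mean, stddev):
        # Rejection sampling: negative draws are discarded, as in the text.
        while True:
            x = rng.gauss(mean, stddev)
            if x >= 0:
                return x

    # Number of tasks per machine, uniform over the integers {4, ..., 10}.
    tasks = [rng.randint(4, 10) for _ in range(p)]
    # Mean task time, chosen as stated in the text.
    mu = sum(tasks) / (2 * H)
    # Per-task processing times ~ N(mu, mu/5) and noise units ~ N(18, 4).
    times = [[positive_normal(mu, mu / 5) for _ in range(d)] for d in tasks]
    noise = [[positive_normal(18, 4) for _ in range(d)] for d in tasks]
    return tasks, times, noise
```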
Our instance generator and generated instance files are publicly available at~\url{https://github.com/JasperNL/scip-unified/tree/unified/problem_instances}.
\end{document} |
\begin{document}
\title{Higher K-theory of Toric stacks} \author{Roy Joshua and Amalendu Krishna} \thanks{The first author was supported by a grant from the National Science Foundation. The second author was supported by the Swarnajayanti fellowship, Govt. of India, 2011.} \address{Department of Mathematics, Ohio State University, Columbus, Ohio, 43210, USA} \address{School of Mathematics, Tata Institute of Fundamental Research, Homi Bhabha Road, Colaba, Mumbai, India} \email{joshua@math.ohio-state.edu} \email{amal@math.tifr.res.in} \baselineskip=10pt
\keywords{toric, fan, stack, K-theory}
\subjclass[2010]{19L47, 14M25}
\begin{abstract} In this paper, we develop several techniques for computing the higher G-theory and K-theory of quotient stacks. Our main results for computing these groups are in terms of spectral sequences.
We show that these spectral sequences degenerate in the case of many toric stacks, thereby providing an efficient computation of their higher K-theory. We apply our main results to give an explicit description of the higher K-theory of many smooth toric stacks. As another application, we describe the higher K-theory of toric stack bundles over smooth base schemes. \end{abstract}
\maketitle
\section{Introduction}\label{section:Intro} Toric varieties form a good testing ground for verifying many conjectures in algebraic geometry. This becomes particularly apparent when one wants to understand cohomology theories for algebraic varieties. Computations of cohomology rings of smooth toric varieties such as the Grothendieck ring of vector bundles, the Chow ring and the singular cohomology ring have been well-understood for many years. These computations facilitate predictions on the structure of various cohomology rings of a general algebraic variety.
Just like toric varieties, one would like to have a class of algebraic stacks on which various cohomological problems about stacks can be tested. The class of {\sl toric stacks}, first introduced and studied in \cite{BCS} by Borisov, Chen and Smith, in terms of combinatorial data called {\sl stacky fans}, is precisely such a class of algebraic stacks. These stacks are expected to be the toy models for understanding cohomology theories of algebraic stacks, a problem which is still very complicated in general.
Such a point of view probably accounts for the recent flurry of activity in this area with several groups considering various forms of toric stacks. (See for example, \cite{FMN}, \cite{Laf}, \cite{GSI} in addition to \cite{BCS}.) In \cite{FMN}, Fantechi, Mann and Nironi study the structure of toric Deligne-Mumford stacks in detail. Recently, in \cite{GSI}, Geraschenko and Satriano consider in detail {\sl toric stacks} which may not be Deligne-Mumford. A class of toric stacks of this kind and their cohomology were earlier considered by Lafforgue \cite{Laf} in the study of geometric Langlands correspondence. All examples of toric stacks studied before, including those in \cite{BCS}, \cite{FMN} and \cite{Laf} are shown to be special cases of the stacks introduced in \cite{GSI}. These stacks appear naturally while solving certain moduli problems and computation of their cohomological invariants allows us to understand these invariants for many moduli spaces.
In \cite{BCS}, Borisov, Chen and Smith computed the rational Chow ring and orbifold Chow ring of toric Deligne-Mumford stacks. The integral version of this result for certain types of toric Deligne-Mumford stacks is due to Jiang and Tseng \cite{JTseng1}, and Iwanari \cite{Iwanari}. Jiang and Tseng also extend some of these results to certain toric stack bundles in \cite{Jiang} and \cite{JTseng2}. Borisov and Horja \cite{BH} computed the integral Grothendieck ring $\bdK _0(X)$ of a toric Deligne-Mumford stack $X$. See also \cite{Sm}.
On the other hand, almost nothing has been worked out so far regarding the higher K-groups of toric stacks, even when they are Deligne-Mumford stacks, although the higher K-groups of toric varieties have been well understood ({\sl cf.} \cite{VV}) and the higher Chow groups of toric varieties have been computed recently in \cite{Krishna}. Furthermore, we still do not know how to compute even the Grothendieck K-theory ring of a general smooth toric stack.
One goal of this paper is to develop general techniques for computing the (integral) higher K-theory of smooth toric stacks. In fact, our results apply to a much bigger class of stacks than just toric stacks. In particular, these results can be used to describe the higher equivariant K-theory of many spherical varieties. Our general results are in terms of spectral sequences which we show degenerate in various cases of interest. This allows us to give an explicit description of the higher K-theory of toric stacks.
As a consequence of this degeneration of spectral sequences, we show how one can recover (the integral versions of) and generalize all the previously known computations of the Grothendieck group of toric Deligne-Mumford stacks. As further applications of the main results, we completely describe the (integral) higher K-theory of weighted projective spaces. As another application, we give a complete description of the higher K-theory of toric stack bundles over a smooth base scheme.
\subsection{Overview of the main results} The following is an overview of our main results. We shall fix a base field $k$ throughout this text. A {\sl scheme} in this paper will mean a separated and reduced scheme of finite type over $k$. A {\sl linear algebraic group} $G$ over $k$ will mean a smooth and affine group scheme over $k$. By a closed subgroup $H$ of an algebraic group $G$, we shall mean a morphism $H \to G$ of algebraic groups over $k$ which is a closed immersion of $k$-schemes. In particular, a closed subgroup of a linear algebraic group will be of the same type and hence smooth. An algebraic group $G$ will be called {\sl diagonalizable} if it is a product of a split torus over $k$ and a finite abelian group of order prime to the characteristic of $k$. In particular, we shall be dealing with only those tori which are split over $k$. Unless mentioned otherwise, all products of schemes will be taken over $k$. A $G$-scheme will mean a scheme with an action of the algebraic group $G$.
For a $G$-scheme $X$, let $\bdG ^G(X)$ (resp. $\bdK ^G(X)$) denote the spectrum of the K-theory of $G$-equivariant coherent sheaves (resp. vector bundles) on $X$. Let $R(G)$ denote the representation ring of $G$. This is canonically identified with $\bdK ^G_0(k)$. If ${\mathfrak X}$ denotes an algebraic stack, we let $\bdK ({\mathfrak X})$ (resp. $\bdG ({\mathfrak X})$) denote the Quillen K-theory (resp. G-theory) of the exact category of vector bundles (resp. coherent sheaves) on the stack ${\mathfrak X}$. For a quotient stack $\mathfrak{X} = [X/G]$, the spectrum $\bdK (\mathfrak{X})$ (resp. $\bdG (\mathfrak{X})$) is canonically weakly equivalent to the equivariant K-theory $\bdK ^G(X)$ (resp. G-theory $\bdG ^G(X)$) of $X$. See \S ~\ref{subsection:K-thry} for more details.
Recall (see below) that a {\sl generically stacky} toric stack $\mathfrak{X}$ is of the form $[X/G]$ where $X$ is a toric variety with dense torus $T$ and $G$ is a diagonalizable group with a given morphism $\phi: G \to T$.
\vskip .3cm
Our first result is the construction of a spectral sequence which allows one to compute the higher K-theory of the stack $[X/G]$ from the K-theory of the stack $[X/T]$, whenever a torus $T$ acts on a scheme $X$ and $\phi : G \to T$ is a morphism of diagonalizable groups. This is related to the spectral sequence of Merkurjev (\cite[Theorem 5.3]{Merk}), whose $E_2$-terms are expressed in terms of ${\bdG}_*([X/T])$ and which converges to ${\bdG}_*(X)$ (See also \cite{Lev} for related constructions).
\begin{comment} This is closely related to the spectral sequence of Merkurjev \cite[Theorem 5.3]{Merk} whose $E_2$-terms are expressed in terms of the equivariant G-theory of a scheme with a group action and converging to the non-equivariant G-theory of the same scheme. (See also \cite{Lev} for related constructions.) Since we restrict to actions by diagonalizable groups, the class of group actions we consider is definitely more restrictive than what was considered in \cite{Merk}. Nevertheless, for this smaller class, our approach
provides a generalization of the Merkurjev spectral sequence: the $E_2$-terms in our spectral sequence are given in terms of the $T$-equivariant G-theory of a $T$-scheme and converge to the equivariant G-theory, equivariant with respect to a diagonalizable group mapping into the torus. \end{comment} We also prove the degeneration of our spectral sequences in many cases, which
provides an efficient tool for computing the higher K-theory of many quotient stacks, including toric stacks.
\begin{thm} \label{thm:main-thm-1} Let $T$ be a split torus acting on a scheme $X$ and let $\phi: G \to T$ be a morphism of diagonalizable groups so that $G$ acts on $X$ via $\phi$. Then, there is a spectral sequence: \begin{equation}\label{eqn:gen.weak.eq1} E^{s,t}_2 = {{\operatorname{Tor}}}_{s}^{R(T)}(R(G), \bdG _t([X/T])) \Rightarrow \bdG _{s+t}([X/G]). \end{equation} Moreover, the edge map $\bdG _0([X/T]) {\underset {R(T)} \otimes} R(G) \to \bdG _0([X/G])$ is an isomorphism.
The spectral sequence ~\eqref{eqn:gen.weak.eq1} degenerates at the $E_2$-terms if $X$ is a smooth toric variety with dense torus $T$ such that $\bdK _0([X/T])$ is a projective $R(T)$-module and we obtain the ring isomorphism: \begin{equation}\label{eqn:gen.weak.eq2} \bdK _*([X/T]) {\underset {R(T)} \otimes} R(G) \xrightarrow{\cong} \bdK _*([X/G]). \end{equation} In particular, this isomorphism holds when $X$ is a smooth and projective toric variety. \end{thm}
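To illustrate the isomorphism ~\eqref{eqn:gen.weak.eq2} in the simplest case (this computation is standard and is included here only for orientation), take $X = {\mathbb P}^1$ with dense torus $T = {\mathbb G}_m$, so that $R(T) = {\mathbb Z}[t, t^{-1}]$. Localization at the two $T$-fixed points shows that $\bdK _0([{\mathbb P}^1/T])$ is a free $R(T)$-module of rank two, hence projective. Taking $G = \mu_n \subseteq T$, with $R(\mu_n) = {\mathbb Z}[t]/(t^n - 1)$, the isomorphism ~\eqref{eqn:gen.weak.eq2} yields
\[
\bdK _*([{\mathbb P}^1/\mu_n]) \cong \bdK _*([{\mathbb P}^1/T]) {\underset {R(T)} \otimes} {\mathbb Z}[t]/(t^n - 1).
\]
In particular, when $G$ is the trivial group, this recovers the familiar description $\bdK _*({\mathbb P}^1) \cong \bdK _*(k)^{\oplus 2}$ given by the projective bundle formula.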
If $\mathfrak{X} = [X/G]$ is a generically stacky toric stack associated to the data $\underline{X} = (X, G \xrightarrow{\phi} T)$, then the above results apply to the G-theory and K-theory of $\mathfrak{X}$. We shall apply Theorem~\ref{thm:main-thm-1} in Subsection~\ref{subsection:BHR} to give an explicit presentation of the Grothendieck K-theory ring of a smooth toric stack. If we specialize to the case of smooth toric Deligne-Mumford stacks, this recovers the main result of Borisov--Horja \cite{BH}.
Another useful application of Theorem~\ref{thm:main-thm-1} is that it tells us how we can read off the $T'$-equivariant G-groups of a $T$-scheme $X$ in terms of its $T$-equivariant G-groups, whenever $T'$ is a closed subgroup of $T$. The special case of the isomorphism $\bdG _0([X/T]) {\underset {R(T)} \otimes} R(G) \xrightarrow{\cong} \bdG _0([X/G])$ when $G$ is the trivial group and $X$ is a smooth toric variety, recovers the main result of \cite{Moreli}.
We should also observe that Theorem~\ref{thm:main-thm-1} applies to a bigger class of schemes than just toric varieties. In particular, it can be used to compute the equivariant $K$-theory of many spherical varieties. Another special case of the isomorphism $\bdG _0([X/T]) {\underset {R(T)} \otimes} R(G) \xrightarrow{\cong} \bdG _0([X/G])$, when $G$ is the trivial group and $X$ is a spherical variety, recovers the main result of \cite{Takeda}.
\vskip .3cm
\begin{thm}\label{thm:main-thm-2} Let $T$ be a split torus acting on a smooth and projective scheme $X$ which is $T$-equivariantly linear ({\sl cf.} Definition~\ref{defn:linear}). Let $\phi: G \to T$ be a morphism of diagonalizable groups so that $G$ acts on $X$ via $\phi$. Then the map \begin{equation}\label{eqn:main-2-0} \rho: \bdK _0([X/G]){\underset {{\mathbb Z}} \otimes} \bdK _*(k) \cong \bdK _0([X/G]){\underset {R(G)} \otimes} \bdK ^G_*(k) \to \bdK _*([X/G]) \end{equation} is a ring isomorphism. \end{thm}
It turns out that all smooth and projective spherical varieties ({\sl cf.} \S~\ref{subsubsection:Spherical}) satisfy the hypothesis of Theorem~\ref{thm:main-thm-2}. When $X$ is a smooth projective toric variety and $G = T$, then the above theorem recovers a result of Vezzosi--Vistoli (\cite[Theorem~6.9]{VV}). \vspace*{1cm}
As an illustration of how the spectral sequence ~\eqref{eqn:gen.weak.eq1} degenerates in the cases not covered by Theorems~\ref{thm:main-thm-1} and ~\ref{thm:main-thm-2}, we prove the following result which describes the higher K-theory of toric stack bundles.
\begin{thm}\label{thm:main-thm-3} Let $B$ be a smooth scheme over a perfect field $k$ and let $[X/G]$ be a toric stack where $X$ is smooth and projective. Let $R_G\left(\bdK _*(B), \Delta \right)$ denote the Stanley-Reisner algebra ({\sl cf.} Definition~\ref{defn:RING}) over $\bdK _*(B)$ associated to a closed subgroup $G$ of $T$. Let $\pi: \mathfrak{X} \to B$ be a toric stack bundle with fiber $[X/G]$. Then there is a ring isomorphism \begin{equation}\label{eqn:vanish4*0} \Phi_G : R_G\left(\bdK _*(B), \Delta \right) \xrightarrow{\cong} \bdK _*(\mathfrak{X}). \end{equation} \end{thm}
When $G$ is the trivial group, the Grothendieck group $\bdK _0(\mathfrak{X})$ was computed in \cite[Theorem~1.2(iii)]{SU}. When $[X/G]$ is a Deligne-Mumford stack, a computation of $\bdK _0(\mathfrak{X})$ appears in \cite{JTseng2}.
The focus of this paper was to describe the higher K-theory of toric stacks. Similar description of the motivic cohomology (higher Chow groups) of such stacks will appear in \cite{JK}.
\vskip .3cm
Here is an {\it outline of the paper}. The second section is a review of toric stacks and their K-theory. In \S~\ref{section:ELin}, we define the notion of equivariantly linear schemes and study their G-theory. We prove Theorem~\ref{thm:main-thm-1} in \S~\ref{section:Gen}, which is the most general result of this paper. We conclude this section with a detailed description of the Grothendieck K-theory ring of general smooth toric stacks.
In \S~\ref{section:Kunneth}, we prove a derived K{\"u}nneth formula and deduce Theorem~\ref{thm:main-thm-2} as a consequence. We conclude this section by working out the higher K-theory of (stacky) weighted projective spaces. We study the K-theory of toric stack bundles over smooth base schemes in the last two sections and conclude with a complete determination of their higher K-theory.
\section{A review of toric stacks and their K-theory} \label{section:T-stacks} In this section, we review the concept of toric stacks from \cite{GSI} and set up the notations for the G-theory and K-theory of such stacks. This is done in some detail for the convenience of the reader.
In what follows, we shall fix a base field $k$ and all schemes and algebraic groups will be defined over $k$. Let ${{\mathcal V}}_k$ denote the category of $k$-schemes and let ${\mathcal V}^S_k$ denote the full subcategory of smooth $k$-schemes. If $G$ is an algebraic group over $k$, we shall denote the category of $G$-schemes with $G$-equivariant maps by ${\mathcal V}_G$. The full subcategory of smooth $G$-schemes will be denoted by ${\mathcal V}^S_G$.
\subsection{Toric stacks}\label{subsection:TStacks-def} \begin{defn} \label{toric.stacks.def} Let $T$ be a torus and let $X$ be a toric variety with dense torus $T$. According to \cite{GSI}, a {\sl toric stack} $\mathfrak{X}$ is an Artin stack of the form $[X/G]$ where $G$ is a subgroup of $T$.
A {\sl generically stacky toric stack} is an Artin stack of the form $[X/G]$ where $G$ is a diagonalizable group with a morphism $\phi: G \to T$. In this case, the stack $\mathfrak{X}$ has an open substack of the form $[T/G]$ which acts on it. The action of $\mathfrak{T} = [T/G]$ on $\mathfrak{X}$ is induced from the torus action on $X$. The stack $\mathfrak{T}$ is often called the {\sl stacky} dense torus of $\mathfrak{X}$. A generically stacky toric stack $[X/G]$ as above will often be described by the data $\underline{X} = (X, G \xrightarrow{\phi} T)$. \end{defn}
\begin{exms} Generically stacky toric stacks arise naturally while one studies toric stacks. This is because a toric variety $X$ with dense torus $T$ has many $T$-invariant subvarieties which are toric varieties and whose dense tori are quotients of $T$. If $Z \subsetneq X$ is such a subvariety, and $G$ is a diagonalizable subgroup of the torus $T$, then $[Z/G]$ is not a toric stack but only a generically stacky toric stack.
A (generically stacky) toric stack $\mathfrak{X}$ is called a {\sl toric Deligne-Mumford stack} if it is a Deligne-Mumford stack after forgetting the toric structure. It is called smooth if $X$ is a smooth scheme. As pointed out in the introduction, Deligne-Mumford toric stacks were introduced for the first time in \cite{BCS} using the notion of stacky fans. A geometric description of the stacks considered in \cite{BCS} was given in \cite{FMN} where many nice properties of such stacks were proven. It turns out that all these stacks are special cases of the ones defined above.
One extreme case of a toric stack is when $G$ is the trivial group, in which case $\mathfrak{X}$ is just a toric variety. The other extreme case is when $G$ is all of $T$: clearly such toric stacks are Artin. Toric stacks of this form were considered before by Lafforgue \cite{Laf}. In general, a toric stack occupies a place between these two extreme cases. If $\mathfrak{X}$ is a toric Deligne-Mumford stack, then the stacky torus $\mathfrak{T}$ is of the form $T' \times \mathfrak{B}_{\mu}$, where $T'$ is a torus and $\mathfrak{B}_{\mu}$ is the classifying stack of a finite abelian group $\mu$.
In general, every generically stacky toric stack can be written in the form $\mathfrak{X} \times \mathfrak{B}_G$, where $\mathfrak{X}$ is a toric stack and $\mathfrak{B}_G$ is the classifying stack of a diagonalizable group $G$. This decomposition often reduces the study of the cohomology theories of generically stacky toric stacks to the study the cohomology theories of toric stacks and the classifying stacks of diagonalizable groups.
The coarse moduli space $\pi: \mathfrak{X} \to \overline{X}$ of a Deligne-Mumford toric stack is a simplicial toric variety whose dense torus is the moduli space of $\mathfrak{T}$. Conversely, every simplicial toric variety is the coarse moduli space of a canonically defined toric Deligne-Mumford stack ({\sl cf.} \cite[\S~4.2]{FMN}). \end{exms}
\subsection{Toric stacks via stacky fans}\label{subsection:Fan} In \cite{GSI}, Geraschenko and Satriano showed that all (generically stacky) toric stacks are obtained from {\sl stacky fans} in much the same way toric varieties are obtained from fans. They describe in detail the dictionary between toric stacks and stacky fans.
Associated to the toric variety $X$ is a fan $\Sigma$ on the lattice of 1-parameter subgroups of $T$, $L={\rm Hom}_{\rm gp}({\mathbb G}_m,T)$ (see \cite[\S 1.4]{fulton} or \cite[\S 3.1]{cls}). The surjection of tori $T\to T/G$ corresponds to the homomorphism of lattices of 1-parameter subgroups, $\beta\colon L\to N={\rm Hom}_{\rm gp}({\mathbb G}_m,T/G)$. The dual homomorphism, $\beta^*\colon \text{hom}(N,{\mathbb Z})\to \text{hom}(L,{\mathbb Z})$, is the induced homomorphism of characters. Since $T\to T/G$ is surjective, $\beta^*$ is injective, and the image of $\beta$ has finite index. Therefore, one may define a \emph{stacky fan} as a pair $(\Sigma,\beta)$, where $\Sigma$ is a fan on a lattice $L$, and $\beta\colon L\to N$ is a homomorphism to a lattice $N$ such that $\beta(L)$ has finite index in $N$. Conversely, any stacky fan $(\Sigma,\beta)$ gives rise to a toric stack as follows.
Let $X_\Sigma$ be the toric variety associated to $\Sigma$. The dual of $\beta$, $\beta^*\colon N^{\vee} \to L^{\vee}$, induces a homomorphism of tori $T_\beta\colon T_L\to T_N$, naturally identifying $\beta$ with the induced map on lattices of 1-parameter subgroups. Since $\beta(L)$ is of finite index in $N$, $\beta^*$ is injective, so $T_\beta$ is surjective. Let $G_\beta=\ker(T_\beta)$. Note that $T_L$ is the torus of $X_\Sigma$ and $G_\beta\subseteq T_L$ is a subgroup. If $(\Sigma,\beta)$ is a stacky fan, the associated toric stack $\mathfrak{X}_{\Sigma,\beta}$ is defined to be $[X_\Sigma/G_\beta]$, with the torus $T_N=T_L/G_\beta$.
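As a minimal illustration of this construction (a standard example, included for the reader's convenience), let $\Sigma$ be the fan of ${\mathbb A}^1$ on $L = {\mathbb Z}$, consisting of the cone generated by $1$ together with the zero cone, and let $\beta \colon L \to N = {\mathbb Z}$ be multiplication by $2$. Then $\beta(L) = 2{\mathbb Z}$ has finite index in $N$, so $(\Sigma, \beta)$ is a stacky fan. The induced map of tori $T_\beta \colon {\mathbb G}_m \to {\mathbb G}_m$ is $t \mapsto t^2$, whence $G_\beta = \ker(T_\beta) = \mu_2$, and the associated toric stack is $\mathfrak{X}_{\Sigma,\beta} = [{\mathbb A}^1/\mu_2]$, with $\mu_2$ acting by $x \mapsto -x$.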
A \emph{generically stacky fan} is a pair $(\Sigma,\beta)$, where $\Sigma$ is a fan on a lattice $L$, and $\beta \colon L\to N$ is a homomorphism to a finitely generated abelian group. If $(\Sigma, \beta)$ is a generically stacky fan, the associated generically stacky toric stack $\mathfrak{X}_{\Sigma,\beta}$ is defined to be $[X_\Sigma/G_\beta]$, where the action of $G_\beta$ on $X_\Sigma$ is induced by the homomorphism $G_\beta\to D(L^*)=T_L$.
One can give a more explicit description of $\mathfrak{X}_{\Sigma,\beta}$ considered above which will show that it is a generically stacky toric stack. Let $(\Sigma,\beta\colon L\to N)$ be a generically stacky fan and let $C(\beta)$ denote the complex $L \xrightarrow{\beta} N$. Let
\[
{\mathbb Z}^s\xrightarrow Q {\mathbb Z}^r\to N\to 0
\] be a presentation of $N$, and let $B\colon L\to {\mathbb Z}^r$ be a lift of $\beta$ (which exists). One defines the fan $\Sigma'$ on $L\oplus {\mathbb Z}^s$ as follows. Let $\tau$ be the cone generated by $e_1,\dots, e_s\in {\mathbb Z}^s$. For each $\sigma\in \Sigma$, let $\sigma'$ be the cone spanned by $\sigma$ and $\tau$ in $L\oplus {\mathbb Z}^s$. Let $\Sigma'$ be the fan generated by all the $\sigma'$. Corresponding to the cone $\tau$, we have the closed subvariety $Y\subseteq X_{\Sigma'}$, which is isomorphic to $X_\Sigma$ since $\Sigma$ is the \emph{star} (sometimes called the \emph{link}) of $\tau$ \cite[Proposition 3.2.7]{cls}. One defines \[ \xymatrix@R-2pc @C-1pc{
\llap{$\beta'=B\oplus Q\colon\,$}L\oplus {\mathbb Z}^s\ar[r] & {\mathbb Z}^r\\
(l,a)\ar@{|->}[r] & B(l)+Q(a).} \] Then $(\Sigma',\beta')$ is a generically stacky fan and we see that $\mathfrak{X}_{\Sigma,\beta}\cong [Y/G_{\beta'}]$. Note that $C(\beta')$ is quasi-isomorphic to $C(\beta)$, so $G_{\beta'}\cong G_\beta$.
Toric stacks and generically stacky toric stacks arise naturally, especially in the solution of certain moduli problems. Any toric variety naturally gives rise to a toric stack. In fact, it is shown in \cite[Theorem~6.1]{GSII} that if $k$ is an algebraically closed field of characteristic zero, then every Artin stack with a dense open torus substack is a toric stack under certain fairly general conditions. We refer the reader to \cite{GSI}, where many examples of toric and generically stacky toric stacks are discussed.
\vskip .3cm
{\sl In the rest of this paper, a {\sl toric stack} will always mean any generically stacky toric stack. A toric stack as in Definition ~\ref{toric.stacks.def} will be called a reduced toric stack or a toric orbifold}.
\vskip .3cm
\subsection{K-theory of quotient stacks}\label{subsection:K-thry} Let $G$ be a linear algebraic group acting on a scheme $X$. The spectrum of the K-theory of $G$-equivariant coherent sheaves (resp. vector bundles) on $X$ is denoted by $\bdG ^G(X)$ (resp. $\bdK ^G(X)$). We will let $\bdK ^G$ denote $\bdK ^G(Spec \, k)$. The direct sums of the homotopy groups of these spectra are denoted by $\bdG ^G_*(X)$ and $\bdK ^G_*(X)$. The latter is a graded ring. The natural map $\bdK ^G(X) \to \bdG ^G(X)$ is a weak equivalence if $X$ is smooth. For a quotient stack $\mathfrak{X}$ of the form $[X/G]$, one writes $\bdK ^G(X)$ and $\bdK (\mathfrak{X})$ interchangeably. The ring $\bdK ^G_0(k)$ will be denoted by $R(G)$. This is the same as the representation ring of $G$.
The functor $X \mapsto \bdG ^G(X)$ on ${\mathcal V}_G$ is covariant for proper maps and contravariant for flat maps. It also satisfies the localization sequence and the projection formula. It satisfies the homotopy invariance property in the sense that if $f: V \to X$ is a $G$-equivariant vector bundle, then the map $f^*: \bdG ^G(X) \to \bdG ^G(V)$ is a weak equivalence. The functor $X \mapsto \bdK ^G(X)$ on ${\mathcal V}_G$ is a contravariant functor with values in commutative graded rings. For any $G$-equivariant morphism $f: X \to Y$, $\bdG ^G(X)$ is a module spectrum over the ring spectrum $\bdK ^G(Y)$. In particular, $\bdG ^G_*(X)$ is an $R(G)$-module. We refer to \cite[\S~1]{Thomason1} for proofs of the above properties.
\section{Equivariant G-theory of linear schemes}\label{section:ELin} We will prove Theorem~\ref{thm:main-thm-1} as a consequence of a more general result (Theorem~\ref{thm:main-thm-1*}) on the equivariant G-theory of schemes with a group action. In this section, we study the equivariant G-theory of a certain class of schemes which we call equivariantly linear. Such schemes in the non-equivariant set-up were earlier considered by Jannsen \cite{Jan} and Totaro \cite{Totaro1}. The G-theory of such schemes in the non-equivariant set-up was studied in \cite{J01}. We end this section with a proof of Theorem~\ref{thm:main-thm-1*} for equivariantly linear schemes.
\begin{defn}\label{defn:linear} Let $G$ be a linear algebraic group over $k$ and let $X \in {\mathcal V}_G$. \begin{enumerate} \item We will say $X$ is $G$-equivariantly $0$-linear if it is either empty or isomorphic to ${\rm Spec \,}({\rm Sym}(V^*))$ where $V$ is a finite-dimensional rational representation of $G$. \item For a positive integer $n$, we will say that $X$ is $G$-equivariantly $n$-linear if there exists a family of objects $\{U, Y, Z\}$ in ${\mathcal V}_G$ such that $Z \subseteq Y$ is a $G$-invariant closed immersion with $U$ its complement, $Z$ and one of the schemes $U$ or $Y$ are $G$-equivariantly $(n-1)$-linear and $X$ is the other member of the family $\{U, Y, Z\}$. \item We will say that $X$ is {\it $G$-equivariantly linear (or simply, $G$-linear)} if it is $G$-equivariantly $n$-linear for some $n \ge 0$. \end{enumerate} \end{defn}
It is immediate from the above definition that if $G \to G'$ is a morphism of algebraic groups then every $G'$-equivariantly linear scheme is also $G$-equivariantly linear.
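For illustration (a standard example which also appears implicitly in the proofs below), the multiplicative group ${\mathbb G}_m$ with its multiplication action on itself is ${\mathbb G}_m$-equivariantly $1$-linear. Indeed, take $Y = {\mathbb A}^1$ with the scaling action, which is $0$-linear as ${\rm Spec \,}({\rm Sym}(V^*))$ for the one-dimensional standard representation $V$, and take $Z = \{0\} \cong {\rm Spec \,}(k)$, which is $0$-linear as well. Then \[ {\mathbb G}_m = {\mathbb A}^1 \setminus \{0\} \] is the open member $U$ of the family $\{U, Y, Z\}$, so ${\mathbb G}_m$ is $1$-linear by the definition.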
\begin{defn}\label{defn:T-CELL} Let $G$ be a linear algebraic group over $k$. A scheme $X \in {\mathcal V}_G$ is called {\sl $G$-equivariantly cellular} (or, $G$-cellular) if there is a filtration \[ \emptyset = X_{n+1} \subsetneq X_n \subsetneq \cdots \subsetneq X_1 \subsetneq X_0 = X \] by $G$-invariant closed subschemes such that each $X_i \setminus X_{i+1}$ is isomorphic to a rational representation $V_i$ of $G$. These representations of $G$ are called the (affine) $G$-{\em cells} of $X$. \end{defn} It is obvious that a $G$-equivariantly cellular scheme is cellular in the usual sense ({\sl cf.} \cite[Example~1.9.1]{Fulton1}).
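As a standard example (recorded only for orientation), the projective space ${\mathbb P}^n$ with the action of the diagonal torus $T = ({\mathbb G}_m)^{n+1}$ is $T$-cellular. The filtration by the $T$-invariant linear subspaces \[ \emptyset \subsetneq {\mathbb P}^0 \subsetneq {\mathbb P}^1 \subsetneq \cdots \subsetneq {\mathbb P}^{n-1} \subsetneq {\mathbb P}^n \] has successive differences ${\mathbb P}^{i} \setminus {\mathbb P}^{i-1} \cong {\mathbb A}^{i}$, and on each such cell $T$ acts linearly by the characters ${t_j}/{t_i}$ in suitable coordinates, so that each cell is a rational representation of $T$.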
Before we collect examples of equivariantly linear schemes, we state the following two elementary results which will be used throughout this paper.
\begin{lem}\label{lem:elem} Let $G$ be a diagonalizable group over $k$ and let $H \subseteq G$ be a closed subgroup. Then $H$ is also defined over $k$ and is diagonalizable. If $T$ is a split torus over $k$, then all subtori and quotients of $T$ are defined over $k$ and are split over $k$. \end{lem} \begin{proof} The first statement follows from \cite[Proposition~8.2]{Borel}. If $T$ is a split torus over $k$, then any of its subgroups is defined over $k$ and is split by the first assertion. In particular, all quotients of $T$ are defined over $k$. Furthermore, all such quotients are split over $k$ by \cite[Corollary~8.2]{Borel}. \end{proof}
\begin{lem}\label{lem:Open-orbit} Let $T$ be a split torus acting on a scheme $X$ with finitely many orbits. Then: \begin{enumerate} \item Any $T$-orbit in $X$ of minimal dimension is closed. \item Any $T$-orbit in $X$ of maximal dimension is open.
\end{enumerate} \end{lem} \begin{proof} The first assertion is well known and can be found in \cite[Proposition~1.8]{Borel}. We prove the second assertion.
Let $f:S \to X$ be the inertia group scheme over $X$ for the $T$-action and let $\gamma: X \to S$ denote the unit section. Then for a point $x \in X$, the fiber $S_x$ of the map $f$ is the stabilizer subgroup of $x$ and the dimension of $S_x$ is its dimension at the point $\gamma(x)$. For any $s \ge 0$, let $X_{\le s}$ denote the set of points $x \in X$ such that $\text{\rm dim}(S_x) \le s$. It follows from Chevalley's theorem ({\sl cf.} \cite[\S~13.1.3]{EGAIV}) that each $X_{\le s}$ is open in $X$ (see also \cite[\S~2.2]{VV}).
Let $U \subseteq X$ denote a $T$-orbit of maximal dimension (say, $d$) and let $x \in U$. Suppose $s = \text{\rm dim}(S_x) \ge 0$. Notice that all points in a $T$-orbit have the same stabilizer subgroup because $T$ is abelian. We claim that there is no point on $X$ whose stabilizer subgroup has dimension less than $s$. If there is such a point $y \in X$, then $Ty$ is a $T$-orbit of $X$ of dimension bigger than the dimension of $U$, contradicting our choice of $U$. This proves the claim.
It follows from this claim that $X_{\le s}$ is a $T$-invariant open subscheme of $X$ which is a disjoint union of its $T$-orbits such that the stabilizer subgroups of all points of $X_{\le s}$ have dimension $s$. In particular, all $T$-orbits in $X_{\le s}$ have dimension $d$. We conclude from the first assertion of the lemma that all $T$-orbits (of closed points) in $X_{\le s}$ (including $U$) are closed in $X_{\le s}$. Since there are only finitely many orbits in $X$, the same is true for $X_{\le s}$. We conclude that $X_{\le s}$ is a finite disjoint union of its closed orbits. Hence these orbits must also be open in $X_{\le s}$. In particular, $U$ is open in $X_{\le s}$ and hence in $X$. \end{proof}
\begin{remk}\label{remk:Gen-diag} The reader can verify that Lemma~\ref{lem:Open-orbit} is true for the action of any diagonalizable group. But we do not need this general case. \end{remk}
The following result yields many examples of equivariantly linear schemes.
\begin{prop}\label{prop:lin-elem} Let $T$ be a split torus over $k$ and let $T'$ be a quotient of $T$. Let $T$ act on $T'$ via the quotient map. Then the following hold. \begin{enumerate} \item $T'$ is $T$-linear. \item A toric variety with dense torus $T$ is $T$-linear. \item A $T$-cellular scheme is $T$-linear. \item If $k$ is algebraically closed, then every $T$-scheme with finitely many $T$-orbits is $T$-linear. \end{enumerate} \end{prop} \begin{proof} We first prove $(1)$. It follows from Lemma~\ref{lem:elem} that $T'$ is a split torus. Hence, by the remark following Definition~\ref{defn:linear}, it is enough to show that a split torus $T$ is $T$-linear under the multiplication action.
We can write $T = ({\mathbb G}_m)^n$ and consider ${\mathbb A}^n$ as the toric variety with the dense torus $T$ via the coordinate-wise multiplication so that the complement of $T$ is the union of the coordinate hyperplanes. Since ${\mathbb A}^n$ is $T$-linear, it suffices to show that the union of the coordinate hyperplanes is $T$-linear.
We shall prove by induction on the rank of $T$ that any union of the coordinate hyperplanes in ${\mathbb A}^n$ is $T$-linear. If $n =1$, then this is obvious. So let us assume that $n > 1$ and let $Y$ be a union of some coordinate hyperplanes in ${\mathbb A}^n$. After permuting the coordinates, we can write $Y$ as $Y^{n}_{\{1, \cdots, m\}} = H_1 \cup \cdots \cup H_m$ where
$H_i = \{(x_1, \cdots , x_n) \in {\mathbb A}^n \mid x_i = 0\}$. If $m = 1$, then $Y^{n}_{\{1\}}$ is $T$-equivariantly $0$-linear. So we may assume by induction on $m$ that $Y^{n}_{\{2, \cdots , m\}}$ is $T$-linear.
Set $U = Y^{n}_{\{1, \cdots, m\}} \setminus Y^{n}_{\{2, \cdots , m\}}$. Then $U$ is the complement of a union of hyperplanes $W^{n-1}_{\{2, \cdots ,m\}}$ in $H_1 \cong {\mathbb A}^{n-1}$. Notice that $T$ acts on $H_1$ through the product $T_1$ of its last $(n-1)$ factors. By induction on $n$, we conclude that $W^{n-1}_{\{2, \cdots ,m\}}$ is $T_1$-linear. Since $H_1$ is clearly $T_1$-linear, we conclude that $U$ is $T_1$-linear and hence $T$-linear. Thus we have concluded that both $Y^{n}_{\{2, \cdots , m\}}$ and $U$ are $T$-linear. It follows from this that $Y^{n}_{\{1, \cdots, m\}}$ is $T$-linear too.
The assertion $(2)$ easily follows from $(1)$ and an induction on the number of $T$-orbits in a toric variety. The assertion (3) is immediate from the definitions, using an induction on the length of the filtration of a $T$-cellular scheme. To prove $(4)$, let $X$ be a $T$-scheme with only finitely many $T$-orbits. It follows from Lemma~\ref{lem:Open-orbit} that $X$ has an open $T$-orbit $U$. Since $k$ is algebraically closed, such an open $T$-orbit must be isomorphic to a quotient of $T$. In particular, it is $T$-linear by the first assertion. An induction on the number of $T$-orbits implies that $X \setminus U$ is $T$-linear. We conclude that $X$ is also $T$-linear. \end{proof}
\subsubsection{Spherical varieties}\label{subsubsection:Spherical} Recall that if $G$ is a connected reductive group over $k$, then a normal variety $X \in {\mathcal V}_G$ is called {\sl spherical} if a Borel subgroup $B$ of $G$ has a dense open orbit in $X$. The spherical varieties constitute a large class of varieties with group actions, including toric varieties, flag varieties and all symmetric varieties. It is known that a spherical variety $X$ has only finitely many fixed points for the action of a maximal torus $T$ of $G$ contained in $B$.
It follows from a theorem of Bialynicki-Birula \cite{BB} (generalized to the case of non-algebraically closed fields by Hesselink \cite{Hessel}) that if $T$ is a split torus over $k$ and if $X$ is a smooth projective variety with a $T$-action whose fixed point locus $X^T$ is finite, then $X$ is $T$-equivariantly cellular. We conclude that a smooth and projective spherical variety is $T$-cellular and hence $T$-linear. We do not know if all spherical varieties are $T$-linear.
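A concrete instance (standard, and stated here only for illustration) is the flag variety of a split connected reductive group $G$ over $k$ with Borel subgroup $B$ and maximal split torus $T \subseteq B$: the Bruhat decomposition \[ G/B = \coprod_{w \in W} BwB/B, \qquad BwB/B \cong {\mathbb A}^{\ell(w)}, \] where $W$ is the Weyl group and $\ell$ the length function, exhibits $G/B$ as $T$-cellular, the filtration being given by the closed unions of cells of bounded dimension and $T$ acting linearly on each cell through roots. This is consistent with the theorem of Bialynicki-Birula since $(G/B)^T$ is finite.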
\vskip .4cm
\subsection{Equivariant G-theory of equivariantly linear schemes} \label{subsection:Equiv-linear} Recall that if a linear algebraic group $G$ acts on a scheme $X$, then the G-theory and K-theory of the quotient stack $[X/G]$ are the same as the equivariant G-theory and K-theory of $X$ for the action of $G$. We shall use this identification throughout this text without further mention.
The following result from \cite[\S~1.9]{Thomason1} will be used repeatedly in this text.
\begin{thm}\label{thm:Morita} Let $G$ be a linear algebraic group over $k$ and let $H \subseteq G$ be a closed subgroup of $G$. Then for any $X \in {\mathcal V}_H$, the map of spectra \[ \bdG ([(X \stackrel{H}{\times} G)/G]) \to \bdG ([X/H]) \] is a weak-equivalence. In particular, the map of spectra \[ \bdG ([(X \times G/H)/G]) \to \bdG ([X/H]) \] is a weak-equivalence if $X \in {\mathcal V}_G$. These are weak equivalences of ring spectra if $X$ is smooth. \end{thm}
\vskip .3cm
Recall that for a stack $\mathfrak{X}$, the K-theory spectrum $\bdK (\mathfrak{X})$ is a ring spectrum and $\bdG (\mathfrak{X})$ is a module spectrum over $\bdK (\mathfrak{X})$. In the following results, we make essential use of the derived smash products of module spectra over ring spectra. This is the derived functor of the smash product of spectra in their homotopy category. We refer to \cite{SS} (see also \cite{EKMM} and \cite[\S~3]{J01}) for basic results in this direction. In general, if $R$ is a ring spectrum and $M, N$ are module spectra over $R$, the derived smash product of $M$ and $N$ over $R$ will be denoted by $M{\overset L {\underset {R} \wedge}} N$. We shall now prove the following special case of Theorem~\ref{thm:main-thm-1*}. The proof follows a trick used in \cite[Theorem~4.1]{J01} in a different context.
\begin{prop}\label{prop:Linear-case} Let $T$ be a split torus and let $X \in {\mathcal V}_T$ be $T$-linear. Let $\phi: G \to T$ be a morphism of diagonalizable groups such that $G$ acts on $X$ via $\phi$. Then the natural map of spectra \begin{equation}\label{eqn:gen.weak.eq*0} \bdK ([{{\rm Spec \,}(k)}/G]) {\overset L {\underset {\bdK ([{{\rm Spec \,}(k)}/T])} \wedge}} \bdG ([X/T]) \to \bdG ([X/G]) \end{equation} is a weak-equivalence. \end{prop} \begin{proof} We assume that $X$ is $T$-equivariantly $n$-linear for some $n \ge 0$. We shall prove our result by an ascending induction on $n$. If $n = 0$, then $X$ is either empty or isomorphic to an affine space ${\mathbb A}^m$; in the latter case, the homotopy invariance allows us to assume that $X = {\rm Spec \,}(k)$, and the result is immediate in either case. We now assume that $n > 0$. By the definition of $T$-linearity, there are two cases to consider: \begin{enumerate} \item There exists a $T$-invariant closed subscheme $Y$ of $X$ with complement $U$ such that $Y$ and $U$ are $T$-equivariantly $(n-1)$-linear. \item There exists a $T$-scheme $Z$ which contains $X$ as a $T$-invariant open subscheme such that $Z$ and $Y = Z \setminus X$ are $T$-equivariantly $(n-1)$-linear. \end{enumerate}
In the first case, the localization fiber sequence in equivariant G-theory gives us a commutative diagram of fiber sequences in the homotopy category of spectra\footnote {For spectra, this is same as a cofiber sequence}: \[ \xymatrix@C2pc{ {\bdK ^G{\overset L {\underset {\bdK ^T} \wedge}} \bdG ([Y/T])} \ar@<1ex>[r] \ar@<-1ex>[d] & {\bdK ^G{\overset L {\underset {\bdK ^T} \wedge}} \bdG ([X/T])} \ar@<1ex>[r] \ar@<-1ex>[d] & {\bdK ^G{\overset L {\underset {\bdK ^T} \wedge}} \bdG ([U/T])} \ar@<-1ex>[d] \\ {\bdG ([Y/G])} \ar@<1ex>[r] & {\bdG ([X/G])} \ \ar@<1ex>[r] &{\bdG ([U/G])}.} \]
The left and the right vertical maps are weak equivalences by induction. We conclude that the middle vertical map is a weak equivalence too.
In the second case, we obtain as before, a commutative diagram of fiber sequences in the homotopy category of spectra: \[ \xymatrix@C2pc{ {\bdK ^G{\overset L {\underset {\bdK ^T} \wedge}} \bdG ([Y/T])} \ar@<1ex>[r] \ar@<-1ex>[d] & {\bdK ^G{\overset L {\underset {\bdK ^T} \wedge}} \bdG ([Z/T])} \ar@<1ex>[r] \ar@<-1ex>[d] & {\bdK ^G{\overset L {\underset {\bdK ^T} \wedge}} \bdG ([X/T])} \ar@<-1ex>[d] \\ {\bdG ([Y/G])} \ar@<1ex>[r] & {\bdG ([Z/G])} \ \ar@<1ex>[r] &{\bdG ([X/G])}.} \] The first two vertical maps are weak equivalences by induction and hence the last vertical map must also be a weak equivalence. This completes the proof of the proposition. \end{proof}
We end this section with the following (rather technical) result which will be used in the proof of Theorem~\ref{thm:main-thm-1*}. When $V$ is taken to be ${\rm Spec \,}(k)$, it reduces to a special case of Proposition~\ref{prop:Linear-case}.
\begin{lem}\label{lem:BC-I} Let $T$ be a split torus over $k$ and let $T'$ be a quotient of $T$. Let $T$ act on $T'$ via the quotient map and let it act trivially on an affine scheme $V$. Consider the scheme $X = V \times T'$ where $T$ acts diagonally. Let $\phi: G \to T$ be a morphism of diagonalizable groups such that $G$ acts on any $T$-scheme via $\phi$. Then the map of spectra \begin{equation}\label{eqn:BC-I0} \bdK ([{{\rm Spec \,}(k)}/G]) {\overset L {\underset {\bdK ([{{\rm Spec \,}(k)}/T])} \wedge}} \bdG ([X/T]) \to \bdG ([X/G]) \end{equation} is a weak-equivalence. \end{lem} \begin{proof} Let $H$ denote the image of $G$ in $T'$ under the composite map $G \xrightarrow{\phi} T \twoheadrightarrow T'$ and let $H' = {T'}/H$. Notice that $T'$ is a split torus by Lemma~\ref{lem:elem}. Since $T$ (and hence $G$) acts trivially on the scheme $V$, it follows that $T$ and $G$ act on $X$ via their quotients $T'$ and $H$, respectively. Since $X$ is affine and all the underlying groups are diagonalizable, it follows from \cite[Lemma~5.6]{Thomason1} that the maps of spectra \begin{equation}\label{eqn:BC-I1} \bdG ([X/{T'}]) {\overset L {\underset {\bdK ^{T'}} \wedge}} \bdK ^T \to \bdG ([X/T]) ; \end{equation} \[ \bdG ([X/{H}]) {\overset L {\underset {\bdK ^{H}} \wedge}} \bdK ^G \to \bdG ([X/G]) \] are weak equivalences. 
Using the first weak equivalence, we obtain \begin{equation}\label{eqn:BC-I2} \begin{array}{lll} \bdG ([X/T]) {\overset L {\underset {\bdK ^{T}} \wedge}} \bdK ^G & {\cong} & \left(\bdG ([X/{T'}]) {\overset L {\underset {\bdK ^{T'}} \wedge}} \bdK ^T\right) {\overset L {\underset {\bdK ^{T}} \wedge}} \bdK ^G \\ & {\cong} & \bdG ([X/{T'}]) {\overset L {\underset {\bdK ^{T'}} \wedge}} \bdK ^G \\ & {\cong} & \bdG ([X/{T'}]) {\overset L {\underset {\bdK ^{T'}} \wedge}} \left(\bdK ^H {\overset L {\underset {\bdK ^{H}} \wedge}} \bdK ^G\right) \\ & {\cong} & \left( \bdG ([X/{T'}]) {\overset L {\underset {\bdK ^{T'}} \wedge}} \bdK ^H\right) {\overset L {\underset {\bdK ^{H}} \wedge}} \bdK ^G. \end{array} \end{equation} On the other hand, we have \begin{equation}\label{eqn:BC-I3} \begin{array}{lll} \bdG ([X/{T'}]) {\overset L {\underset {\bdK ^{T'}} \wedge}} \bdK ^H & {\cong}^{1} & \bdG ([X/{T'}]) {\overset L {\underset {\bdK ^{T'}} \wedge}} \bdG ([{H'}/{T'}]) \\ & {\cong}^{2} & \bdG ([(X \times H')/{T'}])\\ & {\cong}^{3} & \bdG ([X/H]), \end{array} \end{equation} where the isomorphisms ${\cong}^{1}$ and ${\cong}^{3}$ follow from Theorem~\ref{thm:Morita}. The isomorphism ${\cong}^{2}$ follows from Propositions~\ref{prop:lin-elem} and ~\ref{prop:Kunneth-Linear-case}. Combining ~\eqref{eqn:BC-I1}, ~\eqref{eqn:BC-I2} and ~\eqref{eqn:BC-I3}, we get the weak equivalences \[ \bdG ([X/T]) {\overset L {\underset {\bdK ^{T}} \wedge}} \bdK ^G \cong \bdG ([X/H]) {\overset L {\underset {\bdK ^{H}} \wedge}} \bdK ^G \cong \bdG ([X/G]) \] and this proves the lemma. \end{proof}
\section{G-theory of general toric stacks}\label{section:Gen} This section is devoted to the determination of the G-theory of a general (generically stacky) toric stack. We prove our main results in a much more general set-up where the underlying scheme with a $T$-action need not be a toric variety.
Our first result is a spectral sequence that computes the $G$-equivariant G-theory of a $T$-scheme $X$ in terms of its $T$-equivariant G-theory and the representation ring of $G$ whenever there is a morphism of diagonalizable groups $\phi: G \to T$. When the underlying scheme is assumed to be smooth, these conclusions may be stated in terms of K-theory instead of G-theory.
This result specializes to the case of all (generically stacky) toric stacks when $X$ is assumed to be a toric variety. We conclude this section with an explicit presentation of the Grothendieck K-theory ring of a smooth toric stack which may not necessarily be Deligne-Mumford.
\vskip .3cm
We now prove the following main result of this section and derive its consequences.
\begin{thm}\label{thm:main-thm-1*} Let $T$ be a split torus acting on a scheme $X$ and let $\phi: G \to T$ be a morphism of diagonalizable groups such that $G$ acts on $X$ via $\phi$. Then the natural map of spectra \begin{equation}\label{eqn:gen.weak.eq*0*} \bdK ([{{\rm Spec \,}(k)}/G]) {\overset L {\underset {\bdK ([{{\rm Spec \,}(k)}/T])} \wedge}} \bdG ([X/T]) \to \bdG ([X/G]) \end{equation} is a weak-equivalence. In particular, one obtains a spectral sequence: \begin{equation}\label{eqn:gen.weak.eq*1} E^2_{s,t} = {{\operatorname{Tor}}}_{s,t}^{\bdK ^T_*(k)}(\bdK ^G_*(k), \bdG _*([X/T])) \Rightarrow \bdG _{s+t}([X/G]). \end{equation} \end{thm} \begin{proof} We shall prove the theorem by the noetherian induction on $T$-schemes. The statement of the theorem is obvious if $X$ is the empty scheme so that both sides of ~\eqref{eqn:gen.weak.eq*0*} are contractible. Suppose $X$ is any $T$-scheme such that ~\eqref{eqn:gen.weak.eq*0*} holds when $X$ is replaced by all its proper $T$-invariant closed subschemes. We show that ~\eqref{eqn:gen.weak.eq*0*} holds for $X$. This will prove the theorem.
By Thomason's generic slice theorem \cite[Proposition~4.10]{Thomason2}, there exists a $T$-invariant dense open subset $U \subseteq X$ which is affine. Moreover, $T$ acts on $U$ via its quotient $T'$ which in turn acts freely on $U$ with affine geometric quotient $U/T$ such that there is a $T$-equivariant isomorphism $U \cong (U/T) \times T'$. Here, $T$ acts trivially on $U/T$, via the quotient map on $T'$ and diagonally on $U$. The weak equivalence of ~\eqref{eqn:gen.weak.eq*0*} holds for $U$ by Lemma~\ref{lem:BC-I}.
We now set $Y = X \setminus U$. Then $Y$ is a proper $T$-invariant closed subscheme of $X$. The localization sequence induces the commutative diagram of the fiber sequences in the homotopy category of spectra: \[ \xymatrix@C2pc{ {\bdK ^G{\overset L {\underset {\bdK ^T} \wedge}} \bdG ([Y/T])} \ar@<1ex>[r] \ar@<-1ex>[d] & {\bdK ^G{\overset L {\underset {\bdK ^T} \wedge}} \bdG ([X/T])} \ar@<1ex>[r] \ar@<-1ex>[d] & {\bdK ^G{\overset L {\underset {\bdK ^T} \wedge}} \bdG ([U/T])} \ar@<-1ex>[d] \\ {\bdG ([Y/G])} \ar@<1ex>[r] & {\bdG ([X/G])} \ \ar@<1ex>[r] &{\bdG ([U/G])}.} \]
We have shown above that the right vertical map is a weak equivalence. The left vertical map is a weak equivalence by the noetherian induction. We conclude that the middle vertical map is a weak equivalence too.
The existence of the spectral sequence now follows along standard lines (see for example, \cite[Theorem IV.4.1]{EKMM}). \end{proof}
\vskip .3cm
{\bf{Proof of Theorem~\ref{thm:main-thm-1}:}} To obtain the spectral sequence ~\eqref{eqn:gen.weak.eq1}, it is enough to identify this spectral sequence with the one in ~\eqref{eqn:gen.weak.eq*1}.
To see this, we recall from \cite[Lemma~5.6]{Thomason1} that the maps $R(T){\underset {{\mathbb Z}} \otimes} \bdK _*(k) \to \bdK _*([{{\rm Spec \,}(k)}/T])$ and $R(G){\underset {{\mathbb Z}} \otimes} \bdK _*(k) \to \bdK _*([{{\rm Spec \,}(k)}/G])$ are ring isomorphisms. Since $R(T)$ and $R(G)$ are flat ${\mathbb Z}$-modules, these isomorphisms can be written as \begin{equation}\label{eqn:main-thm-1*0} R(T) {\overset L {\underset {{\mathbb Z}} \otimes}} \bdK _*(k) \xrightarrow{\cong} \bdK _*([{{\rm Spec \,}(k)}/T]) \ {\rm and} \ \ R(G) {\overset L {\underset {{\mathbb Z}} \otimes}} \bdK _*(k) \xrightarrow{\cong} \bdK _*([{{\rm Spec \,}(k)}/G]), \end{equation} where ${\overset L \otimes}$ denotes the derived tensor product.
Let $M^{\bullet} \xrightarrow{\sim} R(G)$ be a flat resolution of $R(G)$ as an $R(T)$-module. Since $R(T)$ is a flat ${\mathbb Z}$-module, we see that $M^{\bullet} \xrightarrow{\sim} R(G)$ is a flat resolution of $R(G)$ also as a ${\mathbb Z}$-module. In particular, we obtain
\begin{equation}\label{eqn:main-thm-1*1} \begin{array}{lll} \bdK ^G_*(k) {\overset L {\underset {\bdK ^T_*(k)} \otimes}} \bdG _*([X/T]) & \cong & \left(R(G) {\overset L {\underset {{\mathbb Z}} \otimes}} \bdK _*(k) \right) {\overset L {\underset {R(T) {\underset {{\mathbb Z}}\otimes} \bdK _*(k)} \otimes}} \bdG _*([X/T]) \\ & \cong & \left(M ^{\bullet}{\overset L {\underset {{\mathbb Z}} \otimes}} \bdK _*(k) \right) {\overset L {\underset {R(T) {\underset {{\mathbb Z}}\otimes} \bdK _*(k)} \otimes}} \bdG _*([X/T]) \\ & {\cong}^{1} & \left(M^{\bullet} {\underset {{\mathbb Z}}\otimes} \bdK _*(k) \right) {\overset L {\underset{R(T) {\underset {{\mathbb Z}}\otimes} \bdK _*(k)} \otimes}} \bdG _*([X/T]) \\ & {\cong}^{2} & \left(M ^{\bullet} {\underset {{\mathbb Z}}\otimes} \bdK _*(k) \right) {\underset {R(T) {\underset {{\mathbb Z}}\otimes} \bdK _*(k)} \otimes} \bdG _*([X/T]) \\ & {\cong} & M ^{\bullet} {\underset {R(T)} \otimes} \left(R(T) {\underset {{\mathbb Z}} \otimes} \bdK _*(k)\right) {\underset {R(T) {\underset {{\mathbb Z}}\otimes} \bdK _*(k)} \otimes} \bdG _*([X/T]) \\ & {\cong} & M ^{\bullet} {\underset {R(T)} \otimes} \bdG _*([X/T]) \\ & {\cong}^{3} & M ^{\bullet}{\overset L {\underset {R(T)} \otimes}} \bdG _*([X/T]) \\ & {\cong} & R(G) {\overset L {\underset {R(T)} \otimes}} \bdG _*([X/T]), \end{array} \end{equation} where the isomorphism ${\cong}^{1}$ follows because $M^{\bullet}$ is a complex of flat ${\mathbb Z}$-modules, ${\cong}^{2}$ follows because $M^{\bullet} {\underset {{\mathbb Z}}\otimes} \bdK _*(k)$ is a complex of flat $R(T) {\underset {{\mathbb Z}}\otimes} \bdK _*(k)$-modules and the isomorphism ${\cong}^{3}$ follows because $M^{\bullet}$ is a complex of flat $R(T)$-modules. Taking the homology groups on both sides, we obtain \[ {{\operatorname{Tor}}}_{s,t}^{\bdK ^T_*(k)}(\bdK ^G_*(k), \bdG _*([X/T])) \cong {{\operatorname{Tor}}}^{R(T)}_{s,t}(R(G), \bdG _*([X/T])) \] which yields the spectral sequence ~\eqref{eqn:gen.weak.eq1}.
The isomorphism of the edge map $\bdG _0([X/T]) {\underset {R(T)} \otimes} R(G) \to \bdG _0([X/G])$ follows immediately from ~\eqref{eqn:gen.weak.eq1} and the fact that the equivariant G-theory spectra appearing in Theorem ~\ref{thm:main-thm-1} are all connected (have no negative homotopy groups).
Let us now assume that $X$ is a smooth toric variety with dense torus $T$ such that $\bdK _0([X/T])$ is a projective $R(T)$-module. In this case, we can identify the G-theory and the K-theory. To show the degeneration of the spectral sequence ~\eqref{eqn:gen.weak.eq1}, it suffices to show that the map \begin{equation}\label{eqn:Degen1} \bdG _*([X/T]) {\underset {R(T)} \otimes} R(G) \to \bdG _*([X/T]) {\overset L {\underset {R(T)} \otimes}} R(G) \end{equation} is an isomorphism. However, we have \[ \begin{array}{lll} \bdG _*([X/T]) {\overset L {\underset {R(T)} \otimes}} R(G) & {\cong}^{0} & \left(\bdK ^T_*(k) {\underset {R(T)} \otimes} \bdG _0([X/T])\right) {\overset L {\underset {R(T)} \otimes}} R(G) \\
& {\cong}^{1} & \left(\bdK ^T_*(k) {\overset L {\underset {R(T)} \otimes}} \bdG _0([X/T])\right) {\overset L {\underset {R(T)} \otimes}} R(G) \\ & {\cong}^{2} & \left(\bdK _*(k) {\overset L {\underset {{\mathbb Z}} \otimes}} R(T)\right) {\overset L {\underset {R(T)} \otimes}} \left(\bdG _0([X/T]) {\overset L {\underset {R(T)} \otimes}} R(G)\right) \\ & {\cong}^{3} & \left(\bdK _*(k) {\overset L {\underset {{\mathbb Z}} \otimes}} R(T)\right) {\overset L {\underset {R(T)} \otimes}} \left(\bdG _0([X/T]) {\underset {R(T)} \otimes} R(G)\right) \\ & {\cong}^{4} & \bdK _*(k) {\overset L {\underset {{\mathbb Z}} \otimes}} \left(\bdG _0([X/T]) {\underset {R(T)} \otimes} R(G)\right) \\ & {\cong}^{5} & \bdK _*(k) {\underset {{\mathbb Z}} \otimes} \left(\bdG _0([X/T]) {\underset {R(T)} \otimes} R(G)\right) \\ & {\cong}^{6} & \left(\bdK _*(k) {\underset {{\mathbb Z}} \otimes} \bdG _0([X/T])\right) {\underset {R(T)} \otimes} R(G) \\ & {\cong}^{7} & \bdG _*([X/T]) {\underset {R(T)} \otimes} R(G). \end{array} \] The isomorphism ${\cong}^{0}$ follows from \cite[Proposition~6.4]{VV} in general
and also from Theorem~\ref{thm:main-thm-2} when $X$ is projective. The isomorphism ${\cong}^{1}$ follows because $\bdG _0([X/T])$ is a projective $R(T)$-module. The isomorphism ${\cong}^{2}$ follows from \cite[Lemma~5.6]{Thomason1} because $R(T)$ is a flat ${\mathbb Z}$-module. The isomorphism ${\cong}^{3}$ follows again from the projectivity of $\bdG _0([X/T])$ as an $R(T)$-module. The isomorphisms ${\cong}^{4}$ and ${\cong}^{6}$ are the associativity of the ordinary and derived tensor products. The isomorphism ${\cong}^{5}$ follows because $R(G)$ is a free ${\mathbb Z}$-module and $\bdG _0([X/T]) {\underset {R(T)} \otimes} R(G)$ is a projective $R(G)$-module and hence is flat as a ${\mathbb Z}$-module. The isomorphism ${\cong}^{7}$ follows again from \cite[Proposition~6.4]{VV} in general and also from Theorem~\ref{thm:main-thm-2} when $X$ is projective. This proves ~\eqref{eqn:Degen1}. The projectivity of $\bdG _0([X/T])$ as an $R(T)$-module when $X$ is a smooth and projective toric variety is shown in \cite[Proposition~6.9]{VV} (see also Lemma~\ref{lem:linear-P}). The proof of Theorem~\ref{thm:main-thm-1} is now complete. $\hspace*{14.4cm} \hfil\square$
\begin{remk}\label{remk:SS} The spectral sequence ~\eqref{eqn:gen.weak.eq1} is basically an Eilenberg-Moore type spectral sequence. A spectral sequence similar to the one in ~\eqref{eqn:gen.weak.eq1} was constructed by Merkurjev \cite{Merk} in the special case when $G$ is the trivial group. The construction of that spectral sequence is considerably more involved. This special case ($G = \{e\}$) of the above construction yields a completely different and simpler proof of Merkurjev's theorem in the setting of schemes with the action of split tori. \end{remk}
\begin{remk}\label{remk:Baggio} It was shown by Baggio \cite{Bag} that there are examples of non-projective smooth toric varieties $X$ such that $\bdG _0([X/T])$ is a projective $R(T)$-module. This shows that there are smooth non-projective toric varieties for which the spectral sequence in Theorem ~\ref{thm:main-thm-1} degenerates. In all these cases, one obtains a complete description of the K-theory of the toric stack $[X/G]$. We shall see in Section~\ref{subsection:WPS} that there are examples where the spectral sequence of Theorem ~\ref{thm:main-thm-1} degenerates even if $\bdG _0([X/T])$ is not a projective $R(T)$-module.
\end{remk}
\subsection{Grothendieck group of toric stacks}\label{subsection:BHR} In \cite{BH}, Borisov and Horja computed the Grothendieck $K$-theory ring $\bdK _0([X/G])$ when $[X/G]$ is a smooth toric Deligne-Mumford stack. Recall from \S~\ref{section:T-stacks} that the dense stacky torus of a Deligne-Mumford stack is of the form $T' \times \mathfrak{B}_{\mu}$ where $T'$ is a torus and $\mu$ is a finite abelian group. The following consequence of Theorem~\ref{thm:main-thm-1} generalizes the result of \cite{BH} to the case of all smooth toric stacks, not necessarily Deligne-Mumford. Even in the Deligne-Mumford case, our approach yields a simpler proof.
\begin{thm}\label{thm:BH} Let $\mathfrak{X} = [X/G]$ be a smooth and reduced toric stack associated to the data $\underline{X} = (X, G \xrightarrow{\phi} T)$. Let $\Delta$ be the fan defining $X$ and let $d$ be the number of rays in $\Delta$. Let $I^G_{\Delta}$ denote the ideal of the Laurent polynomial algebra ${\mathbb Z}[t^{\pm 1}_1, \cdots , t^{\pm 1}_d]$ generated by the relations: \begin{enumerate} \item
$(t_{j_1}-1) \cdots (t_{j_l}-1), \ 1 \le j_p \le d$
such that the rays $\rho_{j_1}, \cdots , \rho_{j_l}$ do not span a cone of $\Delta$. \item
$\left(\stackrel{d}{\underset{j = 1}\prod} (t_j)^{<-\chi, v_j>}\right) - 1, \ \chi \in (T/G)^{\vee}$.
\end{enumerate} Then there is a ring isomorphism \begin{equation}\label{eqn:BH0} \phi: \frac{{\mathbb Z}[t^{\pm 1}_1, \cdots , t^{\pm 1}_d]}{I^G_{\Delta}} \xrightarrow{\cong} \bdK _0(\mathfrak{X}). \end{equation} \end{thm} \begin{proof} It follows from Theorem~\ref{thm:main-thm-1} that the map $\bdK _0([X/T]) {\underset {R(T)} \otimes} R(G) \xrightarrow{\cong} \bdK _0([X/G])$ is a ring isomorphism. Since $G$ is a diagonalizable subgroup of $T$ (because $[X/G]$ is reduced), the ring $R(G)$ is a quotient of $R(T)$ by the ideal
$J^G_{\Delta} = \left(\chi - 1, \chi \in (T/G)^{\vee}\right)$ ({\sl cf.} Lemma~\ref{lem:Groupring}). This implies that \begin{equation}\label{eqn:BH1} \bdK _0([X/G]) \cong \frac{\bdK _0([X/T])}{J^G_{\Delta}\bdK _0([X/T])}. \end{equation}
If we let $\Delta(1) = \{\rho_1, \cdots , \rho_d\}$, then for each $1 \le j \le d$, there is a unique $T$-equivariant line bundle $L_j$ on $X$ which has a $T$-equivariant section $s_{j} : X \to L_{j}$ and whose zero locus is the orbit closure $V_j = \overline{O_{\rho_j}}$. Then every character $\chi \in T^{\vee}$ acts on $\bdK _0([X/T])$ by multiplication with the element $(\stackrel{d}{\underset{j = 1}\prod} ([L_{j}])^{<\chi, v_j>})$ ({\sl cf.} \cite[Proposition~4.3]{SU}). We conclude that there is a ring isomorphism \begin{equation}\label{eqn:BH2} \frac{\bdK _0([X/T])} {\left(\stackrel{d}{\underset{j = 1}\prod} ([L^{\vee}_{j}])^{<-\chi, v_j>} - 1, \ \chi \in (T/G)^{\vee} \right)} \xrightarrow{\cong} \bdK _0([X/G]). \end{equation}
If $I^T_{\Delta}$ denotes the ideal of ${\mathbb Z}[t^{\pm 1}_1, \cdots , t^{\pm 1}_d]$ generated by the relations (1) above, then it follows from \cite[Theorem~6.4]{VV} that there is a ring isomorphism \begin{equation}\label{eqn:BH3} \frac{{\mathbb Z}[t^{\pm 1}_1, \cdots , t^{\pm 1}_d]}{I^T_{\Delta}} \xrightarrow{\cong} \bdK _0([X/T]). \end{equation}
Setting $\phi(t_j) = [L^{\vee}_{j}]$,
we obtain the isomorphism ~\eqref{eqn:BH0} by combining ~\eqref{eqn:BH2} and ~\eqref{eqn:BH3}. \end{proof}
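To illustrate Theorem~\ref{thm:BH} in the simplest case (the following computation is classical and serves only as a consistency check), let $X = {\mathbb P}^1$ whose fan $\Delta$ in ${\mathbb Z}$ has rays $\rho_1, \rho_2$ generated by $v_1 = 1$ and $v_2 = -1$. The two rays together do not span a cone of $\Delta$, so the relations (1) reduce to $(t_1 - 1)(t_2 - 1)$. If $G = T$, the relations (2) are empty and we obtain \[ \bdK _0([{\mathbb P}^1/T]) \cong \frac{{\mathbb Z}[t^{\pm 1}_1, t^{\pm 1}_2]}{\left((t_1-1)(t_2-1)\right)}. \] If $G$ is trivial, then $(T/G)^{\vee} = T^{\vee} \cong {\mathbb Z}$ and the generator $\chi = 1$ yields the relation $t^{-1}_1 t_2 - 1$, so that $t_1 = t_2 = t$ and \[ \bdK _0({\mathbb P}^1) \cong \frac{{\mathbb Z}[t^{\pm 1}]}{\left((t-1)^2\right)} \cong {\mathbb Z}^2, \] recovering the classical description of $\bdK _0({\mathbb P}^1)$.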
\begin{remk}\label{remk:non-red} If $[X/G]$ is not a reduced stack and there is an exact sequence \[ 0 \to H \to G \to F \to 0 \] where $F = {\rm Im}(\phi)$, then the stack $[X/G]$ is isomorphic to $[X/F] \times \mathfrak{B}_H$. In this case, one obtains an isomorphism $\bdK _*([X/G]) \cong \bdK _*([X/F]) {\underset {R(F)} \otimes} R(G)$ ({\sl cf.} \cite[Lemma~5.6]{Thomason1}). In particular, if $H$ is a torus, one obtains $\bdK _*([X/G]) \cong \bdK _*([X/F]) {\underset {{\mathbb Z}} \otimes} R(H)$. Thus, we see that the calculation of the K-theory of a (generically stacky) toric stack can be easily reduced to the case of reduced stacks. \end{remk}
\section{A K{\"u}nneth formula and its consequences} \label{section:Kunneth} Our goal in this section is to prove Theorem~\ref{thm:main-thm-2} and give applications. We shall deduce this theorem from a K{\"u}nneth spectral sequence for the equivariant K-theory of actions of diagonalizable groups. A similar spectral sequence for topological K-theory was constructed long ago by Hodgkin \cite{Hodgkin} and Snaith \cite{Snaith}. A spectral sequence of this kind in the non-equivariant setting was constructed by the first author in \cite[Theorem~4.1]{J01}.
\subsection{K{\"u}nneth formula}\label{subsection:KFormula} Suppose that $X$ and $X'$ are schemes acted upon by a linear algebraic group $G$. In this case, the flatness of $X$ and $X'$ over $k$ implies that the spectra $\bdG ([X/G])$ and $\bdG ([{X'}/G])$ are module spectra over the ring spectrum $\bdK ([{{\rm Spec \,}(k)}/G])$. This flatness also ensures that the external tensor product of coherent ${\mathcal O}$-modules induces a pairing $\bdG ([X/G]) \wedge \bdG ([{X'}/G]) \to \bdG ([(X{ \times}{X'})/G])$, where the action of $G$ on $X{\times}{X'}$ is the diagonal action. This pairing is compatible with the structure of the above spectra as module spectra over the ring spectrum $\bdK ([{{\rm Spec \,}(k)}/G])$ so that one obtains the induced pairing:
\[ p_1^* \wedge p_2^*: \bdG ([X/G]) {\overset L {\underset {\bdK ([{{\rm Spec \,}(k)}/ G])} \wedge}} \bdG ([{X'}/G]) \to \bdG ([(X \times {X'})/ G]). \] This is a map of ring spectra if $X$ and $X'$ are smooth.
\begin{prop}\label{prop:Kunneth-Linear-case} Let $T$ be a split torus and let $X, X'$ be in ${\mathcal V}_T$ such that $X$ is $T$-linear. Let $\phi: G \to T$ be a morphism of diagonalizable groups such that $G$ acts on $X$ and $X'$ via $\phi$. Then the natural map of spectra \begin{equation}\label{eqn:gen.weak.eq*0} \bdG ([X/G]) {\overset L {\underset {\bdK ([{{\rm Spec \,}(k)}/G])} \wedge}} \bdG ([X'/G]) \to \bdG ([(X\times X')/G]) \end{equation} is a weak-equivalence.
In particular, there exists a first quadrant spectral sequence
\begin{equation}\label{eqn:MT2*1} E^2_{s,t} = {{\operatorname{Tor}}}^{\bdK ^G_*(k)}_{s,t}(\bdG _*([X/G]), \bdG _*([{X'}/G])) \Rightarrow \bdG _{s+t}([(X \times X')/G]). \end{equation} \end{prop} \begin{proof} We assume that $X$ is $T$-equivariantly $n$-linear for some $n \ge 0$. This proposition is proved by an ascending induction on $n$, along the same lines as the proof of Proposition~\ref{prop:Linear-case}. We sketch the argument.
If $n = 0$, then $X$ is an affine space and hence, by the homotopy invariance, we can assume that $X = {\rm Spec \,}(k)$; the result is immediate in this case. We now assume that $n > 0$. By the definition of $T$-linearity, there are two cases to consider: \begin{enumerate} \item There exists a $T$-invariant closed subscheme $Y$ of $X$ with complement $U$ such that $Y$ and $U$ are $T$-equivariantly $(n-1)$-linear. \item There exists a $T$-scheme $Z$ which contains $X$ as a $T$-invariant open subscheme such that $Z$ and $Y = Z \setminus X$ are $T$-equivariantly $(n-1)$-linear. \end{enumerate}
In the first case, the localization fiber sequence in equivariant G-theory gives us a commutative diagram of fiber sequences in the homotopy category of spectra: \[ \xymatrix@C.8pc{ {\bdG ([X'/G]){\overset L {\underset {\bdK ^G} \wedge}} \bdG ([Y/G])} \ar@<1ex>[r] \ar@<-1ex>[d] & {\bdG ([X'/G]){\overset L {\underset {\bdK ^G} \wedge}} \bdG ([X/G])} \ar@<1ex>[r] \ar@<-1ex>[d] & {\bdG ([X'/G]){\overset L {\underset {\bdK ^G} \wedge}} \bdG ([U/G])} \ar@<-1ex>[d] \\ {\bdG ([(Y \times X')/G])} \ar@<1ex>[r] & {\bdG ([(X \times X')/G])} \ \ar@<1ex>[r] &{\bdG ([(U \times X')/G])}.} \]
The left and the right vertical maps are weak equivalences by induction on $n$. We conclude that the middle vertical map is a weak equivalence. The second case is proved in the same way, where we now use induction on $Y$ and $Z$ (see the proof of Proposition~\ref{prop:Linear-case}).
The existence of the spectral sequence now follows along standard lines (see for example, \cite[Theorem IV.4.1]{EKMM}). \end{proof}
\begin{comment} In the second case, we obtain as before, a commutative diagram of fiber sequences in the homotopy category of spectra \[ \xymatrix@C.8pc{ {\bdG ([X'/G]){\overset L {\underset {\bdK ^G} \wedge}} \bdG ([Y/G])} \ar@<1ex>[r] \ar@<-1ex>[d] & {\bdG ([X'/G]){\overset L {\underset {\bdK ^G} \wedge}} \bdG ([Z/G])} \ar@<1ex>[r] \ar@<-1ex>[d] & {\bdG ([X'/G]){\overset L {\underset {\bdK ^G} \wedge}} \bdG ([X/G])} \ar@<-1ex>[d] \\ {\bdG ([(Y \times X')/G])} \ar@<1ex>[r] & {\bdG ([(X \times X')/G])} \ \ar@<1ex>[r] &{\bdG ([(Z \times X')/G])}.} \] The first two vertical maps are weak equivalences by induction on $n$ and hence the last vertical map must also be a weak equivalence. \end{comment}
\begin{remk}\label{remk:subtorus-case} As an application of Proposition~\ref{prop:Kunneth-Linear-case}, one can obtain another proof of the special case of the spectral sequence ~\eqref{eqn:gen.weak.eq1} when $G$ is a closed subgroup of $T$. This is done by taking $G = T$, $X' = T/G$ in ~\eqref{eqn:MT2*1} and using the Morita weak equivalences $\bdG ([{X'}/T]) \cong \bdG ([{\rm Spec \,}(k)/G])$ and $\bdG ([(X \times X')/T]) \cong \bdG ([X/G])$. Notice that $X' = T/G$ is $T$-linear by Proposition~\ref{prop:lin-elem}. \end{remk}
\begin{cor}[K{\"u}nneth decomposition]\label{cor:Kunneth-dec} Let $T$ be a split torus over $k$, let $X$ be a $T$-linear scheme, and let $\phi: G \to T$ be a morphism of diagonalizable groups such that $G$ acts on $X$ via $\phi$. Then the class of the diagonal $[\Delta] \in \bdG _0([(X \times X)/G])$ admits a strong K{\"u}nneth decomposition, i.e., it may be written as $\stackrel{n}{\underset{i =1}\sum} p_1^*(\alpha_i) \otimes p_2^*(\beta_i)$, where $\alpha_i, \beta_i \in \bdG _0([X/G])$. \end{cor} \begin{proof} The spectral sequence of Proposition~\ref{prop:Kunneth-Linear-case} shows in general that \begin{equation}\label{eqn:Ku-0} \bdG _0([(X \times X') / G]) \cong \bdG _0([X/G]){\underset {R(G)} \otimes} \bdG _0([{X'}/G]). \end{equation} The K{\"u}nneth decomposition now follows by taking $X = X'$. \end{proof}
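To illustrate the shape of such a decomposition, consider the simplest non-equivariant instance (with $G$ trivial; this example is a standard one and is not taken from the cited references): for $X = {\mathbb P}^1$, the diagonal is the zero scheme of a section of ${\mathcal O}(1) \boxtimes {\mathcal O}(1)$, and the resulting Koszul resolution gives an explicit decomposition.

```latex
% Resolution of the diagonal on P^1 x P^1:
%   0 -> O(-1) \boxtimes O(-1) -> O \boxtimes O -> O_Delta -> 0,
% which yields, in G_0(P^1 x P^1),
\[
[\Delta] \;=\; p_1^*[\mathcal{O}] \otimes p_2^*[\mathcal{O}]
\;-\; p_1^*[\mathcal{O}(-1)] \otimes p_2^*[\mathcal{O}(-1)],
\]
% so one may take alpha_1 = [O], beta_1 = [O],
% alpha_2 = -[O(-1)], beta_2 = [O(-1)] in the corollary.
```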
\vskip .3cm
{\bf{Proof of Theorem~\ref{thm:main-thm-2}:}} Let $X$ be a smooth and projective $T$-linear scheme. Since the group $G$ is diagonalizable, we apply \cite[Lemma~5.6]{Thomason1} to obtain the isomorphism: \begin{equation}\label{eqn:main-2*0} R(G) {\underset {{\mathbb Z}}\otimes} \bdK _*(k) \xrightarrow{\cong} \bdK ^G_*(k) \end{equation} and this provides the first isomorphism of ~\eqref{eqn:main-2-0}. Since $X$ is smooth, we can identify $\bdG _*([X/G])$ with $\bdK _*([X/G])$.
Let $[x] \in \bdK _*([X/G])$. Then $[x]= p_{1*} (\Delta \circ p_2^*([x]))$. Now we use the K{\"u}nneth decomposition for $\Delta$ obtained in Corollary~\ref{cor:Kunneth-dec} and the projection formula (since $X$ is projective) to identify the last term with $\stackrel{n}{\underset{i =1}\sum} \alpha _i \circ p_{1*}p_2^*(\beta _i\circ [x])$. The Cartesian square \[ \xymatrix@C2pc{ X\times X \ar[r]^>>>>>>{p_2} \ar[d]_{p_1} & X \ar[d]^{p'_1} \\ X \ar[r]_<<<<<<{p'_2} & {\rm Spec \,}(k)} \] and the flat base-change for the equivariant G-theory show that $p_{1*}( p_2^*(\beta_i \circ [x]))$ identifies with ${p_{2}'}^*{p_1'}_*(\beta _i \circ [x])$ so that \begin{equation}\label{eqn:surj.diag} [x]= \stackrel{n}{\underset{i = 1}\sum} \alpha _i \circ {p_2'}^*({p_1'}_*(\beta _i \circ [x])). \end{equation} The class ${p_1'}_*(\beta _i \circ [x])$ lies in $\bdG _*([{{\rm Spec \,}(k)}/G])$. It follows that the classes $\{\alpha_i \}$ generate $\bdK _*([X/G])$ as a module over $\bdK ^G_*(k)$. This shows that the map in question is surjective.
Next we prove the injectivity of the map $\rho$. The key is the following diagram: \begin{equation}\label{eqn:inj.dig} \xymatrix@C2pc{ \bdK _*([X/G]) \ar@<1ex>[dr]_{\mu} &
{\bdK _0([X/G]) {\underset {\bdK ^G_0(k)} \otimes} \bdK ^G_*(k)} \ar[l]_<<<<<<{\rho} \ar@<1ex>^{\alpha}[d] \\ & {{{\rm Hom}}_{\bdK ^G_0(k)}(\bdK _0([X/G]), \bdK ^G_*(k))}} \end{equation} where $\alpha(x \otimes y)$, for $x \in \bdK _0([X/G])$ and $y \in \bdK ^G_*(k)$, is the map $x' \mapsto f_*(x' \circ x) \circ y$, and $\mu(x)$, for $x \in \bdK _*([X/G])$, is the map $x' \mapsto f_*(x' \circ x)$. Here, $f$ denotes the projection map $X \to {\rm Spec \,}(k)$ and $x' \circ x$ denotes the product in the ring $\bdK _*([X/G])$. The commutativity of the above diagram is an immediate consequence of the projection formula: observe that $\rho(x \otimes y) = x \circ f^*(y)$. Therefore, to show that $\rho$ is injective, it suffices to show that the map $\alpha$ is injective. For this, we define a map $\beta$ to be a splitting for $\alpha$ as follows.
If $\phi \in {{\rm Hom}}_{\bdK ^G_0(k)}(\bdK _0([X/G]), \bdK ^G_*(k))$, we let $\beta(\phi) = \stackrel{n}{\underset{i =1}\sum} \alpha_{i} \otimes (\phi(\beta_{i}))$. Observe that \[ \begin{array}{lll} \beta (\alpha (x \otimes y)) & = & \beta \bigl(x' \mapsto f_*(x' \circ x) \circ y\bigr) \\ & = & (\stackrel{n}{\underset{i =1}\sum} \alpha_{i} \otimes f_*(\beta_{i} \circ x)) \circ y. \end{array} \]
We next observe that $f_*( \beta_{i} \circ x) \in \bdK ^G_0(k)$, so that we may write the last term as $(\stackrel{n}{\underset{i =1}\sum} \alpha_{i} \circ f^*f_*( \beta_{i} \circ x)) \circ y$. By ~\eqref{eqn:surj.diag}, the last term equals $x \circ y$. This proves that $\alpha$ is injective and hence so is $\rho$. This completes the proof. $\hspace*{3cm} \hfil \square$
\vskip .4cm
The following result generalizes ~\eqref{eqn:gen.weak.eq2} to a larger class of schemes.
\begin{cor}\label{cor:Base-change} Let $T$ be a split torus over $k$ and let $X$ be a smooth and projective $T$-linear scheme. Let $\phi: G \to T$ be a morphism of diagonalizable groups such that $G$ acts on $X$ via $\phi$. Then the map \[ \bdK _*([X/T]) {\underset {R(T)} \otimes} R(G) \to \bdK _*([X/G]) \] is an isomorphism. In particular, $\bdK _0([X/G])$ is a free $R(G)$-module $($and hence a free ${\mathbb Z}$-module$)$ if $X$ is $T$-cellular. \end{cor} \begin{proof} To prove the first part of the corollary, we trace through the sequence of isomorphisms: \[ \begin{array}{lll} \bdK _*([X/T]) {\underset {R(T)} \otimes} R(G) & {\cong} & \left(\bdK ^T_*(k) {\underset {R(T)} \otimes} \bdK _0([X/T])\right) {\underset {R(T)} \otimes} R(G) \\ & {\cong} & \left(\bdK _*(k) {\underset {{\mathbb Z}} \otimes} R(T)\right) {\underset {R(T)} \otimes} \left( \bdK _0([X/T]) {\underset {R(T)} \otimes} R(G)\right) \\ & {\cong}^{\dag} & \bdK _*(k) {\underset {{\mathbb Z}} \otimes} \bdK _0([X/G]) \\ & {\cong} & \bdK _*([X/G]). \end{array} \]
The first and the last isomorphisms in this sequence follow from Theorem~\ref{thm:main-thm-2} and the isomorphism ${\cong}^{\dag}$ follows from Theorem~\ref{thm:main-thm-1}. This proves the first part of the corollary. If $X$ is $T$-cellular, the freeness of $\bdK _0([X/G])$ as an $R(G)$-module follows from Lemma~\ref{lem:linear-P}. \end{proof}
\begin{remk}\label{remk:freeness} In the special case when $[X/G]$ is a smooth toric Deligne-Mumford stack (with $X$ projective), the freeness of $\bdK _0([X/G])$ as ${\mathbb Z}$-module was earlier shown in \cite[Theorem~2.2]{Hua} and independently in \cite{GHHKK} using symplectic methods. It is known ({\sl cf.} \cite[Example~4.1]{Hua}) that the freeness property may fail if $X$ is not projective. \end{remk}
\subsection{K-theory of weighted projective spaces} \label{subsection:WPS} In the past, there have been many attempts to study the K-theory and Chow rings of weighted projective spaces. However, there are only a few explicit computations in this regard. We end this section with an explicit description of the integral higher K-theory of stacky weighted projective spaces. These are examples of toric stacks, where the spectral sequence ~\eqref{eqn:gen.weak.eq1} degenerates even though $\bdK_0([X/T])$ is not a projective $R(T)$-module. We also describe the rational higher G-theory of weighted projective schemes as another application of Theorem~\ref{thm:main-thm-2}.
\subsubsection{Weighted projective spaces} Let $\underline{q} = \{q_0, \cdots , q_n\}$ be an ordered set of positive integers and let $d = \gcd(q_0, \cdots , q_n)$. This ordered set gives rise to a morphism of tori $\phi: {\mathbb G}_m \to ({\mathbb G}_m)^{n+1}$ given by $\phi(\lambda) = (\lambda^{q_0}, \cdots, \lambda^{q_n})$.
The (stacky) weighted projective space ${\mathbb P}(q_0, \cdots ,q_n)$ is the stack $[{({\mathbb A}^{n+1}_k \setminus \{0\})}/{{\mathbb G}_m}]$, where ${\mathbb G}_m$ acts on ${\mathbb A}^{n+1}_k$ by $\lambda \cdot (a_0, \cdots , a_n) = (\lambda^{q_0}a_0, \cdots , \lambda^{q_n} a_n)$. Notice that ${\mathbb A}^{n+1} \setminus \{0\}$ is a toric variety with dense torus $T = ({\mathbb G}_m)^{n+1}$ acting by the coordinate-wise multiplication. We see that ${\mathbb P}(\underline{q})$ is the toric stack associated to the data $(({\mathbb A}^{n+1}_k \setminus \{0\}), {\mathbb G}_m \xrightarrow{\phi} T)$. It is known that ${\mathbb P}(\underline{q})$ is a Deligne-Mumford toric stack and is reduced (an orbifold) if and only if $d = 1$.
\subsubsection{{\rm K}-theory of ${\mathbb P}(\underline{q})$} To describe the higher K-theory of ${\mathbb P}(\underline{q})$, we consider ${\mathbb A}^{n+1}$ as the toric variety with dense torus $T = ({\mathbb G}_m)^{n+1}$ acting by the coordinate-wise multiplication. Let $V$ be the $(n+1)$-dimensional representation of $T$ which represents ${\mathbb A}^{n+1}$ as the toric variety. Let $\iota: {\rm Spec \,}(k) \to {\mathbb A}^{n+1}$ and $j: U \to {\mathbb A}^{n+1}$ be the $T$-invariant closed and open inclusions, where we set $U = {\mathbb A}^{n+1} \setminus \{0\}$. Observe that $V$ is the $T$-equivariant normal bundle of ${\rm Spec \,}(k)$ sitting inside ${\mathbb A}^{n+1}$ as the origin.
We have the localization exact sequence: \begin{equation}\label{eqn:WPS0} \cdots \to \bdK_i([{{\rm Spec \,}(k)}/{{\mathbb G}_m}]) \xrightarrow{\iota_*} \bdK_i([{{\mathbb A}^{n+1}}/{{\mathbb G}_m}]) \xrightarrow{j^*} \bdK_i([{U}/{{\mathbb G}_m}]) \to \cdots . \end{equation}
Our first claim is that this sequence splits into short exact sequences \begin{equation}\label{eqn:WPS1} 0 \to \bdK_i([{{\rm Spec \,}(k)}/{{\mathbb G}_m}]) \xrightarrow{\iota_*} \bdK_i([{{\mathbb A}^{n+1}}/{{\mathbb G}_m}]) \xrightarrow{j^*} \bdK_i([{U}/{{\mathbb G}_m}]) \to 0 \end{equation} for each $i \ge 0$.
Using \cite[Proposition~4.3]{VV}, it suffices to show that $\lambda_{-1}(V) = \stackrel{n+1}{\underset{i=0}\sum} (-1)^i[\wedge^i(V)]$ is not a zero-divisor in the ring $\bdK_*([{{\rm Spec \,}(k)}/{{\mathbb G}_m}])$. However, we can write $V = \stackrel{n}{\underset{i = 0}\oplus} V_i$, where ${\mathbb G}_m$ acts on $V_i \cong k$ by $\lambda \cdot v = \lambda^{q_i}v$. Since each $q_i$ is positive, we see that no irreducible summand of $V$ is trivial. It follows from \cite[Lemma~4.2]{VV} that $\lambda_{-1}(V)$ is not a zero-divisor in the ring $\bdK_*([{{\rm Spec \,}(k)}/{{\mathbb G}_m}])$, and hence ~\eqref{eqn:WPS1} is exact. We have thus proven our claim.
We can now use ~\eqref{eqn:WPS1} to compute $\bdK_*([U/{{\mathbb G}_m}])$. We first observe that the map $\bdK_*([{{\rm Spec \,}(k)}/{{\mathbb G}_m}]) \to \bdK_*([{{\mathbb A}^{n+1}}/{{\mathbb G}_m}])$ induced by the structure map is an isomorphism by the homotopy invariance. So we can identify the middle term of ~\eqref{eqn:WPS1} with $\bdK_i([{{\rm Spec \,}(k)}/{{\mathbb G}_m}])$. Furthermore, it follows from the Self-intersection formula (\cite[Theorem~2.1]{VV}) that the map $\iota_*$ is multiplication by $\lambda_{-1}(V)$ under this identification.
Since $V = \stackrel{n}{\underset{i = 0}\oplus} V_i$, we get $\lambda_{-1}(V) = \stackrel{n}{\underset{i = 0}\prod} \lambda_{-1}(V_i)$. Furthermore, since the class of $V_i$ in $R({\mathbb G}_m) = {\mathbb Z}[t^{\pm 1}]$ is $t^{q_i}$, we see that $\lambda_{-1}(V_i) = 1 - t^{q_i}$. We conclude that $\lambda_{-1}(V) = \stackrel{n}{\underset{i = 0}\prod} (1 - t^{q_i})$. We have thus proven:
\begin{thm}\label{thm:WPS-main} There is a ring isomorphism \[ \frac{\bdK_*(k)[t^{\pm 1}]}{\left(\stackrel{n}{\underset{i = 0}\prod} (1 - t^{q_i})\right)} \xrightarrow{\cong} \bdK_*({\mathbb P}(\underline{q})). \] \end{thm}
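As a quick consistency check (our remark, not part of the original computation): for $\underline{q} = (1, \cdots , 1)$, the stack ${\mathbb P}(\underline{q})$ is the ordinary projective space ${\mathbb P}^n_k$, and the theorem recovers its familiar K-theory.

```latex
\[
\frac{\bdK_*(k)[t^{\pm 1}]}{\bigl((1-t)^{n+1}\bigr)}
\;\cong\; \frac{\bdK_*(k)[t]}{\bigl((1-t)^{n+1}\bigr)},
\]
% because (1-t) is nilpotent in the quotient, so that
% t = 1 - (1-t) is already invertible there; the right-hand side
% is the projective bundle formula answer for K_*(P^n_k).
```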
\begin{remk}\label{remk:Non-proj} In the above calculations, we can replace ${\mathbb G}_m$ by the dense torus $T$ to get a similar formula. In this case, the exact sequence ~\eqref{eqn:WPS1} shows that $\bdK_0([({\mathbb A}^{n+1} \setminus \{0\})/T])$ is a quotient of $R(T)$ and hence is not a projective $R(T)$-module. \end{remk}
\subsubsection{{\rm G}-theory of weighted projective scheme} The weighted projective scheme is the scheme theoretic quotient of ${\mathbb A}^{n+1} \setminus \{0\}$ by the above action of ${\mathbb G}_m$. This is the coarse moduli scheme of ${\mathbb P}(\underline{q})$. We shall denote this scheme by $\widetilde{{\mathbb P}(\underline{q})}$. It is known that this is a normal (but, in general, singular) projective scheme. No computation was previously available for the higher G-theory or K-theory of this weighted projective scheme. As an application of Theorem~\ref{thm:main-thm-2}, we now give a simple description of its rational higher G-theory. We still do not know how to compute its K-theory.
In order to describe the higher G-theory of $\widetilde{{\mathbb P}(\underline{q})}$, we shall use the following presentation of this scheme which allows us to use our main results. We assume that the characteristic of $k$ does not divide any $q_i$.
The torus $T = {\mathbb G} ^n_m$ acts on ${\mathbb P}^n_k$ as the dense open torus by $(\lambda_1, \cdots , \lambda_n) \star [z_0, \cdots , z_n] = [z_0, \lambda_1 z_1, \cdots , \lambda_n z_n]$. Let $G = \mu_{q_0} \times \cdots \times \mu_{q_n}$ be the product of finite cyclic groups. Then $G$ acts on ${\mathbb P}^n_k$ by $(a_0, \cdots , a_n) \bullet [z_0, \cdots , z_n] = [a_0z_0, \cdots , a_nz_n]$. It is then easy to see that $\widetilde{{\mathbb P}(\underline{q})}$ is isomorphic to the scheme ${{\mathbb P}^n_k}/G$.
Define $\phi : G \to T$ by $\phi(a_0, \cdots , a_n) = ({a_1}/{a_0}, \cdots , {a_n}/{a_0})$. Then one checks that \[ \begin{array}{lll}
H := {\rm Ker}(\phi) & = & \{(a_0, \cdots , a_n) \in G | a_0 = \cdots = a_n\} \\
& = & \{\lambda \in {\mathbb G}_m| \lambda^{q_0} = 1 = \cdots = \lambda^{q_n}\} \\
& = & \{\lambda \in {\mathbb G}_m| \lambda^d = 1\} \\ & \cong & \mu_d. \end{array} \]
Moreover, it is easy to see that \[ \begin{array}{lll} (a_0, \cdots , a_n) \bullet [z_0, \cdots , z_n] & = & [a_0z_0, \cdots , a_nz_n] \\ & = & [a^{-1}_0(a_0z_0), \cdots , a^{-1}_0(a_nz_n)] \\ & = & [z_0, ({a_1}/{a_0})z_1, \cdots , ({a_n}/{a_0})z_n] \\ & = & \phi(a_0, \cdots , a_n) \star [z_0, \cdots , z_n]. \end{array} \] In particular, $G$ acts on ${\mathbb P}^n_k$ through $\phi$. We conclude that $\mathfrak{X} = [{\mathbb P}^{n}_k/G]$ is a smooth toric Deligne-Mumford stack associated to the data $({\mathbb P}^n_k, G \xrightarrow{\phi} T)$ and there is an isomorphism $\mathfrak{X} \cong [{{\mathbb P}^n_k}/F] \times {\mathfrak{B}}_{\mu_d}$, where $F = {\rm Im}(\phi)$.
\begin{thm}\label{thm:WPS-K-th} There is a ring isomorphism \begin{equation}\label{eqn:WPS-K-th0} \bdK _*(k) {\underset {{\mathbb Z}} \otimes} \frac{{\mathbb Z}[t, t_0, \cdots , t_n]} {((t-1)^{n+1}, t^{q_0}_0-1, \cdots , t^{q_n}_n-1)} \xrightarrow{\cong} \bdK _*(\mathfrak{X}). \end{equation} \end{thm} \begin{proof} It follows from Corollary~\ref{cor:Base-change} and Theorem~\ref{thm:main-thm-2} that there is a ring isomorphism \[ \bdK _*(k) {\underset {{\mathbb Z}} \otimes} \bdK _0([{{\mathbb P}^n_k}/T]) {\underset {R(T)} \otimes} R(G) \xrightarrow{\cong} \bdK _*(\mathfrak{X}). \] On the other hand, the projective bundle formula implies that the left side of this isomorphism is the same as $\bdK _*(k) {\underset {{\mathbb Z}} \otimes} \frac{R(T)[t]}{((t-1)^{n+1})} {\underset {R(T)} \otimes} R(G)$ which in turn is isomorphic to $\bdK _*(k) {\underset {{\mathbb Z}} \otimes} \frac{R(G)[t]}{((t-1)^{n+1})}$. The theorem now follows from the isomorphism $R(G) \cong \frac{{\mathbb Z}[t_0, \cdots , t_n]}{(t^{q_0}_0-1, \cdots , t^{q_n}_n-1)}$. \end{proof}
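One immediate numerical consequence (our observation, relying only on the presentation of the quotient ring in the theorem): the degree-zero part is a free ${\mathbb Z}$-module with an explicit monomial basis, so its rank can be read off.

```latex
% A Z-basis of K_0(X) is given by the monomials
%   t^a t_0^{b_0} ... t_n^{b_n},  0 <= a <= n,  0 <= b_i < q_i,
% whence
\[
\operatorname{rank}_{{\mathbb Z}} \bdK _0(\mathfrak{X})
\;=\; (n+1)\,\prod_{i=0}^{n} q_i.
\]
```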
\begin{cor} There is an isomorphism \[ \frac{\bdG_*(k)[t]}{((t-1)^{n+1})} \xrightarrow{\cong} \bdG_*\left(\widetilde{{\mathbb P}(\underline{q})}\right) \] with rational coefficients. \end{cor} \begin{proof} All the groups in this proof will be considered with rational coefficients. Let $\pi: {\mathbb P}^{n}_k \to \widetilde{{\mathbb P}(\underline{q})}$ be the quotient map. The assignment ${\mathcal F} \mapsto \left(\pi_*({\mathcal F})\right)^G$ defines a covariant functor from the category of $G$-equivariant coherent sheaves on ${\mathbb P}^{n}_k$ to the category of ordinary coherent sheaves on $\widetilde{{\mathbb P}(\underline{q})}$. Since the characteristic of $k$ does not divide the order of $G$, this functor is exact and gives a push-forward map $\pi_*: {\bdG}^G_*({\mathbb P}^{n}_k) \to \bdG_*\left(\widetilde{{\mathbb P}(\underline{q})}\right)$.
Let ${\rm CH}^G_*({\mathbb P}^{n}_k)$ denote the equivariant higher Chow groups of ${\mathbb P}^{n}_k$ (\cite{EG1}). By \cite[Theorem~3]{EG1}, there is a push-forward map $\overline{\pi}_*: {\rm CH}^G_*({\mathbb P}^{n}_k) \to {\rm CH}_*\left(\widetilde{{\mathbb P}(\underline{q})}\right)$ which is an isomorphism. It follows from \cite[Theorem~9.8, Lemma~9.1]{Krishna0} (see also \cite[Theorem~3.1]{EG2}) that there is a commutative diagram
\[ \xymatrix@C3pc{ {\bdG}^G_*({\mathbb P}^{n}_k) {\underset{R(G)}\otimes} {\mathbb Q} \ar[r]^>>>>>>{\tau_G} \ar[d]_{\pi_*} & {\rm CH}^G_*({\mathbb P}^{n}_k) \ar[d]^{\overline{\pi}_*}_{\cong} \\ \bdG_*\left(\widetilde{{\mathbb P}(\underline{q})}\right) \ar[r]_{\tau} & {\rm CH}_*\left(\widetilde{{\mathbb P}(\underline{q})}\right),} \] where the horizontal arrows are the Riemann-Roch maps which are isomorphisms (\cite[Theorem~8.6]{Krishna0}). It follows that the left vertical arrow is an isomorphism. The corollary now follows by combining this isomorphism with Theorem~\ref{thm:WPS-K-th}. \end{proof}
\section{Toric stack bundles and the stacky Leray-Hirsch theorem} \label{section:TS-bundle} Toric bundle schemes and their cohomology were first studied by Sankaran and Uma in \cite{SU}. They computed the Grothendieck group of a toric bundle over a smooth base scheme. Jiang \cite{Jiang} studied smooth and simplicial Deligne-Mumford toric stack bundles over schemes and computed their Chow rings. These bundles are relative analogues of toric Deligne-Mumford stacks. A description of the Grothendieck group of toric Deligne-Mumford stack bundles was given by Jiang and Tseng in \cite{JTseng2}.
In this section, we give a general definition of toric stack bundles over a base scheme in such a way that every fiber of this bundle is a (generically stacky) toric stack in the sense of \cite{GSI}. We prove a stacky version of the Leray-Hirsch theorem for the algebraic K-theory of stack bundles. This Leray-Hirsch theorem will be used in the next section to describe the higher K-theory of toric stack bundles.
\subsection{Toric stack bundles}\label{subsection:TS-bun-def} Let $T$ be a split torus of rank $n$ and let $X$ be a scheme with a $T$-action. Let $G$ be a diagonalizable group over $k$ and let $\phi: G \to T$ be a morphism of algebraic groups over $k$.
Let $p: E \to B$ be a principal $T$-bundle over a scheme $B$. Let $G$ act on $E \times X$ by $g(e,x) = (e,gx):= (e, \phi(g)x)$ and let $T$ act on $E \times X$ via the diagonal action. It is easy to see that these two actions commute and the projection map $E \times X \to E$ is equivariant with respect to these actions.
The commutativity of the actions ensures that the $G$-action descends to the quotients $E(X) : = E \stackrel{T}{\times} X$ and $E/T = B$ such that the induced map of quotients $\overline{p} : E(X) \to B$ is $G$-equivariant. Since $E$ has trivial $G$-action, so does $B$ and we see that $G$ acts on $E(X)$ fiber-wise and the map $\overline{p}$ canonically factors through the stack quotient $\pi: [E(X)/G] \to B$. Notice that $E$ is a Zariski locally trivial $T$-bundle and so are $E(X) \to B$ and $[E(X)/G] \to B$. Setting $\mathfrak{X} = [E(X)/G]$, we conclude that the map $\pi: \mathfrak{X} \to B$ is a Zariski locally trivial fibration each of whose fibers is the stack $[X/G]$. The morphism $\pi$ will be called a {\sl stack bundle} over $B$.
If $X$ is a toric variety with dense torus $T$, then $\pi: \mathfrak{X} \to B$ will be called a {\sl toric stack bundle} over $B$. In this case, each fiber of $\pi$ is the toric stack $[X/G]$ in the sense of \cite{GSI}. If $[X/G]$ is a Deligne-Mumford stack, this construction recovers the notion of toric stack bundles used in \cite{Jiang} and \cite{JTseng2}.
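As a concrete instance (a sketch of a standard construction under our own choice of data, not taken verbatim from the cited references): split vector bundles on $B$ give rise to weighted projective stack bundles.

```latex
% Sketch: let L_0, ..., L_n be line bundles on B and let
%   E = (L_0 \setminus 0_B) x_B ... x_B (L_n \setminus 0_B)
% be the fiber product of the complements of the zero sections,
% a principal T-bundle for T = (G_m)^{n+1}. Take
% X = A^{n+1} \setminus {0} with the coordinate-wise T-action and
% let G = G_m act via phi(lambda) = (lambda^{q_0}, ..., lambda^{q_n}).
% The construction above then produces the toric stack bundle
\[
\mathfrak{X} \;=\; [{E(X)}/{{\mathbb G}_m}] \longrightarrow B,
\]
% each fiber of which is the weighted projective stack
% P(q_0, ..., q_n) of \S~\ref{subsection:WPS}.
```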
\subsection{Leray-Hirsch Theorem for stack bundles} \label{subsection:LRT} First we prove the following lemma. \begin{lem}\label{lem:linear-P} Let $X$ be a $T$-equivariantly cellular scheme with the $T$-equivariant cellular decomposition \begin{equation}\label{eqn:linear-P*} \emptyset = X_{n+1} \subsetneq X_n \subsetneq \cdots \subsetneq X_1 \subsetneq X_0 = X \end{equation} and let $U_i = X\setminus X_i$ for $0 \le i \le n+1$. Let $G$ be a diagonalizable group provided with a morphism of algebraic groups $\phi: G \to T$. Then for any $0 \le i \le n$, the sequence \begin{equation}\label{eqn:linear-P0} 0 \to \bdG ^G_*\left(U_{i+1} \setminus U_i\right) \to \bdG ^G_*(U_{i+1}) \to \bdG ^G_*(U_i) \to 0 \end{equation} is exact. In particular, $\bdG ^G_0(X)$ is a free $R(G)$-module of rank equal to the number of $T$-invariant affine cells in $X$ with basis given by the closures of the affine cells. \end{lem} \begin{proof} To prove the exactness part of the lemma, we first make the following claim. Suppose $X$ is a $G$-scheme and $j:U \hookrightarrow X$ is a $G$-invariant open inclusion with complement $Y$. Suppose that $U$ is isomorphic to a representation of $G$. Then the localization sequence \begin{equation}\label{eqn:linear-P1} 0 \to \bdG ^G_*(Y) \to \bdG ^G_*(X) \xrightarrow{j^*} \bdG ^G_*(U) \to 0 \end{equation} is (split) short exact.
To prove the claim, let $\alpha: X \to {\rm Spec \,}(k)$ and $\beta: U \to {\rm Spec \,}(k)$ be the structure maps (which are $G$-equivariant) so that $\beta = \alpha \circ j$. The homotopy invariance of equivariant K-theory shows that $\beta^*$ is an isomorphism. Let $\gamma = \alpha^* \circ (\beta^*)^{-1}$. Then one checks that $\gamma$ is a section of $j^*$ and hence the localization sequence splits into short exact sequences. This proves the claim.
We shall prove ~\eqref{eqn:linear-P0} by induction on the number of $T$-invariant affine cells in $X$. For $i = 0$, ~\eqref{eqn:linear-P0} is immediate. So we assume $i \ge 1$ and consider the commutative diagram: \begin{equation}\label{eqn:linear-P2} \xymatrix@C1pc{ & 0 \ar[d] & 0 \ar[d] & 0 \ar[d] & \\ 0 \ar[r] & \bdG ^G_*(X_i \setminus X_{i+1}) \ar[r] \ar@{=}[d] & \bdG ^G_*(X_1 \setminus X_{i+1}) \ar[r] \ar[d] & \bdG ^G_*(X_1 \setminus X_{i}) \ar[r] \ar[d] & 0 \\ 0 \ar[r] & \bdG ^G_*(X_i \setminus X_{i+1}) \ar[r] & \bdG ^G_*(X \setminus X_{i+1}) \ar[r] \ar[d] & \bdG ^G_*(X \setminus X_{i}) \ar[r] \ar[d] & 0 \\ & & \bdG ^G_*(X \setminus X_1) \ar@{=}[r] \ar[d] & \bdG ^G_*(X \setminus X_1) \ar[r] \ar[d] & 0 \\ & & 0 & 0.} \end{equation}
The top row is exact by induction on the number of affine cells since $X_1$ is $T$-equivariantly cellular with fewer cells. The two columns are exact by the above claim. It follows that the middle row is exact, which proves ~\eqref{eqn:linear-P0}.
To prove the last (freeness) assertion, we apply ~\eqref{eqn:linear-P1} to the inclusion $X_1 \subset X$ and see that $\bdG ^G_0(X) \cong \bdG ^G_0(X_1) \oplus R(G)$. An induction on the number of affine cells now finishes the proof. \end{proof}
\begin{prop}\label{prop:linear} Let $X$ be a $T$-equivariantly cellular scheme and let $B$ be any scheme with trivial $T$-action. Then the external product map \begin{equation}\label{eqn:linear1} \bdG _*(B) {\otimes}_{{\mathbb Z}} \bdG ^G_0(X) \to \bdG ^G_*(B \times X) \end{equation} is an isomorphism. In particular, the natural map $\bdK _*(k) {\otimes}_{{\mathbb Z}} \bdG ^G_0(X) \to \bdG ^G_*(X)$ is an isomorphism. \end{prop} \begin{proof} Since the map \begin{equation}\label{eqn:linear1*} \bdG _*(B) \otimes_{{\mathbb Z}} R(G) \xrightarrow{\cong} \bdG ^G_*(B) \end{equation} is an isomorphism ({\sl cf.} \cite[Lemma~5.6]{Thomason1}), the proposition is equivalent to the assertion that the map \begin{equation}\label{eqn:linear1*0} \bdG ^G_*(B) {\otimes}_{R(G)} \bdG ^G_0(X) \to \bdG ^G_*(B \times X) \end{equation} is an isomorphism.
Consider the cellular decomposition of $X$ as in Lemma~\ref{lem:linear-P}. Then each $U_i = X\setminus X_i$ is also a $T$-equivariantly cellular scheme. It suffices to show by induction on $i \ge 0$ that ~\eqref{eqn:linear1*0} holds when $X$ is any of these $U_i$'s. There is nothing to prove for $i =0$ and the case $i = 1$ follows by the homotopy invariance since $U_1$ is an affine space.
To prove the general case, we use the short exact sequence \begin{equation}\label{eqn:linear2} 0 \to \bdG ^G_0\left(U_{i+1} \setminus U_i\right) \to \bdG ^G_0(U_{i+1}) \to \bdG ^G_0(U_i) \to 0 \end{equation} given by Lemma~\ref{lem:linear-P}. This sequence splits, since each $\bdG ^G_0(U_i)$ was shown to be free over $R(G)$ in Lemma~\ref{lem:linear-P}. Tensoring this with $\bdG ^G_*(B)$ over $R(G)$, we obtain a commutative diagram \[ \xymatrix@C.3pc{ 0 \ar[r] & \bdG ^G_*(B) {\otimes} \bdG ^G_0\left(U_{i+1} \setminus U_i\right) \ar[r] \ar[d] & \bdG ^G_*(B) {\otimes} \bdG ^G_0(U_{i+1}) \ar[r] \ar[d] & \bdG ^G_*(B) {\otimes} \bdG ^G_0(U_i) \ar[r] \ar[d] & 0 \\ \ar[r] & \bdG ^G_*(B \times (U_{i+1} \setminus U_i)) \ar[r]_{\ \ \ i_*} & \bdG ^G_*(B \times U_{i+1}) \ar[r]_{j^*} & \bdG ^G_*(B \times U_i) \ar[r] &} \] where the top row remains exact since the short exact sequence in ~\eqref{eqn:linear2} is split. The bottom row is the localization exact sequence. The left vertical arrow is an isomorphism by the homotopy invariance and the right vertical arrow is an isomorphism by the induction. In particular, $j^*$ is surjective in all indices. We conclude that $i_*$ is injective in all indices and the middle vertical arrow is an isomorphism. \end{proof}
\begin{thm}$($Stacky Leray-Hirsch theorem$)$\label{thm:LHT} Suppose that $k$ is a perfect field and $B$ is a smooth scheme over $k$. Let $X$ be a $T$-equivariantly cellular scheme. Let $\mathfrak{F} \xrightarrow{i} \mathfrak{X} \xrightarrow{\pi} B$ be a Zariski locally trivial stack bundle (\S~\ref{subsection:TS-bun-def}) each of whose fibers is the smooth stack $\mathfrak{F} = [X/G]$. Assume that there are elements $\{e_1, \cdots , e_r\}$ in $\bdK _0(\mathfrak{X})$ such that $\{f_1 = i^*(e_1), \cdots , f_r = i^*(e_r)\}$ is an $R(G)$-basis of $\bdK _0(\mathfrak{X}_b)$ for each fiber $\mathfrak{X}_b = \mathfrak{F}$ of the fibration.
Then the map \begin{equation}\label{eqn:LHT**} \Phi : \bdK _0(\mathfrak{F}) {\underset{R(G)}\otimes} \bdK ^G_*(B) \to \bdK _*(\mathfrak{X}) \end{equation} \[ \Phi\left({\underset{1 \le i \le r}\sum} \ f_i \otimes b_i\right) = {\underset{1 \le i \le r}\sum} {\pi}^*(b_i) e_i \] is an isomorphism of $R(G)$-modules. In particular, $\bdK _*(\mathfrak{X})$ is a free $\bdK ^G_*(B)$-module and the map $\pi^*: \bdK ^G_*(B) \to \bdK _*(\mathfrak{X})$ is injective.
\end{thm} \begin{proof} Since $k$ is perfect and since the fibration $\pi$ is Zariski locally trivial, we can find a filtration \begin{equation}\label{eqn:LHT-fil} \emptyset = B_{n+1} \subsetneq B_n \subsetneq \cdots \subsetneq B_1 \subsetneq B_0 = B \end{equation} of $B$ by closed subschemes such that for each $0 \le i \le n$, the scheme $B_i \setminus B_{i+1}$ is smooth and the given fibration is trivial over it. We set $U_i = B \setminus B_i$ and $V_i = U_i \setminus U_{i-1} = B_{i-1} \setminus B_i$. Observe then that each of the $U_i$ and $V_i$ is smooth.
Set $\mathfrak{X}_i = {\pi}^{-1}(U_i)$ and $\mathfrak{W}_i = {\pi}^{-1}(V_i) = V_i \times \mathfrak{F}$. Let $\eta_i : \mathfrak{X}_i \hookrightarrow \mathfrak{X}$ and $\iota_i: \mathfrak{W}_i \hookrightarrow\mathfrak{X}$ be the inclusion maps. We prove by induction on $i$ that the map $\bdK _0(\mathfrak{F}) {\underset{R(G)}\otimes} \bdK ^G_*(U_i) \to \bdK _*(\mathfrak{X}_i)$ is an isomorphism, which will prove the theorem.
Since $U_0 = \emptyset$ and $\mathfrak{X}_1 = U_1 \times \mathfrak{F}$, the desired isomorphism for $i \le 1$ follows from Proposition~\ref{prop:linear} and the isomorphism $U_1 \times \mathfrak{F} \cong [(U_1 \times X)/G]$. We now consider the commutative diagram: \begin{equation}\label{eqn:LHT&} \xymatrix@C.5pc{ {\begin{array}{c} \bdK ^G_*(U_{i}) \\ {\otimes} \\ \bdK _0(\mathfrak{F}) \end{array}} \ar[r] \ar[d] & {\begin{array}{c} \bdK ^G_*(V_{i+1}) \\ {\otimes} \\ \bdK _0(\mathfrak{F}) \end{array}} \ar[d] \ar[r] & {\begin{array}{c} \bdK ^G_*(U_{i+1}) \\ {\otimes} \\ \bdK _0(\mathfrak{F}) \end{array}} \ar[d] \ar[r] & {\begin{array}{c} \bdK ^G_*(U_{i}) \\ {\otimes} \\ \bdK _0(\mathfrak{F}) \end{array}} \ar[r] \ar[d] & {\begin{array}{c} \bdK ^G_*(V_{i+1}) \\ {\otimes} \\ \bdK _0(\mathfrak{F}) \end{array}} \ar[d] \\ \bdK _*(\mathfrak{X}_i) \ar[r] & \bdK _*(\mathfrak{W}_{i+1}) \ar[r] & \bdK _*(\mathfrak{X}_{i+1}) \ar[r] & \bdK _*(\mathfrak{X}_i) \ar[r] & \bdK _*(\mathfrak{W}_{i+1}).} \end{equation}
The top row in this diagram is obtained by tensoring the K-theory long exact localization sequence with $\bdK _0(\mathfrak{F})$ over $R(G)$, and the bottom row is just the localization exact sequence. Since $\bdK _0(\mathfrak{F})$ is a free $R(G)$-module ({\sl cf.} Lemma~\ref{lem:linear-P}), the top row is also exact.
It is easily checked that the second and the third squares commute using the commutativity property of the push-forward and pull-back maps of K-theory of coherent sheaves in a Cartesian diagram of proper and flat maps. We show that the other squares also commute. It is enough to show that the first square commutes, as the fourth one is the same as the first. Let $\delta$ denote the connecting homomorphism in a long exact localization sequence for higher K-theory.
If we start with an element $b \otimes i^*(e_j) \in \bdK ^G_*(U_{i}) {\otimes} \bdK _0(\mathfrak{F})$ and map this horizontally, we obtain $\delta b \otimes i^*(e_j)$ which maps vertically down to ${\pi}^*(\delta b) \cdot \iota^*_{i+1}(e_j)$. On the other hand, if we first map vertically, we obtain ${\pi}^*(b) \cdot \eta^*_i(e_j)$ which maps horizontally to $\delta \left({\pi}^*(b) \cdot \eta^*_i(e_j) \right)$.
Now, we recall that these elements in the higher K-theory of coherent sheaves are represented by the elements in the higher homotopy groups of the various infinite loop spaces. Moreover, if we have a closed immersion of smooth stacks $\mathfrak{F} \hookrightarrow \mathfrak{X}$ with open complement $\mathfrak{U}$, then we have a fibration sequence of ring spectra \begin{equation}\label{eqn:Ring-sp} \bdK (\mathfrak{F}) \to \bdK (\mathfrak{X}) \to \bdK (\mathfrak{U}). \end{equation}
The homotopy groups of these ring spectra form graded rings and the connecting homomorphism in the long exact sequence of the homotopy groups associated to the above fibration sequence satisfies the Leibniz rule (e.g., see \cite[Appendix~A]{Brown} and \cite[\S~2.4]{Panin}).
Applying this Leibniz rule, we see that the term $\delta \left({\pi}^*(b) \cdot \eta^*_i(e_j) \right)$ is the same as $\delta {\pi}^*(b) \cdot \iota^*_{i+1}(e_j) = {\pi}^*\left(\delta b\right) \cdot \iota^*_{i+1}(e_j)$ since $\delta (\eta^*_i(e_j)) = 0$. We have shown that the above diagram commutes.
The first and the fourth vertical arrows in ~\eqref{eqn:LHT&} are isomorphisms by induction. The second and the fifth vertical arrows are isomorphisms by Proposition~\ref{prop:linear}. Hence the middle vertical arrow is also an isomorphism by the 5-lemma.
To show that $\pi^*$ is injective, consider the $T$-invariant filtration of $X$ as in ~\eqref{eqn:linear-P*} and let $j: [E(U_1)/G] = \mathfrak{X}_1 \to \mathfrak{X}$ be the open inclusion. If we apply ~\eqref{eqn:LHT**} to the map $ \mathfrak{X}_1 \to B$, we see that the composite map $\bdK ^G_*(B) \to \bdK _*(\mathfrak{X}) \to \bdK _*(\mathfrak{X}_1)$ is an isomorphism (since $U_1$ is a $T$-invariant cell of $X$). We conclude that $\pi^*$ is split injective. \end{proof}
\begin{comment} \begin{cor}\label{cor:LHT-Toric} Let $B$ be a smooth scheme over $k$. Let $\mathfrak{X} = [E(X)/G] \xrightarrow{p} B$ be a toric stack bundle as in ~\eqref{subsection:TS-bun-def} where $G$ is a diagonalizable subgroup of $T$. Assume that there are elements $\{e_1, \cdots , e_r\}$ in $\bdK _0(\mathfrak{X})$ such that for every point $b \in B$, $\{f_1 = i^*(e_1), \cdots , f_r = i^*(e_r)\}$ forms an $R(G) = \bdK ^G_0\left(k(b)\right)$-basis of $\bdK _0(\mathfrak{X}_b)$ for the fiber $\mathfrak{X}_b$. Then $\bdK _*(\mathfrak{X})$ is a free $\bdK ^G_*(B)$-module with basis $\{e_1, \cdots , e_r\}$. \end{cor} \end{comment}
\section{Higher K-theory of toric stack bundles} \label{section:K-Chow-TSB} In this section, we give explicit descriptions of the higher K-theory of toric stack bundles in terms of the higher K-theory of the base scheme.
Let $T$ be a split torus of rank $n$. Let $N = {\rm Hom}({\mathbb G}_m, T)$ be the lattice of one-parameter subgroups of $T$ and let $M = {\rm Hom}(T, {\mathbb G}_m) = N^{\vee}$ be its character group. Let $X = X(\Delta)$ be a smooth projective toric variety associated to a fan $\Delta$ in $N_{{\mathbb R}}$. Let
\begin{equation}\label{eqn:reduced} 0 \to G \to T \to T' \to 0 \end{equation} be an exact sequence of diagonalizable groups. This yields the exact sequence of character groups \begin{equation}\label{eqn:reduced-char} 0 \to T'^{\vee} \to T^{\vee} \to G ^{\vee} \to 0. \end{equation}
\subsection{The Stanley-Reisner algebra associated to a subgroup of $T$}\label{subsection:SRA} We fix an ordering $\{\sigma_1, \cdots , \sigma_m\}$ of $\Delta_{\rm max}$ and let $\tau_i \subset \sigma_i$ be the cone which is the intersection of $\sigma_i$ with all those $\sigma_j$ such that $j \ge i$ and which intersect $\sigma_i$ in dimension $n-1$. Let $\tau'_i \subset \sigma_i$ be the cone such that $\tau_i \cap \tau'_i = \{0\}$ and ${\rm dim}(\tau_i) + {\rm dim}(\tau'_i) = n$ for $1 \le i \le m$. It is easy to see that $\tau'_i$ is the intersection of $\sigma_i$ with all those $\sigma_j$ such that $j \le i$ and which intersect $\sigma_i$ in dimension $n-1$. Since $X$ is smooth and projective, it is well known that we can choose the above ordering of $\Delta_{\rm max}$ such that \begin{equation}\label{eqn:order} \tau_i \subset \sigma_j \Rightarrow \ i \le j \ \ {\rm and} \ \ \tau'_i \subset \sigma_j \Rightarrow \ j \le i. \end{equation}
Let $\Delta_1 = \{\rho_1, \cdots , \rho_d\}$ be the set of one-dimensional cones in $\Delta$ and let $\{v_1, \cdots , v_d\}$ be the associated primitive elements of $N$. We choose $\{\rho_1, \cdots , \rho_n\}$ to be a set of one dimensional faces of $\sigma_m$ such that $\{v_1, \cdots , v_n\}$ is a basis of $N$. Let $\{\chi_1, \cdots , \chi_n\}$ be the dual basis of $M$. Let $\{\chi'_1, \cdots , \chi'_r\}$ be a chosen basis of $T'^{\vee} = M'$. We will denote the group operations in all the lattices additively.
\begin{defn}\label{defn:RING} Let $A$ be a commutative ring with unit and let $\{r_1, \cdots ,r_n\}$ be a set of invertible elements in $A$. Let $I^T_{\Delta}$ denote the ideal of the Laurent polynomial algebra $A[t^{\pm 1}_1, \cdots , t^{\pm 1}_d]$ generated by the elements \begin{equation}\label{eqn:Reln1}
(t_{j_1}-1) \cdots (t_{j_l}-1), \ 1 \le j_p \le d \end{equation} such that $\rho_{j_1}, \cdots , \rho_{j_l}$ do not span a cone of $\Delta$. Let $J^G_\Delta$ denote the ideal of $A[t^{\pm 1}_1, \cdots , t^{\pm 1}_d]$ generated by the relations \begin{equation}\label{eqn:Reln2} s_i := \left(\stackrel{d}{\underset{j = 1}\prod} (t_j)^{<-\chi'_i, v_j>}\right) - r_i, \ 1 \le i \le r. \end{equation} We define the $A$-algebras $R_T(A, \Delta)$ and $R_G(A, \Delta)$ to be quotients of ${A[t^{\pm 1}_1, \cdots , t^{\pm 1}_d]}$ by the ideals $I^T_\Delta$ and $I^G_\Delta = I^T_{\Delta} + J^G_{\Delta}$, respectively. \end{defn}
The ring $R_G(A, \Delta)$ will be called the {\sl Stanley-Reisner} algebra over $A$ associated to the subgroup $G$. Every character $\chi \in M$ acts on $R_T(A, \Delta)$ via multiplication by the element $t_\chi = \left(\stackrel{d}{\underset{j = 1}\prod} (t_j)^{<-\chi, v_j>}\right)$ and this makes $R_T(A, \Delta)$ (and hence $R_G(A, \Delta)$) an $\left(A {\underset{{\mathbb Z}}\otimes} R(T)\right)$-algebra.
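To illustrate Definition~\ref{defn:RING} in the simplest case, take $X = {\mathbb P}^1$, so that $N = {\mathbb Z}$ and $\Delta_1 = \{\rho_1, \rho_2\}$ with $v_1 = 1$ and $v_2 = -1$. The only collection of rays not spanning a cone of $\Delta$ is $\{\rho_1, \rho_2\}$, so that \begin{equation*} R_T(A, \Delta) = \frac{A[t^{\pm 1}_1, t^{\pm 1}_2]}{\left((t_1 - 1)(t_2 - 1)\right)}, \end{equation*} on which the generator $\chi$ of $M$ acts by multiplication by $t_{\chi} = t^{-1}_1 t_2$.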
\vskip .3cm
\begin{comment} {\bf{Notations:}} For a $T$-equivariant (resp. ordinary) line bundle $L$ on a smooth scheme $X$, let $c^T_1(L)$ (resp. $c_1(L)$) denote the class $1- [L^{\vee}]$ in $\bdK ^T_0(X)$ (resp. $\bdK _0(X)$). For elements $\alpha, \beta \in \bdK ^T_0(X)$, let $\alpha +_F \beta$ denote the element $\alpha + \beta - \alpha \beta$.
Notice that $c_1(L \otimes M) = c_1(L) +_F c_1(M)$. For a non-negative integer $n$, the term $n_Fc_1(L)$ will denote $c_1(L) +_F \cdots +_F c_1(L) = c_1(L^{\otimes n})$. For $n < 0$, $n_Fc_1(L)$ will mean $(-n)_Fc_1(L^{\vee})$. \end{comment}
\subsection{The K-theory of toric stack bundles} \label{subsection:Main-formula} Let $T$ be a split torus over a perfect field $k$ and let $G$ be a closed subgroup of $T$ (which may not necessarily be a torus). Let $X$ be a smooth projective toric variety with dense torus $T$ and let $\pi:\mathfrak{X} = [{E(X)}/G] \to B$ be a toric stack bundle over a smooth $k$-scheme $B$ associated to a principal $T$-bundle $p: E \to B$. We wish to describe the K-theory of $\mathfrak{X}$ in terms of the K-theory of $B$.
\vskip .3cm
Any $T$-equivariant line bundle $L \to X$ uniquely defines a $G$-equivariant line bundle $E(L) = E \stackrel{T}{\times} L$ on $E(X)$, where the $G$-action on $E(L)$ is given exactly as on $E(X)$. Every $\rho \in \Delta_1$ defines a unique $T$-equivariant line bundle $L_{\rho}$ on $X$ with a $T$-equivariant section $s_{\rho} : X \to L_{\rho}$ which is transverse to the zero-section and whose zero locus is the orbit closure $V_\rho = \overline{O_\rho}$.
For any $\sigma \in \Delta$, let $u_{\sigma}$ denote the fundamental class $[{\mathcal O}_{V_\sigma}]$ of the $T$-invariant subscheme $V_{\sigma}$ in $\bdK ^T_0(X)$ and let $y_{\sigma}$ denote the fundamental class of $[E\left(V_\sigma\right)]$ in $\bdK ^G_0(E(X)) = \bdK _0(\mathfrak{X})$.
Notice that ${\overline{p}}_{\sigma}: E\left(V_\sigma\right) \to B$ is a $G$-equivariant smooth projective toric sub-bundle of $\overline{p}:E(X) \to B$ with fiber $V_\sigma$. In particular, $\pi_{\sigma} :[E\left(V_\sigma\right)/G] \to B$ is a toric stack sub-bundle of $\pi: \mathfrak{X} \to B$ with fiber $[V_\sigma/G]$. We set $\mathfrak{X}_{\sigma} = [E\left(V_\sigma\right)/G]$.
Suppose that $\rho_{j_1}, \cdots , \rho_{j_l}$ do not span a cone in $\Delta$. Then $s = (s_{j_1}, \cdots , s_{j_l})$ yields a $G$-equivariant nowhere vanishing section of $E(L_{\rho_{j_1}}) \oplus \cdots \oplus E(L_{\rho_{j_l}})$ and hence the Whitney sum formula for Chern classes in K-theory implies that
\begin{equation}\label{eqn:vanish1} y_{\rho_{j_1}} \cdots y_{\rho_{j_l}} = 0 \ {\rm in} \ \bdK ^G_0\left(E(X)\right). \end{equation}
We now consider the commutative diagram \begin{equation}\label{eqn:vanish2} \xymatrix@C.7pc{ X_l \ar[r]^{\iota} \ar[d]_{\pi_l} & E(X) \ar[d]_{\overline{p}} & E \times X \ar[d]^{p_E} \ar[r]^>>>>{p_X} \ar[l]_{p'} & X \ar[d]^{\pi_X} \\ {\rm Spec}(l) \ar[r] & B & E \ar[l]^{p} \ar[r]_<<<<{\pi_E} & {\rm Spec}(k),} \end{equation} where ${\rm Spec}(l)$ is any point of $B$. It is clear that all squares are Cartesian and all the maps in the right square are $T$-equivariant.
We define $(T \times G)$-actions on any $T$-invariant subscheme $Y \subseteq X$ and on $E$ by $(t,g)\cdot y = tg \cdot y$ and $(t,g)\cdot e = t\cdot e$, respectively. An action of $(T \times G)$ on $E \times X$ is defined by $(t,g) \cdot (e,x) = (t\cdot e, tg \cdot x)$. It is clear that these are group actions such that the square on the right in ~\eqref{eqn:vanish2} is $(T\times G)$-equivariant. This implies that the middle square is also $(T\times G)$-equivariant and the map $\overline{p}$ is $G$-equivariant with respect to the trivial action of $G$ on $B$. The square on the left is $G$-equivariant.
Let $L_{\chi}$ denote the $T$-equivariant line bundle on ${\rm Spec}(k)$ associated to a character $\chi$ of $T$. Let $(T \times G)$ act on $L_{\chi}$ by $(t,g)\cdot v = \chi(t) \chi(g) \cdot v$. If $\chi \in M' = T'^{\vee}$, then $G$ acts trivially on $L_{\chi}$ and hence it acts trivially on $\pi^*_E(L_{\chi})$. Recall that $(T \times G)$ acts on $E$ via $T$. Hence $\pi^*_E(L_{\chi}) \to E$ is a $(T \times G)$-equivariant line bundle on which $G$-acts trivially.
Since the $T$-equivariant line bundles on $E$ are the same as ordinary line bundles on $B$, we find that for every $\chi \in M'$, there is a unique ordinary line bundle $\zeta_{\chi}$ on $B$ such that $\pi^*_E(L_{\chi}) = p^*(\zeta_{\chi})$.
Since $G$ acts trivially on $B$, there is a canonical ring homomorphism $c_B: \bdK _*(B) \to \bdK ^G_*(B)$ such that the composite $\bdK _*(B) \xrightarrow{c_B} \bdK ^G_*(B) \to \bdK _*(B)$ is the identity. These maps are simply the maps $\bdK _*(B) \xrightarrow{c_B} \bdK ^G_*(B) = \bdK _*(B)\otimes_{{\mathbb Z}} R(G) \to \bdK _*(B)$. Since $p^*_X \circ \pi^*_X (L_{\chi}) = p^*_E \circ \pi^*_E (L_{\chi})$ and since the $(T \times G)$-equivariant vector bundles on $E \times X$ are the same as $G$-equivariant vector bundles on $E(X)$, we conclude that for every $\chi \in M'$, there is a unique ordinary line bundle $\zeta_{\chi}$ on $B$ such that \begin{equation}\label{eqn:vanish3} E(\pi^*_X(L_{\chi})) = {\overline{p}}^*(\zeta_{\chi}) = {\overline{p}}^*\left(c_B(\zeta_{\chi})\right). \end{equation}
Notice also that on each open subset of $B$ where the bundle $p$ is trivial, the restriction of $\zeta_{\chi}$ is the trivial line bundle since $\zeta_{\chi}$ is obtained from the $T$-line bundle $L_{\chi}$ on ${\rm Spec \,}(k)$.
We define a homomorphism of $\bdK _*(B)$-algebras $\bdK _*(B)[t^{\pm 1}_1, \cdots , t^{\pm 1}_d] \to \bdK _*(\mathfrak{X})$ by the assignment $t_i \mapsto [{E(L^{\vee}_{\rho_i})}/G]$ for $1 \le i \le d$. If we let $r_i = \zeta_{\chi'_i}$ for $1 \le i \le r$ ({\sl cf.} \S~\ref{subsection:SRA}), then it follows from ~\eqref{eqn:vanish1} and ~\eqref{eqn:vanish3} that this homomorphism descends to a $\bdK _*(B)$-algebra homomorphism \begin{equation}\label{eqn:vanish4} \Phi_G : R_G\left(\bdK _*(B), \Delta \right) \to \bdK _*(\mathfrak{X}). \end{equation}
\vskip .3cm
Given a sequence $\gamma = \{i_1, \cdots, i_d\}$ of integers, set $E(\gamma) = E\left((L^{\vee}_{\rho_1})^{i_1} \otimes \cdots \otimes (L^{\vee}_{\rho_d})^{i_d}\right)$. We then see that for a monomial $\gamma(\underline{t}) = t^{i_1}_1\cdots t^{i_d}_d$, we have \begin{equation}\label{eqn:vanish4*} \Phi_G(\gamma(\underline{t})) = [{E(\gamma)}/G]. \end{equation}
The following result describes the higher K-theory of the toric stack bundle $\pi: \mathfrak{X} \to B$.
\begin{thm}\label{thm:CTB} The homomorphism $\Phi_G$ is an isomorphism. \end{thm}
Before we prove this theorem, we consider some special cases which will be used in the final proof. The following observations will be used throughout the proofs.
The first observation is that the cell closures of $X$ are the $T$-equivariant subschemes $V_{\tau_i}$. So the classes of ${\mathcal O}_{V_{\tau_i}}$ form an $R(G)$-basis of $\bdK ^G_0(X)$ by Lemma~\ref{lem:linear-P}. Since $\iota^*(y_{\tau_i}) = [{\mathcal O}_{V_{\tau_i}}]$, we see that Theorem~\ref{thm:LHT} applies to the toric stack bundle $\pi:\mathfrak{X} \to B$.
The second observation is that $G$ is a diagonalizable group which acts trivially on $B$. Hence the map $\bdK _*(B) {\underset{{\mathbb Z}}\otimes} R(G) \to \bdK ^G_*(B)$ is a ring isomorphism by \cite[Lemma~3.6]{Thomason1}. This identification will be used without further mention. Since any character $\chi \in M$ acts on $R_T(\bdK _*(B), \Delta)$ and $\bdK ^G_*(E(X))$ via multiplication by $t_{\chi}$ and $\Phi_G(t_{\chi})$ respectively ({\sl cf.} \cite[Proposition~4.3]{SU}), we observe that the composite map $R_T(\bdK _*(B), \Delta) \to R_G(\bdK _*(B), \Delta) \to \bdK ^G_*(E(X))$ is $\bdK ^T_*(B)$-linear.
\begin{remk}\label{remk:Thomason} We remark that the result of Thomason in \cite[Lemma~3.6]{Thomason1} is stated for affine schemes, but his proof works for all schemes. Another way to deduce the general case from the affine case is to choose a stratification of $B$ by affine subschemes as in ~\eqref{eqn:LHT-fil} and use induction on the number of affine strata, together with the localization sequence and the fact that $R(G)$ is free over ${\mathbb Z}$. \end{remk}
\vskip .3cm
\begin{lem}\label{lem:T-case} The homomorphism $\Phi_G$ is an isomorphism when $G = T$. \end{lem} \begin{proof} In this case, we first notice that the map $R_T({\mathbb Z}, \Delta) \xrightarrow{\phi} \bdK ^T_0(X)$ which takes $t_i$ to $[L^{\vee}_{\rho_i}]$, is an isomorphism of $R(T)$-algebras by \cite[Theorem~6.4]{VV}. On the other hand, we have the maps \begin{equation}\label{eqn:CTB0*} \bdK _*(B) {\underset{{\mathbb Z}}\otimes} R_T({\mathbb Z}, \Delta) \xrightarrow{\cong} R_T(\bdK _*(B), \Delta) \xrightarrow{\Phi_T} \bdK ^T_*(E(X)), \end{equation} where the first map takes $\alpha \otimes t_i$ to $\alpha \cdot t_i$ for $1 \le i \le d$. This map is clearly an isomorphism (see ~\eqref{eqn:Reln1}). It is clear from the definition of $\Phi_T$ that the composite map is the same as the map $\Phi$ in ~\eqref{eqn:LHT**} (with $G = T$). It follows from Theorem~\ref{thm:LHT} that the composite map in ~\eqref{eqn:CTB0*} is an isomorphism. We conclude that $\Phi_T$ is an isomorphism. \end{proof}
\begin{cor}\label{cor:T-case*} For any closed subgroup $G \subseteq T$, the ring $R_G(\bdK _*(B), \Delta)$ is a free $\bdK _*(B)$-module. \end{cor} \begin{proof} We have seen above that the image of a character $\chi \in M$ in $R_T(\bdK _*(B), \Delta)$ is $t_{\chi}$. If we let $J^G$ denote the ideal $\left(\chi'_1 - \zeta_{\chi'_1}, \cdots , \chi'_r - \zeta_{\chi'_r}\right)$ in $\bdK ^T_*(B)$, then it follows from ~\eqref{eqn:Reln2} that $J^G_{\Delta} = J^GR_T(\bdK _*(B), \Delta)$ under the map $\bdK ^T_*(B) \to R_T(\bdK _*(B), \Delta)$.
It follows from Lemma~\ref{lem:T-case} and Theorem~\ref{thm:LHT} (with $G=T$) that $R_T(\bdK _*(B), \Delta)$ is a free $\bdK ^T_*(B)$-module. This implies that $R_G(\bdK _*(B), \Delta) = {R_T(\bdK _*(B), \Delta)}/{J^G_{\Delta}}$ is a free ${\bdK ^T_*(B)}/{J^G}$-module. Thus, it suffices to show that ${\bdK ^T_*(B)}/{J^G}$ is a free $\bdK _*(B)$-module. Since $\bdK ^T_*(B)$ is isomorphic to a Laurent polynomial ring $\bdK _*(B)[x^{\pm 1}_1, \cdots, x^{\pm 1}_n]$ and since each character $\chi \in M'$ is a monomial in this ring, the desired freeness follows from Lemma~\ref{lem:easy}. \end{proof}
\begin{lem}\label{lem:Trvial-bundle-case} The homomorphism $\Phi_G$ is an isomorphism when $p: E \to B$ is a trivial principal bundle. \end{lem} \begin{proof} Since $p: E \to B$ is a trivial bundle, we have observed before that $\zeta_{\chi'_i} =1$ for each $1 \le i \le r$. In particular, the map ${\bdK ^T_*(B)}/{J^G} \to \bdK ^G_*(B)$ is an isomorphism by Lemma~\ref{lem:Groupring}, where $J^G$ is as in Corollary~\ref{cor:T-case*}.
It follows from Theorem~\ref{thm:LHT} and Lemma~\ref{lem:T-case} that $\Phi_T$ is an isomorphism of free $\bdK ^T_*(B)$-modules. This implies that $R_G(\bdK _*(B), \Delta) = {R_T(\bdK _*(B), \Delta)}/{J^G_{\Delta}}$ is a free ${\bdK ^T_*(B)}/{J^G} = \bdK ^G_*(B)$-module. It follows from this and Theorem~\ref{thm:LHT} that $\Phi_G$ is a basis preserving homomorphism of free $\bdK ^G_*(B)$-modules of the same rank. Hence, it must be an isomorphism. \end{proof}
\begin{lem}\label{lem:easy} Let $S = A[x^{\pm 1}_1, \cdots, x^{\pm 1}_n]$ be a Laurent polynomial ring over a commutative ring $A$ with unit. Let $\{t_1, \cdots , t_r\}$ be a set of monomials in $S$ and let $\{u_1, \cdots , u_r\}$ be a set of units in $A$. Then the ring $\frac{S}{(t_1 - u_1, \cdots , t_r-u_r)}$ is free over $A$. \end{lem} \begin{proof} This is left as an easy exercise using the fact that $S$ is a free $A$-module on the monomials. \end{proof}
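As a simple instance of Lemma~\ref{lem:easy}, take $S = A[x^{\pm 1}]$, $t_1 = x^2$ and a unit $u_1 \in A$. In the quotient ${S}/{(x^2 - u_1)}$ one has $x^{-1} = u^{-1}_1 x$, so every Laurent monomial reduces to an $A$-multiple of $1$ or $x$, and the quotient is a free $A$-module with basis $\{1, x\}$.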
\begin{lem}\label{lem:Groupring} Let $A$ be a commutative ring with unit and let \[ 0 \to L \to M \to N \to 0 \]
be a short exact sequence of finitely generated abelian groups. Let $I_L$ be the ideal of the group ring $A[M]$ generated by the set $\{s-1 | s \in S\}$, where $S$ is a generating set of $L$. Then the map of group rings $\frac{A[M]}{I_L} \to A[N]$ is an isomorphism. \end{lem} \begin{proof} This is an elementary exercise and a proof can be found in \cite[Proposition~2]{May}. \end{proof}
\vskip .5cm
{\bf{Proof of Theorem~\ref{thm:CTB}}:} We shall prove this theorem along the same lines as the proof of Theorem~\ref{thm:LHT}. Recall that our base field $k$ is perfect. We consider the stratification of $B$ by smooth locally closed subschemes as in ~\eqref{eqn:LHT-fil}. We shall follow the notations used in the proof of Theorem~\ref{thm:LHT}. It suffices to show by induction on $i$ that the theorem is true when $B$ is replaced by each $U_i$. Since $U_0 = \emptyset$ and since $E \xrightarrow{p} B$ is trivial over $U_1$, the desired isomorphism for $i \le 1$ follows from Lemma~\ref{lem:Trvial-bundle-case}.
Given a smooth locally closed subscheme $j: U \hookrightarrow B$, let $\zeta^U_i = j^*(\zeta_{\chi'_i}) \in \bdK _*(U)$ for $1 \le i \le r$ and set $J^G_U = \left(\chi'_1 - \zeta^U_{1}, \cdots , \chi'_r - \zeta^U_{r}\right)$.
We have seen in the proof of Lemma~\ref{lem:T-case} that for any such inclusion $U \subseteq B$, $R_T(\bdK _*(U), \Delta)$ is the same as $\bdK ^T_0(X) {\underset{{\mathbb Z}}\otimes}\bdK_*(U)$. Moreover, the maps
\begin{equation}\label{eqn:CTB12*} R_G(\bdK _*(B), \Delta) {\underset{\bdK _*(B)}\otimes} \bdK _*(U) \cong \frac{R_T(\bdK _*(B), \Delta)}{J^GR_T(\bdK _*(B), \Delta)} {\underset{\bdK _*(B)}\otimes} \bdK _*(U) \to \frac{R_T(\bdK _*(U), \Delta)}{J^G_UR_T(\bdK _*(U), \Delta)} \end{equation} \[ \hspace*{10cm} \to R_G(\bdK _*(U), \Delta) \] are all isomorphisms.
We now consider the diagram: \begin{equation}\label{eqn:LHT&CT} \xymatrix@C.4pc{ R_G(\bdK _*(U_i), \Delta) \ar[r] \ar[d]_{\Phi^{U_i}_G} & R_G(\bdK _*(V_{i+1}), \Delta) \ar[r] \ar[d]_{\Phi^{V_{i+1}}_G} & R_G(\bdK _*(U_{i+1}), \Delta) \ar[r] \ar[d]_{\Phi^{U_{i+1}}_G} & R_G(\bdK _*(U_i), \Delta) \ar[r] \ar[d]_{\Phi^{U_i}_G} & R_G(\bdK _*(V_{i+1}), \Delta) \ar[d]_{\Phi^{V_{i+1}}_G} \\ \bdK _*(\mathfrak{X}_i) \ar[r] & \bdK _*(\mathfrak{W}_{i+1}) \ar[r] & \bdK _*(\mathfrak{X}_{i+1}) \ar[r] & \bdK _*(\mathfrak{X}_i) \ar[r] & \bdK _*(\mathfrak{W}_{i+1}).} \end{equation}
Using ~\eqref{eqn:CTB12*}, we see that the top row of ~\eqref{eqn:LHT&CT} is obtained by tensoring the localization exact sequence \[ \cdots \to \bdK _*(U_{i}) \to \bdK _*(V_{i+1}) \to \bdK _*(U_{i+1}) \to \bdK _*(U_{i}) \to \bdK _*(V_{i+1}) \to \cdots \] of $\bdK _*(B)$-modules with $R_G(\bdK _*(B), \Delta)$. Hence, this row is exact by Corollary~\ref{cor:T-case*}. The bottom row is anyway a localization exact sequence.
We now show that the diagram ~\eqref{eqn:LHT&CT} commutes. It is clear that the third square commutes and the fourth square is the same as the first. So we need to check that the first two squares commute.
Let $\alpha: V_{i+1} \hookrightarrow U_{i+1}$ and $\beta: \mathfrak{W}_{i+1} \hookrightarrow \mathfrak{X}_{i+1}$ be the closed immersions of smooth schemes and stacks. Following the notation in the proof of Theorem~\ref{thm:LHT}, we see that for any $u \in \bdK _*(U_i)$ and for any monomial $\gamma(\underline{t}) = t^{i_1}_1\cdots t^{i_d}_d$,
\begin{equation}\label{eqn:CTB13*} \begin{array}{lll} \delta \circ \Phi^{U_i}_G(u \otimes \gamma(\underline{t})) & = & \delta\left(\pi^*_{U_i}(u) \cdot \eta^*_i\left([{E(\gamma)}/G]\right)\right) \\ & {=} & \delta(\pi^*_{U_i}(u)) \cdot \iota^*_{i+1}\left([{E(\gamma)}/G]\right) \\ & = & \pi^*_{V_{i+1}}(\delta(u)) \cdot \iota^*_{i+1}\left([{E(\gamma)}/G]\right) \\ & = & \Phi^{V_{i+1}}_G \left(\delta(u)\otimes \gamma(\underline{t})\right) \\ & = & \Phi^{V_{i+1}}_G \circ \delta(u \otimes \gamma(\underline{t})), \end{array} \end{equation} where $[{E(\gamma)}/G] \in \bdK _0(\mathfrak{X})$ is as in ~\eqref{eqn:vanish4*}. The second equality follows from the Leibniz rule and the third equality follows from the commutativity of ~\eqref{eqn:LHT&}. This shows that the first (and the last) square commutes.
To show the commutativity of the second square, let $v \in \bdK _*(V_{i+1})$. We then have \begin{equation}\label{eqn:CTB14*} \begin{array}{lll} {\beta}_* \circ \Phi^{V_{i+1}}_G (v \otimes \gamma(\underline{t})) & = & {\beta}_* \left(\pi^*_{V_{i+1}}(v) \cdot \iota^*_{i+1}\left([{E(\gamma)}/G]\right)\right) \\ & = & {\beta}_* \left(\pi^*_{V_{i+1}}(v) \cdot \beta^* \circ \eta^*_{i+1}\left([{E(\gamma)}/G]\right)\right) \\ & = & {\beta}_*(\pi^*_{V_{i+1}}(v)) \cdot \eta^*_{i+1}\left([{E(\gamma)}/G]\right) \\ & = & \pi^*_{U_{i+1}}(\alpha_*(v)) \cdot \eta^*_{i+1}\left([{E(\gamma)}/G]\right) \\ & = & \Phi^{U_{i+1}}_G (\alpha_*(v) \otimes\gamma(\underline{t}) ) \\ & = & \Phi^{U_{i+1}}_G \circ \alpha_* (v \otimes \gamma(\underline{t})), \end{array} \end{equation} where the third equality follows from the projection formula and the fourth equality follows from the commutativity of ~\eqref{eqn:LHT&}. This shows that the second square commutes.
The first and the fourth vertical arrows are isomorphisms by induction. The second and the fifth vertical arrows are isomorphisms by Lemma~\ref{lem:Trvial-bundle-case}. Hence the middle vertical arrow is also an isomorphism by the 5-lemma. This concludes the proof of Theorem~\ref{thm:CTB}. $\hspace*{12.5cm} \hfil\square$
\vskip .3cm
\begin{remk}\label{remk:Non-reduced-bundle} It was assumed in Theorem~\ref{thm:CTB} that $G$ is a subgroup of $T$. Since $\mathfrak{X}$ is just the toric stack $[{E(X)}/G]$ associated to the data $(E(X), G \xrightarrow{\phi} T)$, the general case can always be reduced to the case of Theorem~\ref{thm:CTB}. We refer to Remark~\ref{remk:non-red} for how this can be done. \end{remk}
\vskip .3cm
\noindent\emph{Acknowledgments.} Parts of this work were carried out while the first author was visiting the Tata Institute of Fundamental Research, while the second author was visiting the Mathematics Department of the Ohio State University, Columbus, and while both authors were visiting the Mathematics Department of the Harish-Chandra Research Institute, Allahabad. The first author was also supported by an adjunct professorship at the latter institute. The authors would like to thank these departments for the invitations and financial support during these visits. They also thank Hsian-Hua Tseng for helpful comments on an earlier version of this paper.
\enlargethispage*{75pt}
\end{document} |
\begin{document}
\title[Yang-Mills fields on the Schwarzschild black hole]{Instability
of infinitely-many stationary solutions of the $SU(2)$ Yang-Mills fields on the exterior of the Schwarzschild black hole} \author{Dietrich H\"afner, C\'ecile Huneau} \address{Universit\'e Grenoble-Alpes, Institut Fourier, 100 rue des
maths, 38610 Gi\`eres, France} \email{Dietrich.Hafner@univ-grenoble-alpes.fr} \email{Cecile.Huneau@univ-grenoble-alpes.fr}
\maketitle
\begin{abstract} We consider the spherically symmetric $SU(2)$ Yang-Mills fields on the Schwarzschild metric. Within the so-called purely magnetic Ansatz, we show that there exists a countable number of stationary solutions which are all nonlinearly unstable. \end{abstract}
\setcounter{page}{1} \pagenumbering{arabic}
\section{Introduction}
\subsection{General introduction}
We study the $SU(2)$ Yang-Mills equations on the Schwarzschild metric, with spherically symmetric initial data fulfilling the so-called purely magnetic Ansatz. This equation has at least countably many stationary solutions. In \cite{GH} the first author and S. Ghanem showed that the zero curvature solution is stable within this Ansatz. In this paper we show that the other solutions of this set are nonlinearly unstable.
Global existence for Yang-Mills fields on ${\mathbb R}^{3+1}$ was shown by Eardley and Moncrief in a classical result, \cite{EM1} and \cite{EM2}. Their result was then generalized by Chru\'sciel and Shatah to general globally hyperbolic curved space-times in \cite{CS}. Later, the hypotheses of \cite{CS} were weakened in \cite{G1}.
The purely magnetic Ansatz excludes Coulomb type solutions and reduces the Yang-Mills equations to a nonlinear scalar wave equation: \begin{equation} \label{SYM} \partial_{t}^{2} {W}- \partial_{x}^{2} W+ \frac{(1- \frac{2m}{r}) }{r^2} W(W^2-1)=0. \end{equation}
Strong numerical evidence of the existence of a countable number of stationary solutions $(W_n)_{n\in \nn}$ in the case of the Yang-Mills equations coupled with the Einstein equations with spherical symmetry was shown in \cite{cbh} (see also \cite{BRZ}). It was then proved analytically, still in the coupled case, in \cite{SWY}. For the sake of completeness, we give an analytical proof of this fact (adapted from \cite{SWY}) in the appendix of this paper. The solution $W_n$ possesses $n$ zeros. The stationary solutions $W_0=\pm 1$ correspond to the zero curvature solution. Linearizing around a stationary solution $W_n$ leads to the linear operator \begin{equation*} {\mathcal A}_n=-\partial_x^2+\frac{(1- \frac{2m}{r}) }{r^2}(3W_n^2-1). \end{equation*} In \cite{BRZ} it was numerically observed for the first stationary solutions that ${\mathcal A}_n$ has $n$ negative eigenvalues. In this paper we show analytically that ${\mathcal A}_n$ has at least one negative eigenvalue for $n\ge 1$. An abstract result then shows that this leads to a nonlinear instability. We will describe in Section \ref{Sec2} a general abstract setting for nonlinear one-dimensional wave equations. This abstract setting is applied in Section \ref{Sec3} to the Yang-Mills equation.
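The origin of ${\mathcal A}_n$ can be seen directly: writing $W = W_n + w$ in \eqref{SYM} and using the stationary equation $\partial_x^2 W_n = \frac{(1- \frac{2m}{r})}{r^2} W_n(W_n^2-1)$, the cubic term expands as \begin{equation*} W(W^2-1) = W_n(W_n^2-1) + (3W_n^2-1)w + 3W_n w^2 + w^3, \end{equation*} so that the perturbation $w$ satisfies \begin{equation*} \partial_t^2 w + {\mathcal A}_n w = -\frac{(1- \frac{2m}{r})}{r^2}\left(3W_n w^2 + w^3\right). \end{equation*}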
\subsection{The exterior of the Schwarzschild black hole} The exterior Schwarzschild spacetime is given by ${\mathcal M}={\mathbb R}_t\times {\mathbb R}_{r>2m}\times S^2$ equipped with the metric \begin{eqnarray*} \notag g &=& - (1 - \frac{2m}{r})dt^{2} + \frac{1}{ (1 - \frac{2m}{r})} dr^{2} + r^{2} d\theta^{2} + r^{2}\sin^{2} (\theta) d\phi^{2} \\ &=& N(-dt^2+d{x}^{2})+r^2d\sigma^2 \end{eqnarray*} where \begin{eqnarray} N &=& (1 - \frac{2m}{r}) \end{eqnarray} and $d\sigma^2$ is the usual volume element on the sphere. The coordinate $x$ is defined by the requirement \begin{equation*} \frac{dx}{dr}=N^{-1}. \end{equation*} The coordinates $t,r, \theta, \phi$, are called Boyer-Lindquist coordinates. The singularity $r=2m$ is a coordinate singularity and can be removed by changing coordinates, see \cite{HE}. $m$ is the mass of the black hole. We will only be interested in the region outside the black hole, $r>2m$.
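Integrating the relation $\frac{dx}{dr} = N^{-1} = \frac{r}{r-2m}$ gives, up to an additive constant, the familiar Regge-Wheeler (tortoise) coordinate \begin{equation*} x = r + 2m \ln\left(\frac{r}{2m} - 1\right), \end{equation*} so that $x \to -\infty$ as $r \to 2m$ and $x \sim r$ as $r \to \infty$; the exterior region $r > 2m$ thus corresponds to $x \in {\mathbb R}$.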
\subsection{The spherically symmetric $SU(2)$ Yang-Mills equations on the Schwarz\-schild metric} \label{sphericallysymmetricYM}
Let $G = SU(2)$, the real Lie group of $2 \times 2$ unitary matrices of determinant 1. The Lie algebra associated to $G$ is $su(2)$, the antihermitian traceless $2 \times 2$ matrices. Let $\tau_{j}$, $j \in \{1, 2, 3 \}$, be the following real basis of $su(2)$: \begin{eqnarray*} \tau_1=\frac{i}{2}\left(\begin{array}{cc} 0 & 1 \\ 1 &
0\end{array}\right),\quad \tau_2=\frac{1}{2}\left(\begin{array}{cc} 0 & -1 \\ 1 &
0\end{array}\right),\quad \tau_3=\frac{i}{2}\left(\begin{array}{cc} 1 & 0\\ 0 &
-1 \end{array}\right). \end{eqnarray*} Note that \begin{eqnarray*} [\tau_1,\tau_2]=\tau_3,\quad [\tau_3,\tau_1]=\tau_2,\quad [\tau_2,\tau_3]=\tau_1. \end{eqnarray*}
We are looking for a connection $A$, that is a one form with values in the Lie algebra $su(2)$ associated to the Lie group $SU(2)$, which satisfies the Yang-Mills equations which are: \begin{eqnarray} \text{\bf D}^{(A)}_{{\alpha}} F^{{\alpha}{\beta}} \equiv \nabla_{{\alpha}} F^{{\alpha}{\beta}} + [A_{{\alpha}}, F^{{\alpha}{\beta}} ] = 0, \label{eq:YM} \end{eqnarray} where $[.,.]$ is the Lie bracket and $F_{{\alpha}{\beta}}$ is the Yang-Mills curvature given by
\begin{eqnarray} F_{{\alpha}{\beta}} = \nabla_{{\alpha}}A_{{\beta}} - \nabla_{{\beta}}A_{{\alpha}} + [A_{{\alpha}},A_{{\beta}}], \label{defYMcurvature} \end{eqnarray} and where we have used the Einstein raising indices convention with respect to the Schwarzschild metric. We also have the Bianchi identities which are always satisfied in view of the symmetries of the Riemann tensor and the Jacobi identity for the Lie bracket: \begin{eqnarray} \text{\bf D}^{(A)}_{{\alpha}}F_{\mu\nu} + \text{\bf D}^{(A)}_{\mu}F_{\nu{\alpha}} + \text{\bf D}^{(A)}_{\nu} F_{{\alpha}\mu} = 0. \label{eq:Bianchi} \end{eqnarray}
The Cauchy problem for the Yang-Mills equations is formulated as follows: given a Cauchy hypersurface $\Sigma$ in $M$, a ${\mathcal G}$-valued one form $A_{\mu}$ on $\Sigma$, and a ${\mathcal G}$-valued one form $E_{\mu}$ on $\Sigma$ satisfying \begin{eqnarray} \label{YMconstraintsone} \left.\begin{array}{rcl} E_{t} &=& 0, \\ \textbf{D}^{(A)}_{\mu}E^{\mu} &=& 0\end{array}\right\} \end{eqnarray} we look for a ${\mathcal G}$-valued two form $F_{\mu\nu}$ satisfying the Yang-Mills equations such that, once $F_{\mu\nu}$ is restricted to $\Sigma$, we have \begin{eqnarray} F_{\mu t} = E_{\mu} \label{YMconstraintstwo} \end{eqnarray} and such that $F_{\mu\nu}$ corresponds to the curvature derived from the Yang-Mills potential $A_{\mu}$, i.e. given by \eqref{defYMcurvature}. Equations \eqref{YMconstraintsone} are the Yang-Mills constraint equations on the initial data.
Any spherically symmetric Yang-Mills potential can be written in the following form after applying a gauge transformation, see \cite{FM}, \cite{GuHu} and \cite{W}, \begin{eqnarray} \label{SPAA} A &=& [ -W_{1}(t, r) \tau_{1} - W_{2}(t, r) \tau_{2} ] d\theta + [ W_{2} (t, r) \sin (\theta) \tau_{1} - W_{1} (t, r) \sin (\theta) \tau_{2}] d\phi\nonumber\\ & + & \cos (\theta) \tau_{3} d\phi + A_{0} (t, r) \tau_{3} dt + A_{1} (t, r) \tau_{3} dr, \end{eqnarray} where $A_{0} (t, r) $, $A_{1} (t, r) $, $W_{1}(t, r)$, $W_{2}(t, r)$ are arbitrary real functions. We consider here a purely magnetic Ansatz in which we have $A_0=A_1=W_2=0,\, W_1=:W$. The components of the curvature are then \begin{eqnarray*} \left.\begin{array}{rcl} F_{\theta x} &=& W' \tau_{1},\\ F_{\theta t} &=& \dot{W} \tau_{1}, \\ F_{\phi x} &=& W' \sin (\theta) \tau_{2}, \\ F_{\phi t} &=&\dot{W} \sin (\theta) \tau_{2}, \\ F_{tx} &=& 0,\\ F_{\theta\phi} &=& ( W^{2} -1 ) \sin (\theta) \tau_{3}. \end{array}\right\} \end{eqnarray*} This kind of Ansatz is preserved by the evolution. Also the principal restriction is $A_0=A_1=0$. The constraint equations then impose that $W_1$ is proportional to $W_2$, a case which can be reduced to $W_2=0$. We refer the reader to \cite{GH} for details.
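As an illustration of this computation, consider the last component: since the Christoffel symbols cancel in the antisymmetrized derivative, $F_{\theta\phi}=\partial_{\theta}A_{\phi}-\partial_{\phi}A_{\theta}+[A_{\theta},A_{\phi}]$, and with $A_{\theta}=-W\tau_1$, $A_{\phi}=-W\sin(\theta)\tau_2+\cos(\theta)\tau_3$ we get \begin{equation*} F_{\theta\phi}=-W\cos(\theta)\tau_2-\sin(\theta)\tau_3+W^2\sin(\theta)[\tau_1,\tau_2]-W\cos(\theta)[\tau_1,\tau_3]=(W^2-1)\sin(\theta)\tau_3, \end{equation*} using the commutation relations of the $\tau_j$.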
\subsection{The initial value problem for the purely magnetic Ansatz} \label{AnsatzforinitialdataYM} We consider initial data prescribed on $t=0$ for which there exists a gauge transformation such that, once it is applied to the initial data, the potential $A$ can be written in this gauge as \begin{equation} \label{Ansatz} \left.\begin{array}{rcl} A_{t} (t=0) &=& 0, \\ A_{r} (t=0) &=& 0, \\ A_{\theta} (t=0) &=& -W_0(r)\tau_{1}, \\ A_{\phi} (t=0) &=& -W_0( r) \sin (\theta) \tau_{2} + \cos (\theta) \tau_{3}, \end{array}\right\} \end{equation} and we are given in this gauge the following one form $E_{\mu}$ on $t=0$: \begin{equation} \label{AnsatzE}\left.\begin{array}{rcl} E_{\theta} (t=0) &=& F_{\theta t} (0) = W_1(r) \tau_{1}, \\ E_{\phi} (t=0) &=& F_{\phi t} (0) = W_1(r) \sin (\theta) \tau_{2}, \\ E_{r} (t=0) &=& F_{rt} (0) = 0, \\ E_{t} (t=0) &=& F_{tt} (t=0) = 0. \end{array}\right\} \end{equation} Notice that with this Ansatz the constraint equations \eqref{YMconstraintsone} are automatically fulfilled: \begin{eqnarray} \label{constraintintheAnsatz} \notag ( { \text{\bf D}^{(A)}}^{\theta} E_{\theta} + {\text{\bf D}^{(A)}}^{\phi} E_{\phi} + {\text{\bf D}^{(A)}}^{r} E_{r} ) (t=0)= 0. \end{eqnarray}
The Yang-Mills equations now reduce to \begin{equation} \label{YMSW} \left.\begin{array}{rcl} \ddot{W}-W''+PW(W^2-1)&=&0,\\ W(0)&=&W_0,\\ \partial_t W(0)&=&W_1,\end{array}\right\} \end{equation} where \begin{equation*} P=\frac{(1-\frac{2m}{r})}{r^2}. \end{equation*} It is easy to check that the following energy is conserved, see also \cite{GH}, \begin{equation*} \mathcal{E}(W,\dot{W})=\int \dot{W}^2+(W')^2+\frac{P}{2}(W^2-1)^2 dx. \end{equation*}
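The conservation of $\mathcal{E}$ follows from a one line computation: for sufficiently decaying solutions of \eqref{YMSW}, \begin{equation*} \frac{d}{dt}\mathcal{E}(W,\dot{W})=2\int \big(\dot{W}\ddot{W}+W'\dot{W}'+PW(W^2-1)\dot{W}\big)dx =2\int \dot{W}\big(\ddot{W}-W''+PW(W^2-1)\big)dx=0, \end{equation*} after an integration by parts in the second term.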
We denote by $\dot{H}^k=\dot{H}^k({\mathbb R}, dx)$ and
$H^k=H^k({\mathbb R},dx)$ the homogeneous and inhomogeneous Sobolev spaces of order $k$, respectively.
\begin{definition} \begin{enumerate} \item We define the spaces $L^4_P$, resp. $L^2_P$, as the completion of $C_0^{\infty}({\mathbb R})$ for the norm \begin{eqnarray} \Vert v\Vert_{L^4_P}^4:=\int P\vert v\vert^4 dx\quad \mbox{resp.}\quad \Vert v\Vert_{L^2_P}^2:=\int P \vert v\vert^2 dx. \end{eqnarray} \item We also define for $1\le k\le 2$ the space ${\mathcal H}^k$ as the completion of $C_0^{\infty}({\mathbb R})$ for the norm \begin{eqnarray} \Vert u\Vert^2_{{\mathcal H}^k}=\Vert u\Vert_{\dot{H}^k}^2+\Vert u\Vert_{L^4_P}^2. \end{eqnarray} \end{enumerate} \end{definition} We note that ${\mathcal H}^k$ is a Banach space which contains all constant functions. It turns out that $\mathcal{E}:={\mathcal H}^1\times L^2$ is exactly the space of finite energy solutions, see \cite{GH} for details. We then have \cite[Theorem 1]{GH} \begin{theorem} \label{ThGEYM} Let $(W_0,W_1)\in {\mathcal H}^2\times H^1$. Then there exists a unique strong solution of \eqref{YMSW} with \begin{eqnarray*} W&\in&C^1([0,\infty);{\mathcal H}^1)\cap C([0,\infty);{\mathcal H}^2),\\ \partial_tW&\in& C^1([0,\infty);L^2)\cap C([0,\infty);H^1),\\ \sqrt{P}(W^2-1)&\in&C^1([0,\infty);L^2)\cap C([0,\infty);H^1). \end{eqnarray*} \end{theorem} We can reformulate the above theorem in the following way \begin{corollary} \label{Cor1} We suppose that the initial data for the Yang-Mills equations is given after suitable gauge transformation by \begin{eqnarray*} \left.\begin{array}{rcl} A_t(0)&=&A_r(0)=0,\\
A_{\theta}(0)&=&-W_0\tau_1,\\
A_{\phi}(0)&=&-W_0\sin
\theta\tau_2+\cos\theta\tau_3,\\ E_{\theta}(0)&=&W_1\tau_1,\\ E_{\phi}(0)&=&W_1\sin\theta\tau_2,\\ E_r(0)&=&E_t(0)=0\end{array}\right\} \end{eqnarray*} with $(W_0,W_1)\in {\mathcal H}^2\times H^1$. Then, the Yang-Mills equation \eqref{eq:YM} admits a unique solution $F$ with \begin{eqnarray*} F_{\theta x},\, \frac{1}{\sin\theta}F_{\phi x},\, F_{\theta t},\, \frac{1}{\sin\theta}F_{\phi
t},\sqrt{P}\frac{1}{\sin\theta}F_{\theta\phi}&\in& C^1([0,\infty);L^2)\cap C([0,\infty);H^1). \end{eqnarray*} \end{corollary} \subsection{Energies} We now introduce the Yang-Mills energy-momentum tensor \begin{equation*} T_{\mu\nu}=\<F_{\mu\beta},F_{\nu}^{\beta}\rangle-\frac{1}{4}g_{\mu\nu}\<F_{\alpha\beta},F^{\alpha\beta}\rangle. \end{equation*} Here $\langle.,.\rangle$ is an Ad-invariant scalar product on the Lie algebra $su(2)$. We have \begin{equation*} \nabla^{\nu}T_{\mu\nu}=0. \end{equation*} For a vector field $X^{\nu}$ we define \begin{equation*} J_{\mu}(X)=X^{\nu}T_{\mu\nu} \end{equation*} and the energy on the spacelike slice $\Sigma_t$ ($\Sigma_{t_0}=\{t=t_0\}$) by \begin{equation*} E^{(X)}(F(t))=\int_{\Sigma_t}J_{\mu}(X)n^{\mu}d_{\Sigma_t}. \end{equation*} By the divergence theorem this energy is conserved if $X$ is Killing. In particular \begin{equation*} E^{(\p_t)}(F(t))=\int _{\Sigma_t}J_{\mu}(\p_t)n^{\mu}d_{\Sigma_t} \end{equation*} is conserved. If $F$ is the curvature associated to $(W,\dot{W})$, then \begin{equation*} E^{(\p_t)}(F(t))=\mathcal{E}(W,\dot{W}), \end{equation*} see \cite{GH} for details. \subsection{Main result} We first recall the following result which is implicit in the paper \cite{BRZ} of P. Bizo\'n, A. Rostworowski and A. Zenginoglu. \begin{theorem} \label{thstat} There exists a decreasing sequence $\{a_n\}_{n\in \nn^{\ge1}},\, 0<...< a_n< a_{n-1}<...<a_1=\frac{1+\sqrt{3}}{3\sqrt{3}+5}$ and smooth stationary solutions $W_n$ of \eqref{YMSW} with \begin{equation*} -1\le W_n\le 1,\quad \lim_{x\rightarrow -\infty}W_n(x)=a_n,\quad \lim_{x\rightarrow
\infty}W_n(x)=(-1)^n. \end{equation*} The solution $W_n$ has exactly $n$ zeros. \end{theorem}
\begin{remark}
There is an explicit formula for the first stationary solution
(see \cite{BCC})
$$W_1 = \frac{c-\frac{r}{2m}}{\frac{r}{2m}+3(c-1)}, \quad c=\frac{3+\sqrt{3}}{2}.$$
This solution corresponds to $\lim_{x\rightarrow -\infty}W_1(x)=a_1=\frac{1+\sqrt{3}}{3\sqrt{3}+5}$. \end{remark}
We give a detailed proof of this result in the appendix, where we follow arguments of Smoller, Wasserman, Yau and McLeod. The above solutions are all nonlinearly unstable: \begin{theorem}[Main Theorem] \label{Mainth} For all $n\ge 1$ the solution $W_n$ of \eqref{YMSW} is unstable. More precisely there exists $\epsilon_0>0$ and a sequence $(W_{0,n}^m,W_{1,n}^m)$ with $\Vert (W_{0,n}^m,W_{1,n}^m)-(W_n,0)\Vert_{\mathcal{E}}\rightarrow 0,\, m\rightarrow \infty$, but for all $m$ \begin{equation*} \sup_{t\ge 0}\Vert(W_n^m(t),\partial_tW_n^m(t))-(W_n,0)\Vert_{\mathcal{E}}\ge \epsilon_0>0. \end{equation*} \end{theorem}
\begin{remark}
We do not show in this paper that there is no stationary solution with $W(2m)>a_1$. Nor do we exclude the possibility that there exist solutions with an infinite number of zeros which tend to zero at infinity. Our main theorem does not apply to these two categories of hypothetical stationary solutions. \end{remark}
For $n$ given we construct initial data from $W_n$ as in Section \ref{AnsatzforinitialdataYM}. Let $F_{n}$ be the corresponding curvature at time $t=0$. We obtain \begin{corollary} \label{corstat} For all $n\ge 1$ the solution $F_n$ of \eqref{eq:YM} is unstable. More precisely there exists $\epsilon_0>0$ and a sequence of initial data giving rise to the curvature $F_{0,n}^m$ with \begin{equation*} E^{(\p_t)}(F_{0,n}^m-F_{n})\rightarrow 0,\quad m\rightarrow \infty, \end{equation*} but for all $m$ \begin{equation*} \sup_{t\ge 0}E^{(\p_t)}(F_n^m(t)-F_{n})\ge \epsilon_0, \end{equation*} where $F_n^m(t)$ is the solution associated to the initial data corresponding to the curvature $F_{0,n}^m$. \end{corollary} \textbf{Acknowledgments.} The first author acknowledges support from the ANR funding ANR-12-BS01-012-01. Both authors thank Sari Ghanem for fruitful discussions on Yang-Mills equations. \section{Abstract setting} \label{Sec2} \subsection{Abstract result} We consider the one dimensional wave equation \begin{equation} \label{AWE} \left\{\begin{array}{rcl} \ddot{u}-u''+Vu&=&F(u),\\ u\vert_{t=0}&=&u_0,\\ \partial_tu\vert_{t=0}&=&u_1 \end{array}\right. \end{equation} with $\dot{}=\partial_t,\, '=\partial_x$ and \begin{equation} \tag{HV}\label{HV} V\in C({\mathbb R})\cap L^1({\mathbb R}),\, \lim_{\vert x\vert\rightarrow \infty}V(x)=0,\, \int_{{\mathbb R}}V(x)dx<0. \end{equation} We also suppose that \begin{equation} \tag{HF}\label{HF} \Vert F(u)-F(v)\Vert_{L^2}\le M_F(\Vert u\Vert_{H^1}+\Vert v\Vert_{H^1})\Vert u-v\Vert_{H^1} \end{equation} for $\Vert u\Vert_{H^1}\le 1,\Vert v\Vert_{H^1}\le 1$. Let $X=H^1\times L^2$. We then have the following \begin{theorem} The zero solution of \eqref{AWE} is unstable. More precisely there exists $\epsilon_0>0$ and a sequence $(u_0^m,u_1^m)$ with $\Vert (u_0^m,u_1^m)\Vert_X\rightarrow 0,\, m\rightarrow \infty$, but for all $m$ \begin{equation*} \sup_{t\ge 0}\Vert(u^m(t),\partial_tu^m(t))\Vert_X\ge \epsilon_0>0. 
\end{equation*} Here $u^m(t)$ is the solution of \eqref{AWE} with initial data $(u_0^m,u_1^m)$ and the supremum is taken over the maximal interval of existence of $u^m(t)$. \end{theorem} Let \begin{equation*} {\mathcal A}=-\partial_x^2+V,\quad D({\mathcal A})=H^2({\mathbb R}). \end{equation*} We note that ${\mathcal A}$ is a self-adjoint operator. \subsection{Spectral analysis of ${\mathcal A}$} \begin{proposition} We have \begin{equation*} \sigma({\mathcal A})=\{-\lambda_n^2\}_{n\in {\mathcal N}}\cup [0,\infty), \end{equation*} where $-\lambda_n^2,\quad \lambda_0>\lambda_1>\dots>\lambda_n>\dots>0$ is a finite (${\mathcal N}=\{0,...,N\}$) or infinite $({\mathcal N}=\nn)$ sequence of negative eigenvalues whose only possible accumulation point is $0$. \end{proposition} \proof
First note that $\sigma({\mathcal A})\cap {\mathbb R}^-\neq \emptyset$. Indeed, let $\chi\in C_0^{\infty}({\mathbb R}),\, \chi(0)=1,\, \chi\ge 0,\, \chi_R(.)=\chi(\frac{.}{R})$. Then \begin{equation*} \langle{\mathcal A}\chi_R,\chi_R\rangle=\frac{1}{R}\int \vert \chi'(x)\vert^2dx+\int V(x)\chi_R^2dx\rightarrow \int V(x) dx<0,\quad R\rightarrow \infty. \end{equation*} We now introduce the comparison operator \begin{equation*} {\mathcal B}=-\partial_x^2. \end{equation*} We compute \begin{equation*} ({\mathcal B}-z^2)^{-1}-({\mathcal A}-z^2)^{-1}=({\mathcal A}-z^2)^{-1}V({\mathcal B}-z^2)^{-1}. \end{equation*} Using that $\lim_{x\rightarrow \pm \infty} V(x)=0$ we see that this is a compact operator. By the Weyl criterion \begin{equation*} \sigma_{ess}({\mathcal A})=\sigma_{ess}({\mathcal B})=[0,\infty). \end{equation*} On the other hand we already know that ${\mathcal A}$ has negative spectrum. It therefore has at least one negative eigenvalue. Since ${\mathcal A}$ is bounded from below, the proposition follows. $\Box$
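The negative spectrum guaranteed by \eqref{HV} can also be observed numerically. The following Python finite-difference sketch (the Gaussian well $V(x)=-e^{-x^2}$, the interval $[-20,20]$ and the grid size are illustrative choices, not taken from the text) exhibits a negative bottom eigenvalue:

```python
import numpy as np

# Finite-difference sketch: discretize A = -d^2/dx^2 + V with Dirichlet
# conditions on [-20, 20], for the sample potential V(x) = -exp(-x^2),
# which is continuous, integrable, and has negative integral.
N, L = 801, 20.0
x = np.linspace(-L, L, N)
h = x[1] - x[0]
V = -np.exp(-x**2)

A = (np.diag(2.0 / h**2 + V)
     - np.diag(np.full(N - 1, 1.0 / h**2), 1)
     - np.diag(np.full(N - 1, 1.0 / h**2), -1))

e0 = np.linalg.eigvalsh(A)[0]
# e0 < 0: the discretized operator has a bound state below the essential
# spectrum [0, infinity), as the proposition asserts; e0 > min V = -1.
assert -1.0 < e0 < 0.0
```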
\subsection{The wave equation as a first order equation} \subsubsection{The linear equation} The equation \begin{equation*} \ddot{v}+{\mathcal A} v=0 \end{equation*} is equivalent to \begin{equation*} \partial_t\psi=L\psi,\quad L=\left(\begin{array}{cc} 0 & i \\
i{\mathcal A} & 0 \end{array}\right),\quad \psi=\left(\begin{array}{c} v
\\ \frac{1}{i} \partial_t v \end{array}\right). \end{equation*} \begin{remark} Let \begin{equation*} {\mathcal A}\phi_0=-\lambda^2\phi_0. \end{equation*} Then we have \begin{enumerate} \item $\phi_0\in H^2$. \item Let $\psi^{\pm}_0=\left(\begin{array}{c} \phi_0 \\ \pm\frac{1}{i} \lambda
\phi_0\end{array}\right)$. Then \begin{equation*} L\psi^{\pm}_0=\pm\lambda\psi^{\pm}_0. \end{equation*} \end{enumerate} \end{remark} Let $V_-$ be the negative part of the potential. For $\mu^2>\Vert V_-\Vert_{\infty}(\ge \lambda_0^2)$ we introduce the scalar product \begin{equation*} \langle u,v\rangle_{\mu}=\langle({\mathcal A}+\mu^2)u_0,v_0\rangle+\<u_1,v_1\rangle \end{equation*} where $\langle.,.\rangle$ is the usual scalar product on ${\mathcal H}=L^2({\mathbb R})$. We note $\Vert.\Vert_{\mu}$ the corresponding norm. It is easy to check that the norms $\Vert .\Vert_{\mu}$ and $\Vert.\Vert_X$ are equivalent. \begin{proposition} $L$ is the generator of a $C^0-$ semigroup $e^{tL}$ on $X$. \end{proposition} \proof
Let $\mu^2>\Vert V_-\Vert$ and \begin{equation*} L_{\mu}=\left(\begin{array}{cc} 0 & i\\ i({\mathcal A}+\mu^2) &
0 \end{array}\right),\quad B_{\mu}=\left(\begin{array}{cc} 0 & 0\\ -i\mu^2 &
0 \end{array}\right). \end{equation*} $iL_{\mu}$ is a self-adjoint operator on $(X, \langle.,.\rangle_{\mu})$ and in particular the generator of a $C^0-$ semigroup $e^{L_{\mu}t}$. We have $L=L_{\mu}+B_{\mu}$. Since $B_{\mu}$ is bounded, we can apply \cite[Theorem 3.1.1]{Pa} to see that $L$ is the generator of a $C^0-$ semigroup on $(X, \Vert .\Vert_{\mu})$ and thus on $(X,\Vert .\Vert_X)$. $\Box$
Let now \begin{equation*} M_i=\left(\begin{array}{cc} \bbbone & \bbbone \\
\frac{\lambda_i}{i} & -\frac{\lambda_i}{i} \end{array} \right). \end{equation*}
Note that $\det M_i=2i\lambda_i\neq 0$ and that $M_i$ is thus invertible. We define $P_i=\bbbone_{\{-\lambda_i^2\}}({\mathcal A})M_i$ and $X_i=P_iX$. We also define $X_{\infty}=\bbbone_{{\mathbb R}^+}({\mathcal A})\bbbone_2X.$ Here $\bbbone_{\{-\lambda_i^2\}}({\mathcal A})$ and $\bbbone_{{\mathbb R}^+}({\mathcal A})$ are defined by the spectral theorem. In particular $\bbbone_{\{-\lambda_i^2\}}({\mathcal A})$ is the projection on the eigenspace of ${\mathcal A}$ associated to the eigenvalue $-\lambda_i^2$. \begin{lemma} \begin{equation*} X=\left(\oplus_{i\in {\mathcal N}}X_i\right)\oplus X_{\infty}. \end{equation*} \end{lemma} \begin{remark} Note that the sum is orthogonal with respect to the scalar product $\langle.,.\rangle_{\mu}$. \end{remark} \proof
Let $(\phi,\psi)\in X$. We put \begin{eqnarray*} \phi_i&=&\bbbone_{\{-\lambda_i^2\}}({\mathcal A})\phi,\quad \psi_i=\bbbone_{\{-\lambda_i^2\}}({\mathcal A})\psi,\quad \left(\begin{array}{c} \tilde{\phi}_i \\ \tilde{\psi}_i\end{array}\right)=M_i^{-1}\left(\begin{array}{c} \phi_i \\ \psi_i\end{array}\right). \end{eqnarray*} Since ${\mathcal A}$ is self-adjoint, we can write $$\phi = \sum_{i\in {\mathcal N}} \phi_i + \bbbone_{{\mathbb R}^+}({\mathcal A})\phi, \quad \psi = \sum_{i\in {\mathcal N}} \psi_i + \bbbone_{{\mathbb R}^+}({\mathcal A})\psi.$$ Then \begin{equation*} \left(\begin{array}{c} \phi \\
\psi\end{array}\right)=\sum_{i\in {\mathcal N}}M_i\left(\begin{array}{c}
\tilde{\phi}_i \\
\tilde{\psi}_i\end{array}\right)+\left(\begin{array}{c}
\bbbone_{{\mathbb R}^+}({\mathcal A})\phi \\ \bbbone_{{\mathbb R}^+}({\mathcal A})\psi \end{array}\right) \end{equation*} gives the required decomposition. For uniqueness let \begin{equation*} \left(\begin{array}{c} \phi_i \\
\psi_i\end{array}\right)=\sum_{i\in{\mathcal N}}M_i\left(\begin{array}{c}
\tilde{\phi}_i \\
\tilde{\psi}_i\end{array}\right)+\left(\begin{array}{c}
\phi_{\infty} \\ \psi_{\infty} \end{array}\right) \end{equation*} Applying $\bbbone_{{\mathbb R}^+}({\mathcal A}),\, \bbbone_{\{-\lambda_i^2\}}({\mathcal A})$ to each line immediately gives \begin{equation*} \phi_{\infty}=\bbbone_{{\mathbb R}^+}({\mathcal A})\phi,\quad \psi_{\infty}=\bbbone_{{\mathbb R}^+}({\mathcal A})\psi,\quad \left(\begin{array}{c} \tilde{\phi}_i \\ \tilde{\psi}_i\end{array}\right)=M_i^{-1} \left(\begin{array}{c} \phi_i \\ \psi_i\end{array}\right), \end{equation*} where $\psi_i=\bbbone_{\{-\lambda_i^2\}}({\mathcal A})\psi,\, \phi_i=\bbbone_{\{-\lambda_i^2\}}({\mathcal A})\phi$. $\Box$
Let \begin{equation*} X_i^{\pm}=M_i\bbbone_{\{-\lambda_i^2\}}({\mathcal A})P_{\pm}X, \end{equation*} where $P_+(\phi,\psi)=(\phi,0),\, P_-(\phi,\psi)=(0,\psi)$. Clearly $X_i=X_i^+\oplus X_i^-$ and thus \begin{equation*} X=\left(\bigoplus_{i\in {\mathcal N}}(X_i^+\oplus X_i^-)\right)\oplus X_{\infty}. \end{equation*}
\begin{remark}
Let $(\phi_i,\psi_i) \in X_i^{\pm}$. Then $L (\phi_i,\psi_i)= \pm \lambda_i (\phi_i,\psi_i)$. \end{remark}
\begin{remark} On $X_i$ the norm $\Vert.\Vert_{{\sqrt{2}\lambda_i}}$ is equivalent to the norm $\Vert.\Vert_X$ and $X_i^+,\, X_i^-$ are orthogonal with respect to the scalar product $\langle.,.\rangle_{\sqrt{2}\lambda_i}$. Indeed: \end{remark} \begin{equation*} \left\langle\left(\begin{array}{c} \phi \\
\frac{\lambda_i}{i}\phi \end{array}\right),\left(\begin{array}{c}
\psi \\
-\frac{\lambda_i}{i}\psi \end{array}\right)\right\rangle_{\sqrt{2}\lambda_i}=\lambda_i^2\langle\phi,\psi\rangle-\lambda_i^2\langle\phi,\psi\rangle=0. \end{equation*}
\begin{proposition} \label{prop3} \begin{enumerate} \item The spaces $X_i,\, X_{\infty}$ are $e^{tL}$ invariant. \item For all $\epsilon>0$ there exists $C_{\epsilon}>0$ such that for all $i\in {\mathcal N}$ and for
all $t\in {\mathbb R}$ \begin{equation*} \Vert e^{tL}\vert_{X_i}\Vert_{X\rightarrow X}\le C_{\epsilon} e^{(\lambda_i+\epsilon)\vert t\vert}. \end{equation*} \item For all $\epsilon>0$ there exists $C_{\epsilon}>0$ such that for
all $t\in {\mathbb R}$ \begin{equation*} \Vert e^{tL}\vert_{X_{\infty}}\Vert_{X\rightarrow X} \le C_{\epsilon} e^{\epsilon \vert
t\vert}. \end{equation*} \end{enumerate} \end{proposition} \proof
(1) We have \begin{equation*} e^{tL}M_i\bbbone_{\{-\lambda_i^2\}}({\mathcal A})\bbbone_2\left(\begin{array}{c}
\phi \\ \psi\end{array}\right)=M_i\bbbone_{\{-\lambda_i^2\}}({\mathcal A})\bbbone_2\left(\begin{array}{c}
e^{t\lambda_i} \phi \\ e^{-t\lambda_i}\psi \end{array}\right) \end{equation*} and thus $X_i$ is invariant under the evolution. The fact that $X_{\infty}$ is invariant follows from the fact that $\bbbone_{{\mathbb R}^+}({\mathcal A})$ commutes with $L$.
(2) Because of the equivalence of the norms it is sufficient to estimate the $\Vert.\Vert_{\mu}$ norm. Let \begin{equation*} \left(\begin{array}{c} \phi_i\\ \psi_i\end{array}\right)\in X_i. \end{equation*} We compute
\begin{eqnarray*} \left\Vert e^{tL}\left(\begin{array}{c} \phi_i \\ \psi_i\end{array}\right)\right\Vert_{\mu}^2 &=&\left\Vert \left(\begin{array}{cc} (\mu^2-\lambda_i^2)^{1/2}
& 0 \\ 0 & \bbbone \end{array}\right)M_i\left(\begin{array}{cc}
e^{t\lambda_i} & 0 \\ 0 & e^{-\lambda_i
t} \end{array}\right)M_i^{-1}\left(\begin{array}{c}
\phi_i \\ \psi_i\end{array}\right)\right\Vert^2_{{\mathcal H}\times
{\mathcal H}}\\ &\le&\Vert N_i\Vert^2_{{\mathbb R}^2\rightarrow {\mathbb R}^2}\left\Vert \left(\begin{array}{c}
\phi_i \\ \psi_i\end{array}\right)\right\Vert^2_{\mu}, \end{eqnarray*} where \begin{equation*} N_i=\left(\begin{array}{cc} (\mu^2-\lambda_i^2)^{1/2}
& 0 \\ 0 & \bbbone \end{array}\right)M_i\left(\begin{array}{cc}
e^{t\lambda_i} & 0 \\ 0 & e^{-\lambda_i
t} \end{array}\right)M_i^{-1}\left(\begin{array}{cc} (\mu^2-\lambda_i^2)^{-1/2}
& 0 \\ 0 & \bbbone \end{array}\right). \end{equation*} We then estimate uniformly in $i\in {\mathcal N}$: \begin{eqnarray*} \Vert N_i\Vert^2_{{\mathbb R}^2\rightarrow {\mathbb R}^2}&\lesssim& \left\Vert
\frac{1}{2}\left(\begin{array}{cc} e^{t\lambda_i}+e^{-t\lambda_i} & \frac{1}{i\lambda_i}(e^{-t\lambda_i}-e^{t\lambda_i}) \\
\frac{\lambda_i}{i}(e^{t\lambda_i }-e^{-t\lambda_i}) &
e^{t\lambda_i}+e^{-\lambda_i t} \end{array}\right)\right\Vert_2^2.\\ \end{eqnarray*} We have for $t\ge 0$ \begin{eqnarray*} \frac{1}{\lambda_i}(e^{t\lambda_i}-e^{-t\lambda_i})&=&2\sum_{k=0}^{\infty}\frac{(t\lambda_i)^{2k+1}}{\lambda_i(2k+1)!}\\ &\le&2t\sum_{k=0}^{\infty}\frac{(t\lambda_i)^{2k}}{(2k)!}= t(e^{t\lambda_i }+e^{-t\lambda_i})\le \tilde{C}_{\epsilon} e^{(\lambda_i+\epsilon) t}. \end{eqnarray*} Using that $\lambda_i\le \lambda_0$ we find uniformly in $i\in {\mathcal N}$: \begin{equation*} \Vert N_i\Vert_{{\mathbb R}^2\rightarrow {\mathbb R}^2}\lesssim e^{(\lambda_i+\epsilon)\vert t\vert}. \end{equation*} (3) We consider the case $t\ge 0$. First note that \begin{equation*} \Vert u\Vert_{X_\epsilon}^2=\langle{\mathcal A} u_0,u_0\rangle+\Vert u_1\Vert^2+\epsilon^2\Vert u_0\Vert^2 \end{equation*} defines a norm on $X_{\infty}$. We estimate for $u(t)=e^{tL}u$ \begin{eqnarray*} \frac{d}{dt}\Vert u\Vert_{X_\epsilon}^2&=&2{\rm
Re}\left(\langle{\mathcal A} u_0,\dot{u}_0\rangle+\<u_1,\dot{u}_1\rangle+\epsilon^2\<u_0,\dot{u}_0\rangle\right)\\ &=&2{\rm Re}\, \epsilon^2\<u_0,i u_1\rangle\\ &\le&2\epsilon^2\Vert u_0\Vert\Vert u_1\Vert\le \epsilon^3\Vert u_0\Vert^2+\epsilon\Vert u_1\Vert^2\le \epsilon \Vert u\Vert_{X_\epsilon}^2. \end{eqnarray*} By the Gronwall lemma we obtain: \begin{equation*} \Vert u(t)\Vert_{X_\epsilon}^2\le \tilde{C}_{\epsilon} e^{\epsilon t}\Vert u\Vert^2_{X_\epsilon}. \end{equation*} We now claim that on $X_{\infty}$ the $X$ and the $X_\epsilon$ norms are equivalent. Indeed \begin{eqnarray*} \langle{\mathcal A} u_0,u_0\rangle+\Vert u_1\Vert^2+\epsilon^2\Vert u_0\Vert^2\lesssim \Vert u_0\Vert_{H^1}^2+\Vert u_1\Vert^2. \end{eqnarray*} Also, \begin{eqnarray*} \Vert u_0\Vert_{H^1}^2+\Vert u_1\Vert^2&=&\langle(-\partial_x^2+V)u_0,u_0\rangle-\<Vu_0,u_0\rangle+\Vert u_0\Vert^2+\Vert u_1\Vert^2\\ &\lesssim& \langle{\mathcal A} u_0,u_0\rangle+\Vert u_0\Vert^2+\Vert u_1\Vert^2\lesssim \Vert u\Vert_{X_\epsilon}^2. \end{eqnarray*} Then we can estimate \begin{eqnarray*} \Vert u(t)\Vert_X\lesssim \Vert u(t)\Vert_{X_\epsilon}\lesssim e^{\epsilon
t}\Vert u\Vert_{X_\epsilon}\lesssim e^{\epsilon t}\Vert u\Vert_X. \end{eqnarray*} $\Box$
Let $Y=X_0^-\oplus\left(\bigoplus_{i=1}^{N}X_i\right)\oplus X_{\infty}.$ We have $X=X^+_0\oplus Y$ and both spaces are invariant under $e^{tL}$. \begin{corollary} \label{cor1} For all $\epsilon>0$ there exists $M_{L,\epsilon}>0$ such that for all $t\ge 0$ we have \begin{equation*} \Vert e^{tL}\vert_{Y}\Vert_{X\rightarrow X}\le M_{L,\epsilon} e^{(\lambda_1+\epsilon) t}. \end{equation*} \end{corollary} \proof
Because of the equivalence of the norms $\Vert.\Vert_{X}$ and $\Vert.\Vert_{\mu}\, (\mu^2>\Vert V_-\Vert_{\infty})$ it is sufficient to show the estimate with respect to the norm $\Vert.\Vert_{\mu}$. We choose $\epsilon<\lambda_1$ and apply Proposition \ref{prop3}. Let \begin{equation*} \phi=\phi_0^-+\sum_{i=1}^{N} \phi_i+\phi_{\infty} \end{equation*} with $\phi_0^-\in X_0^-,\, \phi_i\in X_i,\, \phi_{\infty}\in X_{\infty}$. We have \begin{eqnarray*} \Vert e^{tL}\phi\Vert_{\mu}^2&=&e^{-2\lambda_0 t}\Vert \phi_0^-\Vert^2_{\mu}+\sum_{i=1}^{N}\Vert e^{tL}\phi_i\Vert^2_{\mu}+\Vert e^{tL}\phi_{\infty}\Vert^2_{\mu}\\ &\lesssim& e^{2(\lambda_1+\epsilon) t }(\Vert \phi_0^-\Vert^2_{\mu}+\sum_{i=1}^{N} \Vert \phi_i\Vert^2_{\mu}+\Vert \phi_{\infty}\Vert^2_{\mu})=e^{2(\lambda_1+\epsilon) t}\Vert
\phi\Vert^2_{\mu}. \end{eqnarray*} $\Box$
Let \begin{equation*} E_0=\bbbone_{\{-\lambda_0^2\}}({\mathcal A})M_0P_+M_0^{-1},\, E_1=\bbbone-E_0. \end{equation*} We easily check that \begin{equation*} \forall \psi \in X,\, E_0\psi\in X_0^+;\quad \forall \psi\in X,\, E_1\psi\in Y;\quad E_0+E_1=\bbbone. \end{equation*} \subsubsection{The nonlinear equation} The nonlinear equation can now be written as a first order equation \begin{equation} \label{abstrequ} \left\{\begin{array}{rcl} \partial_t\psi&=&L\psi+G(\psi),\\ \psi(0)&=&\psi_0 \end{array}\right. \end{equation} with \begin{equation*} G(\psi)=\left(\begin{array}{c} 0 \\ F(P_{+}(\psi)) \end{array}\right). \end{equation*} From hypothesis \eqref{HF} we directly obtain \begin{equation} \label{LipschitzG} \Vert G(\psi)-G(\phi)\Vert_{X}\le M_F (\Vert \psi\Vert_X+\Vert \phi\Vert_X)\Vert \psi-\phi\Vert_X \end{equation} for $\Vert \psi\Vert_X\le 1,\, \Vert \phi\Vert_X\le 1$. The abstract theorem then reads \begin{theorem} The zero solution of \eqref{abstrequ} is unstable. More precisely there exists $\epsilon_0>0$ and a sequence $\psi_0^m$ with $\Vert \psi_0^m\Vert_X\rightarrow 0,\, m\rightarrow \infty$, but for all $m$ \begin{equation*} \sup_{t\ge 0}\Vert\psi^m(t)\Vert_X\ge \epsilon_0>0. \end{equation*} Here $\psi^m(t)$ is the solution of \eqref{abstrequ} with initial data $\psi_0^m$ and the supremum is taken over the maximal interval of existence of $\psi^m$. \end{theorem}
\subsection{Proof of the abstract theorem}
We denote by $L_0$ the restriction of $L$ to $X^+_0$ and by $L_1$ the restriction of $L$ to $Y$. For $\psi_0\in X^+_0$ with small norm we consider for a certain parameter $\tau>0$ the integral equation \begin{equation} \label{intequ} \psi(t)=e^{L_0(t-\tau)}\psi_0+\int_{\tau}^te^{L_0(t-s)}E_0G(\psi)ds+\int_{-\infty}^te^{L_1(t-s)}E_1G(\psi)ds=:{\mathcal I}(\psi). \end{equation} We fix $\epsilon>0$ in Corollary \ref{cor1} small enough such that $\tilde{\lambda}_1:=\lambda_1+\epsilon<\lambda_0$. We will drop in the following the index $\epsilon$ ($M_L=M_{L,\epsilon}$). We fix $\beta>0$ such that $\lambda_0>2\beta>\tilde{\lambda}_1$. Let \begin{equation*} Z=\{\psi\in C([0,\tau];X);\, \Vert\psi\Vert_X\le e^{\beta(t-\tau)}\rho\}. \end{equation*} We equip $Z$ with the norm \begin{equation*} \Vert \psi\Vert_Z=\sup_{0\le t\le\tau}\Vert e^{-\beta(t-\tau)}\psi(t)\Vert_X. \end{equation*} Let $\psi_0$ be such that $\Vert\psi_0\Vert_X=\frac{\rho}{3}$. We claim that for $\rho$ small enough \begin{equation*} {\mathcal I}:\overline{B}_Z(0,\rho)\rightarrow \overline{B}_Z(0,\rho) \end{equation*} and that it is a contraction on that space. First note that \begin{equation*} {\mathcal I}(\psi)={\mathcal I}_0(\psi)+{\mathcal I}_1(\psi)+{\mathcal I}_2(\psi) \end{equation*} with \begin{eqnarray*} {\mathcal I}_0(\psi)&=&e^{L_0(t-\tau)}\psi_0,\\ {\mathcal I}_1(\psi)&=&-\int_{t}^{\tau}e^{L_0(u-\tau)}E_0G(\psi(t+\tau-u)) du,\\ {\mathcal I}_2(\psi)&=&\int_{-\infty}^te^{L_1(t-s)}E_1G(\psi(s))ds. \end{eqnarray*} We first estimate for $t\leq \tau$ \begin{equation*} \Vert {\mathcal I}_0(\psi)\Vert_X= e^{\lambda_0(t-\tau)}\Vert \psi_0\Vert_X\le 1/3 e^{\beta(t-\tau)}\rho. \end{equation*} We then estimate for $\psi\in \overline{B}_Z(0,\rho)$ \begin{eqnarray*} \Vert{\mathcal I}_1(\psi)\Vert_X&\le&M_F\Vert E_0\Vert\int_t^{\tau}e^{\lambda_0(u-\tau)}\Vert\psi\Vert_X^2(t+\tau-u)du\\ &\le&M_F\Vert E_0\Vert\int_t^{\tau}e^{\lambda_0(u-\tau)}\rho^2e^{2\beta(t-u)}du\\ &\le&M_F\Vert E_0\Vert e^{2\beta
t}e^{-\lambda_0\tau}\rho^2\int_t^{\tau}e^{(\lambda_0-2\beta)u}du\\ &\le&M_F\Vert E_0\Vert\rho^2e^{2\beta
t}e^{-\lambda_0\tau}\frac{1}{\lambda_0-2\beta}e^{(\lambda_0-2\beta)\tau}\\ &=&\frac{M_F\Vert E_0\Vert\rho^2}{\lambda_0-2\beta} e^{2\beta(t-\tau)}\\ &\le&\frac{M_F\Vert E_0\Vert\rho^2}{\lambda_0-2\beta} e^{\beta(t-\tau)}\le 1/3\rho e^{\beta(t-\tau)} \end{eqnarray*} for $\rho$ small enough. We then estimate for $\psi\in \overline{B}_Z(0,\rho)$ : \begin{eqnarray*} \Vert {\mathcal I}_2(\psi(t))\Vert_X&\le&M_LM_F\Vert E_1\Vert\int_{-\infty}^te^{\tilde{\lambda}_1(t-s)}\rho^2e^{2\beta(s-\tau)}ds\\ &\le&\frac{M_LM_F\Vert E_1\Vert\rho^2}{2\beta-\tilde{\lambda}_1}e^{\tilde{\lambda}_1
t}e^{-2\beta\tau}e^{(2\beta-\tilde{\lambda}_1)t}\\ &=&\frac{M_LM_F\Vert
E_1\Vert\rho^2}{2\beta-\tilde{\lambda}_1}e^{2\beta(t-\tau)}\le 1/3\rho e^{\beta(t-\tau)} \end{eqnarray*} for $\rho$ small enough. We have just proven ${\mathcal I} (\psi) \in \overline{B}_Z(0,\rho)$. Let us now show that ${\mathcal I}$ is a contraction. We estimate \begin{eqnarray*} \Vert {\mathcal I}_1(\psi)-{\mathcal I}_1(\phi)\Vert_X&\le&2M_F\Vert E_0\Vert\int_t^{\tau}e^{\lambda_0(u-\tau)}\rho e^{\beta(t-u)}\Vert\psi-\phi\Vert_X(t+\tau-u)du\\ &\le&2M_F\Vert E_0\Vert\rho\Vert\psi-\phi\Vert_Z\int_t^{\tau}e^{\lambda_0(u-\tau)}e^{2\beta(t-u)}du\\ &=&2M_F\Vert E_0\Vert\rho\Vert\psi-\phi\Vert_Ze^{2\beta
t}e^{-\lambda_0\tau}\int_t^{\tau}e^{(\lambda_0-2\beta)u}du\\ &\le&\frac{2M_F\Vert
E_0\Vert\rho}{\lambda_0-2\beta}e^{2\beta(t-\tau)}\Vert\psi-\phi\Vert_Z\le 1/4 e^{\beta(t-\tau)}\Vert\psi-\phi\Vert_Z \end{eqnarray*} for $\rho$ sufficiently small. We then estimate \begin{eqnarray*} \Vert {\mathcal I}_2(\psi)-{\mathcal I}_2(\phi)\Vert_X&\le& \int_{-\infty}^t2M_LM_F\Vert E_1\Vert \rho e^{\tilde{\lambda}_1(t-s)}e^{\beta(s-\tau)}\Vert \psi-\phi\Vert_X ds\\ &\le&2M_LM_F\Vert E_1\Vert\rho\Vert \psi-\phi\Vert_Z\int_{-\infty}^te^{\tilde{\lambda}_1(t-s)}e^{2\beta(s-\tau)}ds\\ &=&2M_LM_F\Vert E_1\Vert\rho\Vert \psi-\phi\Vert_Ze^{\tilde{\lambda}_1t}e^{-2\beta
\tau}\int_{-\infty}^te^{(2\beta-\tilde{\lambda}_1)s}ds\\ &\le&\frac{2M_LM_F\Vert
E_1\Vert\rho}{2\beta-\tilde{\lambda}_1}e^{2\beta(t-\tau)}\Vert \psi-\phi\Vert_Z\le 1/4e^{\beta(t-\tau)}\Vert \psi-\phi\Vert_Z \end{eqnarray*} for $\rho$ sufficiently small.
It follows that for $\rho$ sufficiently small there exists a solution of \eqref{intequ} in $\overline{B}_Z(0,\rho)$. We denote this solution by $\psi(t,\tau)$. We easily check that $\psi(t,\tau)$ is also a solution of \eqref{abstrequ} with initial data satisfying \begin{equation*} \Vert \psi(0,\tau )\Vert_X\le \rho e^{-\beta\tau}\rightarrow 0,\tau \rightarrow \infty. \end{equation*} We also estimate \begin{eqnarray*} \Vert \psi(\tau)\Vert_X&\ge&\Vert \psi_0\Vert_X-M_LM_F\Vert E_1\Vert\int_{-\infty}^{\tau}e^{\tilde{\lambda}_1(\tau-s)}\rho^2e^{2\beta(s-\tau)}ds\\ &=&\rho/3-M_LM_F\Vert E_1\Vert\rho^2e^{(\tilde{\lambda}_1-2\beta)\tau}\int_{-\infty}^{\tau}e^{(2\beta-\tilde{\lambda}_1)s}ds\\ &\ge&\rho/3-\frac{M_LM_F\Vert E_1\Vert\rho^2}{2\beta-\tilde{\lambda}_1}\ge\rho/6 \end{eqnarray*} for $\rho$ small enough. It follows that $\psi^m(t)=\psi(t,m)$ has the required properties. $\Box$
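The mechanism of the proof, namely that data of size $\rho e^{-\beta\tau}$ ride the unstable mode up to a fixed size $\epsilon_0$, is already visible in the scalar caricature $\dot{x}=\lambda_0 x+x^2$. The following Python sketch (a toy model chosen for illustration, not an equation from the paper) integrates it and checks that arbitrarily small data reach a fixed threshold:

```python
def reaches_threshold(delta, lam=1.0, eps0=0.1, dt=1e-3, tmax=50.0):
    # Explicit Euler for the toy unstable equation x' = lam*x + x^2,
    # started at x(0) = delta > 0 (the unstable direction).
    x, t = delta, 0.0
    while t < tmax:
        if x >= eps0:
            return True  # the solution has left the eps0-ball around 0
        x += dt * (lam * x + x * x)
        t += dt
    return False

# However small the initial data, the solution reaches the fixed size eps0:
for delta in [1e-2, 1e-4, 1e-6]:
    assert reaches_threshold(delta)
```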
\begin{remark} The theorem is close to \cite[Theorem VII.2.3]{DaKr} and \cite[Theorem 5.1.3]{He}. However, neither theorem applies directly: \cite[Theorem VII.2.3]{DaKr} is restricted to bounded operators, while \cite[Theorem 5.1.3]{He} only applies if the linear part is sectorial. \end{remark} \section{Application of the abstract result to the Yang-Mills
equation} \label{Sec3} First note that if $W(t,r)$ is a solution of the Yang-Mills equation \eqref{YMSW} (written in the $r$ variable), then $ W(2m t,
2m r)$ is a solution of the same equation with $m=1/2$ and vice versa. We can therefore suppose in the following that $m=1/2$. We linearize around $W=W_n$ and obtain for $v=W-W_n$: \begin{equation*} \ddot{v}-v''+P(3W_n^2-1)v+Pv^2(v+3W_n)=0. \end{equation*} The linear operator \begin{equation*} {\mathcal A}_n=-\partial_x^2+P(3W_n^2-1) \end{equation*} depends on the stationary solution, which we do not know explicitly. We put \begin{equation*} V_n=P(3W_n^2-1). \end{equation*} We first want to apply our abstract result on $X=H^1\times L^2$. It is easy to see that the nonlinear part fulfills the hypotheses of the abstract theorem. Indeed we have \begin{proposition} \label{propnonllip} We have for $\Vert v\Vert_{H^1}\le 1,\, \Vert u\Vert_{H^1}\le 1$: \begin{equation*} \Vert F(v)-F(u)\Vert_{L^2}\lesssim(\Vert v\Vert_{H^1}+\Vert u\Vert_{H^1})\Vert u-v\Vert_{H^1}. \end{equation*} \end{proposition} \proof
We compute \begin{equation*} F(v)-F(u)=P(v^2+u^2+uv+3(W_nv+W_nu))(u-v). \end{equation*} Thus \begin{eqnarray*} \Vert F(v)-F(u)\Vert_{L^2}&\lesssim&(\Vert v^2\Vert_{L^2}+\Vert u^2\Vert_{L^2})\Vert u-v\Vert_{L^{\infty}}+(\Vert v\Vert_{L^{\infty}}+\Vert v\Vert_{L^{\infty}}\Vert u\Vert_{L^{\infty}}+\Vert u\Vert_{L^{\infty}})\Vert u-v\Vert_{L^2}\\ &\lesssim&(\Vert v\Vert^2_{L^4}+\Vert u\Vert_{L^4}^2)\Vert u-v\Vert_{H^1}+(\Vert v\Vert_{H^1}+\Vert v\Vert_{H^1}\Vert u\Vert_{H^1}+\Vert u\Vert_{H^1})\Vert v-u\Vert_{H^1}\\ &\lesssim&(\Vert v\Vert^2_{H^1}+\Vert u\Vert_{H^1}^2+\Vert v\Vert_{H^1}+\Vert u\Vert_{H^1})\Vert u-v\Vert_{H^1}\\ &\lesssim& (\Vert v\Vert_{H^1}+\Vert u\Vert_{H^1})\Vert u-v\Vert_{H^1} \end{eqnarray*} for $\Vert u\Vert_{H^1}\le 1,\, \Vert v\Vert_{H^1}\le 1$. Here we have used the Gagliardo-Nirenberg inequality and the Sobolev embedding $H^1\hookrightarrow L^{\infty}$. $\Box$
In the next subsection we will show that \begin{equation*} \int _{{\mathbb R}}V_n(x)dx<0. \end{equation*} \subsection{Study of the potential $V_n$} Going back to the $r$ variable we see that $W_n$ fulfills the following equation \begin{equation} \label{ym}\left(1-\frac{1}{r}\right)\p_r^2W_n+ \frac{1}{r^2}\p_rW_n+ \frac{1}{r^2}W_n(1-W_n^2)=0 \end{equation} with initial data (or boundary condition) $W_n(1)=a_n$, for $0<a_n\leq \frac{1+\sqrt{3}}{5+3\sqrt{3}}$. We also have $\lim_{r\rightarrow \infty}W_n(r)=(-1)^n$. We will drop the index $n$ in the rest of this subsection.
\subsubsection{A bound on $W$}
\begin{lemma} We have $-a\leq W\leq a$ for $1\leq r\leq 3$. \end{lemma}
\begin{proof} Since the initial data for $W$ are $W(1)=a$ and $W'(1)=-a(1-a^2)<0$, there exists $r_0>1$ such that for $1\leq r \leq r_0$ we have $$-a\leq W(r)\leq a.$$ Then Lemma \ref{borne} implies that on this interval we have $$-a \leq \p_rW(r)\leq a.$$ $W$ is initially decreasing and cannot have a local minimum in the region $W>0$ (this is a consequence of the maximum principle, see Lemma \ref{max}). Consequently there exists $r_1>1$ such that $0\leq W\leq a$ on $[1,r_1]$ and $W(r_1)=0$. Because of the bound on the derivative we have $r_1\ge 2$. By the same bound we have $-a\leq W\leq a$ on $[r_1,r_1+1]$. \end{proof}
Let $Q(r)=1-\frac{1}{r}-\frac{1}{2r^2}$. \begin{proposition}\label{enc} We have for $r\geq 3$ $$-Q(r)\leq W(r)\leq Q(r)$$ \end{proposition}
Let $$L(u,r)= \left(1-\frac{1}{r}\right)\p_r^2u+\frac{1}{r^2}\p_ru+\frac{1}{r^2}u(1-u^2).$$ Before proving Proposition \ref{enc}, we need the following lemma \begin{lemma} For $r\geq 3$ we have $L(Q,r)<0$ and $L(-Q,r)>0$. \end{lemma} \begin{proof} Since $L$ is odd in $u$, it is sufficient to prove $L(Q,r)<0$. We calculate \begin{align*}L(Q,r)=&\left(1-\frac{1}{r}\right)\left(-\frac{2}{r^3}-\frac{3}{r^4}\right)+\frac{1}{r^2}\left(\frac{1}{r^2}+ \frac{1}{r^3}\right) +\frac{1}{r^2}\left(1-\frac{1}{r}-\frac{1}{2r^2}\right)\left(1-\left(1-\frac{1}{r}-\frac{1}{2r^2}\right)^2\right)\\ =&-\frac{2}{r^4}+\frac{2}{r^5}+ \frac{3}{4r^6}+ \frac{3}{4r^7}+ \frac{1}{8r^8}. \end{align*} Consequently, for $r\geq 3$ we have $$L(Q,r)\leq \frac{1}{r^4}\left(-2+\frac{2}{3}+\frac{3}{4\cdot 3^2}+\frac{3}{4\cdot 3^3}+\frac{1}{8\cdot 3^4}\right) \leq -\frac{1}{r^4}<0.$$ \end{proof}
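For the reader checking the expansion of $L(Q,r)$, it is convenient to write $Q=1-u$ with $u=\frac{1}{r}+\frac{1}{2r^2}$. Then \begin{equation*} 1-Q^2=2u-u^2, \qquad Q(1-Q^2)=2u-3u^2+u^3, \end{equation*} and substituting $u=\frac{1}{r}+\frac{1}{2r^2}$ and collecting powers of $\frac{1}{r}$ yields the displayed formula for $L(Q,r)$.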
\begin{proof}[Proof of Proposition \ref{enc}] We have $-a\leq W(3) \leq a$ and $$a<\frac{11}{18}=1-\frac{1}{3}-\frac{1}{2\cdot 9}=Q(3).$$ If the inequality of Proposition \ref{enc} is false, there exist $r_1<r_2$, where $r_2$ may be infinite, such that $$W(r_1)=Q(r_1), \quad W(r_2)=Q(r_2)$$ and $W>Q$ on $]r_1,r_2[$ (the case $W<-Q$ is treated in a similar way). Consider $r_0$ such that $W-Q$ attains its maximum at $r_0$. Note that such a maximum always exists, whether $\lim_{r\rightarrow \infty} W(r)=-1$ (in which case $r_2<\infty$) or $\lim_{r\rightarrow \infty}W(r)=1=\lim_{r\rightarrow \infty}Q(r)$. Then we have $$L(W,r_0)-L(Q, r_0)=-L(Q, r_0)>0$$ so $$\left(1-\frac{1}{r_0}\right) (\p_r^2W-\p_r^2Q)(r_0)+ \frac{1}{r_0^2}\left(W(1-W^2)-Q(1-Q^2)\right)>0.$$ Since $$W(r_0)>Q(r_0)\geq Q(3)= \frac{11}{18} \geq \frac{1}{\sqrt{3}}$$ and the function $x\mapsto x(1-x^2)$ is decreasing for $x\geq \frac{1}{\sqrt{3}}$, we have $$\left(W(1-W^2)-Q(1-Q^2)\right) \leq 0$$ and consequently $$\left(1-\frac{1}{r_0}\right) (\p_r^2W-\p_r^2Q)(r_0) >0,$$ contradicting the fact that $W-Q$ attains its maximum at $r_0$. \end{proof}
\subsubsection{A bound on the potential} We now come back to the potential $$V=P(3W^2-1)$$ \begin{proposition} We have $$\int_{{\mathbb R}} V(x) dx <0.$$ \end{proposition} \begin{proof} First note that \begin{equation*} \int_{{\mathbb R}} V(x)dx=\int_1^{\infty} \frac{3W^2-1}{r^2} dr. \end{equation*} We estimate $$\int_1^3 \frac{3W^2-1}{r^2} \leq \int_1^3 \frac{3a^2-1}{r^2}= \frac{2(3a^2-1)}{3}$$ and \begin{eqnarray*} \int_3^\infty \frac{3W^2-1}{r^2} &\leq& \int_3^\infty \frac{1}{r^2}\left( 3\left(1-\frac{1}{r}\right)^2-1\right) \leq \int_3^\infty \frac{1}{r^2}\left(2-\frac{6}{r}+
\frac{3}{r^2}\right)=\left[-\frac{2}{r}+\frac{3}{r^2}-\frac{1}{r^3} \right]_3^\infty\\ &=& \frac{1}{3}+ \frac{1}{27}= \frac{10}{27} \end{eqnarray*} Note that $$\frac{2(3a^2-1)}{3}+\frac{10}{27}<0$$ because $a\le\frac{1+\sqrt{3}}{5+3\sqrt{3}}< \frac{2}{3\sqrt{3}}$. Therefore we have $$\int_{{\mathbb R}} V(x) dx<0.$$ \end{proof}
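For the reader's convenience, we spell out the last inequality. The condition on $a$ is equivalent to \begin{equation*} \frac{2(3a^2-1)}{3}+\frac{10}{27}<0 \Longleftrightarrow 3a^2-1<-\frac{5}{9} \Longleftrightarrow a^2<\frac{4}{27} \Longleftrightarrow a<\frac{2}{3\sqrt{3}}, \end{equation*} and a direct computation gives \begin{equation*} \frac{1+\sqrt{3}}{5+3\sqrt{3}}=\frac{(1+\sqrt{3})(5-3\sqrt{3})}{(5+3\sqrt{3})(5-3\sqrt{3})}=\frac{2\sqrt{3}-4}{-2}=2-\sqrt{3}<\frac{2}{3\sqrt{3}}. \end{equation*}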
\subsection{Proof of Theorem \ref{Mainth}} The main theorem with $\mathcal{E}$ replaced by $X$ now follows from the abstract result. In order to be able to replace $X$ by $\mathcal{E}$ we need the following lemma. We will drop the index $n$. \begin{lemma} \label{lem2} Let $\phi_0$ be an eigenfunction of ${\mathcal A}$ with eigenvalue $-\lambda^2$. Then we have \begin{eqnarray} \label{lem2.1} \int_{{\mathbb R}} P\vert\phi_0\vert^2\ge \lambda^2\int_{{\mathbb R}} \vert \phi_0\vert^2.\\ \label{lem2.2} -\int V\vert\phi_0\vert^2\ge 0. \end{eqnarray} \end{lemma} \proof
Let us first show \eqref{lem2.1}. We have \begin{equation*} (-\partial_x^2+V)\phi_0=-\lambda^2\phi_0. \end{equation*} Multiplication by $\phi_0$ and integration by parts gives \begin{equation} \label{lem2.3} \int \vert \phi_0'\vert^2+\int V\vert \phi_0\vert^2+\lambda^2\int \vert \phi_0\vert^2=0. \end{equation} Now recall that $V=P(3W^2-1)$; since $P\ge 0$ and $3W^2-1\ge -1$, we have $V\ge -P$ and thus \begin{equation*} \int P\vert \phi_0\vert^2\ge \lambda^2\int \vert\phi_0\vert^2. \end{equation*} We now show \eqref{lem2.2}. From \eqref{lem2.3} we obtain \begin{equation*} -\int V\vert \phi_0\vert^2=\int \vert\phi_0'\vert^2+\lambda^2\int \vert\phi_0\vert^2\ge 0. \end{equation*} $\Box$
Let $\tilde{{\mathcal H}}^1$ be the completion of $C_0^{\infty}$ for the norm \begin{equation*} \Vert u\Vert_{\tilde{{\mathcal H}}^1}^2=\Vert u\Vert_{\dot{H}^1}^2+\Vert u\Vert_{L^2_P}^2. \end{equation*} We put $\tilde{\mathcal{E}}=\tilde{{\mathcal H}}^1\times L^2$.
{\bf Proof of Theorem \ref{Mainth}}
We continue using the notation of the abstract setting. We claim that it is sufficient to show the following: \begin{equation} \tag{IM}\label{IM} \begin{array}{c} \mbox{There exists $\epsilon_0>0$ and a sequence $\psi_0^m$ with $\Vert \psi_0^m\Vert_X\rightarrow 0,\, m\rightarrow \infty$,}\\ \mbox{but for all $m$}\quad \sup_{t\ge 0}\Vert \psi^m(t)\Vert_{\tilde{\mathcal{E}}}\ge \epsilon_0>0.\end{array} \end{equation} To see this we first note that \begin{equation*} \Vert \psi_0^m\Vert_{\mathcal{E}}\le \Vert \psi_0^m\Vert_X \end{equation*} because \begin{equation*} \left(\int P\vert u\vert^4\right)^{1/4}\lesssim \Vert u\Vert^{1/2}_{\infty}\Vert u\Vert_{L^2}^{1/2}\le \Vert u\Vert_{H^1} \end{equation*} by the Sobolev embedding $H^1\hookrightarrow L^{\infty}$. On the other hand \begin{equation*} \Vert u\Vert_{L^2_P}=\left(\int P\vert u\vert^2\right)^{1/2}\le \left(\int P\right)^{1/4}\left(\int P\vert
u\vert^4\right)^{1/4}\lesssim \Vert u\Vert_{L^4_P} \end{equation*} and thus \begin{equation*} \Vert \psi^m(t)\Vert_{\mathcal{E}}\gtrsim \Vert \psi^m(t)\Vert_{\tilde{\mathcal{E}}}. \end{equation*} Let us now show \eqref{IM}. We follow the proof of the main theorem. We choose \begin{equation*} \psi_0=\left(\begin{array}{c} \phi_0 \\ \frac{\lambda_0}{i}
\phi_0\end{array}\right),\, \phi_0\in
\bbbone_{\{-\lambda_0^2\}}({\mathcal A}){\mathcal H},\, \Vert
\phi_0\Vert=\frac{1}{3(1+\Vert V_-\Vert_{\infty})^{1/2}}\rho. \end{equation*} We estimate \begin{eqnarray*} \Vert \psi_0\Vert_X^2&=&\langle(-\partial_x^2+V)\phi_0,\phi_0\rangle-\<V\phi_0,\phi_0\rangle+\Vert \phi_0\Vert^2+\lambda_0^2\Vert \phi_0\Vert^2\\ &\le&(\Vert V_{-}\Vert_{\infty}+1)\Vert\phi_0\Vert^2=\frac{1}{9}\rho^2. \end{eqnarray*} Thus the first part of the proof goes through without any changes. We then have to estimate $\Vert \psi(\tau)\Vert_{\tilde{\mathcal{E}}}$. We estimate \begin{eqnarray*} \Vert \psi_0\Vert^2_{\tilde{\mathcal{E}}}&=&\langle{\mathcal A}\phi_0,\phi_0\rangle-\<V\phi_0,\phi_0\rangle+\int P\vert\phi_0\vert^2+\lambda_0^2\Vert \phi_0\Vert^2\\ &=&-\<V\phi_0,\phi_0\rangle+\int P\vert\phi_0\vert^2\\ &\ge&\int P\vert \phi_0\vert^2\ge \lambda_0^2\int \vert\phi_0\vert^2\\ &=&\lambda_0^2\frac{1}{9(1+\Vert V_-\Vert_{\infty})}\rho^2. \end{eqnarray*} Here we have used Lemma \ref{lem2}. Using \begin{equation*} \Vert u\Vert_{\tilde{ \mathcal{E}}}\le C_1\Vert u\Vert_X \end{equation*} we find \begin{eqnarray*} \Vert \psi(\tau)\Vert_{\tilde{\mathcal{E}}}\ge \frac{\lambda_0}{3(1+\Vert
V_-\Vert_{\infty})^{1/2}}\rho-\frac{2C_1M_LM_F\Vert
E_1\Vert}{2\beta-\lambda_1}\rho^2\ge \frac{\lambda_0}{6(1+\Vert V_-\Vert_{\infty})}\rho \end{eqnarray*} for $\rho$ small enough. $\Box$
\subsection{Proof of Corollary \ref{corstat}} We recall $$E^{(\p_t)}(F(t))= \mathcal{E}(W,\dot{W}).$$ We take the same sequence of data $W_{0,n}^m$ as in Theorem \ref{Mainth}. We first have to show that \begin{equation*} \int P ((W_{0,n}^m)^2-W_n^2)^2\rightarrow 0,\quad m\rightarrow \infty. \end{equation*} This follows from \begin{eqnarray*} \int P ((W_{0,n}^m)^2-W_n^2)^2&\lesssim&\int P (W^m_{0,n}-W_n)^4+\int PW_n^2(W^m_{0,n}-W_n)^2\\ &\lesssim&\int P (W^m_{0,n}-W_n)^4+\left(\int P (W^m_{0,n}-W_n)^4\right)^{1/2}\rightarrow 0,\quad m\rightarrow \infty \end{eqnarray*} by Theorem \ref{Mainth}. In the first inequality we have used the estimate $$(A^2-B^2)^2= (A-B)^2(A+B)^2= (A-B)^2(A-B+2B)^2 \leq 2(A-B)^4+ 8B^2(A-B)^2,$$ and the fact that $\Vert W_n \Vert_{L^\infty}\leq 1$. Now we have to show that \begin{equation} \label{eqcor1} \sup_{t\ge 0} \int (\dot{W}^m_n)^2+((W^m_n)'-W_n')^2+P ((W^m_n)^2-W_n^2)^2\ge \epsilon_1>0. \end{equation} We know by Theorem \ref{Mainth} that \begin{equation} \label{eqcor2} \sup_{t\ge 0}\int (\dot{W}^m_n)^2+((W^m_n)'-W_n')^2+P (W^m_n-W_n)^4\ge \epsilon_0>0. \end{equation} We also know from the proof of Theorem \ref{Mainth} that this supremum is achieved on the interval $[0,m]$ and that on this interval \begin{equation*} \Vert W^m_n-W_n\Vert_{L^2}\le \rho \end{equation*} for some $\rho>0$. Now observe that for $u\in H^1$ we have \begin{equation} \label{eqcor3} \int Pu^2\le 2\left(\int \frac{1}{r^2}
u^2\right)^{1/2}\left(\int (u')^2\right)^{1/2}. \end{equation} Indeed by density we can suppose $u\in C_0^{\infty}({\mathbb R})$ and then compute \begin{eqnarray*} \int Pu^2=\int \p_x(-\frac{1}{r})u^2=2\int \frac{1}{r}uu'\le 2 \left(\int \frac{1}{r^2}
u^2\right)^{1/2}\left(\int (u')^2\right)^{1/2}. \end{eqnarray*} Let us now show \eqref{eqcor1}. We can suppose that \begin{equation*} \sup_{t\ge 0} \int ((W_n^m)'-W_n')^2\le \frac{\epsilon^2_0}{4^4\rho^2} \end{equation*} because otherwise there is nothing to show. Then we estimate \begin{eqnarray*} \lefteqn{\int \dot{W_n^m}^2+((W_n^m)'-W'_n)^2+P((W_n^m)^2-W_n^2)^2}\\ &\ge&\int \dot{W_n^m}^2+((W_n^m)'-W'_n)^2+\frac{1}{2}P(W_n^m-W_n)^4-4PW_n^2(W_n^m-W_n)^2\\ &\ge&\int\frac{1}{2}(\dot{W_n^m}^2+((W_n^m)'-W'_n)^2+P(W_n^m-W_n)^4)\\ &-&4\left(\int((W_n^m)'-W_n')^2\right)^{1/2}\left(\int\frac{(W_n^m-W_n)^2}{r^2}\right)^{1/2}\\ &\ge&\frac{1}{2}\int\dot{W_n^m}^2+((W_n^m)'-W'_n)^2+P(W_n^m-W_n)^4-\epsilon_0/4, \end{eqnarray*} where in the first inequality we have used the estimate \begin{align*}(A^2-B^2)^2&= (A-B)^2(A-B+2B)^2 = (A-B)^2\left((A-B)^2 +4B(A-B) +4B^2\right) \\ &\geq (A-B)^2((A-B)^2-\frac{1}{2}(A-B)^2-8B^2+4B^2)=
\frac{1}{2}(A-B)^4-4B^2(A-B)^2,
\end{align*}
and in the second inequality we have used \eqref{eqcor3} and the fact that $\Vert W_n \Vert_{L^\infty}\leq 1$.
The supremum over $t\ge 0$ of this expression is $\ge \epsilon_0/4$ by \eqref{eqcor2}. $\Box$
\appendix \section{Proof of Theorem \ref{thstat}} In this appendix we give an explicit proof of Theorem \ref{thstat}. We adapt, in the simpler uncoupled case, the arguments of Smoller, Wasserman, Yau and McLeod \cite{SWYM}; Smoller, Wasserman and Yau \cite{SWY}; and Smoller and Wasserman \cite{SW} to show the existence of infinitely many solutions. Throughout this appendix we work with the $r$ variable and \underline{we write $'=\p_r$}. Again we
can suppose that $m=1/2$. Recall that the stationary equation reads \begin{equation} \label{eqA.1} \left(1-\frac{1}{r}\right)W''+\frac{1}{r^2}W'+\frac{1}{r^2}W(1-W^2)=0. \end{equation} \subsection{Local solutions} \begin{proposition} Let $0<\alpha<1$ and $0\leq a \leq 1$. There exist $r_a>1$ and a unique solution $W \in C^{2,\alpha}([1,r_a])$ with boundary conditions $$W(1)=a, \quad W'(1)= b , \quad W''(1)=c,$$ where $$b= -a(1-a^2), \quad 2c=-b(1-3a^2).$$ \end{proposition} \begin{proof} We set $z= W'$ to write the equation as a first order system. We consider $$X= \{(w,z)\in C^{2,\alpha}([1,1+\epsilon])\times C^{1,\alpha}([1,1+\epsilon]), w(1)=a,w'(1)=z(1)=b,w''(1)=z'(1)=c\}$$ and the map $T:(w,z) \in X \mapsto (\wht w,\wht z)$ with \begin{align*} \wht w &= a+ \int_1^r z,\\ \wht z &= b- \int_1^r \frac{1}{\rho(\rho-1)}(z+w(1-w^2)). \end{align*} We first show that $T$ preserves the boundary conditions. We calculate \begin{align*}\wht z'=& -\frac{1}{r(r-1)}(z+w(1-w^2))\\ =&-\frac{1}{r(r-1)}\left(z +a(1-a^2) + \int_1^r w'(1-3w^2) \right)\\ =&-\frac{1}{r(r-1)}(z-b) -\frac{1}{r(r-1)}\int_1^r w'(1-3w^2) \end{align*} so $\wht z'(r)\rightarrow -z'(1) -w'(1)(1-3w^2(1))=-c+2c=c$ when $r\rightarrow 1$. We now show that $T$ is a contraction in $B_X(0,A)$ for $\epsilon$ small enough. For this the only difficulty is to estimate \begin{align*}
\frac{|\wht z'(r)-\wht z'(1)|}{|r-1|^\alpha}
\le & \frac{1}{|r-1|^\alpha}\left|-\frac{1}{r(r-1)}(z+w(1-w^2))-c\right|\\
\le&\frac{1}{|r-1|^\alpha}\Big|-\frac{1}{r(r-1)}\Big( b+\int_1^r (z'(\rho)-z'(1))d\rho+c(r-1)\\ &+a(1-a^2)+\int_1^r
(w(1-w^2))'(\rho)-(w(1-w^2))'(1) d\rho+b(1-3a^2)(r-1)\Big)-c\Big| \\
\le &\frac{1}{|r-1|^\alpha}\left|-\frac{1}{r(r-1)}
\int_1^r (z'(\rho)-z'(1))d\rho\right|\\
+& \frac{1}{|r-1|^\alpha}\left| -\frac{1}{r(r-1)} \left(-(r-1)c+ \int_1^r (w(1-w^2))'(\rho)-(w(1-w^2))'(1) d\rho\right)-c
\right|\\
\le & \frac{1}{r(r-1)^{1+\alpha}}\int_1^r |z'(\rho)-z'(1)| +c\frac{1}{(r-1)^{\alpha}}\left(1-\frac{1}{r}\right)\\
+&\frac{1}{(r-1)^{1+\alpha}}\|(w(1-w^2))'\|_{C^{1}} (r-1)^2\\
\le & \frac{1}{r(r-1)^{1+\alpha}}\|z'\|_{C^{0,\alpha}}\int_1^r |\rho-1|^\alpha + c\epsilon^{1-\alpha}+ C(\|w\|_{C^2}) \epsilon^{1-\alpha}\\
\le &\frac{1}{1+ \alpha}\|z'\|_{C^{0,\alpha}}
+c\epsilon^{1-\alpha}+ C (\|w\|_{C^{2}})\epsilon^{1-\alpha}. \end{align*} It follows that for $\epsilon$ small enough, $T$ is a contraction with contraction constant $\frac{1}{1+\alpha}+C(A)\epsilon^{1-\alpha}$, and consequently it has a unique fixed point. \end{proof}
As a corollary of the proof of local existence we obtain the continuity of the family of solutions $W_a$ with respect to the initial data $a$.
\begin{corollary}\label{cont} Let $\delta>0$. If $W_a$ is a solution on $[1,R]$ with $-1\leq W_a\leq1$ and $a'$ is sufficiently close to $a$, then $W_{a'}$ is defined on $[1,R]$ and we have
$$\|W_a-W_{a'}\|_{C^{2,\alpha}([1,R])} \leq \delta.$$ \end{corollary}
\subsection{Basic facts} \begin{lemma}\label{borne}
Let $0<B\leq 1$. As long as $W$ is a $C^2$ solution with $|W|\leq B$ we have $|W'|\leq B$. \end{lemma}
\begin{proof}
Assume that in $[1,r_0]$ we have $|W|\leq B$. Then, since $\vert W(1-W^2)\vert\leq \vert W\vert\leq B$, $$-\frac{B}{r^2}\leq (1-\frac{1}{r})W''+ \frac{1}{r^2}W'\leq \frac{B}{r^2}$$ and consequently $$\left[\frac{B}{r}\right]_1^r \leq \left[(1-\frac{1}{r})W'\right]_1^r \leq \left[-\frac{B}{r}\right]_1^r$$ so $$-B \leq W'(r)\leq B.$$ \end{proof}
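The integration step above rests on the elementary identity \begin{equation*} \frac{d}{dr}\left[\left(1-\frac{1}{r}\right)W'\right]=\left(1-\frac{1}{r}\right)W''+\frac{1}{r^2}W', \end{equation*} together with the fact that the boundary term at $r=1$ vanishes, since the factor $1-\frac{1}{r}$ is zero there.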
\begin{corollary}
The solution $W$ exists and is $C^{2,\alpha}$ as long as $|W|\leq 1$. \end{corollary}
We now consider the solution $W$ on $[1,r_a[$ where $r_a$ is the smallest $r$ such that $|W|=1$ if it exists, and $r_a= \infty$ otherwise.
\begin{lemma}\label{max} The solution $W$ cannot have a local minimum with $W>0$ nor a local maximum with $W<0$. \end{lemma} \begin{proof} If $W$ has a positive local minimum at $r_0$ then $W'(r_0)=0$ and $$\left(1-\frac{1}{r_0}\right)W''(r_0)+\frac{1}{r_0^2}W(r_0)(1-W^2(r_0))=0,$$ but $\frac{1}{r_0^2}W(r_0)(1-W^2(r_0))>0$ (the local minimum cannot be $1$), and $W''(r_0)\geq 0$, which is a contradiction. \end{proof}
\begin{lemma}\label{lim} The solution $W$ cannot have a limit $l \neq -1,0,1$. \end{lemma}
\begin{proof} Assume that $W\rightarrow l$ with $0<l<1$; the case $-1<l<0$ is treated similarly, using the fact that $-W$ is also a solution. Since $W \rightarrow l$ there exists a sequence $r_n \rightarrow \infty$ such that $W'(r_n) \rightarrow 0$. For $r_1$ big enough and $r_n \geq r_1$ we can write $$\left[\frac{l(1-l^2)+\epsilon}{r}\right]_{r_1}^{r_n} \leq \left[(1-\frac{1}{r})W'\right]_{r_1}^{r_n} \leq \left[\frac{l(1-l^2)-\epsilon}{r}\right]_{r_1}^{r_n}.$$ Letting $n\rightarrow \infty$ we obtain $$ \frac{l(1-l^2)-\epsilon}{r_1} \leq W'(r_1)\left(1-\frac{1}{r_1}\right) \leq \frac{l(1-l^2)+\epsilon}{r_1}$$ so $$ \frac{l(1-l^2)-\epsilon}{r_1-1} \leq W'(r_1)\leq \frac{l(1-l^2)+\epsilon}{r_1-1}$$ and there exists a constant $C$ such that for $r$ big enough $$(l(1-l^2)-\epsilon)\ln(r-1) \leq W(r)-C,$$ which is a contradiction since $W$ is bounded. \end{proof}
\subsection{More technical facts} \begin{proposition}\label{ham}
Let $0\leq a \leq 1$. There exist $\epsilon>0$ and $R>0$ such that if $W$ has a local extremum at some $r_0$ with $R<r_0<r_a$ and $1-\epsilon \leq |W(r_0)|<1$, then $r_a<\infty$ and $W$ has one and only one zero in $[r_0,r_a]$. \end{proposition}
\begin{proof} We consider the case $W(r_0)>0$. The other case can be treated similarly. We consider $$H= r^2\frac{(W')^2}{2} + \frac{W^2}{2}-\frac{W^4}{4}.$$ We calculate \begin{align*}H'(r)=&r (W')^2 +r^2\frac{1}{r(1-r)}(W'+W(1-W^2))W' + WW'-W^3W'\\ = &(W')^2\left(r +\frac{r^2}{r(1-r)}\right)+ WW'(1-W^2)\left(1+ \frac{r^2}{r(1-r)}\right)\\ =&(W')^2\left(r +\frac{r^2}{r(1-r)}\right)+ WW'(1-W^2)\frac{1}{1-r}. \end{align*} Let $R$ be such that for $r\geq R$ we have $$\left(r +\frac{r^2}{r(1-r)}\right)>0;$$ then for $r\geq R$ and $WW'\leq 0$ we have $H'(r)>0$. With our assumption on $r_0$ we can estimate $$H(r_0) \geq \frac{(1-\epsilon)^2}{2}-\frac{1}{4}\geq \frac{1}{4\delta}$$ for a suitable $\delta$ which will be specified later, and $\epsilon$ small enough. Since $W$ has a local maximum at $r_0$, there are two possibilities: \begin{itemize} \item We have $r_a= \infty$, $W$ is decreasing on $[r_0,+ \infty[$ and $W\rightarrow 0$ at $\infty$. \item There exists $r_1<r_a$ such that $W(r_1)= 0$ and $W$ is decreasing on $[r_0,r_1]$. \end{itemize} In the first case we obtain $H'(r)>0$ for all $r\geq r_0$, so $$H(r) \geq \frac{1}{4\delta}$$ and since $W\rightarrow 0$ the expression of $H$ yields the existence of $r_2$ such that for $r\geq r_2$ $$W'(r)^2 \geq \frac{1}{(2\delta+1)r^2}$$ so $W'(r)\leq -\frac{1}{\sqrt{2\delta+1}r}$ and $W(r) \leq W(r_2)-\frac{1}{\sqrt{2\delta+1}}\ln(r/r_2)$, which is a contradiction. Consequently we are in the second case.
We have $H(r_1)\geq H(r_0)$, so we can estimate $W'(r_1)$: $$W'(r_1)\leq -\frac{1}{\sqrt{2\delta}r_1}.$$ Moreover, when $-1\leq W\leq 1$ we have $W(1-W^2)\leq \frac{2}{3\sqrt{3}}$, and consequently we can write for $r_a>r_2>r_1$ $$\left[W'(r)\left(1-\frac{1}{r}\right)\right]_{r_1}^{r_2} \leq \left[-\frac{2}{3\sqrt{3}r}\right]_{r_1}^{r_2}$$ and consequently $$\left(1-\frac{1}{r_2}\right)W'(r_2)\leq -\left(1-\frac{1}{r_1}\right)\frac{1}{\sqrt{2\delta}r_1} +\frac{2}{3\sqrt{3}r_1}-\frac{2}{3\sqrt{3}r_2}.$$ For $r_1$ big enough (which is possible by choosing $R$ big enough) and $\delta$ close enough to $1$ (which is possible by choosing $\epsilon$ small enough) we have $$-\left(1-\frac{1}{r_1}\right)\frac{1}{\sqrt{2\delta}r_1} +\frac{2}{3\sqrt{3}r_1}\leq 0,$$ since $\frac{2}{3\sqrt{3}}<\frac{1}{\sqrt{2}}$, and consequently $$W'(r_2)\leq -\frac{2}{3\sqrt{3}(r_2-1)};$$ therefore $r_a<\infty$, $W(r_a)=-1$ and $W$ is decreasing on $[r_1,r_a]$. This concludes the proof of Proposition \ref{ham}. \end{proof}
\begin{proposition}\label{zero} Let $N>0$. Then for $a$ small enough, the solution $W$ has more than $N$ zeros on $[1,r_a]$. \end{proposition}
\begin{proof} To count the number of zeros of a solution $W$ we introduce the continuous function $\theta$ such that $$\tan(\theta)= \frac{W'}{W}$$ and $ -\frac{\pi}{2}<\theta(1)<\frac{\pi}{2}$. Then $W$ has $N$ zeros between $1$ and $r_0$ if and only if $$ -\frac{\pi}{2}-N\pi<\theta(r_0)<\frac{\pi}{2}-N\pi.$$ One can count the number of zeros in exactly the same way using the function $\psi$ defined by $$\tan(\psi)= \frac{rW'}{W}.$$ We estimate $\psi'$: \begin{align*}\psi'(r)=&\frac{1}{1+\left(\frac{rW'}{W}\right)^2}\frac{W(W'+rW'')-r(W')^2}{W^2}\\ =&\frac{WW'-W\frac{1}{r-1}(W'+W(1-W^2)) -r(W')^2}{W^2+(rW')^2}\\ =&\frac{WW'\frac{r-2}{r-1}-\frac{1}{r-1}W^2(1-W^2) -r(W')^2}{W^2+(rW')^2}. \end{align*} We first estimate
$$\frac{WW'\frac{r-2}{r-1}}{W^2+(rW')^2}\leq \frac{1}{r}\frac{|rWW'|}{W^2+(rW')^2}\leq \frac{1}{2r},$$ where the last inequality uses $2|rWW'|\leq W^2+(rW')^2$.
We assume that $|W|\leq \delta$. To estimate the other terms we consider three cases \begin{itemize}
\item $2|W|^2\leq |rW'|^2$. Then we have $$\psi'(r) \leq \frac{1}{2r} -\frac{r(W')^2}{W^2+ (rW')^2} \leq \frac{1}{2r}-\frac{2}{3r}= -\frac{1}{6r}.$$
\item $2|rW'|^2\leq |W|^2$. Then we have $$\psi'(r) \leq \frac{1}{2r} -\frac{W^2(1-W^2)}{(r-1)(W^2+(rW')^2)} \leq \frac{1}{2r}-\frac{2(1-\delta^2)}{3(r-1)} \leq \frac{3-4(1-\delta^2)}{6r}.$$
\item $\frac{1}{2}|W|^2\leq|rW'|^2\leq 2|W|^2$. Then we have $$\psi'(r) \leq \frac{1}{2r} -\frac{r(W')^2}{W^2+ (rW')^2}
-\frac{W^2(1-W^2)}{(r-1)(W^2+(rW')^2)}
\leq \frac{1}{2r}-\frac{1}{3r} -\frac{(1-\delta^2)}{3r}\leq \frac{3-4+2\delta^2}{6r}.$$ \end{itemize} If we take $\delta$ small enough we then have $$\psi'(r)\leq -\frac{1}{12r}.$$ Let now $R$ be such that $-\frac{1}{12}\ln(R)\leq -N\pi$. Thanks to Corollary \ref{cont}, we can find $a_0$ small enough such that for $0\leq a \leq a_0$ the solution exists on $[1,R]
$ and satisfies $|W|\leq \delta$ on this interval. Then $\psi(R)-\psi(1)\leq -N\pi$, so $W$ has at least $N$ zeros on $[1,R]$. This concludes the proof of Proposition \ref{zero}. \end{proof} \begin{corollary}\label{limbis} Let $W$ be a solution with $r_a= \infty$ and a finite number of zeros. Then $W \rightarrow \pm 1$. \end{corollary}
\begin{proof}
Because of Lemma \ref{max} a solution with a finite number of zeros has a finite limit. Because of Lemma \ref{lim} this limit must be $0$ or $\pm 1$. If it were $0$, we could find $R_0$ such that $|W|\leq \delta$ for $r\geq R_0$, with $\delta$ defined in the proof of Proposition \ref{zero}. Then for $r\geq R_0$ we have $$\psi'(r)\leq -\frac{1}{12r},$$ so $\psi$ is unbounded from below and $W$ would have an infinite number of zeros, a contradiction. \end{proof}
\subsection{Proof of the Theorem}
\begin{lemma} Let $X_n$ be the set of initial data $a$ such that the corresponding solution has $n$ zeros and satisfies $r_a<\infty$. Then $X_n$ is open, and if $\alpha$ is a limit point of $X_n$, the corresponding solution satisfies $r_\alpha= \infty$, has $m$ zeros with $m=n$ or $m=n-1$, and tends to $(-1)^m$ at infinity. \end{lemma}
\begin{proof} The fact that $X_n$ is open is a direct consequence of Corollary \ref{cont}. Let $\alpha$ be a limit point and let $a_i \in X_n$ be such that $a_i \rightarrow \alpha$.
Assume first that $W_\alpha$ is such that $r_\alpha<\infty$. Then we can compare the solutions on the fixed interval $[1,r_\alpha]$, so Corollary \ref{cont} implies that $W_\alpha$ has exactly $n$ zeros, so $\alpha \in X_n$, which is a contradiction. Consequently $r_\alpha= \infty$. This also implies that the sequence of $r_{a_i}$ is not bounded.
Assume now that there exists $R$ such that $W_\alpha$ has strictly more than $n$ zeros before $R$. Once again Corollary \ref{cont} yields a contradiction.
Assume now that $W_{\alpha}$ has $m$ zeros with $m<n$. Then thanks to Corollary \ref{limbis}, $W_{\alpha}$ tends to $(-1)^m$, and Proposition \ref{ham} implies that the $W_{a_i}$ have $m$ or $m+1$ zeros; since each $W_{a_i}$ has $n$ zeros, this forces $m=n-1$. \end{proof}
\begin{proof}[Proof of Theorem \ref{thstat}] Let $\wht X_n$ be the set of initial data $a$ such that the corresponding solution has less than $n$ zeros. Let $\alpha = \min(\wht X_n)$. Proposition \ref{zero} implies that $\alpha>0$. There are two cases. \begin{itemize} \item If $\alpha \in \wht X_n$, then $W_{\alpha}$ is a solution with $m\leq n$ zeros and $r_{\alpha}= \infty$. Then Corollary \ref{cont} and Proposition \ref{ham} imply that for $a$ close to $\alpha$ the solutions either have $m$ zeros, or have $m+1$ zeros and $r_a<\infty$. Consequently we have $m=n$, and considering a sequence $a_i<\alpha$ converging to $\alpha$ we have shown that $X_{n+1}$ is non-empty.
\item If $\alpha \notin \wht X_n$ then $W_{\alpha}$ must be a solution with $r_{\alpha}= \infty$ and $k>n$ zeros. But we have shown in the previous point that in a neighborhood of such a solution we can only have solutions with $k$ or $k+1$ zeros, so this case cannot occur.
\end{itemize}
We start the iteration with the function $$W_1 = \frac{c-r}{r+3(c-1)}, \quad c=\frac{3+\sqrt{3}}{2},$$ which is a special solution of \eqref{ym} with only one zero (see \cite{BCC}). Note also that $$W_1(1)= \frac{1+\sqrt{3}}{5+3\sqrt{3}}=a_1.$$ We then obtain at least one solution $-1\leq W_{a_n} \leq 1$ for each number of zeros $n$. This concludes the proof of Theorem \ref{thstat}. \end{proof}
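As a sanity check on the starting value, the boundary value of the special solution $W_1$ can be computed directly: with $c=\frac{3+\sqrt{3}}{2}$ we have \begin{equation*} W_1(1)=\frac{c-1}{1+3(c-1)}=\frac{\frac{1+\sqrt{3}}{2}}{1+\frac{3(1+\sqrt{3})}{2}}=\frac{1+\sqrt{3}}{5+3\sqrt{3}}=a_1. \end{equation*}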
\end{document}
\begin{document}
\title{Symplectic Matroids, Circuits, and Signed Graphs}
\author{Zhexiu Tu}
\address{University of the South\\ Department of Math and Computer Science\\ Sewanee, TN 37355}
\email{zhtu@sewanee.edu}
\subjclass[2000]{Primary 05B35; Secondary 05E15; 20F55; 05C25}
\keywords{Symplectic Matroids; Circuit Axiomatization; Signed Graphs}
\begin{abstract} One generalization of ordinary matroids is symplectic matroids. While symplectic matroids were initially defined by their collections of bases, there has been no cryptomorphic definition of symplectic matroids in terms of circuits. We give a definition of symplectic matroids by collections of circuits. As an application, we construct a class of examples of symplectic matroids from graphs in terms of circuits. \end{abstract}
\maketitle
\section{Introduction}
A matroid is a combinatorial structure that generalizes the notion of linear independence in vector spaces. There are many textbooks on this subject; we refer the reader to \cite{oxley} for more background on matroids. There are different cryptomorphic characterizations of matroids, for example in terms of bases, circuits, flats, etc. Below we give a matroid definition in terms of circuits.
\begin{Def} A finite \textit{matroid} $M$ is a pair $(E,\mathcal{C})$, where $E$ is a finite set (called the ground set) and $\mathcal {C}$ is a family of subsets of $E$ (called the circuits) with the following properties: \begin{enumerate} \item[(C1)] $\emptyset \notin \mathcal{C}$. \item[(C2)] $C_1 , C_2 \in \mathcal{C}$ and $C_2 \subseteq C_1$ implies $C_2 = C_1$. \item[(C3)] $C_1, C_2 \in \mathcal{C}$ with $C_1 \neq C_2$ and $e \in C_1 \cap C_2$ implies there exists some $C_3 \in \mathcal{C}$ such that $C_3 \subset (C_1 \cup C_2 ) - \{ e \}$. \end{enumerate} \end{Def}
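A familiar illustration of these axioms, which also motivates the graph construction later in the paper, is the cycle matroid of a graph: for a finite graph with edge set $E$, the edge sets of the simple cycles form the circuits of a matroid on $E$. For instance, for two vertices joined by three parallel edges $e_1, e_2, e_3$, the circuits are \begin{equation*} \mathcal{C}=\bigl\{\{e_1,e_2\},\, \{e_2,e_3\},\, \{e_1,e_3\}\bigr\}, \end{equation*} and eliminating the common edge $e_2$ from the first two circuits leaves $\{e_1,e_3\}$, which indeed contains a circuit, as (C3) requires.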
We refer to matroids as \textit{ordinary matroids}, to distinguish them from different generalizations of matroids, such as \textit{symplectic matroids}. Symplectic matroids are obtained when we replace the symmetric group with the \textit{hyperoctahedral group}, a group of symmetries of the $n$-cube $[-1, 1]^n$. Geometrically, symplectic matroids are related to vector spaces endowed with bilinear forms, although in a way different from the way ordinary matroids are related to vector spaces. Symplectic matroids are a generalization of the following equivalent notions: $\Delta$-matroids \cite{bouch}, metroids \cite{bouch}, and 2-matroids \cite{bouch}.
Symplectic matroids were defined in \cite{borov} by Borovik, Gelfand and White using the maximality property of bases. In 2003, T. Chow defined symplectic matroids in terms of independent sets and proved the equivalence between the two definitions in \cite{chow}, where he also posed a complicated exchange property on independent sets and proposed a conjectural exchange property on the collection of bases. We know of no progress toward defining symplectic matroids using any other axiomatizations similar to those of ordinary matroids.
In \cite{borov}, a special type of symplectic matroids, called \textit{Lagrangian matroids}, which turn out to be equivalent to $\Delta$-matroids, was studied. Borovik, Gelfand and White provided the circuit axiomatizations of Lagrangian matroids and proved the equivalence between the definitions. However, Lagrangian matroids are just a special case of all symplectic matroids. At present there are no circuit axiomatizations of symplectic matroids.
In this paper, we define symplectic matroids in terms of circuits. Some of these axioms resemble circuit axioms for ordinary matroids, including the circuit elimination axiom. We prove the equivalence between our definition and the definition of Borovik et al.\ in \cite{borov}. As an application of this result, we show how every finite undirected multigraph gives rise to a symplectic matroid in terms of circuits.
It is worth recalling that Lagrangian matroids are the class of Coxeter matroids obtained by taking $W = BC_n \cong S_2 \wr S_n$ and $P = S_n$. Hence they are the symplectic matroids in which each basis has the maximal possible size $n$; in other words, they are the full-rank symplectic matroids. Lagrangian matroids also appear in the literature under the names of symmetric matroids \cite{bouch30}, $\Delta$-matroids \cite{bouch30}, metroids \cite{bouch35}, and 2-matroids \cite{bouch34}.
The structure of this paper is as follows. In Section~\ref{sec2}, we give basic definitions and terms that we will use in our proofs. In Section~\ref{sec3}, we give an alternative definition, or axiomatization, of a symplectic matroid in terms of circuits (Theorem~\ref{thm:main}), which is our main theorem. In Section~\ref{sec4}, we show that symplectic matroids always satisfy the circuit axioms that we have defined in Section~\ref{sec3}. In Section~\ref{sec5}, we show the converse: our circuit axioms guarantee symplectic matroids. Together, Sections~\ref{sec4} and \ref{sec5} prove Theorem~\ref{thm:main}. In Section~\ref{sec6}, we apply Theorem~\ref{thm:main} and construct a class of examples of symplectic matroids from graphs in terms of circuits.
\section{Background and definitions} \label{sec2} In this section we give the basic definitions of symplectic matroids. Let \[ [n] = \{1,2,\ldots,n \} \textrm{ and } [n]^* = \{1^*,2^*,\ldots,n^* \} \] where the map $*: [n] \to [n]^*$ is defined by $i \mapsto i^*$ and $*: [n]^* \to [n]$ is defined by $i^* \mapsto i$. We apply $*$ to sets and collections of sets, for example $C^*$ and $\mathcal{C}^*$. Let \[ E_{\pm n} : = [n] \cup [n]^*\] be the new ground set. Thus $i^{**} = i$; that is, $*$ is an involutive permutation of $E_{\pm n}$. For this reason we sometimes write $i^*$ as $-i$, and $E_{\pm n}$ can be thought of as the set $\{ -n, -(n-1) , \ldots, -1 , 1, 2, \ldots , n \}$. We say a set $S$ is \textit{admissible} if $S \cap S^* = \emptyset$. A permutation $\omega$ of $E_{\pm n}$ is \textit{admissible} if $\omega (x^*) = \omega (x)^*$ for all $x \in E_{\pm n}$. An ordering $<$ on $E_{\pm n}$ is \textit{admissible} if and only if $<$ is a linear ordering and $i < j$ implies $j^* < i^*$. Denote by $E_k$ the collection of all admissible $k$-subsets of $E_{\pm n}$, for $k \leq n$. If $<$ is an arbitrary linear ordering on $E_{\pm n}$, it induces a partial ordering (which we also denote by the same symbol $<$) on $E_k$: if $A, B \in E_k$ and \[ A := \{ a_1 < a_2 < \ldots < a_k \} \textrm{ and } B := \{b_1 < b_2 < \ldots < b_k \},\] we set $A \leq B$ if \[ a_1 \leq b_1 , \,\, a_2 \leq b_2, \,\, \ldots , \,\, a_k \leq b_k. \]
We can visualize an admissible ordering as a signed permutation $\sigma$ of $[n]$ followed by the starred reversal of $\sigma$. For example, when $n = 3$, \[ 1 < 3 < 2^* < 2 < 3^* < 1^* \] is an admissible ordering: the first half is $1, 3, 2^*$, and the second half is its reversal with stars applied.
\begin{Def} If $\mathcal{B}$ is a non-empty family of equi-numerous admissible subsets of $E_{\pm n}$ with the property that for every admissible ordering $<$ of $E_{\pm n}$ the collection $\mathcal{B}$ contains a unique maximal element (the \textit{Maximality Property}), then $M = (E_{\pm n}; \mathcal{B})$ is called a symplectic matroid, and $\mathcal{B}$ is called the collection of \textit{bases} of $M$. \end{Def}
Below is an example of a family of admissible sets which is not the collection of bases of a symplectic matroid.
\begin{Ex} Let $n = 3$ and $k = 2$, and let $\mathcal{B} = \{12, 2^*3, 13 \}$, where we abbreviate $\{a, b\}$ as $ab$. Consider the admissible ordering $1 < 3 < 2^* < 2 < 3^* < 1^*$. Then $12$ and $2^* 3$ are incomparable in the induced ordering on $E_2$, and both are larger than $13$. Hence $\mathcal{B}$ has two maximal elements rather than a unique one, so it cannot be a symplectic matroid. \end{Ex}
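The failure of the Maximality Property in this example can also be confirmed by brute force. The following sketch (ours, not part of the paper) encodes the induced partial ordering on $E_2$ in Python, writing $i^*$ as $-i$:

```python
# Brute-force check of the example: under the admissible ordering
# 1 < 3 < 2* < 2 < 3* < 1* (writing i* as -i), the family
# B = {12, 2*3, 13} has two maximal elements, so the Maximality
# Property fails.
order = [1, 3, -2, 2, -3, -1]
rank = {x: i for i, x in enumerate(order)}

# Sanity check that the ordering is admissible: i < j implies j* < i*.
assert all(rank[-y] < rank[-x]
           for x in order for y in order if rank[x] < rank[y])

def leq(A, B):
    """Induced partial ordering on E_2: compare sorted elements pairwise."""
    a = sorted(A, key=rank.get)
    b = sorted(B, key=rank.get)
    return all(rank[x] <= rank[y] for x, y in zip(a, b))

family = [{1, 2}, {-2, 3}, {1, 3}]
assert not leq({1, 2}, {-2, 3}) and not leq({-2, 3}, {1, 2})  # incomparable
assert leq({1, 3}, {1, 2}) and leq({1, 3}, {-2, 3})           # 13 below both

maximal = [X for X in family if not any(X != Y and leq(X, Y) for Y in family)]
assert len(maximal) == 2  # no unique maximal element
```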
\section{Circuits} \label{sec3}
Let $M = (E_{\pm n}; \mathcal{B})$ be a symplectic matroid, where $\mathcal{B}$ is the collection of bases of $M$. Let $\mathcal{C}$ be the collection of minimal admissible subsets of $E_{\pm n}$ not contained in any member of $\mathcal{B}$. The collection $\mathcal{C}$ is called the collection of \textit{circuits} of $M$. An admissible set containing no circuit is called an \textit{independent set}; otherwise, it is \textit{dependent}.
We let $A \Delta B$ denote the \textit{symmetric difference} of two sets $A$ and $B$, defined by $A \Delta B = A \cup B - A \cap B$. We next give the important definition of the term \textit{span}. \begin{Def} Let $\mathcal{C}$ be a collection of admissible subsets of $E_{\pm n}$. Then an admissible set $P$ spans $x \in E_{\pm n}$ if there exists some $J \in \mathcal{C}$ such that $J - P = \{x\}$. \end{Def}
A characterization of $\mathcal{C}$ could be used as an alternative definition or axiomatization of a symplectic matroid. This is precisely what the following theorem provides, followed by an example.
\begin{restatable}{thm}{main} \label{thm:main} Let $\mathcal{B}$ be the collection of bases of a symplectic matroid. Let $\mathcal{C}$ be the collection of minimal admissible subsets of $E_{\pm n}$ not contained in any member of $\mathcal{B}$. Then $\mathcal{C}$ satisfies the following four properties. \begin{enumerate} \item[(SC1)] $\emptyset \notin \mathcal{C}$. \item[(SC2)] If $C_1,C_2 \in \mathcal{C}$ with $C_1 \subseteq C_2$, then $C_1 = C_2$. \item[(SC3)] If $C_1,C_2 \in \mathcal{C}$ with $C_1 \neq C_2$, $x \in C_1 \cap C_2$ and $C_1 \cup C_2$ is admissible, then there exists some $C \in \mathcal{C}$ with $C \subseteq (C_1 \cup C_2)-\{x\}$.
\item[(SC4)] Let $P$ be an admissible subset of $E_{\pm n}$ and $B \in \mathcal{B}$. If $|P| < |B|$, then $P$ does not span every element of $E_{\pm n}-(P \cup P^*)$. \end{enumerate} Conversely, let $\mathcal{C}$ be a collection of admissible subsets of $E_{\pm n}$, and let $\mathcal{B}$ be the collection of maximal admissible subsets of $E_{\pm n}$ containing no member of $\mathcal{C}$. If $\mathcal{C}$ satisfies (SC1)--(SC4), then $\mathcal{B}$ is the collection of bases of a symplectic matroid. \end{restatable}
\begin{Rem} (SC1), (SC2) and (SC3) resemble the circuit axioms of ordinary matroids. However, they do not suffice to guarantee the equi-cardinality of the bases of a symplectic matroid; that is what (SC4) provides. \end{Rem}
\begin{Ex} Let $n = 4$ and $\mathcal{B} = \{\{1, 2, 3\}, \{1^*, 2^*, 3\}, \{1, 3, 4\}, \{2^*, 3, 4\} \}$. Then \[ \mathcal{C} = \{ \{3^*\}, \{4^*\}, \{1^*, 2\}, \{1, 2^*\}, \{1^*, 4\}, \{2, 4\} \}\] is the collection of minimal admissible subsets not contained in any member of $\mathcal{B}$. Meanwhile, $\mathcal{B}$ is the collection of maximal admissible subsets containing no member of $\mathcal{C}$.
We can check directly that $\mathcal{C}$ satisfies (SC1), (SC2) and (SC3). Every admissible set $P$ with $|P| = 4$ contains some $C \in \mathcal{C}$. Moreover, no admissible set $Q = \{a , b \}$ with $a,b \in [4] \cup [4]^*$ spans every element of $E_{\pm 4} - (Q \cup Q^*)$, so (SC4) holds as well. \end{Ex}
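This bookkeeping is easy to verify mechanically. The following Python sketch (ours, not part of the paper) recomputes the circuits of the example from the bases, and checks the converse description of $\mathcal{B}$ as the maximal admissible sets containing no circuit, again writing $i^*$ as $-i$:

```python
from itertools import combinations

# Ground set E_{+-4}: elements +-1, ..., +-4, with i* written as -i.
E = [i for i in range(1, 5)] + [-i for i in range(1, 5)]
bases = [{1, 2, 3}, {-1, -2, 3}, {1, 3, 4}, {-2, 3, 4}]

def admissible(S):
    """A set is admissible if it contains no pair {i, i*}."""
    return all(-x not in S for x in S)

# Circuits: minimal admissible subsets contained in no basis.
subsets = [set(c) for k in range(1, 5) for c in combinations(E, k)]
dependent = [S for S in subsets
             if admissible(S) and not any(S <= B for B in bases)]
circuits = [S for S in dependent if not any(D < S for D in dependent)]

expected = [{-3}, {-4}, {-1, 2}, {1, -2}, {-1, 4}, {2, 4}]
assert all(C in circuits for C in expected) and len(circuits) == len(expected)

# Conversely, the bases are exactly the maximal admissible subsets
# containing no member of the circuit collection.
independent = [S for S in subsets
               if admissible(S) and not any(C <= S for C in circuits)]
maximal = [S for S in independent if not any(S < T for T in independent)]
assert all(B in maximal for B in bases) and len(maximal) == len(bases)
```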
\section{Symplectic matroids satisfying circuit axioms} \label{sec4}
Throughout this section, $\mathcal{B}$ is the collection of bases of a symplectic matroid $M$, and $\mathcal{C}$ is the collection of minimal admissible subsets of $E_{\pm n}$ not contained in any member of $\mathcal{B}$. \begin{Lemma} \label{lemma1} Let $B \in \mathcal{B}$, and let $x \notin B$ be such that $B \cup \{ x \}$ is admissible. Then there exists a unique circuit $C \subseteq B \cup \{x\}$, given by \[C=\{x\} \cup \{b \in B \mid B \cup \{x\} - \{b\} \in \mathcal{B}\}.\] \end{Lemma} \begin{proof}
Let $B \in \mathcal{B}$ and $x \notin B$ be such that $B \cup \{x\}$ is admissible. Then $|B \cup \{x\}| > |B|$, so $B \cup \{x\}$ is dependent and therefore contains a circuit. Since $\{b \in B \mid B \cup \{x\} - \{b\} \in \mathcal{B}\} \subseteq B$ and $B \cup \{x\}$ is admissible, the set $C$ is admissible as well.
The expression for the unique circuit $C$, and the proof of its uniqueness, are the same for symplectic matroids as for ordinary matroids; see, for example, \cite{minieka}. \end{proof}
\begin{Lemma} \label{lemma2} Let $C_1$ and $C_2$ be two distinct circuits of $M$ such that $C_1 \cup C_2$ is admissible, and let $x \in C_1 \cap C_2$. Then for every $c\in C_1 \Delta C_2$, there exists some $C_c \in \mathcal{C}$ such that $c \in C_c \subseteq C_1 \cup C_2 - \{x\}$. \end{Lemma} \begin{proof} Suppose first that $C_1 \cup C_2 - \{x\}$ is independent, say $C_1 \cup C_2 - \{x\} \subseteq B$ for some $B \in \mathcal{B}$. We know $x \notin B$, for otherwise $C_1 \subseteq B$. Hence $C_1,C_2 \subseteq C_1 \cup C_2 \subseteq B \cup \{x\}$. Since $B$ is a basis and $B \cup \{x\}$ is admissible, $B \cup \{x\}$ is dependent, so $B \cup \{x\}$ contains a unique circuit by Lemma~\ref{lemma1}. That contradicts $C_1$ and $C_2$ being distinct. Thus $C_1 \cup C_2 - \{x\}$ is dependent.
We now show the existence of such a circuit $C_c$; this proof resembles that in \cite{borov}. We proceed by induction on $|C_1 \cup C_2|$. For the base step, consider $C_1 = \{c_1,x\}$ and $C_2 = \{c_2,x\}$; then $C=\{c_1,c_2\}=C_1 \Delta C_2$ must be a circuit. For the inductive step, let $c \in C_2 - C_1$ without loss of generality. We have shown that there exists a circuit $C \subseteq (C_1 \cup C_2) - \{x\}$. Suppose $c \notin C$. Since $C \not \subseteq C_2$, there exists some $y \in (C \cap C_1) - C_2$. We notice that $x \in C_1 - C$ and $c \notin C \cup C_1$. Thus $C \cup C_1 \subset C_1 \cup C_2$, and we can apply the induction hypothesis to $C$, $C_1$ and the elements $y, x$ to find a circuit $C_3$ with $x \in C_3 \subseteq (C \cup C_1) - \{y\}$. Since $y \notin C_2$ and $y \notin C_3$, we have $C_3 \cup C_2 \subset C_1 \cup C_2$. Moreover, $x \in C_2 \cap C_3$ and $c \in C_2 - C_3$. Thus, applying the induction hypothesis again, we get a circuit $C_c$ with $c \in C_c \subseteq (C_3 \cup C_2) - \{x\} \subseteq (C_1 \cup C_2) - \{x\}$. \end{proof}
\begin{Th} \label{theorem1}
Let $P$ be an admissible subset of $E_{\pm n}$ and $B \in \mathcal{B}$. If $|P| < |B|$, then $P$ does not span every element of $E_{\pm n}-(P \cup P^*)$. \end{Th} \begin{proof}
Suppose, for contradiction, that there exists some $P$ with $|P| < |B| = k$ spanning every element of $E_{\pm n}-(P \cup P^*)$, and take $P$ minimal with this property, so that no proper subset $P_0$ of $P$ spans every element of $E_{\pm n}-(P_0 \cup P_0^*)$. Without loss of generality, suppose $P = \{ 1 , 2, \ldots, k-1 \}$. Then $P$ spans every element of $\{ k, k+1,\ldots,n \} \cup \{ k, k+1,\ldots,n \}^*$. Thus there exist some $J_{k+j} \in \mathcal{C}$ such that \[ J_{k+j} - P = \{ k+j \} \] for all $j = 0 , \ldots, n-k$, and $J_{(k+j)^*} \in \mathcal{C}$ such that \[ J_{(k+j)^*} - P = \{ (k+j)^* \} \] for all $j = 0 , \ldots, n-k$. Now $P$ cannot be independent: $P \cup \{x \}$ is dependent for every $x \in E_{\pm n}-(P \cup P^*)$, so an independent $P$ would be a basis of size $k-1$, a contradiction. So $P$ is dependent. Thus $P \not \subseteq J_{k+j}$ and $P \not \subseteq J_{(k+j)^*}$ for all $j$.
Suppose $P$ is itself a circuit. (The proof when $P$ properly contains a circuit is similar.) There exists some $z \in P$ such that $z \notin J_{n^*}$. Let $S : = P - \{ z \}$. Then $J_{n^*} - \{ n^* \} \subseteq S$, so $S \cup \{ n^* \}$ is dependent because $J_{n^*} \subseteq S \cup \{ n^* \}$. For any $x \in E_{\pm n}-(P \cup P^*)$ with $x \neq n^*$: if $z \in J_x$, then by Lemma~\ref{lemma2} there exists some circuit $C \subseteq J_x \cup P - \{ z \} \subseteq S \cup \{ x \}$, so $S \cup \{ x \}$ is dependent; if $z \notin J_x$, then $J_x \subseteq S \cup \{ x \}$, so again $S \cup \{ x \}$ is dependent. Hence $S \cup \{ x \}$ is dependent for all $x \in E_{\pm n}-(P \cup P^*)$. Moreover, $S \cup \{ z \} = P$ is dependent.
We are left with $S \cup \{ z^* \}$. If $S \cup \{ z^* \}$ is dependent, then $S$ is maximally independent and hence a basis; but $|S| = k-2$, which contradicts $|B| = k$. If $S \cup \{ z^* \}$ is independent, then $S \cup \{ z^* \}$ is maximally independent and hence a basis; but $|S \cup \{ z^* \}| = k-1$, which again contradicts $|B| = k$. Therefore, there exists no $P$ with $|P| < |B|$ that spans every element of $E_{\pm n}-(P \cup P^*)$. \end{proof}
Below we state the \textit{Symmetric Exchange Axiom}.
For every $X,Y \in \mathcal{B}$, if $i \in Y - X$, then there exists a $j \in X-Y$ such that $X \cup \{i\} - \{j\} \in \mathcal{B}$.
We show this Symmetric Exchange Axiom leads to the Maximality Property of symplectic matroids.
\begin{Th} \label{theorem2} If $\mathcal{B}$ is a collection of admissible sets of cardinality $k$ in $[n] \cup [n]^\ast$, where $k \leq n$, then the Symmetric Exchange Axiom guarantees the Maximality Property. \end{Th} \begin{proof} This proof resembles that in \cite{borov}. Assume $\mathcal{B}$ satisfies the Symmetric Exchange Axiom, and suppose, for contradiction, that $X, Y$ are two distinct maximal members of $\mathcal{B}$ with respect to some admissible ordering. Let $i$ be the maximal element of $X \Delta Y$; without loss of generality, $i \in Y$. Then there exists some $j \in X - Y$ such that $X \cup \{i\}-\{j\} \in \mathcal{B}$. The sets $X$ and $X \cup \{i\} - \{j\}$ differ in a single element, so they are comparable and distinct. Since $i$ is the maximal element of $X \Delta Y$ and $j \in X \Delta Y$ with $j \neq i$, we have $j < i$, hence $X \cup \{i\} - \{j\}$ is greater than $X$. This contradicts the maximality of $X$. Therefore, the Symmetric Exchange Axiom implies the Maximality Property. \end{proof}
\section{Circuit Axioms leading to symplectic matroids} \label{sec5}
Now we prove the other direction of the main theorem. Lemma~\ref{lemma1}, Lemma~\ref{lemma2} and Theorem~\ref{theorem1} show that when $\mathcal{B}$ is the collection of bases of a symplectic matroid, (SC1)--(SC4) hold. Now suppose $\mathcal{C}$ is a collection satisfying axioms (SC1)--(SC4), and let $\mathcal{B}$ be the collection of maximal admissible subsets of $E_{\pm n}$ containing no member of $\mathcal{C}$. We prove the following claims.
\noindent \textbf{Claim 1}\\ The bases in $\mathcal{B}$ are equi-numerous.
Suppose $B_1 , B_2 \in \mathcal{B}$ with $|B_1| < |B_2|$. By Axiom (SC4), there exists an $x \in E_{\pm n} - (B_1 \cup B_1^*)$ that $B_1$ does not span. Then $B_1 \cup \{ x \}$ is admissible and contains no circuit: any circuit $C \subseteq B_1 \cup \{x\}$ would satisfy $C - B_1 = \{x\}$, meaning that $B_1$ spans $x$. This contradicts the maximality of the basis $B_1$.
\noindent \textbf{Claim 2}\\ Let $B \in \mathcal{B}$, and let $x \notin B$ be such that $B \cup \{ x \}$ is admissible. Then there exists a unique circuit $C \subseteq B \cup \{x\}$, given by \[C=\{x\} \cup \{b \in B \mid B \cup \{x\} - \{b\} \in \mathcal{B}\}. \]
To prove Claim 2, let $x \notin B$ be such that $B \cup \{ x \}$ is admissible. By the maximality of $B$, there exists some $D \in \mathcal{C}$ such that $D \subseteq B \cup \{ x \}$. If $\{ x \} \in \mathcal{C}$, then we are done. Otherwise let \[C=\{x\} \cup \{b \in B \mid B \cup \{x\} - \{b\} \in \mathcal{B}\}. \] We want to show $C = D$.
Since $D \not \subseteq B$, we know $x \in D$. Now let $y \in D - \{ x \}$; then $y \in B$. Let $A := B \cup \{ x \} - \{ y \}$. Suppose, for contradiction, that $A$ contains some circuit $E \in \mathcal{C}$. Certainly $x \in E$, since $E \not \subseteq B$. If $E$ and $D$ were distinct, then by Axiom (SC3) there would exist some circuit $F$ with $F \subseteq E \cup D - \{ x \}$. But then $F \subseteq B$, a contradiction. Hence $E = D$. However $y \notin E$ while $y \in D$, so we again reach a contradiction.
Hence $A$ contains no circuit, and since $|A| = |B|$, Claim 1 gives $A \in \mathcal{B}$. So $y \in C$, and therefore $D \subseteq C$. To show $C \in \mathcal{C}$, we must show that $C - \{ z\}$ is independent for all $z \in C$. If $z = x$, then $C - \{z \} \subseteq B \in \mathcal{B}$. Otherwise $C - \{ z \} \subseteq B \cup \{x \} - \{z \}$, which is a member of $\mathcal{B}$ by the definition of $C$. Therefore $C,D \in \mathcal{C}$, and by Axiom (SC2) we have $D = C$. Claim 2 is proved.
\noindent \textbf{Claim 3}\\ Let $A, B \in \mathcal{B}$ with $a \in A - B$. We show that there exists $b \in B-A$ such that $B \cup \{ a\} - \{ b\} \in \mathcal{B}$. Claim 2 says that there exists a circuit $C \subseteq B \cup \{a\}$ such that \[C - \{a\} = \{b \in B \mid B \cup \{a\} - \{b\} \in \mathcal{B}\}. \] However, $C - \{a\}$ is never empty, because otherwise $C = \{a\} \subseteq A$ would be independent, a contradiction. So the Symmetric Exchange Property is satisfied, which yields the Maximality Property of symplectic matroids by Theorem~\ref{theorem2}.
\section{From graphs to symplectic matroids} \label{sec6}
In this section, a \textit{graph} refers to a finite undirected multigraph. Inspired by Theorem 2 in \cite{chow}, we apply Theorem~\ref{thm:main} to show how every graph gives rise to a symplectic matroid.
Let $G$ be a graph with $n$ edges $e_1, e_2, \ldots, e_n$. We define a family $\mathcal{C}(G)$ of admissible subsets of $E_{\pm n}$ as follows. If $S \subseteq E_{\pm n}$ is admissible, let \[ G(S) : = \{ e_i \mid i \in S \textrm{ or } i^* \in S \}. \] We let an admissible set $S$ be a member of $\mathcal{C}(G)$ if and only if \begin{enumerate} \item either $G(S)$ is a (single) cycle and there is an even number of edges $e_i$ in $G(S)$ such that $i^* \in S$ (the parity of $G(S)$, i.e.\ the product of the signs of these edges, is thus positive); \item or $G(S)$ is a union of (single) cycles, there is an even number of edges $e_i$ in $G(S)$ such that $i^* \in S$, and each cycle contains an odd number of edges with negative signs. \end{enumerate}
We use some notions and terms from \cite{Zas}, which we review now. A \textit{signed graph} is a graph with each edge given either a plus sign or a minus sign. A cycle in a signed graph is \textit{balanced} if the product of the signs of the corresponding edges is positive and is \textit{unbalanced} otherwise. Every signed graph $\Gamma$ gives rise to an ordinary matroid $M(\Gamma)$ in the following manner. The ground set of $M(\Gamma)$ is the signed edge set of $\Gamma$, and a set of edges is independent if every connected component is either a tree or a unicyclic graph whose unique cycle is unbalanced. \cite[Theorem~5.1]{Zas} shows that $M(\Gamma)$ is a matroid. Notice that a basis of $M(\Gamma)$ can have as many elements as $G$ has vertices, but not more.
To phrase this another way, our construction of $\mathcal{C}(G)$ is the union of all $M(\Gamma)$ as $\Gamma$ ranges over all $2^n$ signed graphs with underlying graph $G$.
\begin{Th} For every graph $G$, $\mathcal{C}(G)$ is the collection of circuits of a symplectic matroid. \end{Th} \begin{Rem} The symplectic matroid we construct from graph $G$ in terms of circuits is the same matroid constructed differently in terms of independent sets by Theorem 2 in \cite{chow}. \end{Rem}
\begin{proof} It is easy to check that members of $\mathcal{C}(G)$ satisfy (SC1) and (SC2). For (SC3), let $C_1 , C_2 \in \mathcal{C}(G)$ and suppose first that $C_1 , C_2$ are single cycles. If $C_1 \cap C_2 \neq \emptyset$, say $e_1 \in C_1 \cap C_2$, then there exists a cycle $C_3 = C_1 \cup C_2 - C_1 \cap C_2 \subseteq C_1 \cup C_2 - \{ e_1 \}$. For any $e \in C_1 \cap C_2$, deleting such an edge does not change the parity of $C_3$, because it is deleted twice, once from $C_1$ and once from $C_2$. Thus there is an even number of negative edges in $C_3$. If either or both of $C_1$ and $C_2$ are unions of (single) cycles, the proof is analogous.
For (SC4), recall that \cite[Theorem~5.1]{Zas} shows that $M(\Gamma)$ is a matroid in which a set of edges is independent if every connected component is either a tree or a unicyclic graph whose unique cycle is unbalanced. Notice that a basis of $M(\Gamma)$ cannot have more elements than $G$ has vertices. Therefore if an admissible subset $P$ of $E_{\pm n}$ satisfies $|P| < |B|$, then $|P| \leq \# V(G) - 1$; in other words, $G(P)$ is a subset of a spanning tree in $G$. Therefore if $P$ spans $x \in E_{\pm n} - (P \cup P^*)$, there exists a unique $J \in \mathcal{C}(G)$ such that $J - P = \{ x \}$. Considering the parity of $J$, $P$ cannot span $x^*$ at the same time. Thus $P$ does not span every element of $E_{\pm n} - (P \cup P^*)$.
Therefore $\mathcal{C}(G)$ is the collection of circuits of a symplectic matroid. \end{proof}
\end{document} |
\begin{document}
\title{Liouville theorem for Pseudoharmonic maps from Sasakian manifolds\footnote{Supported by NSFC grant No. 11271071} \footnote{Key words: Liouville theorem, pseudoharmonic map, sub-gradient estimate, Sasakian manifold, Heisenberg group} \footnote{2010 Mathematics Subject Classification. Primary: 32V05, 32V20. Secondary: 58E20}} \author{Yibin Ren\footnote{School of Mathematical Science, Fudan University, Shanghai 200433, P.R.China. E-mail: allenrybqqm@hotmail.com} \and Guilin Yang \and Tian Chong} \date{} \maketitle
\begin{abstract} In this paper, we derive a sub-gradient estimate for pseudoharmonic maps from noncompact complete Sasakian manifolds satisfying the CR sub-Laplace comparison property to simply-connected Riemannian manifolds with nonpositive sectional curvature. As an application, we obtain some Liouville theorems for pseudoharmonic maps. In the Appendix, we modify the method and apply it to harmonic maps from noncompact complete Sasakian manifolds. \end{abstract}
\section{Introduction}
In \cite{y}, S. T. Yau derived a well-known gradient estimate for harmonic functions on complete noncompact Riemannian manifolds. By this estimate, he obtained a Liouville theorem for positive harmonic functions on Riemannian manifolds with nonnegative Ricci curvature. In \cite{c}, S. Y. Cheng generalized the method in \cite{y} to harmonic maps. In \cite{ckt}, S. C. Chang, T. J. Kuo and J. Tie modified the method in \cite{y} and applied it to positive pseudoharmonic functions on noncompact Sasakian $(2n+1)$-manifolds. They introduced a new auxiliary function and successfully dealt with the troublesome term in the Bochner-type formula. As a result, they obtained a sub-gradient estimate and a Liouville theorem for positive pseudoharmonic functions.
In this paper, inspired by \cite{c, ckt}, we derive a sub-gradient estimate for pseudoharmonic maps from noncompact complete Sasakian manifolds which satisfy the CR sub-Laplace comparison property (Theorem \ref{nsg}). We then obtain a Liouville theorem for pseudoharmonic maps (Theorem \ref{cse}). In the Appendix, we apply the method to harmonic maps from noncompact complete Sasakian manifolds and derive a Reeb energy density estimate (Theorem \ref{csh}). From this estimate, we can prove a Liouville theorem for harmonic maps on Sasakian manifolds.
\section{Basic Notions} \label{crb} A smooth manifold $M$ of real dimension $(2n+1)$ is said to be a CR manifold, if there exists a smooth rank $n$ complex subbundle $T_{1,0} M \subset TM \otimes \mathbb{C}$ such that $$ T_{1,0} M \cap T_{0,1} M =0 $$ and $$ [\Gamma (T_{1,0} M), \Gamma (T_{1,0} M)] \subset \Gamma (T_{1,0} M) $$ where $T_{0,1} M = \overline{T_{1,0} M}$ is the complex conjugate of $T_{1,0} M$. If $M$ is a CR manifold, then its Levi distribution is the real subbundle $HM = Re \: \{ T_{1,0}M \oplus T_{0,1}M \}$.
It carries a complex structure $J_b : HM \rightarrow HM$, given by $J_b (X+\overline{X})= \sqrt{-1} (X-\overline{X})$ for any $X \in T_{1,0} M$. Since $HM$ is naturally oriented by the complex structure, $M$ is orientable if and only if there exists a global nonvanishing 1-form $\theta$ such that $\theta (HM) =0 $. Any such section $\theta$ is referred to as a pseudo-Hermitian structure on $M$. The Levi form $L_\theta $ is given by $$L_\theta (Z,\overline{W} ) = - \sqrt{-1} d \theta (Z, \overline{W}) $$ for any $Z , W \in T_{1, 0} M$. \begin{dfn} An orientable CR manifold $M$ with a pseudo-Hermitian structure $\theta$, denoted by $(M, HM, J_b, \theta)$, is called a pseudo-Hermitian manifold. A pseudo-Hermitian manifold $(M, HM, J_b, \theta)$ is said to be a strictly pseudoconvex CR manifold if its Levi form $L_\theta$ is positive definite. \end{dfn}
If $(M, HM, J_b, \theta)$ is strictly pseudoconvex, there exists a unique nonvanishing vector field $T$, transverse to $HM$, satisfying $T \lrcorner \: \theta =1, \ T \lrcorner \: d \theta =0$. This vector field is called the characteristic direction of $(M, HM, J_b, \theta)$. Define the bilinear form $G_\theta$ by $$ G_\theta (X, Y)= d \theta (X, J_b Y) $$ for $X, Y \in HM$. Since $L_\theta$ and $G_\theta$ coincide on $T_{1,0} M \otimes T_{0,1} M$, $G_\theta$ is also positive definite on $HM \otimes HM$. This allows us to define a Riemannian metric $g_\theta$ on $M$ by $$g_\theta (X, Y) = G_\theta (\pi_H X, \pi_H Y)+ \theta(X) \theta (Y), \quad X, Y \in TM$$ where $\pi_H : TM \rightarrow HM$ is the projection associated to the direct sum decomposition $TM = HM \oplus \mathbb{R} T$. This metric is usually called the Webster metric.
On a strictly pseudoconvex CR manifold, there exists a canonical connection preserving both the complex structure and the Webster metric:
\begin{prp} [\cite{dt}] Let $(M, HM, J_b, \theta)$ be a strictly pseudoconvex CR manifold. Let $T$ be the characteristic direction and $J_b$ the complex structure in $HM$ (extending to an endomorphism of $TM$ by requiring that $J_b T=0$). Let $g_\theta$ be the Webster metric. Then there is a unique linear connection $\nabla$ on $M$ (called the Tanaka-Webster connection) such that: \begin{enumerate}[(i)] \item The Levi distribution HM is parallel with respect to $\nabla$. \item $\nabla J_b=0$, $\nabla g_\theta=0$. \item The torsion $T_\nabla$ of $\nabla$ satisfies $T_{\nabla} (X, Y)= 2 d \theta (X, Y) T $ and $T_{\nabla} (T, J_b X) + J_b T_{\nabla} (T, X) =0 $ for any $X, Y \in HM$. \end{enumerate} \end{prp}
The pseudo-Hermitian torsion, denoted $\tau$, is the $TM$-valued 1-form defined by $\tau(X) = T_{\nabla} (T,X)$. Note that $\tau(T_{1,0} M) \subset T_{0,1} M$ and $\tau$ is $g_\theta$-symmetric (cf. \cite{dt}).
\begin{prp}[\cite{dt}] If $(M, HM, J_b, \theta)$ is a strictly pseudoconvex CR manifold, the synthetic object $(J_b, -T, -\theta, g_\theta)$ is a contact metric structure on $M$. This contact metric structure is a Sasakian structure if and only if the pseudo-Hermitian torsion $\tau$ is zero. \end{prp}
\begin{exm}[Heisenberg group] The Heisenberg group $\mathbb{H}^n$ is $\mathbb{C}^n \times \mathbb{R}$ endowed with the group law $$ (z,t) \cdot (w,s) = (z+w, t+s+ 2 Im \langle z, w \rangle) . $$ Let us consider the complex vector fields on $\mathbb{H}^n$, $$ T_\alpha= \frac{\partial}{\partial z^\alpha} + \sqrt{-1} \overline{z^\alpha} \frac{\partial}{\partial t} $$ where $\frac{\partial }{\partial z^\alpha} = \frac{1}{2} (\frac{\partial}{\partial x^\alpha} - \sqrt{-1} \frac{\partial}{\partial y^\alpha})$ and $z^\alpha= x^\alpha + \sqrt{-1} y^\alpha$. The CR structure $T_{1,0} \mathbb{H}^n$ is spanned by $\{ T_1, \dots, T_n\}$. There is a pseudo-Hermitian structure $\theta$ on $\mathbb{H}^n$ defined by
$$ \theta = d t+ 2 \sum_{\alpha =1}^n (x^\alpha d y^\alpha- y^\alpha d x^\alpha ). $$ The Levi form $L_\theta= 2 \sum_{\alpha =1}^n d z^\alpha \wedge d z^{\bar{\alpha}} $ is positive definite, so $(\mathbb{H}^n, H \mathbb{H}^n, J_b, \theta)$ is a strictly pseudoconvex CR manifold. The characteristic direction is $T = \frac{\partial}{\partial t}$. Moreover, the Tanaka-Webster connection of $(\mathbb{H}^n, H \mathbb{H}^n, J_b, \theta)$ is flat. Hence the pseudo-Hermitian torsion is zero, and $(\mathbb{H}^n, H \mathbb{H}^n, J_b, \theta)$ is Sasakian (see \cite{dt} for details).
\end{exm}
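As a quick sanity check (ours, not part of the paper), the Heisenberg group law above can be verified numerically: it is associative, $(0,0)$ is the identity, and $(z,t)^{-1} = (-z,-t)$, since $\langle z, z\rangle$ is real.

```python
# Numerical check of the Heisenberg group law
#   (z, t) . (w, s) = (z + w, t + s + 2 Im<z, w>),
# with <z, w> = sum_a z_a * conj(w_a), on sample points of H^2.
def herm(z, w):
    return sum(za * wa.conjugate() for za, wa in zip(z, w))

def mul(p, q):
    (z, t), (w, s) = p, q
    return (tuple(za + wa for za, wa in zip(z, w)),
            t + s + 2 * herm(z, w).imag)

def close(p, q):
    return (all(abs(a - b) < 1e-12 for a, b in zip(p[0], q[0]))
            and abs(p[1] - q[1]) < 1e-12)

p = ((1 + 2j, 0.5j), 1.0)
q = ((2 - 1j, 1 + 1j), -0.5)
r = ((0.3 + 0.7j, -2j), 2.0)
e = ((0j, 0j), 0.0)

assert close(mul(mul(p, q), r), mul(p, mul(q, r)))  # associativity
assert close(mul(p, e), p) and close(mul(e, p), p)  # identity element
p_inv = (tuple(-za for za in p[0]), -p[1])
assert close(mul(p, p_inv), e)                      # inverse, since <z,z> is real
```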
Let $(M, HM, J_b, \theta)$ be a strictly pseudoconvex CR $(2n+1)$-manifold. Let $\{ Z_1, \dots, Z_n \}$ be a local orthonormal frame of $T_{1,0} M$ defined on an open set $U \subset M$, and $\{ \theta^1, \dots, \theta^n \}$ its dual coframe.
Then, \begin{align*} d \theta = 2 \sqrt{-1} \sum_{\alpha=1}^{n} \theta^\alpha \wedge \theta^{\bar{\alpha}} . \end{align*} Since $\tau(T_{1,0} M) \subset T_{0,1} M$, one can set $\tau Z_\alpha = A_{\alpha}^{\ \bar{\beta}} Z_{\bar{\beta}}$ for some local smooth functions $A_{\alpha}^{\ \bar{\beta}} : U \rightarrow \mathbb{C}$. Denote by $\{ \omega_{\alpha}^{\ \beta} \}$ the Tanaka-Webster connection 1-forms with respect to the frame $\{ T_\alpha \}$, i.e. $\nabla Z_\alpha = \omega_{\alpha}^{\ \beta} \otimes Z_\beta$. Then the structure equations can be expressed as follows: \begin{align} \label{se1} d \theta^\beta = \theta^\alpha \wedge \omega_{\alpha}^{\ \beta} + \theta \wedge \tau^\beta, \quad \tau_\alpha \wedge \theta^\alpha=0 , \quad \omega_{\alpha}^{\ \beta} + \omega_{\bar{\beta}}^{\ \bar{\alpha}}=0 \end{align} where $\tau^\alpha = A^\alpha_{\ \bar{\beta}} \theta^{\bar{\beta}} = A_{\bar{\alpha} \bar{\beta}} \theta^{\bar{\beta}}$ is a local 1-form.
In \cite{dt, w}, the authors showed that the curvature form of Tanaka-Webster connection $\Pi_{\beta}^{\ \alpha} = d \omega_{\beta}^{\ \alpha} - \omega_{\beta}^{\ \gamma} \wedge \omega_{\gamma}^{\ \alpha}$ is given by \begin{equation} \label{s3} \Pi_{\beta}^{\ \alpha} = R_{\beta \ \mu \bar{\gamma}}^{\ \alpha} \theta^{\mu} \wedge \theta^{\bar{\gamma}} + W_{\beta \ \mu}^{\ \alpha} \theta^{\mu} \wedge \theta - W_{\ \beta \bar{\mu}}^{\alpha} \theta^{\bar{\mu}} \wedge \theta + 2 \sqrt{-1} \theta_{\beta} \wedge \tau^{\alpha} - 2 \sqrt{-1} \tau_{\beta} \wedge \theta^{\alpha} \end{equation} where $W_{\beta \ \mu}^{\ \alpha} = A_{\beta \mu, }^{\quad \ \alpha}$ and $W_{\ \beta \bar{\mu}}^\alpha = A_{\ \bar{\mu} , \beta}^\alpha$. In particular, $R_{\beta \bar{\alpha} \mu \bar{\gamma}}=R_{\mu \bar{\alpha} \beta \bar{\gamma}}$.
The pseudo-Hermitian $Ric$ tensor and the $Tor$ tensor on $T_{1,0} M$ are defined by \begin{equation} \label{pric} Ric(X,Y) = R_{\alpha \bar{\beta}} X_{\bar{\alpha}} Y_{\beta} =R_{\alpha \bar{\beta} \gamma \bar{\gamma}} X_{\bar{\alpha}} Y_{\beta} \end{equation} and \begin{equation} Tor(X,Y) = \sqrt{-1} (X_\alpha Y_\beta A_{\bar{\alpha} \bar{\beta}}- X_{\bar{\alpha}} Y_{\bar{\beta}} A_{\alpha \beta}) \end{equation} for $X=X_{\bar{\alpha}} Z_{\alpha}, \ Y=Y_{\bar{\beta}} Z_{\beta} \in T_{1,0} M$.
Assume that $(N,h)$ is a Riemannian manifold. Let $\{ \xi_i \}$ be a local orthonormal frame of $TN$, and $\{ \sigma^i \}$ its dual coframe. Denote by $\{ \eta^{\ i}_j \}$ the connection 1-forms of the Levi-Civita connection $\hat{\nabla}$ on $N$, i.e. $\hat{\nabla} \xi_i = \eta_i^{\ j} \otimes \xi_j$. Then we have the structure equations \begin{align} \label{se2} d \sigma^i = \sigma^j \wedge \eta^{\ i}_j, \quad d \eta^{\ i}_j = \eta^{\ l}_j \wedge \eta^{\ i}_l + \Omega^{\ i}_j, \quad \Omega^{\ i}_j = \frac{1}{2} \hat{R}_{j \ kl}^{\ i} \sigma^k \wedge \sigma^l , \end{align} where $\hat{R}$ is the curvature of Levi-Civita connection $\hat{\nabla}$ in $(N,h)$.
Suppose that $(M, HM, J_b, \theta)$ is a strictly pseudoconvex CR (2n+1)-manifold and $\nabla$ is its Tanaka-Webster connection. Let $f:M \rightarrow N$ be a smooth map and $f^* TN$ the pullback bundle.
Denote \begin{align} \label{e1} \begin{gathered} d_b f =\pi_H df= f^i_\alpha \theta^\alpha \otimes \xi_i + f^i_{\bar{\alpha}} \theta^{\bar{\alpha}} \otimes \xi_i \ \in \Gamma (T^*M \otimes f^*TN) , \\ f_0 = df(T) = f_0^i \: \xi_i \ \in \Gamma (f^*TN) . \end{gathered} \end{align} Let $\nabla^f$ be the pullback connection in $f^* TN$ induced by the Levi-Civita connection of $(N, h)$.
Then we can determine a connection $\nabla^f$ in $T^* M \otimes f^* TN$ by
$$ \nabla_X^f (\omega \otimes \xi) = \nabla_X \omega \otimes \xi + \omega \otimes \nabla_X^f \: \xi$$
for any $X \in \Gamma(TM)$, $\omega \in \Gamma(T^*M)$ and $\xi \in \Gamma(f^*TN)$.
Under the local frame $\{ \theta, \theta^\alpha, \theta^{\bar{\alpha}} \}$ and $\{ \xi_i \}$, the tensor $\nabla^f df$ can be expressed by: \begin{align} \nabla^f d f = & \ f^i_{\alpha \beta} \theta^\alpha \otimes \theta^\beta \otimes \xi_i + f^i_{\bar{\alpha} \beta} \theta^{\bar{\alpha}} \otimes \theta^{\beta} \otimes \xi_i + f^i_{\alpha \bar{\beta}} \theta^{\alpha} \otimes \theta^{\bar{\beta}} \otimes \xi_i \nonumber \\ & \ +f^i_{\bar{\alpha} \bar{\beta}} \theta^{\bar{\alpha}} \otimes \theta^{\bar{\beta}} \otimes \xi_i + f^i_{0 \alpha} \theta \otimes \theta^\alpha \otimes \xi_i + f^i_{\alpha 0} \theta^\alpha \otimes \theta \otimes \xi_i \nonumber \\ & \ + f^i_{0 \bar{\alpha} } \theta \otimes \theta^{\bar{\alpha}} \otimes \xi_i + f^i_{\bar{\alpha} 0} \theta^{\bar{\alpha}} \otimes \theta \otimes \xi_i +f^i_{00} \theta \otimes \theta \otimes \xi_i . \label{se3} \end{align} Denote by $\nabla^f_b d_b f$ the restriction of $\nabla^f df$ to $HM \times HM$. Throughout the paper, the Einstein summation convention is used (except in the inequality \eqref{bfp1}) and the ranges of indices are $$ \alpha, \beta, \gamma, \mu ,\dots \in \{ 1, \dots, n \}, \quad i, j, k, l , \dots \in \{ 1, \dots, \mbox{dim N} \} $$ where dim $M = 2n+1$.
\begin{dfn}[\cite{dt}] Let us consider the $f$-tensor field on $M$ given by $$\tau(f;\theta,\hat{\nabla})=trace_{G_\theta} ( \nabla^f_b d_b f )\in \Gamma (f^* TN) .$$ We say that $f$ is pseudoharmonic if $\tau(f;\theta,\hat{\nabla})=0$.
\end{dfn} It is known that pseudoharmonic maps are the critical points of the following energy functional (cf. \cite{dt}):
$$E_\Omega (f)= \frac{1}{2} \int_\Omega trace_{G_\theta} (\pi_H f^*h) \ \theta \wedge (d\theta)^n$$ for any compact domain $\Omega \subset \subset M$. With respect to the local frame $\{ \theta, \theta^\alpha, \theta^{\bar{\alpha}} \}$ and $\{ \xi_i \}$, we have \begin{align} \label{ph} \tau(f;\theta,\hat{\nabla}) = (f^i_{\alpha \bar{\alpha}} + f^i_{\bar{\alpha} \alpha}) \xi_i . \end{align}
\section{Bochner-Type formulas} In \cite{g}, A. Greenleaf obtained the commutation relations of smooth functions and established Bochner-type formulas of pseudoharmonic functions. In \cite{lee}, John M. Lee derived the commutation relations of $(1,0)$-forms. We shall need the commutation relations of various covariant derivatives of smooth maps and Bochner-type formulas of pseudoharmonic maps.
\begin{lem} \label{cc} Let $f: M\rightarrow N$ be a smooth map. The covariant derivatives of $df$ satisfy the following commutation relations: \begin{align} f^i_{\alpha \beta} = & f^i_{\beta \alpha}, \label{s1} \\ f^i_{\alpha \bar{\beta}} -f^i_{\bar{\beta} \alpha} = & 2 \sqrt{-1} f^i_0 \delta_{\alpha \bar{\beta}}, \label{s6} \\ f^i_{0 \alpha} -f^i_{\alpha 0} = & f^i_{\bar{\beta}} A^{\bar{\beta}}_{\ \alpha} , \label{s2} \end{align} and \begin{align} f^i_{\alpha \beta \gamma} - f^i_{\alpha \gamma \beta} = &2 \sqrt{-1} f^i_{\beta} A_{\alpha \gamma} -2 \sqrt{-1} f^i_\gamma A_{\alpha \beta} -f^j_\alpha f^k_\beta f^l_\gamma \hat{R}_{j \ kl}^{\ i}, \label{s4} \\ f^i_{\alpha \bar{\beta} \bar{\gamma}}- f^i_{\alpha \bar{\gamma} \bar{\beta}} = & 2 \sqrt{-1} f^i_\mu A^{\mu}_{\ \bar{\gamma}} \delta_{\alpha \bar{\beta}}- 2 \sqrt{-1} f^i_\mu A^{\mu}_{\ \bar{\beta}} \delta_{\alpha \bar{\gamma}}- f^j_{\alpha} f^k_{\bar{\beta}} f^l_{\bar{\gamma}} \hat{R}_{j \ kl}^{\ i}, \\ f^i_{\alpha \beta \bar{\gamma}} - f^i_{\alpha \bar{\gamma} \beta} =& f^i_\mu R_{\alpha \ \beta \bar{\gamma}}^{\ \mu} + 2 \sqrt{-1} f^i_{\alpha 0} \delta_{\beta \bar{\gamma}} - f^j_{\alpha} f^k_{\beta} f^l_{\bar{\gamma}} \hat{R}_{j \ kl}^{\ i}, \\ f^i_{\alpha \beta 0} - f^i_{\alpha 0 \beta} = & f^i_\gamma A_{\alpha \beta, }^{\quad \ \gamma}-f^i_{\alpha \bar{\gamma}} A^{\bar{\gamma}}_{\ \beta}- f^j_{\alpha} f^k_{\beta} f^l_0 \hat{R}_{j \ kl}^{\ i}, \\ f^i_{\alpha \bar{\beta} 0} -f^i_{\alpha 0 \bar{\beta}} =& -f^i_\gamma A^{\gamma}_{\ \bar{\beta},\alpha}- f^i_{\alpha \gamma} A^{\gamma}_{\ \bar{\beta}} -f^j_{\alpha} f^k_{\bar{\beta}} f^l_0 \hat{R}_{j \ kl}^{\ i}. \label{s5} \end{align} \end{lem}
\begin{proof} The identities \eqref{e1} imply \begin{align} \label{e2} f^* \sigma^i = f^i_\alpha \theta^\alpha + f^i_{\bar{\alpha}} \theta^{\bar{\alpha}} + f^i_0 \theta . \end{align} We take the exterior derivative of \eqref{e2} and use the structure equations \eqref{se1}, \eqref{se2} to get \begin{align} 0=& (d f^i_\alpha -f^i_\beta \omega^{\ \beta}_\alpha + f^j_\alpha \tilde{\eta}^{\ i}_j ) \wedge \theta^\alpha + (d f^i_{\bar{\alpha}} -f^i_{\bar{\beta}} \omega^{\ \bar{\beta}}_{\bar{\alpha}} + f^j_{\bar{\alpha}} \tilde{\eta}^{\ i}_j ) \wedge \theta^{\bar{\alpha}} \nonumber \\ & + (d f^i_0 + f^j_0 \tilde{\eta}^{\ i}_j) \wedge \theta + f^i_\alpha \theta \wedge \tau^\alpha + f^i_{\bar{\alpha}} \theta \wedge \tau^{\bar{\alpha}} +2 \sqrt{-1} f^i_0 \theta^\beta \wedge \theta^{\bar{\beta}} , \label{se4} \end{align} where $\tilde{\eta}^{\ i}_j = f^* \eta^{\ i}_j$. On the other hand, the second-order covariant derivatives satisfy \begin{align} d f^i_\alpha -f^i_\beta \omega^{\ \beta}_\alpha + f^j_\alpha \tilde{\eta}^{\ i}_j &= f^i_{\alpha \beta} \theta^\beta + f^i_{\alpha \bar{\beta}} \theta^{\bar{\beta}} +f^i_{\alpha 0} \theta , \label{se5} \\ d f^i_{\bar{\alpha}} -f^i_{\bar{\beta}} \omega^{\ \bar{\beta}}_{\bar{\alpha}} + f^j_{\bar{\alpha}} \tilde{\eta}^{\ i}_j &= f^i_{\bar{\alpha} \beta} \theta^\beta + f^i_{\bar{\alpha} \bar{\beta}} \theta^{\bar{\beta}} +f^i_{\bar{\alpha} 0} \theta , \\ d f^i_0 + f^j_0 \tilde{\eta}^{\ i}_j &= f^i_{0 \beta} \theta^\beta + f^i_{0 \bar{\beta}} \theta^{\bar{\beta}} +f^i_{0 0} \theta . 
\end{align} Substituting the above three equations into \eqref{se4} and using $\tau^\alpha = A^\alpha_{\ \bar{\beta}} \theta^{\bar{\beta}}$, we obtain \begin{align*} 0= & f^i_{\alpha \beta} \theta^{\beta} \wedge \theta^{\alpha} +f^i_{\bar{\alpha} \bar{\beta}} \theta^{\bar{\beta}} \wedge \theta^{\bar{\alpha}} + (f^i_{\alpha \bar{\beta}} -f^i_{\bar{\beta} \alpha} - 2\sqrt{-1} f^i_0 \delta_{\alpha \bar{\beta}} ) \theta^{\bar{\beta}} \wedge \theta^\alpha \\ & + (f^i_{\alpha 0} - f^i_{0 \alpha} + f^i_{\bar{\beta}} A^{\bar{\beta}}_{\ \alpha}) \theta \wedge \theta^\alpha + (f^i_{\bar{\alpha} 0} - f^i_{0 \bar{\alpha}} + f^i_{\beta} A^{\beta}_{\ \bar{\alpha}}) \theta \wedge \theta^{\bar{\alpha}} , \end{align*} which (by comparing types) yields \eqref{s1}-\eqref{s2}. To prove the next five equations, we differentiate \eqref{se5} and use the structure equations again. Then we obtain \begin{align} 0= & (d f^i_{\alpha \beta} - f^i_{\alpha \gamma} \omega^{\ \gamma}_{\beta} - f^i_{\gamma \beta} \omega_{\alpha}^{\ \gamma} + f^j_{\alpha \beta} \tilde{\eta}^{\ i}_j) \wedge \theta^\beta \nonumber \\ &+(d f^i_{\alpha \bar{\beta}} - f^i_{\alpha \bar{\gamma}} \omega^{\ \bar{\gamma}}_{\bar{\beta}} - f^i_{\gamma \bar{\beta}} \omega_{\alpha}^{\ \gamma} + f^j_{\alpha \bar{\beta}} \tilde{\eta}^{\ i}_j) \wedge \theta^{\bar{\beta}} \nonumber \\ &+ (d f^i_{\alpha 0} - f^i_{\beta 0} \omega_{\alpha}^{\ \beta} + f^j_{\alpha 0} \tilde{\eta}^{\ i}_j) \wedge \theta + f^i_\beta \Pi^{\ \beta}_{ \alpha}- f^j_\alpha f^*(\Omega^i_j) \nonumber \\ & + f^i_{\alpha \beta} A^{\beta}_{\ \bar{\gamma}} \theta \wedge \theta^{\bar{\gamma}} + f^i_{\alpha \bar{\beta}} A^{\bar{\beta}}_{\ \gamma} \theta \wedge \theta^\gamma+ 2 \sqrt{-1} f^i_{\alpha 0} \theta^\beta \wedge \theta^{\bar{\beta}} . 
\label{se6} \end{align} Since the third-order covariant derivatives of $f$ are given by \begin{align*} d f^i_{\alpha \beta} - f^i_{\alpha \gamma} \omega^{\ \gamma}_{\beta} - f^i_{\gamma \beta} \omega_{\alpha}^{\ \gamma} + f^j_{\alpha \beta} \tilde{\eta}^{\ i}_j =& f^i_{\alpha \beta \gamma} \theta^{\gamma} + f^i_{\alpha \beta \bar{\gamma}} \theta^{\bar{\gamma}} + f^i_{\alpha \beta 0} \theta , \\ d f^i_{\alpha \bar{\beta}} - f^i_{\alpha \bar{\gamma}} \omega^{\ \bar{\gamma}}_{\bar{\beta}} - f^i_{\gamma \bar{\beta}} \omega_{\alpha}^{\ \gamma} + f^j_{\alpha \bar{\beta}} \tilde{\eta}^{\ i}_j =& f^i_{\alpha \bar{\beta} \gamma} \theta^{\gamma} + f^i_{\alpha \bar{\beta} \bar{\gamma}} \theta^{\bar{\gamma}} + f^i_{\alpha \bar{\beta} 0} \theta , \\ d f^i_{\alpha 0} - f^i_{\beta 0} \omega_{\alpha}^{\ \beta} + f^j_{\alpha 0} \tilde{\eta}^{\ i}_j = & f^i_{\alpha 0 \gamma} \theta^{\gamma} + f^i_{\alpha 0 \bar{\gamma}} \theta^{\bar{\gamma}} + f^i_{\alpha 0 0} \theta , \end{align*} we can substitute them into \eqref{se6} and use \eqref{s3}, \eqref{se2} to obtain \begin{align*} 0&= \sum_{\gamma < \beta} (f^i_{\alpha \beta \gamma} - f^i_{\alpha \gamma \beta} - 2\sqrt{-1} f^i_{\beta} A_{\alpha \gamma} + 2 \sqrt{-1} f^i_\gamma A_{\alpha \beta} + f^j_\alpha f^k_\beta f^l_\gamma \hat{R}_{j \ kl}^{\ i}) \theta^\gamma \wedge \theta^\beta \\ &+ \sum_{\gamma , \beta} (f^i_{\alpha \beta \bar{\gamma}} - f^i_{\alpha \bar{\gamma} \beta} - f^i_\mu R_{\alpha \ \beta \bar{\gamma}}^{\ \mu} - 2 \sqrt{-1} f^i_{\alpha 0} \delta_{\beta \bar{\gamma}} + f^j_{\alpha} f^k_{\beta} f^l_{\bar{\gamma}} \hat{R}_{j \ kl}^{\ i}) \theta^{\bar{\gamma}} \wedge \theta^{\beta} \\ &+ \sum_{\gamma < \beta} (f^i_{\alpha \bar{\beta} \bar{\gamma}}- f^i_{\alpha \bar{\gamma} \bar{\beta}} -2 \sqrt{-1} f^i_\mu A^{\mu}_{\ \bar{\gamma}} \delta_{\alpha \bar{\beta}}+ 2\sqrt{-1} f^i_\mu A^{\mu}_{\ \bar{\beta}} \delta_{\alpha \bar{\gamma}}+ f^j_{\alpha} f^k_{\bar{\beta}} f^l_{\bar{\gamma}} \hat{R}_{j \ kl}^{\ i}) \theta^{\bar{\gamma}} 
\wedge \theta^{\bar{\beta}} \\ &+ \sum_{\beta} (f^i_{\alpha \beta 0} - f^i_{\alpha 0 \beta} - f^i_\gamma A_{\alpha \beta, }^{\quad \ \gamma}+f^i_{\alpha \bar{\gamma}} A^{\bar{\gamma}}_{\ \beta}+ f^j_{\alpha} f^k_{\beta} f^l_0 \hat{R}_{j \ kl}^{\ i}) \theta \wedge \theta^\beta \\ &+ \sum_{\beta} (f^i_{\alpha \bar{\beta} 0} -f^i_{\alpha 0 \bar{\beta}} +f^i_\gamma A^{\gamma}_{\ \bar{\beta},\alpha}+ f^i_{\alpha \gamma} A^{\gamma}_{\ \bar{\beta}} +f^j_{\alpha} f^k_{\bar{\beta}} f^l_0 \hat{R}_{j \ kl}^{\ i}) \theta \wedge \theta^{\bar{\beta}} , \end{align*} which (by comparing types) yields \eqref{s4}-\eqref{s5}.
\end{proof}
Before introducing the Bochner-type formulas, we recall a property of the sub-Laplace operator $\triangle_b$ (cf. \cite{dt}). If $u$ is a $C^2$ function on $M$, then $\triangle_b u = trace_{G_\theta} (\nabla_b d_b u)$. With respect to the local orthonormal frame $\{ T, Z_\alpha, Z_{\bar{\alpha}} \}$, we have $\triangle_b u = u_{\alpha \bar{\alpha}}+ u_{\bar{\alpha} \alpha}$.
\begin{lem} \label{c1} For any smooth map $f: M \rightarrow N$, we have \begin{align}
\frac{1}{2} \triangle_b |d_b f|^2 =& \ |\nabla_b^f d_b f|^2 +\langle \nabla_b^f \tau (f;\theta,\hat{\nabla}), d_b f\rangle - 4 \langle d_b f \circ J_b, \nabla_b^f f_0\rangle \nonumber\\ & \ + (2Ric - 2 (n-2)Tor) (f^i_{\bar{\beta}} Z_\beta, f^i_{\bar{\alpha}} Z_\alpha) \nonumber\\ & \ +2 (f^i_{\bar{\alpha}} f^j_{\beta} f^k_{\bar{\beta}} f^l_{\alpha} \hat{R}^{\ i}_{j\ kl} + f^i_{\alpha} f^j_{\beta} f^k_{\bar{\beta}} f^l_{\bar{\alpha}} \hat{R}^{\ i}_{j\ kl}) , \label{bf1}\\
\frac{1}{2} \triangle_b |d f(T)|^2 =& \ | \nabla_b^f f_0 |^2 + \langle df(T), \nabla_T^f \; \tau(f;\theta,\hat{\nabla})\rangle + 2 f^i_0 f^j_{\alpha} f^k_{\bar{\alpha}} f^l_0 \hat{R}_{j \ kl}^{\ i} \nonumber\\
&\ +2( f^i_0 f^i_\beta A_{\bar{\beta} \bar{\alpha} , \alpha} + f^i_0 f^i_{\bar{\beta}} A_{\beta \alpha , \bar{\alpha}} + f^i_0 f^i_{\bar{\beta} \bar{\alpha}} A_{\beta \alpha} +f^i_0 f^i_{\beta \alpha} A_{\bar{\beta} \bar{\alpha}} ) \label{bf2}
\end{align} where $\nabla_b^f \tau (f;\theta,\hat{\nabla})$ and $\nabla_b^f f_0$ are the restrictions of $\nabla^f \tau (f;\theta,\hat{\nabla})$ and $\nabla^f f_0$ to $HM$, respectively. \end{lem}
\begin{proof}
Since $|d_b f|^2 = 2 f^i_\alpha f^i_{\bar{\alpha}}$, we have \begin{align*}
\frac{1}{2} \triangle_b |d_b f|^2 =& (f^i_\alpha f^i_{\bar{\alpha}})_{\beta \bar{\beta}}+ (f^i_\alpha f^i_{\bar{\alpha}})_{\bar{\beta} \beta} \\ = & 2(f^i_{\alpha \beta} f^i_{\bar{\alpha} \bar{\beta}} + f^i_{\alpha \bar{\beta}} f^i_{\bar{\alpha} \beta}) + f^i_{\bar{\alpha}} f^i_{\alpha \beta \bar{\beta}} + f^i_{\alpha} f^i_{\bar{\alpha} \beta \bar{\beta}} + f^i_{\alpha} f^i_{\bar{\alpha} \bar{\beta} \beta} + f^i_{\bar{\alpha}} f^i_{\alpha \bar{\beta} \beta} \\
= & |\nabla^f_b d_b f|^2 + f^i_{\bar{\alpha}} f^i_{\alpha \beta \bar{\beta}} + f^i_{\alpha} f^i_{\bar{\alpha} \beta \bar{\beta}} + f^i_{\alpha} f^i_{\bar{\alpha} \bar{\beta} \beta} + f^i_{\bar{\alpha}} f^i_{\alpha \bar{\beta} \beta} . \end{align*} Lemma \ref{cc} implies \begin{align*} f^i_{\alpha \beta \bar{\beta}} = & f^i_{\beta \alpha \bar{\beta}} = f^i_{\beta \bar{\beta} \alpha } +f^i_\mu R_{\beta \ \alpha \bar{\beta}}^{\ \mu} + 2 \sqrt{-1} f^i_{\beta 0} \delta_{\alpha \bar{\beta}} - f^j_{\beta} f^k_{\alpha} f^l_{\bar{\beta}} \hat{R}_{j \ kl}^{\ i} \\ =& f^i_{\beta \bar{\beta} \alpha } +f^i_\mu R_{\beta \ \alpha \bar{\beta}}^{\ \mu} + 2 \sqrt{-1} (f^i_{0 \beta}- f^i_{\bar{\mu}} A^{\bar{\mu}}_{\ \beta}) \delta_{\alpha \bar{\beta}} - f^j_{\beta} f^k_{\alpha} f^l_{\bar{\beta}} \hat{R}_{j \ kl}^{\ i}, \\ f^i_{\bar{\alpha} \beta \bar{\beta}} = & (f^i_{\beta \bar{\alpha}} - 2 \sqrt{-1} f^i_0 \delta_{\beta \bar{\alpha}})_{\bar{\beta}} = f^i_{\beta \bar{\alpha} \bar{\beta}} - 2 \sqrt{-1} f^i_{0 \bar{\beta}} \delta_{\beta \bar{\alpha}} \\ = & f^i_{\beta \bar{\beta} \bar{\alpha}}+ 2 \sqrt{-1} f^i_\mu A^{\mu}_{\ \bar{\beta}} \delta_{\beta \bar{\alpha}}- 2 \sqrt{-1} f^i_\mu A^{\mu}_{\ \bar{\alpha}} \delta_{\beta \bar{\beta}}- f^j_{\beta} f^k_{\bar{\alpha}} f^l_{\bar{\beta}} \hat{R}_{j \ kl}^{\ i}- 2 \sqrt{-1} f^i_{0 \bar{\beta}} \delta_{\beta \bar{\alpha}} . \end{align*} Substituting them into the previous identity, we obtain \begin{align*}
\frac{1}{2} \triangle_b |d_b f|^2 =& |\nabla^f_b d_b f|^2 + f^i_{\bar{\alpha}} (f^i_{\beta \bar{\beta}} + f^i_{\bar{\beta} \beta})_{\alpha} + f^i_{\alpha} (f^i_{\beta \bar{\beta}} + f^i_{\bar{\beta} \beta})_{\bar{\alpha}} \\ &+ 2 f^i_{\bar{\alpha}} f^i_{\mu} R_{\alpha \bar{\mu} \beta \bar{\beta}} - 2 \sqrt{-1} (n-2) (f^i_{\alpha} f^i_\mu A_{\bar{\mu} \bar{\alpha}}- f^i_{\bar{\alpha}} f^i_{\bar{\mu}} A_{\mu \alpha}) \\ & +2 (f^i_{\bar{\alpha}} f^j_{\beta} f^k_{\bar{\beta}} f^l_{\alpha} \hat{R}^{\ i}_{j\ kl} + f^i_{\alpha} f^j_{\beta} f^k_{\bar{\beta}} f^l_{\bar{\alpha}} \hat{R}^{\ i}_{j\ kl}) + 4 \sqrt{-1} (f^i_{\bar{\alpha}} f^i_{0 \alpha} - f^i_{\alpha} f^i_{0 \bar{\alpha}}) . \end{align*} By the identity $\langle d_b f \circ J_b, \nabla_b^f f_0\rangle= \sqrt{-1} (f^i_{\alpha} f^i_{0 \bar{\alpha}} -f^i_{\bar{\alpha}} f^i_{0 \alpha}) $, we get \eqref{bf1}. The proof of \eqref{bf2} is similar. \end{proof}
\begin{lem} \label{bte} Let $(M, HM, J_b, \theta)$ be a $(2n+1)$-Sasakian manifold with \begin{equation} \label{ric3}
Ric(X,X) \geq -k |X|^2 \end{equation} for all $X \in T_{1,0} M$, and some $k \geq 0$. Suppose that $(N,h)$ is a Riemannian manifold with nonpositive sectional curvature. If $f: M \rightarrow N$ is a pseudoharmonic map, then for any $\nu>0$, we have \begin{equation}
\triangle_b |d_b f|^2 \geq |\nabla_b^f d_b f|^2 + 2n |f_0|^2 - \left( 2k+ \frac{32}{\nu } \right) |d_b f|^2 - \frac{1}{2} \nu |\nabla_b^f f_0|^2 \label{nsm3} \end{equation} and \begin{equation} \label{nsm4}
\triangle_b |f_0|^2 \geq 2 |\nabla_b^f f_0|^2 . \end{equation} \end{lem}
\begin{proof} Since $f$ is pseudoharmonic, by definition we have $\tau(f;\theta,\hat{\nabla}) =0$. Because $(M, HM, J_b, \theta)$ is Sasakian, the torsion tensor satisfies $Tor=0$. Hence, by the assumption on the pseudohermitian Ricci curvature, \eqref{bf1} becomes \begin{align}
\triangle_b |d_b f|^2 =& 2|\nabla_b^f d_b f|^2 - 8 \langle d_b f \circ J_b, \nabla_b^f f_0\rangle - 2 k |d_b f|^2 \nonumber\\ & \ +4 (f^i_{\bar{\alpha}} f^j_{\beta} f^k_{\bar{\beta}} f^l_{\alpha} \hat{R}^{\ i}_{j\ kl} + f^i_{\alpha} f^j_{\beta} f^k_{\bar{\beta}} f^l_{\bar{\alpha}} \hat{R}^{\ i}_{j\ kl}). \label{e3} \end{align} Using the commutation relation \eqref{s6}, we can estimate \begin{align}
|\nabla_b^f d_b f |^2= & 2 \: \sum_{\alpha, \beta =1}^n (f^i_{\bar{\alpha} \beta} f^i_{\alpha \bar{\beta}} + f^i_{\alpha \beta} f^i_{\bar{\alpha} \bar{\beta}}) \geq 2 \: \sum_{\alpha=1}^n f^i_{\alpha \bar{\alpha}} f^i_{\bar{\alpha} \alpha} \nonumber \\
= & \frac{1}{2} \: \sum_{\alpha=1}^n [|f^i_{\alpha \bar{\alpha}}+ f^i_{\bar{\alpha} \alpha}|^2+ |f^i_{\alpha \bar{\alpha}}- f^i_{\bar{\alpha} \alpha}|^2] \geq \frac{1}{2} \: \sum_{\alpha=1}^n |f^i_{\alpha \bar{\alpha}}- f^i_{\bar{\alpha} \alpha}|^2 \nonumber \\
= & 2n |f_0|^2 . \label{bfp1} \end{align}
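The final equality uses the commutation relation \eqref{s6}: for each $\alpha$ we have $f^i_{\alpha \bar{\alpha}}- f^i_{\bar{\alpha} \alpha} = 2 \sqrt{-1} f^i_0$, and hence \begin{equation*} \frac{1}{2} \: \sum_{\alpha=1}^n |f^i_{\alpha \bar{\alpha}}- f^i_{\bar{\alpha} \alpha}|^2 = \frac{1}{2} \: \sum_{\alpha=1}^n 4 \, |f^i_0|^2 = 2n |f_0|^2 . \end{equation*}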
The second term of the right side of \eqref{e3} can be controlled by the Schwarz inequality \begin{equation} \label{bfp2}
- 8 \langle d_b f \circ J_b, \nabla_b^f f_0 \rangle \geq - \frac{32}{\nu} |d_b f|^2 - \frac{1}{2} \nu |\nabla_b^f f_0|^2 . \end{equation} To deal with the last term of \eqref{e3}, we set $e_\alpha = Re \; df(Z_\alpha)$ and $ e_{\alpha}' = Im \; df(Z_\alpha) $. Then \begin{align} \text{Last term of } \eqref{e3} =& \ 4 \langle \hat{R}( df(Z_{\bar{\beta}}), df(Z_{\bar{\alpha}}) ) df(Z_\beta), df(Z_\alpha)\rangle \nonumber \\ & \ \ + 4\langle \hat{R}( df(Z_{\bar{\beta}}) , df(Z_\alpha) ) df(Z_\beta), df(Z_{\bar{\alpha}})\rangle \nonumber \\ =& \ -4( \langle \hat{R}( e_\alpha, e_\beta ) e_\beta, e_\alpha\rangle + \langle \hat{R}( e_\alpha, e_{\beta}' ) e_{\beta}', e_\alpha\rangle \nonumber \\ & \ \ + \langle \hat{R}( e_{\alpha}', e_\beta ) e_\beta, e_{\alpha}'\rangle + \langle \hat{R}( e_{\alpha}', e_{\beta}' ) e_{\beta}', e_{\alpha}'\rangle ) \nonumber \\ \geq & \ 0 \label{bfp3} \end{align} where we have used the assumption that the sectional curvature of $N$ is nonpositive. Substituting \eqref{bfp1}, \eqref{bfp2} and \eqref{bfp3} into \eqref{e3}, we get \eqref{nsm3}.
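We note that \eqref{bfp2} is an instance of Young's inequality: since $J_b$ is an isometry on $HM$, we have $|d_b f \circ J_b| = |d_b f|$, and therefore for any $\nu >0$ \begin{equation*} 8 \, |\langle d_b f \circ J_b, \nabla_b^f f_0 \rangle| \leq 8 \, |d_b f| \, |\nabla_b^f f_0| \leq \frac{32}{\nu} |d_b f|^2 + \frac{1}{2} \nu \, |\nabla_b^f f_0|^2 . \end{equation*}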
Observe that \begin{align} f^i_0 f^j_{\alpha} f^k_{\bar{\alpha}} f^l_0 \hat{R}_{j \ kl}^{\ i} = & \langle \hat{R}( df(Z_{\bar{\alpha}}), df(T) ) df(Z_\alpha) , df(T)\rangle \nonumber\\ = & -(\langle \hat{R}(e_\alpha,e_0)e_0,e_\alpha\rangle + \langle \hat{R}(e_{\alpha}',e_0)e_0,e_{\alpha}'\rangle ) \nonumber \\ \geq & 0 . \label{btep} \end{align} Then \eqref{nsm4} can be easily proved from \eqref{bf2} and \eqref{btep}.
\end{proof}
From now on, we assume that $(N,h)$ is a simply connected Riemannian manifold with nonpositive sectional curvature. Let $\rho$ be the distance to a fixed point $y_0 \in N$. Then $\rho^2$ is smooth on $N$. By the Hessian comparison theorem, we have $$Hess(\rho^2) \geq 2h . $$ For any smooth map $f: M \rightarrow N$, the chain rule gives \begin{align*} \triangle_b (\rho^2 \circ f) = d \rho^2(\tau(f;\theta,\hat{\nabla})) + trace_{G_\theta} \: Hess(\rho^2)(d_b f,d_b f).
\end{align*} Therefore, we can conclude that if $f$ is pseudoharmonic, then \begin{equation} \label{nd1}
\triangle_b (\rho^2 \circ f) \geq 2|d_b f|^2. \end{equation}
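Indeed, for a pseudoharmonic map the first term on the right side vanishes, and the Hessian comparison gives \begin{equation*} \triangle_b (\rho^2 \circ f) = trace_{G_\theta} \: Hess(\rho^2)(d_b f, d_b f) \geq 2 \, trace_{G_\theta} \: h(d_b f, d_b f) = 2 |d_b f|^2 . \end{equation*}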
\section{Carnot-Carath\'eodory distance} As is well known, the maximum principle is an important tool for obtaining pointwise estimates for solutions of geometric PDEs.
In order to use it on Sasakian manifolds, we need a special exhaustion function to construct cutoff functions. A natural choice is the Carnot-Carath\'eodory distance function. \begin{dfn} \label{ccd}
Let $(M, HM, J_b, \theta)$ be a strictly pseudoconvex CR manifold. A piecewise $C^1$-curve $\gamma : [0,1] \rightarrow M$ is said to be horizontal if $\gamma' (t) \in HM$ whenever $\gamma' (t)$ exists. The length of $\gamma$ is given by $$ l(\gamma) =\int^1_0 \langle \gamma'(t), \gamma'(t) \rangle^{1/2}_{G_\theta} \, dt . $$ We define the Carnot-Carath\'eodory distance between two points $p,q \in M$ by
$$ d_c(p,q) = \inf \{ l(\gamma) \mid \gamma \in C_{p,q} \} $$
where $C_{p,q}$ is the set of all horizontal curves joining $p$ and $q$. We say that $(M, HM, J_b, \theta)$ is complete if it is complete as a metric space. A horizontal curve $\gamma: [0,1] \rightarrow M$ is called a length-minimizing geodesic if $l(\gamma) =d_c(\gamma(0), \gamma (1))$. Fix $x_0 \in M$, and set $r(x) = d_c(x_0,x)$. The Carnot-Carath\'eodory ball of radius $R$ centered at $x_0$ is denoted by $B_R(x_0) = \{ x \in M \mid r(x) <R \}$. \end{dfn}
In \cite{s}, R. Strichartz pointed out that if $(M, HM, J_b, \theta)$ is complete, then for any $x_0, x \in M$, there exists at least one length-minimizing geodesic $\gamma : [0,1] \rightarrow M$ joining $x_0$ and $x$. Moreover, $\gamma$ can be extended to $(-\infty, \infty)$.
We say that $x$ is a cut point of $x_0$ if, for any $\epsilon > 0$, $\gamma |_{[0,1+ \epsilon]}$ is no longer a length-minimizing geodesic joining $x_0$ and $\gamma (1+ \epsilon)$. The set of all cut points of $x_0$, denoted by $cut(x_0)$, is called the cut locus of $x_0$.
Theorem 1.2 and Proposition 1.2 in \cite{a} assert that the Carnot-Carath\'eodory distance $r$ to a reference point $x_0 $ is smooth on $M \setminus ( cut(x_0) \cup \{ x_0 \} )$.
\begin{dfn}[\cite{ckt}] \label{crcp} Let $(M, HM, J_b, \theta)$ be a noncompact complete Sasakian $(2n+1)$-manifold with \begin{equation*}
Ric(X,X) \geq -k |X|^2 \end{equation*} for all $X \in T_{1,0} M$ and some $k \geq 0$. We say that $(M, HM, J_b, \theta)$ satisfies the CR sub-Laplace comparison property relative to a point $x_0 \in M$ if there exists a positive constant $C_1$
such that the Carnot-Carath\'eodory distance $r$ to $x_0$ satisfies \begin{equation} \triangle_b r \leq C_1 \left(\frac{1}{r}+\sqrt{k}\right) \end{equation} on $M \setminus ( cut(x_0) \cup \{ x_0 \} )$ wherever $r \geq 1$. \end{dfn}
\begin{prp}[\cite{ctw}] \label{hp1}
There exists a positive constant $C_1'$ on the Heisenberg group $(\mathbb{H}^n, H \mathbb{H}^n, J_b, \theta)$ such that \begin{eqnarray} \triangle_b r \leq \frac{C_1'}{r} \label{h1} \end{eqnarray} on $\mathbb{H}^n \setminus ( cut(o) \cup \{ o \} )$. Here $r$ is the Carnot-Carath\'eodory distance to the origin $o$. \end{prp} Since the pseudohermitian torsion and the pseudohermitian Ricci curvature of the Heisenberg group $(\mathbb{H}^n, H \mathbb{H}^n, J_b, \theta)$ both vanish, Proposition \ref{hp1} asserts that the CR sub-Laplace comparison property holds on the Heisenberg group.
\section{Sub-gradient estimate for pseudoharmonic maps} \label{slp} In this section, we will obtain a sub-gradient estimate for pseudoharmonic maps. Let $(M, HM, J_b, \theta)$ be a noncompact complete $(2n+1)$-Sasakian manifold with the CR sub-Laplace comparison property relative to a point $x_0 \in M$ and \begin{equation*}
Ric(X,X) \geq -k |X|^2 \end{equation*} for all $X \in T_{1,0} M$ and some $k\geq 0$. Suppose that $(N,h)$ is a simply connected Riemannian manifold with nonpositive sectional curvature. We consider a pseudoharmonic map $f: M \rightarrow N$. Let $\rho $ be the Riemannian distance to $y_0 = f(x_0)$.
We choose a function $\psi \in C^{\infty} ([0, \infty))$ with the property that
$$ \psi |_{[0,1]} =1, \quad \psi |_{[2, \infty)}=0, \quad -C_2 \: |\psi|^{\frac{1}{2}} \leq \psi ' \leq 0, \quad |\psi ''| \leq C_2 . $$ Let $R > 1 $ be fixed. By the CR sub-Laplace comparison property, the cutoff function $\eta = \psi(\frac{r}{R}) $ satisfies \begin{equation} \begin{gathered}
\eta^{-1} |d_b \eta|^2 \leq \frac{C_2'}{R^2} \\
\triangle_b \eta = \frac{\psi ''}{R^2} |d_b r|^2 + \frac{\psi '}{R} \triangle_b r \geq - C_2' \: \left( \frac{1}{R^2}+\frac{\sqrt{k}}{R} \right) \label{nseta} \end{gathered} \end{equation}
on $M \setminus ( cut(x_0) \cup \{ x_0 \} )$. Here $C_2'$ depends only on $C_2$ and $C_1$. Denote $b_R =2 \sup \{ \rho \circ f(x) \mid x \in B_{2R} (x_0) \}$. We construct a smooth function $F: B_{2R} (x_0) \rightarrow \mathbb{R}$ by \begin{equation} \label{nf}
F(x)= \frac{|d_b f|^2 + \mu \eta |f_0|^2}{ b_R^2 - \rho^2 \circ f} (x) . \end{equation} The positive coefficient $\mu$ will be determined later. \begin{lem} \label{l1} If $r$ is smooth at $x \in B_{2R} (x_0)$ and $(\eta F) (x) \neq 0$, then at $x$, we have \begin{align}
& \triangle_b (|d_b f|^2 + \mu \eta |f_0|^2) \geq \frac{1}{24} \frac{|d_b (|d_b f|^2 + \mu \eta |f_0|^2)|^2}{|d_b f|^2 + \mu \eta |f_0|^2} \nonumber\\
& \quad \quad \qquad + \left[ 2n - 6 \mu C_2' ( \frac{1}{R^2}+\frac{\sqrt{k}}{R}) \right] |f_0|^2 - 32 \left( k+ \frac{1}{\mu \eta} \right) |d_b f|^2 . \label{nsm6} \end{align} \end{lem}
\begin{proof} First we compute \begin{align*}
\triangle_b (\mu \eta |f_0|^2) = & \mu[(\triangle_b \eta) |f_0|^2 + 2 \langle d_b \eta, d_b |f_0|^2\rangle + \eta \: \triangle_b |f_0|^2] \\
= & \mu[(\triangle_b \eta) |f_0|^2 + 2 \langle d_b \eta, 2 \langle \nabla_b^f f_0, f_0\rangle_{f^*TN} \rangle + \eta \: \triangle_b |f_0|^2] \\
\geq & \mu[(\triangle_b \eta) |f_0|^2 - \eta |\nabla_b^f f_0|^2 - 4 |f_0|^2 \eta^{-1} |d_b \eta|^2 + \eta \: \triangle_b |f_0|^2] \\
\geq & \ \mu \eta |\nabla_b^f f_0|^2- 5 \mu C_2' \: \left(\frac{1}{R^2}+\frac{\sqrt{k}}{R} \right) |f_0|^2 . \end{align*}
The last inequality is due to \eqref{nsm4} and \eqref{nseta}. Hence by \eqref{nsm3} with $\nu = \mu \eta$, we have the estimate
\begin{align}
& \triangle_b (|d_b f|^2 + \mu \eta |f_0|^2) \geq \frac{1}{2} \left( |\nabla_b^f d_b f|^2 + \mu \eta |\nabla_b^f f_0|^2 \right) +\frac{1}{2} |\nabla_b^f d_b f|^2 \nonumber\\
& \quad \qquad + \left[ 2n- 5 \mu C_2' ( \frac{1}{R^2}+\frac{\sqrt{k}}{R}) \right] |f_0|^2 - 32 \left( k+ \frac{1}{\mu \eta} \right) |d_b f|^2 . \label{nsm5} \end{align} In order to deal with the first term of the right side, we need the following Schwarz inequalities: \begin{align}
|d_b |d_b f|^2|^2 \leq & 4 \: |d_b f|^2 \: |\nabla_b^f d_b f|^2 \label{cs1} , \\
|d_b |f_0|^2|^2 \leq & 4 \: |f_0|^2 |\nabla_b^f f_0|^2 . \label{cs2} \end{align}
If $|d_b f|(x) \neq 0$ and $|f_0|(x) \neq 0$, then at $x$, we have \begin{align*}
&\frac{1}{2} \left( |\nabla_b^f d_b f|^2 + \mu \eta |\nabla_b^f f_0|^2 \right) \\
& \qquad \geq \ \frac{1}{8} \left( \frac{|d_b |d_b f|^2|^2}{|d_b f|^2} + \mu \eta \frac{|d_b |f_0|^2|^2}{|f_0|^2}+ \mu \frac{|d_b \eta|^2}{\eta} |f_0|^2 \right) \ - \frac{1}{8} \mu \frac{|d_b \eta|^2}{\eta} |f_0|^2 \\
& \qquad \geq \ \frac{1}{24} \frac{|d_b (|d_b f|^2 + \mu \eta |f_0|^2)|^2}{|d_b f|^2 + \mu \eta |f_0|^2} - \frac{\mu C_2'}{8} \frac{1}{R^2} |f_0|^2. \end{align*}
Substituting this inequality into \eqref{nsm5}, we get \eqref{nsm6}. If $|d_b f|(x) =0$ (or $|f_0|(x)=0$), we can simply discard the nonnegative term $\frac{1}{2} |\nabla_b^f d_b f|^2$ (or $\frac{1}{2} \mu \eta |\nabla_b^f f_0|^2$) from \eqref{nsm5} and use the Schwarz inequality \eqref{cs1} (or \eqref{cs2}) to obtain \eqref{nsm6}. \end{proof}
Let $x$ be a maximum point of $\eta F$ on $B_{2R}(x_0)$. If $x$ is not in the cut locus of $x_0$, then $\eta$ is smooth near $x$. If $x$ is in the cut locus of $x_0$, we may modify $\eta$ as follows. Since $(M, HM, J_b, \theta)$ is complete, there exists a length-minimizing geodesic $\gamma: [0,1] \rightarrow M$ joining $x_0$ and $x$. Let $\epsilon$ be a small positive number. Along $\gamma$, the point $x$ lies before the cut point of $\gamma (\epsilon)$. This guarantees that the modified function $\tilde{r}(z) = d_c (z,\gamma(\epsilon)) + \epsilon$ is smooth in a neighborhood of $x$. Moreover, the triangle inequality implies that $$ r \leq \tilde{r} \quad \text{and} \quad r(x) = \tilde{r} (x) . $$ Set $\tilde{\eta} = \psi (\frac{\tilde{r}}{R})$. Then $\tilde{\eta}$ is smooth near $x$ and $$ \eta \geq \tilde{\eta} \quad \text{and} \quad \eta(x) = \tilde{\eta}(x). $$ This means that $x$ is still a maximum point of $\tilde{\eta} F$. Hence, we may assume without loss of generality that $r$ is already smooth near $x$.
\begin{lem} \label{es1} If $x$ is a maximum point of $\eta F$ on $B_{2R} (x_0)$ with $(\eta F)(x) \neq 0$, then at $x$, we have the estimate \begin{equation} \label{nsm7}
0 \geq \left[ 2 \eta F - 34n ( k+ \frac{1}{\mu} ) \right] \frac{|d_b f|^2}{b^2_R-\rho^2 \circ f}+ \left[ 2n - 31 \mu C_2' ( \frac{1}{R^2}+ \frac{\sqrt{k}}{R} ) \right] \frac{F}{\mu} . \end{equation} \end{lem}
\begin{proof} It is obvious that $x$ is still a maximum point of $\ln(\eta F)$ on $B_{2R} (x_0)$.
Since $\triangle_b$ is a degenerate elliptic operator, the maximum principle implies that at $x$, \begin{align}
0 & = d_b \ln(\eta F) = \frac{d_b \eta}{\eta} + \frac{d_b (|d_b f|^2 + \mu \eta |f_0|^2)}{|d_b f|^2 + \mu \eta |f_0|^2} +\frac{d_b (\rho^2 \circ f)}{b^2_R-\rho^2 \circ f}, \label{nsm1}\\
0 & \geq \triangle_b \ln(\eta F) = \frac{\triangle_b \eta}{\eta} - \frac{|d_b \eta|^2}{\eta^2} + \frac{\triangle_b (|d_b f|^2 + \mu \eta |f_0|^2)}{|d_b f|^2 + \mu \eta |f_0|^2} \nonumber\\
& \qquad -\frac{|d_b (|d_b f|^2 + \mu \eta |f_0|^2)|^2}{(|d_b f|^2 + \mu \eta |f_0|^2)^2}+ \frac{\triangle_b (\rho^2 \circ f)}{b^2_R-\rho^2 \circ f} +\frac{|d_b (\rho^2 \circ f)|^2}{(b^2_R- \rho^2 \circ f)^2}. \label{nsm2} \end{align} By Lemma \ref{l1}, \eqref{nsm2} becomes \begin{align*}
0\geq &\ \frac{\triangle_b \eta}{\eta} - \frac{|d_b \eta|^2}{\eta^2} - \frac{23}{24} \frac{|d_b (|d_b f|^2 + \mu \eta |f_0|^2)|^2}{(|d_b f|^2 + \mu \eta |f_0|^2)^2}+\frac{|d_b (\rho^2 \circ f)|^2}{(b^2_R-\rho^2 \circ f)^2} \\
& \ + \frac{ [2n- 6 \mu C_2' ( \frac{1}{R^2}+\frac{\sqrt{k}}{R}) ] |f_0|^2 - 32( k+ \frac{1}{\mu \eta} ) |d_b f|^2}{|d_b f|^2 + \mu \eta |f_0|^2} + \frac{\triangle_b (\rho^2 \circ f)}{b^2_R-\rho^2 \circ f} . \end{align*}
Substituting \eqref{nsm1} into the above inequality and using the Schwarz inequality $(\alpha + \beta)^2 \leq 24 \alpha ^2 + \frac{24}{23} \beta^2$, we obtain \begin{align*}
0 \geq &\ \frac{\triangle_b \eta}{\eta} - 24 \frac{|d_b \eta|^2}{\eta^2} - 32 \left(k + \frac{1}{\mu \eta} \right) \frac{|d_b f|^2 }{|d_b f|^2 + \mu \eta |f_0|^2} \\
&\quad + \left[2n- 6 \mu C_2' \left(\frac{1}{R^2}+\frac{\sqrt{k}}{R} \right)\right]\frac{|f_0|^2 }{|d_b f|^2 + \mu \eta |f_0|^2} +\frac{\triangle_b (\rho^2 \circ f)}{b^2_R-\rho^2 \circ f}. \end{align*} By the estimates \eqref{nd1} and \eqref{nseta}, we have
\begin{align}
0 \geq &\ - 25 \frac{C_2'}{\eta} \: \left(\frac{1}{R^2} +\frac{\sqrt{k}}{R} \right) - 32 \left(k + \frac{1}{\mu \eta} \right) \frac{|d_b f|^2 }{|d_b f|^2 + \mu \eta |f_0|^2} \nonumber \\
&\quad + \left[2n - 6 \mu C_2' \left(\frac{1}{R^2}+\frac{\sqrt{k}}{R} \right)\right]\frac{|f_0|^2 }{|d_b f|^2 + \mu \eta |f_0|^2} +2 \frac{|d_b f|^2}{b^2_R- \rho^2 \circ f} . \nonumber \end{align} Hence multiplying both sides by $\eta F$, we conclude that \begin{align}
0 \geq & -25 C_2' \: \left(\frac{1}{R^2}+\frac{\sqrt{k}}{R} \right) F - 32 \left( \eta k + \frac{1}{\mu} \right) \frac{|d_b f|^2}{b^2_R -\rho^2 \circ f} \nonumber \\
& \quad + \left[2n - 6 \mu C_2' (\frac{1}{R^2}+\frac{\sqrt{k}}{R}) \right] \frac{\eta |f_0|^2}{b^2_R-\rho^2 \circ f} + 2 \eta F \frac{|d_b f|^2}{b^2_R -\rho^2 \circ f}. \label{nsm8} \end{align} Finally, we rewrite \eqref{nf} as
$$\frac{ \eta |f_0|^2}{b^2_R -\rho^2 \circ f} = \frac{1}{\mu} (F- \frac{|d_b f|^2}{b^2_R -\rho^2 \circ f}) $$ and substitute it into the previous inequality. This procedure yields \begin{align*} 0 &\geq \ \left[ 2n - 31 \mu C_2' ( \frac{1}{R^2}+ \frac{\sqrt{k}}{R} ) \right] \frac{F}{\mu} \\
&\qquad + \left[ 2 \eta F - \frac{1}{\mu} \left( 2n - 6\mu C_2' ( \frac{1}{R^2}+ \frac{\sqrt{k}}{R} ) \right) - 32 ( \eta k+ \frac{1}{\mu}) \right] \frac{|d_b f|^2}{b^2_R-\rho^2 \circ f} \\
&\geq \ \left[ 2n - 31 \mu C_2' ( \frac{1}{R^2}+ \frac{\sqrt{k}}{R} ) \right] \frac{F}{\mu}+\left[ 2 \eta F - \frac{2 n}{\mu} - 32 ( k+ \frac{1}{\mu} ) \right] \frac{|d_b f|^2}{b^2_R-\rho^2 \circ f}. \end{align*} The last inequality is due to $0 \leq \eta \leq 1$. Since $n \geq 1$, we get \eqref{nsm7}. \end{proof}
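The weighted Schwarz inequality used above is elementary: for any $t>0$, \begin{equation*} (\alpha + \beta)^2 = \alpha^2 + 2 \alpha \beta + \beta^2 \leq \alpha^2 + \left( t \, \alpha^2 + \frac{1}{t} \beta^2 \right) + \beta^2 = (1+t) \, \alpha^2 + \left( 1+ \frac{1}{t} \right) \beta^2 , \end{equation*} and the choice $t=23$ gives the constants $24$ and $\frac{24}{23}$.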
Now we present our main results. \begin{thm} \label{nsg} Let $(M, HM, J_b, \theta)$ be a noncompact complete $(2n+1)$-Sasakian manifold with the CR sub-Laplace comparison property relative to a fixed point $x_0$ and \begin{equation*}
Ric(X,X) \geq -k |X|^2 \end{equation*}
for all $X \in T_{1,0} M$, and some $k\geq 0$. Suppose that $(N,h)$ is a simply connected Riemannian manifold with nonpositive sectional curvature. Assume that $f: M \rightarrow N$ is a pseudoharmonic map. Let $\rho $ be the Riemannian distance to $y_0 = f(x_0)$. For any $R>1$, set $b_R =2 \sup \{ \rho \circ f(x) \mid x \in B_{2R} (x_0) \}$ and $a=\frac{R^2}{1 + \sqrt{k} R}$. Then, on $B_R (x_0)$ \begin{equation} \label{nsge}
|d_b f|^2 + a |f_0|^2 \leq C_3 \: b_R^2 \: \left( \frac{1}{a} + k \right) \end{equation} where the constant $C_3$ only depends on the dimension of $M$ and $C_1$. \end{thm}
\begin{rmk} Our auxiliary function \eqref{nf} for the maximum principle is slightly different from the one introduced in \cite{ckt}. In our case, we omit the variable $t$ in the auxiliary function, which seems to simplify the related estimates even in the pseudoharmonic function case. \end{rmk}
\begin{proof} Let $\mu = \frac{n}{31 C_2'} \frac{R^2}{1+ \sqrt{k}R}= \frac{n}{31 C_2'} a$. We consider the auxiliary function $F$ given by \eqref{nf}. Let $x$ be a maximum point of $\eta F$ on $B_{2R} (x_0)$. We may assume $(\eta F) (x) \neq 0$, since otherwise the following estimate \eqref{n2} is trivial. Since $2n- 31 \mu C_2' (\frac{1}{R^2} + \frac{\sqrt{k}}{R}) = n >0$, the last term of the right side in \eqref{nsm7} is positive. Hence Lemma \ref{es1} yields \begin{equation} \label{n2} \max_{z \in B_{2R} (x_0)} \ (\eta F)(z) \leq 17n \left( k+\frac{1}{\mu}\right). \end{equation}
Since $\eta(z)=1$ for $z \in B_{R}(x_0)$, this inequality asserts that on $B_{R}(x_0)$ \begin{equation*}
|d_b f|^2 + \mu |f_0|^2 \leq 17n (b_R^2- \rho^2 \circ f ) \left( k+\frac{1}{\mu} \right) \leq 17n b_R^2 \left( k+\frac{1}{\mu} \right). \end{equation*} Hence \eqref{nsge} can be obtained by choosing a proper constant $C_3$.
\end{proof}
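\begin{rmk} The constant $C_3$ can be made explicit. Writing $c= \frac{n}{31 C_2'}$, so that $\mu = c \, a$, the last inequality gives on $B_R(x_0)$ \begin{equation*} |d_b f|^2 + a |f_0|^2 \leq \max \{1, c^{-1}\} \left( |d_b f|^2 + \mu |f_0|^2 \right) \leq 17 n \, \max \{1, c^{-1}\}^2 \: b_R^2 \left( \frac{1}{a} + k \right) , \end{equation*} so \eqref{nsge} holds with $C_3 = 17 n \max \{1, \frac{31 C_2'}{n} \}^2$, which depends only on $n$ and $C_1$. \end{rmk}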
The Reeb energy density is defined as the partial energy density $\frac{1}{2} |df(T)|^2$. From the sub-gradient estimate \eqref{nsge}, we can derive an estimate of the Reeb energy density for pseudoharmonic maps and obtain some vanishing results. \begin{cor} \label{plr} Let $(M, HM, J_b, \theta)$ be a noncompact complete Sasakian manifold with the CR sub-Laplace comparison property relative to a fixed point $x_0$ and $$
Ric(X,X) \geq -k |X|^2 $$
for all $X \in T_{1,0} M$ and some $k \geq 0$. Suppose that $(N,h)$ is a simply connected Riemannian manifold with nonpositive sectional curvature. Assume that $f: M \rightarrow N$ is a pseudoharmonic map. Let $\rho $ be the Riemannian distance to $y_0 = f(x_0)$. For any $R>1$, set $b_R =2 \sup \{ \rho \circ f(x) \mid x \in B_{2R} (x_0) \}$ and $a=\frac{R^2}{1 + \sqrt{k} R}$. Then, on $B_R (x_0)$ \begin{equation} \label{reeb}
|f_0|^2 \leq C_3 \: b_R^2 \left( \frac{2}{R^4} + \frac{3k}{R^2} + \frac{k\sqrt{k}}{R} \right) . \end{equation} In particular, \begin{enumerate}[(i)] \item if $Ric \geq 0$ (i.e. $k=0$) and the image of $f$ satisfies: \begin{equation*}
\overline{\lim_{R \rightarrow \infty}} R^{-2} \sup \{ \rho \circ f(x) \mid x \in B_{2R} (x_0)\} =0, \end{equation*} then $df(T)=0$. \item if the pseudohermitian Ricci curvature of $M$ has a strictly negative lower bound (i.e. $k>0$) and the image of $f$ satisfies: \begin{equation*}
\overline{\lim_{R \rightarrow \infty}} R^{-\frac{1}{2}} \sup \{ \rho \circ f(x) \mid x \in B_{2R}(x_0) \} =0, \end{equation*} then $df(T)=0$. \end{enumerate} \end{cor}
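The estimate \eqref{reeb} follows from \eqref{nsge} after dividing by $a$: since $\frac{1}{a} = \frac{1}{R^2}+\frac{\sqrt{k}}{R}$, we have \begin{equation*} |f_0|^2 \leq C_3 \, b_R^2 \left( \frac{1}{a^2} + \frac{k}{a} \right) = C_3 \, b_R^2 \left( \frac{1}{R^4} + \frac{2 \sqrt{k}}{R^3} + \frac{2k}{R^2}+ \frac{k \sqrt{k}}{R} \right) \leq C_3 \, b_R^2 \left( \frac{2}{R^4} + \frac{3k}{R^2} + \frac{k \sqrt{k}}{R} \right) , \end{equation*} where the last step uses $\frac{2\sqrt{k}}{R^3} \leq \frac{1}{R^4} + \frac{k}{R^2}$.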
The sub-gradient estimate \eqref{nsge} also gives a Liouville theorem for pseudoharmonic maps. \begin{thm} \label{cse}
Let $(M, HM, J_b, \theta)$ be a noncompact complete Sasakian manifold with nonnegative pseudohermitian Ricci curvature satisfying the CR sub-Laplace comparison property relative to a fixed point $x_0 \in M$. Suppose that $(N,h)$ is a simply connected Riemannian manifold with nonpositive sectional curvature. Assume that $f: M \rightarrow N$ is a pseudoharmonic map. Let $\rho $ be the Riemannian distance to $y_0 = f(x_0)$. For any $R>1$, set $b_R =2 \sup \{ \rho \circ f(x) \mid x \in B_{2R} (x_0) \}$. Then, on $B_R (x_0)$ \begin{equation*}
|d_b f|^2 + R^2 |f_0|^2 \leq C_3 \: \frac{b_R^2}{R^2}. \end{equation*} In particular, if the image of f satisfies \begin{equation*}
\overline{\lim_{R \rightarrow \infty}} R^{-1} \sup \{ \rho \circ f(x) \mid x \in B_{2R}(x_0) \} =0, \end{equation*} then $f$ is a constant map. \end{thm}
Since the Heisenberg group $(\mathbb{H}^n, H \mathbb{H}^n, J_b, \theta)$ satisfies the CR sub-Laplace comparison property, Theorem \ref{cse} applies to it. \begin{cor} \label{hm1} Every bounded pseudoharmonic map from the Heisenberg group $(\mathbb{H}^n, H \mathbb{H}^n, J_b, \theta)$ to a simply connected Riemannian manifold with nonpositive sectional curvature is constant. \end{cor}
\section{Appendix} \label{slh} In this section, we derive a Reeb energy density estimate for harmonic maps from Sasakian manifolds to Riemannian manifolds. We first recall the definition of harmonic maps. Let $(M, HM, J_b, \theta)$ be a strictly pseudoconvex CR manifold, and let $\nabla^\theta$ be the Levi-Civita connection of $(M, g_\theta)$. Let $(N,h)$ be a Riemannian manifold, and $\hat{\nabla}$ its Levi-Civita connection. Suppose that $f: M \rightarrow N$ is a smooth map. Let $f^*TN$ be the pullback bundle and $\nabla^f$ the pullback connection. We can define a connection $\nabla^{f, \theta}$ in $T^*M \otimes f^*TN$ by $$ \nabla^{f, \theta}_X (\omega \otimes \xi )= \nabla^{ \theta}_X \omega \otimes \xi + \omega \otimes \nabla_X^f \: \xi$$ for any $X \in \Gamma(TM)$, $\omega \in \Gamma(T^*M)$ and $\xi \in \Gamma(f^*TN)$. Then $f$ is harmonic if $$ \tau^\theta (f;\theta,\hat{\nabla}) = trace_{g_\theta} (\nabla^{f,\theta} df) =0. $$ With respect to the local orthonormal frame $\{ \theta, \theta^\alpha, \theta^{\bar{\alpha}}\}$ in $T^*M \otimes \mathbb{C}$ and $\{ \xi_i \}$ in $TN$, we have \begin{align} \label{b1} \tau^\theta (f;\theta,\hat{\nabla}) = (f^i_{\alpha \bar{\alpha}} + f^i_{\bar{\alpha} \alpha} + f^i_{00} ) \xi_i. \end{align} Comparing with equation \eqref{ph}, we obtain \begin{equation} \label{phr} \tau^\theta (f;\theta,\hat{\nabla}) = \tau(f;\theta,\hat{\nabla}) + \nabla_T^f df(T). \end{equation}
As above, we need a Bochner-type formula for harmonic maps and a special exhaustion function. \begin{lem} Let $f: M \rightarrow N$ be a smooth map. Then \begin{align}
\frac{1}{2} \triangle |d f(T)|^2 =& \ | \nabla^f f_0 |^2 + \langle df(T), \nabla_T^f \; \tau^\theta (f;\theta,\hat{\nabla})\rangle + 2 f^i_0 f^j_{\alpha} f^k_{\bar{\alpha}} f^l_0 \hat{R}_{j \ kl}^{\ i} \nonumber\\ &\ +2( f^i_0 f^i_\beta A_{\bar{\beta} \bar{\alpha} , \alpha} + f^i_0 f^i_{\bar{\beta}} A_{\beta \alpha , \bar{\alpha}} + f^i_0 f^i_{\bar{\beta} \bar{\alpha}} A_{\beta \alpha} +f^i_0 f^i_{\beta \alpha} A_{\bar{\beta} \bar{\alpha}} ) , \label{bf3} \end{align} where $\triangle$ is the Laplacian operator in $(M, g_\theta)$. \end{lem} \begin{proof} On the one hand, we notice that \begin{align} \label{b2}
\frac{1}{2} \triangle |d f(T)|^2 =& \frac{1}{2} \triangle_b |d f(T)|^2 + \frac{1}{2} (f^i_0 f^i_0)_{00}= \frac{1}{2} \triangle_b |d f(T)|^2 + f^i_{00} f^i_{00} + f^i_0 f^i_{000} . \end{align} On the other hand, by \eqref{b1}, we have \begin{align*} \langle df(T), \nabla_T^f \; \tau^\theta (f;\theta,\hat{\nabla})\rangle =& \langle df(T), \nabla_T^f \; \tau(f;\theta,\hat{\nabla})\rangle+ \langle df(T), \nabla_T^f \nabla_T^f df(T) \rangle \\ =& \langle df(T), \nabla_T^f \; \tau(f;\theta,\hat{\nabla})\rangle+ f^i_0 f^i_{000} . \end{align*} Hence substituting the above equation and \eqref{bf2} into \eqref{b2}, we get \eqref{bf3}. \end{proof}
\begin{lem} \label{sb1} Let $(M, HM, J_b, \theta)$ be a Sasakian manifold, and $(N,h)$ a Riemannian manifold with nonpositive sectional curvature. If $f: M \rightarrow N$ is a harmonic map, then \begin{equation}
\frac{1}{2} \triangle |d f(T)|^2 \geq | \nabla^f f_0 |^2 . \label{bte3} \end{equation} \end{lem} The proof follows from \eqref{btep} and \eqref{bf3}. \begin{dfn} \label{crch} Let $(M, HM, J_b, \theta)$ be a Sasakian manifold with
$$Ric(X,X) \geq -k |X|^2$$ for any $X \in T_{1,0} M$ and some $k \geq 0$. We say that $(M, HM, J_b, \theta)$ satisfies the CR Laplace comparison property relative to a fixed point $x_0 \in M$ if there exists a positive constant $C_4$ such that the Carnot-Carath\'eodory distance $r$ to $x_0$ satisfies \begin{eqnarray} \triangle r & \leq & C_4 \: \left(\frac{1}{r}+ \sqrt{k}\right) \\
|d r|_{g_\theta} & \leq & C_4 \end{eqnarray} on the subset of $M \setminus ( cut(x_0) \cup \{ x_0 \} )$ where $r \geq 1$. \end{dfn}
On the Heisenberg group $(\mathbb{H}^n, H \mathbb{H}^n, J_b, \theta)$, the square of the Carnot-Carath\'eodory distance function $r$ to the origin has the following expression: \begin{equation}
[r(z,t)]^2 = \frac{\phi^2}{(\sin \phi)^2} ||z||^2 \end{equation}
where $||z||^2 = \sum_{\alpha =1}^{n} |z^\alpha|^2$, $\phi$ is the unique solution of $\chi(\phi) ||z||^2 =|t|$ in the interval $[0, \pi)$ and $\chi(\phi)= \frac{\phi}{(\sin \phi)^2} - \cot \phi$. See \cite{ckt, ctw} for details. \begin{prp} \label{hp2} On the Heisenberg group $(\mathbb{H}^n, H \mathbb{H}^n, J_b, \theta)$, there exists a positive constant $C_4'$ such that the Carnot-Carath\'eodory distance $r$ to the origin $o$ satisfies \begin{eqnarray} \triangle r & \leq & \frac{C_4'}{r} \label{h2}\\
|d r|_{g_\theta}^2 & \leq & C_4' \label{h3} \end{eqnarray} on the subset of $M \setminus ( cut(o) \cup \{ o \} )$ where $r \geq 1$. Therefore, $(\mathbb{H}^n, H \mathbb{H}^n, J_b, \theta)$ satisfies the CR Laplace comparison property relative to the origin. \end{prp} \begin{proof}
We first calculate $T r$ and $TT r$ on $M \setminus ( cut(o) \cup \{ o \} )$. When $t>0$, we take the partial derivative along $\frac{\partial}{\partial t}$ of $\chi(\phi) ||z||^2 =|t|$ and use the expression of $\chi$. The result is \begin{equation*}
\frac{\partial \phi}{\partial t} = \frac{1}{2||z||^2} \: \frac{(\sin \phi)^3}{\sin \phi - \phi \cos \phi}. \end{equation*} Therefore, \begin{align*} T r^2 & =\ \frac{\partial r^2}{\partial t} = \phi , \\ TT r^2 & =\ \frac{\partial^2 r^2}{\partial t^2} = \frac{1}{r^2} \frac{(\sin \phi)^5}{\phi^2 \: (\sin \phi - \phi \cos \phi)} . \end{align*}
Since $TT r^2 = 2r \: TTr + 2 |Tr|^2$, since $\phi \in [0,\pi)$, and since the quotient $(\sin \phi)^5/\bigl(\phi^2 (\sin \phi - \phi \cos \phi)\bigr)$ extends continuously to $[0,\pi)$ (with value $3$ at $\phi =0$) and is bounded there, there exists a constant $\tilde{C_4}$ such that \begin{equation} \label{tr}
|Tr| \leq \frac{\tilde{C_4}}{r}, \quad |TTr| \leq \frac{\tilde{C_4}}{r^3} . \end{equation} When $t<0$, similar calculations yield the same inequality \eqref{tr}. When $t=0$, since $r$ is smooth on $M \setminus ( cut(o) \cup \{ o \} )$, the estimate \eqref{tr} follows by continuity. Hence the inequalities \eqref{tr} hold on all of $M \setminus ( cut(o) \cup \{ o \} )$. From Proposition \ref{hp1}, there exists a constant $\tilde{C_4'}$ such that \begin{equation} \label{ccde} \triangle_b r \leq \frac{\tilde{C_4'}}{r} \end{equation} on $M \setminus ( cut(o) \cup \{ o \} )$. Let $C_4'=1+\tilde{C_4}+\tilde{C_4}^2+\tilde{C_4'}$. Then \begin{eqnarray*} \triangle r & =& \triangle_b r + TTr \leq \frac{C_4'}{r} \\
|d r|^2 & = & |d_b r |^2 + (Tr)^2 \leq C_4' \end{eqnarray*} on the subset of $M \setminus ( cut(o) \cup \{ o \} )$ where $r \geq 1$. \end{proof}
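The closed-form distance used in this proof can also be evaluated numerically. The following sketch (our own illustration; the function name is ours) computes the Carnot-Carath\'eodory distance to the origin by solving $\chi(\phi)\,||z||^2=|t|$ with bisection, using that $\chi$ is increasing on $[0,\pi)$:

```python
import math

def cc_distance(z, t, tol=1e-12):
    """Carnot-Caratheodory distance from the origin in the Heisenberg group H^n
    at a point (z, t) with z in C^n, z != 0, via the classical formula
    r^2 = (phi/sin phi)^2 * ||z||^2, where phi in [0, pi) solves
    chi(phi) * ||z||^2 = |t| and chi(phi) = phi/sin^2(phi) - cot(phi)."""
    nz2 = sum(abs(w) ** 2 for w in z)
    if nz2 == 0:
        raise ValueError("formula requires z != 0")

    def chi(phi):
        s = math.sin(phi)
        return phi / s ** 2 - math.cos(phi) / s

    target = abs(t) / nz2
    # chi increases from 0 to +infinity on (0, pi): bisect for the unique root.
    lo, hi = 1e-9, math.pi - 1e-9
    if target <= chi(lo):
        return math.sqrt(nz2)            # phi ~ 0 and phi/sin(phi) -> 1
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if chi(mid) < target:
            lo = mid
        else:
            hi = mid
    phi = 0.5 * (lo + hi)
    return (phi / math.sin(phi)) * math.sqrt(nz2)
```

Note the dilation invariance of the formula: the same $\phi$ solves the equation for $(\lambda z, \lambda^2 t)$, so $r$ scales by $\lambda$, consistent with the group dilations.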
To derive the Reeb energy density estimate, we need an analogue of the estimate \eqref{nd1}. Assume that $(N,h)$ is a simply connected Riemannian manifold with nonpositive sectional curvature. Let $\rho$ be the distance to a fixed point $y_0 \in N$. If $f: M \rightarrow N$ is harmonic, the Hessian comparison theorem implies \begin{equation} \label{nd2}
\triangle (\rho^2 \circ f) \geq 2|d f|^2. \end{equation}
\begin{thm} \label{csh} Let $(M, HM, J_b, \theta)$ be a noncompact complete Sasakian manifold satisfying the CR Laplace comparison property relative to a fixed point $x_0$ and with
$$Ric(X,X) \geq -k |X|^2$$
for any $X \in T_{1,0} M$, and some $k \geq 0$. Suppose that $(N,h)$ is a simply connected Riemannian manifold with nonpositive sectional curvature. Let $f: M \rightarrow N$ be a harmonic map. Let $\rho $ be the Riemannian distance to $y_0 = f(x_0)$. For any $R>1$, set $b_R =2 \: sup \: \{ \rho \circ f(x) | x \in B_{2R} (x_0)\}$. Then, on $B_R (x_0)$ \begin{equation} \label{he}
|df(T)|^2 \leq C_6 \: b_R^2 \: \left(\frac{1}{R^2}+ \frac{\sqrt{k}}{R}\right) \end{equation} where the constant $C_6$ depends only on $C_4$ and $C_2$. Moreover, \begin{enumerate}[(i)] \item if $Ric \geq 0$ (i.e. $k=0$) and the image of $f$ satisfies \begin{equation*}
\overline{\lim_{R \rightarrow \infty}} R^{-1} \: sup \: \{ \rho \circ f(x) | x \in B_{2R}(x_0) \} =0, \end{equation*} then $df(T)=0$. \item if the pseudohermitian Ricci curvature of $M$ has strictly negative lower bound (i.e. $k>0$) and the image of $f$ satisfies \begin{equation*}
\overline{\lim_{R \rightarrow \infty}} R^{-\frac{1}{2}} \: sup \: \{ \rho \circ f(x) | x \in B_{2R} (x_0)\} =0, \end{equation*} then $df(T)=0$. \end{enumerate} \end{thm}
\begin{rmk} In \cite{p}, R. Petit obtained a similar vanishing theorem for harmonic maps from compact Sasakian manifolds to Riemannian manifolds with nonpositive sectional curvature. \end{rmk}
\begin{proof} The choices of $\psi$ and $\eta$ are the same as in Section \ref{slp}. Since $(M, HM, J_b, \theta)$ satisfies the CR Laplace comparison property, $\eta$ satisfies \begin{equation} \begin{gathered}
\eta^{-1} |d \eta|^2 \leq \frac{C_5}{R^2} \\
\triangle \eta = \frac{\psi ''}{R^2} |d r|^2 + \frac{\psi '}{R} \triangle r \geq - C_5 \: \left(\frac{1}{R^2}+ \frac{\sqrt{k}}{R}\right) \label{etah} \end{gathered} \end{equation} \noindent on $M \setminus ( cut(x_0) \cup \{ x_0 \} )$. Here $C_5 $ depends only on $C_4$ and $C_2$.
Given $R >1$, we consider the function $G: M \rightarrow \mathbb{R}$, which is given by
$$ G(x) = \frac{|f_0|^2}{b_R^2-\rho^2 \circ f} (x) . $$ Let $x$ be a maximum point of $\eta G$ on $B_{2R} (x_0)$. If $x$ is in the cut locus of $x_0$, then we can modify $r$ as in Section \ref{slp}. Without loss of generality, assume that $r$ is smooth at $x$ and $ (\eta G)(x) \neq 0$. It is obvious that $x$ is still a maximum point of $\ln (\eta G)$ on $B_{2R} (x_0)$. Then the maximum principle asserts that at $x$, \begin{align}
0 \ = d \ln (\eta G)= &\frac{d \eta}{\eta} + \frac{d |f_0|^2}{|f_0|^2} + \frac{d (\rho^2 \circ f)}{b^2_R- \rho^2 \circ f}, \label{hmp1}\\
0 \geq \triangle \ln (\eta G)= & \frac{\triangle \eta}{\eta} - \frac{|d \eta|^2}{\eta^2} + \frac{\triangle |f_0|^2}{|f_0|^2} - \frac{|d |f_0|^2|^2}{|f_0|^4} \nonumber \\
& \qquad+\frac{\triangle (\rho^2 \circ f)}{b^2_R-\rho^2 \circ f} +\frac{|d (\rho^2 \circ f)|^2}{(b^2_R- \rho^2 \circ f)^2}. \label{hmp2} \end{align}
Applying \eqref{bte3} and the inequality $|d |f_0|^2|^2 \leq 4 \: |f_0|^2 |\nabla^f f_0|^2$ to \eqref{hmp2}, we have \begin{equation*}
0 \geq \frac{\triangle \eta}{\eta} - \frac{|d \eta|^2}{\eta^2} -\frac{1}{2} \frac{|d |f_0|^2|^2}{|f_0|^4} +\frac{|d (\rho^2 \circ f)|^2}{(b^2_R- \rho^2 \circ f)^2}+\frac{\triangle (\rho^2 \circ f)}{b^2_R-\rho^2 \circ f}. \end{equation*} By \eqref{hmp1}, $d |f_0|^2/|f_0|^2 = -\,d \eta/\eta - d(\rho^2 \circ f)/(b^2_R-\rho^2 \circ f)$, so the Schwarz inequality $|a+b|^2 \leq 2|a|^2+2|b|^2$ bounds the third term and absorbs the fourth. The result is
\begin{equation*}
0 \geq \frac{\triangle \eta}{\eta} -2\: \frac{|d \eta|^2}{\eta^2} +\frac{\triangle (\rho^2 \circ f)}{b^2_R-\rho^2 \circ f}. \end{equation*} Therefore combining with \eqref{nd2} and \eqref{etah}, we conclude that at $x$, \begin{equation*}
\frac{|d f|^2}{b^2_R-\rho^2 \circ f} \leq \frac{3 C_5}{ 2\eta } \left(\frac{1}{R^2}+ \frac{\sqrt{k}}{R}\right). \end{equation*}
Hence by $|f_0|^2 \leq |d f|^2$, we can get an estimate of $\eta G$: \begin{equation*}
\max_{z \in B_{2R}(x_0)} \frac{\eta |f_0|^2}{b^2_R-\rho^2 \circ f} \: (z) = (\eta G)(x) = \frac{\eta |f_0|^2}{b^2_R-\rho^2 \circ f} \: (x) \leq \frac{3 C_5}{2} \: \left(\frac{1}{R^2}+ \frac{\sqrt{k}}{R}\right). \end{equation*}
This yields for any $z \in B_R (x_0)$, \begin{equation*}
|f_0|^2 \: (z) \leq \frac{3 C_5}{2} \; (b^2_R-\rho^2 \circ f (z)) \left( \frac{1}{R^2}+ \frac{\sqrt{k}}{R} \right) \leq \frac{3 C_5}{2} \: b^2_R \: \left(\frac{1}{R^2}+ \frac{\sqrt{k}}{R}\right). \end{equation*} Let $C_6= \frac{3}{2} C_5$. The above inequality yields \eqref{he}. The rest of this theorem follows from the estimate \eqref{he}. \end{proof}
The relation \eqref{phr} shows that if $df(T) =0$, then $f$ is harmonic if and only if it is pseudoharmonic. Therefore, Theorem \ref{cse} yields the following Liouville theorem.
\begin{cor} Let $(M, HM, J_b, \theta)$ be a noncompact complete Sasakian manifold with nonnegative pseudohermitian Ricci curvature which satisfies both the CR sub-Laplace and the CR Laplace comparison properties relative to a fixed point $x_0 \in M$. Suppose that $(N,h)$ is a simply connected Riemannian manifold with nonpositive sectional curvature. Assume that $f: M \rightarrow N$ is a harmonic map. Let $\rho$ be the Riemannian distance to $y_0 = f(x_0)$. If the image of $f$ satisfies \begin{equation*}
\overline{\lim_{R \rightarrow \infty}} R^{-1} \: sup \: \{ \rho \circ f(x) | x \in B_{2R}(x_0) \} =0, \end{equation*} then $f$ is a constant map. \end{cor}
Propositions \ref{hp1} and \ref{hp2} show that the Heisenberg group satisfies both the CR sub-Laplace and the CR Laplace comparison properties relative to the origin.
\begin{cor} \label{hm2} There is no bounded harmonic map from the Heisenberg group $(\mathbb{H}^n, H \mathbb{H}^n, J_b, \theta)$ to a simply connected Riemannian manifold with nonpositive sectional curvature. \end{cor}
\begin{rmk} If $n \geq 2$, then the Levi-Civita connection of the Heisenberg group $(\mathbb{H}^n, H \mathbb{H}^n, J_b, \theta)$ does not have nonnegative Ricci curvature. Thus Corollary \ref{hm2} cannot be derived from the results in \cite{c}. \end{rmk}
\end{document} |
\begin{document}
\title{Area and Perimeter of the Convex Hull of Stochastic Points}
\begin{abstract} Given a set $P$ of $n$ points in the plane, we study the computation of the probability distribution function of both the area and perimeter of the convex hull of a random subset $S$ of $P$. The random subset $S$ is formed by drawing each point $p$ of $P$ independently with a given rational probability $\pi_p$. For both measures of the convex hull, we show that it is \#P-hard to compute the probability that the measure is at least a given bound $w$. For $\varepsilon\in(0,1)$, we provide an algorithm that runs in $O(n^{6}/\varepsilon)$ time and returns a value that is between the probability that the area is at least $w$, and the probability that the area is at least $(1-\varepsilon)w$. For the perimeter, we show a similar algorithm running in $O(n^{6}/\varepsilon)$ time. Finally, given $\varepsilon,\delta\in(0,1)$ and for any measure, we show an $O(n\log n+ (n/\varepsilon^2)\log(1/\delta))$-time Monte Carlo algorithm that returns a value that, with probability of success at least $1-\delta$, differs at most $\varepsilon$ from the probability that the measure is at least $w$. \end{abstract}
\section{Introduction}\label{sec:intro}
Let $P$ be a set of $n$ points in the plane, where each point $p$ of $P$ is assigned a probability $\pi_p$.
Given any subset $X\subset \mathbb{R}^2$, let $\mathbb{A}(X)$ and $\mathbb{P}(X)$ denote the area and perimeter, respectively, of the convex hull of $X$. In this paper, we study the random variables $\mathbb{A}(S)$ and $\mathbb{P}(S)$, where $S$ is a random subset of $P$, formed by drawing each point $p$ of $P$ independently with probability $\pi_p$.
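Under this model, a random sample is one independent coin flip per point; a minimal sketch (the function name is ours):

```python
import random

def draw_sample(points, probs, rng=random):
    """Draw a random subset S of P: include each point p independently
    with its probability pi_p -- one coin flip per point, O(n) total."""
    return [p for p, pi in zip(points, probs) if rng.random() < pi]
```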
We assume the model in which the probability $\pi_p$ of every point $p$ of $P$ is a rational number, and where deciding whether $p$ is present in a random sample of $P$ can be done in constant time. Then, any random sample of $P$ can be generated in $O(n)$ time. We show the following results:
\begin{enumerate}\itemsep0em \item Given $w\ge 0$, computing $\Pr[\mathbb{A}(S)\ge w]$ is \#P-hard, even in the case where $\pi_p=\rho$ for all $p\in P$, for every $\rho\in(0,1)$.
\item Given $w\ge 0$, computing $\Pr[\mathbb{P}(S)\ge w]$ is \#P-hard, even in the case where $\pi_p\in\{\rho,1\}$ for all $p\in P$, for every $\rho\in(0,1)$.
\item For any measure $\mathsf{m}\in\{\mathbb{A},\mathbb{P}\}$, $w\ge 0$, and $\varepsilon\in(0,1)$, a value $\sigma$ so that $\Pr[\mathsf{m}(S)\ge w] \le \sigma \le \Pr[\mathsf{m}(S)\ge (1-\varepsilon)w]$ can be computed in $O(n^{6}/\varepsilon)$ time.
\item For any measure $\mathsf{m}\in\{\mathbb{A},\mathbb{P}\}$ and $\varepsilon,\delta\in(0,1)$, a value $\sigma'$ satisfying $\Pr[\mathsf{m}(S)\ge w]-\varepsilon < \sigma' < \Pr[\mathsf{m}(S)\ge w] + \varepsilon$ with probability at least $1-\delta$, can be computed in $O(n\log n+ (n/\varepsilon^2)\log(1/\delta))$ time.
\item If $P\subset[0,U]^2$ for some $U>0$, then given $\varepsilon\in(0,1)$ and $w\ge 0$, a value $\tilde{\sigma}$ satisfying $\Pr[\mathbb{A}(S)\ge w+\varepsilon] \le \tilde{\sigma} \le \Pr[\mathbb{A}(S)\ge w-\varepsilon]$ can be computed in $O(n^4\cdot U^4/\varepsilon^2)$ time. \end{enumerate}
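As a rough illustration of the Monte Carlo bound in item 4 (this is our own sketch, not the paper's implementation; all function names are ours), one can combine Andrew's monotone chain for the convex hull, the shoelace formula for its area, and Hoeffding's inequality for the number of samples:

```python
import math
import random

def hull_area(pts):
    """Area of the convex hull of pts (Andrew's monotone chain + shoelace)."""
    pts = sorted(set(pts))
    if len(pts) < 3:
        return 0.0
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    hull = lower[:-1] + upper[:-1]
    m = len(hull)
    return 0.5 * abs(sum(hull[i][0]*hull[(i+1) % m][1]
                         - hull[(i+1) % m][0]*hull[i][1] for i in range(m)))

def mc_estimate(points, probs, w, eps, delta, rng=random):
    """Estimate Pr[A(S) >= w] within +-eps with probability >= 1-delta:
    Hoeffding gives N = ceil(ln(2/delta) / (2 eps^2)) independent samples."""
    N = math.ceil(math.log(2.0 / delta) / (2.0 * eps * eps))
    hits = sum(hull_area([p for p, pi in zip(points, probs)
                          if rng.random() < pi]) >= w
               for _ in range(N))
    return hits / N
```

Each sample costs $O(n\log n)$ for the hull, matching the stated $O(n\log n + (n/\varepsilon^2)\log(1/\delta))$ overall bound.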
For the ease of explanation, we assume that the point set $P$ satisfies the following properties: no three points of $P$ are collinear, and no two points of $P$ lie on the same vertical or horizontal line.
All our results can be extended to consider point sets $P$ without these assumptions.
{\bf Notation:} Given three different points $p,q,r$ in the plane, let $\Delta(p,q,r)$ denote the triangle with vertex set $\{p,q,r\}$, $\ell(p,q)$ denote the directed line through $p$ in direction to $q$, $h(p)$ denote the horizontal line through $p$, $pq$ denote the segment with endpoints $p$ and $q$, and $\overline{pq}$ denote the length of $pq$.
We say that a triangle defined by three vertices of the convex hull of a random sample $S\subseteq P$ is {\em canonical} if the triangle contains the topmost point of $S$.
{\bf Outline:} In Section~\ref{sec:Pr}, we show that computing the probability that the area is at least a given bound is \#P-hard, and provide the algorithms to approximate this probability. In Section~\ref{sec:perim}, we show the results for the perimeter.
\section{Related work}
Stochastic finite point sets in the plane, like the one considered in this paper, arise naturally in many database scenarios in which the gathered data has many false positives~\cite{AgrawalBSHNSW06,cormode2009,jorgensen2012}. This model of random points differs from the model in which $n$ points are chosen independently at random in some Euclidean region, and questions related to the final positions of the points are considered~\cite{har2011expected,schneider200412,wendel1962problem}.
In recent years, algorithmic problems involving stochastic points have received increasing attention. In 2011, Chan et al.~\cite{kamousi2011} studied the computation of the expectation $\mathbb{E}[MST(S)]$, where $S$ is a random sample drawn from the point set $P$ and $MST(S)$ is the total length of the minimum Euclidean spanning tree of $S$. Each point is included in the sample $S$ independently with a given rational probability.
They motivate this problem from the following three situations: the point set $P$ may denote all possible customer locations, each with a known probability of being present at an instant, or it may denote sensors that trigger and upload data at unpredictable times, or it may be a set of multi-dimensional observations, each with a confidence value. Among other results, they proved that computing $\mathbb{E}[MST(S)]$ is \#P-hard and provided a random sampling based algorithm running in $O((n^5/\varepsilon^2)\log(n/\delta))$ time, that returns a $(1+\varepsilon)$-approximation with probability at least $1-\delta$.
In 2014, Chan et al.~\cite{kamousi2014} studied the probability that the distance of the closest pair of points is at most a given parameter, among $n$ stochastic points. Computing the closest pair of points among a set of precise points is a classic and well-known problem with an efficient solution in $O(n\log n)$ time. When introducing the stochastic imprecision, computing the above probability becomes \#P-hard~\cite{kamousi2014}.
Foschini et al.~\cite{yildiz2011} studied in 2011 the expected volume of the union of $n$ stochastic axis-aligned hyper-rectangles, where each hyper-rectangle is present with a given probability. They showed that the expected volume can be computed in polynomial time (assuming the dimension is a constant), provided a data structure for maintaining the expected volume over a dynamic family of such probabilistic hyper-rectangles, and proved that it is NP-hard to compute the probability that the volume exceeds a given value even in one dimension, using a reduction from \pb{SubsetSum}~\cite{Garey1979}.
With respect to the convex hull of stochastic points, in the same model that we consider (called {\em unipoint model}~\cite{PankajAgarwal2014}), Suri et al.~\cite{Suri2013} investigated the most likely convex hull of stochastic points, that is, the convex hull that occurs with the highest probability. They proved that such a convex hull can be computed in $O(n^3)$ time in the plane, and that its computation is NP-hard in higher dimensions.
In a more general model of discrete probabilistic points (called {\em multipoint model}~\cite{PankajAgarwal2014}), each of the $n$ points either does not occur or occurs at one of finitely many locations, following its own discrete probability distribution. In this model that generalizes the one considered in this paper, Agarwal et al.~\cite{PankajAgarwal2014} gave exact computations and approximations of the probability that a query point lies in the convex hull, and Feldman et al.~\cite{Munteanu2014} considered the minimum enclosing ball problem and gave a $(1+\varepsilon)$-approximation.
In this more general model and other ones, Jorgensen et al.~\cite{jorgensen2012} studied approximations of the distribution functions of the solutions of geometric shape-fitting problems, and described the variation of the solutions to these problems with respect to the uncertainty of the points. They noted that in the multipoint model the distribution of area or perimeter of the convex hull may have exponential complexity if all the points lie on or near a circle.
More recently, in 2014, Li et al.~\cite{li2014} considered a set of $n$ points in the plane colored with $k$ colors, and studied, among other computation problems, the computation of the expected area or perimeter of the convex hull of a random sample of the points. Such random samples are obtained by picking for each color a point of that color uniformly at random. They proved that both expectations can be computed in $O(n^2)$ time. We note that their arguments can be used to compute both $\mathbb{E}[\mathbb{A}(S)]$ and $\mathbb{E}[\mathbb{P}(S)]$, each one in $O(n^2)$ time. In the case of the expected perimeter, similar arguments were discussed by Chan et al.~\cite{kamousi2011}.
\section{Probability distribution function of area}\label{sec:Pr}
\subsection{\#P-hardness}\label{sec:Pr-nphard}
\begin{theorem}\label{theo:hard-prob} Given a stochastic point set $P$ at rational coordinates, an integer $w>0$, and a probability $\rho\in(0,1)$, it is \#P-hard to compute the probability $\Pr[\mathbb{A}(S)\ge w]$ that the area of the convex hull of a random sample $S\subseteq P$ is at least $w$, where each point of $P$ is included in $S$ independently with probability $\rho$. \end{theorem}
\begin{proof} We show a Turing reduction from \pb{\#SubsetSum}, which is \#P-complete~\cite{faliszewski2009}. Our Turing reduction assumes an unknown algorithm (i.e.\ oracle) $\mathcal{A}(P,w)$ computing $\Pr[\mathbb{A}(S)\ge w]$, which will be called twice. \pb{\#SubsetSum} receives as input a set $\{a_1,\ldots,a_n\}\subset \mathbb{N}$ of $n$ numbers and a target $t\in\mathbb{N}$, and counts the number of subsets
$J\subseteq [1..n]$ such that $\sum_{i\in J}a_i=t$. It remains \#P-hard if the subsets $J$ to count must also satisfy $|J|=k$, for given $k\in[1..n]$. Furthermore, we can add a large value (e.g.\ $1+a_1+\dots+a_n$) to every $a_i$, and add $k$ times this value to the target $t$, so that in the new instance only $k$-element index sets $J$ can add up to the new target.
Let $(\{a_1,\ldots,a_n\},t,k)$ be an instance of this restricted \pb{\#SubsetSum}. Then, by the above observations, we assume that
only sets $J\subseteq[1..n]$ with $|J|=k$ satisfy $\sum_{i\in J}a_i=t$.
To show that computing $\Pr[\mathbb{A}(S)\ge w]$ is \#P-hard, we construct in polynomial time the point set $P$ consisting of the $2n+1$ stochastic points $p_1,p_2,\ldots,p_{n+1}$ and $q_1,q_2,\ldots,q_n$ with the following properties (see Figure~\ref{fig:fig3}): \begin{enumerate}[(a)] \item\label{item:1} $P$ is in convex position and its elements appear as $p_1,q_1,p_2,q_2,\ldots,p_n,q_n,p_{n+1}$ clockwise;
\item the coordinates of $p_1,\ldots,p_{n+1}$ and $q_1,\ldots,q_n$ are rational numbers, each equal to the fraction of two polynomially-bounded natural numbers;
\item $\pi_{p}=\rho$ for every $p\in P$;
\item for some positive $b\in\mathbb{N}$, $\mathbb{A}(\{p_j,q_j,p_{j+1}\})=b\cdot a_j\in\mathbb{N}$ for all $j\in[1..n]$;
\item\label{item:2} $\mathbb{A}(\{p_1,\ldots,p_{n+1}\})\in\mathbb{N}$;
\item\label{item:f} $\mathbb{A}(\{q_i,p_{i+1},q_{i+1}\})$ for every $i\in[1..n-1]$, $\mathbb{A}(\{p_1,q_1,p_{n+1}\})$, and $\mathbb{A}(\{p_1,q_n,p_{n+1}\})$ are all greater than $b\cdot (a_1+\dots+a_n)$. \end{enumerate}
\begin{figure}\label{fig:fig3}
\end{figure}
Let $G=\mathbb{A}(\{p_1,\ldots,p_{n+1}\})$, and $S\subseteq P$ be any random sample of $P$ such that $\{p_1,\ldots,p_{n+1}\}\subseteq S$. Let $J_S=\{j\in[1..n]\given q_j\in S\}$. Observe that \begin{equation}
\label{eq7}
\mathbb{A}(S) ~= ~ G + \sum_{j\in J_S} \mathbb{A}(\{p_j,q_j,p_{j+1}\})
~=~ G + b\sum_{j\in J_S} a_j, \end{equation}
and that for every $J\subseteq [1..n]$ the probability that $J_S=J$ is precisely $\rho^{|J|}(1-\rho)^{n-|J|}$.
For $x\in \mathbb{N}$, let $f(x)$ denote the number of subsets $J\subseteq[1..n]$ with $x=\sum_{i \in J} a_i$, which by the above assumptions satisfy $|J|=k$. Then, the \pb{\#SubsetSum} instance asks for $f(t)$.
Let $E$ stand for the event in which $\{p_1,\dots,p_{n+1}\}\subseteq S$, and $\overline{E}$ the complement of $E$. Then, \begin{equation}
\label{eq8}
\Pr[\mathbb{A}(S)= G+bt] ~=~ \Pr[\mathbb{A}(S)= G+bt\given E]\cdot\Pr[E] +
\Pr[\mathbb{A}(S)= G+bt\given \overline{E}]\cdot\Pr[\overline{E}]. \end{equation} When the event $E$ does not occur, that is, when some point $p\in \{p_1,\dots,p_{n+1}\}$ is not in $S$, we have that the triangle with vertex set $p$ and the two vertices neighboring $p$ in the convex hull of $P$ is missing from the convex hull of $S$. Let \[
\Delta ~=~ \min\left\{\begin{array}{l}
\min_{i\in[1..n-1]} \mathbb{A}(\{q_i,p_{i+1},q_{i+1}\}), \\
\mathbb{A}(\{p_1,q_1,p_{n+1}\}), \\
\mathbb{A}(\{p_1,q_n,p_{n+1}\}).
\end{array}\right. \] Then, by property~(\ref{item:f}), we have that \[
\mathbb{A}(S) ~\le~ \mathbb{A}(P) - \Delta
~=~ G + b\cdot (a_1+\dots+a_n) -\Delta
~<~ G. \] Hence, $\mathbb{A}(S)= G+bt$ cannot happen when conditioned in $\overline{E}$. We then continue with equation~\eqref{eq8}, using equation~\eqref{eq7}, as follows: \begin{eqnarray*}
\Pr[\mathbb{A}(S)= G+bt] & = & \Pr[\mathbb{A}(S)= G+bt\given E]\cdot\Pr[E] \\
& = & \Pr\left[\sum_{j\in J_S}a_j=t, |J_S|=k\right] \cdot \Pr[E] \\
& = & \Pr\left[\sum_{j\in J_S}a_j=t\sgiven |J_S|=k\right]\cdot \Pr\bigl[|J_S|=k\bigr] \cdot \Pr[E] \\
& = & \frac{f(t)}{\binom{n}{k}} \cdot \binom{n}{k} \rho^{k}(1-\rho)^{n-k}\cdot \rho^{n+1} \\
& = & f(t) \cdot \rho^{n+k+1}(1-\rho)^{n-k}. \end{eqnarray*} Then, we have that \[
f(t)\cdot \rho^{n+k+1}(1-\rho)^{n-k}
~=~ \Pr[\mathbb{A}(S)\ge G+bt] - \Pr[\mathbb{A}(S)\ge G+bt+1]. \] Calling twice the algorithm $\mathcal{A}(P,w)$, we can compute $\Pr[\mathbb{A}(S)\ge G+bt]$ and $\Pr[\mathbb{A}(S)\ge G+bt+1]$, and then $f(t)$. Hence, computing $\Pr[\mathbb{A}(S)\ge w]$ is \#P-hard.
We show now how the above stochastic point set $P$ can be built in polynomial time. Let $p_i=((2i-1)^2,2i-1)$ for every $i\in[1..n+1]$, and $s_j=((2j)^2,2j)$ for every $j\in[1..n]$. Observe that the points $p_1,\ldots,p_{n+1},s_1,\ldots,s_n$ belong to $\mathbb{N}^2$, are in convex position, and they appear in the order $p_1,s_1,p_2,s_2,\ldots,p_n,s_n,p_{n+1}$ clockwise.
Furthermore, $\mathbb{A}(\{p_i,s_i,p_{i+1}\})=1$ for all $i\in[1..n]$. Let $\hat{a}=\max\{a_1,\ldots,a_n\}$, and $\lambda_i=a_i/(n\hat{a})$ for $i\in[1..n]$. For every $i\in[1..n]$, we build the point $q_i$ on the segment $s_im_i$, where $m_i=(p_i+p_{i+1})/2$ is the midpoint of the segment $p_ip_{i+1}$ (see Figure~\ref{fig:fig2}).
\begin{figure}\label{fig:fig2}
\end{figure}
The point $q_i$ is such that \[
\frac{~\overline{q_im_i}~}{\overline{s_im_i}}~=~ \lambda_i ~=~ \frac{a_i}{n\hat{a}} ~\le~ \frac{1}{n}. \] Observe then that $q_i\in\mathbb{Q}^2$, and $\mathbb{A}(\{p_i,q_i,p_{i+1}\})=\lambda_i$ for all $i\in[1..n]$.
Finally, we scale the point set $P=\{p_1,\ldots,p_{n+1},$ $q_1,\ldots,q_n\}$ by $2n\hat{a}$. Let $b=4n\hat{a}$.
We have now that \[
\mathbb{A}(\{p_i,q_i,p_{i+1}\}) ~=~ \left(2n\hat{a}\right)^2 \cdot \lambda_i ~=~ b\cdot a_i ~\in~ \mathbb{N}, \] and that $G=\mathbb{A}(\{p_1,\ldots,p_{n+1}\})\in\mathbb{N}$ since every new $p_i$ has even integer coordinates (see Figure~\ref{fig:fig3}).
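As a sanity check on this construction (our own illustration; function names are ours), the scaled points can be built with exact rational arithmetic and the identities $\mathbb{A}(\{p_i,q_i,p_{i+1}\})=b\cdot a_i$ and $G\in\mathbb{N}$ verified directly:

```python
from fractions import Fraction as F

def tri_area(p, q, r):
    """Exact triangle area |cross product|/2 with rational coordinates."""
    return abs((q[0] - p[0]) * (r[1] - p[1])
               - (q[1] - p[1]) * (r[0] - p[0])) / 2

def build_instance(a):
    """Build the scaled points of the reduction for weights a = [a_1,...,a_n]:
    p_i = ((2i-1)^2, 2i-1) and q_i = (4i^2+1-lambda_i, 2i) with
    lambda_i = a_i/(n*a_hat), all scaled by 2*n*a_hat; returns (ps, qs, b)
    where b = 4*n*a_hat."""
    n, ahat = len(a), max(a)
    s, b = 2 * n * ahat, 4 * n * ahat
    ps = [(F(s * (2 * i - 1) ** 2), F(s * (2 * i - 1)))
          for i in range(1, n + 2)]
    qs = [(s * (4 * i * i + 1 - F(a[i - 1], n * ahat)), F(2 * s * i))
          for i in range(1, n + 1)]
    return ps, qs, b
```

For instance, with $a=\{3,5\}$ one gets $b=40$, $\mathbb{A}(\{p_1,q_1,p_2\})=120$, $\mathbb{A}(\{p_2,q_2,p_3\})=200$, and $G=3200$, all integers as required.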
By setting $\pi_{p}=\rho$ for every $p\in P$, the point set $P$ satisfies properties (\ref{item:1})-(\ref{item:2}). We now show that property~(\ref{item:f}) is also satisfied.
Before scaling by $2n\hat{a}$, we have that \[
m_i~=~ (4i^2+1,2i) \] and \[
q_i~=~ m_i+\lambda_i(s_i-m_i) ~=~ (4i^2 +1 -\lambda_i,2i). \] Then, for $i\in[1..n-1]$, \begin{eqnarray*}
\mathbb{A}(\{q_i,p_{i+1},q_{i+1}\}) & = & \frac{1}{2}\left|
{\rm~det} \begin{bmatrix}
4i^2 + 1 -\lambda_i & 2i & 1 \\
(2i+1)^2 & 2i+1 & 1 \\
4(i+1)^2 + 1 -\lambda_{i+1} & 2i+2 & 1
\end{bmatrix}
\right| \\
& = & \frac{1}{2}\left|
{\rm~det} \begin{bmatrix}
-\lambda_i & 0 & 1 \\
4i & 1 & 1 \\
8i + 4 -\lambda_{i+1} & 2 & 1
\end{bmatrix}
\right| \\
& = & \frac{1}{2}\left(4-\lambda_i-\lambda_{i+1}\right) \\
& > & 1 \\
& \ge & \sum_{j\in[1..n]}\lambda_j. \end{eqnarray*} After scaling, we will have \[
\mathbb{A}(\{q_i,p_{i+1},q_{i+1}\}) ~>~ (2n\hat{a})^2 \cdot\sum_{j\in[1..n]}\lambda_j
~=~ b\cdot(a_1+\dots+a_n). \] Similarly, assuming $n\ge 2$, before scaling we have \begin{eqnarray*}
\mathbb{A}(\{p_1,q_1,p_{n+1}\}) & = & \frac{1}{2}\left|
{\rm~det} \begin{bmatrix}
1 & 1 & 1 \\
5-\lambda_1 & 2 & 1 \\
(2n+1)^2 & 2n+1 & 1
\end{bmatrix}
\right|\\
& = & n\lambda_1 + 2n(n-1)\\
& > & 1, \end{eqnarray*} and \begin{eqnarray*}
\mathbb{A}(\{p_1,q_n,p_{n+1}\}) & = & \frac{1}{2}\left|
{\rm~det} \begin{bmatrix}
1 & 1 & 1 \\
4n^2+1-\lambda_n & 2n & 1 \\
(2n+1)^2 & 2n+1 & 1
\end{bmatrix}
\right| \\
& = & n\lambda_n + 2n(n-1) \\
& > & 1. \end{eqnarray*} Then, after scaling we will have \[
\mathbb{A}(\{p_1,q_1,p_{n+1}\}), \mathbb{A}(\{p_1,q_n,p_{n+1}\}) > b\cdot(a_1+\dots+a_n). \] This shows that property~(\ref{item:f}) is ensured. The result thus follows. \end{proof}
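The counting identity at the heart of the reduction can be checked by brute force on a tiny hard-coded instance (our own illustration): with $a=\{1,2\}$, $t=3$, $k=2$ and $\rho=1/2$ we have $f(t)=1$, $b=16$, $G=512$, and enumerating all $2^5$ samples recovers $\Pr[\mathbb{A}(S)=G+bt]=f(t)\,\rho^{n+k+1}(1-\rho)^{n-k}=2^{-5}$:

```python
from fractions import Fraction as F
from itertools import product

# Hard-coded scaled instance of the reduction for a = [1, 2], t = 3, k = 2:
# n = 2, a_hat = 2, scaling factor 2*n*a_hat = 8, b = 16, G = 512.
# Points in clockwise convex-position order p1, q1, p2, q2, p3:
pts = [(8, 8), (38, 16), (72, 24), (132, 32), (200, 40)]

def convex_area(poly):
    """Shoelace area of points already listed in convex-position order."""
    if len(poly) < 3:
        return F(0)
    m = len(poly)
    s = sum(poly[i][0] * poly[(i + 1) % m][1]
            - poly[(i + 1) % m][0] * poly[i][1] for i in range(m))
    return abs(F(s, 2))

rho = F(1, 2)
G, b, t = 512, 16, 3
prob_eq = F(0)
for mask in product([0, 1], repeat=len(pts)):
    sub = [p for p, m in zip(pts, mask) if m]      # one possible sample S
    pr = rho ** sum(mask) * (1 - rho) ** (len(pts) - sum(mask))
    if convex_area(sub) == G + b * t:              # A(S) = G + b*t
        prob_eq += pr

# f(t) * rho^(n+k+1) * (1-rho)^(n-k) with n = 2, k = 2, f(t) = 1:
assert prob_eq == rho ** 5
```

Since the five points are in convex position, the hull of any sample is the polygon through the surviving points in cyclic order, which is why the shoelace formula suffices here.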
\subsection{Approximations}\label{sec:Pr-apx}
The idea for approximating $\Pr[\mathbb{A}(S)\ge w]$ is as follows. First, when the area of every triangle defined by points of $P$ is a natural number, the probability can be computed in time polynomial in $n$ and $w$ (see lemmas~\ref{lem:integer-areas-0} and~\ref{lem:integer-areas}). In the general case, we condition the samples $S$ on subsets of $P$ whose convex hull has bounded area, round the area of each triangle to a natural number within each conditioning, and apply Lemma~\ref{lem:integer-areas-0}
with the rounded areas in place of the exact ones. Combining the conditionings via the law of total probability yields the approximation to $\Pr[\mathbb{A}(S)\ge w]$.
\begin{lemma}\label{lem:integer-areas-0} Let $a\in P$, and $E_{a}$ denote the event for the random sample $S\subseteq P$ in which $a$ is the topmost point of $S$. Assuming that the area of each triangle defined by points of $P$ is a natural number, given an integer $w\ge 0$, the probability $\Pr[\mathbb{A}(S)\ge w\mid E_a]$ can be computed in $O(n^{3}\cdot w)$ time. \end{lemma}
\begin{proof} We show how to compute the probability $\Pr[\mathbb{A}(S)\ge w\given E_{a}]$ using dynamic programming. Let $B_a\subset P$ denote the points below the line $h(a)$, and $\mathbf{P}_{a}\subset (\{a\}\cup B_a)^2$ denote the set of pairs of distinct points $(u,v)$ such that either $v=a$, or $v\neq a$ and $u$ is to the left of the directed line $\ell(a,v)$.
For a point $b\in B_a$, let $F_b$ stand for the event that $b$ is the vertex following $a$ in the counter-clockwise order of the vertices of the convex hull of $(S\cap B_a)\cup \{a\}$.
For every $(u,v)\in \mathbf{P}_{a}$, let $Z_{u,v}\subset\mathbb{R}^2$ denote the region of the points below the line $h(a)$, to the left of the line $\ell(a,u)$, and to the left of the line $\ell(v,u)$ (see Figure~\ref{fig:fig7}). \begin{figure}\label{fig:fig7}
\end{figure} Now, for every $z\in[0..w]$, consider the entry $T[u,v,z]$ of the table $T$, defined as \[
T[u,v,z] ~=~ \Pr\Bigl[\mathbb{A}\bigl((S\cap Z_{u,v})\cup \{a,u\}\bigr)\ge z\Bigr], \] which is the probability that the area of the convex hull of the random sample restricted to $Z_{u,v}$, together with the points $a$ and $u$, is at least $z$. Then, note that \begin{equation}\label{eq5}
\Pr\Bigl[\mathbb{A}(S)\ge w\given E_{a}\Bigr] ~=~ \sum_{b\in B_a}\Pr\bigl[F_b\bigr]\cdot T[b,a,w]. \end{equation} We now show how to compute $T[u,v,z]$ recursively for all $u,v,z$. For every point $u'\in P\cap Z_{u,v}$, let $N_{u'}$ denote the event that $u'$ satisfies the following properties:
$u'\in S$ and $u'$ is the vertex of the convex hull of $(S\cap Z_{u,v})\cup \{a,u\}$ that follows the vertex $u$ in counter-clockwise order, that is, $uu'$ is an edge of the convex hull of $(S\cap Z_{u,v})\cup \{a,u\}$ and the elements of $(S\cap Z_{u,v})\setminus \{u'\}$ are to the left of the line $\ell(u,u')$ (see Figure~\ref{fig:fig8}(left)). Note that $u'$ is also the first point of $S\cap Z_{u,v}$ hit by the line $\ell(v,u)$ when rotated counter-clockwise centered at $u$. Then, we have that \begin{figure}
\caption{\small{Computing the entries $T[u,v,z]$ recursively.}}
\label{fig:fig8}
\end{figure} \[
T[u,v,0]~=~1~~ \text{ for all } (u,v) \in \mathbf{P}_{a} \] and \[
T[u,v,z] ~=~ \sum_{u'\in P\cap Z_{u,v}} \Pr[N_{u'}]\cdot F(u,z,u') \] for all $(u,v) \in \mathbf{P}_{a}$ and $z\in[1..w]$, where \[
F(u,z,u') ~=~ \left\{
\begin{array}{ll}
T\bigl[u',u,z-\mathbb{A}(\{u,u',a\})\bigr] & \text{if}~\mathbb{A}(\{u,u',a\})<z,\\
1 & \text{if}~\mathbb{A}(\{u,u',a\})\ge z
\end{array}
\right. \] (see Figure~\ref{fig:fig8}(right)). The points in $P\cap Z_{u,v}$ can be sorted radially around $u$ in $O(n)$ time, after computing the dual arrangement of $P$ in $O(n^2)$ time as a one-time preprocessing; following this radial order, the probabilities $\Pr[N_{u'}]$, $u'\in P\cap Z_{u,v}$, can be computed in $O(n)$ total time. Then, all entries $T[u,v,z]$ can be computed in $O(n^{3}\cdot w)$ time. Similarly, using the dual arrangement of $P$, the probabilities $\Pr[F_{b}]$, $b\in B_a$, can be computed in $O(n)$ total time, after which $\Pr[\mathbb{A}(S)\ge w\given E_{a}]$ can be computed in linear time from table $T$ and equation~\eqref{eq5}.
Hence, $\Pr[\mathbb{A}(S)\ge w\mid E_a]$ can be computed in overall $O(n^{3}\cdot w)$ time. The result thus follows. \end{proof}
\begin{lemma}\label{lem:integer-areas} Assuming that the area of each triangle defined by points of $P$ is a natural number, given an integer $w\ge 0$, the probability $\Pr[\mathbb{A}(S)\ge w]$ can be computed in $O(n^{4}\cdot w)$ time. \end{lemma}
\begin{proof} Observe that we have \[
\Pr\Bigl[\mathbb{A}(S)\ge w\Bigr] ~=~ \sum_{a\in P}\Pr\Bigl[\mathbb{A}(S)\ge w\given E_{a}\Bigr]\cdot\Pr\Bigl[E_{a}\Bigr], \] and that all probabilities $\Pr[E_{a}]$, $a\in P$, can be computed in $O(n)$ time after an $O(n\log n)$-time vertical sorting preprocessing of $P$. Using Lemma~\ref{lem:integer-areas-0} to compute $\Pr[\mathbb{A}(S)\ge w\given E_{a}]$ for each $a\in P$, the overall running time to compute $\Pr[\mathbb{A}(S)\ge w]$ is $O(n^{4}\cdot w)$. \end{proof}
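The $O(n)$-time computation of the probabilities $\Pr[E_a]$ after the vertical sorting is a prefix-product scan: $\Pr[E_a]=\pi_a\prod_{b\ \text{above}\ a}(1-\pi_b)$. A minimal Python sketch of this scan (the function name `topmost_probs` is ours, and distinct $y$-coordinates are assumed); as a sanity check, the events $E_a$ partition the event that $S$ is nonempty:

```python
from math import prod

def topmost_probs(points, pi):
    """Pr[E_a] for each index a: point a is sampled and every point
    strictly above it is not.  Assumes distinct y-coordinates."""
    order = sorted(range(len(points)), key=lambda i: -points[i][1])
    probs = {}
    none_above = 1.0          # probability that no higher point is in S
    for i in order:
        probs[i] = pi[i] * none_above
        none_above *= 1.0 - pi[i]
    return probs

pts = [(0.0, 3.0), (1.0, 1.0), (2.0, 2.0)]
p = [0.5, 0.25, 0.5]
E = topmost_probs(pts, p)
# the events E_a partition "S is nonempty":
assert abs(sum(E.values()) - (1 - prod(1 - q for q in p))) < 1e-12
```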
Before proving the main result of this section (i.e.\ Theorem~\ref{theo:Pr-approx}), we prove the following useful technical lemma:
\begin{figure}\label{fig:fig12}
\end{figure}
\begin{lemma}\label{lemma:lambda} Let $X$ be a (finite) point set in the plane, $p$ a topmost point of $X$, $q$ a bottommost point of $X$, and $\lambda$ the area of the triangle of maximum area with vertices $p$, $q$, and another point of $X$. Then, we have that: \[
\lambda ~\le~ \mathbb{A}(X) ~\le~ 4\lambda. \] \end{lemma}
\begin{proof} Let $r\in X$ be a point such that $\mathbb{A}(\{p,q,r\})=\lambda$, and assume w.l.o.g.\ that $r$ is to the left of the line $\ell(p,q)$. Let $\ell_1$ denote the line through $r$ and parallel to $\ell(p,q)$, and line $\ell_2$ the reflection of $\ell_1$ about $\ell(p,q)$ (see Figure~\ref{fig:fig12}). Let points $s_0=\ell(p,q)\cap h(r)$, $s_1=\ell_1\cap h(q)$, $s_2=\ell_2\cap h(q)$, $s_3=\ell_2\cap h(p)$, and $s_4=\ell_1\cap h(p)$. Note that triangles $\Delta(p,r,s_0)$ and $\Delta(p,s_4,r)$ are congruent, and triangles $\Delta(q,s_0,r)$ and $\Delta(q,r,s_1)$ are congruent. Furthermore, $X$ is contained in the parallelogram with vertex set $\{s_1,s_2,s_3,s_4\}$. Then, we have \begin{eqnarray*}
\mathbb{A}(X) & \le & \mathbb{A}(\{s_1,s_2,s_3,s_4\}) \\
& = & 2\cdot \mathbb{A}(\{s_1,q,p,s_4\}) \\
& = & 2\cdot \Bigl( \mathbb{A}(\{p,r,s_0\}) + \mathbb{A}(\{p,s_4,r\}) + \mathbb{A}(\{q,s_0,r\}) + \mathbb{A}(\{q,r,s_1\}) \Bigr) \\
& = & 2\cdot \Bigl( 2\cdot \mathbb{A}(\{p,r,s_0\}) + 2\cdot \mathbb{A}(\{q,s_0,r\}) \Bigr) \\
& = & 4 \cdot \mathbb{A}(\{p,q,r\})\\
& = & 4\lambda. \end{eqnarray*} Trivially, $\lambda \le \mathbb{A}(X)$, and the lemma thus follows. \end{proof}
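As a numerical sanity check of this lemma (not needed for the formal argument), the following Python sketch verifies $\lambda\le\mathbb{A}(X)\le 4\lambda$ on random point sets; the hull area is computed with Andrew's monotone chain and the shoelace formula, and all function names are ours:

```python
import random

def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def tri_area(p, q, r):
    return abs(cross(p, q, r)) / 2.0

def hull_area(pts):
    """Area of the convex hull, via Andrew's monotone chain + shoelace."""
    pts = sorted(set(pts))
    if len(pts) < 3:
        return 0.0
    def chain(ps):
        h = []
        for p in ps:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    hull = chain(pts)[:-1] + chain(pts[::-1])[:-1]
    return abs(sum(hull[i][0]*hull[(i+1) % len(hull)][1]
                   - hull[(i+1) % len(hull)][0]*hull[i][1]
                   for i in range(len(hull)))) / 2.0

random.seed(7)
for _ in range(200):
    X = [(random.random(), random.random()) for _ in range(8)]
    p = max(X, key=lambda v: v[1])                    # a topmost point
    q = min(X, key=lambda v: v[1])                    # a bottommost point
    lam = max(tri_area(p, q, r) for r in X)
    A = hull_area(X)
    assert lam <= A + 1e-12 and A <= 4*lam + 1e-12    # the lemma's two bounds
```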
\begin{theorem}\label{theo:Pr-approx} Given $\varepsilon\in(0,1)$ and $w\ge 0$, a value $\sigma$ satisfying \[
\Pr[\mathbb{A}(S)\ge w] ~\le~ \sigma ~\le~ \Pr[\mathbb{A}(S)\ge (1-\varepsilon)w] \] can be computed in $O(n^{6}/\varepsilon)$ time. \end{theorem}
\begin{proof} Given two points $p,q\in P$, let $E_{p,q}$ denote the event in which the random sample $S\subseteq P$ satisfies that: $p$ is the topmost point of $S$, and $q$ is the bottommost point of $S$.
Conditioned on the event $E_{p,q}$, for two points $p,q\in P$, let $\lambda=\lambda(p,q)$ denote the area of the triangle of maximum area with vertices $p$, $q$, and another point of $S$. By Lemma~\ref{lemma:lambda}, we have \[
\lambda ~\le~ \mathbb{A}(S) ~\le~ 4\lambda. \] Furthermore, if $w\leq \lambda$ then $\Pr[\mathbb{A}(S)\ge w\given E_{p,q}]=1$, and if $4\lambda<w$ then $\Pr[\mathbb{A}(S)\ge w\given E_{p,q}]=0$. Then, we can compute $\Pr[\mathbb{A}(S)\ge w]$ as follows:
\begin{eqnarray}
\nonumber
\Pr\bigl[\mathbb{A}(S)\ge w\bigl] & = & \sum_{p,q\in P} \Pr\bigl[E_{p,q}\bigr]\cdot\Pr\bigl[\mathbb{A}(S)\ge w\given E_{p,q}\bigr] \\
\nonumber
& = & \sum_{p,q\in P} \Pr\bigl[E_{p,q}\bigr] \biggl( \Pr\bigl[\mathbb{A}(S)\ge w\given E_{p,q},\lambda\ge w \bigr]\Pr\bigl[\lambda\ge w\given E_{p,q}\bigr] + \\
\nonumber
& & \Pr\left[\mathbb{A}(S)\ge w\given E_{p,q},\lambda\in \left[\tfrac{w}{4},w\right) \right] \cdot \Pr\left[\lambda\in \left[\tfrac{w}{4},w\right)\sgiven E_{p,q}\right] + \\
\nonumber
& & \Pr\left[\mathbb{A}(S)\ge w\given E_{p,q},\lambda < \tfrac{w}{4} \right]
\Pr\left[\lambda < \tfrac{w}{4}\sgiven E_{p,q}\right] \biggr)\\
\nonumber
& = & \sum_{p,q\in P} \Pr\bigl[E_{p,q}\bigr] \biggl( \Pr\bigl[\lambda\ge w\given E_{p,q}\bigr]+\\
\label{eq6}
& & \Pr\left[\mathbb{A}(S)\ge w\sgiven E_{p,q},\lambda\in \left[\tfrac{w}{4},w\right) \right] \cdot
\Pr\left[\lambda\in \left[\tfrac{w}{4},w\right)\sgiven E_{p,q}\right] \biggr). \end{eqnarray}
For given $p,q\in P$, and $z\ge 0$, let $P(p,q,z)\subseteq P$ denote the set of the points $r\in P$ lying in the strip bounded by the horizontal lines through $p$ and $q$, respectively, such that $\mathbb{A}(\{p,q,r\})\ge z$. Since \[
\Pr[\lambda\ge z\given E_{p,q}] ~=~ 1 - \prod_{r\in P(p,q,z)}(1-\pi_r), \] both $\Pr[\lambda\ge w\given E_{p,q}]$ and $\Pr[\lambda\in [w/4,w)\given E_{p,q}]=\Pr[\lambda\ge w/4\given E_{p,q}]-\Pr[\lambda\ge w\given E_{p,q}]$ can be computed in $O(n)$ time.
To approximate $\Pr[\mathbb{A}(S)\ge w]$ using equation~\eqref{eq6}, we compute in what follows the value $\sigma_{p,q}\in[0,1]$ as an approximation to the probability $\Pr[\mathbb{A}(S)\ge w\given E_{p,q},\lambda\in [w/4,w) ]$.
Let $P'=P(p,q,0)\setminus P(p,q,w)$, and note that $S\subseteq P'$ when conditioned on $E_{p,q}$ and $\lambda\in [w/4,w)$.
Let $\theta=\varepsilon/n$. We round the area $a$ of each triangle defined by three points of $P'$ to $\widehat{a}=\lceil\frac{a}{\theta\cdot w}\rceil$, and the target $w$ to $\widehat{w}=\lfloor\frac{1}{\theta}\rfloor$. Let $\widehat{\mathbb{A}}(S)$ be the sum of the rounded areas of the canonical triangles of the convex hull of $S$. Since the algorithm of Lemma~\ref{lem:integer-areas-0} sums areas of canonical triangles, we can run it over $P'$ by assuming that event $E_p$ is satisfied (i.e.\ $p$ is the topmost point of any random sample $S\subseteq P'$) and $\pi_q=1$, but using the rounded areas instead of the original ones; these assumptions are justified because event $E_{p,q}$ holds. Doing this, we can compute the probability $\Pr[\widehat{\mathbb{A}}(S)\ge \widehat{w}\mid E_p]$ of Lemma~\ref{lem:integer-areas-0}, for $S\subseteq P'$, in \[
O(n^{3}\cdot \widehat{w})~=~O\left(n^{3}\cdot\left\lfloor\frac{1}{\theta}\right\rfloor\right)
~=~ O(n^{4}/\varepsilon) \] time, and set $\sigma_{p,q}$ to it. We now analyse how close $\sigma_{p,q}$ is to $\Pr[\mathbb{A}(S)\ge w\given E_{p,q},\lambda\in [w/4,w) ]$. Let $S$ be a random sample conditioned on both $E_{p,q}$ and $\lambda\in [w/4,w)$, and so that the convex hull of $S$ is triangulated into $k$ canonical triangles of areas $a_1,a_2,\ldots,a_k$, respectively. We have \[
w ~\ge~ \theta w\left\lfloor\frac{1}{\theta}\right\rfloor ~=~ \theta w \cdot\widehat{w} \] and \[
\theta w\left(\widehat{a_1}+\dots+\widehat{a_k}\right) ~=~
\theta w\left\lceil\frac{a_1}{\theta w}\right\rceil+\dots+
\theta w\left\lceil\frac{a_k}{\theta w}\right\rceil
~\ge~ a_1+\dots+a_k. \] Then, $a_1+\dots+a_k\ge w$ implies $\widehat{a_1}+\dots+\widehat{a_k}\ge \widehat{w}$. Hence, \begin{equation}\label{eq1}
\Pr\Bigl[\mathbb{A}(S)\ge w\given E_{p,q},\lambda\in [w/4,w) \Bigr]~\le~ \sigma_{p,q}. \end{equation} Assume now that $\widehat{a_1}+\dots+\widehat{a_k}\ge \widehat{w}$. Then, given that \[
\widehat{w} ~=~ \left\lfloor\frac{1}{\theta}\right\rfloor ~\ge~ \frac{1}{\theta}-1 \] and \[
\widehat{a_1}+\dots+\widehat{a_k} ~=~
\left\lceil\frac{a_1}{\theta w}\right\rceil + \dots +
\left\lceil\frac{a_k}{\theta w}\right\rceil \\
~\le~ \frac{a_1}{\theta w} + \dots + \frac{a_k}{\theta w} + k, \] we have \[
\frac{a_1}{\theta w} + \dots + \frac{a_k}{\theta w} + k ~\ge~ \frac{1}{\theta}-1 \] which implies \[
a_1+\dots+a_k ~\ge~ w - (k+1)\cdot\theta w
~\ge~ w - n\cdot\theta w
~=~ (1-n \theta)w
~=~ (1-\varepsilon)w. \] Then, $\widehat{a_1}+\dots+\widehat{a_k}\ge \widehat{w}$ implies $a_1+\dots+a_k\ge (1-\varepsilon)w$. Therefore, \begin{equation}\label{eq2}
\sigma_{p,q} ~\le ~ \Pr\left[\mathbb{A}(S)\ge (1-\varepsilon)w\sgiven E_{p,q},\lambda\in \left[\tfrac{w}{4},w\right) \right]. \end{equation} We then compute in $O(n^2 \cdot n^4/\varepsilon)=O(n^{6}/\varepsilon)$ time the value \[
\sigma ~=~ \sum_{p,q\in P} \Pr\bigl[E_{p,q}\bigr] \Bigl( \Pr\bigl[\lambda\ge w\given E_{p,q}\bigr] +
\sigma_{p,q} \cdot \Pr\left[\lambda\in \left[\tfrac{w}{4},w\right)\sgiven E_{p,q}\right] \Bigr), \] which verifies \[
\Pr\Bigl[\mathbb{A}(S)\ge w\Bigr] ~\le~ \sigma \] by equations~\eqref{eq6} and~\eqref{eq1}. Let $w_{\varepsilon}=(1-\varepsilon)w<w$. By equations~\eqref{eq6} and~\eqref{eq2}, $\sigma$ also verifies that \begin{eqnarray*} \sigma
& \le & \sum_{p,q\in P} \Pr\bigl[E_{p,q}\bigr] \biggl( \Pr\bigl[\lambda\ge w\given E_{p,q}\bigr] + \\
& &
\Pr\left[\mathbb{A}(S)\ge w_{\varepsilon}\given E_{p,q},\lambda\in \left[\frac{w}{4},w\right) \right] \cdot
\Pr\left[\lambda\in \left[\frac{w}{4},w\right)\given E_{p,q}\right] \biggr) \\
& \le & \sum_{p,q\in P} \Pr\bigl[E_{p,q}\bigr] \biggl( \Pr\bigl[\lambda\ge w\given E_{p,q}\bigr] +\\
& & \Pr\left[\mathbb{A}(S)\ge w_{\varepsilon}\sgiven E_{p,q},\lambda\in \left[\frac{w_{\varepsilon}}{4},w\right) \right] \cdot
\Pr\left[\lambda\in \left[\frac{w_{\varepsilon}}{4},w\right)\sgiven E_{p,q}\right] \biggr)\\
& = & \sum_{p,q\in P} \Pr\bigl[E_{p,q}\bigr] \biggl( \Pr\bigl[\lambda\ge w\given E_{p,q}\bigr] +\\
& & \Pr\left[\mathbb{A}(S)\ge w_{\varepsilon}\sgiven E_{p,q},\lambda\in\left[\frac{w_{\varepsilon}}{4},w_{\varepsilon}\right)\right]\cdot
\Pr\left[\lambda\in \left[\frac{w_{\varepsilon}}{4},w_{\varepsilon}\right)\sgiven E_{p,q}\right] +\\
& & \Pr\bigl[\mathbb{A}(S)\ge w_{\varepsilon}\given E_{p,q},\lambda\in [w_{\varepsilon},w) \bigr] \cdot
\Pr\bigl[\lambda\in [w_{\varepsilon},w)\given E_{p,q}\bigr] \biggr)\\
& = & \sum_{p,q\in P} \Pr\bigl[E_{p,q}\bigr] \biggl( \Pr\bigl[\lambda\ge w\given E_{p,q}\bigr] +\\
& & \Pr\left[\mathbb{A}(S)\ge w_{\varepsilon}\sgiven E_{p,q},\lambda\in \left[\frac{w_{\varepsilon}}{4},w_{\varepsilon}\right)\right]\cdot
\Pr\left[\lambda\in \left[\frac{w_{\varepsilon}}{4},w_{\varepsilon}\right)\sgiven E_{p,q}\right] +\\
& & \Pr\bigl[\lambda\in [w_{\varepsilon},w)\given E_{p,q}\bigr] \biggr)\\
& = & \sum_{p,q\in P} \Pr\bigl[E_{p,q}\bigr] \biggl( \Pr\bigl[\lambda\ge w_{\varepsilon}\given E_{p,q}\bigr] +\\
& & \Pr\left[\mathbb{A}(S)\ge w_{\varepsilon}\sgiven E_{p,q},\lambda\in \left[\frac{w_{\varepsilon}}{4},w_{\varepsilon}\right)\right] \cdot
\Pr\left[\lambda\in \left[\frac{w_{\varepsilon}}{4},w_{\varepsilon}\right)\sgiven E_{p,q}\right] \biggr)\\
& = & \Pr\bigl[\mathbb{A}(S)\ge (1-\varepsilon)w\bigr]. \end{eqnarray*} The result thus follows. \end{proof}
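The two one-sided implications established in the proof above, that $\sum_i a_i\ge w$ forces $\sum_i\widehat{a_i}\ge\widehat{w}$, and that $\sum_i\widehat{a_i}\ge\widehat{w}$ forces $\sum_i a_i\ge(1-\varepsilon)w$, can be tested numerically. A Python sketch (our own; $k\le n-2$ because the hull of a sample of $n$ points has at most $n-2$ canonical triangles, and $\theta$ is chosen to be an exact binary fraction to avoid floating-point artifacts):

```python
import math
import random

def round_up(a, theta, w):
    """The rounded area: ceil(a / (theta * w))."""
    return math.ceil(a / (theta * w))

n, eps, w = 8, 0.25, 4.0
theta = eps / n                  # 0.03125, exact in binary
w_hat = math.floor(1 / theta)    # rounded target, here 32
random.seed(1)
for _ in range(2000):
    k = random.randint(1, n - 2)                    # number of canonical triangles
    areas = [random.uniform(0.0, 2.0) for _ in range(k)]
    a_hat = [round_up(a, theta, w) for a in areas]
    if sum(areas) >= w:                             # exact success implies ...
        assert sum(a_hat) >= w_hat                  # ... rounded success
    if sum(a_hat) >= w_hat:                         # rounded success implies ...
        assert sum(areas) >= (1 - eps) * w - 1e-9   # ... a (1 - eps)-success
```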
Given the high running time of the algorithm in Theorem~\ref{theo:Pr-approx}, and since the gap $\Pr[\mathbb{A}(S)\ge (1-\varepsilon)w]-\Pr[\mathbb{A}(S)\ge w]$ may be close to 1, we give the following simple Monte Carlo algorithm, which approximates $\Pr[\mathbb{A}(S)\ge w]$ within a prescribed additive error and with a guaranteed success probability. A similar algorithm was given by Agarwal et al.~\cite{PankajAgarwal2014} to approximate the probability that a given query point is contained in the convex hull of the probabilistic points.
\begin{theorem}\label{theo:chernoff-apx} Given $\varepsilon,\delta\in(0,1)$ and $w\ge 0$, a value $\sigma'$ can be computed in $O(n\log n+ (n/\varepsilon^2)\log(1/\delta))$ time so that with probability at least $1-\delta$ \[
\Pr[\mathbb{A}(S)\ge w]-\varepsilon ~<~ \sigma' ~<~ \Pr[\mathbb{A}(S)\ge w]+\varepsilon. \] \end{theorem}
\begin{proof} The idea is to use repeated random sampling. Let $S_1,S_2,\ldots,S_N\subseteq P$ be $N$ random samples of $P$, where $N$ will be specified below, and let $X_i$ ($i=1,\ldots,N$) be the indicator variable such that $X_i=1$ if and only if $\mathbb{A}(S_i)\ge w$. Let $\mu=\Pr[\mathbb{A}(S)\ge w]$
and $\sigma'=(1/N)\sum_{i=1}^N X_i$, and note that $\mathbb{E}[X_i]=\mu$. Using a Chernoff-Hoeffding bound, we have $\Pr[|\sigma'-\mu|\ge \varepsilon]\leq 2\exp(-2\varepsilon^2N)$. Then, setting $N=\lceil(1/2\varepsilon^2)\ln(2/\delta)\rceil$, we have that $|\sigma'-\mu|<\varepsilon$ with probability at least $1-\delta$. Since after an $O(n\log n)$-time sorting preprocessing of $P$, the convex hull of each sample $S_i$ can be computed in $O(n)$ time, the running time is $O(n\log n+N\cdot n)=O(n\log n+ (n/\varepsilon^2)\log(1/\delta))$. \end{proof}
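A direct implementation of this estimator is short. The Python sketch below is our own; it computes hull areas with Andrew's monotone chain (any method that is linear after sorting works), and uses the sample count $N=\lceil(1/2\varepsilon^2)\ln(2/\delta)\rceil$ from the proof. With all $\pi_r=1$ the estimator is deterministic and returns the exact indicator, which gives a convenient correctness check:

```python
import math
import random

def cross(o, a, b):
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def hull_area(pts):
    """Convex-hull area via monotone chain + shoelace."""
    pts = sorted(set(pts))
    if len(pts) < 3:
        return 0.0
    def chain(ps):
        h = []
        for p in ps:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    hull = chain(pts)[:-1] + chain(pts[::-1])[:-1]
    return abs(sum(hull[i][0]*hull[(i+1) % len(hull)][1]
                   - hull[(i+1) % len(hull)][0]*hull[i][1]
                   for i in range(len(hull)))) / 2.0

def mc_estimate(P, pi, w, eps, delta, rng=random):
    N = math.ceil(math.log(2/delta) / (2*eps*eps))   # Chernoff-Hoeffding count
    hits = 0
    for _ in range(N):
        S = [p for p, q in zip(P, pi) if rng.random() < q]
        hits += hull_area(S) >= w
    return hits / N

P3 = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]            # hull area 8 when all present
assert mc_estimate(P3, [1, 1, 1], 7.0, 0.1, 0.1) == 1.0
assert mc_estimate(P3, [1, 1, 1], 9.0, 0.1, 0.1) == 0.0
```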
If the coordinates of the points of $P$ belong to some range of bounded size, then we can round the coordinates of each point of $P$ so that in the resulting point set every triangle defined by three points has integer area. After that, we can use Lemma~\ref{lem:integer-areas} over the resulting point set to approximate the probability $\Pr[\mathbb{A}(S)\ge w]$. This approach is used in the following result.
\begin{theorem}\label{theo:bounded-domain-apx} If $P\subset[0,U]^2$ for some $U>0$, then given $\varepsilon\in(0,1)$ and $w\ge 0$ a value $\tilde{\sigma}$ satisfying \[
\Pr[\mathbb{A}(S)\ge w+\varepsilon] ~\le~ \tilde{\sigma} ~\le~ \Pr[\mathbb{A}(S)\ge w-\varepsilon] \] can be computed in $O(n^4\cdot U^4/\varepsilon^2)$ time. \end{theorem}
\begin{proof} Let $\delta>0$ be a parameter to be specified later.
For every random sample $S\subseteq P$, let \[
\tilde{S} ~=~ \left\{\left(2\left\lfloor \frac{x}{\delta} \right\rfloor,
2\left\lfloor \frac{y}{\delta} \right\rfloor \right) ~:~ (x,y)\in S\right\}. \] Note that the area of every triangle defined by three points of $\tilde{S}$ is a natural number, for every $S\subseteq P$. Furthermore, we have that \[
\left| \mathbb{A}(S) - \left(\frac{\delta^2}{4}\right)\mathbb{A}(\tilde{S}) \right| ~<~ 4\delta U. \] Using Lemma~\ref{lem:integer-areas}, we can compute the probability \[
\tilde{\sigma} ~=~ \Pr\left[\mathbb{A}(\tilde{S}) \ge \left\lceil \frac{4w}{\delta^2} \right\rceil\right] \] in $O\left(n^4 \cdot \left\lceil 4w/\delta^2 \right\rceil\right) \subseteq O\left(n^4\cdot U^2/\delta^2\right)$ time; here we may assume $w\le U^2$, since otherwise $\Pr[\mathbb{A}(S)\ge w]=0$.
If $\mathbb{A}(\tilde{S}) \ge \left\lceil 4w/\delta^2 \right\rceil$, then \[
w ~\le~ \mathbb{A}(\tilde{S})\cdot \frac{\delta^2}{4} ~<~ \mathbb{A}(S) + 4\delta U, \] which implies $\mathbb{A}(S) ~\ge~ w-4\delta U$. Hence, \begin{equation}
\label{eq3}
\tilde{\sigma} ~=~ \Pr\left[\mathbb{A}(\tilde{S}) \ge \left\lceil 4w/\delta^2 \right\rceil\right]
~\le~ \Pr\Bigl[\mathbb{A}(S)\ge w-4\delta U\Bigr]. \end{equation}
If $\mathbb{A}(S)\ge w+4\delta U$, then \[
w+4\delta U ~\le~ \mathbb{A}(S) ~<~ \frac{\delta^2}{4}\cdot \mathbb{A}(\tilde{S}) + 4\delta U, \] which implies $\mathbb{A}(\tilde{S})\ge \left\lceil 4w/\delta^2 \right\rceil$ since $\mathbb{A}(\tilde{S})\in \mathbb{N}$. Then, we have that \begin{equation}
\label{eq4}
\Pr\Bigl[\mathbb{A}(S)\ge w+4\delta U\Bigr] ~\le~ \Pr\left[\mathbb{A}(\tilde{S}) \ge \left\lceil 4w/\delta^2 \right\rceil\right]
~=~ \tilde{\sigma}. \end{equation} Setting $\delta=\frac{\varepsilon}{4U}$, and combining~\eqref{eq3} and~\eqref{eq4}, we have that $\tilde{\sigma}$ satisfies \[
\Pr[\mathbb{A}(S)\ge w+\varepsilon] ~\le~ \tilde{\sigma} ~\le~ \Pr[\mathbb{A}(S)\ge w-\varepsilon], \] and can be computed in $O(n^4 U^4/\varepsilon^2)$ time. \end{proof}
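The two facts used in this proof, that every triangle on the snapped point set has integer area, and that scaling back by $\delta/2$ moves each point by less than $\delta$ per coordinate (whence the $4\delta U$ bound on the change of hull area), are easy to check numerically. A Python sketch (our own):

```python
import math
import random
from itertools import combinations

def snap(S, delta):
    """Round each point (x, y) to (2*floor(x/delta), 2*floor(y/delta))."""
    return [(2*math.floor(x/delta), 2*math.floor(y/delta)) for (x, y) in S]

def twice_tri_area(p, q, r):
    return abs((q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0]))

random.seed(3)
U, eps = 10.0, 0.5
delta = eps / (4*U)
S = [(random.uniform(0, U), random.uniform(0, U)) for _ in range(12)]
T = snap(S, delta)
# snapped coordinates are even integers, so every triangle has integer area:
for p, q, r in combinations(T, 3):
    assert twice_tri_area(p, q, r) % 2 == 0
# scaling back by delta/2 moves each point by less than delta per coordinate:
for (x, y), (a, b) in zip(S, T):
    assert -1e-9 <= x - (delta/2)*a < delta + 1e-9
    assert -1e-9 <= y - (delta/2)*b < delta + 1e-9
```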
\section{Perimeter}\label{sec:perim}
Similar to Lemma~\ref{lem:integer-areas}, we can prove that if all pairwise distances between the elements of $P$ are integers, the probability $\Pr[\mathbb{P}(S)\ge w]$ can be computed in $O(n^{4}\cdot w)$ time, for every integer $w\ge 0$. Then, using conditionings of the samples and a rounding strategy, we adapt the arguments of Theorem~\ref{theo:Pr-approx} to obtain the following result: \begin{theorem}\label{theo:Pr-approx-perim} Given $\varepsilon\in(0,1)$ and $w\ge 0$, a value $\sigma'$ satisfying \[
\Pr[\mathbb{P}(S)\ge w] ~\le~ \sigma' ~\le~ \Pr[\mathbb{P}(S)\ge (1-\varepsilon)w] \] can be computed in $O(n^{6}/\varepsilon)$ time. \end{theorem}
We can further show that Theorem~\ref{theo:chernoff-apx} also holds if perimeter is used instead of area, as stated in the next more general theorem. \begin{theorem} Let $\mathsf{m}:2^P\rightarrow\mathbb{R}$ be a function such that after a $T(n)$-time preprocessing of $P$ the value of $\mathsf{m}(S)$ can be computed in $C(n)$ time, for all $S\subseteq P$. Given $\varepsilon,\delta\in(0,1)$ and $w\ge 0$, a value $\sigma'$ can be computed in $O(T(n)+ C(n)\cdot(1/\varepsilon^2)\log(1/\delta))$ time so that with probability at least $1-\delta$ \[
\Pr[\mathsf{m}(S)\ge w]-\varepsilon ~<~ \sigma' ~<~ \Pr[\mathsf{m}(S)\ge w]+\varepsilon. \] \end{theorem} Note that for $\mathsf{m}\in\{\mathbb{A},\mathbb{P}\}$ we will have $T(n)=O(n\log n)$ and $C(n)=O(n)$.
We complement this section by proving that, in general, computing the probability $\Pr[\mathbb{P}(S)\ge w]$ is \#P-hard. The arguments are similar to those of Theorem~\ref{theo:hard-prob}, but the proof requires careful handling of the distances between points, which involve square roots. We note that this hardness result (Theorem~\ref{theo:hard-prob-perim} below) is weaker than that of Theorem~\ref{theo:hard-prob} in the sense that it uses points with two different probabilities.
\begin{theorem}\label{theo:hard-prob-perim} Given a stochastic point set $P$ at rational coordinates, an integer $w>0$, and a probability $\rho\in(0,1)$, it is \#P-hard to compute the probability $\Pr[\mathbb{P}(S)\ge w]$ that the perimeter of the convex hull of a random sample $S\subseteq P$ is at least $w$, where each point of $P$ is included in $S$ independently with a probability in $\{\rho,1\}$. \end{theorem}
\begin{proof}
We show a Turing reduction from the variant of \pb{\#SubsetSum}~\cite{faliszewski2009} in which, given numbers $\{a_1,\ldots,a_n\}\subset\mathbb{N}$, a target $t$, and a value $k\in[1..n]$, one counts the number of subsets $J\subseteq[1..n]$ such that $|J|=k$ and $\sum_{j\in J} a_j=t$.
Let $(\{a_1,\ldots,a_n\},t,k)$ be an instance of this problem.
We assume that
$\{a_1,\ldots,a_n\}$ and $t$ are such that only subsets $J$ satisfying $|J|=k$ can satisfy $\sum_{j\in J} a_j=t$ (see the proof of Theorem~\ref{theo:hard-prob}). Furthermore, each of the numbers $a_1,\ldots,a_n$ can be represented in a polynomial number of bits (refer to the NP-completeness proof of \pb{SubsetSum}~\cite{Garey1979}); hence the base-2 logarithm of each of them is polynomially bounded.
Let $c\in\mathbb{N}$ be a sufficiently large, polynomially bounded number that will be specified later. For every $k\in[1..2n]$, let $v_k$ denote the vector \[
v_k ~=~ \left(c\cdot\frac{k^2-1}{k^2+1},c\cdot\frac{2k}{k^2+1}\right). \] Let $p_1=(0,0)$, and for $i=1,\dots,n$, let $s_i=p_i+v_{2i-1}$ and $p_{i+1}=s_i+v_{2i}$. Let $z_1=p_{n+1}-v_{1}$, and for $j=2,\ldots,2n-1$, let $z_j=z_{j-1}-v_j$. Note that the $4n$ points $p_1,s_1,p_2,s_2,\ldots,p_n,s_n,p_{n+1},$ $z_1,\ldots,z_{2n-1}$ are at rational coordinates and in convex position, and appear in this order clockwise. Further note that each edge of the convex hull of those points has length precisely $c$, and that the perimeter is equal to $L=4n\cdot c\in\mathbb{N}$ (see Figure~\ref{fig:fig10}).
\begin{figure}\label{fig:fig10}
\end{figure}
Let $\varepsilon=1/(2n)$. For every $i\in[1..n]$, we build in polynomial time the point $q_i\in\mathbb{Q}^2$ in the triangle $\Delta(p_i,s_i,p_{i+1})$ so that \[
c-a_i ~\leq~ \overline{p_iq_i} ~=~ \overline{q_ip_{i+1}} ~<~ (c-a_i)+\varepsilon. \] The value of $c$ is selected so that the point $q_i$ exists for every $i\in[1..n]$. Let $P$ denote the point set $\{p_1,s_1,p_2,s_2,\ldots,p_n,s_n,p_{n+1},z_1,\ldots,z_{2n-1}\}\cup\{q_1,\ldots,q_n\}$, and let $\pi_u=1$ for all $u\in\{p_1,p_2,\ldots,p_n,p_{n+1},z_1,\ldots,z_{2n-1}\}\cup\{q_1,\ldots,q_n\}$, and $\pi_v=\rho$ for all $v\in\{s_1,\ldots,s_n\}$. Let $S\subseteq P$ be any random sample of $P$, $J_S=\{j\in[1..n]\given s_j\notin S\}$, and $\varepsilon_j=\overline{p_jq_j}-(c-a_j)$ for every $j\in[1..n]$. Observe that \begin{eqnarray*}
\mathbb{P}(S) & = & 2n\cdot c ~+~ \sum_{j\in J_S} 2\cdot \overline{p_jq_j} ~+~ \sum_{j\notin J_S} 2c\\
& = & 2n\cdot c ~+~ \sum_{j\in J_S} 2\left((c-a_j)+\varepsilon_j\right) ~+~ \sum_{j\notin J_S} 2c\\
& = & L ~-~ 2\sum_{j\in J_S} a_j ~+~ 2\sum_{j\in J_S}\varepsilon_j, \end{eqnarray*} which implies that \[
L ~-~ 2\sum_{j\in J_S} a_j ~=~ \left\lfloor\mathbb{P}(S)\right\rfloor, \] given that \[
0 ~\leq~ 2\sum_{j\in J_S}\varepsilon_j ~<~ 2|J_S|\cdot \varepsilon ~\leq~ 2n\cdot \varepsilon ~=~ 1. \]
For $x\in \mathbb{N}$, let $f(x)$ denote the number of subsets $J\subseteq[1..n]$ with $x=\sum_{i \in J} a_i$, which satisfy $|J|=k$.
For every $J\subseteq [1..n]$, the probability that $J_S=J$ is precisely $(1-\rho)^{|J|}\rho^{n-|J|}$.
Then, \[
\Pr\bigl[\left\lfloor\mathbb{P}(S)\right\rfloor = L - 2t\bigr] ~=~ \Pr\left[\sum_{j\in J_S}a_j=t,|J_S|=k\right]
~=~ f(t) \cdot (1-\rho)^{k}\rho^{n-k}. \] Hence, computing $\Pr[\mathbb{P}(S)\ge w]$ is \#P-hard since \[
\Pr\bigl[\left\lfloor\mathbb{P}(S)\right\rfloor ~=~ L - 2t\bigr]
~=~ \Pr\bigl[\mathbb{P}(S) \ge L - 2t\bigr] - \Pr\bigl[\mathbb{P}(S) \ge L - 2t + 1\bigr]. \]
We now show how to compute the value of $c$, and how to construct the point $q_i$ for every $i\in[1..n]$. Consider the isosceles triangle $\Delta(p_i,s_{i},p_{i+1})$ (see Figure~\ref{fig:fig11}). \begin{figure}
\caption{\small{Construction of the point $q_i$.}}
\label{fig:fig11}
\end{figure} Let $m_i$ denote the midpoint of the segment $p_ip_{i+1}$, and $s=2i-1$. To ensure the existence of a point $\tilde{q_i}\in s_im_i$ such that $\overline{p_i\tilde{q_i}}=c-a_i$, we need to guarantee that \begin{eqnarray*}
(c-a_i)^2
& > & \overline{p_im_i}^2\\
& = & \frac{c^2}{4}\left( \left(\frac{s^2-1}{s^2+1} + \frac{(s+1)^2-1}{(s+1)^2+1}\right)^2 +
\left(\frac{2s}{s^2+1} + \frac{2(s+1)}{(s+1)^2+1}\right)^2 \right)\\
& = & \frac{c^2}{2}\left( 1 + \frac{s^2-1}{s^2+1}\cdot \frac{(s+1)^2-1}{(s+1)^2+1} +
\frac{2s}{s^2+1}\cdot\frac{2(s+1)}{(s+1)^2+1} \right)\\
& = & \frac{c^2}{2}\left(1+\frac{s^4+2s^3+3s^2+2s}{s^4+2s^3+3s^2+2s+2}\right)\\
& = & c^2\left(1-\frac{1}{s^4+2s^3+3s^2+2s+2}\right), \end{eqnarray*} which holds if \[
\left(1-\frac{a_i}{c}\right)^2 ~\ge~ \left(1-\frac{1}{20s^4}\right)^2~~~(\text{i.e. }c~\geq~20s^4a_i) \] since \[
\left(1-\frac{1}{20s^4}\right)^2 ~>~ 1-\frac{1}{10s^4}
~\ge~ 1-\frac{1}{s^4+2s^3+3s^2+2s+2}. \] Then, we set $c=20\cdot(2n)^4\cdot \max\{a_1,\ldots,a_n\}=320\cdot n^4\cdot\max\{a_1,\ldots,a_n\}$.
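The chain of inequalities behind this choice of $c$ can be verified exactly in rational arithmetic; the following quick check (ours) confirms both inequalities for small $s$, including the tight case $s=1$:

```python
from fractions import Fraction

for s in range(1, 100):
    s4 = Fraction(s**4)
    # (1 - 1/(20 s^4))^2 > 1 - 1/(10 s^4):
    assert (1 - 1/(20*s4))**2 > 1 - 1/(10*s4)
    # 1 - 1/(10 s^4) >= 1 - 1/(s^4 + 2 s^3 + 3 s^2 + 2 s + 2),
    # with equality of the denominators exactly at s = 1:
    assert 1 - 1/(10*s4) >= 1 - Fraction(1, s**4 + 2*s**3 + 3*s**2 + 2*s + 2)
```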
Let $d=\overline{p_im_i}$ and $z=\overline{\tilde{q_i}m_i}^2=(c-a_i)^2-d^2\in\mathbb{Q}$. The point $q_i$ is a point in the segment $s_im_i$, that is close to $\tilde{q_i}$, such that, if $h$ denotes the distance $\overline{q_im_i}$, then $h$ is rational and satisfies \[
\sqrt{z} ~\leq~ h ~<~ \sqrt{z} + \delta, \] where $\delta=\frac{1}{2^{k+1}}$ and $k=\lfloor\log_2 ((1+2\sqrt{z})/\varepsilon^2)\rfloor$. Note that $k$ can be computed in $O(\log(z/\varepsilon))\subseteq O(\log(c/\varepsilon))\subseteq O(\log n+\log c)\subseteq O(\log c)$ time, which is polynomial in the size of the input. Further note that $h$ can be found, by using a binary search, in $O(\log(\sqrt{z}/\delta))\subseteq O(\log c)$ time, which is again polynomial. Then, we have \[
h^2 - z ~=~ (h - \sqrt{z})(h + \sqrt{z})
~<~ \delta(\delta + 2\sqrt{z})
~<~ \delta(1 + 2\sqrt{z})
~<~ \varepsilon^2, \] which implies \[
(c-a_i)^2 ~\le~ d^2 + h^2 ~<~ (c-a_i)^2+\varepsilon^2 ~<~ \left( (c-a_i)+\varepsilon\right)^2. \] Hence, \[
c-a_i ~\le~ \sqrt{d^2 + h^2}
~=~ \overline{p_iq_i}
~=~ \overline{q_ip_{i+1}}
~<~ (c-a_i)+\varepsilon. \] Since the slope of the line $\ell(p_i,p_{i+1})$ is rational, the slope of $\ell(s_i,m_i)$ is also rational. Then, $q_i$ has rational coordinates since $\overline{q_im_i}=h\in \mathbb{Q}$. \end{proof}
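The binary search for $h$ can be carried out entirely in rational arithmetic. A sketch (ours, using Python's `fractions`; it maintains the invariant $\mathrm{lo}^2\le z\le \mathrm{hi}^2$ and returns the upper endpoint once the bracketing interval is shorter than $\delta$, so that $\sqrt{z}\le h<\sqrt{z}+\delta$):

```python
from fractions import Fraction

def rational_sqrt_approx(z, delta):
    """Rational h with sqrt(z) <= h < sqrt(z) + delta, for rational z >= 0."""
    z, delta = Fraction(z), Fraction(delta)
    lo, hi = Fraction(0), z + 1        # (z + 1)^2 >= z, so sqrt(z) <= hi
    while hi - lo >= delta:
        mid = (lo + hi) / 2
        if mid * mid >= z:
            hi = mid                   # keep z <= hi^2
        else:
            lo = mid                   # keep lo^2 <= z
    return hi                          # hi - sqrt(z) <= hi - lo < delta

h = rational_sqrt_approx(2, Fraction(1, 1000))
assert h * h >= 2                          # h >= sqrt(2)
assert (h - Fraction(1, 1000))**2 < 2      # h < sqrt(2) + 1/1000
```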
\section{Discussion}\label{sec:conclusions}
The results of this paper consider the unipoint model: each point has a fixed location but exists with a given probability. The arguments given for approximating the probability distribution functions of area and perimeter, respectively, seem not to work in the multipoint model, in which each point exists probabilistically at one of multiple possible sites. For the unipoint model, both the expectation and the probability distribution function of the number of vertices in the convex hull can be computed exactly in polynomial time. It suffices to assign area one to each triangle defined by three points (or length one to each segment defined by two points), so that $\mathbb{A}(S)$ counts the canonical triangles of the convex hull of $S$, that is, its number of vertices minus two, and then use Lemma~\ref{lem:integer-areas} of this paper. With respect to our dynamic-programming approaches, similar dynamic-programming algorithms have been given by Eppstein et al.~\cite{eppstein1992}, Fischer~\cite{fischer1997}, and Bautista et al.~\cite{bautista2011}.
\small
\end{document} |
\begin{document}
\begin{abstract} Given a triple cover $\pi : X \longrightarrow Y$ of varieties, we produce a new variety $\mathfrak S_X$ and a birational morphism $\rho_X : \mathfrak S_X \longrightarrow X$ which is an isomorphism away from the fat-point ramification locus of $\pi$. The variety $\mathfrak S_X$ has a natural interpretation in terms of the data describing the triple cover, and the morphism $\rho_X$ has an elegant geometric description. \end{abstract}
\title{A small resolution for triple covers in algebraic geometry}
\section{Introduction}
The basic fact regarding a triple cover $\pi : X \longrightarrow Y$, proven in \cite{miranda-3}, is that any such cover is determined by a rank~2 locally free sheaf $E$ on $Y$ and a global section $\sigma$ of $S^3(E)^{*} \otimes \Lambda^2(E)$. Furthermore, $X$ can be realized as a subvariety of the geometric vector bundle $\mathbb V(E)$ equipped with the natural projection to $Y$.
In this article we give a necessary and sufficient criterion for $X$ to be realized as a subvariety of a $\mathbb P^1$-bundle equipped with its natural projection to $Y$: we show that $X$ can be so realized if and only if $\pi$ has no fat triple ramification (a fat triple ramification point of $\pi$ is a point $x \in X$ whose Zariski tangent space in the fibre of $\pi$ has dimension~2).
Along the way, we show that to any triple cover $\pi : X \longrightarrow Y$, one can associate a subvariety $\mathfrak S_X$ of $\mathbb P(E^{*})$ defined in terms of the global section $\sigma$. This variety $\mathfrak S_X$ is equipped with a birational morphism $\rho_X : \mathfrak S_X \longrightarrow X$, which has a nice geometric interpretation. In fact $\rho_X$ is a sort of small resolution: it is the blow-up of a Weil divisor in $X$, and its fibre over any fat ramification point of $\pi$ is a $\mathbb P^1$, but its exceptional set is in general of codimension larger than 1 in $\mathfrak S_X$. We construct this resolution first in a local case, and then we show that the construction globalizes. Throughout this article we make extensive use of Miranda's analyses in \cite{miranda-3}.
We note that our main result is not new: a more general statement (with a correspondingly more technical proof) can be found as Theorem~1.3 in the beautiful paper \cite{casnati-ekedahl}. However, it is our hope that our simple geometric description in the case of triple covers can provide some insight into the more general case.
The authors wish to thank Ciro Ciliberto, Alberto Calabri, Flaminio Flamini, Alfio Ragusa, and especially Rick Miranda for their extraordinary efforts during the PRAGMATIC~2001 summer school in Catania. The authors also wish to thank Mike Roth for teaching us about small resolutions, which turned out to be precisely the right objects for describing the results of our research.
\section{Some examples of small resolutions}
Let $\mathbb A^5$ have coordinates $x,y,z,w,t$, and let $\mathbb P^4$ be its projectivization. Let $X \subset \mathbb P^4$ be the hypersurface defined by the equation $xw - yz = 0$. If we restrict our attention to the $\mathbb P^3 \subset \mathbb P^4$ where $t=0$, this same equation defines a smooth quadric $Q \subset \mathbb P^3$; thus $X$ is just the projective cone over $Q$ with vertex $[0:0:0:0:1]$.
It is clear that $X$ is smooth away from its vertex. If $\epsilon : \widetilde{X} \longrightarrow X$ is the blow-up of the vertex, then it is easy to see that $\widetilde{X}$ is a smooth variety, and that the exceptional divisor is isomorphic to $Q = \mathbb P^1 \times \mathbb P^1$. We are going to describe a method of resolving the singular point of $X$ with a morphism $\rho : \Gamma \longrightarrow X$, where $\Gamma$ is a smooth variety, but where the exceptional set is a $\mathbb P^1$. In particular, the exceptional set is ``too small'' to be a divisor; thus $\rho$ will be an example of a {\em small resolution}.
We begin by choosing a line $L$ belonging to one of the two rulings of the quadric $Q$; for ease in computation, we take $L$ to be the line $x = y = 0$. Clearly $L$ is a Weil divisor on $Q$. It is easy to check that $L$ is also a Cartier divisor: since there is no point on $Q$ at which $x,y,z,w$ all vanish, the two open sets of $Q$ where $z \neq 0$ and where $w \neq 0$ cover $L$. On the first set $L$ is defined by $x=0$, and on the second set $L$ is defined by $y=0$.
Now let $D \subset X$ be the cone over $L$. Then $D$ is a Weil divisor in $X$, but it is not Cartier: $D$ cannot be defined by only one equation in any open neighborhood of the origin. Our small resolution $\rho : \Gamma \longrightarrow X$ is the blow-up of $X$ along $D$. (Note that once we show that $\Gamma$ is not isomorphic to $X$, we will have proven indirectly that $D$ is not Cartier: the blow-up of a Cartier divisor is always an isomorphism.)
Since $D$ can be defined by the two equations $x = y = 0$ in $X$, the blow-up of $X$ along $D$ can be defined as the (closed) graph of the rational map $\phi : X \dashrightarrow \mathbb P^1$, where $\phi ([x:y:z:w:t]) = [x:y]$. This graph $\Gamma$ is a closed subvariety of the product $X \times \mathbb P^1$, and the morphism $\rho : \Gamma \longrightarrow X$ is the restriction of the first projection map. Note that there is an open set of $X$ on which the map $\phi$ agrees with the map $\psi$ sending $[x:y:z:w:t]$ to $[z:w]$; that these two maps agree is a consequence of the defining equation for $X$. Also note that at any point of $X$ other than its vertex $[0:0:0:0:1]$, at least one of $\phi$ and $\psi$ is defined. This observation enables us to write down the defining equations for $\Gamma \subset X \times \mathbb P^1$: if $u,v$ are coordinates on the $\mathbb P^1$ factor, then $\Gamma$ is defined by $uy - vx = 0$ and $uw - vz = 0$. It follows that $\rho$ is an isomorphism away from the vertex of $X$, and the exceptional set over the vertex is $\mathbb P^1$. It is also easy to check that $\Gamma$ is smooth.
We now present an alternative way of describing the variety $\Gamma$. We may regard $\Gamma$ as a subvariety of $\mathbb P^4 \times \mathbb P^1$, defined by the three equations $xw - yz = 0$, $uy - vx = 0$, and $uw - vz = 0$. These three equations may be expressed in a single matrix equation: \[ \Gamma = \left\{ [x:y:z:w:t] \times [u:v] \in \mathbb P^4 \times \mathbb P^1
\mbox{ such that }
\left[
\begin{array}{cc} x & y \\ z & w \end{array}
\right]
\left[
\begin{array}{c} -v \\ u \end{array}
\right] = 0
\right\}. \] The first of the three equations is now seen to express the vanishing of a determinant, which is necessary and sufficient for the second and third equations to have a nonzero solution.
Note that in this description of $\Gamma$, the morphism $\rho$ is the restriction of the natural projection from $\mathbb P^4 \times \mathbb P^1$ to $\mathbb P^4$.
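The elimination that recovers the quadric from the two graph equations can be replayed symbolically. The following SymPy sketch (a computational sanity check only, not part of the argument) eliminates $u$ from the two defining equations of $\Gamma$ and recovers a multiple of the quadric $xw - yz = 0$:

```python
from sympy import symbols, resultant, expand

x, y, z, w, u, v = symbols('x y z w u v')
e1 = u*y - v*x  # first defining equation of the graph
e2 = u*w - v*z  # second defining equation of the graph

# eliminating u from the two graph equations recovers (v times)
# the defining quadric xw - yz = 0 of X
assert expand(resultant(e1, e2, u) - v*(x*w - y*z)) == 0
```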
As a second example, we take $X \subset \mathbb P^6$ to be the projective cone over $\mathbb P^2 \times \mathbb P^1$, with vertex $[\vec{0}:1]$. By analogy with the previous example, we consider a divisor $D \subset X$ which is the cone over one of the $\mathbb P^1 \times \mathbb P^1$ ``rulings'' of $\mathbb P^2 \times \mathbb P^1$; for example, we can take $D$ to be defined by $x_0=x_1=0$. As before, blowing up this divisor gives a small resolution $\rho : \Gamma \longrightarrow X$ which is an isomorphism away from the vertex of $X$, and whose fibre over the vertex is a $\mathbb P^1$. A computation similar to the previous one gives that $\Gamma$ may be realized as a subvariety of $\mathbb P^6 \times \mathbb P^1$, using a matrix condition: \[ \Gamma = \left\{ [\vec{x}:t] \times [u:v] \in \mathbb P^6 \times \mathbb P^1
\mbox{ such that }
\left[
\begin{array}{cc} x_0 & x_1 \\
x_2 & x_3 \\
x_4 & x_5 \end{array}
\right]
\left[
\begin{array}{c} -v \\ u \end{array}
\right] = 0
\right\}. \] This single matrix equation expresses six quadratic conditions: the vanishing of the three $2 \times 2$ minors, which are the defining equations for $X \subset \mathbb P^6$, and the three row equations, which come from the blow-up computation.
As before, the morphism $\rho$ is just the restriction of the natural projection from $\mathbb P^6 \times \mathbb P^1$ to $\mathbb P^6$.
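One can also confirm the rank condition directly on a Segre-style parametrization of $\mathbb P^2 \times \mathbb P^1$. In this SymPy sketch (again only a check, with our own choice of affine parametrization), all three $2 \times 2$ minors vanish identically on the image, and $[u:v] = [s:t]$ spans the kernel:

```python
from sympy import symbols, Matrix, expand

y0, y1, y2, s, t = symbols('y0 y1 y2 s t')
# Segre-style parametrization of P^2 x P^1: x_{2i} = y_i*s, x_{2i+1} = y_i*t
xs = [y0*s, y0*t, y1*s, y1*t, y2*s, y2*t]
M = Matrix(3, 2, xs)

# all three 2x2 minors vanish identically, so M has rank <= 1 on the image
minors = [M.extract([i, j], [0, 1]).det()
          for i in range(3) for j in range(i + 1, 3)]
assert all(expand(m) == 0 for m in minors)

# ... and [u:v] = [s:t] solves the three row equations
rows = M * Matrix([-t, s])
assert all(expand(r) == 0 for r in rows)
```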
Note that by simply eliminating the variable $t$ in the above matrix description, we can define $\Gamma$ as a subvariety of $\mathbb A^6 \times \mathbb P^1$, where $\mathbb A^6$ is the finite ($t \neq 0$) part of $\mathbb P^6$. This construction is the one that will provide us with a sort of universal local picture of our resolution for triple covers.
\section{The local picture of the resolution}
Consider the affine space $\mathbb A^4$ with coordinates $A,B,C,D$, and let $F$ be the free sheaf of rank~2 on this affine space. Then $\mathbb V(F)$ is nothing more than the affine space $\mathbb A^6$ with coordinates $A,B,C,D,z,w$; here $z,w$ are global sections that generate $F$. Let $\mathfrak X$ be the subvariety of $\mathbb V(F)$ defined by the three quadrics \begin{eqnarray*}
z^2 & = & Az + Bw + 2(A^2 - BD) \\
zw & = & -Dz - Aw + (BC-AD) \\
w^2 & = & Cz + Dw + 2(D^2 - AC). \end{eqnarray*} By the results in Miranda's paper \cite{miranda-3}, we know that the projection of $\mathbb V(F)$ to $\mathbb A^4$ sending $(A,B,C,D,z,w)$ to $(A,B,C,D)$ restricts to a triple cover $\Pi : \mathfrak X \longrightarrow \mathbb A^4$.
Now, as pointed out in \cite{miranda-3}, the variety $\mathfrak X$ is determinantal: it is the locus in $\mathbb V(F)$ where the matrix \[
\left[
\begin{array}{cc} z+A & B \\
C & w+D \\
w-2D & z-2A \end{array}
\right] \] has rank at most one. By a result in \cite{miranda-3}, the rank of this matrix is zero if and only if the map $\Pi$ has fat triple ramification over the point $(A,B,C,D)$; it is clear from the matrix description that this happens only over the point $(0,0,0,0)$.
This determinantal representation is familiar: up to a change of coordinates on $\mathbb A^6$, we see that $\mathfrak X$ is just the affine cone over $\mathbb P^2 \times \mathbb P^1$. Furthermore, we see that the vertex of this cone -- its only singular point -- is exactly the fat triple point where $A=B=C=D=z=w=0$. The temptation to compute its small resolution is overwhelming, and so we define $\Gamma \subset \mathfrak X \times \mathbb P^1$ to be the subvariety of $\mathbb V(F) \times \mathbb P^1$ defined by the matrix condition \[
\left[
\begin{array}{cc} z+A & B \\
C & w+D \\
w-2D & z-2A \end{array}
\right]
\left[
\begin{array}{c} -v \\ u \end{array}
\right] = 0. \] We know from our previous computation that the natural projection $\rho : \Gamma \longrightarrow \mathfrak X$ is an isomorphism away from the fat point, and that the fibre over this point is all of $\mathbb P^1$. We will refer to the morphism $\rho : \Gamma \longrightarrow \mathfrak X$ as the {\em resolution of the triple cover $\Pi$}.
We note that $\Gamma$ comes equipped with a morphism $\phi$ to $\mathbb A^4 \times \mathbb P^1$: $\phi$ is just the product of $\Pi \circ \rho$ with the second projection of $\Gamma$ to $\mathbb P^1$. We are going to compute the image of $\phi$. To do this, we first note that if $(A,B,C,D,z,w) \times [u:v]$ is a point of $\Gamma$, then we can solve for $z$ and $w$ in terms of the other coordinates, using the first two rows of the matrix: \begin{eqnarray} \label{eq:phi-inverse} z &=& B\left(\frac{u}{v}\right) - A \\ \nonumber w &=& C\left(\frac{v}{u}\right) - D. \end{eqnarray} Here we assume that both $u$ and $v$ are nonzero; in the case that either vanishes, the third row of the matrix can be used instead of one of the first two. Continuing under the assumption that both $u$ and $v$ are nonzero, we use the third row of the matrix to compute that \[
-v\left(C\left(\frac{v}{u}\right) - D - 2D\right)
+u\left(B\left(\frac{u}{v}\right) - A - 2A\right) = 0, \] and since $uv \neq 0$, we conclude that: \begin{equation}
Bu^{3} - 3Au^{2}v + 3Duv^{2} - Cv^{3} = 0. \label{eq:local-cubic} \end{equation} We note that the same equation results from the computations in the cases where $u=0$ or $v=0$.
Let $\mathfrak S \subset \mathbb A^4 \times \mathbb P^1$ be the subvariety defined by
equation~(\ref{eq:local-cubic}). Let $\Pi' : \mathfrak S \longrightarrow \mathbb A^4$ be the obvious projection; this projection is compatible via $\phi$ with the composite map $\Pi \circ \rho : \Gamma \longrightarrow \mathbb A^4$. In fact, we have the following result:
\begin{prop} The morphism $\phi : \Gamma \longrightarrow \mathfrak S$ is an isomorphism of varieties over $\mathbb A^4$. \end{prop} \begin{proof} The fact that $\Pi \circ \rho = \Pi' \circ \phi$ is clear from the definition of $\phi$, so we only need to show that $\phi$ is an isomorphism. To do this, note that the equations~(\ref{eq:phi-inverse}) for $z$ and $w$ in terms of $A,B,C,D$ define regular functions on all of $\mathfrak S$; this is easily checked using equation~(\ref{eq:local-cubic}). From the definition \[ \phi((A,B,C,D,z,w) \times [u:v]) = (A,B,C,D) \times [u:v] , \] we see that $\phi$ is surjective, and also that the regular functions for $z$ and $w$ are sufficient to define the inverse morphism. Thus $\phi$ is an isomorphism, as needed. \end{proof}
\begin{cor} Away from the point $(0,0,0,0) \in \mathbb A^4$, the three morphisms $\Pi : \mathfrak X \longrightarrow \mathbb A^4$, $\Pi \circ \rho : \Gamma \longrightarrow \mathbb A^4$, and $\Pi' : \mathfrak S \longrightarrow \mathbb A^4$ are isomorphic triple cover maps. \end{cor}
Now we are in a position to construct a triple cover resolution for any sufficiently local triple cover $\pi : X \longrightarrow Y$. By ``sufficiently local'' we mean that $Y$ is affine, $E$ is a free sheaf of rank~2 on $Y$, and $X$ is the subvariety of $\mathbb V(E)$ defined by the three quadrics \begin{eqnarray} \label{eq:local-quadrics} z^2 & = & az + bw + 2(a^2 - bd) \\ \nonumber zw & = & -dz - aw + (bc-ad) \\ \nonumber w^2 & = & cz + dw + 2(d^2 - ac); \end{eqnarray} here the coefficients $a,b,c,d$ are regular functions on $Y$, and $z,w$ are global sections that generate $E$. It follows from Miranda's analysis in \cite{miranda-3} that this is in fact the local situation for any triple cover.
Given such a sufficiently local triple cover, we define a morphism $f : Y \longrightarrow \mathbb A^4$ by the formula $f(y) = (a(y), b(y), c(y), d(y))$. This is equivalent to requiring that $f^{*}(A) = a$, and so on; thus we have the following commutative diagram: \[ \begin{CD} f^{*}\Gamma @>>> \Gamma \\ @V{f^{*}\rho}VV @VV{\rho}V \\ X=f^{*}\mathfrak X @>>> \mathfrak X \\ @V{\pi}VV @VV{\Pi}V \\ Y @>f>> \mathbb A^4 \end{CD} \] It is proven in \cite{miranda-3} that the fat points of $X \subset \mathbb V(E)$ are precisely the points where $a=b=c=d=0$; it follows that the morphism $f^{*}\rho$ is an isomorphism away from the fat-point ramification locus of $\pi$, and has a $\mathbb P^1$ fibre over any fat point in $X$. We will refer to $f^{*}\rho$ as the {\em resolution of the triple cover $\pi$}.
In this way we may view the right-hand side of the above diagram as a sort of universal local picture of our triple cover resolution. The reader with some skill in visualizing three-dimensional commutative diagrams\footnote{At least, with more skill than the authors have in drawing them.} will see that the isomorphism $\phi : \Gamma \longrightarrow \mathfrak S$ pulls back via $f$ to an isomorphism $f^{*}\phi : f^{*}\Gamma \longrightarrow f^{*}\mathfrak S$; here $f^{*}\mathfrak S$ is the subvariety of $Y \times \mathbb P^1$ defined by the equation \[
bu^{3} -3au^2v + 3duv^2 - cv^{3} = 0. \] This is in fact a variety over $Y$, whose structure morphism $\pi' : f^{*}\mathfrak S \longrightarrow Y$ is none other than $f^{*}\Pi'$. We can similarly ``pull back'' our other result:
\begin{cor} Let $B \subset Y$ be the (set-theoretic) image under $\pi$ of the fat-point ramification locus in $X$. Away from $B$, the three morphisms $\pi : X \longrightarrow Y$, $\pi \circ f^{*}\rho : f^{*}\Gamma \longrightarrow Y$, and $\pi' : f^{*}\mathfrak S \longrightarrow Y$ are isomorphic triple cover maps. \end{cor}
Thus we reach the following interesting conclusion:
\begin{prop}\label{prop:local-result} Let $\pi : X \longrightarrow Y$ be a sufficiently local triple cover; as before, this means that $X$ is defined as a subvariety of a free rank~2 vector bundle on $Y$. If $\pi$ has no fat-point ramification, then in fact $X$ is isomorphic as a triple cover to a subvariety of a (trivial) $\mathbb P^1$-bundle over $Y$ equipped with the natural projection. \end{prop} \begin{proof} This is just a restatement of the isomorphism between $X$ and $f^{*}\mathfrak S$ from the previous corollary. \end{proof}
\section{Geometric description of the resolution} In this section we are going to describe geometrically the isomorphism appearing in the preceding proposition. Along the way, we will also describe the geometric meaning of the $\mathbb P^1$-bundle appearing there.
To begin, let $\pi : X \longrightarrow Y$ be any triple cover. For convenience of notation, we set $F = \pi_*(\mathcal{O}_X)$. Following \cite{miranda-3}, we have that $F = \mathcal{O}_Y \oplus E$, where $E$ is a rank~2 locally free sheaf on $Y$. If $U \subset Y$ is any open set over which $E$ is generated freely by two sections $z,w$, then over $U$ we have that $X$ is defined as a subvariety of $\mathbb V(E)$ by the quadrics~(\ref{eq:local-quadrics}); this makes sense, because local sections of $E$ correspond to local coordinates on $\mathbb V(E)$. Over a fixed $y \in Y$, the fibre of $\mathbb V(E)$ is an affine plane, and the quadrics~(\ref{eq:local-quadrics}) cut out one, two, or three points in this plane.
Our idea is to consider the function on $X$ that is defined by sending a point in a fibre of $\pi$ to the line through the other two points in the fibre; clearly we need to work a bit to understand this idea. For one thing, this definition only makes sense for fibres containing three distinct points of $X$. Still, we may hope to define a rational map on $X$ whose locus of indeterminacy is contained in the ramification locus of $\pi$. The greater difficulty is understanding what the range of this function should be: we need to map to a bundle whose fibre over a fixed point in $Y$ is the $\mathbb P^1$ of lines in the fibre of $\mathbb V(E)$.
It turns out that we can make this idea work by considering the inclusion $E \hookrightarrow \mathcal{O}_Y \oplus E = F$. This inclusion allows us to identify the fibre of $\mathbb V(E)$ with the ``finite part'' of the fibre of $\mathbb P(F)$, which is a projective plane. The ``line at infinity'' in the fibre of $\mathbb P(F)$ is identified with the fibre of $\mathbb P(E)$. (These identifications follow from applying the functors $\mathbf{Spec}$ and $\mathbf{Proj}$ to the stated inclusion.) As sets, we have that $\mathbb P(F) = \mathbb V(E) \cup \mathbb P(E)$, and so we may view $X \subset \mathbb V(E)$ as a subvariety of $\mathbb P(F)$ that does not intersect $\mathbb P(E)$.
Now we are in a position to describe our putative rational map on $X$. Over a point $y$ not in the branch locus of $\pi$, the fibre of $\pi$ consists of three distinct points $x_1,x_2,x_3$. The line through any two of these points, say $x_2$ and $x_3$, is a line in the fibre of $\mathbb P(F)$ over $y$. Such a line corresponds to a point $p_{2,3}$ in the fibre of $\mathbb P(F^{*})$ over $y$. There is a natural projection $\mathbb P(F^{*}) --\rightarrow \mathbb P(E^{*})$ which dualizes the inclusion $\mathbb P(E) \hookrightarrow \mathbb P(F)$; over the point $y$, this map is the projection whose center is the point corresponding to the line at infinity in the fibre of $\mathbb P(F)$ over $y$. We have that $p_{2,3}$ is never equal to the center of this projection, because $X$ does not meet $\mathbb P(E)$; thus we can project $p_{2,3}$ to a point $q_{2,3}$ in the fibre of $\mathbb P(E^{*})$ over $y$. We define a rational map $\psi : X --\rightarrow \mathbb P(E^{*})$ by setting $\psi(x_1) = q_{2,3}$, and similarly for $x_2,x_3$.
We have the following result: \begin{prop} If $\pi : X \longrightarrow Y$ is sufficiently local, then the map $\psi$ is in fact rational, and its image is contained in $f^{*}\mathfrak S \subset \mathbb P(E^{*}) = Y \times \mathbb P^1$. The restricted map \[
\psi : X --\rightarrow f^{*}\mathfrak S \] is birational, and the isomorphism $f^{*}\phi : f^{*}\Gamma \longrightarrow f^{*}\mathfrak S$ is the resolution of indeterminacy of this birational map. \end{prop} All of these claims can be checked easily (by the reader!) once we establish the expression for $\psi$ in terms of local coordinates on $X \subset \mathbb V(E)$:
\begin{lemma} The local expression for $\psi$ over a point $y \in Y$ is \begin{eqnarray*}
\psi(y \times (z,w)) &=& y \times [z + a(y) : b(y)] \\
&=& y \times [c(y) : w + d(y)] \\
&=& y \times [w-2d(y) : z-2a(y)], \end{eqnarray*} where $z,w$ are coordinates on the fibre of $\pi$ over $y$, and $a,b,c,d$ are the sections of $\mathcal{O}_Y$ appearing as coefficients in the equations~(\ref{eq:local-quadrics}). \end{lemma} Note that the equivalence of the three expressions for $\psi$ is a consequence of the equations~(\ref{eq:local-quadrics}) which define $X$ as a subvariety of $\mathbb V(E)$. \begin{proof} In order to proceed, we need to recall a fact from \cite{miranda-3} regarding the sheaf $E$: the $\mathcal{O}_Y$-algebra $\pi_{*}(\mathcal{O}_X)$ is in fact a rank~3 $\mathcal{O}_Y$-module, and $E$ is the rank~2 submodule consisting of sections that have zero trace over $\mathcal{O}_Y$. This means that if we take local generators $z,w$ of $E$ as local coordinates on $\mathbb V(E)$, then the vector sum of the points $(z_i,w_i)$ in any fibre of $\pi$ must be zero.
Now we can prove the lemma. Let $y \in Y$ be any point not in the branch locus of $\pi$. Then the fibre of $\pi$ over $y$ consists of three distinct points, which we denote $(z_1,w_1),(z_2,w_2),(z_3,w_3)$. The fibre of $\mathbb V(E)$ over $y$ is an affine plane with coordinates $z,w$; inside this plane, the line containing $(z_2,w_2)$ and $(z_3,w_3)$ is given by \[ -(w_3 - w_2)z + (z_3 - z_2) w + (z_2 w_3 - z_3 w_2) = 0. \] In local coordinates, then, we have \[ \psi(z_1 , w_1) = [ -(w_3 - w_2) : (z_3 - z_2) ]; \] this is understood to be a point in the fibre of $\mathbb P(E^{*})$ over $y$.
We claim that this expression agrees with the first one given in the statement of the lemma. To see this, we use the equations~(\ref{eq:local-quadrics}) and the zero trace observation above to compute that \begin{eqnarray*}
\left( z_1 + a(y) \right) (z_3 - z_2) &=& z_1 z_3 - z_1 z_2 + a(y)(z_3 - z_2) \\
&=& z_1 z_3 - z_1 z_2 + \left( z_{3}^2 - z_{2}^2 - b(y)(w_3 - w_2) \right) \\
&=& (z_1 + z_2 + z_3)(z_3 - z_2) - b(y)(w_3 - w_2) \\
&=& 0 - b(y)(w_3 - w_2) . \end{eqnarray*} This proves that $\psi(z_1,z_2) = [ z_1 + a(y) : b(y) ]$ on the open set where $\pi$ is unramified and where this expression is defined. Since we only require $\psi$ to be a rational map, the lemma is proved. \end{proof}
This result shows that the locus of indeterminacy of $\psi$ is precisely the fat-point ramification locus of $\pi$, which in general is a proper subset of the ramification locus of $\pi$. This is consistent with the fact that reasonable definitions of the rational map $\psi$ can be made for double ramification points and for curvilinear triple ramification points; in these cases the Zariski tangent spaces to the ramification points determine lines in the fibres of $\pi$. At a fat point, the dimension of the Zariski tangent space in the fibre is equal to 2, so there is no reasonable way to define $\psi$ at such a point.
\section{Globalization}
Now we are going to define our resolution for an arbitrary triple cover $\pi : X \longrightarrow Y$. The idea is straightforward: we know from \cite{miranda-3} that $Y$ is covered by open affine sets $Y_i$ for which the restricted triple covers $\pi : X_i \longrightarrow Y_i$ are sufficiently local, and we have already defined the resolution for sufficiently local triple covers. It remains to check that these local definitions patch together compatibly to define a global resolution.
Recall that in defining the universal local resolution $\rho : \Gamma \longrightarrow \mathfrak X$, we constructed $\Gamma$ as a subvariety of $\mathbb V(F) \times \mathbb P^1$. The unidentified factor of $\mathbb P^1$ is an obstruction to globalization: if each sufficiently local resolution variety is defined as a
subvariety of $\mathbb V(E)_{|Y_i} \times \mathbb P^1$, then it is not clear how to interpret the second factor as the restriction of a globally defined object. Fortunately, we have seen how to remedy this: using the isomorphism $\phi : \Gamma \longrightarrow \mathfrak S$, we will take $\mathfrak S$ to be our resolution variety instead of $\Gamma$. Then we take the resolution variety for a sufficiently local triple cover to be $\mathfrak S_i = f_{i}^{*}\mathfrak S$ instead of $f_{i}^{*}\Gamma$. In the previous section we showed that each variety $\mathfrak S$ is naturally a subvariety of
$\mathbb P(E^{*})_{|Y_i}$. Thus we may hope to patch together the varieties $\mathfrak S_i$ to construct a subvariety $\mathfrak S_X$ of $\mathbb P(E^{*})$.
Now we will invoke a beautiful result of Miranda from \cite{miranda-3} to finish our construction. Miranda shows that any triple cover $\pi : X \longrightarrow Y$ is determined by a rank~2 locally free sheaf $E$ on $Y$ and a global section $\sigma$ of $S^3(E)^{*} \otimes \Lambda^2(E)$. In fact, Miranda shows that if $a,b,c,d$ are the coefficients appearing in the quadrics~(\ref{eq:local-quadrics}) that define $X_i$ as a
subvariety of $\mathbb V(E)_{|Y_i}$, then the local expression for $\sigma$ over $Y_i$ is \[
-b(z^3)^{*} + a(z^2)^{*}w^{*} - dz^{*}(w^2)^{*} + c(w^3)^{*}. \] Using the natural isomorphism $S^3(E)^{*} \cong S^3(E^{*})$, we get the following local expression for $\sigma$: \[
-\frac{1}{6}b(z^{*})^3 + \frac{1}{2}a(z^{*})^{2}w^{*} - \frac{1}{2}dz^{*}(w^{*})^{2} + \frac{1}{6}c(w^{*})^{3}. \] Up to a constant factor, this is just the cubic defining $\mathfrak S_i$ as a
subvariety of $\mathbb P(E^{*})_{|Y_i}$. Since $\sigma$ is a global section, we conclude that the varieties $\mathfrak S_i$ must patch together to form a variety $\mathfrak S_X \subset \mathbb P(E^{*})$. It is clear that the structure morphisms patch together compatibly, and so we have the following result:
\begin{prop} Let $\pi : X \longrightarrow Y$ be any triple cover, and let $E$ be a rank~2 locally free sheaf on $Y$ such that $X \subset \mathbb V(E)$. Then there is a variety $\mathfrak S_X \subset \mathbb P(E^{*})$ and a birational morphism $\rho_X : \mathfrak S_X \longrightarrow X$ which is an isomorphism away from the fat-point ramification locus of $\pi$, and whose fibre over every fat point is a $\mathbb P^1$. \end{prop}
We refer to the morphism $\rho_X : \mathfrak S_X \longrightarrow X$ as the {\em resolution of the triple cover $\pi$}. Now we get the following global result:
\begin{prop} Let $\pi : X \longrightarrow Y$ be a triple cover. $X$ is isomorphic as a triple cover to a subvariety of a $\mathbb P^1$-bundle on $Y$ equipped with the natural projection if and only if $\pi$ has no fat-point ramification. \end{prop} \begin{proof} One implication is the global version of Proposition~\ref{prop:local-result}; the other implication follows from the fact that the fibre of a subvariety of a $\mathbb P^1$-bundle cannot have a two-dimensional Zariski tangent space. \end{proof}
\end{document}
\begin{document}
\begin{abstract} It is well known that in every inverse semigroup the binary operation and the unary operation of inversion satisfy the following three identities: \[ \quad x=(xx')x \qquad \quad (xx')(y'y)=(y'y)(xx') \qquad \quad (xy)z=x(yz'')\,. \] The goal of this note is to prove the converse, that is, we prove that an algebra of type $\langle 2,1\rangle$ satisfying these three identities is an inverse semigroup and the unary operation coincides with the usual inversion on such semigroups. \end{abstract}
\title{An Elegant 3-Basis for Inverse Semigroups}
\section{Introduction} \seclabel{intro}
In the language of a binary operation $\cdot$ and a unary operation ${}'$, a set of $n$ independent identities is an $n$-basis for inverse semigroups, if those identities define the variety of inverse semigroups considered as algebras $(S,\cdot,{}')$ of type $\langle 2,1\rangle$, where the unary operation coincides with the natural inversion. Denoting by $x'$ the inverse of an element $x$ in an inverse semigroup, we then have $x=(xx')x$ (as inverse semigroups are regular semigroups) and $(xx')(y'y)=(y'y)(xx')$ (as both $xx'$ and $y'y$ are idempotents, and idempotents commute in inverse semigroups). Thus we might be tempted to think that the following identities provide a $3$-basis for inverse semigroups: \begin{equation} \eqnlabel{candidates} x=(xx')x, \qquad (xx')(y'y)=(y'y)(xx') \qquad \text{and}\qquad (xy)z=x(yz)\,. \end{equation} However, for $S=\{0,1\}$ with $xy=0$, except for $11=1$, and defining $x'=1$, we have the previous identities satisfied, but $0'\neq 0'00'$ and hence $'$ does not coincide with the natural inversion in $(S,\cdot)$.
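The defect of this two-element example is easy to check exhaustively; the following Python sketch of the verification encodes the operation and the unary map and confirms that the identities~\eqnref{candidates} hold while $'$ fails to be the inversion:

```python
from itertools import product

S = [0, 1]
mul = lambda x, y: 1 if (x, y) == (1, 1) else 0  # xy = 0 except 11 = 1
inv = lambda x: 1                                # x' = 1 for all x

for x in S:
    assert mul(mul(x, inv(x)), x) == x             # x = (xx')x
for x, y in product(S, repeat=2):
    e, f = mul(x, inv(x)), mul(inv(y), y)
    assert mul(e, f) == mul(f, e)                  # (xx')(y'y) = (y'y)(xx')
for x, y, z in product(S, repeat=3):
    assert mul(mul(x, y), z) == mul(x, mul(y, z))  # associativity
# ... yet ' is not the natural inversion: 0' != 0'00'
assert inv(0) != mul(mul(inv(0), 0), inv(0))
```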
B.M. Schein \cite{Schein} repaired the {\em defect} of \eqnref{candidates} by adjoining two additional identities: $x''=x$ and $(xy)'=y'x'$. The resulting set of five identities defines inverse semigroups; since the identity $(xy)'=y'x'$ is dependent upon the others, it can be discarded, so Schein's result in fact provides a $4$-basis for inverse semigroups. (It is worth observing that in the same paper Schein also provided a $5$-basis using $xx'x'x=x'xxx'$ instead of $xx'y'y=y'yxx'$; see \cite[Theorem 1.6]{Schein} and \cite[p. 15, Ex. 20(b)]{higgins}.) Therefore the natural question to ask is: \emph{is it possible to find a 3-basis for inverse semigroups?} This question was first answered in the affirmative in \cite{AM}, but the $3$-basis given there requires an extremely complicated proof (it is still an open problem to provide a reasonable proof for that result).
The aim of this note is to repair \eqnref{candidates} by providing an easy, transparent and \emph{elegant} $3$-basis for inverse semigroups.
\begin{main} Let $(S,*,')$ be an algebra of type $\langle 2,1\rangle$. Then this algebra is an inverse semigroup and the unary operation coincides with the usual inversion on such semigroups if and only if \[ (\mathbf{E}_1)\quad x=(xx')x, \qquad (\mathbf{E}_2)\quad (xx')(y'y)=(y'y)(xx'), \qquad (\mathbf{E}_3)\quad (xy)z=x(yz'')\,. \] \end{main}
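As a sanity check of the easy direction, the identities ($\mathbf{E}_1$)--($\mathbf{E}_3$) can be verified mechanically in a concrete inverse semigroup. The Python sketch below (the encoding of partial bijections is ours) checks them in the symmetric inverse monoid on two points, whose seven elements are the partial injections on $\{0,1\}$:

```python
from itertools import product

# the 7 partial injections on {0,1}, encoded as frozensets of (input, output) pairs
elements = []
for f0, f1 in product([None, 0, 1], repeat=2):
    if f0 is not None and f0 == f1:
        continue  # not injective
    elements.append(frozenset((i, v) for i, v in [(0, f0), (1, f1)]
                              if v is not None))

def mul(f, g):  # composition: apply f first, then g
    g = dict(g)
    return frozenset((x, g[y]) for x, y in f if y in g)

def inv(f):  # the natural inversion of a partial bijection
    return frozenset((y, x) for x, y in f)

assert len(elements) == 7
for x in elements:
    assert mul(mul(x, inv(x)), x) == x                          # (E1)
for x, y in product(elements, repeat=2):
    e, f = mul(x, inv(x)), mul(inv(y), y)
    assert mul(e, f) == mul(f, e)                               # (E2)
for x, y, z in product(elements, repeat=3):
    assert mul(mul(x, y), z) == mul(x, mul(y, inv(inv(z))))     # (E3)
```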
\section{Proof of the Theorem} \seclabel{Proof}
In this section we prove that the identities ($\mathbf{E}_1$)--($\mathbf{E}_3$) imply Schein's $4$-basis for inverse semigroups. As the converse is obvious, the equivalence of the two bases will follow.
Throughout this section let $(S,\cdot,{}')$ be an algebra of type $\langle 2,1\rangle$ satisfying ($\mathbf{E}_1$)--($\mathbf{E}_3$). We start by proving a few handy identities.
\begin{lemma} The following identities hold. \begin{align} x'x''&= x'x \eqnlabel{lemma1} \\ (xy')y&=x(y'y) \eqnlabel{lemma5}\\ x & =x(x'x) \eqnlabel{lemma3b}\\ x''&= (x''x')x = x''(x'x) \eqnlabel{lemma6}\\ x'''x&=x'''x''=x''' x^{(4)} \eqnlabel{lemma50} \end{align} \end{lemma}
\begin{proof} Firstly, for \eqnref{lemma1}, we have \[ x'x'' \byx{(\mathbf{E}_1)} x'[(x''x''')x''] \byx{(\mathbf{E}_3)} [x'(x''x''')]x \byx{(\mathbf{E}_3)} [(x'x'')x']x \byx{(\mathbf{E}_1)} x'x\,. \]
Next, for \eqnref{lemma5}, we compute $(xy')y \byx{(\mathbf{E}_3)} x(y'y'') \by{lemma1} x(y'y)$.
Regarding \eqnref{lemma3b}, we have $x(x'x) \by{lemma5} (xx')x \byx{(\mathbf{E}_1)} x$.
Then for \eqnref{lemma6}, we compute $x'' \by{lemma3b} x''(x'''x'') \byx{(\mathbf{E}_3)} (x''x''')x \by{lemma1} (x''x')x \by{lemma5} x''(x'x)$.
Finally, for \eqnref{lemma50}, we have \[ x'''x \by{lemma3b} [x'''(x''''x''')]x \byx{(\mathbf{E}_3)} x'''[(x''''x''')x''] \by{lemma6} x''' x'''' \by{lemma1} x''' x''\,. \] \end{proof}
The next two lemmas are the key tools in the proof that the identities ($\mathbf{E}_1$)--($\mathbf{E}_3$) imply $x''=x$.
\begin{lemma} \label{396}
$(x'x)x'''=x'''$. \end{lemma}
\begin{proof}
We start with two observations. Firstly,
as \[ [x(y''' y)]y' \byx{(\mathbf{E}_3)} x[(y'''y)y'''] \by{lemma50} x[(y'''y'''')y'''] \byx{(\mathbf{E}_1)} xy'''\,, \] we have \begin{equation} \eqnlabel{136} (x(y''' y))y' = xy'''\,. \end{equation}
Secondly, \[ (x'x)(x'''x) \by{lemma50} (x'x)(x'''x'''') \byx{(\mathbf{E}_2)} (x'''x'''')(x'x) \by{lemma50} (x'''x'')(x'x) \by{lemma5} [(x'''x'')x']x \by{lemma6} x'''x\,, \] so that \begin{align} (x'x)(x'''x)&=x'''x. \eqnlabel{148} \end{align}
Now we have all we need to prove the lemma. \[
x'''\by{lemma6}(x'''x'')x'\by{lemma50}(x'''x)x'\by{148}[(x'x)(x'''x)]x'\by{136}(x'x)x'''. \] \end{proof}
\begin{lemma}\label{lemma16}
$(xy)z'=x(yz')$. \end{lemma}
\begin{proof} We start by proving that \begin{equation} \eqnlabel{455} x'''=x'\,. \end{equation} In fact we have $xx' \by{lemma3b} [x(x'x)]x' \byx{(\mathbf{E}_3)} x[(x'x)x'''] = xx'''$, using Lemma \ref{396} in the last equality. Thus \begin{equation} \eqnlabel{452} xx'''=xx'\,. \end{equation} Now, by Lemma \ref{396}, \[
x'''=(x'x)x''' \by{lemma1} (x'x'')x''' \byx{(\mathbf{E}_3)} x'(x''x^{(5)}) \by{452} x'(x''x''') \byx{(\mathbf{E}_3)} (x'x'')x' \byx{(\mathbf{E}_1)} x'\,. \]
Replacing $z$ by $z'$ in ($\mathbf{E}_3$), we get \[
(xy)z'=x(yz''')=x(yz')\,, \] where the last equality follows from \eqnref{455}. The lemma is proved. \end{proof}
We have everything we need to prove our main result.
\begin{theorem} The identities \emph{(}$\mathbf{E}_1$\emph{)}--\emph{(}$\mathbf{E}_3$\emph{)} imply $x'' = x$ and the associative law. \end{theorem}
\begin{proof} First, we have \begin{align*} x''x' &\by{lemma6} [(x''x')x]x' = (x''x')(xx') \byx{(\mathbf{E}_2)} (xx')(x''x')\\ &= [(xx')x'']x' = [x(x'x'')]x' = x[(x'x'')x'] \byx{(\mathbf{E}_1)} xx'\,, \end{align*} where we have used Lemma \ref{lemma16} in the unlabeled equalities. Thus \begin{equation} \eqnlabel{hmph} x''x' = xx'\,. \end{equation}
Now $x'' \by{lemma6} (x''x')x \by{hmph} (xx')x \byx{(\mathbf{E}_1)} x$, as claimed.
Associativity now follows easily: $(xy)z \byx{(\mathbf{E}_1)} x(yz'') = x(yz)$. \end{proof}
\section{Other Sets of Axioms}
It is natural to ask how sensitive the axioms ($\mathbf{E}_1$)--($\mathbf{E}_3$) are to certain modifications, such as shifting the parentheses in ($\mathbf{E}_1$) or changing the placement of the double inverse in ($\mathbf{E}_3$).
If, for instance, we leave ($\mathbf{E}_2$) intact, replace ($\mathbf{E}_1$) with $x(x'x) = x$ and replace ($\mathbf{E}_3$) with $(x''y)z = x(yz)$, then we obtain a set of identities which are dual to ($\mathbf{E}_1$)--($\mathbf{E}_3$). By an argument dual to that in \S\secref{Proof}, this set of identities is another $3$-basis for inverse semigroups.
Thus to dispense with these sorts of obvious dualities, we will assume that both ($\mathbf{E}_1$) and ($\mathbf{E}_2$) are left intact, and consider only alternative placement of the double inverse in ($\mathbf{E}_3$). Using \textsc{Prover9}, we found that each of the following identities can substitute for ($\mathbf{E}_3$) to give another $3$-basis for inverse semigroups: \begin{align*} (xy)z &= x''(yz) \hspace{2cm} (xy)z = x(y''z) \\ x(yz) &= (xy'')z \hspace{2cm} x(yz) = (xy)z''. \end{align*}
The remaining possibility, $x(yz) = (x''y)z$, does not work. Using \textsc{Mace4}, we found the counterexample given by the following tables. It satisfies ($\mathbf{E}_1$), ($\mathbf{E}_2$) and $x(yz) = (x''y)z$, but the binary operation is not associative ($(0\cdot 0)\cdot 0 = 1\cdot 0 = 7\neq 6 = 0\cdot 1 = 0\cdot (0\cdot 0)$), and the unary operation clearly fails to satisfy $x'' = x$.
\begin{table}[htb] \centering
\begin{tabular}{r|cccccccccccc} $\cdot$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11\\ \hline
0 & 1 & 6 & 5 & 7 & 3 & 8 & 4 & 2 & 0 & 4 & 4 & 4 \\
1 & 7 & 2 & 6 & 0 & 8 & 4 & 5 & 1 & 3 & 5 & 5 & 5 \\
2 & 5 & 8 & 3 & 6 & 1 & 7 & 0 & 4 & 2 & 0 & 0 & 0 \\
3 & 8 & 0 & 7 & 4 & 6 & 2 & 1 & 3 & 5 & 1 & 1 & 1 \\
4 & 3 & 7 & 1 & 8 & 5 & 6 & 2 & 0 & 4 & 2 & 2 & 2 \\
5 & 6 & 4 & 8 & 2 & 7 & 0 & 3 & 5 & 1 & 3 & 3 & 3 \\
6 & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 6 & 6 & 6 \\
7 & 4 & 3 & 0 & 5 & 2 & 1 & 7 & 8 & 6 & 7 & 7 & 7 \\
8 & 2 & 5 & 4 & 1 & 0 & 3 & 8 & 6 & 7 & 8 & 8 & 8 \\
9 & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 6 \\
10 & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 10 & 9 & 6 \\
11 & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 6 & 6 & 11 \end{tabular}
\begin{tabular}{r|cccccccccccc} ${}'$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 \\ \hline
& 1 & 2 & 3 & 4 & 5 & 0 & 6 & 8 & 7 & 9 & 10 & 11 \end{tabular} \end{table}
\section{Problem}
\emph{Does there exist a $2$-basis for inverse semigroups?}
We guess that the answer is no.
\begin{acknowledgment} We are pleased to acknowledge the assistance of the automated deduction tool \textsc{Prover9} and the finite model builder \textsc{Mace4}, both developed by McCune \cite{McCune}.
The first author was partially supported by FCT and FEDER, Project POCTI-ISFL-1-143 of Centro de Algebra da Universidade de Lisboa, and by FCT and PIDDAC through the project PTDC/MAT/69514/2006. \end{acknowledgment}
\end{document}
\begin{document}
\title{Singmaster's conjecture in the interior of Pascal's triangle}
\author[Matom\"aki]{Kaisa Matom\"aki} \address{Department of Mathematics and Statistics \\ University of Turku, 20014 Turku\\ Finland} \email{ksmato@utu.fi}
\author[Radziwi{\l}{\l}]{Maksym Radziwi{\l}{\l}} \address{ Department of Mathematics,
Caltech,
1200 E California Blvd,
Pasadena, CA, 91125 \\
USA} \email{maksym.radziwill@gmail.com}
\author[Shao]{Xuancheng Shao} \address{Department of Mathematics, University of Kentucky\\ 715 Patterson Office Tower\\ Lexington, KY 40506\\ USA} \email{xuancheng.shao@uky.edu}
\author[Tao]{Terence Tao} \address{Department of Mathematics, UCLA\\ 405 Hilgard Ave\\ Los Angeles CA 90095\\ USA} \email{tao@math.ucla.edu}
\author[Ter\"av\"ainen]{Joni Ter\"av\"ainen} \address{Mathematical Institute, University of Oxford \\ Woodstock Road \\ Oxford OX2 6GG \\ United Kingdom} \email{joni.teravainen@maths.ox.ac.uk}
\begin{abstract} Singmaster's conjecture asserts that every natural number greater than one occurs at most a bounded number of times in Pascal's triangle; that is, for any natural number $t \geq 2$, the number of solutions to the equation $\binom{n}{m} = t$ for natural numbers $1 \leq m <n$ is bounded. In this paper we establish this result in the interior region $\exp(\log^{2/3+\eps} n) \leq m \leq n - \exp(\log^{2/3+\eps} n)$ for any fixed $\eps>0$. Indeed, when $t$ is sufficiently large depending on $\eps$, we show that there are at most four solutions (or at most two in either half of Pascal's triangle) in this region. We also establish analogous results for the equation $(n)_m = t$, where $(n)_m \coloneqq n(n-1) \dots (n-m+1)$ denotes the falling factorial. \end{abstract}
\maketitle
\section{Introduction}
In 1971, Singmaster \cite{singmaster} conjectured that any natural number greater than one only appeared in Pascal's triangle a bounded number of times. In asymptotic notation\footnote{Our conventions for asymptotic notation are set out in Section \ref{notation-sec}.}, we can express this conjecture as
\begin{conjecture}[Singmaster's conjecture]\label{singmaster} For any natural number $t \geq 2$, the number of integer solutions $1 \leq m < n$ to the equation \begin{equation}\label{bnt} \binom{n}{m} = t \end{equation}
is $O(1)$. \end{conjecture}
Note that we can exclude the edges $m=0,m=n$ of Pascal's triangle from consideration since $\binom{n}{m}=1$ in these cases. Currently the largest known number of solutions to \eqref{bnt} for a given $t$ is eight, arising from $t=3003$ and \begin{equation}\label{nam} (n,m) = (3003, 1), (78, 2), (15, 5), (14, 6), (14,8), (15, 10), (78, 76), (3003, 3002). \end{equation}
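As a quick sanity check (ours, independent of the argument of this paper), the eight positions listed in \eqref{nam} can be confirmed directly with exact integer arithmetic, here in Python via `math.comb`:

```python
import math

# Verify that t = 3003 is attained at all eight listed positions (n, m).
t = 3003
positions = [(3003, 1), (78, 2), (15, 5), (14, 6),
             (14, 8), (15, 10), (78, 76), (3003, 3002)]
for n, m in positions:
    assert math.comb(n, m) == t, (n, m)
print("all 8 positions give binom(n, m) = 3003")
```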
For the purposes of attacking this conjecture, we may of course assume $t$ to be larger than any given absolute constant, which we shall implicitly do in the sequel. In particular we can assume that the iterated logarithms $$ \log_2 t \coloneqq \log\log t; \quad \log_3 t \coloneqq \log\log\log t$$ are well-defined and positive.
In view of the symmetry \begin{equation}\label{symmetry} \binom{n}{m} = \binom{n}{n-m} \end{equation} we may restrict attention to the left half \begin{equation}\label{left-half} \{ (m,n) \in \N \times \N: 1 \leq m \leq n/2 \} \end{equation} of Pascal's triangle. For solutions to \eqref{bnt} in this half \eqref{left-half} of the triangle, we have $$ t = \binom{n}{m} \geq \binom{2m}{m} \asymp 4^m / \sqrt{m}$$ by Stirling's approximation \eqref{stirling}, and thus we have the upper bound \begin{equation}\label{m-bound}
m \leq \frac{1}{\log 4} \log t + O( \log_2 t ). \end{equation} Since $n \mapsto \binom{n}{m}$ is an increasing function of $n$ for fixed $m \geq 1$, $n$ is uniquely determined by $m$ and $t$. Thus by~\eqref{m-bound} we have at most $O(\log t)$ solutions to the equation $\binom{n}{m} = t$, a fact already observed in the original paper \cite{singmaster} of Singmaster. This bound was improved to $O(\log t / \log_2 t)$ by Abbott, Erd\H{o}s, and Hansen \cite{aeh}, to $O( \log t \log_3 t / \log_2^2 t )$ by Kane \cite{kane-1}, and finally to $O( \log t \log_3 t / \log_2^3 t)$ in a followup work of Kane \cite{kane-2}. This remains the best known unconditional bound for the total number of solutions, although it was observed in \cite{aeh} that the improved bound $O_\eps(\log^{2/3+\eps} t)$ was available for any $\eps>0$ assuming the conjecture of Cram\'er \cite{cramer}.
From the elementary inequalities $$ \frac{(n-m)^m}{m!} < \binom{n}{m} \leq \frac{n^m}{m!}$$ and some rearranging we see that any solution to $\binom{n}{m} = t$ obeys the bounds $$ (t m!)^{1/m} \leq n < (tm!)^{1/m} + m.$$ Applying Stirling's approximation \eqref{stirling} (and also $n \geq m$) we can thus obtain the order of magnitude of $n$ as a function of $m$ and $t$: \begin{equation}\label{n-form} n \asymp m t^{1/m} \end{equation} or equivalently \begin{equation}\label{n-form-alt} \frac{n}{m} \asymp \exp\left( \frac{\log t}{m} \right ). \end{equation} In particular we see that $n$ grows extremely rapidly when the ratio $m/\log t$ becomes small. This makes the difficulty of the problem increase as $m / \log t$ approaches zero, and indeed treating the case of small values of $m/\log t$ is the main obstruction to making further progress on bounding the total number of solutions.
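The observations above (a bound on $m$ of the shape \eqref{m-bound}, together with the fact that $n$ is uniquely determined by $m$ and $t$ by monotonicity) already yield a naive algorithm for enumerating all solutions in the left half of the triangle. The following sketch (ours, for illustration only) implements it by binary search in $n$:

```python
import math

def solutions_left_half(t):
    # All (n, m) with binom(n, m) = t and 1 <= m <= n/2.  Since
    # n -> binom(n, m) is increasing for fixed m >= 1, each m yields
    # at most one n, located here by binary search; the scan over m
    # stops once binom(2m, m) exceeds t, in line with (m-bound).
    sols, m = [], 1
    while math.comb(2 * m, m) <= t:
        lo = hi = 2 * m                  # the left half forces n >= 2m
        while math.comb(hi, m) < t:      # grow an upper bracket for n
            hi *= 2
        while lo < hi:                   # least n with binom(n, m) >= t
            mid = (lo + hi) // 2
            if math.comb(mid, m) < t:
                lo = mid + 1
            else:
                hi = mid
        if math.comb(lo, m) == t:
            sols.append((lo, m))
        m += 1
    return sols

print(solutions_left_half(3003))
```

For $t = 3003$ this recovers the four left-half positions from \eqref{nam}.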
\begin{remark} In the left half \eqref{left-half} of Pascal's triangle, a finer application of Stirling's approximation in \cite[(3.1)]{kane-1} gave the more precise estimate $$ n = (tm!)^{1/m} + \frac{m-1}{2} + O( m t^{-1/m} ).$$ We will not explicitly use this estimate here. \end{remark}
In this paper we study the opposite regime in which $m/\log t$ is relatively large, or equivalently (by \eqref{n-form-alt}) $n$ and $m$ are somewhat comparable (in the doubly logarithmic sense $\log_2 n \asymp \log_2 m$). More precisely, we have the following result:
\begin{theorem}[Singmaster's conjecture in the interior of Pascal's triangle]\label{main} Let $0 < \eps < 1$, and assume that $t$ is sufficiently large depending on $\eps$. Then there are at most two solutions to \eqref{bnt} in the region $\exp( \log^{2/3 + \eps} n ) \leq m \leq n/2$. By \eqref{symmetry}, we thus have at most four solutions to \eqref{bnt} in the region $\exp( \log^{2/3 + \eps} n ) \leq m \leq n - \exp( \log^{2/3 + \eps} n )$. Furthermore, in the smaller region $\exp( \log^{2/3 + \eps} n ) \leq m \leq n/\exp(\log^{1-\eps'} n)$ there is at most one solution, whenever $0 < \eps' < \frac{\eps}{2/3+\eps}$ and $t$ is sufficiently large depending on both $\eps$ and $\eps'$. \end{theorem}
\begin{remark}\label{fib} The bound of two (or four) solutions is absolutely sharp, in view of the infinite family of solutions observed in \cite{lind}, \cite{singmaster-2}, \cite{tovey} to the equation $$ \binom{n+1}{m+1} = \binom{n}{m+2}$$ given by $n = F_{2j+2} F_{2j+3} - 1$, $m = F_{2j} F_{2j+3}-1$, where $F_j$ denotes the $j^{th}$ Fibonacci number. See also \cite{jenkins} for further analysis of equations of this type. Besides this infinite family of collisions, and the ``trivial'' ones generated by \eqref{symmetry}, $\binom{n}{0} = 1$, and $\binom{n}{m} = \binom{\binom{n}{m}}{1}$, the only further known collisions between binomial coefficients arise from the identities $\binom{n}{2} = \binom{n'}{m'}$ for $$ (n,n',m') = (16,10,3), (21, 10, 4), (56, 22, 3), (120, 36, 3), (153, 19, 5), (221, 17, 8)$$ as well as the example in \eqref{nam}. It was conjectured by de Weger \cite{deweger} that the above examples generate all the non-trivial collisions $\binom{n}{m} = \binom{n'}{m'} = t$; this would of course imply Singmaster's conjecture. This conjecture has been verified for $(m,m') = (2,3)$ \cite{avanesov}, for $(m,m') = (2,4)$ \cite{pinter}, \cite{deweger-1}, for $(m,m') = (2,5)$ \cite{bugeaud}, for $(m,m') = (3,4)$ \cite{mordell}, \cite{deweger}, for $(m,m') = (2, 6), (2, 8), (3, 6), (4, 6), (4, 8)$ \cite{stroeker}, and for $n \leq 10^6$ or $t \leq 10^{60}$ in \cite{bbw}. \end{remark}
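The collisions listed in the preceding remark are easily confirmed numerically; the following check (ours) verifies the first few members of the Fibonacci family and all six collisions with $\binom{n}{2}$:

```python
import math

# Fibonacci family: n = F_{2j+2} F_{2j+3} - 1, m = F_{2j} F_{2j+3} - 1
# satisfies binom(n+1, m+1) = binom(n, m+2).
F = [0, 1]
while len(F) < 14:
    F.append(F[-1] + F[-2])
for j in range(1, 5):
    n = F[2 * j + 2] * F[2 * j + 3] - 1
    m = F[2 * j] * F[2 * j + 3] - 1
    assert math.comb(n + 1, m + 1) == math.comb(n, m + 2), (j, n, m)

# Sporadic collisions binom(n, 2) = binom(n', m').
for n, n2, m2 in [(16, 10, 3), (21, 10, 4), (56, 22, 3),
                  (120, 36, 3), (153, 19, 5), (221, 17, 8)]:
    assert math.comb(n, 2) == math.comb(n2, m2), (n, n2, m2)

print("all listed collisions verified")
```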
\begin{remark} In view of Theorem \ref{main}, we now see that to prove Conjecture \ref{singmaster}, we may restrict attention without loss of generality to the region $2 \leq m \leq \exp(\log^{2/3+\eps} n)$ for any fixed $\eps>0$, or equivalently (by \eqref{n-form-alt}) to $2 \leq m \leq \frac{\log t}{\log^{3/2-\eps}_2 t}$ for any fixed $\eps>0$. It follows from the conjecture of de Weger \cite{deweger} mentioned in Remark \ref{fib} that for $t$ sufficiently large there is only at most one solution in this region, that is to say all but a finite number of binomial coefficients $\binom{n}{m}$ for $2 \leq m \leq \exp(\log^{2/3+\eps} n)$ are distinct. In this direction, the number of solutions to the equation $\binom{n}{m} = \binom{n'}{m'}$ for fixed $2 \leq m < m'$ has been shown (via Siegel's theorem on integral points) to be finite in \cite{bst} (see also the earlier result \cite{kiss} treating the case $(m,m')=(2,p)$ for an odd prime $p$). This implies that there are no collisions in the regime $2 \leq m \leq w(n)$ if $w$ is a function of $n$ that goes to infinity sufficiently slowly as $n \to \infty$. Unfortunately, due to the reliance on Siegel's theorem, the function $w$ given by these arguments is completely ineffective. \end{remark}
\begin{remark} For some previous bounds of this type, in \cite{aeh} it was shown that the number of solutions to \eqref{bnt} in the range $n^{5/6} \leq m \leq n/2$ was $O(\log^{3/4} t)$, while the arguments in \cite[\S 7]{kane-1}, after some manipulation, show that the number of solutions to \eqref{bnt} in the range $\exp(\log^{1/2+\eps} n) \leq m \leq n^{5/6}$ is $O_\eps( \log t / \log_2^3 t )$. \end{remark}
\begin{remark} The implied quantitative bounds in the hypothesis ``$t$ is sufficiently large depending on $\eps$'' are effective; however, we have made no attempt whatsoever to optimize them in this paper, and in their current form they are likely too large to be of use in numerical verification of Singmaster's conjecture. \end{remark}
\subsection{An analog for falling factorials} The methods used to handle the equation \eqref{bnt} can be modified to treat the variant equation \begin{equation}\label{falling}
(n)_m = t \end{equation} for integers $1 \leq m < n$ and $t \geq 2$, where $(n)_m$ denotes the falling factorial $$ (n)_m \coloneqq n (n-1) \dots (n-m+1) = m! \binom{n}{m}.$$ We exclude the cases $m=0,m=n$ since $(n)_0 = 1$ and $(n)_n = (n)_{n-1} = n!$. In \cite[Theorem 4]{aeh} it was shown that for any $t \geq 2$ the number of integer solutions $(m,n)$ to \eqref{falling} with $1 \leq m \leq n-1$ is $O( \sqrt{\log t})$. We do not directly improve upon this bound here, but can obtain an analogue of Theorem \ref{main}:
\begin{theorem}[Falling factorial multiplicity in the interior]\label{main-falling} Let $0 < \eps < 1$, and assume that $t$ is sufficiently large depending on $\eps$. Then there are at most two integer solutions to \eqref{falling} in the region $\exp( \log^{2/3 + \eps} n ) \leq m < n$. \end{theorem}
We establish this result in Section \ref{falling-sec}. Note that the bound of two is best possible, as can be seen from the infinite family of solutions $$ (a^2 - a)_{a^2 - 2a} = (a^2 - a - 1)_{a^2 - 2a + 1}$$ for any integer $a>2$, and more generally $$ ( (a)_b)_{(a)_b - a} = ( (a)_b - 1 )_{(a)_b - a + b - 1}$$ whenever $2 \leq b < a$ are integers.
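Both families of falling-factorial collisions are easily checked with exact arithmetic; the following sketch (ours, using `math.perm` for $(n)_m$) verifies small cases of each:

```python
import math

# First family: (a^2 - a)_{a^2 - 2a} = (a^2 - a - 1)_{a^2 - 2a + 1}.
for a in range(3, 9):
    assert math.perm(a * a - a, a * a - 2 * a) == \
           math.perm(a * a - a - 1, a * a - 2 * a + 1)

# General family: ((a)_b)_{(a)_b - a} = ((a)_b - 1)_{(a)_b - a + b - 1}.
for a in range(3, 7):
    for b in range(2, a):
        ab = math.perm(a, b)
        assert math.perm(ab, ab - a) == math.perm(ab - 1, ab - a + b - 1)

print("both families of falling-factorial collisions verified")
```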
\subsection{Strategy of proof}
Theorem \ref{main} is a consequence of two propositions that we now describe. The proof of Theorem \ref{main-falling} follows a similar pattern, and we refer the reader to Section \ref{falling-sec} for details.
\begin{proposition}[Distance estimate]\label{distance} Suppose we have two solutions $(n,m), (n',m')$ to \eqref{bnt} in the left half \eqref{left-half} of Pascal's triangle. Then one has $$ m' - m \ll_\eps \exp( \log^{2/3+\eps}(n+n') )$$ for any $\eps>0$. Furthermore, if $$ m, m' \geq \exp( \log^{2/3+\eps}(n+n') )$$ then we additionally have $$ n' - n \ll_\eps \exp( \log^{2/3+\eps}(n+n') ).$$ \end{proposition}
Note how this proposition is consistent with the example in Remark \ref{fib}. We shall discuss the proof of Proposition~\ref{distance} in Section~\ref{ss:proofmethods}. For the application to Theorem \ref{main}, Proposition \ref{distance} localizes all solutions to \eqref{bnt} to a region of small diameter. To conclude Theorem \ref{main}, we can now proceed by adapting the Taylor expansion arguments of Kane \cite{kane-1}, \cite{kane-2}, in which one views $n$ as an analytic function of $m$ (keeping $t$ fixed) and exploits the non-vanishing of certain derivatives of this function; see Section \ref{analytic-sec}. This is what the proposition below accomplishes. In fact in our analysis only two derivatives of this function are needed (i.e., we only need to exploit the convexity properties of $n$ as a function of $m$).
\begin{proposition}[Kane-type estimate] \label{prop:kane}
Let $\varepsilon > 0$.
Suppose that $(n,m)$ is a solution to \eqref{bnt} in the left half \eqref{left-half} of Pascal's triangle. Then there exists at most one other solution $(n',m') \neq (n,m)$ to \eqref{bnt} with $m' < m$, $n' > n$ and
$$
|m - m'| + |n - n'| \ll \exp((\log_2 t)^{1 - \varepsilon}).
$$ \end{proposition}
With these two propositions in hand, it is easy to deduce Theorem \ref{main}.
\begin{proof}[Deduction of Theorem \ref{main}] Let $\eps>0$, let $t$ be sufficiently large depending on $\eps$, and let $(n,m)$ be the solution to \eqref{bnt} in the region \begin{equation}\label{region} \{ (n,m): \exp(\log^{2/3+\eps} n) \leq m \leq n/2\} \end{equation} with the maximal value of $m$ (if there are no such solutions then of course Theorem \ref{main} is trivial). For brevity we allow all implied constants in the following arguments to depend on $\eps$. If $(n',m')$ is any other solution in this region, then $m' < m$ and $n' > n$. From \eqref{n-form-alt} we have $$
m \gg \frac{\log t}{\log n} \geq \frac{\log t}{\log^{\frac{1}{2/3+\eps}} m} \gg \frac{\log t}{\log_2^{\frac{1}{2/3+\eps}} t} $$ thanks to \eqref{m-bound}. From further application of \eqref{n-form-alt} we then have $$ n \ll \exp( O( \log_2^{\frac{1}{2/3+\eps}} t ) ).$$ Similarly for $n'$. Applying Proposition \ref{distance} (with $\eps$ replaced by a sufficiently small quantity), we conclude that \begin{equation}\label{mim}
m-m', n'-n \ll_{\eps'} \exp( O( \log^{1-\eps'}_2 t ) ) \end{equation} whenever $1-\eps' > \frac{2/3}{2/3+\eps}$, or equivalently $\eps' < \frac{\eps}{2/3 + \eps}$. The result now follows from Proposition \ref{prop:kane}. \end{proof}
\begin{remark} The above arguments showed that for $t$ sufficiently large depending on $\eps$, there were at most four solutions to \eqref{bnt} in the region $\exp(\log^{2/3+\eps} n) \leq m \leq n - \exp(\log^{2/3+\eps} n)$. A modification of the argument also shows that there cannot be exactly \emph{three} such solutions. For if this were the case, we see from \eqref{symmetry} that there must be a solution $(n,m)$ with $n=2m$, so that $m \asymp \log t$ by Stirling's approximation. For all other solutions $(n',m')$ to \eqref{bnt} we have $n' \geq n+1$, hence $$ \binom{n}{n/2} = t = \binom{n'}{m'} \geq \binom{n+1}{m'}$$ and hence (by Stirling's approximation) $$ \binom{n+1}{m'} \leq \left( \frac{1}{2} + O\left(\frac{1}{n}\right)\right) \binom{n+1}{(n+1)/2}.$$
By Stirling's approximation (or the central limit theorem of de Moivre and Laplace) this forces $|m' - \frac{n+1}{2}| \gg \sqrt{n}$, thus $|m'-m| \gg m^{1/2}$. But this contradicts \eqref{mim}. \end{remark}
\subsection{Proof methods} \label{ss:proofmethods}
We now discuss the method of proof of Proposition \ref{distance}, which is our main new contribution. In contrast to the ``Archimedean'' arguments of Kane (such as Proposition \ref{prop:kane}) that use real and complex analysis of the binomial coefficients $\binom{n}{m}$, the proof of Proposition \ref{distance} relies more on ``non-Archimedean'' arguments, based on evaluating the $p$-adic valuations $v_p\left( \binom{n}{m} \right)$ for various primes $p$, defined as the number of times $p$ divides $\binom{n}{m}$. From the classical Legendre formula \begin{equation}\label{legendre} v_p( n! ) = \sum_{j=1}^\infty \left\lfloor \frac{n}{p^j} \right\rfloor, \end{equation} where $\lfloor x \rfloor$ is the integer part of $x$, we see that \begin{equation}\label{np} \begin{split} v_p\left( \binom{n}{m} \right) &= \sum_{j=1}^\infty \left(\left\lfloor \frac{n}{p^j} \right\rfloor - \left\lfloor \frac{m}{p^j} \right\rfloor - \left\lfloor \frac{n-m}{p^j} \right\rfloor \right)\\ &= \sum_{j=1}^\infty \left(\left\{ \frac{m}{p^j} \right\} + \left\{ \frac{n-m}{p^j} \right\} - \left\{ \frac{n}{p^j} \right\} \right) \end{split} \end{equation} where $\{x\} \coloneqq x - \lfloor x \rfloor$ denotes the fractional part of $x$. Note that the summands here vanish whenever $p^j > n$. From this identity we see that if $(n,m), (n',m')$ are two solutions to \eqref{bnt} then we must have \begin{equation}\label{p-eq} \sum_{j=1}^\infty \left(\left\{ \frac{m}{p^j} \right\} + \left\{ \frac{n-m}{p^j} \right\} - \left\{ \frac{n}{p^j} \right\}\right) = \sum_{j=1}^\infty \left(\left\{ \frac{m'}{p^j} \right\} + \left\{ \frac{n'-m'}{p^j} \right\} - \left\{ \frac{n'}{p^j} \right\} \right) \end{equation} for all primes $p$. 
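The identity \eqref{np} can be checked numerically. The following sketch (ours) compares Legendre's formula with the valuation computed directly from $\binom{n}{m}$; note that the floor form of \eqref{np} is identical to the fractional-part form, since $\frac{n}{p^j} - \frac{m}{p^j} - \frac{n-m}{p^j} = 0$ for every $j$:

```python
import math

def vp_legendre(n, m, p):
    # v_p(binom(n, m)) = v_p(n!) - v_p(m!) - v_p((n-m)!) via (legendre);
    # the summands vanish once p^j > n, so the sum is finite.
    s, q = 0, p
    while q <= n:
        s += n // q - m // q - (n - m) // q
        q *= p
    return s

def vp_direct(n, m, p):
    # the exponent of p in binom(n, m), computed by repeated division
    c, s = math.comb(n, m), 0
    while c % p == 0:
        c //= p
        s += 1
    return s

for p in (2, 3, 5, 7):
    for n in range(2, 60):
        for m in range(1, n):
            assert vp_legendre(n, m, p) == vp_direct(n, m, p)

print("Legendre's formula matches the direct valuation")
```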
Our strategy will be to apply this equation with $p$ set equal to a \emph{random} prime $\p$ drawn uniformly amongst all primes in the interval $[P, P+P\log^{-100} P]$ where the scale $P$ is something like $\exp( \log^{2/3+\eps/2}(n+n'))$, and inspect the distribution of the resulting random variables on the left and right-hand sides of \eqref{p-eq} in order to obtain a contradiction when $m,m'$ or $n,n'$ are sufficiently well separated. In order to do this we need some information concerning the equidistribution of fractional parts such as $\{ \frac{n}{\p^j}\}$. This will be provided by the following estimate, proven in Section \ref{equid-sec}. There and later the letter $p$ always denotes a prime.
\begin{proposition}[Equidistribution estimate]\label{equid} Let $\eps>0$ and $P \geq 2$ and let $I$ be an interval contained in $[P,2P]$. Let $M,N$ be real numbers with $M,N = O( \exp(\log^{3/2-\eps} P) )$, and let $j$ be a natural number. \begin{itemize} \item[(i)] For all $A > 0$, \[
\sum_{p \in I} e\left( \frac{N}{p} + \frac{M}{p^j} \right) = \int_I e\left( \frac{N}{t} + \frac{M}{t^{j}}\right)\ \frac{dt}{\log t} + O_{\eps,A}( P\log^{-A} P ). \] \item[(ii)] Let $W \colon \R^2 \to \C$ be a smooth $\Z^2$-periodic function. Then, for all $A > 0$,
$$ \sum_{p \in I} W\left( \frac{N}{p}, \frac{M}{p^j} \right) = \int_I W\left( \frac{N}{t}, \frac{M}{t^{j}} \right)\ \frac{dt}{\log t} + O_{\eps,A}( \|W\|_{C^{3}} P \log^{-A} P ),$$ where
$$ \|W\|_{C^{3}} \coloneqq \sum_{j=0}^{3} \sup_{x \in \R^2} |\nabla^j W(x)|.$$ \end{itemize} \end{proposition}
One can generalize this proposition to control the joint equidistribution of any bounded number of expressions of the form $\{ \frac{n}{p^j}\}$, but for our applications it will suffice to understand the equidistribution of pairs $\{\frac{N}{p}\}$, $\{\frac{M}{p^j}\}$.
When it comes to the proof of Proposition \ref{equid}, the first step is to use Fourier expansion to reduce part (ii) of the proposition to part (i). For part (i), the case where $\frac{|N|}{P}+\frac{|M|}{P^j}$ is small (say $\leq \log^{O(A)} P$) is easily handled using the prime number theorem with classical error term. In the regime where $\frac{|N|}{P}+\frac{|M|}{P^j}$ is large, we use Vaughan's identity to decompose the sum in (i) into type I and II sums, and assert that these exhibit cancellation; the type I and II bounds are given in \eqref{type-i} and \eqref{type-ii}.
Both type I and type II sums can be handled using Vinogradov's bound for sums of the form $\sum_{n\in I}e(f(n))$ with $f$ smooth, although we need to first cut from $I$ small intervals around zeros of the first $\log P$ derivatives of $N/t+M/t^j$. This way we obtain that the sum in (i) exhibits cancellation. It is here that the restriction $N,M=O(\exp(\log^{3/2-\varepsilon} P))$ arises; even under the Riemann hypothesis we do not know how to relax this requirement\footnote{Using standard randomness heuristics one could tentatively conjecture that this restriction $N,M=O(\exp(\log^{3/2-\varepsilon} P))$ could be relaxed to $N,M = O( \exp(P^c) )$ for some constant $c>0$; this would improve the range $\exp(\log^{2/3+\eps} n) \leq m \leq n/2$ in Theorem \ref{main} to $\log^C n \leq m \leq n/2$ for some constant $C>0$.}.
Once the equidistribution estimate, Proposition \ref{equid}, is established, the analysis of the distribution of both sides of \eqref{p-eq} is relatively straightforward, as long as the scale $P$ is chosen so that the powers $P^j$ do not lie close to various integer combinations of $m,n,m',n'$. However, there are some delicate cases when two of the numbers $n, m, n-m, n', m', n'-m'$ are ``commensurable'' in the sense that one of them is close to a rational multiple of the other, where the rational multiplier has small height. Commensurable integers are also known to generate some exceptional examples of integer factorial ratios \cite{bober}, \cite{bober-2}, \cite{sound-ratio}. Fortunately, we can handle these cases in our context by an analysis of covariances between various fractional parts $\{ \frac{n_1}{\p} \}, \{ \frac{n_2}{\p} \}$, in particular taking advantage of the fact that these covariances are non-negative up to small errors, and small unless $n_1,n_2$ are very highly commensurable.
\subsection{Acknowledgments}
KM was supported by Academy of Finland grant no. 285894. MR acknowledges the support of NSF grant DMS-1902063 and a Sloan Fellowship. XS was supported by NSF grant DMS-1802224. TT was supported by a Simons Investigator grant, the James and Carol Collins Chair, the Mathematical Analysis \& Application Research Fund Endowment, and by NSF grant DMS-1764034. JT was supported by a Titchmarsh Fellowship.
\subsection{Notation}\label{notation-sec}
We use $X \ll Y$, $X = O(Y)$, or $Y \gg X$ to denote the estimate $|X| \leq CY$ for some constant $C$. If we wish to permit this constant to depend on one or more parameters we shall indicate this by appropriate subscripts, thus for instance $O_{\eps,A}(Y)$ denotes a quantity bounded in magnitude by $C_{\eps,A} Y$ for some quantity $C_{\eps,A}$ depending only on $\eps,A$. We write $X \asymp Y$ for $X \ll Y \ll X$.
We use $1_E$ to denote the indicator of an event $E$, thus $1_E$ equals $1$ when $E$ is true and $0$ otherwise.
We let $e$ denote the standard real character $e(x) \coloneqq e^{2\pi ix}$. \section{Derivative estimates}\label{analytic-sec}
We generalize the binomial coefficient $\binom{n}{m}$ to real $0 \leq m \leq n$ by the formula $$\binom{n}{m} \coloneqq \frac{\Gamma(n+1)}{\Gamma(m+1)\Gamma(n-m+1)}$$ where $$\Gamma(x) \coloneqq \frac{e^{-\gamma x}}{x} \prod_{n=1}^\infty \left(1+\frac{x}{n}\right)^{-1} e^{x/n}$$ is the Gamma function (with $\gamma$ the Euler--Mascheroni constant). This is of course consistent with the usual definition of the binomial coefficient. Observe that the digamma function $$\psi(x) \coloneqq \frac{\Gamma'}{\Gamma}(x) = -\gamma + \sum_{n=0}^\infty \frac{1}{n+1} - \frac{1}{n+x}$$ is a smooth increasing concave function on $(0,+\infty)$, with $$ \psi'(x) = \sum_{n=0}^\infty \frac{1}{(n+x)^2}$$ positive and decreasing, and $$ \psi''(x) = -\sum_{n=0}^\infty \frac{2}{(n+x)^3}$$ negative. For future reference we also observe the standard asymptotics \begin{align}
\psi(x) &= \log x + O\left(\frac{1}{x}\right) \label{psi-1}\\
\psi'(x) &= \frac{1}{x} + O\left(\frac{1}{x^2}\right) \label{psi-2}\\
\psi''(x) &= -\frac{1}{x^2} + O\left(\frac{1}{x^3}\right) \label{psi-3} \end{align} and the Stirling approximation \begin{equation}\label{stirling} \log \Gamma(x) = x \log x - x - \frac{1}{2} \log x + \log \sqrt{2\pi} + O\left(\frac{1}{x}\right) \end{equation} for any $x \geq 1$; see e.g., \cite[\S 6.1, 6.3, 6.4]{abramowitz+stegun}. One could also extend these functions meromorphically to the entire complex plane, but we will not need to do so here.
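The asymptotics \eqref{psi-1} and \eqref{stirling} can be illustrated numerically (our sketch, not used in the argument), approximating $\psi$ by a central difference of $\log \Gamma$:

```python
import math

def psi(x, h=1e-5):
    # central difference of log Gamma approximates the digamma function
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

for x in (10.0, 100.0, 1000.0):
    # psi(x) = log x + O(1/x)
    assert abs(psi(x) - math.log(x)) < 1.0 / x
    # Stirling: log Gamma(x) = x log x - x - (1/2) log x
    #                          + log sqrt(2 pi) + O(1/x)
    stirling = (x * math.log(x) - x - 0.5 * math.log(x)
                + math.log(math.sqrt(2 * math.pi)))
    assert abs(math.lgamma(x) - stirling) < 1.0 / x

print("digamma and Stirling asymptotics confirmed numerically")
```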
From the increasing nature of $\psi$ we see that $n \mapsto \binom{n}{m}$ is strictly increasing on $[m,+\infty)$ for fixed real $m > 0$, and from Stirling's approximation \eqref{stirling} we see that it goes to infinity as $n \to \infty$. Thus for given $t > 1$, we see from the inverse function theorem that there exists a unique smooth function $f_t \colon [0,+\infty) \to [0,+\infty)$ with $f_t(m) > m$ for all $m$, such that \begin{equation}\label{bnm-2}
\binom{f_t(m)}{m} = t. \end{equation} In particular, the equation \eqref{bnt} holds for given integers $1 \leq m \leq n$ and $t \geq 2$ if and only if $n = f_t(m)$. This function $f_t$ was analyzed by Kane \cite{kane-1}, who among other things was able to extend $f_t$ holomorphically to a certain sector, which then allowed him to estimate high derivatives of this function. However, for our analysis we will only need to control the first few derivatives of $f_t$, which can be estimated by hand:
\begin{proposition}[Estimates on the first few derivatives]\label{derivi} Let $t,m$ be sufficiently large with $m \leq f_t(m)/2$. Then \begin{equation}\label{fatm}
f_t(m) \asymp m t^{1/m} \end{equation} and \begin{equation}\label{one} -f'_t(m) \asymp (f_t(m) - 2m) \frac{\log t}{m^2} \end{equation} and \begin{equation}\label{ten}
f''_t(m) \asymp f_t(m) \left(\frac{\log t}{m^2}\right)^2. \end{equation} In particular, $f_t$ is convex and decreasing in this regime. \end{proposition}
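The bound \eqref{fatm} can be illustrated by the following numerical sketch (ours), which inverts the real-variable binomial coefficient by bisection on $\log \Gamma$ and compares $f_t(m)$ with $m t^{1/m}$; the tolerance for the implied constants is chosen loosely:

```python
import math

def log_binom(n, m):
    # log of the real-variable binomial coefficient via log Gamma
    return math.lgamma(n + 1) - math.lgamma(m + 1) - math.lgamma(n - m + 1)

def f(t, m):
    # the unique real n >= m with binom(n, m) = t, by bisection
    lo, hi = float(m), 2.0 * m
    while log_binom(hi, m) < math.log(t):
        hi *= 2
    for _ in range(100):
        mid = (lo + hi) / 2
        if log_binom(mid, m) < math.log(t):
            lo = mid
        else:
            hi = mid
    return lo

t = 1e12
for m in (5, 10, 20):
    ratio = f(t, m) / (m * t ** (1.0 / m))
    assert 0.05 < ratio < 20.0, (m, ratio)

print("f_t(m) is within a bounded factor of m t^(1/m)")
```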
The bound \eqref{fatm} can be viewed as a generalization of \eqref{n-form} to non-integer values of $n,m,t$.
\begin{proof} Taking logarithms in \eqref{bnm-2} we have \begin{equation}\label{logf}
\log \Gamma(f_t(m)+1) - \log \Gamma(f_t(m)-m+1) - \log \Gamma(m+1) = \log t. \end{equation} Writing $n=f_t(m) \geq 2m$, we thus see from the mean value theorem that $$ m \psi(n - \theta m + 1) - \log \Gamma(m+1) = \log t$$ for some $0 \leq \theta \leq 1$ depending on $t,m$. Applying \eqref{psi-1}, we conclude that $$ \log(n - \theta m) = \frac{1}{m} ( \log t + \log \Gamma(m+1) ) + O( \frac{1}{n} )$$ which implies that $$ n \asymp n - \theta m \asymp \exp( \frac{1}{m} ( \log t + \log \Gamma(m+1) ) )$$ and the claim \eqref{fatm} then follows from Stirling's approximation \eqref{stirling}.
If we differentiate \eqref{logf} we obtain \begin{equation}\label{deriv}
f'_t(m) \psi( f_t(m)+1) - (f'_t(m)-1) \psi(f_t(m)-m+1) - \psi(m+1) = 0. \end{equation} In particular we obtain the first derivative formula \begin{equation}\label{form} f'_t(m) = \frac{\psi(m+1) - \psi(n-m+1)}{\psi(n+1) - \psi(n - m + 1)}. \end{equation} From \eqref{psi-2} and the mean value theorem we have \begin{equation}\label{psini}
\psi(n+1) - \psi(n - m + 1) \asymp \frac{m}{n} \end{equation} while from either the mean-value theorem and \eqref{psi-2} (if $m \asymp n$) or from \eqref{psi-1} (if say $m \leq n/4$) we see that $$ \psi(n-m+1) - \psi(m+1) \asymp \frac{n-2m}{n} \log \frac{n}{m}.$$ We conclude that $$ -f'_t(m) \asymp \frac{n-2m}{m} \log \frac{n}{m}$$ and the claim \eqref{one} follows from~\eqref{fatm}.
Differentiating \eqref{deriv} again, we conclude $$
f''_t(m) \psi(n+1) + (f'_t(m))^2 \psi'(n+1) - f''_t(m) \psi(n-m+1) - (f'_t(m)-1)^2 \psi'(n-m+1) - \psi'(m+1) = 0, $$ which we can rearrange using \eqref{form} as \begin{align*} f''_t(m) (\psi(n+1) - \psi(n-m+1))^3 &= (\psi(n+1) - \psi(n-m+1))^2 \psi'(m+1) \\ &\quad + (\psi(n+1)-\psi(m+1))^2 \psi'(n-m+1)\\ &\quad - (\psi(n-m+1)-\psi(m+1))^2 \psi'(n+1). \end{align*} From \eqref{psini}, \eqref{fatm} it thus suffices to show that \begin{align*}
(\psi(n+1) - \psi(n-m+1))^2 \psi'(m+1) \quad & \\ + (\psi(n+1)-\psi(m+1))^2 \psi'(n-m+1) \quad & \\ - (\psi(n-m+1)-\psi(m+1))^2 \psi'(n+1) &\asymp \frac{m}{n^2} \log^2(n/m). \end{align*} The quantity $(\psi(n+1) - \psi(n-m+1))^2 \psi'(m+1)$ is non-negative and is of size $O( m/n^2 )$ by \eqref{psini}, \eqref{psi-2}. Thus it will suffice to show that $$ (\psi(n+1)-\psi(m+1))^2 \psi'(n-m+1) - (\psi(n-m+1)-\psi(m+1))^2 \psi'(n+1) \asymp \frac{m}{n^2} \log^2(n/m).$$ We split the left-hand side as the sum of $$ (\psi(m+1)-\psi(n+1))^2 (\psi'(n-m+1) - \psi'(n+1)) $$ and \begin{align*} &\psi'(n+1) [(\psi(n+1)-\psi(m+1))^2 - (\psi(n-m+1)-\psi(m+1))^2] \\ &=(\psi(n+1)-\psi(m+1) + \psi(n-m+1)-\psi(m+1)) (\psi(n+1)-\psi(n-m+1)) \psi'(n+1). \end{align*} From \eqref{psi-1}, \eqref{psi-3}, and the mean value theorem the first term is positive and comparable to $\frac{m}{n^2} \log^2 \frac{n}{m}$; similarly, from \eqref{psi-1}, \eqref{psi-2}, and~\eqref{psini} the second term is positive and bounded above by $O( \frac{m}{n^2} \log \frac{n}{m} )$. The claim follows. \end{proof}
To apply these derivative bounds, we use the following lemma that implicitly appears in \cite{kane-1}, \cite{kane-2}:
\begin{lemma}[Small non-zero derivative implies few integer values]\label{integ} Let $k \geq 1$ be a natural number, and suppose that $f: I \to \R$ is a smooth function on an interval $I$ of some length $|I|$ such that one has the derivative bound \begin{equation}\label{deriv-f}
0 < \left| \frac{1}{k!} f^{(k)}(x) \right| < |I|^{-k(k+1)/2} \end{equation} for all $x \in I$. Then there are at most $k$ integers $m \in I$ for which $f(m)$ is also an integer. \end{lemma}
\begin{proof} Suppose for contradiction that there are $k+1$ distinct integers $m_1,\dots,m_{k+1} \in I$ with $f(m_1),\dots,f(m_{k+1})$ an integer. By Lagrange interpolation, the function \begin{equation}\label{p-def}
P(x) \coloneqq \sum_{i=1}^{k+1} \prod_{1 \leq j \leq k+1: j \neq i} \frac{x-m_j}{m_i - m_j} f(m_i) \end{equation}
is a polynomial of degree at most $k$ such that $f(x)-P(x)$ vanishes at $m_1,\dots,m_{k+1}$. By many applications of Rolle's theorem (see \cite[Corollary 2.1]{kane-1}), there must then exist $x_* \in I$ such that $f^{(k)}(x_*) - P^{(k)}(x_*)$ vanishes. From \eqref{p-def}, $\frac{1}{k!} P^{(k)}(x)$ (which is the degree $k$ coefficient of $P(x)$) is an integer multiple of $\frac{1}{\prod_{1 \leq i < j \leq k+1} |m_i-m_j|} \geq |I|^{-k(k+1)/2}$, and thus either vanishes or has magnitude at least $|I|^{-k(k+1)/2}$. But this contradicts \eqref{deriv-f}. \end{proof}
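The integrality fact at the heart of this proof, that the leading coefficient of the interpolant $P$ is an integer multiple of $1/\prod_{1 \leq i < j \leq k+1} |m_i - m_j|$ when $f$ takes integer values at the integers $m_1,\dots,m_{k+1}$, can be tested directly with exact rational arithmetic (our sketch):

```python
import math
import random
from fractions import Fraction
from itertools import combinations

def leading_coeff(pts, vals):
    # degree-k coefficient of the Lagrange interpolant P through the
    # k+1 points (pts[i], vals[i]), as an exact rational
    return sum(
        Fraction(y, math.prod(x - x2 for x2 in pts if x2 != x))
        for x, y in zip(pts, vals)
    )

random.seed(0)
for _ in range(100):
    k = random.randint(1, 5)
    pts = random.sample(range(-30, 30), k + 1)       # distinct integers m_i
    vals = [random.randint(-100, 100) for _ in pts]  # integer values f(m_i)
    v = math.prod(a - b for a, b in combinations(pts, 2))
    # the leading coefficient times prod_{i<j} (m_i - m_j) is an integer
    assert (leading_coeff(pts, vals) * v).denominator == 1

print("leading coefficient is an integer multiple of 1/prod |m_i - m_j|")
```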
As an application of these bounds, we can locally control the number of solutions \eqref{bnt} in the region $n^{1/2+\eps} \leq m \leq n/2$, thus giving a version of Theorem \ref{main} in a small interval:
\begin{corollary}\label{core} Let $0 < \eps < 1$, let $t$ be sufficiently large depending on $\eps$, and suppose that $(n,m)$ is a solution to \eqref{bnt} in the left half \eqref{left-half} of Pascal's triangle with $m \geq n^{1/2+\eps}$. Then there is at most one other solution $(n',m')$ to \eqref{bnt} in the interval $m' \in [m - m^{\eps/10}, m]$. \end{corollary}
\begin{proof} From \eqref{n-form-alt} and the hypothesis $n^{1/2+\eps} \leq m \leq n/2$ we have \begin{equation}\label{man}
\frac{\log t}{\log_2 t} \ll m \ll \log t. \end{equation} For $x$ in the interval $I \coloneqq [m - m^{\eps/10},m]$, we then have $\frac{\log t}{x} = \frac{\log t}{m} + O( m^{-2+\eps/10} \log t) = \frac{\log t}{m} + O(1)$, and so we see from Proposition \ref{derivi} and \eqref{man} that $f_t(x) \asymp n$ and $$
0 < |f''_t(x)| \ll n \left( \frac{\log t}{m^2}\right)^2 \ll \frac{n}{m^2} \log_2^2 t \ll \frac{n}{m^2} \log^2 m$$ for all $x \in I$. Since $m \geq n^{1/2+\eps}$ and $t$ is sufficiently large depending on $\eps$, $m$ is also sufficiently large depending on $\eps$, and we have $$
0 < |f''_t(x)| < |I|^{-3}$$ for all $x \in I$. Applying Lemma \ref{integ}, there are at most two integers $m' \in I$ with $f_t(m')$ an integer. Since $m$ is already one of these integers, the claim follows. \end{proof}
The same method, using higher derivative estimates on $f_t$, also gives similar results (with weaker bounds on the number of solutions) for $m < n^{1/2+\eps}$; see \cite{kane-1}, \cite{kane-2}. However, we will only need to apply this method in the $m \geq n^{1/2+\eps}$ regime here.
We are now ready to prove Proposition \ref{prop:kane}. \begin{proof}[Proof of Proposition \ref{prop:kane}] Let $\eps>0$, let $t$ be sufficiently large depending on $\eps$, and let $(n,m)$ be a solution to \eqref{bnt} in the region \begin{equation}\label{region1} \{ (n,m): \exp(\log^{2/3+\eps} n) \leq m \leq n/2\}. \end{equation} For brevity we allow all implied constants in the following arguments to depend on $\eps$. Suppose $(n',m')$ is another solution in this region with $m' < m$, $n' > n$, and $$
m-m', n'-n \ll_{\eps'} \exp( O( \log^{1-\eps'}_2 t ) ). $$ From \eqref{one} and convexity (and the bounds $m \ll \log t$ and $m-m' \geq 1$) we have \begin{align*} n'-n &= f_t(m')-f_t(m) \\ &\geq f'_t(m) (m'-m) \\ &\gg (n-2m) \frac{\log t}{m^2} (m-m') \\ &\gg \frac{n-2m}{m} \\ &= \frac{n}{m} - 2 \end{align*} and thus $$ n/m \ll_{\eps'} \exp( O( \log^{1-\eps'}_2 t ) ).$$ From \eqref{n-form-alt} we have $n \gg \log t$, hence $\log^{1-\eps'}_2 t \ll \log^{1-\eps'} n$, and so for some constant $C > 0$, $m \geq n / \exp(C \log^{1-\eps'} n) \geq n^{9/10}$ (shrinking $\eps'$ slightly if necessary) if $t$ is sufficiently large depending on $\eps'$. The result now follows from Corollary \ref{core}.
\end{proof}
It remains to establish Proposition \ref{distance}. This will be the objective of the next two sections of the paper.
\section{The distance bound}\label{distance-sec}
In this section we assume Proposition \ref{equid} and use it to establish Proposition \ref{distance}.
Throughout this section $0 < \eps < 1$ will be fixed; we can assume it to be small. We may assume that $t$ is sufficiently large depending on $\eps$, as the claim is trivial otherwise. We may assume that $m' < m$, hence also $n' > n$. We assume for the sake of contradiction that at least one of the claims \begin{equation}\label{meri}
m - m' \geq \exp( \log^{2/3 + \eps} n' ) \end{equation} and \begin{equation}\label{meri-2}
m, m', n'-n \geq \exp( \log^{2/3 + \eps} n' ) \end{equation} is true, as the claim is trivial otherwise. This allows us to select a ``good'' scale:
\begin{lemma}[Selection of scale]\label{scale} With the above assumptions, there exists $P > 1$ satisfying the following properties: \begin{itemize} \item[(i)] ($m,m',n,n'$ not too large) We have $m,m',n,n' \leq \exp( \log^{\frac{3}{2} - \frac{\eps}{10}} P )$. (In particular, $P$ will be sufficiently large depending on $\eps$, since otherwise $t = O_\eps(1)$.)
\item[(ii)] (Dichotomy) If $a,a',b,b'$ are integers with $|a|, |a'|, |b|, |b'| \leq \log^{1/100} P$, and $j$ is a natural number, then either \begin{equation}\label{latch}
|a m + a'm' + bn + b'n'| \leq P^j / \log^{1000} P \end{equation} or
$$ |a m + a'm' + bn + b'n'| \geq P^j \log^{1000} P.$$ \item[(iii)] (Separation) At least one of the statements $$ m-m' \geq P \log^{100} P$$ and $$ m, m', n'-n \geq P \log^{100} P$$ is true. \end{itemize} \end{lemma}
\begin{proof} We restrict $P$ to be a power of two in the range $$\exp( \log^{2/3+\eps/2} n' ) \leq P \leq \exp(2 \log^{2/3+\eps/2} n' );$$ such a choice will automatically obey (i) since $n' > n > m > m'$ and (iii) since we assumed that either~\eqref{meri} or~\eqref{meri-2} holds. There are $\gg \log^{2/3+\eps/2} n'$ choices for $P$. Some of these will not obey (ii), but we can control the number of exceptions as follows. Firstly, observe that the conclusion \eqref{latch} will hold unless $j = O( \log^{1/3} n' )$, so we may restrict attention to this range of $j$. The number of possible tuples $(a,a',b,b',j)$ is then $O( \log^{4/100} P \log^{1/3} n' )$. For each such tuple, we see from the restriction on $P$ that the number of $P$ with
$$ P^j / \log^{1000} P < |a m + a'm' + bn + b'n'| < P^j \log^{1000} P$$ is at most $O(\log_2 n')$ (since $a m + a'm' + bn + b'n'$ is of size $O( (n')^2 )$, say). Thus we see that the total number of $P$ which fail to obey (ii) is at most $$ O( \log^{4/100} P \log^{1/3} n' \log_2 n' ) $$ which is negligible compared to the total number of choices, which is $\gg \log^{2/3+\eps/2} n'$. Thus we can find a choice of $P$ which obeys all of (i), (ii), and (iii), giving the claim. \end{proof}
Henceforth we fix a scale $P$ obeying the properties in Lemma \ref{scale}. We now introduce a relation $\approx$ on the reals by declaring $x \approx y$ if $|x-y| \leq P / \log^{1000} P$. Thus, by Lemma \ref{scale}(ii), if $am + a'm' + bn + b'n' \not \approx 0$ for $a,a',b,b'$ as in Lemma \ref{scale}(ii) then $|am + a'm' + bn + b'n'| \geq P \log^{1000} P$. Also, from Lemma \ref{scale}(iii), at least one of the statements $$ m \not \approx m'$$ and $$ m, m', n'-n \not \approx 0$$ is true.
We introduce a random variable $\p$, which is drawn uniformly from the primes in the interval $I \coloneqq [P, P + P \log^{-100} P]$ (note that there is at least one such prime thanks to the prime number theorem). From \eqref{p-eq} we surely have $$ \sum_{j=1}^\infty \left(\left\{ \frac{m}{\p^j} \right\} + \left\{ \frac{n-m}{\p^j} \right\} - \left\{ \frac{n}{\p^j} \right\}\right) = \sum_{j=1}^\infty \left(\left\{ \frac{m'}{\p^j} \right\} + \left\{ \frac{n'-m'}{\p^j} \right\} - \left\{ \frac{n'}{\p^j} \right\}\right).$$ We can restrict attention to those $j$ with $j \leq \log^{1/2} P$, since the summands vanish otherwise. For any real number $N$, we may take covariances of both sides of this identity with the random variable $\{ \frac{N}{\p}\}$ to conclude that \begin{equation}\label{jp}
\sum_{j \leq \log^{1/2} P} \left(c_j( N, m) + c_j(N, n-m) - c_j(N, n)\right) = \sum_{j \leq \log^{1/2} P} \left(c_j( N, m') + c_j(N, n'-m') - c_j(N, n')\right) \end{equation} for any real number $N$, where the covariances $c_j(N,M)$ are defined as \begin{align*} c_j(N,M) &\coloneqq \E \left\{ \frac{N}{\p}\right\} \left\{ \frac{M}{\p^j}\right\} - \E \left\{ \frac{N}{\p}\right\} \E \left\{ \frac{M}{\p^j}\right\} \\ &= \E \left(\frac{1}{2} - \left\{\frac{N}{\p}\right\}\right) \left( \frac{1}{2} - \left\{ \frac{M}{\p^j}\right\} \right) - \E \left(\frac{1}{2} - \left\{ \frac{N}{\p}\right\} \right) \E \left( \frac{1}{2} - \left\{ \frac{M}{\p^j}\right\} \right). \end{align*}
We now compute these covariances:
\begin{proposition}[Covariance estimates]\label{covar} Let $N, M \in \{ m, n, n-m, m', n', n'-m'\}$, and let $j$ be a natural number with $1 \leq j \leq \log^{1/2} P$. \begin{itemize} \item[(i)] If $j \geq 2$, then $c_j(N,M) \ll \log^{-10} P$. \item[(ii)] If $j=1$ and $N \approx 0$ or $M \approx 0$, then $c_j(N,M) \ll \log^{-1000} P$. \item[(iii)] If $j=1$, $N, M \not \approx 0$ and there exist coprime natural numbers $1 \leq a,b \leq \log^{1/100} P$ such that $aN \approx bM$, then $c_j(N,M) = \frac{1}{12 ab} + O( \log^{-1/1000} P)$. \item[(iv)] If $j=1$ and $N,M$ are not of the form in (ii) or (iii), then $c_j(N,M) \ll \log^{-1/1000} P$. \end{itemize} \end{proposition}
\begin{remark} The term $\frac{1}{12ab}$ appearing in Proposition \ref{covar}(iii) is also the covariance between $\{n{\bf x}\}$ and $\{m{\bf x}\}$ for ${\bf x}$ drawn uniformly from the unit interval whenever $n,m$ are natural numbers with $an=bm$ for some coprime $a,b$; see \cite[Section 2]{sound-preprint}. Indeed, both assertions are proven by the same Fourier-analytic argument, and Proposition \ref{covar} endows the linear span of the six functions $\{ \frac{N}{\p} \}$ for $N \in \{m,n,n-m,m',n',n'-m'\}$ with an inner product closely related to the norm $N(\cdot)$ studied in \cite{sound-preprint}, the structure of which is the key to obtaining a contradiction from our separation hypotheses on $n-n', m-m'$. \end{remark}
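As an illustrative aside (not part of the proof), the covariance value $\frac{1}{12ab}$ in the remark above is easy to check numerically. The following Python sketch estimates $\operatorname{Cov}(\{n{\bf x}\},\{m{\bf x}\})$ for ${\bf x}$ uniform on $[0,1)$ by a midpoint Riemann sum; the function name and sample count are arbitrary choices for this illustration.

```python
def frac_cov(n, m, samples=200_000):
    """Estimate Cov({n x}, {m x}) for x uniform on [0,1) via a midpoint
    Riemann sum over `samples` equally spaced points."""
    s_xy = s_x = s_y = 0.0
    for i in range(samples):
        x = (i + 0.5) / samples
        fx, fy = (n * x) % 1.0, (m * x) % 1.0
        s_xy += fx * fy
        s_x += fx
        s_y += fy
    return s_xy / samples - (s_x / samples) * (s_y / samples)

# If a*n = b*m with a, b coprime, the covariance should be 1/(12ab).
# e.g. n=2, m=3: a=3, b=2, so the predicted value is 1/72.
print(frac_cov(2, 3), 1 / 72)
```

For instance, $n=m=1$ recovers the variance $\frac{1}{12}$ of the uniform distribution, and $n=1$, $m=2$ (so $a=2$, $b=1$) gives $\frac{1}{24}$.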
\begin{proof}[Proof of Proposition~\ref{covar} assuming Proposition~\ref{equid}] We first dispose of the easy case (ii). If $N \approx 0$, then $\{ \frac{N}{\p} \} \leq \log^{-1000} P$, and the claim follows from the triangle inequality; similarly if $M \approx 0$, or more generally if $M \leq P^j/\log^{1000} P$. Hence by Lemma \ref{scale}(ii), we may from now on assume that \[ N \geq P \log^{1000} P \quad \text{and} \quad M \geq P^j \log^{1000} P. \]
To handle the remaining cases we use the truncated Fourier expansion \begin{equation} \label{eq:1/2-xFourier} \begin{split}
\frac{1}{2} - \{ x \} &= \sum_{0 < |n| \leq N_0} \frac{e(n x)}{2\pi i n} + O\left(\frac{1}{1+N_0 \mathrm{dist}(x, \Z)}\right) \\
&= \sum_{0 < |n| \leq N_0} \frac{e(n x)}{2\pi i n} + O\left(1_{\mathrm{dist}(x, \Z) \leq N_0^{-1/2}} + \frac{1}{N_0^{1/2}}\right) \end{split} \end{equation} that holds for any $N_0 \geq 1$ (see e.g.~\cite[Formula (4.18)]{ik}).
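To make the truncated expansion \eqref{eq:1/2-xFourier} concrete: away from the integers, the partial sine series converges to the sawtooth $\frac12 - \{x\}$ at rate roughly $1/(N_0\, \mathrm{dist}(x,\Z))$. A quick Python check (illustrative only; the cutoff $N_0$ and test points are arbitrary):

```python
import math

def sawtooth(x):
    # The function 1/2 - {x}.
    return 0.5 - (x % 1.0)

def fourier_partial(x, N0):
    # Sum over 0 < |n| <= N0 of e(n x) / (2*pi*i*n); the terms for n and -n
    # pair up into sin(2*pi*n*x) / (pi*n), so the partial sum is real.
    return sum(math.sin(2 * math.pi * n * x) / (math.pi * n)
               for n in range(1, N0 + 1))

for x in (0.13, 0.5, 0.77):
    err = abs(sawtooth(x) - fourier_partial(x, 10_000))
    print(f"x = {x}: truncation error {err:.2e}")
```

At $x$ bounded away from $\Z$ and $N_0 = 10^4$ the error is already far below $10^{-2}$, consistent with the stated bound.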
Our primary tool is Proposition \ref{equid}. Note that, for $t \in I$, $\log t = \log P + O(\log^{-99} P)$, so that together with the prime number theorem Proposition~\ref{equid} implies that \begin{equation}\label{eq}
\E W\left( \frac{N}{\p}, \frac{M}{\p^{j}} \right) = \frac{1}{|I|} \int_I W\left( \frac{N}{t}, \frac{M}{t^j} \right)\ dt + O_{\eps}( \|W\|_{C^3} \log^{-99} P) \end{equation} for any smooth $\Z^2$-periodic $W \colon \R^2 \to \C$ and that, for any $M', N' = O(\exp(\log^{3/2-\varepsilon/2} P))$, \begin{equation}\label{eq:EevsII}
\E e\left( \frac{N'}{\p} + \frac{M'}{\p^{j}} \right) = \frac{1}{|I|} \int_I e\left( \frac{N'}{t}+ \frac{M'}{t^j} \right)\ dt + O_{\eps}(\log^{-99} P). \end{equation}
Applying \eqref{eq} with $W$ a suitable cutoff localized to the region $\{ (x,y): \mathrm{dist}(x, \Z) \leq 2 N_0^{-1/2} \}$ that equals one on $\{ (x,y): \mathrm{dist}(x, \Z) \leq N_0^{-1/2} \}$ chosen so that $\|W\|_{C^3} \ll N_0^{3/2}$, we see that, for any $N_0 \in [1, \log^{20} P]$ we have $$ \P\left(\mathrm{dist}\left( \frac{N}{\p}, \Z\right) \leq N_0^{-1/2}\right)
\ll \frac{1}{|I|} \int_I 1_{\mathrm{dist}( \frac{N}{t}, \Z) \leq 2N_0^{-1/2}}\ dt + N_0^{-1/2}.$$ Since $N \geq P \log^{1000} P$, the first term on the right-hand side can be computed to be $O( N_0^{-1/2})$. Thus \begin{equation} \label{eq:PdistNsmall} \P\left(\mathrm{dist}\left( \frac{N}{\p}, \Z\right) \leq N_0^{-1/2}\right) \ll N_0^{-1/2} \end{equation} and a similar argument gives \begin{equation} \label{eq:PdistMsmall} \P\left(\mathrm{dist}\left( \frac{M}{\p^j}, \Z\right) \leq N_0^{-1/2}\right) \ll N_0^{-1/2}. \end{equation}
To prepare for the proofs of parts (i), (iii) and (iv), let us first show that, for $1 \leq j \leq \log^{1/2} P$, we have \begin{equation} \label{m-ast} \E \left( \frac{1}{2} - \left\{\frac{M}{\p^j} \right\} \right) \ll \log^{-10} P. \end{equation} We use the Fourier expansion~\eqref{eq:1/2-xFourier} with $N_0 = \log^{20} P$. Averaging over $p \in I$ and applying~\eqref{eq:PdistMsmall} to handle the first error term, we see that
$$ \E \left( \frac{1}{2} - \left\{\frac{M}{\p^j} \right\} \right) = \sum_{0 < |m| \leq \log^{20} P} \frac{1}{2 \pi i m} \E e\left( m \frac{M}{\p^j} \right) + O(\log^{-10} P).$$ By the triangle inequality and \eqref{eq:EevsII}, it suffices to show that, for every non-zero integer $m = O(\log^{20} P)$,
$$ \frac{1}{|I|} \int_I e\left( m \frac{M}{t^j} \right)\ dt \ll \log^{-11} P.$$ Recalling that $M \geq P^j \log^{1000} P$, this estimate follows from a standard integration by parts (see e.g.~\cite[Lemma 8.9]{ik}). Similarly \begin{equation} \label{n-ast} \E \left(\frac{1}{2} - \left\{\frac{N}{\p} \right\}\right) \ll \log^{-10} P . \end{equation}
Furthermore, using similarly~\eqref{eq:1/2-xFourier},~\eqref{eq:PdistNsmall},~\eqref{eq:PdistMsmall} and~\eqref{eq:EevsII}, we see that, whenever $1 \leq N_0 \leq \log^{20} P$, \begin{equation} \label{eq:NpMpFour}
\E \left(\frac{1}{2} - \left\{\frac{N}{\p} \right\}\right) \left(\frac{1}{2} - \left\{\frac{M}{\p^j} \right\} \right) = -\sum_{0 < |m|, |n| < N_0} \frac{1}{4\pi^2 mn} \frac{1}{|I|} \int_I e\left( n \frac{N}{t} + m \frac{M}{t^j} \right)\ dt + O\left(\frac{1}{N_0^{1/2}}\right). \end{equation}
Now we are ready to prove (i), (iii), and (iv). Let us start with (i). In light of~\eqref{m-ast},~\eqref{n-ast} and~\eqref{eq:NpMpFour} with $N_0 = \log^{20} P$, it suffices to show that
$$ \frac{1}{|I|} \int_I e\left( n \frac{N}{t} + m \frac{M}{t^j} \right)\ dt \ll \log^{-11} P$$ whenever $n,m = O(\log^{20} P)$ are non-zero integers. Applying a change of variables $t = P / s$, we reduce to showing that \begin{equation} \label{eq:sintclaim} \int_{1/(1 + \log^{-100} P)}^1 e( as + bs^j )\ ds \ll \log^{-200} P \end{equation}
(say), where $a \coloneqq nN/P$ and $b \coloneqq mM/P^j$. By hypothesis, we have $|a|, |b| \geq \log^{1000} P$. Since $2 \leq j \leq \log^{1/2} P$, the derivative $a+jbs^{j-1}$ of the phase $as+bs^j$ is at least $\log^{200} P$ outside of an interval of length at most $O(\log^{-200} P)$, and~\eqref{eq:sintclaim} now follows from a standard integration by parts (see e.g.~\cite[Lemma 8.9]{ik}). This concludes the proof of (i).
Let us now turn to (iv). In light of~\eqref{m-ast},~\eqref{n-ast} and~\eqref{eq:NpMpFour} with $N_0 = \log^{1/500} P$, it suffices to show that
$$ \frac{1}{|I|} \int_I e\left( \frac{nN+mM}{t} \right)\ dt \ll \log^{-1/500} P$$
whenever $n,m = O( \log^{1/500} P)$ are non-zero integers. From the hypothesis (iv) and Lemma \ref{scale}(ii) (after factoring out any common factor of $n$ and $m$), we have $|nN+mM| \geq P \log^{1000} P$. The claim (iv) now follows from integration by parts.
Finally we show (iii). In light of~\eqref{m-ast},~\eqref{n-ast} and~\eqref{eq:NpMpFour} with $N_0 = \log^{1/500} P$, it suffices to show that
$$ -\sum_{0 < |n|,|m| \leq \log^{1/500} P} \frac{1}{4\pi^2 mn} \frac{1}{|I|} \int_I e\left( \frac{nN+mM}{t} \right)\ dt = \frac{1}{12ab} + O( \log^{-1/1000} P ).$$
Let us first consider those $n,m = O(\log^{1/500} P)$ for which $nN+mM \not \approx 0$. By Lemma \ref{scale}(ii), $|nN+mM| \geq P \log^{1000} P$, and as in case (iv), the contribution of such pairs $(n,m)$ is acceptable.
Consider now the case $nN \approx -mM$ for some non-zero integers $n,m = O(\log^{1/500} P)$. By assumption also $aN \approx bM$ for some co-prime positive integers $a, b \leq \log^{1/100} P$, and hence by Lemma~\ref{scale}(ii) $-amM \approx bnM$, i.e. $(am+bn)M \approx 0$; since $M \not \approx 0$, this forces $am+bn = 0$, i.e. $(n,m)$ is a multiple of $(a,-b)$. On the other hand if $(n, m)$ is a multiple of $(a, -b)$, then $n N \approx -mM$ by Lemma~\ref{scale}(ii).
Thus it remains to show that
$$ \sum_{0 <|k| \leq \frac{\log^{1/500} P}{\max\{a,b\}}} \frac{1}{4\pi^2 k^2 ab}
\frac{1}{|I|} \int_I e\left(\frac{kaN-kbM}{t} \right)\ dt = \frac{1}{12ab} + O( \log^{-1/1000} P ).$$ Since $aN \approx bM$ we have, for every $0 < |k| \leq \log^{1/500} P$,
$$ \frac{1}{|I|} \int_I e\left(\frac{kaN -kbM}{t} \right)\ dt = 1 - O( \log^{-100} P )$$ and so it suffices to show that
$$ \sum_{0 <|k| \leq \frac{\log^{1/500} P}{\max\{a, b\}}} \frac{1}{4\pi^2 k^2 ab} = \frac{1}{12ab} + O( \log^{-1/1000} P ).$$ This is trivial for $ab \geq \log^{1/1000} P$. For $ab \leq \log^{1/1000} P$ the claim follows from the Basel identity $$ \sum_{k=1}^\infty \frac{1}{k^2} = \frac{\pi^2}{6}$$ and the tail bound $$ \sum_{k \geq \log^{1/1000} P} \frac{1}{k^2} \ll \log^{-1/1000} P.$$ \end{proof}
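For what it is worth, the final computation above is elementary to confirm by machine: the symmetric sum $\sum_{0 < |k| \leq K} \frac{1}{4\pi^2 k^2 ab}$ converges to $\frac{1}{12ab}$ with error at most $\frac{1}{2\pi^2 ab K}$ by the integral test. A short Python sketch (illustrative only; the parameters are arbitrary):

```python
import math

def symmetric_sum(K, a, b):
    # Sum over 0 < |k| <= K of 1/(4*pi^2*k^2*a*b); the k and -k terms agree,
    # so each positive k contributes twice.
    return sum(2.0 / (4 * math.pi**2 * k * k * a * b) for k in range(1, K + 1))

a, b, K = 3, 2, 10_000
limit = 1 / (12 * a * b)  # predicted by the Basel identity
print(symmetric_sum(K, a, b), limit)
```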
Now we can get back to proving Proposition~\ref{distance} assuming Proposition~\ref{equid}. From Proposition \ref{covar}(i) and \eqref{jp} we see that \begin{equation}\label{c1}
c_1( N, m) + c_1(N, n-m) - c_1(N, n) = c_1( N, m') + c_1(N, n'-m') - c_1(N, n') + O( \delta) \end{equation} for $N \in \{m,n,n-m,m',n',n'-m'\}$, where for brevity we introduce the error tolerance $$ \delta \coloneqq \log^{-1/1000} P.$$ We can now arrive at the desired contradiction by some case analysis (reminiscent of that in \cite{sound-preprint, sound-ratio}) using the remaining portions of Proposition \ref{covar}, as follows.
\subsection*{Case $m' \approx 0$} Applying \eqref{c1} with $N = m$, we conclude from Proposition \ref{covar}(ii) that \begin{equation} \label{eq:covN=m} c_1(m,m) + c_1(m,n-m) - c_1(m,n) = c_1(m,n'-m') - c_1(m,n') + O(\delta). \end{equation} From Lemma \ref{scale}(iii) we have $m \not \approx 0$ (and hence also $n-m, n'-m',n' \not \approx 0$, since these quantities are greater than or equal to $m$), hence by Proposition \ref{covar}(iii) we have $c_1(m,m) = \frac{1}{12} + O(\delta)$. Furthermore, since $m' \approx 0$, we see from Lemma~\ref{scale}(ii) that, for $1 \leq a, b \leq \log^{1/100} P$, $am \approx b(n'-m')$ if and only if $am \approx bn'$. Hence Proposition~\ref{covar}(iii), (iv) implies that \[ c_1(m, n'-m') = c_1(m, n') + O(\delta). \] Plugging these facts into~\eqref{eq:covN=m} and rearranging, we obtain \[ \frac{1}{12} + c_1(m,n-m) = c_1(m,n) + O(\delta). \] But by Proposition~\ref{covar}(iii), (iv) we know that $c_1(m, n-m) \geq -O(\delta)$, so that \[ c_1(m, n) \geq \frac{1}{12} + O(\delta). \] But since $m \not \approx n$ (because $m \leq n/2$ and $m \not \approx 0$), another application of Proposition~\ref{covar}(iii), (iv) gives \[ c_1(m, n) \leq \frac{1}{12} \cdot \frac{1}{2} + O(\delta), \] which is a contradiction.
Since $m'$ is the smallest element of $\{m,n,n-m,m',n',n'-m'\}$, in the remaining cases we have $N \not \approx 0$ for all $N \in \{m,n,n-m,m',n',n'-m'\}$, and case (ii) of Proposition \ref{covar} no longer applies.
\subsection*{Case $m \not \approx m'$ and $m' \not \approx 0$} We apply \eqref{c1} with $N = m'$ to conclude that \begin{equation} \label{eq:mnotapm'1st} c_1(m',m) + c_1(m',n-m) - c_1(m',n) = c_1(m', m') + c_1(m',n'-m') - c_1(m',n') + O( \delta ). \end{equation} Now if there are no co-prime positive integers $a, b \leq \log^{1/100} P$ such that $am' \approx bn'$ or $am' \approx b(n'-m')$, then by Proposition~\ref{covar}(iv) we have \[ c_1(m',n'-m') - c_1(m',n') = O(\delta) \] On the other hand, if such co-prime integers exist, then $am' \approx bn'$ if and only if $(a-b)m' \approx b(n'-m')$ and necessarily $a > b$, so that by Proposition \ref{covar}(iii) we have in this case \begin{equation}
c_1(m',n'-m') - c_1(m',n') = \frac{1}{12 (a-b) b} - \frac{1}{12 ab} + O(\delta) \geq -O(\delta). \end{equation}
Since Proposition~\ref{covar}(iii) also gives $c_1(m', m') \geq 1/12 + O(\delta)$, combining with~\eqref{eq:mnotapm'1st} we obtain that \begin{equation}\label{c11} c_1(m',m) + c_1(m',n-m) - c_1(m',n) \geq \frac{1}{12} - O( \delta ). \end{equation} On the other hand, since $m' \not \approx m$, we also have $m' \not \approx n-m$ since $n-m \geq m > m'$. By Proposition \ref{covar}(iii), (iv), we have $$ c_1(m',m) + c_1(m',n-m) \leq \frac{1}{12} \cdot \frac{1}{2} + \frac{1}{12} \cdot \frac{1}{2} + O(\delta),$$ which can be improved to \begin{equation} \label{eq:mnappm'imp} c_1(m',m) + c_1(m',n-m) \leq \frac{1}{12} \cdot \frac{1}{3} + \frac{1}{12} \cdot \frac{1}{2} + O(\delta), \end{equation} unless both $m \approx 2m'$ and $n-m \approx 2m'$. Since by Proposition \ref{covar}(iii), (iv) we have $c_1(m',n) \geq - O( \delta )$, the estimate~\eqref{eq:mnappm'imp} contradicts \eqref{c11}.
Hence we must have both $m \approx 2m'$ and $n-m \approx 2m'$. But then Lemma \ref{scale}(ii) forces $n \approx 4m'$, hence by Proposition \ref{covar}(iii) $$ c_1(m',m) + c_1(m',n-m) - c_1(m',n) = \frac{1}{12}\cdot \frac{1}{2}+ \frac{1}{12} \cdot\frac{1}{2} - \frac{1}{12} \cdot\frac{1}{4} + O(\delta),$$ and we again contradict \eqref{c11}.
\subsection*{Case $m \approx m'$ and $m' \not \approx 0$} By Lemma \ref{scale}(iii), we must have $n \not \approx n'$. We apply \eqref{c1} for $N=n$ to obtain \begin{equation}\label{can}
c_1( n, m) + c_1(n, n-m) - c_1(n, n) = c_1( n, m') + c_1(n, n'-m') - c_1(n, n') + O( \delta). \end{equation} Since $m \approx m'$, we have by Proposition~\ref{covar}(iii), (iv) (using also Lemma~\ref{scale}(ii)) that $c_1(n, m) = c_1(n, m') + O(\delta)$. Proposition~\ref{covar}(iii) also gives $c_1(n, n) = 1/12 + O(\delta)$. Plugging these into~\eqref{can} and rearranging, we obtain \begin{equation} \label{simplmappm'} c_1(n, n-m) + c_1(n, n') = \frac{1}{12} + c_1(n, n'-m') + O( \delta). \end{equation} Since $n \not \approx n'$ and $m \not \approx 0$, we see from Proposition \ref{covar}(iii), (iv) that \begin{equation} \label{simplmappm'-2} c_1(n, n-m) + c_1(n, n') \leq \frac{1}{12}\cdot \frac{1}{2}+\frac{1}{12}\cdot\frac{1}{2} + O(\delta) \end{equation} which can be improved to \begin{equation} \label{simplmappm'-3} c_1(n, n-m) + c_1(n, n') \leq \frac{1}{12}\cdot\frac{1}{3}+\frac{1}{12}\cdot\frac{1}{2} + O(\delta) \end{equation} unless $2(n-m) \approx n$ and $n' \approx 2n$. Now~\eqref{simplmappm'-3} contradicts~\eqref{simplmappm'} since by Proposition \ref{covar}(iii), (iv) $c_1(n, n'-m') \geq - O(\delta)$.
Hence we can assume that $2(n-m) \approx n$ and $n' \approx 2n$. But using $m \approx m'$ and Lemma~\ref{scale}(ii) this implies that $2(n'-m') \approx 3n$, so that by~\eqref{simplmappm'} and Proposition~\ref{covar}(iii) we obtain \[ c_1(n, n-m) + c_1(n, n') = \frac{1}{12} + c_1(n, n'-m') + O( \delta) = \frac{1}{12} + \frac{1}{12} \cdot \frac{1}{2\cdot 3} + O( \delta), \] contradicting~\eqref{simplmappm'-2}.
\begin{remark} Morally speaking, the ability to obtain a contradiction here reflects the fact that one cannot have an identity of the form \begin{equation}\label{mx}
\{ mx \} + \{ (n-m) x \} - \{ nx\} = \{ m'x\} + \{ (n'-m')x\} - \{n' x\} \end{equation} for almost all real numbers $x$ and some integers $1 \leq m \leq n/2$, $1 \leq m' \leq n'/2$ unless one has both $m=m'$ and $n=n'$ (this type of connection goes back to Landau \cite[p. 116]{landau-1}). This latter fact is easily established by inspecting the jump discontinuities of both sides of \eqref{mx}, but it is also possible to establish it by computing the covariances of both sides of \eqref{mx} with $\{ Nx\}$ for various choices of $N$, and the arguments above can be viewed as an adaptation of this latter method. \end{remark}
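One can also see the failure of the identity \eqref{mx} for mismatched parameters directly by machine: for $x$ avoiding the jump points, both sides take values in $\{0,1\}$, and for, say, $(m,n)=(1,3)$ versus $(m',n')=(1,4)$ they disagree on a set of positive measure. A Python sketch (illustrative only; the parameter choices and sample count are arbitrary):

```python
import random

def g(m, n, x):
    # The function {m x} + {(n - m) x} - {n x}; for generic x it equals 0 or 1.
    return (m * x) % 1.0 + ((n - m) * x) % 1.0 - (n * x) % 1.0

random.seed(0)
disagree = 0
for _ in range(10_000):
    x = random.random()
    if round(g(1, 3, x)) != round(g(1, 4, x)):
        disagree += 1
print(f"the two sides differ at {disagree} of 10000 sample points")
```

For instance, at $x = 0.3$ one has $\{x\}+\{2x\}-\{3x\} = 0$ while $\{x\}+\{3x\}-\{4x\} = 1$, and by local constancy the two sides disagree on a whole interval around this point.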
It remains to establish Proposition \ref{equid}. This will be done in the next section. \section{Equidistribution}\label{equid-sec}
In this section we prove Proposition \ref{equid}. Fix $\eps,A$. We may assume that $P$ is sufficiently large depending on $\eps,A$, as the claim is trivial otherwise. If we have $P^j \geq M \log^A P$ then we can replace in both parts of the proposition $\frac{M}{P^j}$ by $0$ with negligible error, so we may assume that either $M=0$ or $P^j < M \log^A P$. In either event we may thus assume that $j \leq \log^{1/2} P$. Next, by partitioning $I$ into at most $\log^{100} P$ intervals of length at most $P \log^{-100} P$ and using the triangle inequality, it suffices (after suitable adjustment of $P$, $A$) to assume that $I \subset [P, P + P \log^{-100} P]$. In particular we have \begin{equation}\label{pj}
P^{j-1} \leq t^{j-1} \leq 2 P^{j-1} \end{equation} for all $t \in I$.
Let us first reduce Proposition~\ref{equid}(ii) to Proposition \ref{equid}(i). We perform a Fourier expansion $$ W(x,y) = \sum_{n,m \in \Z} c_{n,m} e(nx + my)$$ where by integration by parts the Fourier coefficients $$ c_{n,m} = \int_{\R^2/\Z^2} W(x,y) e(-nx-my)\ dx dy$$ obey the bounds
$$ |c_{n,m}| \ll \|W\|_{C^3} (1 + |n| + |m|)^{-3}.$$
By the triangle inequality, the contribution of those frequencies $n,m$ with $|n| + |m| \geq \log^{2A} P$ is then acceptable. By a further application of the triangle inequality, Proposition~\ref{equid}(ii) follows from showing that $$ \sum_{p \in I} e\left( n \frac{N}{p} + m \frac{M}{p^j} \right) = \int_I e\left( n \frac{N}{t} + m \frac{M}{t^{j}}\right)\ \frac{dt}{\log t} + O_{\eps,A}( P\log^{-10A} P )$$
whenever $n,m$ are integers with $|n| + |m| \leq \log^{2A} P$. But this follows from Proposition \ref{equid}(i) by adjusting the values of $\varepsilon, A, M, N$ suitably.
The proof of part (i) will use the standard tools of Vaughan's identity and Vinogradov's exponential sum estimates. We state a suitable form of the latter tool here:
\begin{lemma}[Vinogradov's exponential sum estimate]\label{le_vin_expsum} Let $X\geq 2$, $F \geq X^4$, and $\alpha \geq 1$. Let $I\subset [X,2X]$ be an interval. Let $f(x)$ be a smooth function on $I$ satisfying for all $t\in I$ \begin{align}\label{vin24}
\alpha^{-r^3}F\leq \frac{t^r}{r!}|f^{(r)}(t)|\leq\alpha^{r^3}F \end{align} for all integers $1\leq r\leq 10\lceil \log F/(\log X)\rceil+1.$ Assume further that \begin{align}\label{vin23} (\log \alpha)\frac{(\log F)^2}{(\log X)^3}<10^{-3}. \end{align} Then we have \begin{align}\label{vin22} \sum_{n\in I}e(f(n))\ll \alpha X\exp(-2^{-18}(\log X)^3/(\log F)^2), \end{align} where the implied constant is absolute. \end{lemma}
\begin{proof} This is essentially \cite[Theorem 8.25]{ik} with minor modifications (the modification needed is that we only assume \eqref{vin24} for $r$ in a certain range, rather than for all integers $r\geq 1$).
Let $R:=10\lceil \log F/(\log X)\rceil$, and as in \cite[p. 217]{ik}, let \begin{align*} F_n(q):=\sum_{0\leq r\leq R}\alpha_r(n)q^r,\quad \alpha_r(n) \coloneqq \frac{f^{(r)}(n)}{r!}. \end{align*}
Let $S_f(I)$ denote the sum in \eqref{vin22}. By Taylor's formula, for any $q\geq 1$ we have \begin{align*}
S_f(I)=\sum_{n\in I}e(F_n(q))+O\left(q+Xq^{R+1}\frac{\max_{t\in I}|f^{(R+1)}(t)|}{(R+1)!}\right). \end{align*}
Let $\mathcal{Q}:=\{xy:1\leq x\leq V,1\leq y\leq V\}\cap \mathbb{N}$, where $\mathcal{Q}$ is interpreted as a multiset. Also let $Q=|\mathcal{Q}|=V^2$. Then \begin{align*}
S_f(I)=\sum_{n\in I}|\mathcal{Q}|^{-1}\sum_{q\in \mathcal{Q}}e(F_n(q))+O\left(Q+XQ^{R+1}\frac{\max_{t\in I}|f^{(R+1)}(t)|}{(R+1)!}\right). \end{align*} We take $V=X^{1/4}$, in which case by \eqref{vin24} the error term is \begin{equation} \label{eq:TaylorError} \begin{split} &\ll X^{1/2}+X \cdot X^{(R+1)/2}\alpha^{(R+1)^3}F/X^{R+1}\\ &\ll X^{1/2}+X^{-(R+1)/4}\alpha^{(R+1)^3} \cdot (F X^{1-(R+1)/4}). \end{split} \end{equation} The term in the parentheses is $\leq FX^{3/4} F^{-10/4} \leq 1$. Using also~\eqref{vin23} we see that~\eqref{eq:TaylorError} is $\ll X^{1/2}$, which is in particular smaller than the right-hand side of \eqref{vin22}. The sum $\sum_{q\in \mathcal{Q}}e(F_n(q))$ is precisely the one estimated in \cite[pp. 217--225]{ik}. The only assumption needed on $f$ in that argument is \eqref{vin24}, and the only restriction on $F$ and $X$ there is $F\geq X^4$. Hence the lemma follows from that analysis, applied verbatim. \end{proof}
We now apply this estimate to obtain an estimate for an exponential sum over integers.
\begin{proposition}[Exponential sums over integers]\label{expi} Let $\eps > 0$, $A \geq 1$, $X \geq 2$, $2 \leq j \ll \log^{1/2} X$, and let $N,M$ be real numbers with $N,M \ll \exp( O( \log^{3/2-\eps} X))$. Let $I$ be an interval in $[X,X+X\log^{-100} X]$. Then \begin{equation}\label{integer-sum} \sum_{n \in I} e\left( \frac{N}{n} + \frac{M}{n^j} \right) \ll_{\eps,A} X (1+F)^{-c} \log^{O(A)} X + X \log^{-A} X \end{equation} for some absolute constant $c>0$, where
$$ F \coloneqq \frac{|N|}{X} + \frac{|M|}{X^j}.$$
\end{proposition}
\begin{proof} We may assume without loss of generality that $A$ is sufficiently large, and $X$ is sufficiently large depending on $\eps,A$. By hypothesis we have $F \ll \exp( O( \log^{3/2-\eps} X))$. We may assume that $F \geq \log^{CA} X$ for a large absolute constant $C$, since the claim is trivial otherwise.
Let $f \colon I \to \R$ denote the phase function $$ f(t) \coloneqq \frac{N}{t} + \frac{M}{t^j}.$$ Then for any $r \geq 1$ and $t \in I$ we have \begin{equation} \label{eq:expint1}
\frac{t^r}{r!}|f^{(r)}(t)| = \left| \frac{N}{t} + \frac{M_r}{t^j} \right| \asymp X^{-1} |N + M_r / t^{j-1}| \end{equation} where $$ M_r \coloneqq \binom{r+j-1}{j-1} M.$$ Since $$ 1 \leq \binom{r+j-1}{j-1} \leq (r+j)^r = \exp(r\log(r+j)) = \exp(O( r^2 \log_2 X ) ) $$ we conclude that $$ M_r = \exp( O( r^2 \log_2 X ) ) M$$ and
$$ \frac{|N|}{X} + \frac{|M_r|}{X^j} = \exp( O( r^2 \log_2 X ) ) F.$$
If $|M_r| \leq |N| X^{j-1}/4$ then from the triangle inequality and \eqref{pj} we have $$
X^{-1} |N + M_r / t^{j-1}| \asymp X^{-1} |N| = \exp( O( r^2 \log_2 X ) ) F. $$
Consider then the case $|M_r| > |N| X^{j-1}/4$. We have the upper bound
$$ X^{-1} |N + M_r / t^{j-1}| \ll \frac{|M_r|}{X^j} \ll \exp( O( r^2 \log_2 X ) ) F $$ for all $t \in I$ from the triangle inequality. Furthermore, since the function $t \mapsto -1/t^{j-1}$ has derivative $\asymp j/X^j$ on $I$, we also have, for all $t$ outside of an interval of length $O( X \log^{-2A} X)$, the lower bound
$$ X^{-1} |N + M_r / t^{j-1}| \gg \frac{|M_r|}{X^j} \log^{-3A} X \gg \exp( O( r^2 \log_2 X ) ) F \log^{-3A} X.$$
If we set $\alpha \coloneqq \log^{4A} X$ and $A$ is sufficiently large, then we conclude from~\eqref{eq:expint1} and the bounds above that the estimate \eqref{vin24} holds for all $1 \leq r \leq \log X$ and all $t \in I$ outside the union of $O(\log X)$ intervals of length $O( X \log^{-2A} X)$. The contribution of these exceptional intervals to \eqref{integer-sum} is negligible, and removing them splits $I$ up into at most $O(\log X)$ subintervals, so by the triangle inequality it suffices to show that $$ \sum_{n \in I'} e\left( \frac{N}{n} + \frac{M}{n^j} \right) \ll_{\eps,A} X \log^{-2A} X $$ for any subinterval $I'$ with the property that \eqref{vin24} holds for all $t \in I'$ and $1 \leq r \leq \log X$. If $F \geq X^4$, we may apply Lemma \ref{le_vin_expsum} to conclude that $$ \sum_{n \in I'} e\left( \frac{N}{n} + \frac{M}{n^j} \right) \ll X \log^{4A} X \exp( - c \log^{2\eps} X )$$ for some absolute constant $c>0$, and the claim follows. If instead $F < X^4$, we can apply the Weyl inequality \cite[Theorem 8.4]{ik} with $k=5$ to conclude that $$ \sum_{n \in I'} e\left( \frac{N}{n} + \frac{M}{n^j} \right) \ll \alpha^{O(1)} ( F/X^5 + 1/F )^c X \log X$$ for some absolute constant $c>0$; since $F \geq \log^{C A} X$, we obtain the claim by taking $C$ large enough. \end{proof}
Now we prove Proposition~\ref{equid}(i). We may assume without loss of generality that $j \geq 2$, since for $j=1$ we can absorb the $M$ terms into the $N$ term (and add a dummy term with $M=0$ and $j=2$, say). By summation by parts (see e.g. \cite[Lemma 2.2]{mrt-div}), and adjusting $A$ as necessary, it suffices to show that $$ \sum_{p \in I} e\left( \frac{N}{p} + \frac{M}{p^j} \right) \log p = \int_I e\left( \frac{N}{t} + \frac{M}{t^{j}}\right)\ dt + O_{\eps,A}( P\log^{-10A} P )$$ for all intervals $I \subset [P, P + P \log^{-100} P]$. This is equivalent to $$ \sum_{n \in I} e\left( \frac{N}{n} + \frac{M}{n^j} \right) \Lambda(n) = \int_I e\left( \frac{N}{t} + \frac{M}{t^{j}}\right)\ dt + O_{\eps,A}( P\log^{-10A} P ),$$ where $\Lambda$ is the von Mangoldt function, since the contribution of the prime powers is negligible. We introduce the quantity
$$ F \coloneqq \frac{|N|}{P} + \frac{|M|}{P^j}.$$ If $F \leq \log^{CA} P$ for some large absolute constant $C>0$, then the total variation of the phase $t \mapsto \frac{N}{t} + \frac{M}{t^{j}}$ is $O( \log^{CA} P)$, and the claim readily follows from a further summation by parts (see e.g. \cite[Lemma 2.2]{mrt-div}) and the prime number theorem (with classical error term). Thus we may assume that \begin{equation}\label{logcp}
F > \log^{CA} P. \end{equation} In this case, a change of variables $t = P/s$ gives $$ \int_I e\left( \frac{N}{t} + \frac{M}{t^{j}}\right)\ dt = -P \int_{P/I} e\left( \frac{N}{P} s + \frac{M}{P^{j}} s^j \right)\ \frac{ds}{s^2}.$$ The derivative of the phase here is $N/P + j s^{j-1} M/P^j$, which, once $C$ is large enough, has absolute value at least $\log^{10 A} P$ for all $s \in P/I$ apart from an interval of length at most $O(\log^{-10 A} P)$. Hence by integration by parts we get that \[ \int_I e\left( \frac{N}{t} + \frac{M}{t^{j}}\right)\ dt \ll P \log^{-10A} P \] if $C$ is large enough, so it remains to establish the bound $$ \sum_{n \in I} e\left( \frac{N}{n} + \frac{M}{n^j} \right) \Lambda(n) \ll P \log^{-10 A} P$$ under the hypothesis \eqref{logcp}.
By Vaughan's identity in the form of \cite[Proposition 13.4]{ik} (with $y=z=P^{1/3}$), followed by a shorter-than-dyadic decomposition, we can write \begin{align*} \Lambda(n)=\sum_{r\leq R}(\alpha_r*1(n)+\alpha_r'*\log(n)+\beta_r*\gamma_r(n)) \end{align*} for $n \in [P,2P]$, where $*$ denotes Dirichlet convolution, and \begin{align*} R&\ll \log^{O(1)} P,\\
|\alpha_r(n)|, |\alpha'_r(n)|, |\beta_r(n)|, |\gamma_r(n)|&\ll \log P,\\ \supp(\alpha_r), \supp(\alpha'_r) &\subset [M_r,(1 + \log^{-100} P)M_r], \\ \supp(\beta_r) &\subset [K_r,(1 + \log^{-100} P)K_r], \\ \supp(\gamma_r) &\subset [N_r,(1 + \log^{-100} P)N_r],\\ 1 \leq M_r &\ll P^{2/3},\\ P^{1/3} \ll K_r, N_r &\ll P^{2/3} \end{align*} (the bound for the coefficients arising from Vaughan's identity is $\ll \log P$ since $1 \ast \Lambda = \log$). By the triangle inequality, it thus suffices to establish the Type I estimates \begin{equation}\label{type-i} \sum_{n \in I} e\left( \frac{N}{n} + \frac{M}{n^j} \right) (\alpha_r * 1)(n) \ll_{\eps,A} P \log^{-11 A} P \end{equation} and \begin{equation}\label{type-i-alt} \sum_{n \in I} e\left( \frac{N}{n} + \frac{M}{n^j} \right) (\alpha'_r * \log)(n) \ll_{\eps,A} P \log^{-11 A + 1} P \end{equation} as well as the Type II estimates \begin{equation}\label{type-ii} \sum_{n \in I} e\left( \frac{N}{n} + \frac{M}{n^j} \right) (\beta_r * \gamma_r)(n) \ll_{\eps,A} P \log^{-11 A} P \end{equation} for all $1 \leq r \leq R$ and $I \subset [P, P + P \log^{-100} P]$. The second Type I estimate \eqref{type-i-alt} follows from the first Type I estimate \eqref{type-i} (replacing $\alpha_r$ with $\alpha'_r$) and a summation by parts (see e.g. \cite[Lemma 2.2]{mrt-div}), so it suffices to establish \eqref{type-i} and \eqref{type-ii}.
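As a sanity check on the coefficient bound just quoted, the identity $1 \ast \Lambda = \log$, i.e. $\sum_{d \mid n} \Lambda(d) = \log n$, is easy to verify by machine for small $n$ (an illustrative Python sketch, not part of the argument):

```python
import math

def mangoldt(n):
    # Von Mangoldt function: log p if n is a power of a prime p, else 0
    # (computed by naive trial division; fine for small n).
    if n < 2:
        return 0.0
    p = next(q for q in range(2, n + 1) if n % q == 0)  # least prime factor
    while n % p == 0:
        n //= p
    return math.log(p) if n == 1 else 0.0

for n in (12, 30, 97, 360):
    s = sum(mangoldt(d) for d in range(1, n + 1) if n % d == 0)
    print(n, s, math.log(n))
```

For example, for $n = 360 = 2^3 \cdot 3^2 \cdot 5$ the divisor sum collects $3\log 2 + 2\log 3 + \log 5 = \log 360$.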
We begin with \eqref{type-i}. By the triangle inequality, the left-hand side is bounded by
$$ \ll \log P \sum_{m \in [M_r,(1 + \log^{-100} P) M_r]} \left|\sum_{n \in \frac{1}{m} \cdot I} e\left( \frac{N}{mn} + \frac{M}{m^jn^j} \right)\right|.$$ Applying Proposition \ref{expi} with $X=P/m$ and $N/m$ and $M/m^j$ in place of $N$ and $M$, we can bound this by $$ \ll_{\eps,A} P \log P \left( (1 + F)^{-c} \log^{O(A)} P + \log^{-20 A} P \right)$$ for some absolute constant $c>0$, and the claim now follows from \eqref{logcp}.
Now we establish \eqref{type-ii}. We can assume that $K_r N_r \asymp P$, as the sum vanishes otherwise. By the triangle inequality, the left-hand side is bounded by
$$ \ll \log P \sum_{m \in [K_r, (1 + \log^{-100} P) K_r]} \left|\sum_{n \in \frac{1}{m} \cdot I} \gamma_r(n) e\left( \frac{N}{mn} + \frac{M}{m^jn^j} \right)\right|.$$ By Cauchy--Schwarz it suffices to show that
$$ \sum_{m \in [K_r, (1 + \log^{-100} P) K_r]} \left|\sum_{n \in \frac{1}{m} \cdot I} \gamma_r(n) e\left( \frac{N}{mn} + \frac{M}{m^jn^j} \right)\right|^2 \ll K_r N^2_r \log^{-30 A} P$$ (say). Rearranging, it suffices to show that \begin{equation} \label{eq:equidfinclaim} \sum_{n,n' \in [N_r, (1 + \log^{-100} P) N_r]} \gamma_r(n) \overline{\gamma_r(n')} X_{n,n'} \ll K_r N^2_r \log^{-30 A} P \end{equation} where $$ X_{n,n'} \coloneqq \sum_{m \in [K_r,(1 + \log^{-100} P) K_r] \cap \frac{1}{n} \cdot I \cap \frac{1}{n'} \cdot I} e\left( \frac{N(n'-n)}{nn'm} + \frac{M((n')^j-n^j)}{n^j (n')^j m^j} \right).$$ By Proposition \ref{expi}, we have
$$ X_{n,n'} \ll_{\eps,A} K_r \left( \left(1 + \frac{|n'-n|}{N_r} F\right)^{-c} \log^{O(A)} P + \log^{-40A} P \right)$$ for some absolute constant $0 < c < 1$. Bounding $\gamma_r(n) \overline{\gamma_r(n')} \ll \log^2 P$ and noting that
$$ \sum_{n \in [N_r, (1 + \log^{-100} P) N_r]} \left(1 + \frac{|n'-n|}{N_r} F\right)^{-c} \ll N_r F^{-c} $$ for all $n' \in [N_r, (1 + \log^{-100} P) N_r]$, we obtain the claim~\eqref{eq:equidfinclaim} from \eqref{logcp}. This completes the proof of Proposition \ref{equid}. \section{Multiplicity of the falling factorial}\label{falling-sec}
In this section we establish Theorem \ref{main-falling}. We first observe that if $1 \leq m \leq n$ solves \eqref{falling} for some sufficiently large $t$, then $$ t = (n)_m \geq (m)_m = m! \gg (m/e)^m $$ by Stirling's formula. Hence we have an analogue of \eqref{m-bound}: \begin{equation}\label{m-bound-falling}
m \ll \frac{\log t}{\log_2 t} \end{equation} Next, since $$ (n-m)^m < (n)_m \leq n^m$$ we have \begin{equation}\label{tam}
t^{1/m} \leq n < t^{1/m} + m \end{equation} and we obtain an analogue \begin{equation}\label{n-form-falling}
n \asymp t^{1/m} = \exp\left( \frac{\log t}{m} \right) \end{equation} of \eqref{n-form}, \eqref{n-form-alt}.
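For illustration (and not needed for the proof), the multiplicity of representations and the bracketing \eqref{tam} can be checked by brute force in a short script; the test value $t=720$ and the search range are arbitrary choices for this sketch.

```python
def representations(t, n_max=2000):
    """All pairs (n, m), 1 <= m <= n <= n_max, with (n)_m == t (brute force)."""
    reps = []
    for n in range(1, n_max + 1):
        prod = 1
        for m in range(1, n + 1):
            prod *= n - m + 1          # prod = (n)_m
            if prod == t:
                reps.append((n, m))
            if prod > t:
                break
    return reps

reps = representations(720)
# finds (6, 5), (6, 6), (10, 3) and the trivial (720, 1)
for n, m in reps:
    # the bracketing (tam): t^{1/m} <= n < t^{1/m} + m
    assert 720 ** (1 / m) <= n + 1e-9
    assert n < 720 ** (1 / m) + m
```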
Next, we obtain the following analogue of Proposition \ref{distance}.
\begin{proposition}[Distance estimate]\label{distance-falling} Suppose we have two solutions $(n,m), (n',m')$ to \eqref{falling} in the region $\{ (n,m) \in \N^2: 1 \leq m \leq n \}$. Then one has \begin{equation}\label{malt}
m' - m \ll \log(n+n'). \end{equation} Furthermore, if \begin{equation}\label{mam}
\exp( \log^{2/3+\eps}(n+n') ) \leq m,m' \leq (n+n')^{2/3} \end{equation} for some $\eps>0$, then we additionally have \begin{equation}\label{nan}
n' - n \ll_{A,\eps} \frac{m+m'}{\log^A(m+m')} \end{equation} for any $A>0$. \end{proposition}
\begin{proof} We begin with \eqref{malt}. We follow the arguments from \cite[Proof of Theorem 4]{aeh}. Taking $2$-valuations $v_2$ of both sides of \eqref{falling} and using \eqref{legendre} we have $$ \sum_{j=1}^\infty \left(\left\lfloor \frac{n}{2^j}\right\rfloor - \left\lfloor \frac{n-m}{2^j}\right\rfloor\right) = \sum_{j=1}^\infty \left(\left\lfloor \frac{n'}{2^j}\right\rfloor - \left\lfloor \frac{n'-m'}{2^j}\right\rfloor\right).$$ The summands here vanish unless $j \leq \log(n+n')$. Writing $\lfloor x \rfloor = x + O(1)$, we conclude that $$ \sum_{1 \leq j \leq \log(n+n')} \frac{m}{2^j} + O(\log (n+n')) = \sum_{1 \leq j \leq \log(n+n')} \frac{m'}{2^j} + O(\log (n+n')) $$ and \eqref{malt} follows.
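The valuation computations above are easy to check numerically; the following sketch verifies Legendre's formula and the resulting expression for $v_2((n)_m)$ on arbitrarily chosen small inputs.

```python
from math import factorial

def v2(x):
    """2-adic valuation of a positive integer."""
    v = 0
    while x % 2 == 0:
        v += 1
        x //= 2
    return v

def v2_factorial(n):
    """Legendre's formula: v_2(n!) = sum_{j>=1} floor(n / 2^j)."""
    s, p = 0, 2
    while p <= n:
        s += n // p
        p *= 2
    return s

for n in range(1, 200):
    assert v2_factorial(n) == v2(factorial(n))
# hence v_2((n)_m) = v_2(n!/(n-m)!) = v_2(n!) - v_2((n-m)!)
n, m = 100, 37
assert v2(factorial(n) // factorial(n - m)) == v2_factorial(n) - v2_factorial(n - m)
```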
Now we prove \eqref{nan}. Fix $A,\eps>0$. We may assume without loss of generality that $m' < m$, so that $n' > n$ by \eqref{falling}. We may also assume $t$ is sufficiently large depending on $A, \eps$, as the claim is trivial otherwise; from \eqref{mam} this also implies that $m,m',n,n'$ are sufficiently large depending on $A,\eps$. Henceforth all implied constants are permitted to depend on $A,\eps$. By \eqref{mam} we have $$ \log^{2/3+\eps} n \leq \log m$$ while from \eqref{n-form-falling} we have $\log n \asymp \frac{\log t}{m}$. From this and \eqref{m-bound-falling} we have \begin{equation} \label{eq:msize} m \asymp \frac{\log t}{\log_2 t} \end{equation} and then $$ \log n \ll \log^{\frac{1}{2/3 + \eps}}_2 t.$$ Similarly for $m',n'$. From \eqref{malt} we conclude that \begin{equation}\label{mmp}
m-m' \ll \log^{\frac{1}{2/3 + \eps}}_2 t \ll \log^{\frac{1}{2/3+\eps}} m. \end{equation} In particular $m \asymp m'$ and, combining~\eqref{n-form-falling} with~\eqref{mmp} and~\eqref{eq:msize}, also $n \asymp n' t^{1/m - 1/m'} \asymp n'$. Hence from \eqref{mam} we see that \begin{equation}\label{nbig}
n, n' \gg m^{3/2}. \end{equation} Also we have $$\frac{\log t}{m'} = \frac{\log t}{m} + O\left( \frac{\log t \log^{\frac{1}{2/3+\eps}} m}{m^2} \right) = \frac{\log t}{m} + O( m^{-1/2} )$$ (say), hence on exponentiating and using \eqref{tam}, \eqref{nbig} \begin{equation}\label{stam}
n' = \exp\left(\frac{\log t}{m'}\right) + O(m) = n + O( m^{-1/2} n ). \end{equation}
Suppose that we could find a prime $p > m$ obeying the inequalities \begin{equation}\label{obey} \max\left(1 - \left\{ \frac{n'-n}{p}\right\}, 1 - \frac{m}{p}\right) < \left\{ \frac{n-m}{p} \right\} < 1; \quad \left\{ \frac{n'-n}{p}\right\} < 1 - \frac{m}{p}. \end{equation} These inequalities imply in particular that \[ \left\{\frac{n-m}{p}\right\} - 1 + \frac{m}{p} \in [0, 1) \text{ and } \left\{\frac{n-m}{p}\right\} + \left\{ \frac{n'-n}{p} \right\} - 1 + \frac{m}{p} \in [0, 1), \] so that these quantities respectively equal $\{\frac{n}{p}\}$ and $\{\frac{n'}{p}\}$. Consequently, if~\eqref{obey} hold, then we would have \begin{equation} \label{eq:obeycon1} \left \{ \frac{n}{p}\right\} = \left\{\frac{n-m}{p}\right\} - 1 + \frac{m}{p} < \frac{m}{p} \end{equation} and (since $m' < m$) \begin{equation} \label{eq:obeycon2} \left\{ \frac{n'}{p}\right\} = \left\{\frac{n-m}{p}\right\} + \left\{ \frac{n'-n}{p} \right\} - 1 + \frac{m}{p} \geq \frac{m}{p} \geq \frac{m'}{p}. \end{equation} Now~\eqref{eq:obeycon1} implies that $p$ divides $(n)_m$, while~\eqref{eq:obeycon2} implies that $p$ does not divide $(n')_{m'}$. This contradicts the assumption $(n)_m = t = (n')_{m'}$. Thus there cannot be any prime $p > m$ obeying \eqref{obey}.
Let $w_1 \colon \R \to [0,1]$ be a suitable smooth $\Z$-periodic function supported on the region $$ \{ x \in \R: \{ x\} \in (1 - \log^{-2A} m,1)\}$$
chosen so that $\int_0^1 w_1 \gg \log^{-2A}m$ and $\|w_1\|_{C^3} \ll \log^{6A} m$, and let $w_2 \colon \R \to [0,1]$ similarly be a smooth $\Z$-periodic function supported on the region $$ \{ y \in \R: \{y\} \in (\log^{-2A} m, 1/2) \}$$
chosen so that $w_2(y) = 1$ when $\{y\} \in [2\log^{-2A} m, 1/4]$ and $\|w_2\|_{C^3} \ll \log^{6A}m$. Let $\p$ be a prime drawn uniformly from all the primes in $[2m, 100m]$. As $\p$ does not obey \eqref{obey}, we have $$ \E w_1\left(\frac{n-m}{\p}\right) w_2\left(\frac{n'-n}{\p}\right) = 0$$ and hence by Proposition \ref{equid} (and dyadic decomposition) $$ \int_2^{100} w_1\left( \frac{n-m}{tm}\right) w_2\left(\frac{n'-n}{tm} \right)\ dt \ll \log^{-100 A} m,$$ or on changing variables $t=1/s$ \begin{equation}\label{w1w2} \int_{1/100}^{1/2} w_1\left( \frac{n-m}{m}s\right) w_2\left(\frac{n'-n}{m}s \right)\ ds \ll \log^{-100 A} m. \end{equation} On the other hand, by \eqref{stam}, \eqref{nbig} we have \begin{equation} \label{w1large} \frac{n-m}{m} \gg \frac{n}{m} \gg m^{1/2} \frac{n'-n}{m} + m^{1/2}. \end{equation} We perform a Fourier expansion $$ w_1(x) = \sum_{\ell \in \Z} c_{\ell} e(\ell x), $$ where by integration by parts the Fourier coefficients obey the bounds
$$ |c_{\ell}| \ll (1 + |\ell|)^{-3} \log^{6A}m. $$ Thus~\eqref{w1w2} can then be rewritten as \begin{equation}\label{w1w2fourier} \sum_{\ell \in \Z} c_{\ell} \int_{1/100}^{1/2} w_2\left(\frac{n'-n}{m}s \right) e\left(\frac{n-m}{m} \ell s\right)\ ds \ll \log^{-100A}m. \end{equation} By~\eqref{w1large} and integration by parts, one readily establishes the bound
$$ \int_{1/100}^{1/2} w_2\left(\frac{n'-n}{m}s \right) e\left(\frac{n-m}{m} \ell s\right)\ ds \ll \frac{\log^{6A} m}{|\ell| m^{1/2}} $$ for $\ell \neq 0$. Thus the total contribution to the left-hand side of~\eqref{w1w2fourier} from the terms with $\ell \neq 0$ is negligible, and hence $$ c_0 \int_{1/100}^{1/2} w_2\left(\frac{n'-n}{m}s \right)\ ds \ll \log^{-100A}m. $$ Since $c_0 = \int_0^1 w_1 \gg \log^{-2A}m$ and $w_2$ equals $1$ on $[2\log^{-2A}m, 1/4]$, we have \begin{equation} \label{eq:fupper} f\left( \frac{n'-n}{m} \right) \ll \log^{-98 A} m \end{equation} where $$ f(\theta) \coloneqq \int_{1/100}^{1/2} 1_{2 \log^{-2A} m \leq \{ \theta s\} \leq 1/4} \ ds.$$ However, direct calculation shows that when $\theta \geq 3$, we have $$ f(\theta) \geq \sum_{\frac{\theta}{16} \leq n \leq \frac{\theta}{2}-\frac{1}{4}} \int_\R 1_{n + 1/100 \leq \theta s \leq n+1/4}\ ds \gg \theta \cdot \theta^{-1} = 1,$$ when $1/2 < \theta < 3$, we have $$ f(\theta) \geq \int_{\frac{1}{30\theta}}^{\frac{1}{20\theta}}\ ds \asymp 1,$$ and, when $8\log^{-A} m \leq \theta \leq 1/2$, we have $$ f(\theta) \geq \int_{1/4}^{1/2}\ ds \asymp 1.$$ Hence~\eqref{eq:fupper} can only hold if $$ \frac{n'-n}{m} \ll \log^{-A} m,$$ giving the claim \eqref{nan}. \end{proof}
Now we adapt the analysis from Section \ref{analytic-sec}. We extend the falling factorial $(n)_m$ to real $n \geq m \geq 0$ by the formula $$ (n)_m \coloneqq \frac{\Gamma(n+1)}{\Gamma(n-m+1)}.$$ From the increasing nature of the digamma function $\psi$ we see that for fixed $m$, $(n)_m$ increases from $\Gamma(m+2)$ when $n$ goes from $m+1$ to infinity. Applying the inverse function theorem, we conclude that for any sufficiently large $t$ there is a unique smooth function $g_t \colon \{ m > 0: \Gamma(m+2) \leq t \} \to \R$ such that for any $m>0$ with $\Gamma(m+2) \leq t$, one has $g_t(m) \geq m$ and \begin{equation}\label{gtm} (g_t(m))_m = t. \end{equation} Indeed, one could simply set $g_t(m) \coloneqq f_{t/\Gamma(m+1)}(m)$, where $f_t$ is the function studied in Section \ref{analytic-sec}.
We have an analogue of Proposition \ref{derivi}:
\begin{proposition}[Estimates on the first few derivatives]\label{derivi-falling} Let $C > 1$, and let $t,m$ be sufficiently large depending on $C$ with $\Gamma(m+2) \leq t$. Then \begin{equation}\label{fatm-falling}
g_t(m) \asymp t^{1/m}. \end{equation} In the range $m \leq g_t(m)/2$, we have \begin{equation}\label{one-falling} -g'_t(m) \asymp g_t(m) \frac{\log t}{m^2} \end{equation} and in the range $m \leq g_t(m) - C \log^2 g_t(m)$, one has \begin{equation}\label{ten-falling} 0 < g''_t(m) \ll g_t(m) \left(\frac{\log t}{m^2}\right)^2 + C^{-1} \log^{-3} m. \end{equation} \end{proposition}
\begin{proof} Write $n=g_t(m) \geq m$. First note that~\eqref{fatm-falling} is simply~\eqref{n-form-falling}. Taking logarithms in \eqref{gtm} we have \begin{equation}\label{logf-falling}
\log \Gamma(g_t(m)+1) - \log \Gamma(g_t(m)-m+1) = \log t. \end{equation}
If we differentiate \eqref{logf-falling} we obtain \begin{equation}\label{deriv-falling}
g'_t(m) \psi( g_t(m)+1) - (g'_t(m)-1) \psi(g_t(m)-m+1) = 0. \end{equation} In particular we obtain the first derivative formula \begin{equation}\label{form-falling} g'_t(m) = \frac{- \psi(n-m+1)}{\psi(n+1) - \psi(n - m + 1)}. \end{equation} In the regime $m \leq n/2$ we can then obtain \eqref{one-falling} from \eqref{psini}, \eqref{psi-1}, \eqref{fatm-falling}.
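As a numerical sanity check of \eqref{form-falling} (not part of the proof), one can compute $g_t$ by bisection using `math.lgamma`, approximate the digamma function $\psi$ by a central difference of `lgamma`, and compare the closed form against a finite-difference derivative; the test point $t=10^{30}$, $m=10$ and all step sizes and tolerances below are arbitrary choices.

```python
from math import lgamma, log

def log_falling(n, m):
    # log (n)_m = log Gamma(n+1) - log Gamma(n-m+1), for real n >= m >= 0
    return lgamma(n + 1) - lgamma(n - m + 1)

def g(log_t, m, iters=100):
    # bisection for the unique n >= m with log (n)_m = log t
    lo, hi = m, m + 1.0
    while log_falling(hi, m) < log_t:   # bracket the root
        hi *= 2
    for _ in range(iters):
        mid = (lo + hi) / 2
        if log_falling(mid, m) < log_t:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def psi(x, h=1e-5):
    # digamma, approximated by a central difference of lgamma
    return (lgamma(x + h) - lgamma(x - h)) / (2 * h)

log_t, m = 30 * log(10), 10.0            # t = 10^30, an arbitrary test point
n = g(log_t, m)
h = 1e-4
fd = (g(log_t, m + h) - g(log_t, m - h)) / (2 * h)    # finite-difference g'_t(m)
cf = -psi(n - m + 1) / (psi(n + 1) - psi(n - m + 1))  # closed form (form-falling)
assert abs(fd - cf) / abs(cf) < 1e-3
assert cf < 0                            # g_t is decreasing in m, cf. (one-falling)
```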
Differentiating \eqref{deriv-falling} again, we conclude $$
g''_t(m) \psi( n+1) + (g'_t(m))^2 \psi'( n+1) - g''_t(m) \psi(n-m+1) -(g'_t(m)-1)^2 \psi'(n-m+1) = 0 $$ which we can rearrange using \eqref{form-falling} as \begin{equation} \label{eq:g''tform} \begin{split} g''_t(m) (\psi(n+1) - \psi(n-m+1))^3 &= \psi(n+1)^2 \psi'(n-m+1)\\ &\quad - \psi(n-m+1)^2 \psi'(n+1). \end{split} \end{equation} Suppose first that $m \leq n/2$. Then \eqref{psini} applies, and it suffices to show that $$ \psi(n+1)^2 \psi'(n-m+1) - \psi(n-m+1)^2 \psi'(n+1) \asymp \left( \frac{m}{n} \right)^3 n \left(\frac{\log t}{m^2}\right)^2.$$ By \eqref{fatm-falling} the right-hand side is $\asymp \frac{m \log^2 n}{n^2}$. On the other hand, from the mean value theorem and \eqref{psi-1}, \eqref{psi-2}, \eqref{psi-3} we have $$ 0 < \psi(n+1)^2 (\psi'(n-m+1) - \psi'(n+1)) \asymp \frac{m \log^2 n}{n^2}$$ and $$ 0 < (\psi(n+1)^2 - \psi(n-m+1)^2) \psi'(n+1) \ll \frac{m \log n}{n^2}$$ giving the claim.
Now suppose that $n/2 \leq m \leq n - C \log^2 n$. From \eqref{psi-1}, \eqref{psi-2} we have \begin{align*} \psi(n+1) - \psi(n-m+1) &\asymp \log \frac{n}{n-m} \\ \psi(n+1)^2 (\psi'(n-m+1) - \psi'(n+1)) &\asymp \frac{\log^2 n}{n-m} \\ 0 < (\psi(n+1)^2 - \psi(n-m+1)^2) \psi'(n+1) &\ll \frac{\log^2 n}{n} \end{align*} and hence by~\eqref{eq:g''tform} $$ g_t''(m) \asymp \frac{\log^2 n}{(n-m) \log^3 \frac{n}{n-m}}.$$ Since $\frac{n}{2} \geq n-m \geq C \log^2 n$, we have $$ (n-m) \log^3 \frac{n}{n-m} \gg C \log^5 n$$ (as can be seen by checking the cases $n-m \leq \sqrt{n}$ and $n-m > \sqrt{n}$ separately), and the claim follows. \end{proof}
Now we can establish Theorem \ref{main-falling}. Let $C>0$ be a large absolute constant, let $\eps>0$, and suppose that $t$ is sufficiently large depending on $\eps,C$. Let $(n,m)$ be the integer solution to \eqref{falling} in the region $\exp( \log^{2/3 + \eps} n ) \leq m \leq n- 1$ with a maximal value of $m$; we may assume that such a solution exists, since we are done otherwise. If $(n',m')$ is any other solution in this region, then $m'<m$ and $n < n'$. Note that $n,n',m,m'$ are sufficiently large depending on $\eps,C$. From Proposition \ref{distance-falling} and \eqref{n-form-falling} we have $$ m - m' \ll \log n' \ll \frac{\log t}{m'}$$ and from~\eqref{n-form-falling} and~\eqref{m-bound-falling} $$m' \asymp \frac{\log t}{\log n'} \geq \frac{\log t}{\log^{\frac{1}{2/3+\eps}} m'} \gg \frac{\log t}{\log^{\frac{1}{2/3+\eps}}_2 t}$$ and thus $m' \asymp m$ and $\frac{\log t}{m} = \frac{\log t}{m'} + O(1)$. Hence $n \asymp n'$ and \begin{equation}\label{mamn}
m - m' \ll \log n. \end{equation}
First suppose that $m \leq n^{1/2} \log^{10} n$. Here we will exploit the fact that $n$ grows rapidly as $m$ decreases. From Proposition \ref{distance-falling} we have $$ n' - n \ll_{\eps} \frac{m}{\log^{200} m} \ll \frac{m}{\log^{100} n}.$$ On the other hand, from \eqref{one-falling} and the mean value theorem we have $$ n' - n = g_t(m') - g_t(m) \gg \frac{n \log t}{m^2} (m-m') \geq \frac{n}{m}$$ thanks to \eqref{m-bound-falling} and the trivial bound $m-m' \geq 1$. Thus we have $$ \frac{n}{m} \ll \frac{m}{\log^{100} n}$$ but this contradicts the hypothesis $m \leq n^{1/2} \log^{10} n$.
Now suppose we are in the regime $$ n^{1/2} \log^{10} n < m \leq n - C \log^2 n.$$ Here we will take advantage of the convexity properties of $g_t$. From \eqref{mamn}, $m'$ lies in the interval $[m - O(\log n), m]$. By \eqref{fatm-falling}, for all $x$ in this interval, we have $$ g_t(x) \asymp t^{1/x} \asymp t^{1/m} \asymp n$$ and by \eqref{ten-falling}, we have \begin{align*}
0 < g''_t(x) &\ll g_t(x) \left(\frac{\log t}{x^2}\right)^2 + C^{-1} \log^{-3} x \\ &\ll n \left(\frac{\log t}{m^2}\right)^2 + C^{-1} \log^{-3} m \\ &\ll n \left(\frac{\log n}{m}\right)^2 + C^{-1} \log^{-3} n \\ &\ll C^{-1} \log^{-3} n \end{align*} since $m > n^{1/2} \log^{10} n$. Applying Lemma \ref{integ} with $k = 2$, we see (for $C$ large enough) that there are at most two integers $m'$ in this interval with $g_t(m')$ an integer, and Theorem \ref{main-falling} follows in this case.
It remains to handle the case \begin{equation}\label{nmc}
n - C \log^2 n < m \leq n-1. \end{equation} Recall from~\eqref{mamn} that $m'$ lies in the interval $[m - O(\log n), m]$. From \eqref{n-form-falling}, \eqref{nmc} we have $$ m \asymp n \asymp \frac{\log t}{\log_2 t}$$ so $m' = m - O(\log_2 t)$. From \eqref{n-form-falling} again we thus also have $$ m' \asymp n' \asymp \frac{\log t}{\log_2 t}.$$ From \eqref{falling} we have $$ \frac{n'}{n'-m'} \frac{n'-1}{n'-1-m'} \dots \frac{n+1}{n+1-m'} = (n-m') \dots (n-m+1).$$ The right-hand side is at most $\exp(O( \log_2 t \log_3 t) )$. This implies that $n' - n \ll \log_3 t$, since otherwise the left hand side would be, for any $C \geq 1$, \[ \gg \left(\frac{n}{n-m'+1+ C \log_3 t}\right)^{C \log_3 t} \gg \exp\left(\frac{C}{2} \log_3 t \log_2 t\right) \] which contradicts the bound for the right hand side when $C$ is sufficiently large.
In particular we have from the triangle inequality that $$ n-m, n' - m' \ll C \log^2_2 t.$$ Making the change of variables $\ell := n-m$, it now suffices to show that there are at most two integer solutions to the equation \begin{equation}\label{l-eq} (n)_{n-\ell} = t \end{equation} in the regime $1 \leq \ell \ll C \log^2_2 t$. We write this equation \eqref{l-eq} as $$ n! = t \ell!$$ or equivalently $$ n = h_t(\ell)$$ where $h_t(x) \coloneqq \Gamma^{-1}( t \Gamma(x+1 ) ) - 1 $, and $\Gamma^{-1} \colon [1,+\infty) \to [2,+\infty)$ is the inverse of the gamma function. Here we will exploit the very slowly varying nature of $h_t$. From Stirling's formula we have $$ h_t(x) \asymp \frac{\log t}{\log_2 t}$$ whenever $1 \leq x \ll C \log^2_2 t$. Taking the logarithmic derivative of the equation $$ \Gamma(h_t(x) + 1) = t \Gamma(x+1)$$ we have $$ h'_t(x) \psi(h_t(x)+1) = \psi(x+1).$$ Hence by \eqref{psi-1} $$ h'_t(x) \asymp \frac{\log x}{\log h_t(x)} \ll \frac{\log_3 t}{\log_2 t}$$ in the regime $1 \leq x \ll C \log^2_2 t$. In particular, for two solutions $(n,\ell), (n',\ell')$ to \eqref{l-eq} in this regime we have \begin{equation}\label{nnp}
n-n' \ll \frac{\log_3 t}{\log_2 t} |\ell-\ell'|. \end{equation}
For fixed $n$ there is at most one $\ell \geq 1$ solving \eqref{l-eq}. We conclude that for two distinct solutions $(n,\ell), (n',\ell')$ to \eqref{l-eq} in this regime, we have $|n-n'| \geq 1$, and hence the separation
$$ |\ell-\ell'| \gg \frac{\log_2 t}{\log_3 t}.$$
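The uniqueness claim rests on the strict monotonicity of $\ell \mapsto n!/\ell!$, which is immediate to verify numerically; the value $n=30$ below is an arbitrary choice.

```python
from math import factorial

n = 30                                   # arbitrary test size
t_values = [factorial(n) // factorial(l) for l in range(1, n + 1)]
# l -> n!/l! is strictly decreasing, so each t is attained by at most one l
assert all(a > b for a, b in zip(t_values, t_values[1:]))
assert len(set(t_values)) == len(t_values)
```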
Now suppose we have three solutions $(n_1,\ell_1), (n_2,\ell_2), (n_3,\ell_3)$ to \eqref{l-eq} in this regime. We can order $\ell_1 < \ell_2 < \ell_3$, so that $n_1 < n_2 < n_3$. From the preceding discussion we have $$ \frac{\log_2 t}{\log_3 t} \ll \ell_2 - \ell_1, \ell_3 - \ell_2 \ll C \log^2_2 t$$ and $$ 1 \leq n_2-n_1, n_3 - n_2 \ll C \log_2 t \log_3 t.$$ If $2^j$ is a power of $2$ that divides an integer in $(n_1,n_2]$ as well as an integer in $(n_2,n_3]$, then we must therefore have $2^j \ll C \log_2 t \log_3 t$, so that $j \ll \log_3 t$. Thus, there must exist $i=1,2$ such that the interval $(n_i,n_{i+1}]$ only contains multiples of $2^j$ when $j \ll \log_3 t$. Fix this $i$. Taking $2$-adic valuations of \eqref{l-eq} using \eqref{legendre} we have $$ \sum_{j=1}^{\infty} \left\lfloor \frac{n_i}{2^j} \right \rfloor = v_2(t) + \sum_{j=1}^\infty \left\lfloor \frac{\ell_i}{2^j} \right \rfloor $$ and $$ \sum_{j=1}^{\infty} \left\lfloor \frac{n_{i+1}}{2^j} \right \rfloor = v_2(t) + \sum_{j=1}^\infty \left\lfloor \frac{\ell_{i+1}}{2^j} \right \rfloor $$ and thus \begin{equation} \label{eq:2adiccomp} \sum_{j=1}^{\infty} \left(\left\lfloor \frac{n_{i+1}}{2^j} \right \rfloor - \left\lfloor \frac{n_{i}}{2^j} \right \rfloor\right) = \sum_{j=1}^\infty \left(\left\lfloor \frac{\ell_{i+1}}{2^j} \right \rfloor - \left\lfloor \frac{\ell_{i}}{2^j} \right \rfloor\right). \end{equation} Since \begin{equation}\label{lii} \ell_{i+1} - \ell_i \gg \frac{\log_2 t}{\log_3 t}, \end{equation} we certainly have $\ell_{i+1} - \ell_i \geq 2$, and the right-hand side of~\eqref{eq:2adiccomp} is at least $$ \left\lfloor \frac{\ell_{i+1}}{2} \right \rfloor - \left\lfloor \frac{\ell_{i}}{2} \right \rfloor \gg \ell_{i+1} - \ell_i.$$ By construction, the terms on the left-hand side of~\eqref{eq:2adiccomp} vanish unless $j \ll \log_3 t$, in which case they are equal to $\frac{n_{i+1}-n_i}{2^j} + O( 1)$. Thus the left-hand side of~\eqref{eq:2adiccomp} is at most $O( n_{i+1} - n_i + \log_3 t )$. 
Thus $$ \ell_{i+1} - \ell_i \ll n_{i+1} - n_i + \log_3 t.$$ But from \eqref{nnp} one has $n_{i+1} - n_i \ll \frac{\log_3 t}{\log_2 t} (\ell_{i+1}-\ell_i)$. Hence $\ell_{i+1} - \ell_i \ll \log_3 t$. But this contradicts \eqref{lii}. This concludes the proof of Theorem \ref{main-falling}.
\end{document}
\begin{document}
\title[Signed graphs cospectral with the path]{Signed graphs cospectral with the path}
\author[S. Akbari, W.H. Haemers, H.R. Maimani and L. Parsaei Majd]{Saieed Akbari}
\address{S. Akbari, Department of Mathematical Sciences, Sharif University of Technology, Tehran, Iran.} \email{s\_akbari@sharif.ir}
\author[]{Willem H. Haemers} \address{W.H. Haemers, Department of Econometrics and Operations Research, Tilburg University, Tilburg, The Netherlands.} \email{haemers@uvt.nl}
\author[]{Hamid Reza Maimani} \address{H.R. Maimani, Mathematics Section, Department of Basic Sciences, Shahid Rajaee Teacher Training University, P.O. Box 16785-163, Tehran, Iran.} \email{maimani@ipm.ir}
\author[]{Leila Parsaei Majd} \address{L. Parsaei Majd, Mathematics Section, Department of Basic Sciences, Shahid Rajaee Teacher Training University, P.O. Box 16785-163, Tehran, Iran, and School of Mathematics, Institute for Research in Fundamental Sciences (IPM), P.O. Box 19395-5746, Tehran, Iran.} \email{leila.parsaei84@yahoo.com}
\begin{abstract} A signed graph $\Gamma$ is said to be determined by its spectrum if every signed graph with the same spectrum as $\Gamma$ is switching isomorphic with $\Gamma$. Here it is proved that the path $P_n$, interpreted as a signed graph, is determined by its spectrum if and only if $n\equiv 0, 1$, or 2 (mod 4), unless $n\in\{8, 13, 14, 17, 29\}$, or $n=3$.
\\ Keywords: signed graph; path; spectral characterization; cospectral graphs. \\ AMS subject classification 05C50, 05C22. \end{abstract}
\maketitle
\section{Introduction} \label{sec1}
Throughout this paper all graphs are simple, without loops or parallel edges. A \textit{signed graph} $\Gamma=(G, \sigma)$ (with $G=(V, E)$) is a graph with the vertex set $V$ and the edge set $E$ together with a function $\sigma : E \rightarrow \{-1, +1\}$, called the \textit{signature function}. So, every edge becomes either positive or negative. The adjacency matrix $A$ of $\Gamma$ is obtained from the adjacency matrix of the underlying graph $G$ by replacing $1$ by $-1$ whenever the corresponding edge is negative. The spectrum of $A$ is also called the spectrum of the signed graph $\Gamma$. For a vertex subset $X$ of $\Gamma$, the operation that changes the sign of all edges between $X$ and its complement is called switching. In terms of the matrix $A$, switching multiplies the rows and columns of $A$ corresponding to $X$ by $-1$. The switching operation gives rise to an equivalence relation, and equivalent signed graphs have the same spectrum (see \cite[Proposition 3.2]{zaslavsky1}). If a signed graph can be switched into an isomorphic copy of another signed graph, the two signed graphs are called \textit{switching isomorphic}. Clearly, switching isomorphic graphs are cospectral (that is, they have the same spectrum). A signed graph $\Gamma$ is determined by its spectrum whenever every signed graph cospectral with $\Gamma$ is switching isomorphic with $\Gamma$. For unsigned graphs it is known that the path $P_n$ is determined by the spectrum of the adjacency matrix, see \cite[Proposition~1]{dam-hae}.
For signed graphs this is in general no longer true. In this paper we determine precisely for which $n$ it still holds; see Theorems~\ref{n-even}, \ref{1nod4}, and Corollary~\ref{3mod4}.
We refer to \cite{zaslavsky1} and \cite{zaslavsky2} for more information about signed graphs. For the relevant background on graphs we refer to \cite{brou-haem}, \cite{cvet1}, or \cite{cvet}. The problem was possibly first introduced by Acharya in \cite{[1]}.
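Since switching replaces $A$ by $DAD$ with $D$ a diagonal $\pm1$ matrix, the cospectrality of switching-equivalent signed graphs described above can be illustrated directly; the following sketch compares the traces of the first $n$ powers of $A$ (which together determine the spectrum), for an arbitrarily chosen unbalanced $4$-cycle and switching set.

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def switch(A, X):
    # multiply the rows and columns of A indexed by X by -1
    n = len(A)
    s = [-1 if i in X else 1 for i in range(n)]
    return [[s[i] * s[j] * A[i][j] for j in range(n)] for i in range(n)]

def power_traces(A):
    # traces of A, A^2, ..., A^n, which together determine the spectrum
    n = len(A)
    P, out = A, []
    for _ in range(n):
        out.append(sum(P[i][i] for i in range(n)))
        P = matmul(P, A)
    return out

# an unbalanced 4-cycle: C_4 with one negative edge (arbitrary example)
A = [[0, 1, 0, -1],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [-1, 0, 1, 0]]
assert power_traces(A) == power_traces(switch(A, {0, 2}))
```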
\section{Preliminaries}
A \textit{walk} of length $k$ in a signed graph $\Gamma$ is a sequence $v_1 e_1 v_2 e_2 \ldots v_k e_k v_{k+1}$ of vertices $v_1, v_2, \ldots, v_{k+1}$ and edges $e_1, e_2, \ldots, e_{k}$ such that $v_i\neq v_{i+1}$ and $e_i=\{v_i,v_{i+1}\}$ for each $i = 1, 2, \ldots, k$. A walk is said to be \textit{positive} if it contains an even number of negative edges, otherwise it is called \textit{negative}. Let $w_{ij}^{+}(k)$ (resp. $w_{ij}^{-}(k)$) denote the number of positive (resp., negative) walks of length $k$ from the vertex $v_i$ to the vertex $v_j$. A closed walk is a walk that starts and ends at the same vertex.
In the unsigned case, the $(i, j)$-entry of $A^{k}$ represents the number of walks of length $k$ from $v_i$ to $v_j$. But in the signed case, powers of $A$ count walks in a signed way. The $(i, j)$-entry of $A^{k}$ is $w_{ij}^{+}(k) - w_{ij}^{-}(k)$ (\cite[Lemma 3.2]{belardo}, \cite[Theorem II.1]{zaslavsky2}). For simplicity, we set $W_{k}(\Gamma)=\sum_{i=1}^{n}{(w_{ii}^{+}(k) - w_{ii}^{-}(k))}$. It is easy to see that if $\Gamma$ and $\Gamma^{'}$ are two cospectral signed graphs, then $W_k(\Gamma)=W_k(\Gamma^{'})$ for each $k\geqslant 1$. Moreover, if $\Gamma$ and $\Gamma^{'}$ are two cospectral signed graphs, then, since the sum of the squares of the eigenvalues is twice the number of edges, the order and the size of $\Gamma$ and $\Gamma^{'}$ are the same.
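The identity $(A^{k})_{ij}=w_{ij}^{+}(k)-w_{ij}^{-}(k)$ can be illustrated by enumerating walks and summing the products of their edge signs; the signed path used below is an arbitrary example.

```python
def signed_walks(A, i, j, k):
    # sum over all walks of length k from i to j of the product of edge signs,
    # which equals w+(k) - w-(k)
    if k == 0:
        return 1 if i == j else 0
    n = len(A)
    return sum(A[i][v] * signed_walks(A, v, j, k - 1) for v in range(n) if A[i][v] != 0)

def matpow_entry(A, i, j, k):
    # (i, j)-entry of A^k via repeated matrix multiplication
    n = len(A)
    P = [[1 if a == b else 0 for b in range(n)] for a in range(n)]
    for _ in range(k):
        P = [[sum(P[a][c] * A[c][b] for c in range(n)) for b in range(n)] for a in range(n)]
    return P[i][j]

# a small signed graph: the path 0-1-2-3 with the middle edge negative (arbitrary)
A = [[0, 1, 0, 0],
     [1, 0, -1, 0],
     [0, -1, 0, 1],
     [0, 0, 1, 0]]
for k in range(1, 6):
    for i in range(4):
        for j in range(4):
            assert matpow_entry(A, i, j, k) == signed_walks(A, i, j, k)
```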
The following lemma can be easily proved by induction. \begin{lem}\label{walk} ~\\[-27pt] $$W_4(P_n)=14 + 6(n-4),~\text{for}~ n\geqslant 2,$$
$$W_6(P_n)=76 + 20(n-6),~\text{for}~ n\geqslant 3,\ \mbox{and}\ W_6(P_2)=2.$$ \end{lem}
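The formulas in Lemma \ref{walk} are easy to confirm by computing $\mathrm{trace}(A^k)$ directly for small paths; the ranges tested below are arbitrary.

```python
def path_adj(n):
    """Adjacency matrix of the unsigned path P_n."""
    return [[1 if abs(i - j) == 1 else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def W(k, n):
    """W_k(P_n) = trace(A^k), the signed count of closed walks of length k."""
    A = path_adj(n)
    P = A
    for _ in range(k - 1):
        P = matmul(P, A)
    return sum(P[i][i] for i in range(n))

for n in range(2, 12):
    assert W(4, n) == 14 + 6 * (n - 4)
for n in range(3, 12):
    assert W(6, n) == 76 + 20 * (n - 6)
assert W(6, 2) == 2
```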
A cycle in a signed graph is called \textit{balanced} if it contains an even number of negative edges, otherwise it is called \textit{unbalanced}. A signed graph is balanced if all its cycles are balanced. It is easily seen that a signed path and a balanced cycle are switching isomorphic with the underlying unsigned path and cycle, respectively. An unbalanced cycle is switching isomorphic with the underlying cycle with precisely one negative edge.
\begin{lem}\cite[Lemma 4.4]{belardo}.\label{spec} Let $P_n$ and $C_n$ (resp. $C_{n}^{-}$) be the path and the balanced cycle (resp. unbalanced cycle) on $n$ vertices, respectively. Then the following hold:
\begin{align*} \mathrm{Spec}(C_n)&=\big\{2\cos\dfrac{2i\pi}{n}:~i=0, 1, \ldots, n-1\big\},\\ \mathrm{Spec}(C_{n}^{-})&=\big\{2\cos\dfrac{(2i+1)\pi}{n}:~i=0, 1, \ldots, n-1\big\},\\ \mathrm{Spec}(P_n)&=\big\{2\cos\dfrac{i\pi}{n+1}:~i=1, \ldots, n\big\}. \end{align*} \end{lem} Observe that $C_n$ has largest eigenvalue $2$, and that $C_n^-$ has smallest eigenvalue $-2$ when $n$ is odd, while all eigenvalues of the path are strictly between $-2$ and $2$. Moreover, all eigenvalues of the path are simple (have multiplicity $1$), while $C_n$ and $C_n^-$ have (many) eigenvalues of multiplicity $2$.
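The spectrum of $P_n$ in Lemma \ref{spec} can be checked numerically: the characteristic polynomial of $P_n$ obeys the three-term recurrence $\phi_k=x\,\phi_{k-1}-\phi_{k-2}$ (expanding $\det(xI-A)$ along an end vertex), and the stated cosines should be its roots. The case $n=9$ and the tolerance below are arbitrary choices.

```python
from math import cos, pi

def path_charpoly_at(n, x):
    """phi_n(x) = det(xI - A(P_n)) via phi_k = x*phi_{k-1} - phi_{k-2}."""
    p_prev, p = 1.0, x
    for _ in range(2, n + 1):
        p_prev, p = p, x * p - p_prev
    return p

n = 9
eigs = [2 * cos(i * pi / (n + 1)) for i in range(1, n + 1)]
for lam in eigs:
    assert abs(path_charpoly_at(n, lam)) < 1e-9   # lam is a root
assert all(-2 < lam < 2 for lam in eigs)          # strictly inside (-2, 2)
assert len({round(lam, 9) for lam in eigs}) == n  # n distinct (simple) eigenvalues
```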
Suppose $\Gamma$ is a signed graph of order $n$ with adjacency matrix $A$. Then we write $\det(\Gamma)$ instead of $\det(A)$. So $\det(\Gamma)$ equals the product of the eigenvalues of $\Gamma$, and if $p(x)=a_0+a_1 x +\ldots +a_{n-1}x^{n-1}+x^{n}$ is the characteristic polynomial of $\Gamma$, then $\det(\Gamma)=(-1)^n a_0=(-1)^n p(0)$. We define $\det'(\Gamma)=(-1)^{n-1}a_1=(-1)^{n-1}p'(0)$. If $\Gamma$ has an eigenvalue $0$, then $\det(\Gamma)=0$, and $\det'(\Gamma)$ is the product of the $n-1$ remaining eigenvalues.
\begin{lem}\label{detpn} \begin{itemize} \item[(a)] If $n$ is even, then $\mathrm{det}(P_n)=(-1)^{\frac{n}{2}}$. \item[(b)] If $n$ is odd, then $\mathrm{det}^{'}(P_n)=(-1)^{\frac{n-1}{2}}(n+1)/2$. \end{itemize} \end{lem}
\begin{proof} (a)~Clearly $\det(P_2)=-1$, and expanding $\det(P_{n+2})$ with respect to an end vertex of $P_{n+2}$ gives $\det(P_{n+2})=-\det(P_n)$. \\ (b)~Let $B_n$ be the adjacency matrix of $P_n$. When $n$ is odd, we can write \[ B_n=\left[\begin{array}{cc} O & N \\ N^\top & O \end{array}\right],\ \mbox{where } N=\left[\begin{array}{ccccc} 1 & 1 & 0 & \cdots & 0 \\
 & \ddots & \ddots & & \\
 & & \ddots & \ddots & \\ 0 & \cdots & 0 & 1 & 1 \end{array}\right]. \] The eigenvalues of $B_n^2$ are the eigenvalues of $NN^\top$ together with the eigenvalues of $N^\top N$. Since $NN^\top$ and $N^\top N$ have the same nonzero eigenvalues, and the nonzero eigenvalues of $B_n$ come in pairs $\pm\mu$ with $\mu^2$ ranging over the eigenvalues of $NN^\top$, it follows that $\det'(B_n)=(-1)^{\frac{n-1}{2}}\det(NN^\top)$. We easily have that $NN^\top=2I+B_m$, where $m=(n-1)/2$. Write $d_m=\det(2I+B_m)$, then $d_1=2$, $d_2=3$ and $d_{m+2}=2d_{m+1}-d_m$, so $d_m=m+1=(n+1)/2$. \end{proof}
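The coefficients $a_0$ and $a_1$ in Lemma \ref{detpn} can be verified from the same three-term recurrence for the characteristic polynomial of $P_n$, now carried out on integer coefficient lists; the tested ranges are arbitrary. (Numerically, $a_1$ for odd $n$ carries the sign $(-1)^{(n-1)/2}$.)

```python
def path_charpoly_coeffs(n):
    """Ascending coefficient list of det(xI - A(P_n)),
    via phi_k = x*phi_{k-1} - phi_{k-2}, phi_0 = 1, phi_1 = x."""
    p_prev, p = [1], [0, 1]              # phi_0, phi_1
    for _ in range(2, n + 1):
        shifted = [0] + p                # x * phi_{k-1}
        padded = p_prev + [0] * (len(shifted) - len(p_prev))
        p_prev, p = p, [a - b for a, b in zip(shifted, padded)]
    return p

for n in range(2, 20, 2):                # (a): det(P_n) = (-1)^(n/2) for even n
    a0 = path_charpoly_coeffs(n)[0]
    assert a0 == (-1) ** (n // 2)        # det = (-1)^n a0 = a0 since n is even
for n in range(3, 20, 2):                # (b): |det'(P_n)| = (n+1)/2 for odd n
    c = path_charpoly_coeffs(n)
    assert c[0] == 0                     # 0 is an eigenvalue
    assert abs(c[1]) == (n + 1) // 2
    assert c[1] == (-1) ** ((n - 1) // 2) * (n + 1) // 2   # observed sign pattern
```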
\begin{lem}\label{ddet} Let $B$ be a symmetric matrix of order $n$ with two equal rows (and columns), and let $B'$ be the matrix of order $n-1$ obtained from $B$ by deleting one repeated row and column. Then $\det(B)=0$, and $\det'(B)=2\det(B')$. \end{lem}
\begin{proof} Clearly $B$ is singular, so $\det(B)=0$. Without loss of generality we assume that the first two rows and columns of $B$ are equal. Consider the following orthogonal matrices $Q_2=\frac{1}{\sqrt{2}}\left[\begin{array}{cc}1&1\\-1&1\end{array}\right]$, and $Q=\left[\begin{array}{cc}Q_2&O\\O&I_{n-2}\end{array}\right]$. Then $Q^\top BQ= \left[\begin{array}{cc}0&\underline{0}^\top\\ \underline{0}&B''\end{array}\right]$, where $B''$ is obtained from $B'$ by multiplying the first row and column by $\sqrt{2}$. On the other hand, $B$ and $Q^\top BQ$ are cospectral, therefore Spec$(B'')=\mbox{Spec}(B)\setminus\{0\}$. So $\det'(B)=\det(B'')=2\det(B')$. \end{proof}
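Lemma \ref{ddet} can be illustrated on a concrete example: the tree consisting of $K_{1,3}$ with a pendant edge has two equal rows, and deleting one of them leaves $P_4$. The characteristic polynomial is computed exactly below by the Faddeev--LeVerrier iteration (a standard method; the example graph is an arbitrary choice for this sketch).

```python
from fractions import Fraction

def charpoly(A):
    """Coefficients [1, c_{n-1}, ..., c_0] of det(xI - A),
    computed exactly by the Faddeev-LeVerrier iteration."""
    n = len(A)
    A = [[Fraction(v) for v in row] for row in A]
    M = [[Fraction(0)] * n for _ in range(n)]
    coeffs = [Fraction(1)]
    for k in range(1, n + 1):
        # M <- A*M + c_last*I, then c_next = -trace(A*M)/k
        M = [[sum(A[i][l] * M[l][j] for l in range(n)) for j in range(n)] for i in range(n)]
        for i in range(n):
            M[i][i] += coeffs[-1]
        t = sum(sum(A[i][l] * M[l][i] for l in range(n)) for i in range(n))
        coeffs.append(-t / k)
    return coeffs

# B: star K_{1,3} (center 2, leaves 0, 1, 3) with a pendant vertex 4 on 3;
# rows and columns 0 and 1 are equal, as in the lemma
B = [[0, 0, 1, 0, 0],
     [0, 0, 1, 0, 0],
     [1, 1, 0, 1, 0],
     [0, 0, 1, 0, 1],
     [0, 0, 0, 1, 0]]
c = charpoly(B)                    # p(x) = x^5 - 4x^3 + 2x
a0, a1 = c[-1], c[-2]
assert a0 == 0                     # det(B) = 0
Bp = [row[1:] for row in B[1:]]    # delete the repeated row and column: this is P_4
det_Bp = charpoly(Bp)[-1]          # for even order 4, det(B') = constant coefficient
assert a1 == 2 * det_Bp            # det'(B) = 2 det(B') = 2
```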
\section{Signed graphs cospectral with the path}
In the remainder of the paper we assume that $\Gamma$ is a signed graph cospectral but not switching isomorphic with the path $P_n$. We know that $\Gamma$ has $n$ vertices and $n-1$ edges. Since $\Gamma$ is not a signed path, $\Gamma$ has at least two components. In this section we obtain conditions for the components of $\Gamma$. \\ Graph $D_m$ in Fig.~\ref{nonsimple} is the union of $K_{1, 3}$ and $P_{m-4}$, where an end vertex of $P_{m-4}$ is joined to a vertex of degree one in $K_{1, 3}$. \begin{obs}\label{obs2} \begin{enumerate} \item By the interlacing theorem and Lemma \ref{spec}, $\Gamma$ contains no odd cycle, no balanced even cycle, and no star $K_{1, 4}$ as an induced subgraph (note that the largest adjacency eigenvalue of $K_{1, 4}$ is $2$). Hence, all cycles in $\Gamma$ are unbalanced of even order, and the maximum degree of $\Gamma$ is at most $3$. \item We checked (by computer) that a signed graph for which the underlying unsigned graph is one of the graphs given in Fig.~\ref{greater2} has largest eigenvalue at least $2$. Therefore, no graph in Fig.~\ref{greater2} is an induced subgraph of $\Gamma$. Also each graph of Fig.~\ref{nonsimple} has at least one eigenvalue of multiplicity at least $2$. Therefore none of these can be a component of $\Gamma$. Note that Graph~(g) in Fig.~\ref{nonsimple} has an eigenvalue of multiplicity $3$, so by the interlacing theorem, each graph on $8$ vertices having Graph~(g) as an induced subgraph has at least one non-simple eigenvalue, and therefore cannot be a component of $\Gamma$.
\item Let $M$ be Graph~(e) of Fig.~\ref{nonsimple}. Then $M$ is not an induced subgraph of $\Gamma$. Indeed, $M$ is not a component of $\Gamma$, and every graph on $9$ vertices with maximum degree $3$ that contains $M$ as an induced subgraph contains an odd cycle, or Graph~(a) from Fig.~\ref{greater2}.
\item A $\Theta$-graph is a union of three internally disjoint paths $P_{p}, P_{q}, P_{r}$ with common end vertices, where $p, q, r \geqslant 2$ and at most one of them equals $2$. If $p, q, r \geqslant 3$ we call the $\Theta$-graph \textit{proper}. A proper signed $\Theta$-graph has at least one balanced cycle. Then using the interlacing theorem for this induced balanced cycle, we conclude that $\Gamma$ has no proper $\Theta$-graph as an induced subgraph.
\begin{figure}
\caption{Graphs with largest eigenvalue at least $2$}
\label{greater2}
\end{figure}
\begin{figure}
\caption{Graphs with some non-simple eigenvalues (dashed edges are negative)}
\label{nonsimple}
\end{figure}
\item An unbalanced even cycle has eigenvalues of multiplicity $2$, and therefore cannot occur as a component of $\Gamma$. Furthermore, we claim that $\Gamma$ contains no induced even cycle of order more than $6$.
Indeed, let $C_r^-$ be an unbalanced induced cycle for some $r\geqslant 8$. Since $C_r^-$ is not a component, there exists a vertex $v$ outside $C_r^-$ which is adjacent to one, two or three vertices of $C_r^-$. If $v$ is adjacent to just one vertex of $C_r^-$, then we have Graph~(c) given in Fig.~\ref{greater2} as an induced subgraph. If $v$ is adjacent to two vertices of $C_r^-$, then the graph $\langle V(C_r) \cup \{v\}\rangle$ is a proper $\Theta$-graph (recall that $\Gamma$ has no odd cycle). If $v$ is adjacent to three vertices of $C_r^-$, then it is easy to check that the graph $\langle V(C_r) \cup \{v\}\rangle$ has one of the graphs (a) or (b) given in Fig.~\ref{greater2} as an induced subgraph. In all three cases we have a contradiction, and since no vertex of $\Gamma$ has degree more than three, the claim is proved. \end{enumerate} \end{obs}
In \cite{mckee} the authors have classified signed graphs having all their eigenvalues in the interval $[-2, 2]$. Now, based on \cite[Theorem 4]{mckee} and Observation \ref{obs2} we have the following results.
\begin{lem}\label{C6} If $H$ is a component of $\Gamma$ containing an induced $6$-cycle, then $H$ is one of the graphs presented in Fig.~\ref{6-cycle}. \end{lem} \begin{figure}\caption{Components of $\Gamma$ containing an induced $6$-cycle}\label{6-cycle}\end{figure} \begin{proof} We know that the $6$-cycle is unbalanced. Also, the unbalanced $6$-cycle $C_6^-$ has eigenvalues of multiplicity $2$, so $H\neq C_6^-$. By \cite[Theorem 4]{mckee} and Fig.~\ref{nonsimple}, there are only two possibilities for the component $H$; see Fig.~\ref{6-cycle}. \end{proof}
\begin{thm}\label{main} The only graphs that can occur as a component of $\Gamma$ are listed in Fig.~\ref{1}. \end{thm} \begin{proof} Let $H$ be a component of $\Gamma$. If $H$ is a tree, then since all eigenvalues are strictly less than $2$, $H$ is one of the trees in Fig.~\ref{1} (see \cite[Theorem 3.1.3]{brou-haem}). Note that $K_{1, 3}$ is not among the graphs given in Fig.~\ref{1}, because $K_{1, 3}$ has a non-simple eigenvalue. Now, suppose that $H$ has a cycle. By Observation \ref{obs2}, $H$ has no induced unbalanced cycle of order more than $6$. Hence, every induced cycle of $H$ has order $4$ or $6$. If $H$ has an induced $6$-cycle, then $H$ is Graph (f) or (j) in Fig.~\ref{1}, by Lemma \ref{C6}.
Now, assume that $H$ has an induced unbalanced $4$-cycle but no induced $6$-cycle. If $H=C_4^-$, then $\Gamma$ has non-simple eigenvalues, so $H\neq C_4^-$. By \cite[Theorem 4]{mckee}, Observation~\ref{obs2} and Figs.~\ref{greater2} and \ref{nonsimple}, $H$ is one of the graphs given in Fig.~\ref{1}. \end{proof}
\begin{figure}
\caption{All possible components for $\Gamma$}
\label{1}
\end{figure}
\begin{thm} If $\Gamma$ is cospectral, but not switching isomorphic to $P_n$, then $\Gamma$ contains an unbalanced $4$-cycle as an induced subgraph. \end{thm}
\begin{proof} Suppose $\Gamma$ does not contain a $C_4^-$. Then, since $\Gamma$ contains an induced unbalanced cycle $C_r^-$ with $r\leqslant 6$, Theorem~\ref{main} implies that Graph~(f) of Fig.~\ref{1} is a component of $\Gamma$. There cannot be two or more components isomorphic to Graph~(f), since then $\Gamma$ would have eigenvalues of multiplicity at least $2$. So we can conclude that $\Gamma$ has just one non-tree component, which is Graph~(f), and there is just one more component, isomorphic to (m), (n), (o), (p), or (q) of Fig.~\ref{1}, because the size of $\Gamma$ equals the order of $\Gamma$ minus $1$. Moreover, the reader can find the spectra of Graphs~(m), (n), (o), (p), and (q) of Fig.~\ref{1} in \cite[Theorem 3.1.3]{brou-haem}. By verification it follows that none of these possibilities has the spectrum of $P_n$. \end{proof}
Note that only four cases in Fig.~\ref{1} represent an infinite family. Graph~(q) of order $m$ is the path $P_m$, and Graph~(p) of order $m$ is known as $D_m$. Graphs~(k) and (l) will be denoted by $H_t$ and $H_{t}^{t+m}$, respectively. More precisely, $H_t$ is the union of $C_4^-$ and $P_t$, where an end vertex of $P_t$ is joined to a vertex of $C_4^-$, and $H_t^{t+m}$ is the union of $C_4^-$, $P_t$ and $P_{m+t}$, where an end vertex of $P_t$ is joined to one vertex of $C_4^-$, and an end vertex of $P_{m+t}$ is joined to the opposite vertex of $C_4^-$. \begin{lem}\label{lem11} For integers $t\geqslant 1$ and $m\geqslant 1$, $\mathrm{det}(H_t)$, $\mathrm{det}(H_{t}^{t+m})$, $\mathrm{det}(D_{m})$, $\mathrm{det'}(H_t)$, $\mathrm{det'}(H_{t}^{t+m})$, and $\mathrm{det'}(D_{m})$ are even. \end{lem} \begin{proof} Let $B$ be the adjacency matrix of the underlying unsigned graph of $H_t$, $H_{t}^{t+m}$ or $D_m$. In each case $B$ contains two repeated rows (and columns), so by Lemma~\ref{ddet} $\det(B)=0$ and $\det'(B)=2\det(B')$, so $\det'(B)$ is even. On the other hand, the signed and the unsigned graph have equal adjacency matrices modulo $2$, so the corresponding determinants have the same parity. \end{proof}
\begin{thm}\label{eigH} The eigenvalues of $H_{t}^{t+m}$ ($t\geq 1$, $m\geq 0$) are as follows: The first type of eigenvalues are $$2\mathrm{cos}\dfrac{(2i-1)\pi}{2k},~\text{for}~k=t+m+2,~i=1, \ldots , k.$$ The second type of eigenvalues are $$2\mathrm{cos}\dfrac{(2i-1)\pi}{2t+4},~\text{for}~i=1, \ldots , t+2.$$ \end{thm} \begin{proof} Consider a labeling of $H_{t}^{t+m}$, according to whether $t$ is even or odd, as presented in Figs.~\ref{H2,1} and \ref{H2,2}, respectively. Note that the sets $\{v_1, v_2, \ldots\}$ and $\{u_1, u_2, \ldots\}$ in Figs.~\ref{H2,1} and \ref{H2,2} form a bipartition of the signed bipartite graph $H_{t}^{t+m}$. Hence, we can write the adjacency matrix $A$ of $H_{t}^{t+m}$ as follows:
\begin{equation*} A = \begin{bmatrix} O & N \\ N^T & O\\ \end{bmatrix}. \end{equation*} Then it is seen that \begin{equation*} A^2 = \begin{bmatrix} NN^T & O \\ O & N^TN \\ \end{bmatrix}. \end{equation*} We can write $NN^T$ as the block-diagonal matrix \begin{equation*} NN^T= \begin{bmatrix} K & O \\ O & L \\ \end{bmatrix}, \end{equation*} where $K$ and $L$ are tridiagonal matrices with all entries equal to one on the superdiagonal and subdiagonal, and with diagonals $\left[ 3, 2, 2, \ldots , 2, 1\right ]$ and $\left[ 3, 2, 2, \ldots , 2 \right ]$, respectively.
\begin{figure}
\caption{Labeling of $H_{t}^{t+m}$ for even $t$}
\label{H2,1}
\end{figure}
\begin{figure}
\caption{Labeling of $H_{t}^{t+m}$ for odd $t$}
\label{H2,2}
\end{figure}
Assume that $K$ and $L$ are square matrices of size $s$ and $r$, respectively. If $t$ is even, then $s=\frac{t}{2}+1$ and $r=[\frac{t+m}{2}]+1$. Otherwise $s=[\frac{t+m}{2}]+1$ and $r=\lceil \frac{t+1}{2}\rceil$. Moreover, by \cite[Theorems 2, 3]{eigtri}, we can obtain the eigenvalues of $K$ and $L$ from the following equalities, respectively. $$\lambda_j = 2 + 2 \mathrm{cos}\dfrac{(2j-1)\pi}{2s}, ~j=1, 2, \ldots, s,$$ $$\lambda_i= 2 + 2 \mathrm{cos}\dfrac{(2i-1)\pi}{2r+1}, ~i=1, 2, \ldots, r.$$ Now, since the spectrum of a bipartite signed graph is symmetric about the origin and $2+2\cos\theta=\big(2\cos\frac{\theta}{2}\big)^2$, the eigenvalues of $A$ are the square roots, with both signs, of the eigenvalues of $A^2$, and the assertion follows. \end{proof}
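The eigenvalue formulas for $K$ and $L$ quoted from \cite{eigtri} can be checked numerically. The following Python sketch (an independent numerical check, not part of the formal argument) evaluates the characteristic polynomial of each tridiagonal matrix at the predicted eigenvalues via the standard three-term recurrence:

```python
import math

def charpoly(diag, x):
    # det(xI - T) for a tridiagonal matrix T with unit off-diagonal
    # entries, via the recurrence D_k = (x - d_k) D_{k-1} - D_{k-2}.
    p_prev, p = 1.0, x - diag[0]
    for d in diag[1:]:
        p_prev, p = p, (x - d) * p - p_prev
    return p

s, r = 5, 6
K_diag = [3] + [2] * (s - 2) + [1]   # diagonal of K
L_diag = [3] + [2] * (r - 1)         # diagonal of L
for j in range(1, s + 1):            # predicted eigenvalues of K
    lam = 2 + 2 * math.cos((2 * j - 1) * math.pi / (2 * s))
    assert abs(charpoly(K_diag, lam)) < 1e-8
for i in range(1, r + 1):            # predicted eigenvalues of L
    lam = 2 + 2 * math.cos((2 * i - 1) * math.pi / (2 * r + 1))
    assert abs(charpoly(L_diag, lam)) < 1e-8
print("ok")
```

Each predicted eigenvalue annihilates the corresponding characteristic polynomial up to floating-point error, for the chosen sizes $s=5$ and $r=6$.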
\section{Paths of even order}
Suppose $n$ is even. By Lemma \ref{detpn}, $\det(\Gamma)=\det(P_n)=(-1)^{\frac{n}{2}}$. Therefore each component of $\Gamma$ has determinant $+1$ or $-1$; hence Graphs (a), (c), (e), (h), (i), (j), (m), (o), and (q) ($P_k$ with $k$ even) given in Fig.~\ref{1} are the only possible components of $\Gamma$.
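The determinant value used here is easy to verify computationally. The following Python sketch (an illustration only) evaluates $\det(P_n)$ via the tridiagonal determinant recurrence $D_k=-D_{k-2}$, confirming $\det(P_n)=(-1)^{n/2}$ for even $n$ and $\det(P_n)=0$ for odd $n$:

```python
def det_path(n):
    # Determinant of the adjacency matrix of the path P_n, computed via
    # the tridiagonal recurrence D_k = 0*D_{k-1} - D_{k-2} = -D_{k-2},
    # with D_0 = 1 and D_1 = 0.
    d_prev, d = 1, 0
    for _ in range(n - 1):
        d_prev, d = d, -d_prev
    return d

assert all(det_path(n) == (-1) ** (n // 2) for n in (2, 4, 6, 8, 10))
assert all(det_path(n) == 0 for n in (1, 3, 5, 7))
print("ok")
```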
\begin{lem}\label{cos.spec} The spectrum of Graphs (c), (e) and (i) in Fig.~\ref{1} are as follows: \begin{align*} \mathrm{Spec}(c)&=\big\{2\mathrm{cos}\dfrac{k\pi}{24}:~k=1, 5, 7, 11, 13, 17, 19, 23\big\},\\ \mathrm{Spec}(e)&=\big\{2\mathrm{cos}\dfrac{k\pi}{20}:~k=1, 3, 7, 9, 11, 13, 17, 19\big\},\\ \mathrm{Spec}(i)&=\big\{2\mathrm{cos}\dfrac{k\pi}{18}:~k=1, 3, 5, 7, 11, 13, 15, 17\big\}. \end{align*} \end{lem}
By \cite[Theorem 3.1.3]{brou-haem} and Lemma \ref{cos.spec}, none of Graphs (c), (e), (i) and (m) can be a component of a signed graph cospectral with $P_n$ for any even $n$. Therefore the only graphs which can occur as a component of $\Gamma$ for even $n$ are the graphs presented in Fig.~\ref{components}. \begin{figure}
\caption{All possible components of $\Gamma$}
\label{components}
\end{figure}
We note that the second and the third graph in Fig.~\ref{components} are cospectral. Therefore, at most one of them can be a component of $\Gamma$, since otherwise $\Gamma$ would have eigenvalues of multiplicity at least $2$.\\
\begin{lem}\label{lem.1} If $n$ is even and $\Gamma$ has two connected components then $n=8$. Moreover, $\Gamma$ is switching isomorphic with the disjoint union of $P_2$ and Graph~(a) from Fig.~\ref{1}. \end{lem} \begin{proof} Based on the possible components for $\Gamma$ in Fig.~\ref{components}, the only possibility for $\Gamma$ with two components is the union of Graph~(a) and $P_{n-6}$. By considering the values of $W_{4}(\Gamma)$ and $W_{6}(\Gamma)$ and using Lemma \ref{walk}, we have $W_4(\Gamma) = W_4(P_n)$ for each $n$, but $W_6(\Gamma) \neq W_6(P_n)$ for $n\neq 8$. If $n=8$ it is easily verified that $\Gamma$ and $P_8$ are cospectral. \end{proof}
\begin{lem}\label{lem.2} If $n$ is even and $\Gamma$ has three connected components then $n=14$. Moreover, if $n=14$, then $\Gamma$ is switching isomorphic with the disjoint union of either $P_2$, $P_4$ and Graph~(h), or $P_2$, $P_4$ and Graph~(j) in Fig. \ref{1}. \end{lem} \begin{proof} After considering all cases of the components of $\Gamma$ in Fig.~\ref{components}, we obtain two types of $\Gamma$ with three components, given in Fig.~\ref{3-component}. These two possible types of $\Gamma$ behave identically because the spectra of their first components coincide. Hence, it is sufficient to verify one of these two cases for $\Gamma$. We note that when $\Gamma$ contains two paths, then the orders of the paths are different, because otherwise the multiplicity of some of the eigenvalues would be at least two. We have $W_4(\Gamma)=W_4(P_n)$, but $W_6(\Gamma)\neq W_6(P_n)$, unless $n=14$. By an easy inspection, we conclude that if $n=14$ and the path components have orders $2$ and $4$, then $\mathrm{Spec}(\Gamma)=\mathrm{Spec}(P_{14})$. \end{proof}
\begin{figure}
\caption{All types of $\Gamma$ with three components and $n$ even}
\label{3-component}
\end{figure}
\begin{lem}\label{lem.3} If $n$ is even, then $\Gamma$ has at most three components. \end{lem} \begin{proof} Assume that $\Gamma$ has more than three components. Using Fig.~\ref{components} we see that there are only two possible types for $\Gamma$, shown in Fig.~\ref{4-component}. As in the proofs of Lemmas \ref{lem.1} and \ref{lem.2}, it is sufficient to compare $W_6$ for $\Gamma$ and $P_n$. In each case we obtain a contradiction. \end{proof} \begin{figure}
\caption{All types of $\Gamma$ with four components and $n$ even}
\label{4-component}
\end{figure}
\begin{thm}\label{n-even} Suppose $n$ is even. Then $P_n$ is determined by the spectrum if and only if $n\neq 8, 14$. \end{thm}
\begin{proof} It follows from Lemmas \ref{lem.1}, \ref{lem.2} and \ref{lem.3}. \end{proof}
\section{Paths of odd order}
\begin{thm}\label{1nod4} Suppose $n\equiv 1~(\mathrm{mod}~4)$. Then $P_n$ is determined by the spectrum if and only if $n\not\in\{13, 17, 29\}$. \end{thm} \begin{proof} Since $n$ is odd, $\det(\Gamma)=0$, and exactly one component $H$ of $\Gamma$ has an eigenvalue $0$. The product of all other eigenvalues of $\Gamma$ equals $\det'(\Gamma)=(n+1)/2$ by Lemma \ref{detpn}. Since $(n+1)/2$ is odd, $\det'(H)$ is odd, and every component different from $H$ has an odd determinant. Hence, by Lemma \ref{lem11}, the possible candidates do not include Graphs~(k), (l) and (p) given in Fig.~\ref{1}. So, there is only a small list of possible components of $\Gamma$. Clearly $\lambda_1(P_n)$ is equal to $\lambda_1(H)$ for one of the components of $\Gamma$. Since $\lambda_1(P_k)<\lambda_1(P_n)$ when $k<n$, $H\neq P_k$, and the largest eigenvalue of each of the other possible components is at most $\lambda_1(P_{29})$. Therefore $P_n$ is determined by the spectrum when $n\geq 33$.
For $n=5, 9$, it is easy to check that $P_n$ is determined by the spectrum. If $n=21, 25$, then $\det'(P_n)=11, 13$ respectively. But none of the components $H$ in Fig.~\ref{1} (except $P_{21}$ and $P_{25}$) has $\det(H)$ or $\det'(H)$ equal to $11$ or $13$. Hence, $P_{21}$ and $P_{25}$ are determined by their spectra. Furthermore, we give graphs cospectral with $P_{13}$, $P_{17}$ and $P_{29}$ in Fig.~\ref{counterexamples}. \end{proof}
\begin{thm}\label{odd case2} Let $n=4k+3$ for some integer $k\geq 1$. Then there exists a graph $\Gamma$ which is cospectral but not switching isomorphic with $P_n$. \end{thm} \begin{proof} Consider graph $\Gamma$ with two components $H_2$ and $P_1$. It is easy to check that $\Gamma$ is cospectral with $P_7$. For other cases, we show that a signed graph with two components $H_{k-1}^{2k}$ and $P_k$ is a cospectral mate of $P_{4k+3}$. \begin{align*} &\mathrm{Spec}(P_{4k+3})=\{2\mathrm{cos}\dfrac{i\pi}{4k+4}, ~i=1, 2, \ldots, 4k+3\}\\ & =\{2\mathrm{cos}\dfrac{i\pi}{4(k+1)}, ~i=1, 3, \ldots, 4k+3\}\cup \{2\mathrm{cos}\dfrac{j\pi}{4(k+1)}, ~j=2, 4, \ldots, 4k+2\}\\ & =\{2\mathrm{cos}\dfrac{i\pi}{4(k+1)}, ~i=1, 3, \ldots, 4k+3\}\cup \{2\mathrm{cos}\dfrac{j\pi}{2(k+1)}, ~j=1, 2, \ldots, 2k+1\}\\ &=\mathrm{Spec}(H_{k-1}^{2k})\cup \mathrm{Spec}(P_k). \end{align*} ~\\[-35pt] \end{proof} ~\\ In Fig.~\ref{counterexamples} ($E_6$ is Graph (m) of Fig.~\ref{1}, and $E_8$ is Graph (o) of Fig.~\ref{1}), we give signed graphs cospectral with $P_{11}, P_{15}$ and $P_{23}$. This shows that the cospectral mates presented in Theorem~\ref{odd case2} are in general not unique.
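The cospectrality claimed in Theorem~\ref{odd case2} can also be confirmed numerically. The following Python sketch (an illustration with an ad hoc vertex labeling of $H_t^{t+m}$) compares the closed-walk counts $W_j=\mathrm{tr}(A^j)$, $j=1,\dots,n$, of $H_{k-1}^{2k}\cup P_k$ and $P_{4k+3}$ for $k=2$; for symmetric matrices, equality of all these traces is equivalent to cospectrality:

```python
def path_adj(n):
    # Adjacency matrix of the path P_n, as a list of rows.
    return [[1 if abs(i - j) == 1 else 0 for j in range(n)] for i in range(n)]

def H_adj(t, m):
    # Signed adjacency matrix of H_t^{t+m}: an unbalanced 4-cycle C_4^-
    # on vertices 0,1,2,3 (edge {3,0} carries sign -1), a pendant path
    # P_t attached at vertex 0 and a pendant path P_{t+m} at vertex 2.
    n = 4 + t + (t + m)
    A = [[0] * n for _ in range(n)]
    for u, v, s in [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 0, -1)]:
        A[u][v] = A[v][u] = s
    prev = 0
    for w in range(4, 4 + t):          # attach P_t at vertex 0
        A[prev][w] = A[w][prev] = 1
        prev = w
    prev = 2
    for w in range(4 + t, n):          # attach P_{t+m} at vertex 2
        A[prev][w] = A[w][prev] = 1
        prev = w
    return A

def direct_sum(A, B):
    # Adjacency matrix of the disjoint union of two graphs.
    n, r = len(A), len(B)
    C = [[0] * (n + r) for _ in range(n + r)]
    for i in range(n):
        C[i][:n] = A[i]
    for i in range(r):
        C[n + i][n:] = B[i]
    return C

def walk_counts(A):
    # W_j = tr(A^j) for j = 1..n.
    n = len(A)
    P, counts = A, []
    for _ in range(n):
        counts.append(sum(P[i][i] for i in range(n)))
        P = [[sum(P[i][a] * A[a][j] for a in range(n)) for j in range(n)]
             for i in range(n)]
    return counts

k = 2                                              # check against P_11
G = direct_sum(H_adj(k - 1, k + 1), path_adj(k))   # H_{k-1}^{2k} + P_k
print(walk_counts(G) == walk_counts(path_adj(4 * k + 3)))  # prints True
```

The sign of the negative cycle edge is immaterial up to switching, so any single negative edge on the 4-cycle yields the same spectrum.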
Obviously $P_3$ is determined by its spectrum, so we have the following conclusion.
\begin{cor}\label{3mod4} Suppose $n\equiv 3 \mod 4$. Then $P_n$ is determined by its spectrum if and only if $n=3$. \end{cor}
\begin{figure}
\caption{Cospectral mates of $P_{11}, P_{13}, P_{15}, P_{17}, P_{23}$, $P_{29}$}
\label{counterexamples}
\end{figure}
\section*{Acknowledgment}
The research of the first author was partly funded by the Iran National Science Foundation (INSF) under contract No.\ 96004167. \\ We would like to thank an anonymous referee for comments and suggestions.
\end{document} |
\begin{document}
\title{Formalizing May's Theorem}
\begin{abstract}
This report presents a formalization of May's theorem in the proof assistant Coq. It describes how the theorem statement is first translated into Coq definitions, and how it is subsequently proved. Various aspects of the proof and related work are discussed. To the best of the author's knowledge, this project is the first documented attempt in mechanizing May's Theorem. \end{abstract}
\section{Introduction}
In 1952, Kenneth May published a mathematical theorem on social choice theory, which establishes a set of necessary and sufficient conditions for simple majority voting~\cite{mays}. This result is now known as \textbf{May's theorem}. Though not as famous as Arrow's theorem~\cite{arrows} and the Gibbard--Satterthwaite theorem~\cite{gibbard}, May's theorem is still considered to be one of the ``minor classics'' in voting theory~\cite{barry}. While those theorems have been formalized in proof assistants~\cite{formalize_arrow_isabelle,formalize_arrow_mizar}, May's theorem has never been mechanically proved.
In this report, I present the first mechanized proof of May's theorem, which I implemented in \textbf{Coq}~\cite{coq}. Proving the theorem in Coq not only adds to the library of mechanized theorems, but also provides insight into mechanizing results from social choice theory in a type-theory-based proof assistant. The structure of the Coq proof differs from the conventional proof sketch, and this report explains various subtle details of it.
In this document, I first present the high-level statement of the theorem in Section~\ref{mays}. In Section~\ref{definitions}, I then elaborate on how this statement is translated into Coq definitions. Sections~\ref{if} and \ref{onlyif} then detail the structure of the Coq proof in the if direction and the only if direction, respectively. In Section~\ref{discussion}, I consider the nature of the proof itself by discussing its usefulness, correctness, and the extra extensionality axiom the proof relies on. Section~\ref{conclusion} concludes the report and examines various future directions.
It should be noted that this document only mentions a small but important subset of the lemmas that constitute the main backbone of the Coq proof. A large number of relatively minor lemmas that the proof uses are omitted from this report for the sake of brevity. Although many of these lemmas appear straightforward at first glance, their mechanical proofs are quite convoluted and require much case analysis or extensive proof by induction. One example is the following lemma, which states that if two distinct elements, say $x$ and $y$, are members of a list containing no duplicates, the list can be expressed as the concatenation of five lists, two of which are singleton lists containing $x$ and $y$, respectively\footnote{The Coq keyword \texttt{Lemma} allows us to write a proposition for which its proof is then built using tactics.}\footnote{\texttt{NoDup l} expresses that the list \texttt{l} contains no duplicates. }:
\centercode{Lemma two\_elements\_exist \{A:Type\} (l:list A) x y \`{}\{!NoDup l\} \`{}\{x$\neq$y\}: \\ \indent In x l → In y l → $\exists $l1 l2 l3, \\ \indent l=l1++[x]++l2++[y]++l3 $\lor$ l=l1++[y]++l2++[x]++l3.}
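The computational content of this lemma can be illustrated with a small Python sketch (an illustration only; the function name and encoding are mine, and the Coq proof itself proceeds by case analysis and induction on the list):

```python
def split_around(l, x, y):
    # For a duplicate-free list l containing distinct x and y, return
    # (l1, u, l2, v, l3) with l == l1 + [u] + l2 + [v] + l3 and
    # {u, v} == {x, y}, mirroring the disjunction in the lemma.
    i, j = sorted((l.index(x), l.index(y)))
    return l[:i], l[i], l[i + 1:j], l[j], l[j + 1:]

l1, u, l2, v, l3 = split_around([3, 1, 4, 2, 5], 1, 2)
assert l1 + [u] + l2 + [v] + l3 == [3, 1, 4, 2, 5]
assert {u, v} == {1, 2}
print("ok")
```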
\section{May's Theorem\label{mays}} We now establish certain definitions following social choice theory nomenclature.
Consider an election with exactly two candidates $x$ and $y$, and a finite set of voters. Each voter can cast a vote, called a \textbf{preference}, indicating that they either prefer $x$ over $y$, $y$ over $x$, or that they are indifferent between $x$ and $y$.
A \textbf{social choice function} is a function that maps each possible list of preferences to a unique result, either $x$ wins, $y$ wins, or there is a tie between the two candidates.
Not all social choice functions are what we would perceive to be fair. May identifies several properties that characterize fairness.
A social choice function is \textbf{anonymous} if it is a symmetric function of its arguments. Each voter is treated the same by the social choice function, and swapping the preference of any two voters will yield the same election result.
A social choice function is \textbf{neutral} if flipping the preference of each voter will also flip the function's result. (Here, an indifference preference remains unchanged after a flip). Both candidates are treated equally by the social choice function and swapping the names of the candidates will not affect the final result.
A social choice function is \textbf{monotone} if for any list of preferences where the result is indifferent or favorable to $x$, changing any voter's preference in a positive way towards $x$ results in $x$ winning the election. (Here, changing a preference in a positive way towards $x$ means changing a $y$ vote to an indifference one, a $y$ vote to an $x$ vote, or an indifference vote to an $x$ vote.) This means the social choice function responds to changes of individual preferences in a ``positive'' manner.
Suppose the number of people who vote for $x$ is $a$ and the number of people who vote for $y$ is $b$. The \textbf{simple majority function} is the social choice function that decides $x$ wins if $a>b$, $y$ wins if $b>a$, or that the two candidates tie otherwise.
May's theorem states that a social choice function is anonymous, neutral, and monotone if and only if it is the simple majority function. This means that anonymity, neutrality, and monotonicity form a set of necessary and sufficient conditions for a social choice function to be the simple majority function.
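As a concrete illustration of the theorem statement (independent of the Coq development; the encoding of preferences as $1$, $-1$, $0$ for ``prefers $x$'', ``prefers $y$'', ``indifferent'' is mine), the following Python sketch checks exhaustively that the simple majority function on a three-voter electorate is anonymous, neutral, and monotone:

```python
from itertools import product

# Encode preferences as 1 (prefers x), -1 (prefers y), 0 (indifferent);
# the election result uses the same encoding.
def majority(p):
    a, b = p.count(1), p.count(-1)
    return 1 if a > b else (-1 if b > a else 0)

voters = 3
profiles = list(product((1, -1, 0), repeat=voters))

def swap(p, i, j):
    q = list(p); q[i], q[j] = q[j], q[i]; return tuple(q)

def update(p, i, val):
    q = list(p); q[i] = val; return tuple(q)

# anonymity: swapping any two voters' preferences preserves the result
assert all(majority(p) == majority(swap(p, i, j))
           for p in profiles for i in range(voters) for j in range(voters))

# neutrality: flipping every preference flips the result
assert all(majority(p) == -majority(tuple(-v for v in p)) for p in profiles)

# monotonicity: if x wins or ties, any change in favour of x makes x win
assert all(majority(update(p, i, hi)) == 1
           for p in profiles if majority(p) >= 0
           for i in range(voters) for hi in (0, 1) if hi > p[i])

print("majority rule is anonymous, neutral and monotone")
```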
\section{Definitions and assumptions in Coq\label{definitions}}
In my proof, I used the \texttt{Finite} typeclass, from the library coq-std++~\cite{stdpp}, to express the finite set of voters being considered\footnote{The Coq keyword \texttt{Context} declares variables in the context of a section. In this case, the type \texttt{voter} belonging to the typeclass \texttt{Finite}.}:
\centercode{Context \`{}\{Finite voter\}.}
As shown below, this typeclass\footnote{Typeclasses are defined with the Coq keyword \texttt{Record}.} provides three fields for the \texttt{voter} type: the field \texttt{enum voter} is a list containing all the voters, \texttt{NoDup\_enum voter} is a proposition stating that the list contains no duplicates, and \texttt{elem\_of\_enum voter} is a proposition stating that every term of the type \texttt{voter} is an element of the list. All three fields turn out to be equally important and are used extensively throughout the mechanical proof.
\centercode{Record Finite (A : Type) (EqDecision0 :\ EqDecision A) : Type := \\ \indent Build\_Finite \{ enum :\ list A;\\ \indent \indent NoDup\_enum :\ NoDup enum;\\
\indent \indent elem\_of\_enum : $\forall$ x :\ A, x $\in$ enum \}.}
Because whenever we want to reason about the two candidates (whether it is the preference of a voter or the result of the social choice function), we always have a third case where they tie, it is convenient to express the \texttt{candidate} type as \texttt{option bool}. Assuming the candidates are $x$ and $y$, this allows us to represent $x$ being preferred over $y$ by \texttt{Some true}, $y$ being preferred by \texttt{Some false}, and a tie with \texttt{None}. The use of \texttt{bool} also allows us to reason about the duality of preferences with simple \texttt{bool} functions like \texttt{negb}. The type for \texttt{preferences} and \texttt{social\_choice\_function} are simple function types\footnote{The Coq keyword \texttt{Definition} binds a term to a variable name.}\footnote{The text description of the theorem talks about a list of preferences, but in Coq, it is simpler to model the preferences of voters as a function type.}:
\centercode{Definition preferences := voter → candidate.\\ Definition social\_choice\_function := preferences → candidate. }
The \texttt{anonymous} property is expressed via a \texttt{swap} function, which swaps the preferences of two voters, while leaving the rest unchanged:
\centercode{Definition swap (v1 v2:voter) (p:preferences) :=\\
\indent fun v => \\
\indent \indent if bool\_decide (v=v1) then p v2 \\
\indent \indent else if bool\_decide (v=v2) then p v1 \\
\indent \indent else p v.\\ Definition anonymous (scf:social\_choice\_function):=\\
\indent $\forall$ p v1 v2, scf p = scf (swap v1 v2 p). }
The \texttt{neutral} property is expressed via a \texttt{flip} and \texttt{flip\_vote} function. The former flips a single \texttt{candidate} term, while the latter expresses the flipped version of the preferences of all voters:
\centercode{Definition flip cand := \\
\indent match cand with \\
\indent | Some b => Some (negb b)\\
\indent | None => None\\
\indent end.\\ Definition flip\_vote (p:preferences) :=\\
\indent fun v:voter=> flip (p v).\\ Definition neutral (scf:social\_choice\_function) :=\\
\indent $\forall$ p, scf p = flip (scf (flip\_vote p)). }
The \texttt{monotone} property is expressed with an \texttt{update} function that updates the preference of a single voter while leaving the rest unchanged:
\centercode{Definition update v i (p:preferences) := \\ \indent fun v':voter => \\ \indent \indent if bool\_decide(v'=v) then i else p v'.\\ Definition monotone (scf:social\_choice\_function) :=\\ \indent $\forall$ p , (scf p = Some true $\lor$ scf p = None)→ \\ \indent \indent ($\forall$ v, p v = Some false → scf (update v None p) = Some true) $\land$ \\ \indent \indent ($\forall$ v, p v = Some false → scf (update v (Some true) p) = Some true) $\land$ \\ \indent \indent ($\forall$ v, p v = None → scf (update v (Some true) p) = Some true).}
After defining \texttt{count}, \texttt{count\_helper}, and various predicate functions for candidates like \texttt{is\_some\_true}, we can express the \texttt{majority\_election} rule as such:
\centercode{Definition majority\_election (p:preferences) :=\\ \indent let true\_num := count is\_some\_true p in \\
\indent let false\_num := count is\_some\_false p in\\
\indent if bool\_decide (false\_num < true\_num ) then Some true \\
\indent else if bool\_decide (true\_num < false\_num) then Some false \\
\indent else None.}
Finally, one can express the main \texttt{mays\_thm} with the following proposition\footnote{The Coq keyword \texttt{Theorem} works the same as \texttt{Lemma}. }:
\centercode{Theorem mays\_thm scf: \\ \indent anonymous scf $\land$ neutral scf $\land$ monotone scf $\leftrightarrow$ scf = majority\_election .}
It should also be mentioned that the mechanical proof requires an additional axiom that is outside Coq's usual type-based calculus\footnote{The Coq keyword \texttt{Axiom} extends the environment with an axiom. }. Specifically, we include the axiom of functional extensionality. The reason for this inclusion is discussed in subsection~\ref{extensionality}.
\centercode{Axiom functional\_extensionality \{A B\} (f g :\ A → B):\\ \indent ($\forall$ x, f x = g x) → f = g. }
\section{If direction\label{if}}
For the if direction of the theorem, we prove that the simple majority voting system is anonymous, neutral, and monotone.
\subsection{Anonymity}
The anonymous property of the simple majority function is mainly supported by the following invariant on \texttt{swap}\footnote{The argument \texttt{f} ranges over predicate functions for candidates, like \texttt{is\_some\_false}.}:
\centercode{Lemma swap\_invariant\_count p f l v1 v2 \`{}\{!NoDup l\}:\\ \indent In v1 l → In v2 l → count\_helper f p l = count\_helper f (swap v1 v2 p) l. }
This lemma states that given a subset of voters, swapping the preferences of any two voters within the subset does not change the overall number of each type of preference. In fact, this lemma relies on another, similar one, which asserts that swapping the preferences of any two voters \textbf{not} within the subset also does not change the frequency of the types of preferences:
\centercode{Lemma swap\_not\_in\_list p f v1 v2 l:\\ \indent $\thicksim$In v1 l → $\thicksim$In v2 l → count\_helper f p l =\\ \indent \indent count\_helper f (swap v1 v2 p) l. }
It is also possible that we choose to swap the preference of a voter with itself, in which case, the list of preferences remains unchanged:
\centercode{Lemma swap\_same v p:\ (swap v v p) = p.}
\subsection{Neutrality\label{neutrality}}
The neutrality of the simple majority function is proved by showing that the frequency of \texttt{Some true} and \texttt{Some false} are swapped whenever we perform a \texttt{flip\_vote} on a list of preferences:
\centercode{Lemma flip\_reverse\_count1 p l:\\ \indent count\_helper is\_some\_false (flip\_vote p) l = \\ \indent count\_helper is\_some\_true p l.\\ Lemma flip\_reverse\_count2 p l:\\ \indent count\_helper is\_some\_true (flip\_vote p) l =\\ \indent count\_helper is\_some\_false p l. } \subsection{Monotonicity} To show that the simple majority function is monotone, I proved a number of lemmas that specify how the number of each type of preference changes if we update a certain voter's preference for each possible case. Here I present one of them, which states that changing a \texttt{Some false} vote to a \texttt{None} decreases the \texttt{Some false} vote frequency by one:
\centercode{Lemma update\_count\_lemma\_1 v p l \`{}\{NoDup l\}: \\ \indent p v = Some false → In v l →\\ \indent count\_helper is\_some\_false p l =\\ \indent \indent 1 + count\_helper is\_some\_false (update v None p) l. }
There is also the case where if we are considering a subset of voters, and we update someone not in the subset, this does not affect the frequency of each type of preference within the original subset:
\centercode{Lemma upgrade\_not\_in\_list f p v cand l: \\ $\thicksim$In v l → count\_helper f p l = count\_helper f (update v cand p) l. } \section{Only if direction\label{onlyif}}
For the only if direction of the theorem, we prove that any social choice function, denoted as \texttt{scf}, that is anonymous, neutral, and monotone is equal to the simple majority function.
Given our goal \texttt{scf = majority\_election}, by the axiom of functional extensionality, it suffices to prove that for any list of preferences, denoted as \texttt{p}, it is the case that \texttt{scf p = majority\_election p}. Subsequently, we perform case analysis of all the possible outcomes of \texttt{scf p} and \texttt{majority\_election p}. For the three cases where they match, the statement follows trivially. Thus, it suffices to show that if \texttt{scf p $\neq$ majority\_election p}, we can derive a contradiction, i.e., prove the \texttt{False} proposition.
There are six cases where \texttt{scf p $\neq$ majority\_election p}: \begin{center}
\begin{tabular}{ c | c | c}
Case number & Preference condition & \texttt{scf p} \\
\hline
1 & \texttt{count is\_some\_false p < count is\_some\_true p} & \texttt{Some false} \\
2 & \texttt{count is\_some\_false p < count is\_some\_true p} & \texttt{None} \\
3 & \texttt{count is\_some\_true p < count is\_some\_false p} & \texttt{Some true} \\
4 & \texttt{count is\_some\_true p < count is\_some\_false p} & \texttt{None} \\
5 & \texttt{count is\_some\_false p = count is\_some\_true p} & \texttt{Some false} \\
6 & \texttt{count is\_some\_false p = count is\_some\_true p} & \texttt{Some true} \\ \end{tabular} \end{center}
Actually, half of the cases are redundant. For example, if case number $3$ holds, we can reduce it to a variation of case number $1$. To see why this is the case, assume case number $3$ holds, and consider the list of preferences with each element flipped, i.e.\ \texttt{flip\_vote p}, denoted as \texttt{p$'$}. By the two lemmas from subsection~\ref{neutrality}, we have \texttt{count is\_some\_false p$'$ < count is\_some\_true p$'$}. In addition, by the neutrality condition of \texttt{scf}, we have \texttt{scf p$'$ = Some false}. We now have a variation of case number $1$ where we replace \texttt{p} with \texttt{p$'$}. In other words, we only need to find a contradiction for each of the following three cases:
\begin{center}
\begin{tabular}{ c | c | c}
Case number & Preference condition & \texttt{scf p} \\
\hline
1 & \texttt{count is\_some\_false p<count is\_some\_true p} & \texttt{Some false} \\
2 & \texttt{count is\_some\_false p<count is\_some\_true p} & \texttt{None} \\
3 & \texttt{count is\_some\_false p=count is\_some\_true p} & \texttt{Some false} \\ \end{tabular} \end{center}
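The count-swapping step behind this reduction can be checked exhaustively for a small electorate. The following Python sketch (mirroring \texttt{flip\_vote} with an ad hoc $1/-1/0$ encoding) confirms that flipping a profile swaps the two vote counts, so a contradiction for a case with \texttt{count is\_some\_false p < count is\_some\_true p} also disposes of its mirror case:

```python
from itertools import product

def flip(p):
    # mirrors flip_vote: negate each preference (1 = x, -1 = y, 0 = tie)
    return tuple(-v for v in p)

def counts(p):
    return p.count(1), p.count(-1)   # (#true votes, #false votes)

# every profile with a < b lands in a > b territory after a flip
for p in product((1, -1, 0), repeat=3):
    a, b = counts(p)
    if a < b:
        assert counts(flip(p)) == (b, a)
print("ok")
```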
For the rest of this section, unless otherwise specified, we use \texttt{a} and \texttt{b} to denote the number of \texttt{Some true} and \texttt{Some false} votes in the list of preferences \texttt{p}, respectively.
\subsection{Case 1: \texttt{a > b} and \texttt{scf p = Some false}} Suppose \texttt{a > b} and \texttt{scf p = Some false}. Consider \texttt{flip\_vote p}, the list of all the preferences flipped, denoted as \texttt{p1}. The following three statements can be proved:
\begin{enumerate}
\item Using proof by induction, the numbers of \texttt{Some true} and \texttt{Some false} votes in \texttt{p1} are swapped with respect to those of \texttt{p}, i.e.\ \texttt{count is\_some\_false p1 = a} and \texttt{count is\_some\_true p1 = b}.
\item By case analysis, the preference of a voter in \texttt{p} is \texttt{None} if and only if their preference in \texttt{p1} is \texttt{None}, i.e.\ \texttt{$\forall$ voter, p voter = None $\leftrightarrow$ p1 voter = None}.
\item By neutrality of \texttt{scf}, we have \texttt{scf p1 = Some true}. \end{enumerate}
We then construct the list of preferences \texttt{p2}, which is the same as \texttt{p1} except that the first \texttt{(a $-$ b)} \texttt{Some false} votes in \texttt{p1} are updated to \texttt{Some true} via the function \texttt{upgrade\_vote\_list}\footnote{The Coq keyword \texttt{Fixpoint} is the same as the keyword \texttt{Definition}, except that it is used specifically for recursive definitions. It also performs additional checks that the defined function is total.}:
\centercode{Fixpoint upgrade\_vote\_list p l:=\\
\indent match l with \\
\indent \indent | [] => p \\
\indent\indent | hd::tl => update hd (Some true) (upgrade\_vote\_list p tl)\\
\indent end. }
The following three statements then follow:
\begin{enumerate}
\item Using proof by induction, the numbers of \texttt{Some true} and \texttt{Some false} votes in \texttt{p2} are the same as those of \texttt{p}, i.e.\ \texttt{count is\_some\_true p2 = a} and \texttt{count is\_some\_false p2 = b}.
\item By showing that \texttt{None} votes are unchanged from \texttt{p1} to \texttt{p2}, we obtain that the preference of a voter in \texttt{p} is \texttt{None} if and only if their preference in \texttt{p2} is \texttt{None}, i.e.\ \texttt{$\forall$ voter, p voter = None $\leftrightarrow$ p2 voter = None}.
\item By monotonicity of \texttt{scf} and the following lemma, we have \texttt{scf p2 = Some true}:
\centercode{Lemma upgrade\_vote\_list\_monotone scf p l:\\
\hspace*{0.5cm} monotone scf → \\
\hspace*{0.5cm} scf p = Some true → \\
\hspace*{0.5cm} scf (upgrade\_vote\_list p l) = Some true.} \end{enumerate}
Lastly, after defining a function for swapping a list of pairs of preferences, we show that there exists a simple list of swaps that enables us to transform \texttt{p2} to \texttt{p}:
\centercode{Fixpoint swaps p l:=\\
\indent match l with \\
\indent | [] => p \\
\indent | (x,y)::tl => swap x y (swaps p tl)\\
\indent end.\\ Lemma same\_true\_num\_implies\_swappable p p': \\ \indent ($\forall$ x : voter, p x = None $\leftrightarrow$ p' x = None) → \\ \indent count is\_some\_true p = count is\_some\_true p' →\\ \indent $\exists$ l , p = swaps p' l. }
In fact, this list can be constructed easily: it is the result of zipping the list of voters who voted \texttt{Some true} in \texttt{p} and \texttt{Some false} in \texttt{p2}, and the list of voters who voted the other way round, as highlighted by the following function:
\centercode{Definition count\_true\_same\_swap\_list\_helper p p' l:= \\
\indent let l1:= left\_true\_right\_false p p' l in \\
\indent let l2:= left\_false\_right\_true p p' l in \\
\indent zip l1 l2. }
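The helpers \texttt{left\_true\_right\_false} and \texttt{left\_false\_right\_true} select, from a list of voters, those who voted \texttt{Some true} in the first profile and \texttt{Some false} in the second, and vice versa. A plausible reconstruction using \texttt{filter} (again, not necessarily the verbatim definition):
\centercode{Definition left\_true\_right\_false p p' l := \\
\indent filter (fun v => is\_some\_true (p v) \&\& is\_some\_false (p' v)) l. }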
Various properties of this list have to be proved. For example, one has to prove that the two lists are of the same length, so no element is dropped during the \texttt{zip} process:
\centercode{Lemma count\_true\_difference\_relation p p' l \`{}\{!NoDup l\}:\\
\indent ($\forall$ x, p x = None $\leftrightarrow$ p' x = None) →\\
\indent count\_helper is\_some\_true p l = count\_helper is\_some\_true p' l →\\ \indent length (left\_true\_right\_false p p' l) = \\ \indent \indent length (left\_false\_right\_true p p' l). }
Lastly, by the anonymity property of \texttt{scf}, we then have \texttt{scf p = Some true}. However, we started with the assumption that \texttt{scf p = Some false}, and thus a contradiction is achieved. \subsection{Case 2: \texttt{a > b} and \texttt{scf p = None}}
In this case, where \texttt{a > b} and \texttt{scf p = None}, the proof is similar to that of case number 1. However, during the construction of \texttt{p2}, we need a different invariant to prove that \texttt{scf p2 = Some true}. Specifically, we use the following lemma together with the fact that the number of voters to be updated is non-zero (since \texttt{a $-$ b > 0}):
\centercode{Lemma upgrade\_vote\_list\_monotone\_weak scf p l:\\ \indent monotone scf → (scf p = Some true $\lor$ scf p = None) →\\ \indent(scf (upgrade\_vote\_list p l) = Some true $\lor$ \\ \indent \indent scf (upgrade\_vote\_list p l) = None). }
\subsection{Case 3: \texttt{a = b} and \texttt{scf p = Some false}} In this case, where \texttt{a = b} and \texttt{scf p = Some false}, the proof for case number 1 carries over almost verbatim and yields a contradiction as well. The main difference is that we need not update any voters to produce \texttt{p2} from \texttt{p1} (as \texttt{a $-$ b = 0}), i.e.\ \texttt{p2 = p1}. \section{Discussion\label{discussion}}
\subsection{Why is this formalization useful?} The first obvious answer is that formalizing the proof of a theorem allows us to accept it as fact with more certainty. Humans make mistakes, and it is not uncommon to hear that mathematical proofs widely accepted by the community were later found to contain minor gaps and inaccuracies~\cite{paulson,gouezel}. While it is unlikely that May's theorem is false, formalizing it enables us to ensure we do not miss any edge cases in high-level proof sketches.
This project also enables one to gain a precise understanding of the techniques used within the proof. Defining each step and proving its properties rigorously allows the programmer to gain deeper insights into how and why a proof works. In particular, from a more personal perspective, the first proof I wrote relied on the axiom of excluded middle, an axiom from classical logic that is not included in Coq's intuitionistic logic system. Only during a thorough review of the code did I realize that the proof could be rewritten slightly to circumvent the axiom, making my most recent proof completely constructive.
Nonetheless, I believe usefulness is sometimes not the best way to justify a project's value. I mainly pursued this project simply because I was interested in it. To quote van Benthem Jutting's PhD thesis~\cite{jutting}:
\noindent "\textit{A further motive, for the author, was that the work involved in the project appealed to him.}"
\subsection{Is this theorem not obvious?} It might be true that most intermediate steps of the proof are not difficult to understand. However, this is not the same as saying that the theorem is obvious or that the project itself is trivial.
Firstly, as mentioned at the beginning, though most intermediate steps might be easy to comprehend, their proofs might be long and tedious. When reading a pen-and-paper proof, we automatically infer various properties implicitly, but in a proof assistant, everything must be stated and proved explicitly, e.g.\ that a list has no duplicates, or that an element is (or is not) contained in a list.
In addition, without knowing the trick beforehand, it is not at all straightforward to prove May's theorem, especially in the `only if' direction. Perhaps it is more accurate to describe the proof as elegantly short than to describe the theorem as trivial.
\subsection{Is this proof correct?\label{correct}} There are two potential sources of errors which might lead to the proof being incorrect. Luckily, both are unlikely.
Firstly, the Coq proof itself might not be sound, meaning that we can trace the error back to a bug in the kernel of the Coq proof assistant. Coq is based on the Calculus of Inductive Constructions~\cite{CIC}, which is a reliable and well-understood type theory. Coq also satisfies the de Bruijn criterion~\cite{criterion}, meaning it generates proof terms that can be verified by an independent and relatively small kernel. As a powerful verification tool that has been maintained for more than thirty years, one can safely trust and assume the correctness of Coq's kernel without losing much sleep.
Another source of error might be that the definitions state a completely different theorem from May's. To reduce the likelihood of this mistake, the definitions are written as clearly as possible and closely reflect the original texts.
There is one part where the Coq proof differs from the original paper. In May's paper, there is actually a fourth property in the set of sufficient and necessary properties for simple majority voting, called \textbf{decisiveness}, which states that the function must be defined and single-valued for all possible lists of preferences. This, however, is exactly the definition of a function in mathematics anyway! As such, many subsequent papers on the theorem omit this property from the statement~\cite{infinite}, and I follow suit.
\subsection{Is the axiom of extensionality necessary?\label{extensionality}} Recall that the mechanical proof assumes the axiom of extensionality, which is not provable in Coq's core logic. It is used because when we assert that two social choice functions are equal, we implicitly mean that the functions agree on every possible input.
One can circumvent the need for the axiom by redefining May's theorem slightly:
\centercode{Theorem mays\_thm2 scf: \\ \indent anonymous scf $\land$ neutral scf $\land$ monotone scf $\leftrightarrow$ \\ \indent $\forall$ p, scf p = majority\_election p.}
I nonetheless stuck with the first definition (see Section~\ref{definitions}), since I think that notion of equality is clearer, as justified in Subsection~\ref{correct}. \section{Conclusion and future directions\label{conclusion}} In this report, I showed how May's theorem is formalized in the Coq proof assistant. I discussed several aspects of the Coq proof, including the translation of the theorem statement into Coq, various lemmas used in the proof, and its correctness.
Since the original publication, various others have extended May's theorem in multiple ways, e.g.\ when there is an infinite number of voters~\cite{infinite}, or when there are more than two candidates~\cite{list}. A possible extension is to formalize those generalized theorems in Coq as well.
As the first formalization of May's theorem, this Coq proof takes another step toward formalizing fundamental results in social choice theory. Another extension would be to continue formalizing other interesting theorems in this area, such as the median voter theorem~\cite{black}, the McKelvey-Schofield chaos theorem~\cite{mckelvey}, and Sen's possibility theorem~\cite{sen}. It might also be worthwhile to develop a library containing voting-related primitives and lemmas for formalizing similar theorems.
\end{document} |
\begin{document}
\subjclass[2010]{Primary: 03F03; Secondary: 03F25, 03F50}
\begin{abstract} In generic realizability for set theories, realizers treat unbounded quantifiers generically. To this form of realizability, we add another layer of extensionality by requiring that realizers ought to act extensionally on realizers, giving rise to a realizability universe $\mathrm{V_{ex}}(A)$ in which the axiom of choice in all finite types, ${\sf AC}_{\ft}$, is realized, where $A$ stands for an arbitrary partial combinatory algebra. This construction furnishes ``inner models'' of many set theories that additionally validate ${\sf AC}_{\ft}$, in particular it provides a self-validating semantics for ${\sf CZF}$ (Constructive Zermelo-Fraenkel set theory) and ${\sf IZF}$ (Intuitionistic Zermelo-Fraenkel set theory). One can also add large set axioms and many other principles. \end{abstract}
\maketitle \tableofcontents
\section{Introduction} In this paper we define an extensional version of generic\footnote{The descriptive attribute ``generic'' for this kind of realizability is due to McCarty \cite[p.\ 31]{M84}.} realizability over any given partial combinatory algebra (pca) and prove that it provides a self-validating semantics for ${\sf CZF}$ (Constructive Zermelo-Fraenkel set theory) as well as ${\sf IZF}$ (Intuitionistic Zermelo-Fraenkel set theory), i.e., every theorem of ${\sf CZF}$ (${\sf IZF}$) is realized by just assuming the axioms of ${\sf CZF}$ (${\sf IZF}$) in the background theory. Moreover, it is shown that the axiom of choice in all finite types, ${\sf AC}_{\ft}$, also holds under this interpretation.\footnote{As a byproduct, we reobtain the already known result (e.g. \cite[4.31, 4.33]{R03m}) that augmenting ${\sf CZF}$ by ${\sf AC}_{\ft}$ does not increase the stock of provably recursive functions. Likewise, we reobtain the result (a consequence of \cite{friedman73}) that augmenting ${\sf IZF}$ by ${\sf AC}_{\ft}$ does not increase the stock of provably recursive functions.} This uniform tool of realizability can be combined with forcing to show that ${\sf IZF}+{\sf AC}_{\ft}$ is conservative over ${\sf IZF}$ with respect to arithmetic formulae (and similar results with large set axioms). For special cases, namely, finite type dependent choice, ${\sf DC}_{\ft}$, and finite type countable choice, ${\sf CAC}_{\ft}$,\footnote{${\sf DC}_{\ft}$ is the scheme $\forall x^{\sigma}\,\exists y^{\sigma}\,\varphi(x,y)\to \forall x^{\sigma}\, \exists f^{0\sigma}\,[ f(0)=x\;\wedge\; \forall n\,\varphi(f(n),f(n+1))]$, while ${\sf CAC}_{\ft}$ stands for the scheme $\forall n\, \exists y^{\tau}\, \varphi(n,y)\to \exists f^{0\tau}\, \forall n\,\varphi(n,f(n))$.} this has been shown in \cite[Theorem 5.1]{friedman_scedrov84} and \cite[XV.2]{B85}, but not for ${\sf AC}_{\ft}$. The same technology works for ${\sf CZF}$. 
However, for several subtheories of ${\sf CZF}$ (with exponentiation in lieu of subset collection) such conservativity results have already been obtained by Gordeev \cite{gordeev} by very different methods, using total combinatory algebras and an abstract form of realizability combined with genuine proof-theoretic machinery.
Generic\ realizability is markedly different from Kleene's number and function realizability as well as modified realizability. It originates with Kreisel's and Troelstra's \cite{KTr70} definition of realizability for second order Heyting arithmetic and the theory of species. Here, the clauses for the realizability relation $\Vdash$ relating to second order quantifiers are the following: $e\Vdash \forall X\, \phi(X)\Leftrightarrow \forall X\,e\Vdash \phi(X)$, $e\Vdash \exists X\, \phi(X)\Leftrightarrow \exists X\,e\Vdash \phi(X)$. This type of realizability does not seem to give any constructive interpretation to set quantifiers; realizing numbers ``pass through" quantifiers. However, one could also say that thereby the collection of sets of natural numbers is generically conceived. Kreisel-Troelstra realizability was applied to systems of higher order arithmetic and set theory by Friedman \cite{F73} and to further set theories by Beeson \cite{B79}. An immediate descendant of the interpretations of Friedman and Beeson was used by McCarty \cite{M84,M86}, who, unlike Beeson, devised realizability directly for extensional ${\sf IZF}$: {\em ``we found it a nuisance to interpret the extensional theory into the intensional before realizing." } (\cite[p.\ 82]{M84}). A further generalization introduced by McCarty, inspired by a remark of Feferman in \cite{F75}, was to draw realizers from applicative structures, i.e.\ arbitrary models of Feferman's theory $\mathrm{APP}$, rather than just from the natural numbers.
Generic\ realizability \cite{M84} is based on the construction of a realizability universe $\mathrm{V}(A)$ on top of an applicative structure or partial combinatory algebra $A$. Whereas in \cite{M84,M86} the approach is geared towards ${\sf IZF}$, making use of transfinite iterations of the powerset operation, it was shown in \cite{R06} that ${\sf CZF}$ suffices for a formalization of $\mathrm{V}(A)$ and the generic realizability based upon it. This tool has been successfully applied to the proof-theoretic analysis of ${\sf CZF}$ ever since \cite{R05a, R05, R08, S14}.
With regard to ${\sf AC}_{\ft}$, it is perhaps worth mentioning that, by using generic\ realizability \cite{M84}, one can show that ${\sf AC}_{0,\tau}$ for $\tau\in\{0,1\}$ holds in the realizability universe $\mathrm{V}(A)$ for any pca $A$ (cf.\ also \cite{DR19}). For instance, one can take Kleene's first algebra. With some effort, one can also see that ${\sf AC}_{1,\tau}$ for $\tau\in\{0,1\}$ holds in $\mathrm{V}(A)$ by taking, e.g., Kleene's second algebra. It is conceivable that one can construct a specific pca $A$ so as to validate ${\sf AC}_{\ft}$ in $\mathrm{V}(A)$. In this paper we show that, by building extensionality into the realizability universe and by adapting the definition of realizability, it is possible to satisfy choice in all finite types at once, regardless of the partial combinatory algebra $A$ one starts with.
Extensional variants of realizability in the context of (finite type) arithmetic have been investigated by Troelstra (see \cite{T98}) and van Oosten \cite{ Oosten97,Oosten08}, as well as \cite{F19, BS18}, and for both arithmetic and set theory by Gordeev in \cite{gordeev}. For earlier references on extensional realizability, in particular \cite{Grayson}, where the notion for first order arithmetic first appeared, and \cite{Pitts}, see Troelstra \cite[p.\ 441]{T98}.
\section{Partial combinatory algebras} Combinatory algebras are the brainchild of Sch\"onfinkel \cite{Schoen24} who presented his ideas in G\"ottingen in 1920. The quest for an optimization of his framework, singling out a minimal set of axioms, engendered much work and writings from 1929 onwards, notably by Curry \cite{Curry29,Curry30}, under the heading of {\em combinatory logic}. Curiously, a very natural generalization of Sch\"onfinkel's structures, where the application operation is not required to be always defined, was axiomatically characterized only in 1975 by Feferman in the shape of the most basic axioms of his theory $T_0$ of explicit mathematics \cite{F75}\footnote{In the literature, this subtheory of $T_0$ has been christened $\mathrm{EON}$ (for {\em elementary theory of operations and numbers}; see \cite[p.\ 102]{B85}) and $\mathrm{APP}$ (on account of comprising the {\em applicative axioms} of $T_0$; see \cite[Chapter 9, Section 5]{TvD88}). However, to be precise let us point out that $T_0$ as formulated in \cite{F79} differs from the original formulation in \cite{F75}: \cite{F79} has a primitive classification constant $\mathbb N$ for the natural numbers as well as constants for successor and predecessor on $\mathbb N$, and more crucially, equality is not assumed to be decidable and the definition-by-cases operation is restricted to $\mathbb N$.} and in \cite[p.\ 70]{F78a}. Feferman called these structures {\em applicative structures}.
\begin{notation} In order to introduce the notion of a pca, we shall start with that of a partial operational structure $(M,\cdot)$,
where $\cdot$ is just a partial binary operation on $M$. We use $a\cdot b\simeq c$ to convey that $a\cdot b$ is defined and equal to $c$. $a\cdot b\downarrow $ stands for $\exists c\, (a\cdot b\simeq c)$. In what follows, instead of $a\cdot b$ we will just write $ab$. We also employ the association to the left convention, meaning that e.g.
$abc\simeq d$ stands for the following: there exists $e$ such that $ab\simeq e$ and $ec\simeq d$. \end{notation}
\begin{definition} A {\em partial combinatory algebra} (pca) is a partial operational structure $(A,\cdot)$ such that $A$ has at least two elements and there are elements $\mb k$ and $\mb s$ in $A$ such that $\mb k a$, $\mb sa$ and $\mb sab$ are always defined, and \begin{itemize}
\item $\mb ka b\simeq a$;
\item $\mb sabc\simeq ac(bc)$. \end{itemize} The combinators $\mb k$ and $\mb s$ are due to Sch\"onfinkel \cite{Schoen24}, while the axiomatic treatment, although formulated just in the total case, is due to Curry \cite{Curry30}. The word ``combinatory" appears because of a property known as {\em combinatory completeness} described next. For more information on pcas see \cite{F75,F79, B85,Oosten08}. \end{definition}
\begin{definition} Given a pca $A$, one can form application terms over $A$ by decreeing that: \begin{enumerate}[(i)]
\item variables $x_1,x_2,\ldots$ and the constants $\mb k$ and $\mb s$ are applications terms over $A$;
\item elements of $A$ are application terms over $A$;
\item given application terms $s$ and $t$ over $A$, $(ts)$ is also an application term over $A$. \end{enumerate}
Application terms over $A$ will also be called $A$-terms. Terms generated solely by clauses (i) and (iii) will be called {\em application terms}.
An $A$-term $q$ without free variables has an obvious interpretation $q^A$ in $A$ by interpreting elements of $A$ by themselves and letting $(ts)^A$ be $t^A\cdot s^A$ with $\cdot$ being the partial operation of $A$. Of course, $q$ may fail to denote an element of $A$. We write $A\models q\downarrow$ (or just $ q\downarrow$) if it does, i.e., if $q^A$ yields an element of $A$. \end{definition}
The combinatory completeness of a pca $A$ is encapsulated in $\lambda$-abstraction (see \cite[p.\ 95]{F75}, \cite[p.\ 63]{F79}, and \cite[p.\ 101]{B85} for more details).
\begin{lemma}[$\lambda$-abstraction] For every term $t$ with variables among the distinct variables $x,x_1,\ldots,x_n$, one can find in an effective way a new term $s$, denoted $\lambda x.t$, such that \begin{itemize}
\item the variables of $s$ are the variables of $t$ except for $x$,
\item $s[a_1/x_1,\ldots,a_n/x_n]\downarrow$ for all $a_1,\ldots,a_n\in A$,
\item $(s[a_1/x_1,\ldots,a_n/x_n]) a\simeq t[a/x,a_1/x_1,\ldots,a_n/x_n]$ for all $a,a_1,\ldots,a_n\in A$. \end{itemize}
The term $\lambda x.t$ is built solely with the aid of $\mb k, \mb s$ and symbols occurring in $t$. \end{lemma}
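As a simple illustration, the identity combinator $\lambda x.x$ can be realized as $\mb s\mb k\mb k$: for every $a\in A$,
\[ \mb s\mb k\mb ka\simeq \mb ka(\mb ka)\simeq a. \]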
An immediate consequence of the foregoing abstraction lemma is the recursion theorem for pca's (see \cite[p.\ 96]{F75}, \cite[p.\ 63]{F79}, \cite[p.\ 103]{B85}). \begin{lemma}[Recursion theorem] There exists a closed application term $\mb f$ such that for every pca $A$ and $a,b\in A$ we have $A\models {\mb f}\downarrow$
and \begin{itemize}
\item $A\models \mb f a\downarrow$;
\item $A\models \mb f ab\simeq a(\mb fa) b$. \end{itemize} \end{lemma}
\begin{proof} The heuristic approach consists in finding a fixed point of the form $cc$. Let us search for $\mb f$ satisfying $\mb fa\simeq cc$, and hence find a solution of the equation
\[ ccb\simeq a(cc)b. \] By using $\lambda$-abstraction, we can easily arrange to have, for every $d$,
\[ cdb\simeq a(dd)b. \] Indeed, let $\mb f:=\lambda a.cc$, where $c:=\lambda db.a(dd)b$. Then $\mb f$ is as desired. \end{proof}
In every pca, one has pairing and unpairing\footnote{Let $\mb p=\lambda xyz.zxy$, $\mb{p_0}:=\lambda x.x\mb k$, and $\mb{p_1}:=\lambda x.x\bar{\mb k}$, where $\bar{\mb k}:=\lambda xy.y$. Projections $\mb{p_0}$ and $\mb{p_1}$ need not be total. For realizability purposes, however, it is not necessary to have total projections.} combinators $\mb p$, $\mb {p_0}$, and $\mb {p_1}$ such that: \begin{itemize}
\item $\mb pab\downarrow$;
\item $\mb {p_i}(\mb pa_0a_1)\simeq a_i$. \end{itemize}
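Unfolding the definitions from the footnote, the second equation can be checked directly; e.g.\ for $i=0$,
\[ \mb{p_0}(\mb pa_0a_1)\simeq (\mb pa_0a_1)\mb k\simeq \mb ka_0a_1\simeq a_0, \]
and similarly for $i=1$ with $\bar{\mb k}$ in place of $\mb k$.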
Generic realizability is based on partial combinatory algebras with some additional structure (see however Remark \ref{remark}).
\begin{definition} We say that $A$ is a pca over $\omega$ if there are extra combinators $\mb{succ}, \mb{pred}$ (successor and predecessor combinators), $\mb d$ (definition by cases combinator), and a map $n\mapsto \bar n$ from $\omega$ to $A$ such that for all $n\in \omega$
\begin{align*}
\succe \bar n&\simeq \overline{n+1}, & \pred\overline{n+1}&\simeq \bar n,
\end{align*}
\[ \mb d\bar n\bar mab\simeq
\begin{cases} a & n=m;\\ b & n\neq m. \end{cases}\] One then defines $\mb 0:=\bar 0$ and $\mb 1:=\bar 1$.
The notion of a pca over $\omega$ coincides with the notion of $\omega$-pca$^+$ in, e.g., \cite{R05a}. \end{definition}
Note that one can do without $\mb k$ by letting $\mb k:=\mb d\mb 0\mb 0$. The existence of $\mb d$ implies that the map $n\mapsto \bar n$ is one-to-one. In fact, suppose $\bar n=\bar m$ but $n\neq m$. Then $\mb d\bar n\bar n\simeq \mb d\bar n\bar m$. It then follows that $a\simeq \mb d\bar n\bar nab\simeq \mb d\bar n\bar mab\simeq b$ for all $a,b$, contradicting the fact that, by definition, every pca contains at least two elements.
\begin{remark}\label{remark} The notion of a pca over $\omega$ is a slightly impoverished one compared to that of a model of Beeson's theory $\mathbf{PCA}^+$ \cite[VI.2]{B85} or Feferman's applicative structures \cite{F79}. However, for our purposes all the differences between these structures are immaterial as every pca can be expanded to a model of $\mathbf{PCA}^+$, which at the same time is also an applicative structure (see \cite[VI.2.9]{B85}).
By using, say, Curry numerals, one obtains a combinator $\mb d$ for this representation of natural numbers. So, every pca can be turned into a pca over $\omega$ by using Curry numerals. On the other hand, the notion of pca over $\omega$ allows for other possible representations of natural numbers. Note that the existence of a combinator $\mb d$ for a given representation of natural numbers (together with a predecessor combinator), entails the existence of a primitive recursion operator $\mb r$ for such representation, that is, an element $\mb r$ such that: \begin{align*} \mb rab\bar 0&\simeq a;\\ \mb rab\overline{n+1}&\simeq b(\mb rab\bar n)\bar n. \end{align*} \end{remark}
\section{The theory ${\sf CZF}$} The logic of ${\sf CZF}$ (Constructive Zermelo-Fraenkel set theory) is intuitionistic first order logic with equality. The only nonlogical symbol is $\in$ as in classical Zermelo-Fraenkel set theory $\sf ZF$. \begin{center} Axioms \end{center}
1. \textbf{Extensionality}: $\forall x\, \forall y\, (\forall z\, (z\in x\leftrightarrow z\in y)\rightarrow x=y)$,
2. \textbf{Pairing}: $\forall x\, \forall y\, \exists z\, (x\in z\land y\in z)$,
3. \textbf{Union}: $\forall x\, \exists y\, \forall u\, \forall z\, (u\in z\land z\in x\rightarrow u\in y)$,
4. \textbf{Infinity}: $\exists x\, \forall y\, (y\in x\leftrightarrow y=0\lor \exists z\in x\, (y=z\cup\{z\}))$,
5. \textbf{Set induction}: $\forall x\, (\forall y\in x\, \varphi(y)\rightarrow \varphi(x))\rightarrow \forall x\, \varphi(x)$, for all formulae $\varphi$,
6. \textbf{Bounded separation}: $\forall x\, \exists y\, \forall z\, (z\in y\leftrightarrow z\in x\land \varphi(z))$, for $\varphi$ bounded, where a formula is bounded if all quantifiers appear in the form $\forall x\in y$ and $\exists x\in y$,
7. \textbf{Strong collection}: $\forall u\in x\, \exists v\, \varphi(u,v)\rightarrow \exists y\, (\forall u\in x\, \exists v\in y\, \varphi(u,v)\land \forall v\in y\, \exists u\in x\, \varphi(u,v))$, for all formulae $\varphi$,
8. \textbf{Subset collection}: $\forall x\, \forall y\, \exists z\, \forall p\, (\forall u\in x\, \exists v\in y\, \varphi(u,v,p)\rightarrow \exists q\in z\, (\forall u\in x\, \exists v\in q\, \varphi(u,v,p)\land \forall v\in q\, \exists u\in x\, \varphi(u,v,p)))$, for all formulae $\varphi$.
\begin{notation} Let $x=0$ be $\forall y\in x\, \neg (y=y)$ and $x=y\cup\{y\}$ be $\forall z\in x\, (z\in y\lor z=y)\land \forall z\in y\, (z\in x)\land y\in x$. \end{notation}
\section{Finite types and axiom of choice}
Finite types $\sigma$ and their associated extensions $F_\sigma$ are defined by the following clauses: \begin{itemize}
\item $o\in\ft$ and $F_o=\omega$;
\item if $\sigma,\tau\in\ft$, then $(\sigma)\tau\in\ft$ and \[F_{(\sigma)\tau}=F_\sigma\to F_\tau=\{ \text{total functions from $F_\sigma$ to $F_\tau$}\}.\] \end{itemize} For brevity we write $\sigma\tau$ for $(\sigma)\tau$, if the type $\sigma$ is written as a single symbol. We say that $x\in F_\sigma$ has type $\sigma$.
The set $\ft$ of all finite types, the set $\{ F_\sigma\colon \sigma\in\ft\}$, and the set $\mathbb{F}=\bigcup_{\sigma\in\ft}F_\sigma$ all exist in ${\sf CZF}$.
\begin{definition}[Axiom of choice in all finite types] The schema ${\sf AC}_{\ft}$ consists of formulae \[ \tag{${\sf AC}_{\sigma,\tau}$} \forall x^\sigma\, \exists y^\tau\, \varphi(x,y)\rightarrow \exists f^{\sigma\tau}\, \forall x^\sigma\, \varphi(x,f(x)), \] where $\sigma$ and $\tau$ are (standard) finite types. \end{definition}
\begin{notation} We write $\forall x^\sigma\, \varphi(x)$ and $\exists x^\sigma\, \varphi(x)$ as a shorthand for $\forall x\, (x\in F_\sigma\rightarrow \varphi(x))$ and $\exists x\, (x\in F_\sigma\land \varphi(x))$ respectively. \end{notation}
\section{Defining extensional realizability in ${\sf CZF}$}
In ${\sf CZF}$, given a pca $A$ over $\omega$, we inductively define a class $\mathrm{V_{ex}}(A)$ such that \[\forall x\, (x\in\mathrm{V_{ex}}(A)\leftrightarrow x\subseteq A\times A\times \mathrm{V_{ex}}(A)).\] The intuition for $\pair{a,b,y}\in x$ is that $a$ and $b$ are \emph{equal realizers} of the fact that $y^A\in x^A$, where $x^A=\{y^A\colon \pair{a,b,y}\in x\text{ for some } a,b\in A\}$.
General information on how to handle inductive definitions in ${\sf CZF}$ can be found in \cite{A86,AR01,czf2}. The inductive definition of $\mathrm{V_{ex}}(A)$ within ${\sf CZF}$ is on par with that of $\mathrm{V}(A)$, the specifics of which appear in \cite[3.4]{R06}.
\begin{notation} We use $(a)_i$ or simply $a_i$ for $\mb {p_i}a$. Whenever we write an application term $t$, we assume that it is defined. In other words, a formula $\varphi(t)$ stands for $\exists a\, (t\simeq a\land \varphi(a))$. \end{notation}
\begin{definition}[Extensional realizability] We define the relation $a=b\Vdash \varphi$, where $a,b\in A$ and $\varphi$ is a realizability formula with parameters in $\mathrm{V_{ex}}(A)$. The atomic cases fall under the scope of definitions by transfinite recursion. \begin{align*} a=b&\Vdash x\in y && \Leftrightarrow && \exists z\, (\langle (a)_0,(b)_0,z\rangle \in y\land (a)_1=(b)_1\Vdash x=z)\\ a=b& \Vdash x=y && \Leftrightarrow &&\forall \langle c,d,z\rangle \in x\, ((ac)_0=(bd)_0\Vdash z\in y) \text{ and } \\ &&&&& \forall \langle c,d,z\rangle \in y\, ((ac)_1=(bd)_1\Vdash z\in x)\\ a=b& \Vdash \varphi\land \psi && \Leftrightarrow && (a)_0=(b)_0\Vdash \varphi \land (a)_1=(b)_1\Vdash \psi \\ a=b& \Vdash \varphi\lor\psi && \Leftrightarrow && (a)_0\simeq(b)_0\simeq \mb 0\land (a)_1=(b)_1\Vdash \varphi \text{ or } \\ &&&&& (a)_0\simeq (b)_0\simeq \mb 1\land (a)_1=(b)_1\Vdash \psi \\ a=b&\Vdash \neg\varphi && \Leftrightarrow && \forall c, d\, \neg (c=d\Vdash \varphi) \\ a=b&\Vdash \varphi\rightarrow\psi && \Leftrightarrow && \forall c,d\, (c=d\Vdash \varphi\rightarrow ac=bd\Vdash \psi) \\ a=b& \Vdash \forall x\in y\, \varphi && \Leftrightarrow && \forall \langle c,d,x\rangle\in y\, (ac=bd\Vdash \varphi) \\ a=b&\Vdash \exists x\in y\, \varphi && \Leftrightarrow && \exists x\, (\langle (a)_0,(b)_0,x\rangle \in y\land (a)_1=(b)_1\Vdash \varphi)\\ a=b& \Vdash \forall x\, \varphi && \Leftrightarrow && \forall x\in \mathrm{V_{ex}}(A)\, (a=b\Vdash \varphi) \\ a=b& \Vdash \exists x\, \varphi && \Leftrightarrow && \exists x\in \mathrm{V_{ex}}(A)\, (a=b\Vdash \varphi) \end{align*} \end{definition} \begin{notation} We write $a\Vdash \varphi$ for $a=a\Vdash \varphi$. \end{notation}
The above definition builds on the variant \cite{R06} of generic\ realizability \cite{M84}, where bounded quantifiers are treated as quantifiers in their own right. Note that in the language of ${\sf CZF}$, bounded quantifiers can be seen as syntactic sugar by letting $\forall x\in y\, \varphi:=\forall x\, (x\in y\rightarrow \varphi)$ and $\exists x\in y\, \varphi:=\exists x\, (x\in y\land \varphi)$. Nothing gets lost in translation, thanks to the following. \begin{lemma}
There are closed application terms $\mb u$ and $\mb v$ such that ${\sf CZF}$ proves
\[ \mb u\Vdash \forall x\in y\, \varphi \leftrightarrow \forall x\, (x\in y\rightarrow \varphi), \]
\[ \mb v\Vdash \exists x\in y\, \varphi \leftrightarrow \exists x\, (x\in y\land \varphi). \] \end{lemma} The advantage of having special clauses for bounded quantifiers is that it considerably simplifies the construction of realizers.
\begin{remark} In the context of (finite type) arithmetic, extensional notions of realizability typically give rise to a partial equivalence relation. Namely, for every formula $\varphi$, the relation $\{(a,b)\in A^2\colon a=b\Vdash \varphi\}$ is symmetric and transitive. This is usually seen by induction on $\varphi$, the atomic case being trivial. The situation, though, is somewhat different in set theory. Say that $a=b\Vdash x\in y$ and $b=c\Vdash x\in y$. All we know is that for some $u,v\in\mathrm{V_{ex}}(A)$ we have that $\pair{(a)_0,(b)_0,u}, \pair{(b)_0,(c)_0,v}\in y$, $(a)_1=(b)_1\Vdash x=u$, and $(b)_1=(c)_1\Vdash x=v$. Since $u$ and $v$ need not be the same set, even if elements of $\mathrm{V_{ex}}(A)$ behave as expected, that is, $\{(a,b)\colon \pair{a,b,y}\in x\}$ is symmetric and transitive for any given $x,y\in\mathrm{V_{ex}}(A)$,\footnote{One could inductively define $\mathrm{V_{ex}}(A)$ so as to make $\{(a,b)\in A^2\colon \pair{a,b,y}\in x\}$ symmetric and transitive. Just let $x\in \mathrm{V_{ex}}(A)$ if and only if
\begin{itemize}
\item $x$ consists of triples $\pair{a,b,y}$ with $y\in \mathrm{V_{ex}}(A)$;
\item whenever $\pair{a,b,y}\in x$, $\pair{b,a,y}\in x$;
\item whenever $\pair{a,b,y}\in x$ and $\pair{b,c,y}\in x$, also $\pair{a,c,y}\in x$. \end{itemize}} we cannot conclude that $a=c\Vdash x\in y$. So, transitivity can fail.
As it turns out, for our purposes, this is not an issue at all. Note however that the canonical names for objects of finite type do indeed behave as desired and so does the relation $a=b\Vdash \varphi$ for formulas of finite type arithmetic. This is in fact key in validating the axiom of choice in all finite types (Section \ref{finite type}). Except for this deviation, the clauses for connectives and quantifiers follow the general blueprint of extensional realizability. We thus feel justified in keeping the notation $a=b\Vdash \varphi$. \end{remark}
\section{Soundness for intuitionistic first order logic with equality}
From now on, let $A$ be a pca over $\omega$ within ${\sf CZF}$. Realizability of the equality axioms relies on the following fact about pca's.
\begin{lemma}[Double recursion theorem] There are combinators $\mb g$ and $\mb h$ such that, for all $a,b,c\in A$: \begin{itemize}
\item $\mb gab\downarrow$ and $\mb hab\downarrow$;
\item $\mb gabc\simeq a(\mb hab)c$;
\item $\mb habc\simeq b(\mb gab)c$. \end{itemize} \end{lemma} \begin{proof}
Let $t(a,b):=\lambda xc.a(\lambda c.bxc)c$. Set $\mb g:=\lambda ab.\mb ft(a,b)$, where $\mb f$ is the fixed point operator from the recursion theorem. Set $\mb h:=\lambda abc.b(\mb ft(a,b))c$. Verify that $\mb g$ and $\mb h$ are as desired. \end{proof}
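To see concretely what the double recursion theorem provides, here is a sketch in which Python closures stand in for pca elements. The genuine proof manufactures $\mb g$ and $\mb h$ via the fixed point operator $\mb f$; this toy version instead leans on Python's own recursion (an admitted simplification), and the even/odd usage example is ours.

```python
def g(a, b):
    # gabc ≃ a(hab)c: feed a the companion fixed point h(a, b)
    return lambda c: a(h(a, b))(c)

def h(a, b):
    # habc ≃ b(gab)c: feed b the companion fixed point g(a, b)
    return lambda c: b(g(a, b))(c)

# Usage: mutually recursive even/odd predicates on naturals.
# a expects "odd" and decides evenness; b expects "even" and decides oddness.
a = lambda odd: lambda n: True if n == 0 else odd(n - 1)
b = lambda even: lambda n: False if n == 0 else even(n - 1)

even, odd = g(a, b), h(a, b)
```

Note that `g(a, b)` and `h(a, b)` always return closures, mirroring the clauses $\mb gab\downarrow$ and $\mb hab\downarrow$ of the lemma.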
\begin{lemma}\label{equality} There are closed application terms $\mb{i_r}$, $\mb{i_s}$, $\mb{i_t}$, $\mb{i_0}$ and $\mb{i_1}$ such that ${\sf CZF}$ proves, for all $x,y,z\in\mathrm{V_{ex}}(A)$, \begin{enumerate}[\quad $(1)$] \item $\mb{i_r}\Vdash x=x$; \item $\mb{i_s}\Vdash x=y\rightarrow y=x$; \item $\mb{i_t}\Vdash x=y\land y=z\rightarrow x=z$; \item $\mb{i_0}\Vdash x=y\land y\in z\rightarrow x\in z$; \item $\mb{i_1}\Vdash x=y\land z\in x\rightarrow z\in y$. \end{enumerate} \end{lemma} \begin{notation} Write, say, $a_{ij}$ for $\mb{p_j}(\mb{p_i} a)$. \end{notation} \begin{proof} (1) By the recursion theorem in $A$, we can find $\mb{i_r}$ such that \[ \mb{i_r}a\simeq \mb p(\mb p a\mb{i_r})(\mb p a\mb{i_r}). \] By set induction, we show that $\mb{i_r}\Vdash x=x$ for every $x\in\mathrm{V_{ex}}(A)$. Let $\pair{a,b,y}\in x$. We want $(\mb{i_r}a)_0=(\mb{i_r}b)_0\Vdash y\in x$. Now $(\mb{i_r}a)_{00}\simeq a$ and similarly for $b$. On the other hand, $(\mb{i_r}a)_{01}\simeq(\mb{i_r}b)_{01}\simeq \mb{i_r}$. By induction, $\mb{i_r}\Vdash y=y$, and so we are done. Similarly for $(\mb{i_r}a)_1=(\mb{i_r}b)_1\Vdash y\in x$.
(2) We just need to interchange. Let \[ \mb{i_s}:=\lambda ac.\mb p (ac)_1(ac)_0. \] Suppose $a=b\Vdash x=y$. We want $\mb{i_s}a=\mb{i_s}b\Vdash y=x$. Let $\pair{c,d,z}\in y$. By definition, $(ac)_1=(bd)_1\Vdash z\in x$. Now $(ac)_1\simeq (\mb{i_s}ac)_0$, and similarly $(bd)_1\simeq (\mb{i_s}bd)_0$. Then we are done. Similarly for the other direction.
(3,4) Combinators $\mb{i_t}$ and $\mb{i_0}$ are defined by a double recursion in $A$. By induction on triples $\pair{x,y,z}$, one then shows that $\mb{i_t}\Vdash x=y\land y=z\rightarrow x=z$ and $\mb{i_0}\Vdash x=y\land y\in z\rightarrow x\in z$. Eventually, $\mb {i_t}$ and $\mb {i_0}$ are solutions of equations of the form \begin{align*}
\mb {i_t}a& \simeq \mb t \mb {i_0}a, \\
\mb {i_0}a&\simeq \mb r\mb {i_t}a, \end{align*} where $\mb t$ and $\mb r$ are given closed application terms. These are given by the fixed point operators from the double recursion theorem.
(5) Set \[ \mb{i_1}:=\lambda a.\mb p (a_0a_{10})_{00}(\mb{i_t}(\mb p a_{11}(a_0a_{10})_{01})). \] \end{proof}
\begin{theorem}\label{int sound} For every formula $\varphi(x_1,\ldots,x_n)$ provable in intuitionistic first order logic with equality, there exists a closed application term $\mb e$ such that ${\sf CZF}$ proves $\mb e\Vdash \forall x_1\cdots \forall x_n\, \varphi(x_1,\ldots,x_n)$. \end{theorem} \begin{proof} The proof is similar to \cite[5.3]{M84} and \cite[4.3]{R06}. \end{proof}
\section{Soundness for ${\sf CZF}$}
We start with a lemma concerning bounded separation.
\begin{lemma}[${\sf CZF}$]\label{bounded}
Let $\varphi(u)$ be a bounded formula with parameters from $\mathrm{V_{ex}}(A)$ and $x\subseteq\mathrm{V_{ex}}(A)$.
Then
\[ \{\pair{a,b,u}\colon a,b\in A\land u\in x\land a=b\Vdash \varphi(u)\} \]
is a set. \end{lemma} \begin{proof}
As in \cite[Lemma 4.5, Lemma 4.6, Corollary 4.7]{R06}. \end{proof}
\begin{theorem}\label{czf sound} For every theorem $\varphi$ of ${\sf CZF}$, there is a closed application term $\mb e$ such that ${\sf CZF}$ proves $\mb e\Vdash \varphi$. \end{theorem}
\begin{proof} In view of Theorem \ref{int sound}, it is sufficient to show that every axiom of ${\sf CZF}$ has a realizer. The proof is similar to that of \cite[Theorem 5.1]{R06}. The rationale is simple: use the same realizers, duplicate the names. Remember that $\mb e\Vdash\varphi$ means $\mb e=\mb e\Vdash\varphi$.
\textbf{Extensionality}. Let $x,y\in\mathrm{V_{ex}}(A)$. Suppose $a=b\Vdash z\in x\leftrightarrow z\in y$ for all $z\in\mathrm{V_{ex}}(A)$. We look for $\mb e$ such that $\mb ea=\mb eb\Vdash x=y$. Set \[ \mb e:=\lambda ac.\mb p(a_0(\mb p c\mb{i_r}))(a_1(\mb p c\mb{i_r})). \] Suppose $\pair{c,d,z}\in x$. Then $\mb p c\mb{i_r}=\mb p d\mb{i_r}\Vdash z\in x$, since $\mb{i_r}\Vdash z=z$. Then $a_0(\mb p c\mb{i_r})=b_0(\mb p d\mb{i_r})\Vdash z\in y$. Therefore, $(\mb eac)_0=(\mb ebd)_0\Vdash z\in y$, as desired. The other direction is similar.\\
\textbf{Pairing}. Find $\mb e$ such that for all $x,y\in \mathrm{V_{ex}}(A)$, \[ \mb e\Vdash x\in z\land y\in z, \] for some $z\in \mathrm{V_{ex}}(A)$. Let $x,y\in\mathrm{V_{ex}}(A)$ be given. Define $z=\{\pair{\mb 0,\mb 0,x},\pair{\mb 0,\mb 0,y}\}$. Let \[ \mb e=\mb p (\mb p \mb 0\mb{i_r})(\mb p \mb 0\mb{i_r}). \]
\textbf{Union}. Find $\mb e$ such that for all $x\in \mathrm{V_{ex}}(A)$, \[ \mb e\Vdash \forall u\in x\, \forall v\in u\, (v\in y), \] for some $y\in \mathrm{V_{ex}}(A)$. Given $x\in\mathrm{V_{ex}}(A)$, let $y=\{\pair{c,d,v}\colon \exists \pair{a,b,u}\in x\, (\pair{c,d,v}\in u)\}$. Set $\mb e:=\lambda ac.\mb p c\mb{i_r}$.\\
\textbf{Infinity}. Let $\dot\omega=\{\pair{\bar n,\bar n,\dot n}\colon n\in\omega\}$, where $\dot n=\{\pair{\bar m,\bar m, \dot m}\colon m<n\}$. Let us find $\mb e$ such that for all $y\in\mathrm{V_{ex}}(A)$,
\[ \mb e\Vdash y\in \dot \omega\leftrightarrow y=0\lor \exists z\in \dot\omega\, (y=z\cup\{z\}). \] Recall that $y=0$ stands for $\forall x\in y\, \neg (x=x)$ and $y=z\cup\{z\}$ stands for $\forall x\in y\, (x\in z\lor x=z)\land (\forall x\in z\, (x\in y)\land z\in y)$.
Let $\vartheta(y):=y=0\lor \exists z\in \dot\omega\, (y=z\cup\{z\})$. We want $\mb e$ such that for every $y\in\mathrm{V_{ex}}(A)$ \[ \tag{1} \mb e_0\Vdash y\in\dot\omega\rightarrow \vartheta(y), \] \[ \tag{2} \mb e_1\Vdash \vartheta(y)\rightarrow y\in\dot\omega. \]
Let us first consider (1). Suppose $a=b\Vdash y\in\dot \omega$. We want $\mb e_0a=\mb e_0b\Vdash \vartheta(y)$.
By definition, there is $n\in\omega$ such that $a_0\simeq b_0\simeq \bar n$ and $a_1=b_1\Vdash y=\dot n$.
Case $n=0$. Then $\mb 0\Vdash y=0$, and so $\mb p\mb 0\mb 0\Vdash \vartheta(y)$.
Case $n>0$. We have $\pred a_0\simeq \pred b_0\simeq \bar m$ with $n=m+1$. We aim for a term $t(x)$ such that $t(a)=t(b)\Vdash \exists z\in\dot\omega\, (y=z\cup\{z\})$ by requiring \[ t(a)_0\simeq t(b)_0\simeq \bar m, \] \[ \tag{3} t(a)_1=t(b)_1\Vdash y=\dot m\cup\{\dot m\}. \] If we succeed, then \[ \mb p\mb 1 t(a)=\mb p\mb 1t(b)\Vdash \vartheta(y). \] Now, (3) amounts to \begin{align}
\tag{4} t(a)_{10}=t(b)_{10}&\Vdash \forall x\in y\, (x\in \dot m\lor x=\dot m)\\
\tag{5} t(a)_{110}=t(b)_{110}&\Vdash \forall x\in \dot m\, (x\in y)\\
\tag{6} t(a)_{111}=t(b)_{111}&\Vdash \dot m\in y \end{align} Part (4). Let $\pair{c,d,x}\in y$. Then $(a_1c)_0=(b_1d)_0\Vdash x\in \dot n$, that is, \[ \pair{(a_1c)_{00},(b_1d)_{00},\dot k}\in \dot n, \] \[ (a_1c)_{01}=(b_1d)_{01}\Vdash x=\dot k, \] where $(a_1c)_{00}\simeq (b_1d)_{00}\simeq \bar k$. Here we have two more cases. If $k=m$, then \[ \mb p\mb 1 (a_1c)_{01}=\mb p\mb 1(b_1d)_{01}\Vdash x\in \dot m\lor x=\dot m. \] If $k<m$, then $\pair{\bar k,\bar k,\dot k}\in\dot m$ and $\mb p\bar k(a_1c)_{01}=\mb p\bar k(b_1d)_{01}\Vdash x\in \dot m$, so that \[ \mb p\mb 0(\mb p\bar k(a_1c)_{01})=\mb p\mb 0(\mb p\bar k(b_1d)_{01}) \Vdash x\in\dot m\lor x=\dot m. \] Then $t(a)$ such that \[ t(a)_{10}\simeq \lambda c.\mb d(a_1c)_{00}(\pred a_0)(\mb p\mb 1(a_1c)_{01})(\mb p\mb 0(a_1c)_0)\] is as desired.
Parts (5) and (6). Let $t(a)$ satisfy \[ t(a)_{110}\simeq \lambda x.(a_1x)_1, \] \[ t(a)_{111}\simeq (a_1(\pred a_0))_1. \]
We want $\mb e$ such that \[ \mb e_0\simeq\lambda a.\mb d\mb 0a_0(\mb p\mb 0\mb 0)(\mb p\mb 1t(a)). \] Then $\mb e_0$ does the job.
As for (2), suppose $a=b\Vdash \vartheta(y)$. We want $\mb e_1a=\mb e_1b\Vdash y\in \dot \omega$. By unravelling the definitions, we obtain two cases.
(i) $a_0\simeq b_0\simeq \mb 0$ and $a_1=b_1\Vdash y=0$. It follows that $y=\dot 0$ and so $\mb{i_r}\Vdash y=\dot 0$. Therefore $\mb p a_0\mb{i_r}=\mb p b_0\mb{i_r}\Vdash y\in\dot\omega$, as $\pair{\mb 0,\mb 0,\dot 0}\in\dot\omega$.
(ii) $a_0\simeq b_0\simeq \mb 1$ and $a_1=b_1\Vdash \exists z\in\dot \omega\, (y=z\cup\{z\})$. Then there exists $m\in\omega$ such that $a_{10}\simeq b_{10}\simeq \bar m$ and \[ \tag{7} a_{11}=b_{11}\Vdash y=\dot m\cup\{\dot m\}. \] We aim for a term $s(x)$ such that $s(a)=s(b)\Vdash y=\dot{n}$, where $n=m+1$. If we succeed, then \[ \mb p(\succe a_{10})s(a)=\mb p(\succe b_{10})s(b) \Vdash y\in\dot\omega. \] Note in fact that $\succe a_{10}\simeq \succe b_{10}\simeq \bar n$.
For the left to right inclusion, suppose $\pair{c,d,x}\in y$. Our goal is $(s(a)c)_0=(s(b)d)_0\Vdash x\in\dot n$. It follows from (7) that \[ a_{110}=b_{110}\Vdash \forall x\in y\, (x\in\dot m\lor x=\dot m), \] and therefore \[ \tag{8} a_{110}c=b_{110}d\Vdash x\in\dot m\lor x=\dot m. \] From (8) we get two more cases. First case: $(a_{110}c)_0\simeq (b_{110}d)_0\simeq \mb 0$ and $(a_{110}c)_1=(b_{110}d)_1\Vdash x\in\dot m$. Then one can verify that \[ (a_{110}c)_1=(b_{110}d)_1\Vdash x\in\dot n. \] Second case: $(a_{110}c)_0\simeq (b_{110}d)_0\simeq \mb 1$ and $(a_{110}c)_1=(b_{110}d)_1\Vdash x=\dot m$. Then \[ \mb p\bar m(a_{110}c)_1=\mb p\bar m(b_{110}d)_1\Vdash x\in \dot n. \] Let $s(x)$ be such that \[ (s(a)c)_0\simeq \mb d\mb 0(a_{110}c)_0(a_{110}c)_1(\mb p a_{10}(a_{110}c)_1). \]
For the right to left inclusion, suppose $k<n$. Our goal is $(s(a)\bar k)_1=(s(b)\bar k)_1\Vdash \dot k\in y$. It follows from (7) that \begin{align*}
\tag{9} a_{1110}=b_{1110}&\Vdash \forall x\in\dot m\, (x\in y), \\
\tag{10} a_{1111}=b_{1111}&\Vdash \dot m\in y. \end{align*} If $k<m$, then $\pair{\bar k,\bar k, \dot k}\in \dot m$, and hence $a_{1110}\bar k=b_{1110}\bar k\Vdash \dot k\in y$ by (9). On the other hand, if $k=m$ then (10) gives us the realizers. Therefore let $s(x)$ be such that \[ (s(a)\bar k)_1\simeq \mb d \bar ka_{10}a_{1111}(a_{1110}\bar k). \]
We thus want $\mb e$ such that \[ \mb e_1\simeq \lambda a.\mb d\mb 0a_0(\mb pa_0\mb{i_r})(\mb p(\succe a_{10})s(a)).\] Then $\mb e_1$ does the job. \\
\textbf{Set induction}. By the recursion theorem, let $\mb e$ be such that $\mb ea\simeq a(\lambda c.\mb ea)$. Prove that \[ \mb e\Vdash \forall x\, (\forall y\in x\, \varphi(y)\rightarrow \varphi(x))\rightarrow \forall x\, \varphi(x). \] Let $a=b\Vdash \forall x\, (\forall y\in x\, \varphi(y)\rightarrow \varphi(x))$. By definition, $a=b\Vdash \forall y\in x\, \varphi(y)\rightarrow \varphi(x)$ for every $x\in\mathrm{V_{ex}}(A)$. By set induction, we show that $\mb ea=\mb eb\Vdash \varphi(x)$ for every $x\in\mathrm{V_{ex}}(A)$. Assume by induction that $\mb ea=\mb eb\Vdash \varphi(y)$ for every $\pair{c,d,y}\in x$. This means that $\lambda c.\mb ea=\lambda d.\mb eb\Vdash \forall y\in x\, \varphi(y)$. Then $a(\lambda c.\mb ea)=b(\lambda d.\mb eb)\Vdash \varphi(x)$. The conclusion $\mb ea=\mb eb\Vdash \varphi(x)$ follows. \\
\textbf{Bounded separation}. Find $\mb e$ such that for all $x\in\mathrm{V_{ex}}(A)$, \[ \mb e\Vdash \forall u\in y\, (u\in x\land \varphi(u))\land \forall u\in x\, (\varphi(u)\rightarrow u\in y), \] for some $y\in\mathrm{V_{ex}}(A)$. Given $x\in\mathrm{V_{ex}}(A)$, let \[ y=\{ \pair{\mb p ac,\mb p bd,u}\colon \pair{a,b,u}\in x\land c=d\Vdash \varphi(u)\}. \] It follows from Lemma \ref{bounded} that $y$ is a set. Moreover, $y$ belongs to $\mathrm{V_{ex}}(A)$. We want $\mb e$ such that \begin{align*} \mb e_0&\Vdash \forall u\in y\, (u\in x\land \varphi(u)),\\ \mb e_1&\Vdash \forall u\in x\, (\varphi(u)\rightarrow u\in y). \end{align*} By letting $\mb e=\mb p e_0e_1$, where \begin{align*} e_0&:=\lambda f.\mb p(\mb p f_0\mb{i_r})f_1, \\ e_1&:= \lambda ac.\mb p(\mb pac)\mb{i_r}, \end{align*} one verifies that $\mb e$ is as desired. \\
\textbf{Strong Collection}. Set $\mb e:=\lambda a.\mb p(\lambda c.\mb p c(ac))(\lambda c.\mb p c(ac))$. Let $a=b\Vdash \forall u\in x\, \exists v\, \varphi(u,v)$. By strong collection, we can find a set $y$ such that \begin{itemize}
\item $\forall \pair{c,d,u}\in x\, \exists v\in\mathrm{V_{ex}}(A)\, (\pair{c,d,v}\in y\land ac=bd\Vdash \varphi(u,v))$, and
\item $\forall z\in y\, \exists \pair{c,d,u}\in x\, \exists v\in\mathrm{V_{ex}}(A)\, (z=\pair{c,d,v}\land ac=bd\Vdash \varphi(u,v))$. \end{itemize} In particular, $y\in\mathrm{V_{ex}}(A)$. Show that \[ \mb ea=\mb eb\Vdash \forall u\in x\, \exists v\in y\, \varphi(u,v)\land \forall v\in y\exists u\in x\, \varphi(u,v). \]
\textbf{Subset collection}. We look for $\mb e$ such that for all $x,y\in\mathrm{V_{ex}}(A)$ there is a $z\in\mathrm{V_{ex}}(A)$ such that for all $p\in\mathrm{V_{ex}}(A)$ \[ \mb e\Vdash \forall u\in x\, \exists v\in y\, \varphi(u,v,p)\rightarrow \exists q\in z\, \psi(x,q,p), \] where \[ \psi(x,q,p):= \forall u\in x\, \exists v\in q\, \varphi(u,v,p)\land \forall v\in q\, \exists u\in x\, \varphi(u,v,p). \] Form the set $y'=\{\pair{f,g,v}\colon f,g\in A\land \exists i,j\in A\, \pair{i,j,v}\in y\}$. By subset collection, we can find a set $z'$ such that for all $a,b,p$, if \[ \tag{11} \forall \pair{c,d,u}\in x\, \exists \pair{\mb p ac,\mb p bd,v}\in y'\, (ac)_1=(bd)_1\Vdash \varphi(u,v,p), \] then there is a $q\in z'$ such that \[ \tag{12} \forall \pair{c,d,u}\in x\, \exists w\in q\, \vartheta\land \forall w\in q\, \exists \pair{c,d,u}\in x\, \vartheta, \] where $\vartheta=\vartheta(c,d,u,w;a,b,p)$ is \[\exists v\, (w=\pair{\mb p ac,\mb p bd,v}\land (ac)_1=(bd)_1\Vdash \varphi(u,v,p)). \]
Note that the $q\in z'$ asserted to exist is a subset of $y'$ and so $q\in\mathrm{V_{ex}}(A)$. On the other hand, there might be $q\in z'$ that are not in $\mathrm{V_{ex}}(A)$, and hence $z'$ need not be a subset of $\mathrm{V_{ex}}(A)$. Let $z''=\{q\cap y'\colon q\in z'\}$. Now, $z''\subseteq\mathrm{V_{ex}}(A)$. Finally, set \[ z=\{\pair{\mb 0,\mb 0, q}\colon q\in z''\}. \] Then $z\in\mathrm{V_{ex}}(A)$. It remains to find $\mb e$. Let $p\in\mathrm{V_{ex}}(A)$ and suppose \[ \tag{13} a=b\Vdash \forall u\in x\, \exists v\in y\, \varphi(u,v,p). \] We would like to have \[ \mb ea=\mb eb\Vdash \exists q\in z\, \psi(x,q,p). \] By definition of $z$, we let $(\mb ea)_0\simeq \mb 0$ and we look for a $q\in z''$ such that $(\mb ea)_1=(\mb eb)_1\Vdash \psi(x,q,p)$, that is, \begin{align*} (\mb ea)_{10}=(\mb eb)_{10}&\Vdash \forall u\in x\, \exists v\in q\, \varphi(u,v,p), \\ (\mb ea)_{11}=(\mb eb)_{11}&\Vdash \forall v\in q\, \exists u\in x\, \varphi(u,v,p). \end{align*} By (13) one can see that the parameters $a,b,p$ satisfy (11). Let $q\in z'$ be as in (12). We have already noticed that $q\in z''$. Let $\mb e$ be such that \begin{align*} (\mb ea)_{10}&\simeq \lambda c.\mb p(\mb pac)(ac)_1, \\ (\mb ea)_{11}&\simeq \lambda f. \mb pf_1(f_0f_1)_1. \end{align*} One can verify that $\mb e$ is as desired. \end{proof}
\section{Realizing the axiom of choice in all finite types}\label{finite type}
We will make use of certain canonical names for pairs in $\mathrm{V_{ex}}(A)$.
\begin{definition}[Internal pairing] For $x,y\in\mathrm{V_{ex}}(A)$, let \[ \vset{x}=\{\pair{\mb 0,\mb 0,x}\}, \] \[ \vset{x,y}=\{\pair{\mb 0,\mb 0,x},\pair{\mb 1,\mb 1,y}\}, \] \[ \vpair{x,y}=\{\pair{\mb 0,\mb 0,\vset{x}}, \pair{\mb 1,\mb 1,\vset{x,y}}\}. \]
Note that all these sets are in $\mathrm{V_{ex}}(A)$. \end{definition}
Below we shall use $\mathrm{UP}(x,y,z)$ and $\mathrm{OP}(x,y,z)$ as abbreviations for the set-theoretic formulae expressing, respectively, that $z$ is the unordered pair of $x$ and $y$ (in standard notation, $z=\{x,y\}$) and $z$ is the ordered pair of $x$ and $y$ (in standard notation, $z=\pair{x,y}$). E.g., $\mathrm{UP}(x,y,z)$ stands for $x\in z\land y\in z\land \forall u\in z\, (u=x\lor u=y)$. Similarly, one can pick a suitable rendering of $\mathrm{OP}(x,y,z)$ according to the definition of ordered pair $\pair{x,y}:=\{\{x\},\{x,y\}\}$.
\begin{lemma}\label{pairs} There are closed application terms $\mb{u_0}$, $\mb{u_1}$, $\mb v$, $\mb w$, $\mb z$ such that for all $x,y\in \mathrm{V_{ex}}(A)$ \begin{align*} \mb{u_0}&\Vdash \mathrm{UP}(x,x,\vset{x}), \\ \mb{u_1}&\Vdash \mathrm{UP}(x,y,\vset{x,y}), \\ \mb v &\Vdash \mathrm{OP}(x,y, \vpair{x,y}), \\ \mb w& \Vdash \vpair{x,y}=\vpair{u,v}\rightarrow x=u\land y=v,\\ \mb z&\Vdash \mathrm{OP}(x,y,z) \rightarrow z=\vpair{x,y}. \end{align*} \end{lemma} \begin{proof} This is similar to \cite[3.2, 3.4]{M84}. \end{proof}
We now build a copy of the hereditarily effective operations relative to a pca $A$. \begin{definition}[$\mathsf{HEO}_A$] Let $A$ be a pca over $\omega$ with map $n\mapsto \bar n$ from $\omega$ to $A$. For any finite type $\sigma$, we define $a=_\sigma b$ with $a,b\in A$ by letting: \begin{itemize}
\item $a=_0 b$ iff there is $n\in\omega$ such that $a=b=\bar n$;
\item $a=_{\sigma\tau} b$ iff for every $c=_{\sigma}d$ we have $ac=_\tau bd$. \end{itemize} Let $A_\sigma=\{a\in A\colon a=_\sigma a\}$. \end{definition}
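The relation $a=_{\sigma\tau}b$ quantifies over all pairs $c=_\sigma d$, so it is not decidable in general. The following sketch (our own toy encoding, with total Python functions in place of pca elements and arrow types coded as pairs `(s, t)`) can therefore only spot-check the relation on a finite sample of arguments; it approximates, rather than implements, $=_\sigma$.

```python
SAMPLE0 = range(8)  # finite stand-in for the type-0 arguments

def eq(sigma, a, b):
    """Spot-check a =_sigma b; type 0 is the literal 0, arrows are pairs."""
    if sigma == 0:
        return a == b                # a =_0 b: the same numeral
    s, t = sigma
    # a =_{s t} b requires ac =_t bd whenever c =_s d;
    # we only test the diagonal c = d over a finite sample.
    return all(eq(t, a(c), b(c)) for c in samples(s))

def samples(sigma):
    """A (very small) stock of test arguments at each type."""
    if sigma == 0:
        return SAMPLE0
    s, t = sigma
    return [lambda x: x]             # a single sample function at arrow types
```

For instance, `eq((0, 0), lambda n: n + n, lambda n: 2 * n)` holds, reflecting that extensionally equal indices are identified at type $\sigma\tau$.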
\begin{lemma} For any type $\sigma$, and for all $a,b,c\in A$: \begin{itemize}
\item if $a=_\sigma b$ and $b=_\sigma c$, then $a=_\sigma a$, $b=_\sigma a$, and $a=_\sigma c$. \end{itemize} It thus follows that $A_\sigma=\bigcup_{b\in A}\{a\in A\colon a=_\sigma b\}=\bigcup_{a\in A}\{b\in A\colon a=_\sigma b\}$ and $=_\sigma$ is an equivalence relation on $A_\sigma$. \end{lemma} \begin{proof} By induction on the type. \end{proof}
\begin{definition}[Internalization of objects of finite type] For $a\in A_\sigma$, we define $\tuep a\sigma\in\mathrm{V_{ex}}(A)$ as follows: \begin{itemize}
\item if $a=\bar n$, let $\tuep ao =\{ \pair {\bar m,\bar m,\tuep{\bar{m}}o}\colon m<n\}$;
\item if $a\in A_{\sigma\tau}$, let $\tuep a{\sigma\tau}=\{ \pair{c,d,\vpair{\tuep c\sigma,\tuep e\tau}} \colon c=_\sigma d\text{ and } ac\simeq e\}$. \end{itemize}
Finally, for any finite type $\sigma$, let \[ \dot F_\sigma=\{ \pair{a,b,\tuep a{\sigma}}\colon a=_\sigma b\} \] be our name for $F_\sigma$. \end{definition}
Note that $\dot F_o=\dot \omega$, where $\dot\omega$ is the name for $\omega$ used to realize the infinity axiom in the proof of Theorem \ref{czf sound}. \\
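For concreteness, the canonical names $\dot n$ and finite stages of $\dot\omega$ can be computed literally. In this sketch, frozensets of triples stand in for sets and the numeral $\bar m$ is represented by the integer $m$ itself; both choices are illustrative assumptions of the toy encoding, not part of the formal development.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def dot(n):
    """The name ṅ = {⟨m̄, m̄, ṁ⟩ : m < n}, built by recursion on n."""
    return frozenset((m, m, dot(m)) for m in range(n))

def dot_omega(n_max):
    """A finite stage of ω̇ = {⟨n̄, n̄, ṅ⟩ : n ∈ ω}, cut off at n_max."""
    return frozenset((n, n, dot(n)) for n in range(n_max))
```

For example, `dot(0)` is the empty name and `dot(2)` contains exactly the triples for $\dot 0$ and $\dot 1$.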
\begin{notation} Write $\Vdash \varphi$ for $\exists a, b\in A\, (a=b\Vdash \varphi)$. \end{notation}
\begin{lemma}[Absoluteness and uniqueness up to extensional equality]\label{abs}
For all $a,b\in A_\sigma$, \begin{itemize}
\item $\Vdash \tuep a\sigma=\tuep b\sigma$ implies $a=_\sigma b$,
\item $a=_\sigma b$ implies $\tuep a\sigma=\tuep b\sigma$. \end{itemize} \end{lemma} \begin{proof} By induction on the type.
Type $o$. Let $a=\bar n$ and $b=\bar m$ with $n,m\in\omega$. Suppose $\Vdash \tuep a{o}=\tuep bo$. By a double arithmetical induction one shows $n=m$. The second part is obvious as $a=_ob$ implies $a=b$.
Type $\sigma\tau$. Let $a,b\in A_{\sigma\tau}$. Suppose $\Vdash \tuep a{\sigma\tau}=\tuep b{\sigma\tau}$. The aim is to show that $a=_{\sigma\tau} b$. Let $c\in A_{\sigma}$ and $ac\simeq e$. Then $\Vdash \vpair{\tuep c\sigma,\tuep e\tau}\in \tuep a{\sigma\tau}$ and hence $\Vdash \vpair{\tuep c\sigma,\tuep e\tau}\in \tuep b{\sigma\tau}$. From the latter we infer that there exist $c_0\in A_{\sigma}$ and $e_0\in A_{\tau}$ such that $bc_0\simeq e_0$ and $\Vdash \vpair{\tuep c\sigma,\tuep e\tau}=\vpair{\tuep {c_0}\sigma,\tuep {e_0}\tau}$. By the properties of internal pairing, we obtain $\Vdash \tuep c\sigma=\tuep {c_0}\sigma\;\wedge\; \tuep e\tau = \tuep {e_0}\tau$ giving $c=_{\sigma}c_0$ and $e=_{\tau} e_0$ by the induction hypothesis. Whence $ac=_{\tau}bc_0=_{\tau}bc$ as $b \in A_{\sigma\tau}$. As a result one has $ac=_{\tau}bd$ whenever $c=_{\sigma}d$, yielding $a=_{\sigma\tau}b$.
For the second part, suppose $a=_{\sigma\tau} b$. An element of $\tuep a{\sigma\tau}$ is of the form $\pair{c,d,\vpair{\tuep c{\sigma},\tuep e\tau}}$ where $c=_{\sigma}d$ and $ac\simeq e$. Let $e_0\simeq bc$. As $ac=_{\tau}bc$ the induction hypothesis yields $\tuep e\tau =\tuep {e_0}\tau$, and hence $\pair{c,d,\vpair{\tuep c{\sigma},\tuep e\tau}}=\pair{c,d,\vpair{\tuep c{\sigma},\tuep {e_0}\tau}}\in \tuep b{\sigma\tau}$, showing $\tuep a{\sigma\tau}\subseteq\tuep b{\sigma\tau}$. Owing to the symmetry of the argument, we can conclude that $\tuep a{\sigma\tau}=\tuep b{\sigma\tau}$. \end{proof}
\begin{theorem}[Choice]\label{choice} There exists a closed application term $\mb e$ such that ${\sf CZF}$ proves \[ \mb e\Vdash \forall x\in \dot F_\sigma\, \exists y\in \dot F_\tau\, \varphi(x,y)\rightarrow \exists f\colon \dot F_\sigma\to \dot F_\tau\, \forall x\in \dot F_\sigma\, \varphi(x,f(x)), \] for all finite types $\sigma$ and $\tau$ and for every formula $\varphi$. \end{theorem} \begin{proof} Suppose $a=b\Vdash \forall x\in \dot F_\sigma\, \exists y\in \dot F_\tau\, \varphi(x,y)$. By definition, this means that for every $\pair{c,d,\tuep c\sigma}\in \dot F_\sigma$ we have \[ \tag{1} \pair{(ac)_0,(bd)_0,\tuep e\tau}\in \dot F_\tau, \] \[ \tag{2} (ac)_1=(bd)_1\Vdash \varphi(\tuep c\sigma,\tuep e\tau), \] where $e\simeq (ac)_0$. Let \[ f=\{\pair{c,d,\vpair{\tuep c\sigma,\tuep e\tau}}\colon c=_\sigma d\land e\simeq (ac)_0\}. \] Note that $c=_\sigma d$ implies $(ac)_0\downarrow$ by (1).
Below we shall use $z=\langle x,y\rangle$ as a somewhat sloppy abbreviation for $\mathrm{OP}(x,y,z)$. We look for an $\mb e$ such that \[ \tag{3} (\mb ea)_{0}=(\mb eb)_0\Vdash \forall z\in f\, \exists x\in \dot F_\sigma\, \exists y\in \dot F_\tau\, (z=\pair{x,y}), \] \[\tag{4} (\mb ea)_{10}=(\mb e b)_{10}\Vdash \forall x\in \dot F_\sigma\, \exists y\in \dot F_\tau\, \exists z\in f\, (z=\pair{x,y}\land \varphi(x,y)), \] \[ \tag{5} (\mb ea)_{11}=(\mb eb)_{11}\Vdash \forall z_0\in f\, \forall z_1\in f\, \forall x,y_0,y_1\, (z_0=\pair{x,y_0}\land z_1=\pair{x,y_1}\rightarrow y_0=y_1). \]
First, note that $\lambda c.(ac)_0=_{\sigma\tau}\lambda d.(bd)_0$. This follows from (1). In fact, $c=_\sigma d$ implies $(ac)_0=_\tau (bd)_0$, for all $c,d\in A$. Moreover, since $=_{\sigma\tau}$ is symmetric and transitive, we have $\lambda c.(ac)_0\in A_{\sigma\tau}$.
For (3), let $\mb e$ be such that \begin{align*} ((\mb e a)_0c)_0&\simeq c, & ((\mb e a)_0 c)_{10}&\simeq (ac)_0, \\ & & ((\mb e a)_0c)_{11}&\simeq \mb v, \end{align*} where $\mb v\Vdash \vpair{x,y}=\pair{x,y}$ for all $x,y\in\mathrm{V_{ex}}(A)$ as in Lemma \ref{pairs}. Let us show that any such $\mb e$ satisfies (3). Let $\pair{c,d,\vpair{\tuep c\sigma,\tuep e\tau}}\in f$, where $e\simeq (ac)_0$. We would like \[ (\mb e a)_0c=(\mb e b)_0d\Vdash \exists x\in \dot F_\sigma\, \exists y\in \dot F_\tau\, (\vpair{\tuep c\sigma,\tuep e\tau}=\pair{x,y}). \] Now, $\pair{c,d,\tuep c\sigma}\in \dot F_\sigma$, $c\simeq ((\mb e a)_0c)_0$, and $d\simeq ((\mb e b)_0d)_0$. Therefore, we just need to verify \[ ((\mb ea)_0c)_1=((\mb eb)_0d)_1\Vdash \exists y\in \dot F_\tau\, \vpair{\tuep c\sigma,\tuep e\tau}=\pair{\tuep c\sigma,y}.\] Similarly, $\pair{(ac)_0,(bd)_0,\tuep e\tau}\in \dot F_\tau$ since, as noted before, $(ac)_0=_\tau (bd)_0$. On the other hand, $(ac)_0\simeq ((\mb e a)_0 c)_{10}$ and $(bd)_0\simeq ((\mb eb)_0d)_{10}$. So we just need to show that \[ ((\mb ea)_0c)_{11}=((\mb eb)_0d)_{11}\Vdash \vpair{\tuep c\sigma,\tuep e\tau}=\pair{\tuep c\sigma,\tuep e\tau}. \] Now, $((\mb ea)_0c)_{11}\simeq ((\mb eb)_0d)_{11}\simeq \mb v$, and $\mb v\Vdash \vpair{\tuep c\sigma,\tuep e\tau}=\pair{\tuep c\sigma,\tuep e\tau}$. So we are done.
As for (4), let $\mb e$ be such that \begin{align*} ((\mb ea)_{10}c)_0& \simeq (ac)_0, & ((\mb ea)_{10}c)_{10}&\simeq (ac)_0, & ((\mb ea)_{10}c)_{110}&\simeq \mb v, \\ &&&& ((\mb ea)_{10}c)_{111}&\simeq (ac)_1, \end{align*} where $\mb v$ is as in part (3). That $\mb e$ satisfies (4) is proved in similar fashion by using (1) and (2).
For (5), suppose $\pair{c_i,d_i,z_i}\in f$ with $z_i=\vpair{\tuep {c_i}\sigma,\tuep {e_i}\tau}$ and $e_i\simeq (ac_i)_0$, where $i=0,1$. We are looking for an $\mb e$ such that \[ (\mb ea)_{11}c_0c_1=(\mb eb)_{11}d_0d_1\Vdash z_0=\pair{x,y_0}\land z_1=\pair{x,y_1}\rightarrow y_0=y_1, \] for all $x,y_0,y_1\in\mathrm{V_{ex}}(A)$. Suppose \[ \tag{6} g=h\Vdash z_0=\pair{x,y_0}\land z_1=\pair{x,y_1}. \] We want $(\mb ea)_{11}c_0c_1g=(\mb eb)_{11}d_0d_1h\Vdash y_0=y_1$. Unravelling (6), we get \[ g_i=h_i\Vdash \vpair{\tuep {c_i}\sigma,\tuep {e_i}\tau}=\pair{x,y_i}. \] By Lemma \ref{pairs}, \[ \mb wg_i=\mb w h_i\Vdash \tuep {c_i}\sigma=x\land \tuep {e_i}\tau=y_i, \] for some closed application term $\mb w$. By the realizability of equality, it follows that \[ \tag{7} \Vdash \tuep {c_0}\sigma=\tuep {c_1}\sigma. \] Also, \[ \mb p(\mb w g_0)_1(\mb wg_1)_1=\mb p(\mb w h_0)_1(\mb wh_1)_1\Vdash \tuep {e_0}\tau=y_0\land \tuep {e_1}\tau=y_1. \] By absoluteness, (7) implies $c_0=_\sigma c_1$. As $\lambda c.(ac)_0\in A_{\sigma\tau}$, we have $(ac_0)_0=_\tau (ac_{1})_0$, that is, $e_0=_\tau e_1$. By uniqueness, $\tuep {e_0}\tau=\tuep {e_1}\tau$. By realizability of equality, there is a closed application term $\mb i$ such that \[ \mb i\Vdash z=y_0\land z=y_1\rightarrow y_0=y_1. \] Therefore any $\mb e$ such that \[ (\mb ea)_{11}c_0c_1g\simeq \mb i(\mb p(\mb w g_0)_1(\mb wg_1)_1)\] is as required.
By $\lambda$-abstraction, one can find $\mb e$ satisfying (3), (4), and (5). \end{proof}
\begin{theorem}[Arrow types]\label{arrow} There exists a closed application term $\mb e$ such that ${\sf CZF}$ proves
\[ \mb e\Vdash \dot F_{\sigma\tau}= \dot F_\sigma\to \dot F_\tau, \] for all finite types $\sigma$ and $\tau$. \end{theorem} \begin{proof} We look for $\mb e$ such that \[ \mb e_0\Vdash \forall f\in \dot F_{\sigma\tau}\, (f\colon\dot F_\sigma\to \dot F_\tau), \] and for every $f\in \mathrm{V_{ex}}(A)$, \[ \mb e_1\Vdash (f\colon \dot F_\sigma\to \dot F_\tau) \rightarrow f\in \dot F_{\sigma\tau}. \]
For $\mb e_0$, we need that for all $a=_{\sigma\tau} b$, \[ \tag{1} (\mb e_0 a)_0=(\mb e_0 b)_0 \Vdash \forall z\in \tuep a{\sigma\tau}\, \exists x\in\dot F_\sigma\, \exists y\in\dot F_\tau\, (z=\pair{x,y}),\] \[ \tag{2} (\mb e_0a)_{10}=(\mb e_0b)_{10}\Vdash \forall x\in\dot F_\sigma\, \exists y\in\dot F_\tau\, \exists z\in \tuep a{\sigma\tau}\, (z=\pair{x,y}),\] \[ \tag{3} (\mb e_0 a)_{11}=(\mb e_0 b)_{11}\Vdash \forall z_0\in \tuep a{\sigma\tau}\, \forall z_1\in \tuep a{\sigma\tau}\, \forall x,y_0,y_1\, (z_0=\pair{x,y_0}\land z_1=\pair{x,y_1}\rightarrow y_0=y_1). \]
For (1), let $\mb e_0$ be such that \[ (\mb e_0a)_0\simeq\lambda c.\mb p c(\mb p (ac)\mb v), \] where $\mb v\Vdash \vpair{x,y}=\pair{x,y}$ for all $x,y\in \mathrm{V_{ex}}(A)$ as in Lemma \ref{pairs}.
Let us verify that $\mb e_0$ does the job. Let $a=_{\sigma\tau}b$. We want to show \[ \lambda c.\mb p c(\mb p (ac)\mb v)=\lambda d.\mb p d(\mb p (bd)\mb v)\Vdash \forall z\in \tuep a{\sigma\tau}\, \exists x\in\dot F_\sigma\, \exists y\in\dot F_\tau\, (z=\pair{x,y}). \] Let $\pair{c,d,\vpair{ \tuep c{\sigma}, \tuep e{\tau}}}\in \tuep a{\sigma\tau}$, where $c=_\sigma d$ and $ac\simeq e$. We want \[ \mb p c(\mb p (ac)\mb v)=\mb p d(\mb p (bd)\mb v)\Vdash \exists x\in \dot F_\sigma\, \exists y\in\dot F_\tau\, (\vpair{ \tuep c{\sigma}, \tuep e{\tau}}=\pair{x,y}). \] By definition, $\pair{c,d, \tuep c{\sigma}}\in\dot F_\sigma$. Let us check that
\[ \mb p (ac)\mb v=\mb p (bd)\mb v\Vdash \exists y\in\dot F_\tau\, (\vpair{ \tuep c{\sigma}, \tuep e{\tau}}=\pair{ \tuep c{\sigma},y}). \] We have $ac=_\tau bd$ and hence $\pair{ac,bd, \tuep e{\tau}}\in \dot F_\tau$. Finally, \[ \mb v\Vdash \vpair{ \tuep c{\sigma}, \tuep e{\tau}}=\pair{ \tuep c{\sigma}, \tuep e{\tau}}. \]
For (2), let $\mb e_0$ be such that \[ (\mb e_0a)_{10}\simeq\lambda x.\mb p (ax)(\mb p x\mb v), \] where $\mb v$ is as above.
For (3), let $\mb e_0$ be such that \[ (\mb e_0 a)_{11}c_0c_1g\simeq \mb i (\mb p(\mb w g_0)_1(\mb wg_1)_1),\] where $\mb w$ and $\mb i$ are as in the proof of Theorem \ref{choice}. \\
As for $\mb e_1$, suppose that $f\in\mathrm{V_{ex}}(A)$ and
\[ a=b\Vdash f\colon\dot F_\sigma\to\dot F_\tau. \] Then
\[ \tag{4} a_0=b_0\Vdash \forall z\in f\, \exists x\in\dot F_\sigma\, \exists y\in\dot F_\tau\, (z=\pair{x,y}), \]
\[ \tag{5} a_{10}=b_{10}\Vdash \forall x\in\dot F_\sigma\, \exists y\in\dot F_\tau\, \exists z\in f\, (z=\pair{x,y}), \]
\[ \tag{6} a_{11}=b_{11}\Vdash \forall z_0\in f\, \forall z_1\in f\, \forall x,y_0,y_1\, (z_0=\pair{x,y_0}\land z_1=\pair{x,y_1}\rightarrow y_0=y_1). \]
We aim for \[ \mb e_1a=\mb e_1b\Vdash f\in \dot F_{\sigma\tau}. \]
As in the proof of Theorem \ref{choice}, it follows from (5) that $\lambda c.(a_{10}c)_0=_{\sigma\tau}\lambda d.(b_{10}d)_0$. Therefore \[ \pair{\lambda c.(a_{10}c)_0,\lambda d.(b_{10}d)_0, \tuep g{\sigma\tau}}\in \dot F_{\sigma\tau}, \] where $g:=\lambda c.(a_{10}c)_0$. We thus want $\mb e_1$ such that \[ (\mb e_1a)_0\simeq \lambda c.(a_{10}c)_0, \] \[ (\mb e_1a)_1=(\mb e_1b)_1\Vdash f= \tuep g{\sigma\tau}. \]
By definition and Lemma \ref{abs}, \[ \tuep g{\sigma\tau}=\{\pair{c,d,\vpair{ \tuep c{\sigma}, \tuep e{\tau}}}\colon c=_\sigma d\land (a_{10}c)_0=_\tau e\}. \]
($\subseteq$) Let $\pair{\tilde{c},\tilde d,z}\in f$. We aim for $((\mb e_1a)_1\tilde c)_0=((\mb e_1b)_1\tilde d)_0\Vdash z\in \tuep g{\sigma\tau}$. By (4), $(a_0\tilde c)_0=_\sigma (b_0\tilde d)_0$ and \[ (a_0\tilde c)_{11}=(b_0\tilde d)_{11}\Vdash z=\pair{ \tuep c{\sigma}, \tuep e{\tau}}, \] where $c\simeq (a_0\tilde c)_0$ and $e\simeq (a_0\tilde c)_{10}$. By Lemma \ref{pairs}, let $\mb z$ be a closed application term such that for all $x,y,z\in\mathrm{V_{ex}}(A)$, \[ \mb z\Vdash z=\pair{x,y}\rightarrow z=\vpair{x,y}. \] Then \[ \mb z(a_0\tilde c)_{11}=\mb z(b_0\tilde d)_{11}\Vdash z=\vpair{ \tuep c{\sigma}, \tuep e{\tau}}. \] By using (5), (6) and absoluteness, one obtains $(a_{10}c)_0=_\tau e$. Let $\mb e_1$ satisfy \begin{align*} ((\mb e_1a)_1\tilde c)_{00}& \simeq (a_0\tilde c)_0,\\ ((\mb e_1a)_1\tilde c)_{01}& \simeq \mb z (a_0\tilde c)_{11}. \end{align*} Then $\mb e_1$ is as desired.
($\supseteq$) Let $\pair{c,d,\vpair{ \tuep c{\sigma}, \tuep e{\tau}}}\in \tuep g{\sigma\tau}$, with $e\simeq (a_{10}c)_0$. We aim for $((\mb e_1a)_1 c)_1=((\mb e_1b)_1d)_1\Vdash \vpair{ \tuep c{\sigma}, \tuep e{\tau}}\in f$.
By unravelling (5), we obtain that for some $z\in\mathrm{V_{ex}}(A)$, \[ \pair{(a_{10}c)_{10},(b_{10}d)_{10}, z}\in f, \] \[ (a_{10}c)_{11}=(b_{10}d)_{11}\Vdash z=\pair{ \tuep c{\sigma}, \tuep e{\tau}}. \] Let $\mb e_1$ be such that \[ ((\mb e_1a)_1c)_1\simeq \mb p (a_{10}c)_{10} (\mb {i_s}(\mb z(a_{10}c)_{11})), \] where $\mb z$ is as above.
By $\lambda$-abstraction, one can find $\mb e$ satisfying the above equations. \end{proof}
\begin{theorem}\label{choice sound} For all finite types $\sigma$ and $\tau$ there exists a closed application term $\mb c$ such that ${\sf CZF}$ proves \[ \mb c\Vdash \forall x^\sigma\, \exists y^\tau\, \varphi(x,y)\rightarrow \exists f^{\sigma\tau}\, \forall x^\sigma\, \varphi(x,f(x)). \] \end{theorem} \begin{proof} A proof is obtained by combining Theorem \ref{choice} and Theorem \ref{arrow}. Let \[ \vartheta_0(z):=\text{\lq $z$ is the set of natural numbers\rq}, \] \[ \vartheta_{\sigma\tau}(z):=\exists x\, \exists y\, (\vartheta_\sigma(x)\land \vartheta_{\tau}(y)\land z=x\to y). \] We are claiming that for all finite types $\sigma$ and $\tau$ there exists a closed application term $\mb c_{\sigma\tau}$ such that ${\sf CZF}$ proves \[ \mb c_{\sigma\tau}\Vdash \forall z_\sigma\, \forall z_\tau\, (\vartheta_\sigma(z_\sigma)\land \vartheta_\tau(z_\tau)\rightarrow \psi(z_\sigma,z_\tau)), \] where $\psi(z_\sigma,z_\tau)$ is \[ \forall x\in z_\sigma\, \exists y\in z_\tau\, \varphi(x,y)\rightarrow \exists f\colon z_\sigma\to z_\tau\, \forall x\in z_\sigma\, \varphi(x,f(x)). \] Let $\mb e_0$ be such that $\mb e_0\Vdash \vartheta_0(\dot \omega)$. By using $\mb e_0$ and Theorem \ref{arrow}, for every finite type $\sigma$, we can find $\mb e_\sigma$ such that $\mb e_\sigma\Vdash \vartheta_\sigma(\dot F_\sigma)$. As ${\sf CZF}\vdash \vartheta_\sigma(z_0)\land \vartheta_\sigma(z_1)\rightarrow z_0=z_1$, by soundness (Theorem \ref{czf sound}) there is a $\mb u_\sigma$ such that \[ \mb u_\sigma\Vdash \vartheta_\sigma(z_0)\land \vartheta_\sigma(z_1)\rightarrow z_0=z_1 \] for all $z_0,z_1\in\mathrm{V_{ex}}(A)$. 
By soundness as well, there are $\mb i_{\sigma\tau}$ and $\mb j_{\sigma\tau}$ such that \begin{align*} \mb i_{\sigma\tau}&\Vdash \psi(\dot F_\sigma,\dot F_\tau)\land z_\sigma=\dot F_\sigma \rightarrow \psi(z_\sigma,\dot F_\tau), \\ \mb j_{\sigma\tau}&\Vdash \psi(z_\sigma,\dot F_\tau)\land z_\tau=\dot F_\tau\rightarrow \psi(z_\sigma,z_\tau), \end{align*} for all $z_\sigma,z_\tau\in \mathrm{V_{ex}}(A)$. Finally, with the aid of $\mb e_\sigma$, $\mb e_\tau$, $\mb u_\sigma$, $\mb u_\tau$, $\mb i_{\sigma\tau}$, $\mb j_{\sigma\tau}$, and of the closed application term $\mb e$ from Theorem \ref{choice}, one can construct $\mb c_{\sigma\tau}$ as desired. \end{proof}
\begin{corollary}\label{czf choice sound} For every theorem $\varphi$ of ${\sf CZF}+{\sf AC}_{\ft}$, there is a closed application term $\mb e$ such that ${\sf CZF}$ proves $\mb e\Vdash \varphi$. In particular, ${\sf CZF}+{\sf AC}_{\ft}$ is consistent relative to ${\sf CZF}$. \end{corollary} \begin{proof} By Theorem \ref{czf sound} and Theorem \ref{choice sound}. \end{proof} \begin{corollary} ${\sf CZF}+{\sf AC}_{\ft}$ is conservative over ${\sf CZF}$ with respect to $\Pi^0_2$ sentences. \end{corollary} \begin{proof} Let $\varphi(x,y)$ be a bounded formula with displayed free variables and suppose that \[ \forall x\in\omega\, \exists y\in\omega\, \varphi(x,y)\footnote{Of course we mean that, e.g., $\forall z\, (\vartheta_0(z)\rightarrow \forall x\in z\, \exists y\in z\, \varphi(x,y))$ is provable, where $\vartheta_0(z)$ is a formula defining $\omega$.} \] is provable in ${\sf CZF}$ plus ${\sf AC}_{\ft}$. By the corollary above, we can find a closed application term $\mb e$ such that \[ {\sf CZF}\vdash \mb e\Vdash \forall x\in\dot\omega\, \exists y\in\dot\omega\, \varphi(x,y). \] In particular, \[ {\sf CZF}\vdash \forall n\in\omega\, \exists m\in\omega\, (\mb e\bar n)_1\Vdash \varphi(\dot n,\dot m). \] It is a routine matter (cf.\ also \cite[Chapter 4, Theorem 2.6]{M84}) to show that realizability equals truth for bounded arithmetic formulas, namely, \[ {\sf CZF}\vdash \forall n_1,\ldots, n_k\in\omega\, (\psi(n_1,\ldots,n_k)\leftrightarrow \exists a,b\in A\, (a=b\Vdash \psi(\dot n_1,\ldots,\dot n_k))), \] for $\psi(x_1,\ldots,x_k)$ bounded with all the free variables shown. We can then conclude \[ {\sf CZF} \vdash \forall x\in\omega\, \exists y\in\omega\, \varphi(x,y). \] \end{proof}
\section{Soundness for ${\sf IZF}$}\label{sec IZF} The theory ${\sf IZF}$ (Intuitionistic Zermelo-Fraenkel set theory) shares the logic and language of ${\sf CZF}$. Its axioms are
1. \textbf{Extensionality},
2. \textbf{Pairing},
3. \textbf{Union},
4. \textbf{Infinity},
5. \textbf{Set induction},
6. \textbf{Separation}: $\forall x\, \exists y\, \forall z\, (z\in y\leftrightarrow z\in x\land \varphi(z))$, for all formulae $\varphi$,
7. \textbf{Collection}: $\forall u\in x\, \exists v\, \varphi(u,v)\rightarrow \exists y\, \forall u\in x\, \exists v\in y\, \varphi(u,v)$, for all formulae $\varphi$,
8. \textbf{Powerset}: $\forall x\, \exists y\, \forall z\, (\forall u\in z\, (u\in x)\rightarrow z\in y)$.\\
Thus ${\sf IZF}$ is a strengthening of ${\sf CZF}$, in which bounded separation is replaced by full separation and subset collection by powerset. Note that powerset implies subset collection, and that strong collection follows from separation and collection.
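For the reader's convenience we sketch these two standard implications (the arguments are folklore and are recorded here only for completeness). For the first, given sets $x$ and $y$, the set $\ps(y)$ witnesses subset collection: whenever $\forall v\in x\, \exists w\in y\, \varphi(v,w,u)$, full separation provides \[ d=\{w\in y\mid \exists v\in x\, \varphi(v,w,u)\}\in\ps(y), \] and this $d$ satisfies both $\forall v\in x\, \exists w\in d\, \varphi(v,w,u)$ and $\forall w\in d\, \exists v\in x\, \varphi(v,w,u)$. For the second, given $\forall u\in x\, \exists v\, \varphi(u,v)$, collection provides a set $y$ with $\forall u\in x\, \exists v\in y\, \varphi(u,v)$; replacing $y$ by $\{v\in y\mid \exists u\in x\, \varphi(u,v)\}$, which exists by full separation, yields in addition $\forall v\in y\, \exists u\in x\, \varphi(u,v)$, i.e., strong collection.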
Note that in ${\sf IZF}$, due to the presence of powerset, the construction of $\mathrm{V_{ex}}(A)$ can proceed by transfinite recursion along the ordinals (cf.\ \cite{M84}).
\begin{theorem}\label{izf choice sound} For every theorem $\varphi$ of ${\sf IZF}+{\sf AC}_{\ft}$, there is a closed application term $\mb e$ such that ${\sf IZF}$ proves $\mb e\Vdash \varphi$. In particular, ${\sf IZF}+{\sf AC}_{\ft}$ is consistent relative to ${\sf IZF}$. \end{theorem} \begin{proof} The soundness for theorems of intuitionistic first order logic with equality follows immediately from Theorem \ref{int sound}. As for nonlogical axioms, in view of Corollary \ref{czf choice sound}, it is sufficient to deal with separation and powerset.
The argument for separation is similar to the corresponding argument for bounded separation in the proof of Theorem \ref{czf sound}, employing full separation in the background theory.
It thus remains to address powerset. Write $z\subseteq x$ for $\forall u\in z\, (u\in x)$. We look for $\mb e$ such that for all $x\in\mathrm{V_{ex}}(A)$ there is a $y\in\mathrm{V_{ex}}(A)$ such that
\[ \mb e\Vdash z\subseteq x \rightarrow z\in y, \]
for all $z\in\mathrm{V_{ex}}(A)$.
On account of powerset, in ${\sf IZF}$, we can define sets $\mathrm{V_{ex}}(A)_\alpha$, with $\alpha$ an ordinal (i.e., a transitive set of transitive sets), such that $\mathrm{V_{ex}}(A)=\bigcup_\alpha\mathrm{V_{ex}}(A)_\alpha$ and $\mathrm{V_{ex}}(A)_\alpha=\bigcup_{\beta\in\alpha}\ps(A\times A\times \mathrm{V_{ex}}(A)_\beta)$. Note that in ${\sf CZF}$ the $\mathrm{V_{ex}}(A)_\alpha$'s are just classes.
Given $x\in\mathrm{V_{ex}}(A)_\alpha$, let
\[ y=\{\pair{a,b,z}\in A\times A\times \mathrm{V_{ex}}(A)_\alpha \mid a=b\Vdash z\subseteq x\}. \] The set $y$ exists by separation. Set
\[ \mb e:=\lambda a.\mb p a\mb {i_r}. \]
It is easy to check that $y$ and $\mb e$ are as desired, once it is established that if $z\in \mathrm{V_{ex}}(A)$ and $a=b\Vdash z\subseteq x$ then $z\in\mathrm{V_{ex}}(A)_\alpha$. This is proved by set induction, by showing that for all $u,v\in\mathrm{V_{ex}}(A)$:
\begin{itemize}
\item if $a=b\Vdash u\in v$ and $v\in\mathrm{V_{ex}}(A)_\alpha$, then $u\in\mathrm{V_{ex}}(A)_\beta$ for some $\beta\in\alpha$;
\item if $a=b\Vdash u=v$ and $v\in\mathrm{V_{ex}}(A)_\alpha$, then $u\in \mathrm{V_{ex}}(A)_\alpha$.
\end{itemize} \end{proof}
As before, we obtain the following.
\begin{corollary} ${\sf IZF}+{\sf AC}_{\ft}$ is conservative over ${\sf IZF}$ with respect to $\Pi^0_2$ sentences. \end{corollary}
\section{Conclusions} We defined an extensional notion of realizability that validates, provably in ${\sf CZF}$, the theory ${\sf CZF}$ together with the finite type axiom of choice ${\sf AC}_{\ft}$. We have shown that one can replace ${\sf CZF}$ with ${\sf IZF}$. Presumably, this holds true for many other intuitionistic set theories as well.
There is a sizable number of well-known \emph{extra principles} $P$ that can be added to the mix, in the sense that $T+P$ proves $\mb e\Vdash P$, for some closed application term $\mb e$, where $T$ is either ${\sf CZF}$ or ${\sf IZF}$. This applies to arbitrary pca's in the case of large set axioms such as $\sf REA$ (Regular Extension Axiom) by adapting \cite[Theorem 6.2]{R06}. In the case of choice principles, this also applies to arbitrary pca's for Countable Choice, ${\sf DC}$ (Dependent Choice), ${\sf RDC}$ (Relativized Dependent Choice), and ${\sf PAx}$ (Presentation Axiom) by adapting the techniques of \cite{DR19}. Specializing to the case of the first Kleene algebra, one obtains extensional realizability of ${\sf MP}$ (Markov's Principle) and forms of ${\sf IP}$ (Independence of Premise) by adapting results from \cite[Section 11]{M84}, \cite{M86}, \cite[Section 7]{R06}.
We claim that realizability combined with truth and the appropriate pca modeled on \cite{R05,R08} yields the closure under the choice rule for finite types, i.e., \[ \text{If }T\vdash \forall x^\sigma\, \exists y^\tau\, \varphi(x,y), \text{ then } T\vdash \exists f^{\sigma\tau}\, \forall x^\sigma\, \varphi(x,f(x)) \] for large swathes of intuitionistic set theories.
Church's thesis, \[ \tag{{\sf CT}}\forall f\colon\omega\to\omega\, \exists e\in\omega\, \forall x\in\omega\, ( f(x)\simeq \{e\}(x)),\] where $\{e\}(x)$ is Turing machine application, and the finite type axiom of choice are incompatible in extensional finite type arithmetic \cite{T77} (cf.\ \cite[Chapter 5, Theorem 6.1]{B85}).\footnote{The elementary recursion-theoretic reason that prevents Church's thesis from being extensionally realizable is the usual one: there is no type $2$ extensional index in Kleene's first algebra, that is, there is no $e\in\omega$ such that, for all $a,b\in\omega$, if $\{a\}(n)=\{b\}(n)$ for every $n\in\omega$, i.e., $a=_1b$, then $\{e\}(a)=\{e\}(b)$.} A fortiori, they are incompatible on the basis of ${\sf CZF}$, and thus of ${\sf IZF}$. However, negative versions of Church's thesis can still obtain in a universe in which ${\sf AC}_{\ft}$ holds. The assertion that no function from $\omega$ to $\omega$ is incomputable is known as weak Church's thesis \cite{T73}: \[ \tag{{\sf WCT}}\forall f\colon\omega\to\omega\, \neg\neg \exists e\in\omega\, \forall x\in\omega\, ( f(x)\simeq \{e\}(x)). \] Using Kleene's first algebra, one can easily verify that {\sf WCT} is extensionally realizable in ${\sf CZF}$. Therefore, ${\sf CZF}$ augmented with both ${\sf AC}_{\ft}$ and ${\sf WCT}$ is consistent relative to ${\sf CZF}$, and similarly for ${\sf IZF}$.
Continuity principles are a hallmark of Brouwer's intuitionism. They are compatible with finite type arithmetic (see \cite{B85,T73,T77,Oosten97}) and also with set theory (see \cite{B85,M84,R05,R05a}). They are known, though, to invite conflict with ${\sf AC}_{\ft}$ (see \cite[Theorem 9.6.11]{TvD88}). However, as in the case of {\sf CT}, negative versions of them are likely to be compatible with ${\sf AC}_{\ft}$ on the basis of ${\sf CZF}$ and ${\sf IZF}$. In particular, one would expect that the assertion that no function from $\mathbb R$ to $\mathbb R$ is discontinuous can go together with ${\sf AC}_{\ft}$. One obvious tool that suggests itself here is extensional generic realizability based on Kleene's second algebra. We shall not venture into this here and add the verification of this claim to the task list.
We conclude with the following remark. It is currently unknown whether one can provide a realizability model for choice principles based on larger type structures. Say that $I$ is a base if for every $I$-indexed family $(X_i)_{i\in I}$ of inhabited sets $X_i$ there exists a function $f\colon I\to \bigcup_{i\in I}X_i$ such that $f(i)\in X_i$ for every $i\in I$. Let $\mathcal C$-${\sf AC}$ say that every set $I$ in the class $\mathcal C$ is a base. The question is whether one can realize $\mathcal C$-${\sf AC}$, where $\mathcal C$ is the smallest $\Pi\Sigma$-closed class, or even the smallest $\Pi\Sigma W$-closed class, without assuming choice in the background theory.
\end{document} |
\begin{document}
\renewcommand{\baselinestretch}{1.2}
\thispagestyle{empty}
\title[The Nichols algebra of a semisimple Yetter-Drinfeld module] {The Nichols algebra of a semisimple Yetter-Drinfeld module} \author[andruskiewitsch, heckenberger and schneider]{Nicol\'{a}s Andruskiewitsch} \address{Facultad de Matem\'{a}tica, Astronom\'{i}a y F\'{i}sica\\\newline\indent Universidad Nacional de C\'{o}rdoba \\ \newline\indent CIEM - CONICET \\ \newline\indent (5000) Ciudad Universitaria \\ C\'{o}rdoba \\Argentina} \email{andrus@famaf.unc.edu.ar} \author[]{Istv\'an Heckenberger} \address{Mathematisches Institut\\ \newline\indent Universit\"at M\"unchen \\ \newline\indent Theresienstr. 39, D-80333 Munich, Germany} \email{i.heckenberger@googlemail.com} \author[]{Hans-J\"urgen Schneider} \address{Mathematisches Institut\\ \newline\indent Universit\"at M\"unchen \\ \newline\indent Theresienstr. 39, D-80333 Munich, Germany} \email{Hans-Juergen.Schneider@mathematik.uni-muenchen.de} \thanks{The work of N. A. was partially supported by Ag. Cba. Ciencia, CONICET, Foncyt and Secyt (UNC). The work of I.H. was partially supported by DFG within a Heisenberg fellowship at the University of Munich}
\begin{abstract} We study the Nichols algebra of a semisimple Yetter-Drin\-feld module and introduce new invariants such as real roots. The crucial ingredient is a ``reflection'' in the class of such Nichols algebras. We conclude the classifications of finite-dimensional{} pointed Hopf algebras over $\st$, and of finite-dimensional{} Nichols algebras over $\sk$. \end{abstract}
\maketitle
\setcounter{tocdepth}{2} \tableofcontents
\section*{Introduction}
{\bf 1.} Although semisimple complex Lie algebras $\mathfrak{g}$ cannot be deformed, there are highly interesting $q$-deformations $U_q(\mathfrak{g})$ of their enveloping algebras {\em as Hopf algebras}, with generic $q$, introduced by Drinfeld and Jimbo around 1985.
As an algebra, $U_q(\mathfrak{g})$ is generated by elements $E_i,F_i,K_i^{\pm}, 1 \leq i \leq n$. The Hopf algebra $U_q(\mathfrak{g})$ is determined by the +-part $U_q^+(\mathfrak{g})= k\langle E_1,\dots,E_n \rangle$ since $U_q(\mathfrak{g})$ is essentially a Drinfeld double of $U_q^+(\mathfrak{g})$. The algebra $U_q^+(\mathfrak{g})$ has a very easy and beautiful description as the Nichols algebra (or quantum symmetric algebra) $\mathcal{B}(W)$ of a finite-dimensional vector space \begin{equation}\label{example} W=\mathbb{C} E_1 \oplus \cdots \oplus \mathbb{C} E_n \end{equation} together with a grading and an action of a free abelian group $G$ with basis $K_1,\dots,K_n$. Each $E_i$ has degree $K_i$ and the action of $G$ is given by \begin{equation}\label{action} K_i E_j K_i^{-1}=q^{d_i a_{ij}} E_j \text{ for all }i,j. \end{equation} Here $(d_ia_{ij})$ is the symmetrized Cartan matrix. Thus $W$ has the structure of a Yetter-Drinfeld module over the group algebra $k[G]$, and the vector spaces $\mathbb{C}E_i$ are one-dimensional Yetter-Drinfeld submodules. See Section 1 for the definition of Yetter-Drinfeld modules.
In the same way Nichols algebras also determine the small quantum groups $u_q(\mathfrak{g})$, $q$ a root of unity, introduced by Lusztig, and the generalizations of the quantum groups $U_q(\mathfrak{g})$ to Kac-Moody Lie algebras, see \cite{L, R, R2, Sbg}.
\medbreak{\bf 2.} Nichols algebras appeared in the work of Nichols \cite{N}. They are defined for Yetter-Drinfeld modules $W$ over any Hopf algebra $H$ (with bijective antipode) instead of the group algebra $H=k[G]$. The category of Yetter-Drinfeld modules over $H$ is braided, and the Nichols algebra can be defined by the following universal property: The tensor algebra $T(W)$ is a braided Hopf algebra where the elements of $W$ are primitive. Then \begin{equation}\label{N} \mathcal{B}(W) = T(W)/I_W, \end{equation} where $I_W$ is the largest coideal of $T(W)$ spanned by elements of $\mathbb{N}$-degree $\geq 2$.
The smash product $\mathcal{B}(W) \# H$ (called bosonization) is a Hopf algebra in the usual sense, and understanding Nichols algebras of Yetter-Drinfeld modules in general is of fundamental importance for the general theory of Hopf algebras. Nichols algebras form a crucial part of the $\mathbb{N}$-graded Hopf algebra associated to the coradical filtration of a Hopf algebra whose coradical is a Hopf subalgebra \cite{AS-jalg}. An important class of such Hopf algebras are pointed Hopf algebras, that is, Hopf algebras whose coradical is a group algebra (or equivalently, all of whose simple comodules are one-dimensional). The quantum groups $U_q(\mathfrak{g})$ and all their variants are pointed.
\medbreak The definition of the Nichols algebra is easy to state. The inherent conceptual difficulty in understanding Nichols algebras is their very indirect definition by a universal property. In general there is no method to actually determine the Nichols algebra $\mathcal{B}(W)$ for a given $W$, for example to calculate the dimensions of the $\mathbb{N}$-homogeneous components of $\mathcal{B}(W)$, or to compute the defining relations, that is, to compute generators of the unknown ideal $I_W$.
The relations of the Nichols algebra of the Yetter-Drinfeld module \eqref{example} are the quantized Serre relations, see \cite[33.1.5]{L} for a proof of this deep result.
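For illustration (this is the standard presentation, recorded here only to make the statement concrete): in the simplest nontrivial case $\mathfrak g=\mathfrak{sl}_3$ with generic $q$, where $d_1=d_2=1$ and $a_{12}=a_{21}=-1$, the ideal $I_W$ is generated by the two quantized Serre relations \[ E_1^2E_2-(q+q^{-1})E_1E_2E_1+E_2E_1^2, \qquad E_2^2E_1-(q+q^{-1})E_2E_1E_2+E_1E_2^2. \] For $\mathfrak g=\mathfrak{sl}_2$ there are no relations at all and $\mathcal{B}(W)=\mathbb{C}[E_1]$.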
\medbreak{\bf 3.} During the last few years several classification results for Hopf algebras were obtained based on the theory of Nichols algebras and following the procedure proposed in \cite{AS-jalg}. This program has been particularly successful for finite-dimensional pointed Hopf algebras with abelian group of group-like elements \cite{AS-05}.
Let $H$ be a finite-dimensional cosemisimple complex Hopf algebra and let us consider a finite-dimensional Hopf algebra $A$ such that its coradical $A_0$ is a Hopf algebra isomorphic to $H$. To solve the problem of classifying all such Hopf algebras $A$, we have to address two fundamental questions: Given a Yetter-Drinfeld module $W$ over $H$, \begin{enumerate}
\item[(a)]\label{questions}
decide when $\dim {\mathcal B}(W) < \infty$, and
\item[(b)] describe a suitable set of defining relations of
${\mathcal B}(W)$. \end{enumerate}
Now, since $H$ is cosemisimple, it is also semisimple \cite{LR}; then the category ${}^H_H\mathcal{YD}$ of Yetter-Drinfeld modules over $H$ is semisimple \cite{Ramq}. Therefore we just need to consider the questions (a) and (b) in the following cases:
\begin{enumerate}
\item[(i)] when $W$ is an irreducible Yetter-Drinfeld module, and
\item[(ii)] when $W = V_1 \oplus \dots \oplus V_\theta$ is a direct sum of irreducible Yetter-Drinfeld modules, under the assumption that the answers to questions (a) and (b) are known for $V_1, \dots, V_\theta$. \end{enumerate}
As applications of the main results of the present paper we obtain new information in case (ii) when not all the $V_i$ are one-dimensional.
\medbreak If $V = {\mathbb C} v$ is a one-dimensional Yetter-Drinfeld submodule over $H$, then it determines a group-like element $g\in G(H)$ and a character $\chi\in \operatorname{Alg}(H, {\mathbb C})$ defining the coaction and the action of $H$. Let $q = \chi(g)$. The Nichols algebra of $V$ is easy to determine: it is either the polynomial algebra ${\mathbb C}[v]$, when $q = 1$ or is not a root of 1, or else it is the truncated polynomial algebra ${\mathbb C}[v]/(v^N)$, when $q$ is a root of 1 of order $N >1$. In other words, questions (a) and (b) have completely satisfactory answers in this case.
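This dichotomy rests on the quantum binomial formula, which we recall for the reader's convenience (a standard computation). In $T(V)$ the braiding is $c(v\otimes v)=q\, v\otimes v$, whence \[ \Delta(v^n)=\sum_{k=0}^{n}\binom{n}{k}_{q}\, v^{k}\otimes v^{n-k}, \] where $\binom{n}{k}_{q}$ denotes the Gaussian binomial coefficient. If $q$ is a root of 1 of order $N>1$, then $\binom{N}{k}_{q}=0$ for all $0<k<N$, so $v^N$ is primitive and generates the ideal $I_V$; if $q=1$ or $q$ is not a root of 1, all these coefficients are nonzero and $I_V=0$.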
\medbreak Assume next that $W = {\mathbb C} v_1 \oplus \dots \oplus {\mathbb C} v_\theta $ is a direct sum of \emph{one-dimensional} Yetter-Drinfeld submodules over $H$. Let $g_i\in G(H)$ and $\chi_i\in \operatorname{Alg}(H, \mathbb{C})$ be determined by the submodule ${\mathbb C} v_i$ as above. Let $q_{ij} = \chi_j(g_i)$, $1\le i,j\le \theta$. Note that in the classical situation of the Yetter-Drinfeld module \eqref{example} for $q$ a root of unity, $q_{ij} = q^{d_ia_{ij}}$. The Nichols algebra of $W$ can be viewed as a ``gluing'' of the various Nichols subalgebras ${\mathcal B}(\mathbb{C} v_i)$ along the generalized Dynkin diagram with vertices $1, \dots, \theta$; there is a line joining the vertices $i$ and $j$ if $q_{ij}q_{ji} \neq 1$, and then the line is labelled by the scalar $q_{ij}q_{ji}$, resembling the classical Killing-Cartan classification of semisimple Lie algebras.
\medbreak Assume moreover that $W$ is of Cartan type, that is, there exist $a_{ij}\in \mathbb Z$ such that $q_{ij}q_{ji} = q_{ii}^{a_{ij}}$ for any $i\neq j$; the classical example of Cartan type is $q_{ij}=q^{d_ia_{ij}}$ for all $i,j$ where $(d_ia_{ij})$ is the symmetrized Cartan matrix. Then $A = (a_{ij})$ is a generalized Cartan matrix, and we have complete answers to questions (a) and (b) above:
\begin{enumerate}
\item[(a)] \emph{$\dim{\mathcal B}(W) < \infty$ if and only if $A$ is of finite
type.}
\item[(b)] \emph{If $\dim{\mathcal B}(W) < \infty$, then the ideal of relations of ${\mathcal B}(W)$ is generated by the quantum
Serre relations and appropriate powers of the root vectors.}
\end{enumerate}
These results were proved in \cite{AS-adv} under some restrictions on the orders of the $q_{ij}$'s by reduction to the theory of quantum groups. Part (a) was shown without any restriction in \cite{H3}.
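To make the notion of Cartan type concrete, consider the following minimal example (included only for illustration). Let $\theta=2$ and $q_{11}=q_{22}=q^{2}$, $q_{12}=q_{21}=q^{-1}$, where $q$ is a root of 1 of order $N>2$. Then $q_{12}q_{21}=q^{-2}=q_{11}^{-1}=q_{22}^{-1}$, so $W$ is of Cartan type with $a_{12}=a_{21}=-1$, and the generalized Cartan matrix is of finite type $A_2$; by (a), $\dim{\mathcal B}(W)<\infty$. This is precisely the braiding of the Yetter-Drinfeld module \eqref{example} for $\mathfrak g=\mathfrak{sl}_3$ at a root of unity.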
\medbreak The classification of the matrices $(q_{ij})$ whose corresponding Nichols algebras are finite-dimensional{} is given in \cite{H2}. In general, one Cartan matrix is not sufficient to describe the Nichols algebra, a family of generalized Cartan matrices is needed. The main instrument to control them is the Weyl groupoid -- introduced already in \cite{H3}. As for the defining relations, these are not yet known, except in the standard case \cite{Ag} when all the Cartan matrices are the same, and in the general case of rank two, that is $\theta=2$ \cite{H4}. Their precise description is more delicate than (b) above.
\medbreak {\bf 4.} Let now $W = V_1 \oplus \dots \oplus V_\theta$ be an arbitrary direct sum of irreducible Yetter-Drinfeld modules. In analogy with the situation of Cartan type, it was proposed to consider the $V_i$'s as ``fat points'' of a generalized Dynkin diagram (or some kind of generalized Cartan matrix) and, assuming the knowledge of the Nichols algebras ${\mathcal B}(V_i)$, to describe the Nichols algebra ${\mathcal B}(W)$ as a ``gluing'' of the various Nichols subalgebras ${\mathcal B}(V_i)$ along it \cite[p. 41]{bariloche}. Because of \cite{H2}, it is clear that just one generalized Cartan matrix would not be enough, and that we would need to attach to our $W$ a collection of generalized Cartan matrices. This is what we do in the present paper.
\medbreak {\bf 5.} Let us now proceed with a detailed description of our results which in fact hold in a much more general context. Let $\Bbbk$ be an arbitrary field and let $H$ be any Hopf algebra with bijective antipode. Let \begin{equation} W = V_1 \oplus \dots \oplus V_\theta \end{equation} be a direct sum of finite-dimensional{} irreducible Yetter-Drinfeld modules over $H$. Assume for simplicity that the adjoint action of ${\mathcal B}(W)$ on itself is locally finite. We fix an index $i$, $1\le i\le \theta $. Let \begin{align}\label{cartanmatrices}
a_{ij} &= 1 - \text{top degree of } \ad\, {\mathcal B}(V_i)(V_j), \quad i\neq
j,\text{ and } a_{ii} = 2. \end{align} Then $(a_{ij})_{1\le i,j\le \theta}$ is a generalized Cartan matrix attached to $W$; note that a version of the quantum Serre relations holds by definition. We define $V'_i=V_i^*$, $V'_j$ as the top homogeneous component of $\ad\, {\mathcal B}(V_i)(V_j)$ if $i\neq j$, and \begin{align}\label{V}
W' &= V'_1 \oplus \dots \oplus V'_\theta. \end{align}
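In the special case where all the $V_i$ are one-dimensional and $W$ is of Cartan type, and under suitable genericity assumptions, \eqref{cartanmatrices} recovers the matrix of Cartan type (this illustration is not needed in the sequel): the quantum Serre relations give $(\ad v_i)^{1-a_{ij}}(v_j)=0$, while $(\ad v_i)^{-a_{ij}}(v_j)\neq 0$, so the top $\mathbb{N}$-degree of $\ad\, {\mathcal B}(V_i)(V_j)$ is $1-a_{ij}$, and \eqref{cartanmatrices} returns $a_{ij}$.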
Let ${\mathcal K}$ be the algebra of coinvariant elements of ${\mathcal B}(W)$ with respect to the right coaction of ${\mathcal B}(V_i)$, and let $\#$ denote the smash product introduced in Definition \ref{def:ksmash}. Our first main result is the key step for the construction of the family of Cartan matrices, generalizing \cite[Prop. 1]{H3} where all $V_i$ are one-dimensional.
\begin{thmintro}\label{theo:first} There is an isomorphism \begin{align}\label{intro:iso}
{\mathcal B}(W')\simeq {\mathcal K}\#{\mathcal B}(V_i^*). \end{align}
In particular, if $\dim {\mathcal B}(W) < \infty$, then $\dim {\mathcal B}(W)=\dim {\mathcal B}(W')$. \end{thmintro}
The assignment $W\mapsto W'$ is a generalized $i$-th reflection. Theorem \ref{theo:first} allows us to find, by iterated reflections, a class of new Nichols algebras $\mathcal{B}(W')$ of the same dimension as $\mathcal{B}(W)$. This defines an equivalence relation between (possibly non-isomorphic) Nichols algebras. The collection of generalized Cartan matrices we are looking for consists of the generalized Cartan matrices of the Nichols algebras in the equivalence class of $\mathcal{B}(W)$. We are now in a position to define real roots of $\mathcal{B}(W)$, see Section \ref{ss:Weylgroupoid}.
In order to prove Theorem \ref{theo:first}, we have to overcome several difficulties. The proof of \cite[Prop. 1]{H3} depends on the existence of PBW-bases shown by Kharchenko \cite{Kh}. In our case, where not all the $V_i$ are one-dimensional, such bases do not exist in general. Another difficulty in the general case is to prove irreducibility of the Yetter-Drinfeld modules $V_i'$ in \eqref{V}. Our proof of Theorem \ref{theo:first} cannot rely on the usual characterization of a Nichols algebra as a braided Hopf algebra with special properties, because it does not seem possible to describe the comultiplication of ${\mathcal K}\#{\mathcal B}(V_i^*)$ explicitly. Instead, we use a new characterization of Nichols algebras in terms of braided derivations, see Theorem \ref{theo:quotients-with-derivations}. This new characterization is a powerful tool to deal with Nichols algebras; we expect many applications of it.
\medbreak {\bf 6.} Having defined the collection of generalized Cartan matrices and reflections attached to our $W$, the following questions arise:
\begin{enumerate}
\item[(A)] To develop a theory of generalized root systems that
correspond to our collections of generalized Cartan matrices,
including classifications of suitable classes of them.
\item[(B)] To obtain answers to questions (a) and
(b) in page \pageref{questions} on the Nichols algebra ${\mathcal B}(W)$ from the structure of
its generalized root systems.
\end{enumerate}
These matters are beyond the scope of the present paper. In \cite{HS} the generalized root system of $\mathcal{B}(W)$ is defined (under the restriction that $H$ is semisimple or, more generally, that all finite tensor powers of $W$ are semisimple). These root systems satisfy the axioms introduced in \cite{HY} and studied in \cite{CH08a, CH08b}.
We present however a partial answer to question (B). Let us say that $W$ is \emph{standard} if the generalized Cartan matrix $(a'_{ij})_{1\le i,j\le \theta }$ corresponding to $W'$ coincides with the generalized Cartan matrix $(a_{ij})_{1\le i,j\le \theta }$ corresponding to $W$, for all $W'$ obtained from $W$ by finitely many reflections.
\begin{thmintro}\label{theo:second} If $W$ is standard and $\dim {\mathcal B}(W)< \infty$, then the generalized Cartan matrix is of finite type. \end{thmintro}
See Thm.~\ref{theorem:gpd-nichols-finite}. By \cite[Corollary 7.4]{HS} the converse of Theorem \ref{theo:second} is true: if $W$ is standard, $\dim {\mathcal B}(V_i')<\infty $ for all $i$ and all $W'$ obtained from $W$ by iterated reflections, and $(a_{ij})_{1\le i,j\le \theta }$ is of finite type, then $\dim {\mathcal B}(W)<\infty $. Using the results of the present paper, a necessary and sufficient criterion for $\dim {\mathcal B}(W)< \infty$ is given in the general non-standard case in \cite[Theorem 7.3]{HS}.
\medbreak {\bf 7.} There is at the present moment no general method to deal with questions (a) and (b) for \emph{irreducible} Yetter-Drinfeld modules over a finite non-abelian group. In fact, we know of very few finite-dimensional examples. The first examples, calculated in 1995, correspond to the transpositions in $\sn$, $n=3,4,5$ \cite{MS}. As an application of Theorem \ref{theo:second} for $\st$ and $\sk$, we prove that $\mathcal{B}(W)$ is infinite-dimensional if $W$ is not irreducible. (In \cite{HS} this result is generalized to all finite simple groups and to all symmetric groups $\sn$, $n \geq 3$.) This allows us to conclude the classifications of finite-dimensional{} pointed Hopf algebras over $\st$, Theorem \ref{theo:s3}, and of finite-dimensional{} Nichols algebras over $\sk$, Theorem \ref{thm:s4}. The group $\st$ is the first non-abelian group $G$ for which the classification of finite-dimensional{} pointed Hopf algebras with coradical $\Bbbk G$ is known, and for which a Hopf algebra other than the group algebra exists. Recently, some groups that admit no finite-dimensional{} pointed Hopf algebra except the group algebra were found: $\mathbb{A}_n$, $n\geq 5$, $n\neq 6$ \cite{AF2,AFGV} and $SL(2,q)$ with $q$ even \cite{FGV}.
Theorems \ref{theo:s3} and \ref{thm:s4} can be rephrased in terms of racks, giving rise to new techniques to establish that some Nichols algebras have infinite dimension \cite{AF3}. These techniques have been applied in \cite{AFZ, AFGV, AFGVe}.
\medbreak {\bf 8.} The paper is organized in four sections, besides this introduction. In Sect.~\ref{section:preliminaries} we collect several well-known results that will be used later on. In Sect.~\ref{section:qdo} we use quantum differential operators to give a new characterization of Nichols algebras. Sect.~\ref{section:weyl} is the bulk of the paper: We construct the reflection of a semisimple Yetter-Drinfeld module satisfying some hypotheses (for instance, having finite-dimensional{} Nichols algebra), discuss the notion of ``standard'' modules, and prove our main theorems. In Sect.~\ref{section:appl} we state a few general consequences of the theory in the previous sections, and then prove the classification results for $\st$ and $\sk$ alluded to above. We also include a result on Nichols algebras over the dihedral group $\dn$ with $n$ odd.
\medbreak In the paper $H$ denotes a Hopf algebra with bijective antipode ${\mathcal S} $.
\section{Preliminaries}\label{section:preliminaries}
\subsection{Notation}\label{subsection:notation}
Let $\Bbbk$ be a field. All vector spaces, algebras, coalgebras, Hopf algebras, unadorned tensor products and unadorned Hom spaces are over $\Bbbk$. If $V$ is a vector space and $n\in {\mathbb N}$, then $V^{{\otimes} n}$ or $T^n(V)$ denote the $n$-fold tensor product of $V$ with itself. We use the notation $\langle\,,\,\rangle: \Hom(V, \Bbbk) \times V \to \Bbbk$ for the standard evaluation. We identify $\Hom(V, \Bbbk){\otimes} \Hom(V, \Bbbk)$ with a subspace of $\Hom(V{\otimes} V, \Bbbk)$ by the recipe $$ \langle f{\otimes} g, v{\otimes} w\rangle = \langle f, w\rangle\langle g, v\rangle $$ for $f, g\in \Hom(V, \Bbbk)$, $v, w\in V$. Consequently, we identify $\Hom(V, \Bbbk)^{{\otimes} n}$ with a subspace of $\Hom(V^{{\otimes} n}, \Bbbk)$, $n\in {\mathbb N}$, via \begin{equation}\label{eqn:vndual} \langle f_n{\otimes} \dots {\otimes} f_1, v_1{\otimes} \dots {\otimes} v_n\rangle = \prod_{1\le i \le n} \langle f_i, v_i\rangle, \end{equation} for $f_1, \dots, f_n\in \Hom(V, \Bbbk)$, $v_1, \dots, v_n\in V$.
\medbreak Let $\theta\in {\mathbb N}$ and let ${\mathbb I} = \{1, \dots, \theta\}$. Let $V = \oplus_{\alpha\in \Z^{\theta}} V_\alpha$ be a $\Z^{\theta}$-graded vector space. If $\alpha = (n_1, \dots, n_{\theta})\in \Z^{\theta}$, then let $\operatorname{pr}_\alpha= \operatorname{pr}_{n_1, \dots, n_{\theta}}: V\to V_\alpha$ denote the projection associated to this direct sum. We identify $\Hom(V_\alpha, \Bbbk)$ with a subspace of $\Hom(V, \Bbbk)$ via the transpose of $\operatorname{pr}_\alpha$. The graded dual of $V$ is \begin{align}
V^{\operatorname{gr-dual} } = \oplus_{\alpha\in \Z^{\theta}} \Hom(V_\alpha, \Bbbk)
\subset \Hom(V, \Bbbk).
\label{eq:grdual} \end{align} If $V = \oplus_{\alpha\in \Z^{\theta}} V_{\alpha}$ is a
$\Z^{\theta}$-graded vector space, then the support of $V$ is $\operatorname{supp} V := \{\alpha\in \Z^{\theta} \,|\, V_{\alpha} \neq 0\}$.
\medbreak Let $C$ be a coassociative coalgebra. Let $\Delta^n: C \to C^{{\otimes} (n+1)}$ denote the $n$-th iterated comultiplication of $C$. Let $G(C)$ denote the set of group-like elements of $C$. If $g, h\in G(C)$, then let ${{\mathcal P}}_{g,h}(C)$ denote the space
$\{x \in C\,|\, \Delta(x) = g \otimes x + x \otimes h\}$ of $g$, $h$ skew-primitive elements of $C$. If $C$ is a braided bialgebra, then ${{\mathcal P}}(C) := {{\mathcal P}}_{1,1}(C)$. The category of left (resp. right) $C$-comodules is denoted ${}^C\mathcal{M}$, resp. $\mathcal{M}^C$. We use Sweedler's notation for the comultiplication of $C$: If $x\in C$, then $\Delta(x) = x\_{1}\otimes x\_{2}$. Similarly, the coaction of a left $C$-comodule $M$ is denoted $\delta(m) = m\_{-1} {\otimes} m\_{0}\in C{\otimes} M$, $m\in M$.
\begin{obs}\label{rem:C*alg}
The dual vector space $C^* = \Hom(C, \Bbbk)$ is an algebra with the convolution product: $\langle fg, c\rangle = \langle g, c\_{1}\rangle\, \langle f, c\_{2}\rangle$, cf. \eqref{eqn:vndual}, for $f,g\in C^*$, $c\in C$. The reader should be warned that usually one writes $C^*{}^{\op }$ for this algebra, see \cite[Sect.\,1.4.1]{Mo}. With our convention -- forced by \eqref{eqn:vndual} -- any left $C$-comodule becomes a \emph{left} $C^*$-module by \begin{equation}\label{transpose-action} f\cdot m = \langle f, m\_{-1}\rangle m\_{0}, \end{equation} $f\in C^*$, $m\in M$. Indeed, if also $g\in C^*$, then \begin{align*} f\cdot (g\cdot m) &= \langle g, m\_{-1}\rangle f\cdot m\_{0} = \langle f, m\_{-1}\rangle \langle g, m\_{-2}\rangle m\_{0} \\ &= \langle fg, m\_{-1}\rangle m\_{0} = (fg)\cdot m. \end{align*} \end{obs}
Recall that a graded coalgebra is a coalgebra $C$ provided with a grading $C = \oplus_{m\in {\mathbb N}_0} C^m$ such that $\Delta(C^m) \subset \oplus_{i+j =m} C^i {\otimes} C^j$. Then the graded dual $C^{\operatorname{gr-dual} }$ is a subalgebra of $C^*$.
Let $\Delta_{i,j}: C^{m} \to C^i {\otimes} C^j$ denote the composition $\operatorname{pr}_{i,j}\Delta$, where $m = i+ j$. More generally, if $i_1, \dots, i_n\in {\mathbb N}_0$ and $i_1 + \dots + i_n = m$, then $\Delta_{i_1, \dots, i_n}$ is the composition $\operatorname{pr}_{i_1, \dots, i_n}\Delta^{n-1}$: \begin{equation}\label{deltaij} \xymatrix{C^{m} \ar[d]_{\Delta_{i_1, \dots, i_n}} \ar[0,1]^-{\Delta^{n-1}} & \oplus_{j_1 + \dots + j_n = m}C^{j_1} {\otimes} \dots {\otimes} C^{j_n} \ar@{>>}[1,-1]^{\qquad\operatorname{pr}_{i_1, \dots, i_n}} \\ C^{i_1} {\otimes} \dots {\otimes} C^{i_n}. &} \end{equation}
\begin{obs}\label{subcom-gen} Let $C$ be a coalgebra, let $M\in {}^{C}\mathcal{M}$ and let $Z\subset M$ be a vector subspace. Then the subcomodule generated by $Z$ is \begin{equation}\label{subcom-gen-u}
C^*\cdot Z = \Bbbk\text{-span of }\{\langle f, z\_{-1}\rangle\, z\_{0}\,|\, z\in Z, \, f\in C^*\}. \end{equation} If $C = \oplus_{n\in {\mathbb N}_0} C^n$ is a graded coalgebra, then \begin{equation}\label{subcom-gen-gr}
C^*\cdot Z = \Bbbk\text{-span of }\{\langle f, z\_{-1}\rangle\, z\_{0}\,|\, z\in Z, \, f\in C^{\operatorname{gr-dual} }\}. \end{equation} \end{obs}
\begin{proof} Clearly, \eqref{subcom-gen-u} is the subcomodule generated by $Z$. Assume that $\dim Z< \infty$. Then there exists $m\in {\mathbb N}$ such that $\delta(Z) \subset \oplus_{0\le n\le m} C^n{\otimes} M$. Therefore, in \eqref{subcom-gen-u} it suffices to take $$f\in\left(\oplus_{ n> m} C^n\right)^\perp \simeq \left(\oplus_{0\le n\le m} C^n\right)^* \subset \oplus_{n\ge 0} \left(C^n\right)^*.$$ If $\dim Z$ is arbitrary, then $$C^*\cdot Z = C^*\cdot\Big(\sum_{Z'\subset Z\,|\,\dim Z' < \infty} Z'\Big) = \sum_{Z'\subset Z\,|\,\dim Z' < \infty} \big(C^*\cdot Z'\big),$$ proving the assertion. \end{proof}
\subsection{Yetter-Drinfeld modules}\label{subsection:yd} Our reference for the theory of Hopf algebras is \cite{Mo}. Recall that $H$ is a Hopf algebra with bijective antipode ${\mathcal S}$. The adjoint representation of $H$ on itself is the algebra map $\ad: H\to \operatorname{End} H$, $\ad x (y) = x\_{1} y {\mathcal S}(x\_{2})$, $x,y\in H$. Then \begin{equation}\label{ad} \ad x (yy') = \ad (x\_{1}) (y) \ad(x\_{2})(y'), \end{equation} $x,y, y'\in H$. That is, $H$ is a left $H$-module algebra via the adjoint.
\medbreak Let ${}^H_H\mathcal{YD}$ be the category of Yetter-Drinfeld modules over $H$; $V\in {}^H_H\mathcal{YD}$ is a left $H$-module and a left $H$-comodule such that \begin{equation}\label{yd} \delta(h\cdot x) = h\_{1}x\_{-1}{\mathcal S}(h\_{3}){\otimes} h\_{2}\cdot x\_{0}, \end{equation} $h\in H$, $x\in V$. It is well-known that ${}^H_H\mathcal{YD}$ is a braided tensor category, with braiding $c_{V, W}: V{\otimes} W \to W{\otimes} V$, $c_{V, W}(v{\otimes} w) = v\_{-1}\cdot w{\otimes} v\_{0}$, $V, W\in {}^H_H\mathcal{YD}$, $v\in V$, $w\in W$. We record that the inverse braiding is given by \begin{equation}\label{inversebraiding}
c^{-1}_{V, W}(v{\otimes} w) = w\_{0}{\otimes} {\mathcal S}^{-1}(w\_{-1})\cdot v, \end{equation} $V, W\in {}^H_H\mathcal{YD}$, $v\in V$, $w\in W$.
\begin{obs}\label{ydsubm} Let $V\in {}^H_H\mathcal{YD}$.
(i) If $U\subset V$ is an $H$-submodule, then the subcomodule $H^* \cdot U$ generated by $U$ is a Yetter-Drinfeld submodule of $V$.
(ii) If $T\subset V$ is an $H$-subcomodule, then the submodule $H\cdot T$ generated by $T$ is a Yetter-Drinfeld submodule of $V$. \end{obs}
\begin{proof} (i). If $u\in U$, $f\in H^*$ and $h\in H$, then $$ h\cdot(\langle f, u\_{-1}\rangle\, u\_{0}) = \left\langle f, {\mathcal S}( h\_{1})(h\_{2}\cdot u)\_{-1} h\_{3}\right\rangle\, (h\_{2}\cdot u)\_{0} \in H^*\cdot U $$ by \eqref{yd}. (ii) is also a direct consequence of \eqref{yd}. \end{proof}
Let $V\in {}^H_H\mathcal{YD}$ be finite-dimensional. The left and right duals of $V$ are respectively denoted $^*V$ and $V^*$. As vector spaces, $^*V = V^* = \Hom(V, \Bbbk)$. Their structures of Yetter-Drinfeld modules are determined by requiring that the following natural maps are morphisms in ${}^H_H\mathcal{YD}$: \begin{align*} &\operatorname{ev}: V^* {\otimes} V\to \Bbbk, & &\operatorname{coev}: \Bbbk\to V {\otimes} V^*, \\ &\operatorname{ev}: V {\otimes} {}^*V\to \Bbbk, & &\operatorname{coev}: \Bbbk\to {}^*V {\otimes} V, \end{align*} cf. \cite[Def. 2.1.1]{BK}. Thus $V^*$ has action and coaction given by \begin{align}\label{vd1} \langle h\cdot f, v\rangle &= \langle f, {\mathcal S}(h)\cdot v\rangle, \\\label{vd2} f\_{-1}\langle f\_{0}, v\rangle &= {\mathcal S}^{-1}(v\_{-1})\langle f, v\_{0}\rangle, \end{align} $f \in V^*$, $v\in V$. Albeit evident, we record that \eqref{vd2} is equivalent to \begin{align}\label{vd2bis} {\mathcal S}(f\_{-1})\langle f\_{0}, v\rangle &= v\_{-1}\langle f, v\_{0}\rangle, \end{align} $f \in V^*$, $v\in V$. Notice that \eqref{vd1}
provides $V^* = \Hom(V, \Bbbk)$ with an $H$-module
structure, regardless of whether $\dim V$ is finite or not.
It is easy to see that $T^n(V^*)$ is a Yetter-Drinfeld submodule of $(T^n(V))^*$ via the identification \eqref{eqn:vndual}. Also, the evaluation $\langle\,,\,\rangle: V^* \times V \to \Bbbk$ satisfies \begin{equation}\label{braided-duality} \langle c_{V^*}(f{\otimes} g), v{\otimes} w\rangle = \langle f{\otimes} g, c_{V}(v{\otimes} w)\rangle, \end{equation} $f, g\in V^*$, $v,w\in V$.
\begin{proof} We compute \begin{align*} \langle c_{V^*}(f{\otimes} g), v{\otimes} w\rangle &= \langle f\_{-1}\cdot g{\otimes} f\_{0}, v{\otimes} w\rangle = \langle f\_{0}, v\rangle\langle f\_{-1}\cdot g, w\rangle \\&= \langle f\_{0}, v\rangle\langle g, {\mathcal S}(f\_{-1})\cdot w\rangle = \langle f, v\_{0}\rangle\langle g, v\_{-1}\cdot w\rangle \\&= \langle f{\otimes} g, v\_{-1}\cdot w{\otimes} v\_{0}\rangle = \langle f{\otimes} g, c_{V}(v{\otimes} w)\rangle. \end{align*} \end{proof}
Analogously, $^*V$ has action and coaction given by $\langle v, h\cdot f\rangle = \langle {\mathcal S}^{-1}(h)\cdot v, f\rangle$, $f\_{-1} \langle v, f\_{0}\rangle = {\mathcal S}(v\_{-1})\langle v\_{0}, f\rangle$, $f \in {}^*V$, $v\in V$.
\begin{obs}\label{lema:double-dual}
One has $V\simeq V^{**}$ for any finite-dimensional{} $V\in {}^H_H\mathcal{YD}$
\cite[(2.2.6)]{BK}. Explicitly, if we identify
$V$ and $V^{**}$ as vector spaces via the map $v\mapsto \varphi _v$,
where $\langle \varphi _v,f\rangle :=\langle f,v\rangle $ for all $f\in V^*$ and $v\in V$,
then the isomorphism $\psi_V: V^{**}\to V$ in ${}^H_H\mathcal{YD}$ and its inverse
$\phi_V := \psi^{-1}_V$ are given by \begin{align}\label{eqn:double-dual}
\psi_V(\varphi _v) &= {\mathcal S}^{-2}(v\_{-1})\cdot v\_0, \\
\label{eqn:double-dualbis}
\phi_V(v) &= {\mathcal S}((\varphi _v)\_{-1})\cdot (\varphi _v)\_0,
\qquad v\in V. \end{align} Further, \eqref{vd1} and \eqref{vd2} imply that \begin{align}
\delta (\varphi _v)=&{\mathcal S}^{-2}(v\_{-1}){\otimes} \varphi _{v\_0}, \notag \\
\langle \phi _V(v), f\rangle =& \langle v\_{-1}\cdot f,v\_0\rangle .
\label{eq:phiVv,f} \end{align} \end{obs}
\subsection{Smash coproduct}\label{subsection:smashcoproduct} We shall need later the following well-known facts. Let $C\in {}^H\mathcal{M}$ be a left comodule coalgebra -- that is, the comultiplication of $C$ is a comodule map. Let us denote the comultiplication of $C$ by the following variation of Sweedler's notation: If $c\in C$, then $\Delta(c) = c\^{1}\otimes c\^{2}$. Let $C\# H$ be the corresponding smash coproduct: This is the vector space $C{\otimes} H$ (with generic element $c\#h$) with comultiplication \begin{align} \Delta(c\# h) = c^{(1)} \# (c^{(2)})_{(-1)} h_{(1)} \otimes
(c^{(2)})_{(0)} \# h_{(2)},
\label{eq:smashcopr} \end{align} $c\in C$, $h\in H$. Let $p_C = \id{\otimes} \varepsilon : C\# H \to C$ and $p_H = \varepsilon {\otimes} \id: C\# H \to H$ be the canonical coalgebra projections. Let $\tau: H{\otimes} C \to C{\otimes} H$ be given by $$\tau(h{\otimes} c) = c\_{0}{\otimes} {\mathcal S}^{-1}(c\_{-1})h,\qquad h\in H, \, c\in C.$$
\begin{lema}\label{lema:smashcopr} Let $M\in {}^{C\# H}\mathcal{M}$ with coaction $\delta_{C\# H}$. Hence also $M\in {}^{C}\mathcal{M}$ with coaction $\delta_{C} = (p_C{\otimes} \id)\delta_{C\# H}$ and $M\in {}^{H}\mathcal{M}$ with coaction $\delta_{H} = (p_H{\otimes} \id)\delta_{C\# H}$. Then the following hold. \begin{enumerate}
\item[(i)] $\delta_{C\#H} = (\id {\otimes}\delta_{H})\delta_{C} = (\tau{\otimes} \id)(\id {\otimes}\delta_{C})\delta_{H}$.
\item[(ii)] If $N\subset M$ is both a $C$-subcomodule and an $H$-subcomodule, then it is a $C\# H$-subcomodule. \item[(iii)] If $Z\subset M$ is an $H$-subcomodule, then the $C$-subcomodule generated by $Z$ is a $C\# H$-subcomodule. \end{enumerate} \end{lema}
\begin{proof} Let $m\in M$ and write $\delta_{C\# H}(m) = m\_{C, -1}\# m\_{H, -1}{\otimes} m\_{0}$. We spell out the coassociativity in this notation: \begin{multline}\label{coassociativity} m\_{C, -1}\# m\_{H, -1}{\otimes} m\_{0,C, -1}\# m\_{0,H, -1}{\otimes} m\_{0,0} \\= (m\_{C, -1})^{(1)} \# ((m\_{C, -1})^{(2)})_{(-1)} (m\_{H, -1})_{(1)} \\ \otimes ((m\_{C, -1})^{(2)})_{(0)} \# (m\_{H, -1})_{(2)} {\otimes} m\_{0}. \end{multline} Applying $p_C{\otimes} p_H {\otimes} \id$ to \eqref{coassociativity}, we get \begin{align*} (\id {\otimes}\delta_{H})\delta_{C}(m)&= m\_{C, -1}\varepsilon ( m\_{H, -1})\# \varepsilon (m\_{0,C, -1}) m\_{0,H, -1}{\otimes} m\_{0,0} \\ &= m\_{C, -1}\# m\_{H, -1} {\otimes} m\_{0} = \delta_{C\# H}(m). \end{align*} Applying $(\tau{\otimes} \id)(p_H{\otimes} p_C {\otimes} \id)$ to \eqref{coassociativity}, we get \begin{align*} (\tau{\otimes} \id)&(\id {\otimes}\delta_{C})\delta_{H}(m) = \tau \left((m\_{C, -1})_{(-1)} m\_{H, -1} \otimes (m\_{C, -1})_{(0)}\right) {\otimes} m\_{0} \\ &= (m\_{C, -1})_{(0)} {\otimes} {\mathcal S}^{-1}\left((m\_{C, -1})_{(-1)}\right)(m\_{C, -1})_{(-2)} m\_{H, -1} {\otimes} m\_{0}\\ &= \delta_{C\#H}(m). \end{align*} Now (ii) follows from the first equality in Lemma~\ref{lema:smashcopr} (i). Finally, the equality of the first and third expressions in Lemma~\ref{lema:smashcopr} (i) gives that the $C\#H$-subcomodule generated by $Z$ is contained in (and hence coincides with) the $C$-subcomodule generated by $Z$. This gives (iii). \end{proof}
\subsection{Braided Hopf algebras and bosonization}\label{subsection:bosonization} We briefly summarize results from \cite{Ra}, see also \cite{Mj}. Let $A$ be a Hopf algebra provided with Hopf algebra maps $\pi: A\to H$, $\iota: H\to A$, such that $\pi\iota = \id_H$. In other words, we have a commutative diagram in the category of Hopf algebras: $$ \xymatrix{& & H\ar@{=}[1,0]\ar@{_{(}->}[1,-2]_{\iota}\\A\ar[0,2]^{\pi}& & H. } $$
Let $R = A^{\operatorname{co} H} = \{a\in A\,|\,(\id\otimes \pi)\Delta (a) = a\otimes 1\}$. Then $R$ is a braided Hopf algebra in ${}^H_H\mathcal{YD}$. Following the notation in Subsection \ref{subsection:smashcoproduct}, let $\Delta(r) = r\^{1}\otimes r\^{2}$ denote the coproduct of $r\in R$ (or of any other braided Hopf algebra). Explicitly, $R$ is a subalgebra of $A$, and \begin{equation}\label{smash2} \begin{aligned}h\cdot r &= h_{(1)} r{\mathcal S}(h_{(2)}), \\ r\_{-1}{\otimes} r\_{0} &= \pi(r\_{1}){\otimes} r\_{2}, \\ r\^{1}{\otimes} r\^{2} &= \vartheta_R(r\_{1}){\otimes} r\_{2}, \end{aligned}\end{equation} $r\in R$, $h\in H$. Here $\vartheta_R: A \to R$ is the map defined by \begin{equation}\label{vartheta}\vartheta_R(a) = a\_{1}\iota\pi({\mathcal S} (a\_{2})),\end{equation} $a\in A$. It can be easily shown that \begin{equation}\label{proptheta} \vartheta_R(rh) = r\varepsilon (h),\qquad \vartheta_R(hr) = h\cdot r\end{equation} for $r\in R$, $h\in H$. Conversely, let $R$ be a braided Hopf algebra in ${}^H_H\mathcal{YD}$. A construction discovered by Radford, and interpreted in terms of braided categories by Majid, produces a Hopf algebra $R\# H$ from $R$. We call $R\# H$ the bosonization of $R$. As a vector space, $R\# H = R{\otimes} H$; if $r\# h := r{\otimes} h$, $r\in R$, $h\in H$, then the multiplication and comultiplication of $R\# H$ are given by \begin{equation}\label{smash1} \begin{aligned}(r\# h)(s\# f) &= r (h_{(1)} \cdot s)\# h_{(2)}f, \\ \Delta(r\# h) &= r^{(1)} \# (r^{(2)})_{(-1)} h_{(1)} \otimes (r^{(2)})_{(0)}
\# h_{(2)}.
\end{aligned}
\end{equation} The maps $$\pi_{H}: R\# H \to H \text{ and }\imath: H \to R\# H, \quad \pi_{H}(r\# h) = \varepsilon (r)h, \quad \imath(h) = 1\# h,$$ $r\in R$, $h\in H$, are Hopf algebra homomorphisms; we identify $H$ with the image of $\imath$. Hence \begin{equation}\label{smash3} r\_{1}{\otimes} r\_{2} = r\^{1}(r\^{2})\_{-1}{\otimes} (r\^{2})\_{0}, \end{equation} $r\in R$. The map $p_{R}: R\# H \to R$, $p_{R}(r\# h) = r\varepsilon (h)$, $r\in R$, $h\in H$, is a coalgebra homomorphism -- see page \pageref{subsection:smashcoproduct}. We shall write $rh$ instead of $r\# h$, $r\in R$, $h\in H$. The antipodes ${\mathcal S}_R$ of $R$ and ${\mathcal S} = {\mathcal S}_{R\# H}$ of $R\# H$ are related by \begin{align}\label{antipodas} \begin{aligned}{\mathcal S}_R(r) &= r\_{-1} {\mathcal S}(r\_{0}), \\
{\mathcal S}(r) &= {\mathcal S}(r\_{-1}) {\mathcal S}_R(r\_{0}), \end{aligned}\end{align} $r\in R$. The antipode ${\mathcal S}_R$ is a morphism of Yetter-Drinfeld modules. Let $\mu$ be the multiplication of $R$ and $c\in\operatorname{End}(R\otimes R)$ be the braiding. Then ${\mathcal S} _R$ is anti-multiplicative and anti-comultiplicative in the following sense: \begin{align}\label{eqn:antipodatrenzada-anti} \begin{aligned}{\mathcal S}_R\mu &= \mu ({\mathcal S}_R{\otimes} {\mathcal S}_R)c = \mu c({\mathcal S}_R{\otimes} {\mathcal S}_R), \\ \Delta{\mathcal S}_R &= ({\mathcal S}_R{\otimes} {\mathcal S}_R)c\Delta = c({\mathcal S}_R{\otimes} {\mathcal S}_R)\Delta , \end{aligned}\end{align} see for instance \cite[1.2.2]{AG}. The adjoint representation of $R$ on itself is the algebra map $\ad_c: R\to \operatorname{End} R$, $\ad_c x(y) = \mu(\mu\otimes{\mathcal S})(\id\otimes c)(\Delta\otimes\id)(x\otimes y)$, $x,y\in R$. That is, \begin{equation}\label{braided-adj} \ad_c x(y) = x\^{1}[(x\^2)\_{-1}\cdot y]{\mathcal S}((x\^2)\_{0}) = \ad x(y) \end{equation} for all $x,y\in R$, where the second equality follows immediately from \eqref{smash2} and \eqref{antipodas}. If $x\in {{\mathcal P}} (R)$, then \begin{align}
\ad_c x(y) =xy - (x\_{-1}\cdot y)x\_{0}
\label{eq:adcxy} \end{align}
for all $y\in R$. Similarly, define $$\ad _{c^{-1}} x(y)=xy-y\_0({\mathcal S}^{-1}(y\_{-1})\cdot x) $$ for $x\in {{\mathcal P}}(R)$, $y\in R$. We record the next well-known remark for further reference. \begin{obs}\label{prim-ydh} The space of primitive elements ${\mathcal P}(R)$ is a Yetter-Drinfeld submodule of $R$. \qed \end{obs}
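\medbreak To keep the exposition self-contained, we sketch the verification of \eqref{eq:adcxy}. If $x\in {{\mathcal P}} (R)$, then $x\^{1}{\otimes} x\^{2} = x{\otimes} 1 + 1{\otimes} x$, so \eqref{braided-adj} gives \begin{align*} \ad_c x(y) = x(1\_{-1}\cdot y){\mathcal S}_R(1\_{0}) + (x\_{-1}\cdot y){\mathcal S}_R(x\_{0}) = xy + (x\_{-1}\cdot y){\mathcal S}_R(x\_{0}) = xy - (x\_{-1}\cdot y)x\_{0}, \end{align*} the last equality because $x\_{0}\in {{\mathcal P}}(R)$ by Remark \ref{prim-ydh}, so that ${\mathcal S}_R(x\_{0}) = -x\_{0}$.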
The next consequences of \eqref{eqn:antipodatrenzada-anti} will be used later.
\begin{lema}\label{S}
\begin{enumerate}
\item[(i)] Let $x\in {\mathcal P}(R)$, $y\in R$. Then \begin{equation} \label{S1}\ad_c x({\mathcal S}_{R}(y)) = {\mathcal S}_{R}(\ad_{c^{-1}} x(y)). \end{equation} \item[(ii)] Let $X$ be a Yetter-Drinfeld submodule of $R$ and let $K$ be the subalgebra generated by $X$. Then ${\mathcal S}_{R}(K)$ is the subalgebra generated by ${\mathcal S}_{R}(X)$.
\end{enumerate} \end{lema}
\begin{proof} Since ${\mathcal S}_{R}(x) = -x$, (i) follows directly from \eqref{eqn:antipodatrenzada-anti}: $\ad_c x({\mathcal S}_{R}(y)) = -\mu({\mathcal S}_{R}{\otimes} {\mathcal S}_{R})(\id - c) (x{\otimes} y) \overset{\eqref{eqn:antipodatrenzada-anti}}= -{\mathcal S}_{R}\mu(c^{-1} - \id) (x{\otimes} y) = {\mathcal S}_{R}(\ad_{c^{-1}} x(y))$.
(ii). If $X$, $Y$ are Yetter-Drinfeld submodules of $R$, then $XY$ is also a Yetter-Drinfeld submodule and ${\mathcal S}_R(XY) = {\mathcal S}_R(Y){\mathcal S}_R(X)$ by \eqref{eqn:antipodatrenzada-anti}. This immediately implies (ii). \end{proof}
\begin{obs}\label{smash-bos} Let $K$ be a left $A$-module algebra, that is, $K$ is a left $H$-module algebra and a left $R$-module such that the action $\cdot $ of $R$ on $K$ satisfies the equation $r\cdot (k\widetilde{k}) = \big(r\^{1}\cdot ((r\^{2})\_{-1}\cdot k)\big) \big((r\^{2})\_{0}\cdot \widetilde{k}\big)$ for all $r\in R$, $k, \widetilde k \in K$.
(i) The smash product $K\# A$ is a right $H$-comodule
algebra via the coaction $(\id \# \id {\otimes} \pi)(\id \# \Delta)$,
with subalgebra of coinvariants $K\# R$.
According to \eqref{eq:smashcopr}, the product in the latter is given by $$(k\#r)(k'\#r') = k\big((r\^{1}(r\^{2})\_{-1})\cdot k'\big) \# (r\^{2})\_{0}r',$$ $k,k'\in K$, $r,r'\in R$.
(ii) The multiplication induces a linear isomorphism $R{\otimes} K \to K\# R$. The inverse map is given by $k\#r \mapsto r\_{2}{\otimes} {\mathcal S} ^{-1}(r\_{1})\cdot k$. \end{obs}
\begin{obs}\label{obs:cop} Let $B$ be a braided bialgebra. Let $B^{\cop}$ denote the algebra $B$ together with the comultiplication $c^{-1}\Delta$; this is a braided bialgebra, but with the inverse braiding, see \cite[Prop. 2.2.4]{AG}. Clearly, ${\mathcal P}(B) = {\mathcal P}(B^{\cop})$. \end{obs}
\subsection{Nichols algebras}
Let $V\in {}^H_H\mathcal{YD}$. The tensor algebra $T(V)$ is a braided Hopf algebra in ${}^H_H\mathcal{YD}$. A very important example of a braided Hopf algebra in ${}^H_H\mathcal{YD}$ is the Nichols algebra ${\mathcal B}(V)$ of $V$; this is the quotient of $T(V)$ by a homogeneous ideal ${\mathfrak J} = {\mathfrak J}(V)$, generated by (some) homogeneous elements of degree $\ge 2$. See \cite{AS-cambr} for the precise definition and main properties of Nichols algebras, and their relation with pointed Hopf algebras.
\medbreak Another description of the ideal ${\mathfrak J}(V)$ is as the kernel of the quantum symmetrizer introduced by Woronowicz \cite{Wo}, see \cite{Sbg}. Let $\mathbb B_{n}$ be the braid group on $n$ strands and let $\pi: \mathbb B_{n} \to \mathbb S_{n}$ be the natural projection; it admits a set-theoretical section $s:\mathbb S_{n} \to \mathbb B_{n}$, called the Matsumoto section. Let $\mathfrak S_{n} := \sum_{\sigma \in \mathbb S_{n}} s(\sigma)$. The braid group $\mathbb B_{n}$ acts on $T^n(V)$ via $c$, and the homogeneous component ${\mathfrak J}^n(V)$ of ${\mathfrak J} (V)$ equals $\ker \mathfrak S_{n}$. Thus ${\mathcal B}(V)$ depends (as an algebra and coalgebra) only on the braiding $c$. We write ${\mathcal B}(V) = {\mathcal B}(V,c)$, ${\mathfrak J}(V) = {\mathfrak J}(V,c)$.
The Nichols algebra has a unique grading ${\mathcal B}(V) = \oplus_{n\in {\mathbb N}_0}{\mathcal B}^n(V)$ such that ${\mathcal B}^1(V)=V$, the multiplication and the comultiplication are graded, and the action and the coaction of $H$ are homogeneous.
If $\dim V<\infty $, then there exists a bilinear form $\langle\, , \,\rangle: T({V}^*) \times T(V) \to \Bbbk$ such that \begin{align}\label{eqn:duality-homogeneous-tensor} \langle T^n({V}^*) , T^m(V)\rangle &= 0, \qquad n\neq m, \\ \label{eqn:duality-tensor} \langle f_n \dots f_1, x\rangle &= \langle f_n{\otimes} \dots {\otimes} f_1, \Delta_{1, \dots, 1}(x)\rangle \end{align} for $f_1, \dots, f_n\in {V}^*$, $x\in T^n(V)$, $n\in {\mathbb N}_0$. It satisfies the following properties:
\begin{align}\label{properties:duality1-tensor} \langle fg, x\rangle &= \langle f, x\^{2}\rangle\langle g, x\^{1}\rangle, \\\label{properties:duality2-tensor} \langle f, xy\rangle &= \langle f\^{2}, x\rangle\langle f\^{1}, y\rangle, \\\label{properties:duality3-tensor} \langle h\cdot f, x\rangle &= \langle f, {\mathcal S}(h)\cdot x\rangle, \\\label{properties:duality4-tensor} f\_{-1}\langle f\_{0}, x\rangle &= {\mathcal S}^{-1}(x\_{-1})\langle f, x\_{0}\rangle \end{align} for all $f, g\in T({V}^*)$, $x, y\in T(V)$, $h\in H$. This was first observed in \cite{Mj93}, see also \cite[10.4.13]{Mj-book}. A combination of the explicit formulas in \cite[10.4.13]{Mj-book} and \cite[Eqs.\,(3.25), (3.26)]{Wo} shows that $$\Delta_{1,\ldots ,1}= \mathfrak S_{n}$$ for all $n\in {\mathbb N} $, that is, ${\mathfrak J} (V,c)$ is the radical of the form in the second argument. More precisely, the following holds.
\begin{prop}\label{prop:duality}
\cite[Thm.\,3.2.29]{AG} Assume that $V\in {}^H_H\mathcal{YD}$ satisfies $\dim V<\infty $.
Then there exists a non-degenerate bilinear form $$\langle\, , \,\rangle: {\mathcal B}({V}^*) \times {\mathcal B}(V) \to \Bbbk$$ such that \begin{align}\label{eqn:duality-homogeneous} \langle{\mathcal B}^n({V}^*) , {\mathcal B}^m(V)\rangle &= 0, \qquad n\neq m, \\ \label{eqn:duality} \langle f_n \dots f_1, x\rangle &= \langle f_n{\otimes} \dots {\otimes} f_1, \Delta_{1, \dots, 1}(x)\rangle, \end{align} for $f_1, \dots, f_n\in {V}^*$, $x\in {\mathcal B}^n(V)$, $n\in {\mathbb N}_0$. It satisfies \eqref{properties:duality1-tensor}, \eqref{properties:duality2-tensor}, \eqref{properties:duality3-tensor}, and \eqref{properties:duality4-tensor} for all $f, g\in {\mathcal B}({V}^*)$, $x, y\in {\mathcal B}(V)$, $h\in H$. \end{prop}
This proposition shows that \begin{align}
{\mathcal B}(V)^{\operatorname{gr-dual} }\simeq{\mathcal B}(V^*),
\label{eq:BVgrdual} \end{align} where ${\mathcal B}(V)^{\operatorname{gr-dual} }$ is the graded dual of ${\mathcal B}(V)$, see \eqref{eq:grdual}.
\begin{lema}\label{lema:toba-c-menos-uno} ${\mathfrak J}(V, c) = {\mathfrak J}(V,c^{-1})$ and ${\mathcal B}(V,c) \simeq {\mathcal B}(V,c^{-1})$ as algebras. \end{lema} \begin{proof} Let ${\mathcal B}(V)^{\cop}$ be the opposite coalgebra, see Remark \ref{obs:cop}. Clearly, the algebra ${\mathcal B}(V)^{\cop}$ is generated in degree one, and ${\mathcal P}\left({\mathcal B}(V)^{\cop}\right) = {\mathcal P}\left({\mathcal B}(V)\right) = V$. Hence ${\mathcal B}(V)^{\cop} \simeq {\mathcal B}(V,c^{-1})$, and ${\mathfrak J}(V, c) = {\mathfrak J}(V,c^{-1})$. \end{proof}
\begin{lema}\label{rmk:gradedcondition} Let $x = \sum_{n\ge 1} x(n)\in {\mathcal B}(V)$, with $x(n)\in {\mathcal B}^n(V)$. Assume that $x\^{1} {\otimes} \operatorname{pr}_1(x\^{2}) = 0$. Then $x=0$. \end{lema}
\begin{proof} From $0 =x\^{1} {\otimes} \operatorname{pr}_1(x\^{2}) =\sum_{n\ge 1} x(n)\^{1} {\otimes} \operatorname{pr}_1(x(n)\^{2})$ we conclude that $\Delta_{n-1, 1}(x(n)) = x(n)\^{1} {\otimes} \operatorname{pr}_1(x(n)\^{2}) = 0$, since $x(n)\^{1} {\otimes} \operatorname{pr}_1(x(n)\^{2})$ $\in {\mathcal B}^{n-1}(V){\otimes} {\mathcal B}^{1}(V)$. But $\Delta_{n-1, 1}$ is injective in a Nichols algebra, hence $x(n) = 0$ for all $n$ and \emph{a fortiori} $x=0$. \end{proof}
For simplicity, we write ${\mathcal A}(V) = {\mathcal B}(V)\# H$ for the bosonization of ${\mathcal B}(V)$. Then ${\mathcal A}(V) = \oplus_{n\in {\mathbb N}_0}{\mathcal A}^n(V)$, where ${\mathcal A}^n(V) = {\mathcal B}^n(V)\# H$, is a graded Hopf algebra.
\section{The algebra of quantum differential operators}\label{section:qdo}
We now discuss two algebras of quantum differential operators that have appeared frequently in the literature. For quantum groups, it seems that they were first defined in \cite{Ka}, see also \cite[Chapter 15]{L}. For Yetter-Drinfeld modules over finite group algebras, see \cite{G-jalg}.
\subsection{The algebra of quantum differential operators}
Let $B$ be a brai\-ded bialgebra in ${}^H_H\mathcal{YD}$. Then the space of linear endomorphisms $\operatorname{End} B$ is an associative algebra with respect to the convolution product: $T*S\,(b) = T(b\^2)S(b\^1)$, $T, S\in \operatorname{End} B$, $b\in B$, a convention consistent with \eqref{eqn:vndual}. Since $B$ is a left and right comodule over itself via the comultiplication, it becomes a left and right module over $B^*$. If $\xi\in B^*$, then we define the quantum differential operators $\pl{}, \pr{}: B^* \to \operatorname{End} B$ as the representations associated to these actions. That is, \begin{equation}\label{eqn:defiqdo-gral} \pl{\xi}(b) = \langle \xi, b\^1\rangle b\^2, \qquad \pr{\xi}(b) = \langle \xi, b\^2\rangle b\^1, \qquad b\in B,\, \xi\in B^*. \end{equation} Also, let $L, R:B\to \operatorname{End} B$ be the left and right regular representations.
\medbreak If $\xi,\zeta\in B^*$, then clearly $\pl{\zeta}\pr{\xi} = \pr{\xi}\pl{\zeta}$. Other basic properties of the quantum differential operators are stated in the next lemma.
Recall that $A^{\sw}$ denotes the Sweedler dual of an algebra $A$. Explicitly,
\begin{align*}A^{\sw} &= \{f\in \Hom(A, \Bbbk)\,|\,\ker f \text{ contains a left ideal $I$ of finite codimension}\}. \end{align*}
\begin{lema}\label{lema:qdo-gral}
{\rm (i)} The maps $\pl{}: B^* \to \operatorname{End} B$ and $\pr{}: B^{*\op} \to \operatorname{End} B$ are injective algebra homomorphisms.
{\rm (ii)} If $B$ is a braided Hopf algebra with bijective antipode,
then the maps $\Psi^L, \Psi^R: B{\otimes} B^* \to \operatorname{End} B$,
$\Psi^L(b{\otimes} \xi) = L_b\circ \pl{\xi}$,
$\Psi^R(b{\otimes} \xi) = R_b\circ \pr{\xi}$,
are injective.
{\rm (iii)} If $\xi\in B^{\sw}$ and $b,c\in B$, then
\begin{align}\label{eqn:leibniz-pl-gral} \pl{\xi} (bc) &= \langle \xi\^2, b\^1 \rangle (b\^2)\_{0} \, \pl{{\mathcal S}^{-1}((b\^2)\_{-1})\cdot\xi\^1} (c), \\\label{eqn:leibniz-pr-gral} \pr{\xi} (bc) &= \pr{(\xi\^2)\_{0}}(b)\, {\mathcal S}\left((\xi\^2)\_{-1}\right) \cdot \pr{ \xi\^1}(c). \end{align}
{\rm (iv)} If $\xi\in {\mathcal P}(B^{\sw})$ and $b,c\in B$, then
\begin{align}\label{eqn:leibniz-pl-prim} \pl{\xi} (bc) &= b\_{0} \pl{{\mathcal S}^{-1}(b\_{-1})\cdot\xi} (c) + \pl{\xi}(b) c, \\\label{eqn:leibniz-pr-prim} \pr{\xi} (bc) &= b \pr{\xi}(c) + \pr{\xi\_{0}}(b)\, {\mathcal S}\left(\xi\_{-1}\right) \cdot c. \end{align}
{\rm (v)} Let $U$ be a Yetter-Drinfeld submodule of
${\mathcal P}(B^{\sw})$. Let $S$ be the subalgebra of $B^{\sw}$
generated by $U$.
Then ${\mathcal D}^L(B, U) :=
L(B)\circ \pl{}(S)$ and ${\mathcal D}^R(B, U) := R(B)\circ \pr{}(S)$ are
subalgebras of $\operatorname{End} B$. \end{lema}
\begin{proof} (i). If $\pl{\xi}(b) = 0$, then $\langle \xi, b\rangle = \varepsilon \pl{\xi}(b) = 0$; thus $\pl{}$ is injective -- and similarly for $\pr{}$. Now, if $b\in B$, $\xi, \zeta\in B^*$, then \begin{align*} \pl{\zeta}\pl{\xi}(b) &= \langle \xi, b\^1\rangle \pl{\zeta} (b\^2) = \langle \xi, b\^1\rangle \langle \zeta, b\^2\rangle b\^3 = \langle \zeta *\xi, b\^1\rangle b\^2 = \pl{\zeta *\xi}(b), \\ \pr{\zeta}\pr{\xi}(b) &= \langle \xi, b\^2\rangle
\pr{\zeta} (b\^1) = \langle \xi, b\^3\rangle \langle \zeta, b\^2\rangle b\^1 = \langle \xi *\zeta,
b\^2\rangle b\^1 = \pr{\xi *\zeta}(b). \end{align*}
(ii). Let $\sum_i b_i{\otimes}\xi_i\in \ker \Psi^L$, and assume that the $b_i$'s are linearly independent. Thus $\sum_i b_i \langle \xi_i, b\^1\rangle b\^2 = 0$ for any $b\in B$. Therefore \begin{align*} \sum_i b_i \langle \xi_i, b\rangle = \sum_i b_i \langle \xi_i, b\^1\rangle b\^2 {\mathcal S}_B(b\^3)= 0 \implies \langle \xi_i, b\rangle = 0 \end{align*} for all $i$ and $b\in B$; hence $\xi_i = 0$ for all $i$. The argument for $\Psi^R$ is similar.
(iii). We compute {\allowdisplaybreaks \begin{align*}
\pl{\xi} (bc) &\overset{\phantom{\eqref{vd1}}}=
\langle \xi, (bc)\^1\rangle (bc)\^2
= \langle \xi\^1{\otimes} \xi\^2, b\^1{\otimes} (b\^2)\_{-1}\cdot c\^1\rangle
(b\^2)\_{0} c\^2 \\
&\overset{\phantom{\eqref{vd1}}}= \langle \xi\^2, b\^1 \rangle (b\^2)\_{0}\,
\langle \xi\^1,(b\^2)\_{-1}\cdot c\^1\rangle c\^2 \\
&\overset{\eqref{vd1}}= \langle \xi\^2, b\^1 \rangle (b\^2)\_{0} \,
\langle {\mathcal S}^{-1}\big((b\^2)\_{-1}\big)\cdot\xi\^1, c\^1\rangle c\^2; \\
\pr{\xi} (bc) &\overset{\phantom{\eqref{vd1}}}=
\langle \xi, (bc)\^2\rangle (bc)\^1
= \langle \xi\^1{\otimes} \xi\^2, (b\^2)\_{0} {\otimes} c\^2\rangle
b\^1 (b\^2)\_{-1}\cdot c\^1 \\
&\overset{\phantom{\eqref{vd1}}}= \langle \xi\^2, (b\^2)\_{0} \rangle
b\^1 \,\langle \xi\^1,c\^2\rangle (b\^2)\_{-1}\cdot c\^1 \\
&\overset{\eqref{vd2bis}}= \langle (\xi\^2)\_{0}, b\^2 \rangle
b\^1 \,\langle \xi\^1,c\^2\rangle {\mathcal S}\left((\xi\^2)\_{-1}\right) \cdot c\^1. \end{align*} }
Now (iv) follows at once from (iii). Next, \eqref{eqn:leibniz-pl-prim} and \eqref{eqn:leibniz-pr-prim} say that \begin{align}\label{eqn:leibniz-pl-appl} \pl{\xi} \circ L_b &= L_{b\_{0}} \circ\pl{{\mathcal S}^{-1}(b\_{-1})\cdot\xi} + L_{\pl{\xi}(b)}, \\\label{eqn:leibniz-pr-appl} \pr{\xi}\circ R_c &= R_{\pr{\xi}(c)} + R_{{\mathcal S}\left(\xi\_{-1}\right) \cdot c} \circ\pr{\xi\_{0}} \end{align} for $\xi\in {\mathcal P}(B^{\sw})$, $b,c\in B$. These equalities imply (v). \end{proof}
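\medbreak For the reader's convenience, we spell out how \eqref{eqn:leibniz-pl-prim} follows from \eqref{eqn:leibniz-pl-gral}: if $\xi\in {\mathcal P}(B^{\sw})$, then $\xi\^1{\otimes} \xi\^2 = \xi{\otimes} \varepsilon + \varepsilon {\otimes} \xi$. The summand with $\xi\^2 = \varepsilon$ contributes $\langle \varepsilon, b\^1\rangle (b\^2)\_{0}\, \pl{{\mathcal S}^{-1}((b\^2)\_{-1})\cdot\xi}(c) = b\_{0}\, \pl{{\mathcal S}^{-1}(b\_{-1})\cdot\xi}(c)$, while the summand with $\xi\^1 = \varepsilon$ contributes $\langle \xi, b\^1\rangle b\^2\, c = \pl{\xi}(b)\, c$, since $h\cdot \varepsilon = \varepsilon (h)\varepsilon$ and $\pl{\varepsilon} = \id$. The deduction of \eqref{eqn:leibniz-pr-prim} is analogous.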
\begin{exa}\label{exa:qdo} (i). If $B$ is a usual bialgebra, then the generalized Leibniz rules \eqref{eqn:leibniz-pl-gral} and \eqref{eqn:leibniz-pr-gral} simply say that $$\pl{\xi} (bc) = \pl{\xi\^2} (b) \pl{\xi\^1}(c), \quad \pr{\xi} (bc) = \pr{\xi\^2} (b) \pr{\xi\^1}(c).$$
\medbreak (ii). Let $W\in {}^H_H\mathcal{YD}$ be finite-dimensional{} and let $B = {\mathcal B}(W)$. By Prop.~\ref{prop:duality}, there exists an embedding ${\mathcal B}(W^*) \to {\mathcal B}(W)^*$ and we can consider the algebras of \emph{quantum differential operators} \begin{align*} {\mathcal D}^L(W) &:= {\mathcal D}^L({\mathcal B}(W), W^*) = L({\mathcal B}(W))\circ \pl{}({\mathcal B}(W^*)), \\ {\mathcal D}^R(W) &:= {\mathcal D}^R({\mathcal B}(W), W^*) = R({\mathcal B}(W))\circ \pr{}({\mathcal B}(W^*)); \end{align*} these are subalgebras of $\operatorname{End} B$ by Lemma~\ref{lema:qdo-gral}(v).
\medbreak Let $x\in{\mathcal B}(W)$ be homogeneous of degree $p$. Let us write in this case $$ \Delta(x) = x{\otimes} 1 + 1{\otimes} x + \sum_{0<r<p} x'_r{\otimes} x''_{p-r}. $$ Here we use a symbolic notation with $x'_r\in{\mathcal B}^r(W)$, $x''_{p-r}\in{\mathcal B}^{p-r}(W)$. If $f\in W^*$ and $p>1$, then $\pl{f}(x) = \langle f, x'_1\rangle x''_{p-1}$, $\pr{f}(x) = x'_{p-1}\langle f, x''_1\rangle$. Also, $\pl{f}(w) = \langle f, w\rangle = \pr{f} (w)$ for $w\in W$.
The following fact is well-known and goes back essentially to \cite{N}: If $x\in {\mathcal B}(W)$ and $\pl{f}(x) = 0$ for all $f\in W^*$, then $x\in \Bbbk$.
\medbreak (iii). Let $W$ and ${\mathcal B}(W)$ be as in (ii). Assume that $W$ admits a basis $v_1, \dots, v_\theta$ such that $\delta(v_i) = g_i {\otimes} v_i$, for some $g_i\in G(H)$, $1\le i \le \theta$. Let $f_1, \dots, f_\theta$ be the dual basis; then $\delta(f_i) = g_i^{-1} {\otimes} f_i$, $1\le i \le \theta$. Set $\partial_i = \pr{f_i}$. Then $$ \partial_i(bc) = b\partial_i(c) + \partial_i(b) \,g_i\cdot c, \qquad b,c\in {\mathcal B}(W). $$ Similarly, let $\operatorname{Alg}(H,\Bbbk )$ be the group of algebra homomorphisms from $H$ to $\Bbbk$; it acts on $B$ by $\chi\cdot b = \langle\chi, b\_{-1}\rangle b\_0$. Suppose that $W$ admits a basis $v_1, \dots, v_\theta$ such that $h\cdot v_i = \chi_i(h) v_i$, for some $\chi_i\in \operatorname{Alg}(H,\Bbbk )$, $1\le i \le \theta$. Let $f_1, \dots, f_\theta$ be the dual basis; then $h\cdot f_i = \chi_i^{-1}(h) f_i$, $1\le i \le \theta$. Set $\underline{\partial}_i = \pl{f_i}$. Then $$ \underline{\partial}_i(bc) = (\chi_i\cdot b)\underline{\partial}_i(c) + \underline{\partial}_i(b) c, \qquad b,c\in {\mathcal B}(W). $$ \end{exa}
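\medbreak We sketch the derivation of the twisted Leibniz rule for $\partial_i$ in Example \ref{exa:qdo} (iii) from \eqref{eqn:leibniz-pr-prim}: since $\delta(f_i) = g_i^{-1}{\otimes} f_i$ and ${\mathcal S}(g_i^{-1}) = g_i$, we have \begin{align*} \partial_i(bc) = b\,\partial_i(c) + \pr{(f_i)\_{0}}(b)\, {\mathcal S}\big((f_i)\_{-1}\big)\cdot c = b\,\partial_i(c) + \partial_i(b)\, g_i\cdot c, \end{align*} $b, c\in {\mathcal B}(W)$. The rule for $\underline{\partial}_i$ follows analogously from \eqref{eqn:leibniz-pl-prim}, using that $\chi_i^{-1}({\mathcal S}^{-1}(h)) = \chi_i(h)$, $h\in H$.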
\begin{prop}\label{prop:qdo-toba} Let $W\in {}^H_H\mathcal{YD}$ be finite-dimensional.
(1). The map $$\Psi ^L:{\mathcal B} (W)\otimes {\mathcal B}(W^*)\to {\mathcal D}^L(W)$$ is a linear isomorphism.
(2). The map $\Theta:T(W \oplus W^*) \to {\mathcal D}^L(W)$, $(v, f) \mapsto L_v\circ \pl{f}$, $v\in W$, $f\in W^*$, induces an algebra isomorphism $\vartheta: T(W \oplus W^*)/I \to {\mathcal D}^L(W)$, where $I$ is the two-sided ideal generated by
\begin{enumerate}
\item[(i)] the relations of ${\mathcal B}(W)$,
\item[(ii)] the relations of ${\mathcal B}(W^*)$,
\item[(iii)] the relations
\begin{align}\label{eqn:rel-qdo} f v = v\_{0}\, {\mathcal S}^{-1}(v\_{-1})\cdot f + \pl{f}(v), \qquad v\in W, \, f\in W^*. \end{align} \end{enumerate}
\end{prop}
\medbreak If $v\in W$, $f\in W^*$, then \eqref{eqn:rel-qdo} implies that \begin{align}\label{eqn:rel-qdo-bis} vf = (v\_{-1}\cdot f) \, v\_{0} - \pl{v\_{-1}\cdot f}(v\_0). \end{align}
\begin{proof} By what was already said, $\Theta$ induces $\vartheta$ and this is surjective. Indeed, \eqref{eqn:leibniz-pl-appl} says more generally that
\begin{align}\label{eqn:rel-qdogral} f x = x\_{0}\, {\mathcal S}^{-1}(x\_{-1})\cdot f + \pl{f}(x), \qquad x\in {\mathcal B}(W), \, f\in W^*. \end{align} Clearly, the inclusions of $W$ and $W^*$ induce algebra maps $j_W: {\mathcal B}(W) \to T(W \oplus W^*)/I$, $j_{W^*}: {\mathcal B}(W^*) \to T(W \oplus W^*)/I$. Let $\mu$ be the multiplication of $T(W \oplus W^*)/I$. Then \eqref{eqn:rel-qdo} guarantees that $\mu \circ(j_W{\otimes} j_{W^*})$ is surjective. But the following diagram commutes: \begin{equation*} \xymatrix{{\mathcal B}(W){\otimes}{\mathcal B}(W^*) \ar[1,1]_{\Psi^L} \ar[0,2]^-{\mu \circ(j_W{\otimes} j_{W^*})} & & T(W \oplus W^*)/I \ar@{>>}[1,-1]^{\vartheta} \\ &{\mathcal D}^L(W), &} \end{equation*} and $\Psi^L$ is a linear isomorphism by Lemma~\ref{lema:qdo-gral} (ii). Thus $\mu \circ(j_W{\otimes} j_{W^*})$ and $\vartheta$ are isomorphisms. \end{proof}
\begin{cor}\label{cor:qdo-ydh} Let $W\in {}^H_H\mathcal{YD}$ be finite-dimensional. Then ${\mathcal D}^L(W)$ is an algebra in ${}^H_H\mathcal{YD}$. \end{cor}
\begin{proof}
Straightforward. \end{proof}
\begin{definition} \label{def:ksmash} Let ${\mathcal B}(W)\#{\mathcal B}(W^*)$ denote the vector space ${\mathcal B}(W)\otimes {\mathcal B}(W^*)$ with the multiplication transported along the isomorphism $\Psi ^L:{\mathcal B}(W)\otimes {\mathcal B}(W^*)\to {\mathcal D}^L(W)$. Thus \begin{align}\label{eqn:BWBW*} \xi \, b &= \langle \xi\^2, b\^1 \rangle (b\^2)\_{0} \# {\mathcal S}^{-1}((b\^2)\_{-1})\cdot \xi\^1 \end{align} for $b\in {\mathcal B}(W)$ and $\xi\in {\mathcal B}(W^*)$ by \eqref{eqn:leibniz-pl-gral}. The multiplication map
$\mu :{\mathcal B}(W^*)\otimes {\mathcal B}(W)\to {\mathcal B} (W)\#{\mathcal B}(W^*)$
is an isomorphism, with inverse map $\mu ^-$ given by
\begin{align}
\mu^-:b\xi \mapsto
\langle {\mathcal S}_{{\mathcal B}(W^*)}( (b\_{-1}\cdot \xi )\^2),b\_0{}\^1\rangle
(b\_{-1}\cdot \xi )\^1\otimes b\_0{}\^2
\label{eq:multinv}
\end{align}
for $b\in {\mathcal B}(W)$, $\xi \in {\mathcal B}(W^*)$. Note that $\Psi ^L:{\mathcal B}(W)\#{\mathcal B}(W^*)\to {\mathcal D}^L(W)$ is an isomorphism
in ${}^H_H\mathcal{YD} $, where $H$ acts and coacts diagonally on
${\mathcal B}(W)\#{\mathcal B}(W^*)$, see Corollary~\ref{cor:qdo-ydh}. \end{definition}
\begin{obs}\label{rem:Heisenberg}
As an alternative to the above construction, the algebra ${\mathcal D}^L(W)$ can be
obtained as the subalgebra ${\mathcal B} (W)\#{\mathcal B}(W^*)$
of the Heisenberg double ${\mathcal A} (W)\# {\mathcal A} (W)^{\circ }$.
Here, for any Hopf algebra $A$, the Heisenberg double $A\# A^{\circ }$
is the smash product algebra corresponding to the left action of
the Hopf dual $A^\circ $ on $A$ given by the left $A$-coaction on $A$
via $\Delta $, see our convention in Remark \ref{rem:C*alg}. The embedding of
${\mathcal B}(W)\otimes {\mathcal B}(W^*)$ into ${\mathcal A} (W)\# {\mathcal A} (W)^{\circ }$ is given by
the inclusion of ${\mathcal B}(W)$ and the map
$${\mathcal B}(W^*)\ni f\mapsto \langle f, \cdot \rangle {\otimes} \varepsilon
\in ({\mathcal B} (W)\# H)^{\circ }={\mathcal A} (W)^{\circ }.$$
One can check that ${\mathcal B}(W)\otimes {\mathcal B}(W^*)\subset
{\mathcal A}(W)\# {\mathcal A}(W)^{\circ }$
is a subalgebra and that this algebra structure on
${\mathcal B} (W) \otimes {\mathcal B}(W^*)$ coincides with ${\mathcal B} (W) \# {\mathcal B}(W^*)$
as in Definition \ref{def:ksmash}. Further, the restriction of the map in
Remark \ref{smash-bos} (ii) coincides with the map in \eqref{eq:multinv}.
These facts will not be used in the sequel. \end{obs}
\begin{obs}\label{re:multinv} Let $K$ be a subalgebra of ${\mathcal B}(W)$ and $\mathfrak K$ be a
braided Hopf subalgebra of ${\mathcal B}(W^*)$ such that \begin{itemize}
\item $K$ is an $H$-subcomodule,
\item $\mathfrak K$ is an $H$-submodule,
\item $\pl{\xi}(b) =\langle \xi ,b\^1 \rangle b\^2\in K$
for all $b\in K$, $\xi\in \mathfrak K$. \end{itemize} Then $K{\otimes} \mathfrak K $ is a subalgebra of ${\mathcal B}(W)\#{\mathcal B}(W^*)$, denoted by $K\# \mathfrak K$. Again, the multiplication map
$\mu :\mathfrak K \otimes K\to K\# \mathfrak K $
is an isomorphism. If $K\subset {\mathcal B}(W)$ and $\mathfrak K \subset {\mathcal B}(W^*)$ are subobjects in ${}^H_H\mathcal{YD}
$ then $K\#\mathfrak K $ is a subalgebra of ${\mathcal B}(W)\#{\mathcal B}(W^*)$ in ${}^H_H\mathcal{YD} $. \end{obs}
\begin{obs}\label{obs:Dlgraded} Let $\Gamma$ be an abelian group. Assume that
$W = \oplus_{\gamma\in\Gamma} V_{\gamma}$ is a finite-dimensional $\Gamma$-graded Yetter-Drinfeld module; $W^* \simeq \oplus_{\gamma\in\Gamma} V^*_{\gamma}$ becomes a $\Gamma$-graded Yetter-Drinfeld module with $\deg V^*_{\gamma} = - \gamma$. Then ${\mathcal B}(W)$, ${\mathcal B}(W)\#{\mathcal B}(W^*)$, and ${\mathcal D}^L(W)$ are $\Gamma $-graded algebras. \end{obs}
\begin{proof} The tensor algebras $T(W)$ and $T(W \oplus W^*)$ inherit the $\Gamma$-grading of Yetter-Drinfeld modules in the usual way: $\deg (V_{\gamma_1} {\otimes} \dots {\otimes} V_{\gamma_s}) =\gamma_1 + \dots + \gamma_s$. By definition, the braiding $c$ preserves homogeneous components; thus ${\mathcal B}(W)$ inherits the grading. Now the relations \eqref{eqn:rel-qdo} are also homogeneous, hence ${\mathcal B}(W)\#{\mathcal B}(W^*)$ and ${\mathcal D}^L(W)$ are $\Gamma$-graded algebras. \end{proof}
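\medbreak For instance, if $W$ is of diagonal type with basis $v_1, \dots, v_\theta$, one may take $\Gamma = {\mathbb Z}^{\theta}$ with $V_{\alpha_i} = \Bbbk v_i$, where $\alpha_1, \dots, \alpha_\theta$ denotes the canonical basis of ${\mathbb Z}^{\theta}$. Then ${\mathcal B}(W)$ is graded by $\deg v_i = \alpha_i$, while $\deg f_i = -\alpha_i$ in ${\mathcal B}(W)\#{\mathcal B}(W^*)$; this is the usual grading of a Nichols algebra of diagonal type by the root lattice.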
\subsection{Braided derivations}\label{subsection:skew-derivations} We next give a characterization of Nichols algebras in terms of quantum differential operators suitable for our later purposes. Recall that the kernel of the counit of a bialgebra $B$ is denoted by $B^+$.
\medbreak First, let $B$ be a braided bialgebra and consider $B^{\cop}$ as in Remark \ref{obs:cop}. We write $\Delta(x) = x^{[1]}{\otimes} x^{[2]}$ to distinguish from the previous coproduct. Thus $\Delta(xy) = x^{[1]}{y^{[1]}}\_0{\otimes} \big({\mathcal S}^{-1}({y^{[1]}}\_{-1}) \cdot x^{[2]}\big)y^{[2]}$, for $x,y\in B$, \emph{cf.} \eqref{inversebraiding}. Let $\xi\in B^*$ and let $\plb{\xi}\in \operatorname{End} B$ be $\pl{\xi}$ for this bialgebra, that is \begin{align}
\plb{\xi}(x) = \langle \xi, x^{[1]}\rangle x^{[2]}.
\label{eq:plbxi} \end{align} Then \begin{align}\label{plinverso} \plb{\xi} (xy) &= \big({\xi^{[1]}}\_{-1} \cdot \plb{\xi^{[2]}}(x)\big) \plb{{\xi^{[1]}}\_0}(y). \end{align} Indeed, \begin{align*} \plb{\xi} (xy) &= \big\langle \xi, x^{[1]}{y^{[1]}}\_0\big\rangle \,\big( {\mathcal S}^{-1}({y^{[1]}}\_{-1}) \cdot x^{[2]}\big)y^{[2]} \\ &= \big\langle \xi^{[2]}, x^{[1]} \big\rangle\, {\mathcal S}^{-1}({y^{[1]}}\_{-1}) \cdot x^{[2]} \,\big\langle \xi^{[1]}, {y^{[1]}}\_0\big\rangle y^{[2]} \\ &\overset{\makebox[0pt]{\tiny \eqref{vd2}}}= \big\langle \xi^{[2]}, x^{[1]} \big\rangle\, {\xi^{[1]}}\_{-1} \cdot x^{[2]} \,\big\langle {\xi^{[1]}}\_0,y^{[1]}\big\rangle y^{[2]} \\ &= \big({\xi^{[1]}}\_{-1} \cdot \plb{\xi^{[2]}}(x)\big) \plb{{\xi^{[1]}}\_0}(y). \end{align*} If $\xi\in{\mathcal P}(B) = {\mathcal P}(B^{\cop})$, then \begin{align}\label{plinverso-prim} \plb{\xi} (xy) &= (\xi\_{-1} \cdot x )\,\plb{\xi\_0}(y) + \plb{\xi}(x) \,y. \end{align}
\medbreak Part (i) of the following theorem is well-known, but part (ii) seems to be new.
\begin{theorem}\label{theo:quotients-with-derivations}
Let $W\in {}^H_H\mathcal{YD}$ be finite-dimensional. Let $I\subset T(W)^+$ be a 2-sided ideal, stable under the action of $H$. Let $R = T(W)/I$ and let $\pi: T(W) \to R$ be the canonical projection. \begin{enumerate}
\item[(i)] Assume that $I$ is a homogeneous Hopf ideal, so that $R$ is a graded braided Hopf algebra quotient of $T(W)$, and that $I\cap W = 0$. Then for any $f\in W^*$ there exists a map $d_f\in \operatorname{End} R$ such that for all $x,y\in R$, $v\in W$, \begin{align} \label{derivadauno-bis} d_f(xy) &= (f\_{-1} \cdot x)\, d_{f\_{0}}(y) + d_{f}(x)\, y,\\ \label{derivadados} d_f(\pi(v)) &= \langle f, v\rangle. \end{align}
\item[(ii)] Conversely, assume that for any $f\in W^*$ there exists a map $d_f\in \operatorname{End} R$ such that \eqref{derivadauno-bis} and \eqref{derivadados} hold. Then $I\subseteq {\mathfrak J}(W)$, that is, there exists a unique surjective algebra map $\Omega: R \to {\mathcal B}(W)$ such that $\Omega (\pi (w))=w$ for all $w\in W$. Moreover \begin{align} \label{factorizacion-omega} \Omega d_f &= \plb{f}\Omega. \end{align} \end{enumerate} \end{theorem}
\begin{proof} (i). We have $R = \oplus_{n\ge 0} R^n$ with $R^1 \simeq W$; we identify $W^*$ with a subspace of $R^*$, see Subsection \ref{subsection:notation}. Hence, if $f\in W^*$, then $d_f := \plb{f}$ satisfies \eqref{derivadauno-bis} by \eqref{plinverso-prim}; moreover, \eqref{derivadados} holds by the definition \eqref{eq:plbxi} of $\plb{f}$.
(ii). We apply (i) to $I = 0$; set $D_f\in \operatorname{End} T(W)$, $D_f = \plb{f}$ for $f\in W^*$. Note that \eqref{derivadados} implies that $\pi$ restricted to $W$ is injective. We claim that $d_{f}\pi = \pi D_{f}$, that is, the following diagram commutes: \begin{equation}\label{factorizacion-d} \xymatrix{T(W) \ar[d]_{\pi} \ar[0,2]^-{D_{f}}& &T(W) \ar[d]^{\pi} \\ R \ar[0,2]^-{d_f}&& R. } \end{equation} For, let $\delta_f = d_{f}\pi$, $\widetilde{\delta}_f = \pi D_{f}: T(W) \to R$, and let $x,y\in T(W)$. Then \begin{align*} d_{f}(\pi(xy)) &= (f\_{-1} \cdot \pi(x))\, d_{f\_{0}}(\pi(y)) + d_{f}(\pi(x))\, \pi(y); \\ \pi D_{f}(xy) &= \pi\big((f\_{-1} \cdot x)\, D_{f\_{0}}(y) + D_{f}(x)\, y\big)\\ &= (f\_{-1} \cdot \pi(x))\, \pi D_{f\_{0}}(y) + \pi D_{f}(x)\, \pi (y), \end{align*} by the hypothesis on $I$. Also $d_f(\pi(v)) = \langle f, v\rangle = \pi D_{f}(v)$ for $v\in W$. Thus the set of all $x\in T(W)$ such that $\delta_f(x) = \widetilde{\delta}_f (x)$ is a subalgebra that contains $W$; hence $d_{f}\pi = \pi D_{f}$. (This shows that such a map $d_f$ is unique when it exists; hence $f\mapsto d_f$ is linear in $f$). In other words, $ D_{f}(I) \subset I$. Let $\langle\, , \,\rangle: T({W}^*) \times T(W) \to \Bbbk$ be the bilinear form defined by \eqref{eqn:duality-homogeneous} and \eqref{eqn:duality}, but with respect to $c^{-1}$. We know that ${\mathfrak J}(W, c^{-1})$ is the (right) radical of this form, and ${\mathfrak J}(W, c^{-1})= {\mathfrak J}(W, c)$ by Lemma~\ref{lema:toba-c-menos-uno}; so we need to show that $\langle T(W^*) , I\rangle = 0$, or equivalently that $\langle T^n(W^*) , I\rangle = 0$ for all $n\ge 0$. If $n=0$, then this is clear as $\varepsilon (I) = 0$. If $n=1$, $f\in W^*$ and $x\in I$, then $\langle f, x\rangle = \varepsilon (\langle f, x^{[1]}\rangle x^{[2]}) = \varepsilon ( D_{f}(x)) \in \varepsilon (I) = 0$. 
If $n >1$, $g\in T^{n-1}(W^*)$, $f\in W^*$ and $x\in I$, then $$\langle gf, x\rangle = \langle f, x^{[1]}\rangle\langle g, x^{[2]}\rangle = \langle g,
D_{f}(x)\rangle \in \langle g, D_{f}(I)\rangle\subset \langle g,I\rangle = 0.$$ In the following diagram, the big and upper squares commute by (i) and \eqref{factorizacion-d}, respectively: \begin{equation*} \xymatrix{T(W) \ar@/_2pc/[2,0]_{p} \ar[d]_{\pi} \ar[0,2]^-{D_{f}}& &T(W) \ar[d]^{\pi} \ar@/^2pc/[2,0]^{p} \\ R \ar[d]_{\Omega} \ar[0,2]^-{d_{f}}& &R \ar[d]^{\Omega} \\ {\mathcal B}(W) \ar[0,2]^-{\plb{f}}&& {\mathcal B}(W). } \end{equation*} Hence $\plb{f}\Omega\pi = \plb{f}p = pD_f = \Omega\pi D_f = \Omega d_f \pi$, and since $\pi$ is surjective, $\plb{f}\Omega = \Omega d_f $. \end{proof}
There are other versions of this theorem. Taking \eqref{eqn:leibniz-pl-prim} or \eqref{eqn:leibniz-pr-prim} into consideration, we have similar results replacing the requirement \eqref{derivadauno-bis} by either of the following: \begin{align} \label{derivadauno-izq} d_f(xy) &= x\_{0} d_{{\mathcal S}^{-1}(x\_{-1})\cdot f}(y) + d_{f}(x) y,\\ \label{derivadauno} d_f(xy) &= x d_f(y) + d_{f\_0}(x){\mathcal S}(f\_{-1})\cdot y, \end{align} where $x,y\in R$, $f\in W^*$. The proof goes exactly as for Theorem \ref{theo:quotients-with-derivations}.
\medbreak The results in Theorem \ref{theo:quotients-with-derivations} motivate the following definition.
\begin{definition}\label{defi:skew-der} Let $M\in {}^H\mathcal{M}$, $R$ an algebra, $T$ an $H$-module algebra, $\wp: R\to T$ an algebra map, and let $d: M\to \Hom(R, T)$ be a linear map, denoted by $f\mapsto d_f$. Following \cite{Mj93} we say that $d$ is a \emph{family of braided derivations} if for all $x,y\in R$, $f\in M$, \begin{equation} \label{derivadagral} d_f(xy) = (f\_{-1} \cdot \wp(x))\, d_{f\_{0}}(y) + d_{f}(x)\, \wp(y). \end{equation} \end{definition}
We are mostly concerned with the case when
$R = T$ and $\wp = \id$. In this case we say that $d$ is a \emph{family of braided derivations of $R$.}
\begin{definition}\label{de:canfam}
Let $W\in {}^H_H\mathcal{YD} $.
The family $d^W:W^*\to \operatorname{End} {\mathcal B}(W)$ of braided derivations
of ${\mathcal B} (W)$
with $d^W_f(w)=\langle f,w\rangle $ for all $f\in W^*$ and
$w\in W$, see Theorem \ref{theo:quotients-with-derivations} (i),
is called the \textit{canonical family of braided derivations}
of ${\mathcal B} (W)$. \end{definition}
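\medbreak To illustrate Definition \ref{de:canfam} in the simplest case, let $W = \Bbbk v$ be of diagonal type, with $\delta(v) = g{\otimes} v$ and $g\cdot v = qv$, $q\in \Bbbk^{\times}$, and let $f\in W^*$ with $\langle f, v\rangle = 1$, so that $f\_{-1}{\otimes} f\_0 = g^{-1}{\otimes} f$. Then \eqref{derivadauno-bis} yields the recursion $$ d^W_f(v^n) = (g^{-1}\cdot v)\, d^W_f(v^{n-1}) + v^{n-1} = q^{-1}v\, d^W_f(v^{n-1}) + v^{n-1}, $$ whence $d^W_f(v^n) = (1 + q^{-1} + \dots + q^{1-n})\, v^{n-1}$. In particular, if $q$ is a primitive $N$-th root of unity with $N\ge 2$, then $d^W_f(v^N) = 0$, consistently with the relation $v^N = 0$ in ${\mathcal B}(W)$.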
Our next goal is to develop basic properties of families of braided derivations which will be useful in the sequel.
\begin{lema}\label{le:fambrderTV}
Let $M\in {}^H\mathcal{M}$, $V$ a vector space, $T$ an $H$-module algebra,
and $\wp : V\to T$ a linear map. Then any family of braided derivations
$d:M\to \Hom (T(V),T)$ determines a linear map $d^1:M\to
\Hom (V,T)$ by letting $d^1_f=d_f|_V$, $f\in M$.
Conversely, any linear map $d^1:M\to \Hom
(V,T)$ gives rise to a unique family of braided derivations $d:M\to \Hom
(T(V),T)$, where $d_f|_V=d^1_f$, $f\in M$. \end{lema}
\begin{proof} If $d$ is a family of braided derivations, then linearity of $d$ gives that $d^1:M\to \Hom (V,T)$ is a linear map. On the other hand, if $V$, $\wp :V\to T$, and $d^1:M\to \Hom (V,T)$ are given, then $\wp $ extends uniquely to an algebra map $\wp :T(V)\to T$, and the formula \begin{align*}
d_f(v_1v_2\cdots v_n)=\sum _{i=1}^n &(f\_{1-i}\cdot \wp (v_1))
(f\_{2-i}\cdot \wp (v_2)) \cdots (f\_{-1}\cdot \wp (v_{i-1}))\\
&\times d_{f\_{0}}(v_i)\wp (v_{i+1})\cdots \wp (v_n), \end{align*} where $v_j\in V$ for all $j=1,\ldots ,n$, defines a family of braided derivations $d:M\to \Hom (T(V),T)$ for $M$, $\wp $, $T(V)$, and $T$. The uniqueness of $d$ as a family of braided derivations follows from \eqref{derivadagral} and the fact that $V$ generates the algebra $T(V)$. \end{proof}
\begin{lema}\label{le:fambrderTVI}
Let $M\in {}^H\mathcal{M}$, $V$ a vector space,
$T$ an $H$-module algebra, $\wp : T(V)\to T$ an algebra map,
and $d:M\to \Hom (T(V),T)$ a family of braided derivations. Let $I\subset T(V)$
be an ideal with $\wp (I)=0$. Assume that $I$ is generated by
a subset $J\subset I$, and define $R=T(V)/I$. The following
are equivalent.
\begin{enumerate}
\item[(i)]
$d$ induces a family of braided derivations $d^R:M\to \Hom (R,T)$ by
letting $d^R_f(x+I):=d_f(x)$ for $x\in T(V)$, $f\in M$,
\item[(ii)] $d_f(I)=0$ for all $f\in M$,
\item[(iii)]
$d_f(x)=0$ for all $f\in M$ and all generators $x\in J$.
\end{enumerate} \end{lema}
\begin{proof} The implications (i)$\Rightarrow $(ii) and (ii)$\Rightarrow $(iii) are trivial. By \eqref{derivadagral}, the linearity of $d_f$, and since $\wp (I)=0$, one obtains (iii)$\Rightarrow $(ii). Finally, assume (ii). Then $d^R:M\to \Hom (R,T)$ is a well-defined linear map, and for the implication (ii)$\Rightarrow $(i) it suffices to check \eqref{derivadagral} for $d^R$. The latter holds since $I$ is an ideal and $\wp (I)=0$. \end{proof}
For the next theorem we need a compatibility relation between the maps $\pl{g}$ and $\ad _c$.
\begin{lema}\label{le:plrel}
Let $W\in {}^H_H\mathcal{YD} $, $w\in {\mathcal B}(W)$, $x\in W$, and $g\in W^*$. Then \begin{align*}
\pl{w\_{-1}\cdot g}( \ad _c (w\_{0})(x))
= {}& \pl{w\_{-1}\cdot g}( w\_{0} )x \\
&-(w\_{-2}\cdot x\_0)\pl{w\_{-1}{\mathcal S}^{-1}(x\_{-1})\cdot g}(w\_{0}). \end{align*} \end{lema}
\begin{proof} The definition of $\ad _c$ and \eqref{eqn:leibniz-pl-prim} imply that {\allowdisplaybreaks \begin{align*}
&\pl{w\_{-1}\cdot g}( \ad _c (w\_{0})(x))
= \pl{w\_{-1}\cdot g}( w\_{0}x)
-\pl{w\_{-2}\cdot g}( (w\_{-1}\cdot x)w\_{0})\\
&\quad = \pl{w\_{-1}\cdot g}( w\_{0})x
-\pl{w\_{-2}\cdot g}( w\_{-1}\cdot x)w\_{0}
+w\_{0}\pl{{\mathcal S} ^{-1}(w\_{-1})w\_{-2}\cdot g}(x)\\
&\qquad
-(w\_{-2}\cdot x\_{0})\pl{{\mathcal S}^{-1}(w\_{-3}x\_{-1}{\mathcal S} (w\_{-1}))w\_{-4}\cdot
g}(w\_{0})\\
&\quad = \pl{w\_{-1}\cdot g}( w\_{0} )x
-\langle w\_{-2}\cdot g,w\_{-1}\cdot x\rangle w\_{0}\\
&\qquad +w\langle g,x\rangle
-(w\_{-2}\cdot x\_0)\pl{w\_{-1}{\mathcal S}^{-1}(x\_{-1})\cdot g}(w\_{0}). \end{align*} } The claim of the lemma now follows from \eqref{vd1}. \end{proof}
We now show a very general way of constructing a family of braided derivations of ${\mathcal B}(W)\#{\mathcal B}(W^*)$. This will be crucial in the proof of Theorem \ref{theo:main}, but it may be of independent interest. Recall the notion of canonical family of braided derivations $d^W$, see Definition \ref{de:canfam}.
\begin{theorem}\label{theo:skew-der-qdo}
Let $W\in {}^H_H\mathcal{YD}$ be finite-dimensional.
For all $w\in W$ and $f=\phi _W(w) \in W^{**}$,
see \eqref{eqn:double-dualbis},
define $d_f\in \operatorname{End} ({\mathcal B} (W)\#{\mathcal B} (W^*))$ by
\begin{align}\label{eqn:def-dv}
d_f(x\# g) &= -\ad_c w(x) \# g + (f\_{-1}\cdot x) \# \, d^{W^*}_{f\_0}(g) \end{align}
\noindent for all $x\in {\mathcal B}(W)$ and $g\in {\mathcal B}(W^*)$. Then $d:W^{**}\to \operatorname{End} ({\mathcal B}(W)\#{\mathcal B}(W^*))$ is a family of braided derivations of ${\mathcal B}(W)\#{\mathcal B}(W^*)$. \end{theorem}
\begin{proof} By Proposition \ref{prop:qdo-toba} and Definition \ref{def:ksmash} there exists a unique algebra map $\wp :T(W\oplus W^*)\to {\mathcal B} (W)\#{\mathcal B} (W^*)$ with $\wp (w)=w$, $\wp (g)=g$ for all $w\in W$, $g\in W^*$. Let $d':W^{**}\to \Hom (T(W\oplus W^*),{\mathcal B}(W)\#{\mathcal B}(W^*))$ be the unique family of braided derivations with this $\wp $ and with \begin{gather*}
d'_f(x)=-\ad _c (\psi _W(f))(x),\qquad
d'_f(g)=\langle f,g\rangle , \end{gather*} where $f\in W^{**}$, $x\in W$, and $g\in W^*$, see Lemma~\ref{le:fambrderTV}. We are going to use
the implication (iii)$\Rightarrow $(i) in Lemma~\ref{le:fambrderTVI} to show that $d'$ induces the family $d$ of braided derivations of ${\mathcal B}(W)\#{\mathcal B}(W^*)$. Indeed, one has $d_f(z)=d'_f(z)$ for $z\in W\oplus W^*$. Further, for $w=\psi _W(f)$ the map $-\ad _c w\in \operatorname{End} {\mathcal B}(W)$ satisfies the relation \begin{align*}
&-\ad _c w (xy) = -wxy+(w\_{-1}\cdot (xy))w\_0\\
&\quad = -wxy + (w\_{-1}\cdot x)w\_0 y
- (w\_{-1}\cdot x)w\_0 y +(w\_{-2}\cdot x)(w\_{-1}\cdot y)w\_0\\
&\quad =-\ad _c w(x)\, y-(w\_{-1}\cdot x)\,\ad _c (w\_0)(y). \end{align*} Thus, since $\psi _W$ is an $H$-comodule map, the restriction of $d'_f$ to $T(W)$ coincides with $-\ad _c w\circ \pi _W$, where $\pi _W:T(W)\to {\mathcal B}(W)$ is the canonical map. Moreover, the restriction of $d'_f$ to $T(W^*)$ is precisely $d^{W^*}_f\circ \pi _{W^*}$. It remains to show that $d'$ induces a family of braided derivations of ${\mathcal B}(W)\#{\mathcal B}(W^*)$. By the previous claims the latter family then has to coincide with the family $d$ of linear maps $d_f$, where $f\in W^{**}$, and hence $d$ is a family of braided derivations.
To see that $d'$ induces a family of braided derivations of ${\mathcal B}(W)\#{\mathcal B}(W^*)$, we have to check that $d'_f$ vanishes on the generators (i)--(iii) in Prop.~\ref{prop:qdo-toba}. Since the restriction of $d'_f$ to $T(W)$ coincides with $-\ad _c (\psi _W(f))\circ \pi _W$, one gets $d'_f(z)=0$ for all $z\in {\mathfrak J}(W)$. Similarly one has $d'_f(z)=d^{W^*}_f(\pi _{W^*}(z))=0$ for all $z\in {\mathfrak J}(W^*)$. Thus it suffices to check that \begin{align}\label{eq:d'frel}
d'_f(g{\otimes} x-x\_{0}{\otimes} ({\mathcal S} ^{-1}(x\_{-1})\cdot g)-\pl{g}(x))=0 \end{align} for all $x\in W$, $g\in W^*$, and $f\in W^{**}$. Note that $d'_f(\pl{g}(x))=0$ since $\pl{g}(x)\in \Bbbk $. Let now $x,w\in W$, $g\in W^*$, and $f=\phi _W(w)\in W^{**}$. By definition of $d'_f$ one gets {\allowdisplaybreaks \begin{align*}
&d'_f(g{\otimes} x-x\_{0}{\otimes} ({\mathcal S}^{-1}(x\_{-1})\cdot g))
=d'_f(g)x+(f\_{-1}\cdot g)d'_{f\_{0}}(x)\\
&\quad -d'_f(x\_{0})({\mathcal S}^{-1}(x\_{-1})\cdot g)
-(f\_{-1}\cdot x\_{0})d'_{f\_{0}}({\mathcal S}^{-1}(x\_{-1})\cdot g)\\
&\quad =\langle f,g\rangle x-(w\_{-1}\cdot g)\,\ad _c (w\_{0})(x)
+\ad _c w(x\_0)\,({\mathcal S}^{-1}(x\_{-1})\cdot g)\\
&\qquad -(w\_{-1}\cdot x\_0)\langle \phi _W(w\_0),{\mathcal S}^{-1}(x\_{-1})\cdot g
\rangle .\\
\intertext{Now \eqref{eqn:rel-qdogral} and Lemma~\ref{le:plrel}
allow us to simplify this expression further:}
&\quad =\langle f,g\rangle x-\ad _c (w\_{0})(x\_{0})\,
({\mathcal S}^{-1}(w\_{-1}x\_{-1})w\_{-2}\cdot g)\\
&\qquad -\pl{w\_{-1}\cdot g}( \ad _c (w\_0)(x))
+\ad _c w(x\_0)\,({\mathcal S}^{-1}(x\_{-1})\cdot g)\\
&\qquad -(w\_{-1}\cdot x\_0)\langle \phi _W(w\_0),{\mathcal S}^{-1}(x\_{-1})\cdot g
\rangle \\
&\quad =\langle f,g\rangle x
-\langle w\_{-1}\cdot g,w\_0\rangle x
+(w\_{-2}\cdot x\_0)\langle w\_{-1}{\mathcal S}^{-1}(x\_{-1})\cdot g,w\_{0}\rangle \\
&\qquad -(w\_{-1}\cdot x\_0)\langle \phi _W(w\_0),{\mathcal S}^{-1}(x\_{-1})\cdot g
\rangle . \end{align*} Using the relation $f=\phi _W(w)$ and \eqref{eq:phiVv,f} twice, one sees that the latter expression is zero. This proves \eqref{eq:d'frel}. } \end{proof}
\section{Reflections of Nichols algebras}\label{section:weyl}
This section is devoted to the construction of ``reflections'', see \eqref{eq:rfliM}. Based on them we introduce and study new invariants of Nichols algebras in ${}^H_H\mathcal{YD} $, see Definition \ref{def:rsys}. Then we discuss the particular class of standard semisimple Yetter-Drinfeld modules.
\subsection{Braided Hopf algebras with projection} \label{subsection:braidedproj} We begin by considering a commutative diagram of braided Hopf algebras in ${}^H_H\mathcal{YD}$: $$ \xymatrix{& & R\ar@{=}[1,0]\ar@{_{(}->}[1,-2]_{\iota}\\S\ar[0,2]^{\pi_R}& & R. } $$ Here and below we use subscripts to distinguish between the various projections, coactions, etc. By bosonization, we get a commutative diagram of Hopf algebras: $$ \xymatrix{& & R\# H\ar@{=}[1,0]\ar@{_{(}->}[1,-2]_{\iota}
\\S \# H\ar[0,2]_{\pi_{R \# H}}\ar[1,2]_{\pi_{H,S}}& &R \# H \ar[1,0]^{\pi_{H,R}}\\ && H.} $$ Clearly, the projections $\pi_{H,R}: R \# H \to H$ and $\pi_{H,S}: S \# H \to H$ satisfy \begin{equation}\label{pis0} \pi_{H,R}\pi_{R \# H} = \pi_{H,S}. \end{equation} We propose to study this situation through the subalgebra of coinvariants \begin{equation}\label{K0} K := (S\# H)^{\operatorname{co} R\# H}. \end{equation} We collect some basic properties of $K$.
\begin{lema}\label{lema:basicK0} \begin{enumerate}
\item[(i)] $K$ is a braided Hopf algebra in ${}^{R\# H}_{R\# H}\mathcal{YD}$ and the
multiplication induces an isomorphism
$$ K\# (R\# H) \simeq S\# H.
$$
\item[(ii)]$K = S^{\operatorname{co} R} = \{x\in S\,|\,x\^{1}\otimes \pi_{R}(x\^{2}) = x\otimes 1\}$ is a subalgebra of $S$ and the multiplication induces an algebra isomorphism
$$ K\# R \simeq S, \qquad \text{\emph{cf.} Remark \ref{smash-bos}.}
$$
\item[(iii)] $K$ is a Yetter-Drinfeld submodule over $H$ of $S$ and \begin{equation}\label{deltah=deltarh} \delta_H(x) = (\pi_{H,R}{\otimes} \id)\delta_{R\# H}(x), \qquad x\in K. \end{equation}
\item[(iv)] ${\mathcal S}_S(K)$ is a subalgebra and Yetter-Drinfeld submodule over $H$ of
$S$. \end{enumerate} \end{lema}
\begin{proof} (i). By the general theory of biproducts.
(ii). Let $x\in K$. By \eqref{pis0}, $x\_{1} {\otimes} \pi_{H, S}(x\_{2}) = x\_{1} {\otimes} \pi_{H, R}\pi_{R\# H}(x\_{2}) = x{\otimes} 1$; hence $x\in S$. Now, $$x{\otimes} 1 = x\_{1} {\otimes} \pi_{R\# H}(x\_{2}) = x\^{1} (x\^{2})\_{-1} \otimes \pi_{R}((x\^{2})\_{0})$$ by \eqref{smash3}. Applying the $H$-coaction to the second tensorand and then $(\mu_S{\otimes} \id)(\id{\otimes} {\mathcal S}{\otimes} \id)$, we get \begin{align*} x{\otimes} 1 &= x\^{1}(x\^{2})\_{-2} {\mathcal S}((x\^{2})\_{-1}){\otimes}\pi_{R}((x\^{2})\_{0}) =x\^{1}\otimes\pi_{R}(x\^{2}), \end{align*} since $\pi_{R}$ is $H$-colinear. Thus $x\in S^{\operatorname{co} R}$.
Conversely, let $x\in S^{\operatorname{co} R}$. Applying the $H$-coaction to the second tensorand of the equality $x\^{1}\otimes \pi_{R}(x\^{2}) = x\otimes 1$, and since $\pi_{R}$ is $H$-colinear, we get $x{\otimes} 1 = x\^{1} (x\^{2})\_{-1} \otimes \pi_{R}((x\^{2})\_{0}) = x\_{1} {\otimes} \pi_{R\# H}(x\_{2})$. Hence $x\in K$. The multiplication gives rise to an isomorphism because of the analogous fact in (i).
(iii). Clearly, $K$ is an $H$-submodule of $S$. From \eqref{smash2} and \eqref{pis0} we get \eqref{deltah=deltarh}. Thus $K$ is also an $H$-subcomodule, and \emph{a fortiori} a Yetter-Drinfeld submodule, of $S$.
(iv) follows from (iii) and the properties of the antipode, \emph{cf.} \eqref{eqn:antipodatrenzada-anti}. \end{proof}
\subsection{The algebra ${\mathcal K}$}\label{subsection:basicK} We next work in the following general setting. Let $V$, $W$ be Yetter-Drinfeld modules over $H$ such that $V$ is a direct summand of $W$ in ${}^H_H\mathcal{YD}$. In other words, we have a commutative diagram in ${}^H_H\mathcal{YD}$: $$ \xymatrix{& & V\ar@{=}[1,0]\ar@{_{(}->}[1,-2]_{\iota}\\W\ar[0,2]^{\pi}& & V. } $$ Set $\widetilde{V} = \ker \pi$, so that $W = V\oplus \widetilde{V}$ in ${}^H_H\mathcal{YD}$. By functoriality of the Nichols algebra, we have a commutative diagram of graded Hopf algebras in ${}^H_H\mathcal{YD}$: $$ \xymatrix{& & {\mathcal B}(V)\ar@{=}[1,0]\ar@{_{(}->}[1,-2]_{\iota}\\{\mathcal B}(W)\ar[0,2]_{\pi_{{\mathcal B}(V)}}& & {\mathcal B}(V) .} $$ By bosonization, we get a commutative diagram of graded Hopf algebras:
\begin{equation}\label{eqn:commdiag-avaw} \xymatrix{& & \qquad {\mathcal A}(V) = {\mathcal B}(V) \# H\ar@{=}[1,0]\ar@{_{(}->}[1,-2]_{\iota}
\\{\mathcal A}(W) = {\mathcal B}(W) \# H\ar[0,2]_{\pi_{{\mathcal A}(V)}}\ar[1,2]_{\pi_{H,W}}& &{\mathcal A}(V)
= {\mathcal B}(V) \# H\ar[1,0]^{\pi_{H,V}}
\\ && H.} \end{equation} As before, the projections $\pi_{H,V}: {\mathcal A}(V) \to H$ and $\pi_{H,W}: {\mathcal A}(W) \to H$ satisfy \begin{equation}\label{pis} \pi_{H,V}\pi_{{\mathcal A}(V)} = \pi_{H,W}. \end{equation}
The main actor of this section is the subalgebra of coinvariants \begin{equation}\label{K} {\mathcal K} := {\mathcal A}(W)^{\operatorname{co} {\mathcal A}(V)}. \end{equation}
\begin{lema}\label{basicK} \begin{enumerate}
\item[(i)] ${\mathcal K}$ is a graded braided Hopf algebra in ${}^{\ac(V)}_{\ac(V)}\mathcal{YD}$ and the
multiplication induces an isomorphism
$$
{\mathcal K}\# {\mathcal A}(V) \simeq {\mathcal A}(W).
$$
\item[(ii)] ${\mathcal K} = {\mathcal B}(W)^{\operatorname{co} {\mathcal B}(V)} = \{x\in {\mathcal B}(W):
x\^{1}\otimes \pi_{{\mathcal B}(V)}(x\^{2}) = x\otimes 1\}$ is a graded
subalgebra of ${\mathcal B}(W)$ and the multiplication induces a homogeneous
isomorphism
$$
{\mathcal K}\# {\mathcal B}(V) \simeq {\mathcal B}(W).
$$
\item[(iii)] ${\mathcal K}$ is a Yetter-Drinfeld submodule over $H$ of ${\mathcal B}(W)$
and \begin{equation}\label{deltah=deltaav}
\delta_H(x) = (\pi_{H,V}{\otimes} \id)\delta_{{\mathcal A}(V)}(x), \qquad x\in {\mathcal K}. \end{equation}
\item[(iv)] ${\mathcal K}\cap W = \widetilde{V} \subset {\mathcal P}({\mathcal K})$. \end{enumerate} \end{lema}
\begin{proof} (i) to (iii) are consequences of Lemma~\ref{lema:basicK0}, except for the statements that ${\mathcal K}$ is graded; these follow since $\pi_{{\mathcal A}(V)}$ is homogeneous.
(iv). If $x\in W$, then $x\^{1}\otimes \pi_{{\mathcal B}(V)}(x\^{2}) = x{\otimes} 1 + 1{\otimes} \pi_{{\mathcal B}(V)}(x)$. Hence $x\in W\cap {\mathcal K}$ if and only if $x\in \ker \pi _{{\mathcal B}(V)}\cap W= \widetilde{V}$. Moreover, if $x\in \widetilde{V}$, then $\vartheta_{{\mathcal K}}(x) = x$, thus $\Delta_{{\mathcal K}}(x) = x{\otimes} 1 + 1{\otimes} x$, cf. \eqref{smash2}. \end{proof}
\subsection{The module $L$}\label{subsection:basicL} We keep the notation of Subsection \ref{subsection:basicK}. Let $U$ be a Yetter-Drinfeld submodule over $H$ of $\widetilde{V}$. We define \begin{equation}\label{defL} L := \ad {\mathcal B}(V) (U). \end{equation} In other words, $L$ is the vector subspace of ${\mathcal A}(W)$ spanned by the elements \begin{equation}\label{defLm} \ad_c (x_1)(\dots (\ad_c (x_m)(y))), \qquad x_h\in V,\, 1\le h\le m, \, y \in U, \end{equation} for $m \ge 0$. We collect some basic properties of $L$.
\begin{lema}\label{basicL} \begin{enumerate}
\item[(i)] $L = \ad {\mathcal A}(V) (U)$.
\item[(ii)] $L = \oplus_{m\in {\mathbb N}} L^m$, where
$L^m = L\cap {\mathcal B}^m(W)$; $L^1 = U$.
\item[(iii)] $L$ is a graded Yetter-Drinfeld submodule over ${\mathcal A}(V)$ of ${\mathcal P}({\mathcal K})$.
\item[(iv)] $L$ is a graded Yetter-Drinfeld submodule over $H$ of ${\mathcal P}({\mathcal K})$. \item[(v)] For any $x\in L$, we have \begin{align}\label{crucial} \Delta_{{\mathcal A}(W)} (x) &\in x{\otimes} 1 + {\mathcal A}(V) {\otimes} L,\\ \label{crucial2} \Delta_{{\mathcal B}(W)} (x) &\in x{\otimes} 1 + {\mathcal B}(V) {\otimes} L. \end{align} \item[(vi)] If $x\in L$ and $\pi_{{\mathcal A}(V)}(x\_{1}) {\otimes} \operatorname{pr}_1(x\_{2}) = 0$, then $x=0$. \item[(vii)]
If $0\neq L'$ is an ${\mathcal A}(V)$-subcomodule of $L$, then $L'\cap U \neq 0$. \end{enumerate} \end{lema}
\begin{proof} (i) follows from $\ad {\mathcal A}(V) (U) = \ad {\mathcal B}(V) \ad H (U) \subset \ad {\mathcal B}(V) (U)$.
(ii). It is clear that $L$ is a graded subspace of ${\mathcal B}(W)$ since ${\mathcal B}(V)$ is graded and $U$ is homogeneous. Indeed, for all $m\in {\mathbb N} _0$ the space $L^{m+1}$ is the span of the elements \eqref{defLm} with exactly $m$ factors $x_1,\dots ,x_m\in V$.
(iii). We know that $U \subset {\mathcal P}({\mathcal K})$ by Lemma~\ref{basicK} (iv). Hence $L \subset {\mathcal P}({\mathcal K})$ by Remark \ref{prim-ydh}. We show that $U$ is also an ${\mathcal A}(V)$-subcomodule. If $y\in U$, then \begin{align*} \delta_{{\mathcal A}(V)}(y) &= (\pi_{{\mathcal A}(V)}{\otimes} \id)(y{\otimes} 1 + y\_{-1}{\otimes} y\_{0}) = y\_{-1}{\otimes} y\_{0}, \end{align*} because $\pi_{{\mathcal A}(V)}(y) = 0$ (since $y\in \widetilde{V} = \ker \pi$) and $\pi_{{\mathcal A}(V)}(y\_{-1}) = y\_{-1}$ (since $y\_{-1}\in H$). By (i) and Remark \ref{ydsubm} (ii), $L$ is a Yetter-Drinfeld submodule over ${\mathcal A}(V)$ of ${\mathcal P}({\mathcal K})$. Finally, $L^m = L\cap {\mathcal K}^m$, being the intersection of two Yetter-Drinfeld submodules, is a Yetter-Drinfeld submodule itself.
(iv) follows from (iii) and \eqref{deltah=deltaav}.

(v). We prove \eqref{crucial}: If $x = \ad z(y)$, where $z\in {\mathcal B}(V)$ and $y\in U$, then \begin{align*} \Delta_{{\mathcal A}(W)}(x) &= z\_{1}y\_{1}{\mathcal S}(z\_{4}) {\otimes} z\_{2}y\_{2}{\mathcal S}(z\_{3}) \\ &= z\_{1}y{\mathcal S}(z\_{4}) {\otimes} z\_{2}{\mathcal S}(z\_{3}) + z\_{1}y\_{-1}{\mathcal S}(z\_{3}) {\otimes} \ad (z\_{2})(y\_{0}) \\ &\in x{\otimes} 1 + {\mathcal A}(V) {\otimes} L, \end{align*} since $z\_{1}y\_{-1}{\mathcal S}(z\_{3}) \in {\mathcal B}(V)\#H$ and $ \ad (z\_{2})(y\_{0}) \in L$. Here again we used that $y\_{1} {\otimes} y\_{2} = y{\otimes} 1 + y\_{-1} {\otimes} y\_{0}$. Now \begin{align*} \Delta_{{\mathcal B}(W)}(x) &= (\vartheta_{{\mathcal B}(W)} {\otimes} \id)\Delta_{{\mathcal A}(W)}(x) \\ &\in \vartheta_{{\mathcal B}(W)}(x){\otimes} 1 + \vartheta_{{\mathcal B}(W)}({\mathcal A}(V)) {\otimes} L \\ &= x{\otimes} 1 + {\mathcal B}(V) {\otimes} L \end{align*} by \eqref{proptheta}, showing \eqref{crucial2}.
(vi). By \eqref{crucial}, for some $y_i\in {\mathcal A}(V)$, $\ell_i \in L$, we have \begin{align*} 0 &= \pi_{{\mathcal A}(V)}(x\_{1}) {\otimes} \operatorname{pr}_1(x\_{2}) = \pi_{{\mathcal A}(V)}(x){\otimes} \operatorname{pr}_1(1) + \sum_i \pi_{{\mathcal A}(V)}(y_i) {\otimes} \operatorname{pr}_1(\ell_i) \\&= \sum_i y_i {\otimes} \operatorname{pr}_1(\ell_i) = x\_{1} {\otimes} \operatorname{pr}_1(x\_{2}) = x\^{1}(x\^{2})\_{-1}{\otimes} \operatorname{pr}_1\left((x\^{2})\_{0}\right). \end{align*} As the projection $\operatorname{pr}_1$ is $H$-colinear, we infer that \begin{align*} 0 &= x\^{1}(x\^{2})\_{-2}{\otimes} (x\^{2})\_{-1}{\otimes} \operatorname{pr}_1\left((x\^{2})\_{0}\right)\\ &\overset{\text{applying }(\mu{\otimes}\id)(\id{\otimes}{\mathcal S}{\otimes}\id)}{\implies} \quad x\^{1} {\otimes} \operatorname{pr}_1(x\^{2}) = 0. \end{align*} Since $x\in L \subset \sum_{n\ge 1} {\mathcal B}^n(W)$, we conclude that $x=0$ by Lemma~\ref{rmk:gradedcondition}.
(vii). Let $0\neq x\in L'$ and write $x = \sum_{1\le m \le p} x(m)$ with $x(m)\in L^m$ and $y:= x(p) \neq 0$. By (vi), $$ 0\neq \pi_{{\mathcal A}(V)}(y\_{1}) {\otimes} \operatorname{pr}_1(y\_{2}) \in {\mathcal A}^{p-1}(V){\otimes} {\mathcal B}^1(W). $$ Now let $F\in \Hom({\mathcal A}(V), \Bbbk )$ be such that the restriction of $F$ to ${\mathcal A}^m(V)$ is 0 for all $m\neq p-1$. We claim that $$ F\pi_{{\mathcal A}(V)}(x\_{1})x\_{2} = F\pi_{{\mathcal A}(V)}(y\_{1}) \operatorname{pr}_1(y\_{2}). $$ Indeed, \begin{align*} F\pi_{{\mathcal A}(V)}(x\_{1})x\_{2} &=\sum_{1\le m \le p} F\pi_{{\mathcal A}(V)}(x(m)\_{1})x(m)\_{2} \\&=F\pi_{{\mathcal A}(V)}(y\_{1})y\_{2} \\ &=F\pi_{{\mathcal A}(V)}(y\_{1}) \operatorname{pr}_1(y\_{2}). \end{align*} Here the second and third equalities are clear from the assumption on $F$: if $m\le p$, then $\pi_{{\mathcal A}(V)}(x(m)\_{1}){\otimes} x(m)\_{2} \in \oplus_{0\le h \le m}{\mathcal A}^{m-h}(V) {\otimes} {\mathcal A}^{h}(W)$, and applying $F$, all terms vanish except those with $m = p$, $h=1$. Choosing $F$ appropriately, we have $$ 0\neq F\pi_{{\mathcal A}(V)}(x\_{1})x\_{2} = F\pi_{{\mathcal A}(V)}(y\_{1}) \operatorname{pr}_1(y\_{2})\in L'\cap U. $$ \end{proof}
Part (vii) of Lemma~\ref{basicL} implies some strong restrictions on the Yetter-Drinfeld submodules of $L$.
\begin{prop}\label{ldirectsum} Assume that $U = U_1 \oplus\cdots \oplus U_\theta$ in ${}^H_H\mathcal{YD}$. Let $L_i = \ad {\mathcal A}(V) (U_i)$. Then $L= L_1 \oplus\cdots \oplus L_\theta$ in ${}^{\ac(V)}_{\ac(V)}\mathcal{YD}$. \end{prop}
\begin{proof} We have to show that the sum $L_1 +\cdots + L_\theta$ is direct. Suppose that $L_i \cap (\sum_{j\neq i} L_j) \neq 0$; then $L_i \cap (\sum_{j\neq i} L_j) \cap U \neq 0$ by Lemma~\ref{basicL}(vii). Note that $(\sum_{j\neq i} L_j) \cap U = (\sum_{j\neq i} L_j)^1 \cap U = \sum_{j\neq i} U_j$. Thus $L_i \cap (\sum_{j\neq i} L_j) \cap U = U_i \cap (\sum_{j\neq i} U_j) \neq 0$, a contradiction. \end{proof}
Clearly, if $U' \subsetneq U$ in ${}^H_H\mathcal{YD}$, then $L' := \ad{\mathcal A}(V) (U') \subsetneq L$ in ${}^{\ac(V)}_{\ac(V)}\mathcal{YD}$. Hence, if $L$ is irreducible in ${}^{\ac(V)}_{\ac(V)}\mathcal{YD}$, then $U$ is irreducible in ${}^H_H\mathcal{YD}$. The converse holds because of Lemma~\ref{basicL}(vii).
\begin{prop}\label{lirreducible} If $U$ is irreducible in ${}^H_H\mathcal{YD}$, then $L$ is irreducible in ${}^{\ac(V)}_{\ac(V)}\mathcal{YD}$. \end{prop}
\begin{proof} Let $0\neq L'$ be a subobject of $L$ in ${}^{\ac(V)}_{\ac(V)}\mathcal{YD}$. Then $L'\cap U \neq 0$ by Lemma~\ref{basicL}(vii). Since both $L'$ and $U$ are $H$-stable, $L'\cap U$ is an $H$-submodule of $U$. It is an $H$-subcomodule of $U$ by \eqref{deltah=deltaav}; thus $L'\cap U \hookrightarrow U$ in ${}^H_H\mathcal{YD}$. By the irreducibility assumption, $L'\cap U = U$, hence $L = \ad {\mathcal A}(V) (U) \subseteq L'$. \end{proof}
If $U = \widetilde{V}$, then we have the following property, important for our later considerations.
\begin{prop}\label{Kgrnlj} The algebra ${\mathcal K}$ is generated
by $\ad {\mathcal B}(V) (\widetilde{V})$. \end{prop}
\begin{proof} Let ${\mathcal K}'$ be the subalgebra of ${\mathcal K}$ generated by $\ad {\mathcal B}(V) (\widetilde{V})$ and let $X$ be the image of ${\mathcal K}'\# {\mathcal B}(V)$ under the isomorphism ${\mathcal K}\# {\mathcal B}(V) \simeq {\mathcal B}(W)$ given by multiplication. It suffices to prove that $X = {\mathcal B}(W)$. Since $V \subset X$ and $\widetilde{V} \subset X$, one gets $W\subset X$; it remains then to show that $X$ is a subalgebra of ${\mathcal B}(W)$. For this, observe that ${\mathcal K}'$ is stable under the adjoint action of ${\mathcal A}(V)$. Indeed, $\ad x(yy') = \ad (x\_{1}) (y) \ad (x\_{2})(y')$, for all $x\in {\mathcal A}(V)$, $y, y' \in \ad {\mathcal B}(V) (\widetilde{V} )$. Hence, if $x\in V$ and $y\in {\mathcal K}'$, then $xy = \ad_c x(y) + (x\_{-1}\cdot y)x\_0 \in {\mathcal K}' + {\mathcal K}'\# V \subset X$. As both ${\mathcal K}'$ and ${\mathcal B}(V)$ are subalgebras, we conclude that $X$ is a subalgebra and the proposition follows. \end{proof}
We now introduce the following finiteness condition on $U$. Recall that $L=\ad {\mathcal B}(V)(U)$.
\begin{equation}\label{Lmax}
\text{$L^M\neq 0$ and $L^p = 0$ for some $M\in \mathbb{N}$
and all $p>M$. } \tag{F} \end{equation} Clearly, a sufficient condition for \eqref{Lmax} is that $L = \oplus_{m\in {\mathbb N}} L^m$ has finite dimension. In this case, $\dim U< \infty$ too.
If $M$ is determined by \eqref{Lmax}, then we write \begin{align}
L^{\rm max} := L^M.
\label{eq:lmax} \end{align}
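As a degenerate sanity check (an illustrative sketch assuming $H=\Bbbk$ with trivial action and coaction, so that the braiding of $W$ is the flip; this hypothetical setting is not used in the sequel), one has ${\mathcal B}(W)=S(W)$, the symmetric algebra, and \begin{align*} \ad_c x(y) = xy - yx = 0 \qquad \text{for all } x\in V,\ y\in \widetilde{V}. \end{align*} Hence $L = U$, Condition \eqref{Lmax} holds with $M=1$, and $L^{\rm max}=U$.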
\begin{lema}\label{basicLt} Assume that $U$ satisfies Condition (F). Let $Z$ be a Yetter-Drinfeld submodule over $H$ of $L^{\rm max}$ and ${\langle Z\rangle}$ the ${\mathcal B}(V)$-subcomodule of $L$ generated by $Z$. \begin{enumerate}
\item[(i)] ${\langle Z\rangle} = \oplus_{m=1}^M {\langle Z\rangle}^m$, where
${\langle Z\rangle}^m = {\langle Z\rangle} \cap {\mathcal B}^m(W)$ for all $m$, and ${\langle Z\rangle}^M = Z$.
\item[(ii)] ${\langle Z\rangle}$ is the ${\mathcal A}(V)$-subcomodule of $L$ generated by $Z$.
\item[(iii)] ${\langle Z\rangle}$ is a graded Yetter-Drinfeld submodule over ${\mathcal A}(V)$ of $L$.
\item[(iv)] ${\langle Z\rangle}$ is a graded Yetter-Drinfeld submodule over $H$ of $L$.
\end{enumerate} \end{lema}
\begin{proof} By \eqref{subcom-gen-gr}, ${\langle Z\rangle}$ is the vector subspace of $L$ spanned by the elements \begin{equation*}
\langle f, z\_{-1}\rangle \, z\_{0}, \quad \text{where } z\in Z, \, f\in {\mathcal B}^n(V)^*, \qquad n\ge 0, \end{equation*} where $z\_{-1}{\otimes} z\_{0}=\delta _{{\mathcal B}(V)}(z)$. Let $z\in Z$ and $f\in {\mathcal B}^n(V)^*$. We obtain that $\langle f, z\_{-1}\rangle\, z\_{0} \in {\langle Z\rangle}^{M-n}$, since $$z\_{-1}{\otimes} z\_{0} = \pi_{{\mathcal B}(V)} (z\^{1}){\otimes} z\^{2}\in \oplus_{m\in {\mathbb N}_0} {\mathcal B}^{m}(V) {\otimes} {\mathcal B}^{M-m}(W).$$ This proves (i). Now (ii) follows from Lemma~\ref{lema:smashcopr} (iii); then (iii)
follows from (ii), Assumption (F), and Remark \ref{ydsubm} (i), while (iv) follows from (iii) and \eqref{deltah=deltaav}. \end{proof}
We can now present the first ingredient of our construction in \eqref{eq:rfliM}.
\begin{theorem}\label{lmaxirreducible} Suppose that $U$ is irreducible
in ${}^H_H\mathcal{YD}$ and satisfies Condition~(F).
Then $L^{\rm max}$ is irreducible in ${}^H_H\mathcal{YD}$
and $L$ is generated by $L^{\rm max}$ as a ${\mathcal B}(V)$-comodule. \end{theorem}
\begin{proof} By Proposition \ref{lirreducible}, $L$ is irreducible in ${}^{\ac(V)}_{\ac(V)}\mathcal{YD}$. If $0\neq Z\hookrightarrow L^{\rm max}$ in ${}^H_H\mathcal{YD}$, then $0\neq {\langle Z\rangle} = {\mathcal B}(V)^*\cdot Z\hookrightarrow L$ in ${}^{\ac(V)}_{\ac(V)}\mathcal{YD}$ by Lemma~\ref{basicLt} (iii). Thus ${\langle Z\rangle} = L$, and $Z = {\langle Z\rangle}^M = L^M = L^{\rm max}$ by Lemma~\ref{basicLt} (i). \end{proof}
\subsection{Reflections} \label{subsection:reflected}
\medbreak Let us fix $\theta \in \mathbb{N}$ and let ${\mathbb I} =\{1,\ldots ,\theta \}$. Let ${\mathcal C}_\theta $ denote the class of all families $$M=(M_1,\dots ,M_\theta )$$ of finite-dimensional{} irreducible Yetter-Drinfeld modules $M_j\in {}^H_H\mathcal{YD} $, where $j\in {\mathbb I} $. Two families $M,M'\in {\mathcal C}_\theta $ are called \textit{isomorphic} if $M_j$ is isomorphic to $M'_j$ in ${}^H_H\mathcal{YD} $ for all $j\in {\mathbb I} $. In this case we write $M\simeq M'$.
\medbreak Let $(\alpha _1,\ldots ,\alpha _\theta )$ be the standard basis of $\Z^{\theta} $. Let $M =(M_1,\ldots ,M_\theta )\in {\mathcal C}_\theta $ and \begin{align}
W=\oplus _{j=1}^\theta M_j.
\label{eq:Wdec} \end{align} Define a $\Z^{\theta}$-grading on $W$ by $\deg M_j=\alpha _j$ for all $j\in {\mathbb I} $.
We fix $i\in{\mathbb I} $ and set $$ V = M_i,\qquad \widetilde{V} = \bigoplus_{j\in {\mathbb I},\, j\neq i}M_j.$$ Thus, we are in the situation of Subsections \ref{subsection:basicK} and \ref{subsection:basicL}. Let \begin{equation}\label{defLj}
L_j := \ad {\mathcal B}(V) (M_j) \qquad \text{for $j\in {\mathbb I}\setminus
\{i\}$.} \end{equation} Thus, $L_j$ is the vector subspace of ${\mathcal B}(W)$ spanned by the elements $$\ad_c (x_1)(\dots(\ad_c (x_m)(y))),\quad x_h\in M_i,\, 1\le h\le m,\, y \in M_j,\, m \ge 0.$$ Recall that ${\mathcal K} ={\mathcal A}(W)^{\operatorname{co} {\mathcal A}(V)}={\mathcal B} (W)^{\operatorname{co} {\mathcal B} (V)}$, see \eqref{K} and Lemma~\ref{basicK} (ii). Consider the $\Z^{\theta}$-grading on the algebras ${\mathcal B}(W)$ and ${\mathcal B}(V)$ discussed in Remark \ref{obs:Dlgraded}, page~\pageref{obs:Dlgraded}. Then the algebras ${\mathcal A}(W)$ and ${\mathcal A}(V)$ are also $\Z^{\theta}$-graded, by setting $\deg H = 0$. Since the map $\pi_{{\mathcal A}(V)}$ in \eqref{eqn:commdiag-avaw} is homogeneous, the algebra ${\mathcal K}$ inherits this grading. Then $L_j$ is a $\Z^{\theta}$-graded subspace of ${\mathcal K}$ and $\operatorname{supp} L_j \subset \alpha_j + {\mathbb N}_0 \alpha_i$. Let \begin{equation}\label{eqn:defi-cartan-matrix}
-a^M_{ij} := \sup \{h\in {\mathbb N}_0\,|\,\alpha_j + h \alpha_i\in \operatorname{supp} L_j\}. \end{equation} Then either $a^M_{ij}\in {\mathbb Z}_{\leq 0}$ (when $\operatorname{supp} L_j$ is finite), or $a^M_{ij} = -\infty$. Let also $a^M_{ii} = 2$.
We introduce the following finiteness conditions for $M$. \begin{enumerate}
\item[$(F_i)$]$\dim L_j$ is finite for all $j\in {\mathbb I}$, $j\neq i$, \end{enumerate} or, equivalently, \begin{enumerate}
\item[$(F'_i)$]$\operatorname{supp} L_j$ is finite for all $j\in {\mathbb I}$, $j\neq i$. \end{enumerate} Note that $(F_i)$ means that $a^M_{i j}>-\infty $ for all $j\in {\mathbb I} \setminus \{i\}$.
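For orientation we record what \eqref{eqn:defi-cartan-matrix} amounts to in the diagonal case (a hedged sketch; the scalars $q_{kl}$ below are assumptions for illustration only): suppose each $M_k$ is one-dimensional, spanned by $x_k$, with braiding $c(x_k{\otimes} x_l) = q_{kl}\, x_l{\otimes} x_k$ for $q_{kl}\in \Bbbk^\times$. The theory of Nichols algebras of diagonal type then gives \begin{align*} -a^M_{ij} = \min\{h\in {\mathbb N}_0\,|\,(h+1)_{q_{ii}}\,(1-q_{ii}^{h}q_{ij}q_{ji})=0\}, \qquad (n)_q := 1+q+\cdots+q^{n-1}, \end{align*} whenever such an $h$ exists; otherwise $a^M_{ij}=-\infty$ and $(F_i)$ fails. For instance, if $q_{ii}=-1$ and $q_{ij}q_{ji}\neq 1$, then $a^M_{ij}=-1$.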
\begin{obs}\label{rmk:gkdim}
It would be interesting to find an \emph{a priori} condition guaranteeing that $(F_i)$ holds.
Obviously, if $\dim {\mathcal B}(W)<\infty$, then $\dim L_j < \infty$ for all $j$. Because of \cite{R2}, we believe that $(F_i)$ holds whenever the Gelfand-Kirillov dimension of ${\mathcal B}(W)$ is finite. \end{obs}
Assume that $M$ satisfies Condition~$(F_i)$. Let $s_{i,M} \in GL(\theta, {\mathbb Z})$ and \begin{align}
{\mathcal R} _i(M):= (M'_1,\ldots ,M'_\theta )\in {\mathcal C}_\theta
\label{eq:rfliM} \end{align} be given by \begin{align}\label{eqn:betaj} s_{i,M}(\alpha_j) &= \alpha_j - a^M_{ij}\alpha_i , \qquad j\in {\mathbb I}, \\ \label{eqn:defRi} M'_j &= \begin{cases}
L_j^{\max } & \text{if $j\neq i$,} \\
{M_i}^{\,*} = V^* & \text{if $j=i$.}\end{cases} \end{align} Notice that ${\mathcal R}_i(M)$ is an object of ${\mathcal C}_{\theta }$ by Theorem \ref{lmaxirreducible}. We say that ${\mathcal R} _i$ is \textit{the $i$-th reflection}. The linear map $s_{i,M} $ is a reflection in the sense of \cite[Ch.\,V, \S2.2]{B68}, that is, $s_{i,M} ^2=\id $ and the rank of $\id -s_{i,M} $ is $1$.
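To illustrate $s_{i,M}$, take the hypothetical values $\theta = 2$, $i=1$ and $a^M_{12}=-2$. Then \eqref{eqn:betaj} gives $s_{1,M}(\alpha_1)=-\alpha_1$ and $s_{1,M}(\alpha_2)=\alpha_2+2\alpha_1$, so that, in the basis $(\alpha_1,\alpha_2)$, \begin{align*} s_{1,M} = \begin{pmatrix} -1 & 2 \\ 0 & 1 \end{pmatrix}, \qquad s_{1,M}^{\,2} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \qquad \id - s_{1,M} = \begin{pmatrix} 2 & -2 \\ 0 & 0 \end{pmatrix}, \end{align*} confirming that $s_{1,M}^{\,2}=\id$ and that $\id - s_{1,M}$ has rank $1$.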
We embed $V^*$ into $W^*$ via the decomposition of $W$ in \eqref{eq:Wdec}. Then $${\mathcal K}\# {\mathcal B}(V^*)\subset {\mathcal B}(W)\#{\mathcal B}(W^*)$$ is a subalgebra, see Definition \ref{def:ksmash}. Further, ${\mathcal K} \# {\mathcal B}(V^*)$ is a $\Z^{\theta} $-graded algebra in ${}^H_H\mathcal{YD} $ with $\deg x=s_{i,M} (\alpha _i)=-\alpha _i$ for all $x\in V^*$, see Remark \ref{obs:Dlgraded}.
\begin{lema}
The map $T({\mathcal K}\oplus V^*)\to
{\mathcal K} \# {\mathcal B}(V^*)$, ${\mathcal K}\oplus V^*\ni (x,f)\mapsto x\#1+1\#f$, induces
an algebra isomorphism $T({\mathcal K}\oplus V^*)/I\to {\mathcal K}\#{\mathcal B}(V^*)$, where $I$ is
the two-sided ideal generated by
\begin{itemize}
\item[(i)] the elements $x\otimes y-xy$, where $x,y\in {\mathcal K}$, and $1_{\mathcal K} -
1_{T({\mathcal K} \oplus V^*)}$,
\item[(ii)] the relations of ${\mathcal B}(V^*)$,
\item[(iii)] the elements \begin{align}\label{eq:V*Krel} g\otimes x - x\_{0}\otimes {\mathcal S}^{-1}(x\_{-1})\cdot g -\pl{g}(x), \qquad x\in {\mathcal K} , \, g\in V^*. \end{align}
\end{itemize}
\label{le:KBgenrel} \end{lema}
\begin{proof} See the proof of Proposition \ref{prop:qdo-toba}. \end{proof}
Let $W'=\oplus _{j=1}^\theta M'_j$. Then $W'$ is contained in ${\mathcal K} \# {\mathcal B} (V^*)$ via the embeddings \begin{align*}
&M'_j\subset {\mathcal K} \simeq {\mathcal K} \# \Bbbk \subset {\mathcal K} \# {\mathcal B} (V^*) \qquad
\text{for all $j\not=i$,}\\
&M'_i=V^*\simeq \Bbbk \# V^*\subset {\mathcal K} \# {\mathcal B} (V^*)^+. \end{align*} Moreover, $W'$ inherits a $\Z^{\theta} $-grading from ${\mathcal B}(W)\#{\mathcal B}(W^*)$: One has $\deg M'_j=s_{i,M} (\alpha _j)$ for all $j\in {\mathbb I}$.
\begin{lema}\label{lema:gen-by-Wprime}
The algebra ${\mathcal K}\# {\mathcal B}(V^*)$ is generated by $W'$. \end{lema}
\begin{proof} Let $\Bg =\Bbbk \langle W'\rangle $ be the subalgebra of ${\mathcal K} \# {\mathcal B} (V^*)$ generated by $W'$. Since $W'\in {}^H_H\mathcal{YD} $, $\Bg$ is a subobject of ${\mathcal B}(W)\#{\mathcal B}(W^*)$ in ${}^H_H\mathcal{YD} $ by Corollary~\ref{cor:qdo-ydh}. Fix $j\neq i$ and pick $x\in L_j\cap \Bg$, $f \in V^*$. Then $f x = x\_{0}\, {\mathcal S}^{-1}(x\_{-1})\cdot f + \pl{f}(x)$ by \eqref{eqn:rel-qdogral}. Since $L_j\cap \Bg$ is a Yetter-Drinfeld submodule over $H$, it follows that $\pl{f}(x)\in \Bg$. But $$ \pl{f}(x) = \langle f, x\^1\rangle x\^2 \in \langle f, x\rangle 1 + \langle f, {\mathcal B}(V) \rangle L_j \subset L_j, $$ by \eqref{crucial2}. This shows that $L_j\cap \Bg$ is a ${\mathcal B}(V)$-subcomodule of $L_j$; indeed, $\langle f,x\_{-1}\rangle x\_{0} = \langle f,\pi_{{\mathcal B}(V)} (x\^{1})\rangle x\^{2}=\langle f, x\^1\rangle x\^2$. We conclude that $L_j\cap \Bg = L_j$ by Lemma~\ref{basicLt} (iii) and Prop.~\ref{lirreducible}, since $0\neq L_j\cap \Bg \supset L_j^{\max}$.
Hence $L_j\subset \Bg $ for $j\in {\mathbb I} \setminus \{i\}$, and Prop.~\ref{Kgrnlj} implies that ${\mathcal K}\subset \Bg $. This proves the lemma. \end{proof}
\medbreak Here is our first main result.
\begin{theorem}\label{theo:main}
Let $M=(M_1,\ldots ,M_\theta )\in {\mathcal C}_\theta $
and $i\in {\mathbb I} $ such that $M$ satisfies Condition~$(F_i)$.
Let $V=M_i$, $W=\oplus _{j\in {\mathbb I} }M_j$,
${\mathcal K} ={\mathcal B}(W)^{\operatorname{co} {\mathcal B}(V)}$,
$M'={\mathcal R}_i(M)$ and $W'=\oplus _{j\in {\mathbb I} }M'_j$.
Define a $\Z^{\theta} $-grading on $W'$ by $\deg x=s_{i,M}(\alpha _j)$
for all $x\in M'_j$, $j\in {\mathbb I} $.
\begin{enumerate}
\item
The
inclusion $W'\hookrightarrow {\mathcal K}\#{\mathcal B}(V^*)$
induces a $\Z^{\theta}$-homogeneous isomorphism
${\mathcal B}(W') \simeq {\mathcal K}\#{\mathcal B}(V^*)$ of algebras and of Yetter-Drinfeld modules over $H$. \item The family ${\mathcal R}_i(M)$ satisfies Condition $(F_i)$, and
$s_{i,{\mathcal R} _i(M)}=s_{i,M} $, ${\mathcal R} _i^2(M)\simeq M$.
\end{enumerate}
\end{theorem}
We prove the theorem in several steps. The strategy of the proof is the following. First we define a surjective algebra map $\Omega :{\mathcal K}\#{\mathcal B}(V^*)\to {\mathcal B} (W')$. Then we conclude that the same construction can be performed for $M'$ instead of $M$, and that (2) holds. Finally we prove that $\Omega $ is bijective. The restriction of the inverse map of $\Omega $ to $W'$ is the given embedding of $W'$ in ${\mathcal K} \# {\mathcal B} (V^*)$.
For the definition of $\Omega$ in Prop.~\ref{pr:defOm} we use the characterization of Nichols algebras in Theorem \ref{theo:quotients-with-derivations} (ii). In the next lemma we prove the existence of the required family of braided derivations.
\begin{mlema}\label{le:exderiv} There is a unique family $d:W'{}^{*}\to \operatorname{End} ({\mathcal K} \# {\mathcal B} (V^*))$ of braided derivations of ${\mathcal K} \# {\mathcal B} (V^*)$ such that \begin{align}\label{eq:dfw'}
d_f(w')=\langle f,w'\rangle \end{align} for all $f\in W'{}^* \simeq V^{**} \oplus \bigoplus_{j\in {\mathbb I} \setminus \{i\}}(L_j^{\max})^*$, $w'\in W'$. Moreover, for all $v\in V$, $f=\phi_V(v)\in V^{**}$, and $x\in {\mathcal K} $, the equation $d_f(x)=-\ad_c v(x)$ holds. \end{mlema}
\begin{proof} The family $d$ is unique since ${\mathcal K}\#{\mathcal B}(V^*)$ is generated by $W'$, see Lemma~\ref{lema:gen-by-Wprime}. By Definition \ref{defi:skew-der} it is sufficient to show that \begin{enumerate}
\item there exists a family $d:V^{**}\to \operatorname{End} ({\mathcal K} \# {\mathcal B} (V^*))$ of braided derivations of ${\mathcal K} \# {\mathcal B} (V^*)$ such that $d_f(w')=\langle f,w'\rangle $ for all $f\in V^{**}$ and $w'\in W'$,
\item for all $j\in {\mathbb I} \setminus \{i\}$ there exists a family $d:(L_j^{\max})^*\to \operatorname{End} ({\mathcal K} \# {\mathcal B} (V^*))$ of braided derivations of ${\mathcal K} \# {\mathcal B} (V^*)$ such that $d_f(w')=\langle f,w'\rangle $ for all $f\in (L_j^{\max})^*$ and $w'\in W'$. \end{enumerate}
First we prove (1). Let $d:V^{**}\to \operatorname{End} ({\mathcal B}(W)\#{\mathcal B}(W^*))$ be the restriction to $V^{**}$ of the family of braided derivations in Theorem \ref{theo:skew-der-qdo}. By \eqref{eqn:def-dv} one gets \begin{align}
d_f(x)=-\ad _c v(x)\quad \text{for all $v\in V$, $f=\phi _V(v)\in V^{**}$,
$x\in {\mathcal K} $.}
\label{eq:dfx} \end{align} Thus $d_f({\mathcal K})\subset{\mathcal K}$ for all $f\in V^{**}$, and $d_f({\mathcal B}(V^*))\subset {\mathcal B} (V^*)$ since $d_f(w')=\langle f,w'\rangle $ for all $w'\in V^*$ by \eqref{eqn:def-dv}. Hence $d$ induces a family of braided derivations of ${\mathcal K} \# {\mathcal B}(V^*)$ by restriction. The relation $d_f(w')=\langle f,w'\rangle =0$ for $w'\in L_j^{\max}$, $j\not=i$, follows from the definition of $L_j^{\max}$, and the second claim of the lemma holds by \eqref{eq:dfx}.
\medbreak To prove (2), let $j\in {\mathbb I} \setminus \{i\}$. We first define a family
$d:(L_j^{\max})^*\to \operatorname{End} ({\mathcal K} )$ of braided derivations of ${\mathcal K} $.
Then we extend $d$ to a family of braided derivations of ${\mathcal K} \# {\mathcal B}(V^*)$.
Recall from \eqref{eqn:betaj} that $s_{i,M} (\alpha _j)=\alpha _j-a^M_{i j}\alpha _i$. Define $d_F: {\mathcal B}(W) \to {\mathcal B}(W)$ for any $F\in {\mathcal B}(W^*)_{-s_{i,M} (\alpha _j)}$ by \begin{equation}\label{eqn:dfLj} d_F(x) := \langle F, (x\^2)\_0\rangle {\mathcal S}^{-1}((x\^2)\_{-1})\cdot x\^1, \qquad x\in {\mathcal B}(W), \end{equation} see \eqref{eq:plbxi}. Then \begin{equation}\label{eqn:dfLj-seanula} d_F(x) = 0 \qquad \text{if } x\in L_h, h\neq j,i, \text{ or } x\in L_j^m,\ m < 1 - a^M_{ij}. \end{equation} Indeed, if $x\in L_h^m$, where $h\in {\mathbb I} \setminus \{i\}$, $m\in {\mathbb N} $, then by Lemma~\ref{basicL} (iii) and \eqref{crucial2} one gets
\begin{align*} \Delta(x) \in x{\otimes} 1 + 1{\otimes} x + \sum_{0< r< m} {\mathcal B}(W)_{r\alpha_i}{\otimes}{\mathcal B}(W)_{\alpha_h + (m-1-r)\alpha_i}. \end{align*} Hence $\langle F, (x\^2)\_0\rangle = 0$ whenever $h\neq j$, or $h=j$ and $m<1-a^M_{ij}$. Further, \begin{equation}\label{eqn:dfLj-max}
d_F(x) = \langle F, x\rangle \qquad \text{for all } x\in
L_j^{1-a^M_{ij}}. \end{equation} We next claim that \begin{equation}\label{eqn:dfLj-K}
d_F(xy) = d_F(x)y + (F\_{-1}\cdot x) d_{F\_0}(y) \qquad
\text{for all }x, y\in {\mathcal K}. \end{equation} Let $x,y\in {\mathcal K} $. Then \begin{equation}\label{eq:dFxy} \begin{aligned} d_F(xy) = &\langle F, (x\^2)\_0(y\^2)\_0\rangle \\ &\quad \times {\mathcal S}^{-1}((x\^2)\_{-1}(y\^2)\_{-1}) \cdot [x\^1((x\^2)\_{-2}\cdot y\^1)]. \end{aligned} \end{equation} Now $\langle F, (x\^2)\_0(y\^2)\_0\rangle = \langle F\^1, (y\^2)\_0\rangle\langle F\^2, (x\^2)\_0\rangle$. Further, \begin{align*}
\Delta(F) - F{\otimes} 1 - 1{\otimes} F \in
\sum _{0<r<1-a^M_{i j}}(&{\mathcal B}(W^*)_{-r\alpha _i}{\otimes}
{\mathcal B}(W^*)_{-s_{i,M} (\alpha _j)+r\alpha _i}\\
&+{\mathcal B}(W^*)_{-s_{i,M} (\alpha _j)+r\alpha _i}{\otimes} {\mathcal B}(W^*)_{-r\alpha _i}). \end{align*} Since ${\mathcal K} \subset {\mathcal B}(W)$ is a left coideal and $\langle F',{\mathcal K} \rangle=0$ for all $F'\in {\mathcal B}(W^*)_{-r\alpha _i}$ and $r>0$, one gets $$\langle F, (x\^2)\_0(y\^2)\_0\rangle = \langle F,(y\^2)\_0\rangle \varepsilon ((x\^2)\_0)+\varepsilon ((y\^2)\_0)\langle F,(x\^2)\_0\rangle.$$ This means that $d_F$ behaves in the same way as $d_{F'}$ for primitive $F'$, and hence \eqref{eqn:dfLj-K} follows from \eqref{plinverso-prim}.
We point out two consequences of the claim \eqref{eqn:dfLj-K}. First, this shows that $d_F({\mathcal K}) \subset {\mathcal K}$; indeed, ${\mathcal K}$ is generated as an algebra by $L$ and we know already that $d_F(L) \subset {\mathcal K}$ by \eqref{eqn:dfLj-seanula} and \eqref{eqn:dfLj-max}. Second, the inclusion $L_j^{1-a^M_{ij}}\subset {\mathcal B}(W)_{s_{i,M} (\alpha _j)}$ induces a projection $\pi: {\mathcal B}(W^*)_{-s_{i,M} (\alpha _j)} \to \big(L_j^{1-a^M_{ij}}\big)^*$; then $d_F\in \operatorname{End} {\mathcal K}$ depends only on $f = \pi(F)$. Indeed, if $\pi(F) = 0$, then $d_F = 0$ on $L$ by \eqref{eqn:dfLj-seanula} and \eqref{eqn:dfLj-max}. Hence $d_F = 0$ on ${\mathcal K}$ by \eqref{eqn:dfLj-K}. Thus we have constructed the desired family $d:(L_j^{\max})^*\to \operatorname{End} {\mathcal K} $ of braided derivations of ${\mathcal K} $.
Now we extend $d$ to a family of braided derivations of ${\mathcal K}\#{\mathcal B}(V^*)$ by letting \begin{equation}\label{eqn:df-final} d_f(x g) = d_f(x) g, \qquad x\in {\mathcal K},\, g\in {\mathcal B}(V^*). \end{equation} It is clear that $d_f(w')=\langle f,w'\rangle $ for all $f\in (L_j^{\max})^*$, $w'\in W'$. It remains to prove that \begin{align}
d_f(bc) = (f\_{-1} \cdot b)\, d_{f\_{0}}(c) + d_{f}(b)\, c
\label{eq:dfbc} \end{align} for all $b,c\in {\mathcal K}\#{\mathcal B}(V^*)$ and $f = \pi(F)\in \big(L_j^{1-a^M_{ij}}\big)^*$. Similarly to the proof of Theorem \ref{theo:skew-der-qdo}, we use Lemma~\ref{le:fambrderTVI} (iii)$\Rightarrow $(i) and Lemma~\ref{le:KBgenrel} to show that $d:(L_j^{\max})^*\to \operatorname{End} ( {\mathcal K}\#{\mathcal B}(V^*))$, given in \eqref{eqn:df-final}, defines a family of braided derivations of ${\mathcal K}\#{\mathcal B}(V^*)$. Again it suffices to check that \begin{align}
d'_f(g\otimes x)=d'_f(x\_0\otimes {\mathcal S}^{-1}(x\_{-1})\cdot g +\pl{g}(x))
\label{eq:d'fgx} \end{align} for all $x\in {\mathcal K}$, $g\in V^*$, where $d':(L_j^{\max})^*\to \Hom(T({\mathcal K}\oplus V^*),{\mathcal K}\#{\mathcal B}(V^*))$ denotes the family of braided derivations induced by
$d'_f|_{\mathcal K}=d_f$ and $d'_f|_{V^*}=0$.
The right-hand side of \eqref{eq:d'fgx} is \begin{align*} &d'_f\big( x\_{0}\otimes {\mathcal S}^{-1}(x\_{-1})\cdot g + \langle g, x\^1\rangle x\^2\big)\\ &\quad = d_F( x\_{0})\, {\mathcal S}^{-1}(x\_{-1})\cdot g +d_F( \langle g, x\^1\rangle x\^2)\\ &\quad = \langle F, (x\^2)\_0\rangle \big({\mathcal S}^{-1}((x\^2)\_{-1})\cdot (x\^1)\_0\big) \, \big( {\mathcal S}^{-1}((x\^1)\_{-1}(x\^2)\_{-2})\cdot g\big)
\\ &\qquad + \langle g, x\^1\rangle \langle F, (x\^3)\_0\rangle {\mathcal S}^{-1}((x\^3)\_{-1})\cdot x\^2,
\end{align*} and the left-hand side is \begin{align*} (f\_{-1} \cdot g)& d'_{f\_{0}}(x) + d'_f(g) x = (F\_{-1} \cdot g) d_{F\_{0}}(x) \displaybreak[0] \\ \overset{\eqref{eqn:dfLj}}{=}& (F\_{-1} \cdot g) \langle F\_0, (x\^2)\_0\rangle {\mathcal S}^{-1}((x\^2)\_{-1})\cdot x\^1\\ \overset{\eqref{properties:duality4-tensor}}{=}& ({\mathcal S}^{-1}( (x\^2)\_{-1})\cdot g) \langle F, (x\^2)\_0\rangle {\mathcal S}^{-1}((x\^2)\_{-2})\cdot x\^1\\ \overset{\phantom{\eqref{eqn:dfLj}}}{=}& \langle F, (x\^2)\_0\rangle {\mathcal S}^{-1}( (x\^2)\_{-1})\cdot (gx\^1)\\ \overset{\eqref{eqn:rel-qdogral}}{=}& \langle F, (x\^2)\_0\rangle {\mathcal S}^{-1}( (x\^2)\_{-1})\cdot \big( (x\^1)\_0 {\mathcal S} ^{-1}( (x\^1)\_{-1})\cdot g\big)\\ \displaybreak[0] &+ \langle F, (x\^3)\_0\rangle {\mathcal S}^{-1}( (x\^3)\_{-1})\cdot \langle g,x\^1\rangle x\^2. \end{align*} This proves \eqref{eq:d'fgx} and completes the proof of the lemma. \end{proof}
Recall the notation from Theorem~\ref{theo:main}.
\begin{prop}\label{pr:defOm}
There exists a unique surjective algebra map
$$\Omega :{\mathcal K}\# {\mathcal B}(V^*)\to {\mathcal B}(W')$$
which is the identity on $W'$.
Define a $\Z^{\theta} $-grading on $W'$ by $\deg x=s_{i,M} (\alpha _j)$ for all
$x\in M'_j$, $j\in {\mathbb I} $.
Then $\Omega $ is a $\Z^{\theta} $-graded map in ${}^H_H\mathcal{YD} $, and for
all $v\in V$, $f\in V^*$, $x\in {\mathcal K}$ the following equations
hold.
\begin{align}
\label{eq:Omplf}
\Omega (\pl{f}(x))=&\ad _{c^{-1}}f(\Omega (x)),\\
\Omega ( \ad _c v(x))=&-d^{W'}_{\phi_V(v)}(\Omega (x)).
\label{eq:Omadv}
\end{align} \end{prop}
\begin{proof} By Lemma~\ref{lema:gen-by-Wprime} there is a unique surjective algebra map $T(W')\to {\mathcal K}\#{\mathcal B}(V^*)$ which is the identity on $W'$. Let $I$ be the kernel of this map. Since ${\mathcal K}\#{\mathcal B}(V^*)$ is an $H$-module, $I$ is invariant under the action of $H$. By Main Lemma~\ref{le:exderiv} there is a unique family $d:W'{}^*\to \operatorname{End} ({\mathcal K} \# {\mathcal B}(V^*))$ of braided derivations satisfying \eqref{eq:dfw'}. Thus Theorem \ref{theo:quotients-with-derivations} (ii) applies, that is, the algebra map $\Omega $ exists and is unique. By definition of the $\Z^{\theta} $-gradings and the Yetter-Drinfeld structures, $\Omega $ is a $\Z^{\theta} $-graded map in ${}^H_H\mathcal{YD} $.
\eqref{eq:Omadv} follows from \eqref{factorizacion-omega} by using the second part of Main Lemma~\ref{le:exderiv}, Equations \eqref{eq:plbxi}, \eqref{plinverso-prim}, and Definition \ref{de:canfam}. \eqref{eq:Omplf} follows from the formulas \begin{align*}
\Omega (\pl{f}(x))
\overset{\eqref{eqn:rel-qdogral}}{=}&
\Omega (fx-x\_0 ({\mathcal S}^{-1}(x\_{-1})\cdot f))\\
\overset{\phantom{\eqref{eqn:rel-qdogral}}}{=}&
f\Omega (x)-\Omega (x\_0) ({\mathcal S}^{-1}(x\_{-1})\cdot f)
=\ad _{c^{-1}}f(\Omega (x)). \end{align*} \end{proof}
Now we are prepared to prove Theorem \ref{theo:main}.
\begin{proof}[Proof of Theorem \ref{theo:main}] We follow the strategy explained below Theorem \ref{theo:main}. Recall that $L_j=\ad {\mathcal B}(V)(M_{\alpha _j})$ and \begin{equation} \begin{aligned}
L_j=\Bbbk\text{-span of }\{\pl{f_1}\cdots \pl{f_n}(x)\,|\,&
x\in L_j^{\max}=M'_j,\\
&n\ge 0, f_1,\ldots ,f_n\in V^*\} \end{aligned}
\label{eq:Lj} \end{equation} for all $j\in {\mathbb I} \setminus \{i\}$ by Theorem \ref{lmaxirreducible}. Let $M'={\mathcal R}_i(M)$ as in \eqref{eq:rfliM}, \begin{align}
L'_j=\ad {\mathcal B}(V^*)(M'_{\beta _j})\subset {\mathcal B}(W'),
\label{eq:L'j} \end{align} and $\Omega :{\mathcal K}\#{\mathcal B}(V^*)\to {\mathcal B}(W')$ the epimorphism in Prop.~\ref{pr:defOm}.
We first claim that \begin{align}
\widetilde{\Omega }|_{L_j}:L_j\to L'_j\quad \text{is bijective,}
\label{eqn:isodemodulos} \end{align} where $\widetilde{\Omega }={\mathcal S} _{{\mathcal B}(W')}\Omega $. Indeed, let $x\in L_j^{\max}$, $n\ge 0$, and $f_1,\ldots ,f_n\in V^*$. Then \begin{align*}
&{\mathcal S}_{{\mathcal B}(W')}\Omega(\pl{f_1}\cdots \pl{f_n}(x))
\overset{\eqref{eq:Omplf}}{=}{\mathcal S}_{{\mathcal B}(W')}\big( \ad _{c^{-1}}(f_1)(\Omega
(\pl{f_2}\cdots \pl{f_n}(x)))\big)\\
&\quad \overset{\eqref{S1}}{=}\ad _c (f_1)\big({\mathcal S} _{{\mathcal B}(W')}\Omega
(\pl{f_2}\cdots \pl{f_n}(x))\big)\\
&\quad \overset{\phantom{\eqref{S1}}}{=}
\ad _c (f_1)\big(\cdots \big(\ad _c (f_n)\big({\mathcal S} _{{\mathcal B}(W')}\Omega (x)
\big)\big)\big)\\
&\quad \overset{\phantom{\eqref{S1}}}{=}
\ad _c (f_1)\big(\cdots \big(\ad _c (f_n)\big({\mathcal S} _{{\mathcal B}(W')}(x)\big)
\big)\big)\\
&\quad \overset{\phantom{\eqref{S1}}}{=}
-\ad _c (f_1)(\cdots (\ad _c (f_n)(x)))\in L'_j. \end{align*} Since $\widetilde{\Omega }(x)=-x$ for all $x\in L_j^{\max}=M'_{\beta _j}$, \eqref{eq:Lj} and \eqref{eq:L'j} imply that $\widetilde{\Omega }(L_j)=L'_j$. We now prove that $\ker \widetilde{\Omega }\cap L_j=\ker \Omega \cap L_j$ is a Yetter-Drinfeld module over ${\mathcal A}(V)$. Together with the irreducibility of $L_j$, see Prop.~\ref{lirreducible}, this implies that $\widetilde{\Omega }$ is injective and hence Claim~\eqref{eqn:isodemodulos} holds.
Since $\Omega $ is a map in ${}^H_H\mathcal{YD} $, see Prop.~\ref{pr:defOm}, one obtains that $\ker \Omega \cap L_j$ is an object in ${}^H_H\mathcal{YD} $. Further, for all $x\in L_j\cap \ker \Omega $ one has \begin{align*}
\Omega ({\mathcal B}(V)\cdot x)=0\quad \text{by \eqref{eq:Omadv}, and}\quad
\Omega(V^*\cdot x)=0 \quad \text{by \eqref{eq:Omplf}.} \end{align*} Thus $L_j\cap \ker \Omega $ is an object in ${}^{\ac(V)}_{\ac(V)}\mathcal{YD} $, and Claim~\eqref{eqn:isodemodulos} is proven.
Now we prove Theorem \ref{theo:main}(2). Since $\Omega $ and ${\mathcal S}_{{\mathcal B}(W')}$ are $\Z^{\theta} $-graded maps, \eqref{eqn:isodemodulos} implies that $\operatorname{supp} L_j=\operatorname{supp} L'_j$ for all $j\in {\mathbb I} \setminus \{i\}$. In particular, $\operatorname{supp} L'_j$ is finite for all $j\in {\mathbb I} \setminus \{i\}$, that is, Condition~$(F_i)$ is fulfilled for $M'={\mathcal R}_i(M)$, and hence $M'':={\mathcal R}_i(M')$ is well-defined. For all $j\in {\mathbb I} $ let $\gamma _j=s_{i,M'}s_{i,M}(\alpha _j)$. Then by Eq.~\eqref{eqn:betaj} one obtains for all $j\in {\mathbb I} \setminus \{i\}$ the equations \begin{align}\label{eqn:defi-cartan-matrix-prime}
-a^{M'}_{ij} = &\sup \{h\in {\mathbb N}_0\,|\,s_{i,M} (\alpha_j) + h s_{i,M} (\alpha _i)
\in \operatorname{supp} L'_j\}
=-a^M_{i j}, \\
\gamma_j = &s_{i,M} (\alpha_j) - a^{M'}_{ij}s_{i,M} (\alpha_i)=\alpha _j,\\
M''_j = &L'_j\cap {\mathcal B}(W')_{\gamma _j}\simeq
L_j\cap {\mathcal B}(W)_{\gamma _j}=M_j,
\label{eq:M''} \end{align} where the last equation follows from the fact that
$\widetilde{\Omega }|_{L_j}:L_j\to L'_j$ is a $\Z^{\theta} $-graded isomorphism in ${}^H_H\mathcal{YD} $, see Prop.~\ref{pr:defOm} and Claim~\eqref{eqn:isodemodulos}. Since $M''_i=(M'_i)^*=M_i^{**} \simeq M_i$ by Remark \ref{lema:double-dual}, one obtains that ${\mathcal R}_i(M')\simeq M$, that is, Theorem \ref{theo:main}(2) is proven.
It remains to prove that $\Omega$ is an isomorphism. Let ${\mathcal K}'={\mathcal B}(W')^{\operatorname{co} {\mathcal B}(M'_{\beta _i})}$, $W''=\oplus _{j\in {\mathbb I} }M''_{\gamma _j}$, and ${\mathcal K}''={\mathcal B}(W'')^{\operatorname{co} {\mathcal B}(M''_{\gamma _i})}$. Since ${\mathcal K}$ resp. ${\mathcal K}'$ is generated as an algebra by $\oplus _{j\in {\mathbb I} \setminus \{i\}}L_j$ resp. $\oplus _{j\in {\mathbb I} \setminus \{i\}}L'_j$, see Prop.~\ref{Kgrnlj}, we conclude from Lemma~\ref{S} (ii) and Claim~\eqref{eqn:isodemodulos} that $\widetilde{\Omega}({\mathcal K}) = {\mathcal K}'$. By the same argument we have $\widetilde{\Omega}'({\mathcal K}') = {\mathcal K}''$, where $\widetilde{\Omega}':{\mathcal K}'\#{\mathcal B}(V^{**})\to {\mathcal B}(W'')$ is the map in Prop.~\ref{pr:defOm} obtained by starting with the family $M'$ instead of $M$. Thus $\widetilde{\Omega}$ and $\widetilde{\Omega}'$ define surjective $\mathbb{Z}^{\theta}$-homogeneous maps $${\mathcal K}
\xrightarrow{\widetilde{\Omega}|_{\mathcal K}} {\mathcal K}'
\xrightarrow{\widetilde{\Omega}'|_{{\mathcal K}'}} {\mathcal K}''$$ of Yetter-Drinfeld modules over $H$. But ${\mathcal K} \simeq {\mathcal K}''$ as $\mathbb{Z}^{\theta}$-graded Yetter-Drinfeld modules since $W \simeq W''$ by Theorem \ref{theo:main} (2). The $\mathbb{Z}^{\theta}$-homogeneous components of ${\mathcal K}$ are all finite-dimensional{} since $W$ is finite-dimensional. Hence the map ${\mathcal K}
\xrightarrow{\widetilde{\Omega}|_{\mathcal K}} {\mathcal K}'
\xrightarrow{\widetilde{\Omega}'|_{{\mathcal K}'}} {\mathcal K}''$ is bijective, and
$\widetilde{\Omega}|_{\mathcal K} : {\mathcal K} \rightarrow {\mathcal K}'$ is an isomorphism. Next, let $$\mu : {\mathcal B}(V^*) \otimes {\mathcal K} \to {\mathcal K} \# {\mathcal B}(V^*)\; \text{ and } \mu' : {\mathcal K}' \otimes {\mathcal B}(V^*) \to {\mathcal B}(W')$$ be the multiplication maps. By Remark \ref{re:multinv} resp. Lemma~\ref{basicK} (ii), both maps are bijective. Let $f \in {\mathcal B}(V^*)$ and $x \in {\mathcal K}$. Then \begin{align*} \widetilde{\Omega}(fx) &= {\mathcal S}_{{\mathcal B}(W')}(f \Omega(x)) = (f\_{-1} \cdot \widetilde{\Omega}(x)) {\mathcal S}_{{\mathcal B}(W')}(f\_0) \\& =(f\_{-1} \cdot \widetilde{\Omega}(x)) {\mathcal S}_{{\mathcal B}(V^*)}(f\_0). \end{align*} Thus $\widetilde{\Omega} \mu = \mu' c ({\mathcal S}_{{\mathcal B}(V^*)}
\otimes ({\mathcal S}_{{\mathcal B}(W')} \Omega|_{\mathcal K}))$. Hence $\widetilde{\Omega}$, and \emph{a fortiori} $\Omega$, are bijective. This completes the proof of Theorem \ref{theo:main}. \end{proof}
\begin{obs} The proof of Theorem \ref{theo:main} does not use the fact that
$V=M_i$ is irreducible in ${}^H_H\mathcal{YD} $. However,
$M_i$ has to be irreducible if one wants to apply the theorem
for an index $j\in {\mathbb I} $, $j\not=i$, which satisfies Condition $(F_j)$. \end{obs}
\bigbreak The algebras ${\mathcal B}(W)$ and ${\mathcal B}(W')$ are not necessarily isomorphic. However, we have the following consequences of Theorem \ref{theo:main}.
\begin{cor}\label{co:WW'supp}
Let $M$, $i$, $W$, $W'$ and the $\Z^{\theta} $-gradings of $W$ and $W'$
be as in Thm.~\ref{theo:main}. Then
${\mathcal B}(W)\#{\mathcal B}(W^*)$ and ${\mathcal B}(W')\#{\mathcal B}(W'{}^*)$ are isomorphic
as $\Z^{\theta} $-graded objects in ${}^H_H\mathcal{YD} $. In particular,
$\operatorname{supp} {\mathcal B}(W)\#{\mathcal B}(W^*)=\operatorname{supp} {\mathcal B}(W')\#{\mathcal B}(W'{}^*)$. \end{cor}
\begin{proof} Since the homogeneous components of ${\mathcal K}$ are finite-dimensional, the graded dual ${\mathcal K}^{\operatorname{gr-dual} }$ of ${\mathcal K}$ is a $\Z^{\theta} $-graded object in ${}^H_H\mathcal{YD}$. By definition of ${\mathcal K} $ and the isomorphism ${\mathcal B}(W^*)\simeq {\mathcal B}(W)^{\operatorname{gr-dual} }$, see \eqref{eq:BVgrdual}, one has $${\mathcal B}(W)\#{\mathcal B}(W^*)\simeq {\mathcal K} \otimes {\mathcal B}(V) \otimes {\mathcal K} ^{\operatorname{gr-dual} }\otimes {\mathcal B}(V^*)$$ as $\Z^{\theta} $-graded objects in ${}^H_H\mathcal{YD} $. Further, Theorem \ref{theo:main} implies that $${\mathcal B}(W')\#{\mathcal B}(W'{}^*)\simeq {\mathcal K} \otimes {\mathcal B}(V^*) \otimes {\mathcal K} ^{\operatorname{gr-dual} }\otimes {\mathcal B}(V)$$ as $\Z^{\theta} $-graded objects in ${}^H_H\mathcal{YD} $. Since $A{\otimes} B\simeq B{\otimes} A$ for all $\Z^{\theta}$-graded objects $A,B$ in ${}^H_H\mathcal{YD} $, the above equations prove the corollary. \end{proof}
In many applications it will be more convenient to use the following reformulation of Corollary~\ref{co:WW'supp}.
\begin{cor}\label{co:WW'supp2}
Let $M$, $i$, $W$, $W'$ be as in Thm.~\ref{theo:main}. Define
$\Z^{\theta} $-gradings on $W$ and $W'$ by $\deg x=\alpha _j$ for all $x\in M_j$
and all $x\in M'_j$, $j\in {\mathbb I} $.
Then for all $\alpha \in \Z^{\theta} $ the homogeneous components
$({\mathcal B}(W)\#{\mathcal B}(W^*))_{\alpha }$ and $({\mathcal B}(W')\#{\mathcal B}(W'{}^*))_{s_{i,M}
(\alpha )}$
are isomorphic in ${}^H_H\mathcal{YD} $. In particular,
$$\operatorname{supp} {\mathcal B}(W')\#{\mathcal B}(W'{}^*)=s_{i,M} (\operatorname{supp} {\mathcal B}(W)\#{\mathcal B}(W^*)).$$ \end{cor}
\begin{cor}\label{cor:dimfin} If $\dim {\mathcal B} (W) < \infty$, then $\dim {\mathcal B} (W) = \dim{\mathcal B}(W')$. \end{cor}
\begin{proof} We compute $\dim {\mathcal B} (W) = \dim {\mathcal K} \dim {\mathcal B} (V) = \dim {\mathcal K} \dim {\mathcal B} (V^*) = \dim{\mathcal B}(W')$. Here the first equality holds by Lemma~\ref{K0} (ii), the second by Prop.~\ref{prop:duality}, and the third by
Theorem \ref{theo:main}. \end{proof}
\subsection{Weyl groupoid and real roots} \label{ss:Weylgroupoid}
In this subsection we define and study invariants of finite families of finite-dimensional{} irreducible Yetter-Drinfeld modules. The definitions are based on Theorem \ref{theo:main}.
\medbreak
Recall the definition of ${\mathcal C}_\theta$ from Subsection \ref{subsection:reflected}. If $M$, $M'\in {\mathcal C}_{\theta }$, then we say that $$M\sim M'$$ if there exists an index $i$ such that Condition~$(F_i)$ holds for $M$, see Subsection \ref{subsection:reflected}, and if ${\mathcal R}_i(M)\simeq M '$. By Theorem \ref{theo:main}(2), the relation $\sim $ is symmetric. The equivalence relation $\approx$ generated by $\sim$ is called \textit{Weyl equivalence}.
Recall that $(\alpha _1,\ldots ,\alpha _\theta )$ is the standard basis of $\Z^{\theta} $ and ${\mathbb I} =\{1,2,\dots ,\theta \}$.
\begin{definition}\label{def:rsys}
Let $M \in {\mathcal C}_\theta $.
Define
\begin{align*}
{\mathfrak W} (M)=&\{M'\in {\mathcal C}_\theta \,|\,M'\approx M\}.
\end{align*}
Let ${\mathcal W} (M)$ denote the following category with
$\mathrm{Ob}({\mathcal W} (M))={\mathfrak W} (M)$.
For each $M'\in {\mathfrak W} (M)$ such that $M'$ satisfies $(F_i)$
consider the reflection $s_{i,M'}\in \operatorname{Aut} (\Z^{\theta} )$,
$s_{i,M'}(\alpha _j)=\alpha_j-a^{M'}_{ij}\alpha_i$ for all $j\in {\mathbb I} $,
as a morphism
$M'\to {\mathcal R} _i(M')$.
The morphisms of ${\mathcal W} (M)$ are the compositions of the
morphisms $s_{i,M'}$, where $i\in {\mathbb I} $ and $M'\in {\mathfrak W} (M)$ satisfies
$(F_i)$. The category ${\mathcal W} (M)$ is called the \textit{Weyl groupoid of} $M$.
Let
\begin{align*}
\boldsymbol{\Delta }^{re} (M)=\{ w(\alpha _j)\,|\,w\in \Hom (M',M),M'\in {\mathfrak W} (M)\}
\subset \Z^{\theta}.
\end{align*} Following the notation in \cite[\S 5.1]{K},
$\boldsymbol{\Delta }^{re} (M)$ is called the
\textit{set of real roots of} $M$. \end{definition}
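To make Definition~\ref{def:rsys} concrete, consider the simplest non-trivial case (a standard illustration of Cartan type $A_2$, not referring to any particular family from the text): let $\theta =2$ and suppose every $M'\in {\mathfrak W} (M)$ satisfies $(F_1)$ and $(F_2)$ with $a^{M'}_{12}=a^{M'}_{21}=-1$. Then
\begin{align*}
s_{1,M'}(\alpha _1)&=-\alpha _1, & s_{1,M'}(\alpha _2)&=\alpha _1+\alpha _2,\\
s_{2,M'}(\alpha _1)&=\alpha _1+\alpha _2, & s_{2,M'}(\alpha _2)&=-\alpha _2,
\end{align*}
and composing these morphisms yields
\begin{align*}
\boldsymbol{\Delta }^{re} (M)=\{\pm \alpha _1,\,\pm \alpha _2,\,\pm (\alpha _1+\alpha _2)\},
\end{align*}
the root system of type $A_2$.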
\begin{obs}
Let $M\in {\mathcal C}_\theta $. Then the category ${\mathcal W} (M)$ is a connected
groupoid.
Indeed, if $i\in {\mathbb I} $ and $M'\in {\mathfrak W} (M)$ satisfies $(F_i)$,
then ${\mathcal R} _i(M')$ satisfies $(F_i)$, ${\mathcal R} _i({\mathcal R} _i(M'))
\simeq M'$ and $s_{i,{\mathcal R} _i(M')}s_{i,M'}=\id _{\Z^{\theta} }$
by Theorem~\ref{theo:main}.
Therefore the generating morphisms (and hence all morphisms)
of ${\mathcal W} (M)$ are invertible. Further, for any two $M',M''\in {\mathfrak W} (M)$
there is a morphism in $\Hom (M',M'')$ by the definition of ${\mathfrak W} (M)$. \end{obs}
\begin{obs}
If
$M'\in{\mathfrak W}(M)$, then the equation $$\dim{\mathcal B}(\oplus _{n\in {\mathbb I} }M_n) =\dim{\mathcal B}(\oplus _{n\in {\mathbb I} }M'_n)$$ holds by Corollary~\ref{cor:dimfin}. \end{obs}
\begin{obs} Assume that $M$ is of diagonal type,
that is, $\dim M_n=1$ for all $n\in {\mathbb I} $.
Let $M'\in {\mathfrak W}(M)$, $j\in {\mathbb I} $, $k\in \mathbb{N}$, $i_1,\dots ,i_k\in {\mathbb I} $,
$M^0=M',M^1,M^2,\dots ,M^k=M\in {\mathfrak W} (M)$, $M^l\simeq {\mathcal R}
_{i_{l+1}}(M^{l+1})$ for all $l\in \mathbb{N}_0$ with $l<k$,
and $\beta =s_{i_k,M^{k-1}}\cdots s_{i_2,M^1}s_{i_1,M^0}(\alpha _j)\in \boldsymbol{\Delta }^{re}
(M)$. Let $m_1,\dots
,m_\theta \in \mathbb{Z}$ such that $\beta =\sum _{n\in {\mathbb I} }m_n\alpha _n$.
Then $M'_j\simeq \otimes _{n\in {\mathbb I} }M_n^{\otimes m_n}$ in ${}^H_H\mathcal{YD} $,
where $M_n^{\otimes m_n}=(M_n^*)^{\otimes -m_n}$ if $m_n<0$.
This means that $M'_j$ depends only on $M$ and $\beta $, but not on the
particular choice of $i_1, \dots ,i_k$.
However, for more general Yetter-Drinfeld modules $M$ it is in general not
clear whether $M'_j$ can be recovered from $j$, $\beta $ and $M$.
\end{obs}
\begin{prop}\label{pr:finrsys}
Let $M\in {\mathcal C}_\theta $
and $W=M_1\oplus \cdots \oplus M_\theta $. Then
$\boldsymbol{\Delta }^{re} (M )\subset \operatorname{supp} {\mathcal B}(W)\#{\mathcal B} (W^*)$. In particular, if
${\mathcal B} (W) $ is finite-dimensional, then $\boldsymbol{\Delta }^{re} (M )$ is a finite subset of $\Z^{\theta} $. \end{prop}
\begin{proof} Clearly, $\operatorname{supp} W =\{\alpha _1,\dots ,\alpha _\theta \} \subset \operatorname{supp} {\mathcal B}(W)\#{\mathcal B} (W^*)$. Let $i\in {\mathbb I} $, $M '={\mathcal R}_i(M )$, and $W'=\oplus _{n\in {\mathbb I} }M'_n$. Then $$\operatorname{supp} W'\subset \operatorname{supp} {\mathcal B}(W')\#{\mathcal B}(W'{}^*) =s_{i,M}(\operatorname{supp} {\mathcal B}(W)\#{\mathcal B}(W^*))$$ by Corollary~\ref{co:WW'supp2}. Thus equation $s_{i,M'}s_{i,M} =\id $ gives that $s_{i,M'}(\alpha _j)\in \operatorname{supp} {\mathcal B}(W)\#{\mathcal B}(W^*)$ for all $j\in {\mathbb I} $. By iteration one obtains that $\boldsymbol{\Delta }^{re} (M )\subset \operatorname{supp} {\mathcal B}(W)\#{\mathcal B} (W^*)$. If $\dim {\mathcal B}(W)<\infty $ then the finiteness of $\boldsymbol{\Delta }^{re} (M )$ follows from the equations \begin{align*} \operatorname{supp} {\mathcal B} (W)\otimes {\mathcal B}(W^*)= &\operatorname{supp} {\mathcal B} (W)+\operatorname{supp} {\mathcal B}(W)^{\operatorname{gr-dual} }\\ =& \operatorname{supp} {\mathcal B} (W)-\operatorname{supp} {\mathcal B}(W), \end{align*} see \eqref{eq:BVgrdual}, and the fact that $\operatorname{supp} {\mathcal B} (W)$ is finite.
\end{proof}
\begin{lema}\label{exa:fatqls} Let $M\in {\mathcal C}_{\theta }$
and let $i,j\in {\mathbb I} $ such that $a^M_{ij} = 0$. Then $a^M_{ji} = 0$, and ${\mathcal B}(M_i\oplus M_j) \simeq {\mathcal B}(M_i){\otimes} {\mathcal B}(M_j)$ as graded vector spaces. \end{lema}
\begin{proof} Let $x\in M_i$, $y\in M_j$. Then \eqref{eq:adcxy} gives that \begin{equation}\label{eqn:fatqls} \Delta(\ad_c x(y)) = \ad_c x(y){\otimes} 1 + x{\otimes} y - c^2(x{\otimes} y) + 1{\otimes} \ad_c x (y). \end{equation}
Thus, $a_{i j}^M=0$ implies that $\ad_c x (y) = 0$. Hence $x{\otimes} y - c^2(x{\otimes} y) = 0$, that is, $(\id - c^2)(M_i{\otimes} M_j) = 0$. Then $(\id - c^2)c(M_j{\otimes} M_i)=0$, but $c$ is invertible, so that $(\id - c^2)(M_j{\otimes} M_i)=0$. Eq.~\eqref{eqn:fatqls} gives that $\ad _c x(y)$ is primitive in ${\mathcal B} (M_i\oplus M_j)$ for all $x\in M_j$, $y\in M_i$, hence zero. This yields $a^M_{ji}=0$. The last claim of the lemma is \cite[Thm.\,2.2]{G-jalg}. \end{proof}
\begin{lema}\label{lema:gen-cartan-matrix}
Let $M = ( M_j)_ {j\in {\mathbb I} }$ be an object in ${\mathcal C}_{\theta }$ which satisfies $(F_i)$ for all $i\in {\mathbb I} $.
Then $A = (a^M_{ij})$ is a generalized Cartan matrix. In particular, the subgroup
$${\mathcal W}_0(M) := \langle s_{i,M}\,|\,i\in {\mathbb I}\rangle$$ of $GL(\theta ,{\mathbb Z} )$ is isomorphic to the Weyl group of the Kac-Moody algebra $\g(A)$. \end{lema}
\begin{proof} The first claim follows from Lemma~\ref{exa:fatqls}. Let $(\h, \Pi, \Pi^{\vee})$ be a realization of $A$ \cite[\S 1.1]{K} and let $W$ be the Weyl group of $\g(A)$ \cite[\S 3.7]{K}. Then $W$ preserves the subspace $V$ of $\h^*$ generated by $\Pi$ and the morphism $W\to GL(V)$ is injective \cite[Ex. 3.6]{K}. Now $V \simeq \Z^{\theta}{\otimes}_{{\mathbb Z}} {\mathbb C}$ by \cite[(1.1.1)]{K} and the image of $W$ in $GL(V)$ coincides with ${\mathcal W}_0(M)$ by \cite[(1.1.2)]{K}. \end{proof}
It follows that ${\mathcal W}_0(M)$ is a Coxeter group \cite[Prop. 3.13]{K} but we do not need this fact in the sequel. The group ${\mathcal W}_0(M)$ is important in the study of Nichols algebras in the following special case.
\begin{definition}\label{defi:standard} We say that $M\in {\mathcal C}_{\theta }$
is \emph{standard} if $M'$ satisfies Condition~$(F_i)$
and $a^{M'}_{i j}=a^M_{i j}$ for all $M'\in {\mathfrak W} (M)$ and
$i,j\in {\mathbb I} $.
\end{definition}
\begin{obs}\label{rem:standard} In the following two special cases the family $M\in {\mathcal C}_\theta $ is standard.
1. Let $H$ be the group algebra of an abelian group $\Gamma $ and $M$ a
family of 1-dimensional Yetter-Drinfeld modules $\Bbbk v_i=M_i$
over $H$, where $i\in {\mathbb I} $. Let $\delta (v_i)=g_i{\otimes} v_i$ and
$g\cdot v_i=\chi _i(g)v_i$ denote the coaction and action of $H$,
respectively, where $g_i\in \Gamma $, $\chi _i\in \widehat{\Gamma }$,
$i\in {\mathbb I} $. Define $q_{i j}=\chi _j(g_i)\in \Bbbk $ for $i,j\in {\mathbb I} $.
If $M$ is of Cartan type, that is, for all $i\not=j$ there exist
$a_{i j}\in {\mathbb Z}$ such that $0\le -a_{i j}<\mathrm{ord}\,q_{i i}$ and
$q_{i j}q_{j i}=q_{i i}^{a_{i j}}$, then $M$ is standard. This can be seen
from \cite[Lemma\,1(ii), Eq.\,(24)]{H3}.
2. Assume that $M\in {\mathcal C}_\theta $ satisfies Condition $(F_i)$
and ${\mathcal R} _i(M)_{s_{i,M}(\alpha _j)}\simeq M_j$ in ${}^H_H\mathcal{YD} $
for all $i,j\in {\mathbb I} $. Then $M$ is standard by
definition of $a^M_{i j}$. \end{obs}
\begin{lema}\label{le:wrsys}
Assume that $M\in {\mathcal C}_\theta $ is standard. Then
$$\boldsymbol{\Delta }^{re} (M)=\{w(\alpha _j)\,|\,w\in {\mathcal W}_0(M) ,j\in {\mathbb I} \}.$$
In particular, $w(\boldsymbol{\Delta }^{re} (M))=\boldsymbol{\Delta }^{re} (M)$ for all $w\in {\mathcal W}_0(M) $. \end{lema}
\begin{proof} This follows immediately from the definitions of $\boldsymbol{\Delta }^{re} (M)$ and ${\mathcal W}_0(M) $ and the relations $a^{M'}_{ij}=a^M_{ij}$ for all $i,j\in {\mathbb I} $ and $M'\in {\mathfrak W} (M)$. \end{proof}
\begin{theorem}\label{theorem:gpd-nichols-finite}
Let $M=(M_i)_{i\in {\mathbb I} }\in {\mathcal C}_\theta $ and
$W=\oplus _{i\in {\mathbb I} }M_i$.
If $M$ is standard and $\dim {\mathcal B}(W)<\infty $, then the generalized Cartan matrix $(a^M_{ij})_{i,j\in {\mathbb I} }$ is of finite type. \end{theorem}
\begin{proof} Since $\dim {\mathcal B}(W)<\infty $, the set $\boldsymbol{\Delta }^{re} (M)$ is finite by Prop.~\ref{pr:finrsys}. Since $M$ is standard, $\boldsymbol{\Delta }^{re} (M)$ is stable under the action of ${\mathcal W}_0(M) $ by Lemma~\ref{le:wrsys}. The corresponding permutation representation ${\mathcal W}_0(M) \to S(\boldsymbol{\Delta }^{re} (M))$ is injective, since ${\mathcal W}_0(M) \subset GL(\theta ,{\mathbb Z} )$ and $\boldsymbol{\Delta }^{re} (M)$ contains the standard basis of ${\mathbb Z} ^\theta $. Therefore ${\mathcal W}_0(M) $ is finite. Thus the claim follows from Lemma~\ref{lema:gen-cartan-matrix} and \cite[Prop.\,4.9]{K}. \end{proof}
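As a small computational illustration of the finiteness criterion (our own sketch, not part of the original argument; the function names below are ours): for $\theta =2$ and a symmetric generalized Cartan matrix with off-diagonal entry $a=a^M_{12}=a^M_{21}$, the reflections act on ${\mathbb Z}^2$ by integer matrices, and ${\mathcal W}_0(M)=\langle s_{1,M},s_{2,M}\rangle$ is finite precisely when $s_{1,M}s_{2,M}$ has finite order, i.e.\ when $a\in \{0,-1\}$.

```python
# Our own sketch (illustrative names): for theta = 2 and the symmetric
# generalized Cartan matrix ((2, a), (a, 2)), the group W_0 = <s_1, s_2>
# inside GL(2, Z) is finite iff s_1 s_2 has finite order.

def mat_mul(A, B):
    """Multiply two 2x2 integer matrices given as nested tuples."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2))
                       for j in range(2)) for i in range(2))

I2 = ((1, 0), (0, 1))

def order_of(M, bound=1000):
    """Order of M in GL(2, Z), or None if it exceeds `bound`."""
    P = M
    for k in range(1, bound + 1):
        if P == I2:
            return k
        P = mat_mul(P, M)
    return None

def coxeter_order(a):
    """Order of s_1 s_2, where s_i(alpha_j) = alpha_j - a_{ij} alpha_i."""
    s1 = ((-1, -a), (0, 1))  # columns: s_1(a_1) = -a_1, s_1(a_2) = a_2 - a*a_1
    s2 = ((1, 0), (-a, -1))  # columns: s_2(a_1) = a_1 - a*a_2, s_2(a_2) = -a_2
    return order_of(mat_mul(s1, s2))

print(coxeter_order(0), coxeter_order(-1), coxeter_order(-2))
# finite orders for a = 0, -1 (so W_0 is dihedral of order 4 resp. 6);
# no finite order is found already for a = -2
```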
\section{Applications}\label{section:appl}
\subsection{Hopf algebras with few finite-dimensional{} Nichols algebras}
\begin{lema}\label{lema:one-irred-finite} Let $H$ be a Hopf algebra. Assume that, up to isomorphism, there is exactly one finite-dimensional{} irreducible Yetter-Drinfeld module $L\in {}^H_H\mathcal{YD} $ such that $\dim {\mathcal B}(L)< \infty$. Let $M = (M_1,M_2)\in {\mathcal C}_2$ such that $M_1\simeq M_2\simeq L$.
\begin{enumerate}
\item[(i)]
If $M$ satisfies $(F_1)$ then $M$ satisfies $(F_2)$
and $a^M_{12} = a^M_{21}$.
If additionally $\dim {\mathcal B}(L^2)<\infty $ then $M$ is standard.
\item[(ii)] If
$M$ does not fulfill $(F_1)$ or if $a^M_{12} \leq -2$,
then $\dim {\mathcal B}(L^n) = \infty$ for all $n\geq 2$.
\item[(iii)] If $a^M_{12} =0$,
then $\dim {\mathcal B}(L^n) = (\dim {\mathcal B}(L))^n$ for all $n\in {\mathbb N}$.
\item[(iv)] If $a^M_{12} = -1$, then $\dim {\mathcal B}(L^n) = \infty$
for all $n\geq 3$. \end{enumerate} \end{lema}
Note that if $a^M_{12}=-1$ then Lemma~\ref{lema:one-irred-finite} gives no information about $\dim {\mathcal B}(L^2)$.
\begin{proof} If $M$ does not fulfill Condition~$(F_1)$ then $\dim {\mathcal B}(L^2) = \infty$. Otherwise $a^M_{12}\in {\mathbb Z}_{\le 0}$, and $a^M_{12} = a^M_{21}$ by symmetry. Moreover, if $\dim {\mathcal B} (L^2)<\infty $, then for $i\in \{1,2\}$ the Nichols algebra of ${\mathcal R}_i(M)_1\oplus {\mathcal R}_i(M)_2$ is also finite-dimensional{} by Corollary~\ref{cor:dimfin}, and hence ${\mathcal R}_i(M)_j\simeq L$ for $j\in \{1,2\}$. Therefore $M$ is standard, and (i) is proven.
The generalized Cartan matrix $ \begin{pmatrix} 2 & a^M_{12} \\ a^M_{12} & 2 \end{pmatrix}$ is of finite type iff $a^M_{12} = 0$ or $a^M_{12}=-1$. Hence (ii) follows from Theorem \ref{theorem:gpd-nichols-finite}. Now (iii) follows from \cite{G-jalg}, see Lemma~\ref{exa:fatqls}. If $a^M_{12} = -1$, then the generalized Cartan matrix of $L^3$ has Dynkin diagram $A_2^{(1)}$; hence $\dim {\mathcal B}(L^3) = \infty$, and \emph{a fortiori} the same holds for $L^n$ for $n\ge 3$. This shows (iv). \end{proof}
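For the reader's convenience we record the elementary finite-type criterion used in the proof: a symmetric $2\times 2$ generalized Cartan matrix is of finite type if and only if its determinant is positive, and since $a^M_{12}\in {\mathbb Z}_{\le 0}$,
\begin{align*}
\det \begin{pmatrix} 2 & a^M_{12} \\ a^M_{12} & 2 \end{pmatrix}
=4-(a^M_{12})^2>0 \iff a^M_{12}\in \{0,-1\};
\end{align*}
for $a^M_{12}=-2$ the determinant vanishes and the matrix is of affine type $A_1^{(1)}$, while for $a^M_{12}\le -3$ it is of indefinite type.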
\begin{theorem}\label{theo:one-irred-finite} Let $H$ be a Hopf algebra such that the category of finite-dimensional{} Yetter-Drinfeld modules is semisimple. Assume that up to isomorphism there is exactly one irreducible $L\in {}^H_H\mathcal{YD}$ such that $\dim {\mathcal B}(L)< \infty$. Let $M = (M_1,M_2)\in {\mathcal C}_2$, where $M_1=M_2=L$. If $M$ satisfies $(F_1)$ then $M$ satisfies $(F_2)$ and $ a^M_{12} = a^M_{21}\in {\mathbb Z}_{\le 0}$.
\begin{enumerate}
\item[(i)] If $a^M_{12} = - \infty$ or $a^M_{12} \leq -2$,
then $L$ is the only Yetter-Drinfeld module over $H$ with
finite-dimensional{} Nichols algebra.
\item[(ii)] If $a^M_{12} =0$, then
a Yetter-Drinfeld module $W$ over $H$ has finite-dimensional{} Nichols algebra if
and only if $W\simeq L^n$ for some $n\in {\mathbb N}$.
Furthermore,
$\dim {\mathcal B}(L^n) = (\dim {\mathcal B}(L))^n$.
\item[(iii)] If $a^M_{12} = -1$, then the only possible
Yetter-Drinfeld modules over $H$ with finite-dimensional{}
Nichols algebra are $L$ and (perhaps) $L^2$. \end{enumerate} \end{theorem}
\begin{proof} By hypothesis, the only Yetter-Drinfeld modules that could have a finite-dimensional{} Nichols algebra are those of the form $L^n$, $n\in {\mathbb N}$. The theorem then follows from Lemma~\ref{lema:one-irred-finite}. \end{proof}
Now we state another general result that can be obtained from Theorem \ref{theorem:gpd-nichols-finite}. We shall use it when considering Nichols algebras over $\sk$.
\begin{lema}\label{lema:one-twocopies-finite}
Let $M_1, \dots, M_s\in {}^H_H\mathcal{YD} $,
where $s\in {\mathbb N} $, be a maximal set of pairwise
nonisomorphic irreducible Yetter-Drinfeld modules, such that
$\dim {\mathcal B}(M_i)< \infty$ for $1\le i \le s$.
Assume that there exist $i, j \in \{1, \dots, s\}$
(the possibility $i=j$ is not excluded) such that \begin{enumerate}
\item[(i)] $\dim {\mathcal B}(M_i\oplus M_j) < \infty$.
\item[(ii)] If $\{\ell,m\}\neq \{i,j\}$, then
$\dim {\mathcal B}(M_\ell\oplus M_m) = \infty$.
\item[(iii)] $M_i \not\simeq M_j^*$. \end{enumerate} Let $M = (M_i, M_j)\in {\mathcal C}_2$.
Then $M$ is standard. \end{lema}
\begin{proof} By (i) the Nichols algebra of $(M_i\oplus M_j)^*\simeq M_i^*\oplus M_j^*$ is finite-dimensional. By (ii) one has $M_i^*\oplus M_j^*\simeq M_i\oplus M_j$, and (iii) implies that $M_i^*\simeq M_i$ and $M_j^*\simeq M_j$. Thus it suffices to consider the reflection ${\mathcal R} _i$. By (i) and Corollary~\ref{cor:dimfin}, $M '=(M'_1,M'_2):={\mathcal R} _i(M )$ is well-defined and $\dim {\mathcal B}(M'_1\oplus M'_2)<\infty $. By (ii) one has $M'_1\oplus M'_2\simeq M_i\oplus M_j$. Since $M'_1\simeq M_i^*\simeq M_i$ by the beginning of the proof, one has $M'_2\simeq M_j$. Hence $M$ is standard by Remark~\ref{rem:standard}. \end{proof}
\subsection{Pointed Hopf algebras with group $\st$} In the rest of this section, it is assumed that the base field is $\Bbbk ={\mathbb C} $. Let $G$ be a finite non-abelian group. We shall use the rack notation $x\triangleright y := xyx^{-1}$, $x,y\in G$. Since the group algebra ${\mathbb C} G$ is semisimple, the corresponding category ${}^{\Cc G}_{\Cc G}\mathcal{YD}$ of Yetter-Drinfeld modules is semisimple. It is well-known that the irreducible objects in ${}^{\Cc G}_{\Cc G}\mathcal{YD}$ are parametrized by pairs $({\mathcal O}, \rho)$, where ${\mathcal O}$ is a conjugacy class of $G$ and $\rho$ is an irreducible representation of the centralizer $G^s$ of a fixed $s\in {\mathcal O}$. Let $M({\mathcal O}, \rho)$ denote the irreducible Yetter-Drinfeld module corresponding to $({\mathcal O}, \rho)$ and let ${\mathcal B}({\mathcal O}, \rho)$ be its Nichols algebra. Then $M({\mathcal O}, \rho)$ is the induced module $\operatorname{Ind}_{G^s}^G \rho$, and the comodule structure is given by the following rule. Let $g_1 = s$, \dots, $g_{t}$ be an enumeration of ${\mathcal O}$ and let $x_i\in G$ be such that $x_i \triangleright s = g_i$ for all $1\le i \le t$. Then $M({\mathcal O}, \rho) = \oplus_{1\le i \le t} x_i\otimes V$, where $V$ is the vector space affording $\rho$. If $x_iv := x_i\otimes v \in M({\mathcal O},\rho)$, then $\delta(x_iv) = g_i{\otimes} x_iv$, for $1\le i \le t$, $v\in V$. The braiding in $M({\mathcal O}, \rho)$ is given by $c(x_iv\otimes x_jw) = g_i\cdot(x_jw)\otimes x_iv = x_h\,\rho(\gamma)(w) \otimes x_iv$ for any $1\le i,j\le t$, $v,w\in V$, where $g_ix_j = x_h\gamma$ for unique $h$, $1\le h \le t$ and $\gamma \in G^s$.
\medbreak If $G = \sn$, then ${\mathcal O}^n_2$ is the conjugacy class of the transpositions and $\sgn$ is the restriction of the sign representation to the isotropy group.
\medbreak Before stating our first classification result, we need to recall the construction of some Hopf algebras from \cite{AG-ama}.
\begin{definition}\label{defi:abofet} Let $\lambda\in \Bbbk$. Let ${{\mathcal A}(\st, {\mathcal O}_2^3, \lambda)}$ be the algebra presented by generators $e_t$, $t\in T:= \{(12), (23)\}$, and $a_\sigma$, $\sigma \in {\mathcal O}_2^3$; with relations \begin{align} \label{relgpo} e_t e_se_{t} &= e_se_{t}e_{s}, \quad e_t^2 = 1, \quad s\neq t\in T; \\ \label{relsym4} e_t a_\sigma &= -a_{t\sigma t} e_t \hspace{55pt} t \in T,\, \sigma\in {\mathcal O}_2^3; \\\label{relsym41} a_\sigma^2 &= 0, \hspace{80pt} \sigma\in {\mathcal O}_2^3; \\\label{relsym43} a_{(12)} a_{(23)} &+ a_{(23)} a_{(13)} +a_{(13)} a_{(12)} = \lambda (1- e_{(12)} e_{(23)}); \\\label{relsym43bis} a_{(12)} a_{(13)} &+ a_{(13)} a_{(23)} +a_{(23)} a_{(12)} = \lambda (1- e_{(23)} e_{(12)}). \end{align} Set $e_{(13)} = e_{(12)}e_{(23)}e_{(12)}$. Then ${{\mathcal A}(\st, {\mathcal O}_2^3, \lambda)}$ is a Hopf algebra of dimension $72$ with comultiplication determined by \begin{equation}\label{eqn:abofet-comult} \Delta(a_{\sigma}) = a_{\sigma}{\otimes} 1 + e_{\sigma}{\otimes} a_{\sigma}, \quad \Delta(e_{t}) = e_{t}{\otimes} e_{t}, \qquad \sigma\in {\mathcal O}_2^3, t\in T. \end{equation}
Observe that, for any $c\in {\mathbb C}^{\times}$, the Hopf algebra ${{\mathcal A}(\st, {\mathcal O}_2^3, \lambda)}$ is isomorphic to ${\mathcal A}(\st, {\mathcal O}_2^3, \lambda c^2)$ (via $a_{\sigma} \mapsto c^{-1}a'_{\sigma}$, where $a'_{\sigma}$ are the generators of the latter). Also ${\mathcal A}(\st, {\mathcal O}_2^3, 0)\simeq{\mathcal B}({\mathcal O}_2^3, \sgn) \#\Bbbk \st$. But ${\mathcal A}(\st, {\mathcal O}_2^3, 0) \not \simeq {\mathcal A}(\st, {\mathcal O}_2^3, 1)$ since the former is self-dual but the latter is not. \end{definition}
\begin{theorem}\label{theo:s3}
Let $H$ be a finite-dimensional{} pointed Hopf algebra with $G(H)\simeq \st$. Then either $H\simeq \Bbbk \st$, or $H\simeq {\mathcal B}({\mathcal O}_2^3, \sgn) \#\Bbbk \st$ or $H\simeq {\mathcal A}(\st, {\mathcal O}_2^3, 1)$. \end{theorem}
\begin{proof} It is known that $\dim {\mathcal B}({\mathcal O}_2^3, \sgn) = 12$ \cite{MS}; it is also known that $M({\mathcal O}_2^3, \sgn)$ is, up to isomorphism, the only irreducible Yetter-Drinfeld module over $\st$ with finite-dimensional{} Nichols algebra \cite{AZ}. We can then apply Theorem \ref{theo:one-irred-finite}. Let $M = (M({\mathcal O}_2^3, \sgn), M({\mathcal O}_2^3, \sgn))$. Assume that $a^M_{12}\in {\mathbb Z}_{\le 0}$, notation as above. We claim that $-a^M_{12}\ge 2$.
Let $\sigma_1 = (12)$, $\sigma_2 = (23)$, $\sigma_3 = (13)\in \st$. The Yetter-Drinfeld module $M({\mathcal O}_2^3, \sgn) \oplus M({\mathcal O}_2^3, \sgn)$ has a basis $x_1$, $x_2$, $x_3$ (from the first copy), $y_1$, $y_2$, $y_3$ (from the second copy) with \begin{equation}\label{eqn:oc32} \delta(x_i) = \sigma_i \otimes x_i, \, \delta(y_i) = \sigma_i \otimes y_i, \, t\cdot x_i = \sgn(t)x_{t \triangleright i}, \, t\cdot y_i = \sgn(t)y_{t \triangleright i}, \end{equation} for $1\le i \le 3$, $ t \in \st$. Here $\sigma_{t \triangleright i} := t\triangleright \sigma_i = t\sigma_it^{-1}$. Also, $j\triangleright i$ means $\sigma_{j \triangleright i} := \sigma_j \triangleright \sigma_i$. The braiding on the basis vectors is given by \begin{align*} c(x_j\otimes x_i) &= - x_{j\triangleright i}\otimes x_j,&\, c(y_j\otimes y_i) &= - y_{j\triangleright i} \otimes y_{j},\\ c(x_j\otimes y_i) &= - y_{j\triangleright i} \otimes x_j, &\, c(y_j\otimes x_i) &= - x_{j\triangleright i} \otimes y_j. \end{align*} To prove our claim, we need to find $i,j,k$ such that $\ad_c(x_i)(\ad_c(x_j) (y_k))\neq 0$. Let $\partial_{x_i}$, $\partial_{y_i}$ be the skew-derivations as in \cite{MS}. Now \begin{align*} \ad_c(x_2)(\ad_c(x_1) (y_2)) &= \ad_c(x_2)(x_1y_2 + y_3x_1) \\ &= x_2x_1y_2 + x_2y_3x_1 - x_3y_2x_2 - y_1x_3x_2, \end{align*} hence $\partial_{x_3} \partial_{y_1}\left(\ad_c(x_2)(\ad_c(x_1) (y_2))\right) = \partial_{x_3}\left(-x_2x_3\right) = -x_2 \neq 0$, and the claim is proved. Thus $\dim {\mathcal B}(M({\mathcal O}_2^3, \sgn) \oplus M({\mathcal O}_2^3, \sgn)) = \infty$ by Theorem \ref{theo:one-irred-finite}, and ${\mathcal B}({\mathcal O}_2^3, \sgn)$ is the only finite-dimensional{} Nichols algebra over $\st$.
\medbreak Let $H\not\simeq \Bbbk \st$ be a finite-dimensional{} pointed Hopf algebra with $G(H)\simeq \st$. Then the infinitesimal braiding of $H$, see \cite{AS-cambr}, is isomorphic to $M({\mathcal O}_2^3, \sgn)$. Hence $H$ is generated as an algebra by group-like and skew-primitive elements \cite[Thm.\,2.1]{AG-ama} and the theorem follows from \cite[Thm.\,3.8]{AG-ama}. \end{proof}
\subsection{Nichols algebras over the group $\sk$}
Let us recall the general terminology for $\mathbb S_n$. If $\pi=(12)\in {\mathcal O}_2^n$, then the isotropy subgroup is $\sn^{\pi}\simeq {\mathbb Z}_2\times \mathbb S_{n-2}$. Any irreducible representation of $\sn^{\pi}$ is of the form $\chi{\otimes} \rho$, where $\chi\in \widehat{{\mathbb Z}_2}$, $\rho\in \widehat{\mathbb S_{n-2}}$. If $\chi = \varepsilon $, then $\chi{\otimes} \rho(\pi) = 1$ and $\dim {\mathcal B}({\mathcal O}_2^n, \varepsilon {\otimes} \rho) = \infty$. Thus, we are actually interested in the Nichols algebras ${\mathcal B}({\mathcal O}_2^n, \sgn{\otimes} \rho)$. If $\rho = \sgn$, then $\sgn{\otimes} \rho$ is just the restriction to $\sn^\pi$ of the sign representation of $\sn$; we denote in this case ${\mathcal B}({\mathcal O}_2^n, \sgn) = {\mathcal B}({\mathcal O}_2^n, \sgn{\otimes} \sgn)$.
\medbreak The proof of Theorem \ref{theo:s3} gives the following result.
\begin{lema}\label{lema:o2n} The Nichols algebras ${\mathcal B}\big(M({\mathcal O}_2^n, \sgn{\otimes} \rho)\oplus M({\mathcal O}_2^n, \sgn{\otimes} \rho')\big)$, $n\ge 4$, $\rho, \rho'\in \widehat{\mathbb S_{n-2}}$, have infinite dimension. \end{lema}
\begin{proof} The braided vector space $M({\mathcal O}_2^3, \sgn) \oplus M({\mathcal O}_2^3, \sgn)$ is a braided subspace of any of these braided vector spaces; since its Nichols algebra is infinite-dimensional{} by the proof of Theorem \ref{theo:s3}, the Nichols algebras above are infinite-dimensional{} as well. \end{proof}
The isotropy group of the 4-cycle $(1234)$ in $\sk$ is the cyclic group $\langle(1234)\rangle$. Let $\chi_-$ be its character defined by $\chi_-(1234) = -1$. Let ${\mathcal O}_4^4$ be the conjugacy class of 4-cycles in $\sk$.
\begin{theorem}\label{thm:s4} The only Nichols algebras of Yetter-Drinfeld modules over $\sk$ with finite dimension, up to isomorphism, are those in the following list. All of them have dimension 576.
\begin{enumerate}
\item ${\mathcal B}({\mathcal O}_2^4, \sgn)$.
\item ${\mathcal B}({\mathcal O}_2^4, \sgn{\otimes} \varepsilon )$.
\item ${\mathcal B}({\mathcal O}_4^4, \chi_{-})$. \end{enumerate} \end{theorem}
\begin{proof} The Nichols algebras in the list have the claimed dimension by \cite{FK, MS, AG-adv}, respectively. These are the only Nichols algebras of irreducible Yetter-Drinfeld modules over $\sk$ with finite dimension by \cite{AZ}.
It remains to show: If $M$, $M'$ are two of $M({\mathcal O}_2^4, \sgn)$, $M({\mathcal O}_2^4, \sgn{\otimes} \varepsilon )$, $M({\mathcal O}_4^4, \chi_{-})$, then $\dim {\mathcal B}(M\oplus M') = \infty$. Some possibilities are covered by Lemma~\ref{lema:o2n}. The rest are: \begin{enumerate}
\item[(i)] ${\mathcal B}(M({\mathcal O}_4^4, \chi_{-}) \oplus M({\mathcal O}_4^4, \chi_{-}))$.
\item[(ii)] ${\mathcal B}(M({\mathcal O}_2^4, \sgn)\oplus M({\mathcal O}_4^4, \chi_{-}))$.
\item[(iii)] ${\mathcal B}(M({\mathcal O}_2^4, \sgn{\otimes} \varepsilon )\oplus M({\mathcal O}_4^4, \chi_{-}))$. \end{enumerate}
(i). We claim that there is a surjective rack homomorphism ${\mathcal O}_4^4 \to {\mathcal O}_2^3$ that induces a surjective morphism of braided vector spaces $M({\mathcal O}_4^4, \chi_{-}) \oplus M({\mathcal O}_4^4, \chi_{-}) \to M({\mathcal O}_2^3, \sgn) \oplus M({\mathcal O}_2^3, \sgn)$; since the Nichols algebra of the latter is infinite-dimensional{} by the proof of Theorem \ref{theo:s3}, $\dim{\mathcal B}(M({\mathcal O}_4^4, \chi_{-}) \oplus M({\mathcal O}_4^4, \chi_{-})) = \infty$ too. Let us now verify the claim. We enumerate the elements of the orbit ${\mathcal O}_4^4$ as follows: \begin{align*} \tau_1 &= (1234), &\tau_3 &= (1243), &\tau_5 &= (1324), \\ \tau_2 &= (1432) = \tau_1^{-1}, &\tau_4 &= (1342) = \tau_3^{-1}, &\tau_6 &= (1423) = \tau_5^{-1}; \end{align*} set accordingly \begin{align*} h_1 &= \tau_1, &h_2 &= (24), &h_3 &= \tau_6, &h_4 &= \tau_5, &h_5 &= \tau_3, & \quad h_6 &= \tau_4; \end{align*} so that $h_i\triangleright \tau_1 = \tau_i$, $1\le i \le 6$. The Yetter-Drinfeld module $M({\mathcal O}_4^4, \chi_{-}) \oplus M({\mathcal O}_4^4, \chi_{-})$ has a basis $u_1, \dots, u_6$ (from the first copy), $w_1, \dots, w_6$ (from the second copy) with \begin{equation}\begin{aligned}\label{eqn:deltaoc44} \delta(u_i) &= \tau_i \otimes u_i, & t\cdot u_i &= \chi_{-}(\widetilde t)u_{t \triangleright i}, \\ \delta(w_i) &= \tau_i \otimes w_i, & t\cdot w_i &= \chi_{-}(\widetilde t)w_{t \triangleright i}, \end{aligned}\end{equation} for $1\le i \le 6$, $ t \in \sk$. Here $t\triangleright i$ and $\widetilde t \in \sk^{\tau_1} = \langle\tau_1\rangle$ are determined by $t h_i = h_{t \triangleright i}\widetilde t$. Let now \begin{align}\label{eqn:defIa} I_3 &= \{1,2\}, & I_2 &= \{3,4\}, & I_1 &= \{5,6\}. \end{align} Let $a,b\in \{1,2,3\}$, $i\in I_a$, $j \in I_b$. 
If $a=b$, then the braiding in the corresponding vectors of the basis is \begin{align*} c(u_i\otimes u_{j}) &= - u_{j}\otimes u_{i}, & c(u_i\otimes w_{j}) &= - w_{j}\otimes u_{i}, \\ c(w_i\otimes w_{j}) &= - w_{j}\otimes w_{i}, & c(w_i\otimes u_{j}) &= - u_{j}\otimes w_{i}; \end{align*} and if $a\neq b$, then for some $\ell\in I_c$, where $c\neq a,b$, one has \begin{align*}c(u_i\otimes u_{j}) &= - u_{\ell}\otimes u_{i}, & c(u_i\otimes w_{j}) &= - w_{\ell}\otimes u_{i}, \\ c(w_i\otimes w_{j}) &= - w_{\ell}\otimes w_{i}, & c(w_i\otimes u_{j}) &= - u_{\ell}\otimes w_{i}. \end{align*}
Thus, the map $\pi: M({\mathcal O}_4^4, \chi_{-}) \oplus M({\mathcal O}_4^4, \chi_{-}) \to M({\mathcal O}_2^3, \sgn) \oplus M({\mathcal O}_2^3, \sgn)$ given by $\pi(u_i) = x_a$, $\pi(w_i) = y_a$, for $i\in I_a$, $a=1,2,3$, preserves the braiding. This proves the claim.
(ii). We claim that there is a surjective morphism of braided vector spaces $ M({\mathcal O}_2^4, \sgn)\oplus M({\mathcal O}_4^4, \chi_{-}) \to M({\mathcal O}_2^3, \sgn) \oplus M({\mathcal O}_2^3, \sgn)$. Again, this implies that the Nichols algebra in (ii) is infinite-dimensional. Let us check the claim. We enumerate the elements of the orbit ${\mathcal O}_2^4$ as follows: \begin{align*} \sigma_1 &= (12), &\sigma_2 &= (23), &\sigma_3 &= (13), & \sigma_4 &= (14), &\sigma_5 &= (24), &\sigma_6 &= (34); \end{align*} set accordingly \begin{align*} g_1 &= \sigma_1, &g_2 &= \sigma_3, &g_3 &= \sigma_2, &g_4 &= \sigma_5, &g_5 &= \sigma_4, & \quad g_6 &= (1324); \end{align*} so that $g_i\triangleright \sigma_1 = \sigma_i$, $1\le i \le 6$. Let $\tau_i$ and $h_i$, $1\le i \le 6$, be as in the previous part of the proof. The Yetter-Drinfeld module $M({\mathcal O}_2^4, \sgn) \oplus M({\mathcal O}_4^4, \chi_{-})$ has a basis $z_1, \dots, z_6$ (from the first summand), $w_1, \dots, w_6$ (from the second summand) with \begin{equation}\begin{aligned}\label{eqn:deltaoc42} \delta(z_i) &= \sigma_i \otimes z_i, & t\cdot z_i &= \sgn (t')z_{t \triangleright i}, \\ \delta(w_i) &= \tau_i \otimes w_i, & t\cdot w_i &= \chi_{-}(\widetilde t)w_{t \triangleright i}, \end{aligned}\end{equation} for $1\le i \le 6$, $ t \in \sk$. Here, in the first line $t\triangleright i$ and $t' \in \sk^{\sigma_1}$ are determined by $t g_i = g_{t \triangleright i}t'$; and in the second line, $t\triangleright i$ and $\widetilde t \in \sk^{\tau_1} = \langle\tau_1\rangle$ are determined by $t h_i = h_{t \triangleright i}\widetilde t$. Set $ t_1 :=\sigma_1$, $t_2 :=\sigma_6$, so that $\sk^{\sigma_1} = \langle t_1, t_2 \rangle$.
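The enumerations $h_i\triangleright \tau _1=\tau _i$ and $g_i\triangleright \sigma _1=\sigma _i$ above, as well as the entries of Table~\ref{tab:1}, are easy to check mechanically. The following sketch (our own verification aid with illustrative names, not part of the paper) does so with permutations of $\{1,\dots ,4\}$ stored as $0$-indexed tuples.

```python
# Our own checking aid (illustrative names): verifies the enumerations
# h_i > tau_1 = tau_i and g_i > sigma_1 = sigma_i used in parts (i), (ii),
# and spot-checks two entries of Table 1.

def mul(p, q):
    """Composition (p*q)(k) = p(q(k))."""
    return tuple(p[q[k]] for k in range(4))

def inv(p):
    """Inverse permutation."""
    r = [0] * 4
    for k, v in enumerate(p):
        r[v] = k
    return tuple(r)

def conj(x, y):
    """Rack operation x > y = x y x^{-1}."""
    return mul(mul(x, y), inv(x))

def cyc(*c):
    """Permutation from a single cycle in 1-indexed notation."""
    p = list(range(4))
    for a, b in zip(c, c[1:] + c[:1]):
        p[a - 1] = b - 1
    return tuple(p)

tau = [cyc(1,2,3,4), cyc(1,4,3,2), cyc(1,2,4,3),
       cyc(1,3,4,2), cyc(1,3,2,4), cyc(1,4,2,3)]           # tau_1,...,tau_6
h = [tau[0], cyc(2,4), tau[5], tau[4], tau[2], tau[3]]      # h_1,...,h_6
sigma = [cyc(1,2), cyc(2,3), cyc(1,3),
         cyc(1,4), cyc(2,4), cyc(3,4)]                      # sigma_1,...,sigma_6
g = [sigma[0], sigma[2], sigma[1], sigma[4], sigma[3], cyc(1,3,2,4)]
t1, t2 = sigma[0], sigma[5]      # generators of the centralizer of (12)

assert all(conj(h[i], tau[0]) == tau[i] for i in range(6))
assert all(conj(g[i], sigma[0]) == sigma[i] for i in range(6))
# two sample entries of Table 1:
assert mul(sigma[0], h[0]) == mul(h[3], inv(tau[0]))  # sigma_1 h_1 = h_4 tau_1^{-1}
assert mul(sigma[1], g[3]) == mul(g[3], t2)           # sigma_2 g_4 = g_4 t_2
print("enumerations and sample Table 1 entries verified")
```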
\medbreak Let now $I_a$ be as in \eqref{eqn:defIa} and let $J_1 = \{1, 6\}$, $J_2 = \{2,4\}$, $J_3 = \{3,5\}$. Let $a,b, c\in \{1,2,3\}$ such that $\sigma_a\triangleright \sigma_b = \sigma_c$. Let $i\in I_a$, $j \in I_b$. Then there exist $k\in I_c$, $\ell,m\in J_c$, $\epsilon \in \{\pm 1\}$, $p,q\in \{1,2\}$ such that \begin{align*} \sigma_ih_j &= h_k\tau_1^{\epsilon},&\sigma_i g_j &= g_\ell t_p, &\tau_i g_j &= g_m t_q; \end{align*} see Table \ref{tab:1}.
\begin{table}[ht] \caption{Multiplication in $\sk$.}\label{tab:1} \begin{center}
\begin{tabular}{r|cccccc}
$\cdot $ & $h_1$ & $h_2$ & $h_3$ & $h_4$ & $h_5$ & $h_6$\\
\hline
$\sigma _1$ & $h_4\tau_1^{-1}$ & $h_3\tau_1^{-1}$ & $h_2\tau _1$ &
$h_1\tau_1$ & $h_6\tau_1$ & $h_5\tau _1^{-1}$ \\
$\sigma_2$ & $h_5\tau_1^{-1}$ & $h_6\tau_1$ & $h_4\tau _1$ &
$h_3\tau_1^{-1}$ & $h_1\tau_1$ & $h_2\tau _1^{-1}$ \\
$\sigma_3$ & $h_2\tau_1$ & $h_1\tau _1$ & $h_5\tau _1^{-1}$ &
$h_6\tau_1$ & $h_3\tau_1^{-1}$ & $h_4\tau _1^{-1}$ \\
$\sigma _4$ & $h_6\tau_1^{-1}$ & $h_5\tau_1$ & $h_4\tau _1^{-1}$ &
$h_3\tau_1$ & $h_2\tau_1^{-1}$ & $h_1\tau _1$ \\
$\sigma _5$ & $h_2\tau_1$ & $h_1\tau_1^{-1}$ & $h_6\tau _1^{-1}$ &
$h_5\tau_1^{-1}$ & $h_4\tau_1$ & $h_3\tau _1$ \\
$\sigma _6$ & $h_3\tau_1^{-1}$ & $h_4\tau_1^{-1}$ & $h_1\tau _1$ &
$h_2\tau_1$ & $h_6\tau_1^{-1}$ & $h_5\tau _1$ \end{tabular} \end{center}
\begin{center}
\begin{tabular}{r|cccccc}
$\cdot $ & $g_1$ & $g_2$ & $g_3$ & $g_4$ & $g_5$ & $g_6$\\
\hline
$\sigma _1$ & $g_1 t_1$ & $g_3 t_1$ & $g_2 t_1$ &
$g_5 t_1$ & $g_4 t_1$ & $g_6 t_2$ \\
$\sigma_2$ & $g_3 t_1$ & $g_2 t_1$ & $g_1 t_1$ &
$g_4 t_2$ & $g_6 t_1$ & $g_5 t_1$ \\
$\sigma_3$ & $g_2 t_1$ & $g_1 t_1$ & $g_3 t_1$ &
$g_6 t_2$ & $g_5 t_2$ & $g_4 t_2$ \\
$\sigma _4$ & $g_5 t_1$ & $g_2 t_2$ & $g_6 t_1$ &
$g_4 t_1$ & $g_1 t_1$ & $g_3 t_1$ \\
$\sigma _5$ & $g_4 t_1$ & $g_6 t_2$ & $g_3 t_2$ &
$g_1 t_1$ & $g_5 t_1$ & $g_2 t_2$ \\
$\sigma _6$ & $g_1 t_2$ & $g_5 t_2$ & $g_4 t_2$ &
$g_3 t_2$ & $g_2 t_2$ & $g_6 t_1$ \end{tabular} \end{center}
\begin{center}
\begin{tabular}{r|cccccc}
$\cdot $ & $g_1$ & $g_2$ & $g_3$ & $g_4$ & $g_5$ & $g_6$\\
\hline
$\tau _1$ & $g_2 t_2$ & $g_6 t_1$ & $g_5 t_1$ &
$g_1 t_2$ & $g_3 t_2$ & $g_4 t_1$ \\
$\tau _2$ & $g_4 t_2$ & $g_1 t_2$ & $g_5 t_2$ &
$g_6 t_1$ & $g_3 t_1$ & $g_2 t_1$ \\
$\tau _3$ & $g_5 t_2$ & $g_4 t_2$ & $g_1 t_2$ &
$g_2 t_1$ & $g_6 t_2$ & $g_3 t_2$ \\
$\tau _4$ & $g_3 t_2$ & $g_4 t_1$ & $g_6 t_2$ &
$g_2 t_2$ & $g_1 t_2$ & $g_5 t_2$ \\
$\tau _5$ & $g_6 t_1$ & $g_5 t_1$ & $g_2 t_2$ &
$g_3 t_1$ & $g_4 t_2$ & $g_1 t_2$ \\
$\tau _6$ & $g_6 t_2$ & $g_3 t_2$ & $g_4 t_1$ &
$g_5 t_2$ & $g_2 t_1$ & $g_1 t_1$ \end{tabular} \end{center} \end{table}
Hence, the braiding in the vectors of the basis is \begin{align*} c(z_i\otimes w_{j}) &= - w_{k}\otimes z_{i}, & c(z_i\otimes z_{j}) &= - z_{\ell}\otimes z_{i}, & c(w_i\otimes z_{j}) &= - z_{m}\otimes w_{i}. \end{align*} Thus, the map $\pi: M({\mathcal O}_2^4, \sgn) \oplus M({\mathcal O}_4^4, \chi_{-}) \to M({\mathcal O}_2^3, \sgn) \oplus M({\mathcal O}_2^3, \sgn)$ given by $\pi(z_i) = x_a$, $\pi(w_j) = y_a$, for $i\in I_a$, $j\in J_a$, $a=1,2,3$, preserves the braiding. This proves the claim.
\medbreak (iii). The argument in the preceding part cannot be adapted to this one. However, assume that $\dim {\mathcal B}(M({\mathcal O}_2^4, \sgn{\otimes} \varepsilon )\oplus M({\mathcal O}_4^4, \chi_{-}))< \infty$. Then $M({\mathcal O}_2^4, \sgn{\otimes} \varepsilon )\oplus M({\mathcal O}_4^4, \chi_{-})$ is standard with finite Cartan matrix $(a_{ij})$, by Lemma~\ref{lema:one-twocopies-finite}. Let $\sigma_i$ and $g_i$, $\tau_i$ and $h_i$, $1\le i \le 6$, be as in the previous part of the proof. The Yetter-Drinfeld module $M({\mathcal O}_2^4, \sgn{\otimes} \varepsilon ) \oplus M({\mathcal O}_4^4, \chi_{-})$ has a basis $\widetilde z_1, \dots, \widetilde z_6$ (from the first summand), $w_1, \dots, w_6$ (from the second summand) with action and coaction given by $\delta(\widetilde z_i) = \sigma_i \otimes \widetilde z_i$, $t\cdot \widetilde z_i = (\sgn {\otimes} \varepsilon )(t')\widetilde z_{t \triangleright i}$ for $1\le i \le 6$, $ t \in \sk$, and the second line of \eqref{eqn:deltaoc42}. Here, $t\triangleright i$ and $t' \in \sk^{\sigma_1}$ have the meaning that $t g_i = g_{t \triangleright i}t'$. Then \begin{align*} \ad (\widetilde z_2)(\ad (\widetilde z_1) (w_1)) &= \widetilde z_2 \widetilde z_1 w_1 + \widetilde z_2 w_4\widetilde z_1 - \widetilde z_3 w_5\widetilde z_2 - w_3\widetilde z_3 \widetilde z_2 \neq 0\\ \text{since } \partial_{\widetilde z_1} &\partial_{w_1} \big(\ad (\widetilde z_2)(\ad (\widetilde z_1) (w_1)) \big) = \partial_{\widetilde z_1}(\widetilde z_2 \widetilde z_1) = \widetilde z_2 \neq 0;\\ \ad (w_2)(\ad (w_1) (\widetilde z_1)) &= w_2 w_1\widetilde z_1 - w_2 \widetilde z_2w_1 + w_1 \widetilde z_4w_2 - \widetilde z_1w_1 w_2 \neq 0\\ \text{since } \partial_{w_5} \partial_{\widetilde z_2} &\big( \ad (w_2)(\ad (w_1) (\widetilde z_1)) \big) = \partial_{w_5}(w_2 w_5) = w_2 \neq 0. \end{align*} Hence $a_{12} \leq -2$, $a_{21} \leq -2$, a contradiction.
Thus, $\dim {\mathcal B}(M({\mathcal O}_2^4, \sgn{\otimes} \varepsilon )\oplus M({\mathcal O}_4^4, \chi_{-}))= \infty$. \end{proof}
\subsection{Nichols algebras over the group $\dn$, $n$ odd}
Let $n>1$ be an odd integer and let $\dn$ be the dihedral group of order $2n$, generated by $x$ and $y$ with defining relations $x^2 = e = y^n$ and $xyx = y^{-1}$. Let ${\mathcal O}$ be a conjugacy class of $\dn$ and let $\rho$ be an irreducible representation of the centralizer $G^s$ of a fixed $s\in {\mathcal O}$.
By \cite[Th. 3.1]{AF2}, we know that there is at most one pair $({\mathcal O}, \rho)$ such that the Nichols algebra ${\mathcal B}({\mathcal O}, \rho)$ is finite-dimensional, namely $({\mathcal O}, \rho) = ({\mathcal O}_{x}, \sgn)$, where $\sgn \in \widehat{\dn^x}$, $\dn^x=\langle x \rangle \simeq \mathbb Z_2$. However, it is not known if the dimension of ${\mathcal B}({\mathcal O}_{x}, \sgn)$ is finite, except when $n=3$, since $\dt \simeq \st$.
\smallbreak The next result generalizes the first part of the proof of Theorem \ref{theo:s3}.
\begin{theorem}\label{theorem:dn} Up to isomorphism, the only Nichols algebra over $\dn$ that may have finite dimension is ${\mathcal B}({\mathcal O}_{x}, \sgn)$. \end{theorem}
\begin{proof} If $\dim{\mathcal B}({\mathcal O}_{x}, \sgn) = \infty$, then there is no finite-dimensional Nichols algebra over $\dn$. Otherwise, we can apply Theorem \ref{theo:one-irred-finite}. Let ${\mathbb M} = M({\mathcal O}_{x}, \sgn) \oplus M({\mathcal O}_{x}, \sgn)$. Assume that $a_{12}\in {\mathbb Z}_{\le 0}$, notation as above. We claim that $-a_{12}\ge 2$. Let
$\sigma_i = xy^i\in \dn$; ${\mathcal O}_{x} = \{\sigma_i\,|\,i\in {\mathbb Z}_n\}$. The Yetter-Drinfeld module ${\mathbb M}$ has a basis $v_i$, $i\in {\mathbb Z}_n$ (from the first copy), $w_i$, $i\in {\mathbb Z}_n$ (from the second copy) with action, coaction and braiding
\begin{align*} t\cdot v_i &= \sgn(t)v_{t \triangleright i}, &\, t\cdot w_i&= \sgn(t)w_{t \triangleright i}, \\ \delta(v_i) &= \sigma_i \otimes v_i, &\, \delta(w_i) &= \sigma_i \otimes w_i, \\ c(v_j\otimes v_i) &= - v_{j\triangleright i}\otimes v_j,&\, c(w_j\otimes w_i) &= - w_{j\triangleright i} \otimes w_{j},\\ c(v_j\otimes w_i) &= - w_{j\triangleright i} \otimes v_j, &\, c(w_j\otimes v_i) &= - v_{j\triangleright i} \otimes w_j.\end{align*} for $i,j\in {\mathbb Z}_n$, $ t \in \dn$. Here, as above, $\sigma_{t \triangleright i} := t\triangleright \sigma_i = t\sigma_it^{-1}$. To prove our claim, we need to find $i,j,k$ such that $\ad_c(v_i)\ad_c(v_j) (w_k)\neq 0$. Let $\partial_{v_i}$, $\partial_{w_i}$ be the skew-derivations as in \cite{MS}. Now \begin{align*} \ad_c(v_2)\ad_c(v_1) (w_2) &= \ad_c(v_2)(v_1w_2 + w_0v_1) \\ &= v_2v_1w_2 + v_2w_0v_1 - v_3w_2v_2 - w_4v_3v_2, \end{align*} hence $\partial_{v_6} \partial_{w_4}\left(\ad_c(v_2)\ad_c(v_1) (w_2)\right) = \partial_{v_6}\left(-v_5v_6\right) = -v_5 \neq 0$. The claim and the theorem are proved. \end{proof}
\bigbreak
\end{document}
\begin{document}
\title{On the Approximation of Laplacian Eigenvalues in Graph Disaggregation}
\author{ \name{ Xiaozhe Hu\textsuperscript{a}, John C. Urschel\textsuperscript{b,c,d$\ast$}\thanks{$^\ast$Corresponding author. Email: urschel@mit.edu}, Ludmil T. Zikatanov \textsuperscript{d,e} } \affil{ \textsuperscript{a}Department of Mathematics, Tufts University, Medford, MA, USA; \textsuperscript{b}Baltimore Ravens, NFL, Owings Mills, MD, USA;
\textsuperscript{c}Department of Mathematics, Massachusetts Institute of Technology,
Cambridge, MA, USA;
\textsuperscript{d}Department of Mathematics, Penn State University,
University Park, PA, USA;
\textsuperscript{e}Institute for Mathematics and
Informatics, Bulgarian Academy of Sciences, Sofia, Bulgaria } }
\maketitle
\begin{abstract} Graph disaggregation is a technique used to address the high cost of computation for power law graphs on parallel processors. The few high-degree vertices are broken into multiple small-degree vertices, in order to allow for more efficient computation in parallel. In particular, we consider computations involving the graph Laplacian, which has significant applications, including diffusion mapping and graph partitioning, among others. We prove results regarding the spectral approximation of the Laplacian of the original graph by the Laplacian of the disaggregated graph. In addition, we construct an alternate disaggregation operator whose eigenvalues interlace those of the original Laplacian. Using this alternate operator, we construct a uniform preconditioner for the original graph Laplacian. \end{abstract}
\begin{keywords} Spectral Graph Theory; Graph Laplacian; Disaggregation; Spectral Approximation; Preconditioning \end{keywords}
\begin{classcode} 05C85; 65F15; 65F08; 68R10 \end{classcode}
\section{Introduction} A variety of real-world graphs, including web networks \cite{MR2091634}, social networks \cite{MR2282139}, and bioinformatics networks \cite{Ji04agraph}, exhibit a degree power law. Namely, the fraction of nodes of degree $k$, denoted by $P(k)$, follows a power-law distribution of the form $P(k) \sim k^{- \gamma}$, where $\gamma$ is typically in the range $2< \gamma <3$. Networks of this variety are often referred to as scale-free networks. The pairing of a few high-degree vertices with many low-degree vertices in large scale-free networks makes computations such as Laplacian matrix-vector products and the solution of linear systems and eigenvalue problems challenging. In particular, the computation of the minimal nontrivial eigenpair can become prohibitively expensive. This eigenpair has many important applications, such as diffusion mapping and graph partitioning \cite{CPE:CPE4330060203,Nadler05diffusionmaps,urschel2015cascadic,urschel2014spectral}.
Breaking the few high degree nodes into multiple smaller degree nodes is a way to address this issue, especially when large-scale parallel computers are available. This technique, called graph disaggregation, was introduced by Kuhlemann and Vassilevski \cite{Kuhlemann.V;Vassilevski.P2013a,d2015compatible}. In this process, each of the high-degree vertices of the network is replaced by a graph, such as a cycle or a clique, where each incident edge of the original node now connects to a node of the cycle or clique (see Figure~\ref{fig:dis}).
Independently, Lee, Peng, and Spielman investigated the concept of graph disaggregation, referred to as vertex splitting, in the setting of combinatorial spectral sparsifiers ~\cite{lee2015sparsified}. They proved results for graphs disaggregated from complete graphs and expanders, and used the Schur complement of the disaggregated Laplacian with respect to the disaggregated vertices to approximate the original Laplacian. The basic motivating assumption in such constructions is that the spectral structure of the graph Laplacian induced by the disaggregated graph approximates the spectral structure of the original graph well.
In \cite{Kuhlemann.V;Vassilevski.P2013a,d2015compatible} Kuhlemann and Vassilevski took a numerical approach. We extend, expand upon, and prove precise and rigorous theoretical results regarding this technique. First, we look at the case of a single disaggregated vertex and establish bounds on the error in spectral approximation with respect to the Laplacians of the original and disaggregated graph, as well as results related to the Cheeger constant. We investigate a conjecture made in \cite{Kuhlemann.V;Vassilevski.P2013a} and give strong theoretical evidence that it does not hold in general. Then, we treat the more general case of disaggregation of multiple vertices and prove analogous results. Finally, we construct an alternative disaggregation operator whose eigenvalues interlace with those of the original graph Laplacian, and, hence, provide excellent approximation to the spectrum of the latter. We then use this new disaggregation operator to construct a uniform preconditioner for the graph Laplacian of the original graph. We prove that the preconditioned graph Laplacian can be made arbitrarily close to the identity operator if we require that the weights of the internal disaggregated edges are sufficiently large.
\begin{figure}
\caption{Example of disaggregation: original graph (left); disaggregate using cycle (middle); disaggregate using clique (right).}
\label{fig:dis}
\end{figure}
\section{Single Vertex Disaggregation}
Consider a weighted, connected, undirected graph
$\mathsf{G}=(V, E, \omega)$, $| V | = n$. Let $e = (i,j)$ denote an edge that connects vertices $i$ and $j$, and
$\langle \cdot, \cdot \rangle$ and $\| \cdot \|$ denote the standard $\ell^2$-inner product and the corresponding induced norm. The associated weighted graph Laplacian $A \in \mathbb{R}^{n\times n}$ is given by \[ \langle A \bm{u} ,\bm{v} \rangle= \sum_{e=(i,j)\in E} \omega_e(u_i-u_j)(v_i-v_j), \quad \omega_e = -a_{ij}, \] where we denote the $(i,j)$-th element of $A$ by $a_{ij}$. Without loss of generality, let us disaggregate the last vertex $\mathsf{v}_n$ of the graph $\mathsf{G}$. Then, the Laplacian can be written in the following block form $$ A = \begin{pmatrix} A_0 & - \bm{a}_{n} \\
- \bm{a}_{n}^T & a_{nn} \end{pmatrix}, $$ where $a_{nn}$ is the degree of $\mathsf{v}_n$. Here, we assume that the graph is simply connected and the associated Laplacian $A$ has eigenvalues $$ 0 = \lambda_1(A) < \lambda_2 (A) \le \cdots \le \lambda_n (A) $$ and corresponding eigenvectors $$ \bm{1}_n=\bm{\varphi}^{(1)}(A) , \bm{\varphi}^{(2)}(A), \cdots, \bm{\varphi}^{(n)}(A), \qquad \text{where} \ \bm{1}_n=(\underbrace{1,\cdots,1}_n)^T. $$ The eigenpair $\left( \lambda_2(A), \bm{\varphi}^{(2)}(A) \right)$ has special significance, and therefore $\lambda_2(A)$ is referred to as the {\it algebraic connectivity}, denoted $a(\mathsf{G})$, and $\bm{\varphi}^{(2)}(A)$ is referred to as the {\it Fiedler vector}.
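The Laplacian just defined can be assembled directly from the edge list; the following numerical sketch (the 4-vertex weighted graph is hypothetical, not from the text) checks the quadratic form and the null vector $\bm{1}_n$.

```python
import numpy as np

# Hypothetical 4-vertex weighted graph, given as an edge list (i, j, weight).
edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 3, 3.0), (0, 3, 1.5)]
n = 4

# Assemble the weighted graph Laplacian A: diagonal entries accumulate
# the weighted degree, off-diagonals hold minus the edge weight.
A = np.zeros((n, n))
for i, j, w in edges:
    A[i, i] += w
    A[j, j] += w
    A[i, j] -= w
    A[j, i] -= w

# <A u, v> equals the sum over edges of w_e (u_i - u_j)(v_i - v_j).
rng = np.random.default_rng(0)
u, v = rng.standard_normal(n), rng.standard_normal(n)
qf = sum(w * (u[i] - u[j]) * (v[i] - v[j]) for i, j, w in edges)
assert np.isclose(u @ A @ v, qf)

# A annihilates constants (row sums vanish), so 0 is an eigenvalue with
# eigenvector 1_n.
assert np.allclose(A @ np.ones(n), 0.0)
```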
We can also write a given nontrivial eigenpair $(\lambda(A), \bm{\varphi}(A))$, $\lambda(A) \ne 0$, $\| \bm{\varphi}(A) \| =1$, in block notation, namely $$ \bm{\varphi}(A) = \begin{pmatrix} \bm{\varphi}_0 \\ \varphi_n \end{pmatrix}. $$ We have the relations
\begin{eqnarray*}
\langle \bm{\varphi}_0, \bm{1}_{n_0} \rangle + \varphi_n &=& 0, \\
A_0 \bm{\varphi}_0 - \varphi_n \bm{a}_{n} &=& \lambda (A) \bm{\varphi}_0, \\ a_{nn} \varphi_n - \bm{a}_{n}^T \bm{\varphi}_0 &= &\lambda (A) \varphi_n, \end{eqnarray*} where $n_0 = n-1$. Suppose that the vertex $\mathsf{v}_n$ is disaggregated into $d$ vertices, with an unspecified connected structure between the disaggregated elements. We will denote this graph by $\mathsf{G}_D$. This induces a disaggregated graph Laplacian $A_D \in \mathbb{R}^{N \times N}$, $N = n_0 + d$, with eigenvalues $0 = \lambda_1(A_D) < \lambda_2 (A_D) \le \cdots \le \lambda_N (A_D)$ and corresponding eigenvectors $\bm{1}_N = \bm{\varphi}^{(1)}(A_D) , \bm{\varphi}^{(2)}(A_D), \cdots, \bm{\varphi}^{(N)}(A_D)$. We can write $A_D$ in block form $$ A_D = \begin{pmatrix} A_0 & - A_{0n} \\ -A_{0n}^T & A_n \end{pmatrix}.$$ We have the relations \begin{align*}
a_{nn} &= \bm{a}_{n}^T \bm{1}_{n_0}, \\
A_{0n}^T \bm{1}_{n_0} &= A_n \bm{1}_d, \\
\bm{a}_{n} &= A_0 \bm{1}_{n_0} = A_{0n} \bm{1}_d. \end{align*} Let us introduce the prolongation operator $P: \mathbb{R}^n \rightarrow \mathbb{R}^N$, \begin{equation} \label{def:P} P = \begin{pmatrix} I_{n_0 \times n_0} & 0\\ 0 & \bm{1}_d \end{pmatrix}. \end{equation}
The following result is immediate.
\begin{lemma}\label{immediate} Let $A$ and $A_D$ be the graph Laplacian of the original graph $\mathsf{G}$ and the disaggregated and simply connected graph $\mathsf{G}_D$, respectively. If $P$ is defined as \eqref{def:P}, then we have $$A = P^T A_D P.$$ \end{lemma}
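Lemma \ref{immediate} is easy to confirm numerically. In the sketch below the graph, the choice $d=3$, the cycle structure among the disaggregates, and the internal weight are all hypothetical; the internal edges drop out of $P^T A_D P$ because $P$ is constant on the disaggregated block.

```python
import numpy as np

def laplacian(edges, n):
    """Weighted graph Laplacian from an edge list (i, j, weight)."""
    A = np.zeros((n, n))
    for i, j, w in edges:
        A[[i, j], [i, j]] += w
        A[i, j] -= w
        A[j, i] -= w
    return A

# Hypothetical example: disaggregate vertex 3 of a 4-vertex graph into
# d = 3 vertices {3,4,5} joined by a cycle of internal weight w_int.
edges_G = [(0, 1, 1.0), (1, 2, 1.0), (0, 3, 1.0), (1, 3, 1.0), (2, 3, 1.0)]
w_int = 10.0
edges_GD = [(0, 1, 1.0), (1, 2, 1.0),                 # retained edges
            (0, 3, 1.0), (1, 4, 1.0), (2, 5, 1.0),    # redistributed edges
            (3, 4, w_int), (4, 5, w_int), (3, 5, w_int)]  # internal cycle
A, A_D = laplacian(edges_G, 4), laplacian(edges_GD, 6)

# Prolongation P: identity on retained vertices, all-ones on disaggregates.
P = np.zeros((6, 4))
P[:3, :3] = np.eye(3)
P[3:, 3] = 1.0

assert np.allclose(P.T @ A_D @ P, A)  # Lemma: A = P^T A_D P
```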
We aim to show that the algebraic connectivity of $A_D$ is bounded away from the algebraic connectivity of the original graph $A$. To do so, suppose we have an eigenpair $(\lambda, \bm{\varphi})$ of the Laplacian of the original graph $\mathsf{G}$. We prolongate the eigenvector $\bm{\varphi}$ to the disaggregated graph $\mathsf{G}_D$ and obtain an approximate eigenvector by the procedure \begin{equation}\label{def:hat-varphi} \widetilde{\bm{\varphi}} = P \bm{\varphi} - s \bm{1}_N = \begin{pmatrix} \bm{\varphi}_0 \\ \varphi_n \bm{1}_d \end{pmatrix} - s \bm{1}_N, \quad \text{where} \ s = \frac{d-1}{N} \varphi_n. \end{equation} This gives $\langle \widetilde{\bm{\varphi}}, \bm{1}_N \rangle = 0$.
We consider $\widetilde{\bm{\varphi}}$ to be an approximation of $\bm{\varphi}$ on the non-trivial eigenspace of the disaggregated operator $A_D$. We have the following relation between the eigenvalue $\lambda$ of $A$ and the Rayleigh quotient of $\widetilde{\bm{\varphi}}$ with respect to $A_D$.
\begin{lemma} \label{lem:rq-relation}
Let $(\lambda, \bm{\varphi})$, $\|\bm{\varphi}\| = 1$, be an eigenpair of the graph Laplacian $A$ associated with a simply connected graph $\mathsf{G}$, and $\widetilde{\bm{\varphi}}$ be defined by \eqref{def:hat-varphi}. We have $$\mathrm{RQ}(\widetilde{\bm{\varphi}}):=\frac{ \langle A_D \widetilde{\bm{\varphi}} , \widetilde{\bm{\varphi}} \rangle}{ \langle\widetilde{\bm{\varphi}} , \widetilde{\bm{\varphi}} \rangle } = \frac{ \lambda } { 1 + \tfrac{(d -1)n}{N} \varphi_n^2 } .$$ \end{lemma}
\begin{proof} We have \begin{align*}
\langle \widetilde{\bm{\varphi}} , \widetilde{\bm{\varphi}} \rangle &= \langle P \bm{\varphi} - s \bm{1}_N, P \bm{\varphi} - s \bm{1}_N \rangle = \langle P \bm{\varphi} , P \bm{\varphi} \rangle - 2s \langle P \bm{\varphi} , \bm{1}_N \rangle + s^2 \langle \bm{1}_N, \bm{1}_N \rangle \\
& = \langle \bm{\varphi_0}, \bm{\varphi}_0 \rangle + \varphi_n^2 \langle \bm{1}_d, \bm{1}_d \rangle -
2s \left( \langle \bm{\varphi}_0, \bm{1}_{n_0} \rangle + \varphi_n \langle \bm{1}_d, \bm{1}_d \rangle \right) + s^2N \\
& = \langle \bm{\varphi_0}, \bm{\varphi}_0 \rangle + \varphi_n^2 + (d-1) \varphi_n^2 - 2s (d-1) \varphi_n + s^2 N \\
& = 1 + \left[ (d-1) - 2\frac{(d-1)^2}{N} + \frac{(d-1)^2}{N} \right] \varphi_n^2 \\
& = 1 + \frac{(d-1)n}{N} \varphi_n^2
\end{align*}
and
\begin{eqnarray*}
\langle {A_D} \widetilde{\bm{\varphi}} , \widetilde{\bm{\varphi}}\rangle&=& \langle A_D (P \bm{\varphi} - s \bm{1}_N), P \bm{\varphi} - s \bm{1}_N \rangle \\ &=& \langle A_D P \bm{\varphi} , P \bm{\varphi}\rangle - 2s\langle A_D \bm{1}_N , P\bm{\varphi}\rangle + s^2 \langle A_D \bm{1}_N, \bm{1}_N \rangle \\
& = & \langle P^TA_D P \bm{\varphi} , \bm{\varphi}\rangle = \langle A \bm{\varphi}, \bm{\varphi} \rangle = \lambda.
\end{eqnarray*}
This completes the proof. \end{proof}
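Lemma \ref{lem:rq-relation} also admits a direct numerical check; the disaggregated graph below (hypothetical weights, $d=3$, one vertex split) realizes $A = P^T A_D P$, and the identity holds for every nontrivial unit eigenpair of $A$.

```python
import numpy as np

def laplacian(edges, n):
    """Weighted graph Laplacian from an edge list (i, j, weight)."""
    A = np.zeros((n, n))
    for i, j, w in edges:
        A[[i, j], [i, j]] += w
        A[i, j] -= w
        A[j, i] -= w
    return A

# Hypothetical example: split vertex 3 (of K4 minus the edge (0,2)) into
# d = 3 copies joined in a cycle; external edges go to distinct copies.
n, d, N = 4, 3, 6
A = laplacian([(0, 1, 1.0), (1, 2, 1.0), (0, 3, 1.0), (1, 3, 1.0),
               (2, 3, 1.0)], n)
A_D = laplacian([(0, 1, 1.0), (1, 2, 1.0), (0, 3, 1.0), (1, 4, 1.0),
                 (2, 5, 1.0), (3, 4, 2.0), (4, 5, 2.0), (3, 5, 2.0)], N)

P = np.zeros((N, n))
P[:3, :3] = np.eye(3)
P[3:, 3] = 1.0

lam, V = np.linalg.eigh(A)
for k in range(1, n):  # every nontrivial unit eigenpair (lambda, phi) of A
    lmbda, phi = lam[k], V[:, k]
    s = (d - 1) / N * phi[-1]
    phit = P @ phi - s * np.ones(N)
    rq = (phit @ A_D @ phit) / (phit @ phit)
    # Lemma: RQ(phit) = lambda / (1 + (d-1) n / N * phi_n^2).
    assert np.isclose(rq, lmbda / (1 + (d - 1) * n / N * phi[-1] ** 2))
```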
The following result follows quickly by applying Lemma \ref{lem:rq-relation} to the Fiedler vector.
\begin{theorem} \label{thm:algebraic-connectivity}
Let $\bm{\varphi} = (\bm{\varphi}_0, \varphi_n)^T$, $\| \bm{\varphi} \| = 1$, be the Fiedler vector of the graph Laplacian $A$ associated with a simply connected graph $\mathsf{G}$. Let $A_D$ be the graph Laplacian corresponding to the disaggregated and simply connected graph $\mathsf{G}_D$ resulting from disaggregating one vertex into $d>1$ vertices. We have $$\frac{a(\mathsf{G})}{a(\mathsf{G}_D)} \ge 1 + \frac{(d -1)n}{N} \varphi_n^2. $$ \end{theorem}
\begin{proof} Noting that $\widetilde{\bm{\varphi}}$ is orthogonal to $\bm{1}_N$, we have \begin{align*} a(\mathsf{G}_D) & \leq \frac{ \langle A_D \widetilde{\bm{\varphi}} , \widetilde{\bm{\varphi}} \rangle}{ \langle\widetilde{\bm{\varphi}} , \widetilde{\bm{\varphi}} \rangle } = \frac{ \lambda } { 1 + \tfrac{(d -1)n}{N} \varphi_n^2 } = \frac{ a(\mathsf{G}) } { 1 + \tfrac{(d -1)n}{N} \varphi_n^2 }, \end{align*} which completes the proof. \end{proof}
If the component $\varphi_n$ of the Fiedler vector at the disaggregated vertex is non-zero, then the algebraic connectivity of the disaggregated graph stays bounded away from that of the original graph, independent of the structure of $A_n$. In particular, the approximation remains bounded away even as the weight on the internal edges tends to infinity.
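This obstruction can be observed numerically. In the hypothetical sketch below (not from the text), an endpoint of a path is split into $d=3$ copies joined by a triangle of internal weight $w$; $a(\mathsf{G}_D)$ never exceeds the bound of Theorem \ref{thm:algebraic-connectivity}, however large $w$ becomes.

```python
import numpy as np

def laplacian(edges, n):
    """Weighted graph Laplacian from an edge list (i, j, weight)."""
    A = np.zeros((n, n))
    for i, j, w in edges:
        A[[i, j], [i, j]] += w
        A[i, j] -= w
        A[j, i] -= w
    return A

# Hypothetical example: path 0-1-2-3; split endpoint 3 into d = 3 copies
# joined by a triangle of internal weight w; one copy keeps the edge to 2.
n, d, N = 4, 3, 6
A = laplacian([(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)], n)
lam, V = np.linalg.eigh(A)
a_G, phi_n = lam[1], V[-1, 1]  # algebraic connectivity, Fiedler component
bound = a_G / (1 + (d - 1) * n / N * phi_n**2)

# a(G_D) stays below the bound no matter how heavy the internal edges get,
# so the disaggregated spectrum cannot approximate a(G) when phi_n != 0.
for w in (1.0, 1e2, 1e5):
    edges_GD = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0),
                (3, 4, w), (4, 5, w), (3, 5, w)]
    a_GD = np.linalg.eigvalsh(laplacian(edges_GD, N))[1]
    assert a_GD <= bound + 1e-9
assert bound < a_G  # strict gap, since phi_n != 0 for this graph
```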
In \cite{Kuhlemann.V;Vassilevski.P2013a}, the authors made the following conjecture.
\begin{conjecture} \label{wrongconject} Under certain conditions the Laplacian eigenvalues of the graph
Laplacian of the disaggregated graph approximate the eigenvalues of
the graph Laplacian of the original graph, provided that the weight
on the internal edges of the disaggregation is chosen to be large
enough. \end{conjecture}
Theorem \ref{thm:algebraic-connectivity} directly implies that Conjecture \ref{wrongconject} is false whenever the Fiedler vector component at the disaggregated vertex is non-zero, which, for a random power law graph, occurs with probability one.
We also have the following result, providing an estimate of how close the approximation $\widetilde{\bm{\varphi}}$ is to the invariant subspace with respect to $A_D$. \begin{lemma} Let $A$ and $A_D$ be the graph Laplacians of the original graph $\mathsf{G}$ and the disaggregated graph $\mathsf{G}_D$, respectively, let $(\lambda, \bm{\varphi})$ be an eigenpair of $A$, with $\mathsf{G}$ simply connected, and let $\widetilde{\bm{\varphi}}$ be defined by \eqref{def:hat-varphi}. We have $$
\| A_D \widetilde{\bm{\varphi}} - \mathrm{RQ}(\widetilde{\bm{\varphi}} ) \widetilde{\bm{\varphi}} \| \le \left( \|A_{0n}^T (\bm{1}_{n_0} - \bm{\varphi}_0 / \varphi_n ) \| + \frac{\sqrt{dn (d+n)}}{N} \lambda + \frac{d n}{N} \lambda | \varphi_n | \right) | \varphi_n |.$$
\end{lemma}
\begin{proof} We recall that
$$ \| \widetilde{\bm{\varphi}} \| = \left( 1 + \frac{(d -1)n}{N} \varphi_n^2 \right)^{1/2}$$ and $$ \lvert \lambda - \mathrm{RQ}(\widetilde{\bm{\varphi}}) \rvert = \frac{ \frac{(d -1)n}{N} \varphi_n^2 }{ 1 + \frac{(d -1)n}{N} \varphi_n^2} \lambda.$$
We also have \begin{align*} A_D \widetilde{\bm{\varphi}}&= A_D \left(P \bm{\varphi} - s\bm{1}_N \right) = A_D P \bm{\varphi} = \begin{pmatrix} A_0 \bm{\varphi}_0 - \varphi_n A_{0n} \bm{1}_d \\ \varphi_n A_n \bm{1}_d - A_{0n}^T \bm{\varphi}_0 \end{pmatrix} = \begin{pmatrix} A_0 \bm{\varphi}_0 - \varphi_n \bm{a}_n \\ A_{0n}^T ( \varphi_n \bm{1}_{n_0} - \bm{\varphi}_0 ) \end{pmatrix} \\ &= \lambda P \bm{\varphi} + \begin{pmatrix} \bm{0} \\ A_{0n}^T ( \varphi_n \bm{1}_{n_0} - \bm{\varphi}_0 ) - \lambda \varphi_n \bm{1}_d \end{pmatrix} = \lambda \widetilde{\bm{\varphi}} + s \lambda \bm{1}_N + \begin{pmatrix} \bm{0} \\ A_{0n}^T ( \varphi_n \bm{1}_{n_0} - \bm{\varphi}_0 ) - \lambda \varphi_n \bm{1}_d \end{pmatrix}\\ & = \lambda \widetilde{\bm{\varphi}} + \frac{\lambda \varphi_n}{N} \left[ (d-1) \begin{pmatrix} \bm{1}_{n_0} \\ \bm{0} \end{pmatrix} - n
\begin{pmatrix} \bm{0} \\ \bm{1}_d \end{pmatrix}
\right]
+ \begin{pmatrix} \bm{0} \\ A_{0n}^T ( \varphi_n \bm{1}_{n_0} - \bm{\varphi}_0 ) \end{pmatrix}, \end{align*} giving
\begin{align*}
\| A_D \widetilde{\bm{\varphi}} - \mathrm{RQ}(\widetilde{\bm{\varphi}} ) \widetilde{\bm{\varphi}} \| &\le \| A_D \widetilde{\bm{\varphi}} - \lambda \widetilde{\bm{\varphi}} \| + \vert \lambda - \mathrm{RQ}(\widetilde{\bm{\varphi}}) \rvert \| \widetilde{\bm{\varphi}} \| \\
&\le \| A_{0n}^T (\varphi_n \bm{1}_{n_0} - \bm{\varphi}_0) \| +\frac{ \sqrt{(d-1)^2 n_0 + d n^2}}{N} | \varphi_n | \lambda + \frac{ \frac{(d -1)n}{N} }{ \sqrt{1 + \frac{(d-1)n}{N} \varphi_n^2}} \varphi^2_n \lambda \\
&\le \left( \|A_{0n}^T (\bm{1}_{n_0} - \bm{\varphi}_0 / \varphi_n ) \| + \frac{\sqrt{dn(d+n)}}{N} \lambda + \frac{d n}{N} \lambda | \varphi_n | \right) | \varphi_n |. \end{align*} \end{proof}
In many applications, we are only concerned with minimal Laplacian eigenpairs. For minimal eigenvalues of scale-free graphs, we have $\lambda = O(1)$ and $\varphi_n = O (N^{-1/2})$. In this way, often the largest source of error comes from the term $\|A_{0n}^T (\varphi_n \bm{1}_{n_0} - \bm{\varphi}_0)\| $. Heuristically, the error of this term is typically best controlled when $d$ is relatively small and each new disaggregate is connected to roughly the same number of exterior vertices.
Next, we consider the eigenvalues of the normalized Laplacian. This gives us insight into how the Cheeger constant changes after disaggregating a vertex. Again, suppose we have an eigenpair $(\nu, \bm{\phi})$ of the normalized graph Laplacian $D^{-1}A$, where $D$ is the degree matrix of $\mathsf{G}$, namely $D = \text{diag}(a_{11}, a_{22}, \cdots, a_{nn})$.
Again, we prolongate the eigenvector $\bm{\phi}$ and obtain an approximate eigenvector of the disaggregated normalized graph Laplacian $D_D^{-1}A_D$, where $D_D$ is the degree matrix of $\mathsf{G}_D$, in a similar fashion \begin{equation}\label{def:phi} \widetilde{\bm{\phi}} = P \bm{\phi} - s \bm{1}_N, \quad \text{where} \ s = \frac{\langle P\bm{\phi}, \bm{1}_N \rangle_{D_D}}{\langle \bm{1}_N, \bm{1}_N \rangle_{D_D}}. \end{equation} We may write $D_D$ in the following way \begin{align*} D_D &= \text{diag}(a^D_{1,1}, \cdots, a^D_{n_0,n_0}, a^D_{n,n}, a^D_{n+1,n+1}, \cdots, a^D_{N,N}) \\ & = \text{diag}(a^D_{1,1}, \cdots, a^D_{n_0,n_0}, \omega_{n}, \omega_{n+1}, \cdots, \omega_N) + \text{diag}(0, \cdots, 0, d^{ex}_n, d^{ex}_{n+1}, \cdots, d^{ex}_{N}) \\ & =: D_D^1 + D_D^{ex}, \end{align*} where $\omega_n, \omega_{n+1}, \cdots \omega_N$ are the weights of the edges incident with vertex $\mathsf{v}_n$ on the original graph (note $\sum_{i=n}^N \omega_i = a_{n,n}$) and $d^{ex}_i = a^D_{i,i} - \omega_i$, $i = n, n+1, \cdots, N$. We may also rewrite the shift $s$ as $$s = \frac{\langle P\bm{\phi}, \bm{1}_N \rangle_{D_D}}{\langle \bm{1}_N, \bm{1}_N \rangle_{D_D}} = \frac{\sum_{i = n}^N d^{ex}_i \phi_n}{\sum_{i=1}^N a^D_{i,i}}.$$
Let $\omega_{\text{total}}(\mathsf{H})$ denote the total edge weight of a graph $\mathsf{H}$, and let $\mathsf{G}_a$ be the disaggregated local subgraph. Similarly, we consider $\widetilde{\bm{\phi}}$ as an approximation of the eigenvectors of the disaggregated normalized graph Laplacian. We have the following theorem. \begin{theorem} \label{thm:eigenvalue_normalized} Let $(\nu, \bm{\phi})$, with $\langle D \bm{\phi}, \bm{\phi} \rangle = 1$, be an eigenpair of the normalized graph Laplacian associated with a simply connected graph $\mathsf{G}$ and $\widetilde{\bm{\phi}}$ be defined by \eqref{def:phi}. We have \begin{equation}\label{eqn:RQ_normalized} \frac{\langle A_D \widetilde{\bm{\phi}}, \widetilde{\bm{\phi}} \rangle}{\langle D_D \widetilde{\bm{\phi}}, \widetilde{\bm{\phi}} \rangle } = \frac{\nu}{1 + \frac{2 \omega_{\text{total}}(\mathsf{G}) \, \omega_{\text{total}}(\mathsf{G}_a)}{\omega_{\text{total}}(\mathsf{G}_D)} \phi_n^2 }, \end{equation} and \begin{equation}\label{ine:nu_2} \nu^D_2 \leq \alpha \nu_2, \quad \alpha = \left( 1 + \frac{2 \omega_{\text{total}}(\mathsf{G}) \, \omega_{\text{total}}(\mathsf{G}_a)}{\omega_{\text{total}}(\mathsf{G}_D)} \phi_n^2 \right)^{-1} \leq 1.
\end{equation} \end{theorem} \begin{proof} We have \begin{align*} \langle A_D \widetilde{\bm{\phi}}, \widetilde{\bm{\phi}} \rangle = \langle A_D(P \bm{\phi} - s \bm{1}_N ), P \bm{\phi} - s \bm{1}_N \rangle = \langle A_D P \bm{\phi}, P \bm{\phi} \rangle = \langle A \bm{\phi}, \bm{\phi} \rangle = \nu \end{align*} and, using the definition of $s$, \begin{align*} \langle D_D \widetilde{\bm{\phi}}, \widetilde{\bm{\phi}} \rangle & = \langle D_D P \bm{\phi}, P\bm{\phi} \rangle - s \langle D_D \bm{1}_N, P \bm{\phi} \rangle \\ & = \langle D_D^1 P \bm{\phi}, P\bm{\phi} \rangle + \langle D_D^{ex} P \bm{\phi}, P\bm{\phi} \rangle - s \langle D_D \bm{1}_N, P \bm{\phi} \rangle \\ & = \langle D \bm{\phi}, \bm{\phi} \rangle + \sum_{i=n}^{N} d_i^{ex} \phi_n^2 - \frac{\left( \sum_{i=n}^N d_i^{ex} \phi_n \right)^2}{\sum_{i=1}^{N} a^D_{i,i}} \\ & = 1 + \frac{ \left( \sum_{i=1}^N a^D_{i,i} - \sum_{i=n}^N d_i^{ex} \right)\sum_{i=n}^N d_i^{ex} }{\sum_{i=1}^N a^D_{i,i}} \phi_n^2 \\ & = 1 + \frac{ \left( \sum_{i=1}^n a_{i,i} \right) \left( \sum_{i=n}^N d_i^{ex} \right) }{\sum_{i=1}^N a^D_{i,i}} \phi_n^2. \end{align*} Noting that $\sum_{i=1}^n a_{i,i} = 2 \omega_{\text{total}}(\mathsf{G})$, $\sum_{i=1}^N a^D_{i,i} = 2 \omega_{\text{total}}(\mathsf{G}_D)$, and $\sum_{i=n}^N d_i^{ex} = 2\omega_{\text{total}}(\mathsf{G}_{a})$, we obtain \eqref{eqn:RQ_normalized}. Moreover, \eqref{ine:nu_2} follows directly from \eqref{eqn:RQ_normalized}. \end{proof}
The Cheeger constant of a weighted graph is defined as follows \cite{Friedland.S;Nabben.R2002a,Chung.F1997a} \begin{equation*}
h(\mathsf{G}) = \min_{\emptyset \neq U \subset V} \frac{| E(U, \bar{U}) |}{\min( \text{vol}(U), \text{vol}(\bar{U}) )}, \end{equation*} where \begin{equation*}
\bar{U} = V \backslash U, \quad \text{vol}(U) = \sum_{i \in U} \delta_i, \quad \delta_i = \sum_{e=(i,j) \in E} \omega_e, \quad | E(U, \bar{U}) | = \sum_{\substack{e=(i,j)\in E \\ i \in U, j \in \bar{U}}} \omega_e. \end{equation*} Due to the Cheeger inequality \cite{Friedland.S;Nabben.R2002a,Chung.F1997a}, the Cheeger constant $h(\mathsf{G})$ and $\nu_2$ are related as follows \begin{equation}\label{ine:cheeger} 1 - \sqrt{1 - h(\mathsf{G})^2} \leq \nu_2 \leq 2 h(\mathsf{G}). \end{equation}
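On a small graph, both quantities in \eqref{ine:cheeger} can be computed by brute force; the weighted graph in the sketch below (a 4-cycle plus a weighted chord) is hypothetical.

```python
import numpy as np
from itertools import combinations

# Brute-force check of the Cheeger inequality on a small hypothetical
# weighted graph (4-cycle plus a weighted chord).
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (0, 3, 1.0), (0, 2, 0.5)]
n = 4
A = np.zeros((n, n))
for i, j, w in edges:
    A[[i, j], [i, j]] += w
    A[i, j] -= w
    A[j, i] -= w
deg = np.diag(A).copy()

# h(G): minimize cut weight over the volume of the smaller side, over
# all nontrivial vertex subsets U.
h = min(
    sum(w for i, j, w in edges if (i in U) != (j in U))
    / min(deg[list(U)].sum(), deg.sum() - deg[list(U)].sum())
    for r in range(1, n)
    for U in map(set, combinations(range(n), r))
)

# nu_2: second-smallest eigenvalue of D^{-1/2} A D^{-1/2}, which has the
# same spectrum as the normalized Laplacian D^{-1} A.
Dh = np.diag(deg ** -0.5)
nu2 = np.sort(np.linalg.eigvalsh(Dh @ A @ Dh))[1]

# Cheeger inequality: 1 - sqrt(1 - h^2) <= nu_2 <= 2 h.
assert 1 - np.sqrt(1 - h**2) <= nu2 + 1e-12
assert nu2 <= 2 * h + 1e-12
```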
\begin{theorem}\label{thm:Cheeger constant} For the Cheeger constant of the original graph $\mathsf{G}$ and the disaggregated and simply connected graph $\mathsf{G}_D$, we have \begin{equation*} h(\mathsf{G}_D) \leq \sqrt{1 - (1 - 2 \alpha h(\mathsf{G}))^2 }, \end{equation*} where $\alpha$ is defined by \eqref{ine:nu_2}. If $h(\mathsf{G}) \geq \frac{4 \alpha}{ 4 \alpha^2 + 1}$, then $h(\mathsf{G}_D) \leq h(\mathsf{G})$. \end{theorem}
\begin{proof} Based on \eqref{ine:cheeger} and \eqref{ine:nu_2}, we have \begin{align*} h(\mathsf{G}_D) \leq \sqrt{1 - (1 - \nu^D_2)^2} \leq \sqrt{1 - (1- \alpha \nu_2)^2} \leq \sqrt{1 - (1 - 2 \alpha h(\mathsf{G}))^2 }. \end{align*} Basic algebra shows that $h(\mathsf{G}_D) \leq h(\mathsf{G})$ if $h(\mathsf{G}) \geq \frac{4 \alpha}{ 4 \alpha^2 + 1}$. \end{proof}
\section{Graph Disaggregation} We now move on to the more general case of multiple disaggregated vertices. Without loss of generality, for a graph Laplacian $A \in \mathbb{R}^{n \times n}$, suppose we are disaggregating the first $m$ vertices. This gives us the disaggregated Laplacian $A_D \in \mathbb{R}^{N\times N}$. Here, $N$ is the number of vertices in the disaggregated graph, given by \[ N = n - m + \sum_{k=1}^m d_k = n-m+n_d, \quad n_d=\sum_{k=1}^m d_k. \] Note that we have $m$ groups of vertices associated with the disaggregation, which we can also number consecutively \begin{equation}\label{numb} \{1,\ldots,N\} = \{\underbrace{1,\ldots, d_1}_{d_1},\ldots \underbrace{d_1+1,\ldots, d_1+d_2}_{d_2},\ldots, n_d+1,\ldots,N\}. \end{equation}
Similar to the case of a single disaggregated vertex, we can establish a relationship between $A$ and $A_D$ through a prolongation matrix $P: \mathbb{R}^n \rightarrow \mathbb{R}^N$, given by \begin{equation} P = \begin{pmatrix} \label{def:Prolongation} P_m & 0\\ 0 & I_{n_0 \times n_0} \end{pmatrix}, \quad\mbox{where}\quad n_0=(n-m). \end{equation} Here, $P_m\in \mathbb{R}^{n_d \times m}$, and \[ P_m=\begin{pmatrix} \bm{1}_{d_1} & 0 & \ldots & 0 \\ 0 & \bm{1}_{d_2} & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \ldots & \bm{1}_{d_m} \end{pmatrix}. \] Note that $P_m^T P_m = \operatorname{diag}(d_1,\ldots, d_m)$. We have the following lemma, which can be verified by a direct calculation.
\begin{lemma} Let $A$ and $A_D$ be the graph Laplacian of the original graph $\mathsf{G}$ and the disaggregated and simply connected graph $\mathsf{G}_D$, respectively. If $P$ is defined as \eqref{def:Prolongation}, then we have \begin{equation} \label{eqn:A-A_D} A = P^T A_D P. \end{equation} \end{lemma}
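The block prolongation and \eqref{eqn:A-A_D} can be confirmed numerically for multiple split vertices; in the sketch below, the graph, the split sizes $d=(2,2)$, and the internal weights are all hypothetical.

```python
import numpy as np

def laplacian(edges, n):
    """Weighted graph Laplacian from an edge list (i, j, weight)."""
    A = np.zeros((n, n))
    for i, j, w in edges:
        A[[i, j], [i, j]] += w
        A[i, j] -= w
        A[j, i] -= w
    return A

# Hypothetical example: disaggregate vertices 0 and 1 of a 4-vertex graph
# into d = (2, 2) copies each; copies are numbered consecutively first.
edges_G = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (0, 3, 1.0), (0, 2, 1.0)]
d, n, m = [2, 2], 4, 2
n0, N = n - m, n - m + sum(d)
# Copies of v0 -> {0,1}, copies of v1 -> {2,3}; retained v2 -> 4, v3 -> 5.
edges_GD = [(0, 2, 1.0), (3, 4, 1.0), (4, 5, 1.0), (1, 5, 1.0), (0, 4, 1.0),
            (0, 1, 3.0), (2, 3, 3.0)]  # last two are internal edges
A, A_D = laplacian(edges_G, n), laplacian(edges_GD, N)

# P = blockdiag(P_m, I_{n0}), with P_m stacking the columns 1_{d_k}.
P_m = np.zeros((sum(d), m))
row = 0
for k, dk in enumerate(d):
    P_m[row:row + dk, k] = 1.0
    row += dk
P = np.zeros((N, n))
P[:sum(d), :m] = P_m
P[sum(d):, m:] = np.eye(n0)

assert np.allclose(P_m.T @ P_m, np.diag(d))  # P_m^T P_m = diag(d_1, d_2)
assert np.allclose(P.T @ A_D @ P, A)         # Lemma: A = P^T A_D P
```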
If we look at the disaggregated graph Laplacian $A_D$ directly, we can obtain a similar bound on the algebraic connectivity as shown in Theorem \ref{thm:algebraic-connectivity}. Let $(\lambda, \bm{\varphi})$ be an eigenpair of $A$. We can define an approximate eigenvector of $A_D$ by prolongating $\bm{\varphi}$ as follows \begin{equation} \label{def:hat-varphi-multi} \widetilde{\bm{\varphi}} = P \bm{\varphi} - s \bm{1}_N, \quad \text{where} \ s = \frac{1}{N}\sum_{i=1}^m (d_i - 1) \varphi_i. \end{equation} It is easy to check that $\langle \widetilde{\bm{\varphi}}, \bm{1}_N \rangle = 0$. Now we have the following lemma about the Rayleigh quotient of $\widetilde{\bm{\varphi}}$ with respect to $A_D$.
\begin{lemma} \label{lem:rq-relation-multi} Let $(\lambda, \bm{\varphi})$ be an eigenpair of the graph Laplacian $A$ associated with a simply connected graph $\mathsf{G}$ and let $\widetilde{\bm{\varphi}}$ be defined by \eqref{def:hat-varphi-multi}. We have $$\mathrm{RQ}(\widetilde{\bm{\varphi}}):=\frac{ \langle A_D \widetilde{\bm{\varphi}} , \widetilde{\bm{\varphi}} \rangle}{ \langle\widetilde{\bm{\varphi}} , \widetilde{\bm{\varphi}} \rangle } \leq \frac{ \lambda } { 1 + \frac{1}{N}\sum_{i=1}^m (d_i - 1) (n + n_d - md_i) \varphi_i^2 }. $$ Moreover, if $n + n_d - m d_i^{\max} > 0$, where $d_i^{\max} = \max_i d_i$, and $\varphi_i \neq 0$ for some $1 \leq i \leq m$, then $ \mathrm{RQ}(\widetilde{\bm{\varphi}}) < \lambda. $ \end{lemma}
\begin{proof} We note that \begin{align*}
\langle {A_D} \widetilde{\bm{\varphi}} , \widetilde{\bm{\varphi}}\rangle&=\langle A_D (P \bm{\varphi} - s \bm{1}_N), P \bm{\varphi} - s \bm{1}_N \rangle = \langle A_D P \bm{\varphi} , P \bm{\varphi}\rangle - 2s\langle A_D \bm{1}_N , P\bm{\varphi}\rangle + s^2 \langle A_D \bm{1}_N, \bm{1}_N \rangle \\
& = \langle P^TA_D P \bm{\varphi} , \bm{\varphi}\rangle = \langle A \bm{\varphi}, \bm{\varphi} \rangle = \lambda,
\end{align*}
where the cross terms vanish because $A_D \bm{1}_N = \bm{0}$, and $\bm{\varphi}$ is normalized so that $\langle \bm{\varphi}, \bm{\varphi} \rangle = 1$.
Denoting $\bm{\varphi}$ by $\bm{\varphi} = (\varphi_1, \varphi_2, \cdots, \varphi_m, \bm{\varphi}_0)^T$, we have
\begin{align*}
\langle \widetilde{\bm{\varphi}} , \widetilde{\bm{\varphi}} \rangle &= \langle P \bm{\varphi} - s \bm{1}_N, P \bm{\varphi} - s \bm{1}_N \rangle = \langle P \bm{\varphi} , P \bm{\varphi} \rangle - 2s \langle P \bm{\varphi} , \bm{1}_N \rangle + s^2 \langle \bm{1}_N, \bm{1}_N \rangle \\
& = \langle \bm{\varphi}_0, \bm{\varphi}_0 \rangle + \sum_{i=1}^m \varphi_i^2 \langle \bm{1}_{d_i}, \bm{1}_{d_i} \rangle -
2s \left( \langle \bm{\varphi}_0, \bm{1}_{n_0} \rangle + \sum_{i=1}^m \varphi_i \langle \bm{1}_{d_i}, \bm{1}_{d_i} \rangle \right) + s^2N \\
& = \langle \bm{\varphi}_0, \bm{\varphi}_0 \rangle + \sum_{i=1}^m \varphi_i^2 + \sum_{i=1}^m(d_i-1) \varphi_i^2 - 2s \sum_{i=1}^m (d_i-1) \varphi_i + s^2 N \\
& = 1 + \sum_{i=1}^m(d_i-1) \varphi_i^2 - \frac{1}{N} \left( \sum_{i=1}^m (d_i -1) \varphi_i \right)^2 \\
& \geq 1+ \sum_{i=1}^m(d_i-1) \varphi_i^2 - \frac{m}{N} \sum_{i=1}^m (d_i -1)^2 \varphi_i^2 \\
& = 1 + \sum_{i=1}^m (d_i - 1) \frac{N - m (d_i - 1)}{N} \varphi_i^2 \\
& = 1 + \frac{1}{N}\sum_{i=1}^m (d_i - 1) (n + n_d - md_i) \varphi_i^2.
\end{align*} Combining this lower bound on $\langle \widetilde{\bm{\varphi}} , \widetilde{\bm{\varphi}} \rangle$ with $\langle A_D \widetilde{\bm{\varphi}} , \widetilde{\bm{\varphi}} \rangle = \lambda$ gives the claimed bound on the Rayleigh quotient, which completes the proof. \end{proof}
Applying the above lemma to the Fiedler vector, and using the variational characterization of the algebraic connectivity via the Rayleigh quotient, we have the following theorem.
\begin{theorem} \label{thm:algebraic-connectivity-multi} Let $\bm{\varphi}$ be the Fiedler vector of the graph Laplacian $A$ associated with a simply connected graph $\mathsf{G}$ and let $A_D$ be the graph Laplacian corresponding to the disaggregated and simply connected graph $\mathsf{G}_D$. Suppose we disaggregate $m$ vertices and each of them is disaggregated into $d_i>1$ vertices, $i=1,2,\cdots,m$. We have $$\frac{a(\mathsf{G})}{a(\mathsf{G}_D)} \ge 1 + \frac{1}{N}\sum_{i=1}^m (d_i - 1) (n + n_d - md_i) \varphi_i^2 .$$ \end{theorem}
\begin{proof} The proof is similar to the proof of Theorem \ref{thm:algebraic-connectivity} and uses Lemma \ref{lem:rq-relation-multi}. \end{proof}
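As a sanity check of Theorem \ref{thm:algebraic-connectivity-multi}, the following sketch compares $a(\mathsf{G})/a(\mathsf{G}_D)$ against the bound. The example is an assumed toy disaggregation: a path on four vertices whose first vertex is split into two copies joined by a unit-weight internal edge; it is illustrative, not the construction used in our experiments.

```python
import numpy as np

def fiedler(L):
    # second-smallest eigenvalue and its (normalized) eigenvector
    lam, V = np.linalg.eigh(L)
    return lam[1], V[:, 1]

# Original graph G: path 1-2-3-4 with unit weights (n = 4).
A = np.diag([1., 2., 2., 1.])
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = -1.

# One admissible disaggregation of vertex 1 into d_1 = 2 copies
# (unit internal weight); vertex order (1a, 1b, 2, 3, 4), so N = 5.
A_D = np.diag([2., 1., 2., 2., 1.])
for i, j in [(0, 1), (0, 2), (2, 3), (3, 4)]:
    A_D[i, j] = A_D[j, i] = -1.

a_G, phi = fiedler(A)     # algebraic connectivity a(G) and Fiedler vector
a_GD, _ = fiedler(A_D)    # a(G_D)

n, n_d, m, d1, N = 4, 2, 1, 2, 5
bound = 1 + (d1 - 1) * (n + n_d - m * d1) * phi[0]**2 / N
assert a_G / a_GD >= bound - 1e-12
```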
It is possible to perform a more careful estimate, using the fact that disaggregating $m$ vertices at once is equivalent to disaggregating them one by one. To avoid a clash with $n_0 = n-m$ used above, denote $N_0 = n$ and $N_i = N_{i-1} - 1 + d_i$, $i = 1,2,\cdots, m$. Note that $N_i = n - i + \sum_{k=1}^i d_k$, $i = 1,2,\cdots, m$, and $N = N_m$. Recursively applying Theorem \ref{thm:algebraic-connectivity}, we have the following result.
\begin{theorem} \label{thm:algebraic-connectivity-multi-1} Let $\bm{\varphi}$ be the Fiedler vector of the graph Laplacian $A$ associated with a simply connected graph $\mathsf{G}$ and let $A_D$ be the graph Laplacian corresponding to the disaggregated and simply connected graph $\mathsf{G}_D$. Suppose we disaggregate $m$ vertices and each of them is disaggregated into $d_i>1$ vertices, $i=1,2,\cdots,m$. We have $$\frac{a(\mathsf{G})}{a(\mathsf{G}_D)} \ge \prod_{i=1}^m \left( 1 + \frac{ (d_i - 1) N_{i-1} }{N_i} \varphi_i^2 \right). $$ \end{theorem} \begin{proof} Let $\mathsf{G}_D^0 = \mathsf{G}$, let $\mathsf{G}_D^i$ be the resulting graph after disaggregating the $i$-th vertex in the graph $\mathsf{G}_D^{i-1}$, and note that $\mathsf{G}_D = \mathsf{G}_D^m$. We have \begin{equation*} \frac{a(\mathsf{G})}{a(\mathsf{G}_D)} = \frac{a(\mathsf{G})}{a(\mathsf{G}_D^1)} \times\frac{a(\mathsf{G}_D^1)}{a(\mathsf{G}_D^2)} \times \cdots \times \frac{a(\mathsf{G}_D^{m-1})}{a(\mathsf{G}_D^m)}. \end{equation*} The result follows immediately from applying Theorem \ref{thm:algebraic-connectivity} to each pair of graphs $\mathsf{G}_D^{i}$ and $\mathsf{G}_D^{i+1}$. \end{proof}
\begin{remark} A direct consequence of Theorem \ref{thm:algebraic-connectivity-multi-1} is $a(\mathsf{G}_D) \leq a(\mathsf{G})$. \end{remark}
Similarly, we also have the following result concerning the minimal eigenvalue of the normalized graph Laplacian after disaggregating several vertices. Denote $\mathsf{G}_D^0 = \mathsf{G}$, and denote the graph after disaggregating vertex $i$ by $\mathsf{G}_D^i$. Note that $\mathsf{G}_D^m = \mathsf{G}_D$. The local subgraph corresponding to disaggregating vertex $i$ is denoted by $\mathsf{G}_a^i$.
\begin{theorem} Let $\nu_2$ be the second smallest eigenvalue of the normalized graph Laplacian associated with a simply connected graph $\mathsf{G}$ and $\nu_2^D$ be the second smallest eigenvalue of the normalized graph Laplacian corresponding to the disaggregated and simply connected graph $\mathsf{G}_D$. Suppose we disaggregate $m$ vertices and each of them is disaggregated into $d_i>1$ vertices, $i=1,2,\cdots,m$. We have \begin{equation*} \frac{\nu_2}{\nu_2^D} \geq \prod_{i=1}^m \left( 1 + \frac{2 \omega_{\text{total}}(\mathsf{G}_D^{i-1}) \,\omega_{\text{total}}(\mathsf{G}_a^i)}{ \omega_{\text{total}}( \mathsf{G}_D^i) }\phi_i^2 \right). \end{equation*} Consequently, we have $\nu_2^D \leq \alpha \nu_2$, where \begin{equation} \label{def:alpha} \alpha := \left[ \prod_{i=1}^m \left( 1 + \frac{2 \omega_{\text{total}}(\mathsf{G}_D^{i-1}) \omega_{\text{total}}(\mathsf{G}_a^i)}{ \omega_{\text{total}}( \mathsf{G}_D^i) }\phi_i^2 \right) \right]^{-1} \leq 1. \end{equation} \end{theorem}
\begin{proof} The result follows by applying Theorem \ref{thm:eigenvalue_normalized} recursively. \end{proof}
Based on these estimates for the eigenvalues of the normalized graph Laplacian, we can estimate the Cheeger constant as follows.
\begin{theorem}\label{thm:Cheeger constant-1} For the Cheeger constant of the original graph $\mathsf{G}$ and the disaggregated and simply connected graph $\mathsf{G}_D$, we have \begin{equation*} h(\mathsf{G}_D) \leq \sqrt{1 - (1 - 2 \alpha h(\mathsf{G}))^2 }, \end{equation*} where $\alpha$ is defined by \eqref{def:alpha}. If $h(\mathsf{G}) \geq \frac{4 \alpha}{ 4 \alpha^2 + 1}$, then $h(\mathsf{G}_D) \leq h(\mathsf{G})$. \end{theorem}
\begin{proof} The proof is the same as the proof of Theorem \ref{thm:Cheeger constant}. \end{proof}
\section{Preconditioning Using Disaggregated Graph}
We aim to show eigenvalue interlacing between $A$ and a new operator, which is obtained by scaling $A_D$ appropriately.
We can rescale $P$ by introducing $\widetilde{P}= D_s P$, where \begin{equation}\label{def:D_s} D_s = \operatorname{diag}(d^{-1/2}_1I_{d_1 \times d_1},\ldots , d^{-1/2}_m I_{d_m \times d_m}, I_{n_0 \times n_0}) \end{equation} is a diagonal scaling matrix, giving us $\widetilde{P}^T\widetilde{P}=I$. Based on the scaled prolongation, we are able to show that the eigenvalues of the diagonally scaled matrix \begin{equation} \label{def:tildeA_D} \widetilde{A}_D:= D_s^{-1} A_D D_s^{-1} \end{equation} interlace with the eigenvalues of $A$. First, let us recall the interlacing theorem.
\begin{theorem}[Interlacing Theorem \cite{Courant.R;Hilbert.D1924a},
Vol. 1, Chap. I] \label{thm:interlacing} Let
$S \in \mathbb{R}^{N \times n}$ be such that
$S^T S = I_{n \times n}$, $n < N$, and let
$B \in \mathbb{R}^{N \times N}$ be symmetric, with eigenvalues
$\lambda_1 \le \lambda_2 \le \cdots \le \lambda_N $. Define
$A = S^T B S$ and let $A$ have eigenvalues
$\mu_1\le \mu_2 \le \cdots \le \mu_n$. Then
$ \lambda_i \le \mu_i \le \lambda_{N-n+i}.$ \end{theorem}
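The interlacing theorem can be illustrated numerically; in this sketch the symmetric matrix $B$ and the orthonormal columns of $S$ are random choices made only for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 7, 4

# Random symmetric B and a matrix S with orthonormal columns (S^T S = I).
B = rng.standard_normal((N, N))
B = (B + B.T) / 2
S, _ = np.linalg.qr(rng.standard_normal((N, n)))  # reduced QR: S is N x n

lam = np.linalg.eigvalsh(B)           # lam_1 <= ... <= lam_N
mu = np.linalg.eigvalsh(S.T @ B @ S)  # mu_1 <= ... <= mu_n

# 0-indexed form of lambda_i <= mu_i <= lambda_{N-n+i}.
for i in range(n):
    assert lam[i] - 1e-12 <= mu[i] <= lam[N - n + i] + 1e-12
```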
From here, we have the following. \begin{theorem}
Let $A$ have eigenvalues
$\lambda_1(A) \le \lambda_2(A) \le \cdots \le \lambda_n(A)$ and
$\widetilde{A}_D=D_s^{-1}A_DD_s^{-1}$ have eigenvalues
$\lambda_1(\widetilde{A}_D) \le \lambda_2(\widetilde{A}_D) \le
\cdots \le \lambda_N(\widetilde{A}_D)$. Then $$ \lambda_i(\widetilde{A}_D) \le \lambda_i(A) \le \lambda_{N-n+i}(\widetilde{A}_D). $$ \end{theorem}
\begin{proof} From \eqref{eqn:A-A_D}, we have \[ A = P^T A_D P = P^T D_s D_s^{-1} A_D D_s^{-1} D_s P = \widetilde{P}^TD_s^{-1} A_D D_s^{-1} \widetilde{P} = \widetilde{P}^T \widetilde{A}_D \widetilde{P}. \] As $\widetilde{P}^T\widetilde{P}=I_{n \times n}$, by the Interlacing Theorem \ref{thm:interlacing}, the eigenvalues of $A$ and $\widetilde{A}_D$ interlace. \end{proof}
We now discuss how to use the disaggregated graph $\mathsf{G}_D$ to solve the graph Laplacian on the original graph $\mathsf{G}$. Here, we will use $\widetilde{A}_D$ as the auxiliary problem and design a preconditioner based on the Fictitious Space Lemma \cite{Nepomnyaschikh.S1992} and auxiliary space framework \cite{Xu.J1996}. Because $A$ and $\widetilde{A}_D$ are both symmetric positive semi-definite, we first state the refined version of the Fictitious Space Lemma proposed in \cite{Dios.B;Brezzi.F;Marini.L;Xu.J;Zikatanov.L2014a}.
\begin{theorem}[Theorems 6.3 and 6.4 in \cite{Dios.B;Brezzi.F;Marini.L;Xu.J;Zikatanov.L2014a}] Let $\widetilde{V}$ and $V$ be two Hilbert spaces and $\Pi: \widetilde{V} \mapsto V$ be a surjective map. Suppose that $\widetilde{\mathcal{A}}: \widetilde{V} \mapsto \widetilde{V}'$ and $\mathcal{A}: V \mapsto V' $ are symmetric positive semi-definite operators. Moreover, suppose \begin{align}
& \Pi (N( \widetilde{\mathcal{A}})) = N(\mathcal{A}), \label{eqn:null-space} \\
&\| \Pi \, \widetilde{v} \|_A \leq c_1 \| \widetilde{v} \|_{\widetilde{A}}, \quad \forall \ \widetilde{v} \in \widetilde{V}, \label{ine:upper} \\
& \text{for any} \ v\in V \ \text{there exists} \ \widetilde{v} \in \widetilde{V} \ \text{such that} \ \Pi \, \widetilde{v} = v \ \text{and} \ \| \widetilde{v} \|_{\widetilde{A}} \leq c_0 \| v \|_A, \label{ine:lower} \end{align} then for any symmetric positive definite operator $\widetilde{\mathcal{B}}: \widetilde{V}' \mapsto \widetilde{V}$, we have that for $\mathcal{B} = \Pi \, \widetilde{\mathcal{B}} \, \Pi^T$, \begin{equation*} \kappa(\mathcal{BA}) \leq \left( c_0 c_1 \right)^2 \kappa(\widetilde{\mathcal{B}} \widetilde{\mathcal{A}}). \end{equation*} \end{theorem}
Applying the above theory to our disaggregation framework, we take $\mathcal{A} = A$, $\widetilde{\mathcal{A}} = \widetilde{A}_D$, and $\Pi = \widetilde{P}^T$. Noting that the null space of $\widetilde{A}_D$ is spanned by $D_s \bm{1}_N$, we have $$ \widetilde{P}^T D_s \bm{1}_N = P^T D_s^2 \bm{1}_N = \bm{1}_n $$ which verifies \eqref{eqn:null-space}. Naturally, $\widetilde{P}^T$ is surjective. Using a preconditioner $\widetilde{B}_D$ of $\widetilde{A}_D$, we can define a preconditioner \begin{equation*} B = \widetilde{P}^T \widetilde{B}_D \widetilde{P} \end{equation*} for $A$. We give the following results concerning the quality of the preconditioner $B$.
\begin{corollary}\label{coro:prec-B} Let $A$ be the graph Laplacian corresponding to the graph $\mathsf{G}$ and $A_D$ be the graph Laplacian corresponding to the disaggregated and simply connected graph $\mathsf{G}_D$. Let $D_s$ be defined by \eqref{def:D_s} and $\widetilde{A}_D$ be defined by \eqref{def:tildeA_D}. If \begin{equation} \label{ine:upper-c1}
\| \widetilde{P}^T \widetilde{\bm{v}} \|_A \leq c_1 \| \widetilde{\bm{v}} \|_{\widetilde{A}_D}, \quad \forall \ \widetilde{\bm{v}} \in \widetilde{V} \end{equation} and for any $\bm{v} \in V$, there exists a $\widetilde{\bm{v}} \in \widetilde{V}$ such that $\widetilde{P}^T \widetilde{\bm{v}} = \bm{v}$ and \begin{equation} \label{ine:lower-c0}
\| \widetilde{\bm{v}} \|_{\widetilde{A}_D} \leq c_0 \| \bm{v} \|_A. \end{equation} Then for the preconditioner $B = \widetilde{P}^T \widetilde{B}_D \widetilde{P}$, we have \begin{equation*} \kappa(BA) \leq \left( c_0 c_1 \right)^2 \kappa(\widetilde{B}_D \widetilde{A}_D). \end{equation*} \end{corollary}
We need to verify that conditions \eqref{ine:upper-c1} and \eqref{ine:lower-c0} hold for $\widetilde{P} = D_s P$. For condition \eqref{ine:lower-c0}, we choose $\widetilde{\bm{v}} = \widetilde{P} \bm{v}$ for any $\bm{v} \in V$, giving $\widetilde{P}^T \widetilde{\bm{v}} = \widetilde{P}^T \widetilde{P} \bm{v} = \bm{v}$ since $\widetilde{P}^T \widetilde{P} = I$. Note that \begin{equation}\label{eqn:lower-c0}
\| \widetilde{\bm{v}} \|^2_{\widetilde{A}_D} = \langle \widetilde{A}_D \widetilde{P} \bm{v}, \widetilde{P} \bm{v} \rangle = \| \bm{v} \|_A^2, \end{equation} which implies condition \eqref{ine:lower-c0} holds with $c_0 = 1$.
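The computation \eqref{eqn:lower-c0} can be checked numerically on a toy disaggregation (a path graph with one split vertex; the graph, the unit weights, and the test vector are illustrative assumptions):

```python
import numpy as np

# Toy example: path 1-2-3; vertex 1 split into copies (1a, 1b) joined
# by a unit internal edge; vertex order in G_D: (1a, 1b, 2, 3).
A = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
A_D = np.array([[ 2., -1., -1.,  0.],
                [-1.,  1.,  0.,  0.],
                [-1.,  0.,  2., -1.],
                [ 0.,  0., -1.,  1.]])
P = np.array([[1., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])

D_s = np.diag([2**-0.5, 2**-0.5, 1., 1.])           # scaling from (def:D_s)
Pt = D_s @ P                                        # scaled prolongation
At = np.linalg.inv(D_s) @ A_D @ np.linalg.inv(D_s)  # tilde A_D

assert np.allclose(Pt.T @ Pt, np.eye(3))            # P~^T P~ = I
v = np.array([0.3, -1.2, 2.0])
vt = Pt @ v                                         # the choice in the text
assert np.isclose(vt @ At @ vt, v @ A @ v)          # ||vt||_{At}^2 = ||v||_A^2
```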
To show that condition \eqref{ine:upper-c1} holds, we use the following result. \begin{lemma}\label{taylor} Let $A\in \mathbb{R}^{n\times n}$ be a graph Laplacian corresponding to a connected graph with $n$ vertices. For all $i\in\{1,\ldots, n\}$ and $\bm{u} \in \mathbb{R}^n$ we have \begin{equation*} \frac{1}{n}(\bm{u},\bm{1}_n)-(\bm{u},\bm{e}_i) = \frac1n (A_i^{-1} \bm{1}_n, A \bm{u}), \end{equation*} where $A_i = (A+\bm{e}_i\bm{e}_i^T)$. \end{lemma} \begin{proof}
First, we note that for all $i\in \{1,\ldots,n\}$ the matrices $A_i$
are invertible, because they all are irreducibly diagonally dominant
$M$-matrices. We refer to Varga~\cite{2000VargaR-aa} for this classical
result.
Next, observe that $A_i \bm{1}_n = \bm{e}_i$, and hence,
$A_i^{-1}\bm{e}_i = \bm{1}_n$. Therefore, we have that \begin{eqnarray*} (A_i^{-1} \bm{1}_n, A \bm{u}) & = & (A_i^{-1} \bm{1}_n, (A_i-\bm{e}_i\bm{e}_i^T) \bm{u})
=
(\bm{1}_n, \bm{u}) - (\bm{u},\bm{e}_i)(A_i^{-1} \bm{1}_n, \bm{e}_i)\\ & = &
(\bm{1}_n, \bm{u}) - (\bm{u},\bm{e}_i)(\bm{1}_n, A_i^{-1} \bm{e}_i) =
(\bm{1}_n, \bm{u}) - (\bm{u},\bm{e}_i)(\bm{1}_n, \bm{1}_n). \end{eqnarray*} As $(\bm{1}_n, \bm{1}_n)=n$, this completes the proof. \end{proof} The result shown in Lemma~\ref{taylor} is also found in~\cite[Lemma~3.2]{Brannick.J;Chen.Y;Kraus.J;Zikatanov.L.2013a}, but is included for completeness.
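The identity of Lemma~\ref{taylor} is straightforward to verify numerically; in the sketch below the graph (a $5$-cycle with a chord) and the test vector are arbitrary choices made for the illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Unweighted graph Laplacian of a small connected graph.
n = 5
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = -1.
np.fill_diagonal(A, -A.sum(axis=1))  # diagonal = vertex degrees

one = np.ones(n)
u = rng.standard_normal(n)
for i in range(n):
    e = np.zeros(n)
    e[i] = 1.
    A_i = A + np.outer(e, e)                          # A_i = A + e_i e_i^T
    lhs = one @ u / n - u[i]                          # (1/n)(u,1) - (u,e_i)
    rhs = (np.linalg.solve(A_i, one) @ (A @ u)) / n   # (1/n)(A_i^{-1} 1, A u)
    assert np.isclose(lhs, rhs)
```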
We now apply Lemma~\ref{taylor} to each disaggregated local subgraph $\mathsf{G}_a^k = (V_a^k, E_a^k, \omega^k_a)$, $k=1,2,\cdots, m$, with $\bm{u} = \widetilde{\bm{v}}_k$, the restriction of $\widetilde{\bm{v}}$ on $\mathsf{G}_a^k$. For $j \in V_a^k$, we have, \begin{equation}\label{eqn:taylor-dis-k}
\widetilde{v}_j = \frac{1}{d_k} \sum_{p \in V_a^k} \widetilde{v}_p - \frac{1}{d_k} \langle L_{k,j}^{-1} \bm{1}_{d_k}, L_k \widetilde{\bm{v}}_k \rangle, \end{equation} where $L_k$ is the unweighted graph Laplacian of the local graph $\mathsf{G}_a^k$ and $L_{k,j}$ is defined in accordance with Lemma~\ref{taylor}: $L_{k,j} = L_k + \bm{e}_j^k \left( \bm{e}^k_j \right)^T$, for $j \in V_a^k$.
Setting $W_k^j := \frac{1}{d_k^2} \| L_{k,j}^{-1} \bm{1}_{d_k} \|^2_{L_k} $, $j \in V_a^k$, and denoting \begin{align*} E_D^0 &:= \{ e=(i,j) \in E_D, i, j \in V^0 \}, \\ E_D^1 &:= \{ e=(i,j) \in E_D, i\in V^0, j\in V_a^k, k=1,2,\cdots,m \}, \\ E_D^2 &:= \{ e=(i,j) \in E_D, i \in V_a^k, j\in V_a^\ell, k, \ell=1,2,\cdots,m, k \neq \ell \}, \end{align*}
we are ready to present the following lemma related to the condition \eqref{ine:upper-c1}.
\begin{lemma} \label{lem:upper-c1} For each disaggregated local subgraph $\mathsf{G}_a^k$, if, for an edge $e' = (p,q) \in E_a^k$, we assign a weight $\omega_{e'}$ such that \begin{equation}\label{def:weight} \omega_{e'} \geq W_{e'} := (1+\epsilon^{-1}) \left[ \sum_{\substack{e = (i,j) \in E_D^1 \\ i\in V^0, \ j\in V_a^k}} \omega_e W_k^j + 2 \sum_{\ell=1}^{m} \left( \sum_{\substack{e=(i,j)\in E_D^2 \\ i \in V_a^k, \ j \in V_a^\ell }} \omega_e W_k^i + \sum_{\substack{e=(i,j)\in E_D^2 \\ i \in V_a^\ell, \ j \in V_a^k }} \omega_e W_k^j \right) \right], \end{equation} then we have \begin{equation}\label{ine:upper-epsilon}
\| \widetilde{P}^T \widetilde{\bm{v}} \|^2_A \leq (1+\epsilon) \| \widetilde{\bm{v}} \|^2_{\widetilde{A}_D}, \quad \forall \ \widetilde{\bm{v}} \in \widetilde{V}, \end{equation} where $\epsilon > 0$. \end{lemma} \begin{proof}
We denote $\widetilde{\bm{u}} = \widetilde{P} \widetilde{P}^T \widetilde{\bm{v}}$, which coincides with $\widetilde{\bm{v}}$ on $V^0$ and equals the block average $\frac{1}{d_k} \sum_{p \in V_a^k} \widetilde{v}_p$ on each $V_a^k$; in particular, the sum over the internal edges $E_a^k$ below vanishes. We have, \begin{eqnarray*}
\| \widetilde{P}^T \widetilde{\bm{v}} \|_A^2 & = & \langle \widetilde{A}_D \widetilde{\bm{u}}, \widetilde{\bm{u}} \rangle = \sum_{e=(i,j) \in E_D} \omega_e ( \widetilde{u}_i - \widetilde{u}_j )^2 \\ & = & \sum_{e=(i,j) \in E_D^0} \omega_e ( \widetilde{u}_i - \widetilde{u}_j )^2 + \sum_{k=1}^m \sum_{e=(i,j) \in E_a^k} \omega_e ( \widetilde{u}_i - \widetilde{u}_j )^2 \\ && + \sum_{e=(i,j) \in E_D^1} \omega_e ( \widetilde{u}_i - \widetilde{u}_j )^2 + \sum_{e=(i,j) \in E_D^2} \omega_e \left( \widetilde{u}_i - \widetilde{u}_j \right)^2 \\ & =: & I_0 + I_1 + I_2. \end{eqnarray*} Here, we have set $I_0=\sum_{e=(i,j) \in E_D^0} \omega_e \left( \widetilde{v}_i - \widetilde{v}_j \right)^2$, \[ I_1=\sum_{k=1}^m \sum_{\substack{e = (i,j) \in E_D^1 \\ i\in V^0, \ j\in V_a^k}} \omega_e \left( \widetilde{v}_i - \frac{1}{d_k}\sum_{p \in V_a^k} \widetilde{v}_p \right)^2, \] and \begin{eqnarray*} I_2&=& \sum_{k=1}^m \sum_{\ell=1}^m \sum_{\substack{e=(i,j) \in E_D^2 \\ i \in V_a^k, \ j \in V_a^\ell}} \omega_e \left( \frac{1}{d_k}\sum_{p \in V_a^k} \widetilde{v}_p - \frac{1}{d_\ell}\sum_{q \in V_a^\ell} \widetilde{v}_q \right)^2. \end{eqnarray*}
Next, we estimate $I_1$ and $I_2$ on the right-hand side. For $e=(i,j) \in E_D^1$, $i \in V^0$ and $j \in V_a^k$, using \eqref{eqn:taylor-dis-k}, we have \begin{align*} \left( \widetilde{v}_i - \frac{1}{d_k}\sum_{p \in V_a^k} \widetilde{v}_p \right)^2 & = \left( \widetilde{v}_i - \widetilde{v}_j - \frac{1}{d_k} \langle L_{k,j}^{-1} \bm{1}_{d_k}, L_k \widetilde{\bm{v}}_k \rangle \right)^2 \\
& \leq (1+ \epsilon) \left( \widetilde{v}_i - \widetilde{v}_j \right)^2 + \left( 1 + \epsilon^{-1} \right) \frac{1}{d_k^2} \| L_{k,j}^{-1} \bm{1}_{d_k} \|^2_{L_k} \| \widetilde{\bm{v}}_k \|^2_{L_k} \\ & = (1+ \epsilon) \left( \widetilde{v}_i - \widetilde{v}_j \right)^2 + \sum_{e'=(p,q) \in E_a^k} \left[ \left( 1+ \epsilon^{-1} \right) W_k^j \right] \left( \widetilde{v}_p - \widetilde{v}_q \right)^2. \end{align*} Then \begin{eqnarray*} I_1 & \leq &\sum_{k=1}^m \sum_{\substack{e = (i,j) \in E_D^1 \\ i\in V^0, \ j\in V_a^k}} \omega_e \left\{ (1+ \epsilon) \left( \widetilde{v}_i - \widetilde{v}_j \right)^2 + \sum_{ e'=(p,q) \in E_a^k} \left[ \left( 1+ \epsilon^{-1} \right) W_k^j \right] \left( \widetilde{v}_p - \widetilde{v}_q \right)^2 \right \} \\ & = &\left( 1 + \epsilon \right) \sum_{k=1}^m\sum_{\substack{e = (i,j) \in E_D^1 \\ i\in V^0, \ j\in V_a^k}} \omega_e \left( \widetilde{v}_i - \widetilde{v}_j \right)^2 \\ && + \sum_{k=1}^m \sum_{e'=(p,q) \in E_a^k} \left[ (1+\epsilon^{-1}) \sum_{\substack{e = (i,j) \in E_D^1 \\ i\in V^0, \ j\in V_a^k}} \omega_e W_k^j \right] \left( \widetilde{v}_p - \widetilde{v}_q \right)^2. \end{eqnarray*} Next, using \eqref{eqn:taylor-dis-k}, for $e = (i,j) \in E^2_D$, $i \in V_a^k$ and $j \in V_a^\ell$ we have \begin{align*} \left( \frac{1}{d_k}\sum_{p \in V_a^k} \widetilde{v}_p - \frac{1}{d_\ell}\sum_{q \in V_a^\ell} \widetilde{v}_q \right)^2 &= \left( \widetilde{v}_i - \widetilde{v}_j + \frac{1}{d_k} \langle L_{k,i}^{-1} \bm{1}_{d_k}, L_k \widetilde{\bm{v}}_k \rangle - \frac{1}{d_\ell} \langle L_{\ell,j}^{-1} \bm{1}_{d_\ell}, L_\ell \widetilde{\bm{v}}_\ell \rangle \right)^2 \\
& \leq (1+\epsilon ) \left( \widetilde{v}_i - \widetilde{v}_j \right)^2 + 2(1+\epsilon^{-1}) \frac{1}{d_k^2} \| L_{k,i}^{-1} \bm{1}_{d_k} \|^2_{L_k} \| \widetilde{\bm{v}}_k \|^2_{L_k} \\
& \quad + 2(1+\epsilon^{-1}) \frac{1}{d^2_\ell} \| L_{\ell,j}^{-1} \bm{1}_{d_\ell} \|^2_{L_{\ell}} \| \widetilde{\bm{v}}_{\ell} \|_{L_\ell}^2 \\ & = (1+\epsilon ) \left( \widetilde{v}_i - \widetilde{v}_j \right)^2 + \sum_{e'=(p,q) \in E_a^k} \left[ 2\left( 1+ \epsilon^{-1} \right) W_k^i \right] \left( \widetilde{v}_p - \widetilde{v}_q \right)^2 \\ & \quad + \sum_{e'=(p,q) \in E_a^\ell} \left[2 \left( 1+ \epsilon^{-1} \right) W_\ell^j \right] \left( \widetilde{v}_p - \widetilde{v}_q \right)^2. \end{align*} Then \begin{eqnarray*} I_2 &\leq & \sum_{k=1}^m \sum_{\ell=1}^m \sum_{\substack{e=(i,j) \in E_D^2 \\ i \in V_a^k, \ j \in V_a^\ell}} \omega_e \left \{ (1+\epsilon ) \left( \widetilde{v}_i - \widetilde{v}_j \right)^2 + \sum_{e'=(p,q) \in E_a^k} \left[ 2\left( 1+ \epsilon^{-1} \right) W_k^i \right] \left( \widetilde{v}_p - \widetilde{v}_q \right)^2 \right. \\ && \left. + \sum_{e'=(p,q) \in E_a^\ell} \left[2 \left( 1+ \epsilon^{-1} \right) W_\ell^j \right] \left( \widetilde{v}_p - \widetilde{v}_q \right)^2 \right\}. \end{eqnarray*} Therefore, we have \begin{eqnarray*} I_2&\le& (1+\epsilon) \sum_{k=1}^{m} \sum_{\ell=1}^{m} \sum_{\substack{e=(i,j) \in E_D^2 \\ i \in V_a^k, \ j \in V_a^\ell}} \omega_e \left( \widetilde{v}_i - \widetilde{v}_j \right)^2 \\ &&+ \sum_{k=1}^m \sum_{e'=(p,q) \in E_a^k} \left[ \sum_{\ell=1}^m \sum_{\substack{e=(i,j) \in E_D^2 \\ i \in V_a^k, \ j \in V_a^\ell}} 2(1+\epsilon^{-1})\omega_e W_k^i \right] \left( \widetilde{v}_p - \widetilde{v}_q \right)^2 \\ && + \sum_{\ell=1}^m \sum_{e'=(p,q) \in E_a^\ell} \left[ \sum_{k=1}^m \sum_{\substack{e=(i,j) \in E_D^2 \\ i \in V_a^k, \ j \in V_a^\ell}} 2 (1+\epsilon^{-1}) \omega_e W_\ell^j \right] \left( \widetilde{v}_p - \widetilde{v}_q \right)^2. 
\end{eqnarray*} Hence, \begin{eqnarray*} I_2& \le & (1+\epsilon) \sum_{k=1}^{m} \sum_{\ell=1}^{m} \sum_{\substack{e=(i,j) \in E_D^2 \\ i \in V_a^k, \ j \in V_a^\ell}} \omega_e \left( \widetilde{v}_i - \widetilde{v}_j \right)^2 \\ & & + \sum_{k=1}^{m} \sum_{e'=(p,q)\in E_a^k} \left[ 2 (1+\epsilon^{-1}) \left( \sum_{\ell=1}^m \sum_{\substack{e=(i,j)\in E_D^2 \\ i \in V_a^k, \ j \in V_a^\ell }} \omega_e W_k^i + \sum_{\ell=1}^{m} \sum_{\substack{e=(i,j)\in E_D^2 \\ i \in V_a^\ell, \ j \in V_a^k }} \omega_e W_k^j \right) \right] \left( \widetilde{v}_p - \widetilde{v}_q \right)^2. \end{eqnarray*}
Now, we use the definition of $W_{e'}$ \eqref{def:weight} and the estimates on $I_1$ and $I_2$ to obtain that \begin{eqnarray*}
&& \| \widetilde{P}^T \widetilde{\bm{v}} \|_A^2 \leq \sum_{e=(i,j) \in E_D^0} \omega_e \left( \widetilde{v}_i - \widetilde{v}_j \right)^2 + \left( 1 + \epsilon \right) \sum_{k=1}^m\sum_{\substack{e = (i,j) \in E_D^1 \\ i\in V^0, \ j\in V_a^k}} \omega_e \left( \widetilde{v}_i - \widetilde{v}_j \right)^2 \\ &&\quad + (1+\epsilon) \sum_{k=1}^{m} \sum_{\ell=1}^{m} \sum_{\substack{e=(i,j) \in E_D^2 \\ i \in V_a^k, \ j \in V_a^\ell}} \omega_e \left( \widetilde{v}_i - \widetilde{v}_j \right)^2 + \sum_{k=1}^{m} \sum_{e'=(p,q)\in E_a^k} W_{e'} \left( \widetilde{v}_p - \widetilde{v}_q \right)^2. \end{eqnarray*}
Due to \eqref{def:weight}, we have that $\omega_{e'} \geq W_{e'}$ and \eqref{ine:upper-epsilon} follows. This completes the proof.
\end{proof}
Lemma \ref{lem:upper-c1} shows that the constant $c_1$ can be made arbitrarily close to $1$ if the weights on the internal edges of the disaggregation are chosen to be large enough. As an immediate consequence, we have the following theorem for the preconditioner $B$. \begin{theorem}\label{thm:prec-B}
Under the assumptions of Corollary \ref{coro:prec-B} and
Lemma~\ref{lem:upper-c1}, for the preconditioner
$B = \widetilde{P}^T \widetilde{B}_D \widetilde{P}$, we have \begin{equation} \label{ine:prec-B-cond} \kappa(BA) \leq \left( 1 + \epsilon \right) \kappa(\widetilde{B}_D \widetilde{A}_D). \end{equation} \end{theorem} \begin{proof}
The relation~\eqref{ine:prec-B-cond} follows from
Corollary~\ref{coro:prec-B} since $c_0 = 1$ in~\eqref{eqn:lower-c0}
and $c_1 = (1+\epsilon)^{1/2}$ in Lemma~\ref{lem:upper-c1}. \end{proof}
Finally, since $\widetilde{A}_D:= D_s^{-1} A_D D_s^{-1}$, if we have a preconditioner $B_D$ for $A_D$ and define $\widetilde{B}_D = D_s B_D D_s$, then it is easy to verify that $\kappa(\widetilde{B}_D\widetilde{A}_D) = \kappa(B_DA_D)$. We have the following theorem showing that the preconditioned operator $BA$ has a condition number comparable to that of $B_DA_D$. \begin{theorem}\label{thm:prec-B1} Under the assumptions of Corollary \ref{coro:prec-B} and Lemma \ref{lem:upper-c1}, let $\widetilde{B}_D = D_s B_D D_s$. Then, for the preconditioner $B = \widetilde{P}^T \widetilde{B}_D \widetilde{P}$, we have
\begin{equation} \label{ine:prec-B1-cond} \kappa(BA) \leq \left( 1 + \epsilon \right) \kappa(B_D A_D). \end{equation} \end{theorem} \begin{proof} Estimate \eqref{ine:prec-B1-cond} follows from Theorem \ref{thm:prec-B} and the fact that $\kappa(\widetilde{B}_D\widetilde{A}_D) = \kappa(B_DA_D)$. \end{proof} Clearly, Theorems \ref{thm:prec-B} and \ref{thm:prec-B1} imply that, when the weights on the internal edges of the disaggregation are chosen to be large enough, preconditioners for the disaggregated graph provide effective preconditioners for the original graph, which indirectly supports the technique suggested in~\cite{Kuhlemann.V;Vassilevski.P2013a}.
\end{document}
\begin{document}
\title{Polyhedra without cubic vertices are prism-hamiltonian}
\author{ Simon \v Spacapan\footnote{ University of Maribor, FME, Smetanova 17, 2000 Maribor, Slovenia. e-mail: simon.spacapan@um.si. }} \date{\today}
\maketitle
\begin{abstract}
The prism over a graph $G$ is the Cartesian product of $G$ with the complete graph on two vertices. A graph $G$ is prism-hamiltonian if the prism over $G$ is hamiltonian. We prove that every polyhedral graph (i.e. 3-connected planar graph) of minimum degree at least four is prism-hamiltonian. \end{abstract}
\noindent {\bf Key words}: Hamiltonian cycle, circuit graph
\noindent {\bf AMS subject classification (2010)}: 05C10, 05C45
\section{Introduction}
The study of hamiltonicity of planar graphs is largely concerned with finding subclasses of 3-connected planar graphs for which each member of the subclass is hamiltonian or has some hamiltonian-type property. One such result was obtained in 1956 by Tutte, who proved that all $4$-connected planar graphs are hamiltonian \cite{tutte2}. Although not every 3-connected planar graph is hamiltonian, it is possible to prove that this class of graphs satisfies (hamiltonian-type) properties weaker than hamiltonicity. A 2-walk in a graph is a closed spanning walk that visits every vertex at most twice. Clearly, every hamiltonian graph has a 2-walk. In \cite{richter} Gao and Richter proved that every $3$-connected planar graph has a 2-walk.
There is an extensive list of non-hamiltonian 3-connected planar graphs with special properties, such as graphs with small order and size \cite{bar},
plane triangulations \cite{zamfirescu}, regular graphs
\cite{tutte1, zamfirescu1}, $K_{2,6}$-minor-free graphs \cite{eli1},
and graphs with few 3-cuts \cite{brink2}. However, some of the classes of graphs mentioned above are prism-hamiltonian. For example, every plane triangulation is prism-hamiltonian \cite{bib}, and every cubic 3-connected graph is prism-hamiltonian \cite{kaiser, paul}. It is well known that every prism-hamiltonian graph has a 2-walk, so the result obtained in \cite{bib} strengthens the result of Gao and Richter mentioned above.
Rosenfeld and Barnette \cite{domneva} conjectured that every 3-connected planar graph is prism-hamiltonian (see also \cite{kral}). This conjecture was recently refuted in \cite{jaz}, where vertex degrees play a central role in the construction of counterexamples. In particular, every counterexample to the Rosenfeld--Barnette conjecture given in \cite{jaz} has many cubic vertices and two vertices of \enquote{high} degree (linear in the order of the graph). In \cite{zam} the authors show that there is an infinite family of 3-connected planar graphs, each of them not prism-hamiltonian, such that the ratio of cubic vertices tends to 1 as the order goes to infinity, while the maximum degree stays bounded by 36.
Vertex degrees in relation to hamiltonicity properties were discussed already by Ore in \cite{ore}, and later by Jackson and Wormald in \cite{jackson}. Let $\sigma_k(G)$ be the minimum sum of vertex degrees of an independent set of $k$ vertices. Ore showed that $\sigma_2(G)\geq n$ implies that $G$ is hamiltonian, and Jackson and Wormald showed that $\sigma_3(G)\geq n$ implies that $G$ has a 2-walk (provided that $G$ is connected). This was strengthened by Ozeki in \cite{kenta}, who showed that $\sigma_3(G)\geq n$ implies that $G$ is prism-hamiltonian.
In this paper we prove that every 3-connected planar graph of minimum degree at least four is prism-hamiltonian.
Equivalently, every 3-connected planar graph which is not prism-hamiltonian must have at least one cubic vertex. In particular this implies that every regular 3-connected planar graph is prism-hamiltonian. The class of 3-connected planar graphs of minimum degree at least four contains graphs that are neither hamiltonian nor traceable (even when restricted to plane triangulations, or to regular graphs), see \cite{zamfirescu} and \cite{zamfirescu1}. In this sense prism-hamiltonicity appears to be the strongest hamiltonian-type property this class has.
The proof we give in this article builds on results obtained in \cite{richter}, where a
method of decomposing graphs into plane chains is developed. In \cite{richter} the authors work with circuit graphs (which were originally defined in \cite{barnette}). A plane graph is a circuit graph if it is obtained from a 3-connected plane graph $G$ by deleting all vertices that lie in the exterior of a cycle of $G$. A cactus is a connected graph $G$ such that every block of $G$ is either a $K_2$ or a cycle, and such that every vertex of $G$ is contained in at most two blocks of $G$ (the last condition is usually omitted; however, for us it will be crucial, so we include it in the definition). The main result of \cite{richter} is that any circuit graph (and hence also any 3-connected plane graph) has a spanning cactus as a subgraph. Here we improve this result by proving that any circuit graph with no internal cubic vertex has a spanning bipartite cactus as a subgraph. Every cactus has a 2-walk, while every bipartite cactus is prism-hamiltonian. Our result thus implies that
circuit graphs with all internal vertices of degree at least 4 are prism-hamiltonian.
We mention that 3-connected planar graphs of minimum degree at least 4 also appear in \cite{thomassen} where the author proved that no graph in this class is hypohamiltonian.
\section{Preliminaries} We refer to \cite{mt} for terminology not defined here. Let $G=(V(G),E(G))$ be a graph, $x\in V(G)$ and $X\subseteq V(G)$. We say that $x$ is {\em adjacent} to $X$, if $x$ is adjacent to some vertex of $X$.
If $u$ and $v$ are adjacent then $e=uv$ denotes the edge with endvertices $u$ and $v$; the subgraph induced by $u$ and $v$ is a path denoted by $u,v$. The {\em union} of graphs $G=(V(G),E(G))$ and $H=(V(H),E(H))$ is the graph $G\cup H=(V(G)\cup V(H),E(G)\cup E(H))$ and the {\em intersection} of $G$ and $H$ is $G\cap H=(V(G)\cap V(H),E(G)\cap E(H))$. The graph $G-X$ is obtained from $G$ by deleting all vertices in $X$ and edges incident to a vertex in $X$. Similarly, for $M\subseteq E(G)$, $G-M$ is the graph obtained from $G$ by deleting all edges in $M$. If $X=\{x\}$ we write $G-x$ instead of $G-\{x\}$.
Let $G$ be a plane graph.
Vertices and edges incident to the unbounded face of $G$ are called {\em external vertices} and {\em external edges}, respectively. If a vertex (or an edge) is not an external vertex (or edge), then
it is called an {\em internal vertex} (or an {\em internal edge}). A path $P$ is an {\em external} resp. {\em internal} path of $G$ if all edges of $P$ are external resp. internal edges.
We use $[n]$ to denote the set of positive integers less than or equal to $n$. A path of odd/even length is called an {\em odd/even path}, respectively. Similarly we define {\em odd} and {\em even faces}, based on the parity of their degree.
Recall that every vertex of a cactus $G$ is contained in at most two blocks of $G$. A vertex of a cactus $G$ is {\em good} if it is contained in exactly one block of $G$.
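As a computational aside (not part of the paper), the two defining conditions of a cactus are easy to test on a concrete graph: find the blocks with the standard DFS lowpoint algorithm, then check that every block is a $K_2$ or a cycle and that every vertex lies in at most two blocks. The routine and example graphs below are illustrative sketches only.

```python
# Sketch (ours, not from the paper): check the cactus conditions on a small
# simple graph given as an adjacency dict.

def blocks(adj):
    """Return the blocks (biconnected components) as lists of edges."""
    disc, low, stack, comps, timer = {}, {}, [], [], [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v in adj[u]:
            if v == parent:
                continue
            if v not in disc:
                stack.append((u, v))
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] >= disc[u]:  # u separates v's subtree: pop a block
                    comp = []
                    while True:
                        e = stack.pop()
                        comp.append(e)
                        if e == (u, v):
                            break
                    comps.append(comp)
            elif disc[v] < disc[u]:    # back edge
                stack.append((u, v))
                low[u] = min(low[u], disc[v])

    for u in adj:
        if u not in disc:
            dfs(u, None)
    return comps

def is_cactus(adj):
    in_blocks = {v: 0 for v in adj}
    for comp in blocks(adj):
        verts = {x for e in comp for x in e}
        # a 2-connected graph with |E| = |V| is a cycle; a single edge is a K2
        if len(comp) != 1 and len(comp) != len(verts):
            return False
        for v in verts:
            in_blocks[v] += 1
    return all(c <= 2 for c in in_blocks.values())

# two triangles sharing vertex 2: every vertex lies in at most two blocks
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3, 4], 3: [2, 4], 4: [2, 3]}
# three triangles sharing vertex 0: vertex 0 lies in three blocks
three_triangles = {0: [1, 2, 3, 4, 5, 6], 1: [0, 2], 2: [0, 1],
                   3: [0, 4], 4: [0, 3], 5: [0, 6], 6: [0, 5]}
print(is_cactus(two_triangles), is_cactus(three_triangles))  # True False
```

Note that the second example fails only because of the "at most two blocks" condition, which is exactly the condition we added to the usual definition.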
A {\em prism} over a graph $G$ is the Cartesian product of $G$ and the complete graph on two vertices $K_2$. The following proposition is given in \cite{eli3} (Theorem 2.3). For the sake of completeness we also include its proof here.
\begin{proposition} \label{bolje} Every bipartite cactus is prism-hamiltonian. \end{proposition} \noindent{\bf Proof.\ } We denote $V(K_2)=\{a,b\}$. We use induction to prove the following stronger statement. Every prism $G\Box K_2$ over a bipartite cactus $G$ has a Hamilton cycle $C$ such that for every good vertex $x$ of $G$, we have $(x,a)(x,b)\in E(C)$. This is clearly true when $G$ is an even cycle or $K_2$.
Let $G$ be a bipartite cactus and assume that the statement is true for all bipartite cactuses with fewer vertices than
$|V(G)|$. If all vertices of $G$ are good, then $G$ is an even cycle or $K_2$. Otherwise, there is a vertex $u$, which is not a good vertex of $G$. Hence, $u$ is contained in exactly two blocks of $G$.
Let $G_1'$ and $G_2'$ be the connected components of $G-u$, and let $G_1=G-G_2'$ and $G_2=G-G_1'$. Both $G_1$ and $G_2$ are bipartite cactuses. Moreover, $u$ is a good vertex in $G_i$, for $i=1,2$. By the induction hypothesis there is a Hamilton cycle $C_i$ in $G_i\Box K_2$ such that $C_i$ uses the edge $e=(u,a)(u,b)$, for $i=1,2$. The desired Hamilton cycle $C$ in $G\Box K_2$ is $(C_1\cup C_2)-e$. Observe that every good vertex of $G$ is a good vertex of $G_1$ or $G_2$. It follows that for every good vertex $x$ of $G$, we have $(x,a)(x,b)\in E(C)$.
$\square$
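The inductive construction above can be illustrated computationally. The following sketch (ours, not part of the paper) builds the prism over a small bipartite cactus, namely two 4-cycles sharing a vertex, and finds a Hamilton cycle by exhaustive backtracking; the function names and the example graph are our own choices.

```python
# Sketch: verify Proposition \ref{bolje} on a small bipartite cactus.

def prism(adj):
    """Cartesian product with K2: two copies of the graph plus vertical edges."""
    p = {}
    for v, nbrs in adj.items():
        for layer in (0, 1):
            p[(v, layer)] = [(u, layer) for u in nbrs] + [(v, 1 - layer)]
    return p

def hamilton_cycle(adj):
    """Backtracking search; returns a Hamilton cycle as a vertex list, or None."""
    start = next(iter(adj))
    n = len(adj)
    path, visited = [start], {start}

    def extend():
        if len(path) == n:
            return path[-1] in adj[start]  # close the cycle
        for u in adj[path[-1]]:
            if u not in visited:
                visited.add(u)
                path.append(u)
                if extend():
                    return True
                path.pop()
                visited.remove(u)
        return False

    return path if extend() else None

# A bipartite cactus: two 4-cycles sharing vertex 3 (3 lies in two blocks).
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (3, 4), (4, 5), (5, 6), (6, 3)]
adj = {}
for a, b in edges:
    adj.setdefault(a, []).append(b)
    adj.setdefault(b, []).append(a)

cycle = hamilton_cycle(prism(adj))
print(len(cycle))  # 14: the prism over this cactus is hamiltonian
```

Exhaustive search is of course only feasible for very small graphs; the point is merely to see the proposition in action on a concrete bipartite cactus.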
\begin{corollary} Every graph $G$, that has a bipartite cactus $H$ as a spanning subgraph, is prism-hamiltonian. \end{corollary}
If $G$ is a plane graph and $H$ is a subgraph of $G$, then $H$ is also a plane graph and we assume that the embedding of $H$ in the plane is the one given by $G$.
Let $G$ be a plane graph and $G^+$ the graph obtained from $G$ by adding a vertex to $G$ and making it adjacent to all external vertices of $G$. The graph $G$ is a {\em circuit graph} if $G^+$ is 3-connected.
It follows from the definition that any circuit graph is 2-connected, and hence every face of a circuit graph is bounded by a cycle. If $G$ is a 3-connected plane graph (or if $G$ is a circuit graph) and $C$ is a cycle of $G$, then the subgraph of $G$ bounded by $C$ is a circuit graph.
Observe also that for any circuit graph $G$ with outer cycle $C$, and any separating set $S$ of size 2 in $G$, every connected component of $G-S$ intersects $C$.
A graph $G$ is a {\em chain of blocks} if the block-cutvertex graph of $G$ is a path. We denote the blocks and cutvertices of $G$, by $$B_1,b_1,B_2,\ldots,b_{n-1}, B_n\,,$$ where $B_i$ are blocks for $i\in [n]$, and $b_i\in V(B_i)\cap V(B_{i+1})$ are cutvertices of $G$ for $i\in [n-1]$. A plane graph $G$ is a {\em plane chain of blocks} if it is a chain of blocks $$G=B_1,b_1,B_2,\ldots,b_{n-1}, B_n$$ such that every external vertex of $B_i, i \in [n]$ is also an external vertex of $G$. The following lemma is given in \cite{richter} (Lemma 3, p. 261).
\begin{lemma}\label{plainchain} Let $G$ be a circuit graph with outer cycle $C$ and let $x\in V(C)$. Let $x'$ and $x''$ be the neighbors of $x$ in $C$. Then \begin{itemize} \item[(i)] $G-x$ is a plane chain of blocks $B_1,b_1,B_2,\ldots,b_{n-1}, B_n$ and each nontrivial block of $G-x$ is a circuit graph. \item[(ii)] Setting $x'=b_0$ and $x''=b_n$, $B_i\cap C$ is a path in $C$ with endvertices $b_{i-1}$ and $b_i$, for every $i\in [n]$.
\end{itemize} \end{lemma}
It follows from the above lemma that for every nontrivial block $B_i$ of $G-x$, with outer cycle $C_i$,
$C_i$ is the union of two $b_{i-1}b_i$-paths $P_i$ and $P_i'$, where $P_i$ is an internal path in $G$ and $P_i'$ is an external path in $G$.
\section{The proof of the main result}
In this section we prove that any circuit graph $G$ such that every internal vertex of $G$ is of degree at least 4 is prism-hamiltonian.
\begin{definition} Let $G$ be a circuit graph with outer cycle $C$ and let $x,y\in V(C)$. We say that $G$ is bad with respect to $x$ and $y$ if \begin{itemize} \item[(i)] $G$ has exactly one bounded odd face $F$ \item[(ii)] $x$ and $y$ are incident to $F$ \item[(iii)] If $x$ and $y$ are adjacent, then $e=xy$ is an internal edge of $G$.
\end{itemize} We say that $G$ is good with respect to $x$ and $y$ if it is not bad with respect to $x$ and $y$. \end{definition}
If $G$ is a circuit graph and $G$ is bad with respect to $x$ and $y$, then there is no Hamilton cycle $C$ in $G\Box K_2$ that uses the vertical edges at $x$ and $y$ (the edges between the two layers of $G$). For example, an odd cycle is bad with respect to any two non-adjacent vertices, and hence the prism over an odd cycle has no Hamilton cycle that uses vertical edges at two non-adjacent vertices of this cycle. Conversely, it turns out (as a consequence of Theorem \ref{glavni}) that for any circuit graph $G$ with all internal vertices of degree at least 4, and any external vertices $x$ and $y$ of $G$ such that $G$ is good with respect to $x$ and $y$, there is a Hamilton cycle in $G\Box K_2$ that uses the vertical edges at $x$ and $y$.
Note also that a bipartite circuit graph $B$ is good with respect to any two external vertices of $B$ (a fact we shall use frequently). In order to simplify the formulation of statements, we also say that the complete graphs $K_1$ and $K_2$ are good with respect to any of their vertices.
\begin{definition} Let $G=B_1,b_1,B_2,\ldots,b_{n-1}, B_n$ be a plane chain of blocks such that each nontrivial block $B_i$ is a circuit graph. Let $b_0\neq b_1$ be an external vertex of $B_1$, and $b_n\neq b_{n-1}$ be an external vertex of $B_n$. We say that $G$ is a good chain with respect to $b_0$ and $b_n$ if $B_i$ is good with respect to $b_{i-1}$ and $b_i$
for every $i\in [n]$. \end{definition}
The same definition is used when only one of the two vertices $b_0$ and $b_n$ is given, and in this case we say that $G$ is a good chain with respect to $b_0$ or with respect to $b_n$. If $G=B_1$ has only one block we say that $G$ is a good chain with respect to any external vertex of $G$.
\begin{lemma} \label{dvodelen} Let $B$ be a bipartite circuit graph with outer cycle $C$ such that all internal vertices of $B$ are of degree at least 4. Then $B$ has at least 4 external vertices of degree 2. \end{lemma}
\noindent{\bf Proof.\ } Let $C$ be a $k$-cycle, $k\geq 4$. Let ${\mathcal F}$ be the set of faces of $B$, and set $e=|E(B)|, v=|V(B)|$ and $f=|{\mathcal F}|$. Since $B$ is bipartite, every bounded face of $B$ has degree at least 4, hence $$2e=\sum_{F\in {\mathcal F}} \deg (F)\geq 4(f-1)+k\,.$$ Substituting $f=e-v+2$ from Euler's formula we obtain $$\sum_{x\in V(B)}\deg(x)=2e\leq 4v-k-4\,.$$ Since each of the $v-k$ internal vertices of $B$ is of degree at least 4 we get $$\sum_{x\in V(C)}\deg(x)\leq (4v-k-4)-4(v-k)=3k-4\,.$$ Since all vertices of $C$ are of degree at least 2, if at most three of them were of degree 2 this sum would be at least $2\cdot 3+3(k-3)=3k-3$; the claim of the lemma thus follows from the pigeonhole principle.
$\square$
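A concrete family showing that the bound of the lemma is tight is the $m\times n$ grid graph (our illustration, not from the paper): it is bipartite and plane, its internal vertices all have degree 4, and one can check that it is a circuit graph for $m,n\geq 2$. Its only external vertices of degree 2 are the four corners.

```python
# Sketch (ours): degree count on the m-by-n grid graph, a bipartite plane
# graph with all internal vertices of degree 4.  Lemma \ref{dvodelen}
# guarantees at least four external vertices of degree 2; in a grid these
# are exactly the four corners.

def grid_degrees(m, n):
    """Degree of every vertex of the m-by-n grid graph."""
    return {(i, j): sum(1 for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                        if 0 <= i + di < m and 0 <= j + dj < n)
            for i in range(m) for j in range(n)}

deg = grid_degrees(5, 7)
print(sum(1 for d in deg.values() if d == 2))  # 4: the corners
```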
The following lemma is a well-known fact.
\begin{lemma}\label{sodalica} A plane graph $G$ is bipartite if and only if all bounded faces of $G$ are even. \end{lemma}
\begin{lemma}\label{osnovna1} Let $B$ be a circuit graph with outer cycle $C$ such that all internal vertices of $B$ are of degree at least 4. Let $x$ and $y$ be any vertices of $C$, and $Q$ a $xy$-path in $C$. Suppose that all vertices in $V(C)\setminus V(Q)$ are of degree at least three in $B$. Then $B$ is good with respect to $x$ and $y$.
\end{lemma}
\noindent{\bf Proof.\ } Suppose, to the contrary, that $B$ is bad with respect to $x$ and $y$. Then $x$ and $y$ are incident to an odd face $F$ of $B$, and $F$ is the only bounded odd face of $B$. Moreover, $x$ and $y$ are not adjacent in $C$. It follows that $B-\{x,y\}$ has exactly two components.
Let $H$ be the component of $B-\{x,y\}$ that contains a vertex of $Q$. If $xy\in E(B)$ and
$F$ is contained in the exterior of the cycle formed by $Q$ and the edge $xy$, define $H'=(B-V(H))-xy$. Otherwise define $H'=B-V(H)$.
$H'$ is a plane chain of blocks and each nontrivial block of $H'$ is a bipartite circuit graph (by Lemma \ref{sodalica}), so write $$H'=D_1,d_1,D_2,\ldots,d_{m-1}, D_m\,.$$ Let $d_0=x$ and $d_m=y$. If $j\in [m]$ and $u\in V(D_j)\setminus \{d_{j-1},d_j\}$, then $\deg_{D_j}(u)>2$. So if $D_j$ is nontrivial, then it has at most two vertices of degree 2 in $D_j$; since $D_j$ is bipartite, this contradicts Lemma \ref{dvodelen}. It follows that all blocks of $H'$ are trivial. If $H'$ is $K_2$ then $x$ and $y$ are adjacent in $C$ (a contradiction); otherwise a vertex in $V(C)\setminus V(Q)$ is of degree at most 2 in $B$, contradicting the assumption of the lemma.
$\square$
\begin{lemma}\label{osnovna} Let $B$ be a circuit graph with outer cycle $C$ such that all internal vertices of $B$ are of degree at least 4. Let $x\in V(C)$ be any vertex and $$B-x=B_1,b_1,B_2,\ldots,b_{n-1}, B_n\,.$$ Let $b_0\in V(B_1)$ and $b_n\in V(B_n)$ be the neighbors of $x$ in $C$. Then for every $i\in [n]$, $B_i$ is good with respect to $b_{i-1}$ and $b_i$.
\end{lemma}
\noindent{\bf Proof.\ } If $B_i$ is trivial, then it is good with respect to $b_{i-1}$ and $b_i$ by convention. Let $B_i$ be a nontrivial block with outer cycle $C_i$, and define $Q=C\cap B_i$. By Lemma \ref{plainchain}, $Q$ is a path in $C_i$ with endvertices $b_{i-1}$ and $b_i$, and every vertex in $V(C_i)\setminus V(Q)$ is of degree more than 2 in $B_i$. By Lemma \ref{osnovna1}, $B_i$ is good with respect to $b_{i-1}$ and $b_i$.
$\square$
\begin{lemma}\label{osnovna2} Let $B$ be a circuit graph with outer cycle $C$ such that all internal vertices of $B$ are of degree at least 4. Let $x$ and $y$ be any vertices of $C$ and $Q$ a $xy$-path in $C$ such that all vertices in $V(C)\setminus V(Q)$ are of degree at least three in $B$. If $B-x$ is bipartite, then
$|V(B_i\cap Q)|\geq 2$ for every block $B_i$ of $B-x$. \end{lemma}
\noindent{\bf Proof.\ } If $V(C)=V(Q)$, the lemma follows from Lemma \ref{plainchain}. Assume $V(C)\neq V(Q)$, and let $u\in V(C)\setminus V(Q)$ be the neighbor of $x$. The block $B_1$ of $B-x$ containing $u$ is nontrivial, for otherwise $\deg_B(u)=2$.
If $|V(B_1\cap Q)|< 2$, then $B_1$ has at most two vertices of degree two in $B_1$. Therefore, by Lemma \ref{dvodelen}, $B_1$ is non-bipartite and hence $B-x$ is non-bipartite.
$\square$
\begin{lemma} \label{skupek} Let $B$ be a circuit graph with outer cycle $C$ such that all internal vertices of $B$ are of degree at least 4. Let $x\in V(C)$ be any vertex, and let $$B-x=B_1,b_1,B_2,\ldots,b_{n-1}, B_n\,.$$ Then for every $k \in [n-1]$ the graph $$G=B-\bigcup_{i=k+1}^n V(B_i)$$
is a good chain with respect to $x$.
\end{lemma}
\noindent{\bf Proof.\ } Let $b_0\in V(B_1)$ be the neighbor of $x$ in $C$. Denote the path $x, b_0$ by $B_0$.
{\em Case 1:
Suppose that $B_k$ is trivial. } \\
If $x$ is not adjacent to a vertex in $G-b_0$, then
$G$ is a plane chain of blocks $$B_0,b_0,B_1,\ldots,b_{k-2}, B_{k-1}\,$$ and, by Lemma \ref{osnovna}, $B_i$ is good with respect to $b_{i-1}$ and $b_i$ for $i\in [k-1]$. Assume therefore that $x$ is adjacent to a vertex in $G-b_0$. Let $\ell\in [k]$ be the maximum number such that $x$ is adjacent to $B_\ell-\{b_{\ell-1},b_\ell\}$. Since $B_k$ is trivial, $\ell\neq k$. The graph $H$ induced by $\bigcup_{i=0}^\ell V(B_i)$ is a nontrivial block of $G$. Moreover, since $H$ is a subgraph of $B$ bounded by a cycle of $B$, $H$ is a circuit graph. Note also that if $x$ and $b_\ell$ are incident to a bounded face $F$ of $H$, then $x$ and $b_\ell$ are adjacent; moreover, $xb_\ell$ is an external edge of $H$. It follows that
$H$ is good with respect to $x$ and $b_\ell$, and therefore $$G=H,b_\ell,B_{\ell+1},\ldots, b_{k-2}, B_{k-1}$$ is a good chain with respect to $x$.
{\em Case 2: Suppose that $B_k$ is nontrivial. } \\ By Lemma \ref{plainchain}, $B_k-b_k$ is a plane chain of blocks, so let $$B_k-b_k=D_1,d_1,D_2,\ldots,d_{m-1}, D_m\,.$$ Let $C_k$ be the outer cycle of $B_k$, and $d_0\in V(D_1), d_m\in V(D_m)$ be the neighbors of $b_k$ in $C_k$. Without loss of generality assume that
$b_kd_0$ is an internal edge of $B$. Note that $D_1$ is nontrivial if $b_{k-1}\neq d_0$, for otherwise $\deg_{B} (d_0)\leq 3$ (this is a contradiction because $d_0$ is an internal vertex of $B$ if $b_{k-1}\neq d_0$).
Suppose that $x$ is adjacent to $D_1-\{b_{k-1},d_1\}$ (it is possible that $b_{k-1}=d_1$). Then $D_1$ is nontrivial. Let $j\in [m]$ be such that $b_{k-1}\in V(D_j)\setminus \{d_{j-1}\}$ and let $H'$ be the graph induced by $$\bigcup_{i=0}^{k-1} V(B_i)\cup \bigcup_{i=1}^jV(D_i)\,.$$ $H'$ is bounded by a cycle of $B$, so it is a circuit graph. We shall prove that $H'$ is good with respect to $x$ and $d_j$. Suppose that $x$ and $d_j$ are incident to a face $F'$ of $H'$, and that $F'$ is the only bounded odd face of $H'$. Then $d_j=b_{k-1}$, and all bounded faces of $D_1$ are even. This contradicts Lemma \ref{dvodelen}, because $\deg_{D_1}(u)>2$ for every $u\in V(D_1)\setminus \{d_0, d_1\}$. It follows that
$$G=H',d_{j},D_{j+1},\ldots,d_{m-1},D_m\,$$ is a good chain with respect to $x$.
Suppose that $x$ is not adjacent to $D_1-\{b_{k-1},d_1\}$.
We claim that $|V(D_1)\cap V(C)|\geq 2$.
To prove the claim suppose, to the contrary, that $|V(D_1)\cap V(C)|<2$. Then $\{b_k,d_1\}$ is a separating set in $B$, and $D_1-\{b_k,d_1\}$ is a component of $B-\{b_k,d_1\}$ disjoint from $C$.
It follows that $B$ is not a circuit graph, a contradiction. This proves the claim.
Define $\ell$ and $H$ as in Case 1. We claim that
$$G=H,b_\ell,B_{\ell+1},\ldots, B_{k-1},b_{k-1},D_1,d_1,\ldots,d_{m-1},D_m$$ is a good chain with respect to $x$. We have already shown (in Case 1) that $H$ is good with respect to $x$ and $b_{\ell}$. By Lemma \ref{osnovna}, $B_i$ is good with respect to $b_{i-1}$ and $b_i$ for $i\in [k-1]\setminus [\ell]$, and $D_i$ is good with respect to $d_{i-1}$ and $d_i$ for $i\in [m], i\neq 1$. It remains to prove that $D_1$ is good with respect to $b_{k-1}$ and $d_1$.
Let $C'$ be the outer cycle of $D_1$ and let $Q=C\cap D_1$ (or equivalently $Q=C\cap C'$). Note that for every vertex $z\in V(C')\setminus V(Q)$, $\deg_{D_1}(z)>2$. By Lemma \ref{osnovna1}, $D_1$ is good with respect to $b_{k-1}$ and $d_1$.
$\square$
\begin{definition}\label{def} Let $B$ be a circuit graph with outer cycle $C$. Let $\{x,y\}\subseteq V(C)$ and $\{u_1,u_2\}\subseteq V(C)$ be any sets. A set of pairwise disjoint chains ${\mathcal C}=\{G_1,\ldots,G_k\}$ is a $(x,y;u_1,u_2)$-set of chains in $B$ if there exists a $xy$-path $P$ in $B$ such that \begin{itemize} \item[(i)] $V(B)\setminus V(P)\subseteq \bigcup_{i=1}^{k}V(G_i)$ \item[(ii)] For $i\in [k]$, $G_i$ intersects $P$ in exactly one vertex $x_i$, and $G_i$ is a good chain with respect to $x_i$. \item[(iii)] For $j\in [2]$, either $G_i$ is a good chain with respect to $u_j$ and $x_i$ for some $i\in [k]$, or $u_j \notin \bigcup_{i=1}^{k}V(G_i)$.
\end{itemize}
A path $P$ that fulfills (i), (ii) and (iii) is called a ${\mathcal C}$-path. The set ${\mathcal C}$ is an odd or an even $(x,y;u_1,u_2)$-set of chains if there exists an odd or an even ${\mathcal C}$-path, respectively. \end{definition}
We say that a set of pairwise disjoint chains $G_1,\ldots,G_k$ is a $(x,y;u_1)$-set of chains
if it satisfies (i),(ii) and (iii) for $j=1$.
Moreover, ${\mathcal C}$ is a $(x,y)$-set of chains if it satisfies (i) and (ii) of Definition \ref{def}.
We also use Definition \ref{def} in slightly more general settings in which $B$ is a plane chain of blocks (and each block is a circuit graph). More precisely, if $B$ is a plane chain of blocks, $x,y$ are two external vertices of $B$, and ${\mathcal C}$ is a set of pairwise disjoint plane chains that satisfy (i), (ii) and (iii), then ${\mathcal C}$ is a $(x,y;u_1,u_2)$-set of chains in $B$.
\begin{lemma} \label{spajanje}
Let $G$ be a bipartite plane chain of blocks $$G=B_1,b_1,B_2,\ldots,b_{n-1}, B_n $$ such that for $i\in [n]$ each nontrivial block $B_i$ of $G$ is a circuit graph with outer cycle $C_i$. Suppose that $u,x,y\in V(C_j),u\neq b_j$, and that
${\mathcal C}$ is a $(x,y;u,b_j)$-set of chains in $B_j$ for some $j\in [n]$. Then for every $\ell>j$ and any vertex $v\in V(C_\ell)\setminus V(C_{\ell-1})$, there is a $(x,y;u,v)$-set of chains in $\bigcup_{i=j}^{\ell} B_i$. Moreover, if $u=b_{j-1}$, then for every $\ell'<j$ and any vertex $v'\in V(C_{\ell'})\setminus V(C_{\ell'+1})$, there is a $(x,y;v',v)$-set of chains in $\bigcup_{i=\ell'}^{\ell} B_i$. \end{lemma}
\noindent{\bf Proof.\ } Let $\ell>j$ and $v\in V(C_\ell)\setminus V(C_{\ell-1})$. Suppose that ${\mathcal C}=\{ G_1,\ldots,G_k\}$ is a $(x,y;u,b_j)$-set of chains in $B_j$ and that $P$ is a ${\mathcal C}$-path. Then (a) or (b) occurs.
\begin{itemize} \item[(a)] There is a chain $G_{r} \in {\mathcal C}$ such that $G_{r}$ is a good chain with respect to $x_{r}$ and $b_{j}$, where $\{x_{r}\}=V(G_{r})\cap V(P)$ \item[(b)] $b_j \notin \bigcup_{i=1}^{k}V(G_i)$. \end{itemize} In case (a), $G_{r}'=G_{r}\cup \bigcup_{i=j+1}^{\ell} B_i$ is a good chain with respect to $x_{r}$ and $v$, and therefore ${\mathcal C'} ={\mathcal C}\cup \{G_r'\}\setminus \{G_r\}$ is a $(x,y;u,v)$-set of chains in $ \bigcup_{i=j}^{\ell} B_i$.
In case (b), $G_0=\bigcup_{i=j+1}^{\ell} B_i$ is a good chain with respect to $b_j$ and $v$, and therefore ${\mathcal C'}=\{G_0,\ldots,G_{k}\}$ is a $(x,y;u,v)$-set of chains in $ \bigcup_{i=j}^{\ell} B_i$. In both cases a ${\mathcal C'}$-path is $P$. The last sentence of the lemma is proved analogously.
$\square$
If we use the notation of the above lemma, we note that a $(x,y;b_j)$-set of chains in $B_j$ can be extended to a $(x,y)$-set of chains in $\bigcup_{i=j}^{\ell} B_i$ (in fact the construction given in the above proof also works in this case). Note also that a $(x,y;u,v)$-set of chains in $\bigcup_{i=j}^{\ell} B_i$ exists under the weaker assumption that $B_i$ is bipartite for $i>j$ (and $G$ may be non-bipartite). In \cite{richter} the following result was proved (Theorem 5, p. 262).
\begin{theorem}\label{rihta} Let $B$ be a bipartite circuit graph with outer cycle $C$. If $x,y\in V(C)$, then for any vertex $u\in V(C)$ (not necessarily distinct from $x$ and $y$) there exists a $(x,y;u)$-set of chains in $B$. \end{theorem}
\begin{lemma}\label{posebna} Let $B$ be a bipartite circuit graph with outer cycle $C$. Suppose that $x,y\in V(C)$ and that $Q$ is a $xy$-path in $C$. If every internal vertex of $B$ is of degree at least 4 and every vertex in $V(C)\setminus V(Q)$ is of degree at least 3 in $B$, then there exists a $(x,y;x,y)$-set of chains in $B$.
\end{lemma}
\noindent{\bf Proof.\ }
By Lemma \ref{plainchain}, $B-x$ is a plane chain of blocks $$B-x=B_1,b_1,B_2,\ldots,b_{n-1}, B_n\,.$$
By Lemma \ref{osnovna2}, $|V(B_i\cap Q)|\geq 2$ for $i\in [n]$. We may assume, without loss of generality, that $y\in V(B_1)$ and $y\neq b_1$. If $B_1$ is nontrivial then $B_1-y$ is a plane chain of blocks $$B_1-y=D_1,d_1,D_2,\ldots,d_{m-1}, D_m\,.$$ Let $d_0\in V(D_1)$ be the neighbor of $y$ in $Q$, and define
$k=\max\{i\,|\,D_i\cap Q\neq \emptyset\}$.
{\em Case 1:} Suppose that $D_k$ intersects $Q$ in exactly one vertex (in this case $d_{k-1}$). Then $G=\bigcup_{i=k}^m D_i$ is a good chain with respect to $d_{k-1}$.
If $D_i$ is trivial define $P_i=D_i$ and ${\mathcal C_i}=\emptyset$, for $i\in [k-1]$. If $D_i$ is nontrivial then, by Theorem \ref{rihta}, there is a $(d_{i-1},d_i;d_i)$-set of chains ${\mathcal C_i}$ in $D_i$, for $i\in [k-1]$. In this case let $P_i$ be a ${\mathcal C_i}$-path in $D_i$.
If $B_i$ is trivial define $R_i=B_i$ and ${\mathcal F_i}=\emptyset$, for $i\in [n],i\neq 1$. If $B_i$ is nontrivial then, by Theorem \ref{rihta}, there is a $(b_{i-1},b_i;b_{i-1})$-set of chains ${\mathcal F_i}$ in $B_i$,
for $i\in[n],i\neq 1$. In this case let $R_i$ be a ${\mathcal F_i}$-path in $B_i$. Let $b_n$ be the neighbor of $x$ in $Q$, and let $R_{n+1}$ be the path $x,b_n$. Additionally let $P_0$ be the path $y,d_0$. Define $$P=\bigcup_{i=0}^{k-1} P_i\cup \bigcup_{i=2}^{n+1} R_i\,.$$ The chain $G$ together with chains ${\mathcal C_i}, i\in [k-1]$ and ${\mathcal F_i},i\in [n],i\neq 1$ is a $(x,y;x,y)$-set of chains in $B$. If we call this set of chains ${\mathcal C}$, then $P$ is a ${\mathcal C}$-path.
{\em Case 2:} Suppose that $D_k$ intersects $Q$ in more than one vertex. Then, by Theorem \ref{rihta}, there is a $(d_{k-1},b_1;d_k)$-set of chains ${\mathcal H}$ in $D_k$. By Lemma \ref{spajanje} (see also the note directly after Lemma \ref{spajanje}) there is a $(d_{k-1},b_1)$-set of chains in $\bigcup_{i=k}^m D_i$. The rest of the proof is similar to that of Case 1.
If $B_1$ is trivial, then $x$ and $y$ are adjacent in $C$ (for otherwise $\deg_B(u)=2$, where $u$ is the neighbor of $x$ in $V(C)\setminus V(Q)$) and $V(Q)=V(C)$. Define $R_1=B_1$. In this case $\bigcup_{i=2}^{n} {\mathcal F_i}$ is a $(x,y;x,y)$-set of chains in $B$. The corresponding path is $\bigcup_{i=1}^{n+1} R_i.$
$\square$
\begin{lemma}\label{to} Let $B$ be a bipartite circuit graph with outer cycle $C$. Let $x,y,u_1,u_2\in V(C)$ be such that $\{x,y\}\neq \{u_1,u_2\}$. If every internal vertex of $B$ is of degree at least 4, then there is a $(x,y;u_1,u_2)$-set of chains in $B$. \end{lemma}
\noindent{\bf Proof.\ } Suppose that the claim of the lemma is not true; let $B$ be a counterexample with the minimum number of vertices. It is easy to verify the lemma when $B$ is an even cycle.
By Lemma \ref{plainchain}, $B-x$ is a plane chain of blocks $$B-x=B_1,b_1,B_2,\ldots,b_{n-1}, B_n\,.$$ Let $Q$ and $Q'$ be the two $xy$-paths in $C$.
Let $b_0\in V(B_1)$ and $b_n\in V(B_n)$ be the neighbors of $x$ in $Q$ and $Q'$, respectively. We set $B_0=\emptyset$ (to avoid ambiguity in the following definitions). Let $k\in [n]$ be such that $y\in V(B_{k})\setminus V(B_{k-1})$, and let $k_j\in [n]$ be such that $u_j\in V(B_{k_j})\setminus V(B_{k_j-1})$ for $j=1,2$ (if
$x\in \{u_1,u_2\}$ this applies only to $k_1$ and we set $u_2=x$). We may assume, without loss of generality, that $y\neq b_0$ (otherwise $y\neq b_n$ and we have a similar proof) and that $k_1\leq k_2$.
We shall construct a $xy$-path $P$ in $B$.
For $i\in [k]$, if $B_i$ is trivial define ${\mathcal C_i}=\emptyset$ and $P_i=B_i$. In the sequel we define ${\mathcal C_i}$ and $P_i$ for nontrivial blocks $B_i$.
By minimality of $B$, Lemma \ref{to} is true for every nontrivial block $B_i$ of $B-x$ and therefore, for every $i\in [k]$ we can apply the statement of Lemma \ref{to} to $B_i$.
Denote the outer cycle of $B_i$ by $C_i$. Since $Q_i=B_i\cap C$ is a $b_{i-1}b_i$-path in $C_i$ and every vertex of $V(C_i)\setminus V(Q_i)$ is of degree at least 3 in $B_i$, we can also apply Lemma \ref{posebna} to $B_i$. The following statements are obtained either by an application of Lemma \ref{to} or Lemma \ref{posebna} to $B_i$. For $i\in [k-1]$ and $j=1,2$ there exists:
\begin{itemize} \item[(i)] a $(b_{i-1},b_i;b_{i-1},b_i)$-set of chains in $B_i$ (by Lemma \ref{posebna}), \item[(ii)] a $(b_{k-1},y;b_{k-1},b_k)$-set of chains in $B_k$ (by minimality of $B$ (i.e. by the statement of Lemma \ref{to}) if $y\neq b_k$, and by Lemma \ref{posebna} if $y=b_k$), \item[(iii)] a $(b_{k_j-1},b_{k_j};b_{k_j-1},u_{j})$-set of chains in $B_{k_j}$ (by minimality of $B$ if $u_j\neq b_{k_j}$, and by Lemma \ref{posebna} if $u_j=b_{k_j}$), \item[(iv)] if $k_j=k$ and $u_j\neq y$, there is a $(b_{k-1},y;b_{k-1},u_j)$-set of chains in $B_k$ (by minimality of $B$), \item[(v)] if $k_j=k$ and $u_j\neq b_k$, there is a $(b_{k-1},y;u_j,b_{k})$-set of chains in $B_k$ (by minimality of $B$), \item[(vi)] if $k_1=k_2$, there is a $(b_{{k_1}-1},b_{k_1};u_1,u_2)$-set of chains in $B_{k_1}$ (by minimality of $B$), \item[(vii)] if $k_1=k_2=k$, there is a $(b_{k-1},y;u_1,u_2)$-set of chains in $B_k$ (by minimality of $B$). \end{itemize}
Since $\{x,y\}\neq \{u_1,u_2\}$ we may assume, without loss of generality, that $y\notin \{u_1,u_2\}$. Therefore we have the following possibilities: (1) $u_1,u_2\notin V(Q')$, (2) $u_1\notin V(Q'),u_2\notin V(Q)$, (3) $u_2=x$ and $u_1\notin V(Q')$. All other possibilities are symmetric, and they can be obtained from one of the above cases by exchanging the roles of $Q$ and $Q'$; for example, $u_1,u_2\notin V(Q')$ is symmetric to $u_1,u_2\notin V(Q)$. Therefore we can also assume that $u_1\notin V(Q')$. With this assumption the following cases with regard to $k, k_1$ and $k_2$ may appear. Next to each particular case below we also write which of the above statements we use to prove the existence of a $(x,y;u_1,u_2)$-set of chains in $B$. Later we give detailed arguments.
\begin{itemize} \item[(a)] $k_1<k_2<k$, we use (i) for $i\in [k-1]\setminus \{k_1,k_2\}$, (iii) for $j\in [2]$, and (ii). \item[(b)] $k_1<k_2=k$, we use (i) for $i\in [k-1]\setminus \{k_1\}$, (iii) for $j=1$, and (iv) for $j=2$. \item[(c)] $k_1<k<k_2$, we use (i) for $i\in [k-1]\setminus \{k_1\}$, (iii) for $j=1$, and (ii).
\item[(d)] $k_1=k_2<k$, we use (i) for $i\in [k-1]\setminus \{k_1\}$, (vi) for $j=1$, and (ii). \item[(e)] $k_1=k_2=k$, we use (i) for $i\in [k-1]$, and (vii). \item[(f)] $k_1<k$ and $u_2=x$, we use (i) for $i\in [k-1]\setminus \{k_1\}$, and (ii). \item[(g)] $k_1=k$ and $u_2=x$, we use (i) for $i\in [k-1]$, and (v) for $j=1$. \end{itemize}
We prove cases (a), (c) and (g) in detail. Cases (b),(d) and (e) are similar to case (a), and case (f) is similar to case (g), so here we skip details.
{\em Case (a).} Suppose that $k_1<k_2<k$. By (i), there is a $(b_{i-1},b_i;b_{i-1},b_i)$-set of chains ${\mathcal C_i}$ in $B_i$, for $i\in [k-1]\setminus \{k_1,k_2\}$. By (iii), there is a $(b_{k_j-1},b_{k_j};b_{k_j-1},u_j)$-set of chains ${\mathcal C_{k_j}}$ in $B_{k_j}$ for $j=1,2$. By (ii), there is a $(b_{k-1},y;b_{k-1},b_k)$-set of chains ${\mathcal C_k}$ in $B_k$. Denote by $P_{i}$ a ${\mathcal C_i}$-path in $B_{i}$, for $i\in [k]$.
By Lemma \ref{skupek}, $G_0=B- \bigcup_{i=1}^{k}V(B_i)$ is a good chain with respect to $x$. Let $P_0$ be the path $x,b_0$. Define $P=\bigcup_{i=0}^{k} P_i$ (and recall that $P_i=B_i$, if $B_i$ is trivial). Then ${\mathcal C}=\{G_0\}\cup \bigcup_{i=1}^k{\mathcal C_i}$ is a $(x,y;u_1,u_2)$-set of chains in $B$.
{\em Case (c).} Suppose that $k_1<k<k_2$.
By (i) there is a $(b_{i-1},b_i;b_{i-1},b_i)$-set of chains ${\mathcal C_i}$ in $B_i$, for $i\in [k-1]\setminus \{k_1\}$. By (iii) there is a $(b_{k_1-1}b_{k_1};b_{k_1-1},u_1)$-set of chains ${\mathcal C_{k_1}}$ in $B_{k_1}$. By (ii) there is a $(b_{k-1},y;b_{k-1},b_k)$-set of chains ${\mathcal C_k}$ in $B_k$.
Since ${\mathcal C}_{k}$ is a $(b_{k-1},y;b_{k-1},b_k)$-set of chains in $B_k$, by Lemma \ref{spajanje} there is a $(b_{k-1},y;b_{k-1},u_2)$-set of chains ${\mathcal D}_{k}$ in $\bigcup_{i=k}^{k_2} B_i$.
By Lemma \ref{skupek}, $G_1=B- \bigcup_{i=1}^{k_2}V(B_i)$ is a good chain with respect to $x$. Then $G_1$ together with chains in ${\mathcal C_i},i\in [k-1]$ and ${\mathcal D}_{k}$ forms a $(x,y;u_1,u_2)$-set of chains in $B$ (the corresponding path is $P=\bigcup_{i=0}^{k} P_i$).
{\em Case (g).} By (i) there is a $(b_{i-1},b_i;b_{i-1},b_i)$-set of chains ${\mathcal C_i}$ in $B_i$, for $i\in [k-1]$. Since $u_1\notin V(Q')$, by an assumption, we have $u_1\neq b_k$. By (v), there is a $(b_{k-1},y;u_1,b_{k})$-set of chains ${\mathcal C_k}$ in $B_k$. By Lemma \ref{spajanje} there is a $(b_{k-1},y;u_1)$-set of chains ${\mathcal F}_{k}$ in $ \bigcup_{i=k}^{n} B_i$.
Let $P_{i}$ be a ${\mathcal C_i}$-path in $B_{i}$, for $i\in [k]$, and define $P=\bigcup_{i=0}^{k} P_i$. Then chains in ${\mathcal C_i},i\in [k-1]$ and ${\mathcal F}_{k}$ form a $(x,y;u_1,x)$-set of chains in $B$, with $P$ being the corresponding path.
$\square$
\begin{definition}\label{defcik} Let $B$ be a circuit graph with outer cycle $C$, and let $u_1,u_2, u_3\in V(C)$. A set of pairwise disjoint chains ${\mathcal C}=\{G_1,\ldots,G_k\}$ is a $[u_1,u_2,u_3]$-set of chains in $B$ if there exists an even cycle $C'$ in $B$ such that \begin{itemize} \item[(i)] $V(B)\setminus V(C')\subseteq \bigcup_{i=1}^{k}V(G_i)$ \item[(ii)] For $i\in [k]$, $G_i$ intersects $C'$ in exactly one vertex $x_i$, and $G_i$ is a good chain with respect to $x_i$. \item[(iii)] For $j\in [3]$, either $G_i$ is a good chain with respect to $u_j$ and $x_i$ for some $i\in [k]$, or $u_j \notin \bigcup_{i=1}^{k}V(G_i)$. \end{itemize}
A cycle $C'$ that fulfills (i),(ii) and (iii) is called a ${\mathcal C}$-cycle. \end{definition}
If ${\mathcal C}$ fulfills (i),(ii) and (iii) for $j=1,2$, then ${\mathcal C}$ is a $[u_1,u_2]$-set of chains in $B$.
\begin{lemma} \label{cikli} Let $B$ be a bipartite circuit graph with outer cycle $C$, and let $u_1,u_2,u_3$ be any vertices of $C$. If all internal vertices of $B$ are of degree at least 4, then there exists a $[u_1,u_2,u_3]$-set of chains in $B$. \end{lemma}
\noindent{\bf Proof.\ } By Lemma \ref{plainchain}, $B-u_3$ is a plane chain of blocks $$B-u_3=B_1,b_1,B_2,\ldots,b_{n-1}, B_n\,.$$ For $j=1,2$, let $k_j\in [n]$ be such that $u_j\in V(B_{k_j})\setminus V(B_{k_j-1})$ (here we set $B_0=\emptyset$).
For $i\in [n]$, define $P_i=B_i$ and ${\mathcal C_i}=\emptyset$, if $B_i$ is trivial. In the sequel we define $P_i$ and ${\mathcal C_i}$ for nontrivial blocks $B_i$.
{\em Case 1: $k_1\neq k_2$}. By Lemmas \ref{posebna} and \ref{to} there is
\begin{itemize} \item[(i)] a $(b_{i-1},b_i;b_{i-1})$-set of chains ${\mathcal C_i}$ in $B_i$, if $i\notin \{k_1,k_2\}$, \item[(ii)] a $(b_{i-1},b_i;b_{i-1},u_j)$-set of chains ${\mathcal C_i}$ in $B_i$, if $i=k_j$ for $j=1,2$. \end{itemize} Let $P_i$ be the corresponding ${\mathcal C_i}$-path in $B_i$, for $i\in [n]$. Let $b_0\in V(B_1)$ and $b_n\in V(B_n)$ be the neighbors of $u_3$ in $C$ and define $P_0=u_3,b_0$ and $P_{n+1}=b_n,u_3$. Define $C'=\bigcup_{i=0}^{n+1} P_i$ and $${\mathcal C}=\bigcup_{i=1}^n {\mathcal C_i}\,.$$ Then ${\mathcal C}$ is a $[u_1,u_2,u_3]$-set of chains, and $C'$ is a corresponding ${\mathcal C}$-cycle.
{\em Case 2: $k_1= k_2$}. By Lemmas \ref{posebna} and \ref{to} there is
\begin{itemize} \item[(i)] a $(b_{i-1},b_i;b_{i-1},b_i)$-set of chains ${\mathcal C_i}$ in $B_i$, if $i\neq k_1$, \item[(ii)] a $(b_{k_1-1},b_{k_1};u_1,u_2)$-set of chains ${\mathcal C_{k_1}}$ in $B_{k_1}$. \end{itemize} The rest of the proof is the same as above.
$\square$
\begin{lemma} \label{bipartite} Let $B$ be a non-bipartite circuit graph with outer cycle $C$. Suppose that $x,y\in V(C)$ and that $Q$ is a $xy$-path in $C$. If all internal vertices of $B$ are of degree at least 4, every vertex in $V(C)\setminus V(Q)$ is of degree at least 3 in $B$, and $B-x$ is bipartite, then for any vertex $u\in V(C-x)$ there is an odd and an even $(x,y;u)$-set of chains in $B$. \end{lemma}
\noindent{\bf Proof.\ } We claim that for any neighbor $z$ of $x$, there is a $(x,y;u)$-set of chains ${\mathcal C}$ in $B$ such that a ${\mathcal C}$-path contains the edge $xz$.
Before we prove the claim let us see how we prove the lemma using this claim. Since $B$ is non-bipartite and 2-connected, there is an odd cycle $C'$ containing $x$. Let $x_1$ and $x_2$ be the neighbors of $x$ in $C'$. Let $R$ be the $x_1x_2$-path in $C'$ not containing $x$. Suppose that $R_i$ is a $x_iy$-path in $B-x$ for $i=1,2$. Since $B-x$ is bipartite, $R_1\cup R_2\cup R$ is an even closed walk, and since $R$ is odd, $R_1$ and $R_2$ have different parities. It follows that every $x_1y$-path in $B-x$ is odd, and every $x_2y$-path in $B-x$ is even (or vice-versa). Using the above claim and setting $z=x_1$ (resp. $z=x_2$) we get an even (resp. an odd) $(x,y;u)$-set of chains in $B$.
In the rest of the proof we prove the claim. Let $$B-x=B_1,b_1,B_2,\ldots,b_{n-1}, B_n$$
and suppose that $xz\in E(B)$. By Lemma \ref{osnovna2}, $|V(B_i\cap Q)|\geq 2$ for $i\in [n]$, hence we may assume that $y\in V(B_n)\setminus V(B_{n-1})$. Let $k_u,k_{z}\in [n]$ be such that $u\in V(B_{k_u})\setminus V(B_{k_u+1})$ and $z\in V(B_{k_{z}})\setminus V(B_{k_{z}+1})$ (here we set $B_{n+1}=\emptyset$). It follows from these definitions that $u\neq b_{k_u}$ and $z\neq b_{k_z}$.
We shall construct a $(x,y;u)$-set of chains ${\mathcal C}$ in $B$, and a ${\mathcal C}$-path $P$ in $B$, so that $P$ contains the edge $xz$.
We distinguish several cases with regard to $k_u$ and $k_z$. In each case we define $P_i=B_i$ and ${\mathcal C_i}=\emptyset$, if $B_i$ is trivial and $i\in [n]$. Now we treat different cases and define $P_i$ and ${\mathcal C_i}$, if $B_i$ is nontrivial.
Suppose that $k_{z}<k_u<n$. By Lemma \ref{posebna} and Lemma \ref{to} there is \begin{itemize} \item[(i)] a $(z,b_{k_z};b_{{k_z}-1},b_{k_z})$-set of chains in $B_{k_z}$, where $k_z\neq k_u$ \item[(ii)] a $(b_{i-1},b_i;b_{i-1},b_i)$-set of chains in $B_i$, for $k_z< i< n,i\neq k_u$ \item[(iii)] a $(b_{n-1},y;b_{n-1})$-set of chains in $B_{n}$, where $n\neq k_u$ \item[(iv)] a $(b_{k_u-1},b_{k_u};u)$-set of chains in $B_{k_u}$. \end{itemize}
Let ${\mathcal C_i}$ be the set of chains in $B_i$ (as defined above), and let $P_i$ be a ${\mathcal C_i}$-path
for $i\in [n]\setminus [k_z-1]$. By Lemma \ref{skupek}, $G_0=B-\bigcup_{i=k_z}^{n} V(B_i)$ is a good chain with respect to $x$ in $B$ (if $k_z=1$ this is irrelevant). Let $P_0$ be the path $x,z$ and define $P=P_0\cup \bigcup_{i=k_z}^{n} P_i$. Then $${\mathcal C}=\{G_0\} \cup \bigcup_{i=k_z}^{n} {\mathcal C_{i}}$$ is a $(x,y;u)$-set of chains in $B$ and $P$ is a ${\mathcal C}$-path. If $k_z=k_u<n$ we use (ii) and (iii), and instead of (iv) we use \begin{itemize} \item[(v)] there is a $(z,b_{k_u};u)$-set of chains in $B_{k_u}$. \end{itemize} The rest of the proof is the same as above. If $k_z<k_u=n$ we use (i) and (ii), and instead of (iv) we use \begin{itemize} \item[(vi)] there is a $(b_{n-1},y;u)$-set of chains in $B_{n}$ \end{itemize} and the rest of the proof is (again) the same as above (note that (v) and (vi) follow from Lemma \ref{to}).
If $k_u<k_z<n$ then we use (i), (ii) and (iii). By Lemma \ref{spajanje} and (i), there is a $(z,b_{k_z}; b_{k_z},u)$-set of chains ${\mathcal F_{k_z}}$ in $\bigcup_{i=k_u}^{k_z} B_i$. By Lemma \ref{skupek}, $G_1=B-\bigcup_{i=k_u}^{n} V(B_i)$ is a good chain with respect to $x$ in $B$.
It follows that
$${\mathcal C}=\{G_1\}\cup {\mathcal F_{k_z}} \cup \bigcup_{i=k_z+1}^{n} {\mathcal C_{i}}$$
is a $(x,y;u)$-set of chains in $B$. The path $P$ (as defined above) is a ${\mathcal C}$-path. This proves the claim when $k_z\neq n$.
Assume now that $k_z=n$.
If $k_z=k_u=n$ and $z\neq y$ then, by Lemma \ref{to}, there is \begin{itemize} \item[(vii)] a $(z,y;u)$-set of chains ${\mathcal H_{n}}$ in $B_{n}$. \end{itemize}
By Lemma \ref{skupek}, $G_{2}=B- V(B_n)$ is a good chain with respect to $x$ in $B$. It follows that
${\mathcal C}=\{G_{2}\}\cup {\mathcal H_{n}} $
is a $(x,y;u)$-set of chains in $B$, and $P$ (as defined above) is a ${\mathcal C}$-path.
If $k_z=k_u=n$ and $z=y=u$ then $xz$ is an edge of $C$ (recall that $y\neq b_{n-1}$ and that $y$ is an external vertex of $B$). By Lemma \ref{osnovna}, $G_3=B-y$ is a good chain with respect to $x$. Therefore ${\mathcal C}=\{G_3\}$ is a $(x,y;u)$-set of chains, where a ${\mathcal C}$-path in $B$ is the path $x, y$. If $z=y\neq u$ then $B_n$ is a good chain with respect to $y$ and $u$, and $G_2$ is a good chain with respect to $x$. It follows that $\{B_n,G_2\}$ is a $(x,y;u)$-set of chains in $B$; again a ${\mathcal C}$-path in $B$ is the path on two vertices $x,y$.
Finally, if $k_u<k_z=n$ and $z\neq y$ there is \begin{itemize} \item[(viii)] a $(z,y;b_{n-1})$-set of chains in $B_{n}$. \end{itemize}
By Lemma \ref{spajanje} and (viii), there is a $(z,y;u)$-set of chains ${\mathcal I_{n}}$ in $\bigcup_{i=k_u}^{n} B_i$. In this case
${\mathcal C}=\{G_{1}\}\cup {\mathcal I_{n}}$ is a $(x,y;u)$-set of chains in $B$. If $z=y$, then $G_4=\bigcup_{i=k_u}^{n} B_i$ is a good chain with respect to $u$ and $y$, hence
${\mathcal C}=\{G_{1},G_4\} $ is a $(x,y;u)$-set of chains in $B$. This proves the claim, and hence also the lemma.
$\square$
\begin{theorem}\label{main} Let $B$ be a non-bipartite circuit graph with outer cycle $C$, and let $x,y\in V(C)$. If all internal vertices of $B$ are of degree at least 4, then for any vertex $u\in V(C)$ there is a $(x,y;u)$-set of chains in $B$. Moreover, if $Q$ is a $xy$-path in $C$ such that every vertex in $V(C)\setminus V(Q)$ is of degree at least 3 in $B$, then for any vertex $u\in V(Q)$ there is an odd and an even $(x,y;u)$-set of chains in $B$. \end{theorem}
\noindent{\bf Proof.\ } Suppose the theorem is not true. Let $B$ be a counterexample of minimum order. We may assume that $u\neq x$ (otherwise $u\neq y$, and the proof is analogous).
By Lemma \ref{plainchain}, $B-x$ is a plain chain of blocks $$B-x=B_1,b_1,B_2,\ldots,b_{n-1}, B_n\,.$$ Let $k_1,k_2\in [n]$ be such that $u\in V(B_{k_1})\setminus V(B_{{k_1-1}})$ and $y\in V(B_{k_2})\setminus V(B_{k_2-1})$ (here we set $B_0=\emptyset$).
Let $b_0\in V(B_1)$ and $b_n\in V(B_n)$ be the neighbors of $x$ in $C$. We may assume that $xb_0$ is an edge of $Q$ and $xb_n$ is not an edge of $Q$, and that $u\in V(Q)$ (the last sentence of the theorem assumes $u\in V(Q)$, and for the proof of the first part of the theorem $u\in V(Q)$ may be assumed without loss of generality). Since $u\in V(Q)$ we have $k_1\leq k_2$. We give two constructions. In both constructions we define $P_i=B_i$ and ${\mathcal C_{i}}=\emptyset$, if $B_i$ is trivial. In the sequel we treat nontrivial blocks $B_i$. \\
{\em Construction A.} If $k_1<k_2$ then, by minimality of $B$ (if $B_i$ is non-bipartite) and by Lemma \ref{to} (if $B_i$ is bipartite), there is \begin{itemize} \item[(i)] a $(b_{i-1},b_i;b_i)$-set of chains in $B_i$, for $i\in [k_1-1]$, \item[(ii)] a $(b_{k_1-1},b_{k_1};u)$-set of chains in $B_{k_1}$, \item[(iii)] a $(b_{i-1},b_i;b_{i-1})$-set of chains in $B_i$, for $i\in [k_2-1]\setminus [k_1]$, \item[(iv)] a $(b_{k_2-1},y;b_{k_2-1})$-set of chains in $B_{k_2}$, \end{itemize}
and if $k_1=k_2$ and $y\neq b_0$ there is \begin{itemize}
\item[(v)] a $(b_{k_2-1},y;u)$-set of chains in $B_{k_2}$. \end{itemize}
Note that for $i\in [k_2-1]$ every vertex in $V(C_i)\setminus V(Q_i)$ is of degree at least 3 in $B_i$, where $C_i$ is the outer cycle of $B_i$ and $Q_i=Q\cap B_i$. By minimality of $B$ we may apply the (last) statement of the theorem to $B_i$, if $B_i$ is non-bipartite. Hence, if $B_i$ is non-bipartite for some $i\in [k_2-1]$, there is an odd and an even set of chains for (i), (ii) and (iii). Additionally, if $B_{k_2}$ is non-bipartite and $y=b_{k_2}$, then there is also an odd and an even set of chains
${\mathcal C_{k_2}}$ for (iv) and (v) (by minimality of $B$).
Denote by ${\mathcal C_i}$ the set of chains in $B_i$ defined by (i)-(iv) if $k_1<k_2$; and defined by (i) and (v) if $k_1=k_2$ and $y\neq b_0$. The ${\mathcal C_i}$-path is denoted by $P_i$, for $i\in [k_2]$. Let $P_0$ be the path $x,b_0$. Define $P=\bigcup_{i=0}^{k_2}P_i$. By Lemma \ref{skupek}, $G_1=B-\bigcup_{i=1}^{k_2}B_i$ is a good chain with respect to $x$ in $B$. Hence $${\mathcal C}=\{G_1\}\cup \bigcup_{i=1}^{k_2} {\mathcal C_i}$$ is a $(x,y;u)$-set of chains in $B$. The path $P$ is a ${\mathcal C}$-path in $B$. Moreover, if a block $B_i, i\in [k_2-1]$ is non-bipartite, then there exists an odd and an even set of chains ${\mathcal C_i}$ in $B_i$, and so ${\mathcal C}$ is an odd or an even set of chains subject to the choice of ${\mathcal C_i}$.
If $y=b_0$ then $u=y=b_0$ (by our assumptions $u\in V(Q)$ and $u\neq x$). In this case $G_2=B-y$ is a good chain with respect to $x$, by Lemma \ref{osnovna}. Hence, ${\mathcal C}=\{G_2\}$ is a $(x,y;u)$-set of chains, and $P=x,y$ is the corresponding ${\mathcal C}$-path. This proves the first claim of the theorem; and also the second claim of the theorem if $B_i$ is non-bipartite for some $i\in [k_2-1]$, or if $B_{k_2}$ is non-bipartite and $y=b_{k_2}$ (note that this, in particular, proves the theorem for the case when $y=b_n$ and $B-x$ is non-bipartite). To finish the proof of the second claim of the theorem we give construction B, in which we assume that
every vertex in $V(C)\setminus V(Q)$ is of degree at least 3 in $B$. We also assume that $B_i$ is bipartite for $i\in [k_2-1]$, and if $y=b_{k_2}$ then $B_i$ is bipartite for $i\in [k_2]$. \\
{\em Construction B.} If $k_1<k_2$ and $y\neq b_{k_2}$ then, by minimality of $B$ (if $B_i$ is non-bipartite) and by Lemma \ref{to} (if $B_i$ is bipartite), there is \begin{itemize} \item[(vi)] a $(b_{i-1},b_i;b_{i-1})$-set of chains in $B_i$, for $i\in [n]\setminus [k_2]$,
\item[(vii)] a $(b_{k_2},y;b_{k_2-1})$-set of chains in $B_{k_2}$, \end{itemize} and if $k_1=k_2$ and $y\neq b_{k_2}$ there is
\begin{itemize} \item[(viii)] a $(b_{k_2},y;u)$-set of chains in $B_{k_2}$. \end{itemize}
Denote by ${\mathcal C_i}$ the set of chains in $B_i$ defined by (vi) and (vii) if $k_1<k_2$ and $y\neq b_{k_2}$; and defined by (vi) and (viii) if $k_1=k_2$ and $y\neq b_{k_2}$. The ${\mathcal C_i}$-paths are denoted by $P_i$, for $i\in [n]\setminus [k_2-1]$.
Suppose that $y\neq b_{k_2}$ and $k_1< k_2$. Let $R_1$ and $R_2$ be the $yb_{k_2}$-paths in $C_{k_2}$ (where $C_{k_2}$ is the outer cycle of $B_{k_2}$), and assume $b_{k_2-1}\in V(R_1)$. Since every vertex in $V(C)\setminus V(Q)$ is of degree at least 3 in $B$, we find that every vertex in $V(C_{k_2})\setminus V(R_1)$ is of degree at least 3 in $B_{k_2}$. Therefore, by minimality of $B$, if $B_{k_2}$ is non-bipartite there is an odd and an even set of chains ${\mathcal C_{k_2}}$ for (vii) and (viii). Moreover, if $B_i$ is non-bipartite there exist odd and even sets of chains ${\mathcal C_{i}}$ for $i\in [n]\setminus [k_2]$, as given by (vi).
If $u\neq b_{k_1}$ then, by Lemma \ref{spajanje} (see notes directly after Lemma \ref{spajanje}) and (vii), there is a $(b_{k_2},y;u)$-set of chains ${\mathcal D_{k_2}}$ in $ \bigcup_{i=k_1}^{k_2} B_i$ (recall the assumption that $B_i$ is bipartite for $i\in [k_2-1]$).
By Lemma \ref{skupek}, $G_3=B-\bigcup_{i=k_1}^{n}B_i$ is a good chain with respect to $x$. Let $P_{n+1}$ be the path $x,b_n$. Define $P=\bigcup_{i=k_2}^{n+1}P_i$.
Then $${\mathcal C}=\{G_3\}\cup {\mathcal D_{k_2}}\cup \bigcup_{i=k_2+1}^{n} {\mathcal C_i}$$ is a $(x,y;u)$-set of chains in $B$. The corresponding ${\mathcal C}$-path is $P$.
If $u=b_{k_1}$ the construction of a $(x,y;u)$-set of chains in $B$ is analogous as in the case $u\neq b_{k_1}$ (the only difference is that ${\mathcal D_{k_2}}$ is a $(b_{k_2},y;u)$-set of chains in $ \bigcup_{i=k_1+1}^{k_2} B_i$, and $G_3=B-\bigcup_{i=k_1+1}^{n}B_i$).
If $y\neq b_{k_2}$ and $k_1=k_2$, we use (vi) and (viii). In this case $${\mathcal C}=\{G_3\}\cup \bigcup_{i=k_2}^{n} {\mathcal C_i}$$ is a
$(x,y;u)$-set of chains in $B$. If $B_i$ is non-bipartite for some $i\in [n]\setminus [k_2-1]$ then we can choose ${\mathcal C}_i$ so that ${\mathcal C}$ is an odd or an even $(x,y;u)$-set of chains in $B$ (in all cases above). This proves the second claim of the theorem if $y\neq b_{k_2}$ and $B_i$ is non-bipartite for some $i\in [n]\setminus [k_2-1]$.
Suppose now that $y=b_{k_2},k_2\neq n$ and $u\neq b_{k_1}$. If we use (vi) for $i=k_2+1$, we find that ${\mathcal C_{k_2+1}}$ is a $(b_{k_2},b_{k_2+1};b_{k_2})$-set of chains in $B_{k_2+1}$. Hence, by Lemma \ref{spajanje} (see notes after Lemma \ref{spajanje}), there is a $(b_{k_2},b_{k_2+1};u)$-set of chains ${\mathcal F_{k_2+1}}$ in
$ \bigcup_{i=k_1}^{k_2+1} B_i$ (recall the assumption that $B_i$ is bipartite for $i\in [k_2]$ if $y=b_{k_2}$).
Then $${\mathcal C}=\{G_3\}\cup {\mathcal F_{k_2+1}}\cup \bigcup_{i=k_2+2}^{n} {\mathcal C_i}$$ is a $(x,y;u)$-set of chains in $B$. The corresponding ${\mathcal C}$-path is $P$.
If $y=b_{k_2}, k_2\neq n$ and $u=b_{k_1}$, then let
${\mathcal H_{k_2+1}}$ be a $(b_{k_2},b_{k_2+1};u)$-set of chains in
$ \bigcup_{i=k_1+1}^{k_2+1} B_i$ (it exists by Lemma \ref{spajanje}) and define $G_4=B-\bigcup_{i=k_1+1}^{n} B_i$.
In this case
${\mathcal C}=\{G_4\}\cup {\mathcal H_{k_2+1}}\cup \bigcup_{i=k_2+2}^{n} {\mathcal C_i}$ is a $(x,y;u)$-set of chains in $B$.
Observe that, if $B_i$ is non-bipartite for some $i\in [n]\setminus[k_2]$, then we can choose $P_i$, and hence also $P$, so that ${\mathcal C}$ is odd or even. This proves the second claim of the theorem if $y=b_{k_2}$ ($k_2\neq n$) and $B_i$ is non-bipartite for some $i\in [n]\setminus [k_2]$.
The last case to consider is when $B_i$ is bipartite for $i\in [n]$. In this case $B-x$ is bipartite and the theorem follows from Lemma \ref{bipartite}.
$\square$
The proof of the following lemma is similar to the proof of Lemma \ref{spajanje} and is therefore omitted.
\begin{lemma} \label{spajanje1}
Let $G$ be a bipartite plain chain of blocks $$G=B_1,b_1,B_2,\ldots,b_{n-1}, B_n\,.$$ Let $u\in V(B_j)\setminus \{b_{j-1},b_j\}$ for some $j\in \{2,\ldots,n-1\}$, and suppose that there exists a $[b_{j-1},b_j,u]$-set of chains in $B_j$. Then for any $v'\in V(B_1)\setminus V(B_{2})$ and $v''\in V(B_n)\setminus V(B_{n-1})$, there exists a $[v',v'',u]$-set of chains in $G$. \end{lemma}
\begin{theorem} \label{glavni} Let $B$ be a non-bipartite circuit graph with outer cycle $C$. Suppose that $x,y\in V(C)$ and that $B$ is good with respect to $x$ and $y$. If every internal vertex of $B$ is of degree at least 4 and $B$ is not an odd cycle,
then there exists a $[x,y]$-set of chains in $B$. \end{theorem}
\noindent{\bf Proof.\ } By Lemma \ref{plainchain}, $B-x$ is a plain chain of blocks $$B-x=B_1,b_1,B_2,\ldots,b_{n-1}, B_n\,.$$ Let $b_0\in V(B_1)$ and $b_n\in V(B_n)$ be the neighbors of $x$ in $C$. Let $k \in [n]$ be such that $y\in V(B_{k})\setminus V(B_{{k-1}})$ (here we set $B_0=\emptyset$).
Suppose that at least one block $B_i$ is non-bipartite. If $B_i$ is trivial, define
${\mathcal C_i}=\emptyset$ and $P_i=B_i$. Otherwise, by Theorem \ref{main} and Lemma \ref{to} there is \begin{itemize} \item[(i)] a $(b_{i-1},b_i;b_{i})$-set of chains in $B_i$, for $i\in [k-1]$,
\item[(ii)] a $(b_{k-1},b_k;y)$-set of chains in $B_{k}$, \item[(iii)] a $(b_{i-1},b_i;b_{i-1})$-set of chains in $B_{i}$ for $i\in [n]\setminus [k]$. \end{itemize}
Denote by ${\mathcal C_i}$ the set of chains in $B_i$ defined by (i), (ii) and (iii). The ${\mathcal C_i}$-paths are denoted by $P_i$, for $i\in [n]$. Let $P_0=x,b_0$ and $P_{n+1}=b_n,x$, and define $C'=\bigcup_{i=0}^{n+1} P_i$. Since $B_i$ is non-bipartite for some $i\in [n]$, there is an odd and an even set of chains
${\mathcal C_i}$ for (i),(ii) or (iii). Hence, we can choose ${\mathcal C_i}$ and $P_i$ so that
$C'$ is even, and therefore ${\mathcal C}= \bigcup_{i=1}^n{\mathcal C_i}$ is a $[x,y]$-set of chains in $B$.
Suppose that all blocks $B_i, i\in [n]$ are bipartite. Then all odd faces of $B$ are incident to $x$. Define $B_{n+1}=P_{n+1}$.
If $y\notin\{b_{k-1},b_k\}$, then by Lemma \ref{cikli} there exists a $[b_{k-1},b_k,y]$-set of chains ${\mathcal D_k}$ in $B_k$. Let $G=B_1,b_1,B_2,\ldots,b_{n-1}, B_n,b_n,B_{n+1}\,.$ By Lemma \ref{spajanje1} there is a $[x,y]$-set of chains in $G$, which is also a $[x,y]$-set of chains in $B$ (because $G$ is a spanning subgraph of $B$).
Assume now that $y\in\{b_{k-1},b_k\}$. Suppose that $y\in \{b_0,b_n\}$. We may assume $y=b_0$. If a block $B_i$ of $G$ is nontrivial, then by Lemma \ref{cikli}, there is a $[b_{i-1},b_i]$-set of chains ${\mathcal F_i}$ in $B_i$. Therefore, by Lemma \ref{spajanje1} there is a $[x,y]$-set of chains in $G$, which is also a $[x,y]$-set of chains in $B$. Otherwise all blocks $B_i,i\in [n+1]$ are trivial. If $C$ is an even cycle, then $C$ itself is a $[x,y]$-set of chains in $B$. Otherwise $C$ is odd, and since $B$ is not an odd cycle, $C$ has a chord. Hence $B$ has an even cycle $C_0$ (which goes through $x$). Clearly, $C_0$ together with blocks $B_i$ such that
$|V(B_i)\cap V(C_0)|\leq 1$ forms a $[x,y]$-set of chains in $B$.
Hence we may assume that $y=b_k$ where $k\notin\{0,n\}$. Suppose that all bounded odd faces of $B$ are incident to $y$ (and recall that all bounded odd faces are incident to $x$). Then there are exactly one or two such faces. However, if there is exactly one bounded odd face in $B$, and this odd face is incident to $x$ and $y$, then $xy\in E(C)$ (follows from the fact that $B$ is good with respect to $x$ and $y$) and so $y\in \{b_0,b_n\}$.
Therefore there are exactly two bounded odd faces in $B$ (both incident to $x$ and $y$). In this case the cycle $C'$ (defined above) bounds exactly two odd faces of $B$ and therefore $C'$ is even. Hence, ${\mathcal C}$ (defined above) is a $[x,y]$-set of chains in $B$.
We may therefore assume that there is a bounded odd face $F$ of $B$, which is not incident to $y$, and that $y\notin \{b_0,b_n\}$.
Let $xx_1$ and $xx_2$ be edges incident to $F$. Since $F$ is not incident to $y=b_k$ we may assume, without loss of generality, that $x_1,x_2\in \bigcup_{i=1}^{k} V(B_i)$. Since $F$ is an odd face and $G$ is bipartite, every $x_1x$-path in $G$ is odd and every $x_2x$-path in $G$ is even (or vice-versa).
Let $k'\in [k]$ be such that $x_1\in V(B_{k'})\setminus V(B_{k'+1})$. If $B_{k'}$ (resp. $B_i$) is nontrivial, then by Lemma \ref{posebna} and Lemma \ref{to} there is \begin{itemize} \item[(iv)] a $(x_1,b_{k'};b_{k'-1},b_{k'})$-set of chains ${\mathcal G_{k'}}$ in $B_{k'}$, \item[(v)] a $(b_{i-1},b_i;b_{i-1},b_i)$-set of chains ${\mathcal G_{i}}$ in $B_{i}$ for $i\in [n]\setminus [k']$. \end{itemize}
Let $P_i$ be the ${\mathcal G_{i}}$-path in $B_i$ for $i\in [n]\setminus [k'-1]$ (if $B_i$ is trivial, define $P_i=B_i$ and ${\mathcal G_{i}}=\emptyset$), and define $C''=\bigcup_{i=k'}^{n+1}P_i \cup \{xx_1\}$ (recall that $P_{n+1}=b_n,x$). Since every $x_1x$-path in $G$ is odd, $C''$ is even. By (iv) and Lemma \ref{spajanje}, there is a $(x_1,b_{k'};b_{k'})$-set of chains ${\mathcal H_{k'}}$ in $\bigcup_{i=1}^{k'}B_i$. Then ${\mathcal G}=\bigcup_{i=k'+1}^n {\mathcal G_{i}}\cup {\mathcal H_{k'}}$ is a $[x,y]$-set of chains in $B$, and $C''$ is a ${\mathcal G}$-cycle in $B$.
$\square$
\begin{theorem}\label{final} Let $B$ be a circuit graph such that every internal vertex of $B$ is of degree at least 4. Then $B$ has a spanning bipartite cactus. \end{theorem} \noindent{\bf Proof.\ } Let $C$ be the outer cycle of $B$. We prove a slightly stronger statement: if $B$ is a circuit graph such that every internal vertex of $B$ is of degree at least 4, and $x,y\in V(C)$ are vertices such that $B$ is good with respect to $x$ and $y$, then $B$ has a spanning bipartite cactus $T$ such that $x$ and $y$ are contained in exactly one block of $T$.
The proof is by induction on $|V(B)|$. The statement is clearly true if $B$ is an even cycle. If $B$ is an odd cycle and $B$ is good with respect to $x$ and $y$ then $x$ and $y$ are adjacent. A spanning $xy$-path in $B$ is a bipartite spanning cactus in $B$ such that $x$ and $y$ are contained in exactly one block of this cactus.
If $B$ is not an odd cycle, then by Theorem \ref{glavni} (if $B$ is non-bipartite) and Lemma \ref{cikli} (if $B$ is bipartite), there is a $[x,y]$-set of chains ${\mathcal C}=\{G_1,\ldots,G_k\}$ in $B$. Let $C'$ be a ${\mathcal C}$-cycle. Note that each block $B'$ of a chain $G_i,i\in [k]$ is good with respect to (both) cutvertices of $G_i$ contained in $B'$. Moreover, either $x\notin \bigcup_{i=1}^k V(G_i)$ or a chain of ${\mathcal C}$ is good with respect to $x$ (a similar fact is true for $y$). Therefore we can use the induction hypothesis to obtain a spanning bipartite cactus $T(B')$ in $B'$ such that (both) cutvertices of $G_i$ contained in $B'$ are contained in exactly one block of $T(B')$. Moreover the block $B_x$ that contains $x$ (if any) has a spanning bipartite cactus $T(B_x)$ such that $x$ is contained in exactly one block of $T(B_x)$ (a similar fact is true for $y$). Let ${\mathcal B}$ be the set of all blocks of $G_i,i\in [k]$ and define $T=C'\cup \bigcup_{B' \in {\mathcal B}}T(B')$. This gives the required bipartite cactus in $B$.
$\square$
Let ${\mathcal P}$ be the class of 3-connected planar graphs whose prisms are not hamiltonian. We end the article with the problem of determining the minimum ratio of cubic vertices in a graph $G\in {\mathcal P}$. Let $V_3(G)$ denote the set of cubic vertices in $G$.
\begin{problem} Determine the minimum $\epsilon$ such that there exist arbitrarily large graphs $G\in {\mathcal P}$ with
$|V_3(G)|/|V(G)|<\epsilon$. In particular, can $\epsilon$ be arbitrarily small? \end{problem}
\noindent {\bf Acknowledgement:} The author thanks Uro\v s Milutinovi\'c for proofreading parts of the final version of this paper. This work was supported by the Ministry of Education of Slovenia [grant numbers P1-0297, J1-9109].
\end{document}
\begin{document}
\fancyhead{}
\title{On Training Sample Memorization: Lessons from Benchmarking Generative Modeling with a Large-scale Competition}
\author{Ching-Yuan Bai} \email{b05502055@csie.ntu.edu.tw} \affiliation{
\institution{
Computer Science and Information Engineering\\ National Taiwan University}
\country{Taiwan}
}
\author{Hsuan-Tien Lin} \email{htlin@csie.ntu.edu.tw} \affiliation{
\institution{Computer Science and Information Engineering\\ National Taiwan University}
\country{Taiwan}
}
\author{Colin Raffel} \email{craffel@google.com} \affiliation{
\institution{Google}
\country{USA} }
\author{Wendy Chih-wen Kan} \email{wendykan@google.com} \affiliation{
\institution{Kaggle, Google}
\country{USA} }
\renewcommand{\shortauthors}{Bai et al.}
\begin{abstract} Many recent developments on generative models for natural images have relied on heuristically-motivated metrics that can be easily gamed by memorizing a small sample from the true distribution or training a model directly to improve the metric. In this work, we critically evaluate the gameability of these metrics by designing and deploying a generative modeling competition. Our competition received over 11000 submitted models. The competitiveness between participants allowed us to investigate both intentional and unintentional memorization in generative modeling. To detect intentional memorization, we propose the ``Memorization-Informed Fr\'echet Inception Distance'' (MiFID) as a new memorization-aware metric and design benchmark procedures to ensure that winning submissions made genuine improvements in perceptual quality. Furthermore, we manually inspect the code for the 1000 top-performing models to understand and label different forms of memorization. Our analysis reveals that unintentional memorization is a serious and common issue in popular generative models. The generated images and our memorization labels of those models as well as code to compute MiFID are released to facilitate future studies on benchmarking generative models. \end{abstract}
\begin{CCSXML} <ccs2012>
<concept>
<concept_id>10010147.10010257</concept_id>
<concept_desc>Computing methodologies~Machine learning</concept_desc>
<concept_significance>500</concept_significance>
</concept>
</ccs2012> \end{CCSXML}
\ccsdesc[500]{Computing methodologies~Machine learning}
\keywords{benchmark, competition, neural networks, generative models, memorization, datasets, computer vision}
\maketitle
\section{Introduction} Recent work on generative models for natural images has produced huge improvements in image quality, with some models producing samples that can be indistinguishable from real images \cite{karras2017progressive,karras2019style,karras2019analyzing,brock2018large,kingma2018glow,maaloe2019biva,menick2018generating,razavi2019generating}. Improved sample quality is important for tasks like super-resolution \cite{ledig2017photo} and inpainting \cite{yu2019free}, as well as creative applications \cite{park2019semantic,isola2017image,zhu2017unpaired,zhu2017your}. These developments have also led to useful algorithmic advances on other downstream tasks such as semi-supervised learning \cite{kingma2014semi,odena2016semi,salimans2016improved,izmailov2019semi} or representation learning \cite{dumoulin2016adversarially,donahue2016adversarial,donahue2019large}.
Modern generative models utilize a variety of underlying frameworks, including autoregressive models \cite{oord2016pixel}, Generative Adversarial Networks \citep[GANs;][]{goodfellow2014generative}, flow-based models \cite{dinh2014nice,rezende2015variational}, and Variational Autoencoders \citep[VAEs;][]{kingma2013auto,rezende2014stochastic}. This diversity of approaches, combined with the subjective nature of evaluating generation performance, has prompted the development of heuristically-motivated metrics designed to measure the perceptual quality of generated samples such as the Inception Score~\citep[IS;][]{salimans2016improved} or the Fr\'echet Inception Distance~\citep[FID;][]{heusel2017gans}. These metrics are used in a benchmarking procedure where ``state-of-the-art'' results are claimed based on a better score on standard datasets.
Indeed, much recent progress in the field of machine learning as a whole has relied on useful benchmarks on which researchers can compare results. Specifically, improvements on the benchmark metric should reflect improvements towards a useful and nontrivial goal. Evaluation of the metric should be a straightforward and well-defined procedure so that results can be reliably compared. For example, the ImageNet Large-Scale Visual Recognition Challenge \cite{deng2009imagenet,russakovsky2015imagenet} has a useful goal (classify objects in natural images) and a well-defined evaluation procedure (top-1 and top-5 accuracy of the model's predictions). Sure enough, the ImageNet benchmark has facilitated the development of dramatically better image classification models which have proven to be extremely impactful across a wide variety of applications.
Unfortunately, some of the commonly-used benchmark metrics for generative models of natural images do not satisfy the aforementioned properties. For instance, although the IS is demonstrated to correlate well with human perceived image quality~\citep{salimans2016improved}, \citet{barratt2018note} point out several flaws of the IS when used as a single metric for evaluating generative modeling performance, including its sensitivity to the choice of a representational space, which undermines generalization capability. Separately, directly optimizing a model to improve the IS can result in extremely \textit{unrealistic}-looking images~\citep{barratt2018note} despite resulting in a better score. It is also well-known that a good IS can be achieved~\citep{gulrajani2018towards} by memorizing images from the training set (i.e.\ producing \textit{non-novel} images). On the other hand, FID is widely accepted as an improvement over IS due to its better consistency under perturbation~\cite{heusel2017gans}. However, there is no clear evidence of FID resolving any of the flaws of the IS.
Motivated to better understand the potential misalignment between the goal and the benchmark in generative modeling, we benchmark generative models and critically examine the metrics used to evaluate them by holding a public machine learning competition. To the best of our knowledge, no large-scale generative modeling competitions have ever been held, possibly due to the immense difficulty of measuring perceptual quality and identifying training sample memorization in an efficient and scalable manner. We modified FID to autonomously penalize competition submissions with
memorization in an attempt to discourage contestants from intentionally memorizing training images. We also manually inspected the code for the top 1000 submissions to reveal different forms of intentional or unintentional memorization, to ensure that the winning submissions reflect meaningful improvements, and to confirm efficacy of our proposed metric. We hope that the success of the first-ever generative modeling competition can serve as future reference and stimulate more research in developing better generative modeling benchmarks.
The remainder of this paper is structured as follows: In Section~\ref{sec:background}, we briefly review the metrics and challenges of evaluating generative models. In Section~\ref{sec:design}, we explain in detail the competition design choices and propose a novel benchmarking metric, the Memorization-Informed Fr\'echet Inception Distance (MiFID). We show that MiFID enables fast profiling of participants that intentionally memorize the training examples. In Section~\ref{sec:data-release}, we introduce a dataset released along with this paper that includes over one hundred million generated images and labels of the memorization methods adopted in the generation process (derived from manual code review). In Section~\ref{sec:insights}, we connect the phenomena observed in large-scale benchmarking of generative models in the real world back to the research community and point out crucial but neglected flaws in FID.
\section{Background} \label{sec:background} In generative modeling, the goal is to produce a model $p_\theta(x)$, parameterized by $\theta$, that approximates some true distribution $p(x)$. We are not given direct access to $p(x)$; instead, we are provided only with samples $x$ drawn from $p(x)$. In this paper, we will assume that samples $x$ from $p(x)$ are $64$-by-$64$ pixel natural images, i.e.\ $x \in \mathbb{R}^{64 \times 64 \times 3}$. A common approach is to optimize $\theta$ so that $p_\theta(x)$ assigns high likelihood to training examples drawn from $p(x)$. This suggests a natural evaluation procedure which measures the likelihood assigned by $p_\theta(x)$ to held-out examples drawn from $p(x)$. However, not all generative models return an explicit form of $p_\theta(x)$ that can be directly computed.
Notably, Generative Adversarial Networks (GANs) \cite{goodfellow2014generative} learn an ``implicit'' model that can be sampled from but that does not provide an exact value (nor even an estimate) of $p_\theta(x)$ for any given $x$. GANs have proven particularly successful at producing models that can generate extremely realistic and high-resolution images, which leads to a natural question: how should we evaluate the quality of a generative model if we cannot compute the likelihood $p_\theta(x)$ assigned to held-out examples (following the conventional approach of evaluating generalization in machine learning)?
This question has led to the development of many alternative ways to evaluate generative models \cite{borji2019pros}. A historically popular metric, proposed in \cite{salimans2016improved}, is the Inception Score (IS) which computes \[
\IS(p_\theta) = \mathbb{E}_{x \sim p_\theta(x)} [ \divergence{KL}(\IN(y | x) \| \IN(y)) ] \]
where $\IN(y | x)$ is the conditional probability of a class label $y$ assigned to a datapoint $x$ by a pre-trained Inception Network \cite{szegedy2015going}. More recently, \citep{heusel2017gans} proposed the Fr\'echet Inception Distance (FID) which is claimed to better correlate with perceptual quality. FID uses the estimated mean and covariance of the Inception Network feature space distribution to calculate the distance between the real and fake distributions up to the second order. FID between the real images $r$ and generated images $g$ is computed as: \[
\text{FID}(r, g)=\left\|\mu_{r}-\mu_{g}\right\|_{2}^{2}+\operatorname{Tr}\left(\Sigma_{r}+\Sigma_{g}-2\left(\Sigma_{r} \Sigma_{g}\right)^{\frac{1}{2}}\right) \] where $\mu_{r}$ and $\mu_{g}$ are the means of the real and generated images in the latent space, and $\Sigma_{r}$ and $\Sigma_{g}$ are the covariance matrices for the real and generated feature vectors. A drawback of both IS and FID is that they assign a very good score to a model which simply memorizes a small and finite sample from $p(x)$ \cite{gulrajani2018towards}, an issue we address in Section \ref{sec:mifid}.
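As a concrete reference, the two metrics above can be sketched in a few lines of Python. This is a minimal illustration using NumPy and SciPy, assuming the Inception features and conditional class probabilities have already been extracted by a pre-trained network; note that the Inception Score is conventionally reported as the exponential of the mean KL divergence.

```python
import numpy as np
from scipy.linalg import sqrtm

def inception_score(probs, eps=1e-12):
    """IS from conditional class probabilities IN(y|x), shape (N, C).

    Conventionally reported as exp of the mean KL divergence between
    IN(y|x) and the marginal IN(y)."""
    marginal = probs.mean(axis=0)  # empirical estimate of IN(y)
    kl = (probs * (np.log(probs + eps) - np.log(marginal + eps))).sum(axis=1)
    return float(np.exp(kl.mean()))

def fid(mu_r, sigma_r, mu_g, sigma_g):
    """FID between Gaussians fitted to real/generated Inception features."""
    covmean = sqrtm(sigma_r @ sigma_g)
    if np.iscomplexobj(covmean):  # discard tiny imaginary parts from sqrtm
        covmean = covmean.real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean))
```

With identical statistics for the real and generated sets, `fid` is (numerically) zero, and a model whose conditional class distributions all equal the marginal receives the minimum Inception Score of 1.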
\section{Generative Modeling Competition Design} \label{sec:design} We designed the first generative model competition,\footnote{\url{https://www.kaggle.com/c/generative-dog-images}} hosted by Kaggle, where participants were invited to generate realistic dog images given $20{,}579$ images of dogs from ImageNet \cite{russakovsky2015imagenet}. Participants were required to implement their generative models in a constrained computation environment to prevent them from obtaining unfair advantages (e.g.\ downloading additional dog images). The computation environment was designed with: \begin{itemize}
\item Limited computation resource (9 hours on a NVIDIA P100 GPU for each submission) since generative model performance is known to be highly related to the amount of computational resources used~\cite{brock2018large}.
This allows us to remove the factor of computation budget when analyzing the results.
\item Isolated containerization to avoid continuous training by reloading model checkpoints from previous sessions.
\item No access to external resources (i.e.\ the internet) to avoid usage of pre-trained models or additional data. \end{itemize} Each submission is required to provide $10{,}000$ generated images of dimension $64 \times 64 \times 3$ and receives a public score in return. Participants are allowed to submit any number of submissions during the two-month competition. Before the end of the competition, each team is required to choose two submissions, and the final ranking is determined by the better private score (described below) out of the two selected submissions.
Top 5 place winners on the private leaderboard received monetary rewards (\$$2{,}000$ each), while the top $100$ winners received Kaggle medals.
In the following sections, we discuss how the final decisions were made regarding pretrained model selection (for FID feature projection) and how we enforced penalties to ensure the fairness of the competition. The system design is demonstrated in Figure~\ref{fig:workflow}.
\begin{figure*}
\caption{Workflow for calculating MiFID for the public and private scores. Note that the ``mysterious'' neural network and dataset were kept intentionally vague to prevent score overfitting. Their details are described in Table \ref{table:public-private}.}
\label{fig:workflow}
\end{figure*}
\subsection{Memorization-Informed Fréchet Inception Distance (MiFID)} \label{sec:mifid} The most crucial part of the competition is the performance evaluation metric to score the submissions. To assess the quality of generated images, we adopted the Fréchet Inception Distance~\cite{heusel2017gans} which is a widely used metric for benchmarking GANs. Compared to the Inception Score~\cite{salimans2016improved}, FID has the benefits of better robustness against noise and distortion and more efficient computation~\cite{borji2019pros}.
For a generative modeling competition, a good metric not only needs to reflect the quality of generated samples but must also allow for easy identification of intentional memorization with as little manual intervention as possible. Many forms of intentional memorization were prevented by setting up the aforementioned computation environment, but even with these safeguards it would be possible to ``game'' FID. Specifically, we predicted that memorization of training data would be a major issue, since current generative model evaluation metrics such as IS or FID are prone to assign high scores to models that regurgitate memorized training data \cite{gulrajani2018towards}. This motivated the addition of a ``memorization-aware'' metric that penalizes models producing images too similar to the training set.
Combining memorization-aware and generation quality components, we introduced Memorization-Informed Fr\'echet Inception Distance (MiFID) as the metric used for the competition: \[ \text{MiFID}(S_g, S_t) = m_\tau(S_g, S_t) \cdot \text{FID}(S_g, S_t) \] where $S_g$ is the generated set, $S_t$ is the original training set, FID is the Fr\'echet Inception Distance, and $m_\tau$ is the memorization penalty, as discussed below.
\subsubsection{Memorization Penalty} To capture the similarity between two sets of data -- in our case, generated images and original training images -- we started by measuring similarity between individual images. We opted to use the cosine similarity in a learned representation space to compare images. The cosine similarity is easy to implement with high computational efficiency (with existing optimized BLAS libraries) which is ideal when running a competition with hundreds of submissions each day. The value is also bounded, making it possible to intuitively understand and compare the degree of similarity.
We define the memorization distance $s$ of a target projected generated set $S_g \subseteq \mathbb{R}^d$ with respect to a reference projected training set $S_t \subseteq \mathbb{R}^d$ as the mean, over all elements of $S_g$, of the minimum cosine distance (one minus the absolute cosine similarity) to any element of $S_t$. Intuitively, a lower memorization distance is associated with more severe training sample memorization. Note that the distance is asymmetric, i.e.\ $s(S_g, S_t) \neq s(S_t, S_g)$, but this is irrelevant for our use-case. \[
s(S_g, S_t) := \frac{1}{|S_g|} \sum_{x_g \in S_g} \min_{x_t \in S_t}
\bigg( 1 - \frac{|\langle x_g, x_t \rangle|}{\|x_g\| \cdot \|x_t\|} \bigg) \]
We hypothesize that submissions with intentional memorization would generate images with significantly lower memorization distance. To leverage this idea, only submissions with distance lower than a specific threshold $\tau$ are penalized. Thus, the memorization penalty $m_\tau$ is defined as \[
m_\tau(S_g, S_t) = \left\{\begin{array}{ll}
{\frac{1}{s(S_g, S_t) + \epsilon} \quad (\epsilon \ll 1),} & {\text { if } s(S_g, S_t) < \tau} \\
{1,} & {\text { otherwise }}
\end{array}\right. \] More memorization (a distance falling below the predefined threshold $\tau$) results in heavier penalization. Dealing with false positives and negatives under this penalty scheme is further discussed in Section~\ref{sec:final-ranks}.
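The memorization distance, penalty, and MiFID defined above can be sketched in a few lines of NumPy. This is a minimal illustration of the formulas, not the competition's production code; it assumes feature extraction by a pre-trained network has already produced the row-vector sets $S_g$ and $S_t$:

```python
import numpy as np

def memorization_distance(S_g, S_t):
    """s(S_g, S_t): mean over generated features of the minimum
    (1 - |cosine similarity|) distance to any training feature."""
    G = S_g / np.linalg.norm(S_g, axis=1, keepdims=True)
    T = S_t / np.linalg.norm(S_t, axis=1, keepdims=True)
    cos = np.abs(G @ T.T)  # |cosine similarity|, shape (|S_g|, |S_t|)
    return float(np.mean(1.0 - cos.max(axis=1)))

def memorization_penalty(s, tau, eps=1e-6):
    """m_tau: inflate the score only when the distance falls below tau."""
    return 1.0 / (s + eps) if s < tau else 1.0

def mifid(fid_value, S_g, S_t, tau):
    """MiFID = m_tau(S_g, S_t) * FID(S_g, S_t)."""
    return memorization_penalty(memorization_distance(S_g, S_t), tau) * fid_value
```

Exact copies of the training set drive the distance to zero, so the penalty factor becomes very large; legitimate submissions above the threshold keep their FID unchanged.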
\subsubsection{Preventing overfitting}
In order to prevent participants of the competition from overfitting to the public leaderboard, we used different data for calculating the public and private score, and generalized FID to a different feature projection space.
Specifically, we selected different pre-trained ImageNet classification models for public and private score calculation. For each score, we take one pre-trained model, and use the model to compute both the memorization penalty and FID.
Inception V3 was used for the public score following past literature, while NASNet~\cite{zoph2018learning} was used for the private score. Table \ref{table:public-private} shows the different datasets and models used in the public and private leaderboards. We discuss how NASNet was selected in Section~\ref{sec:model-selection}.
\subsection{Determining Final Ranks} \label{sec:final-ranks}
After the competition was closed to submissions, there was a two-week window to re-process all the submissions and remove those that violated the competition rules (e.g.\ by intentionally memorizing the training set) before the final private leaderboard was announced. The memorization penalty term in MiFID was efficiently configured for re-running with a change of the parameter $\tau$, allowing the results to be finalized within a short time frame.
\subsubsection{Selecting Pre-trained Model for the Private Score} \label{sec:model-selection}
Since it is commonly assumed that FID is invariant to the projection space, the pre-trained model for the private score was selected to best combat intentional memorization of the training set. The goal is to separate legitimate and illegitimate submissions as cleanly as possible. We calculated the memorization distance for a subset of submissions projected with each candidate pre-trained model and coarsely labeled whether each submission intentionally memorized training samples. Coarse labeling of submissions was achieved by exploiting competition-related clues.
There exists a threshold $\tau^*$ that best separates memorized from non-memorized submissions via the memorization distance (see Figures~\ref{fig:dist-histograms-private} and~\ref{fig:dist-histograms-public}). Here we define the memorization margin $d$ of pre-trained model $M$ as \[ d(M) = \min_{\tau}\sum_{\forall S_g} (s(S_g, S_t) - \tau)^2 \] The pre-trained model with the largest memorization margin, in this case NASNet~\cite{zoph2018learning}, was then selected for calculation of the private score, together with the corresponding memorization penalty $m_\tau$ where $\tau = \tau^*$. Selecting the threshold after the competition allowed it to adapt to the actual submissions, and fairness was ensured because false penalizations were handled as described in Section~\ref{sec:false-penalization}.
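Read literally, the margin criterion is a least-squares problem: $\sum (s - \tau)^2$ is minimized at $\tau$ equal to the mean of the observed distances, so the margin measures how spread out a model's memorization distances are. A hypothetical model-selection helper might look like the sketch below (the model names and distance values in the usage example are invented for illustration):

```python
import numpy as np

def memorization_margin(distances):
    """d(M) = min over tau of sum((s - tau)^2); the minimizer is the mean."""
    distances = np.asarray(distances, dtype=float)
    tau_star = distances.mean()
    return float(((distances - tau_star) ** 2).sum()), float(tau_star)

def select_private_model(distances_by_model):
    """Pick the pre-trained model with the largest memorization margin."""
    return max(distances_by_model,
               key=lambda m: memorization_margin(distances_by_model[m])[0])
```

A model whose distance histogram splits into well-separated peaks (as NASNet's did in Figure~\ref{fig:dist-histograms-private}) yields a larger margin than one whose distances cluster tightly.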
\begin{figure}
\caption{Histogram of memorization distance for private leaderboard (using NASNet). The two classes (legitimate models and illegitimate models) are well separated.}
\label{fig:dist-histograms-private}
\end{figure}
\begin{figure}
\caption{Histogram of memorization distance for public leaderboards (using Inception). The two classes (legitimate models and illegitimate models) are well separated.}
\label{fig:dist-histograms-public}
\end{figure}
\begin{table}[ht] \caption{Configurations for the public and private scores} \label{table:public-private} \begin{center} \begin{small} \begin{sc}
\begin{tabular}{|l|l|l|} \hline
& Public & Private \\ \hline Model & Inception & NASNet
\\ \hline DataSet & \begin{tabular}[c]{@{}l@{}}ImageNet dogs \\ 120 breeds, \\ 20579 images\end{tabular} & \begin{tabular}[c]{@{}l@{}}ImageNet dogs +\\ private \\ dogs +\\ Internet dogs\end{tabular} \\ \hline \end{tabular} \end{sc} \end{small} \end{center} \end{table}
\subsubsection{Handling False Penalization} \label{sec:false-penalization} While MiFID was designed to handle penalization automatically, in practice we observed minor mixing of legitimate and illegitimate submissions between the well-separated peaks (Figures~\ref{fig:dist-histograms-private} and~\ref{fig:dist-histograms-public}). While it is well accepted that no model can be perfect, it was necessary to ensure that the competition was fair. Therefore, different strategies were adopted to resolve false positives and negatives. For legitimate submissions that were falsely penalized (false positives), participants were allowed to actively submit rebuttals for the result. For illegitimate submissions that were dissimilar enough to the training set to dodge penalization (false negatives), the team's submitted code was manually reviewed to determine whether intentional memorization was present. This manual review of code submissions was labor intensive, as it required expert knowledge of generative modeling and could not be accomplished by human raters unfamiliar with machine learning. The goal was to review enough submissions that the top 100 teams on the leaderboard would be free of intentional memorization, since we reward the top 100 ranked teams. Thanks to our design of MiFID, it was possible to set the penalty threshold $\tau$ such that we were confident that most intentionally memorizing submissions ranked lower than 100 on the leaderboard were penalized by MiFID. This configuration of MiFID significantly reduced the time needed to finish the review, by approximately 5x. The results of the manual review are presented in Section~\ref{sec:mem-summary}. The review procedure was announced to all participants during\footnote{\url{https://www.kaggle.com/c/generative-dog-images/discussion/106206}} and after\footnote{\url{https://www.kaggle.com/c/generative-dog-images/discussion/102701}} the competition submission period.
\section{Results and Data Release} \label{sec:data-release} A total of $924$ teams joined the competition, producing over $11{,}192$ submissions. Visual samples from submitted images are shown in the appendix.
\subsection{Data Release} The complete dataset is released with the publication of this paper to facilitate future work on benchmarking generative modeling\footnote{\url{https://www.kaggle.com/andrewcybai/generative-dog-images}}. It includes: \begin{itemize}
\item A total of $1{,}675$ submissions selected by users for final scoring, each containing $10{,}000$ generated dog images with dimension $64 \times 64 \times 3$.
\item Manual labels for the top 1000 ranked submissions indicating whether the code is a legitimate generative method and, if not, the type of illegitimacy involved. These labels were extremely labor-intensive to obtain.
\item Crowd-labeled image quality: approximately $50{,}000$ human labels of the quality and diversity of images generated by the top 100 teams (non-memorized submissions only). \end{itemize}
We also release the code to reproduce results in the paper\footnote{\url{https://github.com/jybai/generative-memorization-benchmark}}, as well as demo code to compute MiFID \footnote{\url{https://www.kaggle.com/wendykan/demo-mifid-metric-for-dog-image-generation-comp}}.
\subsection{Summary of Memorization Study} \label{sec:mem-summary} The top 1000 submissions were manually labeled as to whether or not (and how) they intentionally memorized. As we predicted prior to the start of the competition, the most pronounced method of cheating was training sample memorization. We observed different levels of sophistication in these memorization methods, from the very naive (submitting the training images) to the highly complex (designing a GAN to memorize). The labeling results are summarized as follows:
\subsubsection{Memorization GAN (mgan)} Typical GANs are trained via an iterative method in which a generator and a discriminator are updated alternately to converge towards an equilibrium of a minimax game. Memorization GANs are purposely trained to memorize the training set while maintaining the architecture of a typical GAN by modifying the update policy of the generator and discriminator. The training process is split into two parts. In the first part, the discriminator is updated by feeding it only data from the training set (and nothing from the generator). The discriminator then degenerates into a classifier of training set membership. In the second part, the discriminator is frozen and only the generator is updated. To ensure perfect memorization, one-hot vectors instead of randomly sampled vectors are fed into the generator as seeds. Each one-hot vector is then mapped to one of the training images. To avoid mode collapse, hyperparameters are tuned based on conventional GAN metrics such as IS or FID~\cite{memGAN}. \subsubsection{Supervised mapping (sup)} These models achieve memorization directly by training on a supervised task mapping distinct inputs (e.g.\ one-hot vectors whose dimension equals the size of the training set) to data from the training set. This method is easier to identify since the model does not follow the structure of a typical GAN. \subsubsection{Autoencoder and VAE Reconstruction (ae)} Autoencoders trained on the training set can be used directly for reconstruction, with high fidelity and slight compression to avoid being punished by the memorization penalty. Memorization is achieved trivially by feeding the model data from the training set.
On the other hand, while variational autoencoders \cite{kingma2013auto,rezende2014stochastic} are legitimate generation methods when the seeds fed into the decoder are randomly sampled, memorization can be achieved by instead using seeds that are encoded training images (obtained by passing training images through the encoder), so that the decoder essentially reconstructs the training set.
\subsubsection{Augmentation (aug)} These submissions directly generate images by augmenting the training set with typical augmentation techniques such as cropping, morphing, blending and adding noise. The naivety of this approach makes it the easiest to identify, and it can generally be filtered out with MiFID.
\subsection{Competition Results Summary} In Figure~\ref{fig:labeled-fid}, we observe that memorizing models achieve extremely good (low) FID scores on both the public and private leaderboards. Specifically, the memorization GAN achieved top-tier performance, and whether it should be allowed in the competition was a highly debated topic for a long time. Ultimately, the memorization GAN was banned, but it serves as a good reminder that generative-looking models may not actually be generative in the true sense. In Figure~\ref{fig:labeled-md}, we observe that the range of memorization distances calculated with NASNet (private) spans twice as wide as that calculated with Inception (public), allowing easier profiling of intentionally memorizing submissions by the memorization penalty. This reflects the effectiveness of our strategy for selecting the model used to calculate the private score.
Participants generally started with basic generative models such as DCGAN~\cite{radford2015unsupervised} and moved to more complex ones as they grew familiar with the framework. Most notably, BigGAN~\cite{brock2018large}, SAGAN~\cite{zhang2018selfattention} and StyleGAN~\cite{karras2019style} achieved the most success. Interestingly, one submission using DCGAN~\cite{radford2015unsupervised} with spectral normalization~\cite{miyato2018spectral} made it into the top 10 of the private leaderboard, suggesting that different variations of GANs with proper tuning might all be able to achieve good FID scores~\cite{lucic2017gans}.
According to the participants' feedback after the competition~\cite{competition_takeaways}, given the computation constraint, fitting a larger model for fewer epochs, as opposed to a smaller model for more epochs, helps mitigate overfitting and produces more diverse generated instances. Vanilla batch normalization~\cite{DBLP:journals/corr/IoffeS15} is also not advised, as it causes samples to have higher correlation, which results in lower diversity. Participants also found that simpler models with fewer hyperparameters are more easily tuned and produce more stable performance. Finally, participants agreed that the discrepancy between FID scores calculated with different projections on the public and private leaderboards undermines the belief that FID is projection-space invariant.
\begin{figure}
\caption{Distribution of FID for public vs private scores with manual labels. The better (lower) FIDs are the ones using various memorization techniques.}
\label{fig:labeled-fid}
\end{figure}
\begin{figure}
\caption{Distribution of memorization distances for public vs private scores with manual labels. The lower memorization distances correspond to submissions using various memorization techniques.}
\label{fig:labeled-md}
\end{figure}
\section{Insights} \label{sec:insights} \subsection{Unintentional Memorization: models with better FID memorize more} In our observation, almost all removed submissions intentionally memorized the training set. This is likely because it is well known that memorization achieves a good FID score. The research community has long been aware that memorization can be an issue for FID. Although several recent studies have proposed methods for detecting memorization in generative modeling~\cite{gulrajani2018towards,meehan2020non}, there are only limited formal studies on benchmarking both the quality and memorization aspects. This can pose a serious problem if researchers continue to claim state-of-the-art results based on improvements to FID without a systematic way to measure and address training set memorization. Given the disturbing findings of our study, we caution against the danger of ignoring memorization in research benchmark metrics, especially unintentional memorization of training data.
In Figure~\ref{fig:corrs-fid-md}, we plot the relationship between FID and memorization distance for all 500 legitimate models on the public and private leaderboards, respectively. Note that these models are legitimate, most of which are popular variants of state-of-the-art generative models such as DCGAN and SAGAN recently published at top machine learning conferences. Interestingly and unfortunately, the Pearson correlation between FID and memorization distance is above 0.95 for both leaderboards. In general, some correlation is expected, since a generative model that captures the real data distribution will generate samples close to the training data. However, we observe high correlation between FID and memorization distance even among the top 100 submissions, where the difference in quality among submissions is barely identifiable by human perception. In this case, we would expect an unbiased, fair quality metric to be independent of the memorization distance (the distance to the nearest neighbor in the training set).
We argue that it is important for the research community to take memorization more seriously, given how easy it is for memorization to occur unintentionally. The research community needs to better study and understand the limitations of current metrics for benchmarking generative models. When proposing new generative techniques, it is crucial to adopt rigorous inspections of model quality, especially to evaluate training sample memorization. Existing methods such as visualizing pairs of generated images and their nearest neighbors in the training dataset should be mandatory in benchmarks. Furthermore, other methods such as the FID and memorization distance correlation (Figure~\ref{fig:corrs-fid},~\ref{fig:corrs-fid-md}) for different model parameters can also be helpful to include in publications.
\begin{figure}
\caption{Public FID (Inception) vs private FID (NASNet). It shows that the FIDs from the two pre-trained models are highly correlated.}
\label{fig:corrs-fid}
\end{figure}
\begin{figure}
\caption{FID vs memorization distance distribution with non-memorized submissions. It shows that FID is highly correlated to memorization.}
\label{fig:corrs-fid-md}
\end{figure}
\subsection{Debunking FID: choice of latent space for feature projection is non-trivial} In the original paper in which FID was proposed~\cite{heusel2017gans}, features from the coding layer of an Inception model are used as the projected latent space to obtain ``vision-relevant'' features. It is generally assumed that the Fr\'echet Inception Distance is invariant to the chosen latent space for projection as long as the space is ``information-rich'', which is why the arbitrary choice of the Inception model has been widely accepted. Interestingly, there has not been much study of whether this assumption holds, even though a relatively large number of new generative model architectures are being proposed (many of which rely heavily on FID for performance benchmarking). In our competition, we used different models for the public and private leaderboards in an attempt to avoid models which ``overfit'' to some particular feature space.
In Figure~\ref{fig:corrs-fid}, we examine the relationship between the Fr\'echet Distances calculated with two different pre-trained image models that achieved close to state-of-the-art performance on ImageNet classification (specifically, Inception~\cite{szegedy2016rethinking} and NASNet~\cite{zoph2016neural}). At first glance, a Spearman correlation of 0.93 seems to support the assumption that FID is invariant to the projection space. However, on closer inspection we noticed that the mean absolute rank difference between the public and private leaderboards is 124.6 across all 1675 effective submissions. If we remove the rank consistency contributed by illegitimate submissions by considering only the top 500 labeled, legitimate submissions, the mean absolute rank difference is still as large as 94.7 (18.9~\%). To put this into perspective, only the top 5 places receive monetary awards, and there is only one common member between the top 5 as evaluated by FID projected with the two models.
It is common to see publications claiming state-of-the-art performance with less than a $5\%$ improvement over others. As summarized in the introduction of this paper, generative model evaluation, compared to other well-studied tasks such as classification, is extremely difficult. Observing that model performance measured by FID fluctuates by amounts large compared to the improvements claimed by many newly proposed generative modeling techniques, we suggest taking progress on the FID metric with a grain of salt.
\section{Related work} Our competition results verified a well-known issue with generative modeling metrics: training sample memorization is highly correlated with the most popular benchmark metrics, IS and FID~\citep{borji2019pros}. Although this is a well-known issue, most studies in generative modeling have chosen to believe that it only happens in extreme or hypothetical settings. On the other hand, there are alternative metrics shown to be more sensitive to memorization. Borji et al.\ reported that the Average Log Likelihood~\cite{goodfellow2014generative,theis2015note}, Classifier Two-sample Tests~\cite{lehmann2006testing}, and the Wasserstein Critic~\cite{arjovsky2017wasserstein} are capable of detecting some level of memorization. However, the lack of awareness regarding the severity of memorization and the popularity of the Inception Score~\citep[IS;][]{salimans2016improved} and Fr\'echet Inception Distance~\citep[FID;][]{heusel2017gans} have hindered the adoption or development of better generative modeling benchmark metrics. Recent studies have proposed metrics to address model generalization from the perspectives of data-copying~\cite{gulrajani2018towards, meehan2020non} and cross-domain consistency (text, sound, images)~\cite{grnarova2018domain}. However, there is still no comprehensive metric that takes both generation quality and memorization into account.
We believe that a good benchmark metric should be extensively tested for its various properties and potential limits before being widely adopted, making public competitions the perfect testing ground. Our work contributes to the field of generative modeling by showcasing the feasibility of generative modeling competitions and providing the $11{,}000+$ submission results collected in our competition as a standard dataset for evaluating benchmark metrics.
\section{Conclusions} We summarized our design of the first ever generative modeling competition and shared insights obtained regarding FID as a generative modeling benchmark metric. By running a public generative modeling competition, we observed how participants attempted to game the FID, specifically with memorization, when incentivized with monetary awards. Our proposed Memorization-Informed Fr\'echet Inception Distance (MiFID) effectively punished models that intentionally memorize the training set which current popular generative modeling metrics do not take into consideration. We are not suggesting that MiFID is a drop-in replacement for FID in general but rather an efficient profiling tool suitable for competition settings.
We shared two main insights from analyzing the $11{,}000+$ submissions. First, unintentional training sample memorization is a serious and possibly widespread issue. Careful inspection of models and analysis of memorization should be mandatory when proposing new generative modeling techniques. Second, contrary to popular belief, the choice of the pre-trained model latent space when calculating FID is non-trivial. For the top 500 labeled, non-memorized submissions, the mean absolute rank difference between our two models is 18.9~\%, suggesting that FID is too unstable to serve as the benchmark metric for new studies claiming minor improvements over past methods.
\begin{acks} The authors would like to thank Phil Culliton for the contribution in metric development; Douglas Sterlin for code-only competition platform development and support; Julia Elliott for business support and human labeling work. Bai and Lin were partially supported by the MOST of Taiwan under 107-2628-E-002-008-MY3. \end{acks}
\appendix \section{Generated Samples Visualization} \begin{figure}
\caption{Submissions from ranks 1 (first row), 2, 3, 5, 10, 50, 100 (last row) on the private leaderboard. Each row is a random sample of 10 images from the same team. Visually, the quality of the generated images decreases as the rank increases.}
\label{fig:dogs}
\end{figure}
\end{document} |
\begin{document}
\title{Calibrating the Negative Interpretation}
\section{What this essay is about} G\"odel and Gentzen proved by negative translations that classical Peano arithmetic {\bf PA} is equiconsistent with its intuitionistic subsystem, Heyting arithmetic {\bf HA}. By hereditarily replacing $\mathrm{A \vee B}$ by its classical equivalent $\mathrm{\neg (\neg A ~\&~ \neg B)}$, and $\mathrm{\exists x A(x)}$ by its classical equivalent $\mathrm{\neg \forall x \neg A(x)}$, they showed that the negative fragment of {\bf HA} (with only the logical symbols $\mathrm{\&, \neg, \rightarrow, \forall}$ and their axioms and rules) faithfully interprets {\bf PA} in the following sense: the negative translations of the mathematical axioms of {\bf PA} are provable in {\bf HA}, the classical logical axioms and rules for $\mathrm{\&, \neg, \rightarrow}$ and $\mathrm{\forall}$ are correct by intuitionistic logic, and every formula of the full language is provably equivalent in {\bf PA} to its negative translation.\footnote{G\"odel \cite{go1933} also translated $\mathrm{A \rightarrow B}$ hereditarily by $\mathrm{\neg (A ~\&~ \neg B)}$, but Gentzen \cite{ge1933} did not. This paper is based on the simpler Gentzen translation, and on Kleene's axiomatization of intuitionistic and classical logic, arithmetic and two-sorted number theory in \cite{kl1952} and \cite{klve1965}.}
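The hereditary replacement just described can be stated as a recursive definition of the Gentzen translation $\mathrm{E} \mapsto \mathrm{E}^g$. The following clauses are one standard presentation (in arithmetic the double negation on prime formulas may be dropped, since equations between terms are decidable):

```latex
\begin{align*}
\mathrm{P}^g &\equiv \mathrm{\neg \neg P} \qquad (\mathrm{P}~\text{prime})\\
\mathrm{(A ~\&~ B)}^g &\equiv \mathrm{A}^g ~\&~ \mathrm{B}^g\\
\mathrm{(A \rightarrow B)}^g &\equiv \mathrm{A}^g \rightarrow \mathrm{B}^g\\
\mathrm{(\neg A)}^g &\equiv \mathrm{\neg}\, \mathrm{A}^g\\
\mathrm{(A \vee B)}^g &\equiv \mathrm{\neg(\neg}\, \mathrm{A}^g ~\&~ \mathrm{\neg}\, \mathrm{B}^g\mathrm{)}\\
\mathrm{(\forall x\, A)}^g &\equiv \mathrm{\forall x}\, \mathrm{A}^g\\
\mathrm{(\exists x\, A)}^g &\equiv \mathrm{\neg \forall x \neg}\, \mathrm{A}^g
\end{align*}
```

Only the $\vee$ and $\exists$ clauses do real work; the remaining connectives and quantifiers commute with the translation.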
G\"odel \cite{go1933} interpreted this result as showing that intuitionistic arithmetic {\em contains} classical arithmetic via his ``somewhat deviant'' interpretation. He observed that the absence of a corresponding result for intuitionistic and classical theories of numbers and number-theoretic functions results from mathematical and philosophical, rather than logical, differences. For example, the negative translation \[\mathrm{\forall x \neg (\neg \forall y \alpha(\langle x,y \rangle) = 0 ~\&~ \neg \neg \forall y \alpha(\langle x,y \rangle) = 0)}\] of the instance $\mathrm{\forall x(\forall y \alpha(\langle x,y \rangle) = 0 \vee \neg \forall y \alpha(\langle x,y \rangle) = 0)}$ of the law of excluded middle is provable by intuitionistic logic, and $\mathrm{\forall x (A(x) \vee \neg A(x)) \rightarrow \exists \beta \forall x (\beta(x) = 0 \leftrightarrow A(x))}$ is provable in the classically correct part of Brouwer's intuitionistic analysis {\bf I} as formalized and developed by Kleene and Vesley in \cite{klve1965}, but the negative translation \[\mathrm{\forall \alpha \neg \forall \beta \neg \forall x (\beta(x) = 0 \leftrightarrow \forall y \alpha(\langle x,y \rangle) = 0)}\] of $\mathrm{\forall \alpha \exists \beta \forall x (\beta(x) = 0 \leftrightarrow \forall y \alpha(\langle x,y \rangle) = 0)}$ is neither provable nor refutable in {\bf I}.\footnote{For the relative independence of $\mathrm{\forall \alpha \neg \forall \beta \neg \forall x (\beta(x) = 0 \leftrightarrow \forall y \alpha(\langle x,y \rangle) = 0)}$ from {\bf I} cf. \cite{jrm1971}, \cite{jrm2010}.}
Suppose {\bf S} is a subsystem of Kleene's {\bf I} which (unlike {\bf I} itself) is consistent with classical logic. Then the question is: exactly what must be added to {\bf S} in order to prove the Gentzen negative interpretations of its axioms, hence of its theorems? The goal is to find a simple characterization of the precise constructive cost of expanding {\bf S} to include a faithful copy of its classical twin {\bf S$^\circ$} $\equiv$ {\bf S} + $\mathrm{(\neg \neg A \rightarrow A)}$.
\subsection{Definitions}A formal system {\bf S} based on intuitionistic logic is {\em classically consistent} if and only if {\bf S} + $\mathrm{(\neg \neg A \rightarrow A)}$ is consistent. The {\em classical content} $\mathrm{E}^g$ of a formula $\mathrm{E}$ is its Gentzen negative interpretation, and the {\em classical content} $\mathrm{\Gamma}^g$ of a set $\mathrm{\Gamma}$ of formulas is the closure under intuitionistic logic of the set $\mathrm{\{E}^g\mathrm{: E \in \Gamma\}}$. The {\em minimum classical extension} {\bf S}$^{+g}$ of a classically consistent formal system {\bf S} is the closure under intuitionistic logic of $\mathrm{{\bf S}\cup{\bf S}}^g$.
If {\bf S} is an axiomatic system based on intuitionistic logic and $\mathrm{A_1, \ldots, A_n}$ is a list of formulas and (logical or mathematical) schemata, then {\bf S} + $\mathrm{A_1 + \ldots + A_n}$ is the formal system obtained by adding $\mathrm{A_1, \ldots, A_n}$ to the axioms of {\bf S}. For easier comprehension, the negative translations $\mathrm{\neg \forall x \neg}$, $\mathrm{\neg \forall \alpha \neg}$ of existential quantifiers will sometimes be replaced by their intuitionistic equivalents $\mathrm{\neg \neg \exists x}$, $\mathrm{\neg \neg \exists \alpha}$ respectively.
\subsection{The example of intuitionistic analysis} By viewing the choice sequence variables $\mathrm{\alpha, \beta, \ldots}$ of the language $\mathcal{L}$({\bf I}) of {\bf I} alternatively as variables over classical one-place number-theoretic functions, restricting the language and logic by omitting $\mathrm{\vee}$ and $\mathrm{\exists}$ with their axioms and rules, and replacing each mathematical axiom of a classically consistent subsystem {\bf S} of {\bf I} by its negative translation, one obtains a classically equivalent copy {\bf S$^g$} of {\bf S$^\circ$} within {\bf S$^{+g}$}. In particular, if {\bf B} is the subsystem of {\bf I} which omits the continuous choice axiom schema CC$_{11}$ (``Brouwer's Principle for a Function,'' axiom schema $^x$27.1 of \cite{klve1965}) then {\bf B$^\circ$} $\equiv$ {\bf B} + $\mathrm{(\neg \neg A\rightarrow A)}$ is classical analysis with countable choice, and {\bf B$^{+g}$} contains a negative version {\bf B}$^g$ of {\bf B$^\circ$}.
The goal here differs from Kleene's in \cite{kl1965} where he showed that {\bf I} is consistent with all purely arithmetical formulas, and all negations of prenex formulas, of the full language $\mathcal{L}$({\bf I}) which are provable in {\bf B$^\circ$}. Any system {\bf S} in $\mathcal{L}$({\bf I}) which is based on intuitionistic logic and has a classical $\omega$-model (a model with standard integers) may be called {\em classically sound}. The minimum classical extension {\bf S$^{+g}$} of any subsystem {\bf S} of {\bf I} for which classical Baire space is an $\omega$-model is classically sound, contains only the essential intuitionistically dubious principles, and is consistent with {\bf I}.
Some easy consequences of continuous choice, such as the axiom schema DC$_1$ of dependent choice for sequences and Troelstra's neighborhood function principle NFP, are true in classical Baire space; so {\bf B} + DC$_1$ and {\bf B} + NFP are classically sound subsystems of {\bf I} apparently extending {\bf B}. Their minimum classical extensions, which will be partially analyzed in a later section, are also consistent with {\bf I}.
\subsection{Mathematically significant extensions of intuitionistic analysis} An extension of {\bf B} may be consistent with {\bf I} without being a subsystem of {\bf I}. If {\em either} of the two axioms about to be described is added to {\bf B}, the classical content does not change because the new axioms are true by classical logic alone. In each case the result is consistent with {\bf I} by an appropriate realizability interpretation, but if {\em both} are added to {\bf B}, the result (with the same classical content) is {\em inconsistent} with {\bf I}.
Kleene established the consistency of {\bf I} ($\equiv$ {\bf B} + CC$_{11}$) relative to {\bf B} by means of function-realizability (Theorem 9.3(a) of \cite{klve1965}). The strong form \[\mathrm{MP_1. \; \; \; \; \forall \alpha (\neg \forall x \neg \alpha(x) = 0 \rightarrow \exists x \alpha(x) = 0)}\] of Markov's Principle is self-realizing over {\bf B}, hence consistent with {\bf I}. All negative statements true in classical Baire space are realizable by Lemma 8.4(a) of \cite{klve1965}, thus ({\bf B} + MP$_1$)$^{+g}$ + CC$_{11}$ = {\bf B}$^{+g}$ + MP$_1$ + CC$_{11}$ is consistent relative to {\bf B}$^{+g}$ + MP$_1$.
Brouwer refuted Markov's Principle using a ``creating subject'' argument. Kleene proved {\bf I} $\not\vdash$ MP$_1$ in \cite{klve1965}, using a typed modification of function-realizability he called ``$_S$realizability.'' Vesley \cite{ve1970} proposed adding to {\bf I} the schema \begin{multline*} \mathrm{VS. \; \; \forall \alpha \forall x \exists \beta (\overline{\beta}(x) = \overline{\alpha}(x) ~\&~ \neg A(\beta))}\\ \mathrm{\rightarrow [\forall \alpha (\neg A(\alpha) \rightarrow \exists \beta B(\alpha, \beta)) \rightarrow \forall \alpha \exists \beta (\neg A(\alpha) \rightarrow B(\alpha, \beta))]} \end{multline*} where $\mathrm{\overline{\alpha}(x)}$ codes the first $\mathrm{x}$ values of the function $\mathrm{\alpha}$, and $\mathrm{\beta}$ is not free in $\mathrm{\neg A(\alpha)}$. VS is $_S$realizable, hence consistent with {\bf I}, and {\bf I} + VS $\vdash$ $\mathrm{\neg MP_1}$. Negative statements true in classical Baire space are $_S$realizable by Lemma 10.7 of \cite{klve1965}, so ({\bf B} + VS)$^{+g}$ + CC$_{11}$ = {\bf B$^{+g}$} + VS + CC$_{11}$ is consistent and refutes $\mathrm{MP_1}$.
These examples illustrate the mathematical freedom gained by separating the constructive language from the classical language, entirely eliminating the need for classical logic. The modification GC$_1^\mathrm{neg}$ of Troelstra's principle GC$_1$ of generalized continuous choice\footnote{GC$_1^\mathrm{neg}$ simply replaces ``almost negative'' by ``negative'' in the statement of GC$_1$, cf.\cite{tr1973}.} characterizes Kleene's function-realizability over {\bf B} + MP$_1$, so {\bf B} + MP$_1$ + GC$_1^\mathrm{neg}$ is a consistent extension of {\bf I} + MP$_1$.
{\bf B} + GC$_1^\mathrm{neg}$ proves that every partial functional which is defined at least on a negative dense subspecies of the intuitionistic continuum (e.g. on all sequences which are not eventually monotone) has a continuous partial extension. In contrast, {\bf I} + VS $\equiv$ {\bf B} + VS + CC$_{11}$ proves the stronger result that every partial functional defined at least on a negative dense subspecies of the intuitionistic continuum has a continuous total extension, although GC$_1^\mathrm{neg}$ is stronger than CC$_{11}$ and {\bf B} + VS has the same classical content as {\bf B}.
\subsection{Additional examples and related work} A basic axiomatization of the recursive sequences {\bf MRA}, and its minimum classical extension, are studied in this article. Intuitionistic arithmetic of arbitrary finite types {\bf HA$^\omega$}, Troelstra's {\bf EL}, Bishop's constructive analysis, and three versions of Brouwer's bar theorem in the context of {\bf B} and {\bf I} are discussed in \cite{jrmgvf2021}. Vafeiadou's results in that article show that minimum classical extensions of consistent but classically unsound theories like {\bf I} should be maximally consistent for the negative language. Classically sound extensions of {\bf B} which are subsystems of {\bf I} or consistent with {\bf I}, and classically sound theories such as {\bf MRA} which are inconsistent with {\bf B}, have more interesting minimum classical extensions.
A seminal analysis of double negation shift and the negative interpretation of countable choice, in the context of {\bf HA$^\omega$}, was carried out by Berardi, Bezem and Coquand in \cite{bebeco1998}. The recent, technical \cite{fuko2018} treats weak nonconstructive principles in the context of {\bf EL}, {\bf HA} or {\bf HA$^\omega$}. The bibliographies of both point to related work. For a precise comparison of Troelstra's {\bf EL} and other weak versions of intuitionistic analysis with the systems treated here see \cite{gvfms}, \cite{gvf2012}. In \cite{lo2009} I. Loeb analyzes a consequence of VS from a constructive reverse mathematics perspective.
\section{The constructive core of intuitionistic analysis} Like Brouwer, Bishop worked informally, but it seems unlikely that he would have objected to the mathematical content of any of the axioms or axiom schemas of Kleene's neutral basic system {\bf B} except the principle of bar induction. Bishop used countable choice routinely, so Kleene's strongest countable choice axiom schema ($^x$2.1 in \cite{klve1965}): \[\mathrm{AC_{01}. \; \; \; \forall x \exists \alpha A(x,\alpha) \rightarrow \exists \beta \forall x A(x,\lambda y.\beta(\langle x,y \rangle))}\] may be assumed to hold in constructive analysis, with its consequence ($^*$2.2 in \cite{klve1965}): \[\mathrm{AC_{00}. \; \; \; \forall x \exists y A(x,y) \rightarrow \exists \alpha \forall x A(x,\alpha(x))}\] for all formulas $\mathrm{A(x,\alpha)}$ and $\mathrm{A(x,y)}$ of the language, with free variables of both types allowed and with the appropriate conditions on the distinguished variables (e.g. for AC$_{00}$: $\mathrm{\alpha, x}$ must be free for $\mathrm{y}$ in $\mathrm{A(x,y)}$).
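The derivation of AC$_{00}$ from AC$_{01}$ can be sketched as follows (suppressing the formal details). Given $\mathrm{\forall x \exists y A(x,y)}$, apply AC$_{01}$ to $\mathrm{A'(x,\alpha) \equiv A(x,\alpha(0))}$, noting $\mathrm{\forall x \exists \alpha A(x,\alpha(0))}$ (take $\mathrm{\alpha = \lambda t.y}$), to obtain $\mathrm{\beta}$ with \[\mathrm{\forall x A(x, (\lambda y.\beta(\langle x,y \rangle))(0))}, \;\; \mbox{i.e.} \;\; \mathrm{\forall x A(x, \beta(\langle x,0 \rangle))}\] by lambda-reduction; then $\mathrm{\alpha = \lambda x.\beta(\langle x,0 \rangle)}$ witnesses $\mathrm{\exists \alpha \forall x A(x,\alpha(x))}$.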
Weaker subsystems of {\bf B} are distinguished by restrictions on AC$_{00}$, which in turn determine the classical $\omega$-models of the subsystems. Classical $\omega$-models are important for constructive analysis because (a) Bishop's work is consistent with classical mathematics, and (b) the simplest assumption is that the constructive natural numbers are standard.
\subsection{Two-sorted intuitionistic arithmetic {\bf IA$_1$}} This is an extension of Kleene's first-order intuitionistic arithmetic {\bf IA$_0$} (\cite{kl1952} p. 82). {\bf IA$_1$} adds variables $\mathrm{\alpha, \beta, \gamma, \ldots}$ over one-place number-theoretic functions, quantifiers $\mathrm{\forall \alpha, \exists \alpha}$ with their (intuitionistic) logical axioms and rules, and finitely many constants for primitive recursive function(al)s with their defining axioms. Additional primitive recursive function constants, with their definitions, may be added as needed.
Terms (of type 0) and functors (of type 1) are defined inductively. Church's lambda symbol makes it possible to define primitive recursive functors from terms. There is an axiom schema of lambda-reduction $\mathrm{(\lambda x.t(x))(s) = t(s)}$ (where $\mathrm{t(x), s}$ are terms, and $\mathrm{s}$ is free for $\mathrm{x}$ in $\mathrm{t(x)}$).
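As a small illustration (not one of Kleene's official examples), the term $\mathrm{\alpha(x) + \beta(x)}$ yields the functor $\mathrm{\lambda x.(\alpha(x)+\beta(x))}$ for the pointwise sum of two sequences, and lambda-reduction gives \[\mathrm{(\lambda x.(\alpha(x)+\beta(x)))(s) = \alpha(s)+\beta(s)}\] for any term $\mathrm{s}$ free for $\mathrm{x}$.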
Equality at type 0 is a primitive notion, and is decidable in {\bf IA$_1$}. Equality at type 1 is defined extensionally by $\mathrm{\alpha = \beta \equiv \forall x (\alpha(x) = \beta(x))}$, and {\bf IA$_1$} includes the open equality axiom $\mathrm{\forall x \forall y (x = y \rightarrow \alpha(x) = \alpha(y))}$.\footnote{{\bf IA$_1$} is the ``least subsystem'' {\bf L} of {\bf I} in \cite{jrmphd}, \cite{kl1969}. It is defined precisely in \cite{gvf2012}, \cite{jrmgvf2012}.}
The primitive recursive infinite sequences form a classical $\omega$-model of {\bf IA$_1$}.
\subsection{Intuitionistic recursive analysis {\bf IRA}} Vafeiadou proved in \cite{gvf2012} that Troelstra's formal system {\bf EL} (\cite{tr1973}, \cite{tvd1988}) of elementary constructive analysis and the subsystem {\bf IRA} $\equiv$ {\bf IA$_1$} + QF-AC$_{00}$ of Kleene's {\bf B} have a common definitional extension, where QF-AC$_{00}$ (``quantifier-free countable choice'') restricts AC$_{00}$ to formulas $\mathrm{A(x,y)}$ containing no sequence quantifiers, and only bounded number quantifiers. {\bf IRA} can also be axiomatized by adding to {\bf IA$_1$} a single axiom, either \[\mathrm{\forall \rho [\forall x \exists y \, \rho(\langle x,y \rangle) = 0 \rightarrow \exists \alpha \forall x \, \rho(\langle x,\alpha(x) \rangle) = 0]} \; \; \; \mbox{or}\] \[\mathrm{\forall \rho [\forall x \exists y \, \rho(\langle x,y \rangle) = 0 \rightarrow \exists \alpha \forall x [\rho(\langle x,\alpha(x) \rangle) = 0 ~\&~ \forall z < \alpha(x) \, \rho(\langle x,z \rangle) \neq 0]],}\] asserting that the universe of sequences is closed under unbounded constructive search.\footnote{Veldman prefers the unbounded search axiom to the schema QF-AC$_{00}$ for his system {\bf BIM} of intuitionistic recursive analysis (cf. \cite{vel2014}).}
The general recursive infinite sequences provide a natural classical $\omega$-model of intuitionistic recursive analysis {\bf IRA}.
\subsection{Countable comprehension and arithmetical countable choice} Stronger than QF-AC$_{00}$ over {\bf IA$_1$}, but weaker than AC$_{00}$, is {\em countable comprehension} or ``unique choice'' \[\mathrm{AC_{00}!. \; \; \; \forall x \exists ! y A(x,y) \rightarrow \exists \alpha \forall x A(x,\alpha(x)),}\] where $\mathrm{\exists ! y A(x,y)}$ always abbreviates $\mathrm{\exists y A(x,y) ~\&~ \forall y \forall z (A(x,y) ~\&~ A(x,z) \rightarrow y = z)}$. Since quantifier-free formulas are decidable in {\bf IA$_1$}, the hypothesis of an instance of QF-AC$_{00}$ provides unique least witnesses for the corresponding instance of AC$_{00}$! and so AC$_{00}$! entails QF-AC$_{00}$ -- but not conversely.
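The reasoning behind AC$_{00}! \vdash$ QF-AC$_{00}$ can be made explicit as follows. Given quantifier-free $\mathrm{A(x,y)}$ with $\mathrm{\forall x \exists y A(x,y)}$, decidability yields unique least witnesses: \[\mathrm{\forall x \exists ! y \, [A(x,y) ~\&~ \forall z < y \, \neg A(x,z)]},\] so AC$_{00}$! provides $\mathrm{\alpha}$ with $\mathrm{\forall x [A(x,\alpha(x)) ~\&~ \forall z < \alpha(x) \, \neg A(x,z)]}$, and in particular $\mathrm{\forall x A(x,\alpha(x))}$.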
Vafeiadou (\cite{gvf2012}, \cite{gvfms}) proved that AC$_{00}!$ is equivalent over {\bf IRA} to the schema \[\mathrm{CF_d. \; \; \; \forall x (A(x) \vee \neg A(x)) \rightarrow \exists \alpha \forall x [\alpha(x) \leq 1 ~\&~ (\alpha(x) = 0 \leftrightarrow A(x))],}\] asserting that every analytically definable subset of the natural numbers with a decidable membership relation has a characteristic function. The converse of $\mathrm{CF_d}$ is provable in {\bf IA$_1$}.
It follows that {\bf IA$_1$} + AC$_{00}$! and {\bf IA$_1$} + AC$_{00}$ have the same classical $\omega$-models, including all analytically definable infinite sequences.
A formula of the two-sorted language is called {\em arithmetical} if it contains only number quantifiers; free variables of both types are permitted. The {\em arithmetical countable choice} schema AC$_{00}^{Ar}$ restricts AC$_{00}$ to arithmetical formulas $\mathrm{A(x,y)}$, and {\em arithmetical comprehension} AC$_{00}^{Ar}$! is the corresponding restriction of AC$_{00}$!.
The arithmetical sequences provide a classical $\omega$-model of {\bf IA$_1$} + AC$_{00}^{Ar}$ (and of {\bf IA$_1$} + AC$_{00}^{Ar}$!).
\subsection{Full countable choice and function comprehension} The schema AC$_{01}$ expresses countable choice for functions. AC$_{01}$! (with $\mathrm{\forall x \exists ! \alpha A(x,\alpha)}$ as hypothesis) expresses the corresponding function comprehension principle, where in general $\mathrm{\exists ! \alpha B(\alpha)}$ $\equiv$ $\mathrm{\exists \alpha B(\alpha) ~\&~ \forall \alpha \forall \beta (B(\alpha) ~\&~ B(\beta) \rightarrow \forall x \alpha(x) = \beta(x))}$.
While AC$_{00}$ is weaker than AC$_{01}$ both classically and intuitionistically, AC$_{00}$! is equivalent to AC$_{01}$! over {\bf IA$_1$}.\footnote{cf. \cite{jrmphd}, \cite{jrm1967} where {\bf M} = {\bf IA$_1$} + AC$_{00}$! is proposed as a minimal base theory for constructive analysis. However, Troelstra \cite{tr1973} observed that Kleene's formalization \cite{kl1969} of the theory of recursive functionals in {\bf M} could equally well be done in {\bf EL}, hence in {\bf IRA}.} Although Kleene chose AC$_{01}$ as an axiom schema for {\bf B}, he observed in \cite{klve1965} that in all but one instance AC$_{00}$ would have sufficed. It could be interesting to look for essential uses of the stronger principle in constructive and intuitionistic mathematics.
AC$_{00}$ is equivalent over {\bf IRA} to dependent choice for numbers \[\mathrm{DC_0. \; \; \; \forall x \exists y A(x,y) \rightarrow \forall x \exists \alpha (\alpha(0) = x ~\&~ \forall y A(\alpha(y), \alpha(y+1))).}\] Over {\bf IA$_1$} + AC$_{00}$ + $\mathrm{(\neg \neg A \rightarrow A)}$, DC$_0$ is equivalent to classical bar induction BI$^\circ$ (to be described in the next section) using $^\ast$26.1$^\circ$ in \cite{klve1965} together with a straightforward converse argument.
It follows that every classical $\omega$-model of {\bf IA$_1$} + AC$_{01}$ is also an $\omega$-model of {\bf B}, since {\bf IA$_1$} + AC$_{01}$ $\vdash$ AC$_{00}$. Moreover, {\bf IA$_1$} + AC$_{00}$, {\bf IA$_1$} + AC$_{00}!$ and {\bf IA$_1$} + BI$_\mathrm{d}$ all have the same classical $\omega$-models, where BI$_\mathrm{d}$ is intuitionistic bar induction with a decidable bar (to be described in the next section).
\section{Brouwer's principles of bar and fan induction} In addition to full mathematical induction and the principle of countable choice, Brouwer believed he could justify another classically sound principle known as the ``bar theorem.'' Kleene analyzed Brouwer's proof of this principle and found it to be circular. Kleene's {\bf B} has an axiom schema of bar induction in four versions, which are equivalent over {\bf IA$_1$} + AC$_{00}$!. Each has the general form\footnote{In Kleene's primitive recursive coding $\mathrm{\langle a_0,\ldots,a_n \rangle = \Pi_{j=0}^{j=n} p_j^{a_j}}$ where $\mathrm{p_j}$ is the $\mathrm{j}$th prime, and $\mathrm{(\langle a_0,\ldots,a_n \rangle)_j = a_j}$. ``Sequence numbers'' $\mathrm{w}$ satisfying $\mathrm{Seq(w) \equiv \forall j < lh(w) \, (w)_j \neq 0}$ uniquely code finite sequences of numbers, where $\mathrm{lh(w) = \Sigma_{j < w} sg((w)_j)}$ and $\mathrm{sg(n) = 1 \stackrel{.}{-}(1 \stackrel{.}{-} n)}$. 1 codes the empty sequence, $\mathrm{\langle a_0+1,\ldots,a_n+1 \rangle}$ codes $\mathrm{(a_0,\ldots,a_n)}$ and $\mathrm{*}$ denotes concatenation. $\mathrm{\overline{\alpha}(0) = 1}$ and $\mathrm{\overline{\alpha}(n+1) = \langle \alpha(0)+1, \ldots, \alpha(n)+1 \rangle}$.} \begin{multline*} \mathrm{BI. \; \; \; \forall \alpha \exists x R(\overline{\alpha}(x)) ~\&~ \forall w (Seq(w) ~\&~ R(w) \rightarrow A(w))} \\ \mathrm{ ~\&~ \forall w (Seq(w) ~\&~ \forall s A(w \ast \langle s+1 \rangle) \rightarrow A(w)) \rightarrow A(1),} \end{multline*} where $\mathrm{R(w)}$ is the basis (or bar) predicate and $\mathrm{A(w)}$ is the inductive predicate.\footnote{Later Kreisel and Troelstra \cite{krtr1970} developed a competing formal system for Brouwer's analysis in which the ``bar theorem'' was treated as a principle of generalized inductive definition; cf. \cite{ka2019a}.} As usual, free variables of both types are allowed.
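A small worked instance may help fix the coding conventions of the footnote. The finite sequence $\mathrm{(1,0)}$ is coded by \[\mathrm{\langle 1+1, 0+1 \rangle = \langle 2,1 \rangle = p_0^2 \, p_1^1 = 2^2 \cdot 3 = 12},\] so $\mathrm{(12)_0 = 2}$, $\mathrm{(12)_1 = 1}$, $\mathrm{lh(12) = sg(2) + sg(1) = 2}$ and $\mathrm{Seq(12)}$ holds; if $\mathrm{\alpha(0) = 1}$ and $\mathrm{\alpha(1) = 0}$ then $\mathrm{\overline{\alpha}(2) = 12}$, while $\mathrm{\overline{\alpha}(0) = 1}$ codes the empty sequence.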
{\em Classical bar induction} BI$^\circ$ places no restrictions on $\mathrm{R(w)}$. Kleene observed that BI$^\circ$ conflicts with Brouwer's continuity principle so some restriction is necessary in the intuitionistic context.
Brouwer used bar induction to prove his ``fan theorem,'' which (together with the assumption that every full function is pointwise continuous) allowed him to conclude that every function completely defined on the closed unit interval is uniformly continuous there. The {\em full fan theorem} (\cite{klve1965} $^\ast$27.9), which is provable in {\bf I} for all predicates $\mathrm{R(w)}$ in which the substitution of $\mathrm{\overline{\alpha}(x)}$ for $\mathrm{w}$ is free, is \[\mathrm{FT. \;\;\;\;\forall \alpha_{B(\alpha)} \exists x \, R(\overline{\alpha}(x)) \rightarrow \exists n \forall \alpha_{B(\alpha)} \exists x \leq n \, R(\overline{\alpha}(x))},\] where $\mathrm{B(\alpha) \equiv \forall x \, \alpha(x) \leq \beta(\overline{\alpha}(x))}$. For the {\em binary fan theorem}, which is no weaker over {\bf IRA}, $\mathrm{B(\alpha) \equiv \forall x \alpha(x) \leq 1}$. Troelstra \cite{tr1974} proved that the full fan theorem is conservative over Heyting arithmetic.
FT justifies a principle of {\em fan induction} with $\mathrm{R(w)}$ as basis and an arbitrary inductive predicate $\mathrm{A(w)}$. For the binary fan the general form is \begin{multline*} \mathrm{\forall \alpha_{B(\alpha)} \exists x R(\overline{\alpha}(x)) ~\&~ \forall w_{B(w)} (R(w) \rightarrow A(w))} \\ \mathrm{ ~\&~ \forall w_{B(w)} (A(w \ast \langle 1 \rangle) ~\&~ A(w \ast \langle 2 \rangle) \rightarrow A(w)) \rightarrow A(1),} \end{multline*} where $\mathrm{B(\alpha) \equiv \forall x \, \alpha(x) \leq 1}$ and $\mathrm{B(w) \equiv \forall n < lh(w) \, (1 \leq (w)_n \leq 2)}$. Modern reverse constructive mathematics establishes equivalences between restricted versions of FT and classically correct theorems of intuitionistic mathematics (e.g. \cite{ka2019}).
\subsection{Bar induction with a bar defined by a characteristic function} Kleene's strongest restriction on the basis predicate $\mathrm{R(w)}$ leads to his weakest version \begin{multline*} \mathrm{BI_1. \; \; \; \forall \alpha \exists x \rho(\overline{\alpha}(x)) = 0 ~\&~ \forall w (Seq(w) ~\&~ \rho(w) = 0 \rightarrow A(w))} \\ \mathrm{ ~\&~ \forall w (Seq(w) ~\&~ \forall s A(w \ast \langle s+1 \rangle) \rightarrow A(w)) \rightarrow A(1)} \end{multline*} ($^x$26.3b in \cite{klve1965}) of bar induction. Over {\bf IRA} this restriction is equivalent to requiring $\mathrm{R(w)}$ to be quantifier-free. Solovay \cite{jrmrms} proved in primitive recursive arithmetic that Kleene's {\bf I} + MP$_1$ is consistent relative to its subsystem {\bf IRA} + BI$_1$ + MP$_1$.
The corresponding version of the binary fan theorem is \[\mathrm{FT_1. \;\;\;\;\forall \alpha_{B(\alpha)} \exists x \, \rho(\overline{\alpha}(x)) = 0 \rightarrow \exists n \forall \alpha_{B(\alpha)} \exists x \leq n \, \rho(\overline{\alpha}(x)) = 0},\] where $\mathrm{B(\alpha) \equiv \forall x \, \alpha(x) \leq 1}$. In Theorem 9.6 and Corollary 9.8 of \cite{vel2014}, Veldman has compiled a long list of theorems of intuitionistic mathematics equivalent to FT$_1$ over his minimal formal system {\bf BIM} (comparable to {\bf IRA}). In particular, he showed that FT$_1$ is equivalent to the version of FT with $\mathrm{R(w) \equiv \exists n \, \beta(n) = w+1}$.
Kleene proved in \cite{klve1965} that the recursive sequences do not provide a classical $\omega$-model of {\bf IRA} + FT$_1$, but the arithmetical sequences do; this distinction is exploited in \cite{vel2014}. BI$_1$ is stronger than FT$_1$ over {\bf IRA}; even the hyperarithmetical sequences fail to satisfy BI$_1$.
\subsection{Decidable, thin and monotone bar induction} Kleene formulated four axiom schemas ($^x$26.3a-d in \cite{klve1965}) of bar induction, including BI$_1$.
{\em Decidable bar induction} BI$_\mathrm{d}$ ($^x$26.3a) adds $\mathrm{\forall w (Seq(w) \rightarrow R(w) \vee \neg R(w))}$ to the hypotheses of BI. {\em Thin bar induction} BI! strengthens the assumption $\mathrm{\forall \alpha \exists x R(\overline{\alpha}(x))}$ of BI to $\mathrm{\forall \alpha \exists ! x R(\overline{\alpha}(x))}$ for ($^x$26.3c), or to $\mathrm{\forall \alpha \exists x (R(\overline{\alpha}(x)) ~\&~ \forall y<x \, \neg R(\overline{\alpha}(y)))}$ in the fourth version ($^x$26.3d). BI$_\mathrm{d}$ is equivalent to BI! but stronger than BI$_1$ over {\bf IRA}.
Using BI! and continuous choice Kleene derived a fifth version, {\em monotone bar induction} BI$_\mathrm{mon}$ ($^\ast$27.13 in \cite{klve1965}), which adds $\mathrm{\forall \alpha \forall x (R(\overline{\alpha}(x)) \rightarrow \forall y_{y>x} R(\overline{\alpha}(y)))}$ to the hypotheses of BI. It was shown in \cite{jrmgvf2021} that BI$_\mathrm{d}$, BI$_\mathrm{mon}$ and BI$^\circ$ have the same classical content over {\bf IA$_1$}. From the classical point of view, BI$_\mathrm{d}$ and BI$_\mathrm{mon}$ express the full bar theorem (which is inconsistent with {\bf I}), but over {\bf IA$_1$} their negative interpretations are equivalent to (BI$^\circ$)$^g$ which is consistent with {\bf I}.
The corresponding versions FT$_\mathrm{d}$, FT! and FT$_\mathrm{mon}$ of the fan theorem are not all equivalent over {\bf IRA}. Each version justifies a principle of restricted fan induction. J. Berger (\cite{jbe2006}) proved that a special case c-FT of the monotone fan theorem is constructively equivalent over {\bf HA$^\omega$} to the theorem that every pointwise continuous function from $\{0,1\}^\mathbb{N}$ to $\mathbb{N}$ is uniformly continuous.
\section{Two families of intuitionistically dubious principles} If {\bf S} is a subsystem of {\bf B} then {\bf S$^\circ$} $\equiv$ {\bf S} + $\mathrm{(\neg \neg A \rightarrow A)}$ has the same language and mathematical axioms as {\bf S}, and {\bf S}$^{+g}$ $\subseteq$ {\bf S$^\circ$}; in this sense {\bf S$^\circ$} is to {\bf S} as {\bf PA} is to {\bf HA}. If it happens that {\bf S}$^{+g}$ $\not \subseteq$ {\bf S}, we seek an elegant characterization of the difference.
\subsection{Double negation shift principles} \subsubsection{Double negation shift for numbers} This is the schema \[\mathrm{DNS_0. \; \; \; \forall x \neg \neg A(x) \rightarrow \neg \neg \forall x A(x)}\] for all formulas $\mathrm{A(x)}$ of the language. The converse is provable in {\bf IA$_1$}, so the $\mathrm{\rightarrow}$ can be strengthened to $\mathrm{\leftrightarrow}$. {\bf IA$_1$} proves the restriction DNS$_0^-$ of DNS$_0$ to negative formulas $\mathrm{A(x)}$ since {\bf IA$_1$} $\vdash$ $\mathrm{\neg \neg A \leftrightarrow A}$ for every formula $\mathrm{A}$ not containing $\mathrm{\vee}$ or $\mathrm{\exists}$.
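The provability of the converse in {\bf IA$_1$} is an easy exercise in intuitionistic logic; a sketch: \begin{align*} &\mathrm{\forall x A(x) \rightarrow A(x)} && \mbox{logical axiom}\\ &\mathrm{\neg A(x) \rightarrow \neg \forall x A(x)} && \mbox{contraposition}\\ &\mathrm{\neg \neg \forall x A(x) \rightarrow \neg \neg A(x)} && \mbox{contraposition again}\\ &\mathrm{\neg \neg \forall x A(x) \rightarrow \forall x \neg \neg A(x)} && \mbox{$\forall$-introduction.} \end{align*}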
The restriction of DNS$_0$ to $\Sigma^0_1$ formulas $\mathrm{A(x)}$ is a weak consequence \[\mathrm{{\Sigma^0_1}\mbox{-}DNS_0. \; \; \; \forall x \neg \neg \exists y \alpha(\langle x,y \rangle) = 0 \rightarrow \neg \neg \forall x \exists y \alpha(\langle x,y \rangle) = 0}\] of MP$_1$ which Brouwer used in 1918 to prove that the intuitionistic real numbers form a closed species. Van Atten \cite{va2018} notes that Brouwer later formulated a stronger definition of ``closed'' in order to avoid this use of (a consequence of) Markov's Principle.
In \cite{scve1983} Scedrov and Vesley studied a principle of which $\Sigma^0_1$-DNS$_0$ is a special case. They proved that {\bf B} $\not\vdash$ $\Sigma^0_1$-DNS$_0$ because $\Sigma^0_1$-DNS$_0$ fails in Krol's model of intuitionistic analysis \cite{krol1978}, and that {\bf B} + $\Sigma^0_1$-DNS$_0$ $\not\vdash$ MP$_1$. Their second argument, by $_S$realizability, establishes that {\bf I} + $\Sigma^0_1$-DNS$_0$ is consistent with Vesley's Schema VS which proves Brouwer's creating-subject counterexamples.
\subsubsection{Double negation shift and the negative interpretation of countable choice}
Over {\bf IA$_1$} + AC$_{01}$, (AC$_{01}$)$^g$ is equivalent to \[\mathrm{({\Sigma^1_1 {neg}})\mbox{-}DNS_0. \; \; \; \forall x \neg \neg \exists \alpha R(x,\alpha) \rightarrow \neg \neg \forall x \exists \alpha R(x,\alpha)}\] where $\mathrm{R(x,\alpha)}$ may be any negative formula, with parameters of both types allowed. The easy argument uses the fact that the converse of AC$_{01}$ is provable in {\bf IA$_1$}.
($\Sigma^1_1$neg)-DNS$_0$ is stronger than $\Sigma^0_1$-DNS$_0$, but still consistent with {\bf I} + VS because (AC$_{01}$)$^g$ is $_S$realizable (because it is negative and presumably true in classical Baire space). Hence ($\Sigma^1_1$neg)-DNS$_0$ is also $_S$realizable by Theorem 11.3(a) of \cite{klve1965}.
\subsubsection{Double negation shift for functions} Full double negation shift for functions conflicts with Brouwer's continuity principles, but the version \[\mathrm{DNS_1. \; \; \; \forall \alpha \neg \neg \exists x R(\overline{\alpha}(x)) \rightarrow \neg \neg \forall \alpha \exists x R(\overline{\alpha}(x))}\] does not. Lemma 27 in \cite{jrm2002} establishes that DNS$_1$, like MP$_1$, is self-realizing over {\bf B}, so {\bf I} + DNS$_1$ is consistent. The useful special case \[\mathrm{\Sigma^0_1\mbox{-}DNS_1. \; \; \; \forall \alpha \neg \neg \exists x \rho(\overline{\alpha}(x)) = 0 \rightarrow \neg \neg \forall \alpha \exists x \rho(\overline{\alpha}(x)) = 0}\] is consistent with {\bf I} + VS by classical $_S$realizability, so might be considered in this context to be a palatable substitute for Markov's Principle. Scedrov and Vesley observed in effect that {\bf IRA} + $\Sigma^0_1$-DNS$_1$ $\vdash$ $\Sigma^0_1$-DNS$_0$.
G\"odel, Dyson and Kreisel proved that the weak completeness of intuitionistic predicate logic for Beth semantics is equivalent, over {\bf IRA}, to a weaker consequence of $\Sigma^0_1$-DNS$_1$ which could be called the ``G\"odel-Dyson-Kreisel Principle'':\footnote{A doubly negated version $\mathrm{\neg \neg WKL \equiv \forall n \exists \beta_{B(\beta)} \forall x \leq n \, \rho(\overline{\beta}(x)) \neq 0 \rightarrow \neg \neg \exists \beta_{B(\beta)} \forall x \rho(\overline{\beta}(x)) \neq 0}$ of weak K\"onig's Lemma is equivalent over {\bf IA$_1$} to $\neg \neg$FT$_1$ + GDK. A proof is in \cite{jrm????}, forthcoming.} \[\mathrm{GDK. \; \; \; \forall \alpha_{B(\alpha)} \neg \neg \exists x \rho(\overline{\alpha}(x)) = 0 \rightarrow \neg \neg \forall \alpha_{B(\alpha)}\exists x \rho(\overline{\alpha}(x)) = 0}.\] Because GDK is $^{\Delta^1_1}$realizable (\cite{jrm2010}) while $\Sigma^0_1$-DNS$_1$ is not, {\bf I} + GDK $\not\vdash$ $\Sigma^0_1$-DNS$_1$. \subsection{Doubly negated characteristic function principles} A number-theoretic relation $\mathrm{A(x)}$ (perhaps with number and sequence parameters) has a characteristic function for $\mathrm{x}$ only if it satisfies $\mathrm{\forall x (A(x) \vee \neg A(x))}$. The doubly negated characteristic function (comprehension) schema \[\mathrm{\neg \neg \, CF_0. \; \; \; \neg \neg \exists \zeta \forall x (\zeta(x) = 0 \leftrightarrow A(x))}\] says only that it is {\em persistently consistent} to assume a characteristic function for $\mathrm{A(x)}$ exists. If {\bf S} proves an instance of $\mathrm{\neg\neg CF_0}$ in which the $\mathrm{A(x)}$ contains only $\mathrm{x}$ free, every consistent extension of {\bf S} is consistent with $\mathrm{\exists \zeta \forall x (\zeta(x) = 0 \leftrightarrow A(x))}$.
By Vafeiadou's characterization, the restriction $\mathrm{\neg \neg \, CF_0^{neg}}$ of $\mathrm{\neg \neg \, CF_0}$ to negative formulas $\mathrm{A(x)}$ is provable in the minimum classical extension of {\bf IA$_1$} + AC$_{00}!$. An important special case\footnote{Over {\bf IRA} + CF$_\mathrm{d}$ or {\bf EL} + CF$_\mathrm{d}$, $\mathrm{\neg \neg \, \Pi^0_1\mbox{-}CF_0}$ is equivalent to the principle $\mathrm{\neg \neg \, \Pi^0_1\mbox{-}LEM}$ in \cite{fuko2018}, and $\mathrm{\neg \neg \Sigma^0_1}$-CF$_0$ is equivalent to $\mathrm{\neg \neg} \, \Sigma^0_1$-LEM.}, equivalent by intuitionistic logic to ($\Pi^0_1$-CF$_0$)$^g$, is \[\mathrm{\neg \neg \Pi^0_1\mbox{-}CF_0. \; \; \; \; \forall \alpha \neg \neg \exists \zeta \forall x (\zeta(x) = 0 \leftrightarrow \forall y \alpha(\langle x,y \rangle) = 0)}.\]
\section{Minimum classical extensions of some subsystems of {\bf B}}
The negative translations of classical logical axioms and rules are correct by intuitionistic logic, so if $\mathrm{E}$ follows from $\mathrm{\Gamma}$ by classical logic then $\mathrm{E}^g$ follows from $\mathrm{\Gamma}^g$ by intuitionistic logic. With classical logic, $\mathrm{E}$ and $\mathrm{E}^g$ are equivalent. Even with intuitionistic logic, $\mathrm{\neg \neg E}^g$ and $\mathrm{E}^g$ are equivalent. These facts will be used without much comment in the following proofs.
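The last equivalence rests on the fact that intuitionistic logic (with decidable prime formulas) proves $\mathrm{\neg \neg A \leftrightarrow A}$ for every negative formula $\mathrm{A}$, by induction on the structure of $\mathrm{A}$; the key observations, sketched here, are \[\mathrm{\neg \neg \neg B \leftrightarrow \neg B}, \;\;\; \mathrm{\neg \neg (B ~\&~ C) \rightarrow (\neg \neg B ~\&~ \neg \neg C)}, \;\;\; \mathrm{\neg \neg \forall x B(x) \rightarrow \forall x \neg \neg B(x)},\] after which the induction hypothesis applies to the components.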
\subsection{Theorem.} \begin{enumerate} \item[(i)]{{\bf (IA$_1$)$^{+g}$} = {\bf IA$_1$}.} \item[(ii)]{{\bf (IRA)$^{+g}$} $\equiv$ ({\bf IA$_1$} + QF-AC$_{00}$)$^{+g}$ = {\bf IRA} + $\Sigma^0_1$-DNS$_0$.} \item[(iii)]{({\bf IA$_1$} + AC$^{Ar}_{00}$)$^{+g}$ = {\bf IA$_1$} + AC$^{Ar}_{00}$ + $\Sigma^0_1$-DNS$_0$ + $\mathrm{\neg \neg \,\Pi^0_1\mbox{-}CF_0}$.} \item[(iv)]{({\bf IA$_1$} + AC$_{00}!$)$^{+g}$ = {\bf IA$_1$} + AC$_{00}!$ + $\Sigma^0_1$-DNS$_0$ + $\mathrm{\neg \neg \, CF_0^\mathrm{neg}}$.} \item[(v)]{({\bf IA$_1$} + AC$_{00}$)$^{+g}$ = {\bf IA$_1$} + AC$_{00}$ + $\Sigma^0_1$-DNS$_0$ + $\mathrm{\neg \neg \, CF_0^\mathrm{neg}}$.} \item[(vi)]{({\bf IA$_1$} + AC$_{01}$)$^{+g}$ = {\bf IA$_1$} + AC$_{01}$ + $\mathrm{({\Sigma^1_1 {neg}})\mbox{-}DNS_0}$ = {\bf IA$_1$} + (AC$_{01}$)$^g$.} \item[(vii)]{({\bf IA$_1$} + FT$_1$)$^{+g}$ = {\bf IA$_1$} + FT$_1$ + GDK.} \item[(viii)]{({\bf IRA} + FT$_1$)$^{+g}$ = {\bf IRA} + FT$_1$ + $\Sigma^0_1$-DNS$_0$ + GDK.} \item[(ix)]{({\bf IRA} + BI$_1$)$^{+g}$ = ({\bf IRA})$^{+g}$ + BI$_1$ + (BI$_1$)$^g$ $\subseteq$ {\bf IRA} + BI$_1$ + $\Sigma^0_1$-DNS$_1$.} \item[(x)]{({\bf IA$_1$} + AC$_{00}$ + BI$_1$)$^{+g}$ = {\bf IA$_1$} + AC$_{00}$ + BI$_1$ + $\Sigma^0_1$-DNS$_0$ + $\mathrm{\neg \neg \, CF_0^\mathrm{neg}}$.} \item[(xi)]{{\bf B}$^{+g}$ = ({\bf IA$_1$} + AC$_{01}$ + BI$_1$)$^{+g}$ = {\bf B} + $\mathrm{({\Sigma^1_1 {neg}})\mbox{-}DNS_0}$ = {\bf B} + (AC$_{01}$)$^g$.} \end{enumerate}
{\em Proofs}. (i): The Gentzen negative translations of the axioms of {\bf IA$_1$} are provable in {\bf IA$_1$}, and the negative translations of the rules of inference are admissible for {\bf IA$_1$}, so no additions are needed.
(ii): To each quantifier-free formula $\mathrm{A(x,y)}$ there is by \cite{kl1969} a term $\mathrm{s(x,y)}$, with the same free variables, such that {\bf IA$_1$} proves both $\mathrm{\forall x \forall y (A(x,y) \leftrightarrow s(x,y) = 0)}$ and $\mathrm{\forall x \forall y (u(\langle x,y \rangle) = 0 \leftrightarrow s(x,y) = 0)}$ where $\mathrm{u = \lambda z . s((z)_0,(z)_1)}$. Therefore {\bf IA$_1$} proves $\mathrm{\exists \beta \forall x \forall y [A(x,y) \leftrightarrow \beta(\langle x,y \rangle) = 0]}$. By intuitionistic logic the negative translation of $\mathrm{\forall x \exists y \beta(\langle x,y \rangle) = 0}$ is equivalent to $\mathrm{\forall x \neg \neg \exists y \beta(\langle x,y \rangle) = 0}$, and the negative translation of $\mathrm{\exists \alpha \forall x \beta(\langle x,\alpha(x) \rangle) = 0}$ is equivalent to $\mathrm{\neg \neg \exists \alpha \forall x \beta(\langle x,\alpha(x) \rangle) = 0}$; therefore {\bf IRA} + $\Sigma^0_1$-DNS$_0$ $\vdash$ (QF-AC$_{00}$)$^g$. Conversely, $\Sigma^0_1$-DNS$_0$ is equivalent over {\bf IRA} to the negative translation of an instance of QF-AC$_{00}$.
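To fix notation, $\Sigma^0_1$-DNS$_0$ is read here as the double negation shift for $\Sigma^0_1$ formulas; this reconstruction is consistent with its use in (ii):
\[\mathrm{\Sigma^0_1\mbox{-}DNS_0. \; \; \forall x \neg \neg \exists y \, \beta(\langle x,y \rangle) = 0 \rightarrow \neg \neg \forall x \exists y \, \beta(\langle x,y \rangle) = 0.}\]
Applying QF-AC$_{00}$ under the double negation then converts the conclusion into $\mathrm{\neg \neg \exists \alpha \forall x \, \beta(\langle x,\alpha(x) \rangle) = 0}$.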
(iii): Since QF-AC$_{00}$ is a special case of AC$^{Ar}_{00}$, {\bf IRA} $\subseteq$ {\bf IA$_1$} + AC$^{Ar}_{00}$. By formula induction, {\bf IRA} + $\mathrm{\neg \neg \, \Pi^0_1\mbox{-}CF_0}$ proves $\mathrm{\neg \neg \exists \eta \forall x \forall y (\eta(\langle x,y \rangle) = 0 \leftrightarrow A(x,y))}$ for every negative arithmetical formula $\mathrm{A(x,y)}$. The negative translation of AC$^{Ar}_{00}$ now follows using QF-AC$_{00}$ and $\Sigma^0_1$-DNS$_0$ as in (ii). This is a variation of Solovay's argument; he started with MP$_1$ and $\mathrm{\neg \neg \, \Sigma^0_1\mbox{-}CF_0}$ instead of $\mathrm{\Sigma^0_1\mbox{-}DNS_0}$ and $\mathrm{\neg \neg \, \Pi^0_1\mbox{-}CF_0}$, which give a precise characterization here. See the next theorem also.
Conversely, $\mathrm{\forall x \exists z (z = 0 \leftrightarrow \forall y \alpha(\langle x,y \rangle) = 0) \rightarrow \exists \zeta \forall x (\zeta(x) = 0 \leftrightarrow \forall y \alpha(\langle x,y \rangle) = 0)}$ is an instance of AC$^{Ar}_{00}$, and $\mathrm{\forall x \neg \neg \exists z (z = 0 \leftrightarrow \forall y \alpha(\langle x,y \rangle) = 0)}$ is provable in {\bf IA$_1$}. It follows that $\mathrm{\neg \neg \exists \zeta \forall x (\zeta(x) = 0 \leftrightarrow \forall y \alpha(\langle x,y \rangle) = 0)}$ is provable in {\bf IA$_1$} + (AC$^{Ar}_{00}$)$^g$.
(iv): {\bf IA$_1$} + AC$_{00}!$ = {\bf IRA} + CF$_{\mathrm{d}}$ by Vafeiadou's characterization; therefore ({\bf IA$_1$} + AC$_{00}!$)$^{+g}$ = ({\bf IRA} + CF$_{\mathrm{d}}$)$^{+g}$ = ({\bf IRA})$^{+g}$ + CF$_{\mathrm{d}}$ + (CF$_{\mathrm{d}}$)$^g$. Each instance of $\mathrm{\neg \neg \, CF^\mathrm{neg}_0}$ is equivalent over {\bf IA$_1$} to the conclusion of the negative translation of an instance of CF$_{\mathrm{d}}$, and the negative translation $\mathrm{\forall x \neg (\neg A}^g\mathrm{(x) ~\&~ \neg \neg A}^g\mathrm{(x))}$ of $\mathrm{\forall x (A(x) \vee \neg A(x))}$ is provable in {\bf IA$_1$} for all formulas $\mathrm{A(x)}$, so (CF$_{\mathrm{d}}$)$^g$ and $\mathrm{\neg \neg \, CF^\mathrm{neg}_0}$ are equivalent over {\bf IA$_1$}.
(v) follows from (iv) because AC$_{00}$ and AC$_{00}!$ are equivalent as schemas over {\bf IA$_1$} + $\mathrm{(\neg \neg A \rightarrow A)}$, which proves $\mathrm{\forall x (\exists y A(x,y) \rightarrow \exists ! y (A(x,y) ~\&~ \forall z < y \neg A(x,z)))}$. Therefore (AC$_{00}$)$^g$ and (AC$_{00}!$)$^g$ are equivalent over {\bf IA$_1$}.
(vi) is immediate from the definitions.
(vii): It is routine to show that {\bf IA$_1$} + FT$_1$ + GDK proves (FT$_1$)$^g$. The proof of GDK in {\bf IA$_1$} + (FT$_1$)$^g$ is an easy exercise. (viii) follows by (ii).
(ix): It is routine to show that {\bf IRA} + BI$_1$ + $\Sigma^0_1$-DNS$_1$ proves (BI$_1$)$^g$, and $\Sigma^0_1$-DNS$_0$ follows from $\Sigma^0_1$-DNS$_1$ in {\bf IRA}. Now use (ii).
(x) and (xi) follow from (v) and (vi) because {\bf IA$_1$} + AC$_{00}$ + $\mathrm{(\neg \neg A \rightarrow A)}$ $\vdash$ BI$_1$ (cf. $^\ast$26.1$^\circ$ in \cite{klve1965}), so ({\bf IA$_1$} + AC$_{00}$)$^{+g}$ $\vdash$ (BI$_1$)$^g$. \qed
\subsection{Corollary.} {\bf IRA} + BI$_1$ + $\Sigma^0_1$-DNS$_1$ is its own minimum classical extension.
\vskip 0.1cm
{\em Proof.} By Theorem 5.1(ix) with the observation that {\bf IA$_1$} $\vdash$ ($\Sigma^0_1$-DNS$_1$)$^g$. \qed

\subsection{Corollary.} For each subsystem {\bf S} of {\bf B} considered in Theorem 5.1: \begin{enumerate} \item[(i)]{{\bf S}$^{+g}$ is its own minimum classical extension.} \item[(ii)]{({\bf S} + MP$_1$)$^{+g}$ = {\bf S}$^{+g}$ + MP$_1$ is its own minimum classical extension.} \item[(iii)]{{\bf S}$^{+g}$ + MP$_1$ is consistent with strong continuous choice CC$_{11}$ ($^{\mathrm{x}}$27.1 in \cite{klve1965}).} \item[(iv)]{{\bf S}$^{+g}$ + CC$_{11}$ $\not\vdash$ MP$_1$.} \end{enumerate}
{\em Proofs.} (i) is true because the Gentzen negative translation is idempotent. (ii) is true because {\bf S} $\vdash$ (MP$_1$)$^g$. The rest is implicit in \cite{klve1965}. (iii) holds by classical Kleene function-realizability (cf. Lemma 8.4(a) of \cite{klve1965}). (iv) holds because every theorem of {\bf S}$^{+g}$ + CC$_{11}$ is $_S$realizable but MP$_1$ is not (cf. Lemma 10.7, Theorem 11.3 and Corollary 11.10(a) in \cite{klve1965}). \qed
\subsection{Corollary.} Each of {\bf IRA} + MP$_1$, {\bf IA$_1$} + FT$_1$ + MP$_1$, {\bf IRA} + FT$_1$ + MP$_1$ and {\bf IRA} + BI$_1$ + MP$_1$ is its own minimum classical extension.
{\em Proof.} {\bf IA$_1$} + MP$_1$ proves $\Sigma^0_1$-DNS$_0$, $\Sigma^0_1$-DNS$_1$ and GDK, so the results follow from Theorem 5.1(ii), (vii), (viii) and (ix) using Corollary 5.3. \qed
\subsection{Two questions} Sometimes only one or two additional axioms must be added to a subsystem {\bf S} of {\bf B} in order to prove its G\"odel-Gentzen negative interpretation. The unrestricted axioms of countable choice and comprehension have resisted this treatment, requiring instead the addition of an axiom schema $\mathrm{\neg \neg \, CF_0^\mathrm{neg}}$ or $\mathrm{({\Sigma^1_1 {neg}})\mbox{-}DNS_0}$. Is there a more elegant solution?
$\Sigma^0_1$-DNS$_1$ evidently suffices for the negative interpretation of BI$_1$, but is it stronger than necessary? Does {\bf IRA} + BI$_1$ + $\Sigma^0_1$-DNS$_0$ + (BI$_1$)$^g$ $\vdash$ $\Sigma^0_1$-DNS$_1$?
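Here $\Sigma^0_1$-DNS$_1$ is presumably the sequence-quantifier analogue of $\Sigma^0_1$-DNS$_0$; the following reconstruction matches the instance established in the proofs of the next section:
\[\mathrm{\Sigma^0_1\mbox{-}DNS_1. \; \; \forall \alpha \neg \neg \exists x \, \rho(\overline{\alpha}(x)) = 0 \rightarrow \neg \neg \forall \alpha \exists x \, \rho(\overline{\alpha}(x)) = 0.}\]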
\section{Bar induction in two contexts}
The next result sharpens Solovay's proof that {\bf IA$_1$} + AC$^{Ar}_{00}$ + BI$_1$ + $\mathrm{(\neg \neg A \rightarrow A)}$ can be negatively interpreted in {\bf IRA} + BI$_1$ + MP$_1$. In fact he proved the stronger theorem (cf. \cite{jrmrms}) that $\mathrm{\neg \neg \, \Sigma^0_1\mbox{-}CF_0}$ (thus $\mathrm{\neg \neg \, CF_0^{Ar}}$) holds in {\bf IRA} + BI$_1$ + MP$_1$, but $\mathrm{\neg \neg \, \Pi^0_1\mbox{-}CF_0}$ gives the double negation of the characteristic function principle for {\em negative} arithmetical formulas, which suffices with $\Sigma^0_1$-DNS$_0$ and QF-AC$_{00}$ for the negative interpretation of AC$^{Ar}_{00}$. For the derivation of $\mathrm{\neg \neg \, \Pi^0_1\mbox{-}CF_0}$ by bar induction and for the negative interpretation of BI$_1$, MP$_1$ is not needed; $\Sigma^0_1$-DNS$_1$ suffices.
\subsection{Theorem.} (after Solovay) \begin{enumerate} \item[(i)]{{\bf IA$_1$} + (BI$_1$)$^g$ $\vdash$ $\mathrm{\neg \neg \, \Pi^0_1\mbox{-}CF_0}$.} \item[(ii)]{{\bf IA$_1$} + AC$^{Ar}_{00}$ + BI$_1$ + $\mathrm{(\neg \neg A \rightarrow A)}$ can be negatively interpreted in (and therefore is equiconsistent with) its subsystem {\bf IRA} + BI$_1$ + $\Sigma^0_1$-DNS$_1$.} \end{enumerate}
{\em Proofs.} (i): Adapting Solovay's argument that {\bf IA$_1$} + BI$_1$ + MP$_1$ $\vdash$ $\mathrm{\neg \neg \, \Sigma^0_1\mbox{-}CF_0}$ (as in \cite{jrm2003}, \cite{jrmrms}), assume for contradiction {\bf (a)} $\mathrm{\forall \zeta \neg \forall x (\zeta(x) = 0 \leftrightarrow \forall y \alpha(\langle x,y \rangle) = 0)}$. Then {\bf (b)} $\mathrm{\forall \zeta \neg \neg \exists x [(\zeta(x) = 0 ~\&~ \neg \neg \exists y \alpha(\langle x,y \rangle) \neq 0) \vee (\zeta(x) \neq 0 ~\&~ \forall y \alpha(\langle x,y \rangle) = 0)]}$ follows in {\bf IA$_1$}, and this entails {\bf (c)} $\mathrm{\forall \zeta \neg \neg \exists x [(\zeta((x)_0) = 0 ~\&~ \alpha(\langle (x)_0,(x)_1 \rangle) \neq 0) ~\vee}$
\noindent $\mathrm{(\zeta((x)_0) \neq 0 ~\&~ \forall y \alpha(\langle (x)_0,y \rangle) = 0)]}$.
In {\bf IRA} one can define a binary sequence $\mathrm{\rho}$ such that $\mathrm{\rho(w) = 0}$ if and only if $\mathrm{Seq(w)}$ and for some $\mathrm{j < lh(w)}$ either \begin{enumerate} \item[{\bf (d)}]{$\mathrm{(w)_j = 1 ~\&~ \exists y < lh(w)\, \alpha(\langle j,y \rangle) \neq 0}$, or} \item[{\bf (e)}]{$\mathrm{(w)_j > 1 ~\&~ [\alpha(\langle j, ((w)_j\stackrel{.}{-}2) \rangle) = 0 \vee \exists y < (w)_j\stackrel{.}{-}2 \; \alpha(\langle j,y \rangle) \neq 0]}$.} \end{enumerate} Now prove {\bf (f)} $\mathrm{\forall \zeta \neg \neg \exists n \rho(\overline{\zeta}(n)) = 0}$ by cases on (c) using (d) and (e), giving the first hypothesis for an application of (BI$_1$)$^g$. The negative inductive predicate $\mathrm{A(w)}$ is \begin{multline*} \mathrm{A(w) \equiv \neg \neg \exists j < lh(w) [((w)_j = 1 \rightarrow \neg \forall y \alpha(\langle j,y \rangle) = 0)} \\ \mathrm{\&~ ((w)_j > 1 \rightarrow [\alpha(\langle j, ((w)_j\stackrel{.}{-}2) \rangle) \neq 0 \rightarrow \exists y < ((w)_j\stackrel{.}{-}2) \; \alpha(\langle j,y \rangle) \neq 0])].} \end{multline*} Evidently {\bf (g)} $\mathrm{\forall w (Seq(w) ~\&~ \rho(w) = 0 \rightarrow A(w))}$. In order to establish the inductive hypothesis {\bf (h)} $\mathrm{\forall w (Seq(w) ~\&~ \forall s A(w*\langle s+1\rangle) \rightarrow A(w))}$, argue by contradiction as follows, noting that in general $\mathrm{(w*\langle n \rangle)_{lh(w)} = n}$.
Assume $\mathrm{Seq(w) ~\&~ \forall s A(w*\langle s+1\rangle) ~\&~ \neg A(w)}$. From $\mathrm{A(w*\langle 1 \rangle)}$ and $\mathrm{\neg A(w)}$ we get $\mathrm{(w*\langle 1 \rangle)_{lh(w)} = 1 ~\&~ \neg \forall y \alpha(\langle lh(w),y \rangle) = 0}$. From $\mathrm{\forall n A(w*\langle n+2 \rangle)}$ and $\mathrm{\neg A(w)}$ we get $\mathrm{\forall n [(w*\langle n+2 \rangle)_{lh(w)} > 1 ~\&~ (\alpha(\langle lh(w),n \rangle) \neq 0 \rightarrow \exists y < n \, \alpha(\langle lh(w),y \rangle) \neq 0)}]$, from which it follows that $\mathrm{\forall n (\alpha(\langle lh(w),n \rangle) \neq 0 \rightarrow \exists y < n \, \alpha(\langle lh(w),y \rangle) \neq 0)}$, contradicting $\mathrm{\neg \forall y \alpha (\langle lh(w),y \rangle) = 0}$. This completes the proof of (h).
By (BI$_1$)$^g$ conclude $\mathrm{A(\langle \, \rangle)}$, which is impossible because $\mathrm{lh(\langle \, \rangle) = 0}$. Therefore $\mathrm{\neg \neg \, \Pi^0_1\mbox{-}CF_0}$ holds in {\bf IRA} + (BI$_1$)$^g$.
(ii): (BI$_1$)$^g$ was treated in Theorem 5.1(ix), and {\bf IRA} + (BI$_1$)$^g$ $\vdash$ (AC$^{Ar}_{00}$)$^g$ by formula induction from (i) (cf. the proof of Theorem 5.1(iii)). Observe that {\bf IRA} $\subseteq$ {\bf IA$_1$} + AC$_{00}^{Ar}$, and {\bf IA$_1$} proves $\mathrm{(\neg \neg A \rightarrow A)}^g$ for all formulas $\mathrm{A}$. \qed
\subsection{Theorem.} {\bf IRA} + (BI$_1$)$^g$ + $\mathrm{\neg \neg \, \Pi^1_1\mbox{-}CF_0}$ $\vdash$ $\Sigma^0_1$-DNS$_1$, where $\mathrm{\neg \neg \Pi^1_1\mbox{-}CF_0}$ is \[\mathrm{\forall \gamma \neg \neg \exists \zeta \forall x (\zeta(x) = 0 \leftrightarrow \forall \alpha \exists y \, \gamma(\overline{\alpha}(\langle x,y \rangle)) = 0).}\]
{\em Proof.} Assume {\bf (a)} $\mathrm{\forall \alpha \neg \neg \exists x \rho(\overline{\alpha}(x)) = 0}$. The goal is to prove $\mathrm{\neg \neg \forall \alpha \exists x \rho(\overline{\alpha}(x)) = 0}$ in {\bf IA$_1$} + $\mathrm{\neg \neg \, \Pi^1_1\mbox{-}CF_0}$ + (BI$_1$)$^g$. First define in {\bf IA$_1$} a function $\mathrm{\gamma}$ such that {\bf (b)} $\mathrm{\forall \alpha \forall x (Seq(x) \rightarrow \forall y(\rho(x * \overline{\alpha}(y)) = 0 \leftrightarrow \gamma(\overline{\alpha}(\langle x,y \rangle)) = 0))}$. As the desired conclusion is negative, assume for ``$\mathrm{\neg \neg \exists}$-elimination'' (cf. \cite{jrm2003}) from the appropriate instance of $\mathrm{\neg \neg \, \Pi^1_1\mbox{-}CF_0}$: {\bf (c)} $\mathrm{\forall x (\zeta(x) = 0 \leftrightarrow \forall \alpha \exists y \gamma(\overline{\alpha}(\langle x,y \rangle)) = 0)}$. Then in particular {\bf (d)} $\mathrm{\forall w (Seq(w) \rightarrow (\zeta(w) = 0 \leftrightarrow \forall \alpha \exists y \rho(w * \overline{\alpha}(y)) = 0))}$.
From (d) follow the other hypotheses {\bf(e)} $\mathrm{\forall w (Seq(w) ~\&~ \rho(w) = 0 \rightarrow \zeta(w) = 0)}$ and {\bf (f)} $\mathrm{\forall w (Seq(w) ~\&~ \forall s \, \zeta(w * \langle s+1 \rangle) = 0 \rightarrow \zeta(w) = 0)}$ of the instance of (BI$_1$)$^g$ with $\mathrm{\zeta(w) = 0}$ as the inductive predicate, so {\bf (g)} $\mathrm{\zeta(\langle \, \rangle) = 0}$, hence $\mathrm{\forall \alpha \exists x \rho(\overline{\alpha}(x)) = 0}$. Discharging hypothesis (c) by $\neg \neg \exists$-elimination, {\bf (h)} $\mathrm{\neg \neg \forall \alpha \exists x \rho(\overline{\alpha}(x)) = 0}$. \qed
\section{Minimum classical extensions of systems between {\bf B} and {\bf I}} The principle of monotone bar induction BI$_\mathrm{mon}$ is provable in {\bf I} and in {\bf B$^\circ$} but not in {\bf B}, and {\bf IA$_1$} + BI$_\mathrm{mon}$ proves BI$_\mathrm{d}$. It follows that the variant {\bf B$'$} of {\bf B} with BI$_\mathrm{mon}$ as an axiom schema in place of BI$_\mathrm{d}$ is classically sound and lies strictly between {\bf B} and {\bf I}.\footnote{Veldman's careful analysis of Brouwer's writing on the subject led him to the conclusion that Brouwer sometimes assumed a monotone bar, but sometimes fell into the error of trying to justify classical bar induction (which is inconsistent with his own continuity principle).} As it happens, {\bf B$'$} has the same classical content as {\bf B} over {\bf IA$_1$} (cf. \cite{jrmgvf2021}), but this may not be the case for every classically sound intermediate system.
\subsection{Neighborhood function principles} A classically sound choice principle guaranteeing that every pointwise continuous relation has a modulus of continuity is Troelstra's {\em neighborhood function principle}: \[\mathrm{NFP. \; \forall \alpha \exists x A(\overline{\alpha}(x)) \rightarrow \exists \sigma \forall \alpha [\exists ! x \sigma(\overline{\alpha}(x)) > 0 ~\&~ \forall x \forall y (\sigma(\overline{\alpha}(x)) = y+1 \rightarrow A(\overline{\alpha}(y)))]}.\] This version, labeled AC$_{1/2,0}$ in \cite{jrmgvf2012}, is equivalent to Troelstra's over {\bf IA$_1$}.
NFP follows easily from ``Brouwer's Principle for a Number'' CC$_{10}$ ($^\ast$27.2 in \cite{klve1965}) but is not provable in {\bf B}. The monotone version NFP$_\mathrm{mon}$ (AC$^\mathrm{m}_{1/2,0}$ in \cite{jrmgvf2012}) of NFP is interderivable with BI$_\mathrm{mon}$ over {\bf B} (so does not add classical content to {\bf B}), but NFP is apparently stronger. A partial characterization of ({\bf IRA} + NFP)$^{+g}$ follows.
\subsection{Theorem.} \begin{enumerate} \item[(i)]{({\bf IRA} + NFP)$^{+g}$ $\subseteq$ {\bf IRA} + NFP + $\Sigma^0_1$-DNS$_1$ + $\neg \neg \, $CF$_0^\mathrm{neg}$.} \item[(ii)]{{\bf IRA} + NFP$^g$ + $\Sigma^0_1$-DNS$_0$ $\vdash$ $\neg \neg \, $CF$_0^\mathrm{neg}$.} \end{enumerate}
{\em Proofs.} (i) Assume {\bf (a)} $\mathrm{\forall \alpha \neg \neg \exists x \, A}^g\mathrm{(\overline{\alpha}(x))}$. For $\mathrm{\neg \neg \exists \zeta}$-elimination from $\neg\neg$CF$_0^\mathrm{neg}$: {\bf (b)} $\mathrm{\forall w [\zeta(w) = 0 \leftrightarrow A}^g\mathrm{(w)]}$. From (a), (b) by $\Sigma^0_1$-DNS$_1$: {\bf (c)} $\mathrm{\neg \neg \forall \alpha \exists x \zeta(\overline{\alpha}(x)) = 0.}$ NFP gives {\bf (d)} $\mathrm{\neg \neg \exists \sigma \forall \alpha [\exists ! x \sigma(\overline{\alpha}(x)) > 0 ~\&~ \forall x \forall y (\sigma(\overline{\alpha}(x)) = y + 1 \rightarrow \zeta(\overline{\alpha}(y)) = 0)]}$, whence {\bf (e)} $\mathrm{\neg \neg \exists \sigma \forall \alpha [\exists ! x \sigma(\overline{\alpha}(x)) > 0 ~\&~ \forall x \forall y (\sigma(\overline{\alpha}(x)) = y + 1 \rightarrow A}^g\mathrm{(\overline{\alpha}(y)))]}$ by (b) and {\em a fortiori} {\bf (f)} $\mathrm{\neg \neg \exists \sigma \forall \alpha [\neg \neg \exists ! x \sigma(\overline{\alpha}(x)) > 0 ~\&~ \forall x \forall y (\sigma(\overline{\alpha}(x)) = y + 1 \rightarrow A}^g\mathrm{(\overline{\alpha}(y)))]}$. Because (f) is a negation not involving $\mathrm{\zeta}$, (b) may now be discharged by $\mathrm{\neg \neg \exists \zeta}$-elimination.
(ii) {\bf IA$_1$} $\vdash$ $\mathrm{\forall x \neg \neg (A(x) \vee \neg A(x))}$ and so {\bf (a)} $\mathrm{\forall x \neg \neg \exists y (y \leq 1 ~\&~ (y = 0 \leftrightarrow A(x)))}$. Let B(w) abbreviate $\mathrm{Seq(w) ~\&~ 1 \leq lh(w) \leq 2 ~\&~ (lh(w) = 1 \leftrightarrow A((w)_0 \stackrel{.}{-} 1))}$, where for $\neg \neg \, $CF$_0^\mathrm{neg}$ the A(w) and therefore B(w) are negative. Then {\bf(b)} $\mathrm{\forall \alpha \neg \neg \exists y B(\overline{\alpha}(y))}$, so by NFP$^g$: {\bf (c)} $\mathrm{\neg \neg \exists \sigma \forall \alpha [\neg \neg \exists ! x \sigma(\overline{\alpha}(x)) > 0 ~\&~ \forall x \forall y (\sigma(\overline{\alpha}(x)) = y+1 \rightarrow B(\overline{\alpha}(y)))]}$. Now assume {\bf (d)} $\mathrm{\forall \alpha \neg \neg \exists ! x \sigma(\overline{\alpha}(x)) > 0 ~\&~ \forall \alpha \forall x \forall y (\sigma(\overline{\alpha}(x)) = y+1 \rightarrow B(\overline{\alpha}(y)))}$ for $\mathrm{\neg \neg \exists \sigma}$-elimination from (c), since $\neg \neg \, $CF$_0^\mathrm{neg}$ is negative. Substituting $\mathrm{\lambda t.n}$ for $\mathrm{\alpha}$ in (d) gives {\bf (e)} $\mathrm{\forall n \neg \neg \exists ! x \sigma(\overline{\lambda t.n}(x)) > 0 ~\&~ \forall n \forall x \forall y (\sigma(\overline{\lambda t. n}(x)) = y+1 \rightarrow B(\overline{\lambda t.n}(y)))}$, hence {\bf(f)} $\mathrm{\neg \neg \forall n \exists x \sigma(\overline{\lambda t.n}(x)) > 0 ~\&~ \forall n \forall x \forall y (\sigma(\overline{\lambda t. n}(x)) = y+1 \rightarrow B(\overline{\lambda t.n}(y)))}$ by $\Sigma^0_1$-DNS$_0$, so by QF-AC$_{00}$: {\bf (g)} $\mathrm{\neg \neg \exists \tau \forall n \sigma(\overline{\lambda t.n}(\tau(n))) > 0}$. For $\mathrm{\neg \neg \exists \tau}$-elimination from (g) assume {\bf (h)} $\mathrm{\forall n \, \sigma(\overline{\lambda t.n}(\tau(n))) > 0}$, so {\bf (i)} $\mathrm{\forall n B(\overline{\lambda t. n}(\sigma(\overline{\lambda t.n}(\tau(n)))\stackrel{.}{-} 1))}$ by (f). 
It follows that {\bf (j)} $\mathrm{\forall n [1 \leq lh(\overline{\lambda t. n}(\sigma(\overline{\lambda t.n}(\tau(n)))\stackrel{.}{-} 1)) = \sigma(\overline{\lambda t.n}(\tau(n)))\stackrel{.}{-} 1 \leq 2]}$, so {\bf (k)} $\mathrm{\forall n \,[(\overline{\lambda t. n}(\sigma(\overline{\lambda t.n}(\tau(n)))\stackrel{.}{-} 1)_0 \stackrel{.}{-} 1 = n]}$. Finally set $\mathrm{\zeta = \lambda n.\sigma(\overline{\lambda t.n}(\tau(n)))\stackrel{.}{-} 2}$ to conclude $\mathrm{\exists \zeta \forall n (\zeta(n) = 0 \leftrightarrow A(n))}$. Two $\mathrm{\neg \neg \exists}$-eliminations, discharging (h) and (d) respectively, complete the proof of $\neg \neg \, $CF$_0^\mathrm{neg}$. \qed
\subsection{Dependent choice for sequences} Dependent choice for numbers DC$_0$ is a theorem of {\bf IA$_1$} + AC$_{00}$, but dependent choice for sequences \[\mathrm{DC_1. \; \; \forall \alpha \exists \beta A(\alpha,\beta) \rightarrow \forall \alpha \exists \gamma [(\gamma)_0 = \alpha ~\&~ \forall n A((\gamma)_n,(\gamma)_{n+1})]}\] (where $\mathrm{(\gamma)_n = \lambda x. \gamma(\langle n,x \rangle)}$) is not obviously provable in {\bf B} or even in {\bf B$^\circ$}. It is not hard to see, however, that {\bf B} + DC$_1$ is a classically sound subsystem of {\bf I}.
\subsection{Theorem.} \begin{enumerate} \item[(i)]{{\bf I} $\vdash$ DC$_1$.} \item[(ii)]{{\bf IRA} + DC$_1$ $\vdash$ AC$_{01}$.} \item[(iii)]{({\bf B} + DC$_1$)$^{+g}$ = {\bf IRA} + BI$_1$ + DC$_1$ + (DC$_1$)$^g$.} \end{enumerate}
{\em Proofs.} (i) Assume {\bf (a)} $\mathrm{\forall \alpha \exists \beta A(\alpha,\beta)}$. By CC$_{11}$ there is a $\mathrm{\sigma}$ satisfying {\bf (b)} $\mathrm{\forall \alpha \exists ! \beta [\{\sigma\}[\alpha] \simeq \beta ~\&~ A(\alpha,\beta)]}$. Fix $\mathrm{\alpha}$. It will be enough to show that there is a $\mathrm{\zeta}$ such that {\bf (c)} $\mathrm{\forall n [((\zeta)_n)_0 = \alpha ~\&~ \forall i < n [A(((\zeta)_n)_i,((\zeta)_n)_{i+1}) ~\&~ ((\zeta)_n)_i = ((\zeta)_{n+1})_i]]}$, since then we can define $\mathrm{\gamma}$ so that {\bf (d)} $\mathrm{\forall n [(\gamma)_n = ((\zeta)_{n+2})_n]}$ and then it will follow that $\mathrm{(\gamma)_0 = \alpha}$ and for all $\mathrm{n}$: $\mathrm{(\gamma)_{n+1} = ((\zeta)_{n+3})_{n+1} = ((\zeta)_{n+2})_{n+1}}$ so $\mathrm{A((\gamma)_n,(\gamma)_{n+1})}$.
Toward (c), first prove by induction: $\mathrm{\forall n \exists \delta [(\delta)_0 = \alpha ~\&~ \forall i < n (\delta)_{i+1} \simeq \{\sigma\}[(\delta)_i]]}$. By AC$_{01}$, $\mathrm{\exists \zeta \forall n [((\zeta)_n)_0 = \alpha ~\&~ \forall i < n ((\zeta)_n)_{i+1} \simeq \{\sigma\}[((\zeta)_n)_i]]}$. Apply (b).\footnote{The logic of partial terms is {\em not} involved in this argument because the informal expression $\mathrm{\{\sigma\}[\alpha]}$, which helps to clarify the proof, always designates a fully defined sequence $\mathrm{\beta}$ satisfying $\mathrm{\forall x \forall y [\beta(x) = y \leftrightarrow \exists z [\sigma(\langle x+1 \rangle \ast \overline{\alpha}(z)) = y+1 ~\&~ \forall n<z \, \sigma(\langle x+1 \rangle \ast \overline{\alpha}(n)) = 0]]}$.}
(ii) Assume {\bf (a)} $\mathrm{\forall n \exists \alpha A(n,\alpha)}$. We want to show $\mathrm{\exists \beta \forall n A(n,(\beta)_n)}$. From (a) we conclude {\bf (b)} $\mathrm{\forall \alpha \exists \beta [\beta(0) = \alpha(0)+1 ~\&~ A(\alpha(0),\lambda x.\beta(x+1))]}$, from which DC$_1$ gives {\bf (c)} $\mathrm{\exists \gamma [(\gamma)_0 = \lambda t. 0 ~\&~ \forall n [(\gamma)_{n+1}(0) = (\gamma)_n(0)+1 ~\&~ A((\gamma)_n(0),\lambda x. (\gamma)_{n+1}(x+1))]]}$. For any such $\mathrm{\gamma}$, {\bf (d)} $\mathrm{\forall n \,(\gamma)_n(0) = n}$ and {\bf (e)} $\mathrm{\forall n \, A((\gamma)_n(0),\lambda x. (\gamma)_{n+1}(x+1))}$ hold by induction, and it is easy to define a $\mathrm{\beta}$ such that $\mathrm{\forall n \forall x \, \beta(\langle n,x \rangle )= (\gamma)_{n+1}(x+1)}$.
(iii) is immediate from (ii) and (the proof of) Theorem 5.1(xi). \qed
\subsection{Bar induction of type one} One year after the publication of \cite{klve1965}, Howard and Kreisel \cite{hokr1966} corrected a typo in the conclusion of the formulation, in Section 6.3 of \cite{sp1962}, of Spector's axiom schema of bar induction of type one; call the corrected version ``SBI$_1$.'' In a minor variant {\bf H} of {\bf IA$_1$}, using the continuous choice principle CC$_{10}$ which is a theorem of {\bf I}, they derived SBI$_1$ from the special case BI$_\mathrm{QF}$ of BI in which $\mathrm{R(w)}$ is required to be quantifier-free (hence decidable, even if free sequence variables are present). Their proof also shows that {\bf IRA} + BI$_1$ + CC$_{10}$ $\vdash$ SBI$_1$.
In Appendix 1 of \cite{hokr1966} Howard and Kreisel proved that DC$_1$ is equivalent to SBI$_1$ over {\bf H$^\circ$} $\equiv$ {\bf H} + $\mathrm{(\neg \neg A \rightarrow A)}$, and they asked (Problem 9 of the appendix) whether DC$_1$ is derivable from AC$_{01}$ over {\bf H} or {\bf H$^\circ$}. The same questions can be asked over {\bf IRA} and {\bf IRA$^\circ$}. Is it possible that {\bf B$^{+g}$} = ({\bf B} + SBI$_1$)$^{+g}$?
\subsection{A sticky question} Is there a reasonable way to define a unique minimum classical extension of a consistent theory which is not classically consistent? How can one make sense of cls({\bf I}), and hence of {\bf I}$^{+g}$?
Gandy, Kreisel and Tait \cite{gakrta1960} proved that every sequence belonging to all classical $\omega$-models of {\bf B} is hyperarithmetic, and therefore $\Delta^1_1$ by the Suslin-Kleene Theorem. Kleene (\cite{kl1955b}, XXIV) proved that the $\Delta^1_1$ sequences do not satisfy BI$_1$. Classical $\omega$-models of subsystems of {\bf I} extending {\bf B} must include all projective sequences.
The negative interpretations of principles like NFP, DC$_1$ and BI$_\mathrm{mon}$, which are provable in {\bf I} and whose addition to {\bf B} is classically sound, should obviously belong to {\bf I}$^{+g}$. But {\bf I} refutes very simple consequences of the law of excluded middle to which Bishop constructivists have given colorful names. For example, {\bf I} proves ($^\ast$27.17 in \cite{klve1965}) the {\em negation} of the ``weak limited principle of omniscience'' \[\mathrm{WLPO. \; \; \; \forall \alpha (\forall x \alpha(x) = 0 \vee \neg \forall x \alpha (x) = 0).}\]
One possibility is suggested by the fact that {\bf I} is consistent with the collection of all {\em negative} sentences of the language $\mathcal{L}({\bf I})$ which are {\em true in classical Baire space} $\mathcal{B}$ = $(\omega, \omega^\omega)$. Let $\mathcal{X}_\mathcal{B}$ be the collection of all subsystems {\bf S} of {\bf I} which extend {\bf B} and prove {\em only statements true in} $\mathcal{B}$. Then $\bigcup \{\mathrm{\bf S} : \mathrm{\bf S} \in \mathcal{X}_\mathcal{B}\}$ is classically sound, and proves just the theorems of {\bf I} which are true in $\mathcal{B}$. The {\em minimum classical extension} {\bf I$_\mathcal{B}^{+g}$} {\em of} {\bf I} {\em relative to} $\mathcal{B}$ may be identified with $\bigcup \{\mathrm{\bf S}^{+g} : \mathrm{\bf S} \in \mathcal{X}_\mathcal{B}\}$ + CC$_{11}$.
\subsubsection{\bf Theorem} (Vafeiadou, \cite{jrmgvf2021}) By this definition, {\bf I$_\mathcal{B}^{+g}$} = {\bf I} + $\mathrm{(\Gamma_\mathcal{B}^\circ)^g}$ where $\mathrm{\Gamma_\mathcal{B}^\circ}$ is the collection of all sentences in $\mathcal{L}({\bf I})$ which are true in $\mathcal{B}$.
\vskip 0.2cm
This appeal to truth in $\mathcal{B}$ appears necessary. If {\bf S} is a classically consistent system such that {\bf B} $\subseteq$ {\bf S} $\subseteq$ {\bf I} then {\bf S}$^{+g}$ is also classically consistent and {\bf B}$^{+g}$ $\subseteq$ {\bf S}$^{+g}$. However, if $\mathcal{Y}$ is the collection of {\em all} classically consistent subsystems of {\bf I} containing {\bf B}, then $\bigcup \{\mathrm{\bf S} : \mathrm{\bf S} \in \mathcal{Y}\}$ is classically inconsistent, as the following argument (inspired by Vafeiadou's proof of Theorem 7.6.1) shows. Let Con({\bf B}) be a sentence of $\mathcal{L}$({\bf I}) expressing the statement ``{\bf B} $\not \vdash \; 0=1$.''
\subsubsection{\bf Theorem} Consider the intermediate systems {\bf S$_1$} $\equiv$ {\bf B} + $\mathrm{(WLPO \rightarrow Con({\bf B}))}$ and
{\bf S$_2$} $\equiv$ {\bf B} + $\mathrm{(WLPO \rightarrow \neg Con({\bf B}))}$.
\begin{enumerate}
\item[(i)]{{\bf S$_1$} and {\bf S$_2$} are classically consistent subsystems of {\bf I}.}
\item[(ii)]{{\bf S$_1$} + {\bf S$_2$} is not classically consistent.}
\item[(iii)]{({\bf S$_1$})$^g$ conflicts with ({\bf S$_2$})$^g$, so $\bigcup \{\mathrm{\bf S}^g : \mathrm{\bf S} \in \mathcal{Y}\}$ is inconsistent.}
\end{enumerate}
{\em Proofs.} (i) is a consequence of G\"odel's second incompleteness theorem with the fact that {\bf I} $\vdash$ $\mathrm{\neg WLPO}$. (ii) holds because ({\bf S$_1$})$^\circ$ $\vdash$ Con({\bf B}) and ({\bf S$_2$})$^\circ$ $\vdash$ $\mathrm{\neg}$ Con({\bf B}). (iii) follows immediately because ({\bf S$_1$})$^g$ $\vdash$ (Con({\bf B}))$^g$ and ({\bf S$_2$})$^g$ $\vdash$ $\mathrm{\neg}$ (Con({\bf B}))$^g$. \qed
\section{Alternative varieties of constructive analysis}
\subsection{Axiomatizing the recursive model}
Troelstra and van Dalen \cite{tvd1988} propose that constructive recursive mathematics RUSS, up to and including the Kreisel-Lacombe-Shoenfield-Tsejtlin Theorem, should be axiomatized in the language of arithmetic by {\bf CRM} = {\bf HA} + ECT$_0$ + MP$_0$, where ECT$_0$ is Troelstra's ``extended Church's Thesis'' (cf. \cite{tr1973}) and MP$_0$ is an arithmetical form of Markov's Principle. By number-realizability, {\bf CRM} is consistent relative to its classically consistent subtheory {\bf HA} + MP$_0$, but (unlike {\bf CRM}) all of Russian recursive mathematics is consistent with classical logic. A classically sound formalization of RUSS appears to require sequence variables.
The $\omega$-model of $\mathcal{L}$({\bf I}) in which the infinite sequences are the recursive sequences satisfies the classically sound theory {\bf MRA} $\equiv$ {\bf IRA} + CT$_1$ + MP$_1$ where the axiom \[\mathrm{CT_1. \; \; \; \forall \alpha \exists e [\forall x \exists y T(e,x,y) ~\&~ \forall x \forall y (T(e,x,y) \rightarrow U(y) = \alpha(x))]}\] (abbreviated $\mathrm{\forall \alpha GR(\alpha)}$), with no parameters allowed, plays the restrictive role of Church's Thesis. CT$_1$ fails in {\bf B$^\circ$} by Lemma 9.8 of \cite{klve1965}, and is refutable in {\bf I} using Brouwer's Principle for Numbers ($^\ast$27.2 in \cite{klve1965}). Its negative interpretation, however, is provable in {\bf MRA} and is consistent with {\bf I} (but not with {\bf B$^\circ$}).\footnote{In contrast, the negative interpretation of the continuous choice axiom CC$_{11}$ of {\bf I} is inconsistent with {\bf I} and with {\bf B$^\circ$}.} \subsubsection{\bf Theorem.} \begin{enumerate} \item[(i)]{{\bf MRA$^{+g}$} = {\bf MRA}.} \item[(ii)]{{\bf MRA} can be negatively interpreted in its subsystem {\bf IRA} + $\Sigma^0_1$-DNS$_0$ + $\mathrm{\forall \alpha \neg \neg GR(\alpha)}$.} \item[(iii)]{{\bf I} + $\Sigma^0_1$-DNS$_0$ + $\mathrm{\forall \alpha \neg \neg GR(\alpha)}$ + VS is consistent and proves $\mathrm{\neg MP_1}$.} \end{enumerate} {\em Proofs.} (i) holds by {\bf IA$_1$} + $\Sigma^0_1$-DNS$_0$ $\vdash$ $\mathrm{(\forall \alpha \neg \neg GR(\alpha) \leftrightarrow (CT_1)}^g)$, Theorem 5.1(ii) and the easy facts that {\bf IRA} + MP$_1$ $ \vdash $ $\Sigma^0_1$-DNS$_0$ and {\bf IA$_1$} + CT$_1$ $\vdash$ $\mathrm{\forall \alpha \neg \neg GR(\alpha)}$. (ii) follows by the proof of Corollary 5.3(ii), and (iii) holds by classical $^G$realizability (\cite{jrm1971}) and \cite{ve1970}. \qed
\vskip 0.1cm
Kleene's formalization \cite{kl1969} of the theory of recursive functionals can be carried out in {\bf IRA}, so constructive recursive mathematics should be formalizable in {\bf MRA}. The arithmetical recursive choice principle CT$_0$, which holds in {\bf CRM}, suggests adding to {\bf MRA} either $\mathrm{AC_{00}^{Ar}}$ or a comprehension principle \[\mathrm{CF_d^{Ar}. \; \; \forall x (A(x) \vee \neg A(x)) \rightarrow \exists \alpha \forall x (\alpha(x) = 0 \leftrightarrow A(x))}\] restricted to formulas $\mathrm{A(x)}$ without free sequence variables. {\bf MRA} + CF$\mathrm{_d^{Ar}}$ should be consistent by recursive number-realizability, and its minimum classical extension relative to the recursive model follows the pattern of Vafeiadou's Theorem 7.6.1.
\subsection{Bishop's constructive mathematical analysis} Anything that can be formalized in Troelstra's {\bf EL} + AC$_{01}$, which has been used by Bishop constructivists, can also be formalized in the common subsystem {\bf IA$_1$} + AC$_{01}$ of {\bf I} and {\bf B$^\circ$} by \cite{gvf2012}. By Theorem 5.1(vi),(xi) with the fact that {\bf IA$_1$} + AC$_{00}$ + $\mathrm{(\neg \neg A \rightarrow A)}$ $\vdash$ BI$_1$, Bishop's constructive analysis BISH has the same classical content as Kleene's {\bf B}.
\subsection{Afterword} Reverse constructive mathematics establishes precise connections among mathematical theorems, function existence axioms, and logical principles over weak constructive theories based on intuitionistic logic. The (weak and not so weak) base theories used here are classically sound subsystems of Kleene's formal system {\bf I} in \cite{klve1965}. Kreisel's {\em two} uncomplimentary reviews notwithstanding, Kleene and Vesley's book contained the first coherent treatment of Brouwer's intuitionistic analysis in ordinary mathematical language with intuitionistic logic, together with a proof of its consistency relative to its classically sound subtheory {\bf B}.
The classical contents (as expressed by the G\"odel-Gentzen negative interpretation) of classical analysis with countable choice, Bishop's constructive analysis, and Markov's recursive analysis are individually consistent with Kleene's and Vesley's versions \cite{klve1965}, \cite{ve1970} of intuitionistic analysis.
The perceived conflicts among CLASS, INT, BISH and RUSS partly reflect the ways language is used in these four varieties of mathematical practice. Gentzen's negative interpretation enables a parallel treatment of classical and constructive mathematics by making linguistic differences explicit, restricting the logic to be intuitionistic, and expressing classical reasoning in the negative language. The constructive cost of reconciliation can be measured precisely by computing the minimum classical extensions of classically sound theories.
Of course, those conflicts are never just a matter of linguistic interpretation or of intuitionistic versus classical logic. They also reflect fundamentally different ideas about what constitutes an infinite sequence of natural numbers.
\end{document} |
\begin{document}
\title{Semipurity of tempered Deligne cohomology} \author{Jos\'e Ignacio Burgos Gil} \address{Facultad de Matem\'aticas\\Universidad de Barcelona\\ Gran V\'\i{}a C. C. 585\\ Barcelona, Spain} \thanks{Partially supported by Grants
BFM2003-02914 and MTM2006-14234-C02-01}
\begin{abstract}
In this paper we define the formal and tempered Deligne cohomology
groups, that are obtained by applying the Deligne complex functor
to the complexes of formal differential forms and tempered currents
respectively. We then prove the existence of a duality between them,
a vanishing theorem for the former and
a semipurity property for the latter. The motivation of these
results comes from the
study of covariant arithmetic Chow groups. The semi-purity property
of tempered Deligne cohomology implies, in particular, that several
definitions of covariant arithmetic Chow groups agree for
projective arithmetic varieties.
\end{abstract}
\maketitle
\tableofcontents
\section{Introduction} \label{sec:introduction}
The aim of this note is to study some properties of formal and tempered Deligne cohomology (with real coefficients). These cohomology groups are defined by applying the Deligne complex functor to the complexes of formal differential forms and tempered currents respectively.
Let $X$ be a complex projective manifold and let $W$ be a Zariski locally closed subset of $X$. Let $i:W\longrightarrow X$ denote the inclusion and let $i^{*},i^{!}, i_{\ast}, i_{!}$ be the induced functors in the derived category of abelian sheaves. Then the complex of formal differential forms of $W$ computes the cohomology of $W$ with compact supports. That is, it computes the groups $H^{\ast}(X,i_{!}i^{\ast}\underline {\mathbb{R}})$. The complex of tempered currents on $W$ computes the cohomology of $X$ with supports on $W$, that is, it computes the groups $H^{\ast}(X,i_{*}i^{!}\underline {\mathbb{R}})$. Following Deligne, the previous groups have a mixed Hodge structure, hence a Hodge filtration that we will call the Deligne-Hodge filtration. The complexes of formal differential forms and tempered currents are examples of Dolbeault complexes (see \cite{BurgosKramerKuehn:cacg}). Therefore they have a Hodge filtration obtained from the bigrading of differential forms. In general, this Hodge filtration does not induce the Deligne-Hodge filtration in cohomology. Moreover, the spectral sequence associated to this Hodge filtration does not degenerate at the $E_{1}$-term.
This implies that formal and tempered Deligne cohomology groups with real coefficients will not, in general, have the same properties as Deligne-Beilinson cohomology. For instance, they need not be finite dimensional. They have a structure of topological vector spaces, but they may be non-separated.
Note however that, in the particular case when $W=X$, the formal and tempered Deligne cohomology groups with real coefficients agree with the usual real Deligne cohomology groups.
In this note we will construct a (Poincar\'e like) duality between formal Deligne cohomology and tempered Deligne cohomology, which induces a perfect pairing between the corresponding separated vector spaces. In particular, applying this duality to the case $W=X$ we obtain an exceptional duality for real Deligne-Beilinson cohomology (corollary \ref{cor:3}) of smooth projective varieties that, to my knowledge, is new. The shape of this exceptional duality is strongly reminiscent of the functional equation of $L$-functions. It would be interesting to know whether this duality has any arithmetic meaning.
The second result is a vanishing theorem for formal Deligne cohomology. Thanks to the previous duality, this vanishing theorem implies a semipurity property of tempered Deligne cohomology (corollary \ref{cor:1}).
The motivation for these results comes from the study of covariant arithmetic Chow groups introduced in \cite{Burgos:acr} and \cite{BurgosKramerKuehn:cacg}. The covariant arithmetic Chow groups are a variant of the arithmetic Chow groups defined by Gillet and Soul\'e that are covariant for arbitrary proper morphisms. By contrast, the groups defined by Gillet and Soul\'e are only covariant for proper morphisms between arithmetic varieties that induce smooth maps between the corresponding complex varieties. The covariant arithmetic Chow groups do not have a product structure, but they are a module over the contravariant arithmetic Chow groups (see \cite{BurgosKramerKuehn:cacg} for more details). Similar definitions of covariant Chow groups have been given by Kawaguchi and Moriwaki \cite{KawaguchiMoriwaki:isfav} and by Zha \cite{zha99:_rieman_roch}. These two definitions are equivalent, except for the fact that Zha neglects the real manifold structure induced on the complex manifold associated to an arithmetic variety.
Although not explicitly stated there, in \cite{BurgosKramerKuehn:cacg} the covariant arithmetic Chow groups are defined by means of tempered Deligne cohomology. The semipurity property of tempered Deligne cohomology was announced and used in \cite{BurgosKramerKuehn:cacg}. Hence this paper can be seen as a complement of \cite{BurgosKramerKuehn:cacg}. A new consequence of the semipurity property is that, for an arithmetic variety that is generically projective, the covariant Chow groups introduced in \cite{Burgos:acr} and \cite{BurgosKramerKuehn:cacg} are isomorphic to the covariant Chow groups introduced by Kawaguchi and Moriwaki.
\nnpar{Acknowledgments.} In the course of preparing this manuscript, I had many stimulating discussions with many colleagues, and I would like to thank them all. In particular, I would like to express my gratitude to J.-B.~Bost, U.~K\"uhn, J.~Kramer, K.~K\"uhnemann, V.~Maillot, D.~Roessler, C.~Schapira, J.~Wildeshaus. Furthermore, I would like to thank the CRM (Bellaterra, Barcelona) for partial support of this work.
\section{Complexes of forms and currents} \label{sec:compl-forms-curr}
By a complex algebraic manifold we will mean the analytic manifold associated to a smooth scheme over $\mathbb{C}.$ Let $X$ be a projective complex algebraic manifold. We will consider the following situation: let $Z\subset Y$ be closed subvarieties of $X$, let $U$ and $V$ be the open subsets $U=X\setminus Y$, $V=X\setminus Z$ and let $W$ be the locally closed subset $W=Y\setminus Z$.
\subsection{Flat forms and Whitney forms} \label{sec:flat-forms-whitney} \
\nnpar{The complex of Whitney forms.} Let $\mathscr{E}^{\ast}_{X}$ denote the sheaf of smooth differential forms on $X$. We will denote by $E^{\ast}(U)$ the complex of global differential forms over $U$ and by $E^{\ast}_{c}(U)$ the complex of differential forms with compact support.
Let $\mathscr{E}^{\ast}_{X}(\fflat Y)$ denote the ideal sheaf of differential forms that are flat along $Y$. Recall that a differential form on $X$ is called flat along $Y$ if its Taylor expansion vanishes at all points of $Y$. We write $$\mathscr{E}^{\ast}_{Y^{\infty}}= \mathscr{E}^{\ast}_{X}/\mathscr{E}^{\ast}_{X}(\fflat Y).$$ The sections of this complex of sheaves are called Whitney forms on $Y$. Whitney's extension theorem (\cite{Tougeron:Ifd} IV theorem 3.1), gives us a precise description of the space of Whitney forms in terms of jets over $Y$. For instance, if $Y$ is the smooth subvariety of $\mathbb{C}^{n}$ defined by the equations $z_1=\dots =z_{k}=0$, then the germ of the sheaf of Whitney functions on $Y$ at the point $x=(0,\dots ,0)$ is \begin{displaymath}
\mathscr{E}^{0}_{Y^{\infty},x}=
\mathscr{E}^{0}_{Y,x}[[z_{1},\dots ,z_{k},\bar z_{1},\dots ,\bar
z_{k}]]. \end{displaymath}
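For instance, if $Y$ is a single point of $X=\mathbb{C}$ (the case $k=n=1$), then $\mathscr{E}^{0}_{Y^{\infty},x}=\mathbb{C}[[z,\bar z]]$: a Whitney function on a point is just a formal Taylor expansion, with no convergence requirement. This is the source of the infinite dimensionality phenomena discussed below.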
We will write \begin{displaymath}
\mathscr{E}^{\ast}_{Y^{\infty}}(\fflat Z)= \mathscr{E}^{\ast}_{X}(\fflat Z)/\mathscr{E}^{\ast}_{X}(\fflat Y). \end{displaymath} Observe that $\mathscr{E}^{\ast}_{Y^{\infty}}(\fflat Z)$ can also be defined as the kernel of the morphism \begin{displaymath}
\mathscr{E}^{\ast}_{Y^{\infty}}\longrightarrow
\mathscr{E}^{\ast}_{Z^{\infty}}. \end{displaymath}
The sheaf $\mathscr{E}^{\ast}_{Y^{\infty}}(\fflat Z)$ agrees with the sheaf denoted $\mathbb{C}_{W}\overset{W}{\otimes } \mathcal{C}_{X}^{\infty} $ in \cite{KashiwaraSchapira:mfcacs}.
The complex $\mathscr{E}^{\ast}_{Y^{\infty}}(\fflat Z)$ is a complex of fine sheaves. We will denote the corresponding complex of global sections by $E^{\ast}_{X^{\mathcal{W}}}(W):=\Gamma (X,\mathscr{E}^{\ast}_{Y^{\infty}}(\fflat Z))$. Note that the complex $E^{\ast}_{X^{\mathcal{W}}}(W)$ depends only on the locally closed subspace $W\subset X$ and not on a particular choice of closed subsets $Y$ and $Z$. Observe also that $E^{\ast}_{X^{\mathcal{W}}}(X)=E^{\ast}(X)$ is the usual complex of smooth differential forms on $X$.
We will denote by $E^{\ast}_{X^{\mathcal{W}},\mathbb{R}}(W)$ the real subcomplex underlying $E^{\ast}_{X^{\mathcal{W}}}(W)$.
By the acyclicity of fine sheaves, there is a diagram of short exact sequences \begin{equation}\label{eq:1}
\xymatrix{ &&&0\ar[d]&\\
& 0 \ar[d]& 0 \ar[d]& E_{X^{\mathcal{W}}}^{\ast}(W) \ar[d]&\\
0\ar[r] &E_{X^{\mathcal{W}}}^{\ast}(U)\ar[r]\ar[d]&
E^{\ast}(X)\ar[r]\ar[d]&
E_{X^{\mathcal{W}}}^{\ast}(Y)\ar[r]\ar[d]&0\\
0\ar[r] &E_{X^{\mathcal{W}}}^{\ast}(V)\ar[r]\ar[d]&
E^{\ast}(X)\ar[r]\ar[d]&
E^{\ast}_{X^{\mathcal{W}}}(Z)\ar[r]\ar[d]&0\\
& E_{X^{\mathcal{W}}}^{\ast}(W) \ar[d]& 0 & 0 & \\
& 0 &&& } \end{equation}
The complex $E^{\ast}(X)$ is a topological vector space with the $C^{\infty}$ topology. With this topology $E^{\ast}(X)$ is a Fr\'echet topological vector space (\cite{bourbaki87:_topol_vector_spaces_chapt} III p. 9). Moreover, $E^{\ast}_{X^{\mathcal{W}}}(U)$ is a closed subspace. In fact, by \cite{Tougeron:Ifd} V corollaire 1.6, it is the closure of the complex $E^{\ast}_{c}(U)$ of differential forms with compact support contained in $U$. More generally, all the monomorphisms in diagram \eqref{eq:1} are closed immersions.
The following result states that, since $U$ is an algebraic open subset of $X$, the complex $E^{\ast}_{X^{\mathcal{W}}}(U)$ depends only on $U$ and not on $X$.
\begin{proposition}\label{prop:3} Let $\pi :\widetilde X\longrightarrow X$ be a proper birational morphism with $D=\pi ^{-1}(Y)$ that induces an isomorphism between $\widetilde X\setminus D$ and $U$. Then the natural map \begin{displaymath}
\pi ^{\ast}:E^{\ast}(X)
\longrightarrow
E^{\ast}(\widetilde X) \end{displaymath} induces an isomorphism $\pi ^{\ast}:\Gamma (X,\mathscr{E}^{\ast}_{X}(\fflat Y))
\longrightarrow
\Gamma (\widetilde X,\mathscr{E}^{\ast}_{\widetilde X}(\fflat D))$. \end{proposition} \begin{proof}
By \cite{Poly:shcsesa} the morphism
\begin{displaymath}
\pi ^{\ast}:E^{\ast}(X)\longrightarrow E^{\ast}(\widetilde X)
\end{displaymath}
is a closed immersion. Since $\Gamma
(X,\mathscr{E}^{\ast}_{X}(\fflat Y))
$ and $
\Gamma (\widetilde X,\mathscr{E}^{\ast}_{\widetilde X}(\fflat D))$
are the closure of $E^{\ast}_{c}(U)$ in $E^{\ast}(X)$ and
$E^{\ast}(\widetilde X)$ respectively, they are identified by
$\pi ^{\ast}$. \end{proof}
\nnpar{The cohomology of the complex of Whitney forms.} By \cite{Poly:shcsesa} (see also \cite{BrasseletPflaum:_whitn} for a more general statement) we have
\begin{proposition} \label{prop:4}
The complex $\mathscr{E}^{\ast}_{Y^{\infty}}$ is a resolution
of the constant
sheaf $\underline {\mathbb{C}}$ on $Y$ by fine sheaves. Therefore
\begin{displaymath}
H^{\ast}(E_{X^{\mathcal{W}}}^{\ast}(W))=H^{\ast}_{c}(W,\mathbb{C}),
\end{displaymath}
where $H^{\ast}_{c}$ denotes cohomology with compact supports.
$\square$ \end{proposition}
\subsection{Currents with support in a subvariety} \label{sec:curr-with-supp} \
\nnpar{The complex of currents.} We first recall the definition of the complex of currents and we fix the sign convention and some normalizations. We will follow the conventions of \cite{BurgosKramerKuehn:cacg} \S 5.4 but with the homological grading.
Let $\mathscr {D}_{n}^{X}$ be the sheaf of degree $n$ currents on $X$. That is, for any open subset $U'$ of $X$, the group $\mathscr {D}_{n}^{X}(U')$ is the topological dual of the group of sections with compact support $E^{n}_{c}(U')$. The differential \begin{displaymath} \dd:\mathscr{D}_{n}^{X}\longrightarrow\mathscr{D}_{n-1}^{X} \end{displaymath} is defined by \begin{displaymath} \dd T(\varphi)=(-1)^{n}T(\dd\varphi); \end{displaymath} here $T$ is a current and $\varphi$ a test form. Note that we are using the sign convention of, for instance, \cite{Jannsen:DcHD} instead of the sign convention of \cite{GriffithsHarris:pag}.
The bigrading $\mathscr{E}^{n}_{X}=\bigoplus_{p+q=n}\mathscr{E}^{p,q}_{X}$ induces a bigrading $$\mathscr{D}_{n}^{X}=\bigoplus_{p+q=n}\mathscr{D}_{p,q}^{X},$$ with $\mathscr {D}_{p,q}^{X}(U')$ the topological dual of $\Gamma_{c}(U',\mathscr{E}^{p,q}_{X})$.
The real structure of $\mathscr{E}^{n}_{X}$ induces a real structure \begin{displaymath}
\mathscr{D}_{n}^{X,\mathbb{R}}\subset \mathscr{D}_{n}^{X}. \end{displaymath}
We will denote \begin{displaymath}
\mathscr{D}_{n}^{X,\mathbb{R}}(p)=\frac{1}{(2\pi i)^{p}}
\mathscr{D}_{n}^{X,\mathbb{R}}\subset \mathscr{D}_{n}^{X}. \end{displaymath}
If $X$ is equidimensional of dimension $d$ we will write \begin{equation}\label{eq:4}
\mathscr{D}^{n}_{X}=\mathscr{D}_{2d-n}^{X}, \quad
\mathscr{D}^{p,q}_{X}=\mathscr{D}_{d-p,d-q}^{X}, \quad
\text{and}\quad
\mathscr{D}^{n}_{X,\mathbb{R}}(p)=\mathscr{D}_{2d-n}^{X,\mathbb{R}}(d-p). \end{equation}
We will use all the conventions of \cite{BurgosKramerKuehn:cacg} \S 5.4. In particular, if $y$ is an algebraic cycle of $X$ of dimension $e$, we will write $\delta _{y}\in \mathscr{D}_{e,e}^{X}\cap \mathscr{D}_{2e}^{X,\mathbb{R}}(e)$ for the current \begin{displaymath}
\delta _{y}(\eta)=\frac{1}{(2\pi i)^{e}}\int_{y}\eta. \end{displaymath} Furthermore, there is an action \begin{displaymath}
\begin{matrix}
\mathscr{E}^{n}_{X}\otimes\mathscr{D}_{m}^{X}&\longrightarrow &
\mathscr{D}_{m-n}^{X},\\
\omega\otimes T&\longmapsto &\omega
\land T
\end{matrix} \end{displaymath} where the current $\omega \land T$ is defined by \begin{displaymath} (\omega\land T)(\eta)=T(\eta\land\omega). \end{displaymath} This action induces actions \begin{displaymath}
\mathscr{E}^{p,q}_{X}\otimes\mathscr{D}_{r,s}^{X}\longrightarrow \mathscr{D}_{r-p,s-q}^{X}, \quad\text{and}\quad \mathscr{E}^{n}_{X,\mathbb{R}}(p)\otimes \mathscr{D}_{m}^{X,\mathbb{R}}(q)\longrightarrow \mathscr{D}_{m-n}^{X,\mathbb{R}}(q-p). \end{displaymath}
Finally, if $X$ is equidimensional of dimension $d$, there is a fundamental current $\delta _{X}\in \mathscr{D}_{d,d}^{X}\cap \mathscr{D}_{2d}^{X,\mathbb{R}}(d)$, and a morphism \begin{equation}\label{eq:11} \mathscr{E}^{\ast}_{X}\longrightarrow \mathscr{D}_{2d-\ast}^{X}= \mathscr{D}^{\ast}_{X},\quad\omega \longmapsto [\omega ]=\omega \land \delta _{X}. \end{equation} This morphism sends $\mathscr{E}^{n}_{X,\mathbb{R}}(p)$ to $\mathscr{D}_{2d-n}^{X,\mathbb{R}}(d-p)=\mathscr{D}^{n}_{X,\mathbb{R}}(p)$.
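Observe that, with the sign convention adopted above, \eqref{eq:11} is a morphism of complexes; this is a routine verification. Indeed, for $\omega \in \mathscr{E}^{n}_{X}$ and a test form $\eta$ of degree $2d-n-1$, Stokes' theorem gives $\int_{X}\dd \eta \land \omega =(-1)^{2d-n}\int_{X}\eta \land \dd \omega$, hence \begin{displaymath} \dd [\omega ](\eta)=(-1)^{2d-n}[\omega ](\dd \eta) =\frac{(-1)^{n}}{(2\pi i)^{d}}\int_{X}\dd \eta \land \omega =\frac{1}{(2\pi i)^{d}}\int_{X}\eta \land \dd \omega =[\dd \omega ](\eta). \end{displaymath}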
\nnpar{Currents with support on a subvariety and tempered currents.} As in the previous section let $Z\subset Y$ denote two closed subvarieties of $X$ and put $U=X\setminus Y$, $V=X\setminus Z$ and $W=Y\setminus Z$. We denote by $\mathscr {D}_{\ast}^{Y^{\infty}}$ the subcomplex of $\mathscr{D}_{\ast}^{X}$ formed by currents with support on $Y$. In other words, for any open subset $U'$ of $X$ we have \begin{displaymath}
\mathscr {D}_{n}^{Y^{\infty}}(U')=\{T\in \mathscr {D}_{n}^{X}(U')\mid T(\eta)=0,\ \forall \eta\in \Gamma_{c}(U'\cap U,\mathscr{E}^{n}_{X})\}. \end{displaymath} Observe that, by continuity, the sections of $\mathscr{D}_{n}^{Y^{\infty}}(U')$ vanish on the subgroup $\Gamma _{c}(U',\mathscr{E}^{\ast}_{X}(\fflat Y))$.
We write $\mathscr{D}_{n}^{X/Y^{\infty}}=\mathscr {D}_{n}^{X}\left / \mathscr {D}_{n}^{Y^{\infty}}\right .$ and $
\mathscr{D}_{n}^{Y^{\infty}/Z^{\infty}}=
\mathscr{D}_{n}^{Y^{\infty}}/\mathscr{D}_{n}^{Z^{\infty}}.$
As in the case of differential forms, the complex $ \mathscr{D}_{n}^{Y^{\infty}/Z^{\infty}}$ can also be defined as the kernel of the morphism \begin{displaymath}
\mathscr{D}_{n}^{X/ Z^{\infty}} \longrightarrow
\mathscr{D}_{n}^{X/ Y^{\infty}}. \end{displaymath} All the above sheaves inherit a bigrading and a real structure.
Observe that, except for the fact that we are using here the homological grading, the complex of sheaves $\mathscr{D}_{n}^{X/ Y^{\infty}}$ agrees with the complex denoted by $\mathcal{TH}om(\mathbb{C}_{W},\mathcal{D}b_{X})$ in \cite{KashiwaraSchapira:mfcacs}.
The complex $\mathscr{D}_{n}^{ Y^{\infty}/Z^{\infty}}$ is a complex of fine sheaves. We will denote the complex of global sections by $D_{\ast}^{X^{\mathcal{T}}}(W^{\infty})=\Gamma (X, \mathscr{D}_{\ast}^{Y^{\infty}/Z^{\infty}})$. Thus the complex $D_{\ast}^{X^{\mathcal{T}}}(W^{\infty})$ is defined for any Zariski locally closed subset $W\subset X$. The corresponding real complex will be denoted by $D_{\ast}^{X^{\mathcal{T}},\mathbb{R}}(W^{\infty})$.
By \cite{Poly:shcsesa}, the complex $D_{\ast}^{X^{\mathcal{T}}}(U)$ can be identified with the image of the morphism \begin{displaymath}
D_{\ast}(X)\longrightarrow D_{\ast}(U). \end{displaymath} That is, it is the complex of currents on $U$ that can be extended to currents on the whole of $X$. The elements of $D_{\ast}^{X^{\mathcal{T}}}(U)$ will be called tempered currents. In the literature they are also called moderate, temperate or extendable currents. Moreover, as was the case with the complex $E^{\ast}_{X^{\mathcal{W}}}(U)$, since $U$ is a Zariski open subset, the complex $D_{\ast}^{X^{\mathcal{T}}}(U)$ depends only on $U$ and not on $X$.
\nnpar{The pairing between forms and currents.} We have already introduced an action \begin{equation}\label{eq:5} E^{n}(X)\otimes D_{m}(X)\longrightarrow D_{m-n}(X),\quad\omega\otimes T\longmapsto\omega \land T, \end{equation} where the current $\omega \land T$ is defined by \begin{displaymath} (\omega\land T)(\eta)=T(\eta\land\omega). \end{displaymath}
The subspace $D_{\ast}^{X^{\mathcal{T}}}(Y)$ is invariant under this action, and the subspace $E_{X^{\mathcal{W}}}^{\ast}(U)$ acts on it by zero. Therefore we obtain induced actions \begin{equation}\label{eq:6}
E^{n}_{X^{\mathcal{W}}}(Y)\otimes D^{X^{\mathcal{T}}}_{m}(Y)\longrightarrow D^{X^{\mathcal{T}}}_{m-n}(Y),\qquad E^{n}_{X^{\mathcal{W}}}(U)\otimes D_{m}^{X^{\mathcal{T}}}(U)\longrightarrow D_{m-n}^{X^{\mathcal{T}}}(U) \end{equation} and, more generally, an action \begin{equation}\label{eq:7}
E^{n}_{X^{\mathcal{W}}}(W)\otimes
D_{m}^{X^{\mathcal{T}}}(W)\longrightarrow
D_{m-n}^{X^{\mathcal{T}}}(W). \end{equation}
Since $X$ is proper, there is a canonical morphism \begin{displaymath}
\deg: D_{0}(X)\longrightarrow \mathbb{C} \end{displaymath} given by $\deg(T)=T(1)$. Observe that $\deg(D_{0}^{\mathbb{R}}(X))\subset \mathbb{R}$.
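Observe also that, with our sign convention, $\deg$ vanishes on boundaries: for $T\in D_{1}(X)$, \begin{displaymath} \deg(\dd T)=\dd T(1)=(-1)^{1}\,T(\dd 1)=0, \end{displaymath} so $\deg$ induces a morphism $H_{0}(D_{\ast}(X))\longrightarrow \mathbb{C}$.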
Combining the degree and the above actions, we recover the pairing \begin{displaymath}
E^{n}(X)\otimes D_{n}(X)\longrightarrow \mathbb{C}, \end{displaymath} that identifies $D_{n}(X)$ with the topological dual of $E^{n}(X)$. Under this identification, the subspace $E^{n}_{X^{\mathcal{W}}}(U)$ is the orthogonal complement of the subspace $D_{n}^{X^{\mathcal{T}}}(Y)$. Therefore $D_{n}^{X^{\mathcal{T}}}(U)$ is the topological dual of $E^{n}_{X^{\mathcal{W}}}(U)$, and $D_{n}^{X^{\mathcal{T}}}(Y)$ is the topological dual of $E^{n}_{X^{\mathcal{W}}}(Y)$. More generally, $D_{n}^{X^{\mathcal{T}}}(W)$ is the topological dual of $E^{n}_{X^{\mathcal{W}}}(W)$. The key point here is that $E^{n}_{X^{\mathcal{W}}}(U)$ is the closure of $\Gamma _{c}(U,\mathscr{E}^{n}_{X})$ and hence a closed subspace.
The above pairings induce a pairing \begin{displaymath}
E^{n}_{\mathbb{R}}(X)(p)\otimes
D_{n}^{\mathbb{R}}(X)(p)\longrightarrow \mathbb{R}, \end{displaymath} and similar pairings for the other complexes of forms and currents.
Finally, observe that there is a commutative diagram with exact rows and columns \begin{equation}\label{eq:2}
\xymatrix{ &&&0\ar[d]&\\
& 0 \ar[d]& 0 \ar[d]& D^{X^{\mathcal{T}}}_{\ast}(W) \ar[d]&\\
0\ar[r] &D_{\ast}^{X^{\mathcal{T}}}(Z)\ar[r]\ar[d]&
D_{\ast}(X)\ar[r]\ar[d]&
D_{\ast}^{X^{\mathcal{T}}}(V)\ar[r]\ar[d]&0\\
0\ar[r] &D_{\ast}^{X^{\mathcal{T}}}(Y)\ar[r]\ar[d]&
D_{\ast}(X)\ar[r]\ar[d]&
D_{\ast}^{X^{\mathcal{T}}}(U)\ar[r]\ar[d]&0\\
& D^{X^{\mathcal{T}}}_{\ast}(W) \ar[d]& 0 & 0 & \\
& 0 &&& } \end{equation} which is the topological dual of the diagram \eqref{eq:1}.
\nnpar{The homology of the complexes of currents.} By \cite{Poly:shcsesa} we have \begin{proposition}
The homology of the complexes $D_{\ast}^{X^{\mathcal{T}}}(W)$ is
given by
\begin{displaymath}
H_{\ast}(D^{X^{\mathcal{T}}}_{\ast}(W))=H_{\ast}^{BM}(W,\mathbb{C}),
\end{displaymath}
where $H_\ast^{BM}$ denotes Borel-Moore homology. In particular,
since we are assuming $Y$ proper,
\begin{displaymath}
H_{\ast}(D_{\ast}^{X^{\mathcal{T}}}(Y))=H_{\ast}(Y,\mathbb{C}).
\end{displaymath}
$\square$ \end{proposition}
\subsection{Formal and tempered Deligne cohomology} \label{sec:form-deligne-cohom} \
\nnpar{Formal Deligne cohomology.} The complex $E^{\ast}_{X^{\mathcal{W}},\mathbb{R}}(W)$ is an example of a Dolbeault algebra (see \cite{BurgosKramerKuehn:cacg}). Recall that, following Deligne, the cohomology of any complex variety has a mixed Hodge structure. We will call the Hodge filtration of this mixed Hodge structure the Deligne-Hodge filtration.
From the structure of Dolbeault algebra of $E^{\ast}_{X^{\mathcal{W}}}(W)$ we can define a Hodge filtration: it is the filtration associated to the bigrading. In general, this Hodge filtration does not induce the Deligne-Hodge filtration in cohomology. Moreover, the spectral sequence associated to this Hodge filtration need not degenerate at the $E_{1}$-term. Therefore, the Dolbeault cohomology groups $H^{p,q}_{\overline \partial}(E^{\ast}(Y^{\infty}))$ are not, in general, direct summands of $H^{p+q}(Y,\mathbb{C})$. In fact, they can be infinite dimensional, as the simplest example shows: put $X=\mathbb{P}^{1}_{\mathbb{C}}$, let $t$ be the absolute coordinate, and let $Y$ be the point $t=0$. Then $H^{0,0}_{\bar \partial}(E^{\ast}(Y^{\infty}))=\mathbb{C}[[t]]$, the ring of formal power series in one variable.
Following \cite{Burgos:CDB} and \cite{BurgosKramerKuehn:cacg}, to every Dolbeault algebra we can associate a Deligne algebra. We refer the reader to \cite{Burgos:CDB} and \cite{BurgosKramerKuehn:cacg} \S 5 for the definition and properties of Dolbeault algebras, Dolbeault complexes and the associated Deligne complexes. We will use freely the notation therein. In particular the Deligne algebra associated to the above Dolbeault algebra will be denoted $\mathcal{D}^{\ast}(E^{\ast}_{X^{\mathcal{W}}}(W),\ast)$.
\begin{definition} \label{def:1} The real formal Deligne cohomology
of $W$ (with
compact supports) is defined by
\begin{align*}
H_{\mathcal{D}^{f},c}^{\ast}(W^{\infty},\mathbb{R}(p))&=
H^{\ast}(\mathcal{D}^{\ast}(E^{\ast}_{X^{\mathcal{W}}}(W),p)).
\end{align*}
When $W$ is proper we will just write
$H_{\mathcal{D}^{f}}^{\ast}(W^{\infty},\mathbb{R}(p))$. \end{definition} The notation $W^{\infty}$ is a reminder that this cohomology depends not only on $W$, but also on an infinitesimal neighborhood of infinite order of $W$ in $X$.
\begin{remark}
Since we are assuming that $X$ is smooth and proper, the formal Deligne
cohomology of $X$,
$H_{\mathcal{D}^{f}}^{\ast}(X^{\infty},\mathbb{R}(p))$, given in
the previous definition, agrees with the usual Deligne cohomology of $X$.
Nevertheless, by the discussion preceding the definition, neither the formal Deligne
cohomology with compact supports of $U$ nor the formal Deligne
cohomology of $Y$ agrees, in general, with the usual
Deligne-Beilinson cohomology. For instance, the groups
$H_{\mathcal{D}^{f},c}^{\ast}(U^{\infty},\mathbb{R}(p))$ can be infinite
dimensional. \end{remark}
\nnpar{Homological Dolbeault complexes and homological Deligne
complexes.} In order to define formal Deligne homology we first translate the notions of \cite{BurgosKramerKuehn:cacg} \S 5.2 to the homological grading.
\begin{definition} \label{def:12} A \emph{homological Dolbeault complex} $A=(A_{\ast}^{\mathbb{R}},\dd_{A})$ is a graded complex of real vector spaces, which is bounded from above and equipped with a bigrading on $A^{\mathbb{C}}=A^{\mathbb{R}} \otimes_{\mathbb{R}}{\mathbb{C}}$, i.e., \begin{displaymath} A_{n}^{\mathbb{C}}=\bigoplus_{p+q=n}A_{p,q}, \end{displaymath} satisfying the following properties: \begin{enumerate} \item[(i)] The differential $\dd_{A}$ can be decomposed as the sum $\dd_{A}= \partial+\bar{\partial}$ of operators $\partial$ of type $(-1,0)$, resp. $\bar{\partial}$ of type $(0,-1)$. \item[(ii)] It satisfies the symmetry property $\overline{A_{p,q}}=A_{q,p}$, where $\overline{\phantom{M}}$ denotes complex conjugation. \end{enumerate} \end{definition}
\begin{notation} \label{def:13} Given a homological Dolbeault complex $A=(A_{\ast}^{\mathbb{R}},\dd_{A})$, we will use the following notations. The Hodge filtration $F$ of $A$ is the increasing filtration of $A^{\mathbb{C}}_{\ast}$ given by \begin{displaymath} F_{p}A_{n}=F_{p}A_{n}^{\mathbb{C}}=\bigoplus_{p'\leq p}A_{p',n-p'}. \end{displaymath} The filtration $\overline F$ of $A$ is the complex conjugate of $F$, i.e., \begin{displaymath} \overline{F}_{p}A_{n}=\overline{F}_{p}A_{n}^{\mathbb{C}}=\overline {F_{p}A_{n}^{\mathbb{C}}}. \end{displaymath} For an element $x\in A^{\mathbb{C}}$, we write $x_{i,j}$ for its component in $A_{i,j}$. For $k,k' \in \mathbb{Z}$, we define an operator $F_{k,k'}:A^{\mathbb{C}}\longrightarrow A^{\mathbb{C}}$ by the rule \begin{displaymath} F_{k,k'}(x):=\sum_{l\leq k,l'\leq k'}x_{l,l'}. \end{displaymath} We note that the operator $F_{k,k'}$ is the projection of $A^{\ast}_ {\mathbb{C}}$ onto the subspace $F_{k}A_{\ast}\cap\overline{F}_{k'} A_{\ast}$. This subspace will be denoted $F_{k,k'}A_{\ast}$. We will also denote by $F_{k}$ the operator $F_{k,\infty}$.
We denote by $A_{n}^{\mathbb{R}}(p)$ the subgroup $(2\pi i)^{-p}\cdot A_{n}^{\mathbb{R}}\subseteq A_{n}^{\mathbb{C}}$, and we define the operator \begin{displaymath} \pi_{p}:A^{\mathbb{C}}\longrightarrow A^{\mathbb{R}}(p) \end{displaymath} by setting $\pi_{p}(x):=\frac{1}{2}(x+(-1)^{p}\bar{x})$. \end{notation}
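Observe that the operators $\pi_{p}$ and $\pi_{p+1}$ are the projections associated to the direct sum decomposition $A^{\mathbb{C}}=A^{\mathbb{R}}(p)\oplus A^{\mathbb{R}}(p+1)$: for every $x\in A^{\mathbb{C}}$ we have \begin{displaymath} \pi_{p}(x)+\pi_{p+1}(x)= \frac{1}{2}\bigl(x+(-1)^{p}\bar{x}\bigr)+ \frac{1}{2}\bigl(x-(-1)^{p}\bar{x}\bigr)=x, \end{displaymath} and $\overline{(2\pi i)^{p}\pi_{p}(x)}=(2\pi i)^{p}\pi_{p}(x)$, so that $\pi_{p}(x)\in A^{\mathbb{R}}(p)$.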
To any homological Dolbeault complex we can associate a homological Deligne complex.
\begin{definition} Let $A$ be a homological Dolbeault complex. We denote by $A_{\ast}(p)^ {\mathcal{D}}$ the complex $s(A^{\mathbb{R}}(p)\oplus F_{p}A \overset{u}{\longrightarrow}A^{\mathbb{C}})$, where $u(a,f)= -a+f$ and $s(\ )$ denotes the simple complex of a morphism of complexes. \end{definition}
\begin{definition} Let $A$ be a homological Dolbeault complex. Then, the \emph{(homological) Deligne complex $(\mathcal{D}_{\ast}(A,\ast),\dd_{\mathcal{D}})$ associated to $A$} is the graded complex given by \begin{align*} &\mathcal{D}_{n}(A,p)= \begin{cases} A^{\mathbb{R}}_{n+1}(p+1)\cap F_{n-p,n-p}A_{n+1}^{\mathbb{C}}, &\qquad\text{if}\quad n\geq 2p+1, \\ A^{\mathbb{R}}_n(p)\cap F_{p,p}A_{n}^{\mathbb{C}}, &\qquad\text{if}\quad n\leq 2p, \end{cases} \intertext{with differential given, for $x\in\mathcal{D}_{n}(A,p)$, by} &\dd_{\mathcal{D}}x= \begin{cases} -F_{n-p+1,n-p+1}\dd_{A}x, &\qquad\text{if}\quad n>2p+1, \\ -2\partial\bar{\partial}x, &\qquad\text{if}\quad n=2p+1, \\ \dd_{A}x, &\qquad\text{if}\quad n\leq 2p. \end{cases} \end{align*} \end{definition}
For instance, let $A$ be a Dolbeault complex satisfying $A_{p,q}=0$ for $p<0$, $q<0$, $p>n$, or $q>n$. Then, for $p\ge n$, the complex $\mathcal{D}(A,p)$ agrees with the real complex $A_{\ast}^{\mathbb{R}}(p)$. For $0\le p<n$, we have represented $\mathcal{D}(A,p)$ in figure \ref{fig:1}, where the upper right square is shifted by one; this means in particular that $A_{n,n}$ sits in degree $2n-1$ and $A_{p+1,p+1}$ sits in degree $2p+1$. For $p<0$ the complex $\mathcal{D}(A,p)$ agrees with the real complex $A_{\ast}^{\mathbb{R}}(p+1)[1]$.
\begin{figure}
\caption{$\mathcal{D}(A,p)$}
\label{fig:1}
\end{figure}
\begin{remark} It is clear from the definition that, for all $p\in \mathbb{Z}$, the functor $\mathcal{D} (\cdot,p)$ is exact. \end{remark}
The main property of the Deligne complex is expressed by the following proposition; for a proof in the cohomological case see \cite{Burgos:CDB}.
\begin{proposition} \label{prop:32} The complexes $A_{\ast}(p)^{\mathcal{D}}$ and $\mathcal{D}_{\ast} (A,p)$ are homotopically equivalent. The homotopy equivalences $\psi:A_{n}(p)^{\mathcal{D}}\longrightarrow\mathcal{D}_{n}(A,p)$, and $\varphi:\mathcal{D}_{n}(A,p)\longrightarrow A_{n}(p)^{\mathcal {D}}$ are given by \begin{displaymath} \psi(a,f,\omega)= \begin{cases} \pi(\omega),\qquad&\text{if }n\ge 2p+1, \\ F_{p,p}a+2\pi_{p}(\partial\omega_{p+1,n-p-1}),\quad&\text{if }n\le 2p, \end{cases} \end{displaymath} where $\pi(\omega)=\pi_{p+1}(F_{n-p,n-p}\omega)$, i.e., $\pi$ is the projection of $A_{\mathbb{C}}$ over the co\-kernel of $u$, and \begin{displaymath} \varphi(x)= \begin{cases} (\partial x_{p+1,n-p}-\bar{\partial}x_{n-p,p+1},2\partial x_{p+1,n-p},x),\quad&\text{if }n\ge 2p+1, \\ (x,x,0),&\text{if }n\le 2p. \end{cases} \end{displaymath} Moreover, $\psi\circ\varphi=\Id$, and $\varphi\circ\psi-\Id=\dd h+ h\dd$, where $h:A_{n}(p)^{\mathcal{D}}\longrightarrow A_{n+1}(p)^ {\mathcal{D}}$ is given by \begin{displaymath} h(a,f,\omega)= \begin{cases} (\pi_{p}(\overline{F}_{p}\omega+\overline{F}_{n-p}\omega),-2F_{p} (\pi_{p+1}\omega),0),\quad&\text{if }n\ge 2p+1, \\ (2\pi_{p}(\overline{F}_{n-p}\omega),-F_{p,p}\omega-2F_{n-p} (\pi_{p+1}\omega),0),\quad&\text{if }n\le 2p. \end{cases} \end{displaymath} \end{proposition}
$\square$
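For instance, in the range $n\le 2p$ the identity $\psi\circ\varphi=\Id$ can be verified directly: for $x\in\mathcal{D}_{n}(A,p)=A^{\mathbb{R}}_{n}(p)\cap F_{p,p}A_{n}^{\mathbb{C}}$ we have \begin{displaymath} \psi(\varphi(x))=\psi(x,x,0)=F_{p,p}x=x, \end{displaymath} since $x$ already lies in $F_{p,p}A_{n}^{\mathbb{C}}$.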
\nnpar{Tempered Deligne homology.} Applying the above discussion to the complex of currents $D_{\ast}^{X^{\mathcal{T}},\mathbb{R}}(W)$ we define the homological Deligne complex $\mathcal{D}_{\ast}(D_{\ast}^{X^{\mathcal{T}}}(W),\ast).$
\begin{definition} \label{def:2} \emph{The tempered Deligne (Borel-Moore)
homology of $W$} is defined by
\begin{displaymath}
H^{\mathcal{D}^{\mathcal{T}}}_{\ast}(W^{\infty},\mathbb{R}(p))=
H_{\ast}(\mathcal{D}_{\ast}(D^{X^{\mathcal{T}}}_{\ast}(W),p)).
\end{displaymath} \end{definition}
\begin{remark}
\begin{enumerate}
\item Again, since $X$ is smooth and proper, the tempered Deligne
homology of
$X$ agrees with the Deligne homology of $X$. In particular,
the group
$H^{\mathcal{D}}_{n}(X,\mathbb{R}(p))$ agrees with the group denoted
${}'H^{-n}_{\mathcal{D}}(X,\mathbb{R}(-p))$ in
\cite{Jannsen:DcHD}. But, since the Hodge filtration of the complex
of currents with support on $Y$ does not induce the Deligne-Hodge
filtration in the
homology of $Y$, the tempered Deligne homology does not
agree in general with Deligne-Beilinson homology.
\item As in the case of formal cohomology, the notation
$H^{\mathcal{D}^{\mathcal{T}}}_{\ast}(W^{\infty},\mathbb{R}(p))$ reminds us
that these groups depend not only on $W$ but also on an
infinitesimal neighborhood of $W$ of infinite order.
\end{enumerate} \end{remark}
\nnpar{Equidimensional manifolds.} If $X$ is equidimensional of dimension $d$ the morphism \eqref{eq:11} induces morphisms \begin{equation}
\mathcal{D}^{n}(E^{\ast}(X),p)\longrightarrow
\mathcal{D}_{2d-n}(D_{\ast}(X),d-p), \quad p \in \mathbb{Z}, \end{equation} that, in turn, induce the Poincar\'e duality isomorphisms \begin{equation}\label{eq:8}
H^{n}_{\mathcal{D}}(X,\mathbb{R}(p))
\longrightarrow H_{2d-n}^{\mathcal{D}}(X,\mathbb{R}(d-p)), \quad n,p \in \mathbb{Z}. \end{equation}
By analogy, we can define tempered Deligne cohomology groups as follows \begin{align*}
H_{\mathcal{D}^{\mathcal{T}}}^{n}(U,\mathbb{R}(p))&=
H^{\mathcal{D}^{\mathcal{T}}}_{2d-n}(U,\mathbb{R}(d-p)),\\
H_{\mathcal{D}^{\mathcal{T}},W}^{n}(V,\mathbb{R}(p))&=
H^{\mathcal{D}^{\mathcal{T}}}_{2d-n}(W^{\infty},\mathbb{R}(d-p)). \end{align*}
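As a consistency check, suppose $X$ is smooth, proper and equidimensional of dimension $d$. Then, by the remark above, tempered Deligne homology agrees with Deligne homology, and combining the definition with the Poincar\'e duality isomorphism \eqref{eq:8} gives
\begin{displaymath}
H_{\mathcal{D}^{\mathcal{T}}}^{n}(X,\mathbb{R}(p))
= H^{\mathcal{D}^{\mathcal{T}}}_{2d-n}(X,\mathbb{R}(d-p))
= H^{\mathcal{D}}_{2d-n}(X,\mathbb{R}(d-p))
\cong H^{n}_{\mathcal{D}}(X,\mathbb{R}(p)),
\end{displaymath}
so tempered Deligne cohomology reduces to ordinary Deligne cohomology in the smooth proper case.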
In general, if $X$ is a disjoint union of equidimensional algebraic manifolds, then we define the tempered Deligne cohomology of $X$ as the direct sum of the tempered Deligne cohomology of its components.
\nnpar{The module structure of tempered Deligne homology.} The notion of Dolbeault module over a Dolbeault algebra introduced in \cite{BurgosKramerKuehn:cacg} can be easily modified to define homological Dolbeault modules over a Dolbeault algebra. The actions \eqref{eq:5}, \eqref{eq:6} and \eqref{eq:7} provide the basic examples. Modifying the construction of \cite{BurgosKramerKuehn:cacg} 5.17 and 5.18 we obtain
\begin{proposition}
There is a pseudo-associative action \begin{displaymath}
\mathcal{D}^{n}(E_{X^{\mathcal{W}}}(W),p)\otimes
\mathcal{D}_{m}(D_{\ast}^{X^{\mathcal{T}}}(W),q)\longrightarrow
\mathcal{D}_{m-n}(D_{\ast}^{X^{\mathcal{T}}}(W),q-p) \end{displaymath} that induces an associative action \begin{displaymath}
H_{\mathcal{D}^{f},c}^{n}(W^{\infty},\mathbb{R}(p))\otimes
H^{\mathcal{D}^{\mathcal{T}}}_{m}(W^{\infty},\mathbb{R}(q))\longrightarrow
H^{\mathcal{D}^{\mathcal{T}}}_{m-n}(W^{\infty},\mathbb{R}(q-p)). \end{displaymath}
$\square$ \end{proposition}
\nnpar{The exceptional duality.} In general, Poincar\'e duality for Deligne cohomology is not given by a bilinear pairing, but by the isomorphism \eqref{eq:8} between Deligne cohomology and Deligne homology (see for instance \cite{Jannsen:DcHD}). Nevertheless, in the case of real Deligne cohomology, there is an exceptional duality that comes from the symmetry of the Deligne complex associated with a Dolbeault complex. This duality can be generalized to a pairing between formal Deligne cohomology and tempered Deligne homology.
\begin{proposition}\label{prop:5}
For every pair of integers $n,p$, there is a pairing
\begin{displaymath}
\mathcal{D}^{n}(E_{X^{\mathcal{W}}}(W),p)\otimes
\mathcal{D}_{n-1}(D^{X^{\mathcal{T}}}(W),p-1)\longrightarrow \mathbb{R}
\end{displaymath}
given by $\omega \otimes T\longmapsto T(\omega )$.
This pairing identifies $\mathcal{D}_{n-1}(D^{X^{\mathcal{T}}}(W),p-1)$
with the topological dual of
$\mathcal{D}^{n}(E_{X^{\mathcal{W}}}(W),p)$. Moreover, it is
compatible, up to sign, with the differential in the Deligne complex:
\begin{displaymath}
T (\dd_{\mathcal{D}}\omega ) =
\begin{cases}
(-1)^{n+1}
(\dd_{\mathcal{D}}T)(\omega),& \text{ if } n\le 2p-1,\\
(-1)^{n}
(\dd_{\mathcal{D}}T)(\omega),& \text{ if } n\ge 2p.\\
\end{cases}
\end{displaymath}
It is also compatible, up to sign, with the action of
$\mathcal{D}^{\ast}(E_{X^{\mathcal{W}}}(W),\ast)$. That is, if the
forms
$\omega \in \mathcal{D}^{n}(E_{X^{\mathcal{W}}}(W),p)$ and
$\eta \in \mathcal{D}^{l}(E_{X^{\mathcal{W}}}(W),r)$, and the current
$T\in \mathcal{D}_{m}(D^{X^{\mathcal{T}}}(W),q)$, with
$n-m+l=1$ and $p-q+r=1$, then
\begin{displaymath}
(\omega \bullet T)(\eta)=
\begin{cases}
(-1)^{n} T(\eta \bullet \omega), & \text{ if } m>2q,\ l\ge 2r,\\
T(\eta \bullet \omega), & \text{ if } m\le 2q,\ l< 2r,\\
(-1)^{m-1} T(\eta \bullet \omega), & \text{ if } m>2q,\ l< 2r,\\
(-1)^{l} T(\eta \bullet \omega), & \text{ if } m\le 2q,\ l\ge 2r.\\
\end{cases}
\end{displaymath} \end{proposition} \begin{proof}
Assume that $n<2p$. Put $q=p-1$ and
$m=n-1$. Then
\begin{align*}
&\mathcal{D}^{n}(E_{X^{\mathcal{W}}}(W),p)\\
&\phantom{A}=
E^{n-1}_{X^{\mathcal{W}},\mathbb{R}}(W)(p-1)\left /
(F^{p} E^{n-1}_{X^{\mathcal{W}}}(W) + \bar F ^{p}
E^{n-1}_{X^{\mathcal{W}}}(W))\cap
E^{n-1}_{X^{\mathcal{W}}}(W)_{\mathbb{R}}(p-1) \right.\\
&\phantom{A}= E^{n-1}_{X^{\mathcal{W}},\mathbb{R}}(W)(p-1)
\cap \bar F ^{n-p}
E^{n-1}_{X^{\mathcal{W}}}(W)\cap
F^{n-p} E^{n-1}_{X^{\mathcal{W}}}(W),\\
&\mathcal{D}_{m}(D^{X^{\mathcal{T}}}(W),q)\\
&\phantom{A}=D_{m}^{X^{\mathcal{T}},\mathbb{R}}(W)(q)\cap
F_{q} D_{m}^{X^{\mathcal{T}}}(W) \cap \bar F _{q}
D_{m}^{X^{\mathcal{T}}}(W)\\
&\phantom{A}=D_{n-1}^{X^{\mathcal{T}},\mathbb{R}}(W)(p-1)\cap
F_{p-1} D_{n-1}^{X^{\mathcal{T}}}(W) \cap \bar F _{p-1}
D_{n-1}^{X^{\mathcal{T}}}(W).
\end{align*}
Therefore, the first statement follows from the duality between
$E_{X^{\mathcal{W}}}(W)$ and $D^{X^{\mathcal{T}}}(W)$ and the fact
that, under this duality,
$D_{n-1}^{X^{\mathcal{T}},\mathbb{R}}(W)(p-1)$ is identified
with the dual of $E^{n-1}_{X^{\mathcal{W}},\mathbb{R}}(W)(p-1)$ and
$F_{p-1} D_{n-1}^{X^{\mathcal{T}}}(W) $ is identified with the
dual of $\bar F ^{n-p}
E^{n-1}_{X^{\mathcal{W}}}(W)$.
The compatibility with the differential is a
straightforward computation using the formulas for the differential
given in \cite{Burgos:CDB} theorem 2.6. For instance,
if $\omega \in
\mathcal{D}^{n}(E_{X^{\mathcal{W}}}(W),p)$, with $n<2p-1$ and $T\in
\mathcal{D}_{m}(D^{X^{\mathcal{T}}}(W),q)$, with $m=n$ and
$q=p-1$, then we have
\begin{align*}
(\dd_{\mathcal{D}} T)(\omega )
&=(\dd T)(\omega)\\
&=(-1)^{n}T(\dd \omega)\\
&=(-1)^{n}T(F^{n-p+1,n-p+1}\dd \omega)\\
&=(-1)^{n}T(-\dd_{\mathcal{D}} \omega ).
\end{align*}
In the third equality we have used that $T\in F_{q}\cap \bar
F_{q}=F_{p-1,p-1}$, which implies that, for any form $\eta$, we have
$T(\eta)=T(F^{n-p+1,n-p+1}\eta)$. The other cases are analogous.
Similarly, the compatibility with the product follows from
\cite{Burgos:CDB} theorem 2.6. For instance,
let
$\omega \in \mathcal{D}^{n}(E_{X^{\mathcal{W}}}(W),p)$,
$T\in \mathcal{D}_{m}(D^{X^{\mathcal{T}}}(W),q)$ and
$\eta \in \mathcal{D}^{l}(E_{X^{\mathcal{W}}}(W),r)$, with
$n-m+l=1$ and $p-q+r=1$. Assume that $n<2p$, $m>2q$, $l\ge 2r$,
then
\begin{displaymath}
(\omega \bullet T)(\eta)=
((-1)^{n}r_{p}(\omega )\land T+\omega \land r_{q}(T))(\eta),
\end{displaymath}
where $r_p(\omega )=2\pi _{p}(F^{p}\dd \omega )$ and $r_q(T)=2\pi
_{q}(F_{q}\dd T )$.
But
\begin{displaymath}
((-1)^{n}r_{p}(\omega )\land T)(\eta)=
(-1)^{n}T(\eta \land r_{p}(\omega )),
\end{displaymath}
and
\begin{align*}
(\omega \land r_{q}(T))(\eta)&= r_{q}(T)(\eta\land \omega )\\
&=2\pi _{q}F_{q}(\dd T)(\eta\land \omega )\\
&= 2 F_{q}(\dd T)(\eta\land \omega )\\
&= 2 \partial T_{q+1,m-q}(\eta\land \omega )\\
&= T \left( 2 (-1)^{m-1}\partial (\eta\land \omega)^{q,m-q}\right)\\
&= T \left( 2 (-1)^{n+l}\partial (\eta\land \omega)^{p+r-1,n+l-p-r}\right).
\end{align*}
On the other hand
\begin{displaymath}
T(\eta \bullet \omega )=T\left(\eta \land r_{p}(\omega )+
(-1)^{l}2 \partial (\omega \land \eta)^{p+r-1,n+l-p-r}\right).
\end{displaymath}
The other cases are analogous. \end{proof}
\nnpar{Duality.} We summarize in the next proposition the basic properties of formal Deligne cohomology and tempered Deligne homology that follow from the previous discussions.
\begin{proposition}\label{prop:2} For every pair of integers $n$ and
$p$, by applying the exact functors
$\mathcal{D}^{\ast}(\underline{\phantom{A}},p)$ and
$\mathcal{D}_{\ast}(\underline{\phantom{A}},p-1)$ to the
diagrams \eqref{eq:1} and
\eqref{eq:2} respectively, we obtain the corresponding diagrams of
Deligne complexes that are topological duals of each other. In
particular we obtain long exact sequences
\begin{multline}\label{eq:13}
H^{n}_{\mathcal{D}^{f},c}(W^{\infty},\mathbb{R}(p))
\rightarrow H^{n}_{\mathcal{D}^{f}}(Y^{\infty},\mathbb{R}(p))
\rightarrow H^{n}_{\mathcal{D}^{f}}(Z^{\infty},\mathbb{R}(p))
\rightarrow \\ H^{n+1}_{\mathcal{D}^{f},c}(W^{\infty},\mathbb{R}(p))
\rightarrow
\end{multline}
and
\begin{multline}\label{eq:12}
\leftarrow H_{n-1}^{\mathcal{D}^{\mathcal{T}}}(W^{\infty},\mathbb{R}(p-1))
\leftarrow
H_{n-1}^{\mathcal{D}^{\mathcal{T}}}(Y^{\infty},\mathbb{R}(p-1))
\leftarrow \\
H_{n-1}^{\mathcal{D}^{\mathcal{T}}}(Z^{\infty},\mathbb{R}(p-1))
\leftarrow H_{n}^{\mathcal{D}^{\mathcal{T}}}(W^{\infty},\mathbb{R}(p-1))
\end{multline}
and pairings
\begin{align*}
H^{n}_{\mathcal{D}^{f}}(Y^{\infty},\mathbb{R}(p))\otimes
H_{n-1}^{\mathcal{D}^{\mathcal{T}}}(Y^{\infty},\mathbb{R}(p-1))
&\longrightarrow \mathbb{R},\\
H^{n}_{\mathcal{D}^{f},c}(W^{\infty},\mathbb{R}(p))\otimes
H_{n-1}^{\mathcal{D}^{\mathcal{T}}}(W^{\infty},\mathbb{R}(p-1))
&\longrightarrow \mathbb{R},\\
H^{n}_{\mathcal{D}^{f}}(Z^{\infty},\mathbb{R}(p))\otimes
H_{n-1}^{\mathcal{D}^{\mathcal{T}}}(Z^{\infty},\mathbb{R}(p-1))
&\longrightarrow \mathbb{R},
\end{align*}
that are compatible with the above sequences.
Moreover, the topologies of the space of differential forms and of the
space of currents induce structures of topological vector spaces on
the real formal Deligne cohomology groups and the tempered Deligne
homology groups. The
above pairings induce a perfect pairing of the corresponding
separated vector spaces. \end{proposition} \begin{proof}
This is a direct consequence of the exactness of the functors
$\mathcal{D}^{\ast}(\underline{\phantom{A}},p)$ and
$\mathcal{D}_{\ast}(\underline{\phantom{A}},p-1)$ and proposition
\ref{prop:5}. \end{proof}
The image of $\dd_{\mathcal{D}}$ in the complex
$\mathcal{D}^{\ast}(E_{\flat}(U),p)$ does not need to be
closed. Therefore the pairing between formal cohomology and
tempered homology need not be perfect. Only the induced pairing on the
corresponding separated vector spaces is perfect. Nevertheless, in
the case of a
proper algebraic complex manifold $X$, by Hodge theory, we obtain
a perfect pairing between Deligne-Beilinson cohomology and homology.
\begin{corollary}[Exceptional duality for Deligne cohomology] \label{cor:3}
Let $X$ be a proper complex algebraic manifold, equidimensional of
dimension $d$.
Then there is a
perfect duality
\begin{displaymath}
H_{\mathcal{D}}^{n}(X,\mathbb{R}(p))\otimes
H_{\mathcal{D}}^{2d-n+1}(X,\mathbb{R}(d-p+1))\longrightarrow \mathbb{R}
\end{displaymath}
which is compatible, up to a sign, with the product in Deligne
cohomology. \end{corollary} \begin{proof}
By Poincar\'e duality in Deligne cohomology (cf. \cite{Jannsen:DcHD}
1.5) there is a natural isomorphism
\begin{displaymath}
H_{\mathcal{D}}^{2d-n+1}(X,\mathbb{R}(d-p+1))\cong
H^{\mathcal{D}}_{n-1}(X,\mathbb{R}(p-1)).
\end{displaymath}
By Hodge theory we know that
\begin{displaymath}
H_{\mathcal{D}}^{n}(X,\mathbb{R}(p))=
\begin{cases}
H^{n-1}(X,\mathbb{R}(p-1))\cap \overline F^{n-p} \cap F^{n-p},&
\text{ if } n<2p,\\
H^{n}(X,\mathbb{R}(p))\cap \overline F^{p} \cap F^{p},&
\text{ if } n\ge 2p.
\end{cases}
\end{displaymath}
Moreover, the pairing is given, up to a sign, by the wedge product of
differential forms followed by the integral along $X$.
Therefore, by Serre's duality, the pairing of
proposition \ref{prop:2} is perfect. \end{proof}
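Note that the degrees and twists in the exceptional duality are forced by the Hodge-theoretic description above: writing $n'=2d-n+1$ and $p'=d-p+1$, one has
\begin{displaymath}
n<2p \iff n'\ge 2p', \qquad (n-1)+n' = 2d,
\end{displaymath}
so a class on one side falls in the first case of the description while its pairing partner falls in the second, and the wedge of the representing forms has top degree $2d$, which can be integrated over $X$.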
\subsection{Semi-purity of tempered Deligne cohomology } \label{sec:purity-form-deligne} \
\nnpar{Vanishing theorems.} The aim of this section is to prove the following result.
\begin{theorem}[Semi-purity of tempered Deligne homology]
Let $X$ be a projective complex algebraic manifold,
$W$ a locally closed subvariety, of dimension at most $p$. Then
\begin{displaymath}
H_{n}^{\mathcal{D}^{\mathcal{T}}}(W^{\infty},\mathbb{R}(e))=0, \text{ for
all } n > \max(e+p,2p-1).
\end{displaymath} \end{theorem} \begin{proof}
We will prove the result by ascending induction over $p$. The
result is trivially true for $p<0$. Then, by the exact sequence
\eqref{eq:12} and
induction, one is
reduced to the case where $W$ is closed.
We will deduce the theorem by duality from the following proposition.
\begin{proposition}\label{prop:1}
Let $Y$ be a closed subvariety of a projective complex algebraic
manifold. Let $p$ be the dimension of $Y$. Then
\begin{displaymath}
H^{n+1}_{\mathcal{D}^{f}}(Y^{\infty},\mathbb{R}(e+1))=0, \text{ for
all } n > \max(e+p,2p-1).
\end{displaymath}
\end{proposition}
\begin{proof}
Let $\mathscr{I}_{Y}$ be the ideal of holomorphic functions on $X$
vanishing at $Y$. We denote
\begin{displaymath}
\Omega ^{q}_{Y^{\infty}}= \lim_{\substack{\longleftarrow\\k}}
\Omega ^{q}_{X}\left/ \mathscr{I}_{Y}^{k}\Omega ^{q}_{X}. \right.
\end{displaymath}
By \cite{KashiwaraSchapira:mfcacs} theorem 5.12 we have
\begin{lemma} \label{lemm:1}
The complex of sheaves $\mathscr{E}^{q,\ast}_{Y,\mathbb{R}}$ is a
fine resolution of $\Omega ^{q}_{Y^{\infty}}$.
\end{lemma}
Since, by \cite{Poly:shcsesa}, the sheaf
$\mathscr{E}^{\ast}_{Y^{\infty},\mathbb{R}}$ is an acyclic
resolution of the constant sheaf $\underline{\mathbb{R}}_{Y}$,
from lemma \ref{lemm:1} and the techniques of \cite{Burgos:CDB},
we deduce that
$H^{\ast}_{\mathcal{D}^{f}}(Y^{\infty},\mathbb{R}(e+1))$ is
isomorphic to the hypercohomology of the complex of sheaves
\begin{equation}\label{eq:3}
\underline{\mathbb{R}}_{\mathcal{D}^{f},Y^{\infty}}(e):=
\underline{\mathbb{R}}_{Y}(e+1)\longrightarrow \Omega
^{0}_{Y^{\infty}}\longrightarrow \dots \longrightarrow
\Omega ^{e}_{Y^{\infty}}.
\end{equation}
\begin{lemma} \label{lemm:2} If $n> p$ then $H^{n}(Y,\Omega _{Y^{\infty}}^{q})=0$.
\end{lemma}
\begin{proof}
By \cite{hartshorne75:Rhamcag} proposition I.6.1
\begin{displaymath}
H^{n}(Y,\Omega _{Y^{\infty}}^{q})=
H^{n}(Y^{\text{{\rm alg}}},\hat \Omega _{Y}^{q}),
\end{displaymath}
where $Y^{\text{{\rm alg}}}$ is the corresponding algebraic variety and
$\hat \Omega _{Y}^{q}$ is the completion of the sheaf of
algebraic differentials. But now $Y^{\text{{\rm alg}}}$ is a noetherian
topological space of dimension $p$, hence the lemma.
\end{proof}
Using lemma \ref{lemm:2} we obtain that the $E_{1}^{s,t}$ term of the
spectral sequence of the hypercohomology of the complex
\eqref{eq:3} can be nonzero only for $s=0$, $0\le t \le 2p$, or for
$1\le s \le e+1$, $0\le t \le p$, which implies proposition
\ref{prop:1}.
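In detail, every nonzero $E_{1}^{s,t}$ term lies in total degree
\begin{displaymath}
s+t \le \max\bigl(2p,\,(e+1)+p\bigr),
\end{displaymath}
so the hypercohomology of \eqref{eq:3} vanishes in degrees $n+1>\max(2p,e+p+1)$, that is, for $n>\max(e+p,2p-1)$.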
\end{proof}
We finish now the proof of the theorem. By proposition \ref{prop:1},
for every $n > \max(p+e,2p-1)$, the morphism
\begin{displaymath}
\dd_{\mathcal{D}}^{n}: \mathcal{D}^{n}(E_{X^{\mathcal{W}}}(Y),e+1)
\longrightarrow \mathcal{D}^{n+1}(E_{X^{\mathcal{W}}}(Y),e+1)
\end{displaymath}
satisfies $\Img
(\dd_{\mathcal{D}}^{n})=\Ker(\dd_{\mathcal{D}}^{n+1})$, hence the
image of $\dd_{\mathcal{D}}^{n}$ is
a closed subspace. Therefore, by
\cite{bourbaki87:_topol_vector_spaces_chapt} IV.2 theorem 1, we have
that the dual morphism
\begin{displaymath}
\dd_{\mathcal{D}}:\mathcal{D}_{n}(D^{X^{\mathcal{T}}}(Y),e) \longrightarrow
\mathcal{D}_{n-1}(D^{X^{\mathcal{T}}}(Y),e)
\end{displaymath}
has closed image. This implies that, for $n\ge \max(p+e,2p-1)$, the
vector space
$H_{n}^{\mathcal{D}^{\mathcal{T}}}(Y^{\infty},\mathbb{R}(e))$ is separated.
Therefore, by proposition \ref{prop:2},
for $n>\max(p+e,2p-1)$ the pairing
\begin{displaymath}
H^{n+1}_{\mathcal{D}^{f}}(Y^{\infty},\mathbb{R}(e+1))\otimes
H_{n}^{\mathcal{D}^{\mathcal{T}}}(Y^{\infty},\mathbb{R}(e))
\longrightarrow \mathbb{R}
\end{displaymath}
is perfect. Hence by proposition \ref{prop:1} we obtain the theorem. \end{proof}
\nnpar{Semi-purity of tempered Deligne cohomology.} The semi-purity theorem can be stated in terms of tempered Deligne cohomology as follows. \begin{corollary} \label{cor:1}
Let $X$ be a complex quasi-projective manifold and $Y$ a closed
subvariety of codimension at least $p$. Then
\begin{displaymath}
H^{n}_{\mathcal{D}^{\mathcal{T}}, Y}(X,\mathbb{R}(e))=0, \text{ for
all } n < \min(e+p,2p+1).
\end{displaymath}
In particular
\begin{displaymath}
H^{n}_{\mathcal{D}^{\mathcal{T}}, Y}(X,\mathbb{R}(p))=0, \text{ for
all } n < 2p.
\end{displaymath} \end{corollary} This is the weak purity property used in \cite{BurgosKramerKuehn:cacg} 6.4.
\section{Arithmetic Intersection Theory} \label{sec:arithm-inters-theory}
\subsection{Definition of covariant arithmetic Chow groups} \label{sec:covar-arithm-chow} In \cite{Burgos:acr}, the author introduced a variant of the arithmetic Chow groups that is covariant with respect to arbitrary proper morphisms. In the paper \cite{BurgosKramerKuehn:cacg} these groups are further studied as an example of cohomological arithmetic Chow groups. These groups are denoted by $\cha^{\ast} (X,\mathcal{D}_{\text{{\rm cur}}})$. The semi-purity property (corollary \ref{cor:1}) was announced in \cite{BurgosKramerKuehn:cacg} and has consequences for the behavior of the covariant arithmetic Chow groups. On the other hand, Kawaguchi and Moriwaki \cite{KawaguchiMoriwaki:isfav} have given another definition of covariant arithmetic Chow groups, called $D$-arithmetic Chow groups. A consequence of Corollary \ref{cor:1} is that, when $X$ is equidimensional and generically projective, both definitions of covariant arithmetic Chow groups agree. We note that Zha \cite{zha99:_rieman_roch} has also introduced a notion of covariant arithmetic Chow groups, which differs from the definition of \cite{KawaguchiMoriwaki:isfav} only in that he neglects the anti-linear involution $F^{\infty}$.
In this section we will summarize the properties of the covariant arithmetic Chow groups. We will follow the notations and terminology of \cite{BurgosKramerKuehn:cacg}, but we will use the grading by dimension that is more natural when dealing with covariant Chow groups.
\nnpar{Arithmetic rings and arithmetic varieties.} Let $A$ be an arithmetic ring (see \cite{GilletSoule:ait}) with fraction field $F$. In particular, $A$ is provided with a nonempty set of complex embeddings $\Sigma $ and a conjugate-linear involution $F_{\infty}$ of $\mathbb{C}^{\Sigma }$ that commutes with the diagonal embedding of $A$ in $\mathbb{C}^{\Sigma }$.
Since we will be working with the dimension of cycles, following \cite{GilletSoule:aRRt} we will further impose that $A$ is equicodimensional and Jacobson. Let $S=\Spec A$ and let $e=\dim S$.
An arithmetic variety $X$ is a flat quasi-projective scheme over $A$, that has smooth generic fiber $X_{F}$. To every arithmetic variety $X$ we can associate a complex algebraic manifold $X_{\Sigma }$ and a real algebraic manifold $X_{\mathbb{R}}=(X_{\Sigma },F_{\infty})$.
\nnpar{The arithmetic complex of tempered Deligne homology.} To every pair of integers $n,p$, and every open Zariski subset $U$ of $X_{\mathbb{R}}$ we assign the group \begin{displaymath}
\mathcal{D}_{n}^{\text{{\rm cur}},X}(U,p)=\mathcal{D}_{n}\left(
D_{\ast}^{X_{\Sigma }^{\mathcal{T}}}(U),p\right)^{\sigma}, \end{displaymath} where $\sigma $ is the involution that acts as complex conjugation on the space and on the currents. That is, if $T\in D_{n}(X_{\mathbb{C}})$ then $\sigma (T)=\overline {(F_{\infty})_{\ast}T}$, and $(\phantom{A})^{\sigma}$ denotes the elements that are fixed by $\sigma $. Then $\mathcal{D}_{n}^{\text{{\rm cur}},X}(\underline{\phantom{A}},p)$ is a totally acyclic sheaf (in the sense of \cite{BurgosKramerKuehn:cacg}) for the real scheme underlying $X_{\mathbb{R}}$. When $X$ is fixed, $\mathcal{D}_{\ast}^{\text{{\rm cur}},X}$ will be denoted by $\mathcal{D}_{\ast}^{\text{{\rm cur}}}$.
If $U$ is a Zariski open subset of $X_{\mathbb{R}}$ and $Y=X_{\mathbb{R}}\setminus U$, we write \begin{align}
H^{\mathcal{D}^{\mathcal{T}}}_{\ast}(U,\mathbb{R}(p))&=
H_{\ast}(\mathcal{D}^{\text{{\rm cur}}}(U,p)),\\
H^{\mathcal{D}^{\mathcal{T}},Y}
_{\ast}(X_{\mathbb{R}},\mathbb{R}(p))&=
H_{\ast}(s(\mathcal{D}^{\text{{\rm cur}}}(U,p),
\mathcal{D}^{\text{{\rm cur}}}(X_{\mathbb{R}},p))),\\
\widetilde {\mathcal{D}}_{2p-1}^{\text{{\rm cur}}}(X_{\mathbb{R}},p)&=
\mathcal{D}_{2p-1}^{\text{{\rm cur}}}(X_{\mathbb{R}},p)\left/
\Img \dd_{\mathcal{D}},\right.\\
{\rm Z} \mathcal{D}_{2p}^{\text{{\rm cur}}}(X_{\mathbb{R}},p)&=
\Ker(\dd_{\mathcal{D}}:\mathcal{D}_{2p}^{\text{{\rm cur}}}(X_{\mathbb{R}},p)\longrightarrow
\mathcal{D}_{2p+1}^{\text{{\rm cur}}}(X_{\mathbb{R}},p)). \end{align}
Let $\mathcal{Z}_{p}=\mathcal{Z}_{p}(X_{\mathbb{R}})$ be the set of dimension $p$ Zariski closed subsets of $X_{\mathbb{R}}$ ordered by inclusion. Then we will write \begin{align*}
\mathcal{D}_{\ast}^{\text{{\rm cur}}}(X_{\mathbb{R}}\setminus \mathcal{Z}_{p},p)
&=\lim_{\substack{\longrightarrow
\\ Y\in\mathcal{Z}_{p}}}
\mathcal{D}_{\ast}^{\text{{\rm cur}}}(X_{\mathbb{R}}\setminus Y,p),\\
\widetilde{\mathcal{D}}_{\ast}^{\text{{\rm cur}}}(X_{\mathbb{R}}\setminus
\mathcal{Z}_{p},p)
&= \mathcal{D}_{\ast}^{\text{{\rm cur}}}(X_{\mathbb{R}}\setminus
\mathcal{Z}_{p},p)\left /
\Img \dd_{\mathcal{D}}\right.,\\
H^{\mathcal{D}^{\mathcal{T}},\mathcal{Z}_{p}}_{\ast}
(X_{\mathbb{R}},\mathbb{R}(p))&=
H_{\ast}(s(\mathcal{D}^{\text{{\rm cur}}}(X_{\mathbb{R}}\setminus
\mathcal{Z}_{p},p),
\mathcal{D}^{\text{{\rm cur}}}(X_{\mathbb{R}},p))). \end{align*}
\nnpar{Green objects.} We recall the definition of Green object for a cycle given in \cite{BurgosKramerKuehn:cacg} but adapted to the grading by dimension. Let $y$ be a dimension $p$ algebraic cycle of $X_{\mathbb{R}}$. Let $Y$ be the support of $y$. The class of $y$ in $H^{\mathcal{D}^{\mathcal{T}},Y}_{2p}(X_{\mathbb{R}},\mathbb{R}(p))$, denoted $\cl(y)$, is represented by the pair $(\delta _{y},0)\in s(\mathcal{D}^{\text{{\rm cur}}}(X_{\mathbb{R}},p),
\mathcal{D}^{\text{{\rm cur}}}(U_{\mathbb{R}},p))$. We denote also by $\cl(y)$ the image of this class in $H^{\mathcal{D}^{\mathcal{T}},\mathcal{Z}_{p}}_{2p}
(X_{\mathbb{R}},\mathbb{R}(p))$.
In this setting, the truncated homology classes can be written as \begin{multline*}
\widehat {H}^{\mathcal{D}^{\mathcal{T}},\mathcal{Z}_{p}}_{\ast}
(X_{\mathbb{R}},\mathbb{R}(p))=\\ \{(\omega _{y},\widetilde g_{y})\in {\rm Z} \mathcal{D}_{2p}^{\text{{\rm cur}}}(X,p)\oplus \widetilde{\mathcal{D}}_{2p-1}^{\text{{\rm cur}}}(X_{\mathbb{R}}\setminus
\mathcal{Z}_{p},p)\mid \dd_{\mathcal{D}} \widetilde g_{y}=\omega
_{y}\}. \end{multline*}
There is an obvious class map \begin{displaymath}
\cl: \widehat {H}^{\mathcal{D}^{\mathcal{T}},\mathcal{Z}_{p}}_{\ast}
(X_{\mathbb{R}},\mathbb{R}(p))\longrightarrow
H^{\mathcal{D}^{\mathcal{T}},\mathcal{Z}_{p}}_{\ast}
(X_{\mathbb{R}},\mathbb{R}(p)). \end{displaymath} Then a Green object for $y$ is an element $$\mathfrak{g}_{y}=(\omega _{y},\widetilde g_{y})\in \widehat {H}^{\mathcal{D}^{\mathcal{T}},\mathcal{Z}_{p}}_{2p}
(X_{\mathbb{R}},\mathbb{R}(p)) $$ such that $\cl(\mathfrak{g}_{y})=\cl(y)$.
The following result follows directly from the definition \begin{lemma} \label{lemm:3}
An element $\mathfrak{g}_{y}=(\omega _{y},\widetilde g_{y})\in
\widehat {H}^{\mathcal{D}^{\mathcal{T}},\mathcal{Z}_{p}}_{2p}
(X_{\mathbb{R}},\mathbb{R}(p))$ is a Green object for $y$ if
and only if there exists a current $\widetilde \gamma \in \widetilde
{\mathcal{D}}_{2p-1}^{\text{{\rm cur}}}(X_{\mathbb{R}},p)$ such that
\begin{align*}
\widetilde g_{y}&=\widetilde \gamma|_{X_{\mathbb{R}}\setminus
\mathcal{Z}_{p}},\\
\dd_{\mathcal{D}} \widetilde \gamma +\delta _{y}&=\omega _{y}.
\end{align*} \end{lemma}
\nnpar{Arithmetic Chow groups.} Every dimension $p$ algebraic cycle $y$ on $X$ defines a dimension $(p-e)$ algebraic cycle $y_{\mathbb{R}}$ on $X_{\mathbb{R}}$, where $e$ is the dimension of the base scheme $S$.
\begin{definition}
The group of arithmetic cycles of dimension $p$ is defined as
\begin{displaymath}
\za_{p}(X,\mathcal{D}^{\text{{\rm cur}}})=
\{(y,\mathfrak{g}_{y})\in {\rm Z}_{p}(X)\oplus
\widehat {H}^{\mathcal{D}^{\mathcal{T}},\mathcal{Z}_{p-e}}_{2p-2e}
(X_{\mathbb{R}},\mathbb{R}(p-e))\mid
\cl(y_{\mathbb{R}})=\cl(\mathfrak{g}_{y})\}.
\end{displaymath}
Let $W$ be a dimension $p+1$ irreducible subvariety of $X$ and $f\in
K(W)^{\ast}$ be a rational function. Let $\widetilde W_{\mathbb{R}}$
be a resolution of singularities of $W_{\mathbb{R}}$ and let
$\iota:\widetilde W_{\mathbb{R}}\longrightarrow X_{\mathbb{R}}$ be
the induced map. Then we write
\begin{displaymath}
\diva f = (\dv f, (0,\iota_{\ast}(-\frac{1}{2} \log f\bar f ))).
\end{displaymath}
The group of cycles rationally equivalent to zero is the subgroup
$$\rata_{p}(X,\mathcal{D}^{\text{{\rm cur}}})\subset \za_{p}(X,\mathcal{D}^{\text{{\rm cur}}})$$
generated by the elements
of the form $\diva f$.
The \emph{homological arithmetic Chow groups} of $X$ are defined as
\begin{displaymath}
\cha_{p}(X,\mathcal{D}^{\text{{\rm cur}}})=\za_{p}(X,\mathcal{D}^{\text{{\rm cur}}})\left /
\rata_{p}(X,\mathcal{D}^{\text{{\rm cur}}})\right.
\end{displaymath} \end{definition}
There are well-defined maps \begin{alignat*}{2} \zeta&:\cha_{p}(X,\mathcal{D}^{\text{{\rm cur}}})\longrightarrow\CH_{p}(X),&&\quad \zeta[y,\mathfrak{g}_{y}]=[y], \\ \rho&:\CH_{p,p+1}(X)\longrightarrow H_{2p-2e+1}^{\mathcal{D}^{\mathcal{T}}}(X,p-e) \subseteq\widetilde{\mathcal{D}}_{2p-2e+1}^{\text{{\rm cur}}}(X,p-e),&&\quad\rho[f]= \cl(f), \\ \amap&:\widetilde{\mathcal{D}}_{2p-2e+1}^{\text{{\rm cur}}}(X,p-e)\longrightarrow\cha_{p} (X,\mathcal{D}^{\text{{\rm cur}}}),&&\quad\amap(\widetilde{a})=[0,\amap(\widetilde{a})], \\ \omega&:\cha_{p}(X,\mathcal{D}^{\text{{\rm cur}}})\longrightarrow{\rm
Z}\mathcal{D}_{2p-2e}^{\text{{\rm cur}}}(X,p-e), &&\quad\omega[y,\mathfrak{g}_{y}]=\omega(\mathfrak{g}_{y}), \\ h&:{\rm Z}\mathcal{D}_{2p-2e}^{\text{{\rm cur}}}(X,p-e)\longrightarrow H_{2p-2e}^{\mathcal{D}^{\mathcal{T}}}(X,p-e), &&\quad h(\alpha)=[\alpha]. \end{alignat*}
\subsection{Properties of covariant arithmetic Chow groups} \label{sec:prop-covar-arithm} \
\nnpar{Basic properties.} Recall that in \cite{BurgosKramerKuehn:cacg}, there are defined contravariant arithmetic Chow groups denoted by $\cha^{\ast}(X,\mathcal{D}_{\log})$. The following result follows from the theory developed in \cite{BurgosKramerKuehn:cacg} and corollary \ref{cor:1} (the semi-purity property).
\begin{theorem} \label{thm:logD} With the above notations, we have the following statements: \begin{enumerate} \item[(i)] There are exact sequences \begin{displaymath} \CH_{p,p+1}(X)\overset{\rho}{\longrightarrow}\widetilde {\mathcal{D}}_{2p-2e+1}^{\text{{\rm cur}}}(X,p-e)\overset{\amap}{\longrightarrow} \cha_{p}(X,\mathcal{D}^{\text{{\rm cur}}})\overset{\zeta}{\longrightarrow} \CH_{p}(X)\longrightarrow 0. \end{displaymath} \begin{align*} &\CH_{p,p+1}(X)\overset{\rho}{\longrightarrow}H_{2p-2e+1}^{\mathcal {D}^{\mathcal{T}}}(X_{\mathbb{R}},\mathbb{R}(p-e))\overset{\amap}{\longrightarrow} \cha_{p}(X,\mathcal{D}^{\text{{\rm cur}}})\overset{(\zeta,-\omega)} {\longrightarrow} \\ &\phantom{CH_{p,p+1}}\CH_{p}(X) \oplus{\rm Z}\mathcal{D}_{2p-2e}^{\text{{\rm cur}}}(X,p-e)\overset{\cl+h} {\longrightarrow}H_{2p-2e}^{\mathcal{D}^{f}}(X_{\mathbb{R}},\mathbb{R}(p-e)) \longrightarrow 0. \end{align*} In particular, if $X_{F}$ is projective, then there is an exact sequence \begin{align*} &\CH_{p,p+1}(X)\overset{\rho}{\longrightarrow}H_{2p-2e+1}^{\mathcal {D}}(X_{\mathbb{R}},\mathbb{R}(p-e))\overset{\amap}{\longrightarrow} \cha_{p}(X,\mathcal{D}^{\text{{\rm cur}}})\overset{(\zeta,-\omega)} {\longrightarrow} \\ &\phantom{CH_{p,p+1}}\CH_{p}(X) \oplus{\rm Z}\mathcal{D}_{2p-2e}^{\text{{\rm cur}}}(X,p-e)\overset{\cl+h} {\longrightarrow}H_{2p-2e}^{\mathcal{D}}(X_{\mathbb{R}},\mathbb{R}(p-e)) \longrightarrow 0. \end{align*} \item[(ii)] For any regular arithmetic variety $X$ over $A$ there are defined contravariant arithmetic Chow groups $\cha^{p}(X,\mathcal{D}_{\log})$. Furthermore, if $X$ is equidimensional of dimension $d$, then there is a morphism of arithmetic Chow groups \begin{displaymath} \cha^{p}(X,\mathcal{D}_{\log})\longrightarrow \cha_{d-p} (X,\mathcal{D}^{\text{{\rm cur}}}). \end{displaymath} When $X_{F}$ is projective this morphism is a monomorphism. Moreover, if $X_{F}$ has dimension zero, this morphism is an isomorphism. 
\item[(iii)] \label{item:1} For any proper morphism $f:X\longrightarrow Y$ of arithmetic varieties over $A$, there is a
morphism of covariant arithmetic Chow groups \begin{displaymath} f_{\ast}:\cha_{p}(X,\mathcal{D}_{\text{{\rm cur}}})\longrightarrow\cha_{p} (Y,\mathcal{D}_{\text{{\rm cur}}}). \end{displaymath} If $g:Y\longrightarrow Z$ is another such morphism, the equality $(g\circ f)_{\ast}=g_{\ast}\circ f_{\ast}$ holds. Moreover, if $X$ and $Y$ are regular and $f_{F}:X_{F}\longrightarrow Y_{F}$ is a smooth proper morphism of projective varieties, then $f_{\ast}$ is compatible with the direct image of contravariant arithmetic Chow groups. \item [(iv)] If $f:X\longrightarrow Y$ is a flat morphism,
equidimensional of relative
dimension $d$, and such that $f_{F}$ is smooth, then there
is a pull-back map
\begin{displaymath}
f^{\ast}:\cha_{p}(Y,\mathcal{D}^{\text{{\rm cur}}})\longrightarrow
\cha_{p+d}(X,\mathcal{D}^{\text{{\rm cur}}}).
\end{displaymath}
If $X$ and $Y$ are regular and equidimensional, this map agrees with
the pull-back map defined on the contravariant Chow groups.
\item [(v)] Let $f:X\longrightarrow Y$ be a flat map between arithmetic
varieties, which is smooth over $F$ and let $g:P\longrightarrow Y$
be a proper map. Let $Z$ be the fiber product of $X$ and $P$ over
$Y$, with $p:Z\longrightarrow P$ and $q:Z\longrightarrow X$ the two
projections. Thus $p$ is flat and smooth over $F$ and $q$ is
proper. Then, for any $x\in \cha_{\ast}(P,\mathcal{D}^{\text{{\rm cur}}})$, we have
\begin{displaymath}
q_{\ast}p^{\ast}(x)=f^{\ast}g_{\ast}(x)\in
\cha_{\ast}(X,\mathcal{D}^{\text{{\rm cur}}}).
\end{displaymath} \end{enumerate} \end{theorem} \begin{proof}
Part (i) follows from the standard exact sequences of
\cite{BurgosKramerKuehn:cacg} Theorem 4.13 adapted to the
grading by dimension and corollary \ref{cor:1}.
For (ii) we first note that, if $M$ is an equidimensional complex
algebraic
manifold, $D\subset M$ is a normal crossing divisor, $\omega$ is a
differential form with logarithmic singularities along $D$ and
$\eta$ is a form that is flat along $D$, then $\eta\wedge \omega $ is
flat along $D$. In particular, if $M$ is proper and $U=M\setminus D$,
then the associated
current $[\omega]$ belongs to $ D^{\text{{\rm extd}}}_{\ast}(U)$. Therefore, if $y$ is a
codimension $p$ cycle on $X$ then, by the assumptions on $X$ and on the
arithmetic ring, $y$ is a dimension $d-p$ algebraic
cycle. Moreover, if $(\omega_{y},\widetilde g_{y})$ is a Green form
for $y$ (i.e. a $\mathcal{D}_{\log}$-Green object for $y$) then, by
lemma \ref{lemm:3} and
\cite{BurgosKramerKuehn:cacg} Proposition 6.5 we have that $([\omega
_{y}],[\widetilde g_{y}])$ is a $\mathcal{D}^{\text{{\rm cur}}}$-Green object for
$y$. Thus we have a well defined map
\begin{displaymath}
\za^{p}(X,\mathcal{D}_{\log})\longrightarrow
\za_{d-p}(X,\mathcal{D}^{\text{{\rm cur}}}).
\end{displaymath}
By definition this map is compatible with rational equivalence,
hence we obtain a map at the level of Chow groups.
To prove (iii) we first observe that, if $Z\subset X_{\Sigma }$ is a
closed subset, then $f_{\ast} D_{\ast}^{X_{\Sigma }^{\mathcal{T}}}(Z)\subset
D^{X_{\Sigma }^{\mathcal{T}}}(f(Z))$. Therefore, the push-forward of
currents defines a
covariant $f$-morphism
\begin{displaymath}
f_{\#}:f_{\ast}\mathcal{D}_{\ast}^{\text{{\rm cur}},X} \longrightarrow
\mathcal{D}^{\text{{\rm cur}},Y}_{\ast}.
\end{displaymath}
Here we are using the terminology of \cite{BurgosKramerKuehn:cacg}
3.67 but adapted to the grading by dimension. Therefore applying
\cite{BurgosKramerKuehn:cacg} \S 4.5 we obtain the push-forward map
for covariant arithmetic Chow groups.
More concretely this map is defined as
\begin{displaymath}
f_{\ast}(y,(\omega _{y},\widetilde g_{y}))=
(f_{\ast}y,(f_{\ast} \omega
_{y},(f_{\ast}g_{y})\widetilde{\phantom A})).
\end{displaymath}
It is straightforward to check that it is compatible with the
direct image of $\mathcal{D}_{\log}$-arithmetic Chow groups when $Y$
is projective and $f_{F}$ smooth.
We now prove (iv). Since $f_{F}$ is smooth, for any Zariski closed subset
$Z\subset Y_{\mathbb{R}}$ equidimensional of dimension $p$, there is a
well defined morphism
$f^{\ast}D_{n}(Y_{\Sigma })\longrightarrow D_{n+2d}(X_{\Sigma })$
that sends $D_{n}^{Y_{\Sigma }^{\mathcal{T}}}(Z)$ to $D^{X_{\Sigma
}^{\mathcal{T}}}_{n+2d}(f^{-1}(Z))$. Therefore we obtain well
defined morphisms
\begin{displaymath}
\begin{matrix}
f^{\#}:\mathcal{D}^{\text{{\rm cur}}}_{n}(Y_{\mathbb{R}},p)&\longrightarrow&
\mathcal{D}^{\text{{\rm cur}}}_{n+2d}(X_{\mathbb{R}},p+d),\\
f^{\#}:\mathcal{D}^{\text{{\rm cur}}}_{n}(Y_{\mathbb{R}}\setminus Z,p)&\longrightarrow&
\mathcal{D}^{\text{{\rm cur}}}_{n+2d}(X_{\mathbb{R}}\setminus f^{-1}Z,p+d),
\end{matrix}
\end{displaymath}
that send $T$ to $f^{\ast}T/(2\pi i)^{d}$.
Then the proof of (iv) is straightforward using the theory of
\cite{BurgosKramerKuehn:cacg} 4.4 adapted to the grading by
dimension.
Finally, (v) follows as in \cite{GilletSoule:aRRt} Lemma 11. \end{proof}
\nnpar{Multiplicative properties.} In the next result we state the multiplicative properties relating the covariant and the contravariant Chow groups. The proofs are simple modifications of \cite{GilletSoule:aRRt} Theorem 3. First, for a form $\eta \in \widetilde{\mathcal{D}}^{2p-1}_{\log}(X_{\mathbb{R}},p)$ and an element $x\in \cha_{q}(X,\mathcal{D}^{\text{{\rm cur}}})$ we define \begin{displaymath}
\eta \cap x = \amap(\eta \bullet \omega (x))=\amap(\eta \wedge
\omega (x)). \end{displaymath}
\begin{theorem} \label{thm:1}
Given a map $f:X\longrightarrow Y$ of arithmetic varieties,
with $Y$ regular, there is a cap product
\begin{displaymath}
\begin{matrix}
\cha^{p}(Y,\mathcal{D}_{\log})\otimes \cha_{q}(X,\mathcal{D}^{\text{{\rm cur}}})
&\longrightarrow &\cha_{q-p}(X,\mathcal{D}^{\text{{\rm cur}}})_{\mathbb{Q}}\\
y\otimes x &\longmapsto & y._{f}x
\end{matrix}
\end{displaymath}
which is also denoted $y\cap x$ if $X=Y$. This product satisfies the
following properties
\begin{enumerate}
\item $\omega (y._{f}x)=f^{\ast}\omega (y)\land \omega (x)$, and,
for any $\eta \in \widetilde
{\mathcal{D}}_{\log}^{2p-1}(Y_{\mathbb{R}},p)$, we have
$\amap(\eta)._{f}x=\amap(f^{\ast}(\eta))\cap x$.
\item $\cha_{\ast}(X,\mathcal{D}^{\text{{\rm cur}}})_{\mathbb{Q}}$ is a graded
$\cha^{\ast}(Y,\mathcal{D}_{\log})$-module.
\item If $g:Y\longrightarrow Y'$ is a map of arithmetic varieties
with $Y'$ also regular, $y'\in \cha^{p}(Y',\mathcal{D}_{\log})$ and
$x\in \cha_{q}(X,\mathcal{D}^{\text{{\rm cur}}})$, then
$y'._{gf}x=g^{\ast}(y')._{f}x$.
\item If $h:X'\longrightarrow X$ is a projective morphism, $x'\in
\cha_{q}(X',\mathcal{D}^{\text{{\rm cur}}})$ and $y\in
\cha^{p}(Y,\mathcal{D}_{\log})$, then
$y._{f}(h_{\ast}(x'))=h_{\ast}(y._{fh} x')$.
\item If $h:X'\longrightarrow X$ is flat and smooth over $F$, $x\in
\cha_{q}(X,\mathcal{D}^{\text{{\rm cur}}})$, $y\in
\cha^{p}(Y,\mathcal{D}_{\log})$, then
$h^{\ast}(y._{f}x)=y._{f}(h^{\ast}(x))$.
\item Let $f:X\longrightarrow Y$ be a flat map between arithmetic
varieties, with $Y$ regular and projective, and let $g:P\longrightarrow Y$
be a proper smooth map of arithmetic varieties of relative dimension
$d$. Let $Z$ be the fiber product of $X$ and $P$ over
$Y$, with $p:Z\longrightarrow P$ and $q:Z\longrightarrow X$ the two
projections. Then, for all $x\in \cha_{p}(X,\mathcal{D}^{\text{{\rm cur}}})$ and
$\gamma \in \cha^{q}(P,\mathcal{D}_{\log})$, the following equality holds
\begin{displaymath}
q_{\ast}(\gamma ._{p}q^{\ast}(x))=g_{\ast}\gamma ._{f} x .
\end{displaymath}
\end{enumerate} \end{theorem} \begin{proof}
To define $y._{f}x$ we follow closely \cite{GilletSoule:aRRt}. We
may assume that $Y$ is equidimensional, that
$x=(V,\mathfrak{g}_{V})$ with $V$ a prime algebraic cycle and
$y=(W,\mathfrak{g}_{W})$ with each component of $W$ meeting $V$
properly on the generic fiber $X_{F}$. As in \cite{GilletSoule:aRRt}
we can define a cycle $[V]._{f}[W]\in CH_{q-p}(V\cap
f^{-1}(|W|))_{\mathbb{Q}}$ that gives us a well defined cycle
$([V]._{f}[W])_{F}\in {\rm Z}_{q-p}(X_{F})$. Our task now is to
construct the Green object for this cycle. Let
$\mathfrak{g}_{W}=(\omega _{W},\widetilde g_{W})$ and
$\mathfrak{g}_{V}=(\omega _{V},\widetilde g_{V})$. We write
$U_{V}=X_{\mathbb{R}}\setminus |V|$,
$U_{W}=X_{\mathbb{R}}\setminus f^{-1}|W|$ and $r=q-p$.
We now define,
in analogy with \cite{BurgosKramerKuehn:cacg} theorem 3.37,
\begin{align*}
\mathfrak{g}_{W}&\ast_{f} \mathfrak{g}_{V}=
f^{\ast}\mathfrak{g}_{W}\ast \mathfrak{g}_{V}\\
&=\left(f^{\ast}(\omega_{W})\bullet\omega_{V},
((f^{\ast}(g_{W})\bullet\omega_{V},f^{\ast}(\omega_{W})\bullet
g_{V}),-f^{\ast}(g_{W})\bullet g_
{V})^{\widetilde{\phantom{=}}}\right)\\
&=(f^{\ast}(\omega_{W})\wedge \omega_{V},
((f^{\ast}(g_{W})\wedge \omega_{V},f^{\ast}(\omega_{W})\wedge
g_{V}),\\
& \
\partial f^{\ast}(g_{W})\land
g_{V}-\bar{\partial}f^{\ast}(g_{W})\land g_{V}-
f^{\ast}(g_{W})\land\partial g_{V}+f^{\ast}(g_{W})\land\bar{\partial}g_{V}
)^{\widetilde{\phantom{=}}}\\
&\in \widehat H_{2r}(\mathcal{D}^{\text{{\rm cur}}}_{\ast}(X_{\mathbb{R}},r),
s(\mathcal{D}^{\text{{\rm cur}}}_{\ast}(U_{W},r)\oplus
\mathcal{D}^{\text{{\rm cur}}}_{\ast}(U_{V},r)\rightarrow
\mathcal{D}^{\text{{\rm cur}}}_{\ast}(U_{W}\cap U_{V},r)))\\
&\cong \widehat H_{2r}(\mathcal{D}^{\text{{\rm cur}}}_{\ast}(X_{\mathbb{R}},r),
\mathcal{D}^{\text{{\rm cur}}}_{\ast}(U_{W}\cup U_{V},r)).
\end{align*}
Now the proof follows as in \cite{GilletSoule:aRRt} Theorem 3 and
Lemma 12. \end{proof}
\begin{remark}
\begin{enumerate}
\item The main difference between the arithmetic Chow groups
introduced here and the arithmetic Chow groups used in
\cite{GilletSoule:aRRt} is that, if $x\in
\cha_{\ast}(X,\mathcal{D}^{\text{{\rm cur}}})$ then $\omega (x)$ is an arbitrary
current instead of a smooth differential form. This allows us to
define direct images for arbitrary proper morphisms. But the price
we have to pay is that inverse images are defined only for
morphisms that are smooth over $F$.
\item The compatibility between direct images for the covariant Chow
groups and direct images for the contravariant Chow groups in
theorem \ref{thm:logD}
\label{item:2} is stated only for varieties that are generically
projective because the latter direct images are defined only
when the base is proper. There are two ways
to overcome this difficulty. One is to allow arbitrary
singularities at infinity in the spirit of
\cite{BurgosKramerKuehn:accavb} 3.5, but then, one will have to
allow also arbitrary singularities at infinity for currents. This
means that we will have to consider currents that are tempered
in some components of the boundary but are not tempered in the
others. The second option would be to use a different notion of
logarithmic singularities that has better properties with respect
to direct images.
\end{enumerate}
\end{remark}
\nnpar{Relationship with other arithmetic Chow groups.} Let us assume now that $X_{F}$ is projective and let $\cha^{\ast}(X)$ denote the arithmetic Chow groups introduced in \cite{GilletSoule:ait} and $\cha_{\ast}(X)$ denote the arithmetic Chow groups introduced in \cite{GilletSoule:aRRt}. In \cite{BurgosKramerKuehn:cacg} it is shown that there is an isomorphism \begin{displaymath}
\psi :\cha^{\ast}(X,\mathcal{D}_{\log})\longrightarrow \cha^{\ast}(X), \end{displaymath} that is compatible with products, inverse images with respect to arbitrary morphisms and direct images with respect to proper morphisms that are smooth over $F$. We shall state the analogous result for covariant arithmetic Chow groups.
\begin{proposition}\label{prop:6}
Let $X$ be an arithmetic variety with $X_{F}$ projective. Then there
is a short exact sequence
\begin{multline*}
0\longrightarrow \cha_{\ast}(X)\overset{\phi }{\longrightarrow }
\cha_{\ast}(X,\mathcal{D}^{\text{{\rm cur}}})\\
\longrightarrow
\bigoplus_{p}{\rm Z}\mathcal{D}_{2p}^{\text{{\rm cur}}}(X_{\mathbb{R}},p)\left/
{\rm Z} \mathcal{D}_{2p}^{\text{{\rm smooth}}}(X_{\mathbb{R}},p)\longrightarrow
0\right. ,
\end{multline*}
where $\mathcal{D}_{2p}^{\text{{\rm smooth}}}(X_{\mathbb{R}},p)$ denotes the
subspace of currents that can be represented by smooth differential
forms. Moreover $\phi $ satisfies the following properties
\begin{enumerate}
\item If $f:X\longrightarrow Y$ is a proper morphism of arithmetic
varieties that is smooth over $F$ and with $Y_{F}$ projective,
then $f_{\ast}\circ \phi =\phi
\circ f_{\ast}$.
\item If $f:X\longrightarrow Y$ is a flat morphism of arithmetic
varieties that is smooth over $F$, with $X_{F}$ and $Y_{F}$
projective, then $f^{\ast}\circ \phi =\phi
\circ f^{\ast}$.
\item If $f:X\longrightarrow Y$ is a morphism of arithmetic
varieties, with $X_{F}$ and $Y_{F}$ projective and $Y$ regular
then, for $y\in \cha^{p}(Y,\mathcal{D}_{\log})$ and $x\in
\cha_{q}(X)$, the following equality holds
\begin{displaymath}
y._{f} \phi (x) = \psi (y)._{f} x.
\end{displaymath}
\end{enumerate} \end{proposition} \begin{proof}
Let $y$ be a dimension $p$ algebraic cycle of $X$ and let $g_{y}$ be a Green current for $y$ in the sense of \cite{GilletSoule:aRRt}. Recall that the normalization used here for the current $\delta _{y}$ differs from the normalization used in \cite{GilletSoule:aRRt} by a factor $\frac{1}{(2\pi i)^{p}} $. Then, by Lemma \ref{lemm:3}, the pair \begin{displaymath}
\left(\frac{1}{2(2\pi i)^{p+1}}g_{y}|_{X_{\mathbb{R}}\setminus
\mathcal{Z}_{p}},
\frac{1}{2(2\pi i)^{p+1}} (-2\partial\bar \partial) g_{y}+\delta
_{y} \right) \end{displaymath} is a $\mathcal{D}^{\text{{\rm cur}}}$-Green object for $y$. Therefore we obtain a well defined morphism $\za_{p}(X)\longrightarrow \za_{p}(X,\mathcal{D}^{\text{{\rm cur}}})$. It is straightforward to check that this map preserves rational equivalence, that the above sequence is exact, and that properties (i), (ii) and (iii) hold. \end{proof}
\begin{corollary} Under the hypotheses of the proposition,
every element $x\in \cha_{p}(X,\mathcal{D}^{\text{{\rm cur}}})$ can be represented as
\begin{displaymath}
x= \phi(x_{1})+\amap(\eta)
\end{displaymath}
where $x_{1}\in \cha_{p}(X)$ and $\eta\in \widetilde
{\mathcal{D}}_{2p+1}^{\text{{\rm cur}}}(X_{\mathbb{R}},p)$. Moreover, if
\begin{displaymath}
x= \phi(x_{1})+\amap(\eta) = \phi(x'_{1})+\amap(\eta')
\end{displaymath}
are two such representations, then $\eta-\eta'\in \widetilde
{\mathcal{D}}_{2p+1}^{\text{{\rm smooth}}}(X_{\mathbb{R}},p).$ \end{corollary} \begin{proof}
This follows from the previous proposition and the fact that the map
\begin{displaymath}
\dd_{\mathcal{D}}:\widetilde
{\mathcal{D}}_{2p+1}^{\text{{\rm cur}}}(X_{\mathbb{R}},p)\longrightarrow
{\rm Z}\mathcal{D}_{2p}^{\text{{\rm cur}}}(X_{\mathbb{R}},p)\left/
{\rm Z} \mathcal{D}_{2p}^{\text{{\rm smooth}}}(X_{\mathbb{R}},p)
\right.
\end{displaymath}
is surjective due to the projectivity of $X$. The last statement
follows from \cite{GilletSoule:ait} Theorem 1.2.2. \end{proof}
The following result follows now easily from the previous corollary.
\begin{corollary}
Assume furthermore that $X$ is equidimensional of dimension $d$ and
let $\cha^{\ast}_{D}(X)$ denote the $D$-arithmetic Chow groups
introduced in \cite{KawaguchiMoriwaki:isfav}. Then there is a
natural isomorphism
\begin{displaymath}
\bigoplus _{p}\cha^{p}_{D}(X)\longrightarrow \bigoplus
_{p}\cha_{d-p}(X,\mathcal{D}^{\text{{\rm cur}}}).
\end{displaymath}
Moreover this isomorphism is compatible with push-forwards and
the structure of module over the contravariant arithmetic Chow
groups.
$\square$ \end{corollary}
\newcommand{\noopsort}[1]{} \newcommand{\printfirst}[2]{#1}
\newcommand{\singleletter}[1]{#1} \newcommand{\switchargs}[2]{#2#1} \providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace} \providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2} } \providecommand{\href}[2]{#2}
\end{document}
\begin{document}
\title[Low-rate reliability bound] {A low-rate bound on the reliability of a quantum discrete memoryless channel$^1$} \author[Alexander Barg]{Alexander Barg} \address{ Bell Labs, Lucent Technologies, 700 Mountain Avenue, Rm. 2C-375, Murray Hill, NJ 07974 USA} \email{abarg@research.bell-labs.com} \begin{abstract} We extend a low-rate improvement of the random coding bound on the reliability of a classical discrete memoryless channel to its quantum counterpart. The key observation that we make is that the problem of bounding below the error exponent for a quantum channel relying on the class of stabilizer codes is equivalent to the problem of deriving error exponents for a certain symmetric classical channel. \end{abstract} \maketitle \footnotetext[1]{Research supported in part by the Binational Science Foundation (USA-Israel), grant no. 1999099.} \section{Introduction}
Derivation of error bounds in quantum information theory is usually performed by translation of the standard methods from its classical counterpart. Error exponents for the classical-quantum channel (transmission of orthogonal states) were derived in \cite{hol00}. Here we are concerned with the so-called quantum-quantum channel which is the standard universe for quantum error-correcting codes. An exponential upper bound on the distortion (error) probability was derived in a recent paper \cite{ham01}. We show that this bound can be improved for low noise and low values of the transmission rate. In Sect. \ref{sect:prelims} we give precise definitions of the quantum discrete memoryless channel (henceforth QDMC), codes, decoding, and error probability. Sect. \ref{sect:stab} contains a brief review of stabilizer codes and their decoding. It turns out that if we restrict ourselves to the class of stabilizer codes, then the bounds on their distortion exponent also follow from the corresponding classical results. In particular, in Sect. \ref{sect:rce} we give a short proof of the result of \cite{ham01}. The link to the classical results motivates us to derive a low-rate error exponent for a QDMC (Sect. \ref{sect:x}). A condition under which it improves the random coding bound of \cite{ham01} is given. We conclude by specializing the results to the case of a depolarizing channel and showing a concrete improvement for low code rates in the case of low noise.
\section{Preliminaries}\label{sect:prelims}
A quantum $d$-ary digit, a {\em qudit}, is a $d$-dimensional complex space $H=\complexes^d,$ where $d$ will be assumed a prime number. Below by $\cX$ we denote the finite field $\ff_{q},$ where $q=d^2.$ We consider transmission of unit-length state vectors $\ket\psi$ from the $d^n$-dimensional space $H_n=H^{\otimes n}.$ Let us fix some orthonormal basis of $H$ and write it as $(\ket 0,\ket 1,\dots,\ket {d-1}).$ A unitary basis of error operators (an error basis, for short) is defined as $\{E_{i,j}=X^iZ^j, i,j\in \ff_d\},$ \[ X\ket i=\ket{(i-1)\text{mod } d},\quad Z\ket j=\omega^j\ket j, \] and $\omega$ is a primitive $d$th root of unity.
A {\em quantum discrete memoryless channel} $\sW$ is defined as an arbitrary collection of operators of the form $(A_{u}, u\in \cX)$, where \[ A_u=\sum_{v\in \cX} a_{uv}E_v \] and where the complex row vectors $a_v=(a_{uv}, u\in \cX)$ define a probability distribution on $\cX$ given by \[ W(v)=a_v a_v^\ast \quad (v\in \cX), \quad \sum_v W(v)=1. \] We note that this definition is derived from the general definition of the quantum channel $\Phi$ which is a trace-preserving completely positive map on the set of density operators on $H_n$. It is known that any such map $\Phi$ can be written as \[ \Phi(S)=\sum_k A_k S A_k^\ast \] for some set of operators $A_k$, where $S$ is a density operator on $H_n$ (the so-called Kraus representation of the channel). The absence of memory in the channel is reflected by the fact that the operators $A_k$ can be written as tensor products of operators on $H.$
As an example, let $d=2$ and consider the so-called {\em depolarizing channel} $\sW=\{\sqrt{1-p}I,\sqrt{p/3}\sigma_x,\linebreak[2] \sqrt{p/3}\sigma_z,\sqrt{p/3}\sigma_y\}, $ where $(\sigma_x,\sigma_z,\sigma_y)$ is the set of Pauli matrices. This channel acts on qubits by phase flips, amplitude flips, or combinations of both applied with probability $p/3$ each. More generally, for any $d$ we can define a depolarizing channel as follows: $\sW=\{\sqrt{1-p}I;\sqrt\frac p{q-1}E_{i,j}, i,j\in \ff_d\}.$
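The induced distribution $W$ on $\cX$ for the depolarizing channel is easy to tabulate explicitly. The following sketch (illustration only; the function name is ours) lists the $q=d^2$ probabilities:

```python
def depolarizing_W(d, p):
    """Distribution W on X = F_{d^2} induced by the depolarizing channel:
    probability 1-p for the identity (the zero of X) and p/(q-1) for
    each of the q-1 nontrivial error operators E_{i,j}."""
    q = d * d
    return [1.0 - p] + [p / (q - 1)] * (q - 1)

# Qubit case d = 2: the four operators I, sigma_x, sigma_z, sigma_y.
W = depolarizing_W(2, 0.05)
```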
A quantum code $\cQ$ is a linear subspace of $H_n.$ The {\em rate} of $\cQ$ is defined as $R=R(\cQ):=(\log_dK)/n,$ where $K$ is the dimension of $\cQ.$ Let $\cR$ be a recovery operator, i.e., another completely positive trace-preserving map on $H_n$, restricted to $\cQ$. The {\em fidelity} of the code $\cQ$ for a given channel $\Phi$ and a given recovery operator $\cR$ equals \[ F(\cQ,\{\Phi,\cR\})=\frac{1}{K}\min_{B\subset \cQ} \sum_{\psi\in B} \bra\psi\cR\Phi [\ket\psi\bra\psi]\ket\psi, \] where the minimum is taken over all orthonormal bases $B$ of the code. In particular, for the QDMC defined above, $\Phi=\sW^{\otimes n}.$ Below we will omit the recovery operator from the notation.
For a given rate $R$ we wish to define the reliability (exponent) of a QDMC $\sW.$ Let \[ E(n, R,\sW)=\sup_{\cQ\subset H_n: R(\cQ)\ge R}-\frac 1n \log_d(1-F(\cQ,\sW)) \] be the error exponent for the rate $R$ and code length $n$. Let \[ E(R,\sW)=\liminf_{n\to\infty} E(n,R,\sW). \]
Let $H_m(Q)=-\sum_{x\in\cX}Q(x)\log_mQ(x)$ be the entropy of a probability distribution $Q$ on $\cX$. For two probability distributions $P$ and $Q$, their information divergence is given by
$D_m(Q\|P)=\sum_{x\in \cX}Q(x)\log_m\frac{Q(x)}{P(x)}$ (if the base of the logarithms and exponents below is omitted, it is equal to $d$).
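These quantities are straightforward to compute numerically; in the sketch below (helper names ours) we use the standard convention $0\log 0=0$:

```python
import math

def entropy(Q, m):
    """H_m(Q) = -sum_x Q(x) log_m Q(x), with 0 log 0 = 0."""
    return -sum(q * math.log(q, m) for q in Q if q > 0)

def divergence(Q, P, m):
    """D_m(Q||P) = sum_x Q(x) log_m (Q(x)/P(x)); assumes P(x) > 0
    wherever Q(x) > 0."""
    return sum(q * math.log(q / p, m) for q, p in zip(Q, P) if q > 0)
```

For example, the uniform distribution on four letters has $H_2=2$ (equivalently $H_4=1$), and $D_m(Q\|Q)=0$ for any $Q$.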
The following theorem was proved in \cite{ham01}. \begin{theorem}\label{thm:qrce} \cite{ham01} For any rate $R\ge 0$ and any QDMC $\sW$ \begin{equation}\label{eq:qrce}
E(R,\sW)\ge E_r(R,\sW)=\min_{V}[D(V\|W)+|1-H(V)-R|^+], \end{equation}
where the minimum is taken with respect to all probability distributions on $\cX$ and $|a|^+:=\max(a,0).$ \end{theorem} Since $E_r(R,\sW)>0$ for $0\le R< 1-H(W),$ this result also implies a lower bound of $1-H(W)$ on the capacity of the channel $\sW$.
Given a vector $x\in \cX^n,$ we can define an empirical probability
distribution $P$ on $\cX$ given by $P(u)=|\{i: x_i=u\}|/n, u\in \cX$. Below we call it the {\em type} of the vector $x$ and write $T(x)=P.$ The type of the all-zero vector will be denoted by $P_0$; we have $P_0(u)=\delta_{u,0}.$ The set of all sequences of a given type $P$ will be denoted as ${\sf T}_P(\cX^n).$ It is clear that \[
|{\sf T}_P(\cX^n)|= \exp_q(n (H_q(P)+o(1))). \] Let $\cP(\cX^n)$ be the set of all types on $\cX^n.$ Obviously, \[
|\cP(\cX^n)|=\binom{n+q-1}{q-1}\le n^q \quad(n, q\ge 2). \] For any $x\in \cX^n$ and any stochastic matrix $V: \cX\to \cY$, the $V$-{\em shell} of $x$ is defined as the set ${\sf T}_V(x)\subset \cY^n$
formed by those $y$ whose conditional type is $V$. This means that for any such $y$ its type is $T(y)=PV,$ where $PV$ is the probability distribution on $\cY$ given by $PV(y)=\sum_{x\in\cX} P(x)V(y|x).$
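The type computations above are elementary to verify; a small sketch (function names ours):

```python
from collections import Counter
from math import comb

def type_of(x, q):
    """Empirical distribution T(x) of a vector x over the alphabet {0,...,q-1}:
    the u-th entry is the fraction of coordinates of x equal to u."""
    n = len(x)
    c = Counter(x)
    return [c[u] / n for u in range(q)]

# The number of types on X^n is exactly binom(n+q-1, q-1), which is
# bounded above by n^q for n, q >= 2.
n, q = 10, 4
num_types = comb(n + q - 1, q - 1)
```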
\section{Stabilizer codes and their decoding}\label{sect:stab} The construction of stabilizer quantum codes in \cite{cal97b}, \cite{cal98} is as follows. Consider the vector space $V_n=(\ff_d\times\ff_d)^n.$ Write a typical vector $x\in V_n$ as $(x_1,x_1',x_2,x_2',\dots,x_n,x_n')$ and consider a standard symplectic form on $V_n$ defined by \[ ( x,y )=\sum_{i=1}^n x_iy_i'-x_i'y_i. \] Now let $\cC\subset \cX^n$ be an additive code, i.e., an additive subgroup of $\cX^n$ and define $\cC^\bot$ as the set of vectors in $(\ff_q^+)^n\cong V_n$ that are $(\,,\,)$-orthogonal to every vector in $\cC.$ Suppose that the number of vectors in $\cC$ is $q^k$ so that the rate of $\cC$ equals
$R(\cC)=k/n.$ We then have $|\cC^\bot|=q^{n-k}.$
We begin with a pair of codes $\cC^\bot\subset \cC\subset \cX^n$ and a set $\cE\subset \cX^n$ such that \[ \forall_{x,y\in \cE} (y-x\in \cC) \;\Rightarrow\; (x=y). \] According to this definition, we can take at most one error vector per coset of $\cX^n/\cC$ and therefore, the maximum size of the set $\cE$ equals $q^{n-k}.$ It is possible to construct a quantum code $\cQ\subset H_n$ of (complex) dimension $d^{2k-n}$ which is an invariant subspace of the set of error operators $N_\cE=\{N_x, x\in \cE\}$ given by \[ N_x=\bigotimes\limits_{i=1}^n N_{x_i}, \] where for every $i$ the operator $N_{x_i}=E_{x_{i,1},x_{i,2}}$ is an element of the error basis determined by the representation of the coordinate $x_i\in \ff_q$ of $x$ as a pair of elements $(x_{i,1},x_{i,2})\in (\ff_d)^2.$ Moreover, there are $d^{2(n-k)}$ such invariant subspaces whose orthogonal direct sum equals $H_n.$ Thus the rate $R$ of the stabilizer code $\cQ$ is related to the rate of $\cC$ as $R=2R(\cC)-1.$
A stabilizer code $\cQ$ is $\cE$-error-correcting in the sense that the action of any error operator from the set $N_\cE$ can be removed from the transmitted state. The received state $w$ is measured with respect to the set of pairwise orthogonal operators $P_i$, each being an orthogonal projector on the subspace of $H_n$ that corresponds to a coset of $\cX^n/\cC.$ Then within this coset we find one of the most probable error vectors and recover the transmitted state by applying the inverse error operator.
The following bound on the fidelity of a given stabilizer code $\cQ$ was proved in \cite{ham01} based on a result in \cite{pre99}. \begin{theorem} \label{thm:fidelity}\cite{ham01} Let $\cQ$ be an $\cE$-error-correcting stabilizer code used over a QDMC $\sW$. Then \[ 1-F(\cQ,\sW)\le \sum_{x\not\in \cE} W^n(x). \] \end{theorem} This theorem provides a link between the quantum and the classical setting which will be pivotal in our argument.
Note that there is substantial freedom in the choice of the error set $\cE.$ To derive our result, we will take $\cE$ as follows. As pointed out above, the channel $\sW$ defines a probability distribution $W$ on $\cX.$ For an additive code $\cC$ consider the quotient space $\cX^n/\cC.$ From each coset $S$ we take one of the vectors $y=y(S)$ whose probability $W^n(y)=\prod W(y_i)$ is the largest in $S.$ Finally, we take $\cE=\{y(S): S\in \cX^n/\cC\}.$
We conclude this section by deriving a general analog of the weight distribution and of the Gilbert-Varshamov bound for additive codes over $\cX$. For $q=4$ and the Hamming weight distribution this result was proved in \cite{ash99i}.
\begin{theorem}\label{thm:gv} For any rate $R(\cC)>0$ and any $\delta>0,$ there exists an additive code $\cC\subset \cX^n$ of size $\exp(nR(\cC))$ such that $\cC^\bot\subset \cC$ and for any type $P\ne P_0$, \begin{equation}\label{eq:additive}
|\cC\cap {\sf T}_P|\le \exp_q[n(R(\cC)+H_q(P)-1+\delta)]. \end{equation} In particular, for any $x\in \cC\backslash\{0\}$ with $T(x)=P$ we have \[ R(\cC)\ge 1-H_q(P)-\delta. \] \end{theorem} \Proof Let \[
S_{n,k}=\{\cC\subset \cX^n: \log_q|\cC|=k,\; \cC^\bot\subset \cC\}. \] It was observed several times in the literature (e.g., \cite{cal97b}, \cite{ham01})
that every vector $x\in \cX^n\backslash\{0\}$ is contained in the same number of codes in $S_{n,k}.$ Denote this number by $B$ and let $S=|S_{n,k}|.$ Counting in two ways the sum of sizes of all the codes in $S_{n,k}$ we obtain \( (q^n-1)B=(q^k-1)S. \) Let us fix a type $P$. Clearly, \[
\sum_{P'\in \cP(\cX^n):\; H_q(P')\le H_q(P)} |{\sf T}_{P'}|\le n^q q^{nH_q(P)}. \] Thus as long as $ n^q q^{nH_q(P)}B< S$ or \[ n^q q^{nH_q(P)}<\frac{q^n-1}{q^k-1}=q^{n(1-R(\cC))}(1+o(1)), \] there exists a code $\cC\in S_{n,k}$ such that for every $x\in \cC\backslash \{0\}$ we have $H_q(T(x))\ge H_q(P).$ This proves the last part of the claim.
For any $P\ne P_0$ the average number of code vectors of type $P$ in a code $\cC\in S_{n,k}$ equals \[
\frac1S\sum_{\cC\in S_{n,k}}|\cC\cap {\sf T}_P|=
\frac{B|{\sf T}_P| }{S}= \exp_q[n(R(\cC)+H_q(P)-1+o(1))]. \] Since there are no more than $n^q$ different types, this proves the first part of the claim. \qed
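The two-way counting in the proof is easy to sanity-check numerically. The sketch below (parameters ours) verifies, for one small example and the uniform type, that the average number of codewords of that type does not exceed the bound (\ref{eq:additive}) without the $o(1)$ term:

```python
from math import factorial

# Small check of the counting argument (parameters ours): q = 4, n = 8,
# k = 6, so B/S = (q^k - 1)/(q^n - 1) < q^{k-n}.  For the uniform type P
# with counts (2, 2, 2, 2) we have H_q(P) = 1, so the average number of
# codewords of type P should not exceed q^{n(R(C) + H_q(P) - 1)} = q^{nR(C)}.
q, n, k = 4, 8, 6
R = k / n
ratio = (q**k - 1) / (q**n - 1)            # = B/S
TP = factorial(n) // factorial(2)**4       # |T_P| = n!/(2!)^4
avg = ratio * TP                           # average |C intersect T_P|
bound = q ** (n * R)                       # q^{n(R(C) + H_q(P) - 1)} with H_q(P) = 1
```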
\section{The random coding bound}\label{sect:rce}
Let $\cX$ be an input and $\cY$ an output alphabet of a classical DMC given by a stochastic matrix $W(y|x)$. Suppose that $\cX\subset \cY$ and that
$\cY$ is an abelian group, written additively. A channel is called {\em additive} if $W(y|x)$ depends only on the difference $y-x,$
i.e., $W(y|x)=W(y-x)$ (the last term is actually $W(y-x|0),$ but below we abuse the notation slightly and use unconditional distributions). Note that an additive channel $W$ is symmetric in the sense that every row is a permutation of a fixed probability vector, and the same is true of every column. By Theorem \ref{thm:fidelity} the problem of bounding from below the reliability exponent of a QDMC is now {\em reduced to the corresponding classical problem for a symmetric, additive DMC} with $\cY=\cX.$ With this observation Theorem \ref{thm:qrce} follows by a combination of standard arguments, so a reader well familiar with error exponents of classical channels could as well stop here. In the interest of staying self-contained we supply some more details.
A. {\sc General form of the random coding exponent.} For any type $P\in \cP(\cX^n)$ and any stochastic
$|\cX|\times|\cY|$ matrix $V$, let \[
D(V\|W|P)=\sum_{x,y}P(x)V(y|x)\log \frac{V(y|x)}{W(y|x)} \] be the conditional divergence and \[
I(P,V)=\sum_{x,y}P(x)V(y|x)\log \frac{V(y|x)}{\sum_x P(x)V(y|x)} \] be the mutual information between $x\in {\sf T}_P(\cX^n)$ and $y\in {\sf T}_V(x)$. The following theorem (reformulated slightly from \cite{csi98}) gives one of the general forms of the error exponent of a classical DMC. \begin{theorem}\label{thm:genrce} For a given type $P\in \cP(\cX^n)$ let
$A\subset {\sf T}_P(\cX^n), |A|=d^{(R'-\epsilon)n}$ be a code such that for every stochastic matrix $\tilde V:\cX\to \cX$ \begin{equation}\label{eq:dd0}
|\{(x_i,x_j) \in A\times A: \; x_j\in {\sf T}_{\tilde V}(x_i)\}|
\le \exp[{n(R'-I(P,\tilde V))}]. \end{equation} Suppose that $A$ is used over a DMC $W:\cX\to\cY$ together with a maximum mutual information decoder. Then the exponent $E(A,W)$ of the maximum error probability $\max_{x\in A} p_e$ satisfies $E(A,W)\ge E_r(P,R',W),$ where \begin{equation}\label{eq:rce}
E_r(P,R',W)=\min_{V}[D(V\|W|P)+|I(P,V)-R'|^+], \end{equation} and where $V$ runs over the set of all channels $\cX\to \cY.$ \end{theorem}
{\em Remarks.} 1. This theorem is a generalization of a classical fact of coding theory, that ``binary linear codes of rate $R$ and weight distribution $A_w\le 2^{n(R-1)}\binom nw, w=1,2,\dots, n$ achieve the random coding exponent of the binary symmetric channel.''
2. The best bound on the reliability exponent of the channel $W$ is obtained by computing the maximum on $P$ in (\ref{eq:rce}). The quantity $E(R',W)=\max_P E_r(P,R',W)$ is usually called the random coding exponent of $W$.
3. The maximum mutual information decoder, which is used to prove this result and which was employed in \cite{ham01}, is different from the decoder defined in Sect. \ref{sect:stab}.
B. {\sc Additive channels and codes.} Recall that in our problem $\cX$ is an additive group and that $\cY=\cX$. Further, since the channel $W$ is symmetric, the maximizing input distribution $P$ in (\ref{eq:rce}) is known to be uniform \cite{dob63}:
$P_u(x)=|\cX|^{-1}$ for any $x\in \cX.$
Let us substitute $P_u$ into the condition (\ref{eq:dd0}) on the ``distance distribution'' of the code $A.$ Let $x$ be a vector such that $T(x)=P_u$ and let $\tilde V$ be a stochastic matrix such that ${\sf T}_{\tilde V}(x)\cap {\sf T}_{P_u}(\cX^n)\ne\emptyset.$
Then for any letter $x\in \cX$ the sum $\sum_{y\in\cX} \tilde V(y|x)=1.$ We compute \[
I(P_u,\tilde V)=\log|\cX|-H(\tilde V|P_u), \]
so the upper bound in (\ref{eq:dd0}) takes the form \begin{equation}\label{eq:dd01}
|\{(x_i,x_j) \in A\times A: \; x_j\in {\sf T}_{\tilde V}(x_i)\}|
\le \exp[n(R'+H(\tilde V|P_u)-\log|\cX|)]. \end{equation}
Now consider the code $\cC$ from Theorem \ref{thm:gv}. Almost all of its codewords are of type $P_u$ and nearby types (types close to it in some suitable metric, say, the $\ell_1$-distance). We claim that the ``distance distribution'' of the code $\cC$ satisfies (\ref{eq:dd01}). Since the code is additive,
it suffices to consider matrices $\tilde V$ such that $\tilde V(y|x)$ depends only on the difference $y-x.$ Any such matrix defines a distribution
$\tilde V(z)=\tilde V(z|0)$ on $\cX.$ Using this in (\ref{eq:dd01}), we observe that this condition reduces to the condition (\ref{eq:additive}) satisfied by the ``weight'' distribution of $\cC.$
Now recall from \cite{csi81b} that the function $E_r(P,R',W)$ is uniformly continuous in $P$ and that, on account of the channel and code being additive, the error probability of decoding does not depend on the transmitted codeword. Therefore, as $n$ grows, the error exponent of the code $\cC$ attains the bound $E(R',W)$. This proves Theorem \ref{thm:qrce}.
Transforming the exponent (\ref{eq:rce}) to the form (\ref{eq:qrce}) is a matter of calculation. Indeed, let us substitute $P_u$ in (\ref{eq:rce}).
Clearly, $D(V\|W|P)=D(V\|W),$ where on the right-hand side $V$ and $W$ are probability distributions on $\cX$ given by
$W(z)=W(y|x), V(z)=V(y|x)$ for any $y,x$ such that $z=x-y.$ Further, \begin{align*}
I(P_u,V)-R'=\log|\cX|-H(V|P_u)-R'=2-H(V)-(1+R)=1-R-H(V), \end{align*} where we have used $H(V|P_u)=H(V)$ for an additive channel, $\log|\cX|=\log_d d^2=2,$ and the relation $R'=2R(\cC)=1+R.$
{\sc Further observations.}
1. By the same token, the capacity of the quantum channel $\sW$ is bounded below by the capacity of the classical symmetric channel $W.$ Again the mutual information is maximized for the uniform input distribution, which implies the bound $\sC\ge 1-H(W)$ independently of the results on error exponents. Note however that when this result is specialized to the depolarizing channel (see Example in the next section), it falls below the best currently known estimate of \cite{div98}.
2. If we return from (\ref{eq:rce}) to Gallager's original form of the random coding bound (by a method outlined in \cite[pp.\,192-193]{csi81b}), the exponent (\ref{eq:qrce}) can be written in a somewhat more convenient form. Namely: \begin{theorem}\label{thm:qrceG} Let $E_0(\rho,W)=\rho-(1+\rho) \log \sum_{x\in \cX} W(x)^{\frac{1}{1+\rho}}.$ Then \[
E_r(R,\sW)=1-R-\log\Big(\sum_{x\in \cX}\sqrt{W(x)}\Big)^2 \quad(0\le R<\tfrac{\partial E_0}{\partial \rho}|_{\rho=1}) \]
and \[
E_r(R,\sW)=\max_{0\le\rho\le 1}[-\rho R+E_0(\rho,W)]\quad(\tfrac{\partial E_0}{\partial \rho}|_{\rho=1}\le R\le 1-H(W)). \]
\end{theorem}
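Theorem \ref{thm:qrceG} is convenient for numerical evaluation. The sketch below (ours; a grid search over $\rho$ rather than exact optimization, logarithms to the base $d$) evaluates $E_0(\rho,W)$ and the random coding exponent for the qubit depolarizing channel:

```python
import math

def E0(rho, W, d):
    """Gallager's function E_0(rho, W) = rho - (1+rho) log_d sum_x W(x)^{1/(1+rho)}."""
    s = sum(w ** (1.0 / (1.0 + rho)) for w in W)
    return rho - (1.0 + rho) * math.log(s, d)

def Er(R, W, d, steps=1000):
    """Random coding exponent max_{0 <= rho <= 1} [-rho R + E0(rho, W)],
    approximated by a grid search over rho."""
    return max(-(i / steps) * R + E0(i / steps, W, d)
               for i in range(steps + 1))

# Qubit depolarizing channel with p = 0.01 (d = 2, q = 4).
d, p, q = 2, 0.01, 4
W = [1.0 - p] + [p / (q - 1)] * (q - 1)
```

At $R=0$ the maximum is attained at $\rho=1$, recovering the low-rate expression of the theorem, and the exponent vanishes for rates above $1-H(W)$.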
3. In the classical setting, the line of thought realized in Theorem \ref{thm:qrce} would correspond to an attempt to prove error bounds for a general DMC relying on the class of additive codes. It is well known \cite{dob63},\,\cite{csi81b} that this approach produces good results only when the optimizing probability distribution on the input alphabet is uniform. The classical channel derived from a general QDMC for stabilizer codes turns out to be {\em additive} and hence symmetric. Therefore the lower bounds on the reliability exponent thus obtained are arguably rather strong.
\section{Expurgation exponent for a QDMC}\label{sect:x}
Let $\cQ\subset H_n$ be a stabilizer code of rate $R=R(\cQ)$ used over a QDMC $\sW$ together with the decoder defined in Sect. \ref{sect:stab}. Define the $W$-weight of a letter $x\in\cX$ as \[
|x|_{_W}=-\log\sum_{e\in \cX}\sqrt{W(e)W(e-x)}, \] where $\log 0=-\infty$ by definition.
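For an additive channel the $W$-weights are easily tabulated. In the sketch below (ours) the additive group of $\cX=\ff_4$ is taken to be $(\integers_2)^2$, so that subtraction is bitwise XOR; by symmetry the three nonzero letters of the qubit depolarizing channel have equal weight:

```python
import math

def w_weight(x, W, sub, d):
    """|x|_W = -log_d sum_e sqrt(W(e) W(e - x)); equals +inf when the
    supports of W(.) and W(. - x) are disjoint (the log 0 = -inf convention)."""
    s = sum(math.sqrt(W[e] * W[sub(e, x)]) for e in range(len(W)))
    return math.inf if s == 0.0 else -math.log(s, d)

xor = lambda a, b: a ^ b          # subtraction in (Z_2)^2
p = 0.05
W = [1 - p] + [p / 3] * 3         # qubit depolarizing channel
weights = [w_weight(x, W, xor, 2) for x in range(4)]
```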
\begin{theorem}\label{thm:ex} \[ E(R,\sW)\ge E_x(R,\sW)=\min_{P:\, H(P)\ge 1-R}\;
\Big[ \sum_{x\in \cX}P(x) |x|_{_W} -(R+H(P)-1)\Big]. \] \end{theorem} \Proof We start with the code $\cC$ whose existence is proved in Theorem \ref{thm:gv}. Let $\cQ$ be the stabilizer quantum code associated with it. By Theorem \ref{thm:fidelity} \begin{align*} 1-F(\cQ,\sW)&= \sum_{e\not\in\cE} W^n(e)=\sum_{x\in \cC\setminus\{0\}} \sum_{\genfrac{}{}{0pt}{}{y\in \cX^n}{ W^n(y-x)\ge W^n(y)}} W^n(y)\\[2mm] &\le \sum_{x\in \cC\setminus\{0\}}\sum_{y\in\cX^n}\sqrt{W^n(y)W^n(y-x)} \end{align*} \begin{align*} &=\sum_{P\in \cP(\cX^n)}\sum_{x\in \cC\cap{{\sf T}_P(\cX^n)}} \sum_y\sqrt{W^n(y)W^n(y-x)}\\[2mm] &\le\sum_{P\in \cP(\cX^n)} \exp_d[2n(R(\cC)+H_q(P)-1+o(1))-n\sum_{x\in\cX}
P(x)|x|_{_W}], \end{align*} where the last step follows because the channel is memoryless. We conclude by taking the logarithm and substituting the relation $2R(\cC)=1+R.$ \qed
Note that it is possible that $E_x(R,\sW)$ becomes infinite as $R\downarrow R_\infty(\sW)$ for some $R_\infty(\sW)>0,$ which means that for rates $R<R_\infty(\sW)$ errors outside the set $\cE$ occur with probability zero. The quantity $R_\infty(\sW)$ gives a lower bound on the zero-error capacity of the channel $\sW$. Shannon's classical example of a channel with $R_\infty(\sW)>0$ \cite[p.\,532]{gal68} is given by the additive channel with $\cX=\integers_5$ and $W(0)=W(1)=1/2.$ Clearly, $R_\infty(\sW)>0$ if and only if
$|x|_{_W}=\infty$ for some $x\in \cX$ (equivalently, $\sum_{e\in\cX}\sqrt{W(e)W(e-x)}=0$ for some $x$). A channel is called {\em indivisible} if this condition does not hold, and hence $R_\infty(\sW)=0.$
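As a concrete illustration, the following sketch (the helper name is ours, not from the text) computes the $W$-weights for Shannon's pentagon channel. The letters whose Bhattacharyya sum vanishes acquire infinite weight, which is the source of $R_\infty(\sW)>0$ for this channel.

```python
import math

def w_weight(W, x, q):
    """W-weight |x|_W = -log sum_e sqrt(W(e) W(e-x)); the sum 0 gives +inf."""
    s = sum(math.sqrt(W[e] * W[(e - x) % q]) for e in range(q))
    return math.inf if s == 0.0 else -math.log(s)

# Shannon's example: additive channel on Z_5 with W(0) = W(1) = 1/2.
q = 5
W = [0.5, 0.5, 0.0, 0.0, 0.0]
weights = [w_weight(W, x, q) for x in range(q)]
# x = 0 always has weight 0; for x = 2, 3 the supports of W(.) and W(.-x)
# are disjoint, so the Bhattacharyya sum vanishes and the weight is infinite.
```

The base of the logarithm is immaterial for the zero/infinite distinction used here.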
The function $E_x$ can be transformed to a different form, also due to Gallager \cite{gal68}: \[ E_x(R,\sW)=\sup_{\rho\ge 1}[-\rho R+E_{ex}(\rho,W)], \] where \[ E_{ex}(\rho,W)=-\rho\log_d\frac1{d^2} \sum_{x\in \cX} \Big(\sum_{e\in \cX}\sqrt{W(e)W(e+x)}\Big)^{1/\rho}. \]
Let us state a condition for the bound $E_x(R,\sW)$ to improve the result of Theorem \ref{thm:qrce}. As remarked above, the optimizing probability distribution on $\cX$ for the random coding bound (\ref{eq:rce}) in our case is uniform. Moreover, the exponent $E_x$ is also derived under the same assumption. It is known \cite{gal68} that for one and the same input distribution and for code rates $R< \partial E_{ex}(\rho,W)/\partial\rho\vert_{\rho=1}$ the function $E_{x}(R,\sW)$ is greater than $E_r(R,\sW)$, so in this region of rates Theorem \ref{thm:ex} improves the result of Theorem \ref{thm:qrce}. Hence if $R_x=\partial E_{ex}(\rho,W)/\partial\rho\vert_{\rho=1}>0,$ then there is a nonempty interval of code rates where $E_x(R,\sW)>E_r(R,\sW).$ Note that typically such an interval exists only when the noise level in the channel is low. By analogy with the classical case, the improvement takes place if the value of the code rate $R(\cC)$ that corresponds to $R_x$ is greater than $1/2$. In the range where it improves the bound (\ref{eq:qrce}), the exponent $E_x(R,\sW)$ can be written as \begin{equation}\label{eq:exlow}
E_x(R,\sW)=\min_{P:H(P)=1-R} {\sf E}\, |X|_{_W}, \end{equation} where $X$ is a random variable on $\cX$ distributed according to $P.$ This follows by the Gilbert-Varshamov bound of Theorem \ref{thm:gv}.
{\em Remark.} The general form of the function $E_x(R,\sW)$ for a given additive, indivisible channel $\sW$ is as follows: \[ E_x(R,\sW)=\max_P\sup_{\rho\ge 1}[-\rho R+E_{ex}(\rho,P,\sW)], \] where \[ E_{ex}(\rho,P,\sW)=-\rho\log_d \sum_{x,x'\in \cX}P(x)P(x') \Big(\sum_{e\in \cX}\sqrt{W(x-e)W(x'-e)}\Big)^{1/\rho}. \] Optimization on the input distribution $P$ in this expression is easy if the $q\times q$ matrix \[ [(\sum_{e\in \cX}\sqrt{W(x-e)W(x'-e)})^{1/\rho}] \] is nonnegative definite for every $\rho\ge 1$ \cite{jel68}, and turns into a difficult problem otherwise. For this matrix to be nonnegative definite it is sufficient that for every pair of distinct vectors $(x,x')$ the sum on $e$ in the expression for $E_{ex}(\rho,P,\sW)$ takes one and the same value (the so-called equidistant channels \cite{jel68}). For equidistant channels the maximum on $P$ is achieved for the uniform distribution $P(x)=1/q, x\in \cX.$ For instance, the $d$-ary depolarizing channel is equidistant. However, there are many examples of additive, indivisible channels that are not nonnegative definite. For instance, let $d=3.$ Consider the channel given by the following probability distribution: \begin{equation*} \begin{array}{r*{9}c} u &00&01&02&10&11&12&20&21&22\\ W(u)&0&0.49&0&0.01&0.01&0&0.49&0&0 \end{array}, \end{equation*} where $u\in (\ff_9)^+\cong\integers_3\times \integers_3.$ It is easily verified that this channel is not nonnegative definite for $\rho\ge 1.37.$ \qed
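The non-definiteness claim is easy to check numerically. The sketch below (helper names are ours; NumPy assumed available) builds the $9\times 9$ matrix $[(\sum_e\sqrt{W(x-e)W(x'-e)})^{1/\rho}]$ for the channel in the table and inspects its smallest eigenvalue: at $\rho=1$ the matrix is a Gram matrix and hence nonnegative definite, while for $\rho$ around $1.37$ and above a negative eigenvalue appears.

```python
import math
import numpy as np

# Channel from the remark: d = 3, letters u = (u1, u2) in Z_3 x Z_3.
d = 3
W = {(0, 0): 0.0,  (0, 1): 0.49, (0, 2): 0.0,
     (1, 0): 0.01, (1, 1): 0.01, (1, 2): 0.0,
     (2, 0): 0.49, (2, 1): 0.0,  (2, 2): 0.0}
letters = sorted(W)

def sub(x, e):
    """Componentwise difference x - e mod 3."""
    return ((x[0] - e[0]) % d, (x[1] - e[1]) % d)

def channel_matrix(rho):
    """Matrix [(sum_e sqrt(W(x-e) W(x'-e)))^(1/rho)] over all letter pairs."""
    M = np.empty((len(letters), len(letters)))
    for i, x in enumerate(letters):
        for j, xp in enumerate(letters):
            s = sum(math.sqrt(W[sub(x, e)] * W[sub(xp, e)]) for e in letters)
            M[i, j] = s ** (1.0 / rho)
    return M

def min_eig(rho):
    """Smallest eigenvalue of the (symmetric) channel matrix at rho."""
    return np.linalg.eigvalsh(channel_matrix(rho)).min()

# At rho = 1 the matrix is the Gram matrix of the vectors (sqrt(W(x-e)))_e,
# hence nonnegative definite; the remark asserts it fails for rho >= 1.37.
```

Since the channel is additive, the matrix is a group circulant and its eigenvalues could equally be read off by Fourier analysis on $\integers_3\times\integers_3$.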
\begin{figure}
\caption{Error exponents for the depolarizing channel with $d=2$ and $p = 0.0005$. For $0\le R\le R_x$ the function $E_x$ gives a stronger bound than $E_r$.}
\label{fig:QSC}
\end{figure}
{\em Example.} Let us specialize the results of Theorems \ref{thm:qrce} and \ref{thm:ex} to the case of the $d$-ary depolarizing channel $\sW$, and denote its reliability exponent by $E(R,p).$ The result can be expressed in closed form. Let \begin{eqnarray*} h(x)&=&-x\log_q \frac{x}{q-1}-(1-x)\log_q (1-x)\\
D(x\|y)&=&x\log_q \frac{x}{y}+(1-x)\log_q \frac{1-x}{1-y}\\ \dgv(x)&=&h^{-1}(1-x). \end{eqnarray*} We have \[ E(R(\cQ),\sW)\ge 2E_\ell((1+R)/2,p), \] where \begin{align} E_\ell(r,p)&=-\dgv(r)\log_q\gamma_q(p) &(0\le r\le r_x) \label{eq:depol_ex}\\
E_\ell(r,p)&=D(\rho_0\|p)+\rcrit-r &(r_x\le r\le \rcrit) \nonumber\\
E_\ell(r,p)&=D(\dgv(r)\|p) &(\rcrit\le r\le 1-h(p)), \nonumber \end{align} \[ r_x=1-h\Big(\rho_0\Big(2-\frac{q\rho_0}{q-1}\Big)\Big), \quad \rcrit=1-h(\rho_0), \] \[ \rho_0=\frac{\sqrt{p(q-1)}}{\sqrt{p(q-1)}+\sqrt{1-p}},\qquad \gamma_q(p)=p\frac{q-2}{q-1}+2\sqrt\frac{p(1-p)}{q-1}. \]
This reliability exponent can be obtained from Theorems \ref{thm:qrceG}, \ref{thm:ex} or computed directly starting with codes whose existence is proved in Theorem \ref{thm:gv}. The expurgation exponent (\ref{eq:depol_ex}) is straightforward from (\ref{eq:exlow}). If $R_x:=2r_x-1>0,$ then from (\ref{eq:depol_ex}) we obtain an improvement over the result of Theorem \ref{thm:qrce} in the interval of values of $R$ between zero and $R_x.$ It turns out that this condition is satisfied for low noise level (see an example in Fig. \ref{fig:QSC}). For $d=2$ the expurgation bound improves the random coding exponent for $0<p\le 0.004.$\qed
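The threshold behaviour just described can be checked numerically. The following sketch (our helper names; we take the alphabet size $q=d^2$, so $q=4$ for $d=2$) evaluates $\rho_0$, $r_x$ and $R_x=2r_x-1$ for the depolarizing channel.

```python
import math

def h(x, q):
    """q-ary entropy: h(x) = -x log_q(x/(q-1)) - (1-x) log_q(1-x)."""
    if x <= 0.0:
        return 0.0
    return (-x * math.log(x / (q - 1)) - (1 - x) * math.log(1 - x)) / math.log(q)

def R_x(p, d=2):
    """R_x = 2 r_x - 1, with r_x = 1 - h(rho_0 (2 - q rho_0/(q-1))), q = d^2."""
    q = d * d
    rho0 = math.sqrt(p * (q - 1)) / (math.sqrt(p * (q - 1)) + math.sqrt(1 - p))
    r_x = 1.0 - h(rho0 * (2.0 - q * rho0 / (q - 1)), q)
    return 2.0 * r_x - 1.0

# Expurgation improves the random coding bound exactly when R_x > 0: this
# holds for small p (e.g. p = 0.0005 as in Fig. 1) and fails for larger p,
# consistent with the stated threshold p <= 0.004 for d = 2.
```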
{\em Acknowledgment.} The author is grateful to A. Ashikhmin and G. Kramer for helpful discussions.
\end{document}
\begin{document}
\title{Assouad type dimensions for self-affine sponges with a weak coordinate ordering condition}
\begin{abstract} Recently, self-affine sponges have been shown to provide interesting counter-examples to several previously open problems. One class of recently studied sponges is that of \emph{Bedford-McMullen sponges with a weak coordinate ordering condition}, that is, sponges in which several coordinates have the same contraction ratio. The Assouad type dimensions of such sets cannot be calculated using the same formula as for regular Bedford-McMullen sponges. We calculate the Assouad type dimensions for such sponges and for the more general Lalley-Gatzouras sponges with a weak coordinate ordering condition, discussing some of their more subtle details along the way. \\
\emph{Mathematics Subject Classification} 2010: primary: 28A80; secondary: 37C45, 28C15.
\emph{Key words and phrases}: Assouad dimension, lower dimension, self-affine set, weak tangent. \end{abstract}
\section{Introduction} \label{intro}
In the recent paper \cite{fraser-howroyd}, a new phenomenon was noted in the study of self-affine sponges which only occurs in the higher dimensional case (ambient spatial dimensions greater than or equal to 3). The techniques used to calculate the Assouad dimension of Bedford-McMullen sponges in that paper fail for what we will call \emph{sponges with a weak coordinate ordering condition}, where the condition that each coordinate direction is divided into a strictly different number of pieces $n_1 <\ldots < n_d$ is relaxed to $n_1 \leqslant \ldots \leqslant n_d$, which stops us from having a strict ordering of the coordinates. By doing this simple generalisation we will observe a dimension drop which only occurs for the Assouad and lower dimensions. We will also calculate the Assouad and lower dimensions for Lalley-Gatzouras sponges satisfying a weak coordinate ordering condition. This paper will follow the proof in \cite{fraser-howroyd} and, as such, an understanding of that paper is required for the more technical parts of this one.
\subsection{Assouad dimension and lower dimension} \label{dimension}
Many definitions of dimension exist, each with different interesting properties that we would like to study. One such definition is the Assouad dimension, which has seen much activity in the past few years, for example \cite{dfsu, fhor, fraser-jordan, mackay}. We will assume throughout that $F \subseteq \mathbb{R}^d$ is a non-empty compact set. The Assouad dimension of $F$ is defined by \begin{multline*} \dim_{\text{A}} F = \inf \Bigg\{ s \geqslant 0 \, \, : \, \exists \text{ constants } C,\rho > 0 \text{ such that, for all } \, \, 0< r< R \leqslant \rho,\\ \text{ we have }\sup_{x\in F} N_r (B(x,R)\cap F) \leqslant C\left(\frac{R}{r}\right)^{s} \Bigg\}\end{multline*} where $B(x,R)$ denotes the open ball of radius $R$ and centre $x$, and $N_r(E)$ is the smallest number of open sets in $\mathbb{R}^d$ with diameter less than or equal to $r$ required to cover a bounded set $E$.
When one is considering one of the box dimensions (upper or lower box), it is natural to consider the other at the same time. Similarly, the Assouad dimension has a natural dual that we will call the lower dimension $\dim_\text{L} F$, following Bylund and Gudayol \cite{bylund}. Other names exist for the lower dimension but they all use equivalent definitions. It is defined by \begin{multline*} \dim_{\text{L}} F = \sup \Bigg\{ s \geqslant 0 \, \, : \, \exists \text{ constants }C, \rho > 0 \text{ such that, for all } \, \, 0< r< R \leqslant \rho,\\ \text{ we have }\inf_{x\in F} N_r (B(x,R)\cap F) \geqslant C\left(\frac{R}{r}\right)^{s} \Bigg\}. \end{multline*}
For more information on some of the properties of these dimensions see \cite{robinson, luk, fraser}; generally the Assouad dimension should be thought of as giving information on the `densest' part of our set whilst the lower dimension does the same for the `thinnest' part, as will become clear in our results. Some of the more common dimensions that exist are the Hausdorff and upper and lower box dimensions, which we denote by $\dim_\text{H}$, $\overline{\dim}_\text{B}$ and $\underline{\dim}_\text{B}$ respectively, and we refer the reader to \cite[Chapters 2--3]{falconer} for their definitions and basic properties. When the upper and lower box dimension coincide we will simply call the common value the box dimension; this is the case in all sets that are considered in this paper. We will often compare our results to the analogous Hausdorff and box dimension results to highlight interesting properties. For any compact set $F$, we generally have \[ \dim_\text{L} F \ \leqslant \ \dim_\text{H} F\ \leqslant \ \underline{\dim}_\text{B} F \ \leqslant \ \overline{\dim}_\text{B} F \ \leqslant \ \dim_\text{A} F. \]
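To see how these quantities isolate the densest and thinnest parts of a set, consider the middle-third Cantor set: every ball scales in the same way, so all five dimensions above coincide at $\log 2/\log 3$. A small counting sketch (our own construction, using exact integer arithmetic):

```python
import math

def cantor_endpoints(level):
    """Integer left endpoints (in units of 3^-level) of the level-`level`
    intervals in the middle-third Cantor construction."""
    ends = [0]
    for _ in range(level):
        ends = [3 * e for e in ends] + [3 * e + 2 for e in ends]
    return ends

def n_small_in_big(m, n):
    """For each level-m interval (radius R ~ 3^-m around a point of the set),
    count the level-n intervals (r = 3^-n) needed to cover it."""
    small, big = cantor_endpoints(n), cantor_endpoints(m)
    return [sum(1 for e in small if e // 3 ** (n - m) == b) for b in big]

s = math.log(2) / math.log(3)
counts = n_small_in_big(2, 5)
# Every level-2 piece contains exactly 2^(5-2) = 8 = (R/r)^s level-5 pieces:
# the densest and thinnest parts scale identically, so dim_A = dim_L = s.
```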
\subsection{Bedford-McMullen sponges with a weak coordinate ordering condition} \label{BMintro}
We will now provide the definition of Bedford-McMullen carpets and sponges, some of the most classical self-affine fractals that have been studied. Much effort has been spent on understanding the dimension theoretic properties of such sets, starting from the original works of Bedford and McMullen \cite{bedford, mcmullen}, followed by \cite{lalley-gatz, baranski, kenyonperes, fengaffine, fraser_box}, to the more recent studies of the Assouad dimension in \cite{mackay, fraser, fraser-howroyd, dfsu, fraser-jordan}. The recent paper \cite{das-simmons} found an interesting application of these results to answer a long-standing open problem in dimension theory on the existence of an ergodic invariant measure of full Hausdorff dimension for every expanding repeller; see \cite{kenyonperes} for the full statement. They did this by showing the existence of self-affine sponges where any ergodic invariant measure supported on the sponge has a strictly lower Hausdorff dimension than the sponge itself. Much of the work in this area was aimed at showing that there always existed such a measure; however, sponges are simple enough to study, yet of sufficiently complicated structure, for their result to be possible.
The following notation is a mixture of Olsen's \cite{sponges} work with some additional parts introduced in \cite{fraser-howroyd} modified to solve our problem. \emph{Bedford--McMullen sponges} are defined as follows. Let $d\in \mathbb{N}$ and, for all $l=1, \ldots, d$, choose $n_l \in \mathbb{N}$ such that $1 < n_1 \leqslant n_2 \leqslant \cdots \leqslant n_d$. When these integers are all equal our sponge is simply a strictly self-similar set and the dimension is equal to the similarity dimension since our sponge satisfies the Open Set Condition; see \cite{olsenassouad}. Our formula will actually work for this case as well, something that the previous formula in \cite{fraser-howroyd} was not able to do. Let $\mathcal{I}_l=\left\{0,\ldots, n_l-1 \right\}$ and $\mathcal{I}=\prod_{l=1}^{d}\mathcal{I}_l$ and consider a fixed digit set $D \subseteq \mathcal{I}$ with at least two elements. For $\textbf{i} =\left( i_1,\ldots, i_d\right)\in D$ we define the affine contraction $S_\textbf{i}\colon [0,1]^d \rightarrow [0,1]^d$ by \[ S_{\textbf{i}}\left(x_1, \ldots, x_d \right) = \left( \frac{x_1+i_1}{n_1},\ldots,\frac{x_d+i_d}{n_d} \right) . \] Thanks to a well known theorem of Hutchinson \cite{hutch}, we know that there exists a unique non-empty compact set $K \subseteq [0,1]^d$ satisfying \[ K=\bigcup_{\textbf{i}\in D}S_{\textbf{i}}(K) \] called the attractor of the iterated function system (IFS) $\ \left\{ S_{\textbf{i}} \right\}_{\textbf{i}\in D}$. The attractor $K$ is called a \emph{self-affine sponge} when constructed with these contractions, and when $d=2$ we call the resulting set a \emph{carpet}. The $2$-dimensional case was first considered by Bedford and McMullen separately in \cite{bedford, mcmullen} and then Kenyon and Peres \cite{kenyonperes} generalised this to the case $d>2$. Without loss of generality we shall assume that $K$ does not lie in a hyperplane. 
If it does lie in a hyperplane, we simply restrict our attention to the minimal affine subspace containing $K$ and consider $K$ as a self-affine sponge in this lower-dimensional space.
One advantage of this construction is the ability to link our sponge $K$ and the symbolic space $D^{\mathbb{N}}$, the set of all infinite words over the symbols in $D$ equipped with the product topology generated by the cylinders $[\mathbf{i}_1,\ldots, \mathbf{i}_n]$ corresponding to all finite words over $D$. The function $\tau :D^{\mathbb{N}} \rightarrow [0,1]^d$ defined below is the key to this property: \[ \tau(\omega)=\bigcap_{n \in \mathbb{N}} S_{\omega\vert n}([0,1]^d) \] where $\omega= (\textbf{i}_1, \textbf{i}_2,\ldots)\in D^{\mathbb{N}}$, $\omega \vert n=\left( \textbf{i}_1, \ldots , \textbf{i}_n \right) \in D^n$, $\textbf{i}_j=(i_{j,1},\ldots,i_{j,d})$ for any $j\in \mathbb{N}$, and $S_{\omega\vert n} = S_{\left( \textbf{i}_1, \ldots ,\textbf{i}_n \right) } = S_{ \textbf{i}_1} \circ \cdots \circ S_{ \textbf{i}_n} $. Thus \[ \tau(D^{\mathbb{N}})=K. \]
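The coding map is easy to evaluate on periodic words: the $l$-th coordinate of $\tau(\omega)$ is the repeating base-$n_l$ expansion built from the $l$-th components of the digits. A sketch (our helper name; exact arithmetic via the standard `fractions` module), using the sponge with $n=(2,3,3)$ from Figure \ref{fig:part-affine}:

```python
from fractions import Fraction

def tau_periodic(word, n):
    """tau(omega) for the periodic infinite word omega = word word word ...:
    coordinate l is the repeating base-n_l expansion 0.(i_{1,l} ... i_{k,l})."""
    k = len(word)
    point = []
    for l, nl in enumerate(n):
        # value of the repeating block, summed as a geometric series
        digits = sum(word[j][l] * nl ** (k - 1 - j) for j in range(k))
        point.append(Fraction(digits, nl ** k - 1))
    return tuple(point)

n = (2, 3, 3)                       # subdivision numbers of the figure's sponge
p = tau_periodic([(0, 1, 1)], n)    # the fixed point of the map S_(0,1,1)
q = tau_periodic([(0, 1, 1), (1, 0, 1)], n)
```

For a one-letter word this recovers the fixed point $x = \mathbf{i}/(n-1)$ of $S_{\mathbf{i}}$ coordinatewise.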
The Assouad dimension of Bedford-McMullen sponges was calculated recently in \cite{fraser-howroyd} as long as all of the inequalities in $1<n_1< \cdots < n_d$ are strict. An analogous assumption was made in \cite{dfsu} when calculating the Assouad and lower dimensions of Lalley-Gatzouras sponges, a generalisation of the Bedford-McMullen sponges, which indicates that this issue might arise in further generalisations of self-affine sponges. This problem is unique to the Assouad and lower dimensions in the higher dimensional cases ($d>2$) and does not appear in the 2-dimensional case or when considering the Hausdorff or box dimensions of sponges. We call these problematic sponges \emph{Bedford-McMullen sponges with a weak coordinate ordering condition}. Let $1<n_1\leqslant\ldots\leqslant n_d$ with $d>2$ such that there is at least one $1\leqslant a < d$ with $n_a=n_{a+1}$. Using the same method as for the general Bedford-McMullen sponges we obtain an attractor $K$ which we define as a Bedford-McMullen sponge with a weak coordinate ordering condition. The main idea will be to consider the equal coordinates as forming a sort of `strictly self-similar set inside our self-affine sponge' and then employ methods that would generally be used in the self-similar setting for these problematic coordinates whilst continuing the original procedures for the ones with strict inequalities.
To simplify notation we define $\mathcal{J}_1=\mathcal{I}_1 \times \mathcal{I}_2 \times\cdots \times \mathcal{I}_{a_1}$ such that $1<n_1=\cdots=n_{a_1}<n_{a_1+1}$, then $\mathcal{J}_{2}=\mathcal{I}_{a_1+1} \times \cdots \times \mathcal{I}_{a_1+a_2}$, again with $n_{a_1+1}=\cdots=n_{a_1+a_2}<n_{a_1+a_2+1}$, and we continue by induction on the dimension until we obtain $\mathcal{I}=\prod_{l=1}^{d^*} \mathcal{J}_l$ where $d^*$ is the number of distinct integers from the list $n_1,\ldots, n_d$. So $\mathcal{J}_{l}$ is a collection of $a_l$ dimensions for any $l$ and we define $n_l^*=n_{a_1+\cdots+a_l}$; this is the subdivision number (the reciprocal of the contraction ratio) associated with all the coordinates in the $l^\text{th}$ collection. This groups the `self-similar' parts together and allows us to think of our sponge not as a $d$-dimensional self-affine set without a strict coordinate ordering but a `$d^{*}$-dimensional self-affine set with a strict coordinate ordering' with some `dimensions' actually being several combined coordinates.
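The grouping just described is purely mechanical; the following sketch (our helper name) recovers the cluster sizes $a_1,\ldots,a_{d^*}$ and the distinct values $n_1^*,\ldots,n_{d^*}^*$ from the non-decreasing list $n_1,\ldots,n_d$.

```python
def group_coordinates(n):
    """Group equal entries of the non-decreasing tuple n = (n_1, ..., n_d):
    returns cluster sizes (a_1, ..., a_{d*}) and values (n*_1, ..., n*_{d*})."""
    assert all(n[i] <= n[i + 1] for i in range(len(n) - 1))
    sizes, values = [], []
    for v in n:
        if values and values[-1] == v:
            sizes[-1] += 1          # extend the current run of equal entries
        else:
            values.append(v)        # start a new cluster
            sizes.append(1)
    return sizes, values

# For the sponge of Figure 1: n = (2, 3, 3) gives d* = 2 clusters.
sizes, values = group_coordinates((2, 3, 3))
```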
A \emph{pre-fractal} of an attractor $F$ associated to an IFS as defined above is the set defined by the application of all possible combinations of functions in our IFS to an initial set a certain number of times (the number of times is called the level of the pre-fractal). As the level tends to infinity, the pre-fractals will converge to our attractor in the Hausdorff metric, for any initial set satisfying some simple conditions, e.g. $\left[0,1\right]^d$. For a more detailed explanation see \cite[page 126]{falconer}. The $n$th pre-fractal of a sponge $K$ is \[ \bigcup_{\left( \textbf{i}_1, \ldots ,\textbf{i}_n \right)\in D^{n} } S_{\left( \textbf{i}_1, \ldots ,\textbf{i}_n \right) }([0,1]^d), \] which is just a collection of $\lvert D \rvert^n$ rectangles; later we will work with objects that are comparable to the pre-fractals. Below is a figure of the first pre-fractals for a given Bedford-McMullen sponge; we can already see how they will converge to our attractor $K$.
\begin{figure}
\caption{The first and second levels in the construction of a specific self-affine sponge satisfying a weak ordering condition in $\mathbb{R}^3$ where $n_1=2$, $n_2=3$, $n_3=3$ and $D = \{(0,0,0), (0,1,1), (0,2,2), (1,0,1) \}$.}
\label{fig:part-affine}
\end{figure}
The technique used to obtain the upper bound for the Assouad dimension will require an understanding of \emph{Bernoulli measures} supported on the set; thankfully, iterated function systems provide a nice way of constructing such measures by defining a Borel probability measure on the symbol space and then using the push-forward under $\tau$ to acquire a Bernoulli measure on our set. To do this we associate a probability vector $\ \left\{ p_{\textbf{i}} \right\}_{\textbf{i}\in D}$ with $D$ and let $\tilde{\mu}=\prod_{\mathbb{N}}\left( \sum_{\textbf{i}\in D}p_{\textbf{i}}\delta_{\textbf{i}} \right)$ be the natural Borel product probability measure on $D^{\mathbb{N}}$, where $\delta_{\textbf{i}}$ is the Dirac measure on $D$ concentrated at $\textbf{i}$. Finally, the measure \[ \mu(A)=\tilde{\mu}\circ \tau^{-1}(A), \] for a Borel set $A\subseteq K$, is our Bernoulli measure supported on $K$. For those not familiar with this technique, one could simply consider the Borel probability measure on the symbol space whilst remembering that cylinders and pre-fractals are closely related. We will later define a Bernoulli measure which will be used in the proofs to obtain an upper bound on the Assouad dimension. This measure will reflect both the strict inequalities and the equalities of the $n_i$. For now we simply introduce the notation needed for the construction of the measure and the statement of our result.
For any $l=1,2,\ldots,d^*$ we define $\pi_l \, : \, D \rightarrow \prod_{k=1}^l \mathcal{J}_k$ to be the projection onto the first $l$ `clusters' of coordinates so say $\pi_l (i_1,\ldots, i_d)=(i_1,\ldots, i_{a_1+\cdots+a_l})$, let $D_l=\pi_l (D)$ and $N=\#(\pi_1 D)$. Then, for $l=1, \dots, d^*-1$ and $(i_1, \dots , i_{a_1+\cdots+a_l}) \in D_l$ let \begin{align*} N(i_1, \ldots, i_{a_1+\cdots+a_l})=& \# \{ \left(i_{a_1+\cdots+a_l+1}, \ldots, i_{a_1+\cdots+a_l+a_{l+1}} \right) \in \mathcal{J}_{l+1} \\ &\qquad \qquad: (i_1, \ldots, i_{a_1+\cdots+a_l+a_{l+1}}) \in D_{l+1} \} \end{align*}
be the number of possible ways to choose the next $a_{l+1}$ digits of $(i_1, \dots , i_{a_1+\cdots+a_l})$. Thus $N(i_1, \ldots, i_{a_1+\cdots+a_l})$ is an integer between 1 and $\left(n_{l+1}^*\right)^{a_{l+1}}$, inclusive.
\subsection{Lalley-Gatzouras sponges with a weak coordinate ordering condition} \label{LGintro}
One can generalise the construction of Bedford-McMullen sponges to the sets first considered by Lalley and Gatzouras \cite{lalley-gatz}, who calculated the Hausdorff and box dimensions of Lalley-Gatzouras carpets. Mackay \cite{mackay} then calculated the Assouad dimension of these carpets. Following Mackay's notation we let $d\in \mathbb{N}$ and choose a fixed digit set $D\subset \mathcal{I} = \prod_{l=1}^d \left\{ 0,\ldots,n_l-1\right\}= \prod_{l=1}^d \mathcal{I}_l$ where $1<n_1\leqslant n_2 \leqslant \cdots \leqslant n_d$ are integers, note these integers are not always going to be related to our contraction ratios, we simply need them to construct our symbol space. Given an $\textbf{i}=(i_1,\ldots,i_d) \in D$ we define \[ D_{i_1,\ldots,i_{l-1}} = \left\{(j_1,\ldots,j_{l-1},j_l) \in \pi_l(D)\colon (i_1,\ldots, i_{l-1})=(j_1,\ldots,j_{l-1}) \right\} \] for any $l=2,\ldots, d$ , where $\pi_l$ is the projection function onto $\prod_{k=1}^l \mathcal{I}_k$. Then for each $\textbf{i}=(i_1,\ldots,i_d) \in D$ we associate contractions $c_{i_1,\ldots,i_l}$ and translations $t_{i_1,\ldots, i_l}$ for all $l=1,\ldots, d$ such that the following conditions hold: \begin{itemize}
\item $1>c_{i_1} \geqslant c_{i_1,i_2} \geqslant \cdots \geqslant c_{i_1,\ldots, i_d}>0$,
\item if $\textbf{i},\textbf{j}\in D $ such that $(i_{1},\ldots,i_{l}) = (j_1,\ldots, j_l)$ for some $l$, then $c_{i_1,\ldots,i_l}=c_{j_1,\ldots,j_l}$,
\item $\sum_{i\in \pi_1(D)} c_i \leqslant 1$,
\item for any $l=2,\ldots,d$ and $(i_1,\ldots,i_{l-1}) \in \pi_{l-1}(D)$, $\sum_{(j_1,\ldots,j_{l})\in D_{i_1,\ldots,i_{l-1}} } c_{j_1,\ldots,j_{l}} \leqslant 1$,
\item $0\leqslant t_i < t_j <1$ and $t_{i} + c_i \leqslant t_{j}$ for any $i,j \in \pi_1 D$, $i< j$,
\item $0\leqslant t_{i_1,\ldots,i_{l-1},i_l} < t_{i_1,\ldots,i_{l-1},j_l}<1$ and $t_{i_1,\ldots,i_{l-1},i_l} + c_{i_1,\ldots,i_{l-1},i_l} \leqslant t_{i_1,\ldots,i_{l-1},j_l}<1$ for any $l=2,\ldots,d$ and $(i_1,\ldots,i_l), (i_1,\ldots,j_l) \in D_{i_1,\ldots,i_{l-1}}$, $i_l < j_l$,
\item let $i=\max \{i\in \pi_1 D\}$ then $t_{i}+c_i \leqslant 1$,
\item for any $l=2,\ldots,d$ and $(i_1,\ldots,i_{l-1}) \in \pi_{l-1}(D)$, let $i_l=\max \{j_l \colon (i_1,\ldots,i_{l-1},j_l)\in D_{i_1,\ldots,i_{l-1}}\}$, then $t_{i_1,\ldots,i_{l-1},i_l}+c_{i_1,\ldots,i_{l-1},i_l} \leqslant 1$. \end{itemize} This is essentially saying that our maps will map the unit cube into itself and the IFS will satisfy the open set condition. Now for any $\mathbf{i}\in D$ define the affine contraction $S_{\textbf{i}} \colon [0,1]^d \rightarrow [0,1]^d$ by \[ S_{\textbf{i}}(x_1,\ldots, x_d) = (c_{i_1} x_1 + t_{i_1},\ldots, c_{i_1,\ldots,i_d}x_d+t_{i_1,\ldots,i_d}). \] The attractor of the IFS $\left\{S_{\textbf{i}} \right\}_{\textbf{i}\in D}$ is called a Lalley-Gatzouras sponge. This sponge satisfies a weak ordering condition only when, for some $l=2,\ldots, d$, $c_{i_1,\ldots,i_l}=c_{i_1,\ldots,i_{l-1}}$ for all $(i_1,\ldots, i_{l-1})\in \pi_{l-1}D$ and $(i_1,\ldots, i_{l})\in D_{i_1,\ldots, i_{l-1}}$. In this case we group together all the coordinates that cannot be ordered as before into $d^*$ many sets $\mathcal{J}_k= \prod_{i=a_1+\cdots + a_{k-1}+1}^{a_1+\cdots +a_k} \mathcal{I}_i$ where $c_{i_1,\ldots, i_{a_1+\cdots+a_{k-1}+1}}=\cdots = c_{i_1,\ldots, i_{a_1+\cdots+a_{k}}}$ for all $(i_1,\ldots, i_{a_1+\cdots+a_{k-1}+1})\in D_{i_1,\ldots, i_{a_1+\cdots+a_{k-1}}},\ldots,(i_1,\ldots, i_{a_1+\cdots+a_{k}})\in D_{i_1,\ldots, i_{a_1+\cdots+a_{k}-1}}$ and $(i_1,\ldots,i_{a_1+\cdots+a_{k-1}})\in \pi_{a_{1}+\cdots+a_{k-1}}D$; then our maps can be thought of as having the following form \[ S_{\textbf{i}}(\textbf{x}_1,\ldots, \textbf{x}_{d^*}) = (c_{i_1} \textbf{x}_1 + \textbf{t}_{i_1},\ldots, c_{i_1,\ldots,i_{d^*}}\textbf{x}_{d^*}+\textbf{t}_{i_1,\ldots,i_{d^*}}). 
\] In an abuse of notation we redefine $D_{i_1,\ldots,i_{a_1+\cdots +a_{l-1}}}$ to be \[ D_{i_1,\ldots,i_{a_1+\cdots +a_{l-1}}} = \left\{(j_1,\ldots,j_{a_1+\cdots+ a_{l}}) \in \pi_l(D)\colon (i_1,\ldots, i_{a_1+\cdots +a_{l-1}})=(j_1,\ldots,j_{a_1+\cdots +a_{l-1}}) \right\} \] and \[ D_l=\pi_l D \] where $\pi_l$ is the projection of $D$ onto $\prod_{k=1}^l \mathcal{J}_k$.
Finally let $s$ be the dimension of $\pi_1 D$ (a self-similar set satisfying the open set condition) and for any $\textbf{i}=(i_1,\ldots,i_{d}) \in D$ and $l=1,\ldots,d^*-1$, let $s(i_1,\ldots,i_{a_1+\cdots+a_l})$ be the unique number satisfying \[ \sum_{(i_1,\ldots,i_{a_1+\cdots+a_l+a_{l+1}})\in D_{i_1,\ldots,i_{a_1+\cdots+a_l}}} c_{i_1,\ldots,i_{a_1+\cdots+a_l+a_{l+1}}}^{s(i_1,\ldots,i_{a_1+\cdots+a_l})}=1. \]
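Each exponent $s(i_1,\ldots,i_{a_1+\cdots+a_l})$ is the similarity dimension of a self-similar system and can be computed by a one-dimensional root search, since $s\mapsto\sum_j c_j^s$ is strictly decreasing. A bisection sketch (our helper name, assuming all ratios lie in $(0,1)$ and the sum at $s=0$ is at least 1):

```python
def similarity_exponent(ratios, tol=1e-12):
    """The unique s >= 0 with sum(c**s for c in ratios) == 1, for
    contraction ratios 0 < c < 1 (at least one map, so a root exists)."""
    lo, hi = 0.0, 64.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if sum(c ** mid for c in ratios) > 1.0:
            lo = mid          # sum still too large: the root lies above mid
        else:
            hi = mid
    return (lo + hi) / 2

# Sanity checks: three ratios 1/3 give s = 1; two ratios 1/4 give s = 1/2.
```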
\section{Results} \label{results}
In this paper we will provide an answer to \cite[Question 4.2]{fraser-howroyd} by calculating the Assouad dimension of Bedford-McMullen and Lalley-Gatzouras sponges with a weak coordinate ordering condition.
\begin{thm} \label{BedfordMcMullen} Let $K$ be a Bedford-McMullen sponge with a weak coordinate ordering condition. The Assouad dimension of $K$ is \begin{equation}\label{new} \dim_{\text{\emph{A}}} K \ = \ \frac{\log N}{\log n_1^*} \ + \ \sum_{l=2}^{d^*} \frac{\displaystyle\log\max_{(i_1,\ldots, i_{a_1+\cdots+a_{l-1}})\in D_{l-1} } N(i_1, \ldots, i_{a_1+\cdots+a_{l-1}})}{\log n^*_l} \end{equation} and the lower dimension of $K$ is \[ \dim_{\text{\emph{L}}} K \ = \ \frac{\log N}{\log n_1^*} \ + \ \sum_{l=2}^{d^*} \frac{\displaystyle\log\min_{(i_1,\ldots, i_{a_1+\cdots+a_{l-1}})\in D_{l-1} } N(i_1, \ldots, i_{a_1+\cdots+a_{l-1}})}{\log n^*_l} \] \end{thm}
The dimension of a Bedford-McMullen sponge whose coordinates are strictly ordered, calculated in \cite{fraser-howroyd}, is \begin{equation}\label{old} \dim_\text{\emph{A}} K = \frac{\log N}{\log n_1}+ \sum_{l=2}^{d} \frac{\displaystyle\log \max_{(i_1,\ldots,i_{l-1})\in D_{l-1} } N'(i_1,\ldots, i_{l-1})}{\log n_l}, \end{equation} where $N'(i_1,\ldots, i_{l-1})$ is the number of ways of choosing the next digit of $(i_1,\ldots, i_{l-1})$ as a subset of the original $\mathcal{I}$. Formula (\ref{old}) is found by considering each coordinate separately as a collection of 1-dimensional Cantor sets (formed by the different columns). For each coordinate one finds the set with the greatest similarity dimension and then adds these maxima together to obtain the Assouad dimension of the whole.
When the coordinates of our sponge cannot be strictly ordered as in our setting, instead of always considering 1-dimensional sets we sometimes look at higher dimensional self-similar sets for which we can again find the greatest similarity dimension. The actual Assouad dimension of our set is then just the sum of the maximum similarity dimensions.
Comparing formulas (\ref{new}) and (\ref{old}), there are clearly sponges satisfying only a weak ordering condition for which the dimensions predicted by the two formulas coincide. But there are many others whose dimension, determined by (\ref{new}), is strictly smaller than that predicted by (\ref{old}). In fact, formula (\ref{old}) can give two different dimensions when a sponge satisfies only a weak ordering condition, as one could swap two coordinates with equal contraction ratio, changing the $N'(i_1,\ldots, i_l)$.
Equality of the two formulas occurs when the largest multi-dimensional columns consist of only the maximal 1-dimensional columns that appear in (\ref{old}), that is \begin{align*} \max_{(i_1,\ldots, i_{a_1+\cdots+a_{l-1}})\in D_{l-1} } N(i_1, \ldots, &i_{a_1+\cdots+a_{l-1}})= \\ &\prod_{k=1}^{a_l}\max_{(i_1,\ldots,i_{a_1+\cdots+a_{l-1}+k})\in D_{a_1+\cdots+a_{l-1}+k} } N'(i_1,\ldots, i_{a_1+\cdots+a_{l-1}+k}). \end{align*}
As mentioned previously this dimension drop does not occur in the box and Hausdorff dimensions, or the Assouad dimension of carpets, and as such it would be interesting to try and find other, non-carpet examples where such an issue manifests.
We motivate this difference by two examples. Figure \ref{fig:part-affine} shows a sponge $F$ whose coordinates cannot be strictly ordered but whose Assouad dimension can be calculated using either formula (\ref{new}) or (\ref{old}): \[ \dim_{\text{\emph{A}}} F = 1 + \frac{\log 3}{\log 3} = 2. \] However, if we slightly modify that sponge by adding one extra map so $D= \left\{(0,0,0),(0,1,1),(0,2,1),(0,2,2),(1,0,1) \right\}$ with the same contraction ratios, then we notice that the formulas for the Assouad dimension of $F$ provide two different values; the one in \cite{fraser-howroyd} gives \[ 2+ \frac{\log 2}{ \log 3} \approx 2.63 \] whereas the true value of the Assouad dimension is \[ \dim_{\text{\emph{A}}} F = 1 + \frac{\log 4}{\log 3} \approx 2.26. \] A third example of this phenomenon can be found in \cite[Section 4.3]{fraser-howroyd} where the Assouad dimension of a sponge with this weak coordinate ordering condition was calculated using the fact that the particular sponge was a product of two self-similar sets.
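Both example computations can be reproduced directly from formula (\ref{new}). The sketch below (our helper names) sums, over the clusters of equal $n_l$, the logarithm of the largest multi-dimensional column count; replacing max by min gives the lower dimension.

```python
import math

def cluster_bounds(n):
    """Split indices of the non-decreasing tuple n into runs of equal values:
    returns triples (start, end, n_l*) for each cluster J_l."""
    bounds, start = [], 0
    for i in range(1, len(n) + 1):
        if i == len(n) or n[i] != n[i - 1]:
            bounds.append((start, i, n[start]))
            start = i
    return bounds

def assouad_and_lower(D, n):
    """Weak-ordering formula: for each cluster, group digits of D by their
    prefix and count the distinct cluster-digits; sum log(count)/log(n_l*)."""
    dim_a = dim_l = 0.0
    for a, b, nl in cluster_bounds(n):
        columns = {}
        for i in D:
            columns.setdefault(i[:a], set()).add(i[a:b])
        sizes = [len(col) for col in columns.values()]
        dim_a += math.log(max(sizes)) / math.log(nl)
        dim_l += math.log(min(sizes)) / math.log(nl)
    return dim_a, dim_l

n = (2, 3, 3)
D1 = [(0, 0, 0), (0, 1, 1), (0, 2, 2), (1, 0, 1)]   # the sponge of Figure 1
D2 = D1 + [(0, 2, 1)]                               # the modified sponge
```

For `D1` the first term is $\log 2/\log 2 = 1$ and the second $\log 3/\log 3 = 1$; for `D2` the largest column in the second cluster has four elements, giving $1+\log 4/\log 3$.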
The Assouad and lower dimensions of Lalley-Gatzouras sponges were recently calculated in \cite{dfsu} but their proof relies on a condition analogous to the strict coordinate ordering needed in \cite{fraser-howroyd}. A simple extension of our result provides the following.
\begin{thm} \label{LalleyGatzouras} Let $K$ be a Lalley-Gatzouras sponge with a weak coordinate ordering condition. The Assouad dimension of $K$ is \begin{equation}\label{lalley} \dim_{\text{\emph{A}}} K \ = \ s \ + \ \sum_{l=2}^{d^*} \max_{(i_1,\ldots, i_{a_1+\cdots+a_{l-1}})\in D_{l-1} } s(i_1, \ldots, i_{a_1+\cdots+a_{l-1}}) \end{equation} and the lower dimension of $K$ is \[ \dim_{\text{\emph{L}}} K \ = \ s \ + \ \sum_{l=2}^{d^*} \min_{(i_1,\ldots, i_{a_1+\cdots+a_{l-1}})\in D_{l-1} } s(i_1, \ldots, i_{a_1+\cdots+a_{l-1}}). \] \end{thm}
Interestingly, Lalley and Gatzouras did not allow for any equality of contractions when computing the Hausdorff and box dimensions; however, Das and Simmons \cite[Corollary 3.4]{das-simmons} calculated the Hausdorff dimensions of a much more general class of sponges, which includes Lalley-Gatzouras sponges satisfying a weak ordering condition, and no dimension drop occurs for these dimensions.
\section{Proof} \label{proof}
In this section we will prove Theorems \ref{BedfordMcMullen} and \ref{LalleyGatzouras}. In Section \ref{notation} we shall introduce any remaining notation that is needed for the proof of Theorem \ref{BedfordMcMullen}. Then we will prove the upper bound for the Assouad dimension of Bedford-McMullen sponges in Section \ref{upperbound}, followed by the lower bound in Section \ref{lowerbound}. For completeness we will include a proof of both the upper and lower bounds for the lower dimension in Sections \ref{lowerlower} and \ref{upperlower}. Much of this part will be similar to the proof in \cite{fraser-howroyd}, so we will omit many of the technical details that have already been covered in that paper but will nonetheless provide a summary of these techniques. Finally, in Section \ref{LalleyGatzourasProof} we will sketch how the Bedford-McMullen proofs can be extended to the Lalley-Gatzouras case.
\subsection{Notation for Bedford-McMullen sponges with a weak coordinate ordering condition} \label{notation}
We define $\sigma:D^{\mathbb{N}} \rightarrow D^{\mathbb{N}}$ to be the shift map $\sigma(\textbf{i}_1,\textbf{i}_2,\ldots)=(\textbf{i}_2, \textbf{i}_3, \ldots)$, which acts in the inverse direction of $S_{\textbf{i}}$ but on the symbolic space instead of the unit cube.
Cylinder sets are unfortunately not optimal covers. We therefore introduce approximate cubes, which resemble cylinders and ordinary cubes at the same time: they retain the symbolic-space structure coming from the IFS, while providing the near-optimal covers that cubes do. These will be used extensively throughout our proofs. For all $r\in (0,1]$ we choose the unique integers $k_1(r),\ldots,k_{d}(r)$, greater than or equal to $0$, such that \[ \left(\frac{1}{n_l}\right)^{k_l(r)+1}< r \leqslant \left(\frac{1}{n_l}\right)^{k_l(r)} \] for $l=1,\ldots,d$. In particular, $ \frac{-\log r}{\log n_l}-1 < k_l(r) \leqslant \frac{-\log r}{\log n_l}$. We observe that $k_{a_1+\cdots+a_{l-1}+1}=\cdots = k_{a_1+\cdots+a_l}$ and denote the common value by $k_l^*(r)=k_{a_1+\cdots+a_l}(r)$ for any $l=1,\ldots, d^*$; this is just the integer associated with all the coordinates in $\mathcal{J}_l$. Then the approximate cube $Q(\omega, r)$ of (approximate) side length $r<1$ determined by $\omega =\left( \textbf{i}_1, \textbf{i}_2 , \ldots \right) =\left( (i_{1,1}, \dots, i_{1,d}), (i_{2,1}, \dots, i_{2,d}) , \ldots \right) \in D^{\mathbb{N}}$ is defined by \[ Q(\omega, r)=\left\{ \omega'=\left( \textbf{j}_1, \textbf{j}_2 , \ldots \right)\in D^{\mathbb{N}} : \forall \, \, l=1, \ldots, d \text{ and } \forall\, \, t= 1, \ldots, k_l(r) \text{ we have } j_{t,l}=i_{t,l} \right\}. \] The geometric analogue of approximate cubes, $\tau\left(Q(\omega, r)\right)$, is slightly harder to define and is contained in \[ \prod_{l=1}^d \left[\frac{i_{1,l}}{n_l}+\cdots+\frac{i_{k_l(r),l}}{n_l^{k_l(r)}} \, , \, \frac{i_{1,l}}{n_l}+\cdots+\frac{i_{k_l(r),l}}{n_l^{k_l(r)}}+\frac{1}{n_l^{k_l(r)}} \right]; \]
a rectangle in $\mathbb{R}^d$ aligned with the coordinate axes and of side lengths $n_l^{-k_l(r)}$, which are all comparable to $r$ since $ r \leqslant n_l^{-k_l(r)} < n_l r$.
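To illustrate the definition with concrete (hypothetical) parameters, not tied to any particular sponge in this paper, take $d=2$, $n_1=2$, $n_2=4$ and $r=1/10$. Then

```latex
% k_1(1/10): (1/2)^4 = 1/16 < 1/10 <= 1/8 = (1/2)^3,  so k_1(1/10) = 3.
% k_2(1/10): (1/4)^2 = 1/16 < 1/10 <= 1/4 = (1/4)^1,  so k_2(1/10) = 1.
\left(\tfrac{1}{2}\right)^{4} < \tfrac{1}{10} \leqslant \left(\tfrac{1}{2}\right)^{3}
\;\Longrightarrow\; k_1\!\left(\tfrac{1}{10}\right) = 3,
\qquad
\left(\tfrac{1}{4}\right)^{2} < \tfrac{1}{10} \leqslant \left(\tfrac{1}{4}\right)^{1}
\;\Longrightarrow\; k_2\!\left(\tfrac{1}{10}\right) = 1.
```

The corresponding rectangle has side lengths $2^{-3}=1/8$ and $4^{-1}=1/4$, each comparable to $r=1/10$ in the sense that $r \leqslant n_l^{-k_l(r)} < n_l r$.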
The lower bound for the Assouad dimension will use tangents, and therefore relies on a notion of `distance' between sets. This is simply the Hausdorff metric $d_\mathcal{H}$ on the space of non-empty compact subsets of $\mathbb{R}^d$, defined by \[ d_\mathcal{H}(A,B) \ = \ \inf \big\{ \varepsilon \geqslant 0 : A \subseteq [B]_{\varepsilon} \text{ and } B \subseteq [A]_{\varepsilon} \big\} \] where $[A]_{\varepsilon}$ denotes the closed $\varepsilon$-neighbourhood of a set $A$.
\subsection{Upper bound for Assouad dimension} \label{upperbound}
Generally one obtains the upper bound via a covering argument; however, since our sponges are $d$-dimensional objects, this approach is quite difficult to implement here. Instead we use a measure-theoretic technique, thanks to the following proposition, first proved in \cite[Proposition 3.1]{fraser-howroyd} and motivated by the measure-theoretic characterisations developed in \cite{luksak, konyagin}: \begin{prop}[\cite{fraser-howroyd}]\label{adupmeasure} Suppose there exists a Borel probability measure $\nu$ on $D^{\mathbb{N}}$ and constants $C >0$ and $s \geqslant 0$ such that for any $0< r<R\le1$ and $\omega \in D^{\mathbb{N}}$ we have \[ \frac{\nu \left( Q(\omega,R)\right)}{\nu \left( Q(\omega,r)\right)}\leqslant C \left( \frac{R}{r} \right)^s. \] Then $\dim_{\text{\emph{A}}} K \leqslant s$. \end{prop}
The proof of this proposition is simple and follows from the original definition of the Assouad dimension; for details see the original paper. Essentially it says that if one has a measure satisfying a suitable condition on approximate cubes, then one obtains an upper bound on the Assouad dimension. Thankfully, the measure used for this purpose in \cite{fraser-howroyd} can be modified so that our desired dimension becomes an upper bound.
The modified measure is defined by \[ p_\textbf{i}=p_{(i_1,\ldots,i_d)}=\frac{1}{N\left(\prod_{l=2}^{d^*} N(i_1,\ldots,i_{a_1+\cdots+a_{l-1}})\right)}. \] This is simply the `coordinate uniform measure' introduced for the regular sponges, except that on the coordinates with equal contractions the mass is distributed uniformly across the grouped digits, as desired. One can check directly from the definitions that $\sum_{ \textbf{i} \in D} p_\textbf{i} = 1$. We call this measure the `partial coordinate uniform measure', since it is a modified version of the original.
There is a precise formula for conditional probabilities on the symbolic space, as noted by Olsen \cite[Section 3.1]{sponges}. We omit the exact formulation, since it simply reduces to $p(i_{a_1+\cdots + a_{l-1}+1},\ldots,i_{a_1+\cdots + a_{l}} \vert i_{1},\ldots,i_{a_1+\cdots+a_{l-1}})=1/N(i_1,\ldots,i_{a_1+\cdots+a_{l-1}})$ for $l=2,\ldots,d^*$ whenever $(i_1, \ldots, i_{a_1+\cdots + a_{l}})\in D_l$, and $p( i_1, \ldots, i_{a_1} \vert \emptyset)=1/N$. This conditional probability is just the probability of picking the next block of digits, corresponding to the coordinates in $\mathcal{J}_l$, given the previous coordinates.
Technically our measure is defined on the pre-fractals of our sponge, not on approximate cubes. Fortunately, Olsen \cite[(6.2)]{sponges} noted the following straightforward formula \begin{equation} \label{approxcubemeasure} \tilde \mu(Q(\omega,r))=\prod^d_{l=1} \prod_{j=0}^{k_l(r)-1}p_l(\sigma^j\omega) \end{equation} where $p_l(\omega)=p(i_{1,l}\vert i_{1,1},\ldots,i_{1,l-1})$. The important fact to note here is that $p(i_{l}\vert i_{1},\ldots,i_{l-1}) \times p(i_{l+1}\vert i_{1},\ldots,i_{l})=p(i_l,i_{l+1}\vert i_{1},\ldots,i_{l-1})$, and as such we can combine the conditional probabilities in (\ref{approxcubemeasure}) whenever two successive coordinates have the same contraction ratio, so that $k_l(r)=k_{l+1}(r)$. We are now ready to obtain the upper bound.
\begin{proof} By (\ref{approxcubemeasure}), writing $s$ for the claimed value of the Assouad dimension, we have \begin{align*} \frac{\tilde \mu(Q(\omega,R))}{\tilde \mu(Q(\omega,r))} &=\frac{\prod^d_{l=1} \prod_{j=0}^{k_l(R)-1}p_l(\sigma^j\omega)}{\prod^d_{l=1} \prod_{j=0}^{k_l(r)-1}p_l(\sigma^j\omega)} \\ &=\prod^d_{l=1} \prod_{j=k_l(R)}^{k_l(r)-1}\frac{1}{p_l(\sigma^j\omega)} \\ &=\prod^{d^*}_{l=1} \prod_{j=k_l^*(R)}^{k_l^*(r)-1}\frac{1}{p(i_{j+1,a_1+\cdots+a_{l-1}+1},\ldots,i_{j+1,a_1+\cdots+a_{l}}\vert i_{j+1,1},\ldots, i_{j+1,a_1+\cdots+a_{l-1}})} \\ &\leqslant \left( \prod_{j=k_1^*(R)}^{k_1^*(r)-1} N \right)\left(\prod^{d^*}_{l=2} \prod_{j=k_l^*(R)}^{k_l^*(r)-1} \max_{(i_1,\ldots, i_{a_1+\cdots+a_{l-1}})\in D_{l-1} } N(i_1, \ldots, i_{a_1+\cdots+a_{l-1}}) \right) \\ &= N^{k_1^*(r)-k_1^*(R)} \left(\prod^{d^*}_{l=2} \max_{(i_1,\ldots, i_{a_1+\cdots+a_{l-1}})\in D_{l-1} } N(i_1, \ldots, i_{a_1+\cdots+a_{l-1}})^{k_l^*(r)-k_l^*(R)}\right) \\ & \leqslant N^{\log R/\log n_1^*-\log r/\log n_1^*+1} \\ &\qquad \left(\prod^{d^*}_{l=2} \max_{(i_1,\ldots, i_{a_1+\cdots+a_{l-1}})\in D_{l-1} } N(i_1, \ldots, i_{a_1+\cdots+a_{l-1}})^{\log R/\log n_l^*-\log r/\log n_l^*+1}\right) \\ & \leqslant \left(n^*_1\right)^{a_1} \times \cdots \times \left(n^*_{d^*}\right)^{a_{d^*}}\left(\frac{R}{r}\right)^{\displaystyle\frac{\log N}{\log n_1^*}} \\ & \qquad \left(\prod^{d^*}_{l=2} \left(\frac{R}{r}\right)^{\frac{\displaystyle\log \max_{(i_1,\ldots, i_{a_1+\cdots+a_{l-1}})\in D_{l-1} } N(i_1, \ldots, i_{a_1+\cdots+a_{l-1}})}{\displaystyle\log n_l^*}}\right) \\ &\leqslant n_d^{d}\left( \frac{R}{r}\right)^{s}. \end{align*}
This estimate combined with Proposition \ref{adupmeasure} gives us the desired upper bound.
\end{proof}
\subsection{Lower bound for Assouad dimension} \label{lowerbound}
For the lower bound we will use `weak tangents', a technique made possible by a proposition of Mackay and Tyson \cite[Proposition 6.1.5]{mackaytyson}, later modified by Fraser \cite[Proposition 7.7]{fraser}. This is the part where the original proof in \cite{fraser-howroyd} does not apply; there is, however, a way around this using some of the ideas detailed so far.
\begin{prop}[Very weak tangents]\label{tangents} Let $X\subset \mathbb{R}^d$ be compact and let $F$ be a compact subset of $X$. Let $(T_k)$ be a sequence of bi-Lipschitz maps defined on $\mathbb{R}^d$ with Lipschitz constants $a_k, b_k \geqslant 1$ such that \[ a_k \lvert x-y \rvert \leqslant \lvert T_k(x) - T_k(y) \rvert \leqslant b_k \lvert x-y \rvert \,\,\,\,\,\,\,\, (x,y\in\mathbb{R}^d) \] and \[ \sup_k b_k / a_k = C_0 <\infty \] and suppose that $T_k(F) \cap X \rightarrow \hat{F}$ in the Hausdorff metric. Then the set $\hat F$ is called a \emph{very weak tangent} to $F$ and, moreover, $\dim_{\text{\emph{A}}} F \geqslant \dim_{\text{\emph{A}}} \hat{F}$. \end{prop}
To simplify notation, we choose $\textbf{i}(l) = (i(l)_1, \ldots, i(l)_d)\in D$ for $l=2, \ldots, d^*$ to be an element of $D$ which attains the maximum value of $N(i_1, \ldots, i_{a_1+\cdots+a_{l-1}})$, i.e. \[ N \left(i(l)_1, \ldots, i(l)_{a_1+\cdots+a_{l-1}} \right) \ = \ \max_{(i_1,\ldots,i_{a_1+\cdots+a_{l-1}})\in D_{l-1}} N(i_1, \ldots, i_{a_1+\cdots+a_{l-1}}). \] There might be several possibilities for each $\textbf{i}(l)$; any choice will do.
The rest of this part is a brief explanation of the technique, which is very similar to that in \cite{fraser-howroyd}. The main idea is to construct a tangent $\hat{F}$ that is a product of self-similar sets, whose box dimension can then be calculated using standard formulas as in \cite{falconer}; this box dimension is precisely the desired Assouad dimension. To show that this set is indeed a tangent, one shows that $T_k(F) \cap X$ and $\hat{F}$ are both subsets of a product of a Menger-like sponge with $d^*-1$ pre-fractals of Menger-like sponges, one for each coordinate collection (with level depending on $k$), and that as $k$ tends to infinity the Hausdorff distance between all these sets tends to $0$. A complication might also arise from the definition of approximate cubes, but this is easily circumvented in the Assouad dimension case.
The desired tangent is \[ \hat{K}=\pi_1 K \times \prod_{l=2}^{d^*} K_l \] where $K_l$ is the Menger-like sponge obtained by the IFS acting on $[0,1]^{a_l}$ which forms a sponge identical to the maximal $a_l$-dimensional plane in the $a_1+\cdots + a_{l-1}+1$ to $a_1+\cdots + a_{l}$ coordinates of our original sponge. These are simply the Cantor sets of the original proof when the contraction ratios are different but self-similar sponges when they are equal. The set $\pi_1 K$ is the geometric projection of the sponge on to the first $a_1$ coordinates (note that we use the projection function $\pi$ in both the geometric and symbolic spaces).
For $l=2, \dots, d^*$ and $m \in \mathbb{N}$, we let $K_l^m$ be the $m^{\text{th}}$ pre-fractal of $K_l$ where our initial set is $[0,1]^{a_l}$. In particular, the set $K_l^m$ is a union of $N(i_1, \ldots, i_{a_1+\cdots+a_{l-1}})^m$ cubes of side lengths $(n_l^*)^{-m}$.
Given a geometric approximate cube $\tau(Q) =\tau(Q(\omega,r))$, we define a bi-Lipschitz map $T^Q: \tau(Q) \to [0,1]^d$ (or $T^Q: \mathbb{R}^d \to \mathbb{R}^d$) by \[ T^Q(\textbf{x})= \begin{pmatrix}n_1^{k_1(r)}\left( x_1-\left( \frac{i_{1,1}}{n_1}+\ldots+\frac{i_{k_1(r),1}}{n_1^{k_1(r)}} \right) \right)\\\vdots\\n_d^{k_d(r)}\left( x_d-\left( \frac{i_{1,d}}{n_d}+\ldots+\frac{i_{k_d(r),d}}{n_d^{k_d(r)}} \right) \right)\end{pmatrix}. \] Thus $T^Q$ translates $\tau(Q)$ such that the point closest to the origin from the rectangle containing $\tau(Q)$ becomes the origin and then scales it up by a factor of $n_l^{k_l(r)}$ in each coordinate $l$. Thus these maps take the natural rectangle containing $\tau(Q)$ precisely to the unit cube $[0,1]^d$. These maps clearly satisfy the conditions imposed by Proposition \ref{tangents}, i.e., they are restrictions of bi-Lipschitz maps on $\mathbb{R}^d$ with constants $b_Q=\sup_{l=1, \ldots, d}n_l^{k_l(r)}$ and $a_Q=\inf_{l=1, \ldots, d}n_l^{k_l(r)}$ satisfying \[ \frac{b_Q}{a_Q}\leqslant \sup_{l=1,\ldots,d}\frac{rn_l}{r}\leqslant n_d < \infty \] for any $Q$. This follows from the definition of $k_l(r)$ and does not rely on the inequality of the $n_l$.
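For a concrete (hypothetical) illustration of this distortion bound, take $d=2$, $n_1=2$, $n_2=4$ and $r=1/10$, for which the definition of $k_l(r)$ gives $k_1(r)=3$ and $k_2(r)=1$:

```latex
% b_Q / a_Q stays bounded by n_d even though the two scaling factors differ:
b_Q = \max\bigl\{2^{3},\,4^{1}\bigr\} = 8,
\qquad
a_Q = \min\bigl\{2^{3},\,4^{1}\bigr\} = 4,
\qquad
\frac{b_Q}{a_Q} = 2 \;\leqslant\; n_2 = 4.
```

So the maps $T^Q$ stretch the two coordinates by different factors, but the ratio of the factors remains uniformly bounded as $r \to 0$, which is all that Proposition \ref{tangents} requires.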
We define, for small $R$, $\omega(R)=\left(\textbf{i}_1, \textbf{i}_2, \ldots \right) \in D^{\mathbb{N}}$ where $\textbf{i}_t = (i_{t,1}, \dots, i_{t,d}) =\textbf{i}(l)$ for $t=k_l^*(R)+1, \ldots,k_{l-1}^*(R)$ for all $l=2, \ldots, d^*$. So $\omega(R)$ has the form \[ \omega(R)=\left( \textbf{i}_1, \ldots, \underbrace{\textbf{i}(d^*), \ldots, \textbf{i}(d^*)}_{k_{d^*-1}^*(R)-k_{d^*}^*(R) \text{ times}}, \ \ \underbrace{\textbf{i}(d^*-1), \ldots, \textbf{i}(d^*-1)}_{k_{d^*-2}^*(R)-k_{d^*-1}^*(R) \text{ times}}, \ldots, \underbrace{\textbf{i}(2), \ldots, \textbf{i}(2)}_{k_1^*(R)-k_{2}^*(R) \text{ times}}, \ldots \right). \]
We can then prove the following technical lemma, see \cite[Lemma 3.3]{fraser-howroyd} for details.
\begin{lma}[\cite{fraser-howroyd}] \label{productform} For $ R \in (0,1]$ small enough and $Q=Q(\omega(R),R)$, we have \[ T^Q(\tau(Q)) \ \subseteq \ \pi_1 K \times \prod_{l=2}^{d^*} K_l^{k_{l-1}^*(R)-k_{l}^*(R)}. \] \end{lma}
By using this lemma and the definition of pre-fractals we can show that \[ d_{\mathcal{H}}\left( \hat{K}, \ T^Q(\tau(Q(\omega(R),R))) \right)\rightarrow 0 \] as $R \to 0$.
One final technical lemma then addresses the following issue: we have \[ T^Q(K) \cap [0,1]^d \ \supseteq \ T^Q(\tau(Q(\omega(R),R))) \ \to \ \hat{K}, \] where the containment may be strict, so $\hat{K}$ itself need not arise as the limit of the sets $T^Q(K) \cap [0,1]^d$; the lemma below guarantees that any such limit contains $\hat{K}$.
\begin{lma} Let $E_k, \, F_k \subseteq [0,1]^d$ be sequences of non-empty compact sets which converge in the Hausdorff metric to compact sets $E$ and $F$ respectively. If $F_k \subseteq E_k$ for all $k$, then $F \subseteq E$. \end{lma}
The proof of this classical lemma is left to the reader. We can now finally calculate the lower bound.
\begin{proof} Standard results on the box dimensions of product sets \cite[Chapter 7]{falconer} and of self-similar sets \cite[Chapter 9]{falconer} imply that \begin{eqnarray*} \dim_{\text{B}} \hat{K} &=& \dim_{\text{B}} \pi_1K+\sum_{l=2}^{d^*} \dim_{\text{B}} K_l \\ &=& \frac{\log N}{\log n_1^*}+\sum_{l=2}^{d^*} \frac{\displaystyle\log\max_{(i_1,\ldots, i_{a_1+\cdots+a_{l-1}})\in D_{l-1} } N(i_1, \ldots, i_{a_1+\cdots+a_{l-1}})}{\log n^*_l}. \end{eqnarray*}
By compactness we can take a subsequence such that $T^Q(K) \cap [0,1]^d \rightarrow K'$; by Proposition \ref{tangents}, $K'$ is a very weak tangent to $K$. The previous lemma gives $\hat{K} \subseteq K'$, and so, by the monotonicity of the Assouad dimension, \begin{eqnarray*} \dim_{\text{A}} K \ \geqslant \ \dim_{\text{A}} K' \ \geqslant \ \dim_{\text{A}} \hat{K} \ \geqslant \ \dim_{\text{B}} \hat{K}, \end{eqnarray*} as required. \end{proof}
\subsection{Lower bound for lower dimension} \label{lowerlower}
The proof of the lower bound for the lower dimension is essentially the same as the proof in Section \ref{upperbound}. We start by recalling a proposition proved in \cite[Proposition 3.5]{fraser-howroyd}.
\begin{prop}[\cite{fraser-howroyd}]\label{lowlowmeasure} Suppose there exists a Borel probability measure $\nu$ on $D^{\mathbb{N}}$ and constants $C >0$ and $s \geqslant 0$ such that for any $0< r<R\le1$ and $\omega \in D^{\mathbb{N}}$ we have \[ \frac{\nu \left( Q(\omega,R)\right)}{\nu \left( Q(\omega,r)\right)}\geqslant C \left( \frac{R}{r} \right)^s. \] Then $\dim_{\text{\emph{L}}} K \geqslant s$. \end{prop}
Then, using a mixture of the original proof of the lower bound for the lower dimension and the new proof of the upper bound for the Assouad dimension, we obtain:
\begin{proof} \begin{align*} \frac{ \tilde \mu(Q(\omega,R))}{\tilde \mu(Q(\omega,r))} &\geqslant n_d^{-d}\left( \frac{R}{r}\right)^{\displaystyle\frac{\log N}{\log n_1^*} \ + \ \sum_{l=2}^{d^*} \frac{\displaystyle\log\min_{(i_1,\ldots, i_{a_1+\cdots+a_{l-1}})\in D_{l-1} } N(i_1, \ldots, i_{a_1+\cdots+a_{l-1}})}{\log n^*_l} }. \end{align*}
This estimate combined with Proposition \ref{lowlowmeasure} gives us the required lower bound. \end{proof}
\subsection{Upper bound for lower dimension} \label{upperlower}
The method for this final proof is again a mixture of the original argument and the new idea used to avoid the equality problem; it should be clear from the preceding proofs, so we leave the details to the reader.
\subsection{Extension to Lalley-Gatzouras sponges}\label{LalleyGatzourasProof}
In this section we introduce the more general notation and concepts needed to calculate the Assouad dimension of Lalley-Gatzouras sponges. The proof for the lower dimension then follows easily and is therefore omitted.
The definition of $k_l(r)$ cannot simply be carried over to Lalley-Gatzouras sponges, since the contractions in any given coordinate are not necessarily uniform. Instead, given a word $\omega=(\textbf{i}_1,\textbf{i}_2,\ldots) \in D^{\mathbb{N}}$, $r\in (0,\min_{(i_1,\ldots,i_d)\in D} c_{i_1,\ldots,i_d}]$ and $l=1,\ldots,d$, we choose $k_l(r,\omega)$ to be the unique integer, greater than or equal to $1$, such that \[ \prod_{m=1}^{k_l(r,\omega)+1} c_{i_{m,1},\ldots,i_{m,l}} < r \leqslant \prod_{m=1}^{k_l(r,\omega)} c_{i_{m,1},\ldots,i_{m,l}}. \] Note that when two coordinates, say $l$ and $l+1$, admit at least one $\textbf{j}\in D$ such that $c_{j_1,\ldots,j_l} > c_{j_1,\ldots,j_{l+1}}$, there exists a word $\omega=(\textbf{j},\ldots)$ such that $k_l(r,\omega)>k_{l+1}(r,\omega)$. When no such contraction exists, our sponge satisfies a weak ordering condition and we identify the two coordinates as one, as previously explained. As before we write $k^*_l(r,\omega)=k_{a_1+\cdots+a_l}(r,\omega)$.
The definition of approximate cubes is the same as for Bedford-McMullen sponges; note that the geometric analogue of approximate cubes for Lalley-Gatzouras sponges are contained in a slightly more notationally complex product of intervals, but the general concept still holds.
To calculate the upper bound for the Assouad dimension we again use the measure theoretic definition using the following measure \[ p_\mathbf{i}=p_{i_1,\ldots,i_d}=c_{i_1,\ldots,i_{a_1}}^s c_{i_1,\ldots,i_{a_1+a_2}}^{s(i_1,\ldots,i_{a_1})}\cdots c_{i_1,\ldots ,i_{d}}^{s(i_1,\ldots,i_{a_1+\cdots+a_{d^*-1}})}. \] We call this the `coordinate measure of full dimension', as, like the coordinate uniform measure, it distributes the mass in such a way that the measure `sees' the dimension of each coordinate individually, up to the weak ordering, where the dimension of the grouped coordinates is observed.
Naturally the conditional probabilities are what we expect: \[ p(i_{a_1+\cdots+a_{l-1}+1},\ldots,i_{a_1+\cdots+a_l}\vert i_1,\ldots,i_{a_1+\cdots+a_{l-1}})=c_{i_1,\ldots,i_{a_1+\cdots+a_l}}^{s(i_1,\ldots,i_{a_1+\cdots+a_{l-1}})}. \] Thus the following calculation provides our upper bound. \begin{align*} \frac{\tilde{\mu}(Q(\omega,R))}{\tilde{\mu}(Q(\omega,r))}& = \frac{\prod_{l=1}^d\prod_{j=0}^{k_l(R,\omega)-1}p_l(\sigma^j\omega)}{\prod_{l=1}^d\prod_{j=0}^{k_l(r,\omega)-1}p_l(\sigma^j\omega)} \\ &=\left(\prod_{j=k_1^*(R,\omega)}^{k_1^*(r,\omega)-1}c_{i_{j,1},\ldots,i_{j,a_1}}^{-s} \right)\times\left( \prod_{l=2}^{d^*}\prod_{j=k_l^*(R,\omega)}^{k_l^*(r,\omega)-1}c_{i_{j,1},\ldots,i_{j,a_1+\cdots+a_l}}^{-s(i_{j,1},\ldots,i_{j,a_1+\cdots+a_{l-1}})}\right)\\ & \leqslant \left(\prod_{j=k_1^*(R,\omega)}^{k_1^*(r,\omega)-1}c_{i_{j,1},\ldots,i_{j,a_1}}\right)^{-s} \times \prod_{l=2}^{d^*}\left( \prod_{j=k_l^*(R,\omega)}^{k_l^*(r,\omega)-1}c_{i_{j,1},\ldots,i_{j,a_1+\cdots+a_l}}\right)^{-\max_{(i_1,\ldots,i_{a_1+\cdots+a_{l-1}})\in D_{l-1}} s(i_{1},\ldots,i_{a_1+\cdots+a_{l-1}})}\\ & \leqslant \left( \min_{\mathbf{i}\in D} c_{i_1,\ldots,i_d}\right)^{-d} \left( \frac{R}{r}\right)^s\times \prod_{l=2}^{d^*}\left( \frac{R}{r}\right)^{\max_{(i_1,\ldots,i_{a_1+\cdots+a_{l-1}})\in D_{l-1}} s(i_{1},\ldots,i_{a_1+\cdots+a_{l-1}})} \end{align*} where the exponent is our desired dimension. Technically, to use the measure-theoretic approach we need an analogous version of Proposition \ref{adupmeasure}, which was originally proved just for Bedford-McMullen sponges; however, the proof for Lalley-Gatzouras sponges is the same and is omitted.
For the lower bound we wish to use Proposition \ref{tangents}. The technique for this part is the same as Section \ref{lowerbound} in the sense that we want to construct a weak tangent that is a product of self-similar sets. We define $T^Q$, where $Q$ is an approximate cube, to be the function which maps $Q$ to the unit cube as before.
For each $l=2,\ldots,d^*$, define $\mathbf{i}(l)=(i(l)_1,\ldots,i(l)_d)\in D$ to be an element such that \[ s(i(l)_1,\ldots,i(l)_{a_1+\cdots+a_{l-1}})=\max_{(i_1,\ldots,i_{a_1+\cdots+a_{l-1}})\in D_{l-1}}s(i_1,\ldots,i_{a_1+\cdots+a_{l-1}}). \] We also let $\mathbf{i}(\text{twist},l)=(i(\text{twist},l)_1,\ldots,i(\text{twist},l)_d)\in D$ be an element of $D$ such that \[ c_{i(\text{twist},l)_1,\ldots,i(\text{twist},l)_{a_1+\ldots+a_{l-1}}}> c_{i(\text{twist},l)_1,\ldots,i(\text{twist},l)_{a_1+\ldots+a_l}}. \] This condition will be used to make $k_l^*(R,\omega)<k_{l-1}^*(R,\omega)$ and is possible due to our identification of coordinates.
Heuristically, the approximate cubes $Q(\omega(R),R)$ that we zoom in on for our weak tangent are the same as before: from level $k_l^*(R,\omega(R))$ to level $k_{l-1}^*(R,\omega(R))$ we pick the largest column for the $l^{\text{th}}$ coordinate collection. However, we want the differences $k_{l-1}^*(R,\omega(R))-k_l^*(R,\omega(R))$ to be positive and to tend to infinity as $R$ goes to zero; there exist Lalley-Gatzouras sponges where this does not happen without an additional step. To overcome this we choose the first $k_{d}^*(R,\omega(R))$ elements of $\omega(R)$ to be our twist symbols, so that as $R$ goes to $0$ the differences tend to infinity.
Formally, for small enough $R$, we define $\omega(R)=(\mathbf{i}_1,\mathbf{i}_2,\ldots)\in D^\mathbb{N}$ by setting \[ \mathbf{i}_{m d^* + 1},\ldots,\mathbf{i}_{m d^* + d^*}= \mathbf{i}(\text{twist},1), \ldots,\mathbf{i}(\text{twist},d^*) \] for $m=0,1,\ldots, \lfloor\frac{k_d^*(R,\omega(R))}{d^*}\rfloor-1$ and $\mathbf{i}_t=(i_{t,1},\ldots,i_{t,d}) = \mathbf{i}(l)$ for all $t=k^*_l(R,\omega(R))+1,\ldots, k^*_{l-1}(R,\omega(R))$ and for all $l=2,\ldots, d^*$. Due to the definition of $k_l^*(R,\omega)$, fixing elements after the $k_l^*$-th symbol does not change $k_l^*(R,\omega)$, so such a word is well-defined.
To finish, in much the same way as in Section \ref{lowerbound}, one can check that $T^{Q(R,\omega(R))}(K) \cap [0,1]^d$ will converge to a set with a subset of the desired dimension.
\begin{center}
\textbf{Acknowledgements}
\end{center}
The author was supported by the EPSRC Doctoral Training Grant EP/N509759/1 whilst writing this paper. He thanks his supervisor Jonathan Fraser for the advice received during the project and an anonymous referee for many stimulating comments.
\end{document} |
\begin{document}
\title[Exponential convergence results for a 2D overhead crane with input delays]{Further results on the asymptotic behavior \\ of a 2D overhead crane with input delays:\\ Exponential convergence}
\author{Ka\"{\i}s Ammari} \address{UR Analysis and Control of PDEs, UR13ES64, Department of Mathematics, Faculty of Sciences of Monastir, University of Monastir, 5019 Monastir, Tunisia} \email{kais.ammari@fsm.rnu.tn}
\author{Boumedi\`ene Chentouf} \address{Kuwait University, Faculty of Science, Department of Mathematics, Safat 13060, Kuwait} \email{chenboum@hotmail.com, boumediene.chentouf@ku.edu.kw}
\begin{abstract} This article is concerned with the asymptotic behavior of a 2D overhead crane. Taking into account the presence of a delay in the boundary, and assuming that no displacement term appears in the system, a distributed (interior) damping feedback law is proposed in order to compensate the effect of the delay. Then, invoking the frequency domain method, the solutions of the closed-loop system are proved to converge exponentially to a stationary position. This improves the recent result obtained in \cite{ac}, where the rate of convergence is at most of polynomial type. \end{abstract}
\subjclass[2010]{34B05, 34D05, 70J25, 93D15} \keywords{Overhead crane; damping control; time-delay; asymptotic behavior; exponential convergence}
\maketitle
\thispagestyle{empty}
\section{Introduction}
\setcounter{equation}{0} \begin{figure}
\caption{The overhead crane model}
\label{fi1}
\end{figure}
In the present work, we consider an overhead crane system (see Fig. \ref{fi1}). It consists of a motorized platform of mass $m$ moving along a horizontal rail, where a flexible cable of unit length is attached. Moreover, the cable is assumed to hold a load mass $M$ to be transported. Taking into consideration the effect of time-delay in the boundary, the system is governed by the following coupled PDE-ODEs (for more details about the model, the reader is referred to \cite{ac,ANBCR}) \begin{equation} \left\{ \begin{array} [c]{ll} y_{tt}(x,t)-\left( ay_{x}\right) _{x}(x,t)+{\mathcal{F}}(x,t)=0, & 0<x<1,\;t>0,\\ my_{tt}(0,t)-\left( ay_{x}\right) (0,t)=\alpha y_{t}(0,t-\tau)-\beta y_{t}(0,t), & t>0,\\ My_{tt}(1,t)+\left( ay_{x}\right) (1,t)=0, & t>0,\\ y(x,0)=y_{0}(x),\;y_{t}(x,0)=y_{1}(x), & x \in (0,1),\\ y_{t}(0,t-\tau )=f(t-\tau), & t \in(0,\tau), \end{array} \right. \label{(1.1)} \end{equation} in which $y$ is the displacement of the cable, ${\mathcal{F}}(x,t)$ is the external controlling force (distributed control) which drives the platform along the rail. Furthermore, $\alpha >0, \, \beta>0$, and $\tau >0$ is the time-delay, whereas the space variable coefficient $a(x)$ is the modulus of tension of the cable.
We would like to point out that the feedback gain $\alpha$ is assumed to be positive throughout this article just for sake of simplicity. Otherwise, one can suppose that $\alpha \in \field{R}\setminus \{0\}$ and in this case it suffices to replace $\alpha$ by $|\alpha|$ in the whole paper. In turn, the modulus of tension of the cable $a(x)$ satisfies the following standard conditions (see \cite{ANBCR}) \begin{equation} \left\{ \begin{array} [c]{l} a\in H^{1}(0,1);\\ \text{there exists a positive constant}\;a_{0}\;\text{such that}\;a(x) \geq a_{0}>0 \;\;\text{for all}\;\;x \in\lbrack 0,1 \rbrack. \end{array} \right. \label{1.3} \end{equation}
Overhead cranes have been extensively studied by many authors, which has given rise to a large number of research articles (see for instance \cite{ac,ANBCR,AC,mif2,cos,mif1,Ra}). A fairly comprehensive literature review has been conducted in \cite{ac}, where the reader can get a broad idea of the stabilization outcomes achieved in different physical situations. Nonetheless, the effect of the presence of a delay term was disregarded in those works, and this motivated the authors of \cite{ac} to treat such a case. Specifically, the system (\ref{(1.1)}) with ${\bm {\mathcal F}=0}$ has been considered in \cite{ac}, where it has been shown that the solutions can be driven to an equilibrium state with a polynomial rate of convergence in an appropriate functional space. On the other hand, it has been proved that exponential convergence of solutions cannot be reached when ${\bm {\mathcal F}=0}$.
In this article, we go a step further and improve the convergence result of \cite{ac} through the action of the additional distributed (interior) control \begin{equation} {\mathcal F}(x,t)=\sigma y_t(x,t), \label{F} \end{equation} where $\sigma >0$. Specifically, the main contribution of the present work is to show that, despite the presence of the delay term in one boundary condition of (\ref{(1.1)}), the solutions of the closed-loop system (\ref{(1.1)})-(\ref{F}) converge {\bf exponentially} to an equilibrium state which depends on the initial conditions. It is also worth mentioning that another interesting finding of the current work is the proof that the presence of the boundary velocity (in addition to the interior control) is crucial for the convergence of solutions. Indeed, a non-convergence result is established when the boundary velocity is omitted ($\beta =0$ in (\ref{(1.1)})) despite the action of the interior control (\ref{F}). This shows that the ``destabilizing'' effect of the delay term in (\ref{(1.1)}) cannot be compensated through the action of {\bf only} the interior control.
Now, let us briefly present an overview of this paper. In Section \ref{sect2a}, we present a comprehensive study of the closed-loop system (\ref{(1.1)})-(\ref{F}) without the boundary velocity ($\beta=0$). The study includes, inter alia, well-posedness and, more importantly, a non-convergence result for the system with $\beta=0$. In turn, an exponential convergence outcome is proved for a shifted system associated to (\ref{(1.1)})-(\ref{F}) with $\beta=0$. In Section \ref{sect2}, we go back to our initial problem (\ref{(1.1)})-(\ref{F}) and prove that it is indeed well-posed in the sense of the semigroup theory of linear operators. LaSalle's principle is used in Section \ref{sect3} to show that the solutions of the closed-loop system (\ref{(1.1)})-(\ref{F}) converge to an equilibrium state whose expression is explicitly determined. In Section \ref{sect4}, the convergence of solutions to the stationary position is shown to be exponential. The proof is based on the frequency domain method. Lastly, the paper ends with a conclusion.
\section{The closed-loop system (\ref{(1.1)})-(\ref{F}) without boundary velocity} \label{sect2a} \setcounter{equation}{0} The main concern of this section is to provide a complete study of the closed-loop system (\ref{(1.1)})-(\ref{F}) with $\beta=0$, namely, \begin{equation} \left\{ \begin{array} [c]{ll} y_{tt}(x,t)-\left( ay_{x}\right) _{x}(x,t)+\sigma y_t(x,t)=0, & 0<x<1,\;t>0,\\ my_{tt}(0,t)-\left( ay_{x}\right) (0,t)=\alpha y_{t}(0,t-\tau), & t>0,\\ My_{tt}(1,t)+\left( ay_{x}\right) (1,t)=0, & t>0,\\ y(x,0)=y_{0}(x),\;y_{t}(x,0)=y_{1}(x), & x \in (0,1),\\ y_{t}(0,t-\tau )=f(t-\tau), & t \in(0,\tau). \end{array} \right. \label{(01.1)} \end{equation} Primary emphasis is placed on the study of the asymptotic behavior of solutions to (\ref{(01.1)}). In fact, we shall prove that the solutions of (\ref{(01.1)}) do not converge to their equilibrium state. This outcome justifies why the boundary velocity $y_t(0,t)$ has to be present in (\ref{(01.1)}) if one would like to have a convergence result.
\subsection{Well-posedness of the system (\ref{(01.1)})} \label{sect02} \setcounter{equation}{0}
Thanks to the useful change of variables \cite{da} $$u(x,t)=y_{t}(0,t-x\tau),$$ the system (\ref{(01.1)}) can be rewritten as \begin{equation} \left\{ \begin{array} [c]{ll} y_{tt}(x,t)-(ay_{x})_{x}(x,t) + \sigma y_t(x,t) =0, & (x,t)\in(0,1)\times(0,\infty),\\ \tau u_{t}(x,t)+u_{x}(x,t)=0, & (x,t)\in(0,1)\times(0,\infty),\\[1mm] my_{tt}(0,t)-\left( ay_{x}\right) (0,t)=\alpha u(1,t), & t>0,\\ My_{tt}(1,t)+\left( ay_{x}\right) (1,t)=0, & t>0,\\ y(x,0)=y_{0}(x),\;y_{t}(x,0)=y_{1}(x), & x\in(0,1),\\ u(x,0)=y_{t}(0,-x\tau)=f(-x\tau), & x\in(0,1). \end{array} \right. \label{03} \end{equation} Then, consider the state variable $\Phi=(y,z,u,\xi,\eta),$ where $z(\cdot,t)=y_{t}(\cdot,t),\;\xi (t)=y_{t}(0,t)$ and $\eta (t)=y_{t}(1,t)$. Next, the state space of our system is \[ {\mathcal{H}}_0=H^{1}(0,1)\times L^{2}(0,1)\times L^{2}(0,1)\times\mathbb{R}^{2}, \] equipped with the following real inner product \begin{equation} \begin{array} [c]{l} \langle(y,z,u,\xi,\eta),(\tilde{y},\tilde{z},\tilde{u},\tilde{\xi},\tilde {\eta})\rangle_{\mathcal{H}_0}=\displaystyle\int_{0}^{1}\left( ay_{x}\tilde {y}_{x}+z\tilde{z}\right) \, dx+K \tau\int_{0}^{1} u \tilde{u} \,dx+m \xi \tilde{\xi }+M\eta\tilde{\eta}\\ +\displaystyle \varpi \left[ \int_{0}^{1}(\sigma y+z) dx+m \xi+M \eta-\alpha y(0)+\tau \alpha\int_{0}^{1} u \, dx \right] \\ \hspace{5.7cm} \displaystyle \times \left[ \int_{0}^{1} (\sigma \tilde{y}+\tilde{z}) \, dx+m \tilde{\xi }+M \tilde{\eta}-\alpha{\tilde{y}}(0)+\tau \alpha \int_{0}^{1}\tilde{u} \, dx \right], \label{0ip} \end{array} \end{equation} where $K>0$ satisfies the condition \begin{equation} \alpha \leq K,\label{0uni} \end{equation} and $\varpi$ is a positive constant sufficiently small so that the norm induced by (\ref{0ip}) is equivalent to the usual norm of ${\mathcal{H}}_0$. The proof of this claim runs along much the same lines as in \cite{ac} and hence the details are omitted.
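For the reader's convenience, here is a short verification (a routine computation, under the smoothness needed to differentiate $y_t$) that the change of variables produces the transport equation and the boundary identities used in (\ref{03}): since $u(x,t)=y_{t}(0,t-x\tau)$,

```latex
u_t(x,t) = y_{tt}(0,t-x\tau), \qquad u_x(x,t) = -\tau\, y_{tt}(0,t-x\tau),
\qquad\text{so}\qquad \tau\, u_t(x,t) + u_x(x,t) = 0,
```

while $u(0,t)=y_t(0,t)$ and $u(1,t)=y_t(0,t-\tau)$, so the delayed boundary term $\alpha y_{t}(0,t-\tau)$ in (\ref{(01.1)}) becomes the local term $\alpha u(1,t)$ in (\ref{03}).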
Now, one can write the closed-loop system (\ref{03}) as follows \begin{equation} \left\{ \begin{array} [c]{l} {\Phi}_t (t)={\mathcal{A}_0}\Phi(t),\\ \Phi(0)=\Phi_{0}, \end{array} \right. \label{0si} \end{equation} in which $\Phi=(y,z,u,\xi,\eta)$, $\Phi_{0}=(y_{0},y_{1},f(-\tau\cdot),\xi_{0},\eta_{0})$ and
${\mathcal{A}_0}$ is a linear operator defined by \begin{equation} \begin{array} [c]{l} {\mathcal{D}}({\mathcal{A}_0})=\left\{ (y,z,u,\xi,\eta)\in{\mathcal{H}}_0;y\in H^{2}(0,1),\;\;z,u\in H^{1}(0,1),\;\;\xi=u(0)=z(0),\;\eta=z(1)\right\} ,\\ \displaystyle{{\mathcal{A}_0}}(y,z,u,\xi,\eta)=\left( z,(ay_{x})_{x}-\sigma z,-\frac{u_{x}}{\tau},\frac{1}{m}\left[ (ay_{x})(0)+\alpha u({1})\right],-\displaystyle\frac{(ay_{x})(1)}{M}\right),\label{01.62} \end{array} \end{equation} for each $(y,z,u,\xi,\eta)\in {\mathcal{D}}({\mathcal{A}_0})$.
The well-posedness result is stated below: \begin{theo} Assume that (\ref{1.3}) and (\ref{0uni}) hold. Then, the operator ${\mathcal{A}_0}$ defined by (\ref{01.62}) is densely defined in ${\mathcal{H}}_0$ and generates a $C_{0}$-semigroup $e^{t{\mathcal{A}_0}}$ on ${\mathcal{H}}_0$. Consequently, if $\Phi_{0}\in{\mathcal{D}}({\mathcal{A}_0})$, then the solution $\Phi$ of (\ref{0si}) is strong and belongs to $C([0,\infty);{\mathcal{D}}({\mathcal{A}_0}))\cap C^{1}([0,\infty);\mathcal{H}_0)$. Moreover, for any initial condition $\Phi_{0}\in{\mathcal{H}}_0$, the system (\ref{0si}) has a unique mild solution $\Phi\in C([0,\infty);\mathcal{H}_0)$. \label{(0t1)} \end{theo}
\begin{proof} Given $\Phi=(y,z,u,\xi,\eta)\in{\mathcal{D}}({{\mathcal{A}_0}}),$ and using (\ref{0ip}) as well as (\ref{01.62}), we obtain after simple integrations by parts \[ \langle{{\mathcal{A}_0}}\Phi,\Phi\rangle_{{\mathcal{H}}_0}=\displaystyle -\sigma \int_{0}^{1} z^{2} \,dx+(ay_{x} )(1)z(1)-(ay_{x})(0)z(0)-\displaystyle\frac{K}{2}(u^{2}(1)-u^{2}(0))+\xi(ay_{x})(0)+\alpha\xi u(1) \] \[ -\eta(ay_{x})(1)+\varpi\left( \int_{0}^{1}(\sigma y+z)\,dx+\tau\alpha\int_{0}^{1}u\,dx+m\xi+M\eta-\alpha y(0)\right) \left( \alpha u(0)-\alpha z(0)\right) \] \[ =\displaystyle -\sigma \int_{0}^{1} z^{2} \,dx+\alpha\xi u(1)-\frac{K}{2}u^{2}(1)+\displaystyle\frac{K}{2}u^{2}(0). \] Applying Young's inequality, we have \begin{equation} \langle{{\mathcal{A}_0}}\Phi,\Phi\rangle_{{\mathcal{H}}_0} \leq \displaystyle -\sigma \int_{0}^{1} z^{2} \,dx +\displaystyle\frac{K+\alpha}{2} \xi^{2}+\displaystyle\frac{\alpha-K}{2}u^{2}(1)\label{0dis1}. \end{equation} Therefore, the operator \begin{equation} {\mathcal{P}}:={{\mathcal{A}_0}}-\displaystyle\frac{K+\alpha}{2}I \label{pp} \end{equation} is dissipative due to the assumption (\ref{0uni}).
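For completeness, the passage to (\ref{0dis1}) combines the fact that $u(0)=\xi$ on ${\mathcal{D}}({\mathcal{A}_0})$ with the elementary Young inequality: \[ \alpha\xi u(1)\leq\frac{\alpha}{2}\xi^{2}+\frac{\alpha}{2}u^{2}(1), \qquad \frac{K}{2}u^{2}(0)=\frac{K}{2}\xi^{2}, \] which together yield precisely the coefficients $\frac{K+\alpha}{2}$ and $\frac{\alpha-K}{2}$ appearing in (\ref{0dis1}).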
Now, let us show that the operator $\lambda I-{\mathcal{P}}$ is onto $\mathcal{H}_0$ for $\lambda>0$. To do so, let $(f,g,v,p,q)\in{\mathcal{H}}_0$, and let us look for $(y,z,u,\xi,\eta)\in{\mathcal{D}}({\mathcal{A}_0})$ such that $(\lambda I-{{\mathcal{A}_0}})(y,z,u,\xi,\eta)=(f,g,v,p,q)$. This problem reduces to \begin{equation} \left\{ \begin{array} [c]{l} \lambda (\lambda+\sigma) y-(ay_{x})_{x}=(\lambda+\sigma) f+g,\\ \lambda\left[ m\lambda-\alpha e^{-{\tau}\lambda}\right] y(0)-(ay_{x})(0)=mp+(m\lambda-\alpha e^{-{\tau}\lambda})f(0)+\displaystyle{\tau}\alpha\int_{0}^{1}e^{-{\tau}\lambda (1-s)}v(s)\,ds,\\ \lambda^{2}My(1)+(ay_{x})(1)=Mq+\lambda Mf(1), \end{array} \right. \label{0ra1} \end{equation} whose weak formulation is \[ \displaystyle\int_{0}^{1}\left[ (\lambda+\sigma) \phi y+ay_{x}\phi_{x}\right] \, dx+\lambda\left[ m\lambda-\alpha e^{-{\tau}\lambda}\right] y(0)\phi(0)+\lambda^{2}My(1)\phi(1) \] \[ =\int_{0}^{1}\left[ (\lambda+\sigma) f+g \right] \phi\,dx+\left[ mp+(m\lambda -\alpha e^{-{\tau}\lambda})f(0)+\displaystyle{\tau}\alpha\int_{0} ^{1}e^{-{\tau}\lambda(1-s)}v(s)\,ds\right] \phi(0) \] \[+\left[ Mq+\lambda Mf(1)\right] \phi(1).\] Applying the Lax--Milgram theorem \cite{br}, there exists a unique solution $y\in H^{2}(0,1)$ of (\ref{0ra1}), for $\lambda>0$, and hence the range of $\lambda I-{\mathcal{A}_0}$ is ${\mathcal{H}}_0$. Now, since ${\mathcal{P}}:={{\mathcal{A}_0}}-\displaystyle\frac{K+\alpha}{2}I$, one can deduce that $\lambda I-{\mathcal{P}}$ is also onto ${\mathcal{H}}_0$. Combining the latter with (\ref{0dis1}) and invoking semigroup theory \cite{Pa:83}, the operator ${\mathcal{P}}$ generates on ${\mathcal{H}}_0$ a $C_{0}$-semigroup of contractions $e^{t {\mathcal{P}}}$. Lastly, utilizing classical results on bounded perturbations \cite{Pa:83}, we conclude that ${\mathcal{A}_0}$ also generates a $C_{0}$-semigroup on ${\mathcal{H}}_0$. 
The other assertions of the theorem directly follow from the semigroup theory of linear operators \cite{Pa:83}. \end{proof}
\begin{rem} Obviously, $\lambda=0$ is an eigenvalue of ${\mathcal{A}_0}$ whose eigenfunction is $(C,0,0,0,0)$, where $C \in\mathbb{R} \setminus\{0\}$. Thus, the system (\ref{0si}) is not stable. Nevertheless, one may expect a convergence result of the solutions of (\ref{0si}) to some equilibrium state $(C,0,0,0,0)$ as in \cite{ac} (see also \cite{ac2}). It turns out that this is not the case. The proof of this claim will be given in the next section. \label{rem1} \end{rem}
\subsection{Lack of convergence} \label{sect9} As stated before, we shall prove that the solutions of the system (\ref{(01.1)}) do not converge to their equilibrium state $(C,0,0,0,0)$. First, consider the following subspace \begin{equation} \tilde{\mathcal{H}}_0 = \left\{ (y,z,u,\xi,\eta) \in {\mathcal{H}}_0; \, \int_{0}^{1} \left( \sigma y(x)+z(x) +\alpha\tau u(x) \right) \, dx -\alpha \, y(0) + m \xi+ M \eta= 0 \right\}, \label{h0} \end{equation} and the operator \[ \tilde{\mathcal{A}}_0 : \mathcal{D}(\tilde{\mathcal{A}}_0) := \mathcal{D}({\mathcal{A}_0}) \cap \tilde{\mathcal{H}}_0 \subset\tilde{\mathcal{H}}_0 \rightarrow \tilde{\mathcal{H}}_0, \] \begin{equation} \label{til0} \tilde{\mathcal{A}}_0 (y,z,u,\xi,\eta) = \mathcal{A}_0 (y,z,u,\xi,\eta), \; \mbox{for any} \; (y,z,u,\xi,\eta) \in \mathcal{D}(\tilde{\mathcal{A}}_0). \end{equation} Based on the outcomes of Section \ref{sect02}, the operator $\tilde{\mathcal{A}}_0$ generates a $C_{0}$-semigroup $e^{t \tilde{\mathcal{A}}_0}$. However, $0$ is not an eigenvalue of $\tilde{\mathcal{A}}_0$.
Furthermore, in order to reach our target (the solutions of the system (\ref{(01.1)}) do not converge to their equilibrium state $(C,0,0,0,0)$), it suffices to prove that the semigroup $e^{t \tilde{\mathcal{A}}_0}$ generated by the operator $\tilde{\mathcal{A}}_0$ (see (\ref{til0})) is not stable on $\tilde{\mathcal{H}}_0$. To do so, assume that $a(x)=1$, for all $x \in [0,1]$. Then, consider a nonzero complex number $\lambda$ and a nonzero function $f$ in $H^2(0,1)$. The immediate task is to seek a solution $y(x,t)=e^{\lambda t} f(x)$ of the system associated to the operator $\tilde{\mathcal{A}}_0$ on $\tilde{{\mathcal{H}}}_0$, namely, \begin{equation} \left\{ \begin{array}[c]{ll} y_{tt}(x,t)-y_{xx}(x,t)+\sigma y_{t}(x,t)=0, & 0<x<1,\;t>0,\\ my_{tt}(0,t)- y_{x} (0,t)=\alpha y_{t}(0,t-\tau), & t>0,\\ My_{tt}(1,t)+ y_{x} (1,t)=0, & t>0, \label{1.6f} \end{array} \right. \end{equation} with $ \int_{0}^{1} \left( \sigma y(x,t)+y_t(x,t)\right) \, dx +\alpha \tau \int_{0}^{1} y_{t}(0,t-x \tau) \, dx -\alpha \, y(0,t) + m y_{t}(0,t)+ M y_{t}(1,t)= 0.$ A straightforward computation gives the following system \begin{equation} \left\{ \begin{array}{l} f_{xx} -\left(\lambda^2+\sigma \lambda\right) f=0,\\ f_{x}(0)+\lambda \left(\alpha e^{-\lambda \tau}-m \lambda \right) f(0)=0,\\ f_{x}(1)+M \lambda^2 f(1)=0.\label{k2} \end{array} \right. \end{equation} Solving the differential equation of (\ref{k2}) yields $f(x)=k_1 e^{-\sqrt{\lambda^2+\sigma \lambda}\, x}+k_2 e^{\sqrt{\lambda^2+\sigma \lambda}\, x}$, where $k_1$ and $k_2$ are constants to be determined via the boundary conditions in (\ref{k2}). 
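For the reader's convenience, write $s=\sqrt{\lambda^2+\sigma \lambda}$ and $A=\lambda \left(\alpha e^{-\lambda \tau}-m \lambda\right)$. Then the boundary conditions of (\ref{k2}) applied to $f(x)=k_1 e^{-s x}+k_2 e^{s x}$ lead to the homogeneous linear system \[ \left\{ \begin{array}{l} (A-s)\,k_1+(A+s)\,k_2=0,\\ \left(M\lambda^2-s\right)e^{-s}\,k_1+\left(M\lambda^2+s\right)e^{s}\,k_2=0, \end{array} \right. \] which admits a nontrivial solution $(k_1,k_2)$ if and only if its determinant \[ (A-s)\left(M\lambda^2+s\right)e^{s}-(A+s)\left(M\lambda^2-s\right)e^{-s} \] vanishes; this is precisely the left-hand side of the characteristic equation (\ref{k3}).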
The latter leads us to claim that $f$ is a nontrivial solution of (\ref{k2}) if and only if $\lambda$ is a nonzero solution of the following equation: \begin{eqnarray} && \left[ \lambda \left( \alpha e^{-\lambda \tau} -\lambda m \right) -\sqrt{\lambda^2+\sigma \lambda} \right] \left[M \lambda^2+ \sqrt{\lambda^2+\sigma \lambda} \right] e^{\sqrt{\lambda^2+\sigma \lambda}} \nonumber\\ &-& \left[ \lambda \left( \alpha e^{-\lambda \tau} -\lambda m \right) + \sqrt{\lambda^2+\sigma \lambda} \right] \left[M \lambda^2 - \sqrt{\lambda^2+\sigma \lambda} \right] e^{-\sqrt{\lambda^2+\sigma \lambda}}=0.\label{k3} \end{eqnarray} For the sake of simplicity, we take $\lambda=\sigma >0$. Next, we choose $\tau=\sqrt{2}$ and $M=\sqrt{2}/\sigma.$ Thereby, (\ref{k3}) yields \[\alpha= \sqrt{2} \left(1+\frac{m}{M}\right) e^{\frac{2}{M}}.\] Whereupon, (\ref{k3}) holds for $\lambda = \sigma >0$ and the above values of the time-delay $\tau$ as well as the physical parameters $M$ and $\alpha$. Thus, $y(x,t)=e^{\sigma t} f(x)$ is a solution of (\ref{1.6f}), which clearly does not tend to zero in $\tilde{{\mathcal{H}}}_0$. To summarize, we have shown the following result: \begin{theo} Let $\tau=\sqrt{2}$, $M=\sqrt{2}/\sigma$, and $\alpha=\sqrt{2} \left(1+\frac{m}{M}\right) e^{\frac{2}{M}}.$ Although the assumption (\ref{0uni}) holds, the solutions of (\ref{1.6f}) are unstable in $\tilde{{\mathcal{H}}}_0$. In other words, the solutions of the closed-loop system (\ref{(01.1)}) do not converge in ${\mathcal{H}}_0$ to their equilibrium state as $t\longrightarrow+\infty$. \label{non} \end{theo}
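\begin{rem} The parameter choice in Theorem \ref{non} can be verified directly; we include the computation for the reader's convenience. With $\lambda=\sigma$ one has $\sqrt{\lambda^{2}+\sigma\lambda}=\sigma\sqrt{2}$, while $\tau=\sqrt{2}$ and $M=\sqrt{2}/\sigma$ give $M\lambda^{2}=\sigma\sqrt{2}$ and $e^{-\lambda\tau}=e^{-\sigma\sqrt{2}}$. Hence the second product in (\ref{k3}) vanishes, and the first product vanishes if and only if \[ \sigma\left(\alpha e^{-\sigma\sqrt{2}}-\sigma m\right)=\sigma\sqrt{2},\quad\mbox{that is,}\quad \alpha=\left(\sqrt{2}+\sigma m\right)e^{\sigma\sqrt{2}}=\sqrt{2}\left(1+\frac{m}{M}\right)e^{\frac{2}{M}}. \] \end{rem}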
\subsection{Asymptotic behavior of $e^{t\mathcal{P}}$} \label{sect03} In this section, we study the asymptotic behavior of the semigroup $e^{t\mathcal{P}}$ generated by the operator $\mathcal{P}$ (see (\ref{pp})) which was introduced in the proof of Theorem \ref{(0t1)}. First, going back to the proof of Theorem \ref{(0t1)}, one can claim that the operator $\left(\lambda I-{\mathcal{P}} \right) ^{-1}$ exists and maps ${\mathcal{H}}_0$ into ${\mathcal{D}}({\mathcal{P}})$. Thereafter, the Sobolev embedding theorem \cite{ad} implies that $\left( \lambda I-{\mathcal{P}} \right)^{-1}$ is compact and hence the spectrum of ${\mathcal{P}}$ consists of isolated eigenvalues of finite algebraic multiplicity only \cite{ka}. Moreover, the set of trajectories $\{\Phi(t)\}_{\scriptscriptstyle t\geq0}$ of the shifted system \begin{equation} {\Phi}_t (t)={\mathcal{P}} \Phi(t), \;\; \Phi(0)=\Phi_{0}\label{0b} \end{equation} is bounded for the graph norm and thus precompact by means of the compactness of the operator $\left(\lambda I-\mathcal{P}\right) ^{-1}$, for $\lambda>0$. Now, let $\Phi(t)=\left( y(t),y_{t} (t),u(t),\xi(t),\eta(t)\right) =e^{t\mathcal{P}} \Phi_{0}$ be the solution of (\ref{0b}) emanating from $\Phi_{0}=\left( y_{0},y_{1},f,\xi_{0},\eta_{0}\right) \in {\mathcal{D}}({\mathcal{P}})$. By virtue of LaSalle's principle \cite{HA}, it follows that the $\omega$-limit set $$ \omega \left( \Phi_0 \right)= \left \lbrace \Psi \in \mathcal{H}; \; \mbox{there exists a sequence} \; t_n \to \infty \;\;\mbox{as}\;\; n \to \infty \; \mbox{such that} \; \Psi=\lim_{{\scriptstyle n \to \infty}} e^{t_n {\mathcal{P}}} \Phi_0 \right \rbrace $$ is nonempty, compact, and invariant under the semigroup $e^{t{\mathcal{P}}}$. Moreover, $e^{t{\mathcal{P}}}\Phi _{0}\longrightarrow\omega\left( \Phi_{0}\right) \;$ as $t\rightarrow \infty\,$ \cite{HA}. 
Next, let $\tilde{\Phi}_{0}=\left( \tilde{y}_{0},\tilde {y}_{1},\tilde{f},\tilde{\xi},\tilde{\eta}\right) \in\omega\left( \Phi _{0}\right) \subset{D}({\mathcal{P}})$ and consider $\tilde{\Phi}(t)=\left( \tilde{y}(t),\tilde{y}_{t}(t),\tilde{u}(t),\tilde{\xi}(t),\tilde{\eta }(t)\right) =e^{t{\mathcal{P}}}\tilde{\Phi}_{0}\in{D}({\mathcal{P}})$ as the unique strong solution of (\ref{0si}). Since $\Vert\tilde{\Phi}(t)\Vert_{\mathcal{H}_0}$ is constant \cite[Theorem 2.1.3 p. 18]{HA}, we have \[ \langle{\mathcal{P}} \tilde{\Phi},\tilde{\Phi}\rangle_{\mathcal{H}_0}=0. \] This, together with (\ref{0dis1}), gives $\tilde{z}=\tilde{y}_t =0$, and $\tilde{u}(1)=\tilde{y}_{t}(0,t-\tau)=0$ as long as $K > \alpha$. Consequently, $\tilde{y}$ is constant and hence the $\omega$-limit set $\omega\left( \Phi_{0}\right) $ reduces to $(\zeta,0,0,0,0)$. Now, it remains to find $\zeta$. To do so, let $(\zeta,0,0,0,0) \in \omega \left( \Phi_{0}\right)$, which yields \begin{equation} \Phi(t_{n})=(y(t_{n}),y_{t}(t_{n}),u(t_{n}),\xi(t_{n}),\eta(t_{n}))=e^{t_{n} {\mathcal{P}}}\Phi_{0}\longrightarrow(\zeta ,0,0,0,0), \, \text{for some} \, t_{n} \rightarrow\infty, \text{as} \, n \rightarrow\infty. 
\label{boum1} \end{equation} In turn, any solution of (\ref{0b}) emanating from $\Phi_{0}=(y_{0},y_{1},f,\xi_{0},\eta_{0})$ verifies \[ \frac{\text{d}}{\text{dt}}\left[ \int_{0}^{1} \left( \sigma y(x,t)+ y_{t}(x,t)+\alpha \tau y_{t}(0,t-x\tau) \right) \,dx+my_{t}(0,t)+My_{t}(1,t)-\alpha y(0,t) \right]=0, \] for each $t\geq0$, and thus \begin{align} & \int_{0}^{1} \left( \sigma y(x,t)+ y_{t}(x,t)+\alpha \tau y_{t}(0,t-x\tau) \right)\,dx + my_{t}(0,t)+My_{t}(1,t)-\alpha y(0,t)\nonumber\\ & =\int_{0}^{1} \left( \sigma y(x,0)+ y_{t}(x,0)+\alpha \tau y_{t}(0,-x\tau) \right) \,dx+my_{t}(0,0)+My_{t}(1,0)-\alpha y(0,0)\nonumber\\ & =\int_{0}^{1} \left( \sigma y_0 (x)+ y_{1}(x)+\alpha \tau f(-x\tau) \right) \,dx+m\xi_{0}+M\eta_{0}-\alpha y_{0}(0).\label{bou1} \end{align} Finally, the desired value of $\zeta$ can be obtained by simply letting $t=t_{n}$ in (\ref{bou1}) with $n\rightarrow\infty$, and then using (\ref{boum1}).
Whereupon, the following result has been shown: \begin{theo} Assume that (\ref{1.3}) holds and $K$ satisfies $K > \alpha $. Then, for any initial data $\Phi_{0}=(y_{0},y_{1},f,\xi_{0},\eta_{0}) \in {\mathcal{H}}_0$, the solution $\Phi(t)=\biggl(y,y_{t},y_{t}(0,t-x\tau),y_{t}(0,t),y_{t}(1,t)\biggr)$ of (\ref{0b}) tends in ${\mathcal{H}}_0$ to $(\zeta,0,0,0,0)$ as $t\longrightarrow+\infty$, where \begin{equation} \zeta=\displaystyle -\frac{1}{\alpha}\displaystyle\left[ \int_{0}^{1} \left( \sigma y_0 (x)+ y_{1} (x)+\alpha \tau f(-\tau x) \right)\,dx-\alpha y_{0}(0)+m\xi _{0}+M\eta_{0}\right] .\label{0cste} \end{equation} \label{0t2} \end{theo}
\subsection{Exponential convergence result for $e^{t\mathcal{P}}$} \label{sect04}
This subsection addresses the rate of convergence of the solutions of the shifted system (\ref{0b}) to their equilibrium state $(\zeta,0,0,0,0)$ as $t\longrightarrow+\infty$, obtained in Theorem \ref{0t2}. To proceed, we recall the following frequency domain result (see Huang \cite{huang} and Pr\"{u}ss \cite{pruss}):
\begin{theo} \label{lemraokv} A $C_{0}$-semigroup $e^{t{\mathcal{L}}}$ of contractions on a Hilbert space ${\mathcal{H}}$ satisfies the estimate \[
\Vert e^{t{\mathcal{L}}}\Vert_{{\mathcal{L}}({\mathcal{H}})} \leq C e^{-\omega t}, \; \; \text{for all} \; t>0, \] for some positive constants $C$ and $\omega$, if and only if \begin{equation} \rho({\mathcal{L}})\supset\bigl\{i\gamma; \, \gamma\in\mathbb{R}\bigr\}\equiv i\mathbb{R},\label{01.8kv} \end{equation} and \begin{equation}
\limsup_{|\gamma|\rightarrow\infty}\Vert(i\gamma I-{\mathcal{L}})^{-1}\Vert_{{\mathcal{L}}({\mathcal{H}})}<\infty,\label{01.9kv} \end{equation} where $\rho({\mathcal{L}})$ denotes the resolvent set of the operator ${\mathcal{L}}$. \end{theo}
Let us first define another operator on the subspace $\tilde{\mathcal{H}}_0$ (see (\ref{h0})) as follows \[ \tilde{\mathcal{P}} : \mathcal{D}(\tilde{\mathcal{P}}) := \mathcal{D}({\mathcal{P}}) \cap \tilde{\mathcal{H}}_0 \subset\tilde{\mathcal{H}}_0 \rightarrow\tilde{\mathcal{H}}_0, \] \begin{equation} \label{01.62bis} \tilde{\mathcal{P}} (y,z,u,\xi,\eta) = {\mathcal{P}} (y,z,u,\xi,\eta), \, \forall\, (y,z,u,\xi,\eta) \in \mathcal{D}(\tilde{\mathcal{P}}). \end{equation}
Recalling the results obtained in the previous sections, the operator $\tilde{\mathcal{P}}$ defined by (\ref{01.62bis}) generates on $\tilde{{\mathcal{H}}}_0$ a $C_{0}$-semigroup of contractions $e^{t\tilde{\mathcal{P}}}$ and its spectrum $\sigma(\tilde{\mathcal{P}})$ consists of isolated eigenvalues of finite algebraic multiplicity only.
Then, we have the following result:
\begin{theo} \label{0lrkv} Assume that (\ref{1.3}) holds and $K$ satisfies $K>\alpha$. Then, there exist $C>0$ and $\omega >0$ such that for all $t>0$, we have \[ \left\Vert e^{t\tilde{\mathcal{P}}}\right\Vert _{{\mathcal{L}}(\tilde{\mathcal{H}}_0)}\leq C e^{-\omega t}. \] \end{theo}
A direct consequence of the above theorem is that the solutions of the system (\ref{0b}) tend exponentially in ${\mathcal{H}}_0$ to $(\zeta,0,0,0,0)$ as $t\longrightarrow+\infty$, where $\zeta$ is given by (\ref{0cste}).
\begin{proof}
First, we check that (\ref{01.8kv}) holds. Consider the eigenvalue problem \begin{equation} \tilde{\mathcal{P}}Z=i \gamma Z, \label{0eigenkv} \end{equation} where $\gamma \in \mathbb{R}$ and $Z=(y,z,u,\xi,\eta)\in \mathcal{D}(\tilde{\mathcal{P}})$; we claim that $Z=0_{\tilde{\mathcal{H}}_0}$. The problem (\ref{0eigenkv}) reads \begin{align} z -\displaystyle\frac{K+\alpha}{2} y & =i\gamma y\label{0eigen1}\\ (a\,y_{x})_{x} -\left( \sigma+\displaystyle\frac{K+\alpha}{2} \right) z & =i\gamma z\label{0eigen2}\\ -\frac{u_{x}}{\tau}-\displaystyle\frac{K+\alpha}{2} u & =i\gamma u\label{0eigen3}\\ \frac{1}{m}\left[ (ay_{x})(0)+\alpha u(1)\right] -\displaystyle\frac{K+\alpha}{2} \xi & =i\gamma \xi,\label{0eigen4}\\ -\frac{(ay_{x})(1)}{M} -\displaystyle\frac{K+\alpha}{2} \eta & =i\gamma\eta.\label{0eigenbis} \end{align} Taking the inner product of (\ref{0eigenkv}) with $Z$ and recalling (\ref{0dis1}), we get: \begin{equation} 0=\mathrm{Re\,}\left( \langle\tilde{\mathcal{P}}Z,Z\rangle_{{\tilde{\mathcal{H}}_0}}\right) \leq \displaystyle -\sigma \int_{0}^{1} z^{2} \,dx-\displaystyle\frac{K-\alpha}{2}u^{2}(1) (\leq0).\label{01.7kv} \end{equation} Thereby, $z=0$ in $L^2(0,1)$, and $u(1)=0$. Combining $z=0$ and (\ref{0eigen1}) gives $y=0$, and hence $\eta=0$ by (\ref{0eigenbis}). Moreover, (\ref{0eigen3}) can be integrated explicitly, giving $u(x)=u(1)\,e^{\tau\left(i\gamma+\frac{K+\alpha}{2}\right)(1-x)}$, so that $u(1)=0$ yields $u=0$. Finally, $\xi=0$ by (\ref{0eigen4}). Whereupon, (\ref{01.8kv}) is fulfilled by the operator $\tilde{\mathcal{P}}$.
We turn now to the proof of the fact that the resolvent operator of $\tilde{\mathcal{P}}$ obeys the condition \eqref{01.9kv}. Suppose that this claim is not true. Then, thanks to the Banach--Steinhaus theorem \cite{br}, there exist a sequence of real numbers $\gamma_{n} \rightarrow+\infty$ and a sequence of vectors $Z^{n}=(y^{n},z^{n},u^{n},\xi^{n},\eta^{n})\in\mathcal{D}(\tilde{\mathcal{P}})$ with \begin{equation}
\left\| Z^{n} \right\|_{\tilde{\mathcal{H}}_0}=\left\| y^n\right\|_{H^{1}(0,1)}+\left\| z^n \right\|_{L^{2}(0,1)}+\left\| u^n \right\|_{L^{2}(0,1)}+\left| \xi^{n} \right|_{\mathbb{C}}+\left| \eta^{n} \right|_{\mathbb{C}}=1 \label{0boun} \end{equation} such that \begin{equation} \Vert(i\gamma_{n}I-\tilde{\mathcal{P}})Z^{n}\Vert_{\tilde {\mathcal{H}}_0}\rightarrow0\;\;\;\;\mbox{as}\;\;\;n\rightarrow\infty ,\label{01.12kv} \end{equation} that is, as $n\rightarrow\infty$, we have: \begin{equation} \left( i\gamma_{n} + \displaystyle\frac{K+\alpha}{2} \right) y^{n}-z^{n} \equiv f^{n}\rightarrow0\;\;\;\mbox{in}\;\;H^{1}(0,1),\label{01.13kv} \end{equation} \begin{equation} \left( \sigma+i\gamma_{n} + \displaystyle\frac{K+\alpha}{2} \right) z^{n}-(a y_x^{n})_{x} \equiv g^{n} \rightarrow0 \;\;\;\mbox{in}\;\;L^{2}(0,1),\label{01.13bkv} \end{equation} \begin{equation} \left( i\gamma_{n} + \displaystyle\frac{K+\alpha}{2} \right) u^{n}+\frac{u_x^{n}}{\tau} \equiv v^{n}\rightarrow0 \;\;\;\mbox{in}\;\;L^{2} (0,1),\label{01.14bkv} \end{equation} \begin{equation} \left( i\gamma_{n} + \displaystyle\frac{K+\alpha}{2} \right) \xi^{n}-\frac{1}{m}\left[ (a y_x^{n})(0)+\alpha u^{n}(1)\right] \equiv p^{n}\rightarrow0 \;\;\;\mbox{in}\;\; \mathbb{C},\label{01.14kv} \end{equation} \begin{equation} \left( i\gamma_{n} + \displaystyle\frac{K+\alpha}{2} \right) \eta^{n}+\frac{(ay_x^{n})(1)}{M} \equiv q^{n} \rightarrow0 \;\;\;\mbox{in}\;\; \mathbb{C}.\label{0lasteq} \end{equation} Exploiting the fact that
\[ \left\| (i\gamma_{n}I-\tilde{\mathcal{P}})Z^{n} \right\|_{\tilde{\mathcal{H}}_0 }\geq \left\vert \mathrm{Re\,}\langle (i\gamma_{n}I-\tilde{\mathcal{P}})Z^{n},Z^{n} \rangle_{\tilde{\mathcal{H}}_0} \right\vert, \]
together with \eqref{01.7kv}-\eqref{01.12kv}, we have \begin{equation} z^n \rightarrow0, \quad \gamma_{n} y^n \rightarrow0, \quad \text{and}\; y^n \rightarrow0, \quad \hbox{in}\;L^{2}(0,1) \label{0z0} \end{equation} and \begin{equation} u^{n}(1)\rightarrow0 \;\;\mbox{in}\;\; \mathbb{C}.\label{0unkv} \end{equation} Combining (\ref{01.14bkv}) and (\ref{0unkv}) yields \begin{equation} u^{n}\longrightarrow 0\;\;\mbox{in}\;\;L^{2}(0,1).\label{0znkv} \end{equation} It follows from (\ref{01.13bkv}) that \[\displaystyle\frac{(a y_x^{n})_{x}}{\gamma_{n}}= \left( \displaystyle\frac{\sigma+ \displaystyle\frac{K+\alpha}{2}}{\gamma_{n}}+i \right) z^n- \displaystyle\frac{g^{n}}{\gamma_{n}}.\] This, together with \eqref{0z0}, gives \begin{equation} \displaystyle\frac{(a y_x^{n})_{x}}{\gamma_{n}} \longrightarrow0 \;\;\;\mbox{in}\;\;L^{2}(0,1). \label{0yxx} \end{equation} Applying the Gagliardo--Nirenberg interpolation inequality \cite{br}, we have \[
\left\|a y_x^n \right\|_2^2 \leq c \, \displaystyle\frac{\left\|(a y_x^n)_x \right\|_2}{\left| \gamma_{n} \right|} \left\|\gamma_{n} y^n \right\|_2,\]
where $\| \cdot \|_2$ is the usual norm in $L^{2}(0,1)$, and $c$ is a positive constant independent of $n$. Thereby, \begin{equation}
\left\|a y_x^n \right\|_2 \longrightarrow 0,\label{0der} \end{equation} due to \eqref{0yxx} and \eqref{0z0}.
Furthermore, we have \begin{equation} \displaystyle\frac{(a y_x^{n})(1)}{\gamma_{n}} \longrightarrow0 \;\;\;\mbox{in}\;\; \mathbb{C}, \label{0yx1} \end{equation} by means of \eqref{0boun} and \eqref{0yxx}. Amalgamating \eqref{0lasteq} and \eqref{0yx1}, we have \begin{equation} \eta^{n}=z^n (1) \rightarrow0, \;\;\;\mbox{in}\;\; \mathbb{C}.\label{0eta} \end{equation} Similarly, it follows from \eqref{01.14kv} that \begin{equation} \xi^{n}=z^n (0) \rightarrow0, \;\;\;\mbox{in}\;\; \mathbb{C}.\label{0xi} \end{equation} To recapitulate, it follows from \eqref{0z0}-\eqref{0znkv}, \eqref{0eta}, and \eqref{0xi} that $\left\Vert Z^{n}\right\Vert _{\tilde{\mathcal{H}}_0}\longrightarrow 0$, which contradicts the fact that $\left\Vert Z^{n}\right\Vert _{\tilde{\mathcal{H}}_0}=1,\;\forall\ n\in\mathbb{N}.$ Thus, the conditions (\ref{01.8kv}) and (\ref{01.9kv}) are fulfilled. This completes the proof of Theorem \ref{0lrkv}. \end{proof}
Now, we go back to our original system (\ref{(1.1)})-(\ref{F}) and investigate in the next sections the existence and uniqueness of its solutions as well as their behavior.
\section{Well-posedness of the closed-loop system (\ref{(1.1)})-(\ref{F})} \label{sect2} \setcounter{equation}{0}
Letting $u(x,t)=y_{t}(0,t-x\tau),$ the closed-loop system (\ref{(1.1)})-(\ref{F}) becomes \begin{equation} \left\{ \begin{array} [c]{ll} y_{tt}(x,t)-(ay_{x})_{x}(x,t) + \sigma y_t(x,t) =0, & (x,t)\in(0,1)\times(0,\infty),\\ \tau u_{t}(x,t)+u_{x}(x,t)=0, & (x,t)\in(0,1)\times(0,\infty),\\[1mm] my_{tt}(0,t)-\left( ay_{x}\right) (0,t)=\alpha u(1,t)-\beta u(0,t), & t>0,\\ My_{tt}(1,t)+\left( ay_{x}\right) (1,t)=0, & t>0,\\ y(x,0)=y_{0}(x),\;y_{t}(x,0)=y_{1}(x), & x\in(0,1),\\ u(x,0)=y_{t}(0,-x\tau)=f(-x\tau), & x\in(0,1). \end{array} \right. \label{3} \end{equation} Then, consider the state variable $\Phi=(y,z,u,\xi,\eta),$ where $z(\cdot,t)=y_{t}(\cdot,t),\;\xi (t)=y_{t}(0,t)$ and $\eta (t)=y_{t}(1,t)$. Next, the state space of our system is \[ {\mathcal{H}}=H^{1}(0,1)\times L^{2}(0,1)\times L^{2}(0,1)\times\mathbb{R}^{2}. \] The immediate task is to equip ${\mathcal{H}}$ with an inner product that induces a norm equivalent to the usual one. To do so, let $K$ be a positive constant to be determined and define the energy associated to (\ref{3}) as follows: \begin{equation} {\mathcal{E}}(t)=E_{0}(t)+E_{1}(t),\label{ene} \end{equation} where \begin{equation} E_{0}(t)=\frac{1}{2}\left[ \int_{0}^{1}\left[ y_{t}^{2}(x,t)+a(x)y_{x}^{2}(x,t)+K \tau y_{t}^{2}(0,t-x\tau) \right] \, dx+my_{t}^{2}(0,t)+My_{t}^{2}(1,t) \right] ,\label{e0} \end{equation} and \begin{equation} E_{1}(t)=\frac{1}{2} \left[ \int_{0}^{1}\left[ \sigma y(x,t)+y_{t}(x,t)+\alpha \tau y_{t}(0,t-x\tau)\right] \, dx+my_{t}(0,t)+My_{t}(1,t)+(\beta-\alpha) y(0,t) \right]^2.\label{e1} \end{equation} Differentiating ${\mathcal{E}}$ in a formal way, using (\ref{3}) and integrating by parts, we obtain after a straightforward computation \begin{equation} {\mathcal{E}}^{\prime}(t) \leq -\sigma \int_{0}^{1} y_{t}^{2}(x,t) \,dx + \left( \displaystyle\frac{K}{2} +\displaystyle\frac{\alpha}{2}-\beta\right) y_{t}^{2}(0,t)+\displaystyle\frac{1}{2}\left(\alpha -K\right) y_{t}^{2}(0,t-\tau).\label{dec} \end{equation} Clearly, the energy 
${\mathcal{E}}(t)$ is decreasing if we assume that \begin{equation} \alpha<\beta,\label{sma} \end{equation} and then choose $K$ such that \begin{equation} \alpha \leq K \leq 2\beta-\alpha.\label{uni} \end{equation}
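For completeness, we record the step in the derivation of (\ref{dec}) involving the delay term: by the transport equation in (\ref{3}), \[ \frac{d}{dt}\left[\frac{K\tau}{2}\int_{0}^{1}y_{t}^{2}(0,t-x\tau)\,dx\right] =K\tau\int_{0}^{1}u\,u_{t}\,dx=-K\int_{0}^{1}u\,u_{x}\,dx =\frac{K}{2}\left( y_{t}^{2}(0,t)-y_{t}^{2}(0,t-\tau)\right), \] which accounts for the terms $\frac{K}{2}\,y_{t}^{2}(0,t)$ and $-\frac{K}{2}\,y_{t}^{2}(0,t-\tau)$ in (\ref{dec}). Note also that, under (\ref{sma}), the particular choice $K=\beta$ fulfills (\ref{uni}) and reduces both boundary coefficients in (\ref{dec}) to $\frac{\alpha-\beta}{2}<0$.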
Whereupon, the space ${\mathcal{H}}$ shall be equipped with the following real inner product \begin{equation} \begin{array} [c]{l} \langle(y,z,u,\xi,\eta),(\tilde{y},\tilde{z},\tilde{u},\tilde{\xi},\tilde {\eta})\rangle_{\mathcal{H}}=\displaystyle\int_{0}^{1}\left( ay_{x}\tilde {y}_{x}+z\tilde{z}\right) \, dx+K \tau\int_{0}^{1} u \tilde{u} \,dx+m \xi \tilde{\xi }+M\eta\tilde{\eta}\\ +\displaystyle \varpi \left[ \int_{0}^{1}(\sigma y+z) dx+m \xi+M \eta+(\beta-\alpha) y(0)+\tau \alpha\int_{0}^{1} u \, dx \right] \\ \hspace{5.7cm} \displaystyle \times \left[ \int_{0}^{1} (\sigma \tilde{y}+\tilde{z}) \, dx+m \tilde{\xi }+M \tilde{\eta}+(\beta-\alpha){\tilde{y}}(0)+\tau \alpha \int_{0}^{1}\tilde{u} \, dx \right], \label{ip} \end{array} \end{equation} where $K>0$ satisfies the condition (\ref{uni}) and $\varpi$ is a positive constant sufficiently small so that the norm induced by (\ref{ip}) is equivalent to the usual norm of ${\mathcal{H}}$. In fact, arguing as in \cite{ac} one can prove the following lemma \begin{lem} The state space ${\mathcal{H}}$ endowed with the inner product (\ref{ip}) is a Hilbert space provided that $\varpi$ is small enough and the hypotheses (\ref{1.3}), (\ref{sma}) and (\ref{uni}) are fulfilled. \label{p1} \end{lem}
Now, one can write the closed-loop system (\ref{3}) as follows \begin{equation} \left\{ \begin{array} [c]{l} {\Phi}_t (t)=\mathcal{A}\Phi(t),\\ \Phi(0)=\Phi_{0}, \end{array} \right. \label{si} \end{equation} in which $\mathcal{A}$ is a linear operator defined by \begin{equation} \begin{array} [c]{l} {\mathcal{D}}(\mathcal{A})=\left\{ (y,z,u,\xi,\eta)\in{\mathcal{H}};y\in H^{2}(0,1),\;\;z,u\in H^{1}(0,1),\;\;\xi=u(0)=z(0),\;\eta=z(1)\right\} ,\\ \displaystyle{\mathcal{A}}(y,z,u,\xi,\eta)=\left( z,(ay_{x})_{x}-\sigma z,-\frac{u_{x}}{\tau},\frac{1}{m}\left[ (ay_{x})(0)-\beta\xi+\alpha u({1})\right],-\displaystyle\frac{(ay_{x})(1)}{M}\right);\label{1.62} \end{array} \end{equation} whereas $\Phi=(y,z,u,\xi,\eta)$ and $\Phi_{0}=(y_{0},y_{1},f(-\tau\cdot),\xi_{0},\eta_{0})$.
The well-posedness result is stated below: \begin{theo} Assume that (\ref{1.3}), (\ref{sma}) and (\ref{uni}) hold. Then, the operator $\mathcal{A}$ defined by (\ref{1.62}) is densely defined in ${\mathcal{H}}$ and generates on ${\mathcal{H}}$ a $C_{0}$-semigroup of contractions $e^{t\mathcal{A}}$. Consequently, if $\Phi_{0}\in{\mathcal{D}}(\mathcal{A})$, then the solution $\Phi$ is strong and belongs to $C([0,\infty);{\mathcal{D}}(\mathcal{A}))\cap C^{1}([0,\infty);\mathcal{H})$. Moreover, for any initial condition $\Phi_{0}\in{\mathcal{H}}$, the system (\ref{si}) has a unique mild solution $\Phi\in C([0,\infty);\mathcal{H})$. Finally, the spectrum of $\mathcal{A}$ consists of isolated eigenvalues of finite algebraic multiplicity only. \label{(t1)} \end{theo}
\begin{proof} Given $\Phi=(y,z,u,\xi,\eta)\in{\mathcal{D}}({\mathcal{A}}),$ and recalling (\ref{ip}) as well as (\ref{1.62}), we obtain after several integrations by parts \begin{eqnarray*} \langle{\mathcal{A}}\Phi,\Phi\rangle_{{\mathcal{H}}} &=& \displaystyle -\sigma \int_{0}^{1} z^{2} \,dx+(ay_{x} )(1)z(1)-(ay_{x})(0)z(0)-\displaystyle\frac{K}{2}(u^{2}(1)-u^{2}(0))+\xi(ay_{x} )(0)-\beta\xi^{2}\\ &+& \varpi\left( \int_{0}^{1}(\sigma y+z)\,dx+\tau\alpha\int_{0}^{1}u\,dx+m\xi+M\eta+(\beta-\alpha)y(0)\right) \left( \alpha u(0)-\beta\xi+(\beta-\alpha)z(0)\right)\\ &+& \alpha\xi u(1)- \eta(ay_{x})(1)\\ &=&\displaystyle -\sigma \int_{0}^{1} z^{2} \,dx+\alpha\xi u(1)-\frac{K}{2}u^{2}(1)+\displaystyle\frac{K}{2}u^{2}(0)-\beta \xi^{2}. \end{eqnarray*} Invoking Young's inequality, we deduce that \begin{equation} \langle{\mathcal{A}}\Phi,\Phi\rangle_{{\mathcal{H}}} \leq \displaystyle -\sigma \int_{0}^{1} z^{2} \,dx+\left[ -\beta+\displaystyle\frac{K+\alpha}{2}\right] \xi^{2}+\displaystyle\frac{\alpha-K}{2}u^{2}(1)\label{dis1}. \end{equation} Therefore, the operator ${\mathcal{A}}$ is dissipative due to the assumptions (\ref{sma})-(\ref{uni}).
On the other hand, one can show that the operator $\lambda I-\mathcal{A}$ is onto $\mathcal{H}$ for $\lambda>0$. Indeed, given $(f,g,v,p,q)\in{\mathcal{H}}$, we seek $(y,z,u,\xi,\eta)\in{\mathcal{D}}(\mathcal{A})$ such that $(\lambda I-\mathcal{A})(y,z,u,\xi,\eta)=(f,g,v,p,q)$. A classical argument leads us to solve the following elliptic problem \begin{equation} \left\{ \begin{array} [c]{l} \lambda (\lambda+\sigma) y-(ay_{x})_{x}=(\lambda+\sigma) f+g,\\ \lambda\left[ (m\lambda+\beta)-\alpha e^{-{\tau}\lambda}\right] y(0)-(ay_{x})(0)=mp+(m\lambda+\beta-\alpha e^{-{\tau}\lambda})f(0)\\ \hspace{6.9cm}+\displaystyle{\tau}\alpha\int_{0}^{1}e^{-{\tau}\lambda (1-s)}v(s)\,ds,\\ \lambda^{2}My(1)+(ay_{x})(1)=Mq+\lambda Mf(1), \end{array} \right. \label{ra1} \end{equation} whose weak formulation is \[ \displaystyle\int_{0}^{1}\left[ (\lambda+\sigma) \phi y+ay_{x}\phi_{x}\right] \, dx+\lambda\left[ (m\lambda+\beta)-\alpha e^{-{\tau}\lambda}\right] y(0)\phi(0)+\lambda^{2}My(1)\phi(1) \] \[ =\int_{0}^{1}\left[ (\lambda+\sigma) f+g \right] \phi\,dx+\left[ mp+(m\lambda +\beta-\alpha e^{-{\tau}\lambda})f(0)+\displaystyle{\tau}\alpha\int_{0} ^{1}e^{-{\tau}\lambda(1-s)}v(s)\,ds\right] \phi(0) \] \begin{equation} +\left[ Mq+\lambda Mf(1)\right] \phi(1).\label{(1I)} \end{equation} Applying the Lax--Milgram theorem \cite{br}, there exists a unique solution $y\in H^{2}(0,1)$ of (\ref{ra1}) for $\lambda>0$ and hence $\lambda I-\mathcal{A}$ is onto ${\mathcal{H}}$. This, together with the dissipativity of ${\mathcal{A}}$ and the Lumer--Phillips theorem \cite{Pa:83}, shows that ${\mathcal{A}}$ generates on ${\mathcal{H}}$ a $C_{0}$-semigroup of contractions. Moreover, $\left(\lambda I-{\mathcal{A}}\right) ^{-1}$ exists and maps ${\mathcal{H}}$ into ${\mathcal{D}}({\mathcal{A}})$. Thereafter, the Sobolev embedding theorem \cite{ad} implies that $\left( \lambda I-{\mathcal{A}}\right)^{-1}$ is compact and hence the spectrum of $\mathcal{A}$ consists of isolated eigenvalues of finite algebraic multiplicity only \cite{ka}. Finally, semigroup theory \cite{Pa:83} yields the remaining assertions of the theorem. \end{proof}
\section{Asymptotic behavior of the closed-loop system (\ref{(1.1)})-(\ref{F})} \label{sect3} By virtue of (\ref{1.62}), $\lambda=0$ is an eigenvalue of $\mathcal{A}$ whose eigenfunction is $\nu(1,0,0,0,0)$, where $\nu \in\mathbb{R}\setminus\{0\}$. Hence the energy ${\mathcal{E}}(t)$ (see (\ref{ene})) of the system (\ref{3}) corresponding to $\Phi=\nu(1,0,0,0,0)$ is constant and so will not tend to zero as $t\longrightarrow+\infty$. In turn, we have the following convergence result \begin{theo}
Assume that (\ref{1.3}) and (\ref{sma}) hold and $K$ satisfies $\alpha <K<2\beta-\alpha$. Then, for any initial data $\Phi_{0}=(y_{0},y_{1},f,\xi_{0},\eta_{0})\in{\mathcal{H}}$, the solution $\Phi(t)=\biggl(y,y_{t},y_{t}(0,t-x\tau),y_{t}(0,t),y_{t}(1,t)\biggr)$ of the closed-loop system (\ref{si}) tends in ${\mathcal{H}}$ to $(\zeta,0,0,0,0)$ as $t\longrightarrow+\infty$, where \begin{equation} \zeta=\displaystyle\frac{1}{\beta-\alpha}\displaystyle\left[ \int_{0}^{1} \left( \sigma y_0 (x)+ y_{1} (x)+\alpha \tau f(-\tau x)\right)\,dx+(\beta-\alpha)y_{0}(0)+m\xi _{0}+M\eta_{0}\right] .\label{cste} \end{equation} \label{t2} \end{theo}
\begin{proof} Let $\Phi(t)=\left( y(t),y_{t} (t),u(t),\xi(t),\eta(t)\right) =e^{t\mathcal{A}}\Phi_{0}$ be the solution of (\ref{3}) emanating from $\Phi_{0}=\left( y_{0},y_{1},f,\xi_{0},\eta_{0}\right) \in {\mathcal{D}}({\mathcal{A}})$. It follows from Theorem \ref{(t1)} that the set of trajectories $\{\Phi(t)\}_{\scriptscriptstyle t\geq0}$ is bounded for the graph norm and thus precompact by virtue of the compactness of the operator $\left(\lambda I-\mathcal{A}\right) ^{-1}$, for $\lambda>0$. Thanks to LaSalle's principle \cite{HA}, it follows that the $\omega$-limit set $$ \omega \left( \Phi_0 \right)= \left \lbrace \Psi \in \mathcal{H}; \; \mbox{there exists a sequence} \; t_n \to \infty \;\;\mbox{as}\;\; n \to \infty \; \mbox{such that} \; \Psi=\lim_{{\scriptstyle n \to \infty}} e^{t_n \mathcal{A}} \Phi_0 \right \rbrace $$ is nonempty, compact, and invariant under the semigroup $e^{t\mathcal{A}}$. Moreover, $e^{t\mathcal{A}}\Phi _{0}\longrightarrow\omega\left( \Phi_{0}\right) \;$ as $t\rightarrow \infty\,$ \cite{HA}. Next, let $\tilde{\Phi}_{0}=\left( \tilde{y}_{0},\tilde {y}_{1},\tilde{f},\tilde{\xi},\tilde{\eta}\right) \in\omega\left( \Phi _{0}\right) \subset{D}(\mathcal{A})$ and consider $\tilde{\Phi}(t)=\left( \tilde{y}(t),\tilde{y}_{t}(t),\tilde{u}(t),\tilde{\xi}(t),\tilde{\eta }(t)\right) =e^{t\mathcal{A}}\tilde{\Phi}_{0}\in{D}(\mathcal{A})$ as the unique strong solution of (\ref{si}). Exploiting the fact that $\Vert\tilde{\Phi}(t)\Vert_{\mathcal{H}}$ is constant \cite[Theorem 2.1.3 p. 18]{HA}, we have \begin{equation} \langle\mathcal{A}\tilde{\Phi},\tilde{\Phi}\rangle_{\mathcal{H}}=0.\label{e1n} \end{equation} Combining (\ref{dis1}) with (\ref{e1n}), we obtain $\tilde{z}=\tilde{y}_t =0$, $\tilde{\xi}=\tilde{y}_{t}(0,t)=0$ and $\tilde{u}(1)=\tilde{y}_{t}(0,t-\tau)=0$. Consequently, $\tilde{y}$ is constant and hence the $\omega$-limit set $\omega\left( \Phi_{0}\right) $ reduces to $(\zeta,0,0,0,0)$.
It remains to identify $\zeta$. This can be done by arguing as in the proof of Theorem \ref{0t2}. Indeed, let $(\zeta,0,0,0,0) \in \omega \left( \Phi_{0}\right)$, which yields \begin{equation} \Phi(t_{n})=(y(t_{n}),y_{t}(t_{n}),u(t_{n}),\xi(t_{n}),\eta(t_{n}))=e^{t_{n} \mathcal{A}}\Phi_{0}\longrightarrow(\zeta ,0,0,0,0), \, \text{for some} \, t_{n} \rightarrow\infty, \text{as} \, n \rightarrow\infty. \label{boum2} \end{equation} In turn, any solution of the closed-loop system (\ref{si}) emanating from $\Phi_{0}=(y_{0},y_{1},f,\xi_{0},\eta_{0})$ satisfies \begin{equation} \frac{d}{dt}\left[ \int_{0}^{1} \left( \sigma y(x,t)+ y_{t}(x,t)+\alpha \tau y_{t}(0,t-x\tau) \right) \,dx+my_{t}(0,t)+My_{t}(1,t)+(\beta-\alpha)y(0,t) \right]=0, \label{ga} \end{equation} for each $t\geq0$, and thus \begin{align} & \int_{0}^{1} \left( \sigma y(x,t)+ y_{t}(x,t)+\alpha \tau y_{t}(0,t-x\tau) \right)\,dx + my_{t}(0,t)+My_{t}(1,t)+(\beta-\alpha)y(0,t)\nonumber\\ & =\int_{0}^{1} \left( \sigma y(x,0)+ y_{t}(x,0)+\alpha \tau y_{t}(0,-x\tau) \right) \,dx+my_{t}(0,0)+My_{t}(1,0)+(\beta-\alpha)y(0,0)\nonumber\\ & =\int_{0}^{1} \left( \sigma y_0 (x)+ y_{1}(x)+\alpha \tau f(-x\tau) \right) \,dx+m\xi_{0}+M\eta_{0}+(\beta-\alpha)y_{0}(0).\label{bou2} \end{align} Lastly, letting $t=t_{n}$ in (\ref{bou2}) with $n\rightarrow\infty$ and then using (\ref{boum2}), one obtains the expression (\ref{cste}) for $\zeta$. \end{proof}
\section{Exponential convergence rate for the closed-loop system (\ref{(1.1)})-(\ref{F})} \label{sect4} The main result of this section shows that the rate of convergence of solutions of the system (\ref{3}) is exponential. The proof relies in an essential way on the frequency domain theorem (see Theorem \ref{lemraokv}).
Firstly, let us denote by $\hat{\mathcal{H}}$ the closed subspace of $\mathcal{H}$ of codimension $1$ defined by \[ \hat{\mathcal{H}} = \left\{ (y,z,u,\xi,\eta) \in\mathcal{H}; \, \int_{0}^{1} \left( \sigma y(x)+z(x) +\alpha\tau u(x) \right) \, dx + (\beta-\alpha) \, y(0) + m \xi+ M \eta= 0 \right\}. \] Subsequently, we consider a new operator associated with the operator $\mathcal{A}$ (see (\ref{1.62})) \[ \hat{\mathcal{A}} : \mathcal{D}(\hat{\mathcal{A}}) := \mathcal{D}(\mathcal{A}) \cap\hat{\mathcal{H}} \subset\hat{\mathcal{H}} \rightarrow\hat{\mathcal{H}}, \] \begin{equation} \label{1.62bis}\hat{\mathcal{A}} (y,z,u,\xi,\eta) = \mathcal{A} (y,z,u,\xi ,\eta), \, \forall\, (y,z,u,\xi,\eta) \in\mathcal{D}(\hat{\mathcal{A}}). \end{equation} By virtue of Theorem \ref{(t1)}, the operator $\hat{\mathcal{A}}$ defined by (\ref{1.62bis}) generates on $\hat{{\mathcal{H}}}$ a $C_{0}$-semigroup of contractions $e^{t\hat{\mathcal{A}}}$ under the conditions (\ref{sma}) and (\ref{uni}). Moreover, $\sigma(\hat{\mathcal{A}})$, the spectrum of $\hat{\mathcal{A}}$, consists only of isolated eigenvalues of finite algebraic multiplicity.
Our main result is
\begin{theo} \label{lrkv} Assume that (\ref{1.3}) and (\ref{sma}) hold and $K$ satisfies $\alpha<K<2\beta-\alpha$. Then, there exist $C>0$ and $\omega >0$ such that for all $t>0$ we have \[ \left\Vert e^{t\hat{\mathcal{A}}}\right\Vert _{{\mathcal{L}}(\hat{\mathcal{H}})}\leq C e^{-\omega t}. \] \end{theo}
\begin{proof} For the sake of clarity, we proceed in several steps.
{\bf Step 1:} The first task is to check that $\hat{\mathcal{A}}$ satisfies (\ref{01.8kv}); that is, for any real number $\gamma$, $i\gamma$ is not an eigenvalue of $\hat{\mathcal{A}}$, i.e. the equation \begin{equation} \hat{\mathcal{A}}Z=i \gamma Z\label{eigenkv} \end{equation} with $Z=(y,z,u,\xi,\eta)\in\mathcal{D}(\hat{\mathcal{A}})$ and $\gamma \in \mathbb{R}$ has only the trivial solution. In other words, one should verify that the system stemming from \eqref{eigenkv}, \begin{align} z & =i\gamma y\label{eigen1}\\ (a\,y_{x})_{x} -\sigma z & =i\gamma z\label{eigen2}\\ -\frac{u_{x}}{\tau} & =i\gamma u\label{eigen3}\\ \frac{1}{m}\left[ (ay_{x})(0)-\beta\xi+\alpha u(1)\right] & =i\gamma \xi\label{eigen4}\\ -\frac{(ay_{x})(1)}{M} & =i\gamma\eta,\label{eigenbis} \end{align} has only the trivial solution. To this end, if $\gamma=0$, then (\ref{eigen1}) gives $z=0$ and hence $\xi=\eta=0$, as $z(0)=\xi$ and $z(1)=\eta$. Moreover, (\ref{eigen3}) shows that $u$ is constant, so $u\equiv u(0)=z(0)=0$, while (\ref{eigen2}) and (\ref{eigen4})--(\ref{eigenbis}) yield $(ay_{x})_{x}=0$ with $(ay_{x})(0)=(ay_{x})(1)=0$; hence $y$ is constant, and the condition defining $\hat{\mathcal{H}}$ forces $y=0$. In turn, if $\gamma\neq0$, then exploiting (\ref{eigenkv}) and (\ref{dis1}), we obtain \begin{equation} 0=\mathrm{Re\,}\left( \langle\hat{\mathcal{A}}Z,Z\rangle_{{\hat{\mathcal{H}}}}\right) \leq \displaystyle -\sigma \int_{0}^{1} |z|^{2} \,dx+\left[ -\beta+\displaystyle\frac{K+\alpha}{2}\right] |\xi|^{2}+\displaystyle\frac{\alpha-K}{2}|u(1)|^{2}\leq0.\label{1.7kv} \end{equation} Hence $z=0$ in $L^2(0,1)$, $z(0)=\xi=0$ and $u(1)=0$. Going back to \eqref{eigenkv}, we deduce that $y=0$, $u=0$ and $\eta=0$, as desired.
{\bf Step 2:} We now turn to proving that the resolvent operator of $\hat{\mathcal{A}}$ obeys the condition \eqref{01.9kv}. Suppose, to the contrary, that this fails. Then, by the Banach--Steinhaus theorem (see \cite{br}), there exist a sequence of real numbers $\gamma_{n} \rightarrow+\infty$ and a sequence of vectors $Z^{n}=(y^{n},z^{n},u^{n},\xi^{n},\eta^{n})\in\mathcal{D}(\hat{\mathcal{A}})$ with \begin{equation}
\left\| Z^{n} \right\|_{\hat{\mathcal{H}}}=\left\| y^n\right\|_{H^{1}(0,1)}+\left\| z^n \right\|_{L^{2}(0,1)}+\left\| u^n \right\|_{L^{2}(0,1)}+\left| \xi^{n} \right|_{\mathbb{C}}+\left| \eta^{n} \right|_{\mathbb{C}}=1 \label{boun} \end{equation} such that \begin{equation} \Vert(i\gamma_{n}I-\hat{\mathcal{A}})Z^{n}\Vert_{\hat {\mathcal{H}}}\rightarrow0\;\;\;\;\mbox{as}\;\;\;n\rightarrow\infty ,\label{1.12kv} \end{equation} that is, as $n\rightarrow\infty$, we have: \begin{equation} i\gamma_{n}y^{n}-z^{n} \equiv f_{n}\rightarrow0\;\;\;\mbox{in}\;\;H^{1}(0,1),\label{1.13kv} \end{equation} \begin{equation} (i\gamma_{n} +\sigma) z^{n}-(a y_x^{n})_{x} \equiv g^{n} \rightarrow0 \;\;\;\mbox{in}\;\;L^{2}(0,1),\label{1.13bkv} \end{equation} \begin{equation} i\gamma_{n}u^{n}+\frac{u_x^{n}}{\tau} \equiv v^{n}\rightarrow0 \;\;\;\mbox{in}\;\;L^{2} (0,1),\label{1.14bkv} \end{equation} \begin{equation} i\gamma_{n}\xi^{n}-\frac{1}{m}\left[ (a y_x^{n})(0)-\beta\xi^{n}+\alpha u^{n}(1)\right] \equiv p^{n}\rightarrow0 \;\;\;\mbox{in}\;\; \mathbb{C},\label{1.14kv} \end{equation} \begin{equation} i\gamma_{n}\eta^{n}+\frac{(ay_x^{n})(1)}{M} \equiv q^{n} \rightarrow0 \;\;\;\mbox{in}\;\; \mathbb{C}.\label{lasteq} \end{equation}
Exploiting the fact that
\[ \left\| (i\gamma_{n}I-\hat{\mathcal{A}})Z^{n} \right\|_{\hat{\mathcal{H}} }\geq \left\vert \mathrm{Re\,}\langle (i\gamma_{n}I-\hat{\mathcal{A}})Z^{n},Z^{n} \rangle_{\hat{\mathcal{H}}} \right\vert, \]
together with \eqref{1.7kv}-\eqref{1.12kv}, we get \begin{equation} z^n \rightarrow0, \quad \gamma_{n} y^n \rightarrow0, \quad \text{and}\; y^n \rightarrow0, \quad \hbox{in}\;L^{2}(0,1), \label{z0} \end{equation} as well as \begin{equation} \xi^{n}=z^n (0)=u^{n}(0)\rightarrow0 \; , \quad u^{n}(1)\rightarrow0 \;\;\;\mbox{in}\;\; \mathbb{C}.\label{unkv} \end{equation} In the light of (\ref{1.14bkv}), we have \[ u^{n}(x)=u^{n}(0)\,e^{-i\tau\gamma_{n}x}+\tau\,\int_{0}^{x}e^{-i\tau\gamma _{n}(x-s)}v^{n}(s)\,ds. \] Recalling that $v^{n}$ converges to zero in $L^{2}(0,1)$ and invoking (\ref{unkv}), the latter yields \begin{equation} u^{n}\longrightarrow 0\;\;\mbox{in}\;\;L^{2}(0,1).\label{znkv} \end{equation} Returning to (\ref{1.13bkv}), we get \[\displaystyle\frac{(a y_x^{n})_{x}}{\gamma_{n}}= \left( \displaystyle\frac{\sigma}{\gamma_{n}}+i \right) z^n- \displaystyle\frac{g^{n}}{\gamma_{n}},\] which implies, by virtue of \eqref{z0}, \begin{equation} \displaystyle\frac{(a y_x^{n})_{x}}{\gamma_{n}} \longrightarrow0 \;\;\;\mbox{in}\;\;L^{2}(0,1). \label{yxx} \end{equation} Furthermore, for every $x\in(0,1)$ we have $$(ay_x^{n})(1)=\int_{x}^{1} (a y_r^{n})_{r} \,dr + (a y_x^{n})(x), $$ and hence
$$\left|(ay_x^{n})(1)\right|^2 \leq c \left( \left\|(a y_x^{n})_{x} \right\|_2^2 +\left\|a y_x^{n} \right\|_2^2 \right), $$
where $\| \cdot \|_2$ is the usual norm in $L^{2}(0,1)$, and $c$ is a positive constant (independent of $n$). For convenience, we shall use, in the sequel, the same letter $c$ to denote a positive constant independent of $n$. Whereupon, \begin{equation} \displaystyle\frac{(a y_x^{n})(1)}{\gamma_{n}} \longrightarrow0 \;\;\;\mbox{in}\;\; \mathbb{C}, \label{yx1} \end{equation} by means of \eqref{boun} and \eqref{yxx}. Amalgamating \eqref{lasteq} and \eqref{yx1}, we have \begin{equation} \eta^{n}=z^n (1) \rightarrow0 \;\;\;\mbox{in}\;\; \mathbb{C}.\label{eta} \end{equation} Invoking the Gagliardo--Nirenberg interpolation inequality \cite{br}, we have \[
\left\|a y_x^n \right\|_2^2 \leq c \, \displaystyle\frac{\left\|(a y_x^n)_x \right\|_2}{\left| \gamma_{n} \right|} \left\|\gamma_{n} y^n \right\|_2, \] for some positive constant $c$. Thereby, \begin{equation}
\left\|a y_x^n \right\|_2 \longrightarrow 0 \;\;\;\mbox{as}\;\;n \rightarrow \infty,\label{der} \end{equation} due to \eqref{yxx} and \eqref{z0}.
Lastly, the findings in \eqref{z0}-\eqref{znkv}, \eqref{eta}, and \eqref{der} contradict the fact that $\left\Vert Z^{n}\right\Vert _{\hat{\mathcal{H}}}=1$ for all $n\in\mathbb{N}$. Thus, the conditions (\ref{01.8kv}) and (\ref{01.9kv}) are both fulfilled, which completes the proof of Theorem \ref{lrkv}. \end{proof}
Note that one immediate consequence of Theorem \ref{lrkv} is the exponential convergence of the solutions of the closed-loop system (\ref{3}) in ${\mathcal{H}}$ to $(\zeta,0,0,0,0)$ as $t\longrightarrow+\infty$, where $\zeta$ is given by (\ref{cste}).
\subsection{Lack of convergence when $\alpha \geq \beta$} This subsection is intended to answer the following question: what happens to the solutions of the closed-loop system (\ref{3}) if the condition (\ref{sma}), used to obtain the convergence results for the system (\ref{3}), is violated, that is, if $\alpha \geq \beta$? The answer is provided by showing that the semigroup $e^{t \hat{\mathcal{A}}}$ is unstable in $\hat{\mathcal{H}}$ for some delays $\tau$. For the sake of simplicity and without loss of generality, we shall suppose that $a=1$. Then, let us look for a solution $y(x,t)=e^{\gamma t} g(x)$, where $\gamma$ is a positive real number and $g$ is a nonzero function in $H^2(0,1)$, of the system associated with the operator $\hat{\mathcal{A}}$ (see (\ref{1.62bis})) on $\hat{{\mathcal{H}}}$, namely, \begin{equation} \left\{ \begin{array}[c]{ll} y_{tt}(x,t)-y_{xx}(x,t)+\sigma y_t (x,t) =0, & 0<x<1,\;t>0,\\ my_{tt}(0,t)- y_{x} (0,t)=\alpha y_{t}(0,t-\tau)-\beta y_{t}(0,t), & t>0,\\ My_{tt}(1,t)+ y_{x} (1,t)=0, & t>0, \label{n1.6f} \end{array} \right. \end{equation}
with $ \int_{0}^{1} \left( \sigma y(x,t) + y_t(x,t) \right)\, dx +\alpha \tau \int_{0}^{1} y_{t}(0,t-x \tau) \, dx + (\beta-\alpha) \, y(0,t) + m y_{t}(0,t)+ M y_{t}(1,t)= 0.$ For such a solution, $\|y(\cdot,t)\|_{L^2(0,1)} =e^{\gamma t} \|g\|_{L^2(0,1)} \rightarrow +\infty$ as $t \rightarrow +\infty$, and thereby the solution of (\ref{n1.6f}) is unstable.
One can readily verify that $y(x,t)=e^{\gamma t} g(x)$ is a solution to (\ref{n1.6f}) if $g$ is a nontrivial solution to \begin{equation} \left\{ \begin{array}{l} g_{xx} -\left(\gamma^2+\sigma \gamma\right) g=0,\\ g_{x}(0)+\gamma \left(\alpha e^{-\gamma \tau}-m \gamma -\beta \right) g(0)=0,\\ g_{x}(1)+M \gamma^2 g(1)=0.\label{nk1} \end{array} \right. \end{equation} Solving the differential equation of (\ref{nk1}) yields $g(x)=c_1 e^{-\sqrt{\gamma^2+\sigma \gamma}\, x}+c_2 e^{\sqrt{\gamma^2+\sigma \gamma}\, x}$, where $c_1$ and $c_2$ are constants to be determined by the boundary conditions in (\ref{nk1}). Indeed, the latter imply that $g$ is a nontrivial solution of (\ref{nk1}) if and only if $\gamma$ is a nonzero solution of the following equation: \begin{eqnarray} &&\left[ \gamma \left( \alpha e^{-\gamma \tau} -\gamma m -\beta \right) -\sqrt{\gamma^2+\sigma \gamma} \right] \left[M \gamma^2+ \sqrt{\gamma^2+\sigma \gamma} \right] e^{\sqrt{\gamma^2+\sigma \gamma}} \nonumber\\ &-& \left[ \gamma \left( \alpha e^{-\gamma \tau} -\gamma m - \beta\right) + \sqrt{\gamma^2+\sigma \gamma} \right] \left[M \gamma^2 - \sqrt{\gamma^2+\sigma \gamma} \right] e^{-\sqrt{\gamma^2+\sigma \gamma}}=0.\label{nk3} \end{eqnarray} Now let us take $\gamma=\sigma >0$ and $M=\sqrt{2}/\sigma.$ Thereafter, (\ref{nk3}) gives \[ \alpha e^{-\frac{\sqrt{2}}{M} \tau} -\beta -\frac{\sqrt{2}}{M} m -\sqrt{2}=0. \] Lastly, solving the above equation, we get $\tau=\frac{M}{\sqrt{2}} \ln \left(\frac{\alpha M}{(\beta+\sqrt{2})M+\sqrt{2}m} \right)>0$ provided that $\alpha > \beta + \sqrt{2} \left(1+\frac{m}{M}\right)$. With these choices in mind, we conclude that (\ref{nk3}) is satisfied with $\gamma = \sigma >0$. Therefore, we have \begin{theo} If the assumption (\ref{sma}) is not satisfied, then there exists a delay $\tau$ for which the convergence of solutions of the closed-loop system (\ref{3}) does not hold. \label{non2} \end{theo}
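For the reader's convenience, we record the algebra behind the particular choice $\gamma=\sigma>0$ and $M=\sqrt{2}/\sigma$ made in the proof above; this is a direct verification using nothing beyond (\ref{nk3}). One has
\[
\left.\sqrt{\gamma^{2}+\sigma\gamma}\right|_{\gamma=\sigma}=\sigma\sqrt{2}
=\left. M\gamma^{2}\right|_{\gamma=\sigma,\; M=\sqrt{2}/\sigma},
\]
so the factor $M\gamma^{2}-\sqrt{\gamma^{2}+\sigma\gamma}$ in the second term of (\ref{nk3}) vanishes. Since the remaining factor $\left[M\gamma^{2}+\sqrt{\gamma^{2}+\sigma\gamma}\right]e^{\sqrt{\gamma^{2}+\sigma\gamma}}$ is positive, (\ref{nk3}) then holds if and only if
\[
\sigma\left(\alpha e^{-\sigma\tau}-\sigma m-\beta\right)=\sigma\sqrt{2},
\]
which, after dividing by $\sigma=\sqrt{2}/M$, is precisely the scalar equation in $\tau$ solved above.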
\section{Concluding discussion}
\label{sect6} This article has addressed the problem of improving the convergence rate of solutions of an overhead crane system modeled by a hyperbolic PDE coupled with two ODEs and subject to a time-delay in the boundary. First, an interior damping control is proposed. Then, the system is shown to be well-posed in a functional space equipped with an appropriate norm. Next, it is proved that the solutions converge to an equilibrium state. Last but not least, we show that the convergence rate of solutions is exponential. This finding improves upon a recent result of the authors \cite{ac}, where only polynomial convergence was obtained.
\end{document}
\begin{document}
\begin{abstract} This article deals with the variable coefficient thin obstacle problem in $n+1$ dimensions. We address the regularity of the regular free boundary, the behavior of the solution close to the free boundary, and the optimal regularity of the solution in a low regularity set-up. \\ We first discuss the case of zero obstacle and $W^{1,p}$ metrics with $p\in(n+1,\infty]$. In this framework, we prove the $C^{1,\alpha}$ regularity of the regular free boundary and derive the leading order asymptotic expansion of solutions at regular free boundary points. We further show the optimal $C^{1,\min\{1-\frac{n+1}{p}, \frac{1}{2}\}}$ regularity of solutions. New ingredients include the use of the Reifenberg flatness of the regular free boundary, the construction of an (almost) optimal barrier function and the introduction of an appropriate splitting of the solution. Important insights depend on the consideration of various intrinsic geometric structures.\\ Based on variations of the arguments in \cite{KRS14} and the present article, we then also discuss the case of non-zero and interior thin obstacles. We obtain the optimal regularity of the solutions and the regularity of the regular free boundary for $W^{1,p}$ metrics and $W^{2,p}$ obstacles with $p\in (2(n+1),\infty]$. \end{abstract}
\subjclass[2010]{Primary 35R35}
\keywords{Variable coefficient Signorini problem, variable coefficient thin obstacle problem, thin free boundary}
\thanks{ H.K. acknowledges support by the DFG through SFB 1060. A.R. acknowledges that the research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement no 291053 and a Junior Research Fellowship at Christ Church. W.S. is supported by the Hausdorff Center of Mathematics.} \maketitle
\tableofcontents
\section{Introduction} In this article we continue our discussion of the variable coefficient \emph{thin obstacle} or \emph{Signorini problem} in a low regularity framework. Here our main objectives are an improved understanding of the (regular) free boundary, the determination of the asymptotic behavior of solutions close to the (regular) free boundary and the derivation of optimal regularity estimates for solutions. To achieve this in our low regularity set-up, we in particular introduce two key new arguments: The identification of the regular free boundary as a \emph{Reifenberg flat} set, which enables us to construct (almost) optimally scaling barrier functions (c.f. Section \ref{sec:Reifenberg}) and a ``\emph{splitting technique}'' (c.f. Proposition \ref{prop:v1}), which allows us to deal with divergence form right hand sides and to identify the leading order contributions in the respective equations.\\
Let us explain the precise set-up of our problem: We consider local minimizers of the constrained Dirichlet energy: \begin{align} \label{eq:energy} J(w) = \int\limits_{B_1^+} a^{ij} (\p_i w) (\p_j w) dx, \end{align} where we use the Einstein summation convention and assume that \begin{align*}
w\in \mathcal{K}:=\{v\in H^1(B_1^+)| \ v \geq 0 \mbox{ on } B_1':= B_1^+\cap \{x_{n+1}=0\} \}. \end{align*}
Here, the metric $a^{ij}: B_1^+ \rightarrow \R^{(n+1)\times (n+1)}_{sym}$ is a symmetric, uniformly elliptic tensor field which is $W^{1,p}$, $p\in (n+1,\infty]$, regular and $B_1^+:=\{x\in B_1\subset \R^{n+1}| \ x_{n+1}\geq 0\}$ denotes the upper half-ball. On the upper half-ball it is possible to consider arbitrary local variations of local minimizers. Hence, local minimizers solve an elliptic divergence form equation in the upper half-ball, on this set they are ``free''. However, on the codimension one surface $B_1'$ they obey the convex constraint $w\geq 0$ which leads to \emph{complementary} or \emph{Signorini} boundary conditions. In this sense the obstacle is ``thin''.\\
In the sequel we are in particular interested in obtaining an improved understanding of the free boundary
$$\Gamma_w := \partial_{B_1'} \{x\in B_1'| \ w(x)>0 \}.$$
This set (which for Lipschitz metrics $a^{ij}$ is of Hausdorff dimension $n-1$, c.f. Remark \ref{rmk:Hausdorff}) separates the \emph{contact set}, $\Lambda_w:=\{x\in B_1'| \ w(x)=0\},$ in which the solution coincides with the obstacle, from the \emph{positivity set},
$ \Omega_w:=\{x\in B_1'| \ w(x)>0\},$ in which the solution is ``free''. Moreover, we seek to understand the structure of the solution close to the free boundary.\\
Considering variations of local minimizers in the energy functional (\ref{eq:energy}) leads to an equivalent formulation of the local minimization problem (\ref{eq:energy}) in the form of a variational inequality posed in the energy space $\mathcal{K}$ \cite{U87}: For a solution, $w\in \mathcal{K}$, of (\ref{eq:energy}) we have \begin{align*} \int_{B_1^+}a^{ij}(\p_iw)\p_j(v-w) dx \geq 0 \mbox{ for all } v\in \mathcal{K}. \end{align*} If in addition $w\in H^2(B_1^+)$, this corresponds to an elliptic equation with \emph{complementary} or \emph{Signorini} boundary conditions: \begin{equation} \begin{split} \label{eq:varcoef'} \p_i a^{ij} \p_j w & = 0 \mbox{ in } B^+_{1}, \\
w\geq 0, -a^{n+1,j}\p_{j}w\geq 0, \ w (a^{n+1,j}\p_{j} w)&= 0 \mbox{ on } B_{1}'. \end{split} \end{equation} Here the Signorini condition is derived from the pointwise inequality $-a^{n+1,j}\p_j w(v-w)\geq 0$ on $B'_1$ which holds for any $v\in \mathcal{K}$.\\
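For completeness, let us sketch how the Signorini conditions follow from the variational inequality; this is a standard argument, carried out under the additional assumption $w\in H^{2}(B_1^+)$. Integrating by parts and using the interior equation $\p_i a^{ij}\p_j w=0$ reduces the variational inequality to the boundary inequality
\begin{align*}
\int_{B_1'}\left(-a^{n+1,j}\p_j w\right)(v-w)\, dx' \geq 0 \quad \mbox{ for all } v\in\mathcal{K}.
\end{align*}
Choosing $v=w+\phi$ with $\phi\geq 0$ on $B_1'$ yields $-a^{n+1,j}\p_j w\geq 0$ on $B_1'$; choosing $v=0$ and $v=2w$ yields $\int_{B_1'} w\,(a^{n+1,j}\p_j w)\, dx'=0$, and since the integrand $-w\,(a^{n+1,j}\p_j w)$ is pointwise nonnegative, we conclude $w\,(a^{n+1,j}\p_j w)=0$ on $B_1'$.\\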
In the sequel we investigate the thin obstacle problem by studying solutions of (\ref{eq:varcoef'}). Moreover, we address variants of it which involve inhomogeneities, \emph{non-flat} obstacles and boundaries and \emph{non-flat interior} obstacles.
\subsection{Main results}
In this article we first derive the $C^{1,\alpha}$ regularity of the so-called regular set of the free boundary in the presence of $W^{1,p}$ metrics, $a^{ij}$, with $p>n+1$. Moreover, in this framework we deduce a leading order asymptotic expansion of solutions to (\ref{eq:varcoef'}) with error estimates. Combining the regularity of the regular free boundary with the Carleman estimate from \cite{KRS14}, we then show the optimal $C^{1,\min\{1-\frac{n+1}{p},\frac{1}{2}\}}$ regularity of solutions with $W^{1,p}$ metrics, $a^{ij}$, for $p>n+1$. In addition to this, we also treat perturbations of the thin obstacle problem including non-flat free boundaries and obstacles, as well as inhomogeneities in the equations and the interior thin obstacle problem.\\ In order to deduce these results, we rely on two main new ingredients: A ``splitting argument'' and the construction of (almost) optimally scaling barrier functions. The latter builds on the identification of the regular free boundary as a Reifenberg flat set. \\ In the following subsections we elaborate on these results, put them into the context of the literature on the thin obstacle problem, and explain the main difficulties and the new arguments which are used to overcome these.\\
We first recall the main results from \cite{KRS14} on free boundary points: All free boundary points $x\in \Gamma_w$ are classified by their associated vanishing order $\kappa_x$ (c.f. Section 4 in \cite{KRS14}\xspace): $$\Gamma_w = \Gamma_{3/2}(w)\cup \bigcup\limits_{\kappa \geq 2}\Gamma_{\kappa}(w).$$ Here $\Gamma_{3/2}(w)$ is the so-called \emph{regular} free boundary which is given by all free boundary points with vanishing order $3/2$. The remaining set $\bigcup\limits_{\kappa \geq 2}\Gamma_{\kappa}(w)$ consists of all free boundary points with a higher order of vanishing (c.f. Definition 4.1 in \cite{KRS14}\xspace or Section~\ref{sec:not} for the precise definition of $\kappa_x$). Moreover, the map $\Gamma_w \ni x \mapsto \kappa_x$ is upper semi-continuous. In \cite{KRS14} we also proved that for each $x\in \Gamma_w$ with $\kappa_x<\infty$, there exists an $L^2$-normalized blow-up sequence $w_{x,r_j}$ such that the limit $w_{x,0}$ is a homogeneous global solution with homogeneity $\kappa_x$. Furthermore, if $\kappa_x=\frac{3}{2}$, the blow-up limit $w_{x,0}$ is two-dimensional and (up to a rotation of coordinates) is equal to $c_n\Ree(x_n+ix_{n+1})^{3/2}$. \\
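For orientation, it is instructive to verify by hand that this model profile satisfies the Signorini conditions in the two-dimensional, constant coefficient setting $a^{ij}=\delta^{ij}$ (a standard computation, assuming $c_n>0$). Writing $w_0:=c_n\Ree(x_n+ix_{n+1})^{3/2}=c_n r^{3/2}\cos(\tfrac{3}{2}\theta)$ in polar coordinates with $\theta\in[0,\pi]$, one computes
\begin{align*}
\Delta w_0&=0 \quad \mbox{ in } \{x_{n+1}>0\},\\
w_0=c_n r^{3/2}\geq 0, \quad -\p_{n+1} w_0&=0 \quad \mbox{ on } \{x_n>0,\ x_{n+1}=0\},\\
w_0=0, \quad -\p_{n+1} w_0&=\tfrac{3}{2}c_n r^{1/2}\geq 0 \quad \mbox{ on } \{x_n<0,\ x_{n+1}=0\},
\end{align*}
so that the contact set is $\{x_n\leq 0\}$, the free boundary passes through the origin, and both the solution and its Neumann derivative display the characteristic $3/2$ and $1/2$ growth rates, respectively.\\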
In the first part of the present paper, we study the regular free boundary $\Gamma_{3/2}(w)$, which is relatively open by the upper semi-continuity of $\kappa_x$. Relying on comparison principles (c.f. Proposition \ref{prop:nondeg}) combined with a splitting technique for equations with divergence right hand sides (c.f. Proposition \ref{prop:v1}), we obtain the $C^{1,\alpha}$ regularity of the regular free boundary $\Gamma_{3/2}(w)$:
\begin{thm} \label{thm:C1a} Let $a^{ij}: B_1^+ \rightarrow \R^{(n+1)\times (n+1)}_{sym}$ be a uniformly elliptic, symmetric $W^{1,p}$, $p\in(n+1,\infty]$, tensor field. Assume that $w$ is a solution of the variable coefficient thin obstacle problem (\ref{eq:varcoef'}). For each $x_0\in \Gamma_{3/2}(w)$, there exist a parameter $\alpha \in (0,1]$, a radius $\rho=\rho(x_0,w)$ and a $C^{1,\alpha}$ function $g: B_{\rho}''(x_0) \rightarrow \R$ such that (possibly after a rotation) \begin{align*}
\Gamma_w \cap B_{\rho}' (x_0) = \Gamma_{3/2}(w)\cap B_{\rho}' (x_0)= \{x| \ x_n = g(x'')\}\cap B_{\rho}'(x_0). \end{align*} \end{thm}
We remark that we turn the usual order of the arguments around: Instead of \emph{first} proving optimal regularity of the solution, and \emph{then} regularity of the (regular) free boundary (c.f. \cite{AC06}, \cite{ACS08}, \cite{PSU}), we \emph{first} prove regularity of the (regular) free boundary, and \emph{then} deduce optimal regularity of the solution. The regular free boundary is $C^{1,\alpha}$ under conditions which do \emph{not} imply $C^{1,\frac{1}{2}}$ regularity of the solution. \\
With the $C^{1,\alpha}$ regularity of the (regular) free boundary at hand, it then becomes possible to study the local behavior of solutions around regular free boundary points. This is based on identifying the leading order in the asymptotics of solutions of (\ref{eq:varcoef'}) at regular free boundary points in the presence of $W^{1,p}$ metrics with $p\in(n+1,\infty]$ (c.f. Proposition \ref{prop:wasympt}). As the asymptotics are complemented by a higher order error estimate, this allows us to obtain local growth bounds. Then combining this with our Carleman estimate, we are able to obtain the $C^{1,1/2}$ optimal regularity of solutions associated with $W^{1,p}$ metrics for $p\in(2(n+1),\infty]$:
\begin{thm}[Optimal regularity] \label{prop:full_opti}
Let $a^{ij}: B_1^+ \rightarrow \R^{(n+1)\times (n+1)}_{sym}$ be a uniformly elliptic, symmetric $W^{1,p}$, $p\in[2(n+1),\infty]$, tensor field. Assume that $w$ is a solution of the variable coefficient thin obstacle problem (\ref{eq:varcoef'}). Then, there exists a constant $C>0$ depending only on $\|a^{ij}\|_{W^{1,p}(B_{1}^+)}, n, p$ such that \begin{align*}
\left\| w \right\|_{C^{1,1/2}(B_{1/2}^+)} \leq C \left\| w\right\|_{L^2(B_1^+)}. \end{align*} \end{thm}
Finally, in the last part of the paper we prove that these results are not restricted to the flat thin obstacle problem, i.e. the setting in which the obstacle is the zero function. We show that, on the contrary, it is possible to deal with \emph{inhomogeneities}, \emph{non-constant} obstacles, \emph{non-flat} boundaries and even \emph{non-flat interior} obstacles (c.f. Section \ref{sec:pert}). For instance, we prove the following result for non-flat obstacles:
\begin{thm} Let $a^{ij}: B_1^+ \rightarrow \R^{(n+1)\times (n+1)}_{sym}$ be a uniformly elliptic, symmetric $W^{1,p}$, $p\in(2(n+1),\infty]$, tensor field. Suppose that $\varphi \in W^{2,p}(B'_{1})$. Let $w:B_1^+ \rightarrow \R$ be a solution of the thin obstacle problem \begin{equation} \label{eq:varcoef_f} \begin{split} \p_i a^{ij} \p_j w & = 0 \mbox{ in } B_{1}^+,\\ w \geq \varphi, \ - a^{n+1,j} \p_j w \geq 0, \ (w-\varphi) ( a^{n+1,j} \p_j w) & = 0 \mbox{ on } B_{1}'. \end{split} \end{equation} Then, the following statements hold: \begin{itemize} \item[(i)] The function $w$ has the optimal Hölder regularity: $$ w \in C^{1,1/2}(B_{1/2}^+).$$ \item[(ii)] Assuming that $0\in \Gamma_{3/2}(w)$, there exist a radius $\rho>0$, a parameter $\alpha \in (0,1]$ and a $C^{1,\alpha}$ function $g$ such that (potentially after a rotation)
$$\Gamma_w\cap B_{\rho}'=\Gamma_{3/2}(w) \cap B_{\rho}' = \{x| \ x_n = g(x'')\}\cap B_{\rho}'.$$ \end{itemize} \end{thm}
\subsection{Literature and context} In recent years the thin obstacle problem and its variants have been a very active field of research. After the complete characterization of the two-dimensional, constant coefficient thin obstacle problem by Lewy \cite{Le72} and Richardson \cite{Ri78} as well as impressive partial results on the general problem \cite{Fr77}, \cite{Ki81}, \cite{U85}, a major new idea emerged only relatively recently: In \cite{AC06} and \cite{ACS08} Athanasopoulos, Caffarelli and Salsa introduced Almgren's frequency function as a powerful tool for obtaining optimal regularity estimates and the $C^{1,\alpha}$ regularity of the free boundary for the constant coefficient problem. Here $\alpha$ is some constant in $(0,1]$. Later this was extended by \cite{Si} and \cite{CSS} to the related obstacle problem for the fractional Laplacian. Moreover, in a recent article \cite{KPS} Koch, Petrosyan and Shi prove the analyticity of the regular free boundary for the constant coefficient operator by introducing a connection between the thin obstacle problem and the Grushin Laplacian. Simultaneously, Savin and De Silva \cite{DSS14} obtained the $C^{\infty}$ regularity of the regular free boundary by exploiting higher order boundary Harnack inequalities. This discussion of the regularity of the regular free boundary is complemented by an article of Garofalo and Petrosyan \cite{GP09} in which a monotonicity formula is used to characterize the structure of the singular set of the thin obstacle problem.\\
While these results illustrate that there has been great progress in the \emph{constant} coefficient thin obstacle problem, less is known in the \emph{variable} coefficient framework. Here the work of Uraltseva \cite{U85} has shown that it is possible to obtain $C^{1,\alpha}$ Hölder regularity of solutions of the thin obstacle problem in the presence of $W^{1,p}$, $p\in (n+1,\infty]$, metrics. In her result the Hölder exponent is some value $\alpha \in (0,1/2]$ which depends on the ellipticity constants of the coefficients and the value of $p$.\\ Only very recently, the variable coefficient problem has been further investigated: In \cite{Gu} Guillen deals with the problem in the context of Hölder regular $C^{1,\alpha}$, for some $\alpha\in (0,1)$, metrics. This is improved in an article by Garofalo and Smit Vega Garcia \cite{GSVG14} in which the authors derive the optimal regularity of solutions of the thin obstacle problem in the presence of $C^{0,1}$ metrics and $C^{1,1}$ obstacles. This argument is based on an extension of Almgren's monotonicity formula to the setting of low regularity metrics and obstacles. In a recent preprint \cite{GPSVG15} Garofalo, Petrosyan and Smit Vega Garcia further build on this and deduce the $C^{1,\alpha}$, for some $\alpha\in (0,1)$, regularity of the regular free boundary in the presence of $C^{0,1}$ metrics and $C^{1,1}$ obstacles by proving an epiperimetric inequality.\\ Complementing these results by relying on Carleman estimates instead of frequency formulae and working in the setting of Sobolev metrics $a^{ij}\in W^{1,p}$, $p\in (n+1,\infty]$, \cite{KRS14} provides an alternative proof of the almost optimal regularity of solutions to (\ref{eq:varcoef'}). In the present article we extend these ideas, and, building on our previous work, prove optimal regularity in the framework of $W^{1,p}$, $p\in (n+1,\infty]$, metrics as well as the $C^{1,\alpha}$ regularity of the regular free boundary.
\subsection{Difficulties and strategy} As already in \cite{KRS14} the central difficulty with which we deal throughout the article is the low regularity of the metric.\\
Building on the results from \cite{KRS14}, we analyze the (regular) free boundary (c.f. Section \ref{sec:boundary}). Traditionally, it is studied by relying on comparison principles (c.f. \cite{PSU}): Differentiating the equation (\ref{eq:varcoef'}), an analysis of the equation for the tangential derivatives allows one to transfer positivity properties of derivatives of the blow-up solutions to the problem at hand. This then permits us to conclude the $C^{1,\alpha}$ regularity of the regular free boundary.\\ In a low regularity set-up this becomes more difficult. In particular we have to include divergence form right hand sides in our discussion. Hence, as a key new ingredient we exploit a ``splitting technique'' in which we divide our solution into a ``controlled error'' which handles the low regularity contributions originating from the metric, and a ``main part'' which captures the behavior of our solutions (c.f. Proposition \ref{prop:v1}). This allows us to argue along similar lines as in the literature. \\ Yet, we have to overcome a second difficulty: In order to provide a framework which also deals with non-flat obstacles of low regularity, we have to work as close as possible to the scaling critical setting. Thus, instead of proving \emph{linear} non-degeneracy of the tangential derivatives, we prove a (nearly) \emph{square root} non-degeneracy in appropriate cones (c.f. Proposition \ref{prop:nondeg}). In order to achieve this, we introduce a second main new ingredient and construct a new barrier function which exploits the Reifenberg flatness of the free boundary (c.f. Proposition \ref{prop:barrier}). \\
In the second part of our argument, we return to the investigation of solutions of the thin obstacle problem (c.f. Section \ref{sec:optimal}). Here we rely on the free boundary regularity, which allows us to improve the almost optimal growth estimates which were previously obtained from the Carleman estimate (c.f. Lemma 4.1 in \cite{KRS14}\xspace and Corollary \ref{cor:optgrowth}).\\ We argue in two steps in which we combine \emph{local} with \emph{global} information: First we prove a \emph{local} growth estimate around regular free boundary points in which we obtain the optimal growth estimates. As these bounds however rely on comparison arguments which depend on the free boundary itself, they are \emph{not} uniform in the free boundary points. By virtue of the $C^{1,\alpha}$ regularity of the (regular) free boundary, these growth estimates can be obtained by an asymptotic expansion of solutions around regular free boundary points. For the identification of the leading order contribution of the expansion and of the corresponding higher order error estimates, we exploit our boundary Harnack estimate in combination with the boundary regularity for $W^{1,p}$ metrics with $p\in(n+1,\infty]$. Here we exploit our main new results on the free boundary regularity.\\ In the second step of our optimal regularity argument for $W^{1,p}$ metrics with $p>2(n+1)$, we then combine the optimal but non-uniform \emph{local} information with optimal \emph{global} information. For this we rely on Lemma 4.2 in \cite{KRS14}\xspace, which allows us to transfer local information into global information. This is the only point at which we directly return to the arguments from \cite{KRS14}.\\
Finally, in Section \ref{sec:pert} we comment on the stability of the described methods by applying them to variants of the thin obstacle problem. Using the scaling of the Carleman inequality, we first show that it is possible to deal with inhomogeneities (c.f. Section \ref{sec:inhomo}). This then immediately entails that all the previous results remain true for sufficiently regular, non-constant obstacles, where we only require them to be $W^{2,p}$ regular for some $p>2(n+1)$ (c.f. Section \ref{sec:nonfl}).\\ Concluding the section on variants of the thin obstacle problem, we discuss the setting of the \emph{interior} thin obstacle problem (c.f. Section \ref{sec:int_obst}). Here we are confronted with additional difficulties, which arise due to a slightly modified boundary condition (instead of a sign condition on the Neumann derivative, there is a sign condition on the fluxes across the interior boundary). This leads to slight modifications in the derivation of the Carleman inequality from \cite{KRS14} and yields leading order \emph{linear} contributions in the asymptotics of the Neumann derivative (c.f. Proposition \ref{prop:intobst}).
\subsection{Organization of the paper} After briefly recalling auxiliary results, conventions and the notation from \cite{KRS14} in Section \ref{sec:not}, we begin with the analysis of the regular free boundary in Section \ref{sec:boundary}. Here Proposition \ref{prop:v1} is a crucial technical tool to overcome the difficulties with the low regularity set-up. In Section \ref{subsec:Lip} we first prove the Lipschitz regularity of the regular free boundary (c.f. Proposition \ref{prop:Lip}). This is based on the observation that the free boundary is Reifenberg flat, which we exploit in Section \ref{sec:Reifenbergbarrier}, in order to construct an appropriate, sufficiently well scaling barrier function (c.f. Proposition \ref{prop:barrier}). In Section \ref{subsec:C1a} we improve the Lipschitz regularity to gain $C^{1,\alpha}$ regularity (c.f. Proposition \ref{prop:C1a}).\\ In the second part of the paper we return to the study of solutions of (\ref{eq:varcoef}): In Section \ref{sec:optimal} we identify the leading order asymptotics of solutions of the thin obstacle problem (Proposition \ref{prop:asympt}). This allows us to derive growth bounds (Corollary \ref{cor:lowerbound} and Corollary \ref{cor:optgrowth}) as well as the optimal regularity of solutions of (\ref{eq:varcoef'}) (c.f. Theorem \ref{thm:optimal_reg}). Finally, in the last part of the article, in Section \ref{sec:pert}, we illustrate how the previous results can be transferred to variations of the thin obstacle problem.
\section{Preliminaries} \label{sec:prelim}
In this section, we briefly explain our normalizations and notational conventions.
\subsection{Auxiliary results} We start by recalling that, due to the discussion in Section 2.1 in \cite{KRS14}\xspace, we may, without loss of generality, consider solutions $w$ of (\ref{eq:varcoef'}) with \begin{itemize}
\item[(A0)] $\left\| w \right\|_{L^2(B_1^+(0))}=1$, \item[(A1)] $a^{i, n+1}(x',0)=0 \mbox{ on } \R^{n} \times \{0\}$ for $i=1,\ldots, n$, \item[(A2)] $a^{ij}$ is symmetric and uniformly elliptic with eigenvalues in the interval $[1/2, 2]$. \end{itemize}
In addition, in the sequel we also make the following hypotheses (where we either assume (A3) or (A3')): \begin{itemize} \item[(A3)] $a^{ij}\in W^{1,p}(B_1^+(0))$ for some $p\in (n+1,\infty]$, \item[(A3')] $a^{ij}\in W^{1,p}(B_1^+(0))$ for some $p\in (2(n+1),\infty]$, \item[(A4)] $a^{ij}(0)= \delta^{ij}$. \end{itemize}
These assumptions allow us to reduce (\ref{eq:varcoef'}) to \begin{equation} \begin{split} \label{eq:varcoef} \p_i a^{ij} \p_j w & = 0 \mbox{ in } B^+_{1}, \\
w\geq 0, -\p_{n+1}w\geq 0, \ w (\p_{n+1} w)&= 0 \mbox{ on } B_{1}'. \end{split} \end{equation} Due to the $H^2$ estimates and the almost optimal regularity result of \cite{KRS14}, it is possible to interpret (\ref{eq:varcoef}) not only as a variational inequality in the energy space $H^1(B_1^+)$, but also to understand the equation and its boundary values in a classical pointwise sense.\\
Let us comment on the conditions (A0)-(A4): We recall that it is always possible to achieve (A0) by a suitable normalization. Condition (A1) is a consequence of an appropriate change of coordinates, c.f. Uraltseva \cite{U85}. Also, assumptions (A2) and (A4) can always be achieved by an additional affine change of coordinates (and a rescaling) and thus do not pose additional restrictions on our set-up. \\ Conditions (A3) and (A3') are regularity assumptions which allow us to apply the results of \cite{KRS14}. Here condition (A3') is slightly more restrictive. While this assumption is \emph{not} needed in the case of flat obstacles, it becomes necessary in our treatment of \emph{non-flat} obstacles, as a consequence of our strategy of proof: We work on the level of the differentiated equation. Moreover, we stress that the integrability assumption $p\geq 2(n+1)$ yields the embedding $W^{1,p}\hookrightarrow C^{0,1/2}$ which, by interior regularity, is needed (and might also suffice) to derive the (optimal) $C^{1,1/2}$ regularity of solutions to the variable coefficient thin obstacle problem. Both conditions (A3) and (A3') imply that \begin{align*}
|a^{ij}(x)-\delta^{ij}| \leq C_n \left\| \nabla a^{ij} \right\|_{L^p(B_1^+)}|x|^{1-\frac{n+1}{p}} \mbox{ for all } x\in B_{1}^+ \end{align*} by Morrey's inequality.
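\begin{rmk}[Morrey embedding]
For the reader's convenience, we briefly recall how this estimate follows: For $p\in(n+1,\infty]$, Morrey's embedding yields $W^{1,p}(B_1^+)\hookrightarrow C^{0,1-\frac{n+1}{p}}(\overline{B_1^+})$ together with the seminorm bound
\begin{align*}
[a^{ij}]_{C^{0,1-\frac{n+1}{p}}(\overline{B_1^+})} \leq C_n \left\| \nabla a^{ij} \right\|_{L^p(B_1^+)}.
\end{align*}
Combined with the normalization (A4), i.e. $a^{ij}(0)=\delta^{ij}$, this gives the stated bound with the H\"older exponent $1-\frac{n+1}{p}$. We note that this exponent is positive precisely for $p>n+1$, which corresponds to (A3), and is greater than or equal to $\frac{1}{2}$ precisely for $p\geq 2(n+1)$, which is the threshold behind (A3') and the optimal $C^{1,1/2}$ regularity of solutions.
\end{rmk}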
\subsection{Notation} \label{sec:not} We use the same notation as in \cite{KRS14}, which we briefly recall in the sequel: \begin{itemize}
\item $\R^{n+1}_+ := \{x\in \R^{n+1}| \ x_{n+1}\geq 0\}$, $\R^{n+1}_- := \{x\in \R^{n+1}| \ x_{n+1}\leq 0\}$. \item For points $x\in \R^{n+1}$ we also use the notation $x=(x',x_{n+1})$ or $x=(x'',x_{n},x_{n+1})$, if we want to emphasize the roles of the respective lower dimensional coordinates.
\item Let $x_0=(x_0',0) \in \R^{n+1}_+$. For the upper half-ball of radius $r>0$ around $x_0$ we write $B_r^+(x_0):=\{x\in \R^{n+1}_+| \ |x-x_0| < r \}$; the projection onto the boundary of $\R^{n+1}_+$ is respectively denoted by $B_r'(x_0):=\{x\in \R^{n} \times \{0\}| \ |x-x_0| < r \}$. If $x_0 = (0,0)$ we also write $B_r^+$ and $B_r'$. Analogous conventions are used for balls in the lower half-space: $B^-_r(x_0)$. Moreover, we use the notation $B_r''(x_0) = \{x\in \R^{n-1}\times \{(x_0)_n\}\times \{0\}| \ |x''-x''_0|<r\}$, where $x_0=(x_0'', (x_0)_n,0), x= (x'', x_n,0)$. \item Annuli around a point $x_0=(x_0',0)$ in the upper half-space with radii $0<r<R<\infty$ as well as their projections onto the boundary of $\R^{n+1}_+$ are denoted by $A_{r,R}^+(x_0):= B_{R}^+(x_0)\setminus B_{r}^+(x_0)$ and $A_{r,R}'(x_0):= B_{R}'(x_0)\setminus B_{r}'(x_0)$ respectively. For annuli around $x_0=(0,0)$ we also omit the center point. Furthermore, we set $A_{r,R}(x_0):= A_{r,R}^+(x_0)\cup A_{r,R}^-(x_0)$. \item We use $\mathcal{C}_\eta(e_n)$ to denote an $(n+1)$-dimensional cone with opening angle $\eta$ and axis $e_n$. Analogously, $\mathcal{C}'_\eta(e_n)$ refers to a flat (i.e. $n$-dimensional) cone on $\{x_{n+1}=0\}$ with opening angle $\eta$ and axis $e_n$. \item For $f: \R^{n+1} \rightarrow \R$ we write $\nabla' f$ and $\nabla'' f$ for the derivatives with respect to the $x'$ and $x''$ components of $x$.
\item We use $|\cdot|$ to denote the standard Euclidean norm. \item Distances with respect to a Riemannian metric $g$ (e.g. if they are induced by certain operators as in Section \ref{sec:propv1}) are denoted by $d_g(\cdot, \cdot)$. \item $\dist_{H}(X,Y):= \max\{\sup\limits_{x\in X} d(x,Y), \sup\limits_{y\in Y} d(y,X)\}$ denotes the Hausdorff distance of two subsets $X$ and $Y$ in $\R^m$. \item Let $w:B_{1}^+ \rightarrow \R$ be a solution of (\ref{eq:varcoef}). Then \begin{itemize}
\item $\Omega_w:= \{x\in \R^{n}\times \{0\}| \ w(x)>0\}$ denotes the \emph{positivity set}. \item $\Gamma_w:=\partial_{B_1'} \Omega_w$ is the \emph{free boundary}. \item $\Lambda_w:= B_1' \setminus \Omega_w$ is the \emph{coincidence set}.
\item $\Gamma_{\frac{3}{2}}(w):= \{x \in \Gamma_w| \kappa_x = \frac{3}{2}\} \subset \Gamma_w$ is the \emph{regular set} or the \emph{regular free boundary}. Here $\kappa_x:= \limsup \limits_{r\rightarrow 0} \frac{\ln(r^{-\frac{n+1}{2}}\left\| w \right\|_{L^2(A_{r,2r}^+(x))})}{\ln(r)} $ is the \emph{vanishing order} of $w$ at $x$. \end{itemize}
\item As in \cite{KRS14} we use the notation $w_{r}(x):= \frac{w(r x)}{ r^{- \frac{n+1}{2} } \left\| w \right\|_{L^2(B_{r}^+)} }$ to denote the $L^2$ normalized rescaling of $w$ at zero. Analogously we consider the $L^2$-normalized blow-ups at arbitrary free boundary points $x_0 \in \Gamma_w$:
$w_{x_0,r}(x):= \frac{w(x_0+r x )}{ r^{- \frac{n+1}{2} } \left\| w \right\|_{L^2(B_{r}^+(x_0))} }$.
\item We set $w_{3/2}(x):= c_n \Ree(x_n + i x_{n+1})^{3/2}$, where $c_n$ is chosen such that $\left\| w_{3/2} \right\|_{L^2(B_1^+)}=1$.
\item In the following we often use the abbreviation $\ell_0:= (2\sqrt{n})^{-1}$ as well as $c_{\ast}:=\left\| \nabla a^{ij} \right\|_{L^p(B_1)}$. \item We use $L_0 = a^{ij}\p_{ij}$ and $L = \p_i a^{ij} \p_j$ to denote the non-divergence and divergence form operators involved. \item We reserve the parameter $\epsilon>0$ to quantify the Reifenberg flatness of a domain $\Gamma \subset \R^{n}\times \{0\}$, as well as the parameter $\epsilon_0$ to measure the closeness of a solution of (\ref{eq:varcoef}) to the $L^2$ normalized model solution $w_{3/2}(x)$. \item We use the notation $A\lesssim B$ to denote that there exists a constant $C>0$, depending only on the dimension, such that $A \leq C B$. Similar conventions are used for $\gtrsim$. \end{itemize}
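\begin{rmk}[Vanishing order of the model solution]
As a simple illustration of the vanishing order introduced above, we compute $\kappa_0$ for the model solution $w_{3/2}$ at the origin: by its $\frac{3}{2}$-homogeneity, a change of variables yields
\begin{align*}
\left\| w_{3/2} \right\|^2_{L^2(A_{r,2r}^+)} = \int\limits_{A_{1,2}^+} |w_{3/2}(ry)|^2 r^{n+1} dy = r^{3+(n+1)} \left\| w_{3/2} \right\|^2_{L^2(A_{1,2}^+)},
\end{align*}
and hence $r^{-\frac{n+1}{2}} \left\| w_{3/2} \right\|_{L^2(A_{r,2r}^+)} = r^{\frac{3}{2}} \left\| w_{3/2} \right\|_{L^2(A_{1,2}^+)}$. Thus,
\begin{align*}
\kappa_0 = \limsup\limits_{r\rightarrow 0} \frac{\ln\left( r^{\frac{3}{2}} \left\| w_{3/2} \right\|_{L^2(A_{1,2}^+)}\right)}{\ln(r)} = \frac{3}{2},
\end{align*}
in accordance with the definition of the regular set $\Gamma_{\frac{3}{2}}$.
\end{rmk}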
\section{Regularity of the Regular Free Boundary} \label{sec:boundary}
In this section we deduce the $C^{1,\alpha}$ regularity of the regular part of the free boundary for solutions of the thin obstacle problem (\ref{eq:varcoef}) and thus provide the proof of Theorem \ref{thm:C1a}.\\
We recall that it is possible to classify free boundary points by their vanishing order $\kappa$ (Section 4 of \cite{KRS14}): \begin{align*} \Gamma_w = \Gamma_{\frac{3}{2}}(w) \cup \bigcup\limits_{\kappa \geq 2} \Gamma_{\kappa}(w), \end{align*}
where $\Gamma_{\kappa}(w):= \{x\in \Gamma_w \big| \ \kappa_{x}= \kappa \}$. The set $\Gamma_{\frac{3}{2}}(w)$ is referred to as the \emph{regular set} of $w$ or the \emph{regular free boundary} of $w$. A corresponding point $x\in \Gamma_{\frac{3}{2}}(w)$ is called \emph{regular}. \\
Although we are mainly interested in the \emph{regular} free boundary, in passing, we make the following observation on the Hausdorff-dimension of the \emph{(whole) free boundary}:
\begin{rmk}[Hausdorff dimension of $\Gamma_w$] \label{rmk:Hausdorff} We claim that, if the obstacle is zero and the metric $a^{ij}$ is Lipschitz continuous, the Hausdorff-dimension of the \emph{(whole) free boundary} is less than or equal to $n-1$. Indeed, this follows as in Lemma 4.1 and Theorem 8.10 in \cite{SW10}. As in these results, we prove a slightly stronger statement: We show that \begin{align*} \mathcal{H}^{s}(S_{w}\cap B_1') = 0 \mbox{ for all } s>n-1, \end{align*}
where $S_w:=\{x\in B_1^+| \ w(x)=|\nabla w(x)|=0\}$ in particular contains $\Gamma_w$.\\ We only provide a sketch of the proof here and refer to \cite{SW10} and \cite{Si96}, Chapter 3.3 for the details. As in \cite{SW10} we argue in two steps and first show that if $\varphi \in C^{1,\alpha}$ for some $\alpha>0$ is a solution of the \emph{constant} coefficient thin obstacle problem, then the Hausdorff dimension of the associated set $S_{\varphi}\cap B_{1}'$ is less than or equal to $n-1$. \\ The argument for this follows by contradiction: Assuming that the statement of the claim were wrong, there would exist $s>n-1$ such that \begin{align*} \mathcal{H}^s(S_{\varphi}\cap B_1') > 0. \end{align*} By density arguments for the Hausdorff measure (c.f. \cite{E15}, Chapter 2.3, Theorem 2.7) there exists a point $z\in S_{\varphi}\cap B_1'$ such that \begin{align*} \theta_{\mu_s}^{*}(S_{\varphi},z):= \limsup\limits_{\rho \rightarrow 0} \rho^{-s}\mu_s(S_{\varphi}\cap B_{\rho}'(z))>0, \end{align*} where for $A\subset B_1'$ we set $\mu_s(A):= \inf\left\{ \sum\limits_{j=1}^{\infty} \rho_j^s \Big| \ A \subset \bigcup\limits_{j=1}^{\infty} B_{\rho_j}'(y_j), \ y_j \in B_{1}' \right\}$. Hence there exists a sequence $\sigma_j>0$ with $\sigma_j \rightarrow 0$ such that \begin{align*} \lim\limits_{\sigma_j \rightarrow 0} \sigma_j^{-s} \mu_s(S_{\varphi}\cap B_{\sigma_j}'(z))>0. \end{align*}
Rescaling the function by setting $\varphi_{j}(x):= \frac{\varphi(z+\sigma_j x)}{\sigma_j^{-\frac{n+1}{2}}\|\varphi\|_{L^2(B_{\sigma_j}^+(z))}}$ with $x\in B_{\sigma_j^{-1}(1-|z|)}$ results in \begin{align*} \liminf\limits_{j\rightarrow \infty} \mu_s(S_{\varphi_j}\cap \overline{B_1'})>0. \end{align*} By compactness (c.f. \cite{SW10}, \cite{PSU}) we have that along a subsequence \begin{align*} \varphi_{j'} \rightarrow \psi \text{ in } C^1_{loc}(B_1^+), \end{align*}
where $\psi \in C^{1,\alpha}\cap W^{2,2}$ is a homogeneous solution with degree larger than or equal to $1+\alpha$ and it is normalized such that $\|\psi\|_{L^2(B_1^+)}=1$. Moreover, as in \cite{SW10} \begin{align*} \mu_s(S_{\psi}\cap \overline{B_1'})>0. \end{align*} We now repeat the outlined argument with $\psi$ instead of $\varphi$. This yields a function $\psi_1$ with $\mu_s(S_{\psi_1}\cap \overline{B_1'})>0$, $0\in S_{\psi_1}$ and $\psi_1$ being invariant under composition with translations in the direction $\lambda z$ for all $\lambda \in \R$. Indeed, the last point follows from the observation that $\mathcal{N}_{\psi_1}(0)= \mathcal{N}_{\psi_1}(z)$, where
$$\mathcal{N}_{\psi_1}(x):= \lim\limits_{\rho\rightarrow 0} \frac{\rho \int\limits_{B_{\rho}^+(x)}|D\psi_1(y)|^2dy}{\int\limits_{\partial B_{\rho}(x)}|\psi_1 (y)|^2 dy}$$
denotes the frequency function (c.f. \cite{SW10}, Section 2 and \cite{PSU}, Chapter 9 for the existence of this limit and its properties). We also note that this is well-defined for $x\in S_{\psi_1}$ and not only for $x\in\Gamma_{\psi_1}$. As in Remark 2.4 (2) in \cite{SW10} this implies that for the homogeneous function $\psi_1$ it holds $\psi_1(x+\lambda z) = \psi_1(x)$ for all $x\in \R^{n+1}_+$. Arguing inductively and carrying out the argument a further $n-1$ times (which is possible by our contradiction argument), we end up with a function $\psi_{n}(x_1,\dots, x_{n+1})=f(x_{n+1})$ which only depends on the $x_{n+1}$-variable and is non-trivial. As it is however also harmonic and satisfies $\psi_{n}(0)=|\nabla \psi_{n}(0)|=0$, this yields a contradiction and hence proves our claim.\\ In a second step, the more general case of a solution $w$ to the \emph{variable} coefficient problem (\ref{eq:varcoef}) with Lipschitz coefficients $a^{ij}$ is treated. Here the claim is that also in this situation the Hausdorff-dimension of $S_w \cap B_{1/2}'$ is less than or equal to $n-1$, i.e. for all $s>n-1$ it holds $\mathcal{H}^{s}(S_w \cap B_{1/2}')=0$. Again the argument follows by contradiction, assuming that the claim on the Hausdorff dimension were wrong. Then, however, a blow-up argument (in combination with compactness, which holds in the setting of Lipschitz coefficients, c.f. for instance \cite{SW10}) reduces the situation to that of a non-trivial solution to the constant coefficient problem with Hausdorff dimension larger than $n-1$. This, however, contradicts our first claim. \end{rmk}
Returning to our main problem, in the following we study the regularity properties of the regular free boundary $\Gamma_{\frac{3}{2}}(w)$. This is divided into several steps: In Section \ref{sec:Reifenberg} we begin by proving that the regular free boundary is a relatively open subset of the free boundary and that, due to the good compactness properties of blow-up solutions, it is Reifenberg flat. Then in the following Section \ref{subsec:Lip} we deduce its Lipschitz regularity. Our argument for this strongly relies on the boundary's Reifenberg flatness, as this allows us to construct (nearly) optimally scaling lower barrier functions. In Section \ref{subsec:C1a} we exploit the Lipschitz regularity to infer the $C^{1,\alpha}$ regularity of the regular free boundary. Here we argue via a Carleson estimate and a boundary Harnack estimate. Last but not least, in Section \ref{sec:propv1} we provide the proof of one of our most central technical tools, the splitting technique which is stated in Proposition \ref{prop:v1}.
\subsection{Reifenberg flatness} \label{sec:Reifenberg}
We begin the investigation of the regular set by showing that it is a relatively open subset of $\Gamma_w$:
\begin{prop}[Relative openness of $\Gamma_{3/2}(w)$] \label{prop:rel_op} Let $a^{ij}: B_1^+ \rightarrow \R^{(n+1)\times (n+1)}_{sym}$ be a uniformly elliptic $W^{1,p}$ tensor field with $p\in (n+1,\infty]$. Let $w$ be a solution of (\ref{eq:varcoef}) in $B_{1}^+$. Then $\Gamma_{3/2}(w)$ is a relatively open subset of $\Gamma_w$. \end{prop}
\begin{proof} By Proposition 4.2 in \cite{KRS14}\xspace the mapping $\Gamma_w \ni x\mapsto \kappa_x$ is upper semi-continuous. Therefore, the set of free boundary points with $\kappa_x < 2$, i.e. the preimage of $[0,2)$ under this mapping, is relatively open. Due to the classification of the possible homogeneous solutions and due to the regularity of general solutions, $\kappa_x < 2$ already implies that $\kappa_x = \frac{3}{2}$ (c.f. Proposition 4.6 in \cite{KRS14}\xspace). Hence, we infer that $\Gamma_{\frac{3}{2}}(w)$ is a relatively open subset of the free boundary. \end{proof}
In order to study the regular set in greater detail, we recall a strong compactness result for $L^2$ normalized blow-ups of solutions of (\ref{eq:varcoef}). In contrast to the setting of free boundary points of higher vanishing order, it is possible to show that at \emph{regular} free boundary points any blow-up is a $3/2$-\emph{homogeneous} global solution. \\
For convenience of notation, in the sequel we set \begin{align} \label{eq:modelsol} w_{3/2}(x):= c_n \Ree(x_n + i x_{n+1})^{3/2} \mbox{ for } x\in B_{1}^+, \end{align}
where $c_n>0$ is an only dimension dependent constant, which is chosen such that $\left\| w_{3/2} \right\|_{L^2(B_1^+)}=1$. \\
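\begin{rmk}[The model solution]
One may verify directly that $w_{3/2}$ is a global solution of the constant coefficient thin obstacle problem: writing $x_n = \rho \cos(\theta)$, $x_{n+1} = \rho \sin(\theta)$ with $\theta\in[0,\pi]$, we have $w_{3/2}(x) = c_n \rho^{\frac{3}{2}} \cos\left(\frac{3\theta}{2}\right)$, which is harmonic in $\{x_{n+1}>0\}$ as the real part of a holomorphic function of $x_n + i x_{n+1}$. On $\{\theta = 0\}$, i.e. for $x_n>0$, $x_{n+1}=0$, we have $w_{3/2} = c_n \rho^{\frac{3}{2}} > 0$ and, since $\p_{n+1} = \frac{1}{\rho}\p_{\theta}$ there, $\p_{n+1} w_{3/2} = -\frac{3c_n}{2}\rho^{\frac{1}{2}} \sin(0) = 0$. On $\{\theta = \pi\}$, i.e. for $x_n\leq 0$, $x_{n+1}=0$, we have $w_{3/2} = c_n \rho^{\frac{3}{2}} \cos\left(\frac{3\pi}{2}\right) = 0$ and, since $\p_{n+1} = -\frac{1}{\rho}\p_{\theta}$ there,
\begin{align*}
-\p_{n+1} w_{3/2} = -\frac{3 c_n}{2} \rho^{\frac{1}{2}} \sin\left(\frac{3\pi}{2}\right) = \frac{3 c_n}{2} \rho^{\frac{1}{2}} \geq 0.
\end{align*}
Hence, the Signorini conditions $w_{3/2}\geq 0$, $-\p_{n+1} w_{3/2} \geq 0$ and $w_{3/2}\, \p_{n+1} w_{3/2} = 0$ hold on $B_1'$, the coincidence set is $\{x_n \leq 0, x_{n+1}=0\}$ and the free boundary is $\{x_n = x_{n+1} = 0\}$.
\end{rmk}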
Using this convention, we formulate the $L^2$ normalized blow-up result at regular free boundary points:
\begin{lem} \label{lem:3/2homo}
Let $a^{ij}: B_1^+ \rightarrow \R^{(n+1)\times (n+1)}_{sym}$ be a uniformly elliptic $W^{1,p}$ tensor field with $p\in (n+1,\infty]$. Let $w$ be a solution of (\ref{eq:varcoef}) in $B_{1}^+$. Consider $x_0\in \Gamma_{3/2}(w)\cap B_{1/2}'$ and $w_{r,x_0}(x):= \frac{w(x_0+rx)}{r^{-\frac{n+1}{2}} \left\| w \right\|_{L^2(B_r^+(x_0))} }$. Then for any sequence $\{r_j\}_{j\in\N}$ with $r_j \rightarrow 0$ there exists a subsequence $\{r_{j_k}\}_{k\in\N}$ and a matrix $Q\in SO(n+1)$ such that \begin{align} \label{eq:3/2homo} w_{r_{j_k},x_0}(x) \rightarrow w_{3/2}(Q x) \mbox{ in } C^{1}(B_{1/2}^+). \end{align} \end{lem}
\begin{proof} The proof of (\ref{eq:3/2homo}) relies on an argument by Andersson \cite{An} and reduces the problem to Benedicks's theorem \cite{Bene80}. For completeness we give a brief sketch of it. \\ To this end, we first note that by Proposition 4.4 in \cite{KRS14}\xspace there exists a subsequence $\{r_{j_k}\}_{k\in\N}$ such that \begin{align*} w_{r_{j_k},x_0} \rightarrow w_0 \mbox{ in } C^{1}(B_{1/2}^+). \end{align*}
Here $w_0$ is a global solution of the constant coefficient thin obstacle problem with $\|w_0\|_{L^2(B_1^+)}=1$. By Remark 17 in \cite{KRS14}\xspace we further infer that $w_0$ grows at a rate $R^{\kappa}$ with $\kappa \in (1,2)$ at infinity. Analogously, any tangential derivative $\p_e w_0$, with $e\in S^{n}\cap B_{1}'$, has at most a growth rate of $R^{\kappa-1}$ with $\kappa \in (1,2)$ at infinity.\\ Next we show that there exists $Q\in SO(n+1)$ such that $w_0(x)=w_{3/2}(Qx)$. Indeed, due to the at most $R^{\kappa-1}$, $\kappa \in (1,2)$, growth at infinity of $\p_e w_0$, the Friedland-Hayman inequality \cite{FH76} (see also Lemma 5.1 in \cite{An}) yields that for any tangential vector $e\in S^{n}\cap B_{1}'$ the directional derivative $\p_e w_0$ has a fixed sign in $\R^{n+1}_+$. Thus, arguing as in Proposition 9.9 in \cite{PSU}, we have that $w_0$ only depends on two variables, which, up to a rotation $Q$, can be written as $w_0(x_n,x_{n+1})$. As a consequence of the sign condition on $\p_n w_0$ (without loss of generality we assume $\p_n w_0\geq 0$), the fact that $\p_{n+1}w_0(0)=0$ (due to the $C^{1}$ convergence), and the growth rate at infinity, the coincidence set $\Lambda_0$ of $w_0$ is the $n$-dimensional half-plane $\Lambda_0=\{x_n\leq 0,x_{n+1}=0\}$. Due to the Signorini condition, after an even reflection about $x_{n+1}$, $\p_n w_0$ solves \begin{align*} \D \p_n w_0 &= 0 \mbox{ in } \R^{n+1}\setminus \Lambda_0, \ \p_n w_0 = 0 \mbox{ on } \Lambda_0, \ \p_n w_0\geq 0 \text{ in } \R^{n+1}\setminus \Lambda_0. \end{align*}
Applying Benedicks's theorem (c.f. Proposition 1 in \cite{An} and also \cite{Bene80}) to $\p_{n}w_0$ (or a Liouville type theorem for harmonic functions after opening up the domain), we have that up to a multiplicative constant $\p_n w_0(x)=\Ree(x_n+i|x_{n+1}|)^{1/2}$. A similar argument applied to $\p_{n+1}w_0$ yields $\p_{n+1}w_0(x)=\Imm (x_n+ix_{n+1})^{1/2}$ in $B_1^+$. Hence, necessarily $w_0(x) =c_n\Ree(x_n+ix_{n+1})^{3/2}= w_{3/2}(x)$ in $B_1^+$ (where we used that $\|w_0\|_{L^2(B_1^+)}=\|w_{3/2}\|_{L^2(B_1^+)}=1$). \end{proof}
Keeping this compactness result in the back of our minds, we proceed by showing that the regular free boundary is Reifenberg flat. For this we first recall the following definition (c.f. \cite{CKL05}, \cite{LMS12}), which allows us to infer our first regularity result, Proposition \ref{prop:Reifenberg}, for the free boundary.
\begin{defi}[Reifenberg flatness] \label{defi:Reifenberg} A locally compact set $\Gamma \subset \R^{m}$ is \emph{$(\delta, R)$ Reifenberg flat} if for every $x_0\in \Gamma$ and every $r\in (0,R]$ there is a hyperplane $L(r,x_0)$ containing $x_0$ such that \begin{align} \label{eq:Reifenberg} \frac{1}{r}\dist_{H} (\Gamma \cap B_r(x_0), L(r,x_0)\cap B_{r}(x_0)) \leq \delta, \end{align} where $\dist_{H}(X,Y):= \max\{\sup\limits_{x\in X} d(x,Y), \sup\limits_{y\in Y} d(y,X)\}$ denotes the Hausdorff distance of two subsets $X$ and $Y$ in $\R^m$. \end{defi}
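\begin{rmk}[Reifenberg flatness and Lipschitz graphs]
As an illustration, we note that the graph $\Gamma = \{(y,f(y))| \ y\in \R^{m-1}\}$ of a Lipschitz function $f:\R^{m-1}\rightarrow \R$ with $\left\| \nabla f \right\|_{L^{\infty}} \leq \delta$ is $(\delta,R)$ Reifenberg flat for every $R>0$: For $x_0 = (y_0, f(y_0))\in \Gamma$ one may choose the hyperplane $L(r,x_0) := \{(y,f(y_0))| \ y\in\R^{m-1}\}$, since for $|y-y_0|\leq r$
\begin{align*}
|f(y)-f(y_0)| \leq \left\| \nabla f \right\|_{L^{\infty}} |y-y_0| \leq \delta r,
\end{align*}
which bounds both suprema in the Hausdorff distance in (\ref{eq:Reifenberg}) by $\delta r$. We stress, however, that Reifenberg flat sets need in general not be graphs, so that Reifenberg flatness is substantially weaker than Lipschitz regularity.
\end{rmk}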
Recalling the convention from (\ref{eq:modelsol}), we have the following regularity result:
\begin{prop} \label{prop:Reifenberg} Let $a^{ij}: B_1^+ \rightarrow \R^{(n+1)\times (n+1)}_{sym}$ be a uniformly elliptic $W^{1,p}(B_1^+)$ tensor field with $p\in (n+1,\infty]$ which satisfies (A1), (A2) and (A4). Let $w$ be a solution of (\ref{eq:varcoef}) in $B_{1}^+$. Then for each $\epsilon>0$ there exists $\delta>0$ such that if \begin{itemize}
\item[(i)] $\left\| w - w_{3/2} \right\|_{C^1(B_1^+)} \leq \delta$,
\item[(ii)] $\left\| a^{ij} - \delta^{ij} \right\|_{L^{\infty}(B_1^+)} \leq \delta$, \end{itemize} then for each $x_0\in \Gamma_{w} \cap B_{1/2}'$ and each radius $r\in(0,1/2)$, there is a rotation $S=S(r,x_0)\in SO(n+1)$ such that \begin{align} \label{eq:Reifenb_close}
\left\| w_{r,x_0}(x) - w_{3/2}(S x) \right\|_{C^1(B_{1/2}^+)} \leq \epsilon. \end{align} In particular, if $\delta$ is chosen sufficiently small, then $ \Gamma_w \cap B_{1/2}'= \Gamma_{3/2}(w) \cap B_{1/2}'.$ \end{prop}
\begin{proof} We argue in two steps. First, we derive a non-degeneracy condition at free boundary points, which is crucial in order to obtain good compactness properties. This first step also immediately implies that $\Gamma_{3/2}(w) \cap B_{1/2}' = \Gamma_w \cap B_{1/2}'$. Then, in the second step, we argue that (\ref{eq:Reifenb_close}) holds.\\
\emph{Step 1: Compactness.} We begin by proving the following claim: \begin{claim*} Assume that the conditions of Proposition \ref{prop:Reifenberg} are satisfied. Then for each $\lambda>0$, if $\delta=\delta(n,p,\lambda)$ is chosen sufficiently small, for all $r\in (0,1/2)$ and for all $x_0 \in \Gamma_w \cap B_{1/2}'$, we have that \begin{align} \label{eq:nondeg_contr}
r^{-\frac{n+1}{2}}\left\| w \right\|_{L^2(B_{r}^+(x_0))} \geq \frac{1}{2}r^{\frac{3}{2} + \lambda}. \end{align} \end{claim*} We prove the claim as a consequence of the closeness assumption (i) to the model solution and of Corollary 3.1 in \cite{KRS14}\xspace for a suitable choice of $\tau$. \\
Consider $R:= (4\delta)^{\frac{2}{n+4}}$ with $0<\delta<\left(\frac{1}{2}\right)^{\frac{n}{2}+4}$. Then, by the triangle inequality and by condition (i), we have that for any $x_0\in \Gamma_w\cap B_{1/2}'$ \begin{align*}
\left\| w \right\|_{L^2(B_{r}^+(x_0))} \geq \left\| w_{3/2} \right\|_{L^2(B_{r}^+(x_0))} - \delta \geq \left\| w_{3/2} \right\|_{L^2(B_{r/2}^+(\bar{x}_0))} - \delta \end{align*}
for all $r\in [R,1)$. Here $\bar{x}_0\in \Gamma_{w_{3/2}}\cap B_{1/2}'$ denotes a point such that $|\bar{x}_0 -x_0| \leq \delta$ (which exists by our assumption (i)). By virtue of the scaling of $w_{3/2}$ at free boundary points and by the choice of $R$, we thus infer that for all $r\in [R,1/2)$ \begin{align} \label{eq:low_est}
r^{- \frac{n+1}{2}}\left\| w \right\|_{L^2(B_{r}^+(x_0))} \geq \frac{1}{2} r^{\frac{3}{2}}. \end{align} This shows (\ref{eq:nondeg_contr}) for all $r\in [R,1/2)$. \\ We now argue that \eqref{eq:nondeg_contr} is also true for $r\in (0,R)$:
Let $r\in(0,R)$ and $\tau= \frac{1}{1+c_0\frac{\pi}{2}}\frac{\ln(\left\| w \right\|_{L^2(B_r^+(x_0))})-\ln (r)}{\ln(r)}$ (where $c_0$ is the constant in Corollary 3.1 in \cite{KRS14}\xspace). Applying Corollary 3.1 in \cite{KRS14}\xspace at the scales $r, R$ and $1/2$ (where $R$ is defined as above) yields: \begin{align*}
e^{\tau \tilde{\phi}(\ln R)}R^{-1}|\ln R|^{-2}\left\| w \right\|_{L^2(A_{R,2R}^+(x_0))} &\leq C(n,p) \left( e^{\tau \tilde{\phi}(\ln r)}r^{-1} \left\| w\right\|_{L^2(B_r^+(x_0))} \right.\\
& \quad \left. + e^{\tau \tilde{\phi}(\ln(1/4))} \left\| w\right\|_{L^2(B_{1/2}^+(x_0))} \right)\\ & \leq C(n,p). \end{align*} Here the last line follows from the definition of $\tau$. As a result, \begin{align*}
\left\| w \right\|_{L^2(A_{R,2R}^+(x_0))} \leq C(n,p) e^{-\tau \tilde{\phi}(\ln R)}R |\ln R|^2. \end{align*} Combining this with (\ref{eq:low_est}) and that $\tilde{\phi}(t)=-(1+c_0\frac{\pi}{2})t-c_0\left(1+\ln(-t)+\mathcal{O} (\frac{1}{t})\right)$ as $t\rightarrow -\infty$, we infer that for our choice of $R=(4\delta)^{\frac{2}{n+4}}$ \begin{align*}
\frac{1}{2}R^{\frac{n+4}{2}} \leq \left\| w \right\|_{L^2(A_{R,2R}^+(x_0))} \leq C(n,p) R^{\tau(1+c_0\frac{\pi}{2})}e^{\frac{C\tau c_0}{|\ln R|}} R|\ln R|^{2+c_0\tau}. \end{align*} Hence, using that $c_0,\tau\leq 1$ \begin{align*}
&\left( 1 +c_0 \frac{\pi}{2} + \frac{C c_0}{|\ln R|} \right)\tau + 1 \leq \frac{n+4}{2} - \frac{\ln(C(n,p))}{\ln(R)} - 4\frac{\ln(|\ln R|)}{\ln(R)} , \end{align*} which, by plugging in the expression for $\tau$, can be rewritten as \begin{align*}
&\left(1 + \frac{Cc_0}{(1+c_0 \frac{\pi}{2})|\ln R|} \right)\ln(\left\| w \right\|_{L^2(B_r^+(x_0))}) \ \\
& \geq \left( \frac{n+4}{2} - \frac{\ln(C(n,p))}{\ln(R)} - 4 \frac{\ln|\ln(R)|}{\ln(R)} + \frac{Cc_0}{(1+c_0 \frac{\pi}{2})|\ln R|}\right) \ln(r). \end{align*} Thus, a sufficiently small choice of $R$ (and thus of $\delta$, depending on $\lambda$) then yields \eqref{eq:nondeg_contr}. Recalling that $r\in(0,R)$ was chosen arbitrarily and combining the estimate just derived with (\ref{eq:low_est}) finally yields the full non-degeneracy condition (\ref{eq:nondeg_contr}) (i.e. for the full range of radii $r\in(0,1/2)$) and hence concludes the proof of the claim.\\
As a direct consequence of the claim, we note that any free boundary point $x_0\in \Gamma_w \cap B_{1/2}'$ has vanishing order at most $\frac{3}{2}+\lambda$. By choosing $\lambda < \frac{1}{2}$, we infer that $\kappa_{x_0}\in [0,2)$. By Proposition 4.6 in \cite{KRS14}\xspace this implies that $\kappa_{x_0}= 3/2$. Therefore, $\Gamma_w \cap B_{1/2}' = \Gamma_{3/2}(w)\cap B_{1/2}'$.\\
\emph{Step 2: Proof of (\ref{eq:Reifenb_close}).} We argue by contradiction. Suppose that (\ref{eq:Reifenb_close}) were wrong. Then there would exist a parameter $\epsilon_0>0$, a sequence $\delta_k\rightarrow 0$ and a sequence $w^k$ of solutions to the thin obstacle problem with associated metrics $a^{ij}_k$, i.e. \begin{align*} \p_i a^{ij}_k \p_j w^k = 0 \text{ in } B_1^+,\\ w^k\geq 0, \ -\partial_{n+1} w^k\geq 0, \ w^k \p_{n+1}w^k=0 \text{ on } B'_1, \end{align*} as well as a sequence of radii $r_k \in (0,1/2)$ and a sequence of points $x_k \in \Gamma_{w^k}\cap B_{1/2}'$ such that \begin{itemize}
\item[(i)] $\|w^k - w_{3/2}\|_{C^1(B_{1}^+)}\leq \delta_k$,
\item[(ii)] $\|a^{ij}_k-\delta^{ij}\|_{L^{\infty}(B_1^+)}\leq \delta_k$, \end{itemize} but for all $S\in SO(n+1)$ \begin{align} \label{eq:contra}
\left\| w_{r_k,x_k}^k(x) - w_{3/2}(S x) \right\|_{C^1(B_{1/2}^+)} \geq \epsilon_0. \end{align} We can also assume that $\Gamma_{w^k}\cap B_{1/2}' \ni x_k \rightarrow x_0 \in \Gamma_{w_{3/2}}\cap \overline{B_{1/2}'}$ and that $r_k \rightarrow r_0$.\\
We now show how the compactness statement of step 1 yields a contradiction to our assumption (\ref{eq:contra}). For this we consider the $L^{2}$ normalized rescalings, $w_{r,\bar{x}}^k(x)$, of $w^k$ at a free boundary point $\bar{x}\in \Gamma_{w^k}\cap B_{1/2}'$. As the conditions of the claim from step 1 are satisfied at any free boundary point $\bar{x}\in \Gamma_{w^k}\cap B_{1/2}'$, the doubling property (c.f. Proposition 4.3 in \cite{KRS14}\xspace) is satisfied uniformly in the choice of the free boundary point $\bar{x}\in \Gamma_{w^k}$ and in the sequence $w^k$. Hence, considering the sequences $r_k$, $x_k$ from above, we infer on the one hand that $w_{r_k,x_k}^k(x)$ is bounded in $H^1(B_1^+)$. Therefore, up to a subsequence it converges strongly in $L^2(B_1^+)$ to a limit $w_0(x)\in H^1(B_1^+)$ with $\|w_0\|_{L^2(B_1^+)}=1$. By the $L^2-C^{1,\alpha}$ estimates this also entails that $w_{r_k,x_k}^k(x) \rightarrow w_0$ in $C^1(B_{1/2}^+)$. On the other hand, we recall the $C^1$ convergence \begin{align} \label{eq:converge} w^k (x)\rightarrow w_{3/2}(x), \end{align} which follows from (i). We now distinguish two cases: \begin{itemize} \item[Case 1:] $r_0>0$. In this case we observe that due to (\ref{eq:converge}) \begin{align*} w^k(x_k + r_k x) &\rightarrow w_{3/2}(x_0+ r_0 x) = r_0^{3/2} w_{3/2}(x),\\
r^{-\frac{n+1}{2}}_k \left\| w^k\right\|_{L^2(B_{r_k}^+(x_k))} &\rightarrow r^{-\frac{n+1}{2}}_0 \left\| w_{3/2}\right\|_{L^2(B_{r_0}^+(x_0))} = r^{3/2}_0, \end{align*} where in the last line we used the $L^2$ normalization of $w_{3/2}$ and that $w_{3/2}$ is homogeneous of degree $3/2$. Consequently, $w_{r_k, x_k}^k(x) \rightarrow w_{3/2}(x)$ in $C^1(B_{1/2}^+)$, which is a contradiction to (\ref{eq:contra}).
\item[Case 2:] $r_0=0$. In this case we note that, since $w^k$ is a solution of the thin obstacle problem, the same is true for $w_{r_k,x_k}^k(x)$. Moreover, by assumption (ii) $a^{ij}_k \rightarrow \delta^{ij}$, which implies that $w_0(x)$ is a nontrivial global solution of the constant coefficient thin obstacle problem which satisfies $\|w_0\|_{L^2(B_1^+)}=1$. By \eqref{eq:nondeg_contr} and the uniform upper growth estimate of Lemma 4.1 in \cite{KRS14}\xspace, the growth of $w_0$ at infinity is of rate $R^\kappa$ with $\kappa\in (1,2)$. Thus, by the proof of Lemma \ref{lem:3/2homo}, $w_0(x)= w_{3/2}(S x)$ for some rotation $S\in SO(n+1)$. This yields a contradiction to (\ref{eq:contra}). \end{itemize} Combining the previous two cases shows (\ref{eq:Reifenb_close}). \end{proof}
Proposition \ref{prop:Reifenberg} immediately entails the following Corollary:
\begin{cor} \label{cor:Reifenberg} Let $a^{ij}: B_1^+ \rightarrow \R^{(n+1)\times (n+1)}_{sym}$ be a uniformly elliptic $W^{1,p}(B_1^+)$ tensor field with $p\in (n+1,\infty]$ which satisfies (A1), (A2) and (A4). Let $w$ be a solution of (\ref{eq:varcoef}) in $B_{1}^+$. For each $\epsilon>0$ there exists $\delta>0$ such that if \begin{itemize}
\item[(i)] $\left\| w - w_{3/2} \right\|_{C^1(B_1^+)} \leq \delta$,
\item[(ii)] $\left\| a^{ij} - \delta^{ij} \right\|_{L^{\infty}(B_1^+)} \leq \delta$, \end{itemize} then the free boundary $\Gamma_w\cap B_{1/2}'$ is $(\epsilon, 1/2)$ Reifenberg flat. \end{cor}
\begin{proof} We regard $\Gamma_{w}\cap B_{1/2}'$ as a subset of $\R^{n} \times \{0\}$. By Proposition \ref{prop:Reifenberg}, we have that for each $r\in(0,1/2)$ and $x_0\in \Gamma_{w}\cap B_{1/2}'$, there exists $S(r,x_0)\in SO(n+1)$ with \begin{align} \label{eq:close1}
\| w_{x_0, r}( S(r,x_0)x) - w_{3/2}(x) \|_{C^1(B_1^+)} \leq \epsilon. \end{align}
Consider $L(r,x_0):=\{x_0 + x\in \R^{n}\times \{0\}| \ x\cdot (S(r,x_0) e_{n}) = 0\}$. Due to (\ref{eq:close1}), we obtain that \begin{align*} \frac{1}{r}\dist_{H} (\Gamma_w \cap B_r(x_0), L(r,x_0)\cap B_{r}(x_0)) \leq \epsilon, \end{align*} which shows the $(\epsilon,1/2)$ Reifenberg flatness of $\Gamma_{w}\cap B_{1/2}'$. \end{proof}
By using an $L^2$ normalized blow-up, this can be transferred to obtain the Reifenberg flatness of the regular free boundary of a general solution of the thin obstacle problem (which does not necessarily satisfy the closeness assumption (i)):
\begin{cor}[Reifenberg flatness of $\Gamma_{3/2}(w)$] \label{cor:Reifenberg2} Let $a^{ij}: B_1^+ \rightarrow \R^{(n+1)\times (n+1)}_{sym}$ be a uniformly elliptic $W^{1,p}$ tensor field with $p\in (n+1,\infty]$. Let $w$ be a solution of (\ref{eq:varcoef}) in $B_{1}^+$. For any $\epsilon>0$ and any $K\Subset \Gamma_{3/2}(w)$, there exists a radius $r_0= r_0(\epsilon,K,w)\in (0,1)$, such that $\Gamma_{3/2}(w)\cap K$ is $(\epsilon,r_0)$-Reifenberg flat. \end{cor}
\begin{proof} \emph{Step 1: Uniformity in compact sets.} As in Proposition \ref{prop:Reifenberg}, the Corollary reduces to showing the following claim: \begin{claim*} \label{claim:Reifenberg2} Let $a^{ij}: B_1^+ \rightarrow \R^{(n+1)\times (n+1)}_{sym}$ be a uniformly elliptic $W^{1,p}$ tensor field with $p\in (n+1,\infty]$. Let $w$ be a solution of (\ref{eq:varcoef}) in $B_{1}^+$. Then, for any $\epsilon>0$ and $K\Subset \Gamma_{3/2}(w)$, there exists $r_0=r_0(\epsilon, w, K)$ such that for any $x_0\in K$ and $r\leq r_0$, there is a matrix $S=S(r,x_0)\in SO(n+1)$ such that \begin{align} \label{eq:close}
\|w_{x_0,r} (Sx)-w_{3/2}(x)\|_{C^1(B_1^+)}\leq \epsilon, \end{align} where $w_{3/2}(x)$ is as above. \end{claim*} In order to infer the claim, we argue by contradiction and assume that the statement is false. Then there exist $\epsilon>0$, a sequence $r_k$ with $r_k\rightarrow 0$ and a sequence $x_k$ with $x_k \in K$ such that for all $S\in SO(n+1)$ \begin{align*}
\| w_{x_k, r_k}(S x) - w_{3/2}(x) \|_{C^1(B_1^+)} > \epsilon \quad \mbox{ for all } k\in \N. \end{align*} By compactness of $K$ we may assume that $x_k \rightarrow \bar{x}$. However, by compactness (which for variable centers we infer from a combination of Proposition 4.3 in \cite{KRS14}\xspace and Corollary 4.1 in \cite{KRS14}\xspace) of the sequence $w_{x_k, r_k}$ and by Lemma \ref{lem:3/2homo}, we obtain a subsequence, which we do not relabel, and a rotation $S$ such that \begin{align*}
\| w_{x_k, r_k}( x) - w_{3/2}(S x) \|_{C^1(B_1^+)} \rightarrow 0. \end{align*} \emph{Step 2: Conclusion.} By rescaling with the radius $r_{0}=r_{0}(\epsilon, K, w)$, it is possible to reduce the claim of Corollary \ref{cor:Reifenberg2} to the setting of Corollary \ref{cor:Reifenberg}. \end{proof}
\subsection{Lipschitz regularity} \label{subsec:Lip}
In this section we present a first improvement of the regularity of the regular free boundary. \\
In the sequel we will always assume that $0\in \Gamma_w$. Moreover, we will use the notation $$w_{3/2}(x):=c_n\Ree(x_n+ix_{n+1})^{3/2},$$
to denote our model solution. Here $c_n>0$ is a normalization constant such that $\|w_{3/2}\|_{L^2(B_1^+)}=1$. With this convention in mind, we prove the Lipschitz regularity of the regular free boundary:
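For later reference we collect some elementary properties of the model solution; they follow from a direct computation in the two variables $x_n, x_{n+1}$, written here in polar coordinates $x_n=\rho\cos\theta$, $x_{n+1}=\rho\sin\theta$ with $\theta\in[0,\pi]$:

```latex
% Model solution in polar coordinates in the (x_n, x_{n+1}) plane:
\begin{align*}
 w_{3/2}(x) &= c_n\,\rho^{3/2}\cos\tfrac{3\theta}{2}, &
 \p_{n+1} w_{3/2}(x) &= -\tfrac{3}{2}\,c_n\,\rho^{1/2}\sin\tfrac{\theta}{2}.
\end{align*}
% Hence w_{3/2} is harmonic in (x_n, x_{n+1}) away from the slit, homogeneous of
% degree 3/2, and satisfies the Signorini conditions on the thin space:
\begin{align*}
 w_{3/2} = c_n x_n^{3/2}\geq 0,\ \ \p_{n+1} w_{3/2} = 0 \quad &\text{for } \theta = 0,\\
 w_{3/2} = 0,\ \ \p_{n+1} w_{3/2} = -\tfrac{3}{2}\,c_n\,\rho^{1/2}\leq 0 \quad &\text{for } \theta = \pi.
\end{align*}
```

In particular, $\Lambda_{w_{3/2}}=\{x_{n+1}=0,\ x_n\leq 0\}$ and $\Gamma_{w_{3/2}}=\{x_n = x_{n+1}=0\}$.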
\begin{prop}[Lipschitz regularity] \label{prop:Lip} Let $a^{ij}:B_1^+ \rightarrow \R^{(n+1)\times (n+1)}_{sym}$ be a tensor field which satisfies (A1), (A2), (A3), (A4). Let $w$ be a solution of (\ref{eq:varcoef}) in $B_{1}^+$. Assume that for some small positive constants $\epsilon_0$ and $c_\ast$ \begin{itemize}
\item[(i)] $\left\| w - w_{3/2} \right\|_{C^1(B_{1}^+)} \leq \epsilon_0$,
\item[(ii)]$\|\nabla a^{ij}\|_{L^p(B_1^+)}\leq c_\ast$. \end{itemize} Then, if $\epsilon_0$ and $c_\ast$ are chosen sufficiently small depending on $n,p$, there exists a Lipschitz function $g:B_{1/2}'' \rightarrow \R$ such that (possibly after a rotation) \begin{align*}
\Gamma_w\cap B_{1/2}' = \{x_{n}= g(x'') \big| x'' \in B_{1/2}'' \}. \end{align*}
Moreover, there is a large constant $\tilde{C}=\tilde{C}(n,p)$, such that $\|\nabla'' g\|_{L^\infty(B''_{1/2})}\leq \tilde{C}\max\{\epsilon_0,c_\ast\}$. \end{prop}
We remark that in addition to the results from Proposition \ref{prop:Lip}, we actually show that $g$ -- and hence the regular free boundary -- is $C^{1}(B_{1/2}'')$ regular (c.f. Remark \ref{rmk:diff}). \\
In order to prove the Lipschitz regularity of the free boundary, we deduce positivity of the directional derivatives in a cone of tangential directions (cf. Proposition \ref{prop:nondeg}). To achieve this, we use a comparison argument, which is standard in the constant coefficient case. We start by considering the equation for $v:=\p_e w$, where $e\in S^n \cap B_1'$ is a tangential direction. By the $H^2$ estimate for $w$ (cf. \cite{U87}), we have $v\in H^1(B^+_1)$. By differentiating the equation for $w$, we obtain the equation for $v$ in $B^+_1\cap \{x_{n+1}>0\}$: \begin{align*} \p_i a^{ij}\p_j v &=\p_i F^i\text{ in } B_1^+\cap \{x_{n+1}>0\},\quad\text{where } F^i =-(\p_ea^{ij})\p_j w. \end{align*} Compared with the constant coefficient situation, the fact that the coefficients of our equation are no better than $W^{1,p}$ regular leads to a divergence form right hand side with $F^i\in L^p$. This causes difficulties in applying the comparison argument directly. To resolve this difficulty we use an appropriate decomposition to split the solution $v$ into an ``error term'' $v_1$ (cf. equation \eqref{eq:v1}), which deals with the divergence term $F^i$, and a ``main term'' $v_2$ (cf. equation \eqref{eq:v2}), which captures the essential behavior of $v$.
\begin{rmk}[Reflection and extension] \label{rmk:ref_ext} As it proves convenient to work on the whole ball instead of only on the upper half ball (for instance for applying interior elliptic regularity theory), we reflect the metric $a^{ij}$ to $B_1^-$ by setting \begin{equation} \label{eq:extend_reflect} \begin{split} a^{ij}(x',x_{n+1})&=a^{ij}(x',-x_{n+1}),\quad i,j=1,\dots,n,\\ a^{n+1,j}(x',x_{n+1})&=-a^{n+1,j}(x',-x_{n+1}),\quad j=1,\dots,n,\\ a^{n+1,n+1}(x',x_{n+1})&=a^{n+1,n+1}(x',-x_{n+1}), \end{split} \end{equation} for $(x',x_{n+1})\in B_1^-$. We reflect $w$ to $B^-_1$ by defining $w(x',x_{n+1})=w(x',-x_{n+1})$. Here we use the same notation to denote the functions after reflection. Due to the off-diagonal assumption (A1), the reflected coefficient matrix $a^{ij}$ is in $W^{1,p}(B_1)$. Since $\p_{n+1}w=0$ in $B'_1\setminus \Lambda_w$, we have $w\in C^{1,1/2-\delta}(B_1\setminus \Lambda_w)\cap H^2(B_1\setminus \Lambda_w)$, where $\delta\in(0,1)$ is an arbitrarily small constant, and $v=\p_ew$ satisfies \begin{align}\label{eq:extend_v} \p_ia^{ij}\p_j v&=\p_i F^i\text{ in } B_1\setminus \Lambda_w, \quad F^i=-(\p_e a^{ij})\p_j w, \\ v&=0\text{ on } \Lambda_w. \notag \end{align}
As we are mainly interested in the local properties of solutions and of the (regular) free boundary and in order to avoid technical issues on the boundary $\p B_1\cap \Lambda_w$, we will often consider \eqref{eq:extend_v} as an equation in the whole space $\R^{n+1}$: We simply extend $a^{ij}$ to $\R^{n+1}$ such that the extended coefficients remain Sobolev regular, $a^{ij}\in W^{1,p}_{loc}(\R^{n+1})$, with equivalent ellipticity constants and with $\Vert \nabla a^{ij} \Vert_{L^p(\R^{n+1})} $ at most twice as large as it had originally been. Moreover, we extend $F^i$ to $\R^{n+1}$ by multiplying by the characteristic function $\chi_{B_1}$. With a slight abuse of the notation, we denote the set $\Lambda_w\cap B'_1$ by $\Lambda_w$. In the sequel, whenever the equations are written in the whole space, we will always have this extension procedure in mind, even though we may not refer to it explicitly. \end{rmk}
In order to deal with the $L^p$ divergence form right hand side in (\ref{eq:extend_v}), we decompose $v=v_1+v_2$ with \begin{equation} \label{eq:v1} \begin{split} \p_i a^{ij} \p_jv_1 -K\dist(x,\Gamma_w)^{-2}v_1 &= \p_i F^i \mbox{ in } \R^{n+1}\setminus \Lambda_w,\\ v_1&= 0 \mbox{ on } \Lambda_w, \end{split} \end{equation} where $K$ is a large constant. Then $v_2:= v - v_1$ solves \begin{equation} \label{eq:v2} \begin{split} \p_i a^{ij} \p_j v_2 &= - K\dist(x,\Gamma_w)^{-2} v_1 \mbox{ in } \R^{n+1} \setminus \Lambda_w,\\ v_2 &= 0 \mbox{ on } \Lambda_w. \end{split} \end{equation} Intuitively, we expect that $v_1$ is a ``controlled error'' (c.f. the bounds in Proposition \ref{prop:v1}) and that $v_2$ determines the behavior of our solution $v$.\\
That this is indeed the case and that $v_1$ can be treated as a ``controlled error'' is a consequence of the following more general result which will be proved in Section \ref{sec:propv1}:
\begin{prop} \label{prop:v1} Let $\Lambda$ and $\Gamma$ be closed, non-empty sets in $\R^n\times\{0\}$ with $\Gamma = \partial \Lambda$. Let $a^{ij}:\R^{n+1} \rightarrow \R^{(n+1)\times (n+1)}_{sym}$ be a bounded, measurable, uniformly elliptic tensor field whose eigenvalues are in $[1/2, 2]$. For $K>0$ consider the equation \begin{equation} \label{eq:globalv1} \begin{split} \partial_ia^{ij}\partial_j u-K\dist(x, \Gamma)^{-2} u &= \partial_i F^i + g \text{ in } \R^{n+1}\setminus \Lambda,\\ u&=0 \text{ on } \Lambda, \end{split} \end{equation} with $F^i\in L^2(\R^{n+1})$, $i\in \{1,\dots,n+1\}$ and $\dist(x,\Gamma) g\in L^2(\R^{n+1})$. Then there exists a unique solution $u$ with $\nabla u \in L^2(\R^{n+1})$, $\dist(x,\Gamma)^{-1}u\in L^2(\R^{n+1})$ and $u(x)=0$ on $\Lambda$ for $H^n$ a.e. $x$. Under the above conditions, if we additionally assume that for some $\sigma\geq 0$ and $p\in (n+1,\infty]$ \begin{align*} F^i\dist(x,\Gamma)^{-\sigma}&\in L^p(\R^{n+1}), \ i\in\{1,\dots, n+1\},\\
g\dist(x,\Gamma)^{1-\frac{n+1}{p}-\sigma}&\in L^{p/2}(\R^{n+1}), \end{align*} then there exist $c=c(n)$ large and $C_0=C_0(n,p)$, such that for $K=c(n)(\sigma+1)$, \begin{multline*}
|u(x)| \le C_0 \dist(x,\Gamma)^{1-\frac{n+1}{p} +\sigma}\left(\left\|F^i \dist(\cdot,\Gamma)^{-\sigma}\right\|_{L^p(\R^{n+1})}\right.\\
\left.+\|g \dist(\cdot,\Gamma)^{1-\frac{n+1}{p}-\sigma}\|_{L^{p/2}(\R^{n+1})}\right). \end{multline*} Moreover, $u$ vanishes continuously as $x$ approaches $\Lambda$. \end{prop}
\begin{rmk} With slight modifications of the function spaces, the statement of Proposition \ref{prop:v1} remains true for $\sigma < 0$. \end{rmk}
\begin{rmk} \label{rmk:impv1} If in addition $a^{ij}\in W^{1,p}(B_1^+)$ with $p\in (n+1,\infty)$, it is possible to make the vanishing order of $u$ towards $\Lambda$ more explicit (c.f. Remark~\ref{rmk:impv2} after the proof of Proposition \ref{prop:v1}). More precisely, for $p\in (n+1,\infty)$ we have \begin{multline*}
|u(x)| \le C_0 \dist(x,\Gamma)^{\sigma}\dist(x,\Lambda)^{1-\frac{n+1}{p}} \left(\left\|F^i \dist(\cdot,\Gamma)^{-\sigma}\right\|_{L^p(\R^{n+1})}\right.\\
\left. +\|g \dist(\cdot,\Gamma)^{1-\frac{n+1}{p}-\sigma}\|_{L^{p/2}(\R^{n+1})}\right). \end{multline*} \end{rmk}
Although we postpone the proof of Proposition \ref{prop:v1} to the end of this section, we will in the sequel frequently apply it, for instance with $\Lambda = \Lambda_w$ and $\Gamma = \Gamma_w$.\\
The remainder of the proof of Proposition \ref{prop:Lip} is organized as follows: Working in the framework of general $(\epsilon,1)$-Reifenberg flat sets $\Lambda$ and $\Gamma$, in Section \ref{sec:Reifenbergbarrier} we first construct a lower barrier function (Proposition \ref{prop:barrier}) which serves as the basis for the subsequent comparison argument. Here we deduce an (almost) optimal non-degeneracy condition in cones for elliptic equations with controlled inhomogeneities in domains with Reifenberg slits (Proposition \ref{prop:nondeg}). In Section \ref{sec:proofLip} this is then applied to prove the Lipschitz regularity of the (regular) free boundary for the thin obstacle problem (Proposition \ref{prop:Lip}).
\subsubsection{Construction of a barrier function and a comparison argument} \label{sec:Reifenbergbarrier}
In this section we first construct a barrier function for the slit domain with Reifenberg flat slit (cf. Proposition \ref{prop:barrier}). This is crucial for the subsequent comparison argument, which allows us to work with essentially critically scaling bounds.\\
In the whole subsection, we work under the following assumption: \begin{assum} \label{assum:Reifenberg} We consider a closed set $\Lambda\subset\R^n\times\{0\}$ with boundary $\Gamma:= \partial_{\R^n\times\{0\}} \Lambda$. We assume that $0\in \Gamma$ and that $\Gamma\cap B_1$ is $(\epsilon, 1)$-Reifenberg flat. Here $\epsilon>0$ is a fixed small constant. \end{assum}
We start by considering a Whitney decomposition, $\{Q_j\}_{j\in \N}$, of $B_{1} \setminus \Gamma$, i.e. $\{Q_j\}$ is a collection of dyadic cubes $Q_j$ with disjoint interiors that cover $B_{1} \setminus \Gamma$ such that: \begin{itemize} \item [(W1)] $\diam(Q_j)\leq \dist(Q_j, \Gamma \cap B_{1})\leq 4\diam(Q_j)$. \item [(W2)] If $Q_1$ and $Q_2$ touch, then $(1/4)\diam(Q_2)\leq \diam(Q_1)\leq 4\diam (Q_2)$. \item [(W3)] There are at most $(12)^{n+1}$ cubes in $\{Q_j\}$ which touch a given cube $Q\in \{Q_j\}$ (cf. Chapter VI \cite{St} for the existence of such cubes). \end{itemize} Moreover, as $\Lambda, \Gamma \subset \R^{n}\times \{0\}$, it is further possible to choose the Whitney cubes symmetric with respect to the plane $\{x_{n+1}=0\}$.\\
For each Whitney cube $Q_j$ with center $\hat x_j$ and diameter $r_j$, let $x_j \in \Gamma$ be a (not necessarily unique) point such that $\dist(\hat x_j,\Gamma)=|\hat x_j-x_j|$. Using this point, we associate a rotation $S_j$ to each Whitney cube $Q_j$ by choosing one of the (not necessarily unique) rotations $S(64 r_j,x_j)$ which are defined by the property that $S(64 r_j,x_j) e_n$ determines the normal of the plane $L( 64 r_j, x_j)$ in the Reifenberg condition (\ref{eq:Reifenberg}) with $\delta = \epsilon$ for some $\epsilon\ll 1$. We define $\nu_j:=S_j e_n$, which we refer to as the \textsl{approximate normal} at $x_j$ in $B_{64r_j}(x_j)$ associated with $Q_j$. Note that $\nu_j \in S^n\cap \{x_{n+1}=0\}$, as we interpret $\Gamma$ as a subset of $\R^{n}\times \{0\}$.\\
We observe that the approximate normals only vary slowly on neighboring Whitney cubes:
\begin{lem} \label{lem:normals} Let $ \Lambda, \Gamma$ be as in Assumption \ref{assum:Reifenberg}. Let $\{Q_j\}$ be a Whitney decomposition of $B_{1}\setminus \Gamma$ and let $Q_j$, $Q_k$ be neighboring Whitney cubes. Let $\nu_j$ and $\nu_k$ be the associated approximate normals defined above. Then \begin{align*}
|\nu_j - \nu_k| \leq C \epsilon. \end{align*} \end{lem}
\begin{proof} Let $\hat x_j$ and $\hat x_k$ be the centers of the Whitney cubes $Q_j$ and $Q_k$ respectively, and let $x_j,x_k\in \Gamma$ be (not necessarily unique) points which realize the distance of $\hat x_j,\hat x_k$ to $\Gamma$. Let $r_j$ be the diameter of $Q_j$. By using (W1) and (W2), one readily checks that \begin{align*}
0\leq |x_j-x_k|\leq 32 r_j. \end{align*} Let $S_j=S(64r_j, x_j)$ and let $\nu_j=S_j e_n$ be as above.
Let $L_j= \{x\in \R^n\times \{0\}| \ x\cdot \nu_j=0\}$ be an $(n-1)$-dimensional hyperplane in $\R^n\times \{0\}$. Then condition \eqref{eq:Reifenberg} in Definition \ref{defi:Reifenberg} gives that $$\dist_H(\Gamma\cap B_{64 r_j}(x_j), (x_j+L_j) \cap B_{64 r_j}(x_j))\leq C\epsilon r_j.$$ Similarly at $x_k$ we have \begin{align*} \dist_H(\Gamma\cap B_{64 r_k}(x_k), (x_k+L_k) \cap B_{64 r_k}(x_k))\leq C\epsilon r_k. \end{align*} Comparing the above two inequalities and using the triangle inequality for $\dist_H$, we deduce
$$|\nu_j-\nu_k|\leq C\epsilon.$$ \end{proof}
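The comparison in the last step can be spelled out as follows; this is only a sketch, and the constants are not optimized:

```latex
% By (W1) and (W2) we have r_j/4 <= r_k <= 4 r_j and |x_j - x_k| <= 32 r_j, so that
% B_{16 r_j}(x_k) \subset B_{64 r_j}(x_j) and 16 r_j <= 64 r_k. Restricting both
% Reifenberg conditions to B_{16 r_j}(x_k) and using the triangle inequality for
% \dist_H therefore yields
\begin{align*}
 \dist_{H}\big((x_j + L_j)\cap B_{16 r_j}(x_k), (x_k + L_k)\cap B_{16 r_j}(x_k)\big)
 \leq C\epsilon (r_j + r_k) \leq C\epsilon r_j.
\end{align*}
% Two hyperplanes in \R^n \times \{0\} whose traces on a ball of radius comparable
% to r_j are C\epsilon r_j-close in Hausdorff distance have unit normals which,
% after fixing orientations, satisfy |\nu_j - \nu_k| <= C\epsilon.
```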
With this auxiliary result at hand, we construct a first barrier function for the divergence form operator $L= \p_i a^{ij}\p_j$ with $a^{ij}$ satisfying the assumptions (A2), (A3), (A4). The idea is first to locally construct subsolutions in each Whitney cube $Q_j$ and then to patch them together by a partition of unity. The patched function remains a subsolution as the approximate normals vary slowly due to Lemma~\ref{lem:normals}. In order to implement this idea, in a first step, we view the operator $L$ as a perturbation of the non-divergence form operator $L_0:= a^{ij}\p_{ij}$. Then in the second step we use the splitting technique from Proposition \ref{prop:v1} to derive a barrier for the full operator $L$.\\
In order to simplify notation, we introduce the following parameters as abbreviations: \begin{notation} \label{not:not1} We abbreviate by setting \begin{itemize}
\item $c_{\ast}:=\left\| \nabla a^{ij} \right\|_{L^p(B_1)}$, \item $\ell_0:= (2^{4}\sqrt{n})^{-1}$. \end{itemize} \end{notation}
Using this notation we proceed with the barrier construction.
\begin{prop}[Barrier function] \label{prop:barrier} Let $\Lambda, \Gamma, \epsilon$ be as in Assumption \ref{assum:Reifenberg} and let $c_{\ast}, \ell_0$ be as in Notation \ref{not:not1}. Then for any $s\in (0,1/2)$, if $\epsilon=\epsilon(n,s)$ and $c_\ast=c_\ast(n,p,s)$ are chosen sufficiently small, there exists a function $h^-_s: B_1 \rightarrow \R $ such that \begin{itemize} \item[(i)] $L h^-_s(x) \geq c_s\dist(x,\Gamma)^{-\frac{3}{2}+\frac{s}{2}} \mbox{ in } B_1 \setminus \Lambda$ for some $c_s>0$,
\item[(ii)] $h^-_s(x) \geq c_n \dist(x,\Gamma)^{\frac{1}{2}+\frac{s}{2}} \text{ on } B_1\cap \{x| \ \dist(x,\Lambda)\geq\ell_0 \dist(x,\Gamma)\}$, \item[(iii)] $h^-_s(x) \geq -c_n \dist(x,\Gamma)^{\frac{3}{2}-\frac{n+1}{p}}$ in $B_1$ and $h^-_s(x)=0$ on $\Lambda$. \end{itemize} \end{prop}
\begin{proof} \emph{Step 1: Barrier for $L_0$.}
Let $\eta_j$ be a partition of unity associated to the Whitney decomposition of $B_1\setminus \Gamma$ from above, i.e. the functions $\eta_j$ satisfy $0\leq \eta_j\leq 1$, $\eta_j = 1$ on $\frac{1}{2}Q_j$ and $\eta_j = 0$ for all $x\in Q_k$ if $Q_k\notin \mathcal{N}(Q_j):=\{Q_i| \ Q_i \text{ touches }Q_j\}$. We consider \begin{align*} h^-(x) := \sum\limits_{k\in \N} \eta_k (x) v_k^-(x), \end{align*} where $v_k^-$ is constructed as follows: We first define an affine transformation, $T_k$, associated with each Whitney cube $Q_k$. More precisely, we set $T_k (x):= R_k B^{-1}_k(x- x_k)$, where as before $x_k$ denotes a point on $\Gamma\cap B_1$ which realizes the distance of the center $\hat x_k$ of $Q_k$ to $\Gamma\cap B_1$, $B_k B_k^{t}:=(a^{ij}(x_k))$, and $R_k\in SO(n+1)$ is defined such that $R_k B^{t}_k \nu_k =\bar{c}_{1,k} e_n$ and $R_k B^{t}_k e_{n+1}=\bar{c}_{2,k} e_{n+1}$. Here $\nu_k$ is the associated approximate normal which was defined before and described in Lemma~\ref{lem:normals}. The constants $\bar{c}_{1,k},\bar{c}_{2,k}$ with $\bar{c}_{1,k},\bar{c}_{2,k} \in (\frac{1}{\sqrt{2}}, \sqrt{2})$ only depend on the ellipticity constants of $a^{ij}$. Next, for $s\in (0,1)$ we consider
$$v^-(x) := w_{1/2}(x_n, x_{n+1})^{1+s}, \quad w_{1/2}(x_n, x_{n+1})=\Ree(x_n+i|x_{n+1}|)^{1/2}.$$ A direct computation (using that $w_{1/2}$ is harmonic away from the slit and that $|\nabla w_{1/2}|^2=\frac{1}{4}(x_n^2+x_{n+1}^2)^{-1/2}$) leads to \begin{align*} \Delta v^-(x)&=\frac{s(1+s)}{4}(x_n^2+x_{n+1}^2)^{-1/2}(w_{1/2})^{-1+s}\\ &\geq \frac{s(1+s)}{4} (x_n^2+x_{n+1}^2)^{-\frac{3}{4}+\frac{s}{4}}. \end{align*} Then we define $$v_k^-(x):=v^-(T_k x).$$ We observe that by definition \begin{align}\label{eq:Tk} T_k (x)\cdot e_n = \bar{c}_{1,k}^{-1} (x-x_k)\cdot \nu_k, \quad T_k (x)\cdot e_{n+1}= \bar{c}_{2,k}^{-1} (x-x_k)\cdot e_{n+1}. \end{align}
Thus, $v_k^-(x)=(\Ree(\bar{c}_{1,k}^{-1}(x-x_k)\cdot \nu_k+ i |\bar{c}_{2,k}^{-1} (x-x_k)\cdot e_{n+1}|)^{\frac{1}{2}})^{1+s}$. By using the computation for $\Delta v^-(x)$, one can check that $v_k^-$ satisfies \begin{align*} a^{ij}(x_k)\p_{ij} v_k^-(x) &\geq \frac{s(1+s)}{4}((T_k(x)\cdot e_n)^2+(T_k(x)\cdot e_{n+1})^2)^{-\frac{3}{4}+\frac{s}{4}}\\ &\quad \text{ in } \R^{n+1}\setminus \{x_{n+1}=0, (x-x_k)\cdot \nu_k\leq 0\},\\ v_k^-(x) &= 0 \text{ on } \{x_{n+1}=0, (x-x_k)\cdot \nu_k\leq 0\}. \end{align*} Due to the $(\epsilon, 1)$-Reifenberg condition on $\Gamma$, the definition \eqref{eq:Tk} and due to the property (W2), there exists an absolute constant $c\in (0,1)$ such that \begin{align*} a^{ij}(x_k)\p_{ij} v_k^-(x) \geq cs(1+s) r_k^{-\frac{3}{2}+\frac{s}{2}}, \quad r_k=\diam(Q_k) \end{align*} for each $x\in \supp(\eta_k)$. By (W1) and (W2) this can further be rewritten as \begin{align}\label{eq:lower_bar} a^{ij}(x_k)\p_{ij} v_k^-(x) \geq cs(1+s) \dist(x,\Gamma)^{-\frac{3}{2}+\frac{s}{2}}, \quad x\in \supp(\eta_k). \end{align} We compute $L_0(\sum_k\eta_kv_k^-)$, which is \begin{align*} L_0(\sum_k\eta_k v_k^-)&=\sum_k a^{ij}(x_k)\p_{ij}(\eta_kv_k^-)+\sum_k(a^{ij}(x)-a^{ij}(x_k))\p_{ij}(\eta_k v_k^-)\\ &=\sum_k(a^{ij}(x_k)\p_{ij}v_k^-) \eta_k + \sum_k(a^{ij}(x_k)\p_{ij}\eta_k) v_k^- \\ & \quad+ 2\sum_k a^{ij}(x_k)\p_i\eta_k\p_j v_k^- +\sum_k(a^{ij}(x)-a^{ij}(x_k))\p_{ij}(\eta_k v_k^-). \end{align*} Given any $x\in B_1\setminus \Gamma$, there is a unique $Q_\ell$ such that $x\in Q_\ell$. By definition of our partition of unity, $\eta_j \neq 0$ in $Q_\ell$ iff $Q_j \in \mathcal{N}(Q_\ell)$. Using \eqref{eq:lower_bar} and (W2) we thus deduce (with a possibly different $c$) \begin{align*} \sum_k(a^{ij}(x_k)\p_{ij}v_k^-(x)) \eta_k(x)\geq cs(1+s) r_\ell^{-\frac{3}{2}+\frac{s}{2}}. \end{align*} This will be the main contribution in $L_0 h^-$. 
In order to see this, we now estimate the error terms: For any $x\in Q_\ell$, \begin{align*} &\sum_k(a^{ij}(x_k)\p_{ij}\eta_k(x))v_k^-(x)\\ =&\sum_k [a^{ij}(x_k)-a^{ij}(x_\ell)] \p_{ij}\eta_k(x) v_k^-(x) + \sum_k a^{ij}(x_\ell) \p_{ij} \eta_k(x) v_k^-(x)\\ =&\sum_k [a^{ij}(x_k)-a^{ij}(x_\ell)] \p_{ij}\eta_k(x) v_k^-(x) + \sum_k a^{ij}(x_\ell) \p_{ij} \eta_k(x) [v_k^-(x)-v_\ell^-(x)]\\ \leq &C(n)\left(c_{\ast} r_\ell^{\gamma -\frac{3}{2}+\frac{s}{2}}+ \epsilon r_\ell^{-\frac{3}{2}+\frac{s}{2}}\right), \quad \gamma=1-\frac{n+1}{p}, \end{align*} where in the second equality we used $a^{ij}(x_\ell)\p_{ij} (\sum_k \eta_k(x))=\sum_k a^{ij}(x_\ell)\p_{ij}\eta_k(x)=0$. In the third inequality we applied Lemma~\ref{lem:normals} and noted that for $x\in \supp(\eta_\ell)$ and $Q_k\in \mathcal{N}(Q_\ell)$, \begin{align}\label{eq:barrier_est}
|v_k^-(x)-v_\ell^-(x)|=|v^-(T_k(x))-v^-(T_\ell (x))|\leq C \max\{c_\ast r_\ell^{\gamma}, \epsilon\} r_\ell^{\frac{1+s}{2}}. \end{align}
Indeed, this follows since $|T_k(x)-T_\ell(x)| \lesssim \max\{c_\ast r_\ell^{\gamma}, \epsilon\} r_\ell $.\\ Similarly, by using $\sum_k a^{ij}(x_\ell)\p_i\eta_k(x)=0$ for all $j\in \{1,\dots, n+1\}$ as well as Lemma~\ref{lem:normals}, we infer that \begin{align*} &\sum_k a^{ij}(x_k)\p_i\eta_k\p_j v_k^- \\ &= \sum_k [a^{ij}(x_k)-a^{ij}(x_\ell)]\p_i\eta_k(x)\p_j v_k^-(x)+ \sum_k a^{ij}(x_\ell) \p_i \eta_k(x)[\p_jv_k^-(x)-\p_jv_\ell^-(x)]\\ &\leq C(n)\left(c_{\ast} r_\ell^{\gamma -\frac{3}{2}+\frac{s}{2}}+ \epsilon r_\ell^{-\frac{3}{2}+\frac{s}{2}} \right). \end{align*} Analogously to the previous estimate, we used that \begin{align*}
|\p_j v_k^-(x)-\p_jv_\ell^-(x)|=|\p_j v^-(T_k x)-\p_jv^-(T_\ell x)|\lesssim \max\{c_\ast r_\ell^{\gamma},\epsilon\} r_\ell^{\frac{s-1}{2}} \end{align*} in the last inequality. Finally, \begin{align*} \sum_k(a^{ij}(x)-a^{ij}(x_k))\p_{ij}(\eta_k v_k^-(x))\leq C(n)c_{\ast} r_\ell^{\gamma-\frac{3}{2}+\frac{s}{2}}. \end{align*} Combining all the previous estimates, if $\epsilon=\epsilon(n,s)$ and $c_{\ast} =c_{\ast}(n,p,s)$ are small enough, we have for some $c_s>0$ \begin{equation} \label{eq:lower_bound} L_0h^-(x) \geq c_s\dist(x,\Gamma)^{- \frac{3}{2}+\frac{s}{2}}. \end{equation}
Next we show that $h^-=0$ on $\Lambda$. By construction we directly note that $h^-=0$ on $\Lambda\setminus \Gamma$. Due to the continuity of $h^-$ and due to the fact that $\Gamma = \partial \Lambda$, this then also holds on $\Gamma$.\\
\emph{Step 2: Perturbation.} We now modify $h^-$, so as to become a barrier for the full operator $L$. For this we note that $G= \chi_{B_{1}}(\p_i a^{ij}) \p_j h^-$ satisfies the bound \begin{align*}
\| G(x) \dist(x,\Gamma)^{\frac{1}{2}} \|_{L^p(\R^{n+1})} \leq C(n) c_{\ast}, \end{align*} since \begin{align*}
|\p_j h^-(x)| \leq C(n)\dist(x,\Gamma)^{-\frac{1}{2}} \mbox{ for all } x\in B_{1}\setminus \Lambda. \end{align*} Let $q(x)$ be the solution of \begin{align*} L q(x) - K \dist(x,\Gamma)^{-2} q(x) &= - G(x) \mbox{ in } \R^{n+1}\setminus \Lambda,\\ q(x) & = 0 \mbox{ on } \Lambda, \end{align*} for some large constant $K=K(n)$. Then, by Proposition \ref{prop:v1} there exists a constant $C_0(n,p)$ such that \begin{align}\label{eq:est_q}
|q(x)| \leq C_0 c_{\ast} \dist(x,\Gamma)^{\frac{3}{2}- \frac{n+1}{p}}, \end{align} and $q$ vanishes continuously up to $\Lambda$. Let $$h^-_s(x):=h^-(x)+q(x).$$ Thus, for $c_{\ast}=c_{\ast}(n,p,s)$ small, we have \begin{align*} L(h^- _s) &= L_0 h^- +(\p_i a^{ij}) \p_j h^- + K\dist(x,\Gamma)^{-2}q - (\p_i a^{ij}) \p_j h^-\\ & \geq c_s\dist(x,\Gamma)^{-\frac{3}{2}+\frac{s}{2}} - C_0 K c_{\ast} \dist(x,\Gamma)^{-\frac{1}{2} - \frac{n+1}{p}}\\ & \geq \frac{1}{2}c_s\dist(x,\Gamma)^{-\frac{3}{2}+\frac{s}{2}}. \end{align*} Moreover, we note that for each $k$ with $\supp(\eta_k)\cap \{\dist(x,\Lambda)\geq \ell_0\dist(x,\Gamma)\}\neq \emptyset$ and for each $x\in\supp(\eta_k)\cap \{\dist(x,\Lambda)\geq \ell_0\dist(x,\Gamma)\}$, we have that $v_k^-(x)\geq c_n \dist(x,\Gamma)^{\frac{1}{2}+\frac{s}{2}}$. Using the definition of $h^-$ and the estimate \eqref{eq:est_q} for $q(x)$ (as well as a sufficiently small choice of $c_{\ast}$), we therefore arrive at \begin{align} \label{eq:h+} h^-_s(x)\geq c_n \dist(x,\Gamma)^{\frac{1}{2}+\frac{s}{2}} \text{ for } x\in B_1\cap \{\dist(x,\Lambda)\geq \ell_0 \dist(x,\Gamma)\}. \end{align} \end{proof}
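We remark that property (iii) of Proposition \ref{prop:barrier} follows directly from the construction above; a short verification sketch:

```latex
% Each building block satisfies v_k^- = (w_{1/2} \circ T_k)^{1+s} >= 0, and
% therefore h^- = \sum_k \eta_k v_k^- >= 0 in B_1. Combined with the bound
% (\ref{eq:est_q}) for the perturbation q this gives
\begin{align*}
 h_s^-(x) = h^-(x) + q(x) \geq q(x)
 \geq -C_0\, c_{\ast} \dist(x,\Gamma)^{\frac{3}{2}-\frac{n+1}{p}}
 \geq -c_n \dist(x,\Gamma)^{\frac{3}{2}-\frac{n+1}{p}},
\end{align*}
% provided c_\ast = c_\ast(n,p) is chosen sufficiently small. Moreover,
% h_s^- = h^- + q = 0 on \Lambda, as both contributions vanish there.
```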
Using the previously constructed barrier function, we proceed by deducing a central non-degeneracy condition (in cones) for solutions of elliptic equations with controlled inhomogeneities in domains with Reifenberg slits. This corresponds to a comparison principle for solutions to divergence form, elliptic equations with rough metrics and divergence form right hand sides in domains with Reifenberg slits. \\
For convenience of notation, we change the geometry slightly, i.e. in the following we use cylinders instead of balls: For $r,\ell>0$ and $x_0\in \R^{n}\times\{0\}$, $B'_r(x_0)\times (-\ell,\ell):=\{(x',x_{n+1}) \in \R^{n+1}| \ |x'-x'_0|<r, |x_{n+1}|<\ell\}$.
\begin{prop} \label{prop:nondeg} Let $\Lambda, \Gamma, \epsilon$ be as in Assumption \ref{assum:Reifenberg}. Assume that $c_{\ast}$, $\ell_0$ are as in Notation \ref{not:not1}. Suppose that $u\in H^1(B'_1\times (-1,1))\cap C(B'_1\times (-1,1))$ solves \begin{align*} \p_ia^{ij}\p_j u=\p_i F^i +g_1 + g_2 \text{ in } B'_1\times (-1,1)\setminus \Lambda, \quad u= 0 \text{ on } \Lambda, \end{align*} and that the following conditions are satisfied: \begin{itemize} \item[(i)] The metric $a^{ij}$ satisfies the assumptions (A1), (A2), (A3), (A4) and for some constants $\delta_0\in (0,1)$, $\sigma>\frac{n+1}{p}-\frac{1}{2}$ and $\bar{\delta}>0$, \begin{align*}
&\sum_{i=1}^{n+1}\left\|\dist(\cdot, \Gamma)^{-\sigma}F^i \right\|_{L^p(B'_1\times(-1,1))} +
\left\|\dist(\cdot,\Gamma)^{1-\frac{n+1}{p}-\sigma}g_1\right\|_{L^{p/2}(B'_1\times(-1,1))}\\ & \quad +
\left\|\dist(\cdot,\Gamma)^{\frac{3}{2}-\delta_0}g_2\right\|_{L^{\infty}(B'_1\times(-1,1))}\leq \bar{\delta}. \end{align*}
\item[(ii)] $u \geq 1$ on $B'_1\times [-1,1]\cap \{|x_{n+1}|\geq \ell_0\}$. \item[(iii)] $u\geq -2^{-8} $ in $B'_1\times (-\ell_0,\ell_0)$. \end{itemize} Then, if $\epsilon$, $\bar \delta$, $c_{\ast}$ are sufficiently small depending on $n,p,\delta_0, \sigma$, there exists a constant $c_n>0$ such that \begin{equation} \label{eq:nondeg_a} u(x)\geq c_n \dist(x,\Gamma)^{\frac{1}{2}+\frac{\bar\epsilon}{2}},\quad \bar \epsilon=\min\{\delta_0,\frac{1}{2}-\frac{n+1}{p}+\sigma\} \end{equation}
for all $x\in (B_{1/2}'\times (-1/2,1/2)) \cap \{x| \dist(x,\Lambda)\geq \ell_0 \dist(x,\Gamma)\}$, and \begin{equation} \label{eq:nondeg_b} u(x)\geq - c_n\dist(x,\Gamma)^{1-\frac{n+1}{p}+\sigma} \end{equation} for all $x\in B_{1/2}'\times (-1/2,1/2)$. \end{prop}
\begin{rmk}
By a scaling argument, Proposition \ref{prop:nondeg} remains true, if condition (iii) is replaced by $u\geq \bar c_0$ on $B'_1\times [-1,1]\cap \{|x_{n+1}|\geq \ell_0\}$ for some $\bar c_0>0$ (and if the remaining constants in Proposition \ref{prop:nondeg} are rescaled appropriately). \end{rmk}
\begin{rmk} The parameter $\sigma>\frac{n+1}{p}-\frac{1}{2}$ quantifies the necessary decay conditions on the inhomogeneities $F^i$, $i\in\{1,\dots,n+1\},$ $g_1$, $g_2$ in dependence on the integrability of these quantities (measured in terms of $p$). In the application to the thin obstacle problem, the corresponding inhomogeneities satisfy these decay assumptions, if either \begin{itemize} \item $p>n+1$ and no obstacle is present, \item or $p>2(n+1)$ in which case an arbitrary $W^{2,p}$ obstacle is allowed. \end{itemize} More precisely, in the setting of the thin obstacle problem, we apply Proposition~\ref{prop:nondeg} to the tangential derivatives $\p_e w$, which satisfy the divergence form equation \begin{align*} \p_ia^{ij}\p_j u = \p_i F^i, \quad \text{where } F^i=(\p_ea^{ij})\p_j w+\p_e(a^{ij}\p_j\phi), \end{align*} with metric $a^{ij}\in W^{1,p}$ and obstacle $\phi \in W^{2,p}$. We note that on the one hand by Lemma 4.3 in \cite{KRS14}\xspace, the first term in $F^i$ satisfies \begin{align*}
\dist(x,\Gamma_w)^{-1/2}\ln (\dist(x,\Gamma_w))^{-2} (\p_ea^{ij})\p_j w \in L^p. \end{align*} On the other hand, the second term $\p_e(a^{ij}\p_j\phi)\in L^p$, which originates from a potentially non-vanishing obstacle (c.f. Section \ref{sec:nonfl}), does \emph{not} carry any additional decay. However, \begin{itemize} \item if $p\in (n+1,2(n+1)]$, then $\sigma>\frac{n+1}{p}-\frac{1}{2}\geq 0$, i.e. decay on $F^i$ is required. The assumption (i) from Proposition \ref{prop:nondeg} on $F^i$ is thus in general only satisfied if we set $\phi=0$. \item If $p\in (2(n+1),\infty]$, we have $\frac{n+1}{p}-\frac{1}{2}<0$. Hence, we do not need any additional decay on $F^i$. In this case Proposition~\ref{prop:nondeg} applies for any $\phi\in W^{2,p}$. \end{itemize} \end{rmk}
\begin{proof} In order to prove the result, we consider an appropriate comparison function. For $x_0\in B_{1/2}'\times (-\ell_0, \ell_0)$, we set $s=\bar\epsilon$ (where $\bar{\epsilon}$ is the constant from (\ref{eq:nondeg_a})) and consider \begin{align*}
h(x):= u(x) + |x'-(x_0)'|^2 - 2^{-8}h_s^-(x) - 2 n x_{n+1}^2. \end{align*} A direct computation shows that $h$ satisfies \begin{align*} \p_i a^{ij} \p_j h = \tilde{g}_1+\tilde{g}_2, \end{align*} where \begin{align*} \tilde{g}_1&= \p_iF^i+g_1 + 2(x_j-(x_0)_j)(\p_ia^{ij}) - 4n(x_{n+1})( \p_{j} a^{n+1,j}) ,\\ \tilde{g}_2&= g_2 -2^{-8}\p_ia^{ij}\p_j h_s^-+2 a^{ii} - 4 n a^{n+1,n+1}. \end{align*} We split $h=h_1+h_2$, where \begin{align*} \p_i a^{ij}\p_j h_1 -K \dist(x,\Gamma)^{-2} h_1&=\tilde{g}_1 \text{ in } \R^{n+1}\setminus \Lambda,\\ h_1&=0 \text{ on } \Lambda. \end{align*} By Proposition~\ref{prop:v1}, there exists $C_0=C_0(n,p)$ such that \begin{align} \label{eq:estimate_for_h1}
|h_1(x)|\leq C_0 (c_{\ast}+ \bar{\delta} )\dist(x,\Gamma)^{1-\frac{n+1}{p}+\sigma}. \end{align}
Now we consider $h_2$. Recalling the definition of $\ell_0$, we have that \begin{align*} h_2&\geq 1 -C_0 (c_{\ast} + \bar{\delta}) - 2^{-8} - 2n\ell_0^{2} \geq \frac{1}{2} \mbox{ on } B_{\frac{3}{4}}'\times \{\pm \ell_0\},\\ h_2&\geq -2^{-8} +\frac{1}{16} -C_0 (c_{\ast} + \bar{\delta}) - 2^{-8} - 2n\ell_0^{2} \geq 2^{-8} \mbox{ on } \p B'_{\frac{3}{4}}\times (-\ell_0,\ell_0). \end{align*} We further recall that $u=h_1=h_s^-=0$ on $\Lambda$. Thus, $h_2 \geq 0$ on $\Lambda$.\\ If $\bar{\delta}$ and $c_{\ast}$ are chosen sufficiently small, $h_2$ satisfies \begin{align*} \p_ia^{ij}\p_jh_2&=\tilde{g}_2-K\dist(x,\Gamma)^{-2}h_1\\ &\leq (\bar{\delta}+ c_{\ast}) C_0K\dist(x,\Gamma)^{-1-\frac{n+1}{p}+\sigma}- 2^{-8}c_s\dist(x,\Gamma)^{- \frac{3}{2}+ \frac{s}{2}} \\ & \quad + 2a^{ii} -4 n a^{n+1,n+1} + \bar{\delta} \dist(x,\Gamma)^{-\frac{3}{2}+\delta_0} \\ &\leq 0 \quad \mbox{ in } (B'_{\frac{3}{4}}\times (-\ell_0,\ell_0)) \setminus \Lambda, \end{align*} where we used the smallness of $\bar{\delta}$ and the definition of $s$ in terms of $\bar{\epsilon}$. Hence, by the comparison principle, $h_2\geq 0$ in $B'_{\frac{3}{4}}\times (-\ell_0,\ell_0)$. Combining this with the estimate \eqref{eq:estimate_for_h1} gives $$h(x)\geq -C_0 (c_{\ast} + \bar{\delta})\dist(x,\Gamma)^{1-\frac{n+1}{p}+\sigma}\text{ in }B'_{\frac{3}{4}}\times (-\ell_0,\ell_0).$$ In particular, $$h(x_0)\geq -C_0 (c_{\ast}+ \bar{\delta}) \dist(x_0,\Gamma)^{1-\frac{n+1}{p}+\sigma}.$$ Rewriting this in terms of $u$ yields \begin{align*} u(x_0) &\geq - C_0 (c_{\ast} + \bar{\delta}) \dist(x_0,\Gamma)^{1-\frac{n+1}{p}+\sigma} + 2^{-8}h_s^{-}(x_0). \end{align*}
Using the growth properties of the barrier function $h_s^-$ (c.f. Proposition \ref{prop:barrier}) and choosing $c_{\ast}, \bar{\delta}$ sufficiently small depending on $n,p$ and $\delta_0$, we obtain the estimates (\ref{eq:nondeg_a}) and (\ref{eq:nondeg_b}) in $B_{1/2}'\times (-\ell_0, \ell_0)$. Finally, the full non-degeneracy in $(B_{1/2}'\times (-1/2,1/2)) \cap \{x| \dist(x,\Lambda)\geq \ell_0 \dist(x,\Gamma)\}$ follows from our assumption (ii) in combination with the fact that $0\in \Gamma$. \end{proof}
\subsubsection{Proof of Proposition \ref{prop:Lip}} \label{sec:proofLip}
With the aid of Proposition~\ref{prop:nondeg} we can now prove the Lipschitz (and $C^1$) regularity of the regular set of the free boundary. We only sketch the proof, as it requires no substantial modifications with respect to the one in \cite{PSU}:
\begin{proof}[Proof of Proposition \ref{prop:Lip}]
By choosing $\epsilon_0$ and $c_{\ast}$ sufficiently small and by invoking Proposition \ref{prop:Reifenberg}, we may assume that $\Gamma_{w}\cap B_{1/2}'$ is $(\epsilon,1/2)$-Reifenberg flat. Using this and the $C^1(B_1^+)$ closeness of $w$ to $w_{3/2}$, we apply Proposition \ref{prop:nondeg}. In order to satisfy its assumptions, we transfer positivity from $\p_e w_{3/2}$ to $\p_e w$, where $e\in S^{n-1}\times \{0\}$ is a tangential direction. We consider $\ell_0= (2^{4}\sqrt{n})^{-1}$ as in Proposition \ref{prop:nondeg}. Then for any $\eta>0$ and any tangential vector $e\in \{x \in S^n \cap B_1' \big| \ x_{n} > \eta |x''|\}$, the function $w_{3/2}$ satisfies: \begin{align*} \p_e w_{3/2}(x) &\geq 0 \mbox{ if } x\in B_{1},\\
\p_e w_{3/2}(x',x_{n+1}) &\geq c_n \eta (1+\eta^2)^{-1/2}\ell_0^{1/2} >0 \mbox{ if } x \in B_{1} \cap \{|x_{n+1}|\geq \ell_0\}. \end{align*} Moreover, $\p_e w$ solves \begin{align*} \p_i (a^{ij} \p_j \p_ew ) = \p_i F^{i}\mbox{ in } B_{1}\setminus \Lambda_{w},\quad F^i= -(\p_e a^{ij} )\p_j w. \end{align*}
By Lemma 4.3 in \cite{KRS14}\xspace, $\|\dist(\cdot,\Gamma_w)^{-\frac{1}{2}}\ln(\dist(\cdot,\Gamma_w))^{-2}F^i\|_{L^p(B_1)}\leq C(n,p)c_\ast$. Hence, by Proposition~\ref{prop:nondeg} (applied with $\sigma = \frac{1}{2}-\mu$, where $\mu$ is an arbitrarily small but fixed constant), if $\epsilon_0$ and $c_\ast$ are chosen sufficiently small depending on $n, p, \mu$, then there exists a positive constant $c_{n,p}$ such that \begin{equation} \label{eq:positive_cone} \p_{e}w(x)\geq c_{n,p}\dist(x,\Gamma_{w})^{\frac{1}{2}+ \frac{\bar\epsilon}{2}}\geq 0 \end{equation}
for all $x\in B_{1/2}\cap \{x| \dist(x,\Lambda_{w}) \geq \ell_0\dist(x,\Gamma_{w}) \} $ and $e\in \{x\in S^n\cap B'_1| x_n>\eta|x''|\}$. Here $\eta\geq \tilde{C}\max\{\epsilon_0, c_\ast\}$ for a large enough positive constant $ \tilde{C}=\tilde{C}(n,p)$ and $\bar{\epsilon}= \sigma+\frac{1}{2}-\frac{n+1}{p}$. Moreover, Proposition \ref{prop:nondeg} also implies the global lower bound $\p_e w(x) \geq -\dist(x,\Gamma_w)^{1-\frac{n+1}{p}+\sigma}$ in $B_{1/2}$. As $\p_e w=0$ on $\Lambda_w$ and as $\Omega_w$ is covered by the set $\{x| \dist(x,\Lambda_{w}) \geq \ell_0\dist(x,\Gamma_{w}) \} $ (on which the bound (\ref{eq:positive_cone}) holds), we deduce that $\p_e w \geq 0$ in $B'_{ 1/2}$, with strict positivity in $B'_{ 1/2}\cap \{x| \dist(x,\Lambda_{w}) > \ell_0 \dist(x,\Gamma_{w}) \} $. As a consequence (c.f. \cite{PSU}), \begin{align*}
\{w(x',0)>0\}\cap B_{1/2}' = \{x_{n}>g(x'') \big| \ x'' \in B''_{1/2}\}, \end{align*}
where $g$ is a Lipschitz continuous function with $\|\nabla'' g\|_{L^\infty(B''_{1/2})}\leq \tilde{C}\max\{\epsilon_0,c_\ast\}$. \end{proof}
\begin{rmk} \label{rmk:diff}
If we return to an arbitrary solution $w$ of (\ref{eq:varcoef}), we can even conclude the $C^1$ regularity of the free boundary from the previously given argument. Indeed, after rescaling, it is always possible to satisfy the closeness assumption (i) in Proposition \ref{prop:Lip} with an arbitrarily small parameter $\epsilon_0$. Hence, we can infer that for each $\eta>0$ there exists $\rho_{\eta}>0$ such that $\left\| \nabla'' g\right\|_{L^{\infty}(B_{\rho_{\eta}}'')}\leq \eta$. This yields that $g$ is differentiable at $0$, or in other words, the tangent plane of $\Gamma_{3/2}(w)$ exists at the origin. The same holds true at other regular free boundary points. Moreover, from \eqref{eq:positive_cone} it is not hard to see that the tangent plane varies continuously in a neighborhood of the origin. Hence, we indeed obtain that $\Gamma_{3/2}(w)\cap B_\rho$ is not only Lipschitz but $C^1$ regular. \end{rmk}
\begin{rmk} At this stage and with these (comparison) techniques we do not know how to further improve the modulus of continuity. Heuristically, the improvement comes from the fact that the convergence to the homogeneous solution is much faster than the compactness arguments suggest, once the solution is close to the homogeneous solution in the sense of Proposition \ref{prop:Reifenberg}. Thus, in the next section we use a boundary Harnack inequality to overcome this difficulty. \end{rmk}
\subsection{$C^{1,\alpha}$ regularity} \label{subsec:C1a}
In this section we improve the regularity of the (regular) free boundary to obtain its $C^{1,\alpha}$ regularity for some $\alpha \in (0,1]$:
\begin{prop} \label{prop:C1a}
Let $a^{ij}$ be a uniformly elliptic $W^{1,p}(B_1^+)$, $p\in(n+1,\infty]$, tensor field with $a^{ij}(0)=\delta^{ij}$. Assume that the conditions (i) and (ii) of Proposition \ref{prop:Lip} are satisfied with the same constants $\epsilon_0$ and $c_\ast$. Then if $\epsilon_0, c_{\ast}$ are sufficiently small depending on $n,p$, there exists $\alpha = \alpha(n,p) \in(0,1-\frac{n+1}{p}]$ such that $g\in C^{1,\alpha}(B_{1/4}'')$. Moreover, $\|\nabla ''g\|_{C^{0,\alpha}(B''_{1/4})}\leq C(n,p)\max\{\epsilon_0,c_\ast\}$. \end{prop}
We remark that Theorem \ref{thm:C1a} is an immediate consequence of Proposition \ref{prop:C1a}:
\begin{proof}[Proof that Proposition~\ref{prop:C1a} implies Theorem \ref{thm:C1a}]
As a result of the analysis in \cite{KRS14}, we may assume that for an appropriate sequence of radii $r_k>0$, $r_k \rightarrow 0$, the ($L^2$)-rescalings, $w_{r_k}(x):= \frac{w(r_k x)}{ r_{k}^{- \frac{n+1}{2}} \left\| w \right\|_{L^2(B_{r_k}^+)} }$, converge to a global solution $w_0$ in $C^{1,\alpha}_{loc}(\R^{n+1}_+)$ for some $\alpha \in (0,1/2)$, where after a possible rotation of coordinates \begin{align*} w_0(x)=w_{3/2}(x)=c_n \Ree(x_{n}+ ix_{n+1})^{\frac{3}{2}}. \end{align*} For each $k$, $w_{r_{k}}$ is a solution of \eqref{eq:varcoef} in $B_{r_k^{-1}}$ with metric $a^{ij}_{r_k}$, where $a^{ij}_{r_k}(x)=a^{ij}(r_k x)$, with rescaled free boundary $\Gamma_{w_{r_k}}$.
A direct calculation shows that $$\|\nabla a^{ij}_{r_k}\|_{L^p(B_1)}=r_k^{1-\frac{n+1}{p}}\|\nabla a^{ij}\|_{L^p(B_{r_k})}.$$ Thus, for $r_{k_0}$ small enough, assumptions (i), (ii) in Proposition~\ref{prop:Lip} are satisfied for $w_{r_{k_0}}$. Hence, Proposition~\ref{prop:Lip} implies that $\Gamma_{w_{r_{k_0}}}\cap B_{1/2}$ is a Lipschitz graph with sufficiently small Lipschitz constant, and Proposition \ref{prop:C1a} is applicable to $w_{r_{k_0}}$. Rescaling back to $w$ yields Theorem~\ref{thm:C1a}. \end{proof}
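For the reader's convenience, we record the change of variables behind the scaling identity used in the preceding proof. Since $\nabla a^{ij}_{r_k}(x)=r_k (\nabla a^{ij})(r_k x)$, the substitution $y=r_k x$ gives \begin{align*} \int\limits_{B_1}|\nabla a^{ij}_{r_k}(x)|^p dx = r_k^{p}\int\limits_{B_1}|(\nabla a^{ij})(r_k x)|^p dx = r_k^{p-(n+1)}\int\limits_{B_{r_k}}|\nabla a^{ij}(y)|^p dy, \end{align*} and taking $p$-th roots yields the identity. As $p>n+1$, the right hand side of the identity is bounded by $r_k^{1-\frac{n+1}{p}}c_\ast$ and hence tends to zero as $r_k\rightarrow 0$.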
The improvement from Lipschitz to $C^{1,\alpha}$ regularity of the regular free boundary is achieved with the aid of a boundary Harnack inequality (c.f. Theorem 7 in \cite{ACS08}) applied to tangential derivatives $v=\p_e w$ in the slit domain $B_1\setminus \Lambda_w$. Here we follow the ideas of \cite{CSS} by making use of the non-degeneracy condition of the solution. In this context, we have to be slightly more careful, as our inhomogeneity is in divergence form. Consequently, we rely on the decomposition, $v=v_1+v_2$, which we introduced above (c.f. Proposition \ref{prop:v1}). Another difficulty, which we have to overcome, is the fact that we may not assume that $v$ is positive in the whole domain $B_1\setminus \Lambda_w$: On the one hand it satisfies (by Proposition~\ref{prop:nondeg}) \begin{equation*}
v(x)\geq c_0 \dist(x,\Gamma_w)^{\frac{1}{2}+\frac{\bar\epsilon}{2}} \text{ in } \{x\in B_1| \ \dist(x,\Lambda_w)\geq \ell_0 \dist(x,\Gamma_w)\}, \end{equation*} for $\ell_0 = (2^{4}\sqrt{n})^{-1}$ and $\bar{\epsilon}=\min\{\delta_0,\sigma+\frac{1}{2}-\frac{n+1}{p}\}$, while on the other hand in the complement we only have \begin{equation*} v(x)\geq - c_1\dist(x,\Gamma_w)^{1-\frac{n+1}{p}+\sigma} \text{ in } B_1. \end{equation*} Here $\sigma>\frac{n+1}{p}-\frac{1}{2}$ is the same constant as in Proposition \ref{prop:nondeg} (i).\\
Despite these additional difficulties, it is possible to adapt the general strategy of \cite{CSS}: Working in the framework of general elliptic equations with controlled inhomogeneities in domains with Lipschitz slits, in Section \ref{sec:Carl_Harn} we first prove a Carleson estimate (Lemma \ref{lem:Carleson}) which is adapted to our setting. Then we continue with an appropriate boundary Harnack inequality (Lemma \ref{lem:boundHarn}). In Section \ref{sec:proofC1a} we finally apply these results in the setting of the thin obstacle problem and hence infer the desired regularity result of Proposition \ref{prop:C1a}.
\subsubsection{Carleson and boundary Harnack estimates} \label{sec:Carl_Harn}
In the sequel, we work in the slightly more general set-up of elliptic equations with controlled inhomogeneities in domains with Lipschitz slits. More precisely, in this section we make the following assumptions:
\begin{assum} \label{assum:set} Let $g: \R^{n-1}\rightarrow \R$ be a Lipschitz function with Lipschitz constant $\epsilon>0$ and $g(0)=0$. Then we consider \begin{align*} \Lambda &:= \{x_{n}\leq g(x'')\} \subset \R^{n}\times \{0\},\\ \Gamma &:=\{x_n=g(x'')\}\subset \R^n\times \{0\}. \end{align*} \end{assum}
We remark that the Lipschitz regularity of $\Gamma$ in particular implies its $(\epsilon,1)$-Reifenberg flatness. Moreover, in our set-up, the smallness assumption on the Lipschitz constant of $g$ does not pose restrictions and can always be achieved by choosing the constants $\epsilon_0, c_\ast$ sufficiently small in Proposition \ref{prop:Lip}.\\
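Indeed, the first of these observations follows from a direct computation: for $x_0\in \Gamma$ and $r\in (0,1)$ consider the (not necessarily optimal) plane $L:=\{x\in \R^{n}\times \{0\}| \ x_n=g(x_0'')\}$. Then for every $x\in \Gamma\cap B_r(x_0)$ \begin{align*} \dist(x,L)\leq |x_n-g(x_0'')|=|g(x'')-g(x_0'')|\leq \epsilon |x''-x_0''|\leq \epsilon r, \end{align*} and, conversely, each point $(y'',g(x_0''),0)\in L\cap B_r(x_0)$ lies within distance $\epsilon r$ of the point $(y'',g(y''),0)\in \Gamma$.\\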
As in the previous Section \ref{sec:boundary}, we use the abbreviations from Notation \ref{not:not1}, i.e. we again set $\ell_0 := (2^{4} \sqrt{n})^{-1}$ and $c_{\ast}:=\left\| \nabla a^{ij} \right\|_{L^p(B_1)}$ for notational convenience.\\
In this framework we derive the following Carleson estimate:
\begin{lem}[Carleson estimate] \label{lem:Carleson} Let $\Lambda, \Gamma, \epsilon$ satisfy Assumption \ref{assum:set}. Suppose that $u\in C(B_1)\cap H^{1}(B_1 \setminus \Lambda)$ solves \begin{align*} \p_i a^{ij} \p_j u &= \p_i F^i+g_1+g_2 \mbox{ in } B_1\setminus \Lambda, \quad u=0 \text{ on } \Lambda, \end{align*} where we assume that condition (i) from Proposition \ref{prop:nondeg} and the following non-degeneracy assumption hold: For $\sigma, \bar{\epsilon}$ as in Proposition \ref{prop:nondeg} we have \begin{equation} \label{eq:nondeg} \begin{split}
u(x)&\geq \dist(x,\Gamma)^{\frac{1}{2}+\frac{\bar{\epsilon}}{2}} \text{ in } \{x\in B_1| \ \dist(x,\Lambda)\geq \ell_0 \dist(x,\Gamma)\},\\ u(x)&\geq - \dist(x,\Gamma)^{1-\frac{n+1}{p}+\sigma} \text{ in } B_1. \end{split} \end{equation} Then if $\bar\delta=\bar\delta(n,p,\delta_0)$ and $\epsilon=\epsilon(n,p,\delta_0)$ are small enough, there exists $C=C(n,p,\epsilon)$ such that \begin{align*} \sup\limits_{B_{r}(Q)}u \leq C u(Q+ r e_{n}), \end{align*} for all $Q\in \Gamma \cap B'_{1/2}$ and $r\in (0,1/4)$. \end{lem}
\begin{proof} Let $Q\in \Gamma\cap B'_{1/2}$ and $r\in (0,1/4)$. Let $u^*$ be the solution of \begin{align*} \p_i a^{ij} \p_j u^* &= 0 \mbox{ in } B_{2 r}(Q)\setminus \Lambda,\\ u^* &= \max\{u,0\} \mbox{ on } \p (B_{2 r}(Q)\setminus \Lambda). \end{align*} Then on $B_{r}(Q) \setminus \Lambda$ Theorem 7 of \cite{ACS08} implies that there exists $C=C(n,\epsilon)$ such that \begin{align*} u^*(x) \leq Cu^*(Q+ r e_{n}). \end{align*} We transfer this estimate to $u$ by exploiting a comparison argument which makes use of the non-degeneracy of $u$. We consider \begin{align*} h(x):= u^*(x) -u(x)-h^-_s(x) +8r^{\frac{1}{2}+\frac{s}{2}}, \end{align*} where $h^-_s$ is the barrier function from Proposition \ref{prop:barrier} and $s=\bar \epsilon$. For $h$ we obtain \begin{align*} \p_i a^{ij}\p_j h&=-\p_iF^i -g_1-g_2-\p_ia^{ij}\p_j h^-_s\quad \text{ in } B_{2 r}(Q)\setminus \Lambda,\\ h&\geq 4 r^{(1+s)/2}\quad \mbox{ on } \p ( B_{2 r}(Q)\setminus \Lambda), \end{align*}
where in the second inequality we have used $|u^\ast-u|=\max\{-u,0\}\leq r^{1-\frac{n+1}{p}+\sigma}$ on $\p (B_{2 r}(Q)\setminus \Lambda)$. In order to estimate $u$, we use the splitting technique again. With a slight abuse of notation we write $\Lambda=\Lambda\cap B_{2r}(Q)$ for simplicity. We decompose $h=h_1+h_2$, where $h_1$ solves \begin{align*} \p_ia^{ij}\p_j h_1-K\dist(x,\Gamma)^{-2}h_1&=-\p_iF^i-g_1 \text{ in } B_1\setminus \Lambda,\\ h_1&=0 \text{ on } \Lambda. \end{align*} Then by Proposition~\ref{prop:v1},
$$|h_1(x)|\leq C(n,p) \bar \delta \dist(x,\Gamma)^{1-\frac{n+1}{p}+\sigma}.$$ For $\bar \delta=\bar \delta(n,p)$ sufficiently small, $h_2\geq 0$ on $\p (B_{2 r}(Q)\setminus \Lambda )$. In $B_{2 r}(Q)\setminus \Lambda$, for $\bar\delta=\bar\delta(n,p,\delta_0)$ sufficiently small, \begin{align*} \p_ia^{ij}\p_j h_2&=-K\dist(x,\Gamma)^{-2}h_1-g_2-\p_ia^{ij}\p_j h^-_s\\ &\leq KC(n,p) \bar \delta \dist(x,\Gamma)^{-1-\frac{n+1}{p}+\sigma}+\bar\delta \dist(x,\Gamma)^{-\frac{3}{2}+\delta_0}\\ & \quad -c(\bar \epsilon)\dist(x,\Gamma)^{-\frac{3}{2}+\bar \epsilon}\\ &\leq 0. \end{align*} Thus, by the comparison principle we have $h_2\geq 0$ in $B_{2r}(Q)$. Hence, we obtain $$h(x)\geq - r^{1-\frac{n+1}{p}+\sigma}\text{ for any } x\in B_{2r}(Q)\setminus\Lambda.$$ Combining this with the choice of $s$ yields \begin{equation*} u^*-u\geq -Cr^{\frac{1}{2}+\frac{\bar\epsilon}{2}} \text{ in } B_{2r}(Q)\setminus\Lambda, \end{equation*} for some absolute constant $C$. Similarly, we deduce $u-u^*\geq -Cr^{\frac{1}{2}+\frac{\bar\epsilon}{2}}$ in $B_{2r}(Q)\setminus\Lambda$. As a result, for $x\in B_{r}(Q)$, \begin{align*} u(x) &\leq u^*(x) + Cr^{\frac{1}{2}+\frac{\bar\epsilon}{2}}\\ &\leq C u^*(Q+r e_{n}) + Cr^{\frac{1}{2}+\frac{\bar\epsilon}{2}}\\ &\leq C u(Q+r e_{n}) + Cr^{\frac{1}{2}+\frac{\bar\epsilon}{2}}+ Cr^{\frac{1}{2}+\frac{\bar\epsilon}{2}}\\ & \leq 3 C u(Q+r e_{n}), \end{align*} where in the last line we used the non-degeneracy of $u$. \end{proof}
With this preparation, we come to the boundary Harnack inequality. Let $\Lambda$ and $\Gamma$ be as in Assumption \ref{assum:set}. We now in addition choose $\theta=\theta(n,\epsilon)\in (0,2\pi)$ such that for any $x_0\in \Gamma$
$$x_0+\mathcal{C}_\theta'(e_n)\subset \{ x\in \R^{n}\times \{0\}| \ \dist(x,\Lambda)\geq \ell_0 \dist(x,\Gamma)\}. $$ Here $\mathcal{C}_{\theta}'(e_n)$ is a cone in $\R^{n}\times \{0\}$ with opening angle $\theta$ and $\ell_0$ is as in Notation \ref{not:not1}.
\begin{lem}[Boundary Harnack] \label{lem:boundHarn} Let $\Lambda, \Gamma, \epsilon$ satisfy Assumption \ref{assum:set}. Suppose that $u_1,u_2\in C(B_1)\cap H^{1}(B_1 \setminus \Lambda)$ solve \begin{align*} \p_i a^{ij} \p_j u_1 &= \p_i F^i +g_1+g_2 \mbox{ in } B_{1}\setminus \Lambda,\quad u_1=0 \text{ on } \Lambda,\\ \p_i a^{ij}\p_j u_2&=\p_i \tilde{F}^i+ \tilde{g}_1+\tilde{g}_2 \mbox{ in } B_{1}\setminus \Lambda, \quad u_2=0 \text{ on } \Lambda, \end{align*} where \begin{itemize} \item the coefficients $a^{ij}$ and inhomogeneities satisfy the condition (i) from Proposition \ref{prop:nondeg} with the same constants $\sigma$ and $\bar\delta$, \item both functions $u_1, u_2$ satisfy the non-degeneracy condition (\ref{eq:nondeg}) of Lemma~\ref{lem:Carleson} with the same constant $\bar \epsilon$. \end{itemize} Then, if $\epsilon$, $\bar\delta$ and $c_{\ast}$ are sufficiently small depending on $n,p$ and $\delta_0$, there exists a constant $s_0=s_0(n,p, \bar\epsilon, \sigma)>0$, such that \begin{align*} s_0\frac{u_2(\frac{1}{2}e_{n})}{u_1(\frac{1}{2} e_{n})}\leq \frac{u_2(x)}{u_1(x)} \leq s_0^{-1}\frac{u_2(\frac{1}{2}e_{n})}{u_1(\frac{1}{2} e_{n})} \text{ in } \bigcup_{x_0\in B_{1/2}\cap \Gamma} \left(x_0+\mathcal{C}'_\theta(e_n)\right)\cap B_{1/2}. \end{align*} Moreover, $u_2/u_1$ extends to a function which is Hölder continuous on $\Gamma\cap B'_{1/4}$. More precisely, there exist constants $\alpha \in (0,1]$ and $C>0$ depending on $n,p$ and $\bar\epsilon$, $\sigma$ such that for all $x_0\in \Gamma\cap B'_{1/4}$ \begin{align*}
\left|\frac{u_2}{u_1}(x) - \frac{u_2}{u_1}(x_0) \right| \leq C \frac{u_2(\frac{1}{2}e_n)}{u_1(\frac{1}{2}e_n)} |x-x_0|^{\alpha},\quad x \in B_{1/4}(x_0)\cap \left(x_0+\mathcal{C}'_\theta(e_n)\right). \end{align*} \end{lem}
\begin{proof} We only prove the Hölder continuity at $x_0=0$. The estimate at the remaining points follows analogously. \\
\emph{Step 1: Boundedness of the quotient.} Without loss of generality we may assume that $u_1(\frac{e_{n}}{2}) = u_2(\frac{e_{n}}{2})=1$. Then the Carleson estimate (Lemma~\ref{lem:Carleson}) and the non-degeneracy assumptions on $u_1,u_2$ imply that \begin{align*} u_1(x) &\leq C \mbox{ in } B_{1},\\
u_2(x) & \geq \ell_0^{\frac{1}{2}+\frac{\bar{\epsilon}}{2}} \mbox{ for all } x \mbox{ with } |x_{n+1}|\geq \ell_0. \end{align*} Therefore, Proposition \ref{prop:nondeg} implies that there exists $s_0\in (0,1)$ (depending only on the admissible quantities) such that
$$u_2-s_0 u_1 \geq 0 \mbox{ in } \{x\in B_{\frac{1}{2}}| \ \dist(x,\Lambda)\geq\ell_0 \dist(x,\Gamma)\}.$$ Hence, in particular, \begin{align*}
\frac{u_2(x)}{u_1(x)} \geq s_0 \mbox{ for all } x\in \{x\in B_{\frac{1}{2}}| \ \dist(x,\Lambda)\geq\ell_0 \dist(x,\Gamma)\}. \end{align*} Exchanging the role of $u_1$ and $u_2$, we have
$$ u_1 -s_0u_2 \geq 0 \mbox{ in }\{x\in B_{\frac{1}{2}}| \ \dist(x,\Lambda)\geq\ell_0 \dist(x,\Gamma)\}.$$ \emph{Step 2: Iteration argument.} We only prove the H\"older regularity at $x_0=0$.\\ We show that there exist $a_k, b_k$ depending on $s_0$ and $n,p,\bar{\epsilon}$ such that \begin{itemize} \item[($\alpha$)] $s_0\leq a_k \leq 1\leq b_k \leq s_0^{-1}$ and $b_k - a_k \leq C \mu_1^k$ for some $\mu_1 \in (2^{-\frac{\bar{\epsilon}}{2}},1)$, \item[($\beta$)] there exists $\mu_2 \in [2^{-\frac{\bar{\epsilon}}{2}},\mu_1]$ with $b_{k}-a_{k}\geq C\mu_2^{k}$ for some constant $C\geq 4$, \item[($\gamma$)] on $B_{2^{-k}}\cap \mathcal{C}'_\theta(e_n)$ we have $ a_k u_1\leq u_2 \leq b_k u_1 $. \end{itemize} We argue by induction and define \begin{align*} \tilde{w}_1(x) := \frac{u_2(2^{-k}x)- a_k u_1(2^{-k}x)}{b_k -a_k}, \ \tilde{w}_2(x):= \frac{b_k u_1(2^{-k}x) - u_2(2^{-k}x) }{b_k-a_k}. \end{align*} As $\tilde{w}_1(x) + \tilde{w}_2(x) = u_1(2^{-k}x)$, we may without loss of generality assume that $\tilde{w}_1(\frac{e_{n}}{2})\geq \frac{1}{2} u_1(\frac{2^{-k}e_{n}}{2})$. Dividing by $u_1(\frac{2^{-k}e_{n}}{2})$ and setting \begin{align*} w_1(x) := \frac{\tilde{w}_{1}(x)}{ u_1(\frac{2^{-k}e_{n}}{2})}, \ \bar{u}(x):=\frac{u_1(2^{-k}x)}{ u_1(\frac{2^{-k}e_{n}}{2})}, \end{align*} we obtain that \begin{align*} 2w_1(\frac{e_{n}}{2}) \geq \bar{u}(\frac{e_{n}}{2}) = 1. \end{align*} Seeking to apply step 1, we have to show that $2 w_1$ and $\bar{u}$ satisfy its assumptions, i.e. the bounds on the associated inhomogeneity and the non-degeneracy. For this we observe: \begin{itemize} \item The inhomogeneity associated with $2w_1$ satisfies the bounds from Lemma \ref{lem:Carleson}. 
Indeed, $w_1$ solves \begin{align*} \p_i a^{ij}_{2^{-k}}\p_j w_1 &= \p_iH^i+G_1+G_2 \text{ in } B_1\setminus \Lambda_{2^{-k}}, \end{align*} where \begin{align*} H^i(x)&=\frac{2^{-k}}{(b_k -a_k) u_1(\frac{2^{-k}e_{n}}{2})}\left[\tilde{F}^i(2^{-k}x)-a_kF^i(2^{-k}x)\right],\\ G_1(x)&=\frac{2^{-2k}}{(b_k -a_k) u_1(\frac{2^{-k}e_{n}}{2})}\left[\tilde{g}_1(2^{-k}x)-a_kg_1(2^{-k}x)\right],\\ G_2(x)&= \frac{2^{-2k}}{(b_k -a_k) u_1(\frac{2^{-k}e_{n}}{2})}\left[\tilde{g}_2(2^{-k}x)-a_kg_2(2^{-k}x)\right],\\ a^{ij}_{2^{-k}}(x)&=a^{ij}(2^{-k}x),\\
\Lambda_{2^{-k}}&= \frac{\Lambda}{2^{-k}}=\{x\in B_1| \ 2^{-k} x \in \Lambda\},\\
\Gamma_{2^{-k}}&= \frac{\Gamma}{2^{-k}}=\{x\in B_1| \ 2^{-k} x \in \Gamma\}. \end{align*} Since $\dist(x,\Gamma_{2^{-k}})=2^k \dist(2^{-k}x,\Gamma)$, we estimate \begin{equation} \begin{split} \label{eq:inhom}
&\left\| \dist(\cdot,\Gamma_{2^{-k}})^{-\sigma}H^i\right\|_{L^{p}(B_{1})}+\left\| \dist(\cdot,\Gamma_{2^{-k}})^{1-\frac{n+1}{p}-\sigma} G_1\right\|_{L^{p/2}(B_{1})}+\left\|\dist(\cdot,\Gamma_{2^{-k}})^{\frac{3}{2}-\delta_0}G_2\right\|_{L^\infty(B_1)}\\
&\leq \frac{2^{-k(1-\frac{n+1}{p}+\sigma)}}{(b_k -a_k)u_1(\frac{2^{-k}e_{n}}{2})}\left(\|\dist(\cdot,\Gamma)^{-\sigma}\tilde{F}^i\|_{L^p(B_{2^{-k}})} + a_k\|\dist(\cdot,\Gamma)^{-\sigma}F^i\|_{L^p(B_{2^{-k}})} \right. \\
& \quad \left.+\| \dist(\cdot,\Gamma)^{1-\frac{n+1}{p}-\sigma} \tilde{g}_1\|_{L^{p/2}(B_{2^{-k}})} + a_k\| \dist(\cdot,\Gamma)^{1-\frac{n+1}{p}-\sigma } g_1\|_{L^{p/2}(B_{2^{-k}})}\right)\\
& \quad +\frac{2^{-k(\frac{1}{2}+\delta_0)}}{{(b_k -a_k)u_1(\frac{2^{-k}e_{n}}{2})}}\left(\|\dist(\cdot,\Gamma)^{\frac{3}{2}-\delta_0} \tilde{g}_2\|_{L^{\infty}(B_{2^{-k}})}+a_k\| \dist(\cdot,\Gamma)^{\frac{3}{2}-\delta_0 }g_2\|_{L^{\infty}(B_{2^{-k}})}\right) \\ &\leq \frac{1}{(b_k -a_k)u_1(\frac{2^{-k}e_{n}}{2}) }2\bar \delta\cdot 2^{-k(\frac{1}{2}+\bar \epsilon)} \leq \bar \delta. \end{split} \end{equation} Here we used the non-degeneracy assumption \eqref{eq:nondeg} in the form $u_1(\frac{2^{-k}e_{n}}{2})\geq 2^{-(k+1)(\frac{1}{2}+\frac{\bar{\epsilon}}{2})}$ (for which we recall that $\bar\epsilon=\min\{\sigma+\frac{1}{2}-\frac{n+1}{p}, \delta_0\}$) as well as the condition ($\beta$), i.e. $\mu_2\geq 2^{-\frac{\bar{\epsilon}}{2}}$. The inhomogeneity associated with $\bar{u}$ can be estimated similarly.
\item $2w_1$ and $\bar{u}$ satisfy the non-degeneracy assumption. As $2w_1(e_{n}/2)\geq 1$ and $\bar{u}(e_{n}/2)=1$, Harnack's inequality in the domain $B_1\cap \{x| \ \dist(x,\Lambda_{2^{-k}})\geq\ell_0\dist(x,\Gamma_{2^{-k}})\}$ combined with Proposition \ref{prop:nondeg} (after a rescaling which only depends on $n$) implies that there exists $c_0=c_0(n,p)$ such that the non-degeneracy condition \eqref{eq:nondeg} is satisfied for $2w_1$ and $\bar u$: \begin{align*} 2w_1(x), \bar u(x)&\geq c_0 \dist(x,\Gamma_{2^{-k}})^{\frac{1}{2}+\frac{\bar{\epsilon}}{2}} \\
&\text{ in } \{x\in B_1| \ \dist(x,\Lambda_{2^{-k}})\geq \ell_0\dist(x,\Gamma_{2^{-k}})\},\\ 2w_1(x), \bar u(x)&\geq - c_0\dist(x,\Gamma_{2^{-k}})^{1-\frac{n+1}{p}+\sigma} \\
&\text{ in } \{x\in B_1| \ \dist(x,\Lambda_{2^{-k}})< \ell_0 \dist(x,\Gamma_{2^{-k}})\}. \end{align*} We remark that the constant $c_0$ can be chosen in the same way in each iteration step. \end{itemize} Therefore, step 1 implies (with a possibly different $s_0$ but with the same parameter dependence) \begin{align*}
\frac{2 w_1}{\bar{u}}\geq s_0 \text{ in } \{x\in B_{1/2}| \ \dist(x,\Lambda_{2^{-k}})\geq\ell_0 \dist(x,\Gamma_{2^{-k}})\}. \end{align*} Rescaling back and using the scale invariance of $\mathcal{C}'_\theta (e_n)$, we infer \begin{align*} \frac{u_2-a_k u_1}{(b_k -a_k)u_1} \geq \frac{s_0}{2}\text{ in } B_{2^{-k-1}}\cap \mathcal{C}'_\theta(e_n). \end{align*} Therefore, $$ (a_k + \frac{s_0}{2}(b_k-a_k)) u_1 \leq u_2 \leq b_k u_1 \text{ in } B_{2^{-k-1}}\cap \mathcal{C}'_\theta(e_n). $$ Thus, we define $a_{k+1}:= a_k + \frac{s_0}{2}(b_k-a_k)$ and $b_{k+1}:=b_k$. These satisfy \begin{align*} b_{k+1} - a_{k+1} = \left(1-\frac{s_0}{2}\right)(b_k -a_k). \end{align*} Let $\mu_1= 1 - \frac{s_0}{3}$ and $\mu_2 =1-\frac{s_0}{2}$ with $s_0$ chosen so small that $1-\frac{s_0}{2}>2^{-\frac{\bar{\epsilon}}{2}}$. This yields the conditions in ($\alpha$) and ($\beta$) and hence proves the desired estimate. \end{proof}
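For completeness, we indicate how the dyadic decay encoded in ($\alpha$) and ($\gamma$) yields the asserted H\"older continuity at $x_0=0$ (under the normalization $u_1(\frac{e_{n}}{2}) = u_2(\frac{e_{n}}{2})=1$ from Step 1). The sequences $a_k, b_k$ are monotone and, by ($\alpha$), converge to a common limit $\ell$, which serves as the extension value $\frac{u_2}{u_1}(0):=\ell$. For $x\in \mathcal{C}'_\theta(e_n)$ with $2^{-k-1}\leq |x|\leq 2^{-k}$, property ($\gamma$) gives \begin{align*} \left|\frac{u_2}{u_1}(x)-\ell\right|\leq b_k-a_k\leq C\mu_1^{k}=C2^{-k\alpha}\leq C(2|x|)^{\alpha}, \quad \alpha:=\log_2 \mu_1^{-1}\in \left(0,\tfrac{\bar{\epsilon}}{2}\right), \end{align*} which is the claimed estimate.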
\subsubsection{Proof of Proposition \ref{prop:C1a}} \label{sec:proofC1a} \begin{proof}[Proof of Proposition \ref{prop:C1a}] The proof is standard (c.f. Theorem 6.9 of \cite{PSU}). We only give a brief sketch of it and indicate the main modifications. The idea is to apply the boundary Harnack inequality (Lemma~\ref{lem:boundHarn}) to the tangential derivatives $\p_e w$, $e\in \mathcal{C}'_\eta(e_n)$. This gives the uniform (in $h$) $C^{1,\alpha}$ regularity of level sets $\{w=h\}$ with small $h>0$. Then the uniform convergence of the level sets as $h\rightarrow 0$ allows us to deduce the $C^{1,\alpha}$ regularity of the zero level set, which is the free boundary $\Gamma_w$.\\
We consider tangential derivatives $v:=\p_e w$ with $e\in \{x\in S^n\cap B'_1\big| x_n> \eta |x''|\}$. Here $\eta= \tilde{C}(n,p)\max\{\epsilon_0, c_\ast\}$ is the same constant as in \eqref{eq:positive_cone}. Using the off-diagonal assumption (A1) and extending $v$ to the whole ball $B_1$ as described in Remark \ref{rmk:ref_ext}, we note that it solves \begin{align*} \p_i a^{ij} \p_{j} v &= \p_i F^{i} \mbox{ in } B_{1}\setminus \Lambda_w,\\ v &= 0 \mbox{ on } B_{1}' \cap \Lambda_w. \end{align*}
By Lemma 4.3 in \cite{KRS14}\xspace, $\|\dist(\cdot,\Gamma_w)^{-\frac{1}{2}}\ln(\dist(\cdot,\Gamma_w))^{-2}F^i\|_{L^p(B_1)}\leq C(n,p)c_\ast$.\\ Moreover, by \eqref{eq:positive_cone}, $v$ satisfies the non-degeneracy conditions in Lemma~\ref{lem:boundHarn} (after passing to the normalized function $\frac{v}{c_n}$). Thus, for $\epsilon_0$ and $c_\ast$ sufficiently small, the boundary Harnack inequality (Lemma~\ref{lem:boundHarn}) yields the existence of constants $\alpha \in (0,1)$, $\theta=\theta(n,p)$ and $C=C(n,p)$ such that for all $x_0 \in \Gamma_w\cap B'_{1/4}$ \begin{align} \label{eq:boundary_holder}
\left| \frac{\p_e w}{\p_n w}(x) - \frac{\p_e w}{\p_n w }(x_0)\right| \leq C \frac{\p_e w}{\p_nw}(e_n/2)|x-x_0|^{\alpha}, \quad x\in (x_0+\mathcal{C}_\theta' (e_n))\cap B_{\frac{1}{4}}(x_0). \end{align} We observe that for $j\in\{1,\dots,n-1\}$, the estimate (\ref{eq:boundary_holder}) can be improved to the following more quantitative control \begin{align} \label{eq:boundary_holder1}
\left|\frac{\p_j w}{\p_n w}(x)-\frac{\p_j w}{\p_nw}(x_0)\right|\leq C \max\{\epsilon_0,c_\ast\} |x-x_0|^{\alpha}. \end{align}
Indeed, given $e_j\in S^n\cap B'_1$ with $j\in\{1,\dots, n-1\}$, we consider the vector $e=c_1 e_j+c_2 e_n$, where the constants $c_1,c_2$ are chosen such that $e\in \{x\in S^n\cap B'_1\big | x_n> \eta|x''|\}$. We note that (for sufficiently small values of $\eta$) it is possible to choose $ c_1 \geq \frac{1}{2}$ and $\tilde{C}(n,p)\max\{\epsilon_0,c_\ast\}\leq c_2\leq 2 \tilde{C}(n,p)\max\{\epsilon_0,c_\ast\}$. Inserting this into (\ref{eq:boundary_holder}) thus leads to (\ref{eq:boundary_holder1}).\\ We claim that (\ref{eq:boundary_holder}) and (\ref{eq:boundary_holder1}) imply the desired $C^{1,\alpha}$ regularity of the free boundary and give the explicit bound for the Hölder constant of $g$. Indeed, heuristically this follows from the identity \begin{align} \label{eq:derg}
\p_j g(x'') = -\left. \frac{\p_j w}{\p_n w}\right|_{(x'',g(x''),0)}, \end{align}
and the fact that $\frac{\p_j w(x)}{\p_n w(x)}\big | _{B'_1\setminus \Lambda_w}$ is Hölder continuous up to $\Gamma_w$. Rigorously, we have to justify that the derivative of $g$ exists and satisfies the claimed identity. In fact, since $\p_n w>0$ in $B'_1\setminus \Lambda_w$, for $h>0$ small, the level sets $\{w=h\}\cap B'_1$ are given as graphs $x_n=g_h(x'')$, where $\{g_h\}_{h\geq 0}$ is an increasing sequence in $h$ which, by \eqref{eq:boundary_holder}, is uniformly bounded in $C^1(B''_1)$. Thus, $g_h$ converges uniformly to $g$ as $h$ goes to zero by Dini's theorem. Next we notice that \begin{align*}
\p_j g_h(x'')=- \left. \frac{\p_j w}{\p_n w}\right|_{(x'',g_h(x''),0)}. \end{align*} Therefore, by \eqref{eq:boundary_holder}, \begin{align*}
\left|\p_j g_h(x'')-\left. \left(-\frac{\p_j w}{\p_n w}\right)\right|_{(x'',g(x''),0)} \right|&= \left|\left.\frac{\p_j w}{\p_n w}\right|_{(x'',g_h(x''),0)}-\left. \frac{\p_j w}{\p_n w}\right|_{(x'',g(x''),0)} \right|\\
&\leq C\left| g_h(x'')-g(x'')\right|^\alpha. \end{align*} Since the sequence $g_h$ converges to $g$ uniformly, we infer \begin{align*}
\left\| \p_j g_{h}(x'') - \left.\left(-\frac{\p_j w}{\p_n w} \right)\right|_{(x'',g(x''),0)} \right\|_{L^{\infty}(B_{\frac{1}{2}}'')} \rightarrow 0 \mbox{ as } h \rightarrow 0. \end{align*} Therefore, by the fundamental theorem of calculus, the partial derivatives of $g$ exist and they satisfy the identity (\ref{eq:derg}). Moreover, by \eqref{eq:boundary_holder} we have $\p_jg\in C^{0,\alpha}(\Gamma_w\cap B'_{1/4})$. This finishes the proof of Proposition \ref{prop:C1a}. \end{proof}
\begin{rmk} \label{rmk:alpha} Under the conditions of Proposition \ref{prop:C1a}, the $C^{1,\alpha}$ norm of $g$ is bounded by a constant which only depends on $n$ and $p$. Indeed, by Proposition~\ref{prop:Lip} and \eqref{eq:boundary_holder1} (and by $g(0)=0$) we have that
$$\|\nabla'' g\|_{C^{0,\alpha}(B''_{1/2})}\leq C(n,p)\max\{\epsilon_0,c_\ast\}.$$ Moreover, the proof of Lemma \ref{lem:boundHarn} and the proof of Proposition \ref{prop:C1a} show that in this case the H\"older exponent $\alpha\in(0,1)$ only depends on $n$ and $p$ (if $\epsilon_0, c_{\ast}$ are chosen sufficiently small depending on $n,p$). \end{rmk}
\subsection{Proof of Proposition \ref{prop:v1}} \label{sec:propv1}
Last but not least, we come to the proof of Proposition \ref{prop:v1}. For convenience, we introduce the following notation $$d(x):=\dist(x,\Gamma).$$
The proof of Proposition \ref{prop:v1} is a consequence of weighted energy estimates (with weights which are naturally induced by the symbol of the operator) and local pointwise estimates.
\begin{proof} \emph{Step 1: Existence and uniqueness.} We define a Hilbert space $\mathcal{H}$ in which we solve an appropriate weak formulation of our problem. On $C^{\infty}_{0}(\R^{n+1}\setminus \Lambda)$ we consider the inner product \begin{align*} &(\cdot, \cdot)_{\mathcal{H}}: C^{\infty}_{0}(\R^{n+1}\setminus \Lambda) \times C^{\infty}_{0}(\R^{n+1}\setminus \Lambda) \rightarrow \R,\\ & (u,v)_{\mathcal{H}}:= \int\limits_{\R^{n+1}}\nabla u \cdot\nabla v \, dx + \int\limits_{\R^{n+1}} d(x)^{-2} u v \, dx. \end{align*}
Using this, we define our Hilbert space $\mathcal{H}$ as the closure of $C^{\infty}_{0}(\R^{n+1}\setminus \Lambda)$ with respect to $\left\| u\right\|_{\mathcal{H}}^2:=(u,u)_{\mathcal{H}}$: \begin{align*}
\mathcal{H} : = \overline{C^{\infty}_{0}(\R^{n+1}\setminus \Lambda)}^{\left\| \cdot \right\|_{\mathcal{H}}}. \end{align*} On $\mathcal{H}$ the bilinear form associated with the equation \eqref{eq:globalv1}, \begin{equation*} \mathcal{B}(u,v)=\int\limits_{\R^{n+1}} a^{ij}\partial_iu \partial_j v + K\int\limits_{\R^{n+1}} d(x)^{-2} uv, \quad u,v\in \mathcal{H}, \end{equation*} is bounded and coercive. Thus, by the Lax-Milgram theorem, a unique solution exists in $\mathcal{H}$. This proves the first part of Proposition \ref{prop:v1}.\\
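We note for completeness that the boundedness and coercivity of $\mathcal{B}$ can be checked directly: denoting by $0<\lambda\leq \Lambda$ the ellipticity constants of $a^{ij}$ (a notation introduced only for this observation), we have for all $u,v\in \mathcal{H}$ \begin{align*} \mathcal{B}(u,u)&\geq \lambda \int\limits_{\R^{n+1}}|\nabla u|^2 dx+ K \int\limits_{\R^{n+1}} d(x)^{-2}u^2 dx\geq \min\{\lambda,K\}\left\|u\right\|_{\mathcal{H}}^2,\\ |\mathcal{B}(u,v)|&\leq \Lambda \left\|\nabla u\right\|_{L^2(\R^{n+1})}\left\|\nabla v\right\|_{L^2(\R^{n+1})}+K\left\|d^{-1}u\right\|_{L^2(\R^{n+1})}\left\|d^{-1}v\right\|_{L^2(\R^{n+1})}\\ &\leq \max\{\Lambda,K\}\left\|u\right\|_{\mathcal{H}}\left\|v\right\|_{\mathcal{H}}, \end{align*} where the last step follows from the Cauchy-Schwarz inequality in $\R^2$.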
\emph{Step 2: Weighted energy estimate.}
Let $\bar{g}$ be the intrinsic metric determined by the symbol $d(x)^2 |\xi|^2$ in the cotangent space or by the symbol $d(x)^{-2}|\xi|^2$ in the tangent space, i.e. \begin{equation*} \bar{g}_x(v,w)=d(x)^{-2}v\cdot w, \quad v,w \in T_x(\R^{n+1}\setminus \Gamma), \ x\in \R^{n+1}\setminus \Gamma. \end{equation*} The distance function associated with $\bar{g}$ is given by \begin{equation}\label{eq:intrinsic_dist} d_{\bar{g}}(x_1,x_2)=\inf_{C}\int_C d(x(s))^{-1} d s, \end{equation} where the infimum is taken over all rectifiable curves $C$ in $\R^{n+1}\setminus \Gamma$ joining two points $x_1, x_2 \in \R^{n+1}\setminus \Gamma$, parametrized by arc length. We note that $d_{\bar{g}}(\cdot,\cdot)$ is finite on each connected component of $\R^{n+1}\setminus \Gamma$. Let $f:\R^{n+1}\setminus \Gamma\rightarrow \R$ be a Lipschitz function w.r.t. the intrinsic metric $d_{\bar{g}}$. It is not hard to see that $f$ is Lipschitz w.r.t. $d_{\bar{g}}$ if and only if it is locally Lipschitz w.r.t. the Euclidean distance and it satisfies
$$ |\nabla f(x)|\leq C d(x)^{-1} \text{ for a.e. } x\in\R^{n+1}\setminus \Gamma.$$
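One implication of this equivalence can be verified directly: if $f$ is Lipschitz w.r.t. $d_{\bar{g}}$ with constant $C$, then for $x\in \R^{n+1}\setminus \Gamma$ and $|h|\leq \frac{d(x)}{2}$ the straight segment from $x$ to $x+h$ satisfies $d\geq \frac{d(x)}{2}$ along it. Hence, by \eqref{eq:intrinsic_dist}, \begin{align*} |f(x+h)-f(x)|\leq C d_{\bar{g}}(x,x+h)\leq C\int\limits_{0}^{|h|} \frac{2}{d(x)} ds=\frac{2C|h|}{d(x)}, \end{align*} which implies $|\nabla f(x)|\leq 2C d(x)^{-1}$ for a.e. $x\in \R^{n+1}\setminus \Gamma$. The converse implication follows by integrating the pointwise gradient bound along almost minimizing curves in \eqref{eq:intrinsic_dist}.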
We decompose the open set $\R^{n+1}\setminus \Gamma$ into Whitney cubes $\{Q_j\}$ with respect to the Euclidean distance in $\R^{n+1}\setminus \Gamma$ (c.f. Section \ref{sec:Reifenbergbarrier} for the definition of a Whitney decomposition). Without loss of generality, we may assume that the decomposition is symmetric with respect to the $x_{n+1}$ variable. Indeed, the symmetry assumption can always be realized, since $\Gamma \subset \R^{n}\times \{0\}$.\\ We now make the following claim: \begin{claim*} For any $\tilde{\kappa}>0$, there exists $s=s(n, \tilde{\kappa})>0$ sufficiently large, such that for any compact subset $W\subset \R^{n+1}\setminus \Gamma$ we have \begin{equation}\label{eq:exp}
\sum_{j=1}^{\infty}\|d^{-\tilde{\kappa}}e^{-sd_{\bar{g}}(x,W)}\|_{L^\infty(Q_j)}\leq C(n)\dist(W,\Gamma)^{- \tilde{\kappa}}. \end{equation} \end{claim*} We postpone the proof of the claim to step 4 and continue the proof of Proposition \ref{prop:v1}.\\
Let $\phi$ be a Lipschitz function w.r.t. the intrinsic distance $d_{\bar{g}}$ and assume that $|\phi(x)|\leq \kappa|\ln d(x)|$ for some $\kappa>0$. Consider $\tilde{u}:=e^{\phi} u$. Then (in a distributional sense) $\tilde{u}$ is a solution of \begin{equation} \label{eq:conj_weight} \begin{split} \partial_i(a^{ij}\partial_j \tilde{u})-(\partial_i \phi) a^{ij}\partial_j \tilde{u}-\partial_i(a^{ij} \tilde{u}\partial_j \phi) +(\partial_i \phi) (\partial_j \phi) a^{ij}\tilde{u}-Kd^{-2}\tilde{u}\\ =\partial_i(e^\phi F^i)-(\partial_i \phi) e^\phi F^i+e^\phi g. \end{split} \end{equation} We insert $e^{-sd_{\bar g}(x,W)}\tilde{u}$ as a test function into the divergence form of (\ref{eq:conj_weight}), where $s=s(n,\kappa)$ is the constant from the claim and $W\subset \R^{n+1}\setminus \Gamma$ is a given set. Note that $e^{-sd_{\bar g}(x,W)}\tilde{u}$ is a function in $\mathcal{H}$. Indeed, using the definition of $\tilde{u}$, Hölder's inequality and the statement of the claim yields \begin{align*}
\|d^{-1}e^{-sd_{\bar g}(x,W)}\tilde{u} \|_{L^2(\R^{n+1})}&\leq \|d^{-\kappa-1}e^{-sd_{\bar g}(x,W)} u \|_{L^2(\R^{n+1})}\\
&\leq \left( \sum_j \|d^{-\kappa}e^{-sd_{\bar g}(x,W)}\|_{L^\infty(Q_j)} \right) \sup\limits_{\hat{Q} \subset \{Q_k\}_k}\|d^{-1}u\|_{L^2(\hat{Q})}\\
&\leq C(n)\dist(W,\Gamma)^{-\kappa} \|d^{-1}u\|_{L^2(\R^{n+1})}. \end{align*}
Similarly, it is possible to estimate $\nabla (e^{-sd_{\bar g}(x,W)}\tilde{u})$. As a consequence of the admissibility of $e^{-sd_{\bar g}(x,W)}\tilde{u}$ as a test function, we infer that if $K=K(\|\phi\|_{C^{0,1}_{d_{\bar{g}}}}, n)$ is large (where $\|\cdot \|_{C^{0,1}_{d_{\bar{g}}}}$ denotes the Lipschitz semi-norm with respect to the metric $d_{\bar{g}}$), then there exists $C>0$ depending on the ellipticity constants and the dimension $n$ such that: \begin{align*}
\| e^{-sd_{\bar g}(x,W)} \nabla \tilde{u} \|_{L^2(\R^{n+1})}+ K/2\|d^{-1}e^{-sd_{\bar g}(x,W)}\tilde{u}\|_{L^2(\R^{n+1})}\\
\leq C \left(\|e^{-sd_{\bar g}(x,W)} e^\phi F^i \|_{L^2(\R^{n+1})}+ \|e^{-sd_{\bar g}(x,W)} e^\phi d g \|_{L^2(\R^{n+1})}\right). \end{align*} This can be written in terms of $u$ as \begin{align}\label{eq:weighted_energy}
\|e^{-s d_{\bar g}(x,W)} e^{\phi}\nabla u \|_{L^2(\R^{n+1})}+ \|e^{-s d_{\bar g}(x,W)} e^{\phi} d^{-1}u \|_{L^2(\R^{n+1})}\notag\\
\leq C\left(\|e^{-s d_{\bar g}(x,W)} e^\phi F^i\|_{L^2(\R^{n+1})}+\|e^{-s d_{\bar g}(x,W)} e^{\phi}d g\|_{L^2(\R^{n+1})}\right). \end{align}
\emph{Step 3: Local pointwise estimate.} Consider a given point $\hat x \in \R^{n+1}\setminus \Gamma$ and let $\hat Q\in \{Q_j\}$ be the Whitney cube which contains $\hat x$. Let $$\hat d := d(\hat x).$$ By the weak maximum principle (c.f. Theorems 8.15 and 8.17 in \cite{GT}) as well as by the properties (W1) and (W2) from the definition of a Whitney decomposition (c.f. Section \ref{sec:Reifenbergbarrier}), there exists $C=C(n,p)$ such that \begin{align}\label{eq:ptwise}
|u(\hat x)| \leq C\left( \hat d^{-\frac{n+1}{2}}\|u\|_{L^2(\mathcal{N}_{\hat Q})}+\hat d^{1-\frac{n+1}{p}}\|F^i\|_{L^p(\mathcal{N}_{\hat Q})}+\hat d^{2(1-\frac{n+1}{p})}\|g\|_{L^{p/2}(\mathcal{N}_{\hat Q})}\right), \end{align} where $\mathcal{N}_{\hat Q}$ is the union of all the Whitney cubes which touch $\hat Q$. Recalling the assumptions on $F^i$ and $g$ and using the properties (W1), (W2), we have \begin{align}
\|F^i\|_{L^p(\mathcal{N}_{\hat Q})}&\lesssim \|d^{-\sigma}F^i\|_{L^p(\R^{n+1})} \hat d^{\sigma}, \label{eq:estF1}\\
\|g\|_{L^{p/2}(\mathcal{N}_{\hat Q})}&\lesssim \|d^{1-\frac{n+1}{p}-\sigma}g\|_{L^{p/2}(\R^{n+1})}\hat d^{\sigma - 1+\frac{n+1}{p}} .\label{eq:estG1} \end{align}
To estimate $\hat d^{-(n+1)/2}\|u\|_{L^2(\mathcal{N}_{\hat Q})}$, we seek to apply \eqref{eq:weighted_energy} with $W= \mathcal{N}_{\hat Q}$ and $$\phi(x)=-\kappa\ln d(x), \quad \kappa=(n+1)\left(\frac{1}{2}-\frac{1}{p}\right)+\sigma.$$ The Lipschitz constant (w.r.t. $d_{\bar{g}}$) of $\phi$ is bounded by $c_n(\sigma+1)$. Thus, by taking $K=C(\sigma+1)$ with $C=C(n)$ large, we have \eqref{eq:weighted_energy}. We notice that from the properties (W1) and (W2) for Whitney cubes as well as \eqref{eq:weighted_energy}, \begin{equation} \label{eq:ch} \begin{split}
\hat{d}^{-\frac{n+1}{2}}\|u\|_{L^2(\mathcal{N}_{\hat Q})}
&\lesssim \hat d^{\sigma+1-\frac{n+1}{p}} \|d^{-\kappa-1} u\|_{L^2(\mathcal{N}_{\hat Q})}\\
&\lesssim \hat d^{\sigma+1-\frac{n+1}{p}} \|d^{-\kappa-1}e^{-sd_{\bar g}(x,\mathcal{N}_{\hat Q})} u\|_{L^2(\R^{n+1})}\\
&\lesssim \hat d^{\sigma+1-\frac{n+1}{p}} \left(\|d^{-\kappa}e^{-sd_{\bar g}(x,\mathcal{N}_{\hat Q})}F^i\|_{L^2(\R^{n+1})}\right.\\
& \quad \left.+\|d^{-\kappa+1}e^{-sd_{\bar g}(x,\mathcal{N}_{\hat Q})} g\|_{L^2(\R^{n+1})}\right). \end{split} \end{equation} Due to H\"older's inequality, property (W2) and due to the definition of $\kappa$ \begin{align*}
\|d^{-\kappa} F^i\|_{L^2(Q')}&\lesssim \|d^{-\sigma} F^i\|_{L^p(\R^{n+1})}\|d^{-\kappa+\sigma}\|_{L^{\frac{2p}{p-2}}(Q')} \\
&\lesssim\|d^{-\sigma} F^i\|_{L^p(\R^{n+1})},\notag\\
\|d^{-\kappa+1} g\|_{L^2(Q')}&\lesssim \|d^{1-\frac{n+1}{p}-\sigma} g\|_{L^{p/2}(\R^{n+1})}, \end{align*} for any $Q'\in \{Q_j\}$. Thus, combining this estimate with \eqref{eq:exp} (applied with $\tilde{\kappa}=0$) leads to \begin{equation} \label{eq:exp2} \begin{split}
\|d^{-\kappa}e^{-sd_{\bar g}(x,\mathcal{N}_{\hat Q})} F^i\|_{L^2(\R^{n+1})}
& \leq \sum\limits_j \left\| e^{-s d_{\bar{g}}(x,\mathcal{N}_{\hat{Q}})} \right\|_{L^{\infty}(Q_j)} \sup\limits_{Q'\in \{Q_j\}}\|d^{-\kappa} F^i\|_{L^2(Q')}\\
&\leq C(n) \|d^{-\sigma} F^i\|_{L^p(\R^{n+1})}. \end{split} \end{equation} Analogously, we infer \begin{align*}
\|d^{-\kappa+1}e^{-sd_{\bar g}(x,\mathcal{N}_{\hat Q})} g\|_{L^2(\R^{n+1})}&\leq C(n)\|d^{1-\frac{n+1}{p}-\sigma}g\|_{L^{p/2}(\R^{n+1})}. \end{align*} Combining \eqref{eq:ptwise}-\eqref{eq:estG1}, \eqref{eq:ch} and \eqref{eq:exp2}, yields a constant $C_0=C_0(n,p)$ such that \begin{equation*}
|u(\hat x)|\leq C_0 \hat d ^{\sigma+1-\frac{n+1}{p}}\left(\|d^{-\sigma}F^i\|_{L^p(\R^{n+1})}+\|d^{1-\frac{n+1}{p}-\sigma}g\|_{L^{p/2}(\R^{n+1})}\right). \end{equation*}
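For the reader's convenience, we record the exponent bookkeeping behind the choice $\kappa=(n+1)(\frac{1}{2}-\frac{1}{p})+\sigma$. On a Whitney cube $Q'$ of sidelength $r'$ we have $d\sim r'$ by (W1), and therefore
\begin{align*}
\hat d^{-\frac{n+1}{2}}&=\hat d^{\,\sigma+1-\frac{n+1}{p}}\,\hat d^{-(\kappa+1)},\\
\left\|d^{-\kappa+\sigma}\right\|_{L^{\frac{2p}{p-2}}(Q')}&\sim (r')^{-\kappa+\sigma}|Q'|^{\frac{p-2}{2p}}=(r')^{-\kappa+\sigma+(n+1)(\frac{1}{2}-\frac{1}{p})}=1.
\end{align*}
The first identity turns the factor $\hat d^{-\frac{n+1}{2}}$ into the weight $\hat d^{\sigma+1-\frac{n+1}{p}}$ in \eqref{eq:ch}, while the second shows that the H\"older factor in the estimate for $\|d^{-\kappa}F^i\|_{L^2(Q')}$ is of order one, uniformly in $Q'$.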
\emph{Step 4: Proof of the claim.} Without loss of generality, we only prove the claim for $W=\mathcal{N}(\hat Q)$. Let $\mathcal{N}_1:=\{Q_j| \ Q_j\text{ touches } \mathcal{N}(\hat Q)\}$ be the set of all Whitney cubes which touch $\mathcal{N}(\hat Q)$. In general, for $k\geq 2$ let $\mathcal{N}_k$ denote all Whitney cubes which touch at least one of the Whitney cubes in $\mathcal{N}_{k-1}$. Hence $\mathcal{N}_k\nearrow \{Q_j\}$. It is immediate from property (W3) for Whitney cubes that: \begin{itemize} \item[($\alpha$)] Let $\sharp\mathcal{N}_k$ denote the number of Whitney cubes contained in $\mathcal{N}_k$. Define $\sharp\mathcal{N}_0=0$. Then $\sharp\mathcal{N}_k-\sharp\mathcal{N}_{k-1}\leq (12)^{k(n+1)}$, $k\geq 1$. \end{itemize} Moreover, from \eqref{eq:intrinsic_dist}, property (W1) for Whitney cubes and an induction argument, it is not hard to see that: \begin{itemize} \item[($\beta$)] For any $x\in Q_j\in \mathcal{N}_k$, $k\geq 2$, we have $d_{\bar{g}}(x,\mathcal{N}(\hat Q))\geq \frac{k-1}{4\sqrt{n+1}}$. \end{itemize} Let $r_0$ be the minimal sidelength of $Q_j \in \mathcal{N}(\hat Q)$. Note that by (W1) and (W2) we have $r_0\leq \dist(\mathcal{N}_{\hat Q},\Gamma)\leq C_n r_0$. Combining ($\alpha$) and ($\beta$), we obtain \begin{align*}
&\sum_{j=1}^{\infty}\|d^{-\kappa}e^{-sd_{\bar{g}}(x,\mathcal{N}_{\hat Q})}\|_{L^\infty(Q_j)}\\ &\lesssim
r_0^{-\kappa}+ (4^{-1}r_0)^{-\kappa} \sharp \mathcal{N}_1 +\sum_{k=2}^\infty\sum_{Q_j\in \mathcal{N}_k\setminus \mathcal{N}_{k-1}}(4^{-k}r_0)^{-\kappa}\|e^{-sd_{\bar{g}}(x,\mathcal{N}_{\hat Q})}\|_{L^\infty(Q_j)}\\ &\leq r_0^{-\kappa}+4^{\kappa}12^{n+1} r_0^{-\kappa} + r_0^{-\kappa}\sum_{k=2}^{\infty}4^{k \kappa}e^{-\frac{s(k-1)}{4\sqrt{n+1}}}12^{k(n+1)}\\ &\leq C(n)r_0^{-\kappa}, \end{align*} if $s=s(n,\kappa)$ is chosen to be sufficiently large.\\
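The computation above also makes the required size of $s$ explicit: the $k$-th summand of the series is $4^{k\kappa}12^{k(n+1)}e^{-\frac{s(k-1)}{4\sqrt{n+1}}}$, so the series is dominated by a convergent geometric series as soon as the ratio of consecutive terms is at most, say, $\frac{1}{2}$, i.e. as soon as
\begin{align*}
4^{\kappa}\,12^{n+1}\,e^{-\frac{s}{4\sqrt{n+1}}}\leq \frac{1}{2}, \quad \text{i.e. } s\geq 4\sqrt{n+1}\left(\kappa\ln 4+(n+1)\ln 12+\ln 2\right).
\end{align*}
In particular, this confirms that $s$ may be chosen depending only on $n$ and $\kappa$.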
\emph{Step 5: Continuous realization of the boundary data.} In order to obtain the continuous attainment of the boundary data, we observe that any point $x \in \Lambda$ lies in an associated Whitney cube $Q$. Since the Whitney decomposition is considered with respect to $\Gamma$, the cube $Q$ does not intersect $\Gamma$. As $\Gamma = \partial \Lambda$, this implies that $Q \cap (\R^{n} \times \{0\}) \subset \Lambda$. Moreover, the same is true for all neighboring Whitney cubes $\mathcal{N}(Q)$. Hence, we find a ball $B_Q$ which contains $Q$ such that \begin{align*} \p_i a^{ij}\p_j u - d(x)^{-2}u &= \p_i F^i + g \mbox{ in } B_Q,\\ u& = 0 \mbox{ on } B_Q\cap (\R^{n}\times \{0\}). \end{align*} In particular, this implies that the boundary data of $u$ are attained continuously in $Q$ (c.f. \cite{GT}, Corollary 8.28). As this holds for any point $x\in \Lambda$, it proves the desired continuous attainment of the boundary data. \end{proof}
\begin{rmk}\label{rmk:impv2} In order to deduce the improved behavior close to $\Lambda$ from Remark \ref{rmk:impv1}, we argue by elliptic regularity and a scaling argument. In the case of a point $x_0\in B_1$ with $d(x_0)\sim 1$ but $\dist(x_0,\Lambda)\leq c_n$ (where $c_n$ is a dimensional constant less than $1/2$) we have the bound \begin{align*}
\| u\|_{W^{1,p}(B_{1/2}(x_0))} \leq C (\| d^{-\sigma}F\|_{L^p(\R^{n+1})} + \| d^{-1-\sigma}g\|_{L^{p/2}(\R^{n+1})} ). \end{align*} By the Sobolev embedding theorem and the fact that $u$ vanishes on $\Lambda$, this implies that \begin{align*}
|u(x_0)| \leq C (\| d^{-\sigma}F\|_{L^p(\R^{n+1})} + \| d^{1-\sigma-\frac{n+1}{p}}g\|_{L^{p/2}(\R^{n+1})} )\dist(x_0,\Lambda)^{1-\frac{n+1}{p}}. \end{align*} The general case follows by rescaling to the set-up with $d(x_0)\sim1$. \end{rmk}
\section[Optimal Regularity]{Optimal Regularity and Homogeneity of Blow-Up Solutions} \label{sec:optimal}
In this section we return to the study of solutions of the thin obstacle problem (\ref{eq:varcoef}) and present some consequences of the $C^{1,\alpha}$ regularity of the free boundary. Here our main results are an optimal regularity statement for solutions of (\ref{eq:varcoef}), c.f. Theorem \ref{thm:optimal_reg}, and a leading order asymptotic expansion of the solution at free boundary points, c.f. Proposition~\ref{prop:wasympt}.\\
In the sequel, we assume that the metric $a^{ij}$ satisfies the assumptions (A1), (A2), (A3), (A4). Moreover, the metric $a^{ij}$ and the solution $w$ are extended to $B_1^-$ in the same way as in Remark~\ref{rmk:ref_ext}. In this section we mainly work with the equation of $\p_j w$ in our $C^{1,\alpha}$ slit domain. For $j\in \{1,\dots, n\}$, $\p_j w$ satisfies \eqref{eq:extend_v} in $B_1\setminus \Gamma_w$. For $j=n+1$, we extend $w$ to $B_1^-$ by setting $w(x',x_{n+1})=-w(x',-x_{n+1})$. Then $\p_{n+1} w\in C^{0,1/2-\delta}(B_1\setminus \overline{\Omega_w})\cap H^1(B_1\setminus \overline{\Omega_w})$ for any $\delta>0$, as $w=0$ on $\overline{\Omega_w}$. Moreover, it solves the divergence form equation
\begin{align*}
\p_i a^{ij}\p_j v &= \p_i F^i \text{ in } B_1\setminus \overline{\Omega_w}, \text{ where } F^i=-(\p_{n+1} a^{ij})\p_j w,\\
v&=0 \text{ on }\overline{\Omega_w}.
\end{align*}
Our main results rely on identifying the asymptotic behavior of solutions, c.f. Lemma \ref{lem:lowerbarrier} and Propositions \ref{prop:asympt} and \ref{prop:wasympt}, for which we invoke the $C^{1,\alpha}$ regularity of the regular free boundary. This then implies the optimal local growth behavior (c.f. Corollaries \ref{cor:lowerbound}-\ref{cor:optgrowth}) which in combination with our Carleman estimate from \cite{KRS14} yields the optimal regularity of solutions.\\
This section is organized as follows: We first deduce the behavior of general solutions of the Dirichlet problem in slit domains with $C^{1,\alpha}$ slits in Section \ref{subsubsec:low_up_barrier}. Then in Section \ref{subsubsec:local32} we apply these results to solutions of the thin obstacle problem, which yields the optimal local non-degeneracy and local growth estimates as well as a leading order asymptotic expansion in appropriate cones. In Section \ref{sec:opti} the local results are then improved to global estimates and the optimal regularity result. Finally, in Section \ref{sec:lower} we prove uniform lower bounds for a suitably normalized solution of the variable coefficient thin obstacle problem at regular free boundary points.
\subsection{Asymptotics and growth estimates} \label{subsubsec:low_up_barrier}
In this subsection we exploit the $C^{1,\alpha}$ regularity of the free boundary, in order to obtain a leading order asymptotic expansion of solutions of elliptic equations with controlled inhomogeneities in $C^{1,\alpha}$ slit domains (Proposition \ref{prop:asympt}). This expansion holds in appropriate cones. Using the asymptotic behavior, we then immediately obtain (local) growth estimates in the form of a Hopf principle (Corollary \ref{cor:lower_bound}) and the optimal upper growth bounds (Corollary \ref{cor:upper_bound}). Our approach relies on applying the boundary Harnack estimate to a given solution and a model function, which we construct in Lemma \ref{lem:lowerbarrier}.\\
In the whole of this section, we work under the following assumptions:
\begin{assum} \label{assum:set2} We assume that $\Lambda$ and $\Gamma$ are closed subsets of $\R^n\times\{0\}$ with \begin{align*} \Lambda=\{x_n\leq g(x'')\}, \quad \Gamma=\{x_n=g(x'')\}, \end{align*} where $g\in C^{1,\alpha}_{loc}(\R^{n-1})$ for some $\alpha\in (0,1]$ and $g(0)=0$. \end{assum}
As in the previous section, we use the abbreviation $L$ for the operator $L=\p_i a^{ij} \p_j$, and $L_0$ for $L_0 = a^{ij}\p^2_{ij}$. Again we use the abbreviations from Notation \ref{not:not1} and set $\ell_0 := (2^{4}\sqrt{n})^{-1}$ and $c_{\ast}:=\left\| \nabla a^{ij} \right\|_{L^p(B_1)}$.\\
Under these assumptions we now refine the estimates from the barrier construction from Proposition \ref{prop:barrier} in Section \ref{sec:boundary}, in order to identify the leading order contribution in the solutions to our equations close to free boundary points.
\begin{lem} \label{lem:lowerbarrier} Let $\Lambda, \Gamma$ and $g$ satisfy the conditions in Assumption \ref{assum:set2} and let $\ell_0, c_{\ast}$ be as in Notation \ref{not:not1}. There exists a function $h_0:B_1\rightarrow \R$ such that \begin{itemize} \item[(i)] $h_0 \geq 0$ on $B_1 \setminus \Lambda$ \mbox{ and } $h_0=0$ on $\Lambda$, \item[(ii)] $h_0(x)\geq c_n\dist(x,\Gamma)^{1/2}$ in $B_1\cap \{\dist(x,\Lambda)\geq \ell_0\dist(x,\Gamma)\}$, \item[(iii)] $L h_0=g_1+g_2$, where $g_1$ and $g_2$ satisfy \begin{align*}
\|\dist(\cdot,\Gamma)^{\frac{1}{2}}g_1\|_{L^p(B_1)}&\leq C_n c_{\ast},\\
\|\dist(\cdot,\Gamma)^{\frac{3}{2}-\min\{\alpha,1-\frac{n+1}{p}\}}g_2\|_{L^\infty(B_1)}&\leq C_n\left(c_{\ast}+\|\nabla'' g\|_{C^{0,\alpha}(B_1'')}\right). \end{align*} \end{itemize} \end{lem}
\begin{proof} As the proof follows along the lines of Proposition \ref{prop:barrier}, we only indicate the main differences.\\ Similarly as in our previous construction, we consider \begin{align*} v^{0}(x):= w_{1/2}(x_n, x_{n+1}), \end{align*} and define $v_k^0(x)$ by composition with the change of coordinates $T_k$ as in Proposition \ref{prop:barrier}. Using these building blocks, we define $h_0$ in analogy to Proposition \ref{prop:barrier}, i.e. for an appropriate partition of unity, $\eta_k$, we set: \begin{align*} h_0(x) := \sum\limits_{k} \eta_k v_k^0(x). \end{align*} By construction $h_0$ satisfies (i)-(ii).\\
To show (iii), we first notice that $Lh_0=(\p_i a^{ij})(\p_j h_0)+L_0h_0:=g_1+g_2.$ The bound for $g_1$ follows immediately by noting that $|\nabla h_0(x)|\leq C_n\dist(x,\Gamma)^{-1/2}$. In order to bound $g_2$, we compute $L_0(h_0)$ similarly as in step 1 of Proposition \ref{prop:barrier}. Combining the fact that for each $k$, $a^{ij}(x_k)\p_{ij}v_k^0(x)=0$ in $\R^{n+1}\setminus\{x_{n+1}=0,(x-x_k)\cdot \nu_k\leq 0\}$, with the properties of the Whitney decomposition leads to \begin{align*} \sum_k (a^{ij}(x_k)\p_{ij}v_k^0(x))\eta_k(x) =0 \text{ in } B_1\setminus \Lambda. \end{align*}
We proceed with the estimate for the error contributions: Let $\nu_k$ and $\nu_\ell$ denote the normals of $\Lambda$ at $x_k \in \Gamma$ and $x_\ell \in \Gamma$. Here $x_k, x_\ell\in \Gamma\cap B_1$ are the projection points of two centers, $\hat x_k$ and $\hat x_\ell$, of neighboring Whitney cubes, $Q_k$ and $Q_\ell$. For these normals we infer $|\nu_k-\nu_\ell|\lesssim \|\nabla'' g\|_{C^{0,\alpha}(B_1'')} r_\ell^{\alpha}$. Thus, defining $\gamma:=1-\frac{n+1}{p}$ and following the computation in Proposition~\ref{prop:barrier}, we have that for all $x\in Q_\ell$, \begin{align*} &\sum_k(a^{ij}(x_k)\p_{ij}\eta_k(x)) v_k^0(x)
\leq C(n)\left(c_{\ast} r_\ell ^{-\frac{3}{2}+\gamma}+ \|\nabla'' g\|_{C^{0,\alpha}(B_1'')} r_\ell^{-\frac{3}{2}+\alpha}\right),\\ &\sum_k a^{ij}(x_k) \p_i \eta_k \p_j v_k^0
\leq C(n) \left(c_{\ast} r_\ell^{-\frac{3}{2}+ \gamma}+ \|\nabla'' g\|_{C^{0,\alpha}(B_1'')} r_\ell^{-\frac{3}{2}+\alpha}\right),\\ &\sum_k (a^{ij}(x)-a^{ij}(x_k))\p_{ij} (\eta_k v_k^0(x))\leq C(n) c_{\ast} r_\ell ^{-\frac{3}{2}+\gamma}. \end{align*} Thus, the bound for $g_2$ follows. \end{proof}
We continue by deriving the boundary asymptotic behavior of solutions $u$ to general divergence form equations in slit domains with $C^{1,\alpha}$ slits. The main tool is our boundary Harnack inequality (Lemma~\ref{lem:boundHarn}).\\
\begin{prop} \label{prop:asympt} Let $\Lambda, \Gamma$ and $g$ be as in Assumption \ref{assum:set2}. Suppose that $u\in C(B_1)\cap H^{1}(B_1)$ is a solution of \begin{equation} \label{eq:divergence} \p_i a^{ij}\p_j u =\p_i F^i +g_1 \text{ in } B_1\setminus \Lambda, \quad u=0\text{ on }\Lambda. \end{equation} Further assume that $a^{ij}$, $F^i$, $g_1$ and $u$ satisfy the assumptions from Proposition~\ref{prop:nondeg}.\\
If $c_\ast$, $\bar\delta$ and $\|\nabla'' g\|_{C^{0,\alpha}(B_1'')}$ are sufficiently small depending on $n$, $p$ and $\alpha$, then there exist a constant $\beta\in (0,\min\{\alpha, 1-\frac{n+1}{p}\}]$ and a function $b: \Gamma \rightarrow \R$ with $b\in C^{0, \beta}(\Gamma \cap B_{1/2}')$ and $b>0$, such that for all $x_0\in \Gamma\cap B_{1/4}'$ and $x\in B_{1/4}(x_0)$ \begin{equation} \label{eq:asympt} \begin{split}
\left|u(x) - b(x_0) w_{1/2}\left(\frac{\nu_{ x_0} \cdot (x- x_0) }{(\nu_{ x_0} \cdot A( x_0) \nu_{ x_0})^{1/2}},\frac{x_{n+1}}{(a^{n+1,n+1}( x_0))^{1/2}}\right)\right|\\
\leq C \left(c_\ast+\left\| \nabla'' g \right\|_{C^{0,\alpha}(B_{1/2}'')}\right) |x - x_{0}|^{\frac{1}{2}+\beta}. \end{split} \end{equation} Here $\nu_{x_0}$ is the outer unit normal of $\Lambda$ at $x_0\in \Gamma$, $g$ denotes the parametrization of $\Gamma$ and $A(x):= (a^{ij}(x))$. \end{prop}
\begin{proof} We seek to apply the boundary Harnack inequality (Lemma~\ref{lem:boundHarn}) to the pair $u, h_0$ (with $\delta_0=\min\{\alpha,1-\frac{n+1}{p}\}$), where $h_0$ is the function from Lemma~\ref{lem:lowerbarrier}. By Lemma~\ref{lem:lowerbarrier}, $h_0$ satisfies the conditions in Lemma~\ref{lem:boundHarn}. Moreover, we note that by Proposition~\ref{prop:nondeg} \begin{equation}\label{eq:nondege} u(x)\geq c_n\dist(x,\Gamma)^{\frac{1}{2}+\frac{\bar\epsilon}{2}} \text{ in } B_{1/2}\cap \{\dist(x,\Lambda)\geq\ell_0\dist(x,\Gamma)\}, \end{equation} and \begin{equation}\label{eq:lower_bound_a} u(x)\geq -\dist(x,\Gamma)^{\frac{1}{2}+\bar\epsilon} \text{ in } B_{1/2}. \end{equation} Here $\bar{\epsilon}= \sigma+\frac{1}{2}-\frac{n+1}{p}$ and $\sigma$ is the constant from Proposition \ref{prop:nondeg}. Hence, by the boundary Harnack inequality we have that \begin{align} \label{eq:bound_Harnack} s_0\frac{u(e_n/2)}{h_0(e_n/2)} \leq \frac{u(x)}{h_0(x)}\leq s_0^{-1} \frac{u(e_n/2)}{h_0(e_n/2)}, \end{align}
for $x\in B_{1/2}\cap \{\dist(x,\Lambda)\geq \ell_0\dist(x,\Gamma)\}$ and for some $s_0=s_0(n,p,\|\nabla'' g\|_{C^{0,\alpha}(B_1'')})$. Moreover, $\frac{u(x)}{h_0(x)}$ is H\"older continuous in $\mathcal{C}_\theta(\Gamma\cap B'_{1/4})$ up to $\Gamma\cap B'_{1/4}$. More precisely, for each $x_0\in \Gamma\cap B'_{1/4}$, the limit of $\frac{u(x)}{h_0(x)}$ as $x\rightarrow x_0$, $x\in x_0+\mathcal{C}_\theta(e_n)$, exists. Denoting it by $b(x_0)$, gives \begin{align} \label{eq:quant_est}
\left| \frac{u(x)}{h_0(x)} - b(x_0) \right| \leq C \frac{u(e_n/2)}{h_0(e_n/2)}|x-x_0|^{\tilde{\beta}}, \quad x\in (x_0+\mathcal{C}_\theta(e_n)) \cap B_{1/2}, \end{align} where $C=C(n,p)$ and $\tilde{\beta} \in (0,1)$. By \eqref{eq:nondege}, Lemma~\ref{lem:lowerbarrier} (iii) for $h_0$ and by the fact that $g\in C^{1,\alpha}(B_1'')$, we have $ u(e_n/2)/h_0(e_n/2)\geq c_n>0$. Combining this with the lower bound in \eqref{eq:bound_Harnack} implies that $b(x_0)>0$.\\ We now extend this to the whole neighborhood $B_{1/4}(x_0)$ instead of only considering the cones $x_0 + \mathcal{C}_{\theta}(e_n)$: Using (\ref{eq:lower_bound_a}) on all scales and arguing as in the proof of Lemma~\ref{lem:boundHarn}, we obtain for $x\in B_{1/4}(x_0)$ \begin{align*}
u(x) - b(x_0) h_0(x) &\geq -C|x-x_0|^{\min\{\frac{1}{2}+\bar\epsilon, \frac{1}{2}+\tilde{\beta}\}},\\
u(x)-b(x_0)h_0(x)&\leq C|x-x_0|^{\min\{\frac{1}{2}+\bar\epsilon, \frac{1}{2}+\tilde{\beta}\}}. \end{align*} Consequently, for $\beta=\min\{\tilde{\beta}, \bar\epsilon\}$, \begin{align} \label{eq:dist1}
|u(x)-b(x_0)h_0(x)|\leq C|x-x_0|^{\frac{1}{2}+\beta} \mbox{ for } x\in B_{1/4}(x_0). \end{align}
Next we show that for each $x_0\in \Gamma\cap B'_{1/4}$, \begin{equation}\label{eq:asym_h0}
|h_0(x)-w_{1/2}(T_{x_0}x)|\leq C \left(c_\ast+\left\| \nabla'' g \right\|_{C^{\alpha}(B_{1/2}'')}\right) |x-x_0|^{\frac{1}{2}+\min\{\alpha,1-\frac{n+1}{p}\}}, \quad x\in B_{1/2}, \end{equation} where $T_{x_0}$ is defined as the following affine transformation in $\R^{n+1}$: $T_{x_0}(x)=R_{x_0}B_{x_0}^{-1}(x-x_0)$, where $B_{x_0}B_{x_0}^t=A(x_0)$ and $R_{x_0}\in SO(n+1)$ is such that $R_{x_0}B_{x_0}^t \nu_{x_0}=\bar c_{1,x_0} e_n$, $R_{x_0}B_{x_0}^t e_{n+1}=\bar c_{2,x_0} e_{n+1}$. A direct computation leads to \begin{align*} (\bar{c}_{1,x_0})^2 &= (R_{x_0} B_{x_0}^{t} \nu_{x_0}) \cdot (R_{x_0} B_{x_0}^{t} \nu_{x_0}) = \nu_{x_0} \cdot A(x_0) \nu_{x_0},\\ (\bar{c}_{2,x_0})^2 &= (R_{x_0} B_{x_0}^{t} e_{n+1}) \cdot (R_{x_0} B_{x_0}^{t} e_{n+1}) = a^{n+1, n+1}(x_0). \end{align*} In order to show \eqref{eq:asym_h0}, we note that by a similar estimate as \eqref{eq:barrier_est} with $s=0$, we deduce \begin{align} \label{eq:varyT}
|w_{1/2}(T_k x)-w_{1/2}(T_{x_0}x)|\leq C \left(c_\ast+\left\| \nabla'' g \right\|_{C^{\alpha}(B_{1/2}'')}\right) |x-x_0|^{\frac{1}{2}+\min\{\alpha,1-\frac{n+1}{p}\}}, \end{align} for all $x\in Q\in \mathcal{N}(Q_k)$. Instead of Lemma~\ref{lem:normals}, we use \begin{align*}
|\nu_k-\nu_{x_0}|\leq C \left\| \nabla'' g \right\|_{C^{0,\alpha}(B_{1/2}'')} |x_k-x_0|^{\alpha}, \end{align*} in this context (we recall that $\nu_k$ is the normal of $\Gamma$ at $x_k$, where $x_k\in \Gamma$ realizes the distance of the center $\hat x_k$ of $Q_k$ to $\Gamma$). \\ Thus, by using the properties (W1)-(W3) of the Whitney decomposition, we have \begin{align*}
|h_0(x)-w_{1/2}(T_{x_0}x)|&\leq \sum_k \eta_k |w_{1/2}(T_k x)-w_{1/2}(T_{x_0}x)|\\
&\leq C\left(c_\ast+\left\| \nabla'' g \right\|_{C^{\alpha}(B_{1/2}'')}\right) |x-x_0|^{\frac{1}{2}+\min\{\alpha,1-\frac{n+1}{p}\}}. \end{align*} Writing $T_{x_0}$ out explicitly, we therefore obtain \begin{align*}
&\left| h_0(x) - w_{1/2}\left(\frac{\nu_{ x_0}\cdot (x- x_0) }{(\nu_{ x_0} \cdot A( x_0) \nu_{ x_0})^{1/2}},\frac{x_{n+1}}{(a^{n+1,n+1}( x_0))^{1/2}}\right)\right|\\
& \leq C\left(c_\ast+\left\| \nabla'' g \right\|_{C^{\alpha}(B_{1/2}'')}\right) |x-x_0|^{\frac{1}{2}+\min\{\alpha,1-\frac{n+1}{p}\}}. \end{align*} Using the above asymptotics of $h_0$ in \eqref{eq:dist1}, we arrive at the desired result. \end{proof}
The above asymptotic estimate leads to two immediate corollaries: On the one hand, it yields a Hopf principle and on the other hand, if combined with an interior elliptic estimate, it results in an upper growth bound.
\begin{cor} \label{cor:lower_bound} Assume that the conditions of Proposition~\ref{prop:asympt} are satisfied and that $0\in \Gamma$.
Then, if $c_{\ast}, \bar\delta$ and $\|\nabla'' g\|_{C^{0,\alpha}(B_1'')}$ are sufficiently small depending on $n,p$ and $\alpha$, there is a dimensional constant $c_n>0$ such that \begin{align*} u(x)\geq c_n\dist(x,\Gamma)^{\frac{1}{2}} \end{align*}
for $x\in \left(B'_{1/2}\times (-1/2,1/2)\right)\cap\{x|\ \dist(x,\Lambda)\geq \ell_0 \dist(x,\Gamma)\}$. \end{cor}
\begin{cor} \label{cor:upper_bound}
Assume that the conditions of Proposition~\ref{prop:asympt} are satisfied. If $c_\ast$, $\bar\delta$ and $\|\nabla'' g\|_{C^{0,\alpha}(B_1'')}$ are sufficiently small depending on $n,p$ and $\alpha$, then there exists a constant $\bar C=\bar C(n,p)>0$ such that for all $x\in B_{1/2}$ \begin{align*}
|u(x)|\leq \bar C M_0 \dist(x,\Gamma)^{\frac{1}{2}}, \end{align*}
where $M_0:=\sup _{B_1}|u|$. \end{cor}
\subsection{Local $3/2$-growth estimate, asymptotics and blow-up uniqueness} \label{subsubsec:local32} In this section we transfer the results from the previous section into the setting of the thin obstacle problem. This yields leading order asymptotics in appropriate cones, a Hopf principle and an optimal local growth estimate. Last but not least, we explain how the asymptotics and the growth estimates can be used to derive the uniqueness of the \emph{$3/2$-homogeneous blow-ups}.\\
We begin by transferring the general results of Section \ref{subsubsec:low_up_barrier} to the setting of solutions of the thin obstacle problem:
\begin{prop}[First order asymptotics] \label{prop:wasympt} Let $w$ be a solution of (\ref{eq:varcoef}) satisfying (A0), (A1), (A2), (A3), (A4). Assume that for some small positive constants $\epsilon_0$ and $c_\ast$ \begin{itemize}
\item[(i)] $\left\| w - w_{3/2} \right\|_{C^1(B_{1}^+)} \leq \epsilon_0$,
\item[(ii)]$\|\nabla a^{ij}\|_{L^p(B_1^+)}\leq c_\ast$. \end{itemize} Then if $\epsilon_0$ and $c_\ast$ are chosen sufficiently small depending on $n,p$, there exists a function $b_e:\Gamma_w\cap B_{1/2}\rightarrow \R$, $b_e\in C^{0,\alpha}$ for some $\alpha\in (0,1-\frac{n+1}{p}]$, (where $\alpha$ can be chosen the same as the H\"older regularity of $\Gamma_w$ in Proposition~\ref{prop:C1a}), such that for all $x_0\in \Gamma_w\cap B_{1/2}$, we have the following asymptotic expansion for $\p_e w$, $e\in S^{n-1}\times\{0\}$:\\ \begin{multline} \label{eq:asymptotics2}
\left| \p_{e} w(x)-b_e(x_0)w_{1/2}\left(\frac{(x- x_0) \cdot\nu_{ x_0}}{(\nu_{ x_0} \cdot A( x_0) \nu_{ x_0})^{1/2}},\frac{x_{n+1}}{(a^{n+1,n+1}( x_0))^{1/2}}\right)\right|\\ \leq C(n,p)\max\{\epsilon_0,c_\ast\} \dist(x,\Gamma_w)^{\frac{1}{2}+\alpha},\quad x\in B_{1/4}(x_0). \end{multline} Here $\nu_{ x_0}$ is the outer unit normal of $\Lambda_w$ at $ x_0$ and $A(x_0)=(a^{ij}(x_0))$. \end{prop}
Before coming to the proof of this result, we make the following observations:
\begin{rmk}\label{rmk:asym} \begin{itemize} \item[(i)] Similarly as in \eqref{eq:asymptotics2}, it is possible to obtain an asymptotic expansion for $\p_{n+1}w$: There exists a function $ b_{n+1}\in C^{0,\alpha}(\Gamma_w)$, such that for $ x_0\in \Gamma_w\cap B_{1/2}'$ and $x \in B_{1/4}(x_0)$ \begin{align*}
\left| \p_{n+1} w(x)-b_{n+1}(x_0)\bar w_{1/2}\left(\frac{(x- x_0) \cdot\nu_{ x_0}}{(\nu_{ x_0} \cdot A( x_0) \nu_{ x_0})^{1/2}},\frac{x_{n+1}}{(a^{n+1,n+1}(x_0))^{1/2}}\right)\right|\\ \leq C(n,p)\max\{\epsilon_0,c_\ast\} \dist(x,\Gamma_w)^{\frac{1}{2}+\alpha}, \end{align*} where $\bar w_{1/2}(x)=-\Imm(x_n+ix_{n+1})^{1/2}$. \item[(ii)] Later, we will see that $b_e$ and $b_{n+1}$ satisfy the following relation (c.f. \eqref{eq:curl} and \eqref{eq:defa} in Corollary~\ref{cor:wasympt}): There exists a $C^{0,\alpha}$ function $a:\Gamma_w \cap B_{1/2}\rightarrow \R$, $a>0$, such that \begin{align*} b_e(x_0)=\frac{3}{2}\frac{a(x_0)(e\cdot \nu_{x_0})}{(\nu_{x_0}\cdot A(x_0)\nu_{x_0})^{1/2}},\quad b_{n+1}(x_0)=\frac{3}{2} \frac{a(x_0)}{(a^{n+1,n+1}(x_0))^{1/2}}. \end{align*} \end{itemize} \end{rmk}
\begin{proof}[Proof of Proposition \ref{prop:wasympt}] The proof of Proposition \ref{prop:wasympt} follows immediately from Proposition~\ref{prop:asympt}. More precisely, we first note that, by Proposition \ref{prop:C1a} if $c_\ast, \epsilon_0$ are sufficiently small depending on $n,p$, then (up to a rotation) $$\Gamma_w\cap B_{1/2}=\{ x_n=g(x'')\}\cap B_{1/2},$$
where $g\in C^{1,\alpha}$, $\|g\|_{\dot{C}^{1,\alpha}}\lesssim \max\{c_\ast,\epsilon_0\}$ and $\alpha$ only depends on $n$ and $p$. Let $v=\partial_e w$ and recall from the proof of Proposition~\ref{prop:Lip} that it satisfies the following divergence form equation in $B_1\setminus \Lambda_w$ \begin{align*} \partial_i a^{ij}\partial_j v = \partial_i F^i,\quad F^i=-(\partial_e a^{ij})\partial_j w, \end{align*}
with $\|\dist(\cdot,\Gamma_w)^{-\frac{1}{2}}\ln(\dist(\cdot,\Gamma_w))^{-2}F^i\|_{L^p(B_1)}\leq C(n,p)c_\ast$ (c.f. Lemma 4.3 in \cite{KRS14}\xspace). Moreover, $\p_e w$ satisfies the assumptions of Proposition \ref{prop:asympt} if $e$ is in a sufficiently large cone of directions, i.e. $e\in \mathcal{C}'_\eta (e_n)$ with $\eta>0$ sufficiently small. Thus applying Proposition~\ref{prop:asympt} we obtain the desired result. Note that the exponent $\alpha\in (0,1-\frac{n+1}{p}]$ can be chosen to have the same value as the H\"older regularity exponent of $\Gamma_w$ in Proposition~\ref{prop:C1a}, since both of these come from the boundary Harnack inequality (Lemma~\ref{lem:boundHarn}). The result for general directions $e$ follows from Remark~\ref{rmk:asym} (ii). \end{proof}
As a corollary of the asymptotic expansion for $\nabla w$, it is now possible to obtain the asymptotic expansion for $w$ via integration along appropriate contours:
\begin{cor} \label{cor:wasympt} Under the assumptions of Proposition~\ref{prop:wasympt}, there exists a Hölder continuous function $a:\Gamma_w\rightarrow \R$, $a>0$, such that for $x_0 \in \Gamma_w \cap B'_{1/2}$ and all $x\in B_{1/4}(x_0)$ with $x -x_0 \in \spa \{ \nu_{x_0}, e_{n+1}\}$ \begin{multline} \label{eq:asymptotics}
\left|w(x)-a(x_0)w_{3/2}\left(\frac{(x- x_0) \cdot\nu_{ x_0}}{(\nu_{ x_0} \cdot A( x_0) \nu_{ x_0})^{1/2}},\frac{x_{n+1}}{(a^{n+1,n+1}( x_0))^{1/2}}\right)\right| \\ \leq C(n,p)\max\{\epsilon_0,c_\ast\}\dist(x,\Gamma_w)^{\frac{3}{2}+\alpha}. \end{multline} Here $\nu_{ x_0}$ is the outer unit normal of $\Lambda_w$ at $ x_0$ and $A(x_0)=(a^{ij}(x_0))$. \\ In particular, $w(x_0+rx)/r^{3/2}$ has a unique blow-up limit as $r\rightarrow 0$. \end{cor}
\begin{proof} The asymptotic expansion of $w$ follows from the asymptotics for $\p_{\nu_{x_0}} w$, $\p_{n+1}w$ and an integration along an appropriate path in the plane spanned by $\nu_{x_0}, e_{n+1}$. Given $x_0\in \Gamma_w$, we write $$ x=x_0+t_1\nu_{x_0}+t_2 e_{n+1} \text{ for all }x \text{ with } x-x_0\in \spa\{\nu_{x_0}, e_{n+1}\}.$$ In particular, $t_1=(x-x_0)\cdot \nu_{x_0}$ and $t_2=(x-x_0)\cdot e_{n+1}=x_{n+1}$. We consider the restriction, $\tilde{w}$, of $w$ to the two-dimensional plane $x_0+\spa\{\nu_{x_0}, e_{n+1}\}$, i.e. $$\tilde{w}(t_1,t_2):=w(x_0+t_1\nu_{x_0}+t_2e_{n+1}).$$ Then, \begin{equation*} \begin{split} \p_{t_1}\tilde{w}(t_1,t_2)&=\p_{\nu_{x_0}} w(x_0+t_1\nu_{x_0}+t_2e_{n+1}),\\ \p_{t_2}\tilde{w}(t_1,t_2)&=\p_{e_{n+1}}w(x_0+t_1\nu_{x_0}+t_2e_{n+1}). \end{split} \end{equation*} From the asymptotics derived in Proposition \ref{prop:wasympt}, we obtain \begin{equation} \label{eq:expand1} \begin{split} \p_{t_1}\tilde{w}(t_1,t_2)&=b_{\nu_{x_0}}w_{1/2}(c_{1}^{-1}t_1, c_{2}^{-1}t_2)+g_1(t_1,t_2),\\ \p_{t_2}\tilde{w}(t_1,t_2)&=b_{n+1}\bar w_{1/2}(c_{1}^{-1} t_1, c_{2}^{-1}t_2)+g_2(t_1,t_2). \end{split} \end{equation} Here
$$|g_1(t_1,t_2)|,\ |g_2(t_1,t_2)|\leq C(n,p)\max\{\epsilon_0,c_\ast\}(|t_1|^{1/2+\alpha}+|t_2|^{1/2+\alpha})$$ and $b_{\nu_{x_0}}=b_{\nu_{x_0}}(x_0)$, $b_{n+1}=b_{n+1}(x_0)$. The constants $c_{1}$, $c_2$ are (for fixed $x_0$) defined as \begin{align*} c_{1} = (\nu_{x_0} \cdot A(x_0) \nu_{x_0})^{\frac{1}{2}}, \quad c_{2} = (a^{n+1, n+1}(x_0))^{\frac{1}{2}}. \end{align*} Since $\frac{3}{2}(w_{1/2},\bar w_{1/2})=\nabla w_{3/2}$, we can rewrite (\ref{eq:expand1}) as \begin{equation} \label{eq:expand2} \begin{split} \p_{t_1}\tilde{w}(t_1,t_2)&=\frac{2b_{\nu_{x_0}}c_1}{3}\p_{t_1}w_{3/2}(c_{1}^{-1}t_1, c_{2}^{-1}t_2)+g_1(t_1,t_2),\\ \p_{t_2}\tilde{w}(t_1,t_2)&=\frac{2b_{n+1}c_2}{3}\p_{t_2} w_{3/2}(c_{1}^{-1}t_1, c_{2}^{-1}t_2)+g_2(t_1,t_2). \end{split} \end{equation} Since \begin{align*} \int\limits_{\R^2} \p_{t_1}\tilde{w}\p_{t_2}\phi=\int\limits_{\R^2} \p_{t_2}\tilde{w}\p_{t_1}\phi \text{ for any } \phi\in C^\infty_c(\R^2), \end{align*} we have \begin{align*} \left(\frac{2b_{\nu_{x_0}}c_1}{3}-\frac{2b_{n+1}c_2}{3}\right)\int\limits_{\R^2} w_{3/2}(c_{1}^{-1}t_1, c_{2}^{-1}t_2) \p^2_{t_1t_2}\phi= \int\limits_{\R^2} \tilde{g}(t_1,t_2) \p^2_{t_1t_2}\phi. \end{align*} Here \begin{align*} \tilde{g}(t_1,t_2)&=\tilde{g}_1(t_1,t_2)-\tilde{g}_2(t_1,t_2)=-\int_0^{t_1}g_1(s,t_2)ds+\int_0^{t_2}g_2(t_1,s) ds. \end{align*} A direct computation then gives \begin{align*}
|\tilde{g}(t_1,t_2)| &\leq C(n,p)\max\{\epsilon_0,c_\ast\}(|t_1|^{3/2+\alpha}+|t_1||t_2|^{1/2+\alpha}\\
& \quad +|t_2||t_1|^{1/2+\alpha}+|t_2|^{3/2+\alpha}), \end{align*} which is of higher (than $3/2$) order in $(t_1,t_2)$. Thus necessarily we have \begin{align}\label{eq:curl} b_{\nu_{x_0}}c_{1}=b_{n+1}c_{2}. \end{align} Now let $\ell(s)$ be the path from $(0,0)$ to $(t_1,t_2)$: $\ell(s)= (t_1 s, t_2 s)$, $s\in [0,1]$, and let $\widetilde{\ell}(s)= (c_{1}^{-1}t_1 s, c_{2}^{-1}t_2 s)$. Then \begin{align*}
\left|\frac{d }{ds} \tilde{w}(\ell(s))- \frac{2b_{n+1}c_{2}}{3}\frac{d }{ds} w_{3/2}(\widetilde{\ell}(s)) \right| &\leq C(n,p)\max\{\epsilon_0,c_\ast\}|\tilde{\ell}(s)|^{1/2+\alpha}. \end{align*} Integrating from $0$ to $1$ leads to
$$\left| \tilde{w}(t_1,t_2) -\frac{2b_{n+1}c_2}{3}w_{3/2}(c_1^{-1}t_1,c_2^{-1}t_2)\right| \leq C(n,p)\max\{\epsilon_0,c_\ast\}(|t_1|^{3/2+\alpha}+|t_2|^{3/2+\alpha}).$$ Finally defining \begin{align}\label{eq:defa} a(x_0):=\frac{2 b_{n+1}(x_0)c_2}{3}, \end{align} and recalling the explicit expression of $c_{1}$ and $c_{2}$ yields the asymptotics \eqref{eq:asymptotics} for $w$.\\ The fact that $a>0$ follows from Corollary~\ref{cor:lowerbound}. \end{proof}
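For the reader's convenience we record the elementary computation behind the identity $\frac{3}{2}(w_{1/2},\bar w_{1/2})=\nabla w_{3/2}$, which was used in passing from (\ref{eq:expand1}) to (\ref{eq:expand2}). Writing $z=t_1+it_2$ and using the complex representation $w_{3/2}(t_1,t_2)=c_n\Ree \, z^{3/2}$ (c.f. Section~\ref{sec:opti}), with the corresponding normalizations $w_{1/2}=c_n\Ree\, z^{1/2}$ and $\bar w_{1/2}=c_n\Ree (i z^{1/2})$ (the precise value of the constant $c_n$ is irrelevant for this argument), we have
\begin{align*}
\p_{t_1} w_{3/2}&=\frac{3}{2}c_n\Ree\, z^{1/2}=\frac{3}{2}w_{1/2},\\
\p_{t_2} w_{3/2}&=\frac{3}{2}c_n\Ree \big(iz^{1/2}\big)=\frac{3}{2}\bar w_{1/2}.
\end{align*}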
From Proposition~\ref{prop:wasympt} and Remark~\ref{rmk:asym} (i), we immediately obtain the following lower and upper bounds on the growth rate of $\p_e w$ away from the regular free boundary:
\begin{cor}[Lower bound] \label{cor:lowerbound} Under the assumptions of Proposition~\ref{prop:wasympt} there exist a constant $c_0=c_0(n,p)>0$ and a radius $r_0=r_0(n,p)$ such that \begin{align*}
\p_ew(x)\geq c_0 \dist(x,\Gamma_w)^{\frac{1}{2}}, \quad x\in \{x\in B_{1/2}^+| \ \dist(x,\Lambda_w)\geq \ell_0 \dist(x,\Gamma_w)\}, \end{align*}
for any $e\in \{x\in S^{n-1} \times \{0\}| \ x_n\geq \frac{1}{2} |x''|\}$. \end{cor}
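In the model situation $w=w_{3/2}$ with $w_{3/2}(x)=c_n\Ree(x_n+ix_{n+1})^{3/2}$ (c.f. Section~\ref{sec:opti}), this lower bound can be verified directly, for instance for $e=e_n$: writing $x_n+ix_{n+1}=\rho e^{i\theta}$ with $\theta\in(-\pi,\pi)$, so that in the plane spanned by $e_n,e_{n+1}$ the set $\Lambda_{w_{3/2}}$ corresponds to $\theta=\pm\pi$ and $\rho=\dist(x,\Gamma_{w_{3/2}})$, one computes
\begin{align*}
\p_{e_n} w_{3/2}(x)=\frac{3}{2}c_n\Ree\,(x_n+ix_{n+1})^{1/2}=\frac{3}{2}c_n\rho^{1/2}\cos(\theta/2).
\end{align*}
On the set $\{\dist(x,\Lambda_{w_{3/2}})\geq \ell_0 \dist(x,\Gamma_{w_{3/2}})\}$ the angle $\theta$ is bounded away from $\pm\pi$, whence $\cos(\theta/2)\geq c(\ell_0)>0$ and the claimed growth $\p_{e_n}w_{3/2}(x)\geq c_0\dist(x,\Gamma_{w_{3/2}})^{1/2}$ follows.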
\begin{rmk} A uniform lower bound may not hold for $\p_e w$ in a full neighborhood of the origin. However, after subtracting an ``error term'' or under higher regularity assumptions on the metric $a^{ij}$, it is possible to obtain a uniform lower bound in a full neighborhood of the origin (c.f. Section~\ref{sec:lower}). \end{rmk}
\begin{cor}[Upper bound] \label{cor:optgrowth} Under the assumptions of Proposition~\ref{prop:wasympt} there exist a constant $\bar C=\bar C(n,p)$ and a radius $r_0= r_0(n,p)\in (0,1)$ such that for each $x_0\in \Gamma_w\cap B_{1/2}$ \begin{equation} \label{eq:rescale_upper}
\sup_{B_r(x_0)}|\nabla w|\leq \bar C r^{1/2}, \quad 0<r\leq r_0. \end{equation} \end{cor}
\begin{rmk}
The above upper bound holds for solutions which are sufficiently close to $w_{3/2}$ (c.f. assumption (i) of Proposition~\ref{prop:wasympt}). In the next section we will remove this assumption and show a uniform upper bound for solutions $w$ with $\|w\|_{L^2(B_1^+)}=1$ (c.f. \eqref{eq:optimain}). \end{rmk}
\subsection{Optimal regularity of the solution} \label{sec:opti}
In this section we exploit the comparison arguments from the previous section, in combination with the Carleman estimates from \cite{KRS14}, to obtain the optimal regularity of solutions to the thin obstacle problem with coefficients $a^{ij}\in W^{1,p}$, $p\in (n+1,\infty]$. Compared with the results of Proposition 4.8 in \cite{KRS14}\xspace, our main improvement here is a \emph{uniform, optimal} upper growth bound without losses (c.f. (\ref{eq:optimain})). This allows us to remove the logarithmic losses from the regularity estimates of Proposition 4.8 in \cite{KRS14}\xspace (which manifested themselves in the non-uniform constant $C(\gamma)>0$ and in the fact that only $C^{1,1/2-\epsilon}$ regularity of solutions to the variable coefficient problem was obtained there) and to derive the following optimal regularity estimates:
\begin{thm} \label{thm:optimal_reg} Let $w$ be a solution of (\ref{eq:varcoef}) in $B_{1}^+$ which satisfies the normalization condition (A0) and let $a^{ij}:B_{1}^+ \rightarrow \R^{(n+1)\times (n+1)}_{sym}$ be a $W^{1,p}$ tensor field with $p\in(n+1,\infty]$ satisfying (A1), (A2), (A3) and (A4). Then the following statements hold: \begin{itemize}
\item[(i)] There exist a constant $C=C(n,p)$ and a radius $R_0=R_0(n,p, \left\| \nabla a^{ij}\right\|_{L^p(B_1)})$, such that \begin{equation}\label{eq:optimain}
\sup _{B_r(x_0)}|w|\leq C r^{3/2}\text{ for any } r\in (0,R_0) \text{ and } x_0\in \Gamma_w\cap B'_{\frac{1}{2}}. \end{equation}
\item[(ii)] If $p\in(n+1,2(n+1)]$, it holds that $w\in C^{1,\gamma}(B_{1/2}^+)$ with $\gamma= 1-\frac{n+1}{p}$. Moreover, there exists a constant $C=C(n,p, \| a^{ij} \|_{ W^{1,p}(B_1^+)})$ (which remains bounded as $\gamma \nearrow 1/2$) such that \begin{align*}
|\nabla w(x^1)- \nabla w(x^2)| \leq C|x^1 - x^2|^{\gamma} \mbox{ for all } x^1,x^2 \in B_{1/2}^+ . \end{align*}
\item[(iii)] If $p\in [2(n+1),\infty]$, then $w\in C^{1,\frac{1}{2}}(B_{1/2}^+)$. More precisely, there exists a constant $C=C(n,p, \| a^{ij} \|_{ W^{1,p}(B_1^+)})$ such that \begin{align*}
|\nabla w(x^1)- \nabla w(x^2)| \leq C|x^1 - x^2|^{1/2} \mbox{ for all } x^1,x^2 \in B_{1/2}^+ . \end{align*} \end{itemize} \end{thm}
\begin{proof} We will only show the growth estimate from part (i). With \eqref{eq:optimain} at hand, arguing as in the proof of Proposition 4.8 in \cite{KRS14}\xspace, we then obtain the corresponding optimal regularity results (ii) and (iii). \\ By Lemma 4.1 in \cite{KRS14}\xspace and by the gap in the vanishing order (i.e. either $\kappa_{x_0} = 3/2$ or $\kappa_{x_0} \geq 2$), it suffices to show \eqref{eq:optimain} for $x_0\in \Gamma_{3/2}(w)\cap B'_{1/2}$. For simplicity we assume that $0\in \Gamma_{3/2}(w)$ and show the growth estimate (\ref{eq:optimain}) at $x_0=0$.\\
\emph{Step 1:} Suppose that $w$ is a solution to the thin obstacle problem which satisfies the assumptions (A0), (A1), (A2), (A4) with a metric $a^{ij}\in W^{1,p}(B_1^+)$ for some $p\geq p_0$, $p_0 \in (n+1,\infty)$. Suppose that $0\in \Gamma_{3/2}(w)$. We will show that for any $\delta>0$, there exists a constant $\epsilon=\epsilon(n,p, p_0, \delta)>0$, such that if \begin{itemize}
\item[(i)] $\|a^{ij}-\delta^{ij}\|_{L^\infty(B_1^+)}\leq \epsilon$,
\item[(ii)] $\|x\cdot \nabla w - \frac{3}{2} w\|_{L^2(A_{1/2,1}^+)}\leq \epsilon$, \end{itemize}
then there exists a nontrivial $3/2$-homogeneous global solution $w_0$ such that $\|w-w_0\|_{C^1(B_{3/4}^+)}\leq \delta$.
Suppose that this were false. Then there would exist a parameter $\delta_0>0$, a sequence $\epsilon_k\rightarrow 0$ and a sequence, $w_k$, of solutions to the thin obstacle problem \begin{align*} \p_i a^{ij}_k \p_j w_k = 0 \text{ in } B_1^+,\\ w_k\geq 0, \ -\partial_{n+1} w_k\geq 0, \ w_k \p_{n+1}w_k=0 \text{ on } B'_1, \end{align*} which satisfy the assumptions (A0), (A1), (A2), (A3), (A4), such that \begin{itemize} \item[(i)] $a^{ij}_k \in W^{1,p}(B_1^+)$ for some $p\geq p_0$ with $p_0$ as above, i.e. $p_0\in (n+1,\infty)$ and $p_0$ being independent of $k$,
\item[(ii)] $\|a^{ij}_k-\delta^{ij}\|_{L^\infty(B_1^+)}\leq \epsilon_k$,
\item[(iii)] $\|x\cdot \nabla w_k-\frac{3}{2}w_k\|_{L^2(A_{1/2,1}^+)}\leq \epsilon_k$, \end{itemize}
but $\|w_k- w_0\|_{C^1(B_{3/4}^+)}\geq \delta_0$ for any nontrivial $3/2$-homogeneous global solution $w_0$.
By the fundamental theorem of calculus, there exists an absolute constant $C>0$ with \begin{align*}
\|w_k\|_{L^2(A_{7/8,1}^+)}&\leq C \|w_k\|_{L^2(A_{1/2,7/8}^+)}+ C\| |x|^{3/2}x\cdot \nabla (|x|^{-3/2} w_k)\|_{L^2(A_{1/2,1}^+)}\\
&\leq C\|w_k\|_{L^2(A_{1/2,7/8}^+)}+ C\epsilon_k \|w_k\|_{L^2(B_1^+)}\\
&\leq C\|w_k\|_{L^2(B_{7/8}^{+})}+C\epsilon_k\|w_k\|_{L^2(A_{7/8,1}^+)}+C\epsilon_k\|w_k\|_{L^2(B_{7/8}^+)}. \end{align*} Hence, if $\epsilon_k>0$ is sufficiently small, then
$$\|w_k\|_{L^2(A_{7/8,1}^+)}\leq 2C \|w_k\|_{L^2(B_{7/8}^+)},$$ which implies that \begin{align}\label{eq:doubling22}
1=\|w_k\|_{L^2(B_1^+)}\leq (2C+1)\|w_k\|_{L^2(B_{7/8}^+)}. \end{align} By the interior $H^1$ estimate and by Proposition 4.8 in \cite{KRS14}\xspace, $w_k$ is uniformly bounded in $H^1(B_{7/8}^+)$ and $C^{1,\gamma_0}(B_{7/8}^+)$ for some $\gamma_0=\gamma_0(n,p_0)>0$. Thus, up to a subsequence \begin{align*} & w_k\rightarrow \bar{w} \text{ weakly in }H^1(B_{7/8}^+), \\ & w_k\rightarrow \bar{w} \text{ strongly in }L^2(B_{7/8}^+),\\ & w_k\rightarrow \bar{w} \text{ in } C^{1}_{loc}(B_{7/8}^+), \end{align*} where $\bar{w}$ (by the contradiction assumption (ii)) solves the constant coefficient thin obstacle problem \begin{align*} \Delta \bar{w} =0 \text{ in } B_{7/8}^+,\\ \bar{w} \geq 0, \ -\partial_{n+1} \bar{w} \geq 0, \ \bar{w} \p_{n+1}\bar{w}=0 \text{ on } B'_{7/8}. \end{align*} Furthermore, by the contradiction assumption (iii), $\bar{w}$ is homogeneous of degree $3/2$ in $B_{7/8}^+$. By analyticity, $\bar{w}$ is a $3/2$-homogeneous global solution in $\R^{n+1}$. Moreover, by \eqref{eq:doubling22} and by the strong convergence in $L^2(B_{7/8}^+)$,
$$\|\bar{w}\|_{L^2(B_{7/8}^+)}\geq (2C+1)^{-1},$$
thus $\bar{w}$ is nontrivial. Therefore, we have found a nontrivial $3/2$-homogeneous global solution $\bar{w}$ such that (up to a subsequence) $\|w_k-\bar{w}\|_{C^{1}(B_{3/4}^+)}\rightarrow 0$ as $k\rightarrow \infty$. This is a contradiction.\\
\emph{Step 2:} We show that there exists $\epsilon=\epsilon(n,p)$, such that if for some $r\in (0,R_0)$ with $R_0=R_0(n,p,\|\nabla a^{ij}\|_{L^p(B_1^+)})$ \begin{equation}\label{eq:opti2}
\|x\cdot \nabla w-\frac{3}{2}w\|_{L^2(A_{r/2,r}^+)}\leq \epsilon \|w\|_{L^2(B_{r}^+)}, \end{equation} then there exist constants $\mu_0=\mu_0(n,p)\in (0,1)$ and $C_1=C_1(n,p)>1$ such that \begin{align*}
|w(x)|\leq C_1 r^{-\frac{3}{2}} r^{- \frac{n+1}{2}} \left\| w \right\|_{L^2(B_{r}^+)} |x|^{\frac{3}{2}} \mbox{ for } x\in B_{\mu_0 r}^{+}. \end{align*}
For this we first notice that \eqref{eq:opti2} can be rewritten in terms of $L^2$ normalized rescalings \begin{equation} \label{eq:L2blowup}
w_r(x) := \frac{w(rx)}{r^{- \frac{n+1}{2}} \left\| w \right\|_{L^2(B_{r}^+)}}, \end{equation} to yield \begin{align}\label{eq:opti3}
\|x\cdot \nabla w_r - \frac{3}{2} w_r\|_{L^2(A_{1/2,1}^+)}\leq \epsilon \text{ for some } r\in (0,R_0). \end{align}
Here $w_r$ is a solution to the thin obstacle problem with coefficients $a^{ij}_r (x):=a^{ij}(rx)$, which satisfy $\|\nabla a^{ij}_r\|_{L^p(B_1^+)}= r^{1-\frac{n+1}{p}}\|\nabla a^{ij}\|_{L^p(B_r^+)}$. \\
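For completeness we note that both of these scaling relations follow from the change of variables $y=rx$. Indeed, with the normalization factor $\lambda_r:= r^{-\frac{n+1}{2}}\|w\|_{L^2(B_r^+)}$ from \eqref{eq:L2blowup},
\begin{align*}
\|x\cdot\nabla w_r-\tfrac{3}{2}w_r\|_{L^2(A_{1/2,1}^+)}
=\lambda_r^{-1}\,\|(y\cdot\nabla w-\tfrac{3}{2}w)(r\,\cdot)\|_{L^2(A_{1/2,1}^+)}
=\frac{\|y\cdot\nabla w-\tfrac{3}{2}w\|_{L^2(A_{r/2,r}^+)}}{\|w\|_{L^2(B_r^+)}}\leq \epsilon,
\end{align*}
which is \eqref{eq:opti3}, while $\nabla a^{ij}_r(x)=r(\nabla a^{ij})(rx)$ yields
\begin{align*}
\|\nabla a^{ij}_r\|_{L^p(B_1^+)}^p = r^p\int\limits_{B_1^+}|\nabla a^{ij}(rx)|^p\,dx = r^{p-(n+1)}\|\nabla a^{ij}\|_{L^p(B_r^+)}^p.
\end{align*}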
Secondly, by Proposition \ref{prop:C1a} in Section~\ref{sec:boundary}, there exist constants $\epsilon_0=\epsilon_0(n,p)$ and $c_{\ast}=c_{\ast}(n,p)$, which are less than some universal constant, such that if $\|w_r-w_{3/2}\|_{C^1(B_{3/4}^+)}<\epsilon_0$ with $w_{3/2}(x)=c_n\Ree(x_n+ix_{n+1})^{3/2}$ (possibly after a rotation) and $\| \nabla a^{ij} \|_{L^p(B_{3/4}^+)}\leq c_{\ast}$, then the free boundary of $w_r$ is a $C^{1,\alpha}$ graph in $B'_{1/2}$. Moreover, its $\dot{C}^{1, \alpha}$ Hölder norm is uniformly bounded, depending only on $n,p$. The Hölder exponent, $\alpha$, only depends on $n,p$ (c.f. Remark \ref{rmk:alpha}). Thus, by Corollary~\ref{cor:optgrowth}, there exist constants $\mu_0 $ and $C_1$ depending on $n,p$, such that
$$|w_{r}(x)|\leq C_1|x|^{3/2} \mbox{ for any } x\in B_{\mu_0}^+.$$ Scaling back then yields \begin{align} \label{eq:rescaled_opt}
| w(x) | \leq C_1 r^{- \frac{3}{2}} r^{- \frac{n+1}{2}}\left\| w \right\|_{L^2(B_{r}^+)} |x|^{\frac{3}{2}} \text{ for any } x\in B_{\mu_0r}^+. \end{align}
To complete the proof of step 2, we apply step 1 to $w_r$ with $\delta=\epsilon_0$. This determines the parameter $\epsilon=\epsilon(n,p)$ (if $R_0=R_0(n,p,\|\nabla a^{ij}\|_{L^p(B_1^+)})$ is chosen such that $\|\nabla a^{ij}_r\|_{L^p(B_1^+)}\leq c_\ast(n,p)$ for any $r\in (0,R_0]$). \\
\emph{Step 3:} Fix $\epsilon>0$ as in step 2. Let $r_1\in (0,R_0]$ be the largest radius such that \eqref{eq:opti2} holds (we remark that the existence of an $r_1>0$ such that (\ref{eq:opti2}) is satisfied, follows from the fact that the $L^2$ rescaling has a $3/2$ homogeneous blow-up along a certain subsequence, c.f. Proposition 4.5 in \cite{KRS14}\xspace). Then for any $r\in [r_1,R_0]$, \begin{align} \label{eq:anti2}
\|x\cdot \nabla w-\frac{3}{2}w\|_{L^2(A_{r/2,r}^+)}\geq \epsilon \|w\|_{L^2(A_{r/2,r}^+)}. \end{align} In this regime we discuss two cases: \begin{itemize}
\item{Case 1:} $\|w\|_{L^2(B_{ r_1}^+)}< r_1^{\frac{3}{2}+\frac{n+1}{2}+\frac{\epsilon}{2}}$. In this case \begin{align*}
r_1^{- \frac{3}{2}} r_1^{- \frac{n+1}{2}}\left\| w \right\|_{L^2(B_{r_1}^+)} \leq r_1^{\frac{\epsilon}{2}}. \end{align*}
Thus, by (\ref{eq:rescaled_opt}) in step 2, there exists $C=C(n,p,\|\nabla a^{ij}\|_{L^{p}})$ such that
$$|w(x)|\leq C r^{\frac{\epsilon}{2}}_1 |x|^{3/2} \text{ for any } x\in B_{\mu_0r_1}^+.$$ Recalling the parameter dependence of $\mu_0$, we obtain \begin{equation} \label{eq:est_imp}
|w(x)|\leq C |x|^{\frac{3}{2} + \frac{\epsilon}{2}}\text{ for any } x\in A_{\mu_0 r_1, r_1}^+. \end{equation} For any $r \in (r_1,R_0)$, we then use Corollary 3.1 in \cite{KRS14}\xspace (note that this estimate has a logarithmic loss so that it is necessary to use the slightly improved estimate (\ref{eq:est_imp}) in the application of the Carleman inequality; alternatively it would have been possible to use the Carleman estimate from Lemma 3.2 in \cite{KRS14}\xspace, for instance in the version of Remark 10 in \cite{KRS14}\xspace which does not need the $\epsilon$-improvement). Thus, we obtain \begin{align*}
\|w\|_{L^2(A_{r/2,r}^+)}\leq C r^{\frac{3}{2}+\frac{n+1}{2}} \mbox{ with } C=C(n,p,\|\nabla a^{ij}\|_{L^{p}}) \end{align*} for all $r\in(r_1, R_0)$.
\item{Case 2:} $\|w\|_{L^2(B_{r_1}^+)}\geq r_1^{\frac{3}{2}+\frac{n+1}{2}+\frac{\epsilon}{2}}$. In this case statement (i) from Lemma 4.2 in \cite{KRS14}\xspace (with the two radii $r_1$ and $R_0$) would either imply that $r_1$ is bounded from below, i.e. $r_1\geq C$, for some constant $C>0$ which only depends on $n,p, \left\| \nabla a^{ij}\right\|_{L^p}$, or it would imply a contradiction to the maximal choice of $r_1$. \end{itemize}
In summary, we either have a uniform lower bound for $r_1$ (depending only on $n,p, \left\| \nabla a^{ij}\right\|_{L^p}$) or we obtain that the growth estimate \eqref{eq:optimain} holds for all $r\in (0,R_0)$. \\
Combining steps 1-3, we have thus obtained a radius $R_0 = R_0(n,p, \left\| \nabla a^{ij} \right\|_{L^p})$ and a constant $C=C(n,p)$ such that $\sup_{B_r}|w|\leq C r^{3/2}$ for any $r\in (0, R_0)$. This, together with Proposition 4.8 in \cite{KRS14}\xspace, completes the proof of Theorem \ref{thm:optimal_reg}. \end{proof}
\subsection{Improved lower bounds} \label{sec:lower} In this section we improve our lower bounds from Corollary \ref{cor:lowerbound} by making them uniform: While in general a uniform version of Corollary \ref{cor:lowerbound} does not hold for the full function $w$, after a suitable splitting, it is possible to show that the leading order term satisfies a uniform lower bound. More precisely, let $a^{ij}:B_1^+ \rightarrow \R^{(n+1)\times (n+1)}_{sym}$ be a uniformly elliptic $W^{1,p}$, $p\in (n+1,\infty]$, metric, let $f\in L^p(B_1^+)$ and let $w$ be a solution of \begin{equation} \label{eq:inhom} \begin{split} \p_{i} a^{ij} \p_j w & = f \mbox{ in } B_1^+,\\ \p_{n+1}w \leq 0, \ w \geq 0, \ w\p_{n+1}w&=0 \mbox{ in } B_1', \end{split} \end{equation} satisfying the normalization conditions (A0)-(A4) from Section \ref{sec:prelim} and conditions (i), (ii) from Proposition \ref{prop:wasympt}.\\ As in Remark \ref{rmk:ref_ext}, we extend $w$, the metric $a^{ij}$ as well as $f$ from $B_1^+$ to $B_1 \setminus \Lambda_w$ and further to $\R^{n+1}$. We now split $w$ into two components $$w=u+\tilde{u},$$ where $\tilde{u}$ solves \begin{align} \label{eq:split1} a^{ij}\p_{ij}\tilde{u}-\dist(x,\Gamma_w)^{-2}\tilde{u}=f - (\p_i a^{ij})\p_j w \text{ in } \R^{n+1}\setminus \Lambda_w, \quad \tilde{u}=0\text{ on }\Lambda_w, \end{align} and the function $u$ solves \begin{align} \label{eq:split2} a^{ij}\p_{ij}u=-\dist(x,\Gamma_w)^{-2}\tilde{u}\text{ in } \R^{n+1}\setminus \Lambda_w, \quad u=0\text{ on } \Lambda_w. \end{align} As explained in the passage following Remark \ref{rmk:ref_ext} in Section \ref{subsec:Lip} the intuition is that $\tilde{u}$ is a ``controlled error'' and that $u$ captures the essential behavior of $w$. Moreover, as we will see later, $u$ is $C^{2,1-\frac{n+1}{p}}(K)$ regular for any $K\Subset \R^{n+1}\setminus \Gamma_w$.
\begin{lem}[Positivity] \label{lem:lower1'} Let $f\in L^p(B_1^+)$, $a^{ij}\in W^{1,p}$ with $p\in (2(n+1),\infty]$, and $u, w$ be as above. Then we have that $u\in C^{2,1-\frac{n+1}{p}}(K)\cap C^{1,\frac{1}{2}}(B_{1/2})$ for any $K\Subset B_1\setminus \Gamma_w$. Moreover, for $e\in \mathcal{C}'_\eta(e_n)$, $\p_eu $ satisfies the lower bound \begin{align*} \p_{e}u (x) \geq c\dist(x,\Lambda_w)\dist(x,\Gamma_w)^{-\frac{1}{2}}, \quad c>0. \end{align*} \end{lem}
\begin{rmk} If $f=0$, the statement of Lemma \ref{lem:lower1'} can be improved to hold for metrics $a^{ij}\in W^{1,p}$ with $p\in(n+1,\infty]$. \end{rmk}
\begin{proof} Let $\tilde{u}$ be defined as in (\ref{eq:split1}). By the refined estimate from Remark \ref{rmk:impv1}, we infer that \begin{align} \label{eq:auxv}
|\tilde{u}(x)|\lesssim c_\ast \dist(x,\Lambda_w)\dist(x,\Gamma_w)^{1-\frac{n+1}{p}}. \end{align}
Moreover, combined with the $C^{1,1-\frac{n+1}{p}}$ regularity of $\tilde{u}$ away from $\Gamma_w$, we obtain the up to $\Gamma_w$ regularity of $\tilde{u}$, i.e. $\tilde{u}\in C^{1,1-\frac{n+1}{p}}(B_1)$ and $\|\tilde{u}\|_{C^{1,1-\frac{n+1}{p}}(B_1^+)}\leq C c_\ast$. Thus, applying the classical elliptic estimates to $u$ away from $\Gamma_w$, we deduce that $u\in C^{2,1-\frac{n+1}{p}}(K)$ for any $K\Subset B_1\setminus \Gamma_w$. Recalling the decay estimate of $w$ (c.f. Theorem \ref{thm:optimal_reg}) and combining it with the estimate \eqref{eq:auxv} for $\tilde{u}$, yields that $u=w-\tilde{u}$ satisfies \begin{align}\label{eq:upper_u}
|u(x)|\leq C_0 \dist(x,\Gamma_w)^{\frac{3}{2}}. \end{align}
This together with the $W^{3,p}$ interior estimate gives that $u\in C^{1,\frac{1}{2}}(B_1)$. Next we show the lower bound for $\p_e u$. This follows from an application of the comparison principle from Proposition \ref{prop:nondeg}. First we note that on $B_1\cap \{|x_{n+1}|=\ell_0\}$, where $\ell_0=\frac{1}{\sqrt{4n}}>0$, $\p_e u$ is non-degenerate in the sense that \begin{align}\label{eq:lower_u} \p_e u(x)\geq c_n \dist(x,\Gamma_w)^{\frac{1}{2}},\quad e\in \mathcal{C}'_\eta(e_n). \end{align}
Indeed, for $x\in B_1\cap\{|x_{n+1}|=\ell_0\}$ we have $\dist(x,\Gamma_w)\sim \dist(x,\Lambda_w)$, where by \eqref{eq:auxv} the function $\p_e\tilde{u}$ satisfies \begin{align} \label{eq:lower_ua}
|\p_e\tilde{u}(x)|\leq c_\ast \dist(x,\Gamma_w)^{1-\frac{n+1}{p}}. \end{align}
On the other hand, the asymptotics from Proposition~\ref{prop:wasympt} yield that $\p_ew(x)\geq c \dist(x,\Gamma_w)^{1/2}$ for $x\in B_1\cap\{|x_{n+1}|=\ell_0\}$. As $p>2(n+1)$, the bound $\dist(x,\Gamma_w)^{1/2}$ dominates the right hand side of \eqref{eq:lower_ua}, and thus $\p_e u$ inherits the lower bound of $\p_e w$ on $B_1\cap\{|x_{n+1}|=\ell_0\}$.\\ Next we consider the equation for $\p_e u$ (in non-divergence form): \begin{align*} a^{ij}\p_{ij}\p_eu &= -(\p_ea^{ij})\p_{ij}u-\p_e(\dist(x,\Gamma_w)^{-2}\tilde{u})\\ &=:H+\p_i G^i. \end{align*} The functions $H$, $G^i$ satisfy \begin{align*} &\dist(x,\Gamma_w)^{\frac{1}{2}}H\in L^p(B_1),\\ &\dist(x,\Gamma_w)^{\frac{n+1}{p}} G^i\in L^{\infty}(B_1). \end{align*}
Here the first estimate follows from the $W^{1,p}$ regularity of $a^{ij}$ and the pointwise interior estimates for $\p_{ij}u$; the second estimate is a direct consequence of the previously derived bound (\ref{eq:auxv}) for $\tilde{u}$. Furthermore, in $B_1$ we have $\p_e u=\p_ew-\p_e\tilde{u}\geq -c_\ast$. Thus, the assumptions of Proposition \ref{prop:nondeg} are satisfied (note that Proposition \ref{prop:nondeg} also holds for non-divergence form equations) and we obtain the desired lower bound for $\p_e u$ in the region $\{x\in B_{1}| \ \dist(x,\Gamma_w)\sim \dist(x,\Lambda_w) \}$. \\ Finally, we remark that this lower bound holds in the whole ball. Indeed, this follows once more from a comparison argument: Let $h=c_s h_s+h_0$, where $h_s$ is the barrier function from Proposition \ref{prop:barrier} and $h_0$ is the barrier from Lemma \ref{lem:lowerbarrier}. Choosing $s=s(\alpha,n,p)$ appropriately yields that $h$ satisfies the conditions (i) and (ii) of Proposition \ref{prop:nondeg}. Moreover, by using an up to $\Lambda_w$ estimate for $q(x)$ as well as the fact that $h_0$ dominates $h_s$ (for an appropriately chosen constant $c_s$), we obtain \begin{align}\label{eq:barrier} h(x)\geq c\dist(x,\Lambda_w)\dist(x,\Gamma_w)^{-\frac{1}{2}}. \end{align} Relying on the test function \begin{align*}
v(x):=\p_e u(x)+|x'-(x_0)'|^2-2^{-8}h(x)-2nx_{n+1}^2, \end{align*} and introducing a similar splitting as in Proposition \ref{prop:nondeg} then yields \begin{align*} \p_e u(x)\geq ch(x)-c_\ast \dist(x,\Lambda_w)\dist(x,\Gamma_w)^{-\frac{n+1}{p}}. \end{align*} The second term on the RHS originates from the estimate of the auxiliary function $u_1$ (from a similar splitting as in the proof of Proposition \ref{prop:nondeg}), where we have exploited the Lipschitz regularity of $G^i$ away from $\Gamma_w$ (indeed, without this observation, the error estimate would not suffice to absorb the error contribution, c.f. Remark \ref{rmk:errorLip} below). Recalling the lower bound for $h$ (c.f. \eqref{eq:barrier}) implies the desired lower bound for $\p_e u$. \end{proof}
\begin{rmk} \label{rmk:errorLip} We note that without the Lipschitz regularity of $G^i$, we would only have had error estimates of the form \begin{align*} \p_eu(x)\geq ch (x)-c_\ast\dist(x,\Lambda_w)^{1-\frac{n+1}{q}}\dist(x,\Gamma_w)^{-\frac{n+1}{p} + \frac{n+1}{q}} \end{align*} for an arbitrary value $q\in(n+1,\infty)$. Clearly, this would not have sufficed for our absorption argument. Thus, the key improvement here consists in the observation that $G^i$ is actually a Lipschitz function away from $\Gamma_w$. As the estimates, which yield the improved bounds in Proposition \ref{prop:v1}, are interior estimates around $\Lambda_w$, this regularity away from $\Gamma_w$ suffices for our purposes. \end{rmk}
Assuming more regularity on the metric, the lower bound from Lemma \ref{lem:lower1'} can be further improved:
\begin{lem}[Positivity'] \label{lem:lower1} Let $a^{ij}:B_1^+ \rightarrow \R^{(n+1)\times (n+1)}_{sym}$ be a tensor field that satisfies the normalization conditions (A1)-(A4) from Section \ref{sec:prelim}, condition (ii) from Proposition \ref{prop:wasympt} and in addition is $C^{1,\gamma}$ regular for some $\gamma \in (0,1)$. Let $w:B_1^+ \rightarrow \R$ be a solution of the thin obstacle problem with metric $a^{ij}$ and assume that it satisfies the normalizations (A0) from Section \ref{sec:prelim} and (i) from Proposition \ref{prop:wasympt}. Then there exists $\eta= \eta(n)>0$ such that for $e\in \mathcal{C}'_\eta(e_n)$ \begin{align} \label{eq:lower1} \p_ew(x)\geq c\dist(x,\Lambda_w)\dist(x,\Gamma_w)^{-\frac{1}{2}}. \end{align} \end{lem}
\begin{proof} We only sketch the proof here. For each $x_0\in B_{1/2}^+$ we consider \begin{align*}
u(x):=\p_ew(x)+|x'-(x_0)'|^2-2^{-8}h(x)-2nx_{n+1}^2, \end{align*} where $h$ is as in the proof of Lemma \ref{lem:lower1'}. Next we split $u=u_1+u_2$, where $u_1$ solves \begin{align*} &\p_i a^{ij}\p_j u_1-K\dist(x,\Gamma_w)^{-2}u_1=\p_iF^i+g,\\ &\text{ where } F^i=-\p_ea^{ij}\p_jw, \quad g=2(x_j-(x_0)_j)\p_ia^{ij}-4nx_{n+1}\p_ja^{n+1,j}, \end{align*} with $K=K(n)$ sufficiently large. Hence, $u_2$ solves \begin{align*} &\p_ia^{ij}\p_j u_2=\tilde{g}-K\dist(x,\Gamma_w)^{-2}u_1,\\ &\text{ where } \tilde{g}=-2^{-8}\p_ia^{ij}\p_jh +2a^{ii}-4na^{n+1,n+1}. \end{align*} We apply Proposition \ref{prop:v1} to $u_1$. Since $a^{ij}\in C^{1,\gamma}$ and $w\in C^{1,1/2}$, we have $\dist(x,\Gamma_w)^{-1/2} F^i\in L^\infty$ and $g\in L^\infty$. By Remark~\ref{rmk:impv1}, for any $s\in (n+1,\infty)$ \begin{equation} \label{eq:aux} \begin{split}
|u_1(x)|&\leq C_0 \dist(x,\Lambda_w)\dist(x,\Gamma_w)^{-\frac{n+1}{s}+\frac{1}{2}}\cdot\\
&\cdot\left(\|F^i\dist(\cdot,\Gamma_w)^{-\frac{1}{2}}\|_{L^s(B_1^+)}+\|g\dist(\cdot,\Gamma_w)^{1-\frac{n+1}{s}-\frac{1}{2}}\|_{L^{s/2}(B_1^+)}\right). \end{split} \end{equation} For $u_2$ we apply a comparison argument. By the argument from the proof of Proposition \ref{prop:nondeg} we obtain that $u_2\geq 0$. Thus $u=u_1+u_2\geq u_1$. Evaluating at $x_0$ yields that $\p_ew(x_0)\geq 2^{-8}h(x_0)-u_1(x_0)$. Since $x_0$ is arbitrary in $B_{1/2}^+$ and by using the bound from \eqref{eq:aux} for $u_1$, we infer that \begin{align}\label{eq:dew} \p_ew(x)\geq 2^{-8} h (x)-u_1(x)\geq 2^{-8}h(x)-c_\ast \dist(x,\Lambda_w). \end{align}
Here we have used that $s>2(n+1)$ can be chosen to be sufficiently large and $\|Da^{ij}\|_{L^\infty}\leq c_\ast$. Recalling the estimate for $h$ in \eqref{eq:barrier}, we obtain the desired estimate for $\p_ew$ in \eqref{eq:lower1}. \end{proof}
\begin{rmk} We emphasize that without an additional splitting step, we cannot hope for an analogous result for a $W^{1,p}$, $p\in(2(n+1),\infty]$, metric $a^{ij}$. This is due to the fact that by Remark~\ref{rmk:impv1} for $u_1$ we only have \begin{align*}
|u_1(x)|&\leq C_0 \dist(x,\Lambda_w)^{1-\frac{n+1}{p}}\dist(x,\Gamma_w)^{\frac{1}{2}}\cdot\\
&\cdot\left(\|F^i\dist(\cdot,\Gamma_w)^{-\frac{1}{2}}\|_{L^p(\R^{n+1})}+\|g\dist(\cdot,\Gamma_w)^{1-\frac{n+1}{p}-\frac{1}{2}}\|_{L^{p/2}(\R^{n+1})}\right)\\ &\lesssim c_\ast \dist(x,\Lambda_w)^{1-\frac{n+1}{p}}\dist(x,\Gamma_w)^{\frac{1}{2}}. \end{align*} The bound by $\dist(x,\Lambda_w)^{1-\frac{n+1}{p}}$ is optimal in general, due to the divergence right hand side $\p_iF^i$ with $F^i\in L^p$. Thus, the resulting bound cannot be absorbed into $h^-$ in the estimate \eqref{eq:dew}. \end{rmk}
\section[Perturbations]{Perturbations} \label{sec:pert}
In this section we consider variants of the variable coefficient thin obstacle problem. Examples are settings in which the obstacle (c.f. Section \ref{sec:nonfl}) or the underlying manifold (c.f. Section \ref{sec:nfboundaries}) is not flat. Moreover, it is also possible to deal with interior thin obstacles (c.f. Section \ref{sec:int_obst}) and inhomogeneities in the equations (c.f. Section \ref{sec:inhomo}). We show that under suitable conditions on the metric and obstacles, it is possible to recover the regularity results from the flat setting. Instead of repeating all the necessary arguments, we only point out the main difficulties and differences with respect to the flat problem which was discussed in \cite{KRS14} and Sections \ref{sec:boundary}-\ref{sec:optimal} of the present article.\\ While treating the cases of inhomogeneities, non-flat obstacles and boundaries and interior obstacles separately in order to stress the essential aspects of the respective situation, we emphasize that it is possible to combine these results into a setting involving several/all of these features, e.g. a non-flat obstacle and a non-flat hypersurface.
\subsection{Inhomogeneities} \label{sec:inhomo}
Exploiting the scaling of the Carleman inequality, it is possible to deal with inhomogeneous thin obstacle problems along similar lines as in Sections 2-4 in \cite{KRS14} and Sections \ref{sec:boundary}-\ref{sec:optimal} in this paper. Here the main observation is that both for the Carleman estimate and the comparison arguments in Sections \ref{sec:boundary} and \ref{sec:optimal} the inhomogeneity is a ``lower order'' contribution.
\begin{prop}[Inhomogeneous thin obstacle problem] \label{prop:inhomo} Let \\ $a^{ij}: B_1^+ \rightarrow \R^{(n+1)\times (n+1)}$ be a $W^{1,p}(B_1^+)$, $p\in(n+1,\infty]$, metric satisfying (A1), (A2), (A4). Suppose that $f\in L^q(B_1^+)$ for some $q\in (n+1,\infty]$. Assume that $w\in H^1(B_1^+)$ is an $L^2$ normalized solution to the thin obstacle problem \begin{equation} \label{eq:varcoef_f} \begin{split} \p_i a^{ij} \p_j w = f \mbox{ in } B_{1}^+,\\ w \geq 0, \ -a^{n+1,j}\p_j w \geq 0, \ w (a^{n+1, j}\p_j w) = 0 \mbox{ in } B_1'. \end{split} \end{equation}
Then, the following statements hold: \begin{itemize} \item[(i)] The solution $w$ has the following growth estimate: There exists $C>0$ such that \begin{align*}
|w(x)|\leq C\left\{ \begin{array}{ll} \dist(x,\Gamma_w)^{2-\frac{n+1}{q}}(\ln (\dist(x,\Gamma_w)))^2 & \mbox{ if } n+1 < q< 2(n+1), \\ \dist(x, \Gamma_w)^{\frac{3}{2}}(\ln (\dist(x,\Gamma_w)))^2 & \mbox{ if } q \geq 2(n+1). \end{array} \right. \end{align*}
The constant $C>0$ depends on $n,p, q, \|f\|_{L^q}, \|\nabla a^{ij}\|_{L^p}, \left\| w \right\|_{L^2(B_1^+)}$. \item[(ii)] If additionally $a^{ij}\in W^{1,p}(B_1^+)$ and $f\in L^p(B_1^+)$ with $p\in (2(n+1),\infty]$, then $w$ has the optimal Hölder regularity: $$w \in C^{1,1/2}(B_{1/2}^+).$$ \item[(iii)] Under the same assumptions on $a^{ij}$ and $f$ as in (ii) and assuming that $0\in \Gamma_{3/2}(w)$, there exist a radius $\rho=\rho(w,f)>0$, a parameter $\alpha \in (0,1]$ and a $C^{1,\alpha}$ function $g$ such that (potentially after a rotation)
$$\Gamma_{3/2}(w) \cap B_{\rho}' = \{x = (x'',x_n,0)| \ x_{n}= g(x'') \mbox{ for } x''\in B_{\rho}'' \}.$$ \end{itemize} \end{prop}
\begin{rmk} \label{rmk:reg_est} Since $w$ solves a linear elliptic equation away from the free boundary, we can obtain an interior regularity result for $w$ by combining the growth estimate (i) with a local gradient estimate. The proof is standard (see e.g. Proposition~4.8 in \cite{KRS14}\xspace) and yields \begin{align*}
&|\nabla w(x) - \nabla w(y) |\\ &\leq \left\{ \begin{array}{ll}
C(q)|x-y|^{1-\frac{n+1}{q}}\ln^2 (|x-y|) & \mbox{ if } n+1 < q< 2(n+1), \ q \leq p,\\
C(p,q)|x-y|^{1-\frac{n+1}{p}} & \mbox{ if } n+1 < p < q< 2(n+1),\\
C |x-y|^{\frac{1}{2}}\ln^2(|x-y|) & \mbox{ if }\min\{p, q\} \geq 2(n+1), \end{array} \right. \end{align*} for all $x,y \in B_{1/2}^+$. We have \begin{align*} C(q) \rightarrow + \infty \mbox{ as } q\searrow n+1 \mbox{ and } C(p,q) \rightarrow + \infty \mbox{ as } q\searrow p. \end{align*} \end{rmk}
\begin{proof} We only point out the main differences with respect to the previously presented arguments for $f=0$. \\
\emph{Step 1: Modifications in the Carleman estimate and in the proof of Proposition 4.5 in \cite{KRS14}\xspace}. As $f\in L^q$ for some $q\in (n+1,\infty]$, Uraltseva's $C^{1,\alpha}$ regularity result remains valid \cite{U87}. Furthermore, we remark that it is still possible to carry out Uraltseva's change of coordinates (c.f. Proposition 2.1 in \cite{KRS14}\xspace and \cite{U85}), which transforms the variable coefficient problem into a new variable coefficient thin obstacle problem with an off-diagonal property as in (A1) on $B_1'$. Here, if $f$ is the original inhomogeneity, the inhomogeneity of the transformed equation is given by $\tilde{f}(y) = f(x)|\det(DT(x))|^{-1}\left. \right|_{x=T^{-1}(y)}$. Consequently, $\tilde{f}$ inherits the $L^q$ (and $W^{1,p}$) regularity from $f$. For convenience of notation, in the sequel we suppress the tildes.\\ The preceding discussion now allows us to prove the Carleman estimate along the same lines as in Section 3 in \cite{KRS14}\xspace. The only necessary modification is to incorporate a further right hand side contribution: \begin{equation} \label{eq:vCarl_f} \begin{split}
&\tau^{\frac{3}{2}} \left\| e^{\tau \phi}|x|^{-1} (1+\ln(|x|)^2)^{-\frac{1}{2}} w \right\|_{L^2(A_{\rho, r}^+)} + \tau^{\frac{1}{2}}\left\| e^{\tau \phi} (1+\ln(|x|)^2)^{-\frac{1}{2}} \nabla w \right\|_{L^2(A_{\rho,r}^+)}\\ &\leq C(n)c_0^{-1} \left(
\tau^2 C(a^{ij})\left\| e^{\tau\phi}|x|^{\gamma-1} w \right\|_{L^2(A_{\rho,r}^+)} + \left\|e^{\tau \phi} |x| f\right\|_{L^2(A_{\rho,r}^+)} \right), \end{split} \end{equation} where $\gamma := 1-\frac{n+1}{p}$ and \begin{align*}
C(a^{ij}) = \sup\limits_{A_{\rho,r}^+} \left| \frac{a^{ij}(x)-\delta^{ij}}{|x|^{\gamma}} \right| + \sup\limits_{\rho\leq \tilde{r}\leq r/2} \left\| |x|^{-\gamma} \nabla a^{ij} \right\|_{L^{n+1}(A_{\tilde{r},2\tilde{r}}^+)}. \end{align*} We estimate the additional term coming from the inhomogeneity for parameters $\tau \in [1, \frac{ \tau_0}{1+ c_0\pi/2}]$, where $\tau_0= \kappa_0 + \frac{n-1}{2}$ with $\kappa_0\geq 0 $: \begin{equation} \label{eq:bound_f} \begin{split}
\left\| e^{\tau \phi}|x|f \right\|_{L^2(A_{\rho,r}^+)} & \lesssim \left\| f \right\|_{L^q(B_1^+)} \left\| e^{\tau \phi} |x| \right\|_{L^{\frac{2q}{q-2}}(A_{\rho,r}^+)} \\
& \lesssim \left\| f \right\|_{L^q(B_1^+)}\left\| |x|^{-\tau_0 +1} \right\|_{L^{\frac{2q}{q-2}}(A_{\rho,r}^+)}\\
& \lesssim \left\| f \right\|_{L^q(B_1^+)}(r^{\delta}-\rho^\delta),\quad \delta= 2-\frac{n+1}{q}-\kappa_0. \end{split} \end{equation} Hence, the contribution originating from the inhomogeneity $f$ in the Carleman inequality can be regarded as a ``subcritical'' contribution (in the sense that $\delta\geq0$) for the above range of $\tau$ if $\kappa_0\leq 2-\frac{n+1}{q}$. Analogous to the Carleman estimate, Corollary 3.1 in \cite{KRS14}\xspace remains valid, if the contribution originating from $f$ is included on the right hand side: \begin{equation} \label{eq:consequence_Carl_f} \begin{split}
&\tau^{\frac{3}{2}}(1+|\ln(r_2)|)^{-1} e^{\tau \tilde{\phi}(\ln(r_2))} r_2^{-1}\left\| w \right\|_{L^2(A^+_{r_2,2r_2}(x_0))}\\
&+ \tau \max\{\ln(r_2/r_1)^{-1},\ln(r_3/r_2)^{-1}\} e^{\tau \tilde{\phi}(\ln(r_2))}r_2^{-1} \left\| w \right\|_{L^2(A^+_{r_2,2r_2}(x_0))} \\
& \leq C( e^{\tau \tilde{\phi}(\ln(r_1))} r_1^{-1} \left\| w \right\|_{L^2(A^+_{r_1,2r_1}(x_0))} \\
& \quad + e^{\tau \tilde{\phi}(\ln(r_3))}r_3^{-1}\left\| w \right\|_{L^2(A^+_{r_3,2 r_{3}}(x_0))} + \left\| e^{\tau \tilde{\phi}(\ln(|x|))} |x| f \right\|_{L^2(A^+_{r_1,2 r_{3}}(x_0))}). \end{split} \end{equation}
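For the reader's convenience, we sketch the computation behind the exponent $\delta$ in (\ref{eq:bound_f}). Using $e^{\tau\phi}|x| \lesssim |x|^{1-\tau_0}$ for the stated range of $\tau$ and integrating in polar coordinates (volume element $s^{n}\,ds$ in $n+1$ dimensions), we obtain for $\delta>0$, up to constants,
\begin{align*}
\left\| |x|^{1-\tau_0} \right\|_{L^{\frac{2q}{q-2}}(A_{\rho,r}^+)}
\lesssim \left( \int\limits_{\rho}^{r} s^{(1-\tau_0)\frac{2q}{q-2}+n}\, ds \right)^{\frac{q-2}{2q}}
\lesssim r^{\,1-\tau_0 + \frac{(n+1)(q-2)}{2q}},
\end{align*}
and, inserting $\tau_0 = \kappa_0 + \frac{n-1}{2}$,
\begin{align*}
1-\tau_0 + \frac{(n+1)(q-2)}{2q} = 1 - \kappa_0 - \frac{n-1}{2} + \frac{n+1}{2} - \frac{n+1}{q} = 2 - \frac{n+1}{q} - \kappa_0 = \delta.
\end{align*}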
\emph{Step 2: Modifications in Section 4 in \cite{KRS14}\xspace.} Estimate (\ref{eq:consequence_Carl_f}) combined with the boundedness of (\ref{eq:bound_f}), immediately allows us to infer the analogue of Proposition 4.1 in \cite{KRS14}\xspace if $\kappa_x < 2-\frac{n+1}{q}$. Moreover, arguing by contradiction, we also obtain that if $\kappa_x\geq 2-\frac{n+1}{q}$, then not only \begin{align*} \limsup\limits_{r \rightarrow 0}\frac{\ln\left( \fint\limits_{A_{r,2r}^+} w^2 \right)^{1/2}}{\ln(r)} \geq 2 - \frac{n+1}{q}, \end{align*} but also \begin{align*} \liminf\limits_{r \rightarrow 0} \frac{\ln\left( \fint\limits_{A_{r,2r}^+} w^2 \right)^{1/2}}{\ln(r)} \geq 2- \frac{n+1}{q}. \end{align*} Similarly, the doubling property (c.f. Proposition 4.3 in \cite{KRS14}\xspace), the blow-up result (c.f. Proposition 4.4 in \cite{KRS14}\xspace) and the lower semi-continuity statement (c.f. Proposition 4.2 in \cite{KRS14}\xspace) remain valid at points of vanishing order less than $2-\frac{n+1}{q}$. Also the growth Lemma 4.1 in \cite{KRS14}\xspace remains valid for all $x_0 \in \Gamma_w$ with $\bar{\kappa}=2-\frac{n+1}{q}$: \begin{align} \label{eq:growth_f}
\sup\limits_{B_{r}^+(x_0)}|w| \leq C r^{\min\{\kappa_{x_0},2-\frac{n+1}{q}\}}|\ln(r)|^2, \end{align}
where $C=C(n,p,\left\| \nabla a^{ij}\right\|_{L^p(B_1^+)}, \left\| f \right\|_{L^q(B_1^+)})$.\\ Apart from these results, we also obtain the existence of homogeneous blow-up limits at free boundary points with vanishing order less than $2-\frac{n+1}{q}$. In fact, this is again a consequence of the boundedness of the contributions originating from $f$ in the Carleman estimate. \\ Therefore, by Proposition 4.4 in \cite{KRS14}\xspace and Proposition 4.5 in \cite{KRS14}\xspace, at each $x\in \Gamma_w$ with $\kappa_x<2-\frac{n+1}{q}$, there exists an $L^2$ normalized blow-up sequence centered at $x$, whose limit is a nontrivial homogeneous global solution (to the constant coefficient thin obstacle problem) with homogeneity equal to $\kappa_x$. Then by the classification of the homogeneous global solutions (Proposition 4.6 in \cite{KRS14}\xspace) on the one hand, we obtain that there is no free boundary point with $\kappa_x<2-\frac{n+1}{q}$ if $q\in (n+1,2(n+1))$. Hence, (\ref{eq:growth_f}) turns into: \begin{align} \label{eq:growth_f2}
\sup\limits_{B_{r}^+(x_0)}|w| \leq C r^{2-\frac{n+1}{q}}|\ln(r)|^2, \end{align} for all $x_0\in \Gamma_w$. On the other hand, $\kappa_x \geq\frac{3}{2}$ if $q\in [2(n+1),\infty]$. Combining this information with the uniform upper growth estimate (Lemma 4.1 in \cite{KRS14}\xspace) results in \begin{align} \label{eq:modifL43}
\sup\limits_{B_r^+(x)}|w| \leq \begin{cases} Cr^{2-\frac{n+1}{q}} (\ln r)^2 &\text{ if } q\in (n+1,2(n+1))\\ Cr^{\frac{3}{2}} (\ln r)^2 &\text{ if } q \in [2(n+1),\infty] \end{cases},\quad r\in (0,1/2), \end{align}
where the constant $C>0$ depends on $\left\| \nabla a^{ij} \right\|_{L^p},n,p,\left\| f\right\|_{L^{q}}$.\\
\emph{Step 3: Modifications in Sections \ref{sec:boundary} and \ref{sec:optimal}.} In order to deduce the regularity of the free boundary and to obtain the optimal regularity of $w$, we argue along the lines of Sections \ref{sec:boundary} and \ref{sec:optimal}. Here we interpret the inhomogeneity in the equation for the derivative as a divergence form right hand side: \begin{align*} \p_i a^{ij} \p_j \p_e w & = \p_i F^i \mbox{ in } B_{1}\setminus \Lambda,\\ \p_e w & = 0 \mbox{ in } \Lambda, \end{align*} where for all $i\in\{1,...,n+1\}$ \begin{align} \label{eq:inhom1} F^i := - (\p_e a^{ij})\p_j w + e^i f \in L^p(B_1). \end{align}
Thus, (potentially) after a rescaling which only depends on $\left\| \nabla a^{ij}\right\|_{L^p(B_1)}$ and $\left\| f \right\|_{L^p(B_1)}$, we may assume that the inhomogeneity is small. Therefore, all the results in Sections \ref{sec:boundary} and \ref{sec:optimal} are valid. \end{proof}
\subsection{Non-flat obstacles} \label{sec:nonfl}
In this section we present the main ideas for dealing with non-flat obstacles. Due to our almost scaling-critical lower bound in Proposition \ref{prop:nondeg}, we are able to deal with the non-flat obstacle problem involving metrics $a^{ij}\in W^{1,p}(B_1)$ and obstacles $\varphi\in W^{2,p}(B_1)$ with $p\in (2(n+1),\infty]$. We stress, however, that the analogues of \cite{KRS14} are valid under the even weaker integrability assumption $p> n+1$.
\begin{prop}[Non-flat obstacles] \label{prop:inhomo} Let $a^{ij}: B_1^+ \rightarrow \R^{(n+1)\times (n+1)}_{sym}$ be a $W^{1,p}$ metric with $p\in (n+1,\infty]$ satisfying (A1), (A2), (A4). Suppose that $\varphi \in W^{2,p}(B_{1}')$. Let $w:B_1^+ \rightarrow \R$ be a solution of the thin obstacle problem \begin{equation} \label{eq:varcoef_f} \begin{split} \p_i a^{ij} \p_j w & = 0 \mbox{ in } B_{1}^+,\\ w \geq \varphi, \ - a^{n+1,j} \p_j w \geq 0, \ (w-\varphi) ( a^{n+1,j} \p_j w) & = 0 \mbox{ on } B_{1}'. \end{split} \end{equation} Then, the following statements hold: \begin{itemize} \item[(i)] The solution $w$ has the following growth estimate: There exists $C>0$ such that \begin{align*}
|w(x)|\leq C\left\{ \begin{array}{ll} \dist(x,\Gamma_w)^{2-\frac{n+1}{p}}(\ln (\dist(x,\Gamma_w)))^2 & \mbox{ if } n+1 < p< 2(n+1), \\ \dist(x, \Gamma_w)^{\frac{3}{2}}(\ln (\dist(x,\Gamma_w)))^2 & \mbox{ if } p \geq 2(n+1). \end{array} \right. \end{align*}
The constant $C>0$ depends on $n,p, \left\| \varphi \right\|_{W^{2,p}(B_1^+)}, a^{ij}, \left\| w \right\|_{L^2(B_1^+)}$. \item[(ii)] If additionally $a^{ij}\in W^{1,p}(B_1)$ and $\varphi\in W^{2, p}(B_1')$ with $p\in (2(n+1),\infty]$, then the solution $w$ has the optimal Hölder regularity: $$ w \in C^{1,1/2}(B_{1/2}^+).$$ \item[(iii)] Under the conditions in (ii) and assuming that $0\in \Gamma_{3/2}(w)$, there exist a radius $\rho>0$, a parameter $\alpha \in (0,1]$ and a $C^{1,\alpha}$ function $g$ such that (potentially after a rotation)
$$\Gamma_{3/2}(w) \cap B_{\rho}' = \{x = (x'',x_n,0)| \ x_{n}= g(x'') \mbox{ for } x''\in B_{\rho}'' \}.$$ \end{itemize} \end{prop}
\begin{rmk} We remark that as in Proposition \ref{prop:inhomo} the estimate in (i) immediately entails a corresponding regularity result (c.f. Remark \ref{rmk:reg_est}). \end{rmk}
We prove this result by reducing to the setting of flat obstacles.
\begin{proof} \emph{Step 1: Recovery of flat boundary conditions.} We first carry out Uraltseva's change of coordinates \cite{U85}. Then in order to recover flat boundary conditions, we introduce a function $v:B_{1}^+ \rightarrow \R$ with $v=w - \varphi$. This then leads to an inhomogeneous thin obstacle problem with a flat obstacle: \begin{equation} \label{eq:nfl} \begin{split} \p_i a^{ij} \p_j v & = f \mbox{ in } B_{1}^+,\\
v \geq 0, \ -\p_{n+1} v \geq 0 , \ v \p_{n+1} v & = 0 \mbox{ on } B_{1}', \end{split} \end{equation} with $f= - \p_i a^{ij} \p_j \varphi = - (\p_i a^{ij}) \p_j \varphi - a^{ij}\p_{ij}\varphi$. In recovering the results from the flat obstacle setting, the main difficulties arise from the error contributions which result from the inhomogeneity. In order to derive (i), we hence argue that the assumptions of Section \ref{sec:inhomo} are satisfied. In order to obtain (ii) and (iii) we interpret the error as a divergence form right hand side and argue as in Sections \ref{sec:boundary}-\ref{sec:optimal}.\\
\emph{Step 2: Bounding the inhomogeneity.} Due to our assumptions on $\varphi$ and $a^{ij}$ and by Sobolev/Morrey embedding, the metric and the inhomogeneity have the right integrability. Indeed, in the setting of (i) we have \begin{itemize} \item $\p_i a^{ij} \in L^p(B_1^+)$, $a^{ij}\in L^{\infty}(B_1^+)$, \item $\p_{ij} \varphi \in L^p(B_1')$, $\p_j \varphi \in L^{\infty}(B_1')$. \end{itemize} Hence, $f\in L^{p}(B_1^+)$, which allows us to invoke the results from Section \ref{sec:inhomo}. This then yields the growth estimate stated in (i).\\
\emph{Step 3: Argument for (ii) and (iii).} We argue as in step 3 in Section \ref{sec:inhomo}. In order to obtain the results (ii) and (iii), we consider the equation for tangential derivatives of $v$ (after carrying out an odd reflection as described in (\ref{eq:extend_reflect})): \begin{equation} \label{eq:deriv_v} \begin{split} \p_i a^{ij} \p_{j e} v & = -\p_i ((\p_e a^{ij}) \p_j v) + \p_e f \mbox{ in } B_1 \setminus \Lambda, \\ \p_e v & = 0 \mbox{ in } \Lambda. \end{split} \end{equation} We interpret the right hand side of (\ref{eq:deriv_v}) as a divergence form contribution with \begin{equation} \label{eq:div_form} F^{i}:= -(\p_e a^{ij}) \p_j v +e^{i} f, \ i\in\{1,...,n+1\}. \end{equation} The regularity of the metric and obstacle, $a^{ij} \in W^{1,p}(B_1)$, $\varphi \in W^{2,p}(B'_1)$ with $p\in (2(n+1),\infty]$, entail that \begin{align*} F^i\in L^p(B_1). \end{align*}
Moreover, by an appropriate scaling argument (which only depends on $\left\| \nabla a^{ij} \right\|_{L^p(B_1)}$ and $\left\| \varphi \right\|_{W^{2,p}(B_1')})$, we may assume that $\left\| F^i \right\|_{L^p(B_1)}$ is small. Hence, all the results in Sections \ref{sec:boundary}-\ref{sec:optimal} remain valid. This then yields the desired results of (ii) and (iii). \end{proof}
\subsection{Non-flat boundaries} \label{sec:nfboundaries} In this section we very briefly comment on the situation with non-flat boundaries. In this context we have the following result:
\begin{prop}[Non-flat boundaries] Let $\Omega \subset \R^{n+1}$ be a bounded open subset, whose boundary contains the $W^{2,p}$ hypersurface $\mathcal{M}$. Let $a^{ij}: \Omega \rightarrow \R^{(n+1)\times (n+1)}_{sym}$ be a uniformly elliptic, symmetric tensor field of class $W^{1,p}(\Omega)$, $p\in( n+1,\infty]$. Assume that $w: \Omega \cup \mathcal{M} \rightarrow \R $ is a solution of the thin obstacle problem with zero obstacle on the hypersurface $\mathcal{M}$: \begin{align*} \p_i a^{ij} \p_j w &= 0 \mbox{ in } \Omega,\\ w \geq 0, \ \nu_i a^{ij} \p_j w \geq 0, \ w(\nu_i a^{ij} \p_j w)&=0 \mbox{ on } \mathcal{M}, \end{align*} where $\nu: \mathcal{M} \rightarrow \R^{n+1}$ denotes the outer unit normal field on $ \mathcal{M}$. Then, the following statements hold: \begin{itemize} \item[(i)] (Optimal regularity) If $p\in (n+1,2(n+1)]$, then $w\in C^{1,1-\frac{n+1}{p}}_{loc}(\Omega\cup \mathcal{M})$; if $p\in (2(n+1),\infty]$, then $w \in C^{1,1/2}_{loc}(\Omega \cup \mathcal{M})$. \item[(ii)] Assuming that $0\in \Gamma_{3/2}(w)$, there exist a radius $\rho>0$ and a parameter $\alpha \in (0,1-\frac{n+1}{p}]$ such that $\Gamma_{3/2}(w)\cap B_\rho$ is an $(n-1)$-dimensional $C^{1,\alpha}$ submanifold. \end{itemize} \end{prop}
\begin{proof} The statement can be immediately reduced to the setting of the usual thin obstacle problem by carrying out a change of coordinates (c.f. Proposition 2.1 in \cite{KRS14}\xspace and \cite{U85}) that flattens the hypersurface $\mathcal{M}$. As $\mathcal{M}$ is $W^{2,p}$ regular, this change of coordinates, followed by an application of Uraltseva's change of coordinates from Proposition 2.1 in \cite{KRS14}\xspace, implies the integrability and differentiability properties (A1), (A2), (A3), (A4). This allows us to carry out the analysis from \cite{KRS14} and Sections \ref{sec:boundary}-\ref{sec:optimal} of the present article. \end{proof}
\subsection{Interior thin obstacles} \label{sec:int_obst}
Finally, we comment on the necessary modification steps in obtaining analogous results for interior thin obstacles. In this direction we have:
\begin{prop}[Interior thin obstacles] \label{prop:intobst} Let $a^{ij}: B_1 \rightarrow \R^{(n+1)\times (n+1)}_{sym}$ be a uniformly elliptic, symmetric tensor field of class $W^{1,p}$ with $p\in (n+1,\infty]$, satisfying (A1), (A2), (A3) and (A4). Assume that $w: B_1 \rightarrow \R $ is a solution of the interior thin obstacle problem with zero obstacle on $B'_1$: \begin{align*} \p_i a^{ij} \p_j w &= 0 \mbox{ in } \inte{B_1^+}\cup \inte{B_1^-},\\
w \geq 0, \ [\p_{n+1} w] \geq 0, \ w[ \p_{n+1} w] & = 0 \mbox{ on } B'_1. \end{align*} Here $[ \p_{n+1} w]:=( \p_{n+1} w)^+ -(\p_{n+1} w)^-$ and $(\p_{n+1} w)^{\pm}$ denotes the one-sided trace of the fluxes on $B'_1$. More precisely, for $H^n$ a.e. $x\in B'_1$ we have \begin{align*} ( \p_{n+1} w)^{\pm}(x):= \lim\limits_ { y \rightarrow x, \pm y_{n+1}>0} \p_{n+1} w (y). \end{align*} Then, the following statements hold: \begin{itemize} \item[(i)] (Optimal regularity) If $p\in (n+1,2(n+1)]$, then $w\in C^{1,1-\frac{n+1}{p}}_{loc}(B_1^+\cup B_1^-)$; if $p\in (2(n+1),\infty]$, then $w \in C^{1,1/2}_{loc}(B_1^+\cup B_1^- )$. \item[(ii)] Assuming that $0\in \Gamma_{3/2}(w)$, there exist a radius $\rho>0$, a parameter $\alpha \in (0,1-\frac{n+1}{p}]$ and a $C^{1,\alpha}$ function $g$ such that (potentially after a rotation) \begin{align*}
\Gamma_{3/2}(w) \cap B_{\rho}' = \{x = (x'',x_n,0)| \ x_{n}= g(x'') \mbox{ for } x''\in B_{\rho}'' \}. \end{align*} \item[(iii)] There exist functions $a, \tilde{b}:\Gamma_{3/2}(w)\cap B_{\rho/2}\rightarrow \R$ with $a \in C^{0,\alpha}(\Gamma_{3/2}(w)\cap B_{\rho/2})$ and $\tilde{b}\in C^{0,\frac{1}{2}+\alpha}(\Gamma_{3/2}(w)\cap B_{\rho/2})$ for some $\alpha>0$ (where $\alpha$ can be chosen as in (ii)), such that: for each $x_0 \in \Gamma_{3/2}(w)\cap B_{\rho/4}$, \begin{itemize} \item For each $e\in S^{n-1}\times\{0\}$ and $x\in B_{\rho/4}(x_0)$, \begin{align*}
&\left| \p_{e} w(x)-\frac{3}{2}\frac{a(x_0)(e\cdot \nu_{x_0})}{(\nu_{x_0}\cdot A(x_0)\nu_{x_0})^{1/2}}w_{1/2}\left(\frac{(x- x_0) \cdot\nu_{ x_0}}{(\nu_{ x_0} \cdot A( x_0) \nu_{ x_0})^{1/2}},\frac{x_{n+1}}{(a^{n+1,n+1}( x_0))^{1/2}}\right)\right|\notag\\ &\quad \leq C(n,p)\max\{\epsilon_0,c_\ast\} \dist(x,\Gamma_w)^{\frac{1}{2}+\widetilde{\gamma}}. \end{align*} \item For $x\in B_{\rho/4}(x_0)$, \begin{align*}
&\left| \p_{n+1} w(x)- \tilde{b}(x_0)\right.\\
&\quad \left. -\frac{3}{2} \frac{a(x_0)}{(a^{n+1,n+1}(x_0))^{1/2}}\bar w_{1/2}\left(\frac{(x- x_0) \cdot\nu_{ x_0}}{(\nu_{ x_0} \cdot A( x_0) \nu_{ x_0})^{1/2}},\frac{x_{n+1}}{(a^{n+1,n+1}(x_0))^{1/2}}\right)\right|\\ &\quad \leq C(n,p)\max\{\epsilon_0,c_\ast\} \dist(x,\Gamma_w)^{\frac{1}{2}+\widetilde{\gamma}}. \end{align*} \item For $x\in B_{\rho/4}(x_0)$ and $x-x_0\in \spa\{\nu_{x_0},e_{n+1}\}$, \begin{align*}
&\quad \left|w(x)-\tilde{b}(x_0)x_{n+1}-a(x_0)w_{3/2}\left(\frac{(x- x_0) \cdot\nu_{ x_0}}{(\nu_{ x_0} \cdot A( x_0) \nu_{ x_0})^{1/2}},\frac{x_{n+1}}{(a^{n+1,n+1}( x_0))^{1/2}}\right)\right|\notag\\ &\quad \leq C(n,p)\max\{\epsilon_0,c_\ast\} \dist(x,\Gamma_w)^{\frac{3}{2}+\widetilde{\gamma}}. \end{align*} \end{itemize} Here $w_{1/2}(x)= \Ree(x_n+ix_{n+1})^{1/2}$, $\bar{w}_{1/2}(x)=- \Imm(x_n + i x_{n+1})^{1/2}$, $w_{3/2}(x)=\Ree(x_n+ix_{n+1})^{3/2}$. Moreover, $\tilde{\gamma}=\min\{\alpha, \frac{1}{2}-\frac{n+1}{p}\}$ and $A(x_0)=(a^{ij}(x_0))$. \end{itemize} \end{prop}
\begin{rmk} As the proof of Proposition \ref{prop:intobst} shows, the function $\tilde{b}$ can be identified explicitly as $\tilde{b}(x_0)= (\p_{n+1}w)^+(x_0)$, where $x_0\in \Gamma_{3/2}(w)\cap B_{\rho/2}$. \end{rmk}
\begin{rmk} As in Sections~\ref{sec:inhomo}-\ref{sec:nfboundaries}, our results generalize to non-flat $W^{2,p}$ obstacles $\varphi$, non-flat $W^{2,p}$ manifolds $\mathcal{M}$ and $L^p$ inhomogeneities $f$, where $p>2(n+1)$. \end{rmk}
\begin{proof} As in the previous sections, we only comment on the main changes. In the first step we derive an analogue of the Carleman inequality (7) in \cite{KRS14}\xspace. Since an essential ingredient in the proof of the Carleman estimate was the vanishing of the boundary contributions due to the complementary boundary conditions, slight changes are necessary at this point. As $w$ satisfies an elliptic equation in both the upper and the lower half planes, we carry out the Carleman conjugation procedure in both half-balls separately. On each of these we obtain boundary contributions which do not necessarily vanish. However, adding the contributions originating from the lower and upper half-balls allows us to exploit the complementary boundary conditions for the interior thin obstacle problem. Hence, we obtain the vanishing of all involved boundary contributions. The Carleman estimate then reads: \begin{equation} \label{eq:consequence_Carl_i} \begin{split}
&\tau^{\frac{3}{2}}(1+|\ln(r_2)|)^{-1} e^{\tau \tilde{\phi}(\ln(r_2))} r_2^{-1}\left\| w \right\|_{L^2(A_{r_2,2r_2}(x_0))}\\
&+ \tau \max\{\ln(r_2/r_1)^{-1},\ln(r_3/r_2)^{-1}\} e^{\tau \tilde{\phi}(\ln(r_2))}r_2^{-1} \left\| w \right\|_{L^2(A_{r_2,2r_2}(x_0))} \\
& \leq C( e^{\tau \tilde{\phi}(\ln(r_1))} r_1^{-1} \left\| w \right\|_{L^2(A_{r_1,2r_1}(x_0))} \\
& \quad + e^{\tau \tilde{\phi}(\ln(r_3))}r_3^{-1}\left\| w \right\|_{L^2(A_{r_3,2 r_{3}}(x_0))} + \left\| e^{\tau \tilde{\phi}(\ln(|x|))} |x| f \right\|_{L^2(A_{r_1,2 r_{3}}(x_0))}), \end{split} \end{equation} where $A_{r_1,r_3}(x_0)$ denotes the full annulus around the point $x_0 \in B_{1}'$. \\ In deriving the analogues of the statements of Section 4 in \cite{KRS14}\xspace, we modify the definition of the blow-up sequences slightly: Indeed, we note that if $w(x)$ is a solution of the interior thin obstacle problem, then $w\in C^{1,\alpha}(B^+_1\cup B^-_1)$ for some $\alpha>0$ (c.f. page 207 in \cite{U87}). Then for $$\tilde{b}(x_0):=\lim\limits_{y \rightarrow x_0, y_{n+1}> 0}\p_{n+1}w(y),$$ the function \begin{align} \label{eq:normal1} v_{x_0}(x):= w(x) - \tilde{b}(x_0) x_{n+1}, \end{align} solves \begin{align*} \p_i a^{ij} \p_j v_{x_0} &= f \mbox{ in } B_{1/2},\\ v_{x_0 } & \geq 0, \ [\p_{n+1}v_{x_0}] \geq 0, \ v_{x_0}[\p_{n+1} v_{x_0}]=0 \mbox{ on } B_{1/2}'. \end{align*} Here $f(x):= -(\p_i a^{ij})(x) \tilde{b}(x_0) \in L^p(B_{1})$ by the regularity and integrability assumptions on $a^{ij}$ and the $C^{1,\alpha}(B_{1/2}^+)$ regularity of $w$ (c.f. \cite{U87}). Therefore, for any free boundary point $x_0 \in \Gamma_w \cap B_{1/2}$ the modified functions $v_{x_0}(x)$ now satisfy \begin{align} \label{eq:normalize}
v_{x_0}(x_0) = 0 = |\nabla v_{x_0}(x_0)|. \end{align} Here, by approaching $x=x_0$ from the interior of $\Omega_w$, we observed that $[\p_{n+1}w](x_0)=0$, which yields the second equality in (\ref{eq:normalize}).\\ Analogously, we consider the following blow-ups around the point $x_0$, which are based on $v_{x_0}$ instead of on $w$: \begin{align*}
v_{x_0,r}(x):= \frac{v_{x_0}(x_0 + r x)}{ r^{- \frac{n+1}{2}} \| v_{x_0}\|_{L^2(B_r(x_0))}}. \end{align*} In the case of the interior thin obstacle problem this replaces the blow-up functions defined in Section \ref{sec:not}. Arguing on the level of $v_{x_0}$ and $v_{x_0,r}$ and using the observations from Section \ref{sec:inhomo}, all the results from Section 4 in \cite{KRS14}\xspace follow. In particular (\ref{eq:normalize}) ensures that the discussion of the vanishing order in Proposition 4.6 in \cite{KRS14}\xspace remains valid.\\ Further relying on the functions $v_{x_0}$, we derive the analogues of the results from Sections \ref{sec:boundary}-\ref{sec:optimal} for $v_{x_0}$ by arguing along the same lines as for the boundary thin obstacle problem (in this context, we remark that since solutions are, by definition of the interior obstacle problem, already defined in the whole ball, it is not necessary to carry out reflection arguments). Hence, all the properties from these sections are valid on the level of $v_{x_0}$. These can then be directly transferred to the original function $w$. Due to the presence of the normalizing factor $\tilde{b}(x_0) x_{n+1}$ this changes the asymptotic expansion of $\p_{n+1}w$ with respect to the one derived in Remark \ref{rmk:asym} by the constant term $\tilde{b}(x_0)$. Using the $C^{0,\alpha}$ regularity of $a$ and the triangle inequality, we infer from the asymptotics of $\p_{n+1}w$ that $\tilde{b}\in C^{0,\frac{1}{2}+\alpha}(\Gamma_{3/2}(w)\cap B_{\rho/2})$. Indeed, for $x_0,x_1\in \Gamma_{3/2}(w)\cap B_{\rho/2}$, we fix a point
$x\in B_\rho$ with $\dist(x,\Gamma_w)\sim |x_0-x_1|$. By the triangle inequality we have \begin{align*}
|\tilde{b}(x_0)-\tilde{b}(x_1)|\leq |G_{x_0}(x)-G_{x_1}(x)|+C(n,p)\max\{\epsilon_0,c_\ast\}\dist(x,\Gamma_w)^{\frac{1}{2}+\widetilde{\gamma}}, \end{align*} where for $\xi\in \Gamma_{3/2}(w)\cap B_{\rho/2}$, $$G_\xi(x):=\frac{3}{2} \frac{a(\xi)}{(a^{n+1,n+1}(\xi))^{1/2}}\bar w_{1/2}\left(\frac{(x- \xi) \cdot\nu_{ \xi}}{(\nu_{ \xi} \cdot A( \xi) \nu_{ \xi})^{1/2}},\frac{x_{n+1}}{(a^{n+1,n+1}(\xi))^{1/2}}\right).$$
By using the $C^{0,\alpha}$ regularity of $a$, the ellipticity and regularity of $A=(a^{ij})$, the $C^{1,\alpha}$ regularity of $\Gamma_{3/2}(w)$, as well as the relation $\dist(x,\Gamma_w)\sim |x_0-x_1|$, we have \begin{align*}
|G_{x_0}(x)-G_{x_1}(x)|\lesssim |x_0-x_1|^{\frac{1}{2}+\alpha}. \end{align*} This proves the $C^{0,\frac{1}{2}+\alpha}$ regularity of $\tilde{b}$. The remaining observations follow along similar lines as in Sections \ref{sec:boundary}-\ref{sec:optimal}. \end{proof}
\end{document}
\begin{document}
\twocolumn[ \icmltitle{Unimodal Probability Distributions for Deep Ordinal Classification}
\icmlsetsymbol{equal}{*}
\begin{icmlauthorlist} \icmlauthor{Christopher Beckham}{mila} \icmlauthor{Christopher Pal}{mila} \end{icmlauthorlist}
\icmlaffiliation{mila}{Montr\'eal Institute of Learning Algorithms, Qu\'{e}bec, Canada}
\icmlcorrespondingauthor{Christopher Beckham}{christopher.beckham@polymtl.ca}
\icmlkeywords{ordinal, unimodal, kappa, neural networks, deep learning, machine learning, ICML}
\vskip 0.3in ]
\printAffiliationsAndNotice{}
\begin{abstract}
Probability distributions produced by the cross-entropy loss for ordinal classification problems can possess undesired properties. We propose a straightforward technique to constrain discrete ordinal probability distributions to be unimodal via the use of the Poisson and binomial probability distributions. We evaluate this approach in the context of deep learning on two large ordinal image datasets, obtaining promising results. \end{abstract}
\section{Introduction}
Ordinal classification (sometimes called ordinal regression) is a prediction task in which the classes to be predicted are discrete and ordered in some fashion. This is different from discrete classification in which the classes are not ordered, and different from regression in that we typically do not know the distances between the classes (unlike regression, in which we know the distances because the predictions lie on the real number line). Some examples of ordinal classification tasks include predicting the stages of disease for a cancer \cite{gentry2015penalized}, predicting what star rating a user gave to a movie \cite{koren2011ordrec}, or predicting the age of a person \cite{eidinger2014age}.
Two of the simplest techniques for dealing with ordinal problems are treating the problem as a discrete classification and minimising the cross-entropy loss, or treating the problem as a regression and using the squared error loss. The former ignores the inherent ordering between the classes, while the latter takes into account the distances between them (due to the square in the error term) but assumes that the labels are actually real-valued -- that is, that adjacent classes are equally distant. Furthermore, the cross-entropy loss -- under a one-hot target encoding -- is formulated such that it only `cares' about the ground truth class, so that probability estimates corresponding to the other classes may not necessarily make sense in context. We present an example of this in Figure \ref{fig:abc}, showing three probability distributions: \textit{A}, \textit{B}, and \textit{C}, all conditioned on some input image. Highlighted in orange is the ground truth (i.e. the image is of an adult), and all probability distributions have identical cross-entropy: this is because the loss only takes into account the ground truth class, $-\log(p(y|\mathbf{x})_{c})$, where $c = \textit{adult}$, and all three distributions have the same probability mass for the adult class.
\begin{figure*}
\caption{Three ordinal probability distributions conditioned on an image of an adult woman. Distributions \textit{A} and \textit{B} are unusual in the sense that they are multi-modal.}
\label{fig:abc_a}
\label{fig:abc_b}
\label{fig:abc}
\end{figure*}
Despite all distributions having the same cross-entropy loss, some distributions are `better' than others. For example, between \textit{A} and \textit{B}, \textit{A} is preferred, since \textit{B} puts an unusually high mass on the baby class. However, \textit{A} and \textit{B} are both unusual, because the probability mass does not gradually decrease to the left and right of the ground truth. In other words, it seems unusual to place more confidence on `schooler' than `teen' (distribution \textit{A}), considering that a teenager looks more like an adult than a schooler, and it seems unusual to place more confidence on `baby' than `teen' (distribution \textit{B}), considering that, again, a teenager looks more like an adult than a baby. Distribution \textit{C} makes the most sense because the probability mass gradually decreases as we move further away from the most confident class. In this paper, we propose a simple method to enforce this constraint, utilising the probability mass function of either the Poisson or binomial distribution.
For the remainder of this paper, we will refer to distributions like \textit{C} as `unimodal' distributions; that is, distributions where the probability mass gradually decreases on both sides of the class that has the majority of the mass.
\subsection{Related work}
Our work is inspired by the recent work of \citet{emd}, who shed light on the issues associated with different probability distributions having the same cross-entropy loss for ordinal problems. In their work, they alleviate this issue by minimising the `Earth mover's distance', which is defined as the minimum cost needed to transform one probability distribution to another. Because this metric takes into account the distances between classes -- moving probability mass to a far-away class incurs a large cost -- the metric is appropriate to minimise for an ordinal problem. It turns out that in the case of an ordinal problem, the Earth mover's distance reduces down to Mallow's distance: \begin{equation} \label{eq:emd2}
\text{emd}(\mathbf{\hat y}, \mathbf{y}) = \Big(\frac{1}{K}\Big)^{\frac{1}{l}} || \text{cmf}(\mathbf{\hat y}) - \text{cmf}(\mathbf{y}) ||_{l}, \end{equation} where $\text{cmf}(\cdot)$ denotes the cumulative mass function for some probability distribution, $\mathbf{y}$ denotes the ground truth (one-hot encoded), $\mathbf{\hat y}$ the corresponding predicted probability distribution, and $K$ the number of classes. The authors evaluate the EMD loss on two age estimation and one aesthetic estimation dataset and obtain state-of-the-art results. However, the authors do not show comparisons between the probability distributions learned between EMD and cross-entropy.
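The reduction in equation (\ref{eq:emd2}) is straightforward to compute. The NumPy sketch below (an illustration with $l=2$, not code from \citet{emd}) also makes the earlier point concrete: two predictions with the same mass on the ground truth class have identical cross-entropy but different EMD.

```python
import numpy as np

def emd_loss(y_hat, y, l=2):
    """Equation (1): (1/K)^(1/l) times the l-norm of the difference
    between the cumulative mass functions of prediction and target."""
    K = len(y_hat)
    diff = np.cumsum(y_hat) - np.cumsum(y)
    return (1.0 / K) ** (1.0 / l) * np.linalg.norm(diff, ord=l)

y = np.array([0.0, 0.0, 0.0, 1.0, 0.0])       # one-hot ground truth
a = np.array([0.10, 0.20, 0.10, 0.40, 0.20])  # multi-modal prediction
c = np.array([0.05, 0.10, 0.25, 0.40, 0.20])  # mass concentrated near the truth
```

Here $-\log(a_3) = -\log(c_3)$, so the cross-entropy of the two predictions is identical, yet `emd_loss(c, y) < emd_loss(a, y)`: moving mass toward far-away classes is penalised.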
Unimodality has been explored for ordinal neural networks in \citet{da2008unimodal}. They explored the use of the binomial and Poisson distributions and a non-parametric way of enforcing unimodal probability distributions (which we do not explore). One key difference between their work and ours here is that we evaluate these unimodal distributions in the context of deep learning, where the datasets are generally much larger and have more variability; however, there are numerous other differences which we will highlight throughout this paper.
\citet{mypaper} explored a loss function with an intermediate form between a cross-entropy and regression loss. In their work the squared error loss is still used, but a probability distribution over classes is still learned. This is done by adding a regression layer (i.e. a one-unit layer) at the top of what would normally be the classification layer, $p(y|\mathbf{x})$. Instead of learning the weight vector $\mathbf{a}$ it is fixed to $[0, \dots, K-1]^{T}$ and the squared error loss is minimised. This can be interpreted as drawing the class label from a Gaussian distribution $p(c|\mathbf{x}) = N(c; \mathbb{E}[\mathbf{a}]_{p(y|\mathbf{x})}, \sigma^{2})$. This technique was evaluated against the diabetic retinopathy dataset and beat most of the baselines employed. Interestingly, since $p(c|\mathbf{x})$ is a Gaussian, this is also unimodal, though it is a somewhat odd formulation as it assumes $c$ is continuous when it is really discrete.
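The fixed-weight regression layer described above amounts to taking an expectation over the learned class distribution (a minimal sketch with hypothetical helper names, not the authors' implementation):

```python
import numpy as np

def expected_class(p):
    """E[a] under p(y|x), with the weight vector a fixed to [0, ..., K-1]^T."""
    a = np.arange(len(p), dtype=float)
    return float(p @ a)

def squared_error_loss(p, c):
    """Squared error between the expectation and the discrete label c."""
    return (expected_class(p) - c) ** 2
```

For instance, with $p(y|\mathbf{x}) = [0.1, 0.2, 0.7]$ the prediction is $0.2 + 1.4 = 1.6$, a real number that is then compared against the integer class label.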
\citet{cheng} proposed the use of binary cross-entropy or squared error on an ordinal encoding scheme rather than the one-hot encoding which is commonly used in discrete classification. For example, if we have $K$ classes, then we have labels of length $K-1$, where the first class is $[0, \dots, 0]$, second class is $[1, \dots, 0]$, third class is $[1, 1, \dots, 0]$ and so forth. With this formulation, we can think of the $i$'th output unit as computing the cumulative probability $p(y > i | \mathbf{x})$, where $i \in \{0, \dots, K-2\}$. \citet{frank} also proposed this scheme but in a more general sense by using multiple classifiers (not just neural networks) to model each cumulative probability, and \citet{niu2016ordinal} proposed a similar scheme using CNNs for age estimation. This technique however suffers from the issue that the cumulative probabilities $p(y > 0 \ | \ \mathbf{x}), \dots, p(y > K-2 \ | \ \mathbf{x})$ are not guaranteed to be monotonically decreasing, which means that if we compute the discrete probabilities $p(y = 0 \ | \ \mathbf{x}), \dots, p(y = K-1 \ | \ \mathbf{x})$ these are not guaranteed to be strictly positive. To address the monotonicity issue, \citet{schapire2002modeling} proposed a heuristic solution.
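The cumulative encoding, and the decoding back to discrete probabilities, can be sketched as follows (hypothetical helper names; the decoder makes the monotonicity issue concrete):

```python
import numpy as np

def ordinal_encode(label, K):
    """Class 0 -> [0,...,0], class 1 -> [1,0,...,0], class 2 -> [1,1,0,...];
    the i'th entry is the target for p(y > i | x), i = 0..K-2."""
    return (np.arange(K - 1) < label).astype(float)

def cumulative_to_discrete(cum):
    """Recover p(y = j | x) from p(y > i | x) by differencing.
    Nothing forces the result to be non-negative unless `cum` is
    monotonically decreasing -- the issue noted in the text."""
    cum = np.concatenate(([1.0], cum, [0.0]))  # p(y > -1) = 1, p(y > K-1) = 0
    return cum[:-1] - cum[1:]
```

For monotone outputs such as $[0.9, 0.6, 0.2]$ the decoded probabilities are valid, whereas non-monotone outputs such as $[0.5, 0.7, 0.2]$ yield a negative `probability' for one class.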
There are other ordinal techniques but which do not impose unimodal constraints. The proportional odds model (POM) and its neural network extensions (POMNN, CHNN \cite{hyperspheres}) do not suffer from the monotonicity issue due to the utilization of monotonically increasing biases in the calculation of probabilities. The stick-breaking approach by \citet{khan2012stick}, which is a reformulation of the multinomial logit (softmax), could also be used in the ordinal case as it technically imposes an ordering on classes.
\subsection{Poisson distribution}
The Poisson distribution is commonly used to model the probability of the number of events, $k \in \mathbb{N} \cup \{0\}$, occurring in a particular interval of time. The average frequency of these events is denoted by $\lambda \in \mathbb{R}^{+}$. The probability mass function is defined as: \begin{equation} p(k; \lambda) = \frac{\lambda^{k}\exp(-\lambda)}{k!}, \end{equation} where $0 \leq k \leq K-1$. While we are not actually using this distribution to model the occurrence of events, we can make use of its probability mass function (PMF) to enforce discrete unimodal probability distributions. For a purely technical reason, we instead deal with the log of the PMF: \begin{equation} \begin{split} \label{eq:log} \log\Big[ \frac{\lambda^{k}\exp(-\lambda)}{k!} \Big] &= \log(\lambda^{k}\exp(-\lambda)) - \log(k!) \\ &= \log(\lambda^{k}) + \log(\exp(-\lambda)) - \log(k!) \\ &= k \log(\lambda) - \lambda - \log(k!). \end{split} \end{equation} If we let $f(\mathbf{x})$ denote the scalar output of our deep net (where $f(\mathbf{x}) > 0$, which can be enforced with the softplus nonlinearity), then we denote $h(\mathbf{x})_{j}$ to be: \begin{equation} \label{eq:fx_and_bias} j \log(f(\mathbf{x})) - f(\mathbf{x}) - \log(j!), \end{equation}
where we have simply replaced the $\lambda$ in equation (\ref{eq:log}) with $f(\mathbf{x})$. Then, $p(y = j | \mathbf{x})$ is simply a softmax over $h(\mathbf{x})$: \begin{equation}
p(y = j | \mathbf{x}) = \frac{\exp(h(\mathbf{x})_{j} / \tau)}{\sum_{i=0}^{K-1}\exp(h(\mathbf{x})_{i} / \tau)}, \end{equation}
which is required since the support of the Poisson is infinite. We have also introduced a hyperparameter to the softmax, $\tau$, to control the relative magnitudes of each value of $p(y = j | \mathbf{x})$ (i.e., the variance of the distribution). Note that as $\tau \rightarrow \infty$, the probability distribution becomes more uniform, and as $\tau \rightarrow 0$, the distribution becomes more `one-hot' like with respect to the class with the largest pre-softmax value.
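A minimal sketch of this pipeline, assuming (as the surrounding text indicates) that the softmax assigns more mass to larger $h(\mathbf{x})_j$, so that at $\tau = 1$ it reduces to the Poisson PMF renormalised over the $K$ classes. The function name is ours, not the authors' code:

```python
import math

def poisson_unimodal_probs(fx, K, tau=1.0):
    """Temperature softmax over the log Poisson PMF
    h_j = j*log(lam) - lam - log(j!), with lam = f(x) > 0."""
    # math.lgamma(j + 1) = log(j!)
    h = [j * math.log(fx) - fx - math.lgamma(j + 1) for j in range(K)]
    m = max(v / tau for v in h)               # subtract max for stability
    e = [math.exp(v / tau - m) for v in h]
    s = sum(e)
    return [v / s for v in e]
```

Smaller $\tau$ sharpens the resulting distribution around its mode; larger $\tau$ flattens it towards uniform.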
We can illustrate this technique in terms of the layers at the end of the deep network, which is shown in Figure \ref{fig:diagram}.
\begin{figure}
\caption{The first layer after $f(\mathbf{x})$ is a `copy' layer, that is, $f(\mathbf{x}) = f(\mathbf{x})_{1} = \dots = f(\mathbf{x})_{K}$. The second layer applies the log Poisson PMF transform followed by the softmax layer.}
\label{fig:diagram}
\end{figure}
We note that the term in equation (\ref{eq:fx_and_bias}) can be re-arranged and simplified to \begin{equation} \label{eq:simpl} \begin{split} h(\mathbf{x})_{j} &= j \ \log(f(\mathbf{x})) - f(\mathbf{x}) - \log(j!) \\ &= -f(\mathbf{x}) + j \log(f(\mathbf{x})) - \log(j!) \\ &= -f(\mathbf{x}) + b_{j}( f(\mathbf{x}) ). \end{split} \end{equation} In this form, we can see that the probability of class $j$ is determined by the scalar term $f(\mathbf{x})$ and a bias term that also depends on $f(\mathbf{x})$. Another technique that uses biases to determine class probabilities is the proportional odds model (POM), also called the ordered logit \cite{pom}, where the cumulative probability of a class depends on a learned bias: \begin{equation} \label{eq:pom}
p(y \leq j \ | \ \mathbf{x}) = \textmd{sigm}( f(\mathbf{x}) - \mathbf{b}_{j} ), \end{equation} where $\mathbf{b}_{1} < \dots < \mathbf{b}_{K}$. Unlike our technique, however, the bias vector $\mathbf{b}$ is not a function of $\mathbf{x}$ or $f(\mathbf{x})$, but a fixed vector that is learned during training. Furthermore, probability distributions computed using this technique are not guaranteed to be unimodal.
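For contrast, here is a sketch of how discrete probabilities fall out of POM-style cumulative probabilities by differencing adjacent cumulatives. We use the standard ordered-logit sign convention in which the cumulative probabilities increase in $j$ for increasing biases, and the function names are ours. The usage example below illustrates the last point of the paragraph: nothing in this construction enforces unimodality.

```python
import math

def sigm(z):
    return 1.0 / (1.0 + math.exp(-z))

def pom_probs(fx, biases):
    """Discrete probabilities from a POM-style model: `biases` holds the
    K-1 learned thresholds (assumed increasing), p(y <= K-1 | x) = 1, and
    p(y = j | x) is the difference of adjacent cumulative probabilities."""
    cum = [sigm(b - fx) for b in biases] + [1.0]
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]
```

With biases $(-1, 0, 1)$ and $f(\mathbf{x}) = 0$ this yields roughly $[0.27, 0.23, 0.23, 0.27]$: a valid distribution, but bimodal.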
Figure \ref{fig:k4_tau} shows the resulting probability distributions for values of $f(\mathbf{x}) \in [0.1,4.85]$ when $\tau = 1.0$ and $\tau = 0.3$. We can see that all distributions are unimodal and that by gradually increasing $f(\mathbf{x})$ we gradually change which class has the most mass. The temperature $\tau$ is also an important parameter to tune, as it alters the variance of the distribution. For example, in Figure \ref{fig:k4_tau_1}, we can see that if we are confident in predicting the second class, $f(\mathbf{x})$ should be $\sim 2.6$, though in this case the other classes receive almost as much probability mass. If we set $\tau = 0.3$ however (Figure \ref{fig:k4_tau_0p3}), at $f(\mathbf{x}) = 2.6$ the second class has relatively more mass, which is to say we are even more confident that this is the correct class. An unfortunate side effect of using the Poisson distribution is that its variance equals its mean, $\lambda$. This means that with a large number of classes the probability mass will be widely distributed, as can be seen in the $K = 8$ case in Figure \ref{fig:k8_tau}. While careful selection of $\tau$ can mitigate this, we also use this problem to motivate the use of the binomial distribution.
In the work of \citet{da2008unimodal}, they address the infinite support problem by using a `right-truncated' Poisson distribution. In this formulation, they simply find the normalization constant such that the probabilities sum to one. This is almost equivalent to what we do, since we use a softmax, although the softmax exponentiates its inputs and we also introduce the temperature parameter $\tau$ to control for the variance of the distribution.
\subsection{Binomial distribution}
The binomial distribution is used to model the probability of a given number of `successes' out of a given number of trials and some success probability. The probability mass function for this distribution -- for $k$ successes (where $0 \leq k \leq K-1$), given $K-1$ trials and success probability $p$ -- is: \begin{equation} p(k; K-1, p) = \binom{K-1}{k} p^{k}(1 - p)^{K-1-k}. \end{equation} In the context of applying this to a neural network, $k$ denotes the class we wish to predict, $K-1$ denotes the number of classes (minus one since we index from zero), and $p = f(\mathbf{x}) \in [0,1]$ is the output of the network that we wish to estimate. While no normalisation is theoretically needed since the binomial distribution's support is finite, we still had to take the log of the PMF and normalise with a softmax to address numeric stability issues. This means the resulting network is equivalent to that shown in Figure \ref{fig:diagram}, but with the log binomial PMF instead of the Poisson. Just like with the Poisson formulation, we can introduce the temperature term $\tau$ into the resulting softmax to control for the variance of the resulting distribution.
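A matching sketch for the binomial case, again with an illustrative function name; `lgamma` supplies the log binomial coefficient, and the clipping of $p$ is our own guard against $\log 0$, not something prescribed by the text:

```python
import math

def binomial_unimodal_probs(p, K, tau=1.0):
    """Temperature softmax over the log binomial PMF with K-1 trials
    and success probability p = f(x) in [0, 1]."""
    eps = 1e-7
    p = min(max(p, eps), 1 - eps)             # avoid log(0) at the ends
    # log C(K-1, k) = lgamma(K) - lgamma(k+1) - lgamma(K-k)
    h = [math.lgamma(K) - math.lgamma(k + 1) - math.lgamma(K - k)
         + k * math.log(p) + (K - 1 - k) * math.log(1 - p)
         for k in range(K)]
    m = max(v / tau for v in h)
    e = [math.exp(v / tau - m) for v in h]
    s = sum(e)
    return [v / s for v in e]
```

At $\tau = 1$ this simply reproduces the binomial PMF, e.g. $[0.125, 0.375, 0.375, 0.125]$ for $K = 4$, $p = 0.5$.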
Figure \ref{fig:binomd} shows the resulting distributions achieved by varying $p$ for when $K = 4$ and $K = 8$.
\section{Methods and Results}
In this section we go into details of our experiments, including the datasets used and the precise architectures.
\subsection{Data}
We make use of two ordinal datasets appropriate for deep neural networks: \begin{itemize} \item Diabetic retinopathy\footnote{https://www.kaggle.com/c/diabetic-retinopathy-detection/}. This is a dataset consisting of extremely high-resolution fundus image data. The training set consists of 17,563 pairs of images (where a pair consists of a left and right eye image corresponding to a patient). In this dataset, we aim to predict one of five levels of diabetic retinopathy: no DR (25,810 images), mild DR (2,443 images), moderate DR (5,292 images), severe DR (873 images), or proliferative DR (708 images). A validation set is set aside, consisting of 10\% of the patients in the training set. The images are pre-processed using the technique proposed by competition winner \citet{report} and subsequently resized to 256px width and height. \item The Adience face dataset\footnote{http://www.openu.ac.il/home/hassner/Adience/data.html} \cite{eidinger2014age}. This dataset consists of 26,580 faces belonging to 2,284 subjects. We use the form of the dataset where faces have been pre-cropped and aligned. We further pre-process the dataset so that the images are 256px in width and height. The training set consists of merging the first four cross-validation folds together (the last cross-validation fold is the test set), which comprises a total of 15,554 images. From this, 10\% of the images are held out as part of a validation set. \end{itemize}
\subsection{Network}
We make use of a modest ResNet \cite{resnets} architecture to conduct our experiments. Table \ref{tab:architecture} describes the exact architecture. We use the ReLU nonlinearity and HeNormal initialization throughout the network.
\begin{table}[H] \centering
\begin{tabular}{|l|l|} \hline \textbf{Layer} & \textbf{Output size} \\ \hline Input (3x224x224) & 3 x 224 x 224 \\ \hline Conv (7x7@32s2) & 32 x 112 x 112 \\ \hline MaxPool (3x3s2) & 32 x 55 x 55 \\ \hline 2 x ResBlock (3x3@64s1) & 32 x 55 x 55 \\ \hline 1 x ResBlock (3x3@128s2) & 64 x 28 x 28 \\ \hline 2 x ResBlock (3x3@128s1) & 64 x 28 x 28 \\ \hline 1 x ResBlock (3x3@256s2) & 128 x 14 x 14 \\ \hline 2 x ResBlock (3x3@256s1) & 128 x 14 x 14 \\ \hline 1 x ResBlock (3x3@512s2) & 256 x 7 x 7 \\ \hline 2 x ResBlock (3x3@512s1) & 256 x 7 x 7 \\ \hline AveragePool (7x7s7) & 256 x 1 x 1 \\ \hline \end{tabular} \caption{Description of the ResNet architecture used in the experiments. For convolution, WxH@FsS = filter size of dimension W x H, with F feature maps, and a stride of S. For average pooling, WxHsS = a pool size of dimension W x H with a stride of S. This architecture comprises a total of 4,307,840 learnable parameters.} \label{tab:architecture} \end{table}
We conduct the following experiments for both DR and Adience datasets: \begin{itemize} \item (Baseline) cross-entropy loss. This simply corresponds to a softmax layer for $K$ classes at the end of the average pooling layer in Table \ref{tab:architecture}. For Adience and DR, this corresponds to a network with 4,309,896 and 4,309,125 learnable parameters, respectively. \item (Baseline) squared-error loss. Rather than regress $f(\mathbf{x})$ against $y$, we regress with $(K-1)\,\textmd{sigm}(f(\mathbf{x}))$, since we have observed better results with this formulation in the past. For Adience and DR, this corresponds to 4,309,905 and 4,309,131 learnable parameters, respectively.
\item Cross-entropy loss using the Poisson and binomial extensions at the end of the architecture (see Figure \ref{fig:diagram}). For Adience and DR, this corresponds to 4,308,097 learnable parameters for both. Although \citet{da2008unimodal} mention that cross-entropy or squared error can be used, their equations assume a squared error between the (one-hot encoded) ground truth and $p(y|\mathbf{x})$, whereas we use cross-entropy. \item EMD loss (equation \ref{eq:emd2}) where $\ell = 2$ (i.e. Euclidean norm) and the entire term is squared (to get rid of the square root induced by the norm) using Poisson and binomial extensions at the end of architecture. Again, this corresponds to 4,308,097 learnable parameters for both networks. \end{itemize}
For each of these experiments, we run variants with $\tau = 1$ and with $\tau$ learned as a bias. When we learn $\tau$, we instead learn $\textmd{sigm}(\tau)$, since we found this made training more stable. Note that we can also go one step further and learn $\tau$ as a function of $\mathbf{x}$; experiments did not show any significant gain over simply learning it as a bias, but one advantage of this technique is that the network can quantify its uncertainty on a per-example basis. It is also worth noting that the Poisson and binomial formulations are slightly underparameterised compared to their baselines, but experiments we ran that addressed this (by matching model capacity) did not yield significantly different results.
It is also important to note that in the case of ordinal prediction, there are two ways to compute the final prediction: simply taking the argmax of $p(y|\mathbf{x})$ (as is done in discrete classification), or taking a `smoothed' prediction, namely the expectation of the integer labels with respect to the probability distribution, i.e., $\sum_{j=0}^{K-1} j \, p(y = j \ | \ \mathbf{x})$. We call the latter the `expectation trick'. A benefit of the expectation is that it computes a prediction that considers the probability mass of all classes. A benefit of the argmax, however, is that we can use it to easily rank our predictions, which can be important if we are interested in computing top-$k$ accuracy (rather than top-1).
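The two prediction rules can be sketched as follows (the function names are ours):

```python
def argmax_prediction(probs):
    """Standard discrete-classification rule: the most probable class."""
    return max(range(len(probs)), key=lambda j: probs[j])

def expectation_prediction(probs):
    """The `expectation trick': E[y] over integer labels 0..K-1,
    a real-valued prediction that weighs the mass of every class."""
    return sum(j * pj for j, pj in enumerate(probs))
```

For `probs = [0.1, 0.2, 0.4, 0.3]` the argmax is 2 while the expectation is 1.9; the smoothed prediction is pulled towards class 3 by its mass.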
We also introduce an ordinal evaluation metric -- the quadratic weighted kappa (QWK) \cite{cohen1968weighted} -- which has seen recent use in ordinal competitions on Kaggle. Intuitively, this is a number in $[-1,1]$, where $\kappa = 0$ denotes that the model does no better than random chance, $\kappa < 0$ worse than random chance, and $\kappa > 0$ better than random chance (with $\kappa = 1$ being the best score). The `quadratic' part of the metric imposes a quadratic penalty on misclassifications, making it an appropriate metric to use for ordinal problems.\footnote{The quadratic penalty is arbitrary but somewhat appropriate for ordinal problems. One can plug in any cost matrix into the kappa calculation.}
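A compact sketch of QWK from confusion-matrix counts, assuming integer labels $0, \dots, K-1$; this follows the usual definition with a quadratic weight matrix and marginal-product expected counts, and the function name is ours:

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, K):
    """QWK = 1 - sum(w * O) / sum(w * E), where O is the observed
    confusion matrix, E the expected matrix from the outer product of
    the marginals, and w[i, j] = (i - j)^2 / (K - 1)^2."""
    O = np.zeros((K, K))
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    w = np.array([[(i - j) ** 2 for j in range(K)] for i in range(K)],
                 dtype=float) / (K - 1) ** 2
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    return 1.0 - (w * O).sum() / (w * E).sum()
```

Perfect agreement gives $\kappa = 1$; completely reversed predictions on a balanced set give $\kappa = -1$.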
All experiments utilise an $\ell_{2}$ weight decay coefficient of $10^{-4}$, the ADAM optimiser \cite{adam} with initial learning rate $10^{-3}$, and batch size 128. A `manual' learning rate schedule is employed: we divide the learning rate by 10 when either the validation loss or the validation set QWK plateaus (whichever plateaus last), down to a minimum of $10^{-4}$ for Adience and $10^{-5}$ for DR.\footnote{We also re-ran experiments using an automatic heuristic to change the learning rate, and similar experimental results were obtained.}
\begin{figure*}
\caption{Experiments for the Adience dataset. For $\tau = 1$ and $\tau = \text{learned}$, we compare typical cross-entropy loss (blue), cross-entropy/EMD with Poisson formulation (orange solid / dashed, respectively), cross-entropy/EMD with binomial formulation (green solid / dashed, respectively), and regression (red). Learning curves have been smoothed with a LOESS regression for presentation purposes.}
\label{fig:ad_curves_a}
\label{fig:ad_curves_c}
\label{fig:ad_curves}
\end{figure*}
\begin{figure*}
\caption{Experiments for the diabetic retinopathy (DR) dataset. For $\tau = 1$ and $\tau = \text{learned}$, we compare typical cross-entropy loss (blue), cross-entropy/EMD with Poisson formulation (orange solid / dashed, respectively), cross-entropy/EMD with binomial formulation (green solid / dashed, respectively), and regression (red). Learning curves have been smoothed with a LOESS regression for presentation purposes.}
\label{fig:dr_curves_a}
\label{fig:dr_curves_c}
\label{fig:dr_curves}
\end{figure*}
\subsection{Experiments}
\begin{figure*}
\caption{Illustration of the probability distributions that are obtained from varying $f(\mathbf{x}) \in [0.1, 4.85]$ for when there are four classes ($K = 4$) and when $\tau = 1.0$ (left) and $\tau = 0.3$ (right). We can see that lowering $\tau$ results in a lower variance distribution. Depending on the number of classes, it may be necessary to tune $\tau$ to ensure the right amount of probability mass hits the correct class.}
\label{fig:k4_tau_1}
\label{fig:k4_tau_0p3}
\label{fig:k4_tau}
\end{figure*}
\begin{figure*}
\caption{Illustration of the probability distributions that are obtained from varying $f(\mathbf{x}) \in [1,10]$ for when there are eight classes ($K = 8$) and when $\tau = 1.0$ (left) and $\tau = 0.3$ (right). Because we have a greater number of classes, $f(\mathbf{x})$ must take on a greater range of values (unlike Figure \ref{fig:k4_tau}) before most of the probability mass moves to the last class.}
\label{fig:k8_tau_1}
\label{fig:k8_tau_0p3}
\label{fig:k8_tau}
\end{figure*}
\begin{figure*}
\caption{Probability distributions over selected examples in the validation set for Adience (those selected have non-unimodal probability distributions for the cross-entropy baseline). Left: from cross-entropy + Poisson model ($\tau$ learned), right: cross-entropy (baseline) model}
\label{fig:faces}
\label{fig:adience_xent_dists}
\label{fig:adience_pois_dists}
\label{fig:adience_dists}
\end{figure*}
Figure \ref{fig:ad_curves} shows the experiments run for the Adience dataset, for when $\tau = 1.0$ (Figure \ref{fig:ad_curves_a}) and when $\tau$ is learned (Figure \ref{fig:ad_curves_c}). We can see that for our methods, careful selection of $\tau$ is necessary for the accuracy on the validation set to be on par with that of the cross-entropy baseline. For $\tau = 1.0$, accuracy is poor; the gap narrows when $\tau$ is learned. To some extent, using the smoothed prediction with the expectation trick alleviates this gap. However, because the dataset is ordinal, accuracy can be very misleading, so we should also consider the QWK. For both argmax and expectation, our methods either outperform or are quite competitive with the baselines, with the exception of the QWK argmax plot for when $\tau = 1$, where only our binomial formulations were competitive with the cross-entropy baseline. Overall, considering all plots in Figure \ref{fig:ad_curves}, it appears the binomial formulation produces better results than the Poisson. There also appears to be some benefit gained from using the EMD loss for Poisson, but not for binomial.
Figure \ref{fig:dr_curves} shows the experiments run for diabetic retinopathy. We note that unlike Adience, the validation accuracy does not appear to be as affected across all specifications of $\tau$. One potential reason for this is that Adience has a larger number of classes than DR. As we mentioned earlier, the Poisson distribution is somewhat awkward as its variance is equal to its mean. Since most of the probability mass sits at the mean, if the mean of the distribution is very high (which is the case for datasets with a large $K$ such as Adience), then the large variance can negatively impact the distribution by taking probability mass away from the correct class. We can see this effect by comparing the distributions in Figure \ref{fig:k4_tau} ($K = 4$) and Figure \ref{fig:k8_tau} ($K = 8$). As with the Adience dataset, the use of the expectation trick brings the accuracy of our methods almost on par with the baselines. In terms of QWK, only our binomial formulations appear to be competitive, and only in the argmax case does one of our methods (the binomial formulation) beat the cross-entropy baseline. At least for accuracy, there appears to be some gain in using the EMD loss for the binomial formulation. Because DR is a much larger dataset than Adience, it is possible that the deep net is able to learn reasonable and `unimodal-like' probability distributions without unimodality being enforced in the model architecture.
\begin{figure*}
\caption{Illustration of the probability distributions that are obtained from varying $p \in [0,1]$ for the binomial classes when $K = 4$ (left) and $K = 8$ (right).}
\label{fig:binom_k4}
\label{fig:binom_k8}
\label{fig:binomd}
\end{figure*}
\begin{figure}
\caption{Top-$k$ accuracies computed on the Adience test set, where $k \in \{1, 2, 3\}$. }
\label{fig:adience_test}
\end{figure}
Overall, across both datasets, the QWK for our methods is generally at least competitive with the baselines, especially if we learn $\tau$ to control for the variance. In the empirical results of \citet{da2008unimodal}, the binomial formulation performed better than the Poisson, and when we consider all of our results in Figures \ref{fig:ad_curves} and \ref{fig:dr_curves} we come to the same conclusion. They justify this result by defining the `flexibility' of a discrete probability distribution and showing that the binomial distribution is more `flexible' than the Poisson. From our results, we believe that these unimodal methods act as a form of regularization which can be useful in regimes where one is interested in top-$k$ accuracy. For example, in the case of top-$k$ accuracy, we want to know if the ground truth was in the top $k$ predictions, and we may be interested in such metrics if it is difficult to achieve good top-1 accuracy. Assume that our probability distribution $p(y|\mathbf{x})$ has most of its mass on the wrong class, but the correct class is on either side of it. Under a unimodal constraint, it is guaranteed that the two classes on either side of the majority class will receive the next greatest amount of probability mass, and this can result in a correct prediction if we consider top-2 or top-3 accuracy. To illustrate this, we compute the top-$k$ accuracy on the test set of the Adience dataset, shown in Figure \ref{fig:adience_test}. We can see that even the worst-performing model -- the Poisson formulation with $\tau = 1$ (orange) -- produces a better top-3 accuracy than the cross-entropy baseline (blue).
\section{Conclusion}
In conclusion, we present a simple technique to enforce unimodal ordinal probabilistic predictions through the use of the binomial and Poisson distributions. This is an important property to consider in ordinal classification because of the inherent ordering between classes. We evaluate our technique on two ordinal image datasets and obtain results competitive with or superior to the cross-entropy baseline, for both the quadratic weighted kappa (QWK) metric and top-$k$ accuracy, under both cross-entropy and EMD losses, especially with the binomial distribution. Lastly, the unimodal constraint can make the probability distributions behave more sensibly in certain settings. However, there may be ordinal problems where a multimodal distribution is more appropriate. We leave an exploration of this issue for future work. Code will be made available here.\footnote{https://github.com/christopher-beckham/deep-unimodal-ordinal}
\end{document}
\begin{document}
\title{On cubics and quartics through a canonical curve} \author{Christian Pauly} \maketitle
\begin{abstract} We construct families of quartic and cubic hypersurfaces through a canonical curve, which are parametrized by an open subset in a Grassmannian and a Flag variety respectively. Using G. Kempf's cohomological obstruction theory, we show that these families cut out the canonical curve and that the quartics are birational (via a blowing-up of a linear subspace) to quadric bundles over the projective plane, whose Steinerian curve equals the canonical curve. \end{abstract}
\section{Introduction} Let $C$ be a smooth nonhyperelliptic curve of genus $g \geq 4$, which we consider as an embedded curve $\iota_\omega :
C \hookrightarrow \mathbb{P}^{g-1}$ by its canonical linear series $|\omega|$. Let $I = \bigoplus_{n \geq 2} I(n)$ be the graded ideal of the canonical curve. It was classically known (Noether-Enriques-Petri theorem, see e.g. \cite{acgh} p. 124) that the ideal $I$ is generated by its elements of degree $2$, unless $C$ is trigonal or a plane quintic.
It was also classically known how to construct some distinguished quadrics in $I(2)$. We consider a double point of the theta divisor $\Theta \subset \mathrm{Pic}^{g-1}(C)$, which corresponds by Riemann's singularity theorem to a degree $g-1$ line bundle $L$ satisfying $\mathrm{dim} \: |L| = \mathrm{dim} \:
|\omega L^{-1}| = 1$ and we observe that the morphism
$\iota_L \times \iota_{\omega L^{-1}} : C \longrightarrow C' \subset |L|^* \times
|\omega L^{-1}|^* = \mathbb{P}^1 \times \mathbb{P}^1$ (here $C'$ denotes the image curve) followed by the Segre embedding into $\mathbb{P}^3$ factorizes through the canonical space
$|\omega|^*$, i.e., $$ \begin{array}{ccc}
C & \hookrightarrow & |\omega|^* \\ \downarrow & & \downarrow^{\pi} \\ \mathbb{P}^1 \times \mathbb{P}^1 & \hookrightarrow & \mathbb{P}^3, \end{array} $$ where $\pi$ is projection from a $(g-5)$-dimensional vertex $\mathbb{P} V^\perp$
in $|\omega|^*$. We then define the quadric $Q_L := \pi^{-1}(\mathbb{P}^1 \times \mathbb{P}^1)$, which is a rank $\leq 4$ quadric in $I(2)$ and coincides with the projectivized tangent cone at the double point $[L] \in \Theta$ under the identification of $H^0(C,\omega)^*$ with the tangent space $T_{[L]} \mathrm{Pic}^{g-1}(C)$. The main result, due to M. Green \cite{green}, asserts that the set of quadrics $\{ Q_L \}$, when $L$ varies over the double points of $\Theta$, linearly spans $I(2)$. From this result one infers a constructive Torelli theorem by intersecting all quadrics $Q_L$ --- at least for $C$ general enough.
The geometry of the theta divisor $\Theta$ at a double point $[L]$ can also be exploited to produce higher degree elements in the ideal $I$ as follows: we expand in a suitable set of coordinates a local equation $\theta$ of $\Theta$ near $[L]$ as $\theta = \theta_2 +
\theta_3 + \ldots$, where $\theta_i$ are homogeneous forms of degree $i$. Having seen that $Q_L = \mathrm{Zeros}(\theta_2)$, we denote by $S_L$ the cubic $\mathrm{Zeros}(\theta_3) \subset |\omega|^*$, the osculating cone of $\Theta$ at $[L]$. The cubic $S_L$ has many nice geometric properties: under the blowing-up of the vertex $\mathbb{P} V^\perp \subset S_L$, the cubic $S_L$ is transformed into a quadric bundle $\tilde{S}_L$ over $\mathbb{P}^1 \times \mathbb{P}^1$ and it was shown by G. Kempf and F.-O. Schreyer \cite{ks} that the Hessian and Steinerian curves of $\tilde{S}_L$ are $C' \subset \mathbb{P}^1 \times \mathbb{P}^1$ and
$C \subset |\omega|^*$ respectively, which gives another proof of Torelli's theorem.
In this paper we construct and study distinguished cubics and quartics in the ideal $I$ by adapting the methods of \cite{ks} to rank-$2$ vector bundles over $C$. Our construction basically goes as follows (section 2): we consider a general $3$-plane $W \subset H^0(C, \omega)$ and define the rank-$2$ vector bundle $E_W$ as the dual of the kernel of the evaluation map in $\omega$ of sections of $W$. The bundle $E_W$ is stable and admits a theta divisor $D(E_W)$ in the Jacobian $JC$. Since $D(E_W)$ contains the origin ${\mathcal O} \in JC$ with multiplicity $4$, the projectivized tangent cone to
$D(E_W)$ at ${\mathcal O}$ is a quartic hypersurface in $\mathbb{P} T_{\mathcal O} JC = |\omega|^*$, denoted by $F_W$ and which contains the canonical curve. We therefore obtain a rational map from the Grassmannian $\mathrm{Gr}(3,H^0(\omega))$
to the ideal of quartics $|I(4)|$ \begin{equation}\label{fmap4}
\mathbf{F}_4 \ : \ \mathrm{Gr}(3,H^0(\omega)) \dashrightarrow |I(4)|, \qquad W \mapsto F_W. \end{equation}
Our main tool to study the tangent cones $F_W$ is G. Kempf's cohomological obstruction theory \cite{kempf1},\cite{kempf2},\cite{ks} which in our set-up leads to a simple criterion (Proposition \ref{simpl})
for $b \in \mathbb{P} T_{\mathcal O} JC = |\omega|^*$ to belong to $F_W$. We deduce in particular from this criterion that the cubic polar $P_x(F_W)$ of $F_W$ with respect to a point $x \in W^\perp$ also contains the canonical curve. Here $W^\perp$ denotes the annihilator of $W \subset H^0(\omega)$. We therefore obtain a rational map from the flag variety $\mathrm{Fl}(3,g-1,H^0(\omega))$
parametrizing pairs $(W,x)$ to the ideal of cubics $|I(3)|$ \begin{equation}\label{fmap3}
\mathbf{F}_3 \ : \ \mathrm{Fl}(3,g-1,H^0(\omega)) \dashrightarrow |I(3)|, \qquad (W,x) \mapsto P_x(F_W). \end{equation}
Our two main results can be stated as follows.
{\bf (1)} Like the cubic osculating cones $S_L$, the quartic tangent cones $F_W$ transform under the blowing-up of the vertex $\mathbb{P} W^\perp \subset F_W$ into a quadric bundle $\tilde{F}_W \rightarrow
\mathbb{P} W^* = \mathbb{P}^2$. Their Hessian and Steinerian curves are the plane curve $\Gamma$, image under the projection with center $\mathbb{P} W^\perp$, $\pi: C \rightarrow \Gamma \subset \mathbb{P} W^*$, and the canonical curve $C \subset |\omega|^*$ (Theorem \ref{mainthm}). This surprising analogy with the osculating cones $S_L$ remains however unexplained.
{\bf (2)} Let us denote by $|F_4| \subset |I(4)|$ and $|F_3| \subset
|I(3)|$ the linear subsystems spanned by the quartics $F_W$ and the cubics $P_x(F_W)$ respectively. Then we show
(Theorem \ref{mainthm2}) that both base loci of $|F_4|$ and $|F_3|$ coincide with $C \subset |\omega|^*$, i.e., the quartics $F_W$ (resp. the cubics $P_x(F_W)$) cut out the canonical curve.
The starting point of our investigations was the question asked by B. van Geemen and G. van der Geer (\cite{vgvg} page 629) about ``these mysterious quartics'' which arise as tangent cones to $2\theta$-divisors in the Jacobian having multiplicity $\geq 4$ at the origin. In that paper the authors implicitly conjectured that the base locus of $|F_4|$ equals $C$, which was subsequently proved by G. Welters
\cite{welt}. Our proof follows from the fact that $|F_4|$ contains all squares of quadrics in $|I(2)|$.
This paper leaves many questions unanswered (section 7), e.g. finding explicit equations of the quartics $F_W$, their syzygies, and the dimensions of $|F_3|$ and $|F_4|$. The techniques used here also apply when replacing $|\omega|^*$ by the Prym-canonical space
$|\omega \alpha|^*$, and generalizing rank-$2$ vector bundles to symplectic bundles.
{\bf Acknowledgements:} Many results contained in this paper arose from discussions with Bert van Geemen, whose influence on this work is considerable. I would like to thank him for these enjoyable and valuable discussions.
\section{Some constructions for rank-$2$ vector bundles with canonical determinant}
In this section we briefly recall some known results from \cite{bv}, \cite{vgi} and \cite{pp} on rank-$2$ vector bundles over $C$.
\subsection{Bundles $E$ with $\mathrm{dim} \: H^0(C,E) \geq 3$}
Let $W \subset H^0(C,\omega)$ be a $3$-plane. We denote by $[W] \in
\mathrm{Gr}(3,H^0(\omega))$ the corresponding point in the Grassmannian and by $\mathcal{B} \subset \mathrm{Gr}(3,H^0(\omega))$ the codimension $2$ subvariety consisting of $[W]$ such that the net $\mathbb{P} W \subset |\omega|$ has a base point. For $[W] \in \hspace{-4mm}/\ \mathcal{B}$ we consider (see \cite{vgi} section 4) the rank-$2$ vector bundle $E_W$ defined by the exact sequence \begin{equation} \label{esw} 0 \longrightarrow E^*_W \longrightarrow {\mathcal O}_C \otimes W \map{ev} \omega \longrightarrow 0. \end{equation} Here $E^*_W$ denotes the dual bundle of $E_W$. We have $\mathrm{det} \: E_W = \omega$ and $W^* \subset H^0(C,E_W)$. We denote by $\mathcal{D}$ the
effective divisor in $|{\mathcal O}_{\mathrm{Gr}}(g-2)|$ defined by the condition $$ [W] \in \mathcal{D} \iff \mathrm{dim} \: H^0(C,E_W) \geq 4.$$ Moreover $\mathcal{B} \subset \mathcal{D}$ and $E_W$ is stable if $[W] \notin \mathcal{D}$.
Let $W^\perp \subset H^0(\omega)^* = H^1({\mathcal O})$ denote the annihilator of $W \subset H^0(\omega)$. We call the projective subspace $\mathbb{P} W^\perp \subset |\omega|^*$ the {\em vertex} and denote by
$$ \pi: |\omega|^* \dashrightarrow \mathbb{P} W^*, \qquad \pi: C \rightarrow \Gamma \subset \mathbb{P} W^*,$$ the projection with center $\mathbb{P} W^\perp$. If $[W] \notin \mathcal{B}$, then $C \cap \mathbb{P} W^\perp = \emptyset$ and $\pi$ restricts to a morphism $C \rightarrow \mathbb{P} W^*$. Its image is a plane curve $\Gamma$ of degree $2g-2$. We note that $E_W = \pi^* (T(-1))$, where $T$ is the tangent bundle of $\mathbb{P} W^* = \mathbb{P}^2$.
Conversely any bundle $E$ with $\mathrm{det} \: E = \omega$ and $\mathrm{dim} \: H^0(C,E) \geq 3$ is of the form $E_W$.
\subsection{Bundles $E$ with $\mathrm{dim} \: H^0(C,E) \geq 4$}
Following \cite{bv} (see also \cite{pp} section 5.2) we associate to a bundle $E$ with $\mathrm{dim} \: H^0(C,E) = 4$ a rank $\leq 6$ quadric
$Q_E \in |I(2)|$, which is defined as the inverse image of the Klein quadric under the dual $\mu^*$ of the exterior product map
$$ \mu^*: |\omega|^* \longrightarrow \mathbb{P} (\Lambda^2 H^0(E)^*) \supset \mathrm{Gr}(2, H^0(E)^*), \qquad Q_E:=(\mu^*)^{-1} \left( \mathrm{Gr} \right).$$ Composing with the previous construction, we obtain a rational map
$$ \alpha : \mathcal{D} \dashrightarrow |I(2)|, \qquad \alpha([W]) = Q_{E_W}.$$
Moreover given a $Q \in |I(2)|$ with $\mathrm{rk} \ Q \leq 6$ and $\mathrm{Sing} \ Q \cap C = \emptyset$, it is easily shown that
$$ \alpha^{-1}(Q) = \{ [W] \in \mathcal{D} \ | \ \mathbb{P} W^\perp \subset Q \}.$$ If $\mathrm{rk} \ Q = 6$, then $\alpha^{-1}(Q)$ has two connected components, which are isomorphic to $\mathbb{P}^3$.
\begin{lem} \label{quaw} We have $[W] \notin \mathcal{D}$ if and only if the linear map induced by restricting quadrics to the vertex $\mathbb{P} W^\perp$ $$res: I(2) \longrightarrow H^0(\mathbb{P} W^\perp,{\mathcal O}(2))$$ is an isomorphism. \end{lem}
\begin{proof} It is enough to observe that the two spaces have the
same dimension and that a nonzero element in $\mathrm{ker} \: res$ corresponds to a $Q \in |I(2)|$ with $\mathrm{rk} \: Q \leq 6$. \end{proof}
\subsection{Definition of the quartic $F_W$}
We will now define the main object of this paper. Given $[W] \notin \mathcal{B}$, we consider the $2\theta$-divisor $D(E_W) \subset JC$ (see e.g. \cite{bv},\cite{vgi},\cite{pp}), whose set-theoretical support equals
$$ D(E_W) = \{ \xi \in JC \ | \ \mathrm{dim} \: H^0(C, \xi \otimes E_W) > 0 \}.$$ Since $\mathrm{mult} \:_{\mathcal O} D(E_W) \geq \mathrm{dim} \: H^0(C, E_W) \geq 3$ and since any $2\theta$-divisor is symmetric, the first nonzero term of the Taylor expansion of a local equation of $D(E_W)$ at the origin ${\mathcal O}$ is a
homogeneous polynomial $F_W$ of degree $4$. The hypersurface in $|\omega|^* = \mathbb{P} T_{\mathcal O} JC$ associated to $F_W$ is also denoted by $F_W$. Here we restrict attention to the case $\mathrm{dim} \: H^0(C,E_W) = 3 \ \text{or} \ 4$. We have
$$ F_W := \mathrm{Cone}_{\mathcal O} (D(E_W)) \subset |\omega|^*.$$
The study of the quartics $F_W$ for $[W] \in \mathrm{Gr}(3,H^0(\omega)) \setminus \mathcal{D}$ is the main purpose of this paper. If $[W] \in \mathcal{D}$, the quartics $F_W$ have already been described in \cite{pp} Proposition 5.12.
\begin{prop} If $\mathrm{dim} \: H^0(C,E_W) = 4$, then $F_W$ is a double quadric $$ F_W = Q^2_{E_W}.$$ \end{prop}
Since $|I(2)|$ is linearly spanned by rank $\leq 6$ quadrics (see \cite{pp} section 5), we obtain the following fact, which will be used in section 6.
\begin{prop} \label{f4sq}
The linear subsystem $|F_4|$ contains all squares of quadrics in $|I(2)|$. \end{prop}
Although we will not use that fact, we mention that the rational map \eqref{fmap4} is given by a linear subsystem $\mathbb{P} \Gamma \subset
|\mathcal{J}_\mathcal{B}(g-1)|$, where $\mathcal{J}_\mathcal{B}$ is the ideal sheaf of the subvariety $\mathcal{B}$. If $g=4$, the inclusion is an equality (see \cite{opp} section 6). If $g>4$, a description of $\mathbb{P} \Gamma$ is not known.
\section{Kempf's cohomological obstruction theory}
In this section we outline Kempf's deformation theory \cite{kempf1} and apply it to the study of the tangent cones $F_W$ of the divisors $D(E_W)$.
\subsection{Variation of cohomology}
Let $\mathcal{E}$ be a vector bundle over the product $C \times S$, where $S = \mathrm{Spec}(A)$ is an affine neighbourhood of the origin of $JC$. We restrict attention to the case $$\mathcal{E} = \pi_C^* E_W \otimes \mathcal{L},$$ for some $3$-plane $W$, and recall that Kempf's deformation theory was applied \cite{kempf1}, \cite{kempf2}, \cite{ks} to the case $\mathcal{E} = \pi_C^* M \otimes \mathcal{L}$, for a line bundle $M$ over $C$. The line bundle $\mathcal{L}$ denotes the restriction of a Poincar\'e line bundle over $C \times JC$ to the neighbourhood $C \times S$. The fundamental tool for studying the variation of cohomology, i.e., the two upper-semicontinuous functions on $S$ $$ s \mapsto h^0(C\times \{s\}, \mathcal{E} \otimes_A \mathbb{C}_s), \qquad
s \mapsto h^1(C\times \{s\}, \mathcal{E} \otimes_A \mathbb{C}_s), $$ where $\mathbb{C}_s = A/\mathfrak{m}_s$ and $\mathfrak{m}_s$ is the maximal ideal of $s \in S$, is the existence of an approximating homomorphism.
\begin{thm}[Grothendieck, \cite{kempf1} section 7] \label{gro} Given a family $\mathcal{E}$ of vector bundles over $C \times S$, there exist two flat $A$-modules $F$ and $G$ of finite type and an $A$-homomorphism $\alpha : F \rightarrow G$ such that for all $A$-modules $M$, we have isomorphisms $$ H^0(C\times S , \mathcal{E} \otimes_A M) \cong \mathrm{ker} \:(\alpha \otimes_A id_M), \qquad
H^1(C\times S , \mathcal{E} \otimes_A M) \cong \mathrm{coker} \:(\alpha \otimes_A id_M).$$ \end{thm}
By considering a smaller neighbourhood of the origin, we may assume the $A$-modules $F$ and $G$ to be locally free (Nakayama's lemma). Moreover (\cite{kempf1} Lemma 10.2) by restricting further the neighbourhood, we may find an approximating homomorphism $\alpha : F \rightarrow G$ such that $\alpha \otimes \mathbb{C}_0 : F \otimes_A A/\mathfrak{m}_0 \rightarrow G \otimes_A A/\mathfrak{m}_0$ is the zero homomorphism.
We apply this theorem to the family $\mathcal{E} = \pi^*_C E_W \otimes \mathcal{L}$, for $[W] \notin \mathcal{D}$. Since by Riemann-Roch $\chi(\mathcal{E} \otimes \mathbb{C}_s) = \chi(E_W \otimes \mathcal{L}_s) = 0$, $\forall s \in S$, and since $h^0(C,E_W) = 3$, the local equation $f$ of the divisor
$$ D(E_W)_{|S} = \{ s \in S \: | \: h^0(C \times \{s\}, E_W \otimes \mathcal{L}_s) > 0 \} $$ is given at the origin ${\mathcal O}$ by the determinant of a $3 \times 3$ matrix of regular functions $f_{ij}$ on $S$, with $1 \leq i,j \leq 3$, which vanish at ${\mathcal O}$, i.e., the $A$-modules $F$ and $G$ are free and of rank $3$. Hence $$ f = \mathrm{det} \: (f_{ij}). $$
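\noindent The vanishing of the Euler characteristic is immediate from Riemann--Roch: since $E_W$ has canonical determinant, so that $\mathrm{deg} \: E_W = 2g-2$ (this is implicit in the Serre duality identification $H^1(E_W) \cong H^0(E_W)^*$ used below), we have for every $s \in S$
$$ \chi(E_W \otimes \mathcal{L}_s) = \mathrm{deg} \: E_W + \mathrm{rk} \: (E_W)(1-g) = (2g-2) + 2(1-g) = 0. $$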
The linear part of the regular functions $f_{ij}$ is related to the cup-product as follows (\cite{kempf1} Lemma 10.3 and Lemma 10.6): let $\mathfrak{m} = \mathfrak{m}_0$ be the maximal ideal of the origin ${\mathcal O} \in S$ and consider the exact sequence of $A$-modules $$ 0 \longrightarrow \mathfrak{m}/\mathfrak{m}^2 \longrightarrow A/\mathfrak{m}^2 \longrightarrow A/\mathfrak{m} \longrightarrow 0. $$ After tensoring with $\mathcal{E}$ over $C \times S$ and taking cohomology, we obtain a coboundary map $$H^0(C,E_W) = H^0(C \times \{s \} , \mathcal{E} \otimes_A A/\mathfrak{m} ) \map{\delta} H^1(C \times \{s \} , \mathcal{E} \otimes_A \mathfrak{m}/\mathfrak{m}^2 ) = H^1(C,E_W) \otimes \mathfrak{m}/\mathfrak{m}^2, $$ where $\mathfrak{m}/\mathfrak{m}^2$ is the Zariski cotangent space at ${\mathcal O}$ to $JC$. Note that we have a canonical isomorphism $(\mathfrak{m}/\mathfrak{m}^2)^* \cong H^1({\mathcal O})$ and that a tangent vector $b \in H^1({\mathcal O})$ gives, by composing with the linear form $l_b : \mathfrak{m}/\mathfrak{m}^2 \rightarrow \mathbb{C}$, a linear map $\delta_b : H^0(E_W) \rightarrow H^1(E_W)$. As in the line bundle case \cite{kempf1}, one proves
\begin{lem} For any nonzero $b \in H^1({\mathcal O}) = T_{{\mathcal O}} JC$, we have \begin{enumerate} \item The linear map $\delta_b: H^0(E_W) \rightarrow H^1(E_W)$ coincides with the cup-product $(\cup b)$ with the class $b$, and is {\em skew-symmetric} after identifying $H^1(E_W)$ with $H^0(E_W)^*$ (Serre duality). \item The coboundary map $\delta : H^0(E_W) \rightarrow H^1(E_W) \otimes \mathfrak{m}/\mathfrak{m}^2$ is described by a skew-symmetric $3 \times 3$ matrix $(x_{ij})$, with $x_{ij} \in H^1({\mathcal O})^*$. Moreover the linear form $x_{ij}$ coincides with the differential $(df_{ij})_0$ of $f_{ij}$ at the origin ${\mathcal O}$. \end{enumerate} \end{lem} The coboundary map $\delta$ induces a linear map $$ \Delta : H^1({\mathcal O}) \longrightarrow \Lambda^2 H^0(E_W)^*, \qquad
b \longmapsto \delta_b,$$ which coincides with the dual of the multiplication map of global sections of $E_W$. Moreover $$\mathrm{ker} \: \Delta = W^\perp = \{ x_{12} = x_{13} = x_{23} = 0 \}. $$ Using a flat structure \cite{kempf2} we can write the power series expansion of the regular functions $f_{ij}$ around ${\mathcal O}$ $$ f_{ij} = x_{ij} + q_{ij} + \cdots, $$ where $x_{ij}$ and $q_{ij}$ are linear and quadratic polynomials respectively. We easily calculate the expansion of $f$: by skew-symmetry its cubic term is zero, and its quartic term equals $$ F_W = q_{11} x^2_{23} + q_{22} x^2_{13} + q_{33} x^2_{12} + x_{12}x_{23} (q_{13} + q_{31}) - x_{13} x_{23} (q_{12} + q_{21}) - x_{12} x_{13} (q_{23} + q_{32}). $$ We straightforwardly deduce from this equation the following properties of $F_W$. \begin{prop} \label{singver} \begin{enumerate} \item The quartic $F_W$ is singular along the vertex $\mathbb{P} W^\perp$. \item For any $x \in W^\perp$, the cubic polar $P_x(F_W)$ is singular along the vertex $\mathbb{P} W^\perp$. \end{enumerate} \end{prop}
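\noindent For the reader's convenience, we record how the quartic term is computed. Since every entry $f_{ij} = x_{ij} + q_{ij} + \cdots$ vanishes at ${\mathcal O}$, the only contributions of total degree $4$ to $\mathrm{det} \: (f_{ij})$ come from one quadratic and two linear entries, i.e., the quartic term equals $\sum_{i,j} q_{ij} \, C_{ij}$, where $C_{ij}$ denotes the $(i,j)$ cofactor of the skew-symmetric matrix of linear terms
$$ \left( \begin{array}{ccc} 0 & x_{12} & x_{13} \\ -x_{12} & 0 & x_{23} \\ -x_{13} & -x_{23} & 0 \end{array} \right), $$
whose cofactors are
$$ C_{11} = x_{23}^2, \quad C_{22} = x_{13}^2, \quad C_{33} = x_{12}^2, \quad C_{12} = C_{21} = -x_{13}x_{23}, \quad C_{13} = C_{31} = x_{12}x_{23}, \quad C_{23} = C_{32} = -x_{12}x_{13}. $$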
\subsection{Infinitesimal deformations of global sections of $E_W$}
We first recall some elementary facts on principal parts. Let $V$ be an arbitrary vector bundle over $C$ and let $\mathrm{Rat}(V)$ be the space of rational sections of $V$ and $p$ be a point of $C$. The space of principal parts of $V$ at $p$ is the quotient $$ \mathrm{Prin}_p(V) = \mathrm{Rat}(V)/\mathrm{Rat}_p(V), $$ where $\mathrm{Rat}_p(V)$ denotes the space of rational sections of $V$ which are regular at $p$. Since a rational section of $V$ has only finitely many poles, we have a natural mapping \begin{equation}\label{prinpart}
\mathrm{pp} : \mathrm{Rat}(V) \longrightarrow \mathrm{Prin}(V) := \bigoplus_{p \in C} \mathrm{Prin}_p(V), \qquad s \longmapsto \left( s \ \mathrm{mod} \ \mathrm{Rat}_p(V) \right)_{p \in C}. \end{equation} Exactly as in the line bundle case (\cite{kempf1} Lemma 3.3), one proves \begin{lem} There are isomorphisms $$ \mathrm{ker} \: \mathrm{pp} \cong H^0(C, V), \qquad \mathrm{coker} \: \mathrm{pp} \cong H^1(C, V). $$ \end{lem}
In the particular case $V = {\mathcal O}$, we see that a tangent vector $b \in H^1({\mathcal O}) = T_{{\mathcal O}} JC$ can be represented by a collection $\beta = \left( \beta_p \right)_{p \in I}$ of rational functions $\beta_p \in \mathrm{Rat}({\mathcal O})$, where $p$ varies over a finite set of points $I \subset C$. We then define $\mathrm{pp}(\beta) = \left( \omega_p \right)_{p \in I} \in \mathrm{Prin}({\mathcal O})$, where $\omega_p$ is the principal part of $\beta_p$ at $p$. We denote by $[\beta] = b$ its cohomology class in $H^1({\mathcal O})$. Note that we can define powers of $\beta$ by $\beta^k := \left( \beta_p^k \right)_{p \in I}$.
\noindent
For $i \geq 1$, let $D_i$ be the infinitesimal scheme $\mathrm{Spec}(A_i)$, where $A_i$ is the Artinian ring $\mathbb{C}[\epsilon]/\epsilon^{i+1}$. As explained in \cite{kempf2} section 2, a tangent vector $b \in H^1({\mathcal O})$ determines a morphism $$ \mathrm{exp}_{i,b} : D_i \longrightarrow JC,$$ with $\mathrm{exp}_{i,b}(x_0) = {\mathcal O}$, where $x_0$ is the closed point of $D_i$. Let $\mathbb{L}_{i+1}(b)$ denote the pull-back of the Poincar\'e sheaf $\mathcal{L}$ under the morphism $\mathrm{exp}_{i,b} \times id_C$. Note that we have the following exact sequences \begin{equation} \label{esext1} D_1 \times C : \qquad 0 \longrightarrow \epsilon {\mathcal O} \longrightarrow \mathbb{L}_2(b) \longrightarrow {\mathcal O} \longrightarrow 0, \end{equation} \begin{equation} \label{esext2} D_2 \times C : \qquad 0 \longrightarrow \epsilon^2 {\mathcal O} \longrightarrow \mathbb{L}_3(b) \longrightarrow \mathbb{L}_2(b) \longrightarrow 0. \end{equation}
The second arrows in each sequence correspond to the restriction to the subschemes $\{x_0 \} \times C \subset D_1 \times C$ and $D_1 \times C \subset D_2 \times C$ respectively. As above we choose a representative $\beta$ of $b$. Following \cite{kempf2} section 2, one shows that the space of global sections $H^0(C \times D_i, \mathbb{L}_{i+1}(b) \otimes E)$, with $E = E_W$ and $[W] \notin \mathcal{D}$, is isomorphic to the $A_i$-module \begin{equation} \label{glosecllbeta} V_i(\beta) = \{ f = f_0 + \cdots + f_i \epsilon^i \in \mathrm{Rat}(E) \otimes A_i \ \ \text{such that} \ f \mathrm{exp} (\epsilon \beta) \ \text{is regular} \ \forall p \in C \}. \end{equation} An element $f \in V_i(\beta)$ is called an $i$-th order deformation of the global section $f_0 \in H^0(E)$. In the case $i=2$, the condition $f \in V_i(\beta)$ is equivalent to the following three elements, \begin{equation} \label{expdef} f_0, \qquad f_1 + f_0 \beta, \qquad f_2 + f_1 \beta + f_0 \frac{\beta^2}{2}, \end{equation} being regular at all points $p \in C$ --- for $i=1$, we consider the first two elements. Alternatively this means that their classes in $\mathrm{Prin}(E)$ are zero. We note that, given two representatives $\beta = \left( \beta_p \right)_{p \in I}$ and $\beta' = \left( \beta'_p \right)_{p \in I'}$ with $[\beta] = [\beta']$, the two subspaces $V_i(\beta)$ and $V_i(\beta')$ of $\mathrm{Rat}(E) \otimes A_i$ are different and that any rational function $\varphi \in \mathrm{Rat}({\mathcal O})$ satisfying $\mathrm{pp} (\varphi) = \mathrm{pp} (\beta' - \beta)$ induces an isomorphism $V_i(\beta) \cong V_i(\beta')$.
\noindent
We consider a class $b \in H^1({\mathcal O}) \setminus W^\perp$ and a representative $\beta$ such that $[\beta] = b$. By taking cohomology of \eqref{esext1} tensored with $E$, we observe that a first order deformation of $f_0$, i.e., a global section $f = f_0 + f_1 \epsilon \in V_1(\beta) \cong H^0(C \times D_1, \mathbb{L}_2(b) \otimes E)$ always exists. Since $\mathrm{rk} \: (\cup b) = 2$, the global section $f_0$ is uniquely determined up to a scalar $$ f_0 \cdot \mathbb{C} = \mathrm{ker} \: \left( \cup b : H^0(E) \longrightarrow H^1(E) \right).$$ Moreover any two first order deformations of $f_0$ differ by an element in $\epsilon H^0(E)$.
\noindent
We now state a criterion for a tangent vector $b = [\beta]$ to lie on the quartic tangent cone $F_W$ in terms of a second order deformation of $f_0 \in H^0(E)$.
\begin{lem} \label{criterion} A cohomology class $b = [\beta] \in H^1({\mathcal O}) \setminus W^\perp$ is contained in the cone over the quartic $F_W$ if and only if there exists a global section $$f = f_0 + f_1 \epsilon + f_2 \epsilon^2 \in V_2(\beta) \cong H^0(C \times D_2 , \mathbb{L}_3(b) \otimes E).$$ \end{lem}
\begin{proof} The proof is similar to \cite{ks} Lemma 4. We work over the Artinian ring $A_4$, i.e., $\epsilon^5 = 0$. By Theorem \ref{gro} applied to the family $\mathbb{L}_5(b) \otimes E$ over $C \times D_4$, there exists an approximating homomorphism of $A_4$-modules \begin{equation} \label{approxhomo} A_4^{\oplus 3} \map{\varphi} A_4^{\oplus 3}, \end{equation}
such that $\mathrm{ker} \: \varphi_{|D_2} \cong H^0(C \times D_2, \mathbb{L}_3(b) \otimes E)$,
$\mathrm{coker} \: \varphi_{|D_2} \cong H^1(C \times D_2, \mathbb{L}_3(b) \otimes E)$, and $\varphi \otimes \mathbb{C}_0 = 0$. We denote by $\varphi_{|D_2}$ the homomorphism obtained from \eqref{approxhomo} by projecting to $A_2$. Note that, since $A_4$ is a quotient of the discrete valuation ring $\mathbb{C}[[\epsilon]]$, the matrix $\varphi$ is equivalent (Smith normal form) to a matrix $$ M:= \left( \begin{array}{ccc} \epsilon^u & 0 & 0 \\ 0 & \epsilon^v & 0 \\ 0 & 0 & \epsilon^w \end{array} \right). $$ Since $\varphi \otimes \mathbb{C}_0 = 0$, we have $u,v,w \geq 1$. Moreover we can order the exponents so that $1 \leq u \leq v \leq w$. It follows from the definition of $D(E_W)$ as a determinant divisor that the pull-back of $D(E_W)$ by $\mathrm{exp}_4: D_4 \longrightarrow JC$ is given by the equation (in $A_4$) $$ \mathrm{det} \: M = \epsilon^{u+v+w}.$$ We immediately see that $b \in F_W$ if and only if $u+v+w \geq 5$. Let us now restrict $\varphi$ to $D_1$, i.e., we project \eqref{approxhomo}
to $A_1$. Since we assume $b \notin W^\perp = \mathrm{ker} \: \Delta$, the restriction $\varphi_{|D_1}$ is nonzero and, by skew-symmetry, of rank $2$, i.e., $u=v=1$ and $w \geq 2$. Hence $b \in F_W$ if and only if $w \geq 3$.
On the other hand the $A_2$-module $\mathrm{ker} \: \varphi_{|D_2} \cong H^0(C \times D_2, \mathbb{L}_3(b) \otimes E)$ has length $2+\mathrm{min} \: (w,3)$. Let $\mu$ be the multiplication by $\epsilon^2$ on this $A_2$-module. Then by \eqref{glosecllbeta} the $A_2$-module $\mathrm{ker} \: \mu$ is isomorphic to the $A_1$-module $H^0(C \times D_1, \mathbb{L}_2(b) \otimes E)$, which is of length $4$, provided $b \notin W^\perp$. Hence we obtain that $w \geq 3$ if and only if there exists an $f \in H^0(C \times D_2, \mathbb{L}_3(b) \otimes E)$ such that $\mu(f) = \epsilon^2 f_0$. This proves the lemma.
\end{proof}
\section{Study of the quartic $F_W$}
In this section we prove geometric properties of the quartic $F_W$.
\subsection{Criteria for $b \in F_W$}
We now show that the criterion of Lemma \ref{criterion} simplifies to a criterion involving only a first order deformation $f = f_0 + f_1 \epsilon \in V_1(\beta)$ of $f_0$. As above we assume $b \notin W^\perp$.
First we observe that the rational differential form $f_1 \wedge f_0$ is independent of the choice of the representative $\beta$, i.e., $f_1 \wedge f_0$ only depends on the cohomology class $b =[\beta]$: suppose we take $\beta' = \left( \beta_p - \varphi \right)_{p \in I}$, where $\varphi \in \mathrm{Rat}({\mathcal O})$. Then $f_0$ and $f_1$ transform into $f'_0 = f_0$ and $f'_1 = f_1 + \varphi f_0$, from which it is clear that $f'_1 \wedge f'_0 = f_1 \wedge f_0$.
Secondly one easily sees that $f_0 = \pi(b)$ (section 2.1) and that, under the canonical identification $\Lambda^2 W^* = \Lambda^2 H^0(E) = W$, the $2$-plane $H^0(E) \wedge f_0$ coincides with the intersection $V_b := H_b \cap W$, where $H_b$ denotes the hyperplane determined by $b \in H^1({\mathcal O})$.
It follows from these two remarks that, given $b$ and $W$, the form $f_1 \wedge f_0$ is well-defined up to a regular differential form in $V_b \subset W$.
\begin{prop} \label{simpl} We have the following equivalence $$ b \in F_W \qquad \iff \qquad f_1 \wedge f_0 \in H_b.$$ \end{prop}
\begin{proof} Since $f_1 \wedge f_0$ does not depend on $\beta$, we may choose a $\beta$ with simple poles at the points $p \in I$. By Lemma \ref{criterion} and relation \eqref{expdef} we see that $b \in F_W$ if and only if the cohomology class $[f_1 \beta + f_0 \frac{\beta^2}{2}]$ is zero in $H^1(E) / \mathrm{im} \: (\cup b)$ --- we recall that $f_1$ is defined up to $H^0(E)$.
First we will prove that $[f_0 \frac{\beta^2}{2}] \in \mathrm{im} \:(\cup b)$. The commutativity of the upper right triangle of the diagram (see e.g. \cite{kempf1}) $$ \begin{array}{ccccccc}
& & & & H^0(E) & & \\
& & & & & & \\
& & & & \downarrow \cdot \frac{\beta^2}{2} & \searrow \ \cup [\frac{\beta^2}{2}] & \\
& & & & & & \\
H^0(E) & \longrightarrow & H^0(E(2I)) & \longrightarrow & E(2I)_{|2I} & \longrightarrow & H^1(E) \\
& & & & & & \\
& & \cap & & \cap & \nearrow & \\
& & & & & & \\
& & \mathrm{Rat}(E) & \map{\mathrm{pp}} & \mathrm{Prin}(E) & & \end{array} $$ implies that $[f_0 \frac{\beta^2}{2}] = f_0 \cup [\frac{\beta^2}{2}]$. Moreover the skew-symmetric cup-product map $\cup b$ $$\cup b = \wedge \overline{b}: \ H^0(E) = W^* \longrightarrow H^1(E) = W = \Lambda^2 W^*$$ identifies with the exterior product $\wedge \overline{b}$, where $\overline{b}= \pi(b) \in W^*$. It is clear that $\mathrm{im} \: (\cup b) = \mathrm{im} \: (\wedge \overline{b}) = \mathrm{ker} \: (\wedge \overline{b})$, where $\wedge \overline{b}$ also denotes the linear form \begin{equation} \label{wedgef0} \wedge \overline{b}: \ \Lambda^2 W^* \longrightarrow \Lambda^3 W^* \cong \mathbb{C}. \end{equation} As already observed, we have $f_0 = \overline{b}$. Denoting by $c \in W^*$ the class $\pi([\frac{\beta^2}{2}])$, we see that the relation $(f_0 \wedge c) \wedge \overline{b} = \overline{b} \wedge c \wedge \overline{b} = 0$ implies that $f_0 \cup [\frac{\beta^2}{2}] \in \mathrm{ker} \: (\wedge \overline{b}) = \mathrm{im} \: (\cup b)$.
Therefore the previous condition simplifies to $[f_1 \beta] \in \mathrm{im} \:(\cup b)$. We next observe that the linear form $\wedge \overline{b}$ on $H^1(E)$ \eqref{wedgef0} identifies with the exterior product map $$ H^1(E) \map{\wedge f_0} H^1(\omega) \cong \mathbb{C}.$$ Since we have a commutative diagram $$ \begin{array}{rccccc} f_1 \in & H^0(E(I)) & \map{\cdot \beta} & \mathrm{Prin}(E) & \longrightarrow & H^1(E) \\
& & & & & \\
& & & \downarrow \ \wedge f_0 & & \downarrow
\ \wedge f_0 \\
& & & & & \\ f_1 \wedge f_0 \in & H^0(\omega) & \map{\cdot \beta} & \mathrm{Prin}(\omega) & \longrightarrow & H^1(\omega), \end{array} $$ and since $f_1 \wedge f_0 \in H^0(\omega) \subset \mathrm{Rat}(\omega)$, we easily see that the condition $[f_1 \beta] \in \mathrm{im} \:(\cup b)$ is equivalent to $f_1 \wedge f_0 \in H_b = \mathrm{ker} \: \left( \cup b : H^0(\omega) \longrightarrow H^1(\omega) \right).$
\end{proof}
In the following proposition we give more details on the element $f_1 \wedge f_0 \in H^0(\omega)$. We additionally assume that $\pi(b) \notin \Gamma$, which implies that the global section $f_0 \in H^0(E)$ does not vanish at any point and hence determines an exact sequence \begin{equation} \label{extes}
0 \longrightarrow {\mathcal O} \map{f_0} E \map{\wedge f_0} \omega \longrightarrow 0. \end{equation} The coboundary map of the associated long exact sequence \begin{equation} \label{extcl}
\cdots \longrightarrow H^0(\omega) \map{\cup e} H^1({\mathcal O}) \longrightarrow \cdots \end{equation} is symmetric and coincides (e.g. \cite{kempf1} Corollary 6.8)
with cup-product $\cup e$ with the extension class $e \in \mathbb{P} H^1(\omega^{-1}) = |\omega^2|^*$. Moreover $\cup e$ is the image of $e$ under the dual of the multiplication map \begin{equation} \label{mapext}
H^1(\omega^{-1}) = H^0(\omega^2)^* \hookrightarrow \mathrm{Sym}^2 H^0(\omega)^*, \qquad e \longmapsto \cup e. \end{equation} We note that $\mathrm{corank}(\cup e) = 2$ and that $\mathrm{ker} \: (\cup e) = V_b$. Hence $(f_1 \wedge f_0) \cup e$ is well-defined.
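\noindent The equality $\mathrm{corank} \: (\cup e) = 2$ can be seen directly from the long exact cohomology sequence of \eqref{extes}: by exactness, $\mathrm{ker} \: (\cup e)$ is the image of the map $H^0(E_W) \map{\wedge f_0} H^0(\omega)$, whose kernel is the line $f_0 \cdot \mathbb{C}$, so that
$$ \mathrm{dim} \: \mathrm{ker} \: (\cup e) = h^0(E_W) - 1 = 3 - 1 = 2, $$
and this image is precisely $H^0(E_W) \wedge f_0 = V_b$; since $\cup e$ is symmetric, its corank equals the dimension of its kernel.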
\begin{prop} \label{quad} If $\pi(b) \notin \Gamma$, then $f_1 \wedge f_0 \notin \mathrm{ker} \: (\cup e)$ and we have (up to a nonzero scalar) $$(f_1 \wedge f_0) \cup e = b \in H^1({\mathcal O}).$$ \end{prop}
\begin{proof} We keep the notation of the previous proof. The condition $f_1 \wedge f_0 \in \mathrm{ker} \: (\cup e) = V_b$ would imply that $f_1$ is a regular section and, by \eqref{expdef}, that $f_0$ vanishes at the support of $b$, i.e., $\pi(b) \in \Gamma$, contradicting our assumption; this proves the first assertion. As for the equality of the proposition, we introduce the rank-$2$ vector bundle $\hat{E}$ which is obtained from $E$ by (positive) elementary transformations at the points $p \in I$ and with respect to the line in $E_p$ spanned by the nonzero vector $f_0(p)$. Then we have $E \subset \hat{E} \subset E(I)$ and $\hat{E}$ fits into the exact sequence $$ 0 \longrightarrow E \longrightarrow \hat{E} \longrightarrow {\mathcal O}_I \longrightarrow 0. $$ Moreover $f_1 \in H^0(\hat{E})$, which follows from condition \eqref{expdef}. We also have the following exact sequences $$ \begin{array}{ccccccccccc}
0 & \longrightarrow & {\mathcal O}(I) & \longrightarrow & \hat{E} & \map{\wedge f_0} & \omega & \longrightarrow & 0 & \qquad & (\hat{e}) \\
& & & & & & & & & & \\
& & \cup & & \cup & & \Vert & & & & \\
& & & & & & & & & & \\
0 & \longrightarrow & {\mathcal O} & \map{f_0} & E & \map{\wedge f_0} & \omega & \longrightarrow & 0 & \qquad & (e), \end{array} $$ and the extension class $\hat{e} \in H^1(\omega^{-1}(I))$ is obtained from $e$ by the canonical projection $H^1(\omega^{-1}) \rightarrow H^1(\omega^{-1}(I))$. Taking the associated long exact sequences, we obtain $$ \begin{array}{cccccc} f_1 \in & H^0(\hat{E}) & \map{\wedge f_0} & H^0(\omega) & \map{\cup \hat{e}} & H^1({\mathcal O}(I)) \\
& & & & & \\
& \cup & & \Vert & & \uparrow \ \pi_I \\
& & & & & \\
& H^0(E) & \map{\wedge f_0} & H^0(\omega) & \map{\cup e} & H^1({\mathcal O}), \end{array} $$ where the two squares commute. This means that $$ \pi_I \left( (f_1 \wedge f_0) \cup e \right) = (f_1 \wedge f_0) \cup \hat{e} = 0. $$
Since $f_1 \wedge f_0$ does not depend on $\beta$ (nor on $I$), the latter relation holds for any $I$ with $I = \mathrm{supp} \: \beta$. Hence, denoting by $\langle I \rangle$ the linear span in $|\omega|^*$ of the support $I$ of $\beta$, we obtain $$ (f_1 \wedge f_0) \cup e \in \bigcap_{I = \mathrm{supp} \: \beta} \mathrm{ker} \: \ \pi_I = \bigcap_{b \in \langle I \rangle} \langle I \rangle = b.$$ \end{proof}
\subsection{Geometric properties of $F_W$}
\begin{prop} \label{geomprop} For any $[W] \notin \mathcal{D}$ we have the following \begin{enumerate} \item The quartic $F_W$ contains the canonical curve $C$, i.e.,
$F_W \in |I(4)|$.
\item The quartic $F_W$ contains the secant line $\overline{pq}$, with $p \not= q$, if and only if $\overline{pq} \cap \mathbb{P} W^\perp \not= \emptyset$ or $\mathrm{dim} \: W \cap H^0(\omega(-2p-2q)) >0$.
\item Let $\Sigma$ be the set of points $p$ at which the tangent line $\mathbb{T}_p(C)$ intersects the vertex $\mathbb{P} W^\perp$. Then $\Sigma$ is empty for general $[W]$ and finite for any $[W]$. Moreover any point $p \in C \setminus \Sigma$ is smooth on $F_W$ and the embedded tangent space $\mathbb{T}_p(F_W)$ is the linear span of $\mathbb{T}_p(C)$ and $\mathbb{P} W^\perp$. \end{enumerate} \end{prop}
\begin{proof} All statements are easily deduced from Proposition \ref{simpl}. Given a point $p \in C$ we denote by $\mathfrak{p}_p \in \mathrm{Prin}_p({\mathcal O})$ the principal part supported at $p$ of a rational function with a simple pole at $p$. Then the class
$[\mathfrak{p}_p] \in H^1({\mathcal O})$ is proportional to $i_\omega(p) \in |\omega|^* = \mathbb{P} H^1({\mathcal O})$ and the section $f_0$ vanishes at $p$. Hence $f_0 \mathfrak{p}_p \in \mathrm{Prin}(E)$ is everywhere regular and we may choose $f_1 =0$. This proves part 1. See also \cite{pp}.
As for part 2, we introduce $\beta_{\lambda,\mu} = \lambda \mathfrak{p}_p + \mu \mathfrak{p}_q \in \mathrm{Prin}({\mathcal O})$ for $\lambda, \mu \in \mathbb{C}$ and denote by $s_p$ and $s_q$ the global sections $\pi([\mathfrak{p}_p])$ and $\pi([\mathfrak{p}_q])$, which vanish at $p$ and $q$ respectively. Then one checks that $f_0 = \lambda s_p + \mu s_q \in \mathrm{ker} \: (\cup [\beta_{\lambda,\mu}])$ and $\mathrm{pp}(f_1) = \lambda \mu (s_q \mathfrak{p}_p + s_p \mathfrak{p}_q) \in \mathrm{Prin}(E)$. With this notation the condition of Proposition \ref{simpl} transforms into \begin{equation} \label{resfw} 0 = l_{\lambda, \mu}(f_0 \wedge f_1) = \lambda \mu (\lambda^2 \gamma_p + \mu^2 \gamma_q), \end{equation} where $l_{\lambda, \mu}$ is the linear form defined by $[\beta_{\lambda,\mu}] \in H^1({\mathcal O})$. The scalars $\gamma_p$ and $\gamma_q$ are the values of the section $s_p \wedge s_q \in W \cap H^0(\omega(-p-q))$ at $p$ and $q$ respectively. We now conclude noting that $s_p \wedge s_q = 0$ if and only if $\overline{pq} \cap \mathbb{P} W^\perp \not= \emptyset$.
As for part 3, we first observe that the assumption $\Sigma = C$ implies that the restriction $\pi_{|C} : C \rightarrow \mathbb{P} W^*$ contracts $C$ to a point, which is impossible. Next we consider the tangent vector $t_q$ at $p$ given by the direction $q$. By putting $\lambda = 1$ and $\mu = \epsilon$, with $\epsilon^2 = 0$, into equation \eqref{resfw} we obtain that $t_q \in \mathbb{T}_p(F_W)$ if and only if $\epsilon \gamma_p = 0$, i.e., $\pi(q) \in \mathbb{T}_{\pi(p)}(\Gamma)$. Hence $\mathbb{T}_p(F_W) = \pi^{-1}(\mathbb{T}_{\pi(p)}(\Gamma))$, which proves part 3.
\end{proof}
\subsection{The cubic polar $P_x(F_W)$}
Firstly we deduce from Propositions \ref{simpl} and \ref{quad} a criterion for $b \in P_x(F_W)$, with $x \in W^\perp$. Let $H_x$ be the hyperplane determined by $x \in H^1({\mathcal O})$. As above we assume $b \notin W^\perp$ and $\pi(b) \notin \Gamma$, i.e., the pencil $V = V_b$ is base-point-free.
\begin{prop} \label{cricub} We have the following equivalence $$ b \in P_x(F_W) \qquad \iff \qquad f_1 \wedge f_0 \in H_x.$$ \end{prop}
\begin{proof} We recall from section 4.1 that $\cup e$ induces a symmetric isomorphism $\cup e : (V^\perp)^* \map{\sim} V^\perp$ and we denote by $Q^* \subset \mathbb{P} (V^\perp)^*$ and $Q \subset \mathbb{P} V^\perp$ the two associated smooth quadrics. Note that $Q$ and $Q^*$ are dual to each other. Combining Propositions \ref{simpl}, \ref{quad} and \ref{singver} (1) we see that the restriction of the quartic
$F_W$ to the linear subspace $\mathbb{P} V^\perp \subset |\omega|^*$ splits into a sum of divisors
$$ \left(F_W \right)_{|\mathbb{P} V^\perp} = 2 \mathbb{P} W^\perp + Q.$$ We also observe that $Q$ only depends on $V$ (and on $W$) and not on $b$. Taking the polar with respect to $x \in W^\perp$, we obtain
$$ \left(P_x(F_W) \right)_{|\mathbb{P} V^\perp} = 2 \mathbb{P} W^\perp + P_x(Q).$$ Finally we see that the condition $b \in P_x(Q)$ is equivalent to $f_0 \wedge f_1 = (\cup e)^{-1} (b) \in H_x$. \end{proof}
We easily deduce from this criterion some properties of $P_x(F_W)$.
\begin{prop} \label{cubcc}
The cubic $P_x(F_W)$ contains the canonical curve $C$, i.e., $P_x(F_W) \in |I(3)|$. \end{prop}
\begin{proof} We first observe that the two closed conditions of Proposition \ref{cricub} are equivalent outside $\pi^{-1}(\Gamma)$. Hence they coincide as well on $\pi^{-1}(\Gamma)$ and we can drop the assumption $\pi(b) \in \hspace{-4mm}/\ \Gamma$. Now, as in the proof of Proposition \ref{geomprop}(1), we may choose $f_1 =0$. \end{proof}
\begin{prop} We have the following properties $$ \bigcap_{x \in W^\perp} P_x(F_W) = S_W \cup \mathbb{P} W^\perp \cup \bigcup_{n \geq 2} \Lambda_n,$$ $$ F_W \cap S_W = C \cup \Lambda_1, \qquad \text{and} \qquad \bigcup_{n \geq 0} \Lambda_n \subset F_W,$$ where $S_W$ is an irreducible surface. For $n \geq 0$, we denote by $\Lambda_n$ the union of $(n+1)$-secant $\mathbb{P}^n$'s to the canonical curve $C$, which intersect the vertex $\mathbb{P} W^\perp$ along a $\mathbb{P}^{n-1}$. If $W$ is general, then $\Lambda_n = \emptyset$ for $n \geq 2$ and $\Lambda_1$ is the union of $2(g-1)(g-3)$ secant lines. \end{prop}
\begin{proof} We consider $b$ in the intersection of all $P_x(F_W)$ and we first suppose that $\pi(b) \notin \Gamma$. Then by Propositions \ref{simpl} and \ref{cricub} we have $$ f_0 \wedge f_1 \in \bigcap_{x \in W^\perp} H_x = W.$$ Hence we obtain that $\mathbb{P} V^\perp \cap \bigcap_{x \in W^\perp} P_x(F_W)$ is reduced to the point $(\cup e)(W) \in \mathbb{P} V^\perp$. On the other hand a standard computation shows that $S_W$ is the image of $\mathbb{P}^2$ under the linear system of the adjoint curves of $\Gamma$. Hence $S_W$ is irreducible.
If $\pi(b) \in \Gamma$, we denote by $p_1,\ldots,p_{n+1} \in C$ the points such that $\pi(p_i) = \pi(b)$. Then $f_0$ vanishes at $p_1,\ldots,p_{n+1}$. Since $f_1 \wedge f_0$ does not depend on the support of $b$, we can choose $\mathrm{supp} \: b$ such that $p_i \notin \mathrm{supp} \: b$. Then $f_1$ is regular at $p_i$ and we deduce that $f_1 \wedge f_0 \in H^0(\omega(-\sum p_i)) \cap W = V_b.$ Now any rational $f_1$ satisfying $f_1 \wedge f_0 \in V_b = \mathrm{im} \: (\wedge f_0)$ is regular everywhere, which can only happen when $f_0$ vanishes at the support of $b$. By uniqueness we have $\mathrm{supp} \: b \subset \{ p_1,\ldots,p_{n+1} \}$ and $b \in \Lambda_n$. Note that $\Lambda_0 = C$. This proves the first equality.
If $b \in F_W \cap S_W$, we have $f_1 \wedge f_0 \in W \cap H_b = V_b$ and we conclude as above. Note that $\Lambda_1$ is contained in $S_W$ and is mapped by $\pi$ to the set of ordinary double points of $\Gamma$. \end{proof}
For any $[W] \in \mathrm{Gr}(3,H^0(\omega)) \setminus \mathcal{D}$ we introduce the subspace of $I(3)$
$$ L_W = \{ R \in I(3) \ | \ R \ \ \text{is singular along the vertex} \ \ \mathbb{P} W^\perp \}. $$ Then Propositions \ref{cubcc} and \ref{singver}(2) imply that $P_x(F_W) \in L_W$. More precisely, we have
\begin{prop} The restriction of the polar map of the quartic $F_W$ to its vertex $\mathbb{P} W^\perp$ $$ \mathbf{P}: \ \ W^\perp \longrightarrow L_W, \qquad x \longmapsto P_x(F_W),$$ is an isomorphism. \end{prop}
\begin{proof} First we show that $\mathrm{dim} \: L_W = g-3$. We choose a complementary subspace $A$ to $W^\perp$, i.e., $H^0(\omega)^* = W^\perp \oplus A$, and a set of coordinates $x_1,\ldots,x_{g-3}$ on $W^\perp$ and $a_1,a_2,a_3$ on $A$. This enables us to expand a cubic $F \in S^3 H^0(\omega)$ $$ F=F_3(x) + F_2(x)G_1(a) + F_1(x)G_2(a) + G_3(a), \qquad F_i \in \mathbb{C}[x_1,\ldots,x_{g-3}], \ G_i \in \mathbb{C}[a_1,a_2,a_3],$$ with $\mathrm{deg} \: F_i = \mathrm{deg} \: G_i = i$. Let $\mathcal{S}_A$ denote the subspace of cubics singular along $\mathbb{P} A$, i.e., $G_2 = G_3 = 0$. We consider the linear map $$ \alpha : I(3) \longrightarrow \mathcal{S}_A, \qquad F \longmapsto F_3(x) + F_2(x) G_1(a).$$ Since by Lemma \ref{quaw} any monomial $x_ix_j \in H^0(\mathbb{P} W^\perp, {\mathcal O}(2))$ lifts to a quadric $Q_{ij} \in I(2)$, we observe that the monomials $x_ix_jx_k$ and $x_ix_ja_l$, which generate $\mathcal{S}_A$, also lift e.g. to $Q_{ij}x_k$ and $Q_{ij}a_l$ in $I(3)$. Hence $\alpha$ is surjective and $\mathrm{dim} \: L_W = \mathrm{dim} \: \mathrm{ker} \: \alpha$ is easily calculated. One also checks that this computation does not depend on $A$.
In order to conclude, it will be enough to show that $\mathbf{P}$ is injective. Suppose that the contrary holds, i.e., there exists a point $x \in W^\perp$ with $P_x(F_W) = 0$. Given any base-point-free pencil $V\subset W$ and any $b \in V^\perp$, we obtain by Proposition \ref{cricub} that $f_0 \wedge f_1 \in H_x$. Since $\cup e: (V^\perp)^* \map{\sim} V^\perp$ is an isomorphism, we see that for
$b \notin (\cup e)^{-1}(H_x)$ the element $f_0 \wedge f_1$ must be zero. This implies that $b \in \Lambda := \bigcup_{n \geq 0} \Lambda_n$ and, since $b$ varies in an open subset of $|\omega|^*$, we obtain $\Lambda = |\omega|^*$, a contradiction.
\end{proof}
\subsection{The quadric bundle associated to $F_W$}
Let $\tilde{\mathbb{P}}^{g-1}_W \rightarrow |\omega|^*$ denote the blowing-up of $|\omega|^*$ along the vertex $\mathbb{P} W^\perp \subset |\omega|^*$. The rational projection $\pi : |\omega|^* \dashrightarrow \mathbb{P}^2 = \mathbb{P} W^*$ resolves into a morphism $\tilde{\pi} : \tilde{\mathbb{P}}^{g-1}_W \rightarrow \mathbb{P}^2$. Since $F_W$ is singular along $\mathbb{P} W^\perp$ (Proposition \ref{singver} (2)), the proper transform $\tilde{F}_W \subset \tilde{\mathbb{P}}^{g-1}_W$ admits a structure of a quadric bundle $\tilde{\pi} : \tilde{F}_W \rightarrow \mathbb{P}^2$.
The contents of Propositions \ref{geomprop} and \ref{cubcc} can be reformulated in a more geometrical way.
\begin{thm} \label{mainthm} For any $[W] \in \mathrm{Gr}(3,H^0(\omega)) \setminus \mathcal{D}$, the quadric bundle $\tilde{\pi}: \tilde{F}_W \rightarrow \mathbb{P}^2$ has the following properties \begin{enumerate} \item Its {\em Hessian} curve is $\Gamma \subset \mathbb{P}^2$.
\item Its {\em Steinerian} curve is the (proper transform of the) canonical curve $C \subset
|\omega|^*$.
\item The rational {\em Steinerian map} $\mathrm{St} : \Gamma \dashrightarrow C$, which associates to a singular quadric its singular point, coincides with the adjoint map $\mathrm{ad}$ of the plane curve $\Gamma$. Moreover the closure of the image $\mathrm{ad}(\mathbb{P}^2)$ equals $S_W$. \end{enumerate} \end{thm}
\begin{rem} We note that Theorem \ref{mainthm} is analogous to the main result of \cite{ks} (replace $\mathbb{P}^2$ with $\mathbb{P}^1 \times \mathbb{P}^1$). In spite of this striking similarity and the relation between the two parameter spaces $\mathrm{Sing} \: \Theta$ and $\mathrm{Gr}(3,H^0(\omega))$ (see \cite{pp}), we were unable to find a common frame for both constructions. \end{rem}
\section{The cubic hypersurface $\Psi_V \subset \mathbb{P}^{g-3}$ associated to a base-point-free pencil $\mathbb{P} V \subset |\omega|$}
In this section we show that the symmetric cup-product maps $\cup e \in \mathrm{Sym}^2 H^0(\omega)^*$ (see \eqref{extcl}) arise as polar quadrics of a cubic hypersurface $\Psi_V$, which will be used in the proof of Theorem \ref{mainthm2}.
Let $V$ denote a base-point-free pencil of $H^0(\omega)$. We consider the exact sequence given by evaluation of sections of $V$ \begin{equation} \label{esv} 0 \longrightarrow \omega^{-1} \longrightarrow {\mathcal O}_C \otimes V \map{ev} \omega \longrightarrow 0. \end{equation} Its extension class $v \in \mathrm{Ext}^1(\omega,\omega^{-1}) \cong H^1(\omega^{-2}) \cong H^0(\omega^3)^*$ corresponds to the hyperplane in $H^0(\omega^3)$, which is the image of the multiplication map \begin{equation} \label{hyp} \mathrm{im} \: \left(V \otimes H^0(\omega^2) \longrightarrow H^0(\omega^3) \right). \end{equation} We consider the cubic form $\Psi_V$ defined by $$ \Psi_V : \mathrm{Sym}^3 H^0(\omega) \map{\mu} H^0(\omega^3) \map{\bar{v}} \mathbb{C},$$ where $\mu$ is the multiplication map and $\bar{v}$ the linear form defined by the extension class $v$. It follows from the description \eqref{hyp} that $\Psi_V$ factorizes through the quotient $$ \Psi_V : \mathrm{Sym}^3 \mathcal{V} \longrightarrow \mathbb{C},$$ where $\mathcal{V} := H^0(\omega)/V$. We also denote by $\Psi_V \subset \mathbb{P} \mathcal{V}$ its associated cubic hypersurface.
A $3$-plane $W \supset V$ determines a nonzero vector $w$ in the quotient $\mathcal{V} = H^0(\omega)/V$ and a general $w$ determines an extension \eqref{extes} --- recall that $W^* \cong H^0(E)$. Hence we obtain an injective linear map $\mathcal{V} \hookrightarrow H^1(\omega^{-1}), w \mapsto e$, which we compose with \eqref{mapext} $$\Phi: \mathcal{V} \hookrightarrow H^1(\omega^{-1}) = H^0(\omega^2)^* \hookrightarrow \mathrm{Sym}^2 H^0(\omega)^*, \qquad w \mapsto e \mapsto \cup e.$$ Since $V \subset \mathrm{ker} \: (\cup e)$, we note that $\mathrm{im} \: \Phi \subset \mathrm{Sym}^2 \mathcal{V}^*$.
We now can state the main result of this section.
\begin{prop} The linear map $\Phi: \mathcal{V} \rightarrow \mathrm{Sym}^2 \mathcal{V}^*$ coincides with the polar map of the cubic form $\Psi_V$, i.e., $$ \forall w \in \mathcal{V}, \qquad \Phi(w) = P_w(\Psi_V).$$ \end{prop}
\begin{proof} This is straightforwardly read from the diagram obtained by relating the exact sequences \eqref{esv} and \eqref{esw} via the inclusion $V \subset W$. We leave the details to the reader. \end{proof}
We also observe that, by definition of the Hessian hypersurface (see e.g. \cite{dk} section 3), we have an equality among degree $g-2$ hypersurfaces of $\mathbb{P} \mathcal{V} = \mathbb{P}^{g-3}$ \begin{equation} \label{hesspsi}
\mathrm{Hess}(\Psi_V) = \mathcal{D} \cap \mathbb{P} \mathcal{V}, \end{equation} where we use the inclusion $\mathbb{P} \mathcal{V} \subset \mathrm{Gr}(3,H^0(\omega))$.
\begin{rem} We recall (see \cite{dk} (5.2.1)) that the Hessian and Steinerian of a cubic hypersurface coincide and that the Steinerian map is a rational involution $i$. In the case of the cubic $\Psi_V$, the involution $$ i: \mathrm{Hess}(\Psi_V) \dashrightarrow \mathrm{Hess}(\Psi_V)$$ corresponds to the involution of \cite{bv} Propositions 1.18 and 1.19, i.e., $\forall w \in \mathcal{D} \cap \mathbb{P} \mathcal{V}$, the bundles $E_w$ and $E_{i(w)}$ are related by the exact sequence $$ 0 \longrightarrow E_{i(w)}^* \longrightarrow {\mathcal O}_C \otimes H^0(E_w) \map{ev} E_w \longrightarrow 0.$$ Since we will not use that result, we leave its proof to the reader. \end{rem}
\section{Base loci of $|F_3|$ and $|F_4|$}
Let us denote by $|F_3| \subset |I(3)|$ and $|F_4| \subset |I(4)|$ the linear subsystems spanned by the image of the rational maps $\mathbf{F}_3$ and $\mathbf{F}_4$ respectively. Then we have the following
\begin{thm} \label{mainthm2}
The base loci of $|F_3|$ and $|F_4|$ coincide with the canonical curve
$C \subset |\omega|^*$. \end{thm}
\begin{proof}
Let $b \in \mathrm{Bs} |F_3|$ and let us suppose that $b \notin C$. We consider a base-point-free pencil $V \subset H_b$. With the notation of section 5, we introduce the rational map $$ r_b : \mathbb{P} \mathcal{V} \dashrightarrow \mathbb{P} \mathcal{V}, \qquad w \mapsto r_b(w) = w', \qquad \text{with} \ \tilde{\Psi}_V (w,w', \cdot) = b,$$
where $\tilde{\Psi}_V$ is the symmetric trilinear form of $\Psi_V$. We note (Proposition \ref{quad}) that, for $w \notin \mathbb{P}(H_b/V)$, the element $r_b(w)$ is collinear with the nonzero element $f_0 \wedge f_1 \ \mathrm{mod} \ V$ and that $r_b$ is defined away from the hypersurface $\mathrm{Hess}(\Psi_V)$, which we assume to be nonzero. Since $b \in \mathrm{Bs}|F_3|$ we obtain by Proposition \ref{cricub} that $$r_b(w) = \left( \bigcap_{x \in W^{\perp}} H_x \right) \ \text{mod} \ V = W \ \text{mod} \ V = w.$$ Hence $r_b$ is the identity map (away from $\mathrm{Hess}(\Psi_V)$). This implies that $\tilde{\Psi}_V(w,w,\cdot) = b$ for any $w \in \mathbb{P} \mathcal{V}$, hence $\Psi_V = x_0^3$, where $x_0$ is the equation of the hyperplane $\mathbb{P}(H_b/V) \subset \mathbb{P} \mathcal{V}$. This in turn implies that $\mathrm{Hess}(\Psi_V) = 0$, i.e., $\mathbb{P} \mathcal{V} \subset \mathcal{D}$. Since for a general $[W] \in \mathrm{Gr}(3,H^0(\omega))$ the pencil $V = W \cap H_b$ is base-point-free, we obtain that a general $[W]$ lies on the divisor $\mathcal{D}$, which is a contradiction.
As for $|F_4|$, we recall that the fact $\mathrm{Bs} |F_4| = C$ follows from \cite{welt}. Alternatively, it can also be deduced by noticing
(see Proposition \ref{f4sq}) that $\mathrm{Bs} |F_4| \subset \mathrm{Bs} |I(2)|$. Hence, if $C$ is neither trigonal nor a plane quintic, we are done. In the other cases, the result can be deduced from Proposition \ref{geomprop} --- we leave the details to the reader. \end{proof}
\section{Open questions}
\subsection{Dimensions}
The projective dimensions of the linear systems $|F_3|$ and $|F_4|$ are not known for general $g$. The known values of $\mathrm{dim} \: |F_4|$ for a general curve $C$ are given as follows (see \cite{pp}). $$
\begin{array}{|c|c|c|c|c|} \hline
g & 4 & 5 & 6 & 7 \\ \hline
\mathrm{dim} \: |F_4| & 4 & 15 & 40 & 88 \\ \hline \end{array} $$
The examples of \cite{pp} section 6 show that $\mathrm{dim} \: |F_4|$ depends on the gonality of $C$. Moreover it can be shown that
$|F_4| \not= |I(4)|$.
\subsection{Prym-canonical spaces and symplectic bundles}
The construction of the quartic hypersurfaces $F_W$ admits various analogues and generalizations, which we briefly outline.
{\bf{(1)}} Let $P_\alpha := \mathrm{Prym}(C_\alpha/C)$ denote the Prym variety of the \'etale double cover $C_\alpha \rightarrow C$ associated to the nonzero $2$-torsion point $\alpha \in JC$. Given a general $3$-plane $Z \subset H^0(C, \omega \alpha)$, we associate the rank-$2$ vector bundle $E_Z$ defined by $$ 0 \longrightarrow E_Z^* \longrightarrow {\mathcal O}_C \otimes Z \map{ev} \omega \alpha \longrightarrow 0.$$
By \cite{ip} Proposition 4.1 we can associate to $E_Z$ the divisor $\Delta(E_Z) \in |2\Xi|$, where $\Xi$ is a symmetric principal polarization on $P_\alpha$. Its projectivized tangent cone at the origin $0 \in P_\alpha$ is a quartic hypersurface
$F_Z$ in the Prym-canonical space $\mathbb{P} T_0P_\alpha \cong |\omega
\alpha|^*$. Kempf's obstruction theory equally applies to the quartics $F_Z$. We note that $F_Z$ contains the Prym-canonical curve $i_{\omega \alpha}(C) \subset |\omega \alpha|^*$.
{\bf{(2)}} Let $W$ be a vector space of dimension $2n+1$, for $n \geq 1$. We consider a {\em general} linear map $$ \Phi : \Lambda^2 W^* \longrightarrow H^0(C,\omega).$$ By taking the $n$-th symmetric power $\mathrm{Sym}^n \Phi$ and using the canonical maps $\mathrm{Sym}^n (\Lambda^2 W^*) \rightarrow \Lambda^{2n} W^* \cong W$ and $\mathrm{Sym}^n H^0(\omega) \rightarrow H^0(\omega^{\otimes n})$, we obtain a linear map $$ \alpha: W \longrightarrow H^0(\omega^{\otimes n}),$$ which we assume to be injective. We then define the rank $2n$ vector bundle $E_\Phi$ by $$ 0 \longrightarrow E^*_\Phi \longrightarrow {\mathcal O}_C \otimes W \map{ev} \omega^{\otimes n} \longrightarrow 0.$$ The bundle $E_\Phi$ carries an $\omega$-valued symplectic form and the projectivized tangent cone at ${\mathcal O} \in JC$
to the divisor $D(E_\Phi)$ is a hypersurface $F_\Phi$ in $|\omega|^*$
of degree $2n+2$. Moreover $F_\Phi \in |I(2n+2)|$.
\flushleft{Christian Pauly \\ Laboratoire J.-A. Dieudonn\'e \\ Universit\'e de Nice-Sophia-Antipolis \\ Parc Valrose \\ 06108 Nice Cedex 2 \\ France \\ e-mail: pauly@math.unice.fr}
\end{document}
\begin{document}
\title{Path Query Data Structures in Practice}
\begin{abstract}
We perform experimental studies on data structures that answer
path median, path counting, and path reporting queries
in weighted trees.
These query problems generalize the well-known
range median query problem in arrays, as well as
the $2d$ orthogonal range counting and
reporting problems in planar point sets, to tree structured data.
We propose practical realizations of the latest theoretical results
on path queries. Our data structures, which use tree extraction,
heavy-path decomposition and wavelet trees,
are implemented in both succinct and pointer-based form.
Our succinct data structures
are further specialized to be plain or entropy-compressed. Through experiments on large datasets,
we show that succinct data structures for path queries
may present a viable alternative to standard pointer-based realizations,
in practical scenarios. Compared
to na{\"i}ve approaches that compute the answer by explicit traversal of the query path,
our succinct data structures are several times faster
in path median queries and perform comparably in path counting and path reporting queries,
while being several times more space-efficient.
Plain pointer-based realizations of our data structures,
requiring a few times more space than the na{\"i}ve ones,
yield up to $100$-times speed-up over them.
\keywords{path query \and path median \and path counting \and path reporting \and weighted tree} \end{abstract}
\section {Introduction}\label{section:introduction}
Let $T$ be an ordinal tree on $n$ nodes, with each node $x$ associated with a {\textit{weight}} $\pnt{w}(x)$ over
an alphabet $[\sigma].$\footnote{We set $[n] \triangleq \{1,2,\ldots,n\}.$}
A {\textit{path query}} in such a tree asks to evaluate a certain given
function on the path $P_{x,y}$, which is the path between two given query nodes, $x$ and $y.$
A {\textit{path median}} query asks for the median weight on $P_{x,y}.$
A {\textit{path counting}} ({\textit{path reporting}})
query counts (reports)
the nodes on $P_{x,y}$ with weights falling inside the given query weight range.
These queries generalize the range median problem on arrays,
as well as the $2d$ orthogonal counting and reporting
queries in point sets, by replacing one of the dimensions with tree topology.
Formally, query arguments consist of a pair of vertices $x,y \in T$ along with an interval $Q.$
The goal is to preprocess the tree $T$ for the following types of queries:
\begin{itemize}
\item {\emph{Path Counting}}: return $|\{z \in P_{x,y}\,|\,\pnt{w}(z) \in {Q}\}|$.
\item {\emph{Path Reporting}}: enumerate $\{z \in P_{x,y}\,|\,\pnt{w}(z) \in {Q}\}$.
\item {\emph{Path Selection}}: return the $k^{th}$
($0 \leq k < |P_{x,y}|$) weight in the sorted list of weights on $P_{x,y};$
$k$ is given at query time.
In the special case of $k = \floor{|P_{x,y}|/2},$
a path selection is a {\textit{path median query}}.
\end{itemize}
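For concreteness, the na{\"i}ve baseline used later for comparison (answering a query by explicitly walking the path) can be sketched as follows; this is our own illustrative Python sketch, with `parent` and `depth` stored as plain dictionaries, not the paper's implementation:

```python
# Naive path counting: walk from both endpoints up to their lowest common
# ancestor, testing each weight against the query range Q = [lo, hi].

def naive_path_count(parent, depth, w, x, y, lo, hi):
    cnt = 0
    while x != y:                      # always advance the deeper endpoint
        if depth[x] < depth[y]:
            x, y = y, x
        if lo <= w[x] <= hi:
            cnt += 1
        x = parent[x]
    if lo <= w[x] <= hi:               # x == y == LCA of the two endpoints
        cnt += 1
    return cnt

# tree: 1 (w=5) with children 2 (w=2) and 3 (w=7); node 4 (w=6) is a child of 2
parent = {2: 1, 3: 1, 4: 2}
depth = {1: 0, 2: 1, 3: 1, 4: 2}
w = {1: 5, 2: 2, 3: 7, 4: 6}
assert naive_path_count(parent, depth, w, 4, 3, 5, 7) == 3   # nodes 4, 1, 3
```

Path reporting and path median admit the same traversal, collecting or selecting among the gathered weights instead of counting.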
Path queries are a widely researched topic in the computer science community
~\cite{Alon87optimalpreprocessing,
Chazelle1987,
DBLP:journals/iandc/Hagerup00,
DBLP:journals/njc/KrizancMS05,
DBLP:journals/talg/0001MZ16,
DBLP:journals/algorithmica/ChanHMZ17,
DBLP:conf/isaac/0001K19}.
Apart from theoretical appeal, queries on tree topologies reflect
the needs of efficient information retrieval from hierarchical data,
and are gaining ground in established domains such as RDBMS~\cite{ltree}.
The expected height of $T$ being $\sqrt{2\pi{}n}$~\cite{RenyiSzekeres1967}, this
calls for the development of methods beyond the na{\"i}ve one.
Previous work includes that of Krizanc et al.~\cite{DBLP:journals/njc/KrizancMS05}, who were the first to introduce the path median query problem (henceforth \pathmed) in trees,
and gave an $\mathcal{O}(\lg{n})$ query-time
data structure with the space cost of $\mathcal{O}(n\lg^{2}n)$ words. They also gave an $\mathcal{O}(n\log_b{n})$ words
data structure to answer \pathmed{} queries in time $\mathcal{O}(b\lg^{3}n/\lg{b}),$ for any fixed $1 \leq b \leq n.$
Chazelle~\cite{Chazelle1987} gave an emulation dag-based linear-space data structure
for solving path counting (henceforth \pathcnt) queries in trees in time $\mathcal{O}(\lg{n}).$
While~\cite{DBLP:journals/njc/KrizancMS05,Chazelle1987} design different data structures for {\pathmed} and {\pathcnt},
He et al.~\cite{DBLP:conf/isaac/HeMZ11,DBLP:journals/talg/0001MZ16} use {\textit{tree extraction}} to solve both
{\pathcnt} and the path selection problem (henceforth \pathsel),
as well as the path reporting problem (henceforth \pathrp), which they were the first to introduce.
The running times for \pathsel/\pathcnt{} were $\mathcal{O}(\lg\sigma),$
while a \pathrp{} query is answered in $\mathcal{O}((1+\kappa)\lg\sigma)$ time,
with $\kappa$ henceforth denoting the output size.
Also given is an $\mathcal{O}(n\lg\lg\sigma)$-words
and $\mathcal{O}(\lg\sigma+\kappa\lg\lg\sigma)$ query time solution, for \pathrp{}, in the RAM model.
Further, solutions based on {\textit{succinct}} data structures started to appear.
(In the interests of brevity, the convention throughout this paper is that a data structure is {\textit{succinct}} if its size in bits is
close to the information-theoretic lower bound.)
Patil et al.~\cite{DBLP:journals/jda/PatilST12}
presented an $\mathcal{O}(\lg{n}\cdot{}\lg{\sigma})$ query time data structure for \pathsel/\pathcnt{}, occupying $6n+n\lg\sigma+\smallO(n\lg\sigma)$ bits of space.
Therein, the tree structure and the weights distribution are decoupled and
delegated to respectively heavy-path decomposition~\cite{Sleator:1983:DSD:61337.61338}
and wavelet trees~\cite{DBLP:books/daglib/0038982}. Their data structure also solves \pathrp{} in $\mathcal{O}(\lg{n}\lg\sigma+(1+\kappa)\lg\sigma)$ query time.
Parallel to~\cite{DBLP:journals/jda/PatilST12},
He et al.~\cite{DBLP:conf/isaac/HeMZ11,DBLP:journals/talg/0001MZ16} devised
a succinct data structure occupying $nH(W_T)+\smallO(n\lg\sigma)$ bits of space
to answer \pathsel/\pathcnt{} in $\mathcal{O}(\frac{\lg{\sigma}}{\lg\lg{n}}+1),$
and \pathrp{} in $\mathcal{O}((1+\kappa)(\frac{\lg\sigma}{\lg\lg{n}}+1))$ time.
Here, $W_T$ is the multiset of weights of the tree $T,$ and $H(W_T)$ is its entropy.
Combining tree extraction and the ball-inheritance problem~\cite{DBLP:conf/compgeom/ChanLP11},
Chan et al.~\cite{DBLP:journals/algorithmica/ChanHMZ17}
proposed further trade-offs,
one of them being an $\mathcal{O}(n\lg^{\epsilon}{n})$-word structure with $\mathcal{O}(\lg\lg{n}+\kappa)$ query time, for \pathrp{}.
Despite the vast body of work, little is known on the practical performance
of the data structures for path queries,
with empirical studies on weighted trees definitely lacking, and existing related experiments being limited to
navigation in unlabeled trees only~\cite{DBLP:conf/alenex/ArroyueloCNS10},
or to very specific domains~\cite{DBLP:journals/algorithms/AbeliukCN13,DBLP:journals/jea/NavarroP16}.
By contrast, the empirical study of traditional orthogonal range queries
have attracted much attention~\cite{DBLP:journals/tcs/ArroyueloCDDHLMNSS11,DBLP:journals/is/BrisaboaBKNS16,DBLP:conf/dcc/IshiyamaS17}.
We therefore contribute to remedying this imbalance.
\subsection{Our work}
In this article, we provide an experimental study of data structures
for path queries. The types of queries we consider
are \pathmed, \pathcnt, and \pathrp.
The theoretical foundation of our work are the data
structures and algorithms developed in~\cite{
DBLP:conf/isaac/HeMZ11,
DBLP:journals/jda/PatilST12,
DBLP:journals/algorithmica/HeMZ14,
DBLP:journals/talg/0001MZ16}.
The succinct data structure by He et al.~\cite{DBLP:journals/talg/0001MZ16}
is optimal both in space and time in the RAM model.
However, it builds on components that are likely to be cumbersome in practice.
We therefore present a practical compact implementation
of this data structure that uses $3n\lg\sigma+\smallO(n\lg\sigma)$ bits of space
as opposed to the original $nH(W_T)+\smallO(n\lg\sigma)$ bits of space in~\cite{DBLP:journals/talg/0001MZ16}.
For brevity, we henceforth refer to the data structures based
on tree extraction as \ext.
Our implementation of \ext{} achieves the query time of $\mathcal{O}(\lg{\sigma})$
for \pathmed{} and \pathcnt{} queries,
and $\mathcal{O}((1+\kappa)\lg\sigma)$ time for \pathrp.
Further, we present an exact implementation
of the data structure (henceforth \wthpd) by Patil et al.~\cite{DBLP:journals/jda/PatilST12}.
The theoretical guarantees of {\wthpd} are $6n+n\lg\sigma+\smallO(n\lg\sigma)$
bits of space, with $\mathcal{O}(\lg{n}\lg\sigma)$ and $\mathcal{O}(\lg{n}\lg\sigma+(1+\kappa)\lg\sigma)$
query times for respectively \pathmed{}/\pathcnt{} and \pathrp{}.
Although \wthpd{} is optimal neither in space nor in time,
it proves competitive with \ext{} on the practical datasets we use.
Further, we evaluate time- and space-impact
of succinctness by realizing plain pointer-based versions of both \ext{} and \wthpd{}.
We show that succinct data structures based on {\ext} and {\wthpd} offer
an attractive alternative to their fast but space-consuming counterparts,
with query-time slow-down of $30$-$40$ times
yet commensurate savings in space.
We also implement, in pointer-based and succinct variations,
a na{\"i}ve approach of not preprocessing the tree at all but rather
answering the query by explicit scanning. The succinct solutions
compare favourably to the na{\"i}ve ones, the slowest of the former
being $7$-$8$ times faster than the na{\"i}ve \pathmed, while occupying
up to $20$ times less space. \myenv{We also compare the performance of different
succinct solutions relative to each other.}
\section{Preliminaries}\label{section:preliminaries} \label{section:dcc:ptr}\label{section:dcc:extptr} This section introduces notation and main algorithmic techniques at the core of our data structures.
\subparagraph*{Notation.} The $i^{th}$ node visited during a preorder traversal of the given tree $T$ is said to have {\textit{preorder rank}} $i$. We identify a node by its preorder rank.
For a node $x \in T,$ its set of ancestors $\mathcal{A}(x)$ includes $x$ itself.
Given nodes $x,y \in T,$ where $y \in \mathcal{A}(x)$, we set $A_{x,y} \triangleq P_{x,y}\setminus{}\{y\};$ one then has $P_{x,y} = A_{x,z}\sqcup{}\{z\}\sqcup{}A_{y,z},$ where $z = LCA(x,y).$ The primitives {\texttt{rank/select/access}} are defined in a standard way, i.e. ${\mathtt{rank_1}(B,i)}$ is the number of $1$-bits in positions less than $i,$ ${\mathtt{select_{1}}(B,j)}$ returns the position of the $j^{th}$ $1$-bit, and $\mathtt{access(B,i)}$ returns the bit at the $i^{th}$ position, all with respect to a given bitmap $B$, which is omitted when the context is clear. \begin{figure}\label{figure:treeExtractionExample}
\end{figure}
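The semantics of the three primitives can be pinned down with a minimal, uncompressed sketch (real succinct structures answer these in $\mathcal{O}(1)$ time with $\smallO(n)$ extra bits; here we simply scan a Python list):

```python
def rank1(B, i):
    """Number of 1-bits in positions strictly before i, as in rank_1(B, i)."""
    return sum(B[:i])

def select1(B, j):
    """0-based position of the j-th 1-bit (j counted from 1)."""
    count = 0
    for pos, bit in enumerate(B):
        count += bit
        if count == j:
            return pos
    raise ValueError("fewer than j set bits")

def access(B, i):
    """The bit stored at position i."""
    return B[i]

B = [1, 0, 0, 1, 1, 0, 1]
assert rank1(B, 4) == 2          # 1-bits strictly before position 4
assert select1(B, 3) == 4        # the 3rd 1-bit sits at position 4
assert access(B, 5) == 0
```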
\subparagraph*{Compact representations of ordinal trees.} Compact representations of ordinal trees is a well-researched area, mainstream methodologies including {\textit{balanced parentheses}} (BP)~\cite{DBLP:conf/focs/Jacobson89,DBLP:journals/siamcomp/MunroR01,DBLP:journals/tcs/GearyRRR06,DBLP:journals/talg/LuY08,DBLP:journals/tcs/MunroRRR12}, {\textit{depth-first unary degree sequence}} (DFUDS)~\cite{
DBLP:journals/algorithmica/BenoitDMRRR05,
DBLP:journals/talg/GearyRR06,
DBLP:journals/jcss/JanssonSS12}, {\textit{level-order unary degree sequence}} (LOUDS)~\cite{
DBLP:conf/focs/Jacobson89,
DBLP:conf/wea/DelprattRR06}, and {\textit{tree covering}} (TC)~\cite{
DBLP:journals/talg/GearyRR06,
DBLP:journals/talg/HeMS12,
DBLP:journals/algorithmica/FarzanM14}. Of these, BP-based representations ``combine good time- and space-performance with rich functionality'' in practice~\cite{DBLP:conf/alenex/ArroyueloCNS10}, and we use BP in our solutions. BP is a way of linearising the tree by emitting {``\texttt{(}''} upon first entering a node and {``\texttt{)}''} upon exiting, having explored all its descendants during the preorder traversal of the tree. For example, {\texttt{(((()())())((())()))}} would be a BP-sequence for the tree $T$ in \Cref{figure:treeExtractionExample}.
As shown in~\cite{DBLP:journals/siamcomp/MunroR01,DBLP:journals/talg/LuY08,DBLP:journals/tcs/MunroRRR12}, an ordinal tree $T$ on $n$ nodes can be represented in $2n+\smallO(n)$ bits of space to support the following operations in $\mathcal{O}(1)$ time, for any node $x \in T:$ {$\mathtt{child}(T,x,i)$}, the $i$-th child of $x$; {$\mathtt{depth}(T,x)$}, the number of ancestors of $x$; {$\mathtt{LCA}(T,x,y)$}, the lowest common ancestor of nodes $x,y \in T;$ and {$\mathtt{level\_anc}(T,x,i)$}, the $i^{th}$ lowest ancestor of $x.$ \subparagraph*{Tree extraction.} Tree extraction~\cite{DBLP:journals/talg/0001MZ16} selects a subset $X$ of nodes
while maintaining the underlying hierarchical relationship among the nodes in $X$.
Given a subset $X$ of tree nodes called {\textit{extracted nodes}},
an {\textit{extracted tree}} $T_X$ can be obtained from the original tree $T$ through the following procedure.
Let $v \notin X$ be an arbitrary node. The node $v$ and all its incident edges in $T$ are removed from $T,$ thereby
exposing the parent $p$ of $v$ and $v'$s children, $v_1,v_2,\ldots,v_k.$
Then the nodes $v_1,v_2,\ldots,v_k$ (in this order) become new children of $p,$
occupying the contiguous segment of positions starting from the (old) position of $v.$
After thus removing all the nodes $v \notin X,$ we have $T_X \equiv F_X,$
if the forest $F_X$ obtained is a tree; otherwise, a dummy root $r$ holds the roots of the trees in $F_X$ (in the original left-to-right order)
as its children. (The symmetry between $X$ and $\bar{X} = V\setminus{}X$
brings about the {\textit{complement}} $T_{\bar{X}}$ of the extracted tree $T_{X}.$)
An original node $x \in X$ of $T$ and its copy, $x'$, in $T_X$ are said to {\textit{correspond}} to each other;
also, $x'$ is the {\textit{$T_X$-{view}}} of $x,$ and $x$ is the {\textit{T-source}} of $x'.$
The $T_X$-{view} of a node $y \in T$ ($y$ is not necessarily in $X$) is generally defined to be the node
$y' \in T_X$ corresponding to the lowest node in $\mathcal{A}(y) \cap{} X.$
In this paper, tree extraction is predominantly used to classify nodes into categories,
and the labels assigned indicate the weight ranges the original weights belong to.
\Cref{figure:treeExtractionExample} gives an example of an extracted tree, {view}s and sources.
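The splicing step of tree extraction can be sketched directly from the description above (our own toy Python sketch, with trees as dictionaries from a node to its ordered child list):

```python
# Removing a node v splices v's children, in order, into v's former position
# among its parent's children; surviving nodes keep their relative order.

def extract(children, root, X):
    """Return (new_children, roots) describing the extracted forest F_X."""
    new_children = {}
    def go(v):
        # extracted roots produced by the subtree of v
        kids = [r for c in children.get(v, []) for r in go(c)]
        if v in X:
            new_children[v] = kids
            return [v]
        return kids          # v vanishes; its extracted children move up
    roots = go(root)
    return new_children, roots

children = {1: [2, 5], 2: [3, 4]}        # 1 -> (2 -> (3, 4), 5)
new_children, roots = extract(children, 1, X={1, 3, 4, 5})
# node 2 is removed, so 3 and 4 become children of 1, before 5
assert roots == [1]
assert new_children[1] == [3, 4, 5]
```

If `roots` had more than one entry, a dummy root holding them in order would complete $T_X$, as described above.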
\section{Data Structures for Path Queries}\label{sec:compactRepr} This section gives the design details of the {\wthpd} and {\ext} data structures.
\subsection{Data structures based on heavy-path decomposition}\label{sec:HPD} We now describe the approach of~\cite{DBLP:journals/jda/PatilST12}, which is based on heavy-path decomposition~\cite{Sleator:1983:DSD:61337.61338}.
Heavy-path decomposition (HPD) imposes a structure on a tree. In HPD, for each non-leaf node, a {\textit{heavy child}} is defined as the child whose subtree has the maximum cardinality.
HPD of a tree $T$ with root $r$ is a collection of disjoint chains, first of which is obtained by always following
the heavy child, starting from $r,$ until reaching a leaf.
The subsequent chains are obtained by the same procedure, starting from the non-visited nodes closest to the root (ties broken
arbitrarily). The crucial property is that any root-to-leaf path in the tree encounters $\mathcal{O}(\lg{n})$ distinct chains. A chain's {\textit{head}} is the node of the chain that is closest to the root; a chain's tail is therefore a leaf.
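The decomposition can be computed in linear time; the sketch below (our own illustrative Python, not the paper's representation) records, for every node, the head of its chain:

```python
# Heavy-path decomposition: each non-leaf designates as heavy the child with
# the largest subtree; chains follow heavy children, so any root-to-leaf path
# meets O(lg n) distinct chains. head[v] is the head of v's chain.

def hpd_heads(children, root):
    order = []                            # DFS order: parents before children
    stack, size, head = [root], {}, {root: root}
    while stack:
        v = stack.pop()
        order.append(v)
        stack.extend(children.get(v, []))
    for v in reversed(order):             # subtree sizes, bottom-up
        size[v] = 1 + sum(size[c] for c in children.get(v, []))
    for v in order:                       # assign heads, top-down
        kids = children.get(v, [])
        if kids:
            heavy = max(kids, key=lambda c: size[c])
            for c in kids:
                head[c] = head[v] if c == heavy else c
    return head

children = {1: [2, 3], 3: [4, 5], 5: [6]}
head = hpd_heads(children, 1)
assert head[6] == 1        # 1 -> 3 -> 5 -> 6 is a single heavy chain
assert head[2] == 2        # a light child starts its own chain
```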
Patil et al.~\cite{DBLP:journals/jda/PatilST12} used HPD to decompose a path query into $\mathcal{O}(\lg{n})$ queries in sequences. To save space, they designed the following data structure to represent the tree and its HPD. If $x$ is the head of a chain $\phi$, all the nodes in $\phi$ have a (conceptual) {\textit{reference}} pointing to $x,$ while $x$ points to itself. A {\textit{reference count}} of a node $x$ (denoted as ${rc_{x}}$) stands for the number of times $x$ serves as a reference. Obviously, only heads feature non-zero reference counts -- precisely the lengths of their respective chains. The reference counts of all the nodes are stored in unary in preorder in a bitmap $B = 10^{rc_1}10^{rc_2}\ldots{}10^{rc_{n}}$ using $2n+\smallO(n)$ bits. Then, one has that $rc_{x} = \mathtt{rank_{0}(B,select_1(B,x+1))-rank_{0}(B,select_1(B,x))}.$ The topology of the original tree $T$ is represented succinctly in another $2n+\smallO(n)$ bits. In addition, they encode the HPD structure of $T$ using a new tree $T'$ that is obtained from $T$ via the following transformation. All the non-head nodes become leaves and are directly connected to their respective heads; the heads themselves (except the root) become children of the references of their original parents. All these connections are established respecting the preorder ranks of the nodes in the original tree $T.$ Namely, a node farther from the head attaches to it only after the higher-residing nodes of the chain have done so. This transformation preserves the original preorder ranks. On $T',$ operation {\texttt{ref(x)}} is supported, which returns the head of the chain to which the node $x$ in the original tree belongs.
To encode the weights, they define the {\textit{weight-list}} $C_x$ of a node $x$ as the list collecting, in preorder, the weights of all the nodes for which $x$ is the reference. Thus, a non-head node's list is empty; a head's list spells the weights in the relevant chain. Define $C = C_1C_2\ldots{}C_{n}.$ Then, in $C,$ the weight of $x \in T$ resides at position \begin{equation}\label{eq:eqnHpd}
\mathtt{1+select_1(B,ref(x))-ref(x) + depth(x) - depth(ref(x))} \end{equation} (where $\mathtt{depth(x)}$ and $\mathtt{ref(x)}$ are provided by $T$ and $T',$ respectively). \myenv{$C$ is then encoded in a wavelet tree (WT).}
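As a sanity check of \Cref{eq:eqnHpd}, consider the following toy Python sketch (linear-scan select, $1$-based node ranks and positions in $C$; the tree, with chain $\{1,2,3\}$ headed by $1$ and the light child $4$ forming its own chain, is our own example):

```python
def select1(B, j):
    """1-based position of the j-th 1-bit in B."""
    seen = 0
    for pos, bit in enumerate(B, start=1):
        seen += bit
        if seen == j:
            return pos
    raise ValueError("fewer than j set bits")

def pos_in_C(B, ref, depth, x):
    # select_1(B, h) - h counts the 0s before head h's unary run, i.e. the
    # start of h's weight-list C_h in C; depth(x) - depth(h) is x's offset
    # within its chain.
    h = ref[x]
    return 1 + select1(B, h) - h + depth[x] - depth[h]

B = [1, 0, 0, 0, 1, 1, 1, 0]        # rc = (3, 0, 0, 1) in unary
ref = {1: 1, 2: 1, 3: 1, 4: 4}      # chain {1,2,3} headed by 1; {4} by 4
depth = {1: 0, 2: 1, 3: 2, 4: 1}
assert pos_in_C(B, ref, depth, 3) == 3   # w(3) is the 3rd weight in C
assert pos_in_C(B, ref, depth, 4) == 4   # w(4) starts C_4
```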
To answer a query, $T,$ $T',$ $B,$ and \Cref{eq:eqnHpd} are used to partition the query path into $\mathcal{O}(\lg{n})$ sub-chains that it overlaps in HPD; and for each sub-chain, one computes the interval in $C$ storing the weights of the nodes in the chain. $I_m$ denotes the set of intervals computed.
Precisely, for a node $x$, one uses $B$ to find out whether $x$ is the head of its chain;
if not, fetching the parent of $x$ in $T'$ yields its head (say, $y$).
Then {\Cref{eq:eqnHpd}} maps the path $A_{x,y}$ to its corresponding interval in $C.$
One proceeds to the next chain by fetching the (original) parent of $y$, using $T.$ Then, the WT is queried with $\mathcal{O}(\lg{}n)$ simultaneous (i) range quantile queries (for \pathmed) or (ii) $2d$ orthogonal range queries (for \pathcnt{} and \pathrp).
Range quantile query over a collection of ranges is accomplished via a straightforward extension of the algorithm of Gagie et al.~\cite{DBLP:conf/spire/GagiePT09}. One descends the wavelet tree $W_C$ maintaining a set of current weights $[a,b]$ (initially $[\sigma]$), the current node $v$ (initially the root of $W_C$), and $I_m.$ When querying the current node $v$ of $W_C$ with an interval $[l_j,r_j] \in I_m,$ one finds out, in $\mathcal{O}(1)$ time, how many weights in the interval are lighter than the mid-point $c$ of $[a,b],$ and how many of them are heavier.
The sum of these values then
determines which subtree of $W_C$ to descend to.
There being $\mathcal{O}(\lg\sigma)$ levels in $W_C$, and spending $\mathcal{O}(1)$ time for each segment in $I_m,$ the overall running time is $\mathcal{O}(\lg{n}\lg\sigma).$
\pathcnt/\pathrp{} proceed by querying each interval, independently of the others, with the standard $2d$ search over $W_C$.
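The multi-range quantile descent can be illustrated with a toy Python sketch. Instead of rank operations on wavelet-tree bitmaps, this version rebuilds the two child sequences at every level, so it costs $\mathcal{O}(n\lg\sigma)$ per query rather than $\mathcal{O}(\lg{n}\lg\sigma)$, but the descent logic mirrors the succinct structure's (intervals are half-open index ranges into $C$, an assumption of this sketch):

```python
def multirange_quantile(C, intervals, k, lo, hi):
    """Return the k-th (0-based) smallest weight in the union of the ranges."""
    if lo == hi:
        return lo
    mid = (lo + hi) // 2
    # how many elements of the queried ranges go to the left child (<= mid)
    left_count = sum(1 for l, r in intervals for w in C[l:r] if w <= mid)
    go_left = k < left_count
    child = [w for w in C if (w <= mid) == go_left]
    # remap each interval: an element keeps its relative order in the child
    def remap(l, r):
        before = sum(1 for w in C[:l] if (w <= mid) == go_left)
        inside = sum(1 for w in C[l:r] if (w <= mid) == go_left)
        return (before, before + inside)
    new_intervals = [remap(l, r) for l, r in intervals]
    if go_left:
        return multirange_quantile(child, new_intervals, k, lo, mid)
    return multirange_quantile(child, new_intervals, k - left_count, mid + 1, hi)

C = [3, 1, 4, 1, 5, 2, 6]
# union of C[0:2] and C[4:7] is {3, 1, 5, 2, 6}; its median (k = 2) is 3
assert multirange_quantile(C, [(0, 2), (4, 7)], 2, 1, 6) == 3
```

In the succinct structure, `left_count` and `remap` reduce to constant-time rank queries per interval on the level's bitmap, which yields the stated $\mathcal{O}(\lg{n}\lg\sigma)$ bound.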
\subsection{Data structures based on tree extraction}\label{par:threelog}
The solution by He et al.~\cite{DBLP:journals/talg/0001MZ16} is based on performing a hierarchy of tree extractions, as follows. One starts with the original tree $T$ weighted over $[\sigma]$, and extracts two trees $T_0 = T_{1,m}$ and $T_1 = T_{m+1,\sigma},$ respectively associated with the intervals $I_0 = [1,m]$ and $I_1 = [m+1,\sigma],$ where $m = \floor{\frac{1+\sigma}{2}}.$ Then both $T_0$ and $T_1$ are subject to the same procedure, stopping only when the current tree is weight-homogeneous. We refer to the tree we have started with as the {\textit{outermost}} tree.
The key insight of tree extraction is that the number of nodes $n'$ with weights from $I_0$ on the path from $u$ to $v$ equals $
\mathtt{n' = depth_0(u_0) + depth_0(v_0) - 2\cdot{}depth_0(z_0) + \mathbf{1}_{w(z) \in I_0}}, $ where $\mathtt{depth_0(\cdot{})}$ is the depth function in $T_0,$ $z= LCA(u,v),$ $u_0,v_0,z_0$ are the $T_0$-{view}s of $u,v,$ and $z,$ and $\mathbf{1}_{pred}$ is $1$ if predicate $pred$ is {\sc{true}}, and $0$ otherwise. The key step is then, for a given node $x$, how to efficiently find its $0/1$-{\textit{parent}}, whose purpose is analogous to a {\texttt{rank}}-query when descending down the WT.
Consider a node $x \in T$ and its ${T_0}$-{view} $x_0.$ The corresponding node $x' \in T$ of $x_0 \in T_0$ is then called {$0$}-{\textit{parent}} of $x.$ The ${1}$-parent is defined analogously. Supporting $0/1$-parents in compact space is one of the main implementation challenges of the technique, as storing the view{}s explicitly is space-expensive. In~\cite{DBLP:journals/talg/0001MZ16}, the hierarchy of extractions is done by dividing the range not into $2$ but into $f = \mathcal{O}(\lg^{\epsilon}{n})$ parts, with $0 < \epsilon < 1$ being a constant. They classify the nodes according to weights using these $f= \ceil{\lg^{\epsilon}{n}}$ labels and use tree covering to represent the tree with small labels
in order to find $T_{\alpha}$-{view}s for arbitrary $\alpha \in [\sigma],$ in constant time. They also use this representation to identify, in constant time, which extractions to explore. Therefore, at each of the $\mathcal{O}(\lg{\sigma}/\lg\lg{n})$ levels of the hierarchy of extractions, constant time work is done, yielding an $\mathcal{O}(\lg{\sigma}/\lg\lg{n})$-time algorithm for {\pathcnt}. Space-wise, it is shown that each of the $\mathcal{O}(\lg\sigma/\lg\lg{n})$ levels can be stored in $2n+nH_0(W)+\smallO(n\lg\sigma)$ bits of space in total (where $W$ is the multiset of weights on the level) which, summed over all the levels, yields $nH_0(W_T)+\mathcal{O}(n\lg\sigma/\lg\lg{n})$ bits of space. The components of this optimal result, however, use word-parallel techniques that are unlikely to be practical. In addition, one of the components, tree covering (TC) for trees labeled over $[\sigma],\,\sigma= \mathcal{O}(\lg^{\epsilon}{n}),$ has not been implemented or experimentally evaluated, even in its unlabeled version. Finally, lookup tables for the word-RAM data structures may either be rendered too heavy by word alignment, or too slow by the concomitant arithmetic for accessing their entries.
In practice, small blocks of data are usually explicitly scanned~\cite{DBLP:conf/alenex/ArroyueloCNS10}. However, we can see no fast way to scan small labeled trees. At the same time, a generic multi-parentheses approach~\cite{DBLP:books/daglib/0038982} would spare the effort altogether, immediately yielding a $4n\lg\sigma+2n+\smallO(n\lg\sigma)$-bit encoding of the tree, with $\mathcal{O}(1)$-time support for $0/1$-parents. We achieve instead $3n\lg\sigma+\smallO(n\lg\sigma)$ bits of space, as we proceed to describe next.
We store $2n+\smallO(n)$ bits as a regular BP-structure $S$ of the original tree, in which a 1-bit represents an opening parenthesis, and a 0-bit represents a closing one, and mark in a separate length-$n$ bitmap $B$ the {\textit{types}} (i.e.~whether the corresponding node is a $0$- or $1$-node) of the $n$ opening parentheses in $S.$ The type of an opening parenthesis at position $i$ in $S$ is thus given by $\mathtt{access(B,rank_1(S,i))}.$ Given $S$ and $B$, we find the $t$-parent, $t \in \{0,1\},$ of $v$ with an approach described in~\cite{DBLP:journals/algorithmica/HeMZ14}. For completeness, we outline in \Cref{algo:algo02} how to locate the $T_t$-view{} of a node $v.$
\begin{algorithm}
\caption{Locate the view{} of $v\in T$ in $T_t$, where $T_t$ is the extraction from $T$ of the $t$-nodes}
\label{algo:algo02}
\begin{algorithmic}[1]
\Require{$t \in \{0,1\}$}
\Function{view\_of}{$v,t$}
\If{$\mathtt{B[v] == t}$} \Comment{$v$ is a $t$-node itself}
\State \Return {$\mathtt{rank_t(B,v)}$}
\EndIf
\Let{$\mathtt{\lambda}$}{$\mathtt{rank_t(B,v)}$} \label{alg:algo02:line:ref01} \Comment{how many $t$-nodes precede $v$?}
\If{$\mathtt{\lambda == 0}$} \label{alg:algo02:line:ref02}
\State \Return{\texttt{null}}
\EndIf
\Let{$\mathtt{u}$}{$\mathtt{select_t(B,\lambda)}$} \label{alg:algo02:line:ref03} \Comment{find the $\lambda^{th}$ $t$-node}
\If{$\mathtt{LCA(u,v) == u}$}
\State\Return{$\mathtt{rank_t(B,u)}$} \label{alg:algo02:line:ref04}
\EndIf
\Let{$\mathtt{z}$}{$\mathtt{LCA(u,v)}$} \Comment{$z$ is $LCA$ of a $t$-node $u$ and a non-$t$-node $v$}
\If{$\mathtt{z == null}$ {\bf{or}} $\mathtt{B[z] == t}$} \Comment{$z$ is a $t$-node $\Rightarrow \nexists$ $t$-parent closer to $v$}
\State \Return $\mathtt{rank_t(B,z)}$ \Comment{or \texttt{null}}
\EndIf
\Let{$\mathtt{\lambda}$}{$\mathtt{rank_t(B,z)}$} \Comment{how many $t$-nodes precede $z$?}
\Let{$\mathtt{r}$}{$\mathtt{select_t(B,\lambda+1)}$} \label{alg:algo02:line:ref05} \Comment{the first $t$-descendant of $z$}
\Let{$\mathtt{z_t}$}{$\mathtt{rank_t(B,r)}$} \label{alg:algo02:line:ref06} \Comment{$z_t$ is the $T_t$-view{} of $r$}
\Let{$\mathtt{p}$}{$\mathtt{T_t.parent(z_t)}$} \label{alg:algo02:line:ref07} \Comment{$p$ can be {\texttt{null}} if $z_t$ is the root of $T_t$}
\State\Return{$\mathtt{p}$}
\EndFunction
\end{algorithmic} \end{algorithm}
First, find the number of $t$-nodes preceding $v$ (line~\ref{alg:algo02:line:ref01}). If none exists (line~\ref{alg:algo02:line:ref02}), we are done; otherwise, let $u$ be the $t$-node immediately preceding $v$ (line~\ref{alg:algo02:line:ref03}). If $u$ is an ancestor of $v,$ it is the answer (line~\ref{alg:algo02:line:ref04}); else, set $z= LCA(u,v).$ If $z$ is a $t$-node, or non-existent (because the tree is actually a forest), then return $z$ or {\texttt{null}}, respectively. Otherwise ($z$ exists and is not a $t$-node), in line~\ref{alg:algo02:line:ref05} we find the first $t$-descendant $r$ of $z$ (it exists because of $u$). This descendant cannot be an ancestor of $v,$ since otherwise we would have found it before. It must, however, share the same $t$-parent with $v.$ We map this descendant to a node $z_t$ in $T_t$ (line~\ref{alg:algo02:line:ref06}). Finally, we find the parent of $z_t$ in $T_t$ (line~\ref{alg:algo02:line:ref07}).
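The walk-through can be mirrored in a few lines of Python. This is an illustrative sketch only: plain lists stand in for the bitmap $B$ and its {\texttt{rank/select}} support, a parent array replaces the BP structure, nodes are preorder numbers, and the forest-only {\texttt{null}} branch of \Cref{algo:algo02} is omitted since the toy input is a single tree. The result of \texttt{view\_of} is checked against the defining property of the view, i.e.~the $T_t$-index of the nearest $t$-ancestor-or-self:

```python
par = [None, 0, 1, 1, 0, 4]   # toy tree in preorder; 0 is the root
B   = [0, 1, 0, 1, 1, 0]      # type of each node: i is a B[i]-node

def rank(t, i):               # number of t-nodes strictly before position i
    return sum(1 for b in B[:i] if b == t)

def select(t, k):             # position of the k-th t-node, 1-based
    for i, b in enumerate(B):
        if b == t:
            k -= 1
            if k == 0:
                return i
    return None

def lca(u, v):                # parent-pointer LCA (BP machinery elided)
    anc = set()
    while u is not None:
        anc.add(u)
        u = par[u]
    while v not in anc:
        v = par[v]
    return v

def parent_in_Tt(t, x):       # T_t-parent: view of the nearest proper t-ancestor
    x = par[x]
    while x is not None and B[x] != t:
        x = par[x]
    return None if x is None else rank(t, x)

def view_of(v, t):            # the pseudocode above, step by step
    if B[v] == t:             # v is a t-node itself
        return rank(t, v)
    lam = rank(t, v)          # how many t-nodes precede v?
    if lam == 0:
        return None
    u = select(t, lam)        # t-node immediately preceding v
    z = lca(u, v)
    if z == u or B[z] == t:   # u is an ancestor, or the LCA is a t-node
        return rank(t, z)
    r = select(t, rank(t, z) + 1)   # first t-descendant of z
    return parent_in_Tt(t, r)       # its T_t-parent is also v's t-parent
```

Extraction preserves preorder, which is why $\mathtt{rank_t(B,\cdot{})}$ directly yields a node's index in $T_t$.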
The combined cost of $S$ and $B$ is $2n+n+\smallO(n) = 3n+\smallO(n)$ bits. At each of the $\lg\sigma$ levels of extraction, we encode $0/1$-labeled trees in the same way, so the total space is $3n\lg\sigma+\smallO(n\lg\sigma)$ bits.
Query algorithms in the {\ext} data structure proceed within the generic framework of extracting $T_0$ and $T_1.$
Let $n' = |P_{u_0,v_0}|.$ In \pathmed, for a query that asks for the node with the $k^{th}$ smallest weight on the path $P_{u_0,v_0},$ we recurse on $T_0$ if $k < n';$ otherwise, we recurse on $T_1$ with $k \leftarrow k-n'$ and $u_1,\,v_1.$ We stop upon encountering a tree with homogeneous weights. This logic is embodied in \Cref{algo:algo00} in \Cref{appendix:queryAlgos}. The theoretical running time is $\mathcal{O}(\lg\sigma),$ as all the primitives used take $\mathcal{O}(1)$ time.
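The recursion just described is, in effect, quantile selection by repeated halving of the weight alphabet. A minimal Python sketch (operating on an explicit list of path weights instead of the succinct trees, with $0$-indexed $k$) illustrates the branching rule:

```python
def kth_on_path(weights, k, p, q):
    """k-th smallest (0-indexed) among `weights`, all drawn from [p, q]."""
    if p == q:                 # weight-homogeneous: done
        return p
    m = (p + q) // 2           # split [p, q], as the extraction splits the range
    lo = [w for w in weights if w <= m]
    if k < len(lo):            # the answer lies in the "T0" half
        return kth_on_path(lo, k, p, m)
    return kth_on_path([w for w in weights if w > m],
                       k - len(lo), m + 1, q)   # k <- k - n'
```

For instance, \texttt{kth\_on\_path([5, 1, 4, 1, 3], 2, 0, 7)} returns $3$, the third-smallest weight on the (explicitly listed) path.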
The procedure for \pathcnt{} and \pathrpt{} is similar to that for the \pathmed{} problem. We maintain two nodes, $u$ and $v,$ as the query nodes with respect to the current extraction $T$, and a node $z$ as the lowest common ancestor of $u$ and $v$ in the current tree $T.$ Initially, $u,v \in T$ are the original query nodes, and $T$ is the outermost tree. Correspondingly, $z$ is the LCA of the nodes $u$ and $v$ in the original tree; we determine the weight of $z$ and store it in $w,$ which is passed down the recursion. Let $[a,b]$ be the query interval, and $[p,q]$ be the current range of weights of the tree. Initially, $[p,q] = [\sigma].$ First, we check whether the current interval $[p,q]$ is contained within $[a,b].$ If so, the entire path $A_{u,z} \cup{} A_{v,z}$ belongs to the answer; here, we also check whether $w \in [a,b].$ Otherwise, we recurse on $T_t$ ($t \in \{0,1\}$), having computed the corresponding $T_t$-view{}s of the nodes $u,v,$ and $z,$ and with the corresponding current range. The full details of the $\mathcal{O}(\lg\sigma)$-time algorithm are given in \Cref{algo:extractionCounting} of \Cref{appendix:queryAlgos}.
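The counting logic mirrors a range-tree descent. The following hedged Python sketch, again over an explicit list of path weights (node bookkeeping and $LCA$ handling omitted), shows how the current range $[p,q]$ is pruned against the query $[a,b]$:

```python
def count_in_range(weights, a, b, p, q):
    """How many of `weights` (all in [p, q]) fall in the query range [a, b]."""
    if not weights or b < p or q < a:
        return 0               # ranges disjoint: nothing on this side counts
    if a <= p and q <= b:
        return len(weights)    # [p, q] contained in [a, b]: whole subpath counts
    m = (p + q) // 2           # otherwise recurse on both halves
    return (count_in_range([w for w in weights if w <= m], a, b, p, m)
            + count_in_range([w for w in weights if w > m], a, b, m + 1, q))
```

For reporting, the "contained" case would emit the nodes of the subpath instead of merely counting them, which is where the extra $\mathcal{O}(\kappa\lg\sigma)$ term of \pathrpt{} originates.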
\myenv{To summarize, the variant of {\ext} that we design here uses $3n\lg\sigma+\smallO(n\lg\sigma)$ bits to support {\pathmed} and {\pathcnt} in $\mathcal{O}(\lg\sigma)$ time, and {\pathrp} in $\mathcal{O}((1+\kappa)\lg\sigma)$ time. Compared to the original succinct solution~\cite{DBLP:journals/talg/0001MZ16} based on tree extraction, our variant uses about $3$ times the space with a minor slow-down in query time, but is easily implementable using bitmaps and BP, both of which have been studied experimentally (see e.g.~\cite{DBLP:conf/alenex/ArroyueloCNS10} and~\cite{DBLP:books/daglib/0038982} for an extensive review).}
\section{Experimental Results} \myenv{We now conduct experimental studies on data structures for path queries.} \begin{table*} \centering \ra{1.05}
\begin{tabular}{p{.03\textwidth}p{.15\textwidth}p{.75\textwidth}@{}}
\cmidrule[0.25ex]{2-3}
{} & {Symbol} & \multicolumn{1}{c}{Description}\\
\cmidrule{2-3} \parbox[t]{3mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{ \begin{footnotesize}{\textit{pointer-based}}\end{footnotesize} }}}
& {\nv} & {\begin{small}{Na{\"i}ve data structure in \Cref{sec:approaches}}\end{small}}\\
& {\nvlca} & {\begin{small}{Na{\"i}ve data structure in \Cref{sec:approaches},
augmented with $\mathcal{O}(1)$ query-time $LCA$ of~\cite{DBLP:journals/jal/BenderFPSS05}}\end{small}}\\
& {\extptr} & {\begin{small}{A solution based on tree extraction~\cite{DBLP:journals/talg/0001MZ16} in \Cref{section:dcc:ptr}}\end{small}}\\
& {\wthpdptr} & {\begin{small}{A non-succinct version of the wavelet tree- and heavy-path decomposition-based solution of~\cite{DBLP:journals/jda/PatilST12} in \Cref{sec:compactRepr}.}\end{small}}\\
\cmidrule{2-3}
& {\nvsuc} & {\begin{small}{Na{\"i}ve data structure of \Cref{sec:approaches}, using succinct data structures to represent the tree structure and weights}\end{small}}\\
\parbox[t]{3mm}{\multirow{5}{*}{\rotatebox[origin=c]{90}{ \begin{footnotesize}{\textit{succinct}}\end{footnotesize} }}}
& {\extrrr} & {\begin{small}{$3n\lg\sigma+\smallO(n\lg\sigma)$-bits-of-space scheme for tree extraction of \Cref{par:threelog}, with compressed bitmaps}\end{small}}\\
& {\extun} & {\begin{small}{$3n\lg\sigma+\smallO(n\lg\sigma)$-bits-of-space scheme for tree extraction of \Cref{par:threelog}, with uncompressed bitmaps }\end{small}}\\
& {\wthpdrrr} & {{\setlength{\parindent}{0cm}{\begin{small}{Succinct version of {\wthpd}, with compressed bitmaps}\end{small}}}}\\
& {\wthpdun} & {{\setlength{\parindent}{0cm}{\begin{small}{Succinct version of {\wthpd}, with uncompressed bitmaps}\end{small}}}}\\
\cmidrule[0.25ex]{2-3}
\end{tabular}
\caption{The implemented data structures and the abbreviations used to refer to them.} \label{tables:tabAbbr} \end{table*}
\subsection{Implementation}\label{sec:implementation}\label{sec:approaches} For ease of reference, we outline the data structures implemented in \Cref{tables:tabAbbr}.
Na{\"i}ve approaches (both plain pointer-based {\nv/\nvlca} and succinct {\nvsucc}) resolve a query on the path $P_{x,y}$ by explicitly traversing it from $x$ to $y$. At each encountered node, we (i) collect its weight into an array (for \pathmed); (ii) check if its weight is in the query range (for \pathcnt); or (iii) if the check in (ii) succeeds, collect the node into a container (for \pathrp). In \pathmed, we subsequently call a standard {\textit{introspective selection}} algorithm~\cite{DBLP:journals/spe/Musser97} over the array of collected weights. Depths and parent pointers, explicitly stored at each node, guide the upward traversal from $x$ and $y$ to their common ancestor. Plain pointer-based tree topologies are stored using the \textit{forward-star}~\cite{DBLP:books/daglib/0069809} representation. In \nvlca, we equip \nv{} with the linear-space and $\mathcal{O}(1)$-time LCA-support structure of~\cite{DBLP:journals/jal/BenderFPSS05}.
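As a reference point, the na{\"i}ve traversal can be sketched in a few lines of Python (an illustration only --- the actual implementation is C++ over a forward-star layout); shown here in its \pathrp{} flavour, using the stored depths to climb both endpoints to their common ancestor:

```python
def naive_path_report(par, depth, wt, x, y, a, b):
    """Nodes with weight in [a, b] on the x-to-y path, found by climbing."""
    out = []
    while x != y:
        if depth[x] < depth[y]:      # always advance the deeper endpoint,
            x, y = y, x              # so both meet exactly at the LCA
        if a <= wt[x] <= b:
            out.append(x)
        x = par[x]
    if a <= wt[x] <= b:              # x == y == LCA
        out.append(x)
    return out
```

Dropping the range check and appending every weight gives the \pathmed{} variant, after which a selection algorithm runs over the collected array.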
Succinct structures {\extrrr/\extun/\wthpdrrr/\wthpdun} are implemented with the help of the succinct data structures library \sdslite{} of Gog et al.~\cite{DBLP:conf/wea/GogBMP14}. To implement {\wthpd} and the practical variant of {\ext} we designed in \Cref{par:threelog}, two types of bitmaps are used: a compressed bitmap~\cite{DBLP:journals/talg/RamanRS07} (implemented in {\classname{sdsl::rrr\_vector}} of \sdslite) and a plain bitmap (implemented in {\classname{sdsl::bit\_vector}} of \sdslite). For {\nvsuc}, the weights are stored using $\ceil{\lg\sigma}$ bits each in a sequence, and the structure theoretically occupies $2n+n\lg\sigma + \smallO(n\lg\sigma)$ bits. For uniformity across our data structures, tree navigation is provided solely by a BP representation based on~\cite{DBLP:journals/talg/GearyRR06} (implemented in {\classname{sdsl::bp\_support\_gg}}), chosen on the basis of our benchmarks.
The plain pointer-based {\extptr} implements the tree-extraction solution of He et al.~\cite{DBLP:journals/talg/0001MZ16} in the pointer-machine model. In it, the view{}s $x_0 \in T_0,\,x_1 \in T_1$ of each node that arises in the hierarchy of extractions, as well as the depths in $T,$ are stored explicitly. Similarly, {\wthpdptr} is a plain pointer-based implementation of the data structure by Patil et al.~\cite{DBLP:journals/jda/PatilST12}. The relevant source code is accessible at~\url{https://github.com/serkazi/tree_path_queries}.
\subsection{Experimental setup}\label{sec:results} \begin{table*} \centering \resizebox{\columnwidth}{!}{ \ra{1.3} \rowcolors{1}{}{lightgray} \begin{tabular}{@{}lrrrrrp{5cm}@{}} \toprule {} & \multicolumn{1}{c}{\texttt{num nodes}} & \multicolumn{1}{c}{\texttt{diameter}} & \multicolumn{1}{c}{\texttt{$\sigma$}} & \multicolumn{1}{c}{\texttt{$\log{\sigma}$}} & \multicolumn{1}{c}{\texttt{$H_0$}} & {Description}\\ \cmidrule{1-7} {\euosm} & {{\texttt{27,024,535}}} & {{\texttt{109,251}}} & {{\texttt{121,270}}} & {{\texttt{16.89}}} & {{\texttt{9.52}}} & {An MST we constructed over a map of Europe~\cite{OpenStreetMap}}\\ {\eudimacs} & {{\texttt{18,010,173}}} & {{\texttt{115,920}}} & {{\texttt{843,781}}} & {{\texttt{19.69}}} & {{\texttt{8.93}}} & {An MST we constructed over the European road network~\cite{kitdimacs}}\\ {\euemst} & {{\texttt{50,000,000}}} & {{\texttt{175,518}}} & {{\texttt{5020}}} & {{\texttt{12.29}}} & {{\texttt{9.95}}} & {A Euclidean MST we constructed over the DEM of Europe~\cite{srtm}}\\ {\mars} & {{\texttt{30,000,000}}} & {{\texttt{164,482}}} & {{\texttt{29,367}}} & {{\texttt{14.84}}} & {{\texttt{13.23}}} & {A Euclidean MST we constructed over the DEM of Mars~\cite{marsnasa}}\\ \bottomrule \end{tabular} } \caption{
Datasets metadata.
DEM stands for Digital Elevation Model, and MST for minimum spanning tree.
Weights are over $\{0,1,\ldots,\sigma-1\},$ and $H_0$ is the entropy of the multiset of weights.
In DEM, elevation (in meters) is used as weights. For \euosm{}, distance in meters between
locations,
and for \eudimacs, travel time between locations,
for a proprietary ``car'' profile in tenths of a second, are used as weights.} \label{tables:datasetTable} \end{table*}
The platform used is a server with $128$GiB of RAM and an Intel(R) Xeon(R) Gold $6234$ CPU at $3.30$GHz, running the 4.15.0-54-generic 58-Ubuntu SMP x86\_64 kernel. The binaries were built with {\texttt{clang-8}} using the {\texttt{-g,-O2,}}\\ {\texttt{-std=c++17,mcmodel=large,-NDEBUG}} flags. Our datasets originate from geographical information systems (GIS). In \Cref{tables:datasetTable}, the relevant metadata on our datasets is given.
We generated query paths by choosing a pair of nodes uniformly at random (u.a.r.). To generate a range of weights, $[a,b],$ we follow the methodology of~\cite{DBLP:conf/spire/ClaudeMN10} and consider {\texttt{large}}, {\texttt{medium}}, and {\texttt{small}} configurations: given $K,$ we generate the left bound $a \in [W]$ u.a.r., whereas $b$ is generated u.a.r. from $[a,a+\ceil{\frac{W-a}{K}}].$ We set $K = 1,10,$ and $100$ for {\texttt{large}}, {\texttt{medium}}, and {\texttt{small}}, respectively. To counteract the skew in the weight distribution of some of the datasets, when generating the weight-range $[a,b]$ we in fact generate a pair from $[n]$ rather than $[\sigma]$ and map the positions to the sorted list of input weights, ensuring that the number of nodes covered by the generated weight-range is proportional to $K^{-1}.$
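The range-generation procedure can be sketched as follows (a hedged Python illustration of the methodology above; drawing positions from the sorted weight list is what counteracts the skew):

```python
import random

def gen_weight_range(sorted_weights, K):
    """Query range [a, b] covering roughly n/K of the sorted weight positions."""
    n = len(sorted_weights)
    i = random.randrange(n)                      # left position, u.a.r.
    span = -(-(n - i) // K)                      # ceil((n - i) / K)
    j = min(n - 1, i + random.randint(0, span))  # right position within span
    return sorted_weights[i], sorted_weights[j]
```

With $K=1$ the range may span almost the whole weight universe; with $K=100$ it covers about $1\%$ of the positions.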
\subsection{Space performance and construction costs} Each data structure we implement (whether from the \nv-, \ext-, or \wthpd-family), taken individually, answers all three types of queries (\pathmed, \pathcnt, and \pathrp). Hence, we consider space consumption first.
\newcommand{\moj}[2]{\textbf{#1}/\texttt{#2}} \begin{table*} \begin{small} \centering \resizebox{\columnwidth}{!}{ \ra{1.05}
\begin{tabular}{@{}ll*{9}{r}@{}}
\toprule
{} & {\multirow{1}{*}{Dataset}} & \multirow{1}{*}{\nv} & \multirow{1}{*}{\nvlca} & \multirow{1}{*}{\wthpdptr} & \multirow{1}{*}{\extptr} & \multirow{1}{*}{\nvsuc} &
\multicolumn{1}{c}{\extrrr} & \multicolumn{1}{c}{\extun} & \multicolumn{1}{c}{\wthpdrrr} & \multicolumn{1}{c}{\wthpdun}\\
\midrule
\parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{\texttt{space}}}}
& {\euosm} & {\laski{406.3}}& {\laski{972.1}} & {\laski{3801}} & {\laski{5943}} & {\laski{21.71}}&{\laski{59.85}}&{\laski{75.74}}&{\laski{21.71}}&{\laski{34.42}}\\
& {\eudimacs} & {\laski{406.4}}& {\laski{974.0}} & {\laski{4274}} & {\laski{6768}} & {\laski{34.46}}&{\laski{82.16}}&{\laski{106.0}}&{\laski{29.69}}&{\laski{48.77}}\\
& {\euemst} & {\laski{394.1}}& {\laski{988.5}} & {\laski{3342}} &{\laski{4613}} & {\laski{19.64}} & {\laski{45.41}}&{\laski{59.15}}&{\laski{19.64}}& {\laski{31.66}}\\
& {\mars} & {\laski{386.7}}& {\laski{1005}} & {\laski{3579}} &{\laski{5383}} & {\laski{17.35}} & {\laski{51.71}} & {\laski{66.02}} & {\laski{17.35}} & {\laski{28.80}}\\
\midrule
\parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{\texttt{peak/time}}}}
& {\euosm} &{\moj{491.0}{1}}&{\moj{987.9}{5}}&{\moj{3785}{28}} &{\moj{9586}{47}}&{\moj{21.71}{1}}&{\moj{295.0}{23}}&{\moj{295.0}{23}}&{\moj{1347}{62}}&{\moj{1347}{61}}\\
& {\eudimacs} &{\moj{439.8}{1}}&{\moj{1002}{4}}&{\moj{4403}{19}}&{\moj{12382}{37}}&{\moj{29.69}{1}}&{\moj{399.7}{18}}&{\moj{399.7}{18}}&{\moj{1360}{42}}&{\moj{1360}{42}}\\
& {\euemst} &{\moj{401.0}{2}}&{\moj{1021}{10}}&{\moj{3460}{47}}&{\moj{5286}{67}}&{\moj{19.64}{1}} & {\moj{287.6}{32}}&{\moj{287.6}{32}}&{\moj{1333}{115}}& {\moj{1333}{115}}\\
& {\mars} &{\moj{392.4}{1}}& {\moj{1016}{5}}&{\moj{3719}{30}} &{\moj{6027}{46}} & {\moj{17.35}{1}}&{\moj{269.3}{22}}&{\moj{269.3}{22}}&{\moj{1337}{69}}&{\moj{1337}{69}}\\
\bottomrule \end{tabular} } \caption{ (upper) Space occupancy of our data structures, in bits per node, when loaded into memory; (lower) peak memory usage (\textbf{m} in bits per node) during construction and construction time ($t$ in seconds) shown as $\mathbf{m}/t$.} \label{tables:spaceAll} \end{small} \end{table*}
The upper part of \Cref{tables:spaceAll} shows the space usage of our data structures. The structures {\nv/\nvlca} are lighter than {\extptr/\wthpdptr}, as expected. Adding fast $LCA$ support doubles the space requirement of \nv, whereas succinctness (\nvsucc) uses up to $20$ times less space than \nv. The difference between {\extptr} and {\wthpdptr}, in turn, lies in the explicit storage of the $0$-view{}s for each of the $\Theta(n\lg\sigma)$ nodes occurring during tree extraction. In {\wthpdptr}, by contrast, {$\mathtt{rank}_0$} is induced from $\mathtt{rank}_1$ (via subtraction) -- hence the difference in the empirical sizes of the otherwise $\Theta(n\lg\sigma)$-word data structures.
The empirical space occupancy of the succinct {\nvsucc} is close to the information-theoretic minimum of $\lg\sigma+2$ bits per node (\Cref{tables:datasetTable}). The structures {\extrrr/\extun} occupy about three times as much, which is consistent with the design of our practical solution (\Cref{par:threelog}). It is interesting to note that the data structure {\wthpdrrr} occupies space close to bare succinct storage of the input alone (\nvsucc). Entropy-compression significantly benefits both families of succinct structures, {\wthpd} and {\ext}, saving up to $20$ bits per node when switching from a plain bitmap to a compressed one. Compared to the pointer-based solutions (\nv/\nvlca/\wthpdptr/\extptr), we note that {\extrrr/\extun/\wthpdrrr/\wthpdun} still allow the usual navigational operations on $T$, whereas the former shed this redundancy after preprocessing, to save space.
Overall, the succinct {\wthpdun/\wthpdrrr/\extun/\extrrr} perform very well, all staying well under $1$ gigabyte for the large datasets we use. This suggests scalability: when trees are too large to fit into main memory, the succinct solutions are clearly the method of choice.
The lower part in \Cref{tables:spaceAll} shows peak memory usage ({\textbf{m}}, in bits per node) and construction time ($t$, in seconds), as $\mathbf{m}/t.$ The structures {\extun/\extrrr} are about three times faster to build than {\wthpdun/\wthpdrrr}, and use four times less space at peak. This is expected, as {\wthpd} builds two different structures (HPD and then WT). The ranking is reversed, time-wise, for {\extptr/\wthpdptr}, as {\extptr} performs more memory allocations during construction (although our succinct structures are flattened into a heap layout, {\extptr} stores pointers to $T_0/T_1$; this is less of a concern for {\wthpdptr}, whose very purpose is tree linearisation).
\begin{table*} \begin{small} \centering \resizebox{0.87\columnwidth}{!}{ \ra{1.05}
\begin{tabular}{@{}ll*{10}{r}@{}}
\toprule
& {Dataset} & \multirow{1}{*}{\nv} & \multirow{1}{*}{\nvlca} & \multirow{1}{*}{\extptr} & \multirow{1}{*}{\wthpdptr} & \multirow{1}{*}{\nvsuc} & \multicolumn{1}{c}{\extrrr} & \multicolumn{1}{c}{\extun} & \multicolumn{1}{c}{\wthpdrrr} & \multicolumn{1}{c}{\wthpdun}\\
\midrule
\parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{\texttt{median}}}} &
{\euosm} & {\laski{658}} & {\laski{475}} & {\laski{4.22}} & {\laski{6.10}} & {\laski{7078}} & {\laski{85.3}} & {\laski{51.1}} & {\laski{111}} & {\laski{51.2}}\\
& {\eudimacs} & {\laski{566}} & {\laski{412}} & {\laski{5.16}} & {\laski{6.28}} & {\laski{6556}} & {\laski{84.6}} & {\laski{54.8}} & {\laski{120}} & {\laski{54.7}}\\
& {\euemst} & {\laski{710}} & {\laski{436}} & {\laski{4.44}} & {\laski{5.10}} & {\laski{9404}} & {\laski{106}} & {\laski{81.9}} & {\laski{96.7}} & {\laski{54.9}}\\
& {\mars} & {\laski{472}} & {\laski{298}} & {\laski{4.93}} & {\laski{4.53}} & {\laski{7018}} & {\laski{124}} & {\laski{97.0}} & {\laski{88.3}} & {\laski{49.5}}\\
\midrule
\parbox[t]{2mm}{\multirow{12}{*}{\rotatebox[origin=c]{90}{\texttt{counting}}}} &
{\euosm} & {\laski{238}} & {\laski{140}} & {\laski{6.88}} & {\laski{18.4}} & {\laski{3553}} & {\laski{247}} & {\laski{167}} & {\laski{139}} & {\laski{56.9}} & \parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{\texttt{large}}}}\\
& {\eudimacs} & {\laski{204}} & {\laski{121}} & {\laski{7.31}} & {\laski{19.7}} & {\laski{3300}} & {\laski{253}} & {\laski{178}} & {\laski{142}} & {\laski{57.3}} &\\
& {\euemst} & {\laski{338}} & {\laski{195}} & {\laski{5.97}} & {\laski{11.5}} & {\laski{4835}} & {\laski{215}} & {\laski{168}} & {\laski{105}} & {\laski{55.9}} &\\
& {\mars} & {\laski{232}} & {\laski{174}} & {\laski{5.25}} & {\laski{8.40}} & {\laski{3614}} & {\laski{206}} & {\laski{164}} & {\laski{91}} & {\laski{49.3}} &\\
\cmidrule{2-11}
& {\euosm} & {\laski{244}} & {\laski{143}} & {\laski{5.47}} & {\laski{17.8}} & {\laski{3555}} & {\laski{213}} & {\laski{146}} & {\laski{129}} & {\laski{54.2}} & \parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{\texttt{medium}}}}\\
& {\eudimacs} & {\laski{209}} & {\laski{124}} & {\laski{6.94}} & {\laski{18.4}} & {\laski{3297}} & {\laski{224}} & {\laski{160}} & {\laski{133}} & {\laski{56.5}} & \\
& {\euemst} & {\laski{339}} & {\laski{195}} & {\laski{4.55}} & {\laski{10.0}} & {\laski{4840}} & {\laski{178}} & {\laski{140}} & {\laski{100}} & {\laski{54.9}} & \\
& {\mars} & {\laski{237}} & {\laski{143}} & {\laski{5.91}} & {\laski{8.74}} & {\laski{3613}} & {\laski{199}} & {\laski{154}} & {\laski{89.7}} & {\laski{48.9}} & \\
\cmidrule{2-11}
& {\euosm} & {\laski{239}} & {\laski{139}} & {\laski{5.25}} & {\laski{15.4}} & {\laski{3551}} & {\laski{190}} & {\laski{132}} & {\laski{119}} & {\laski{53.9}} & \parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{\texttt{small}}}}\\
& {\eudimacs} & {\laski{209}} & {\laski{123}} & {\laski{5.25}} & {\laski{18.9}} & {\laski{3300}} & {\laski{206}} & {\laski{148}} & {\laski{126}} & {\laski{55.2}} & \\
& {\euemst} & {\laski{347}} & {\laski{200}} & {\laski{3.92}} & {\laski{9.34}} & {\laski{4832}} & {\laski{154}} & {\laski{124}} & {\laski{94.9}} & {\laski{53.2}} & \\
& {\mars} & {\laski{238}} & {\laski{144}} & {\laski{4.82}} & {\laski{7.41}} & {\laski{3615}} & {\laski{178}} & {\laski{133}} & {\laski{84.2}} & {\laski{47.6}} & \\
\bottomrule \end{tabular} }
\caption{
Average time to answer a query, from a fixed set of $10^6$ randomly generated path median
and path counting queries, in microseconds.
Path counting queries are given in {\texttt{large}}, {\texttt{medium}}, and {\texttt{small}} configurations. } \label{tables:countingAndReportingTimes} \end{small} \end{table*}
\subsection{Path median queries} The upper section of \Cref{tables:countingAndReportingTimes} records the mean time for a single median query (in \microsec) averaged over a fixed set of $10^6$ randomly generated queries.
Succinct structures {\wthpdrrr/\wthpdun/\extrrr/\extun} perform well on these queries, with a slow-down of at most $20$-$30$ times relative to their respective pointer-based counterparts. Using entropy-compression degrades the speed of {\wthpd} by almost a factor of two. Overall, the families {\wthpd} and {\ext} perform within the same order of magnitude. This is surprising, as in theory {\wthpd} should be a factor of ${\lg{n}}$ slower. The discrepancy is explained partly by the small average number of segments in the HPD, averaging $9\pm{}2$ for our queries. (The share of unary-degree nodes in our datasets is $35\%$-$56\%,$ which keeps the number of heavy-path segments small. \myenv{We did not use trees with few unary-degree nodes in our experiments, as the height of such trees is not large enough to make constructing data structures for path queries worthwhile.}) When the queries are partitioned by the number of chains in the HPD, the curves for {\extrrr/\extun} stay flat whereas those for {\wthpdrrr/\wthpdun} grow linearly (see \Cref{fig:avgVersusHpd} in \Cref{sec:timeVsHdp}). Take {\eudimacs} as an example. When the query path is partitioned into $9$ chains, {\extun} is only slightly faster than {\wthpdun}, but when the query path contains $19$ chains, {\extun} is about $2.3$ times faster. This suggests favouring the {\ext} family over {\wthpd} whenever worst-case performance is important. Furthermore, navigational operations in {\extun/\extrrr} and {\wthpdun/\wthpdrrr}, despite similar theoretical worst-case guarantees, involve different usage patterns of the {\texttt{rank/select}} primitives. For one, {\wthpdun/\wthpdrrr} does not call $LCA$ during the search -- the mapping of the search ranges when descending down the recursion is accomplished by a single {\texttt{rank}} call, whereas {\extun/\extrrr} computes $LCA$ at each level of descent (for its own analog of {\texttt{rank}} -- the {view{}} computation in \Cref{algo:algo02}).
Now, $LCA$ is a non-trivial combination of {\texttt{rank/select}} calls. The difference between {\extun/\extrrr} and {\wthpdun/\wthpdrrr} will therefore become pronounced in a large enough tree: once the HPD contains tangibly many chains, the $\lg{n}$ factor outweighs the constants involved in the (albeit theoretically $\mathcal{O}(1)$) $LCA$ calls.
Na{\"i}ve structures {\nv/\nvlca/\nvsucc} are visibly slower in \pathmed{} than in \pathcnt{} (considered in \Cref{sec:resultsCounting}), as expected --- for \pathmed, having collected the nodes encountered, we also call a selection algorithm. In \pathcnt{}, by contrast, neither insertions into a container nor a subsequent search for median are involved. Navigation and weights-uncompression in {\nvsucc} render it about $10$ times slower than its plain counterpart. The {\nvlca} being little less than twice faster than its $LCA$-devoid counterpart, \nv, is explained by the latter effectively traversing the query path twice --- once to locate the $LCA,$ and again to answer the query proper. \myenv{Any succinct solution is about $4$-$8$ times faster than the fastest na{\"i}ve, \nvlca.}
\subsection{Path counting queries}\label{sec:resultsCounting} The lower section in \Cref{tables:countingAndReportingTimes} records the mean time for a single counting query (in \microsec) averaged over a fixed set of $10^6$ randomly generated queries, for {\texttt{large}}, {\texttt{medium}}, and {\texttt{small}} setups.
Structures {\nv/\nvlca/\nvsucc} are insensitive to $\kappa,$ as the bottleneck is in physically traversing the path.
Succinct structures {\wthpdun/\wthpdrrr} and {\extun/\extrrr} exhibit decreasing running times as one moves from {\texttt{large}} to {\texttt{small}} ---
as the query weight-range shrinks, so does the chance of branching during the traversal of the implicit range tree. The fastest (uncompressed) {\wthpdun} and the slowest (compressed) {\extrrr} succinct solutions differ by a factor of $4,$ reflecting the intrinsically larger constants in {\extrrr}'s implementation compounded by the slower {\texttt{rank/select}} primitives of compressed bitmaps. \myenv{The uncompressed {\wthpdun} is about $2$-$3$ times faster than {\extun}, the gap narrowing towards the {\texttt{small}} setup.} The slowest succinct structure, {\extrrr}, is nonetheless competitive with {\nv/\nvlca} already in the {\texttt{large}} configuration, with the advantage of being insensitive to the tree topology.
In the {\extptr}/{\wthpdptr} pair, \wthpdptr{} is $2$-$3$ times slower. This is predictable, as the inherent $\lg{n}$-factor slow-down in {\wthpdptr} is no longer offset by differing memory access patterns -- following a pointer ``downwards'' (i.e.~the $0/1$-{view} in {\extptr} and {$\mathtt{rank}_{0/1}()$} in {\wthpdptr}) requires a single memory access in either case.
\begin{table*}
\begin{small} \centering
\ra{1.00}
\begin{tabular}{@{}lll*{9}{r}r@{}}
\toprule
&\multirow{1}{*}{Dataset} & \multirow{1}{*}{$\kappa$} & \multirow{1}{*}{\nv} & \multirow{1}{*}{\nvlca} & \multirow{1}{*}{\extptr} & \multirow{1}{*}{\wthpdptr} & \multirow{1}{*}{\nvsuc} & \multirow{1}{*}{\extrrr} & \multirow{1}{*}{\extun} & \multirow{1}{*}{\wthpdrrr} & \multirow{1}{*}{\wthpdun} & \\
\midrule
& {\euosm} & {\laski{9,840}} & {\laski{356}} & {\laski{256}} & {\laski{184}} & {\laski{70.7}} & {\laski{3766}} & \hatched{1}{4} & {\parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{\texttt{large}}}}} \\
& {\eudimacs} & {\laski{9,163}} & {\laski{309}} & {\laski{224}} & {\laski{147}} & {\laski{66.8}} & {\laski{3485}} & \hatched{2}{4} & {} \\
& {\euemst} & {\laski{14,211}} & {\laski{389}} & {\laski{241}} & {\laski{140}} & {\laski{77.5}} & {\laski{4926}} & \hatched{3}{4} & {} \\
& {\mars} & {\laski{10,576}} & {\laski{267}} & {\laski{178}} & {\laski{89.2}} & {\laski{55.1}} & {\laski{3668}} & \hatched{4}{4} & {} \\
\cmidrule{1-8}
& {\euosm} & {\laski{1,093}} & {\laski{322}} & {\laski{222}} & {\laski{43.7}} & {\laski{28.8}} & {\laski{3706}} & \hatched{5}{4} & {\parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{\texttt{medium}}}}} \\
& {\eudimacs} & {\laski{1,090}} & {\laski{277}} & {\laski{196}} & {\laski{34.0}} & {\laski{29.7}} & {\laski{3434}} & \hatched{6}{4} & {} \\
& {\euemst} & {\laski{1,464}} & {\laski{354}} & {\laski{206}} & {\laski{32.1}} & {\laski{20.1}} & {\laski{4880}} & \hatched{7}{4} & {} \\
& {\mars} & {\laski{1,392}} & {\laski{250}} & {\laski{151}} & {\laski{22.1}} & {\laski{15.6}} & {\laski{3639}} & \hatched{8}{4} & {} \\
\cmidrule{1-12}
& {\euosm} & {\laski{182}} & {\laski{311}} & {\laski{212}} & {\laski{13.8}} & {\laski{19.0}} & {\laski{3685}} & {\laski{1965}} & {\laski{485}} & {\laski{795}} & {\laski{226}} & \parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{\texttt{small}}}} \\
& {\eudimacs} & {\laski{236}} & {\laski{271}} & {\laski{193}} & {\laski{13.2}} & {\laski{21.0}} & {\laski{3529}} & {\laski{2518}} & {\laski{632}} & {\laski{1043}} & {\laski{292}} & \\
& {\euemst} & {\laski{215}} & {\laski{353}} & {\laski{203}} & {\laski{10.2}} & {\laski{12.7}} & {\laski{4873}} & {\laski{1276}} & {\laski{378}} & {\laski{590}} & {\laski{205}} & \\
& {\mars} & {\laski{117}} & {\laski{242}} & {\laski{145}} & {\laski{8.88}} & {\laski{9.57}} & {\laski{3632}} & {\laski{881}} & {\laski{278}} & {\laski{475}} & {\laski{162}} & \\
\bottomrule \end{tabular}
\caption{Average time to answer a path reporting query, from a fixed set of $10^6$ randomly generated path
reporting queries, in microseconds. The queries are given
in {\texttt{large}}, {\texttt{medium}}, and {\texttt{small}} configurations. Average output size
for each group is given in column $\kappa.$} \label{tables:reportingTimes} \HatchedCell{start1}{end1}{
pattern color=black!70,pattern=north east lines} \HatchedCell{start2}{end2}{
pattern color=black!70,pattern=north west lines} \HatchedCell{start3}{end3}{
pattern color=black!70,pattern=north east lines} \HatchedCell{start4}{end4}{
pattern color=black!70,pattern=north west lines} \HatchedCell{start5}{end5}{
pattern color=black!70,pattern=north east lines} \HatchedCell{start6}{end6}{
pattern color=black!70,pattern=north west lines} \HatchedCell{start7}{end7}{
pattern color=black!70,pattern=north east lines} \HatchedCell{start8}{end8}{
pattern color=black!70,pattern=north west lines}
\end{small} \end{table*}
\subsection{Path reporting queries} \Cref{tables:reportingTimes} records the mean time for a single reporting query (in \microsec) averaged over a fixed set of $10^6$ randomly generated queries, for {\texttt{large}}, {\texttt{medium}}, and {\texttt{small}} setups.
Structures {\wthpdrrr/\wthpdun/\extrrr/\extun} recover each reported node's weight in $\mathcal{O}(\lg\sigma)$ time. Thus, when $\lg{n} \ll \kappa$, the query time for both the {\ext} and {\wthpd} families becomes $\mathcal{O}(\kappa\cdot{}\lg\sigma).$ (At this juncture, a caveat is in order: the design of {\wthpd} in \Cref{sec:HPD} allows a \pathrp-query to return only the {\textit{index in the array $C$}} --- not the original preorder identifier of the node, as the {\ext} does.) When $\kappa$ is large, therefore, these structures are not suitable for \pathrp, as \nv/\nvlca/\nvsucc are clearly superior ($\mathcal{O}((1+\kappa)\lg{n})$ {\textit{vs}} $\mathcal{O}(\kappa)$), and we confine the experiments for {\extrrr/\extun/\wthpdrrr/\wthpdun} to the {\texttt{small}} setup only (bottom-right corner in \Cref{tables:reportingTimes}).
We observe that the succinct structures {\extun} and {\wthpdun} are competitive with {\nv/\nvlca} in the {\texttt{small}} setting: informally, the time saved in locating the nodes to report is spent uncompressing the nodes' weights (whereas in {\nv/\nvlca} the weights are stored explicitly). Between the succinct {\ext} and {\wthpd}, {\wthpd} is clearly faster, as {\texttt{select()}} on a sequence, as we go up the wavelet tree, tends to have lower constant factors than the counterpart operation on BP.
Structures {\wthpdptr} and {\extptr} exhibit the same order of magnitude in query time, with the former sometimes being about $2$ times faster on non-{\texttt{small}} setups. There are two somewhat intertwined reasons. One is that {\wthpdptr} returns an index into the permuted array, as noted above. (Converting to the original identifier would necessitate an additional memory access.) Secondly, during the $2d$ search in the implicit range tree of {\wthpdptr}, when the current range is contained within the query interval, we report the node weights by merely incrementing a counter --- a position in the WT sequence. By contrast, in such situations {\extptr} iterates through the nodes being reported, calling {\texttt{parent()}} for the current node, which is one additional memory access compared to {\wthpdptr} (at the scale of \microsec, this matters). Indeed, operations on trees tend to be a little more expensive than similar operations on sequences.
Structures {\nv/\nvlca/\nvsucc} are less sensitive to the magnitude of the query weight range, since they simply scan the path, pushing the qualifying nodes into a container. The differences in running time between the configurations in \Cref{tables:reportingTimes} are thus accounted for by the cost of container operations. Since the na{\"i}ve structures' query times for \pathrp{} depend solely on the length of the query path, they are infeasible for large-diameter trees (whereas they may be suitable for shallow ones, e.g.\ those originating from ``small-world'' networks).
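This linear scan is straightforward; a minimal Python model of it (with hypothetical parent/weight/depth arrays standing in for the actual \nv{} layout) may be sketched as follows:

```python
def report_path(parent, weight, depth, u, v, lo, hi):
    """Naively report nodes with weight in [lo, hi] on the u-v path.

    parent/weight/depth are plain arrays indexed by node id; the root
    satisfies parent[root] == root.  Runs in O(path length) time.
    """
    out = []
    # Climb the deeper endpoint first; once depths are equal the two
    # walks advance alternately until they meet at the LCA.
    while u != v:
        if depth[u] < depth[v]:
            u, v = v, u
        if lo <= weight[u] <= hi:
            out.append(u)
        u = parent[u]
    if lo <= weight[u] <= hi:          # the LCA itself
        out.append(u)
    return out
```

On a path of length $\ell$ this performs $\Theta(\ell)$ work regardless of how many nodes match, which is exactly the behaviour discussed above.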
\pgfplotsset{every axis/.append style={
label style={font=\ttfamily},
y tick label style={font=\small\ttfamily,rotate=90},
x tick label style={font=\small\ttfamily}
}}
\tikzset{>=stealth}
\begin{figure}
\caption{Visualization of some of the entries in \Cref{tables:countingAndReportingTimes}. Inner rectangle magnifies the mutual configuration of the succinct data structures {\wthpdun},{\wthpdrrr},{\extun}, and {\extrrr}. The succinct na{\"i}ve structure {\nvsucc} is not shown.}
\label{fig:fig002}
\end{figure}
\subparagraph{Overall evaluation.} We visualize in \Cref{fig:fig002} some typical entries of \Cref{tables:countingAndReportingTimes} to illustrate how the structures cluster along the space/time trade-off: {\nv/\nvlca} (upper-left corner) are light in terms of space, but slow; the pointer-based {\extptr/\wthpdptr} are very fast, but space-heavy. Between the two extremes of the spectrum, the succinct structures {\extrrr/\extun/\wthpdrrr/\wthpdun}, whose mutual configuration is shown magnified in the inner rectangle, are space-economical and yet offer fast query times.
\section{Conclusion}\label{sec:conclusion} We have designed and experimentally evaluated recent algorithmic proposals for path queries in weighted trees, either faithfully replicating them or offering practical alternatives. Our data structures include both plain pointer-based and succinct implementations. Our succinct realizations are themselves further specialized to be either plain or entropy-compressed.
We measure both the query-time and space performance of our data structures on large practical datasets. We find that the succinct structures we implement offer an attractive alternative to plain pointer-based solutions in scenarios where space and query time are critical and a reasonable slow-down can be tolerated. Some of the structures we implement (\wthpdrrr) occupy space equal to the bare compressed storage (\nvsucc) of the object and yet offer fast queries on top of it, while others (\extrrr/\extun) occupy space comparable to \nvsucc, offer fast queries, and have low peak memory during construction. While the {\wthpd} succinct family performs well in the average case, thus offering attractive trade-offs between query time and space occupancy, {\ext} is robust to the structure of the underlying tree and is therefore recommended when strong worst-case guarantees are vital.
Our design of the practical succinct structure based on tree extraction (\ext) results in a theoretical space occupancy of $3n\lg\sigma+\smallO(n\lg\sigma)$ bits, which helps explain its somewhat higher empirical space cost when compared to the succinct {\wthpd} family. At the same time, verbatim implementation of the space-optimal solution by He et al.~\cite{DBLP:journals/talg/0001MZ16} draws on components that are likely to be cumbersome in practice. For the path query types considered in this study, therefore, realization of the theoretically time- and space-optimal data structure --- or indeed some feasible alternative thereof --- remains an interesting open problem in algorithm engineering.
\appendix
\section{Query Algorithms}\label{appendix:queryAlgos}\label{appendix:queryOverExt} We enter \Cref{algo:algo00} with several parameters -- the current tree $T$, the query nodes $u,v$, their $LCA$ $z$, the rank $k$ we are looking for, the weight range $[a,b],$ and a number $w.$ These are initially set, respectively, to the outermost tree, the original query nodes, the $LCA$ of the original query nodes, the median's index (i.e.~half the length of the corresponding path in the original tree), the weight range $[1..\sigma],$ and the weight of the $LCA$ of the original nodes. We maintain the invariant that $T$ is weighted over $[a,b]$ and that $z$ is the $LCA$ of $u$ and $v$ in $T.$ Line~\ref{algo:algo00lineref01} checks whether the current tree is weight-homogeneous. If it is, we immediately return the current weight $a$ (line~\ref{algo:algo00lineref02}). Otherwise, the value we are looking for lies either in the left or in the right half of the weight range $[a,b].$ In lines~\ref{algo:algo00lineref05}-\ref{algo:algo00lineref11} we check, successively, the ranges $[a_0,b_0]$ and $[a_1,b_1]$ to determine how many nodes on the path from $u$ to $v$ in $T$ have weights in the corresponding interval. The accumulator variable {\texttt{acc}} keeps track of these counts and always remains at most $k.$ When the next value of {\texttt{acc}} is about to exceed $k$ (line~\ref{algo:algo00lineref11}), we know that the current weight interval is the one we should descend into (line~\ref{algo:algo00lineref12}). The invariants are maintained in line~\ref{algo:algo00lineref06}: there, we calculate the view{}s of the current nodes $u,v,$ and $z$ in the extracted tree we descend into.
It is clear that $\mathcal{O}(\lg\sigma)$ levels of recursion are explored. At each level of recursion, a constant number of {\texttt{view\_of()}} and {\texttt{depth()}} operations is performed (lines~\ref{algo:algo00lineref06}-\ref{algo:algo00lineref07}). Hence, assuming $\mathcal{O}(1)$ time for these operations, we obtain an $\mathcal{O}(\lg\sigma)$ query-time algorithm overall.
\begin{small} \begin{algorithm}[H]
\caption{Selection: return the $k$-th smallest weight on the path from $u \in T$ to $v \in T$}
\label{algo:algo00}
\small
\begin{algorithmic}[1]
\Require{$z = LCA(u,v),\,a\leq b,\,\,k \geq 0$}
\Function{select}{$T,u,v,z,k,w,[a..b]$}
\If{$a == b$} \label{algo:algo00lineref01}
\State \Return{$a$} \label{algo:algo00lineref02}
\EndIf
\Let{$\mathtt{acc}$}{$\mathtt{0}$}
\For{$\mathtt{t \in 0..1}$} \label{algo:algo00lineref05}
\Let{$\mathtt{iu,iv,iz}$}{$\mathtt{view\_of(u,t),\,view\_of(v,t),\,view\_of(z,t)}$} \label{algo:algo00lineref06}
\Let{$\mathtt{du,dv,dz}$}{$\mathtt{depth(B_t,iu),\,depth(B_t,iv),\,depth(B_t,iz)}$} \label{algo:algo00lineref07}
\Let{$\mathtt{dw}$}{$\mathtt{du+dv-2\cdot{}dz}$}
\If{$\mathtt{a_t \leq w \leq b_t}$} \Comment{$[a_0..b_0] = [a..c],\,\,[a_1..b_1] = [c+1..b],\,\,c=(a+b)/2$}
\Let{$\mathtt{dw}$}{$\mathtt{dw+1}$}
\EndIf
\If{$\mathtt{acc+dw > k}$} \label{algo:algo00lineref11}
\State \Return{\sc{select}($\mathtt{T_t,iu,iv,iz,k-acc,w,[a_t..b_t]}$)} \label{algo:algo00lineref12}
\EndIf
\Let{$\mathtt{acc}$}{$\mathtt{acc+dw}$}
\EndFor
\Assert{\texttt{false}}; \Comment{unreachable statement -- line~\ref{algo:algo00lineref12} should execute at some point}
\EndFunction
\end{algorithmic}
\end{algorithm} \end{small}
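Stripped of the tree machinery, the weight-range halving of \Cref{algo:algo00} is easiest to see on a plain multiset of weights. The following Python sketch is an illustration only (the real algorithm never materialises the weights, but obtains the counts from node depths in the extracted trees); it selects the $k$-th smallest value by the same descent:

```python
def range_select(weights, k, a, b):
    """Return the k-th smallest (0-based) value in `weights`,
    assuming every value lies in [a, b], by halving the range."""
    while a < b:
        c = (a + b) // 2
        left = sum(1 for w in weights if w <= c)   # count in [a, c]
        if k < left:                               # descend left
            weights = [w for w in weights if w <= c]
            b = c
        else:                                      # descend right
            weights = [w for w in weights if w > c]
            k -= left
            a = c + 1
    return a
```

Each iteration halves $[a,b]$, so $\mathcal{O}(\lg\sigma)$ levels are visited, mirroring the analysis above.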
\Cref{algo:extractionCounting} is adapted from~\cite{DBLP:journals/talg/0001MZ16}, and reasoning similar to \Cref{algo:algo00} applies. Now we have a query weight range $[p,q],$ and we maintain that $[p,q] \cap{} [a,b] \neq \emptyset$ (when the intersection is empty, we return immediately in line~\ref{listingextractionCountinglabel02}). In line~\ref{listing:extractionCountinglabel01} we check whether the current range $[a,b]$ lies completely inside the query range $[p,q]$. If so, we return all the nodes (if the $report$ argument is set to {\sc{True}}) and the number thereof (for the counting case). If not, we descend into $T_0$ and $T_1$ (line~\ref{listing:extractionCountinglabel03}), as discussed previously. \Cref{algo:extractionCounting} emulates the traversal of a path in a range tree, maintaining the current weight range $[a,b]$ and halving it at each step (line~\ref{lines:line14}). As the operations in lines~\ref{lines:line15} and \ref{lines:line16} are constant-time, the algorithm runs in $\mathcal{O}(\lg{\sigma})$ time.
\begin{algorithm}[H]
\caption{Counting and reporting.}
\label{algo:extractionCounting}
\small
\begin{algorithmic}[1]
\Require{$z = LCA(u,v),\,p\leq q$}
\Function{countreport}{$T,u,v,z,w,[p,q],[a,b],vec=null,report=False$}
\If{$p \leq a \leq b \leq q$} \label{listing:extractionCountinglabel01}
\If{$report$}
\For{$\mathtt{pu} \in \mathcal{A}(u)\,\textrm{and}\,\mathtt{pu \neq z}$}
\Let{$\mathtt{vec}$}{$\mathtt{vec+original\_node(pu)}$}
\EndFor
\For{$\mathtt{pv} \in \mathcal{A}(v)\,\textrm{and}\,\mathtt{pv \neq z}$}
\Let{$\mathtt{vec}$}{$\mathtt{vec+original\_node(pv)}$}
\EndFor
\If{$a \leq w \leq b$}
\Let{$\mathtt{vec}$}{$\mathtt{vec+original\_node(z)}$}
\EndIf
\EndIf
\State \Return{$\mathtt{depth(u)+depth(v)-2depth(z)+1_{w \in [a,b]}}$}
\EndIf
\If{$[p,q] \cap{} [a,b] = \emptyset$}
\State \Return{0} \label{listingextractionCountinglabel02}
\EndIf
\Let{$\mathtt{res}$}{$\mathtt{0}$}
\For{$\mathtt{t \in 0..1}$}\label{listing:extractionCountinglabel03} \Comment{$[a_0,b_0] = [a,m],\,\,[a_1,b_1] = [m+1,b],\,\,m=(a+b)/2$} \label{lines:line14}
\Let{$\mathtt{iu,iv,iz}$}{$\mathtt{view\_of(u,t),\,view\_of(v,t),\,view\_of(z,t)}$} \label{lines:line15}
\Let{$\mathtt{du,dv,dz}$}{$\mathtt{depth(B_t,iu),\,depth(B_t,iv),\,depth(B_t,iz)}$} \label{lines:line16}
\Let{$\mathtt{res}$}{$\mathtt{res}$+{\sc{countreport}($\mathtt{T_t,iu,iv,iz,w,[p,q],[a_t,b_t],vec,report}$)}}
\EndFor
\State \Return{$\mathtt{res}$}
\EndFunction
\end{algorithmic} \end{algorithm}
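The counting descent of \Cref{algo:extractionCounting} admits the same array-level illustration: the recursion visits a node of the implicit range tree over $[a,b]$, stops when the current range is covered by or disjoint from the query range $[p,q]$, and otherwise splits. (Again, the real structure obtains the counts from depths in the extracted trees; the lists below are purely illustrative.)

```python
def range_count(weights, p, q, a, b):
    """Count values of `weights` (all assumed in [a, b]) that fall
    in [p, q], by descending an implicit range tree over [a, b]."""
    if p <= a and b <= q:          # current range inside the query
        return len(weights)
    if q < a or b < p:             # ranges are disjoint
        return 0
    m = (a + b) // 2               # halve the weight range
    return (range_count([w for w in weights if w <= m], p, q, a, m) +
            range_count([w for w in weights if w > m], p, q, m + 1, b))
```

As in the pseudocode, only $\mathcal{O}(\lg\sigma)$ range-tree nodes are expanded before the covered/disjoint cases terminate each branch.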
\section{Query-Time Performance Controlled for the Length of HPD}\label{sec:timeVsHdp} \pgfplotsset{every axis/.append style={
label style={font=\ttfamily},
x tick label style={font=\scriptsize\ttfamily}
}}
\begin{figure}
\caption{
Average time to answer a path median query, controlled for the number of segments in heavy-path decomposition,
in microseconds. Random fixed query set of size $10^6.$ }
\label{fig:avgVersusHpd}
\end{figure}
\end{document}
\begin{document}
\begin{frontmatter}
\title{Asymptotic behaviour of dynamical systems with plastic self-organising vector fields}
\author{N.B. Janson$^{1}$ and P.E. Kloeden$^{2}$}
\address{$^{1}$Department of Mathematics, Loughborough University, Loughborough LE11 3TU, UK\\ $^{2}$Mathematics Department, University of T\"ubingen, T\"ubingen 72074, Germany}
\begin{abstract} In [Janson $\&$ Marsden 2017] a dynamical system with a plastic self-organising velocity vector field was introduced, which was inspired by the architectural plasticity of the brain and proposed as a possible conceptual model of a cognitive system. Here we provide a more rigorous mathematical formulation of this problem, make several simplifying assumptions about the form of the model and of the applied stimulus, and perform its mathematical analysis. Namely, we explore the existence, uniqueness, continuity and smoothness of both the plastic velocity vector field controlling the observable behaviour of the system, and of the behaviour itself. We also analyse the existence of pullback attractors and of forward limit sets in such a non-autonomous system of a special form. Our results verify the consistency of the problem, which was only assumed in the previous work, and pave the way to constructing models with more sophisticated cognitive functions. \end{abstract}
\begin{keyword} non-autonomous dynamical system, plastic spontaneously evolving velocity field, pullback attractor, model of cognition \end{keyword}
\end{frontmatter}
\section{Introduction}
It has been widely believed that replication in hardware or software of the brain's physical architecture would automatically replicate the brain's cognitive functions. To a certain extent, this assumption has been correct, since artificial neural networks have been highly successful in a range of applications requiring classification and pattern recognition \cite{Hassabis_brain-inspired_AI_Neu17}. However, they are still far away from demonstrating human-level intelligence. There is a growing appreciation that to recreate the brain's functions, one needs to reveal and reproduce the brain's working principles rather than the architecture per se. To achieve that, it is necessary to answer a number of fundamental questions posed by neuroscience, such as how memories are represented in the brain, how behaviour could be linked to the brain substance, and how cognitive processes could be described in rigorous terms \cite{Damasio_how_brain_creates_mind_SciAm99,Carandini_from_circuits_to_behavior_NN12,Abbott_solving_the_brain_Nat13,Yuste_new_century_of_brain_SA14,Grillner_criticism_neuroscience_Neu14,Katkov_memory_retrieval_Neuron17,Siegfried_long_way_in_understanding_brain_SciNews17}.
Given that the neurons in the brain fire {\it spontaneously}, biologically relevant brain models take the form of dynamical systems with continuous time \cite{Hodgkin_nuron_model_JP52, Leon_virtual_brain_Scholarpedia13}, whose key element is the velocity vector field, which combines two features. Firstly, assuming that the model of a spontaneously evolving device is derived from first principles and is accurate, its velocity field is a mathematical representation of the device's physical architecture \cite{Janson_conceptual_brain_model_SR17_SN}. Secondly, this field is a mathematical expression of the force fully controlling the device's behaviour, and is an embodiment of the full set of behavioural rules.
In \cite{Janson_conceptual_brain_model_SR17} it was proposed that looking at the brain through the prism of its velocity vector field offers a solution to a number of fundamental questions asked by neuroscience. Firstly, the velocity field of the brain could represent the sought-after link between its physical properties and the resultant behaviour. Secondly, by hypothesising that memories could be imprints on the brain's velocity field, one could unify several dominating memory theories. Thirdly, the principles of the brain cognitive function could amount to the ability to create a plastic self-organising velocity vector field evolving according to certain rules, which need to be of an appropriate form to enable cognition. Ultimately, it has been suggested that to be cognitive, the system does not necessarily need to be a neural network, but rather to be capable of spontaneous modifications of its velocity vector field according to some suitable rules.
Thus, a conceptual model of a cognitive system has been proposed, which represents a dynamical system with a plastic self-organising velocity vector field. The standard part of this model is given by the non-autonomous differential equation: \begin{equation}\label{xeqn0} \frac{dx}{dt} = a(x,t).
\end{equation} Here, $x$ $\in$ $\mathbb{R}^d$ is the state of the cognitive system at any time $t$, which in the brain would be a collection of states of all the neurons, and
$a$ $\in$ $\mathbb{R}^d$ is the velocity vector field governing evolution of the state. The unconventional part of the proposed model assumes that the vector field $a$ evolves with time
{\it spontaneously} according to some pre-defined deterministic rules and is affected by a stimulus $\eta(t)$ $\in$ $\mathbb{R}^m$, $m$ $\le$ $d$, as expressed by an equation \begin{equation}\label{aeqn0} \frac{\partial a}{\partial t} = c(a,x,\eta(t),t),
\end{equation}
with $c$ taking values in $\mathbb{R}^d$.
In \cite{Janson_conceptual_brain_model_SR17} equation \eqref{aeqn0} was interpreted as a (degenerate) partial differential equation (PDE). For the mathematical analysis in this paper it is more appropriate to regard \eqref{aeqn0} as an ordinary differential equation (ODE) for the unknown variable $a$ which depends on the parameter $x$, and to write it as $\frac{da}{dt}=c(a,x,\eta(t),t)$. For additional clarity, \eqref{aeqn0} could be re-written as a parameterised ODE $$ \frac{d}{dt} a(z,t) = c(a(z,t),z,\eta(t),t), \qquad z \in \mathbb{R}^d, $$ since the parameter $z$ does not evolve according to the ODE \eqref{xeqn0}.
Note, that although it has long been acknowledged that the brain can be regarded as a dynamical system \cite{vanGelder_98}, consideration of the brain within the classical framework of the dynamical systems theory did not fully explain its abilities for cognition and adaptation. As a possible reason for this, it has been suggested that the theory of dynamical systems has not been sufficiently developed and required extensions directly relevant to cognition \cite{Crutchfield_dyn_embod_cogn_BBS98}.
In \cite{Janson_conceptual_brain_model_SR17} the velocity vector field with an ability to self-organize has been suggested as an extension of the dynamical systems theory, which could explain adaptation of the behaviour to the environment as a spontaneous modification of the behavioural {\it rules} specified by this field, where this modification is induced by the self-organised architectural plasticity of the brain. Under the assumption that the conceptual model \eqref{xeqn0}--\eqref{aeqn0} was mathematically consistent and $a(t)$ stayed smooth for all $t$, a simple example of \eqref{aeqn0} was constructed, analysed numerically and shown to perform some basic cognition. One could potentially build more sophisticated examples of $c$ leading to more advanced information-processing functions. However, before this can be done, one needs to address the consistency of \eqref{xeqn0}--\eqref{aeqn0} and the conditions on $c$ and $\eta(t)$ in \eqref{aeqn0} under which the solution of \eqref{xeqn0} exists and is unique, which has not been done to date and which is the purpose of the current paper.
In Section \ref{sec_ps} we formulate a mathematical problem to be solved here. In Section \ref{sec_exist} we establish the existence and uniqueness of solutions of the system \eqref{Xeqn}--\eqref{Aeqn}. In Section \ref{sec_asym} we show that the first equation \eqref{Xeqn} has a global non-autonomous attractor. In Section \ref{disc} we discuss the results obtained.
\section{Problem statement} \label{sec_ps}
We can interpret \eqref{xeqn0}--\eqref{aeqn0} as a system of ODEs with an unconventional structure, \begin{eqnarray}\label{Xeqn} \frac{dx(t)}{dt} & = & a(x(t),t) \\[1.4ex] \label{Aeqn} \frac{d}{dt} a(z,t) &=& c(a(z,t),z,\eta(t),t), \qquad z \in \mathbb{R}^d, \end{eqnarray} with solutions $x(t)$ and $a(z,t)$ taking values in $\mathbb{R}^d$. The solution $a(t)$ of \eqref{Aeqn} depends on $z$ $\in$ $\mathbb{R}^d$ as a fixed parameter. Note that the solution $x(t)$ of the first equation \eqref{Xeqn} is not inserted into the second equation \eqref{Aeqn}, i.e. equation \eqref{Aeqn} is decoupled from equation \eqref{Xeqn}. Essentially, we need to solve the second equation \eqref{Aeqn} first, independently of equation \eqref{Xeqn}, to obtain the vector field for the first equation \eqref{Xeqn}.
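This decoupling suggests a natural numerical treatment: first advance the field $a(z,t)$ on a grid of $z$ values, then advance the state $x(t)$ using the current field. A pure-Python explicit-Euler sketch in one dimension ($d=m=1$); the particular $c$, stimulus, grid and step size used below are illustrative assumptions, not part of the model:

```python
def simulate(c, eta, a0, x0, t0, t1, dt=1e-3, zmin=-5.0, zmax=5.0, nz=201):
    """Euler-integrate the plastic field a(z,t) on a z-grid (the
    a-equation), then the observable state x(t) (the x-equation),
    using linear interpolation of the field at the current state."""
    zs = [zmin + i * (zmax - zmin) / (nz - 1) for i in range(nz)]
    a = [a0(z) for z in zs]
    x, t = x0, t0
    while t < t1:
        # field update: each grid value evolves independently of x
        a = [ai + dt * c(ai, z, eta(t), t) for ai, z in zip(a, zs)]
        # state update: interpolate the field at the current state x
        i = min(max(int((x - zmin) / (zmax - zmin) * (nz - 1)), 0), nz - 2)
        frac = (x - zs[i]) / (zs[i + 1] - zs[i])
        x += dt * ((1 - frac) * a[i] + frac * a[i + 1])
        t += dt
    return x
```

For instance, with $c(a,z,y,t)=-a-z$ the field relaxes towards $a(z)=-z$, and the observable state correspondingly decays to the origin.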
The purpose of this paper is to provide a more precise mathematical formulation and an analysis of the modelling and numerical work in \cite{Janson_conceptual_brain_model_SR17}. In particular, we show that the system \eqref{Xeqn}--\eqref{Aeqn} is well posed in the sense of the global existence and uniqueness of its solutions.
It is also important to consider the long-term behaviour of the newly introduced system, which is usually described by attractors. The concept of an attractor has been successfully extended from the autonomous to the standard non-autonomous dynamical systems of the form $\frac{dx}{dt}$ $=$ $f(x,t)$, where $f$ is some fixed vector field function \cite{Kloeden_NDS_book11}. However, the existence of an attractor where the vector field itself evolves {\it spontaneously} according to \eqref{Aeqn} needs to be proved. We show that, under a mild dissipativity assumption, the non-autonomous system generated by \eqref{Xeqn} has a non-autonomous (or random) attractor.
\section{Existence and uniqueness of solutions} \label{sec_exist}
In \cite{Janson_conceptual_brain_model_SR17} it was proposed that in \eqref{xeqn0}--\eqref{aeqn0} the stimulus $\eta(t)$ is used both to contribute to the modification of the vector field $a$ according to \eqref{aeqn0}, and to regularly reset the initial conditions of \eqref{xeqn0}. Here, we consider a simplified case, in which we allow $\eta(t)$ only to affect evolution of $a$. Therefore, when considering system \eqref{Xeqn}--\eqref{Aeqn} we will handle equation \eqref{Xeqn} separately from \eqref{Aeqn}, assuming that the vector field $a$ is known.
The existence and uniqueness of solutions of equations \eqref{Xeqn} and \eqref{Aeqn}
require at least a local Lipschitz property of the right-hand sides $a$ and $c$ in the corresponding state variable, while the existence of an attractor in \eqref{Xeqn} requires a dissipativity property.
Since the equations \eqref{Xeqn}--\eqref{Aeqn} represent a non-autonomous or a random system, we need to consider them on the entire time axis $t$ $\in$ $\mathbb{R}$ (see the discussion in Section \ref{disc} for when this does not hold). In particular, the vector field $a$ should be defined for all values of time $t$ $\in$ $\mathbb{R}$. Below we formulate our assumption on the stimulus $\eta$.
\begin{assumption} \label {assumpS1} $\eta$ $:$ $\mathbb{R}$ $\rightarrow$ $\mathbb{R}^m$ is continuous. \end{assumption} This stimulus signal is considered as a given and fixed input in the model.
\subsection{Existence and uniqueness of the observable behaviour $x(t)$ of \eqref{Xeqn}}
\begin{assumption} \label {assumpA1} $a$ $:$ $\mathbb{R}^d\times \mathbb{R}$ $\rightarrow$ $\mathbb{R}^d$ and $\nabla_x a(x,t)$ $:$ $\mathbb{R}^d\times \mathbb{R}$ $\rightarrow$ $\mathbb{R}^{d\times d}$ are continuous in both variables $(x,t)$. \end{assumption}
This assumption ensures the vector field $a$ is locally Lipschitz in $x$. Hence, by standard theorems (see Walter \cite[Chapter 2]{Walter_ODE_book98}), there exists a unique solution $x(t)$ $=$ $x(t;t_0,x_0)$ of the ODE \eqref{Xeqn} for each initial condition $x(t_0)$ $=$ $x_0$, at least for a short time interval.
\begin{assumption} \label{assumpA2}
$a$ $:$ $\mathbb{R}^d\times \mathbb{R}$ $\rightarrow$ $\mathbb{R}^d$ satisfies the dissipativity condition $\left<a(x,t),x\right>$ $\leq$ $-1$ for $\|x\|$ $\geq$ $R^*$ for some $R^*$. \end{assumption}
\noindent (Here $\|a\|$ $=$ $\sqrt{\sum_{i=1}^d a_i^2}$ is the Euclidean norm on $\mathbb{R}^d$ and $\left<a,b\right>$ $=$ $\sum_{i=1}^d a_i b_i$ is the corresponding inner product, for vectors $a$, $b$ $\in$ $\mathbb{R}^d$.)
This assumption (which may be stronger than we really need, but avoids assumptions about the specific structure of $a$) ensures that the ball $B^*$ $:=$ $\{x \in \mathbb{R}^d : \|x\| \leq R^*+1\}$ is positive invariant. This follows from the estimate $$
\frac{d}{dt} \|x(t)\|^2 = 2 \left<x(t),a(x(t),t) \right> \, \,\leq \,\, -1 \qquad \mbox{if}\,\, \|x(t)\| \geq R^* $$ and in turn ensures that the solution of the ODE \eqref{Xeqn} exists for all future time $t$ $\geq$ $t_0$. We thus formulate the following theorem. \begin{theorem} \label{thm1} Suppose that Assumptions \eqref{assumpS1}, \eqref{assumpA1} and \eqref{assumpA2} hold. Then for every initial condition $x(t_0)$ $=$ $x_0$, the ODE \eqref{Xeqn} has a unique solution
$x(t)$ $=$ $x(t;t_0,x_0)$, which exists for all $t$ $\geq$ $t_0$. Moreover, these solutions are continuous in the initial conditions, i.e., the mapping $(t_0,x_0)$ $\mapsto$ $x(t;t_0,x_0)$ is continuous.
\end{theorem}
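The invariance of $B^*$ is easy to confirm numerically for a concrete field. For the illustrative choice $a(x,t) = -x + (\sin t, \cos t)$ in $\mathbb{R}^2$ (not taken from the model) one has $\left<a(x,t),x\right> \leq -\|x\|^2 + \|x\| \leq -2$ whenever $\|x\| \geq R^* = 2$, so Assumption \eqref{assumpA2} holds and the ball $\|x\| \leq R^*+1 = 3$ should trap trajectories:

```python
import math

def step(x, t, dt):
    """One Euler step of dx/dt = a(x,t) for the illustrative field
    a(x,t) = -x + (sin t, cos t), which satisfies <a(x,t),x> <= -1
    whenever ||x|| >= R* = 2, so the ball ||x|| <= 3 is invariant."""
    ax = -x[0] + math.sin(t)
    ay = -x[1] + math.cos(t)
    return (x[0] + dt * ax, x[1] + dt * ay)

def trajectory_stays_in_ball(x0, t0=0.0, T=20.0, dt=1e-3, radius=3.0):
    """Check numerically that the Euler trajectory from x0 never
    leaves the ball of the given radius."""
    x, t = x0, t0
    inside = True
    while t < t0 + T:
        x = step(x, t, dt)
        t += dt
        if math.hypot(x[0], x[1]) > radius:
            inside = False
    return inside
```

Even trajectories started on the boundary of the ball remain inside it, consistent with the estimate above.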
\subsection{Existence and uniqueness of the vector field $a(x,t)$ as a solution of \eqref{Aeqn}}
The ODE \eqref{Aeqn} for the
velocity field $a(x,t)$ is independent of the
solution $x(t;t_0,x_0)$ of the ODE \eqref{Xeqn}. We need the following assumption to provide the existence and uniqueness of $a(x,t)$ for all future times $t$ $>$ $t_0$ and to ensure that this solution satisfies Assumptions \eqref{assumpA1} and \eqref{assumpA2}.
\begin{assumption} \label {assumpC1} $c$ $:$ $\mathbb{R}^d\times \mathbb{R}^d \times \mathbb{R}^m\times \mathbb{R}$ $\rightarrow$ $\mathbb{R}^d$ and $\nabla_a c $ $:$ $\mathbb{R}^d \times \mathbb{R}^d \times \mathbb{R}^m\times \mathbb{R}$ $\rightarrow$ $\mathbb{R}^{d\times d}$ are continuous in all variables. \end{assumption}
This assumption ensures the vector field $c$ is locally Lipschitz in $a$. Hence, by standard theorems (see Walter \cite[Chapter 2]{Walter_ODE_book98}), there exists a unique solution $a(t;t_0,a_0)$ of the ODE \eqref{Aeqn} for each initial condition $a(t_0)$ $=$ $a_0$, at least for a short time interval. This solution also depends continuously on the parameter $x$ $\in$ $\mathbb{R}^d$. To ensure that the solutions can be extended for all future times $t$, we need a growth bound such as in the following assumption. \begin{assumption} \label {assumpC2}
There exist constants $\alpha$ and $\beta$ (which need not be positive) such that $\left<a, c(a,x,y,t)\right>$ $\leq$ $\alpha \|a\|^2 +\beta$ for all $(x,y,t)$ $\in$ $\mathbb{R}^d \times \mathbb{R}^m\times \mathbb{R}$. \end{assumption}
The next assumption ensures that the solution of the ODE \eqref{Aeqn}, which we now write as $a(x,t)$, is continuously differentiable and hence locally Lipschitz in $x$, provided that the initial value $a(x,t_0)$ $=$ $a_0(x)$ is continuously differentiable. \begin{assumption} \label {assumpC3}
$\nabla_x c $ $:$ $\mathbb{R}^d \times \mathbb{R}^d \times \mathbb{R}^m\times \mathbb{R}$ $\rightarrow$ $\mathbb{R}^{d\times d}$ is continuous in all variables. \end{assumption} The above statement then follows from the properties of the linear matrix-valued variational equation $$ \frac{d}{dt} \nabla_x a = \nabla_a c \nabla_x a + \nabla_x c, $$ which is obtained by taking the gradient $\nabla_x $ of both sides of the ODE \eqref{Aeqn}.
Finally, we need to ensure that the solution $a(x,t)$ satisfies the dissipativity property as in Assumption \eqref{assumpA2}.
\begin{assumption} \label{assumpC4}
There exists $R^*$ such that
$$
\left<c(a,x,y,t),x\right> \, \leq \, 0 \quad \textrm{for} \quad \|x\| \geq R^*, \quad (a,y,t) \in \mathbb{R}^d \times \mathbb{R}^m\times \mathbb{R}. $$
\end{assumption}
To show this we write equation \eqref{Aeqn} in integral form $$ a(x,t) = a_0(x) + \int_{t_0}^t c\left(a(x,s),x, \eta(s),s\right)\, ds $$ and then take the scalar product on both sides with a constant $x$, which gives \begin{eqnarray*} \left<a(x,t),x\right> & = & \left<a_0(x),x\right> + \left< \int_{t_0}^t c\left(a(x,s),x, \eta(s),s\right) \, ds, x\right> \\[1.3ex] & = & \left<a_0(x),x\right> + \int_{t_0}^t \left< c\left(a(x,s),x, \eta(s),s\right),x\right>\, ds \\[1.3ex]
& \leq & - 1 +0 = -1 \quad \mbox{\rm for} \qquad \|x\| \geq R^*. \end{eqnarray*}
Summarising from the above, we can formulate the following theorem.
\begin{theorem}
\label{thm2} Suppose that Assumptions \eqref{assumpS1} and \eqref{assumpC1}--\eqref{assumpC4} hold. Further, suppose that $a_0(x)$ is continuously differentiable and satisfies the dissipativity condition in Assumption \eqref{assumpA2}. Then the ODE \eqref{Aeqn} has a unique solution $a(x,t)$ for the initial condition $a(x,t_0)$ $=$ $a_0(x)$, which exists for all $t$ $\geq$ $t_0$ and satisfies Assumptions \eqref{assumpA1} and \eqref{assumpA2}.
\end{theorem} Thus, we have obtained a theorem for the existence, uniqueness, continuity and dissipativity of the velocity vector field $a$ governing the behaviour of \eqref{Xeqn}.
\section{Asymptotic behaviour}
\label{sec_asym}
Here we consider the conditions for the existence of two kinds of attractors in equation \eqref{Xeqn} describing the observable behaviour of a system with a plastic velocity field. The ODE \eqref{Xeqn} is non-autonomous and its solution mapping generates a non-autonomous dynamical system on the state space $\mathbb{R}^d$ expressed in terms of a $2$-parameter semi-group, which is often called a process (see Kloeden \& Rasmussen \cite{Kloeden_NDS_book11}). Define \begin{equation*}
\mathbb{R}_{\geq}^+=
\left\{(t,t_0)\in\mathbb{R}\times\mathbb{R}:t\geq t_0\right\}. \end{equation*}
\begin{definition} \label{DETdefpro}
A \emph{process} is a mapping $\phi:\mathbb{R}_{\geq}^+\times \mathbb{R}^d\to \mathbb{R}^d$ with the
following properties:
\begin{itemize}
\item[(i)] {initial condition:}\,\,
$\phi(t_0,t_0,x_0)=x_0$ for all $x_0\in \mathbb{R}^d$ and $t_0\in\mathbb{R}$;
\item[(ii)] {$2$-parameter semi-group property:} \,
$\phi(t_2,t_0,x_0)=\phi(t_2,t_1,\phi(t_1,t_0,x_0))$
for all $t_0\leq t_1 \leq t_2$ in $\mathbb{R}$ and $x_0\in \mathbb{R}^d$;
\item[(iii)] {continuity:}\,\,
the mapping $(t,t_0,x_0)\mapsto\phi(t,t_0,x_0)$ is continuous.
\end{itemize} \end{definition} The $2$-parameter semi-group property is an immediate consequence of the existence and uniqueness of solutions of the non-autonomous ODE: the solution starting at~$(t_1,x_1)$, where $x_1$ $=$ $\phi(t_1,t_0,x_0)$, is unique so must be equal to $\phi(t,t_0,x_0)$ for $t$ $\geq$ $t_1$.
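Both properties can be checked explicitly on a solvable example. For the scalar non-autonomous ODE $\frac{dx}{dt} = -x + t$ (an illustrative choice, not taken from the model) the process has the closed form $\phi(t,t_0,x_0) = (x_0 - t_0 + 1)e^{-(t-t_0)} + t - 1$:

```python
import math

def phi(t, t0, x0):
    """Explicit process for dx/dt = -x + t (an illustrative choice):
    phi(t, t0, x0) = (x0 - t0 + 1) * exp(-(t - t0)) + t - 1."""
    return (x0 - t0 + 1) * math.exp(-(t - t0)) + t - 1
```

Here the initial-condition property and the $2$-parameter semi-group identity hold exactly, as a short calculation with the exponentials confirms.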
\subsection{Pullback attractors in equation \eqref{Xeqn}}
Time in autonomous dynamical systems is a relative concept, since such systems depend on the elapsed time $t-t_0$ only and not separately on the current time~$t$ and initial time~$t_0$, which means that limiting objects exist all the time and not just in the distant future. In contrast, non-autonomous systems depend explicitly on both~$t$ and~$t_0$, which has a profound effect on the nature of limiting objects (see \cite{Kloeden_NDS_book11, Crauel_nonaut_random_attr2015}).
In particular, a non-autonomous attractor is a family $\mathfrak{A}$ $=$ $\{ A(t): t \in \mathbb{R}\}$ of nonempty compact subsets $A(t)$ of $\mathbb{R}^d$ with the following properties:\\
\noindent 1) invariance: $A(t)$ $=$ $\phi(t,t_0,A(t_0))$ for all $t$ $\geq$ $t_0$;
\noindent 2) pullback attracting: $$ \lim_{t_0\to - \infty} \mbox{\rm dist}_{\mathbb{R}^d} \left(\phi(t,t_0,B),A(t)\right) = 0 \qquad \mbox{for all bounded subsets $B$ of $\mathbb{R}^d$.} $$ It is called a \emph{pullback attractor} since the starting time $t_0$ is pulled further and further back into the past. The dynamics then moves forwards in time from this starting time $t_0$ to the present time $t$. Essentially, the pullback attractor takes into account the past history of the system, so we cannot expect it to say much about the future. In fact, a pullback attractor need not be forward attracting in the conventionally understood sense, i.e., as $t$ $\to$ $+\infty$ for fixed $t_0$ (see the example in subsection \ref{subsec_example} below).
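As an elementary illustration of pullback attraction (again not drawn from the model), consider the scalar ODE $\frac{dx}{dt}$ $=$ $-x+g(t)$ with $g$ continuous and bounded. For every bounded set $B$ of initial conditions, $$ \phi(t,t_0,x_0) = x_0 e^{-(t-t_0)} + \int_{t_0}^{t} e^{-(t-s)} g(s)\, ds \longrightarrow \chi(t) := \int_{-\infty}^{t} e^{-(t-s)} g(s)\, ds \qquad \mbox{as } t_0 \to -\infty, $$ uniformly in $x_0$ $\in$ $B$, so the pullback attractor here has the singleton component sets $A(t)$ $=$ $\{\chi(t)\}$, where $\chi$ is the unique bounded entire solution of this ODE.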
The existence and uniqueness of a global pullback attractor for a non-autonomous dynamical system on $\mathbb{R}^d$ is implied by the existence of a positive invariant absorbing set. The following theorem is adapted from \cite[Theorem 3.18]{Kloeden_NDS_book11}. \begin{theorem}\label{thm3} Suppose that a non-autonomous dynamical system $\phi$ on $\mathbb{R}^d$ has a positive invariant absorbing set $B^*$. Then it has a unique pullback attractor $\mathfrak{A}$ $=$ $\{ A(t): t \in \mathbb{R}\}$ with component sets defined by $$ A(t) = \bigcap_{t_0\leq t} \phi\left(t,t_0,B^*\right), \qquad t \in \mathbb{R}. $$ \end{theorem} An important characterisation \cite[Lemma 2.15]{Kloeden_NDS_book11} of a pullback attractor is that it consists of the bounded entire solutions of the system, i.e., the functions $\chi$ $:$ $\mathbb{R}$ $\rightarrow$ $\mathbb{R}^d$ for which $\chi(t)$ $=$ $\phi(t,t_0,\chi(t_0))$ $\in$ $A(t)$ for all $(t,t_0)$ $\in$ $\mathbb{R}_{\geq}^+$.
In particular, under the above assumptions, the ODE \eqref{Xeqn} describing the observable behaviour of the model of a cognitive system generates a non-autonomous dynamical system, which has a global pullback attractor. Summarising, we formulate the following theorem. \begin{theorem} \label{thm4} Suppose that Assumptions \eqref{assumpS1}, \eqref{assumpA1} and \eqref{assumpA2} hold. Then the non-autonomous dynamical system generated by the ODE \eqref{Xeqn} describing the observable behaviour has a global pullback attractor $\mathfrak{A}$ $=$ $\{ A(t): t \in \mathbb{R}\}$, which is contained in the absorbing set $B^*$.
\end{theorem}
Thus, Theorem \ref{thm4} specifies the conditions under which the global pullback attractor exists in a dynamical system with plastic spontaneously evolving velocity vector field.
\subsection{Forward limit sets in equation \eqref{Xeqn}} The concepts of pullback attraction and pullback attractors assume that the system exists for all time, in particular all past time. This is obviously not true in many biological systems, though an artificial ``past'' can sometimes be usefully introduced (see the final section).
The above definition of a non-autonomous dynamical system can be easily modified to hold only for $(t,t_0)$ $\in$ $\mathbb{R}_{\geq}^+(T^*)$ $=$ $\left\{(t,t_0)\in\mathbb{R}\times\mathbb{R}: t\geq t_0\geq T^* \right\}$ for some $T^*$ $>$ $-\infty$.
When the system has a nonempty positive invariant compact absorbing set $B^*$, as in the situation here, the forward omega limit set $$ \Omega(t_0) = \bigcap_{\tau \geq t_0} \overline{ \bigcup_{t\geq \tau} \phi\left(t,t_0,B^*\right)}, \qquad t_0 \in \mathbb{R}, $$
exists for each $t_0$ $\geq$ $T^*$, where the upper bar denotes the closure of the set beneath it. The set $\Omega(t_0)$ is thus a nonempty compact subset of the absorbing set $B^*$ for each $t_0$ $\geq$ $T^*$.
Moreover, these sets are increasing in $t_0$, i.e., $\Omega(t_0)$ $\subset$ $\Omega(t'_0)$ for $t_0$ $\leq$ $t'_0$, and the closure of their union
$$
\Omega^* := \overline{ \bigcup_{t_0\geq T^*} \Omega(t_0)} \subset B^* $$ is a compact subset of $B^*$, which attracts all of the dynamics of the system in the forward sense, i.e. $$ \lim_{t\to \infty} \mbox{\rm dist}_{\mathbb{R}^d} \left(\phi(t,t_0,B), \Omega^* \right) = 0 $$ for all bounded subsets $B$ of $\mathbb{R}^d$, $t_0$ $\geq$ $T^*$.
Vishik \cite{Vishik_Asym_Beh_book92} called $\Omega^*$ the \emph{uniform attractor}\footnote{He required the system to be defined in the whole past and the convergence to be uniform in $t_0$ $\in$ $\mathbb{R}$.}, although strictly speaking $\Omega^*$ does not form an attractor, since it need not be invariant and the attraction need not be uniform in the starting time $t_0$. Nevertheless, $\Omega^*$ does indicate where the future asymptotic dynamics ends up. Moreover, Kloeden \cite{Kloeden_asym_inv_JCD16} showed that $\Omega^*$ is \emph{asymptotically positive invariant}, which means that the later the starting time $t_0$, the more closely it resembles an attractor as conventionally understood. In \cite{Kloeden_asym_inv_JCD16} $\Omega^*$ was called the \emph{forward attracting set}.
Summarising from the above, we formulate the following theorem. \begin{theorem} \label{thm5} Suppose that Assumptions \eqref{assumpS1}, \eqref{assumpA1} and \eqref{assumpA2} hold. Then the non-autonomous dynamical system generated by the ODE \eqref{Xeqn} describing the observable behaviour of the system has a forward attracting set $\Omega^*$, which is contained in the absorbing set $B^*$.
\end{theorem}
Theorem \ref{thm5} expresses the conditions under which a forward attracting set exists in a dynamical system \eqref{Xeqn} with a plastic velocity vector field evolving according to \eqref{Aeqn}.
\section{Discussion} \label{disc}
In the previous sections we gave a precise mathematical formulation of, and then analysed, the conceptual model of a cognitive system introduced in \cite{Janson_conceptual_brain_model_SR17}. For clarity, we made some simplifying assumptions about the properties of the right-hand sides of this model and of the external stimulus.
If the model discussed here is to be used for the description of the cognitive function similar to that of a biological brain, one needs to take into account the different timescales at which different processes occur. It is known that the observable dynamics of neurons is much faster than the rate of change of the inter-neuron connections. Hence, the velocity vector field describing the dynamics of neurons in the brain should evolve at a much slower rate than the neural states. A realistic application of our model should take this into account.
Even the simplified cases studied here raise a number of questions, in particular about the relevance of pullback attractors for such models. These and some further issues are briefly discussed below.
\subsection{Use of pullback attractors}
Pullback convergence requires the dynamical system to exist in the distant past, which is often not a realistic assumption in biological systems. Pullback attractors can nevertheless be used in such situations by inventing an artificial past. This and other aspects are discussed in \cite{Kloeden_pullback_SD03,Kloeden_NDS_book11}.
The simplest way to do this in the present model is to set the vector field $a(x,t)$ $\equiv$ $a_0(x)$ for $t$ $\leq$ $T^*$ for some finite time $T^*$, which could be the
desired starting time $t_0$. In this case $a_0(x)$ would be the desired initial velocity vector field of the model of a cognitive system, which could be zero or could contain some initial features representing previous memories. Then the ODE \eqref{xeqn0} should be replaced by the switching system
\begin{eqnarray}
\label{eq_sw} \frac{dx}{dt} = \left\{ \begin{array}{lcl} a_0(x) & : & t \leq t_0 \\[1.3ex] a(x,t) & : & t \geq t_0 \end{array}\right., \end{eqnarray} where $a(x,t)$ evolves according to the ODE \eqref{aeqn0} for $t$ $\geq$ $t_0$ with the parameterised initial value $a_0(x)$. If $a_0(x)$ satisfies the dissipativity condition in Assumption \eqref{assumpA2}, then the switching system \eqref{eq_sw} will also be dissipative and have a pullback attractor with component sets $A(t)$ $=$ $A^*$ for $t$ $\leq$ $t_0$ and $A(t)$ $=$ $\phi(t,t_0,A^*)$ for $t$ $\geq$ $t_0$, where $A^*$ is the global attractor of the autonomous dynamical system generated by the autonomous ODE with the vector field $a_0(x)$.
\subsection{Random stimulus signals}
The stimulus signal $\eta(t)$ in Assumption \eqref{assumpS1} is a deterministic function. When this signal is random, it would be a single sample path $\eta(t,\omega)$ of a stochastic process with $\omega$ $\in$ $\Omega$, where $\Omega$ is the sample space of the underlying probability space $(\Omega, \mathcal{F}, \mathbb{P})$. The above analysis, which is otherwise deterministic, then holds for this fixed sample path. For emphasis, $\omega$ could be included in the system and the pullback attractor as an additional parameter, i.e., $\phi(t,t_0,x_0, \omega)$ and $\mathfrak{A}$ $=$ $\{ A(t,\omega): t \in \mathbb{R}\}$. Cui et al.\ \cite{Cui_forward_random_att_submitted,Cui_uniform_att_nonaut_JDE17} call these objects non-autonomous random dynamical systems and random pullback attractors, respectively.
This is the appropriate formulation for brain vector fields generated by the ODE \eqref{Aeqn}, which has two sources of non-autonomy in its vector field $c$: indirectly, through the stimulus signal $\eta(t)$, and directly, through the independent variable $t$. The ODE \eqref{Aeqn} is then a random ODE (RODE), see \cite{Han_RODE_book17}. Note that, without this additional independent variable $t$, the theory of random dynamical systems (RDS) in Arnold \cite{Arnold_RDS_book98} could be used. It is also a pathwise theory with a random attractor defined through pullback convergence, but it requires additional assumptions about the nature of the driving noise process, which is here represented in the stimulus signal.
Until now we have considered stimulus signals $\eta(t)$ with continuous sample paths. The above results remain valid when $\eta(t)$ has only measurable sample paths, such as for a Poisson or L\'evy process, but the RODE must then be interpreted pathwise as a Carath\'eodory ODE, see \cite{Han_RODE_book17}.
\subsection{Relevance of pullback attractors}
\label{subsec_example}
Assuming, by nature or artifice, that the system does have a pullback attractor, what does this actually tell us about the asymptotic dynamics of the neural activity?
As mentioned above, a pullback attractor consists of the bounded entire solutions of the system, which is useful information. This characterisation also holds for the attractors of autonomous systems, for which pullback and forward convergence are equivalent, since only the elapsed time matters in such systems.
In general, a pullback attractor need not be forward attracting. This is easily seen in the following switching system $$ \frac{dx}{dt}= \left\{ \begin{array}{ccl} - x & : & t \leq 0 \\[1.5ex] x \left(1-x^2 \right) & : & t > 0 \end{array} \right. , $$ for which the set $B^*$ $=$ $[-2,2]$ is positive invariant and absorbing. The pullback attractor $\mathfrak{A}$ has identical component subsets $A(t)$ $\equiv$ $\{0\}$, $t$ $\in$ $\mathbb{R}$, corresponding to the zero entire solution, which is the only bounded entire solution of this switching system. This zero solution is obviously not forward asymptotically stable. The forward attracting set here is $\Omega^*$ $=$ $[-1,1]$. It is not invariant (though it is positive invariant in this case), but contains all of the forward limit points of the system.
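The forward attracting set in this example can be computed explicitly. For $t$ $>$ $0$ the image $\phi(t,t_0,B^*)$ is a symmetric interval $[-\beta(t),\beta(t)]$ whose endpoint $\beta(t)$, governed by the autonomous ODE $\frac{dx}{dt}$ $=$ $x(1-x^2)$, converges to $1$ as $t$ $\to$ $\infty$ (from above or below according to the starting time). Hence $$ \Omega(t_0) = \bigcap_{\tau \geq t_0} \overline{ \bigcup_{t\geq \tau} \phi\left(t,t_0,B^*\right)} = [-1,1] \qquad \mbox{for every } t_0, $$ and therefore $\Omega^*$ $=$ $[-1,1]$, as stated.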
Nevertheless, a pullback attractor indicates where the system settles down when more and more information about its past is taken into account. This is very useful in a system which is itself evolving in time, as in the brain plasticity model under consideration, for which the future input stimulus is not yet known.
Interestingly, a random attractor for an RDS in the sense of \cite{Arnold_RDS_book98} is pullback attracting in the pathwise sense and also forward attracting in probability, see \cite{Crauel_nonaut_random_attr2015}.
\subsection{Vector field from a potential function}
In an example investigated numerically in \cite{Janson_conceptual_brain_model_SR17}, the vector field $a$ was generated from a potential function $U$, i.e. with $a$ $=$ $-\frac{1}{t}\nabla_x U$, and a differential equation was constructed for $U$, rather than for $a$. Componentwise $a_i$ $=$ $-\frac{1}{t}\frac{\partial U}{\partial x_i}$, so, by the equality of mixed partial derivatives, the existence of such a potential requires $$ \frac{\partial a_i}{\partial x_j} = -\frac{1}{t}\frac{\partial^2 U}{\partial x_j\partial x_i} = -\frac{1}{t} \frac{\partial^2 U}{\partial x_i\partial x_j} = \frac{\partial a_j}{\partial x_i}. $$ In terms of the vector field evolving under equation \eqref{Aeqn}, this is the symmetry condition $$
\frac{\partial a_i}{\partial x_j} = \frac{\partial a_j}{\partial x_i}. $$
The example considered in \cite{Janson_conceptual_brain_model_SR17} is a special case of the system \eqref{Xeqn}--\eqref{Aeqn}. Namely,
in \cite{Janson_conceptual_brain_model_SR17} $U$ satisfies a scalar parameterised ordinary differential equation \begin{equation}\label{ueq} \frac{d}{dt} U(x,t) = - k U(x,t) -g(x-\eta(t)), \end{equation} where $k$ $>$ $0$ (positivity of $k$ is needed for the pullback limit below), $g$ is shaped like a Gaussian function $$ g(z) = \frac{1}{\sqrt{2 \pi \sigma^2}} e^{-\frac{z^2}{ \sigma^2} }, $$ and $\eta(t)$ is the given input, which is assumed to be defined and continuous for all $t \in \mathbb{R}$.
The gradient $\nabla_x U$ of $U$ satisfies the scalar parameterised ordinary differential equation \begin{equation}\label{ueq_grad} \frac{d}{dt} \nabla_x U(x,t) = - k \nabla_x U(x,t) - G(x-\eta(t)), \end{equation} where $$ G(x-\eta(t)) = \nabla_x g(x-\eta(t)) = - \frac{2}{\sigma^2\sqrt{2 \pi \sigma^2}} \left(x-\eta(t)\right) e^{-\frac{\left(x-\eta(t)\right) ^2}{ \sigma^2} }. $$ The linear ODE \eqref{ueq_grad} has an explicit solution $$
\nabla_x U(x,t) = \nabla_x U(x,t_0) e^{-k(t-t_0)} - \int_{t_0}^t e^{-k(t-s)} G(x-\eta(s)) ds. $$ Taking the pullback limit as $t_0$ $\to$ $-\infty$ gives $$
\nabla_x \bar{U}(x,t) = - \int_{-\infty}^t e^{-k(t-s)} G(x-\eta(s)) ds. $$ This solution is asymptotically stable and forward attracts all other solutions, since $$
\left| \nabla_x U(x,t) - \nabla_x \bar{U}(x,t) \right| \leq \left| \nabla_x U(x,t_0) - \nabla_x \bar{U}(x,t_0) \right| e^{-k(t-t_0)} $$ for every $x$ and any solution $ \nabla_x U(x,t)$ $\ne$ $\nabla_x \bar{U}(x,t)$.
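This estimate is immediate from linearity: for each fixed $x$ the difference $D(x,t)$ $=$ $\nabla_x U(x,t) - \nabla_x \bar{U}(x,t)$ of two solutions of \eqref{ueq_grad} satisfies $$ \frac{d}{dt} D(x,t) = -k D(x,t), $$ since the inhomogeneous terms $G(x-\eta(t))$ cancel, whence $D(x,t)$ $=$ $D(x,t_0)\, e^{-k(t-t_0)}$.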
Finally, the asymptotic dynamics of this example system with a plastic vector field satisfies the scalar ODE \begin{equation} \label{sol_asym} \frac{dx(t)}{dt} = - \frac{1}{t} \nabla_x \bar{U}(x(t),t) = \frac{1}{t} \int_{-\infty}^t e^{-k(t-s)} G(x(t)-\eta(s)) ds. \end{equation}
Since the integrand is uniformly bounded, it follows that $\left|\frac{dx(t)}{dt}\right|$ $\leq$ $\frac{C}{t}$ $\to$ $0$ as $t$ $\to$ $\infty$. From numerical simulations, the system \eqref{Xeqn} with $a(x,t)$ $=$ $-\frac{1}{t}\nabla_x U(x,t)$ appears to have a forward attracting set.
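The constant $C$ here can be made explicit. Writing $G_{\max}$ $=$ $\sup_{z}|G(z)|$, which is finite for the function $G$ above, we have for $t$ $>$ $0$ $$ \left|\frac{dx(t)}{dt}\right| \leq \frac{1}{t} \int_{-\infty}^{t} e^{-k(t-s)} G_{\max}\, ds = \frac{G_{\max}}{kt}, $$ where convergence of the integral requires $k$ $>$ $0$, so one may take $C$ $=$ $G_{\max}/k$.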
From the argument presented above, equation \eqref{Aeqn} for the vector field $a$ has a pullback attractor consisting of singleton sets, i.e., a single entire solution, which is also Lyapunov forward attracting. This implies that, starting from an arbitrary smooth initial vector field $a(x,t_0)$, the solution $a(x,t)$ of \eqref{Aeqn} converges to the time-varying function $\bar{a}(x,t)$ $=$ $ - \frac{1}{t} \nabla_x \bar{U}(x,t)$.
\begin{remark} The example considered in \cite{Janson_conceptual_brain_model_SR17} actually involved a random forcing term $\eta(t)$, which was the stochastic stationary solution (essentially its random attractor) of the scalar It\^o stochastic differential equation (SDE) \begin{equation}\label{noise} d\eta (t) = h(\eta(t)) dt + 0.5 dW(t), \end{equation} where $W(t)$ was a two-sided Wiener process. For the function $h(u)$ $=$ $3(u-u^3)/5$ used in \cite{Janson_conceptual_brain_model_SR17}, the representative potential function had two non-symmetric wells of different depths and widths. In such cases, the solutions of \eqref{Xeqn}--\eqref{Aeqn} depend on the sample path $\eta(t,\omega)$ of the noise process, the convergences are pathwise, and random versions of the theorems formulated above apply. In particular, the random pullback attractor consists of singleton sets, i.e., it is essentially a stochastic process. Moreover, it is
Lyapunov asymptotically stable in probability. \end{remark}
\section{Conclusion}
To conclude, we reconsidered the problem from \cite{Janson_conceptual_brain_model_SR17} from the perspective of recent developments in non-autonomous dynamical systems. In order to further develop the modelling of information processing by means of dynamical systems with plastic self-organising vector fields, we needed to show that the problem is mathematically well-posed; this is one of the results of this paper, obtained under some simplifying assumptions. At the same time, we have shown that the asymptotic dynamics can be formulated in terms of non-autonomous and/or random attractors. This provides us with a firm foundation for a deeper understanding of the potential capabilities of systems with plastic adaptable rules of behaviour.
The model presented here offers many interesting mathematical challenges, such as the rigorous analysis of parameter-free bifurcations occurring as a result of the spontaneous evolution of the velocity field of the dynamical system. The necessary background theory is yet to be developed.
~
\noindent {\bf Acknowledgements}
\noindent The visit of PEK to Loughborough University was supported by the London Mathematical Society.
~
\noindent {\bf Authors' Contributions}
\noindent NBJ and PEK jointly formulated the problem and interpreted the results. PEK obtained the mathematical results and NBJ put the problem into the context of applications to the brain and cognition.
\end{document}
\begin{document}
\begin{abstract} We show that a simple separable unital nuclear nonelementary $C^*$-algebra whose tracial state space has a compact extreme boundary with finite covering dimension admits uniformly tracially large order zero maps from matrix algebras into its central sequence algebra. As a consequence, strict comparison implies $\mathcal Z$-stability for these algebras. \end{abstract}
\title{$\mathcal{Z}$-stability and finite dimensional tracial boundaries}
\section{Introduction}
\noindent The theme of tensorial absorption is prominent in the theory of operator algebras, particularly where the classification of these algebras is concerned. An important step in Connes' proof that the hyperfinite $\mathrm{II}_1$ factor $\mathcal{R}$ is the unique injective $\mathrm{II}_1$ factor with separable predual exemplifies this theme: such a factor, say $\mathcal{M}$, has the property that $\mathcal{M} \overline{\otimes} \mathcal{R} \cong \mathcal{M}$. Another example arises in the theory of Kirchberg algebras, where the fact, due to Kirchberg, that such algebras absorb the Cuntz algebra $\mathcal{O}_\infty$ tensorially (see \cite{KP:Crelle}) plays a key role in their eventual classification via $\mathrm{K}$-theory. For general nuclear separable $C^*$-algebras, the best tensorial absorption theorem that one can hope for is absorption of the Jiang-Su algebra $\mathcal{Z}$ ($\mathcal{Z}$-stability). $\mathcal{Z}$-stability is a powerful tool for the classification of simple nuclear separable $C^*$-algebras, but is generally difficult to establish. The property of strict comparison, on the other hand, is often easier to verify. Examples show that both properties, although by no means automatic, often occur at the same time, and are closely related to finite topological dimension. These and other considerations led the first and third named authors to conjecture the following connections between regularity properties for $C^*$-algebras:
\begin{conjecture} Let $A$ be a simple nuclear separable unital infinite-dimensional $C^*$-algebra. The following conditions are equivalent: \begin{enumerate} \item $A$ has finite nuclear dimension; \item $A$ is $\mathcal{Z}$-stable; \item $A$ has strict comparison. \end{enumerate} \end{conjecture}
\noindent The implications $(1) \Longrightarrow (2)$ and $(2) \Longrightarrow (3)$ have been established by the third named author and R\o rdam, respectively (see \cite{W:Invent2} and \cite{R:IJM}). The reversal of either of these implications is at present only partial; proving $(3) \Longrightarrow (2)$ becomes accessible if one additionally assumes certain local approximation and divisibility properties \cite{W:Invent2} but at least the former assumption should ultimately be unnecessary. In a recent breakthrough, Matui and Sato lifted the local approximation hypothesis, establishing $(3) \Longrightarrow (2)$ for algebras with finitely many extremal tracial states \cite{MS:Acta}. Subsequently it has been an urgent task to remove this restriction on the tracial state space and in this article we extend their result to algebras whose extremal tracial boundary is compact and of finite covering dimension. The problem has received substantial attention; this result has also been discovered by Kirchberg and R\o{}rdam \cite{KR:InPrep} and Sato \cite{S:Pre}.
A word on the idea of our proof is in order, as the details are necessarily technical. A loose but in our case profitable way of thinking of a simple unital separable nuclear $C^*$-algebra $A$ with nonempty tracial state space is as a collection of everywhere nonzero ``sections'', where $A$ is viewed as a kind of noncommutative bundle over its space $T(A)$ of tracial states. For a classical topological vector bundle $\xi$ over a space $X$ we have the property of local triviality: the restriction of the bundle to a sufficiently small neighbourhood is a Cartesian product of the neighbourhood with a (complex) vector space having the same rank as $\xi$. The complexity of $\xi$ is generated by the way in which these local trivialisations are patched together. The analogue of local trivialisation in our case comes from approximately central order zero maps with finite-dimensional domain which are large in a small open neighbourhood in $T(A)$. Our objective is to construct an approximately central order zero map which is globally large over $T(A)$, and so we look to glue together the maps which work locally. To do this we use nuclearity, and in particular the existence of approximate diagonals, to prove that there exist positive approximately central contractions in $A$ which, at the level of traces, represent indicator functions for open subsets of the extreme tracial boundary. Indeed, any continuous strictly positive affine function on $T(A)$ is uniformly realised by positive elements in an approximately central manner (Lemma \ref{L:CT}). Using suitable functions arising from open covers of $\partial_eT(A)$ we can patch together the local trivialisations to arrive at an order zero map which is uniformly bounded away from zero in trace. The bound we obtain depends on the covering dimension of $\partial_eT(A)$, and it is here that finite dimensionality enters.
Throughout the entire paper we work with central sequence algebras. The importance of these to tensorial absorption results dates back to McDuff's characterisation of separably acting II$_1$ factors which absorb the hyperfinite II$_1$ factor \cite{McD:PLMS}. In the $C^*$-setting, central sequence algebras are intimately connected with stability under tensoring by $\mathcal O_\infty$ and $\mathcal Z$ \cite{K:Able}.
The hypothesis of a compact extreme tracial boundary with finite covering dimension arose in \cite{DT:JFA}, where it was shown that for a simple algebra with strict comparison satisfying the said hypothesis, the Cuntz semigroup is almost divisible. The main result in this paper can be viewed as a proof that this almost divisibility can occur in an almost central manner.
Our paper is organised as follows. In Section \ref{Sect2} we introduce notation and review the relevant background from \cite{MS:Acta}. The main technical result (Lemma \ref{L:Key}) is established in Section \ref{Sect3}, and we show how the case of zero dimensional compact extreme tracial boundaries follows directly from this lemma. In the fourth and last section we extend to higher dimensional boundaries. Our method for doing this differs from those in \cite{KR:InPrep,S:Pre} as we marry the ideas needed to extend \cite{MS:Acta} to the zero dimensional compact extremal case with the geometric sequence arguments developed by the third named author in \cite{W:Invent1,W:Invent2}.
\section{Uniformly tracially large order zero maps}\label{Sect2}
\noindent Recall that a completely positive (cp) map $\phi:A\rightarrow B$ between $C^*$-algebras is said to be \emph{order zero} if it preserves orthogonality, i.e. if $e,f\in A_+$ have $ef=0$, then $\phi(e)\phi(f)=0$. The structure theory of these maps is developed in \cite{WZ:MJM}, which in particular establishes a functional calculus. Given a completely positive and contractive (cpc) order zero map $\phi:A\rightarrow B$, and a positive contractive function $f\in C_0(0,1]$, there exists a cpc order zero map $f(\phi):A\rightarrow B$. For projections $p\in A$, this map satisfies \begin{equation} f(\phi)(p)=f(\phi(p)). \end{equation} Secondly, given a cpc order zero map $A\rightarrow B$ and a tracial state $\tau:B\rightarrow\mathbb C$, the composition $\tau\circ\phi$ defines a positive tracial functional on $A$, \cite[Corollary 4.4]{WZ:MJM}. The other key fact we need is that order zero maps with finite dimensional domains are projective in the sense of Lemma \ref{L:Proj} below. This follows from the duality between order zero maps with domain $A$ and $^*$-homomorphisms from the cone $C(A)=C_0(0,1]\otimes A$ (see \cite{WZ:MJM}) and Loring's projectivity of cones over finite dimensional $C^*$-algebras \cite{L:Book}.
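The simplest instance of this functional calculus may help fix ideas. A cpc order zero map $\phi:\mathbb C\rightarrow B$ is determined by the positive contraction $h$ $=$ $\phi(1)$ via $\phi(\lambda)$ $=$ $\lambda h$, and in this case the order zero functional calculus reduces to the usual continuous functional calculus: $$ f(\phi)(\lambda) = \lambda f(h), \qquad \lambda\in\mathbb C, $$ for positive contractive $f\in C_0(0,1]$.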
\begin{lemma}\label{L:Proj} Let $A,B,F$ be $C^*$-algebras with $F$ finite dimensional and let $q:A\twoheadrightarrow B$ a surjective $^*$-homomorphism. Given a cpc order zero map $\phi:F\rightarrow B$, there exists a cpc order zero map $\tilde{\phi}:F\rightarrow A$ with $q\circ\tilde{\phi}=\phi$. \end{lemma}
Given a $C^*$-algebra $A$, we denote the quotient $C^*$-algebra $\ell^\infty(A)/c_0(A)$ by $A_\infty$ and refer to this as the \emph{sequence algebra of $A$}. We have a natural $^*$-homomorphism from $A$ into $\ell^\infty(A)$ obtained by regarding each element of $A$ as a constant sequence in $\ell^\infty(A)$. Following this with the quotient map from $\ell^\infty(A)$ into $A_\infty$, we obtain an embedding $A\hookrightarrow A_\infty$, and we use this to regard $A$ as a $C^*$-subalgebra of $A_\infty$ henceforth. In this way we can form the relative commutant $A_\infty\cap A'$, which is referred to as the \emph{central sequence algebra} of $A$. A bounded sequence $(x_n)_{n=1}^\infty$ in $A$ is said to be \emph{central} if its image $[(x_n)_{n=1}^\infty]$ in $A_\infty$ lies in $A_\infty\cap A'$.
We write $T(A)$ for the tracial state space of $A$. Given a sequence $(\tau_n)_{n=1}^\infty$ in $T(A)$ and a free ultrafilter $\omega\in\beta\mathbb N\setminus\mathbb N$, the trace $$ (x_n)_{n=1}^\infty\mapsto\lim_{n\rightarrow\omega}\tau_n(x_n) $$ on $\ell^\infty(A)$ induces a trace on $A_\infty$. We write $T_\infty(A)$ for the collection of all traces on $A_\infty$ arising in this fashion. When $\tau_n=\tau\in T(A)$ for all $n$, we write $\tau_\omega$ for the resulting trace in $T_\infty(A)$. We use the traces in $T_\infty(A)$ to define uniformly tracially large order zero maps into $A_\infty$.
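To see that this prescription is well defined, note that the functional $(x_n)_{n=1}^\infty\mapsto\lim_{n\rightarrow\omega}\tau_n(x_n)$ vanishes on $c_0(A)$, since $|\tau_n(x_n)|\leq\|x_n\|\rightarrow 0$ there, and so it descends to $A_\infty$; the trace property also passes to the limit, as $$ \lim_{n\rightarrow\omega}\tau_n(x_ny_n) = \lim_{n\rightarrow\omega}\tau_n(y_nx_n) $$ for bounded sequences $(x_n)_{n=1}^\infty$ and $(y_n)_{n=1}^\infty$ in $A$.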
\begin{definition}\label{D:TLM} Let $A$ be a separable unital $C^*$-algebra with $T(A)\neq\emptyset$. A completely positive and contractive order zero map $\Phi:M_k\rightarrow A_\infty$ is \emph{uniformly tracially large} if $\tau(\Phi(1_k))=1$ for all $\tau\in T_\infty(A)$. \end{definition}
By Lemma \ref{L:Proj}, every cpc order zero map $M_k\rightarrow A_\infty$ lifts to a cpc order zero map $M_k\rightarrow\ell^\infty(A)$ which consists of a sequence of cpc order zero maps $M_k\rightarrow A$. We can rephrase the uniformly tracially large condition in terms of these liftings and traces on $A$. Indeed, the definition of $T_\infty(A)$ is designed to make it easy to manipulate conditions of the form (\ref{L:TI1}).
\begin{lemma}\label{L:TI} Let $A$ be a separable unital $C^*$-algebra with $T(A)\neq\emptyset$. Let $\Phi:M_k\rightarrow A_\infty$ be a cpc order zero map. Then $\Phi$ is uniformly tracially large if and only if any lifting $(\phi_n):M_k\rightarrow\ell^\infty(A)$ of $\Phi$ to a sequence of cpc order zero maps satisfies \begin{equation}\label{L:TI1} \lim_{n\rightarrow\infty}\min_{\tau\in T(A)}\tau(\phi_n(1_k))=1. \end{equation} \end{lemma} \begin{proof} That (\ref{L:TI1}) implies that $\Phi$ is uniformly tracially large is immediate. For the converse, suppose $\Phi:M_k\rightarrow A_\infty$ is a uniformly tracially large cpc order zero map but (\ref{L:TI1}) fails for some lifting $(\phi_n)$. Then, there exists $\varepsilon>0$, an increasing sequence $\{m_n\}_{n=1}^\infty$ in $\mathbb N$ and traces $\tau_n\in T(A)$ such that $\tau_n(\phi_{m_n}(1_k))\leq 1-\varepsilon$ for all $n\in\mathbb N$. Given a free ultrafilter $\omega$, the map $\rho:[(x_n)_{n=1}^\infty]\mapsto \lim_{n\rightarrow\omega}\tau_n(x_{m_n})$ defines a trace in $T_\infty(A)$, which has $\rho(\Phi(1_k))\leq 1-\varepsilon$, contrary to hypothesis. \end{proof}
\begin{remark}\label{R:OT} In a similar vein, for a fixed trace $\tau\in T(A)$ a cpc order zero map $\Phi:M_k\rightarrow A_\infty$ has $\tau_\omega(\Phi(1_k))=1$ for all $\omega\in\beta\mathbb N\setminus\mathbb N$ if and only if any lifting $(\phi_n)_n$ of $\Phi$ to a sequence of cpc order zero maps $M_k\rightarrow A$ has $\lim_{n\rightarrow\infty}\tau(\phi_n(1_k))=1$. \end{remark}
Via functional calculus and a standard central sequence technique, uniformly tracially large cpc order zero maps $M_k\rightarrow A_\infty\cap A'$ give rise to the maps produced by \cite[Lemma 3.3]{MS:Acta}.
\begin{lemma}\label{L:MS33} Let $A$ be a separable unital $C^*$-algebra with $T(A)\neq \emptyset$. Suppose that there exists a uniformly tracially large cpc order zero map $\Phi:M_k\rightarrow A_\infty\cap A'$. Then the conclusion of \cite[Lemma 3.3]{MS:Acta} holds for $A$. That is, there exists a cpc order zero map $\Psi:M_k\rightarrow A_\infty\cap A'$ and a central sequence $(c_n)_{n=1}^\infty$ of positive contractions in $A$ such that \begin{equation}\label{L:MS33:1}
\lim_{n\rightarrow\infty}\max_{\tau\in T(A)}|\tau(c_n^m)-1/k|=0 \end{equation} for any $m\in\mathbb N$ and $\Psi(e)=[(c_n)_{n=1}^\infty]$ for some minimal projection $e\in M_k$. \end{lemma} \begin{proof} We need to produce a cpc order zero map $\Psi:M_k\rightarrow A_\infty\cap A'$ such that $\tau(\Psi^m(1_k))=1$ for each $m\in\mathbb N$ and $\tau\in T_\infty(A)$. Given such a map, fix a minimal projection $e\in M_k$ and take a lifting $(\psi_n)_{n=1}^\infty$ of $\Psi$ to a sequence of cpc order zero maps from $M_k$ to $A$. We can then set $c_n=\psi_n(e)$, so that $(c_n)_{n=1}^\infty$ is a central sequence. For each $m\in\mathbb N$ we have $$ \lim_{n\rightarrow\infty}\min_{\tau\in T(A)}\tau(\psi_n^m(1_k))=1 $$ by Lemma \ref{L:TI}. For each $m,n\in\mathbb N$ and $\tau\in T(A)$, the map $\tau(\psi_n^m(\cdot))$ is a trace on $M_k$ (\cite[Corollary 4.4]{WZ:MJM}), so $\tau(c_n^m)=\tau(\psi_n^m(1_k))/k$. Hence (\ref{L:MS33:1}) holds.
To construct $\Psi$, fix a uniformly tracially large cpc order zero map $\Phi:M_k\rightarrow A_\infty\cap A'$. Then, for each $m\in\mathbb N$, the map $\Phi^{1/m}:M_k\rightarrow A_\infty\cap A'$ is a cpc order zero map. Lift each $\Phi^{1/m}$ to a sequence $(\phi^{(m)}_n)_{n=1}^\infty$ of cpc order zero maps $M_k\rightarrow A$. Fix a dense sequence $(x_r)_{r=1}^\infty$ in $A$ and for each $s\in\mathbb N$, we can find $r_s$ sufficiently large such that: \begin{itemize}
\item $\|[\phi^{(s)}_{r_s}(y),x_i]\|\leq \frac{1}{s}\|y\|$, for all $y\in M_k$ and $i\in\{1,\dots,s\}$; \item $\tau(\phi^{(s)}_{r_s}(1_k)^s)\geq 1-\frac{1}{s}$, for all $\tau\in T(A)$. \end{itemize} To obtain the second condition, note that $((\phi^{(s)}_n)^s)_{n=1}^\infty$ is a lifting of $\Phi$ and apply Lemma \ref{L:TI}. The order zero map $\Psi:M_k\rightarrow A_\infty\cap A'$ induced by $(\phi^{(s)}_{r_s})_{s=1}^\infty$ has $\tau(\Psi^m(1_k))=1$ for all $m\in\mathbb N$ and $\tau\in T_\infty(A)$, as required: indeed, for $s\geq m$ and $\tau\in T(A)$ we have $\tau(\phi^{(s)}_{r_s}(1_k)^m)\geq\tau(\phi^{(s)}_{r_s}(1_k)^s)\geq 1-\frac{1}{s}$, since $x^m\geq x^s$ for any positive contraction $x$. \end{proof}
The main result (Theorem 1.1) of \cite{MS:Acta} shows that for a simple separable unital nuclear nonelementary $C^*$-algebra $A$ with finitely many extremal traces and $T(A)\neq\emptyset$, the following properties are equivalent: \begin{enumerate}[(i)] \item $A$ is $\mathcal Z$-stable; \item $A$ has strict comparison; \item every completely positive map from $A$ to $A$ can be excised in small central sequences (see \cite[Definition 2.1]{MS:Acta} for the definition of this concept); \item $A$ has property (SI) as defined in \cite[Definition 3.3]{S:JFA} (see \cite[Definition 4.1]{MS:CMP} for the equivalent formulation used in \cite{MS:Acta}). \end{enumerate} The implication (i)$\implies$(ii) is due to R\o{}rdam \cite{R:IJM} and holds assuming only that $A$ is unital, separable, simple and exact. The implication (iii)$\implies$(iv) is immediate from the definitions. The proof of the remaining implications (ii)$\implies$(iii) and (iv)$\implies$(i) is valid for any unital simple separable nuclear $C^*$-algebra with $T(A)\neq\emptyset$ for which the conclusion of \cite[Lemma 3.3]{MS:Acta} holds, as this is the only place where the extremal trace hypothesis comes into play. For the implication (ii)$\implies$(iii) this is set out explicitly in the proof of \cite[Theorem 4.2]{MS:Acta}, and the proof of (iv)$\implies$(i) is readily seen to be a direct argument from property (SI) and the conclusion of \cite[Lemma 3.3]{MS:Acta}. Using Lemma \ref{L:MS33}, we can formulate this result as follows.
\begin{theorem}[Matui-Sato]\label{MS} Let $A$ be a simple separable unital nuclear $C^*$-algebra with strict comparison. Suppose that for each $k\geq 2$, $A$ admits uniformly tracially large cpc order zero maps $M_k\rightarrow A_\infty\cap A'$. Then $A$ is $\mathcal Z$-stable. \end{theorem}
We now turn to amenability and Matui-Sato's construction of uniformly tracially large order zero maps for simple separable unital $C^*$-algebras with finitely many extremal traces. Recall that in \cite{H:Invent}, Haagerup showed that nuclear $C^*$-algebras are amenable in the sense of \cite{J:MAMS}. Moreover, \cite[Theorem 3.1]{H:Invent} gives additional information on the location of a virtual diagonal witnessing amenability. Combining this with Johnson's Hahn-Banach argument for extraction of an approximate diagonal from a virtual diagonal (\cite[Lemma 1.2]{J:AJM}) gives Lemma \ref{L.Amenable}, which is used in the proof of Sato's lemma below, as well as in the construction of central sequences of positive elements with specified tracial behaviour in Section \ref{Sect3}.
\begin{lemma}[Haagerup]\label{L.Amenable} Let $A$ be a unital nuclear $C^*$-algebra. Then for any finite subset $\mathcal F$ of $A$ and $\eta>0$, there exists $r\in\mathbb N$, contractions $a_1,\dots,a_r\in A$ and positive reals $\lambda_1,\dots,\lambda_r$ with $\sum_{i=1}^r\lambda_i=1$ such that: \begin{enumerate}
\item\label{L.Amenable1} $\|\sum_{i=1}^r\lambda_i a_ia_i^*-1\|<\eta$;
\item\label{L.Amenable2} $\|\sum_{i=1}^r\lambda_i (xa_i\otimes a_i^*-a_i\otimes a_i^*x)\|_{A\,\widehat{\otimes}\, A}<\eta$ for all $x\in\mathcal F$, \end{enumerate} where $A\,\widehat{\otimes}\, A$ is the projective tensor product. \end{lemma}
Let $N\subset\mathcal B(\Hs)$ be a von Neumann algebra acting on a separable Hilbert space $\mathcal H$. We define $N^\infty$ to be the quotient $C^*$-algebra $\ell^\infty(N)/J$, where $J$ denotes the norm-closed two sided ideal of all strong$^*$-null sequences in $\ell^\infty(N)$. Just as in the norm-closed setting, we can embed $N$ as a subalgebra of constant sequences in $N^\infty$ and so obtain the strong$^*$-central sequence algebra $N^\infty\cap N'$. With this notation we can now state the following result, which has been generalised to the nonnuclear setting in \cite{KR:InPrep}.
\begin{lemma}[Sato, {\cite[Lemma 2.1]{S:arXiv}}]\label{L:Sato}Let $A$ be a separable unital nuclear $C^*$-algebra. Suppose that $A\subset\mathcal B(\Hs)$ is a faithful unital representation of $A$ on a separable Hilbert space and write $N=A''$. Then the natural $^*$-homomorphism $$ A_\infty\cap A'\rightarrow N^\infty\cap N' $$ is surjective. \end{lemma}
Sato's lemma is the key ingredient in \cite[Lemma 3.3]{MS:Acta}, which obtains uniformly tracially large order zero maps when $A$ is separable, simple, unital and nuclear with finitely many extremal traces. For use in Section \ref{Sect3}, we show how to deduce this from the previous lemma using projectivity of order zero maps in the context of a fixed extremal trace. When $A$ has only finitely many extremal traces, a similar argument using the trace obtained from averaging the extremal traces can be used to prove \cite[Lemma 3.3]{MS:Acta}. Recall that a II$_1$ factor $N$ is said to be \emph{McDuff} if it absorbs the hyperfinite II$_1$ factor $R$ tensorially, i.e. $N\cong N\,\overline{\otimes}\,R$. Every McDuff factor $N$ has an abundance of centralising sequences: for each $k\geq 2$, factorise $N\cong N\,\overline{\otimes}\,R$ and by regarding $R$ as the weak closure of the UHF-algebra $M_{k^\infty}$ we can consider the sequence of $n$-th tensor factor embeddings $M_k\rightarrow M_{k^\infty}$ to obtain a unital embedding $M_k\rightarrow N^\infty\cap N'$.
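Concretely, writing $M_{k^\infty}=\bigotimes_{n=1}^\infty M_k$, the $n$-th tensor factor embedding is
$$
\iota_n:M_k\rightarrow M_{k^\infty},\qquad x\mapsto 1_{M_k}^{\otimes (n-1)}\otimes x\otimes 1_{M_k}\otimes 1_{M_k}\otimes\cdots,
$$
and for fixed $x\in M_k$ the sequence $(\iota_n(x))_{n=1}^\infty$ commutes asymptotically, in the strong$^*$-topology, with the elements of $N\cong N\,\overline{\otimes}\,R$, so induces the claimed unital embedding $M_k\rightarrow N^\infty\cap N'$.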
\begin{lemma}[cf. {\cite[Lemma 3.3]{MS:Acta}}]\label{L:OT}Let $A$ be a simple separable unital nuclear nonelementary $C^*$-algebra and let $\tau$ be an extremal tracial state on $A$. For $k\geq 2$, there exists a cpc order zero map $\Phi:M_k\rightarrow A_\infty\cap A'$ with $\tau_\omega(\Phi(1_k))=1$ for all $\omega\in \beta\mathbb N\setminus\mathbb N$. \end{lemma} \begin{proof} Fix $k\geq 2$. Let $\pi_\tau$ denote the GNS-representation associated to $\tau$. As $\tau$ is an extremal trace and $A$ is nuclear, it follows that $\pi_\tau(A)''=N$ is an injective II$_1$ factor. By Connes' theorem \cite[Theorem 5.1]{C:Ann}, $N$ is McDuff. As such, there is a unital embedding $\iota:M_k\hookrightarrow N^\infty\cap N'$. By Lemma \ref{L:Sato} and the projectivity of order zero maps (Lemma \ref{L:Proj}), there exists an order zero map $\Phi:M_k\rightarrow A_\infty\cap A'$ lifting $\iota$. For each $\omega\in\beta\mathbb N\setminus\mathbb N$, the trace $\tau_\omega$ given by $\tau_\omega((x_n)_{n=1}^\infty)=\lim_{n\rightarrow\omega}\tau(x_n)$ is well defined on $N^\infty$ and has $\tau_\omega(\iota(1_k))=1$. Hence $\tau_\omega(\Phi(1_k))=\tau_\omega(\iota(1_k))=1$. \end{proof} \begin{remark}\label{R:OT} Note that the map $\Phi$ of Lemma \ref{L:OT} already has $\tau_\omega(\Phi^m(1_k))=1$ for all $m\in\mathbb N$ and $\omega\in\beta\mathbb N\setminus\mathbb N$. We do not have to run the argument of Lemma \ref{L:MS33} to obtain this here. \end{remark}
\section{Approximately central functions on the trace space}\label{Sect3}
\noindent Recall that the tracial state space $T(A)$ of a separable unital $C^*$-algebra is a compact (in the weak$^*$-topology) convex subset of the state space of $A$, and so the Krein-Milman theorem shows that $T(A)$ is the closed convex hull of its extreme points $\partial_eT(A)$. Further, $T(A)$ forms a metrisable Choquet simplex: every point of $T(A)$ is the barycentre of a unique probability measure supported on $\partial_eT(A)$, \cite{A:Book}. If additionally $\partial_eT(A)$ is compact, then $T(A)$ is known as a Bauer simplex. In this case we have a natural identification of $\mathrm{Aff}(T(A))=\{f:T(A)\rightarrow \mathbb R \mid f\text{ is continuous and affine}\}$ with $C_{\mathbb R}(\partial_eT(A))$ given by restriction (see \cite{G:Book}). Our objective in this section is Lemma \ref{L:Key}, which enables us to produce a finite collection of cpc order zero maps with large sum when $T(A)$ has compact extreme boundary of finite covering dimension.
The covering dimension of a compact Hausdorff space $X$ can be defined in a number of equivalent fashions (see \cite{P:Book}). We use the colouring formulation as follows. For $m\in\{0,1,2,\dots\}$, say that $\dim X\leq m$ if and only if every finite open cover $\mathcal U$ of $X$ admits an $(m+1)$-colourable refinement $\mathcal V$: that is, $\mathcal V$ is an open cover of $X$ with the property that every $V\in\mathcal V$ is contained in some element of $\mathcal U$ (i.e. $\mathcal V$ refines $\mathcal U$) and there exists a function $c:\mathcal V\rightarrow\{0,1,\ldots,m\}$ such that if distinct $V,V'\in\mathcal V$ have $c(V)=c(V')$, then $V\cap V'=\emptyset$ (i.e. $\mathcal V$ can be $(m+1)$-coloured, in that each element of $\mathcal V$ can be assigned a colour such that two distinct sets of the same colour are disjoint). We need a slight strengthening so that the sets in $\mathcal V$ form a closed cover of $X$. This is well known, but we include a proof for completeness.
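To illustrate the colouring formulation, consider $X=[0,1]$, for which $\dim X\leq 1$: given a finite open cover $\mathcal U$ of $[0,1]$ with Lebesgue number $\delta>0$, choose $N\in\mathbb N$ with $2/N<\delta$ and set $V_j=\big(\tfrac{j-1}{N},\tfrac{j+1}{N}\big)\cap[0,1]$ for $j\in\{0,1,\dots,N\}$. Then $\mathcal V=\{V_0,\dots,V_N\}$ is an open cover of $[0,1]$ refining $\mathcal U$, and colouring each $V_j$ by the parity of $j$ gives a $2$-colouring, since $V_j\cap V_{j'}=\emptyset$ whenever $|j-j'|\geq 2$.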
\begin{lemma}\label{L:CD} Let $X$ be a compact Hausdorff topological space with $\dim(X)\leq m$. Then for each finite open cover $\mathcal U$ of $X$, there exists a finite cover $\mathcal V$ consisting of closed sets refining $\mathcal U$ such that there is an $(m+1)$-colouring $c:\mathcal V\rightarrow\{0,1,\dots,m\}$ of $\mathcal V$ with the property that if distinct $V,V'\in\mathcal V$ have $c(V)=c(V')$, then $V\cap V'=\emptyset$. \end{lemma} \begin{proof} Given a finite open cover $\mathcal U$ of $X$, we can find an open cover $\tilde{\mathcal V}$ refining $\mathcal U$ which is $(m+1)$-colourable. Construct a partition of unity $(f_V)_{V\in\tilde{\mathcal V}}$ subordinate to $\tilde{\mathcal V}$, i.e. $0\leq f_V\leq 1$ for all $V$, $\sum_{V\in \tilde{\mathcal V}}f_V(x)=1$ for all $x\in X$ and the support of each $f_V$ is contained in $V$. Let $\mathcal V$ be the collection of the supports of the $f_V$. This consists of closed sets, and refines $\tilde{\mathcal V}$ so is $(m+1)$-colourable. As every point $x\in X$ lies in the support of some $f_V$, it follows that $\mathcal V$ covers $X$. \end{proof}
The following lemma of Lin (\cite{L:JFA}, based on work of Cuntz and Pedersen \cite{CP:JFA}) enables us to realise strictly positive elements of $\mathrm{Aff}(T(A))$ via positive elements of $A$.
\begin{lemma}[Lin {\cite[Theorem 9.3]{L:JFA}}, following Cuntz, Pedersen {\cite{CP:JFA}}]\label{T.Lin}
Let $A$ be a simple unital $C^*$-algebra with non-empty tracial state space and let $f\in\mathrm{Aff}(T(A))$ be strictly positive. Then for any $\varepsilon>0$, there exists $x\in A_+$ with $f(\tau)=\tau(x)$ for all $\tau\in T(A)$ and $\|x\|\leq\|f\|+\varepsilon$. \end{lemma}
Given any positive contraction $e$ in a nuclear $C^*$-algebra $A$ we can apply Haagerup's approximate diagonal to $e$ to produce a central sequence of positive contractions which has the same tracial behaviour as $e$. In particular, we can witness strictly positive elements of $\mathrm{Aff}(T(A))$ via central sequences of positive contractions.
\begin{lemma}\label{L:CT}
Let $A$ be a simple separable unital nuclear $C^*$-algebra with a non-empty trace space and let $f$ be a positive affine continuous function on $T(A)$ with $\|f\|\leq 1$. Then there exists a sequence $(e_n)_{n=1}^\infty$ of positive contractions in $A$, representing an element of $A_\infty\cap A'$, with \begin{equation}\label{L:CT:E1}
\lim_{n\rightarrow\infty}\sup_{\tau\in T(A)}|\tau(e_n)-f(\tau)|=0. \end{equation} \end{lemma} \begin{proof} Define a sequence $(f_n)_{n=1}^\infty$ of continuous affine strictly positive functions on $T(A)$ by defining $$ f_n(\tau)=\frac{1}{3n}+\Big(1-\frac{2}{3n}\Big)f(\tau), $$
for $\tau\in T(A)$. By construction each $f_n$ is strictly positive and has $\|f_n\|\leq 1-\frac{1}{3n}$ and $|f_n(\tau)-f(\tau)|\leq \frac{1}{n}$ for all $\tau\in T(A)$. For each $n\in\mathbb N$, take $\varepsilon=\frac{1}{3n}$ in Lemma \ref{T.Lin} to obtain $x_n\in A_+$ with $\|x_n\|\leq 1$ such that $\tau(x_n)=f_n(\tau)$ for all $\tau\in T(A)$. By Haagerup's theorem (Lemma \ref{L.Amenable}), we can find an approximate diagonal $(\sum_{i=1}^{l_n}\lambda^{(n)}_ia^{(n)}_i\otimes a^{(n)}_i{}^*)_{n=1}^\infty$ in $A\odot A$ such that each $\|a_i^{(n)}\|\leq 1$ and $\lambda^{(n)}_i$ are positive reals with $\sum_{i=1}^{l_n}\lambda^{(n)}_i=1$ for all $n$ and \begin{align}
&\Big\|\sum_{i=1}^{l_n}\lambda^{(n)}_ia_i^{(n)}a_i^{(n)}{}^*-1_A \Big\|\stackrel{n\rightarrow\infty}{\rightarrow}0;\label{L:CT1}\\
&\Big\|\sum_{i=1}^{l_n}\lambda^{(n)}_iba_i^{(n)}\otimes a_i^{(n)}{}^*-\sum_{i=1}^{l_n}\lambda^{(n)}_ia_i^{(n)}\otimes a_i^{(n)}{}^*b\Big\|_{A\,\widehat{\otimes}\, A}\stackrel{n\rightarrow\infty}{\rightarrow}0,\quad b\in A.\label{L:CT2} \end{align}
Define $e_n=\sum_{i=1}^{l_n}\lambda^{(n)}_ia_i^{(n)}x_na^{(n)}_i{}^*$. These are positive contractions in $A$. For each $n\in\mathbb N$, the map $y\otimes z\mapsto yx_nz$ is contractive with respect to the projective tensor norm, so condition (\ref{L:CT2}) ensures that $(e_n)$ is a central sequence. For $\tau\in T(A)$, we estimate \begin{align*}
|\tau(x_n-e_n)|&=\Big|\tau\Big(x_n-\sum_{i=1}^{l_n}\lambda^{(n)}_ia_i^{(n)}x_na^{(n)}_i{}^*\Big)\Big|\\
&=\Big|\tau\Big(\Big(1_A-\sum_{i=1}^{l_n}\lambda^{(n)}_ia^{(n)}_i{}^*a_i^{(n)}\Big)x_n\Big)\Big|\\
&\leq\tau\Big(1_A-\sum_{i=1}^{l_n}\lambda^{(n)}_ia^{(n)}_i{}^*a_i^{(n)}\Big)\|x_n\|\\
&=\tau \Big(1_A-\sum_{i=1}^{l_n}\lambda^{(n)}_ia^{(n)}_ia_i^{(n)}{}^*\Big)\|x_n\|\\
&\leq \Big\|1_A-\sum_{i=1}^{l_n}\lambda^{(n)}_ia^{(n)}_ia_i^{(n)}{}^* \Big\|. \end{align*}
Then (\ref{L:CT:E1}) follows from this estimate, (\ref{L:CT1}) and the fact that $|f(\tau)-\tau(x_n)|\leq \frac{1}{n}$ for all $\tau\in T(A)$. \end{proof}
The next lemma enables us to convert central sequences which are tracially orthogonal to norm orthogonal sequences. The argument has its origins in Kishimoto's work \cite{K:JFA}, and our proof is based on \cite[Lemma 3.2]{MS:Acta}.
\begin{lemma}\label{L:KT} Let $A$ be a separable unital $C^*$-algebra with non-empty trace space $T(A)$. Let $T_0\subset T(A)$ be non-empty and suppose $(e^{(1)}_n)_{n=1}^\infty,\ldots,(e^{(L)}_n)_{n=1}^\infty$ are sequences of positive contractions in $A$ representing elements of $A_\infty\cap A'$ such that \begin{equation}\label{KT:E1}
\lim_{n\rightarrow\infty}\sup_{\tau\in T_0}|\tau(e_n^{(l)}e_n^{(l')})|=0,\quad l\neq l'. \end{equation} Then there exist positive elements $\tilde{e}^{(l)}_n\leq e^{(l)}_n$ so that: \begin{enumerate}[(i)] \item $(\tilde{e}_n^{(l)})_n$ represents an element of $A_\infty\cap A'$;\label{KT:1}
\item $\lim_{n\rightarrow\infty}\sup_{\tau\in T_0}|\tau(\tilde{e}^{(l)}_n-e_n^{(l)})|=0$ for all $l$;\label{KT:2} \item $(\tilde{e}_n^{(l)})_n\perp (\tilde{e}_n^{(l')})_n$ in $A_\infty\cap A'$ for $l\neq l'$.\label{KT:3} \end{enumerate} \end{lemma} \begin{proof} For each $l\in\{1,\dots,L\}$ and $n\in\mathbb N$ define $$ g^{(l)}_n=(e_n^{(l)})^{1/2}\Big(\sum_{l'\neq l}e_n^{(l')}\Big)(e_n^{(l)})^{1/2}, $$ so $(g^{(l)}_n)_{n=1}^\infty$ is a central sequence for each $l$. The hypothesis (\ref{KT:E1}) gives $$ \sup_{\tau\in T_0}\tau(g_n^{(l)})\leq \sum_{l'\neq l}\sup_{\tau\in T_0}|\tau(e_n^{(l)}e_n^{(l')})|\stackrel{n\rightarrow\infty}{\rightarrow}0. $$
For $r\in\mathbb N$, define the continuous function $f_r:[0,\infty)\rightarrow [0,1]$ by $f_r(t)=\min(1,rt)$ and note that $$ \sup_{t\geq 0}(1-f_r(t))t\leq 1/r, $$ since $(1-f_r(t))t\leq t\leq 1/r$ for $t\leq 1/r$, while $f_r(t)=1$ for $t\geq 1/r$. For $l\in\{1,\dots,L\}$ and $n,r\in\mathbb N$, define positive contractions by $$ x_{n,r}^{(l)}=(e_n^{(l)})^{1/2}\left(1-f_r(g_n^{(l)})\right)(e_n^{(l)})^{1/2}. $$ These satisfy $x_{n,r}^{(l)}\leq e_n^{(l)}$ and for each $l$ and $r$, the sequence $(x_{n,r}^{(l)})_{n=1}^\infty$ represents an element of $A_\infty\cap A'$.
For each $s\in\mathbb N$ and $l\in\{1,\dots,L\}$, we have $$
\sup_{\tau\in T_0}\tau((g_n^{(l)})^s)\leq\|g_n^{(l)}\|^{s-1}\sup_{\tau\in T_0}\tau(g_n^{(l)})\stackrel{n\rightarrow\infty}{\rightarrow}0. $$ By choosing suitable polynomial approximations to $f_r(t)$ on $[0,L-1]$, it follows that \begin{equation}\label{KT:E2}
\sup_{\tau\in T_0}\tau(e_n^{(l)}-x_{n,r}^{(l)})=\sup_{\tau\in T_0}\tau((e_n^{(l)})^{1/2}f_r(g_n^{(l)})(e_n^{(l)})^{1/2})\leq\|e^{(l)}_n\|\sup_{\tau\in T_0}\tau(f_r(g_n^{(l)}))\stackrel{n\rightarrow\infty}{\rightarrow}0, \end{equation} for each $l\in\{1,\dots,L\}$ and $r\in\mathbb N$.
For each $l$, we compute exactly as in \cite[Lemma 3.2]{MS:Acta} to obtain \begin{align}
\left\|x_{n,r}^{(l')}x_{n,r}^{(l)}\right\|^2&=\left\|x_{n,r}^{(l)}(x_{n,r}^{(l')})^2x_{n,r}^{(l)}\right\|\nonumber\\
&\leq\Big\|x_{n,r}^{(l)}\Big(\sum_{j\neq l}x_{n,r}^{(j)}\Big)x_{n,r}^{(l)}\Big\|\nonumber\\
&=\Big\|(e_n^{(l)})^{1/2}\left(1-f_r(g_n^{(l)})\right)(e_n^{(l)})^{1/2}\Big(\sum_{j\neq l}x_{n,r}^{(j)}\Big)(e_n^{(l)})^{1/2}\left(1-f_r(g_n^{(l)})\right)(e_n^{(l)})^{1/2}\Big\|\nonumber\\
&\leq\big\|(e_n^{(l)})^{1/2}\left(1-f_r(g_n^{(l)})\right)g_n^{(l)}\left(1-f_r(g_n^{(l)})\right)(e_n^{(l)})^{1/2}\big\|\nonumber\\
&\leq\left\|\left(1-f_r(g_n^{(l)})\right)g_n^{(l)}\right\|\leq 1/r,\label{KT:E3} \end{align} for every $n,r\in\mathbb N$ and $l'\neq l$.
Fix a countable dense sequence $(y_s)_{s=1}^\infty$ in $A$. For each $r\in\mathbb N$, use (\ref{KT:E2}) and the fact that for each $l\in\{1,\dots,L\}$, $(x_{n,r}^{(l)})_{n=1}^\infty$ is a central sequence to obtain $N_r\in\mathbb N$ such that \begin{itemize}
\item $\|[x^{(l)}_{n,r},y_s]\|\leq 1/r$ for $s\in\{1,\dots,r\}$ and $l\in\{1,\dots,L\}$; \item $\sup_{\tau\in T_0}\tau(e_{n}^{(l)}-x_{n,r}^{(l)})<1/r$ for $l\in\{1,\dots,L\}$ \end{itemize} for $n\geq N_r$. We may assume that $N_r<N_{r+1}$ for all $r$. Set $N_0=0$. For $n\in\mathbb N$, let $r_n\in\mathbb N$ be such that $N_{r_n}<n\leq N_{r_n+1}$ (interpreting $x^{(l)}_{n,0}$ as $e^{(l)}_n$), so that $r_n\rightarrow\infty$ as $n\rightarrow\infty$, and define $\tilde{e}_n^{(l)}=x_{n,r_n}^{(l)}$. The two conditions above give conditions (\ref{KT:1}) and (\ref{KT:2}), while (\ref{KT:3}) is a consequence of (\ref{KT:E3}). \end{proof}
We are now in a position to give the main technical lemma, which is already enough to handle the $0$-dimensional compact extreme boundary case.
\begin{lemma}\label{L:Key} Let $m\geq 0$, $k\geq 2$ and let $A$ be a simple separable unital nuclear nonelementary $C^*$-algebra with $T(A)\neq\emptyset$ such that $\partial_eT(A)$ is compact with $\dim(\partial_eT(A))\leq m$. Then for each finite set $\mathcal F\subset A$ and $\varepsilon>0$, there exist cpc order zero maps $\phi^{(0)},\dots,\phi^{(m)}:M_k\rightarrow A$ such that \begin{equation}\label{L:Key:1}
\|[\phi^{(i)}(x),y]\|\leq \varepsilon\|x\|, \end{equation} for all $i\in\{0,\dots,m\}$, $x\in M_k$, $y\in\mathcal F$ and such that for each $\tau\in\partial_eT(A)$, there exists $i(\tau)\in \{0,\dots,m\}$ such that $\tau(\phi^{(i(\tau))}(1_k))>1-\varepsilon$. \end{lemma} \begin{proof} Fix $k\geq 2$, a finite subset $\mathcal F\subset A$ and $\varepsilon>0$. For each $\tau\in \partial_eT(A)$, use Lemma \ref{L:OT} to provide a cpc order zero map $\Phi_\tau:M_k\rightarrow A_\infty\cap A'$ with $\tau_\omega(\Phi_\tau(1_k))=1$ for all $\omega\in\beta\mathbb N\setminus\mathbb N$. By Remark \ref{R:OT}, we can go sufficiently far down a sequence of cpc order zero maps from $M_k$ into $A$ which lift $\Phi_\tau$ to find a cpc order zero map
$\phi_\tau:M_k\rightarrow A$ with $\tau(\phi_\tau(1_k))>1-\varepsilon$ and \begin{equation}\label{L:Key:6}
\|[\phi_\tau(x),y]\|<\varepsilon\|x\|, \end{equation} for all $x\in M_k$, $y\in\mathcal F$, $\tau\in\partial_eT(A)$. Define an open neighbourhood of $\tau$ in $\partial_eT(A)$ by $U_\tau=\{\rho\in\partial_eT(A):\rho(\phi_\tau(1_k))>1-\varepsilon\}$. Then $\partial_eT(A)=\bigcup_{\tau\in\partial_eT(A)}U_\tau$, so by compactness there exist $\tau_1,\dots,\tau_L\in\partial_eT(A)$ such that $\partial_eT(A)=\bigcup_{l=1}^LU_{\tau_l}$. As $\dim(\partial_eT(A))\leq m$, Lemma \ref{L:CD} gives a finite cover $\mathcal V$ of $\partial_eT(A)$ consisting of closed sets such that $\mathcal V$ refines $\mathcal U=\{U_{\tau_1},\dots,U_{\tau_L}\}$ and a colouring $c:\mathcal V\rightarrow \{0,1,\dots,m\}$ such that if distinct $V,V'\in\mathcal V$ have $c(V)=c(V')$, then $V$ and $V'$ are disjoint. For each $i\in\{0,\dots,m\}$, write $\mathcal V^{(i)}=c^{-1}(\{i\})=\{V^{(i)}_1,\dots,V^{(i)}_{L_i}\}$.
Fix $i\in\{0,\dots,m\}$. For each $j=1,\dots,L_i$, choose a continuous function $f^{(i)}_j:\partial_eT(A)\rightarrow [0,1]$ with $f^{(i)}_j=1$ on $V^{(i)}_j$ and $f^{(i)}_j=0$ on $\bigcup_{j'\neq j}V^{(i)}_{j'}$. This is possible as the elements of $\mathcal V^{(i)}$ are pairwise disjoint closed subsets of $\partial_eT(A)$. As $\partial_eT(A)$ is compact, $T(A)$ is a Bauer simplex and so we can extend $f^{(i)}_j$ to a continuous affine function on $T(A)$, also denoted $f^{(i)}_j$. Now apply Lemma \ref{L:CT} to obtain central sequences $(e_n^{(i,j)})_{n=1}^\infty$ of positive contractions such that $$
\lim_{n\rightarrow\infty}\sup_{\tau\in T(A)}|\tau(e^{(i,j)}_n)-f^{(i)}_j(\tau)|=0, $$ for each $j=1,\dots,L_i$. Thus \begin{equation}\label{L:Key:3} \lim_{n\rightarrow\infty}\inf_{\tau\in V^{(i)}_j}\tau(e^{(i,j)}_n)=1,\quad \lim_{n\rightarrow\infty}\sup_{\tau\in\bigcup_{j'\neq j}V^{(i)}_{j'}}\tau(e^{(i,j)}_n)=0. \end{equation} By construction $$ \lim_{n\rightarrow\infty}\sup_{\tau\in\bigcup_{s=1}^{L_i}V_s^{(i)}}\tau(e_n^{(i,j)}e_n^{(i,j')})=0 $$ for all $j\neq j'$ in $\{1,\dots,L_i\}$. Therefore we can apply Lemma \ref{L:KT} with $T_0=\bigcup_{s=1}^{L_i}V^{(i)}_s$ to obtain central sequences $(\tilde{e}^{(i,j)}_n)_{n=1}^\infty$ of positive contractions with $\tilde{e}^{(i,j)}_n\leq e^{(i,j)}_n$, \begin{equation}\label{L:Key:2}
\lim_{n\rightarrow\infty}\|\tilde{e}_n^{(i,j)}\tilde{e}_n^{(i,j')}\|=0 \end{equation} for $j\neq j'$ and \begin{equation}\label{L:Key:4} \lim_{n\rightarrow\infty}\sup_{\tau\in\bigcup_{s=1}^{L_i}V_s^{(i)}}\tau(e^{(i,j)}_n-\tilde{e}^{(i,j)}_n)=0, \end{equation} for $j\in\{1,\dots,L_i\}$. In this way (\ref{L:Key:3}) and (\ref{L:Key:4}) give \begin{equation}\label{L:Key:5} \lim_{n\rightarrow\infty}\inf_{\tau\in V^{(i)}_j}\tau(\tilde{e}^{(i,j)}_n)=1. \end{equation}
For $i\in\{0,\dots,m\}$ and $j\in\{1,\dots,L_i\}$, there exists $l(i,j)\in\{1,\dots,L\}$ such that $V^{(i)}_j\subset U_{\tau_{l(i,j)}}$. For $i\in\{0,\dots,m\}$ and $n\in\mathbb N$, define maps $\psi^{(i)}_n:M_k\rightarrow A$ by \begin{equation}\label{L:Key:8} \psi^{(i)}_n(x)=\sum_{j=1}^{L_i}\tilde{e}^{(i,j)}_n{}^{1/2}\phi_{\tau_{l(i,j)}}(x)\tilde{e}^{(i,j)}_n{}^{1/2}, \end{equation} for $x\in M_k$. For each $i$, the sequences $(\psi^{(i)}_n)_{n=1}^\infty$ induce maps $\Psi^{(i)}:M_k\rightarrow A_\infty$. Further, by (\ref{L:Key:2}), we have $(\tilde{e}^{(i,j)}_n)\perp (\tilde{e}^{(i,j')}_n)$ in $A_\infty\cap A'$, so that $\Psi^{(i)}$ is a sum of $L_i$ pairwise orthogonal cpc order zero maps and so is cpc and order zero. The condition $(\tilde{e}^{(i,j)}_n)\perp (\tilde{e}^{(i,j')}_n)$ in $A_\infty\cap A'$ also allows us to use (\ref{L:Key:6}) to obtain \begin{equation}\label{L:Key:9}
\|[\Psi^{(i)}(x),y]\|\leq \max_{j\in\{1,\dots,L_i\}}\|[\phi_{\tau_{l(i,j)}}(x),y]\|<\varepsilon\|x\|, \end{equation} for all $i\in\{0,\dots,m\}$, $x\in M_k$, $y\in\mathcal F$. For $\rho\in V^{(i)}_j$, we have $\rho(\phi_{\tau_{l(i,j)}}(1_k))>1-\varepsilon$ as $V^{(i)}_j\subset U_{\tau_{l(i,j)}}$, giving \begin{align} \rho(\psi^{(i)}_n(1_k))&\geq \rho(\tilde{e}^{(i,j)}_n{}^{1/2}\phi_{\tau_{l(i,j)}}(1_k)\tilde{e}^{(i,j)}_n{}^{1/2})\nonumber\\ &=\rho(\tilde{e}^{(i,j)}_n\phi_{\tau_{l(i,j)}}(1_k))\nonumber\\ &=\rho(\phi_{\tau_{l(i,j)}}(1_k))-\rho((1_A-\tilde{e}^{(i,j)}_n)\phi_{\tau_{l(i,j)}}(1_k))\nonumber\\ &>(1-\varepsilon)-\rho(1_A-\tilde{e}^{(i,j)}_n).\label{L:Key:7} \end{align} Combining (\ref{L:Key:7}) with (\ref{L:Key:5}) gives \begin{equation}\label{L:Key:10} \liminf_{n\rightarrow\infty}\inf_{\rho\in\bigcup_{j=1}^{L_i}V_j^{(i)}}\rho(\psi^{(i)}_n(1_k))>1-\varepsilon. \end{equation}
For each $i\in\{0,\dots,m\}$, take a lifting $(\phi^{(i)}_n)_{n=1}^\infty$ of $\Psi^{(i)}$ to a sequence of cpc order zero maps $M_k\rightarrow A$. We claim that for $n$ sufficiently large, the maps $\phi^{(0)}_n,\dots,\phi^{(m)}_n$ satisfy the properties claimed in the statement of the lemma. Indeed, since
$$\sup_{\substack{x\in M_k\\\|x\|\leq 1}}\|\phi^{(i)}_n(x)-\psi^{(i)}_n(x)\|\rightarrow 0,$$ (\ref{L:Key:9}) gives $$
\|[\phi^{(i)}_n(x),y]\|<\varepsilon\|x\| $$ for all $n$ sufficiently large and for all $i\in\{0,\dots,m\}$, $x\in M_k$, $y\in\mathcal F$. By (\ref{L:Key:10}), we have \begin{equation}\label{L:Key:11} \liminf_{n\rightarrow\infty}\inf_{\rho\in\bigcup_{j=1}^{L_i}V_j^{(i)}}\rho(\phi^{(i)}_n(1_k))>1-\varepsilon, \end{equation} and so for all $n$ sufficiently large we have $$ \rho(\phi^{(i)}_n(1_k))>1-\varepsilon, $$ for all $i\in\{0,\dots,m\}$ and $\rho\in\bigcup_{j=1}^{L_i}V_j^{(i)}$. Since $\bigcup_{i=0}^m\bigcup_{j=1}^{L_i}V^{(i)}_j=\partial_eT(A)$, the result follows with $i(\rho)=\min\{i:\rho\in \bigcup_{j=1}^{L_i}V_j^{(i)}\}$. \end{proof}
In the zero dimensional case, we immediately obtain uniformly tracially large order zero maps from the previous lemma. \begin{theorem}\label{T:0D} Let $A$ be a simple separable unital nuclear nonelementary $C^*$-algebra with $T(A)\neq\emptyset$ and $\partial_eT(A)$ compact and zero dimensional. Then for each $k\geq 2$, $A$ admits uniformly tracially large order zero maps $M_k\rightarrow A_\infty\cap A'$. \end{theorem} \begin{proof} Fix $k\geq 2$. Take a nested sequence $(\mathcal F_n)_{n=1}^\infty$ of finite subsets of $A$ whose union is dense in $A$. For each $n$, Lemma \ref{L:Key} gives a cpc order zero map $\phi_n:M_k\rightarrow A$ with $$
\|[\phi_n(x),y]\|\leq \frac{1}{n}\|x\|, $$ for all $y\in\mathcal F_n$ and $x\in M_k$, and \begin{equation}\label{T:0D:1} \tau(\phi_n(1_k))>1-\frac{1}{n} \end{equation} for all $\tau\in\partial_eT(A)$ and all $n\in\mathbb N$. As $\tau\mapsto\tau(\phi_n(1_k))$ is affine and weak$^*$-continuous, and $T(A)$ is the closed convex hull of the compact set $\partial_eT(A)$, (\ref{T:0D:1}) holds for all $\tau\in T(A)$ and $n\in\mathbb N$. Thus the sequence $(\phi_n)_{n=1}^\infty$ induces a uniformly tracially large cpc order zero map $\Phi:M_k\rightarrow A_\infty\cap A'$ by Lemma \ref{L:TI}. \end{proof}
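For example, the hypotheses on the trace space in Theorem \ref{T:0D} are satisfied whenever $T(A)$ has finitely many extreme points, and more generally whenever $\partial_eT(A)$ is a compact totally disconnected metrisable space, such as the Cantor set.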
\begin{corollary} Let $A$ be a simple separable unital nuclear nonelementary $C^*$-algebra with $T(A)\neq\emptyset$ and $\partial_eT(A)$ compact and zero dimensional. If, in addition, $A$ has strict comparison, then $A$ is $\mathcal Z$-stable. \end{corollary} \begin{proof} This follows immediately from Theorems \ref{MS} and \ref{T:0D}. \end{proof} \section{Higher dimensional compact extreme boundaries}\label{Sect4}
In this last section we extend the previous work to higher dimensional compact extreme boundaries. The argument is based on the techniques developed in \cite{W:Invent1,W:Invent2}. The starting point is to use the finite dimensional compact extreme boundary to obtain a finite collection of cpc order zero maps with a large tracial sum in the sense of Lemma \ref{L:S1} below. It is at this point in the argument that it is critical that we can obtain the family of order zero maps in Lemma \ref{L:Key} not just with a large tracial sum, but so that for each $\tau\in \partial_eT(A)$ one member of the family is large in $\tau$.
\begin{lemma}\label{L:S1} Let $m\geq 0$ and let $A$ be a simple separable unital nuclear nonelementary $C^*$-algebra with $T(A)\neq\emptyset$ such that $\partial_eT(A)$ is compact with $\dim(\partial_eT(A))\leq m$. Then, for $k\geq 2$ and any separable subspace $X\subset A_\infty$, there exist cpc order zero maps $\phi^{(0)},\dots,\phi^{(m)}:M_k\rightarrow A_\infty\cap A'\cap X'$ such that \begin{equation}\label{L:S1:2} \tau\Big(\sum_{i=0}^m\phi^{(i)}(1_k)b\Big)\geq\tau(b), \end{equation} for all $\tau\in T_\infty(A)$ and all $b\in (A_\infty)_+$. \end{lemma}
\begin{proof} Fix $k\geq 2$ and a separable subspace $X\subset A_\infty$. Let $(y^{(i)})_{i=1}^\infty$ be a countable dense subset of $C^*(A,X)\subset A_\infty$ and lift each $y^{(i)}$ to a sequence $(y^{(i)}_n)_{n=1}^\infty$ in $\ell^\infty(A)$. For $n\in\mathbb N$, define a finite subset of $A$ by $\mathcal F_n=\{y^{(i)}_m:1\leq i,m\leq n\}$. For each $n$, use Lemma \ref{L:Key} to obtain cpc order zero maps $\phi^{(0)}_n,\dots,\phi^{(m)}_n:M_k\rightarrow A$ with $$
\|[\phi^{(i)}_n(x),y]\|\leq \frac{1}{n}\|x\|, $$ for $i\in\{0,\dots,m\}$, $x\in M_k$, $y\in\mathcal F_n$ and $n\in\mathbb N$ and such that for each $\rho\in \partial_eT(A)$ and $n\in\mathbb N$, there exists $i(\rho,n)$ with $\rho(\phi^{(i(\rho,n))}_n(1_k))>1-1/n$. These maps induce cpc order zero maps $\phi^{(0)},\dots,\phi^{(m)}:M_k\rightarrow A_\infty\cap A'\cap X'$.
Consider $\rho\in \partial_eT(A)$. For $n\in\mathbb N$, $a\in A_+$ with $\|a\|\leq 1$, we have \begin{align} \rho\Big(\sum_{i=0}^m\phi^{(i)}_n(1_k)a\Big)&\geq\rho(\phi_n^{(i(\rho,n))}(1_k)a)\nonumber\\ &=\rho(a)-\rho\Big(\big(1_A-\phi_n^{(i(\rho,n))}(1_k)\big)a\Big)\nonumber\\ &\geq\rho(a)-\rho(1_A-\phi_n^{(i(\rho,n))}(1_k))\nonumber\\ &\geq\rho(a)-\frac{1}{n}.\label{L:S1:1} \end{align} By convexity, the estimate (\ref{L:S1:1}) holds for all $\rho\in T(A)$. Given a sequence $(\tau_n)_{n=1}^\infty$ in $T(A)$, a sequence $(b_n)_{n=1}^\infty$ of positive contractions in $A$ representing $b\in A_\infty$, and a free ultrafilter $\omega\in\beta\mathbb N\setminus\mathbb N$, taking limits in (\ref{L:S1:1}) gives $$ \lim_{n\rightarrow\omega}\tau_n\Big(\sum_{i=0}^m\phi^{(i)}_n(1_k)b_n\Big)\geq\lim_{n\rightarrow\omega}\tau_n(b_n). $$ That is $$ \tau\Big(\sum_{i=0}^m\phi^{(i)}(1_k)b\Big)\geq\tau(b), $$ for all $\tau\in T_\infty(A)$ and all $b\in (A_\infty)_+$, verifying (\ref{L:S1:2}). \end{proof}
Before proceeding, we extract a standard central sequence argument from \cite{W:Invent2}. Writing $\beta=\frac{1}{\overline{k}+1}$ (in the notation of \cite{W:Invent2}) and following the proof of \cite[Proposition 4.6]{W:Invent2} verbatim from equation (38) through to the 4th displayed equation on page 288 of \cite{W:Invent2}, one obtains the following lemma. \begin{lemma}\label{W:Extract} Let $A$ be a separable unital $C^*$-algebra, let $X\subset A_\infty$ be a separable subspace and let $0<\beta<1$. Suppose that for each $\eta>0$, there exist orthogonal positive contractions $d^{(0)}_\eta,d^{(1)}_\eta$ in $A_\infty\cap A'\cap X'$ with $$ \tau(d^{(i)}_\eta b)\geq \beta\tau(b)-\eta $$ for $i\in\{0,1\}$, $\tau\in T_\infty(A)$ and contractions $b\in C^*(A,X)_+$. Then there exist orthogonal positive contractions $d^{(0)},d^{(1)}\in A_\infty\cap A'\cap X'$ with $$ \tau(d^{(i)}b)\geq\beta\tau(b), $$ for $i\in\{0,1\}$, $\tau\in T_\infty(A)$ and contractions $b\in C^*(A,X)_+$. \end{lemma}
We can now use Lemma \ref{L:S1} to establish a version of \cite[Proposition 4.6]{W:Invent2}. Essentially the argument is the same as the deduction of (36) and (37) from (33) in \cite{W:Invent2}, but since the maps arising in our proof have slightly different domains, we give the details for completeness. For $0\leq\eta_1<\eta_2$, we denote by $g_{\eta_1,\eta_2}$ the continuous piecewise linear function on $\mathbb R$ given by \begin{equation}\label{Defg} g_{\eta_1,\eta_2}(t)=\begin{cases}1,&t\geq \eta_2;\\\frac{t-\eta_1}{\eta_2-\eta_1},&\eta_1<t<\eta_2;\\0,&t\leq\eta_1.\end{cases} \end{equation} \begin{lemma}\label{L:S2} Given $m\geq 0$ and $k\geq 1$, there is $L_{m,k}\in\mathbb N$ such that, given a simple separable unital nuclear nonelementary $C^*$-algebra $A$ with $T(A)\neq\emptyset$ such that $\partial_eT(A)$ is compact with $\dim(\partial_eT(A))\leq m$, and a separable subspace $X\subset A_\infty$, there exist pairwise orthogonal positive contractions $$ d^{(1)},\dots,d^{(k)}\in A_\infty\cap A'\cap X' $$ such that $$ \tau(d^{(i)}b)\geq \frac{1}{L_{m,k}}\tau(b) $$ for all $i\in\{1,\dots,k\}$, $\tau\in T_\infty(A)$ and $b\in C^*(A,X)_+\subset A_\infty$. \end{lemma} \begin{proof} When $k=1$, we can take $L_{m,k}=1$ and $d^{(1)}=1_{A_\infty}$. We prove the statement when $k=2$. Once the statement is established for $k=2$, the general case follows by induction using exactly the same argument as in the last two paragraphs of the proof of \cite[Proposition 4.6]{W:Invent2}.
Define $L_{m,2}=2(m+1)$ and fix a separable subspace $X\subset A_\infty$. By Lemma \ref{L:S1}, there exist cpc order zero maps $\phi^{(0)},\dots,\phi^{(m)}:M_{2(m+1)}\rightarrow A_\infty\cap A'\cap X'$ such that \begin{equation}\label{L:S2:2} \tau\Big(\sum_{i=0}^m\phi^{(i)}(1_{2(m+1)})b\Big)\geq\tau(b), \end{equation} for all $b\in (A_\infty)_+$ and $\tau\in T_\infty(A)$. For contractions $b\in C^*(A,X)_+$, the maps $\phi^{(i)}(\cdot)b$ are cpc and order zero, so $\tau(\phi^{(i)}(\cdot)b)$ is a trace on $M_{2(m+1)}$, \cite[Corollary 4.4]{WZ:MJM}. Thus $$ \tau(\phi^{(i)}(e_{11})b)=\frac{1}{2(m+1)}\tau(\phi^{(i)}(1_{2(m+1)})b), $$ for all $i\in\{0,\dots,m\}$, all contractions $b\in C^*(A,X)_+$ and all $\tau\in T_\infty(A)$. Summing over $i$, we have \begin{equation}\label{L:S2:1} \tau\Big(\sum_{i=0}^m\phi^{(i)}(e_{11})b\Big)=\frac{1}{2(m+1)}\tau\Big(\sum_{i=0}^m\phi^{(i)}(1_{2(m+1)})b\Big)\geq\frac{1}{2(m+1)}\tau(b), \end{equation} for all $b\in C^*(A,X)_+$ and all $\tau\in T_\infty(A)$.
For $\eta>0$, define $$ d_\eta^{(0)}=g_{\eta,2\eta}\Big(\sum_{i=0}^m\phi^{(i)}(e_{11})\Big),\quad d_\eta^{(1)}=1_{A_\infty}-g_{0,\eta}\Big(\sum_{i=0}^m\phi^{(i)}(e_{11})\Big) $$ so that $d^{(0)}_\eta$ and $d^{(1)}_\eta$ are pairwise orthogonal positive contractions in $A_\infty\cap A'\cap X'$. We have $d^{(0)}_\eta+\eta 1_{A_\infty}\geq \sum_{i=0}^m\phi^{(i)}(e_{11})$ so (\ref{L:S2:1}) gives \begin{equation}\label{L:S2:3} \tau(d^{(0)}_\eta b)\geq\frac{1}{2(m+1)}\tau(b)-\eta, \end{equation} for contractions $b\in C^*(A,X)_+$ and $\tau\in T_\infty(A)$. For a contraction $b\in C^*(A,X)_+$ and $\tau\in T_\infty(A)$ we have \begin{align} \tau((1_{A_\infty}-d^{(1)}_\eta)b)&=\tau\Big(g_{0,\eta}\Big(\sum_{i=0}^m\phi^{(i)}(e_{11})\Big)b\Big)\nonumber\\ &\leq\lim_{l\rightarrow\infty}\tau\Big(\Big(\sum_{i=0}^m\phi^{(i)}(e_{11})\Big)^{1/l} b\Big)\nonumber\\ &\leq\sum_{i=0}^m\lim_{l\rightarrow\infty}\tau\left((\phi^{(i)}(e_{11}))^{1/l}b\right)\label{L:S2:4}\\ &=\sum_{i=0}^m\lim_{l\rightarrow\infty}\tau\left((\phi^{(i)})^{1/l}(e_{11})b\right)\nonumber\\ &=\sum_{i=0}^m\lim_{l\rightarrow\infty}\frac{1}{2(m+1)}\tau\left((\phi^{(i)})^{1/l}(1_{2(m+1)})b\right)\label{L:S2:5}\\ &\leq\frac{m+1}{2(m+1)}\tau(b)=\frac{1}{2}\tau(b).\label{L:S2:6} \end{align} Here (\ref{L:S2:4}) uses the fact that $\langle \sum_{i=0}^m\phi^{(i)}(e_{11})\rangle\leq \sum_{i=0}^m\langle \phi^{(i)}(e_{11})\rangle$ in the Cuntz semigroup $\mathrm{Cu}(C^*(\phi^{(i)}(e_{11}):i=0,\dots,m))$ and $a\mapsto\lim_{l\rightarrow\infty}\tau(a^{1/l}b)$ is a dimension function on $C^*(\phi^{(i)}(e_{11}):i=0,\dots,m)\subset A_\infty\cap A'\cap X'$. The equality (\ref{L:S2:5}) follows as for each $i$ and $l$, the map $\tau((\phi^{(i)})^{1/l}(\cdot)b)$ is a trace on $M_{2(m+1)}$, \cite[Corollary 4.4]{WZ:MJM}. The estimate (\ref{L:S2:6}) gives \begin{equation}\label{L:S2:7} \tau(d^{(1)}_\eta b)\geq \Big(1-\frac{1}{2}\Big)\tau(b)=\frac{1}{2}\tau(b) \end{equation} for all $\tau\in T_\infty(A)$ and $b\in C^*(A,X)_+$.
Thus $$ \tau(d^{(i)}_\eta b)\geq\frac{1}{2(m+1)}\tau(b)-\eta $$ for $i=0,1$, $\tau\in T_\infty(A)$ and contractions $b\in C^*(A,X)_+$. As such the $k=2$ case of the lemma follows from Lemma \ref{W:Extract}. \end{proof}
Combining the previous two lemmas we obtain order zero maps $M_k\rightarrow A_\infty\cap A'$ which are nowhere small in trace in the presence of compact finite dimensional extremal boundary. \begin{proposition}\label{P:S3} Given $m\geq 0$ there is $0<\alpha_{m}\leq 1$ such that, given $k\geq 2$, a simple separable unital nuclear nonelementary $C^*$-algebra $A$ such that $T(A)\neq\emptyset$ and $\partial_eT(A)$ is compact with $\dim(\partial_eT(A))\leq m$, and a separable subspace $X\subset A_\infty$, there exists a cpc order zero map $\Phi:M_k\rightarrow A_\infty\cap A'\cap X'$ such that $$ \tau(\Phi(1_k)b)\geq \alpha_{m}\tau(b), $$ for all $\tau\in T_\infty(A)$ and $b\in C^*(A,X)_+\subset A_\infty$. \end{proposition} \begin{proof} Fix $m\geq 0$, $k\geq 2$ and a separable subspace $X\subset A_\infty$. Suppose that $A$ is simple separable unital nuclear nonelementary with $T(A)\neq\emptyset$ and $\partial_eT(A)$ compact and with $\dim(\partial_eT(A))\leq m$. By Lemma \ref{L:S1}, there are cpc order zero maps $\phi^{(0)},\ldots,\phi^{(m)}:M_k\rightarrow A_\infty\cap A'\cap X'$ with \begin{equation}\label{P:S3:1} \tau\Big(\sum_{i=0}^m\phi^{(i)}(1_k)b\Big)\geq\tau(b), \end{equation} for all $\tau\in T_\infty(A)$ and all $b\in (A_\infty)_+$. By Lemma \ref{L:S2}, there exists $L_{m,m+1}\in\mathbb N$ and pairwise orthogonal contractions $d^{(0)},\dots,d^{(m)}$ in $A_\infty\cap A'\cap X'\cap (\mathrm{span}\{\phi^{(i)}(M_k):i=0,\dots,m\})'$ such that \begin{equation}\label{P:S3:4} \tau(d^{(i)}b)\geq \frac{1}{L_{m,m+1}}\tau(b), \end{equation} for all $i\in\{0,\dots,m\}$, $b\in C^*(A,X,\phi^{(j)}(M_k):j=0,\dots,m)_+$ and $\tau\in T_\infty(A)$.
Define $\Phi:M_k\rightarrow A_\infty\cap A'\cap X'$ by $$ \Phi(x)=\sum_{i=0}^m\phi^{(i)}(x)d^{(i)}. $$ This is cpc and order zero as the $d^{(i)}$'s are pairwise orthogonal and commute with the image of the $\phi^{(j)}$'s. Given a contraction $b\in C^*(A,X)_+$ and $\tau\in T_\infty(A)$, we have \begin{align} \tau(\Phi(1_k)b)&=\tau\Big(\sum_{i=0}^m\phi^{(i)}(1_k)d^{(i)}b\Big)\nonumber\\ &\geq \frac{1}{L_{m,m+1}}\tau\Big(\sum_{i=0}^m\phi^{(i)}(1_k)b\Big)\label{P:S3:2}\\ &\geq\frac{1}{L_{m,m+1}}\tau(b)\label{P:S3:3}, \end{align} where (\ref{P:S3:2}) follows from (\ref{P:S3:4}) and (\ref{P:S3:3}) from (\ref{P:S3:1}). Thus we can take $\alpha_m=\frac{1}{L_{m,m+1}}$. \end{proof}
A geometric sequence argument in the spirit of \cite{W:Invent1,W:Invent2} can be used to obtain uniformly tracially large order zero maps from Proposition \ref{P:S3}. The proof we give below uses the estimates from the geometric series argument in \cite[Lemma 5.11]{W:Invent2} but we simplify the calculations a little by taking a maximality approach. We work abstractly from the conclusion of Proposition \ref{P:S3} and so begin by noting that this implies the conclusion of Lemma \ref{L:S2}. We continue to use the functions $g_{\eta_1,\eta_2}$ defined in (\ref{Defg}).
\begin{lemma}\label{L:GS} Let $k\geq 2$ and suppose $A$ is a separable unital $C^*$-algebra for which there exists $\alpha>0$ such that for all separable subspaces $X\subset A_\infty$, there exists a cpc order zero map $\Phi:M_k\rightarrow A_\infty\cap A'\cap X'$ such that \begin{equation}\label{L:GS:1} \tau(\Phi(1_k)b)\geq \alpha\tau(b), \end{equation} for all $\tau\in T_\infty(A)$ and $b\in C^*(A,X)_+\subset A_\infty$. Then $A$ admits uniformly tracially large cpc order zero maps $M_k\rightarrow A_\infty\cap A'$. \end{lemma} \begin{proof} Fix $k\geq 2$. First note that the hypothesis gives that there exists some $\gamma>0$ with the property that for any separable subspace $X\subset A_\infty$, there exist pairwise orthogonal positive contractions $d^{(0)},d^{(1)}\in A_\infty\cap A'\cap X'$ such that \begin{equation}\label{L:GS:2} \tau(d^{(i)}b)\geq \gamma\tau(b) \end{equation} for $i\in\{0,1\}$, $\tau\in T_\infty(A)$ and $b\in C^*(A,X)_+\subset A_\infty$. Indeed, one can take a cpc order zero map $\Phi:M_k\rightarrow A_\infty\cap A'\cap X'$ satisfying (\ref{L:GS:1}). For a contraction $b\in C^*(A,X)_+$, $\Phi(\cdot)b$ defines a cpc order zero map on $M_k$ so that $\tau(\Phi(\cdot)b)$ is a trace on $M_k$ for each $\tau\in T_\infty(A)$ (\cite[Corollary 4.4]{WZ:MJM}). As such $$ \tau(\Phi(e)b)=\frac{\mathrm{Tr}_{M_k}(e)}{\mathrm{Tr}_{M_k}(1_k)}\tau(\Phi(1_k)b)\geq\frac{\mathrm{Tr}_{M_k}(e)}{\mathrm{Tr}_{M_k}(1_k)}\alpha\tau(b) $$ for all $e\in (M_k)_+$, $\tau\in T_\infty(A)$ and $b\in C^*(A,X)_+$. Taking $d^{(i)}=\Phi(e^{(i)})$ for a pair $e^{(0)},e^{(1)}$ of orthogonal projections in $M_k$ of normalized trace at least $1/3$ we can take $\gamma=\alpha/3$.
Let $\alpha_0>0$ be the supremum of all $\alpha\geq 0$ with the property that for each separable subspace $X\subset A_\infty$, there exists a cpc order zero map $\Phi:M_k\rightarrow A_\infty\cap A'\cap X'$ satisfying (\ref{L:GS:1}). We must prove that $\alpha_0=1$, so suppose to the contrary that $0<\alpha_0<1$. Fix a separable subspace $X\subset A_\infty$ and take $\varepsilon>0$ such that \begin{equation}\label{L:GS:6} \alpha_1=\Big(\big(1-(\alpha_0-\varepsilon)\gamma\big)(\alpha_0-3\varepsilon)+(\alpha_0-\varepsilon)\gamma\Big)>\alpha_0. \end{equation}
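For completeness, we record why such an $\varepsilon$ exists (a routine continuity check, added here for the reader's convenience): evaluating the expression in (\ref{L:GS:6}) at $\varepsilon=0$ gives $$ \big(1-\alpha_0\gamma\big)\alpha_0+\alpha_0\gamma=\alpha_0+\alpha_0\gamma(1-\alpha_0)>\alpha_0, $$ since $0<\alpha_0<1$ and $\gamma>0$; as the expression depends continuously on $\varepsilon$, the strict inequality persists for all sufficiently small $\varepsilon>0$.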
Find a cpc order zero map $\Phi_0:M_k\rightarrow A_\infty\cap A'\cap X'$ such that \begin{equation}\label{L:GS:4} \tau(\Phi_0(1_k)b)\geq (\alpha_0-\varepsilon)\tau(b) \end{equation} for all $\tau\in T_\infty(A)$ and $b\in C^*(A,X)_+\subset A_\infty$. Find pairwise orthogonal contractions $d^{(0)},d^{(1)}\in A_\infty\cap A'\cap X'\cap \Phi_0(M_k)'$ such that \begin{equation}\label{L:GS:8} \tau(d^{(i)}f(\Phi_0)(1_k)b)\geq \gamma\tau(f(\Phi_0)(1_k)b) \end{equation} for $i\in\{0,1\}$, $f\in C_0(0,1]$, $\tau\in T_\infty(A)$ and $b\in C^*(A,X)_+$.
Define $$ \Phi_1(\cdot)=g_{2\varepsilon,3\varepsilon}(\Phi_0)(\cdot)+d^{(0)}\left(g_{\varepsilon,2\varepsilon}-g_{2\varepsilon,3\varepsilon}\right)(\Phi_0)(\cdot). $$ This is certainly cpc. To see that $\Phi_1$ is order zero, note that for $x\in (M_k)_+$ we have $$ \Phi_1(x)\leq g_{\varepsilon,2\varepsilon}(\Phi_0)(x) $$ as $d^{(0)}$ commutes with $C^*(\Phi_0(M_k))$. Given $e,f\in (M_k)_+$ with $ef=0$, we have $$g_{\varepsilon,2\varepsilon}(\Phi_0)(e)^{1/2}g_{\varepsilon,2\varepsilon}(\Phi_0)(f)^{1/2}=0.$$ As $\Phi_1(e)^{1/2}\leq g_{\varepsilon,2\varepsilon}(\Phi_0)(e)^{1/2}$, there exists a sequence $(z_m)_{m=1}^\infty$ of contractions in $A$ with $z_mg_{\varepsilon,2\varepsilon}(\Phi_0)(e)^{1/2}\rightarrow \Phi_1(e)^{1/2}$ (see \cite[Lemma A-1]{H:MMJ}). Similarly there exists a sequence of contractions $(w_m)_{m=1}^\infty$ in $A$ with $g_{\varepsilon,2\varepsilon}(\Phi_0)(f)^{1/2}w_m\rightarrow \Phi_1(f)^{1/2}$. As such $$ \Phi_1(e)\Phi_1(f)=\lim_{m\rightarrow\infty}\Big(\Phi_1(e)^{1/2}z_mg_{\varepsilon,2\varepsilon}(\Phi_0)(e)^{1/2}g_{\varepsilon,2\varepsilon}(\Phi_0)(f)^{1/2}w_m\Phi_1(f)^{1/2}\Big)=0. $$
Define $$ h=d^{(1)}\left(g_{0,\varepsilon}-g_{\varepsilon,2\varepsilon}\right)(\Phi_0)(1_k)+(1_{A_\infty}-g_{0,\varepsilon}(\Phi_0)(1_k)). $$ We have $d^{(1)}\perp d^{(0)}$ and $\left(g_{0,\varepsilon}-g_{\varepsilon,2\varepsilon}\right)(\Phi_0)(1_k)\perp g_{2\varepsilon,3\varepsilon}(\Phi_0)(1_k)$ so that $$d^{(1)}\left(g_{0,\varepsilon}-g_{\varepsilon,2\varepsilon}\right)(\Phi_0)(1_k)\perp\Phi_1(1_k).$$ Also $(1_{A_\infty}-g_{0,\varepsilon}(\Phi_0)(1_k))\perp \Phi_1(1_k)$ so that $h\perp\Phi_1(1_k)$.
Now use the hypothesis again to find a cpc order zero map $\Phi_2:M_k\rightarrow A_\infty\cap A'\cap X'\cap\{h\}'$ such that $$ \tau(\Phi_2(1_k)hb)\geq(\alpha_0-\varepsilon)\tau(hb) $$ for all $b\in C^*(A,X)_+$ and $\tau\in T_\infty(A)$. The estimate (\ref{L:GS:8}) gives \begin{eqnarray} \tau(\Phi_2(1_k)hb)\geq (\alpha_0-\varepsilon)\tau(hb)&\geq &(\alpha_0-\varepsilon)\big(\gamma\tau\big((g_{0,\varepsilon}-g_{\varepsilon,2\varepsilon})(\Phi_0)(1_k)b\big) \nonumber \\ && +\tau\big((1_{A_\infty}-g_{0,\varepsilon}(\Phi_0)(1_k))b\big)\big)\nonumber\\ &\geq &(\alpha_0-\varepsilon)\gamma\tau\big((1_{A_\infty}-g_{\varepsilon,2\varepsilon}(\Phi_0)(1_k))b\big)\label{L:GS:3} \end{eqnarray} as $\gamma\leq 1$.
Define $\Psi:M_k\rightarrow A_\infty\cap A'\cap X'$ by $\Psi(x)=\Phi_1(x)+\Phi_2(x)h$. This is cpc and order zero as $\Phi_1(\cdot)$ and $\Phi_2(\cdot)h$ are cpc and order zero with orthogonal ranges. Then for $b\in C^*(A,X)_+$, we have \begin{align} \tau(\Psi(1_k)b)&=\tau(\Phi_1(1_k)b)+\tau(\Phi_2(1_k)hb)\nonumber\\ &\geq \tau\Big(\big(g_{2\varepsilon,3\varepsilon}(\Phi_0)(1_k)+d^{(0)}(g_{\varepsilon,2\varepsilon}-g_{2\varepsilon,3\varepsilon})(\Phi_0)(1_k)\big)b\Big)\nonumber\\ &\quad+(\alpha_0-\varepsilon)\gamma\tau\big((1_{A_\infty}-g_{\varepsilon,2\varepsilon}(\Phi_0)(1_k))b\big)\nonumber\\ &\geq \tau\big(g_{2\varepsilon,3\varepsilon}(\Phi_0)(1_k)b\big)+(\alpha_0-\varepsilon)\gamma\tau\big((1_{A_\infty}-g_{2\varepsilon,3\varepsilon}(\Phi_0)(1_k))b\big)\nonumber\\ &=\big(1-(\alpha_0-\varepsilon)\gamma\big)\tau\big(g_{2\varepsilon,3\varepsilon}(\Phi_0)(1_k)b\big)+(\alpha_0-\varepsilon)\gamma\tau(b)\label{L:GS:5} \end{align} using (\ref{L:GS:8}), (\ref{L:GS:3}) and the crude estimate $(\alpha_0-\varepsilon)<1$. As $g_{2\varepsilon,3\varepsilon}(\Phi_0)(1_k)\geq \Phi_0(1_k)-2\varepsilon 1_{A_\infty}$, (\ref{L:GS:4}) gives $$ \tau(g_{2\varepsilon,3\varepsilon}(\Phi_0)(1_k)b)\geq (\alpha_0-3\varepsilon)\tau(b) $$ for all $b\in C^*(A,X)_+$. Combining this with (\ref{L:GS:5}) gives $$ \tau(\Psi(1_k)b)\geq \Big(\big(1-(\alpha_0-\varepsilon)\gamma\big)(\alpha_0-3\varepsilon)+(\alpha_0-\varepsilon)\gamma\Big)\tau(b)=\alpha_1\tau(b) $$ for all $b\in C^*(A,X)_+$. The choice of $\varepsilon$ in (\ref{L:GS:6}) ensures that $\alpha_1>\alpha_0$, giving the required contradiction. \end{proof}
\begin{theorem}\label{T:UTL} Let $A$ be a simple separable unital nuclear nonelementary $C^*$-algebra with $T(A)\neq\emptyset$ and $\partial_eT(A)$ compact and $\dim(\partial_eT(A))<\infty$. Then for each $k\geq 2$, $A$ admits uniformly tracially large cpc order zero maps $M_k\rightarrow A_\infty\cap A'$. \end{theorem} \begin{proof} This follows from Proposition \ref{P:S3} and Lemma \ref{L:GS}. \end{proof}
\begin{corollary}\label{MainCor} Let $A$ be a simple separable unital nuclear nonelementary $C^*$-algebra with $T(A)\neq\emptyset$ and $\partial_eT(A)$ compact and $\dim(\partial_eT(A))<\infty$. If $A$ has strict comparison, then $A$ is $\mathcal Z$-stable. \end{corollary} \begin{proof} This follows from Theorems \ref{MS} and \ref{T:UTL}. \end{proof}
\end{document}
\begin{document}
{\def\thefootnote{}\footnotetext{E-mail: \texttt{na2844@columbia.edu}, \texttt{djhsu@cs.columbia.edu}, \texttt{clayton@cs.columbia.edu}}}
\title{\textbf{Intrinsic dimensionality and generalization properties of the $\mathcal{R}$-norm inductive bias}}
\begin{abstract}We study the structural and statistical properties of $\mathcal{R}$-norm minimizing interpolants of datasets labeled by specific target functions. The $\mathcal{R}$-norm is the basis of an inductive bias for two-layer neural networks, recently introduced to capture the functional effect of controlling the size of network weights, independently of the network width. We find that these interpolants are intrinsically multivariate functions, even when there are ridge functions that fit the data, and also that the $\mathcal{R}$-norm inductive bias is not sufficient for achieving statistically optimal generalization for certain learning problems. Altogether, these results shed new light on an inductive bias that is connected to practical neural network training.
\end{abstract}
\section{Introduction}\label{sec:intro}
The study of inductive biases in neural network learning is important for theoretical understanding and for developing practical guidance in network training. Recent theories of generalization rely on inductive biases of training algorithms to explain how neural nets that (nearly) interpolate training data can be accurate out-of-sample~\citep{neyshabur2015search,zhang2021understanding}. When inductive biases are made explicit and their effects are elucidated, they can be incorporated into training procedures when deemed appropriate for a problem at hand.
In this paper, we study the inductive bias for two-layer neural networks implied by a variational norm called the \emph{$\mathcal{R}$-norm\xspace}, introduced by~\citet{sess19} and \citet{owss19} to capture the functional effect of controlling the size of network weights. (A definition is given in Section~\ref{ssec:rnorm}.) We focus on the \emph{approximation} and \emph{generalization} consequences of preferring networks with small $\mathcal{R}$-norm\xspace in the context of learning explicit target functions. It is well-known that the size of the weights can play a critical role in generalization properties of neural networks~\citep{bartlett1996valid}, and weight-decay regularization is a common practice in gradient-based training~\citep{hinton1987learning,hanson1988comparing}. Thus, explicating the consequences of the $\mathcal{R}$-norm\xspace inductive bias may advance our understanding of generalization in practical settings.
We investigate the $d$-dimensional variational problem \eqref{interpolating-r-norm}, which seeks a neural net $g \colon \varOmega \to \mathbb{R}$ of minimum $\mathcal{R}$-norm\xspace among those that perfectly fit a given labeled dataset $\setl{(x_i, y_i)}_{i \in [n]} \subset \varOmega \times \mathbb{R}$: \begin{alignat}{2}
\label{interpolating-r-norm}
\tag{VP}
\inf_{g \colon \varOmega \to \mathbb{R}} \rnorm{g}
& \quad \text{s.t.} \quad \;\; g(x_i) = y_i
& \quad & \forall i \in [n] ;
\\
\intertext{as well as a variant \eqref{approximate-r-norm} that only requires $g$ to uniformly approximate labels up to error $\epsilon \in (0,1)$:}
\label{approximate-r-norm}
\tag{$\epsilon$-VP}
\inf_{g \colon \varOmega \to \mathbb{R}} \rnorm{g}
& \quad \text{s.t.} \quad \abs{g(x_i) - y_i} \leq \epsilon
& & \forall i \in [n] . \end{alignat} Here, $\varOmega \subset \mathbb{R}^d$ is a $d$-dimensional domain of interest. We study structural and statistical properties of solutions to these problems for datasets labeled by specific target functions in high dimensions.
The recent introduction of the $\mathcal{R}$-norm\xspace and its connections to weight-decay regularization have catalyzed research on the foundational properties of solutions to \eqref{interpolating-r-norm}. In particular, solutions in the one-dimensional ($d=1$) setting have been precisely characterized and their generalization properties are now well-understood by their connections to splines~\citep{dduf21, sess19, pn21a, hanin21}. However, far less is known about the solutions of $\mathcal{R}$-norm\xspace-minimizing interpolants for the general $d$-dimensional case.
\paragraph{Key message.} Inductive biases based on certain variational norms, such as the $\mathcal{R}$-norm\xspace, are believed to offer a way around the curse of dimensionality suffered by kernel methods, because they are adaptive to low-dimensional structure. Researchers have pointed to this adaptivity property in non-parametric settings~\citep{b17,pn21b} and specific learning tasks with low-dimensional structure~\citep{wei2019regularization} as mathematical evidence of the statistical advantage of neural networks over kernel methods. One may hypothesize that the $\mathcal{R}$-norm\xspace inductive bias achieves this advantage by favoring functions with low-dimensional structure. Indeed, many other forms of inductive bias used in statistics and machine learning are known to explicitly identify relevant low-dimensional structure~\citep{candes2006robust,donoho2006compressed,candes2009exact,bhojanapalli2016global,begkmz22,damian2022neural,frei2022random,mousavi2022neural,galanti2022sgd}. Our results provide theoretical evidence that this is not always the case with the $\mathcal{R}$-norm\xspace inductive bias, in a very strong sense that becomes more pronounced in higher dimensions.
We show that, even in cases where the dataset can be perfectly fit by an intrinsically one-dimensional function, the solutions $g$ to \eqref{interpolating-r-norm} or \eqref{approximate-r-norm} are not necessarily the piecewise-linear ridge functions described in previous works~\citep{sess19, hanin21}. Rather, the $\mathcal{R}$-norm\xspace is far better minimized by a \emph{multi-directional}\footnote{By a multi-directional function, we mean a function that does not \emph{only} depend on a one-dimensional projection of its input---i.e., a function that is not a ridge function (defined in Section~\ref{ssec:notation}).} neural network $g$ that averages several ridge functions pointing in different directions, each of which approximates a small fraction of the data. \if0 Our experimental results suggest that deeper networks trained by gradient methods have similar inductive biases; the removal of weight-tying constraints results in trained networks with lower weight norms and a larger number of ``effective directions.'' \fi
\subsection{Our contributions}\label{ssec:contrib}
Our results are summarized by the following informal theorems concerning the structural and generalization properties of $\mathcal{R}$-norm interpolation. Together, they show that the $\mathcal{R}$-norm inductive bias (1) leads to interpolants that are qualitatively different from those that minimize width or intrinsic dimensionality of the learned network, and (2) is insufficient for obtaining optimal generalization for a well-studied learning problem.
\begin{itheorem}[$\mathcal{R}$-norm\xspace minimizers of the parity dataset are not ridge functions]\label{ithm:approx}
\sloppy
Suppose the dataset $\{ (x_i,y_i) \}_{i\in[n]} \subset \{\pm 1\}^d \times \{\pm 1\}$ used in \eqref{interpolating-r-norm} and \eqref{approximate-r-norm} is the complete dataset of $2^d$ examples labeled by the $d$-variable parity function.
\begin{itemize}
\item The optimal value of \eqref{interpolating-r-norm} is $\Theta(d)$.
\item The optimal value of \eqref{approximate-r-norm} for any $\epsilon\in\intco{0,1/2}$---with the additional constraint that $g$ be a ridge function---is $\Theta(d^{3/2})$.
\end{itemize}
\fussy \end{itheorem} This result is presented formally in Section~\ref{sec:parity-approx}. In Section~\ref{ssec:parity-rnorm-ridge-approx-lb}, we show that every ridge function satisfying the constraints of \eqref{approximate-r-norm} has $\mathcal{R}$-norm\xspace at least $\Omega(d^{3/2})$; this bound is tight for ridge functions, as there is a matching upper bound. Using an averaging strategy, we show in Section~\ref{ssec:parity-rnorm-ub} the existence of multi-directional interpolants $g$ of the parity dataset with $\rnorm{g} = O(d)$, and we also establish the optimality of this construction in Section~\ref{ssec:parity-rnorm-approx-lb}. These results characterize the optimal value of \eqref{interpolating-r-norm} in terms of the dimension $d$, and also establish the $\mathcal{R}$-norm\xspace-suboptimality of ridge function interpolants. (In Section~\ref{sec:cosine-approx}, we extend the averaging strategy to other types of target functions, expanding the scope of our structural findings.)
\begin{itheorem}[Min-$\mathcal{R}$-norm interpolation is sub-optimal for learning parities] Suppose the dataset $\{ (\bx_i,\chi(\bx_i)) \}_{i\in[n]} \subset \{\pm 1\}^d \times \{\pm 1\}$ used in \eqref{interpolating-r-norm} is an i.i.d.~sample, where $\bx_i \sim \operatorname{Unif}(\{\pm 1\}^d)$ is labeled by the $d$-variable parity function $\chi$ for all $i \in [n]$.
If the sample size is $n = o(d^2/\sqrt{\log d})$, then with probability at least $1/2$, every solution to \eqref{interpolating-r-norm} has mean squared error at least $1-o(1)$ for predicting $\chi$ over $\operatorname{Unif}(\{\pm 1\}^d)$. \end{itheorem} This result is presented formally in Section~\ref{ssec:parity-gen-lb}, and it is complemented by a sample complexity upper bound in Section~\ref{ssec:parity-gen-ub}. The results are stated for the parity function on all $d$ variables, but the same holds for any parity function over $\Omega(d)$ variables. It is well-known that an i.i.d.~sample of size $O(d)$ is sufficient for learning parity functions exactly~\citep{helmbold1992learning,fischer1992learning}, and hence we conclude that the $\mathcal{R}$-norm\xspace inductive bias is insufficient for achieving the statistically optimal sample complexity for this learning problem.
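The $O(d)$ sample complexity for exact learning rests on the fact that identifying a parity is linear algebra over $\mathbb{F}_2$. The following self-contained sketch (our illustration, not code from the cited works; all function names are ours) encodes $\pm 1$ examples as bits and recovers the index set of the parity by Gaussian elimination over $\mathbb{F}_2$:

```python
import random

def learn_parity_gf2(xs, ys, d):
    """Recover the index set S of a parity from examples (x, chi_S(x)),
    x in {-1,+1}^d, by Gaussian elimination over GF(2).
    Under the bit encoding b_i = (1 - x_i)//2, the label relation
    chi_S(x) = prod_{i in S} x_i becomes sum_{i in S} b_i (mod 2)."""
    pivots = {}  # pivot column -> (row vector over GF(2), right-hand-side bit)
    for x, y in zip(xs, ys):
        vec = [(1 - xi) // 2 for xi in x]
        b = (1 - y) // 2
        lead = None
        for col in range(d):
            if vec[col]:
                if col in pivots:        # reduce against the existing pivot row
                    pvec, pb = pivots[col]
                    vec = [v ^ p for v, p in zip(vec, pvec)]
                    b ^= pb
                elif lead is None:       # first nonzero non-pivot column
                    lead = col
        if lead is None:
            continue  # linearly dependent example; adds no new information
        # maintain reduced row echelon form: clear column `lead` elsewhere
        for c, (pv, pb) in list(pivots.items()):
            if pv[lead]:
                pivots[c] = ([a ^ v for a, v in zip(pv, vec)], pb ^ b)
        pivots[lead] = (vec, b)
    assert len(pivots) == d, "need more samples: system not yet full rank"
    return {col for col, (_, b) in pivots.items() if b}

# Demo: recover a hidden parity over d = 8 variables from random examples.
d, S = 8, {0, 2, 3, 7}
rng = random.Random(0)
xs = [[rng.choice([-1, 1]) for _ in range(d)] for _ in range(200)]
ys = [1 if sum((1 - x[i]) // 2 for i in S) % 2 == 0 else -1 for x in xs]
assert learn_parity_gf2(xs, ys, d) == S
```

With $n \gg d$ random examples the system is full rank with overwhelming probability, so the hidden set is recovered exactly; this is the classical baseline against which the $\Omega(d^2/\sqrt{\log d})$ lower bound for min-$\mathcal{R}$-norm interpolation is compared.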
\if 0 \begin{enumerate}
\item In Section~\ref{ssec:parity-rnorm-ridge-approx-lb}, we give lower bounds on the minimum $\mathcal{R}$-norm\xspace needed for a ridge function $g$ to approximately interpolate a target dataset supported on the discrete hypercube.
Concretely, we show that datasets labeled by parity functions require $\rnorm{g} = \Omega(d^{3/2})$.
\item Using an averaging strategy, we show in Section~\ref{ssec:parity-rnorm-ub} the existence of multi-directional interpolants $g$ of the parity dataset with $\rnorm{g} = O(d)$ and establish the optimality of this construction in Section~\ref{ssec:parity-rnorm-approx-lb}. This shows that every ridge interpolant has suboptimal $\mathcal{R}$-norm\xspace.
\item In Section~\ref{sec:parity-gen}, we upper- and lower-bound the sample complexity of the learning algorithm that solves \eqref{interpolating-r-norm} for learning parity functions on $\{\pm 1\}^d$.
We conclude that the $\mathcal{R}$-norm\xspace inductive bias is insufficient to achieve the statistically optimal sample complexity.
\item In Section~\ref{sec:cosine-approx}, we extend the averaging strategy to other target functions beyond parities, expanding the scope of our structural findings.
\if 0
\item In Section~\ref{sec:empirical},
building on experiments by \citet{dsbb19},
we empirically show that ``un-sharing'' the weight parameters of partially-trained convolutional neural nets (CNNs), and then resuming training with stochastic gradient descent (SGD) on all weights, yields neural networks with smaller weight norms than those of the original CNNs.
This result suggests that gradient methods when no longer constrained by weight-tying architectures (which reduce the intrinsic dimensionality of the learned networks) unlock lower variational norms. \fi
\end{enumerate} \fi \subsection{Related work} \label{ssec:related}
\paragraph{Variational norms and inductive biases of optimization methods.} Many variational norms (such as $\mathcal{R}$-norm\xspace) from functional analysis can be regarded as representational costs that induce topologies on the space of infinitely-wide neural networks with certain activation functions. Prior works have analyzed these norms for homogeneous activation functions like ReLU~\citep[e.g.,][]{k01,m04,b17,sess19,owss19}; see \citet{sx21} and references therein for a comparison. In particular, the work of \citet{owss19} provided analytical descriptions of $\mathcal{R}$-norm\xspace in terms of the Radon transform of the function itself. This work was extended to higher powers of ReLU by \citet{pn21a}.
The variational norms are also connected to the implicit biases of optimization methods for training neural networks. In the context of univariate functions, the dynamics of gradient descent was shown to be biased towards (adaptive) linear or cubic spline depending on the optimization regime~\citep{wtspzb19, skm21, hbg18}, and these results have been partially extended to the multivariate case~\citep{jm20}. For classification problems, the implicit bias of gradient descent was connected to a variational problem related to $\mathcal{R}$-norm\xspace with margin constraints on the data~\citep{bc21}.
\paragraph{Solutions to the variational problem.} \citet{dduf21} and \citet{hanin21} fully characterized the form of all solutions of \eqref{interpolating-r-norm} for one-dimensional datasets (as discussed above). However, pinning down even a single solution for general multidimensional datasets appears to be difficult; \citet{ep21} was able to do so for rank-one datasets, where all the feature vectors lie on a line. The datasets we study do not satisfy the rank-one condition of \citet{ep21}, and thus we require different techniques to analyze multi-directional functions.
\paragraph{Adaptivity.} In the context of non-parametric regression, it is well-known that (deep) neural networks achieve minimax-optimal rates in the presence of low-dimensional structure in the target function \citep[e.g.,][]{s20, bk19, kk05, g02}. The convergence rates in these works depend only on the intrinsic dimension of the target function (and not the ambient dimension) and are achieved by optimally trading off accuracy and regularization in certain deep neural network architectures. Recent works \citep{kb16, pn21b, b17} consider two-layer neural networks with variational norm (similar to $\mathcal{R}$-norm\xspace) regularization, which also allows for adaptivity to low-dimensional structure. That is, a function $g \colon \mathbb{R}^d \to \mathbb{R}$ depending only on a $k$-dimensional projection of its input $x$, i.e., $g(x) = \phi(Ux)$ for some $U \in \mathbb{R}^{k \times d}$ (with orthonormal rows) and $\phi \colon \mathbb{R}^k \to \mathbb{R}$ has variational norm no greater than that of the corresponding low-dimensional function $\phi$~\citep{b17}. In particular, \citet{b17} and \citet{kb16} studied minimax rates under ridge target functions where $k=1$. Our results on generalization are of a different flavor: rather than striking a careful balance between fitting and regularization to achieve minimax rates, we study the behavior of $\mathcal{R}$-norm\xspace-minimizing interpolation.
Regularization based on weight decay (equivalent to $\mathcal{R}$-norm\xspace for shallow networks) has also been used to obtain minimax rates for learning smooth target functions. \citet{pn21b} do so by drawing analogies to spline theory, while \citet{wl21} consider a connection to the Group Lasso. \citet{zw22} exploits depth to promote stronger sparsity regularizers. This is distinct from the low-dimensional structures studied in this work and mentioned above.
\paragraph{Learning ridge functions and parity functions with neural nets.} Target functions that depend on low-dimensional projections of the input (of which ridge functions are the simplest case) have been long studied in statistics \citep[see, e.g.,][]{li18}, and learning such functions is one of the simplest problems where neural network training demonstrates adaptivity. Such demonstrations typically require going beyond the neural tangent kernel regime and have been used to explain the ``feature learning'' ability of neural networks \citep{frei2022random, damian2022neural, mousavi2022neural, bbss22}. Several recent works have considered the prospects of learning (sparse) parity functions by training neural nets with gradient-based algorithms~\citep{as20, dm20, mkas21, begkmz22, telgarsky2022feature}. The positive results express parities as low-weight linear combinations of (random) ReLUs, which motivates our focus on the variational norm of approximating neural nets. Our sample complexity lower bound shows that, even if computational and optimization considerations are set aside, the inductive bias imposed by the $\mathcal{R}$-norm\xspace may lead to suboptimal statistical performance.
\paragraph{Averaging and ensembling.} Neural networks have been interpreted as forms of averaging or ensemble methods to explain their statistical properties~\citep[e.g.,][]{bartlett1996valid,baldi2013understanding,gal2016dropout,olson2018modern}. Our use of averaging is different in that it serves as an approximation-theoretic mechanism for achieving smaller $\mathcal{R}$-norm\xspace.
\paragraph{Weight lower bounds for other explicit functions.} Representation costs for two-layer neural networks to approximate other explicit functions have been explored in several prior works~\citep{mcpz13,dan17,ss17,ses19}. These works establish exponential lower bounds on the width of two-layer networks needed to approximate functions that are represented more compactly by three-layer networks. These results also imply lower bounds on the size of second-layer weights in a two-layer network after fixing the width of the network. In contrast, our results hold for networks of unbounded width and for a target function that can be exactly represented by a two-layer network of $\poly(d)$ width.
\section{Preliminaries}\label{sec:prelims}
\subsection{Notation}\label{ssec:notation}
In this work, we consider real-valued functions over the radius-$\sqrt{d}$ Euclidean ball $\varOmega := \setl{ x \in \mathbb{R}^d : \norml[2]{x} \leq \sqrt{d} }$. Let $\chi_S \colon \varOmega \to \{\pm 1\}$ denote the multi-linear monomial $\chi_S(x) := \prod_{i \in S} x_i$ over variables indexed by $S \subseteq [d]$, and let $\chi := \chi_{[d]}$. On input $x \in \{\pm 1\}^d$, $\chi_S(x)$ computes the \emph{parity} of $\{ x_i : i \in S \}$. We say $g \colon \varOmega \to \mathbb{R}$ is a \emph{ridge function} if $g(x) = \phi(v^\T x)$ for some unit vector $v \in \mathbb{S}^{d-1}$ and function $\phi \colon [-\sqrt{d},\sqrt{d}] \to \mathbb{R}$. A function $\phi$ is \emph{$\rho$-periodic} if $\phi(z+\rho)=\phi(z)$ for all $z \in \mathbb{R}$.
We consider two-layer neural networks (of infinite and finite width) with ReLU activations $\varphi_{\operatorname{r}}(z) := \max\setl{0, z}$. Let $\mathcal{M}$ denote the space of signed measures over $\mathbb{S}^{d-1} \times [-\sqrt{d},\sqrt{d}]$. For $\mu \in \mathcal{M}$, let $g_\mu: \varOmega \to \mathbb{R}$ denote the infinite-width neural network given by \[g_\mu(x) := \int_{\mathbb{S}^{d-1} \times [-\sqrt{d},\sqrt{d}]} \varphi_{\operatorname{r}}(w^\T x + b) \, \mu(\dd w, \dd b).\] The total variation norm of $\mu$ is $\abs{\mu} := \int_{\mathbb{S}^{d-1} \times [-\sqrt{d},\sqrt{d}]} \abs{\mu}(\dd w, \dd b)$, where $\abs{\mu}(\dd w,\dd b)$ is the corresponding total variation measure (somewhat abusing notation). The width-$m$ neural network $g_{\theta}$ with parameters $\theta = (a^{(j)}, w^{(j)}, b^{(j)})_{j \in [m]} \in (\mathbb{R} \times \mathbb{S}^{d-1} \times [-\sqrt{d},\sqrt{d}])^m$ is given by \[ g_\theta(x) := \sum_{j=1}^m a^{(j)} \varphi_{\operatorname{r}}(w^{(j) \T} x + b^{(j)}). \] We regard $g_\theta$ as an infinite-width neural network with the ``sparse'' atomic measure \begin{equation*}
\mu_\theta = \sum_{j=1}^m a^{(j)} \delta_{(w^{(j)},b^{(j)})} . \end{equation*} Observe that $g_\theta = g_{\mu_\theta}$ and $\abs{\mu_\theta} = \sum_{j=1}^m \absl{a^{(j)}} = \norml[1]{a}$.
Our constructions frequently use \textit{sawtooth functions}, a family of ridge functions consisting of $t+1$ alternating peaks of a triangular wave, inspired by a construction of \citet[Proposition 4.2]{ys19}. For $t \in \{0, \dotsc, d\}$ with $t \equiv d \pmod2$ and $w \in \{\pm 1\}^d$, let $s_{w,t}(x) := \smash{(-1)^{(d-t)/2} \chi(w) \phi_t(w^\T x)}$ where $\phi_t: \mathbb{R} \to \mathbb{R}$ is a function that forms a piecewise affine interpolation between the points $(-t-1, 0)$, $\smash{\{(t - 2\tau, (-1)^\tau)\}_{\tau \in \{0, \dotsc, t\}}}$, $(t+1, 0)$, and $\phi_t(z) = 0$ for all $z \leq -t-1$ and $z \geq t+1$. We refer to $t$ as the \emph{width} of the sawtooth function $s_{w,t}$. Note that $s_{w,t}$ is $\sqrt{d}$-Lipschitz and $s_{w,t}(x) = \chi(x) \indicator{\abs{w^\T x} \leq t}$ for all $x \in \{\pm 1\}^d$. Also, $s_{w,t}$ can be expressed as a neural network $g_\theta$ with width $m \leq O(t+1)$ and $\absl{a^{(i)}} \leq O(\sqrt{d})$ for each $i \in [m]$.
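As a numerical sanity check (an illustrative sketch, not part of the formal development), the following Python snippet implements $\phi_t$ and $s_{w,t}$ and verifies the cube identity $s_{w,t}(x) = \chi(x) \indicator{\abs{w^\T x} \leq t}$ exhaustively for $d=6$, $t=2$; the sign prefactor $(-1)^{(d-t)/2}$ is the choice that the identity requires (e.g., for $d=2$, $t=0$ it forces a negative sign).

```python
import numpy as np
from itertools import product

def phi_t(z, t):
    # piecewise-affine interpolation through (-t-1, 0),
    # (t - 2*tau, (-1)**tau) for tau = t, ..., 0, and (t+1, 0);
    # np.interp clamps to the zero endpoint values outside [-t-1, t+1]
    knots = [-t - 1] + [t - 2 * tau for tau in range(t, -1, -1)] + [t + 1]
    vals = [0.0] + [(-1.0) ** tau for tau in range(t, -1, -1)] + [0.0]
    return np.interp(z, knots, vals)

def sawtooth(x, w, t):
    d = len(w)
    # sign (-1)**((d-t)/2) is what the cube identity below requires
    return (-1.0) ** ((d - t) // 2) * np.prod(w) * phi_t(w @ x, t)

d, t = 6, 2
max_err = 0.0
for w in product([-1.0, 1.0], repeat=d):
    w = np.array(w)
    for x in product([-1.0, 1.0], repeat=d):
        x = np.array(x)
        target = np.prod(x) if abs(w @ x) <= t else 0.0
        max_err = max(max_err, abs(sawtooth(x, w, t) - target))
assert max_err < 1e-9
```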
Let $\nu := \operatorname{Unif}(\{\pm 1\}^d)$ denote the uniform distribution on $\{\pm 1\}^d$, and let $\boldsymbol{\nu}_n$ denote the empirical distribution on $\bx_1,\dotsc,\bx_n \simiid \nu$. We use the following inner products and norms over the vector space of real-valued functions on $\{\pm 1\}^d$ with respect to a distribution $\nu_0$ (such as $\nu$ or $\boldsymbol{\nu}_n$): \[\innerprodnuzero{g}{h} := \expectationl[\bx \sim \nu_0]{g(\bx)h(\bx)}, \quad \normnuzero{g} := \innerprodnuzero{g}{g}^{1/2}, \quad \normnuzeroinf{g} := \max_{x \in \operatorname{supp}(\nu_0)} \absl{g(x)}.\]
\subsection{$\mathcal{R}$-norm\xspace and attainment of the infimum} \label{ssec:rnorm}
We now recall the definition of the $\mathcal{R}$-norm\xspace of a function $g \colon \varOmega \to \mathbb{R}$, presented here in a variational form as the minimum cost of representing $g$ as an infinite-width neural network with a ``skip-connection'': \begin{equation}
\label{r-norm}
\tag{$\mathcal{R}$-norm}
\rnorm{g} := \inf_{\mu \in \mathcal{M}, v \in \mathbb{R}^d, c \in \mathbb{R}} \abs{\mu} \quad \text{s.t.} \quad g(x) = g_{\mu}(x) + v^\T x + c \ \ \forall x \in \varOmega. \end{equation} Indeed, $\rnorm{\cdot}$ is a semi-norm on the space of functions with finite $\mathcal{R}$-norm\xspace. It was initially introduced by \citet{owss19} along with explicit characterizations in terms of the Radon transform. See the works of \citet{owss19}, \citet{pn21a}, and \citet{sx21} for more discussion about the $\mathcal{R}$-norm\xspace and its connections to other function spaces.
The appearance of the affine component $v^\T x + c$ in the definition of $\mathcal{R}$-norm\xspace has implications for how the bias terms are treated. Notice that a neuron $x \mapsto \varphi_{\operatorname{r}}(w^\T x + b)$ with bias $\abs{b} \ge \sqrt{d}$ behaves as an affine function over the domain of interest $\varOmega$, so it can be absorbed into the ``free'' affine component (in the definition of $\mathcal{R}$-norm\xspace) so as to not be counted towards the $\mathcal{R}$-norm\xspace. Other works \citep[e.g.,][]{sx21} consider a different variational norm, $\vnorm{\cdot}$, which does not have ``free'' affine components, but instead permits biases $b$ to be in the larger range $[-2\sqrt{d}, 2\sqrt{d}]$. These differences in the way affine components are accommodated do not lead to different function spaces~\citep[see][Theorem~6]{pn21b}, and the results of this paper for $\mathcal{R}$-norm\xspace also hold for these other variational norms, as we demonstrate in Appendix~\ref{asec:vnorm}.
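The observation that a large-bias neuron is affine over $\varOmega$ can be checked directly: since $\abs{w^\T x} \leq \norm[2]{w}\norm[2]{x} \leq \sqrt{d}$, a bias $b \geq \sqrt{d}$ keeps the preactivation nonnegative, so the ReLU never clips. A minimal sketch (illustrative only, sampling points on the boundary sphere of $\varOmega$):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10
w = rng.standard_normal(d)
w /= np.linalg.norm(w)                   # unit weight vector
b = np.sqrt(d) + 0.25                    # bias with |b| >= sqrt(d)

min_pre = np.inf
for _ in range(10000):
    x = rng.standard_normal(d)
    x *= np.sqrt(d) / np.linalg.norm(x)  # boundary of Omega (worst case)
    min_pre = min(min_pre, w @ x + b)
# |w.x| <= ||w|| ||x|| = sqrt(d) < b, so relu(w.x + b) = w.x + b on Omega:
assert min_pre > 0
```

Hence such a neuron equals the affine map $x \mapsto w^\T x + b$ on all of $\varOmega$ and can be absorbed into the free affine component.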
Although the $\mathcal{R}$-norm\xspace is defined in terms of an infimum, it has been shown by \citet[Lemma 2; see also Proposition~\ref{prop:integral-representation} in Appendix~\ref{asec:rnorm}]{pn21b} that the infimum is always achieved by a particular signed measure $\mu \in \mathcal{M}$. Since the total variation norm is sparsity-inducing, the objective in \eqref{interpolating-r-norm} favors finite-width networks. It can be shown, using an extension of Caratheodory's theorem~\citep{rosset2007}, that \eqref{interpolating-r-norm} in fact always has a finite-width solution. That is, \eqref{interpolating-r-norm} is solved by the sum of an affine function $x \mapsto v^\T x + c$ and a width-$m$ neural network, for some $m \leq \max\{0, n-(d+1)\}$. This claim is formalized as Theorem~\ref{thm:sparse-vp-solution} and proved in Appendix~\ref{asec:rnorm}. Thus, considering finite-width neural networks is sufficient to determine the value of \eqref{interpolating-r-norm}.\footnote{We note that the finite-width solution to \eqref{interpolating-r-norm} is not necessarily unique; \citet{hanin21} discusses this issue in the one-dimensional case ($d=1$) under general data models.}
The following lemma, which is a minor elaboration on Lemma 25 of \citet{pn21a}, relates the $\mathcal{R}$-norm\xspace of a finite-width network to the $\ell_1$-norm of its top-layer weights.
\begin{lemma}\label{lemma:rnorm-relu-l1} Let $v \in \mathbb{R}^d$, $c \in \mathbb{R}$, and $\theta = (a^{(j)}, w^{(j)}, b^{(j)})_{j \in [m]} \in (\mathbb{R} \times \mathbb{S}^{d-1} \times [-\sqrt{d},\sqrt{d}])^m$ be the set of parameters of a finite neural network where $(w^{(i)},b^{(i)}) \neq (w^{(j)},b^{(j)})$ for all $i \neq j$. \begin{enumerate}[label=(\roman*)]
\item The $\mathcal{R}$-norm\xspace of the sum of $g_\theta$ and an affine function $v^\T x + c$ satisfies
\begin{equation}
\label{eq:rnorm-relu-l1}
\rnorm{g_{\theta}(x) + v^\T x + c} \le \norm[1]{a} =
\absl{a^{(1)}} + \dotsb + \absl{a^{(m)}} . \end{equation}
\item Moreover, if the measure $\mu_\theta$ is even in a distributional sense (i.e., $\mu_{\theta}(w,b) = \mu_{\theta}(-w,-b)$), then the inequality in \eqref{eq:rnorm-relu-l1} holds with equality. \end{enumerate} \end{lemma}
Note that our assumption that $\mu_\theta$ is even in Lemma~\ref{lemma:rnorm-relu-l1}(ii) precludes the case where $a^{(i)} = -a^{(j)}$ and $(w^{(i)},b^{(i)}) = (-w^{(j)},-b^{(j)})$ for some $i \neq j$. This is needed because if such a case were allowed, we would have $a^{(i)} \varphi_{\operatorname{r}}(w^{(i) \T} x + b^{(i)}) + a^{(j)} \varphi_{\operatorname{r}}(w^{(j) \T} x + b^{(j)}) = a^{(i)} (w^{(i) \T} x + b^{(i)})$ for all $x \in \varOmega$---an affine function. After ruling out these cases, we can apply the argument of \citet{pn21a} to prove Lemma~\ref{lemma:rnorm-relu-l1}(ii).
\subsection{$\mathcal{R}$-norm\xspace of ridge functions} \label{sec:rnorm-ridge}
Prior works provide precise characterizations of the $\mathcal{R}$-norm\xspace and of solutions to \eqref{interpolating-r-norm}, albeit only in the one-dimensional setting \citep{hanin21, sess19, ep21}. These results are nevertheless useful for analyzing ridge functions in $d$-dimensional space.
\begin{theorem} \label{thm:single-index-r-norm} For any ridge function $g \colon \varOmega \to \mathbb{R}$ of the form $g(x) = \phi(w^\T x)$ where $w \in \mathbb{S}^{d-1}$ and $\phi \colon [-\sqrt{d}, \sqrt{d}] \to \mathbb{R}$ is Lipschitz, we have $$ \rnorm{g} = \tv{\phi'} \coloneqq \esssup_{- \sqrt{d} \leq t_0 < t_1 < \dotsb < t_r \leq \sqrt{d}; \, r \in \mathbb{N}} \sum_{i=1}^{r} \abs{\phi'(t_{i}) - \phi'(t_{i-1})} , $$ where $\phi'$ is a right continuous derivative of $\phi$.\footnote{Take $\phi'(u) = \lim_{t \downarrow 0}{\frac{\phi(u+t) - \phi(u)}{t} }$; the limit exists almost everywhere by Rademacher's theorem.} \end{theorem} \begin{remark}
If $\phi$ is twice differentiable, then $\rnorm{g} = \int_{-\sqrt{d}}^{\sqrt{d}}{ |\phi''(u)|\dd u } = \norm[1]{\phi''}$. Intuitively, this $\ell_1$-norm penalty induces sparsity in the second derivative, leading to representations that use few neurons. In contrast, minimizing the $\ell_2$-norm penalty $\norm[2]{\phi''}$ on the second derivative yields a cubic spline~\citep{kw71}. \end{remark} \begin{proof}
Without loss of generality, $g$ only depends on the first coordinate $x_{1}$ due to the invariance of the $\mathcal{R}$-norm\xspace to rotation \citep[cf.~Proposition 11 of][]{owss19}. The result then follows from Remark~4 of \citet{pn21b}. \end{proof}
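A quick finite-difference check of the total-variation formula (an illustrative sketch, not used in any proof): for the twice-differentiable choice $\phi(z) = z^2$ on $[-\sqrt{d},\sqrt{d}]$, the Remark gives $\rnorm{g} = \norm[1]{\phi''} = \int_{-\sqrt d}^{\sqrt d} 2 \, \dd u = 4\sqrt{d}$, which matches the discretized $\tv{\phi'}$.

```python
import numpy as np

d = 9
grid = np.linspace(-np.sqrt(d), np.sqrt(d), 200001)
phi = grid ** 2                        # phi(z) = z^2, so phi'' = 2
dphi = np.diff(phi) / np.diff(grid)    # finite-difference phi'
tv = np.abs(np.diff(dphi)).sum()       # total variation of phi'
# closed form: ||phi''||_1 = integral of 2 over [-sqrt(d), sqrt(d)] = 4 sqrt(d)
assert abs(tv - 4 * np.sqrt(d)) < 1e-2
```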
This characterization of the $\mathcal{R}$-norm\xspace for ridge functions (and univariate functions) is critical for analyses of the solutions to \eqref{interpolating-r-norm} for $d = 1$ \citep{hanin21, sess19}. It suggests a potential approach for our high-dimensional setting: project the dataset to every one-dimensional subspace, interpolate the data with a ridge function that points in that direction, directly compute the $\mathcal{R}$-norm\xspace of each using Theorem~\ref{thm:single-index-r-norm}, and return the ridge function with the lowest $\mathcal{R}$-norm\xspace. In the sequel, we examine the optimality of this approach, and find that ridge functions \textit{cannot} be optimal solutions to \eqref{interpolating-r-norm}, even when the dataset can be perfectly fit by a ridge function.
\section{Solutions to the variational problem for parity are multi-directional}\label{sec:parity-approx}
In this section, we study the $\mathcal{R}$-norm\xspace of neural networks that solve \ref{interpolating-r-norm} or \ref{approximate-r-norm} for the (full) \emph{parity dataset} $\{ (x,\chi(x)) : x \in \{\pm 1\}^d \}$, which has size $n = 2^d$. For simplicity, the labels are provided by the parity function $\chi$ over all $d$ variables, although the same quantitative results (up to constant factor differences) hold for any $\chi_S$ with $|S| = \Theta(d)$.
The high level message is that, despite the fact that this dataset can be exactly fit using ridge functions, the solutions to \eqref{interpolating-r-norm} and \eqref{approximate-r-norm} are \emph{not} ridge functions and instead must be multi-directional.
\subsection{Every ridge parity interpolant has $\mathcal{R}$-norm\xspace $\Omega(d^{3/2})$}\label{ssec:parity-rnorm-ridge-approx-lb}
We first show that any $\epsilon$-approximate interpolant of the parity dataset \emph{that is also a ridge function} must have $\mathcal{R}$-norm\xspace $\Omega(d^{3/2})$. This lower-bound is established even for $\epsilon=1/2$.
\begin{restatable}{theorem}{thmridgeparitylb}\label{thm:ridge-parity-lb} For $d\geq2$, let $\ridge_d$ be the set of functions $g \colon \varOmega \to \mathbb{R}$ such that $g(x) = \phi(w^\T x)$ for some $w \in \mathbb{S}^{d-1}$ and Lipschitz continuous $\phi \colon [-\sqrt{d},\sqrt{d}] \to \mathbb{R}$. Then \[ \inf\setl{ \rnorm{g} : g \in \ridge_d , \, \cubenorminfty{g - \chi} \leq 1/2 } \geq d^{3/2}/4 . \] \end{restatable}
The proof constructs a labeled dataset of $d+1$ points, and shows that any ridge function $g(x) = \phi(w^\T x)$ that approximates that dataset must have many high-magnitude oscillations. These oscillations imply a lower bound on $\tv{\phi'}$, which proves the claim by way of Theorem~\ref{thm:single-index-r-norm}.
\begin{proof} Take any $g \in \ridge_d$ of the form \smash{$g(x) = \phi(w^\T x)$} for some function $\phi$ and vector $w$ satisfying the approximation constraint \smash{$\cubenorminfty{g-\chi} \leq 1/2$}. Suppose, for the sake of contradiction, that $w_i = 0$ for some $i \in [d]$. Then, there exists a pair of points $x, x' \in \{-1, 1\}^d$ that are identical except in the $i$-th position. Thus, $\chi(x) = -\chi(x')$, but $w^\T x = w^\T x'$ and hence $g(x) = g(x')$; this contradicts the approximation constraint. So, we may henceforth assume that $w_i \neq 0$ for all $i\in [d]$.
For each $i \in \{0,1,\dotsc,d\}$, define \[
x^{(i)} := (\signl{w_1}, \dots, \signl{w_i}, -\signl{w_{i+1}}, \dots, -\signl{w_d}) . \] Because the parity of $x^{(i)}$ alternates with $i$, i.e., $\chi(x^{(i)}) \neq \chi(x^{(i+1)})$, and $g$ satisfies the approximation constraint, $\signl{g(x^{(i)})}$ also alternates with $i$. Furthermore, again due to the approximation constraint, we have $\absl{g(x^{(i)}) - g(x^{(i+1)})} \geq 1$. We claim that, because $\phi$ interpolates $d+1$ well-separated data points $(w^\T x^{(i)}, \phi(w^\T x^{(i)}))$ that satisfy $w^\T x^{(i)} < w^\T x^{(i+1)}$ for all $i \in \{0,1,\dotsc,d-1\}$, there must be a large cost for representing $\phi$ using a neural network. By Theorem~\ref{thm:single-index-r-norm}, it suffices to obtain a lower bound on $\tv{\phi'}$, since this will imply a lower bound on $\rnorm{g}$.
By Lemma~\ref{lemma:calc} (in Appendix~\ref{assec:parity-rnorm-ridge-approx-lb}; essentially the mean value theorem), for every $i \in \{0,1,\dotsc,d-1\}$, there exists $A_i \subseteq [w^\T x^{(i)}, w^\T x^{(i+1)}]$ with Lebesgue measure $\Leb(A_i)>0$ such that, for every $z^{(i)} \in A_i$, we have \[ \absl{\phi'(z^{(i)})} \geq \frac12 \cdot \frac{\absl{\phi(w^\T x^{(i+1)}) - \phi(w^\T x^{(i)})}}{w^\T x^{(i+1)} - w^\T x^{(i)}},\] and $\smash{\sign{\phi'(z^{(i)})} = \sign{\phi(w^\T x^{(i+1)}) - \phi(w^\T x^{(i)})}}$. The fact that the signs of $\phi(w^\T x^{(i)})$ alternate with $i$ implies that the signs of $\phi'(z^{(i)})$ also alternate with $i$. We now lower-bound the total variation of $\phi'$ using the fact that $\prod_{i=0}^{d-1}\Leb(A_i) > 0$ and taking advantage of the alternating signs: \begin{align*} 2\tv{\phi'} &= 2 \esssup_{- \sqrt{d} \leq t_0 < t_1 < \dotsb < t_r \leq \sqrt{d}; \, r \in \mathbb{N}} \sum_{i=1}^{r} \abs{\phi'(t_{i}) - \phi'(t_{i-1})} \\ &\geq 2\sum_{i=1}^{d-1} \absl{\phi'(z^{(i)}) - \phi'(z^{(i-1)})} = 2\sum_{i=1}^{d-1}\parenl{ \absl{\phi'(z^{(i)})} + \absl{\phi'(z^{(i-1)})}} \geq 2 \sum_{i=0}^{d-1} \absl{\phi'(z^{(i)})}\\ &\geq \sum_{i=0}^{d-1} \frac{\absl{\phi(w^\T x^{(i+1)}) - \phi(w^\T x^{(i)})}}{w^\T x^{(i+1)} - w^\T x^{(i)}} \geq \sum_{i=0}^{d-1} \frac{1}{w^\T x^{(i+1)} - w^\T x^{(i)}} \\ &\ge \frac{d^2}{\sum_{i=0}^{d-1} w^\T x^{(i+1)} - w^\T x^{(i)}} = \frac{ d^2}{ w^\T x^{(d)} - w^\T x^{(0)}} \ge \frac{d^2}{\norml[2]{w}\norml[2]{x^{(d)}-x^{(0)}}} = \frac{d^{3/2}}{2} . \end{align*} The second-to-last inequality is a consequence of Cauchy-Schwarz: for any $a_1, \dots, a_d > 0$, $d^2 = \smash{(\sum_i \sqrt{a_i} / \sqrt{a_i})^2 \leq (\sum_i a_i) (\sum_i 1/a_i)}$. The final inequality is again Cauchy-Schwarz, and the final equality uses $\norml[2]{x^{(d)}-x^{(0)}} = 2\sqrt{d}$, which holds because $x^{(d)} = -x^{(0)}$. Therefore, $\rnorm{g} = \tv{\phi'} \geq d^{3/2}/4$. \end{proof}
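To see the $d^{3/2}$ scaling concretely (an illustrative sketch, not part of the proof), take $w = \vec{1}/\sqrt{d}$ and the ``cheapest'' labels permitted by the approximation constraint, namely $\phi(w^\T x^{(i)}) = \pm 1/2$ alternating. The piecewise-linear interpolant then has slopes alternating between $\pm\sqrt{d}/2$ at spacing $2/\sqrt{d}$, and its derivative already has total variation $(d-1)\sqrt{d}$:

```python
import numpy as np

d = 16
# projections w^T x^(i) of the alternating points onto w = (1,...,1)/sqrt(d)
z = (2 * np.arange(d + 1) - d) / np.sqrt(d)
# cheapest labels satisfying the approximation constraint: alternate +-1/2
vals = 0.5 * (-1.0) ** np.arange(d + 1)
slopes = np.diff(vals) / np.diff(z)    # piecewise-linear interpolant phi
tv = np.abs(np.diff(slopes)).sum()     # TV of phi' from the interior kinks
assert tv >= d ** 1.5 / 4              # consistent with the Omega(d^{3/2}) bound
```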
The lower-bound in Theorem~\ref{thm:ridge-parity-lb} is tight up to constants, because the sawtooth function $s_{\vec{1},d}$ satisfies the constraints of \eqref{interpolating-r-norm} and has $\rnorm{s_{\vec{1},d}} = O(d^{3/2})$.
\subsection{Existence of a multi-directional parity interpolant with $\mathcal{R}$-norm\xspace $O(d)$}\label{ssec:parity-rnorm-ub}
We now show that the $\Omega(d^{3/2})$ $\mathcal{R}$-norm\xspace lower-bound from Theorem~\ref{thm:ridge-parity-lb} for ridge functions can be avoided by neural networks that are not ridge functions. The main idea is to employ an \emph{averaging strategy} that combines a collection of distinct ridge functions, each of which perfectly fits a small fraction of the parity dataset---those on the ``equator'' relative to the ridge direction---while ignoring the ``outliers'' in that direction. Because all points on the cube are ``outliers'' for some directions and on the ``equator'' for others, this strategy ultimately ensures that every example is perfectly fit.
\begin{theorem}\label{thm:parity-rnorm-ub} For any even\footnote{Our results also hold for odd $d$, but the proofs are more tedious.} $d$, there exists a neural network $g: \varOmega \to \mathbb{R}$ having $g(x) = \chi(x)$ for all $x \in \{\pm 1\}^d$ such that $\rnorm{g} \leq O(d)$. \end{theorem}
\begin{proof} Recall that the sawtooth function $s_{w, 0}: \varOmega \to \mathbb{R}$ satisfies \smash{$s_{w, 0}(x) = \chi(x) \indicator{w^\T x = 0}$} for all \smash{$x \in \{\pm 1\}^d$}. By construction, $s_{w,0}$ is a ridge function that is a single ``bump'' around zero in the direction of $w$, and $\rnorml{s_{w, 0}} \leq \smash{O(\sqrt{d})}$. Consider \smash{$\bw \sim \operatorname{Unif}(\{\pm 1\}^d)$}. By symmetry, $\pr{\bw^\T x = 0} = \pr{\bw^\T x' = 0}$ for all $x, x' \in \{\pm 1\}^d$, so \[
\EE{s_{\bw, 0}(x)} = \chi(x) \cdot \pr{\bw^\T x = 0} = \chi(x) \cdot 2^{-d} \cdot \absl{\{v \in \{\pm 1\}^d : v^\T x = 0\}} = \chi(x) \cdot q , \] where \smash{$q := \binom{d}{d/2} / 2^d = \Theta(1/\sqrt{d})$}. Define \smash{$g(x) := \frac{1}{q 2^d} \sum_{w \in \{\pm 1\}^d} s_{w, 0}(x)$}. Then $g(x) = \frac{1}{q} \EE{s_{\bw, 0}(x)} = \chi(x)$ for each $x \in \{\pm 1\}^d$, i.e., $g$ interpolates the parity dataset. Finally, we bound the $\mathcal{R}$-norm\xspace: \[\rnorm{g} \leq \frac1{q2^d} \sum_{w \in \{\pm 1\}^d} \rnorm{s_{w,0}} \leq \frac1q \cdot O(\sqrt{d}) \leq O(d).\qedhere\] \end{proof}
While Theorem~\ref{thm:parity-rnorm-ub} successfully exhibits a neural network $g$ that perfectly fits the parity dataset with $\rnorm{g} = O(d)$, the width of $g$ is $\Omega(2^d)$. We next show that by allowing non-zero $\Lpu{\infty}$ error in the approximation, we can achieve a construction with both $O(d)$ $\mathcal{R}$-norm\xspace and $\poly(d)$ width.
\begin{restatable}{theorem}{thmparityrnormapproxub} \label{thm:parity-rnorm-approx-ub} There is a universal constant $c>0$ such that the following holds. For any even $d$, any $\epsilon \in (0,1)$, and any even $t \in \setl{0, 2,\dotsc,d}$, there exists a function $g \colon \varOmega \to \mathbb{R}$ that can be represented by a width-$m$ neural network such that $\cubenorminfty{g - \chi} \leq \epsilon$, where \begin{align*}
m & \leq O(d^{3/2} \sqrt{\log(1/\epsilon)}/\epsilon^2) & \text{and} \quad \rnorm{g} & \leq O(d \log(1/\epsilon)) && \text{if $t \leq c\sqrt{d \log(1/\epsilon)}$} ; \\
m & \leq O(d^2/(\epsilon t)) & \text{and} \quad \rnorm{g} & \leq O(t\sqrt{d}) && \text{otherwise} . \end{align*} Moreover, $g$ can be expressed as a linear combination of width-$t$ sawtooth functions.\end{restatable}
\begin{remark} Suppose $\epsilon$ is a constant. Using $t = \Theta(d)$, we obtain a neural network of width $m = O(d)$ and $\rnorm{g} = O(d^{3/2})$, matching the properties of the sawtooth (ridge function) interpolant $s_{w, d}$. Using $t = \Theta(1)$, we obtain a neural network of width $m = O(d^{3/2})$ and $\rnorm{g} = O(d)$, matching the properties of the interpolant from Theorem~\ref{thm:parity-rnorm-ub} but with almost exponentially smaller width. \end{remark}
A more detailed version of Theorem~\ref{thm:parity-rnorm-approx-ub} (which also specifies the \emph{intrinsic dimensionality} of $g$) is stated and proved in Appendix~\ref{assec:parity-rnorm-ub}. The proof uses a similar technique as that of Theorem~\ref{thm:parity-rnorm-ub}, but instead averages randomly sampled sawtooth functions \smash{$s_{\bw^{(1)}, t}, \dots, s_{\bw^{(k)}, t}$} for \smash{$\bw^{(j)} \sim \operatorname{Unif}(\{\pm 1\}^d)$} of width $t$. We show that for sufficiently large $k$, every $x \in \{\pm 1\}^d$ lies in the ``active'' region of about the same number of sawtooth functions; this yields a good approximation of $\chi(x)$ for all $x$.
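A minimal Monte Carlo sketch of this sampled averaging (the parameters $d=8$, $t=2$, $k=4000$ are chosen purely for illustration): averaging $k$ random width-$t$ sawtooths, normalized by $q_t = \Pr[\abs{\bw^\T x} \leq t]$, yields a uniformly small approximation error over the cube.

```python
import numpy as np
from itertools import product
from math import comb

rng = np.random.default_rng(0)
d, t, k = 8, 2, 4000
# q_t = P(|w.x| <= t) for w uniform on the cube
q_t = sum(comb(d, j) for j in range(d + 1) if abs(d - 2 * j) <= t) / 2 ** d

# knots and heights of the triangular-wave profile phi_t
knots = [-t - 1.0] + [t - 2.0 * tau for tau in range(t, -1, -1)] + [t + 1.0]
heights = [0.0] + [(-1.0) ** tau for tau in range(t, -1, -1)] + [0.0]

W = rng.choice([-1.0, 1.0], size=(k, d))               # sampled directions
signs = (-1.0) ** ((d - t) // 2) * np.prod(W, axis=1)  # per-direction prefactor

worst = 0.0
for x in product([-1.0, 1.0], repeat=d):
    x = np.array(x)
    g = np.mean(signs * np.interp(W @ x, knots, heights)) / q_t
    worst = max(worst, abs(g - np.prod(x)))
assert worst < 0.25   # sup-norm error over all 2^d cube points is small
```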
\subsection{Every parity interpolant has $\mathcal{R}$-norm\xspace $\Omega(d)$}\label{ssec:parity-rnorm-approx-lb}
Finally, we show that the $\mathcal{R}$-norm\xspace upper-bounds from Theorems~\ref{thm:parity-rnorm-ub} and~\ref{thm:parity-rnorm-approx-ub} are tight. That is, we show that every solution to \eqref{approximate-r-norm} for the parity dataset has $\mathcal{R}$-norm\xspace $\Omega(d)$, even for constant $\epsilon$. This is implied by the following stronger result, which requires only $\Lpu{2}$ approximation, as opposed to $\Lpu{\infty}$.
\begin{restatable}{theorem}{thmparityrnormlb}\label{thm:parity-rnorm-lb} For any $d \geq 8$ and $\alpha \in (0, 1)$, $\inf\setl{\rnorm{g} : g \colon \varOmega \to \mathbb{R}, \, \cubenormtwo{g - \chi} \leq 1 - \alpha} \geq \alpha d/8$. \end{restatable}
The core of the proof of Theorem~\ref{thm:parity-rnorm-lb} (given in Appendix~\ref{assec:parity-rnorm-approx-lb}) is an upper-bound on the correlation of any fixed ReLU neuron with the parity function $\chi$.
We note that a result analogous to Theorem~\ref{thm:parity-rnorm-lb} also holds for most \emph{sampled parity datasets} (defined in Section~\ref{sec:parity-gen}). This result is stated and proved in Appendix~\ref{asec:parity-approx-sampled}.
\section{Generalization properties of solutions to the variational problem}\label{sec:parity-gen}
In this section, we consider the generalization properties of a learning algorithm that returns a solution to \eqref{interpolating-r-norm} for a \emph{sampled parity dataset} $\{ (\bx_i,\chi(\bx_i)) : i \in [n] \}$ for $\bx_1,\dotsc,\bx_n \simiid \nu$.
(Again, for simplicity, we label data using $\chi$, but the same results hold for any $\chi_S$ with $|S| = \Theta(d)$.)
We show that $n = o(d^2/\sqrt{\log d})$ results in a predictor with nearly trivial accuracy. Note that information-theoretically, $n \geq O(d)$ is sufficient for learning any parity function~\citep{helmbold1992learning,fischer1992learning}. This means that the inductive bias based on $\mathcal{R}$-norm\xspace is not sufficient to achieve statistically optimal sample complexity for learning parity functions.
\subsection{Poor generalization with $n \ll d^2/\sqrt{\log d}$ samples} \label{ssec:parity-gen-lb}
We first give a lower bound on the sample size needed for non-trivial generalization for learning parity functions by solving \eqref{interpolating-r-norm} with the sampled parity dataset.
\begin{theorem}
\label{thm:parity-gen-lb}
If $n = o(d^2/\sqrt{\log d})$, then with probability at least $1/2$, every solution $\bg \colon \varOmega \to \mathbb{R}$ to \eqref{interpolating-r-norm} for the sampled parity dataset has $\cubenormtwo{\bg - \chi} \geq 1 - o(1)$. \end{theorem}
Its proof relies on the following approximation lemma, which shows the existence of a low-$\mathcal{R}$-norm\xspace network $\bg$ that perfectly fits all $n$ samples. The lemma (which is proved in Appendix~\ref{assec:parity-gen-lb}) defines $\bg$ with the same ``cap construction'' used in Theorem~1 of \citet{bln20}. \begin{restatable}{lemma}{lemmasmallsampleparityapprox}\label{lemma:small-sample-parity-approx}
There is an absolute constant $c > 0$ such that the following holds. If $n \leq cd^2$, and $\bx_1, \dotsc, \bx_n \simiid \nu$, then with probability at least $1/2$, there exists $\bg \colon \varOmega \to \mathbb{R}$ with $\bg(\bx_i) = \chi(\bx_i)$ for all $i \in [n]$ and $\rnorm{\bg} \leq \smash{4n\sqrt{\ln d}/d}$. \end{restatable} We conclude that generalization fails in this low-sample regime because Theorem~\ref{thm:parity-rnorm-lb} shows that no network with sufficiently small $\mathcal{R}$-norm\xspace can correlate with parity.
\begin{proof}[Proof of Theorem~\ref{thm:parity-gen-lb}]
Let $\alpha := \smash{64n\sqrt{\ln d}/d^2}$, so $\alpha = o(1)$ by assumption on $n$.
By Theorem~\ref{thm:parity-rnorm-lb}, every $g \colon \varOmega \to \mathbb{R}$ with \smash{$\cubenormtwo{g - \chi} \leq 1 - \alpha$} has \smash{$\rnorm{g} \geq \alpha d / 8 \geq 8n\sqrt{\ln d}/d$}.
However, by Lemma~\ref{lemma:small-sample-parity-approx}, with probability at least $1/2$, every solution $\bg$ to \eqref{interpolating-r-norm} for the dataset $\setl{ (\bx_i,\chi(\bx_i)) }_{i \in [n]}$ has $\rnorm{\bg} \leq \smash{4n\sqrt{\ln d}/d}$.
In this event, the solutions $\bg$ have $\cubenormtwo{\bg - \chi} \geq 1 - \alpha = 1 - o(1)$. \end{proof}
\subsection{Good generalization with $n \gtrsim d^3$ samples} \label{ssec:parity-gen-ub}
We complement the lower-bound in Theorem~\ref{thm:parity-gen-lb} with the following sample complexity upper-bound.
\begin{restatable}{theorem}{thmparitygenub}\label{thm:parity-gen-ub}
There is an absolute constant $C>0$ such that the following holds.
For any $\epsilon \in (0,1)$ and $\delta \in (0,1)$, if
\smash{$n \geq C(\log(1/\delta) + d^3/\epsilon^2)$}, then with probability at least $1-\delta$, every solution $\bg \colon \varOmega \to \mathbb{R}$ to \eqref{interpolating-r-norm} for the sampled parity dataset satisfies
$\cubenormtwo{\chi - \operatorname{clip} \circ\, \bg}^2 \leq \epsilon$, where $\operatorname{clip}(t) := \min\{\max\{t, -1\}, 1\}$. \end{restatable}
For technical reasons, we only bound the $\Lpu{2}$ error of the natural truncation of a solution to \eqref{interpolating-r-norm}. The proof in Appendix~\ref{assec:parity-gen-ub} is based on standard Rademacher complexity arguments.
We note that there is a gap between our lower bound (Theorem~\ref{thm:parity-gen-lb}) and upper bound (Theorem~\ref{thm:parity-gen-ub}): roughly a factor of $d\sqrt{\log d}$. We believe that this gap could be narrowed if one resolves the open question raised by \citet{bln20} about the minimum Lipschitz constant achievable by two-layer ReLU networks of width $m$ that interpolate a sample of size $n$; Lemma~\ref{lemma:small-sample-parity-approx} is derived from a theorem that produces networks with smoothness conjectured to be sub-optimal. Nevertheless, our lower bound in Theorem~\ref{thm:parity-gen-lb} is already high enough to establish the statistical suboptimality of solutions to \eqref{interpolating-r-norm}.
\section{Generality of the averaging technique for minimizing $\mathcal{R}$-norm\xspace}\label{sec:cosine-approx}
In this section, we show that the benefit of averaging goes beyond the parity dataset. We consider an \emph{$f$-dataset} $\{(x, f(x))\}_{x \in \{\pm 1\}^d}$, a generalization of the parity dataset in which the labels are given by a ridge function $f(x) = \phi(v^\T x)$ with $v \in \{\pm \frac1{\sqrt{d}}\}^d$ and $\phi$ $\rho$-periodic and $\frac1\rho$-Lipschitz. For such oscillatory ridge datasets, we prove the same contrast between minimum-$\mathcal{R}$-norm\xspace interpolation with and without ridge constraints, so long as the period $\rho$ is not too small (specifically, $\rho \geq 1/\sqrt{d}$). Concretely, suppose the dataset $\{ (x_i,f(x_i)) \}_{i\in[n]}$ used in \eqref{interpolating-r-norm} and \eqref{approximate-r-norm} is the $f$-dataset.
Then we have the following:
\begin{itemize}
\item The optimal value of \eqref{approximate-r-norm} for constant $\epsilon \in (0,1/2)$ is $\tilde{O}(\sqrt{d} / \rho)$. (Theorem~\ref{thm:periodic-ub})
\item The optimal value of \eqref{approximate-r-norm} for constant $\epsilon \in (0,1/2)$---with the additional constraint that $g$ be a ridge function---is $\Omega(\sqrt{d} / \rho^2)$. (Theorem~\ref{thm:ridge-cosine-lb})
\end{itemize}
\fussy Because the parity dataset is an $f$-dataset with a $1/{\sqrt{d}}$-periodic and $\sqrt{d}$-Lipschitz choice of $\phi$, the above results closely match those of Informal Theorem~\ref{ithm:approx}. We give both results, starting with an upper bound on the minimum-$\mathcal{R}$-norm\xspace approximate interpolant, which parallels Theorem~\ref{thm:parity-rnorm-approx-ub}.
\begin{restatable}{theorem}{thmperiodicub}\label{thm:periodic-ub}
Suppose $f \colon \varOmega \to [-1,1]$ is given by $f(x) = \phi(v^\T x)$ for some unit vector $v \in \mathbb{S}^{d-1}$ and some $\phi \colon [-\sqrt{d}, \sqrt{d}] \to [-1,1]$ that is $L$-Lipschitz and $\rho$-periodic for $\rho \in [\norm[\infty]{v},1]$.
Fix any $\epsilon \in (0,1)$.
There exists a function $g \colon \varOmega \to \mathbb{R}$ represented by a width-$m$ neural network such that:
\begin{equation*}
\cubenorminfty{f - g} \leq \epsilon ;
\quad
m \leq dL\polylog(1/\epsilon) \sqrt{\rho\norml[1]{v}} / \epsilon^2 ;
\quad
\rnorm{g} \leq L^2\polylog(d/\epsilon) \rho \norml[1]{v} / \epsilon .
\end{equation*} \end{restatable}
\begin{remark}\label{rmk:cosine}
Suppose $f(x) = \cos(\frac{2\pi}{\rho} v^\T x)$ for $v \in \{\pm \frac1{\sqrt{d}}\}^d$ and $\rho \in [\frac1{\sqrt{d}}, 1]$.
Theorem~\ref{thm:periodic-ub} implies that there exists an $\epsilon$-approximate interpolating neural network $g$ of width $\smash{\tilde{O}(\frac{d^{5/4}}{\sqrt\rho \epsilon^2})}$ and $\smash{\rnorm{g} = \tilde{O}(\frac{\sqrt{d}}{\rho \epsilon})}$.
If $d$ is divisible by $4$ and $\rho = 4/\sqrt{d}$, then $f(x) = \chi(x)$ for $x \in \{\pm 1\}^d$ (for $d \equiv 2 \pmod 4$, one instead has $f = -\chi$ on the cube), and the width and $\mathcal{R}$-norm\xspace bounds of Theorem~\ref{thm:parity-rnorm-approx-ub} for small $t$ are approximately recovered. \end{remark} A detailed version of Theorem~\ref{thm:periodic-ub} appears in Appendix~\ref{assec:periodic-rnorm-ub}. The construction is more delicate than that in Theorem~\ref{thm:parity-rnorm-approx-ub} due to the absence of symmetries that were present in the parity dataset.
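The identification of the cosine with parity can be checked directly (an illustrative sketch, not part of the formal development): for $d = 8$ (divisible by $4$), $v = \vec1/\sqrt{d}$, and $\rho = 4/\sqrt{d}$, the argument of the cosine is an integer multiple of $\pi$ at every cube point, and the resulting sign is exactly $\chi(x)$.

```python
import numpy as np
from itertools import product

d = 8                                   # divisible by 4
rho = 4 / np.sqrt(d)
v = np.ones(d) / np.sqrt(d)
max_err = 0.0
for x in product([-1.0, 1.0], repeat=d):
    x = np.array(x)
    # v.x = (d - 2k)/sqrt(d), so (2*pi/rho) * v.x = pi * (d/2 - k)
    f = np.cos(2 * np.pi / rho * (v @ x))
    max_err = max(max_err, abs(f - np.prod(x)))
assert max_err < 1e-9
```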
We now give the lower bound on the $\mathcal{R}$-norm\xspace of all approximately interpolating ridge functions, whose proof in Appendix~\ref{assec:periodic-rnorm-ridge-approx-lb} relies on a reduction to the argument of Theorem~\ref{thm:ridge-parity-lb}.
\begin{restatable}{theorem}{thmridgecosinelb}\label{thm:ridge-cosine-lb} Assume $d$ is even. Let $\ridge_d$ be the set of functions $g \colon \varOmega \to \mathbb{R}$ such that $g(x) = \phi(w^\T x)$ for some $w \in \mathbb{S}^{d-1}$ and Lipschitz continuous $\phi \colon [-\sqrt{d},\sqrt{d}] \to \mathbb{R}$. Let $\rho := 4q/\sqrt{d}$ for $q \in \setl{1, 2, \dots, \floorl{\sqrt{d}/4}}$ and $f(x) := \cos((2\pi/(\rho\sqrt{d}))\vec1^\T x)$. Then \[
\inf\setl{\rnorm{g}: g \in \ridge_d, \ \cubenorminfty{g - f} \leq 1/2}
= \Omega(\sqrt{d}/\rho^2) .
\] \end{restatable}
\begin{remark}
Contrasting the above result with the $\tilde{O}(\frac{\sqrt{d}}{\rho \epsilon})$ $\mathcal{R}$-norm\xspace of the averaging-based construction from Remark~\ref{rmk:cosine} shows that ridge functions are suboptimal solutions to \eqref{approximate-r-norm} for constant $\epsilon$. \end{remark}
\begin{remark}\label{rmk:periodic-ridge}
Lemma~\ref{lemma:lipschitz-approx} (in Appendix~\ref{assec:periodic-rnorm-ub}) implies the existence of a neural network $g_{\ridge} \in \ridge_d$ that point-wise approximates $f$ (i.e., $\cubenorminfty{g_{\ridge} - f} \leq \epsilon$) and has $\smash{\rnorml{g_{\ridge}} = {O}(\frac{\sqrt{d}}{\rho^2 \epsilon})}$.
Hence, the lower bound in Theorem~\ref{thm:ridge-cosine-lb} is tight when $\epsilon$ is constant. \end{remark} \section{Conclusion and future work}\label{sec:conclusion}
In this work, we shed light on the $\mathcal{R}$-norm\xspace inductive bias for learning neural networks, but numerous questions remain. We are particularly interested in understanding the solutions to \eqref{interpolating-r-norm} for other datasets, as well as the generality of the averaging techniques used in our constructions. Extensions of the $\mathcal{R}$-norm\xspace to deeper networks and analyzing solutions to \eqref{interpolating-r-norm} for other high dimensional datasets could also be useful for proving depth-separation results that focus on variational norm, complementing existing works that focus on width~\citep{tel16,es15,mcpz13,dan17,ss17,ses19}. Finally, our work suggests that minimizing $\mathcal{R}$-norm\xspace yields neural networks that are intrinsically high-dimensional, and we are interested in whether this phenomenon is borne out in architectures beyond two-layer fully-connected networks.
\appendix
\section{Additional preliminaries} \label{asec:prelim}
\subsection{Additional definitions and notations} \label{assec:notation} We say that $g: \varOmega \to \mathbb{R}$ is \textit{$k$-index} if there exists a matrix $U \in \mathbb{R}^{k \times d}$ and $\phi: \mathbb{R}^k \to \mathbb{R}$ such that $g(x) = \phi(U x)$ for all $x \in \varOmega$. A ridge function is 1-index.
For a matrix $M \in \mathbb{R}^{m \times n}$, we denote the $i$-th largest singular value of $M$ by $\sigma_i(M)$ for $i=1,\dotsc,\min\{m,n\}$.
A random variable $\bu$ is $c$-\emph{subgaussian} if $\norm[\psi_2]{\bu} := \inf\setl{ t\geq0 : \EEl{\exp\parenl{\bu^2/t^2}} \le 2 } \leq c$, and a random vector $\bv$ is $c$-\emph{subgaussian} if every one-dimensional projection of $\bv$ is $c$-subgaussian.
The \emph{bias-corrected network} $\bar{g}_{\mu}$ obtained from the (infinite-width) neural network $g_{\mu}$ is given by $\bar{g}_{\mu}(x) := g_{\mu}(x) - g_{\mu}(0)$; equivalently, $\bar{g}_{\mu}(x) = \int{\parenl{\varphi_{\operatorname{r}}(w^\T x + b) - \varphi_{\operatorname{r}}(b)} \, \mu(\dd w,\dd b)}$.
The asymptotics implied in the Landau notation (big-$O$, big-$\Omega$, etc.) regard all quantities as potentially increasing functions (e.g., $t$) or decreasing functions (e.g., $\epsilon$, $\delta$, $\alpha$, $\rho$) of the dimension $d$. The soft-$O$ notation $\tilde{O}(\cdot)$ (only used informally) suppresses terms that are poly-logarithmic in those that appear. Some of our theorems and lemmas contain an ``if clause'' that uses Landau notation, such as ``if $n \geq O(d^2)$, [\ldots]''. The interpretation of such a clause is: ``there exists $n_0(d) \in O(d^2)$ such that if $n \geq n_0(d)$, [\ldots]''. (And, of course, an analogous interpretation should be used when ``$O(d^2)$'' is replaced by other expressions using Landau notation.)
\subsection{Concentration inequalities} Our proofs make extensive use of textbook probability concentration inequalities. We provide those results below.
\begin{lemma}[Hoeffding's inequality; Theorem~2.8 in \citealp{blm13}]\label{lemma:hoeffding}
Let $\bu_1, \dots, \bu_n$ be independent, mean-zero random variables such that $\bu_i$ takes value in $[a_i, b_i]$ almost surely for all $i \in [n]$.
Then, for any $t > 0$,
\[\pr{\sum_{i=1}^n \bu_i \geq t} \leq \exp\paren{- \frac{2 t^2}{\sum_{i=1}^n (b_i - a_i)^2}}.\] \end{lemma}
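As an informal numerical sanity check of Hoeffding's inequality (purely illustrative, not part of the formal development), one can compare the empirical tail of a sum of Rademacher variables, which are mean-zero and supported on $[a_i,b_i]=[-1,1]$, against the bound $\exp(-2t^2/(4n))$; all parameter choices below are our own.

```python
import math
import random

random.seed(0)

# Rademacher summands: mean zero, supported on [a_i, b_i] = [-1, 1].
n, trials, t = 50, 5000, 18.0
exceed = sum(
    sum(random.choice((-1, 1)) for _ in range(n)) >= t
    for _ in range(trials)
)
empirical = exceed / trials
# Hoeffding bound: exp(-2 t^2 / sum_i (b_i - a_i)^2), with sum_i (b_i - a_i)^2 = 4n.
hoeffding = math.exp(-2 * t ** 2 / (4 * n))
assert empirical <= hoeffding
```

The empirical tail (roughly $0.008$ here) sits comfortably below the Hoeffding bound of $\exp(-3.24) \approx 0.039$, as the lemma guarantees.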
\begin{lemma}[Multiplicative Chernoff bound; Theorem~4.4 in \citealp{mu17}]\label{lemma:chernoff}
Let $\bu_1, \dots, \bu_n$ be independent Bernoulli random variables with $\pr{\bu_i = 1} = p \in [0,1]$ for all $i \in [n]$.
Then, for any $\eta \in (0, 1]$,
\[\pr{\sum_{i=1}^n \bu_i \geq (1 + \eta)pn} \leq \exp\paren{-\frac{pn\eta^2}{3}}.\] \end{lemma}
\begin{lemma}[Bernstein's inequality; Corollary~2.11 in \citealp{blm13}]\label{lemma:bernstein}
Let $\bu_1, \dots, \bu_n$ be independent, mean-zero random variables with $\bu_i \leq K$ almost surely for all $i \in [n]$, and let $v \coloneqq \sum_{i=1}^n \EEl{\bu_i^2}$.
Then, for any $t > 0$,
\[\pr{\sum_{i=1}^n \bu_i \geq t} \leq \exp\paren{- \frac{t^2}{2(v + K t/3)}}.\] \end{lemma}
\begin{lemma}[McDiarmid's inequality; Theorem~6.2 in \citealp{blm13}]\label{lemma:mcdiarmid}
Let $\bu_1, \dots, \bu_n$ be independent random variables, and let $f$ be a measurable function.
Suppose, for each $i \in [n]$, the value of $f(u_1, \dots, u_n)$ can change by at most $c_i \geq 0$ by changing the value of $u_i$. Then, for any $t > 0$,
\[\pr{f(\bu_1, \dots, \bu_n) - \EE{f(\bu_1, \dots, \bu_n)} \geq t} \leq \exp\paren{- \frac{2t^2}{\sum_{i=1}^n c_i^2}}.\] \end{lemma}
\begin{lemma}[Properties of subgaussian random variables]\label{lemma:subg}
Let $\bu_1, \dots, \bu_n$ be independent random variables with $\norm[\psi_2]{\bu_i} < \infty$ for all $i \in [n]$.
There is a universal constant $C>0$ such that the following hold.
\begin{enumerate}[label=(\roman*)]
\item (Concentration; Section~2.5.2 in \citealp{vershynin2018high}) For any $t > 0$, $\pr{\abs{\bu_1} \geq t} \leq 2 \exp(-t^2 / (C \norml[\psi_2]{\bu_1}^2))$.
\item (Maximum; Exercise~2.5.10 in \citealp{vershynin2018high}) $\EE{\max_{i \in [n]} \abs{\bu_i}} \leq C \sqrt{\ln n} \max_{i \in [n]} \norml[\psi_2]{\bu_i}$.
\item (Averaging; Proposition~2.6.1 in \citealp{vershynin2018high}) $\norml[\psi_2]{\sum_{i=1}^n \bu_i}^2 \leq C \sum_{i=1}^n \norml[\psi_2]{\bu_i}^2$.
\item (Centering; Lemma~2.6.8 in \citealp{vershynin2018high}) $\norm[\psi_2]{\bu_1 - \EE{\bu_1}} \leq C \norm[\psi_2]{\bu_1}$.
\item (Lipschitzness)
For any $1$-Lipschitz function $\phi \colon \mathbb{R} \to \mathbb{R}$ with $\phi(0) = 0$, $\norm[\psi_2]{\phi(\bu_1)} \leq \norm[\psi_2]{\bu_1}$.
\end{enumerate} \end{lemma} \begin{proof}
The only property not already proved in \citep{vershynin2018high} is (v).
Since $\phi$ is $1$-Lipschitz and $\phi(0) = 0$,
\[
\absl{\phi(\bu_1)} = \absl{\phi(\bu_1) - \phi(0)} \leq \absl{\bu_1 - 0} = \absl{\bu_1} .
\]
Hence
\[
\EE{\exp(\phi(\bu_1)^2/\norml[\psi_2]{\bu_1}^2)} \leq \EE{\exp(\bu_1^2/\norml[\psi_2]{\bu_1}^2)} \leq 2 ,
\]
which implies $\norml[\psi_2]{\phi(\bu_1)} \leq \norml[\psi_2]{\bu_1}$. \end{proof}
\begin{lemma}[Singular values of random matrices; Theorem~4.6.1 in \citealp{vershynin2018high}]
\label{lemma:random-matrix-singular-values}
There is a constant $C>0$ such that the following holds.
Let $\bA$ be an $m \times n$ random matrix whose rows are independent, mean-zero, $v$-subgaussian random vectors in $\mathbb{R}^n$.
For any $t\geq0$, with probability at least $1-2e^{-t}$, the singular values $\sigma_1(\bA), \sigma_2(\bA), \dotsc, \sigma_n(\bA)$ of $\bA$ satisfy
\begin{equation*}
\sqrt{m} - C v (\sqrt{n} + \sqrt{t}) \leq \sigma_i(\bA) \leq \sqrt{m} + Cv (\sqrt{n} + \sqrt{t}) \quad \text{for all $i \in [n]$} .
\end{equation*} \end{lemma}
\begin{lemma}[Moment generating function for $\operatorname{Unif}(\{\pm 1\})$; Lemma~2.2 in \citealp{blm13}]
\label{lemma:mgf-flip}
If $\bu \sim \operatorname{Unif}(\{\pm 1\})$, then $\EEl{ \exp(t \bu) } \leq \exp(t^2/2)$ for all $t \in \mathbb{R}$. \end{lemma}
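Lemma~\ref{lemma:mgf-flip} reduces to the elementary inequality $\cosh(t) \le e^{t^2/2}$, which follows by comparing Taylor coefficients ($1/(2k)! \le 1/(2^k k!)$). A quick numerical check over a grid of $t$ values, purely for illustration:

```python
import math

# E[exp(t u)] for u ~ Unif({+1, -1}) equals cosh(t); verify cosh(t) <= exp(t^2/2).
ts = [i / 10 for i in range(-50, 51)]
for t in ts:
    mgf = 0.5 * (math.exp(t) + math.exp(-t))  # = cosh(t)
    assert mgf <= math.exp(t ** 2 / 2) * (1 + 1e-12)  # tiny slack for rounding
```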
\subsection{Covering numbers} Let $\mathcal{N}(\epsilon, A, \gamma)$ denote the \textit{covering number} of a metric space $A$ with metric $\gamma:A\times A \to \mathbb{R}_{+}$. That is, $\mathcal{N}(\epsilon, A, \gamma) = \min \absl{\mathcal{N}_\epsilon}$ over all $\mathcal{N}_\epsilon \subset A$ such that for every $x \in A$, there exists $x' \in \mathcal{N}_\epsilon$ with $\gamma(x, x') \leq \epsilon$. Note that $\mathcal{N}(\epsilon, [-1,1], \abs{\cdot}) \leq \frac{2}\epsilon$ for $\epsilon \leq 1$.
\begin{lemma}[Covering numbers of $\mathbb{S}^{d-1}$; Corollary~4.2.13 in \citealp{vershynin2018high}]\label{lemma:covering}
For any $\epsilon > 0$, $\mathcal{N}(\epsilon, \mathbb{S}^{d-1}, \norm[2]{\cdot}) \leq (\frac2\epsilon + 1)^d$.
If $\epsilon \in (0,1]$, then $\mathcal{N}(\epsilon, \mathbb{S}^{d-1}, \norm[2]{\cdot}) \leq (\frac3\epsilon)^d$. \end{lemma}
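For intuition, the one-dimensional bound $\mathcal{N}(\epsilon, [-1,1], \abs{\cdot}) \le \frac2\epsilon$ can be realized by the explicit net $\{-1 + (2i+1)\epsilon : 0 \le i < \lceil 1/\epsilon \rceil\}$, whose $i$-th center covers $[-1+2i\epsilon, -1+2(i+1)\epsilon]$. The sketch below (an illustrative aside; the helper name is our own) verifies both the size bound and the coverage numerically.

```python
import math

def interval_net(eps):
    # Centers -1 + eps, -1 + 3*eps, ...; ceil(1/eps) of them suffice to cover [-1, 1].
    return [-1 + (2 * i + 1) * eps for i in range(math.ceil(1 / eps))]

for eps in (0.5, 0.3, 0.1, 0.07):
    net = interval_net(eps)
    assert len(net) <= 2 / eps                       # the claimed covering-number bound
    grid = [-1 + j / 1000 for j in range(2001)]      # fine grid standing in for [-1, 1]
    assert all(min(abs(x - c) for c in net) <= eps + 1e-9 for x in grid)
```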
\section{Additional properties of $\mathcal{R}$-norm\xspace}\label{asec:rnorm}
\subsection{Existence and sparsity of the solution to \ref{interpolating-r-norm}}
\begin{proposition}[Lemma 2 in \citealp{pn21b}] \label{prop:integral-representation} For any $g: \varOmega \to \mathbb{R}$ with $\rnorm{g} < \infty$, there exists an even Radon measure\footnote{Evenness of $\mu$ should be interpreted in the distributional sense, but it roughly means $\mu(w,b) = \mu(-w,-b)$ when $\mu$ has a density.} $\mu$ over $\mathbb{S}^{d-1} \times [-\sqrt{d},\sqrt{d}]$, and $v \in \mathbb{R}^d, c\in \mathbb{R}$, such that $g$ admits an integral representation of the form $$ g(x) = \int_{\mathbb{S}^{d-1} \times [-\sqrt{d}, \sqrt{d}]}{ \varphi_{\operatorname{r}}(w^\T x +b) \, \mu(\dd w,\dd b) } + v^\T x + c \quad \forall x \in \varOmega. $$ Moreover, $\mu$ attains the infimum in \eqref{r-norm}, i.e., $\rnorm{g} = \abs{\mu}$. \end{proposition}
The following theorem of \citet{pn21b} formalizes the fact that the $\mathcal{R}$-norm\xspace-minimizing interpolant of a $d$-dimensional dataset can be represented as a finite-width neural network. \begin{theorem}\label{thm:sparse-vp-solution}
For any dataset $\{ (x_i,y_i)\}_{i \in [n]}$ from $\varOmega \times \mathbb{R}$,
the infimum in \eqref{interpolating-r-norm} is achieved by a function $g$ that is the sum of a finite-width neural network and an affine part:
$$g(x) = \sum_{j=1}^{m}{a^{(j)} \varphi_{\operatorname{r}}(w^{(j)\T} x + b^{(j)})} + v^\T x + c,$$
with $m \le \max\{0, n - (d+1)\}$ and $(w^{(j)},b^{(j)}) \in \mathbb{S}^{d-1} \times [-\sqrt{d},\sqrt{d}]$ for all $j \in [m]$. \end{theorem} \begin{proof} By Theorem~5 of \citet{pn21b} (see also the proof of Theorem~1 of \citet{pn21a}, which covers the interpolation form of the optimization problem), there exists a neural network $x \mapsto \sum_{j=1}^{m'} a^{(j)} \varphi_{\operatorname{r}}(w^{(j)\T} x + b^{(j)})$ of width $m' \le n-(d+1)$, and an affine function $x \mapsto v^{(0)\T} x + c^{(0)}$, such that their sum achieves the infimum in \eqref{interpolating-r-norm}. We divide the neurons into two sets according to whether the corresponding bias is at most $\sqrt{d}$ in absolute value. Since every $x \in \varOmega$ satisfies $\norml[2]{x} \leq \sqrt{d}$, a neuron with $b^{(j)} < -\sqrt{d}$ vanishes identically on $\varOmega$ and may be dropped; hence, after reordering, assume the first $m$ neurons satisfy $\abs{b^{(j)}} \le \sqrt{d}$ and the rest satisfy $b^{(j)} > \sqrt{d}$. Then we have \begin{align*} g(x) &= \sum_{j=1}^{m}{a^{(j)} \varphi_{\operatorname{r}}(w^{(j)\T} x + b^{(j)})} + \sum_{j=m+1}^{m'}{a^{(j)} \varphi_{\operatorname{r}}(w^{(j)\T} x + b^{(j)})} + v^{(0)\T} x + c^{(0)} \\ &= \sum_{j=1}^{m}{a^{(j)} \varphi_{\operatorname{r}}(w^{(j)\T} x + b^{(j)})} + \sum_{j=m+1}^{m'}{a^{(j)} (w^{(j)\T} x + b^{(j)})} + v^{(0)\T} x + c^{(0)} \\ &= \sum_{j=1}^{m}{a^{(j)} \varphi_{\operatorname{r}}(w^{(j)\T} x + b^{(j)})} + \Bigl( \underbrace{v^{(0)} + \sum_{j=m+1}^{m'}{a^{(j)}w^{(j)}}}_{=: v} \Bigr)^\T x + \underbrace{\sum_{j=m+1}^{m'}{a^{(j)} b^{(j)}} + c^{(0)}}_{=: c}. \end{align*} Therefore, $g$ has the desired form with $m \leq m' \leq n - (d+1)$. \end{proof}
\section{Extending our results to a different variational norm}\label{asec:vnorm}
This paper considers the approximation and generalization implications of bounding the complexity of shallow neural networks with the $\mathcal{R}$-norm\xspace. However, $\mathcal{R}$-norm\xspace is not the only weight-based complexity measurement, and other works employ slightly different norms for similar purposes. This appendix demonstrates that our results are not peculiarities of our formulation of $\mathcal{R}$-norm\xspace and extend to other variational norms. One alternative---which we refer to as the \textit{$\mathcal{V}_2$-norm\xspace}---omits the linear component of the neural network whose measure determines the $\mathcal{R}$-norm\xspace and instead permits ReLU neurons whose thresholds lie outside the domain $\varOmega$.
We first introduce notation for an infinite-width neural network that permits such thresholds. Let $\vmeasures$ denote the space of finite signed measures over $\mathbb{S}^{d-1} \times [-2\sqrt{d},2\sqrt{d}]$. For a measure $\tilde{\mu} \in \vmeasures$, let $\tilde{g}_{\tilde\mu}: \varOmega \to \mathbb{R}$ be the infinite-width neural network with \[\tilde{g}_{\tilde\mu}(x) = \int_{\mathbb{S}^{d-1} \times [-2\sqrt{d},2\sqrt{d}]} \varphi_{\operatorname{r}}(w^\T x + b) \, \tilde\mu(\dd w, \dd b),\] where $\tilde\mu$ has total variation norm $\abs{\tilde\mu} = \int_{\mathbb{S}^{d-1} \times [-2\sqrt{d},2\sqrt{d}]} \abs{\tilde\mu}(\dd w, \dd b)$. Now, we introduce the $\mathcal{V}_2$-norm\xspace for some $g: \varOmega \to \mathbb{R}$: \begin{equation}
\label{v-norm}
\tag{$\mathcal{V}_2$-norm\xspace}
\vnorm{g} = \inf_{\tilde\mu \in \vmeasures}{\abs{\tilde\mu}} \quad \text{s.t.} \quad g(x) = \tilde{g}_{\tilde\mu}(x), \quad \forall x \in \varOmega. \end{equation} In the same spirit as Lemma \ref{lemma:rnorm-relu-l1}, for a discrete network $g(x) = g_{\theta}(x)$ with $\theta = (a^{(j)}, w^{(j)}, b^{(j)})_{j \in [m]} \in (\mathbb{R} \times \mathbb{S}^{d-1} \times [-2\sqrt{d}, 2\sqrt{d}])^m$, we have $\vnorm{g} \leq \norm[1]{a}$.
Our definition of the $\mathcal{V}_2$-norm\xspace was introduced by \citet{sx21} as the norm corresponding to their variation space $\mathbb{P}_1$ with constants $c_1 = -2\sqrt{d}$ and $c_2 = 2\sqrt{d}$. They relate the $\mathcal{V}_2$-norm\xspace to the \emph{Barron norm} of \citet{emw19} and the Radon norm of \citet{owss19}. We show that the $\mathcal{V}_2$-norm\xspace and the $\mathcal{R}$-norm\xspace are closely related and that all of our bounds apply equivalently to the $\mathcal{V}_2$-norm\xspace. We first place upper and lower bounds on the $\vnorm{g}$ in terms of $\rnorm{g}$ and then explain why each of our results transfers to this new variational norm.
\begin{theorem}\label{thm:vr-norm}
Suppose $g: \varOmega \to \mathbb{R}$ has $\rnorm{g} < \infty$. Then, $\rnorm{g} \leq \vnorm{g}$. If $g$ is bounded near the origin (i.e., $\abs{g(x)} \leq K$ for all $x$ with $\norm[2]{x} \leq 1$), then $\vnorm{g} \leq 12\rnorm{g} + 18K$. \end{theorem}
As a result, all of our results that apply to the $\mathcal{R}$-norm\xspace translate, modulo constants, to the $\mathcal{V}_2$-norm\xspace. Because $\vnorm{g} \geq \rnorm{g}$ always holds, every theorem that places a lower bound on the $\mathcal{R}$-norm\xspace translates verbatim to $\vnorm{g}$, including Theorems~\ref{thm:ridge-parity-lb}, \ref{thm:parity-rnorm-lb}, \ref{thm:parity-sample-rnorm-lb}, and \ref{thm:parity-gen-ub}. The upper bounds hold up to constants because every target function we consider is bounded by some $K$ on $\varOmega$.
\begin{itemize}
\item Because every sawtooth $s_{w,t}$ is bounded by 1, the averages of sawtooths $g$ in Theorems~\ref{thm:parity-rnorm-ub} and \ref{thm:parity-rnorm-approx-ub} are bounded by $K = \frac1q = O(\sqrt{d})$. Hence, $\vnorm{g} = O(d)$, just like $\rnorm{g}$.
\item For the ``cap'' construction $\bg$ of Theorem~\ref{thm:parity-gen-lb}, there are $k = O(n \log (d) / d)$ neurons, none of which are active at the origin. Their biases are negative and---under the ``good event''---their weight norms are $O(1 / \log d)$. Thus, no neuron can output a value greater than $O(1 / \log d)$, so even if all $k$ neurons activate, every $x$ with $\norm[2]{x} \leq 1$ has $\abs{\bg(x)} = O(n / d)$, which is dominated by the $\mathcal{R}$-norm\xspace of $O(n \sqrt{\log d} / d)$.
\item The construction $g$ of Theorem~\ref{thm:periodic-ub} computes an average of functions bounded on $[-1, 1]$.
Therefore, $g$ is bounded by $K = 1$, and its $\mathcal{V}_2$-norm\xspace matches its $\mathcal{R}$-norm\xspace up to constants. \end{itemize}
\begin{proof}[Proof of Theorem~\ref{thm:vr-norm}]
We show separately that $\vnorm{g} \geq \rnorm{g}$, and then that $\vnorm{g} \leq 12 \rnorm{g} + 18K$ under the additional hypothesis that $|g(x)| \leq K$ for all $x \in \varOmega$ such that $\norml[2]{x} \leq 1$.
\textbf{Lower bound on $\mathcal{V}_2$-norm\xspace:} Fix any $\xi > 0$. By the definition of $\mathcal{V}_2$-norm\xspace, there exists $\tilde\mu \in \vmeasures$ such that $g(x) = \tilde{g}_{\tilde\mu}(x)$ for all $x \in \varOmega$ and $\absl{\tilde\mu} \leq \vnorm{g} + \xi$.\footnote{This relies on $\vnorm{g} < \infty$; if instead $\vnorm{g} = \infty$, the inequality $\rnorm{g} \leq \vnorm{g}$ holds trivially.} We show that there exist $g_\mu$ (where $\mu$ is $\tilde{\mu}$ with the support of $b$ restricted to $[-\sqrt{d}, \sqrt{d}]$), $v$, and $c$ such that $\tilde{g}_{\tilde\mu}(x) = g_\mu(x) + v^\T x + c$ for all $x \in \varOmega$. Observe that for any $x \in \varOmega$, $w^\T x + b > 0$ if $b > \sqrt{d}$ and $w^\T x + b < 0$ if $ b < -\sqrt{d}$. \begin{align*}
\tilde{g}_{\tilde\mu}(x)
&= \int_{\mathbb{S}^{d-1} \times [-2\sqrt{d},-\sqrt{d}]} \varphi_{\operatorname{r}}(w^\T x + b)\tilde\mu(\dd w, \dd b) + \int_{\mathbb{S}^{d-1} \times [-\sqrt{d},\sqrt{d}]} \varphi_{\operatorname{r}}(w^\T x + b)\tilde\mu(\dd w, \dd b)\\ &\quad + \int_{\mathbb{S}^{d-1} \times [\sqrt{d},2\sqrt{d}]} \varphi_{\operatorname{r}}(w^\T x + b) \tilde\mu(\dd w, \dd b) \\
&= 0 + \int_{\mathbb{S}^{d-1} \times [-\sqrt{d},\sqrt{d}]} \varphi_{\operatorname{r}}(w^\T x + b)\mu(\dd w, \dd b) + \int_{\mathbb{S}^{d-1} \times [\sqrt{d},2\sqrt{d}]} (w^\T x + b) \tilde\mu(\dd w, \dd b) \\
&= g_\mu(x) + \sum_{i=1}^d x_i \underbrace{\int_{\mathbb{S}^{d-1} \times [\sqrt{d},2\sqrt{d}]} w_i \tilde\mu(\dd w, \dd b)}_{\coloneqq v_i} + \underbrace{\int_{\mathbb{S}^{d-1} \times [\sqrt{d},2\sqrt{d}]} b \tilde\mu(\dd w, \dd b)}_{\coloneqq c} \\
&= g_\mu(x) + v^\T x + c. \end{align*}
As a result, $\rnorm{g} \leq \abs{\mu} \leq \absl{\tilde\mu} \leq \vnorm{g} + \xi$. Because the argument holds simultaneously for all $\xi > 0$, we conclude that $\rnorm{g} \leq \vnorm{g}$.
\textbf{Upper bound on $\mathcal{V}_2$-norm\xspace:} By Proposition~\ref{prop:integral-representation}, there exist $\mu \in \vmeasures$, $v \in \mathbb{R}^d$, and $c \in \mathbb{R}$ such that $g(x) = g_\mu(x) + v^\T x + c$ for all $x \in \varOmega$ and $\abs\mu = \rnorm{g}$. We construct $\tilde\mu \in \vmeasures$ such that $g_\mu(x) + v^\T x + c = \tilde{g}_{\tilde\mu}(x)$ for all $x \in \varOmega$:\footnote{In the event that $v = \vec0$, we use $\frac{v}{\norm[2]{v}} := \frac1{\sqrt{d}} \vec1 \in \mathbb{S}^{d-1}$.} \[\tilde{\mu}(w, b) = \begin{cases}
\mu(w, b) & \text{if $b \in [-\sqrt{d}, \sqrt{d}]$} ; \\
\paren{-3\norm[2]{v} + \frac{2c}{\sqrt{d}}}\delta\paren{\paren{w, b} - \paren{\frac{v}{\norm[2]{v}},2\sqrt{d}}} \\\quad + \paren{4\norm[2]{v} - \frac{2c}{\sqrt{d}}} \delta\paren{\paren{w, b} - \paren{\frac{v}{\norm[2]{v}},\frac32\sqrt{d}}} & \text{otherwise}.
\end{cases} \]
Fix any $x \in \varOmega$. Then: \begin{align*}
\tilde{g}_{\tilde\mu}(x) - g_\mu(x)
&= \paren{-3\norm[2]{v} + \frac{2c}{\sqrt{d}}} \varphi_{\operatorname{r}}\paren{\frac{v^\T}{\norm[2]{v}} x + 2\sqrt{d}} \\&\quad + \paren{4\norm[2]{v} - \frac{2c}{\sqrt{d}}}\varphi_{\operatorname{r}}\paren{\frac{v^\T}{\norm[2]{v}} x + \frac32\sqrt{d}} \\
&= \paren{-3\norm[2]{v} + \frac{2c}{\sqrt{d}}} \paren{\frac{v^\T}{\norm[2]{v}} x + 2\sqrt{d}} + \paren{4\norm[2]{v} - \frac{2c}{\sqrt{d}}}\paren{\frac{v^\T}{\norm[2]{v}} x + \frac32\sqrt{d}} \\
&= v^\T x + c. \end{align*} Therefore, $\absl{\tilde\mu} \leq \abs\mu + \absl{-3\norm[2]{v} + \frac{2c}{\sqrt{d}}} + \absl{4\norm[2]{v} - \frac{2c}{\sqrt{d}}} \leq \abs\mu + 7 \norml[2]{v} + \frac{4\abs{c}}{\sqrt{d}}$. It suffices to bound $\norm[2]{v}$ and $\abs{c}$. \begin{itemize} \item Let $x_0\coloneqq \frac{v}{\norm[2]{v}}$. By boundedness, the triangle inequality, and H\"older's inequality:
\begin{align*}
\abs{g(x_0) - g(0)} &\geq \abs{v^\T x_0 } - \abs{g_\mu(x_0) - g_\mu(0)} \\
&= \norm[2]{v} - \abs{\int_{\mathbb{S}^{d-1} \times [-\sqrt{d}, \sqrt{d}]} (\varphi_{\operatorname{r}}(w^\T x_0 + b) - \varphi_{\operatorname{r}}( b))\mu(\dd w, \dd b)} \\
&\geq \norm[2]{v} - \int_{\mathbb{S}^{d-1} \times [-\sqrt{d}, \sqrt{d}]} \abs{\varphi_{\operatorname{r}}(w^\T x_0 + b) - \varphi_{\operatorname{r}}(b)}\abs{\mu}(\dd w, \dd b) \\
&\geq \norm[2]{v} - \norml[2]{x_0} \abs{\mu}
=\norm[2]{v} - \abs{\mu}.
\end{align*}
Hence, $\norm[2]{v} \leq \abs{g(x_0) - g(0)} + \abs{\mu} \leq 2K + \abs{\mu}$.
\item We similarly employ our bound on $g(0)$: \begin{align*}
K
&\geq \abs{g(0)}
\geq \abs{c} - \abs{\int_{\mathbb{S}^{d-1} \times [-\sqrt{d}, \sqrt{d}]} \varphi_{\operatorname{r}}(b) \mu(\dd w, \dd b)}
\geq \abs{c} - \abs{\mu} \sqrt{d}.
\end{align*}
As a result, $\abs{c} \leq K + \abs\mu \sqrt{d}$. \end{itemize} Therefore, $\vnorm{g} \leq \absl{\tilde\mu} \leq 12\abs{\mu} + 18K \leq 12 \rnorm{g} + 18K$. \end{proof}
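The two-atom construction in the upper-bound proof can also be verified numerically: with $u = v/\norm[2]{v}$ and coefficients $a_1 = -3\norm[2]{v} + 2c/\sqrt{d}$ and $a_2 = 4\norm[2]{v} - 2c/\sqrt{d}$, both neurons are active on all of $\varOmega$ (their thresholds $2\sqrt{d}$ and $\frac32\sqrt{d}$ exceed $\abs{u^\T x} \le \sqrt{d}$), and the pair reproduces $v^\T x + c$ exactly. The sketch below is illustrative only, with randomly drawn $v$ and $c$.

```python
import math
import random

random.seed(2)
d = 10
v = [random.gauss(0, 1) for _ in range(d)]
c = random.gauss(0, 1)
vn = math.sqrt(sum(vi * vi for vi in v))
u = [vi / vn for vi in v]                       # u = v / ||v||_2
a1 = -3 * vn + 2 * c / math.sqrt(d)
a2 = 4 * vn - 2 * c / math.sqrt(d)

relu = lambda z: max(z, 0.0)
worst = 0.0
for _ in range(200):
    x = [random.choice((-1, 1)) for _ in range(d)]
    ux = sum(ui * xi for ui, xi in zip(u, x))   # |u^T x| <= sqrt(d): no clipping occurs
    lhs = a1 * relu(ux + 2 * math.sqrt(d)) + a2 * relu(ux + 1.5 * math.sqrt(d))
    rhs = sum(vi * xi for vi, xi in zip(v, x)) + c
    worst = max(worst, abs(lhs - rhs))
assert worst < 1e-9
```

Note that $a_1 + a_2 = \norm[2]{v}$ recovers the linear part and $\sqrt{d}(2a_1 + \frac32 a_2) = c$ recovers the constant, exactly as in the proof.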
\section{Proofs for Section~\ref{sec:parity-approx}}\label{asec:parity-approx}
\subsection{Proofs for Section~\ref{ssec:parity-rnorm-ridge-approx-lb}}\label{assec:parity-rnorm-ridge-approx-lb}
The proof of Theorem~\ref{thm:ridge-parity-lb} relies on the following lemma, which is essentially a robust version of the mean value theorem.
\begin{lemma}\label{lemma:calc}
Suppose $\phi: \mathbb{R} \to \mathbb{R}$ is Lipschitz continuous on the interval $[t_1, t_2]$ with $\phi(t_1) \neq \phi(t_2)$.
Define
\begin{multline*}
A := \biggl\{ t \in [t_1, t_2] :
\text{$\phi'(t)$ exists} , \\
\abs{\phi'(t)} \geq \frac12 \cdot \frac{\abs{\phi(t_2) - \phi(t_1)}}{t_2 - t_1},
\ \sign{\phi'(t)} = \sign{\phi(t_2) - \phi(t_1)} \biggr\}.
\end{multline*}
(The factor $1/2$ in the definition of $A$ can be replaced by any constant in $(0,1)$.)
Then, $\Leb(A) > 0$, where $\Leb$ is the Lebesgue measure. \end{lemma} \begin{proof} Recall that $\phi'$ denotes the derivative of $\phi$ (or the right-hand Dini derivative), which is guaranteed to exist except on a null set by Rademacher's theorem. Let $s := \sign{\phi(t_2) - \phi(t_1)}$. By the assumption $\phi(t_1) \neq \phi(t_2)$ and the Fundamental Theorem of Calculus, we have \[ 0 < \absl{\phi(t_2) - \phi(t_1)} = s\paren{\phi(t_2) - \phi(t_1)} = \int_{t_1}^{t_2} s \phi'(z) \dd z \leq (t_2 - t_1) \esssup_{z \in [t_1, t_2]} s \phi'(z). \] Recall that, by definition, \[
\esssup_{z \in [t_1, t_2]} s \phi'(z)
= \inf\left\{
a :
\Leb(\{
z \in [t_1,t_2] :
\text{$\phi'(z)$ exists} , \
s \phi'(z) > a
\}) = 0
\right\} , \] and thus \[
B :=
\left\{
z \in [t_1,t_2] :
\text{$\phi'(z)$ exists} , \
s(\phi(t_2) - \phi(t_1)) \leq 2 \cdot (t_2 - t_1) s \phi'(z)
\right\} \] satisfies $\Leb(B) > 0$. Observe that $B=A$, so $\Leb(A) > 0$, concluding the proof. \end{proof}
\subsection{Proofs for Section~\ref{ssec:parity-rnorm-ub}}\label{assec:parity-rnorm-ub}
\begin{restatable}[Detailed version of Theorem~\ref{thm:parity-rnorm-approx-ub}]{theorem}{thmparityrnormapproxubfull}
\label{thm:parity-rnorm-approx-ub-full} There exists a universal constant $C$ such that for any even $d$, even sawtooth width $t \in \set{0, 2, \dots, d}$, and accuracy $\epsilon \in (0, \frac12)$, there exists a $k$-index (see Appendix~\ref{assec:notation}) width-$m$ neural network $g$ with $\cubenorminfty{g - \chi} \leq \epsilon$ such that: \begin{enumerate}
\item If $t \leq C \sqrt{d \ln \frac1\epsilon}$, then $k = O(\frac{d^{3/2}}{t+1} \cdot \frac{\log^{1/2}\frac1\epsilon}{\epsilon^2})$, $m = O(d^{3/2} \frac{\log^{1/2}\frac1\epsilon}{\epsilon^2})$, and $\rnorm{g} = O(d \log \frac1\epsilon)$.
\item Otherwise, $k=O(\frac{d^2}{\epsilon t^2})$, $m = O(\frac{d^2}{\epsilon t})$, and $\rnorm{g} = O(t \sqrt{d})$. \end{enumerate} \end{restatable}
\begin{proof} For $\bw^{(1)}, \dots, \bw^{(k)} \simiid \operatorname{Unif}(\{\pm 1\}^d)$ and $q := \pr{\abs{\bw^{(1)\T} x} \leq t}$ (for any $x \in \{\pm 1\}^d$), let \[\bg(x) := \frac{1}{kq} \sum_{j=1}^k s_{\bw^{(j)}, t}(x).\] Because $\EE{s_{\bw, t}(x)} = q \cdot \chi(x)$, we have $\EE{\bg(x)} = \chi(x)$. Following the arguments in the proof of Theorem~\ref{thm:parity-rnorm-ub}, we have $\rnorm{\bg} = O(\frac{(t+1) \sqrt{d}}{q})$, and $\bg$ has width $O(k(t+1))$.
It remains to place a lower bound on $q$ and to show that with non-zero probability, $\bg$ uniformly approximates $\chi$. By applying a union bound, it suffices to show that $\pr{\abs{\bg(x) - \chi(x)} \geq \epsilon} \leq \frac{1}{2^{d+1}}$ for any fixed $x \in \{\pm 1\}^d$.
For fixed $x$, let $\br(x) := \absl{\setl{j \in [k]: \absl{x^\T \bw^{(j)}} > t}}$. We upper-bound the accuracy of approximation of $\chi(x)$ by $\bg(x)$ in terms of $\br(x)$: \begin{multline*}
\abs{\bg(x) - \chi(x)}
= \abs{\frac1{qk} \sum_{j=1}^k \indicatorl{\absl{x^\T \bw^{(j)}} \leq t}-1} \\
= \abs{\frac{(k - \br(x))(1 - q)}{qk}-\frac{\br(x)q}{qk}}
= \abs{\frac{1 - q}{q} - \frac{\br(x)}{qk}}. \end{multline*}
Define \smash{$T := 2\ceill{\sqrt{(d/2) \ln(8/\epsilon)}}$}, and note that $\pr{|\bw^\T x| \geq T} \leq \frac{\epsilon}4$. The proof naturally divides into two cases, depending on the value of $t$.
\textbf{Case 1: $t \leq T$.} We first lower-bound $q$. Because $\bw^\T x$ is a shifted symmetric binomial distribution around $\bw^\T x = 0$, if $\abs{t'} \geq \abs{t}$ and $t' \equiv t \pmod 2$, then $\pr{\bw^\T x = t'} \leq \pr{\bw^\T x = t}$. Then, for any $t \leq T$: \begin{align*} q &= \sum_{\tau=-t/2}^{t/2} \pr{\bw^\T x = 2\tau} = (t+1) \cdot \frac1{t+1} \sum_{\tau=-t/2}^{t/2} \pr{\bw^\T x = 2\tau} \\ &\geq (t+1) \cdot \frac1{T+1} \sum_{\tau=-T/2}^{T/2} \pr{\bw^\T x = 2\tau}
= \frac{t+1}{T+1} \pr{|\bw^\T x| \leq T} \\ &\geq \frac{(t+1)(1 - \frac\epsilon2)}{2\sqrt{d \ln \frac4\epsilon }} \geq \frac{t+1}{4\sqrt{d \ln \frac4\epsilon }} . \end{align*}
Now, we bound $\br(x)$ via Bernstein's inequality (Lemma~\ref{lemma:bernstein}), taking $k \geq \frac{C d^{3/2} \sqrt{\ln \frac1\epsilon}}{\epsilon^2 (t+1)}$: \begin{align*}
\pr{|\bg(x) - \chi(x)| > \epsilon} &= \pr{\abs{\br(x) - \EEl{\br(x)}} > \epsilon qk} \\ & \leq 2\exp\left(-\frac{\epsilon^2 q^2 k^2 }{2(kq(1 - q) + \epsilon q k/3) }\right) \\ & \leq 2\exp\left(-\frac{\epsilon^2 k(t+1)}{8(1+\epsilon/3)\sqrt{d \ln \frac4\epsilon}} \right) \leq \frac1{2^{d+1}}. \end{align*}
\textbf{Case 2: $t \geq T$.} By Hoeffding's inequality (Lemma~\ref{lemma:hoeffding}) and the assumption on $t$, we have $q \geq 1 - 2\exp(-\frac{2 t^2}{d}) \geq 1 - \epsilon/4 \geq 3/4$. Observe that $\EE{\br(x)} = (1-q)k \leq \frac{\epsilon k}{4}$.
We show that $ \abs{\bg(x) - \chi(x)}= \absl{\frac{1 - q}{q} - \frac{\br(x)}{qk}} \leq \epsilon$ by showing that $\br(x) \leq (1 - q)k + \epsilon qk$ and $\br(x) \geq (1 - q)k - \epsilon qk$. Because $1-q \leq \frac\epsilon4$ and $q \geq \frac34$, $(1 - q)k - \epsilon qk \leq - \frac\epsilon2k$, so the second inequality is always satisfied because $\br(x) \geq 0$. For the former inequality, it suffices to show that $\br(x) \leq \frac{3\epsilon k}4$ with probability at least $1-2^{-(d+1)}$. We take $k \geq \frac{Cd^2}{\epsilon t^2}$, which implies $k \geq C(\frac{d}{2\epsilon} + \frac{de^{-2t^2/d}}{2\epsilon^2}) \geq C(\frac{d}{2\epsilon} + \frac{d(1-q)}{4\epsilon^2})$ by the bounds on $t$ and $q$. Then, by Bernstein's inequality (Lemma~\ref{lemma:bernstein}), we have \begin{align*}
\pr{|\bg(x) - \chi(x)| > \epsilon} & = \pr{\abs{\br(x) - \EEl{\br(x)}} > \frac{3\epsilon k}4} \\ & \leq 2\exp\paren{-\frac{9\epsilon^2k^2/16}{2(kq(1-q) + \epsilon k/4)}} \leq \frac1{2^{d+1}} , \end{align*} so the claim follows. \end{proof}
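The construction in the proof can be simulated directly: using the identity $s_{w,t}(x) = \chi(x)\,\indicatorl{\abs{w^\T x} \le t}$ implicit in the displayed computation, an average of $k$ random sawtooths, normalized by the exact $q = \pr{\abs{\bw^\T x} \le t}$, should approximate $\chi$ uniformly. The sketch below is illustrative only (small $d$, modest $k$, and a loose tolerance, all chosen by us).

```python
import itertools
import math
import random

random.seed(3)
d, t, k = 8, 2, 4000

# Exact q = P(|w^T x| <= t): w^T x = d - 2m, where m = #{i : w_i x_i = -1}.
q = sum(math.comb(d, m) for m in range((d - t) // 2, (d + t) // 2 + 1)) / 2 ** d

def chi(x):
    p = 1
    for xi in x:
        p *= xi
    return p

ws = [[random.choice((-1, 1)) for _ in range(d)] for _ in range(k)]

max_err = 0.0
for x in random.sample(list(itertools.product((-1, 1), repeat=d)), 40):
    # g(x) = (1/(k q)) * sum_j s_{w_j, t}(x), with s_{w,t}(x) = chi(x) 1{|w^T x| <= t}
    g = sum(chi(x) * (abs(sum(wi * xi for wi, xi in zip(w, x))) <= t)
            for w in ws) / (k * q)
    max_err = max(max_err, abs(g - chi(x)))
assert max_err < 0.15
```

With $d=8$ and $t=2$ we have $q = 182/256 \approx 0.71$, and the sampled errors stay far below the tolerance, consistent with the concentration argument above.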
\subsection{Proofs for Section~\ref{ssec:parity-rnorm-approx-lb}}\label{assec:parity-rnorm-approx-lb}
\thmparityrnormlb* \begin{proof} Consider any measure $\mu$ over $\mathbb{S}^{d-1} \times [-2 \sqrt{d}, 2 \sqrt{d}]$, $v \in \mathbb{R}^d$, and $c \in \mathbb{R}$ such that $g(x) = g_\mu(x) + v^\T x + c$ for all $x \in \varOmega$, where $g_\mu(x) = \int_{\mathbb{S}^{d-1} \times \mathbb{R}} \varphi_{\operatorname{r}}(w^\T x + b) \mu(\dd w, \dd b)$. We prove the claim by showing that $\abs{\mu} \geq \frac{\alpha d}{8}$ for any such $\mu$.
By Fact~\ref{fact:ip-norm}, $\cubenormtwol{g - \chi} \leq 1 - \alpha$ implies that $\innerprod{g}{\chi} \geq \alpha.$ We show that this inner product bound is only possible if $\abs{\mu}$ is sufficiently large. By Lemma~\ref{lemma:relu-parity-correlation}, any fixed neuron $r_{w,b}(x) := \varphi_{\operatorname{r}}(w^\T x + b)$ has $\abs{\innerprod{ r_{w,b}}{\chi}} \leq \frac8d$. Because the inner product over $\{\pm 1\}^d$ is a discrete sum and $\chi$ is orthogonal to any affine function (such as $x \mapsto v^\T x + c$), the expectation term below vanishes and we can upper-bound the ability of $g$ to correlate with $\chi$ as follows: \begin{align*} \cubeinnerprod{g,\chi} &= \int_{\mathbb{S}^{d-1} \times \mathbb{R}} \cubeinnerprod{r_{w,b},\chi} \mu(\dd w, \dd b) + \EE{(v^\T \bx + c) \chi(\bx)}\\ &\leq \int_{\mathbb{S}^{d-1} \times \mathbb{R}} \abs{\cubeinnerprod{r_{w,b},\chi}} \abs{\mu}(\dd w, \dd b) \\ &\leq \frac8d \int_{\mathbb{S}^{d-1} \times \mathbb{R}} \abs{\mu}(\dd w, \dd b) = \frac{8 \abs{\mu}}d. \end{align*} Thus, $\cubeinnerprod{g,\chi} \geq \alpha$ only if $\abs{\mu} \geq \frac{\alpha d}{8}$. \end{proof}
\begin{fact}\label{fact:ip-norm}
For any measure $\nu_0$ over $\{\pm 1\}^d$, $g \in L^2(\nu_0)$, $h: \{\pm 1\}^d \to \{\pm 1\}$, and $\alpha \in (0, 1)$, if $\normnuzero{g - h} \leq 1 - \alpha$, then $\innerprodnuzero{g}{h} \geq \alpha$. \end{fact} \begin{proof} The claim is a consequence of the fact $\innerprodnuzero{h}{h} = 1$ and Cauchy-Schwarz: \begin{align*} \innerprodnuzero{g}{h} &= \innerprodnuzero{h}{h} + \innerprodnuzero{g - h}{h} \\ &= 1 + \innerprodnuzero{g - h}{h} \\ &\ge 1 - \normnuzero{g - h} \\ &\ge 1 - (1-\alpha) = \alpha.\qedhere \end{align*} \end{proof}
\begin{lemma}\label{lemma:relu-parity-correlation}
For $d \geq 8$, $w \in \mathbb{R}^d$ with $\norm[2]{w} \leq 1$, and $b \in [-2\sqrt{d}, 2\sqrt{d}]$, the neuron $r_{w, b}(x) := \varphi_{\operatorname{r}}(w^\T x+b)$ satisfies $\abs{\innerprod{r_{w, b}}{\chi}} \leq \frac8d$. \end{lemma} \begin{remark}
Lemma~\ref{lemma:relu-parity-correlation} is asymptotically tight.
For even $d$, consider the ``single-blade'' sawtooth function \[s_{\vec1, 0}(x) = \sqrt{d}(r_{\vec1/\sqrt{d}, 1/\sqrt{d}}(x) - 2r_{\vec1/\sqrt{d}, 0}(x) + r_{\vec1/\sqrt{d}, -1/\sqrt{d}}(x))\] that satisfies $s_{\vec1,0}(x) = \chi(x) \indicatorl{\vec1^\T x = 0}$.
Then, \[\cubeinnerprodl{s_{\vec1, 0}, \chi} = \frac1{2^d}{d \choose d/2} \geq \frac1{\sqrt{2d}},\] and thus there exists $b$ with \smash{$\absl{\cubeinnerprodl{r_{\vec1/\sqrt{d}, b}, \chi}} \geq \frac1{4 \sqrt{2} d}$}. \end{remark} \begin{proof} We directly bound the inner product by showing that we can bound a discrete second derivative. For any $x \in \{\pm 1\}^d$, let $x^{j} \in\{\pm 1\}^d$ denote $x$ with a flipped $j$th bit. That is, $x^j_i = (-1)^{\indicator{i=j}} x_i $. Observe that $\chi(x) = - \chi(x^j)$. \begin{align*} \abs{\innerprod{ r_{w,b}}{\chi}} &= \frac{1}{2^d}\abs{ \sum_{x} r_{w,b}(x) \chi(x)} \\ &= \frac{1}{4 \cdot 2^d}\abs{ \sum_{x} (r_{w,b}(x) \chi(x) + r_{w,b}(x^j)\chi(x^j)+ r_{w,b}(x^{j'})\chi(x^{j'}) + r_{w,b}(x^{j,j'})\chi(x^{j,j'})) } \\
&= \frac1{4 \cdot 2^d} \left|\sum_x \chi(x) (r_{w,b}(x) - r_{w,b}(x^{j}) - r_{w,b}(x^{j'}) + r_{w,b}(x^{j,j'})) \right| \\
&\leq \frac{1}{4 \cdot 2^d} \sum_x |r_{w,b}(x) - r_{w,b}(x^{j}) - r_{w,b}(x^{j'}) + r_{w,b}(x^{j,j'})|. \end{align*}
We say that $(x, x^j)$ is \textit{cut}, and write $(x, x^j) \in C_j$, if $x$ and $x^j$ lie on opposite sides of the ``hinge'' of the neuron $r_{w,b}$, that is, $\text{sign}(w^\T x + b) \neq \text{sign}(w^\T x^j + b)$. Let $S_{x, j, j'} = \setl{x, x^j, x^{j'}, x^{j, j'}}$ represent a ``square'' in $\{\pm 1\}^d$, and let $S_{x, j, j'} \in C_{j, j'}$ if any of its edges $(x, x^j), (x, x^{j'}), (x^j, x^{j,j'}), (x^{j'}, x^{j, j'})$ is cut. We bound the term inside the sum by considering two cases.
\begin{enumerate}
\item If $S_{x, j, j'} \not\in C_{j, j'}$,
then $|r_{w,b}(x) - r_{w,b}(x^{j}) - r_{w,b}(x^{j'}) + r_{w,b}(x^{j,j'})| = 0$.
\item Otherwise, the quantity is bounded by Lipschitzness:
\begin{align*}
|r_{w,b}(x) - r_{w,b}(x^{j}) - r_{w,b}(x^{j'}) + r_{w,b}(x^{j,j'})|
&\leq |r_{w,b}(x) - r_{w,b}(x^{j})| + |r_{w,b}(x^{j'}) - r_{w,b}(x^{j,j'})| \\
&\leq |w^\T x - w^\T x^{j}| +|w^\T x^{j'} - w^\T x^{j, j'}|
= 4|w_{j}|.
\end{align*} \end{enumerate}
Therefore, $\absl{\innerprodl{r_{w,b}}{\chi}} \leq \min_{j \neq j'} \frac1{2^d} \abs{C_{j, j'}} \absl{w_j}$. It remains to bound $\abs{C_{j, j'}}$ and $\absl{w_j}$ for some $j$ and $j'$. By employing a bound on the total number of cut edges \citep{oneil71}: \begin{align*}
\frac{1}d \sum_{j=1}^d \abs{C_j}
&\leq \frac1{2d} \cdot \ceil{\frac{d}2} {d \choose \floor{d/2}}
\leq \frac{ 2^d}{2\sqrt{d} }. \end{align*}
As a result, at most $\frac{d}2$ choices of $j$ satisfy $\absl{C_{j}} \geq \fracl{ 2^d}{\sqrt{d}}$. Because $\norm[2]{w} \leq 1$, at most $\frac{d}{4}$ coordinates $j$ have $|w_j| \geq \fracl{2}{\sqrt{d}}$.
Thus, there exist at \textit{least} $\frac{d}4$ coordinates $j$ satisfying both $\absl{C_{j}} \leq \fracl{2^d}{\sqrt{d}}$ and $|w_j | \leq \fracl2{\sqrt{d}}$. Assuming $d \geq 8$, let $j, j'$ be two of those coordinates. Since $\absl{C_{j, j'}} \leq 2\absl{C_j} + 2 \absl{C_{j'}}$, we conclude that $\absl{C_{j, j'}} \leq \fracl{4 \cdot 2^d}{\sqrt{d}}$, which gives the desired bound on the inner product. \end{proof}
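Lemma~\ref{lemma:relu-parity-correlation} can be sanity-checked by exact computation over the cube for a small $d \ge 8$; the sketch below (illustrative only, with our own choice of $d$ and trial count) evaluates $\cubeinnerprod{r_{w,b}, \chi}$ for random admissible $(w,b)$ and confirms the $\frac8d$ bound.

```python
import itertools
import math
import random

random.seed(4)
d = 12                                    # d >= 8, as the lemma requires
cube = list(itertools.product((-1, 1), repeat=d))

def chi(x):
    p = 1
    for xi in x:
        p *= xi
    return p

signs = [chi(x) for x in cube]
ips = []
for _ in range(10):
    w = [random.gauss(0, 1) for _ in range(d)]
    nrm = math.sqrt(sum(wi * wi for wi in w))
    w = [wi / nrm for wi in w]            # ||w||_2 = 1
    b = random.uniform(-2 * math.sqrt(d), 2 * math.sqrt(d))
    # exact inner product <r_{w,b}, chi> = 2^{-d} sum_x relu(w^T x + b) chi(x)
    ip = sum(s * max(sum(wi * xi for wi, xi in zip(w, x)) + b, 0.0)
             for s, x in zip(signs, cube)) / 2 ** d
    ips.append(ip)
assert max(abs(ip) for ip in ips) <= 8 / d
```

In practice the observed inner products are far smaller than $\frac8d$; the lemma's bound is a worst case over all admissible $(w,b)$.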
\subsection{$\mathcal{R}$-norm\xspace lower bound for sampled parity datasets} \label{asec:parity-approx-sampled}
\begin{restatable}{theorem}{thmparitysamplernormlb}\label{thm:parity-sample-rnorm-lb} Fix any $\delta \in (0,1)$ and $\alpha = \omega(1/d)$, and assume $n \geq O \parenl{d^3(\log d + \log\parenl{1/\delta})}$. With probability at least $1 - \delta$, $\inf \setl{ \rnorm{g} : \empnorm{g - \chi} \le 1 - \alpha } \geq \Omega(\alpha d)$. \end{restatable}
\begin{proof} Let $g$ be a function with finite $\mathcal{R}$-norm\xspace that satisfies the $L^2(\boldsymbol{\nu}_n)$ approximability condition; by Proposition~\ref{prop:integral-representation}, it admits an integral representation. That is, \[ g(x) = \int_{\mathbb{S}^{d-1} \times [-\sqrt{d}, \sqrt{d}]}{(\varphi_{\operatorname{r}}(w^\T x + b) -\varphi_{\operatorname{r}}(b)) \mu(\dd w,\dd b)} + c + v^\T x \quad \forall x \in \varOmega \] for some measure $\mu$, some $v \in \mathbb{R}^d$, and $c = g(0)$. Moreover, $g$ can be written compactly as $g(x) = \bar{g}_{\mu}(x) + v^\T x + c$, where $\bar{g}_{\mu}(x) = g_{\mu}(x) - g_{\mu}(0)$.
By Fact~\ref{fact:ip-norm}, $\empnorm{g - \chi} \leq 1 - \alpha$ only if $\empinnerprod{g, \chi} \geq \alpha$. We use this correlation to prove lower bounds on $\abs{\mu}$ (the total variation of measure $\mu$). At a high level, we upper-bound \[\empinnerprod{g, \chi} = \empinnerprod{\bar{g}_{\mu}, \chi} + \empinnerprod{v^\T x + c, \chi(x)}\] in terms of $\abs{\mu}$ by relating quantities in $L^2(\boldsymbol{\nu}_n)$ with their $L^2(\nu)$ counterparts. We show that each component of the sum is small for sufficiently large $n$ and $d$.
We first bound the correlation of the linear combination of neurons with parity, upper-bounding $\empinnerprod{\bar{g}_{\mu}, \chi}$. We write $\bar{r}_{w,b}(x) = \varphi_{\operatorname{r}}(w^\T x + b) -\varphi_{\operatorname{r}}(b)$ for the centered ReLU. By the triangle inequality, \begin{align*}
\empinnerprod{\bar{g}_{\mu}, \chi} &\le \int_{\mathbb{S}^{d-1} \times [-\sqrt{d}, \sqrt{d}]}{\abs{\empinnerprod{\bar{r}_{w,b},\chi}} \abs{\mu}(dw,db)} \\
&\le \abs{\mu} \sup_{\substack{w \in \mathbb{S}^{d-1}, b \in [-\sqrt{d}, \sqrt{d}]}}{\abs{ \empinnerprod{\bar{r}_{w,b} , \chi} }}. \end{align*}
Lemmas~\ref{lemma:relu-parity-correlation} and \ref{lemm:nonlinear-term-bound} together bound the correlation of any neuron $\bar{r}_{w,b}$ with $\chi$. That is, for any $w \in \mathbb{S}^{d-1}$ and $b \in [-\sqrt{d},\sqrt{d}]$, with probability at least $1-\delta/3$: \[ \abs{ \empinnerprod{\bar{r}_{w,b} , \chi} } \le \abs{ \cubeinnerprod{\bar{r}_{w,b} , \chi} } + C_1 \sqrt{\frac{d \parenl{\ln{n} + \ln\parenl{3/\delta}} }{n} } \le \frac{8}{d} + 2C_1 \sqrt{\frac{d \ln{n}}{n} } \le \frac{C_2}{d}, \] where $C_1$ is the constant from Lemma \ref{lemm:nonlinear-term-bound} and $n > C \parenl{d^3 (\log{d} + \log\parenl{1/\delta})}$ by assumption.
We now show that the linear components cannot be substantially correlated with the parity function, bounding $\empinnerprod{v^\T x + c, \chi}$. Because no linear term correlates with the full parity dataset, Lemma~\ref{lemm:linear-term-bound} upper-bounds the inner product between the linear perturbation and the sampled parity dataset, which implies the following bound with probability at least $1-\delta/3$: \begin{align*}
\label{first-term}
\abs{\empinnerprod{ v^\T x + c, \chi}} &\le 8\max\{\abs{\mu}, 1\} \sup_{\substack{\abs{c} \le 1 \\ \norm[2]{v} \le 1}}{ \abs{ \empinnerprod{v^{\T} x + c, \chi} }} \\
&= 8\max\{\abs{\mu}, 1\}\left( \abs{\frac{1}{n}\sum_{i=1}^{n}{\by_i}} + \norm[2]{ \frac{1}{n}\sum_{i=1}^{n}{\by_i \bx_i} } \right)
. \end{align*} By Lemma~\ref{lemma:concentration-sampled-parity} and our assumptions on $n$, we bound the two data-dependent terms with probability at least $1 - \frac\delta3$, for the absolute constant $C_2$ above (enlarged if necessary): \begin{align*}
\abs{\empinnerprod{ v^\T x + c, \chi}}
&\leq 8\max\{\abs{\mu}, 1\}\paren{\sqrt{\frac{2 \ln(12/\delta)}{n}} + 2 \sqrt{\frac{d}n}} \\
&\leq \frac{C_2}{d}\max\{\abs{\mu}, 1\}. \end{align*}
Combining both bounds, we have with probability at least $1-\delta$, \begin{align*}
\alpha \le \empinnerprod{\bar{g}_{\mu}(x) + v^{\T}x + c, \chi} \le \frac{C_2}{d}\left(\abs{\mu} + \max\{\abs{\mu},1\} \right) \le \frac{2C_2}{d}\max\{\abs{\mu}, 1\}. \end{align*} Therefore, we conclude \[
\abs{\mu} \geq \frac{\alpha d}{2C_2} - 1 .
\qedhere \] \end{proof}
\begin{lemma}\label{lemma:concentration-sampled-parity}
Fix any $\delta \in (0, 1)$.
Assume $n \geq O(\log(1/\delta))$ and
$n = \omega(d)$. Let $\setl{(\bx_i, \by_i)}_{i \in [n]}$ be the sampled parity dataset (where $\by_i = \chi(\bx_i)$ for all $i \in[n]$), and let $\bX \in \mathbb{R}^{n \times d}$ be the data matrix containing all samples.
All of the following hold with probability at least $1 - \delta$:
\begin{enumerate}[label=(\roman*)]
\item $\abs{\frac{1}{n}\sum_{i=1}^{n}{\by_i}} \le \sqrt{\frac{2\ln(4/\delta)}{n}}$;
\item $\norm[2]{\frac{1}{n}\sum_{i=1}^{n}{\bx_i}} \le 2 \sqrt{\frac{d}{n}}$;
\item $\norm[2]{\frac{1}{n}\sum_{i=1}^{n}{\by_i \bx_i}} \le 2 \sqrt{\frac{d}{n}}$; and
\item $\frac34\sqrt{n} \le \sigma_{d}(\bX) \le \sigma_{1}(\bX) \le 2\sqrt{n}.$
\end{enumerate} \end{lemma} \begin{proof}
Claim \textit{(i)} holds with probability at least $1 - \frac\delta2$ as a result of a standard application of Hoeffding's inequality (Lemma~\ref{lemma:hoeffding}) to a sum of Rademacher random variables.
Claim \textit{(iv)} also holds with probability at least $1 - \frac\delta2$, since Lemma~\ref{lemma:random-matrix-singular-values} and the assumptions on $n$ imply that \[\sigma_1(\bX) \leq \sqrt{n} + C\paren{\sqrt{d} + \sqrt{\ln\frac2\delta}} \leq 2\sqrt{n}\] and \[\sigma_d(\bX) \geq \sqrt{n} - C\paren{\sqrt{d} + \sqrt{\ln\frac2\delta}} \geq \frac34\sqrt{n}.\]
Claims \textit{(ii)} and \textit{(iii)} follow from the singular value bounds on $\bX$.
\begin{align*}
\norm[2]{\frac{1}{n}\sum_{i=1}^{n}{\bx_i}} &\le \frac1n \sqrt{\operatorname{tr}(\bX^\T \bX)} \le \frac1n \cdot 2\sqrt{nd} = 2 \sqrt{\frac{d}{n}}; \\
\norm[2]{\frac{1}{n}\sum_{i=1}^{n}{\by_i \bx_i}} &\leq \frac1n \sqrt{\by^\T \bX \bX^\T \by} \leq \frac1n \cdot \sigma_1(\bX) \sqrt{d} \leq 2 \sqrt{\frac{d}{n}}.\qedhere
\end{align*} \end{proof}
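Claim \textit{(i)} is a direct Hoeffding bound; as an illustrative numerical sketch (not part of the proof), the following Python snippet estimates how often the stated threshold is exceeded by the mean of Rademacher variables, confirming the empirical rate stays below $\delta/2$. The choices $n = 100$, $\delta = 0.1$, and the trial count are arbitrary:

```python
import math
import random

def violation_rate(n: int, delta: float, trials: int, seed: int = 0) -> float:
    """Empirical frequency of |mean of n Rademacher r.v.s| exceeding
    the Hoeffding threshold sqrt(2*ln(4/delta)/n)."""
    rng = random.Random(seed)
    t = math.sqrt(2 * math.log(4 / delta) / n)
    bad = 0
    for _ in range(trials):
        mean = sum(rng.choice((-1, 1)) for _ in range(n)) / n
        if abs(mean) > t:
            bad += 1
    return bad / trials

rate = violation_rate(n=100, delta=0.1, trials=2000)
assert rate <= 0.05  # Hoeffding guarantees failure probability at most delta/2
```

The empirical rate is typically well below $\delta/2$, reflecting the slack in Hoeffding's inequality.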
\begin{lemma}\label{lemm:linear-term-bound} Fix any $\delta \in (0, 1)$. Assume $n \geq O(\log(1/\delta))$ and $n = \omega(d)$. With probability at least $1 - \delta$ over the random measure $\boldsymbol{\nu}_n$, if $\mu \in \mathcal{M}$ satisfies $\empnorm{g_{\mu}(\bx) + c + v^\T \bx - \chi(\bx) } \le 1$, then $\max\{\abs{c + g_{\mu}(0)}, \norm[2]{v}\} \le 8\max\{\abs{\mu}, 1\}.$ \end{lemma} \begin{proof} We draw inspiration from the fact that the full parity dataset is orthogonal to any linear term and can never be well-approximated with large linear components. In other words, the square loss on approximating the full parity dataset with a linear function is minimized by the constant-zero function and strictly worsens as the linear terms increase. That is, orthogonality ensures that $\cubenormtwol{c + v^\T x - \chi(x)}^2 = 1 + \absl{c}^2 + \norml[2]{v}^2$. Thus, having an upper bound on the squared error imposes similar upper bounds on the norms of the linear terms. We make a similar argument for the \textit{sampled} parity dataset, where we replace $\nu$ with $\boldsymbol{\nu}_n$.
Without loss of generality, we incorporate $g_{\mu}(0)$ into $c$ and define $\bar{g}_\mu(x) = g_{\mu}(x) - g_{\mu}(0)$, which can also be represented as $\bar{g}_{\mu}(x) = \int{\bar{r}_{w,b}(x) \mu(dw,db)}$ where $\bar{r}_{w,b}(x) = \varphi_{\operatorname{r}}(w^\T x + b) - \varphi_{\operatorname{r}}(b)$. Let $\bX \in \mathbb{R}^{n \times d}$ be the collection of samples $\bx_i$ and let $\by_i = \chi(\bx_i)$. We bound the squared loss of the linear component $v^\T x + c$, ignoring the neural network $\bar{g}_\mu$: \begin{align*} \empnorm{c + v^\T x - \chi(x)}^2 &= 1 + c^{2} + v^{\T}\paren{\frac{1}{n}\sum_{i=1}^{n}{\bx_i \bx_i^{\T}}}v - \frac{2}{n}v^{\T}\paren{\sum_{i=1}^{n}{(\by_i-c) \bx_i }} - \frac{2c}{n}\sum_{i=1}^{n}\by_i \\ &\ge 1 + c^{2} + \frac1n \norm[2]{v}^{2} \sigma_{d}(\bX)^2 \\ &\quad - 2\norm[2]{v} \left( \norm[2]{\frac{1}{n}\sum_{i=1}^{n}{\by_i \bx_i}} + \abs{c}\norm[2]{\frac{1}{n}\sum_{i=1}^{n}{\bx_i}} \right)
- 2\abs{c}\abs{\frac{1}{n}\sum_{i=1}^{n}{\by_i}}. \end{align*}
With probability $1 - \delta$, all events of Lemma~\ref{lemma:concentration-sampled-parity} hold, and we use them to lower-bound the squared loss. \begin{align*} \empnorm{c + v^\T x - \chi(x)}^2 &\ge 1 + c^{2} +\frac{9}{16}\norm[2]{v}^{2} - 4\sqrt{\frac{d}{n}}(1 + \abs{c})\norm[2]{v} - \frac{2 \sqrt{2 \ln(8 / \delta)}}{\sqrt{n}}\abs{c} \\ &\ge \frac{1}{4} \max\{\abs{c}, \norm[2]{v}\}^2, \end{align*} where we have used the assumptions on $n$ and the AM/GM inequality. We now provide an upper bound on the square loss in terms of the measure $\mu$ using the triangle inequality: \[ \empnorm{c + v^\T x - \chi(x)} \le \empnorm{\bar{g}_{\mu}} + \empnorm{\bar{g}_{\mu}(x) + c + v^{\T}x - \chi(x)} \le \empnorm{\bar{g}_{\mu}} + 1. \] We now connect the $L^2(\boldsymbol{\nu}_n)$ norm of $\bar{g}_{\mu}$ to its variational norm. We bound the output of $\bar{g}_\mu$ on a single input $\bx_i$ by the Cauchy-Schwarz inequality: \begin{align*}
\bar{g}_{\mu}(\bx_i)^2
&\le \paren{\int{|\bar{r}_{w,b}(\bx_i)| \abs{\mu}(\dd w,\dd b)}}^2 \\ &\le \abs{\mu} \int{\bar{r}_{w,b}(\bx_i)^2 \abs{\mu}(\dd w,\dd b)}. \end{align*}
We sum over all $i$ to bound the norm of $\bar{g}_\mu$: \begin{align*} \empnorm{\bar{g}_{\mu}(x)}^2 &\le \abs{\mu} \int{\empnorm{\bar{r}_{w,b}(x)}^2 } \abs{\mu}(\dd w,\dd b)
\le \abs{\mu}^2 \sup_{\substack{w \in \mathbb{S}^{d-1}, |b| \le \sqrt{d}}}{\empnorm{\bar{r}_{w,b}}^2 } \\
&\le \abs{\mu}^2 \sup_{w \in \mathbb{S}^{d-1}}{\frac{1}{n}\sum_{i=1}^{n}{|w^{\T}\bx_i|^2 }} = \abs{\mu}^2 \frac{\sigma_1(\bX)^2}n \le 4 \abs{\mu}^2. \end{align*} The second inequality relies on the Lipschitzness of $\varphi_{\operatorname{r}}$. Combining all the above, \[ \frac{1}{2} \max\{\abs{c}, \norm[2]{v}\} \le \empnorm{c + v^\T x - \chi(x)} \le 1 + \empnorm{\bar{g}_{\mu}} < 2 + 2 \abs{\mu}.\qedhere \] \end{proof}
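The orthogonality identity $\cubenormtwol{c + v^\T x - \chi(x)}^2 = 1 + \absl{c}^2 + \norml[2]{v}^2$ invoked at the start of the proof can be checked by exhaustive enumeration in small dimension ($d \geq 2$); a minimal Python sketch (illustrative only), with arbitrarily chosen $v$ and $c$:

```python
import itertools
import math

def full_parity_sq_loss(v, c):
    """Average of (c + <v,x> - parity(x))^2 over all x in {+-1}^d."""
    d = len(v)
    total = 0.0
    for x in itertools.product((-1, 1), repeat=d):
        chi = math.prod(x)  # full parity over all d coordinates
        pred = c + sum(vi * xi for vi, xi in zip(v, x))
        total += (pred - chi) ** 2
    return total / 2 ** d

v, c = [0.3, -0.5, 0.1, 0.7], 0.25
closed_form = 1 + c ** 2 + sum(vi ** 2 for vi in v)
assert abs(full_parity_sq_loss(v, c) - closed_form) < 1e-12
```

The cross terms vanish because, for $d \geq 2$, each coordinate $x_i$ and each product $x_i \chi(x)$ has mean zero under the uniform distribution, which is exactly what the enumeration confirms.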
\begin{lemma}\label{lemm:nonlinear-term-bound} For $\bar{r}_{w,b}(x) = \varphi_{\operatorname{r}}(w^{\T} x + b)-\varphi_{\operatorname{r}}(b)$ and $n \geq d$, there exists an absolute constant $C$ such that for any $\delta \in (0,1)$ with probability at least $1-\delta$, \[ \abs{ \empinnerprod{\bar{r}_{w,b} , \chi} } \le \abs{ \cubeinnerprod{\bar{r}_{w,b} , \chi} } + C\sqrt{\frac{d \parenl{ \ln{n} + \ln\parenl{\fracl{1}{\delta}}}}{n} }, \] for all $w \in \mathbb{S}^{d-1}, b \in [-\sqrt{d},\sqrt{d}]$. \end{lemma} \begin{proof} Observe that the inner product over the sampled parity dataset is an unbiased estimate of the inner product over the full parity dataset, $$ \EE{\empinnerprod{\bar{r}_{w,b}, \chi}} = \cubeinnerprod{ \bar{r}_{w,b}, \chi}. $$ Let $\bZ_{w,b}$ denote the deviation from the mean, i.e. $$\bZ_{w,b} = \empinnerprod{\bar{r}_{w,b}, \chi} - \cubeinnerprod{ \bar{r}_{w,b}, \chi}.$$ We use standard concentration of measure techniques for the following steps: \begin{enumerate}
\item \label{itm:Lipschitz-process} $\bZ_{w,b}$ is Lipschitz in terms of its parameterization $(w,b)$ in the sense that $\abs{\bZ_{w_1,b_1} - \bZ_{w_2,b_2}} \le 4 \sqrt{d} \gamma((w_1,b_1), (w_2,b_2))$, where $\gamma$ is a distance defined later on.
\item \label{itm:subgaussian} $\bZ_{w,b}$ is $O(\frac{1}{\sqrt{n}})$-subgaussian for fixed $w,b$.
\item \label{itm:bound-on-expectation} $\EE{\sup_{w \in \mathbb{S}^{d-1}, b\in [-\sqrt{d},\sqrt{d}]}{\abs{\bZ_{w,b}}}} = O(\sqrt{\frac{d}{n}})$ using a covering argument.
\item \label{itm:bounded-difference} The maximum of $\abs{\bZ_{w,b}}$ is close to its expectation due to the bounded difference inequality. \end{enumerate}
\textbf{(Step~\ref{itm:Lipschitz-process})} Using the fact that $\varphi_{\operatorname{r}}$ is 1-Lipschitz and triangle inequality, \begin{align*}\abs{\bZ_{w_1,b_1} - \bZ_{w_2,b_2}} &\leq \abs{\empinnerprod{\bar{r}_{w_1,b_1}, \chi} - \empinnerprod{\bar{r}_{w_2,b_2}, \chi}} + \abs{\cubeinnerprod{ \bar{r}_{w_1,b_1}, \chi} - \cubeinnerprod{ \bar{r}_{w_2,b_2}, \chi}} \\ &\le 2 \cubenorminfty{ \bar{r}_{w_1,b_1} - \bar{r}_{w_2,b_2}} \\ &\le 2 ( \cubenorminfty{ \bar{r}_{w_1,b_1} - \bar{r}_{w_2,b_1} } + \cubenorminfty{\bar{r}_{w_2,b_1} - \bar{r}_{w_2,b_2}}) \\ &\le 2\paren{ \max_{x \in \{\pm 1\}^d}(w_1-w_2)^{\T}x + 2 \abs{b_1 - b_2} } \\
&\le 4\sqrt{d} \left( \norm[2]{w_1 - w_2} + \frac{|b_1 - b_2|}{\sqrt{d}} \right) \eqqcolon 4\sqrt{d} \gamma((w_1,b_1), (w_2,b_2)). \end{align*} Thus $\bZ_{w,b}$ is $4 \sqrt{d}$-Lipschitz with respect to $\gamma$.
\textbf{(Step~\ref{itm:subgaussian})} We bound the subgaussianity of $\bZ_{w,b}$. \begin{align*}
\norm[\psi_2]{\bZ_{w,b}}
&\leq C_1 \norm[\psi_2]{ \empinnerprod{ \bar{r}_{w,b}, \chi} }
= C_1 \norm[\psi_2]{\frac{1}{n}\sum_{i=1}^n \by_{i}(\varphi_{\operatorname{r}}(w^{\T}\bx_i + b)- \varphi_{\operatorname{r}}(b)) } \\
&\le \frac{C_2}{\sqrt{n}} \norm[\psi_2]{ \by_{1}(\varphi_{\operatorname{r}}(w^{\T}\bx_1 + b)- \varphi_{\operatorname{r}}(b)) } \\
&\le \frac{C_2}{\sqrt{n}} \norm[\psi_2]{ \varphi_{\operatorname{r}}(w^{\T}\bx_1 + b) - \varphi_{\operatorname{r}}(b)} \\
&\leq \frac{C_2}{\sqrt{n}} \norm[\psi_2]{w^\T \bx_1}
\leq \frac{2C_2}{\sqrt{n}} \end{align*} The first, second, and fourth inequalities rely on the centering, averaging, and Lipschitzness properties of subgaussian random variables in Lemma~\ref{lemma:subg}. The third inequality follows from $\abs{\by_1} = 1$, and the final is due to the 2-subgaussianity of a vector with i.i.d.~Rademacher components.
\textbf{(Step~\ref{itm:bound-on-expectation})} Let $\mathcal{N}_\epsilon$ be an $\epsilon$-covering of $\mathbb{S}^{d-1} \times [-\sqrt{d}, \sqrt{d}]$ with respect to $\gamma$. We bound its size using the standard $\epsilon$-net result in Lemma~\ref{lemma:covering} for $\epsilon \leq 2$. \begin{align*}
\mathcal{N}\left( \epsilon, \mathbb{S}^{d-1} \times [-\sqrt{d},\sqrt{d}], \gamma \right) &\le \mathcal{N}\left( \frac{\epsilon}{2}, \mathbb{S}^{d-1} , \|\cdot\|_{2} \right) \times \mathcal{N}\left( \frac{\epsilon}{2},[-1,1], |\cdot| \right) \\ &\le \paren{\frac{6}{\epsilon}}^{d} \cdot \frac{4}{\epsilon} \le \paren{\frac{6}{\epsilon}}^{d+1}. \end{align*}
We bound the expected maximum deviation over all $w$ and $b$ by employing a bound on the expected maximum of subgaussian random variables (Lemma~\ref{lemma:subg}), applying the covering numbers argument, letting $\pi(w, b) = \argmin_{(w', b') \in \mathcal{N}_\epsilon} \gamma((w, b), (w', b'))$, and setting $\epsilon \coloneqq 1 / \sqrt{n}$. \begin{align*} \EE{ \sup_{\substack{w \in \mathbb{S}^{d-1}, b\in [-\sqrt{d},\sqrt{d}] }}{ \abs{\bZ_{w,b}} } } &\le \EE{ \sup_{w,b}{ \abs{ \bZ_{w,b} - \bZ_{\pi(w,b)} } } } + \EE{ \sup_{(w,b) \in \mathcal{N}_{\epsilon}}{ \abs{\bZ_{w,b}} } } \\ &\le 4\sqrt{d}\epsilon + \frac{2C_2}{\sqrt{n}} \sqrt{\ln \mathcal{N}\left( \epsilon, \mathbb{S}^{d-1} \times [-\sqrt{d},\sqrt{d}], \gamma \right)} \\ &\le 4\sqrt{d}\epsilon + 2C_2\sqrt{\frac{d+1}{n} \ln \frac{6}{\epsilon}} \leq C_3 \sqrt{\frac{d \ln n}n}. \end{align*}
\textbf{(Step~\ref{itm:bounded-difference})} We conclude by showing that $\sup_{w, b} \abs{\bZ_{w,b}}$ is close to its expectation with high probability due to McDiarmid's inequality (Lemma~\ref{lemma:mcdiarmid}). Consider a perturbation where $\bx_i$ is replaced by some $\bx_i' \in \{\pm 1\}^d$ with $\by_i' = \chi(\bx_i')$, and let $\bZ_{w,b}^i$ denote the resulting deviation term. \begin{align*}
\abs{\sup_{w, b} \abs{\bZ_{w,b}} - \sup_{w, b} \abs{\bZ_{w, b}^i}}
&\leq \sup_{w, b} \abs{\bZ_{w,b} - \bZ_{w,b}^i}
= \frac1n \sup_{w, b} \abs{\by_i \bar{r}_{w, b}(\bx_i) - \by_i' \bar{r}_{w, b}(\bx_i')} \\
&\leq\frac1n \sup_{w, b} \bracket{\abs{\bar{r}_{w, b}(\bx_i) - \bar{r}_{w, b}(\bx_i')} + \abs{(\by_i - \by_i')\bar{r}_{w, b}(\bx_i)}} \\
&\leq \frac1n\bracket{\norm[2]{\bx_i - \bx_i'} + 2 \norm[2]{\bx_i}}
\leq \frac{4\sqrt{d}}n \end{align*}
Hence, with probability at least $1 - \delta$: \[\sup_{w, b} \abs{\bZ_{w, b}} \leq \sqrt{\frac{8d\ln 1/\delta}{n}} + \EE{\sup_{w, b} \abs{\bZ_{w, b}}} \leq C_4 \sqrt{\frac{d(\ln n + \ln 1/\delta)}{n}}.\] The bound in the lemma statement immediately follows. \end{proof}
\section{Proofs for Section~\ref{sec:parity-gen}}\label{asec:parity-gen}
\subsection{Proofs for Section~\ref{ssec:parity-gen-lb}}\label{assec:parity-gen-lb}
\begin{lemma}
\label{lemma:emprademacher-conditionally-subgaussian}
Fix $S \subseteq [d]$ with $|S| \geq 3$, and let $\bx \sim \operatorname{Unif}(\{\pm 1\}^d)$.
Conditional on the value of $\chi_S(\bx)$, the random vector $\bx$ is mean-zero, isotropic, and satisfies
\begin{equation*}
\E{ \exp(u^\T \bx) \mid \chi_S(\bx) } \leq \exp(\norml[2]{u}^2)
\end{equation*}
for all $u = (u_1,\dotsc,u_d) \in \mathbb{R}^d$. \end{lemma} \begin{proof}
The assumption $|S| \geq 3$ implies that, conditioned on $\chi_S(\bx)$, the $\{ \bx_i \}_{i \in [d]}$ are mean-zero and pairwise uncorrelated. So it remains to show that, for any vector $u = (u_1,\dotsc,u_d) \in \mathbb{R}^d$,
\begin{equation*}
\E{ \exp(u^\T \bx) \mid \chi_S(\bx) } \leq \exp(\norml[2]{u}^2) .
\end{equation*}
So fix $u$, and fix any $i \in S$. Let $u_{-i}$ (respectively, $\bx_{-i}$) be the vector obtained from $u$ (respectively, $\bx$) by removing the $i$-th entry.
Observe that $\bx_{-i} \mid \chi_S(\bx) \sim \operatorname{Unif}(\{\pm 1\}^{d-1})$, and also that $\bx_i \mid \chi_S(\bx) \sim \operatorname{Unif}(\{\pm 1\})$.
(But, of course, $\bx_{-i}$ and $\bx_i$ are not conditionally independent given $\chi_S(\bx)$.)
Therefore, using Cauchy-Schwarz,
\begin{align*}
\E{ \exp(u^\T \bx) \mid \chi_S(\bx) }
& = \E{ \exp\parenl{ u_{-i}^\T \bx_{-i} } \exp\parenl{ u_i \bx_i } \mid \chi_S(\bx) } \\
& \leq
\sqrt{ \E{ \exp\parenl{2 u_{-i}^\T \bx_{-i} } \mid \chi_S(\bx) } }
\sqrt{ \E{ \exp\parenl{2 u_i \bx_i } \mid \chi_S(\bx) } } \\
& \leq
\sqrt{ \exp\parenl{\norml[2]{2 u_{-i}}^2/2} }
\sqrt{ \exp\parenl{(2u_i)^2 / 2 } }
\\
& = \exp\parenl{ \norml[2]{u}^2 } .
\end{align*}
Above, the second inequality uses the moment generating function bound from Lemma~\ref{lemma:mgf-flip}, as well as the conditional independence of $\{ \bx_j : j \neq i \}$ given $\chi_S(\bx)$. \end{proof}
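The conditional moment generating function bound can be verified exhaustively in small dimension; below is a minimal Python sketch (illustrative only, not part of the proof), with $d = 4$, $S = \{1,2,3\}$ (zero-indexed as `(0, 1, 2)`), and an arbitrarily chosen $u$:

```python
import itertools
import math

def conditional_mgf(u, S, s):
    """E[exp(<u,x>) | chi_S(x) = s] for x uniform on {+-1}^d."""
    d = len(u)
    vals = []
    for x in itertools.product((-1, 1), repeat=d):
        if math.prod(x[i] for i in S) == s:
            vals.append(math.exp(sum(ui * xi for ui, xi in zip(u, x))))
    return sum(vals) / len(vals)

S = (0, 1, 2)          # |S| = 3, as the lemma requires
u = [0.9, -0.4, 0.2, 0.6]
for s in (-1, 1):
    assert conditional_mgf(u, S, s) <= math.exp(sum(ui ** 2 for ui in u))
```

Exhaustive enumeration over $\{\pm 1\}^4$ conditioned on either parity value confirms the bound $\exp(\norml[2]{u}^2)$ for this choice of $u$.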
\lemmasmallsampleparityapprox*
\begin{proof}
Throughout, we take $C>0$ to be a suitably large constant, and we assume $n \leq d^2/C$.
The construction of $\bg \colon \varOmega \to \mathbb{R}$ is based on typical statistical behavior of the random examples $(\bx_1,\by_1), \dotsc, (\bx_n,\by_n)$, where $\by_i := \chi_S(\bx_i)$ for each $i \in [n]$.
We may assume that $n \geq d$, since otherwise the examples can be perfectly fit with a linear function $\bg$, and this function has $\rnorm{\bg} = 0$.
So, combining the assumption $n\geq d$ with the assumption
$n \leq d^2/C$ implies that $d \geq C$.
Observe that $\by_1, \dotsc, \by_n$ are i.i.d.~$\operatorname{Unif}(\{\pm 1\})$ random variables.
Since $n \geq d \geq C$, it follows by standard binomial tail bounds that with probability at least $5/6$ over the realizations of $\by_1, \dotsc, \by_n$, the number of $\by_i$ that are equal to $1$ is at least $n/3$, and the number of $\by_i$ that are equal to $-1$ is also at least $n/3$.
We henceforth condition on this ``good event'' (which depends only on $\by_1, \dotsc, \by_n$).
To help define our construction of $\bg \colon \varOmega \to \mathbb{R}$ and set up the rest of the analysis, we partition $[n]$ into disjoint groups
$G_1, G_2, \dotsc, G_m$ so that for each $j \in [m]$, (i) the size $n_j := |G_j|$ of the $j$-th group is between $c_1 d / (2\ln d)$ and $c_1 d / \ln d$, and (ii) all $\by_i$ for $i \in G_j$ are the same (i.e., all $+1$ or all $-1$).
Here, with foresight, we set $c_1 := 1/256$; by using $d \geq C$, we ensure that each group is non-empty, and also that $n_j < d$.
The feasibility of this partitioning is ensured because, in the ``good event'' (and using $d \geq C$), the number of $i \in [n]$ with $\by_i = 1$ is at least $n/3 \geq d/3 \geq c_1 d/\ln d$, and similarly for the number of $i \in [n]$ with $\by_i = -1$.
Let $z^{(j)}$ denote the common $\by_i$ value for all $i \in G_j$.
Finally, note that the number of groups $m$ satisfies $m \leq 2n \ln(d) / (c_1 d)$.
We now define our construction of $\bg \colon \varOmega \to \mathbb{R}$.
Let $\bA_j$ denote the random $n_j \times d$ matrix whose rows are the $\bx_i^\T$ for $i \in G_j$, and define the random vector $\bw^{(j)} := \bA_j^{\dag} (z^{(j)} \vec1)$.
Observe that $\bw^{(j)}$ is a least squares solution to the system of linear equations $\{ \bx_i^\T w = \by_i : i \in G_j \}$, since $\by_i = z^{(j)}$ for all $i \in G_j$.
We define $\bg$ as follows:
\begin{equation*}
\bg(x) = \sum_{j=1}^m z^{(j)} \varphi_{\operatorname{r}}(2 z^{(j)} \bw^{(j)\T} x - 1) .
\end{equation*}
To analyze our construction, we consider the realizations of $\bx_1, \dotsc, \bx_n$, and establish some basic properties that hold with sufficiently high probability (conditional on the ``good event'').
Note that within a group $G_j$, the $\{ \bx_i \}_{i \in G_j}$ are (conditionally) iid, and the realizations across groups are also (conditionally) independent.
We claim that with probability at least $5/6$ (conditional on the ``good event''),
\begin{itemize}
\item (P1) $\bw^{(j)\T} \bx_i = \by_i$ for all $j \in [m]$ and $i \in G_j$;
\item (P2) $\norml[2]{\bw^{(j)}} \leq 2\sqrt{n_j/d}$ for all $j \in [m]$.
\end{itemize}
To establish this claim, we lower-bound the $n_j$-th largest singular value $\sigma_{n_j}(\bA_j)$. Note that $\sigma_{n_j}(\bA_j)$ is at least the corresponding singular value of the $n_j \times (d-1)$ submatrix $\bB_j$ obtained from $\bA_j$ by removing the $t$-th column of $\bA_j$ for some $t \in S$.
(If $S$ is empty, we may remove any column.)
Since the rows of $\bA_j$ are independent, and since the entries of $\bx_i$ after removing the $t$-th one are iid $\operatorname{Unif}(\{\pm 1\})$ random variables, it follows that the $n_j \times (d-1)$ entries of $\bB_j$ are iid $\operatorname{Unif}(\{\pm 1\})$ random variables.
Hence, the rows of $\bB_j^\T$ are independent, mean-zero, isotropic, and $O(1)$-subgaussian.
By Lemma~\ref{lemma:random-matrix-singular-values} and a union bound, with probability at least $1 - 2m\exp(-\min_{j \in [m]} \{ n_j \})$,
\begin{equation*}
\sigma_{n_j}(\bA_j)
\geq \sigma_{n_j}(\bB_j^\T)
\geq \sqrt{d-1} - C_2 \sqrt{n_j}
\geq \paren{ \sqrt{1 - \frac{1}{d}} - C_2 \sqrt{\frac{c_1}{\ln d}} } \sqrt{d}
\quad \text{for all $j \in [m]$}
,
\end{equation*}
where $C_2>0$ is twice the absolute constant from Lemma~\ref{lemma:random-matrix-singular-values}, and the final inequality uses the upper-bound on $n_j$.
The fact $d \geq C$ and the upper-bounds on $m$ and $n$ altogether imply that the probability of the above event is at least $5/6$, and also that $\sqrt{1 - 1/d} - C_2\sqrt{c_1/\ln d} \geq 1/2$.
So in this event, for each $j \in [m]$, the matrix $\bA_j$ has full row rank $n_j$, so the system of linear equations defining $\bw^{(j)}$ is feasible, and
\begin{equation*}
\norml[2]{\bw^{(j)}}
= \norml[2]{\bA_j^\dag (z^{(j)} \vec1)}
\leq \sigma_1(\bA_j^\dag) \norml[2]{\vec1}
= \frac{\sqrt{n_j}}{\sigma_{n_j}(\bA_j)}
\leq 2\sqrt{\frac{n_j}{d}} .
\end{equation*}
This establishes P1 and P2 in the event as claimed.
We further claim that with probability at least $5/6$ (conditional on the ``good event''),
\begin{itemize}
\item (P3) $|\bw^{(j)\T} \bx_i| \leq 4\norml[2]{\bw^{(j)}} \sqrt{\ln d}$ for all $j \in [m]$ and $i \in [n] \setminus G_j$.
\end{itemize}
To establish this claim, first observe that $\bx_i$ and $\bw^{(j)}$ are independent for $i \notin G_j$. Moreover, by Lemma~\ref{lemma:emprademacher-conditionally-subgaussian}, conditional on $\bw^{(j)}$ (with $G_j \not\ni i$), $\bx_i^\T \bw^{(j)}$ is a mean-zero random variable satisfying
\begin{equation*}
\E{ \exp(\bw^{(j)\T} \bx) \mid \chi_S(\bx), \bw^{(j)} } \leq \exp(\norml[2]{\bw^{(j)}}^2) .
\end{equation*}
So, by Markov's inequality and a union bound, we have with probability at least $5/6$,
\begin{equation*}
|\bw^{(j)\T} \bx_i| \leq (\sqrt{2} \norml[2]{\bw^{(j)}}) \sqrt{2 \ln (12mn)}
\quad \text{for all $j \in [m]$ and $i \in [n] \setminus G_j$} .
\end{equation*}
Using $d \geq C$ and the upper-bounds on $m$ and $n$, we obtain $\sqrt{\ln(12mn)} \leq 2\sqrt{\ln d}$, and hence we deduce P3 from the above inequality.
So, by a union bound, with probability at least $2/3$ (conditional on the ``good event''), the properties P1, P2, and P3 all hold simultaneously.
We can now establish the desired properties of $\bg$.
Using $d \geq C$, P2, and the upper-bounds on $m$ and $n$, we obtain
\begin{equation*}
\rnorm{\bg}
\leq 2\sum_{j=1}^m \norml[2]{\bw^{(j)}}
\leq 4\sum_{j=1}^m \sqrt{\frac{n_j}{d}}
\leq 4\sqrt{m \sum_{j=1}^m \frac{n_j}{d}}
= 4\sqrt{\frac{mn}{d}}
\leq 4\sqrt{\frac{2}{c_1}} \cdot \frac{n\sqrt{\ln d}}{d} = O\paren{\frac{n\sqrt{\ln d}}{d}} .
\end{equation*}
Furthermore, by P1, we have for any $j \in [m]$ and $i \in G_j$,
\begin{equation*}
2z^{(j)} \bw^{(j)\T} \bx_i - 1 = 2 z^{(j)} \by_i - 1 = 1 .
\end{equation*}
And by P2, P3, and the upper-bound on $n_j$, we have for any $j \in [m]$ and $i \in [n] \setminus G_j$,
\begin{equation*}
2z^{(j)} \bw^{(j)\T} \bx_i - 1
\leq 2 |\bw^{(j)\T} \bx_i| - 1
\leq 16 \sqrt{\frac{n_j \ln d}{d}} - 1
\leq 16 \sqrt{c_1} - 1
= 0 ,
\end{equation*}
and hence $\varphi_{\operatorname{r}}(2z^{(j)} \bw^{(j)\T} \bx_i - 1) = 0$.
Therefore, for any $i \in [n]$, if $i \in G_j$,
\begin{equation*}
\bg(\bx_i) = z^{(j)} \varphi_{\operatorname{r}}(2z^{(j)} \bw^{(j)\T} \bx_i - 1) = z^{(j)} = \by_i .
\qedhere
\end{equation*} \end{proof}
\subsection{Proofs for Section~\ref{ssec:parity-gen-ub}}\label{assec:parity-gen-ub}
\thmparitygenub*
\begin{proof} Let $\bG$ denote all solutions to \eqref{interpolating-r-norm} on the sampled parity dataset, so $\empnorm{\bg - \chi} = 0$ for all $\bg \in \bG$. By Proposition~\ref{prop:integral-representation}, we can write each $\bg$ as $\bg(x) = g_{\boldsymbol{\mu}}(x) + \bv^\T x + \mathbf{c}$, where $\boldsymbol{\mu} \in \mathcal{M}$, $\bv \in \mathbb{R}^d$, and $\mathbf{c} \in \mathbb{R}$. Furthermore, we can assume that $g_{\boldsymbol{\mu}}(0) = 0$ by absorbing the value of $g_{\boldsymbol{\mu}}(0)$ into $\mathbf{c}$ (at the cost of losing the evenness of $\boldsymbol{\mu}$, but evenness is not needed in the sequel). Lemma~\ref{lemma:rnorm-relu-l1}, Theorem~\ref{thm:sparse-vp-solution}, and Theorem~\ref{thm:parity-rnorm-ub} together imply that every $\bg \in \bG$ satisfies $\rnorm{\bg} \leq Cd$ for some absolute constant $C>0$. Let $\mathcal{E}$ be the event that $\max\{\absl{\mathbf{c}}, \norml[2]{\bv}\} \leq 8Cd$ (for all $\bg \in \bG$), and let $\mathcal{E}^{\operatorname{c}}$ be its complement; event $\mathcal{E}$ occurs with probability at least $1-\delta/2$ by Lemma~\ref{lemm:linear-term-bound}.
Since, for each $\bg \in \bG$, we have $\bg(\bx_i) = \chi(\bx_i)$ for every example $(\bx_i,\chi(\bx_i))$ in the sampled parity dataset, it follows that $\empnorm{\operatorname{clip} \circ\, \bg - \chi} = \empnorm{\bg - \chi} = 0$ for all such $\bg \in \bG$. For any $t>0$, \begin{align*}
\lefteqn{
\pr{ \sup_{\bg \in \bG} \cubenormtwo{\operatorname{clip} \circ\, \bg - \chi}^2 \geq t }
} \\
& \leq
\pr{ \sup_{\bg \in \bG} \cubenormtwo{\operatorname{clip} \circ\, \bg - \chi}^2 \geq t \mid \mathcal{E} }
+ \pr{\mathcal{E}^{\operatorname{c}}}
\\
& =
\pr{
\sup_{\bg \in \bG}
\cubenormtwo{\operatorname{clip} \circ\, \bg - \chi}^2 - \empnorm{\operatorname{clip} \circ\, \bg - \chi}^2 \geq t \mid \mathcal{E}
}
+ \pr{\mathcal{E}^{\operatorname{c}}}
\\
& \leq
\pr{
\sup_{g \in \mathcal{G}_0}
\cubenormtwo{\operatorname{clip} \circ\, g - \chi}^2 - \empnorm{\operatorname{clip} \circ\, g - \chi}^2
\geq t
}
+ \delta/2
, \end{align*} where \[
\mathcal{G}_0 :=
\left\{
x \mapsto g(x) + v^\T x + c : \rnorm{g} \leq Cd , \ \max\{\norml[2]{v}, \absl{c} \} \leq 8Cd
\right\}
. \] Define \[
t_0 := 4 \underbrace{\EE{ \sup_{g \in \mathcal{G}_0} \frac1n \sum_{i=1}^n \beps_i g(\bx_i) }}_{\operatorname{Rad}_n(\mathcal{G}_0)}
\, + \, 4\sqrt{\frac{\log(2/\delta)}{n}} . \] Above, $\operatorname{Rad}_n(\mathcal{G}_0)$ denotes the Rademacher complexity of $\mathcal{G}_0$, where $\beps_1,\dotsc,\beps_n \simiid \operatorname{Unif}(\{\pm 1\})$, independent of $\bx_1,\dotsc,\bx_n$. Since, for any $y \in \{\pm 1\}$, the mapping $z \mapsto (y - \operatorname{clip}(z))^2 = (1-y \operatorname{clip}(z))^2$ is $4$-Lipschitz and has range $[0,4]$, it follows by standard Rademacher complexity arguments~\citep[see, e.g.,][Theorem 8]{meir2003generalization} that \begin{equation*}
\pr{
\sup_{g \in \mathcal{G}_0}
\cubenormtwo{\operatorname{clip} \circ\, g - \chi}^2 - \empnorm{\operatorname{clip} \circ\, g - \chi}^2
\geq
t_0
} \leq \delta / 2
. \end{equation*}
So it remains to show that $t_0 \leq \epsilon$ under the assumption $n \geq C_0 ((d^3 + \log(1/\delta))/\epsilon^2)$ for suitably large absolute constant $C_0>0$. The second term in the definition of $t_0$ is at most $\epsilon/2$ provided that $C_0$ is chosen large enough. To bound the first term ($\operatorname{Rad}_n(\mathcal{G}_0)$), we use the fact that \[
\operatorname{Rad}_n(\mathcal{G}_0)
= \operatorname{Rad}_n(\mathcal{G}_1)
+ \operatorname{Rad}_n(\mathcal{G}_2) \] where $\mathcal{G}_1 := \{ g : \rnorm{g} \leq Cd \}$ and $\mathcal{G}_2 := \{ x \mapsto v^\T x + c : \max\{\norml[2]{v} , \absl{c} \} \leq 8Cd \}$. Theorem~10 of \citet{pn21a} implies \[
\operatorname{Rad}_n(\mathcal{G}_1)
\leq \frac{2 \cdot (Cd) \cdot \sqrt{d}}{\sqrt{n}}
= O\left( \sqrt{\frac{d^3}{n}} \right) , \] while Theorem~3 of \citet{kakade2008complexity} implies \[
\operatorname{Rad}_n(\mathcal{G}_2)
\leq \sqrt{d+1} \cdot \sqrt{(8Cd)^2} \cdot \sqrt{\frac{2}{n}}
= O\left( \sqrt{\frac{d^3}{n}} \right) . \] By choosing $C_0$ large enough, it follows that $\operatorname{Rad}_n(\mathcal{G}_0) \leq \epsilon/8$. Hence, we have shown that $t_0 \leq \epsilon$ as required. \end{proof}
\section{Proofs for Section~\ref{sec:cosine-approx}}\label{asec:cosine-approx}
\subsection{Proof of Theorem~\ref{thm:periodic-ub}} \label{assec:periodic-rnorm-ub}
\begin{restatable}[Detailed version of Theorem~\ref{thm:periodic-ub}]{theorem}{thmperiodicubfull}\label{thm:periodic-ub-detailed}
Suppose $f \colon \varOmega \to [-1,1]$ is given by $f(x) = \phi(v^\T x)$ for some unit vector $v \in \mathbb{S}^{d-1}$ and some $\phi \colon [-\sqrt{d}, \sqrt{d}] \to [-1,1]$ that is $L$-Lipschitz and $\rho$-periodic for $\rho \in [\norm[\infty]{v},1]$.
Let $\sigma_{\rho,v} := \sqrt{2\rho\norml[1]{v} - 1}$, and fix any $\epsilon \in (0,1)$.
There exists a function $g \colon \varOmega \to \mathbb{R}$ such that the following properties hold:
\begin{enumerate}
\item $\absl{f(x) - g(x)} \leq \epsilon$ for all $x \in \{\pm 1\}^d$;
\item $g$ is represented by a neural network of width at most
\begin{equation*}
O\paren{\frac{dL(\sigma_{\rho,v} \sqrt{\log(1/\epsilon)} + \rho \log(1/\epsilon))}{\epsilon^2}} ;
\end{equation*}
\item $g$ satisfies
\begin{equation*}
\rnorm{g} =
O\paren{ \frac{L^2(\sigma_{\rho,v} \sqrt{\log(1/\epsilon)} + \rho \log(1/\epsilon))(\sigma_{\rho,v} + \sqrt{\log(d/\epsilon)})}{\epsilon} } .
\end{equation*}
\end{enumerate} \end{restatable}
\begin{proof}
We first describe the (randomized) construction of our approximating neural network $\bg \colon \mathbb{R}^d \to \mathbb{R}$.
For $w \in \mathbb{Z}^d$, define $h_w \colon \mathbb{R}^d \to \mathbb{R}$ by $h_w(x):=\phi(v^\T x + \rho w^\T x)$.
Let $\bw \in \mathbb{Z}^d \setminus \{-(1/\rho)v\}$ be a random vector with distribution to be specified later in the proof.
Let $\bw^{(1)}, \dotsc, \bw^{(k)}$ be i.i.d.~copies of $\bw$ for a positive integer $k > (9(d+1)\ln(2))/\epsilon$, and let $\bh^{(j)} := h_{\bw^{(j)}}$ for each $j$.
Observe that each $\bh^{(j)}$ can be written as $\bh^{(j)}(x) = \phi^{(j)}(x^\T \bu^{(j)})$ for
\begin{equation*}
\bu^{(j)} := \frac1{\norml[2]{v+\rho\bw^{(j)}}} (v + \rho\bw^{(j)}) \in \mathbb{S}^{d-1}
\quad \text{and} \quad
\phi^{(j)}(z) := \phi(\norml[2]{v+\rho\bw^{(j)}} z) ,
\end{equation*}
where $\phi^{(j)} \colon \mathbb{R} \to [-1,1]$ is $L_j$-Lipschitz for $L_j := L\norml[2]{v+\rho\bw^{(j)}}$ (using the $L$-Lipschitzness of $\phi$).
Let $\tau>0$ be a value (depending on $\rho$, $v$, and $\epsilon$) also to be specified later.
By Lemma~\ref{lemma:lipschitz-approx} (with $t := \tau/\norml[2]{v + \rho \bw^{(j)}}$ and $\delta := \epsilon/3$), there exist $\tilde\bh^{(1)}, \dotsc, \tilde\bh^{(k)}$ such that:
\begin{itemize} \item (H1) $\tilde\bh^{(j)} \colon \mathbb{R}^d \to \mathbb{R}$ is represented by a neural network of width at most $O(\tau L/\epsilon)$;
\item (H2) $\rnorml{\tilde\bh^{(j)}} = O(\tau L^2 \norml[2]{v + \rho \bw^{(j)}}/\epsilon)$;
\item (H3) $|\tilde\bh^{(j)}(x) - \bh^{(j)}(x)| \leq \epsilon/3$ for all $x \in \{\pm 1\}^d$ such that $|x^\T \bu^{(j)}| \leq \tau / \norml[2]{v + \rho \bw^{(j)}}$;
\item (H4) $|\tilde\bh^{(j)}(x) - \bh^{(j)}(x)| \leq 1$ for all $x \in \mathbb{R}^d$.
\end{itemize}
Our approximating neural network $\bg \colon \mathbb{R}^d \to \mathbb{R}$ is defined by
\begin{equation*}
\bg(x) := \frac1k \sum_{j=1}^k \tilde\bh^{(j)}(x) .
\end{equation*}
By construction and using properties H1 and H2 (above), the following properties of $\bg$ are immediate:
\begin{itemize}
\item (G1) $\bg$ is represented by a neural network of width at most $O(k\tau L/\epsilon)$;
\item (G2) $\max\{\rnorml{\bg}, \vnorml{\bg} \} = O(\tau L^2 \max_{j \in [k]} \norml[2]{v + \rho \bw^{(j)}}/\epsilon)$.
\end{itemize}
Note that these properties are given in terms of $\tau$, which has yet to be specified, as well as $\max_{j \in [k]} \norml[2]{v + \rho \bw^{(j)}}$, which is a random variable.
So, in the remainder of the proof, we choose a particular distribution for $\bw$ (and hence also for $\bw^{(1)},\dotsc,\bw^{(k)}$) and a value of $\tau$ that, together, will ultimately allow us to establish the existence of an approximating neural network with the desired properties via the probabilistic method.
We first specify the probability distribution of $\bw$ and establish some of its properties.
We let $\bw = (\bw_1,\dotsc,\bw_d)$ be a vector of independent random variables $\bw_1,\dotsc,\bw_d$ with $p_i := \prl{ \bw_i = -2\sign{v_i} } = \abs{v_i}/(2\rho)$ and $\prl{ \bw_i = 0 } = 1-p_i$.
Note that $p_i \in [0,1/2]$ for all $i$ since we have assumed $\rho \geq \norm[\infty]{v}$, so the distribution of $\bw$ is well-defined.
Furthermore, observe that $\bw \neq -(1/\rho) v$ almost surely (since $v \neq 0$ and $\rho \geq \norml[\infty]{v}$ by assumption), $\EEl{v + \rho\bw} = 0$, and
\begin{equation*}
\EE{\norm[2]{v + \rho \bw}^2}
= \sum_{i=1}^d \Varl{\rho \bw_i}
= \sum_{i=1}^d 4\rho^2 \cdot \frac{\abs{v_i}}{2\rho} \cdot \left( 1- \frac{\abs{v_i}}{2\rho} \right)
= 2\rho \norm[1]{v} - \norm[2]{v}^2
= \sigma_{\rho,v}^2 .
\end{equation*}
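As a quick numerical sanity check of the variance computation above (illustrative only, not part of the proof; the test vector $v$ and scale $\rho$ are arbitrary choices satisfying $\rho \geq \norml[\infty]{v}$), the per-coordinate variances $4\rho^2 p_i(1-p_i)$ should sum to $\sigma_{\rho,v}^2 = 2\rho\norm[1]{v} - \norm[2]{v}^2$:

```python
# Sanity check (not part of the proof) of the variance identity above:
# each coordinate contributes Var(rho*w_i) = 4*rho^2*p_i*(1-p_i) with
# p_i = |v_i|/(2*rho), and the total should equal 2*rho*||v||_1 - ||v||_2^2.
def coordinate_variance_sum(v, rho):
    total = 0.0
    for vi in v:
        p = abs(vi) / (2 * rho)
        total += 4 * rho ** 2 * p * (1 - p)
    return total

def sigma_rho_v_squared(v, rho):
    return 2 * rho * sum(abs(vi) for vi in v) - sum(vi ** 2 for vi in v)

v = [0.3, -0.7, 0.1, 0.5, -0.2]
rho = 1.0  # satisfies rho >= ||v||_inf
assert abs(coordinate_variance_sum(v, rho) - sigma_rho_v_squared(v, rho)) < 1e-12
```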
Moreover, $\norml[2]{v + \rho \bw}$ is a function of independent random variables $\bw_1,\dotsc,\bw_d$ that satisfies the $(2|v_1|,\dotsc,2|v_d|)$-bounded differences property.
By McDiarmid's inequality (Lemma~\ref{lemma:mcdiarmid}) and Jensen's inequality,\begin{equation}
\label{eq:cos-approx-mcdiarmid}
\pr{ \norml[2]{v + \rho \bw} \geq \sigma_{\rho,v} + \sqrt{2\ln(1/\delta)} } \leq \delta \quad \text{for all $\delta \in (0,1)$} .
\end{equation}
Finally, for any fixed $x \in \{\pm 1\}^d$, $x^\T (v + \rho \bw) = \sum_{i=1}^d x_i (v_i + \rho \bw_i)$ is a sum of $d$ independent, mean-zero random variables, with variance $\Var{x^\T (v + \rho \bw)} = \sigma_{\rho,v}^2$ and $|x_i (v_i + \rho \bw_i)| \leq 2\rho$ almost surely for each $i$.
By Bernstein's inequality (Lemma~\ref{lemma:bernstein}),\begin{equation}
\label{eq:cos-approx-bernstein}
\pr{ \abs{x^\T (v + \rho \bw)} \geq \sigma_{\rho,v} \sqrt{2 \ln(2/\delta)} + 2\rho\ln(2/\delta)/3 } \leq \delta
\quad \text{for all $x \in \{\pm 1\}^d$, $\delta \in (0,1)$}
.
\end{equation}
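For a small dimension the tail probability in the Bernstein bound can be computed exactly by enumerating all $2^d$ outcomes of $\bw$; the following check (illustrative only, with an arbitrary $v$, $x$, and $\delta$) confirms the bound at the stated threshold:

```python
import itertools, math

# Exact tail computation (illustrative, small d) for S = x^T (v + rho*w),
# checked against the Bernstein threshold used in the proof.
d, rho, delta = 8, 1.0, 0.1
v = [0.3] * d                              # any v with rho >= ||v||_inf
x = [1] * d                                # a fixed point of the hypercube
p = [abs(vi) / (2 * rho) for vi in v]
sigma2 = sum(2 * rho * abs(vi) - vi ** 2 for vi in v)
thr = math.sqrt(sigma2 * 2 * math.log(2 / delta)) + 2 * rho * math.log(2 / delta) / 3

tail = 0.0
for bits in itertools.product((0, 1), repeat=d):
    prob, s = 1.0, 0.0
    for xi, vi, pi, b in zip(x, v, p, bits):
        wi = -2.0 * math.copysign(1.0, vi) if b else 0.0
        prob *= pi if b else 1 - pi
        s += xi * (vi + rho * wi)
    if abs(s) >= thr:
        tail += prob
assert tail <= delta
```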
We now show that $\bg$ has the desired properties with positive probability.
Since $w^\T x$ is an integer for any $w \in \mathbb{Z}^d$ and $x \in \{\pm 1\}^d$, and since $v + \rho \bw^{(j)} \neq 0$ almost surely, the $\rho$-periodicity of $\phi$ implies that $\bh^{(j)}(x) = f(x)$ for all $x \in \{\pm 1\}^d$ and all $j \in [k]$.
Therefore, the intermediate (random) function $\bg_1 \colon \mathbb{R}^d \to [-1,1]$ defined by $\bg_1(x) := \frac1k \sum_{j=1}^k \bh^{(j)}(x)$ satisfies $\bg_1(x) = f(x)$ for all $x \in \{\pm 1\}^d$.
For each $x \in \{\pm 1\}^d$, let \[\br(x) := |\{ j \in [k] : \absl{x^\T (v + \rho \bw^{(j)})} \geq \tau \}| = |\{ j \in [k] : \absl{x^\T \bu^{(j)}} \geq \tau / \norml[2]{v + \rho \bw^{(j)}} \}|.\]
Using the approximation properties of $\tilde\bh^{(j)}$ (i.e., H3 and H4 from above), we have for each $x \in \{\pm 1\}^d$,
\begin{equation*}
\abs{\bg(x) - \bg_1(x)}
= \frac1k \abs{ \sum_{j=1}^k (\tilde\bh^{(j)}(x) - \bh^{(j)}(x)) }
\leq \left(1 - \frac{\br(x)}{k} \right) \cdot \frac\epsilon3 + \frac{\br(x)}{k} \cdot 1 .
\end{equation*}
This final expression is at most $\epsilon$ if $\br(x) \leq 2k\epsilon/3$.
We choose $\tau$ such that for any $x \in \{\pm 1\}^d$, we have $\prl{ |x^\T (v + \rho \bw)| > \tau } \leq \epsilon/3$.
By~\eqref{eq:cos-approx-bernstein}, it suffices to choose
\begin{equation*}
\tau := \sigma_{\rho,v} \sqrt{2 \ln(6/\epsilon)} + 2\rho\ln(6/\epsilon)/3 .
\end{equation*}
By a multiplicative Chernoff bound (Lemma~\ref{lemma:chernoff}) and a union bound over all $x \in \{\pm 1\}^d$,
\begin{equation*}
\pr{ \exists x \in \{\pm 1\}^d \ \text{s.t.} \ \br(x) > 2k\epsilon/3 } \leq 2^d \cdot e^{-k\epsilon/9} < \frac12 ,
\end{equation*}
where the final inequality uses the choice of $k > (9(d+1) \ln 2)/\epsilon$.
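The arithmetic behind the final inequality can be spot-checked directly (illustrative; the grid of $d$ and $\epsilon$ values is arbitrary): any integer $k$ exceeding $9(d+1)\ln(2)/\epsilon$ makes the union bound strictly less than $1/2$.

```python
import math

# Arithmetic check (illustrative) of the union bound: any integer
# k > 9(d+1)ln(2)/eps gives 2^d * exp(-k*eps/9) < 1/2.
def union_bound_ok(d, eps):
    k = math.floor(9 * (d + 1) * math.log(2) / eps) + 1  # smallest such integer
    return (2.0 ** d) * math.exp(-k * eps / 9) < 0.5

assert all(union_bound_ok(d, eps) for d in (5, 50, 500) for eps in (0.5, 0.1, 0.01))
```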
Therefore, with probability more than $1/2$, we have $\br(x) \leq 2k\epsilon/3$ for all $x \in \{\pm 1\}^d$, and hence
\begin{equation}
\label{eq:cos-approx-accuracy}
\absl{\bg(x) - f(x)}
= \absl{\bg(x) - \bg_1(x)} \leq \epsilon
\quad \text{for all $x \in \{\pm 1\}^d$} .
\end{equation}
Finally, by \eqref{eq:cos-approx-mcdiarmid} and a union bound over all $j \in [k]$, we have that with probability more than $1/2$,
\begin{equation}
\label{eq:cos-approx-norms}
\max_{j \in [k]} \norml[2]{v + \rho \bw^{(j)}}
\leq \sigma_{\rho,v} + \sqrt{2\ln(2k)}
.
\end{equation}
So, there is a positive probability that both~\eqref{eq:cos-approx-accuracy} and~\eqref{eq:cos-approx-norms} hold simultaneously, and in this event, it can be checked (via G1 and G2 above) that the function $\bg$ satisfies the desired properties in the theorem. \end{proof}
\begin{lemma}\label{lemma:lipschitz-approx}
Suppose $f(x) = \phi(v^\T x)$ is an $L$-Lipschitz function for $v \in \mathbb{S}^{d-1}$, $\phi: \mathbb{R} \to [-1, 1]$, and $L \geq 1$.
For any $t \in [1, \sqrt{d} - 1]$ and $\delta \in (0, 1)$, there exists a neural network $g$ of width $O(\frac{tL}{\delta})$ such that:
\begin{enumerate}
\item $\rnorm{g} = O(\frac{tL^2}{\delta})$;
\item $\abs{f(x) - g(x)} \leq \delta$ for all $x$ with $\abs{v^\T x} \leq t$;
\item $\abs{f(x) - g(x)} \leq 1$ for all $x \in \mathbb{R}^d$;
\item $g(x) = 0$ for all $x$ with $\abs{v^\T x} \geq t + \frac1L$; and
\item $g$ is a ridge function in direction $v$.
\end{enumerate} \end{lemma} \begin{proof}
We first introduce an $L$-Lipschitz function $\phi_t$ (visualized in Figure~\ref{fig:phi-t}) that perfectly fits $\phi$ on the interval $[-t, t]$ and is zero on $(-\infty, -t-\frac1L] \cup [t+\frac1L, \infty)$:
\[\phi_t(z) := \begin{cases}
\phi(z) & \text{if $z \in [-t, t]$} ; \\
\sign{\phi(t)} \max\set{\abs{\phi(t)} - L(z - t), 0} & \text{if $z \geq t$} ; \\
\sign{\phi(-t)} \max\set{\abs{\phi(-t)} - L(-t - z), 0} & \text{if $z \leq -t$} .
\end{cases}\]
\begin{figure}
\caption{A visualization of how the truncated $\phi_t$ (gray) is generated from $\phi$ (blue), $t$, and $L$.}
\label{fig:phi-t}
\end{figure}
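The truncation can be made concrete with a minimal sketch (not from the paper; the linear decay at rate $L$ on each side of $[-t,t]$ is our reading of the definition, and the test function $\phi(z)=\sin(Lz)$ is an arbitrary $L$-Lipschitz choice):

```python
import math

# A minimal sketch (not from the paper) of the truncation phi_t: it agrees with
# phi on [-t, t] and decays linearly at rate L until it hits zero on each side.
def make_phi_t(phi, t, L):
    def phi_t(z):
        if -t <= z <= t:
            return phi(z)
        if z > t:
            return math.copysign(1.0, phi(t)) * max(abs(phi(t)) - L * (z - t), 0.0)
        return math.copysign(1.0, phi(-t)) * max(abs(phi(-t)) - L * (-t - z), 0.0)
    return phi_t

L, t = 2.0, 3.0
phi = lambda z: math.sin(L * z)            # an L-Lipschitz phi with values in [-1, 1]
phi_t = make_phi_t(phi, t, L)

zs = [i / 1000 for i in range(-5000, 5001)]
assert all(phi_t(z) == phi(z) for z in zs if -t <= z <= t)
assert all(phi_t(z) == 0.0 for z in zs if abs(z) >= t + 1 / L)
# L-Lipschitz on the grid
assert all(abs(phi_t(zs[i + 1]) - phi_t(zs[i])) <= L * 0.001 + 1e-9
           for i in range(len(zs) - 1))
```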
Then, there exists a piecewise-linear function $\psi_t$ that
\begin{itemize}
\item point-wise approximates $\phi_t$ to accuracy $\delta$;
\item has $\psi_t(z) = \phi_t(z)$ for all $z \not\in [-t, t]$;
\item has $\frac{2tL}{\delta}$ evenly-spaced knots on the interval $[-t, t]$ where $\psi_t$ exactly fits $\phi_t$; and
\item is $L$-Lipschitz.
\end{itemize}
As a result, $\psi_t$ can be written as a neural network with $\psi_t(z) = \sum_{j=1}^m a^{(j)} \varphi_{\operatorname{r}}(z - b^{(j)})$, where $m = \frac{2Lt}\delta$, $b^{(j)} \in [-t - \frac1L, t + \frac1L]$, and $\abs{a^{(j)}} \leq 2L$.
By taking $g(x) := \psi_t(v^\T x)$, we have a neural network that satisfies conditions 2, 3, 4, and 5.
The bound on $\rnorm{g}$ is immediate from the fact that $g$ can be expressed as a neural network with $O(\frac{tL}{\delta})$ neurons with unit weights, biases in $[-\sqrt{d}, \sqrt{d}]$, and bounded coefficients $a^{(j)}$. \end{proof}
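The knot construction in the proof can be sketched in miniature (illustrative only; the bias term for a nonzero left endpoint and the generic Lipschitz test function are our additions): a piecewise-linear interpolant with evenly spaced knots, written as a sum of ReLUs, fits $f$ exactly at the knots and is within $Lh$ of $f$ everywhere, where $h$ is the knot spacing.

```python
import math

# A sketch (illustrative) of the knot construction: the piecewise-linear
# interpolant of an L-Lipschitz function on [lo, hi], written as a bias plus a
# sum of ReLUs with one knot per breakpoint. The ReLU coefficients are the
# slope changes between consecutive segments.
def relu(z):
    return max(z, 0.0)

def relu_interpolant(f, lo, hi, n_knots):
    knots = [lo + (hi - lo) * j / (n_knots - 1) for j in range(n_knots)]
    vals = [f(b) for b in knots]
    slopes = [(vals[j + 1] - vals[j]) / (knots[j + 1] - knots[j])
              for j in range(n_knots - 1)]
    coeffs = [slopes[0]] + [slopes[j] - slopes[j - 1] for j in range(1, n_knots - 1)]
    def net(z):
        return vals[0] + sum(a * relu(z - b) for a, b in zip(coeffs, knots))
    return net, knots

L, lo, hi = 2.0, -3.5, 3.5
f = lambda z: math.sin(L * z)
net, knots = relu_interpolant(f, lo, hi, 141)
h = (hi - lo) / 140
assert all(abs(net(b) - f(b)) < 1e-9 for b in knots)   # exact fit at the knots
grid = [lo + (hi - lo) * i / 2000 for i in range(2001)]
assert max(abs(net(z) - f(z)) for z in grid) <= L * h  # L-Lipschitz interpolation error
```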
\subsection{Proof of Theorem~\ref{thm:ridge-cosine-lb}} \label{assec:periodic-rnorm-ridge-approx-lb}
\thmridgecosinelb* \begin{proof} We prove the claim by a reduction to Theorem~\ref{thm:ridge-parity-lb}. That is, we show that an interpolant with smaller $\mathcal{R}$-norm\xspace than the bound stipulates can be used to construct a neural network that contradicts Theorem~\ref{thm:ridge-parity-lb}.
To do so, we consider a lower dimension $d' = 4\floorl{d/ 4q} - 4$ and create a mapping from points $z \in \{\pm 1\}^{d'}$ to $x_z \in \{\pm 1\}^d$. We define $a \in [0, 4q - 1]$ such that $2a \equiv d \pmod{4q}$. For any $z$, we define $x_z$ as follows: \[x_z = (\underbrace{z_1, \dots, z_1}_{q}, \dots, \underbrace{z_{d'}, \dots, z_{d'}}_{q}, \underbrace{1, \dots, 1}_{a}, \underbrace{-1, \dots, -1}_{d - d'q - a}).\] Observe that \[\vec1^\T x_z = q \vec1^\T z + 2a -d + d'q \equiv q \vec1^\T z \pmod{4q}.\] Due to the periodicity of cosine and the fact that $d'$ is a multiple of 4, \[\cos\parenl{\frac{2\pi}{\rho} v^\T x_z} = \cos\parenl{\frac{\pi}{2} \vec1^\T z} = \chi(z).\]
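The congruence $\vec1^\T x_z \equiv q \vec1^\T z \pmod{4q}$ underlying the embedding can be checked exhaustively at small sizes (illustrative only; the sizes below are arbitrary choices for which a valid $a$ exists, and inapplicable sizes are skipped):

```python
import itertools

# Exact small-scale check (illustrative) of the embedding's congruence
# 1^T x_z == q * 1^T z (mod 4q), enumerating every z in {+-1}^{d'}.
def embed(z, d, q, a):
    dp = len(z)
    return ([zi for zi in z for _ in range(q)]
            + [1] * a + [-1] * (d - dp * q - a))

def congruence_holds(d, q):
    dp = 4 * (d // (4 * q)) - 4
    cands = [a for a in range(4 * q) if (2 * a - d) % (4 * q) == 0]
    if dp <= 0 or not cands or d - dp * q - cands[0] < 0:
        return True  # construction not applicable at these sizes
    a = cands[0]
    return all(len(embed(z, d, q, a)) == d
               and (sum(embed(z, d, q, a)) - q * sum(z)) % (4 * q) == 0
               for z in itertools.product((1, -1), repeat=dp))

assert all(congruence_holds(d, q) for d in (10, 12, 18) for q in (1, 2))
```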
Consider some $g(x) = \phi(w^\T x)$ with $\norml[\infty]{g - \cos\parenl{\frac{2\pi}{\rho} v^\T \cdot}} \leq \frac12$. Define $w' \in \mathbb{R}^{d'}$ such that $w'_i := \sum_{j=1}^q w_{(i-1)q + j}$. Observe that $\norm[2]{w'} \leq \sqrt{q}$ and that $w^\T x_z = w^{\prime \T} z + c_w$, where $c_w$ depends only on the remaining elements of $w$. Define $\tilde{g}(z) = \phi(w^{\prime \T} z + c_w)$. Then, \[\absl{\tilde{g}(z) - \chi(z)} = \absl{\phi(w^\T x_z) - \cos\parenl{\frac{2\pi}{\rho} v^\T x_z}} \leq \frac12\] for all $z \in \{\pm 1\}^{d'}$. Since translation can only decrease the $\mathcal{R}$-norm\xspace (it can only cause some neurons to behave linearly on the domain), we have $\rnorm{\tilde{g}} \le \norm[2]{w'} \tv{\phi'} = \norm[2]{w'} \rnorm{g}$, and Theorem~\ref{thm:ridge-parity-lb} implies that $\rnorm{g} = \Omega({d^{\prime 3/2}}/{\sqrt{q}})$. The theorem statement follows by plugging in $q$ and $d'$. \end{proof}
\end{document}
\begin{document}
\preprint{USTC-ICTS/PCFT-21-03} \title{Class of Bell-Clauser-Horne inequalities for testing quantum nonlocality} \author{Chen Qian} \email{qianch18@mail.ustc.edu.cn}
\affiliation{Interdisciplinary Center for Theoretical Study and Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026, China} \affiliation{Peng Huanwu Center for Fundamental Theory, Hefei, Anhui 230026, China} \author{Yang-Guang Yang} \email{mathmuse@ustc.edu.cn}
\affiliation{Interdisciplinary Center for Theoretical Study and Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026, China} \affiliation{Peng Huanwu Center for Fundamental Theory, Hefei, Anhui 230026, China} \author{Cong-Feng Qiao} \email{qiaocf@ucas.ac.cn}
\affiliation{School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China} \affiliation{Key Laboratory of Vacuum Physics, Chinese Academy of Sciences, Beijing 100049, China} \author{Qun Wang} \email{qunwang@ustc.edu.cn}
\affiliation{Interdisciplinary Center for Theoretical Study and Department of Modern Physics, University of Science and Technology of China, Hefei, Anhui 230026, China} \affiliation{Peng Huanwu Center for Fundamental Theory, Hefei, Anhui 230026, China} \begin{abstract} Quantum nonlocality, one of the most important features of quantum mechanics, is normally connected in experiments with the violation of Bell-Clauser-Horne (Bell-CH) inequalities. We propose effective methods for the rearrangement and linear inequality to prove a large variety of Bell-CH inequalities. We also derive a set of Bell-CH inequalities by using these methods which can be violated in some quantum entangled states. \end{abstract} \maketitle
\section{Introduction}
\label{sec:introduction}Quantum nonlocality is one of the most striking aspects of quantum mechanics without a classical analog in reality. It can be characterized as correlated outcomes when we measure two or more entangled quantum systems, even if these systems are spatially separated. Quantum nonlocality originates from a contradiction between local realism and the completeness of quantum mechanics pointed out by Einstein, Podolsky and Rosen in 1935 \citep{Einstein1935}, which was called the ``EPR paradox'' and posed a great challenge to the concept of ``locality'' taken for granted by most physicists. To test such a contradiction in a real physical system, Bell, Clauser, and other researchers formulated observable inequalities with correlation functions of measurement outcomes for entangled systems \citep{Bell1964,Clauser1969,Bell1971}. These inequalities were later called Bell-Clauser-Horne-Shimony-Holt (Bell-CHSH) inequalities. They proved that the quantum correlation cannot be fully described by any local hidden-variable theory. Later on, Clauser and Horne proposed the CH inequality in terms of probabilities \citep{Clauser1974}, followed by many CH-like inequalities (we call them all Bell-CH inequalities in this paper). These inequalities have weaker auxiliary assumptions than Bell-CHSH inequalities in experimental considerations.
Violation of Bell-CH inequalities has been widely verified by many experiments in favor of the nonlocal feature of quantum mechanics \citep{Aspect2002,Ding2007,Li2010,Qian2020}. However, observing the violation of these inequalities is not feasible in every entangled system \citep{Hiesmayr2015}. One needs to generalize these inequalities in order to test them in experiments. The first attempt to find a systematic way of generalization was made by Froissart \citep{Froissart1981}, and a later attempt was made by Pitowsky \citep{Pitowsky1986,Pitowsky1989,Pitowsky1991}, who introduced correlation polytopes to find Bell-CH inequalities inspired by Boole's method on probabilistic inequalities \citep{Pitowsky1994}. After a series of further works, such algorithms were built to find a set of Bell-CH inequalities based on the solution of convex problems \citep{Froissart1981,Collins2004,Avis2005,Avis2006,Ito2006,Bancal2010,Pironio2014}. A large variety of Bell-CH inequalities have been found \citep{Froissart1981,Sliwa2003,Collins2004,Ito2006,Brunner2008,Gisin2009,Pal2009,Bancal2010,Quintino2014,Cope2018,Oudot2019,Cruzeiro2019}. In Refs. \citep{Cope2018,Oudot2019,Cruzeiro2019}, the authors obtained a complete list of Bell-CH inequalities in the case of binary settings up to four measurements on each setting. The main goal of this paper is to prove these inequalities with analytical techniques and construct a class of Bell-CH inequalities. Our inequalities can be used as a substitute for the original ones for testing quantum nonlocality under some circumstances.
We consider a system consisting of two classical or quantum subsystems $x$ and $y$. Alice chooses $m$ measurement settings for a quantity that has $k$ values on $x$, and Bob chooses $n$ measurement settings for a quantity that has $l$ values on $y$. We denote this case as $mnkl$. The system can be measured many times with different selected settings. We define $P_{x_{i}}$ as the probability that Alice obtains a certain value $a$ for measurement setting $i$ on $x$, $P_{y_{j}}$ as the probability that Bob obtains a certain value $b$ for measurement setting $j$ on $y$, and $P_{x_{i}y_{j}}$ as the probability that Alice obtains $a$ for $i$ and Bob obtains $b$ for $j$ at the same time. For a classical system labeled $mn22$, we have the so-called Bell-CH inequalities of binary settings which hold for certain integer coefficients $C_{x_{i}}$, $C_{y_{j}}$, and $C_{x_{i}y_{j}}$, \begin{equation} \sum_{i=1}^{m}C_{x_{i}}P_{x_{i}}+\sum_{j=1}^{n}C_{y_{j}}P_{y_{j}}+\sum_{i=1}^{m}\sum_{j=1}^{n}C_{x_{i}y_{j}}P_{x_{i}y_{j}}\leqslant0.\label{eq:Bell-CH-ieq} \end{equation} In our formulation, we define the corresponding Bell-CH-like inequalities in algebraic forms (we call them algebraic Bell-CH inequalities for short) as \begin{equation} B\sum_{i=1}^{m}C_{x_{i}}x_{i}+A\sum_{j=1}^{n}C_{y_{j}}y_{j}+\sum_{i=1}^{m}\sum_{j=1}^{n}C_{x_{i}y_{j}}x_{i}y_{j}\leqslant0,\label{eq:Bell-CH-ieq-c} \end{equation} with $A$ and $B$ being the upper bounds of two sets of positive variables $x_{i}$ and $y_{i}$, respectively. The Bell-CH inequalities can be derived from the algebraic ones in local hidden-variable theory. Among these inequalities, there are independent ones which we label $Imn22$ for Bell-CH inequalities and \emph{$I_{mn22}$} for algebraic ones.
This paper is organized as follows. In this section, we have provided an introduction and the motivation of this work with definitions of Bell-CH inequalities and their algebraic forms. In Sec. \ref{sec:algebraic-Bell-CH}, we derive algebraic Bell-CH inequalities with two methods; one is from the rearrangement inequality, and the other is from the linear inequality. In Sec. \ref{sec:Bell-CH}, we derive Bell-CH inequalities and obtain a different class of inequalities. In Sec. \ref{sec:testing}, we test our inequalities with special quantum entangled states. In Sec. \ref{sec:summary} we summarize this work and draw conclusions.
\section{Algebraic Bell-CH inequalities for binary settings}
\label{sec:algebraic-Bell-CH}
\subsection{Method of rearrangement inequality}
\label{subsec:method-of-rearrangement}The rearrangement inequality is a well-known inequality in classical mathematics, from which one can prove many famous inequalities, such as the arithmetic-mean–geometric-mean inequality, Cauchy inequality, etc. In Appendix \ref{sec:rearrange} we give a very brief introduction of the rearrangement inequality; readers can see Ref. \citep{Hardy1988} for a review of this topic.
In this section, we derive two low-order algebraic Bell-CH inequalities by applying the rearrangement inequality: algebraic Bell-CH inequalities $I_{2222}$ and $I_{3322}$. In order to prove these inequalities, we use the maximum and minimum values of variables to enlarge the upper bound of polynomials in intermediate steps. Then we can rearrange the orders of variables in these polynomials and apply the rearrangement inequality to determine their signs. In this way, we find a profound connection between these inequalities and the classical rearrangement inequality.
\subsubsection{$I_{2222}$}
We define two sets of $m$ and $n$ positive real numbers \begin{eqnarray} a & = & \{x_{i},i=1,\cdots,m\},\nonumber \\ b & = & \{y_{j},j=1,\cdots,n\},\label{eq:s-eq} \end{eqnarray} with upper bounds $A$ and $B$, respectively, \begin{align} 0 & \leqslant x_{i}\leqslant A,\ 0\leqslant y_{j}\leqslant B.\label{eq:bc} \end{align} The maximum and minimum values of the numbers in set $a$ are denoted $x_{+}(1,\ldots,m)$ and $x_{-}(1,\ldots,m)$, respectively, and the maximum and minimum values of the numbers in set $b$ are denoted $y_{+}(1,\ldots,n)$ and $y_{-}(1,\ldots,n)$, respectively.
We define $I_{2}$ as a polynomial of the following form: \begin{equation} I_{2}=x_{1}y_{1}+x_{1}y_{2}+x_{2}y_{1}-x_{2}y_{2}-x_{1}B-Ay_{1},\label{eq:I2} \end{equation} which involves elements of set $a$ and $b$ in (\ref{eq:s-eq}) with $m=n=2$. We now prove the $I_{2222}$ inequality, $I_{2}\leqslant0$. Using boundary conditions \begin{align} 0\leqslant x_{-}(1,2)\leqslant x_{1},x_{2} & \leqslant x_{+}(1,2)\leqslant A,\nonumber \\ 0\leqslant y_{-}(1,2)\leqslant y_{1},y_{2} & \leqslant y_{+}(1,2)\leqslant B \end{align} in (\ref{eq:bc}), we obtain \begin{eqnarray} I_{2} & \leqslant & x_{1}(y_{1}+y_{2})+x_{2}(y_{1}-y_{2})-x_{1}y_{+}(1,2)-x_{+}(1,2)y_{1}\nonumber \\
& \leqslant & x_{1}(y_{1}+y_{2})+x_{2}(y_{1}-y_{2})-x_{1}y_{+}(1,2)-x_{+}(1,2)y_{1}\nonumber \\
& & -\left[x_{1}-x_{+}(1,2)\right]y_{-}(1,2)-x_{-}(1,2)\left[y_{1}-y_{+}(1,2)\right],\label{eq:I2-e-1} \end{eqnarray} where we have replaced $A$ and $B$ with $x_{+}(1,2)$ and $y_{+}(1,2)$ in Eq. (\ref{eq:I2}), respectively, to obtain the first line and added two positive quantities in the second inequality.
Since there are only two variables in each set, we always have \begin{align} x_{+}(1,2)+x_{-}(1,2) & =x_{1}+x_{2},\nonumber \\ y_{+}(1,2)+y_{-}(1,2) & =y_{1}+y_{2}. \end{align} Using the above identity, inequality (\ref{eq:I2-e-1}) becomes \begin{eqnarray} I_{2} & \leqslant & x_{1}(y_{1}+y_{2})+x_{2}(y_{1}-y_{2})-x_{1}(y_{1}+y_{2})-(x_{1}+x_{2})y_{1}\nonumber \\
& & +x_{+}(1,2)y_{-}(1,2)+x_{-}(1,2)y_{+}(1,2)\nonumber \\
& = & -(x_{1}y_{1}+x_{2}y_{2})+x_{+}(1,2)y_{-}(1,2)+x_{-}(1,2)y_{+}(1,2)\nonumber \\
& \equiv & I_{2}^{(0)}.\label{eq:I2-e-2} \end{eqnarray} We see in (\ref{eq:I2-e-2}) that $I_{2}^{(0)}$ is the difference between the reversed sum and the unordered sum for sets $a$ and $b$ in (\ref{eq:s-eq}) with $m=n=2$. Using the rearrangement inequality, we obtain \begin{equation} I_{2}\leqslant I_{2}^{(0)}\leqslant0.\label{eq:I2-e-r} \end{equation} This concludes the proof of the $I_{2222}$ inequality by applying the rearrangement inequality.
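Since $I_2$ is affine in each variable separately, its maximum over the box $0 \leqslant x_i \leqslant A$, $0 \leqslant y_j \leqslant B$ is attained at a vertex, so the inequality $I_2 \leqslant 0$ can be certified exactly by checking all $16$ vertices (an illustrative check, not part of the proof; $A$ and $B$ are arbitrary positive test values):

```python
# Exact check (illustrative) of I_2222: I2 is affine in each variable, so its
# maximum over the box [0,A]^2 x [0,B]^2 is attained at a vertex; checking all
# 16 vertices certifies I2 <= 0 on the whole box.
def I2(x1, x2, y1, y2, A, B):
    return x1 * y1 + x1 * y2 + x2 * y1 - x2 * y2 - x1 * B - A * y1

A, B = 2.0, 3.0
best = max(I2(x1, x2, y1, y2, A, B)
           for x1 in (0.0, A) for x2 in (0.0, A)
           for y1 in (0.0, B) for y2 in (0.0, B))
assert best == 0.0  # the bound is tight, attained e.g. at x = (A, A), y = (B, B)
```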
We can rewrite $I_{2}^{(0)}$ with the help of the Heaviside step function, \begin{eqnarray} \theta(x) & = & \begin{cases} \begin{array}{c} 1\\ \frac{1}{2}\\ 0 \end{array} & \begin{array}{c} x>0,\\ x=0,\\ x<0. \end{array}\end{cases}\label{eq:theta} \end{eqnarray} The maximum and minimum values can be put into the following form: \begin{align} x_{+}(1,2) & =x_{1}\theta(x_{1}-x_{2})+x_{2}\theta(-x_{1}+x_{2}),\nonumber \\ x_{-}(1,2) & =x_{1}\theta(-x_{1}+x_{2})+x_{2}\theta(x_{1}-x_{2}),\nonumber \\ y_{+}(1,2) & =y_{1}\theta(y_{1}-y_{2})+y_{2}\theta(-y_{1}+y_{2}),\nonumber \\ y_{-}(1,2) & =y_{1}\theta(-y_{1}+y_{2})+y_{2}\theta(y_{1}-y_{2}). \end{align} Inserting the above formula into $I_{2}^{(0)}$ and using the equation \begin{equation}
x\theta(x)=\frac{x+|x|}{2}, \end{equation} we obtain \begin{eqnarray} I_{2}^{(0)} & = & -(x_{1}-x_{2})(y_{1}-y_{2})\left[\theta(x_{1}-x_{2})\theta(y_{1}-y_{2})+\theta(-x_{1}+x_{2})\theta(-y_{1}+y_{2})\right]\nonumber \\
& = & -\frac{1}{2}\left[(x_{1}-x_{2})(y_{1}-y_{2})+\left|(x_{1}-x_{2})(y_{1}-y_{2})\right|\right].\label{eq:ch0} \end{eqnarray} This form will be used in the next section.
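The closed form of $I_{2}^{(0)}$ in Eq. (\ref{eq:ch0}) can be verified numerically (an illustrative spot check over random points, not part of the derivation):

```python
import random

# Numeric check (illustrative) of the closed form of I2^(0):
# I2^(0) = -(x1*y1 + x2*y2) + x_+ * y_- + x_- * y_+
#        = -(1/2) * [ (x1-x2)*(y1-y2) + |(x1-x2)*(y1-y2)| ]  <= 0.
random.seed(0)
for _ in range(10000):
    x1, x2, y1, y2 = (random.random() for _ in range(4))
    I20 = (-(x1 * y1 + x2 * y2)
           + max(x1, x2) * min(y1, y2) + min(x1, x2) * max(y1, y2))
    prod = (x1 - x2) * (y1 - y2)
    assert abs(I20 - (-0.5) * (prod + abs(prod))) < 1e-12
    assert I20 <= 1e-12  # rearrangement: reversed sum <= unordered sum
```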
\subsubsection{$I_{3322}$}
Like $I_{2}$ in (\ref{eq:I2}), we can define $I_{3}$ as \begin{eqnarray} I_{3} & = & x_{1}y_{2}+x_{1}y_{3}+x_{2}y_{1}-x_{2}y_{2}+x_{2}y_{3}+x_{3}y_{1}+x_{3}y_{2}-x_{3}y_{3}\nonumber \\
& & -(x_{1}+x_{2})B-A(y_{1}+y_{2}),\label{eq:i3-symm} \end{eqnarray} which involves elements of sets $a$ and $b$ in (\ref{eq:s-eq}) with $m=n=3$. We now prove the $I_{3322}$ inequality, $I_{3}\leqslant0$. We use boundary conditions (\ref{eq:bc}), or, explicitly, \begin{align} 0\leqslant x_{-}(1,2,3)\leqslant x_{1},x_{2},x_{3} & \leqslant x_{+}(1,2,3)\leqslant A,\nonumber \\ 0\leqslant y_{-}(1,2,3)\leqslant y_{1},y_{2},y_{3} & \leqslant y_{+}(1,2,3)\leqslant B, \end{align} and replace $A$ and $B$ in (\ref{eq:i3-symm}) with $x_{+}(1,2,3)$ and $y_{+}(1,2,3)$, respectively, to obtain \begin{eqnarray} I_{3} & \leqslant & x_{1}(y_{2}+y_{3})+x_{2}(y_{1}+y_{3})+x_{3}(y_{1}+y_{2})-x_{2}y_{2}-x_{3}y_{3}\nonumber \\
& & -(x_{1}+x_{2})y_{+}(1,2,3)-x_{+}(1,2,3)(y_{1}+y_{2}).\label{eq:I3-e-1} \end{eqnarray} Using the inequalities \begin{align} -x_{2}y_{2} & \leqslant-x_{2}y_{-}(1,2,3)-x_{-}(1,2,3)y_{2}+x_{-}(1,2,3)y_{-}(1,2,3),\nonumber \\ -x_{3}y_{3} & \leqslant-x_{3}y_{+}(1,2,3)-x_{+}(1,2,3)y_{3}+x_{+}(1,2,3)y_{+}(1,2,3),\label{eq:I3-ee} \end{align} we can enlarge the right-hand side of (\ref{eq:I3-e-1}) to obtain \begin{eqnarray} I_{3} & \leqslant & x_{1}(y_{2}+y_{3})+x_{2}(y_{1}+y_{3})+x_{3}(y_{1}+y_{2})\nonumber \\
& & -x_{2}y_{-}(1,2,3)-x_{-}(1,2,3)y_{2}+x_{-}(1,2,3)y_{-}(1,2,3)\nonumber \\
& & -x_{3}y_{+}(1,2,3)-x_{+}(1,2,3)y_{3}+x_{+}(1,2,3)y_{+}(1,2,3)\nonumber \\
& & -(x_{1}+x_{2})y_{+}(1,2,3)-x_{+}(1,2,3)(y_{1}+y_{2}).\label{eq:I3-e-2} \end{eqnarray} The inequalities in (\ref{eq:I3-ee}) are true because they can be put into the following form: \begin{align} -\left[x_{2}-x_{-}(1,2,3)\right]\left[y_{2}-y_{-}(1,2,3)\right] & \leqslant0,\nonumber \\ -\left[x_{3}-x_{+}(1,2,3)\right]\left[y_{3}-y_{+}(1,2,3)\right] & \leqslant0. \end{align} After adding additional positive terms to enlarge the right-hand side of (\ref{eq:I3-e-2}), we can derive \begin{eqnarray} I_{3} & \leqslant & x_{1}(y_{2}+y_{3})+x_{2}(y_{1}+y_{3})+x_{3}(y_{1}+y_{2})\nonumber \\
& & -(x_{1}+x_{2}+x_{3})y_{+}(1,2,3)-x_{+}(1,2,3)(y_{1}+y_{2}+y_{3})\nonumber \\
& & -\left[x_{1}-x_{+}(1,2,3)\right]y_{-}(1,2,3)-x_{2}y_{-}(1,2,3)-\left[x_{3}-x_{+}(1,2,3)\right]y_{-}(1,2,3)\nonumber \\
& & -x_{-}(1,2,3)\left[y_{1}-y_{+}(1,2,3)\right]-x_{-}(1,2,3)y_{2}-x_{-}(1,2,3)\left[y_{3}-y_{+}(1,2,3)\right]\nonumber \\
& & +x_{-}(1,2,3)y_{-}(1,2,3)+x_{+}(1,2,3)y_{+}(1,2,3)\nonumber \\
& = & -(x_{1}y_{1}+x_{2}y_{2}+x_{3}y_{3})+x_{+}(1,2,3)y_{-}(1,2,3)+x_{-}(1,2,3)y_{+}(1,2,3)\nonumber \\
& & +\left[x_{1}+x_{2}+x_{3}-x_{+}(1,2,3)-x_{-}(1,2,3)\right]\nonumber \\
& & \times\left[y_{1}+y_{2}+y_{3}-y_{+}(1,2,3)-y_{-}(1,2,3)\right].\label{eq:I3-e-3} \end{eqnarray} Finally, we define $I_{3}^{(0)}$ as the last polynomial of (\ref{eq:I3-e-3}) for later use, \begin{eqnarray} I_{3}^{(0)} & = & -(x_{1}y_{1}+x_{2}y_{2}+x_{3}y_{3})+x_{+}(1,2,3)y_{-}(1,2,3)\nonumber \\
& & +x_{-}(1,2,3)y_{+}(1,2,3)+x_{r}y_{r},\label{eq:I3-0} \end{eqnarray} where $x_{r}$ and $y_{r}$ denote the middle terms in sets $a$ and $b$ as \begin{align} x_{r} & =x_{1}+x_{2}+x_{3}-x_{+}(1,2,3)-x_{-}(1,2,3),\nonumber \\ y_{r} & =y_{1}+y_{2}+y_{3}-y_{+}(1,2,3)-y_{-}(1,2,3). \end{align} It is obvious that $I_{3}^{(0)}$ in Eq. (\ref{eq:I3-0}) is the difference between the reversed sum and the unordered sum for sets $a$ and $b$ in (\ref{eq:s-eq}) with $m=n=3$. From the rearrangement inequality, we have \begin{equation} I_{3}\leqslant I_{3}^{(0)}\leqslant0. \end{equation} This concludes the proof of the $I_{3322}$ inequality using the method of the rearrangement inequality.
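As with $I_2$, the polynomial $I_3$ is affine in each variable, so $I_{3}\leqslant0$ over the box can be certified exactly by evaluating the $64$ vertices (an illustrative check, not part of the proof; $A = B = 1$ is an arbitrary normalization):

```python
import itertools

# Exact check (illustrative) of I_3322: I3 is affine in each variable, so its
# maximum over [0,A]^3 x [0,B]^3 is attained at one of the 64 box vertices.
def I3(x, y, A, B):
    x1, x2, x3 = x
    y1, y2, y3 = y
    return (x1 * y2 + x1 * y3 + x2 * y1 - x2 * y2 + x2 * y3
            + x3 * y1 + x3 * y2 - x3 * y3 - (x1 + x2) * B - A * (y1 + y2))

A, B = 1.0, 1.0
best = max(I3(x, y, A, B)
           for x in itertools.product((0.0, A), repeat=3)
           for y in itertools.product((0.0, B), repeat=3))
assert best <= 0.0
```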
\subsection{Method of linear inequality}
\label{subsec:method-of-linear}In Sec. \ref{subsec:method-of-rearrangement}, we derived two algebraic inequalities in connection with low-order Bell-CH and rearrangement inequalities. But it is not easy to generalize this method to higher-order cases. In Ref. \citep{Collins2004}, the authors obtained a specific type of Bell-CH inequalities and found the relationship between lower-order and higher-order inequalities of this type. Following Ref. \citep{Collins2004}, there has been much discussion along this line from a mathematical perspective \citep{Avis2005,Avis2006,Avis2007,Avis2008}. Inspired by these works, we find a general method, which we call the method of linear inequality, to prove higher-order inequalities $I_{mn22}$ from the lowest-order one systematically. As a by-product, this method can be used to construct higher-order inequalities more effectively than the traditional method. We will prove a lemma using the property of monotonicity. Then we will apply the lemma to linear functions and obtain a theorem. Finally, we will use this method to prove a large variety of inequalities that have been obtained.
We consider two sets $a$ and $b$ defined by Eq. (\ref{eq:s-eq}) with conditions (\ref{eq:bc}). Starting from positivity inequalities \begin{eqnarray} -x_{i}y_{j} & \leqslant & 0,\nonumber \\ x_{i}(y_{j}-B) & \leqslant & 0,\nonumber \\ (x_{i}-A)y_{j} & \leqslant & 0,\label{eq:positivity} \end{eqnarray} we present a theorem for a specific type of algebraic Bell-CH inequalities.
\emph{Theorem 1. }$I_{kk}(x_{1},\cdots,x_{k}|y_{1},\cdots,y_{k})\leqslant0$ is an algebraic Bell-CH inequality for $k\geqslant2$ with variables
$x_{i}$ and $y_{j}$ in sets $a$ and $b$, respectively, where $I_{kk}(x_{1},\cdots,x_{k}|y_{1},\cdots,y_{k})$ is defined as \begin{equation}
I_{kk}(x_{1},\cdots,x_{k}|y_{1},\cdots,y_{k})=\sum_{j=1}^{k}\sum_{i=1}^{k+1-j}x_{i}y_{j}-\sum_{i=2}^{k}x_{i}y_{k+2-i}-\sum_{i=1}^{k-1}(k-i)x_{i}B-Ay_{1}.\label{eq:ikk-theorem} \end{equation}
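The family $I_{kk}$ can be transcribed directly and certified for small $k$ (an illustrative check based on our reading of Eq. (\ref{eq:ikk-theorem}); again, affineness in each variable reduces the box to its vertices):

```python
import itertools

# A direct transcription (our reading of the theorem's definition) of I_kk,
# verified to be <= 0 at every vertex of [0,A]^k x [0,B]^k for k = 2, 3, 4;
# since I_kk is affine in each variable, this certifies I_kk <= 0 on the box.
def I_kk(x, y, A, B):
    k = len(x)
    val = sum(x[i] * y[j] for j in range(k) for i in range(k - j))  # i <= k+1-j
    val -= sum(x[i - 1] * y[k + 1 - i] for i in range(2, k + 1))    # x_i * y_{k+2-i}
    val -= sum((k - i) * x[i - 1] * B for i in range(1, k))         # (k-i) * x_i * B
    val -= A * y[0]
    return val

A, B = 1.0, 1.0
for k in (2, 3, 4):
    best = max(I_kk(x, y, A, B)
               for x in itertools.product((0.0, A), repeat=k)
               for y in itertools.product((0.0, B), repeat=k))
    assert best <= 0.0
```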
\begin{comment} For $k=2$, we have \begin{eqnarray*} I_{22} & = & (x_{1}+x_{2})y_{1}+x_{1}y_{2}-x_{2}y_{2}-x_{1}B-Ay_{1} \end{eqnarray*} For $k=3$, we have \begin{eqnarray*} I_{33} & = & (x_{1}+x_{2}+x_{3})y_{1}+(x_{1}+x_{2})y_{2}+x_{1}y_{3}-x_{2}y_{3}-x_{3}y_{2}-(2x_{1}+x_{2})B-Ay_{1} \end{eqnarray*} \end{comment}
\emph{Proof. }Let $f(x_{i},\{c_{j}^{y}\})=x_{i}(\sum_{j}c_{j}^{y}y_{j})$ and $g(y_{i},\{c_{j}^{x}\})=(\sum_{j}c_{j}^{x}x_{j})y_{i}$, where $x_{i}$ and $y_{j}$ are variables in sets $a$ and $b$ and $c_{j}^{x}$ and $c_{j}^{y}$ are coefficients. Then we have the following linear inequalities: \begin{eqnarray} f(x_{i},\{c_{j}^{y}\}) & \leqslant & \theta(\sum_{j}c_{j}^{y}y_{j})f(A,\{c_{j}^{y}\})+\theta(-\sum_{j}c_{j}^{y}y_{j})f(0,\{c_{j}^{y}\}),\nonumber \\ g(y_{i},\{c_{j}^{x}\}) & \leqslant & \theta(\sum_{j}c_{j}^{x}x_{j})g(B,\{c_{j}^{x}\})+\theta(-\sum_{j}c_{j}^{x}x_{j})g(0,\{c_{j}^{x}\}).\label{eq:co-ieq} \end{eqnarray}
We can prove $I_{kk}(x_{1},\cdots,x_{k}|y_{1},\cdots,y_{k})\leqslant0$ with the help of (\ref{eq:co-ieq}) by induction. First, we prove the base case $k=2$, which is just the algebraic version of the original Bell-CH inequality. We have \begin{eqnarray}
I_{22}(x_{1},x_{2}|y_{1},y_{2}) & = & (x_{1}+x_{2})y_{1}+(x_{1}-x_{2})y_{2}-x_{1}B-Ay_{1}\nonumber \\
& \leqslant & \theta(x_{1}-x_{2})I_{22}(x_{1},x_{2}|y_{1},B)+\theta(-x_{1}+x_{2})I_{22}(x_{1},x_{2}|y_{1},0)\nonumber \\
& = & \theta(x_{1}-x_{2})[(x_{1}+x_{2})y_{1}-x_{2}B-Ay_{1}]\nonumber \\
& & +\theta(-x_{1}+x_{2})[(x_{1}+x_{2})y_{1}-x_{1}B-Ay_{1}], \end{eqnarray} where we have used (\ref{eq:co-ieq}) for $g(y_{2},\{c_{j}^{x}\})=(x_{1}-x_{2})y_{2}$ with $\{c_{j}^{x}\}=\{1,-1\}$. Then we can rewrite the right-hand side of the above inequality and obtain \begin{eqnarray}
I_{22}(x_{1},x_{2}|y_{1},y_{2}) & \leqslant & \theta(x_{1}-x_{2})[(x_{1}-A)y_{1}+x_{2}(y_{1}-B)]\nonumber \\
& & +\theta(-x_{1}+x_{2})[(x_{2}-A)y_{1}+x_{1}(y_{1}-B)]\nonumber \\
& \leqslant & 0.\label{eq:nch-c} \end{eqnarray} Thus, the inequality holds for $k=2$.
Then we assume the inequality holds for $k-1$ with \begin{eqnarray}
I_{k-1,k-1}(x_{1},x_{3},\cdots,x_{k}|y_{1},\cdots,y_{k-1}) & \leqslant & 0,\nonumber \\
I_{k-1,k-1}(x_{2},x_{3},\cdots,x_{k}|y_{1},\cdots,y_{k-1}) & \leqslant & 0. \end{eqnarray} To complete the induction, we show that the inequality holds for
$I_{kk}(x_{1},\cdots,x_{k}|y_{1},\cdots,y_{k})$ in (\ref{eq:ikk-theorem}). We now apply (\ref{eq:co-ieq}) to $I_{kk}$ for $g(y_{k},\{c_{j}^{x}\})=(x_{1}-x_{2})y_{k}$, with $\{c_{j}^{x},j=1,2,\cdots,k\}=\{1,-1,0,\cdots,0\}$, as \begin{eqnarray}
& & I_{kk}(x_{1},\cdots,x_{k}|y_{1},\cdots,y_{k})\nonumber \\
& \leqslant & \theta(x_{1}-x_{2})I_{kk}(x_{1},\cdots,x_{k}|y_{1},\cdots y_{k-1},B)+\theta(-x_{1}+x_{2})I_{kk}(x_{1},\cdots,x_{k}|y_{1},\cdots,y_{k-1},0)\nonumber \\
& = & \theta(x_{1}-x_{2})[(\sum_{i=1}^{k}x_{i})y_{1}+(\sum_{i=1}^{k-1}x_{i}-x_{k})y_{2}+\cdots+(x_{1}+x_{2}-x_{3})y_{k-1}\nonumber \\
& & +(x_{1}-x_{2})B-\sum_{i=1}^{k-1}(k-i)x_{i}B-Ay_{1}]\nonumber \\
& & +\theta(-x_{1}+x_{2})[(\sum_{i=1}^{k}x_{i})y_{1}+(\sum_{i=1}^{k-1}x_{i}-x_{k})y_{2}+\cdots+(x_{1}+x_{2}-x_{3})y_{k-1}\nonumber \\
& & -\sum_{i=1}^{k-1}(k-i)x_{i}B-Ay_{1}]. \end{eqnarray} We can then rewrite the right-hand side of the above inequality and obtain \begin{eqnarray}
& & I_{kk}(x_{1},\cdots,x_{k}|y_{1},\cdots,y_{k})\nonumber \\
& \leqslant & \theta(x_{1}-x_{2})\{I_{k-1,k-1}(x_{1},x_{3},\cdots,x_{k}|y_{1},\cdots,y_{k-1})+\sum_{i=1}^{k-1}[x_{2}(y_{i}-B)]\}\nonumber \\
& & +\theta(-x_{1}+x_{2})\{I_{k-1,k-1}(x_{2},x_{3},\cdots,x_{k}|y_{1},\cdots,y_{k-1})+\sum_{i=1}^{k-1}[x_{1}(y_{i}-B)]\}\nonumber \\
& \leqslant & I_{kk}^{(0)}(x_{1},\cdots,x_{k}|y_{1},\cdots,y_{k})\nonumber \\
& \leqslant & 0.\label{eq:ch-mn} \end{eqnarray} Here, $I_{kk}^{(0)}$ is given by \begin{eqnarray}
& & I_{kk}^{(0)}(x_{1},\cdots,x_{k}|y_{1},\cdots,y_{k})\nonumber \\
& = & \max\{I_{k-1,k-1}(x_{1},x_{3},\cdots,x_{k}|y_{1},\cdots,y_{k-1})+\sum_{i=1}^{k-1}[x_{2}(y_{i}-B)],\nonumber \\
& & I_{k-1,k-1}(x_{2},x_{3},\cdots,x_{k}|y_{1},\cdots,y_{k-1})+\sum_{i=1}^{k-1}[x_{1}(y_{i}-B)]\}. \end{eqnarray} So we prove in (\ref{eq:ch-mn}) that the inequality holds for $k$. This concludes the proof of the theorem.
The same method can also be applied to prove general inequalities $I_{mn22}$ recursively. For example, we can prove $I_{3}$ in (\ref{eq:i3-symm})
as the symmetric version of $I_{33}(x_{1},x_{2},x_{3}|y_{1},y_{2},y_{3})$:
$I_{3}$ can be obtained from $I_{33}(x_{1},x_{2},x_{3}|y_{1},y_{2},y_{3})$ by the transformation $x_{1}\rightarrow A-x_{1}$, $y_{2}\rightarrow B-y_{2}$, $y_{3}\rightarrow B-y_{3}$ and then relabeling indices of $\{x_{i}\}$ and $\{y_{j}\}$ by interchanging $1\leftrightarrow3$. Hence, $I_{3}\leqslant0$ can be proved by using lower-order inequalities with the help of (\ref{eq:co-ieq}).
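The stated transformation can be verified numerically (an illustrative check; $I_{33}$ below is the $k=3$ case of the theorem, and the test points are random):

```python
import random

# Numeric check (illustrative; I_33 is the k=3 case of the theorem) that the
# substitution x1 -> A - x1, y2 -> B - y2, y3 -> B - y3 followed by relabeling
# indices 1 <-> 3 turns I_33 into the symmetric form I_3.
def I33(x1, x2, x3, y1, y2, y3, A, B):
    return ((x1 + x2 + x3) * y1 + (x1 + x2) * y2 + x1 * y3
            - x2 * y3 - x3 * y2 - (2 * x1 + x2) * B - A * y1)

def I3(x1, x2, x3, y1, y2, y3, A, B):
    return (x1 * y2 + x1 * y3 + x2 * y1 - x2 * y2 + x2 * y3
            + x3 * y1 + x3 * y2 - x3 * y3 - (x1 + x2) * B - A * (y1 + y2))

random.seed(0)
A, B = 2.0, 3.0
for _ in range(1000):
    x = [random.uniform(0, A) for _ in range(3)]
    y = [random.uniform(0, B) for _ in range(3)]
    # substitute, then relabel 1 <-> 3 in both argument lists
    transformed = I33(A - x[2], x[1], x[0], y[2], B - y[1], B - y[0], A, B)
    assert abs(transformed - I3(*x, *y, A, B)) < 1e-9
```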
As another example, we can reconstruct and prove $I_{5322}$ by applying linear inequalities. We use the polynomial $I_{53}(x_{1},x_{2},x_{3},x_{4},x_{5}|y_{1},y_{2},y_{3})$ to represent $I_{5322}$ as \begin{eqnarray}
& & I_{53}(x_{1},x_{2},x_{3},x_{4},x_{5}|y_{1},y_{2},y_{3})\nonumber \\
& \equiv & x_{1}y_{1}-x_{1}y_{2}+x_{1}y_{3}+x_{2}y_{2}+x_{2}y_{3}+x_{3}y_{1}+x_{3}y_{2}+x_{4}y_{1}-x_{4}y_{3}\nonumber \\
& & -x_{5}y_{1}+x_{5}y_{2}-x_{5}y_{3}-(x_{1}+x_{2}+x_{3})B-A(y_{1}+y_{2})\nonumber \\
& \leqslant & 0.\label{eq:ieq-53} \end{eqnarray} To prove inequality (\ref{eq:ieq-53}), we can apply linear inequalities (\ref{eq:co-ieq}) for $f(x_{4},\{c_{j}^{y}\})=x_{4}(y_{1}-y_{3})$, with $\{c_{j}^{y},j=1,2,3\}=\{1,0,-1\}$, and obtain \begin{eqnarray}
& & I_{53}(x_{1},x_{2},x_{3},x_{4},x_{5}|y_{1},y_{2},y_{3})\nonumber \\
& \leqslant & \theta(y_{1}-y_{3})I_{53}(x_{1},x_{2},x_{3},A,x_{5}|y_{1},y_{2},y_{3})+\theta(-y_{1}+y_{3})I_{53}(x_{1},x_{2},x_{3},0,x_{5}|y_{1},y_{2},y_{3})\nonumber \\
& = & \theta(y_{1}-y_{3})[I_{22}(x_{2},x_{1}|y_{3},y_{2})+I_{22}(x_{3},x_{5}|y_{2},y_{1})+x_{1}(y_{1}-B)-x_{5}y_{3}]\nonumber \\
& & +\theta(-y_{1}+y_{3})[I_{22}(x_{2},x_{5}|y_{2},y_{3})+I_{22}(x_{3},x_{1}|y_{1},y_{2})+x_{1}(y_{3}-B)-x_{5}y_{1}]\nonumber \\
& \leqslant & 0, \end{eqnarray}
where we have used inequality (\ref{eq:nch-c}) for $I_{22}(x_{2},x_{1}|y_{3},y_{2})$,
$I_{22}(x_{3},x_{5}|y_{2},y_{1})$, $I_{22}(x_{2},x_{5}|y_{2},y_{3})$, and $I_{22}(x_{3},x_{1}|y_{1},y_{2})$. It was shown in Refs. \citep{Deza2016,Deza2020} that there is only one $I_{5322}$, and its explicit form was first found in Ref. \citep{Quintino2014}. Here, we give a proof of the corresponding algebraic inequality using the linear inequality method.
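Because $I_{53}$ is affine in each variable separately, its maximum over the box $0\leqslant x_{i}\leqslant A$, $0\leqslant y_{j}\leqslant B$ is attained at a vertex, so inequality (\ref{eq:ieq-53}) can also be double-checked numerically by enumerating the $2^{5}\cdot2^{3}=256$ vertices. A minimal sketch in Python (with $A=B=1$, which loses no generality since $I_{53}$ scales homogeneously under $x_{i}\rightarrow Ax_{i}$, $y_{j}\rightarrow By_{j}$):

```python
from itertools import product

def I53(x, y, A=1.0, B=1.0):
    """The polynomial I_53 of Eq. (ieq-53); claimed to be <= 0 on the box."""
    x1, x2, x3, x4, x5 = x
    y1, y2, y3 = y
    return (x1*y1 - x1*y2 + x1*y3 + x2*y2 + x2*y3 + x3*y1 + x3*y2
            + x4*y1 - x4*y3 - x5*y1 + x5*y2 - x5*y3
            - (x1 + x2 + x3)*B - A*(y1 + y2))

# I_53 is affine in each variable, so its maximum over [0,1]^5 x [0,1]^3
# is attained at one of the 256 vertices of the box.
max_val = max(I53(x, y) for x in product((0.0, 1.0), repeat=5)
                        for y in product((0.0, 1.0), repeat=3))
print(max_val)  # should be <= 0, consistent with inequality (ieq-53)
```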
In the Supplemental Material \citep{Supplemental}, we summarize proofs of 257 algebraic Bell-CH inequalities using the linear inequality method. In these proofs, we can show that all inequalities can be reduced to second- and third-order ones.
\section{Bell-CH inequalities for binary settings}
\label{sec:Bell-CH}According to the ``objective local theories'' introduced by Clauser and Horne \citep{Clauser1974}, any physical system, whether classical or quantum mechanical, can be described by a state. The state is labeled by local hidden variables $\lambda$, with no further assumptions. For example, in a bipartite correlated system, $P(x,\lambda)$ is the probability density of a measurement outcome $x$ in one subsystem, and $P(x,y,\lambda)$ is the joint probability density of measurement outcomes $x$ and $y$ in the two subsystems.
From the definition of the state and local hidden variables, we can derive Bell-CH inequalities from algebraic ones. In a theory of local hidden variables, the physically detectable probability density $P(x)$ is related to the hidden variable $\lambda$, which is distributed according to the normalized probability density $\rho(\lambda)$, as \begin{equation} P(x)=\int P(x,\lambda)\rho(\lambda)d\lambda. \end{equation} Likewise, the joint probability density $P(x,y)$ is obtained as \begin{equation} P(x,y)=\int P(x,\lambda)P(y,\lambda)\rho(\lambda)d\lambda. \end{equation} Let us take the derivation of the CH inequality as an example. From the $I_{2222}$ inequality, replacing $x_{i}/A$ and $y_{i}/B$ with $P(x_{i},\lambda)$ and $P(y_{i},\lambda)$ for $i=1,2$, respectively, we obtain \begin{eqnarray} P(x_{1},\lambda)\left[P(y_{1},\lambda)+P(y_{2},\lambda)\right]+P(x_{2},\lambda)\left[P(y_{1},\lambda)-P(y_{2},\lambda)\right]-P(x_{1},\lambda)-P(y_{1},\lambda) & \leqslant & 0.\nonumber \\ \end{eqnarray} Multiplying the above inequality by $\rho(\lambda)$ and integrating over $\lambda$, we obtain the CH inequality \begin{equation} I_{2,CH}\leqslant0,\label{eq:ch} \end{equation} where $I_{2,CH}$ is defined as \begin{eqnarray} I_{2,CH} & = & P(x_{1},y_{1})+P(x_{1},y_{2})+P(x_{2},y_{1})-P(x_{2},y_{2})-P(x_{1})-P(y_{1}). \end{eqnarray}
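As an illustration that inequality (\ref{eq:ch}) is violated quantum mechanically, one can evaluate $I_{2,CH}$ for the maximally entangled state $(\left|00\right\rangle +\left|11\right\rangle )/\sqrt{2}$ with real analyzer directions, for which the joint probability of the two ``$+$'' outcomes at angles $a$ and $b$ is $\cos^{2}(a-b)/2$ and both marginals equal $1/2$. A brief numerical sketch (the probability rule and the angles below are the standard textbook choices, not quantities defined in the text above):

```python
import math

def P_joint(a, b):
    # For the maximally entangled state (|00> + |11>)/sqrt(2) with real
    # analyzer angles, the probability of both '+' outcomes is cos^2(a-b)/2.
    return math.cos(a - b) ** 2 / 2

def P_marg():
    return 0.5  # both single-party marginals of the maximally entangled state

# Standard optimal angles for the CH/CHSH setting
a1, a2 = 0.0, math.pi / 4
b1, b2 = math.pi / 8, -math.pi / 8

I_CH = (P_joint(a1, b1) + P_joint(a1, b2) + P_joint(a2, b1)
        - P_joint(a2, b2) - P_marg() - P_marg())
print(I_CH)  # (sqrt(2)-1)/2 ~ 0.2071 > 0: quantum violation of I_CH <= 0
```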
In correspondence to (\ref{eq:I2-e-r}), we obtain the inequality \begin{equation} I_{2,CH}\leqslant I_{2,CH}^{(0)}\leqslant0,\label{eq:nieq-1} \end{equation} where $I_{2,CH}^{(0)}$ is defined as \begin{eqnarray} I_{2,CH}^{(0)} & = & -\frac{1}{2}\int\left\{ \left[P(x_{1},\lambda)-P(x_{2},\lambda)\right]\left[P(y_{1},\lambda)-P(y_{2},\lambda)\right]\right.\nonumber \\
& & \left.+\left|\left[P(x_{1},\lambda)-P(x_{2},\lambda)\right]\left[P(y_{1},\lambda)-P(y_{2},\lambda)\right]\right|\right\} \rho(\lambda)d\lambda. \end{eqnarray} Here, we use Eq. (\ref{eq:ch0}) and make the replacements $x_{i}/A\rightarrow P(x_{i},\lambda)$
and $y_{i}/B\rightarrow P(y_{i},\lambda)$ for $i=1,2$, respectively, multiplied by $\rho(\lambda)$ and integrated over $\lambda$. We apply Jensen's inequality (Appendix \ref{sec:jensen}) with the convex function $\varphi(x)=\left|x\right|$, which leads to an upper bound for $I_{2,CH}^{(0)}$ and then for $I_{2,CH}$, \begin{eqnarray} I_{2,CH}\leqslant I_{2,CH}^{(0)} & \leqslant & -\frac{1}{2}\left\{ \left[P(x_{1},y_{1})-P(x_{1},y_{2})-P(x_{2},y_{1})+P(x_{2},y_{2})\right]\right.\nonumber \\
& & \left.+\left|P(x_{1},y_{1})-P(x_{1},y_{2})-P(x_{2},y_{1})+P(x_{2},y_{2})\right|\right\} ,\label{eq:nch} \end{eqnarray} where we have employed inequality (\ref{eq:nieq-1}). We can easily see that the upper bound of $I_{2,CH}^{(0)}$ is less than or equal to zero.
Another approach to our class of inequalities uses the method of linear inequalities employed in deriving the algebraic inequalities $I_{mn22}$. We take one type of Bell-CH inequality as an example. In correspondence to inequality (\ref{eq:ch-mn}), we obtain the inequality \begin{equation} I_{kk;Q}\leqslant I_{kk;Q}^{(0)}\leqslant0,\label{eq:nieq-2} \end{equation} in which we have defined \begin{eqnarray}
I_{kk;Q} & \equiv & (AB)^{-1}\int d\lambda\rho(\lambda)I_{kk}\left[AP(x_{1},\lambda),\cdots,AP(x_{k},\lambda)|BP(y_{1},\lambda),\cdots,BP(y_{k},\lambda)\right],\nonumber \\
I_{kk;Q}^{(0)} & \equiv & (AB)^{-1}\int d\lambda\rho(\lambda)I_{kk}^{(0)}\left[AP(x_{1},\lambda),\cdots,AP(x_{k},\lambda)|BP(y_{1},\lambda),\cdots,BP(y_{k},\lambda)\right].\nonumber \\ \end{eqnarray} Here, we have made the replacements in (\ref{eq:ch-mn}) $x_{i}/A\rightarrow P(x_{i},\lambda)$ and $y_{i}/B\rightarrow P(y_{i},\lambda)$ for $i=1,\ldots,k$, respectively, multiplied by $\rho(\lambda)$ and integrated over $\lambda$. The inequality $I_{kk;Q}^{(0)}\leqslant0$ leads to the following alternative inequality: \begin{eqnarray} \mathrm{max}\left\{ I_{k-1,k-1;Q}^{(1)}+\sum_{i=1}^{k-1}\left[P(x_{2},y_{i})-P(x_{2})\right],I_{k-1,k-1;Q}^{(2)}+\sum_{i=1}^{k-1}\left[P(x_{1},y_{i})-P(x_{1})\right]\right\} & \leqslant & 0,\nonumber \\ \label{eq:nch-m} \end{eqnarray} where $I_{k-1,k-1;Q}^{(1)}$ and $I_{k-1,k-1;Q}^{(2)}$ are defined as \begin{eqnarray} I_{k-1,k-1;Q}^{(1)} & \equiv & \sum_{j=1}^{k-1}\sum_{i=1,i\neq2}^{k-j}P(x_{i},y_{j})-\sum_{i=3}^{k-1}P(x_{i},y_{k+1-i})-\sum_{i=1,i\neq2}^{k-2}(k-1-i)P(x_{i})-P(y_{1}),\nonumber \\ I_{k-1,k-1;Q}^{(2)} & \equiv & \sum_{j=1}^{k-1}\sum_{i=2}^{k-j}P(x_{i},y_{j})-\sum_{i=3}^{k-1}P(x_{i},y_{k+1-i})-\sum_{i=2}^{k-2}(k-1-i)P(x_{i})-P(y_{1}). \end{eqnarray} Note that this inequality can be tested in physical systems.
Applying this method to the $I_{mn22}$ inequalities, we can obtain a set of Bell-CH inequalities, which is summarized in the Supplemental Material \citep{Supplemental}. In the next section, we will test our inequalities with special quantum entangled states.
\section{Testing our Bell-CH inequalities with quantum entangled states}
\label{sec:testing}To test our class of Bell-CH inequalities, we consider a simple quantum system of two qubits. The entangled two-qubit states we will use in the test are parameterized as \begin{equation}
\left|\psi(\theta)\right\rangle =\cos\theta\left|00\right\rangle +\sin\theta\left|11\right\rangle .\label{eq:t-state} \end{equation} For such quantum states, the correlation probabilities can be obtained from expectation values of operators acting on the Hilbert space, \begin{eqnarray}
P(x_{i}) & = & \left\langle \psi(\theta)\right|x_{i}\varotimes I_{y}\left|\psi(\theta)\right\rangle ,\nonumber \\
P(y_{j}) & = & \left\langle \psi(\theta)\right|I_{x}\varotimes y_{j}\left|\psi(\theta)\right\rangle ,\nonumber \\
P(x_{i},y_{j}) & = & \left\langle \psi(\theta)\right|x_{i}\varotimes y_{j}\left|\psi(\theta)\right\rangle , \end{eqnarray} with $x_{i}$ $(y_{j})$ being the projectors measured by Alice (Bob) on $x$ ($y$), and $I_{x}$ $(I_{y})$ being the unit operators acting on $x$ ($y$).
For the entangled states (\ref{eq:t-state}), we define $Q$, $Q_{a}$, and $Q_{b}$ as the maximum violations of $I_{mn22}$ and of the corresponding Bell-CH inequalities $I_{mn22}^{a}$ and $I_{mn22}^{b}$ presented in the Supplemental Material \citep{Supplemental}, respectively. For the test, we have calculated the maximum violations of our inequalities for all $I_{mn22}$ listed in \citep{Supplemental}. For the states with maximum violation, characterized by $\theta_{max}$, we have also calculated the resistance to noise $\lambda_{max}$, defined through the mixed state \begin{equation}
\rho=\lambda\left|\psi(\theta_{max})\right\rangle \left\langle \psi(\theta_{max})\right|+(1-\lambda)\frac{I}{4}, \end{equation} as the critical value of $\lambda$ at which this state marginally ceases to violate the inequality; here $1-\lambda$ is the fraction of white noise. In Table 1, we give selected results for 32 $I_{mn22}$ and our corresponding Bell-CH inequalities, taken from the complete table in the Supplemental Material \citep{Supplemental}.
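For the CH inequality $I_{2222}$ itself, this noise threshold can be computed in closed form and serves as a sanity check on the definition of $\lambda_{max}$. A sketch under the standard assumptions that the maximal quantum value of $I_{2,CH}$ is $(\sqrt{2}-1)/2$ and that white noise, for which every joint probability is $1/4$ and every marginal $1/2$, gives $I_{2,CH}=-1/2$:

```python
import math

I_Q = (math.sqrt(2) - 1) / 2   # maximal quantum value of I_{2,CH}
# White noise I/4: every joint probability is 1/4, every marginal 1/2, so
# I_{2,CH} = 1/4 + 1/4 + 1/4 - 1/4 - 1/2 - 1/2 = -1/2.
I_noise = -0.5

# The mixture rho = lam*|psi><psi| + (1-lam)*I/4 interpolates I_{2,CH}
# linearly in lam; lam_max is where the violation vanishes.
lam_max = -I_noise / (I_Q - I_noise)
print(lam_max)  # 1/sqrt(2) ~ 0.7071
```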
\begin{longtable}[c]{>{\centering}m{1.5cm}>{\centering}m{1.5cm}>{\centering}m{1.5cm}>{\centering}m{1.5cm}>{\centering}m{1.5cm}>{\centering}m{1.5cm}>{\centering}m{1.5cm}>{\centering}m{1.5cm}>{\centering}m{1.5cm}>{\centering}m{1.5cm}} \caption{Maximum violations $Q$, $Q_{a}$, and $Q_{b}$; related parameters $\theta_{max}/\pi$, $\theta_{max}^{a}/\pi$, and $\theta_{max}^{b}/\pi$; and resistances to noise $\lambda_{max}$, $\lambda_{max}^{a}$, and $\lambda_{max}^{b}$ of 32 $Imn22$ (here we use the original names in the literature to represent them) and corresponding Bell-CH inequalities $Imn22^{a}$ and $Imn22^{b}$.} \tabularnewline \midrule Name & $Q$ & $\theta_{max}/\pi$ & $\lambda_{max}$ & $Q_{a}$ & $\theta_{max}^{a}/\pi$ & $\lambda_{max}^{a}$ & $Q_{b}$ & $\theta_{max}^{b}/\pi$ & $\lambda_{max}^{b}$\tabularnewline \midrule \endfirsthead \caption{(Continued.)} \tabularnewline \midrule Name & $Q$ & $\theta_{max}/\pi$ & $\lambda_{max}$ & $Q_{a}$ & $\theta_{max}^{a}/\pi$ & $\lambda_{max}^{a}$ & $Q_{b}$ & $\theta_{max}^{b}/\pi$ & $\lambda_{max}^{b}$\tabularnewline \midrule \endhead \midrule $I_{3322}$ & 0.25 & 0.25 & 0.8 & 0.2071 & 0.25 & 0.7836 & - & - & -\tabularnewline \midrule $I_{4422}^{4}$ & 0.056 & 0.1316 & 0.9728 & 0.2361 & 0.2332 & 0.864 & 0.4142 & 0.25 & 0.7071\tabularnewline \midrule $I_{4422}^{14}$ & 0.4103 & 0.238 & 0.8298 & 0.4282 & 0.2377 & 0.8034 & 0.3793 & 0.2304 & 0.7981\tabularnewline \midrule $I_{4422}^{16}$ & 0.2407 & 0.219 & 0.8791 & 0.226 & 0.2362 & 0.8691 & 0.2071 & 0.25 & 0.8579\tabularnewline \midrule $I_{4422}^{18}$ & 0.1812 & 0.168 & 0.9508 & 0.2983 & 0.2195 & 0.9096 & 0.5436 & 0.2278 & 0.7863\tabularnewline \midrule $J_{4422}^{3}$ & 0.7249 & 0.2448 & 0.838 & 0.7337 & 0.2419 & 0.8267 & - & - & -\tabularnewline \midrule $J_{4422}^{4}$ & 0.6862 & 0.2452 & 0.8138 & 0.6579 & 0.2426 & 0.7917 & - & - & -\tabularnewline \midrule $J_{4422}^{5}$ & 0.5007 & 0.2404 & 0.8749 & 0.4434 & 0.2303 & 0.8494 & - & - & -\tabularnewline \midrule $J_{4422}^{7}$ & 
0.3642 & 0.2304 & 0.8917 & 0.6057 & 0.2376 & 0.7675 & - & - & -\tabularnewline \midrule $J_{4422}^{8}$ & 0.2657 & 0.19 & 0.9186 & 0.2772 & 0.2057 & 0.9002 & - & - & -\tabularnewline \midrule $J_{4422}^{12}$ & 0.4198 & 0.25 & 0.9147 & 0.5797 & 0.2407 & 0.8381 & - & - & -\tabularnewline \midrule $J_{4422}^{13}$ & 0.5629 & 0.2422 & 0.8766 & 0.5541 & 0.2213 & 0.8633 & 0.636 & 0.2421 & 0.8251\tabularnewline \midrule $J_{4422}^{15}$ & 0.6133 & 0.2433 & 0.8412 & 0.5668 & 0.2416 & 0.8411 & - & - & -\tabularnewline \midrule $J_{4422}^{21}$ & 0.5441 & 0.2397 & 0.8655 & 0.5717 & 0.2255 & 0.8279 & - & - & -\tabularnewline \midrule $J_{4422}^{30}$ & 0.2459 & 0.1832 & 0.9242 & 0.4201 & 0.2398 & 0.8264 & - & - & -\tabularnewline \midrule $J_{4422}^{31}$ & 0.2133 & 0.1847 & 0.9336 & 0.3796 & 0.2432 & 0.8556 & - & - & -\tabularnewline \midrule $J_{4422}^{34}$ & 0.4075 & 0.2377 & 0.8804 & 0.3925 & 0.2318 & 0.8515 & - & - & -\tabularnewline \midrule $J_{4422}^{42}$ & 0.6012 & 0.2362 & 0.8331 & 0.6722 & 0.2462 & 0.7881 & - & - & -\tabularnewline \midrule $J_{4422}^{51}$ & 0.6678 & 0.2419 & 0.8046 & 0.675 & 0.2466 & 0.7874 & - & - & -\tabularnewline \midrule $J_{4422}^{88}$ & 0.616 & 0.2468 & 0.7851 & 0.5956 & 0.2446 & 0.7705 & - & - & -\tabularnewline \midrule $J_{4422}^{113}$ & 0.8196 & 0.2399 & 0.83 & 0.8484 & 0.2448 & 0.793 & - & - & -\tabularnewline \midrule $N_{4422}^{6}$ & 0.5972 & 0.2496 & 0.8543 & 0.5695 & 0.2426 & 0.8404 & - & - & -\tabularnewline \midrule $N_{4422}^{9}$ & 0.7399 & 0.2445 & 0.8352 & 0.8259 & 0.2404 & 0.7841 & - & - & -\tabularnewline \midrule $N_{4422}^{10}$ & 0.5 & 0.25 & 0.8333 & 0.4331 & 0.2449 & 0.822 & - & - & -\tabularnewline \midrule $A_{10}$ & 0.4154 & 0.229 & 0.8082 & 0.3944 & 0.2388 & 0.7918 & - & - & -\tabularnewline \midrule $A_{11}$ & 0.4561 & 0.2379 & 0.7933 & 0.3944 & 0.2388 & 0.7918 & - & - & -\tabularnewline \midrule $A_{13}$ & 0.4031 & 0.2375 & 0.8128 & 0.4142 & 0.25 & 0.7836 & 0.4142 & 0.25 & 0.7836\tabularnewline \midrule $A_{16}$ & 0.416 
& 0.2402 & 0.8278 & 0.4353 & 0.2447 & 0.7751 & 0.3944 & 0.2388 & 0.7601\tabularnewline \midrule $A_{34}$ & 0.514 & 0.2461 & 0.7956 & 0.535 & 0.2476 & 0.7659 & - & - & -\tabularnewline \midrule $A_{69}$ & 0.3304 & 0.2245 & 0.8833 & 0.4459 & 0.2393 & 0.8177 & 0.5148 & 0.25 & 0.7727\tabularnewline \midrule $A_{83}$ & 0.6962 & 0.2438 & 0.798 & 0.62 & 0.2424 & 0.784 & - & - & -\tabularnewline \midrule $A_{88}$ & 0.0768 & 0.1575 & 0.9702 & 0.197 & 0.2356 & 0.9103 & 0.25 & 0.25 & 0.8571\tabularnewline \bottomrule \end{longtable}
For all $I_{mn22}$, we find that our Bell-CH inequalities are violated by the entangled states (\ref{eq:t-state}). According to the preceding section, our inequalities can be decomposed into groups of inequalities formed by combinations of some original low-order $I_{mn22}$; the nonlocality implied by the violation of a high-order $I_{mn22}$ can thus be replaced by that of our inequalities. Moreover, the resistance to noise for some of these inequalities is lower than that of the original ones; hence, our inequalities could be better candidates for testing nonlocality in some physical systems.
\section{Summary and conclusions}
\label{sec:summary}In this work, the rearrangement-inequality and linear-inequality methods were employed to prove Bell-CH inequalities, and alternative types of Bell-CH inequalities were found to be violated by entangled states. The main results are summarized as follows. First, a large variety of $I_{mn22}$ ($m,n\le5$) can be derived easily through the rearrangement inequality and the linear inequality. Second, starting from $I_{2222}$, i.e., the CH inequality, we derived the inequality (\ref{eq:nch}) using the rearrangement inequality method. Third, every original $I_{mn22}$ can be replaced by the maximum of combinations of lower-order $I_{mn22}$ using the linear inequality method, and these can be violated by some entangled states, with violations $Q_{a}$ and $Q_{b}$.
This work can help us understand the mathematical structures of Bell-CH inequalities. A number of interesting topics are open for future studies. One could investigate a set of Bell-CH inequalities for multipartite systems using the method of the linear inequality. Appropriate ways might be found to test our Bell-CH inequalities, especially inequality (\ref{eq:nch}), in systems of optics, high-energy physics, or condensed matter. \begin{acknowledgments} C.Q., Y.-G.Y., and Q.W. are supported in part by the National Natural Science Foundation of China (NSFC) under Grants No. 11890713 (a subgrant of Grant No. 11890710), No. 11947301, and No. 12047502. C.-F.Q. is supported in part by the National Natural Science Foundation of China (NSFC) under Grants No. 11975236 and No. 11635009. \end{acknowledgments}
\onecolumngrid
\appendix
\section{Rearrangement inequality}
\label{sec:rearrange}The rearrangement inequality (or the permutation inequality) is \begin{equation} x_{n}y_{1}+\cdots+x_{1}y_{n}\leqslant x_{\sigma(1)}y_{1}+\cdots+x_{\sigma(n)}y_{n}\leqslant x_{1}y_{1}+\cdots+x_{n}y_{n} \end{equation} for every permutation $\sigma$ of $(1,\ldots,n)$, with $n$ real numbers $x_{1},...,x_{n}$ which satisfy \begin{align} x_{1} & \leqslant\cdots\leqslant x_{n} \end{align} and $n$ real numbers $y_{1},...,y_{n}$ which satisfy \begin{equation} y_{1}\leqslant\cdots\leqslant y_{n}. \end{equation}
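Both bounds can be verified directly for small $n$ by enumerating all permutations; a minimal check in Python (the particular sequences are an arbitrary illustration):

```python
from itertools import permutations

def dot(xs, ys):
    return sum(x * y for x, y in zip(xs, ys))

# Increasing sequences x_1 <= ... <= x_n and y_1 <= ... <= y_n
xs = [1, 2, 4, 7]
ys = [-1, 0, 3, 5]

values = [dot([xs[i] for i in perm], ys) for perm in permutations(range(4))]
lower = dot(list(reversed(xs)), ys)  # reversed order minimizes the sum
upper = dot(xs, ys)                  # identity permutation maximizes it
assert lower == min(values) and upper == max(values)
```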
If the numbers are different, that is to say, $x_{1}<\cdots<x_{n}$ and $y_{1}<\cdots<y_{n}$, then the lower bound is obtained only for the permutation which reverses the order, i.e., $\sigma(i)=n-i+1$ for all $i=1,...,n$, and the upper bound is obtained only for the identity permutation, i.e., $\sigma(i)=i$ for all $i=1,...,n$.
\section{Jensen's inequality}
\label{sec:jensen}Suppose $f(x)$ is a non-negative measurable function satisfying \begin{equation} \int_{-\infty}^{\infty}f(x)dx=1, \end{equation} i.e., a probability density function in the probabilistic view. Jensen's inequality for convex functions states that \begin{equation} \varphi\left(\int_{-\infty}^{\infty}g(x)f(x)dx\right)\leqslant\int_{-\infty}^{\infty}\varphi(g(x))f(x)dx \end{equation} for any real-valued measurable function $g(x)$ and any convex function $\varphi$ on the range of $g$. If $g(x)=x$, then Jensen's inequality reduces to \begin{equation} \varphi\left(\int_{-\infty}^{\infty}xf(x)dx\right)\leqslant\int_{-\infty}^{\infty}\varphi(x)f(x)dx. \end{equation}
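The special case $\varphi(x)=|x|$ used in deriving inequality (\ref{eq:nch}) can be checked numerically with a discretized density; a minimal sketch (the random weights and values are an arbitrary illustration):

```python
import random

random.seed(0)
# Discretize: weights f_i >= 0 summing to 1 play the role of the density f(x).
n = 100
w = [random.random() for _ in range(n)]
total = sum(w)
f = [wi / total for wi in w]
g = [random.uniform(-5, 5) for _ in range(n)]

phi = abs  # the convex function used in the text
lhs = phi(sum(gi * fi for gi, fi in zip(g, f)))
rhs = sum(phi(gi) * fi for gi, fi in zip(g, f))
print(lhs <= rhs)  # Jensen: phi(E[g]) <= E[phi(g)]
```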
\end{document}
\begin{document}
\begin{abstract} Projectively coresolved Gorenstein flat modules were introduced by Saroch and Stovicek and were shown to be Gorenstein projective. We give characterizations of Gorenstein projective, Gorenstein flat and projectively coresolved Gorenstein flat modules over a group ring $RG$, where $G$ is an $\textsc{\textbf{lh}}\mathfrak{F}$-group or a group of type $\Phi_R$ and $R$ is a commutative ring of finite Gorenstein weak global dimension. In this situation, we prove that every Gorenstein projective $RG$-module is projectively coresolved Gorenstein flat. We deduce that every Gorenstein projective $RG$-module is Gorenstein flat. The existence of weak characteristic modules for a group $G$ over a commutative ring $R$ plays a central role in our results. Furthermore, we determine the Gorenstein homological dimension of an $\textsc{\textbf{lh}}\mathfrak{F}$-group over a commutative ring of finite Gorenstein weak global dimension. \end{abstract}
\title{Gorenstein modules and dimension over large families of infinite groups}
\section{Introduction}Gorenstein homological algebra is the relative homological algebra, which is based on Gorenstein projective, Gorenstein injective and Gorenstein flat modules. The standard reference for these modules and the relative dimensions which are based on them is \cite{H1}. Recently, Saroch and Stovicek \cite{SS} introduced the notion of projectively coresolved Gorenstein flat modules (PGF modules, for short). Over a ring $R$, these modules are the syzygies of the acyclic complexes of projective modules that remain acyclic after applying the functor $I\otimes_R \_\!\_$ for every injective module $I$. It is clear that PGF modules are Gorenstein flat. While in classical homological algebra every projective module is flat, the relation between Gorenstein projective and Gorenstein flat modules is still mysterious. As shown in \cite[Theorem 4.4]{SS}, every PGF module is Gorenstein projective. A natural question is whether the class of Gorenstein projective modules is contained in the class of PGF modules.
In this paper, we study Gorenstein projective, Gorenstein flat and PGF modules over the group algebra $RG$ of a group $G$ with coefficients in a commutative ring $R$, and give characterizations of them in terms of the module $B(G,R)$ of all bounded $R$-valued functions on $G$. Assuming that the commutative ring $R$ is of finite Gorenstein weak global dimension and $G$ is either an $\textsc{\textbf{lh}}\mathfrak{F}$-group or of type $\Phi_R$, we prove that every weak Gorenstein flat module is Gorenstein flat and every weak Gorenstein projective module is PGF. By doing so, we conclude that every Gorenstein projective $RG$-module is Gorenstein flat. Moreover, we determine the Gorenstein homological dimension of an $\textsc{\textbf{lh}}\mathfrak{F}$-group over a commutative ring of finite Gorenstein weak global dimension. We note that the condition that the commutative ring $R$ be of finite Gorenstein weak global dimension is equivalent to the finiteness of $\textrm{sfli}R$, where $\textrm{sfli}R$ is the supremum of the flat lengths (dimensions) of injective $R$-modules (see \cite[Theorem 2.4]{CET}). Our methods are based on the notion of a weak characteristic module for $G$, i.e.\ an $R$-flat $RG$-module $A$ with $\textrm{fd}_{RG}A<\infty$ that admits an $R$-pure $RG$-monomorphism $0 \rightarrow R \rightarrow A$. The notion of a weak characteristic module generalizes the characteristic modules which were used to prove many properties of the Gorenstein cohomological dimension $\textrm{Gcd}_R G$ of a group $G$ (see \cite{BDT,Tal}). As shown in \cite[Theorem 5.10]{St}, for every commutative ring $R$ of finite Gorenstein weak global dimension, the existence of a weak characteristic module for a group $G$ over $R$ is equivalent to the finiteness of the Gorenstein homological dimension $\textrm{Ghd}_R G$ of $G$. Furthermore, we make use of the stability properties of Gorenstein flat and PGF modules established in \cite{BK} and \cite{St}. 
Finally, in the case of an $\textsc{\textbf{lh}}\mathfrak{F}$-group, our arguments rely on transfinite induction.
In Section 2, we establish notation, terminology and preliminary results that will be used in the sequel. In Sections 3 and 4 we consider a commutative ring $R$ of finite Gorenstein weak global dimension and a group $G$ such that there exists a weak characteristic module for $G$ over $R$. By noting first that the tensor product of a weak Gorenstein flat $RG$-module and an $R$-flat module is Gorenstein flat (with diagonal action), we prove in Section 3 that the class of Gorenstein flat $RG$-modules coincides with the class of modules which are syzygies in a doubly infinite exact sequence of flat $RG$-modules, and moreover with the class of $RG$-modules which, after being tensored with $B(G,R)$, yield a Gorenstein flat module with diagonal action (see Theorem 3.7). In Section 4, we note first that the tensor product of a weak Gorenstein projective $RG$-module and an $R$-projective module is PGF (with diagonal action). Working similarly to Section 3, we prove that the class of PGF $RG$-modules coincides with the class of modules which are syzygies in a doubly infinite exact sequence of projective $RG$-modules, and moreover with the class of $RG$-modules which, after being tensored with $B(G,R)$, yield a PGF module with diagonal action. In this way, we also infer that the class of Gorenstein projective $RG$-modules coincides with the class of PGF $RG$-modules, and hence every Gorenstein projective $RG$-module is Gorenstein flat (see Theorem 4.7). This result is noteworthy, since it is not known whether all Gorenstein projective modules are Gorenstein flat over an arbitrary ring. Since for every group of type $\Phi_R$ the $RG$-module $B(G,R)$ is a weak characteristic module, we obtain in Sections 3 and 4 similar results for this class of groups.
In Sections 5 and 6, we consider a commutative ring of finite Gorenstein weak global dimension and an $\textsc{\textbf{lh}}\mathfrak{F}$-group $G$. Under these assumptions, we prove the same characterizations of Gorenstein flat, Gorenstein projective and PGF $RG$-modules as in Sections 3 and 4. It seems that, under the assumption that the commutative ring $R$ is of finite Gorenstein weak global dimension, the existence of a weak characteristic module for a group $G$ over $R$ is essentially equivalent to $G$ being an $\textsc{\textbf{lh}}\mathfrak{F}$-group.
In our final section, we study the Gorenstein homological dimension $\textrm{Ghd}_{R}G$ of an $\textsc{\textbf{lh}}\mathfrak{F}$-group $G$ over a commutative ring $R$ of finite Gorenstein weak global dimension and we prove that $\textrm{Ghd}_{R}G=\textrm{fd}_{RG}B(G,R)$ (see Theorem 7.7). For this purpose, we first prove that $\textrm{f.k}(RG)=\textrm{sfli}(RG)=\textrm{fin.f.dim}(RG)$, where we denote by $\textrm{f.k}(RG)$ the supremum of the flat dimensions of $RG$-modules $M$ which have finite flat dimension over every finite subgroup of $G$ (see Corollary 7.5). The Gorenstein cohomological dimension $\textrm{Gcd}_{R}G$ of an $\textsc{\textbf{lh}}\mathfrak{F}$-group $G$ over a commutative ring $R$ of finite global dimension was studied in \cite[Theorem 3.1]{Bis2} and \cite[Theorem A.1]{ET2}.
\noindent {\em Terminology.} All rings are assumed to be associative and unital and all ring homomorphisms will be unit preserving. Unless otherwise specified, all modules will be left $R$-modules.
\section{Preliminaries} In this section, we collect certain notions and preliminary results that will be used in the sequel. \subsection{Gorenstein projective, Gorenstein flat and PGF modules.} An acyclic complex $\textbf{P}$ of projective modules is said to be a complete projective resolution if the complex of abelian groups $\mbox{Hom}_R(\textbf{P},Q)$ is acyclic for every projective module $Q$. Then, a module is Gorenstein projective if it is a syzygy of a complete projective resolution. We let ${\tt GProj}(R)$ be the class of Gorenstein projective modules. The Gorenstein projective dimension $\mbox{Gpd}_RM$ of a module $M$ is the length of a shortest resolution of $M$ by Gorenstein projective modules. If no such resolution of finite length exists, then we write $\mbox{Gpd}_RM = \infty$.
An acyclic complex $\textbf{F}$ of flat modules is said to be a complete flat resolution if the complex of abelian groups $I \otimes_R \textbf{F}$ is acyclic for every injective right module $I$. Then, a module is Gorenstein flat if it is a syzygy of a complete flat resolution. We let ${\tt GFlat}(R)$ be the class of Gorenstein flat modules. The Gorenstein flat dimension $\mbox{Gfd}_RM$ of a module $M$ is the length of a shortest resolution of $M$ by Gorenstein flat modules. If no such resolution of finite length exists, then we write $\mbox{Gfd}_RM = \infty$.
The projectively coresolved Gorenstein flat modules (PGF-modules, for short) were introduced by Saroch and Stovicek \cite{SS}. Such a module is a syzygy of an acyclic complex of projective modules $\textbf{P}$, which is such that the complex of abelian groups $I \otimes_R \textbf{P}$ is acyclic for every injective module $I$. It is clear that the class ${\tt PGF}(R)$ of PGF modules is contained in ${\tt GFlat}(R)$. The inclusion ${\tt PGF}(R) \subseteq {\tt GProj}(R)$ is proved in \cite[Theorem 4.4]{SS}. Moreover, the class of PGF $R$-modules, is closed under extensions, direct sums, direct summands and kernels of epimorphisms. The PGF dimension $\mbox{PGF-dim}_RM$ of a module $M$ is the length of a shortest resolution of $M$ by PGF modules. If no such resolution of finite length exists, then we write $\mbox{PGF-dim}_RM = \infty$ (see \cite{DE}).
\subsection{Group rings.}Let $R$ be a commutative ring, $G$ be a group and consider the associated group ring $RG$. The standard reference for group cohomology is \cite{Br}. The tensor product $M\otimes_R N$ of two $RG$-modules is itself an $RG$-module with the diagonal action of $G$: we define $g\cdot (x \otimes y)=gx \otimes gy \in M\otimes_R N$ for every $g\in G$, $x\in M$ and $y\in N$. We note that for every projective $RG$-module $M$ and every $R$-projective $RG$-module $N$, the diagonal $RG$-module $M\otimes_R N$ is also projective. Similarly, for every flat $RG$-module $M$ and every $R$-flat $RG$-module $N$, the diagonal $RG$-module $M\otimes_R N$ is also flat. Indeed, since the class ${\tt Flat}(RG)$ of flat $RG$-modules is closed under filtered colimits and direct sums, invoking the Govorov-Lazard theorem, we may assume that $M=RG$.
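For the reduction to the case $M=RG$, the key point is the standard ``untwisting'' isomorphism: for any $RG$-module $N$, the diagonal module $RG\otimes_{R}N$ is isomorphic to the induced module $RG\otimes_{R}N_{0}$, where $N_{0}$ denotes the underlying $R$-module of $N$ with $G$ acting on the left tensor factor only, via

```latex
RG\otimes_{R}N \xrightarrow{\;\cong\;} RG\otimes_{R}N_{0},
\qquad g\otimes n \longmapsto g\otimes g^{-1}n,
\qquad g\in G,\; n\in N.
```

Hence, if $N$ is $R$-flat, then the diagonal module $RG\otimes_{R}N$ is a flat $RG$-module, being induced from a flat $R$-module.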
\subsection{$\textsc{\textbf{lh}}\mathfrak{F}$-groups and groups of type $\Phi_R$} The class $\textsc{\textbf{h}}\mathfrak{F}$ was defined by Kropholler in \cite{Kr}. This is the smallest class of groups, which contains the class $\mathfrak{F}$ of finite groups and is such that whenever a group $G$ admits a finite dimensional contractible $G$-CW-complex with stabilizers in $\textsc{\textbf{h}}\mathfrak{F}$, then we also have $G\in \textsc{\textbf{h}}\mathfrak{F}$. More precisely, we define $\textsc{\textbf{h}}_0\mathfrak{F}:=\mathfrak{F}$, and for every ordinal number $\alpha>0$, we say that a group $G$ belongs to the class $\textsc{\textbf{h}}_{\alpha}\mathfrak{F}$ iff there exists a finite dimensional contractible CW-complex on which $G$ acts such that every isotropy subgroup of the action belongs to $\textsc{\textbf{h}}_{\beta}\mathfrak{F}$ for some ordinal $\beta < \alpha$. A group belongs to the class $\textsc{\textbf{h}}\mathfrak{F}$, if it belongs to the class $\textsc{\textbf{h}}_{\alpha}\mathfrak{F}$ for some ordinal $\alpha$. The class $\textsc{\textbf{lh}}\mathfrak{F}$ consists of those groups, all of whose finitely generated subgroups are in $\textsc{\textbf{h}}\mathfrak{F}$. All soluble groups, all groups of finite virtual cohomological dimension and all automorphism groups of Noetherian modules over a commutative ring are $\textsc{\textbf{lh}}\mathfrak{F}$-groups. The class $\textsc{\textbf{lh}}\mathfrak{F}$ is closed under extensions, ascending unions, free products with amalgamation and HNN extensions.
A group $G$ is said to be of type $\Phi_R$ if it has the property that for every $RG$-module $M$, $\textrm{pd}_{RG}M<\infty$ if and only if $\textrm{pd}_{RH}M<\infty$ for every finite subgroup $H$ of $G$. These groups were defined over $\mathbb{Z}$ in \cite{Ta}. Over a commutative ring $R$ of finite global dimension, every group of finite virtual cohomological dimension and every group which acts on a tree with finite stabilizers is of type $\Phi_R$ (see \cite[Corollary 2.6]{MS}).
Let $B(G,R)$ be the $RG$-module which consists of all functions from $G$ to $R$ whose image is a finite subset of $R$. The $RG$-module $B(G,R)$ is $R$-free and $RH$-free for every finite subgroup $H$ of $G$. For every element $\lambda \in R$, the constant function $\iota(\lambda)\in B(G,R)$ with value $\lambda$ is invariant under the action of $G$. The map $\iota: R \rightarrow B(G,R)$ which is defined in this way is then $RG$-linear and $R$-split. Indeed, for every fixed element $g\in G$, there exists an $R$-linear splitting for $\iota$ by evaluating functions at $g$. Moreover, the cokernel $\overline{B}(G,R)$ of $\iota$ is $R$-free (see \cite[Lemma 3.3]{Kr2} and \cite[Lemma 3.4]{BC}). We note that $\textrm{pd}_{RG}B(G,R)<\infty$ over any group $G$ of type $\Phi_R$. Thus, $B(G,R)$ is a (weak) characteristic module for every group $G$ of type $\Phi_R$ over any commutative ring $R$.
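The $R$-linear splitting of $\iota$ mentioned above can be written explicitly: for a fixed element $g\in G$, evaluation at $g$ gives an $R$-linear map

```latex
\pi_{g}\colon B(G,R)\longrightarrow R,
\qquad f\longmapsto f(g),
\qquad \pi_{g}\circ\iota=\mathrm{id}_{R}.
```

Note that $\pi_{g}$ is only $R$-linear and not $RG$-linear, which is why $\iota$ is $R$-split but in general not $RG$-split.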
\subsection{Gedrich-Gruenberg invariants and Gorenstein global dimensions}The invariants $\textrm{silp}R$, $\textrm{spli}R$ were defined by Gedrich and Gruenberg in \cite{GG} as the supremum of the injective lengths (dimensions) of projective modules and the supremum of the projective lengths (dimensions) of injective modules, respectively. The invariant $\textrm{sfli}R$ is defined similarly as the supremum of the flat lengths (dimensions) of injective modules. Since projective modules are flat, the inequality $\textrm{sfli}R\leq \textrm{spli}R$ is clear. Moreover, for every commutative ring $R$ we have the inequality $\textrm{silp}R\leq\textrm{spli}R$, with equality if $\textrm{spli}R<\infty$ (see \cite[Corollary 5.4]{DE}). Thus, for every commutative ring $R$, invoking \cite[Theorem 4.1]{Emm3}, we infer that the finiteness of $\textrm{spli}R$ is equivalent to the finiteness of $\textrm{Ggl.dim}R$, and then $\textrm{Ggl.dim}R=\textrm{spli}R$. Furthermore, for every commutative ring $R$, invoking \cite[Theorem 2.4]{CET}, we infer that the finiteness of $\textrm{sfli}R$ is equivalent to the finiteness of $\textrm{Gwgl.dim}R$, and then $\textrm{Gwgl.dim}R=\textrm{sfli}R$.
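The relations between these invariants recorded above may be summarized, for every commutative ring $R$, as
\begin{equation*}
\textrm{sfli}R\leq \textrm{spli}R, \qquad \textrm{silp}R\leq \textrm{spli}R, \qquad \textrm{Ggl.dim}R=\textrm{spli}R \quad \text{and} \quad \textrm{Gwgl.dim}R=\textrm{sfli}R,
\end{equation*}
where the last two equalities are to be interpreted in the sense that the finiteness of either side implies the finiteness of the other, in which case the two quantities agree.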
\subsection{Weak Gorenstein modules}Let $R$ be a commutative ring. We denote by ${\tt WGProj}(R)$ the class of modules which are syzygies of an acyclic complex of projective modules. We note that ${\tt GProj}(R)\subseteq {\tt WGProj}(R)$ and ${\tt PGF}(R)\subseteq {\tt WGProj}(R)$. If $\textrm{sfli}R<\infty$, then ${\tt WGProj}(R)\subseteq {\tt PGF}(R)$, and hence ${\tt WGProj}(R)={\tt PGF}(R)={\tt GProj}(R)$ (see \cite[Theorem 4.4]{SS}).
Analogously, we denote by ${\tt WGFlat}(R)$ the class of modules which are syzygies of an acyclic complex of flat modules. We note that ${\tt GFlat}(R)\subseteq {\tt WGFlat}(R)$. Moreover, the finiteness of $\textrm{sfli}R$ implies that ${\tt WGFlat}(R)\subseteq {\tt GFlat}(R)$, and hence ${\tt WGFlat}(R)={\tt GFlat}(R)$.
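In summary, over any commutative ring $R$ we have the inclusions
\begin{equation*}
{\tt GProj}(R)\subseteq {\tt WGProj}(R), \qquad {\tt PGF}(R)\subseteq {\tt WGProj}(R), \qquad {\tt GFlat}(R)\subseteq {\tt WGFlat}(R),
\end{equation*}
and all of these inclusions are equalities whenever $\textrm{sfli}R<\infty$.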
\subsection{Weak characteristic modules}Let $R$ be a commutative ring and $G$ be a group. We define a weak characteristic module for $G$ over $R$ as an $R$-flat $RG$-module $A$ with $\textrm{fd}_{RG}A <\infty$, which admits an $R$-pure $RG$-linear monomorphism $\jmath: R \rightarrow A$. We note that the existence of a weak characteristic module is equivalent to the existence of an $R$-projective $RG$-module $A'$ with $\textrm{fd}_{RG}A' <\infty$, which admits an $R$-split $RG$-linear monomorphism $\jmath': R \rightarrow A'$ (see \cite[Theorem 5.10]{St}). If $\textrm{sfli}R<\infty$, the existence of a weak characteristic module for $G$ over $R$ is equivalent to the finiteness of $\textrm{sfli}(RG)$ (see \cite[Theorem 5.10]{St}).
\section{Gorenstein flat modules over groups with weak characteristic modules} We consider a commutative ring $R$ such that $\textrm{sfli}R<\infty$ and a group $G$ such that there exists a weak characteristic module for $G$ over $R$. Our goal in this section is to give a characterization of the class ${\tt GFlat}(RG)$, in terms of the $RG$-module $B(G,R)$. Moreover, under these conditions, we conclude that the class ${\tt GFlat}(RG)$ coincides with the class ${\tt WGFlat}(RG)$. Since for every group of type $\Phi_R$, the $RG$-module $B(G,R)$ is weak characteristic, similar results are obtained.
\begin{Proposition}\label{newlemmad}Let $R$ be a commutative ring such that $\textrm{sfli}R<\infty$ and $G$ be a group such that there exists a weak characteristic module for $G$ over $R$. Then, for all $RG$-modules $M$, $N$ such that $M$ is weak Gorenstein flat and $N$ is $R$-flat, the $RG$-module $M\otimes_R N$ is Gorenstein flat. \end{Proposition}
\begin{proof}Let $M$ be a weak Gorenstein flat $RG$-module and $N$ be an $R$-flat $RG$-module. Then, there exists an acyclic complex of flat $RG$-modules $$\textbf{F}=\cdots \rightarrow F_{2}\rightarrow F_1\rightarrow F_0 \rightarrow F_{-1}\rightarrow \cdots,$$ such that $M=\textrm{Im}(F_1 \rightarrow F_0)$. Since $N$ is $R$-flat, we obtain the induced complex of flat $RG$-modules (with diagonal action) $$\textbf{F}\otimes_R N = \cdots \rightarrow F_{2}\otimes_R N\rightarrow F_1\otimes_R N\rightarrow F_0\otimes_R N \rightarrow F_{-1}\otimes_R N\rightarrow \cdots,$$ where $M\otimes_R N= \textrm{Im}(F_1\otimes_R N \rightarrow F_0\otimes_R N)$. Since $\textrm{sfli}R<\infty$, the existence of a weak characteristic module is equivalent to the finiteness of $\textrm{sfli}(RG)$ by \cite[Theorem 5.10]{St}. Thus, every injective $RG$-module $I$ has finite flat dimension, and hence the complex $I\otimes_{RG}(\textbf{F}\otimes_R N)$ is acyclic. We conclude that the $RG$-module $M\otimes_R N$ is Gorenstein flat. \end{proof}
\begin{Definition}Let $R$ be a commutative ring and $G$ be a group. We denote by $\mathscr{X}_{B,{\tt GFlat}}$ the class of $RG$-modules $\mathscr{X}_{B,{\tt GFlat}}=\{M\in \textrm{Mod}(RG): \, M\otimes_R B(G,R)\in {\tt GFlat}(RG)\}$. \end{Definition}
\begin{Corollary}\label{coor24}Let $R$ be a commutative ring such that $\textrm{sfli}R<\infty$ and $G$ be a group such that there exists a weak characteristic module for $G$ over $R$. Then, ${\tt WGFlat}(RG)\subseteq\mathscr{X}_{B,{\tt GFlat}}$. \end{Corollary}
\begin{proof}Since the $RG$-module $B(G,R)$ is $R$-free, this is an immediate consequence of Proposition \ref{newlemmad}. \end{proof}
\begin{Proposition}\label{ppst} Let $R$ be a commutative ring such that $\textrm{sfli}R<\infty$ and $G$ be a group such that there exists a weak characteristic module for $G$ over $R$. Then,
$\mathscr{X}_{B,{\tt GFlat}}\subseteq {\tt GFlat}(RG)$. \end{Proposition}
\begin{proof}Let $B=B(G,R)$, $\overline{B}=\overline{B}(G,R)$ and consider an $RG$-module $M$ such that the $RG$-module $M\otimes_R B$ is Gorenstein flat. We also let $V_i=\overline{B}^{\otimes i}\otimes_R B$ for every $i\geq 0$, where $\overline{B}^{\otimes 0}=R$. Since the short exact sequence of $RG$-modules $0\rightarrow R \rightarrow B \rightarrow \overline{B}\rightarrow 0$ is $R$-split, we obtain for every $i\geq 0$ a short exact sequence of $RG$-modules of the form $$0\rightarrow M\otimes_R\overline{B}^{\otimes i}\rightarrow M\otimes_R V_i \rightarrow M\otimes_R \overline{B}^{\otimes i+1}\rightarrow 0.$$ Then, the splicing of the above short exact sequences for every $i\geq 0$ yields an exact sequence of the form
\begin{equation}\label{eq1}
0\rightarrow M \xrightarrow{\alpha} M\otimes_R V_0 \rightarrow M\otimes_R V_1 \rightarrow M\otimes_R V_2 \rightarrow \cdots.
\end{equation}
Since the $RG$-module $M\otimes_R B$ is Gorenstein flat and $\overline{B}$ is $R$-flat, we obtain that the $RG$-module $M\otimes_R V_i\cong (M\otimes_R B)\otimes_R \overline{B}^{\otimes i}$ is Gorenstein flat for every $i\geq 0$, by Proposition \ref{newlemmad}. We also consider an $RG$-flat resolution of $M$
\begin{equation*}
\textbf{Q}=\cdots \rightarrow Q_2 \rightarrow Q_1 \rightarrow Q_0 \xrightarrow{\beta} M \rightarrow 0.
\end{equation*}
Splicing the resolution $\textbf{Q}$ with the exact sequence (\ref{eq1}), we obtain an acyclic complex of Gorenstein flat $RG$-modules
\begin{equation*}
\mathfrak{P}=\cdots \rightarrow Q_2 \rightarrow Q_1 \rightarrow Q_0 \xrightarrow{\alpha \beta} M\otimes_R V_0 \rightarrow M\otimes_R V_1 \rightarrow M\otimes_R V_2 \rightarrow \cdots
\end{equation*}
which has syzygy the $RG$-module $M$. It suffices to prove that the complex $I\otimes_{RG}\mathfrak{P}$ is acyclic for every injective $RG$-module $I$. Using \cite[Theorem 1.2]{BK} we will then obtain that the $RG$-module $M$ is Gorenstein flat. Let $I$ be an injective $RG$-module. Then, the $R$-split short exact sequence of $RG$-modules $0\rightarrow R \rightarrow B \rightarrow \overline{B}\rightarrow 0$ yields an induced exact sequence of $RG$-modules with diagonal action $0\rightarrow I\rightarrow B \otimes_{R}I\rightarrow \overline{B}\otimes_{R} I\rightarrow 0$ which is $RG$-split. Thus, it suffices to prove that the complex $(B\otimes_{R}I)\otimes_{RG}\mathfrak{P}$ is acyclic. Since $B$ is $R$-flat, we obtain that the acyclic complex $\textbf{Q}\otimes_R B$ is a flat resolution of the Gorenstein flat $RG$-module $M\otimes_{R}B$. Hence, every syzygy module of $\textbf{Q}\otimes_R B$ is also a Gorenstein flat $RG$-module (see \cite[Lemma 2.4]{Ben}). Moreover, the $RG$-module $(M\otimes_R B)\otimes_R \overline{B}^{\otimes i}\cong (M\otimes_R \overline{B}^{\otimes i})\otimes_R B$ is Gorenstein flat for every $i\geq 0$. Consequently, every syzygy module of the acyclic complex
\begin{equation*}
\mathfrak{P}\otimes_R B =\cdots\rightarrow Q_1\otimes_R B\rightarrow Q_0\otimes_R B \rightarrow M\otimes_R V_0 \otimes_R B\rightarrow M\otimes_R V_1 \otimes_R B\rightarrow \cdots
\end{equation*} is a Gorenstein flat $RG$-module. As the functor $\textrm{Tor}^{RG}_1 (I,\_\!\_)$ vanishes on Gorenstein flat $RG$-modules, we conclude that the complex $(B\otimes_{R}I)\otimes_{RG}\mathfrak{P}\cong I\otimes_{RG}(\mathfrak{P}\otimes_{R}B)$ is acyclic, as needed.\end{proof}
\begin{Remark}\rm \label{remarkaki} A careful examination of the proof of Proposition \ref{ppst} shows that the existence of a weak characteristic module for $G$ over $R$ was only needed to ensure that the $RG$-module $(M\otimes_R B)\otimes_R \overline{B}^{\otimes i}$ is Gorenstein flat for every $i\geq 0$. \end{Remark}
\begin{Remark}\rm Let $R$ be a commutative ring such that $\textrm{sfli}(R)<\infty$ and $G$ be a group such that there exists a weak characteristic module $A$ for $G$ over $R$. We also consider an $RG$-module $M$ such that the $RG$-module $M\otimes_R A$ is Gorenstein flat. Then, $M$ is a Gorenstein flat $RG$-module.
Indeed, there exists an $R$-pure $RG$-short exact sequence $0\rightarrow R \rightarrow A \rightarrow \overline{A}\rightarrow 0$, where the $RG$-modules $A,\overline{A}$ are $R$-flat. Following step by step the proof of Proposition \ref{ppst}, we construct an acyclic complex of Gorenstein flat modules
\begin{equation*}
\mathfrak{P'}=\cdots \rightarrow Q_2' \rightarrow Q_1' \rightarrow Q_0' \rightarrow M\otimes_R V_0' \rightarrow M\otimes_R V_1' \rightarrow M\otimes_R V_2' \rightarrow \cdots,
\end{equation*} where $V_i'={\overline{A}}^{\otimes i}\otimes_R A$, for every $i\geq 0$, which has the $RG$-module $M$ as a syzygy. Using the $R$-pure $RG$-short exact sequence $0\rightarrow R \rightarrow A \rightarrow \overline{A} \rightarrow 0 $ and \cite[Theorem 1.2]{BK}, it suffices to show that the complex $I\otimes_{RG}(\mathfrak{P'}\otimes_{R}A)$ is acyclic for every injective $RG$-module $I$. This follows exactly as in the proof of Proposition \ref{ppst}, since every syzygy module of $\mathfrak{P'}\otimes_{R}A$ is Gorenstein flat. \end{Remark}
\begin{Theorem}\label{the410}Let $R$ be a commutative ring such that $\textrm{sfli}R<\infty$ and $G$ be a group such that there exists a weak characteristic module for $G$ over $R$. Then, $\mathscr{X}_{B,{\tt GFlat}}={\tt GFlat}(RG)={\tt WGFlat}(RG)$. \end{Theorem}
\begin{proof}Invoking Corollary \ref{coor24}, we have ${\tt WGFlat}(RG)\subseteq\mathscr{X}_{B,{\tt GFlat}}$. Moreover, Proposition \ref{ppst} yields $\mathscr{X}_{B,{\tt GFlat}}\subseteq{\tt GFlat}(RG)$ and the inclusion ${\tt GFlat}(RG)\subseteq{\tt WGFlat}(RG)$ is clear. We conclude that $\mathscr{X}_{B,{\tt GFlat}}={\tt GFlat}(RG)={\tt WGFlat}(RG)$, as needed. \end{proof}
\begin{Corollary}\label{cor212}Let $R$ be a commutative ring such that $\textrm{sfli}R<\infty$ and $G$ be a group. If $\textrm{fd}_{RG}B(G,R)<\infty$, then $\mathscr{X}_{B,{\tt GFlat}}={\tt GFlat}(RG)={\tt WGFlat}(RG)$. \end{Corollary}
\begin{proof}Since $\textrm{fd}_{RG}B(G,R)<\infty$, the $RG$-module $B(G,R)$ is a weak characteristic module for $G$ over $R$. The result is now a direct consequence of Theorem \ref{the410}.\end{proof}
\begin{Corollary}\label{cor39}Let $R$ be a commutative ring such that $\textrm{sfli}R<\infty$ and $G$ be a group of type $\Phi_R$. Then, $\mathscr{X}_{B,{\tt GFlat}}={\tt GFlat}(RG)={\tt WGFlat}(RG)$. \end{Corollary}
\begin{proof}Since the $RG$-module $B(G,R)$ is $RH$-free for every finite subgroup $H$ of $G$, the definition of a group of type $\Phi_R$ implies that $\textrm{fd}_{RG}B(G,R)<\infty$. The result is now an immediate consequence of Corollary \ref{cor212}. \end{proof}
\begin{Corollary} Let $R$ be a commutative ring of finite weak global dimension and $G$ be a group of type $\Phi_R$. Then, $\mathscr{X}_{B,{\tt Flat}}={\tt GFlat}(RG)={\tt WGFlat}(RG)$, where $\mathscr{X}_{B,{\tt Flat}}=\{M\in \textrm{Mod}(RG): \, M\otimes_R B(G,R)\in {\tt Flat}(RG)\}$. \end{Corollary}
\begin{proof}Invoking Corollary \ref{cor39}, it suffices to show that $\mathscr{X}_{B,{\tt GFlat}}\subseteq\mathscr{X}_{B,{\tt Flat}}$. Let $M\in \mathscr{X}_{B,{\tt GFlat}}$. Then, $M\in {\tt WGFlat}(RG)\subseteq {\tt WGFlat}(R)$, and hence the finiteness of $\textrm{wgl.dim}(R)$ implies that $M$ is $R$-flat. Since $\textrm{fd}_{RG} B(G,R)<\infty$, we obtain that $\textrm{fd}_{RG}(M\otimes_R B(G,R))<\infty$. We conclude that $M\otimes_R B(G,R)\in {\tt GFlat}(RG)\cap \overline{{\tt Flat}}(RG)={\tt Flat}(RG)$ (see \cite[Lemma 2.4]{St}). \end{proof}
\section{Gorenstein projective and PGF modules over groups with weak characteristic modules} We consider a commutative ring $R$ such that $\textrm{sfli}R<\infty$ and a group $G$ such that there exists a weak characteristic module for $G$ over $R$. Our goal in this section is to give a characterization of the class ${\tt GProj}(RG)$, in terms of the $RG$-module $B(G,R)$. Moreover, under these conditions, we conclude that the classes ${\tt GProj}(RG)$, ${\tt PGF}(RG)$ and ${\tt WGProj}(RG)$ coincide. As a result, under these conditions, every Gorenstein projective $RG$-module is Gorenstein flat. Since for every group of type $\Phi_R$, the $RG$-module $B(G,R)$ is weak characteristic, similar results are obtained.
\begin{Proposition}\label{Newlemmad}Let $R$ be a commutative ring such that $\textrm{sfli}R<\infty$ and $G$ be a group such that there exists a weak characteristic module for $G$ over $R$. Then, for all $RG$-modules $M$, $N$ such that $M$ is weak Gorenstein projective and $N$ is $R$-projective, the $RG$-module $M\otimes_R N$ is PGF. \end{Proposition}
\begin{proof}Let $M$ be a weak Gorenstein projective $RG$-module and $N$ be an $R$-projective $RG$-module. Then, there exists an acyclic complex of projective $RG$-modules $$\textbf{P}=\cdots \rightarrow P_{2}\rightarrow P_1\rightarrow P_0 \rightarrow P_{-1}\rightarrow \cdots,$$ such that $M=\textrm{Im}(P_1 \rightarrow P_0)$. Since $N$ is $R$-projective, we obtain the induced complex of projective $RG$-modules (with diagonal action) $$\textbf{P}\otimes_R N = \cdots \rightarrow P_{2}\otimes_R N\rightarrow P_1\otimes_R N\rightarrow P_0\otimes_R N \rightarrow P_{-1}\otimes_R N\rightarrow \cdots,$$ where $M\otimes_R N= \textrm{Im}(P_1\otimes_R N \rightarrow P_0\otimes_R N)$. Since $\textrm{sfli}R<\infty$, the existence of a weak characteristic module is equivalent to the finiteness of $\textrm{sfli}(RG)$ by \cite[Theorem 5.10]{St}. Thus, every injective $RG$-module $I$ has finite flat dimension, and hence the complex $I\otimes_{RG}(\textbf{P}\otimes_R N)$ is acyclic. We conclude that the $RG$-module $M\otimes_R N$ is PGF. \end{proof}
\begin{Definition}Let $R$ be a commutative ring and $G$ be a group. We denote by $\mathscr{X}_{B,{\tt PGF}}$ the class of $RG$-modules $\mathscr{X}_{B,{\tt PGF}}=\{M\in \textrm{Mod}(RG): \, M\otimes_R B(G,R)\in {\tt PGF}(RG)\}$. \end{Definition}
\begin{Corollary}\label{coor43}Let $R$ be a commutative ring such that $\textrm{sfli}R<\infty$ and $G$ be a group such that there exists a weak characteristic module for $G$ over $R$. Then, ${\tt WGProj}(RG)\subseteq\mathscr{X}_{B,{\tt PGF}}$. \end{Corollary}
\begin{proof}Since the $RG$-module $B(G,R)$ is $R$-free, this is an immediate consequence of Proposition \ref{Newlemmad}.\end{proof}
\begin{Proposition}\label{nppst} Let $R$ be a commutative ring such that $\textrm{sfli}R<\infty$ and $G$ be a group such that there exists a weak characteristic module for $G$ over $R$. Then, $\mathscr{X}_{B,{\tt PGF}}\subseteq {\tt PGF}(RG)$. \end{Proposition}
\begin{proof}Let $B=B(G,R)$, $\overline{B}=\overline{B}(G,R)$ and consider an $RG$-module $M$ such that the $RG$-module $M\otimes_R B$ is PGF. We also let $V_i=\overline{B}^{\otimes i}\otimes_R B$ for every $i\geq 0$, where $\overline{B}^{\otimes 0}=R$. Since the short exact sequence of $RG$-modules $0\rightarrow R \rightarrow B \rightarrow \overline{B}\rightarrow 0$ is $R$-split, we obtain for every $i\geq 0$ a short exact sequence of $RG$-modules of the form $$0\rightarrow M\otimes_R\overline{B}^{\otimes i}\rightarrow M\otimes_R V_i \rightarrow M\otimes_R \overline{B}^{\otimes i+1}\rightarrow 0.$$ Then, the splicing of the above short exact sequences for every $i\geq 0$ yields an exact sequence of the form \begin{equation}\label{Eeq1}
0\rightarrow M \xrightarrow{\alpha} M\otimes_R V_0 \rightarrow M\otimes_R V_1 \rightarrow M\otimes_R V_2 \rightarrow \cdots. \end{equation} Since the $RG$-module $M\otimes_R B$ is PGF and $\overline{B}$ is $R$-projective, we obtain that the $RG$-module $M\otimes_R V_i\cong (M\otimes_R B)\otimes_R \overline{B}^{\otimes i}$ is PGF for every $i\geq 0$, by Proposition \ref{Newlemmad}. We also consider an $RG$-projective resolution of $M$ \begin{equation*}
\textbf{P}=\cdots \rightarrow P_2 \rightarrow P_1 \rightarrow P_0 \xrightarrow{\beta} M \rightarrow 0. \end{equation*} Splicing the resolution $\textbf{P}$ with the exact sequence (\ref{Eeq1}), we obtain an acyclic complex of PGF $RG$-modules \begin{equation*}
\mathfrak{P}=\cdots \rightarrow P_2 \rightarrow P_1 \rightarrow P_0 \xrightarrow{\alpha \beta} M\otimes_R V_0 \rightarrow M\otimes_R V_1 \rightarrow M\otimes_R V_2 \rightarrow \cdots \end{equation*} which has syzygy the $RG$-module $M$. It suffices to prove that the complex $I\otimes_{RG}\mathfrak{P}$ is acyclic for every injective $RG$-module $I$. Using \cite[Theorem 6.7]{St} we will then obtain that the $RG$-module $M$ is PGF. Let $I$ be an injective $RG$-module. Then, the $R$-split short exact sequence of $RG$-modules $0\rightarrow R \rightarrow B \rightarrow \overline{B}\rightarrow 0$ yields an induced exact sequence of $RG$-modules with diagonal action $0\rightarrow I\rightarrow B \otimes_{R}I\rightarrow \overline{B}\otimes_{R} I\rightarrow 0$ which is $RG$-split. Thus, it suffices to prove that the complex $(B\otimes_{R}I)\otimes_{RG}\mathfrak{P}$ is acyclic. Since $B$ is $R$-projective, we obtain that the acyclic complex $\textbf{P}\otimes_R B$ is a projective resolution of the PGF $RG$-module $M\otimes_{R}B$. Therefore, every syzygy module of $\textbf{P}\otimes_R B$ is also a PGF $RG$-module (see \cite[Proposition 2.1]{St}). Moreover, the $RG$-module $(M\otimes_R B)\otimes_R \overline{B}^{\otimes i}\cong (M\otimes_R \overline{B}^{\otimes i})\otimes_R B$ is PGF for every $i\geq 0$. Consequently, every syzygy module of the acyclic complex \begin{equation*}
\mathfrak{P}\otimes_R B =\cdots\rightarrow P_1\otimes_R B\rightarrow P_0\otimes_R B \rightarrow M\otimes_R V_0 \otimes_R B\rightarrow M\otimes_R V_1 \otimes_R B\rightarrow \cdots \end{equation*} is a PGF $RG$-module. As the functor $\textrm{Tor}^{RG}_1 (I,\_\!\_)$ vanishes on PGF modules, we conclude that the complex $(B\otimes_{R}I)\otimes_{RG}\mathfrak{P}\cong I\otimes_{RG}(\mathfrak{P}\otimes_{R}B)$ is acyclic, as needed.\end{proof}
\begin{Remark}\rm \label{Remarkaki} A careful examination of the proof of Proposition \ref{nppst} shows that the existence of a weak characteristic module for $G$ over $R$ was only needed to ensure that the $RG$-module $(M\otimes_R B)\otimes_R \overline{B}^{\otimes i}$ is PGF for every $i\geq 0$. \end{Remark}
\begin{Remark}\label{rem46}\rm Let $R$ be a commutative ring such that $\textrm{sfli}(R)<\infty$ and $G$ be a group such that there exists a weak characteristic module $A$ for $G$ over $R$. We also consider an $RG$-module $M$ such that the $RG$-module $M\otimes_R A$ is PGF. Then, $M$ is a PGF $RG$-module.
Indeed, there exists an $R$-split $RG$-short exact sequence $0\rightarrow R \rightarrow A \rightarrow \overline{A}\rightarrow 0$, where the $RG$-modules $A,\overline{A}$ are $R$-projective (this follows from \cite[Theorem 5.10(v)]{St}). Following step by step the proof of Proposition \ref{nppst}, we construct an acyclic complex of PGF modules
\begin{equation*}
\mathfrak{P'}=\cdots \rightarrow P_2' \rightarrow P_1' \rightarrow P_0' \rightarrow M\otimes_R V_0' \rightarrow M\otimes_R V_1' \rightarrow M\otimes_R V_2' \rightarrow \cdots,
\end{equation*} where $V_i'={\overline{A}}^{\otimes i}\otimes_R A$, for every $i\geq 0$, which has the $RG$-module $M$ as a syzygy. Using the $R$-split $RG$-short exact sequence $0\rightarrow R \rightarrow A \rightarrow \overline{A} \rightarrow 0 $ and \cite[Theorem 6.7]{St}, it suffices to show that the complex $I\otimes_{RG}(\mathfrak{P'}\otimes_{R}A)$ is acyclic for every injective $RG$-module $I$. This follows exactly as in the proof of Proposition \ref{nppst}, since every syzygy module of $\mathfrak{P'}\otimes_{R}A$ is PGF. \end{Remark}
\begin{Theorem}\label{The410}Let $R$ be a commutative ring such that $\textrm{sfli}R<\infty$ and $G$ be a group such that there exists a weak characteristic module for $G$ over $R$. Then, $\mathscr{X}_{B,{\tt PGF}}={\tt PGF}(RG)={\tt WGProj}(RG)={\tt GProj}(RG)$. \end{Theorem}
\begin{proof}Invoking Corollary \ref{coor43} and Proposition \ref{nppst}, we have the inclusions ${\tt WGProj}(RG)\subseteq\mathscr{X}_{B,{\tt PGF}}\subseteq{\tt PGF}(RG)$. Moreover, ${\tt PGF}(RG)\subseteq {\tt GProj}(RG)$ by \cite[Theorem 4.4]{SS} and the inclusion ${\tt GProj}(RG)\subseteq{\tt WGProj}(RG)$ is clear. We conclude that $\mathscr{X}_{B,{\tt PGF}}={\tt PGF}(RG)={\tt WGProj}(RG)={\tt GProj}(RG)$, as needed.\end{proof}
\begin{Remark}\rm (i) We note that Theorem \ref{The410} implies that for every commutative ring $R$ such that $\textrm{sfli}R<\infty$ and every group $G$ such that there exists a weak characteristic module for $G$ over $R$, the class $\mathscr{X}_{B,{\tt PGF}}$ coincides with the class $\mathscr{X}_{B,{\tt GProj}}=\{M\in \textrm{Mod}(RG): \, M\otimes_R B(G,R)\in {\tt GProj}(RG)\}$.
(ii) Let $R$ be a commutative ring such that $\textrm{sfli}(R)<\infty$ and $G$ be a group such that there exists a weak characteristic module $A$ for $G$ over $R$. We also consider an $RG$-module $M$ such that the $RG$-module $M\otimes_R A$ is Gorenstein projective. Then, $M$ is a Gorenstein projective $RG$-module. This follows from Remark \ref{rem46} and Theorem \ref{The410}. \end{Remark}
\begin{Corollary}\label{Cor212}Let $R$ be a commutative ring such that $\textrm{sfli}R<\infty$ and $G$ be a group. If $\textrm{fd}_{RG}B(G,R)<\infty$, then $\mathscr{X}_{B,{\tt PGF}}={\tt PGF}(RG)={\tt WGProj}(RG)={\tt GProj}(RG)$. \end{Corollary}
\begin{proof}Since $\textrm{fd}_{RG}B(G,R)<\infty$, the $RG$-module $B(G,R)$ is a weak characteristic module for $G$ over $R$. The result is now a direct consequence of Theorem \ref{The410}.\end{proof}
\begin{Corollary}\label{cor410}Let $R$ be a commutative ring such that $\textrm{sfli}R<\infty$ and $G$ be a group of type $\Phi_R$. Then, $\mathscr{X}_{B,{\tt PGF}}={\tt PGF}(RG)={\tt WGProj}(RG)={\tt GProj}(RG)$. \end{Corollary}
\begin{proof}Since the $RG$-module $B(G,R)$ is $RH$-free for every finite subgroup $H$ of $G$, the definition of a group of type $\Phi_R$ implies that $\textrm{fd}_{RG}B(G,R)<\infty$. The result is now an immediate consequence of Corollary \ref{Cor212}. \end{proof}
\begin{Corollary}Let $R$ be a commutative ring such that $\textrm{sfli}R<\infty$ and $G$ be a group of type $\Phi_R$. Then, ${\tt GProj}(RG)\subseteq {\tt GFlat}(RG)$. Moreover, for every $RG$-module $M$ we have $\textrm{Gfd}_{RG}M\leq \textrm{Gpd}_{RG}M =\textrm{PGF-dim}_{RG}M$. \end{Corollary}
\begin{proof}This is a direct consequence of Corollary \ref{cor410}, since ${\tt PGF}(RG)\subseteq {\tt GFlat}(RG)$. \end{proof}
\begin{Corollary} Let $R$ be a commutative ring of finite weak global dimension and $G$ be a group of type $\Phi_R$. Then, $\mathscr{X}_{B,{\tt Proj}}={\tt PGF}(RG)={\tt WGProj}(RG)={\tt GProj}(RG)$, where $\mathscr{X}_{B,{\tt Proj}}=\{M\in \textrm{Mod}(RG): \, M\otimes_R B(G,R)\in {\tt Proj}(RG)\}$ is the class of Benson's cofibrants. \end{Corollary}
\begin{proof}Invoking Corollary \ref{cor410}, it suffices to show that $\mathscr{X}_{B,{\tt PGF}}\subseteq\mathscr{X}_{B,{\tt Proj}}$. Let $M\in \mathscr{X}_{B,{\tt PGF}}$. Then, $M\in {\tt WGProj}(RG)\subseteq {\tt WGFlat}(R)$, and hence the finiteness of $\textrm{wgl.dim}(R)$ implies that $M$ is $R$-flat. Since $\textrm{fd}_{RG} B(G,R)<\infty$, we obtain that $\textrm{fd}_{RG}(M\otimes_R B(G,R))<\infty$. We conclude that $M\otimes_R B(G,R)\in {\tt PGF}(RG)\cap \overline{{\tt Flat}}(RG)={\tt Proj}(RG)$ (see \cite[Lemma 5.2]{St}). \end{proof}
\section{Gorenstein flat modules over $\textsc{\textbf{lh}}\mathfrak{F}$-groups} We consider a commutative ring $R$ such that $\textrm{sfli}R<\infty$ and an $\textsc{\textbf{lh}}\mathfrak{F}$-group $G$. Our goal in this section is to achieve the same characterization of the class ${\tt GFlat}(RG)$, in terms of the $RG$-module $B(G,R)$, as in Section 3. Firstly, we prove that the tensor product of a weak Gorenstein flat $RG$-module and an $R$-flat module (with diagonal action) is Gorenstein flat. Moreover, we obtain that the class ${\tt GFlat}(RG)$ coincides with the class ${\tt WGFlat}(RG)$. By doing so, we may replace the existence of a weak characteristic module for $G$ over $R$ with the property that $G$ is an $\textsc{\textbf{lh}}\mathfrak{F}$-group in all the previous results of Section 3.
\begin{Lemma}\label{llem45} Let $R$ be a commutative ring, $G$ be a group and $H$ be a subgroup of $G$. Then, for every Gorenstein flat $RH$-module $M$, the $RG$-module $\textrm{Ind}^G_H M$ is also Gorenstein flat. \end{Lemma}
\begin{proof} Let $M$ be a Gorenstein flat $RH$-module. Then, there exists an acyclic complex of flat $RH$-modules $$\textbf{F}=\cdots \rightarrow F_{2}\rightarrow F_1\rightarrow F_0 \rightarrow F_{-1}\rightarrow \cdots,$$ such that $M=\textrm{Im}(F_1 \rightarrow F_0)$ and the complex $I\otimes_{RH}\textbf{F}$ is exact, whenever $I$ is an injective $RH$-module. Thus, the induced complex $$\textrm{Ind}^G_H\textbf{F}=\cdots \rightarrow\textrm{Ind}^G_H F_2 \rightarrow\textrm{Ind}^G_H F_1\rightarrow\textrm{Ind}^G_H F_0 \rightarrow\textrm{Ind}^G_H F_{-1}\rightarrow \cdots$$ is an acyclic complex of flat $RG$-modules and has the $RG$-module $\textrm{Ind}^G_H M$ as a syzygy. Since every injective $RG$-module $I$ restricts to an injective $RH$-module, the isomorphism of complexes $I\otimes_{RG}\textrm{Ind}^G_H \textbf{F} \cong I\otimes_{RH}\textbf{F}$ implies that the $RG$-module $\textrm{Ind}^G_H M$ is Gorenstein flat. \end{proof}
\begin{Proposition}\label{prop1}Let $R$ be a commutative ring such that $\textrm{sfli}R<\infty$ and $G$ be an $\textsc{\textbf{lh}}\mathfrak{F}$-group. Consider a weak Gorenstein flat $RG$-module $M$ and an $RG$-module $N$ which is flat as $R$-module. Then, $M\otimes_R N \in{\tt G Flat}(RG)$. \end{Proposition}
\begin{proof}Let $M\in{\tt WGFlat}(RG)$ and $N\in{\tt Flat}(R)$. We will first show that $M\otimes_R N$ is Gorenstein flat as $RH$-module over any $\textsc{\textbf{h}}\mathfrak{F}$-subgroup $H$ of $G$. We use transfinite induction on the ordinal number $\alpha$, which is such that $H\in \textsc{\textbf{h}}_{\alpha}\mathfrak{F}$. If $\alpha=0$, then $H$ is finite and hence $\textrm{Ghd}_R H=0$, by \cite[Proposition 3.6]{St1}. Invoking \cite[Proposition 5.7]{St}, we obtain that $\textrm{sfli}(RH)\leq \textrm{sfli}R <\infty$. Since $M\in {\tt WGFlat}(RG)\subseteq {\tt WGFlat}(RH)$ and $N\in{\tt Flat}(R)$, we obtain that $M\otimes_R N \in {\tt WGFlat}(RH)$. Thus, the finiteness of $\textrm{sfli}(RH)$ implies that $M\otimes_R N \in {\tt GFlat}(RH)$. Now we assume that $M\otimes_R N$ is Gorenstein flat as $RH'$-module for every $\textsc{\textbf{h}}_{\beta}\mathfrak{F}$-subgroup $H'$ of $G$ and every $\beta<\alpha$. Let $H$ be an $\textsc{\textbf{h}}_{\alpha}\mathfrak{F}$-subgroup of $G$. Then, there
exists an exact sequence of $\mathbb{Z}H$-modules $$0\rightarrow C_r \rightarrow \cdots \rightarrow C_1 \rightarrow C_0 \rightarrow \mathbb{Z} \rightarrow 0,$$ where each $C_i$
is a direct sum of permutation $\mathbb{Z}H$-modules of the form $\mathbb{Z}[H/H']$, with $H'$ an $\textsc{\textbf{h}}_{\beta}\mathfrak{F}$-subgroup of $H$ for some $\beta<\alpha$. We note that the integer $r$ is the dimension of the $H$-CW-complex provided by the definition of $H$ being an $\textsc{\textbf{h}}_{\alpha}\mathfrak{F}$-group. The above exact sequence yields an exact sequence of $RH$-modules of the form
\begin{equation}\label{eqqqq}
0\rightarrow K_r \rightarrow \cdots \rightarrow K_1 \rightarrow K_0 \rightarrow M\otimes_R N \rightarrow 0,
\end{equation}
such that every $K_i$ is a direct sum of modules of the form ${\textrm{Ind}^H_{H'}}{\textrm{Res}^H_{H'}} (M \otimes_R N)$, where $H'\in \textsc{\textbf{h}}_{\beta}\mathfrak{F}$, $\beta<\alpha$ (see also \cite[Lemma 2.3]{Bis2}).
Our induction hypothesis implies that ${\textrm{Res}^H_{H'}} (M \otimes_R N)$ is a Gorenstein flat $RH'$-module. Invoking Lemma \ref{llem45}, we infer that ${\textrm{Ind}^H_{H'}}{\textrm{Res}^H_{H'}}(M \otimes_R N)$ is a Gorenstein flat $RH$-module. Since the class ${\tt GFlat}(RH)$ is closed under direct sums, we obtain that the $RH$-module $K_i$ is Gorenstein flat, for every $i=0,\dots,r$. Thus, the exact sequence (\ref{eqqqq}) yields $\textrm{Gfd}_{RH}(M\otimes_R N)\leq r$. Moreover, $M\in {\tt WGFlat}(RG)$, and hence there exists an exact sequence of $RG$-modules of the form $$0\rightarrow M \rightarrow F_{r-1} \rightarrow \cdots \rightarrow F_1 \rightarrow F_0 \rightarrow M' \rightarrow 0,$$ where $F_i$ is flat for every $i=0,1,\dots,r-1$ and $M'\in {\tt WGFlat}(RG)$. Since $N$ is $R$-flat, we obtain the induced exact sequence of $RG$-modules (with diagonal action)
\begin{equation}\label{eqqq}
0\rightarrow M \otimes_R N\rightarrow F_{r-1}\otimes_R N \rightarrow \cdots \rightarrow F_0\otimes_R N \rightarrow M'\otimes_R N \rightarrow 0,
\end{equation}where $F_i \otimes_R N$ is a flat $RG$-module (and hence is flat as $RH$-module) for every $i=0,1,\dots,r-1$. The same argument as above for the $RG$-module $M' \in {\tt WGFlat}(RG)$ yields $\textrm{Gfd}_{RH}(M'\otimes_R N)\leq r$. Since every ring is ${\tt GF}$-closed, using \cite[Theorem 2.8]{Ben}, we conclude that $M\otimes_R N$ is a Gorenstein flat $RH$-module.
Let $G$ be an $\textsc{\textbf{lh}}\mathfrak{F}$-group. Then, $G$ can be expressed as the filtered union of its finitely generated subgroups $(G_{\lambda})_{\lambda}$, which are all contained in $\textsc{\textbf{h}}\mathfrak{F}$. Since $G_{\lambda}\in \textsc{\textbf{h}}\mathfrak{F}$, the $RG_{\lambda}$-module $M\otimes_R N$ is Gorenstein flat. Invoking Lemma \ref{llem45}, we obtain that the $RG$-module $\textrm{Ind}^G_{G_{\lambda}}(M\otimes_R N)$ is Gorenstein flat as well. Thus, the $RG$-module $M\otimes_R N\cong {\lim\limits_{\longrightarrow}}_{\lambda}\textrm{Ind}^G_{G_{\lambda}}(M\otimes_R N)$ is Gorenstein flat as a direct limit of Gorenstein flat modules (see \cite[Corollary 4.12]{SS}).\end{proof}
\begin{Corollary}\label{theor1}Let $R$ be a commutative ring such that $\textrm{sfli}R<\infty$ and $G$ be an $\textsc{\textbf{lh}}\mathfrak{F}$-group. Then, ${\tt WGFlat}(RG)\subseteq\mathscr{X}_{B,{\tt GFlat}}$. \end{Corollary}
\begin{proof}Since the $RG$-module $B(G,R)$ is $R$-free, this is an immediate consequence of Proposition \ref{prop1}. \end{proof}
\begin{Remark}\rm The existence of a weak characteristic module in Proposition \ref{ppst} may be replaced with the assumption that $G$ is an $\textsc{\textbf{lh}}\mathfrak{F}$-group. \end{Remark}
\begin{Proposition}\label{theor2}Let $R$ be a commutative ring such that $\textrm{sfli}R<\infty$ and $G$ be an $\textsc{\textbf{lh}}\mathfrak{F}$-group. Then, $\mathscr{X}_{B,{\tt GFlat}}\subseteq{\tt GFlat}(RG)$. \end{Proposition}
\begin{proof}Let $B=B(G,R)$, $\overline{B}=\overline{B}(G,R)$ and consider an $RG$-module $M$ such that the $RG$-module $M\otimes_R B$ is Gorenstein flat. Since the $RG$-module $\overline{B}$ is $R$-flat, we obtain that the $RG$-module $(M\otimes_R B)\otimes_R \overline{B}^{\otimes i}$ is Gorenstein flat for every $i\geq 0$, by Proposition \ref{prop1}. Given that, the proof is identical to that of Proposition \ref{ppst} (see also Remark \ref{remarkaki}).\end{proof}
\begin{Theorem}\label{cora}Let $R$ be a commutative ring such that $\textrm{sfli}R<\infty$ and $G$ be an $\textsc{\textbf{lh}}\mathfrak{F}$-group. Then, $\mathscr{X}_{B,{\tt GFlat}}={\tt GFlat}(RG)={\tt WGFlat}(RG)$. \end{Theorem}
\begin{proof}Invoking Corollary \ref{theor1}, we have ${\tt WGFlat}(RG)\subseteq\mathscr{X}_{B,{\tt GFlat}}$. Moreover, Proposition \ref{theor2} yields $\mathscr{X}_{B,{\tt GFlat}}\subseteq{\tt GFlat}(RG)$ and the inclusion ${\tt GFlat}(RG)\subseteq{\tt WGFlat}(RG)$ is clear. We conclude that $\mathscr{X}_{B,{\tt GFlat}}={\tt GFlat}(RG)={\tt WGFlat}(RG)$, as needed. \end{proof}
\section{Gorenstein projective and PGF modules over $\textsc{\textbf{lh}}\mathfrak{F}$-groups}
We consider a commutative ring $R$ such that $\textrm{sfli}R<\infty$ and an $\textsc{\textbf{lh}}\mathfrak{F}$-group $G$. Our goal in this section is to achieve the same characterization of the class ${\tt GProj}(RG)$, in terms of the $RG$-module $B(G,R)$, as in Section 4. Firstly, we prove that the tensor product of a weak Gorenstein projective $RG$-module and an $R$-projective module (with diagonal action) is PGF. Moreover, we obtain that the classes ${\tt GProj}(RG)$, ${\tt PGF}(RG)$ and ${\tt WGProj}(RG)$ coincide. As a result, we have that every Gorenstein projective $RG$-module is Gorenstein flat. By doing so, we may replace the existence of a weak characteristic module for $G$ over $R$ with the property that $G$ is an $\textsc{\textbf{lh}}\mathfrak{F}$-group in all the previous results of Section 4.
\begin{Lemma}\label{Llem45}{\rm(\cite[Lemma 2.12]{St1})} Let $R$ be a commutative ring, $G$ be a group and $H$ be a subgroup of $G$. Then, for every PGF $RH$-module $M$, the $RG$-module $\textrm{Ind}^G_H M$ is also PGF. \end{Lemma}
\begin{Proposition}\label{Prop1}Let $R$ be a commutative ring such that $\textrm{sfli}R<\infty$ and $G$ be an $\textsc{\textbf{lh}}\mathfrak{F}$-group. Consider a weak Gorenstein projective $RG$-module $M$ and an $RG$-module $N$ which is projective as $R$-module. Then, $M\otimes_R N \in{\tt PGF}(RG)$. \end{Proposition}
\begin{proof}Let $M\in{\tt WGProj}(RG)$ and $N\in{\tt Proj}(R)$. We will first show that $M\otimes_R N$ is PGF as $RH$-module over any $\textsc{\textbf{h}}\mathfrak{F}$-subgroup $H$ of $G$. We use transfinite induction on the ordinal number $\alpha$, which is such that $H\in \textsc{\textbf{h}}_{\alpha}\mathfrak{F}$. If $\alpha=0$, then $H$ is finite and hence $\textrm{Ghd}_R H=0$, by \cite[Proposition 3.6]{St1}. Invoking \cite[Proposition 5.7]{St}, we obtain that $\textrm{sfli}(RH)\leq \textrm{sfli}R <\infty$. Since $M\in {\tt WGProj}(RG)\subseteq {\tt WGProj}(RH)$ and $N\in{\tt Proj}(R)$, we have $M\otimes_R N \in {\tt WGProj}(RH)$. Thus, the finiteness of $\textrm{sfli}(RH)$ implies that $M\otimes_R N \in {\tt PGF}(RH)$. Now we assume that $M\otimes_R N$ is PGF as $RH'$-module for every $\textsc{\textbf{h}}_{\beta}\mathfrak{F}$-subgroup $H'$ of $G$ and every $\beta<\alpha$. Let $H$ be an $\textsc{\textbf{h}}_{\alpha}\mathfrak{F}$-subgroup of $G$. Then, there
exists an exact sequence of $\mathbb{Z}H$-modules $$0\rightarrow C_r \rightarrow \cdots \rightarrow C_1 \rightarrow C_0 \rightarrow \mathbb{Z} \rightarrow 0,$$ where each $C_i$
is a direct sum of permutation $\mathbb{Z}H$-modules of the form $\mathbb{Z}[H/H']$, with $H'$ an $\textsc{\textbf{h}}_{\beta}\mathfrak{F}$-subgroup of $H$ for some $\beta<\alpha$. We note that the integer $r$ is the dimension of the $H$-CW-complex provided by the definition of $H$ being an $\textsc{\textbf{h}}_{\alpha}\mathfrak{F}$-group. The above exact sequence yields an exact sequence of $RH$-modules of the form
\begin{equation}\label{Eqqqq}
0\rightarrow K_r \rightarrow \cdots \rightarrow K_1 \rightarrow K_0 \rightarrow M\otimes_R N \rightarrow 0,
\end{equation}
such that every $K_i$ is a direct sum of modules of the form ${\textrm{Ind}^H_{H'}}{\textrm{Res}^H_{H'}} (M \otimes_R N)$, where $H'\in \textsc{\textbf{h}}_{\beta}\mathfrak{F}$, $\beta<\alpha$ (see also \cite[Lemma 2.3]{Bis2}).
Our induction hypothesis implies that ${\textrm{Res}^H_{H'}} (M \otimes_R N)$ is a PGF $RH'$-module. Invoking Lemma \ref{Llem45}, we infer that ${\textrm{Ind}^H_{H'}}{\textrm{Res}^H_{H'}}(M \otimes_R N)$ is a PGF $RH$-module. The class ${\tt PGF}(RH)$ is closed under direct sums, and hence the $RH$-module $K_i$ is PGF, for every $i=0,\dots,r$. Thus, the exact sequence (\ref{Eqqqq}) yields $\textrm{PGF-dim}_{RH}(M\otimes_R N)\leq r$. Moreover, $M\in {\tt WGProj}(RG)$, and hence there exists an exact sequence of $RG$-modules of the form $$0\rightarrow M \rightarrow P_{r-1} \rightarrow \cdots \rightarrow P_1 \rightarrow P_0 \rightarrow M' \rightarrow 0,$$ where $P_i$ is projective for every $i=0,1,\dots,r-1$ and $M'\in {\tt WGProj}(RG)$. As $N$ is $R$-projective, we obtain the induced exact sequence of $RG$-modules (with diagonal action)
\begin{equation}\label{Eqqq}
0\rightarrow M \otimes_R N\rightarrow P_{r-1}\otimes_R N \rightarrow \cdots \rightarrow P_0\otimes_R N \rightarrow M'\otimes_R N \rightarrow 0,
\end{equation}where $P_i \otimes_R N$ is a projective $RG$-module (and hence projective as $RH$-module) for every $i=0,1,\dots,r-1$. The same argument as above for the weak Gorenstein projective $RG$-module $M'$ shows that $\textrm{PGF-dim}_{RH}(M'\otimes_R N)\leq r$. Invoking \cite[Proposition 2.2]{DE}, we conclude that $M\otimes_R N$ is a PGF $RH$-module.
Let $G$ be an $\textsc{\textbf{lh}}\mathfrak{F}$-group. We will proceed by induction on the cardinality of $G$. If $G$ is a
countable group, then $G$ acts on a tree whose stabilizers are certain finitely generated subgroups of $G$, and hence $G\in \textsc{\textbf{h}}\mathfrak{F}$. Thus, we assume that $G$ is uncountable. The group $G$ may then be expressed as a continuous ascending union of subgroups $G = \cup_{\lambda<\delta} G_{\lambda}$, for some ordinal $\delta$, where each $G_{\lambda}$ has strictly smaller cardinality than $G$. By induction, $M\otimes_R N$ is PGF as an $RG_{\lambda}$-module, for every $\lambda<\delta$. Thus, invoking \cite[Proposition 4.5]{St1}, we infer that $\textrm{PGF-dim}_{RG}(M\otimes_R N)\leq 1$. Since $M\in {\tt WGProj}(RG)$, there exists a short exact sequence of $RG$-modules of the form $$0\rightarrow M \rightarrow P \rightarrow M'' \rightarrow 0,$$ where $M''\in {\tt WGProj}(RG)$ and $P\in {\tt Proj}(RG)$. As $N$ is $R$-projective, we obtain the following short exact sequence of $RG$-modules (with diagonal action) \begin{equation}\label{equu}0\rightarrow M\otimes_R N\rightarrow P \otimes_R N\rightarrow M''\otimes_R N\rightarrow 0,
\end{equation} where the $RG$-module $P\otimes_R N$ is projective. The same argument as before for the $RG$-module $M'' \in {\tt WGProj}(RG)$ yields $\textrm{PGF-dim}_{RG}(M''\otimes_R N)\leq 1$, and hence the exact sequence (\ref{equu}) implies that the $RG$-module $M\otimes_R N$ is PGF, as needed.\end{proof}
\begin{Remark}\rm The assumption in Proposition \ref{nppst} on the existence of a weak characteristic module may be replaced with the assumption that $G$ is an $\textsc{\textbf{lh}}\mathfrak{F}$-group.
\end{Remark}
\begin{Corollary}Let $R$ be a commutative ring such that $\textrm{sfli}R<\infty$ and $G$ be an $\textsc{\textbf{lh}}\mathfrak{F}$-group. Consider a Gorenstein projective $RG$-module $M$ and an $RG$-module $N$ which is projective as $R$-module. Then, $M\otimes_R N \in{\tt GProj}(RG)$. \end{Corollary}
\begin{Corollary}\label{Theo1}Let $R$ be a commutative ring such that $\textrm{sfli}R<\infty$ and $G$ be an $\textsc{\textbf{lh}}\mathfrak{F}$-group. Then, ${\tt WGProj}(RG)\subseteq\mathscr{X}_{B,{\tt PGF}}$. \end{Corollary}
\begin{proof}Since the $RG$-module $B(G,R)$ is $R$-free, this is an immediate consequence of Proposition \ref{Prop1}.\end{proof}
\begin{Proposition}\label{Ppst} Let $R$ be a commutative ring such that $\textrm{sfli}R<\infty$ and $G$ be an $\textsc{\textbf{lh}}\mathfrak{F}$-group. Then,
$\mathscr{X}_{B,{\tt PGF}}\subseteq {\tt PGF}(RG)$. \end{Proposition}
\begin{proof}Let $B=B(G,R)$, $\overline{B}=\overline{B}(G,R)$ and consider an $RG$-module $M$ such that the $RG$-module $M\otimes_R B$ is PGF. Since the $RG$-module $\overline{B}$ is $R$-projective, we obtain that the $RG$-module $(M\otimes_R B)\otimes_R \overline{B}^{\otimes i}$ is PGF for every $i\geq 0$, by Proposition \ref{Prop1}. Given that, the proof is identical to that of Proposition \ref{nppst} (see also Remark \ref{Remarkaki}). \end{proof}
\begin{Theorem}\label{Cora}Let $R$ be a commutative ring such that $\textrm{sfli}R<\infty$ and $G$ be an $\textsc{\textbf{lh}}\mathfrak{F}$-group. Then, $\mathscr{X}_{B,{\tt PGF}}={\tt PGF}(RG)={\tt WGProj}(RG)={\tt GProj}(RG)$. \end{Theorem}
\begin{proof}Invoking Corollary \ref{Theo1} and Proposition \ref{Ppst}, we have the inclusions ${\tt WGProj}(RG)\subseteq\mathscr{X}_{B,{\tt PGF}}\subseteq{\tt PGF}(RG)$. Moreover, ${\tt PGF}(RG)\subseteq {\tt GProj}(RG)$ by \cite[Theorem 4.4]{SS} and the inclusion ${\tt GProj}(RG)\subseteq{\tt WGProj}(RG)$ is clear. We conclude that $\mathscr{X}_{B,{\tt PGF}}={\tt PGF}(RG)={\tt WGProj}(RG)={\tt GProj}(RG)$, as needed. \end{proof}
\begin{Corollary}Let $R$ be a commutative ring such that $\textrm{sfli}R<\infty$ and $G$ be an $\textsc{\textbf{lh}}\mathfrak{F}$-group. Then, ${\tt GProj}(RG)\subseteq {\tt GFlat}(RG)$. \end{Corollary}
\begin{proof}This is a direct consequence of Theorem \ref{Cora}, since ${\tt PGF}(RG)\subseteq {\tt GFlat}(RG)$. \end{proof}
\begin{Corollary}Let $R$ be a commutative ring such that $\textrm{sfli}R<\infty$ and $G$ be an $\textsc{\textbf{lh}}\mathfrak{F}$-group. Then, for every $RG$-module $M$ we have $\textrm{Gfd}_{RG}M\leq \textrm{Gpd}_{RG}M =\textrm{PGF-dim}_{RG}M$. \end{Corollary}
\section{Gorenstein homological dimension of $\textsc{\textbf{lh}}\mathfrak{F}$-groups} Our goal in this section is to determine the Gorenstein homological dimension $\textrm{Ghd}_R G$ of an $\textsc{\textbf{lh}}\mathfrak{F}$-group $G$ over a commutative ring of finite Gorenstein weak global dimension.
\begin{Definition}Let $R$ be a commutative ring and $G$ be a group.
$\textrm{f.k}(RG):=\textrm{sup}\{\textrm{fd}_{RG}M \, : \, M\in \textrm{Mod}(RG), \, \textrm{fd}_{RH}M<\infty \, \textrm{for every finite} \,\, H\leq G\}$.
$\textrm{fin.f.dim}(RG):=\textrm{sup}\{\textrm{fd}_{RG}M \, : \, M\in \textrm{Mod}(RG), \, \textrm{fd}_{RG}M<\infty\}$. \end{Definition}
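As an elementary observation on these definitions (not needed in the sequel, but a useful sanity check):
\begin{Remark}\rm For any commutative ring $R$ and any group $G$ we have $\textrm{fin.f.dim}(RG)\leq \textrm{f.k}(RG)$. Indeed, if $\textrm{fd}_{RG}M<\infty$, then $\textrm{fd}_{RH}M\leq \textrm{fd}_{RG}M<\infty$ for every (finite) subgroup $H$ of $G$, and hence every module appearing in the supremum that defines $\textrm{fin.f.dim}(RG)$ also appears in the supremum that defines $\textrm{f.k}(RG)$. \end{Remark}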
\begin{Lemma}\label{l220}Let $R$ be a commutative ring and $G$ be a group. Then, for every subgroup $H$ of $G$ we have $\textrm{fin.f.dim}(RH)\leq \textrm{fin.f.dim}(RG)$. \end{Lemma}
\begin{proof}It suffices to assume that $\textrm{fin.f.dim}(RG)=n<\infty$. Let $M$ be an $RH$-module such that $\textrm{fd}_{RH}M=k<\infty$. Then, there exists an $RH$-flat resolution of $M$ of length $k$ $$0\rightarrow F_k \rightarrow \cdots \rightarrow F_1 \rightarrow F_0 \rightarrow M \rightarrow 0,$$ and hence we obtain an exact sequence of $RG$-modules of the form $$0\rightarrow \textrm{Ind}^G_H F_k \rightarrow \cdots \rightarrow \textrm{Ind}^G_H F_1 \rightarrow \textrm{Ind}^G_H F_0 \rightarrow \textrm{Ind}^G_H M\rightarrow 0,$$ which constitutes an $RG$-flat resolution of $\textrm{Ind}^G_H M$ of length $k$. Since $M$ is isomorphic to a direct summand of $\textrm{Res}^G_H \textrm{Ind}^G_H M$, we obtain that $\textrm{fd}_{RG}\textrm{Ind}^G_H M=k$, and hence $k\leq \textrm{fin.f.dim}(RG)=n$. Thus, $\textrm{fd}_{RH}M=k\leq n$ for every $RH$-module $M$ of finite flat dimension. We conclude that $\textrm{fin.f.dim}(RH)\leq \textrm{fin.f.dim}(RG)$, as needed. \end{proof}
\begin{Proposition}\label{prop224}Let $R$ be a commutative ring and $G$ be an $\textsc{\textbf{lh}}\mathfrak{F}$-group. Then, $\textrm{f.k}(RG)\leq \textrm{fin.f.dim}(RG)$. \end{Proposition}
\begin{proof}It suffices to assume that $\textrm{fin.f.dim}(RG)=n<\infty$. Let $M$ be an $RG$-module such that $\textrm{fd}_{RF}M<\infty$ for every finite subgroup $F$ of $G$. We will first show that $\textrm{fd}_{RH}M\leq n$ over any $\textsc{\textbf{h}}\mathfrak{F}$-subgroup $H$ of $G$. We use transfinite induction on the ordinal number $\alpha$, which is such that $H\in \textsc{\textbf{h}}_{\alpha}\mathfrak{F}$. If $\alpha=0$, then $H$ is finite and hence $\textrm{fd}_{RH}M<\infty$. Then, Lemma \ref{l220} yields $\textrm{fd}_{RH}M\leq\textrm{fin.f.dim}(RH)\leq \textrm{fin.f.dim}(RG)= n$. Now we assume that $\textrm{fd}_{RH'}M\leq n$ for every $\textsc{\textbf{h}}_{\beta}\mathfrak{F}$-subgroup $H'$ of $G$ and every $\beta<\alpha$. Let $H$ be an $\textsc{\textbf{h}}_{\alpha}\mathfrak{F}$-subgroup of $G$. Then, there
exists an exact sequence of $\mathbb{Z}H$-modules $$0\rightarrow C_r \rightarrow \cdots \rightarrow C_1 \rightarrow C_0 \rightarrow \mathbb{Z} \rightarrow 0,$$ where each $C_i$
is a direct sum of permutation $\mathbb{Z}H$-modules of the form $\mathbb{Z}[H/H']$, with $H'$ an $\textsc{\textbf{h}}_{\beta}\mathfrak{F}$-subgroup of $H$ for some $\beta<\alpha$. We note that the integer $r$ is the dimension of the $H$-CW-complex provided by the definition of $H$ being an $\textsc{\textbf{h}}_{\alpha}\mathfrak{F}$-group. The above exact sequence yields an exact sequence of $RH$-modules
\begin{equation}\label{eq14}
0\rightarrow M_r \rightarrow \cdots \rightarrow M_1 \rightarrow M_0 \rightarrow M \rightarrow 0,
\end{equation}
where each $M_i$ is a direct sum of modules of the form ${\textrm{Ind}^H_{H'}}{\textrm{Res}^H_{H'}} M$, where $H'\in \textsc{\textbf{h}}_{\beta}\mathfrak{F}$, $\beta<\alpha$ (see also \cite[Lemma 2.3]{Bis2}). Our induction hypothesis implies that $\textrm{fd}_{RH'}\textrm{Res}^H_{H'}M\leq n$, for every $H'\in \textsc{\textbf{h}}_{\beta}\mathfrak{F}$, $\beta<\alpha$, and hence we also have $\textrm{fd}_{RH}\textrm{Ind}^H_{H'}\textrm{Res}^H_{H'}M\leq n$ for every $H'\in \textsc{\textbf{h}}_{\beta}\mathfrak{F}$, $\beta<\alpha$. Consequently, $\textrm{fd}_{RH}M_i <\infty$, for every $i=0,\dots,r$, and equation (\ref{eq14}) yields $\textrm{fd}_{RH}M <\infty$. Invoking Lemma \ref{l220}, we infer that $\textrm{fd}_{RH}M\leq \textrm{fin.f.dim}(RH)\leq \textrm{fin.f.dim}(RG)= n$.
Let $G$ be an $\textsc{\textbf{lh}}\mathfrak{F}$-group. Then, $G$ can be expressed as the filtered union of its finitely generated subgroups $(G_{\lambda})_{\lambda}$, which are all contained in $\textsc{\textbf{h}}\mathfrak{F}$. Since $G_{\lambda}\in \textsc{\textbf{h}}\mathfrak{F}$, we have $\textrm{fd}_{RG_{\lambda}}M\leq n$. We consider an exact sequence of $RG$-modules \begin{equation}\label{eq15}
0 \rightarrow K_n \rightarrow F_{n-1} \rightarrow \cdots \rightarrow F_1 \rightarrow F_0 \rightarrow M \rightarrow 0, \end{equation}
where the $RG$-module $F_i$ is flat for every $i=0,\dots,n-1$. Then, $K_n$ is a flat $RG_{\lambda}$-module, and hence the $RG$-module $\textrm{Ind}^G_{G_{\lambda}}K_n$ is also flat for every $\lambda$. Consequently, the $RG$-module $K_n\cong {\lim\limits_{\longrightarrow}}_{\lambda}\textrm{Ind}^G_{G_{\lambda}}K_n$ is flat as a direct limit of flat modules. Thus, the exact sequence (\ref{eq15}) yields $\textrm{fd}_{RG}M\leq n$. We conclude that $\textrm{f.k}(RG)\leq n$, as needed.
\end{proof}
\begin{Lemma}\label{prop225}Let $R$ be a commutative ring such that $\textrm{sfli}R<\infty$ and $G$ be a group. Then, $\textrm{sfli}(RG)\leq \textrm{f.k}(RG)$. \end{Lemma}
\begin{proof}It suffices to assume that $\textrm{f.k}(RG)=n<\infty$. Let $I$ be an injective $RG$-module and $H$ a finite subgroup of $G$. Then, $\textrm{Ghd}_R H=0$ (see \cite[Proposition 3.6]{St1}), and hence \cite[Proposition 5.7]{St} yields $\textrm{sfli}(RH)\leq \textrm{sfli}R <\infty$. Since $I$ is injective as $RH$-module, we obtain that $\textrm{fd}_{RH}I<\infty$. It follows that $\textrm{fd}_{RG}I\leq \textrm{f.k}(RG)=n $, for every injective $RG$-module $I$. We conclude that $\textrm{sfli}(RG)\leq \textrm{f.k}(RG)$, as needed. \end{proof}
\begin{Corollary}\label{cor226}Let $R$ be a commutative ring such that $\textrm{sfli}R<\infty$ and $G$ be an $\textsc{\textbf{lh}}\mathfrak{F}$-group. Then, $\textrm{f.k}(RG)=\textrm{sfli}(RG)=\textrm{fin.f.dim}(RG)$. \end{Corollary}
\begin{proof}Since $RG\cong {(RG)}^{\textrm{op}}$, by \cite[Proposition 2.4(i)]{Emta} we obtain that $\textrm{fin.f.dim}(RG)\leq \textrm{sfli}(RG)$. Invoking Proposition \ref{prop224}, we have $\textrm{f.k}(RG)\leq \textrm{fin.f.dim}(RG)$. Moreover, Lemma \ref{prop225} yields $\textrm{sfli}(RG)\leq \textrm{f.k}(RG)$. We conclude that $\textrm{f.k}(RG)=\textrm{sfli}(RG)=\textrm{fin.f.dim}(RG)$, as needed. \end{proof}
\begin{Remark}\label{rem711} \rm Since the $RG$-module $B(G,R)$ is $R$-free and admits an $R$-split $RG$-linear monomorphism $\iota: R \rightarrow B(G,R)$, we infer that $B(G,R)$ is a weak characteristic module for $G$ over $R$ if and only if $\textrm{fd}_{RG} B(G,R)<\infty$. \end{Remark}
\begin{Theorem}\label{theo712}Let $R$ be a commutative ring such that $\textrm{sfli}R<\infty$ and consider an $\textsc{\textbf{lh}}\mathfrak{F}$-group $G$. Then:
\begin{itemize}
\item[(i)] $B(G,R)$ is a weak characteristic module for $G$ if and only if ${{\textrm{Ghd}}_{R}G}<\infty$,
\item[(ii)] ${{\textrm{Ghd}}_{R}G}=\textrm{fd}_{RG} B(G,R)$.
\end{itemize} \end{Theorem}
\begin{proof}(i) If $B(G,R)$ is a weak characteristic module, then \cite[Theorem 5.10]{St} implies that ${\textrm{Ghd}}_{R}G<\infty$. Conversely, we assume that ${\textrm{Ghd}}_{R}G<\infty$. Then, Corollary \ref{cor226} yields $\textrm{f.k}(RG)=\textrm{sfli}(RG)\leq {\textrm{Ghd}}_{R}G + \textrm{sfli}R <\infty$ (see \cite[Proposition 5.7]{St}). Since $B(G,R)$ is free as $RH$-module for every finite subgroup $H$ of $G$, we obtain that $\textrm{fd}_{RG}B(G,R)\leq \textrm{f.k}(RG)<\infty$. We conclude that $B(G,R)$ is a weak characteristic module for $G$ over $R$ (see Remark \ref{rem711}).
(ii) Using (i) and Remark \ref{rem711}, we have ${{\textrm{Ghd}}_{R}G}=\infty$ if and only if $\textrm{fd}_{RG}B(G,R)=\infty$. If ${{\textrm{Ghd}}_{R}G}<\infty$, then (i) implies that $B(G,R)$ is a weak characteristic module for $G$ over $R$, and hence, invoking \cite[Corollary 5.12(i),(iii)]{St}, we conclude that ${{\textrm{Ghd}}_{R}G}=\textrm{fd}_{RG} B(G,R)$. \end{proof}
\begin{Remark}\rm \label{rem78}Let $R$ be a commutative ring and $G$ be a group such that $\textrm{fd}_{\mathbb{Z}G}B(G,\mathbb{Z})<\infty$. Then $\textrm{fd}_{RG}B(G,R)\leq \textrm{fd}_{\mathbb{Z}G}B(G,\mathbb{Z})<\infty$. Indeed, let $\textrm{fd}_{\mathbb{Z}G}B(G,\mathbb{Z})=n$ and consider a $\mathbb{Z}G$-flat resolution $$0\rightarrow F_n \rightarrow F_{n-1}\rightarrow \cdots \rightarrow F_0 \rightarrow B(G,\mathbb{Z}) \rightarrow 0,$$ of $B(G,\mathbb{Z})$. Since $B(G,\mathbb{Z})$ is $\mathbb{Z}$-free (and hence $\mathbb{Z}$-flat), the above exact sequence is $\mathbb{Z}$-pure. Thus, we obtain an exact sequence of $RG$-modules $$0\rightarrow F_n\otimes_{\mathbb{Z}}R \rightarrow F_{n-1}\otimes_{\mathbb{Z}}R \rightarrow \cdots \rightarrow F_0\otimes_{\mathbb{Z}}R \rightarrow B(G,\mathbb{Z})\otimes_{\mathbb{Z}}R =B(G,R) \rightarrow 0,$$ which constitutes an $RG$-flat resolution of $B(G,R)$, and hence $\textrm{fd}_{RG}B(G,R)\leq \textrm{fd}_{\mathbb{Z}G}B(G,\mathbb{Z})$. \end{Remark}
\begin{Corollary}Let $R$ be a commutative ring such that $\textrm{sfli}R<\infty$ and $G$ be an $\textsc{\textbf{lh}}\mathfrak{F}$-group of type FP$_{\infty}$. Then, ${{\textrm{Ghd}}_{R}G}=\textrm{fd}_{RG} B(G,R)<\infty$. \end{Corollary}
\begin{proof}The equality ${{\textrm{Ghd}}_{R}G}=\textrm{fd}_{RG} B(G,R)$ follows from Theorem \ref{theo712}. Since the $\textsc{\textbf{lh}}\mathfrak{F}$-group $G$ is of type FP$_{\infty}$, using \cite[Corollary B.2(2)]{Kr2}, which is also valid for $\textsc{\textbf{lh}}\mathfrak{F}$-groups, we infer that $\textrm{fd}_{\mathbb{Z}G} B(G,\mathbb{Z})<\infty$. Then, $\textrm{fd}_{RG}B(G,R)\leq \textrm{fd}_{\mathbb{Z}G}B(G,\mathbb{Z})< \infty$ (see Remark \ref{rem78}). \end{proof}
\begin{Corollary}Let $G$ be an $\textsc{\textbf{lh}}\mathfrak{F}$-group of type FP$_{\infty}$. Then, ${{\textrm{Ghd}}_{\mathbb{Z}}G}=\textrm{fd}_{\mathbb{Z}G} B(G,\mathbb{Z})<\infty$. \end{Corollary}
\begin{Corollary}Let $R$ be a commutative ring such that $\textrm{sfli}R<\infty$ and $G$ be an $\textsc{\textbf{lh}}\mathfrak{F}$-group of type FP$_{\infty}$. Then, $\textrm{f.k}(RG)=\textrm{sfli}(RG)=\textrm{fin.f.dim}(RG)<\infty$. In particular, if $M$ is an $RG$-module, then $\textrm{fd}_{RG}M<\infty$ if and only if $\textrm{fd}_{RH}M<\infty$ for every finite subgroup $H$ of $G$. \end{Corollary}
\begin{proof}In view of Corollary \ref{cor226}, it suffices to prove that $\textrm{sfli}(RG)<\infty$. Invoking \cite[Corollary B.2(2)]{Kr2}, which is also valid for $\textsc{\textbf{lh}}\mathfrak{F}$-groups, and Remarks \ref{rem711}, \ref{rem78}, we obtain that $B(G,R)$ is a weak characteristic module for $G$ over $R$. Thus, \cite[Theorem 5.10]{St} yields $\textrm{sfli}(RG)<\infty$, as needed.\end{proof}
{\small {\sc Department of Mathematics,
University of Athens,
Athens 15784,
Greece}}
{\em E-mail address:} {\tt dstergiop@math.uoa.gr}
\end{document}
\begin{document}
\title{An iterative regularized mirror descent method for ill-posed nondifferentiable stochastic optimization
}
\author{Mostafa Amini\thanks{School of Industrial Engineering \& Management, Oklahoma State University, Stillwater, OK 74074, USA, \texttt{moamini@okstate.edu};}, {Farzad Yousefian}\thanks{School of Industrial Engineering \& Management, Oklahoma State University, Stillwater, OK 74074, USA,
\texttt{farzad.yousefian@okstate.edu};}
}
\setlength{\textheight}{9in} \setlength{\topmargin}{0in}\setlength{\headheight}{0in}\setlength{\headsep}{0in}
\setlength{\textwidth}{6.5in} \setlength{\oddsidemargin}{0in}\setlength{\marginparsep}{0in}
\maketitle
\thispagestyle{empty}
\begin{abstract} A wide range of applications arising in machine learning and signal processing can be cast as convex optimization problems. These problems are often ill-posed, i.e., the optimal solution lacks a desired property such as uniqueness or sparsity. In the literature, to address ill-posedness, a bilevel optimization problem is considered where the goal is to find, among the optimal solutions of the inner level optimization problem, a solution that minimizes a secondary metric, i.e., the outer level objective function. In addressing the resulting bilevel model, the convergence analysis of most existing methods is limited to the case where both inner and outer level objectives are differentiable deterministic functions. While these assumptions may not hold in big data applications, to the best of our knowledge, no solution method equipped with complexity analysis exists to address the presence of uncertainty and nondifferentiability in both levels in this class of problems. Motivated by this gap, we develop a first-order method called Iterative Regularized Stochastic Mirror Descent (IR-SMD). We establish the global convergence of the iterates generated by the algorithm to the optimal solution of the bilevel problem in an almost sure and a mean sense. We derive a convergence rate of ${\cal O}\left(1/N^{0.5-\delta}\right)$ for the inner level problem, where $\delta>0$ is an arbitrarily small scalar. Numerical experiments for solving two classes of bilevel problems, including a large scale binary text classification application, are presented. \end{abstract}
\section{Introduction}\label{sec:intro} Consider the following canonical stochastic convex optimization problem
\begin{align}\label{def:firstlevel} \tag{$P_f$}
\displaystyle \mbox{minimize}& \qquad f(x)\triangleq \EXP{F(x,\xi)}\\
\mbox{subject to} &\qquad x \in X, \notag \end{align} where $X \subseteq \mathbf{R}^n $ is a nonempty, closed and convex set, $f:X \rightarrow \mathbf{R}$ is a convex function given as an expected value of a stochastic function $F:X\times{\mathbf{R}^{d}} \rightarrow \mathbf{R}$, $\xi: \Omega \to \mathbf{R}^{d}$ is a random variable, and $(\Omega, {\cal F}, \mathbf{P})$ represents the associated probability space. In addressing \eqref{def:firstlevel}, Monte Carlo sampling methods have been very successful in the literature (\cite{Ermoliev69,Ermoliev83}). Of these, the stochastic approximation (SA) method, developed by Robbins and Monro \cite{robbins51sa}, has been applied extensively to solve stochastic optimization and equilibrium problems (\cite{Houyuan08,Farzad3}). Acceleration of SA methods was first introduced by Polyak and Juditsky in the 1990s \cite{Polyak92} and is achieved by employing averaging techniques. The extension of the SA scheme to non-Euclidean spaces was developed by Nemirovski et al. in \cite{Nemir09} and is called the stochastic mirror descent (SMD) method. In \cite{Nemir09}, the SMD method is applied to solve problem \eqref{def:firstlevel}, where the function $F(x,\xi)$ is assumed to be nondifferentiable and convex. An optimal convergence rate of ${\cal O}\left({1}/{\sqrt{N}}\right)$ is derived under averaging. Nedi\'c and Lee \cite{Nedic14} developed SMD methods with an optimal convergence rate under a different set of averaging weights. To address high dimensionality in stochastic optimization, Dang and Lan \cite{Dang15} developed a randomized block-coordinate SMD method in which only a block of the iterate is updated at each step. Optimal non-averaging SMD methods for smooth, nonsmooth, and high dimensional problems with strongly convex objective functions have also been developed (see \cite{Farzad1,Farzad2,Nahid17}).
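For intuition, the sketch below shows one concrete instance of the SMD update (our own minimal illustration, not code from any of the cited works): with the entropy distance-generating function over the probability simplex, the mirror step has the closed exponentiated-gradient form. The linear objective, noise level, and step-size constant are illustrative assumptions.

```python
import math
import random

def smd_simplex(grad, x0, steps, gamma0):
    """Stochastic mirror descent with the entropy mirror map.

    With the negative-entropy distance-generating function, the mirror
    step has the closed form x_{k+1,i} proportional to
    x_{k,i} * exp(-gamma_k * g_{k,i}), so every iterate stays on the
    probability simplex.
    """
    x = list(x0)
    for k in range(1, steps + 1):
        g = grad(x)                    # stochastic (sub)gradient sample at x
        gamma = gamma0 / math.sqrt(k)  # standard O(1/sqrt(k)) step size
        w = [xi * math.exp(-gamma * gi) for xi, gi in zip(x, g)]
        s = sum(w)
        x = [wi / s for wi in w]
    return x

# Illustration: minimize f(x) = E[<c + noise, x>] over the simplex;
# the minimizer concentrates on the smallest coordinate of c.
c = [0.9, 0.1, 0.5]
rng = random.Random(0)
grad = lambda x: [ci + 0.01 * rng.gauss(0.0, 1.0) for ci in c]
x_opt = smd_simplex(grad, [1.0 / 3] * 3, 2000, 1.0)
```

Because the update multiplies coordinatewise and renormalizes, no projection step is needed: feasibility with respect to the simplex is automatic, which is precisely the appeal of matching the mirror map to the geometry of $X$.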
Often in applications arising from machine learning and signal processing, problem \eqref{def:firstlevel} is ill-posed, i.e., the optimal solution lacks a desired property such as uniqueness or sparsity (see \cite{Tikhonov77} and \cite{Friedlander07} for a detailed review of ill-posed problems and their applications). To address ill-posedness in optimization, a secondary metric is employed that quantifies the desired property. The goal is then to obtain a solution among the optimal solution set of problem \eqref{def:firstlevel} that minimizes the secondary metric. Let function $h: X \rightarrow \mathbf R$ denote the secondary performance measure of interest. Consequently, the following optimization problem is considered \begin{align}\label{def:SL} \tag{$P_f^h$}
\displaystyle \mbox{minimize}& \qquad h(x)\\
\mbox{subject to} & \qquad x \in \mathop{\rm argmin}_{y\in X}\EXP{F(y,\xi)} \notag. \end{align} Problem \eqref{def:SL} has a bilevel structure and is referred to as the ``selection problem'' (e.g., see \cite{Friedlander07}). The main goal in this paper is to develop a first-order method equipped with complexity analysis for solving problem \eqref{def:SL}.
\begin{remark}
In some applications, function $h$ can be given in the form of an expectation. As such, throughout the paper, we assume function $h$ is given as $h(x)\triangleq \EXP{H(x,\xi)}$. A motivating example of this case is two-stage stochastic nonlinear programming, which will be discussed in the following section (see Lemma \ref{two-stage2bilevel}).
\end{remark} \begin{remark}
We note that the term ``bilevel'' has been often used in the literature to refer to a more general formulation, where functions $h$ and $f$ are each characterized in terms of two groups of variables, e.g., $x$ and $y$ (cf. \cite{Dempe12}). However, similar to the terminology used in \cite{Solodov07,Beck14, Sabach17}, throughout this paper, the term ``bilevel'' is used to refer to the specific formulation \eqref{def:SL}. \end{remark}
\subsection{Example problems} We discuss two classes of problems that can be formulated using the model \eqref{def:SL}.\\ \noindent (i) \textbf{Ill-posed empirical loss minimization (ELM):} Given a training set $\{(a_i,b_i)\}_{i=1}^N \subset \mathcal{A}\times \mathcal{B}$ consisting of input objects $a_i$ and their associated output values $b_i$ for datum $i$, the goal in the ELM model lies in learning a function ({e.g.,} a hyperplane in linear regression) in order to classify new observations. The resulting problem is cast as the following convex optimization problem \begin{align}\label{eqn:problem3}\tag{ELM}
\displaystyle \mbox{minimize}& \qquad\frac{1}{N}\sum_{i=1}^N \mathcal{L}(a_i^Tx,b_i)\\
\mbox{subject to} &\qquad x \in X \subseteq \mathbf{R}^n,\notag \end{align} where $\mathcal{L}:\mathbf{R}\times \mathcal{B}\to \mathbf{R}$ is a convex loss function. Depending on the type of the application, a variety of choices for $\mathcal{L}$ have been employed. For instance, in binary classification problems, given an output $b_i \in \{-1,+1\}$, the logistic regression problem is characterized by $\mathcal{L}(z,b_i)=\log(1+\exp(-b_iz))$, while {the hinge loss} is given by $\mathcal{L}(z,b_i)=\max\{0,1-b_iz\}$. Challenges arise when the resulting large-scale problem of the form \eqref{eqn:problem3} is ill-posed. To address ill-posedness, a secondary metric $h(x)$ can be considered. The goal is then to find among optimal solutions of \eqref{eqn:problem3}, one that minimizes $h(x)$ \cite{Mangasarian91,Friedlander07}. For example, to induce sparsity, the \textit{elastic net} regularizer \cite{Zou05} can be considered as the secondary metric. Consequently, the following bilevel optimization model is considered \cite{Friedlander07,Sabach17} \begin{align}\label{prob:ELM-Bilevel}
\displaystyle \mbox{minimize}& \qquad h(x)\triangleq \|x\|_1 + \mu \|x\|_2^2\\
\mbox{subject to} &\qquad x \in \arg \min_{y \in X} \EXP{F(y,\xi)}, \nonumber \end{align} where, $\xi \in \{\xi_1, \cdots, \xi_N\}$ has a finite support with $Prob(\xi=\xi_i)=1/N$, $F(y,\xi_i)=\mathcal{L}(a_i^Tx,b_i)$, and $\mu>0$ regulates the trade-off between $\ell_1$ and $\ell_2$ norms.
\noindent (ii) \textbf{Two-stage stochastic nonlinear programming:} In this part, we first consider two-stage stochastic programming (cf. \cite{Birge11} and Ch. 2 of \cite{Shapiro09}), which has a wide range of applications, especially in transportation, logistics, finance, and power systems \cite{Ankur12,Lan16,Junyi18}. We provide the required preliminaries that help us write a nonlinear two-stage program in the form of a single-stage problem. Then, we show that, under some mild assumptions, we can reformulate it as a bilevel problem of the form \eqref{def:SL}.
Consider the following two-stage stochastic nonlinear program \begin{align}\label{single stage}
\displaystyle \mbox{minimize}& \qquad c(z)+\EXP{Q(z,\xi)} \\
\mbox{subject to} &\qquad u_\ell(z) \leq 0 , \qquad \hbox{for } \ell=1,\cdots, L, \nonumber \\
&\qquad z \in Z, \nonumber \end{align} for $Z \subseteq \mathbf{R}^n$, functions $c,u_\ell:\mathbf{R}^n \rightarrow \mathbf{R}$, and a random variable $\xi \in \mathbf{R}^d$ with a finite support $\{\xi_1, \cdots, \xi_N \}$. Here, $Q(z,\xi_i)$ is the optimal value of the following second-stage problem for $i=1,\cdots, N$ \begin{align}\label{second stage}
\displaystyle \mbox{minimize}& \qquad q(y_i,\xi_i) \\
\mbox{subject to} &\qquad t_j(z) + w_j(y_i,\xi_i) \leq 0, \qquad \hbox{for } j=1,\cdots, J,\nonumber \\ &\qquad y_i \in Y, \nonumber \end{align} for the set $Y\subseteq \mathbf{R}^m$, and functions $t_j:\mathbf{R}^n \rightarrow \mathbf{R}$, $w_j:\mathbf{R}^{m}\times \mathbf{R}^{d} \rightarrow \mathbf{R}$. Note that we assume the random vector here has a finite support; the analysis for the case of an infinite support is discussed in \cite{Birge11}.
In the following lemma, we show how the two-stage stochastic program \eqref{single stage} can be written in a compact form. The proof is provided in Appendix \ref{proof of lemma 2-stage 2}. \begin{lemma} \label{lemma 2-stage 2}
Let $Z \subseteq \mathbf{R}^n$ and $Y \subseteq \mathbf{R}^m$ be nonempty, closed and convex sets, functions $c,u_\ell, t_j:\mathbf{R}^n \rightarrow \mathbf{R}$ be convex over the set $Z$, and function $w_j$ be convex over $Y$ for all $j=1,\cdots,J$. Also, assume $\xi$ is a random variable with a finite support $\{\xi_1, \cdots, \xi_N \}$ and $Prob(\xi=\xi_i)=p_i$ for $i=1,\cdots, N$. In addition, suppose $Y_i(z) \triangleq \{y_i\in Y| t_j(z)+w_j(y_i,\xi_i)\leq 0 \hbox{ for } j=1,\cdots,J\}$ is a nonempty set and $q$ is a real-valued convex function over $Y_i(z)$ for $i=1,\cdots, N$. Then, model \eqref{single stage} can be rewritten as follows
\begin{align}\label{compact two-stage}
\displaystyle \mbox{minimize}& \qquad c(z) + \sum_{i=1}^{N} p_i q(y_i,\xi_i) \\
\mbox{subject to} &\qquad u_\ell(z) \leq 0,\qquad \hbox{for } \ell=1, \cdots, L \nonumber \\
&\qquad t_j(z) + w_j(y_i,\xi_i) \leq 0, \qquad \hbox{for } i=1, \cdots, N,\ j=1,\cdots,J, \nonumber \\ &\qquad \ z \in Z, \ y_i \in Y, \qquad \hbox{for } i=1, \cdots, N. \nonumber
\end{align} \end{lemma}
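To make the compact (deterministic-equivalent) form concrete, the following sketch solves a toy linear instance of \eqref{compact two-stage} with $N=2$ equally likely scenarios. All problem data below (costs, scenario values, nonnegativity constraints) are hypothetical choices for illustration only.

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance of the compact form with N = 2 scenarios, p = (1/2, 1/2):
#   minimize   z + 0.5*y_1 + 0.5*y_2
#   subject to xi_i - z - y_i <= 0   (recourse: y_i >= xi_i - z)
#              z, y_1, y_2 >= 0,
# for scenario values xi = (1, 3).  Variables ordered as (z, y_1, y_2).
c = np.array([1.0, 0.5, 0.5])
A_ub = np.array([[-1.0, -1.0,  0.0],   # -z - y_1 <= -xi_1
                 [-1.0,  0.0, -1.0]])  # -z - y_2 <= -xi_2
b_ub = np.array([-1.0, -3.0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)

# For this instance any z in [0, 1] is optimal and the optimal value is 2.
print(res.fun)
```

The extensive form couples all scenario variables $y_i$ with the first-stage variable $z$ in a single program, which is exactly what Lemma \ref{lemma 2-stage 2} formalizes.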
In the following lemma, we show how the compact model \eqref{compact two-stage} can be reformulated as a bilevel problem of the form \eqref{def:SL}. The proof is given in Appendix \ref{proof of lemma two-stage2bilevel}. \begin{lemma}\label{two-stage2bilevel}
Let the random variable $\xi$ have a distribution with finite support $\{\xi_1,\cdots, \xi_N\}$ and $Prob(\xi=\xi_i)=p_i$ for $i=1, \cdots, N$. Assume $Z \subseteq \mathbf{R}^n$ and $Y \subseteq \mathbf{R}^m$ are nonempty, closed and convex sets. Then, under the assumptions of Lemma \ref{lemma 2-stage 2}, model \eqref{single stage} is equivalent to the following bilevel optimization problem
\begin{align}\label{bilevel two-stage2}
\displaystyle \mbox{minimize}& \qquad \EXP{H(x,\xi)} \\
\mbox{subject to} &\qquad x \in \mathop{\rm argmin}_{x \in X } \EXP{F(x,\xi)}, \nonumber
\end{align}
where $x^T \triangleq (z^T,y_1^T,\cdots,y_N^T)$, $X \triangleq Z\times Y^N$, and
\begin{align*}
F(x,\xi_i) &\triangleq \sum_{j=1}^{J} \max \{0, t_j(z)+w_j(y_i,\xi_i) \} + \sum_{\ell=1}^{L} \max \{0,u_\ell(z) \}, \\
H(x,\xi_i) &\triangleq c(z)+ q(y_i,\xi_i).
\end{align*} \end{lemma} \subsection{Existing methods} In addressing problem \eqref{def:SL}, challenges may arise due to: (i) the bilevel structure of the problem, (ii) uncertainty, and (iii) nondifferentiability of the functions $f$ and $h$. Next, we discuss some standard approaches for addressing these challenges in solving \eqref{def:SL} and explain their limitations. \subsubsection{Sequential regularization (SR)} When problem \eqref{def:firstlevel} is ill-posed, a standard approach is to employ a regularization technique, in which a regularized optimization problem of the following form is considered \begin{align}\label{regularized form} \tag{$P_\lambda$}
\displaystyle \mbox{minimize}& \qquad f_\lambda(x)\triangleq f(x)+\lambda h(x)\\
\mbox{subject to} &\qquad x \in X, \notag \end{align}
where $\lambda>0$ is a (user-specified) regularization parameter that provides a trade-off between the two metrics $f$ and $h$. Examples of this technique include the celebrated Tikhonov regularization \cite{Tikhonov77}, where $h(x)=\|x\|_2^2$. In signal processing applications, $\ell_1$ regularization (i.e., $h(x)=\|x\|_1$) has been used extensively to find sparse solutions, e.g., \cite{Boyd07,Beck09}. See \cite{Wright12,Lin14,Kimon16} for a more detailed discussion of the types of regularizers.
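As a minimal illustration of the trade-off in \eqref{regularized form}, consider $f(x)=\frac{1}{2}\|x-b\|_2^2$ together with the $\ell_1$ regularizer $h(x)=\|x\|_1$; the regularized problem then has the well-known componentwise soft-thresholding solution, and larger $\lambda$ yields sparser solutions. The data $b$ below is hypothetical.

```python
import numpy as np

def soft_threshold(b, lam):
    """Closed-form solution of min_x 0.5*||x - b||_2^2 + lam*||x||_1,
    applied componentwise (the proximal operator of lam*||.||_1)."""
    return np.sign(b) * np.maximum(np.abs(b) - lam, 0.0)

b = np.array([3.0, -0.4, 1.2, 0.05])
for lam in (0.0, 0.5, 2.0):
    x = soft_threshold(b, lam)
    print(lam, x, np.count_nonzero(x))   # sparsity increases with lam
```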
In addressing problem \eqref{def:SL}, one may solve a sequence of the regularized problems \eqref{regularized form} for $\lambda \in\{\lambda_k\} \subset \mathbf R_{++}$ with $\lambda_k \to 0$. This necessitates the implementation of a two-loop scheme where, in the inner loop, \eqref{regularized form} is solved for a fixed $\lambda$, and in the outer loop, $\lambda$ is updated. As such, the sequential regularization scheme is computationally expensive compared to single-loop schemes (see Ch. 12 of \cite{facchinei02finite} for more details). \subsubsection{Exact regularization} In addressing ill-posed problems, Mangasarian et al. \cite{Mangasarian79,Mangasarian91} studied ``exact regularization'' of linear and nonlinear programs. A regularization is said to be exact when an optimal solution of \eqref{regularized form} is also an optimal solution of problem \eqref{def:firstlevel}. Friedlander and Tseng \cite{Friedlander07} showed that the regularization of convex programs is exact when $\lambda$ is below some threshold, and derived error bounds for an inexact regularization. Extensions of this work to variational inequality problems are studied in \cite{Charitha17}. The main drawback of the exact regularization approach is that the threshold on the regularization parameter is not known and is often hard to determine a priori in practice (cf. \cite{Friedlander07}). \subsubsection{Iterative regularization (IR)} Another avenue for addressing ill-posedness is the iterative regularization technique. A key difference between SR and IR schemes is that in the latter, the regularization parameter is updated iteratively during the algorithm. As such, IR schemes have a single-loop structure and prove to be computationally more efficient than their SR counterparts. In \cite{Solodov07}, an IR scheme is developed where at the $k$th iteration, an approximate solution to problem \eqref{regularized form} with $\lambda=\lambda_k$ is generated.
It is shown that when $\sum_{k=0}^\infty \lambda_k= \infty$, the iterates generated by the proposed method converge to the optimal solution of \eqref{def:SL}. In \cite{Yamada11}, under a similar set of conditions on $\{\lambda_k\}$, a ``hybrid steepest descent method'' is developed and convergence to an optimal solution of the problem is established. Other papers addressing problem \eqref{def:SL} include \cite{Elias11,Helou17}. None of the aforementioned papers addresses complexity analysis. Our work builds on \cite{Farzad3}, where Yousefian et al. considered ill-posed stochastic variational inequality problems in which the mapping is merely monotone and possibly non-Lipschitz. In \cite{Farzad3}, an iterative regularized smoothing stochastic approximation scheme, called RSSA, is developed where at each iteration, a noisy observation of the stochastic mapping is used. It is shown that the sequence generated by the RSSA method converges to the least $\ell_2$ norm solution of the VI in both an almost sure and a mean sense. Also, a convergence rate of the order $1/\sqrt{k^{1/6-\epsilon}}$ is derived in terms of a suitably defined gap function, where $\epsilon>0$ is an arbitrarily small scalar. The main drawback of the RSSA scheme is the degraded convergence rate due to the use of a smoothing scheme. In this paper, this rate is improved to $1/\sqrt{k^{0.5-\epsilon}}$. Importantly, while in \cite{Farzad3} the regularizer $h$ is assumed to be the $\ell_2$ norm, in this work we allow the function $h$ to be given in the form of an expectation of a nondifferentiable stochastic and strongly convex function. Among the other papers that address complexity analysis for solving \eqref{def:SL}, \cite{Beck14} and \cite{Sabach17} are described next.
\subsubsection{Minimal norm gradient method (MNG)} In \cite{Beck14}, a ``minimal norm gradient algorithm'' is developed for solving problem \eqref{def:SL}, where $f$ and $h$ are both assumed to be deterministic and differentiable. It is shown that the sequence generated by the algorithm converges to the optimal solution of \eqref{def:SL} (see Theorem 4.1 in \cite{Beck14}). A convergence rate of the order $\frac{1}{\sqrt{k}}$ is derived in terms of $f$ values (see Theorem 4.2 in \cite{Beck14}). The main drawback is that MNG is a two-loop scheme where at each iteration, an optimization problem characterized by the function $h$ needs to be solved. This is computationally expensive when $h$ is complicated by uncertainty.
\subsubsection{Sequential averaging method (SAM)} In \cite{Sabach17}, a method called BiG-SAM, with an improved convergence rate of the order $\frac{1}{{k}}$ in terms of $f$ values, is developed (see Theorem 1 in \cite{Sabach17}). In contrast with \cite{Beck14}, BiG-SAM is a single-loop scheme. An underlying assumption in BiG-SAM is that the function $f$ is of the form $f_1(x)+f_2(x)$, where $f_1$ is continuously differentiable with Lipschitz gradients and $f_2$ is an extended-valued and possibly nonsmooth function. The differentiability of $f_1$ plays a key role in deriving the sublinear convergence rate in \cite{Sabach17}. Another limitation of \cite{Sabach17} is that both $f$ and $h$ are assumed to be deterministic. In big data applications, $f$ may be stochastic and nondifferentiable. Note that, in solving \eqref{prob:ELM-Bilevel}, the implementation of BiG-SAM becomes challenging due to the nondifferentiability of $h$ and the large sample size $N$.
\begin{table}[t]
\caption{Comparison of methods in addressing \eqref{def:SL}}
\centering
\scalebox{0.85}[0.85]{
\label{table1}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c}
\multirow{2}{*}{Reference} & \multicolumn{3}{c|}{Assump. on $f$} & \multicolumn{3}{c|}{Assump. on $h$} & \multirow{2}{*}{Method} & \multirow{2}{*}{Metric} & \multirow{2}{*}{Converg.} \\ \cline{2-7}
& con. & dif. & form & con. & dif. & form & & & \\ \hline \hline
\multirow{2}{*}{{\scriptsize Solodov} \cite{Solodov07}} & \multirow{2}{*}{C} & \multirow{2}{*}{\ding{51}} & \multirow{2}{*}{$f$} & \multirow{2}{*}{C} & \multirow{2}{*}{\ding{51}} & \multirow{2}{*}{$h$} & \multirow{2}{*}{iter. regu.} & $f_k-f^*$ & \multirow{2}{*}{asympt.} \\ \cline{9-9}
& & & & & & & & $h_k-h^*$ & \\ \hline
\multirow{2}{*}{{\scriptsize Solodov} \cite{Solodov072}} & \multirow{2}{*}{C} & \multirow{2}{*}{\ding{55}} & \multirow{2}{*}{$f$} & \multirow{2}{*}{C} & \multirow{2}{*}{\ding{55}} & \multirow{2}{*}{$h$} & \multirow{2}{*}{iter. regu.} & $f_k-f^*$ & \multirow{2}{*}{asympt.} \\ \cline{9-9}
& & & & & & & & $h_k-h^*$ & \\ \hline
\multirow{2}{*}{{\scriptsize Beck \& Sabach} \cite{Beck14}} & \multirow{2}{*}{C} & \multirow{2}{*}{\ding{51}} & \multirow{2}{*}{$f$} & \multirow{2}{*}{SC} & \multirow{2}{*}{\ding{51}} & \multirow{2}{*}{$h$} & \multirow{2}{*}{MNG} & $f_k-f^*$ & \scriptsize ${\cal O}(1/\sqrt k)$ \\ \cline{9-10}
& & & & & & & & $h_k-h^*$ & asympt. \\ \hline
\multirow{2}{*}{{\scriptsize Sabach \& Shtern} \cite{Sabach17}} & \multirow{2}{*}{C} & \multirow{2}{*}{\ding{51}} & \multirow{2}{*}{$f_1+f_2$} & \multirow{2}{*}{SC} & \multirow{2}{*}{\ding{51}} & \multirow{2}{*}{$h$} & \multirow{2}{*}{SAM} & $f_k-f^*$ &\scriptsize ${\cal O}(1/k)$ \\ \cline{9-10}
& & & & & & & & $h_k-h^*$ & asympt. \\ \hline
\multirow{2}{*}{{\scriptsize Garrigos et al.}\cite{Guil18}} & \multirow{2}{*}{C} & \multirow{2}{*}{\ding{51}} & \multirow{2}{*}{$f$} & \multirow{2}{*}{SC} & \multirow{2}{*}{\ding{51}} & \multirow{2}{*}{$h$} & \multirow{2}{*}{iter. regu} & $f_k-f^*$ & asympt. \\ \cline{9-10}
& & & & & & & & $h_k-h^*$ & \scriptsize ${\cal O}(1/k)$ \\ \hline
\multirow{2}{*}{{\scriptsize Yousefian et al.}\cite{Farzad3}} & \multirow{2}{*}{C} & \multirow{2}{*}{\ding{51}} & \multirow{2}{*}{$f$} & \multirow{2}{*}{SC} & \multirow{2}{*}{\ding{51}} & \multirow{2}{*}{$h$} & \multirow{2}{*}{iter. regu} & $f_k-f^*$ & asympt. \\ \cline{9-10}
& & & & & & & & $h_k-h^*$ & \scriptsize ${\cal O}(1/k^{(1/6-\delta)})$ \\ \hline
\multirow{2}{*}{{\scriptsize Amini \& Yousefian} \cite{Amini18}} & \multirow{2}{*}{C} & \multirow{2}{*}{\ding{55}} & \multirow{2}{*}{$\sum_i f_i$} & \multirow{2}{*}{SC} & \multirow{2}{*}{\ding{55}} & \multirow{2}{*}{$h$} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}incremental\\ iter. regu.\end{tabular}} & $f_k-f^*$ & \scriptsize ${\cal O}(1/k^{(0.5-\delta)})$ \\ \cline{9-10}
& & & & & & & & $h_k-h^*$ & asympt. \\ \hline
\multirow{2}{*}{{\scriptsize Kaushik \& Yousefian} \cite{Harshal18}} & \multirow{2}{*}{C} & \multirow{2}{*}{\ding{55}} & \multirow{2}{*}{high-dim} & \multirow{2}{*}{SC} & \multirow{2}{*}{\ding{55}} & \multirow{2}{*}{$h$} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}block-coord.\\ iter. regu.\end{tabular}} & $f_k-f^*$ & \scriptsize ${\cal O}(1/k^{(0.5-\delta)})$ \\ \cline{9-10}
& & & & & & & & $h_k-h^*$ & asympt. \\ \hline
\multirow{2}{*}{\bf This work} & \multirow{2}{*}{C} & \multirow{2}{*}{\ding{55}} & \multirow{2}{*}{$\EXP{F(\cdot,\xi)}$} & \multirow{2}{*}{SC} & \multirow{2}{*}{\ding{55}} & \multirow{2}{*}{$\EXP{H(\cdot,\xi)}$} & \multirow{2}{*}{iter. regu} &\scriptsize $\EXP{f_k}-f^*$ &\scriptsize ${\cal O}(1/k^{(0.5-\delta)})$ a.s. \\ \cline{9-10}
& & & & & & & & \scriptsize $\EXP{h_k}-h^*$ & asympt. a.s. \\ \hline \hline
\end{tabular}
}
\caption*{{\scriptsize C: Convex, SC: Strongly Convex}} \end{table} \subsection{Main contributions}\label{sec:main} To describe the contributions of our work, we provide Table \ref{table1}. The references \cite{Solodov07,Solodov072} provide no rate statements, while the analysis in \cite{Beck14,Sabach17} relies extensively on the differentiability of the functions $f$ and $h$. Moreover, in all the references listed in Table \ref{table1}, the functions in both levels of problem \eqref{def:SL} are assumed to be deterministic. In this paper, we allow both functions $f$ and $h$ to be nondifferentiable and complicated by uncertainty. We develop a first-order method called iterative regularized stochastic mirror descent (IR-SMD) (see Algorithm \ref{algorithm:SMD}), where at each iteration, a subgradient of the function $f$ is regularized using a regularization parameter and a subgradient of the function $h$. This regularization is iterative in the sense that the regularization parameter is updated at each iteration.
Our work is motivated by the idea of iterative regularization, which has been studied recently in \cite{Farzad3} and \cite{Garrigos17} for solving variational inequality and optimization problems in ill-posed regimes. Here we apply this technique to the optimization problem \eqref{def:SL}. We establish the convergence of the iterates generated by the IR-SMD algorithm to the optimal solution of problem \eqref{def:SL} in both an almost sure and a mean sense (see Theorem \ref{thm conv for xbar}). To perform complexity analysis, we derive a rate of ${\cal O}\left(1/N^{0.5-\delta}\right)$ with respect to the function $f$ in the inner level, where $\delta>0$ is an arbitrarily small scalar (see Theorem \ref{thm rate}). To the best of our knowledge, the proposed method appears to be the first that addresses problem \eqref{def:SL} with rate analysis when both $f$ and $h$ are nondifferentiable and stochastic.
The remainder of the paper is organized as follows. After presenting the notation, in Section \ref{assum:Prel} we provide the setup for the prox mapping and outline its main properties. We also discuss the main assumptions on the problem and show properties of the sequence of optimal solutions to the regularized problem \eqref{regularized form} (see Proposition \ref{prop:xk_estimate}). In Section \ref{sec:alg}, we present the proposed IR-SMD algorithm and outline the main assumptions of this scheme. In Section \ref{sec:conv}, we prove convergence of the averaging sequence generated by Algorithm \ref{algorithm:SMD} (see Theorem \ref{thm conv for xbar}). The rate of convergence of the proposed method is derived in Section \ref{sec:rate}. Numerical experiments on different nonsmooth problems, including a big data text classification application, are presented in Section \ref{sec:num}. The paper ends with concluding remarks in Section \ref{sec:rem}.
\textbf{Notation:}
The inner product of two vectors $x$ and $y$, both in $\mathbf R^n$, is denoted by $\langle x,y\rangle$. $\EXP{x}$ denotes the expectation of a random variable $x$. We let $\|\cdot\|$ and $\|\cdot\|_*$ denote a general norm and its dual, respectively. The dual norm is defined as $\|x\|_* = \sup \{ \langle x,y \rangle \mid \|y\|\leq 1\}$ for all $x \in \mathbf R^n$. For a convex function $f$ with domain dom$(f)$, any vector $g_f$ that satisfies $f(x)+\langle g_f, y-x \rangle \leq f(y)$ for all $y \in \hbox{dom}(f)$ is called a subgradient of $f$ at $x \in \hbox{dom}(f)$. We let $\partial f(x)$ and $\partial h(x)$ denote the sets of all subgradients of the functions $f$ and $h$ at $x$. Also, we let $\partial F(x,\xi)$ and $\partial H(x,\xi)$ denote the sets of all subgradients of the functions $F$ and $H$ at $x$ for a given $\xi$. Throughout the paper, we let $X^*$ and $x^* \in X^*$ denote the set of optimal solutions and an optimal solution of problem \eqref{def:firstlevel}, respectively. Similarly, we let $X^*_h$ and $x^*_h \in X^*_h$ denote the optimal solution set and an optimal solution of problem \eqref{def:SL}, respectively. We let $x^*_{\lambda}$ denote the optimal solution of problem \eqref{regularized form} and $f^*$ denote the optimal value of problem \eqref{def:firstlevel}. We use ``a.s.'' to denote almost sure convergence. \section{Preliminaries}\label{assum:Prel} In this section, we present an introduction to the basic concepts that will be employed in our analysis in the subsequent sections.
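Before proceeding, a quick numerical sanity check of the subgradient inequality and the dual norm defined above may be helpful; we use $f=\|\cdot\|_1$, whose subgradients and dual norm are standard, with randomly generated (illustrative) data.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.abs(x).sum()            # f(x) = ||x||_1

# sign(x) is a subgradient of ||.||_1 at x; verify the subgradient
# inequality f(x) + <g, y - x> <= f(y) on random pairs (x, y).
ok = True
for _ in range(1000):
    x, y = rng.normal(size=5), rng.normal(size=5)
    g = np.sign(x)
    ok &= f(x) + g @ (y - x) <= f(y) + 1e-12
print(ok)

# The dual of ||.||_1 is ||.||_inf: sup{<x,y> : ||y||_1 <= 1} = max_i |x_i|,
# attained at a signed coordinate vector.
x = rng.normal(size=5)
i = np.argmax(np.abs(x))
y = np.zeros(5); y[i] = np.sign(x[i])    # feasible: ||y||_1 = 1
print(np.isclose(x @ y, np.abs(x).max()))
```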
A function $\omega:X \rightarrow \mathbf{R}$ is called a distance generating function with respect to the norm $\|\cdot\|$ if $\omega$ is smooth and strongly convex with parameter $\mu_\omega>0$, i.e., \begin{align}\label{distance1}
\omega(y) \geq \omega(x) + \langle \nabla \omega(x), y-x \rangle + \frac{\mu_\omega}{2} \|x-y\|^2 \qquad \hbox{for all } x,y \in X. \end{align} Throughout, we assume \begin{align}\label{distance2}
\omega(y) \leq \omega(x) + \langle \nabla \omega(x), y-x \rangle + \frac{L_\omega}{2} \|x-y\|^2 \qquad \hbox{for all } x,y \in X, \end{align}
i.e., $\omega$ has Lipschitz gradients with parameter $L_\omega$. These assumptions have been considered in \cite{Nedic14,Dang15} and hold for example when $\omega(x)=\frac{1}{2}\|x\|^2_2$ for $\mu_\omega=L_\omega=1$. The Bregman distance $D: X\times X \rightarrow \mathbf{R}$ associated with $\omega$ is defined as follows: \begin{align}\label{distance3}
D(x,y) \triangleq \omega (y) - \omega(x)- \langle \nabla \omega(x) , y-x \rangle \qquad \hbox{for all } x,y \in X. \end{align} We also define the prox mapping $\mathcal P:X \times \mathbf{R}^n \rightarrow X$ as follows: \begin{align}\label{distance4}
\mathcal{P}_X(x,y) \triangleq \mathop{\rm argmin} _{z \in X} \{ \langle y,z \rangle + D(x,z)\} \qquad \hbox{for all } x\in X, y \in \mathbf{R}^n. \end{align}
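For the common choice $\omega(x)=\frac{1}{2}\|x\|_2^2$ (so that $\mu_\omega=L_\omega=1$), the Bregman distance becomes $D(x,z)=\frac{1}{2}\|z-x\|_2^2$ and the prox mapping reduces to a Euclidean projection, $\mathcal{P}_X(x,y)=\Pi_X(x-y)$. The following sketch verifies this numerically on a box; the set $X$ and the points $x,y$ are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# With omega(x) = 0.5*||x||_2^2 we get D(x,z) = 0.5*||z - x||^2, so
#   P_X(x, y) = argmin_z { <y,z> + 0.5*||z - x||^2 } = Proj_X(x - y).
X_bounds = [(0.0, 1.0)] * 3                   # X = [0,1]^3 (toy choice)
x = np.array([0.3, 0.9, 0.5])
y = np.array([0.8, -0.6, 0.1])

closed_form = np.clip(x - y, 0.0, 1.0)        # projection onto the box
obj = lambda z: y @ z + 0.5 * np.sum((z - x) ** 2)
numeric = minimize(obj, x, bounds=X_bounds).x # direct numerical minimization
print(closed_form, numeric)
```

For a non-Euclidean choice such as the negative entropy on the simplex, the same definition yields the familiar multiplicative (exponentiated-gradient) update.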
In the following lemma, we state some properties of Bregman distance that will be used in this paper. A more comprehensive discussion of these properties can be found in \cite{Nemir09}. \begin{lemma}[\bf{Properties of Bregman distance}] \label{D prop}
Let $D$ be the Bregman distance given by \eqref{distance3}. Then, the following relations hold:
\begin{itemize}
\item[(a)] $\frac{\mu_\omega}{2}\|x-y\|^2 \leq D(x,y) \leq \frac{L_\omega}{2}\|x-y\|^2 \qquad$for all $x,y \in X$.
\item[(b)] $D(x,z)=D(x,y)+D(y,z)+\langle\nabla\omega(y)-\nabla\omega(x),z-y\rangle \qquad$for all $x,y,z \in X$.
\item[(c)] $\nabla_z D(x,z)=\nabla\omega(z)-\nabla\omega(x) \qquad$for all $x,z \in X$.
\end{itemize} \end{lemma} Next, we state our main assumptions that will be used in the convergence analysis. \begin{assumption}[{\bf Problem properties}]\label{assum:properties}
Let the following hold:
\begin{itemize}
\item[(a)] The set $X \subset \mathbf R^n$ is nonempty, compact, and convex.
\item[(b)] The function $f(x)$ is subdifferentiable and convex over the set $X$.
\item[(c)] The function $h(x)$ is subdifferentiable and strongly convex with parameter $\mu_h>0$ with respect to $\|\cdot\|$; i.e., for all $x,y \in X$ and $g_h(x) \in \partial h(x)$, we have $h(x)+\langle g_h(x), y-x \rangle + \frac{\mu_h}{2}\|x-y\|^2 \leq h(y)$.
\item[(d)] The stochastic subgradient $g_F(x,\xi)$ is such that the following hold almost surely for all $x \in X$
\begin{align}
&\EXP{g_F(x,\xi) \mid x}= g_f(x),\label{assumption1:d1}
\\ &\EXP{\|g_F(x,\xi)\|_*^2} \leq C_F^2,\label{assumption1:d2}
\end{align}
where $g_f(x) \in \partial f(x)$, $g_F(x,\xi) \in \partial F(x,\xi)$ and $C_F>0$ is a scalar.
\item[(e)] The stochastic subgradient $g_H(x,\xi)$ is such that the following hold almost surely for all $x \in X$
\begin{align}
&\EXP{g_H(x,\xi) \mid x}= g_h(x), \label{assumption1:e1}\\
&\EXP{\|g_H(x,\xi)\|_*^2} \leq C_H^2,\label{assumption1:e2}
\end{align}
where $g_h(x) \in \partial h(x)$, $g_H(x,\xi) \in \partial H(x,\xi)$ and $C_H>0$ is a scalar.
\end{itemize} \end{assumption} \begin{remark}\label{assumptionandJensen}
Note that using Jensen's inequality, from Assumption \ref{assum:properties}(d,e) we have
\begin{align*}
&\|g_f(x)\|_*^2=\|\EXP{g_F(x,\xi)}\|_*^2 \leq \EXP{\|g_F(x,\xi)\|_*^2} \leq C_F^2, \\
&\|g_h(x)\|_*^2=\|\EXP{g_H(x,\xi)}\|_*^2 \leq \EXP{\|g_H(x,\xi)\|_*^2} \leq C_H^2.
\end{align*} \end{remark} In the following result, we show that under our assumptions, both problems \eqref{def:SL} and \eqref{regularized form} have unique optimal solutions. The proof is provided in Appendix \ref{proof of lem:unique sol for h}. \begin{lemma}[{\bf Uniqueness of $\mathbf{x_\lambda^*}$ and $\mathbf{x_h^*}$}]\label{lem:unique sol for h} Let Assumption \ref{assum:properties}(a,b,c) hold. Consider problems \eqref{regularized form} and \eqref{def:SL}. Then,
\begin{itemize}
\item[(a)] Problem \eqref{regularized form} has a unique optimal solution $x_\lambda^*$, for any $\lambda>0$.
\item[(b)] Problem \eqref{def:SL} has a unique optimal solution $x_h^*$.
\end{itemize} \end{lemma}
In the next lemma, we show two inequalities that will be used later in the proof of Proposition \ref{prop:xk_estimate}. The proof is presented in Appendix \ref{proof of results from convexity of f and strong convexity of h}. \begin{lemma}\label{results from convexity of f and strong convexity of h}
Let Assumption \ref{assum:properties}(b,c) hold. Suppose $\{\lambda_k\}$ is a sequence of nonnegative scalars. Let $x_{\lambda_k}^*$ be the unique optimal solution of problem ($P_{\lambda_k}$) for $k\geq0$. Then, we have
\begin{align}
&\langle g_f(x_{\lambda_{k-1}}^*) - g_f(x_{\lambda_k}^*) ,x_{\lambda_{k-1}}^*-x_{\lambda_k}^* \rangle \geq 0 \qquad \hbox{for all } k\geq 1, \label{result from convexity of f} \\
&\langle g_h(x_{\lambda_{k-1}}^*) - g_h(x_{\lambda_k}^*) ,x_{\lambda_{k-1}}^*-x_{\lambda_k}^* \rangle \geq \mu_h \|x_{\lambda_k}^* - x_{\lambda_{k-1}}^*\|^2 \qquad \hbox{for all } k\geq 1. \label{result from strong convexity of h}
\end{align} \end{lemma}
In the next result, considering a sequence of regularized problems \eqref{regularized form} for $\lambda \in \{\lambda_k\}$, we derive an upper bound, in a general norm, on the difference between the optimal solutions of two consecutive regularized problems. Importantly, we show that when $\lambda_k$ decreases to zero, the trajectory of optimal solutions of the regularized problems, i.e., $\{x^*_{\lambda_k}\}$, converges to the optimal solution of problem \eqref{def:SL}, i.e., $x_h^*$. This result is key to the convergence analysis of our proposed algorithm. The proof is provided in Appendix \ref{proof of prop:xk_estimate}. \begin{proposition}[{\bf Properties of sequence $\mathbf{\{x_{\lambda_k}^*\}}$}]\label{prop:xk_estimate}
Let Assumption \ref{assum:properties} hold. Let $\{\lambda_k\}$ denote a nonnegative sequence for $k \geq 0$ and $x_{\lambda_k}^*$ be the unique optimal solution to problem ($P_{\lambda_k}$) for $k\geq0$.
\begin{itemize}
\item[(a)] Consider problem \eqref{regularized form}. Let $\mu_h$ and $C_H$ be given by Assumption \ref{assum:properties}(c,e). Then, for all $k \geq 1$
\begin{align}\label{boundxk}\|x_{\lambda_k}^*-x_{\lambda_{k-1}}^*\|\leq \frac{C_H}{\mu_h}\left |1-\frac{\lambda_{k-1}}{\lambda_k} \right|.\end{align}
\item[(b)] Consider problem \eqref{def:SL}. When $\lambda_k \rightarrow0$, then the sequence $\{x_{\lambda_k}^*\}$ converges to the unique optimal solution of problem \eqref{def:SL}, i.e., $x_h^*$.
\end{itemize} \end{proposition}
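Part (b) can be illustrated with a classical Tikhonov path: take $f(x)=\frac{1}{2}\|Ax-b\|_2^2$ with non-unique minimizers and $h(x)=\frac{1}{2}\|x\|_2^2$, for which $x^*_\lambda$ is available in closed form. The data $A,b$ below are hypothetical.

```python
import numpy as np

# Ill-posed least squares: f(x) = 0.5*||Ax - b||^2 with A = [1 1], b = 1
# has the solution set {x : x_1 + x_2 = 1}.  With h(x) = 0.5*||x||_2^2,
# the regularized solution x_lam = A^T (A A^T + lam I)^{-1} b is known in
# closed form, and x_h^* is the least-norm solution (0.5, 0.5).
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

def x_lam(lam):
    return A.T @ np.linalg.solve(A @ A.T + lam * np.eye(1), b)

for lam in (1.0, 0.1, 0.001):
    print(lam, x_lam(lam).ravel())   # approaches (0.5, 0.5) as lam -> 0
```

Here $x^*_\lambda = \frac{1}{2+\lambda}(1,1)^T$, so the trajectory converges to $x_h^*=(0.5,0.5)^T$ as the proposition predicts.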
\section{Algorithm outline} \label{sec:alg} In this section, we present the iterative regularized stochastic mirror descent (IR-SMD) method for solving problem \eqref{def:SL}. An outline of this method is presented by Algorithm \ref{algorithm:SMD}. Recall the definition of prox mapping in \eqref{distance4}. In the IR-SMD method, at each iteration, an iterate $x_k$ is updated as follows \begin{align}\label{alg}
x_{k+1} := \mathcal{P}_X(x_k, \gamma_k(g_F(x_k,\xi_k) + \lambda_k g_H(x_k, \tilde \xi_k))) \qquad \hbox{for all } k \geq 0, \end{align} where $\gamma_k>0$ is a suitably chosen stepsize, $\lambda_k>0$ is an iterative regularization parameter, $g_F(x_k,\xi_k) \in \partial F(x_k,\xi_k)$, $g_H(x_k, \tilde \xi_k) \in\partial H(x_k, \tilde \xi_k)$, and $\xi_k,\tilde \xi_k$ are two i.i.d. realizations of the random variable $\xi$. A main distinction from the classical SMD method \cite{Nemir09,Nedic14,Dang15} is the additional regularization term $\lambda_k g_H(x_k, \tilde \xi_k)$, which incorporates first-order information of the secondary objective function. We consider a weighted average sequence $\{\bar x_k\}$ defined as below: \begin{align}\label{weighted alg}
\bar x_{k+1} := \sum_{t=0}^{k} \eta_{t,k} x_t, \qquad \hbox{where } \eta_{t,k} \triangleq \frac{\gamma_t^r}{\sum_{i=0}^{k} \gamma_i^r}, \end{align} in which $r<1$ is a constant. Note that using induction, it can be shown that relation \eqref{weighted alg} is equivalent to \eqref{def:averagingII} in Algorithm \ref{algorithm:SMD} (see e.g., Proposition 3 in \cite{Farzad4}).
An important research question in our work is how to update the two parameters $\gamma_k$ and $\lambda_k$ in order to establish convergence of the averaging sequence $\{\bar x_N\}$, generated by Algorithm \ref{algorithm:SMD}, to the unique optimal solution of problem \eqref{def:SL}. This will be addressed by Theorem \ref{thm conv for xbar}. Another important research question concerns the complexity analysis of Algorithm \ref{algorithm:SMD}. This will be addressed in Theorem \ref{thm rate}, where under specific update rules for the stepsize and regularization parameter, we derive the rate ${\cal O}\left(1/N^{0.5-\delta}\right)$ with respect to $f$ function values.
\begin{algorithm}
\caption{Iterative Regularized Stochastic Mirror Descent Algorithm (IR-SMD)}
\label{algorithm:SMD}
\begin{algorithmic}
\STATE{\textbf{initialization:} Set a random initial point $x_0\in X$, $\gamma_0>0$ and $\lambda_0>0$ such that $\gamma_0\lambda_0 \leq \frac{L_\omega}{\mu_h}$, a scalar $r<1$, $\bar x_0=x_0 \in \mathbf R^n$, and $S_0=\gamma_0^r$.}
\FOR{$k=0,1,\cdots,N-1$}
\STATE{Generate $\xi_k$ and $\tilde \xi_k$ as i.i.d. realizations of the random vector $\xi$.}
\STATE{Evaluate subgradients $g_F(x_k,\xi_k) \in \partial F(x_k,\xi_k)$ and $g_H(x_k, \tilde \xi_k) \in\partial H(x_k, \tilde \xi_k)$.}
\STATE{Update $x_k$ using the following relation:}
\begin{align}\label{mainstep}
x_{k+1} := \mathcal{P}_X(x_k, \gamma_k(g_F(x_k,\xi_k) + \lambda_k g_H(x_k, \tilde \xi_k))).
\end{align}
\STATE{Update $S_k$ and $\bar x_{k}$ using the following recursions:}
\begin{align}
&S_{k+1}:=S_k+\gamma_{k+1}^r,\label{def:averagingI} \\
&\bar x_{k+1}:=\frac{S_k \bar x_k+\gamma_{k+1}^r x_{k+1}}{S_{k+1}}.\label{def:averagingII}
\end{align}
\STATE{Update the stepsize $\gamma_k$ and regularization parameter $\lambda_k$ (see Theorem \ref{thm conv for xbar} and \ref{thm rate}).}
\ENDFOR
\RETURN $\bar x_{N};$
\end{algorithmic}
\end{algorithm}
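The steps of Algorithm \ref{algorithm:SMD} can be sketched in a few lines of NumPy. The toy instance below is purely illustrative: $X=[-2,2]^2$, $F(x,\xi)=|x_1+x_2-1+\xi|$ with zero-mean Gaussian noise (so the inner solution set of $f$ is the line $x_1+x_2=1$), and $H(x,\xi)=\frac{1}{2}\|x\|_2^2$, whose least-norm selection is $x_h^*=(0.5,0.5)$. The stepsize and regularization rules are illustrative choices, not the tuned rules of Theorems \ref{thm conv for xbar} and \ref{thm rate}; with $\omega=\frac{1}{2}\|\cdot\|_2^2$ the prox step reduces to clipping to the box.

```python
import numpy as np

rng = np.random.default_rng(1)

def g_F(x, xi):                           # a subgradient of F(., xi) at x
    return np.sign(x[0] + x[1] - 1.0 + xi) * np.ones(2)

def g_H(x, xi):                           # gradient of H(., xi) at x
    return x                              # (H deterministic here: xi unused)

N, r = 50_000, 0.5
gamma = lambda k: 0.5 / (k + 1) ** 0.51   # illustrative stepsize rule
lam = lambda k: 1.0 / (k + 1) ** 0.25     # illustrative regularization rule
x = np.array([2.0, 2.0])                  # x_0; gamma(0)*lam(0) <= L_w/mu_h = 1
x_bar, S = x.copy(), gamma(0) ** r        # averaging state: x_bar_0, S_0
for k in range(N):
    xi, xi_t = rng.normal(0.0, 0.2, size=2)
    # prox step = Euclidean projection onto the box X = [-2, 2]^2
    x = np.clip(x - gamma(k) * (g_F(x, xi) + lam(k) * g_H(x, xi_t)), -2.0, 2.0)
    w = gamma(k + 1) ** r                 # gamma_{k+1}^r, cf. the recursions
    x_bar = (S * x_bar + w * x) / (S + w)
    S += w
print(x_bar)                              # should approach (0.5, 0.5)
```

With these choices, $\gamma_k\lambda_k \to 0$ while $\sum_k \gamma_k\lambda_k = \infty$, so the iterates track the regularization path toward $x_h^*$.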
We make the following assumption on the random variables in the algorithm. \begin{assumption} [{\bf Random variable $\mathbf{\xi}$}]\label{assum:RVs} For all $k \geq 0$, the random variables $\xi_k, \tilde \xi_k \in \mathbf{R}^{d}$ are i.i.d. \end{assumption} Throughout, the history of the method is defined as \begin{align}\label{assumption2:f}
\mathcal{F}_k =\{x_0,\xi_0,\tilde \xi_0, \xi_1, \tilde \xi_1, \cdots, \xi_{k-1}, \tilde \xi_{k-1} \} \qquad \hbox{for all } k\geq 1. \end{align}
\section{Convergence analysis} \label{sec:conv} In this section, our main objective is to establish convergence of the sequence $\{\bar x_N\}$ to the optimal solution of problem \eqref{def:SL}. To this end, we first derive a recursive upper bound on the error of the sequence $\{x_k\}$ in Lemma \ref{recursive bd}. This result will be key to establishing convergence of the averaging sequence, which is presented in Theorem \ref{thm conv for xbar}.
We start with the following result, where we characterize the error of the algorithm using a recursive relation in terms of the Bregman distance. \begin{lemma} [\bf{A recursive upper bound}] \label {recursive bd} Consider problem \eqref{def:SL}. Let the sequence $\{x_k\}$ be generated by Algorithm \ref{algorithm:SMD}. Let Assumptions \ref{assum:properties} and \ref{assum:RVs} hold. Also assume that $0 < \gamma_k\lambda_k \leq \frac{L_\omega}{\mu_h}$. Then, for all $k\geq 1$ we have
\begin{align}\label{a recursive upper bound}
\EXP{D(x_{k+1},x_{\lambda_k}^*)|\mathcal{F}_k}&\leq \left(1-\frac{\mu_h}{2L_\omega}\gamma_k\lambda_k\right)D(x_k,x_{\lambda_{k-1}}^*)+ \frac{2C_H^2L_\omega^3}{\mu_h^3\mu_\omega \gamma_k\lambda_k}\left(\frac{\lambda_{k-1}}{\lambda_k}-1\right)^2\nonumber\\&+\frac{4C_F^2}{\mu_\omega}\gamma_k^2+\frac{4C_H^2}{\mu_\omega}\gamma_k^2\lambda_k^2,
\end{align}
where $x_{\lambda_k}^*$ is the unique optimal solution of problem \eqref{regularized form} for $\lambda=\lambda_k$. \end{lemma} \begin{proof}
Let $k\geq 1$ be given and let $z^*$ denote the minimizer in \eqref{distance4}. From the optimality conditions we have
\begin{align*}
\langle y + \nabla_{z} D(x,z^*),z-z^* \rangle \geq 0 \qquad \hbox{for all } x,z\in X, y \in \mathbf{R}^n.
\end{align*}
By Lemma \ref{D prop}(c), we can replace $\nabla_{z} D(x,z^*)$ by $\nabla\omega(z^*)-\nabla\omega(x)$. So we obtain
\begin{align*}
\langle y + \nabla \omega(z^*)-\nabla \omega(x),z-z^* \rangle \geq 0 \qquad \hbox{for all } x,z\in X, y \in \mathbf{R}^n.
\end{align*}
Set $x:= x_k$ and $y:=\gamma_k(g_F(x_k,\xi_k) + \lambda_k g_H(x_k, \tilde \xi_k))$. Note that from \eqref{mainstep} and \eqref{distance4} we have $z^*=x_{k+1}$. We obtain
\begin{align} \label{proof rate1}
\langle \gamma_k(g_F(x_k,\xi_k) + \lambda_k g_H(x_k, \tilde \xi_k)) + \nabla \omega(x_{k+1})-\nabla \omega(x_k),z-x_{k+1} \rangle \geq 0 \qquad \hbox{for all } z\in X.
\end{align}
Consider problem \eqref{regularized form} when $\lambda=\lambda_k$. From optimality conditions we have
\begin{align}\label{proof rate2}
\langle g_f(x_{\lambda_k}^*) + \lambda_k g_h(x_{\lambda_k}^*),x-x_{\lambda_k}^* \rangle \geq 0 \qquad \hbox{for all } x \in X.
\end{align}
Letting $z:=x_{\lambda_k}^*$ in \eqref{proof rate1}, setting $x:=x_{k+1}$ in \eqref{proof rate2} and multiplying the latter by $\gamma_k$, and adding the resulting inequalities, we obtain
\begin{align*}
\langle \gamma_kg_F(x_k,\xi_k) + \gamma_k\lambda_k g_H(x_k, \tilde \xi_k) &- \gamma_k g_f(x_{\lambda_k}^*) - \gamma_k\lambda_k g_h(x_{\lambda_k}^*)\\&+\nabla \omega(x_{k+1})-\nabla \omega(x_k) ,x_{\lambda_k}^*-x_{k+1} \rangle \geq 0.
\end{align*}
By Lemma \ref{D prop}(b), we know that $\langle \nabla \omega(x_{k+1})-\nabla \omega(x_k) ,x_{\lambda_k}^*-x_{k+1} \rangle= D(x_k,x_{\lambda_k}^*)-D(x_{k+1},x_{\lambda_k}^*)-D(x_k,x_{k+1})$ and also from strong convexity of $\omega$ we have $D(x_k,x_{k+1}) \geq \frac{\mu_\omega}{2} \|x_{k+1}-x_k\|^2$. Combining these relations with the preceding inequality and rearranging the terms we have
\begin{align*}
D(x_{k+1},x_{\lambda_k}^*) &\leq D(x_k,x_{\lambda_k}^*) -\frac{\mu_\omega}{2} \|x_{k+1}-x_k\|^2 \\&+\gamma_k\langle g_F(x_k,\xi_k)-g_f(x_{\lambda_k}^*),x_{\lambda_k}^*-x_{k+1} \rangle\\&+ \gamma_k\lambda_k\langle g_H(x_k, \tilde \xi_k)-g_h(x_{\lambda_k}^*),x_{\lambda_k}^*-x_{k+1} \rangle.
\end{align*}
By adding and subtracting $x_k$ in the last two terms of the right-hand side
we obtain
\begin{align*}
D(x_{k+1},x_{\lambda_k}^*) &\leq D(x_k,x_{\lambda_k}^*) -\frac{\mu_\omega}{2} \|x_{k+1}-x_k\|^2 +\gamma_k\langle g_F(x_k,\xi_k)-g_f(x_{\lambda_k}^*),x_{\lambda_k}^*-x_k \rangle\\&+\underbrace{\gamma_k\langle g_F(x_k,\xi_k)-g_f(x_{\lambda_k}^*),x_k-x_{k+1} \rangle}_\text{\hbox{Term1}} \\& + \gamma_k\lambda_k\langle g_H(x_k, \tilde \xi_k)-g_h(x_{\lambda_k}^*),x_{\lambda_k}^*-x_k \rangle \\&+ \underbrace{\gamma_k\lambda_k\langle g_H(x_k, \tilde \xi_k)-g_h(x_{\lambda_k}^*),x_k-x_{k+1} \rangle}_\text{\hbox{Term2}}.
\end{align*}
Note that from the Fenchel--Young inequality, $\langle a,b \rangle \leq \frac{1}{2\alpha}\|a\|^2 +\frac{\alpha}{2}\|b\|_*^2$ for any $a,b \in \mathbf{R}^n$ and $\alpha>0$. Therefore, applying this relation to Term1 and Term2 (with $a:=x_k-x_{k+1}$ and $\alpha:=2/\mu_\omega$), we obtain
\begin{align*}
D(x_{k+1},x_{\lambda_k}^*) &\leq D(x_k,x_{\lambda_k}^*) -\frac{\mu_\omega}{2} \|x_{k+1}-x_k\|^2 +
\gamma_k\langle g_F(x_k,\xi_k)-g_f(x_{\lambda_k}^*),x_{\lambda_k}^*-x_k\rangle\\
&+ \frac{\gamma_k^2}{\mu_\omega}\|g_F(x_k,\xi_k)-g_f(x_{\lambda_k}^*)\|_*^2 + \frac{\mu_\omega}{4}\|x_{k+1}-x_k\|^2 \\& +
\gamma_k\lambda_k\langle g_H(x_k, \tilde \xi_k)-g_h(x_{\lambda_k}^*),x_{\lambda_k}^*-x_k \rangle \\&+ \frac{\gamma_k^2\lambda_k^2}{\mu_\omega}\|g_H(x_k, \tilde \xi_k)-g_h(x_{\lambda_k}^*)\|_*^2 + \frac{\mu_\omega}{4}\|x_{k+1}-x_k\|^2.
\end{align*}
Combining the two terms $\frac{\mu_\omega}{4}\|x_{k+1}-x_k\|^2$ with $-\frac{\mu_\omega}{2}\|x_{k+1}-x_k\|^2$, we obtain
\begin{align*}
D(x_{k+1},x_{\lambda_k}^*) &\leq D(x_k,x_{\lambda_k}^*) +\frac{\gamma_k^2}{\mu_\omega}\|g_F(x_k,\xi_k)-g_f(x_{\lambda_k}^*)\|_*^2 \\& +\frac{\gamma_k^2\lambda_k^2}{\mu_\omega}\|g_H(x_k, \tilde \xi_k)-g_h(x_{\lambda_k}^*)\|_*^2+\gamma_k\langle g_F(x_k,\xi_k)-g_f(x_{\lambda_k}^*),x_{\lambda_k}^*-x_k\rangle \\& + \gamma_k\lambda_k\langle g_H(x_k, \tilde \xi_k)-g_h(x_{\lambda_k}^*),x_{\lambda_k}^*-x_k \rangle .
\end{align*}
Using the triangle inequality together with $(u+v)^2 \leq 2u^2+2v^2$, we have
\begin{align*}
D(x_{k+1},x_{\lambda_k}^*) &\leq D(x_k,x_{\lambda_k}^*) +2\frac{\gamma_k^2}{\mu_\omega}\|g_F(x_k,\xi_k)\|_*^2+2\frac{\gamma_k^2}{\mu_\omega}\|g_f(x_{\lambda_k}^*)\|_*^2\\&+2\frac{\gamma_k^2\lambda_k^2}{\mu_\omega}\|g_H(x_k, \tilde \xi_k)\|_*^2+2\frac{\gamma_k^2\lambda_k^2}{\mu_\omega}\|g_h(x_{\lambda_k}^*)\|_*^2\\&+\gamma_k\langle g_F(x_k,\xi_k)-g_f(x_{\lambda_k}^*),x_{\lambda_k}^*-x_k\rangle\\&+ \gamma_k\lambda_k\langle g_H(x_k, \tilde \xi_k)-g_h(x_{\lambda_k}^*),x_{\lambda_k}^*-x_k \rangle .
\end{align*}
By Assumption \ref{assum:properties}(d,e) and using the relations in Remark \ref{assumptionandJensen} we have
\begin{align*}
D(x_{k+1},x_{\lambda_k}^*) &\leq D(x_k,x_{\lambda_k}^*)+2\frac{\gamma_k^2}{\mu_\omega}C_F^2+2\frac{\gamma_k^2\lambda_k^2}{\mu_\omega}C_H^2+2\frac{\gamma_k^2}{\mu_\omega}\|g_F(x_k,\xi_k)\|_*^2 \\& +2\frac{\gamma_k^2\lambda_k^2}{\mu_\omega}\|g_H(x_k, \tilde \xi_k)\|_*^2+\gamma_k\langle g_F(x_k,\xi_k)-g_f(x_{\lambda_k}^*),x_{\lambda_k}^*-x_k\rangle \\& + \gamma_k\lambda_k\langle g_H(x_k, \tilde \xi_k)-g_h(x_{\lambda_k}^*),x_{\lambda_k}^*-x_k \rangle .
\end{align*}
By taking conditional expectation on $\mathcal{F}_k$ and using relations \eqref{assumption1:d2} and \eqref{assumption1:e2}, we have
\begin{align*}
\EXP{D(x_{k+1},x_{\lambda_k}^*)|\mathcal{F}_k} &\leq D(x_k,x_{\lambda_k}^*)+4\frac{\gamma_k^2}{\mu_\omega}C_F^2+4\frac{\gamma_k^2\lambda_k^2}{\mu_\omega}C_H^2 \\& +\gamma_k\langle \EXP{g_F(x_k,\xi_k)|\mathcal{F}_k}-g_f(x_{\lambda_k}^*),x_{\lambda_k}^*-x_k\rangle \\& + \gamma_k\lambda_k \left \langle \EXP{g_H(x_k, \tilde \xi_k)|\mathcal{F}_k}-g_h(x_{\lambda_k}^*),x_{\lambda_k}^*-x_k \right \rangle.
\end{align*}
Using relations \eqref{assumption1:d1} and \eqref{assumption1:e1}, we obtain
\begin{align*}
\EXP{D(x_{k+1},x_{\lambda_k}^*)|\mathcal{F}_k} &\leq D(x_k,x_{\lambda_k}^*)+4\frac{\gamma_k^2}{\mu_\omega}C_F^2+4\frac{\gamma_k^2\lambda_k^2}{\mu_\omega}C_H^2\\&+\gamma_k\langle g_f(x_k)-g_f(x_{\lambda_k}^*),x_{\lambda_k}^*-x_k\rangle\\&+ \gamma_k\lambda_k\langle g_h(x_k)-g_h(x_{\lambda_k}^*),x_{\lambda_k}^*-x_k \rangle.
\end{align*}
Similar to the proof of Lemma \ref{results from convexity of f and strong convexity of h}, by convexity of $f$ and strong convexity of $h$ we know that $\langle g_f(x_k)-g_f(x_{\lambda_k}^*),x_{\lambda_k}^*-x_k\rangle \leq 0$ and $\langle g_h(x_k)-g_h(x_{\lambda_k}^*),x_{\lambda_k}^*-x_k \rangle \leq -\mu_h\|x_k-x_{\lambda_k}^*\|^2 \leq -\frac{\mu_h}{2}\|x_k-x_{\lambda_k}^*\|^2$, so
\begin{align*}
\EXP{D(x_{k+1},x_{\lambda_k}^*)|\mathcal{F}_k} \leq D(x_k,x_{\lambda_k}^*)+4\frac{\gamma_k^2}{\mu_\omega}C_F^2+4\frac{\gamma_k^2\lambda_k^2}{\mu_\omega}C_H^2-\frac{\gamma_k\lambda_k\mu_h}{2}\|x_k-x_{\lambda_k}^*\|^2.
\end{align*}
From Lemma \ref{D prop}(a), $D(x,y) \leq \frac{L_\omega}{2}\|x-y\|^2$ for all $x,y \in X$ so,
\begin{align} \label{proofrate3}
\EXP{D(x_{k+1},x_{\lambda_k}^*)|\mathcal{F}_k} \leq \left(1-\frac{\gamma_k\lambda_k\mu_h}{L_\omega}\right)D(x_k,x_{\lambda_k}^*)+4\frac{\gamma_k^2}{\mu_\omega}C_F^2+4\frac{\gamma_k^2\lambda_k^2}{\mu_\omega}C_H^2.
\end{align}
Next we relate $D(x_k,x_{\lambda_k}^*)$ to $D(x_k,x_{\lambda_{k-1}}^*)$. By Lemma \ref{D prop}(b) we have
\begin{align*}
D(x_k,x_{\lambda_k}^*)=D(x_k,x_{\lambda_{k-1}}^*)+D(x_{\lambda_{k-1}}^*,x_{\lambda_k}^*)+ \underbrace{\langle \nabla\omega(x_{\lambda_{k-1}}^*)-\nabla\omega(x_k), x_{\lambda_k}^*-x_{\lambda_{k-1}}^* \rangle}_\text{\hbox{Term3}}.
\end{align*}
By multiplying and dividing Term3 by $\sqrt{\frac{\mu_h\mu_\omega \gamma_k\lambda_k}{2L_\omega^3}}$ and using Fenchel's inequality we obtain
\begin{align*}
D(x_k,x_{\lambda_k}^*) &\leq D(x_k,x_{\lambda_{k-1}}^*)+D(x_{\lambda_{k-1}}^*,x_{\lambda_k}^*)+ \frac{\mu_h\mu_\omega \gamma_k\lambda_k}{4L_\omega^3} \|\nabla\omega(x_{\lambda_{k-1}}^*)-\nabla\omega(x_k)\|_*^2 \\&+\frac{L_\omega^3}{\mu_h\mu_\omega \gamma_k\lambda_k} \|x_{\lambda_k}^*-x_{\lambda_{k-1}}^*\|^2,
\end{align*}
where $L_\omega$ is the Lipschitz parameter of $\nabla \omega$ defined in \eqref{distance2}. By Lipschitz continuity of $\nabla\omega$, we know that $\|\nabla \omega(x) -\nabla \omega(y)\|_* \leq L_\omega \|x-y\|$ for all $x,y \in X$. Therefore, from the preceding relation we obtain
\begin{align*}
D(x_k,x_{\lambda_k}^*) & \leq D(x_k,x_{\lambda_{k-1}}^*)+D(x_{\lambda_{k-1}}^*,x_{\lambda_k}^*)\\& + \frac{\mu_h\mu_\omega \gamma_k\lambda_k}{4L_\omega} \|x_{\lambda_{k-1}}^*-x_k\|^2 +\frac{L_\omega^3}{\mu_h\mu_\omega \gamma_k\lambda_k} \|x_{\lambda_k}^*-x_{\lambda_{k-1}}^*\|^2.
\end{align*}
From $D(x,y) \leq \frac{L_\omega}{2}\|x-y\|^2$ for all $x,y \in X$, and also using Proposition \ref{prop:xk_estimate}(a) we have
\begin{align*}
D(x_k,x_{\lambda_k}^*) &\leq D(x_k,x_{\lambda_{k-1}}^*)+\frac{L_\omega C_H^2}{2\mu_h^2}\left|1-\frac{\lambda_{k-1}}{\lambda_k}\right|^2 \\& + \frac{\gamma_k\lambda_k\mu_h\mu_\omega}{4L_\omega} \|x_{\lambda_{k-1}}^*-x_k\|^2 +\frac{L_\omega^3 C_H^2}{\gamma_k\lambda_k\mu_h^3\mu_\omega} \left|1-\frac{\lambda_{k-1}}{\lambda_k}\right|^2.
\end{align*}
From Lemma \ref{D prop}(a) we have $\|x_{\lambda_{k-1}}^*-x_k\|^2 \leq \frac{2}{\mu_\omega}D(x_k,x_{\lambda_{k-1}}^*)$. Taking this into account and by rearranging the terms in the preceding relation we have
\begin{align*}
D(x_k,x_{\lambda_k}^*) \leq \left(1+\frac{\mu_h \gamma_k\lambda_k}{2L_\omega}\right) D(x_k,x_{\lambda_{k-1}}^*)+\frac{L_\omega C_H^2}{2\mu_h^2}\left(1+\frac{2L_\omega^2}{\mu_h\mu_\omega \gamma_k\lambda_k}\right)\left(\frac{\lambda_{k-1}}{\lambda_k}-1\right)^2.
\end{align*}
Substituting the preceding inequality into \eqref{proofrate3} and using $\gamma_k\lambda_k \leq \frac{L_\omega}{\mu_h}$, we obtain
\begin{align*}
\EXP{D(x_{k+1},x_{\lambda_k}^*)|\mathcal{F}_k} &\leq \left(1-\frac{\mu_h \gamma_k\lambda_k}{L_\omega}\right) \left(1+\frac{\mu_h\gamma_k\lambda_k}{2L_\omega}\right)D(x_k,x_{\lambda_{k-1}}^*) \nonumber\\&+\left(1-\frac{\mu_h\gamma_k\lambda_k}{L_\omega}\right)\frac{L_\omega C_H^2}{2\mu_h^2}\left(1+\frac{2L_\omega^2}{\mu_h\mu_\omega \gamma_k\lambda_k}\right)\left(\frac{\lambda_{k-1}}{\lambda_k}-1\right)^2\\& +4\frac{\gamma_k^2}{\mu_\omega}C_F^2+4\frac{\gamma_k^2\lambda_k^2}{\mu_\omega}C_H^2.
\end{align*}
By rearranging the terms we have
\begin{align*}
\EXP{D(x_{k+1},x_{\lambda_k}^*)|\mathcal{F}_k} &\leq \underbrace{ \left(1-\frac{\mu_h\gamma_k\lambda_k}{2L_\omega} - \frac{(\mu_h\gamma_k\lambda_k)^2}{2L_\omega^2}\right)D(x_k,x_{\lambda_{k-1}}^*)}_\text{\hbox{Term4}} \nonumber\\&+\underbrace{\left(1-\frac{\mu_h\gamma_k\lambda_k}{L_\omega}\right)\frac{L_\omega C_H^2}{2\mu_h^2}\left(1+\frac{2L_\omega^2}{\mu_h\mu_\omega\gamma_k\lambda_k}\right)\left(\frac{\lambda_{k-1}}{\lambda_k}-1\right)^2}_\text{\hbox{Term5}}\\& +\frac{4C_F^2}{\mu_\omega}\gamma_k^2+\frac{4C_H^2}{\mu_\omega}\gamma_k^2\lambda_k^2.
\end{align*}
We can drop the nonpositive term $- \frac{(\mu_h\gamma_k\lambda_k)^2}{2L_\omega^2}$ in Term4 and bound the factor $1-\frac{\mu_h\gamma_k\lambda_k}{L_\omega}$ in Term5 by $1$, to obtain
\begin{align*}
\EXP{D(x_{k+1},x_{\lambda_k}^*)|\mathcal{F}_k} &\leq \left(1-\frac{\mu_h\gamma_k\lambda_k}{2L_\omega} \right)D(x_k,x_{\lambda_{k-1}}^*) \nonumber\\&+\underbrace{\frac{L_\omega C_H^2}{2\mu_h^2}\left(1+\frac{2L_\omega^2}{\mu_h\mu_\omega\gamma_k\lambda_k}\right)\left(\frac{\lambda_{k-1}}{\lambda_k}-1\right)^2}_\text{\hbox{Term6}}+\frac{4C_F^2}{\mu_\omega}\gamma_k^2+\frac{4C_H^2}{\mu_\omega}\gamma_k^2\lambda_k^2.
\end{align*}
Note that $\mu_\omega \leq L_\omega$. Combining this with the assumption that $\gamma_k\lambda_k \leq \frac{L_\omega}{\mu_h}$, we have $ 1 \leq \frac{2L_\omega^2}{\mu_h\mu_\omega\gamma_k\lambda_k}$. Applying this bound to Term6 in the preceding relation, we obtain
\begin{align*}
\EXP{D(x_{k+1},x_{\lambda_k}^*)|\mathcal{F}_k} &\leq \left(1-\frac{\mu_h}{2L_\omega}\gamma_k\lambda_k\right)D(x_k,x_{\lambda_{k-1}}^*) \\& +\frac{2C_H^2L_\omega^3}{\mu_h^3\mu_\omega \gamma_k\lambda_k}\left(\frac{\lambda_{k-1}}{\lambda_k}-1\right)^2+\frac{4C_F^2}{\mu_\omega}\gamma_k^2+\frac{4C_H^2}{\mu_\omega}\gamma_k^2\lambda_k^2.
\end{align*} \end{proof}
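Although the analysis above is exact, the behavior of the recursive bound \eqref{a recursive upper bound} can also be observed numerically. The following sketch iterates a deterministic analogue of the recursion (replacing the conditional expectation by equality); the constants $\mu_h$, $L_\omega$, $\mu_\omega$, $C_F$, $C_H$ and the exponents $a,b$ are illustrative assumptions, not values prescribed by the lemma.

```python
# Numerical sanity check, not part of the proof: iterate the deterministic
# analogue of the recursive bound
#   d_{k+1} = (1 - mu_h*g_k*l_k/(2*L_w)) * d_k
#             + 2*C_H^2*L_w^3/(mu_h^3*mu_w*g_k*l_k) * (l_{k-1}/l_k - 1)^2
#             + 4*C_F^2/mu_w * g_k^2 + 4*C_H^2/mu_w * g_k^2 * l_k^2,
# with g_k = gamma_0/(k+1)^a and l_k = lambda_0/(k+1)^b (illustrative values).
mu_h = L_w = mu_w = 1.0
C_F = C_H = 0.1
gamma_0 = lambda_0 = 0.5      # gamma_0 * lambda_0 = 0.25 <= L_w / mu_h
a, b = 0.6, 0.3               # a > b > 0, a > 0.5, a + b < 1

d = 1.0
for k in range(1, 200_000):
    g = gamma_0 / (k + 1) ** a
    l = lambda_0 / (k + 1) ** b
    l_prev = lambda_0 / k ** b
    d = ((1.0 - mu_h * g * l / (2.0 * L_w)) * d
         + 2.0 * C_H ** 2 * L_w ** 3 / (mu_h ** 3 * mu_w * g * l) * (l_prev / l - 1.0) ** 2
         + 4.0 * C_F ** 2 / mu_w * g ** 2
         + 4.0 * C_H ** 2 / mu_w * g ** 2 * l ** 2)

print(d)  # decays toward zero
```

The bound decays even though the contraction factor $1-\frac{\mu_h\gamma_k\lambda_k}{2L_\omega}$ tends to one, because $\sum_k \gamma_k\lambda_k$ diverges while the perturbation terms are summable.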
The next result will be employed in Proposition \ref{conv rate} to establish convergence of the iterates $x_k$ generated by Algorithm \ref{algorithm:SMD}.
\begin{lemma} [\textbf{Lemma 11, pg.\ 50 of~\cite{Polyak87}}] \label {lemma conv lemma}
Let $\{\nu_k\}$ be a sequence of nonnegative random variables, where $\EXP{\nu_0}<\infty$, and let $\{\alpha_k\}$ and $\{\beta_k\}$ be deterministic scalar sequences such that:
\begin{align*}
&\EXP{\nu_{k+1}|\nu_0,\dots,\nu_k} \leq (1-\alpha_k)\nu_k + \beta_k \qquad \hbox{for all } k \geq 0,\\
&0 \leq \alpha_k \leq 1, \ \beta_k \geq0, \ \sum_{k=0}^{\infty}\alpha_k=\infty, \ \sum_{k=0}^{\infty}\beta_k <\infty, \ \hbox{and} \ \lim_{k \to \infty} \frac{\beta_k}{\alpha_k}=0.
\end{align*}
Then, $\nu_k \rightarrow 0$ almost surely, and $\lim_{k\to \infty}\EXP{\nu_k}=0$. \end{lemma} An extension of Lemma \ref{lemma conv lemma} is proposed in the following result. We will employ this result in Proposition \ref{conv rate}(c) to derive a rate statement. This lemma is proved in Appendix \ref{proof of lemma general conv lemma}. \begin{lemma} \label {lemma general conv lemma}
Let $\{\nu_k\}$ be a sequence of nonnegative random variables, where \\ $\EXP{\nu_0}<\infty$, and let $\{\alpha_k\}$ and $\{\beta_k\}$ be deterministic scalar sequences such that:
\begin{align}\label{conv lemma ineq}
\EXP{\nu_{k+1}|\nu_0,\dots,\nu_k} \leq (1-\alpha_k)\nu_k + \beta_k \qquad \hbox{for all } k \geq 0,
\end{align}
and also there exists a constant $0<\rho<1$ such that for all $k\geq 1$
\begin{align*}
0 \leq \alpha_k \leq 1,\ \beta_k \geq 0 \hbox{ and }
\frac{\beta_{k-1}}{\alpha_{k-1}} \leq \frac{\beta_k}{\alpha_k}(1+ \rho \alpha_k).
\end{align*}
Then, $\EXP{\nu_{k+1}} \leq \frac{\beta_k}{\alpha_k} \tau$, where $\tau \triangleq \max \left \{\frac{\EXP{\nu_1} \alpha_0}{\beta_0}, \frac{1}{1-\rho}\right \}$. \end{lemma} In the following, we make a set of assumptions on the stepsize and the regularization parameter used in Algorithm \ref{algorithm:SMD}. These assumptions will be used in Proposition \ref{conv rate}(a,b) to establish convergence in an almost sure sense and a mean sense. In Proposition \ref{prop condition for sequences}(i), we provide a class of sequences that satisfy all these conditions. \begin{assumption}\label{assumptions for conv} Assume that for all $k \geq 0$ we have
\noindent$(a)\ \{\gamma_k\}$ and $\{\lambda_k\}$ are positive and non-increasing sequences where $\gamma_0\lambda_0 \leq \frac{L_\omega}{\mu_h}.$ \\
$(b)\ \sum_{k=0}^{\infty}\gamma_k\lambda_k= \infty.$ \\
$(c)\ \sum_{k=0}^{\infty}\frac{1}{\gamma_k\lambda_k}\left(\frac{\lambda_{k-1}}{\lambda_k}-1\right)^2<\infty.$ \\
$(d)\ \sum_{k=0}^{\infty} \gamma_k^2 < \infty.$ \\
$(e)\ \lim_{k\to \infty} \frac{1}{\gamma_k^2\lambda_k^2}\left(\frac{\lambda_{k-1}}{\lambda_k}-1\right)^2=0.$ \\
$(f)\ \lim_{k\to \infty} \frac{\gamma_k}{\lambda_k}=0.$
\end{assumption} To derive a rate statement in Proposition \ref{conv rate}(c), we make use of the following assumption on the stepsize and regularization parameter. In Proposition \ref{prop condition for sequences}(ii), we will provide an example of the two sequences for which these conditions are met. \begin{assumption}\label{assumptions for general conv} Assume that for all $k \geq 0$ we have
\noindent $(a)\ \{\gamma_k\}$ and $\{\lambda_k\}$ are positive and non-increasing sequences where $\gamma_0\lambda_0 \leq \frac{L_\omega}{\mu_h}.$ \\
$(b)\ $There exist a scalar $B_1>0$ and an integer $k_1$ such that $\frac{1}{\gamma_k^3\lambda_k}\left(\frac{\lambda_{k-1}}{\lambda_k}-1\right)^2 \leq B_1$ for $k \geq k_1.$\\
$(c)\ $There exist a scalar $0<\rho<1$ and an integer $k_2$ such that $\frac{\gamma_{k-1}}{\lambda_{k-1}} \leq \frac{\gamma_k}{\lambda_k} \left(1+\rho\frac{\mu_h}{2L_\omega}\gamma_k\lambda_k\right)$ for $k \geq k_2.$\\
$(d)\ \lim_{k\to \infty} \frac{\gamma_k}{\lambda_k}=0.$
\end{assumption} In the following result, we show convergence of the sequence $\{x_k\}$ generated in Algorithm \ref{algorithm:SMD}. This result will be used in the next sections in order to establish the convergence of the averaging sequence $\bar x_k$ generated by Algorithm \ref{algorithm:SMD} to the optimal solution of the problem \eqref{def:SL}.
\begin{proposition}[{\bf Convergence in almost sure and mean senses for $\mathbf{\{x_k\}}$}]\label{conv rate} Let Assumptions \ref{assum:properties} and \ref{assum:RVs} hold. Consider problem \eqref{def:SL} and let $\{x_k\}$ be generated by Algorithm \ref{algorithm:SMD}. Additionally:
\begin{itemize}
\item[(a)] Let Assumption \ref{assumptions for conv} hold. Then $D(x_k,x_{\lambda_{k-1}}^*)$ converges to zero almost surely, and
\begin{align*}
\lim_{k\to \infty}\EXP{D(x_k,x_{\lambda_{k-1}}^*)}=0.
\end{align*}
\item[(b)] Let Assumption \ref{assumptions for conv} hold and $\lim_{k\to \infty} \lambda_k =0$. Then $x_k$ converges to the optimal solution of problem \eqref{def:SL}, i.e., $x_h^*$ almost surely.
\item[(c)] Let Assumption \ref{assumptions for general conv} hold for some $k_1$ and $k_2$. Then, for all $k \geq \bar{k} \triangleq \max \{k_1,k_2\}$
\begin{align*}
\EXP{D(x_{k+1},x_{\lambda_{k}}^*)} \leq \frac{\gamma_k}{\lambda_k} \tau,
\end{align*}
where,
\begin{align}\label{def theta}
\tau \triangleq \max \left \{ \frac{2L_\omega M^2 \lambda_{\bar k-1}}{\gamma_{\bar k-1}},\frac{2L_\omega(2C_H^2 L_\omega^3 B_1 +4C_F^2\mu_h^3 +4C_H^2 \mu_h^3 \lambda_{\bar k-1}^2)}{\mu_\omega \mu_h^4(1-\rho)} \right \},
\end{align}
in which $M$ is such that $\|x\| \leq M$ for all $x \in X$.
\end{itemize} \end{proposition} \begin{proof}
\noindent (a) To show this, we apply Lemma \ref{lemma conv lemma} to relation \eqref{a recursive upper bound}. Let $\nu_k \triangleq D(x_k,x_{\lambda_{k-1}}^*), \alpha_k\triangleq\frac{\mu_h}{2 L_\omega} \gamma_k\lambda_k$ and $\beta_k\triangleq\frac{2C_H^2L_\omega^3}{\mu_h^3\mu_\omega \gamma_k\lambda_k}\left(\frac{\lambda_{k-1}}{\lambda_k}-1\right)^2+\frac{4C_F^2}{\mu_\omega}\gamma_k^2+\frac{4C_H^2}{\mu_\omega}\gamma_k^2\lambda_k^2$. By \eqref{a recursive upper bound} we have
\begin{align*}
\EXP{\nu_{k+1}|\nu_1,\dots,\nu_k} \leq (1-\alpha_k)\nu_k + \beta_k \qquad \hbox{for all } k \geq 1.
\end{align*}
Note that $ \beta_k \geq0$. Also by Assumption \ref{assumptions for conv}(a), since $\{\gamma_k\}$ and $\{\lambda_k\} $ are positive and $\gamma_0\lambda_0 \leq \frac{L_\omega}{\mu_h} \leq \frac{2L_\omega}{\mu_h}$, we have $0 \leq \alpha_k \leq 1$. Assumption \ref{assumptions for conv}(b) is sufficient to have $\sum_{k=1}^{\infty}\alpha_k=\infty$. From Assumption \ref{assumptions for conv}(c,d), $\sum_{k=1}^{\infty}\beta_k <\infty$. In addition, we have
\begin{align*}
\lim_{k \to \infty} \frac{\beta_k}{\alpha_k}&=\lim_{k \to \infty} \frac{\frac{2C_H^2L_\omega^3}{\mu_h^3\mu_\omega \gamma_k\lambda_k}\left(\frac{\lambda_{k-1}}{\lambda_k}-1\right)^2+\frac{4C_F^2}{\mu_\omega}\gamma_k^2+\frac{4C_H^2}{\mu_\omega}\gamma_k^2\lambda_k^2}{\frac{\mu_h}{2 L_\omega} \gamma_k\lambda_k}\\
&=\frac{4C_H^2L_\omega^4}{\mu_h^4\mu_\omega}\lim_{k \to \infty} \frac{1}{\gamma_k^2\lambda_k^2}\left(\frac{\lambda_{k-1}}{\lambda_k}-1\right)^2 +\frac{8C_F^2L_\omega}{\mu_\omega\mu_h}\lim_{k\to \infty} \frac{\gamma_k}{\lambda_k}+
\frac{8C_H^2 L_\omega}{\mu_\omega\mu_h}\lim_{k\to \infty} \gamma_k\lambda_k.
\end{align*}
Applying Assumption \ref{assumptions for conv}(e,f), we only need to prove that $\lim_{k\to \infty} \gamma_k\lambda_k=0$. Since $\{\lambda_k\}$ is non-increasing, we have $\lambda_k \leq\lambda_0$ for all $k\geq 0$, and hence $\gamma_k\lambda_k=\frac{\gamma_k}{\lambda_k}\lambda_k^2 \leq \lambda_0^2\frac{\gamma_k}{\lambda_k}$. So by Assumption \ref{assumptions for conv}(f), $\lim_{k \to \infty} \gamma_k\lambda_k=0$. Consequently, $\lim_{k \to \infty} \frac{\beta_k}{\alpha_k}=0$. Therefore, all conditions of Lemma \ref{lemma conv lemma} are met, indicating that $D(x_k,x_{\lambda_{k-1}}^*)$ goes to zero almost surely and $\lim_{k \to \infty} \EXP{D(x_k,x_{\lambda_{k-1}}^*)}=0$.
\noindent (b) Invoking the triangle inequality together with $(u+v)^2 \leq 2u^2+2v^2$, we obtain
\begin{align*}
\|x_k -x_h^*\|^2 \leq 2\|x_k - x_{\lambda_{k-1}}^* \|^2 + 2 \|x_{\lambda_{k-1}}^*-x_h^*\|^2 \qquad \hbox{for all } k \geq 0.
\end{align*}
Using Lemma \ref{D prop}(a), from the preceding inequality we obtain
\begin{align} \label{rel xk xh}
\|x_k -x_h^*\|^2 \leq \frac{4}{\mu_\omega}D(x_k,x_{\lambda_{k-1}}^*)+ 2 \|x_{\lambda_{k-1}}^*-x_h^*\|^2 \qquad \hbox{for all } k \geq 0.
\end{align}
From Proposition \ref{prop:xk_estimate}(b), we know that when $\lambda_k$ goes to zero, then the sequence $\{x_{\lambda_k}^*\}$ converges to the unique optimal solution of problem \eqref{def:SL}, i.e., $x_h^*$. In addition, from part (a), $D(x_k,x_{\lambda_{k-1}}^*)$ converges to zero almost surely. Therefore, from relation \eqref{rel xk xh}, $\|x_k -x_h^*\|$ converges to zero almost surely.
\noindent (c) We apply Lemma \ref{lemma general conv lemma} to show the desired inequality. Consider relation \eqref{a recursive upper bound}. From Assumption \ref{assumptions for general conv}(b) and that $\{\lambda_k\}$ is non-increasing, we have
\begin{align*}
\EXP{D(x_{k+1},x_{\lambda_k}^*)|\mathcal{F}_k}&\leq \left(1-\frac{\mu_h}{2L_\omega}\gamma_k\lambda_k\right)D(x_k,x_{\lambda_{k-1}}^*)\\&+
\left(\frac{2C_H^2L_\omega^3}{\mu_h^3\mu_\omega}B_1+\frac{4C_F^2}{\mu_\omega}+\frac{4C_H^2}{\mu_\omega}\lambda_{\bar k-1}^2\right)\gamma_k^2,
\end{align*}
for all $k\geq \bar k$. For $k \geq \bar k -1$, let us define
\begin{align*}
\nu_k\triangleq D(x_k,x_{\lambda_{k-1}}^*), \ \alpha_k\triangleq \frac{\mu_h}{2 L_\omega} \gamma_k\lambda_k, \hbox{ and } \beta_k\triangleq \left(\frac{2C_H^2L_\omega^3}{\mu_h^3\mu_\omega}B_1+\frac{4C_F^2}{\mu_\omega}+\frac{4C_H^2}{\mu_\omega}\lambda_{\bar k-1}^2\right) \gamma_k^2.
\end{align*}
Now by the preceding inequality we have
\begin{align*}
\EXP{\nu_{k+1}|\nu_{\bar k-1},\dots,\nu_k} \leq (1-\alpha_k)\nu_k + \beta_k \qquad \hbox{for all } k \geq \bar k-1.
\end{align*}
By Assumption \ref{assumptions for general conv}(a,b), it is easy to see that $0 \leq \alpha_k \leq 1$ and $\beta_k \geq 0$. Also
\begin{align*}
\frac{\beta_{k-1}}{\alpha_{k-1}}&= \frac{2 L_\omega\left(\frac{2C_H^2L_\omega^3}{\mu_h^3\mu_\omega}B_1+\frac{4C_F^2}{\mu_\omega}+\frac{4C_H^2}{\mu_\omega}\lambda_{\bar k-1}^2\right)}{\mu_h} \frac{\gamma_{k-1}}{\lambda_{k-1}} \\&\leq \frac{2 L_\omega\left(\frac{2C_H^2L_\omega^3}{\mu_h^3\mu_\omega}B_1+\frac{4C_F^2}{\mu_\omega}+\frac{4C_H^2}{\mu_\omega}\lambda_{\bar k-1}^2\right)}{\mu_h} \frac{\gamma_k}{\lambda_k}\left(1+\rho\frac{\mu_h}{2L_\omega}\gamma_k\lambda_k\right)
= \frac{\beta_k}{\alpha_k}(1+ \rho \alpha_k),
\end{align*}
where we used Assumption \ref{assumptions for general conv}(c) in the second relation. Note that all conditions in Lemma \ref{lemma general conv lemma} are satisfied. Therefore, we can write
\begin{align}\label{bound for E(D)}
\EXP{D(x_{k+1},x_{\lambda_{k}}^*)} \leq \frac{2L_\omega(2C_H^2 L_\omega^3 B_1 +4C_F^2\mu_h^3 +4C_H^2 \mu_h^3 \lambda_{\bar k-1}^2)}{\mu_\omega \mu_h^4} \frac{\gamma_k}{\lambda_k} \hat{\tau},
\end{align}
where
\begin{align*}
\hat{\tau} & \triangleq \max\left\{\frac{\EXP{\nu_{\bar k}}\alpha_{\bar k-1}}{\beta_{\bar k-1}}, \frac{1}{1-\rho}\right\}\\&=\max\left \{\frac{\EXP{\nu_{\bar k}}\mu_h^4\mu_\omega \lambda_{\bar k-1}}{2L_\omega(2C_H^2L_\omega^3 B_1+4C_F^2\mu_h^3+4C_H^2\mu_h^3\lambda_{\bar k-1}^2)\gamma_{\bar k-1}}, \frac{1}{1-\rho}\right \}.
\end{align*}
Note that for $\EXP{\nu_{\bar k}}$ by Lemma \ref{D prop}(a) we have
\begin{align*}
\EXP{\nu_{\bar k}}= \EXP{D(x_{\bar k},x_{\lambda_{\bar k-1}}^*)} &\leq \EXP{\frac{L_\omega}{2}\|x_{\bar k}-x_{\lambda_{\bar k-1}}^*\|^2}\leq \EXP{\frac{L_\omega}{2}\left(2\|x_{\bar k}\|^2 + 2\|x_{\lambda_{\bar k-1}}^*\|^2\right)}\\ &\leq \EXP{\frac{L_\omega}{2}\left(2M^2 + 2M^2\right)}=2L_\omega M^2,
\end{align*}
where $M$ is such that $\|x\| \leq M$ for all $x \in X$. From the preceding two relations, we have
\begin{align*}
\hat{\tau} \leq \max\left \{\frac{\mu_h^4\mu_\omega M^2 \lambda_{\bar k-1}}{(2C_H^2L_\omega^3 B_1+4C_F^2\mu_h^3+4C_H^2\mu_h^3\lambda_{\bar k-1}^2)\gamma_{\bar k-1}}, \frac{1}{1-\rho} \right \}.
\end{align*}
From the preceding relation and the definition of $\tau$ in \eqref{def theta}, we have
\begin{align*}
\frac{2L_\omega(2C_H^2 L_\omega^3 B_1 +4C_F^2\mu_h^3 +4C_H^2 \mu_h^3 \lambda_{\bar k-1}^2)}{\mu_\omega \mu_h^4} \hat{\tau} \leq \tau.
\end{align*}
From the preceding relation and inequality \eqref{bound for E(D)}, we have
\begin{align*}
\EXP{D(x_{k+1},x_{\lambda_{k}}^*)} \leq \frac{\gamma_k}{\lambda_k} \tau.
\end{align*} \end{proof} Proposition \ref{conv rate} guarantees the convergence of the sequence $\{x_k\}$ generated by Algorithm \ref{algorithm:SMD} under the general conditions on the stepsize and regularization parameter given by Assumptions \ref{assumptions for conv} and \ref{assumptions for general conv}. Below, we provide particular examples of sequences that meet these conditions and therefore guarantee the convergence properties stated in Proposition \ref{conv rate}. The proof can be found in Appendix \ref{proof of prop condition for sequences}.
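As a sanity check on Lemma \ref{lemma general conv lemma}, the deterministic recursion $\nu_{k+1}=(1-\alpha_k)\nu_k+\beta_k$ can be iterated for an illustrative pair of sequences and the bound $\nu_{k+1} \leq \frac{\beta_k}{\alpha_k}\tau$ verified numerically; the exponents and the choice $\rho=0.9$ below are assumptions made purely for illustration.

```python
# Deterministic instance of the recursion in the lemma, with illustrative
# sequences alpha_k = 0.5/(k+1)^0.9 and beta_k = 0.5/(k+1)^1.2, for which
# beta_{k-1}/alpha_{k-1} <= (beta_k/alpha_k)(1 + rho*alpha_k) holds with rho = 0.9.
alpha = lambda k: 0.5 / (k + 1) ** 0.9
beta = lambda k: 0.5 / (k + 1) ** 1.2
rho = 0.9

# check the ratio condition over a range of k
for k in range(1, 10_000):
    assert beta(k - 1) / alpha(k - 1) <= (beta(k) / alpha(k)) * (1 + rho * alpha(k))

nu = 1.0                                # nu_0
nu = (1 - alpha(0)) * nu + beta(0)      # nu_1
tau = max(nu * alpha(0) / beta(0), 1 / (1 - rho))
ok = True
for k in range(1, 10_000):
    nu = (1 - alpha(k)) * nu + beta(k)  # nu_{k+1}
    ok = ok and nu <= (beta(k) / alpha(k)) * tau + 1e-12

print(ok)  # the bound nu_{k+1} <= (beta_k/alpha_k)*tau holds along the run
```

The run mirrors the induction in the proof: the ratio condition transfers the bound from step $k-1$ to step $k$, and $\tau \geq \frac{1}{1-\rho}$ absorbs the additive term $\beta_k$.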
\begin{proposition}[{\bf Feasible sequences for Assumption \ref{assumptions for conv} and \ref{assumptions for general conv}}]\label{prop condition for sequences}
Assume $\{\gamma_k\}$ and $\{\lambda_k\}$ are sequences such that $\gamma_k=\frac{\gamma_0}{(k+1)^a}$ and $\lambda_k=\frac{\lambda_0}{(k+1)^b}$ where $a,b$ are scalars, $\gamma_0$ and $\lambda_0$ are positive scalars and $\gamma_0\lambda_0 \leq \frac{L_\omega}{\mu_h}$. Then
\begin{itemize}
\item[(i)] The sequences $\{\gamma_k\}$ and $\{\lambda_k\}$ satisfy Assumption \ref{assumptions for conv} when $a,b>0$, $a>b$, $a>0.5$ and $a+b<1$.
\item[(ii)] Assumption \ref{assumptions for general conv} is satisfied when $a,b>0$, $a>b$, $a+b<1$ and $3a+b<2$.
\end{itemize} \end{proposition}
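For a concrete instance of part (i), the summability and limit conditions in Assumption \ref{assumptions for conv} can be checked numerically for the illustrative choice $a=0.6$, $b=0.3$ (which satisfies $a>b>0$, $a>0.5$, and $a+b<1$); the sketch below is a sanity check, not a proof.

```python
# Illustrative check of Assumption conditions (c), (e), (f) for
# gamma_k = (k+1)^{-a}, lambda_k = (k+1)^{-b} with a = 0.6, b = 0.3.
a, b = 0.6, 0.3

def gamma(k): return (k + 1) ** -a
def lam(k): return (k + 1) ** -b

# (c): the summand behaves like O(k^{a+b-2}), hence the series converges;
# numerically, the tail of the partial sums is already negligible.
terms = [(lam(k - 1) / lam(k) - 1) ** 2 / (gamma(k) * lam(k)) for k in range(1, 100_000)]
tail = sum(terms[10_000:])

# (e) and (f): both ratios tend to zero as k grows.
e_end = (lam(99_998) / lam(99_999) - 1) ** 2 / (gamma(99_999) * lam(99_999)) ** 2
f_end = gamma(99_999) / lam(99_999)

print(tail, e_end, f_end)  # all small
```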
The following lemma provides sufficient conditions on the weights of an averaging sequence so that the averaging sequence converges. We will make use of this result in Theorem \ref{thm conv for xbar} where we prove convergence of the averaging sequence generated by Algorithm \ref{algorithm:SMD}.
\begin{lemma}[{\bf Theorem 6, pg.\ 75 of~\cite{Knopp51}}] \label{lemma thm for avr}
Let $\{u_t\}\subset \mathbf{R}^n$ be a convergent sequence with the limit point $\hat u\in\mathbf{R}^n$ and let $\{\alpha_k\}$ be a
sequence of positive numbers where $\sum_{k=0}^\infty \alpha_k=\infty$. Suppose $\{v_k\}$ is given by
$v_k\triangleq \left({\sum_{t=0}^{k-1} \alpha_t u_t}\right)/{\sum_{t=0}^{k-1} \alpha_t}$ for all $k\ge1$. Then, $\lim_{k \rightarrow \infty} v_k=\hat u.$ \end{lemma} Below, we present the main result of this section. Theorem \ref{thm conv for xbar} provides convergence in both almost sure and mean senses for the averaging sequence generated by Algorithm \ref{algorithm:SMD} to the unique optimal solution of problem \eqref{def:SL}. Importantly, we specify particular sequences for the stepsize and regularization parameter to establish the convergence properties. \begin{theorem}[{\bf Convergence in almost sure and mean senses for $\mathbf{\{\bar x_k\}}$}] \label{thm conv for xbar}
Consider problem \eqref{def:SL}. Let Assumptions \ref{assum:properties} and \ref{assum:RVs} hold. Assume $\{\gamma_k\}$ and $\{\lambda_k\}$ are sequences such that $\gamma_k=\frac{\gamma_0}{(k+1)^a}$ and $\lambda_k=\frac{\lambda_0}{(k+1)^b}$, where $\gamma_0$ and $\lambda_0$ are positive scalars with $\gamma_0\lambda_0 \leq \frac{L_\omega}{\mu_h}$. Let $\bar x_k$ be generated by Algorithm \ref{algorithm:SMD}, and suppose that $a,b>0$, $a>b$, $a>0.5$, $a+b<1$ and $ar\leq 1$. Then, we have
\begin{itemize}
\item[(i)] The sequence $\{\bar x_k\}$ converges to $x_h^*$ almost surely.
\item[(ii)] We have $\lim_{k \to \infty}\EXP{\|\bar x_{k+1}-x_h^*\|}=0$.
\end{itemize} \end{theorem} \begin{proof}
\noindent (i) First note that, due to the assumptions on the scalars $a,b$, all conditions of Proposition \ref{prop condition for sequences}(i) are satisfied, implying that Assumption \ref{assumptions for conv} holds. Note that from the definition of $\eta_{t,k} =\gamma_t^r / \sum_{i=0}^{k} \gamma_i^r$ given by \eqref{weighted alg}, it follows that $\sum_{t=0}^{k} \eta_{t,k}=1$. We have
\begin{align*}
\|\bar x_{k+1}-x_h^*\|=\left\|\sum_{t=0}^{k} \eta_{t,k} x_t-\sum_{t=0}^{k}\eta_{t,k} x_h^*\right\|=\left\|\sum_{t=0}^{k} \eta_{t,k}(x_t-x_h^*)\right\|.
\end{align*}
Using the triangle inequality, we obtain
\begin{align}\label{xbar-xh}
\|\bar x_{k+1}-x_h^*\|\leq \sum_{t=0}^{k} \eta_{t,k}\|x_t-x_h^*\|.
\end{align}
Now let $\alpha_t \triangleq \gamma_t^r$, $u_t \triangleq \|x_t-x_h^*\|$ and $v_{k+1} \triangleq \sum_{t=0}^{k} \eta_{t,k} \|x_t-x_h^*\|$. Since $ar \leq1$, we can write $\sum_{t=0}^\infty \alpha_t=\sum_{t=0}^\infty \gamma_t^{r}=\gamma_0^r\sum_{t=0}^\infty (t+1)^{-ar}=\infty$. Since $b>0$, it follows that $\lambda_t=\lambda_0/(t+1)^b$ goes to zero as $t\to \infty$. So from Proposition \ref{conv rate}(b), $u_t=\|x_t-x_h^*\|$ converges to zero almost surely. Now since the conditions of Lemma \ref{lemma thm for avr} are satisfied for $\hat u=0$, we conclude that $\|\bar x_{k+1}-x_h^*\|$ converges to zero almost surely, which means $\{\bar x_k\}$ converges to $x_h^*$ almost surely.
\noindent (ii)
Considering relation \eqref{xbar-xh}, we can write
\begin{align*}
\EXP{\|\bar x_{k+1}-x_h^*\|}\leq \EXP{\sum_{t=0}^{k} \eta_{t,k}\|x_t-x_h^*\|}=\sum_{t=0}^{k} \eta_{t,k} \EXP{\|x_t-x_h^*\|}.
\end{align*}
Let $\alpha_t \triangleq \gamma_t^r$, $u_t \triangleq \EXP{\|x_t-x_h^*\|}$ and $v_{k+1} \triangleq \sum_{t=0}^{k} \eta_{t,k} \EXP{\|x_t-x_h^*\|}$. To apply Lemma \ref{lemma thm for avr}, we first show that $\{u_t\}$ goes to zero.
Adding and subtracting $x^*_{\lambda_{t-1}}$ and using the triangle inequality, we have
\begin{align*}
u_t=\EXP{\|x_t -x_h^*\|} &\leq \EXP{\|x_t -x^*_{\lambda_{t-1}}\|+\|x^*_{\lambda_{t-1}}-x_h^*\|} \\
&\leq \sqrt{\frac{2}{\mu_\omega}}\EXP{\sqrt{D(x_t,x_{\lambda_{t-1}}^*)}}+ \EXP{\|x_{\lambda_{t-1}}^*-x_h^*\|}\\
& \leq \sqrt{\frac{2}{\mu_\omega}} \sqrt{\EXP{D(x_t,x_{\lambda_{t-1}}^*)}}+ \EXP{\|x_{\lambda_{t-1}}^*-x_h^*\|}
\end{align*}
where in the second inequality we used $\frac{\mu_\omega}{2}\|x_t -x^*_{\lambda_{t-1}}\|^2 \leq D(x_t,x_{\lambda_{t-1}}^*)$ by Lemma \ref{D prop}(a), and in the third inequality we applied Jensen's inequality for concave functions. From Proposition \ref{prop:xk_estimate}(b), since $\lambda_t$ goes to zero, the sequence $\{x_{\lambda_t}^*\}$ converges to the unique optimal solution of problem \eqref{def:SL}, i.e., $x_h^*$. Moreover, from Proposition \ref{conv rate}(a) we have $\lim_{t\to \infty}\EXP{D(x_t,x_{\lambda_{t-1}}^*)}=0$. Therefore, $\lim_{t\to \infty}u_t=\lim_{t\to \infty}\EXP{\|x_t -x_h^*\|}=0$. The remainder of the proof follows in a similar vein to part (i) by applying Lemma \ref{lemma thm for avr}. \end{proof}
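The averaging mechanism behind the theorem can also be observed numerically: if a surrogate error sequence $u_t$ tends to zero and the weights are $\alpha_t = \gamma_t^r$ with $ar\leq 1$, the weighted averages of Lemma \ref{lemma thm for avr} follow $u_t$ to zero, albeit slowly. The decay profile of $u_t$ and the exponents below are assumptions chosen purely for illustration.

```python
# Weighted averages v_k = (sum_t alpha_t * u_t) / (sum_t alpha_t) for a
# surrogate error sequence u_t = (t+1)^{-0.3} and weights alpha_t = gamma_t^r,
# with gamma_t = (t+1)^{-0.6} and r = 1 (so a*r = 0.6 <= 1).
def u(t): return (t + 1) ** -0.3
def alpha(t): return (t + 1) ** -0.6

num = den = 0.0
v_at = {}
for t in range(100_000):
    num += alpha(t) * u(t)
    den += alpha(t)
    if t + 1 in (100, 100_000):
        v_at[t + 1] = num / den

print(v_at)  # the average decreases toward the limit 0 of u_t
```

Because $\sum_t \alpha_t$ diverges, the early (large) terms $u_t$ are eventually outweighed, which is exactly the content of Lemma \ref{lemma thm for avr}.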
\section{Rate analysis} \label{sec:rate} Our goal in this section is to provide a complexity analysis for the developed IR-SMD method. To this end, first in Lemma \ref{lemma averaging} we derive an upper bound in terms of the objective function of problem \eqref{def:firstlevel}, i.e., $f$, evaluated at the averaging sequence $\{\bar x_N\}$ generated by Algorithm \ref{algorithm:SMD}. This result will then be employed in Theorem \ref{thm rate} to derive a rate statement for problem \eqref{def:firstlevel}.
\begin{lemma} [\bf{An error bound for problem \eqref{def:firstlevel}}] \label{lemma averaging} Consider the sequence $\{\bar x_N\}$ generated by Algorithm \ref{algorithm:SMD}. Let Assumptions \ref{assum:properties} and \ref{assum:RVs} hold. Also let $\{\gamma_k\}$ and $\{\lambda_k\}$ be positive and non-increasing sequences. Then, for all $N\geq 1$ and $z\in X$ we have
\begin{align*}
\EXP{f(\bar x_N)}-f(z)\leq &\left(\sum_{k=0}^{N-1} \gamma_k^r\right)^{-1}\left(2L_\omega M^2 \left(\gamma_{N-1}^{r-1}+\gamma_0^{r-1}\right)+ 2M_h\sum_{k=0}^{N-1}\gamma_k^r\lambda_k \right. \\& \left. + \frac{C_F^2+C_H^2 \lambda_0^2}{\mu_\omega} \sum_{k=0}^{N-1} \gamma_k^{r+1}\right),
\end{align*}
where $M_h$ is an upper bound for the function $h$ over the set $X$ and $M$ is such that $\|x\| \leq M$ for all $x \in X$. \end{lemma} \begin{proof}
Let $k\geq1$ be given. Along similar lines to the beginning of the proof of Lemma \ref{recursive bd}, we can write (see relation \eqref{proof rate1})
\begin{align}\label{error bound1}
\langle \gamma_k(g_F(x_k,\xi_k) + \lambda_k g_H(x_k, \tilde \xi_k)) + \nabla \omega(x_{k+1})-\nabla \omega(x_k),z-x_{k+1} \rangle \geq 0 \quad \hbox{for all } z\in X.
\end{align}
By properties of function $D$ in Lemma \ref{D prop}(a,b), we have
\begin{align*}
\langle \nabla \omega(x_{k+1})-\nabla \omega(x_k) ,z-x_{k+1} \rangle&= D(x_k,z)-D(x_{k+1},z)-D(x_k,x_{k+1}),\\
D(x_k,x_{k+1}) &\geq \frac{\mu_\omega}{2} \|x_{k+1}-x_k\|^2.
\end{align*}
Combining the preceding two relations with inequality \eqref{error bound1}, and rearranging the terms we obtain
\begin{align*}
D(x_{k+1},z) - D(x_k,z) & \leq -\frac{\mu_\omega}{2} \|x_{k+1}-x_k\|^2 \\& +\gamma_k\underbrace{\langle g_F(x_k,\xi_k),z-x_{k+1} \rangle}_\text{Term1}+ \gamma_k\lambda_k\underbrace{\langle g_H(x_k, \tilde \xi_k),z-x_{k+1} \rangle}_\text{Term2}.
\end{align*}
By adding and subtracting $x_k$ in Term1 and Term2, we obtain
\begin{align*}
D(x_{k+1},z) - D(x_k,z) &\leq -\frac{\mu_\omega}{2} \|x_{k+1}-x_k\|^2 +\gamma_k\langle g_F(x_k,\xi_k),z-x_k \rangle\\&+\underbrace{\gamma_k\langle g_F(x_k,\xi_k),x_k-x_{k+1} \rangle}_\text{Term3}+ \gamma_k\lambda_k\langle g_H(x_k, \tilde \xi_k),z-x_k \rangle \\&+ \underbrace{\gamma_k\lambda_k\langle g_H(x_k, \tilde \xi_k),x_k-x_{k+1} \rangle}_\text{Term4}.
\end{align*}
By multiplying and dividing Term3 and Term4 by $\sqrt{\frac{2}{\mu_\omega}}$ and then applying Fenchel's inequality, i.e., $\langle a,b \rangle \leq \frac{1}{2}\|a\|^2 +\frac{1}{2}\|b\|_*^2$, we have
\begin{align*}
D(x_{k+1},z) - D(x_k,z)&\leq -\frac{\mu_\omega}{2} \|x_{k+1}-x_k\|^2 +
\gamma_k\langle g_F(x_k,\xi_k),z-x_k\rangle\\&+ \frac{\gamma_k^2}{\mu_\omega}\|g_F(x_k,\xi_k)\|_*^2 + \frac{\mu_\omega}{4}\|x_{k+1}-x_k\|^2+
\gamma_k\lambda_k\langle g_H(x_k, \tilde \xi_k),z-x_k \rangle\\&+ \frac{\gamma_k^2\lambda_k^2}{\mu_\omega}\|g_H(x_k, \tilde \xi_k)\|_*^2 + \frac{\mu_\omega}{4}\|x_{k+1}-x_k\|^2.
\end{align*}
Therefore, we obtain
\begin{align*}
D(x_{k+1},z) - D(x_k,z) &\leq \frac{\gamma_k^2}{\mu_\omega}\|g_F(x_k,\xi_k)\|_*^2+\frac{\gamma_k^2\lambda_k^2}{\mu_\omega}\|g_H(x_k, \tilde \xi_k)\|_*^2\\&+\gamma_k\langle g_F(x_k,\xi_k),z-x_k\rangle+ \gamma_k\lambda_k\langle g_H(x_k, \tilde \xi_k),z-x_k \rangle .
\end{align*}
By taking conditional expectation on $\mathcal{F}_k$ of both sides and considering Assumption \ref{assum:properties}(d,e), we have
\begin{align}\label{error bound2}
\EXP{D(x_{k+1},z)|\mathcal{F}_k}-D(x_k,z)& \leq \frac{\gamma_k^2}{\mu_\omega}C_F^2+\frac{\gamma_k^2\lambda_k^2}{\mu_\omega}C_H^2 \\&+\gamma_k\langle g_f(x_k),z-x_k\rangle+ \gamma_k\lambda_k\langle g_h(x_k),z-x_k \rangle. \nonumber
\end{align}
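The scaled Young (Fenchel) inequality used above to bound Term3 and Term4, namely $\gamma\langle g,d\rangle \leq \frac{\gamma^2}{\mu_\omega}\|g\|^2+\frac{\mu_\omega}{4}\|d\|^2$, can be checked numerically in the Euclidean case; the sketch below uses arbitrary test vectors and an arbitrary value standing in for $\mu_\omega$:

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 0.7  # plays the role of mu_omega (arbitrary positive test value)

# Check gamma*<g, d> <= (gamma^2/mu)*||g||^2 + (mu/4)*||d||^2
# for random vectors; this is the bound applied to Term3 and Term4.
for _ in range(1000):
    g = rng.normal(size=5)
    d = rng.normal(size=5)
    gamma = rng.uniform(0.01, 1.0)
    lhs = gamma * g.dot(d)
    rhs = (gamma**2 / mu) * g.dot(g) + (mu / 4) * d.dot(d)
    assert lhs <= rhs + 1e-12
print("scaled Young's inequality verified on random instances")
```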
From the definition of subgradient for functions $f$ and $h$ we have $\langle g_f(x), y-x\rangle \leq f(y)- f(x)$ and $\langle g_h(x), y-x\rangle \leq h(y)- h(x)$ for all $x,y \in X$. From these relations and \eqref{error bound2} we obtain
\begin{align*}
\EXP{D(x_{k+1},z)|\mathcal{F}_k}-D(x_k,z) &\leq \frac{\gamma_k^2}{\mu_\omega}C_F^2+\frac{\gamma_k^2\lambda_k^2}{\mu_\omega}C_H^2\\&+\gamma_k(f(z)-f(x_k))+ \gamma_k\lambda_k\left(h(z)-h(x_k)\right).
\end{align*}
Rearranging the terms, we have
\begin{align*}
\EXP{D(x_{k+1},z)|\mathcal{F}_k}-D(x_k,z)& \leq \frac{\gamma_k^2}{\mu_\omega}C_F^2+\frac{\gamma_k^2\lambda_k^2}{\mu_\omega}C_H^2\\&+\gamma_k(f(z)+\lambda_k h(z))- \gamma_k(f(x_k)+\lambda_k h(x_k)).
\end{align*}
Taking expectations on both sides and applying the law of total expectation, we have
\begin{align} \label{error bound3}
\EXP{D(x_{k+1},z)}-\EXP{D(x_k,z)} &\leq \frac{\gamma_k^2}{\mu_\omega}C_F^2+\frac{\gamma_k^2\lambda_k^2}{\mu_\omega}C_H^2\\&+\gamma_k(f(z)+\lambda_k h(z))- \gamma_k\left(\EXP{f(x_k)+\lambda_k h(x_k)}\right).\nonumber
\end{align}
Multiplying both sides of the preceding inequality by $\gamma_k^{r-1}$ and adding and subtracting $\gamma_{k-1}^{r-1}\EXP{D(x_k,z)}$ to the left-hand side, we have
\begin{align}\label{error bound4}
&\gamma_k^{r-1}\EXP{D(x_{k+1},z)}-\gamma_{k-1}^{r-1}\EXP{D(x_k,z)}-(\gamma_k^{r-1}-\gamma_{k-1}^{r-1})\EXP{D(x_k,z)}\leq\frac{\gamma_k^{r+1}}{\mu_\omega}C_F^2\\&+\frac{\gamma_k^{r+1}\lambda_k^2}{\mu_\omega}C_H^2+\gamma_k^r(f(z)+\lambda_k h(z))- \gamma_k^r(\EXP{f(x_k)+\lambda_k h(x_k)}).\nonumber
\end{align}
Note that since $r<1$ and $\gamma_k$ is non-increasing, $\gamma_k^{r-1}-\gamma_{k-1}^{r-1}$ is nonnegative. Also from Lemma \ref{D prop}(a) for $\EXP{D(x_k,z)}$ we have
\begin{align}\label{error bound5}
\EXP{D(x_k,z)} &\leq \EXP{\frac{L_\omega}{2}\|x_k-z\|^2}\leq \EXP{\frac{L_\omega}{2}\left(2\|x_k\|^2 + 2\|z\|^2\right)}\\ &\leq \EXP{\frac{L_\omega}{2}\left(2M^2 + 2M^2\right)}=2L_\omega M^2,\nonumber
\end{align}
where $M$ is an upper bound on $\|x\|$ over the set $X$. Combining the bound \eqref{error bound5} with relation \eqref{error bound4}, we obtain
\begin{align*}
&\gamma_k^{r-1}\EXP{D(x_{k+1},z)}-\gamma_{k-1}^{r-1}\EXP{D(x_k,z)}-(\gamma_k^{r-1}-\gamma_{k-1}^{r-1})2L_\omega M^2\leq\frac{\gamma_k^{r+1}}{\mu_\omega}C_F^2\\&+\frac{\gamma_k^{r+1}\lambda_k^2}{\mu_\omega}C_H^2+\gamma_k^r(f(z)+\lambda_k h(z))- \gamma_k^r(\EXP{f(x_k)+\lambda_k h(x_k)}).\nonumber
\end{align*}
Summing the preceding inequalities over $k=1,2, \cdots,N-1$, we obtain
\begin{align*}
&\gamma_{N-1}^{r-1}\EXP{D(x_{N},z)}-\gamma_{0}^{r-1}\EXP{D(x_1,z)}-(\gamma_{N-1}^{r-1}-\gamma_{0}^{r-1})2L_\omega M^2 \leq \frac{C_F^2}{\mu_\omega} \sum_{k=1}^{N-1} \gamma_k^{r+1} \\& +\frac{C_H^2}{\mu_\omega} \sum_{k=1}^{N-1} \gamma_k^{r+1}\lambda_k^2+ \sum_{k=1}^{N-1}\gamma_k^r(f(z)+\lambda_k h(z))-\sum_{k=1}^{N-1} \gamma_k^r\left(\EXP{f(x_k)+\lambda_k h(x_k)}\right).\nonumber
\end{align*}
By removing the nonnegative terms $\gamma_{N-1}^{r-1}\EXP{D(x_{N},z)}$ and $2L_\omega M^2\gamma_{0}^{r-1}$ from the left-hand side of the preceding inequality, we have
\begin{align}\label{sum overk}
-\gamma_{0}^{r-1}\EXP{D(x_1,z)}&-2L_\omega M^2\gamma_{N-1}^{r-1} \leq \frac{C_F^2}{\mu_\omega} \sum_{k=1}^{N-1} \gamma_k^{r+1} +\frac{C_H^2}{\mu_\omega} \sum_{k=1}^{N-1} \gamma_k^{r+1}\lambda_k^2\\&+ \sum_{k=1}^{N-1}\gamma_k^r(f(z)+\lambda_k h(z))-\sum_{k=1}^{N-1} \gamma_k^r(\EXP{f(x_k)+\lambda_k h(x_k)}).\nonumber
\end{align}
Applying \eqref{error bound3} with $k=0$ to bound $\EXP{D(x_1,z)}$, we have
\begin{align} \label{inequality for d1}
\EXP{D(x_1,z)}&\leq \frac{\gamma_0^2}{\mu_\omega}C_F^2+\frac{\gamma_0^2 \lambda_0^2}{\mu_\omega}C_H^2+\gamma_0(f(z)+\lambda_0 h(z))\\&-\gamma_0\left(\EXP{f(x_0)}+\lambda_0 \EXP{h(x_0)} \right)+2L_\omega M^2, \nonumber
\end{align}
where we bounded $\EXP{D(x_0,z)}$ by $2L_\omega M^2$, as shown in \eqref{error bound5}. Then, multiplying both sides of \eqref{inequality for d1} by $\gamma_0^{r-1}$, summing the resulting relation with \eqref{sum overk}, and rearranging the terms, we have
\begin{align*}
&\sum_{k=0}^{N-1} \gamma_k^r\left(\EXP{f(x_k)+\lambda_k h(x_k)}\right)-\sum_{k=0}^{N-1}\gamma_k^r(f(z)+\lambda_k h(z)) \leq 2L_\omega M^2 \left(\gamma_{N-1}^{r-1}+\gamma_0^{r-1}\right)\\ &+ \frac{C_F^2}{\mu_\omega} \sum_{k=0}^{N-1} \gamma_k^{r+1} +\frac{C_H^2}{\mu_\omega} \sum_{k=0}^{N-1} \gamma_k^{r+1}\lambda_k^2.
\end{align*}
Dividing both sides by $\sum_{k=0}^{N-1} \gamma_k^r$ and considering the definition of $\eta_{k,N-1}$ given by \eqref{weighted alg}, we obtain
\begin{align*}
&\sum_{k=0}^{N-1} \eta_{k,N-1}\left(\EXP{f(x_k)+\lambda_k h(x_k)}\right) -\sum_{k=0}^{N-1}\eta_{k,N-1}(f(z)+\lambda_k h(z))\\&\leq \left(\sum_{k=0}^{N-1} \gamma_k^r\right)^{-1}\left(2L_\omega M^2 \left(\gamma_{N-1}^{r-1}+\gamma_0^{r-1}\right)+ \frac{C_F^2}{\mu_\omega} \sum_{k=0}^{N-1} \gamma_k^{r+1} +\frac{C_H^2}{\mu_\omega} \sum_{k=0}^{N-1} \gamma_k^{r+1} \lambda_k^2\right).
\end{align*}
So we can write
\begin{align*}
&\EXP{\sum_{k=0}^{N-1} \eta_{k,N-1}(f(x_k)+\lambda_k h(x_k))} -\sum_{k=0}^{N-1}\eta_{k,N-1}(f(z)+\lambda_k h(z))\\&\leq \left(\sum_{k=0}^{N-1} \gamma_k^r\right)^{-1}\left(2L_\omega M^2 \left(\gamma_{N-1}^{r-1}+\gamma_0^{r-1}\right)+ \frac{C_F^2}{\mu_\omega} \sum_{k=0}^{N-1} \gamma_k^{r+1} +\frac{C_H^2}{\mu_\omega} \sum_{k=0}^{N-1} \gamma_k^{r+1} \lambda_k^2\right).
\end{align*}
Rearranging the terms, we have
\begin{align*}
&\EXP{\sum_{k=0}^{N-1} \eta_{k,N-1}f(x_k)} -\sum_{k=0}^{N-1}\eta_{k,N-1}f(z) \leq \sum_{k=0}^{N-1}\eta_{k,N-1}\lambda_k h(z)-\EXP{\sum_{k=0}^{N-1} \eta_{k,N-1}\lambda_k h(x_k)}\\&+ \left(\sum_{k=0}^{N-1} \gamma_k^r\right)^{-1}\left(2L_\omega M^2 \left(\gamma_{N-1}^{r-1}+\gamma_0^{r-1}\right)+ \frac{C_F^2}{\mu_\omega} \sum_{k=0}^{N-1} \gamma_k^{r+1} +\frac{C_H^2}{\mu_\omega} \sum_{k=0}^{N-1} \gamma_k^{r+1} \lambda_k^2\right).
\end{align*}
On the left-hand side, note that $\sum_{k=0}^{N-1} \eta_{k,N-1}=1$ by the definition of $\eta_{k,N-1}$ in \eqref{weighted alg}. So, by convexity of $f$ (Jensen's inequality), $f(\bar x_N) \leq \sum_{k=0}^{N-1} \eta_{k,N-1}f(x_k)$. We have
\begin{align} \label{error bound6}
&\EXP{f(\bar x_N)}-f(z) \leq \underbrace{\sum_{k=0}^{N-1}\eta_{k,N-1}\lambda_k h(z)-\EXP{\sum_{k=0}^{N-1} \eta_{k,N-1}\lambda_k h(x_k)}}_\text{\hbox{Term5}}\\&+ \left(\sum_{k=0}^{N-1} \gamma_k^r\right)^{-1}\left(2L_\omega M^2 \left(\gamma_{N-1}^{r-1}+\gamma_0^{r-1}\right)+ \frac{C_F^2}{\mu_\omega} \sum_{k=0}^{N-1} \gamma_k^{r+1} +\frac{C_H^2}{\mu_\omega} \sum_{k=0}^{N-1} \gamma_k^{r+1} \lambda_k^2\right).\nonumber
\end{align}
For Term5 we can write
\begin{align*}
\hbox{Term5}&=\EXP{\sum_{k=0}^{N-1}\eta_{k,N-1}\lambda_k h(z)-\sum_{k=0}^{N-1} \eta_{k,N-1}\lambda_k h(x_k)} \\& \leq \EXP{\sum_{k=0}^{N-1}\eta_{k,N-1}\lambda_k |h(z)-h(x_k)|} \leq 2M_h\sum_{k=0}^{N-1}\eta_{k,N-1}\lambda_k,
\end{align*}
where we used the definition of $M_h$. Using this bound on Term5 in \eqref{error bound6}, we obtain
\begin{align*}
&\EXP{f(\bar x_N)}-f(z) \leq 2M_h\sum_{k=0}^{N-1}\eta_{k,N-1}\lambda_k \\&+ \left(\sum_{k=0}^{N-1} \gamma_k^r\right)^{-1}\left(2L_\omega M^2 \left(\gamma_{N-1}^{r-1}+\gamma_0^{r-1}\right)+ \frac{C_F^2}{\mu_\omega} \sum_{k=0}^{N-1} \gamma_k^{r+1} +\frac{C_H^2}{\mu_\omega} \sum_{k=0}^{N-1} \gamma_k^{r+1} \lambda_k^2\right).
\end{align*}
Applying the formula of $ \eta_{k,N-1}$ in \eqref{weighted alg}, we obtain
\begin{align*}
\EXP{f(\bar x_N)}-f(z) \leq \left(\sum_{k=0}^{N-1} \gamma_k^r\right)^{-1}& \left(2L_\omega M^2 \left(\gamma_{N-1}^{r-1}+\gamma_0^{r-1}\right)+ 2M_h\sum_{k=0}^{N-1}\gamma_k^r\lambda_k \right. \\& \left.+ \frac{C_F^2}{\mu_\omega} \sum_{k=0}^{N-1} \gamma_k^{r+1} +\frac{C_H^2}{\mu_\omega} \sum_{k=0}^{N-1} \gamma_k^{r+1} \lambda_k^2\right).
\end{align*}
Since $\{\lambda_k\}$ is a non-increasing sequence, we obtain
\begin{align*}
\EXP{f(\bar x_N)}-f(z)\leq \left(\sum_{k=0}^{N-1} \gamma_k^r\right)^{-1} & \left(2L_\omega M^2 \left(\gamma_{N-1}^{r-1}+\gamma_0^{r-1}\right)+ 2M_h\sum_{k=0}^{N-1}\gamma_k^r\lambda_k+ \right. \\& \left. \frac{C_F^2+C_H^2 \lambda_0^2}{\mu_\omega} \sum_{k=0}^{N-1} \gamma_k^{r+1}\right).
\end{align*} \end{proof} To derive the convergence rate statement in Theorem \ref{thm rate}, we make use of the following result (see Lemma 9, page 418 in \cite{Farzad3}).
\begin{lemma}\label{lemma:ineqHarmonic}
For any scalar $\alpha\neq -1$ and integers $\ell$ and $N$ where $0\leq \ell \leq N-1$, we have
\begin{align*}
\frac{N^{\alpha+1}-(\ell+1)^{\alpha+1}}{\alpha+1}\leq \sum_{k=\ell}^{N-1}(k+1)^\alpha \leq (\ell+1)^\alpha+\frac{(N+1)^{\alpha+1}-(\ell+1)^{\alpha+1}}{\alpha+1}.
\end{align*} \end{lemma}
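A quick numerical check of the two-sided bound in the lemma (a sketch over a few arbitrary choices of $\alpha$, $\ell$, and $N$):

```python
def check(alpha, ell, N):
    """Verify the lemma's bounds for sum_{k=ell}^{N-1} (k+1)^alpha."""
    s = sum((k + 1) ** alpha for k in range(ell, N))
    lower = (N ** (alpha + 1) - (ell + 1) ** (alpha + 1)) / (alpha + 1)
    upper = (ell + 1) ** alpha + \
        ((N + 1) ** (alpha + 1) - (ell + 1) ** (alpha + 1)) / (alpha + 1)
    assert lower <= s <= upper, (alpha, ell, N)

# Test a few exponents on both sides of zero (alpha != -1).
for alpha in [-0.75, -0.5, 0.5, 2.0]:
    for ell in [0, 3]:
        check(alpha, ell, N=100)
print("lemma bounds hold on all tested cases")
```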
In the following result, we show that using Algorithm \ref{algorithm:SMD}, under specific choices of the stepsize and regularization sequences, the objective function of problem \eqref{def:firstlevel} converges to its optimal value at a near-optimal rate of $\mathcal{O}\left(1/N^{0.5-\delta}\right)$, where $\delta>0$ is an arbitrarily small number. We also establish almost sure convergence of the sequence generated by Algorithm \ref{algorithm:SMD} to the unique optimal solution of problem \eqref{def:SL}.
\begin{theorem} [\bf{Convergence and a rate statement for Algorithm \ref{algorithm:SMD}}] \label{thm rate}
Consider problem \eqref{def:SL}. Let Assumptions \ref{assum:properties} and \ref{assum:RVs} hold, and let $\{\bar x_N\}$ be generated by Algorithm \ref{algorithm:SMD}. Let $0<\delta<0.5$ and $r<1$ be arbitrary scalars, and assume $\{\gamma_k\}$ and $\{\lambda_k\}$ are sequences such that
\begin{align*}
\boxed{
\gamma_k=\frac{\gamma_0}{(k+1)^{0.5+0.5\delta}} \hbox{ and } \lambda_k=\frac{\lambda_0}{(k+1)^{0.5-\delta}},}
\end{align*}
where $\gamma_0$ and $\lambda_0$ are positive scalars and $\gamma_0\lambda_0 \leq \frac{L_\omega}{\mu_h}$. Then,
\begin{itemize}
\item[(i)] The sequence $\{\bar x_N\}$ converges to $x_h^*$ almost surely.
\item[(ii)] We have $\lim_{N \to \infty}\EXP{\|\bar x_{N+1}-x_h^*\|}=0$.
\item[(iii)] $\EXP{f(\bar x_N)}$ converges to $f^*$ with the rate of ${\cal O}\left(1/N^{0.5-\delta}\right)$, where $f^*$ is the optimal objective value of problem \eqref{def:firstlevel}.
\end{itemize} \end{theorem} \begin{proof} Throughout, we use the notation $a=0.5+0.5\delta$, $b=0.5-\delta$.
\noindent (i,ii) From the values of $a$ and $b$, and that $r<1$ and $0<\delta<0.5$, we have
\begin{align*}
a>b>0,\ a>0.5, \ a+b=1-0.5\delta<1,\ ar= 0.5(1+\delta)r <0.5(1.5)=0.75<1.
\end{align*}
This implies that all conditions of Theorem \ref{thm conv for xbar} are satisfied. Therefore, $\{\bar x_N\}$ converges to $x_h^*$ almost surely and $\lim_{N \to \infty}\EXP{\|\bar x_{N+1}-x_h^*\|}=0$.
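The parameter conditions used above can be checked mechanically; the following sketch verifies them for a few sample values of $\delta$ and $r$ (arbitrary choices within the admissible ranges):

```python
# Sanity check of the parameter conditions used in parts (i) and (ii)
# for sample values of delta in (0, 0.5) and r < 1 (arbitrary choices).
for delta in [0.05, 0.2, 0.45]:
    for r in [0.9, 0.5, 0.0, -1.0]:
        a = 0.5 + 0.5 * delta
        b = 0.5 - delta
        assert a > b > 0 and a > 0.5
        assert abs((a + b) - (1 - 0.5 * delta)) < 1e-12 and a + b < 1
        assert a * r < 1
print("conditions verified for all tested (delta, r)")
```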
\noindent (iii) Substituting $\gamma_k$ and $\lambda_k$ in the inequality given by Lemma \ref{lemma averaging} and selecting $z=x^*$, we obtain
\begin{align*}
\EXP{f(\bar x_N)}-f^*\leq & \left(\sum_{k=0}^{N-1} \frac{\gamma_0^r}{(k+1)^{ar}}\right)^{-1}\left(2L_\omega M^2\gamma_0^{r-1} \left(N^{a(1-r)}+1\right) \right. \\& \left. + 2M_h\sum_{k=0}^{N-1}\frac{\gamma_0^r\lambda_0}{(k+1)^{ar+b}}+ \left( \frac{C_F^2+C_H^2 \lambda_0^2}{\mu_\omega} \right) \sum_{k=0}^{N-1} \frac{\gamma_0^{r+1}}{(k+1)^{a(r+1)}}\right),
\end{align*}
where $M_h$ is an upper bound on $|h|$ over the set $X$. Note that since $a>b>0$, we have $a(r+1)>ar+b$ and hence $(k+1)^{a(r+1)}> (k+1)^{ar+b}$. Taking this into account, from the preceding relation we have
\begin{align}\label{boundonf}
\EXP{f(\bar x_N)}-f^*\leq & \left(\sum_{k=0}^{N-1} \frac{\gamma_0^r}{(k+1)^{ar}}\right)^{-1} \left(2L_\omega M^2\gamma_0^{r-1} \left(N^{a(1-r)}+1\right) \right. \\ & \left. + \gamma_0^r \left(2M_h\lambda_0+ \left( \frac{C_F^2\gamma_0+C_H^2 \lambda_0^2\gamma_0}{\mu_\omega} \right)\right) \sum_{k=0}^{N-1}\frac{1}{(k+1)^{ar+b}}\right). \nonumber
\end{align}
Let us consider the following definitions.
\begin{align*}
&\hbox{Term1}= \left(\sum_{k=0}^{N-1} \frac{1}{(k+1)^{ar}}\right)^{-1},\hbox{ Term2}= \left(\sum_{k=0}^{N-1} \frac{1}{(k+1)^{ar}}\right)^{-1} N^{a(1-r)},\\
&\hbox{Term3}=\left(\sum_{k=0}^{N-1} \frac{1}{(k+1)^{ar}}\right)^{-1}\left(\sum_{k=0}^{N-1}\frac{1}{(k+1)^{ar+b}}\right).
\end{align*}
Using these definitions, equivalently from \eqref{boundonf}, we have
\begin{align} \label{terms}
\EXP{f(\bar x_N)}-f^* & \leq 2L_\omega M^2\gamma_0^{-1} (\hbox{Term1}+\hbox{Term2}) \\&+ \left(2M_h\lambda_0 + \left( \frac{C_F^2\gamma_0+C_H^2 \lambda_0^2\gamma_0}{\mu_\omega} \right)\right) \hbox{Term3}. \nonumber
\end{align}
Next, we estimate Term1, Term2, and Term3. Note that, for given $0<\delta<0.5$, from the definitions of $a$ and $b$ and the fact that $r<1$, we have $ar<1$ and $ar+b<a+b<1$. By applying Lemma \ref{lemma:ineqHarmonic}, we have
\begin{align*}
&\hbox{Term1} \leq\frac{1-ar}{N^{1-ar}-1}={\cal O}\left(N^{-(1-ar)}\right), \\
&\hbox{Term2} \leq\frac{N^{a(1-r)}}{\frac{N^{1-ar}-1}{1-ar}}=\frac{(1-ar)N^{a(1-r)}}{N^{1-ar}-1}={\cal O}\left(N^{-(1-a)}\right), \\
&\hbox{Term3} \leq\frac{\frac{(N+1)^{1-ar-b}-1}{1-ar-b}+1}{\frac{N^{1-ar}-1}{1-ar}} =\frac{(1-ar)\left((N+1)^{1-ar-b}-1\right)}{(1-ar-b)\left(N^{1-ar}-1\right)} + \frac{1-ar}{N^{1-ar}-1} \\& ={\cal O}\left(N^{-b}\right) + {\cal O}\left(N^{-(1-ar)}\right).
\end{align*}
From the preceding bounds and relation \eqref{terms}, we have
\begin{align*}
\EXP{f(\bar x_N)}-f^*\leq {\cal O}\left(N^{-\min \{1-ar,1-a,b\}}\right)= {\cal O}\left(N^{-\min \{1-a,b\}}\right),
\end{align*}
where we used $1-a\leq 1-ar$. Replacing $a$ and $b$ by their values, we have
\begin{align*}
\EXP{f(\bar x_N)}-f^*\leq {\cal O}\left(N^{-\min \{0.5-0.5\delta,0.5-\delta\}}\right)={\cal O}\left(N^{-(0.5-\delta)}\right).
\end{align*} \end{proof}
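As a numerical sanity check on the proof, the decay rates obtained for Term1, Term2, and Term3 can be verified empirically; the sketch below uses arbitrary admissible values of $\delta$ and $r$ and checks that each term, scaled by its claimed rate, remains bounded as $N$ grows:

```python
import numpy as np

delta, r = 0.2, 0.5                      # arbitrary admissible choices
a, b = 0.5 + 0.5 * delta, 0.5 - delta

def terms(N):
    """Compute Term1, Term2, Term3 from the proof for horizon N."""
    k = np.arange(N)
    s_ar = np.sum((k + 1.0) ** (-a * r))
    term1 = 1.0 / s_ar
    term2 = N ** (a * (1 - r)) / s_ar
    term3 = np.sum((k + 1.0) ** (-(a * r + b))) / s_ar
    return term1, term2, term3

# Ratios term / N^{-rate} should stay bounded as N grows.
for N in [10**3, 10**4, 10**5]:
    t1, t2, t3 = terms(N)
    assert t1 * N ** (1 - a * r) < 5.0   # Term1 = O(N^{-(1-ar)})
    assert t2 * N ** (1 - a) < 5.0       # Term2 = O(N^{-(1-a)})
    assert t3 * N ** b < 5.0             # Term3 = O(N^{-b})
print("Term1, Term2, Term3 decay at the claimed rates")
```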
\section{Experimental results} \label{sec:num} In this section, we examine the performance of Algorithm \ref{algorithm:SMD} on different sets of problems. First, we consider linear inverse problems \cite{Beck09}. Given a matrix $A\in \mathbf{R}^{m\times n}$ and a vector $b\in \mathbf{R}^m$, the goal is to find $x \in \mathbf{R}^n$ such that $Ax+\delta=b$, where $\delta \in \mathbf{R}^m$ denotes an unknown noise vector. To solve this problem, one may consider the following least-squares problem: \begin{equation}\label{def:firstlevelLS} \begin{split} &\begin{array}{ll}
\hbox{minimize } & f(x)\triangleq \|Ax-b\|_2^2\cr \hbox{subject to } & x \in \mathbf{R}^n. \end{array} \end{split} \end{equation} In many applications arising from signal processing \cite{Friedlander07} and image reconstruction \cite{Garrigos17}, problem \eqref{def:firstlevelLS} is ill-posed. To address the ill-posedness, we consider model \eqref{def:SL}, in which the objective function $h$ is a desired regularizer. Here, we assume this function is characterized by both the $\ell_1$ and $\ell_2$ norms. More precisely, we consider the following model \begin{align} \label{numeric upper}
\mbox{minimize}\ \ &h(x)\triangleq \frac{\mu_h }{2}\|x\|_2^2+ \|x\|_1\\
\hbox{subject to}\ \ &x \in \mathop{\rm argmin}_{y \in \mathbf R^n } \|Ay-b\|_2^2, \nonumber \end{align}
where $\mu_{h}>0$ is the strong convexity parameter. Note that, in contrast with the work in \cite{Beck14,Sabach17}, the function $h$ is nondifferentiable. To perform the numerical experiments, similar to \cite{Beck14,Sabach17}, we consider three inverse problems, namely ``Baart'', ``Phillips'', and ``Foxgood'', which differ in the underlying method used to generate $A$ and $b$. More information on these problems can be found on the website {http://www2.imm.dtu.dk/~pcha/Regutools/}. Table \ref{table:inverseProblem} summarizes the results of our experiments. For each of the three inverse models, we vary the initial point $x_0$ and the dimension $n$. We report the value of $|f(\bar x_N)-f^*|$, referred to as the feasibility gap, and the value of $|h(\bar x_N)-h^*|$, referred to as the optimality gap. In all scenarios, we stop the algorithm after 250 seconds. We set $\mu_{h}=0.5$ and let $\gamma_k$ and $\lambda_k$ be given by the update rules in Theorem \ref{thm rate}. To evaluate $f^*$ and $h^*$, we represent problem \eqref{def:firstlevel} as the linear system $Ax=b$; using this reformulation, we evaluated both $f^*$ and $h^*$ directly with the \textit{quadprog} package in Matlab. We observe that the feasibility gap is very small and the optimality gap approaches zero for almost all scenarios. For Foxgood with $n=500$ and $1000$, even though the optimality gap is not negligible, the relative optimality gap is as small as $0.4$\% and $0.6$\%, respectively.
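To illustrate the setup, the following sketch runs a Euclidean-prox specialization of the method (so the mirror step reduces to a projected subgradient step) on a small synthetic instance of \eqref{numeric upper}; the matrix $A$, vector $b$, ball radius $M$, and all parameters below are illustrative choices, not the configuration used in our experiments:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = np.eye(n) + 0.05 * rng.normal(size=(n, n))   # well-conditioned toy operator
x_true = np.array([1.0, 0.0, -2.0, 0.0, 0.5])
b = A @ x_true                                    # consistent system, so f^* = 0

mu_h, gamma0, lam0, delta, r = 0.5, 0.5, 0.5, 0.2, 0.5
M = 10.0                                          # radius of the ball taken as X

def g_f(x):
    # subgradient (here: gradient) of f(x) = ||Ax - b||_2^2
    return 2.0 * A.T @ (A @ x - b)

def g_h(x):
    # subgradient of h(x) = (mu_h/2)||x||_2^2 + ||x||_1
    return mu_h * x + np.sign(x)

x = np.zeros(n)
num, den = np.zeros(n), 0.0
for k in range(20000):
    gamma = gamma0 / (k + 1) ** (0.5 + 0.5 * delta)
    lam = lam0 / (k + 1) ** (0.5 - delta)
    x = x - gamma * (g_f(x) + lam * g_h(x))       # regularized subgradient step
    nrm = np.linalg.norm(x)
    if nrm > M:                                   # Euclidean projection onto X
        x *= M / nrm
    num += gamma ** r * x                         # weighted averaging, eta ~ gamma_k^r
    den += gamma ** r
x_bar = num / den

feasibility_gap = np.linalg.norm(A @ x_bar - b) ** 2   # |f(x_bar) - f^*|
assert feasibility_gap < 0.2
print("feasibility gap:", feasibility_gap)
```

The diminishing regularization $\lambda_k$ is what steers the iterates toward the sparse, minimum-$h$ element of the least-squares solution set rather than an arbitrary one.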
\begin{table}[t]
\caption{Performance of IR-SMD method for linear inverse problems}
\centering
\scalebox{0.9}{
\begin{tabular}{c|c|ccc|ccc}
\hline
\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Initial Point\\ $x_0$\end{tabular}} & \multirow{2}{*}{$n$} & \multicolumn{3}{c|}{\begin{tabular}[c]{@{}c@{}}Feasibility Gap\\ $|f(\bar x_N)-f^*|$\end{tabular}} & \multicolumn{3}{c}{\begin{tabular}[c]{@{}c@{}}Optimality Gap\\ $|h(\bar x_N)-h^*|$\end{tabular}} \\ \cline{3-8}
& & Baart & Foxgood & Phillips \\ \hline
\multirow{5}{*}{$x_0=-10\times\mathbf{1}_n$ } & 20 & $3.12\mathrm{e}{-7}$ & $3.48\mathrm{e}{-6}$ & $7.87\mathrm{e}{-9}$ & 0.01 & 0.07 & 0.00 \\
& 100 & $1.26\mathrm{e}{-6}$ & $1.28\mathrm{e}{-5}$ & $2.95\mathrm{e}{-8}$ & 0.02 & 0.19 & 0.00 \\
& 200 & $4.04\mathrm{e}{-6}$ & $3.23\mathrm{e}{-5}$ & $7.96\mathrm{e}{-8}$ & 0.04 & 0.41 & 0.00 \\
& 500 & $3\mathrm{e}{-5}$ & $1.48\mathrm{e}{-4}$ & $4.8\mathrm{e}{-7}$ & 0.18 & 1.22 & 0.00 \\
& 1000 & $2.46\mathrm{e}{-4}$ & $8.49\mathrm{e}{-4}$ & $3.59\mathrm{e}{-6}$ & 0.74 & 3.5 & 0.01 \\ \hline
\multirow{5}{*}{$x_0=\mathbf{0}_n$ } & 20 & $3.15\mathrm{e}{-7}$ & $3.47\mathrm{e}{-6}$ & $7.84\mathrm{e}{-9}$ & 0.01 & 0.07 & 0.00 \\
& 100 & $1.29\mathrm{e}{-6}$ & $1.26\mathrm{e}{-5}$ & $2.94\mathrm{e}{-8}$ & 0.02 & 0.19 & 0.00 \\
& 200 & $4\mathrm{e}{-6}$ & $3.29\mathrm{e}{-5}$ & $7.99\mathrm{e}{-8}$ & 0.04 & 0.41 & 0.00 \\
& 500 & $2.91\mathrm{e}{-5}$ & $1.45\mathrm{e}{-4}$ & $4.97\mathrm{e}{-7}$ & 0.18 & 1.22 & 0.00 \\
& 1000 & $2.5\mathrm{e}{-4}$ & $8.3\mathrm{e}{-4}$ & $3.64\mathrm{e}{-6}$ & 0.75 & 3.47 & 0.01 \\ \hline
\multirow{5}{*}{$x_0=10\times\mathbf{1}_n$ } & 20 & $3.15\mathrm{e}{-7}$ & $3.47\mathrm{e}{-6}$ & $7.96\mathrm{e}{-9}$ & 0.01 & 0.07 & 0.00 \\
& 100 & $1.27\mathrm{e}{-6}$ & $1.26\mathrm{e}{-5}$ & $2.94\mathrm{e}{-8}$ & 0.02 & 0.19 & 0.00 \\
& 200 & $3.94\mathrm{e}{-6}$ & $3.29\mathrm{e}{-5}$ & $8.16\mathrm{e}{-8}$ & 0.04 & 0.41 & 0.00 \\
& 500 & $2.99\mathrm{e}{-5}$ & $1.51\mathrm{e}{-4}$ & $4.85\mathrm{e}{-7}$ & 0.18 & 1.23 & 0.00 \\
& 1000 & $2.3\mathrm{e}{-4}$ & $8.35\mathrm{e}{-4}$ & $3.89\mathrm{e}{-6}$ & 0.72 & 3.48 & 0.01 \\ \hline
\end{tabular}
\label{table:inverseProblem}
} \end{table} In the second part of this section, we present the performance of the IR-SMD method on a text classification problem. For this experiment, in \eqref{eqn:problem3} we let $\mathcal{L}$ be the hinge loss function, i.e., $\mathcal{L}(\langle x,a\rangle, b) \triangleq \max \{0, 1-b \langle x,a\rangle\}$, and let the pair $(a,b)$ be generated using the dataset Reuters Corpus Volume I (RCV1) (see \cite{Lewis04}). This dataset is a collection of articles produced by Reuters between 1996 and 1997 that are categorized into four groups: Corporate/Industrial, Economics, Government/Social, and Markets. We consider binary classification of articles with respect to the Markets class. After the tokenization process, each article is represented by a sparse binary vector, where 1 denotes the existence and 0 the nonexistence of a token in the article. We use a subset of the data with $150,000$ articles and $138,921$ tokens. To induce sparsity of the optimal solution, we consider the problem \eqref{def:SL} \begin{align} \label{numeric upper2}
\mbox{minimize}\ \ &h(x)\triangleq \frac{\mu_h }{2}\|x\|_2^2+ \|x\|_1\\ \hbox{subject to}\ \ &x \in \mathop{\rm argmin}_{y \in \mathbf R^n } \EXP{ \mathcal{L}(\langle y,a\rangle, b)}, \nonumber \end{align} where we let $\mu_h=0.1$. Figure \ref{fig:rcv} shows the simulation results. Here, we vary the initial stepsize, the initial regularization parameter, the initial vector $x_0$, and the averaging parameter $r<1$. We let $\gamma_k$ and $\lambda_k$ be given by the rules in Theorem \ref{thm rate}, and report the logarithm of the loss function $\mathcal{L}$ averaged over 15 sample paths of size $10,000$. The plots in Figure \ref{fig:rcv} support convergence of the IR-SMD method for the optimization problem \eqref{numeric upper2}. \begin{table}[]
\centering
\scalebox{0.9}{
\begin{tabular}{l | c c c}
$(\gamma_0,\lambda_0)$ & $x_0=-10\times\mathbf{1}_n$ & $x_0=\mathbf{0}_n$ & $x_0=10\times\mathbf{1}_n$ \\ \hline\\
(10,1)
&
\begin{minipage}{.3\textwidth}
\includegraphics[scale=.26, angle=0]{finalfig1.pdf}
\end{minipage}
&
\begin{minipage}{.28\textwidth}
\includegraphics[scale=0.26, angle=0]{finalfig2.pdf}
\end{minipage}
&
\begin{minipage}{.28\textwidth}
\includegraphics[scale=.26, angle=0]{finalfig3.pdf}
\end{minipage}
\\
(1,10)
&
\begin{minipage}{.3\textwidth}
\includegraphics[scale=.26, angle=0]{finalfig4.pdf}
\end{minipage}
&
\begin{minipage}{.28\textwidth}
\includegraphics[scale=.26, angle=0]{finalfig5.pdf}
\end{minipage}
&
\begin{minipage}{.28\textwidth}
\includegraphics[scale=.26, angle=0]{finalfig6.pdf}
\end{minipage}
\\
(0.1,0.1)
&
\begin{minipage}{.3\textwidth}
\includegraphics[scale=.26, angle=0]{finalfig7.pdf}
\end{minipage}
&
\begin{minipage}{.28\textwidth}
\includegraphics[scale=.26, angle=0]{finalfig8.pdf}
\end{minipage}
&
\begin{minipage}{.28\textwidth}
\includegraphics[scale=.26, angle=0]{finalfig9.pdf}
\end{minipage}
\end{tabular}}
\captionof{figure}{Performance of IR-SMD method for RCV1 dataset}
\label{fig:rcv} \end{table} \section{Conclusions}\label{sec:rem} Motivated by applications arising from machine learning and signal processing, we consider problem \eqref{def:SL}, where the feasible set is the optimal solution set of problem \eqref{def:firstlevel}. We assume that the objective function in \eqref{def:firstlevel} is convex and that in \eqref{def:SL} is strongly convex. We allow both functions to be nondifferentiable, and each is given as the expected value of a stochastic function. We develop an iterative regularized stochastic mirror descent (IR-SMD) method in which both the stepsize and the regularization parameter are updated at each iteration. Our main contributions are twofold: (i) We derive a family of update rules for the stepsize and the regularization parameter. We show that under this class of update rules, the sequence generated by the IR-SMD method converges to the unique optimal solution of problem \eqref{def:SL} in both an almost sure and a mean sense; (ii) To provide convergence rate statements, we show that under specific update rules for the stepsize and regularization sequences, the expected feasibility gap converges to zero with a near-optimal rate of convergence. Moreover, under these update rules, almost sure convergence and convergence in mean to the optimal solution of the bilevel problem are guaranteed. Our preliminary numerical results, performed on three types of linear inverse problems and a binary text classification problem, are promising.
\appendix \section{Proofs}\label{Appendix} \subsection{\textbf{Proof of Lemma \ref{lemma 2-stage 2}}} \label{proof of lemma 2-stage 2} \begin{proof}
We prove this lemma in two steps.\\
\noindent \textbf{Step1:} For any fixed $z\in Z$ we show that
\begin{align} \label{minsum = summin}
\min_{y\in Y(z)} \sum_{i=1}^{N} p_i q(y_i,\xi_i)= \sum_{i=1}^{N} p_i \min_{y_i \in Y_i(z)} q(y_i,\xi_i),
\end{align}
where the set $Y(z) \subseteq \mathbf{R}^{m \times N}$ is defined as $Y(z)\triangleq \prod_{i=1}^N Y_i(z)$.
For this, we denote the optimization problem on the left-hand side of \eqref{minsum = summin} by $P_{lhs}$, and the minimization problem $\min_{y_i \in Y_i(z)} q(y_i,\xi_i)$ on the right-hand side by $P_{rhs}^i$ for all $i=1,\cdots, N$. To prove the desired statement, we show that a vector ${y^*}^T\triangleq \left({y_1^*}^T,\cdots, {y_N^*}^T\right) \in \mathbf{R}^{m \times N}$ is an optimal solution to problem $P_{lhs}$ if and only if, for all $i=1,\cdots, N$, $y_i^*$ is an optimal solution to problem $P_{rhs}^i$. First, let us assume $y^*$ solves $P_{lhs}$. Then, from the optimality condition we can write
\begin{align} \label{optimality condition1}
\left( p_1\nabla^Tq(y_1^*,\xi_1), \cdots , p_N\nabla^Tq(y_N^*,\xi_N) \right)^T(y-y^*) \geq 0, \qquad \hbox{for all } y\in Y(z),
\end{align}
where we use the notation $y^T=\left(y_1^T, \cdots,y_N^T\right)$. Then, we obtain
\begin{align} \label{optimality condition2}
\sum_{i=1}^{N} p_i \nabla^Tq(y_i^*,\xi_i) (y_i-y_i^*) \geq 0, \qquad \hbox{for all } y\in Y(z).
\end{align}
Consider any arbitrary $i \in \{1,\cdots, N\}$ and an arbitrary $y_i \in Y_i(z)$. Let us define ${\hat y(i)}^T=\left({y_1^*}^T,\cdots,{y_{i-1}^*}^T,{y_i}^T,{y_{i+1}^*}^T, \cdots,{y_N^*}^T\right)$. Clearly $\hat y(i) \in Y(z)$ and from \eqref{optimality condition1} we have
\begin{align}
0 \leq \left( p_1 \nabla^Tq(y_1^*,\xi_1), \cdots , p_N \nabla^T q(y_N^*,\xi_N) \right)^T(\hat y(i)-y^*)=p_i \nabla^T q(y_i^*,\xi_i) (y_i-y_i^*),
\end{align}
implying that $y_i^*$ is an optimal solution to $P_{rhs}^i$ (because $p_i\geq 0$). Since this holds for any $i=1,\cdots, N$, we conclude that if $y^*$ solves $P_{lhs}$, then $y_i^*$ solves problem $P_{rhs}^i$ for $i=1,\cdots, N$.
To show the converse, assume $y_i^* \in Y_i(z)$ solves problem $P_{rhs}^i$ for all $i=1,\cdots,N$. By the optimality condition, we have
\begin{align}
\nabla^T q(y_i^*,\xi_i) (y_i-y_i^*) \geq 0, \qquad \hbox{for all } y_i\in Y_i(z).
\end{align}
Multiplying both sides of the preceding inequality by $p_i$ and summing over $i$, we can see that \eqref{optimality condition2} holds, implying that $y^*$ solves problem $P_{lhs}$.
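The interchange of minimization and summation established in Step1 can be illustrated on a tiny discrete instance; the candidate sets, costs, and weights below are made up for illustration:

```python
import itertools

# Hypothetical per-block candidate sets Y_i(z) and costs q(y_i, xi_i).
Y1, Y2 = [0.0, 1.0, 2.0], [-1.0, 0.5]
q1 = lambda y: (y - 1.3) ** 2
q2 = lambda y: abs(y)
p1, p2 = 0.6, 0.4

# Left-hand side: joint minimization over the product set Y1 x Y2.
lhs = min(p1 * q1(y1) + p2 * q2(y2) for y1, y2 in itertools.product(Y1, Y2))

# Right-hand side: weighted sum of the separate minimizations.
rhs = p1 * min(q1(y) for y in Y1) + p2 * min(q2(y) for y in Y2)

assert abs(lhs - rhs) < 1e-12
print("min over product set equals weighted sum of mins:", lhs)
```

The equality holds because the objective is separable and the feasible set is a Cartesian product, which is exactly the structure exploited in Step1.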
\noindent \textbf{Step2:} We consider the following problem
\begin{align}\label{minmin}
\min_{z \in Z} \min_{y \in Y(z)} c(z)+q(y,\xi),
\end{align}
where $Y(z)$ and $y$ are defined in Step1 and we denote $q(y,\xi) \triangleq \sum_{i=1}^{N} p_i q(y_i,\xi_i)$. We show that the optimal objective value of \eqref{minmin} is equal to the optimal objective value of the following problem
\begin{align} \label{minmin2}
\min_{z\in Z, y \in Y(z)} c(z)+q(y,\xi).
\end{align}
For the problem $\min_{y \in Y(z)} c(z)+q(y,\xi)$, let us define the Lagrangian function
\begin{align*}
L(z,y,\nu)\triangleq c(z)+q(y,\xi)+ \sum_{i=1}^{N} \sum_{j=1}^{J} \nu _{ij}(t_j(z)+w_j(y_i,\xi_i)), \ \hbox{for } z \in Z,\ y_i \in Y,\ \nu \in \mathbf{R}_+^{N\times J}.
\end{align*}
First, let us assume $z\in Z$ is arbitrary and fixed. Taking supremum with respect to $\nu$ and considering the definition of $Y(z)$ we can write
\begin{align*}
\sup_{\nu} L(z,y,\nu)=&\sup_{\nu} \left\{c(z)+q(y,\xi)+ \sum_{i=1}^{N} \sum_{j=1}^{J} \nu _{ij}(t_j(z)+w_j(y_i,\xi_i)) \right\}\\=&\begin{cases}
c(z)+q(y,\xi) &\mbox{if } y \in Y(z) ,
\\
+\infty &\hbox{otherwise}.
\end{cases}
\end{align*}
Taking infimum with respect to $y$ and taking into account that $Y(z) \subseteq Y^N$ we have
\begin{align*}
\inf_{y \in Y^N} \sup_{\nu} L(z,y,\nu)=\inf_{y \in Y(z)} \{c(z)+q(y,\xi)\}.
\end{align*}
Now, by taking infimum with respect to $z \in Z$, we obtain
\begin{align}\label{infinf}
\inf_{z \in Z} \inf_{y \in Y^N} \sup_{\nu} L(z,y,\nu)=\inf_{z \in Z} \inf_{y \in Y(z)} \{c(z)+q(y,\xi)\}.
\end{align}
Second,
for any $z \in Z$, we have
\begin{align*}
\sup_{\nu} L(z,y,\nu)&=\sup_{\nu} \left\{c(z)+q(y,\xi)+ \sum_{i=1}^{N} \sum_{j=1}^{J} \nu _{ij}(t_j(z)+w_j(y_i,\xi_i)) \right\}\\&=\begin{cases}
c(z)+q(y,\xi) &\mbox{if } t_j(z)+w_j(y_i,\xi_i) \leq 0 \hbox{ for all } i,j,
\\
+\infty &\hbox{otherwise}.
\end{cases}
\end{align*}
Thus, we have
\begin{align*}
\inf_{z \in Z,\ y \in Y^N} \sup_{\nu} L(z,y,\nu)=\inf_{\substack{z \in Z,\ y \in Y^N \\ t_j(z)+w_j(y_i,\xi_i) \leq 0 \ \forall i,j}} \{ c(z)+q(y,\xi)\}.
\end{align*}
Considering the definition of $Y(z)$, we can write
\begin{align}\label{infinf2}
\inf_{z \in Z} \inf_{y \in Y^N} \sup_{\nu } L(z,y,\nu)=\inf_{z \in Z, y \in Y(z)} \{c(z)+q(y,\xi)\}.
\end{align}
Equations \eqref{infinf} and \eqref{infinf2} imply that the following holds
\begin{align*}
\inf_{z \in Z} \inf_{y \in Y(z)} \{c(z)+q(y,\xi)\}=\inf_{z \in Z, y \in Y(z)} \{c(z)+q(y,\xi)\}.
\end{align*}
Now it can be easily seen that, by combining Step1 and Step2, model \eqref{single stage} can be written as \eqref{compact two-stage}. \end{proof}
\subsection{\textbf{Proof of Lemma \ref{two-stage2bilevel}}} \label{proof of lemma two-stage2bilevel} \begin{proof}
Since $\xi$ has a distribution with a finite support, we have
\begin{align}
\EXP{Q(z,\xi)}=\sum_{i=1}^{N} p_i Q(z,\xi_i).
\end{align}
Replacing this in \eqref{single stage}, and taking into account that all the assumptions for applying Lemma \ref{lemma 2-stage 2} are met, we can write problem \eqref{single stage} in the form of \eqref{compact two-stage}.
Let $H$ and $F$ be as given in the statement of the lemma.
Now, we show that $x^*\in X$ solves problem \eqref{compact two-stage} if and only if it solves \eqref{bilevel two-stage2}.\\
$(\Rightarrow)$ Assume $x^*\in X$ solves problem \eqref{compact two-stage}. Therefore, $x^*$ satisfies all the constraints of problem \eqref{compact two-stage}. Considering the definition of $F$, it is easy to see that $\EXP{F(x^*,\xi)}=0$. In addition, the definition of $F$ implies that $\EXP{F(x,\xi)}\geq 0$ for all $x \in X$. Thus, we conclude that $x^* \in \mathop{\rm argmin}_{x \in X } \EXP{F(x,\xi)}$, i.e., $x^*$ is a feasible solution to problem \eqref{bilevel two-stage2}. To show optimality of $x^*$ for \eqref{bilevel two-stage2}, assume there is a feasible solution $\hat x\neq x^*$ of problem \eqref{bilevel two-stage2} at which $h(\hat x)< h(x^*)$, where $h(x)\triangleq \EXP{H(x,\xi)}$. Note that by feasibility of $\hat x$, we have $\hat x \in \mathop{\rm argmin}_{x \in X } \EXP{F(x,\xi)}$. We already know that the minimum of $\EXP{F(x,\xi)}$ over $X$ is zero and is attained at $x^*$. Therefore, we have $\EXP{F(\hat x,\xi)}=0$. Using this and the definition of $F$ again, $\hat x$ is a feasible solution to problem \eqref{compact two-stage}. Taking into account that problems \eqref{compact two-stage} and \eqref{bilevel two-stage2} have identical objective functions $h$, and that $h(\hat x)< h(x^*)$, the optimality of $x^*$ for problem \eqref{compact two-stage} is contradicted. We conclude that if $x^*\in X$ solves problem \eqref{compact two-stage}, then it also solves problem \eqref{bilevel two-stage2}.
$(\Leftarrow)$ Now assume that $x^*$ solves problem \eqref{bilevel two-stage2}. Note that from the assumptions of Lemma \ref{lemma 2-stage 2}, problem \eqref{compact two-stage} is feasible. Therefore, there exists $x_0\in X$ that satisfies all the constraints of problem \eqref{compact two-stage}. This means that $\min_{x \in X } \EXP{F(x,\xi)}=0$. So, $\EXP{F(x^*,\xi)}=0$ by its feasibility for \eqref{bilevel two-stage2}. Taking this into account and using the definition of $F$, we can conclude that $x^*$ is a feasible solution for problem \eqref{compact two-stage}. Now, considering that problems \eqref{compact two-stage} and \eqref{bilevel two-stage2} have identical objective functions $h$, we conclude that $x^*$ is also an optimal solution for problem \eqref{compact two-stage}. \end{proof}
\subsection{\textbf{Proof of Lemma \ref{lem:unique sol for h}}}\label{proof of lem:unique sol for h} \begin{proof}
\begin{itemize}
\item[(a)] By Assumption \ref{assum:properties}(b,c), from convexity of $f$ and strong convexity of $h$, for all $x,y \in X$ we have
\begin{align*}
&f(x)+\langle g_f(x), y-x \rangle \leq f(y),\\
&h(x)+\langle g_h(x), y-x \rangle + \frac{\mu_h}{2}\|x-y\|^2 \leq h(y).
\end{align*}
Multiplying the second inequality by the nonnegative parameter $\lambda$ and adding it to the first one, we obtain strong convexity with parameter $\lambda\mu_h$ (positive whenever $\lambda>0$) for the objective function of problem \eqref{regularized form}. Since by Assumption \ref{assum:properties}(a) the set $X$ is compact and convex, the uniqueness follows from subdifferentiability of $f$ in a similar fashion to the proof of Theorem $2.2.6$ of~\cite{Nesterov04}, page 85.
\item[(b)] From Assumption \ref{assum:properties}(c), $h$ is strongly convex. We need to prove the set $X^*$, on which problem \eqref{def:SL} is defined, is compact and convex. We can write: $X^*=X \cap \{x\in \mathbf{R}^n | f(x)\leq f^*\}$.
So, $X^*$ is the intersection of two compact and convex sets; i.e., $X$ and the $f^*$ sublevel set of a convex function, $f$. So $X^*$ is also compact and convex. The rest of proof follows from Theorem $2.2.6$ of~\cite{Nesterov04}, page 85.
\end{itemize} \end{proof} \subsection{\textbf{Proof of Lemma \ref{results from convexity of f and strong convexity of h}}} \label{proof of results from convexity of f and strong convexity of h} \begin{proof} { To show relation \eqref{result from convexity of f}, from convexity of $f$ we have
\begin{align*}
f(x_{\lambda_k}^*)+\langle g_f(x_{\lambda_k}^*),x-x_{\lambda_k}^* \rangle \leq f(x) \qquad \hbox{for all } x \in X, \hbox{ where } g_f(x_{\lambda_k}^*) \in \partial f(x_{\lambda_k}^*).
\end{align*}
Similarly, we can write
\begin{align*}
f(x_{\lambda_{k-1}}^*)+\langle g_f(x_{\lambda_{k-1}}^*),y-x_{\lambda_{k-1}}^* \rangle \leq f(y) \qquad \hbox{for all } y \in X,\end{align*}
where $g_f(x_{\lambda_{k-1}}^*) \in \partial f(x_{\lambda_{k-1}}^*)$.
Let $x:=x_{\lambda_{k-1}}^*$ and $y:= x_{\lambda_k}^*$ in the preceding inequalities. Then, relation \eqref{result from convexity of f} is obtained by adding the resulting two inequalities.\\
Next, we show relation \eqref{result from strong convexity of h}. From strong convexity of $h$, we can write
\begin{align*}
h(x_{\lambda_k}^*)+\langle g_h(x_{\lambda_k}^*),x-x_{\lambda_k}^* \rangle +\frac{\mu_h}{2} \|x_{\lambda_k}^*-x\|^2 &\leq h(x) \qquad \hbox{for all } x \in X, \end{align*}
where $g_h(x_{\lambda_k}^*) \in \partial h(x_{\lambda_k}^*)$, and
\begin{align*}
h(x_{\lambda_{k-1}}^*)+\langle g_h(x_{\lambda_{k-1}}^*),y-x_{\lambda_{k-1}}^* \rangle +\frac{\mu_h}{2} \|x_{\lambda_{k-1}}^*-y\|^2 &\leq h(y) \qquad \hbox{for all } y \in X, \end{align*}
where $g_h(x_{\lambda_{k-1}}^*) \in \partial h(x_{\lambda_{k-1}}^*)$.
Now we let $x:=x_{\lambda_{k-1}}^*$ and $y:= x_{\lambda_k}^*$ in these inequalities. Relation \eqref{result from strong convexity of h} will be obtained by adding the resulting relations.
} \end{proof}
\subsection{\textbf{Proof of Proposition \ref{prop:xk_estimate}}} \label{proof of prop:xk_estimate} \begin{proof}{
\begin{itemize}
\item[(a)] Let $k \geq 1$ be fixed. Since $x_{\lambda_k}^*$ is the minimizer of problem \eqref{regularized form} at $\lambda=\lambda_k$, the optimality conditions give
\begin{align*}
\langle g_f(x_{\lambda_k}^*) + \lambda_k g_h(x_{\lambda_k}^*),x-x_{\lambda_k}^* \rangle \geq 0 \qquad \hbox{for all } x \in X.
\end{align*}
Similarly, from the optimality conditions of problem \eqref{regularized form} at $\lambda=\lambda_{k-1}$, we can write
\begin{align*}
\langle g_f(x_{\lambda_{k-1}}^*) + \lambda_{k-1}g_h(x_{\lambda_{k-1}}^*),y-x_{\lambda_{k-1}}^* \rangle \geq 0 \qquad \hbox{for all } y \in X.
\end{align*}
Let $x:=x_{\lambda_{k-1}}^*$ and $y:= x_{\lambda_k}^*$ in the preceding two inequalities. By adding these relations we obtain
\begin{align*}
\langle g_f(x_{\lambda_k}^*) + \lambda_k g_h(x_{\lambda_k}^*),x_{\lambda_{k-1}}^*-x_{\lambda_k}^* \rangle + \langle g_f(x_{\lambda_{k-1}}^*) + \lambda_{k-1}g_h(x_{\lambda_{k-1}}^*),x_{\lambda_k}^*-x_{\lambda_{k-1}}^* \rangle \geq 0.
\end{align*}
Therefore, by rearranging the left-hand side we obtain
\begin{align}\label{proof1.1}
\langle g_f(x_{\lambda_k}^*) - g_f(x_{\lambda_{k-1}}^*) ,x_{\lambda_{k-1}}^*-x_{\lambda_k}^* \rangle + \langle \lambda_k g_h(x_{\lambda_k}^*) - \lambda_{k-1}g_h(x_{\lambda_{k-1}}^*),x_{\lambda_{k-1}}^*-x_{\lambda_k}^* \rangle \geq 0.
\end{align}
Note that from relation \eqref{result from convexity of f} in Lemma \ref{results from convexity of f and strong convexity of h}, we have \\ $\langle g_f(x_{\lambda_k}^*) - g_f(x_{\lambda_{k-1}}^*) ,x_{\lambda_{k-1}}^*-x_{\lambda_k}^* \rangle \leq 0$. Thus, from \eqref{proof1.1} we obtain
\begin{align}\label{proof1.2}
\langle \lambda_k g_h(x_{\lambda_k}^*) - \lambda_{k-1}g_h(x_{\lambda_{k-1}}^*),x_{\lambda_{k-1}}^*-x_{\lambda_k}^* \rangle \geq 0.
\end{align}
Adding and subtracting $\langle \lambda_k g_h(x_{\lambda_{k-1}}^*),x_{\lambda_{k-1}}^*-x_{\lambda_k}^* \rangle$, it follows that
\begin{align*}
& \langle \lambda_k g_h(x_{\lambda_k}^*) - \lambda_k g_h(x_{\lambda_{k-1}}^*),x_{\lambda_{k-1}}^*-x_{\lambda_k}^* \rangle \\&+ \langle \lambda_k g_h(x_{\lambda_{k-1}}^*) - \lambda_{k-1} g_h(x_{\lambda_{k-1}}^*),x_{\lambda_{k-1}}^*-x_{\lambda_k}^* \rangle \geq 0.
\end{align*}
Therefore, by rearranging the terms we have
\begin{align*}
(\lambda_k- \lambda_{k-1}) \langle g_h(x_{\lambda_{k-1}}^*),x_{\lambda_{k-1}}^*-x_{\lambda_k}^* \rangle \geq \lambda_k\langle g_h(x_{\lambda_{k-1}}^*) - g_h(x_{\lambda_k}^*),x_{\lambda_{k-1}}^*-x_{\lambda_k}^* \rangle.
\end{align*}
Combining the preceding inequality with \eqref{result from strong convexity of h}, we obtain
\begin{align*}
(\lambda_k- \lambda_{k-1}) \langle g_h(x_{\lambda_{k-1}}^*),x_{\lambda_{k-1}}^*-x_{\lambda_k}^* \rangle \geq \mu_h \lambda_k \|x_{\lambda_k}^* - x_{\lambda_{k-1}}^*\|^2.
\end{align*}
By definition of the dual norm $\|\cdot\|_*$, i.e., $\|a\|_* = \sup_{\|b\| \leq1}{\langle a,b\rangle}$, we have the generalized Cauchy--Schwarz inequality $\langle a,b\rangle \leq \|a\|_* \|b\|$, so
\begin{align*}
|\lambda_k- \lambda_{k-1}|\|g_h(x_{\lambda_{k-1}}^*)\|_*\|x_{\lambda_{k-1}}^*-x_{\lambda_k}^*\| \geq \mu_h \lambda_k \|x_{\lambda_k}^* - x_{\lambda_{k-1}}^*\|^2.
\end{align*}
From Assumption \ref{assum:properties}(e) and Remark \ref{assumptionandJensen}, we have $\|g_h(x)\|_* \leq C_H$ for all $g_h \in \partial h(x)$ and $x \in X$. Thus,
\begin{align*}
|\lambda_k- \lambda_{k-1}|C_H\|x_{\lambda_{k-1}}^*-x_{\lambda_k}^*\| \geq \mu_h \lambda_k \|x_{\lambda_k}^* - x_{\lambda_{k-1}}^*\|^2.
\end{align*}
Let us assume $ x_{\lambda_k}^* \neq x_{\lambda_{k-1}}^*$. Then
\begin{align*}\|x_{\lambda_k}^*-x_{\lambda_{k-1}}^*\|\leq \frac{C_H}{\mu_h}\left |1-\frac{\lambda_{k-1}}{\lambda_k} \right|.
\end{align*}
If $x_{\lambda_k}^* = x_{\lambda_{k-1}}^*$, then $\|x_{\lambda_k}^*-x_{\lambda_{k-1}}^*\|=0 \leq \frac{C_H}{\mu_h}\left |1-\frac{\lambda_{k-1}}{\lambda_k} \right|$. Therefore the desired inequality holds.
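The bound in part (a) can be checked numerically. The following sketch is a hypothetical one-dimensional instance (not taken from the paper): it takes $f(x)=|x-1|$ (convex), $h(x)=x^2/2$ (so $\mu_h=1$), $X=[-2,2]$ (so $C_H=2$ bounds $\|g_h\|_*$ on $X$), computes grid minimizers of $f+\lambda_k h$ along a decreasing sequence $\lambda_k$, and verifies $\|x_{\lambda_k}^*-x_{\lambda_{k-1}}^*\|\leq \frac{C_H}{\mu_h}\left|1-\frac{\lambda_{k-1}}{\lambda_k}\right|$.

```python
import numpy as np

# Hypothetical instance (not from the paper): f(x) = |x - 1| is convex,
# h(x) = x**2 / 2 is 1-strongly convex (mu_h = 1), X = [-2, 2] is compact.
# On X, |h'(x)| = |x| <= 2, so we may take C_H = 2.
MU_H, C_H = 1.0, 2.0
GRID = np.linspace(-2.0, 2.0, 200001)  # fine grid over X

def x_star(lam):
    """Approximate minimizer of f(x) + lam * h(x) over X (grid search)."""
    vals = np.abs(GRID - 1.0) + lam * GRID**2 / 2.0
    return GRID[np.argmin(vals)]

# A decreasing sequence lambda_k, and the bound of part (a):
lambdas = [4.0 / (k + 1) ** 0.3 for k in range(1, 30)]
gaps, bounds = [], []
for lam_prev, lam in zip(lambdas, lambdas[1:]):
    gaps.append(abs(x_star(lam) - x_star(lam_prev)))
    bounds.append((C_H / MU_H) * abs(1.0 - lam_prev / lam))
```

Up to the grid resolution, every gap stays below the corresponding bound.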
\item[(b)] Let $x^*$ be the minimizer of function $f$ over the set $X$ and $x_{\lambda_k}^*$ be the minimizer of \eqref{regularized form} at $\lambda=\lambda_k$. From the optimality conditions for this problem we have
\begin{align*}
\langle g_f(x_{\lambda_k}^*) + \lambda_k g_h(x_{\lambda_k}^*),x-x_{\lambda_k}^* \rangle \geq 0 \qquad \hbox{for all } x \in X.
\end{align*}
Similarly, we can write for any arbitrary $x^* \in X^*$
\begin{align*}
\langle g_f(x^*),y-x^* \rangle \geq 0 \qquad \hbox{for all } y \in X.
\end{align*}
Let $x:=x^*$ and $y:= x_{\lambda_k}^*$ in the preceding inequalities. Then by adding them we obtain
\begin{align*}
\langle g_f(x_{\lambda_k}^*) + \lambda_k g_h(x_{\lambda_k}^*)-g_f(x^*),x^*-x_{\lambda_k}^* \rangle \geq 0.
\end{align*}
Rearranging the inequality, we have
\begin{align*}
\langle \lambda_k g_h(x_{\lambda_k}^*),x^*-x_{\lambda_k}^* \rangle \geq \langle g_f(x^*) - g_f(x_{\lambda_k}^*),x^*-x_{\lambda_k}^* \rangle.
\end{align*}
By convexity of $f$, we know that $\langle g_f(x^*) - g_f(x_{\lambda_k}^*),x^*-x_{\lambda_k}^* \rangle \geq 0$. So from the preceding relation we obtain
\begin{align} \label{proof b1}
\langle g_h(x_{\lambda_k}^*),x^*-x_{\lambda_k}^* \rangle \geq 0.
\end{align}
By convexity of $h$, for all $x,y \in X$ we can also have
\begin{align*}
h(x) \geq h(y)+ \langle g_h(y),x-y \rangle,
\end{align*}
where $g_h(y)\in \partial h(y)$. Now by letting $x:=x^*$ and $y:= x_{\lambda_k}^*$, we have
\begin{align}\label{proof b2}
h(x^*) \geq h(x_{\lambda_k}^*)+ \langle g_h(x_{\lambda_k}^*),x^*-x_{\lambda_k}^* \rangle.
\end{align}
Comparing \eqref{proof b1} and \eqref{proof b2}, we obtain that $ h(x^*) \geq h(x_{\lambda_k}^*)$. Note that $x_h^* \in X^*$, implying that $x_h^*$ is an optimal solution to problem \eqref{def:firstlevel}. Therefore, letting $ x^* := x_h^*$ we obtain
\begin{align}\label{proof b3}
h(x_h^*) \geq h(x_{\lambda_k}^*) \qquad \hbox{for all } k \geq 0.
\end{align}
Now consider the sequence $\{x_{\lambda_k}^*\}$. We know $x_{\lambda_k}^* \in X$. Since $X$ is bounded, $\{x_{\lambda_k}^*\}$ has at least one accumulation point. Let $\hat{x}^*$ be an accumulation point of $\{x_{\lambda_k}^*\}$. From optimality of $x_{\lambda_k}^*$ we have
\begin{align*}
f(x_{\lambda_k}^*) + \lambda_k h(x_{\lambda_k}^*)\leq f(x) + \lambda_k h(x) \qquad \hbox{for all } x \in X.
\end{align*}
Taking limits along the convergent subsequence from both sides of the preceding inequality for all $x \in X$ and considering the assumption that $\lambda_k \rightarrow0$ and continuity of $f$ and $h$, we have
\begin{align*}
f(\hat{x}^*) \leq f(x) \qquad \hbox{for all } x \in X.
\end{align*}
Thus, $\hat{x}^* \in X^*$ implying that any arbitrary accumulation point of $\{x_{\lambda_k}^*\}$ is an optimal solution to problem \eqref{def:firstlevel}.\\
Now let $\{x_{\lambda_{k_i}}^*\}$ be an arbitrary convergent subsequence of $\{x_{\lambda_k}^*\}$ with accumulation point $\tilde x^*$. Taking limits of \eqref{proof b3} along $\{x_{\lambda_{k_i}}^*\}$ we have $ h(x_h^*) \geq h(\tilde x^*).$
But from Lemma \ref{lem:unique sol for h} we know that $x_h^*$ is the unique optimal solution of problem \eqref{def:SL}. Thus, $\tilde x^*=x_h^* $. Therefore, all the limit points of $\{x_{\lambda_k}^*\}$ coincide with $x_h^*$. Hence, $\lim_{k \to \infty} x_{\lambda_k}^*$ exists and is equal to $x_h^*$.
\end{itemize} } \end{proof} \subsection{ \textbf{Proof of Lemma \ref{lemma general conv lemma}}} \label{proof of lemma general conv lemma} \begin{proof}
Let $e_k=\EXP{\nu_k}$ for all $k$. We prove this lemma by induction. First we show that the result holds for $k=1$. By definition of $\tau$ we have $\tau \geq \frac{\EXP{\nu_1} \alpha_0}{\beta_0}$. So $\EXP{\nu_1}=e_1 \leq \frac{\beta_0}{\alpha_0} \tau$, and the result holds for $k=1$. Let us now assume that $e_k \leq \frac{\beta_{k-1}}{\alpha_{k-1}} \tau$. We need to show that $e_{k+1} \leq \frac{\beta_k}{\alpha_k} \tau$. By taking expectations from both sides of \eqref{conv lemma ineq} we have
\begin{align*}
e_{k+1} \leq(1-\alpha_k)e_k+\beta_k.
\end{align*}
By the inductive assumption and that $\alpha_k \leq 1$ we obtain
\begin{align*}
e_{k+1} \leq(1-\alpha_k)\frac{\beta_{k-1}}{\alpha_{k-1}} \tau+\beta_k.
\end{align*}
From the assumption $\frac{\beta_{k-1}}{\alpha_{k-1}} \leq \frac{\beta_k}{\alpha_k}(1+ \rho \alpha_k)$ and the preceding relation, we have
\begin{align*}
e_{k+1} \leq(1-\alpha_k)\frac{\beta_k}{\alpha_k}(1+ \rho \alpha_k) \tau+\beta_k.
\end{align*}
So we can write
\begin{align*}
e_{k+1} \leq\frac{\beta_k}{\alpha_k}\tau-(1-\rho)\beta_k\tau -\rho \alpha_k\beta_k\tau+\beta_k.
\end{align*}
Rearranging the terms, we obtain
\begin{align*}
e_{k+1} \leq \frac{\beta_k}{\alpha_k}\tau+\beta_k(-\tau(1-\rho)-\rho \tau \alpha_k+1) \leq \frac{\beta_k}{\alpha_k}\tau+\beta_k\underbrace{(-\tau(1-\rho)+1)}_\text{\hbox{Term1}}.
\end{align*}
Since $\tau\geq \frac{1}{1-\rho}$ and $0<\rho<1$, Term1 is always nonpositive. So, $e_{k+1}\leq \frac{\beta_k}{\alpha_k} \tau$ and the proof is complete. \end{proof} \subsection{\textbf{Proof of Proposition \ref{prop condition for sequences}}} \label{proof of prop condition for sequences} \begin{proof} \noindent (i) In the following, we show that conditions of Assumption \ref{assumptions for conv} are satisfied.
\begin{itemize}
\item[(a)] Note that $a,b>0$ and $\gamma_0\lambda_0 \leq \frac{L_\omega}{\mu_h}$ are sufficient for the sequences to be non-increasing and for Assumption \ref{assumptions for conv}(a) to be satisfied.
\item[(b)] From the definition of $\gamma_k$ and $\lambda_k$ and that $a+b<1$, we have
\begin{align*}
\sum_{k=0}^{\infty}\gamma_k\lambda_k=\sum_{k=0}^{\infty}\frac{\gamma_0}{(k+1)^a}\frac{\lambda_0}{(k+1)^b}=\sum_{k=0}^{\infty}\frac{\gamma_0\lambda_0}{(k+1)^{a+b}}=\infty.
\end{align*}
\item[(c)] We have
\begin{align*}
\sum_{k=0}^{\infty}\frac{1}{\gamma_k\lambda_k}\left(\frac{\lambda_{k-1}}{\lambda_k}-1\right)^2=\sum_{k=0}^{\infty}\frac{(k+1)^{a+b}}{\gamma_0\lambda_0} \left(\underbrace{\left(1+\frac{1}{k}\right)^b-1}_\text{\hbox{Term1}}\right)^2.
\end{align*}
By the Taylor expansion of $(1+1/k)^b$, we can write Term1 $=\left(1+\frac{b}{k}+ \frac{b(b-1)}{2k^2} + \frac{b(b-1)(b-2)}{6k^3} + \cdots\right)-1={\cal O}\left(k^{-1}\right)$. So the preceding equality becomes
\begin{align*}
\sum_{k=0}^{\infty}\frac{1}{\gamma_k\lambda_k}\left(\frac{\lambda_{k-1}}{\lambda_k}-1\right)^2=\sum_{k=0}^{\infty}\frac{(k+1)^{a+b}}{\gamma_0\lambda_0}{\cal
O}\left(k^{-2}\right)=\sum_{k=0}^{\infty} {\cal O}\left(k^{-(2-(a+b))}\right).
\end{align*}
Since $a+b<1$, we have $2-(a+b)>1$. Therefore, the preceding series is summable implying that Assumption \ref{assumptions for conv}(c) is satisfied.
\item[(d)] We have $\gamma_k^2 = \gamma_0^2/(k+1)^{2a} ={\cal O}\left(k^{-2a}\right)$. Since $a>0.5$, $\gamma_k^2$ is summable and Assumption \ref{assumptions for conv}(d) is met.
\item[(e)] In a similar fashion to part (c), we have
\begin{align*}
\lim_{k\to \infty} \frac{1}{\gamma_k^2\lambda_k^2}\left(\frac{\lambda_{k-1}}{\lambda_k}-1\right)^2=\lim_{k\to \infty} \frac{(k+1)^{2a+2b}}{\gamma_0^2\lambda_0^2}{\cal O}\left(k^{-2}\right).
\end{align*}
Since $a+b<1$, this limit goes to zero which implies that Assumption \ref{assumptions for conv}(e) is satisfied.
\item[(f)] The last condition in Assumption \ref{assumptions for conv} holds due to $a>b$.
\end{itemize}
\noindent (ii) Next, we verify conditions of Assumption \ref{assumptions for general conv}.
\begin{itemize}
\item[(a)] The proof for this condition is identical to the proof given for Assumption \ref{assumptions for conv}.
\item[(b)] By the analysis from part (i)(c) we have
\begin{align*}
\frac{1}{\gamma_k^3\lambda_k}\left(\frac{\lambda_{k-1}}{\lambda_k}-1\right)^2=\frac{(k+1)^{3a+b}}{\gamma_0\lambda_0}{\cal O}\left(k^{-2}\right)={\cal O}\left(k^{3a+b-2}\right).
\end{align*}
Note that since $3a+b<2$, the preceding term is bounded above by a constant. Therefore, there are constants $B_1$ and $k_1$ for which Assumption \ref{assumptions for general conv}(b) is satisfied.
\item[(c)] We need to show that there are $0<\rho<1$ and $k_2$ such that the following holds:
\begin{align*}
\underbrace{\frac{\gamma_{k-1}\lambda_k}{\lambda_{k-1}\gamma_k} -1}_\text{\hbox{Term1}} \leq \rho\frac{\mu_h}{2L_\omega}\gamma_k\lambda_k \qquad \hbox{for all } k \geq k_2.
\end{align*}
Replacing $\gamma_k$ and $\lambda_k$ in Term1 and applying the same analysis as in part (i)(c), together with $a>b$, we have Term1 $=\left(1+1/k\right)^{a-b} -1 = {\cal O}\left(k^{-1}\right)$. Multiplying and dividing Term1 by $\rho\frac{\mu_h}{2L_\omega}\gamma_k\lambda_k$, where $\rho$ can be any constant between 0 and 1, we obtain
\begin{align*}
\hbox{Term1}= \rho\frac{\mu_h}{2L_\omega}\gamma_k\lambda_k \frac{{\cal O}\left(k^{-1}\right)}{\rho\frac{\mu_h}{2L_\omega}\gamma_k\lambda_k}&=\rho\frac{\mu_h}{2L_\omega}\gamma_k\lambda_k \frac{{\cal O}\left(k^{-1}\right)}{\rho\frac{\mu_h}{2L_\omega}\gamma_0\lambda_0(k+1)^{-a-b}}\\&=\rho\frac{\mu_h}{2L_\omega}\gamma_k\lambda_k {\cal O}\left(k^{-1+a+b}\right).
\end{align*}
Since $a+b<1$, ${\cal O}\left(k^{-1+a+b}\right)$ converges to zero. So, there exists an integer $k_2$ such that for any $k\geq k_2$ we have Term1 $\leq \rho\frac{\mu_h}{2L_\omega}\gamma_k\lambda_k$. Thus, Assumption \ref{assumptions for general conv}(c) is satisfied.
\item[(d)] Condition (d) of Assumption \ref{assumptions for general conv} follows due to $a>b$.
\end{itemize} \end{proof}
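As a numerical sanity check of Lemma \ref{lemma general conv lemma}, the following minimal sketch uses hypothetical constant sequences $\alpha_k=0.5$ and $\beta_k=0.1$ (for which the growth condition $\frac{\beta_{k-1}}{\alpha_{k-1}} \leq \frac{\beta_k}{\alpha_k}(1+\rho\alpha_k)$ holds trivially for any $\rho\in(0,1)$), iterates the worst case $e_{k+1}=(1-\alpha_k)e_k+\beta_k$, and confirms the claimed bound $e_k\leq \frac{\beta_{k-1}}{\alpha_{k-1}}\tau$.

```python
# Hypothetical constant sequences (not from the paper): alpha_k = 0.5,
# beta_k = 0.1, so beta_{k-1}/alpha_{k-1} = beta_k/alpha_k and the growth
# condition holds for any rho in (0, 1).  With rho = 0.5 the lemma requires
# tau >= max(1/(1 - rho), e_1 * alpha_0 / beta_0).
alpha, beta, rho = 0.5, 0.1, 0.5
e = 1.0                      # e_0: an arbitrary nonnegative start
e = (1 - alpha) * e + beta   # e_1, taking the recursion with equality
tau = max(1.0 / (1.0 - rho), e * alpha / beta)
traj = []
for _ in range(200):
    traj.append(e)
    e = (1 - alpha) * e + beta  # worst case of inequality (conv lemma ineq)
bound = (beta / alpha) * tau     # claimed bound on every e_k, k >= 1
```

The trajectory stays below the bound and settles at the fixed point $\beta/\alpha$.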
\end{document}
\begin{document}
\maketitle
\begin{abstract}
We consider quantified pretransitive Horn modal logic. It is known that such logics are complete with respect to predicate Kripke frames with expanding domains. In this paper we prove that they are also complete with respect to neighbourhood frames with constant domains.
\end{abstract}
\section{Introduction}
In this paper we consider neighbourhood semantics for predicate modal logic. This semantics is drastically understudied: there are only a few completeness results for particular logics and no general completeness results.
We aim to take a first step and prove completeness for a simple yet infinite class of logics.
Probably, the most straightforward generalization of Kripke semantics to the predicate case is the Kripke semantics with constant domains.
But in any Kripke frame with a constant domain the Barcan formula $ \forall x \left (\Box P(x)\right ) \to \Box \forall x P(x) $ is valid.
At the same time, the Barcan formula is not derivable in predicate modal logics in general. Instead, Kripke frames with expanding domains were considered, and several completeness results were proved (see \cite{gabbay2009quantification}).
But still, many simple predicate modal logics are Kripke incomplete. Different authors suggested several generalizations, including neighbourhood semantics, (for an overview see \cite[Part II]{gabbay2009quantification}).
In the case of transitive reflexive logics, neighbourhood semantics is equivalent to the topological semantics. \cite{rasiowa1963metamathematics} showed that $ \logic{QS4} $ is complete with respect to topological spaces with constant domain.
In a recent paper, \cite{kremer2014quantified} has proved that $ \logic{QS4} $ is complete with respect to the set of rational numbers $ \mathbb Q $.
\cite{arlo2006first} have proved that the minimal normal modal predicate logic $ \logic{QK} $ is complete with respect to neighbourhood frames with constant domains. The methods for proving completeness were different in the last two papers. In \cite{arlo2006first} authors used a neighbourhood canonical model construction, whereas \cite{kremer2014quantified} used the Kripke semantics as an intermediate step and constructed p-morphisms from $ \mathbb Q $ onto a Kripke frame with expanding domains.
The intuition behind Kremer's construction is that a topological (or, more generally, neighbourhood) frame with a constant domain is very similar to a product of a topological space and an $\logic{S5}$-frame, where the $\logic{S5}$-frame plays the role of the domain.
In this paper we use the same idea and prove that $ \mathsf{QL} $ is complete with respect to neighbourhood frames with constant domains, where $ \mathsf{L} $ is a one-way PTC-logic (see Definition \ref{def:PTClogic}).
We adopt the methods developed by the author for neighbourhood-Kripke products in \cite{Kudinov_KNproduct17}.
\section{Propositional modal logic}
\subsection{Syntax}
Let $ \mathrm{PROP} $ be a countable set of propositional letters. A \emph{(propositional modal) formula} is defined recursively using the Backus-Naur form as follows:
$$
A ::= p\; |\;\bot \; | \; (A \to A) \; | \; \Box_i A,
$$
where $p\in \mathrm{PROP}$, and $\Box_i$ is a modal operator ($ i= 1, \ldots, N$).
Other connectives are introduced as abbreviations: classical connectives are expressed through $\bot$ and $\to$,
and $\Diamond_i$ is a shorthand for $\lnot\Box_i\lnot$. The set of all modal formulas is denoted by $ \mathcal{ML}_N $.
\begin{definition} A \emph{normal modal logic\/} (or \emph{a logic}, for short) is
a set of modal formulas closed under Substitution $\left
(\frac{A(p)}{A(B)}\right )$, Modus Ponens $\left (\frac{A,\,
A\to B}{B}\right )$ and Generalization rules $\left
(\frac{A}{\Box_i A}\right)$, containing all the
classical tautologies and the normality axioms:
$$
\begin {array}{l}
\Box_i (p\to q)\to (\Box_i p\to \Box_i q).
\end{array}
$$
$\logic{K_N}$ denotes \emph{the minimal normal modal logic with $N$ modalities} and $\logic{K} = \logic{K_1}$. \end{definition}
Let $\logic{L}$ be a logic and $\Gamma$ be a set of formulas, then $\logic{L} + \Gamma$ denotes the minimal logic containing $\logic{L}$ and $\Gamma$. If $\Gamma = \set{A}$, then we write $\logic{L} + A$ rather than $\logic{L}+ \{ A \}$.
\subsection{Kripke semantics} \begin{definition}\label{def:Kripke_model}
A \emph{Kripke frame} is a tuple $F = (W, R)$, where $ W $ is a non-empty set and $ R \subseteq W \times W $.
For a Kripke frame $F = (W, R)$ we define the \emph{subframe} generated by $w \in W$ as the frame $F^w = (W', R|_{W'})$, where $W' = R^*(w)$ and ${R}|_{W'} = R \cap W' \times W'$. The star $ {}^* $ here stands for the reflexive and transitive closure of a relation. A frame $F$ is \emph{rooted} if $F=F^w$ for some $w$.
A frame $ F $ with a \emph{valuation} $ V:PROP \to 2^W $ is \emph{a model} $M = (F, V)$. We say that model $ M $ is \emph{based} on frame $ F $.
The truth of a formula in a model $ M $ at a point $ x\in W $ is defined, as usual, by induction on the length of the formula:
\begin{align*}
M, x &\not\models \bot; &\\
M, x &\models p &\iff& x \in V(p);\\
M, x &\models A \to B &\iff& M, x \not\models A \hbox{ or } M, x \models B;\\
M, x &\models \Box A &\iff& \forall y\; (xR y \Rightarrow M, y \models A).
\end{align*}
A formula $ A $ is \emph{true in a (Kripke) model} $M$ if $\forall x\in W (M, x \models A)$ (notation $M \models A$).
A formula $ A $ is \emph{valid on a (Kripke) frame} $F$ if for any valuation $ V $ $ (F, V) \models A$ (notation $F \models A$).
A formula is \emph{valid on a (Kripke) frame $ F $ at a point $ x $} if it is true in all models based on $F$ at point $ x $ (notation $F, x \models A$). That is if $ \forall V (F,V, x \models A) $.
We write $F \models \mathsf{L}$ if $F\models A$ for all $A \in \mathsf{L}$.
\end{definition}
We define the logic of a class of Kripke frames $\mathcal{F}$ as $${Log}(\mathcal{F}) = \setdef[A]{F \models A \hbox{ for all } F \in \mathcal{F}}.$$
For a single frame we write ${Log}(F)$ instead of ${Log}(\set{F})$.
We sometimes write $w\in F$ as a shorthand for ``$w \in W$ and $F = (W, R)$''.
\begin{definition} \label{def:p_morphism}
Let $F = (W, R)$ and $G = (U, S)$ be Kripke frames. A function $f: W \to U$ is a \emph{p-morphism} (Notation: $f: F \twoheadrightarrow G$) if
\begin{enumerate}
\item $f$ is surjective;
\item {}\textbf{[monotonicity]} for any $w, v\in W$, $ wR v $ implies $ f(w) S f(v) $;
\item {}\textbf{[lifting]} for any $w\in W$ and $v' \in U$ such that $ f(w)S v' $ there exists $v \in W$ such that $ wRv $ and $ f(v)=v' $.
\end{enumerate} \end{definition}
The following p-morphism lemma is well-known (see \cite[Proposition 2.14]{blackburn_modal_2002}).
\begin{lem}\label{lem:pmorhism4K-frame}
Let $f: F \twoheadrightarrow G$ and $V'$ be a valuation on $G$. We define a valuation $ V $ on $ F $, such that $V(p) = f^{-1}(V'(p))$. Then, for any $ x\in F $ and formula $ A $
\[
F, V, x \models A \iff G, V', f(x) \models A.
\] \end{lem}
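Lemma \ref{lem:pmorhism4K-frame} can be illustrated on finite frames. The sketch below is a hypothetical example (formulas are encoded as nested tuples): it maps a four-cycle p-morphically onto a two-cycle, pulls back a valuation along $f$, and checks that the truth of $\Box p$ agrees at $x$ and $f(x)$.

```python
# A minimal Kripke-model evaluator; formulas are nested tuples, e.g.
# ('box', ('var', 'p')) for []p and ('to', A, B) for A -> B.
def holds(W, R, V, x, A):
    tag = A[0]
    if tag == 'bot':
        return False
    if tag == 'var':
        return x in V[A[1]]
    if tag == 'to':
        return (not holds(W, R, V, x, A[1])) or holds(W, R, V, x, A[2])
    if tag == 'box':
        return all(holds(W, R, V, y, A[1]) for y in W if (x, y) in R)
    raise ValueError(tag)

# Target frame G: a two-element cycle; source frame F: a four-cycle.
WG, SG = {'a', 'b'}, {('a', 'b'), ('b', 'a')}
WF, RF = {0, 1, 2, 3}, {(0, 1), (1, 2), (2, 3), (3, 0)}
f = {0: 'a', 1: 'b', 2: 'a', 3: 'b'}

# f is a p-morphism: surjective, monotone, and with the lifting property.
assert set(f.values()) == WG
assert all((f[u], f[v]) in SG for (u, v) in RF)
assert all(any((u, v) in RF and f[v] == w for v in WF)
           for u in WF for w in WG if (f[u], w) in SG)

# Pull back a valuation along f and compare truth, as in the lemma.
Vp = {'p': {'a'}}
V = {'p': {u for u in WF if f[u] in Vp['p']}}
phi = ('box', ('var', 'p'))  # []p
transfer = [holds(WF, RF, V, x, phi) == holds(WG, SG, Vp, f[x], phi)
            for x in WF]
```

Truth of $\Box p$ transfers at every point, as the lemma predicts.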
\begin{definition}
A formula $A$ is \emph{closed} if it contains no variables. \end{definition}
\begin{lem}
For any closed modal formula $ A $ and a p-morphism of Kripke frames $f:F \twoheadrightarrow G$
\[
F,x \models A \iff G, f(x) \models A.
\] \end{lem}
This follows from Lemma \ref{lem:pmorhism4K-frame} since the truth of a closed formula does not depend on the valuation.
We define formula $ \Box^k A$ by induction: \begin{itemize}
\item $ \Box^0 A = A $;
\item $ \Box^{k+1} A = \Box^k \Box A $. \end{itemize}
\begin{definition}
Logic $ \logic{L} $ is called \emph{pretransitive} if for some $ k>0 $ formula
$ p \land \Box p \land \ldots \land \Box^{k} p \to \Box^{k+1} p $ is in $ \logic{L} $. \end{definition}
Sometimes such logics are called \emph{weakly transitive} \cite[Section 3.4]{kracht1999tools}.
\begin{definition}\label{def:PTClogic}
A logic $ \mathsf{L} $ is a \emph{one-way PTC-logic}\footnote{In \cite{Kudinov_KNproduct17} such logics are called HTC-logics} if it can be axiomatized by closed formulas and formulas of the type $ \Box p \to \Box^k p $, where $ k\ge 0 $. These formulas correspond to universal strict Horn sentences (see \cite{GSh_Product_1998}). \end{definition}
\begin{remark*}
It is easy to see that a one-way PTC-logic is pretransitive. \end{remark*}
The following well-known and well-studied logics are one-way PTC-logic: \begin{align*}
\logic{D} &= \logic{K} + \lnot \Box \bot, & \logic{T} &= \logic{K} + \Box p \to p,\\
\logic{K4} &= \logic{K} + \Box p \to \Box\Box p, & \logic{S4} &= \logic{T} + \Box p \to \Box\Box p,\\
\logic{D4} &= \logic{K4} + \lnot \Box \bot. \end{align*} In contrast, the logic $ \logic{S5} = \logic{S4} + p \to \Box \diamondsuit p$ is not a one-way PTC-logic.
\begin{definition}
Following \cite{GSh_Product_1998}, we define \emph{a universal strict Horn sentence} as a first-order closed formula
\[
\forall x \forall y\forall z_1 \ldots \forall z_n \bigl(\phi(x,y, z_1, \ldots, z_n) \to \psi(x,y)\bigr),
\]
where $ \phi(x,y, z_1, \ldots, z_n) $ is quantifier-free and positive (i.e.{}, it is built from atomic formulas using $ \land $ and $ \lor $) and $ \psi(x,y) $ is an atomic formula in the signature $ \Omega = \left\langle R^{(2)}_1, \ldots, R^{(2)}_m\right \rangle $, where each $ R^{(2)}_i $ is a binary predicate letter. In our case $ m=1 $. \end{definition}
Let $ \Gamma $ be a set of universal strict Horn sentences and $F$ be a Kripke frame. We denote the $ \Gamma $-closure of $ F $ by $ F^{\Gamma} $: the minimal (in terms of inclusion of relations) frame extending $ F $ in which all sentences from $ \Gamma $ are valid. Such a frame exists due to
\begin{lem}[{\cite[Prop 7.9]{GSh_Product_1998}}]\label{lem:HornClosure}
For any Kripke frame $ F=(W, R_1, \ldots, R_N) $ and a set of universal strict Horn sentences $\Gamma$, there exists $ F^{\Gamma} = (W, R_1^{\Gamma}, \ldots, R_N^{\Gamma}) $ such that
\begin{itemize}
\item $ R_i \subseteq R_i^{\Gamma} $ for all $ i $;
\item $ F^{\Gamma} \models \Gamma $;
\item if $ G \models \Gamma $ and $ f:F \twoheadrightarrow G $ then $ f: F^{\Gamma} \twoheadrightarrow G$.
\end{itemize} \end{lem}
The minimality of $ F^\Gamma $ follows from the proof.
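For a concrete instance of Lemma \ref{lem:HornClosure}: if $\Gamma$ consists of the transitivity sentence $\forall x\forall y\forall z\,(xRy \land yRz \to xRz)$, then $F^\Gamma$ is the transitive closure of $F$. The following sketch (a hypothetical finite example) computes it as the least fixpoint of adding every pair forced by the Horn clause.

```python
# For Gamma = { forall x,y,z (xRy & yRz -> xRz) }, the Gamma-closure of
# (W, R) is the transitive closure, computed here as the least fixpoint
# of adding every instance forced by the Horn clause.
def gamma_closure(R):
    R = set(R)
    while True:
        forced = {(x, z) for (x, y1) in R for (y2, z) in R if y1 == y2}
        if forced <= R:
            return R
        R |= forced

R = {(1, 2), (2, 3), (3, 4)}   # a three-edge chain
RG = gamma_closure(R)           # its transitive (Gamma-) closure
```

By construction $R \subseteq R^\Gamma$ and the closure validates $\Gamma$, matching the first two items of the lemma.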
\begin{definition}
Let $ F = (W, R)$ be a Kripke frame. A \emph{path} in $ F $ is a tuple $ w_0 R w_1 \ldots R w_m $, i.e., a sequence of worlds such that $ w_i R w_{i+1} $ for all $ i\in\set{0,\ldots, m-1} $. \end{definition}
\begin{definition}\label{def:unraveling} Let $ F = (W, R)$ be a rooted frame with root $ w_0 $. A path in $ F $ is \emph{rooted} if it starts with $ w_0 $. Let $ W^{\sharp} $ be the set of all rooted paths in $ F $.
For any rooted path $ \alpha = w_0 R w_1 \ldots R w_m $ we define \begin{align*}
\pi (\alpha) &= w_m;\\
\alpha R^{\sharp} \beta &\iff \beta = \alpha w_{m+1} \hbox{ and } \pi(\alpha) R \pi(\beta). \end{align*} Frame $ F^{\sharp} = (W^{\sharp}, R^{\sharp}) $ is the \emph{unravelling} of $F$. \end{definition}
\begin{lem}\label{lem:unrev_pmorph}
Map $ \pi $ is a p-morphism: $ \pi: F^{\sharp} \twoheadrightarrow F $. \end{lem}
The proof is straightforward (c.f. \cite[Lemma 4.52]{blackburn_modal_2002}).
\subsection{Neighbourhood semantics} We will only give the definitions we need for this paper; more information on this topic and general definitions can be found in \cite{Segerberg1971,Chellas1980} and a more recent book \cite{pacuit2017neighborhood}.
\begin{definition}\label{def:filter}
Let $X$ be a non-empty set of \emph{points}, then $\mathcal{F} \subseteq 2^X$ is a \emph{filter} on $X$ if
\begin{enumerate}
\item $X \in \mathcal{F}$;
\item if $U_1,\; U_2 \in \mathcal{F}$, then $U_1 \cap U_2 \in \mathcal{F}$;
\item if $U_1 \in \mathcal{F}$ and $U_1 \subseteq U_2$, then $U_2 \in \mathcal{F}$.
\end{enumerate}
It is usually required that $\varnothing \notin \mathcal{F}$ ($\mathcal{F}$ is a proper filter), but we will not require it in this paper. \end{definition}
\begin{definition}\label{def:filterbase}
For $X \ne \varnothing$ a set of subsets $ \mathcal{B} \subseteq 2^X$ is a \emph{filter base} if
\begin{enumerate}
\item $ \mathcal{B} \ne \varnothing $;
\item for any $U_1,\; U_2 \in \mathcal{B}$ $ \exists U_3 \in \mathcal{B} \left ( U_3 \subseteq U_1 \cap U_2 \right )$.
\end{enumerate}
Given a filter base $ \mathcal{B} $, the filter \emph{generated} by $ \mathcal{B} $ is defined as the minimal filter containing $ \mathcal{B} $. It is the family of all supersets of sets from $ \mathcal{B} $. \end{definition}
\begin{definition}
A \emph{(normal) neighbourhood frame} (an \emph{n-frame} for short) is a tuple $\mathfrak X = (X, \tau)$, where $X$ is a nonempty set and $\tau: X \to 2^{2^X}$ such that $\tau(x)$ is a filter\footnote{Usually neighbourhood semantics is used for non-normal logics, and in the most general case there are no restrictions on the neighbourhood function, but here we will consider only normal modal logics.} on $X$ for any $x$. The function $\tau$ is called the \emph{neighbourhood function} of $\mathfrak X$, and elements of $\tau(x)$ are called \emph{neighbourhoods of $x$}.
\emph{A neighbourhood model (or n-model)} is a pair $(\mathfrak X, V)$, where $\mathfrak X = (X, \tau)$ is an n-frame and $V: PROP \to 2^X$ is a \emph{valuation}. We say that model $ (\mathfrak X, V) $ is based on $ \mathfrak X $. \end{definition}
\begin{definition}
The \emph{truth of a formula in a neighbourhood model} is defined by induction. For the variables and Boolean connectives the definition is the same as for Kripke model (Def. \ref{def:Kripke_model}). For modalities the definition is the following:
\[
M, x \models \Box A \iff \exists U \in \tau(x)\; \forall y \in U (M,y \models A).
\]
A formula is \emph{true in an n-model} $M$ if it is true at all points of $M$ (notation $M \models A$).
A formula is valid on an n-frame $\mathfrak X$ if it is true in all models based on $\mathfrak X$ (notation $\mathfrak X \models A$).
We write $\mathfrak X \models \logic{L}$ if for any $A \in \logic{L}$, $\mathfrak X\models A$.
We define the logic of a class of n-frames $\mathcal{C}$ as ${Log}(\mathcal{C}) = \setdef[A]{\mathfrak X \models A \hbox{ for all } \mathfrak X \in \mathcal{C}}$ and $ {Log}(\mathfrak X) = {Log}(\set{\mathfrak X}) $.
\end{definition}
Given a Kripke frame one can construct an equivalent n-frame: \begin{definition}
Let $F = (W, R)$ be a Kripke frame. Then $\mathcal{N}(F) = (W, \tau)$ is an n-frame, such that
\[
\tau (w) = \setdef[U]{ R(w) \subseteq U \subseteq W},
\]
where $ R(w) = \setdef[u]{wRu} $. \end{definition}
Frames $ F $ and $ \mathcal{N}(F) $ are equivalent in the following sense \begin{lem}\label{lem:n-frame_from_kframe}
Let $F = (W, R)$ be a Kripke frame. Then
\[
{Log}(\mathcal{N}(F)) = {Log}(F).
\] \end{lem}
The proof is straightforward (see \cite{Chellas1980}).
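Lemma \ref{lem:n-frame_from_kframe} can also be checked on a small finite frame. The sketch below (a hypothetical three-point frame) builds $\tau(w)$ as the family of supersets of $R(w)$, verifies that each $\tau(w)$ is a filter, and confirms that the neighbourhood clause for $\Box$ agrees with the Kripke clause at every point.

```python
from itertools import combinations

# Finite check of the lemma (a sketch, not a proof): on a small Kripke
# frame F = (W, R), the n-frame N(F) takes tau(w) = all supersets of R(w),
# and the neighbourhood []-clause agrees with the Kripke one at each point.
W = [0, 1, 2]
R = {(0, 1), (0, 2), (1, 2)}

def succ(w):
    return frozenset(v for v in W if (w, v) in R)   # R(w)

def subsets(S):
    return [frozenset(c) for r in range(len(S) + 1)
            for c in combinations(S, r)]

tau = {w: [U for U in subsets(W) if succ(w) <= U] for w in W}

# Each tau(w) is a filter: contains W, closed under finite intersection.
for w in W:
    assert frozenset(W) in tau[w]
    assert all(U & V in tau[w] for U in tau[w] for V in tau[w])

Vp = {2}  # valuation of a single letter p

def box_kripke(w):
    return succ(w) <= Vp                       # all successors satisfy p

def box_nbhd(w):
    return any(U <= Vp for U in tau[w])        # some neighbourhood in [[p]]

agree = [box_kripke(w) == box_nbhd(w) for w in W]
```

The two clauses coincide at every point, as the lemma asserts.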
This lemma shows that neighbourhood semantics is a generalization of Kripke semantics.
\begin{definition} \label{def:bounded_morphism}
Let $\mathfrak X = (X, \tau)$ and $\mathcal Y = (Y, \sigma)$ be neighbourhood frames. Then function $f: X \to Y$ is a \emph{p-morphism} (notation $f: \mathfrak X \twoheadrightarrow \mathcal Y$) if
\begin{enumerate}
\item $f$ is surjective;
\item \textbf{[zig]} for any $x\in X$ and $U \in \tau(x)$, we have $f(U) \in \sigma(f(x))$;
\item \textbf{[zag]} for any $x\in X$ and $V \in \sigma(f(x))$, we have $f^{-1}(V) \in \tau(x)$.
\end{enumerate} \end{definition}
\begin{lem}\label{lem:pmorhism4n-frame}
Let $\mathfrak X = (X, \tau)$, $\mathcal Y = (Y, \sigma)$ be n-frames and $f: \mathfrak X \twoheadrightarrow \mathcal Y$. Let $V'$ be a valuation on $\mathcal Y$. We define valuation $V(p) = f^{-1}(V'(p))$. Then
\[
\mathfrak X, V, x \models A \iff \mathcal Y, V', f(x) \models A.
\] \end{lem}
The proof is by induction on the length of formula $A$. The following is a straightforward corollary.
\begin{cor}\label{cor:pmorph}
If $f: \mathfrak X \twoheadrightarrow \mathcal Y$, then ${Log}(\mathfrak X) \subseteq {Log}(\mathcal Y)$. \end{cor}
\section{Predicate modal logic}
Following \cite{gabbay2009quantification} we define predicate modal formulas and logics as follows.
\begin{definition}
Let $ \mathop{Var} $ be a countably infinite set of \emph{(individual) variables} and $ PL^n = \setdef[P^n_i]{i\ge 0} $ be a fixed set of \emph{n-ary predicate letters} ($ n\ge 0 $). We call 0-ary predicate letters propositional letters.
Modal predicate \emph{formulas} are defined inductively as follows:
\begin{itemize}
\item $ \bot $ is a formula;
\item $ P^0_i $ is a formula;
\item if $ x_1, \ldots, x_k \in Var$ then $ P^k_i(x_1, \ldots, x_k) $ is a formula;
\item if $ A $ and $ B $ are formulas then $ (A \to B) $ is a formula;
\item if $ A $ is a formula then $ \Box A $ is a formula;
\item if $ A $ is a formula and $ x \in Var $ then $ \forall x A $ is a formula.
\end{itemize}
All other connectives $ \land, \lor, \lnot, \exists, \diamondsuit $ are expressed as usual.
The set of all modal predicate formulas is denoted by $ \mathcal{MF} $. \end{definition}
\begin{definition}
If $ \mathsf{L} $ is a modal logic then $ \mathsf{QL} $ is the minimal set of predicate modal formulas such that
\begin{itemize}
\item $ \mathsf{QL} $ includes all formulas from $ \mathsf{L} $ where propositional variables are replaced by corresponding propositional letters.
\item for a 1-ary predicate $ P $ and a propositional letter $ Q $, $ \mathsf{QL} $ includes formulas
\begin{itemize}
\item $ \forall x P(x) \to P(y) $,
\item $ \forall x (Q \to P(x)) \to (Q \to \forall x P(x)) $;
\item $\forall x ( P(x) \to Q ) \to (\exists x P(x) \to Q) $.
\end{itemize}
\item $\mathsf{QL}$ is closed under Modus Ponens $ \left ( \frac{A,\ A \to B}{B}\right ) $, necessitation $ \left ( \frac{A}{\Box A}\right ) $, universal generalization rules $ \left ( \frac{A}{\forall x A}\right ) $, and under $ \mathcal{MF} $-substitutions \footnote{The idea is that we avoid variable collisions. For the sake of simplicity one can think that we rename all bound variable before substitution. Also we will assume that two formulas are congruent if they are the same up to renaming the bound variables.} (a proper definition of these substitutions can be found in \cite{gabbay2009quantification}).
\end{itemize} \end{definition}
\begin{lem}[see \cite{gabbay2009quantification}]
For any modal logic $ \mathsf{L} $ the quantified modal logic $ \mathsf{QL} $ includes formula $ \Box \forall x P(x) \to \forall x \Box P(x)$ (the converse Barcan formula). \end{lem}
Note that $ \mathsf{QL} $ in general does not include the Barcan formula: $$\forall x \Box P(x) \to \Box \forall x P(x).$$
\section{Semantics for predicate modal logic}
\begin{definition}
A system of expanding domains over a Kripke frame $ F = (W, R) $ is a family of sets $ D = (D_u)_{u \in W} $ such that
\[
\forall u, v \in W (u R v \Rightarrow D_u \subseteq D_v).
\]
A \emph{predicate Kripke frame} (with expanding domains) is a pair $ \mathbb{F} = (F, D) $, where $ F $ is a Kripke frame and $ D $ is a \emph{system of expanding domains} over $ F $. \end{definition}
\begin{definition}
A \emph{subframe} of a predicate frame $ \mathbb{F} = (F, D) $ generated by $w \in F$ is a predicate frame $\mathbb{F}^w = (F^w, D') $ such that $ F^w = (W', R') $ is the subframe of $ F $ generated by $ w $ (see Def. \ref{def:Kripke_model}) and $ D' = D|_{W'} = \setdef[D_u]{u \in W'}$. \end{definition}
\begin{definition}
A \emph{valuation} $ \xi $ on a predicate frame $ \mathbb{F} $ is a function sending every predicate letter $ P^m_k $ to a family of m-ary predicates on the domains:
\[
\xi(P^m_k) = (\xi_u(P^m_k))_{u\in W}, \hbox{ where } \xi_u(P^m_k) \subseteq D_{u}^{m}.
\]
A frame $ \mathbb{F} $ with a valuation $\xi$ is \emph{a (predicate) model} $\mathbb{M} = (\mathbb{F}, \xi)$.
Note that for $ m=0 $, $ \xi_u(P^0_k) $ is a 0-ary predicate, so it is either true or false; it behaves like a propositional variable.
The truth of a closed formula (a formula without free variables) in a model $ \mathbb{M} $ at a point $ u\in W $ is defined by induction on the length of the formula; first we enrich our language with constants from the set $ \bigcup_{u\in W} D_u $:
\begin{align*}
\mathbb{M}, u &\not\models \bot; &\\
\mathbb{M}, u &\models P^0_i &\iff& \xi_u(P^0_i) \hbox{ is true};\\
\mathbb{M}, u &\models P^m_i(a_1, \ldots, a_m) &\iff& (a_1, \ldots, a_m) \in \xi_u(P^m_i);\\
\mathbb{M}, u &\models A \to B &\iff& \mathbb{M}, u \not\models A \hbox{ or } \mathbb{M}, u \models B;\\
\mathbb{M}, u &\models \Box A &\iff& \forall v\; \left (uR v \Rightarrow \mathbb{M}, v \models A\right );\\
\mathbb{M}, u &\models \forall x A(x)&\iff &\forall a \in D_u \left (\mathbb{M}, u \models A(a)\right ).
\end{align*} \end{definition}
Let $ A(x_1, \ldots, x_k) $ be a formula whose free variables are among $ x_1, \ldots, x_k $. Then the \emph{universal closure} of $ A(x_1, \ldots, x_k) $ is the closed formula \[ \bar\forall A = \forall x_1 \ldots \forall x_k\; A(x_1, \ldots, x_k). \]
A formula is \emph{true in a (Kripke) model} $\mathbb{M}$ if its universal closure is true at all points of $\mathbb{M}$ (notation $\mathbb{M} \models A$). A formula is \emph{valid on a (Kripke) frame} $\mathbb{F}$ if it is true in all models based on $\mathbb{F}$ (notation $\mathbb{F} \models A$). We write $\mathbb{F} \models \Gamma$ if, for any $A \in \Gamma$, $\mathbb{F}\models A$. The \emph{logic} of a class of Kripke frames $\mathcal{C}$ is ${\mathbf{ML}}(\mathcal{C}) = \setdef[A]{\mathbb{F} \models A \hbox{ for all } \mathbb{F} \in \mathcal{C}}$. Logic $ \mathsf{L} $ is \emph{complete} with respect to Kripke semantics with expanding domains if there is a class of Kripke frames with expanding domains $ \mathcal{C} $ such that $ {\mathbf{ML}}(\mathcal{C}) = \mathsf{L} $.
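The truth clauses above can be made concrete by a small evaluator. Below is a minimal sketch in Python for finite predicate Kripke models with expanding domains; the tuple encoding of formulas and all names are ours, not from the text. With an expanding domain it also witnesses the earlier remark that the Barcan formula need not belong to $\mathsf{QL}$.

```python
# A minimal evaluator for the truth clauses above, on a finite
# predicate Kripke model with expanding domains.  Formulas are nested
# tuples (this encoding is ours):
#   "bot", ("P", name, vars), ("->", A, B), ("box", A), ("all", x, A)

def holds(M, u, phi, env=None):
    env = env or {}
    if phi == "bot":
        return False
    op = phi[0]
    if op == "P":                      # atomic: look up the predicate at u
        return tuple(env[x] for x in phi[2]) in M["xi"](u, phi[1])
    if op == "->":
        return not holds(M, u, phi[1], env) or holds(M, u, phi[2], env)
    if op == "box":                    # check all R-successors of u
        return all(holds(M, v, phi[1], env)
                   for v in M["W"] if (u, v) in M["R"])
    if op == "all":                    # quantify over the domain D_u
        return all(holds(M, u, phi[2], {**env, phi[1]: a})
                   for a in M["D"][u])

# Two worlds 0 R 1 with expanding domains {a} ⊆ {a, b}; P holds of a
# (but not of b) at world 1.
M = {"W": {0, 1}, "R": {(0, 1)},
     "D": {0: {"a"}, 1: {"a", "b"}},
     "xi": lambda u, name: {("a",)} if u == 1 and name == "P" else set()}

barcan = ("->", ("all", "x", ("box", ("P", "P", ("x",)))),
                ("box", ("all", "x", ("P", "P", ("x",)))))
assert not holds(M, 0, barcan)  # the Barcan formula fails at world 0
```

Here $\forall x\, \Box P(x)$ holds at world $0$ (only $a$ lives there, and $P(a)$ holds at the unique successor), while $\Box \forall x\, P(x)$ fails because $b$ enters the domain at world $1$.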
\begin{thm}[{\cite[Theorem 6.1.29]{gabbay2009quantification}}]\label{thm:PTC_ncompleteness}
The quantification $ \logic{QL} $ of any one-way PTC-logic $ \logic{L} $ is complete with respect to Kripke semantics with expanding domains. \end{thm}
\begin{definition}
Let $ \mathbb{F} = (W, R, D)$ and $ \mathbb{F}' = (W', R', D')$ be Kripke frames with expanding domains, $ D= \bigcup_{w\in W} D_w $, $ D' = \bigcup_{w\in W'} D'_w $.
A \emph{p-morphism} from $ \mathbb{F}$ to $ \mathbb{F}' $ is a pair $ (\varphi_0, \varphi_1) $, such that:
\begin{enumerate}
\item $ \varphi_0 : (W, R) \twoheadrightarrow (W', R') $;
\item $ \varphi_1 = (\varphi_{1w})_{w\in W} $ is a family of surjective functions: $$ \varphi_{1w}:D_{w} \to D_{\varphi_0(w)};$$
\item if $ w R w'$ then $\forall d \in D_w \bigl(\varphi_{1w} (d) = \varphi_{1w'}(d)\bigr) $.
\end{enumerate}
Notation: $ (\varphi_0, \varphi_1): \mathbb{F} \twoheadrightarrow \mathbb{F}' $.
We write $ \mathbb{F} \twoheadrightarrow \mathbb{F}' $ if there exists a p-morphism from $ \mathbb{F} $ to $ \mathbb{F}' $. \end{definition}
\begin{lem}[\cite{gabbay2009quantification}, Prop. 3.3.11 \& 3.3.12]\label{lem:pmorphismlemma4kF2kF}
Let $ (\varphi_0, \varphi_1): \mathbb{F} \twoheadrightarrow \mathbb{F}' $ and $ \xi' $ be a valuation on $ \mathbb{F}' $. We define valuation $ \xi$ on $ \mathbb{F} $ in the following way
$$
(a_1, \ldots, a_m) \in \xi_u(P^m_k) \iff \left (\varphi_{1u}(a_1), \ldots, \varphi_{1u}(a_m)\right ) \in \xi'_{\varphi_0(u)}(P^m_k).
$$
Then for any $ u\in W $ and any formula $ A $
\[
\mathbb{F}, \xi, u \models \bar \forall A \iff \mathbb{F}', \xi', \varphi_0(u) \models \bar \forall A.
\] \end{lem}
\begin{definition}
A predicate neighbourhood frame with constant domain is a pair $ \mathbb{X} = (\mathfrak X, D^*) $, such that $ \mathfrak X = (X, \tau)$ is a neighbourhood frame and $ D^* $ is a nonempty set.
A \emph{valuation} $\theta $ on $ \mathbb{X} $ is a function sending every predicate letter $ P^m_k $ to a family of m-ary predicates on $ D^* $:
\[
\theta(P^m_k) = (\theta_x(P^m_k))_{x\in X}, \hbox{ where } \theta_x(P^m_k) \subseteq (D^*)^m.
\]
A \emph{neighbourhood model} on $ \mathbb{X} $ is a pair $ \mathbb{M} = (\mathbb{X}, \theta) $.
The truth of a closed formula in a model $ \mathbb{M} $ at a point $ x\in X $ is defined similarly to Kripke models, by induction on the length of the formula. We again enrich our language with constants from the set $D^*$.
\begin{align*}
\mathbb{M}, x &\not\models \bot; &\\
\mathbb{M}, x &\models P^0_i &\iff& \theta_x(P^0_i) \hbox{ is true};\\
\mathbb{M}, x &\models P^m_i(a_1, \ldots, a_m) &\iff& (a_1, \ldots, a_m) \in \theta_x(P^m_i);\\
\mathbb{M}, x &\models A \to B &\iff& \mathbb{M}, x \not\models A \hbox{ or } \mathbb{M}, x \models B;\\
\mathbb{M}, x &\models \Box A &\iff& \exists U \in \tau(x)\, \forall y \in U \left ( \mathbb{M}, y \models A\right );\\
\mathbb{M}, x &\models \forall v\, A(v)&\iff &\forall a \in D^* \left (\mathbb{M}, x \models A(a)\right ).
\end{align*} \end{definition}
\begin{definition} \label{def:pmorph4prednframe}
Let $ \mathbb{X} = (X, \tau, D^*)$ be a neighbourhood frame with constant domain and $ \mathbb{F} = (W, R, D)$ be a Kripke frame with expanding domains, $ D= \bigcup_{w\in W} D_w $.
A \emph{p-morphism} from $ \mathbb{X}$ to $ \mathbb{F} $ is a pair $ (\varphi_0, \varphi_1) $, such that:
\begin{enumerate}
\item \label{it:1:def:pmorph4prednframe} $ \varphi_0 : (X, \tau) \twoheadrightarrow \mathcal{N}(W, R) $;
\item \label{it:2:def:pmorph4prednframe} $ \varphi_1 = (\varphi_{1x})_{x\in X} $ is a family of surjective functions indexed by points from~$X$: $$ \varphi_{1x}:D^* \to D_{\varphi_0(x)};$$
\item \label{it:3:def:pmorph4prednframe} $ \forall d \in D^*\,\forall x \in X \,\exists U \in \tau(x)\, \forall y \in U \bigl(\varphi_{1y} (d) = \varphi_{1x}(d)\bigr) $.
\end{enumerate}
Notation: $ (\varphi_0, \varphi_1): \mathbb{X} \twoheadrightarrow \mathbb{F} $.
We write $ \mathbb{X} \twoheadrightarrow \mathbb{F} $ if there exists a p-morphism from $ \mathbb{X} $ to $ \mathbb{F} $. \end{definition}
\begin{lem}\label{lem:pmorphismlemma4nF2kF}
Let $ \mathbb{X} = (X, \tau, D^*)$ be a neighbourhood frame with constant domain, $ \mathbb{F} = (W, R, D)$ be a Kripke frame with expanding domains, $ (\varphi_0, \varphi_1): \mathbb{X} \twoheadrightarrow \mathbb{F} $ and $ \xi $ be a valuation on $ \mathbb{F} $. We define valuation $ \theta = (\theta_x)_{x \in X} $ on $ \mathbb{X} $ in the following way
$$
(a_1, \ldots, a_m) \in \theta_x(P^m_k) \iff \left (\varphi_{1x}(a_1), \ldots, \varphi_{1x}(a_m)\right ) \in \xi_{\varphi_0(x)}(P^m_k).
$$
Then for any $ x\in X $ and any formula $ A $
\[
\mathbb{X}, \theta, x \models \bar \forall A \iff \mathbb{F}, \xi, \varphi_0(x) \models \bar \forall A.
\] \end{lem} \begin{proof}
Let us assume that $A$ has no free variables but may contain constants from the set $ D^* $ (for models on $ \mathbb{X} $) and from the set $\bigcup D $ (for models on $ \mathbb{F} $). Assume that $ A $ contains only the constants $ a_1, \ldots, a_m $; to highlight this, we write $ A(a_1, \ldots a_m) $.
We prove the following statement by induction on the length of $ A $:
\[
\forall x\in X \bigl (\mathbb{X}, \theta, x \models A(a_1, \ldots a_m) \iff \mathbb{F}, \xi, \varphi_0(x) \models A(\varphi_{1x}(a_1), \ldots \varphi_{1x}(a_m)) \bigr )
\]
We consider only two cases. The other cases are straightforward.
\textbf{Case 1.} $ A = \Box B $.
Let $ \mathbb{X}, \theta, x \models \Box B $, then
$$\exists U \in \tau(x) \forall y \in U (\mathbb{X}, \theta, y \models B). $$
For each $ k \in \set{1, \ldots, m} $ there exists $ U_k \in \tau(x)$ such that $ \varphi_{1y}(a_k) = \varphi_{1x}(a_k)$ for any $ y \in U_k $.
We take $ U' = U \cap \bigcap_{k=1}^{m} U_k $. Then $ U' \in \tau(x)$, since $ \tau(x) $ is a filter. Note that $ B $ is true at all points in $ U' $ and, by the induction hypothesis, $ B(a_1, \ldots a_m) $ is true at all points in $ \varphi_0(U')$.
By the definition of p-morphism of n-frames, $ \varphi_0(U') $ is a neighbourhood of the point $ \varphi_0(x) $. Since $R(\varphi_0(x))$ is the minimal neighbourhood of $\varphi_0(x)$ in the frame $ \mathcal{N}(W, R) $, we have $R(\varphi_0(x)) \subseteq \varphi_0(U') $. Hence $ \mathbb{F}, \xi, u \models B(\varphi_{1y}(a_1), \ldots \varphi_{1y}(a_m)) $ for any $ u \in R(\varphi_0(x))$ and any $ y \in U'$ such that $ \varphi_0(y) = u $. Since $\varphi_{1y}(a_1) = \varphi_{1x}(a_1), \ldots, \varphi_{1y}(a_m) = \varphi_{1x}(a_m)$ for any $ y\in U' $, it follows that
$$
\mathbb{F}, \xi, \varphi_0(x) \models \Box B (\varphi_{1x}(a_1), \ldots \varphi_{1x}(a_m)).
$$
Now let $ \mathbb{F}, \xi, \varphi_0(x) \models \Box B (\varphi_{1x}(a_1), \ldots \varphi_{1x}(a_m))$, hence
$$
\mathbb{F}, \xi, u \models B (\varphi_{1x}(a_1), \ldots \varphi_{1x}(a_m))\hbox{ for all } u \in R(\varphi_0(x)).
$$
By the definition of p-morphism of n-frames $ U = \varphi_0^{-1}(R(\varphi_0(x))) \in \tau(x)$.
For each $ k \in \set{1, \ldots, m} $ there exists $ U_k \in \tau(x)$ such that $ \varphi_{1y}(a_k) = \varphi_{1x}(a_k) $ for any $ y \in U_k$. We take $ U' = U \cap \bigcap_{k=1}^{m} U_k \in \tau(x)$. Then $ \mathbb{F}, \xi, u \models B (\varphi_{1y}(a_1), \ldots \varphi_{1y}(a_m))$ for all $ u \in R(\varphi_0(x))$ and any $ y \in U'$ such that $ \varphi_0(y) = u $.
By the induction hypothesis
$$
\forall y \in U' \left (\mathbb{X}, \theta, y \models B(a_1, \ldots a_m)\right ),\hbox{ hence } \mathbb{X}, \theta, x \models \Box B (a_1, \ldots a_m).
$$
\textbf{Case 2.} $ A = \forall t B(t, a_1, \ldots a_m) $.
Let $ \mathbb{X}, \theta, x \nvDash \forall t B(t, a_1, \ldots a_m) $. Then there exists $ a\in D^* $ such that $ \mathbb{X}, \theta, x \nvDash B(a, a_1, \ldots a_m) $. By the induction hypothesis
\[
\mathbb{F}, \xi, \varphi_0(x) \nvDash B(\varphi_{1x}(a), \varphi_{1x}(a_1), \ldots \varphi_{1x}(a_m)), \]
hence $ \mathbb{F}, \xi, \varphi_0(x) \nvDash \forall t B(t, \varphi_{1x}(a_1), \ldots \varphi_{1x}(a_m)) $.
Let $ \mathbb{F}, \xi, \varphi_0(x) \nvDash \forall t B(t, \varphi_{1x}(a_1), \ldots \varphi_{1x}(a_m)) $. Then there exists $ b\in D_{\varphi_0(x)} $ such that $ \mathbb{F}, \xi, \varphi_0(x) \nvDash B(b, \varphi_{1x}(a_1), \ldots \varphi_{1x}(a_m)) $. Since $ \varphi_{1x} $ is surjective, there exists $ a \in D^* $ such that $ \varphi_{1x}(a) = b $; then by the induction hypothesis $ \mathbb{X}, \theta, x \nvDash B(a, a_1, \ldots a_m) $, hence $ \mathbb{X}, \theta, x \nvDash \forall t B(t, a_1, \ldots a_m) $. \end{proof}
The following lemma states that the composition of two $ p $-morphisms is a $ p $-morphism. \begin{lem}\label{lem:pmorphism_composition}
Let $ \mathbb{X} $ be an n-frame with constant domain, $ \mathbb{F}_1 $ and $ \mathbb{F}_2 $ be two Kripke frames with expanding domains.
If $ (\varphi_0, \varphi_1):\mathbb{X} \twoheadrightarrow \mathbb{F}_1 $ and $ (\psi_0, \psi_1):\mathbb{F}_1 \twoheadrightarrow \mathbb{F}_2 $ then
$$ (\psi_0 \circ \varphi_0, \eta):\mathbb{X} \twoheadrightarrow \mathbb{F}_2 ,$$
where $ \eta = (\eta_x)_{x\in X} $, $ \eta_x(d) = \psi_{1\varphi_0(x)}(\varphi_{1x}(d))$ for $ d\in D^* $. \end{lem} The proof is straightforward.
\section{Main construction}
The following construction was introduced in \cite{kudinov_aiml12,kudinov2014neighbourhood}.
Starting from a Kripke frame $ F $, we will construct a neighbourhood frame in which no point has a smallest neighbourhood. This resembles density in topology: a topological space is dense-in-itself if it has no isolated points.
\begin{definition}
Let $ \Sigma $ be a non-empty set (\emph{alphabet}). A finite sequence of elements from $ \Sigma $ is a \emph{word}; by $\varepsilon$ we denote the empty word.
Let $\Sigma^*$ be the set of all words. We will write words without brackets or commas, e.g.{} $ a_1 a_2 \ldots a_n \in \Sigma^* $. The length of a word is the number of elements in it:
\[
\mathop{len}(a_1 a_2 \ldots a_n) = n, \quad \mathop{len}(\varepsilon)=0.
\]
The \emph{concatenation} of words is defined as follows:
\[
a_1 a_2 \ldots a_n \cdot b_1 b_2 \ldots b_m = a_1 a_2 \ldots a_n b_1 b_2 \ldots b_m
\] \end{definition}
\begin{definition}
For a frame $F=(W, R)$ with root $a_0$
we define \emph{a (rooted) path with stops} as a word $a_1 \ldots a_n$ in the alphabet $W \cup \set{0}$ such that, after dropping the zeros, the first letter is an $R$-successor of the root and each letter is related to the next one by the relation $R$.
The empty word $\varepsilon$ is allowed.
Any path with stops is a word of the following type
\begin{align*}
&0^{i_0} b_1 0^{i_1} \ldots 0^{i_{m-1}} b_m 0^{i_{m}},\ \hbox{where } b_j \in W,\ i_j \ge 0,\ 0^i = \underbrace{00\ldots 0}_{\hbox{\small $i$ times}};\\
\hbox{and } &f_0(0^{i_0} b_1 0^{i_1} \ldots b_m 0^{i_{m}}) = a_0 R b_1 R \ldots R b_m \in W^{\sharp},\\
&f_0(\varepsilon) = f_0(0^{i_0}) = a_0 \in W^{\sharp},
\end{align*}
where $ W^{\sharp} $ is the set of all rooted paths in $ F $ (Definition \ref{def:unraveling}).
Let us consider some examples.
\begin{itemize}
\item $f_0(\varepsilon) = a_0$.
\item If the root $ a_0 $ is reflexive, then $a_0$, $ a_0 000a_0 $, $ a_0a_0a_0 $ are examples of paths with stops, and $f_0(a_0) = a_0 R a_0$.
\item In frame $F = (W,R)$, where $W=\set{a_0,b}$, $R=\set{(a_0,b)}$ the following words are paths with stops: $\varepsilon$, $000$, $00b$, $00b00$. In general, any path with stops in $ F $ equals $ 0^k b 0^m $ or $ 0^k $ for some $ k,m \ge 0 $.
\item In frame $F' = (W,R')$, where $W=\set{a_0,b}$, $R'=\set{(a_0,b), (b, a_0)}$ the following words are paths with stops: $\varepsilon$, $000$, $00b$, $00b00a_0$, $ b a_0 b a_0 $. But $ a_0 $ is not a path with stops.
\end{itemize}
We also consider infinite paths with stops that end with infinitely many zeros. We call these sequences \emph{pseudo-infinite paths (with stops)}.
A pseudo-infinite path can be presented uniquely in the following way:
\[
\alpha = 0^{i_1} b_1 0^{i_2} \ldots 0^{i_m} b_m 0^{\omega}, \quad \hbox{where } b_j \in W,\ i_j \ge 0.
\]
Let $W_\omega$ be the set of all pseudo-infinite paths in $W$. \end{definition}
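The finite examples above can be checked mechanically. The following Python sketch (the function name and the encoding of stops as the integer $0$ are ours) tests whether a finite word over $W \cup \set{0}$ is a path with stops: drop the zeros and verify that the remaining letters form a path starting at an $R$-successor of the root.

```python
# Hypothetical helper illustrating the definition of paths with stops:
# a word over W ∪ {0} is a path with stops iff, after dropping the
# zeros, the first letter is an R-successor of the root and
# consecutive letters are R-related.  Stops are encoded as 0.

def is_path_with_stops(word, R, root):
    """R is a set of pairs; word is a tuple over W ∪ {0}."""
    letters = [a for a in word if a != 0]   # drop the stops
    prev = root
    for a in letters:
        if (prev, a) not in R:
            return False
        prev = a
    return True

# Frame F: W = {a0, b}, R = {(a0, b)}  (third example in the text)
R = {("a0", "b")}
assert is_path_with_stops((), R, "a0")                 # ε
assert is_path_with_stops((0, 0, 0), R, "a0")          # 000
assert is_path_with_stops((0, 0, "b"), R, "a0")        # 00b
assert is_path_with_stops((0, 0, "b", 0, 0), R, "a0")  # 00b00

# Frame F': R' = {(a0, b), (b, a0)}  (fourth example)
R2 = {("a0", "b"), ("b", "a0")}
assert is_path_with_stops(("b", "a0", "b", "a0"), R2, "a0")
assert not is_path_with_stops(("a0",), R2, "a0")       # a0 is not one
```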
In the following we define function $f_0:W_\omega \to W^\sharp$. For a pseudo-infinite path $\alpha = a_1 \ldots a_n \ldots $ we put \begin{align*}
\mathop{st}(\alpha) &= \min \setdef[m]{\forall k > m (a_k = 0)};\\
\alpha|_k &= a_1 \ldots a_k;\ \ \alpha|_0 = \varepsilon; \\
f_0(\alpha) &= f_0(\alpha|_{\mathop{st}(\alpha)}),\ \hbox{i.e., } f_0(0^{i_1} b_1 0^{i_2} \ldots 0^{i_m} b_m 0^{\omega}) = a_0 R b_1 R \ldots R b_m. \end{align*}
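Since a pseudo-infinite path has only finitely many nonzero letters, it can be encoded by its prefix before the tail $0^\omega$. The sketch below (our encoding, not from the text) computes $\mathop{st}$, the restriction $\alpha|_k$, and $f_0$; here $f_0$ returns only the word of nonzero letters, with the root $a_0$ and the relation symbols left implicit.

```python
# Finite-prefix encoding of pseudo-infinite paths: we store the word up
# to the last nonzero letter; the tail 0^ω is implicit.

def st(alpha):
    """Index of the last nonzero letter (0 for the all-zero path)."""
    return max((i + 1 for i, a in enumerate(alpha) if a != 0), default=0)

def restrict(alpha, k):
    """alpha|_k, padding with stops if k exceeds the stored prefix."""
    return tuple(alpha[:k]) + (0,) * max(0, k - len(alpha))

def f0(alpha):
    """The underlying rooted path: the nonzero letters (root implicit)."""
    return tuple(a for a in alpha if a != 0)

alpha = (0, "b1", 0, 0, "b2")      # the path 0 b1 0 0 b2 0^ω
assert st(alpha) == 5
assert restrict(alpha, 7) == (0, "b1", 0, 0, "b2", 0, 0)
assert f0(alpha) == ("b1", "b2")
```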
In order to introduce a neighbourhood function on $W_{\omega}$, for $ k\ge 0 $, we define \[
U_k(\alpha) = \Setdef[\beta \in W_\omega]{\alpha |_m = \beta |_m \ \&\ f_0(\alpha) R^\sharp f_0(\beta), \ m = \max(k, \mathop{st}(\alpha))}. \]
\begin{lem}\label{lem:U_m-base}
$U_k (\alpha) \subseteq U_m (\alpha)$ whenever $k \ge m$. \end{lem} \begin{proof}
Let $\beta \in U_k (\alpha)$. Since $\alpha |_k = \beta |_k$ and $k \ge m$, $\alpha |_m = \beta |_m$. Hence, $\beta \in U_m (\alpha)$. \end{proof}
\begin{definition}\label{def:n-frame_from_fframe}
Due to Lemma \ref{lem:U_m-base}, the sets $\set{U_n(\alpha)}_{n\in \mathbb N}$ form a filter base. So we can define
\begin{align*}
\tau (\alpha) &\hbox{ is the filter with base } \setdef[U_n(\alpha)]{n \in \mathbb N};\\
\mathcal{N_\omega}(F) &= (W_\omega,\tau) \hbox{ is \emph{a dense n-frame based on} $F$.}
\end{align*} \end{definition}
Unlike $\mathcal{N}(F)$, in the frame $\mathcal{N_\omega}(F)$ no point has a minimal neighbourhood. Indeed, \begin{equation} \label{eq:density} \bigcap\limits_{n\in \mathbb N} U_n(\alpha) = \varnothing \not\in \tau(\alpha). \end{equation}
To prove (\ref{eq:density}), note that any $ \beta \in U_m(\alpha) $ has the form $ \beta = \alpha|_k\, b'\, 0^\omega $ for some $ k\ge m $ and $ b' \in W $; but then $ \beta \notin U_{k+1}(\alpha) $.
\begin{lem}
Equality $ f_0(U_k(\alpha)) = R^{\sharp}(f_0(\alpha)) $ holds for any $ \alpha $ and $ k $. \end{lem}
\begin{proof}
For $ k<\mathop{st}(\alpha) $ we have $ U_k(\alpha) = U_{\mathop{st}(\alpha)}(\alpha) $, hence we can assume that $ k \ge \mathop{st}(\alpha) $.
Let $ x \in f_0(U_k(\alpha)) $. Then
$$
\exists \beta (\alpha |_k = \beta |_k \ \&\ f_0(\alpha) R^\sharp f_0(\beta) = x),$$
so $ x\in R^{\sharp}(f_0(\alpha)) $.
Let $ x \in R^{\sharp}(f_0(\alpha)) $, then
$x = f_0(\alpha) R b'$ for some $ b' $.
Consider
\[
\beta = \alpha|_{\mathop{st}(\alpha)} \cdot 0^{\,k-\mathop{st}(\alpha)} \cdot b' \cdot 0^\omega.
\]
It is easy to show that $ \beta \in U_k(\alpha)$ and $ f_0(\beta) = x $. \end{proof}
\begin{lem}\label{lem:bmorphism_wframe2nframe}
Let $F = (W, R)$ be a Kripke frame with root $a_0$, then
\[
f_0 : \mathcal{N_\omega}(F) \twoheadrightarrow \mathcal{N}(F^\sharp).
\] \end{lem}
\begin{proof}
For any rooted path $ \vec b = a_0 R a_1 R\ldots R a_{n-1} R b \in W^\sharp$, the pseudo-infinite path $\alpha = a_1\ldots a_{n-1} b\, 0^\omega \in W_\omega$ satisfies $f_0(\alpha) = \vec b$. So $f_0$ is surjective.
Let us prove the zig property. Assume that $\alpha\in W_\omega$ and $U \in \tau(\alpha)$. We need to prove that $R^\sharp(f_0(\alpha)) \subseteq f_0(U)$.
There exists $m$ such that $U_m(\alpha) \subseteq U$, and since $f_0(U_m(\alpha)) = R^\sharp(f_0(\alpha))$, we have
\[
R^\sharp(f_0(\alpha)) = f_0(U_m(\alpha)) \subseteq f_0(U).
\]
Let us prove the zag property. Assume that $\alpha\in W_\omega$ and $V$ is a neighbourhood of $f_0(\alpha)$, i.e.{} $R^\sharp(f_0(\alpha)) \subseteq V$. We need to prove that there exists $U \in \tau(\alpha)$, such that $f_0(U) \subseteq V$. As $U$ we take $U_m(\alpha)$ for some $m \ge st(\alpha)$, then
\[
f_0(U_m(\alpha))=R^\sharp(f_0(\alpha)) \subseteq V.
\] \end{proof}
\begin{cor}\label{cor:logicNf}
For any frame $F$, ${Log}(\mathcal{N_\omega}(F)) \subseteq {Log}(F)$. \end{cor}
\begin{proof}
It follows from Lemmas \ref{lem:n-frame_from_kframe}, \ref{lem:bmorphism_wframe2nframe}, \ref{lem:unrev_pmorph}, and Corollary \ref{cor:pmorph} that
\[
{Log}(\mathcal{N_\omega}(F)) \subseteq {Log}(\mathcal{N}(F^\sharp)) = {Log}(F^\sharp) \subseteq {Log}(F).
\] \end{proof}
Let us remark that it is possible that $ {Log}(\mathcal{N_\omega}(F)) \ne {Log}(F) $. For example, consider the natural numbers with the ``next'' relation. It is convenient here to regard a number as a word in a one-letter alphabet: \[ G = (\set{1}^* , S),\ 1^n S 1^m \iff m=n+1. \]
Obviously $ G \models \diamondsuit p \to \Box p $.
Since in $ G $ every point except the root has exactly one predecessor, we can identify a point with the path from the root to this point, i.e.,\ $ G^\sharp = G $. Therefore, points in $\mathcal{N_\omega}(G)$ can be presented as infinite sequences of $0$s and $1$s that end with $0^\omega$. \begin{prop}
$ \mathcal{N_\omega}(G) \nvDash \diamondsuit p \to \Box p $ \end{prop} \begin{proof}
Consider the valuation $ V(p) = \setdef[0^{2n}10^{\omega}]{n \in \mathbb N} $. Every neighbourhood of the point $ 0^\omega $ contains points where $ p $ is true and points where $ p $ is false.
For example, if $ k $ is even, then in $U_k(0^{\omega})$ there is a point $ 0^k 1 0^{\omega} $ where $ p $ is true, and a point $ 0^{k+1} 1 0^{\omega} $ where $ p $ is false.
Hence,
\[
\mathcal{N_\omega}(G), V, 0^\omega \models \diamondsuit p \land \diamondsuit\lnot p,
\]
and so $ \diamondsuit p \to \Box p $ fails at the point $ 0^\omega $. \end{proof}
\begin{definition}
Let $F_1 = (W_1, R_1)$ and $F_2= (W_2, R_2)$ be two Kripke frames with roots $x_0$ and $y_0$ respectively; we assume that $ W_1 $ and $ W_2 $ are disjoint. Let $ \Sigma=W_1 \cup W_2 $. We define functions $ p_1: \Sigma^* \to W_1^* $, $ p_2: \Sigma^* \to W_2^* $ and $ \pi: \Sigma^*\setminus \set{\varepsilon} \to \Sigma$ by induction:
\begin{equation*}
\begin{array}{rll}
p_1 (\varepsilon) &= x_0, \\
p_2 (\varepsilon) &= y_0, \\
\pi (u) &= u,& \hbox{ for } u \in \Sigma,\\
p_1 (\vec a u) &= p_1(\vec a)\cdot u,&\hbox{ for } \vec a \in \Sigma^*,\ u \in W_1, \\
p_1 (\vec a u) &= p_1(\vec a),&\hbox{ for } \vec a \in \Sigma^*,\ u \in W_2,\\
p_2 (\vec a u) &= p_2(\vec a),&\hbox{ for } \vec a \in \Sigma^*,\ u \in W_1, \\
p_2 (\vec a u) &= p_2(\vec a)\cdot u,&\hbox{ for } \vec a \in \Sigma^*,\ u \in W_2,\\
\pi(\vec a u) &= u, &\hbox{ for } \vec a \in \Sigma^*,\ u \in \Sigma.
\end{array}
\end{equation*} \end{definition} Since $ F_1$ and $ F_2$ are Kripke frames with a single relation each, we may write paths in them without the relation symbols; this will not lead to confusion: \begin{align*}
W_1^\sharp &= \setdef[x_0 x_1\ldots x_n]{x_0 R_1 x_1 R_1\ldots R_1 x_n \hbox{ --- a path in the usual sense}},\\
W_2^\sharp &= \setdef[y_0 y_1 \ldots y_n]{y_0 R_2 y_1 R_2 \ldots R_2 y_n \hbox{ --- a path in the usual sense}}. \end{align*}
We define the \emph{entanglement} of $ F_1 $ and $ F_2 $ as follows: \begin{align*}
F_1 \mathop{\taurus} F_2 &= \setdef[\vec{x} \in \Sigma^*]{ p_1(\vec{x}) \in W_1^{\sharp} \hbox{ and } p_2(\vec{x}) \in W_2^{\sharp}},\\
\vec a \mathop{\taurus} F_2 &= \setdef[\vec{x} \in F_1 \taurus F_2]{ p_1(\vec{x}) = \vec a }, \hbox{ for } \vec a \in W^\sharp_1. \end{align*}
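The projections and the entanglement admit a direct computational reading. The following Python sketch (our encoding; alphabets are assumed disjoint, as in the definition) computes $p_i$ and tests membership in $F_1 \mathop{\taurus} F_2$.

```python
# Illustration of the projections p1, p2 and the entanglement F1 ⍦ F2.
# A word over Σ = W1 ∪ W2 is a tuple; W1 and W2 are disjoint.
# p_i keeps the letters of word lying in W_i and prepends the root,
# matching the inductive clauses in the definition.

def project(word, Wi, root):
    """p_i(word): the root followed by the letters of word in Wi."""
    return (root,) + tuple(u for u in word if u in Wi)

def is_rooted_path(seq, R):
    """seq = (root, u1, ..., un) is a path iff consecutive letters are R-related."""
    return all((seq[k], seq[k + 1]) in R for k in range(len(seq) - 1))

def in_entanglement(word, W1, R1, x0, W2, R2, y0):
    """word ∈ F1 ⍦ F2 iff both projections are rooted paths."""
    return (is_rooted_path(project(word, W1, x0), R1)
            and is_rooted_path(project(word, W2, y0), R2))

# F1: a two-point chain x0 -> x1;  F2: a single reflexive point y0.
W1, R1, x0 = {"x0", "x1"}, {("x0", "x1")}, "x0"
W2, R2, y0 = {"y0"}, {("y0", "y0")}, "y0"

assert project(("y0", "x1", "y0"), W1, x0) == ("x0", "x1")
assert in_entanglement(("y0", "x1", "y0"), W1, R1, x0, W2, R2, y0)
assert not in_entanglement(("x1", "x1"), W1, R1, x0, W2, R2, y0)
```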
A universal quantifier behaves much like an $ \logic{S5} $ modality, so we use entanglement with an $\logic{S5}$-frame to construct a predicate frame.
Let $ F = (W, R) $ be a rooted propositional Kripke frame and let $ G = (\mathbb R_{\emptyset}, \mathbb R_{\emptyset}\times \mathbb R_{\emptyset}) $ be the continuum $ \logic{S5} $-frame, where $ \mathbb R_{\emptyset} = \mathbb R \setminus \set{0} $ and $ \mathbb R $ is the set of all real numbers. As the underlying frame we take $ F^\sharp $, and for a path $ \vec a \in W^\sharp$ we take the set $ D'_{\vec a} = \vec a \mathop{\taurus} G$. But this cannot be our family of domains, since for any $ \vec a \ne \vec b $ we have $ D'_{\vec a} \cap D'_{\vec b} = \varnothing$, whereas we need expanding domains. Let us define the following equivalence relation on $ F \mathop{\taurus} G $. For $ \vec x, \vec y \in F \mathop{\taurus} G$ \[ \vec x \sim \vec y \iff \exists \vec t \in F \mathop{\taurus} G \;\exists \vec c, \vec d \in W^* (\vec x = \vec t \cdot \vec c \hbox{ and } \vec y = \vec t \cdot \vec d ). \] It is easy to check that $ \sim $ is reflexive and symmetric. Let us check transitivity. \begin{align*}
\vec x \sim \vec y \ \&\ \vec y \sim \vec z &\Rightarrow \\
\exists \vec t_1 \in F \mathop{\taurus} G \;\exists \vec c_1, \vec d_1 \in W^* &(\vec x = \vec t_1 \cdot \vec c_1 \ \& \ \vec y = \vec t_1 \cdot \vec d_1 ) \ \&\\
\exists \vec t_2 \in F \mathop{\taurus} G \;\exists \vec c_2, \vec d_2 \in W^* &(\vec y = \vec t_2 \cdot \vec c_2 \ \& \ \vec z = \vec t_2 \cdot \vec d_2 ) \end{align*} Since $ \vec y = \vec t_1 \cdot \vec d_1 = \vec t_2 \cdot \vec c_2 $, there exists $ \vec e \in W^* $ such that $ \vec y = \vec t_1 \cdot \vec e \cdot \vec c_2$ or $ \vec y = \vec t_2 \cdot \vec e \cdot \vec d_1$.
If $ \vec y = \vec t_1 \cdot \vec e \cdot \vec c_2$ then $ \vec z = \vec t_1 \cdot \vec e \cdot \vec d_2 $ and $ \vec x \sim \vec z $.
If $ \vec y = \vec t_2 \cdot \vec e \cdot \vec d_1$ then $ \vec x = \vec t_2 \cdot \vec e \cdot \vec c_1 $ and $ \vec x \sim \vec z $.
Therefore, $ \sim $ is an equivalence relation.
Let $ [\vec x] $ be the equivalence class of $ \vec x $. We define \begin{align}
D^+ &= F \mathop{\taurus} G / {\sim} = \setdef[{[\vec x]}]{\vec x \in F \mathop{\taurus} G};\\
D^\sharp_{\vec a} &= \setdef[{[\vec x]}]{\vec x \in D'_{\vec a}}. \label{eq:Dsharpvec} \end{align}
\begin{lem}
For $ \vec a $ and $ \vec b $ from $ F^\sharp $, if $ \vec a R^{\sharp} \vec b $ then $ D^\sharp_{\vec a} \subseteq D^\sharp_{\vec b} $. \end{lem}
\begin{proof}
Since $ \vec a R^{\sharp} \vec b $, there exists $ \vec c \in W^* $ such that $ \vec a \cdot \vec c = \vec b $.
Let $ [\vec x] \in D^\sharp_{\vec a} $. Then there exists $ \vec y \in D'_{\vec a} $ such that $ \vec x \sim \vec y $, and hence $ \vec y \cdot \vec c \in D'_{\vec b} $.
But $ \vec y \cdot \vec c \sim \vec y \sim \vec x$, hence $ [\vec x ]\in D^\sharp_{\vec b}$. \end{proof}
Let $ \mathbb{F} = (F, D) $ be a predicate Kripke frame, where $ F = (W, R) $ is a Kripke frame and the cardinality of each $ D_u $ ($ u\in W $) is at most continuum.
We take $ \mathbb{F}^\sharp = (F^\sharp, D^\sharp) $, where $ D^\sharp = (D^\sharp_{\vec a})_{\vec a \in W^\sharp} $ and $ D^\sharp_{\vec a} $ is from (\ref{eq:Dsharpvec}).
Note that $ \pi $ restricted to $ F^\sharp $ is a p-morphism from $ F^\sharp $ to $ F $ (cf.{} Lemma \ref{lem:unrev_pmorph}).
Let us define $ \psi = (\psi_{\vec a})_{\vec a \in W^\sharp} $ by induction: \begin{enumerate}
\item Let $ \psi_{\varepsilon} $ be an arbitrary surjective map from $ D^\sharp_{\varepsilon} $ to $ D_{a_0} $, where $ a_0 $ is the root of $ F $. Such a map exists since the cardinality of $ D^\sharp_{\varepsilon} $ is continuum.
\item Assume that $ \psi_{\vec a} $ is already defined and $ \vec b = \vec a u $.
Note that the cardinality of the set $ D^\sharp_{\vec b} \setminus D^\sharp_{\vec a} $ is continuum, so there is a surjective map $ \eta $ from $ D^\sharp_{\vec b} \setminus D^\sharp_{\vec a} $ to $ \left (D_{\pi(\vec b)} \setminus D_{\pi(\vec a)}\right ) \cup \set{[\varepsilon]}$.
We need to add $ \set{[\varepsilon]} $ since it is possible that $ D_{\pi(\vec b)} = D_{\pi(\vec a)} $. We put
\[
\psi_{\vec b}([\vec x]) = \begin{cases}
\psi_{\vec a}([\vec x]),\hbox{ if } [\vec x] \in {D^\sharp_{\vec a}};\\
\eta([\vec x]), \hbox{ otherwise.}
\end{cases}
\] \end{enumerate}
\begin{lem}\label{lem:pmorphism:unrevelpF_to_pF}
If frame $ F $ is a tree, i.e.{} there is a unique path from the root to any other point, then $ (\pi, \psi):\mathbb{F}^\sharp \twoheadrightarrow \mathbb{F} $. \end{lem}
To prove this one checks all items of the definition of p-morphism; this is an easy exercise.
Now we define our predicate neighbourhood frame. \begin{align*}
D^* &= \mathcal{N_\omega} (G);\\
\mathfrak X &= \mathcal{N_\omega} (F);\\
\mathbb{X} &= (\mathfrak X, D^*). \end{align*}
By Lemma \ref{lem:bmorphism_wframe2nframe} $ f_0: \mathfrak X \twoheadrightarrow \mathcal{N}(F^\sharp) $.
Let us define functions $ \xi_\alpha:D^* \to D^\sharp_{f_0(\alpha)} $ for each $ \alpha \in \mathfrak X $. Function $ \xi_\alpha $ applied to $ \gamma \in D^* $ replaces zeros in $ \alpha $ with letters from $ \gamma $. To define it properly, we first define a function $ h:W_\omega \times \mathbb R_\omega \to (W \cup \mathbb R_{\emptyset})^* $, where $ \mathbb R_\omega $ denotes the set of pseudo-infinite paths over $ \mathbb R_\emptyset $, by induction on $ \mathop{st}(\alpha) + \mathop{st}(\gamma)$, for $ \alpha \in W_\omega $ and $\gamma \in \mathbb R_\omega$: \begin{description}
\item[Base.] $ \mathop{st}(\alpha) = 0 $: $ \alpha = 0^\omega $, $ h(0^\omega, \gamma) = f_0(\gamma)$;
\item[Step.] Assume that $ \alpha = a_1 \beta$ and $ \gamma = c_1\delta $. Then
\[
h(a_1 \beta, c_1\delta) = \begin{cases}
c_1 \cdot h(a_1\beta, \delta), &\hbox{if } c_1 \ne 0;\\
a_1 \cdot h(\beta, \delta), &\hbox{if } c_1 = 0 \hbox{ and } a_1 \in W;\\
h(\beta, \delta), &\hbox{if } c_1 = a_1 = 0.
\end{cases}
\] \end{description}
For example \begin{align*}
h(a0b00c0^\omega, &\;10340^\omega)= 1\cdot h(a0b00c0^\omega, 0340^\omega) = 1a\cdot h(0b00c0^\omega, 340^\omega) \\
&= 1a3\cdot h(0b00c0^\omega, 40^\omega)=1a34\cdot h(0b00c0^\omega, 0^\omega)=\ldots=1a34bc. \end{align*}
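On the finite-prefix encoding (the tail $0^\omega$ left implicit) the computation of $h$ can be sketched in Python as follows. This is our reading of the case distinction, and it reproduces the worked computation above: a nonzero letter of $\gamma$ is emitted first, a $W$-letter of $\alpha$ is emitted while a stop of $\gamma$ is consumed, and two stops cancel each other.

```python
# A sketch of the merge function h on finite prefixes: W-letters are
# strings, letters of R∅ are nonzero numbers, a stop is 0; an
# exhausted list stands for the tail 0^ω.

def h(alpha, gamma):
    """alpha: prefix over W ∪ {0}; gamma: prefix over R∅ ∪ {0}."""
    a, g = list(alpha), list(gamma)
    out = []
    while a or g:
        if g and g[0] != 0:            # c1 ≠ 0: emit c1, consume γ only
            out.append(g.pop(0))
        elif a and a[0] != 0:          # c1 = 0, a1 ∈ W: emit a1, consume both
            out.append(a.pop(0))
            if g:
                g.pop(0)
        else:                          # a1 = c1 = 0: the stops cancel
            if a:
                a.pop(0)
            if g:
                g.pop(0)
    return out

# The worked example from the text: h(a0b00c0^ω, 10340^ω) = 1a34bc
assert h(["a", 0, "b", 0, 0, "c"], [1, 0, 3, 4]) == [1, "a", 3, 4, "b", "c"]
```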
Now we can define $ \xi_\alpha $: \[ \xi_\alpha(\gamma) = [h(\alpha, \gamma)]. \]
\begin{lem}\label{lem:X-F_pred_p-morphism}
The pair of functions $(f_0, \xi) $, where $ \xi = (\xi_\alpha)_{\alpha\in \mathfrak X} $, is a p-morphism from $ \mathbb{X} $ to $ \mathbb{F}^\sharp $. \end{lem}
\begin{proof}
In the following we check all the items of Definition \ref{def:pmorph4prednframe}.
Item (\ref{it:1:def:pmorph4prednframe}) follows from Lemma \ref{lem:bmorphism_wframe2nframe}.
Item (\ref{it:2:def:pmorph4prednframe}). We fix $ \alpha\in \mathfrak X $ and take $ [\vec x] \in D^\sharp_{f_0(\alpha)} = D'_{f_0(\alpha)} / {\sim}$. Consider $ \vec y \in D'_{f_0(\alpha)} = f_0(\alpha) \taurus G$ such that $ \vec y \in [\vec x] $. We should insert into the sequence $ \alpha $ numbers from $ \mathbb R_\emptyset $ so that $ f_0(\alpha') = \vec y $, where $ \alpha' $ is the resulting sequence. After that we replace all symbols from $ W $ in $ \alpha' $ by zeros and get $ \gamma \in \mathbb R_\omega $. Then $ \xi_{\alpha}(\gamma) = [\vec y] = [\vec x]$.
To show that it is always possible, we define function $ t $ by induction on $ st(\alpha) + \mathop{len}(\vec y)$: \begin{align*}
t(0^\omega, \vec{y}) &= \vec{y}\cdot 0^\omega\\
t(a_1\beta, c_1 \vec{z}) &= \begin{cases}
a_1\cdot t(\beta, c_1 \vec{z}), &\hbox{ if } a_1 = 0 \hbox{ and } c_1 \in W;\\
a_1\cdot t(\beta, \vec{z}), &\hbox{ if } a_1, c_1 \in W;\\
c_1 \cdot t(a_1 \beta, \vec{z}), &\hbox{ if } c_1 \in \mathbb R_\emptyset.
\end{cases} \end{align*}
We illustrate this by an example:
\begin{align*}
\alpha &= ab00c0^\omega & f_0(\alpha) &= abc \\
\vec{y} &=a12bc3 & \alpha' &= t(\alpha, \vec y) = a12b00c30^\omega
\end{align*} Indeed,
\begin{align*}
&t(ab00c0^\omega, a12bc3) = a\cdot t(b00c0^\omega, 12bc3) = a1\cdot t(b00c0^\omega, 2bc3) = \\
=\ &a12\cdot t(b00c0^\omega, bc3) = a12b\cdot t(00c0^\omega, c3) = a12b0\cdot t(0c0^\omega, c3) = \\
=\ &a12b00\cdot t(c0^\omega, c3) = a12b00c\cdot t(0^\omega, 3) = a12b00c30^\omega\\
\gamma =\ &012000030^\omega \qquad \xi_\alpha(\gamma) = [h(\alpha, \gamma)] = [a12bc3]
\end{align*}
Let us show that $ f_0(t(\alpha, \vec y)) = \vec y $. For the base of induction we have $$ f_0(t(0^\omega, \vec{y})) = f_0(\vec{y}\cdot 0^\omega) = \vec{y}.$$ Let $ \alpha = a_1\beta $ and $ \vec{y} = c_1\vec{z} $ \begin{itemize}
\item [(case 1)] $ a_1 = 0 $ and $ c_1 \in W $: $ f_0(t(a_1\beta, c_1 \vec{z})) = f_0(0\cdot t(\beta, \vec{y})) = f_0 (t(\beta, \vec{y})) = \vec{y}$;
\item [(case 2)] $ c_1 \in W $ and $ a_1 \in W $ (note, that in this case $ a_1 = c_1 $ because otherwise $\vec{y} \notin f_0(\alpha) \taurus G $),
$$
f_0(t(a_1\beta, c_1 \vec{z})) = f_0(a_1 \cdot t(\beta, \vec{z})) = a_1 \cdot f_0(t(\beta, \vec{z})) = a_1 \cdot \vec{z} = \vec{y};
$$
\item [(case 3)] $c_1 \in \mathbb R_\emptyset $, $ f_0(t(a_1\beta, c_1 \vec{z})) = f_0(c_1 \cdot t(a_1 \beta, \vec{z})) = c_1 \cdot f_0(t(a_1 \beta, \vec{z})) = c_1 \vec{z} = \vec{y}$. \end{itemize}
Item (\ref{it:3:def:pmorph4prednframe}).
Consider $ \gamma\in \mathbb R_\omega $, $ \alpha \in \mathfrak X $ and $ m = \mathop{st}(\gamma) + \mathop{st}(\alpha)$. If $ \beta \in U_m(\alpha) $ then
\[
\xi_{\beta}(\gamma) = [h(\beta, \gamma)] = [h(\rest{\alpha}_{st(\alpha)}0^kb0^\omega, \gamma)] = [h(\alpha, \gamma)\cdot b] = [h(\alpha, \gamma)] = \xi_{\alpha}(\gamma).
\]
In our example, $ m = st(\gamma) + st(\alpha) = 11$, and if $ \beta \in U_m(\alpha)$ then $ \beta = ab00c000000 \cdot \vec z \cdot 0^\omega $ for some $ \vec z \in (W \cup \set{0})^*$. Then
\begin{align*}
\xi_{\alpha}(\gamma) &= [a12bc3]\\
\xi_{\beta}(\gamma) &= [a12bc3 \cdot f_0(\vec z)] = [a12bc3]
\end{align*}
This finishes the proof. \end{proof}
\section{Completeness results}
The construction above already gives us the following theorem. \begin{thm}[\cite{arlo2006first}]
Logic $ \logic{QK} $ is complete with respect to neighbourhood frames with constant domain. \end{thm}
Indeed, if $ A \notin \logic{QK} $ then by Theorem \ref{thm:PTC_ncompleteness} it is falsifiable in a Kripke frame $ \mathbb{F} $ with expanding domains. By Lemmas \ref{lem:pmorphism:unrevelpF_to_pF}, \ref{lem:X-F_pred_p-morphism}, \ref{lem:pmorphism_composition} and \ref{lem:pmorphismlemma4nF2kF}, $ {Log}(\mathbb{X}) \subseteq{Log}(\mathbb{F}) $. So $ A \notin {Log}(\mathbb{X}) $.
\begin{definition}
Let $\Gamma$ be a set of universal strict Horn sentences, $F = (W,R)$ be a rooted frame, $\alpha \in W_{\omega}$ and $f_0:W_{\omega} \to W^{\sharp}$ be the ``zero-dropping'' function. Then we define
\begin{align*}
U_k^{\Gamma}(\alpha) &= \Setdef[\beta \in W_\omega]{\alpha |_m = \beta |_m \ \&\ f_0(\alpha) (R^\sharp)^{\Gamma} f_0(\beta), \ m = \max(k, \mathop{st}(\alpha))};\\
\tau^{\Gamma}(\alpha) &= \setdef[V]{\exists k \left ( U_k^{\Gamma}(\alpha) \subseteq V \right )};\\
\mathcal{N}^{\Gamma}_{\omega}(F) &= (W_{\omega}, \tau^{\Gamma}).
\end{align*} \end{definition}
\begin{lem}
Let $\mathsf{L}$ be a one-way PTC-logic, $\Gamma$ be the corresponding set of Horn sentences, and $F\models \mathsf{L}$. If $ \Box p \to \Box^n p \in \mathsf{L} $, then
\[
\mathcal{N}^{\Gamma}_{\omega}(F) \models \Box p \to \Box^n p.
\] \end{lem}
\begin{proof}
Let $ M = (\mathcal{N}^{\Gamma}_{\omega}(F), V)$ be a neighbourhood model.
We assume that $M, \alpha \not\models \Box^n p$, and then we prove that
$M, \alpha \not\models \Box p$, i.e.,
\[
\forall m \exists \beta \in U^{\Gamma}_m(\alpha)(\beta \not\models p).
\]
Let us fix $m$. Then
\begin{align*}
&\exists \alpha_1\in U^{\Gamma}_{m}(\alpha) \left(
\alpha_1 \not\models \Box^{n-1} p \right) \\
\Rightarrow &\exists \alpha_2\in U^{\Gamma}_{m}(\alpha_1) \left(
\alpha_2 \not\models \Box^{n-2} p \right) \\
&\quad\vdots \\
\Rightarrow &\exists \alpha_n\in U^{\Gamma}_m(\alpha_{n-1}) \left(
\alpha_n \not\models p \right). \\
\end{align*}
By the definition of $ U^{\Gamma}_m(\alpha)$
\[
f_0(\alpha) (R^\sharp)^{\Gamma} f_0(\alpha_1) (R^\sharp)^{\Gamma} \ldots (R^\sharp)^{\Gamma} f_0(\alpha_n)
\]
and
\[
\rest{\alpha}_m = \rest{\alpha_1}_m = \ldots = \rest{\alpha_n}_m.
\]
Since $ \left (W^{\sharp}, (R^\sharp)^{\Gamma}\right ) \models \Box p \to \Box^n p$, it follows that
\[
f_0(\alpha) (R^\sharp)^{\Gamma} f_0(\alpha_n).
\]
It follows that $ \alpha_n \in U^{\Gamma}_{m}(\alpha) $ and $ \alpha_n \not\models p $, as required. \end{proof}
\begin{lem}
Let $\mathsf{L}$ be a one-way PTC-logic, $\Gamma$ be the corresponding set of Horn sentences, and $F\models \mathsf{L}$. Then
\[
f_0: \mathcal{N}^{\Gamma}_{\omega}(F) \twoheadrightarrow \mathcal{N}(F^{\sharp\Gamma}).
\] \end{lem} \begin{proof}
The surjectivity was established in Lemma \ref{lem:bmorphism_wframe2nframe}.
Assume that $\alpha\in W_\omega$ and $U \in \tau^\Gamma(\alpha)$. We need to prove that $R^{\sharp\Gamma}(f_0(\alpha)) \subseteq f_0(U)$.
There exists $m$ such that $U^{\Gamma}_m(\alpha) \subseteq U$.
It is easy to check by the definition that $f_0(U^\Gamma_m(\alpha)) = R^{\sharp\Gamma}(f_0(\alpha))$. Then
\[
R^{\sharp\Gamma}(f_0(\alpha)) = f_0(U^\Gamma_m(\alpha)) \subseteq f_0(U).
\]
Assume that $\alpha\in W_\omega$ and $U'$ is a neighbourhood of $f_0(\alpha)$, i.e.\ $R^{\sharp\Gamma}(f_0(\alpha)) \subseteq U'$. We need to prove that there exists $U \in \tau^\Gamma(\alpha)$ such that $f_0(U) \subseteq U'$. Let us take $U = U^{\Gamma}_m(\alpha)$ for some $m \ge st(\alpha)$, then
\[
f_0(U^\Gamma_m(\alpha))=R^{\sharp\Gamma}(f_0(\alpha)) \subseteq U'.
\] \end{proof}
Let $ F $ be a rooted $ \logic{L} $-frame. Let us define \begin{align*}
\mathfrak X^\Gamma &= \mathcal{N}^\Gamma_{\omega}(F), \\
\mathbb{X}^\Gamma &= (\mathfrak X^\Gamma, D^*),\hbox{ where } D^* = \mathcal{N}_{\omega}(\mathbb R_{\emptyset}, \mathbb R^2_{\emptyset}),\\
\mathbb{F}^{\sharp\Gamma}&=(F^{\sharp\Gamma}, D^\sharp). \end{align*}
\begin{lem}
$ \mathbb{F}^{\sharp\Gamma} $ is a predicate Kripke frame with expanding domains. \end{lem}
\begin{proof}
We need to check that if $ \vec a \, R^{\sharp\Gamma} \, \vec b $ then $ D^\sharp_{\vec a} \subseteq D^\sharp_{\vec b} $. This is true since the relation $ \subseteq $ is transitive and the $ \Gamma $-closure is included in the transitive-reflexive closure. \end{proof}
The underlying sets of $ \mathfrak X $ and $ \mathfrak X^\Gamma $ are the same, so the function $ \xi $ defined in the previous section is well defined here as well. \begin{lem}
Pair $ (f_0, \xi) $ is a p-morphism from $ \mathbb{X}^\Gamma $ to $ \mathbb{F}^{\sharp\Gamma} $. \end{lem}
\begin{proof} The proof is the same as for Lemma \ref{lem:X-F_pred_p-morphism}. For the last condition we need to check that for any $ \alpha, \beta \in \mathfrak X^\Gamma $ such that $ \beta \in U^\Gamma_m(\alpha) $ and any $ \gamma \in D^* $ we have \[ \xi_{\beta}(\gamma) = \xi_{\alpha}(\gamma). \] This is an easy exercise. \end{proof}
\begin{thm}\label{thm:main_PTClogic-nframecompleteness}
Let $ \logic{L} $ be a one-way PTC-logic with one modality. Then the predicate modal logic $ \logic{QL} $ is complete with respect to predicate neighbourhood frames with constant domain. \end{thm}
\section{Multiple modalities}
The construction should work for multiple modalities if, as the alphabet for the sequences, we take not just elements of $ W $ ($ W_1 $ or $ W_2 $) but pairs $ (R_i, w) $, where $ i\in \set{1, \ldots, N} $, $ w\in W $, and $ N $ is the number of modalities. All the definitions should be changed accordingly.
The author strongly believes that the following hypothesis can be proven by a straightforward adaptation of the methods from this paper:
\textbf{Hypothesis.} \textit{ Let $ \logic{L} $ be a one-way PTC-logic with arbitrarily many modalities. Then the predicate modal logic $ \logic{QL} $ is complete with respect to predicate neighbourhood frames with constant domain.}
\section{Topological semantics}
It is well known that the topological semantics is a particular case of the neighbourhood semantics: an $ \logic{S4} $-neighbourhood frame is basically a topological space, with the closure operator interpreting the $ \diamondsuit $ modality.
This observation gives us the following previously proven theorem as a corollary of Theorem \ref{thm:main_PTClogic-nframecompleteness}.
\begin{thm}[\cite{rasiowa1963metamathematics}]
Logic $ \logic{QS4} $ is complete with respect to topological spaces. \end{thm}
Using the construction from \cite{kudinov_aiml12} and Theorem \ref{thm:main_PTClogic-nframecompleteness} we can prove the following.
\begin{thm}[\cite{kremer2014quantified}]
Logic $ \logic{QS4} $ is complete with respect to the set of rational numbers $ \mathbb Q $. \end{thm}
As was explained in \cite{kudinov_aiml12}, a $ \logic{K4^-} $-neighbourhood frame, where $ \logic{K4^-} = \logic{K} + \Box p \land p \to \Box\Box p $, is basically a topological space with the derivational operator interpreting the $ \diamondsuit $ modality.
From this and the results of the current paper we obtain the following.
\begin{thm}
Logics $ \logic{QK4} $ and $ \logic{QD4} $ are complete with respect to $ T_d $ and dense-in-itself $ T_d $ topological spaces, respectively. \end{thm}
\begin{thm}
Logic $ \logic{QD4} $ is complete with respect to the set of rational numbers $ \mathbb Q $ (with derivational modality). \end{thm}
\vspace*{10pt}
\end{document}
\begin{document}
\title{Newform Eisenstein congruences of local origin}
\begin{center}\textit{In memory of Lynne Walling.}\end{center}
\begin{abstract} We give a general conjecture concerning the existence of Eisenstein congruences between weight $k\geq 3$ newforms of square-free level $NM$ and weight $k$ new Eisenstein series of square-free level $N$. Our conjecture allows the forms to have arbitrary character $\chi$ of conductor $N$. The special cases $M=1$ and $M=p$ prime are fully proved, with partial results given in general. We also consider the relation with the Bloch-Kato conjecture, and finish with computational examples demonstrating cases of our conjecture that have resisted proof. \end{abstract}
\section{Introduction}
The theory of Eisenstein congruences has a rich and beautiful history, beginning with Ramanujan's remarkable observation that the Fourier coefficients $\tau(n)$ of the discriminant function: \[\Delta(z) = q\prod_{n\geq 1}(1-q^n)^{24} = \sum_{n\geq 1}\tau(n)q^n \in S_{12}(\text{SL}_2(\mathbb{Z}))\] satisfy $\tau(n)\equiv \sigma_{11}(n) \bmod 691$ for all $n\geq 1$ (here $\sigma_{11}(n) = \sum_{d\mid n}d^{11}$). Intuitively, this family of congruences is explained via a congruence between two modular forms of weight $12$; the cusp form $\Delta$ and the Eisenstein series $E_{12}$. The significance of the modulus $691$ is that it divides the numerator of $-\frac{B_{12}}{24}$, the constant term of $E_{12}$ (so that $E_{12}$ is a cusp form mod $691$).
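For instance, the first nontrivial case $n=2$ can be checked directly from the expansion $\Delta(z) = q - 24q^2 + 252q^3 - \cdots$:
\[
\tau(2) = -24, \qquad \sigma_{11}(2) = 1 + 2^{11} = 2049, \qquad \sigma_{11}(2) - \tau(2) = 2073 = 3\cdot 691.
\]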
Since Ramanujan, many other congruences have been found between cusp forms and Eisenstein series, modulo other interesting primes. For example, one can vary the weight of the forms and find congruences whose moduli divide the numerators of other Bernoulli numbers. The existence of such congruences has been a key tool in the proofs of various important results in Algebraic Number Theory, e.g. the Herbrand-Ribet Theorem, relating the $p$-divisibility of Bernoulli numbers with the Galois module structure of $\text{Cl}(\mathbb{Q}(\zeta_p))[p]$ (see \cite{ribet_1976}).
Similarly, varying the levels and characters of our forms produces even more congruences. This time the moduli are observed to divide numerators of generalised Bernoulli numbers and special values of local Euler factors of Dirichlet $L$-functions. In the latter case such congruences are often referred to as being of ``local origin''. The papers \cite{dummigan_2007}, \cite{dummigan_fretwell_2014}, \cite{billerey_menares_2016}, \cite{billerey_menares_2017} and \cite{spencer_2018} contain thorough discussions of such congruences. The existence of these congruences is linked to special cases of the Bloch-Kato Conjecture, a far reaching generalisation of the Herbrand-Ribet Theorem, the Analytic Class Number Formula and the Birch and Swinnerton-Dyer Conjecture. This conjecture implies links between $p$-divisibility of special values of certain motivic $L$-functions and $p$-torsion in certain Bloch-Kato Selmer groups.
More generally, the theory of Eisenstein congruences has been extended to various families of automorphic forms, although the landscape is still highly conjectural \cite{harder}. Roughly speaking, if $G/\mathbb{Q}$ is a reductive group then one expects to observe congruences between Hecke eigenvalues coming from cuspidal automorphic representations for $G(\mathbb{A}_{\mathbb{Q}})$ and automorphic representations that are parabolically induced from Levi subgroups of $G$. The moduli of such congruences are also predicted to arise from special values of certain motivic $L$-functions (related to the particular Levi subgroup considered), in direct comparison with the Bloch-Kato conjecture. For detailed discussion of results and conjectures in this direction, see \cite{bergstrom}. In the special case of $\text{GL}_2/\mathbb{Q}$ we recover exactly the Eisenstein congruences mentioned above, the relevant $L$-functions being Dirichlet $L$-functions (possibly incomplete, hence the appearance of local Euler factors). It is expected that proving such congruences will provide key insights into high rank cases of the Bloch-Kato Conjecture.
In this paper we will return to the ``classical" case of $\text{GL}_2$ Eisenstein congruences, but focus instead on the existence of newforms that satisfy such congruences. Progress has been made on this question in the case of trivial character (\cite{billerey_menares_2016}, \cite{dummigan_fretwell_2014}), and so we consider the more general case of forms with arbitrary character of square-free conductor. To do this, we discuss the following general conjecture and provide a full proof in the special cases of $M=1$ and $M=p$ prime.
\begin{conj} Let $N,M\geq 1$ be coprime and square-free, let $k > 2$, and let $l>k+1$ be a prime satisfying $l \nmid NM$. Let $\psi,\phi$ be Dirichlet characters of conductors $u,v\geq 1$, satisfying $N = uv$, and set $\chi = \psi\phi$ (with a choice of lift $\tilde{\chi}$ to modulus $NM$). There exists a newform $f \in S_k^{\text{new}}(\Gamma_0(NM), \tilde{\chi})$ and a prime $\lambda\mid l$ of $\mathcal{O}_f[\phi,\psi]$ such that \begin{equation*}
a_q(f) \equiv \psi(q)+\phi(q)q^{k-1} \ \text{mod } \lambda \end{equation*}
for all primes $q\nmid NM$ if and only if both of the following conditions hold for some $\lambda'|l$ in $\mathbb{Z}[\psi,\phi]$ (satisfying $\lambda|\lambda'$): \begin{enumerate} \item $\text{ord}_{\lambda'}(L(1-k, \psi^{-1}\phi) \prod_{p \in \mathcal{P}_M} (\psi(p) - \phi(p)p^k)) > 0$. \item $\text{ord}_{\lambda'}((\psi(p)-\phi(p)p^k)(\psi(p)-\phi(p)p^{k-2}))>0$ for each prime $p\in \mathcal{P}_M$. \end{enumerate} \end{conj}
\noindent Here, $\mathcal{P}_M$ is the set of prime divisors of $M$, $a_n(f)$ is the $n$th Fourier coefficient of $f$ and $\mathcal{O}_f[\psi,\phi]$ is the ring of integers of the smallest extension of $K_f = \mathbb{Q}(\{a_n(f)\})$ containing the values of $\psi$ and $\phi$ (similarly for $\mathbb{Z}[\psi,\phi]$).
After, we consider the natural relationship between these newform congruences and the Bloch-Kato conjecture, and give computational evidence for our conjecture in cases where it is not known.
\textbf{Acknowledgements} We thank Neil Dummigan for useful discussions concerning links between Conjecture $1.1$ and the Bloch-Kato conjecture, and for helpful comments and suggestions for improvement.
This work forms part of the thesis of the second named author, and we are grateful to the Heilbronn Institute for support via their PhD Studentship program.
Finally, we wish to dedicate this work to the memory of Lynne Walling, a close friend, mentor and collaborator of the first author, and who would have been a PhD supervisor of the second. Lynne was a highly valued member of the Mathematics community, whose encouragement and support towards early career mathematicians and those from under represented groups was second to none.
\section{Background and notation} \subsection{The Setup} We recap the background knowledge of modular forms that we will need, and refer the reader to \cite{diamond_shurman_2016} for further definitions and discussions. For an integer $N \geq 1$, define the standard congruence subgroups of $SL_2(\field{Z})$: \begin{equation*}
\Gamma_1(N)=\left\{ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in SL_2(\field{Z}) : \begin{pmatrix} a & b \\ c & d \end{pmatrix} \equiv \begin{pmatrix} 1 & * \\ 0 & 1 \end{pmatrix} \ (\text{mod} \ N) \right\} \end{equation*} \begin{equation*}
\Gamma_0(N)=\left\{ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in SL_2(\field{Z}) : \begin{pmatrix} a & b \\ c & d \end{pmatrix} \equiv \begin{pmatrix} * & * \\ 0 & * \end{pmatrix} \ (\text{mod} \ N) \right\}. \end{equation*}
Let $M_k(\Gamma_0(N), \chi)$ be the space of modular forms of weight $k \geq 2$, level $N$ and Dirichlet character $\chi : (\field{Z}/N\field{Z})^\times \rightarrow \field{C}^\times$. This is the space of holomorphic functions $f:\mathcal{H}\rightarrow\mathbb{C}$ on the upper half plane $\mathcal{H}$ that satisfy: \begin{equation*}
f[\gamma]_k:=(cz + d)^{-k} f\left(\frac{az+b}{cz+d} \right) = \chi(d)f(z) \end{equation*} for all $\gamma=\begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \Gamma_0(N) $, and are such that $f[\alpha]_k$ is holomorphic at $i\infty$ for all $\alpha \in SL_2(\field{Z})$ (i.e.\ the Fourier expansion of $f[\alpha]_k$ is of the form $f[\alpha]_k(z) = \sum_{n=0}^\infty a_n(f) q^n$, with $q=e^{2\pi i z}$). Note that we must have $\chi(-1)=(-1)^k$ for this space to be nonzero, since $\begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix}\in\Gamma_0(N)$.
The subspace $S_k(\Gamma_0(N), \chi)$ of cusp forms consists of the forms such that $f[\alpha]_k$ has Fourier coefficient $a_0(f)=0$ for each $\alpha\in SL_2(\field{Z})$. The orthogonal complement of $S_k(\Gamma_0(N),\chi)$ with respect to the Petersson inner product is the Eisenstein subspace $\mathcal{E}_k(\Gamma_0(N), \chi)$. If $k>2$ (the case of interest to us) then a natural basis of this space consists of the normalised Eisenstein series $E_k^{\psi, \phi}(tz)$ for all ordered pairs of Dirichlet characters $\psi,\phi$ of conductors $u,v$ satisfying $\psi\phi = \chi$, and all $t\geq 1$ with $tuv\mid N$. The Fourier expansion of $ E_k^{\psi,\phi}$ is: \begin{equation*}
E_k^{\psi,\phi}(z)=\delta(\psi)\frac{L(1-k, \psi^{-1}\phi)}{2} + \sum_{n=1}^\infty \sigma_{k-1}^{\psi, \phi}(n) q^n, \end{equation*} where $\delta(\psi)=\delta_{\psi,\mathds{1}_1}$ (the trivial character modulo $1$) and \begin{equation*}
\sigma_{k-1}^{\psi,\phi}(n):=\sum_{d\mid n, d>0} \psi(n/d) \phi(d) d^{k-1} \end{equation*} is a generalised power divisor sum. The Eisenstein series with $uv = N$ are referred to as being new at level $N$.
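For instance, taking $\psi = \phi = \mathds{1}_1$ (so $u = v = 1$) recovers the classical level one Eisenstein series: $\sigma_{k-1}^{\mathds{1}_1,\mathds{1}_1}(n) = \sigma_{k-1}(n)$ and the constant term is
\[
\frac{L(1-k, \mathds{1}_1)}{2} = \frac{\zeta(1-k)}{2} = -\frac{B_k}{2k},
\]
which for $k=12$ is the value $-\frac{B_{12}}{24}$ appearing in Ramanujan's congruence.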
For any given level $N$, we can also decompose $S_k(\Gamma_1(N))$ into new and old subspaces. For any $d\mid N$ we have the map: \begin{align*}
i_d: S_k(\Gamma_1(N/d))^2 &\rightarrow S_k(\Gamma_1(N)) \\
(f,g) &\mapsto f + g[\alpha_d]_k \end{align*}
\noindent where $\alpha_d = \left(\begin{smallmatrix} d & 0 \\ 0 & 1\end{smallmatrix}\right)$. Then the old subspace is: \begin{equation*}
S_k^{\text{old}}(\Gamma_1(N)) = \sum_{p \in \mathcal{P}_N} i_p(S_k(\Gamma_1(N/p))^2), \end{equation*} where $\mathcal{P}_N=\{p \text{ prime}: p \mid N \}$. The new subspace $S_k^{\text{new}}(\Gamma_1(N))$ is then the orthogonal complement of $S_k^{\text{old}}(\Gamma_1(N))$ with respect to the Petersson inner product, so that \begin{equation*}
S_k(\Gamma_1(N)) = S_k^{\text{old}}(\Gamma_1(N))\oplus S_k^{\text{new}}(\Gamma_1(N)). \end{equation*} This induces a decomposition of the space $S_k(\Gamma_0(N),\chi)$ into new and old spaces (lifting from spaces $S_k(\Gamma_0(N/p),\chi')$ with $p\in \mathcal{P}_N$ such that $p\text{ cond}(\chi)\mid N$, and taking $\chi'$ to be the reduction of $\chi$ mod $N/p$).
The space $M_k(\Gamma_0(N),\chi)$ comes equipped with the action of a Hecke algebra, a commutative algebra generated by operators $T_p$ indexed by primes $p$. The action of $T_p$ at the level of Fourier coefficients is as follows: \begin{equation*}
a_n(T_p(f)) = a_{np}(f) + \chi(p)p^{k-1}a_{n/p}(f), \end{equation*} where we take $a_{n/p}(f)=0$ if $n/p \notin \field{Z}$.
The space $S_k(\Gamma_0(N),\chi)$ has a basis of eigenforms for the operators $T_p$ for all $p\nmid N$. We can always normalise an eigenform $f$ so that $a_1(f) = 1$, and in this case $T_p(f) = a_p(f)f$ for each prime $p\nmid N$. For the subspace $S_k^{\text{new}}(\Gamma_0(N),\chi)$ we can find a basis of newforms, eigenforms for the full Hecke algebra.
The action of the Hecke algebra on the Eisenstein subspace $\mathcal{E}_k(\Gamma_0(N),\chi)$ is also well understood (see \cite[Proposition 5.2.3]{diamond_shurman_2016}). In particular, the normalised Eisenstein series $E_k^{\psi,\phi}$ are eigenforms for all Hecke operators $T_p$ with $p\nmid N$, with eigenvalues given by:
\begin{equation*}
T_p(E_k^{\psi,\phi}) = \sigma_{k-1}^{\psi,\phi}(p)E_k^{\psi,\phi} = (\psi(p)+\phi(p)p^{k-1})E_k^{\psi,\phi}. \end{equation*} If $E_k^{\psi,\phi}$ is new at level $N$ then it is an eigenform for the full Hecke algebra and so the above holds for all $p$.
The field of definition $K_f = \mathbb{Q}(\{a_n(f)\})$ of an eigenform $f\in S_k(\Gamma_0(N),\chi)$ is known to be a number field, and so has a well-defined ring of integers $\mathcal{O}_f$. We will often denote by $K_f[\psi,\phi]$ the finite extension generated by the values of $\psi,\phi$ (i.e. roots of unity), and by $\mathcal{O}_f[\psi,\phi]$ the corresponding ring of integers.
By a theorem of Deligne \cite{deligne_1969}, for each prime $\lambda$ of $\mathcal{O}_f$ there exists a continuous $\lambda$-adic Galois representation \[\rho_{f,\lambda}: G_{\mathbb{Q}} \rightarrow \text{GL}_2(K_{f,\lambda})\] which is unramified for $q \nmid NMl$ and satisfies \begin{align*}
\text{Tr}(\rho_{f,\lambda}(\text{Frob}_q)) &=a_q(f) \\ \text{det}(\rho_{f,\lambda}(\text{Frob}_q)) &=\chi(q)q^{k-1}. \end{align*} for such primes (here $\text{Frob}_q$ is an arithmetic Frobenius element at $q$). By standard arguments, it is possible to conjugate $\rho_{f,\lambda}$ so that it takes values in $\text{GL}_2(\mathcal{O}_{f,\lambda})$ and reduce modulo $\lambda$ to get a continuous representation \[\overline{\rho}_{f,\lambda}:G_{\mathbb{Q}} \rightarrow \text{GL}_2(\mathbb{F}_{\lambda}).\] In general, the reduction depends on the choice of invariant $\mathcal{O}_{f,\lambda}$-lattice, but the irreducible composition factors are independent of this choice.
\section{A General Conjecture} \label{section:conj} From now on, $N, M\geq 1$ are fixed coprime square-free integers and $\chi$ is a fixed character of conductor $N$. Suppose further that $\psi, \phi$ are characters of conductors $u, v\geq 1$ respectively, satisfying $uv=N$ and $\psi\phi = \chi$. Then $E_k^{\psi,\phi}\in \mathcal{E}_k(\Gamma_0(N),\chi)$ is new at level $N$. Fix a lift $\tilde{\chi}$ of $\chi$ to modulus $NM$.
The following is a restatement of Conjecture $1.1$. It is a general conjecture concerning Eisenstein congruences between newforms in $S_k^{\text{new}}(\Gamma_0(NM),\tilde{\chi})$ and $E_k^{\psi,\phi}$, providing a wide generalisation of Ramanujan's congruence (as defined earlier, $\mathcal{P}_d$ is the set of prime divisors of $d\geq 1$).
\begin{conj} \label{conj:general} Let $k > 2$ and $l>k+1$ be a prime satisfying $l \nmid NM$. There exists a newform $f \in S_k^{\text{new}}(\Gamma_0(NM), \tilde{\chi})$ and a prime $\lambda\mid l$ of $\mathcal{O}_f[\phi,\psi]$ such that \begin{equation*}
a_q(f) \equiv \psi(q)+\phi(q)q^{k-1} \ \text{mod } \lambda \end{equation*}
for all primes $q\nmid NM$ if and only if both of the following conditions hold for some $\lambda'|l$ in $\mathbb{Z}[\psi,\phi]$ (satisfying $\lambda|\lambda'$): \begin{enumerate} \item $\text{ord}_{\lambda'}(L(1-k, \psi^{-1}\phi) \prod_{p \in \mathcal{P}_M} (\psi(p) - \phi(p)p^k)) > 0$. \item $\text{ord}_{\lambda'}((\psi(p)-\phi(p)p^k)(\psi(p)-\phi(p)p^{k-2}))>0$ for each $p \in \mathcal{P}_M$. \end{enumerate} \end{conj}
Condition (1) of the conjecture is enough to guarantee the existence of an eigenform in $S_k(\Gamma_0(NM),\tilde{\chi})$ satisfying the congruence (see Theorem \ref{thm:eigenform} below). This condition can be thought of as the analogue of $\text{ord}_{691}\left(-\frac{B_{12}}{24}\right)>0$ in Ramanujan's congruence, but now allowing prime divisors of Euler factors as well as Dirichlet $L$-values (an artefact of our Eisenstein series being lifted from level $N$ to level $NM$). Such congruences coming from divisibility of the Euler factor (as opposed to the complete $L$-value) are often said to be of ``local origin''.
Condition (2) can be thought of as measuring how ``new'' the modular form $f$ in the conjecture is, i.e. if the prime $l$ only satisfied condition (2) for all $p \in \mathcal{P}_d$, with $d \mid M$, then we would only expect to find a $d$-newform $f$ satisfying the congruence condition.
We will prove the reverse implication of this conjecture, and give partial results concerning the forward implication. The major hurdle is that Conditions (1) and (2) allow us to prove the existence of a $p$-newform $f$ satisfying the congruence condition for each $p \in \mathcal{P}_M$, but we are currently unable to show that these $p$-newforms can be taken to be the same genuine newform.
Later, we will discuss the relationship of this conjecture with previous results in this area, for example \cite{dummigan_2007}, \cite{billerey_menares_2016}, \cite{dummigan_fretwell_2014}, as well as connections with the Bloch-Kato Conjecture. We will also see computational examples using MAGMA \cite{magma}.
\subsection{Initial results} To get us started, we must construct various lifts of $E_k^{\psi,\phi}$ to level $NM$ having specified constant terms. The following construction and results generalise those found in \cite[Section 1.2.2]{billerey_menares_2016}.
For each $m\geq 1$ we define the operator $\alpha_m$, acting on complex valued functions on the upper half plane, by $(\alpha_m f)(z)=f(mz)$. We then consider the collection of Eisenstein series given by \begin{equation} \label{eq:E}
E_{\underline{\delta}}(z)=\left[\prod_{p\in \mathcal{P}_M}(T_p - \delta_p)\right]\alpha_M E_k^{\psi,\phi}\in \mathcal{E}_k(\Gamma_0(NM),\tilde{\chi}), \end{equation} where $\underline{\delta} = \{\delta_p\}_{p\mid M}$ and $\delta_p$ is determined by fixing an ordering: \begin{equation} \label{eq:delta/eps}
\{\delta_p, \varepsilon_p\} = \{\psi(p), \phi(p)p^{k-1}\}. \end{equation}
\noindent When $M=1$ there is no choice and we define $E_{\underline{\delta}} = E_k^{\psi,\phi}$.
\begin{lemma} \label{lemma:E} Each Eisenstein series $E_{\underline{\delta}}$ is a normalised eigenform in $\mathcal{E}_k(\Gamma_0(NM), \tilde{\chi})$. For each prime $p$ we have: \begin{equation*} T_p E_{\underline{\delta}}= \begin{cases}
\varepsilon_pE_{\underline{\delta}} & \text{if } p \in \mathcal{P}_M \\
(\psi(p)+\phi(p)p^{k-1})E_{\underline{\delta}} & \text{otherwise} \\ \end{cases} \end{equation*} We can also write \begin{equation} \label{eq:altE}
E_{\underline{\delta}}=\sum_{m \mid M}(-1)^{|\mathcal{P}_m|} \delta_m \alpha_m E_k^{\psi,\phi}, \end{equation} where $\delta_m = \prod_{p \in \mathcal{P}_m} \delta_p$ for each $m\mid M$. \end{lemma} \begin{proof} If $p \notin \mathcal{P}_M$ then
\begin{align*}
a_n(T_p(\alpha_M E_k^{\psi, \phi})) &= a_{np}(\alpha_M E_k^{\psi, \phi}) + \tilde{\chi}(p)p^{k-1}a_{n/p}(\alpha_M E_k^{\psi,\phi}) \\
&= a_{np/M}(E_k^{\psi,\phi}) + \chi(p)p^{k-1}a_{n/pM}(E_k^{\psi,\phi}) \\
&= a_{n/M}(T_pE_k^{\psi,\phi}) \\
&= a_n(\alpha_M (T_p E_k^{\psi,\phi})). \end{align*}
\noindent Hence, we see that $T_p \alpha_M E_k^{\psi,\phi} = \alpha_M T_p E_k^{\psi,\phi}$. It follows that \begin{align*}
T_pE_{\underline{\delta}}&=T_p \left[\prod_{q\in \mathcal{P}_M} (T_q - \delta_q)\right]\alpha_ME_k^{\psi,\phi} \\
&=\left[\prod_{q\in \mathcal{P}_M} (T_q - \delta_q)\right]\alpha_M T_pE_k^{\psi,\phi} \\
&=(\psi(p)+\phi(p)p^{k-1})E_{\underline{\delta}} \end{align*}
\noindent If $p \in \mathcal{P}_M$ we find that: \begin{align*}T_pE_{\underline{\delta}}&=T_p \left[\prod_{q\in \mathcal{P}_M} (T_q - \delta_q)\right]\alpha_ME_k^{\psi,\phi} \\
&=\left[\prod_{q\in \mathcal{P}_{M/p}} (T_q - \delta_q)\right](T_p^2-\delta_pT_p)\alpha_M E_k^{\psi,\phi},\end{align*} and we must determine the action of the operator $T_p^2-\delta_pT_p$ on $\alpha_M E_k^{\psi,\phi}$. We do this by first proving the claim that for each $m\mid M$: \begin{equation*} T_p\alpha_m E_k^{\psi, \phi} =
\begin{cases}
\alpha_{m/p}E_k^{\psi,\phi} & \text{if } p \in
\mathcal{P}_m \\
(\psi(p)+\phi(p)p^{k-1})\alpha_{m}E_k^{\psi,\phi}-\chi(p)p^{k-1}\alpha_{mp}E_k^{\psi,\phi} & \text{if } p \notin
\mathcal{P}_m
\end{cases} \end{equation*} To prove the claim, note that \begin{align*}
a_n(T_p\alpha_m E_k^{\psi,\phi})&=a_{np}(\alpha_m E_k^{\psi,\phi}) + \tilde{\chi}(p)p^{k-1}a_{n/p}(\alpha_m E_k^{\psi,\phi}) \\
&= a_{np}(\alpha_m E_k^{\psi,\phi}) \\
&= \sigma_{k-1}^{\psi,\phi}\left(\frac{np}{m}\right). \end{align*} (here, $\tilde{\chi}(p)$ vanishes since $\tilde{\chi}$ has modulus $NM$ and $p \in \mathcal{P}_M$).
\noindent When $p \in \mathcal{P}_m$ we have $a_n(T_p\alpha_m E_k^{\psi,\phi})=\sigma_{k-1}^{\psi,\phi}\left(\frac{np}{m}\right)=a_n(\alpha_{m/p}E_k^{\psi,\phi})$, and when $p\notin \mathcal{P}_m$ we use the fact that: \begin{equation} \label{eq:powerdiv}
\sigma_{k-1}^{\psi,\phi}(np)+\chi(p)p^{k-1}\sigma_{k-1}^{\psi,\phi}(n/p) = (\psi(p)+\phi(p)p^{k-1})\sigma_{k-1}^{\psi,\phi}(n) \end{equation} to get \begin{equation*}
\sigma_{k-1}^{\psi,\phi}\left(\frac{np}{m}\right) = (\psi(p)+\phi(p)p^{k-1})\sigma_{k-1}^{\psi,\phi}\left(\frac{n}{m}\right)-\chi(p)p^{k-1}\sigma_{k-1}^{\psi,\phi}\left(\frac{n}{mp}\right), \end{equation*} so that \begin{equation*}
a_n(T_p\alpha_m E_k^{\psi,\phi})=(\psi(p)+\phi(p)p^{k-1})a_n(\alpha_mE_k^{\psi,\phi})-\chi(p)p^{k-1}a_n(\alpha_{mp}E_k^{\psi,\phi}). \end{equation*}
\noindent The claim follows and so \begin{align*}
(T_p^2-\delta_pT_p)\alpha_ME_k^{\psi,\phi} &= T_p\alpha_{M/p}E_k^{\psi,\phi} - \delta_p\alpha_{M/p}E_k^{\psi,\phi} \\
&=(\psi(p)+\phi(p)p^{k-1})\alpha_{M/p}E_k^{\psi,\phi}-\chi(p)p^{k-1}\alpha_ME_k^{\psi,\phi} - \delta_p\alpha_{M/p}E_k^{\psi,\phi} \\
&=\varepsilon_p\alpha_{M/p}E_k^{\psi,\phi} - \chi(p)p^{k-1}\alpha_ME_k^{\psi,\phi} \\
&=\varepsilon_p(T_p-\delta_p)\alpha_M E_k^{\psi,\phi}. \end{align*} Therefore, when $p\in \mathcal{P}_M$ we have that $T_pE_{\underline{\delta}}=\varepsilon_pE_{\underline{\delta}}$, as required.
Equation \eqref{eq:altE} holds by the Inclusion-Exclusion Principle and the claim. The fact that $E_{\underline{\delta}}$ is normalised now follows, since only the term with $m=1$ contributes to the $q$ coefficient of $E_{\underline{\delta}}$, and $E_k^{\psi,\phi}$ is normalised. \end{proof}
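As a quick numerical sanity check of the key identity \eqref{eq:powerdiv}, take $\psi = \phi = \mathds{1}_1$, $k=12$ and $n = p = 2$: the left hand side is
\[
\sigma_{11}(4) + 2^{11}\sigma_{11}(1) = 4196353 + 2048 = 4198401,
\]
while the right hand side is $(1 + 2^{11})\sigma_{11}(2) = 2049^2 = 4198401$.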
We now state a Proposition which allows us to determine the value of $E_{\underline{\delta}}$ at all cusps. The proof can be found in \cite[Proposition 3.1.2]{spencer_2018} or \cite[Proposition 4]{billerey_menares_2017}.
\begin{propn}[Spencer] \label{prop:spencer} If $m \geq 1$ is coprime to $N$ and $\gamma = \left(\begin{smallmatrix} a & \beta \\ b & \delta \end{smallmatrix}\right) \in SL_2(\field{Z})$ then the constant term of $(\alpha_mE_k^{\psi,\phi})[\gamma]_k$ is given by \begin{equation*}
a_0((\alpha_m E_k^{\psi,\phi})[\gamma]_k) = \begin{cases}
-\frac{g(\psi\phi^{-1})}{g(\phi^{-1})}\frac{\phi^{-1}(m^\prime a) \psi\left(\frac{-b^\prime}{v}\right)}{u^k m^{\prime k}}\frac{L(1-k, \psi^{-1}\phi)}{2} & \text{ if }v\mid b'\\ 0 & \text{otherwise}
\end{cases} \end{equation*} where $b^\prime = \frac{b}{\gcd(b,m)}$, $m^\prime=\frac{m}{\gcd(b,m)}$ and $g(\phi)=\sum_{n=0}^{v-1}\phi(n)e^{\frac{2\pi i n}{v}}$ is the Gauss sum of $\phi$. \end{propn}
For convenience we will write: \begin{equation*}
C_{\gamma}=-\frac{g(\psi\phi^{-1})}{g(\phi^{-1})}\frac{\phi^{-1}(a) \psi\left(\frac{-b}{v}\right)}{u^k }\frac{L(1-k, \psi^{-1}\phi)}{2}. \end{equation*} Using this and the notation in Proposition \ref{prop:spencer}, we can write a formula for the constant term of $E_{\underline{\delta}}[\gamma]_k$. \begin{corollary} \label{cor:cusp} If $\gamma = \left(\begin{smallmatrix} a & \beta \\ b & \delta \end{smallmatrix}\right) \in SL_2(\field{Z})$ and $M' = \frac{M}{\gcd(M,b)}$ then the constant term of $E_{\underline{\delta}}[\gamma]_k$ is \begin{equation*}
a_0(E_{\underline{\delta}}[\gamma]_k) = \begin{cases}C_{\gamma} \prod_{p\in \mathcal{P}_{M'}}(1 - \delta_p \phi^{-1}(p)p^{-k})\prod_{p \in \mathcal{P}_{M/M'}} (1 - \delta_p \psi^{-1}(p)) & \text{ if } v\mid b'\\ 0& \text{ otherwise}\end{cases}. \end{equation*} \end{corollary} \begin{proof} By Lemma \ref{lemma:E}, we can write $E_{\underline{\delta}}$ as \begin{equation*}
E_{\underline{\delta}}=\sum_{m \mid M}(-1)^{|\mathcal{P}_m|} \delta_m \alpha_m E_k^{\psi,\phi}. \end{equation*}
It is clear from Proposition \ref{prop:spencer} that the constant term of $E_{\underline{\delta}}$ is zero if $v\nmid b'$ and so it suffices to consider the case $v \mid b^\prime$. First we use Proposition \ref{prop:spencer} to evaluate the constant term of $(-1)^{|\mathcal{P}_m|}\delta_m \alpha_m E_k^{\psi, \phi}[\gamma]_k$ for a fixed $m \mid M$ as follows: \begin{align*}
&(-1)^{|\mathcal{P}_m| +1}\frac{g(\psi \phi^{-1})}{g(\phi^{-1})} \frac{\phi^{-1}\left(m' a \right)\psi \left(\frac{-b'}{v}\right)}{u^k m'^k} \frac{L(1-k, \psi^{-1}\phi)}{2} \delta_m \\
&= C_{\gamma} (-1)^{|\mathcal{P}_m|} \frac{\phi^{-1}(m')\psi^{-1}\left(\frac{m}{m'}\right)}{m'^k} \delta_m\\ &= C_{\gamma} (-1)^{|\mathcal{P}_m|} \left(\prod_{p\in \mathcal{P}_{m'}} \delta_p\phi^{-1}(p)p^{-k} \right) \left( \prod_{p \in \mathcal{P}_{m/m'}} \delta_p\psi^{-1}(p) \right).\label{eq:constant} \end{align*} It follows that the constant term of $E_{\underline{\delta}}[\gamma]_k$ is: \begin{align*}
a_0(E_{\underline{\delta}}[\gamma]_k) &=C_{\gamma} \sum_{m \mid M} \left[ (-1)^{|\mathcal{P}_m|} \left(\prod_{p\in \mathcal{P}_{m'}} \delta_p\phi^{-1}(p)p^{-k}\right) \left( \prod_{p \in \mathcal{P}_{m/m'}} \delta_p\psi^{-1}(p) \right)\right] \\
&= C_{\gamma} \prod_{p \in \mathcal{P}_{M'}} (1-\delta_p\phi^{-1}(p)p^{-k}) \prod_{p \in \mathcal{P}_{M/M'}} (1-\delta_p\psi^{-1}(p)) \end{align*} by the Inclusion-Exclusion Principle. \end{proof}
\subsection{The reverse implication} We have now constructed various Eisenstein series $E_{\underline{\delta}}$ whose constant coefficients contain terms resembling those in Condition (1) of Conjecture \ref{conj:general}. We are now in a position to use these series to prove the reverse implication of the conjecture.
\begin{theorem} \label{thm:reverse} The reverse implication of Conjecture \ref{conj:general} is true. \end{theorem}
\begin{proof} Suppose that we have a newform $f \in S_k^{\text{new}}(\Gamma_0(NM), \tilde{\chi})$ satisfying the congruence \[a_q(f) \equiv \psi(q) + \phi(q)q^{k-1} \ \text{mod } \lambda\] for all primes $q \nmid NM$ and for some prime $\lambda\mid l$ of $\mathcal{O}_f[\psi,\phi]$ (with the assumptions $k>2$, $l>k+1$ and $l\nmid NM$).
Attached to $f$ is the $\lambda$-adic Galois representation given by \[\rho_{f,\lambda}: G_{\mathbb{Q}}\rightarrow \text{GL}_2(K_{f,\lambda_0}) \hookrightarrow \text{GL}_2(K_f[\psi,\phi]_{\lambda}),\] where the first arrow is the usual $\lambda_0$-adic Galois representation attached to $f$ (with $\lambda_0$ the unique prime of $K_f$ lying below $\lambda$). This may be conjugated to take values in $\text{GL}_2(\mathcal{O}_f[\psi,\phi]_\lambda)$. The congruence then implies that $\rho_{f,\lambda}$ is residually reducible mod $\lambda$ (by Cebotarev density and Brauer-Nesbitt, noting that $l > k > 2$): \begin{equation*} \overline{\rho}_{f,\lambda} \sim \begin{pmatrix} \overline{\psi} & *\\ 0 & \overline{\phi} \chi_l^{k - 1} \end{pmatrix} \end{equation*} so that the semisimplification is given by \begin{equation} \label{eq:rep}
\overline{\rho}_{f,\lambda}^{ss} \sim \overline{\psi} \oplus \overline{\phi} \chi_l^{k-1}. \end{equation} (i.e.\ $\overline{\rho}_{f,\lambda}$ has composition factors $\{\overline{\psi}, \overline{\phi}\chi_l^{k-1}\}$). Here $\chi_{l}:G_{\mathbb{Q}}\rightarrow \mathbb{F}_{l}^{\times}$ is the mod $l$ cyclotomic character.
For each $p\in \mathcal{P}_M$ it is well known that the composition factors of $\overline{\rho}_{f,\lambda}$ locally at $p$ are given by: \begin{equation*} \overline{\rho}^{\text{ss}}_{f,\lambda}\mid_{W_{\mathbb{Q}_p}} \sim \left(\mu\chi_l^{k/2}\oplus\mu\chi_l^{k/2-1}\right)\mid_{W_{\mathbb{Q}_p}} \end{equation*} (e.g.\ \cite[Proposition 2.8]{loeffler_weinstein_2011}). Here $W_{\mathbb{Q}_p}$ is the local Weil group at $p$ and $\mu : W_{\mathbb{Q}_p}\rightarrow\mathbb{F}_\lambda^{\times}$ is the unramified character such that $\mu(\text{Frob}_p) \equiv a_p(f)/p^{k/2-1} \ \text{mod } \lambda$.
\noindent This leads to the following equivalence for each $p\in \mathcal{P}_M$: \begin{equation*}
\left(\mu\chi_l^{k/2}\oplus\mu\chi_l^{k/2-1}\right)\mid_{W_{\mathbb{Q}_p}} \sim \left(\overline{\psi}\oplus \,\overline{\phi} \chi_l^{k-1}\right)\mid_{W_{\mathbb{Q}_p}}. \end{equation*}
There are only two possibilities (note also that $l\neq p$ since $p\in \mathcal{P}_M$):
\begin{enumerate}
\item[(A)] $\overline{\psi}\mid_{W_{\mathbb{Q}_p}} = \mu \chi_l^{k/2}\mid_{W_{\mathbb{Q}_p}}$\qquad\text{and}\qquad $\overline{\phi} \chi_l^{k-1}\mid_{W_{\mathbb{Q}_p}} = \mu \chi_l^{k/2-1}\mid_{W_{\mathbb{Q}_p}}$.\\
\noindent Evaluating at $\text{Frob}_p$ gives
\begin{align*}\psi(p) &\equiv \mu(p)p^{k/2} \ \text{mod } \lambda\\ \phi(p)p^{k-1} &\equiv \mu(p)p^{k/2 -1} \ \text{mod } \lambda,\end{align*} so that
\begin{equation*}
\psi(p) - \phi(p)p^k \equiv 0 \ \text{mod } \lambda.
\end{equation*}
\item[(B)] $\overline{\psi}\mid_{W_{\mathbb{Q}_p}} = \mu \chi_l^{k/2-1}\mid_{W_{\mathbb{Q}_p}}$\qquad\text{and}\qquad $\overline{\phi} \chi_l^{k-1}\mid_{W_{\mathbb{Q}_p}} = \mu \chi_l^{k/2}\mid_{W_{\mathbb{Q}_p}}$. \\
\noindent Evaluating at $\text{Frob}_p$ gives \begin{align*}\psi(p) &\equiv \mu(p)p^{k/2-1} \ \text{mod } \lambda\\ \phi(p)p^{k-1} &\equiv \mu(p)p^{k/2} \ \text{mod } \lambda,\end{align*} so that we have both of the following
\begin{align*}
\psi(p) - \phi(p)p^{k-2} &\equiv 0 \ \text{mod } \lambda\\ \psi(p) &\equiv a_p(f) \ \text{mod } \lambda.
\end{align*} \end{enumerate}
\noindent To summarise, one of the following must hold for each $p\in \mathcal{P}_M$: \begin{enumerate}
\item[(A)] $ \psi(p) - \phi(p)p^k \equiv 0 \ \text{mod } \lambda.$
\item[(B)] $\psi(p) - \phi(p)p^{k-2} \equiv 0 \ \text{mod } \lambda$ and $a_p(f) \equiv \psi(p) \ \text{mod } \lambda$. \end{enumerate} Taking norms down to $\mathbb{Z}[\psi,\phi]$ gives Condition (2) (i.e. divisibility by $\lambda'$). It remains to prove that Condition (1) holds: \[\text{ord}_{\lambda'}\left(L(1-k, \psi^{-1}\phi) \prod_{p \in \mathcal{P}_M} (\psi(p) - \phi(p)p^k)\right) > 0.\] First note that this is immediate if there exists a prime $p \in \mathcal{P}_M$ satisfying case (A), since $l>k+1$ and $l\nmid N$. We assume from now on that case (B) is satisfied for each $p \in \mathcal{P}_M$.
Consider the Eisenstein series $E_{\underline{\delta}}$ corresponding to the choice $\delta_p=\phi(p)p^{k-1}$ for each $p \in \mathcal{P}_M$. We claim that for all primes $p\neq l$, the following congruence holds \begin{equation*}
a_p(E_{\underline{\delta}}) \equiv a_p(f) \ \text{mod } \lambda. \end{equation*} This is true for each prime $p\in \mathcal{P}_M$, since by Lemma \ref{lemma:E} we have \[a_p(E_{\underline{\delta}}) = \varepsilon_p = \psi(p) \equiv a_p(f) \ \text{mod } \lambda.\] For each prime $p\in \mathcal{P}_N$ the form $f$ is $p$-new, and another comparison of local composition factors gives (e.g.\ \cite[Proposition 2.8]{loeffler_weinstein_2011}, see also \cite{billerey_menares_2017}): \begin{equation*}
\left(\mu_1\chi_l^{(k-1)/2} \oplus \mu_2\chi_l^{(k-1)/2}\right)\mid_{W_{\mathbb{Q}_p}} \sim \left(\overline{\psi} \oplus \overline{\phi} \chi_l^{k-1}\right)\mid_{W_{\mathbb{Q}_p}}, \end{equation*} with $\mu_1,\mu_2: W_{\mathbb{Q}_p}\rightarrow\mathbb{F}_\lambda^{\times}$ characters of conductors $1$ and $p$ satisfying \[a_p(f) \equiv p^{(k-1)/2}(\mu_1(p)+\mu_2(p)) \ \text{mod } \lambda.\] It follows that, for such primes, \[a_p(f) \equiv \psi(p) + \phi(p)p^{k-1} \equiv a_p(E_{\underline{\delta}}) \ \text{mod } \lambda.\] For all other primes the claim follows from Lemma \ref{lemma:E} and the assumption that $f$ satisfies the congruence.
By the claim, and the fact that $f$ and $E_{\underline{\delta}}$ are both normalised Hecke eigenforms, we get the following congruence for all $n$ coprime to $l$ \begin{equation*}
a_n(E_{\underline{\delta}}) \equiv a_n(f) \ \text{mod } \lambda. \end{equation*} Applying the theta operator $\Theta = q\frac{d}{dq}$ of \cite{serre_2003}, we obtain $\Theta(E_{\underline{\delta}}) \equiv \Theta(f) \ \text{mod } \lambda$. However, the theta operator is injective for $l > k+1$ \cite[Corollary 3]{katz_1977}, and so we have that $E_{\underline{\delta}} \equiv f \ \text{mod } \lambda$. Since $f$ is a cusp form, $E_{\underline{\delta}}$ must vanish at all cusps mod $\lambda$. Choosing any $\gamma \in SL_2(\field{Z})$ with lower left entry $b$ such that $\mathcal{P}_M\subseteq \mathcal{P}_b$, we then find that \begin{equation*}
\text{ord}_\lambda(a_0(E_{\underline{\delta}}[\gamma]_k)) = \text{ord}_\lambda\left(C_{\gamma}\left(\prod_{p \in \mathcal{P}_M} (1-\psi^{-1}(p)\phi(p)p^{k-1})\right)\right)>0 \end{equation*} by Corollary \ref{cor:cusp}. Using the facts that $\frac{g(\psi\phi^{-1})}{g(\phi^{-1})}$, $\phi^{-1}(a)$, $\psi^{-1}(M)$ and $\psi\left(\frac{-b}{v}\right)$ are units in $\field{Z}[\psi, \phi]$, and that $l \nmid 2u$ by assumption, we find that: \begin{equation*}
\text{ord}_{\lambda}\left(L(1-k, \psi^{-1}\phi)\prod_{p \in \mathcal{P}_M} (\psi(p)-\phi(p)p^{k-1})\right)>0. \end{equation*}
\noindent By assumption, we also have that $\psi(p) - \phi(p)p^{k-2} \equiv 0 \bmod \lambda$ for each $p\in\mathcal{P}_M$, and so this implies that: \begin{equation*}
\text{ord}_{\lambda}\left(L(1-k, \psi^{-1}\phi)\prod_{p \in \mathcal{P}_M} (\psi(p)-\phi(p)p^k)\right)>0. \end{equation*} Condition (1) follows again by taking the norm down to $\mathbb{Z}[\psi,\phi]$. \end{proof}
\subsection{The direct implication}
We now give partial results towards the direct implication of Conjecture \ref{conj:general}. First, we prove that Condition (1) guarantees the existence of an eigenform in $S_k(\Gamma_0(NM),\tilde{\chi})$ satisfying the congruence. The following is the necessary extension of \cite[Theorem 2.10]{dummigan_spencer} and \cite[Theorem 3.0.1]{spencer_2018}.
\begin{theorem} \label{thm:eigenform} Let $k > 2$ and $\lambda' \nmid 6NM$ be a prime of $\mathbb{Z}[\psi,\phi]$ such that \[\text{ord}_{\lambda'}\left(L(1-k, \psi^{-1} \phi) \prod_{p \in \mathcal{P}_M}(\psi(p) - \phi(p)p^k)\right)>0.\] There exists a normalised Hecke eigenform $f \in S_k(\Gamma_1(NM), \tilde{\chi})$ and a prime $\lambda\mid \lambda'$ of $\mathcal{O}_f[\psi,\phi]$ such that for all primes $q \nmid NM$, \begin{equation*}
a_q(f) \equiv \psi(q)+\phi(q)q^{k-1} \ \text{mod } \lambda. \end{equation*} \end{theorem}
\begin{proof} Consider the Eisenstein series $E_{\underline{\delta}}$ corresponding to the choice $\delta_p=\psi(p)$ for all $p \in \mathcal{P}_M$. By Corollary \ref{cor:cusp}, for each $\gamma\in\text{SL}_2(\mathbb{Z})$ the constant term of $E_{\underline{\delta}}[\gamma]_k$ is either $0$ or is \begin{equation*}
a_0(E_{\underline{\delta}}[\gamma]_k) = C_{\gamma} \prod_{p \in \mathcal{P}_M} (1 - \psi(p) \phi^{-1}(p)p^{-k}) = (-1)^{|\mathcal{P}_M|}\frac{C_{\gamma}}{M^k}\prod_{p\in\mathcal{P}_M}(\psi(p)-\phi(p)p^k). \end{equation*}
In either case $\text{ord}_{\lambda'}(a_0(E_{\underline{\delta}}[\gamma]_k)) > 0$, and so $E_{\underline{\delta}}$ is a cusp form mod $\lambda'$. A standard argument using the Deligne-Serre Lifting Lemma and Carayol's Lemma (e.g.\ \cite[Theorem 3.0.1]{spencer_2018}) then gives a lift to a characteristic zero eigenform $f\in S_k(\Gamma_0(NM),\tilde{\chi})$ satisfying $a_q(f) \equiv a_q(E_{\underline{\delta}})\bmod \lambda$ for some prime $\lambda|\lambda'$ of $\mathcal{O}_f[\psi,\phi]$. This eigenform satisfies the required congruence by construction. \end{proof}
Taking $M=1$ in Theorem \ref{thm:eigenform} gives a result of Dummigan \cite[Proposition 2.1]{dummigan_2007}. As remarked in that paper, the eigenform satisfying the congruence must be new (since $\chi$ has conductor $N$). This completes the proof of Conjecture \ref{conj:general} in the case $M=1$. We will now see that the case $M=p$ prime can also be fully proved.
\begin{theorem} \label{thm:prime}
If $M=p$ is prime then Conjecture \ref{conj:general} is true. \end{theorem}
\begin{proof} Theorem \ref{thm:reverse} provides the reverse implication and so it suffices to prove the direct implication. By Theorem \ref{thm:eigenform}, Condition (1) provides a level $Np$ eigenform $f_0\in S_k(\Gamma_1(Np), \tilde{\chi})$ and a prime $\lambda_0\mid \lambda'$ of $\mathcal{O}_f[\psi,\phi]$ satisfying \[a_q(f_0) \equiv \psi(q)+\phi(q)q^{k-1} \ \text{mod } \lambda_0,\] for all $q\nmid Np$. We may assume that $f_0$ is an oldform, otherwise we are done. Since $\tilde{\chi}$ has conductor $N$, $f_0$ must be a lift of an eigenform $f_1\in S_k^{\text{new}}(\Gamma_0(N),\chi)$. By the Cebotarev density theorem, we have that $\overline{\rho}_{f_1,\lambda_0}\sim\overline{\rho}_{f_0,\lambda_0}$. As earlier, the congruence implies that $\overline{\rho}_{f_0,\lambda_0}^{\text{ss}}\sim\overline{\psi} \oplus \overline{\phi} \chi_l^{k-1}$. Since $l \neq p$ we see that: \begin{equation*}
a_p(f_1) \equiv \psi(p) + \phi(p)p^{k-1} \ \text{mod } \lambda_0. \end{equation*}
\noindent By Condition (2), we also have that one of the following holds: \begin{align*}\psi(p) &\equiv \phi(p)p^k \ \text{mod } \lambda_0\\ \psi(p) &\equiv \phi(p)p^{k-2} \ \text{mod } \lambda_0,\end{align*} so that \begin{equation*}
a_p(f_1) \equiv \psi(p)+\phi(p)p^{k-1} \equiv
\begin{cases}
\psi(p)(1 + p^{-1}) \ \text{mod } \lambda_0 & \text{if } \psi(p) \equiv \phi(p)p^k \ \text{mod } \lambda_0 \\
\psi(p)(1 + p) \ \text{mod } \lambda_0 & \text{if } \psi(p) \equiv \phi(p)p^{k-2} \ \text{mod } \lambda_0
\end{cases}. \end{equation*}
We now claim that the following congruence condition holds: \begin{equation} \label{eq:diamond}
a_p(f_1)^2 \equiv \chi(p)p^{k-2}(1+p)^2 \ \text{mod } \lambda_0. \end{equation} Indeed, if $\psi(p) \equiv \phi(p)p^k \ \text{mod } \lambda_0$ then \begin{align*}
\chi(p)p^{k-2}(1+p)^2 \equiv \psi(p)\phi(p)p^{k-2}(1+p)^2
\equiv \psi^2(p)p^{-2}(1+p)^2
&\equiv \psi^2(p)(1+p^{-1})^2 \\ &\equiv a_p(f_1)^2 \ \text{mod } \lambda_0. \end{align*} Alternatively, if $\psi(p) \equiv \phi(p)p^{k-2} \ \text{mod } \lambda_0$ then \begin{align*}
\chi(p)p^{k-2}(1+p)^2 \equiv \psi(p)\phi(p)p^{k-2}(1+p)^2
&\equiv \psi^2(p)(1+p)^2 \\ &\equiv a_p(f_1)^2 \ \text{mod } \lambda_0. \end{align*} A well-known theorem of Diamond (see \cite{diamond_1991}) now implies the existence of a normalised $p$-newform $f \in S_k^{p\text{-new}}(\Gamma_1(Np), \tilde{\chi})$ and a prime $\lambda \mid \lambda_0$ of $\mathcal{O}_{f_0,f}[\psi,\phi]$ satisfying \begin{equation*}
a_q(f) \equiv a_q(f_1) \ \text{mod } \lambda \end{equation*} for all primes $q\nmid Npl$. In fact, we must have that $f\in S_k^{\text{new}}(\Gamma_0(Np),\tilde{\chi})$, since $\tilde{\chi}$ has conductor $N$ and $p\nmid N$. This newform satisfies the required congruence by construction. \end{proof}
The above argument highlights the bottleneck in trying to prove the direct implication in general. Conditions (1) and (2) still imply the level raising condition for each $p\mid M$, but this only allows us to find ``local" newforms satisfying the congruence, i.e.\ a $p$-newform for each $p|M$. There seems to be no clear way to prove the existence of a ``global" newform satisfying the congruence.
\subsection{Comparison with known results}
In the special case of $N=1$ (so that $\chi = \mathds{1}_1$) Conjecture \ref{conj:general} agrees with Conjecture $4.1$ of Dummigan and Fretwell \cite{dummigan_fretwell_2014} and Conjecture $3.2$ of Billerey and Menares \cite{billerey_menares_2016}.
In the special case of arbitrary square-free $N$ and $M=p$ prime, Theorem \ref{thm:eigenform} becomes Theorem $3.0.1$ of Spencer's thesis \cite{spencer_2018}. Newform congruences were not explored in that thesis, and so Theorem \ref{thm:prime} complements this result well.
\subsection{Low weight}
For weight $k=2$, the analogue of Theorem \ref{thm:eigenform} is expected to be true in the case $N> 1$. However, when $N=1$ the condition can fail to provide an eigenform congruence. For example, when $N=1$ and $M = p\geq 5$ is prime then a famous result of Mazur \cite{mazur} says that eigenform congruences only arise when $\text{ord}_{l}\left(\frac{p-1}{12}\right)>0$, as opposed to $\text{ord}_{l}(p^2-1)>0$. Work of Ribet and Yoo considers results for more general levels \cite{ribetyoo}.
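The contrast at $k=2$ can be made concrete with a small calculation. The Python sketch below (helper names are ours) compares, for $p=13$, the primes allowed by the naive condition $\text{ord}_{l}(p^2-1)>0$ with those allowed by Mazur's criterion $\text{ord}_{l}\left(\frac{p-1}{12}\right)>0$:

```python
from fractions import Fraction

def prime_factors(n):
    # Set of prime divisors of |n|, by trial division (fine for small n).
    n = abs(n)
    p, out = 2, set()
    while p * p <= n:
        while n % p == 0:
            out.add(p)
            n //= p
        p += 1
    if n > 1:
        out.add(n)
    return out

p = 13
naive = prime_factors(p ** 2 - 1)                     # primes l with ord_l(p^2 - 1) > 0
mazur = prime_factors(Fraction(p - 1, 12).numerator)  # primes l with ord_l((p-1)/12) > 0
print(sorted(naive), sorted(mazur))
```

Here $p^2-1 = 168 = 2^3\cdot 3\cdot 7$, so the naive condition allows $l=7$ (the primes $2,3$ being excluded by the standing assumption $\lambda'\nmid 6NM$), whereas $\frac{p-1}{12}=1$ allows no prime at all; indeed $S_2(\Gamma_0(13)) = 0$, so no congruence can exist.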
Conjecture \ref{conj:general} is also invalid in general for weight $k=2$. Even when an eigenform congruence exists there may not exist a newform satisfying the congruence, despite the fact that it is possible for Condition (2) to be automatically satisfied. It would be interesting to see what the analogue of Conjecture \ref{conj:general} is in this case.
It would be very interesting to see if there are analogues of our results for weight $1$ modular forms. The existence of such eigenform congruences has been studied for $M=p$ prime in \cite{spencer_2018}, but very little seems to be known beyond this.
\section{Relation with the Bloch-Kato Conjecture} \label{section:BK} In this section, we relate Conjecture \ref{conj:general} to the Bloch-Kato Conjecture. Throughout we assume Conditions (1) and (2) and fix an eigenform $f\in S_k(\Gamma_0(NM), \tilde{\chi})$ satisfying the congruence mod $\lambda$ (guaranteed to exist by Theorem \ref{thm:eigenform}).
As earlier, the congruence implies that the composition factors of $\overline{\rho}_{f,\lambda}$ are given by: \[\overline{\rho}_{f,\lambda}^{\text{ss}} \sim \overline{\psi}\oplus\overline{\phi}\chi_l^{k-1},\] realised on the one-dimensional $\mathbb{F}_\lambda[G_{\mathbb{Q}}]$-modules $\mathbb{F}_\lambda(\psi)$ and $\mathbb{F}_\lambda(1-k)(\phi)$, a Tate twist of $\mathbb{F}_{\lambda}(\phi)$, i.e.\ $\text{Frob}_p$ acts by multiplication by $\phi(p)p^{1-k} \bmod \lambda$ if $\lambda\nmid p$.
By a result of Ribet \cite{ribet_1976}, we can choose the invariant $\mathcal{O}_{f,\lambda}$-lattice defining $\rho_{f,\lambda}$ in such a way that $\overline{\rho}_{f,\lambda}$ is realised on an $\mathbb{F}_{\lambda}$-vector space $V$ such that \begin{equation*}
0 \longrightarrow \mathbb{F}_\lambda(1-k)(\phi) \overset{\iota}\longrightarrow V \overset{\pi}\longrightarrow \mathbb{F}_\lambda(\psi) \longrightarrow 0 \end{equation*} is a non-split extension of $\mathbb{F}_\lambda[G_{\mathbb{Q}}]$-modules. Let $h: \mathbb{F}_\lambda(\psi) \longrightarrow V$ be an $\mathbb{F}_\lambda$-linear section of $\pi$ and fix $x \in \mathbb{F}_\lambda(\psi)$. Then for each $g \in G_{\mathbb{Q}}$ we find that $g(h(g^{-1}(x)))-h(x) \in \text{ker}(\pi) =\text{im}(\iota)$, giving a well-defined map \begin{align*}C: G_{\mathbb{Q}} &\longrightarrow \text{Hom}(\mathbb{F}_{\lambda}(\psi),\mathbb{F}_{\lambda}(1-k)(\phi)) \\ g &\longmapsto C(g): x \longmapsto\iota^{-1}(g(h(g^{-1}(x))) - h(x)). \end{align*}
\noindent It is easily verified that $C$ is a cocycle, whose class \begin{align*}c = [C] &\in H^1(\mathbb{Q},\text{Hom}(\mathbb{F}_{\lambda}(\psi),\mathbb{F}_{\lambda}(1-k)(\phi)))\\ &\cong H^1(\mathbb{Q}, \mathbb{F}_{\lambda}(1-k)(\psi^{-1}\phi))\end{align*} is independent of the choice of $x$ and $h$, and is non-trivial since the extension is non-split.
Consider the $\mathcal{O}_f[\psi,\phi]_{\lambda}[G_{\mathbb{Q}}]$-module $A_{f,\lambda}^{\psi,\phi}= (K_f[\psi,\phi]_{\lambda} / \mathcal{O}_f[\psi,\phi]_{\lambda})(1-k)(\psi^{-1}\phi)$ and let $A_f^{\psi,\phi}[\lambda]$ be the kernel of multiplication by $\lambda$ (abusing notation slightly, we let $\lambda$ be a uniformiser). It follows that \[A_f^{\psi,\phi}[\lambda] = \left(\frac{1}{\lambda}\mathcal{O}_f[\psi,\phi]_{\lambda}/\mathcal{O}_f[\psi,\phi]_{\lambda}\right)(1-k)(\psi^{-1}\phi)\cong \mathbb{F}_{\lambda}(1-k)(\psi^{-1}\phi)\] and so we may view $c$ as a class in $H^1(\mathbb{Q},A_f^{\psi,\phi}[\lambda])$. The following short exact sequence of $G_{\mathbb{Q}}$-modules: \begin{equation*}
0 \longrightarrow A_f^{\psi,\phi}[\lambda] \overset{i}\longrightarrow A_{f,\lambda}^{\psi,\phi} \overset{\lambda}\longrightarrow A_{f,\lambda}^{\psi,\phi} \longrightarrow 0 \end{equation*} induces a long exact sequence in Galois cohomology, a piece of which is the following: \begin{equation*}
H^0(\mathbb{Q}, A_{f,\lambda}^{\psi,\phi}) \overset{\delta}\longrightarrow H^1(\mathbb{Q}, A_f^{\psi,\phi}[\lambda]) \overset{i_*}\longrightarrow H^1(\mathbb{Q}, A_{f,\lambda}^{\psi,\phi}). \end{equation*}
\noindent Note that since $l> k+1$ we have that $(l-1)\nmid (k-1)$, and so $H^0(\mathbb{Q}, A_{f,\lambda}^{\psi,\phi})$ is trivial. It follows that $i_*$ is injective, so that the image $c' = i_*(c) \in H^1(\mathbb{Q}, A_{f,\lambda}^{\psi,\phi})$ is a non-trivial class.
The aim is to show that $c'$ is a non-trivial element of a Bloch-Kato Selmer group, which we now define (as in \cite[\S 3]{bloch_kato_2007}). First let $B_{f,\lambda}^{\psi,\phi} = K_f[\psi,\phi]_{\lambda}(1-k)(\psi^{-1}\phi)$. For a prime $q \neq l$ we define the local Bloch-Kato Selmer group attached to $B_{f,\lambda}^{\psi,\phi}$:
\[H^1_f(\mathbb{Q}_q, B_{f,\lambda}^{\psi,\phi}) = \text{ker}(H^1(D_q, B_{f,\lambda}^{\psi,\phi})
\longrightarrow H^1(I_q, B_{f,\lambda}^{\psi,\phi})),\]
\noindent where $I_q\subset D_q\subset G_{\mathbb{Q}_q}$ are inertia and decomposition subgroups at $q$. The cohomology is taken with respect to continuous cocycles and coboundaries. Note also that the subscript $f$ is standard notation and is not related to the modular form $f$. When $q=l$ the local Bloch-Kato Selmer group of $B_{f,\lambda}^{\psi,\phi}$ is defined to be
\[H^1_f(\mathbb{Q}_l, B_{f,\lambda}^{\psi,\phi}) = \text{ker}(H^1(D_l, B_{f,\lambda}^{\psi,\phi}) \longrightarrow H^1(I_l, B_{f,\lambda}^{\psi,\phi} \otimes_{\mathbb{Q}_l} B_{\text{crys}} )).\]
\noindent See \cite[\S 1]{bloch_kato_2007} for the definition of Fontaine's ring $B_{\text{crys}}$. The global Bloch-Kato Selmer group of $B_{f,\lambda}^{\psi,\phi}$ is then $H^1_f(\mathbb{Q}, B_{f,\lambda}^{\psi,\phi})$, the subgroup of $H^1(\mathbb{Q}, B_{f,\lambda}^{\psi,\phi})$ consisting of classes which have local restriction lying in $H^1_f(\mathbb{Q}_q, B_{f,\lambda}^{\psi,\phi})$ for all primes $q$.
Letting $\pi: B_{f,\lambda}^{\psi,\phi} \longrightarrow A_{f,\lambda}^{\psi,\phi}$ be the quotient map, the local Bloch-Kato Selmer group of $A_{f,\lambda}^{\psi,\phi}$ is defined to be the pushforward $H^1_f(\mathbb{Q}_q, A_{f,\lambda}^{\psi,\phi}) = \pi_*H^1_f(\mathbb{Q}_q,B_{f,\lambda}^{\psi,\phi})$. The global Bloch-Kato Selmer group of $A_{f,\lambda}^{\psi,\phi}$ is then $H^1_f(\mathbb{Q}, A_{f,\lambda}^{\psi,\phi})$, the subgroup of $H^1(\mathbb{Q}, A_{f,\lambda}^{\psi,\phi})$ consisting of classes whose local restrictions lie in $H^1_f(\mathbb{Q}_q, A_{f,\lambda}^{\psi,\phi})$ for all primes $q$. Note that, since $l \nmid 2$ we may omit $q=\infty$.
More generally, given a finite set of primes $\mathcal{P}$ with $l \notin \mathcal{P}$, we define $H^1_\mathcal{P}(\mathbb{Q}, A_{f,\lambda}^{\psi,\phi})$ to be the subgroup of $H^1(\mathbb{Q}, A_{f,\lambda}^{\psi,\phi})$ consisting of classes whose local restrictions lie in $H^1_f(\mathbb{Q}_q, A_{f,\lambda}^{\psi,\phi})$ for all primes $q \notin \mathcal{P}$.
\begin{propn}\label{prop:BK}
The congruence satisfied by $f$ gives the existence of a non-trivial element $c' \in H^1_{\mathcal{P}_{NM}}(\mathbb{Q}, A_{f,\lambda}^{\psi,\phi})$. \end{propn} \begin{proof}
We have that $\rho_{f,\lambda}$ is unramified at each $q\nmid NMl$. It follows that the restriction of $c'$ to $H^1(I_q, A_{f,\lambda}^{\psi,\phi})$ is 0 for such $q$. Then by \cite[Lemma 7.4]{brown_2007}, $c' \in H^1_f(\mathbb{Q}_q, A_{f,\lambda}^{\psi,\phi})$. Under the assumption that $l > k+1$, the representation $\rho_{f,\lambda}$ is crystalline at $l$ and we deduce that $c' \in H^1_f(\mathbb{Q}_l, A_{f,\lambda}^{\psi,\phi})$, as a consequence of \cite[Proposition 2.2]{diamond_flach_guo_2004}. Since the necessary local conditions are satisfied, we have that $c' \in H^1_{\mathcal{P}_{NM}}(\mathbb{Q}, A_{f,\lambda}^{\psi,\phi})$. \end{proof}
Let $C_{k,l}^{\psi,\phi} = (\mathbb{Q}_l/\mathbb{Z}_l)(1-k)(\psi^{-1}\phi)$. Note that since $\lambda|l$, the module $A_{f,\lambda}^{\psi,\phi}$ decomposes as a direct sum of copies of $C_{k,l}^{\psi,\phi}$. Proposition \ref{prop:BK} then implies the existence of a non-trivial element $c'\in H^1_{\mathcal{P}_{NM}}(\mathbb{Q},C_{k,l}^{\psi,\phi})$ by projection. We now discuss how the existence of such an element agrees with the Bloch-Kato conjecture.
Consider the partial Dirichlet $L$-value $L_{\mathcal{P}_{NM}}(k, \psi\phi^{-1})$, i.e. with Euler factors at primes $q \in \mathcal{P}_{NM}$ omitted. Letting $\lambda'$ be as in Condition (1) of Conjecture \ref{conj:general}, below is a reformulation of a special case of the $\lambda'$-part of the Bloch-Kato conjecture (as in \cite{diamond_flach_guo_2004}, proved in this case by Huber and Kings in \cite{huber_kings_2003}).
\begin{conj} \label{conj:BK} \[\text{ord}_{\lambda'}\left( \frac{L_{\mathcal{P}_{NM}}(k, \psi\phi^{-1})}{g(\psi\phi^{-1})(2\pi i)^k}\right)
= \text{ord}_{\lambda'} \left( \frac{\text{Tam}^0_\lambda(C_{k,l}^{\psi,\phi}) \#H^1_{\mathcal{P}_{NM}}(\mathbb{Q},C_{k,l}^{\psi,\phi})}{\#H^0(\mathbb{Q},C_{k,l}^{\psi,\phi})} \right).\] \end{conj}
\noindent We omit the definition of the Tamagawa factor $\text{Tam}^0_\lambda(C_{k,l}^{\psi,\phi})$, but note that it is trivial in this case since $l > k+1$ and $\lambda' \mid l$, by \cite[Theorem 4.1.1(iii)]{bloch_kato_2007}. We also know that $H^0(\mathbb{Q},C_{k,l}^{\psi,\phi})$ is trivial, and so \[\text{ord}_{\lambda'}\left( \frac{L_{\mathcal{P}_{NM}}(k, \psi\phi^{-1})}{g(\psi\phi^{-1})(2\pi i)^k}\right)=\text{ord}_{\lambda'}(\#H^1_{\mathcal{P}_{NM}}(\mathbb{Q},C_{k,l}^{\psi,\phi})).\] Hence if we can show that $\lambda'$ divides the partial $L$-value then we know there is a non-trivial element in the Bloch-Kato Selmer group.
\begin{propn} \label{propn:divideL}
Condition (1) of Conjecture \ref{conj:general} implies the condition \[\text{ord}_{\lambda'}\left( \frac{L_{\mathcal{P}_{NM}}(k, \psi\phi^{-1})}{g(\psi\phi^{-1})(2\pi i)^k}\right)>0.\] \end{propn}
\begin{proof}
By the functional equation for $L(s,\psi\phi^{-1})$ we have:
\begin{align*}
\frac{L_{\mathcal{P}_{NM}}(k, \psi\phi^{-1})}{g(\psi\phi^{-1})(2\pi i)^k} &= \frac{(-1)^{|\mathcal{P}_{NM}|}}{\phi(NM)(NM)^k}\frac{L(k, \psi \phi^{-1})}{ g(\psi\phi^{-1})(2\pi i)^k}\prod_{p \in \mathcal{P}_{NM}} (\psi(p)-\phi(p)p^k)\\ &=\frac{(-1)^{|\mathcal{P}_{NM}|+k}}{2(k-1)!\phi(NM)(N^2M)^k}L(1-k,\psi^{-1}\phi)\prod_{p \in \mathcal{P}_{NM}} (\psi(p)-\phi(p)p^k).
\end{align*}
The claim follows, since $l\nmid NM$ and $l>k+1$. \end{proof} The above shows that Condition (1) of Conjecture \ref{conj:general} provides a non-trivial element $c'\in H^1_{\mathcal{P}_{NM}}(\mathbb{Q},C_{k,l}^{\psi,\phi})$, and that this is in line with the Bloch-Kato Conjecture. Since Condition (2) is (conjecturally) telling us that a newform $f$ of level $NM$ can be found to satisfy the congruence, we might naively expect that the corresponding element $c'\in H^1_{
\mathcal{P}_{NM}}(\mathbb{Q},C_{k,l}^{\psi,\phi})$ is ``new", i.e.\ that $c'\notin H^1_{\mathcal{P}_{Nd}}(\mathbb{Q},C_{k,l}^{\psi,\phi})$ for each $d| M$ with $d\neq M$. However, this may not be the case, since considering the Bloch-Kato quotient for such a divisor $d$ gives: \begin{align*}
\text{ord}_{\lambda^\prime}\left( \frac{ \#H^1_{\mathcal{P}_{NM}}(\mathbb{Q},C_{k,l}^{\psi,\phi})}{\#H^1_{\mathcal{P}_{Nd}}(\mathbb{Q},C_{k,l}^{\psi,\phi})}\right) &= \text{ord}_{\lambda^\prime}\left( \frac{L_{\mathcal{P}_{NM}}(k, \psi\phi^{-1})}{L_{\mathcal{P}_{Nd}}(k, \psi\phi^{-1})} \right)\\ &=\text{ord}_{\lambda^\prime}\left( \frac{\prod_{p \in \mathcal{P}_{M/d}}(\psi(p)-\phi(p)p^k)}{\phi(M/d)(M/d)^k}\right), \end{align*} revealing that new elements can only be accounted for by local divisibility conditions of the form $\text{ord}_{\lambda'}(\psi(p)-\phi(p)p^k)>0$ (as in Case (A) in the proof of Theorem \ref{thm:reverse}). It follows that the primes $p\in\mathcal{P}_M$ satisfying $\text{ord}_{\lambda'}(\psi(p)-\phi(p)p^{k-2})>0$ and $\text{ord}_{\lambda'}(\psi(p)-\phi(p)p^k)=0$ in Condition (2) cannot contribute towards new elements.
\begin{propn}\label{prop:BK1}
Let $d|M$ and $d\neq M$. Then there exists a prime $p\in \mathcal{P}_{M/d}$ such that $\text{ord}_{\lambda'}(\psi(p)-\phi(p)p^k)>0$ if and only if \[\text{ord}_{\lambda'}\left(\frac{\#H^1_{\mathcal{P}_{NM}}(\mathbb{Q},C_{k,l}^{\psi,\phi})}{\#H^1_{\mathcal{P}_{Nd}}(\mathbb{Q},C_{k,l}^{\psi,\phi})}\right) > 0.\] In particular, $\text{ord}_{\lambda'}(\psi(p)-\phi(p)p^k)>0$ if and only if there exists an element $c'_p\in H^1_{\mathcal{P}_{NM}}(\mathbb{Q},C_{k,l}^{\psi,\phi})$ that is ``$p$-new", i.e.\ $c'_p\notin H^1_{\mathcal{P}_{NM/p}}(\mathbb{Q},C_{k,l}^{\psi,\phi})$.
\end{propn}
\begin{proof}
This follows from the above discussion.
\end{proof}
\noindent The above should be roughly compared with the situation in Theorem \ref{thm:prime} and the discussion following it, where Conditions (1) and (2) alone were only able to guarantee the existence of $p$-newforms satisfying the congruence for each $p\in\mathcal{P}_M$, as opposed to a genuine newform. Based on this, we conjecture the following.
\begin{conj}\label{conj:pnew}
Let $\mathcal{S} = \{p\in \mathcal{P}_M\,|\,\text{ord}_{\lambda'}(\psi(p)-\phi(p)p^k)>0\}$. Then the class $c'\in H^1_{\mathcal{P}_{NM}}(\mathbb{Q},C_{k,l}^{\psi,\phi})$ provided by Proposition \ref{prop:BK} is ``$p$-new" for each $p\in \mathcal{S}$.
\end{conj}
Now suppose that $\text{ord}_{\lambda'}(\psi(p)-\phi(p)p^k) = 0$ for every $p\in\mathcal{P}_{M/d}$. In order to satisfy Condition (2) we would then have the following conditions for each $p\in\mathcal{P}_{M/d}$: \begin{align*}&\text{ord}_{\lambda'}(\psi(p)-\phi(p)p^{k-2})>0,\\ &\text{ord}_{\lambda'}(L(1-k,\psi^{-1}\phi))>0.\end{align*} Here, the situation is not fully clear (at least not to the authors). One conclusion that can be made in this case is the following.
\begin{propn}\label{prop:BK2}
Let $d|M$ and $d\neq M$. If $\text{ord}_{\lambda'}(\psi(p)-\phi(p)p^{k-2})>0$ for each $p\in\mathcal{P}_{M/d}$ then \[\text{ord}_{\lambda'}\left(\frac{\#H^1_{\mathcal{P}_{NM}}(\mathbb{Q},C_{k-2,l}^{\psi,\phi})}{\#H^1_{\mathcal{P}_{Nd}}(\mathbb{Q},C_{k-2,l}^{\psi,\phi})}\right) \geq \#\mathcal{P}_{M/d}.\]
In particular, if $\text{ord}_{\lambda'}(\psi(p)-\phi(p)p^{k-2})>0$ then there exists an element $c''_p\in H^1_{\mathcal{P}_{NM}}(\mathbb{Q},C_{k-2,l}^{\psi,\phi})$ that is ``$p$-new", i.e.\ $c''_p\notin H^1_{\mathcal{P}_{NM/p}}(\mathbb{Q},C_{k-2,l}^{\psi,\phi})$.
\end{propn}
\begin{proof}
This follows by considering the Bloch-Kato quotient of weight $k-2$: \begin{align*}\text{ord}_{\lambda^\prime}\left( \frac{ \#H^1_{\mathcal{P}_{NM}}(\mathbb{Q},C_{k-2,l}^{\psi,\phi})}{\#H^1_{\mathcal{P}_{Nd}}(\mathbb{Q},C_{k-2,l}^{\psi,\phi})}\right) &= \text{ord}_{\lambda^\prime}\left( \frac{L_{\mathcal{P}_{NM}}(k-2, \psi\phi^{-1})}{L_{\mathcal{P}_{Nd}}(k-2, \psi\phi^{-1})} \right)\\ &=\text{ord}_{\lambda^\prime}\left( \frac{\prod_{p \in \mathcal{P}_{M/d}}(\psi(p)-\phi(p)p^{k-2})}{\phi(M/d)(M/d)^{k-2}}\right).\end{align*} Each factor in the numerator is divisible by $\lambda'$ by assumption, while the denominator is a $\lambda'$-adic unit since $\lambda'\nmid NM$, giving the bound.
\end{proof}
Alternatively, the divisibility condition in the above Proposition implies Conditions (1) and (2) for weight $k-2$. By Theorem \ref{thm:eigenform} there exists a congruence between the eigenvalues of an eigenform $g\in S_{k-2}(\Gamma_0(NM/d),\tilde{\chi})$ and the eigenvalues of $E_{k-2}^{\psi,\phi}$ (at ``good" primes). By a similar argument to earlier, this supplies a non-trivial element $c^{''}\in H^1_{\mathcal{P}_{NM/d}}(\mathbb{Q},C_{k-2,l}^{\psi,\phi})$. Conjecture \ref{conj:general} would let us take $g$ to be a newform, and so we are led to conjecture the following.
\begin{conj}\label{conj:pnew2}
Let $\mathcal{S} = \{p\in \mathcal{P}_M\,|\,\text{ord}_{\lambda'}(\psi(p)-\phi(p)p^{k-2})>0\}$. Then the class $c^{''}\in H^1_{\mathcal{P}_{NM/d}}(\mathbb{Q},C_{k-2,l}^{\psi,\phi})$ constructed above is $p$-new for each $p\in \mathcal{S}$.
\end{conj}
\noindent In summary, we have considered the two different ways in which Conditions (1) and (2) can hold. By Bloch-Kato, each implies the existence of a collection of ``$p$-new'' elements in a certain Bloch-Kato Selmer group (i.e.\ Propositions \ref{prop:BK1} and \ref{prop:BK2}). Further, we expect that both collections of $p$-new elements should be explained by the existence of two ``new'' elements, each arising from a newform congruence implied by Conjecture \ref{conj:general} (i.e.\ Conjectures \ref{conj:pnew} and \ref{conj:pnew2}). In the case of $M=p$ prime, we remark that Theorem \ref{thm:prime} implies the truth of these conjectures, but for more general square-free $M$, little seems to be known.
\section{Examples} \label{section:eg}
Below, we give brief computational examples of Conjecture \ref{conj:general}, using data provided by the LMFDB database \cite{LMFDB}. First we consider examples demonstrating Theorem \ref{thm:prime}.
\begin{eg} Take $N=5$, $M=2$ and $k=8$. Also let $\psi = \mathds{1}$ and $\phi = \left(\frac{\cdot}{5}\right)$ (so that $\chi = \phi$). The only prime $\lambda'$ of $\mathbb{Z}[\psi,\phi] = \mathbb{Z}$ satisfying the conditions of Theorem \ref{thm:prime} is $\lambda' = 257$.
\noindent Indeed, the newform $f \in S_8^{\text{new}}(\Gamma_0(10), \tilde{\chi})$ with LMFDB label [10.8.b.a] satisfies the congruence \[a_q(f) \equiv 1+\left(\frac{q}{5}\right)q^{7} \bmod \lambda\] for all $q\neq 2,5$ and for some fixed prime $\lambda\mid \lambda'$ of $\mathcal{O}_f[\psi, \phi] = \mathcal{O}_f$ (the ring of integers of the quartic field generated by a root of $x^4-15x^2+64$). \end{eg}
\begin{eg} Take $N=7$, $M=2$ and $k=7$. Also let $\psi = \mathds{1}$ and $\phi$ be the primitive mod $7$ character satisfying $\phi(3) = \zeta_6$ (so that $\chi = \phi$). The only prime $\lambda'$ of $\mathbb{Z}[\psi,\phi] = \mathbb{Z}[\zeta_6]$ satisfying the conditions of Theorem \ref{thm:eigenform} is $\lambda' = \langle 337, \zeta_6+128\rangle$ (lying above $l = 337$).
\noindent Indeed, the newform $f \in S_7^{\text{new}}(\Gamma_0(14), \tilde{\chi})$ with LMFDB label [14.7.d.a] satisfies the congruence \[a_q(f) \equiv 1 + \phi(q)q^6 \bmod \lambda\] for all $q\neq 2,7$ and for some fixed prime $\lambda\mid \lambda'$ of $\mathcal{O}_f[\psi,\phi] = \mathcal{O}_f[\zeta_6]$ (here $\mathcal{O}_f$ is the ring of integers of a degree $8$ number field). \end{eg}
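The congruence primes in the two examples above can be checked by hand. The following sketch carries out the arithmetic, under the assumption that the relevant quantity is $\psi(p)-\phi(p)p^{k}$ at $p=M=2$, as in Conjecture \ref{conj:pnew}; the helper name `norm_zeta3` is ours, not from the text.

```python
# Example 1: N=5, M=2, k=8, psi trivial, phi the Legendre symbol mod 5.
phi2 = -1                      # 2 is a quadratic non-residue mod 5
val1 = 1 - phi2 * 2**8         # psi(2) - phi(2) * 2^8
assert val1 == 257             # the congruence prime lambda' = 257

def norm_zeta3(a, b):
    """Field norm of a + b*zeta_3 from Q(zeta_3) down to Q: a^2 - a*b + b^2."""
    return a * a - a * b + b * b

# Example 2: N=7, M=2, k=7, phi primitive mod 7 with phi(3) = zeta_6.
# Since 2 = 3^2 mod 7, we get phi(2) = zeta_6^2 = zeta_3, so
# psi(2) - phi(2)*2^7 = 1 - 128*zeta_3, whose norm should be divisible by l = 337.
val2 = norm_zeta3(1, -128)
print(val1, val2, val2 % 337)  # -> 257 16513 0
```

Indeed $16513 = 337\cdot 49$, consistent with $\lambda'$ lying above $l=337$.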
We finish with an example demonstrating Conjecture \ref{conj:general} in a case where $M$ is composite.
\begin{eg} Take $N=7, M=6$ and $k=6$. Let $\psi = \mathds{1}$ and $\phi$ be the primitive mod $7$ character satisfying $\phi(3) = \zeta_3^2$ (so that $\chi = \phi$). The only prime $\lambda'$ of $\mathbb{Z}[\psi,\phi] = \mathbb{Z}[\zeta_6]$ satisfying the conditions of Theorem \ref{thm:eigenform} is $\lambda' = \langle 73, \zeta_6+64\rangle$ (lying above $l = 73$).
\noindent Indeed, the newform $f\in S_6^{\text{new}}(\Gamma_0(42),\tilde{\chi})$ with LMFDB label [42.6.e.c] satisfies the congruence: \[a_q(f) \equiv 1 + \phi(q)q^5\bmod \lambda\] for all primes $q\neq 2,3,7$ and for a fixed prime $\lambda\mid \lambda'$ of $\mathcal{O}_{f}[\psi,\phi] = \mathcal{O}_{f}[\zeta_6]$ (here $\mathcal{O}_f$ is the ring of integers of a degree $4$ number field). \end{eg}
\end{document} |
\begin{document}
\title{Stationary subspace analysis based on second-order statistics}
\begin{abstract} In stationary subspace analysis (SSA) one assumes that the observable $p$-variate time series is a linear mixture of a $k$-variate nonstationary time series and a $(p-k)$-variate stationary time series. The aim is then to estimate the unmixing matrix which transforms the observed multivariate time series into stationary and nonstationary components. In the classical approach multivariate data are projected onto stationary and nonstationary subspaces by minimizing a Kullback--Leibler divergence between Gaussian distributions, and the method only detects nonstationarities in the first two moments. In this paper we consider SSA in a more general multivariate time series setting and propose SSA methods which are able to detect nonstationarities in the mean, in the variance, in the autocorrelation, or in all of them. Simulation studies illustrate the performance of the proposed methods, and show that especially the method that detects all three types of nonstationarity performs well in various time series settings. The paper is concluded with an illustrative example. \end{abstract}
\keywords{Autocorrelation \and Joint diagonalization \and Multivariate time series \and Second-order stationary \and Supervised dimension reduction}
\section{Introduction} \label{S:1}
Multivariate time series data are observed in many application areas and are often very challenging to model. To ease the analysis, it is often assumed that the observed time series can be decomposed into latent components with different exploitable properties. One common approach is to apply one of the blind source separation (BSS) methods to the observed data as a pre-processing tool in order to estimate the latent components. In a special case of BSS, namely second-order source separation (SOS), it is assumed that the latent components are uncorrelated second-order stationary time series, and in the independent component time series model the assumption of uncorrelatedness is replaced with the assumption of independence (see {\it e.g.} \cite{Miettinenetal:2016}, and references therein). In nonstationary source separation (NSS), as defined in \cite{ChoiCichocki:2000}, the variances of the uncorrelated sources are allowed to change over time. For a recent review of these and other variants of BSS, see \cite{ComonJutten:2010,NordhausenOja2018,PanMatilainenTaskinenNordhausen2021}.
In stationary subspace analysis (SSA), the underlying model states that the observable $p$-variate time series is a linear mixture of a $k$-variate nonstationary time series and a $(p-k)$-variate stationary time series. The aim is then to factorize the multivariate time series into stationary and nonstationary components. Such a factorization is useful in several real world applications, {\it e.g.}, in biomedical signal processing, speech recognition, image analysis and econometrics, where either the stationary or the nonstationary components are of interest. SSA was introduced in~\cite{BunauMeineckeKiralyMuller:2009}, where the matrix that separates the subspaces of stationary and nonstationary components was found by dividing the observed multivariate time series into $K$ segments and minimizing a Kullback-Leibler (KL) divergence between Gaussian distributions, thus measuring differences in means and covariances across segments. We refer to this method as KL-SSA. Notice that in KL-SSA the time series are considered stationary if the first two moments are time-invariant; stationarity with respect to the autocorrelations is not taken into account. \cite{BunauMeineckeKiralyMuller:2009} discussed the theoretical properties of the method and proposed a sequential likelihood ratio test for testing the dimension of the stationary subspace. The method was applied to EEG data analysis in \cite{BunauMeineckeKiralyMuller:2009,Bunauetal:2010} and to computer vision in \cite{Meinecketeal:2009}, and extended to change-point detection in \cite{BlytheBunauMeineckeMuller:2012,Blytheetal:2013}, where the nonstationary components were of main interest. In \cite{Horevetal:2016,Kaltenstadleretal:2018} several alternatives to the Kullback-Leibler divergence in SSA were given.
In this paper we propose a novel SSA algorithm that detects nonstationarities in the first two moments as well as in the autocorrelations. The method can be seen as an extension of analytic SSA (ASSA), an SSA method suggested in \cite{Haraetal:2010}, to a more general multivariate time series setting. Notice that \cite{Sundarajanetal:2017} considered an alternative extension via a frequency domain approach. As another contribution we make a connection between SSA and supervised dimension reduction (SDR) methods. The paper is organized as follows. In Section~\ref{sec:SSA} we first review the model assumptions underlying SSA and discuss the two classical methods for performing SSA, that is, KL-SSA and ASSA. We also show how SSA can be seen as a supervised dimension reduction method, and propose and compare methods for detecting different types of nonstationarity. Section~\ref{sec:practical} discusses practical issues and the identifiability of stationary and nonstationary components. Section~\ref{sec:simulations} presents several simulation studies comparing the performance of the different methods under various scenarios, as well as an illustrative example. The paper is concluded with some discussion in Section~\ref{sec:discussion}.
\section{Stationary subspace analysis} \label{sec:SSA}
In this section we review the model underlying SSA, show some connections between SSA and supervised dimension reduction methods, and suggest methods to detect changes in the mean, the variance, the correlation structure, or in all of them. Tools to identify the type of nonstationarity the components exhibit are also given.
\subsection{Model formulation and notations}
Let $\bo{x_t}$ be an observable $p$-variate nonstationary time series which can be decomposed into a stationary part and a nonstationary part according to the SSA model as defined in \cite{BunauMeineckeKiralyMuller:2009} \begin{equation} \label{eq::SSAmodel} \bo x_t = \bo A \bo z_t = [\bo A_{\bo s}\ \bo A_{\bo n} ] \left(
\begin{array}{c}
\bo s_t \\
\bo n_t \\
\end{array}
\right), \end{equation} where a $p$-variate latent time series $\bo z_t$ consists of a $k$-variate nonstationary time series $\bo n_t$ and a $(p-k)$-variate stationary time series $\bo s_t$. The two components are mixed using a full-rank $p \times p$ matrix $\bo A$. Here matrices $\bo A_{\bo s}$ and $\bo A_{\bo n}$ are $p\times (p-k)$ and $p\times k$ matrices, respectively. The aim of SSA is to estimate the unmixing matrix $\bo W=\bo A^{-1}$ so that $\bo W\bo x_t$ is partitioned into stationary and nonstationary time series. Depending on the assumptions on $\bo s_t$ and $\bo n_t$, different ways to estimate $\bo W$ are suggested in the literature \cite{BunauMeineckeKiralyMuller:2009,Haraetal:2010}. In this paper we make the following assumptions on the latent time series: \begin{itemize}
\item[(A1)] $E(\bo s_t)= \bo 0$, $\cov(\bo s_t) = \bo I_{p-k}$ and $\cov(\bo s_t,\bo s_{t+\tau})<\infty$,
\item[(A2)] $E(\bo n_t)<\infty$ and $\cov(\bo n_t) = \bo D_t$ where $\bo D_t$ is a diagonal matrix with positive diagonal elements, \item[(A3)] $\cov(\bo s_t, \bo n_{t+\tau}) = \bo 0$ for all $\tau \geq 0$. \end{itemize} We thus assume that the $(p-k)$ time series in $\bo s_t$ are second-order stationary with finite autocovariances. The first assumption fixes the location and covariance matrix of the stationary part for convenience. The second assumption states that the $k$ nonstationary components in $\bo n_t$ have finite first moments and are uncorrelated, and the third assumption states that the nonstationary components are uncorrelated with the stationary components.
Even under these assumptions the model is not well-defined, as there are many decompositions of $\bo x_t$ such that the assumptions hold. Here we are interested in the one with the minimal value of $k$. The stationary components are only specified up to a rotation by an orthogonal matrix, whereas the nonstationary components can be marginally rescaled, shifted and also rotated. Therefore, for convenience of presentation, we make the following additional assumption on $\bo n_t$: \begin{itemize}
\item[(A4)] $\sum_{t\in T} E(\bo n_t)= \bo 0$ and $\sum_{t \in T} \cov(\bo n_t) = \sum_{t \in T} \bo D_t = \bo I_k$. \end{itemize} Hence for the observed time span $T$ the location and scale are considered fixed.
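To make the model concrete, the following minimal simulation sketch (in Python with NumPy; all names and parameter values are our own illustrative choices, not from the paper) generates data from model~(\ref{eq::SSAmodel}) with two stationary AR(1) components and one nonstationary component whose variance changes halfway through the series:

```python
import numpy as np

rng = np.random.default_rng(0)
T, p, k = 1000, 3, 1              # length, observed dimension, nonstationary dimension

# Stationary part s_t: AR(1) series, standardised to zero mean / unit variance.
s = np.zeros((T, p - k))
for j in range(p - k):
    e = rng.standard_normal(T)
    for t in range(1, T):
        s[t, j] = 0.5 * s[t - 1, j] + e[t]
s = (s - s.mean(0)) / s.std(0)

# Nonstationary part n_t: white noise whose variance triples halfway through.
n = rng.standard_normal((T, k))
n[T // 2:] *= 3.0
n = (n - n.mean(0)) / n.std(0)    # impose (A4) over the observed span

z = np.hstack([s, n])             # z_t = (s_t, n_t)
A = rng.standard_normal((p, p))   # full-rank mixing matrix (almost surely)
x = z @ A.T                       # observed series x_t = A z_t, one row per time point
```

An SSA method applied to the rows of `x` should then recover the nonstationary direction up to sign and scale.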
The common idea in \cite{BunauMeineckeKiralyMuller:2009,Haraetal:2010} was to divide the observed time series into $K$ segments, then compute for each interval the first and the second order statistics and to measure their nonconstancy across segments. To be more exact, assume that $\bo x_t$ is observed at time points $1,\ldots,T$, and let $T_1,\ldots,T_K$ denote $K$ disjoint subsets of $T$, {\it e.g.}, non-overlapping intervals. Let then \[
\bo m_{T_i}(\bo x_t) = \frac{1}{|T_i|} \sum_{t\in T_i} \bo x_t \] and \[
\bo S_{\tau,T_i}(\bo x_t)= \frac{1}{|T_i|-\tau} \sum_{t\in T_i} (\bo x_t - \bo m_{T_i}(\bo x_t)) (\bo x_{t+\tau} - \bo m_{T_i}(\bo x_t))^\top \] denote the sample mean and the sample (auto)covariance matrix computed using the data from interval $T_i$, respectively. In \cite{BunauMeineckeKiralyMuller:2009,Haraetal:2010} the unmixing matrix revealing the stationary sources was chosen so that the means and covariance matrices vary the least across all intervals. It was also shown that using the standardised time series defined as \begin{equation} \label{stand} \bo y_t = \bo S_{0,T}(\bo x_t)^{-1/2}(\bo x_t - \bo m_{T}(\bo x_t)), \end{equation} the search can be restricted to orthogonal matrices only. In \cite{BunauMeineckeKiralyMuller:2009} the KL divergence between the Gaussian distribution on each interval and the standard normal distribution was minimized. Further, in \cite{Haraetal:2010} a second-order Taylor approximation was applied to the objective function of KL-SSA and the SSA problem was shown to reduce to a simple generalized eigenvalue problem. We review the resulting analytic SSA (ASSA) method in Section~\ref{sec:comb}. Notice that in both approaches the focus was on detecting stationarity deviations in mean and in variance but not in autocorrelation.
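The standardisation~(\ref{stand}) and the segmentwise statistics $\bo m_{T_i}$ and $\bo S_{\tau,T_i}$ can be sketched in NumPy as follows (a minimal implementation; the function names are ours):

```python
import numpy as np

def whiten(x):
    """Standardise as in the text: y_t = S_{0,T}^{-1/2} (x_t - m_T)."""
    m = x.mean(axis=0)
    S = np.cov(x.T, bias=True)                 # S_{0,T}, symmetric positive definite
    w, V = np.linalg.eigh(S)
    S_inv_half = V @ np.diag(w ** -0.5) @ V.T  # inverse symmetric square root
    return (x - m) @ S_inv_half, S_inv_half

def interval_stats(y, intervals, tau=0):
    """Sample mean and lag-tau (auto)covariance on each interval T_i."""
    means, covs = [], []
    for idx in intervals:
        yi = y[idx]
        m = yi.mean(axis=0)
        c = (yi[:len(yi) - tau] - m).T @ (yi[tau:] - m) / (len(yi) - tau)
        means.append(m)
        covs.append(c)
    return means, covs
```

With `tau=0` the second function returns the interval covariance matrices $\bo S_{0,T_i}$ used by KL-SSA and ASSA.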
\subsection{Connection to supervised dimension reduction} \label{sec:supervised}
Before going into specific SSA methods we point out a connection between SSA and supervised dimension reduction (SDR). In supervised dimension reduction, we have a response $z$ and a $p$-variate predictor vector $\bo x$, and the goal is to find a $k \times p$ matrix $\bo W$ such that $\bo W^\top \bo x$ carries all relevant information about $z$, that is, $\bo x \perp\!\!\!\perp z\,|\,\bo W^\top \bo x$ with the smallest possible value of $k \ll p$. See \cite{MaZhu2013,Li:2018} for some general overviews. Two popular methods in this context are sliced inverse regression (SIR) \cite{Li:1991} and sliced average variance estimation (SAVE) \cite{Cook:2000}. For a sample $(z_j, \bo x_j)$, $j=1,\ldots,n$, both methods start by whitening $\bo x_j$, that is, by computing $\bo y_j = \bo S_{0,n}^{-1/2}(\bo x_j-\bo m_n(\bo x_j))$, and then group the whitened observations into $K$ so-called slices $H_1,\ldots, H_K$ according to their response values $z_j$. By a slight abuse of the notation from above one is then interested in the slice means $\bo m_i(\bo y)=\frac{1}{|H_i|} \sum_{j\in H_i} \bo y_j$ and slice covariance matrices $\bo S_i(\bo y) = \frac{1}{|H_i|} \sum_{j \in H_i} (\bo y_j-\bo m_i)(\bo y_j-\bo m_i)^\top$, where $i=1,\ldots, K$. The SIR estimate is then obtained using the covariance of the slice means, and the SAVE estimate is based on the average of the differences between the slice covariance matrices and the covariance matrix of all $\bo y_j$, which due to the whitening is $\bo I_p$.
\cite{CookCritchley2000} show that SAVE is more comprehensive than SIR in detecting the subspace of interest and give situations where SIR fails. The increased flexibility of SAVE comes however at the cost of requiring more data and being more sensitive to the number of slices \cite{CookCritchley2000}. Thus, while theoretically superior, in practice SIR is still preferred. Also combinations of SIR and SAVE are regularly investigated, see for example \cite{YeWeiss2003,ZhuOhtakiLi2007,ShakerPrendergast2011} for more details.
If the response $z_j$ is categorical, slicing is usually done by the unique values of $z_j$, while for numeric values of $z_j$ the slices are usually chosen so that they contain approximately the same number of observations. In that case the number of slices has to be chosen, and the choice is a trade-off between having many slices in order to find the directions and having enough data points in each slice in order to estimate the slice statistics sufficiently well. SIR is considered quite robust with regard to the selection of the number of slices while SAVE is considered quite sensitive. For example, for SIR the number of slices should exceed the dimension of the subspace to be detected.
In the context of SSA one can now think of the intervals as the response, where $z$ gets a distinct value in each different interval. Then naturally only the nonstationary components carry information about the ``response''. In the following sections we consider SSA methods that correspond to sliced inverse regression (SIR) \cite{Li:1991} and sliced average variance estimation (SAVE) \cite{Cook:2000}, respectively. Section~\ref{sec:cor} proposes a method for detecting stationarity deviations in autocorrelation as these cannot be detected by SIR and SAVE type approaches. Further, inspired by the success of hybrid methods in SDR we also consider a combined approach in Section~\ref{sec:comb}.
Note that SDR methods like SIR and SAVE, and combinations of these, have also been extended to time series settings, see \cite{MatilainenCrouxNordhausenOja2017,MatilainenCrouxNordhausenOja2019}. However, these methods again focus on regression modeling and not on detecting nonstationary subspaces.
\subsection{Nonstationarity in mean} \label{sec:mean}
Consider first an SSA method that aims at detecting deviations from stationarity in the mean. Examples of such nonstationary components include trend, seasonal and cyclic components. Assume now that $\bo x_t$ is generated from the SSA model~(\ref{eq::SSAmodel}) and write $\bo y_t$ for the time series standardised using~(\ref{stand}). Let then \begin{equation} \label{eq:mean}
\bo M_m = \sum_{i=1}^{K} \frac{|T_i|}{T}\bo m_{T_i}(\bo y_t) \bo m_{T_i}(\bo y_t)^\top
\end{equation} be the covariance between the means of the different intervals weighted by the interval length. Notice that with seasonal and cyclic components, the intervals have to be chosen so that they do not correspond to the cycles. This is also illustrated in Section~\ref{sec:CompEx}.
It is easy to see that the eigenvalues of $\bo M_m$ that correspond to the components that are stationary in mean should be zero. Further, the components with non-zero eigenvalues correspond to components that are nonstationary in mean. Hence write the eigenvalue decomposition of $\bo M_m$ as \[ \bo M_m = \bo U_m \bo D_m \bo U_m^\top, \] where $\bo D_m$ is a $p\times p$ diagonal matrix with the eigenvalues of $\bo M_m$ as diagonal values, and $\bo U_m = (\bo U_{m,s}^\top, \bo U_{m,n}^\top)^\top$ includes the eigenvectors of $\bo M_m$ arranged so that $\bo U_{m,n}$ is the $k_m \times p$ matrix containing the eigenvectors belonging to the non-zero eigenvalues as rows and the $(p-k_m) \times p$ matrix $\bo U_{m,s}$ includes the remaining ones. The rows of the resulting unmixing matrices $\bo W_{m,n} = \bo U_{m,n} \bo S_{0,T}(\bo x_t)^{-1/2}$ and $\bo W_{m,s} = \bo U_{m,s} \bo S_{0,T}(\bo x_t)^{-1/2}$ then generate the nonstationary and stationary subspaces, respectively. Naturally $k_m \leq k$, with equality only if for all nonstationary components the means differ at least between some of the chosen intervals. As this method is based on interval means it corresponds basically to SIR, and we accordingly denote it SSAsir.
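A minimal sketch of SSAsir in NumPy (assuming `y` holds the standardised series with one row per time point; in practice $k_m$ is chosen afterwards by inspecting the returned eigenvalues):

```python
import numpy as np

def ssa_sir(y, intervals):
    """SSAsir: eigendecomposition of the weighted between-interval
    covariance of means M_m; large eigenvalues flag directions that
    are nonstationary in mean."""
    T = sum(len(idx) for idx in intervals)
    p = y.shape[1]
    M = np.zeros((p, p))
    for idx in intervals:
        m = y[idx].mean(axis=0)
        M += len(idx) / T * np.outer(m, m)
    vals, vecs = np.linalg.eigh(M)
    order = np.argsort(vals)[::-1]           # sort eigenvalues descending
    return vals[order], vecs[:, order].T     # rows are candidate unmixing directions
```

Multiplying the leading rows by $\bo S_{0,T}(\bo x_t)^{-1/2}$ then gives $\bo W_{m,n}$ as in the text.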
\subsection{Nonstationarity in variance} \label{sec:var}
Let us then consider a SAVE type method for detecting deviations from stationarity in the variance. Consider again the standardised time series $\bo y_t$ and let now \begin{equation} \label{eq:var}
\bo M_v = \sum_{i=1}^{K} \frac{|T_i|}{T} (\bo I_p- \bo S_{0,T_i}(\bo y_t))^2, \end{equation} where $\bo A^2=\bo A\bo A^\top$, measure the deviation of the covariance computed on intervals $T_1,\dots,T_K$ from the global covariance $\bo I_p$.
Again the eigenvalues of $\bo M_v$ that correspond to the components that are nonstationary in variance should be non-zero. Let the eigenvalue decomposition thus be \[ \bo M_v = \bo U_v \bo D_v \bo U_v^\top, \] where $\bo D_v$ is a $p\times p$ diagonal matrix with the eigenvalues of $\bo M_v$ as diagonal values, $\bo U_v = (\bo U^\top_{v,s}, \bo U^\top_{v,n})^\top$ are the eigenvectors of $\bo M_v$ arranged so that $\bo U_{v,n}$ is the $k_v \times p$ matrix containing the eigenvectors belonging to the non-zero eigenvalues as rows and $\bo U_{v,s}$ is the $(p-k_v) \times p$ matrix containing the remaining eigenvectors as rows. Again $k_v \leq k$. The transforming matrices to the two subspaces are accordingly $\bo W_{v,n} = \bo U_{v,n} \bo S_{0,T}(\bo x_t)^{-1/2}$ and $\bo W_{v,s} = \bo U_{v,s} \bo S_{0,T}(\bo x_t)^{-1/2}$. As illustrated in Section~\ref{sec:CompEx}, although this approach is designed to detect components with nonstationary variances, it may also detect components which are nonstationary in mean since, if the mean changes, the variances in the intervals might also change. We will refer to this method in the following as SSAsave.
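The corresponding sketch for SSAsave, built on the matrix $\bo M_v$ of~(\ref{eq:var}) (again a minimal NumPy implementation with our own function name, assuming `y` is the standardised series):

```python
import numpy as np

def ssa_save(y, intervals):
    """SSAsave: eigendecomposition of M_v; non-zero eigenvalues flag
    directions whose variance differs between intervals."""
    T = sum(len(idx) for idx in intervals)
    p = y.shape[1]
    M = np.zeros((p, p))
    for idx in intervals:
        S = np.cov(y[idx].T, bias=True)      # interval covariance S_{0,T_i}
        D = np.eye(p) - S
        M += len(idx) / T * (D @ D.T)        # A^2 = A A^T as in the text
    vals, vecs = np.linalg.eigh(M)
    order = np.argsort(vals)[::-1]
    return vals[order], vecs[:, order].T
```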
\subsection{Nonstationarity in autocorrelation} \label{sec:cor}
The SIR and SAVE type methods are able to detect components which are nonstationary with respect to the first and the second moments. To detect nonstationarities in autocorrelation we need a statistic to measure the nonconstancy of the autocorrelation across segments. Let such a statistic be defined (for the standardised time series) as \begin{equation} \label{eq:cor}
\bo M_\tau = \sum_{i=1}^{K} \frac{|T_i|}{T} (\bo S_{\tau,T}(\bo y_t)- \bo S_{\tau,T_i}(\bo y_t))^2, \end{equation} where $\tau \geq 1$. The matrix $\bo M_\tau$ thus measures the deviation of the autocovariance matrix computed on intervals $T_1,\dots, T_K$ from the global autocovariance matrix $\bo S_{\tau,T}$. Using again the eigenvalue-eigenvector decomposition \[ \bo M_\tau = \bo U_\tau \bo D_\tau \bo U_\tau^\top, \] and separating the eigenvectors of $\bo M_\tau$ corresponding to non-zero and zero eigenvalues yields the $k_{\tau}\times p$ matrix $\bo U_{\tau,n}$ and the $(p-k_{\tau}) \times p$ matrix $\bo U_{\tau,s}$, where $k_\tau$ is the number of non-zero eigenvalues with $k_\tau \leq k$. The unmixing matrix estimates are then $\bo W_{\tau,n} = \bo U_{\tau,n} \bo S_{0,T}(\bo x_t)^{-1/2}$ and $\bo W_{\tau,s} = \bo U_{\tau,s} \bo S_{0,T}(\bo x_t)^{-1/2}$. For this method to work, at least some of the intervals are required to have different autocovariances. As this method is aimed at detecting changes in the correlation structure we denote it as SSAcor.
\subsection{Comparison of SSAsir, SSAsave and SSAcor} \label{sec:CompEx}
In this section we visualize how SSAsir, SSAsave and SSAcor work and in which situations they fail. For that purpose we plot four nonstationary time series of length 3000 which are all standardized to have mean zero and unit variance. We consider $K=6$ equal-sized intervals and compute for each interval the mean, the variance and the autocovariance, and visualize these quantities. The three quantities are illustrated in Figures~\ref{Vis1} and~\ref{Vis2} using dots, vertical bars and horizontal bars, respectively. The methods detect nonstationarities when there is variation between the interval means (SSAsir), when the interval variances differ from 1, which means that the intervals must have different variances (SSAsave), and when the interval autocovariances differ from the global autocovariance (SSAcor). By combining the values computed at each interval, we then obtain the global statistics of interest for each time series and for each method. These statistics are reported in the top-left corners of the figures, and for the methods to work the values need to be non-zero.
\begin{figure}
\caption{Visualization of SSAsir, SSAsave and SSAcor for a component with nonstationarity in mean (left panel trend, right panel seasonality). The black dots represent the interval means, the heights of the vertical red bars the interval variances, and the widths of the blue horizontal bars the interval autocovariances.}
\label{Vis1}
\end{figure}
Figure~\ref{Vis1} shows two different types of nonstationarity in mean. The left panel has a mean level shift and the right panel has a strong seasonal pattern. For the mean level shift the mean values are clearly non-constant, but also the variances in the intervals differ. Thus in this case both SSAsir and SSAsave work, though the statistic of SSAsir is much larger than that of SSAsave. Notice that SSAcor does not work at all in this case. In the case of the seasonal time series the example shows that if the intervals are chosen badly, as is the case here, none of the methods work despite the clear nonstationarity. In~\ref{App:1}, Figure~\ref{VisA1} shows the same examples as in this section with $K=6$ intervals of unequal sizes. This demonstrates that SSAsir and SSAsave can perform well in the case of seasonal time series and that the performance depends on the chosen intervals.
\begin{figure}
\caption{Visualization of SSAsir, SSAsave and SSAcor for a component with nonstationary variance (left panel) and for a component with nonstationary correlation structure (right panel). The black dots represent the interval means, the heights of the vertical red bars the interval variances, and the widths of the blue horizontal bars the interval autocovariances.}
\label{Vis2}
\end{figure}
In the left panel of Figure~\ref{Vis2} the time series has a constant mean but changes in variance. As expected, SSAsir cannot detect this type of nonstationarity; only the varying variances in the intervals are clearly visible. Again SSAcor does not work here. The right panel of Figure~\ref{Vis2} on the other hand has a constant mean and a constant variance but the correlation structure changes twice. This is also visible in the figure, where for the first time the horizontal correlation bars appear to have clearly different lengths. However, the overall measure of nonstationarity used by SSAcor is still small, indicating that detection of nonstationarity in correlation is quite difficult. As shown in Figure~\ref{VisA2} in the Appendix, having intervals of unequal sizes does not give a higher measure of nonstationarity for SSAcor. To conclude, there always seem to be cases where one method is better than another, making a combination of the three methods a natural approach.
\subsection{Combination of methods} \label{sec:comb}
The methods described above can be combined in order to detect all three types of nonstationarity. Notice first that analytic SSA (ASSA), as proposed in \cite{Haraetal:2010}, tries to recover the unmixing matrix $\bo W$ so that the variation of the first and the second moments across intervals is minimized. To be more exact, the method combines the search for components that are nonstationary in mean and variance by using the eigenvalue-eigenvector decomposition of \[ \bo M_{ASSA} = \frac{1}{T}\sum_{i=1}^K \left\{\bo m_{T_i}(\bo y_t) \bo m_{T_i}(\bo y_t)^\top + \frac{1}{2} \bo S_{0,T_i}(\bo y_t) \bo S_{0,T_i}(\bo y_t)^\top \right\} -\frac{1}{2} \bo I_p, \] and proceeding as in the previous sections. The main drawback here is that changes in autocorrelation are not necessarily detected. Furthermore there are no tools to identify what kind of nonstationarities the components exhibit.
Therefore, we suggest another approach which combines the three nonstationarity measures~(\ref{eq:mean}),~(\ref{eq:var}) and~(\ref{eq:cor}). Denote from now on $\bo M_m= \bo M_1$, $\bo M_v= \bo M_2$ and $\bo M_\tau= \bo M_3$. Our suggestion is to turn the problem into a joint diagonalization problem, which means that we search for an orthogonal $p\times p$ matrix $\bo U_c$ which minimizes \[
\sum_i ||\off(\bo U_c^\top \bo M_i \bo U_c )||^2, \] or, equivalently, maximizes \begin{equation} \label{eq:jointdiag}
\sum_i ||\diag(\bo U_c^\top \bo M_i \bo U_c )||^2. \end{equation}
Here $||\bo A||$ is the (Frobenius) matrix norm, $\diag(\bo A)$ is a $p\times p$ diagonal matrix with the diagonal elements as in $\bo A$, and $\off(\bo A) = \bo A - \diag(\bo A)$. Notice that naturally only a subset of the matrices, or some additional matrices $\bo M_\tau$ with various lags $\tau$, can be included in the objective function, but the principle of the estimation procedure remains the same. Several algorithms for the approximate joint diagonalization in~(\ref{eq:jointdiag}) exist in the literature. The most popular one, based on Givens rotations, is proposed in \cite{Clarkson:1988}.
Based on $\bo U_c$ one can now compute $\bo D_i = \diag(\bo U_c^\top \bo M_i \bo U_c )=\diag(d_{i,1},\ldots, d_{i,p})$ and collect all those rows of $\bo U_c$ for which $\sum_i d_{i,j} \neq 0$ into a $k_c \times p$ matrix $\bo U_{c,n}$. The rest of the rows are then collected into a $(p-k_c) \times p$ matrix $\bo U_{c,s}$. Here the individual value $d_{i,j}$ indicates whether the $j$th component is nonstationary with respect to $\bo M_i$. Such a classification of the components is not possible when, for example, methods such as ASSA are applied. The final transformation matrices for the nonstationary and stationary components are then $\bo W_{c,n} = \bo U_{c,n} \bo S_{0,T}(\bo x_t)^{-1/2}$ and $\bo W_{c,s} = \bo U_{c,s} \bo S_{0,T}(\bo x_t)^{-1/2}$. In the following this method is referred to as SSAcomb.
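A minimal Jacobi-type joint diagonalizer for symmetric matrices can be sketched with Givens rotations and the classical quarter-angle update; this is a generic implementation of the criterion~(\ref{eq:jointdiag}), not the specific algorithm of \cite{Clarkson:1988}, and all names are ours:

```python
import numpy as np

def joint_diag(mats, eps=1e-12, max_sweeps=100):
    """Approximate joint diagonalization of symmetric matrices by sweeps
    of Givens rotations; returns an orthogonal U with U.T @ M @ U nearly
    diagonal for every input matrix M."""
    Ms = [m.copy() for m in mats]
    p = Ms[0].shape[0]
    U = np.eye(p)
    for _ in range(max_sweeps):
        rotated = False
        for i in range(p - 1):
            for j in range(i + 1, p):
                a = np.array([M[i, i] - M[j, j] for M in Ms])
                b = np.array([2.0 * M[i, j] for M in Ms])
                # quarter-angle formula maximizing the diagonal sum of squares
                theta = 0.25 * np.arctan2(2.0 * (a @ b), (a @ a) - (b @ b))
                if abs(theta) < eps:
                    continue
                rotated = True
                c, s = np.cos(theta), np.sin(theta)
                G = np.eye(p)
                G[i, i] = G[j, j] = c
                G[i, j], G[j, i] = -s, s
                Ms = [G.T @ M @ G for M in Ms]
                U = U @ G
        if not rotated:
            break
    return U, Ms
```

Applying it to the set $\{\bo M_1, \bo M_2, \bo M_3\}$ computed from the standardised series yields the matrix $\bo U_c$ used above.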
\section{Practical issues and identifiability of components} \label{sec:practical}
There are several practical considerations still worth pointing out. Clearly the finite sample eigenvalues of the matrices $\bo M$ corresponding to the stationary components will never be exactly zero. Thus the value of $k$ must be chosen based on a cut-off value or graphically. Furthermore, it is obvious that the choice of the number of intervals $K$, and how they are divided, is crucial in this framework. The impact of the number of intervals will be illustrated in the simulation studies in Section~\ref{sec:simulations}. Ideally, given the number of intervals, the cut points of the intervals should be such that the quantities of interest are as different as possible. However, as finding optimal cut points is difficult in practice, the intervals should, in our opinion, be at least of different lengths so that possible seasonality effects are easier to detect and issues similar to those seen in Section~\ref{sec:CompEx} do not occur.
For all methods discussed above, the unmixing matrix $\bo W$ has the two parts $\bo W_{n} = \bo U_{n} \bo S_{0,T}(\bo x_t)^{-1/2}$ and $\bo W_{s} = \bo U_{s} \bo S_{0,T}(\bo x_t)^{-1/2}$ specifying nonstationary and stationary subspaces, respectively, where matrices $\bo U_n$ and $\bo U_s$ depend on the method used. Let now $\bo x_t^* = \bo B \bo x_t $ denote an affine transformation of $\bo x_t$ with full-rank $p\times p$ matrix $\bo B$. We then denote the unmixing matrix based on $\bo x_t^*$ as $\bo W^*$. In the BSS literature (see for example \cite{MiettinenTaskinenNordhausenOja2015}) a BSS unmixing matrix is defined to be affine equivariant if $\bo W \bo x_t = \bo J \bo W^* \bo x_t^*$ for some diagonal matrix $\bo J$ which has $\pm 1$ on its diagonal. The affine equivariance thus means that the recovered components do not depend on the mixing matrix, except for their signs.
In the proposed SSA approach, the eigenvalues of the $\bo M$ matrices (or pseudo-eigenvalues in the case of SSAcomb) provide a way of ordering the nonstationary components. Thus, if the eigenvalues corresponding to the nonstationary components are distinct, then $\bo U_n$ is well defined and we have $\bo W_n \bo x_t = \bo J \bo W_n^* \bo x_t^*$. However, for the stationary part all (pseudo-)eigenvalues are equal, and thus $\bo U_s$ is not well defined; we actually have $\bo W_s \bo x_t = \bo O \bo W_s^* \bo x_t^*$ for some orthogonal matrix $\bo O$. Nevertheless, as for finite data the eigenvalues are usually all distinct, the finite-sample version will be affine equivariant. Note, however, that $\bo W_s \bo x_t$ is not necessarily equivalent to $\bo s_t$, and $\bo W_n \bo x_t$ is not necessarily equivalent to $\bo n_t$, as both $\bo s_t$ and $\bo n_t$ have the ambiguities mentioned above and only the two subspaces are well defined. This is quite similar to non-Gaussian component analysis (NGCA) \cite{BlanchardKawanabeSugiyamaSpokoinyMuller2006}, a BSS method for iid data that divides the data into a Gaussian and a non-Gaussian part. See also the supplementary material of~\cite{BunauMeineckeKiralyMuller:2009} for a detailed discussion. However, it is often desirable to have models in which the interesting components are well defined (up to minor identifiability issues such as their signs). In this work we do not give a full analysis of the identifiability issues, but merely mention some possibilities. The stationary components are identifiable when they follow a second-order source separation (SOS) model or a stationary independent time series model. For details see for example \cite{PanMatilainenTaskinenNordhausen2021}.
In that case, methods such as AMUSE \cite{TongSoonHuangLiu1990,MiettinenNordhausenOjaTaskinen2012}, SOBI \cite{BelouchraniAbedMeraimCardosoMoulines1997,Miettinenetal:2016}, gSOBI \cite{MiettinenMatilainenNordhausenTaskinen2017} or gJADE \cite{MatilainenNordhausenOja2015} can be applied to the identified stationary subspace. For the nonstationary subspace, at least the following assumptions allow the estimation of the components. If all nonstationary components are independent and their marginal distributions are non-Gaussian, independent component analysis (ICA) methods can be used; for an overview of ICA methods, see \cite{ComonJutten:2010,NordhausenOja2018}. Another possibility is to assume a nonstationary source separation (NSS) model for the nonstationary components, that is, to assume that the mean is stationary but each component has a nonstationary variance changing in a different pattern. This is, for example, a reasonable assumption for audio data. Interestingly, most NSS methods also divide the time series into intervals and jointly diagonalize statistics computed on them; see for example \cite{ChoiCichocki:2000,Nordhausen2014} for more details on NSS methods. In this context SSAsave seems to be a natural SSA method.
So far we have assumed that the dimensions of the two subspaces are known, as is assumed in almost all of the SSA literature mentioned above. In practice, however, this assumption is rarely realistic. As long as inferential tools for estimating $k$ are missing, the screeplot of the (pseudo-)eigenvalues might give some guidance for the choice. \begin{figure}
\caption{Screeplot based on the column sums of the pseudo-eigenvalues from Table~\ref{tab:ddd}.}
\label{screeplot1}
\end{figure} In the following we apply SSAcomb based on six equal-sized intervals to an eight-variate time series with $T=4000$. The time series is simulated according to Setting 4 in the following section. \begin{table}[ht] \center \begin{tabular}{rrrrrrrrr}
\hline
& $d_1$ & $d_2$ & $d_3$ & $d_4$ & $d_5$ & $d_6$ & $d_7$ & $d_8$ \\
\hline $\bo M_m$ & 0.00 & 0.53 & 0.00 & 0.02 & 0.01 & 0.00 & 0.00 & 0.00 \\ $\bo M_v$ & 2.67 & 0.03 & 0.03 & 0.07 & 0.03 & 0.03 & 0.02 & 0.02 \\ $\bo M_\tau$ & 0.02 & 0.02 & 0.18 & 0.07 & 0.04 & 0.03 & 0.02 & 0.02 \\
\hline \end{tabular} \caption{Pseudo-eigenvalues of SSAcomb for a simulated time series.} \label{tab:ddd} \end{table} Table~\ref{tab:ddd} contains the pseudo-eigenvalues, and the screeplot shown in Figure~\ref{screeplot1} is based on their column sums. Based on the screeplot one could consider three or four nonstationary components. Taking the underlying pseudo-eigenvalues into account, it is clear that the first component is nonstationary according to $\bo M_v$ only, indicating nonstationarity in variance. The second component is nonstationary according to $\bo M_m$ and the third according to $\bo M_\tau$. The fourth component has small values for all three $\bo M$-matrices and therefore might not really be nonstationary. The true value of $k$ is indeed three in this case.
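The reasoning above can be checked numerically from the table values (a Python sketch assuming NumPy; the cut-off value 0.2 is a hypothetical rule chosen for this example, whereas the paper selects $k$ graphically from the screeplot):

```python
import numpy as np

# Pseudo-eigenvalues from the table (rows: M_m, M_v, M_tau)
D = np.array([
    [0.00, 0.53, 0.00, 0.02, 0.01, 0.00, 0.00, 0.00],  # M_m   (mean)
    [2.67, 0.03, 0.03, 0.07, 0.03, 0.03, 0.02, 0.02],  # M_v   (variance)
    [0.02, 0.02, 0.18, 0.07, 0.04, 0.03, 0.02, 0.02],  # M_tau (autocorrelation)
])
col_sums = D.sum(axis=0)          # heights of the screeplot bars
k = int(np.sum(col_sums > 0.2))   # hypothetical cut-off rule
# col_sums begins 2.69, 0.58, 0.21, ... so this rule selects k = 3
```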
\section{Simulations and an example} \label{sec:simulations}
In this section we compare the methods discussed above using simulation studies. The method that combines the search for all three types of nonstationarities is also illustrated by an example.
\subsection{Simulations}
As pointed out above, it depends on the purpose of the analysis whether the stationary subspace or the nonstationary subspace is of interest. The goal in this simulation study is therefore to evaluate how well the different methods can estimate the two subspaces under the assumption that the dimension of the subspace is known, as it is assumed in most of the SSA references mentioned above. Thus, based on the sample eigenvalues of the methods, the unmixing matrix estimates can be decomposed into $\hat \bo W_s$ and $\hat \bo W_n$ with dimensions $(p-k) \times p$ and $k \times p$, respectively.
For comparing the methods, we need a performance measure that takes into account the fact that the subspaces are only specified up to rotations. We therefore compute the projection matrices $$ \hat \bo P_s = \hat \bo W_s^\top (\hat \bo W_s \hat \bo W_s^\top)^{-1} \hat \bo W_s \quad \mbox{and} \quad \hat \bo P_n = \hat \bo W_n^\top (\hat \bo W_n \hat \bo W_n^\top)^{-1} \hat \bo W_n $$ and compare them with the corresponding true projection matrices $\bo P_s$ and $\bo P_n$ by computing the squared distances $$
D_s^2=\frac{1}{{2}} \| \bo P_s- \hat \bo P_s\|^2
\quad \mbox{and} \quad D_n^2=\frac{1}{{2}} \| \bo P_n- \hat \bo P_n\|^2, $$ which take values in $[0,\min\{k,p-k\}]$, with zero indicating perfect recovery of the subspace. Note that, since $\hat \bo W_s$ has dimension $(p-k) \times p$ and full row rank, the matrix $\hat \bo W_s \hat \bo W_s^\top$ is invertible, so the projections are well defined. For more details about this performance criterion, see \cite{CroneCrosby1995,LiskiNordhausenOjaRuizGazen2016}.
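This criterion can be computed directly (a Python sketch assuming NumPy; \texttt{subspace\_distance} is our name, and the projector is built onto the row space of each full-row-rank unmixing matrix):

```python
import numpy as np

def subspace_distance(W_hat, W_true):
    """Squared distance D^2 = ||P_true - P_hat||_F^2 / 2 between the row
    spaces of W_hat and W_true, both of size k' x p with full row rank."""
    def proj(W):
        # orthogonal projection onto the row space of W
        return W.T @ np.linalg.inv(W @ W.T) @ W
    return 0.5 * np.linalg.norm(proj(W_true) - proj(W_hat), 'fro') ** 2
```

For two orthogonal one-dimensional subspaces of $\mathbb{R}^3$ the distance attains its maximum $\min\{k,p-k\}=1$, and it is zero when the two matrices span the same subspace.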
All of the following simulations were done with R \cite{R} using the packages JADE \cite{JADE_package} and LDRtools \cite{LDRtools}. In all simulation settings the observed time points are $1, \dots, T$, where $T$ varies from 1000 to 32000, and $p=8$ with five stationary components $s_{1,t},\ldots,s_{5,t}$ and three nonstationary components $n_{1,t}$, $n_{2,t}$ and $n_{3,t}$. Most components are based on moving average (MA), autoregressive (AR) and autoregressive moving average (ARMA) processes. However, inspired by \cite{PatileaRaissi2014}, we also consider time-varying variance (TV-VAR) processes $x_t$ with parameters $\alpha$ and $\beta$ of the form $x_t = \tilde h_t \epsilon_t$, where $\tilde h_t^2 = h_t^2 + \alpha x_{t-1}^2$, the $\epsilon_t$ are iid $N(0,1)$, $x_0=0$ and $h_t = 10 - 10 \sin(\beta \pi \ t/T + \pi/6) (1 + t/T)$. Similarly, we also consider time-varying autoregressive processes of order 1 (TV-AR1), where $x_t= a_t x_{t-1} + \epsilon_t$ with $x_0=0$, the $\epsilon_t$ iid $N(0, \sigma^2)$ and $a_t = 0.5 \cos(2\pi \ t/T)$.
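The two time-varying processes can be simulated directly from their definitions (a Python sketch assuming NumPy; the function names and the seeded default generator are ours):

```python
import numpy as np

def tv_var(T, alpha, beta, rng=None):
    """TV-VAR process: x_t = h~_t eps_t with h~_t^2 = h_t^2 + alpha x_{t-1}^2
    and h_t = 10 - 10 sin(beta pi t/T + pi/6)(1 + t/T), eps_t iid N(0, 1)."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.zeros(T + 1)                                   # x_0 = 0
    for t in range(1, T + 1):
        h = 10 - 10 * np.sin(beta * np.pi * t / T + np.pi / 6) * (1 + t / T)
        x[t] = np.sqrt(h**2 + alpha * x[t - 1]**2) * rng.standard_normal()
    return x[1:]

def tv_ar1(T, sigma2, rng=None):
    """TV-AR1 process: x_t = a_t x_{t-1} + eps_t with a_t = 0.5 cos(2 pi t/T)
    and eps_t iid N(0, sigma2)."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.zeros(T + 1)
    for t in range(1, T + 1):
        x[t] = 0.5 * np.cos(2 * np.pi * t / T) * x[t - 1] \
               + np.sqrt(sigma2) * rng.standard_normal()
    return x[1:]
```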
The four 8-variate settings under consideration are then: \begin{description} \item[Setting 1:] $s_{1,t}$ is an MA(0.72, 0.24) process, $s_{2,t}$ is an AR(0.34, 0.27, 0.18) process, $s_{3,t}$ is an ARMA(0.34, 0.27, 0.18; 0.72, 0.15) process, $s_{4,t}$ is an AR(0.11, 0.58) process and $s_{5,t}$ is an MA(0.78) process. $n_{1,t} = y_t + \mu_t$ where $y_t$ is an AR(0.7) process and $\mu_t = -1.52$ if $t \leq \left \lfloor{T/2}\right \rfloor$ and otherwise $1.38$. $n_{2,t} = y_t + \mu_t$ where $y_t$ is an AR(0.5) process and $\mu_t = -0.75$ if $t \leq \left \lfloor{T/3}\right \rfloor$, 0.84 if $t \in [\left \lfloor{T/3}\right \rfloor + 1, 2 \left \lfloor{T/3}\right \rfloor]$ and otherwise $-0.45$. \item[Setting 2:] $s_{1,t}$ is an MA(0.72) process, $s_{2,t}$ is an MA(0.34) process, $s_{3,t}$ is an MA(0.72, 0.15) process, $s_{4,t}$ is an MA(0.11, 0.58) process and $s_{5,t}$ is an MA(0.34, 0.27, 0.18) process. $n_{1,t}$ is TV-VAR(0.2, 0.5), $n_{2,t}$ is TV-VAR(0.1, 1) and $n_{3,t}$ is TV-VAR(0.05, 0.01). \item[Setting 3:] $s_{1,t}$ is an ARMA(0.14, 0.45; 0.72, 0.24) process, $s_{2,t}$ is an AR(0.34, 0.27, 0.18) process, $s_{3,t}$ is an ARMA(0.34, 0.27, 0.18; 0.72, 0.15) process, $s_{4,t}$ is an AR(0.11, 0.58) process and $s_{5,t}$ is an AR(0.1, 0.1, 0.1, 0.1, 0.1) process. $n_{1,t}$ is a TV-AR1 process with $\sigma^2 = 0.8649$; $n_{2,t}$ consists of three independent blocks, i.e., for $t=1$ to $t=\left \lfloor{T/3}\right \rfloor$ an AR(0.5) process with $\sigma^2=1$, for $t=\left\lfloor{T/3}\right \rfloor+1$ to $t=2\left\lfloor{T/3}\right \rfloor$ an AR(0.2) process with $\sigma^2=1.6384$, and for $t=2\left\lfloor{T/3}\right \rfloor+1$ to $t=T$ an AR(0.8) process with $\sigma^2=0.2304$;
$n_{3,t}$ consists of two independent blocks, i.e., for $t=1$ to $t=\left \lfloor{T/2}\right \rfloor$ an MA(0.5) process with $\sigma^2=1$ and for $t=\left \lfloor{T/2}\right \rfloor+1$ to $t=T$ an MA(0.9, 0.17) process with $\sigma^2=0.4624$. \item[Setting 4:] Has the same stationary components as Setting 3; $n_{1,t}$ corresponds to $n_{1,t}$ in Setting 1, $n_{2,t}$ corresponds to $n_{2,t}$ in Setting 2 and $n_{3,t}$ corresponds to $n_{1,t}$ in Setting 3. \end{description}
Thus, in Setting 1 the three nonstationary components all have nonstationary means, while in Setting 2 the means are all stationary but the variances are nonstationary. In Setting 3 the nonstationary information is in the correlation structure. Setting 4 then uses one nonstationary component from each of the previous settings, meaning that here combining the different approaches seems especially important. Examples of the nonstationary components for Settings 1-3 with $T=2000$ are shown in Figures~\ref{ExNSset1}-\ref{ExNSset3}, which show that violations of stationarity can take quite different forms. \begin{figure}
\caption{Example of the nonstationary components in Setting~1 for $T=2000$.}
\label{ExNSset1}
\end{figure} \begin{figure}
\caption{Example of the nonstationary components in Setting~2 for $T=2000$.}
\label{ExNSset2}
\end{figure} \begin{figure}
\caption{Example of the nonstationary components in Setting~3 for $T=2000$.}
\label{ExNSset3}
\end{figure}
In the simulation study we simulated 2000 source sets with various sample sizes $T$ from all four settings and always used a random orthogonal mixing matrix as $\bo A$. Then for all methods we computed the distances between the true projections and the estimated projections when using $K=2,6,12$ equal-sized intervals. Notice that in the spirit of SIR, the number of slices for SSAsir should exceed $k=3$ and therefore SSAsir is not expected to work when $K=2$.
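A random orthogonal mixing matrix, as used for $\bo A$ in each replication, can be drawn via the QR decomposition of a Gaussian matrix (a standard construction sketched in Python with NumPy; the sign correction on the columns makes the draw Haar-uniform):

```python
import numpy as np

def random_orthogonal(p, rng=None):
    """Draw a random p x p orthogonal matrix from the QR decomposition of a
    standard Gaussian matrix; column signs are fixed by the diagonal of R."""
    rng = np.random.default_rng(0) if rng is None else rng
    Q, R = np.linalg.qr(rng.standard_normal((p, p)))
    return Q * np.sign(np.diag(R))
```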
\begin{figure}
\caption{Average squared distances between the true and estimated subspaces in Setting~1.
}
\label{DistSset1}
\end{figure}
\begin{figure}
\caption{Average squared distances between the true and estimated subspaces in Setting~2.}
\label{DistSset2}
\end{figure}
\begin{figure}
\caption{Average squared distances between the true and estimated subspaces in Setting~3.}
\label{DistSset3}
\end{figure}
\begin{figure}
\caption{Average squared distances between the true and estimated subspaces in Setting~4.}
\label{DistSset4}
\end{figure}
The average distances are shown in Figures~\ref{DistSset1}-\ref{DistSset4}. It is clearly visible that using only two intervals, that is $K=2$, is always the poorest choice. Although this was expected for SSAsir, it was not so obvious for the other methods. The differences between $K=6$ and $K=12$ decrease with increasing sample size; for smaller sample sizes, however, the choice $K=6$ is preferable. This confirms results familiar from NSS and SDR studies, which show that the intervals must contain enough observations to estimate the quantities of interest sufficiently well.
As seen in Figure~\ref{DistSset1}, in the case of Setting 1 all methods are able to estimate the subspaces when there are enough intervals, which is a somewhat surprising result for SSAcor. From the results for Setting 2 in Figure~\ref{DistSset2} it is clear that SSAsir and SSAcor do not work at all. ASSA is unable to detect the stationary subspace but detects the nonstationary subspace quite well. SSAcomb works well despite containing matrices which, when used by themselves, fail completely. The results for Setting 3 in Figure~\ref{DistSset3} indicate that SSAcor and SSAcomb perform clearly best, while ASSA does worse than SSAsave and is therefore affected by the poor performance of SSAsir. Finally, as seen in Figure~\ref{DistSset4}, in Setting 4, where all the different types of nonstationarity are present, SSAcomb is clearly the only method that can estimate the subspaces properly. However, it requires a much larger sample size to detect all three nonstationary components. ASSA seems to consistently miss one nonstationary component, which is not surprising as it does not look for nonstationarity in the autocorrelation structure.
Thus, to conclude, based on the simulation studies SSAcomb seems preferable, as it works quite well independently of the nature of the underlying stationarity violations. However, large sample sizes are needed, and the number of intervals $K$ plays a role in the performance. In the simulation studies above, a moderate number of $K=6$ intervals seems to be a reasonable choice.
\subsection{Example}
We applied the proposed SSAcomb method to magnetoencephalographic (MEG) data of length $T=221710$, recorded using $p=102$ magnetometers at the Centre for Interdisciplinary Brain Research, Department of Psychology, University of Jyv\"askyl\"a. Magnetometers record brain activity indirectly by measuring changes in the magnetic fields generated by electrical currents in the brain. As the measurements are taken from the top of the head, it is realistic to assume that the observed signals are mixtures of the actual brain activity signals. The signals are also mixed with artifacts caused by external physiological factors such as moving the head, tensing muscles, and blinking and moving the eyes. Such mixing is also seen in Figure~\ref{MEGoriginal} in~\ref{App:2}, where we plot the first ten MEG signals. The goal of the analysis is then to recover the brain activity of interest from the observed mixture. Notice that KL-SSA was applied to EEG data analysis with a similar goal in mind in \cite{BunauMeineckeKiralyMuller:2009,Bunauetal:2010}.
Our interest was to see whether SSAcomb can recover nonstationary components that possibly correspond to interesting brain activity or physiological signals. As the data set is quite long, we applied SSAcomb with $K=12$ and plotted the screeplot of the sums of the pseudo-eigenvalues in Figure~\ref{screeplot2}. \begin{figure}
\caption{Screeplot of the sums of the pseudo-eigenvalues based on MEG data.}
\label{screeplot2}
\end{figure}
Based on the screeplot we note that the method is able to detect ten nonstationary components. We then list the ten pseudo-eigenvalues in Table~\ref{tab:MEGddd} and conclude that the components 1-5 and 8-10 were detected due to nonstationarity in variance and in autocorrelation. Component 6 has all three types of nonstationarities and component 7 is nonstationary with respect to the variance.
\begin{table}[ht] \centering \begin{tabular}{rrrrrrrrrrr}
\hline
& $d_1$ & $d_2$ & $d_3$ & $d_4$ & $d_5$ & $d_6$ & $d_7$ & $d_8$ & $d_9$ & $d_{10}$\\
\hline $\bo M_m$ & 0.08 & 0.06 & 0.00 & 0.01 & 0.01 & 0.41 & 0.00 & 0.00 & 0.03 & 0.00 \\ $\bo M_v$ & 2.90 & 2.26 & 2.47 & 2.07 & 1.59 & 1.03 & 1.89 & 1.18 & 0.68 & 0.45 \\ $\bo M_\tau$ & 2.87 & 2.22 & 1.70 & 2.02 & 1.55 & 0.96 & 0.02 & 0.66 & 0.64 & 0.39 \\
\hline \end{tabular} \caption{Pseudo-eigenvalues of SSAcomb based on the MEG data.} \label{tab:MEGddd} \end{table}
Figure~\ref{MEGsources} plots the ten nonstationary components recovered by SSAcomb.
\begin{figure}
\caption{Ten nonstationary components recovered by SSAcomb method with $K=12$.}
\label{MEGsources}
\end{figure}
Due to the large number of observations, nonstationarities in autocorrelation are difficult to observe, but nonstationarities in mean and variance are clearly visible in Figure~\ref{MEGsources}. In Figure~\ref{VisA3} in~\ref{App:2} we show the topography plots related to each signal. The topography plots indicate which part of the brain is activated and help subject-matter scientists to identify the components that are related to artifacts and those that are related to the brain activity relevant to the study at hand. Notice that many MEG studies proceed by removing artifacts before trying to detect the brain activity signals of interest; see for example~\cite{Escuderoetal:2007} and references therein.
\section{Conclusion} \label{sec:discussion}
In this paper we gave a statistical perspective on stationary subspace analysis. We showed how SSA can be formulated as a supervised dimension reduction method and, in that spirit, proposed methods that can detect nonstationarities in the mean, in the variance, in the autocorrelation, or in all of these jointly. The methods thus extend the classical analytic SSA to a more general time series setting. Simulation studies illustrated that the method combining all three nonstationarity measures outperformed its competitors in various simulation settings. The combined method seemed to perform best when the sample sizes were large and the number of intervals was moderate. Notice, however, that the performance of the different methods depends heavily on the underlying time series components, and it might therefore also be of interest to weight the matrices in the combination method if some special structures are of more importance than others.
In this study we assumed that the number of nonstationary components, $k$, is known. However, most often this parameter is unknown and needs to be estimated. In \cite{BunauMeineckeKiralyMuller:2009,BlytheBunauMeineckeMuller:2012} a sequential likelihood ratio test was proposed for determining the dimension of the stationary subspace in the iid case. In the context of ASSA,~\cite{Haraetal:2010} mentioned that the eigenvalues can give guidance for choosing the dimension, but did not pursue this idea further. When using SSAsir, SSAsave, SSAcor or SSAcomb, a possible test of the null hypothesis $H_0:\ k=k_0$, where $k$ is the true dimension of the nonstationary subspace and $k_0$ is the proposed dimension, could also be based on the eigenvalues of the matrices $\bo M$ and resampling-based procedures, in a similar fashion as in \cite{NordhausenOjaTylerVirta2017} for example. An estimate of the nonstationary subspace dimension could then be based on sequential testing as in \cite{VirtaNordhausen2021}. We leave this for future work. In this context we will also investigate whether the detection of the type of nonstationarity can be formalized and how the SSA methods can help to detect change points in multivariate time series.
\section*{Appendix}
\appendix
\section{Additional figures comparing SSAsir, SSAsave and SSAcor} \label{App:1}
Visualizations of SSAsir, SSAsave and SSAcor for the same series as in Section~\ref{sec:CompEx} when the intervals are of unequal lengths.
\begin{figure}
\caption{Visualization of SSAsir, SSAsave and SSAcor for a component with nonstationarity in mean (left panel trend, right panel seasonality) when the six intervals have different sizes. The black dots represent the interval means, the heights of the vertical red bars the interval variances, and the widths of the blue horizontal bars the interval autocovariances.}
\label{VisA1}
\end{figure}
\begin{figure}
\caption{Visualization of SSAsir, SSAsave and SSAcor for a component with nonstationarity in variance (left panel) and in autocorrelation (right panel) when the six intervals have different sizes. The black dots represent the interval means, the heights of the vertical red bars the interval variances, and the widths of the blue horizontal bars the interval autocovariances.}
\label{VisA2}
\end{figure}
\section{Additional figures for the MEG example.} \label{App:2}
\begin{figure}
\caption{Ten observed MEG signals.}
\label{MEGoriginal}
\end{figure}
\begin{figure}
\caption{Topography plots based on ten nonstationary components recovered from the MEG data. The topographies illustrate which part of the brain is activated related to each of the nonstationary components.}
\label{VisA3}
\end{figure}
\end{document}
\begin{document}
\title{A strong loophole-free test of local realism}
\author{Lynden K. Shalm} \affiliation{National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305, USA}
\author{Evan Meyer-Scott} \affiliation{Institute for Quantum Computing and Department of Physics and Astronomy, University of Waterloo, 200 University Ave West, Waterloo, Ontario, Canada, N2L 3G1}
\author{Bradley G. Christensen} \affiliation{Department of Physics, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA}
\author{Peter Bierhorst} \affiliation{National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305, USA}
\author{Michael A. Wayne} \affiliation{Department of Physics, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA} \affiliation{National Institute of Standards and Technology, 100 Bureau Drive,\,Gaithersburg,\,MD 20899,\,USA}
\author{Martin J. Stevens} \affiliation{National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305, USA}
\author{Thomas Gerrits} \affiliation{National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305, USA}
\author{Scott Glancy} \affiliation{National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305, USA}
\author{Deny R. Hamel} \affiliation{D\'epartement de Physique et d'Astronomie, Universit\'e de Moncton, Moncton, New Brunswick E1A 3E9, Canada}
\author{Michael S. Allman} \affiliation{National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305, USA}
\author{Kevin J. Coakley} \affiliation{National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305, USA}
\author{Shellee D. Dyer} \affiliation{National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305, USA}
\author{Carson Hodge} \affiliation{National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305, USA}
\author{Adriana E. Lita} \affiliation{National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305, USA}
\author{Varun B. Verma} \affiliation{National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305, USA}
\author{Camilla Lambrocco} \affiliation{National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305, USA}
\author{Edward Tortorici} \affiliation{National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305, USA}
\author{Alan L. Migdall} \affiliation{National Institute of Standards and Technology, 100 Bureau Drive,\,Gaithersburg,\,MD 20899,\,USA} \affiliation{Joint Quantum Institute, National Institute of Standards and Technology and University of Maryland, 100 Bureau Drive, Gaithersburg, Maryland 20899, USA}
\author{Yanbao Zhang} \affiliation{Institute for Quantum Computing and Department of Physics and Astronomy, University of Waterloo, 200 University Ave West, Waterloo, Ontario, Canada, N2L 3G1}
\author{Daniel R. Kumor} \affiliation{Department of Physics, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA}
\author{William H. Farr} \affiliation{Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109}
\author{Francesco Marsili} \affiliation{Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109}
\author{Matthew D. Shaw} \affiliation{Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109}
\author{Jeffrey A. Stern} \affiliation{Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109}
\author{Carlos Abell\'{a}n} \affiliation{ICFO -- Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, 08860 Castelldefels (Barcelona), Spain}
\author{Waldimar Amaya} \affiliation{ICFO -- Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, 08860 Castelldefels (Barcelona), Spain}
\author{Valerio Pruneri} \affiliation{ICFO -- Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, 08860 Castelldefels (Barcelona), Spain} \affiliation{ICREA -- Instituci\'{o} Catalana de Recerca i Estudis Avan\c{c}ats, 08015 Barcelona, Spain}
\author{Thomas Jennewein} \affiliation{Institute for Quantum Computing and Department of Physics and Astronomy, University of Waterloo, 200 University Ave West, Waterloo, Ontario, Canada, N2L 3G1} \affiliation{Quantum Information Science Program, Canadian Institute for Advanced Research, Toronto, ON, Canada}
\author{Morgan W. Mitchell} \affiliation{ICFO -- Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, 08860 Castelldefels (Barcelona), Spain} \affiliation{ICREA -- Instituci\'{o} Catalana de Recerca i Estudis Avan\c{c}ats, 08015 Barcelona, Spain}
\author{Paul G. Kwiat} \affiliation{Department of Physics, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA}
\author{Joshua C. Bienfang} \affiliation{National Institute of Standards and Technology, 100 Bureau Drive,\,Gaithersburg,\,MD 20899,\,USA} \affiliation{Joint Quantum Institute, National Institute of Standards and Technology and University of Maryland, 100 Bureau Drive, Gaithersburg, Maryland 20899, USA}
\author{Richard P. Mirin} \affiliation{National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305, USA}
\author{Emanuel Knill} \affiliation{National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305, USA}
\author{Sae Woo Nam} \affiliation{National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305, USA}
\begin{abstract} We present a loophole-free violation of local realism using entangled photon pairs. We ensure that all relevant events in our Bell test are spacelike separated by placing the parties far enough apart and by using fast random number generators and high-speed polarization measurements. A high-quality polarization-entangled source of photons, combined with high-efficiency, low-noise, single-photon detectors, allows us to make measurements without requiring any fair-sampling assumptions. Using a hypothesis test, we compute p-values as small as $5.9\times 10^{-9}$ for our Bell violation while maintaining the spacelike separation of our events. We estimate the degree to which a local realistic system could predict our measurement choices. Accounting for this predictability, our smallest adjusted p-value is $2.3 \times 10^{-7}$. We therefore reject the hypothesis that local realism governs our experiment. \end{abstract} \date{\today} \maketitle
\textit{But if [a hidden variable theory] is local it will not agree with quantum mechanics, and if it agrees with quantum mechanics it will not be local. This is what the theorem says.} \textsc{---John Stewart Bell}\cite{Bell1975} \\ \\ Quantum mechanics at its heart is a statistical theory. It cannot with certainty predict the outcome of all single events, but instead it predicts probabilities of outcomes. This probabilistic nature of quantum theory is at odds with the determinism inherent in Newtonian physics and relativity, where outcomes can be exactly predicted given sufficient knowledge of a system. Einstein and others felt that quantum mechanics was incomplete. Perhaps quantum systems are controlled by variables, possibly hidden from us \cite{Holland2004}, that determine the outcomes of measurements. If we had direct access to these hidden variables, then the outcomes of all measurements performed on quantum systems could be predicted with certainty. De Broglie's 1927 pilot-wave theory was a first attempt at formulating a hidden variable theory of quantum physics \cite{Broglie1927}; it was completed in 1952 by David Bohm \cite{Bohm1952a,Bohm1952b}. While the pilot-wave theory can reproduce all of the predictions of quantum mechanics, it has the curious feature that hidden variables in one location can instantly change values because of events happening in distant locations. This seemingly violates the locality principle from relativity, which says that objects cannot signal one another faster than the speed of light. In 1935 the nonlocal feature of quantum systems was popularized by Einstein, Podolsky, and Rosen \cite{Einstein1935}, and is something Einstein later referred to as ``spooky actions at a distance''\cite{Einstein1971}. But in 1964 John Bell showed that it is impossible to construct a hidden variable theory that obeys locality and simultaneously reproduces all of the predictions of quantum mechanics \cite{Bell1964}.
Bell's theorem fundamentally changed our understanding of quantum theory and today stands as a cornerstone of modern quantum information science.
Bell's theorem does not prove the validity of quantum mechanics, but it does allow us to test the hypothesis that nature is governed by local realism. The principle of realism says that any system has pre-existing values for all possible measurements of the system. In local realistic theories, these pre-existing values depend only on events in the past lightcone of the system. Local hidden-variable theories obey this principle of local realism. Local realism places constraints on the behavior of systems of multiple particles---constraints that do not apply to entangled quantum particles. This leads to different predictions that can be tested in an experiment known as a Bell test. In a typical two-party Bell test, a source generates particles and sends them to two distant parties, Alice and Bob. Alice and Bob independently and randomly choose properties of their individual particles to measure. Later, they compare the results of their measurements. Local realism constrains the joint probability distribution of their choices and measurements. The basis of a Bell test is an inequality that is obeyed by local realistic probability distributions but can be violated by the probability distributions of certain entangled quantum particles \cite{Bell1964}. A few years after Bell derived his inequality, new forms were introduced by Clauser, Horne, Shimony and Holt \cite{Clauser1969}, and Clauser and Horne \cite{Clauser1974} that are simpler to experimentally test.
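To illustrate the tension described above, the quantum prediction for the CHSH form of Bell's inequality can be evaluated numerically (a textbook sketch, not the analysis used in this experiment; for a maximally polarization-entangled photon pair the correlation between analyzer angles $a$ and $b$ is $E(a,b)=\cos 2(a-b)$, and the angles below are the canonical optimal settings):

```python
import math

def E(a, b):
    # quantum correlation for a maximally polarization-entangled photon pair
    return math.cos(2 * (a - b))

a1, a2 = 0.0, math.pi / 4              # Alice's two analyzer settings
b1, b2 = math.pi / 8, 3 * math.pi / 8  # Bob's two analyzer settings

# CHSH combination: local realistic theories obey |S| <= 2
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
# For these settings quantum mechanics gives S = 2*sqrt(2) (Tsirelson's bound)
```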
In a series of landmark experiments, Freedman and Clauser \cite{Freedman1972} and Aspect, Grangier, Dalibard, and Roger \cite{Aspect1981, Aspect1982a, Aspect1982b} demonstrated experimental violations of Bell inequalities using pairs of polarization-entangled photons generated by an atomic cascade. However, due to technological constraints, these Bell tests and those that followed (see \cite{Genovese2005} for a review) were forced to make additional assumptions to show that local realism was incompatible with their experimental results. A significant violation of Bell's inequality implies that either local realism is false or that one or more of the assumptions made about the experiment are not true; thus every assumption in an experiment opens a ``loophole.'' No experiment can be absolutely free of all loopholes, but in \cite{Larsson2014} a minimal set of assumptions is described that an experiment must satisfy to be considered ``loophole free.'' Here we report a significant, loophole-free experimental violation of local realism, in the sense of \cite{Larsson2014}, using entangled photon pairs. In our experiment the only assumptions that remain are those that can never, even in principle, be removed. We present physical arguments and evidence that these remaining assumptions are either true or untestable.
Bell's proof requires that the measurement choice at Alice cannot influence the outcome at Bob (and vice versa). If a signal traveling from Alice cannot reach Bob in the time between Alice's choice and the completion of Bob's measurement, then there is no way for a local hidden variable constrained by special relativity at Alice to change Bob's outcomes. In this case we say that Alice and Bob are spacelike separated from one another. If an experiment does not have this spacelike separation, then an assumption must be made that local hidden variables cannot signal one another, leading to the ``locality'' loophole. \begin{figure*}
\caption{
Schematic of the entangled photon source. A pulsed \SI{775}{\nano\meter}-wavelength
Ti:Sapphire picosecond mode-locked laser running at \SI{79.3}{\mega\hertz} repetition
rate is used as both a clock and a pump in our setup. A fast photodiode (FPD) and
divider circuit are used to generate the synchronization signal that is distributed to
Alice and Bob. A polarization-maintaining single-mode fiber (SMF) then acts as a spatial
filter for the pump. After exiting the SMF, a polarizer and half-wave plate (HWP) set
the pump polarization. To generate entanglement, a periodically poled potassium titanyl
phosphate (PPKTP) crystal designed for Type-II phasematching is placed in a
polarization-based Mach-Zehnder interferometer formed using a series of HWPs and three beam displacers (BD). At BD1 the pump beam is split in two paths ($1$ and
$2$): the horizontal (H) component of polarization of the pump translates laterally in
the $x$ direction while the vertical (V) component of polarization passes straight
through. Tilting BD1 sets the phase, $\phi$, of the interferometer to 0. After BD1 the
pump state is $\left( \cos(16^{\circ}) \left| H_{1} \right\rangle + \sin(16^{\circ}) \left| V_{2} \right\rangle \right)$. To address the polarization of the paths individually, semi-circular waveplates are used. A HWP in path 2 rotates the polarization of the pump from
vertical (V) to horizontal (H). A second HWP at $0^{\circ}$ is inserted into path 1 to
keep the path lengths of the interferometer balanced. The pump is focused at two spots
in the crystal, and photon pairs at a wavelength of \SI{1550}{\nano\meter} are generated
in either path 1 or 2 through the process of spontaneous parametric downconversion.
After the crystal, BD2 walks the V-polarized signal photons down in the $y$ direction
($\text{V}_{1a}$ and $\text{V}_{2a}$) while the H-polarized idler photons pass straight
through ($\text{H}_{1b}$ and $\text{H}_{2b}$). The $x$--$y$ view shows the resulting
locations of the four beam paths. HWPs at $45^{\circ}$ correct the polarization while
HWPs at $0^{\circ}$ provide temporal compensation. BD3 then completes the interferometer
by recombining paths 1 and 2 for the signal and idler photons. The two downconversion
processes interfere with one another, creating the entangled state in Eq.
(\ref{E:BellState}). A high-purity silicon wafer with an anti-reflection coating is used
to filter out the remaining pump light. The idler (signal) photons are coupled into a SMF and sent to Alice (Bob).
}
\label{f:apparatus}
\end{figure*}
\begin{figure}
\caption{
Receiver station setup for Alice and Bob. A photon arrives from the source. Two half-wave plates (HWP), a quarter-wave plate (QWP), a Pockels cell (PC), and two plate
polarizers together act to measure the polarization state of the incoming photon. The
polarization projection is determined by a random bit from XORing the outputs of two
random number generators (RNG1 and RNG2) with pre-determined pseudorandom bits (RNG3).
If the random bit is ``0'', corresponding to measurement setting $a$ ($b$) for Alice
(Bob), the Pockels cell remains off. If the random bit is ``1'', corresponding to
measurement setting $a'$ ($b'$) for Alice (Bob), then a voltage is applied to the
Pockels cell that rotates the polarization of the photons using a fast electro-optic
effect. The two plate polarizers have a combined contrast ratio $> 7000:1$. The photons
are coupled back into a single-mode fiber (SMF) and detected using a superconducting
nanowire single-photon detector (SNSPD). The signal is amplified and sent to a time-tagging unit where the arrival time of the event is recorded. The time tagger also
records the measurement setting, the synchronization signal, and a one pulse-per-second
signal from a global positioning system (GPS). The pulse-per-second signal provides an
external time reference that helps align the time tags Alice and Bob record. A
\SI{10}{\mega\hertz} oscillator synchronizes the internal clocks on Alice's and Bob's
time taggers. The synchronization pulse from the source is used to trigger the
measurement basis choice.}
\label{f:receiver}
\end{figure}
Another requirement in a Bell test is that Alice and Bob must be free to make random measurement choices that are physically independent of one another and of any properties of the particles. If this is not true, then a hidden variable could predict the chosen settings in advance and use that information to produce measurement outcomes that violate a Bell inequality. Not fulfilling this requirement opens the ``freedom-of-choice'' loophole. While this loophole can never in principle be closed, the set of hidden variable models that are able to predict the choices can be constrained using spacelike separation. In particular, in experiments that use processes such as cascade emission or parametric downconversion to create entangled particles, spacelike separation of the measurement choices from the creation event eliminates the possibility that the particles, or any other signal emanating from the creation event, influence the settings. To satisfy this condition, Alice and Bob must choose measurement settings based on fast random events that occur in the short time before a signal traveling at the speed of light from the entangled-photon creation would be able to reach them. But it is fundamentally impossible to conclusively prove that Alice's and Bob's random number generators are independent without making additional assumptions, since their backward lightcones necessarily intersect. Instead, it is possible to justify the assumption of measurement independence through a detailed characterization of the physical properties of the random number generators (such as the examination described in \cite{Abellan2015v2arxiv,Mitchell2015}).
In any experiment, imperfections could lead to loss, and not all particles will be detected. To violate a Bell inequality in an experiment with two parties, each free to choose between two settings, Eberhard showed that at least 2/3 of the particles must be detected \cite{Eberhard1993} if nonmaximally entangled states are used. If the loss exceeds this threshold, then one may observe a violation by discarding events in which at least one party does not detect a particle. This is valid under the assumption that particles were lost in an unbiased manner. However, relying on this assumption opens the ``detector'' or ``fair-sampling'' loophole. While the locality and fair-sampling loopholes have been closed individually in different systems \cite{Weihs1998, Rowe2001,Scheidl2010, Giustina2013, Christensen2013}, it has only recently been possible to close all loopholes simultaneously using nitrogen vacancy centers in diamonds \cite{Hensen2015Nature}, and now with entangled photons in our experiment and in the work reported in \cite{Giustina2015arxiv}. These three experiments also address the freedom-of-choice loophole by space-like separation.
Fundamentally a Bell inequality is a constraint on probabilities that are estimated from random data. Determining whether a data set shows violation is a statistical hypothesis-testing problem. It is critical that the statistical analysis does not introduce unnecessary assumptions that create loopholes. A Bell test is divided into a series of trials. In our experiment, during each trial Alice and Bob randomly choose between one of two measurement settings (denoted $\{a, a'\}$ for Alice and $\{b, b'\}$ for Bob) and record either a ``+'' if they observe any detection events or a ``0'' otherwise. Alice and Bob must define when a trial is happening using only locally available information, otherwise additional loopholes are introduced. At the end of the experiment Alice and Bob compare the results they obtained on a trial-by-trial basis.
Our Bell test uses a version of the Clauser-Horne inequality \cite{Clauser1974,Eberhard1993,Bierhorst2015} where, according to local realism, \begin{eqnarray} \label{E:CH} P(\text{++}\mid ab) & \leq & P(\text{+0}\mid ab') + \notag \\ & &
P(\text{0+}\mid a'b)+P(\text{++}\mid a'b'). \end{eqnarray} The terms $P(\text{++}\mid ab)$ and $P(\text{++}\mid a'b')$ correspond to the probability that both Alice and Bob record detection events (++) when they choose the measurement settings $ab$ or $a'b'$, respectively. Similarly, the terms $P(\text{+0}\mid ab')$ and $P(\text{0+}\mid a'b)$ are the probabilities that only Alice or Bob record an event for settings $ab'$ and $a'b$, respectively. A local realistic model can saturate this inequality; however, the probability distributions of entangled quantum particles can violate it.
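Because any local realistic distribution is a convex mixture of deterministic strategies, it suffices to verify the inequality on the 16 strategies in which Alice and Bob each pre-assign an outcome to every setting. A minimal Python sketch of this check (encoding ``+'' as 1 and ``0'' as 0):

```python
from itertools import product

def ch_lhs_minus_rhs(Aa, Aap, Bb, Bbp):
    """CH combination P(++|ab) - P(+0|ab') - P(0+|a'b) - P(++|a'b')
    for a deterministic strategy: each argument is the pre-assigned
    outcome (1 for '+', 0 for '0') at settings a, a', b, b'."""
    p_pp_ab   = Aa * Bb          # ++ with settings a, b
    p_p0_abp  = Aa * (1 - Bbp)   # +0 with settings a, b'
    p_0p_apb  = (1 - Aap) * Bb   # 0+ with settings a', b
    p_pp_apbp = Aap * Bbp        # ++ with settings a', b'
    return p_pp_ab - p_p0_abp - p_0p_apb - p_pp_apbp

# Every deterministic strategy satisfies the inequality (value <= 0), and
# the all-'+' strategy saturates it (value == 0); general local
# hidden-variable models then obey it by linearity.
vals = [ch_lhs_minus_rhs(*s) for s in product([0, 1], repeat=4)]
assert all(v <= 0 for v in vals) and max(vals) == 0
```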
To quantify our Bell violation we construct a hypothesis test based on the inequality in Eq. (\ref{E:CH}). The null hypothesis we test is that the measured probability distributions in our experiment are constrained by local realism. Our evidence against this null hypothesis of local realism is quantified in a p-value that we compute from our measured data using a test statistic. Our test statistic takes all of the measured data from Alice's and Bob's trials and summarizes them into a single number (see the Supplemental Material for further details). The p-value is then the maximum probability that our experiment, if it is governed by local realism, could have produced a value of the test statistic that is at least as large as the observed value \cite{Shao1998}. Smaller p-values can be interpreted as stronger evidence against this hypothesis. These p-values can also be used as certificates for cryptographic applications, such as random number generation, that rely on a Bell test \cite{Pironio2009,Christensen2013}. We use a martingale binomial technique from \cite{Bierhorst2015} for computing the p-value that makes no assumptions about the distribution of events and does not require that the data be independent and identically distributed \cite{Gill2003a} as long as appropriate stopping criteria are determined in advance.
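As a simplified illustration of the flavor of this technique (the full analysis of \cite{Bierhorst2015} treats settings distributions and stopping rules more carefully): with uniformly random settings, local realism bounds the probability that a counted trial is of the $(\text{++}\mid ab)$ type by $1/2$, so a binomial tail bounds the p-value. A sketch in Python, using the single-pulse counts from Table \ref{t:pvalues}:

```python
from math import comb

def ch_pvalue_bound(n_stop, n_ppab):
    """Binomial-tail bound P[Bin(n_stop, 1/2) >= n_ppab]: a simplified
    form of the martingale analysis, assuming each of the n_stop counted
    trials is a (++|ab) event with probability at most 1/2 under local
    realism with uniformly random settings."""
    # Exact integer arithmetic: 0.5**n_stop would underflow for n_stop ~ 2400.
    tail = sum(comb(n_stop, j) for j in range(n_ppab, n_stop + 1))
    return tail / 2**n_stop

# Single-pulse counts: N(++|ab) = 1257 out of N_stop = 2376 counted events.
p = ch_pvalue_bound(2376, 1257)
print(f"{p:.1e}")  # close to the reported single-pulse p-value of 2.5e-3
```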
In our experiment, the source creates polarization-entangled pairs of photons and distributes them to Alice and Bob, located in distant labs. At the source location a mode-locked Ti:Sapphire laser running at a repetition rate of approximately \SI{79.3}{\mega\hertz} produces picosecond pulses centered at a wavelength of \SI{775}{\nano\meter} as shown in Fig. \ref{f:apparatus}. These laser pulses pump an apodized periodically poled potassium titanyl phosphate (PPKTP) crystal to produce photon pairs at a wavelength of \SI{1550}{\nano\meter} via the process of spontaneous parametric downconversion \cite{Dixon2014}. The downconversion system was designed using the tools available in \cite{spdcalc}. The PPKTP crystal is embedded in the middle of a polarization-based Mach-Zehnder interferometer that enables high-quality polarization-entangled states to be generated \cite{Bennink10}. Rotating the polarization analyzer angles at Alice and Bob, we measure the visibility of coincidence detections for a maximally entangled state to be $0.999 \pm 0.001$ in the horizontal/vertical polarization basis and $0.996 \pm 0.001$ in the diagonal/antidiagonal polarization basis (see \cite{uncertaintiesStatement} for information about the reported uncertainties). The entangled photons are then coupled into separate single-mode optical fibers with one photon sent to Alice and the other to Bob. Alice, Bob, and the source are positioned at the vertices of a nearly right-angle triangle. Due to constraints in the building layout, the photons travel to Alice and Bob in fiber optic cables that are not positioned along their direct lines of sight. While the photons are in flight toward Alice and Bob, their random number generators each choose a measurement setting. Each choice is completed before information about the entangled state, generated at the PPKTP crystal, could possibly reach the random number generators.
When the photons arrive at Alice and Bob, they are launched into free space, and each photon passes through a Pockels cell and polarizer that perform the polarization measurement chosen by the random number generators as shown in Fig. \ref{f:receiver}. After the polarizer, the photons are coupled back into a single-mode fiber and sent to superconducting nanowire single-photon detectors, each with a detection efficiency of $91 \pm 2\:\%$ \cite{Marsili2013}. The detector signal is then amplified and sent to a time tagger where the arrival time is recorded. We assume the measurement outcome is fixed when it is recorded by the time tagger, which happens before information about the other party's setting choice could possibly arrive, as shown in Fig. \ref{f:lightcone}(b).
\begin{figure}
\caption{
Minkowski diagrams for the spacetime events related to Alice (A) and the source (S) and
Bob (B) and the source (a), and Alice and Bob (b). All lightcones are shaded blue. Due
to the geometry of Alice, Bob, and the source, more than one spacetime diagram is
required. In a) the random number generators (RNGs) at Alice and Bob must finish picking
a setting outside the lightcone of the birth of an entangled photon pair. A total of 15
pump pulses have a chance of downconverting into an entangled pair of photons each time
the Pockels cells are on. The events related to the first pulse are not spacelike
separated, because Alice's RNG does not finish picking a setting before information
about the properties of the photon pair can arrive; pulses 2 through 11 are spacelike
separated. As shown in (b), pulses 12 through 15 are not spacelike separated as the
measurement is finished by Alice and Bob after information about the other party's
measurement setting could have arrived. In our experiment the events related to pulse 6
are the furthest outside of all relevant lightcones. }
\label{f:lightcone}
\end{figure}
\begin{figure}
\caption{ (a) The positions of Alice (A), Bob (B), and the source (S)
in the building where the experiment was carried out. The insets show a magnified
($\times 2$) view of Alice's and Bob's locations. The white dots are the location of
the random number generators (RNGs). The larger circle at each location has a radius of
\SI{1}{\meter} and corresponds to our uncertainty in the spatial position measurements.
Alice, Bob, and the source can be located anywhere within the green shaded regions and
still have their events be spacelike separated. Boundaries are plotted for aggregates
of one, three, five, and seven pulses. Each boundary is computed by keeping the
chronology of events fixed, but allowing the distance between the three parties to vary
independently. In (b) the p-value of each of the individual 15 pulses is shown.
Overlayed on the plot are the aggregate pulse combinations used in the contours in (a).
The statistical significance of our Bell violation does not appear to depend on
the spacelike separation of events. For reference and comparison purposes only, the
corresponding number of standard deviations for a given p-value (for a one-sided normal
distribution) are shown. }
\label{f:countours}
\end{figure}
Alice and Bob have system detection efficiencies of $74.7 \pm 0.3\:\%$ and $75.6 \pm 0.3\:\%$, respectively. We measure this system efficiency using the method outlined by Klyshko \cite{Klyshko80}. Background counts from blackbody radiation and room lights reduce our observed violation of the Bell inequality. Every time a background count is observed it counts as a detection event for only one party. These background counts increase the heralding efficiency required to close the detector loophole above 2/3 \cite{Eberhard1993}. To reduce the number of background counts, the only detection events considered are those that occur within a window of approximately \SI{625}{\pico\second} at Alice and \SI{781}{\pico\second} at Bob, centered around the expected arrival times of photons from the source. The probability of observing a background count during a single window is $8.9\times 10^{-7}$ for Alice and $3.2\times 10^{-7}$ for Bob, while the probability that a single pump pulse downconverts into a photon pair is $\approx 5 \times 10^{-4}$. These background counts in our system raise the efficiency needed to violate a Bell inequality from 2/3 to 72.5\:\%. Given our system detection efficiencies, our entangled photon production rates, entanglement visibility, and the number of background counts, we numerically determine the entangled state and measurement settings for Alice and Bob that should give the largest Bell violation for our setup. The optimal state is not maximally entangled \cite{Eberhard1993} and is given by:
\begin{eqnarray} \label{E:BellState}
\left|\psi \right\rangle = 0.961 \left|H_AH_B \right\rangle + 0.276 \left|V_AV_B \right\rangle, \end{eqnarray} where $H$ ($V$) denotes horizontal (vertical) polarization, and $A$ and $B$ correspond to Alice's and Bob's photons, respectively. From the simulation we also determine that Alice's optimal polarization measurement angles, relative to a vertical polarizer, are $\{a = 4.2^{\circ}, a' = -25.9^{\circ}\}$ while Bob's are $\{b = -4.2^{\circ}, b' = 25.9^{\circ}\}$.
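Under an idealized lossless model (an assumption; the actual optimization also accounts for efficiency and background), the predicted violation for this state and these angles can be checked directly. The sketch below assumes the convention that a polarizer at angle $\theta$ from vertical transmits $\cos\theta \left|V\right\rangle + \sin\theta \left|H\right\rangle$ (the sign convention is our assumption) and evaluates the four probabilities in Eq. (\ref{E:CH}):

```python
from math import sin, cos, radians

# Amplitudes of |psi> = 0.961|HH> + 0.276|VV> from Eq. (2).
cH, cV = 0.961, 0.276

def p_joint(ta, tb):
    """P(++): both photons pass polarizers at angles ta, tb from vertical,
    with |theta> = cos(theta)|V> + sin(theta)|H> (assumed convention)."""
    amp = cH * sin(ta) * sin(tb) + cV * cos(ta) * cos(tb)
    return amp ** 2

def p_single_alice(ta):
    """P(+) for Alice alone, tracing out Bob (cross terms vanish)."""
    return cH**2 * sin(ta)**2 + cV**2 * cos(ta)**2

def p_single_bob(tb):
    return cH**2 * sin(tb)**2 + cV**2 * cos(tb)**2

a, ap = radians(4.2), radians(-25.9)
b, bp = radians(-4.2), radians(25.9)

lhs = p_joint(a, b)                                    # P(++|ab)
rhs = (p_single_alice(a) - p_joint(a, bp)) \
    + (p_single_bob(b) - p_joint(ap, b)) \
    + p_joint(ap, bp)                                  # P(+0|ab') + P(0+|a'b) + P(++|a'b')
print(lhs - rhs)  # positive => the CH inequality is violated
```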
Synchronization signals enable Alice and Bob to define trials based only on local information. The synchronization signal runs at a frequency of \SI{99.1}{\kilo\hertz}, allowing Alice and Bob to perform 99,100 trials/s (\SI{79.3}{\mega\hertz}/800). This trial frequency is limited by the rate the Pockels cells can be stably driven. When the Pockels cells are triggered they stay on for $\approx$ \SI{200}{\nano\second}. This is more than 15 times longer than the \SI{12.6}{\nano\second} pulse-to-pulse separation of the pump laser. Therefore photons generated by the source can arrive in one of 15 slots while both Alice's and Bob's Pockels cells are on. Since the majority of the photon pulses arriving in these 15 slots satisfy the spacelike separation constraints, it is possible to aggregate multiple adjacent pulses to increase the event rate and statistical significance of the Bell violation. However, including too many pulses will cause one or more of the spacelike separation constraints to be violated. Because the probability per pulse of generating an entangled photon pair is so low, given that one photon has already arrived, the chance of getting a second event in the same Pockels cell window is negligible ($<1\:\%$).
\begin{figure}
\caption{ The p-value for different numbers of aggregate pulses as a function of the excess predictability, $\epsilon$, in Alice's and Bob's measurement settings. Larger levels of predictability correspond to a weakening of the assumption that the settings choices are physically independent of the photon properties Alice and Bob measure. As in Fig. \ref{f:countours}(b), the p-value equivalent confidence levels corresponding to the number of standard deviations of a one-sided normal distribution are shown for reference. }
\label{f:rngbias}
\end{figure}
Alice and Bob each have three different sources of random bits that they XOR together to produce their random measurement decisions (for more information see the Supplemental Material). The first source is based on measuring optical phase diffusion in a gain-switched laser that is driven above and below the lasing threshold. A new bit is produced every \SI{5}{\nano\second} by comparing adjacent laser pulses \cite{Abellan2015v2arxiv}. Each bit is then XORed with all past bits that have been produced (for more details see the Supplemental Material). The second source is based on sampling the amplitude of an optical pulse at the single-photon level in a short temporal interval. This source produces a bit on demand and is triggered by the synchronization signal. Finally, Alice and Bob each have a different predetermined pseudorandom source that is composed of various popular culture movies and TV shows, as well as the digits of $\pi$, XORed together. Suppose that a local realistic system, with the goal of producing a violation of the Bell inequality, were able to manipulate the properties of the photons emitted by the entanglement source before each trial. Provided that the randomness sources correctly extract their bits from the underlying processes of phase diffusion, optical amplitude sampling, and the production of cultural artifacts (such as the movie \textit{Back to the Future}), this powerful local realistic system would be required to predict the outcomes of all of these processes well in advance of the beginning of each trial to achieve its goal. Such a model would have elements of superdeterminism---the fundamentally untestable idea that all events in the universe are preordained.
Over the course of two days we took a total of 6 data runs with differing configurations of the experimental setup \cite{onlinedata}. Here we report the results from the final dataset that recorded data for 30 minutes (see the Supplemental Material for descriptions and results from all datasets). This is the dataset where the experiment was most stable and best aligned; small changes in coupling efficiency and the stability of the Pockels cells can lead to large changes in the observed violation. The events corresponding to the sixth pulse out of the 15 possible pulses per trial are the farthest outside all the relevant lightcones. Thus we say these events are the most spacelike separated. To increase our data rate we aggregate multiple pulses centered around pulse number 6. We consider different Bell tests using a single pulse (number 6), three pulses (pulses 5, 6, and 7), five pulses (pulses 4 through 8), and seven pulses (pulses 3 through 9). The joint measurement outcomes and corresponding p-values for these combinations are shown in Table \ref{t:pvalues}. For a single pulse we measure a p-value = $2.5 \times 10^{-3}$, for three pulses a p-value = $2.4\times 10^{-6} $, for five pulses a p-value = $5.9 \times 10^{-9}$, and for seven pulses a p-value = $2.0\times 10^{-7} $, corresponding to a strong violation of local realism.
If, trial-by-trial, a conspiratorial hidden variable (or attacker in cryptographic scenarios) has some measure of control over or knowledge about the settings choices at Alice and Bob, then they could manipulate the outcomes to observe a violation of a Bell inequality. Even if we weaken our assumption that Alice's and Bob's setting choices are physically independent from the source, we can still compute valid p-values against the hypothesis of local realism. We characterize the lack of physical independence with predictability of our random number generators. The ``predictability,'' $\mathcal{P}$, of a random number generator is the probability with which an adversary or local realistic system could guess a given setting choice. We use the parameter $\epsilon$, the ``excess predictability'' to place an upper bound on the actual predictability of our random number generators: \begin{eqnarray} \label{E:predictability} \mathcal{P} \leq \frac{1}{2}(1+\epsilon). \end{eqnarray} In principle, it is impossible to measure predictability through statistical tests of the random numbers, because they can be made to appear random, unbiased, and independent even if the excess predictability during each trial is nonzero. Extreme examples that could cause nonzero excess predictability include superdeterminism or a powerful and devious adversary with access to the devices, but subtle technical issues can never be entirely ruled out. Greater levels of excess predictability lead to lower statistical confidence in a rejection of local realism. In Fig. \ref{f:rngbias} we show how different levels of excess predictability change the statistical significance of our results \cite{Bierhorst2013} (see Supplemental Material for more details). We can make estimates of the excess predictability in our system. 
From additional measurements, we observe a bias of $(1.08 \pm 0.07) \times 10^{-4}$ in the settings reaching the XOR from the laser diffusion random source, which includes synchronization electronics as well as the random number generator. If this bias is the only source of predictability in our system, this level of bias would correspond to an excess predictability of approximately $2 \times 10^{-4}$. To be conservative we use an excess predictability bound that is fifteen times larger, $\epsilon_{p} = 3 \times 10^{-3}$ (see Supplemental Material for more details). If our experiment had excess predictability equal to $\epsilon_{p}$ our p-values would be increased to $5.9 \times 10^{-3}$, $2.4 \times 10^{-5}$, $2.3 \times 10^{-7}$, and $9.2 \times 10^{-6}$ for one, three, five, and seven pulses, respectively \cite{Bierhorst2013}. Combining the output of this random number generator with the others should lead to lower bias levels and a lower excess predictability, but even under the paranoid situation where a nearly superdeterministic local realistic system has complete knowledge of the bits from the other random number sources, the adjusted p-values still provide a rejection of local realism with high statistical significance.
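The direction of this effect can be illustrated by raising the per-trial bound in a simple binomial-tail p-value from $1/2$ to $(1+\epsilon)/2$. This substitution is only a sketch; the adjusted p-values quoted above come from the fuller analysis of \cite{Bierhorst2013}:

```python
from fractions import Fraction
from math import comb

def binomial_tail(n, k, q):
    """Exact P[Bin(n, q) >= k] in integer arithmetic (no float underflow
    for large n)."""
    q = Fraction(q)
    num, den = q.numerator, q.denominator
    tail = sum(comb(n, j) * num**j * (den - num)**(n - j)
               for j in range(k, n + 1))
    return tail / den**n

# Single-pulse counts from Table 1: N(++|ab) = 1257 of N_stop = 2376.
n, k = 2376, 1257
p_plain = binomial_tail(n, k, Fraction(1, 2))
# Excess predictability eps shifts the per-trial bound to (1 + eps)/2.
eps = Fraction(3, 1000)
p_adj = binomial_tail(n, k, (1 + eps) / 2)
assert p_adj > p_plain  # predictability can only weaken the violation
```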
\begin{table*}[]
\begin{tabular}{|c||c|c|c|c|c|c|c|} \hline Aggregate Pulses & $N(\text{++}\mid ab)$ & $N_{\text{stop}}$ & Total trials & P-value & Adjusted p-value & Timing Margin (ns) & Minimum distance (m) \\ \hline 1 & 1257 & 2376 & 175,654,992 & $2.5 \times 10^{-3}$ & $5.9 \times 10^{-3}$ & $63.5 \pm 3.7$ & $9.2$ \\ \hline 3 & 3800 & 7211 & 175,744,824 & $2.4\times 10^{-6}$ & $2.4 \times 10^{-5}$ & $50.9 \pm 3.7$ & $7.3$ \\ \hline 5 & 6378 & 12127 & 177,358,351 & $5.9\times 10^{-9}$ & $2.3 \times 10^{-7}$ & $38.3 \pm 3.7$ & $5.4$ \\ \hline 7 & 8820 & 16979 & 177,797,650 & $2.0\times 10^{-7}$ & $9.2 \times 10^{-6}$ & $25.7 \pm 3.7$ & $3.5$ \\ \hline \end{tabular} \caption{\label{t:pvalues} P-value results for different numbers of
aggregate pulses. Here $N(\text{++}\mid ab)$ refers to the number of times Alice and
Bob both detect a photon with settings $a$ and $b$ respectively. Before analyzing the
data a stopping criterion, $N_{\text{stop}}$, was chosen. This stopping criterion refers
to the total number of events considered that have the settings and outcomes
specified by the terms in Eq. (\ref{E:CH}), $N_{\text{stop}} = N(\text{++}\mid ab) +
N(\text{+0}\mid ab') + N(\text{0+}\mid a'b) + N(\text{++}\mid a'b')$. After this number
of trials the p-value is computed and the remaining trials discarded. Such pre-determined stopping criteria are necessary for the hypothesis test we use (see
Supplemental Material for more details). The total trials include all trials up to the
stopping criteria regardless of whether a photon is detected. The adjusted p-value
accounts for the excess predictability we estimate from measurements of one of our
random number generators. As discussed in the text, the time difference between Bob
finishing his measurement and the earliest time at which information about Alice's measurement choice could arrive at Bob
sets the margin of timing error that can be tolerated and still have all events
guaranteed to be spacelike separated. We also give the minimum distance between each
party and its boundary line (shown in Fig. \ref{f:countours}(a)) that guarantees
satisfaction of the spacelike separation constraints. In the Supplemental Material the
frequencies of each combination of settings choices for 5 aggregate pulses are reported.} \end{table*}
Satisfying the spacetime separation constraints in Fig. \ref{f:lightcone} requires precise measurements of the locations of Alice, Bob, and the source as well as the timing of all events. Using a combination of position measurements from a global positioning system (GPS) receiver and site surveying, we determine the locations of Alice, Bob, and the source with an uncertainty of $< \SI{1}{\meter}$. This uncertainty is set by the physical size of the cryostat used to house our detectors and the uncertainty in the GPS coordinates. There are four events that must be spacelike separated: Alice's and Bob's measurement choice must be fixed before any signal emanating from the photon creation event could arrive at their locations, and Alice and Bob must finish their measurements before information from the other party's measurement choice could reach them. Due to the slight asymmetry in the locations of Alice, Bob, and the source, the time difference between Bob finishing his measurement and information possibly arriving about Alice's measurement choice is always shorter than the time differences of the other three events as shown in Fig. \ref{f:lightcone}(b). This time difference serves as a kind of margin; our system can tolerate timing errors as large as this margin and still have all events remain spacelike separated. For one, three, five, and seven aggregate pulses this corresponds to a margin of $63.5 \pm 3.7$ ns, $50.9 \pm 3.7$ ns, $38.3 \pm 3.7$ ns, and $25.7 \pm 3.7$ ns, respectively, as shown in Table \ref{t:pvalues}. The uncertainty in these timing measurements is dominated by the \SI{1}{\meter} positional uncertainty (see Supplemental Material for further details on the timing measurements).
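The margin is a function of positions and event times alone. A minimal sketch, with hypothetical coordinates and timings (illustration only; the experiment's values come from the GPS and survey measurements described above):

```python
from math import dist

C = 299_792_458.0  # speed of light, m/s

def spacelike_margin(pos_alice, pos_bob, t_choice_alice, t_done_bob):
    """Seconds to spare before information about Alice's setting choice,
    travelling at c from her position, could reach Bob: a positive value
    means Bob's outcome is fixed strictly outside that lightcone."""
    light_travel = dist(pos_alice, pos_bob) / C
    return (t_choice_alice + light_travel) - t_done_bob

# Hypothetical geometry (metres) and times (seconds), illustration only:
# Alice and Bob ~185 m apart; Bob finishes 550 ns after Alice's choice.
margin = spacelike_margin((0.0, 0.0), (185.0, 0.0), 0.0, 550e-9)
print(f"{margin * 1e9:.1f} ns")
```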
A way to visualize and further quantify the spacelike separation of events is to compute how far Alice, Bob, and the source could move from their measured position and still be guaranteed to satisfy the locality constraints, assuming that the chronology of all events remains fixed. In Fig. \ref{f:countours}(a) Alice, Bob, and the source locations are surrounded by shaded green regions. As long as each party remains anywhere inside the boundaries of these regions their events are guaranteed to be spacelike separated. There are specific configurations where all three parties can be outside the boundaries and still be spacelike separated, but here we consider the most conspiratorial case where all parties can collude with one another. The boundaries are overlayed on architectural drawings of the building in which the experiment was performed. Four different boundaries are plotted, corresponding to the Bell test performed with one, three, five, and seven aggregate pulses. Minimizing over the path of each boundary line, the minimum distance that Alice, Bob, and the source are located from their respective boundaries is \SI{9.2}{\meter}, \SI{7.3}{\meter}, \SI{5.4}{\meter}, and \SI{3.5}{\meter} for aggregates of one pulse, three pulses, five pulses, and seven pulses, respectively. For these pulse configurations we would have had to place our source and detection systems physically in different rooms (or even move outside of the building) to compromise our spacelike separation. Aggregating more than seven pulses leads to boundaries that are less than three meters away from our measured positions. In these cases we are not able to make strong claims about the spacelike separation of our events.
Finally, as shown in Fig. \ref{f:countours}(b), we can compute the 15 p-values for each of the time slots we consider that photons from the source can arrive in every trial. Photons arriving in slots 2 through 11 are spacelike separated while photons in slots 12 through 15 are not. The photons arriving in these later slots are measured after information from the other party's random number generator could arrive as shown in Fig. \ref{f:lightcone}(b). It appears that spacelike separation has no discernible effect on the statistical significance of the violation. However, we do see large slot-to-slot fluctuation in the calculated p-values. We suspect that this is due to instability in the applied voltage when the Pockels cell is turned on. In this case photons receive slightly different polarization rotations depending on which slot they arrive in, leading to non-ideal measurement settings at Alice and Bob. It is because of this slot-to-slot variation that the aggregate of seven pulses has a computed p-value larger than the five-pulse case. Fixing this instability and using more sophisticated hypothesis test techniques \cite{Zhang2011,Zhang2013,knill2015} will enable us to robustly increase the statistical significance of our violation for the seven pulse case.
The experiment reported here is a commissioning run of the Bell test machine we eventually plan to use to certify randomness. The ability to include multiple pulses in our Bell test highlights the flexibility of our system. Our Bell test machine is capable of high event rates, making it well suited for generating random numbers required by cryptographic applications \cite{Pironio2009}. Future work will focus on incorporating our Bell test machine as an additional source of real-time randomness into the National Institute of Standards and Technology's public random number beacon (https://beacon.nist.gov).
It has been 51 years since John Bell formulated his test of local realism. In that time his inequality has shaped our understanding of entanglement and quantum correlations, led to the quantum information revolution, and transformed the study of quantum foundations. Until recently it has not been possible to carry out a complete and statistically significant loophole-free Bell test. Using advances in random number generation, photon source development, and high-efficiency single-photon detectors, we are able to observe a strong violation of a Bell inequality that is loophole free, meaning that we only need to make a minimal set of assumptions. These assumptions are that our measurements of locations and times of events are reliable, that Alice's and Bob's measurement outcomes are fixed at the time taggers, and that during any given trial the random number generators at Alice and Bob are physically independent of each other and the properties of the photons being measured. It is impossible, even in principle, to eliminate a form of these assumptions in any Bell test. Under these assumptions, if a hidden variable theory is local it does not agree with our results, and if it agrees with our results then it is not local. \\ \\
\begin{acknowledgments} We thank Todd Harvey for assistance with optical fiber installation, Norman Sanford for the use of lab space, Kevin Silverman, Aephraim M. Steinberg, Rupert Ursin, Marissa Giustina, Stephen Jordan, Dietrich Leibfried, and Paul Lett for helpful discussions, Nik Luhrs and Kristina Meier for helping with the electronics, Andrew Novick for help with the GPS measurements, Joseph Chapman and Malhar Jere for designing the cultural pseudorandom numbers, and Stephen Jordan, Paul Lett, and Dietrich Leibfried for constructive comments on the manuscript. We thank Conrad Turner Bierhorst for waiting patiently for the computation of p-values. We dedicate this paper to the memory of our coauthor, colleague, and friend, Jeffrey Stern. We acknowledge support for this project provided by: DARPA (LKS, MSA, AEL, SDD, MJS, VBV, TG, RPM, SWN, WHF, FM, MDS, JAS) and the NIST Quantum Information Program (LKS, MSA, AEL, SDD, MJS, VBV, TG, SG, PB, JCB, AM, RPM, EK, SWN); NSF grant No. PHY 12-05870 and MURI Center for Photonic Quantum Information Systems (ARO/ARDA Program DAAD19-03-1-0199) DARPA InPho program and the Office of Naval Research MURI on Fundamental Research on Wavelength-Agile High-Rate Quantum Key Distribution (QKD) in a Marine Environment, award \#N00014-13-0627 (BGC, MAW, DRK, PGK); NSERC, CIFAR and Industry Canada (EMS, YZ, TJ); NASA (FM, MDS, WHF, JAS); European Research Council project AQUMET, FET Proactive project QUIC, Spanish MINECO project MAGO (Ref. FIS2011-23520) and EPEC (FIS2014-62181-EXP), Catalan 2014-SGR-1295, the European Regional Development Fund (FEDER) grant TEC2013-46168-R, and Fundacio Privada CELLEX (MWM, CA, WA, VP); New Brunswick Innovation Foundation (DRH). Part of the research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. 
This work includes contributions of the National Institute of Standards and Technology, which are not subject to U.S. copyright. \end{acknowledgments}
\end{document}
\begin{document}
\title{One-dimensional symmetry for solutions of Allen-Cahn fully nonlinear equations} \begin{abstract}
This article presents some qualitative results for solutions of the fully nonlinear elliptic equation
$F(\nabla u, D^2 u) + f(u)=0$ in ${\rm I}\!{\rm R}^N$. Precisely, under some additional assumptions on $f$, if $-1\leq u\leq 1$ and $\lim _{x_1\rightarrow \pm \infty} u(x_1, x^\prime) = \pm 1$ uniformly with respect to $x^\prime$, then the solution depends only on $x_1$. \end{abstract}
\section{Introduction}
The sliding method was crystallized in \cite{BN} by Berestycki and Nirenberg in order to prove monotonicity of solutions of
\begin{equation}\label{eqq1} \Delta u +f(u)=0\quad \mbox{ in } \Omega\subset{\rm I}\!{\rm R}^N.
\end{equation} This powerful method uses two features of the Laplacian: the comparison principle and invariance with respect to translation. The idea is as follows. Fix a direction; first slide in that direction enough for the intersection of the slided domain with $\Omega$ to be small enough, or ``narrow enough'', for the maximum principle to hold. This allows one to compare the values of the solution at different points of the domain. Then continue ``sliding'' until reaching a critical position.
Coupling simplicity with flexibility, the sliding method of \cite{BN} has been incredibly influential; one can count over two hundred citations of the work (e.g. through Google Scholar). We shall here only recall the work by Berestycki, Hamel and Monneau \cite{BHM}, where the technique is used to prove the so-called Gibbons conjecture. This conjecture was simultaneously and independently solved by Barlow, Bass and Gui \cite{BBG} and by Farina \cite{F}. Precisely, in \cite{BHM} they prove that if $f$ is a $C^1([-1,1])$ function, decreasing near $-1$ and $1$, with $f(-1)=f(1)=0$ (typically, $f(u)=u-u^3$), then the solutions of (\ref{eqq1}) in ${\rm I}\!{\rm R}^N$ that converge uniformly to $1$ or $-1$ at infinity in some fixed direction, say $x_1$, are in fact one dimensional, i.e. functions of $x_1$ alone. In \cite{BHM} the sliding method is coupled with a maximum principle (comparison principle) in unbounded domains contained in some cone.
As is well known, the Gibbons conjecture is a weak form of the famous De Giorgi conjecture, which states that for $f(u)=u-u^3$ the level sets of monotone, entire solutions of (\ref{eqq1}) are hyperplanes for $N\leq 8$. This result has been proved in dimensions 2 and 3, respectively, by Ghoussoub and Gui \cite{GG} and by Ambrosio and Cabr\'e \cite{AC}, while Del Pino, Kowalczyk and Wei \cite{DKW} have proved that it does not hold for $N>8$ by constructing a counterexample. Savin has proved the case $4\leq N\leq 8$ under the further condition that the limit be $\pm 1$ in a direction at infinity; in that case the condition is not assumed to be uniform with respect to the other variables. See also \cite{VSS} for analogous results concerning the $p$-Laplacian.
In the present note we extend the Gibbons conjecture to fully nonlinear operators. Precisely, we consider entire bounded solutions of
\begin{equation}\label{eq1}
F(\nabla u, D^2 u) + f(u)=0\quad \mbox{in}\quad {\rm I}\!{\rm R}^N,
\end{equation}
where $F(\nabla u, D^2 u):=|\nabla u|^\alpha \tilde F(D^2u)$ with $\alpha>-1$ and $\tilde F$ is uniformly elliptic. With the same conditions on the nonlinearities of $f$ as in \cite{BHM}, we prove that for any solution such that
$\lim _{x_1\rightarrow \pm \infty} u(x_1, x^\prime) = \pm 1$ uniformly with respect to $x^\prime$ and such that $|\nabla u |>0$ in ${\rm I}\!{\rm R}^N$, one has $\partial_{x_1} u\geq 0$ and $u$ is a function of $x_1$ alone.
Several remarks are in order. Let us note that in the case $\alpha\leq 0$, recent regularity results \cite{BD9} show that locally Lipschitz solutions are in fact ${\cal C}^{1, \beta}$ for some $\beta <1$, and this regularity is sufficient to prove the results presented here. For $\alpha>0$ the ${\cal C}^{1}$ regularity is a consequence of the hypothesis on the positivity of the norm of the gradient.
A key ingredient in the proof of this result, which is of independent interest, is the following strong comparison principle.
\begin{prop}\label{strict0}
Suppose that $\Omega$ is some open set, and that $x_o\in\Omega$ and $r>0$ are such that $B(x_o, r) \subset \Omega$.
Suppose that $f$ is ${\cal C}^1$ on ${\rm I}\!{\rm R}$ , and that $u$ and $v$ are, respectively, ${\cal C}^1$ bounded sub- and super-solutions of
$$F(\nabla w, D^2 w)+ f(w) =0\quad \mbox{in}\quad \Omega$$ such that $u\geq v$ and $\nabla v \neq 0$ (or $\nabla u\neq 0$) in $B(x_o, r)$. Then
either $u> v$ or $u\equiv v$ in $B(x_o, r)$.
\end{prop}
Observe that the condition that the gradient be different from zero cannot be removed. Indeed, for any $m,k\in {\bf Z}$ with $k\leq m$ the functions
$$u_{k,m}(x)=\left\{\begin{array}{lc}
1 & \mbox{for}\ x_1\geq (2m+2)\pi\\
\cos x_1 & \mbox{for}\ (2k+1)\pi\leq x_1\leq (2m+2)\pi\\
-1 & \mbox{for}\ x_1\leq (2k+1)\pi
\end{array}
\right.
$$
are viscosity solutions of
$$|\nabla u|^2(\Delta u)+(u-u^3)=0,$$ and they are ${\cal C}^{1,\beta}$ for all $\beta <1$.
Observe that, e.g., $u_{0,0}\geq u_{0,i}$ for all $i\geq 1$ and $u_{0,0}(2\pi,y)=u_{0,i}(2\pi,y)$, but the functions do not coincide.
This example suggests that there may be solutions that are not one dimensional if the condition on the gradient is removed.
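The identity behind this example is elementary to check symbolically. The following sketch (an illustration only, assuming the sympy library is available) verifies that the middle piece $u=\cos x_1$ of the functions $u_{k,m}$ satisfies $|\nabla u|^2\Delta u + u - u^3 = 0$ identically, the operator reducing here to a one-dimensional one.

```python
# Symbolic check (sketch, assuming sympy): for u(x) = cos(x), the residual
# |u'|^2 u'' + (u - u^3) vanishes identically, since
# sin^2(x) * (-cos(x)) + cos(x) - cos^3(x) = 0.
import sympy as sp

x = sp.symbols('x', real=True)
u = sp.cos(x)
residual = sp.diff(u, x)**2 * sp.diff(u, x, 2) + (u - u**3)
assert sp.simplify(residual) == 0
print(sp.simplify(residual))
```

This confirms that each $u_{k,m}$ solves the equation in the classical sense away from the gluing points, where the vanishing gradient makes the viscosity conditions degenerate.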
When $\alpha=0$, De Silva and Savin \cite{DSS} have proved the analogue of the De Giorgi conjecture for uniformly elliptic operators in dimension 2. With $f$ as above, they prove that if there exists a one dimensional monotone solution, i.e. $g:{\rm I}\!{\rm R}\rightarrow [-1,1]$ such that $u(x)=g(\eta\cdot x)$ is a solution of \begin{equation}\label{eqq2} \tilde F(D^2u)+f(u)=0\quad\mbox{in}\quad{\rm I}\!{\rm R}^2 \end{equation} satisfying $\lim_{t\rightarrow\pm\infty}g(t)=\pm1$, then all monotone bounded solutions of (\ref{eqq2}) are one dimensional, i.e. their level sets are straight lines.
Let us mention that, without any further assumptions on $f$, solutions may not exist. Indeed, let $\tilde F(D^2u)= {\cal M}^+_{a,A} (D^2 u)$ where, for any symmetric matrix $M$ with eigenvalues $e_i$, $${\cal M}^+_{a,A}(M)=a\sum_{e_i<0} e_i +A\sum_{e_i>0} e_i.$$ Then, as shown in the last section, for $a<A$ there are no one dimensional solutions of $$ {\cal M}^+_{a,A} (D^2 u)+u-u^3=0$$ that satisfy the asymptotic conditions. In that section we study conditions on $f$ that guarantee existence of solutions of the ODE
$$ |u^\prime |^\alpha {\cal M}^+_{a,A} (u^{\prime\prime})+f(u)=0$$ that satisfy $\lim_{x\rightarrow\pm\infty}u(x)=\pm 1$.
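The extremal operator ${\cal M}^+_{a,A}$ is straightforward to evaluate from the spectral decomposition. The following numerical sketch (an illustration only, assuming numpy; the helper name \texttt{pucci\_plus} is ours) evaluates the eigenvalue formula above and checks, on a random instance, the uniform ellipticity inequality $a\,tr(N)\leq {\cal M}^+(M+N)-{\cal M}^+(M)\leq A\,tr(N)$ for $N\geq 0$ used throughout the paper.

```python
# Numerical sketch (illustration only; assumes numpy) of the Pucci operator
# M^+_{a,A}(M) = a * sum_{e_i < 0} e_i + A * sum_{e_i > 0} e_i.
import numpy as np

def pucci_plus(M, a, A):
    """Evaluate M^+_{a,A} on a symmetric matrix M via its eigenvalues."""
    e = np.linalg.eigvalsh(M)
    return a * e[e < 0].sum() + A * e[e > 0].sum()

rng = np.random.default_rng(0)
a, A = 0.5, 2.0
X = rng.standard_normal((3, 3))
M = (X + X.T) / 2                      # a random symmetric matrix

# Equivalent form A*tr(M^+) - a*tr(M^-), with M^{+-} the positive/negative parts
e = np.linalg.eigvalsh(M)
alt = A * e[e > 0].sum() - a * (-e[e < 0]).sum()
assert np.isclose(pucci_plus(M, a, A), alt)

# Uniform ellipticity: a*tr(N) <= M^+(M+N) - M^+(M) <= A*tr(N) for N >= 0
Y = rng.standard_normal((3, 3))
N = Y @ Y.T                            # positive semidefinite
diff = pucci_plus(M + N, a, A) - pucci_plus(M, a, A)
assert a * np.trace(N) - 1e-10 <= diff <= A * np.trace(N) + 1e-10
```

The ellipticity check holds exactly because ${\cal M}^+_{a,A}(M)=\sup\{tr(A'M):\ aI\leq A'\leq AI\}$, so the increment over any $N\geq 0$ is pinched between $a\,tr(N)$ and $A\,tr(N)$.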
While completing this work, we received a paper by Farina and Valdinoci \cite{FV}, which treats the Gibbons conjecture in a very general setting that includes the case $\alpha = 0$.
\section{Assumptions and known results}
In the whole paper we shall suppose the following hypotheses on the operator $F$.
Let $S$ be the set of $N\times N$ symmetric matrices, and let $\alpha >-1$. Then $F$ is defined on $({\rm I}\!{\rm R}^N\setminus\{0\})\times S $ by \begin{equation}\label{deff}
F( p, M) = |p|^\alpha \tilde F( M), \end{equation} where $\tilde F$ satisfies
$$\tilde F(tM)=t\tilde F(M)\quad\mbox{ for any }\quad t\in {\rm I}\!{\rm R}^+, M\in S,$$ and there exist $A\geq a>0$ such that for any $M$ and any $N\in S$ such that $N\geq 0$ \begin{equation}\label{eqaAF} a tr(N)\leq \tilde F(M+N)-\tilde F(M) \leq A tr(N). \end{equation}
\begin{exe}
1) Let $0< a < A$ and let ${\cal M}_{a,A}^+$ be the Pucci operator
${\cal M}_{a,A}^+ (M) = Atr(M^+)-a tr(M^-)$ where $M^\pm$ are the positive and negative part of $M$, and
${\cal M}_{a,A}^-(M)=- {\cal M}_{a,A}^+ (-M)$.
Then $F$ defined as
$$F( p,M) = |p|^\alpha {\cal M}_{a,A}^\pm (M) $$
satisfies the assumptions.
2) Let $B$ be a symmetric positive definite matrix; then $F( p,M) = |p|^\alpha\, tr(BM)$ is another example of an operator satisfying the assumptions.
\end{exe} We now recall what we mean by viscosity solutions in our context:
\begin{defi}\label{def1}
Let $\Omega$ be a bounded domain in ${\rm I}\!{\rm R}^N$, let $g$ be a continuous function on $\Omega\times {\rm I}\!{\rm R}$, then $v$, continuous on $\overline{\Omega}$ is called a viscosity super-solution (respectively sub-solution) of $F(\nabla u,D^2u)=g(x,u)$ if for all $x_0\in \Omega$,
-Either there exists an open ball $B(x_0,\delta)$, $\delta>0$ in $\Omega$ on which $v$ is a constant $ c $ and $0\leq g(x,c)$, for all $x\in B(x_0,\delta)$ (respectively $0\geq g(x, c)$ for all $x\in B(x_0,\delta)$)
-Or
$\forall \varphi\in {\mathcal C}^2(\Omega)$, such that $v-\varphi$ has a local minimum (respectively local maximum) at $x_0$ and $\nabla\varphi(x_0)\neq 0$, one has $$ F( \nabla\varphi(x_0),
D^2\varphi(x_0))\leq g(x_0,v(x_0)). $$ (respectively $$F( \nabla\varphi(x_0),
D^2\varphi(x_0))\geq g(x_0,v(x_0))).$$
A viscosity solution is a function which is both a super-solution and a sub-solution. \end{defi}
\begin{rema}
When $F$ is continuous in $p$, and $F(0,0)=0$, this definition is equivalent to the classical definition of viscosity solutions, as in the User's guide \cite{CIL}. \end{rema} We now give a definition that will be needed in the statement of our main theorem.
\begin{defi}\label{ddd} We shall say that $|\nabla u|\geq m>0$ in $\Omega$ in the viscosity sense, if for all $ \varphi\in {\mathcal C}^2(\Omega)$, such that $u-\varphi$ has a local minimum or a local maximum at some $x_0\in\Omega$,
$$|\nabla\varphi(x_0)|\geq m.$$ \end{defi}
In our context, since the solutions considered have their gradient different from zero everywhere, the viscosity solutions can be understood in the classical sense.
We begin by recalling some of the results obtained in \cite{BD5} which will be needed in this article.
\begin{theo}\label{thcomp1}
Suppose that $c$ is a continuous and bounded function satisfying $c\leq 0$.
Suppose that $f_1$ and $f_2$ are continuous and bounded and that $u$ and $v$ satisfy \begin{eqnarray*}
F( \nabla u, D^2 u)+c(x)|u|^\alpha u& \geq & f_1\quad \mbox{in}\quad \Omega, \\ F( \nabla v,D^2 v)
+ c(x) |v|^{\alpha}v & \leq & f_2 \quad \mbox{in}\quad \Omega , \\ u \leq v && \quad \mbox{on}\quad \partial\Omega. \end{eqnarray*}
If $f_2< f_1$ then $u \leq v$ in $\Omega$. Furthermore, if $c<0$ in $\Omega$ and $f_2\leq f_1$ then $u \leq v$ in $\Omega$. \end{theo}
\begin{prop}\label{remhopf}
Suppose that ${\cal O}$ is a smooth bounded domain. Let
$u$ be a solution of
\begin{equation}\label{123}
F( \nabla u, D^2 u) \leq 0 \quad \mbox{in }\quad {\cal O}. \end{equation} If there exists some constant $c_o$, such that $u>c_o$ inside ${\cal O}$ and $u(\bar x)=c_o$ with $\bar x\in \partial {\cal O}$, then $$\liminf_{t\rightarrow 0^+} \frac{u(\bar x-t\vec n)-u(\bar x)}{t}>0,$$ where $\vec n$ is the outer normal to $\partial {\cal O}$ at $\bar x$. \end{prop}
\begin{rema} \label{remrem} In particular, Proposition \ref{remhopf} implies that a nonconstant super-solution of (\ref{123}) in a domain $\Omega$ has no interior minimum.
If $c_o = 0$, the result can be extended in the following manner: suppose that $\beta\geq \alpha $, that $c$ is continuous and bounded, and that $u$ is a nonnegative solution of
$$F(\nabla u, D^2 u) + c(x) u^{1+\beta} \leq 0$$
then either $u\equiv 0$ or $u>0$ in $\Omega$. In the latter case, if $u = 0$ at some point $x_o\in \partial \Omega$, then $\partial_{\vec n} u(x_o)>0$.
\end{rema}
We now recall the regularity results obtained in \cite{BD9}.
\begin{theo}\label{reg}
Suppose that $\Omega$ is a bounded ${\cal C}^2$ domain and
$\alpha \leq 0$. Suppose that $g$ is continuous on $\Omega \times {\rm I}\!{\rm R}$. Then the bounded solutions of
\begin{equation} \label{012}\left\{ \begin{array}{lc}
F(\nabla u, D^2 u) = g(x, u(x)) &\mbox{ in }\ \Omega,\\
u = 0 &\ \mbox{ on } \ \partial \Omega,
\end{array}\right.
\end{equation}
satisfy
$u\in {\cal C}^{1,\beta}(\overline{\Omega})$, for some $\beta \in (0, 1)$.
Furthermore if $\Omega$ is a domain (possibly unbounded) of ${\rm I}\!{\rm R}^N$ and if $u$ is bounded and locally Lipschitz then $u\in {\cal C}_{loc}^{1,\beta}(\Omega)$ for some $\beta\in (0,1)$.
\end{theo}
When $\alpha>0$, ${\cal C}^1$ regularity results are not known except in the one dimensional or radial cases. Here, however, since the solutions that we consider have their gradient bounded away from zero, this regularity is just a consequence of classical results and a priori estimates.
Indeed, the next theorem is just an application of Theorem 1.2 of \cite{CDV}, which in turn is an extension of Caffarelli's classical result: \begin{theo}\label{IA} Suppose that $\Omega$ is a (possibly unbounded) domain, and that $g$ is ${\cal C}^1$ and bounded. Let $u$ be a bounded solution of \begin{equation}\label{1a}
F(\nabla u, D^2 u) = g(u)\ \mbox{ in }\ \Omega.
\end{equation}
If $|\nabla u|\geq m>0$ in $\Omega$ in the sense of Definition \ref{ddd}, there exists $\beta\in (0,1)$ and $C=C(a,A,N,|g(u)|_\infty, m)$ such that if $B(y, \rho) \subset \Omega$,
\begin{equation} \label{eqholdloc} \|u\|_{{\cal C}^{1,\beta}(B(y, \frac{\rho}{2}))}\leq C \sup_ {B(y, \rho)}|u|.
\end{equation}
\end{theo} {\em Proof.} We introduce the operator:
$$G(v,\nabla v, D^2v):=\tilde F(D^2v)-g(v)\sup\left(|\nabla v|,\frac{m}{2}\right)^{-\alpha}.$$
If $u$ is a solution of (\ref{1a}) such that in the viscosity sense $|\nabla u|\geq m>0$, then it is a solution of $$ G(u,\nabla u, D^2 u)=0\quad \mbox{in}\quad \Omega.$$ Indeed, e.g. if $\varphi\in {\cal C}^2$ is such that
$(u-\varphi)(x)\geq (u-\varphi)(\bar x)$ for some $\bar x\in\Omega$, then $|\nabla \varphi |(\bar x)\geq m$ and
$$|\nabla \varphi|^\alpha(\bar x) \tilde F(D^2\varphi(\bar x))\geq g(u(\bar x))\Rightarrow \tilde F(D^2\varphi (\bar x))- |\nabla \varphi (\bar x)|^{-\alpha} g(u(\bar x))\geq 0.$$ In order to apply Theorem 1.2 of \cite{CDV}, it is enough to remark that $G$ does not depend on $x$, and therefore the condition on the modulus of continuity is automatically satisfied.
Furthermore,
the dependence on the gradient is Lipschitz, where the Lipschitz constant depends on $m$ and $|g(u)|_\infty$.
Applying Theorem 1.2 of \cite{CDV} we obtain the above estimate and $u\in {\cal C}^{1,\beta}(\Omega)$. This ends the proof.
\section{Comparison principles}
As mentioned in the introduction, we begin by proving a strong comparison principle, that extends the one obtained in \cite{BD9}.
\begin{prop}\label{strict}
Suppose that $\Omega$ is some open subset of ${\rm I}\!{\rm R}^N$, $f$ is ${\cal C}^1$ on ${\rm I}\!{\rm R}$ . Let $u$ and $v$ be ${\cal C}^1$ bounded sub-solution and super-solution of
$$F(\nabla u, D^2 u)+ f(u) =0\quad \mbox{in}\ \Omega.$$ Suppose that ${\cal O}$ is some connected subset of $\Omega$, with $u\geq v$ and $\nabla v \neq 0$ (or $\nabla u\neq 0$) on ${\cal O}$ , then
either $u> v$ or $u\equiv v$ in ${\cal O}$. \end{prop} \begin{rema} Of course when $\alpha=0$ the strong comparison principle is classical and holds without requiring that the gradient be different from zero. \end{rema} \noindent{\em Proof of Proposition \ref{strict}.}
We write the proof in the case $\alpha< 0$, the changes needed when $\alpha>0$ being obvious.
We argue as in \cite{BD9}. Suppose that $x_o$ is some point where $u(x_o)>v(x_o)$ (if no such point exists, there is nothing to prove).
Suppose by contradiction that there exists some point $x_1$ such that $u(x_1) = v(x_1)$. It is clear that it can be chosen in such a way that, for $R = |x_1-x_o|$,
$u>v$ in $B(x_o, R)$ and $x_1$ is the only point in the closure of that ball on which $u$ and $v$ coincide. Without loss of generality, one can assume that $B(x_o, {3R\over 2}) \subset {\cal O}$.
We can assume without loss of generality that $v$ is the function whose gradient is bounded away from zero.
Let then $L_1 = \inf_{B(x_o, {3R\over 2})} |\nabla v|>0$, $L_2 = \sup_{B(x_o, {3R\over 2})} |\nabla v|$.
We will prove that there exist two constants $c>0$ and $\delta >0 $ such that
$$ u \geq v + \delta ( e^{-c|x-x_o|}- e^{-3cR\over 2})\equiv v+ w\quad \mbox{in}\quad {R\over 2}\leq |x-x_o| = r\leq {3R\over 2}.$$ This will contradict the fact that $u(x_1) = v(x_1)$.
Let $\delta \leq \displaystyle \min_{|x-x_o|= {R\over 2} }(u-v)$, so that
$$u\geq v+w\quad \mbox{on} \quad
\partial\left(B(x_o, {3R\over 2})\setminus \overline{B(x_o, {R\over 2}})\right).$$ Define $$\gamma(x) = \left\{ \begin{array}{lc}
{f(u(x))-f(v(x))\over u(x)-v(x)} &\mbox{ if } \ u(x) \neq v(x)\\
f^\prime (u(x)) & \mbox{ if } \ u(x) = v(x).
\end{array}\right.$$
Since $f$ is ${\cal C}^1$ and the functions $u$ and $v$ are bounded, $\gamma$ is continuous and bounded.
We write
$$ f(u) =\gamma (x) (u-v) + f(v), $$
$$ F(\nabla u, D^2 u) -(|\gamma |_\infty+1) (u-v) = -f(v)+ (-\gamma-|\gamma|_\infty-1) (u-v)\leq F(\nabla v, D^2 v). $$ We shall prove that, for $c$ chosen conveniently,
$$F(\nabla v, D^2 v) < F(\nabla (v+ w), D^2(v+ w)) -(|\gamma|_\infty+1)
w,$$
this will imply that
$$F(\nabla u, D^2 u)-(|\gamma|_\infty+1) u \leq F(\nabla (v+ w), D^2(v+ w)) -(|\gamma|_\infty+1) (v+ w).$$ Let $\varphi$ be some test function for $v$ from above; a simple calculation on $w$ implies that, if $c \geq {4A(N-1)\over aR}$, then
\begin{eqnarray*}
|\nabla \varphi+ \nabla w|^\alpha &\cdot&\tilde F (D^2 \varphi+ D^2 w)-(|\gamma|_\infty+1) w\\
&\geq& |\nabla \varphi+ \nabla w|^\alpha \tilde F (D^2 \varphi) + |\nabla \varphi+ \nabla w|^\alpha {\cal M}^- (D^2 w)-(|\gamma|_\infty+1) w \\
&\geq& |\nabla \varphi+ \nabla w|^\alpha { F(\nabla \varphi, D^2\varphi)\over |\nabla \varphi|^\alpha} +\\
&& + |\nabla \varphi+ \nabla w|^\alpha\frac{ac^2}{2} \delta e^{-cr} -(|\gamma|_\infty+1) \delta e^{-cr}. \end{eqnarray*}
We also impose $\delta < {R L_1e\over 16 }$ so that $|\nabla w|\leq {|\nabla \varphi|\over 8}$; then the inequalities
$$||\nabla \varphi+\nabla w|^\alpha-|\nabla \varphi|^\alpha |\leq |\alpha | |\nabla w|| \nabla \varphi|^{\alpha-1}\left( {1\over 2}\right)^{\alpha-1}\leq {|\nabla \varphi|^\alpha \over 2}$$ imply that
$$
|\nabla \varphi+ \nabla w|^\alpha \left(\tilde F(D^2 \varphi+ D^2 w)\right)
\geq -f(v) -| f(v)|_\infty|\nabla\varphi|^{-1}|\alpha| 2^{1-\alpha}c\delta e^{-cr} +L_2^\alpha {ac^2\over 4} \delta e^{-cr}. $$ It is now enough to choose
$$c\geq {4A(N-1) \over R} + {|\alpha | |f(v)|_\infty 2^{2-\alpha} \over a L_2^{1+\alpha}}+ \left({16(|\gamma |_\infty+1)\over a L_2^\alpha} \right)^{1\over 2}$$ to finally obtain $$
|\nabla \varphi+ \nabla w|^\alpha \tilde F(D^2 \varphi+ D^2 w)-(|\gamma|_\infty+1 ) w\geq f(v)+{a c^2\delta L_2^\alpha e^{-cr}\over 8}-(|\gamma|_\infty+1) \delta e^{-cr} $$ i.e.
$$F(\nabla (v+ w), D^2 (v+ w)) -(|\gamma|_\infty+1) w> F(\nabla v,D^2 v).$$
Hence the comparison principle, Theorem \ref{thcomp1}, gives that
$$ u\geq v+w\quad\mbox{in} \ B(x_o,\frac{3R}{2})\setminus \overline{ B(x_o,\frac{R}{2})},$$ the desired contradiction. This ends the proof of Proposition \ref{strict}.
From now on, $f$ will denote a ${\cal C}^{1}$ function defined on $[-1,1]$, such that $f(-1) = f(1) =0$,
and nonincreasing on the set $[-1, -1+\delta]\cup [1-\delta, 1]$ for some $\delta\in ]0,1[$.
Next is a comparison principle in unbounded domains that are ``strip''-like.
\begin{prop}\label{propcomp}
Suppose that $u$ and $v$ are ${\cal C}^1$, have values in $[-1,1]$ and are respectively sub and super solutions of
$$F(\nabla w,D^2 w) + f(w) = 0\ \mbox{ in } \ {\rm I}\!{\rm R}^N$$
with $F(\nabla u,D^2 u) \in L^\infty$, $F(\nabla v,D^2 v)\in L^\infty$. If $b, c\in {\rm I}\!{\rm R}$ are such that $b < c$, $\Omega = [b,c]\times {\rm I}\!{\rm R}^{N-1}$,
$|\nabla u|$ and $|\nabla v|\geq m>0$ and
either $u\leq -1+\delta$ or $v\geq 1-\delta$ in $\Omega$, then
$$u-v\leq \sup_{\partial \Omega } (u-v)^+.$$
\end{prop} \noindent{\em Proof of Proposition \ref{propcomp}.}
Without loss of generality, $f$ can be extended outside of $[-1,1]$ so that $f$ is still ${\cal C}^1$, bounded, and nonincreasing after $1-\delta$ and before $-1+\delta$. Suppose, to fix the ideas, that $v\geq 1-\delta$ in $\Omega$.
We can also assume that $u\leq v$ on $\partial \Omega$. Indeed, since $f$ is decreasing after $1-\delta$, $w= v+ \sup_{\partial \Omega } (u-v)^+$ is a super-solution which satisfies $F(\nabla w ,D^2 w) \in L^\infty$. Suppose by contradiction that $\sup_\Omega (u-v) = \lambda$
for some $\lambda >0$.
By definition of the supremum, there exists some sequence $(x^k)_k$ such that $ (u-v) (x^k)\rightarrow \lambda$. After possibly extracting a subsequence, still denoted $(x^k)_k$, we have $x_1^k\rightarrow \bar x_1\in [b,c]$. For any $x=(x_1,x^\prime)$ let
$$u^k(x_1, x^\prime) = u(x_1, x^\prime + (x^\prime)^k)$$
and
$$v^k(x_1, x^\prime) = v(x_1, x^\prime + (x^\prime)^k).$$
By the uniform estimates (\ref{eqholdloc}) in Theorem \ref{IA}, one can extract from $(u^k)_k$ and $(v^k)_k$ some subsequences, denoted in the same way, such that $u^k\rightarrow \bar u$ and $v^k\rightarrow \bar v$ uniformly on every compact set of $[b,c ]\times {\rm I}\!{\rm R}^{N-1}$, and $\bar u$ and $\bar v+ \lambda$ are solutions of
$$F(\nabla \bar u, D^2 \bar u) \geq -f(\bar u),$$
$$F(\nabla(\bar v+\lambda), D^2 (\bar v+\lambda)) \leq -f(\bar v) \leq -f(\bar v+\lambda).$$ Furthermore, $\bar u\leq \bar v+ \lambda$, and through the uniform convergence on the compact set $[b, c] \times \{0\}^{N-1}$, $\lim_k u^k(\bar x_1, 0) = \lim_k u^k (x_1^k, 0)$ and $\lim_k v^k(\bar x_1, 0) = \lim_k v^k (x_1^k, 0)$. This implies that
\begin{eqnarray*}
\bar u(\bar x_1, 0) &=& \lim_k u(x_1^k, 0+ {x^\prime}^k)\\&=&
\lim_k v( x_1^k, 0+ {x^\prime}^k)+ \lambda =\bar v(\bar x_1, 0)+ \lambda.
\end{eqnarray*}
Now, using the fact that $|\nabla u|\geq m$ and $|\nabla v|\geq m$ on $[b,c]\times {\rm I}\!{\rm R}^{N-1}$, by passing to the limit one gets that $|\nabla \bar u|\geq m>0$ and $|\nabla \bar v| \geq m$ on that strip, and the strong comparison principle in Proposition \ref{strict} implies that $\bar u \equiv \bar v+ \lambda$.
On the other hand, $$u(b, x^\prime + {x^\prime}^k)\leq v(b, x^\prime + {x^\prime}^k)$$ implies, by passing to the limit that $$\bar u(b, x^\prime )\leq \bar v(b, x^\prime )$$ a contradiction.
\section{Proof of the one-dimensionality}
We now state precisely and prove the main result of this paper:
\begin{theo}\label{th1}
Let $f$ be defined on $[-1,1]$, ${\cal C}^1$ and such that $f$ is nonincreasing near $-1$ and $1$, with $f(-1) = f(1) = 0$. Let $u$ be a viscosity solution of
$$F(\nabla u, D^2 u) + f(u) = 0\ \mbox{ in } \ {\rm I}\!{\rm R}^N,$$ with values in $[-1,1]$.
Suppose that
$\displaystyle\lim_{x_1\rightarrow
\pm \infty} u(x_1, x^\prime) = \pm 1$, uniformly with respect to $x^\prime $, and, if $\alpha\neq 0$, suppose that for any $b<c$ there exists $m>0$ such that
$|\nabla u(x)|\geq m>0$ in $[b, c]\times {\rm I}\!{\rm R}^{N-1}$ in the viscosity sense.
\noindent Then $u$ does not depend on $x^\prime$ i.e. $u(x_1, x^\prime ) = v(x_1)$ where \begin{equation}\label{dim1wholespace}
\left\{ \begin{array}{lc} F( v^\prime e_1 , v^{\prime\prime}e_1\otimes e_1) + f(v) = 0& \ \mbox{ in } \ {\rm I}\!{\rm R},\\
|v|\leq 1, \ \displaystyle \lim_{x\rightarrow \pm \infty} v = \pm1 & \end{array} \right.
\end{equation}
and $v$ is increasing. \end{theo} {\em Proof of Theorem \ref{th1}.} We proceed analogously to the proof given in \cite{BHM}. First observe that by Theorem \ref{IA} the solution $u$ is in ${\cal C}_{loc}^{1,\beta}({\rm I}\!{\rm R}^N)$, so that the condition on the gradient is pointwise and not only in the viscosity sense.
Let $\delta $ be such that $f$ is nonincreasing on $[-1, -1+\delta]\cup [1-\delta, 1]$. Define
$$\Sigma_M^+:=\{x=(x_1,x')\in{\rm I}\!{\rm R}^N,\ x_1\geq M\}\quad\mbox{ and}\quad \Sigma_M^-:=\{x=(x_1,x')\in{\rm I}\!{\rm R}^N,\ x_1\leq M\}.$$ By the uniform behavior of the solution in the $x_1$ direction, there exists $M_1>0$ such that $$u(x) \geq 1-\delta \quad \mbox{in}\quad \Sigma^+_{M_1},\quad u(x)\leq -1+\delta \quad \mbox{in}\quad \Sigma^-_{(-M_1)}.$$
Fix any $\nu=(\nu_1,\dots,\nu_N)$ such that $\nu_1>0$ and let $u_t (x) := u(x+t\vec \nu)$.
\noindent {\bf Claim 1} : For $t$ large enough, $u_t\geq u$ in ${\rm I}\!{\rm R}^N$.
For $x\in\Sigma^+_{(-M_1)}$ and for $t$ large enough, say $t > {2M_1 \over \nu_1}$,
$$ u(x+t \vec\nu ) \geq 1-\delta\quad\mbox{ and }\quad u_t \geq u\quad\mbox{on}\ \{x_1 = -M_1\}. $$ We begin by proving that $u_t\geq u$ in $\Sigma^+_{(-M_1)}$.
\noindent Suppose by contradiction that $\sup_{\Sigma^+_{(-M_1)}}(u-u_t)=m_o>0$.
\noindent Observe that since $\displaystyle \lim_{x_1\rightarrow +\infty} u = \lim_{x_1\rightarrow +\infty} u_t= 1$ uniformly, there exists $M_2$ such that for
$x_1> M_2\geq -M_1$, $|u_t-u|< {m_o\over 2}$. Then the supremum of $u-u_t$ over $[-M_1, M_2] \times {\rm I}\!{\rm R}^{N-1}$ also equals $m_o$.
On that strip, by hypothesis, there exists $m>0$ such that $|\nabla u|, |\nabla u_t |\geq m$, and also $u_t\geq 1-\delta$. Then one can apply the comparison principle in Proposition \ref{propcomp} with $b = -M_1$ and $c = M_2$ and obtain that $$u-u_t \leq \sup_{\{x_1 = -M_1\}\cup \{ x_1 = M_2\}} (u-u_t)^+< {m_o\over 2},$$ a contradiction. Finally we have $u\leq u_t$ in $\Sigma^+_{(-M_1)}$.
\noindent We can do the same in $\Sigma^-_{(-M_1)}$ by observing that, in that case, $u\leq -1+\delta$.
This ends the proof of Claim 1.
Let $ \tau = \inf\{ t>0\ \mbox{ such that } \ u_t\geq u\ \mbox{ in }\ {\rm I}\!{\rm R}^N\}$; by Claim 1, $\tau$ is finite.
\noindent {\bf Claim 2:} $\tau=0$.
To prove this claim we argue by contradiction, assuming that $\tau$ is positive.
\noindent We suppose first that
$$\eta := \inf _{ [-M_1, M_1] \times {\rm I}\!{\rm R}^{N-1}} (u_\tau-u)>0,$$ and we prove then that there exists $\epsilon >0$ such that $u_{\tau-\epsilon} \geq u$ in ${\rm I}\!{\rm R}^N$. This will contradict the definition of $\tau$ .
By the estimate (\ref{eqholdloc}) in Theorem \ref{IA}, there exists some constant $c>0$ such that for all $\epsilon >0$
$$ |u_\tau-u_{\tau-\epsilon}|\leq \epsilon c. $$ Choosing $\epsilon$ small enough in order that $ \epsilon c\leq {\eta\over 2}$ and $\epsilon < \tau$, one gets that $u_{\tau-\epsilon}-u \geq 0$ on $\{x_1 = M_1\}$. The same procedure as in Claim 1 proves that the inequality holds in the whole space ${\rm I}\!{\rm R}^N$, a contradiction with the definition of $\tau$.
\noindent Hence
$ \eta = 0$ and there exists a sequence $(x_j)_j\in \left([-M_1, M_1] \times {\rm I}\!{\rm R}^{N-1}\right)^{{\bf N}}$ such that
$$(u-u_\tau) (x_j) \rightarrow 0.$$ Let $v_j (x) = u(x+ x_j)$ and $v_{j, \tau} (x) = u_\tau (x+ x_j)$; these are sequences of bounded solutions and, by the uniform elliptic estimates (a consequence of Theorem \ref{IA}), one can extract subsequences, denoted in the same way, such that $$v_j\rightarrow \bar v\quad\mbox{ and }\quad v_{j, \tau} \rightarrow \bar v_\tau$$ uniformly on every compact set of ${\rm I}\!{\rm R}^N$. Moreover, $v_j$ and $v_{j,\tau}$ are solutions of the same equation and, passing to the limit, $\bar v_\tau \geq \bar v$. Furthermore $\bar v (0) = \lim_{j\rightarrow +\infty} u(x_j)= \lim_{j\rightarrow +\infty} u_\tau (x_j) = \bar v_\tau (0)$ and
$$|\nabla \bar v|(0) = \lim_{j\rightarrow +\infty} |\nabla u(x_j)| \geq m$$ by the assumption on $\nabla u$.
Since $|\nabla \bar v| >0$ everywhere, by the strong comparison principle in Proposition \ref{strict},
$\bar v_\tau = \bar v$ in a neighborhood of $0$. This would imply that $\bar v$ is $\tau$-periodic in the direction $\vec \nu$.
By our choice of $M_1$,
$\forall x\in \Sigma^+_{2M_1}$,
$v_j(x) = u(x+ x_j) \geq 1-\delta$ and
$\forall x\in \Sigma^-_{(-2M_1)}$,
$v_j(x) = u(x+ x_j) \leq -1+\delta$. Passing to the limit, the same bounds hold for $\bar v$, which contradicts the periodicity. Hence $\tau = 0$ and this ends the proof of Claim 2.
This implies that $\partial_{\vec \nu} u(x) \geq 0$ for all $x \in {\rm I}\!{\rm R}^{N}$, since for all $t>0$, $u(x+t\vec \nu)\geq u(x)$ whenever $\nu_1>0$.
Take a sequence $\vec{\nu}_n=(\nu_{1,n}, \nu^\prime)$ such that $\nu_{1,n}>0$ and $\nu_{1,n}\rightarrow 0$. Since $ u$ is ${\cal C}^1$, by passing to the limit,
$$\partial_{\vec{\nu^\prime}} u(x) \geq 0.$$
This is also true when $\vec \nu^\prime$ is replaced by $-\vec\nu^\prime$, so finally $\partial_{\vec \nu^\prime} u(x) = 0$.
This ends the proof of Theorem \ref{th1}.
\section{Existence results for the ODE}
We prove in this section that the one dimensional problem (\ref{dim1wholespace}), under additional assumptions on $f$, admits a solution and that, when $\alpha \leq 0$, the solution is unique up to translation.
We consider the model Cauchy problem
\begin{equation}\label{cauchydel}\left\{ \begin{array}{lc}
-{\cal M}^+_{a,A} (u^{\prime\prime}) |u^\prime|^\alpha = f(u), & \mbox{in}\quad {\rm I}\!{\rm R}\\
u(0) = 0, u^\prime (0) = \delta &
\end{array} \right.
\end{equation} where ${\cal M}^+_{a,A}$ is one of the Pucci operators.
Here $f\in {\cal C}^1([-1, 1])$ is such that $f(-1) =f(0 ) = f(1)= 0$, $f$ is positive in $]0 ,1[$ and negative in $]-1, 0[$.
\noindent We introduce the function $f_{a,A}(t)=\left\{\begin{array}{lc} \frac{f(t)}{a} & \mbox{if}\ f(t)>0\\
\frac{f(t)}{A} & \mbox{if}\ f(t)<0
\end{array}
\right.
,$ so that equation (\ref{cauchydel}) can be written in the following way
\begin{equation}\label{cauchydelta}\left\{ \begin{array}{lc}
-u^{\prime\prime} |u^\prime|^\alpha = f_{a,A}(u), & \mbox{in}\quad{\rm I}\!{\rm R}\\
u(0) = 0, u^\prime (0) = \delta. &
\end{array} \right.
\end{equation}
We also assume on $f$: \begin{enumerate} \item $f^\prime(\pm1)<0$,
\item $\displaystyle\int_{-1}^{1} f_{a,A}(s) ds =0$,
\item for all $t\in (-1, 0]$, $\int_t^{1 } f_{a,A}(s)ds >0$. \end{enumerate} $\delta_1$ will denote the positive real number
\begin{equation}\label{delta1}
\delta_1 = \left((2+\alpha ) \int_0 ^{1} {f(s)\over a} ds \right)^{1\over 2+\alpha}.
\end{equation}
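The constant $\delta_1$ is straightforward to evaluate numerically. As a concrete illustration (the choices $f(u)=u-u^3$, $a=1$ and the values of $\alpha$ are assumptions for this example only, not part of the general setting), the following sketch evaluates (\ref{delta1}) by a midpoint quadrature:

```python
# Numerical evaluation of delta_1 = ((2+alpha) * int_0^1 f(s)/a ds)^(1/(2+alpha))
# for the illustrative bistable nonlinearity f(u) = u - u^3 (an assumed example,
# not the paper's general f), with a = 1.

def delta_1(f, a, alpha, n=100000):
    # midpoint rule for int_0^1 f(s)/a ds
    h = 1.0 / n
    integral = sum(f((i + 0.5) * h) for i in range(n)) * h / a
    return ((2 + alpha) * integral) ** (1.0 / (2 + alpha))

f = lambda u: u - u**3               # f(-1) = f(0) = f(1) = 0, f > 0 on (0, 1)
print(delta_1(f, a=1.0, alpha=2.0))  # -> 1.0, since (4 * 1/4)^(1/4) = 1
```

For $\alpha=2$ this gives $\delta_1=(4\cdot\frac14)^{1/4}=1$, and for $\alpha=0$ it gives $\delta_1=\sqrt{1/2}$.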
Without loss of generality $f$ is extended outside of $[-1,1]$
so that $f\in {\cal C}^{0,1} ({\rm I}\!{\rm R})$, $f\geq 0$ on $(-\infty, -1)$, $f\leq 0$ on $[1, +\infty)$. Then $f$ also satisfies, for all $t\in {\rm I}\!{\rm R}\setminus\{\pm1\}$, $$\int_t^{1 } f_{a,A}(s) ds >0.$$
According to the Cauchy--Lipschitz theorem, as soon as $u^\prime (0) \neq 0$ there exists a unique local solution. Moreover, the Cauchy--Peano theorem provides global existence.
We establish existence and uniqueness (in the case $\alpha \leq 0$) of weak solutions and their equivalence with viscosity solutions. \begin{defi}
A weak solution for (\ref{cauchydelta}) is a ${\cal C}^1$ function which satisfies in the distribution sense
\begin{equation}\label{weak}\left\{ \begin{array}{lc}
-{d\over dx} (|u^\prime|^\alpha u^\prime) = (1+\alpha) f_{a,A}(u)&\mbox{ in } \ {\rm I}\!{\rm R}\\
u(\theta) =0, \ u^\prime(\theta) = \delta.& \
\end{array}\right.
\end{equation} Without loss of generality we can suppose that $\theta=0$. \end{defi} Note that we are interested in solutions with values in $[-1,1]$, so we shall suppose that $u_o\in (-1,1)$.
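Note that the factor $(1+\alpha)$ in (\ref{weak}) is simply the chain rule: wherever $u^\prime \neq 0$,
$$ {d\over dx} \left(|u^\prime|^\alpha u^\prime\right) = (1+\alpha)\, |u^\prime|^\alpha u^{\prime\prime}, $$
so that at such points (\ref{weak}) reduces exactly to the equation in (\ref{cauchydelta}).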
\begin{rema}
Let us note that the condition 2 on $f$ is necessary for the existence of weak solutions which satisfy $\lim_{x\rightarrow +\infty} u(x) = 1$, $\lim_{x\rightarrow -\infty} u(x) = -1$. Indeed by continuity $u$ has a zero and without loss of generality we can suppose that it is in 0. Since the solution $u$ is ${\cal C}^1$, and bounded, the limit of $u^\prime$ at infinity is $0$.
In particular, multiplying the equation (\ref{weak}) by $u^\prime$ and integrating in $[0,+\infty)$ gives
$$|u^\prime (0 )|^{2+\alpha}= (2+\alpha) \int_0^{1} {f(s)\over a} ds,$$
while integrating in $]-\infty,0]$ similarly gives
$$ |u^\prime (0 )|^{2+\alpha} =(2+\alpha) \int_{0}^{-1} {f(s)\over A} ds= -(2+\alpha) \int_{-1}^0 {f(s)\over A} ds .$$ Equating the two right-hand sides yields condition 2. \end{rema}
\begin{prop} For $\alpha>-1$ there exists a solution of (\ref{weak}), and for $\alpha\leq 0$ this solution is unique.\end{prop} Proof.
To prove existence and uniqueness observe that both the equations (\ref{cauchydelta}) and (\ref{weak}) can be written, with $u=X$ and $Y= |u^\prime|^\alpha u^\prime$, under the following form
\begin{equation}\label{eqcauhlip} \left(\begin{array}{c} X^\prime\\
Y^\prime\end{array} \right) = \left( \begin{array}{c}
|Y|^{\frac{1}{\alpha+1}-1}Y\\
-(1+\alpha)f_{a,A}(X) \end{array}\right)
\end{equation}
with the initial conditions $X(0) = 0$, $Y(0) =| \delta|^\alpha \delta $ and the map $(X, Y) \mapsto \left( \begin{array}{c}
|Y|^{\frac{1}{\alpha+1}-1}Y\\
-(1+\alpha)f_{a,A}(X)
\end{array}\right)$ is continuous. When $\alpha\leq 0$ this map is Lipschitz continuous, and when $\alpha>0$ it is Lipschitz continuous away from $Y= 0$. The result is then an application of the classical Cauchy--Peano and Cauchy--Lipschitz theorems. It is immediate to see that weak solutions and solutions of (\ref{eqcauhlip}) coincide. This ends the proof.
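The system (\ref{eqcauhlip}) is also convenient for numerical experiments. The following sketch integrates it with a classical Runge--Kutta scheme in the illustrative case $\alpha=0$, $a=A=1$, $f(u)=u-u^3$ (all assumptions made for this example only); in this case $\delta_1=\sqrt{1/2}$ and the solution of the Cauchy problem with $u^\prime(0)=\delta_1$ is the explicit heteroclinic $u(x)=\tanh(x/\sqrt 2)$:

```python
import math

# RK4 integration of the first-order system X' = |Y|^{1/(alpha+1)-1} Y,
# Y' = -(1+alpha) f_{a,A}(X), in the illustrative case alpha = 0, a = A = 1,
# f(u) = u - u^3, so that X' = Y and Y' = -(X - X^3).
def rk4(X, Y, h, n):
    def F(X, Y):
        return Y, -(X - X**3)
    for _ in range(n):
        k1 = F(X, Y)
        k2 = F(X + 0.5*h*k1[0], Y + 0.5*h*k1[1])
        k3 = F(X + 0.5*h*k2[0], Y + 0.5*h*k2[1])
        k4 = F(X + h*k3[0], Y + h*k3[1])
        X += h*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6
        Y += h*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6
    return X, Y

delta1 = math.sqrt(0.5)                   # (2 int_0^1 (s - s^3) ds)^{1/2}
X, Y = rk4(0.0, delta1, h=0.001, n=5000)  # integrate up to x = 5
print(X, math.tanh(5/math.sqrt(2)))       # the two values agree closely
```

The computed trajectory increases monotonically towards $1$, in agreement with case 2) of Proposition \ref{propcauchy}.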
Observe that weak solutions are viscosity solutions. Indeed, it is clear that $|u^\prime |^\alpha u^\prime$ is ${\cal C}^1$, hence if $u^\prime \neq 0$, $u^\prime$ is ${\cal C}^1$. Thus $u$ is ${\cal C}^2$ at each point where the derivative is different from zero, and at such a point the equation reads $-|u^\prime |^\alpha u^{\prime\prime} = f_{a,A}(u(x))$, so $u$ is a viscosity solution.
We now consider the case where $u$ is locally constant on $]x_1-r, x_1+r[$ for some $r >0$: the weak equation gives $f(u(x_1))= 0$, so $u(x_1) \in \{-1, 0, 1\}$, and $u$ is a viscosity solution.
We now assume that $\alpha \leq 0$ and recall that, according to the regularity results in \cite{BD10} applied in the one-dimensional case, the solutions are ${\cal C}^2$. We next prove that viscosity solutions are weak solutions.
When $u^\prime(x)\neq 0$ or when $u$ is locally constant, it is immediate that $u$ is a weak solution in a neighborhood of that point.
So, without loss of generality, we suppose that, $u^\prime (x_1) = 0$, $1>u(x_1) >0$ and hence $u$ is not locally constant. Then, by continuity of $u$ and the equation, there exists $r>0$ such that $$u^{\prime\prime}\leq 0 \quad\mbox{in}\quad (x_1-r,x_1+r).$$ Furthermore there exists $(x_n)_n$, such that $x_n\in (x_1-r,x_1)$, $x_n\rightarrow x_1$ and $u^\prime(x_n)\neq 0$; by the equation we obtain that $$u^{\prime\prime}(x_n)<0.$$ Finally, $u^\prime(x)=\int_{x_1}^x u^{\prime\prime}(t)dt>0$ for $x\in (x_1-r,x_1)$. Similarly $u^\prime(x)<0$ for $x\in (x_1,x_1+r)$.
By uniqueness of the weak solutions, $u$ satisfies in a neighborhood of $x_1$:
$$-{d\over dx} (|u^\prime|^\alpha u^\prime) = {(1+\alpha)f(u(x))\over a}.$$ This proves that $u$ is a weak solution.
\begin{prop}\label{propcauchy} Suppose that $\alpha \leq 0$. Let $u_\delta$ be the unique solution of (\ref{cauchydel}). Then for $\delta_1$ defined in (\ref{delta1}),
\noindent 1) If $\delta > \delta_1$, $|u_\delta (x)| \geq C|x|$ with $C=\left(\delta^{2+\alpha}-\delta_1^{2+\alpha}\right)^{1\over 2+\alpha}$. In particular $\displaystyle\lim_{x\rightarrow \pm \infty}u_\delta(x) = \pm \infty$ and $u^\prime_\delta >0$.
\noindent 2) If $\delta = \delta_1$, $u^\prime _\delta >0$ in ${\rm I}\!{\rm R}$ and $\displaystyle \lim_{x\rightarrow +\infty} u_\delta (x) = 1 $, $\displaystyle\lim_{x\rightarrow -\infty}u_\delta (x) = -1$.
\noindent 3) If $-\delta_1\leq \delta < \delta_1$ then $ |u_\delta(x)| < 1$ for any $x\in{\rm I}\!{\rm R}$. The solution can oscillate.
\noindent 4) If $\delta < -\delta_1$, $u_\delta $ is decreasing on ${\rm I}\!{\rm R}$, hence $u_\delta<0$ on ${\rm I}\!{\rm R}^+$, $u_\delta >0$ on ${\rm I}\!{\rm R}^-$.
\end{prop}
\begin{rema} The case 2) in Proposition \ref{propcauchy} is clearly false in the case $\alpha >0$, as one can see with the example $\alpha = 2$, $f(u) = u-u^3$, $u(x) = \sin x$: here $u$ satisfies $u^\prime (0) = \delta_1 = \left(4 \int_0^1 f(s) ds\right)^{1/4}=1$, $u({\pi\over 2})=1$, and it oscillates.
\noindent However the conclusion in the other cases holds for any $\alpha$. \end{rema} Proof of Proposition \ref{propcauchy}.
1 \& 4) To fix the ideas we suppose that $\delta> \delta_1$; the proof is identical in the case $\delta<-\delta_1$. For $x>0$, since $u_\delta>0$ one has
\begin{eqnarray*} |u^\prime_\delta| ^{2+\alpha} (x) &=& \delta^{2+\alpha} - (2+\alpha) \int_0^{u_\delta(x)} {f(s)\over a} ds \\
&=& \delta^{2+\alpha} -\delta_1^{2+\alpha} + (2+\alpha)\int_{u_\delta (x)}^{1} {f(s)\over a} ds \\ &\geq & \delta^{2+\alpha} -\delta_1^{2+\alpha}=:C^{2+\alpha}. \end{eqnarray*} This proves, in particular, that $u_\delta ^\prime (x) \neq 0$ for all $x$, and the Cauchy--Lipschitz theorem ensures local existence and uniqueness at every point, hence also global existence. From this we also derive that $u^\prime _\delta >0$ and, for $x>0$, $u_\delta (x ) \geq Cx$; symmetric estimates for $x<0$ give $u_\delta (x) \leq C x$.
2) If $\delta = \delta_1$ then $|u^\prime_\delta|^{2+\alpha}(x) = (2+\alpha )\int_{u_\delta (x)} ^{1} {f(s )\over a} ds > 0$. Suppose that there exists some point $\bar x$ such that $u_\delta(\bar x) = 1$ then $u_\delta^\prime(\bar x ) = 0$. By the uniqueness of the solution $u_\delta(x)\equiv 1$ which contradicts the fact that $u_\delta^\prime (0) = \delta_1\neq 0$.
We have obtained that $u_\delta (x) < 1$ everywhere. Moreover, $u_\delta$ is increasing and bounded, hence $\lim_{x\rightarrow+ \infty} u_\delta^\prime(x) = 0$. By hypothesis 3 on $f$, this implies that $\lim_{x\rightarrow + \infty} u_\delta (x) = 1$.
3) Suppose that $0 < \delta < \delta_1$, and let $\theta^+$ be such that $(2+\alpha) \int_0^{\theta^+} \frac{f(s)}{a}ds = \delta^{2+\alpha}$, which exists by the intermediate value theorem. Either $u_\delta < \theta^+ $ for all $x$, or there exists $x_1$ such that $u_\delta (x_1) = \theta^+$, and then $u^\prime_\delta (x_1) = 0$. Let us note that $ u = \theta^+ $ on a neighborhood of $x_1$ is not a solution, since $f(\theta^+) \neq 0$. So $u_\delta$ is not locally constant and, in particular, in a right neighborhood of $x_1$:
$$ \exists \varepsilon_o, \ u^{\prime\prime}_\delta (x) \leq 0, \ u^{\prime\prime}_\delta\not\equiv 0$$ for all $x\in (x_1, x_1+\varepsilon_o)$, hence $u^\prime_\delta (x) <0$ in $(x_1, x_1+\varepsilon_o)$.
So $u_\delta$ is decreasing until it reaches a point $x_2$ where $u^\prime_\delta (x_2) = 0$. Observe that by the equation
$$ 0 = |u^\prime_\delta| ^{2+\alpha} (x_2) = - (2+\alpha) \int_{\theta^+}^{u_\delta(x_2)} f_{a,A}(s) ds .$$ Hence $u_\delta(x_2) = \theta^-\in (-1,0)$.
Reasoning as above, we obtain that $u_\delta$ oscillates between $\theta^-$ and $\theta^+$.
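The oscillation in case 3) can be observed numerically. In the illustrative setting $\alpha=0$, $a=A=1$, $f(u)=u-u^3$ (assumptions made for this example only), one has $\delta_1=\sqrt{1/2}$, and for $0<\delta<\delta_1$ the turning point solves $2\int_0^{\theta^+}(s-s^3)\,ds=\delta^2$, i.e. $(\theta^+)^2=1-\sqrt{1-2\delta^2}$; since this $f$ is odd, $\theta^-=-\theta^+$:

```python
import math

# RK4 integration of u'' = -(u - u^3), u(0) = 0, u'(0) = delta, with
# 0 < delta < delta_1 = sqrt(1/2); illustrative case alpha = 0, a = A = 1,
# f(u) = u - u^3 (assumed for this example only).
def trajectory(delta, h=0.001, n=40000):
    def F(X, Y):
        return Y, -(X - X**3)
    X, Y, xs = 0.0, delta, []
    for _ in range(n):
        k1 = F(X, Y)
        k2 = F(X + 0.5*h*k1[0], Y + 0.5*h*k1[1])
        k3 = F(X + 0.5*h*k2[0], Y + 0.5*h*k2[1])
        k4 = F(X + h*k3[0], Y + h*k3[1])
        X += h*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6
        Y += h*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6
        xs.append(X)
    return xs

delta = 0.5
theta = math.sqrt(1 - math.sqrt(1 - 2*delta**2))   # turning point theta^+
xs = trajectory(delta)
print(max(xs), min(xs), theta)   # max ~ +theta, min ~ -theta, |u| stays < 1
```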
\end{document}
\begin{document}
\noindent{\small \bf Ordinary differential equations}\\ \noindent{\small \bf Mechanics of particles and systems}
\begin{center} {\LARGE \bf The meromorphic non-integrability of the three-body problem}\\
\ {\Large \bf Tsygvintsev Alexei}
\end{center}
\begin{abstract} We study the planar three-body problem and prove the absence of a complete set of complex meromorphic first integrals in a neighborhood of the Lagrangian solution. \end{abstract}
\begin{center} {\bf 1. Introduction} \end{center}
The three-body problem is a mechanical system which consists of three mass points $m_1$, $m_2$, $m_3$ which attract each other according to the Newtonian law [16].
The practical importance of this problem arises from its applications to celestial mechanics: the bodies which constitute the solar system attract each other according to Newton's law, and the stability of this system over long periods of time is a fundamental question. Although Sundman [21] gave a power series solution to the three-body problem in 1913, it was not useful in determining the growth of the system for long intervals of time. Chazy [3] proposed in 1922 the first general classification of motion as $t \rightarrow \infty$. In view of modern analysis [7], this stability problem leads to the problem of integrability of a Hamiltonian system, i.e. the existence of a full set of analytic first integrals in involution. Poincar\'e [18] considered Hamiltonian functions $H(z,\mu)$ which in addition to $z_1,\ldots,z_{2n}$ also depend analytically on a parameter $\mu$ near $\mu=0$. His theorem states that under certain assumptions about $H(z,0)$, which are in general satisfied, the Hamiltonian system corresponding to $H(z,\mu)$ can have no integrals representable as convergent series in the $2n+1$ variables $z_1,\ldots,z_{2n}$ and $\mu$, other than convergent series in $H$ and $\mu$. Based on this result he proved in 1889 the non-integrability of the restricted three-body problem [22]. However, this theorem does not assert anything about a fixed parameter value $\mu$.
Bruns [2] showed in 1882 that the classical integrals are the only
independent algebraic integrals of the problem of three bodies. His theorem was extended by Painlev\'e [17], who showed that every integral of the problem of $n$ bodies which involves the velocities algebraically (whether the coordinates are involved algebraically or not) is a combination of the classical integrals.
However, citing [7]: ``One may agree with Wintner [25] that these elegant negative results have no importance in dynamics, since they do not take into account the peculiarities of the behavior of phase trajectories. As far as first integrals are concerned, locally, in a neighborhood of a non-singular point, a complete set of independent integrals always exists. Whether they are algebraic or transcendental depends explicitly on the choice of independent variables. Therefore, the problem of the existence of integrals makes sense only when it is considered in the whole phase space or in a neighborhood of the invariant set ... ''
Consider a complex-analytic symplectic manifold $M$, a holomorphic Hamiltonian vector field $X_H$ on $M$ and a non-equilibrium integral curve $\Gamma \subset M$. The nature of the relationship between the branching of solutions of a system of variational equations along $\Gamma$ as functions of the complex time and the non-existence of first integrals of $X_H$ goes back to the classical works of Kowalewskaya [6]. Ziglin [27] studied
necessary conditions for an analytic Hamiltonian system with $n >1$ degrees of freedom to possess $n$ meromorphic independent first integrals in a sufficiently small neighborhood of the phase curve $\Gamma$. One can consider the monodromy group $G$ of the normal variational equations along $\Gamma$. The key idea was that $n$ independent meromorphic integrals of $X_H$ must induce $n$ independent rational invariants for $G$. Then, in order that Hamilton's equations have the above first integrals, it is necessary that any two non-resonant transformations $g, g^\prime\in G$ commute. Although Ziglin formulated his result in terms of the monodromy group, it became clear quite recently [15,20] that much more could be achieved, under mild restrictions, by replacing this group with the differential Galois group. Namely, one should check whether its identity component, in the Zariski topology, is abelian.
The collinear three-body problem was proved to be non-integrable near triple collisions by Yoshida [26] based on Ziglin's analysis.
The present paper is devoted to the non-integrability of the planar three-body problem.
In 1772 Lagrange [8] discovered the particular solution in which three bodies form an equilateral triangle and each body describes a conic.
Moeckel [14] has shown that for a small angular momentum there exist orbits homoclinic to the Lagrangian elliptical orbits and heteroclinic between them. Consequently, in this case the problem is non-integrable. Nevertheless, it was observed that for a large angular momentum, and for certain masses of two bodies which are relatively small compared to the third one, the circular Lagrangian orbits are stable and, a priori, the system can be integrable near these solutions. Topan [23] found some examples of such transcendental integrals in certain configurations of the restricted three-body problem.
Our approach consists of applying the methods related to [27,15] to the Lagrangian parabolic orbits. This means that we will study the integrability of the problem in a sufficiently small complex neighborhood of these solutions.
The plan of the paper is as follows. In Section 2, following Whittaker, we introduce the reduction of the planar three-body problem from a Hamiltonian system with 6 degrees of freedom to one with 3 degrees of freedom. Section 3 is devoted to a parametrization of the Lagrangian parabolic solution. Section 4 contains the normal variational equations along this solution. In Section 5 we study the monodromy group of these equations. In Section 6, applying Ziglin's method, we prove that for the three-body problem there are no two additional meromorphic first integrals in a connected neighborhood of the Lagrangian parabolic solution (Theorems 6.2--6.3). Section 7 contains a dynamical interpretation of the above theorems in connection with the theory of splitting and transverse intersection of asymptotic manifolds.
\begin{center} {\bf 2. The reduction of the problem} \end{center}
Following Whittaker [24] let $(x_1,x_2)$ be the coordinates of $m_1$, $(x_3,x_4)$ the coordinates of $m_2$, and $(x_5,x_6)$ the coordinates of $m_3$. Let $y_r=m_k\displaystyle\frac{dx_r}{dt}$, where $k$ denotes the greatest integer in $\displaystyle\frac{1}{2}(r+1)$. The equations of motion are $$ \displaystyle\frac{dx_r}{dt}=\displaystyle\frac{\partial H_1}{\partial y_r}, \quad \displaystyle\frac{dy_r}{dt}=-\displaystyle\frac{\partial H_1}{\partial x_r}, \quad (r=1,2,\dots,6), \leqno (2.1) $$ where $$ \begin{array}{ll} H_1=\displaystyle\frac{1}{2m_1}(y_1^2+y_2^2)+\displaystyle\frac{1}{2m_2}(y_3^2+y_4^2)+\displaystyle\frac{1}{2m_3}(y_5^2+y_6^2)- m_3m_2\{(x_3-x_5)^2+ (x_4-x_6)^2\}^{-1/2}\\-m_3m_1\{(x_5-x_1)^2+(x_6-x_2)^2\}^{-1/2}- m_1m_2\{(x_1-x_3)^2+ (x_2-x_4)^2\}^{-1/2}. \end{array} $$ This is a Hamiltonian system with $6$ degrees of freedom which admits $4$ first integrals:
\noindent $T_1=H_1 $ -- the energy,\\ $T_2=y_1+y_3+y_5$, $T_3=y_2+y_4+y_6$ -- the components of the impulse of the system,\\ $T_4=y_1x_2+y_3x_4+y_5x_6-x_1y_2-x_3y_4-x_5y_6$ -- the integral of angular momentum of the system.
The system (2.1) can be transformed to a system with $4$ degrees of freedom by the following canonical change (Poincar\'e, 1896) $$ x_r=\displaystyle\frac{\partial W_1}{\partial y_r},\quad g_r=\displaystyle\frac{\partial W_1}{\partial l_r},\quad (r=1,2,\dots,6), $$ where $$ W_1=y_1l_1+y_2l_2+y_3l_3+y_4l_4+(y_1+y_3+y_5)l_5+(y_2+y_4+y_6)l_6. \leqno (2.2) $$ Here $(l_1,l_2)$ are the coordinates of $m_1$ relative to axes through $m_3$ parallel to the fixed axes, $(l_3,l_4)$ are the coordinates of $m_2$ relative to the same axes, $(l_5,l_6)$ are the coordinates of $m_3$ relative to the original axes, $(g_1,g_2)$ are the components of impulse of $m_1$, $(g_3,g_4)$ are the components of impulse of $m_2$, and $(g_5,g_6)$ are the components of impulse of the system. It can be shown that in the system of the center of masses the corresponding equations for $l_5$, $l_6$, $g_5$, $g_6$ disappear from the system and the reduced system takes the following form $$ \displaystyle\frac{dl_r}{dt}=\displaystyle\frac{\partial H_2}{\partial g_r},\quad \displaystyle\frac{dg_r}{dt}=-\displaystyle\frac{ \partial H_2}{\partial l_r}, \quad (r=1,2,3,4), \leqno (2.3) $$ with the Hamiltonian $$ \begin{array}{ll} H_2=\displaystyle\frac{M_1}{2}(g_1^2+g_2^2)+\displaystyle\frac{M_2}{2}(g_3^2+g_4^2)+ \displaystyle\frac{1}{m_3}(g_1g_3+g_2g_4)- \displaystyle \frac{ m_3m_2}{\rho_1}- \displaystyle \frac{ m_1m_3}{\rho_2}+\displaystyle\frac{m_1m_2}{\rho_3}, \end{array} $$ where $$ \rho_1=\sqrt{l_3^2+l_4^2}, \quad \rho_2=\sqrt{l_1^2+l_2^2}, \quad \rho_3=\sqrt{(l_1-l_3)^2+(l_2-l_4)^2},$$ are the mutual distances of the bodies and $M_1=m_1^{-1}+m_3^{-1}$, $M_2=m_2^{-1}+m_3^{-1}$.
This system admits two first integrals in involution\\ $K_1=H_2$ -- the energy,\\ $K_2=g_2l_1+g_4l_3+g_6l_5-g_1l_2-g_3l_4-g_5l_6=k$ -- the integral of angular momentum.
Let us suppose that the Hamiltonian system (2.3) possesses a first integral $K$ different from $K_{1,2}$.
\noindent {\bf Definition 2.1} The first integral $K$ of the system (2.3) is called {\it meromorphic} if it is representable as a ratio $$ K=\displaystyle \frac{R(l,g)}{Q(l,g)},$$ where $R$, $Q$ are analytic functions of the variables $l_i$, $g_i$, $1 \leq i \leq 4$.
It can be shown [24] that the system (2.3) possesses an ignorable coordinate which will make possible a further reduction.
Let us make the following canonical transformation $$ l_r=\displaystyle\frac{\partial W_2}{\partial g_r},\quad p_r=\displaystyle\frac{\partial W_2}{\partial q_r},\quad (r=1,2,3,4), \leqno (2.4) $$ where $$ W_2=g_1q_1\mathrm{cos}q_4+g_2q_1 \mathrm{sin}q_4+g_3(q_2\mathrm{cos}q_4- q_3\mathrm{sin}q_4)+g_4(q_2\mathrm{sin}q_4+q_3\mathrm{cos}q_4). $$ Here $q_1$ is the distance $m_3m_1$; $q_2$ and $q_3$ are the projections of $m_2m_3$ on, and perpendicular to $m_1m_3$; $p_1$ is the component of momentum of $m_1$ along $m_3m_1$; $p_2$ and $p_3$ are the components of momentum of $m_2$ parallel and perpendicular to $m_3m_1$.
One can write the new equations as follows $$ \displaystyle\frac{dq_r}{dt}=\displaystyle\frac{\partial H}{\partial p_r},\quad \displaystyle\frac{dp_r}{dt}= -\displaystyle\frac{ \partial H}{\partial q_r}, \quad (r=1,2,3), \leqno (2.5) $$ and $$ \displaystyle\frac{dq_4}{dt}=\displaystyle\frac{\partial H}{\partial p_4},\quad \displaystyle\frac{dp_4}{dt}=0, \leqno (2.5.a) $$ with the Hamiltonian $$ \begin{array}{ll} H=\displaystyle \frac{M_1}{2}\left\{p_1^2+\displaystyle \frac{1}{q^2_1}P^2\right\}+\displaystyle \frac{M_2}{2}(p_2^2+ p_3^2)+\displaystyle \frac{1}{m_3}\left\{p_1p_2-\displaystyle \frac{p_3}{q_1}P\right\} -\displaystyle \frac{ m_1m_3}{r_1}- \displaystyle \frac{m_3m_2}{r_2}- \displaystyle \frac{ m_1m_2}{r_3}, \\ P=p_3q_2-p_2q_3-p_4, \end{array} $$ where $$ r_1=q_1, \quad r_2=\sqrt{q^2_2+q^2_3}, \quad r_3=\sqrt{ (q_1-q_2)^2+q^2_3},$$ are the mutual distances of the bodies.
Since $p_4=k=const$ the system (2.5) is a closed Hamiltonian system with $3$ degrees of freedom. If this system is integrated then $q_4$ can be found by a quadrature from (2.5.a).
\noindent {\bf Proposition 2.2} {\it If the Hamiltonian system (2.3) admits the full set of functionally independent meromorphic first integrals in involution $\{ K_1,K_2,K_3,K_4\}$ then the system (2.5) possesses two functionally independent additional first integrals $\{H_1,H_2\}$ which are meromorphic functions of the variables $q_i$, $p_i$, $1\leq i \leq 3$.}
This is the obvious consequence of the canonical change (2.4).
\begin{center} {\bf 3. A parametrization of the parabolic Lagrangian solution} \end{center}
The equations (2.1) admit an exact solution discovered by Lagrange [8] in which the triangle formed by the three bodies is equilateral and the trajectories of the bodies are similar conics with one focus at the common barycenter.
For the reduced form (2.5) the equality of the mutual distances gives $$ q_1=q,\quad q_2=\displaystyle\frac{q}{2},\quad q_3=\displaystyle\frac{\sqrt{3}q}{2}, \leqno (3.1) $$ where $q=q(t)$ is an unknown function. Substituting (3.1) into (2.5) one can show that $$ p_1=p,\quad p_2=Ap+\displaystyle\frac{B}{q},\quad p_3=Cp+\displaystyle\frac{D}{q}, \leqno (3.2) $$ with $p=p(t)$ unknown and $A$, $B$, $C$, $D$ are the following constants $$ \begin{array}{ll} A=\displaystyle \frac{m_2(m_3-m_1)}{m_1S_3},\quad B=-\displaystyle \frac{\sqrt{3}kS_1m_2m_3}{S_2S_3},\quad C=\displaystyle \frac{\sqrt{3}m_2(m_1+m_3)}{m_1S_3},\\ D=-\displaystyle \frac{km_2(S_2+m_1m_2-m_3^2)}{S_2S_3}, \end{array} $$
where
$$ S_1=m_1+m_2+m_3, \quad S_2=m_1m_2+m_2m_3+m_3m_1, \quad S_3=m_2+2m_3.$$
Substituting (3.1), (3.2) into the integral of energy $H=h=const$ we obtain the following relation between $q$ and $p$ $$ ap^2+\displaystyle\frac{bp}{q}+\displaystyle\frac{c}{q}+\displaystyle\frac{d}{q^2}=h, \leqno (3.3) $$ where $$ a=\displaystyle\frac{2S_1S_2}{m_1^2S_3^2},\quad b=-\displaystyle\frac{2\sqrt{3}km_2S_1}{m_1S_3^2},\quad c=-S_2, \quad d=\displaystyle\frac{2k^2S_1(m_2^2+m_2m_3+m_3^2)}{S_3^2S_2}. $$ Moreover, from (2.5) we have $$ \displaystyle\frac{dq}{dt}=\left(M_1+\displaystyle\frac{A}{m_3}\right)p+\displaystyle\frac{B}{m_3q}
\leqno (3.4) $$ The equations (3.1), (3.2), (3.3), (3.4) define all Lagrangian particular solutions and contain two free parameters: $k$ and $h$.
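The constraint behind (3.1) is that the configuration stays equilateral: substituting $q_1=q$, $q_2=q/2$, $q_3=\sqrt{3}\,q/2$ into the mutual distances $r_1$, $r_2$, $r_3$ of Section 2 gives $r_1=r_2=r_3=q$. A quick numerical check (the value of $q$ below is an arbitrary choice):

```python
import math

q = 1.7                                   # arbitrary positive test value
q1, q2, q3 = q, q/2, math.sqrt(3)*q/2     # the Lagrangian ansatz (3.1)
r1 = q1                                   # distance m3 -- m1
r2 = math.hypot(q2, q3)                   # distance m3 -- m2
r3 = math.hypot(q1 - q2, q3)              # distance m1 -- m2
print(r1, r2, r3)                         # all three equal q: equilateral
```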
Consider the case of zero energy $h=0$ and $k\neq0$. Then there exists a parabolic particular solution in the sense that the limit velocity goes to zero when the bodies approach infinity and each body describes a parabola.
Putting $w=pq$, one can use (3.3) to express $q$ and $p$ as functions of $w$: $$ q=P(w),\quad p=\displaystyle\frac{w}{P(w)}, \leqno (3.5) $$
where $P(w)=-(aw^2+bw+d)/c$.
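That (3.5) is consistent with $h=0$ can be seen directly: substituting $q=P(w)$ and $p=w/P(w)$ into the left-hand side of (3.3) gives $(aw^2+bw+d)/q^2+c/q=-cq/q^2+c/q=0$. The identity holds for any coefficients; the numerical values below are arbitrary placeholders, not derived from particular masses:

```python
# Spot check that q = P(w), p = w/P(w) annihilates the left side of (3.3)
# when h = 0.  The coefficients a, b, c, d are arbitrary placeholders; in the
# paper they are explicit functions of the masses and the angular momentum k.
a, b, c, d = 2.3, -0.7, -1.9, 0.4
w = 1.25                                   # parameter along the parabolic orbit
q = -(a*w**2 + b*w + d) / c                # q = P(w)
p = w / q                                  # p = w / P(w)
residual = a*p**2 + b*p/q + c/q + d/q**2   # left side of (3.3) with h = 0
print(residual)                            # zero up to rounding error
```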
Let $M={\mathbb C^6}$ be the complexified phase space of the system (2.5). Then (3.5), (3.1), (3.2) define a parametrized parabolic integral curve $\Gamma \subset M$ with parameter $w\in {\mathbb C \mathbb P^1}$.
\begin{center} {\bf 4. The normal variational equations} \end{center}
Let $z=(q_1,q_2,q_3,p_1,p_2,p_3)$, $z\in M$. One can obtain the variational equations of the system (2.5) along the integral curve $\Gamma$ $$ \displaystyle\frac{d\zeta}{dt}=JH_{zz}(\Gamma)\zeta, \quad \zeta \in T_{\Gamma}M, \leqno (4.1) $$ where $H_{zz}$ is the Hessian matrix of the Hamiltonian $H$ at $\Gamma$ and $J$ is the $6\times 6$ matrix $$ J = \left(\begin{array}{cc} 0&E\\ -E&0 \end{array}\right), $$ where $E$ is the $3 \times 3$ identity matrix.
These equations admit the linear first integral $F=(\zeta,H_z(\Gamma))$, where $H_z=\mathrm{grad}(H)$, and can be reduced to the normal $5$-dimensional bundle $ G=T_{\Gamma}M/T\Gamma$ of $\Gamma$. After restricting (4.1) to the surface $F=0$ we obtain the {\it normal variational equations} (NVE) [27], which form a system of $4$ equations $$ \displaystyle\frac{d\eta}{dt}=\tilde A(\Gamma)\eta, \quad \eta\in{\mathbb C^4}, \leqno (4.2) $$ where $\tilde A$ is a $4 \times 4$ matrix depending on $\Gamma$.
We can obtain the NVE in the following natural way, applying Whittaker's procedure [24] for reducing the order of the Hamiltonian system (2.5).
Fixing the level of energy $h=0$ one can find $p_1$ as a function of the other variables from the equation $H(q,p)=0$ which takes the following form $$ a_1p_1^2+b_1p_1+c_1=0, $$ where $a_1$, $b_1$, $c_1$ are known functions depending on $p_2$, $p_3$, $q_1$, $q_2$, $q_3$.
Solving this equation we get two solutions for $p_1$ $$ p_1=\displaystyle\frac{-b_1+\sqrt{\Delta}}{2a_1}=K_+ \quad \mathrm{and} \quad p_1=\displaystyle\frac{-b_1-\sqrt{\Delta}}{2a_1}=K_- ,$$ where $\Delta=b_1^2-4a_1c_1$.
By substituting the Lagrangian solution given by (3.1), (3.2), (3.5) in these relations we choose the root $p_1=K_-$ as corresponding to this solution.
The functions $q_r(t)$, $p_r(t)$, $r=2,3$ satisfy the canonical equations $$ \displaystyle\frac{dq_r}{dq_1}=\displaystyle\frac{\partial K}{\partial p_r},\quad \displaystyle\frac{dp_r}{dq_1}=-\displaystyle\frac{ \partial K}{\partial q_r}, \quad (r=2,3), \leqno (4.3) $$ where $K=-K_-$ and $q_1$ is taken as the new time.
The system (4.3) is a nonautonomous Hamiltonian system with $2$ degrees of freedom which has the same integral curve $\Gamma$. Notice that $K$ is no longer a first integral.
It is useful to pass now to the new time $(q_1=q)\rightarrow w$. From the formulas (3.3), (3.5) we have $$ q=-\displaystyle\frac{aw^2+bw+d}{c},\quad dq=-\displaystyle\frac{2aw+b}{c}dw. \leqno (4.4) $$ The resulting NVE (4.2) are obtained as the variational equations of the system (4.3) near the integral curve $\Gamma$ and, after the substitution (4.4), take the form $$ \displaystyle\frac{d\eta}{dw}=\tilde A(\Gamma)\eta, \quad \eta\in{\mathbb C^4}, \leqno (4.5) $$ where $\tilde A$ is a $4\times 4$ matrix whose elements are rational functions of $w$.
We can represent $\tilde A$ in the following block form $$ \tilde A = \left(\begin{array}{cc} {M_{3}}^{T}&M_{2}\\ -M_{1}&-M_{3} \end{array}\right), $$ where $M_1$, $M_2$, $M_3$ are $2\times 2$ matrices and $M_3^T$ denotes the transpose of $M_3$.
The matrix $M_1$ is symmetric and has the following form $$ M_1 = \displaystyle\frac{1}{S_1^3L^2Z^2} \left( \begin{array}{ll} n_{11}&n_{12} \\ n_{12}&n_{22} \end{array} \right), $$ where $L(w)$ is the linear polynomial $$ L=l_1w+l_2, $$ and $l_1=2S_2$,\quad $l_2=-\sqrt{3}m_1m_2k$.
$Z(w)$ is the following quadratic polynomial
$$ Z=z_1w^2+z_2w+z_3, $$
where $z_1=S_2^2,\quad z_2=-\sqrt{3}m_1m_2kS_2,\quad z_3=k^2m_1^2(m_2^2+m_2m_3+m_3^2).$
The coefficients $n_{ij}$ have the form $$ n_{11}=A_1w^2+A_2w+A_3,\quad n_{12}=A_4w^2+A_5w+A_6,\quad n_{22}=A_7w^2+A_8w+A_9, $$ where $A_i$ are constants depending on the masses $m_1$, $m_2$, $m_3$ and $k$.
The matrix $M_2$ has the following expression $$ M_2 = \displaystyle\frac{4S_1Z}{S_2S_3^3m_1^4m_2m_3} \left( \begin{array}{ll} 1&0 \\ 0&1 \end{array} \right). $$ For the matrix $M_3$ we have $$ M_3 = \displaystyle\frac{1}{m_1S_1LZ} \left( \begin{array}{ll} m_{11}&m_{12} \\ m_{21}&m_{22} \end{array} \right), $$ where $$ \begin{array}{ll} m_{11}=B_1w^2+B_2w+B_3,\quad m_{12}=B_4w^2+B_5w+B_6,\quad m_{21}=B_7w^2+B_8w+B_9,\\ m_{22}=B_{10}w^2+B_{11}w+B_{12}, \end{array} $$ and $B_j$ are constants depending on $m_1$, $m_2$, $m_3$ and $k$.
The system (4.5) has four singular points $w_1$, $w_2$, $w_3$, $w_4$ in the complex plane: $$ w_1=\infty, $$ -- the infinity. $$ w_2=\displaystyle\frac{\sqrt{3}m_1m_2k}{2S_2}, $$ -- the root of $L=0$. $$ w_3=\displaystyle \frac {( \sqrt{3}m_2 + i S_3)km_1}{2 S_2} ,\quad w_4=\displaystyle \frac{(\displaystyle \sqrt{3}m_2 - \displaystyle i S_3)k m_1}{2S_2}, \leqno (4.6) $$ -- the corresponding roots of the quadratic equation $Z=0$ where $i^2=-1$.
Notice that the expressions for $w_{2,3,4}$ are rational functions of the masses.
The singularities $w_i$, $1 \leq i \leq 4$ have a clear mechanical sense: $w_1$ corresponds to the motion of the bodies at infinity, $w_2$ defines the moment of the maximal approach.
It is easy to see from (4.6) that if the angular momentum constant $k=0$, then $w_2=w_3=w_4=0$ and we have a triple collision of the bodies at the moment of time $w=0$. If $k\neq 0$ then by the lemma of Sundman there are no triple collisions in the real phase space and $w_{3,4}$ become complex.
Since the expression for $p$ given in (3.5) becomes infinite as $w\rightarrow w_{3,4}$, we can formally consider $w_3$ and $w_4$ as corresponding to ``complex'' collisions, which tend to $w=0$ as $k\rightarrow 0$.
It was noted by Schaefke [19] that the equations (4.5) can be reduced to Fuchsian form.
In order to do this, consider the linear change of variables $$\eta=Cx, \leqno (4.7) $$ where $\eta=(\eta_1,\eta_2,\eta_3,\eta_4)^T$, $x=(x_1,x_2,x_3,x_4)^T$ and $C=\mathrm{diag}(LZ,LZ,1,1)$.
In new variables the system (4.5) takes the following form $$ \displaystyle\frac{dx}{dw}=\left(\displaystyle\frac{A(k)}{w-w_2}+\displaystyle\frac{B(k)}{w-w_3}+ \displaystyle\frac{C(k)}{w-w_4}\right)x, \quad x\in{\mathbb C^4}, \leqno (4.8) $$ where $A(k),B(k),C(k)$ are known constant $4\times 4$ matrices depending on $m_1,m_2,m_3$ and $k$.
Under the assumption $k\neq 0$ we can exclude the parameter $k$ from the system (4.8) by using the change of time $w=kt$. As a result, one obtains $$ \displaystyle\frac{dx}{dt}=\left( \displaystyle\frac{A}{t-t_0}+\displaystyle\frac{B}{t-t_1}+\displaystyle\frac{C}{t-t_2}\right)x, \leqno (4.9) $$
where $$ t_0=\displaystyle\frac{\sqrt{3}m_1m_2}{2S_2},\quad t_1=\displaystyle\frac{m_1(\sqrt{3}m_2+iS_3)}{2S_2}, \quad t_2=\displaystyle\frac{m_1(\sqrt{3}m_2-iS_3)}{2S_2}. $$ and $$ A=\displaystyle\frac{\tilde M(t_0)}{(t_0-t_1)(t_0-t_2)},\quad B=\displaystyle\frac{\tilde M(t_1)}{(t_1-t_2)(t_1-t_0)},\quad C=\displaystyle\frac{\tilde M(t_2)}{(t_2-t_1)(t_2-t_0)}. $$
Here, $\tilde M(w)$ is the following matrix $$ \tilde M(w)= \left( \begin{array}{cc} L Z\,{M_{3}}^{T} - \displaystyle\frac {\partial (L Z)}{\partial w }\,E & M_{2} \\ - L^{2} Z^{2}\,M_{1} & - L Z\,M_{3} \end{array} \right), $$ where one should put $k=1$.
The system (4.9) is defined on the connected Riemann surface $X={\mathbb C \mathbb P^1} \setminus \{t_0,t_1,t_2,\infty \}$.
It turns out that the matrix $A$ is real, and the matrices $B=R+iJ$, $C=R-iJ$ are complex conjugate, with $R$ and $J$ real matrices. It will simplify matters further if we choose the units of masses as follows $$m_1=\alpha,\quad m_2=\beta, \quad m_3=1,\quad 0< \alpha\leq\beta\leq 1 .$$ In Appendix A we give the expressions for $A$, $R$, $J$, obtained with the help of MAPLE.
\begin{center} {\bf 5. The monodromy group of the system (4.9)} \end{center}
Let $\Sigma(t)$ be a solution of the matrix equation (4.9) $$ \displaystyle\frac{d}{dt}\Sigma=\left( \displaystyle\frac{A}{t-t_0}+\displaystyle\frac{B}{t-t_1}+\displaystyle\frac{C}{t-t_2}\right)\Sigma, \leqno (5.1) $$ with the initial condition $\Sigma(\tau)=I$, $\tau \in X$ where $I$ is the unit $4\times 4$ matrix.
It can be continued along a closed path $\gamma$ with end points at $\tau$. We obtain a function $\tilde \Sigma(t)$ which also satisfies (5.1). From the linearity of (5.1) it follows that there exists a complex $4\times 4$ matrix $T_{\gamma}$ such that $\tilde \Sigma(t)=\Sigma(t) T_{\gamma}$. The set of matrices $G=\{T_{\gamma}\}$ corresponding to all closed curves in $X$ is a group, called the {\it monodromy group} of the linear system (4.9). Let $T_i$ be the elements of $G$ corresponding to circuits around the singular points $t=t_i$, $i=0,1,2$. Then the monodromy group $G$ is generated by $T_0$, $T_1$, $T_2$. Denote by $T_{\infty}\in G$ the element corresponding to a circuit around the point $t=\infty$.
\noindent {\bf Lemma 5.1} {\it The following assertions about the monodromy group $G$ hold \\
\noindent a) $T_0=I$ is the unit matrix, and $$ T_1T_2=T^{-1}_{\infty}. \leqno (5.2) $$
\noindent b) There exist two non-singular matrices $U$, $V$ such that $$ U^{-1}T_1U=V^{-1}T_2V=\left( \begin{array}{cccc} 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{array} \right).$$
\noindent c) The matrix $T_{\infty}$ has the following eigenvalues $$\mathrm{Spectr}(T_{\infty})=\left\{ e^{2\pi i\lambda_1},\quad e^{2\pi i\lambda_2},\quad e^{-2\pi i\lambda_1},\quad e^{-2\pi i\lambda_2}\right\}, \leqno (5.3)$$ where $$ \lambda_1=\displaystyle\frac{3}{2}+\displaystyle\frac{1}{2}\sqrt{13+\sqrt{\theta}},\quad \lambda_2=\displaystyle\frac{3}{2}+\displaystyle\frac{1}{2}\sqrt{13-\sqrt{\theta}}, \leqno (5.4) $$ and $$ \theta=144\left(1-\displaystyle 3\frac{S_2}{S_1^2}\right),\quad S_1=\alpha+\beta+1,\quad S_2=\alpha\beta+\alpha+\beta.$$ Moreover, $$ \mathrm{Spectr} (T_{\infty} ) \neq \{1,1,1,1\}. $$}
{\it Proof.} a) The matrix $A$ has the eigenvalues $\{-1,-1,0,0\}$. Following the general theory of linear differential equations, let us write the general solution of the system (4.9) near the singular point $t=t_0$ as follows $$ x(t)=c_1X_1(t)+c_2X_2(t)+c_3X_3(t)+c_4X_4(t),$$ where $c_{1,\ldots,4}\in {\mathbb C}$ are arbitrary constants and $$ \begin{array}{llcc} X_1(t)=\displaystyle \frac{a_{-1}}{t-t_0}+a_0+a_1(t-t_0)+\cdots,\quad X_2(t)=\displaystyle \frac{b_{-1}}{t-t_0}+b_0+b_1(t-t_0)+\cdots,\\ X_3(t)=p_0+p_1(t-t_0)+\cdots,\quad \quad \quad \quad \quad X_4(t)=q_0+q_1(t-t_0)+\cdots, \end{array} \leqno (5.5) $$ where $a_i$, $b_i$, $p_i$, $q_i \in {\mathbb C^4} $ are some constant vectors.
By substituting (5.5) in (4.9) one can find $a_i$, $b_i$, $p_i$, $q_i$ and show that the vectors $X_1(t)$, $X_2(t)$, $X_3(t)$, $X_4(t)$ are functionally independent and meromorphic in a small neighborhood of the point $t=t_0$. This implies that the element $T_0$ of the monodromy group $G$ corresponding to a circuit around $t_0$ is the unit matrix. Obviously we have $T_0T_1T_2=T^{-1}_{\infty}$, from which the relation (5.2) follows.
\noindent b) The matrices $B$, $C$ have the same eigenvalues $\{-2,-1,0,1\}$. It can be shown by a straightforward calculation that near the singular point $t=t_1$ the general solution of the system (4.9) can be represented as $$ x(t)=c_1Y_1(t)+c_2Y_2(t)+c_3Y_3(t)+c_4Y_4(t),$$ where $c_{1,\ldots,4}\in {\mathbb C}$ are arbitrary constants and $$ \begin{array}{lllcc} Y_1(t)=e_1(t-t_1)+e_2(t-t_1)^2+\cdots, \quad Y_2(t)=f_0+f_1(t-t_1)+\cdots+C_1 \ln(t-t_1)Y_1(t), \\ Y_3(t)=\displaystyle \frac{g_{-1}}{t-t_1}+g_0+g_1(t-t_1)+\cdots,\\ Y_4(t)=\displaystyle \frac{h_{-2}}{(t-t_1)^2}+\displaystyle \frac{h_{-1}}{t-t_1}+\cdots+C_2\ln(t-t_1)(f_0+f_1(t-t_1)+\cdots)+C_3\ln(t-t_1)Y_1(t), \end{array} $$ where $e_i$, $f_i$, $g_i$, $h_i \in {\mathbb C^4}$ are some constant vectors and $C_1$, $C_2$, $C_3$ are parameters depending on the masses $\alpha$, $\beta$.
For $C_1$, $C_2$ one can find $$ C_1=\displaystyle \frac{9}{4}\,\frac{\beta \alpha^3( \beta+2)^2(\alpha \beta+\alpha+ \beta)}{(\alpha+\beta+1)^3 },\quad C_2=iC_1.$$
The matrix $\Sigma(t)=(Y_1,Y_2,Y_3,Y_4)$ represents the solution of the system (5.1) in a small neighborhood of the point $t=t_1$. After going around $t_1$ we get $\tilde \Sigma(t) =\Sigma(t) M$ where $$ M=\left( \begin{array}{cccc} 1 & 2\pi i C_1 & 0 & 2\pi i C_3 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 2\pi iC_2 \\ 0 & 0 & 0 & 1 \end{array} \right). $$
Since $C_1 \neq 0$, $C_2\neq 0$ for $\alpha>0$, $\beta>0$, there exists a non-singular matrix $T$ such that $$ T^{-1}MT=\left( \begin{array}{cccc} 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{array} \right), \leqno (5.6) $$ which is the Jordan form of $M$.
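For completeness, we indicate why (5.6) holds. Writing $M=I+N$, a direct multiplication shows that $N$ is nilpotent with
$$ N^2=0, $$
since the only nonzero rows of $N$ are the first and the third, while its only nonzero columns are the second and the fourth. Moreover, these two rows are linearly independent whenever $C_1\neq 0$ and $C_2\neq 0$, so $\mathrm{rank}\,N=2$. A nilpotent matrix on ${\mathbb C^4}$ with $N^2=0$ and rank $2$ has exactly two Jordan blocks of size two, which gives precisely the Jordan form on the right-hand side of (5.6).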
The matrix $T_1$ is similar to $M$ and therefore has the same Jordan form (5.6). Repeating analogous arguments for the matrix $T_2$ we deduce that the same assertion holds for the monodromy matrix $T_2$. Notice that the existence of logarithmic branching near some Lagrangian solutions in the three-body problem was first observed by H. Block (1909) and J.F. Chazy (1918) (see for instance [1]).
\noindent c) Consider the matrix $A_{\infty}=-(A+B+C)$. Then there exists (see for example [4]) a non-singular matrix $W$ such that $$ T_{\infty}=W^{-1}e^{2\pi iA_{\infty}}W. \leqno (5.7) $$
Appendix A contains the expressions for the elements of the matrix $A_{\infty}$. One can calculate its eigenvalues $$ \mathrm{Spectr}(A_{\infty})=\{ \lambda_1,\lambda_2,3-\lambda_1,3-\lambda_2\}, $$ where $\lambda_{1,2}$ are given in (5.4).
One can easily check that $$ 0\leq \sqrt{\theta} < 12, \leqno (5.8) $$ for all $\alpha >0$, $\beta > 0$.
With the help of (5.7) we obtain for the eigenvalues of the matrix $T_{\infty}$ the expression (5.3).
Let us suppose now that $\mathrm{ Spectr}(T_{\infty})=\{1,1,1,1\}$. Then according to (5.4) we obtain
$$ \sqrt{13+\sqrt{\theta}}=n_1, \quad \sqrt{13-\sqrt{\theta}}=n_2, \quad n_1,n_2
\in{\mathbb Z}.\leqno (5.9) $$ Hence, in view of (5.8), the number $r =\sqrt{\theta}=n_1^2-13$ is an integer with $0\leq r \leq 11$, so that $13\leq n_1^2\leq 24$. This forces $n_1=4$ and $r=3$, whence $n_2^2=13-3=10$, which is not a perfect square. Thus the relations (5.9) cannot be fulfilled. This implies that $$ \mathrm{Spectr} (T_{\infty} ) \neq \{1,1,1,1\}. $$ The proof of Lemma 5.1 is completed. \quad \quad $\Box$
\begin{center}
{\bf 6. Nonexistence of additional meromorphic first integrals}
\end{center}
We call the planar three-body problem (2.1) {\it meromorphically} integrable near the Lagrangian parabolic solution $\Gamma$, defined in Section 3, if the corresponding Hamiltonian system (2.3) possesses a complete set of complex meromorphic first integrals (see Definition 2.1) in involution in a connected neighborhood of $\Gamma$. Recall that equations (2.3) describe the motion of bodies in the system of the center of masses.
From Proposition 2.2 it follows that in this case the system (2.5) admits two additional first integrals which are meromorphic and functionally independent in the same neighborhood.
\noindent {\bf Theorem 6.1} {\it For $k\neq 0$ the Hamiltonian system (2.5) does not admit two functionally independent additional first integrals, meromorphic in a connected neighborhood of the Lagrangian parabolic solution $\Gamma$.}
{\it Proof.} Suppose that the Hamiltonian system (2.5) admits two first integrals $H_1$, $H_2$, meromorphic in a connected neighborhood of the Lagrangian parabolic solution $\Gamma$ and functionally independent together with $H$. According to Ziglin [27], in this case the NVE (4.5) have two functionally independent meromorphic integrals $F_1$, $F_2$ which are single-valued in a complex neighborhood of the Riemann surface $\Gamma={\mathbb C \mathbb P}^1\setminus\{t_0,t_1,t_2,\infty\}$. The linear system (4.9) was obtained from (4.5) by the linear change of variables (4.7) and the change of time $w=kt$, $k\neq 0$. Therefore, it possesses two functionally independent meromorphic integrals $I_1$, $I_2$. From this fact the following lemma is deduced.
\noindent{\bf Lemma 6.2 (Ziglin [27])} {\it The monodromy group $G$ of the system (4.9) has two rational, functionally independent
invariants $J_1$, $J_2$.}
In appropriate coordinates, according to b) of Lemma 5.1, the monodromy transformation $T_1$ can be written as follows $$T_1= \left( \begin{array}{cccc} 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{array} \right) =I+D,$$ where $I$ is the unit matrix and $$D=\left( \begin{array}{cccc} 0& 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{array} \right ). \leqno (6.1) $$
For the monodromy matrix $T_2$ one writes $$ T_2=I+R,$$ where $$R=\tilde V D \tilde V^{-1}=\left( \begin{array}{cccc} a_1 & a_2 & a_3 & a_4 \\ b_1 & b_2 & b_3 & b_4 \\ c_1 & c_2 & c_3 & c_4 \\ d_1 & d_2 & d_3 & d_4 \end{array} \right),\leqno (6.2)$$ with some unknowns $a_i$, $b_i$, $c_i$, $d_i\in {\mathbb C}$ and a nonsingular matrix $\tilde V$.
Let us introduce the following linear differential operators $$ \delta=x_2\displaystyle\frac{\partial}{\partial x_1}+x_4\displaystyle\frac{\partial}{\partial x_3},$$ and $$ \Delta= \left(\sum_{i=1}^4a_ix_i\right)\displaystyle\frac{\partial}{\partial x_1}+\left(\sum_{i=1}^4b_ix_i\right)\displaystyle\frac{\partial}{\partial x_2}+\left(\sum_{i=1}^4c_ix_i\right)\displaystyle\frac{\partial}{\partial x_3}+\left(\sum_{i=1}^4d_ix_i\right)\displaystyle\frac{\partial}{\partial x_4}. $$
\noindent{\bf Lemma 6.3} {\it Let $J$ be a rational invariant of the monodromy group $G$. Then the following relations hold $$ \delta J=0, \quad \Delta J =0.$$}
{\it Proof.} For an arbitrary $n\in \mathbb N$ we have $T_1^n=I+nD$, hence $J\left(T_1^n x\right)=J\left( x+nD x\right).$ Expanding the last expression in Taylor series we obtain $$ J\left(T_1^n x\right)=J(x)+n\delta J(x)+\sum\limits_{i=2}^{\infty} n^i r_i(x), \leqno (6.3)$$ where $r_i(x)$ are some rational functions.
In view of $J\left(T_1^n x\right)=J(x)$ and the fact that $J(x)$ is a rational function of $x$, the second term of (6.3) gives $\delta J=0$. The relation $\Delta J=0$ is deduced by analogy from the identity $J\left(T_2x\right)=J(x)$. \quad \quad $\Box$
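We note that the identity $T_1^n=I+nD$ used in the proof follows from the nilpotency of the matrix $D$: a direct multiplication using (6.1) gives
$$ D^2=0, \qquad \hbox{hence} \qquad T_1^n=(I+D)^n=I+nD, \quad n\in\mathbb N, $$
since all higher powers of $D$ in the binomial expansion vanish.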
{\it Case (1).} Assume that the invariants $J_1$, $J_2$ depend on $x_2$, $x_4$ only. By Lemma 6.3 we have $$ \Delta J_1=0, \quad \Delta J_2=0. \leqno (6.4)$$
It can be verified that the equations (6.4) imply the conditions $b_i=0$, $d_i=0$, $1\leq i \leq 4$. Accordingly, the matrix $R$ may be written $$ R=\left( \begin{array}{cccc} a_1 & a_2 & a_3 & a_4 \\ 0 & 0 & 0 & 0 \\ c_1 & c_2 & c_3 & c_4 \\ 0 & 0 & 0 & 0 \end{array} \right). \leqno (6.5)$$
One can find the characteristic polynomial $P(\lambda)=\det(R-\lambda I)$ of $R$ $$P(\lambda)=\lambda^4-(a_1+c_3)\lambda^3+(a_1c_3-c_1a_3)\lambda^2. \leqno (6.6)$$
In view of (6.1), (6.2) all eigenvalues of the matrix $R$ are equal to $0$, thus, with help of (6.6) we get $$ a_1+c_3=0, \quad a_1c_3=c_1a_3. \leqno (6.7) $$
The matrix $T_1T_2$ takes the following form $$ T_1T_2=\left( \begin{array}{cccc} a_1+1 & a_2+1 & a_3 & a_4 \\ 0 & 1 & 0 & 0 \\ c_1 & c_2 & c_3+1 & c_4+1 \\ 0 & 0 & 0 & 1 \end{array} \right), $$ and $$ \mathrm{Spectr}(T_1T_2)=\{1,1,s+f,s-f\},$$ where $$ s=1+\displaystyle \frac{a_1+c_3}{2}, \quad f=\displaystyle \frac{ \sqrt { a_1^2+ c_3^2+4 c_1a_3-2a_1 c_3}}{2}. \leqno (6.8) $$
The straightforward calculation using (6.7) and (6.8) shows that the eigenvalues of the matrix $T_1T_2$ are equal to $\{1,1,1,1\}$: indeed, $a_1+c_3=0$ gives $s=1$, while $c_1a_3=a_1c_3$ gives $a_1^2+c_3^2+4c_1a_3-2a_1c_3=(a_1+c_3)^2=0$, so $f=0$. According to (5.2) these must be the eigenvalues of the matrix $T_{\infty}$, which contradicts c) of Lemma 5.1.
{\it Case (2).} Assume that at least one of the invariants $J_1$, $J_2$ depends on $x_1$ or $x_3$. Let, for example, $$ \displaystyle\frac{\partial J_1}{\partial x_1}\neq 0. \leqno (6.9) $$
It is useful to consider two additional linear operators $ \delta_1=[\delta,\Delta]$ and $ \delta_2=-\displaystyle\frac{1}{2}[\delta,\delta_1].$
One has $$ \delta_1=f_1\displaystyle\frac{\partial}{\partial x_1}+f_2\displaystyle\frac{\partial}{\partial x_2}+ f_3\displaystyle\frac{\partial}{\partial x_3}+f_4\displaystyle\frac{\partial}{\partial x_4}, \quad \delta_2=(b_1x_2+b_3x_4)\displaystyle\frac{\partial}{\partial x_1}+ (d_1x_2+d_3x_4)\displaystyle\frac{\partial}{\partial x_3}, $$ where $$ \begin{array}{llcc} f_1=-b_1x_1+(a_1-b_2)x_2-b_3x_3+(a_3-b_4)x_4, & f_2=b_1x_2+b_3x_4,\\ f_3=-d_1x_1+(c_1-d_2)x_2-d_3x_3+(c_3-d_4)x_4, & f_4=d_1x_2+d_3x_4.\end{array} \leqno (6.10)$$
We deduce from $\delta J_i=\Delta J_i=0$ that $$ \delta_1 J_i=0, \quad \delta_2 J_i=0, \quad i=1,2.$$
Consider the partial differential equation $\delta J=0$. Solving it one finds that $J=K(Y_1,Y_2,Y_3)$ where $K(y_1,y_2,y_3)$ is an arbitrary function and $$Y_1=x_2, \quad Y_2=x_4, \quad Y_3=x_4x_1-x_2x_3. \leqno (6.11)$$ Therefore, in view of (6.9), (6.11) we have $J_1=J_1(Y_1,Y_2,Y_3)$ and $ \displaystyle\frac{\partial J_1}{\partial Y_3}\neq 0.$
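One can verify directly that the functions (6.11) are indeed annihilated by $\delta$: since $\delta x_1=x_2$, $\delta x_3=x_4$ and $\delta x_2=\delta x_4=0$, one has
$$ \delta Y_1=\delta x_2=0,\qquad \delta Y_2=\delta x_4=0,\qquad \delta Y_3=x_4\,\delta x_1-x_2\,\delta x_3=x_4x_2-x_2x_4=0. $$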
Consequently, as $\delta_2 Y_1=\delta_2 Y_2=0$, one gets $$ \delta_2 J_1=\displaystyle \frac{\partial J_1}{\partial Y_1}\delta_2 Y_1 +\displaystyle \frac{\partial J_1}{\partial Y_2}\delta_2 Y_2+\displaystyle \frac{\partial J_1}{\partial Y_3}\delta_2 Y_3=\displaystyle \frac{\partial J_1}{\partial Y_3}\delta_2 Y_3. $$
This implies $$ \delta_2 Y_3=0. \leqno (6.12) $$
By substituting in (6.12) the expression for $Y_3$ given by (6.11), we find $$ \delta_2 Y_3=b_3x_4^2-d_1x_2^2+(b_1-d_3)x_2x_4, $$ and since this must vanish identically we arrive at $$ b_3=d_1=0, \quad b_1=d_3=\rho, \leqno (6.13) $$ for some $\rho \in {\mathbb C}$.
We now use the equation $\delta_1 J=0$ which can be written as $$ \delta_1 J=\displaystyle \frac{\partial J}{\partial Y_1}\delta_1 Y_1 +\displaystyle \frac{\partial J}{\partial Y_2}\delta_1 Y_2+\displaystyle \frac{\partial J}{\partial Y_3}\delta_1 Y_3=0, \leqno (6.14) $$
One can show that $$ \begin{array}{lll} \delta_1Y_1=\rho Y_1,\\ \delta_1Y_2=\rho Y_2, \\ \delta_1Y_3=v_{1}Y_1^2+v_2Y_2^2+v_3Y_1Y_2, \end{array} $$ where $v_1=d_2-c_1$, $v_2=a_3-b_4$, $v_3=a_1-b_2-c_3+d_4$.
Hence, (6.14) yields $$ \rho Y_1 \displaystyle \frac{\partial J}{\partial Y_1} +\rho Y_2\displaystyle \frac{\partial J}{\partial Y_2}+(v_{1}Y_1^2+v_2Y_2^2+v_3Y_1Y_2)\displaystyle \frac{\partial J}{\partial Y_3}=0.$$
This equation possesses two rational functionally independent solutions $J_1(Y_1,Y_2,Y_3)$, $J_2(Y_1,Y_2,Y_3)$ only if $$ \rho=0, \quad v_1=v_2=v_3=0, $$ which gives $$ a_1=\epsilon_1+b_2, \quad c_3=\epsilon_1+d_4,\quad c_1=d_2=\zeta_1, \quad a_3=b_4=\zeta_2, \quad \epsilon_1, \zeta_1, \zeta_2 \in {\mathbb C}. \leqno (6.15) $$
After substitutions of (6.13), (6.15) in (6.2) the matrix $R$ is written as $$ R=\left( \begin{array}{cccc} b_2+\epsilon_1 & a_2 & \zeta_2 & a_4 \\ 0 & b_2 & 0 & \zeta_2 \\ \zeta_1 & c_2 & d_4+\epsilon_1 & c_4 \\ 0 & \zeta_1 & 0 & d_4 \end{array} \right).$$
Now, consider the characteristic polynomial $P(\lambda)$ of $R$ $$P(\lambda)=\lambda^4+P_1\lambda^3+P_2\lambda^2+P_3\lambda+P_4,$$ where $$ \begin{array}{llll} P_1=-2(b_2+d_4+\epsilon_1),\\ P_2=3b_2\epsilon_1-2\zeta_1\zeta_2+ 3\epsilon_1d_4+4b_2d_4+b_2^2+\epsilon^2_1 +d_4^2,\\ P_3=-(d_4+b_2+\epsilon_1)(2b_2d_4+b_2\epsilon_1+d_4\epsilon_1-2\zeta_1 \zeta_2), \\ P_4=(b_2d_4 -\zeta_1\zeta_2)(b_2d_4+b_2\epsilon_1+d_4\epsilon_1-\zeta_1\zeta_2+\epsilon_1^2). \end{array}$$ As above, in view of (6.1), (6.2) all eigenvalues of $R$ must be equal to $0$ and therefore $P_i=0$, $1 \leq i \leq 4$. This system gives $$ \epsilon_1=0, \quad b_2=\eta_1, \quad d_4=-\eta_1, \quad \eta_1^2+\zeta_1 \zeta_2=0,$$ and the monodromy matrix $T_2$ becomes $$ T_2=\left( \begin{array}{cccc} \eta_1+1 & a_2 & \zeta_2 & a_4 \\ 0 & \eta_1+1 & 0 & \zeta_2 \\ \zeta_1 & c_2 & 1-\eta_1 & c_4 \\ 0 & \zeta_1 & 0 & 1-\eta_1 \end{array} \right). $$
The matrix $T_1T_2$ has the eigenvalues $\{1,1,1,1\}$, which contradicts c) of Lemma 5.1 and proves the theorem. \quad \quad $\Box$
Due to our definition of integrability we deduce from Theorem 6.1 the following
\noindent {\bf Theorem 6.3} {\it The planar three-body problem is meromorphically non-integrable near the Lagrangian parabolic solution. }
\begin{center}
{\bf 7. Final remarks} \end{center}
At the end of the 19th century Poincar\'e [18] indicated some qualitative phenomena in the behavior of phase trajectories which prevent the appearance of new integrals of a Hamiltonian system besides those which are present but fail to form a set sufficient for complete integrability.
Let $M^{2n}$ be the phase space, and $H:M^{2n}\rightarrow \mathbb R$, $H=H_0+\epsilon H_1 +O(\epsilon^2)$ the Hamiltonian function. Suppose that for $\epsilon =0$ the corresponding Hamiltonian system has an $m$--dimensional hyperbolic invariant torus $T^m_0$. According to Graff's theorem [5], for small $\epsilon$ the perturbed system has an invariant hyperbolic torus $T^m_{\epsilon}$ depending analytically on $\epsilon$. It can be shown that $T^m_{\epsilon}$ has asymptotic invariant manifolds $\Lambda^+$ and $\Lambda^-$ filled with trajectories which tend to the torus $T^m_{\epsilon}$ as $t\rightarrow +\infty$ and $t\rightarrow -\infty$ respectively. In integrable Hamiltonian systems such manifolds (also called {\it separatrices}), as a rule, coincide. In the nonintegrable case the situation is different: asymptotic surfaces can have transverse intersections, forming a complicated tangle which prevents the appearance of new integrals. For a modern presentation of these results see, for example, [7].
The method of splitting of asymptotic surfaces was applied to the three--body problem by many authors. In his book [13] J.K. Moser described a technique which uses the symbolic dynamics associated with a transverse homoclinic point. Applying this method, it was shown in [9] that under certain assumptions the planar circular restricted three--body problem does not possess an additional real analytic integral. Similar results for the Sitnikov problem and the collinear three--body problem can be found in [13], [10]. The existence and the transverse intersection of stable and unstable manifolds along some periodic orbits in the planar three--body problem in which two masses are sufficiently small was established in [11], using the results obtained in [12].
It is necessary to note that Theorem 6.3 implies the nonexistence of a complete set of complex analytic first integrals for the general planar three--body problem. To prove the nonexistence of real analytic integrals one should use some heteroclinic phenomena, and one can propose the following line of reasoning. Let $M^{\infty}$ be the infinity manifold; the Lagrangian parabolic orbit considered here is biasymptotic to it. This is a weakly hyperbolic invariant manifold, and the reference orbit is heteroclinic to different periodic orbits sitting in $M^{\infty}$. The dynamical interpretation of Theorem 6.3 seems to be the transversality of the invariant stable and unstable manifolds of $M^{\infty}$ along this orbit. A combination of passages near several of these orbits (there is a whole family obtained by rotation) should allow one to prove the existence of a heteroclinic chain. This, in turn, gives rise to an embedding of a suitable subshift, with lack of predictability and chaos, and implies the nonexistence of real analytic integrals.
\begin{center}
{\bf Acknowledgements}
\end{center} I would like to thank L. Gavrilov and V. Kozlov for useful discussions and for the advice to study the present problem. I also thank J.-P. Ramis, J.J. Morales-Ruiz, J.-A. Weil and D. Boucher for their attention to the paper. I am very grateful to the anonymous referee for his useful remarks.
\begin{center}
{\bf References}
\end{center}
\noindent [1] V.I. Arnold, V.V. Kozlov, A.I. Neishtadt,{\it Dynamical systems III, } Springer--Verlag, p. 63, (1987).
\noindent [2] H. Bruns, {\it Ueber die Integrale des Vielk\"orper-Problems}, Acta Math. 11, p. 25-96, (1887-1888).
\noindent [3] J. Chazy, { \it Sur l'allure du mouvement dans le probl\`eme des trois corps quand le temps croit ind\'efiniment}, Ann. Sci. Ecole Norm., 39 , 29-130 (1922).
\noindent [4] V.V. Golubev, {\it Lectures on analytic theory of differential equations}, Gostekhizdat, Moscow, (1950), (Russian).
\noindent [5] S.M. Graff, {\it On the conservation of hyperbolic invariant tori for Hamiltonian systems}, J. Differential Equations 15, 1--69, (1974).
\noindent [6] S.V. Kowalewskaya, {\it Sur le probl\`eme de la rotation d'un corps solide autour d'un point fixe}, Acta Math. 12, 177-232 (1889).
\noindent [7] V.V. Kozlov, {\it Symmetries, Topology, and Resonances in Hamiltonian mechanics,} Springer-Verlag (1996).
\noindent [8] J.L. Lagrange, {\it Oeuvres}. Vol. 6, 272-292, Paris (1873).
\noindent [9] J. Llibre, C. Sim\'o, {\it Oscillatory solutions in the planar restricted three--body problem,} Math. Ann., 248, 153--184, 1980.
\noindent [10] J. Llibre, C. Sim\'o, {\it Some homoclinic phenomena in the three--body problem,} J. Differential Equations, 37, no. 3, 444--465, 1980.
\noindent [11] R. Martinez, C. Sim\'o, {\it A note on the existence of heteroclinic orbits in the planar three body problem,} In Seminar on Dynamical Systems, Euler International Mathematical Institute, St. Petersburg, 1991, S. Kuksin, V. Lazutkin and J. P\"oschel, editors, 129--139, Birkh\"auser, 1993.
\noindent [12] R. Martinez, C. Pinyol, {\it Parabolic orbits in the elliptic restricted three body problem,} J. Differential Equations, 111, 299--339, (1994).
\noindent [13] J.K. Moser, {\it Stable and random motions in dynamical systems,} Princeton Univ. Press, Princeton, N.J., 1973.
\noindent [14] R. Moeckel, {\it Chaotic dynamics near triple collision}, Arch. Rational. Mech. Anal. 107, no. 1, 37-69 (1989).
\noindent [15] J.J. Morales-Ruiz, J.P. Ramis, {\it Galoisian Obstructions to Integrability of Hamiltonian Systems}, Preprint (1998).
\noindent [16] I. Newton, {\it Philosophiae naturalis principia mathematica}, Imprimatur S. Pepys, Reg. Soc. Praeses, julii 5, 1686, Londini anno MDCLXXXVII.
\noindent [17] P. Painlev\'e, {\it M\'emoire sur les int\'egrales premi\'eres du probl\`eme des n corps}, Acta Math. Bull. Astr. T 15 (1898).
\noindent [18] H. Poincar\'e, {\it Les m\'ethodes nouvelles de la m\'ecanique c\'eleste}, vol. 1-3. Gauthier--Villars, Paris 1892, 1893, 1899.
\noindent [19] R. Schaefke, Private communication.
\noindent [20] M. Singer, A. Baider, R. Churchill, D. Rod, {\it On the infinitesimal Geometry of Integrable Systems}, in Mechanics Day, Shadwich et. al., eds, Fields Institute Communications, 7, AMS, 5-56 (1996).
\noindent [21] K.F. Sundman, {\it M\'emoire sur le probl\`eme des trois corps}, Acta Math. 36, 105-107 (1913).
\noindent [22] C.L. Siegel, J.K. Moser, {\it Lectures on Celestial Mechanics,} Springer-Verlag (1971).
\noindent [23] Gh. Topan, {\it Sur une int\'egrale premi\'ere transcendante dans certaines configurations du probl\'eme des trois corps}, Bull. Math. Soc. Sci. Math. R. S. Roumanie (N.S.), no. 1, 83-91 (1989).
\noindent [24] E.T. Whittaker, {\it A Treatise on the Analytical Dynamics of particles and Rigid Bodies}. Cambridge University Press, New York, (1970).
\noindent [25] A. Wintner, {\it The Analytical Foundations of Celestial Mechanics,} Princeton Univ. Press, Princeton, (1941).
\noindent [26] H. Yoshida, {\it A criterion for the nonexistence of an additional integral in Hamiltonian systems with a homogeneous potential}, Phys. D. 29, no. 1-2, 128-142, (1987).
\noindent [27] S.L. Ziglin, {\it Branching of solutions and non-existence of first integrals in Hamiltonian Mechanics I}, Func. Anal. Appl. 16 (1982).
\noindent (Please use this address for correspondence.)
\noindent{\small \bf
Section de Mathematiques,\\
Universit\'e de Gen\`eve\\
2-4, rue du Lievre,\\
CH-1211, Case postale 240, Suisse \\ Tel.: +41 22 309 14 03 \\
Fax: +41 22 309 14 09 \\ E--mail: Alexei.Tsygvintsev@math.unige.ch}
\noindent{\small \bf
Laboratoire Emile Picard, UMR 5580,\\
Universit\'e Paul Sabatier\\
118, route de Narbonne,\\
31062 Toulouse Cedex, France \\ Tel.: 05 61 55 83 37 \\
Fax: 05 61 55 82 00 \\ E--mail: tsygvin@picard.ups-tlse.fr}
\noindent {\bf Tsygvintsev Alexei}
\begin{center} {\bf Appendix A. The matrices $A_{\infty}$, $A$, $R$, $J$} \end{center}
\begin{center} $A_{\infty}=(A_{\infty, ij})$, $1\leq i,j \leq 4$. \\ \end{center}
\noindent ${A_{\infty , \,11}}={\displaystyle \frac {1}{4}} \, {\displaystyle \frac {12\,\alpha + 5\,\beta + 5\,\beta \,\alpha
^{2} + 26\,\alpha \,\beta + 12\,\alpha ^{2}}{\alpha \,{S_{1}}} } , \quad {A_{\infty , \,12}}={\displaystyle \frac {3}{4}} \, {\displaystyle \frac {\sqrt{3}\,(\alpha + 1)\,\beta \,(\alpha
- 1)}{\alpha \,{S_{1}}}},$\\ $ {A_{\infty , \,13}}= - 2\,{\displaystyle \frac {{S_{1}}}{{S_{ 2}}^{2}\,{S_{3}}^{3}\,\alpha ^{4}\,\beta }}, \quad {A_{\infty , \,14}}=0, $\\ $ {A_{\infty , \,21}}={\displaystyle \frac {3}{4}} \, {\displaystyle \frac {\sqrt{3}\,(\alpha + 1)\,\beta \,(\alpha
- 1)}{\alpha \,{S_{1}}}}, \quad {A_{\infty , \,22}}= - {\displaystyle \frac {1}{4}} \, {\displaystyle \frac { - 12\,\alpha + \beta + \beta \,\alpha ^{ 2} - 2\,\alpha \,\beta - 12\,\alpha ^{2}}{\alpha \,{S_{1}}}} , $\\ $ {A_{\infty , \,23}}=0, \quad {A_{\infty , \,24}}= - 2\,{\displaystyle \frac {{S_{1}}}{{S_{ 2}}^{2}\,{S_{3}}^{3}\,\alpha ^{4}\,\beta }} , $\\ $ {A_{\infty , \,31}}={\displaystyle \frac {1}{8}} \, {\displaystyle \frac {\alpha ^{2}\,\beta \,{S_{3}}^{3}\,(\alpha
+ 1)\,{S_{2}}^{3}\,(2\,\alpha + 13\,\beta + 13\,\beta \,\alpha
^{2} + 24\,\alpha \,\beta + 2\,\alpha ^{2})}{{S_{1}}^{3}}} , $\\ $ {A_{\infty , \,32}}={\displaystyle \frac {3}{8}} \, {\displaystyle \frac {\sqrt{3}\,(\beta + 2\,\alpha + 4\,\alpha \,\beta + \beta \,\alpha ^{2} + 2\,\alpha ^{2})\,(\alpha - 1)\, \beta \,\alpha ^{2}\,{S_{3}}^{3}\,{S_{2}}^{3}}{{S_{1}}^{3}}} , $\\ $ {A_{\infty , \,33}}= - {\displaystyle \frac {1}{4}} \, {\displaystyle \frac {\beta \,(5\,\alpha ^{2} + 14\,\alpha + 5) }{\alpha \,{S_{1}}}} \quad {A_{\infty , \,34}}= - {\displaystyle \frac {3}{4}} \, {\displaystyle \frac {\sqrt{3}\,(\alpha + 1)\,\beta \,(\alpha
- 1)}{\alpha \,{S_{1}}}} , $\\ $ {A_{\infty , \,41}}={\displaystyle \frac {3}{8}} \, {\displaystyle \frac {\sqrt{3}\,(\beta + 2\,\alpha + 4\,\alpha \,\beta + \beta \,\alpha ^{2} + 2\,\alpha ^{2})\,(\alpha - 1)\, \beta \,\alpha ^{2}\,{S_{3}}^{3}\,{S_{2}}^{3}}{{S_{1}}^{3}}} , $\\ $ {A_{\infty , \,42}}={\displaystyle \frac {1}{8}} \, {\displaystyle \frac {\alpha ^{2}\,\beta \,{S_{3}}^{3}\,(\alpha
+ 1)\,{S_{2}}^{3}\,( - 10\,\alpha + 7\,\beta + 7\,\beta \, \alpha ^{2} - 12\,\alpha \,\beta - 10\,\alpha ^{2})}{{S_{1}}^{3} }}, $\\ $ {A_{\infty , \,43}}= - {\displaystyle \frac {3}{4}} \, {\displaystyle \frac {\sqrt{3}\,(\alpha + 1)\,\beta \,(\alpha
- 1)}{\alpha \,{S_{1}}}} , \quad {A_{\infty , \,44}}={\displaystyle \frac {1}{4}} \, {\displaystyle \frac {\beta \,(10\,\alpha + \alpha ^{2} + 1)}{ \alpha \,{S_{1}}}}. $
\begin{center}
$A=(A_{ij})$, $1\leq i,j \leq 4$.\\ \end{center}
\noindent$ {A_{11}}= - {\displaystyle \frac {1}{4}} \,{\displaystyle \frac {(\alpha + 1)\,(\alpha \,\beta + 4\,\alpha + \beta )}{ \alpha \,{S_{1}}}} , \quad {A_{12}}={\displaystyle \frac {1}{4}} \,{\displaystyle \frac {\sqrt{3}\,(\alpha + 1)\,\beta \,(\alpha - 1)}{\alpha \,{ S_{1}}}} , $\\ $ {A_{13}}=2\,{\displaystyle \frac {{S_{1}}}{{S_{2}}^{2}\,{S_{3 }}^{3}\,\alpha ^{4}\,\beta }},\quad {A_{14}}=0,\quad {A_{21}}={\displaystyle \frac {1}{4}} \,{\displaystyle \frac {\sqrt{3}\,(\alpha + 1)\,\beta \,(\alpha - 1)}{\alpha \,{ S_{1}}}} , $\\ $ {A_{22}}= - {\displaystyle \frac {1}{4}} \,{\displaystyle \frac {10\,\alpha \,\beta + 3\,\beta \,\alpha ^{2} + 3\,\beta
+ 4\,\alpha ^{2} + 4\,\alpha }{\alpha \,{S_{1}}}} ,\quad {A_{24}}=2\,{\displaystyle \frac {{S_{1}}}{{S_{2}}^{2}\,{S_{3 }}^{3}\,\alpha ^{4}\,\beta }}$\\ $ {A_{31}}= - {\displaystyle \frac {1}{8}} \,{\displaystyle \frac {\alpha ^{2}\,\beta ^{2}\,{S_{3}}^{3}\,(\alpha + 1)\,( \alpha - 1)^{2}\,{S_{2}}^{3}}{{S_{1}}^{3}}} ,\quad {A_{34}}= - {\displaystyle \frac {1}{4}} \,{\displaystyle \frac {\sqrt{3}\,(\alpha + 1)\,\beta \,(\alpha - 1)}{\alpha \,{ S_{1}}}},$\\ $ A_{23}=0, \quad {A_{32}}={\displaystyle \frac {1}{8}} \,{\displaystyle \frac {\sqrt{3}\,(\alpha - 1)\,(\alpha + 1)^{2}\,\alpha ^{2}\, \beta ^{2}\,{S_{2}}^{3}\,{S_{3}}^{3}}{{S_{1}}^{3}}} ,\quad {A_{33}}={\displaystyle \frac {1}{4}} \,{\displaystyle \frac {(\alpha - 1)^{2}\,\beta }{\alpha \,{S_{1}}}},$\\ ${A_{41}}={\displaystyle \frac {1}{8}} \,{\displaystyle \frac {\sqrt{3}\,(\alpha - 1)\,(\alpha + 1)^{2}\,\alpha ^{2}\, \beta ^{2}\,{S_{2}}^{3}\,{S_{3}}^{3}}{{S_{1}}^{3}}} ,\quad {A_{42}}= - {\displaystyle \frac {3}{8}} \,{\displaystyle \frac {\alpha ^{2}\,\beta ^{2}\,{S_{3}}^{3}\,(\alpha + 1)^{3}\,{ S_{2}}^{3}}{{S_{1}}^{3}}} , $\\ $ {A_{43}}= - {\displaystyle \frac {1}{4}} \,{\displaystyle \frac {\sqrt{3}\,(\alpha + 1)\,\beta \,(\alpha - 1)}{\alpha \,{ S_{1}}}} ,\quad {A_{44}}={\displaystyle \frac {3}{4}} \,{\displaystyle \frac {(\alpha + 1)^{2}\,\beta }{\alpha \,{S_{1}}}}. $
\begin{center} $R=(R_{ij})$, $1\leq i,j \leq 4$.\\ \end{center}
\noindent $ {R_{11}}= - {\displaystyle \frac {1}{2}} \,{\displaystyle \frac {2\,\alpha + \beta + 6\,\alpha \,\beta + 2\,\alpha ^{2}
+ \beta \,\alpha ^{2}}{\alpha \,{S_{1}}}}, \quad {R_{12}}= - {\displaystyle \frac {1}{2}} \,{\displaystyle \frac {\sqrt{3}\,(\alpha + 1)\,\beta \,(\alpha - 1)}{\alpha \,{ S_{1}}}} , $\\ $ {R_{13}}=0, \quad {R_{14}}=0, $\\ $ {R_{21}}= - {\displaystyle \frac {1}{2}} \,{\displaystyle \frac {\sqrt{3}\,(\alpha + 1)\,\beta \,(\alpha - 1)}{\alpha \,{ S_{1}}}},\quad {R_{22}}={\displaystyle \frac {1}{2}} \,{\displaystyle \frac {(\alpha + 1)\,( - 2\,\alpha + \alpha \,\beta + \beta ) }{\alpha \,{S_{1}}}} , $\\ $ {R_{23}}=0, \quad {R_{24}}=0, $\\ $ {R_{31}}= - {\displaystyle \frac {1}{8}} \,{\displaystyle \frac {\beta \,\alpha ^{2}\,{S_{2}}^{3}\,{S_{3}}^{3}\,(\alpha + 1)\,(\alpha ^{2} + 6\,\beta \,\alpha ^{2} + \alpha + 13\,\alpha \,\beta + 6\,\beta )}{{S_{1}}^{3}}} , $\\ $ {R_{32}}= - {\displaystyle \frac {1}{8}} \,{\displaystyle \frac {\sqrt{3}\,(3\,\alpha ^{2} + 2\,\beta \,\alpha ^{2} + 7\, \alpha \,\beta + 3\,\alpha + 2\,\beta )\,(\alpha - 1)\,\beta \,\alpha ^{2}\,{S_{3}}^{3}\,{S_{2}}^{3}}{{S_{1}}^{3}}} , $\\ $ {R_{33}}={\displaystyle \frac {1}{2}} \,{\displaystyle \frac {(\alpha + \sqrt{3} + 2)\,(\alpha + 2 - \sqrt{3})\,\beta }{\alpha \,{S_{1}}}} ,\quad {R_{34}}={\displaystyle \frac {1}{2}} \,{\displaystyle \frac {\sqrt{3}\,(\alpha + 1)\,\beta \,(\alpha - 1)}{\alpha \,{ S_{1}}}}, $\\ $ {R_{41}}= - {\displaystyle \frac {1}{8}} \,{\displaystyle \frac {\sqrt{3}\,(3\,\alpha ^{2} + 2\,\beta \,\alpha ^{2} + 7\, \alpha \,\beta + 3\,\alpha + 2\,\beta )\,(\alpha - 1)\,\beta \,\alpha ^{2}\,{S_{3}}^{3}\,{S_{2}}^{3}}{{S_{1}}^{3}}}, $\\ $ {R_{42}}= - {\displaystyle \frac {1}{8}} \,{\displaystyle \frac {\beta \,\alpha ^{2}\,{S_{2}}^{3}\,{S_{3}}^{3}\,(\alpha + 1)\,( - 5\,\alpha ^{2} + 2\,\beta \,\alpha ^{2} - 5\,\alpha - 9 \,\alpha \,\beta + 2\,\beta )}{{S_{1}}^{3}}} , $\\ $ {R_{43}}={\displaystyle \frac {1}{2}} \,{\displaystyle \frac {\sqrt{3}\,(\alpha + 1)\,\beta \,(\alpha - 1)}{\alpha \,{ S_{1}}}}, \quad {R_{44}}= - {\displaystyle \frac {1}{2}} \,{\displaystyle \frac {(\alpha + 
\sqrt{3} + 2)\,(\alpha + 2 - \sqrt{3})\,\beta }{\alpha \,{S_{1}}}}. $\\
\begin{center} $J=(J_{ij})$, $1\leq i,j \leq 4$.\\ \end{center}
\noindent $ {J_{11}}= - {\displaystyle \frac {1}{2}} \,{\displaystyle \frac {\sqrt{3}\,(\alpha + 1)\,\beta \,(\alpha - 1)}{\alpha \,{ S_{1}}}} , \quad {J_{12}}={\displaystyle \frac {1}{2}} \,{\displaystyle \frac {(\alpha + 1)\,( - 2\,\alpha + \alpha \,\beta + \beta ) }{\alpha \,{S_{1}}}}, $\\ $ {J_{13}}=0 ,\quad {J_{14}}=0, $\\ $ {J_{21}}={\displaystyle \frac {1}{2}} \,{\displaystyle \frac {2\,\alpha + \beta + 6\,\alpha \,\beta + 2\,\alpha ^{2}
+ \beta \,\alpha ^{2}}{\alpha \,{S_{1}}}}, \quad {J_{22}}={\displaystyle \frac {1}{2}} \,{\displaystyle \frac {\sqrt{3}\,(\alpha + 1)\,\beta \,(\alpha - 1)}{\alpha \,{ S_{1}}}}, $\\ $ {J_{23}}=0 , \quad {J_{24}}=0, \quad {J_{31}}= - {\displaystyle \frac {1}{4}} \,{\displaystyle \frac {\sqrt{3}\,(\alpha - 1)\,(\alpha + 1)^{2}\,\alpha ^{2}\, \beta ^{2}\,{S_{2}}^{3}\,{S_{3}}^{3}}{{S_{1}}^{3}}} , $\\ $ {J_{32}}={\displaystyle \frac {1}{4}} \,{\displaystyle \frac {\beta ^{2}\,\alpha ^{2}\,{S_{2}}^{3}\,{S_{3}}^{3}\,(\alpha
+ 1)\,(\alpha ^{2} + 4\,\alpha + 1)}{{S_{1}}^{3}}} , $\\ $ {J_{33}}={\displaystyle \frac {1}{2}} \,{\displaystyle \frac {\sqrt{3}\,(\alpha + 1)\,\beta \,(\alpha - 1)}{\alpha \,{ S_{1}}}} , \quad {J_{34}}= - {\displaystyle \frac {1}{2}} \,{\displaystyle \frac {2\,\alpha + \beta + 6\,\alpha \,\beta + 2\,\alpha ^{2}
+ \beta \,\alpha ^{2}}{\alpha \,{S_{1}}}}, $\\ $ {J_{41}}={\displaystyle \frac {1}{4}} \,{\displaystyle \frac {\beta ^{2}\,\alpha ^{2}\,{S_{2}}^{3}\,{S_{3}}^{3}\,(\alpha
+ 1)\,(\alpha ^{2} + 4\,\alpha + 1)}{{S_{1}}^{3}}} , $\\ $ {J_{42}}={\displaystyle \frac {1}{4}} \,{\displaystyle \frac {\sqrt{3}\,(\alpha - 1)\,(\alpha + 1)^{2}\,\alpha ^{2}\, \beta ^{2}\,{S_{2}}^{3}\,{S_{3}}^{3}}{{S_{1}}^{3}}} , $\\ $ {J_{43}}= - {\displaystyle \frac {1}{2}} \,{\displaystyle \frac {(\alpha + 1)\,( - 2\,\alpha + \alpha \,\beta + \beta ) }{\alpha \,{S_{1}}}}, \quad {J_{44}}= - {\displaystyle \frac {1}{2}} \,{\displaystyle \frac {\sqrt{3}\,(\alpha + 1)\,\beta \,(\alpha - 1)}{\alpha \,{ S_{1}}}}. $
\end{document}
\begin{document}
\renewcommand{\theequation}{\arabic{section}.\arabic{equation}}
\begin{center} EXISTENCE FOR WEAKLY COERCIVE NONLINEAR DIFFUSION EQUATIONS VIA A VARIATIONAL PRINCIPLE
GABRIELA MARINOSCHI\footnote{Institute of Mathematical Statistics and Applied Mathematics of the Romanian Academy, Calea 13 Septembrie 13, Bucharest, Romania. gmarino@acad.ro, gabimarinoschi@yahoo.com}
\end{center}
\noindent {\small Abstract. We are concerned with the study of the well-posedness of a nonlinear diffusion equation with a monotonically increasing multivalued time-dependent nonlinearity derived from a convex continuous potential with superlinear growth at infinity. The results in this paper state that the solution of the nonlinear equation can be retrieved as the null minimizer of an appropriate minimization problem for a convex functional involving the potential and its conjugate. This approach, inspired by the Brezis-Ekeland variational principle, provides new existence results under minimal growth and coercivity conditions.}
\noindent Mathematics Subject Classification 2010. 35K55, 47J30, 58EXX, 76SXX
\noindent \textit{Keywords and phrases.} Variational methods, Brezis-Ekeland principle, convex optimization problems, nonlinear diffusion equations, porous media equations
\section{ Introduction}
\setcounter{equation}{0}
We are concerned with the study of the well-posedness of a nonlinear diffusion equation with a monotonically increasing discontinuous nonlinearity derived from a convex continuous potential, by using a dual formulation of this equation as the minimization of an appropriate convex functional. The idea of identifying the solutions of evolution equations as the minima of certain functionals is due to Brezis and Ekeland and originates in their papers published in 1976 (see \cite{brezis-ekeland-76} and \cite{brezis-ekeland-76-2}). Over the past decades this approach has enjoyed much attention, as witnessed by an extensive literature, including several more recently published monographs and papers (see e.g., \cite{auchmuty-88}, \cite{auchmuty-93}, \cite{ghoussoub}, \cite{ghoussoub-tzou-04}, \cite{ghoussoub-08}, \cite{visintin-08}, \cite{stefanelli-08}, \cite{stefanelli-09}, \cite{barbu-2011-jmma}, \cite{barbu-2012-jota}, \cite{gm-jota-var-1}). In \cite{gm-jota-var-1} two cases were considered, the first for a continuous potential with polynomial growth and the second for a singular potential. The latter provided the existence of the solution to a variational inequality which models a free boundary flow.
The challenging part in this duality principle is the proof of the well-posedness of the evolution equation as a consequence of the existence of a null minimizer in the associated minimization problem (that is, a solution which minimizes the functional to zero). A general recipe for proving this implication does not exist; rather, it depends on a good choice of the functional and on the particularities of the potential of the nonlinearity arising in the diffusion term. This way of approaching the well-posedness of nonlinear diffusion equations, by a dual formulation as a minimization problem, is especially useful when a direct approach via semigroup theory (see e.g., \cite{vb-springer-2010}, \cite{Crandall-Pazy-71}) or other classical variational results (see e.g., \cite{lions}) cannot be followed, due either to the low regularity of the data or to the weak coercivity of the potential.
In this work the nonlinearity in the diffusion term is more general: it has a time and space dependent potential, assumed to be only weakly coercive and to have no particular regularity with respect to time and space. The paper is organized in two parts. We first investigate the case in which the potential and its conjugate depend on both time and space. We prove that the minimization problem has at least one solution, which is unique if the functional is strictly convex. This minimizer is a natural candidate for the solution to the nonlinear equation, which is why it can be viewed as a generalized or variational solution. If the admissible set is restricted by imposing an $L^{\infty }$-constraint on the state, then the generalized solution which minimizes the functional to zero turns out to be precisely the weak solution to the nonlinear equation.
The second part concerns the case in which the potential does not depend on space. The main result establishes that the null minimizer in the minimization problem is the unique solution to the nonlinear equation, provided that the potential exhibits a symmetry at large values of the argument.
We would like to emphasize the benefit of such a duality approach: it allows an elegant proof of existence for a time dependent diffusion equation, under general assumptions, by replacing it with the problem of minimizing a convex functional subject to a linear state equation. We also stress that the existence results obtained in this way are not covered by, and do not follow from, the general existence theory of porous media equations, nor from that of time dependent nonlinear infinite dimensional Cauchy problems.
\section{Problem presentation}
We deal with the problem \begin{eqnarray} \frac{\partial y}{\partial t}-\Delta \beta (t,x,y) &\ni &f\mbox{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ in }Q:=(0,T)\times \Omega , \notag \\ -\frac{\partial \beta (t,x,y)}{\partial \nu } &=&\alpha \beta (t,x,y)\mbox{ \ \ on }\Sigma :=(0,T)\times \Gamma , \label{si-1} \\ y(0,x) &=&y_{0}\mbox{ \ \ \ \ \ \ \ \ \ \ \ \ \ in }\Omega , \notag \end{eqnarray} where $\Omega $ is an open bounded subset of $\mathbb{R}^{N},$ $N\leq 3,$ with the boundary $\Gamma $ sufficiently smooth, $T$ is finite and $\beta $ has a potential $j$. The notation $\frac{\partial }{\partial \nu }$ represents the normal derivative and $\alpha $ is positive.
In this paper we assume that $j:Q\times \mathbb{R}\rightarrow (-\infty ,\infty ]$ has the following properties:
\noindent $(h_{1})$ $\ (t,x)\rightarrow j(t,x,r)$ is measurable on $Q,$ for all $r\in \mathbb{R},$
\noindent $(h_{2})$ $\ j(t,x,\cdot )$ is a proper, convex, continuous function, a.e. $(t,x)\in Q,$ \begin{equation} \partial j(t,x,r)=\beta (t,x,r)\mbox{ for all }r\in \mathbb{R},\mbox{ a.e. } (t,x)\in Q, \label{si-beta-j} \end{equation} \begin{eqnarray} \frac{j(t,x,r)}{\left\vert r\right\vert } &\rightarrow &\infty ,\mbox{ \ as\ }\left\vert r\right\vert \rightarrow \infty ,\mbox{ uniformly for }(t,x)\in Q, \label{si-9-2} \\ \frac{j^{\ast }(t,x,\omega )}{\left\vert \omega \right\vert } &\rightarrow &\infty ,\mbox{ \ as\ }\left\vert \omega \right\vert \rightarrow \infty , \mbox{ uniformly for }(t,x)\in Q, \label{si-9-2-00} \end{eqnarray} \begin{equation} j(\cdot ,\cdot ,0)\in L^{\infty }(Q),\mbox{ }j^{\ast }(\cdot ,\cdot ,0)\in L^{\infty }(Q). \label{si-9-2-0} \end{equation} We define the conjugate $j^{\ast }:Q\times \mathbb{R}\rightarrow (-\infty ,\infty ]$ by \begin{equation} j^{\ast }(t,x,\omega )=\sup_{r\in \mathbb{R}}(\omega r-j(t,x,r)),\mbox{ a.e. }(t,x)\in Q. \label{si-4-0} \end{equation} Then the following two (Legendre-Fenchel) relations hold (see \cite{vb-springer-2010}, p. 6; see also \cite{fenchel-53}): \begin{equation} j(t,x,r)+j^{\ast }(t,x,\omega )\geq r\omega \mbox{ for all }r,\omega \in \mathbb{R},\mbox{ a.e. }(t,x)\in Q, \label{si-4-1} \end{equation} \begin{equation} j(t,x,r)+j^{\ast }(t,x,\omega )=r\omega \mbox{ iff }\omega \in \partial j(t,x,r),\mbox{ for all }r,\omega \in \mathbb{R},\mbox{ a.e. }(t,x)\in Q. \label{si-4-2} \end{equation}
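As a concrete illustration of (\ref{si-4-0})-(\ref{si-4-2}), the following short numerical sketch (illustrative only; the model potential $j(r)=r^{4}/4$ is a hypothetical choice, not one used in this paper) checks the Fenchel inequality on a grid:

```python
import numpy as np

# Illustrative sketch: for the model potential j(r) = r^4/4 the
# Legendre-Fenchel conjugate (si-4-0) is j*(w) = (3/4)|w|^{4/3}, and the
# Fenchel inequality (si-4-1) j(r) + j*(w) >= r*w holds, with equality
# exactly when w = j'(r) = r^3, as in (si-4-2).

def j(r):
    return 0.25 * r**4

def j_star(w):
    # sup_r (w r - r^4/4) is attained at r = sign(w)|w|^{1/3}
    return 0.75 * np.abs(w)**(4.0 / 3.0)

r = np.linspace(-3.0, 3.0, 241)
w = np.linspace(-3.0, 3.0, 241)
R, W = np.meshgrid(r, w)
gap = j(R) + j_star(W) - R * W   # nonnegative by (si-4-1)
```

The minimum of `gap` over the grid is essentially zero, attained along the graph $\omega =\partial j(r)=r^{3}$.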
By (\ref{si-beta-j}) it follows that $\beta $ is a maximal monotone graph (possibly multivalued) on $\mathbb{R}$, a.e. $(t,x)\in Q.$ Relations (\ref{si-9-2})-(\ref{si-9-2-00}) are equivalent to the properties that $(\beta )^{-1}(t,x,\cdot )$ and $\beta (t,x,\cdot ),$ respectively, are bounded on bounded subsets, uniformly a.e. $(t,x)\in Q$. This means that for any $M>0$ there exist $Y_{M}$ and $W_{M},$ independent of $t$ and $x,$ such that \begin{equation} \sup \left\{ \left\vert r\right\vert ;\mbox{ }r\in \beta ^{-1}(t,x,\omega ), \mbox{ }\left\vert \omega \right\vert \leq M\right\} \leq W_{M}, \label{si-9-6} \end{equation} \begin{equation} \sup \left\{ \left\vert \omega \right\vert ;\mbox{ }\omega \in \beta (t,x,r), \mbox{ }\left\vert r\right\vert \leq M\right\} \leq Y_{M}. \label{si-9-7} \end{equation} In fact, when $j$ does not depend on $t$ and $x$, relations (\ref{si-9-2})-(\ref{si-9-2-00}) express that \begin{equation*} D(\partial j)=R(\partial j)=\mathbb{R},\mbox{ }D(\partial j^{\ast })=R(\partial j^{\ast })=\mathbb{R} \end{equation*} (see \cite{vb-springer-2010}, p. 9). We also recall that $\partial j^{\ast }(t,x,\cdot )=(\partial j(t,x,\cdot ))^{-1}$ a.e. $(t,x)\in Q.$
We call \textit{weakly coercive} a nonlinear diffusion term whose potential $j$ has the properties (\ref{si-9-2})-(\ref{si-9-2-00}), and, by extension, the corresponding equation (\ref{si-1}).
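A simple illustration of weak coercivity beyond polynomial growth (given here only for orientation, and not used in the proofs) is \begin{equation*} j(r)=\cosh r-1,\qquad j^{\ast }(\omega )=\omega \,\mathrm{arcsinh}\,\omega -\sqrt{1+\omega ^{2}}+1, \end{equation*} for which $j(r)/\left\vert r\right\vert \rightarrow \infty $ and $j^{\ast }(\omega )/\left\vert \omega \right\vert \rightarrow \infty $ (since $\cosh r$ grows exponentially and $\mathrm{arcsinh}\,\omega \rightarrow \infty $), while $j(0)=j^{\ast }(0)=0$, so that (\ref{si-9-2})-(\ref{si-9-2-0}) hold although no polynomial bound on $j$ is available.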
We also recall that a proper, convex, l.s.c. function is bounded below by an affine function; hence \begin{equation} j(t,x,r)\geq k_{1}(t,x)r+k_{2}(t,x),\mbox{ }j^{\ast }(t,x,\omega )\geq k_{3}(t,x)\omega +k_{4}(t,x) \label{si-9-7-01} \end{equation} for any $r,\omega \in \mathbb{R},$ and we assume that \begin{equation} k_{i}\in L^{\infty }(Q),\mbox{ }i=1,\dots ,4. \label{si-9-7-02} \end{equation} In fact, (\ref{si-9-7-01}) follows if, besides (\ref{si-9-2-0}), we assume that there exist $\xi ,\eta \in L^{\infty }(Q)$ such that $\xi \in \partial j(t,x,0),$ $\eta \in \partial j^{\ast }(t,x,0)$ a.e. $(t,x)\in Q.$
In this work we show that problem (\ref{si-1}) reduces to a certain minimization problem $(P)$ for a convex lower semicontinuous functional involving the functions $j$ and $j^{\ast }$. In Section 3, the existence of at least one solution to $(P)$ is proved in Theorem 3.2; this is precisely the generalized solution associated to (\ref{si-1}). The uniqueness is deduced directly from $(P)$ under the assumption of strict convexity of $j.$ Moreover, when a state constraint $y\in \lbrack y_{m},y_{M}]$ is included in the admissible set, we show in Theorem 3.3 that the null minimizer is the unique weak solution to (\ref{si-1}).
In the case when $j$ depends on $t$ but not on $x$, and has the same behavior for large $\left\vert r\right\vert $, i.e., it satisfies the relation \begin{equation} j(t,-r)\leq \gamma _{1}j(t,r)+\gamma _{2},\mbox{ for any }r\in \mathbb{R}, \mbox{ a.e. }t\in (0,T), \label{si-9-5} \end{equation} with $\gamma _{1}$ and $\gamma _{2}$ constants, we prove in Theorem 4.3 in Section 4 that the solution to the minimization problem is the unique weak solution to (\ref{si-1}), without assuming the previous additional state constraint. This relies on Lemma 4.1, which plays an essential role in the proof of this result. We mention that stochastic porous media equations of the form (\ref{si-1}) were studied under similar assumptions in \cite{barbu-daprato}, by a different method.
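Regarding (\ref{si-9-5}), we note that any even potential satisfies it with $\gamma _{1}=1,$ $\gamma _{2}=0.$ A genuinely asymmetric example of exponential type (an illustration only) is $j(t,r)=\cosh r+r,$ for which \begin{equation*} j(t,-r)=\cosh r-r\leq 2(\cosh r+r)+\gamma _{2}\mbox{ \ for all }r\in \mathbb{R}, \end{equation*} with $\gamma _{2}=3\,\mathrm{arcsinh}\,3-\sqrt{10},$ since $\cosh r+3r$ attains its minimum $\sqrt{10}-3\,\mathrm{arcsinh}\,3$ at $r=-\mathrm{arcsinh}\,3;$ thus (\ref{si-9-5}) holds with $\gamma _{1}=2.$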
Theorem 4.3 is the main novelty of this work, since it provides existence in (\ref{si-1}) for a time dependent weakly coercive $j$. Compared with the treatment of the case of a polynomially bounded $j$ (see \cite{gm-jota-var-1}), the present one requires a sharper analysis in the $L^{1}$-space.
\subsection{Functional setting}
First, we introduce several linear operators related to problem (\ref{si-1} ). Actually they represent the operator $-\Delta $ defined on various spaces. The main operators which we use, $A_{0,\infty }$ and $A$ are defined as follows: \begin{eqnarray} A_{0,\infty }\psi &=&-\Delta \psi ,\mbox{ }A_{0,\infty }:D(A_{0,\infty })=X\subset L^{\infty }(\Omega )\rightarrow L^{\infty }(\Omega ), \label{200} \\ X &=&\left\{ \psi \in W^{2,\infty }(\Omega ),\mbox{ }\frac{\partial \psi }{ \partial \nu }+\alpha \psi =0\mbox{ on }\Gamma \right\} \notag \end{eqnarray} and \begin{eqnarray} A &:&D(A)=L^{1}(\Omega )\subset X^{\prime }\rightarrow X^{\prime }, \notag \\ \left\langle A\theta ,\psi \right\rangle _{X^{\prime },X} &=&\int_{\Omega }\theta A_{0,\infty }\psi dx,\mbox{ }\forall \theta \in L^{1}(\Omega ),\mbox{ }\forall \psi \in X, \label{A} \end{eqnarray} where by $X^{\prime }$ we denote the dual of $X,$ with the pivot space $ L^{2}(\Omega )$ ($X\subset L^{2}(\Omega )\subset X^{\prime }).$
We introduce the operator \begin{eqnarray} A_{1}\psi &=&-\Delta \psi ,\mbox{ }A_{1}:D(A_{1})\subset L^{1}(\Omega )\rightarrow L^{1}(\Omega ), \label{A0-L1} \\ D(A_{1}) &=&\left\{ \psi \in W^{1,1}(\Omega );\mbox{ }\Delta \psi \in L^{1}(\Omega ),\mbox{ }\frac{\partial \psi }{\partial \nu }+\alpha \psi =0 \mbox{ on }\Gamma \right\} , \notag \end{eqnarray} which is $m$-accretive on $L^{1}(\Omega )$ (see \cite{brezis-strauss}). For a later use we recall that \begin{eqnarray} A_{2}\psi &=&-\Delta \psi ,\mbox{ }A_{2}:X_{2}=D(A_{2})\subset L^{2}(\Omega )\rightarrow L^{2}(\Omega ), \label{A0} \\ X_{2} &=&\left\{ \psi \in W^{2,2}(\Omega );\mbox{ }\frac{\partial \psi }{ \partial \nu }+\alpha \psi =0\mbox{ on }\Gamma \right\} , \notag \end{eqnarray} is $m$-accretive on $L^{2}(\Omega )$ and $\widetilde{A_{2}},$ its extension to $L^{2}(\Omega ),$ defined by \begin{eqnarray} \widetilde{A_{2}} &:&L^{2}(\Omega )\subset X_{2}^{\prime }\rightarrow X_{2}^{\prime }, \notag \\ \left\langle \widetilde{A_{2}}\theta ,\psi \right\rangle _{X_{2}^{\prime },X_{2}} &=&\int_{\Omega }\theta A_{2}\psi dx,\mbox{ }\forall \theta \in L^{2}(\Omega ),\mbox{ }\forall \psi \in X_{2}, \label{A2} \end{eqnarray} is $m$-accretive on $X_{2}^{\prime }.$ Here, $X_{2}^{\prime }$ is the dual of $X_{2}$ with $L^{2}(\Omega )$ as pivot space (see these last definitions in \cite{gm-jota-var-1}).
Finally, let us consider the Hilbert space $V=H^{1}(\Omega )$ endowed with the norm \begin{equation*} \left\Vert \phi \right\Vert _{V}=\left( \left\Vert \phi \right\Vert ^{2}+\alpha \left\Vert \phi \right\Vert _{L^{2}(\Gamma )}^{2}\right) ^{1/2}, \end{equation*} which is equivalent (for $\alpha >0$) to the standard Hilbertian norm on $H^{1}(\Omega )$ (see \cite{necas-67}, p. 20). The dual of $V$ is denoted $V^{\prime }$ and the scalar product on $V^{\prime }$ is defined as \begin{equation} (\theta ,\overline{\theta })_{V^{\prime }}=\left\langle \theta ,A_{V}^{-1}\overline{\theta }\right\rangle _{V^{\prime },V} \label{si-0-0} \end{equation} where $A_{V}:V\rightarrow V^{\prime }$ is given by \begin{equation} \left\langle A_{V}\psi ,\phi \right\rangle _{V^{\prime },V}=\int_{\Omega }\nabla \psi \cdot \nabla \phi dx+\int_{\Gamma }\alpha \psi \phi d\sigma ,\mbox{ for any }\phi \in V. \label{si-0-1} \end{equation} (In fact, $A_{V}$ is the extension of $A_{2}$ defined by (\ref{A0}) to $V^{\prime }$.)
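To make the operator $A_{V}$ concrete, here is a small finite-element sketch in one dimension (purely illustrative; the paper works in $\Omega \subset \mathbb{R}^{N}$, and the interval, mesh, value of $\alpha $, and element choice below are assumptions of the sketch, not part of the analysis):

```python
import numpy as np

# Hypothetical 1-D discretization on Omega = (0,1) with P1 elements:
# assemble the matrix of the bilinear form (si-0-1),
#   <A_V psi, phi> = int_Omega psi' phi' dx + alpha * (psi phi) on the boundary,
# i.e. the Laplacian with Robin boundary conditions.

n, alpha = 10, 1.0
h = 1.0 / n
K = np.zeros((n + 1, n + 1))
elem = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # P1 element stiffness
for e in range(n):
    K[e:e + 2, e:e + 2] += elem
K[0, 0] += alpha      # Robin contribution at x = 0
K[n, n] += alpha      # Robin contribution at x = 1

eigenvalues = np.linalg.eigvalsh(K)
```

For $\alpha >0$ the assembled matrix is symmetric positive definite, mirroring the fact that $\left\Vert \cdot \right\Vert _{V}$ is a genuine (equivalent) norm; for $\alpha =0$ the constant vector would lie in its kernel.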
For the sake of simplicity, we shall sometimes omit the function arguments in the integrands, writing $\int_{Q}g\,dxdt$ instead of $\int_{Q}g(t,x)\,dxdt,$ where $g:Q\rightarrow \mathbb{R}.$ In appropriate places we write $g(t),$ to indicate that $g:(0,T)\rightarrow Y,$ with $Y$ a Banach space.
\subsection{Statement of the problem}
In terms of the previously introduced operators we can write the abstract Cauchy problem \begin{eqnarray} \frac{dy}{dt}(t)+A\beta (t,x,y) &\ni &f(t),\mbox{ a.e. }t\in (0,T), \label{si-7} \\ y(0) &=&y_{0}. \notag \end{eqnarray}
\noindent \textbf{Definition 1.1.} Let $f\in L^{\infty }(Q)$ and $y_{0}\in V^{\prime }.$ We call a \textit{weak solution} to (\ref{si-1}) a pair $(y,w),$ \begin{equation*} y\in L^{1}(Q)\cap W^{1,1}([0,T];X^{\prime }),\mbox{ }w\in L^{1}(Q),\mbox{ }w(t,x)\in \beta (t,x,y(t,x))\ \mbox{\ a.e. }(t,x)\in Q, \end{equation*} which satisfies the equation \begin{equation} \int_{0}^{T}\left\langle \frac{dy}{dt}(t),\psi (t)\right\rangle _{X^{\prime },X}dt+\int_{Q}w(t,x)(A_{0,\infty }\psi (t))(x)dxdt=\int_{0}^{T}\left\langle f(t),\psi (t)\right\rangle _{X^{\prime },X}dt \label{si-8-1-0} \end{equation} for any $\psi \in L^{\infty }(0,T;X),$ and the initial condition $y(0)=y_{0}.$
In the literature, such a solution is sometimes called a \textit{very weak} or \textit{distributional} solution.
We consider the minimization problem \begin{equation} \underset{_{(y,w)\in U}}{\mbox{Minimize}}\mbox{ }J(y,w), \tag{$P$} \end{equation} where \begin{equation} J(y,w)=\left\{ \begin{array}{l} \int_{Q}\left( j(t,x,y(t,x))+j^{\ast }(t,x,w(t,x))\right) dxdt+\frac{1}{2} \left\Vert y(T)\right\Vert _{V^{\prime }}^{2} \\ \\ -\frac{1}{2}\left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}-\int_{Q}y(t,x)(A_{0,\infty }^{-1}f(t))(x)dxdt\mbox{\ \ if }(y,w)\in U, \mbox{ } \\ \\ +\infty ,\mbox{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ otherwise,} \end{array} \right. \label{J} \end{equation} and \begin{multline*} U=\{(y,w);\mbox{ }y\in L^{1}(Q)\cap W^{1,1}([0,T];X^{\prime }),\mbox{ } y(T)\in V^{\prime },\mbox{ }w\in L^{1}(Q), \\ j(\cdot ,\cdot ,y(\cdot ,\cdot ))\in L^{1}(Q),\mbox{ \ }j^{\ast }(\cdot ,\cdot ,w(\cdot ,\cdot ))\in L^{1}(Q), \\ (y,w)\mbox{ verifies (\ref{si-8-P}) below}\} \end{multline*} \begin{eqnarray} \frac{dy}{dt}(t)+Aw(t) &=&f(t)\mbox{ a.e. }t\in (0,T), \label{si-8-P} \\ y(0) &=&y_{0}. \notag \end{eqnarray} Here, $\frac{dy}{dt}$ is taken in the sense of $X^{\prime }$-valued distributions on $(0,T).$
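The structure of $J$ is clarified by the following formal computation, given here as a heuristic only (it assumes smooth data and disregards the duality pairings made precise above). Along a solution of (\ref{si-8-P}), \begin{equation*} \frac{1}{2}\frac{d}{dt}\left\Vert y(t)\right\Vert _{V^{\prime }}^{2}=\left\langle f(t)-Aw(t),A_{V}^{-1}y(t)\right\rangle =\int_{\Omega }(A_{0,\infty }^{-1}f)\,y\,dx-\int_{\Omega }w\,y\,dx, \end{equation*} so that, after integration over $(0,T),$ \begin{equation*} J(y,w)=\int_{Q}\left( j(t,x,y)+j^{\ast }(t,x,w)-yw\right) dxdt\geq 0 \end{equation*} by (\ref{si-4-1}), with equality a.e. precisely when $w\in \beta (t,x,y),$ by (\ref{si-4-2}). A null minimizer of $(P)$ is therefore, formally, a weak solution to (\ref{si-1}); the rigorous statements are given in Theorems 3.3 and 4.3.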
We see that, by the existence theory of elliptic boundary value problems (see \cite{ADN}), if $f(t)\in L^{\infty }(\Omega )$ then $A_{0,\infty }^{-1}f(t)\in \bigcap\limits_{p\geq 2}W^{2,p}(\Omega )\subset L^{\infty }(\Omega ),$ a.e. $t\in (0,T),$ so the last term in the expression of $J$ makes sense.
\section{Time and space dependent potential}
\setcounter{equation}{0}
In this section we consider that $j$ and $j^{\ast }$ depend on $t$ and $x$ as well, and assume $(h_{1})-(h_{2}),$ (\ref{si-beta-j})-(\ref{si-9-2-0}), ( \ref{si-9-7-01})-(\ref{si-9-7-02}). We begin with an intermediate result.
\noindent \textbf{Lemma 3.1. }\textit{The function }$J$\textit{\ is proper, convex and lower semicontinuous on }$L^{1}(Q)\times L^{1}(Q).$
\noindent \textbf{Proof.} It is obvious that\textbf{\ }$J$ is proper (because $U\neq \varnothing )$ and convex. Let $\lambda >0.$ For the lower semicontinuity we prove that the level set \begin{equation*} E_{\lambda }=\{(y,w)\in L^{1}(Q)\times L^{1}(Q);\mbox{ }J(y,w)\leq \lambda \} \end{equation*} is closed in $L^{1}(Q)\times L^{1}(Q).$ Let $(y_{n},w_{n})\in E_{\lambda }$ such that \begin{equation} y_{n}\rightarrow y\mbox{ strongly in }L^{1}(Q),\mbox{ \ }w_{n}\rightarrow w \mbox{ strongly in }L^{1}(Q),\mbox{ as }n\rightarrow \infty . \label{si-198-1} \end{equation} It follows that $(y_{n},w_{n})\in U$ is the solution to \begin{eqnarray} \frac{dy_{n}}{dt}(t)+Aw_{n}(t) &=&f(t),\mbox{ a.e. }t\in (0,T), \label{si-closed-0} \\ y_{n}(0) &=&y_{0} \notag \end{eqnarray} and \begin{eqnarray} &&J(y_{n},w_{n})=\int_{Q}(j(t,x,y_{n}(t,x))+j^{\ast }(t,x,w_{n}(t,x)))dxdt \label{si-198} \\ &&+\frac{1}{2}\left\{ \left\Vert y_{n}(T)\right\Vert _{V^{\prime }}^{2}-\left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}\right\} -\int_{Q}y_{n}A_{0,\infty }^{-1}fdxdt\leq \lambda .\mbox{ } \notag \end{eqnarray} The convergences (\ref{si-198-1}) imply that \begin{equation*} \int_{0}^{T}\left\langle Aw_{n}(t),\psi (t)\right\rangle _{X^{\prime },X}dt=\int_{Q}w_{n}A_{0,\infty }\psi dxdt\rightarrow \int_{Q}wA_{0,\infty }\psi dxdt,\mbox{ as }n\rightarrow \infty , \end{equation*} for any $\psi \in L^{\infty }(0,T;X)$ and \begin{equation*} \int_{Q}y_{n}A_{0,\infty }^{-1}fdxdt\rightarrow \int_{Q}yA_{0,\infty }^{-1}fdxdt\mbox{, as }n\rightarrow \infty , \end{equation*} Therefore, by (\ref{si-closed-0}), we can write \begin{equation} \int_{0}^{T}\left\langle \frac{dy_{n}}{dt}(t),\psi (t)\right\rangle _{X^{\prime },X}dt=-\int_{Q}w_{n}A_{0,\infty }\psi dxdt+\int_{Q}f\psi dxdt, \label{si-198-2} \end{equation} for any $\psi \in L^{\infty }(0,T;X),$ and we deduce that \begin{equation*} \frac{dy_{n}}{dt}\rightarrow \frac{dy}{dt}\mbox{ weakly in } L^{1}(0,T;X^{\prime })\mbox{ as }n\rightarrow \infty , 
\end{equation*} meaning that $y$ is absolutely continuous on $[0,T]$ with values in $X^{\prime }.$
Again by (\ref{si-closed-0}) we have \begin{equation} y_{n}(t)=y_{0}+\int_{0}^{t}f(s)ds-\int_{0}^{t}Aw_{n}(s)ds\mbox{, for }t\in \lbrack 0,T]. \label{si-200} \end{equation} From here we get \begin{equation} \int_{\Omega }y_{n}(t)\phi dx=\left\langle y_{0},\phi \right\rangle _{V^{\prime },V}+\int_{0}^{t}\int_{\Omega }f(s)\phi dxds-\int_{0}^{t}\left\langle Aw_{n}(s),\phi \right\rangle _{X^{\prime },X}ds \label{si-200-0} \end{equation} for any $\phi \in X$ and $t\in \lbrack 0,T].$ Passing to the limit we obtain \begin{equation*} l(t)=\lim_{n\rightarrow \infty }\int_{\Omega }y_{n}(t)\phi dx=\left\langle y_{0},\phi \right\rangle _{V^{\prime },V}+\int_{0}^{t}\int_{\Omega }f(s)\phi dxds-\int_{0}^{t}\left\langle Aw(s),\phi \right\rangle _{X^{\prime },X}ds. \end{equation*} We multiply this relation by $\varphi _{0}\in L^{\infty }(0,T)$ and integrate over $(0,T),$ to obtain that \begin{eqnarray} &&\int_{0}^{T}\varphi _{0}(t)l(t)dt \label{si-201} \\ &=&\int_{0}^{T}\left( \left\langle y_{0},\phi \right\rangle _{V^{\prime },V}+\int_{0}^{t}\int_{\Omega }f(s)\phi dxds-\int_{0}^{t}\left\langle Aw(s),\phi \right\rangle _{X^{\prime },X}ds\right) \varphi _{0}(t)dt. 
\notag \end{eqnarray} We multiply (\ref{si-200}) by $\varphi _{0}(t)\phi (x)$ and integrate over $ (0,T)\times \Omega .$ We have \begin{eqnarray} \int_{Q}\varphi _{0}\phi y_{n}dxdt &=&\int_{0}^{T}\left( \left\langle y_{0},\phi \right\rangle _{V^{\prime },V}+\int_{\Omega }\int_{0}^{t}f(s)\phi dsdx\right) \varphi _{0}(t)dt \label{si-202} \\ &&-\int_{0}^{T}\int_{0}^{t}\left\langle Aw_{n}(s),\phi \right\rangle _{X^{\prime },X}\varphi _{0}(t)dsdt, \notag \end{eqnarray} whence we use the strong convergence $y_{n}\rightarrow y$ in $L^{1}(Q)$ to get that \begin{eqnarray} \int_{Q}\varphi _{0}\phi ydxdt &=&\int_{0}^{T}\left( \left\langle y_{0},\phi \right\rangle _{V^{\prime },V}+\int_{\Omega }\int_{0}^{t}f(s)\phi dsdx\right) \varphi _{0}(t)dt \label{si-203} \\ &&-\int_{0}^{T}\int_{0}^{t}\left\langle Aw(s),\phi \right\rangle _{X^{\prime },X}\varphi _{0}(t)dsdt. \notag \end{eqnarray} Comparing (\ref{si-201}) and (\ref{si-203}) we deduce that \begin{equation*} \int_{0}^{T}\varphi _{0}(t)l(t)dt=\int_{Q}\varphi _{0}\phi ydxdt\mbox{ for any }\varphi _{0}\in L^{\infty }(0,T), \end{equation*} hence \begin{equation*} l(t)=\lim_{n\rightarrow \infty }\int_{\Omega }y_{n}(t)\phi dx=\int_{\Omega }y(t)\phi dx\mbox{ for any }\phi \in X,\mbox{ }t\in \lbrack 0,T]. \end{equation*} Thus \begin{equation} y_{n}(t)\rightarrow y(t)\mbox{ weakly in }X^{\prime }\mbox{ as }n\rightarrow \infty ,\mbox{ for any }t\in \lbrack 0,T] \label{si-204} \end{equation} and therefore \begin{equation} y_{n}(T)\rightarrow y(T)\mbox{, }y_{n}(0)\rightarrow y(0)=y_{0}\mbox{ weakly in }X^{\prime }\mbox{, as }n\rightarrow \infty . \label{si-206} \end{equation} Letting $n\rightarrow \infty $ in (\ref{si-198-2}) we obtain \begin{equation*} \int_{0}^{T}\left\langle \frac{dy}{dt}(t),\psi (t)\right\rangle _{X^{\prime },X}dt+\int_{0}^{T}\int_{\Omega }wA_{0,\infty }\psi dxdt=\int_{0}^{T}\left\langle f(t),\psi (t)\right\rangle _{X^{\prime },X}dt, \end{equation*} which proves that $(y,w)$ is the solution to (\ref{si-8-P}).
By (\ref{si-198}) and (\ref{si-9-7-01}) we can write that \begin{eqnarray*} &&\int_{Q}(k_{1}y_{n}+k_{2}+k_{3}w_{n}+k_{4})dxdt+\frac{1}{2}\left\Vert y_{n}(T)\right\Vert _{V^{\prime }}^{2} \\ &\leq &\int_{Q}(j(t,x,y_{n}(t,x))+j^{\ast }(t,x,w_{n}(t,x)))dxdt+\frac{1}{2} \left\Vert y_{n}(T)\right\Vert _{V^{\prime }}^{2} \\ &\leq &\frac{1}{2}\left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}+\left\Vert y_{n}\right\Vert _{L^{1}(Q)}\left\Vert A_{0,\infty }^{-1}f\right\Vert _{L^{\infty }(Q)}+\lambda \leq C\mbox{,} \end{eqnarray*} whence, using (\ref{si-198-1}) we get \begin{equation*} \frac{1}{2}\left\Vert y_{n}(T)\right\Vert _{V^{\prime }}^{2}\leq C+\max_{i=1,...4}\left\vert k_{i}\right\vert _{L^{\infty }(Q)}\left( \left\Vert y\right\Vert _{L^{1}(Q)}+\left\Vert w\right\Vert _{L^{1}(Q)}+2\right) =C_{1} \end{equation*} with $C$ and $C_{1}$ constants and $\left\vert k_{i}\right\vert _{\infty }=\left\Vert k_{i}\right\Vert _{L^{\infty }(Q)}.$
It follows that $y_{n}(T)\rightarrow \xi $ weakly in $V^{\prime }$ as $ n\rightarrow \infty .$ As seen earlier, $y_{n}(T)\rightarrow y(T)$ weakly in $X^{\prime },$ and by the uniqueness of the limit we get $\xi =y(T)\in V^{\prime }.$
The function \begin{equation*} \varphi :L^{1}(Q)\rightarrow \mathbb{R},\mbox{ }\varphi (z)=\int_{Q}j(t,x,z(t,x))dxdt \end{equation*} is proper, convex and l.s.c. (see \cite{vb-springer-2010}, p. 56), and so by Fatou's lemma (if $j$ were nonnegative) we would get \begin{equation} \varphi (y)\leq \liminf_{n\rightarrow \infty }\varphi (y_{n})=\liminf_{n\rightarrow \infty }\int_{Q}j(t,x,y_{n}(t,x))dxdt<\infty . \label{si-206-0} \end{equation} Since $j$ is not in general nonnegative, we use (\ref{si-9-7-01}) and apply Fatou's lemma to \begin{equation*} \widetilde{j}(t,x,r)=j(t,x,r)-k_{1}(t,x)r-k_{2}(t,x)\geq 0. \end{equation*} We get, by the strong convergence $y_{n}\rightarrow y$ in $L^{1}(Q)$ and the continuity of $j,$ \begin{eqnarray*} &&\int_{Q}(j(t,x,y(t,x))-k_{1}y-k_{2})dxdt=\int_{Q}\liminf_{n\rightarrow \infty }\widetilde{j}(t,x,y_{n}(t,x))dxdt \\ &\leq &\liminf_{n\rightarrow \infty }\int_{Q}\widetilde{j}(t,x,y_{n}(t,x))dxdt=\liminf_{n\rightarrow \infty }\int_{Q}j(t,x,y_{n}(t,x))dxdt-\int_{Q}(k_{1}y+k_{2})dxdt, \end{eqnarray*} and so (\ref{si-206-0}) holds.
Similarly we have that $\int_{Q}j^{\ast }(t,x,w(t,x))dxdt<\infty ,$ and so, in particular, we have shown that $(y,w)\in U.$
Moreover, passing to the limit in (\ref{si-198}) as $n\rightarrow \infty $ we obtain by lower semicontinuity that \begin{eqnarray*} &&\int_{Q}(j(t,x,y(t,x))+j^{\ast }(t,x,w(t,x)))dxdt+\frac{1}{2}\left\Vert y(T)\right\Vert _{V^{\prime }}^{2} \\ &&-\frac{1}{2}\left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}-\int_{Q}yA_{0,\infty }^{-1}fdxdt\leq \liminf_{n\rightarrow \infty }J(y_{n},w_{n})\leq \lambda \end{eqnarray*} which means that $(y,w)\in E_{\lambda }.$ This ends the proof. \
$ \square $
\noindent \textbf{Theorem 3.2. }\textit{Problem }$(P)$\textit{\ has at least one solution }$(y^{\ast },w^{\ast }).$ \textit{If} $j$ \textit{is strictly convex, the solution to }$(P)$\textit{\ is unique}.
\noindent \textbf{Proof. }By (\ref{si-9-7-01}) we note that if $(y,w)\in U,$ then \begin{eqnarray*} J(y,w) &\geq &-\left\vert k_{1}\right\vert _{\infty }\left\Vert y\right\Vert _{L^{1}(Q)}-\left\vert k_{2}\right\vert _{\infty }-\left\vert k_{3}\right\vert _{\infty }\left\Vert w\right\Vert _{L^{1}(Q)}-\left\vert k_{4}\right\vert _{\infty } \\ &&-\frac{1}{2}\left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}-\left\Vert y\right\Vert _{L^{1}(Q)}\left\Vert A_{0,\infty }^{-1}f\right\Vert _{L^{\infty }(Q)}. \end{eqnarray*} Let us set $d=\inf\limits_{(y,w)\in U}J(y,w).$ We assume first that $ d>-\infty $ and we shall show later that this is indeed the only case.
Let us consider a minimizing sequence $(y_{n},w_{n})\in U$, such that \begin{equation} d\leq J(y_{n},w_{n})\leq d+\frac{1}{n}, \label{si-99} \end{equation} where the pair $(y_{n},w_{n})$ satisfies (\ref{si-closed-0}).
By (\ref{si-9-2})-(\ref{si-9-2-00}), for any $M>0,$ there exist $C_{M}$ and $D_{M}$ such that $j(t,x,r)>M\left\vert r\right\vert $ for $\left\vert r\right\vert >C_{M}$ and $j^{\ast }(t,x,\omega )>M\left\vert \omega \right\vert $ for $\left\vert \omega \right\vert >D_{M}.$ Then, by (\ref{si-99}) we write \begin{eqnarray*} &&\int_{\{(t,x);\left\vert y_{n}(t,x)\right\vert \leq C_{M}\}}j(t,x,y_{n}(t,x))dxdt+M\int_{\{(t,x);\left\vert y_{n}(t,x)\right\vert >C_{M}\}}\left\vert y_{n}\right\vert dxdt \\ &&+\int_{\{(t,x);\left\vert w_{n}(t,x)\right\vert \leq D_{M}\}}j^{\ast }(t,x,w_{n}(t,x))dxdt+M\int_{\{(t,x);\left\vert w_{n}(t,x)\right\vert >D_{M}\}}\left\vert w_{n}\right\vert dxdt \\ &&+\frac{1}{2}\left\Vert y_{n}(T)\right\Vert _{V^{\prime }}^{2}-\frac{1}{2}\left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}\leq d+\frac{1}{n} \\ &&+\left\Vert A_{0,\infty }^{-1}f\right\Vert _{L^{\infty }(Q)}\left( \int_{\{(t,x);\left\vert y_{n}(t,x)\right\vert \leq C_{M}\}}\left\vert y_{n}\right\vert dxdt+\int_{\{(t,x);\left\vert y_{n}(t,x)\right\vert >C_{M}\}}\left\vert y_{n}\right\vert dxdt\right) .
\end{eqnarray*} Denoting $\left\Vert A_{0,\infty }^{-1}f\right\Vert _{L^{\infty }(Q)}=f_{\infty },$ and taking $M$ large enough such that $M>f_{\infty }$ it follows that \begin{eqnarray*} &&(M-f_{\infty })\int_{\{(t,x);\left\vert y_{n}(t,x)\right\vert >C_{M}\}}\left\vert y_{n}\right\vert dxdt+M\int_{\{(t,x);\left\vert w_{n}(t,x)\right\vert >D_{M}\}}\left\vert w_{n}\right\vert dxdt \\ &&\mbox{ \ }+\frac{1}{2}\left\Vert y_{n}(T)\right\Vert _{V^{\prime }}^{2} \\ &\leq &\frac{1}{2}\left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}+f_{\infty }C_{M}\mbox{meas}(Q)+\int_{\{(t,x);\left\vert y_{n}(t,x)\right\vert \leq C_{M}\}}\left\vert j(t,x,y_{n}(t,x))\right\vert dxdt \\ &&+\int_{\{(t,x);\left\vert w_{n}(t,x)\right\vert \leq D_{M}\}}\left\vert j^{\ast }(t,x,w_{n}(t,x))\right\vert dxdt+d+1 \\ &\leq &\frac{1}{2}\left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}+f_{\infty }C_{M}\mbox{meas}(Q)+\int_{\{(t,x);\left\vert y_{n}(t,x)\right\vert \leq C_{M}\}}\left\vert \widetilde{j}(t,x,y_{n}(t,x))\right\vert dxdt \\ &&+\int_{\{(t,x);\left\vert w_{n}(t,x)\right\vert \leq D_{M}\}}\left\vert \widetilde{j}^{\ast }(t,x,w_{n}(t,x))\right\vert dxdt+d+1 \\ &&+\int_{\{(t,x);\left\vert y_{n}(t,x)\right\vert \leq C_{M}\}}\left\vert k_{1}y_{n}+k_{2}\right\vert dxdt+\int_{\{(t,x);\left\vert w_{n}(t,x)\right\vert \leq D_{M}\}}\left\vert k_{3}w_{n}+k_{4}\right\vert dxdt, \end{eqnarray*} where $\widetilde{j}(t,x,r)=j(t,x,r)-k_{1}r-k_{2},$ $\widetilde{j}^{\ast }(t,x,\omega )=j^{\ast }(t,x,\omega )-k_{3}\omega -k_{4}.$
Recalling (\ref{si-9-7}) and (\ref{si-9-6}), \begin{equation*} j(t,x,y_{n}(t,x))\leq \left\vert j(t,x,0)\right\vert +\left\vert \eta _{n}(t,x)\right\vert \left\vert y_{n}(t,x)\right\vert \leq Y_{M}^{1}\mbox{ on }\{(t,x);\left\vert y_{n}(t,x)\right\vert \leq C_{M}\}, \end{equation*} \begin{equation*} j^{\ast }(t,x,w_{n}(t,x))\leq \left\vert j^{\ast }(t,x,0)\right\vert +\left\vert \varpi _{n}(t,x)\right\vert \left\vert w_{n}(t,x)\right\vert \leq W_{M}^{1}\mbox{ on }\{(t,x);\left\vert w_{n}(t,x)\right\vert \leq D_{M}\}, \end{equation*} where $\eta _{n}(t,x)\in \beta (t,x,y_{n})$ and $\varpi _{n}(t,x)\in (\beta )^{-1}(t,x,w_{n})$ a.e. on $Q.$
Then \begin{equation*} 0\leq \widetilde{j}(t,x,y_{n}(t,x))\leq Y_{M}^{1}+\left\vert k_{1}\right\vert _{\infty }C_{M}+\left\vert k_{2}\right\vert _{\infty }\mbox{ on }\{(t,x);\left\vert y_{n}(t,x)\right\vert \leq C_{M}\}, \end{equation*} \begin{equation*} 0\leq \widetilde{j}^{\ast }(t,x,w_{n}(t,x))\leq W_{M}^{1}+\left\vert k_{3}\right\vert _{\infty }D_{M}+\left\vert k_{4}\right\vert _{\infty }\mbox{ on }\{(t,x);\left\vert w_{n}(t,x)\right\vert \leq D_{M}\} \end{equation*} and we deduce that \begin{eqnarray} &&(M-f_{\infty })\int_{\{(t,x);\left\vert y_{n}(t,x)\right\vert >C_{M}\}}\left\vert y_{n}\right\vert dxdt+M\int_{\{(t,x);\left\vert w_{n}(t,x)\right\vert >D_{M}\}}\left\vert w_{n}\right\vert dxdt \label{bound-inf} \\ &&+\frac{1}{2}\left\Vert y_{n}(T)\right\Vert _{V^{\prime }}^{2}\leq C+d. \notag \end{eqnarray} Consequently, this yields \begin{equation} \left\Vert y_{n}\right\Vert _{L^{1}(Q)}\leq C,\mbox{ \ }\left\Vert w_{n}\right\Vert _{L^{1}(Q)}\leq C,\mbox{ \ }\left\Vert y_{n}(T)\right\Vert _{V^{\prime }}\leq C. \label{bound} \end{equation} (By $C$ and $C_{i},$ $i=1,\dots ,4,$ we denote various constants independent of $n$.)
From (\ref{si-99}) we get \begin{equation} I_{n}:=\int_{Q}j(t,x,y_{n}(t,x))dxdt+\int_{Q}j^{\ast }(t,x,w_{n}(t,x))dxdt\leq C. \label{si-198-0} \end{equation} We continue by proving that each term is bounded separately, i.e., \begin{equation} \int_{Q}j(t,x,y_{n}(t,x))dxdt\leq C_{1},\mbox{ }\int_{Q}j^{\ast }(t,x,w_{n}(t,x))dxdt\leq C_{2}. \label{si-100} \end{equation} We write \begin{eqnarray*} &&I_{n}=\int_{\{(t,x);\left\vert y_{n}(t,x)\right\vert \leq M\}}j(t,x,y_{n}(t,x))dxdt+\int_{\{(t,x);\left\vert y_{n}(t,x)\right\vert >M\}}j(t,x,y_{n}(t,x))dxdt \\ &&+\int_{\{(t,x);\left\vert w_{n}(t,x)\right\vert \leq M\}}j^{\ast }(t,x,w_{n}(t,x))dxdt+\int_{\{(t,x);\left\vert w_{n}(t,x)\right\vert >M\}}j^{\ast }(t,x,w_{n}(t,x))dxdt \\ &\leq &C. \end{eqnarray*} Therefore \begin{eqnarray*} &&\int_{\{(t,x);\left\vert y_{n}(t,x)\right\vert >M\}}j(t,x,y_{n}(t,x))dxdt+\int_{\{(t,x);\left\vert w_{n}(t,x)\right\vert >M\}}j^{\ast }(t,x,w_{n}(t,x))dxdt \\ &\leq &C+Y_{M}^{1}\mbox{meas}(Q)+W_{M}^{1}\mbox{meas}(Q)=C_{3}. \end{eqnarray*} Since $j(t,x,y_{n}(t,x))\geq k_{1}(t,x)y_{n}(t,x)+k_{2}(t,x)$ we deduce that \begin{equation*} \int_{\{(t,x);\left\vert w_{n}(t,x)\right\vert >M\}}j^{\ast }(t,x,w_{n}(t,x))dxdt\leq C_{4}, \end{equation*} whence \begin{equation} \int_{Q}j^{\ast }(t,x,w_{n}(t,x))dxdt\leq C_{2}. \label{101-1} \end{equation} Finally, (\ref{si-198-0}) yields \begin{equation} \int_{Q}j(t,x,y_{n}(t,x))dxdt\leq C_{1}, \label{101-2} \end{equation} with $C_{1}$ and $C_{2}$ independent of $n.$
Next, we shall show that the sequences $(y_{n})_{n}$ and $(w_{n})_{n}$ are weakly compact in $L^{1}(Q).$
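We shall use the Dunford-Pettis criterion, which we recall here for the reader's convenience: a bounded subset $\mathcal{F}\subset L^{1}(Q)$ is relatively weakly compact in $L^{1}(Q)$ if and only if it is equi-integrable, that is, \begin{equation*} \lim_{\mbox{\scriptsize meas}(S)\rightarrow 0}\ \sup_{g\in \mathcal{F}}\int_{S}\left\vert g\right\vert dxdt=0,\mbox{ for measurable }S\subset Q. \end{equation*}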
To this end we have to show that the integrals $\int_{S}\left\vert w_{n}\right\vert dxdt,$ with $S\subset Q,$ are equi-absolutely continuous, meaning that for every $\varepsilon >0$ there exists $\delta >0$ such that $\int_{S}\left\vert w_{n}\right\vert dxdt<\varepsilon $ whenever meas$(S)<\delta .$ Let $M_{\varepsilon }>\frac{2C_{2}}{\varepsilon },$ where $C_{2}$ is the constant in (\ref{si-100}), and, relying on (\ref{si-9-2}), let $R_{M}$ be such that $\frac{j^{\ast }(t,x,\omega )}{\left\vert \omega \right\vert }\geq M_{\varepsilon }$ for $\left\vert \omega \right\vert >R_{M}.$ If $\delta <\frac{\varepsilon }{2R_{M}}$ then \begin{eqnarray*} &&\int_{S}\left\vert w_{n}\right\vert dxdt\leq \int_{\{(t,x);\left\vert w_{n}(t,x)\right\vert >R_{M}\}}\left\vert w_{n}\right\vert dxdt+\int_{\{(t,x);\left\vert w_{n}(t,x)\right\vert \leq R_{M}\}}\left\vert w_{n}\right\vert dxdt \\ &\leq &M_{\varepsilon }^{-1}\int_{Q}j^{\ast }(t,x,w_{n}(t,x))dxdt+R_{M}\delta <\varepsilon . \end{eqnarray*} Hence, by the Dunford-Pettis theorem it follows that $(w_{n})_{n}$ is weakly compact in $L^{1}(Q).$ We proceed in a similar way to show the weak compactness of the sequence $(y_{n})_{n}.$ Thus, \begin{equation*} y_{n}\rightarrow y^{\ast }\mbox{ weakly in }L^{1}(Q),\mbox{ } w_{n}\rightarrow w^{\ast }\mbox{ weakly in }L^{1}(Q)\mbox{ as }n\rightarrow \infty , \end{equation*} \begin{equation*} Aw_{n}\rightarrow Aw^{\ast }\mbox{ weakly in }L^{1}(0,T;X^{\prime }),\mbox{ as }n\rightarrow \infty , \end{equation*} by (\ref{A}), which implies by (\ref{si-closed-0}) that \begin{equation*} \frac{dy_{n}}{dt}\rightarrow \frac{dy^{\ast }}{dt}\mbox{ weakly in } L^{1}(0,T;X^{\prime })\mbox{ as }n\rightarrow \infty . 
\end{equation*} Passing to the limit in \begin{equation*} \int_{0}^{T}\left\langle \frac{dy_{n}}{dt}(t),\psi (t)\right\rangle _{X^{\prime },X}dt+\int_{Q}w_{n}A_{0,\infty }\psi dxdt=\int_{0}^{T}\left\langle f(t),\psi (t)\right\rangle _{X^{\prime },X}dt \end{equation*} for any $\psi \in L^{\infty }(0,T;X),$ we get that $(y^{\ast },w^{\ast })$ verifies (\ref{si-8-1-0}), or equivalently (\ref{si-8-P}), i.e., \begin{equation*} \int_{0}^{T}\left\langle \frac{dy^{\ast }}{dt}(t),\psi (t)\right\rangle _{X^{\prime },X}dt+\int_{Q}w^{\ast }A_{0,\infty }\psi dxdt=\int_{0}^{T}\left\langle f(t),\psi (t)\right\rangle _{X^{\prime },X}dt. \end{equation*} Next we show that \begin{equation*} y_{n}(T)\rightarrow y^{\ast }(T)\mbox{ and }y_{n}(0)\rightarrow y(0)=y_{0} \mbox{ weakly in }V^{\prime }\mbox{, as }n\rightarrow \infty , \end{equation*} in a similar way as in Lemma 3.1. In order to obtain (\ref{si-203}) we use the weak compactness of $(y_{n})_{n}$ in $L^{1}(Q).$
Finally, by passing to the limit in (\ref{si-99}), on the basis of the weak lower semicontinuity of the functional $J$ on $L^{1}(Q)\times L^{1}(Q),$ we obtain that \begin{equation*} J(y^{\ast },w^{\ast })=d. \end{equation*} Hence, we have shown that $y^{\ast }\in L^{1}(Q),$ $w^{\ast }\in L^{1}(Q),$ $y^{\ast }(T)\in V^{\prime }$ and $(y^{\ast },w^{\ast })$ satisfies (\ref{si-8-P}). By (\ref{si-100}) we get \begin{equation*} \int_{Q}j(t,x,y^{\ast }(t,x))dxdt<\infty ,\mbox{ }\int_{Q}j^{\ast }(t,x,w^{\ast }(t,x))dxdt<\infty . \end{equation*} With these relations we have completed the proof that $(y^{\ast },w^{\ast })$ belongs to $U$ and that it is a solution to $(P).$
Let us show now that $d>-\infty .$ Indeed, otherwise, for every positive real $K$ there exists $n_{K}$ such that for every $n\geq n_{K}$ we have $J(y_{n},w_{n})<-K.$ Repeating the previous computations, we arrive at the inequality (\ref{bound-inf}), which now reads \begin{eqnarray*} &&(M-f_{\infty })\int_{\{(t,x);\left\vert y_{n}(t,x)\right\vert >C_{M}\}}\left\vert y_{n}\right\vert dxdt+M\int_{\{(t,x);\left\vert w_{n}(t,x)\right\vert >D_{M}\}}\left\vert w_{n}\right\vert dxdt \\ &&+\frac{1}{2}\left\Vert y_{n}(T)\right\Vert _{V^{\prime }}^{2}\leq C-K. \end{eqnarray*} Since $C$ is a fixed constant, this implies $C-K<0$ for $K$ large enough, and this leads to a contradiction, as claimed.
The uniqueness argument is standard: it relies on the strict convexity of $j$ and on the inequality \begin{eqnarray} &&J\left( \frac{y_{1}+y_{2}}{2},\frac{w_{1}+w_{2}}{2}\right) \label{si-104} \\ &=&\int_{Q}\left( j\left( t,x,\frac{y_{1}+y_{2}}{2}(t,x)\right) +j^{\ast }\left( t,x,\frac{w_{1}+w_{2}}{2}(t,x)\right) \right) dxdt \notag \\ &&+\frac{1}{2}\left\Vert \frac{y_{1}+y_{2}}{2}(T)\right\Vert _{V^{\prime }}^{2}-\frac{1}{2}\left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}-\int_{Q} \frac{y_{1}+y_{2}}{2}A_{0,\infty }^{-1}fdxdt \notag \\ &\leq &\frac{1}{2}(J(y_{1},w_{1})+J(y_{2},w_{2}))-\frac{1}{2}\left\Vert \frac{y_{1}-y_{2}}{2}(T)\right\Vert _{V^{\prime }}^{2}, \notag \end{eqnarray} where $(y_{1},w_{1})$ and $(y_{2},w_{2})$ are two solutions to $(P).$
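The last inequality in (\ref{si-104}) follows from the convexity of $j$ and $j^{\ast },$ the linearity of the remaining terms, and the parallelogram identity in the Hilbert space $V^{\prime },$ which for the terminal values reads \begin{equation*} \left\Vert \frac{y_{1}+y_{2}}{2}(T)\right\Vert _{V^{\prime }}^{2}=\frac{1}{2}\left\Vert y_{1}(T)\right\Vert _{V^{\prime }}^{2}+\frac{1}{2}\left\Vert y_{2}(T)\right\Vert _{V^{\prime }}^{2}-\left\Vert \frac{y_{1}-y_{2}}{2}(T)\right\Vert _{V^{\prime }}^{2}. \end{equation*}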
$ \square $
We call the solution to the minimization problem $(P)$ a \textit{variational} or \textit{generalized} solution to (\ref{si-1}).
One might suspect that if the minimum in $(P)$ is zero, then the null minimizer is a weak solution to (\ref{si-1}). We shall prove this for a slightly modified version of $(P),$ by including a boundedness constraint for the state $y$ in the admissible set $U.$ More precisely, we consider the problem \begin{equation} \mbox{Minimize }\widetilde{J}(y,w)\mbox{ for all }(y,w)\in \widetilde{U} \tag{$\widetilde{P}$} \end{equation} where \begin{equation*} \widetilde{J}(y,w)=\left\{ \begin{array}{l} J(y,w),\mbox{ \ }(y,w)\in \widetilde{U}, \\ +\infty ,\mbox{ \ \ \ \ \ \ otherwise,} \end{array} \right. \end{equation*} \begin{equation*} \widetilde{U}=\{(y,w)\in U;\mbox{ }y(t,x)\in \lbrack y_{m},y_{M}]\mbox{ a.e. }(t,x)\in Q\}, \end{equation*} with $y_{m},$ $y_{M}$ two constants. We assume that \begin{equation*} y_{0}\in L^{\infty }(\Omega ),\mbox{ }y_{0}\in \lbrack y_{m},y_{M}],\mbox{ \ }f\in L^{\infty }(Q) \end{equation*} and remark that $\widetilde{U}$ is not empty (it contains, e.g., $y_{0}$ with $w_{0}=A_{0,\infty }^{-1}f(t)$, given by (\ref{si-8-P})).
If we set $y_{m}=0,$ this boundedness property agrees with the physical significance of $y$ as a fluid concentration in a diffusion process, which is nonnegative.
Problem $(\widetilde{P})$ has at least one solution; the proof is the same as that of Theorem 3.2.
\noindent \textbf{Theorem 3.3. }\textit{Let }$(y,w)\in \widetilde{U}$ \textit{be a null minimizer in }$(\widetilde{P}),$\textit{\ i.e., } \begin{equation*} \min (\widetilde{P})=\widetilde{J}(y,w)=0. \end{equation*} \textit{\ Let us assume in addition that } \begin{equation} \frac{1}{2}\left\Vert y(T)\right\Vert _{V^{\prime }}^{2}-\frac{1}{2} \left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}-\int_{Q}y(t,x)(A_{0,\infty }^{-1}f(t))(x)dxdt=-\int_{Q}w(t,x)y(t,x)dxdt. \label{si-298} \end{equation} \textit{\ Then } \begin{equation*} w(t,x)\in \beta (t,x,y(t,x)),\mbox{ }a.e.\mbox{ }(t,x)\in Q, \end{equation*} \textit{and the pair }$(y,w)$\textit{\ is the unique weak solution to} (\ref {si-1}).
\noindent \textbf{Proof. }Let $(y,w)$ be the null minimizer in $(\widetilde{P }).$ Then \begin{eqnarray*} \widetilde{J}(y,w) &=&\int_{Q}(j(t,x,y(t,x))+j^{\ast }(t,x,w(t,x)))dxdt \\ &&+\frac{1}{2}\left\Vert y(T)\right\Vert _{V^{\prime }}^{2}-\frac{1}{2} \left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}-\int_{Q}y(t,x)(A_{0,\infty }^{-1}f(t))(x)dxdt=0. \end{eqnarray*} By (\ref{si-298}) we have \begin{equation} \int_{Q}(j(t,x,y(t,x))+j^{\ast }(t,x,w(t,x))-y(t,x)w(t,x))dxdt=0. \label{si-298-0} \end{equation} This implies that $j(t,x,y(t,x))+j^{\ast }(t,x,w(t,x))-y(t,x)w(t,x)=0$ a.e. $ (t,x)\in Q$ and so \begin{equation*} w(t,x)\in \beta (t,x,y(t,x))\mbox{ a.e. }(t,x)\in Q, \end{equation*} as claimed.
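The last implication is the equality case in the Fenchel-Young inequality: for a.e. $(t,x)\in Q$ one has \begin{equation*} j(t,x,r)+j^{\ast }(t,x,\omega )\geq r\omega \mbox{ for all }r,\omega \in \mathbb{R}, \end{equation*} with equality if and only if $\omega \in \partial j(t,x,r)=\beta (t,x,r).$ Thus the integrand in (\ref{si-298-0}) is nonnegative and, having zero integral, it vanishes a.e. on $Q.$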
$\square $
\section{Time dependent potential}
\setcounter{equation}{0}
In this section we consider the case in which $j$ and $j^{\ast }$ depend on $t$ but not on $x,$ and assume $(h_{1})-(h_{2}),$ (\ref{si-beta-j})-(\ref{si-9-2-0}), (\ref{si-9-7-01}) and (\ref{si-9-5}), where $k_{1},$ $k_{2}\in L^{\infty }(0,T).$
The main result of this section is that a solution to $(P)$ belongs to $L^{\infty }(0,T;V^{\prime })$ and minimizes $J$ to zero; it is exactly the unique weak solution to (\ref{si-1}).
To this end we need some intermediate results. The first is proved in the next lemma; the second, given in Theorem 4.2, recalls one of the main results of \cite{gm-jota-var-1}.
\noindent \textbf{Lemma 4.1. }\textit{Let} $(y,w)\in U$\textit{\ and }$y\in L^{\infty }(0,T;V^{\prime }).$\textit{\ Then }$yw\in L^{1}(Q)$ \textit{and we have the formula} \begin{equation} -\int_{Q}ywdxdt=\frac{1}{2}\left\Vert y(T)\right\Vert _{V^{\prime }}^{2}- \frac{1}{2}\left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}-\int_{Q}yA_{0,\infty }^{-1}fdxdt. \label{60} \end{equation}
\noindent \textbf{Proof. }Let\textbf{\ }$(y,w)\in U.$ Then\ $y,w\in L^{1}(Q), $ $j(\cdot ,\cdot ,y(\cdot ,\cdot ))\in L^{1}(Q),$ $j^{\ast }(\cdot ,\cdot ,w(\cdot ,\cdot ))\in L^{1}(Q).$ By (\ref{si-9-5}) we have \begin{equation*} j(t,-y(t,x))\leq \gamma _{1}j(t,y(t,x))+\gamma _{2}\mbox{ a.e. on }Q, \end{equation*} which implies that $\int_{Q}j(t,x,-y(t,x))dxdt<\infty .$ Next, by the relations \begin{eqnarray*} j(t,x,y(t,x))+j^{\ast }(t,x,w(t,x)) &\geq &y(t,x)w(t,x), \\ j(t,x,-y(t,x))+j^{\ast }(t,x,w(t,x)) &\geq &-y(t,x)w(t,x) \end{eqnarray*} it follows that \begin{equation} yw\in L^{1}(Q). \label{61} \end{equation} Because $(y,w)\in U$ it also satisfies (\ref{si-8-P}). Then $y\in L^{1}(Q)\cap L^{\infty }(0,T;V^{\prime }),$ $w\in L^{1}(Q).$ We perform a regularization by applying $(I+\varepsilon A_{\Delta })^{-1}$ to (\ref {si-8-P}), where $A_{\Delta }$ denotes here the realization of the operator $ -\Delta $ on the spaces indicated in Section 2.1. We obtain \begin{eqnarray} \frac{dy_{\varepsilon }}{dt}(t)+Aw_{\varepsilon }(t) &=&f_{\varepsilon }(t) \mbox{ a.e. }t\in (0,T), \label{63} \\ y_{\varepsilon }(0) &=&(I+\varepsilon A_{V})^{-1}y_{0}, \notag \end{eqnarray} where \begin{eqnarray} y_{\varepsilon }(t) &=&(I+\varepsilon A_{V})^{-1}y(t),\mbox{ a.e. }t\in (0,T) \notag \\[1pt] w_{\varepsilon }(t) &=&(I+\varepsilon A_{1})^{-1}w(t),\mbox{ a.e. }t\in (0,T) \label{61-0} \\ f_{\varepsilon }(t) &=&(I+\varepsilon A_{0,\infty })^{-1}f(t),\mbox{ a.e. } t\in (0,T). \notag \end{eqnarray} According again to Brezis and Strauss (see \cite{brezis-strauss}), if $ w(t)\in L^{1}(\Omega )$ then \begin{equation} w_{\varepsilon }(t)\in W^{1,q}(\Omega ),\mbox{ a.e. }t\in (0,T),\mbox{ with } 1\leq q<\frac{N}{N-1}. 
\label{64} \end{equation} Since $\frac{N}{N-1}<N\leq 3$, we get by the Sobolev inequalities that \begin{equation*} W^{1,q}(\Omega )\subset L^{q^{\ast }}(\Omega ),\mbox{ }\frac{1}{q^{\ast }}= \frac{1}{q}-\frac{1}{N}, \end{equation*} with $\frac{N}{N-1}\leq q^{\ast }<\frac{N}{N-2}.$ It follows that \begin{equation*} w_{\varepsilon }\in L^{1}(0,T;L^{2}(\Omega )). \end{equation*} Next, \begin{equation*} y_{\varepsilon }\in L^{1}(0,T;L^{2}(\Omega ))\cap L^{\infty }(0,T;V), \end{equation*} by a similar argument as for $w_{\varepsilon },$ since $y\in L^{1}(Q)\cap L^{\infty }(0,T;V^{\prime })$ and \begin{equation*} y_{\varepsilon }(t)=(I+\varepsilon A_{1})^{-1}y(t),\mbox{ a.e. }t\in (0,T), \end{equation*} as well. Finally, \begin{equation*} f_{\varepsilon }\in L^{\infty }(0,T;\bigcap\limits_{p\geq 2}W^{2,p}(\Omega )), \end{equation*} by elliptic regularity.
Moreover, $A_{1}$ is $m$-accretive on $L^{1}(\Omega ),$ and it follows that \begin{equation*} w_{\varepsilon }(t)\rightarrow w(t)\mbox{ strongly in }L^{1}(\Omega )\mbox{ for any }t\in \lbrack 0,T] \end{equation*} and \begin{equation*} \left\Vert w_{\varepsilon }(t)\right\Vert _{L^{1}(\Omega )}\leq \left\Vert w(t)\right\Vert _{L^{1}(\Omega )}\mbox{ for any }t\in \lbrack 0,T], \end{equation*} (see \cite{brezis-strauss}).
For later use, we deduce by the Lebesgue dominated convergence theorem that \begin{equation} w_{\varepsilon }\rightarrow w\mbox{ strongly in }L^{1}(Q),\mbox{ as } \varepsilon \rightarrow 0. \label{63-0} \end{equation} Similarly, we have that \begin{equation} y_{\varepsilon }\rightarrow y\mbox{ strongly in }L^{1}(Q),\mbox{ as } \varepsilon \rightarrow 0. \label{63-1} \end{equation} Finally, \begin{equation} f_{\varepsilon }\rightarrow f\mbox{ weak* in }L^{\infty }(Q),\mbox{ and strongly in }L^{p}(Q),\mbox{ }p\geq 2,\mbox{ as }\varepsilon \rightarrow 0. \label{63-2} \end{equation} By the first relation in (\ref{61-0}) we also have that \begin{equation*} (I+\varepsilon A_{V})^{-1}y(t)\rightarrow y(t)\mbox{ strongly in }V^{\prime } \mbox{ for any }t\in \lbrack 0,T]. \end{equation*} We also observe that \begin{equation*} \int_{0}^{T}\left\langle Aw_{\varepsilon }(t),\psi (t)\right\rangle _{X^{\prime },X}dt=\int_{Q}w_{\varepsilon }A_{0,\infty }\psi dxdt\rightarrow \int_{Q}wA_{0,\infty }\psi dxdt\mbox{ as }\varepsilon \rightarrow 0, \end{equation*} for any $\psi \in L^{\infty }(0,T;X)$ and by (\ref{63}) \begin{equation*} \frac{dy_{\varepsilon }}{dt}\rightarrow \frac{dy}{dt}\mbox{ weakly in } L^{1}(0,T;X^{\prime })\mbox{ as }\varepsilon \rightarrow 0. \end{equation*} Passing to the limit in (\ref{63}) tested against any $\psi \in L^{\infty }(0,T;X),$ \begin{equation*} \int_{0}^{T}\left\langle \frac{dy_{\varepsilon }}{dt}(t),\psi (t)\right\rangle _{X^{\prime },X}dt+\int_{Q}w_{\varepsilon }A_{0,\infty }\psi dxdt=\int_{0}^{T}\left\langle f_{\varepsilon }(t),\psi (t)\right\rangle _{X^{\prime },X}dt \end{equation*} we check that $(y,w)$ indeed satisfies (\ref{si-8-P}).
Next, we assert that \begin{equation} \int_{Q}j(t,y_{\varepsilon }(t,x))dxdt\leq \int_{Q}j(t,y(t,x))dxdt. \label{68} \end{equation}
Indeed, let us introduce the Yosida approximation of $\beta ,$ \begin{equation} \beta _{\lambda }(t,r)=\frac{1}{\lambda }\left( 1-(1+\lambda \beta (t,\cdot ))^{-1}\right) r,\mbox{ a.e. }t,\mbox{ for all }r\in \mathbb{R}\mbox{ and } \lambda >0. \label{69} \end{equation} We have $\beta _{\lambda }(t,r)=\frac{\partial j_{\lambda }}{\partial r} (t,r),$ where $j_{\lambda }$ is the Moreau approximation of $j,$ \begin{equation} j_{\lambda }(t,r)=\inf_{s\in \mathbb{R}}\left\{ \frac{\left\vert r-s\right\vert ^{2}}{2\lambda }+j(t,s)\right\} ,\mbox{ a.e. }t,\mbox{ for all }r\in \mathbb{R}, \label{70} \end{equation} which can also be written as \begin{equation} j_{\lambda }(t,r)=\frac{1}{2\lambda }\left\vert (1+\lambda \beta (t,\cdot ))^{-1}r-r\right\vert ^{2}+j(t,(1+\lambda \beta (t,\cdot ))^{-1}r). \label{70-0} \end{equation} The function $j_{\lambda }$ is convex, continuous and satisfies \begin{eqnarray} j_{\lambda }(t,r) &\leq &j(t,r)\mbox{ for all }r\in \mathbb{R},\mbox{ } \lambda >0,\mbox{ } \label{71} \\ \lim_{\lambda \rightarrow 0}j_{\lambda }(t,r) &=&j(t,r),\mbox{ for all }r\in \mathbb{R}. \notag \end{eqnarray} We have \begin{eqnarray*} &&\int_{Q}j_{\lambda }(t,y_{\varepsilon }(t,x))dxdt \\ &\leq &\int_{Q}j_{\lambda }(t,y(t,x))dxdt-\varepsilon \int_{0}^{T}\left\langle A_{V}y_{\varepsilon }(t),\beta _{\lambda }(t,y_{\varepsilon }(t))\right\rangle _{V^{\prime },V}dt. \end{eqnarray*} Since for any $z\in V$ one has \begin{equation*} -\left\langle A_{V}z,\beta _{\lambda }(t,z)\right\rangle _{V^{\prime },V}=-\int_{\Gamma }\alpha \beta _{\lambda }(t,z)zd\sigma -\int_{\Omega } \frac{\partial \beta _{\lambda }}{\partial z}(t,z)\left\vert \nabla z\right\vert ^{2}dx\leq 0, \end{equation*} we obtain \begin{equation} \int_{Q}j_{\lambda }(t,y_{\varepsilon }(t,x))dxdt\leq \int_{Q}j_{\lambda }(t,y(t,x))dxdt. 
\label{70-1} \end{equation} Now, by (\ref{71}) \begin{equation*} j_{\lambda }(t,y(t,x))\leq j(t,y(t,x)) \end{equation*} and \begin{equation} \lim_{\lambda \rightarrow 0}j_{\lambda }(t,y(t,x))=j(t,y(t,x))\mbox{ a.e. on }Q. \label{70-2} \end{equation} Assume for the moment that $j$ is nonnegative. Then by the Lebesgue dominated convergence theorem we get \begin{equation} \lim_{\lambda \rightarrow 0}\int_{Q}j_{\lambda }(t,y(t,x))dxdt=\int_{Q}j(t,y(t,x))dxdt,\mbox{ for any }y\mbox{ fixed.} \label{70-3} \end{equation} Passing to the limit in (\ref{70-1}) as $\lambda \rightarrow 0,$ we obtain that \begin{equation} \int_{Q}j(t,y_{\varepsilon }(t,x))dxdt\leq \int_{Q}j(t,y(t,x))dxdt,\mbox{ for all }\varepsilon >0. \label{70-4} \end{equation} Since in general $j$ is not necessarily nonnegative, we consider the function \begin{equation} \widetilde{j}(t,r)=j(t,r)-k_{1}(t)r-k_{2}(t) \label{70-6} \end{equation} which is nonnegative by (\ref{si-9-7-01}). Hence, by (\ref{70-4}) we have \begin{equation*} \int_{Q}\widetilde{j}(t,y_{\varepsilon }(t,x))dxdt\leq \int_{Q}\widetilde{j} (t,y(t,x))dxdt+\int_{Q}k_{1}(t)(y(t,x)-y_{\varepsilon }(t,x))dxdt, \end{equation*} so that \begin{equation} \int_{Q}\widetilde{j}(t,y_{\varepsilon }(t,x))dxdt\leq \int_{Q}\widetilde{j} (t,y(t,x))dxdt+\delta (\varepsilon ), \label{70-7} \end{equation} \begin{equation*} \delta (\varepsilon )=\int_{Q}k_{1}(t)(y(t,x)-y_{\varepsilon }(t,x))dxdt\leq \left\Vert k_{1}\right\Vert _{L^{\infty }(0,T)}\left\Vert y-y_{\varepsilon }\right\Vert _{L^{1}(Q)}\rightarrow 0,\mbox{ as }\varepsilon \rightarrow 0, \end{equation*} by (\ref{63-1}). Then, (\ref{70-7}) implies (\ref{68}), as claimed.
A similar relation to (\ref{68}) holds for $j^{\ast }$: \begin{equation} \int_{Q}j^{\ast }(t,w_{\varepsilon }(t,x))dxdt\leq \int_{Q}j^{\ast }(t,w(t,x))dxdt. \label{70-5} \end{equation}
This implies that $j(\cdot ,y_{\varepsilon }(\cdot ,\cdot ))\in L^{1}(Q)$, $ j^{\ast }(\cdot ,w_{\varepsilon }(\cdot ,\cdot ))\in L^{1}(Q),$ for all $ \varepsilon >0,$ and so, by the same argument as for $yw$ we deduce that \begin{equation*} y_{\varepsilon }w_{\varepsilon }\in L^{1}(Q). \end{equation*}
We test (\ref{63}) by $A_{2}^{-1}y_{\varepsilon }(t)$ and integrate over $(0,T).$ Since $y_{\varepsilon }\in L^{1}(Q)\cap L^{\infty }(0,T;V),$ $w_{\varepsilon }\in L^{1}(0,T;L^{2}(\Omega ))$ we get $A_{2}^{-1}y_{\varepsilon }(t)\in X_{2},$ a.e. $t\in (0,T)$ and by (\ref{A2}) \begin{equation*} \int_{0}^{T}\left\langle \widetilde{A_{2}}w_{\varepsilon }(t),A_{2}^{-1}y_{\varepsilon }(t)\right\rangle _{X_{2}^{\prime },X_{2}}dt=\int_{Q}y_{\varepsilon }w_{\varepsilon }dxdt. \end{equation*} Then, after a few computations, we deduce from (\ref{63}) that \begin{equation} -\int_{Q}y_{\varepsilon }w_{\varepsilon }dxdt=\frac{1}{2}\left\Vert y_{\varepsilon }(T)\right\Vert _{V^{\prime }}^{2}-\frac{1}{2}\left\Vert (I+\varepsilon A_{V})^{-1}y_{0}\right\Vert _{V^{\prime }}^{2}-\int_{Q}y_{\varepsilon }A_{0,\infty }^{-1}f_{\varepsilon }dxdt. \label{65} \end{equation} Recalling that by (\ref{61-0}) we have \begin{equation*} (I+\varepsilon A_{V})^{-1}y(t)\rightarrow y(t)\mbox{ strongly in }V^{\prime } \mbox{ for any }t\in \lbrack 0,T] \end{equation*} and passing to the limit in (\ref{65}) as $\varepsilon \rightarrow 0$ we obtain \begin{equation} \lim_{\varepsilon \rightarrow 0}\left( -\int_{Q}y_{\varepsilon }w_{\varepsilon }dxdt\right) =\frac{1}{2}\left\Vert y(T)\right\Vert _{V^{\prime }}^{2}-\frac{1}{2}\left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}-\int_{Q}yA_{0,\infty }^{-1}fdxdt. \label{66} \end{equation} Moreover, by the strong convergence of $(y_{\varepsilon })_{\varepsilon }$ and $(w_{\varepsilon })_{\varepsilon },$ (\ref{63-1}) and (\ref{63-0}), we get (along a subsequence) \begin{equation*} y_{\varepsilon }\rightarrow y\mbox{ a.e. in }Q,\mbox{ }w_{\varepsilon }\rightarrow w\mbox{ a.e. in }Q,\mbox{ as }\varepsilon \rightarrow 0, \end{equation*} which implies that \begin{equation*} y_{\varepsilon }w_{\varepsilon }\rightarrow yw\mbox{ a.e. in }Q,\mbox{ as } \varepsilon \rightarrow 0. 
\end{equation*} The functions $j$ and $j^{\ast }$ are continuous and so \begin{equation*} j(t,y_{\varepsilon }(t,x))\rightarrow j(t,y(t,x)),\mbox{ }j^{\ast }(t,w_{\varepsilon }(t,x))\rightarrow j^{\ast }(t,w(t,x)),\mbox{ a.e. on }Q, \mbox{ as }\varepsilon \rightarrow 0. \end{equation*} Now, by (\ref{68}) and (\ref{70-5}) we have \begin{eqnarray*} &&\int_{Q}\left( j(t,y_{\varepsilon }(t,x))+j^{\ast }(t,w_{\varepsilon }(t,x))-y_{\varepsilon }w_{\varepsilon }\right) dxdt \\ &\leq &\int_{Q}\left( j(t,y(t,x))+j^{\ast }(t,w(t,x))-y_{\varepsilon }w_{\varepsilon }\right) dxdt \end{eqnarray*} and we apply Fatou's lemma, since $j(t,y_{\varepsilon })+j^{\ast }(t,w_{\varepsilon })-y_{\varepsilon }w_{\varepsilon }\geq 0.$ We get, using (\ref{68}) and (\ref{70-5}), that \begin{eqnarray*} &&\int_{Q}\left( j(t,y(t,x))+j^{\ast }(t,w(t,x))-yw\right) dxdt \\ &\leq &\liminf\limits_{\varepsilon \rightarrow 0}\int_{Q}\left( j(t,y_{\varepsilon }(t,x))+j^{\ast }(t,w_{\varepsilon }(t,x))-y_{\varepsilon }w_{\varepsilon }\right) dxdt \\ &\leq &\limsup\limits_{\varepsilon \rightarrow 0}\int_{Q}\left( j(t,y(t,x))+j^{\ast }(t,w(t,x))-y_{\varepsilon }w_{\varepsilon }\right) dxdt \\ &\leq &\int_{Q}\left( j(t,y(t,x))+j^{\ast }(t,w(t,x))\right) dxdt-\lim\limits_{\varepsilon \rightarrow 0}\int_{Q}y_{\varepsilon }w_{\varepsilon }dxdt, \end{eqnarray*} whence, by using (\ref{66}), we see that \begin{equation} -\int_{Q}ywdxdt\leq \frac{1}{2}\left\Vert y(T)\right\Vert _{V^{\prime }}^{2}- \frac{1}{2}\left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}-\int_{Q}yA_{0,\infty }^{-1}fdxdt. \label{72} \end{equation} We continue the proof by relying on the same arguments, starting this time with Fatou's lemma applied to the nonnegative function $j(t,-y_{\varepsilon })+j^{\ast }(t,w_{\varepsilon })+y_{\varepsilon }w_{\varepsilon }$. 
By similar computations we get \begin{equation*} -\int_{Q}ywdxdt\geq \frac{1}{2}\left\Vert y(T)\right\Vert _{V^{\prime }}^{2}- \frac{1}{2}\left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}-\int_{Q}yA_{0,\infty }^{-1}fdxdt, \end{equation*} which, together with (\ref{72}), implies (\ref{60}).
$\square $
Next we recall one of the main results of \cite{gm-jota-var-1}, proved there in a more general setting but particularized here to the space $L^{2}(Q).$
Let us consider the problem \begin{eqnarray} \frac{\partial y}{\partial t}-\Delta \widetilde{\beta }(t,x,y) &\ni &f\mbox{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ in }Q, \notag \\ -\frac{\partial \widetilde{\beta }(t,x,y)}{\partial \nu } &=&\alpha \widetilde{\beta }(t,x,y)\mbox{ \ \ on }\Sigma , \label{si-1-0} \\ y(0,x) &=&y_{0}\mbox{ \ \ \ \ \ \ \ \ \ \ \ \ \ in }\Omega , \notag \end{eqnarray} where $\widetilde{\beta }(t,x,r)=\partial \varphi (t,x,r)$ a.e. on $Q,$ for all $r\in \mathbb{R},$ and $\varphi (t,x,\cdot ):\mathbb{R}\rightarrow \mathbb{R}$ is a proper, convex, l.s.c. function satisfying $(h_{1}),$ $(h_{2})$ and, in addition, the growth condition \begin{equation} C_{1}\left\vert r\right\vert ^{2}+C_{1}^{0}\leq \varphi (t,x,r)\leq C_{2}\left\vert r\right\vert ^{2}+C_{2}^{0},\mbox{ for all }r\in \mathbb{R}, \mbox{ a.e. }(t,x)\in Q. \label{si-1-1} \end{equation} ($C_{i},$ $C_{i}^{0}$ are constants$,$ $i=1,2,$ and $C_{1}>0.$)
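As a simple illustration of (\ref{si-1-1}) (not taken from \cite{gm-jota-var-1}), one may consider the quadratic potential $\varphi (t,x,r)=\frac{1}{2}a(t,x)r^{2}$ with $0<c\leq a(t,x)\leq \overline{C}$ a.e. on $Q;$ then (\ref{si-1-1}) holds with $C_{1}=\frac{c}{2},$ $C_{2}=\frac{\overline{C}}{2},$ $C_{1}^{0}=C_{2}^{0}=0,$ and $\widetilde{\beta }(t,x,r)=a(t,x)r,$ so that (\ref{si-1-0}) reduces to the linear diffusion equation \begin{equation*} \frac{\partial y}{\partial t}-\Delta (a(t,x)y)=f. \end{equation*}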
We consider the minimization problem \begin{equation} \mbox{Minimize }J_{0}(y,w)=\int_{Q}\left( \varphi (t,x,y(t,x))+\varphi ^{\ast }(t,x,w(t,x))-w(t,x)y(t,x)\right) dxdt \tag{$P_{0}$} \end{equation} for all $(y,w)\in U_{0},$ where \begin{multline*} U_{0}=\{(y,w);\mbox{ }y\in L^{2}(Q)\cap W^{1,2}([0,T];X_{2}^{\prime }),\mbox{ }y(T)\in V^{\prime },\mbox{ }w\in L^{2}(Q), \\ \varphi (\cdot ,\cdot ,y)\in L^{1}(Q),\mbox{ }\varphi ^{\ast }(\cdot ,\cdot ,w)\in L^{1}(Q), \\ (y,w)\mbox{ verifies (\ref{76}) below}\}, \end{multline*} \begin{eqnarray} \frac{dy}{dt}(t)+\widetilde{A_{2}}w(t) &=&f(t)\mbox{ a.e. }t\in (0,T), \label{76} \\ y(0) &=&y_{0}. \notag \end{eqnarray} We recall the notations $X_{2},$ $X_{2}^{\prime },$ $\widetilde{A_{2}}$ given in Section 2.1.
In \cite{gm-jota-var-1} it was proved that $(P_{0})$ has at least one solution, and the equivalence between (\ref{si-1-0}) and $(P_{0})$ was established; this is summarized below (see Theorem 3.2 in \cite{gm-jota-var-1}).
\noindent \textbf{Theorem 4.2.} \textit{Let }$y_{0}\in V^{\prime },$ $f\in L^{\infty }(Q),$ \textit{and} \textit{let the pair }$(y,w)\in U_{0}$\textit{ \ be a solution to }$(P_{0}).$\textit{\ Then,} \begin{equation*} w(t,x)\in \widetilde{\beta }(t,x,y(t,x))\mbox{ \textit{a.e.} }(t,x)\in Q \end{equation*} \textit{and }$(y,w)$\textit{\ is the unique weak solution to }(\ref{si-1-0}). \textit{Moreover,} \begin{eqnarray} &&-\int_{0}^{t}\int_{\Omega }ywdxd\tau \label{300} \\ &=&\frac{1}{2}\left\{ \left\Vert y(t)\right\Vert _{V^{\prime }}^{2}-\left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}\right\} -\int_{0}^{t}\int_{\Omega }yA_{0,\infty }^{-1}fdxd\tau ,\mbox{ for all }t\in \lbrack 0,T]. \notag \end{eqnarray}
Of course, the result remains true when $\varphi $ does not depend on $x$.
We can now pass to the main result of this section, which shows that a null minimizer in $(P)$ provides the unique weak solution to (\ref{si-1}).
\noindent \textbf{Theorem 4.3. }\textit{Under the assumptions }$(h_{1})-(h_{2}),$ (\ref{si-beta-j})-(\ref{si-9-2-0}), (\ref{si-9-7-01})-(\ref{si-9-7-02}), (\ref{si-9-5})\textit{, problem }$(P)$ \textit{has a solution }$(y^{\ast },w^{\ast })$\textit{\ such that }$y^{\ast }\in L^{\infty }(0,T;V^{\prime }).$ \textit{This solution is a null minimizer in }$(P),$ \begin{equation} J(y^{\ast },w^{\ast })=\inf_{(y,w)\in U}J(y,w)=0, \label{Jinf} \end{equation} \textit{and it is the unique weak solution to }(\ref{si-1})\textit{.}
\noindent \textbf{Proof. }Let us introduce the approximating problem \begin{eqnarray} \frac{\partial y}{\partial t}-\Delta \beta _{\lambda }(t,y) &=&f\mbox{ \ \ \ \ \ \ \ \ \ \ \ \ \ in }Q, \notag \\ -\frac{\partial \beta _{\lambda }(t,y)}{\partial \nu } &=&\alpha \beta _{\lambda }(t,y)\mbox{ \ \ on }\Sigma , \label{73} \\ y(0,x) &=&y_{0}\mbox{ \ \ \ \ \ \ \ \ \ \ \ \ in }\Omega , \notag \end{eqnarray} where $\beta _{\lambda }$ is the Yosida approximation of $\beta .$
\noindent Let $\sigma $ be positive and consider the approximating problem indexed by $\sigma ,$ \begin{eqnarray} \frac{\partial y}{\partial t}-\Delta (\beta _{\lambda }(t,y)+\sigma y) &=&f \mbox{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ in }Q, \notag \\ -\frac{\partial (\beta _{\lambda }(t,y)+\sigma y)}{\partial \nu } &=&\alpha (\beta _{\lambda }(t,y)+\sigma y)\mbox{ \ on }\Sigma , \label{74} \\ y(0,x) &=&y_{0}\mbox{ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ in }\Omega . \notag \end{eqnarray} The potential of $\beta _{\lambda }(t,r)+\sigma r$ is \begin{equation} j_{\lambda ,\sigma }(t,r)=j_{\lambda }(t,r)+\frac{\sigma }{2}r^{2}, \label{74-0} \end{equation} where $j_{\lambda }$ is the Moreau regularization of $j.$ By a simple computation using (\ref{70}), (\ref{si-9-7-01}), (\ref{si-9-2-0}) we get that \begin{equation} \frac{\sigma }{2}\left\vert r\right\vert ^{2}+k_{1}r+k_{2}-2\lambda k_{1}^{2}\leq j_{\lambda ,\sigma }(t,r)\leq j(t,0)+\left\vert r\right\vert ^{2}\left( \frac{1}{2\lambda }+\frac{\sigma }{2}\right) . \label{74-0-1} \end{equation} Hence $j_{\lambda ,\sigma }$ satisfies (\ref{si-1-1}) and we rely on Theorem 4.2 with $\varphi (t,r)=j_{\lambda ,\sigma }(t,r)$ and $\widetilde{\beta }(t,r)=\beta _{\lambda }(t,r)+\sigma r$ to get that (\ref{74}) has a unique weak solution $(y_{\lambda ,\sigma },w_{\lambda ,\sigma })\in U_{0},$ \begin{eqnarray*} y_{\lambda ,\sigma } &\in &L^{2}(Q)\cap W^{1,2}([0,T];X_{2}^{\prime }),\mbox{ }y_{\lambda ,\sigma }(T)\in V^{\prime }, \\ w_{\lambda ,\sigma } &=&\beta _{\lambda }(t,y_{\lambda ,\sigma })+\sigma y_{\lambda ,\sigma }\in L^{2}(Q). 
\end{eqnarray*} This solution is the null minimizer in $(P_{0}),$ i.e., \begin{equation} J_{0}(y_{\lambda ,\sigma },w_{\lambda ,\sigma })=\int_{Q}\left( j_{\lambda ,\sigma }(t,y_{\lambda ,\sigma }(t,x))+j_{\lambda ,\sigma }^{\ast }(t,w_{\lambda ,\sigma }(t,x))-y_{\lambda ,\sigma }w_{\lambda ,\sigma }\right) dxdt=0, \label{77} \end{equation} and satisfies (\ref{76}), namely \begin{eqnarray} \frac{dy_{\lambda ,\sigma }}{dt}(t)+\widetilde{A_{2}}w_{\lambda ,\sigma }(t) &=&f(t)\mbox{ a.e. }t\in (0,T), \label{76-0} \\ y_{\lambda ,\sigma }(0) &=&y_{0}. \notag \end{eqnarray} Moreover, we have by (\ref{300}) that \begin{eqnarray} &&-\int_{0}^{t}\int_{\Omega }y_{\lambda ,\sigma }w_{\lambda ,\sigma }dxd\tau \label{si-9-1} \\ &=&\frac{1}{2}\left\{ \left\Vert y_{\lambda ,\sigma }(t)\right\Vert _{V^{\prime }}^{2}-\left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}\right\} -\int_{0}^{t}\int_{\Omega }y_{\lambda ,\sigma }A_{0,\infty }^{-1}fdxd\tau , \mbox{ } \notag \end{eqnarray} for all $t\in \lbrack 0,T].$ Taking into account (\ref{si-9-1}) and (\ref{77}) we can rewrite this as \begin{eqnarray} &&\int_{Q}(j_{\lambda ,\sigma }(t,y_{\lambda ,\sigma }(t,x))+j_{\lambda ,\sigma }^{\ast }(t,w_{\lambda ,\sigma }(t,x)))dxdt \label{78} \\ &&+\frac{1}{2}\left\{ \left\Vert y_{\lambda ,\sigma }(T)\right\Vert _{V^{\prime }}^{2}-\left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}\right\} =\int_{0}^{T}\int_{\Omega }y_{\lambda ,\sigma }A_{0,\infty }^{-1}fdxdt. 
\notag \end{eqnarray} We note that \begin{equation} \frac{j_{\lambda ,\sigma }^{\ast }(t,\omega )}{\left\vert \omega \right\vert }\rightarrow \infty \mbox{ as }\left\vert \omega \right\vert \rightarrow \infty , \label{78-0} \end{equation} uniformly in $\lambda $ and $\sigma .$ This follows from (\ref{si-9-7}): setting \begin{equation*} \eta _{\lambda ,\sigma }=\partial j_{\lambda ,\sigma }(t,r),\mbox{ }\eta _{\lambda ,\sigma }=\beta _{\lambda }(t,r)+\sigma r=\beta (t,(1+\lambda \beta (t,\cdot ))^{-1}r)+\sigma r, \end{equation*} we see that $\eta _{\lambda ,\sigma }$ is bounded on bounded sets $\left\vert r\right\vert \leq M,$ uniformly in $\lambda $ and $\sigma ,$ for $\lambda $ and $\sigma $ small (e.g., smaller than $1$).
We also note that \begin{eqnarray*} &&\int_{Q}j_{\lambda ,\sigma }(t,y_{\lambda ,\sigma }(t,x))dxdt\geq \int_{Q}j_{\lambda }(t,y_{\lambda ,\sigma }(t,x))dxdt \\ &=&\int_{Q}j(t,(1+\lambda \beta (t,\cdot ))^{-1}y_{\lambda ,\sigma })dxdt+\int_{Q}\frac{1}{2\lambda }\left\vert y_{\lambda ,\sigma }-(1+\lambda \beta (t,\cdot ))^{-1}y_{\lambda ,\sigma }\right\vert ^{2}dxdt \end{eqnarray*} and \begin{eqnarray*} &&\int_{0}^{T}\int_{\Omega }y_{\lambda ,\sigma }A_{0,\infty }^{-1}fdxdt=\int_{Q}\left( y_{\lambda ,\sigma }-(1+\lambda \beta (t,\cdot ))^{-1}y_{\lambda ,\sigma }\right) A_{0,\infty }^{-1}fdxdt \\ &&+\int_{Q}(1+\lambda \beta (t,\cdot ))^{-1}y_{\lambda ,\sigma }A_{0,\infty }^{-1}fdxdt \\ &\leq &\int_{Q}\frac{1}{2\lambda }\left\vert y_{\lambda ,\sigma }-(1+\lambda \beta (t,\cdot ))^{-1}y_{\lambda ,\sigma }\right\vert ^{2}dxdt+2\lambda \int_{Q}(A_{0,\infty }^{-1}f)^{2}dxdt \\ &&+\int_{Q}(1+\lambda \beta (t,\cdot ))^{-1}y_{\lambda ,\sigma }A_{0,\infty }^{-1}fdxdt. \end{eqnarray*} Plugging these into (\ref{78}), we get after some algebra that \begin{eqnarray} &&\int_{Q}j(t,(1+\lambda \beta (t,\cdot ))^{-1}y_{\lambda ,\sigma })dxdt+\int_{Q}j_{\lambda ,\sigma }^{\ast }(t,w_{\lambda ,\sigma }(t,x))dxdt \label{78-1} \\ &&+\frac{1}{2}\left\{ \left\Vert y_{\lambda ,\sigma }(T)\right\Vert _{V^{\prime }}^{2}-\left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}\right\} \notag \\ &\leq &\int_{Q}(1+\lambda \beta (t,\cdot ))^{-1}y_{\lambda ,\sigma }A_{0,\infty }^{-1}fdxdt+2\lambda \int_{Q}(A_{0,\infty }^{-1}f)^{2}dxdt. 
\notag \end{eqnarray} Further we set \begin{equation*} (1+\lambda \beta (t,\cdot ))^{-1}y_{\lambda ,\sigma }=z_{\lambda ,\sigma } \end{equation*} and argue as in Theorem 3.2 to deduce, by the Dunford-Pettis theorem, that $(z_{\lambda ,\sigma })_{\sigma }$ and $(w_{\lambda ,\sigma })_{\sigma }$ are weakly compact in $L^{1}(Q).$ Recalling (\ref{bound}) we also get \begin{equation} \left\Vert y_{\lambda ,\sigma }(T)\right\Vert _{V^{\prime }}\leq C \label{79} \end{equation} independently of $\sigma $ and $\lambda $.
Taking into account that $w_{\lambda ,\sigma }=\beta _{\lambda }(y_{\lambda ,\sigma })+\sigma y_{\lambda ,\sigma },$ equation (\ref{si-9-1}) yields \begin{eqnarray*} &&\frac{1}{2}\left\Vert y_{\lambda ,\sigma }(t)\right\Vert _{V^{\prime }}^{2}+\int_{0}^{t}\int_{\Omega }(\beta _{\lambda }(\tau ,y_{\lambda ,\sigma })y_{\lambda ,\sigma }+\sigma y_{\lambda ,\sigma }^{2})dxd\tau \\ &=&\frac{1}{2}\left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}+\int_{0}^{t}\left\langle y_{\lambda ,\sigma }(\tau ),A_{0,\infty }^{-1}f(\tau )\right\rangle _{V^{\prime },V}d\tau \\ &\leq &\frac{1}{2}\left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}+\frac{1}{2} \int_{0}^{t}\left\Vert A_{0,\infty }^{-1}f(\tau )\right\Vert _{V}^{2}d\tau + \frac{1}{2}\int_{0}^{t}\left\Vert y_{\lambda ,\sigma }(\tau )\right\Vert _{V^{\prime }}^{2}d\tau , \end{eqnarray*} for all $t\in \lbrack 0,T].$
Taking into account that $\beta _{\lambda }(t,r)r\geq 0,$ for all $r\in \mathbb{R},$ and by virtue of Gronwall's lemma, we deduce that \begin{equation} \left\Vert y_{\lambda ,\sigma }\right\Vert _{L^{\infty }(0,T;V^{\prime })}\leq C, \label{79-0} \end{equation} \begin{equation*} \sqrt{\sigma }\left\Vert y_{\lambda ,\sigma }\right\Vert _{L^{2}(Q)}\leq C, \end{equation*} and \begin{equation} \int_{Q}j(t,z_{\lambda ,\sigma }(t,x))dxdt\leq C,\mbox{ }\int_{Q}j_{\lambda ,\sigma }^{\ast }(t,w_{\lambda ,\sigma }(t,x))dxdt\leq C \label{79-1} \end{equation} independently of $\sigma $ and $\lambda .$ (To obtain (\ref{79-1}) we recall the arguments leading to (\ref{101-1}) and (\ref{101-2}).)
Then, (\ref{78}) and relation (\ref{si-9-7-01}) for $j_{\lambda ,\sigma }^{\ast }$ imply that \begin{equation} \int_{Q}j_{\lambda ,\sigma }(t,y_{\lambda ,\sigma }(t,x))dxdt\leq C \label{79-2} \end{equation} independently of $\sigma $ and $\lambda .$ Following the proof of Theorem 3.2 we deduce that \begin{eqnarray*} z_{\lambda ,\sigma } &\rightarrow &z_{\lambda }\mbox{ weakly in }L^{1}(Q), \mbox{ as }\sigma \rightarrow 0, \\ w_{\lambda ,\sigma } &\rightarrow &w_{\lambda }\mbox{ weakly in }L^{1}(Q), \mbox{ as }\sigma \rightarrow 0, \\ \sqrt{\sigma }y_{\lambda ,\sigma } &\rightarrow &\zeta _{\lambda }\mbox{ weakly in }L^{2}(Q),\mbox{ as }\sigma \rightarrow 0, \\ y_{\lambda ,\sigma } &\rightarrow &y_{\lambda }\mbox{ weak-star in } L^{\infty }(0,T;V^{\prime }),\mbox{ as }\sigma \rightarrow 0, \\ y_{\lambda ,\sigma }(T) &\rightarrow &\xi \mbox{ weakly in }V^{\prime }, \mbox{ as }\sigma \rightarrow 0, \\ Aw_{\lambda ,\sigma } &\rightarrow &Aw_{\lambda }\mbox{ weakly in } L^{1}(0,T;X^{\prime }),\mbox{ as }\sigma \rightarrow 0, \\ \frac{dy_{\lambda ,\sigma }}{dt} &\rightarrow &\frac{dy_{\lambda }}{dt}\mbox{ weakly in }L^{1}(0,T;X^{\prime }),\mbox{ as }\sigma \rightarrow 0. \end{eqnarray*} By (\ref{79-2}) and (\ref{70-0}) we have \begin{equation*} \int_{Q}\frac{1}{2\lambda }\left\vert y_{\lambda ,\sigma }-(1+\lambda \beta (t,\cdot ))^{-1}y_{\lambda ,\sigma }\right\vert ^{2}dxdt\leq \int_{Q}j_{\lambda ,\sigma }(t,y_{\lambda ,\sigma }(t,x))dxdt\leq C, \end{equation*} whence, denoting $\chi _{\lambda ,\sigma }=\left( y_{\lambda ,\sigma }-z_{\lambda ,\sigma }\right) /\sqrt{2\lambda },$ we see that $(\chi _{\lambda ,\sigma })_{\sigma }$ is bounded in $L^{2}(Q)$ and $\chi _{\lambda ,\sigma }\rightarrow \chi _{\lambda }$ weakly in $L^{2}(Q),$ as $\sigma \rightarrow 0,$ on a subsequence. 
Then \begin{equation*} y_{\lambda ,\sigma }-z_{\lambda ,\sigma }\rightarrow \sqrt{2\lambda }\chi _{\lambda }\mbox{ weakly in }L^{1}(Q),\mbox{ as }\sigma \rightarrow 0, \end{equation*} where $\left\Vert \chi _{\lambda }\right\Vert _{L^{1}(Q)}\leq C.$ Since $z_{\lambda ,\sigma }\rightarrow z_{\lambda }$ weakly in $L^{1}(Q),$ it follows that $(y_{\lambda ,\sigma })_{\sigma }$ is bounded in $L^{1}(Q),$ so it converges weakly and by the uniqueness of the limit we have \begin{equation*} y_{\lambda ,\sigma }\rightarrow y_{\lambda }\mbox{ weakly in }L^{1}(Q),\mbox{ as }\sigma \rightarrow 0. \end{equation*} We also have \begin{equation} y_{\lambda }=z_{\lambda }+\sqrt{2\lambda }\chi _{\lambda }\mbox{ a.e. on }Q. \label{79-3} \end{equation} By the Arzel\`{a}-Ascoli theorem (since $V^{\prime }$ is compactly embedded in $X^{\prime }$ because $X$ is compactly embedded in $V$) it follows that \begin{equation*} y_{\lambda ,\sigma }(t)\rightarrow y_{\lambda }(t)\mbox{ in }X^{\prime }, \mbox{ uniformly in }t\in \lbrack 0,T],\mbox{ as }\sigma \rightarrow 0, \end{equation*} so $\xi =y_{\lambda }(T)$ and $y_{\lambda }(0)=y_{0}.$
Passing to the limit in (\ref{76-0}) we get that $(y_{\lambda },w_{\lambda })$ satisfies \begin{eqnarray} \frac{dy_{\lambda }}{dt}(t)+Aw_{\lambda }(t) &=&f(t)\mbox{ a.e. }t\in (0,T), \label{76-00} \\ y_{\lambda }(0) &=&y_{0}. \notag \end{eqnarray}
Passing to the limit in (\ref{78-1}) as $\sigma \rightarrow 0$ and using the weak lower semicontinuity property, we get \begin{eqnarray} \int_{Q}(j(t,z_{\lambda }(t,x))+j_{\lambda }^{\ast }(t,w_{\lambda }(t,x)))dxdt && \label{82} \\ +\frac{1}{2}\left\{ \left\Vert y_{\lambda }(T)\right\Vert _{V^{\prime }}^{2}-\left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}\right\} -\int_{Q}y_{\lambda }A_{0,\infty }^{-1}fdxdt-2\lambda \int_{Q}(A_{0,\infty }^{-1}f)^{2}dxdt &\leq &0. \notag \end{eqnarray}
We repeat the arguments developed in Theorem 3.2 and deduce by the Dunford-Pettis theorem that $(z_{\lambda })_{\lambda }$ and $(w_{\lambda })_{\lambda }$ are weakly compact in $L^{1}(Q).$ It also follows that \begin{equation} \left\Vert z_{\lambda }(T)\right\Vert _{V^{\prime }}\leq C,\mbox{ } \int_{Q}j(t,z_{\lambda }(t,x))dxdt\leq C,\mbox{ }\int_{Q}j_{\lambda }^{\ast }(t,w_{\lambda }(t,x))dxdt\leq C \label{82-1} \end{equation} independently of $\lambda $ (recall (\ref{bound}), (\ref{101-1}), (\ref{101-2})). Passing to the limit in (\ref{79-0}) as $\sigma \rightarrow 0$ we get \begin{equation*} \left\Vert y_{\lambda }\right\Vert _{L^{\infty }(0,T;V^{\prime })}\leq C, \end{equation*} where $C$ denotes various constants independent of $\lambda .$
Then, proceeding along the lines of the proof of Theorem 3.2 we obtain from (\ref{82}), by selecting subsequences, that \begin{eqnarray*} z_{\lambda } &\rightarrow &z^{\ast }\mbox{ weakly in }L^{1}(Q),\mbox{ as } \lambda \rightarrow 0, \\ w_{\lambda } &\rightarrow &w^{\ast }\mbox{ weakly in }L^{1}(Q),\mbox{ as } \lambda \rightarrow 0, \\ y_{\lambda } &\rightarrow &y^{\ast }\mbox{ weak-star in }L^{\infty }(0,T;V^{\prime }),\mbox{ as }\lambda \rightarrow 0, \\ y_{\lambda }(T) &\rightarrow &y^{\ast }(T)\mbox{ weakly in }V^{\prime }, \mbox{ as }\lambda \rightarrow 0, \\ Aw_{\lambda } &\rightarrow &Aw^{\ast }\mbox{ weakly in }L^{1}(0,T;X^{\prime }),\mbox{ as }\lambda \rightarrow 0, \\ \frac{dy_{\lambda }}{dt} &\rightarrow &\frac{dy^{\ast }}{dt}\mbox{ weakly in }L^{1}(0,T;X^{\prime }),\mbox{ as }\lambda \rightarrow 0. \end{eqnarray*} By (\ref{79-3}) we get that $z^{\ast }=y^{\ast }$ a.e. on $Q$ and by (\ref{82-1}) we obtain \begin{equation} \left\Vert y^{\ast }(T)\right\Vert _{V^{\prime }}\leq C,\mbox{ } \int_{Q}j(t,y^{\ast }(t,x))dxdt\leq C,\mbox{ }\int_{Q}j^{\ast }(t,w^{\ast }(t,x))dxdt\leq C. \label{82-3} \end{equation} The first inequality is obvious. For the second, if $j(t,r)\geq 0,$ Fatou's lemma yields \begin{equation} \int_{Q}j(t,y^{\ast }(t,x))dxdt=\int_{Q}\liminf\limits_{\lambda \rightarrow 0}j(t,z_{\lambda }(t,x))dxdt\leq \liminf\limits_{\lambda \rightarrow 0}\int_{Q}j(t,z_{\lambda }(t,x))dxdt\leq C. \label{82-2} \end{equation} If $j$ is not positive, we use again (\ref{si-9-7-01}) and, denoting $\widetilde{j}(t,r)=j(t,r)-k_{1}r-k_{2}\geq 0,$ we write \begin{equation*} \int_{Q}\widetilde{j}(t,y^{\ast }(t,x))dxdt\leq \liminf\limits_{\lambda \rightarrow 0}\int_{Q}\widetilde{j}(t,z_{\lambda }(t,x))dxdt, \end{equation*} whence we get \begin{eqnarray*} &&\int_{Q}j(t,y^{\ast }(t,x))dxdt-\int_{Q}(k_{1}y^{\ast }+k_{2})dxdt \\ &\leq &\liminf\limits_{\lambda \rightarrow 0}\int_{Q}j(t,z_{\lambda }(t,x))dxdt-\int_{Q}(k_{1}y^{\ast }+k_{2})dxdt, \end{eqnarray*} i.e., (\ref{82-2}). 
As concerns the third inequality in (\ref{82-3}), by (\ref{82-1}) we can write \begin{eqnarray} &&\int_{Q}\left( \frac{1}{2\lambda }\left\vert w_{\lambda }-(1+\lambda \beta ^{-1}(t,\cdot ))^{-1}w_{\lambda }\right\vert ^{2}+j^{\ast }(t,(1+\lambda \beta ^{-1}(t,\cdot ))^{-1}w_{\lambda })\right) dxdt \label{82-4} \\ &\leq &\int_{Q}j_{\lambda }^{\ast }(t,w_{\lambda }(t,x))dxdt\leq C. \notag \end{eqnarray} Recalling that $j^{\ast }(t,x,\omega )\geq k_{3}(t,x)\omega +k_{4}(t,x)$ we get that \begin{equation*} k_{3}\int_{Q}(1+\lambda \beta ^{-1}(t,\cdot ))^{-1}w_{\lambda }dxdt\leq C+k_{4}\mbox{ meas}(Q), \end{equation*} hence $\left( (1+\lambda \beta ^{-1}(t,\cdot ))^{-1}w_{\lambda }\right) _{\lambda }$ is bounded in $L^{1}(Q).$ Then \begin{equation*} \int_{Q}\frac{1}{2\lambda }\left\vert w_{\lambda }-(1+\lambda \beta ^{-1}(t,\cdot ))^{-1}w_{\lambda }\right\vert ^{2}dxdt\leq C+\int_{Q}(k_{3}(1+\lambda \beta ^{-1}(t,\cdot ))^{-1}w_{\lambda }+k_{4})dxdt\leq C_{1}. \end{equation*} It follows that \begin{equation*} (1+\lambda \beta ^{-1}(t,\cdot ))^{-1}w_{\lambda }\rightarrow w^{\ast }\mbox{ weakly in }L^{1}(Q),\mbox{ as }\lambda \rightarrow 0. \end{equation*} Then we pass to the limit in (\ref{82-4}) as $\lambda \rightarrow 0$ (if $j^{\ast }$ is nonnegative); otherwise we use again (\ref{si-9-7-01}) for $j^{\ast }.$
Passing to the limit in (\ref{76-00}) and (\ref{82}) as $\lambda \rightarrow 0$ we get \begin{eqnarray} \frac{dy^{\ast }}{dt}(t)+Aw^{\ast }(t) &=&f(t)\mbox{ a.e. }t\in (0,T), \label{76-000} \\ y^{\ast }(0) &=&y_{0}, \notag \end{eqnarray} and, again by the weak lower semicontinuity, \begin{eqnarray} &&\int_{Q}(j(t,y^{\ast }(t,x))+j^{\ast }(t,w^{\ast }(t,x)))dxdt \label{83} \\ &&+\frac{1}{2}\left\{ \left\Vert y^{\ast }(T)\right\Vert _{V^{\prime }}^{2}-\left\Vert y_{0}\right\Vert _{V^{\prime }}^{2}\right\} -\int_{Q}y^{\ast }A_{0,\infty }^{-1}fdxdt\leq 0. \notag \end{eqnarray} We have shown that $(y^{\ast },w^{\ast })\in U,$ $y^{\ast }\in L^{\infty }(0,T;V^{\prime }),$ and so by Lemma 4.1 it follows that $y^{\ast }w^{\ast }\in L^{1}(Q).$ Replacing the sum of the last two terms on the left-hand side of (\ref{83}) by (\ref{60}) we get \begin{equation*} \int_{Q}(j(t,y^{\ast }(t,x))+j^{\ast }(t,w^{\ast }(t,x))-y^{\ast }(t,x)w^{\ast }(t,x))dxdt\leq 0. \end{equation*} Recalling (\ref{si-4-1}) we obtain \begin{equation} \int_{Q}(j(t,y^{\ast }(t,x))+j^{\ast }(t,w^{\ast }(t,x))-y^{\ast }(t,x)w^{\ast }(t,x))dxdt=0, \label{84} \end{equation} which eventually implies that \begin{equation*} j(t,y^{\ast }(t,x))+j^{\ast }(t,w^{\ast }(t,x))-y^{\ast }(t,x)w^{\ast }(t,x)=0\mbox{ a.e. on }Q. \end{equation*} Therefore, we conclude that $w^{\ast }(t,x)\in \beta (t,y^{\ast }(t,x))$ a.e. on $Q,$ by the Legendre-Fenchel relations.
On the other hand, due again to (\ref{60}) in Lemma 4.1, relation (\ref{84}) means in fact that $(y^{\ast },w^{\ast })$ realizes the minimum in $(P),$ as claimed in (\ref{Jinf}).
The uniqueness follows directly from (\ref{si-1}) using the monotonicity of $\beta .$
Indeed, let $(y,\eta )$ and $(\widetilde{y},\widetilde{\eta })$ be two solutions to (\ref{si-1}) corresponding to the same data, where $\eta (t,x)\in \beta (t,x,y(t,x)),$ $\widetilde{\eta }(t,x)\in \beta (t,x,\widetilde{y}(t,x))$ a.e. on $Q,$ $(y,\eta )$ and $(\widetilde{y},\widetilde{\eta })$ belong to $U,$ and $y,\widetilde{y}\in L^{\infty }(0,T;V^{\prime }).$ We write the equation satisfied by their difference, \begin{eqnarray*} \frac{d(y-\widetilde{y})}{dt}(t)+A(\eta -\widetilde{\eta })(t) &=&0\mbox{ a.e. }t\in (0,T), \\ (y-\widetilde{y})(0) &=&0, \end{eqnarray*} multiply the equation by $A_{0,\infty }^{-1}(y-\widetilde{y})(t)$ and integrate over $(0,t)$ to obtain \begin{equation*} \frac{1}{2}\left\Vert (y-\widetilde{y})(t)\right\Vert _{V^{\prime }}^{2}+\int_{0}^{t}\int_{\Omega }\left( \eta -\widetilde{\eta }\right) (y-\widetilde{y})dxd\tau =0. \end{equation*} But $\beta (t,\cdot )$ is maximal monotone, hence we get $\left\Vert y(t)-\widetilde{y}(t)\right\Vert _{V^{\prime }}^{2}\leq 0,$ whence $y(t)=\widetilde{y}(t)$ for all $t\in \lbrack 0,T].$
$\square $
We remark now that if $(y^{\ast },\eta ^{\ast })$ is the solution to (\ref{si-1}), with $\eta ^{\ast }(t,x)\in \beta (t,y^{\ast }(t,x))$ a.e. on $Q$ and $y^{\ast }\in L^{\infty }(0,T;V^{\prime }),$ then it is the unique solution to $(P),$ because by Lemma 4.1, $J(y,w)\geq 0$ for any $(y,w)\in U$ and the minimum is attained at $(y^{\ast },\eta ^{\ast })$ since $J(y^{\ast },\eta ^{\ast })=0.$ So, we conclude that (\ref{si-1}) is equivalent to the minimization problem $(P)$.
\section{Conclusions}
This paper deals with the application of the Brezis-Ekeland principle to a nonlinear diffusion equation with a monotonically increasing, time-dependent nonlinearity whose potential has a weak coercivity property. The result states that the solution of the nonlinear equation can be retrieved as the null minimizer of an appropriate minimization problem for a convex functional involving the potential of the nonlinearity.
This approach is useful because it yields an existence proof in cases in which, due to the generality of the nonlinearity, standard methods do not apply. It can also lead to a simpler numerical computation of the solution by replacing its direct determination with the minimization of a convex functional subject to a linear state equation.
With respect to the literature concerning existence results for (\ref{si-1}), Theorem 4.3 provides existence under very general conditions on the nonlinear function $\beta .$ The assumption (\ref{si-9-5}) can be equivalently expressed as \begin{equation*} \limsup_{\left\vert r\right\vert \rightarrow \infty }\frac{j(t,-r)}{j(t,r)} <\infty \end{equation*} (see \cite{barbu-daprato}). Since in many concrete problems the solution to (\ref{si-1}) is nonnegative, so that $\beta $ need only be defined on $[0,\infty ),$ this condition can be met by extending $\beta $ in a convenient way to $(-\infty ,0)$; in particular, if $\beta (t,x,\cdot )$ is extended as an odd function, then $j(t,x,\cdot )$ is even and the limit superior above equals $1$. For instance, conditions (\ref{si-9-2})-(\ref{si-9-2-00})\ are satisfied for $\beta $ of the form \begin{equation*} \beta (t,x,r)=\mbox{sgn}(r)\log (\left\vert r\right\vert +a(t,x)),\mbox{ } a\geq a_{0}>0 \end{equation*} or \begin{equation*} \beta (t,x,r)=\mbox{sgn}(r)\exp (a(t,x)r^{2}),\mbox{ }a\geq a_{0}>0. \end{equation*}
Concerning possible applications, we remark that problem (\ref{si-1}) can be obtained by a change of variable in an equation of the form \begin{equation*} \frac{\partial (m(t,x)y)}{\partial t}-\Delta \beta _{0}(y)\ni f \end{equation*} which is associated with various physical models, for example: fluid diffusion in saturated-unsaturated deformable porous media with time- and space-dependent porosity $m$ (see appropriate problems in \cite{af-gm-12}, \cite{gm-cc-08}), or absorption-desorption processes in saturated porous media in which $m$ is the absorption-desorption rate of the fluid by the solid. The Robin boundary condition arising in (\ref{si-1}) was chosen because of its relevance in these physical models. Also, evolution equations with nonautonomous operators can be associated with models in which the boundary conditions are of time-dependent nonhomogeneous Dirichlet type, or nonlocal as in population dynamics (see \cite{cuiama-2}).
In all these problems the coefficient $m$ or the coefficients in the boundary conditions may have very low regularity, which makes it impossible to approach (\ref{si-1}) by the nonlinear semigroup method for the time-dependent case given in \cite{Crandall-Pazy-71}.
\noindent \textbf{Acknowledgment. }This work has been carried out within a grant of the Romanian National Authority for Scientific Research, CNCS-UEFISCDI, project number PN-II-ID-PCE-2011-3-0027. The author acknowledges the fellowship awarded by CIRM-Fondazione Bruno Kessler, Italy, in November 2012.
\end{document}
\begin{document}
\title{A Combinatorial Approach to the Symmetry of $q,t$-Catalan Numbers} \subjclass[2010]{05A19, 05A17, 05E05}
\date{\today}
\author{Kyungyong Lee} \address{Department of Mathematics \\
University of Nebraska --- Lincoln \\
Lincoln, NE 68588, USA \\ and Korea Institute for Advanced Study \\ Seoul 02455, Republic of Korea} \email{klee24@unl.edu; klee1@kias.re.kr} \thanks{The first author was supported by the Korea Institute for Advanced Study (KIAS), the AMS Centennial Fellowship, NSA grant H98230-14-1-0323, and the University of Nebraska --- Lincoln.}
\author{Li Li} \address{Department of Mathematics and Statistics \\
Oakland University \\
Rochester, MI 48309} \email{li2345@oakland.edu} \thanks{The second author was partially supported by the Oakland University URC Faculty Research Fellowship Award.}
\author{Nicholas A. Loehr} \address{Department of Mathematics \\
Virginia Tech \\
Blacksburg, VA 24061-0123 \\ and Department of Mathematics \\ United States Naval Academy \\ Annapolis, MD 21402-5002} \email{nloehr@vt.edu} \thanks{This work was partially supported by a grant from the Simons Foundation (\#244398 to Nicholas Loehr).}
\keywords{$q,t$-Catalan numbers, Dyck paths, dinv statistic, joint symmetry, integer partitions} \maketitle \begin{abstract} The \emph{$q,t$-Catalan numbers} $C_n(q,t)$ are polynomials in $q$ and $t$ that reduce to the ordinary Catalan numbers when $q=t=1$. These polynomials have important connections to representation theory, algebraic geometry, and symmetric functions. Haglund and Haiman discovered combinatorial formulas for $C_n(q,t)$ as weighted sums of Dyck paths (or equivalently, integer partitions contained in a staircase shape). This paper undertakes a combinatorial investigation of the joint symmetry property $C_n(q,t)=C_n(t,q)$. We conjecture some structural decompositions of Dyck objects into ``mutually opposite'' subcollections that lead to a bijective explanation of joint symmetry in certain cases. A key new idea is the construction of infinite chains of partitions that are independent of $n$ but induce the joint symmetry for all $n$ simultaneously. Using these methods, we prove combinatorially that for $0\leq k\leq 9$ and all $n$, the terms in $C_n(q,t)$ of total degree $\binom{n}{2}-k$ have the required symmetry property. \end{abstract}
\section{Introduction} \label{sec:intro}
\subsection{Background on $q,t$-Catalan Numbers.} \label{subsec:background-qtcat}
The \emph{$q,t$-Catalan numbers} were introduced in 1996 by Garsia and Haiman as part of an ongoing study of Macdonald's symmetric polynomials and the representation theory of the diagonal harmonics modules~\cite{GH-qtcat}. Garsia and Haiman's original definition has the form
\begin{equation}\label{eq:GH-qtcat-def}
C_n(q,t)=\sum_{\mu\in\Par(n)} r_{\mu}(q,t), \end{equation} where $\Par(n)$ is the set of integer partitions of $n$, and $r_{\mu}(q,t)$ is a complicated rational function in $q$ and $t$ built up from the arms, legs, coarms, and colegs of the cells in the Ferrers diagram of $\mu$. (These terms will be defined below; for general background on partitions and symmetric functions, we refer the reader to~\cite{macbook,stanvol2}. See Haglund's book~\cite{hag-book} for an extensive discussion of $q,t$-Catalan numbers.)
Let $\mu'$ be the conjugate of the partition $\mu$, whose diagram is obtained by transposing the diagram of $\mu$. It follows from the definition of the rational functions $r_{\mu}(q,t)$ that $r_{\mu'}(q,t)=r_{\mu}(t,q)$. Replacing the summation index $\mu$ by $\mu'$ in~\eqref{eq:GH-qtcat-def}, we deduce the \emph{joint symmetry property} \[ C_n(q,t)=C_n(t,q)\qquad(n\geq 0). \]
Another much less obvious property of the original $q,t$-Catalan numbers is that each $C_n(q,t)$ is a \emph{polynomial} in $q$ and $t$ with \emph{nonnegative integer} coefficients. Garsia and Haiman were able to prove that the specialization $C_n(q,1)$ was a polynomial obtained by summing terms $q^{\area(\pi)}$, where $\pi$ is a \emph{Dyck path of order $n$} and $\area(\pi)$ is the number of area cells below the path. (See below for more detailed definitions.) This suggested that there might be a second statistic $\wt(\pi)$ on Dyck paths, such that $C_n(q,t)=\sum_{\pi} q^{\area(\pi)}t^{\wt(\pi)}$. One such statistic, called the \emph{bounce statistic}, was conjectured by Haglund in 2003~\cite{hag-qtcatconj}. Upon hearing from Garsia that Haglund had found a statistic, Haiman quickly discovered another statistic called \emph{dinv} that also appeared to work. The two conjectures were found to be equivalent via a bijection on Dyck paths that sends the pair of statistics $(\area,\bounce)$ to the pair of statistics $(\dinv,\area)$. In particular, this implies that all three statistics have the same distribution on Dyck paths of order $n$. Garsia and Haglund were eventually able to prove that Haglund's combinatorial formula for $C_n(q,t)$ was indeed correct. Their proof (announced in~\cite{GHag-PNAS} and given in detail in~\cite{GHag-qtcatpf}) consists of extremely intricate algebraic manipulations of symmetric polynomials, making extensive use of plethystic calculus.
Since the original $q,t$-Catalan numbers are jointly symmetric in $q$ and $t$, the same must be true of the various combinatorial formulas for $q,t$-Catalan numbers. In particular, there must exist a bijection on Dyck paths of order $n$ interchanging the area and bounce statistics, and there must exist another bijection interchanging area and dinv. However, at this time, no one has been able to construct such bijections valid for all $n$. The main goal of this paper is to investigate the joint symmetry of $q,t$-Catalan numbers from a combinatorial viewpoint. Roughly speaking, we prove combinatorially that the homogeneous components of $C_n(q,t)$ of sufficiently high degree have the desired joint symmetry property, and we formulate several conjectures that provide a general framework for understanding joint symmetry.
\subsection{Definition of $q,t$-Catalan Numbers based on Dyck Vectors.} \label{subsec:def-dyck-vectors}
To state our results more precisely, we must first review the details of the combinatorial definition of $q,t$-Catalan numbers based on the area and dinv statistics. The definition can be formulated in three equivalent ways, using combinatorial objects called \emph{Dyck vectors}, \emph{Dyck paths}, or \emph{Dyck partitions}, which are all illustrated in Figure~\ref{fig:object} below. The simplest formulation involves Dyck vectors (also called \emph{Dyck sequences} or the \emph{area vectors of Dyck paths}), so we discuss these first.
\begin{definition} A \emph{Dyck vector} is a finite sequence $v=(v_1,v_2,\ldots,v_n)\in\mathbb{Z}^n$ such that $v_1=0$, every $v_i\geq 0$, and $v_{i+1}\leq v_i+1$ for all $i<n$. The \emph{area} of a Dyck vector is $\area(v)=v_1+v_2+\cdots+v_n$. The \emph{diagonal inversion count} of a Dyck vector, denoted $\dinv(v)$, is the number of $i<j$ such that $v_i-v_j\in\{0,1\}$. Let $\mathcal{DV}_n$ be the set of all Dyck vectors of length $n$. Finally, define the \emph{combinatorial $q,t$-Catalan numbers} by setting \[ \Cat_n(q,t)=\sum_{v\in\mathcal{DV}_n} q^{\area(v)}t^{\dinv(v)}. \] \end{definition}
\begin{example} The vector $v=(0,1,1,0,1,2,2,0)$ is a Dyck vector in $\mathcal{DV}_8$ with $\area(v)=7$ and $\dinv(v)=12$. The entries in the vector $v$ count the number of shaded cells in each row of Figure~\ref{fig:object} below, reading from the bottom of the figure to the top. These cells lie underneath a \emph{Dyck path}, which is a lattice path from $(0,0)$ to $(n,n)$ consisting of north and east steps that never go below the line $y=x$. This correspondence between Dyck vectors and Dyck paths is a bijection. So we sometimes identify these two types of objects, blurring the distinction between a Dyck vector (a list of integers) and the associated Dyck path (a sequence of north and east steps). \end{example}
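These statistics are straightforward to compute directly from the definition. The following Python sketch (our own illustration; the function names are not from the paper) verifies the stated values for $v=(0,1,1,0,1,2,2,0)$.

```python
from itertools import combinations

def area(v):
    """Area of a Dyck vector: the sum of its entries."""
    return sum(v)

def dinv(v):
    """Diagonal inversion count: pairs i < j with v_i - v_j in {0, 1}."""
    return sum(1 for i, j in combinations(range(len(v)), 2)
               if v[i] - v[j] in (0, 1))

v = (0, 1, 1, 0, 1, 2, 2, 0)
print(area(v), dinv(v))  # 7 12
```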
\begin{figure}
\caption{A typical object counted by $\Cat_n(q,t)$.}
\label{fig:object}
\end{figure}
\begin{example} When $n=3$, we have $\mathcal{DV}_3=\{(0,0,0),(0,0,1),(0,1,0),(0,1,1),(0,1,2)\}$, so \[ \Cat_3(q,t)=t^3+qt+qt^2+q^2t+q^3. \] \end{example}
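One can confirm this small case, and experiment with larger ones, by brute-force enumeration. The sketch below (our own, with hypothetical helper names) generates $\mathcal{DV}_n$ from the defining conditions $v_1=0$, $v_i\geq 0$, $v_{i+1}\leq v_i+1$, and collects the coefficients of $\Cat_n(q,t)$ indexed by $(\area,\dinv)$.

```python
from itertools import combinations

def dyck_vectors(n):
    """Generate all Dyck vectors of length n >= 1:
    v_1 = 0, each v_i >= 0, and v_{i+1} <= v_i + 1."""
    def extend(prefix):
        if len(prefix) == n:
            yield tuple(prefix)
        else:
            for nxt in range(prefix[-1] + 2):
                yield from extend(prefix + [nxt])
    yield from extend([0])

def cat(n):
    """Coefficients of Cat_n(q,t) as a dict {(area, dinv): coefficient}."""
    coeffs = {}
    for v in dyck_vectors(n):
        a = sum(v)
        d = sum(1 for i, j in combinations(range(n), 2)
                if v[i] - v[j] in (0, 1))
        coeffs[(a, d)] = coeffs.get((a, d), 0) + 1
    return coeffs

# Cat_3(q,t) = t^3 + q*t + q*t^2 + q^2*t + q^3:
print(sorted(cat(3).items()))
# [((0, 3), 1), ((1, 1), 1), ((1, 2), 1), ((2, 1), 1), ((3, 0), 1)]
```

For $n=5$ this reproduces the coefficient matrix displayed below, and one can check numerically that the matrix is symmetric about its main diagonal, i.e., $\Cat_5(q,t)=\Cat_5(t,q)$.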
For larger values of $n$, it is convenient to display the coefficients of $\Cat_n(q,t)$ in a matrix, where the entry $i$ rows to the right and $j$ columns above the lower-left corner is the coefficient of $q^it^j$ in $\Cat_n(q,t)$. For instance, $\Cat_5(q,t)=t^{10}+q(t^9+t^8+t^7+t^6)+\cdots$ is represented by the matrix {\footnotesize \[ \left[\begin{array}{ccccccccccc} 1 & . & . & . & . & . & . & . & . & . & . \\ . & 1 & . & . & . & . & . & . & . & . & . \\ . & 1 & 1 & . & . & . & . & . & . & . & . \\ . & 1 & 1 & 1 & . & . & . & . & . & . & . \\ . & 1 & 2 & 1 & 1 & . & . & . & . & . & . \\ . & . & 1 & 2 & 1 & 1 & . & . & . & . & . \\ . & . & 1 & 2 & 2 & 1 & 1 & . & . & . & . \\ . & . & . & 1 & 2 & 2 & 1 & 1 & . & . & . \\ . & . & . & . & 1 & 1 & 2 & 1 & 1 & . & . \\ . & . & . & . & . & . & 1 & 1 & 1 & 1 & . \\ . & . & . & . & . & . & . & . & . & . & 1 \end{array}\right], \] } where zeroes are denoted by dots for visual clarity.
To prove joint symmetry of $\Cat_n(q,t)$, it is sufficient to show that each diagonal of the coefficient matrix has the joint symmetry property. For instance, it is not too hard to show (see Lemma~\ref{lem:C0-opp} below) that the main (highest) diagonal consists of $1$'s running southeast from position $(0,\binom{n}{2})$ to position $(\binom{n}{2},0)$. In general, the combinatorial complexity increases as we move southwest from this main diagonal (in a sense made more precise later). The next definition introduces a statistic that keeps track of which diagonal each object belongs to.
\begin{definition} For a Dyck vector $v\in\mathcal{DV}_n$, define the \emph{deficit of $v$} to be $$\defc(v)=\binom{n}{2}-\area(v)-\dinv(v).$$ (We will see below that $\defc(v)\geq 0$.) Let $\mathcal{DV}_{n,k}=\{v\in\mathcal{DV}_n:\defc(v)=k\}$. For all $n,k\geq 0$, define the \emph{$q,t$-Catalan numbers of order $n$ and level $k$} by \[ \Cat_{n,k}(q,t)=\sum_{v\in\mathcal{DV}_{n,k}} q^{\area(v)}t^{\dinv(v)}. \] \end{definition}
\begin{example} The Dyck vector $v=(0,1,1,0,1,2,2,0)$ has $\defc(v)=\binom{8}{2}-7-12=9$. Referring to the coefficient matrix above, we find that \begin{eqnarray*} \Cat_{5,0}(q,t)&=&q^{10}+qt^9+q^2t^8+\cdots+t^{10}, \\ \Cat_{5,1}(q,t)&=&qt^8+q^2t^7+\cdots+q^8t, \\ \Cat_{5,2}(q,t)&=&qt^7+2q^2t^6+2q^3t^5+2q^4t^4+2q^5t^3+2q^6t^2+q^7t,\\ \Cat_{5,3}(q,t)&=&qt^6+q^2t^5+2q^3t^4+2q^4t^3+q^5t^2+q^6t,\\ \Cat_{5,4}(q,t)&=&q^2t^4+q^3t^3+q^4t^2, \end{eqnarray*} and $\Cat_{5,k}(q,t)=0$ for all $k>4$. \end{example}
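The level decomposition can likewise be checked by filtering the brute-force enumeration of $\mathcal{DV}_n$ by deficit. This Python sketch (our own illustration, not part of the paper) reproduces $\Cat_{5,4}(q,t)$ and the vanishing of $\Cat_{5,k}(q,t)$ for $k>4$.

```python
from itertools import combinations
from math import comb

def dyck_vectors(n):
    """All Dyck vectors of length n (v_1 = 0, v_i >= 0, v_{i+1} <= v_i + 1)."""
    def extend(prefix):
        if len(prefix) == n:
            yield tuple(prefix)
        else:
            for nxt in range(prefix[-1] + 2):
                yield from extend(prefix + [nxt])
    yield from extend([0])

def cat_level(n, k):
    """Coefficients of Cat_{n,k}(q,t): keep only vectors of deficit k."""
    coeffs = {}
    for v in dyck_vectors(n):
        a = sum(v)
        d = sum(1 for i, j in combinations(range(n), 2)
                if v[i] - v[j] in (0, 1))
        if comb(n, 2) - a - d == k:  # defc(v) == k
            coeffs[(a, d)] = coeffs.get((a, d), 0) + 1
    return coeffs

print(sorted(cat_level(5, 4).items()))  # [((2, 4), 1), ((3, 3), 1), ((4, 2), 1)]
print(cat_level(5, 5))                  # {} : Cat_{5,k} vanishes for k > 4
```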
Our first main result is a combinatorial proof of the following theorem. \begin{theorem}\label{thm:jsym-levelk} For all $n\geq 0$ and all $k\leq 9$, $\Cat_{n,k}(q,t)=\Cat_{n,k}(t,q)$. \end{theorem}
This result will emerge from a framework of (partially proved) conjectures regarding the combinatorial structure of the sets $\mathcal{DV}_{n,k}$. To state these conjectures, we must first describe an alternate formulation of the $q,t$-Catalan numbers phrased in terms of Dyck partitions.
\subsection{Definition of $q,t$-Catalan Numbers based on Dyck Partitions.} \label{subsec:def-dyck-ptns}
An \emph{integer partition} is a sequence $\mu=(\mu_1,\mu_2,\ldots,\mu_s)$ of weakly decreasing nonnegative integers. We set
$|\mu|=\mu_1+\mu_2+\cdots+\mu_s$ and let $\ell(\mu)$ be the number of nonzero entries in $\mu$. We identify sequences that differ only in the number of trailing zeroes; for example, $(3,3,1)$ and $(3,3,1,0)$
and $(3,3,1,0,0,0)$ all represent the same partition. Let $\Par(n)$ be the set of partitions $\mu$ with $|\mu|=n$, and let $\Par=\bigcup_{n=0}^{\infty} \Par(n)$ be the set of all integer partitions.
Formally, the \emph{diagram} of a partition $\mu$ is the set $$\dg(\mu)=\{(i,j)\in\mathbb{Z}_{>0}^2: 1\leq i\leq\ell(\mu),1\leq j\leq\mu_i\}.$$ The \emph{conjugate} of $\mu$ is the partition $\mu'$ with diagram $\dg(\mu')=\{(j,i):(i,j)\in\dg(\mu)\}$. We usually visualize $\dg(\mu)$ as an array of left-justified unit boxes, with $\mu_i$ boxes in the $i$'th row from the top. For example, the diagram of the partition $\gamma=(7,4,3,3,3,1,0,0)$ appears northwest of the Dyck path in Figure~\ref{fig:object}. That figure suggests a correspondence between Dyck paths of order $n$ and integer partitions contained in a staircase shape. This is formalized in the next definition.
\begin{definition} For each $n\geq 0$, let $\Delta_n$ be the partition $(n-1,n-2,\ldots,3,2,1,0)\in\Par(\binom{n}{2})$. We call $\gamma\in\Par$ a \emph{Dyck partition of order $n$} iff $\dg(\gamma)\subseteq\dg(\Delta_n)$, which means that $\ell(\gamma)<n$ and $\gamma_i\leq n-i$ for all $1\leq i<n$. Let $\mathcal{DP}_n$ be the set of Dyck partitions of order $n$. For $\gamma\in\Par$, let $\Delta(\gamma)$ be the minimum $n$ such that $\gamma\in\mathcal{DP}_n$. \end{definition}
The set of all integer partitions is the increasing union of the sets $\mathcal{DP}_n$, i.e., \begin{equation}\label{eq:DP-inclusions}
\mathcal{DP}_0\subseteq\mathcal{DP}_1\subseteq\mathcal{DP}_2\subseteq\cdots\subseteq
\mathcal{DP}_n\subseteq\cdots\subseteq\Par=\bigcup_{n=0}^{\infty} \mathcal{DP}_n. \end{equation} We obtain a bijection from $\mathcal{DP}_n$ to the set of Dyck paths of order $n$ by mapping $\gamma\in\mathcal{DP}_n$ to the frontier of $\dg(\gamma)$ when we embed the diagram of $\gamma$ in the diagram of $\Delta_n$. We can then transform this Dyck path into a Dyck vector by counting area cells to the right of the path in each row. It is routine to establish the following formulas for bijections $\textsc{dp}_n:\mathcal{DV}_n\rightarrow\mathcal{DP}_n$ and $\textsc{dv}_n=\textsc{dp}_n^{-1}:\mathcal{DP}_n\rightarrow\mathcal{DV}_n$, which will be needed later: \begin{eqnarray*} \textsc{dp}_n(v_1,v_2,\ldots,v_n)&=& (n-1-v_n,n-2-v_{n-1},\ldots,n-i-v_{n-i+1},\ldots,1-v_2,0-v_1); \\ \textsc{dv}_n(\gamma_1,\gamma_2,\ldots,\gamma_n) &=& (0-\gamma_n,1-\gamma_{n-1},\ldots,i-\gamma_{n-i},\ldots,n-1-\gamma_1). \end{eqnarray*} In the second formula, we pad $\gamma$ with zero parts so that $\gamma$ has exactly $n$ nonnegative parts.
Next we define area and dinv statistics for Dyck partitions. First we review the notions of arm, leg, coarm, and coleg for cells in a partition diagram.
\begin{definition} Given $\gamma\in\Par$ and a cell $c=(i,j)\in\dg(\gamma)$, the \emph{arm} of $c$ is $\arm(c)=\gamma_i-j$, which is the number of boxes strictly right of $c$ in its row of the diagram. The \emph{leg} of $c$ is $\leg(c)=\gamma'_j-i$, which is the number of boxes strictly below $c$ in its column of the diagram. The \emph{coarm} of $c$ is $\arm'(c)=j-1$, which is the number of boxes strictly left of $c$ in its row of the diagram. The \emph{coleg} of $c$ is $\leg'(c)=i-1$, which is the number of boxes strictly above $c$ in its column of the diagram. \end{definition}
For example, cell $c=(2,1)$ in the Dyck partition $\gamma$ shown in Figure~\ref{fig:object} has $\arm(c)=3$, $\leg(c)=4$, $\arm'(c)=0$, and $\leg'(c)=1$.
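A direct transcription of these definitions (again our own illustration, not from the paper) recovers the values quoted above for $\gamma=(7,4,3,3,3,1,0,0)$ and $c=(2,1)$; cells are 1-indexed as in the text.

```python
def conjugate(mu):
    """Conjugate partition: mu'_j = number of parts of mu that are >= j."""
    mu = [p for p in mu if p > 0]
    width = mu[0] if mu else 0
    return tuple(sum(1 for p in mu if p >= j) for j in range(1, width + 1))

def arm(mu, i, j):
    """Boxes strictly right of cell (i, j) in its row."""
    return mu[i - 1] - j

def leg(mu, i, j):
    """Boxes strictly below cell (i, j) in its column."""
    return conjugate(mu)[j - 1] - i

gamma = (7, 4, 3, 3, 3, 1)   # trailing zeros of (7,4,3,3,3,1,0,0) dropped
i, j = 2, 1
# arm, leg, coarm (= j-1), coleg (= i-1):
print(arm(gamma, i, j), leg(gamma, i, j), j - 1, i - 1)  # 3 4 0 1
```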
\begin{definition}\label{DP*} Fix a Dyck partition $\gamma\in\mathcal{DP}_n$. Let $\dinv(\gamma)$
be the number of cells $c\in\dg(\gamma)$ with $\arm(c)-\leg(c)\in\{0,1\}$. Let $\area_n(\gamma)=\binom{n}{2}-|\gamma|$, which is the number of cells in the skew partition $\Delta_n/\gamma$. Define the deficit $\defc(\gamma)=\binom{n}{2}-\area_n(\gamma)-\dinv(\gamma)
=|\gamma|-\dinv(\gamma)\geq 0$, which can also be described as the number of cells $c\in\dg(\gamma)$ with $\arm(c)-\leg(c)\not\in\{0,1\}$. Let $\mathcal{DP}_{n,k}=\{\gamma\in\mathcal{DP}_n:\defc(\gamma)=k\}$ be the set of Dyck partitions of \emph{order $n$ and level $k$}. We write $\mathcal{DP}_{\ast,k}=\bigcup_{n=0}^{\infty} \mathcal{DP}_{n,k}$ for the set of all Dyck partitions of level $k$ (of any order). \end{definition}
For each $k\geq 0$, we have inclusions \begin{equation}\label{eq:DPk-inclusions}
\mathcal{DP}_{0,k}\subseteq\mathcal{DP}_{1,k}\subseteq\mathcal{DP}_{2,k}\subseteq\cdots\subseteq
\mathcal{DP}_{n,k}\subseteq\cdots\subseteq\mathcal{DP}_{\ast,k}
=\bigcup_{n=0}^{\infty} \mathcal{DP}_{n,k}. \end{equation}
\begin{example}
The partition $\gamma=(5,5,4,3,1,1,0)$ has $|\gamma|=19$, $\dinv(\gamma)=14$, $\defc(\gamma)=5$, $\Delta(\gamma)=7$, $\area_7(\gamma)=2$, $\area_8(\gamma)=9$, $\area_9(\gamma)=17$, and so on. The diagram $\dg(\gamma)$ is shown below; the 14 cells $c$ such that $\arm(c)-\leg(c)\in\{0,1\}$ are marked with asterisks.
\[ \tableau { {}&{\ast}&{}&{}&{}\\ {\ast}&{\ast}&{\ast}&{\ast}&{\ast}\\ {\ast}&{\ast}&{\ast}&{\ast}&\\ {\ast}&{\ast}&{\ast}&\\ {}\\ {\ast} } \] We have $\textsc{dv}_7(\gamma)=v=(0,0,1,0,0,0,1)$; note that $\area(v)=2=\area_7(\gamma)$ and $\dinv(v)=14=\dinv(\gamma)$. \end{example}
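The statistics in this example are easily confirmed by machine. A short Python check (helper names are ours) recomputes $|\gamma|$, $\dinv(\gamma)$, $\defc(\gamma)$, and $\area_n(\gamma)$, and confirms that the Dyck-vector dinv agrees:

```python
from itertools import combinations

def dinv_par(gamma):
    """dinv of a Dyck partition: number of cells with arm - leg in {0, 1}."""
    conj = lambda j: sum(1 for p in gamma if p >= j)          # gamma'_j
    return sum(1 for i, part in enumerate(gamma, 1)
                 for j in range(1, part + 1)
                 if (part - j) - (conj(j) - i) in (0, 1))

def dinv_vec(v):
    """dinv of a Dyck vector: pairs i < j with v_i - v_j in {0, 1}."""
    return sum(1 for i, j in combinations(range(len(v)), 2)
               if v[i] - v[j] in (0, 1))

gamma = (5, 5, 4, 3, 1, 1)
assert sum(gamma) == 19                                   # |gamma|
assert dinv_par(gamma) == 14
assert sum(gamma) - dinv_par(gamma) == 5                  # defc(gamma)
assert [n*(n-1)//2 - sum(gamma) for n in (7, 8, 9)] == [2, 9, 17]
assert dinv_vec((0, 0, 1, 0, 0, 0, 1)) == 14              # dinv(dv_7(gamma))
```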
One may check that for all $n,k\geq 0$, \[ \Cat_{n,k}(q,t)= \sum_{\gamma\in\mathcal{DP}_{n,k}} q^{\area_n(\gamma)}t^{\dinv(\gamma)}\mbox{ and } \Cat_n(q,t)= \sum_{\gamma\in\mathcal{DP}_{n}} q^{\area_n(\gamma)}t^{\dinv(\gamma)}. \] It suffices to verify that the bijection $\textsc{dp}_n:\mathcal{DV}_n\rightarrow\mathcal{DP}_n$ sends the statistics $(\area,\dinv,\defc)$ on Dyck vectors to the corresponding statistics $(\area_n,\dinv,\defc)$ on Dyck partitions. This certainly holds for the area statistics, and it is not too hard to prove for the dinv statistics (for details, see~\cite[Lemmas 4.4.1 and 6.3.3]{HHLRU}). It then follows that the deficit statistic is also preserved, which also establishes the earlier claim that the deficit of a Dyck vector is always nonnegative.
One advantage of using Dyck partitions is that the dinv and deficit of a partition $\gamma\in\mathcal{DP}_n$ do not change when $n$ is increased, and the area changes in a predictable way: $\area_{n+1}(\gamma) =\area_n(\gamma)+n$. The chain of set inclusions~\eqref{eq:DP-inclusions} for Dyck partitions translates into a chain of inclusion maps \begin{equation}\label{eq:DV-inclusions} \mathcal{DV}_0\stackrel{\iota_0}{\hookrightarrow} \mathcal{DV}_1\stackrel{\iota_1}{\hookrightarrow} \mathcal{DV}_2\stackrel{\iota_2}{\hookrightarrow}\cdots \stackrel{\iota_{n-1}}{\hookrightarrow} \mathcal{DV}_n\stackrel{\iota_n}{\hookrightarrow} \mathcal{DV}_{n+1}\stackrel{\iota_{n+1}}{\hookrightarrow}\cdots, \end{equation} for Dyck vectors, where $\iota_n(v_1,\ldots,v_n)=(0,v_1+1,v_2+1,\ldots,v_n+1)$ for $(v_1,\ldots,v_n)\in\mathcal{DV}_n$. For $v\in\mathcal{DV}_n$, one immediately verifies that $\textsc{dp}_{n+1}(\iota_n(v))=\textsc{dp}_n(v)$, which says that the inclusions in~\eqref{eq:DP-inclusions} and~\eqref{eq:DV-inclusions} correspond under the bijection between Dyck vectors and Dyck partitions. Also note that if $\gamma\in\mathcal{DP}_n$ has associated Dyck vector $(v_1,\ldots,v_n)$, then $\Delta(\gamma)=n$ iff $v_j=0$ for some $j>1$.
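The compatibility $\textsc{dp}_{n+1}(\iota_n(v))=\textsc{dp}_n(v)$ can be checked directly; an illustrative Python sketch (names ours, 0-indexed lists):

```python
def dp(n, v):
    """Dyck vector of length n -> Dyck partition."""
    return tuple(n - i - v[n - i] for i in range(1, n + 1))

def iota(v):
    """The inclusion DV_n -> DV_{n+1}: prepend 0 and increment every entry."""
    return (0,) + tuple(x + 1 for x in v)

v = (0, 0, 1, 0, 0, 0, 1)            # a Dyck vector of length 7
lhs = dp(8, iota(v))                  # dp_{n+1}(iota_n(v))
rhs = dp(7, v) + (0,)                 # dp_n(v), padded to 8 parts
assert lhs == rhs                     # same partition
```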
\subsection{Opposite Objects and Opposite Subsets} \label{subsec:opposites}
The following terminology will be helpful for discussing joint symmetry.
\begin{definition} Two Dyck vectors $v,w\in\mathcal{DV}_n$ are called a \emph{pair of opposites} iff $\area(v)=\dinv(w)$ and $\dinv(v)=\area(w)$. Two subsets $V,W\subseteq\mathcal{DV}_n$ are called \emph{opposite to each other} iff there exists a unique bijection $f:V\rightarrow W$ such that $v$ and $f(v)$ are a pair of opposites for all $v\in V$. (The uniqueness of the bijection implies that for $v\neq v'$ in $V$,
$(\area(v),\dinv(v))\neq (\area(v'),\dinv(v'))$.) Two Dyck partitions $\beta,\gamma\in\mathcal{DP}_n$ are called a \emph{pair of $n$-opposites} iff $\area_n(\beta)=\dinv(\gamma)$ and $\dinv(\beta)=\area_n(\gamma)$. Two subsets $B,C\subseteq\mathcal{DP}_n$ are called \emph{$n$-opposite to each other} iff there exists a unique bijection $g:B\rightarrow C$ such that $\gamma$ and $g(\gamma)$ are a pair of $n$-opposites for all $\gamma\in B$. In this definition, we allow $v=w$, $V=W$, $\beta=\gamma$, and $B=C$. \end{definition}
\begin{example}\label{ex:special-path} We define an injective map $\textsc{dyck}:\Par(n)\rightarrow\mathcal{DV}_n$ as follows. For $\lambda=(\lambda_1\geq\lambda_2\geq\cdots)\in\Par(n)$, let $\textsc{dyck}(\lambda)\in\mathcal{DV}_n$ be the sequence consisting of $\lambda_1$ zeroes, followed by $\lambda_2$ ones, $\lambda_3$ twos, and so on. For instance, when $\lambda=(4,2,1,1,1)\in\Par(9)$, $\textsc{dyck}(\lambda)=(0,0,0,0,1,1,2,3,4)\in\mathcal{DV}_9$. We claim that for all partitions $\lambda$, $\textsc{dyck}(\lambda)$ and $\textsc{dyck}(\lambda')$ are a pair of opposites. To see this, note that \[ \area(\textsc{dyck}(\lambda))=0\lambda_1+1\lambda_2+2\lambda_3+\cdots
=\sum_{c\in\dg(\lambda)} \leg'(c); \] \[ \dinv(\textsc{dyck}(\lambda))=\binom{\lambda_1}{2}+\binom{\lambda_2}{2}
+\binom{\lambda_3}{2} +\cdots =\sum_{c\in\dg(\lambda)} \arm'(c). \] The natural bijection $\dg(\lambda)\rightarrow\dg(\lambda')$ given by $(i,j)\mapsto (j,i)$ interchanges coarms and colegs, so that $\area(\textsc{dyck}(\lambda'))=\dinv(\textsc{dyck}(\lambda))$ and $\dinv(\textsc{dyck}(\lambda'))=\area(\textsc{dyck}(\lambda))$, as needed. \end{example}
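This coarm/coleg argument can be spot-checked computationally; the Python sketch below (names ours) verifies that $\textsc{dyck}(\lambda)$ and $\textsc{dyck}(\lambda')$ are a pair of opposites for the partition used above:

```python
from itertools import combinations

def dyck(lam):
    """lam_1 zeros, then lam_2 ones, lam_3 twos, and so on."""
    return tuple(i for i, part in enumerate(lam) for _ in range(part))

def area(v):
    return sum(v)

def dinv(v):
    return sum(1 for i, j in combinations(range(len(v)), 2)
               if v[i] - v[j] in (0, 1))

def conjugate(lam):
    return tuple(sum(1 for p in lam if p >= j) for j in range(1, lam[0] + 1))

lam = (4, 2, 1, 1, 1)
assert dyck(lam) == (0, 0, 0, 0, 1, 1, 2, 3, 4)
mu = conjugate(lam)                        # lam' = (5, 2, 1, 1)
assert area(dyck(lam)) == dinv(dyck(mu)) == 11
assert dinv(dyck(lam)) == area(dyck(mu)) == 7
```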
One corollary to Theorem~\ref{thm:jsym-levelk} and Example~\ref{ex:special-path} is a combinatorial proof of the joint symmetry $C_n(q,t)=C_n(t,q)$ for all $n\leq 7$. This follows since every Dyck vector $v$ of length at most $7$ satisfies $\defc(v)\leq 9$, with four exceptions. These four exceptional objects are $\textsc{dyck}((3,3,1))$, $\textsc{dyck}((3,3,1)')$, $\textsc{dyck}((3,2,1,1))$, and $\textsc{dyck}((3,2,1,1)')$, which form two pairs of opposite objects by the preceding example.
\subsection{Conjectural Decompositions of Level $k$ Objects.} \label{subsec:intro-conjs}
We can now describe our conjectured structural decomposition of the sets $\mathcal{DV}_{n,k}$.
\begin{conjecture}\label{conj:DV} Fix an integer $k\geq 0$. For each $n\geq 0$ and each $\mu\in\Par(k)$, there exist (possibly identical) subsets $\mathcal{DV}_{n,\mu}$ and $\overline{\mathcal{DV}}_{n,\mu}$ of $\mathcal{DV}_{n,k}$ satisfying the following:
\begin{itemize} \item[(a)] $\mathcal{DV}_{n,k}$ is the disjoint union of the sets $\{\mathcal{DV}_{n,\mu}:\mu\in\Par(k)\}$, and $\mathcal{DV}_{n,k}$ is also the disjoint union of the sets $\{\overline{\mathcal{DV}}_{n,\mu}:\mu\in\Par(k)\}$. \item[(b)] $\mathcal{DV}_{n,\mu}$ and $\overline{\mathcal{DV}}_{n,\mu}$ are opposite to
each other. \item[(c)] If $\mathcal{DV}_{n,\mu}$ contains $\textsc{dyck}(\lambda)$,
then $\overline{\mathcal{DV}}_{n,\mu}$ contains $\textsc{dyck}(\lambda')$. \item[(d)] $\iota_n(\mathcal{DV}_{n,\mu})\subseteq \mathcal{DV}_{n+1,\mu}$
and $\iota_n(\overline{\mathcal{DV}}_{n,\mu})\subseteq \overline{\mathcal{DV}}_{n+1,\mu}$,
where $\iota_n$ is the injection from~\eqref{eq:DV-inclusions}.
\item[(e)] $\lim_{n\rightarrow\infty} |\mathcal{DV}_{n,\mu}|
=\lim_{n\rightarrow\infty} |\overline{\mathcal{DV}}_{n,\mu}|=\infty$. \end{itemize} \end{conjecture}
An even stronger, but more technical, conjecture is stated in Section~\ref{sec:stronger_conj} below. We will prove these conjectures for all $k\leq 9$; this clearly implies Theorem~\ref{thm:jsym-levelk} for these values of $k$. We also explicitly construct certain sets $\mathcal{DV}_{n,\mu}$ and $\overline{\mathcal{DV}}_{n,\mu}$ satisfying (b) and (c) when $\lambda$ is a hook shape (i.e., $\lambda_2\leq 1$) or an almost-hook shape (i.e., $\lambda_2=2$ and $\lambda_3\leq 1$).
Here is an equivalent formulation of Conjecture~\ref{conj:DV} in terms of Dyck partitions.
\begin{conjecture}\label{conj:DP} Fix an integer $k\geq 0$. For each $\mu\in\Par(k)$, there exist (possibly identical) infinite subsets $\mathcal{DP}_{\mu}$ and $\overline{\mathcal{DP}}_{\mu}$ of $\mathcal{DP}_{\ast,k}$ satisfying the following: \begin{itemize} \item[(a)] $\mathcal{DP}_{\ast,k}$ is the disjoint union of the sets $\{\mathcal{DP}_{\mu}:\mu\in\Par(k)\}$, and $\mathcal{DP}_{\ast,k}$ is also the disjoint union of the sets $\{\overline{\mathcal{DP}}_{\mu}:\mu\in\Par(k)\}$. \item[(b)] For all $n\geq 0$, $\mathcal{DP}_{\mu}\cap\mathcal{DP}_n$ and $\overline{\mathcal{DP}}_{\mu}\cap\mathcal{DP}_n$ are $n$-opposite to each other. \item[(c)] If $\mathcal{DP}_{\mu}$ contains $\textsc{dp}_n(\textsc{dyck}(\lambda))$
where $\lambda\in\Par(n)$, then $\overline{\mathcal{DP}}_{\mu}$ contains
$\textsc{dp}_n(\textsc{dyck}(\lambda'))$. \end{itemize} \end{conjecture}
To deduce the previous conjecture from this one, take $\mathcal{DV}_{n,\mu}=\textsc{dv}_n(\mathcal{DP}_{\mu}\cap\mathcal{DP}_n)$ and $\overline{\mathcal{DV}}_{n,\mu}=\textsc{dv}_n(\overline{\mathcal{DP}}_{\mu}\cap\mathcal{DP}_n)$.
\subsection{Outline of Paper.} \label{subsec:outline}
The rest of the paper is structured as follows. In Section~\ref{sec:hook_sec}, we construct sets $\mathcal{C}_{(k)}\subseteq\mathcal{DP}_{\ast,k}$ and prove that $\mathcal{C}_{(k)}\cap\mathcal{DP}_n$ is $n$-opposite to itself for all $n,k\geq 0$. We use these sets to show how to satisfy Conjecture~\ref{conj:DP}(b) and (c) when $\lambda$ is a hook shape. In Section~\ref{sec:almost_hook_sec}, we construct certain sets $\mathcal{C}_{(u,v)}\subseteq\mathcal{DP}_{\ast,u+v}$ and prove that $\mathcal{C}_{(ab-b-1,b-1)}\cap\mathcal{DP}_n$ and $\mathcal{C}_{(ab-a-1,a-1)}\cap\mathcal{DP}_n$ are $n$-opposites for all $a,b\geq 2$ and $n\geq 0$. These sets are used to prove Conjecture~\ref{conj:DP}(b) and (c) when $\lambda$ is an almost-hook shape. In Section~\ref{sec:stronger_conj}, we state strengthened versions of the conjectures above and prove special cases including the case $k\leq 9$. Some data needed for this proof are given in an Appendix.
\section{Construction for Hook Shapes} \label{sec:hook_sec}
In this section, we define collections of Dyck partitions $\mathcal{C}_{(k)}$ such that $\mathcal{C}_{(k)}\cap\mathcal{DP}_n$ is $n$-opposite to itself for all $n$ and $k$. Moreover, for every partition $\lambda\in\Par(n)$ of hook shape, $\textsc{dp}_n(\textsc{dyck}(\lambda))$ belongs to some $\mathcal{C}_{(k)}$.
\subsection{The Operator $\nu$.} \label{subsec:nu}
A fundamental tool for building $n$-opposite sets of Dyck partitions is the operator $\nu$ defined next. This operator can be viewed as a special case
of the map defined by the present authors in~\cite[Definition 8]{LLL} (the latter map acts on $m$-Dyck vectors satisfying certain conditions; here $m=1$).
\begin{definition} Suppose $\gamma$ is a Dyck partition satisfying the condition $\gamma_1\leq\ell(\gamma)+2$. For such a partition, we define $\nu(\gamma)=(\ell(\gamma)+1,\gamma_1-1,\gamma_2-1,\ldots, \gamma_{\ell(\gamma)}-1)$, which is also a partition by the hypothesis on $\gamma$. \end{definition}
\begin{example} We have $\nu((5,5,4,3,1,1))=(7,4,4,3,2,0,0)=(7,4,4,3,2)$, $\nu((7,4,4,3,2))=(6,6,3,3,2,1)$, $\nu((6,6,3,3,2,1))=(7,5,5,2,2,1)$, and so on. On the other hand, $\nu((7,3,1,1))$ is not defined. The empty partition $\gamma=(0)=()$ has $\gamma_1=0=\ell(\gamma)$, so $\nu(\gamma)=(1)$. \end{example}
Informally, the next lemma shows that applying $\nu$ allows us to ``move one step northwest'' along a diagonal in the coefficient matrix for $\Cat_n(q,t)$.
\begin{lemma}\label{lem:propertiesnu} For all $\gamma\in\Par$ such that $\nu(\gamma)$ is defined and all $n$ such that $\gamma,\nu(\gamma)\in\mathcal{DP}_n$, $$\begin{array}{lcl}
\dinv(\nu(\gamma))&=&\dinv(\gamma) + 1;\\ \area_n(\nu(\gamma))&=&\area_n(\gamma) - 1;\\ \defc(\nu(\gamma))&=&\defc(\gamma). \end{array}$$ \end{lemma} \begin{proof}
This is a special case of \cite[Lemma 9]{LLL}, so we only sketch the proof here. Using the formulas for $\textsc{dv}_n$ and $\textsc{dp}_n$, we can describe how $\nu$ acts on Dyck vectors as follows. Suppose $\gamma\in\mathcal{DP}_n$ has associated Dyck vector $\textsc{dv}_n(\gamma)=(v_1,v_2,\ldots,v_n)$. Let $a$ be the maximal index such that $v_a=a-1$; then (provided $\nu(\gamma)$ is defined and in $\mathcal{DP}_n$) \begin{equation}\label{eq:nu-on-dv}
\textsc{dv}_n(\nu(\gamma))=(v_1,\ldots,v_{a-1},v_{a+1},\ldots,v_n,a-2). \end{equation} One can check that $\nu(\gamma)$ is defined iff $a-2\leq v_n+1$, whereas $\nu(\gamma)\in\mathcal{DP}_n$ iff $\ell(\gamma)<n-1$ iff $a\geq 2$ iff $0\leq a-2$. Write $v=\textsc{dv}_n(\gamma)$ and $v'=\textsc{dv}_n(\nu(\gamma))$; we know $\dinv(v)=\dinv(\gamma)$, $\dinv(v')=\dinv(\nu(\gamma))$, $\area(v)=\area_n(\gamma)$, and $\area(v')=\area_n(\nu(\gamma))$. We obtain $v'$ from $v$ by deleting $v_a=a-1$ and appending $a-2$ as the new last entry of the vector. It is then evident that $\area(v')=\area(v)-1$. To see how $\dinv(v')$ compares to $\dinv(v)$, note that the entries preceding $v_a$ are $(0,1,2,\ldots,a-2)$, which do not cause any diagonal inversions with $v_a=a-1$. So removing $v_a=a-1$ from $v$ reduces the dinv statistic by the number of times $a-1$ or $a-2$ occurs in the sequence $(v_{a+1},\ldots,v_n)$. When we append $a-2$, we increase the dinv statistic by the number of times $a-1$ or $a-2$ occurs in the sequence $(v_1,\ldots,v_{a-1},v_{a+1},\ldots,v_n)$. Since $(v_1,\ldots,v_{a-1})=(0,1,\ldots,a-2)$ by definition of $a$, and this prefix contains $a-2$ once and does not contain $a-1$, we see that the net change in dinv is $+1$ when we go from $v$ to $v'$. The formulas in the lemma follow, recalling that $\defc(\gamma)=\binom{n}{2}-\area_n(\gamma)-\dinv(\gamma)$. \end{proof}
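Both the partition-level description of $\nu$ and the statistics in Lemma~\ref{lem:propertiesnu} can be verified computationally; the following Python sketch (helper names ours) does so for one of the examples above, with $n=9$:

```python
def nu(gamma):
    """nu(gamma) = (l+1, gamma_1 - 1, ..., gamma_l - 1), where l = l(gamma);
    defined only when gamma_1 <= l + 2. Trailing zero parts are dropped."""
    g = tuple(p for p in gamma if p > 0)
    l = len(g)
    assert g == () or g[0] <= l + 2, "nu undefined"
    return tuple(p for p in (l + 1,) + tuple(x - 1 for x in g) if p > 0)

def dinv_par(gamma):
    conj = lambda j: sum(1 for p in gamma if p >= j)
    return sum(1 for i, part in enumerate(gamma, 1)
                 for j in range(1, part + 1)
                 if (part - j) - (conj(j) - i) in (0, 1))

assert nu((5, 5, 4, 3, 1, 1)) == (7, 4, 4, 3, 2)
assert nu((7, 4, 4, 3, 2)) == (6, 6, 3, 3, 2, 1)
assert nu(()) == (1,)

# Lemma: dinv rises by 1, area_n drops by 1, defc is unchanged (n = 9 here).
g = (5, 5, 4, 3, 1, 1); h = nu(g); n = 9
assert dinv_par(h) == dinv_par(g) + 1
assert (n*(n-1)//2 - sum(h)) == (n*(n-1)//2 - sum(g)) - 1
assert sum(h) - dinv_par(h) == sum(g) - dinv_par(g)
```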
\subsection{The Dyck Partitions $\mathbb{I}_{k,\ell}$.} \label{subsec:Ijl}
Next we introduce notation leading to the definition of certain Dyck partitions, denoted $\mathbb{I}_{k,\ell}$, that play a key role in the sequel.
\begin{definition} Given integers $a,b$ with $a\geq b$, let $a\!\!\searrow\!\! b$ denote the decreasing sequence $(a,a-1,a-2,\ldots,b)$. Similarly, for integers $a,b$ with $a\le b$, we denote by $a\!\!\nearrow\!\! b$ the sequence $(a,a+1,\dots,b)$. For any integers $c,d\geq 0$, let $\underline{c}^d$ denote the sequence consisting of $d$ copies of $c$. Given finite sequences $v$ and $w$ of possibly different lengths, we write $v+w=(v_1+w_1,v_2+w_2,\ldots)$, where we pad the shorter sequence with zeroes at the end if needed. Finally, given integers $k\geq 0$ and $\ell>0$, let $r_{k,\ell}=k-\ell\lfloor k/\ell\rfloor$ be the remainder when $k$ is divided by $\ell$, and define \[ \mathbb{I}_{k,\ell}= (\underline{\ell}^{1+\lfloor k/\ell \rfloor}, (\ell-1)\!\!\searrow\!\! 1) +(\underline{0}^{1+\lfloor k/\ell \rfloor},
\underline{1}^{r_{k,\ell}},
\underline{0}^{\ell-1-r_{k,\ell}}). \] For $k=\ell=0$, let $\mathbb{I}_{k,\ell}=(0)$ be the zero partition. \end{definition}
\begin{example}\label{ex:Ikl} When $k=9$ and $\ell=4$, we find that
$\mathbb{I}_{9,4} =(4,4,4,3,2,1)+(0,0,0,1,0,0)=(4,4,4,4,2,1)$. For $\lambda\in\Par(n)$, one may check that \[ \textsc{dp}_n(\textsc{dyck}(\lambda))
=\sum_{i=1}^{\ell(\lambda)} \mathbb{I}_{((\lambda_i-1)\sum_{j=i+1}^{\ell(\lambda)} \lambda_j),\lambda_i-1}. \] For example, given $\lambda=(4,2,1,1,1)\in\Par(9)$, \[ \textsc{dp}_9(\textsc{dyck}(\lambda)) = \mathbb{I}_{15,3} + \mathbb{I}_{3,1} = (3,3,3,3,3,3,2,1)+(1,1,1,1) = (4,4,4,4,3,3,2,1). \] For a partition $\lambda=(a,\underline{1}^b)\in\Par(n)$ of hook shape, we find that \[ \textsc{dp}_n(\textsc{dyck}(\lambda))=\mathbb{I}_{(a-1)b,a-1}. \] \end{example}
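The definition of $\mathbb{I}_{k,\ell}$ and the decompositions in this example can be checked with a few lines of Python (names ours):

```python
def I(k, l):
    """The Dyck partition I_{k,l}; I(0, 0) is the empty partition."""
    if k == l == 0:
        return ()
    parts = [l] * (1 + k // l) + list(range(l - 1, 0, -1))
    for t in range(k % l):                # add 1 to the next r_{k,l} parts
        parts[1 + k // l + t] += 1
    return tuple(parts)

assert I(9, 4) == (4, 4, 4, 4, 2, 1)
assert I(15, 3) == (3, 3, 3, 3, 3, 3, 2, 1)
assert I(3, 1) == (1, 1, 1, 1)

# dp_9(dyck((4,2,1,1,1))) = I(15,3) + I(3,1), added part-by-part with padding
a, b = I(15, 3), I(3, 1)
b = b + (0,) * (len(a) - len(b))
assert tuple(x + y for x, y in zip(a, b)) == (4, 4, 4, 4, 3, 3, 2, 1)

# hook shape (a, 1^b): here lambda = (4,1,1,1) gives I((4-1)*3, 4-1) = I(9,3)
assert I(9, 3) == (3, 3, 3, 3, 2, 1)
```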
We are going to build collections of Dyck partitions by applying $\nu$ repeatedly to the special partitions $\mathbb{I}_{k,\ell}$. The next lemma describes some properties of the Dyck partitions $\nu^m(\mathbb{I}_{k,\ell})$, where $\nu^m$ denotes the $m$-fold application of the operator $\nu$.
\begin{lemma}\label{Delta_II} Fix integers $k,\ell\geq 0$. \begin{enumerate} \item For $0\leq m\leq\ell$, $\nu^m(\mathbb{I}_{k,\ell})$ is defined. \item For $0\leq m\leq\ell$, $\defc(\nu^m(\mathbb{I}_{k,\ell}))=k$, so $\nu^m(\mathbb{I}_{k,\ell})\in\mathcal{DP}_{\ast,k}$. \item For $0\leq m\leq\ell$, $\dinv(\nu^m(\mathbb{I}_{k,\ell}))=\binom{\ell+1}{2}+m$. \item For $0\leq m\leq r_{-k,\ell}$, $\Delta(\nu^m(\mathbb{I}_{k,\ell}))=\ell+\lceil k/\ell\rceil+1$. \item For $1+r_{-k,\ell}\leq m\leq\ell$, $\Delta(\nu^m(\mathbb{I}_{k,\ell}))=\ell+\lceil k/\ell\rceil+2$.
\end{enumerate} \end{lemma} \begin{proof} Let $n=\ell+\lceil k/\ell\rceil+2$ and $r=r_{-k,\ell}=-k\bmod\ell=-k+\ell\lceil k/\ell\rceil$. One readily checks that \[ \textsc{dv}_n(\mathbb{I}_{k,\ell})= (0,1,\underline{2}^{r}, \underline{1}^{\ell-r},2,3,4,\ldots,\lceil k/\ell\rceil+1). \] Using~\eqref{eq:nu-on-dv} and the comments following it, we now compute: \begin{eqnarray*} \textsc{dv}_n(\nu(\mathbb{I}_{k,\ell}))&=&
(0,1,\underline{2}^{r-1},\underline{1}^{\ell-r},2,3,4, \ldots,\lceil k/\ell\rceil+1,1);\\ \textsc{dv}_n(\nu^2(\mathbb{I}_{k,\ell}))&=&
(0,1,\underline{2}^{r-2},\underline{1}^{\ell-r},2,3,4, \ldots,\lceil k/\ell\rceil+1,\underline{1}^2);\\ \ldots&&\ldots\\ \textsc{dv}_n(\nu^r(\mathbb{I}_{k,\ell})) &=&
(0,1,\underline{1}^{\ell-r},2,3,4,\ldots,
\lceil k/\ell\rceil+1,\underline{1}^{r});\\ \textsc{dv}_n(\nu^{r+1}(\mathbb{I}_{k,\ell})) &=& (0,\underline{1}^{\ell-r},2,3,4,\ldots, \lceil k/\ell\rceil+1,\underline{1}^{r}, 0);\\ \ldots&&\ldots\\ \textsc{dv}_n(\nu^{\ell}(\mathbb{I}_{k,\ell})) &=&
(0,1,2,3,4,\ldots,\lceil k/\ell\rceil+1,\underline{1}^{r}, \underline{0}^{\ell-r}). \end{eqnarray*} Assertions (1), (4), and (5) are immediate from this computation. Next, we compute \begin{eqnarray*}
|\mathbb{I}_{k,\ell}|&=& \binom{\ell+1}{2}+\ell\lfloor k/\ell\rfloor+r_{k,\ell} =k+\binom{\ell+1}{2}; \\ \dinv(\mathbb{I}_{k,\ell})&=&\dinv(\textsc{dv}_n(\mathbb{I}_{k,\ell}))
=\binom{\ell-r+1}{2}+\binom{r+1}{2}+r(\ell-r)=\binom{\ell+1}{2}; \\
\defc(\mathbb{I}_{k,\ell})&=&|\mathbb{I}_{k,\ell}|-\dinv(\mathbb{I}_{k,\ell})=k. \end{eqnarray*} Now (2) and (3) follow from Lemma~\ref{lem:propertiesnu}.
\end{proof}
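The assertions of Lemma~\ref{Delta_II} are readily confirmed numerically. The following Python sketch (reusing the helpers from the earlier sketches; names ours) checks items (2)--(5) over a small range of $k$ and $\ell$:

```python
from math import ceil, comb

def I(k, l):
    if k == l == 0:
        return ()
    parts = [l] * (1 + k // l) + list(range(l - 1, 0, -1))
    for t in range(k % l):
        parts[1 + k // l + t] += 1
    return tuple(parts)

def nu(gamma):
    l = len(gamma)
    return tuple(p for p in (l + 1,) + tuple(x - 1 for x in gamma) if p > 0)

def dinv_par(gamma):
    conj = lambda j: sum(1 for p in gamma if p >= j)
    return sum(1 for i, part in enumerate(gamma, 1)
                 for j in range(1, part + 1)
                 if (part - j) - (conj(j) - i) in (0, 1))

def Delta(gamma):
    """Least n with gamma contained in the staircase Delta_n."""
    return max((p + i for i, p in enumerate(gamma, 1)), default=0)

for k in range(1, 8):
    for l in range(1, 7):
        g, r = I(k, l), (-k) % l
        for m in range(l + 1):
            assert dinv_par(g) == comb(l + 1, 2) + m        # item (3)
            assert sum(g) - dinv_par(g) == k                 # item (2)
            assert Delta(g) == l + ceil(k/l) + 1 + (m > r)   # items (4), (5)
            if m < l:
                g = nu(g)
```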
\subsection{The Collections $\mathcal{C}_{(k)}$.} \label{subsec:define-Ck}
\begin{definition} For each integer $k\geq 0$, define \begin{equation}\label{def:hook} \mathcal{C}_{(k)} = \bigcup_{\ell=\min(1,k)}^{\infty} \{\nu^m(\mathbb{I}_{k,\ell}):0\leq m\leq\ell\}\subseteq \mathcal{DP}_{\ast,k}. \end{equation} (The notation $\mathcal{C}_{(k)}$ is chosen so that it is compatible with Conjecture~\ref{strong_conj} below.) \end{definition}
We remark that for all $i\geq\min(1,k)$, there exists a unique $\gamma\in\mathcal{C}_{(k)}$ with $\dinv(\gamma)=i$. This follows from Lemma~\ref{Delta_II}(3) and the readily verified fact that every integer $i\geq 0$ can be written uniquely in the form $i=\binom{\ell+1}{2}+m$ for some $\ell,m\geq 0$ with $0\leq m\leq\ell$.
\begin{example} Table~\ref{tab:calc-C4} shows the first several Dyck partitions in $\mathcal{C}_{(4)}$. \begin{table}[ht] \begin{center}
\begin{tabular}{l|l|l|l} Dyck partition $\gamma$ & $\dinv(\gamma)$ & $\defc(\gamma)$ & $\Delta(\gamma)$ \\\hline\hline $\mathbb{I}_{4,1}=(1,1,1,1,1)$ & $1$ & $4$ & $6$ \\ $\nu(\mathbb{I}_{4,1})=(6)$ & $2$ & $4$ & $7$ \\\hline $\mathbb{I}_{4,2}=(2,2,2,1)$ & $3$ & $4$ & $5$ \\ $\nu(\mathbb{I}_{4,2})=(5,1,1,1)$&$4$& $4$ & $6$ \\ $\nu^2(\mathbb{I}_{4,2})=(5,4)$& $5$ & $4$ & $6$ \\\hline $\mathbb{I}_{4,3}=(3,3,3,1)$ & $6$ & $4$ & $6$ \\ $\nu(\mathbb{I}_{4,3})=(5,2,2,2)$&$7$& $4$ & $6$ \\ $\nu^2(\mathbb{I}_{4,3})=(5,4,1,1,1)$ & $8$ & $4$ & $6$ \\ $\nu^3(\mathbb{I}_{4,3})=(6,4,3)$& $9$ & $4$ & $7$ \\\hline $\mathbb{I}_{4,4}=(4,4,3,2,1)$ & $10$ & $4$ & $6$ \\ $\nu(\mathbb{I}_{4,4})=(6,3,3,2,1)$&$11$ & $4$ & $7$ \\ $\ldots$ & $\ldots$ & $\ldots$ & $\ldots$ \end{tabular} \end{center} \caption{Dyck partitions in $\mathcal{C}_{(4)}$.} \label{tab:calc-C4} \end{table} \end{example}
From Example~\ref{ex:Ikl}, we know that the partition $\lambda=(a+1,\underline{1}^b)\in\Par(n)$ of hook shape satisfies $\textsc{dp}_{n}(\textsc{dyck}(\lambda))=\mathbb{I}_{ab,a}\in\mathcal{C}_{(ab)}$. The conjugate partition $\lambda'=(b+1,\underline{1}^a)$ satisfies $\textsc{dp}_{n}(\textsc{dyck}(\lambda'))=\mathbb{I}_{ba,b}\in\mathcal{C}_{(ab)}$. Thus, we could satisfy Conjecture~\ref{conj:DP}(c) for this $\lambda$ by setting $\mu=(ab)$ and $\mathcal{DP}_{\mu}=\overline{\mathcal{DP}}_{\mu}=\mathcal{C}_{(ab)}$. The main result of this section is that this choice of $\mathcal{DP}_{\mu}$ and $\overline{\mathcal{DP}}_{\mu}$ satisfies part (b) of the conjecture:
\begin{theorem}\label{hook_chain} For all $k,n\geq 0$, $\mathcal{C}_{(k)}\cap\mathcal{DP}_n$ is $n$-opposite to itself. \end{theorem}
We prove this theorem in the following subsections. As an example of the theorem, when $k=4$ and $n=6$, we see from Table~\ref{tab:calc-C4} that \[ \sum_{\gamma\in\mathcal{C}_{(4)}\cap\mathcal{DP}_6} q^{\area_6(\gamma)}t^{\dinv(\gamma)}
= q^{10}t^1+q^8t^3+q^7t^4+q^6t^5+q^5t^6+q^4t^7+q^3t^8+q^1t^{10}, \] which is jointly symmetric in $q$ and $t$. Replacing $n=6$ by $n=5$, the sum becomes $q^3t^3$.
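This generating-function computation can be automated. The Python sketch below (helpers as in the earlier sketches; names ours) enumerates $\mathcal{C}_{(4)}\cap\mathcal{DP}_6$ and confirms the joint symmetry of the exponent pairs:

```python
def I(k, l):
    if k == l == 0:
        return ()
    parts = [l] * (1 + k // l) + list(range(l - 1, 0, -1))
    for t in range(k % l):
        parts[1 + k // l + t] += 1
    return tuple(parts)

def nu(gamma):
    l = len(gamma)
    return tuple(p for p in (l + 1,) + tuple(x - 1 for x in gamma) if p > 0)

def dinv_par(gamma):
    conj = lambda j: sum(1 for p in gamma if p >= j)
    return sum(1 for i, part in enumerate(gamma, 1)
                 for j in range(1, part + 1)
                 if (part - j) - (conj(j) - i) in (0, 1))

def Delta(gamma):
    return max((p + i for i, p in enumerate(gamma, 1)), default=0)

n, k = 6, 4
pairs = set()
for l in range(1, n):                 # Delta >= l + 2, so larger l cannot fit
    g = I(k, l)
    for m in range(l + 1):
        if Delta(g) <= n:
            d = dinv_par(g)
            pairs.add((n*(n-1)//2 - k - d, d))     # (area_6, dinv)
        if m < l:
            g = nu(g)
assert pairs == {(10, 1), (8, 3), (7, 4), (6, 5), (5, 6), (4, 7), (3, 8), (1, 10)}
assert pairs == {(d, a) for (a, d) in pairs}       # jointly symmetric
```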
\subsection{Proof of the $k=0$ Case.} \label{subsec:prove-hook-chain0}
\begin{lemma}\label{lem:C0-opp} For all $n\geq 0$, $\mathcal{C}_{(0)}\cap\mathcal{DP}_n$ is $n$-opposite to itself.
\end{lemma} \begin{proof} Taking $k=0$ in Lemma~\ref{Delta_II}(4) and (5), we see that $\Delta(\nu^m(\mathbb{I}_{0,\ell}))$ is $\ell+1$ for $m=0$, and $\ell+2$ for $m>0$. So, $\nu^m(\mathbb{I}_{0,\ell})\in\mathcal{DP}_n$ iff $\Delta(\nu^m(\mathbb{I}_{0,\ell}))\leq n$ iff either $m=0$ and $\ell\leq n-1$, or $m>0$ and $\ell\leq n-2$. Therefore, \[ \mathcal{C}_{(0)}\cap\mathcal{DP}_n=\left(\bigcup_{\ell=0}^{n-2}
\{\nu^m(\mathbb{I}_{0,\ell}):0\leq m\leq\ell\}\right)\cup\{\mathbb{I}_{0,n-1}\}. \] Now by Lemma~\ref{Delta_II}(2) and (3) and the definition of $\area_n$, \[ \{(\dinv(\gamma),\area_n(\gamma)):\gamma\in\mathcal{C}_{(0)}\cap\mathcal{DP}_n\} = \left\{\left(0,\binom{n}{2}\right),\left(1,\binom{n}{2}-1\right),\ldots, \left(\binom{n}{2},0\right)\right\}.\] This list of exponent pairs is jointly symmetric, so $\mathcal{C}_{(0)}\cap\mathcal{DP}_n$ is $n$-opposite to itself. \end{proof}
\subsection{Exponent Pairs Appearing in $\mathcal{C}_{(k)}\cap\mathcal{DP}_n$.} \label{subsec:exp-pairs-Ck}
In the rest of this section, let $k>0$ be a fixed integer. Define two functions $\tau, s:\mathbb{Z}_{>0}\to\mathbb{Z}_{>0}$ (depending on $k$) by \begin{equation}\label{taus} \tau(\ell)=\lceil k/\ell\rceil,\quad s(\ell)=\ell+\tau(\ell)+1. \end{equation} We translate Theorem~\ref{hook_chain} into a question about the sequence $A_{(k)}=(a_1,a_2,\ldots)$, defined as follows: for $i$ of the form $\binom{\ell+1}{2}+m$ where $\ell\geq 1$ and $0\leq m\leq\ell$, let $a_i=\Delta(\nu^m(\mathbb{I}_{k,\ell}))$. By Lemma~\ref{Delta_II}(4) and (5), we have $$a_i=\begin{cases} s(\ell),& \text{ for $0\leq m\leq r_{-k,\ell}$;} \\ s(\ell)+1,& \text{ for $1+r_{-k,\ell}\leq m\leq\ell$.}
\end{cases}$$
\begin{example}
For $1\leq k\leq 5$, the sequences $A_{(k)}$ begin as follows: \begin{eqnarray*} A_{(1)}&=&(3,4;4,4,5;5,5,5,6;6,6,6,6,7;\ldots); \\ A_{(2)}&=&(4,5;4,5,5;5,5,6,6;6,6,6,7,7;\ldots); \\ A_{(3)}&=&(5,6;5,5,6;5,6,6,6;6,6,7,7,7;\ldots); \\ A_{(4)}&=&(6,7;5,6,6;6,6,6,7; 6,7,7,7,7;\ldots) \text{\quad[cf. Table~\ref{tab:calc-C4}];} \\ A_{(5)}&=&(7,8;6,6,7;6,6,7,7;7,7,7,7,8;7,8,8,8,8,8;\ldots). \end{eqnarray*} When $k=50$, we have $A_{(50)}=($52, 53; 28, 29, 29; 21, 21, 22, 22; 18, 18, 18, 19, 19; 16, 17, 17, 17, 17, 17; 16, 16, 16, 16, 16, 17, 17; 16, 16, 16, 16, 16, 16, 16, 17; 16, 16, 16, 16, 16, 16, 16, 17, 17; $\ldots)$.
\end{example}
For any $n\in\mathbb{Z}_{>0}$, define a set $$S_{n,k}=\Big\{(\dinv(\gamma),\area_n(\gamma)): \gamma\in \mathcal{C}_{(k)}\cap\mathcal{DP}_n\Big\}.$$ Then, by Lemma~\ref{Delta_II}(2) and (3) and the definition of $\area_n$, $$S_{n,k}=\Big\{\left(i,\binom{n}{2}-k-i\right): i\ge 1, a_i\le n\Big\}.$$ Thus Theorem~\ref{hook_chain} is equivalent to the following statement, proved below: for all $n,k,x,y>0$, $(x,y)\in S_{n,k}$ if and only if $(y,x)\in S_{n,k}$.
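The sequences $A_{(k)}$ are easy to generate block-by-block from this case description; the following Python sketch (names ours) reproduces the blocks listed in the example above:

```python
from math import ceil

def A_blocks(k, max_l):
    """Blocks of A_(k): the l-th block lists a_i for i = C(l+1,2)+m, 0 <= m <= l."""
    blocks = []
    for l in range(1, max_l + 1):
        s, r = l + ceil(k / l) + 1, (-k) % l      # s(l) and r_{-k,l}
        blocks.append([s] * (1 + r) + [s + 1] * (l - r))
    return blocks

assert A_blocks(1, 3) == [[3, 4], [4, 4, 5], [5, 5, 5, 6]]
assert A_blocks(2, 4) == [[4, 5], [4, 5, 5], [5, 5, 6, 6], [6, 6, 6, 7, 7]]
assert A_blocks(4, 4) == [[6, 7], [5, 6, 6], [6, 6, 6, 7], [6, 7, 7, 7, 7]]
```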
\subsection{Preliminary Lemmas and Definitions.} \label{subsec:hook-lem-def}
Define $L=\max\{\ell:\ell\leq\tau(\ell)\}$. The following lemma states some fundamental facts about the functions $\tau$ and $s$ defined in~\eqref{taus}.
\begin{lemma} \begin{enumerate} \item[(1)] $L=\max\{\ell:\ell^2-\ell<k\}$. \item[(2)] $\tau$ is a weakly decreasing function, and so
$s(\ell+1)\leq s(\ell)+1$ for all $\ell$. \item[(3)] The restriction of $\tau$ to $\{1,2,\ldots,L\}$ is a strictly decreasing function, and so $s$ is weakly decreasing on $\{1,2,\ldots,L\}$. \item[(4)] For all $\ell>L$, $\tau(\ell+1)\geq\tau(\ell)-1$, and so $s$ is weakly increasing on $\{L+1,L+2,\ldots\}$. \item[(5)] For all $\ell$, we have $\tau^2(\ell)\leq\ell$, and so $s(\tau(\ell))\leq s(\ell)$. \item[(6)] For all $\ell\leq L$, we have $\tau^2(\ell)=\ell$, and so $s(\tau(\ell))=s(\ell)$. \end{enumerate} \end{lemma}
\begin{proof} First note that for all $j\in\mathbb{Z}$ and $x\in\mathbb{R}$, $j\leq\ceil{x}$ iff $j<x+1$. So, $\ell\leq\tau(\ell)=\ceil{k/\ell}$ iff $\ell<(k/\ell)+1$ iff $\ell^2-\ell<k$, proving (1). If $\ell<\ell'$, then $k/\ell>k/\ell'$, hence $\ceil{k/\ell}\geq\ceil{k/\ell'}$, proving (2). To prove (3), assume $1<\ell\leq L$, and show $\tau(\ell)<\tau(\ell-1)$. By (1), $\ell^2-\ell<k$, which rearranges to $k/\ell<k/(\ell-1)-1$. This implies $\ceil{k/\ell}<\ceil{k/(\ell-1)}$, as needed. To prove (4), assume $\ell\geq L+1$, so $\ell^2-\ell\geq k$ by (1). Then $\ell^2+\ell>k$, which rearranges to $(k/\ell)-1<k/(\ell+1)$. Taking the ceiling of both sides gives $\tau(\ell)-1\leq\tau(\ell+1)$, as needed. To prove (5), note that $k/\ell\leq\ceil{k/\ell}$ implies $\ell\geq k/\ceil{k/\ell}$. Since $\ell$ is an integer, $\ell\geq\ceil{k/\ceil{k/\ell}}=\tau^2(\ell)$, as needed. To prove (6), assume $\ell\leq L$; by (5), it suffices to prove $\ell\leq\tau^2(\ell)$. Now $\ell^2-\ell\leq k$, which rearranges to $(k/\ell)+1\leq k/(\ell-1)$. Since $\ceil{k/\ell}<(k/\ell)+1$, we get $\ceil{k/\ell}<k/(\ell-1)$, which rearranges to $\ell<k/\ceil{k/\ell}+1$. By the first sentence of this proof, $\ell\leq\tau^2(\ell)$ follows. \end{proof}
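Parts (2) and (4)--(6) of this lemma can also be spot-checked numerically (Python; names ours):

```python
from math import ceil

for k in range(1, 60):
    tau = lambda l: ceil(k / l)
    L = max(l for l in range(1, k + 2) if l * l - l < k)    # part (1)
    for l in range(1, 40):
        assert tau(tau(l)) <= l                             # part (5)
        if l <= L:
            assert tau(tau(l)) == l                         # part (6)
        else:
            assert tau(l + 1) >= tau(l) - 1                 # part (4)
    assert all(tau(l) >= tau(l + 1) for l in range(1, 40))  # part (2)
```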
The following definitions will help us analyze $S_{n,k}$. We may assume that $S_{n,k}\neq\emptyset$, which is equivalent to the condition $\min\{s(\ell):\ell\geq 1\}\leq n$.
\begin{definition} Let $r_{j,\ell}^+$ be the unique integer in the range $\{1,2,\ldots,\ell\}$ such that $j\equiv r_{j,\ell}^+\pmod{\ell}$. Define \begin{flalign*} && x_1=\min\{i: a_i\le n\},\quad \quad & x_2=\max\{i: a_i\le n\} =\max\{i: a_i= n\},&&\\ && \ell_1=\min\{\ell : s(\ell)\le n\},\quad &\ell_2=\max\{\ell : s(\ell)\le n\}=\max\{\ell : s(\ell)= n\}, && \\ && \ell_1'=\min\{\ell:s(\ell)<n\}, \quad &\ell_2'=\min\{\ell>\ell_1': s(\ell)=n\}\quad\text{(if these exist).} && \end{flalign*} \end{definition}
We need the following remarks about this definition. The second formulas for $\ell_2$ and $x_2$ follow from part (2) of the previous lemma and the definition of $a_i$. The numbers $\ell_1'$ and $\ell_2'$ are undefined iff $\min\{s(\ell):\ell\geq 1\}=n$. Using properties of $s$ from the previous lemma, one may check that $x_1=\binom{\ell_1+1}{2}$, $x_2=\binom{\ell_2+1}{2}+r_{-k,\ell_2}$,
$\ell_1\leq L\leq\ell_2$, and (when $\ell_1'$ and $\ell_2'$ are defined) $\ell_1\leq\ell_1'\leq L<\ell_2'\leq\ell_2$.
\begin{lemma}\label{easylemma} \begin{enumerate}
\item[(1)] If $\ell<\tau(\ell)$ and $s(\ell)=n$, then $r_{-k,\ell}=r_{-k,n-1-\ell}$ and ${r^+_{k,\ell}=r^+_{k,n-2-\ell}}$. \item[(2)] $\ell_1=\tau(\ell_2)$ and $x_1+x_2=\binom{n}{2}-k$. \item[(3)] If $n>\min\{s(\ell):\ell\geq 1\}$, then $\ell'_1-\ell_1=\ell_2-\ell_2'$. \end{enumerate} \end{lemma} \begin{proof} (1) Since $\lceil k/\ell \rceil=\tau(\ell)=n-1-\ell$, we have $k+r=\ell(n-1-\ell)$ for a unique $r$ satisfying $0\le r<\ell$. Then $-k=-\ell(n-1-\ell)+r$, implying $-k\equiv r \pmod \ell$ and $-k\equiv r \pmod{n-1-\ell}$. Note that $0\le r<\ell<n-1-\ell$, so $r_{-k,\ell}=r=r_{-k,n-1-\ell}$. Next, note that $k=\ell(n-1-\ell)-r =\ell(n-2-\ell)+(\ell-r)$ and $0<\ell-r\le \ell\le n-2-\ell$, thus $r^+_{k,\ell}=\ell-r=r^+_{k,n-2-\ell}$.
(2) To show the first equality: since $s(\tau(\ell_2))\le s(\ell_2)=n$, we conclude that $\tau(\ell_2)\ge \ell_1$ by the definition of $\ell_1$. On the other hand, $\ell_1\le L$ implies $s(\tau(\ell_1))=s(\ell_1)\le n$, so we conclude that $\tau(\ell_1)\le \ell_2$ by the definition of $\ell_2$; then the weakly decreasing property of $\tau$ implies that $\tau^2(\ell_1)\ge \tau(\ell_2)$, i.e., $\ell_1\ge \tau(\ell_2)$. Thus $\ell_1=\tau(\ell_2)$.
To show the second equality: $s(\ell_2)=n$ implies $\ell_2+\tau(\ell_2)+1=n$, thus $\ell_2+\ell_1+1=n$. It follows as in the proof of (1) that $k+r_{-k,\ell_2}=\ell_2(n-1-\ell_2)=\ell_1\ell_2$. So $x_1+x_2+k=\binom{\ell_1+1}{2}+\binom{\ell_2+1}{2}+r_{-k,\ell_2}+k= \binom{\ell_1+1}{2}+\binom{\ell_2+1}{2}+\ell_1\ell_2 =\binom{\ell_1+\ell_2+1}{2}=\binom{n}{2}$.
(3) Applying (2) to $n-1$ and noticing that $\ell_2'-1=\max\{\ell : s(\ell)=n-1\}$, we conclude that $\ell_1'=\tau(\ell'_2-1)$. Then $\ell_1'+(\ell'_2-1)+1=s(\ell'_2-1)= n-1=\ell_1+\ell_2$, thus $\ell_1'-\ell_1=\ell_2-\ell_2'$. \end{proof}
\subsection{Proof of Symmetry of $S_{n,k}$.} \label{subsec:symm-Snk}
We are now ready to prove the symmetry of $S_{n,k}$. First consider the case where $\ell_1'$ and $\ell_2'$ are defined and $\ell_1<\ell_1'$. Then the sequence $A_{(k)}$ has the following form, where $*$ represents a number $>n$, and ${\rm o}$ represents a number $\le n$. \begin{equation}\label{eq:pattern} \aligned
&\cdots **;
\underbrace{n,\ldots,n}_{1+r_{-k,\ell_1}},
\underbrace{n+1,\ldots,n+1}_{r^+_{k,\ell_1}};
\cdots\cdots;
\underbrace{n,\ldots,n}_{1+r_{-k,\ell_1'-1}},
\underbrace{n+1,\ldots,n+1}_{r^+_{k,\ell_1'-1}};
{\rm oo}\cdots\\ &\quad\quad\quad \cdots {\rm oo}; \underbrace{n,\ldots,n}_{1+r_{-k,\ell_2'}},
\underbrace{n+1,\ldots,n+1}_{r^+_{k,\ell_2'}};
\cdots\cdots;
\underbrace{n,\ldots,n}_{1+r_{-k,\ell_2}},
\underbrace{n+1,\ldots,n+1}_{r^+_{k,\ell_2}};
**\cdots \endaligned \end{equation} So the subsequence $(a_{x_1},a_{x_1+1},\ldots,a_{x_2})$ of $A_{(k)}$ has the form \begin{equation}\label{general case}
\underbrace{\rm o\cdots o}_{1+r_{-k,\ell_1}}
\underbrace{*\cdots *}_{r^+_{k,\ell_1}}
\cdots\cdots
\underbrace{\rm o\cdots o}_{1+r_{-k,\ell_1'-1}}
\underbrace{*\cdots*}_{r^+_{k,\ell_1'-1}}
\underbrace{\rm
o\cdots\cdots o
}_{{\rm central\; block\; } B}
\underbrace{*\cdots*}_{r^+_{k,\ell_2'}}
\underbrace{\rm o\cdots o}_{1+r_{-k,\ell_2'+1}}
\cdots\cdots
\underbrace{*\cdots*}_{r^+_{k,\ell_2-1}}
\underbrace{\rm o\cdots o}_{1+r_{-k,\ell_2}} \end{equation} We see that $S_{n,k}$ is symmetric if $x_1+x_2=\binom{n}{2}-k$ and the pattern of {\rm o}'s and $*$'s in \eqref{general case} is equal to its own reversal. The first condition holds due to Lemma~\ref{easylemma}(2), which also tells us that $\tau(\ell_2)=\ell_1$. Since $s(\ell_2)=n$, we deduce that $\ell_2+\ell_1+1=n$. It now follows from Lemma~\ref{easylemma}(1) that the leftmost and rightmost blocks of o's in~\eqref{general case} have the same length, the leftmost and rightmost blocks of $*$'s have the same length, and so on. Part (3) of the lemma ensures that there are the same number of blocks of o's to the left and right of the central block $B$, so that~\eqref{general case} has the required symmetry.
Another case that can occur is when $\ell_1'$ and $\ell_2'$ are defined and $\ell_1=\ell_1'$ (hence $\ell_2=\ell_2'$ by part (3) of the lemma). In this case, the $n$'s and $(n+1)$'s explicitly displayed in~\eqref{eq:pattern} are absent, and~\eqref{general case} consists of just the central block of o's. Here it suffices to know that $x_1+x_2=\binom{n}{2}-k$, which follows from part (2) of the lemma.
A final special case occurs when $\min\{s(\ell):\ell\geq 1\}=n$ and hence $\ell_1',\ell_2'$ are undefined. In this case, \eqref{general case} should be replaced by \begin{equation}\label{degenerate case}
\underbrace{\rm o\cdots o}_{1+r_{-k,\ell_1}}
\underbrace{*\cdots *}_{r^+_{k,\ell_1}}
\underbrace{\rm o\cdots o}_{1+r_{-k,\ell_1+1}}
\cdots\cdots
\underbrace{\rm o\cdots o}_{1+r_{-k,\ell_2-1}}
\underbrace{*\cdots*}_{r^+_{k,\ell_2-1}}
\underbrace{\rm o\cdots o}_{1+r_{-k,\ell_2}}, \end{equation} which is equal to its reversal by the same argument used above.
\subsection{Bijection on $\mathcal{C}_{(k)}\cap\mathcal{DP}_n$.} \label{subsec:bij-Ck}
We can use the preceding constructions to describe an explicit bijection $F_{n,k}$ on $\mathcal{C}_{(k)}\cap\mathcal{DP}_n$ that interchanges the statistics $\area_n$ and $\dinv$. Suppose $\gamma$ is a Dyck partition in $\mathcal{C}_{(k)}\cap\mathcal{DP}_n$ with $\area_n(\gamma)=i$ (and hence $\dinv(\gamma)=\binom{n}{2}-k-i$). Write $i$ uniquely in the form $\binom{\ell+1}{2}+m$ where $\ell\geq 0$ and $0\leq m\leq\ell$. Then define $F_{n,k}(\gamma)=\nu^m(\mathbb{I}_{k,\ell})$, which has $\dinv$ equal to $i$ (and hence $\area_n$ equal to $\binom{n}{2}-k-i$) by parts (2) and (3) of Lemma~\ref{Delta_II}. The calculations in the preceding subsections are needed to verify that $F_{n,k}(\gamma)$ indeed lies in $\mathcal{DP}_n$. Evidently, $F_{n,k}$ is an involution and is the unique bijection on $\mathcal{C}_{(k)}\cap\mathcal{DP}_n$ interchanging the two statistics.
\section{Construction for Almost-Hook Shapes} \label{sec:almost_hook_sec}
In this section, we construct collections of Dyck partitions $\mathcal{C}_{(u,v)}$ that contain all the objects $\textsc{dp}_n(\textsc{dyck}(\lambda))$ for almost-hook shapes $\lambda$. The main result is that $\mathcal{C}_{(ab-b-1,b-1)}\cap\mathcal{DP}_n$ and $\mathcal{C}_{(ab-a-1,a-1)}\cap\mathcal{DP}_n$ are $n$-opposite subsets for all $a,b\geq 2$ and all $n\geq 0$.
\subsection{Removing Cells from Partition Diagrams.} \label{subsec:remove-cells}
To describe the Dyck partitions in $\mathcal{C}_{(u,v)}$, it is convenient to introduce the following construction that removes specified cells from a partition diagram. We totally order $\mathbb{Z}_{>0}^2$ by setting
$(i,j)<(i',j')$ iff either $i+j<i'+j'$, or $i+j=i'+j'$ and $i<i'$. Given $\gamma\in\Par$ and $k\in\{1,2,\ldots,|\gamma|\}$, let $u_k=u_k(\gamma)$ be the $k$'th largest cell $(i,j)$ in the diagram of $\gamma$ relative to this total ordering.
\begin{definition} Suppose $\gamma\in\Par$ and $u_{i_1},\ldots,u_{i_s}$ are cells in $\dg(\gamma)$ such that $\dg(\gamma)\setminus\{u_{i_1},\ldots,u_{i_s}\}$ is also the diagram of some partition $\beta$. By a slight abuse of notation, we define \[ \gamma\setminus\{u_{i_1},\ldots,u_{i_s}\}=\beta. \] \end{definition}
\begin{example} Given $\gamma=(4,4,4,4,2,1)$, the cells $u_1,u_2,u_3,u_4$ in $\dg(\gamma)$ are shown below. \[ \tableau{ {}&{}&{}&{}\\ {}&{}&{}&{}\\ {}&{}&{}&{}\\ {}&{}&{u_4}&{u_1}\\ {}&{u_3}\\ {u_2} } \] We have $\gamma\setminus\{u_1,u_4\}=(4,4,4,2,2,1)$ and $\gamma\setminus\{u_2,u_3\}=(4,4,4,4,1)$. \end{example}
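As a quick computational check of the total order and the removal operation, the preceding example can be reproduced with the following short Python sketch (all function names here are ours, introduced only for illustration):

```python
def cells(gamma):
    # All cells (i, j) of the diagram of gamma, 1-indexed by row and column.
    return [(i, j) for i, row in enumerate(gamma, start=1)
                   for j in range(1, row + 1)]

def u(gamma, k):
    # The k'th largest cell under the total order: (i,j) < (i',j') iff
    # i+j < i'+j', or i+j = i'+j' and i < i'.
    ranked = sorted(cells(gamma), key=lambda c: (c[0] + c[1], c[0]),
                    reverse=True)
    return ranked[k - 1]

def remove_cells(gamma, ks):
    # Remove the cells u_k (k in ks) from the diagram of gamma;
    # asserts the result is again (the diagram of) a partition.
    removed = {u(gamma, k) for k in ks}
    rows = [len([j for j in range(1, row + 1) if (i, j) not in removed])
            for i, row in enumerate(gamma, start=1)]
    rows = [r for r in rows if r > 0]
    assert all(rows[t] >= rows[t + 1] for t in range(len(rows) - 1))
    return tuple(rows)

gamma = (4, 4, 4, 4, 2, 1)
print(remove_cells(gamma, [1, 4]))  # (4, 4, 4, 2, 2, 1)
print(remove_cells(gamma, [2, 3]))  # (4, 4, 4, 4, 1)
```

The two printed partitions agree with $\gamma\setminus\{u_1,u_4\}$ and $\gamma\setminus\{u_2,u_3\}$ computed in the example.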
\subsection{The Collections $\mathcal{C}_{(u,v)}$.} \label{subsec:Cuv}
Fix an almost-hook shape $\lambda=(b,2,\underline{1}^{a-2})$, where $a,b\geq 2$. From Example~\ref{ex:Ikl}, we have \begin{eqnarray*} \textsc{dp}_n(\textsc{dyck}(\lambda))&=&\mathbb{I}_{(b-1)a, b-1} + \mathbb{I}_{a-2,1} \\
&=&(\underline{b}^{a-1},b-1,(b-1)\!\!\searrow\!\! 1) \\
&=&\mathbb{I}_{b(a-1),b}-(\underline{0}^{a-1},1,\underline{0}^{b-1})
=\mathbb{I}_{b(a-1),b}\setminus\{u_b\}. \end{eqnarray*}
Having observed these expressions, we define Dyck partitions $$\gamma_{\lambda,i} =\left\{ \begin{array}{ll} \mathbb{I}_{(b-1)a, i} + \mathbb{I}_{a-2,1}, &\text{ if }1\leq i\leq b-1;\\ \mathbb{I}_{b(a-1), i+1}\setminus\{u_{b}\}, &\text{ if }b-1\leq i. \end{array}\right.$$ (In particular, the two expressions coincide when $i=b-1$.) In the second case, one may check that the removal of cell $u_b$ subtracts 1 from position $j$ in $\mathbb{I}_{b(a-1),i+1}$, where
\[ j=\left\{\begin{array}{ll} i+2-r_{-b(a-1),i+1}-b+\ceil{b(a-1)/(i+1)} &\text{ if }i+1-r_{-b(a-1),i+1}\geq b;\\ 2i+2-r_{-b(a-1),i+1}-b+\ceil{b(a-1)/(i+1)} &\text{ if }i+1-r_{-b(a-1),i+1}<b.\end{array}\right. \] Next, define $$R(\gamma_{\lambda,i}) =\left\{ \begin{array}{ll} \{\nu^m(\gamma_{\lambda,i}):0\leq m\leq i\}, &\text{ if }1\leq i\leq b-2; \\ \{\nu^m(\gamma_{\lambda,i}):0\leq m\leq i+1\}, &\text{ if }b-1\leq i. \end{array}\right. $$ Finally, define \begin{equation}\label{def:almost hook}
\mathcal{C}_{(ab-b-1,b-1)}=\bigcup_{i=1}^{\infty} R(\gamma_{\lambda,i}).
\end{equation} The notation $\mathcal{C}_{(ab-b-1,b-1)}$ is chosen so that it is compatible with Conjecture~\ref{strong_conj} below; note that $a$, $b$, and $\lambda$ are uniquely determined by $ab-b-1$ and $b-1$.
\begin{lemma}\label{ahk} Fix integers $a,b\geq 2$. \begin{enumerate} \item $\mathcal{C}_{(ab-b-1,b-1)}\subseteq \mathcal{DP}_{\ast,ab-2} =\{\gamma\in\Par:\defc(\gamma)=ab-2\}$. \item For every integer $d\geq 2$, there exists a unique $\gamma\in\mathcal{C}_{(ab-b-1,b-1)}$ with $\dinv(\gamma)=d$. \end{enumerate} \end{lemma} \begin{proof}
This is similar to the proof of Lemma~\ref{Delta_II}, using Lemma~\ref{lem:propertiesnu} and the following Dyck vectors for $\gamma_{\lambda,i}$. For $i$ in the range $1\leq i\leq b-1$, let $k=(b-1)a$ and $n=i+\ceil{k/i}+2$; then \[ \mathcal{DV}_n(\gamma_{\lambda,i})= (0,1,\underline{2}^{r_{-k,i}},\underline{1}^{i-r_{-k,i}},
2\!\!\nearrow\!\!(\ceil{k/i}-a+2),
(\ceil{k/i}-a+2)\!\!\nearrow\!\!\ceil{k/i}). \] For $i\geq b$, let $k=b(a-1)$ and $n=i+\ceil{k/(i+1)}+3$; then \[ \mathcal{DV}_n(\gamma_{\lambda,i})= (0,1,\underline{2}^{r_{-k,i+1}},\underline{1}^{i+1-r_{-k,i+1}},
2\!\!\nearrow\!\!(\ceil{k/(i+1)}+1))+\epsilon, \] where $\epsilon$ is the sequence with a 1 in position \[ \left\{\begin{array}{ll} r_{-k,i+1}+b+2 & \text{ if }i+1-r_{-k,i+1}\geq b, \\ r_{-k,i+1}+b-i+2& \text{ if }i+1-r_{-k,i+1}<b, \end{array}\right. \] and zeroes elsewhere. \end{proof}
Part (2) of the previous lemma justifies the following definition. \begin{definition} For $a,b,i\geq 2$, we let $C_{(ab-b-1,b-1),i}$ denote the unique $\gamma\in\mathcal{C}_{(ab-b-1,b-1)}$ with $\dinv(\gamma)=i$. \end{definition}
\begin{theorem}\label{almost_hook_chain} For all $a,b\geq 2$ and all $n\geq 0$, $\mathcal{C}_{(ab-b-1,b-1)}\cap\mathcal{DP}_n$ and $\mathcal{C}_{(ab-a-1,a-1)}\cap\mathcal{DP}_n$ are $n$-opposite to each other.
\end{theorem}
We prove this theorem in the following subsections. Throughout, we fix $a,b\geq 2$ and define $k=ab-2$, $S=(ab-b-1,b-1)$, and $S'=(ab-a-1,a-1)$.
\subsection{Exponent Pairs Appearing in $\mathcal{C}_{S}$ and $\mathcal{C}_{S'}$.} \label{subsec:exp-pairs-almost-hook}
Define the sequence $A_{S}=(a_2,a_3,\ldots)$ by setting $a_i=\Delta(C_{S,i})$ for $i\geq 2$. Define the sequence $A_{S'}=(b_2,b_3,\ldots)$ by setting $b_i=\Delta(C_{S',i})$ for $i\geq 2$.
For any $n\geq 0$, define $$\begin{array}{lcl} T_{n,k}&=&\{(\dinv(\gamma), \area_n(\gamma)):\gamma\in\mathcal{C}_{S}\cap\mathcal{DP}_n\};\\ T'_{n,k}&=&\{(\dinv(\gamma), \area_n(\gamma)):\gamma\in\mathcal{C}_{S'}\cap\mathcal{DP}_n\}.
\end{array}$$ By Lemma~\ref{ahk}, $$\aligned &T_{n,k}=\left\{\left(i,\binom{n}{2}-k-i\right):i\ge 2, a_i\le n\right\};\\ &T'_{n,k}=\left\{\left(i,\binom{n}{2}-k-i\right):i\ge 2, b_i\le n\right\}. \endaligned$$
Theorem~\ref{almost_hook_chain} is equivalent to the assertion that $(x,y)\in T_{n,k}$ iff $(y,x)\in T'_{n,k}$.
\begin{example} Let $a=3$ and $b=4$, so $\lambda=(4,2,1)$, $\lambda'=(3,2,1,1)$, $k=10$, $S=(7,3)$, and $S'=(8,2)$.
We compute $$\aligned A_S&=(\underbrace{11, 12}_{\Delta(R(\gamma_{\lambda,1}))}, \underbrace{\underline{8}^2, 9}_{\Delta(R(\gamma_{\lambda,2}))}, \underbrace{7, \underline{8}^4}_{\Delta(R(\gamma_{\lambda,3}))}, \underbrace{\underline{8}^3, \underline{9}^3}_{\Delta(R(\gamma_{\lambda,4}))}, \underbrace{\underline{9}^5, \underline{10}^2}_{\Delta(R(\gamma_{\lambda,5}))}, \underbrace{\underline{10}^7, 11}_{\Delta(R(\gamma_{\lambda,6}))}, 10, 11,\ldots)\\ &=(11,12,\underline{8}^2, 9, 7, \underline{8}^7, \underline{9}^8, \underline{10}^9, 11, 10, \underline{11}^{10},\ldots); \\
A_{S'}&=(\underbrace{10, 11}_{\Delta(R(\gamma_{\lambda',1}))}, \underbrace{7, \underline{8}^3}_{\Delta(R(\gamma_{\lambda',2}))}, \underbrace{\underline{8}^4, 9}_{\Delta(R(\gamma_{\lambda',3}))}, \underbrace{\underline{8}^2, \underline{9}^4}_{\Delta(R(\gamma_{\lambda',4}))}, \underbrace{\underline{9}^4, \underline{10}^3}_{\Delta(R(\gamma_{\lambda',5}))}, \underbrace{\underline{10}^6, \underline{11}^2}_{\Delta(R(\gamma_{\lambda',6}))}, \underbrace{\underline{11}^8, \underline{12}}_{\Delta(R(\gamma_{\lambda',7}))},11,12, \ldots)\\ &=(10,11,7,\underline{8}^7,9, \underline{8}^2, \underline{9}^8, \underline{10}^9, \underline{11}^{10},12,11,\underline{12}^{11},\ldots). \endaligned$$
So we find, in accordance with Theorem~\ref{almost_hook_chain}, that
$$\begin{array}{lcl}T_{7,10}&=&\{(7,21-10-7)\}=\{(7,4)\};\\
T'_{7,10}&=&\{(4, 21-10-4)\}=\{(4,7)\};\\
T_{8,10} &=& \{(4,14),(5,13),...,(14,4)\}\setminus\{(6,12)\};\\
T'_{8,10} &=& \{(4,14),(5,13),...,(14,4)\}\setminus\{(12,6)\};\\
T_{9,10}&=&\{(4,22),(5,21),...,(22,4)\}=T'_{9,10};\\
T_{10,10} &=& \{(4,31),(5,30),...,(33,2)\}\setminus\{(32,3)\};\\
T'_{10,10} &=& \{(2,33),(3,32),...,(31,4)\}\setminus\{(3,32)\}. \end{array}$$ \end{example}
\subsection{Proof of Theorem~\ref{almost_hook_chain}} \label{subsec:prove-almost-hook}
We may assume $T_{n,k}$ and $T'_{n,k}$ are nonempty.
Define $s(\ell)=\ell+\ceil{(b-1)a/\ell}+1$ and $\tilde{s}(\ell)=\ell+\ceil{b(a-1)/\ell}+1$. Then let
\begin{flalign*} && x_1=\min\{i: a_i\le n\},\quad \quad & x_2=\max\{i: a_i\le n\}=\max\{i: a_i= n\},&&\\ && y_1=\min\{i: b_i\le n\},\quad \quad & y_2=\max\{i: b_i\le n\}=\max\{i: b_i= n\},&&\\ && \ell_1=\min\{\ell : s(\ell)\le n\},\quad &\ell_2=\max\{\ell : s(\ell)\le n\}=\max\{\ell : s(\ell)= n\}, && \\ && \ell_1'=\min\{\ell:s(\ell)<n\}, \quad &\ell_2'=\min\{\ell>\ell_1': s(\ell)=n\} \quad\text{(if these exist)}.&& \end{flalign*} Define $\widetilde{\ell_1},\widetilde{\ell_2},\widetilde{\ell_1'}, \widetilde{\ell_2'}$ similarly by replacing $s(\ell)$ with $\tilde{s}(\ell)$ throughout. Special cases occur when $\ell_1'$, $\ell_2'$, $\widetilde{\ell_1'}$, or $\widetilde{\ell_2'}$ are undefined, or when $\ell_1=\ell_1'$ or $\widetilde{\ell_1} =\widetilde{\ell_1'}$. These cases, which are easier than the generic case discussed below, can be handled by arguments analogous to that leading to~\eqref{degenerate case}.
Let $j$ and $i'$ be the integers determined by $R(\gamma_{\lambda,i})=\{C_{S,j}, C_{S,j+1},\ldots,C_{S,j+i'}\}$. Then $(a_j, a_{j+1},\ldots,a_{j+i'})$ has the form $(d,\ldots,d,d+1,\ldots,d+1)$. Hence $C_{S,x_1}= \gamma_{\lambda,i}$ for some $i$. Using $$\Delta(\gamma_{\lambda,i})= \begin{cases} \Delta(\mathbb{I}_{(b-1)a, i})=i+\lceil \frac{ab-a}{i}\rceil +1,& \textrm{ if } i\le b-1, \\ \Delta(\mathbb{I}_{b(a-1), i+1})=i+\lceil \frac{ab-b}{i+1}\rceil +2,& \textrm{ if } i\ge b, \end{cases}$$ one may check that:
(i) If $a<b$, then
$$ \aligned &\Delta(\gamma_{\lambda,1})\geq\cdots\geq\Delta(\gamma_{\lambda,a-1})>\Delta(\gamma_{\lambda,a})=a+b=\Delta(\gamma_{\lambda,b-1})<\Delta(\gamma_{\lambda,b})\leq \Delta(\gamma_{\lambda,b+1})\leq\cdots,\\ &\Delta(\gamma_{\lambda',1})\geq\cdots\geq\Delta(\gamma_{\lambda',a-2})>\Delta(\gamma_{\lambda',a-1})=a+b=\Delta(\gamma_{\lambda',b-2})<\Delta(\gamma_{\lambda',b-1}) \leq\Delta(\gamma_{\lambda',b})\leq\cdots,\\ \endaligned $$ and $\Delta(\gamma_{\lambda,i})\le a+b$ when $a<i<b-1$, $\Delta(\gamma_{\lambda',i})\le a+b$ when $a-1<i<b-2$.
(ii) If $a=b$, then $\lambda=\lambda'$ and $$\Delta(\gamma_{\lambda,1})\geq\cdots\geq\Delta(\gamma_{\lambda,a-2})>\Delta(\gamma_{\lambda,a-1})=2a<\Delta(\gamma_{\lambda,a})\leq \Delta(\gamma_{\lambda,a+1})\leq\cdots.$$
First consider the case $n\leq a+b$. If $a=b$ then $$\mathcal{C}_{(a^2-a-1,a-1)}\cap\mathcal{DP}_n= \left\{\begin{array}{ll} \emptyset, & \text{ for }n<2a;\\ \{\gamma_{\lambda,a-1}\}, & \text{ for }n=2a, \end{array}\right.$$
and $\area_n(\gamma_{\lambda,a-1})=\dinv(\gamma_{\lambda,a-1})=\binom{a}{2}+1$ for $n=2a$; hence $\mathcal{C}_{(a^2-a-1,a-1)}\cap\mathcal{DP}_n$ is $n$-opposite to itself.
When $a<b$ it is enough to show that $\mathcal{DP}_n\cap\bigcup_{i=a}^{b-1} R(\gamma_{\lambda,i})$ and $\mathcal{DP}_n\cap\bigcup_{i=a-1}^{b-2} R(\gamma_{\lambda',i})$ are $n$-opposite to each other.
Since \begin{eqnarray*} \Delta(\nu^m(\gamma_{\lambda,i})) &=& \Delta(\nu^m(\mathbb{I}_{(b-1)a, i} + \mathbb{I}_{a-2,1}))
=\Delta(\nu^m(\mathbb{I}_{(b-1)a, i})) \\ & &
\text{ for }a\leq i\leq b-1\mbox{ and }0\leq m\leq i; \\
\Delta(\nu^m(\gamma_{\lambda',i})) &=&\Delta(\nu^m(\mathbb{I}_{a(b-1), i+1}\setminus\{u_{a}\}))
=\Delta(\nu^m(\mathbb{I}_{a(b-1), i+1}))\\ & &
\text{ for } a-1\leq i\leq b-2\mbox{ and }0\leq m\leq i+1, \end{eqnarray*}
the subsequence $(a_{x_1},a_{x_1+1},\ldots,a_{x_2})$ of $A_{S}$ and the subsequence $(b_{y_1},b_{y_1+1},\ldots,b_{y_2})$ of $A_{S'}$ are of the same form~\eqref{general case} with $k=ab-a$. Since \eqref{general case} is equal to its reversal, it suffices to check that $C_{S,x_1}$ and $C_{S',y_2}$ form a pair of $n$-opposites, because the rest follows from the same argument as in the proof of Lemma~\ref{easylemma}.
We have $$\begin{array}{lcl}C_{S,x_1}=\mathbb{I}_{(b-1)a, a} + \mathbb{I}_{a-2,1};\\ C_{S',y_2}=\mathbb{I}_{a(b-1),b-1}\setminus\{u_{a}\}.\end{array}$$
Hence $$ \operatorname{dinv}(C_{S,x_1})=\operatorname{dinv}(\mathbb{I}_{(b-1)a, a})+1=\operatorname{area}_n(\mathbb{I}_{a(b-1), b-1})+1= \operatorname{area}_n(C_{S',y_2}). $$ Since both $C_{S,x_1}$ and $C_{S',y_2}$ have the same deficit $ab-2$, the above equality implies $\operatorname{area}_n(C_{S,x_1})=\operatorname{dinv}(C_{S',y_2})$; thus we conclude that
$C_{S,x_1}$ and $C_{S',y_2}$ form a pair of $n$-opposites.
Next consider the case $n>a+b$. Let $k=(b-1)a$ and $k'=b(a-1)$. Then the subsequence $(a_{x_1},a_{x_1+1},\ldots,a_{x_2})$ of $A_{S}$ has the form \begin{equation}\label{almosthook case a}
\underbrace{\rm o\cdots o}_{1+r_{-k,\ell_1}}
\underbrace{*\cdots *}_{r^+_{k,\ell_1}}
\cdots\cdots
\underbrace{\rm o\cdots o}_{1+r_{-k,\ell_1'-1}}
\underbrace{*\cdots*}_{r^+_{k,\ell_1'-1}}
\underbrace{\rm
o\cdots\cdots o
}_{{\rm central\; block\; } B}
\underbrace{*\cdots*}_{r^+_{k',\widetilde{\ell_2'}}}
\underbrace{\rm o\cdots o}_{1+r_{-k',\widetilde{\ell_2'}+1}}
\cdots\cdots
\underbrace{*\cdots*}_{r^+_{k',\widetilde{\ell_2}-1}}
\underbrace{\rm o\cdots o}_{1+r_{-k',\widetilde{\ell_2}}}, \end{equation} and the subsequence $(b_{y_1},b_{y_1+1},\ldots,b_{y_2})$ of $A_{S'}$ has the form \begin{equation}\label{almosthook case b}
\underbrace{\rm o\cdots o}_{1+r_{-k',\widetilde{\ell_1}}}
\underbrace{*\cdots *}_{r^+_{k',\widetilde{\ell_1}}}
\cdots\cdots
\underbrace{\rm o\cdots o}_{1+r_{-k',\widetilde{\ell_1'}-1}}
\underbrace{*\cdots*}_{r^+_{k',\widetilde{\ell_1'}-1}}
\underbrace{\rm
o\cdots\cdots o
}_{{\rm central\; block\; } B}
\underbrace{*\cdots*}_{r^+_{k,\ell_2'}}
\underbrace{\rm o\cdots o}_{1+r_{-k,\ell_2'+1}}
\cdots\cdots
\underbrace{*\cdots*}_{r^+_{k,\ell_2-1}}
\underbrace{\rm o\cdots o}_{1+r_{-k,\ell_2}}. \end{equation} By the facts proved in \S2.6 and \S2.7, the two forms \eqref{almosthook case a} and \eqref{almosthook case b} are reversals of each other. Thus it suffices to check that $C_{S,x_1}$ and $C_{S',y_2}$ (resp.\ $C_{S,x_2}$ and $C_{S',y_1}$) form a pair of $n$-opposites, which is straightforward as above. Finally, we can define an explicit bijection from $\mathcal{C}_{(ab-b-1,b-1)}\cap\mathcal{DP}_n$ to $\mathcal{C}_{(ab-a-1,a-1)}\cap\mathcal{DP}_n$ interchanging $\area_n$ and $\dinv$ by the same method used in \S\ref{subsec:bij-Ck}.
\section{Conjectures} \label{sec:stronger_conj}
In this section we propose conjectures that generalize Theorems~\ref{hook_chain} and~\ref{almost_hook_chain}.
\subsection{Conjectured Decomposition of Level $k$ Objects into Chains.} \label{subsec:conj-chain}
\begin{definition} For any partition $\gamma=(\gamma_1,...,\gamma_v)$, let $H_\gamma=\{j \ : \ \gamma_j=\gamma_{j+1}=\gamma_{j+2}>0\}$, and define $$\rho(\gamma)=\left\{\begin{array}{ll} \gamma_i, &\text{ if $\nu(\gamma)$ is defined and }H_\gamma\neq\emptyset\text{ and }i=\max(H_\gamma);\\ \gamma_1, &\text{ if $\nu(\gamma)$ is defined and }H_\gamma=\emptyset; \\ 0, &\text{ if $\nu(\gamma)$ is not defined.} \\ \end{array} \right.$$ Let $R(\gamma)=\{\nu^m(\gamma):0\leq m\leq\rho(\gamma) \} \subseteq\Par$. One may check that $\nu^m(\gamma)$ is well-defined for $0\leq m\leq \rho(\gamma)$. \end{definition}
\begin{conjecture}\label{wdssPar} For any $\lambda\in\Par(k)$, there exists a set
$\{\gamma_{\lambda,i}:i\in\mathbb{Z}_{>0}\}$ of Dyck partitions with $|\gamma_{\lambda,1}|<|\gamma_{\lambda,2}|<\cdots$ satisfying the following conditions: \begin{enumerate} \item $\textsc{dp}_k(\textsc{dyck}(\lambda))=\gamma_{\lambda,i}$ for some $i$.
\item There exists $t\in\mathbb{Z}_{>0}$ such that $i\mapsto \Delta(\gamma_{\lambda,i})$ is weakly decreasing on $\{1,2,\ldots,t\}$ and weakly increasing on $\{t,t+1,\ldots\}$.
\item $\defc(\gamma_{\lambda,i})$ is a constant not depending on $i$ (namely, $\sum_{r=1}^{\ell(\lambda)-1} (-1+\lambda_r) \sum_{s=r+1}^{\ell(\lambda)} \lambda_s$, which is the number of pairs of cells in $\dg(\lambda)$ that are not in the same row or column).
\item For any integer $d\geq |\gamma_{\lambda,1}|$, there exists a unique Dyck partition $\gamma\in\bigcup_{i\geq 1} R(\gamma_{\lambda,i})$
such that $|\gamma|=d$.
\item For all $n\geq 0$, $\mathcal{DP}_n\cap\bigcup_{i\geq 1} R(\gamma_{\lambda,i})$ and $\mathcal{DP}_n\cap\bigcup_{i\geq 1} R(\gamma_{\lambda',i})$ are $n$-opposite to each other. \end{enumerate} \end{conjecture}
Theorems~\ref{hook_chain} and~\ref{almost_hook_chain} imply that Conjecture~\ref{wdssPar} holds for partitions of hook shape or almost hook shape. We also checked that Conjecture~\ref{wdssPar} holds for $\lambda=(3,3)$, $(4,3)$, $(3,3,1)$, $(5,3)$, $(4,3,1)$, $(3,3,1,1)$, and the conjugates of these partitions.
\subsection{Strong Conjecture on the Structure of Level $k$ Objects} \label{subsec:strong-conj}
We now propose another conjecture, which is stronger than Conjectures~\ref{conj:DP} and~\ref{wdssPar}. When we construct $\mathcal{DP}_{\mu}$ and $\overline{\mathcal{DP}}_{\mu}$ for a partition $\mu$, we want to include the most natural Dyck vector from which $\mu$ can be reconstructed. This Dyck vector is defined as follows.
\begin{definition}\label{def:natural} For any partition $\mu$, let $\zeta=\zeta(\mu)=\mu_1+\ell(\mu)$. Write $\mu=(\underline{k}^{e_k},\ldots, \underline{2}^{e_2},\underline{1}^{e_1})$ with all $e_j\geq 0$ and $e_k>0$, and define \begin{equation}\label{eq:nat-DV} \gamma_{\mu}=\textsc{dp}_{\zeta+1}(0,0,\underline{1}^{e_1},0,\underline{1}^{e_2}, \ldots,0,\underline{1}^{e_k}). \end{equation} Equivalently, we can write $$\gamma_{\mu}=\mathbb{I}_{0,\zeta}\setminus \{u_{\mu_1+\ell(\mu)},
u_{\mu_2+\ell(\mu)-1},u_{\mu_3+\ell(\mu)-2},\ldots\}.$$ For $\mu=(0)$, we set $\zeta(\mu)=0$ and $\gamma_{\mu}=(0)$. \end{definition}
\begin{example} Let $\mu=(2,2,1,1,1)\in\Par(7)$. Here $\zeta(\mu)=2+5=7$ and $\textsc{dv}_8(\gamma_{\mu})=(0,0,1,1,1,0,1,1)$, so $\gamma_{\mu}=(6,5,5,3,2,1,1,0)$. The following diagram shows how we can compute $\gamma_{\mu}$ by removing cells $u_7,u_6,u_4,u_3,u_2$ from the diagram of $\mathbb{I}_{0,7}=\Delta_8$. \[ \tableau{ {}&{}&{}&{}&{}&{}&{u_7} \\ {}&{}&{}&{}&{}&{u_6} \\ {}&{}&{}&{}&{} \\ {}&{}&{}&{u_4} \\ {}&{}&{u_3} \\ {}&{u_2} \\ {} } \] \end{example}
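The cell-removal description of $\gamma_{\mu}$ in this example can likewise be verified mechanically. The sketch below (again with function names of our own choosing, and restricted to partitions $\mu$ with positive parts) builds the staircase $\mathbb{I}_{0,\zeta}=\Delta_{\zeta+1}$ and removes the cells $u_{\mu_1+\ell(\mu)}, u_{\mu_2+\ell(\mu)-1},\ldots$; trailing zero parts are dropped:

```python
def cells(gamma):
    # All cells (i, j) of the diagram of gamma, 1-indexed.
    return [(i, j) for i, row in enumerate(gamma, start=1)
                   for j in range(1, row + 1)]

def u(gamma, k):
    # k'th largest cell under (i,j) < (i',j') iff i+j < i'+j',
    # or i+j = i'+j' and i < i'.
    ranked = sorted(cells(gamma), key=lambda c: (c[0] + c[1], c[0]),
                    reverse=True)
    return ranked[k - 1]

def gamma_mu(mu):
    # gamma_mu = I_{0,zeta} \ {u_{mu_1+ell}, u_{mu_2+ell-1}, ...},
    # where ell = len(mu), zeta = mu_1 + ell, and the diagram of
    # I_{0,zeta} is the staircase (zeta, zeta-1, ..., 1).
    ell = len(mu)
    zeta = mu[0] + ell
    staircase = tuple(range(zeta, 0, -1))
    removed = {u(staircase, mu[t] + ell - t) for t in range(ell)}
    rows = [len([j for j in range(1, row + 1) if (i, j) not in removed])
            for i, row in enumerate(staircase, start=1)]
    return tuple(r for r in rows if r > 0)

print(gamma_mu((2, 2, 1, 1, 1)))  # (6, 5, 5, 3, 2, 1, 1)
```

The output agrees with the stated $\gamma_{\mu}=(6,5,5,3,2,1,1,0)$ once the trailing zero part is discarded.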
\begin{lemma}\label{kappa_natural}
For all $\mu\in\Par$, $\defc(\gamma_{\mu})=|\mu|$. \end{lemma} \begin{proof} Let $v=\textsc{dv}_{\zeta+1}(\gamma_{\mu})$ be the Dyck vector shown in~\eqref{eq:nat-DV}. This vector has $e_1+e_2+\cdots+e_k=\ell(\mu)$ ones and $\zeta+1-\ell(\mu)=\mu_1+1$ zeroes. We directly calculate \[ \dinv(v)=\binom{\ell(\mu)}{2}+\binom{\mu_1+1}{2}
+\sum_{i=1}^{\mu_1} e_i(\mu_1-i). \]
The last sum is $\mu_1\ell(\mu)-|\mu|$. Also, $\area(v)=e_1+e_2+\cdots+e_k=\ell(\mu)$, so \[ \defc(\gamma_{\mu})=\defc(v)= \binom{\ell(\mu)+\mu_1+1}{2}-\binom{\ell(\mu)}{2}-\binom{\mu_1+1}{2}
-\mu_1\ell(\mu)-\ell(\mu)+|\mu|=|\mu|. \] \end{proof}
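The cancellation of the binomial coefficients in the last display is routine; we record it for the reader's convenience. With $a=\ell(\mu)$ and $b=\mu_1+1$, the identity $\binom{a+b}{2}=\binom{a}{2}+\binom{b}{2}+ab$ gives
\[ \binom{\ell(\mu)+\mu_1+1}{2}-\binom{\ell(\mu)}{2}-\binom{\mu_1+1}{2}=\ell(\mu)(\mu_1+1), \]
so the displayed expression equals $\ell(\mu)(\mu_1+1)-\mu_1\ell(\mu)-\ell(\mu)+|\mu|=|\mu|$, as claimed.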
We can now state our strengthened version of Conjecture~\ref{conj:DP}.
\begin{conjecture}\label{strong_conj} For all integers $k\geq 0$ and all $\mu\in\Par(k)$, there exist two (possibly identical) collections $\mathcal{C}_{\mu}$ and $\overline{\mathcal{C}}_{\mu}$ of Dyck partitions satisfying the following conditions: \begin{enumerate} \item[(1)] The sets $\mathcal{C}_{\mu}$ are pairwise disjoint,
and the sets $\overline{\mathcal{C}}_{\mu}$ are pairwise disjoint. \item[(2)] $\mathcal{DP}_{\ast,k}=\bigcup_{\mu\in\Par(k)} \mathcal{C}_{\mu}
=\bigcup_{\mu\in\Par(k)} \overline{\mathcal{C}}_{\mu}$. \item[(3)] For all $m\geq 0$, $\nu^m(\gamma_{\mu})\in\mathcal{C}_{\mu}$. \item[(4)] For each $\mu\in\Par(k)$, there exists a unique $\mu^*\in\Par(k)$ such that $\mathcal{C}_{\mu}=\overline{\mathcal{C}}_{\mu^*}$ and $\mathcal{C}_{\mu^*}=\overline{\mathcal{C}}_{\mu}$ (note $\mu^*$ may not equal the conjugate partition $\mu'$).
\item[(5)] For all $\mu\in\Par(k)$ and all $i\geq\ell(\mu^*)$, there exists a unique partition $C_{\mu,i}\in\mathcal{C}_{\mu}$ with $\dinv(C_{\mu,i})=i$. Moreover, there exists an increasing integer sequence $i_1 < i_2 < \ldots$ such that $\mathcal{C}_{\mu}=\bigcup_{j\geq 1} R(C_{\mu,i_j})$ and the map $j\mapsto\Delta(C_{\mu,i_j})$ is weakly decreasing on $\{1,2,\ldots,t\}$ and weakly increasing on $\{t,t+1,\ldots\}$ for some $t\geq1$. \item[(6)] For sufficiently large $n$ (more precisely, for $n\geq\max(\zeta(\mu)+2,\zeta(\mu^*)+2)$), we have \[ \ell(\mu^*)\leq i\leq \binom{n}{2}-k-\ell(\mu) \] if and only if there exists a (unique) $\gamma\in\mathcal{C}_{\mu}$ with $\Delta(\gamma)\leq n$ and $\dinv(\gamma)=i$. \item[(7)] If $\gamma\in\mathcal{C}_{\mu}$ and $\nu(\gamma)$ is defined,
then $\nu(\gamma)\in\mathcal{C}_{\mu}$. \item[(8)] If $\textsc{dp}_n(\textsc{dyck}(\lambda))\in\mathcal{C}_{\mu}$ for some $\lambda\in\Par(n)$, then $\textsc{dp}_n(\textsc{dyck}(\lambda')) \in\overline{\mathcal{C}}_{\mu}$. \item[(9)] For all $n\geq 0$, $\mathcal{C}_{\mu}\cap\mathcal{DP}_n$ and $\overline{\mathcal{C}}_{\mu}\cap\mathcal{DP}_n$ are $n$-opposite to each other. \end{enumerate} \end{conjecture}
Evidently, Conjecture~\ref{strong_conj} implies Conjecture~\ref{conj:DP} (take $\mathcal{DP}_{\mu}=\mathcal{C}_{\mu}$ and $\overline{\mathcal{DP}}_{\mu}=\overline{\mathcal{C}}_{\mu}$). Our next goal is to show that condition (2) in Conjecture~\ref{strong_conj} is already implied by the other conditions. Write $p_{\leq d}(k)$ for the number of partitions $\lambda$ with
$|\lambda|=k$ and $\ell(\lambda)\leq d$. Define $$\mathcal{DP}_{\ast,k}(d) =\{\gamma\in \mathcal{DP}_{\ast,k}:\dinv(\gamma)=d\}
=\{\gamma\in\Par:\defc(\gamma)=k\mbox{ and }\dinv(\gamma)=d\}.$$ For any subset $\mathcal{C}\subseteq\mathcal{DP}_{\ast,k}$, define $\mathcal{C}(d)=\mathcal{C}\cap \mathcal{DP}_{\ast,k}(d)$.
\begin{lemma}
For all $d,k\geq 0$, $|\mathcal{DP}_{\ast,k}(d)|=p_{\leq d}(k)$. \end{lemma} \begin{proof} In~\cite[Theorem 3]{LW}, Loehr and Warrington showed that the two statistics $\dinv$ and $\ell$ (number of parts) have the same distribution on $\Par(n)$ for all $n$; more specifically,
\[ \sum_{\lambda\in\Par} q^{|\lambda|}t^{\ell(\lambda)}
=\prod_{i=1}^{\infty} \frac{1}{1-tq^i}
=\sum_{\lambda\in\Par} q^{|\lambda|}t^{\dinv(\lambda)}. \] Replacing $t$ by $t/q$ in this identity, we get
\[ \sum_{\lambda\in\Par} q^{|\lambda|-\ell(\lambda)}t^{\ell(\lambda)}
=\sum_{\lambda\in\Par} q^{\defc(\lambda)}t^{\dinv(\lambda)}. \]
The coefficient of $q^kt^d$ on the right side is $|\mathcal{DP}_{\ast,k}(d)|$. The coefficient of $q^kt^d$ on the left side is the number of partitions of $k+d$ into exactly $d$ nonzero parts. By decreasing each of these $d$ parts by $1$, we see that this is the number of partitions of $k$ into at most $d$ nonzero parts, namely $p_{\leq d}(k)$. \end{proof}
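The coefficient bijection at the end of this proof (subtract $1$ from each of the $d$ parts) is easy to confirm numerically. The following sketch uses the standard recurrence for the number of partitions of $n$ into exactly $d$ positive parts; the function names are ours:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p_exact(n, d):
    # Number of partitions of n into exactly d positive parts, via the
    # standard recurrence p(n,d) = p(n-1,d-1) + p(n-d,d):
    # either the smallest part is 1, or every part is >= 2.
    if n == 0 and d == 0:
        return 1
    if n <= 0 or d <= 0:
        return 0
    return p_exact(n - 1, d - 1) + p_exact(n - d, d)

def p_le(k, d):
    # p_{<= d}(k): partitions of k into at most d positive parts.
    return sum(p_exact(k, j) for j in range(d + 1))

# Partitions of k + d into exactly d parts match partitions of k
# into at most d parts, as in the proof above.
assert all(p_exact(k + d, d) == p_le(k, d)
           for k in range(12) for d in range(12))
```

This confirms, for small values, that the coefficient of $q^kt^d$ on the left side is indeed $p_{\leq d}(k)$.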
\begin{lemma} In Conjecture~\ref{strong_conj}, part {\rm(2)} follows from parts {\rm(1)}, {\rm(4)}, and {\rm(5)}. \end{lemma} \begin{proof}
Parts (1), (4), and (5) of Conjecture~\ref{strong_conj} imply that $\bigcup_{\mu\in\Par(k)} \mathcal{C}_{\mu} =\bigcup_{\mu\in\Par(k)} \overline{\mathcal{C}}_{\mu}$ and that $\mathcal{C}=\bigcup_{\mu\in\Par(k)} \mathcal{C}_{\mu}$
satisfies $|\mathcal{C}(d)|=p_{\leq d}(k)=|\mathcal{DP}_{\ast,k}(d)|$.
Thus, $\mathcal{DP}_{\ast,k}$ must be the union of the sets $\mathcal{C}_{\mu}$ (and the union of the sets $\overline{\mathcal{C}}_{\mu}$) as $\mu$ varies through $\Par(k)$. \end{proof}
Conjecture~\ref{strong_conj}(5) now implies that each collection $\mathcal{C}_{\mu}$ has the form $\{C_{\mu,i}:i\geq \ell(\mu^*)\}$ with $\dinv(C_{\mu,i})=i$.
\begin{lemma}\label{almost_all} For all $\mu\in\Par$ and $m\in\mathbb{Z}_{\geq 0}$,
\begin{enumerate} \item $\nu^m(\gamma_{\mu})$ is defined; \item $\dinv(\nu^m(\gamma_{\mu}))=\binom{\mu_1+\ell(\mu)+1}{2}
-|\mu|-\ell(\mu)+m$; \item $(\Delta(\nu^1(\gamma_{\mu})), \Delta(\nu^2(\gamma_{\mu})), \ldots)
=(\underline{(\zeta+2)}^{\zeta+1}, \underline{(\zeta+3)}^{\zeta+2},\ldots)$, where $\zeta=\zeta(\mu)$. \end{enumerate} \end{lemma} \begin{proof} One can explicitly describe $\nu^m(\gamma_{\mu})$, namely $$\{\nu^m(\gamma_{\mu}) : m\in\mathbb{Z}_{\geq 0}\} =\bigcup_{i\geq \zeta(\mu)} R(\mathbb{I}_{0,i}\setminus\{u_{\mu_1+\ell(\mu)},u_{\mu_2+\ell(\mu)-1},\ldots\}),$$ from which all the statements readily follow. \end{proof}
\begin{theorem}\label{mainthm2} Conjecture~\ref{strong_conj} holds for all $k\leq 9$. \end{theorem} \begin{proof}
We first note that the partitions $\nu^{m}(\gamma_\mu)$ are pairwise distinct as $\mu$ and $m$ vary, because the operator $\nu$ is one-to-one and $\nu^{-1}(\gamma_\mu)$ is never defined. Since $|\mathcal{DP}_{\ast,k}(d)|=p_{\leq d}(k)$, we have $$\mathcal{DP}_{\ast,k}(d)=\left\{\nu^m(\gamma_{\mu}) \ : \mu\in\Par(k)\mbox{ and }
m=d-\binom{\mu_1+\ell(\mu)+1}{2}+k+\ell(\mu)\right\}$$ for each $d\geq \binom{k+2}{2}$.
In particular, any partition $\gamma$ with $\defc(\gamma)=k$ and $\dinv(\gamma)\geq\binom{k+2}{2}$ has the form $\nu^m(\gamma_{\mu})$ for some $\mu$ and $m$. Hence, for fixed $k$, there are only finitely many objects in $\mathcal{DP}_{\ast,k}$ that are not of the form $\nu^m(\gamma_{\mu})$, so there is a constant $n_k$, depending only on $k$, such that $\Delta(\gamma)\leq n_k$ for every such $\gamma$. One readily sees that $n_k$ can be taken to be any integer greater than $k+1$.
So if Conjecture~\ref{strong_conj}(6) holds for $n=n_k+1$, then it also holds for $n>n_k+1$ by Lemma~\ref{almost_all}. In turn, Conjecture~\ref{strong_conj}(9) holds for all $n\geq n_k+1$. Thus, to prove Conjecture~\ref{strong_conj} for a given $k$, it suffices to prove the conjecture with conditions (6) and (9) replaced by the following conditions. \begin{enumerate} \item[(6$'$)] For $\max(\zeta(\mu)+1,\zeta(\mu^*)+1)\leq n\leq n_k+1$, $$\ell(\mu^*)\leq i\leq \binom{n}{2}-k-\ell(\mu)$$ if and only if there exists a (unique) $\gamma\in\mathcal{C}_{\mu}$ with $\Delta(\gamma)\leq n$ and $\dinv(\gamma)=i$. \item[(9$'$)] For all $n\leq n_k$, $\mathcal{C}_{\mu}\cap\mathcal{DP}_n$ and $\overline{\mathcal{C}}_{\mu}\cap\mathcal{DP}_n$ are $n$-opposite to each other. \end{enumerate}
Therefore, for fixed $k$, Conjecture~\ref{strong_conj} can be checked in finitely many steps by examining finitely many objects. By exhaustive computations, we have checked Conjecture~\ref{strong_conj} for $k\leq 9$. The list of $\mathcal{C}_{\mu}$ and $\overline{\mathcal{C}}_{\mu}$ for $k\leq 6$ is given in the appendix below, along with a link to the list of $\mathcal{C}_{\mu}$ and $\overline{\mathcal{C}}_{\mu}$ for $k\leq 9$. \end{proof}
In fact, our computations show that for all $k\leq 4$, the collections $\mathcal{C}_{\mu}$ and $\overline{\mathcal{C}}_{\mu}$ with $\mu\in\Par(k)$ are uniquely determined by the conditions in Conjecture~\ref{strong_conj}.
\section{Appendix}\label{app} Below is the list of all $\mathcal{C}_{\mu}$ and $\overline{\mathcal{C}}_{\mu}$
with $k=|\mu|\le 6$. The list of $\mathcal{C}_{\mu}$ and $\overline{\mathcal{C}}_{\mu}$ for $k\leq 9$ may be found on the website \verb-www.oakland.edu/~li2345/List_of_C_mu.pdf-.
{\tiny
$k=0$: \begin{enumerate} \item $\mathcal{C}_{(0)}=\{\nu^{(m)}(\gamma_{(0)}) : m\in[0,\infty) \}=\{\nu^{(m)}((0)) : m\in[0,\infty) \}=\overline{\mathcal{C}}_{(0)}$ \end{enumerate}
$k=1$: \begin{enumerate} \item $\mathcal{C}_{(1)}=\{\nu^{(m)}(\gamma_{(1)}) : m\in[0,\infty) \}=\{\nu^{(m)}((1,1)) : m\in[0,\infty) \}=\overline{\mathcal{C}}_{(1)}$ \end{enumerate}
$k=2$: \begin{enumerate}
\setlength\itemsep{.5em} \item $\mathcal{C}_{(2)}=R((1,1,1)) \cup \{\nu^{(m)}(\gamma_{(2)}) : m\in[0,\infty) \}
=\{(1,1,1),(4)\}\cup\{\nu^{(m)}((2,2,1)) : m\in[0,\infty) \} =\overline{\mathcal{C}}_{(2)}$
\item $\mathcal{C}_{(1,1)}=\{\nu^{(m)}(\gamma_{(1,1)}) : m\in[0,\infty) \}=\{\nu^{(m)}((2,1,1)) : m\in[0,\infty) \}=\overline{\mathcal{C}}_{(1,1)}$
\end{enumerate}
$k=3$: \begin{enumerate}
\setlength\itemsep{.5em} \item $\mathcal{C}_{(3)}=R((1,1,1,1))\cup R((2,2,2)) \cup \{\nu^{(m)}(\gamma_{(3)}) : m\in[0,\infty) \}$
${\;}=\{(1,1,1,1),(5)\}\cup\{(2,2,2),(4,1,1,1),(5,3)\}\cup\{\nu^{(m)}((3,3,2,1)) : m\in[0,\infty) \}
=\overline{\mathcal{C}}_{(3)}$
\item $\mathcal{C}_{(2,1)}=R((3,1,1,1))\cup \{\nu^{(m)}(\gamma_{(2,1)}) : m\in[0,\infty) \} =\{(3,1,1,1),(5,2)\}\cup \{\nu^{(m)}((3,3,1,1)) : m\in[0,\infty) \} =\overline{\mathcal{C}}_{(1,1,1)}$
\item $\mathcal{C}_{(1,1,1)}=R((2,1,1,1))\cup \{\nu^{(m)}(\gamma_{(1,1,1)}) : m\in[0,\infty) \}$
${\;\;\;\;\;\;}=\{(2,1,1,1),(5,1)\}\cup \{\nu^{(m)}((3,2,1,1)) : m\in[0,\infty) \} =\overline{\mathcal{C}}_{(2,1)}$ \end{enumerate}
$k=4$: \begin{enumerate}
\setlength\itemsep{.5em} \item $\mathcal{C}_{(4)}=R((1,1,1,1,1))\cup R((2,2,2,1)) \cup R((3,3,3,1))\cup \{\nu^{(m)}(\gamma_{(4)}) : m\in[0,\infty) \}$
$ {\;}=\{(1,1,1,1,1),(6)\} \cup\{(2,2,2,1),(5,1,1,1),(5,4)\} \cup\{(3,3,3,1),(5,2,2,2),(5,4,1,1,1),(6,4,3)\}$
${\;\;\;\;\;}\cup\{\nu^{(m)}((4,4,3,2,1)) : m\in[0,\infty) \}$
=\overline{\mathcal{C}}_{(4)}$
\item $\mathcal{C}_{(3,1)}=R((2,2,1,1))\cup R((3,3,3))\cup \{\nu^{(m)}(\gamma_{(3,1)}) : m\in[0,\infty) \}$
${\;\;\;\,}=R((2,2,1,1))\cup R((3,3,3))\cup \{\nu^{(m)}((4,4,3,1,1)) : m\in[0,\infty) \} =\overline{\mathcal{C}}_{(2,2)}$
\item $\mathcal{C}_{(2,2)}=R((2,1,1,1,1))\cup \{\nu^{(m)}(\gamma_{(2,2)}) : m\in[0,\infty) \}=R((2,1,1,1,1))\cup \{\nu^{(m)}((3,2,2,1)) : m\in[0,\infty) \} =\overline{\mathcal{C}}_{(3,1)}$
\item $\mathcal{C}_{(2,1,1)}=R((3,2,1,1,1))\cup R((4,3,1,1,1))\cup \{\nu^{(m)}(\gamma_{(2,1,1)}) : m\in[0,\infty) \}$
${\;\;\;\;\;\;}=R((3,2,1,1,1))\cup R((4,3,1,1,1))\cup \{\nu^{(m)}((4,4,2,1,1)) : m\in[0,\infty) \} =\overline{\mathcal{C}}_{(1,1,1,1)}$
\item $\mathcal{C}_{(1,1,1,1)}=R((3,1,1,1,1))\cup R((4,2,1,1,1))\cup \{\nu^{(m)}(\gamma_{(1,1,1,1)}) : m\in[0,\infty) \}$
${\;\;\;\;\;\;\;\;\,}=R((3,1,1,1,1))\cup R((4,2,1,1,1))\cup \{\nu^{(m)}((4,3,2,1,1)) : m\in[0,\infty) \} =\overline{\mathcal{C}}_{(2,1,1)}$ \end{enumerate}
$k=5$: \begin{enumerate}
\setlength\itemsep{.5em} \item $\mathcal{C}_{(5)}=\overline{\mathcal{C}}_{(5)}$ is given in \eqref{def:hook}.
\item $\mathcal{C}_{(4,1)}=R((4,1,1,1,1))\cup R((3,3,2,2))\cup R((4,4,4,2))\cup \{\nu^{(m)}(\gamma_{(4,1)}) : m\in[0,\infty) \} =\overline{\mathcal{C}}_{(2,2,1)}$
\item $\mathcal{C}_{(3,2)}=R((2,2,1,1,1))\cup R((3,3,1,1,1))\cup R((4,4,1,1,1)) \cup \{\nu^{(m)}(\gamma_{(3,2)}) : m\in[0,\infty) \} = \overline{\mathcal{C}}_{(3,2)}$
\item $\mathcal{C}_{(2,2,1)}=R((2,1,1,1,1,1))\cup R((3,2,2,2))\cup \{\nu^{(m)}(\gamma_{(2,2,1)}) : m\in[0,\infty) \} =\overline{\mathcal{C}}_{(4,1)}$
\item $\mathcal{C}_{(3,1,1)}=R((3,1,1,1,1,1))\cup R((4,2,2,1,1))\cup R((5,4,2,2,2))\cup \{\nu^{(m)}(\gamma_{(3,1,1)}) : m\in[0,\infty) \} =\overline{\mathcal{C}}_{(3,1,1)}$
\item $\mathcal{C}_{(2,1,1,1)}=R((3,2,1,1,1,1))\cup R((4,3,1,1,1,1))\cup R((5,3,2,1,1,1))\cup R((5, 4, 3, 1, 1, 1))\cup \{\nu^{(m)}(\gamma_{(2,1,1,1)}) : m\in[0,\infty) \} =\overline{\mathcal{C}}_{(2,1,1,1)}$
\item $\mathcal{C}_{(1,1,1,1,1)}=R((4,2,1,1,1,1))\cup R((4,3,2,1,1,1))\cup R((5,4,2,1,1,1))\cup \{\nu^{(m)}(\gamma_{(1,1,1,1,1)}) : m\in[0,\infty) \} =\overline{\mathcal{C}}_{(1,1,1,1,1)}$ \end{enumerate}
$k=6$: \begin{enumerate}
\setlength\itemsep{.5em} \item $ \mathcal{C}_{(6)}=\overline{\mathcal{C}}_{(6)}$ is given in \eqref{def:hook}.
\item $\mathcal{C}_{(5,1)} =\overline{\mathcal{C}}_{(3,3)}$ is given in \eqref{def:almost hook} for $(a,b)=(4,2)$ (almost hook shape).
\item $\mathcal{C}_{(3,3)} =\overline{\mathcal{C}}_{(5,1)}$ is given in \eqref{def:almost hook} for $(a,b)=(2,4)$ (almost hook shape).
\item $\mathcal{C}_{(4,2)}=R((3, 1, 1, 1, 1, 1, 1))\cup R((4, 2, 2, 2, 1))\cup R((4, 4, 4, 1, 1)) \cup \{\nu^{(m)}(\gamma_{(4,2)}) : m\in[0,\infty) \} = \overline{\mathcal{C}}_{(4,1,1)}$
\item $\mathcal{C}_{(4,1,1)}=R((2, 2, 1, 1, 1, 1))\cup R((3, 3, 2, 1, 1))\cup R((4, 4, 3, 3)) \cup R((5, 5, 5, 3, 1))\cup \{\nu^{(m)}(\gamma_{(4,1,1)}) : m\in[0,\infty) \} =\overline{\mathcal{C}}_{(4,2)}$
\item $\mathcal{C}_{(3,2,1)}=R((3, 2, 1, 1, 1, 1, 1))\cup R((5, 3, 1, 1, 1, 1))\cup R((5, 3, 3, 1, 1, 1))\cup R((5, 5, 3, 1, 1, 1))$
${\;\;\;\;\;\;\;\;\;\;}\cup \{\nu^{(m)}(\gamma_{(3,2,1)}) : m\in[0,\infty) \} =\overline{\mathcal{C}}_{(3,1,1,1)}$
\item $\mathcal{C}_{(3,1,1,1)}=R((4, 1, 1, 1, 1, 1))\cup R((4, 2, 2, 1, 1, 1))\cup R((4, 4, 2, 1, 1, 1)) \cup R((5, 4, 2, 2, 1, 1)) \cup R((6, 5, 4, 2, 2, 2))$
${\;\;\;\;\;\;\;\;\;\;\;\;\;} \cup \{\nu^{(m)}(\gamma_{(3,1,1,1)}) : m\in[0,\infty) \} =\overline{\mathcal{C}}_{(3,2,1)}$
\item $\mathcal{C}_{(2,2,2)}=R((3, 3, 1, 1, 1, 1))\cup \{\nu^{(m)}(\gamma_{(2,2,2)}) : m\in[0,\infty) \} =\overline{\mathcal{C}}_{(2,2,1,1)}$
\item $\mathcal{C}_{(2,2,1,1)}=R((3, 2, 2, 1, 1))\cup R((4, 3, 3, 3))\cup \{\nu^{(m)}(\gamma_{(2,2,1,1)}) : m\in[0,\infty) \} =\overline{\mathcal{C}}_{(2,2,2)}$
\item $\mathcal{C}_{(2,1,1,1,1)}=R((4,2,1,1,1,1,1))\cup R((4,3,2,1,1,1,1))\cup R((5, 4, 2, 1, 1, 1,1)) \cup R((5,4,3,2,1,1,1))\cup R((6,5,3,2,1,1,1))$
${\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;}\cup R((6,5,4,3,1,1,1))\cup \{\nu^{(m)}(\gamma_{(2,1,1,1,1)}) : m\in[0,\infty) \} =\overline{\mathcal{C}}_{(2,1,1,1,1)}$
\item $\mathcal{C}_{(1,1,1,1,1,1)}=R((4,3,1,1,1,1,1))\cup R((5,3,2,1,1,1,1))\cup R((5, 4, 3, 1, 1, 1,1)) \cup R((6,4,3,2,1,1,1))$
${\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;}\cup R((6,5,4,2,1,1,1))\cup \{\nu^{(m)}(\gamma_{(1,1,1,1,1,1)}) : m\in[0,\infty) \}=\overline{\mathcal{C}}_{(1,1,1,1,1,1)}$
\end{enumerate} }
\end{document}
\begin{document}
\maketitle
\begin{abstract} In this paper we present a new algorithmic realization of a projection-based scheme for the general convex constrained optimization problem. The idea is to transform the original optimization problem into a sequence of feasibility problems by iteratively constraining the objective function from above until the feasibility problem becomes inconsistent. Each of the feasibility problems may be solved by any of the existing projection methods. In particular, the scheme allows the use of subgradient projections and does not require exact projections onto the constraint sets, as existing similar methods do.
We also apply the recently introduced concept of superiorization to this optimization formulation and compare its performance with that of our scheme. We provide numerical results for convex quadratic test problems as well as for real-life optimization problems coming from medical treatment planning.
\textbf{Keywords}: Projection methods, feasibility problems, superiorization, subgradient, iterative methods
\textbf{$2010$ MSC}: 65K10, 65K15, 90C25
\end{abstract}
\section{Introduction}
In this paper we are concerned with a general convex optimization problem. Let $f:
\mathbb{R}
^{n}\rightarrow
\mathbb{R}
$ and $\left\{ g_{i}:
\mathbb{R}
^{n}\rightarrow
\mathbb{R}
\right\} _{i\in I}$, for $I=\{1,\ldots,m\}$, be convex functions. We wish to solve the following convex optimization problem: \begin{gather} \min f(x)\nonumber\\ \text{such that }g_{i}(x)\leq0\text{ for all }i\in I. \label{Problem:1} \end{gather}
The literature on this problem is vast and there exist many different techniques for solving it; see e.g., \cite{BSS06,Bertsekas99, BoydVandenberghe04} and the many references therein. In the special case $f\equiv0$, (\ref{Problem:1}) reduces to finding a point in the convex set, \begin{equation} \boldsymbol{C}:=\cap_{i\in I}C_{i}=\cap_{i\in I}\left\{ x\in
\mathbb{R}
^{n}\mid g_{i}(x)\leq0\right\} \neq\emptyset. \label{eq:Bold_C} \end{equation} $\boldsymbol{C}$ is called the \textit{feasibility set} of (\ref{Problem:1}). In general, the problem of finding a point in the intersection of convex sets is known as the \textit{Convex Feasibility Problem} (CFP) or \textit{Set Theoretic Formulation}. Many real-world problems in various areas of mathematics and of physical sciences can be modeled in this way; see \cite{cap88} and the references therein for an early example. More work on the CFP can be found in \cite{Byrne, byrne04, cdh10}.
In the case where all the $\left\{ g_{i}\right\} _{i\in I}$ are linear and only equalities are considered in $C_{i}$, meaning that all the $C_{i}$ are hyperplanes, the CFP reduces to a system of linear equations. In the 1930s, Kaczmarz \cite{Kaczmarz37} and Cimmino \cite{Cimmino38} proposed two different projection methods, a sequential and a simultaneous one, for solving systems of linear equations. These methods were later extended to solving systems of linear inequalities, see \cite{Agmon54, MS54}. Today, projection methods are more general and are applied to solve general convex feasibility problems.
In general, projection methods are iterative procedures that employ projections onto convex sets in various ways. They typically follow the principle that it is easier to project onto the individual sets (usually closed and convex) instead of projecting onto other derived sets (e.g. their intersection). The methods have different algorithmic structures, of which some are particularly suitable for parallel computing, and they demonstrate desirable convergence properties and good initial behavior patterns, for more details see for example \cite{cccdh11}.
In recent decades, due to their computational efficiency, projection methods have been applied successfully to many real-world applications, for example in imaging, see Bauschke and Borwein \cite{bb96} and Censor and Zenios \cite{CZ97}, in transportation problems \cite{cz91b,bk13}, in sensor networks \cite{bh06}, in radiation therapy treatment planning \cite{cap88,ccmzkh10}, in resolution enhancement \cite{coo03}, in graph matching \cite{ww04}, in matrix balancing \cite{cz91, Amelunxen11}, and in many further areas \cite{cccdh11, BC11, cc14}, to name but a few. Their success is based on their ability to handle huge-size problems, since they do not require storage or inversion of the full constraint matrix. Their algorithmic structure is either sequential or simultaneous, or in-between, as in the block-iterative or string-averaging projection methods, which naturally support parallelization. This is one of the reasons that this class of methods has been called a \textquotedblleft Swiss army knife\textquotedblright, see \cite{bk13}.
Following the above we aim to apply different projection methods for solving (\ref{Problem:1}); in order to do that we first put (\ref{Problem:1}) into epigraph form. \begin{gather} \min t\nonumber\\ \text{such that }t\in
\mathbb{R}
\nonumber\\ \text{and for some }x\in\boldsymbol{C}\text{ one has }f(x)\leq t. \label{Problem:2} \end{gather} Denote by $t^{\ast}$ the optimal value of (\ref{Problem:2}), which we assume to be attained and finite. Now, a natural idea for solving (\ref{Problem:2}) is to construct a decreasing sequence $\left\{ t_{k}\right\} $ such that $t_{k}\rightarrow t^{\ast}$ and, at each step, for a fixed $t_{k}$, to solve a corresponding CFP; see also \cite[Subsection 2.1.2]{Bertsekas99}. Formally this can be phrased as follows. Set $t_{-1}=\infty$ and at the $k$-th step, with $k\geq0$, solve the following problem: \begin{equation} \text{find a point }x^{k}\text{ such that}\left\{ \begin{array} [c]{l} f(x^{k})\leq t_{k-1}\\ \text{and}\\ g_{i}(x^{k})\leq0\text{ for all }i\in I. \end{array} \right. \label{Problem:k_CFP} \end{equation} Once a feasible point $x^{k}$ is obtained, $t_{k}$ is updated according to the formula \begin{equation} t_{k}=f(x^{k})-\varepsilon_{k}, \label{eq:epsilon_decrease} \end{equation}
where $\varepsilon_{k}>0$ is some user-chosen constant; in the numerical experiments in Section \ref{sec:Numerical_Experiments} we use $\varepsilon_{k}=0.1|f(x^{k})|$ whenever $|f(x^{k})|>1$ and $\varepsilon_{k}=0.1$ otherwise. For solving these CFPs at each $k$-th step, we apply different projection methods based on the \textit{Cyclic Subgradient Projections} \textit{Method} (CSPM) \cite{cl81, cl82} and thus obtain several algorithmic realizations of this general scheme. In Subsection \ref{sec:Convergence} we discuss the issue of convergence to an approximate optimal solution of (\ref{Problem:1}) (which we call an $\varepsilon$-optimal solution).
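The level update (\ref{eq:epsilon_decrease}) with this particular choice of $\varepsilon_{k}$ can be sketched as follows (a minimal illustration; the function name \texttt{next\_level} is ours):

```python
def next_level(f_xk):
    """Return t_k = f(x^k) - eps_k, with eps_k = 0.1|f(x^k)| if |f(x^k)| > 1
    and eps_k = 0.1 otherwise, as used in the numerical experiments."""
    eps_k = 0.1 * abs(f_xk) if abs(f_xk) > 1 else 0.1
    return f_xk - eps_k
```

Note that with this rule $\varepsilon_{k}\geq 0.1$ for every $k$, so the series $\sum_{k}\varepsilon_{k}$ diverges, which is the property required by the convergence analysis below.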
It might happen that the objective function decreases only by $\varepsilon_{k}$ at each step. Therefore, we not only wish to solve each of the CFPs (\ref{Problem:k_CFP}), but also to end up with a \textit{Slater point}, that is, a point that solves (\ref{Problem:k_CFP}) with strict inequalities at each step, so as to maximize the decrease in $t_{k}$. To this end, we will make use of over-relaxation parameters in the CSPM for one realization of the scheme, but also apply the recently introduced \textit{Superiorization} idea \cite{cdh10}, where perturbations are used in the CSPM to steer the algorithm into the interior of the objective level set after the $k$-th CFP. Our major contribution in this work, Section \ref{sec:Numerical_Experiments}, is to demonstrate the applicability of the scheme and to compare it intensively with some existing methods on benchmark quadratic programming problems and on medical treatment planning problems.
The paper is organized as follows. In Section \ref{sec:Prelim} we present several projection methods and definitions which will be useful for our analysis. In Section \ref{sec:GeneralOpt} our general scheme for solving convex optimization problems is presented and analyzed. In Section \ref{sec:Numerical_Experiments} numerical experiments illustrating the different realizations of our scheme are presented and tested on convex quadratic programming problems and on Intensity-Modulated Radiation Therapy (IMRT). Finally, in Section \ref{sec:conclusion} conclusions and further research directions are presented.
\section{Preliminaries\label{sec:Prelim}}
In this section we provide several projection methods which are relevant to our results, mainly orthogonal and subgradient projections. We start by presenting several definitions which will be useful for our analysis.
\begin{definition} A sequence $\left\{ x^{k}\right\} _{k=0}^{\infty}$ is said to be \texttt{finite convergent} if $\lim_{k\rightarrow\infty}x^{k}=x^{\ast}$ and there exists $N\in
\mathbb{N}
$ such that for all $k\geq N$, $x^{k}=x^{\ast}$. \end{definition}
Let $C$ be non-empty, closed and convex set in the Euclidean space $
\mathbb{R}
^{n}$. Assume that the set $C$ can be represented as \begin{equation} C=\left\{ x\in
\mathbb{R}
^{n}\mid c(x)\leq0\right\} , \label{eq:C} \end{equation} where $c:
\mathbb{R}
^{n}\rightarrow
\mathbb{R}
$ is an appropriate continuous and convex function. Take, for example, $c(x)=\operatorname*{dist}(x,C),$ where $\operatorname*{dist}$ is the distance function; see, e.g., \cite[Chapter B, Subsection 1.3(c)]{hul01}.
\begin{definition} For any point $x\in
\mathbb{R}
^{n}$,\ the \texttt{orthogonal projection} of $x$ onto $C$, denoted by $P_{C}(x)$ is the closest point to $x$ in $C$, that is, \begin{equation} \left\Vert x-P_{C}\left( x\right) \right\Vert \leq\left\Vert x-y\right\Vert \text{ for all }y\in C. \end{equation}
\end{definition}
\begin{definition} Let $c$ be as in the representation of $C$ in (\ref{eq:C}). The set \begin{equation} \partial c(z):=\{\xi\in
\mathbb{R}
^{n}\mid c(y)\geq c(z)+\langle\xi,y-z\rangle\text{ for all }y\in
\mathbb{R}
^{n}\} \label{eq:2.3} \end{equation} is called the \texttt{subdifferential} of $c$ at $z$, and any element of $\partial c(z)$ is called a \texttt{subgradient}. \end{definition}
It is well-known that if $C$ is non-empty, closed and convex, then $P_C(x)$ exists and is unique. Moreover, if $c$ is differentiable at $z,$ then $\partial c(z)=\{\nabla c(z)\}$, see for example \cite[Theorem 5.37 (p. 77)]{Tiel84}.
Now let $\xi$ be a mapping that assigns to any $x\in\mathbb{R}^{n}$ some subgradient $\xi(x)\in\partial c(x)$. \begin{definition} For any point $x\in
\mathbb{R}
^{n}$,\ the \texttt{subgradient projection}\textit{ }of $x$ is defined as \begin{equation} \Pi_{_{C}}(x):=\left\{ \begin{array} [c]{ll} x-\frac{\displaystyle c(x)}{\displaystyle\left\Vert \xi\right\Vert ^{2}}\xi & \text{if\ }c(x)>0\text{,}\\ x & \text{if\ }c(x)\leq0\text{,} \end{array} \right. \end{equation} where $\xi\in\partial c(x)$. It must be that $\xi \neq 0$ when $c(x)>0$, because if $\xi=0$, then from (\ref{eq:2.3}) one has $c(x)\leq c(y)$ for every $y\in \mathbb{R}^{n}$; in particular, for $y\in C$ we have $c(x)\leq c(y)=0$, a contradiction to the assumption that $c(x)>0$. \end{definition}
\begin{remark} It is well known and can be verified easily that if the set $C$ is a half-space represented in its canonical way (via a normal vector), then the subgradient projection coincides with the orthogonal projection onto $C$. \end{remark}
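A minimal Python sketch of the subgradient projection operator $\Pi_{C}$ (the naming is ours; \texttt{subgrad(x)} is assumed to return some element of $\partial c(x)$):

```python
import numpy as np

def subgradient_projection(x, c, subgrad):
    """One subgradient projection onto C = {x : c(x) <= 0}:
    return x unchanged if c(x) <= 0, otherwise step along -subgrad."""
    cx = c(x)
    if cx <= 0:
        return x
    xi = subgrad(x)
    return x - (cx / np.dot(xi, xi)) * xi
```

For the half-space $c(x)=\langle a,x\rangle-b$ the step reduces to the familiar orthogonal projection $x-\big((\langle a,x\rangle-b)/\Vert a\Vert^{2}\big)a$, in agreement with the remark above.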
\begin{definition} \label{Def:Slater}Consider the CFP \begin{equation} \boldsymbol{C}:=\cap_{i\in I}C_{i}=\cap_{i\in I}\left\{ x\in
\mathbb{R}
^{n}\mid g_{i}(x)\leq0\right\} . \end{equation} We say that $\boldsymbol{C}$ satisfies the \texttt{Slater Condition} if there exists a point $x\in\boldsymbol{C}$ such that $g_{i}(x)<0$ for all $i\in I$. \end{definition}
\subsection{Projection methods and Superiorization}\label{subsec:prog}
Now we present two relevant classes of projection methods, the orthogonal and the subgradient projection methods. We only introduce the sequential versions, which are relevant to our results; simultaneous versions also exist, see e.g., \cite[Chapter 5]{CZ97}. Later we also present the superiorization methodology.
\textbf{1. }\underline{\textbf{Sequential methods}}
Sequential projection methods are also referred to as \textquotedblleft row-action\textquotedblright\ methods. The main idea is that at each iteration one constraint set $C_{i}$ is chosen according to some control sequence, and either an orthogonal or a subgradient projection onto it is calculated.
\textbf{1.1. }\textit{Projection Onto Convex Sets} (POCS). The general iterative step has the form \begin{equation} x^{k+1}=x^{k}-\lambda_{k}\left(x^{k}-P_{C_{i(k)}}(x^{k})\right) \label{eq:POCS} \end{equation} where $\lambda_{k}\in\lbrack\epsilon_{1},2-\epsilon_{2}]$ are called \textit{relaxation parameters} for arbitrary $\epsilon_{1},\epsilon_{2}>0$ such that $\epsilon_{1}+\epsilon_{2}<2$, $P_{C_{i(k)}}$ is the orthogonal projection of $x^{k}$ onto $C_{i(k)}$, and $\left\{ i(k)\right\} $ is a sequence of indices according to which the individual sets $C_{i}$ are chosen, for example \textit{cyclic} $i(k)=k\operatorname*{mod}m+1$. For the linear case with equalities and $\lambda_{k}=1$ for all $k$, this is known as Kaczmarz's algorithm \cite{Kaczmarz37} or the \textit{Algebraic Reconstruction Technique} (ART) in the field of image reconstruction from projections, see \cite{bgh70, hm93}. For solving a system of interval linear inequalities, which appears for example in the field of \textit{Intensity-Modulated Radiation Therapy} (IMRT), ART3 and especially its faster version ART3+ (see \cite{hc08}) are known to find a solution in a finite number of steps, provided that the feasible region is full dimensional. The successful idea of ART3+ was extended to solving optimization problems with a linear objective and interval linear inequality constraints; this is known as ART3+O \cite{ccmzkh10}.
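For half-space constraints $C_{i}=\{x:\langle a_{i},x\rangle\leq b_{i}\}$ the orthogonal projection in (\ref{eq:POCS}) has a closed form, and the cyclic POCS iteration can be sketched as follows (an illustrative sketch with our naming, not the ART3+ implementation):

```python
import numpy as np

def pocs_halfspaces(A, b, x0, lam=1.0, sweeps=100):
    """Cyclic POCS for the half-spaces <a_i, x> <= b_i, i = 1..m,
    with cyclic control i(k) = k mod m and relaxation parameter lam."""
    x = np.asarray(x0, dtype=float).copy()
    m = len(b)
    for k in range(sweeps * m):
        i = k % m
        r = A[i] @ x - b[i]
        if r > 0:                       # x violates C_i: project towards it
            x = x - lam * (r / (A[i] @ A[i])) * A[i]
    return x
```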
\textbf{1.2.} The \textit{Cyclic Subgradient Projections} (CSP) method was introduced by Censor and Lent \cite{cl81, cl82} for solving the CFP. Its iterative step is formulated as follows. \begin{equation} x^{k+1}=\left\{ \begin{tabular} [c]{ll} $x^{k}-\lambda_{k}\frac{g_{i(k)}(x^{k})}{\left\Vert \xi^{k}\right\Vert ^{2} }\xi^{k}$ & $g_{i(k)}(x^{k})>0$\\ $x^{k}$ & $g_{i(k)}(x^{k})\leq0$ \end{tabular} \ \ \ \ \ \ \ \right.\label{csp-def} \end{equation} where $\xi^{k}\in\partial g_{i(k)}(x^{k})$ is arbitrary, $\lambda_{k}$ is taken as in (\ref{eq:POCS}) and $\left\{ i(k)\right\} $ is cyclic. Of course, in the linear case this method coincides with POCS.
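A minimal Python sketch of the CSP iteration (\ref{csp-def}) for general convex constraints (our naming; each \texttt{subgrads[i](x)} is assumed to return some $\xi\in\partial g_{i}(x)$):

```python
import numpy as np

def cspm(x0, gs, subgrads, lam=1.0, sweeps=50):
    """Cyclic Subgradient Projections for C = {x : g_i(x) <= 0, i = 1..m}."""
    x = np.asarray(x0, dtype=float).copy()
    m = len(gs)
    for k in range(sweeps * m):
        i = k % m                       # cyclic control
        gi = gs[i](x)
        if gi > 0:                      # constraint i violated: subgradient step
            xi = subgrads[i](x)
            x = x - lam * (gi / np.dot(xi, xi)) * xi
    return x
```

On a toy CFP with a Euclidean ball and a half-space, for instance, the iterates settle into the intersection after only a few sweeps.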
\textbf{2.} \underline{\textbf{Superiorization}}
Superiorization is a recently introduced methodology which has gained increasing interest and recognition, as evidenced by the dedicated special issue entitled \textquotedblleft Superiorization: Theory and Applications\textquotedblright\ in the journal \textit{Inverse Problems} \cite{chj17}. The state of current research on superiorization can best be appreciated from the bibliography \textquotedblleft Superiorization and Perturbation Resilience of Algorithms\textquotedblright, compiled and continuously updated by Yair Censor \cite{Censor_sup-page}. In addition, \cite{herman-review-sm}, \cite{weak-strong15} and \cite[Section 4]{rm15} are recent reviews of interest.
This methodology is heuristic and its goal is to find certain good, or superior, solutions to optimization problems. More precisely, suppose that we want to solve a certain optimization problem, for example, minimization of a convex function under constraints (below we focus on this optimization problem because it is relevant to our paper; for an approach which considers the superiorization methodology in a much broader form, see \cite[Section 4]{rm15}). Often, solving the full problem can be rather demanding from the computational point of view, but solving part of it, say the feasibility part (namely, finding a point which satisfies all the constraints) is, in many cases, less demanding. Suppose further that our algorithmic scheme which solves the feasibility problem is perturbation resilient, that is, it converges to a solution of the feasibility problem despite perturbations which may appear in the algorithmic steps due to noise, computational errors, and so on.
Under these assumptions, the superiorization methodology claims that there is an advantage in introducing perturbations in an active way during the performance of the scheme which tries to solve the feasibility part. What is this advantage? It may simply be a solution (or an approximate solution) to the feasibility problem which is found faster thanks to the perturbations; it may also be a feasible solution $x'$ which is better than (or superior to) the feasible solution $x$ which would have been obtained without the perturbations, where we measure this \textquotedblleft superiority\textquotedblright\ with respect to some given cost/merit function $\phi$, namely we want to have $\phi(x')\leq \phi(x)$ (and hopefully $\phi(x')$ will be much smaller than $\phi(x)$).
Since our original optimization problem is the minimization of some convex function, we may, but are not obliged to, take $\phi$ to be that function, and we can combine a feasibility-seeking step (a step aiming at finding a solution to the feasibility problem) with a perturbation which reduces the cost function (such a perturbation can be chosen, or guessed, along a non-ascending direction, if such a direction exists: see Definition \ref{eq:nonascend} and Algorithm \ref{Algorithm:Yair-superiorization} below). We note that the above-mentioned assumption that the algorithmic scheme which solves the feasibility part is perturbation resilient often holds in practice: for example, this is the case for the schemes considered in \cite{bdhk07, cr13, hgcd12}.
\begin{definition}\label{eq:nonascend} Given a function $\phi:\Delta\subseteq
\mathbb{R}
^{n}\rightarrow
\mathbb{R}
$ and a point $x\in\Delta$, we say that a vector $d\in
\mathbb{R}
^{n}$ is \texttt{non-ascending} \texttt{for }$\phi$\texttt{ at }$x$ if $\left\Vert d\right\Vert \leq1$ and there is a $\delta>0$ such that \begin{equation} \text{for all }\lambda\in\left[ 0,\delta\right] \text{ we have }\left( x+\lambda d\right) \in\Delta\text{ and }\phi\left( x+\lambda d\right) \leq\phi\left( x\right) . \end{equation} \end{definition}
Observe that one option for choosing the perturbations, in order to steer the algorithm to a superior feasible point with respect to $\phi$, is along $-\nabla\phi$ (when $\phi$ is convex and differentiable), but this is only one example and of course the scheme allows the use of other directions.
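For a differentiable $\phi$, the normalized negative gradient gives one such non-ascending vector; a minimal sketch (our naming):

```python
import numpy as np

def neg_grad_direction(grad_phi, x):
    """Return -grad phi(x)/||grad phi(x)||, a non-ascending direction for phi
    at x in the sense of Definition (nonascend); zero if the gradient vanishes."""
    g = grad_phi(x)
    n = np.linalg.norm(g)
    return -g / n if n > 0 else np.zeros_like(g)
```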
The following pseudocode, which is a small modification of a similar algorithm mentioned in \cite{hgcd12}, illustrates one option for performing the perturbations when applying the superiorization methodology.
\begin{algorithm} \label{Algorithm:Yair-superiorization}$\left. {}\right. $
\textbf{Initialization:} Select an arbitrary starting point\textit{ }$x^{0}\in\mathbb{R}^{n}$, a positive integer $N$, an integer $\ell$, a sequence $(\eta_{\ell})_{\ell=0}^\infty$ of positive real numbers which is strictly decreasing to zero (for example, $\eta_{\ell}=a^\ell$ where $a\in(0,1)$) and a family of algorithmic operators $(P_k)_{k=0}^{\infty}$.
\textbf{Iterative step:}
\textbf{set} $k=0$
\textbf{set} $x^{k}=x^{0}$
\textbf{set} $\ell=-1$
\textbf{repeat until a stopping criterion is satisfied (see Section \ref{sec:GeneralOpt})}
$\qquad$\textbf{set} $m=0$
$\qquad$\textbf{set} $x^{k,m}=x^{k}$
$\qquad$\textbf{while }$m$\textbf{$<$}$N$
$\qquad\qquad$\textbf{set }$v^{k,m}$\textbf{ }to be a non-ascending vector for $\phi$ at $x^{k,m}$
$\qquad$\textbf{$\qquad$set} \emph{loop=true}
$\qquad$\textbf{$\qquad$while}\emph{ loop}
$\qquad\qquad\qquad$\textbf{set $\ell=\ell+1$}
$\qquad\qquad\qquad$\textbf{set} $\beta_{k,m}=\eta_{\ell}$
$\qquad\qquad\qquad$\textbf{set} $z=x^{k,m}+\beta_{k,m}v^{k,m}$
$\qquad\qquad\qquad$\textbf{if }$z$\textbf{$\in$}$\Delta$\textbf{ and } $\phi\left( z\right) $\textbf{$\leq$}$\phi\left( x^{k}\right) $\textbf{ \textbf{then}}
$\qquad\qquad\qquad\qquad$\textbf{set }$m$\textbf{$=$}$m+1$
$\qquad\qquad\qquad\qquad$\textbf{set }$x^{k,m}$\textbf{$=$}$z$
$\qquad\qquad\qquad\qquad$\textbf{set }\emph{loop = false}
$\qquad$\textbf{set }$x^{k+1}$\textbf{$=$}$\boldsymbol{P}_k(x^{k,N})$
$\qquad$\textbf{set }$k=k+1$ \end{algorithm}
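Under simplifying assumptions (here $\Delta$ is encoded by \texttt{in\_delta}, $\phi$ is differentiable, and \texttt{basic\_step} plays the role of the feasibility-seeking operator $P_{k}$), Algorithm \ref{Algorithm:Yair-superiorization} can be realized roughly as follows. All names are ours; this is a sketch, not the authors' implementation:

```python
import numpy as np

def superiorized_feasibility(x0, phi, grad_phi, basic_step, in_delta,
                             N=5, a=0.5, max_outer=50):
    """Perturb the iterate N times along a non-ascending direction for phi,
    with step sizes eta_ell = a**ell shrinking to zero, then apply the
    feasibility-seeking operator: x^{k+1} = P_k(x^{k,N})."""
    x = np.asarray(x0, dtype=float).copy()
    ell = -1
    for _ in range(max_outer):
        y = x.copy()
        for _ in range(N):
            g = grad_phi(y)
            n = np.linalg.norm(g)
            v = -g / n if n > 0 else np.zeros_like(g)  # non-ascending for phi
            while True:                                # shrink beta until accepted
                ell += 1
                beta = a ** ell
                z = y + beta * v
                if in_delta(z) and phi(z) <= phi(x):
                    y = z
                    break
        x = np.asarray(basic_step(y), dtype=float)     # feasibility-seeking step
    return x
```

Since $\eta_{\ell}=a^{\ell}$ is strictly decreasing to zero and $v$ is non-ascending, the inner acceptance loop always terminates.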
\section{A general projection scheme for convex optimization\label{sec:GeneralOpt}}
In this section we present our scheme (Algorithm \ref{Algorithm:epsilon-scheme}) for solving convex optimization problems by translating them into a sequence of convex feasibility problems
(\ref{Problem:k_CFP}) and then solving each of them by some projection method; since we are concerned with the general convex case, subgradient projections are most likely to be used, but any type of projection can be employed, for example orthogonal and Bregman projections, see e.g., \cite{CZ97}. There are two essential questions in this scheme: the first is how to construct the sequence of convex feasibility problems, that is, how to choose an $\varepsilon_{k}$ to update $t_{k}$; the second is when to stop the procedure. For the latter question, that is, the stopping criterion, there are several options, the most popular being a maximum number of iterations (an upper bound should be specified in advance), or checking whether either $\|x^{k+1}-x^k\|$ or $|f(x^{k+1})-f(x^k)|$ falls below some given positive parameter.\\
For (\ref{Problem:1}) we denote $C_{i}:=\left\{ x\in
\mathbb{R}
^{n}\mid g_{i}(x)\leq0\right\} $, we assume that $\boldsymbol{C}:=\cap_{i\in I}C_{i} \neq\emptyset$ and for $t\in
\mathbb{R}
$ we denote $C^{t}:=\left\{ x\in
\mathbb{R}
^{n}\mid f(x)\leq t\right\} $.
We would like to motivate our scheme (Algorithm \ref{Algorithm:epsilon-scheme}) by reviewing a natural extension, to the general convex case, of ART3+O \cite{ccmzkh10}, which was designed for linear problems. We will also show how the mathematical disadvantages of ART3+O can be treated in our new scheme. ART3+O is based on the same reformulation of the original optimization problem (\ref{Problem:1}) into a feasibility problem (\ref{Problem:2}). Then the optimal level set value is determined using a one-dimensional line search on $t_{k}$. In the original work \cite{ccmzkh10} the \textit{Dichotomous} (bisection) line search \cite[Chapter 8]{BSS06} was used (in practice a variant of this line search was used), but any line search could be applied.
Assume that a lower bound of $f$ is given and denote it by $f_{l}$. We denote by $f^{h}$ an upper bound of $f$ and initialize it to $\infty$. In \cite{ccmzkh10} a bisection scheme is also used, but only for the linear case; in what follows we generalize this scheme to the convex optimization setting (below $k$ is a natural number).
\begin{algorithm} [Bisection scheme]\label{Algorithm:Bisection}$\left. {}\right. $
\textbf{Initialization:} Solve the following CFP \begin{equation} \text{find a point }x^{0}\in\boldsymbol{C} \end{equation} set $f^{h}=f(x^{0})$ and $t_{0}=\left( f_{l}+f^{h}\right) /2$.
\textbf{Iterative step:} Given $t_{k-1}$, try to find a feasible solution $x^{k} \in\boldsymbol{C}\cap C^{t_{k-1}}$;
(i) If there exists a feasible solution, set $f^{h}=f(x^{k})$ and continue with $t_{k}:=\left( f_{l}+f^{h}\right) /2$;
(ii) If there is no feasible solution, determined by a \textquotedblleft time-out\textquotedblright\ rule (meaning that a feasible point cannot be found in $n_{max}$ iterations; other alternatives might be \cite{kac91, Kiwiel96} and \cite{cl02}), then set $f_{l}=t_{k-1}$ and continue with $t_{k}:=\left( f_{l}+f^{h}\right) /2$;
(iii) If $\left\vert f^{h} - f_{l} \right\vert \leq\gamma$ for small enough $\gamma>0$, then stop. A $\gamma$-optimal solution is obtained. \end{algorithm}
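A compact Python sketch of the bisection scheme, under a hypothetical interface of our own: \texttt{solve\_cfp(t)} attempts the CFP with the extra constraint $f(x)\leq t$ and returns \texttt{None} when it declares infeasibility (e.g. by the time-out rule); $t=\infty$ means the plain CFP:

```python
def bisection_scheme(f, f_low, solve_cfp, gamma=1e-3, max_iter=100):
    """Bisection on the level t between the lower bound f_low and the upper
    bound f(x^0) obtained from an initial feasible point."""
    x = solve_cfp(float("inf"))          # initialization: plain feasibility
    f_high = f(x)
    for _ in range(max_iter):
        if abs(f_high - f_low) <= gamma: # (iii): gamma-optimal solution found
            break
        t = (f_low + f_high) / 2.0
        y = solve_cfp(t)
        if y is not None:                # (i): feasible, tighten the upper bound
            x, f_high = y, f(y)
        else:                            # (ii): declared infeasible, raise f_l
            f_low = t
    return x, f_high
```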
Next we present our new scheme, which we call the level set scheme, for solving the constrained minimization problem (\ref{Problem:1}). Let $\left\{ \varepsilon_{k}\right\} _{k=0}^{\infty}$ be some user-chosen positive sequence such that $\sum_{k=0}^{\infty}\varepsilon_{k}=\infty$. We choose $\varepsilon_{k}=\max\{0.1|f(x^{k})|,0.1\}$.
\begin{algorithm} [Level set scheme]\label{Algorithm:epsilon-scheme}$\left. {}\right. $
\textbf{Initialization:} Solve the following CFP \begin{equation} \text{find a point }x^{0}\in\boldsymbol{C} \end{equation} and set $t_{0}=f(x^{0})-\varepsilon_{0}$.
\textbf{Iterative step:} Given the current point $x^{k-1}$, try to find a point $x^{k}\in\boldsymbol{C}\cap C^{t_{k-1}}$;
(i) If there exists a feasible solution, set $t_{k}=f(x^{k})-\varepsilon_{k}$ and continue.
(ii) If there is no feasible solution, then $x^{k-1}$ is an $\varepsilon_{k} $-optimal solution.
\end{algorithm}
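The level set scheme admits an equally compact sketch (again with a hypothetical interface of our own: \texttt{solve\_cfp(t)} attempts to find $x\in\boldsymbol{C}\cap C^{t}$ and returns \texttt{None} on declared infeasibility; $t=\infty$ means the plain CFP):

```python
def level_set_scheme(f, solve_cfp, max_iter=1000):
    """Shrink the level t by eps_k = max(0.1|f(x^k)|, 0.1) until the CFP
    over C intersect {f <= t} is declared infeasible; the last feasible
    point is then an eps_k-optimal solution."""
    x = solve_cfp(float("inf"))                  # initialization
    t = f(x) - max(0.1 * abs(f(x)), 0.1)
    for _ in range(max_iter):
        y = solve_cfp(t)
        if y is None:                            # (ii): x is eps_k-optimal
            return x
        x = y                                    # (i): feasible, lower t again
        t = f(x) - max(0.1 * abs(f(x)), 0.1)
    return x
```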
\begin{remark} Compared with the bisection strategy, infeasibility is detected only once, just before we get the $\varepsilon_{k}$-optimal solution.
\end{remark}
This level set scheme is quite general as it allows users to decide in advance what projection method they would like to use in that scheme. For the numerical results, we decided to apply the scheme with the following variations of projection methods.
\textbf{1.} Each convex feasibility problem in the level set scheme is solved via the \textit{Cyclic Subgradient Projections Method} (CSPM) (\ref{csp-def}) with over-relaxation parameters $\lambda_k\in(1,2)$.
\textbf{2.} Each convex feasibility problem in the level set scheme is solved based on the superiorization methodology. By doing so we try to decrease the objective function value below $t_{k-1}$. Following the recent result of \cite[Section 7]{cr13}, the methodology can be extended to convex and non-convex constraints. In general, superiorization does not provide an optimality certificate, therefore, we propose a sequential superiorization method where we decrease the sub-level sets of the objective function $f$ according to the level set scheme.
\textbf{3.} In the first variation, where the CSPM is used to solve the resulting feasibility problems, it may happen that the objective function decreases only by some small $\varepsilon_k$ in each step $k$. Combining the previous ideas, if only small steps are detected as progress within a block of iterations, a perturbation along the negative gradient of the objective is performed, just as in superiorization: the current iterate is shifted by $x^{k}\leftarrow x^{k}-1.9\nabla f(x^{k})$. This is clearly a heuristic step and does not guarantee that $f(x^{k}-1.9\nabla f(x^{k}))\leq f(x^{k})$; it can therefore be revised by using an adaptive step-size rule, choosing some positive $\alpha$ such that $f(x^{k}-\alpha\nabla f(x^{k}))\leq f(x^{k})$.
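The adaptive step-size safeguard just mentioned can be sketched by simple backtracking (our naming; a hedged illustration, not part of the algorithm as stated):

```python
import numpy as np

def safeguarded_perturbation(f, grad_f, x, alpha0=1.9, shrink=0.5, max_tries=30):
    """Try x - alpha*grad f(x), halving alpha from alpha0 = 1.9 until the
    objective does not increase; keep x unchanged if no alpha succeeds."""
    g = grad_f(x)
    alpha = alpha0
    for _ in range(max_tries):
        z = x - alpha * g
        if f(z) <= f(x):
            return z
        alpha *= shrink
    return x
```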
Let $c,s>0$ be small user-chosen constants and let $\left\{ \varepsilon_{k}\right\} _{k=0}^{\infty}$ be a positive sequence such that $\sum_{k=0}^{\infty}\varepsilon_{k}=\infty$; set $\delta=0$. In addition, determine the size $BLOCK$ of each block of iterations; for example, if we decide to run 1000 iterations then $BLOCK=1000$.
\begin{algorithm} \label{Algorithm:modified-epsilon-scheme} $\left. {}\right. $
\textbf{Initialization:} Let $\delta=0$ and $t_{-1}:=\infty$; Solve the following CFP \begin{equation} \text{find a point }x^{0}\in\boldsymbol{C} \end{equation} and set $t_{0}=f(x^{0})-\varepsilon_{0}$.
\textbf{Iterative step:} At the $k$-th iterate compute $\left\vert t_{k-2}-t_{k-1}\right\vert $;
If $\left\vert t_{k-2}-t_{k-1}\right\vert \leq c\varepsilon_{k-1}$, then $\delta=\delta+1$.
If $\delta/BLOCK>s,$ then $\delta=0$ and $x^{k-1}\leftarrow x^{k-1}-1.9\nabla f(x^{k-1}).$
Set $t_{k-1}=f(x^{k-1})-\varepsilon_{k-1}$ and try to find a solution $x^{k}\in\boldsymbol{C}\cap C^{t_{k-1}}$ to the CFP. \end{algorithm}
\begin{remark} \textbf{Relation with previous work.} There are numerous approaches in the literature on how to update $t_{k}$, and thereby the sub-level set of $f$, in solving (\ref{Problem:k_CFP}), or on how to transform (\ref{Problem:1}) into a sequence of CFPs. Among these schemes are those of Khabibullin \cite[English translation]{Khabibullin87_1} and \cite[English translation]{Khabibullin87}; the \textit{cutting-plane} or \textit{localization methods}, see \cite{Kelly60, em75, lnn95, gly96, Kiwiel96, gk99} and \cite{BV07, cl02}; Cegielski \cite{cegielski99} and also \cite{cd02,cd03}; and the \textit{subgradient method for constrained optimization}, see e.g. \cite{bm08}. For the optimization problem (\ref{Problem:1}) with a separable objective see De Pierro and Helou Neto \cite{dn09}, and see also \cite{ccmzkh10}. \end{remark}
\subsection{Convergence \label{sec:Convergence}}
Next we present the convergence proof of Algorithm \ref{Algorithm:epsilon-scheme}. We use arguments different from those presented for other finitely convergent projection methods, for example, to name but a few, Khabibullin \cite[English translation]{Khabibullin87} (see also Kulikov and Fazylov \cite{kf89} and Konnov \cite[Procedure A.]{konnov98}), Iusem and Moledo \cite{im87}, De Pierro and Iusem \cite{di98}, and Censor, Chen and Pajoohesh \cite{ccp11}. These algorithms assume that the Slater condition (Definition \ref{Def:Slater}) holds.
\begin{theorem} \label{thm:P_k} Let $(P_{k})_{k=0}^{\infty}$ be a sequence of algorithmic schemes (formally, each $P_{k}$ is a Turing machine). For every $k\in \mathbb{N}\cup\{0\}$ the goal of $P_{k}$ is to solve the sub-problem (\ref{Problem:k_CFP}). It produces, after a finite number of machine operations, an output, and then it terminates. There are three possible cases for this output: if there exists a solution to (\ref{Problem:k_CFP}) and the machine is able to find it before it passes a given threshold (that is, before it performs a too large number of machine operations, where this \textquotedblleft large number\textquotedblright\hspace{.1in}is fixed in the beginning), then this output is a point $x^{k}$ which solves (\ref{Problem:k_CFP}); if there exists no solution to (\ref{Problem:k_CFP}) and the machine is able to determine this case before it passes the threshold, then the output is a string indicating that (\ref{Problem:k_CFP}) has no solution; otherwise the output is a string indicating that the threshold has been passed. In addition, if for some $k\in\mathbb{N}\cup\{0\}$ the algorithmic scheme $P_{k}$ is able to find a point $x^{k}$ satisfying (\ref{Problem:k_CFP}), then a positive number $\epsilon_{k}$ is produced (it may or may not depend on $x^{k}$) and one defines $t_{k}:=f(x^{k})-\epsilon_{k}$. Assume further that there exists a sequence $(\tilde{\epsilon}_{k})_{k=0}^{\infty}$ satisfying $\sum_{k=1}^{\infty}\tilde{\epsilon}_{k}=\infty$ and having the property that for every $k\in\mathbb{N}\cup\{0\}$, if $P_{k}$ is able to find a point $x^{k}$ satisfying (\ref{Problem:k_CFP}) before passing the threshold, then $\epsilon_{k} \geq\tilde{\epsilon}_{k}$. Under the above mentioned assumptions, Algorithm \ref{Algorithm:epsilon-scheme} terminates after a finite number of machine operations, and, moreover, exactly one of the following cases must hold:
\textbf{Case 1:}\label{item:P_0} The only algorithmic scheme that has been applied is $P_{0}$, and either it declares that (\ref{Problem:k_CFP}) has no solution or it declares that the threshold has been passed;

\textbf{Case 2:}\label{item:LastComputed} There exists $k\in\mathbb{N}\cup\{0\}$ such that $P_{0},\ldots,P_{k}$ are able to solve (\ref{Problem:k_CFP}) before the threshold has been passed and $P_{k+1}$ terminates by declaring that (\ref{Problem:k_CFP}) does not have a solution. In this case $x^{k}$ is an $\epsilon_{k}$-approximate solution of the minimization problem (\ref{Problem:2});

\textbf{Case 3:}\label{item:threshold} There exists $k\in\mathbb{N}\cup\{0\}$ such that $P_{0},\ldots,P_{k}$ are able to solve (\ref{Problem:k_CFP}) before the threshold has been passed and $P_{k+1}$ terminates by declaring that the threshold has been passed. \end{theorem}
\begin{proof} A simple verification shows that the three cases mentioned above are mutually disjoint, and therefore at most one of them can hold. Hence it is sufficient to show that at least one of these cases holds. The level-set scheme starts at $k=0$. According to our assumption on $P_{0}$ (and on any other algorithmic scheme), either it is able to solve (\ref{Problem:k_CFP}) before passing the threshold, or it is able to show before passing the threshold that (\ref{Problem:k_CFP}) does not have any solution, or it passes the threshold before being able to determine whether (\ref{Problem:k_CFP}) has a solution. If either the second or the third case holds, then we are in the first case (Case 1) mentioned by the theorem, and the proof is complete (the number of machine operations performed in both cases is finite by the assumption on $P_{0}$). Hence from now on we assume that $P_{0}$ is able to solve (\ref{Problem:k_CFP}) before passing the threshold.
According to the definition of the level-set scheme, since we assume that $P_{0}$ was able to solve (\ref{Problem:k_CFP}), we now consider $P_{1}$. Either $P_{1}$ finds a solution $x^{1}$ to (\ref{Problem:k_CFP}) before passing the threshold, or it is able to show before passing the threshold that (\ref{Problem:k_CFP}) does not have any solution, or it passes the threshold before being able to determine whether (\ref{Problem:k_CFP}) has a solution. In the second case we are in Case 2 of the theorem and in the third case we are in Case 3 of the theorem. Hence in the second and third cases the proof is complete (up to the verification that in the second case $x^{k}$ is an $\epsilon_{k}$-optimal solution: see the next paragraph), and so we assume from now on that $P_1$ finds a solution to (\ref{Problem:k_CFP}) before passing the threshold. Continuing this reasoning, one shows by induction that exactly one of the following subcases holds: either every $P_{k}$, $k\in\mathbb{N}\cup\{0\}$, is able to solve (\ref{Problem:k_CFP}) before passing the threshold (first subcase), or there exists a minimal $k\in \mathbb{N}\cup\{0\}$ such that every $P_{j}$, $j\in\{0,\ldots,k\}$, is able to solve (\ref{Problem:k_CFP}) before passing the threshold but $P_{k+1}$ either shows that (\ref{Problem:k_CFP}) does not have any solution (second subcase) or passes the threshold before being able to decide whether (\ref{Problem:k_CFP}) has a solution (third subcase). In the second subcase we are in Case 2 of the theorem, and in the third subcase we are in Case 3 of the theorem. In both subcases the accumulated number of machine operations is, of course, finite, since it is the sum of the finitely many machine operations performed by each of the algorithmic schemes $P_{j}$, $j\in\{0,1,\ldots,k+1\}$.
In the third subcase the proof is complete but in the second subcase we also need to show that $x^{k}$ is an $\epsilon_{k}$-optimal solution. Indeed, suppose that this subcase holds. Then $C\cap C^{t_{k}}=\emptyset$. Since a basic assumption of the paper is that the set of minimizers of $f$ over $C$ is non-empty, there exists $x^{*}\in C$ satisfying $f(x^{*})=t^{*}:=\inf\{f(x): x\in C\}$. It must be that $t^{*}>t_{k}$ because otherwise we would have $f(x^{*})=t^{*}\leq t_{k}$, i.e., $x^{*}\in C\cap C^{t_{k}}$, a contradiction. Because $x^{k}\in C$ one has $t^{*}\leq f(x^{k})$. Hence $t^{*}\leq f(x^{k})=t_{k}+\epsilon _{k}<t^{*}+\epsilon_{k}$ and therefore $|f(x^{k})-t^{*}|<\epsilon_{k}$. In other words, $x^{k}$ is an $\epsilon_{k}$-optimal solution, as required.
Therefore it remains to deal with the first subcase mentioned earlier, in which each $P_{k}$, $k\in\mathbb{N}\cup\{0\}$, is able to solve (\ref{Problem:k_CFP}) before passing the threshold. Assume to the contrary that this subcase holds. Then for each $k\in\mathbb{N}\cup\{0\}$ the point $x^{k}$ and the numbers $\epsilon _{k}$ and $t_{k}$ are well-defined and their definitions imply (by induction) that when $k\geq2$, then \begin{equation} t_{k}=f(x^{k})-\epsilon_{k}\leq t_{k-1}-\epsilon_{k}\leq\ldots\leq t_{0} -\sum_{j=1}^{k}\epsilon_{j}\leq t_{0}-\sum_{j=1}^{k}\tilde{\epsilon}_{j}. \label{t_k<t_0-sum} \end{equation} Because $t^{\ast}=f(x^{\ast})\in\mathbb{R}$ and since, according to our assumption, $\sum_{j=1}^{\infty}\tilde{\epsilon}_{j}=\infty$, for large enough $k\in\mathbb{N}$ we have $t_{0}-t^{\ast}<\sum_{j=1}^{k}\tilde{\epsilon}_{j}$. By combining this with (\ref{t_k<t_0-sum}) it follows that \begin{equation} t_{k}\leq t_{0}-\sum_{j=1}^{k}\tilde{\epsilon}_{j}<t^{\ast}. \label{t_k<v_0} \end{equation} On the other hand, by the contrary assumption, in iteration $k+1$ a solution $x^{k+1}$ to (\ref{Problem:k_CFP}) is found by $P_{k+1}$. Because $x^{k+1}$ solves (\ref{Problem:k_CFP}) we have $f(x^{k+1})\leq t_{k}$, and hence it follows from (\ref{t_k<v_0}) that $f(x^{k+1})<t^{\ast}$, a contradiction to the definition of $t^{\ast}$. This contradiction shows that the first subcase, in which each $P_k$, $k\in \mathbb{N}\cup\{0\}$, is able to solve (\ref{Problem:k_CFP}) before passing the threshold, cannot occur.
\end{proof}
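The case analysis in the theorem and its proof can be sketched as a generic driver loop. The following Python sketch treats each $P_{k}$ as a black box; the names (\texttt{level\_set\_driver}, \texttt{Outcome}) and the stub sub-problem solver are our own illustrative assumptions, not part of Algorithm \ref{Algorithm:epsilon-scheme} itself.

```python
from enum import Enum

class Outcome(Enum):
    SOLVED = 1      # P_k found x^k solving the sub-problem
    INFEASIBLE = 2  # P_k proved the sub-problem has no solution
    THRESHOLD = 3   # P_k exceeded its operation threshold

def level_set_driver(P, f, eps_rule, max_levels=10**6):
    """Generic driver for the case analysis of the theorem.

    P(k, t) plays the role of the scheme P_k: for k = 0 the level t is
    ignored; for k >= 1 it looks for a feasible point with f(x) <= t.
    Returns the terminal case together with the last computed iterate.
    """
    t, x_last, eps_last = None, None, None
    for k in range(max_levels):
        outcome, x = P(k, t)
        if outcome is Outcome.INFEASIBLE:
            # Case 1 (k == 0) or Case 2: x_last is eps_last-approximate.
            return ("case1" if k == 0 else "case2"), x_last, eps_last
        if outcome is Outcome.THRESHOLD:
            return ("case1" if k == 0 else "case3"), x_last, eps_last
        x_last = x
        eps_last = eps_rule(x)
        t = f(x) - eps_last        # t_k := f(x^k) - eps_k
    raise RuntimeError("nontermination: sum of eps_k must diverge")

# Illustrative run: minimize f(x) = x over C = [1, oo), so t* = 1.
f = lambda x: x
def P(k, t):
    if k == 0:
        return Outcome.SOLVED, 3.0   # any feasible starting point
    return (Outcome.SOLVED, max(t, 1.0)) if t >= 1.0 else (Outcome.INFEASIBLE, None)

case, x, eps = level_set_driver(P, f, lambda x: 0.5)
```

In this toy run the driver terminates in Case 2 with $x^{k}=1$, which lies within $\epsilon_{k}=0.5$ of the optimal value, exactly as the theorem guarantees.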
\begin{remark} If for some given $\epsilon>0$ the sequence $\{\epsilon_{k}\}_{k=0}^{\infty}$ satisfies $\epsilon_{k}<\epsilon$ for all sufficiently large $k\in\mathbb{N}$, then the theorem ensures, in its second case, that the point $x^{k}$ will be an $\epsilon$-approximate solution. \end{remark}
\begin{remark}
An illustration of the condition needed in Theorem \ref{thm:P_k} is obtained by letting $\epsilon _{k}:=\max\{0.1,0.1 |f(x^{k})|\}$, as is done in the numerical experiments (Section \ref{sec:Numerical_Experiments}). In this case $\epsilon_{k}\geq\tilde{\epsilon}_{k}:=0.1$ for every $k\in\mathbb{N}\cup\{0\}$ for which $P_{k}$ is able to solve (\ref{Problem:k_CFP}) before passing the threshold, and, in addition, $\sum_{k=0}^{\infty} \tilde{\epsilon }_{k}=\infty$, as required. However, if one merely defines $\epsilon_{k}:=0.1 |f(x^{k})|$ instead of $\epsilon_{k}:=\max\{0.1,0.1 |f(x^{k})|\}$, or, more generally, if one uses algorithmic schemes $P_{k}$ which, for every $k\in\mathbb{N}\cup\{0\}$, are able to solve (\ref{Problem:k_CFP}) before passing the threshold and $\sum_{k=0}^{\infty}\epsilon_{k}<\infty$, then it may happen that none of the values $f(x^{k})$ approximates the optimal value $t^{*}$ well.
Indeed, consider $f(x):=x^{2}-100$, $x\in C:=\mathbb{R}$. Denote $x^{0}:=\sqrt{500}$. Suppose that for each $k\in\mathbb{N}$ our schemes $P_{k}$ find a point
$x^{k}$ satisfying $f(x^{k})=t_{k-1}$, namely $x^{k}=\sqrt{100+t_{k-1}}$ (we assume that $P_{k}$ can represent numbers in an algebraic way which allows it to store square roots without the need to represent them in a decimal way; such schemes can be found in the scientific domains called \textquotedblleft computer algebra\textquotedblright, \textquotedblleft exact numerical computation\textquotedblright, and \textquotedblleft symbolic computation\textquotedblright). Let $\epsilon_{0}:=0.1|f(x^{0})|$. Since $f(x^{0})=400>0$ we have $t_{0}=f(x^{0})-\epsilon_{0}=0.9\cdot400=360$. Now we need to find a point $x^{1}$ satisfying $f(x^{1})=360$, i.e., \[ (x^{1})^{2}-100=360=0.9\cdot f(x^{0})=0.9((x^{0})^{2}-100)=0.9(x^{0})^{2}-90 \] and thus $x^{1}=\sqrt{10+0.9(x^{0})^{2}}$. By induction
$x^{k}=\sqrt{10+0.9(x^{k-1})^{2}}$, $f(x^{k})=t_{k-1}$ and $\epsilon _{k}:=0.1|f(x^{k})|$ for every $k\in\mathbb{N}$. In particular, we can see by induction that $x^{k}\geq10$ for all $k$ and hence $\epsilon_{k}=0.1f(x^{k})$ and $t_{k}=f(x^k)-\epsilon_k=0.9f(x^{k})\geq0$ for all $k\in\mathbb{N}$. Therefore
$|t_{k}-t^{\ast}|\geq100$ and $|f(x^{k})-t^{\ast}|>100$ for all $k\in \mathbb{N}$, and hence we neither have $\lim_{k\rightarrow\infty} f(x^{k})=-100=t^{\ast}$ nor $\lim_{k\rightarrow\infty}t_{k}=t^{\ast}$. It remains to show that $\sum_{k=1}^{\infty}\epsilon_{k}<\infty$. Indeed, observe that since $0<f(x^{k+1})\leq t_{k}$ for every $k\in\mathbb{N}\cup\{0\}$ and since from (\ref{t_k<t_0-sum}) we have $\sum_{i=1}^{k}\epsilon _{i}\leq t_{0}-t_{k}\leq t_{0}$, it follows that $\sum_{k=1}^{\infty} \epsilon_{k}\leq t_{0}$. Therefore $\sum_{k=1}^{\infty}\epsilon _{k}<\infty$, as claimed.
\end{remark}
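The counterexample in the remark can be checked numerically. The short script below (an illustrative sketch, not part of the original construction) iterates $x^{k}=\sqrt{10+0.9(x^{k-1})^{2}}$ and confirms that $f(x^{k})=400\cdot 0.9^{k}$ decays to $0$, staying more than $100$ away from $t^{\ast}=-100$, while the series $\sum_{k}\epsilon_{k}$ remains bounded by $400$.

```python
import math

f = lambda x: x**2 - 100.0        # t* = -100, attained at x = 0
x = math.sqrt(500.0)              # x^0, with f(x^0) = 400
eps_sum = 0.1 * abs(f(x))         # eps_0 = 40
for _ in range(1000):
    x = math.sqrt(10.0 + 0.9 * x**2)   # x^k = sqrt(10 + 0.9 (x^{k-1})^2)
    eps_sum += 0.1 * abs(f(x))         # eps_k = 0.1 |f(x^k)|
# f(x^k) = 400 * 0.9^k -> 0, so neither f(x^k) nor t_k approaches t* = -100,
# while sum_k eps_k = sum_k 40 * 0.9^k = 400 < infinity.
```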
\section{Numerical experiments \label{sec:Numerical_Experiments}}
In this section, we compare several variants of the two optimization schemes (Algorithms \ref{Algorithm:epsilon-scheme} and \ref{Algorithm:modified-epsilon-scheme}) for some selected optimization problems. All solvers were tested on the freely available library of convex quadratic programming problems stored in the QPS format by Maros and M\'{e}sz\'{a}ros \cite{mm99}, as well as on clinical cases from intensity modulated radiation therapy (IMRT) planning provided to us by the German Cancer Research Center (DKFZ) in Heidelberg. The QPS problems were parsed using the parser from the CoinUtils package \cite{CoinUtils} and consist of quadratic objectives and linear constraints. The IMRT problem data were constructed using a prototypical treatment planning system developed by the Fraunhofer ITWM and consist of nonlinear convex objectives and constraints.
\begin{remark} \label{remark:conv} It is clear that, from the mathematical point of view, only finitely convergent projection methods can be applied in each iterative step. However, numerical experiments show that even asymptotically convergent algorithms can be used when the stopping rule is chosen in an educated way. For further finitely convergent methods see \cite{pm79, fuku82, mph81}. \end{remark}
The algorithms were implemented in C++. As solvers for the feasibility problem, we implemented the finitely convergent variants of the Cyclic Subgradient Projections Method (CSPM) and the Algebraic Reconstruction Technique 3 (ART3+), and also their regular versions with a standard stopping rule; see the beginning of Section \ref{sec:GeneralOpt} and Remark \ref{remark:conv}. The superiorized versions of these methods simply use the objective function of the optimization problem as a merit function to be decreased. Although the superiorized versions of CSPM and ART3+ performed surprisingly well in terms of the objective function value they obtained, the solutions were in almost all cases far from the optimum. Table \ref{tab:solver_variants} lists the variants of the level set and bisection schemes that are compared:
\begin{table}[htp]
\begin{tabular}[c]{|l|l|}\hline
Scheme variant & Abbreviation\\\hline
Level set (Alg. \ref{Algorithm:epsilon-scheme}) with CSPM & ls\_cspm\\\hline
Level set (Alg. \ref{Algorithm:epsilon-scheme}) with ART3+ & ls\_art3+\\\hline
Accelerated level set (Alg. \ref{Algorithm:modified-epsilon-scheme}) with CSPM & ls\_acc\_cspm\\\hline
Level set (Alg. \ref{Algorithm:epsilon-scheme}) with superiorized CSPM & ls\_sup\_cspm\\\hline
Level set (Alg. \ref{Algorithm:epsilon-scheme}) with superiorized ART3+ & ls\_sup\_art3+\\\hline
Accelerated level set (Alg. \ref{Algorithm:modified-epsilon-scheme}) with superiorized CSPM & ls\_acc\_sup\_cspm\\\hline
Bisection (Alg. \ref{Algorithm:Bisection}) with CSPM & bis\_cspm\\\hline
Bisection (Alg. \ref{Algorithm:Bisection}) with ART3+ & bis\_art3+\\\hline
Accelerated Bisection with CSPM & bis\_acc\_cspm\\\hline
Bisection (Alg. \ref{Algorithm:Bisection}) with superiorized CSPM & bis\_sup\_cspm\\\hline
Bisection (Alg. \ref{Algorithm:Bisection}) with superiorized ART3+ & bis\_sup\_art3+\\\hline
Accelerated Bisection with superiorized CSPM & bis\_acc\_sup\_cspm\\\hline
\end{tabular}
\caption{Overview of all tested schemes.}
\label{tab:solver_variants}
\end{table}
The bisection schemes were accelerated in the same way as in Algorithm \ref{Algorithm:modified-epsilon-scheme}. Although this acceleration is a heuristic that is not guaranteed to converge, we decided to include it in our comparison. To determine whether a feasible solution exists, we set a maximum of 1000 iterations for each of the feasibility solvers. In Algorithm \ref{Algorithm:Bisection}(iii) we chose $\gamma=10^{-5}$. If no feasible solution is found after 1000 projections, it is assumed that none exists. For the choice of $\varepsilon_{k}$ we used the multiplicative update rule ($\varepsilon_{k}=0.1|f(x^{k})|$) if the absolute value of the objective function is greater than $1$, and the constant update rule ($\varepsilon_{k}=0.1$) otherwise.
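The $\varepsilon_{k}$ update rule just described can be written as a one-line helper; the following Python sketch is illustrative (the function name is ours), and for $c=0.1$ it coincides with the choice $\max\{0.1,0.1|f(x^{k})|\}$ discussed after Theorem \ref{thm:P_k}.

```python
def eps_update(fx, c=0.1):
    """eps_k used in the experiments: the multiplicative rule c*|f(x^k)|
    when |f(x^k)| > 1, and the constant value c otherwise.  This equals
    max(c, c*|f(x^k)|)."""
    return c * abs(fx) if abs(fx) > 1.0 else c
```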
\subsection{IMRT cases descriptions}
Given fixed irradiation directions, the objective in IMRT optimization is to determine a treatment plan, consisting of an energy fluence distribution, that creates a dose distribution in the patient which irradiates the tumor as homogeneously as possible while sparing critical healthy organs \cite{KueMonSche09}. Figure \ref{fig:imrt} shows irradiation directions and some energy fluence maps for a paraspinal tumor case.
\begin{figure}
\caption{The gantry moves around the couch on which the patient lies. The couch position may also be changed to alter the beam directions.}
\label{fig:imrt}
\end{figure}
The problem is multi-criteria in nature, and the numerical optimization problems are often weighted-sum scalarizations of the multiple objectives involved. IMRT optimization problems can be formulated so that they are convex. For this work, we selected nine head-and-neck cancer patients and posed the same optimization formulations for each to ensure comparability; see Figure \ref{fig:Head-Neck} for two of the nine patients. We then chose four types of scalarization weights to determine four treatment plans per patient, each with distinct solution properties. Overall, this resulted in 36 optimization problems. The following list describes the different scalarizations.\\
\textbf{1. }High weights on tumor volumes, low weights on healthy organs;
\textbf{2. }High weights on tumor volumes and brain stem;
\textbf{3. }High weights on tumor volumes and spinal cord;
\textbf{4. }High weights on tumor volumes and parotid glands.\\
In order to numerically optimize the treatment plans, we define the energy fluence distribution (the variables of the optimization problem) as a vector $x\in\mathbb{R}^{n}$ (typically $n\approx 10^3$) and assume that the resulting radiation dose in the patient is given by $d=Dx\in\mathbb{R}^{m}$ (typically $m\approx 10^6$), where the entries $D_{ij}$ of the so-called dose matrix $D\in\mathbb{R}^{m\times n}$ quantify how much radiation is deposited in voxel $i$ of the patient body by a unit amount of energy emitted by a small area $j$ on the beam surface. That is, the dose in voxel $i$ is given by $ d_i(x):= \sum_{j=1}^n D_{ij} x_j $. To achieve a homogeneous dose in a tumor volume given by voxel indices $\mathcal{T}$, we use functions that minimize the amount of under-dosage below a prescribed dose $R$, \[ f_{\text{under}}(x):=\left( \left\vert \mathcal{T}\right\vert ^{-1}\sum _{i\in\mathcal{T}}\max(0,R-d_{i}(x))^{2}\right) ^{\frac{1}{2}}, \] and, symmetrically, the over-dosage above a given prescription. Using two separate functions provides better control over both aspects. These objectives are also constrained from above, resulting in nonlinear but convex constraints. The dose in risk organs given by voxel indices $\mathcal{R}$ is minimized via norms of the dose in those organs: \[ f_{\text{norm}}(x):=\left( \left\vert \mathcal{R}\right\vert ^{-1}\sum _{i\in\mathcal{R}}d_{i}^{p}(x)\right) ^{\frac{1}{p}}, \] where we used $p=2$ or $p=8$, depending on whether the organ is more sensitive to the general amount of radiation (e.g., parotid glands) or to the maximal dose received (e.g., spinal cord). Further data typical of IMRT and relevant to the implementation of our scheme, in particular the constraint set $\boldsymbol{C}$, can be found in \cite{KueMonSche09}, which also includes details on numerical optimization in IMRT planning; see also the works \cite{cekb05, aygk1505}.
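The dose computation $d=Dx$ and the two objective types above translate directly into code. The Python sketch below is an illustrative transcription of these formulas; the function names are ours, and a real dose matrix would be a large sparse matrix rather than a small list of lists.

```python
def dose(D, x):
    """d_i(x) = sum_j D[i][j] * x[j] for a (toy, dense) dose matrix D."""
    return [sum(Dij * xj for Dij, xj in zip(row, x)) for row in D]

def f_under(d, T, R):
    """Root-mean-square under-dosage below prescription R over tumor voxels T."""
    return (sum(max(0.0, R - d[i])**2 for i in T) / len(T))**0.5

def f_norm(d, Rset, p=2):
    """Normalized p-norm of the dose over risk-organ voxels Rset (p = 2 or 8)."""
    return (sum(d[i]**p for i in Rset) / len(Rset))**(1.0 / p)

# Toy example: 3 voxels, 2 beamlets; voxels 0,1 are tumor, voxel 2 is a risk organ.
D = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
d = dose(D, [1.0, 2.0])   # d = [1.0, 2.0, 3.0]
```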
Note that the IMRT problem formulations used here do not contain any linear constraints, so in the analysis ART3+ is omitted, as it is identical to CSPM in this case.
\begin{figure}
\caption{Two of the nine patients. Highlighted are some tumor volumes (lymphatic pathways and the macroscopic tumor volume) and some healthy structures to be spared.}
\label{fig:imrt_case_hnaa_sagital}
\label{fig:imrt_case_hnaa_trans}
\label{fig:imrt_case_hnbb_sagital}
\label{fig:imrt_case_hnbb_trans}
\label{fig:Head-Neck}
\end{figure} \subsection{Quality evaluation of the solutions}
All of the 36 IMRT optimization problems could be solved by all variants of the schemes. This was not the case for the QPS problems: of the 101 problems tested in the library, only 50 could be solved by all of the variants. There are two reasons why, for the other 51 cases, the projection methods were unable to find an initial feasible solution. The first reason is that most of these QPS problems are indeed infeasible. The second reason is that in \cite{mm99} the primal infeasibility stopping criterion is $\|Ax-b\|/(1-\|b\|)<10^{-8}$ (where $A$ and $b$ are part of the QPS problem constraints and $\|b\|<1$), combined with an additional stopping criterion for the dual infeasibility, while in our implementations we chose a maximum number of iterations, as in \cite{ccmzkh10} (denoted there by $Q$), as the stopping rule. These two stopping criteria are clearly different. It turns out that even when the number of iterations was increased to $10{,}000$, CSPM still could not find a feasible point, and hence these problems were declared infeasible. In general, the behaviour of projection methods in the inconsistent (infeasible) case has attracted many researchers, and the subject is not fully explored. Some of the results in this area state that in the infeasible case there is cyclic convergence, while for other methods, mainly simultaneous ones, the iterates converge to a point that minimizes the norm of the infeasibilities. For further details the reader is referred to the works of Gubin, Polyak and Raik \cite[Theorem 2]{gpr67}, Censor and Tom \cite{ct03}, the book of Chinneck \cite{Chinneck08}, and the many references therein.
Moreover, in the recent result of Censor and Zur \cite{cz16}, superiorization is used for the inconsistent linear case and it is shown that the generated sequence converges to a point that minimizes a proximity function which measures the linear constraints violation.
The following analysis concerning the QPS problems is restricted to those problems that could be solved by all variants. We report the required total number of projections and the total number of objective evaluations for each method as measures of numerical complexity, since these are independent of machine architecture and of the efficiency of the implementation (parallelization or other software acceleration techniques). To measure the quality of the solutions, the following score $Q$ was calculated for each solver variant and each problem. Let $\hat{f}$ be the best objective value the solver found and $f^{*}$ the best known objective value for the problem. We define \[ Q := \begin{cases} \hat{f}, & \mbox{if $f^*=0$},\\
\hat{f} - f^{*}, & \mbox{if $|f^*| \leq 1$},\\
(\hat{f} - f^{*}) / |f^{*}|, & \mbox{else}. \end{cases} \] Thus, the closer $Q$ is to $0$, the better the score; a positive value of $Q$ measures the deviation from optimality. Tables \ref{table:SummaryQualityScoresAlgorithms_QPS} and \ref{table:SummaryQualityScoresAlgorithms_IMRT} show some statistics for the deviation from optimality of the solvers.
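The score $Q$ translates directly into a small function (a sketch; the function name is ours):

```python
def quality_score(f_hat, f_star):
    """Quality score Q: 0 is optimal; positive values measure the absolute
    (|f*| <= 1) or relative (|f*| > 1) deviation from the best known value."""
    if f_star == 0:
        return f_hat
    if abs(f_star) <= 1:
        return f_hat - f_star
    return (f_hat - f_star) / abs(f_star)
```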
The median quality scores of all optimization schemes for the problems that could be solved are very good, meaning that each solver can be expected to find the optimal solution if the underlying projection method can find a feasible starting point. The average is heavily skewed by some outliers, i.e., instances for which the algorithms needed many iterations, especially for the QPS problems and for the bisection schemes on both problem types. However, not all problems are solved very well, as the 90-th quantiles show: in 10\% of all problem instances the algorithms did not produce a very good solution.
Based on these findings, the following questions are addressed in the subsequent subsections:
\textbf{1. }Do the accelerated versions of the schemes outperform the basic versions in terms of quality and complexity?
\textbf{2. }Do the superiorized versions of the feasibility solvers outperform their basic variants, as the lower average deviations of the solvers marked \textquotedblleft sup\textquotedblright\hspace{.1in}suggest?
\textbf{3. }Is the level set scheme better than the bisection scheme in terms of quality and complexity?
\begin{table}[H]
\begin{tabular}[c]{|l|c|c|c|c|}\hline
Scheme variant & Average Q & Median Q & 10-th quantile Q & 90-th quantile Q\\\hline
ls\_cspm & 0.44 & 0.05 & 0.00 & 0.66\\\hline
ls\_art3+ & 1.83 & 0.05 & 0.00 & 0.66\\\hline
ls\_acc\_cspm & 0.44 & 0.06 & 0.00 & 0.66\\\hline
ls\_sup\_cspm & 0.17 & 0.05 & 0.00 & 0.66\\\hline
ls\_sup\_art3+ & 0.14 & 0.06 & 0.00 & 0.66\\\hline
ls\_acc\_sup\_cspm & 0.20 & 0.05 & 0.00 & 0.66\\\hline
bis\_cspm & 7.22 & 0.04 & 0.00 & 1.10\\\hline
bis\_art3+ & 7.07 & 0.04 & 0.00 & 1.00\\\hline
bis\_acc\_cspm & 7.24 & 0.05 & 0.00 & 1.10\\\hline
bis\_sup\_cspm & 0.19 & 0.03 & 0.00 & 0.89\\\hline
bis\_sup\_art3+ & 0.16 & 0.02 & 0.00 & 0.83\\\hline
bis\_acc\_sup\_cspm & 0.21 & 0.05 & 0.00 & 0.89\\\hline
\end{tabular}
\caption{Quality scores of all tested algorithms over all 50 QPS problems that could be solved by all variants. The statistics are taken over the 50 problem runs, thus aggregating the results over all problems.}
\label{table:SummaryQualityScoresAlgorithms_QPS}
\end{table}
\begin{table}[H]
\begin{tabular}[c]{|l|c|c|c|c|}\hline
Scheme variant & Average Q & Median Q & 10-th quantile Q & 90-th quantile Q\\\hline
ls\_cspm & 0.13 & 0.11 & 0.03 & 0.23\\\hline
ls\_acc\_cspm & 0.13 & 0.11 & 0.03 & 0.23\\\hline
ls\_sup\_cspm & 0.06 & 0.06 & 0.00 & 0.12\\\hline
ls\_acc\_sup\_cspm & 0.05 & 0.05 & 0.00 & 0.13\\\hline
bis\_cspm & 2.72 & 0.02 & 0.00 & 8.52\\\hline
bis\_acc\_cspm & 2.72 & 0.02 & 0.00 & 8.52\\\hline
bis\_sup\_cspm & 0.04 & 0.04 & 0.00 & 0.08\\\hline
bis\_acc\_sup\_cspm & 0.04 & 0.03 & 0.00 & 0.09\\\hline
\end{tabular}
\caption{Quality scores of all tested algorithms over all IMRT problems. ART3+ was omitted as the problem formulations did not contain any linear constraints. The statistics are taken over the 36 problem runs, thus aggregating the results over all problems.}
\label{table:SummaryQualityScoresAlgorithms_IMRT}
\end{table}
\subsection{Is the accelerated scheme better than the basic scheme?}
Only the CSPM variants of the level set scheme and the bisection scheme are studied, since they are the most promising candidates for each (in the bisection scheme, there is no difference between CSPM and ART3+). The quality scores of the level set scheme with CSPM (ls\_cspm) and the accelerated level set scheme (ls\_acc\_cspm) were compared to see whether there is a statistically significant difference in the outcome of the methods. For the 50 QPS problems, the median difference is 0, and the two-tailed paired t-test for the sample mean difference returns a p-value of 0.472, indicating that there is no significant difference between the two versions. For the IMRT problems there was no difference at all in the quality score between the two variants.
The results for the bisection scheme are even clearer. For the QPS problems, the median difference is 0 and the average difference is $-0.01$. In no case did the accelerated version produce a better objective score, and at worst it incurred a loss of 0.28 in the objective score. A statistical test was not performed for this case. Similar results were found for the IMRT problems.
As there is no difference in quality, the question remains whether the accelerated variants can be better in terms of a faster function decrease or a smaller number of constraint projections or function evaluations. For IMRT, there was no difference in running time or in the rate of decrease of the objective function: the behavior was identical for the basic and accelerated versions. For the QPS problems, however, the acceleration of the level set scheme led to a faster converging algorithm. The rate of decrease of the objective function over all feasible solutions produced by the accelerated version can be up to 15 times the rate of the basic version. The chart in Figure \ref{fig:speedupObjDecLsAccCspmVsLsCspm} shows the frequencies of the different speedup factors realized by the accelerated level set scheme over the basic level set scheme. The bins on the horizontal axis denote the factor by which the objective scores (in the case of Figure \ref{fig:speedupObjDecLsAccCspmVsLsCspmQPS}) decreased faster in the accelerated cases. \textquotedblleft Frequency\textquotedblright\hspace{.1in}refers to the number of solver instances for which the given speedup factor was observed. Note that the frequency of the 0 bin is exactly the number of cases with zero speedup, that is, the cases in which the accelerated version was not faster in terms of function decrease.
\begin{figure}
\caption{Factor by which the objective function decreased faster in the accelerated versions of the schemes (QPS problems). A speedup factor of 0 means that no difference in speed was observed. Negative speedups indicate that the indicated scheme performed worse than the compared method.}
\label{fig:speedupObjDecLsAccCspmVsLsCspm}
\label{fig:speedupObjDecLsAccCspmVsLsCspmQPS}
\end{figure}
For the bisection scheme on the QPS problems, there is no difference in the rate of objective decrease between the accelerated and the basic version. Therefore, for certain problem types, the accelerated level set scheme using CSPM has great potential to speed up the rate of objective decrease while causing only a very moderate increase in the number of projections required by the algorithm (only about 1.5 times as many for 17 problems).
\subsection{Are the superiorized versions a good choice for the level set scheme?}
The quality scores in Tables \ref{table:SummaryQualityScoresAlgorithms_QPS} and \ref{table:SummaryQualityScoresAlgorithms_IMRT} seem to indicate that the superiorized versions outperform their basic variants in both schemes. However, for the QPS problems, the t-tests for paired two-sample mean comparisons showed that there is, in fact, no statistically significant difference in the means (the p-values of the two-tailed tests were 0.148 for the level set scheme comparison between ls\_acc\_cspm and ls\_acc\_sup\_cspm, and 0.292 for the bisection scheme comparison between bis\_art3+ and bis\_sup\_art3+). Nevertheless, for our test cases, the superiorized versions of the level set scheme outperformed their basic counterparts in more cases: the superiorized version ls\_acc\_sup\_cspm produced a quality score at least as good as the basic version ls\_acc\_cspm in about 68\% of all cases, and in 64\% it actually obtained a strictly better score.
On the other hand, for the IMRT problems the superiorized versions clearly outperform the basic versions in terms of objective function scores. All statistical tests are significant at a level of 0.0013 (two-tailed tests for zero mean difference). This clearly shows a promising feature of superiorized algorithms: they are very often able to obtain better solutions, even when used within an optimization framework. The accelerated versions of the superiorized schemes, however, did not differ in quality from their unaccelerated versions.
However, the superiorized versions require more projections and objective evaluations than the basic versions (see Figures \ref{fig:SpeedupCompLsSupAccCspmVsLsAccCspmQPS} and \ref{fig:SpeedupCompLsSupCspmVsLsCspmIMRT}). In these charts, the horizontal axis denotes the factor by which the number of projections or objective function evaluations of the basic version has to be multiplied to equal the values of the superiorized version. An increase factor of 0 means that the indicated scheme needed exactly as many projections as the compared method. In many instances, the superiorized versions required more than 10 times the number of projections of the basic variant, leading to significantly higher computation times.
\begin{figure}
\caption{Factor of how much the complexity increases by using the superiorized version of CSPM in the level set scheme (QPS problems). }
\label{fig:SpeedupCompLsSupAccCspmVsLsAccCspmQPS}
\end{figure}
\begin{figure}
\caption{Factor of how much the complexity increases by using the superiorized version of CSPM in the level set scheme (IMRT problems).}
\label{fig:SpeedupCompLsSupCspmVsLsCspmIMRT}
\end{figure}
Yet, for the QPS problems, there is also some potential when it comes to the rate of decrease of the objective function, as shown in Figure \ref{fig:SpeedupObjDecLsSupAccCspmVsLsAccCspm}.
\begin{figure}
\caption{Factor of speedup of the objective function decrease by using the superiorized version of CSPM in the level set scheme for QPS problems. A factor of 0 indicates no difference, negative factors indicate that the superiorized version exhibited a slower decrease.}
\label{fig:SpeedupObjDecLsSupAccCspmVsLsAccCspm}
\end{figure}
For IMRT problems, this gain was only marginal: the rate of decrease of the superiorized versions is on average only about 0.5 times faster than that of the basic versions (note that a factor of 0 would indicate that they progress at the same rate).
Hence, the superiorized version of CSPM uses many more projections and evaluations. However, if these are cheap to compute, then the potential gain in objective function reduction in early iterations could make it the faster approach overall for some problems, provided the user is willing to stop the optimization prematurely for practical reasons.
\subsection{Which is the better optimization scheme?}
We compare the level set scheme using the accelerated CSPM and the bisection scheme with CSPM, as these are the most promising candidates for each optimization scheme given the analysis above. For the QPS problems, there is no statistically significant difference in the objective score between the two methods (the p-value of the two-tailed test is 0.389). However, ls\_acc\_cspm significantly outperforms the bisection scheme for IMRT problems, even when the 4 outliers of the 36 problems were removed (those which skewed the average quality score of the bisection scheme to the right). With a p-value of 0.007, ls\_acc\_cspm produces a better quality score than the equivalent bisection scheme.
Moreover, as Figure \ref{fig:ComplexityBisVsLs} shows for the QPS problems, on average the bisection scheme requires many more projections and objective evaluations. An intuitive explanation is that the bisection scheme may enter an infeasibility-detection stage several times, and each such stage requires many calculations before infeasibility can be concluded. In the level set scheme an infeasibility stage can occur at most once, and in the other stages feasibility is usually detected.
\begin{figure}
\caption{Factor of how much the complexity increases for QPS problems by using the bisection scheme over the level set scheme. `0' indicates no difference, and negative values indicate a decrease in complexity.}
\label{fig:ComplexityBisVsLs}
\end{figure}
For IMRT problems, the increase is less pronounced but consistent, reaching up to a factor of 4.
In terms of rate of objective decrease, the results are mixed. In fact, it seems that bisection can be expected to decrease the objective a little faster than the level set scheme. However, Figure \ref{fig:SpeedupObjDecBisCspmVsLsAccCspm} shows that for the QPS problems this does not happen very often.
\begin{figure}
\caption{Factor of speedup of the objective function decrease by using the bisection scheme over the level set scheme (QPS problems).}
\label{fig:SpeedupObjDecBisCspmVsLsAccCspm}
\end{figure}
For IMRT problems, the results are similar, although a speedup of about a factor of 2 occurred quite frequently.
The level set scheme seems to be the better optimization tool for IMRT problems in terms of quality and complexity. However, if one allows superiorization, then this is not always the case and the differences might be very minor; compare, for example, "ls\_sup\_cspm" and "bis\_sup\_cspm" in Tables \ref{table:SummaryQualityScoresAlgorithms_QPS} and \ref{table:SummaryQualityScoresAlgorithms_IMRT}. Overall, although both strategies obtain solutions of similar quality for the QPS problems, the level set scheme has a clear advantage over the bisection scheme when it comes to complexity.
\section{Concluding remarks and Further research} \label{sec:conclusion}
Projection methods are known for their computational efficiency and simplicity. This is the reason we decided to use a well-known reformulation of a convex optimization problem and apply projection methods within that general scheme. While at this point the convergence proof for the scheme is valid only when finitely convergent algorithms are used, numerical experiments show that generally convergent algorithms also produce good solutions when the stopping rule is chosen in an educated way. We believe that the mathematical justification of this observation relies on Remark \ref{remark:conv}; a rigorous proof is still under investigation.
Another direction we plan to investigate is based on the results of Yamagishi and Yamada \cite{yy08}, which show how to replace the subgradient projections by more efficient projections when additional knowledge, such as lower bounds, is available. In addition, we plan to study acceleration techniques for projection methods, for example the recent work of Pang \cite{Pang12,Pang13}. Another direction for investigation is the use of other types of projection methods, for example Bregman projections, see e.g., \cite{CZ97}.
\end{document} |
\begin{document}
\title{HS-integral and Eisenstein integral mixed Cayley graphs over abelian groups}
\begin{center}{\textbf{Abstract}}\end{center}
\noindent A mixed graph is called \emph{second-kind Hermitian integral} (or \emph{HS-integral}) if the eigenvalues of its Hermitian-adjacency matrix of second kind are integers. A mixed graph is called \emph{Eisenstein integral} if the eigenvalues of its (0, 1)-adjacency matrix are Eisenstein integers. Let $\Gamma$ be an abelian group. We characterize the set $S$ for which a mixed Cayley graph $\text{Cay}(\Gamma, S)$ is HS-integral. We also show that a mixed Cayley graph is Eisenstein integral if and only if it is HS-integral.
\vspace*{0.3cm} \noindent \textbf{Keywords.} Hermitian adjacency matrix of second kind, mixed Cayley graph; HS-integral mixed graph; Eisenstein integral mixed graph \\ \textbf{Mathematics Subject Classifications:} 05C50, 05C25
\section{Introduction}
A \emph{mixed graph} $G$ is a pair $(V(G),E(G))$, where $V(G)$ and $E(G)$ are the vertex set and the edge set of $G$, respectively. Here $E(G)\subseteq (V(G) \times V(G))\setminus \{(u,u)~|~u\in V(G)\}$. If $G$ is a mixed graph, then $(u,v)\in E(G)$ need not imply that $(v,u)\in E(G)$. An edge $(u,v)$ of a mixed graph $G$ is called \textit{undirected} if both $(u,v)$ and $(v,u)$ belong to $E(G)$. An edge $(u,v)$ of a mixed graph $G$ is called \textit{directed} if $(u,v)\in E(G)$ but $(v,u)\notin E(G)$. A mixed graph can have both undirected and directed edges. A mixed graph $G$ is said to be a \textit{simple graph} if all the edges of $G$ are undirected. A mixed graph $G$ is said to be an \textit{oriented graph} if all the edges of $G$ are directed.
For a mixed graph $G$ on $n$ vertices, its (0, 1)-\textit{adjacency matrix} and \textit{Hermitian-adjacency matrix of second kind} are denoted by $\mathcal{A}(G)=(a_{uv})_{n\times n}$ and $\mathcal{H}(G)=(h_{uv})_{n\times n}$, respectively, where
\[a_{uv} = \left\{ \begin{array}{rl}
1 &\mbox{ if }
(u,v)\in E \\
0 &\textnormal{ otherwise,} \end{array}\right. ~~~~~\text{ and }~~~~~~ h_{uv} = \left\{ \begin{array}{cl}
1 &\mbox{ if }
(u,v)\in E \textnormal{ and } (v,u)\in E \\ \frac{1+i\sqrt{3}}{2} & \mbox{ if } (u,v)\in E \textnormal{ and } (v,u)\not\in E \\
\frac{1-i\sqrt{3}}{2} & \mbox{ if } (u,v)\not\in E \textnormal{ and } (v,u)\in E\\
0 &\textnormal{ otherwise.} \end{array}\right.\]
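As a concrete illustration (a sketch of ours, not part of the paper), the two matrices can be built for a small mixed graph on three vertices with one undirected edge $(0,1)$ and one directed edge $(1,2)$; the NumPy encoding below is our own choice.

```python
import numpy as np

# Mixed graph on vertices {0,1,2}: (0,1) undirected, (1,2) directed.
E = {(0, 1), (1, 0), (1, 2)}
n = 3
omega6 = (1 + 1j * np.sqrt(3)) / 2        # (1 + i*sqrt(3)) / 2

A = np.zeros((n, n))                      # (0,1)-adjacency matrix
H = np.zeros((n, n), dtype=complex)       # Hermitian-adjacency matrix of second kind
for u in range(n):
    for v in range(n):
        if (u, v) in E:
            A[u, v] = 1
            H[u, v] = 1 if (v, u) in E else omega6
        elif (v, u) in E:
            H[u, v] = omega6.conjugate()  # (1 - i*sqrt(3)) / 2

assert np.allclose(H, H.conj().T)         # H is Hermitian ...
hs_eigs = np.linalg.eigvalsh(H)           # ... so the HS-eigenvalues are real
assert not np.allclose(A, A.T)            # A is non-symmetric: (1,2) is directed
```

The assertions confirm the remark made below: $\mathcal{H}(G)$ is Hermitian, while $\mathcal{A}(G)$ is non-symmetric as soon as a directed edge is present.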
The Hermitian-adjacency matrix of second kind was introduced by Bojan Mohar~\cite{mohar2020new}. Let $G$ be a mixed graph. By an \emph{HS-eigenvalue} of $G$, we mean an eigenvalue of $\mathcal{H}(G)$. By an \emph{eigenvalue} of $G$, we mean an eigenvalue of $\mathcal{A}(G)$. Similarly, the \emph{HS-spectrum} of $G$, denoted $Sp_H(G)$, is the multi-set of the HS-eigenvalues of $G$, and the \emph{spectrum} of $G$, denoted $Sp(G)$, is the multi-set of the eigenvalues of $G$. Note that the Hermitian-adjacency matrix of second kind of a mixed graph is a Hermitian matrix, and so its HS-eigenvalues are real numbers. However, if a mixed graph $G$ contains at least one directed edge, then $\mathcal{A}(G)$ is non-symmetric. Accordingly, the eigenvalues of $G$ need not be real numbers. The matrix obtained by replacing $\frac{1+i\sqrt{3}}{2}$ and $\frac{1-i\sqrt{3}}{2}$ by $i$ and $-i$, respectively, in $\mathcal{H}(G)$, is called the \emph{Hermitian adjacency} matrix of $G$. The Hermitian adjacency matrix of mixed graphs was introduced in~\cite{2017mixed, 2015mixed}.
A mixed graph is called \textit{H-integral} if the eigenvalues of its Hermitian adjacency matrix are integers. A mixed graph $G$ is said to be \textit{HS-integral} if all the HS-eigenvalues of $G$ are integers. Recall that complex numbers of the form $a+b\omega_3$, where $a,b\in \mathbb{Z}$ and $\omega_3=\frac{-1+i\sqrt{3}}{2}$, are called \emph{Eisenstein} integers. A mixed graph $G$ is said to be \textit{Eisenstein integral} if all the eigenvalues of $G$ are Eisenstein integers. An HS-integral simple graph is called an \emph{integral} graph. Note that $\mathcal{A}(G)=\mathcal{H}(G)$ for a simple graph $G$. Therefore, in the case of a simple graph $G$, the terms HS-eigenvalue, HS-spectrum and HS-integrality coincide with eigenvalue, spectrum and integrality, respectively.
Integrality of simple graphs has been extensively studied in the past. Integral graphs were first defined by Harary and Schwenk~\cite{harary1974graphs} in 1974, who also proposed a classification of integral graphs. See \cite{balinska2002survey} for a survey on integral graphs. Watanabe and Schwenk \cite{watanabe1979note,watanabe1979integral} proved several interesting results on integral trees in 1979. Csikvari \cite{csikvari2010integral} constructed integral trees with arbitrarily large diameters in 2010. Further research on integral trees can be found in \cite{brouwer2008integral,brouwer2008small,wang2000some, wang2002integral}. In $2009$, Ahmadi et al. \cite{ahmadi2009graphs} proved that only a fraction $2^{-\Omega (n)}$ of the graphs on $n$ vertices have an integral spectrum. Bussemaker et al. \cite{bussemaker1976there} proved that there are exactly $13$ connected cubic integral graphs. Stevanovi{\'c} \cite{stevanovic20034} studied the $4$-regular integral graphs avoiding $\pm3$ in the spectrum, and Lepovi{\'c} et al. \cite{lepovic2005there} proved that there are $93$ non-regular, bipartite integral graphs with maximum degree four.
Let $S$ be a subset, not containing the identity element, of a group $\Gamma$. The set $S$ is said to be \textit{symmetric} (resp. \textit{skew-symmetric}) if $S$ is closed under inverse (resp. $a^{-1} \not\in S$ for all $a\in S$). Define $\overline{S}= \{u\in S: u^{-1}\not\in S \}$. Clearly, $S\setminus \overline{S}$ is symmetric and $\overline{S}$ is skew-symmetric. The \textit{mixed Cayley graph} $G=\text{Cay}(\Gamma,S)$ is a mixed graph, where $V(G)=\Gamma$ and $E(G)=\{ (a,b): a^{-1}b\in S , a,b\in \Gamma\}$. If $S$ is symmetric then $G$ is a \textit{simple Cayley graph}. If $S$ is skew-symmetric then $G$ is an \textit{oriented Cayley graph}.
In 1982, Bridges and Mena \cite{bridges1982rational} gave a characterization of integral Cayley graphs over abelian groups. Later, the same characterization was rediscovered for cyclic groups by Wasin So \cite{2006integral}. In 2009, Abdollahi and Vatandoost \cite{abdollahi2009cayley} proved that there are exactly seven connected cubic integral Cayley graphs. In the same year, Klotz and Sander \cite{klotz2010integral} proved that if a Cayley graph $\text{Cay}(\Gamma,S)$ over an abelian group $\Gamma$ is integral, then $S$ belongs to the Boolean algebra $\mathbb{B}(\Gamma)$ generated by the subgroups of $\Gamma$. Moreover, they conjectured that the converse is also true, which was proved by Alperin and Peterson \cite{alperin2012integral}. In 2015, Ku et al. \cite{ku2015Cayley} proved that normal Cayley graphs over the symmetric groups are integral. In 2017, Lu et al. \cite{lu2018integral} gave a necessary and sufficient condition for the integrality of Cayley graphs over the dihedral group $D_n$. In particular, they completely determined all integral Cayley graphs over the dihedral group $D_p$ for a prime $p$. In 2019, Cheng et al. \cite{cheng2019integral} obtained several simple sufficient conditions for the integrality of Cayley graphs over the dicyclic group $T_{4n}= \langle a,b| a^{2n}=1, a^n=b^2,b^{-1}ab=a^{-1} \rangle $. In particular, they also completely determined all integral Cayley graphs over the dicyclic group $T_{4p}$ for a prime $p$. In 2014, Godsil \emph{et al.} \cite{godsil2014rationality} characterized integral normal Cayley graphs. Xu \emph{et al.} \cite{xu2011gaussian} and Li \cite{li2013circulant} characterized the set $S$ for which the mixed circulant graph $\text{Cay}(\mathbb{Z}_n, S)$ is Gaussian integral. In 2006, So \cite{2006integral} gave a characterization of integral circulant graphs.
In \cite{kadyan2021integralNormal}, the authors provide an alternative proof of the characterization obtained in~\cite{li2013circulant,xu2011gaussian}. H-integral mixed circulant graphs, H-integral mixed Cayley graphs over abelian groups, H-integral normal Cayley graphs and HS-integral mixed circulant graphs have been characterized in \cite{kadyan2021integral}, \cite{kadyan2021integralAbelian}, \cite{kadyan2021integralNormal} and \cite{kadyan2021Secintegral}, respectively.
Throughout this paper, we consider Cayley graphs over abelian groups. The paper is organized as follows. In Section~\ref{prelAbelian}, some preliminary concepts and results are discussed. In particular, we express the HS-eigenvalues of a mixed Cayley graph as a sum of HS-eigenvalues of a simple Cayley graph and an oriented Cayley graph. In Section~\ref{sec3}, we obtain a sufficient condition on the connection set for the HS-integrality of an oriented Cayley graph. In Section~\ref{sec4}, we first characterize HS-integrality of oriented Cayley graphs by proving the necessity of the condition obtained in Section~\ref{sec3}. After that, we extend this characterization to mixed Cayley graphs. In Section~\ref{sec5}, we prove that a mixed Cayley graph is Eisenstein integral if and only if it is HS-integral.
\section{Preliminaries}\label{prelAbelian}
A \textit{representation} of a finite group $\Gamma$ is a homomorphism $\rho : \Gamma \to GL(V)$, where $GL(V)$ is the group of automorphisms of a finite dimensional vector space $V$ over the complex field $\mathbb{C}$. The dimension of $V$ is called the \textit{degree} of $\rho$. Two representations $\rho_1$ and $\rho_2$ of $\Gamma$ on $V_1$ and $V_2$, respectively, are \textit{equivalent} if there is an isomorphism $T:V_1 \to V_2$ such that $T\rho_1(g)=\rho_2(g)T$ for all $g\in \Gamma$.
Let $\rho : \Gamma \to GL(V)$ be a representation. The \textit{character} $\chi_{\rho}: \Gamma \to \mathbb{C}$ of $\rho$ is defined by setting $\chi_{\rho}(g)=Tr(\rho(g))$ for $g\in \Gamma$, where $Tr(\rho(g))$ is the trace of the representation matrix of $\rho(g)$. By the degree of $\chi_{\rho}$ we mean the degree of $\rho$, which is simply $\chi_{\rho}(1)$. If $W$ is a $\rho(g)$-invariant subspace of $V$ for each $g\in \Gamma$, then we call $W$ a $\rho(\Gamma)$-invariant subspace of $V$. If the only $\rho(\Gamma)$-invariant subspaces of $V$ are $\{ 0\}$ and $V$, we call $\rho$ an \textit{irreducible representation} of $\Gamma$, and the corresponding character $\chi_{\rho}$ an \textit{irreducible character} of $\Gamma$.
For a group $\Gamma$, we denote by $\text{IRR}(\Gamma)$ and $\text{Irr}(\Gamma)$ the complete set of non-equivalent irreducible representations of $\Gamma$ and the complete set of non-equivalent irreducible characters of $\Gamma$, respectively.
Throughout this paper, we consider $\Gamma$ to be an abelian group of order $n$, and let $S$ be a subset of $\Gamma$ with $0\not\in S$, where $0$ is the additive identity of $\Gamma$. By the fundamental theorem of finite abelian groups, $\Gamma$ is isomorphic to a direct product of cyclic groups of prime power order, $i.e.$
$$\Gamma\cong \mathbb{Z}_{n_1} \otimes \cdots \otimes \mathbb{Z}_{n_k},$$
where $n=n_1 \cdots n_k$, and $n_j$ is a power of a prime number for each $j=1,...,k$. We henceforth identify $\Gamma$ with $\mathbb{Z}_{n_1} \otimes \cdots \otimes \mathbb{Z}_{n_k}$, and regard the elements $x\in \Gamma$ as elements of the cartesian product $\mathbb{Z}_{n_1} \otimes \cdots \otimes \mathbb{Z}_{n_k}$, $i.e.$
$$x=(x_1,x_2,...,x_k), \mbox{ where } x_j \in \mathbb{Z}_{n_j} \mbox{ for all } 1\leq j \leq k. $$ Addition in $\Gamma$ is done coordinate-wise modulo $n_j$. For a positive integer $k$ and $a\in \Gamma$, we denote by $ka$ or $a^k$ the $k$-fold sum of $a$ to itself, $(-k)a=k(-a)$, $0a=0$, and inverse of $a$ by $-a$.
\begin{lema}\label{lemma1}\cite{steinberg2009representation} Let $\mathbb{Z}_n=\{ 0,1,...,n-1\}$ be a cyclic group of order $n$. Then $\text{IRR}(\mathbb{Z}_n)=\{ \phi_k: 0\leq k \leq n-1\}$, where $\phi_k(j)=\omega_n^{jk}$ for all $0\leq j,k \leq n-1$, and $\omega_n=\exp(\frac{2\pi i}{n})$. \end{lema}
\begin{lema}\label{lemma2}\cite{steinberg2009representation} Let $\Gamma_1$,$\Gamma_2$ be abelian groups of order $m,n$, respectively. Let $\text{IRR}(\Gamma_1)=\{ \phi_1,...,\phi_m\}$, and $\text{IRR}(\Gamma_2)=\{ \rho_1,...,\rho_n\}$. Then $$\text{IRR}(\Gamma_1 \times \Gamma_2)=\{ \psi_{kl} : 1\leq k \leq m, 1\leq l \leq n \},$$ where $\psi_{kl}: \Gamma_1 \times \Gamma_2 \to \mathbb{C}^* \mbox{ and } \psi_{kl}(g_1,g_2)=\phi_k(g_1)\rho_l(g_2)$ for all $g_1\in \Gamma_1, g_2\in \Gamma_2$. \end{lema}
Consider $\Gamma = \mathbb{Z}_{n_1}\times \mathbb{Z}_{n_2}\times ...\times \mathbb{Z}_{n_k}$. By Lemma \ref{lemma1} and Lemma \ref{lemma2}, $\text{IRR}(\Gamma)=\{ \psi_{\alpha}: \alpha \in \Gamma\}$, where \begin{equation} \psi_{\alpha}(x)=\prod_{j=1}^{k}\omega_{n_j}^{\alpha_j x_j} \textnormal{ for all $\alpha=( \alpha_1,...,\alpha_k),x=(x_1,...,x_k) \in \Gamma$},\label{character} \end{equation} and $\omega_{n_j}=\exp\left(\frac{2\pi i}{n_j}\right)$. Since $\Gamma$ is an abelian group, every irreducible representation of $\Gamma$ is $1$-dimensional and thus can be identified with its character. Hence $\text{IRR}(\Gamma)=\text{Irr}(\Gamma)$. For $x\in \Gamma$, let $\text{ord}(x)$ denote the order of $x$. The following lemma can be easily proved.
\begin{lema}\label{Basic} Let $\Gamma$ be an abelian group and $\text{Irr}(\Gamma)=\{\psi_{\alpha} : \alpha \in \Gamma \}$. Then the following statements are true. \begin{enumerate}[label=(\roman*)] \item $\psi_{\alpha}(x)=\psi_x({\alpha})$ for all $x,\alpha \in \Gamma$. \item $(\psi_{\alpha}(x))^{\text{ord}(x)}=(\psi_{\alpha}(x))^{\text{ord}(\alpha)}=1$ for all $x,\alpha \in \Gamma$. \end{enumerate} \end{lema}
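Both parts of Lemma \ref{Basic} can be verified numerically for a small case; the sketch below (our own illustration, not part of the paper) takes $\Gamma=\mathbb{Z}_4\otimes\mathbb{Z}_3$ and the characters of (\ref{character}).

```python
import numpy as np
from itertools import product
from math import gcd, lcm

ns = (4, 3)  # Gamma = Z_4 x Z_3

def psi(alpha, x):
    # psi_alpha(x) = prod_j omega_{n_j}^{alpha_j x_j}
    return np.prod([np.exp(2j * np.pi * a * xi / n)
                    for a, xi, n in zip(alpha, x, ns)])

def order(x):
    # ord(x) is the lcm of the coordinate orders n_j / gcd(n_j, x_j)
    return lcm(*(n // gcd(n, xi) for n, xi in zip(ns, x)))

for alpha in product(range(4), range(3)):
    for x in product(range(4), range(3)):
        assert np.isclose(psi(alpha, x), psi(x, alpha))     # part (i)
        assert np.isclose(psi(alpha, x) ** order(x), 1)     # part (ii)
```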
Let $f : \Gamma \to \mathbb{C}$ be a function. The \textit{Cayley color digraph} of $\Gamma$ with \textit{connection function} $f$, denoted by $\text{Cay}(\Gamma, f)$, is defined to be the directed graph with vertex set $\Gamma$ and arc set $\{ (x,y): x,y \in \Gamma\}$ such that each arc $(x,y)$ is colored by $f(x^{-1}y)$. The \textit{adjacency matrix} of $\text{Cay}(\Gamma, f)$ is defined to be the matrix whose rows and columns are indexed by the elements of $\Gamma$, and the $(x,y)$-entry is equal to $f(x^{-1}y)$. The eigenvalues of $\text{Cay}( \Gamma, f)$ are simply the eigenvalues of its adjacency matrix.
\begin{theorem}\cite{babai1979spectra}\label{EigNorColCayMix} Let $\Gamma$ be a finite abelian group and $\text{Irr}(\Gamma)=\{\psi_{\alpha} : \alpha \in \Gamma \}$. Then the spectrum of the Cayley color digraph $\text{Cay}(\Gamma, f)$ is $\{ \gamma_\alpha : \alpha\in \Gamma \},$ where $$\gamma_{\alpha} = \sum_{y\in \Gamma} f(y)\psi_{\alpha}(y) \hspace{0.2cm} \textnormal{ for all } \alpha \in \Gamma.$$ \end{theorem}
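Theorem \ref{EigNorColCayMix} is easy to check numerically; the sketch below (ours, with an arbitrary random connection function) compares the eigenvalues of the adjacency matrix of $\text{Cay}(\mathbb{Z}_6, f)$ with the character sums $\gamma_\alpha$.

```python
import numpy as np

n = 6
rng = np.random.default_rng(0)
f = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # f : Z_6 -> C

# Adjacency matrix of Cay(Z_6, f): the (x, y)-entry is f(y - x mod n).
M = np.array([[f[(y - x) % n] for y in range(n)] for x in range(n)])

# Eigenvalues predicted by the theorem: gamma_alpha = sum_y f(y) omega^{alpha*y}.
omega = np.exp(2j * np.pi / n)
gamma = np.array([sum(f[y] * omega ** (a * y) for y in range(n))
                  for a in range(n)])

eigs = np.linalg.eigvals(M)
for g in gamma:                     # every gamma_alpha occurs as an eigenvalue
    assert np.min(np.abs(eigs - g)) < 1e-8
```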
For a subset $S$ of an abelian group $\Gamma$, let $S^{-1}=\{s^{-1}~:~s\in S\}$.
\begin{lema}\cite{babai1979spectra}\label{imcgoa3} Let $\Gamma$ be an abelian group and $\text{Irr}(\Gamma)=\{\psi_{\alpha} : \alpha \in \Gamma \}$. Then the HS-spectrum of the mixed Cayley graph $\text{Cay}(\Gamma, S)$ is $\{ \gamma_\alpha : \alpha \in \Gamma \}$, where $\gamma_{\alpha}=\lambda_{\alpha}+\mu_{\alpha}$ and $$\lambda_{\alpha}=\sum_{s\in S\setminus \overline{S}} \psi_{\alpha}(s),\hspace{0.2cm} \mu_{\alpha}=\sum_{s\in\overline{S}}\left( \omega_6 \psi_{\alpha}(s)+ \omega_6^5\psi_{\alpha}(-s)\right) \textnormal{ for all } \alpha \in \Gamma.$$ \end{lema} \begin{proof} Define $f_S: \Gamma \to \{0,1,\omega_6,\omega_6^5\}$ such that $$f_S(s)= \left\{ \begin{array}{rl}
1 & \mbox{if } s\in S \setminus \overline{S} \\
\omega_6 & \mbox{if } s\in \overline{S}\\
\omega_6^5 & \mbox{if } s\in \overline{S}^{-1}\\
0 & \mbox{otherwise}.
\end{array}\right.$$ The adjacency matrix of the Cayley color digraph $\text{Cay}(\Gamma, f_S)$ agrees with the Hermitian-adjacency matrix of second kind of the mixed Cayley graph $\text{Cay}(\Gamma, S)$. Thus the result follows from Theorem~\ref{EigNorColCayMix}. \end{proof}
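For example, Lemma \ref{imcgoa3} can be checked for the mixed circulant $\text{Cay}(\mathbb{Z}_7, S)$ with $S=\{1,2,6\}$, so that $S\setminus\overline{S}=\{1,6\}$ and $\overline{S}=\{2\}$; the numerical sketch below is our own.

```python
import numpy as np

n, S = 7, {1, 2, 6}
Sbar = {s for s in S if (-s) % n not in S}   # skew-symmetric part: {2}
Ssym = S - Sbar                              # symmetric part: {1, 6}

omega6 = np.exp(2j * np.pi / 6)
H = np.zeros((n, n), dtype=complex)          # Hermitian-adjacency matrix, 2nd kind
for u in range(n):
    for v in range(n):
        d = (v - u) % n
        if d in Ssym:
            H[u, v] = 1
        elif d in Sbar:
            H[u, v] = omega6
        elif (-d) % n in Sbar:
            H[u, v] = omega6 ** 5

# gamma_alpha = lambda_alpha + mu_alpha, as in the lemma
omega = np.exp(2j * np.pi / n)
gamma = [sum(omega ** (a * s) for s in Ssym)
         + sum(omega6 * omega ** (a * s) + omega6 ** 5 * omega ** (-a * s)
               for s in Sbar)
         for a in range(n)]

assert np.allclose(np.sort(np.linalg.eigvalsh(H)), np.sort(np.real(gamma)))
```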
The next two corollaries are special cases of Lemma~\ref{imcgoa3}, obtained by taking $S$ symmetric and $S$ skew-symmetric, respectively.
\begin{corollary}\cite{klotz2010integral}\label{simpleAbelianEigBabai} Let $\Gamma$ be an abelian group and $\text{Irr}(\Gamma)=\{\psi_{\alpha} : \alpha \in \Gamma \}$. Then the spectrum of the Cayley graph $\text{Cay}(\Gamma, S)$ is $\{ \lambda_\alpha : \alpha \in \Gamma \}$, where $\lambda_\alpha=\lambda_{-\alpha}$ and $$\lambda_{\alpha}=\sum_{s\in S} \psi_{\alpha}(s) \mbox{ for all } \alpha \in \Gamma.$$ \end{corollary}
\begin{corollary}\label{OriEig} Let $\Gamma$ be an abelian group and $\text{Irr}(\Gamma)=\{\psi_{\alpha} : \alpha \in \Gamma \}$. Then the spectrum of the oriented Cayley graph $\text{Cay}(\Gamma, S)$ is $\{ \mu_\alpha : \alpha \in \Gamma \}$, where $$\mu_{\alpha}=\sum_{s\in S}\left(\omega_6 \psi_{\alpha}(s)+ \omega_6^5 \psi_{\alpha}(-s)\right) \mbox{ for all } \alpha \in \Gamma.$$ \end{corollary}
Let $n\geq 2$ be a fixed positive integer. Define $G_n(d)=\{k: 1\leq k\leq n-1, \gcd(k,n)=d \}$. It is clear that $G_n(d)=dG_{\frac{n}{d}}(1)$. Alperin and Peterson \cite{alperin2012integral} considered a Boolean algebra generated by a class of subgroups of a group in order to determine the integrality of Cayley graphs over abelian groups. Suppose $\Gamma$ is a finite group, and $\mathcal{F}_{\Gamma}$ is the family of all subgroups of $\Gamma$. The Boolean algebra $\mathbb{B}(\Gamma)$ generated by $\mathcal{F}_{\Gamma}$ is the set whose elements are obtained by arbitrary finite intersections, unions, and complements of the elements in the family $\mathcal{F}_{\Gamma}$. The minimal non-empty elements of this algebra are called \textit{atoms}. Thus each element of $\mathbb{B}(\Gamma)$ is the union of some atoms. Consider the equivalence relation $\sim$ on $\Gamma$ such that $x\sim y$ if and only if $y=x^k$ for some $k\in G_m(1)$, where $m=\text{ord}(x)$.
\begin{lema}\cite{alperin2012integral} The equivalence classes of $\sim$ are the atoms of $\mathbb{B}(\Gamma)$. \end{lema}
For $x\in \Gamma$, let $[x]$ denote the equivalence class of $x$ with respect to the relation $\sim$. Also, let $\langle x \rangle$ denote the cyclic group generated by $x$.
\begin{lema}\label{atomsboolean} \cite{alperin2012integral} The atoms of the Boolean algebra $\mathbb{B}(\Gamma)$ are the sets $[x]=\{ y: \langle y \rangle = \langle x \rangle \}$. \end{lema}
By Lemma \ref{atomsboolean}, each element of $\mathbb{B}(\Gamma)$ is a union of some sets of the form $[x]=\{ y: \langle y \rangle = \langle x \rangle \}$. Thus, for all $S\in \mathbb{B}(\Gamma)$, we have $S=[x_1]\cup...\cup [x_k]$ for some $x_1,...,x_k\in \Gamma$. The next result provides a complete characterization of integral Cayley graphs over an abelian group $\Gamma$ in terms of the atoms of $\mathbb{B}(\Gamma)$.
\begin{theorem}\label{Cayint} (\cite{alperin2012integral}, \cite{bridges1982rational}) Let $\Gamma$ be an abelian group. The Cayley graph $\text{Cay}(\Gamma, S)$ is integral if and only if $S\in \mathbb{B}(\Gamma)$. \end{theorem}
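Theorem \ref{Cayint} can be verified by brute force for a small group; the sketch below (our own choice of $\Gamma=\mathbb{Z}_8$) enumerates all symmetric connection sets and compares integrality of the spectrum with membership in $\mathbb{B}(\mathbb{Z}_8)$, whose atoms are $[1]=\{1,3,5,7\}$, $[2]=\{2,6\}$ and $[4]=\{4\}$.

```python
import numpy as np
from itertools import chain, combinations
from math import gcd

n = 8
divisors = [d for d in range(1, n) if n % d == 0]           # 1, 2, 4
atoms = {d: frozenset(y for y in range(1, n) if gcd(y, n) == d)
         for d in divisors}
# members of B(Z_8) avoiding 0: all unions of atoms
boolean_sets = {frozenset().union(*(atoms[d] for d in sub))
                for sub in chain.from_iterable(
                    combinations(divisors, r)
                    for r in range(len(divisors) + 1))}

def is_integral(S):
    A = np.array([[1.0 if (v - u) % n in S else 0.0
                   for v in range(n)] for u in range(n)])
    eigs = np.linalg.eigvalsh(A)
    return bool(np.allclose(eigs, np.round(eigs)))

for bits in range(2 ** (n - 1)):
    S = {s + 1 for s in range(n - 1) if bits >> s & 1}
    if all((-s) % n in S for s in S):                       # symmetric S only
        assert is_integral(S) == (frozenset(S) in boolean_sets)
```

For instance, $S=\{2,6\}$ (an atom) gives an integral spectrum, while $S=\{1,7\}$ (a proper subset of the atom $[1]$) does not.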
Define $\Gamma(3)$ to be the set of all $x\in \Gamma$ satisfying $\text{ord}(x)\equiv 0 \pmod 3$. For all $x\in \Gamma(3)$ and $r\in \{0,1,2\}$, define $$M_r(x):=\{x^k: 1\leq k \leq \text{ord}(x) , k \equiv r \Mod 3 \}.$$
For all $a\in \Gamma$ and $S\subseteq \Gamma$, define $a+S:= \{ a+s: s\in S\}$ and $-S:=\{ -s: s\in S \}$. Note that $-s$ denotes the inverse of $s$, that is $-s=s^{m-1}$, where $m=\text{ord}(s)$.
\begin{lema} Let $\Gamma$ be an abelian group and $x\in \Gamma(3)$. Then the following statements are true. \begin{enumerate}[label=(\roman*)] \item $\bigcup\limits_{r=0}^{2}M_r(x)= \langle x \rangle$. \item Both $M_1(x)$ and $M_2(x)$ are skew-symmetric subsets of $\Gamma$. \item $-M_1(x)=M_2(x)$ and $-M_2(x)=M_1(x)$. \item $a+M_1(x)=M_1(x)$ and $a+M_2(x)=M_2(x)$ for all $a\in M_0(x)$. \end{enumerate} \end{lema} \begin{proof} \begin{enumerate}[label=(\roman*)]
\item It follows from the definitions of $M_r(x)$ and $\langle x \rangle$.
\item Let $\text{ord}(x)=m$. If $x^k\in M_1(x)$ then $-x^k=x^{m-k} \not\in M_1(x)$, as $k\equiv 1 \pmod 3$ gives $m-k\equiv 2 \pmod 3$. Thus $M_1(x)$ is a skew-symmetric subset of $\Gamma$. Similarly, $M_2(x)$ is also a skew-symmetric subset of $\Gamma$.
\item Let $\text{ord}(x)=m$. As $k\equiv 1 \pmod 3$ if and only if $m-k\equiv 2 \pmod 3$, and $-x^k=x^{m-k}$, we get $-M_1(x)=M_2(x)$ and $-M_2(x)=M_1(x)$.
\item Let $a\in M_0(x)$ and $y\in a+M_1(x)$. Then $a=x^{k_1}$ and $y=x^{k_1}+x^{k_2}=x^{k_1+k_2}$, where $k_1\equiv 0 \pmod 3$ and $k_2 \equiv 1 \pmod 3$. Since $k_1+k_2\equiv 1 \pmod 3$, we have $y\in M_1(x)$ implying that $a+M_1(x)\subseteq M_1(x)$. Hence $a+M_1(x)=M_1(x)$. Similarly, $a+M_2(x)=M_2(x)$ for all $a\in M_0(x)$.\qedhere \end{enumerate} \end{proof}
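The four parts of the lemma can be checked directly on a small example; the sketch below (ours) takes $\Gamma=\mathbb{Z}_9$ and $x=1$, written additively so that $x^k$ becomes $kx \bmod 9$.

```python
m = 9   # ord(x) for x = 1 in Z_9, so Gamma(3) contains x
x = 1
M = {r: {(k * x) % m for k in range(1, m + 1) if k % 3 == r}
     for r in range(3)}

assert M[0] | M[1] | M[2] == set(range(m))                     # (i): <x> = Z_9
assert all((-s) % m not in M[1] for s in M[1])                 # (ii): skew-symmetric
assert all((-s) % m not in M[2] for s in M[2])
assert {(-s) % m for s in M[1]} == M[2]                        # (iii): -M_1 = M_2
assert all({(a + s) % m for s in M[1]} == M[1] for a in M[0])  # (iv)
```

Here $M_0(x)=\{0,3,6\}$, $M_1(x)=\{1,4,7\}$ and $M_2(x)=\{2,5,8\}$.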
Let $m \equiv 0 \Mod 3$. For $r\in \{1,2\}$ and $g \in \mathbb{Z}$, define the following: \begin{align*} & G_{m,3}^r(1)=\{ k: 1\leq k \leq m-1 , \gcd(k,m )= 1,k\equiv r \Mod 3 \},\\ & D_{g,3}= \{ k: k \text{ divides } g, k \not\equiv 0 \Mod 3\},~\text{ and}\\ & D_{g,3}^r=\{ k: k \text{ divides } g, k \equiv r \Mod 3\} . \end{align*} It is clear that $D_{g,3}=D_{g,3}^1 \cup D_{g,3}^2$. Define an equivalence relation $\approx$ on $\Gamma(3)$ such that $x \approx y$ if and only if $y=x^k$ for some $k\in G_{m,3}^1(1)$, where $m=\text{ord}(x)$. Observe that if $x,y\in \Gamma(3)$ and $x \approx y$ then $x \sim y$, but the converse need not be true. For example, consider $x=5\pmod {12}$, $y=7\pmod {12}$ in $\mathbb{Z}_{12}$. Here $x,y\in \mathbb{Z}_{12}(3)$ and $x \sim y$ but $x \not\approx y$. For $x\in \Gamma(3)$, let $\langle\!\langle x \rangle\!\rangle$ denote the equivalence class of $x$ with respect to the relation $\approx$.
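The $\mathbb{Z}_{12}$ example can be reproduced computationally; in the sketch below (our own), the group is written additively, so $x^k$ becomes $kx \bmod 12$.

```python
from math import gcd

m = 12
G1 = [k for k in range(1, m) if gcd(k, m) == 1]            # G_12(1) = {1,5,7,11}
G1_3 = {r: [k for k in G1 if k % 3 == r] for r in (1, 2)}  # G_{12,3}^r(1)

x, y = 5, 7
assert y in {(k * x) % m for k in G1}           # x ~ y   (via k = 11)
assert y not in {(k * x) % m for k in G1_3[1]}  # but x is not ~= related to y
```

Indeed $7 \equiv 11\cdot 5 \pmod{12}$ with $11\in G_{12}(1)$, but $11\equiv 2 \pmod 3$, so $11\notin G_{12,3}^1(1)$.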
\begin{lema}\label{lemanecc} Let $\Gamma$ be an abelian group, $x\in \Gamma(3)$ and $m=\text{ord}(x)$. Then the following are true. \begin{enumerate}[label=(\roman*)] \item $\langle\!\langle x \rangle\!\rangle= \{ x^k: k \in G_{m,3}^1(1) \}$. \item $\langle\!\langle -x \rangle\!\rangle= \{ x^k: k \in G_{m,3}^2(1) \}$. \item $\langle\!\langle x \rangle\!\rangle \cap \langle\!\langle -x \rangle\!\rangle=\emptyset$. \item $[x]=\langle\!\langle x \rangle\!\rangle \cup \langle\!\langle -x \rangle\!\rangle$. \end{enumerate} \end{lema} \begin{proof} \begin{enumerate}[label=(\roman*)] \item Let $y\in \langle\!\langle x \rangle\!\rangle$. Then $x \approx y$, and so $\text{ord}(x)=\text{ord}(y)=m$ and there exists $k\in G_{m,3}^1(1)$ such that $y=x^k$. Thus $\langle\!\langle x \rangle\!\rangle\subseteq \{ x^k: k \in G_{m,3}^1(1) \}$. On the other hand, let $z=x^k$ for some $k\in G_{m,3}^1(1)$. Then $\text{ord}(x)=\text{ord}(z)$ and $k\equiv 1 \pmod 3$, and so $x \approx z$. Thus $ \{ x^k: k \in G_{m,3}^1(1) \} \subseteq \langle\!\langle x \rangle\!\rangle$. \item Note that $-x=x^{m-1}$ and $m-1\equiv 2 \pmod 3$. By Part $(i)$, \begin{align*} \langle\!\langle -x \rangle\!\rangle = \{ (-x)^k: k \in G_{m,3}^1(1) \} = \{ x^{(m-1)k}: k \in G_{m,3}^1(1) \}&= \{ x^{-k}: k \in G_{m,3}^1(1) \}\\ &= \{ x^{k}: k \in G_{m,3}^2(1) \}. \end{align*} \item Since $G_{m,3}^1(1)\cap G_{m,3}^2(1)=\emptyset$, so by Part $(i)$ and Part $(ii)$, $\langle\!\langle x \rangle\!\rangle \cap \langle\!\langle -x \rangle\!\rangle=\emptyset$ holds. \item Since $[x]= \{ x^k: k \in G_m(1) \}$ and $G_m(1)$ is a disjoint union of $G_{m,3}^1(1)$ and $ G_{m,3}^2(1)$, by Part $(i)$ and Part $(ii)$, $[x]=\langle\!\langle x \rangle\!\rangle \cup \langle\!\langle -x \rangle\!\rangle$ holds. \qedhere \end{enumerate} \end{proof}
\begin{lema}\label{imcgoa4} Let $\Gamma$ be an abelian group, $x\in \Gamma(3)$, $m=\text{ord}(x)$ and $g=\frac{m}{3}$. Then the following are true. \begin{enumerate}[label=(\roman*)] \item $M_1(x) \cup M_2(x)=\bigcup\limits_{h\in D_{g,3}} [x^h] $. \item $M_1(x)= \bigcup\limits_{h\in D_{g,3}^1} \langle\!\langle x^h \rangle\!\rangle \cup \bigcup\limits_{h\in D_{g,3}^2} \langle\!\langle -x^h \rangle\!\rangle$. \item $M_2(x)=\bigcup\limits_{h\in D_{g,3}^1} \langle\!\langle -x^h \rangle\!\rangle \cup \bigcup\limits_{h\in D_{g,3}^2} \langle\!\langle x^h \rangle\!\rangle $. \end{enumerate} \end{lema} \begin{proof} \begin{enumerate}[label=(\roman*)] \item Let $x^k \in M_1(x) \cup M_2(x)$, where $k \equiv 1 \text{ or } 2 \pmod 3$. To show that $x^k \in \bigcup\limits_{h\in D_{g,3}} [x^h]$, it is enough to show $x^k \sim x^h$ for some $h \in D_{g,3}$. Let $h=\gcd (k,g) \in D_{g,3}$. Since $m=3g$ and $3\nmid k$, we have $\gcd(m,k)=\gcd(g,k)$, and so $$\text{ord}(x^k)=\frac{m}{\gcd(m,k)}=\frac{m}{\gcd(g,k)}=\frac{m}{h}=\text{ord}(x^h).$$ Also, as $h=\gcd(k,m)$, we have $\langle x^k \rangle = \langle x^h \rangle$, and so $x^k=x^{hj}$ for some $j\in G_q(1)$, where $q=\text{ord}(x^h)=\frac{m}{h}$. Thus $x^k\sim x^h$, where $h=\gcd (k,g) \in D_{g,3}$. Conversely, let $z\in \bigcup\limits_{h\in D_{g,3}} [x^h]$. Then there exists $h\in D_{g,3}$ such that $z=x^{hj}$, where $j\in G_q(1)$ and $q= \frac{m}{\gcd(m,h)}$. Now $h\in D_{g,3}$ and $q\equiv 0\pmod 3$ imply that $hj\equiv 1 \text{ or } 2 \pmod 3$, and so $\bigcup\limits_{h\in D_{g,3}} [x^h] \subseteq M_1(x) \cup M_2(x)$. Hence $M_1(x) \cup M_2(x)=\bigcup\limits_{h\in D_{g,3}} [x^h] $.
\item Let $x^k \in M_1(x)$, where $k\equiv 1 \pmod 3$. By Part $(i)$, there exists $h\in D_{g,3}$ and $j\in G_q(1)$ such that $x^k=x^{hj}$, where $q=\frac{m}{\gcd(m,h)}$. Note that $k=jh$. If $h\equiv 1 \pmod 3$ then $j\in G_{q,3}^1(1)$, otherwise $j\in G_{q,3}^2(1)$. Thus using parts $(i)$ and $(ii)$ of Lemma \ref{lemanecc}, if $h\equiv 1 \pmod 3$ then $x^k \approx x^h$, otherwise $x^k \approx -x^h$. Hence $M_1(x) \subseteq \bigcup\limits_{h\in D_{g,3}^1} \langle\!\langle x^h \rangle\!\rangle \cup \bigcup\limits_{h\in D_{g,3}^2} \langle\!\langle -x^h \rangle\!\rangle$. Conversely, assume that $z\in \bigcup\limits_{h\in D_{g,3}^1} \langle\!\langle x^h \rangle\!\rangle \cup \bigcup\limits_{h\in D_{g,3}^2} \langle\!\langle -x^h \rangle\!\rangle$. This gives $z\in \langle\!\langle x^h \rangle\!\rangle$ for an $h\in D_{g,3}^1$ or $z\in \langle\!\langle -x^h \rangle\!\rangle$ for an $h\in D_{g,3}^2$. In the first case, by part $(i)$ of Lemma \ref{lemanecc}, there exists $j\in G_{q,3}^1(1)$ with $q=\frac{m}{\gcd(m,h)}$ such that $z = x^{hj}$. Similarly, for the second case, by part $(ii)$ of Lemma \ref{lemanecc}, there exists $j\in G_{q,3}^2(1)$ with $q=\frac{m}{\gcd(m,h)}$ such that $z = x^{hj}$. In both the cases, $hj \equiv 1 \pmod 3$. Thus $z\in M_1(x)$.
\item The proof is similar to Part $(ii)$. \qedhere \end{enumerate} \end{proof}
The \textit{cyclotomic polynomial} $\Phi_m(x)$ is the monic polynomial whose zeros are the primitive $m^{th}$ roots of unity. That is,
$$\Phi_m(x)= \prod_{a\in G_m(1)}(x-\omega_m^a).$$
Clearly, the degree of $\Phi_m(x)$ is $\varphi(m)$, where $\varphi$ denotes the Euler $\varphi$-function. It is well known that the cyclotomic polynomial $\Phi_m(x)$ is monic and irreducible in $\mathbb{Z}[x]$. See \cite{numbertheory} for more details on cyclotomic polynomials.
The polynomial $\Phi_m(x)$ is irreducible over $\mathbb{Q}(\omega_3)$ if and only if $[\mathbb{Q}(\omega_3,\omega_m) : \mathbb{Q}(\omega_3)]= \varphi(m)$. Also, $ \mathbb{Q}(\omega_m)$ does not contain the number $\omega_3$ if and only if $m\not\equiv 0 \Mod 3$. Thus, if $m\not\equiv 0 \Mod 3$ then $[\mathbb{Q}(\omega_3,\omega_m):\mathbb{Q}(\omega_m) ]=2=[\mathbb{Q}(\omega_3) : \mathbb{Q}]$, and therefore
$$[\mathbb{Q}(\omega_3,\omega_m) : \mathbb{Q}(\omega_3)]=\frac{[\mathbb{Q}(\omega_3,\omega_m) : \mathbb{Q}(\omega_m)] \times [\mathbb{Q}(\omega_m) : \mathbb{Q}]}{ [\mathbb{Q}(\omega_3) : \mathbb{Q}]}= [\mathbb{Q}(\omega_m) : \mathbb{Q}]= \varphi(m).$$ Further, if $m\equiv 0 \Mod 3$ then $ \mathbb{Q}(\omega_3,\omega_m)= \mathbb{Q}(\omega_m)$, and so $$[\mathbb{Q}(\omega_3,\omega_m) : \mathbb{Q}(\omega_3)] = \frac{[\mathbb{Q}(\omega_3,\omega_m) : \mathbb{Q}]}{[\mathbb{Q}(\omega_3) : \mathbb{Q}]}=\frac{\varphi(m)}{2}.$$ Note that $\mathbb{Q}(\omega_3)=\mathbb{Q}(\omega_6)=\mathbb{Q}(i\sqrt{3})$. Therefore $\Phi_m(x)$ is irreducible over $\mathbb{Q}(\omega_3), \mathbb{Q}(\omega_6)$ or $\mathbb{Q}(i\sqrt{3})$ if and only if $m\not\equiv 0 \Mod 3$.
Let $m\equiv 0 \Mod 3$. Observe that $G_{m}(1)$ is a disjoint union of $G_{m,3}^1(1)$ and $G_{m,3}^2(1)$. Define $$\Phi_{m,3}^{1}(x)= \prod_{a\in G_{m,3}^1(1)}(x-\omega_m^a)~~ \textnormal{ and } ~~\Phi_{m,3}^2(x)= \prod_{a\in G_{m,3}^2(1)}(x-\omega_m^a).$$ It is clear from the definition that $\Phi_m(x)=\Phi_{m,3}^1(x)\Phi_{m,3}^2(x)$.
\begin{theorem}\cite{kadyan2021Secintegral}
Let $m\equiv 0\Mod 3$. Then $\Phi_{m,3}^1(x)$ and $\Phi_{m,3}^2(x)$ are irreducible monic polynomials in $\mathbb{Q}(\omega_3)[x]$ of degree $\frac{\varphi(m)}{2}$.
\end{theorem}
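A quick numerical check of this factorization (a sketch of ours) for $m=9$: here $\varphi(9)=6$, so the two factors have degree $3$; in fact $\Phi_{9,3}^1(x)=x^3-\omega_3$, with coefficients in $\mathbb{Q}(\omega_3)$ but not in $\mathbb{Q}$.

```python
import numpy as np
from math import gcd

m = 9
w = np.exp(2j * np.pi / m)
G1 = [a for a in range(1, m) if gcd(a, m) == 1]   # G_9(1) = {1,2,4,5,7,8}
P1 = np.poly([w ** a for a in G1 if a % 3 == 1])  # Phi_{9,3}^1, roots w, w^4, w^7
P2 = np.poly([w ** a for a in G1 if a % 3 == 2])  # Phi_{9,3}^2, roots w^2, w^5, w^8
Phi9 = np.poly([w ** a for a in G1])              # Phi_9

assert len(P1) == len(P2) == 4                    # each of degree phi(9)/2 = 3
assert np.allclose(np.polymul(P1, P2), Phi9)      # Phi_9 = Phi_{9,3}^1 Phi_{9,3}^2
assert np.allclose(Phi9, [1, 0, 0, 1, 0, 0, 1])   # Phi_9(x) = x^6 + x^3 + 1
```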
\section{A sufficient condition for HS-integrality of oriented Cayley graphs over abelian groups}\label{sec3}
In this section, first we prove that $S=\emptyset$ is the only connection set for an HS-integral oriented Cayley graph $\text{Cay}(\Gamma, S)$ whenever $\Gamma(3)=\emptyset$. After that we obtain a sufficient condition on the set $S$ for which the oriented Cayley graph $\text{Cay}(\Gamma, S)$ is HS-integral.
\begin{lema}\label{AbelianSqrt3ZeroSum} Let $S$ be a skew-symmetric subset of an abelian group $\Gamma$, and let $\text{Irr}(\Gamma)=\{\psi_{\alpha} : \alpha \in \Gamma \}$. If $\sum\limits_{s \in S} i\sqrt{3} (\psi_{\alpha}(s) -\psi_{\alpha}(-s)) =0$ for all $\alpha \in \Gamma$ then $S=\emptyset$.
\end{lema}
\begin{proof} Let $A_S=(a_{uv})_{n\times n}$ be the matrix whose rows and columns are indexed by the elements of $\Gamma$, where
$$a_{uv} = \left\{ \begin{array}{cl}
i\sqrt{3} & \mbox{ if } v-u \in S \\
-i\sqrt{3} & \mbox{ if } v-u \in S^{-1}\\
0 &\textnormal{ otherwise.}
\end{array}\right.$$ Since $A_S$ is a circulant matrix, $\lambda_{\alpha}=\sum\limits_{s\in S} i\sqrt{3} (\psi_{\alpha}(s)-\psi_{\alpha}(-s))$ is an eigenvalue of $A_S$ for each $\alpha \in \Gamma$. By hypothesis $\lambda_{\alpha}=0$ for all $\alpha \in \Gamma$, and since $A_S$ is Hermitian, this forces $A_S=0$. Hence $S=\emptyset$.
\end{proof}
\begin{theorem}\label{ori4} Let $\Gamma$ be an abelian group with $\Gamma(3) = \emptyset$. Then the oriented Cayley graph $\text{Cay}(\Gamma, S)$ is HS-integral if and only if $S=\emptyset$. \end{theorem} \begin{proof} Let $G=\text{Cay}(\Gamma, S)$, $n=|\Gamma|$ and $Sp_H(G)=\{ \mu_{\alpha}: \alpha \in \Gamma \}$. Assume that $\text{Cay}(\Gamma, S)$ is HS-integral. Since $\Gamma(3)=\emptyset$, Cauchy's theorem gives $n\not\equiv 0 \Mod 3$. By Corollary~\ref{OriEig}, $$\mu_{\alpha}=\sum_{s\in S} (\omega_6 \psi_{\alpha}(s)+ \omega_6^5\psi_{\alpha}(-s)) \in \mathbb{Z}, \textnormal{ for all } \alpha \in \Gamma.$$ Note that $\psi_{\alpha}(s)$ and $\psi_{\alpha}(-s)$ are $n^{th}$ roots of unity for all $ \alpha \in \Gamma$ and $s\in S$. Fix a primitive $n^{th}$ root of unity $\omega$ and express each of $\psi_{\alpha}(s)$ and $\psi_{\alpha}(-s)$ in the form $\omega^j$ for some $j \in \{ 0,1,...,n-1\}$. Thus $$\mu_{\alpha}= \sum_{s\in S} (\omega_6 \psi_{\alpha}(s)+ \omega_6^5\psi_{\alpha}(-s) ) = \sum_{j=0}^{n-1} a_j \omega^j,$$
where $a_j \in \mathbb{Q}(\omega_3)$. Since $\mu_{\alpha} \in \mathbb{Z}$, the polynomial $p(x)= \sum\limits_{j=0}^{n-1} a_j x^j- \mu_{\alpha}$ lies in $\mathbb{Q}(\omega_3)[x]$ and has $\omega$ as a root. Since $n\not\equiv 0 \Mod 3$, the polynomial $\Phi_n(x)$ is irreducible in $\mathbb{Q}(\omega_3)[x]$, and so $\Phi_n(x)$ is the monic irreducible polynomial over $\mathbb{Q}(\omega_3)$ having $\omega$ as a root. Therefore $\Phi_n(x)$ divides $p(x)$, and so $\omega^{-1}=\omega^{n-1}$ is also a root of $p(x)$. Note that if $\psi_{\alpha}(s)=\omega^j$ for some $j \in \{ 0,1,...,n-1\}$ then $\psi_{-\alpha}(s)=\omega^{-j}$, so that $\mu_{-\alpha}=\sum\limits_{j=0}^{n-1} a_j \omega^{-j}$. We have \begin{align*} \sum_{s \in S} i\sqrt{3} (\psi_{\alpha}(s) - \psi_{\alpha}(-s))&=\sum_{s \in S}[(\omega_6 - \omega_6^5) \psi_{\alpha}(s) + (\omega_6^5 - \omega_6) \psi_{\alpha}(-s)]\\ &=\mu_{\alpha}-\mu_{-\alpha}=\mu_{\alpha}-\sum_{j=0}^{n-1} a_j \omega^{-j}=-p(\omega^{-1})=0 . \end{align*} By Lemma \ref{AbelianSqrt3ZeroSum}, $S=\emptyset$. Conversely, if $S=\emptyset$ then all the HS-eigenvalues of $\text{Cay}(\Gamma, S)$ are zero. Thus $\text{Cay}(\Gamma, S)$ is HS-integral. \end{proof}
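Theorem~\ref{ori4} can be checked exhaustively on a small example. The following sketch (ours, not from the paper) takes $\Gamma=\mathbb{Z}_5$, for which $\Gamma(3)=\emptyset$, enumerates every nonempty skew-symmetric connection set $S$, and confirms that each produces at least one non-integer HS-eigenvalue $\mu_{\alpha}$; it assumes the standard characters $\psi_{\alpha}(s)=e^{2\pi i \alpha s/5}$ of $\mathbb{Z}_5$.

```python
import cmath

# Sanity check (ours) of Theorem ori4 for Gamma = Z_5, where Gamma(3)
# is empty: no nonempty skew-symmetric S gives an HS-integral graph.
n = 5
w6 = cmath.exp(1j * cmath.pi / 3)                        # omega_6
psi = lambda a, s: cmath.exp(2j * cmath.pi * a * s / n)  # characters of Z_5

def hs_eigenvalues(S):
    return [sum(w6 * psi(a, s) + w6**5 * psi(a, -s) for s in S)
            for a in range(n)]

def all_integral(vals, tol=1e-9):
    return all(abs(v.imag) < tol and abs(v.real - round(v.real)) < tol
               for v in vals)

# every nonempty skew-symmetric S picks at most one of each pair {s, -s}
skew_sets = [{1}, {4}, {2}, {3}, {1, 2}, {1, 3}, {4, 2}, {4, 3}]
assert not any(all_integral(hs_eigenvalues(S)) for S in skew_sets)
assert all_integral(hs_eigenvalues(set()))   # S = empty set is HS-integral
```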
\begin{lema}\label{imcgoa11} Let $\Gamma$ be an abelian group and $x\in \Gamma(3)$. Then $\sum\limits_{s\in M_1(x)} \left(\omega_6 \psi_{\alpha}(s) + \omega_6^5 \psi_{\alpha}(-s)\right)$ is an integer for each $\alpha \in \Gamma$. \end{lema} \begin{proof} Let $x\in \Gamma(3)$, $\alpha \in \Gamma$ and $\mu_{\alpha}=\sum\limits_{s\in M_1(x)}\left( \omega_6 \psi_{\alpha}(s) + \omega_6^5 \psi_{\alpha}(-s)\right).$
\noindent\textbf{Case 1.} There exists $a\in M_0(x)$ such that $\psi_{\alpha}(a)\neq 1$. Then
\begin{equation*} \begin{split} \mu_{\alpha}=\sum\limits_{s\in M_1(x)}\left( \omega_6 \psi_{\alpha}(s) + \omega_6^5 \psi_{\alpha}(-s)\right) &= \sum\limits_{s\in M_1(x)} \omega_6 \psi_{\alpha}(s) + \sum\limits_{s\in M_2(x)} \omega_6^5 \psi_{\alpha}(s)\\ &= \sum\limits_{s\in a+M_1(x)} \omega_6 \psi_{\alpha}(s) + \sum\limits_{s\in a+M_2(x)} \omega_6^5 \psi_{\alpha}(s)\\ &=\psi_{\alpha}(a) \sum\limits_{s\in M_1(x)} \omega_6 \psi_{\alpha}(s) + \psi_{\alpha}(a) \sum\limits_{s\in M_2(x)} \omega_6^5 \psi_{\alpha}(s)\\ &= \psi_{\alpha}(a) \mu_{\alpha}. \end{split} \end{equation*} We have $(1-\psi_{\alpha}(a)) \mu_{\alpha}=0$. Since $\psi_{\alpha}(a)\neq 1$, we get $\mu_{\alpha}=0\in \mathbb{Z}$.
\noindent\textbf{Case 2.} Assume that $\psi_{\alpha}(a)=1$ for all $a\in M_0(x)$. Then $\psi_{\alpha}(s)=\psi_{\alpha}(x)$ for all $s\in M_1(x)$ and $\psi_{\alpha}(s)=\psi_{\alpha}(x^2)$ for all $s\in M_2(x)$. Therefore \begin{equation*} \begin{split} \mu_{\alpha}=\sum\limits_{s\in M_1(x)}\left( \omega_6 \psi_{\alpha}(s) + \omega_6^5 \psi_{\alpha}(-s)\right) &= \sum\limits_{s\in M_1(x)} \omega_6 \psi_{\alpha}(s) + \sum\limits_{s\in M_2(x)} \omega_6^5 \psi_{\alpha}(s)\\
&= |M_1(x)|(\omega_6 \psi_{\alpha}(x) + \omega_6^5 \psi_{\alpha}(x^2))\\
&= -|M_1(x)|(\omega_3^2 \psi_{\alpha}(x) + \omega_3 \psi_{\alpha}(x^2)). \end{split} \end{equation*}
Since $\psi_{\alpha}(x)^3=1$, we have $\psi_{\alpha}(x)\in \{1, \omega_3, \omega_3^2\}$. If $\psi_{\alpha}(x)=\omega_3$ then $\mu_{\alpha}= -2 |M_1(x)|$, while if $\psi_{\alpha}(x)=1$ or $\omega_3^2$ then $\mu_{\alpha}= |M_1(x)|$. Thus in each case $\mu_{\alpha}$ is an integer. \end{proof}
For $x\in \Gamma(3)$ and $\alpha \in \Gamma$, define $$Z_{x}(\alpha)= \sum\limits_{s\in \langle\!\langle x \rangle\!\rangle}\left( \omega_6 \psi_{\alpha}(s) + \omega_6^5 \psi_{\alpha}(-s)\right).$$
\begin{lema}\label{integerEigenvalue} Let $\Gamma$ be an abelian group and $x\in \Gamma(3)$. Then $Z_{x}(\alpha)$ is an integer for each $\alpha \in \Gamma$. \end{lema} \begin{proof} We apply induction on $\text{ord}(x)$, which is a multiple of $3$ for every $x\in \Gamma(3)$. If $\text{ord}(x)=3$, then $M_1(x)=\langle\!\langle x \rangle\!\rangle$, and hence by Lemma~\ref{imcgoa11}, $Z_{x}(\alpha)$ is an integer for each $\alpha \in \Gamma$. Assume that the statement holds for all $x\in \Gamma(3)$ with $\text{ord}(x)\in \{ 3,6,...,3(g-1)\}$. We prove it for $\text{ord}(x)=3g$. Lemma~\ref{imcgoa4} implies that $$M_1(x)= \bigcup\limits_{h\in D_{g,3}^1} \langle\!\langle x^h \rangle\!\rangle \cup \bigcup\limits_{h\in D_{g,3}^2} \langle\!\langle -x^h \rangle\!\rangle.$$ If $h\in D_{g,3}^1\cup D_{g,3}^2$ and $h>1$ then $\text{ord}(x^h), \text{ord}(-x^h)\in \{ 3,6,...,3(g-1)\}$, so by the induction hypothesis both $Z_{x^h}(\alpha)$ and $Z_{-x^h}(\alpha)$ are integers for all $\alpha \in \Gamma$. Now we have \begin{equation*} \begin{split} \sum\limits_{s\in M_1(x)}\left( \omega_6\psi_{\alpha}(s)+ \omega_6^5\psi_{\alpha}(-s)\right) &= Z_{x}(\alpha)+ \sum_{h\in D_{g,3}^1, h> 1} Z_{x^h}(\alpha) + \sum_{h\in D_{g,3}^2, h> 1} Z_{-x^h}(\alpha). \end{split} \end{equation*} By Lemma~\ref{imcgoa11} and the induction hypothesis, \begin{equation*} \begin{split} Z_{x}(\alpha) = \sum\limits_{s\in M_1(x)}\left( \omega_6\psi_{\alpha}(s)+ \omega_6^5\psi_{\alpha}(-s)\right) - \sum_{h\in D_{g,3}^1, h> 1} Z_{x^h}(\alpha) - \sum_{h\in D_{g,3}^2, h> 1} Z_{-x^h}(\alpha) \end{split} \end{equation*} is an integer for each $\alpha \in \Gamma$. \end{proof}
For $\Gamma(3) \neq \emptyset$, define $\mathbb{E}(\Gamma)$ to be the set of all skew-symmetric subsets of $\Gamma$ of the form $\langle\!\langle x_1 \rangle\!\rangle\cup...\cup \langle\!\langle x_k \rangle\!\rangle$ for some $x_1,...,x_k\in \Gamma(3)$. For $\Gamma(3) = \emptyset$, define $\mathbb{E}(\Gamma)=\{ \emptyset \}$.
\begin{theorem}\label{OrientedChara} Let $\Gamma$ be an abelian group. If $S \in \mathbb{E}(\Gamma)$ then the oriented Cayley graph $\text{Cay}(\Gamma, S)$ is HS-integral. \end{theorem} \begin{proof} Assume that $S \in \mathbb{E}(\Gamma)$. Then $S=\langle\!\langle x_1 \rangle\!\rangle\cup...\cup \langle\!\langle x_k \rangle\!\rangle$ for some $x_1,...,x_k\in \Gamma(3)$. We have \begin{equation*}
\begin{split} \mu_{\alpha} &=\sum_{s\in S}\left( \omega_6 \psi_{\alpha}(s)+ \omega_6^5 \psi_{\alpha}(-s)\right)= \sum_{j=1}^k Z_{x_j}(\alpha).
\end{split} \end{equation*} Now by Lemma~\ref{integerEigenvalue}, $\mu_{\alpha}$ is an integer for each $\alpha \in \Gamma$. Hence the oriented Cayley graph $\text{Cay}(\Gamma, S)$ is HS-integral. \end{proof}
\section{Characterization of HS-integral mixed Cayley graphs over abelian groups}\label{sec4}
Let $\Gamma$ be an abelian group of order $n$. Define $E$ to be the matrix of size $n\times n$, whose rows and columns are indexed by elements of $\Gamma$ such that $E_{x,y}=\psi_{x}(y)$. Note that each row of $E$ corresponds to a character of $\Gamma$ and $EE^*=nI_n$, where $E^*$ is the conjugate transpose of $E$. Let $v_{\langle\!\langle x \rangle\!\rangle}$ be the vector in $\mathbb{Q}(\omega_3)^n$ whose coordinates are indexed by the elements of $\Gamma$, and the $a^{th}$ coordinate of $v_{\langle\!\langle x \rangle\!\rangle}$ is given by
$$v_{\langle\!\langle x \rangle\!\rangle}(a) = \left\{ \begin{array}{cl}
\omega_6 &\mbox{ if }
a \in \langle\!\langle x \rangle\!\rangle \\\omega_6^5 & \mbox{ if } a \in \langle\!\langle -x \rangle\!\rangle\\
0 &\textnormal{ otherwise.}
\end{array}\right.$$ By Lemma~\ref{integerEigenvalue}, we have $Ev_{\langle\!\langle x \rangle\!\rangle} \in \mathbb{Z}^n$. For $z\in \mathbb{C}$, let $\overline{z}$ denote the complex conjugate of $z$ and $\Re (z)$ (resp. $\Im (z)$) denote the real part (resp. imaginary part) of $z$.
\begin{lema}\label{Ori4Nec} Let $\Gamma$ be an abelian group, $v\in \mathbb{Q}(\omega_3)^n$ and $Ev \in \mathbb{Q}^n$. Let the coordinates of $v$ be indexed by elements of $\Gamma$. Then \begin{enumerate}[label=(\roman*)] \item $\overline{v}_x=v_{-x}$ for all $x \in \Gamma$. \item $v_x=v_y$ for all $x,y \in \Gamma(3)$ satisfying $x \approx y$. \item $\Re(v_x)=\Re(v_{-x})$ and $\Im(v_x)=\Im(v_{-x})=0$ for all $x\in \Gamma \setminus \Gamma(3)$. \end{enumerate} \end{lema} \begin{proof} Let $E_x$ and $E_y$ denote the column vectors of $E$ indexed by $x$ and $y$, respectively, and assume that $u=Ev \in \mathbb{Q}^n$. \begin{enumerate}[label=(\roman*)] \item We use the fact that $\overline{\psi_x(y)}=\psi_{-x}(y)=\psi_x(-y)$ for all $x,y \in \Gamma$. Again \[u=Ev\Rightarrow E^*u=E^*Ev= (nI_n)v\Rightarrow \frac{1}{n}E^*u=v \in \mathbb{Q}(\omega_3)^n.\] Thus
\begin{equation*}
\begin{split} v_x=\frac{1}{n}(E^* u)_x=\frac{1}{n} \sum_{a\in \Gamma} E^*_{x,a}u_a = \frac{1}{n} \sum_{a\in \Gamma} \overline{ \psi_a(x)}u_a &= \frac{1}{n} \sum_{a\in \Gamma} \psi_{a}(-x)u_a \\ &= \overline{\frac{1}{n} \sum_{a\in \Gamma} \overline{ \psi_{a}(-x)}u_a}\\ &= \overline{\frac{1}{n} \sum_{a\in \Gamma} E^*_{-x,a}u_a}= \overline{\frac{1}{n}(E^* u)_{-x}}=\overline{v}_{-x}.
\end{split} \end{equation*}
\item If $\Gamma(3) = \emptyset$ then there is nothing to prove. Now assume that $\Gamma(3)\neq\emptyset$. Let $x,y \in \Gamma(3)$ and $x \approx y$. Then there exists $k\in G_{m,3}^1(1)$ such that $y=x^k$, where $m=\text{ord}(x)$. Assume $x\neq y$, so that $k\geq 2$. By Lemma~\ref{Basic}, the entries of $E_x$ and $E_y$ are $m^{th}$ roots of unity. Fix a primitive $m^{th}$ root of unity $\omega$, and express each entry of $E_x$ and $E_y$ in the form $\omega^j$ for some $j\in \{ 0,1,...,m-1\}$. Thus $$nv_x= (E^*u)_x= \sum_{j=0}^{m-1} a_j \omega^j,$$ where $a_j\in \mathbb{Q}$ for all $j$. Thus $\omega$ is a root of the polynomial $p(x)= \sum\limits_{j=0}^{m-1} a_j x^j-nv_x \in \mathbb{Q}(\omega_3)[x]$. Therefore $p(x)$ is a multiple of the irreducible polynomial $\Phi_{m,3}^1(x)$, and so $\omega^k$ is also a root of $p(x)$, since $k\in G_{m,3}^1(1)$. As $y=x^k$ implies $\psi_{a}(y)=\psi_{a}(x)^k$ for all $a\in \Gamma$, we have $(E^*u)_y= \sum\limits_{j=0}^{m-1} a_j \omega^{kj}$. Hence $$0 =p(\omega^k)= \sum\limits_{j=0}^{m-1} a_j \omega^{kj}-nv_x= (E^*u)_y -nv_x=nv_y-nv_x \Rightarrow v_x=v_y.$$
\item Let $x\in \Gamma \setminus \Gamma(3)$ and $r=\text{ord}(x) \not\equiv 0 \Mod 3$. Fix a primitive $r^{th}$ root of unity $\omega$, and express each entry of $E_x$ in the form $\omega^j$ for some $j\in \{ 0,1,...,r-1\}$. Thus $$nv_x= (E^*u)_x= \sum_{j=0}^{r-1} a_j \omega^j,$$ where $a_j\in \mathbb{Q}$ for all $j$. Thus $\omega$ is a root of the polynomial $p(x)= \sum\limits_{j=0}^{r-1} a_j x^j-nv_x \in \mathbb{Q}(\omega_3)[x]$. Therefore $p(x)$ is a multiple of the irreducible polynomial $\Phi_r(x)$, and so $\omega^{-1}$ is also a root of $p(x)$. Since $\psi_{a}(-x)=\psi_{a}(x)^{-1}$ for all $a\in \Gamma$, we get $(E^*u)_{-x}= \sum\limits_{j=0}^{r-1} a_j \omega^{-j}$. Hence $$0 =p(\omega^{-1})= \sum\limits_{j=0}^{r-1} a_j \omega^{-j}-nv_x= (E^*u)_{-x} -nv_x=nv_{-x}-nv_x,$$ which implies $v_x=v_{-x}$. This together with Part $(i)$ implies that $\Re(v_x)=\Re(v_{-x})$, and that $\Im(v_x)=\Im(v_{-x})=0$ for all $x\in \Gamma \setminus \Gamma(3)$. \qedhere \end{enumerate} \end{proof}
\begin{theorem}\label{neccori} Let $\Gamma$ be an abelian group. The oriented Cayley graph $\text{Cay}(\Gamma, S)$ is HS-integral if and only if $S \in \mathbb{E}(\Gamma)$. \end{theorem} \begin{proof} Assume that the oriented Cayley graph $\text{Cay}(\Gamma, S)$ is HS-integral. If $\Gamma(3) = \emptyset$ then by Theorem~\ref{ori4}, we have $S = \emptyset$, and so $S \in \mathbb{E}(\Gamma)$. Now assume that $\Gamma(3) \neq \emptyset$. Let $v$ be the vector in $\mathbb{Q}(\omega_3)^n$ whose coordinates are indexed by the elements of $\Gamma$, and the $x^{th}$ coordinate of $v$ is given by
$$v_x = \left\{ \begin{array}{rl}
\omega_6 &\mbox{ if }
x \in S \\ \omega_6^5 & \mbox{ if } x \in S^{-1}\\
0 &\textnormal{ otherwise.}
\end{array}\right.$$ We have \[(Ev)_a=\sum\limits_{x\in \Gamma}E_{a,x}v_x =\sum\limits_{x\in S}\omega_6 E_{a,x}+ \sum\limits_{x\in S^{-1}} \omega_6^5 E_{a,x}=\sum\limits_{x\in S}\left( \omega_6 \psi_a(x)+ \omega_6^5 \psi_a(-x)\right).\] Thus $(Ev)_a$ is an HS-eigenvalue of the HS-integral oriented Cayley graph $\text{Cay}(\Gamma, S)$ for each $a\in \Gamma$. Therefore $Ev \in \mathbb{Q}^n$, and hence all three conditions of Lemma~\ref{Ori4Nec} hold.
By the third condition of Lemma ~\ref{Ori4Nec}, $v_x=0$ for all $x\in \Gamma \setminus\Gamma(3)$, and so we must have $S \cup S^{-1} \subseteq \Gamma(3)$. Again, let $x \in S$, $y \in \Gamma(3)$ and $ x \approx y$. The second condition of Lemma ~\ref{Ori4Nec} gives $v_x=v_y$, which implies that $ y\in S$. Thus $x \in S$ implies $\langle\!\langle x \rangle\!\rangle \subseteq S$. Hence $S\in \mathbb{E}(\Gamma)$. The converse part follows from Theorem~\ref{OrientedChara}. \end{proof}
The following example illustrates Theorem~\ref{neccori}.
\begin{ex}\label{ex1} Consider $\Gamma= \mathbb{Z}_3 \times \mathbb{Z}_3$ and $S=\{ (0,1), (2,0)\}$. The oriented graph $\text{Cay}(\mathbb{Z}_3 \times \mathbb{Z}_3, S)$ is shown in Figure~\ref{a}. We see that $\langle\!\langle (0,1) \rangle\!\rangle=\{(0,1)\}$ and $\langle\!\langle (2,0)\rangle\!\rangle=\{(2,0)\}$. Therefore $S \in \mathbb{E}(\Gamma)$. Further, using Corollary~\ref{OriEig} and Equation~\ref{character}, the HS-eigenvalues of $\text{Cay}(\mathbb{Z}_3 \times \mathbb{Z}_3, S)$ are obtained as $$\mu_\alpha= [\omega_6\psi_{\alpha}(0,1) + \omega_6^5 \psi_{\alpha}(0,2)] + [\omega_6 \psi_{\alpha}(2,0) + \omega_6^5 \psi_{\alpha}(1,0)] ~~\text{for each }\alpha \in \mathbb{Z}_3 \times \mathbb{Z}_3 ,$$ where $$\psi_{\alpha}(x) =\omega_3^{\alpha_1 x_1}\omega_3^{\alpha_2 x_2} \text{ for all } \alpha=(\alpha_1,\alpha_2),x=(x_1,x_2)\in \mathbb{Z}_3 \times \mathbb{Z}_3.$$ It can be seen that $\mu_{(0,0)}=2,\mu_{(0,1)}=-1,\mu_{(0,2)}=2,\mu_{(1,0)}=2,\mu_{(1,1)}=-1,\mu_{(1,2)}=2$, $\mu_{(2,0)}=-1$, $\mu_{(2,1)}=-4$ and $\mu_{(2,2)}=-1$. Thus $\text{Cay}(\mathbb{Z}_3 \times \mathbb{Z}_3, S)$ is HS-integral. \end{ex}
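The spectrum listed in Example~\ref{ex1} can be reproduced directly. The sketch below (our addition, not part of the paper) evaluates $\mu_{\alpha}$ numerically for every $\alpha \in \mathbb{Z}_3 \times \mathbb{Z}_3$ and compares against the stated values.

```python
import cmath

# Numerical check (ours) of the HS-eigenvalues in Example ex1 for the
# oriented graph Cay(Z_3 x Z_3, S) with S = {(0,1), (2,0)}.
w3 = cmath.exp(2j * cmath.pi / 3)       # omega_3
w6 = cmath.exp(1j * cmath.pi / 3)       # omega_6
psi = lambda a, x: w3 ** (a[0] * x[0]) * w3 ** (a[1] * x[1])

S = [(0, 1), (2, 0)]
neg = lambda x: ((-x[0]) % 3, (-x[1]) % 3)

def mu(a):
    return sum(w6 * psi(a, s) + w6**5 * psi(a, neg(s)) for s in S)

spec = {(i, j): round(mu((i, j)).real) for i in range(3) for j in range(3)}
expected = {(0, 0): 2, (0, 1): -1, (0, 2): 2,
            (1, 0): 2, (1, 1): -1, (1, 2): 2,
            (2, 0): -1, (2, 1): -4, (2, 2): -1}
assert spec == expected   # matches the values stated in the example
```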
\begin{figure}
\caption{$S=\{ (0,1), (2,0)\}$}
\label{a}
\caption{$S=\{ (0,1),(1,0), (2,0)\}$}
\label{b}
\caption{The graph $\text{Cay}(\mathbb{Z}_3 \times \mathbb{Z}_3, S)$}
\label{main}
\end{figure}
\begin{lema}\label{CharaNewIntegSum} Let $S$ be a skew-symmetric subset of an abelian group $\Gamma$ and $t(\neq 0) \in \mathbb{Q}$. If \linebreak[4] $\sum\limits_{s\in S}it\sqrt{3} (\psi_{\alpha}(s)- \psi_{\alpha}(-s))$ is an integer for each $\alpha \in \Gamma$ then $S \in \mathbb{E}(\Gamma)$.
\end{lema}
\begin{proof} Let $v$ be the vector, whose coordinates are indexed by the elements of $\Gamma$, defined by
$$v_x= \left\{ \begin{array}{cl}
it\sqrt{3} & \mbox{if } x\in S \\
-it\sqrt{3} & \mbox{if } x\in S^{-1}\\
0 & \mbox{otherwise}.
\end{array}\right.$$ Since $v\in \mathbb{Q}(\omega_3)^n$ and the $\alpha^{th}$ coordinate of $Ev$ is $\sum\limits_{s\in S} it\sqrt{3} ( \psi_{\alpha}(s)-\psi_{\alpha}(-s))$, we have $Ev \in \mathbb{Q}^n$. By the third condition of Lemma~\ref{Ori4Nec}, $\Im(v_x)=0$, and so $v_x=0$ for all $x\in \Gamma \setminus\Gamma(3)$. Thus we must have $S \cup S^{-1} \subseteq \Gamma(3)$. Again, let $x \in S$, $y \in \Gamma(3)$ and $ x \approx y$. The second condition of Lemma~\ref{Ori4Nec} gives $v_x=v_y$, which implies that $ y\in S$. Thus $x \in S$ implies $\langle\!\langle x \rangle\!\rangle \subseteq S$. Hence $S\in \mathbb{E}(\Gamma)$.
\end{proof}
\begin{lema}\label{Sqrt3NecessIntSum} Let $S$ be a skew-symmetric subset of an abelian group $\Gamma$ and $t(\neq 0) \in \mathbb{Q}$. If \linebreak[4] $\sum\limits_{s\in S}it\sqrt{3} ( \psi_{\alpha}(s)- \psi_{\alpha}(-s))$ is an integer for each $ \alpha \in \Gamma$ then $\sum\limits_{s\in S\cup S^{-1}} \psi_{\alpha}(s) $ is an integer for each $ \alpha \in \Gamma$.
\end{lema}
\begin{proof} Assume that $\sum\limits_{s\in S}it\sqrt{3} (\psi_{\alpha}(s)-\psi_{\alpha}(-s))$ is an integer for each $ \alpha \in \Gamma$. By Lemma \ref{CharaNewIntegSum} we have $S \in \mathbb{E}(\Gamma)$, and so $S=\langle\!\langle x_1 \rangle\!\rangle\cup...\cup \langle\!\langle x_k \rangle\!\rangle$ for some $x_1,...,x_k\in \Gamma(3)$. Therefore, using Lemma~\ref{lemanecc} we get $S \cup S^{-1}=[x_1] \cup ...\cup[x_k] \in \mathbb{B}(\Gamma)$. Thus by Theorem~\ref{Cayint}, $\text{Cay}(\Gamma, S \cup S^{-1})$ is integral, that is, $\sum\limits_{s\in S\cup S^{-1}} \psi_{\alpha}(s) $ is an integer for each $ \alpha \in \Gamma$.
\end{proof}
\begin{lema}\label{SeperatIntegMixedGraph} Let $\Gamma$ be an abelian group. The mixed Cayley graph $\text{Cay}(\Gamma,S)$ is HS-integral if and only if $\text{Cay}(\Gamma,S\setminus \overline{S})$ is integral and $\text{Cay}(\Gamma, \overline{S})$ is HS-integral.
\end{lema}
\begin{proof}
Assume that the mixed Cayley graph $\text{Cay}(\Gamma,S)$ is HS-integral. Let the HS-spectrum of $\text{Cay}(\Gamma,S)$ be $\{\gamma_{\alpha}: \alpha \in \Gamma\}$, where $\gamma_{\alpha}=\lambda_{\alpha} +\mu_{\alpha}$,
$$\lambda_{\alpha}= \sum\limits_{s \in S\setminus \overline{S}} \psi_{\alpha}(s) \textnormal{ and } \mu_{\alpha}= \sum\limits_{s \in \overline{S}} (\omega_6 \psi_{\alpha}(s) + \omega_6^5 \psi_{\alpha}(-s)), \textnormal{ for } \alpha \in \Gamma.$$ Note that $\{\lambda_{\alpha}: \alpha \in \Gamma\}$ is the spectrum of $\text{Cay}(\Gamma, S\setminus \overline{S})$ and $\{\mu_{\alpha}: \alpha \in \Gamma\}$ is the HS-spectrum of $\text{Cay}(\Gamma,\overline{S})$. By assumption $\gamma_{\alpha} \in \mathbb{Z}$; since $S\setminus \overline{S}$ is symmetric, $\lambda_{\alpha}=\lambda_{-\alpha}$, and so $ \gamma_{\alpha} - \gamma_{-\alpha}= \mu_{\alpha}-\mu_{-\alpha}= \sum\limits_{s\in \overline{S}}i\sqrt{3}(\psi_{\alpha}(s)-\psi_{\alpha}(-s)) \in \mathbb{Z}$ for all $ \alpha \in \Gamma$. By Lemma \ref{Sqrt3NecessIntSum}, we get $\sum\limits_{ s \in \overline{S} \cup \overline{S}^{-1}}\psi_{\alpha}(s) \in \mathbb{Z}$ for all $ \alpha \in \Gamma$. Note that $\mu_{\alpha}$ is an algebraic integer. Also \begin{align*} \mu_{\alpha}= \frac{1}{2} \sum\limits_{ s \in \overline{S} \cup \overline{S}^{-1}}\psi_{\alpha}(s) + \frac{1}{2} \sum\limits_{s\in \overline{S}}i\sqrt{3}(\psi_{\alpha}(s)-\psi_{\alpha}(-s))\in \mathbb{Q}. \end{align*} Hence $\mu_{\alpha}$, being a rational algebraic integer, is an integer for each $ \alpha \in \Gamma$. Thus $\text{Cay}(\Gamma,\overline{S})$ is HS-integral. Now we have $\gamma_{\alpha}, \mu_{\alpha} \in \mathbb{Z}$, and so $\lambda_{\alpha} = \gamma_{\alpha} -\mu_{\alpha} \in \mathbb{Z}$ for each $ \alpha \in \Gamma$. Hence $\text{Cay}(\Gamma,S\setminus \overline{S})$ is integral.
Conversely, assume that $\text{Cay}(\Gamma,S\setminus \overline{S})$ is integral and $\text{Cay}(\Gamma, \overline{S})$ is HS-integral. Then Lemma \ref{imcgoa3} implies that $\text{Cay}(\Gamma,S)$ is HS-integral.
\end{proof}
\begin{theorem}\label{CharaHSintMixed} Let $\Gamma$ be an abelian group. The mixed Cayley graph $\text{Cay}(\Gamma, S)$ is HS-integral if and only if $S \setminus \overline{S} \in \mathbb{B}(\Gamma)$ and $\overline{S} \in \mathbb{E}(\Gamma)$. \end{theorem} \begin{proof} By Lemma~\ref{SeperatIntegMixedGraph}, the mixed Cayley graph $\text{Cay}(\Gamma, S)$ is HS-integral if and only if $\text{Cay}(\Gamma, S \setminus \overline{S})$ is integral and $\text{Cay}(\Gamma, \overline{S})$ is HS-integral. Note that $S \setminus \overline{S}$ is a symmetric set and $\overline{S}$ is a skew-symmetric set. Thus by Theorem~\ref{Cayint}, $\text{Cay}(\Gamma, S \setminus \overline{S})$ is integral if and only if $S \setminus \overline{S} \in \mathbb{B}(\Gamma)$. By Theorem~\ref{neccori}, $\text{Cay}(\Gamma, \overline{S})$ is HS-integral if and only if $\overline{S} \in \mathbb{E}(\Gamma)$. Hence the result follows. \end{proof}
The following example illustrates Theorem~\ref{CharaHSintMixed}.
\begin{ex}\label{ex2} Consider $\Gamma= \mathbb{Z}_3 \times \mathbb{Z}_3$ and $S=\{ (0,1),(1,0), (2,0)\}$. The mixed graph $\text{Cay}(\mathbb{Z}_3 \times \mathbb{Z}_3, S)$ is shown in Figure~\ref{b}. Here $\overline{S}=\{(0,1)\}=\langle\!\langle (0,1)\rangle\!\rangle\in \mathbb{E}(\Gamma)$ and $S\setminus\overline{S}=\{(1,0),(2,0)\} =[(1,0)]\in \mathbb{B}(\Gamma)$. Further, using Lemma~\ref{imcgoa3} and Equation~\ref{character}, the HS-eigenvalues of $\text{Cay}(\mathbb{Z}_3 \times \mathbb{Z}_3, S)$ are obtained as $$\gamma_\alpha=[\psi_{\alpha}(1,0) + \psi_{\alpha}(2,0)] + [\omega_6 \psi_{\alpha}(0,1) + \omega_6^5 \psi_{\alpha}(0,2)] ~~\text{for each }\alpha \in \mathbb{Z}_3 \times \mathbb{Z}_3.$$ One can see that $\gamma_{(0,1)}=\gamma_{(1,0)}= \gamma_{(1,2)}= \gamma_{(2,0)}=\gamma_{(2,2)}=0$, $\gamma_{(0,0)}=\gamma_{(0,2)}=3$ and $\gamma_{(1,1)} = \gamma_{(2,1)}=-3$. Thus $\text{Cay}(\mathbb{Z}_3 \times \mathbb{Z}_3, S)$ is HS-integral. \end{ex}
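The spectrum in Example~\ref{ex2} can likewise be reproduced numerically. The sketch below (ours, not from the paper) splits $\gamma_{\alpha}$ into the symmetric part over $S\setminus\overline{S}=\{(1,0),(2,0)\}$ and the oriented part over $\overline{S}=\{(0,1)\}$, exactly as in Theorem~\ref{CharaHSintMixed}.

```python
import cmath

# Numerical check (ours) of the HS-eigenvalues in Example ex2 for the
# mixed graph Cay(Z_3 x Z_3, S) with S = {(0,1), (1,0), (2,0)}.
w3 = cmath.exp(2j * cmath.pi / 3)       # omega_3
w6 = cmath.exp(1j * cmath.pi / 3)       # omega_6
psi = lambda a, x: w3 ** (a[0] * x[0] + a[1] * x[1])

def gamma(a):
    sym = psi(a, (1, 0)) + psi(a, (2, 0))                # S \ S-bar part
    skew = w6 * psi(a, (0, 1)) + w6**5 * psi(a, (0, 2))  # oriented part
    return sym + skew

spec = {(i, j): round(gamma((i, j)).real) for i in range(3) for j in range(3)}
expected = {(0, 0): 3, (0, 1): 0, (0, 2): 3,
            (1, 0): 0, (1, 1): -3, (1, 2): 0,
            (2, 0): 0, (2, 1): -3, (2, 2): 0}
assert spec == expected   # matches the values stated in the example
```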
\section{Characterization of Eisenstein integral mixed Cayley graphs over abelian groups}\label{sec5} Let $\Gamma$ be a finite abelian group of order $n$. For a subset $S$ of $\Gamma$ with $0\notin S$, apply Theorem~\ref{EigNorColCayMix} to the function $\alpha:\Gamma \rightarrow \{0,1\}$ defined by \[\alpha(s)= \left\{ \begin{array}{rl}
1 & \mbox{if } s\in S \\
0 & \mbox{otherwise}.
\end{array}\right.\] We see that $\sum\limits_{s \in S} \psi_{\alpha}(s)$ is an eigenvalue of the mixed Cayley graph $\text{Cay}(\Gamma, S)$ for all $\alpha \in \Gamma$. For $x \in \Gamma, y\in \Gamma(3)$ and $\alpha \in \Gamma$, define $$C_x(\alpha)=\sum_{s \in [x] } \psi_{\alpha}(s)~~\text{ and }~~ T_y(\alpha)= \sum_{s \in \langle\!\langle y \rangle\!\rangle} i\sqrt{3}( \psi_{\alpha}(s)-\psi_{\alpha}(-s)).$$
Note that $C_x(\alpha)$ is an eigenvalue of the mixed Cayley graph $\text{Cay}(\Gamma, [x])$ for each $\alpha \in \Gamma$.
\begin{lema}\label{Tn(q)IsIntegerForAll} Let $x \in \Gamma(3)$. Then $T_x(\alpha)$ is an integer for each $\alpha \in \Gamma$. \end{lema} \begin{proof} We have \begin{equation*}
\begin{split} Z_x(\alpha) = \sum\limits_{s \in \langle\!\langle x \rangle\!\rangle}\left( \omega_6\psi_{\alpha}(s) + \omega_6^5 \psi_{\alpha}(-s)\right)
&= \frac{1}{2} \sum\limits_{s\in [x]}\psi_{\alpha}(s) + \frac{i\sqrt{3}}{2} \sum\limits_{s \in \langle\!\langle x \rangle\!\rangle} (\psi_{\alpha}(s) - \psi_{\alpha}(-s))\\
&= \frac{C_x(\alpha)}{2} + \frac{T_x(\alpha)}{2}.\\
\end{split}
\end{equation*} By Lemma~\ref{integerEigenvalue}, $T_x(\alpha)= 2 Z_x(\alpha) - C_x(\alpha)$ is an integer for each $\alpha \in \Gamma$. \end{proof}
\begin{lema} Let $\Gamma$ be a finite abelian group and the order of $x\in \Gamma(3)$ be $3m$, with $m \not\equiv 0 \Mod 3$. Then $$x^m [x^3] = \left\{ \begin{array}{ll}
\langle\!\langle x \rangle\!\rangle & \mbox{if } m \equiv 1 \Mod 3 \\
\langle\!\langle -x \rangle\!\rangle & \mbox{if } m \equiv 2 \Mod 3.
\end{array}\right. $$ \end{lema} \begin{proof} Assume that $m \equiv 1 \Mod 3$. Let $y \in x^m [x^3]$. Then $y=x^{m+3r}$ for some $r\in G_m(1)$. We have $\gcd(r,m)=1$; since also $\gcd(3,m)=1$, it follows that $\gcd(m+3r, 3m)=1$ and $m+3r \equiv 1 \Mod 3$. Thus $x^m [x^3] \subseteq \langle\!\langle x \rangle\!\rangle$. Since $x^m [x^3]$ and $\langle\!\langle x \rangle\!\rangle$ have the same cardinality, $x^m [x^3] = \langle\!\langle x \rangle\!\rangle$. Similarly, if $m \equiv 2 \Mod 3$ then we have $x^m [x^3] = \langle\!\langle -x \rangle\!\rangle$. \end{proof}
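Working at the level of exponents, the lemma just proved is a statement about residues in $\mathbb{Z}_k$ with $k=3m$. The sketch below (our addition; it identifies $\langle\!\langle x \rangle\!\rangle$ and $\langle\!\langle -x \rangle\!\rangle$ with the residues $a$ satisfying $\gcd(a,k)=1$ and $a\equiv 1$, respectively $a \equiv 2 \Mod 3$) checks both cases.

```python
from math import gcd

# Set-level check (ours) of the lemma: in Z_k with k = 3m, m not
# divisible by 3, the translate m + 3*G_m(1) equals the exponent set
# of <<x>> when m = 1 mod 3, and that of <<-x>> when m = 2 mod 3.
def translate(m):
    k = 3 * m
    return {(m + 3 * r) % k for r in range(1, m) if gcd(r, m) == 1}

def units_class(k, j):
    """Residues a mod k with gcd(a, k) = 1 and a = j mod 3."""
    return {a for a in range(k) if gcd(a, k) == 1 and a % 3 == j}

assert translate(7) == units_class(21, 1)   # m = 7 = 1 mod 3: <<x>>
assert translate(5) == units_class(15, 2)   # m = 5 = 2 mod 3: <<-x>>
```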
\begin{lema}\label{Sum3mequalRamanujan} Let $\Gamma$ be a finite abelian group and the order of $x\in \Gamma(3)$ be $3m$, with $m \not\equiv 0 \Mod 3$. Then \begin{equation*}
\begin{split}
T_x(\alpha) = \left\{ \begin{array}{cl}
\pm3 C_{x^3}(\alpha) & \mbox{if } \psi_{\alpha}(x^m) \neq 1 \\
0 & \mbox{otherwise.}
\end{array}\right.
\end{split} \end{equation*} for all $\alpha \in \Gamma$. \end{lema}
\begin{proof}
We have
\begin{equation*}
\begin{split}
T_x(\alpha) &= i\sqrt{3} \sum\limits_{s \in \langle\!\langle x \rangle\!\rangle} (\psi_{\alpha}(s) - \psi_{\alpha}(-s)) \\
&= \left\{ \begin{array}{rl}
i\sqrt{3} \sum\limits_{s \in \langle\!\langle x \rangle\!\rangle} (\psi_{\alpha}(s) - \psi_{\alpha}(-s)) & \mbox{if } m \equiv 1 \Mod 3 \\
-i\sqrt{3} \sum\limits_{s \in \langle\!\langle -x \rangle\!\rangle} (\psi_{\alpha}(s) - \psi_{\alpha}(-s)) & \mbox{if } m \equiv 2 \Mod 3
\end{array}\right.\\
&= \left\{ \begin{array}{rl}
i\sqrt{3} \sum\limits_{s \in x^m [x^3]} (\psi_{\alpha}(s) - \psi_{\alpha}(-s)) & \mbox{if } m \equiv 1 \Mod 3 \\
-i\sqrt{3} \sum\limits_{s \in x^m [x^3]} (\psi_{\alpha}(s) - \psi_{\alpha}(-s)) & \mbox{if } m \equiv 2 \Mod 3
\end{array}\right.\\
&= \left\{ \begin{array}{rl}
i\sqrt{3} \sum\limits_{s \in [x^3]} (\psi_{\alpha}(x^m) \psi_{\alpha}(s) - \psi_{\alpha}(-x^{m}) \psi_{\alpha}(-s)) & \mbox{if } m \equiv 1 \Mod 3 \\
-i\sqrt{3} \sum\limits_{s \in [x^3]} (\psi_{\alpha}(x^m) \psi_{\alpha}(s) - \psi_{\alpha}(-x^m) \psi_{\alpha}(-s)) & \mbox{if } m \equiv 2 \Mod 3
\end{array}\right.\\
&= \left\{ \begin{array}{rl}
-2\sqrt{3} \Im(\psi_{\alpha}(x^m)) \sum\limits_{s \in [x^3]} \psi_{\alpha}(s) & \mbox{if } m \equiv 1 \Mod 3 \\
2\sqrt{3} \Im(\psi_{\alpha}(x^m)) \sum\limits_{s \in [x^3]} \psi_{\alpha}(s) & \mbox{if } m \equiv 2 \Mod 3
\end{array}\right.\\
&=\pm 2\sqrt{3} \Im(\psi_{\alpha}(x^m)) C_{x^3}(\alpha).
\end{split}
\end{equation*} Since $\psi_{\alpha}(x^m)$ is a cube root of unity, $\Im(\psi_{\alpha}(x^m))=0$ or $\pm \frac{ \sqrt{3}}{2}$. Thus \begin{equation*}
\begin{split} T_x(\alpha) = \left\{ \begin{array}{cl}
\pm3 C_{x^3}(\alpha) & \mbox{if } \psi_{\alpha}(x^m) \neq 1 \\
0 & \mbox{otherwise.} \end{array}\right.
\end{split} \end{equation*}
\end{proof}
\begin{lema}\label{Tn(q)SumEqualTo3TimesSum}
Let $\Gamma$ be a finite abelian group and the order of $x\in \Gamma(3)$ be $k=3^tm$, with $m \not\equiv 0 \Mod 3$ and $t\geq 2$. Then $$T_x(\alpha)= \left\{ \begin{array}{ll}
3 \sqrt{3} i \sum\limits_{r \in G_{3^{t-1}m,3}^1(1)} (\psi_{\alpha}(x^r) - \psi_{\alpha}(-x^r)) & \mbox{if } \psi_{\alpha}(x^{\frac{k}{3}}) =1 \\
0 & \mbox{otherwise.}
\end{array}\right.$$
\end{lema}
\begin{proof} Note that $ G_{k,3}^1(1)= G_{\frac{k}{3},3}^1(1) \cup \left(\frac{k}{3} + G_{\frac{k}{3},3}^1(1)\right) \cup \left(\frac{2k}{3} + G_{\frac{k}{3},3}^1(1)\right)$. Therefore
\begin{equation*}
\begin{split}
T_x(\alpha) =& i\sqrt{3} \sum\limits_{s \in \langle\!\langle x \rangle\!\rangle} (\psi_{\alpha}(s) - \psi_{\alpha}(-s)) \\
=& i\sqrt{3} \sum\limits_{r \in G_{k,3}^1(1) } (\psi_{\alpha}(x^r) - \psi_{\alpha}(-x^r)) \\
=& i\sqrt{3} \bigg[\sum\limits_{r \in G_{\frac{k}{3},3}^1(1)} (\psi_{\alpha}(x^r) - \psi_{\alpha}(-x^r)) + \sum\limits_{r \in G_{\frac{k}{3},3}^1(1)} (\psi_{\alpha}(x^{\frac{k}{3}}) \psi_{\alpha}(x^r) - \psi_{\alpha}(x^{\frac{2k}{3}}) \psi_{\alpha}(-x^r)) \\
&+ \sum\limits_{r \in G_{\frac{k}{3},3}^1(1)} (\psi_{\alpha}(x^{\frac{2k}{3}}) \psi_{\alpha}(x^r) - \psi_{\alpha}(x^{\frac{k}{3}}) \psi_{\alpha}(-x^r))\bigg]\\
=& i\sqrt{3} \bigg[ \sum\limits_{r \in G_{\frac{k}{3},3}^1(1) } (\psi_{\alpha}(x^r) - \psi_{\alpha}(-x^r)) + \psi_{\alpha}(x^{\frac{k}{3}}) \sum\limits_{r \in G_{\frac{k}{3},3}^1(1) } ( \psi_{\alpha}(x^r) - \psi_{\alpha}(-x^r)) \\
&+\psi_{\alpha}(x^{\frac{2k}{3}}) \sum\limits_{r \in G_{\frac{k}{3},3}^1(1)} ( \psi_{\alpha}(x^r) - \psi_{\alpha}(-x^r))\bigg]\\
=& i\sqrt{3} (1+ \psi_{\alpha}(x^{\frac{k}{3}}) + \psi_{\alpha}(x^{\frac{2k}{3}})) \sum\limits_{r \in G_{\frac{k}{3},3}^1(1)} (\psi_{\alpha}(x^r) - \psi_{\alpha}(-x^r))\\
=& \left\{ \begin{array}{cl}
3 \sqrt{3} i \sum\limits_{r \in G_{\frac{k}{3},3}^1(1)} (\psi_{\alpha}(x^r) - \psi_{\alpha}(-x^r)) & \mbox{if } \psi_{\alpha}(x^{\frac{k}{3}}) =1 \\
0 & \mbox{otherwise.}
\end{array}\right.
\end{split}
\end{equation*} \end{proof}
\begin{lema}\label{3DividesTn(q)}
Let $\Gamma$ be a finite abelian group and $x\in \Gamma(3)$. Then $\frac{T_x(\alpha)}{3}$ is an integer for each $\alpha \in \Gamma $.
\end{lema}
\begin{proof}
Let $x\in \Gamma(3)$ and let the order of $x$ be $k=3^tm$ with $m\not\equiv 0 \Mod 3$ and $t\geq 1$. If $t=1$ then by Lemma~\ref{Sum3mequalRamanujan}, $\frac{T_x(\alpha)}{3}$ is an integer for each $\alpha \in \Gamma $. Assume that $t\geq 2$ and let $\alpha \in \Gamma$. If $\psi_{\alpha}(x^{\frac{k}{3}}) \neq 1$ then by Lemma~\ref{Tn(q)SumEqualTo3TimesSum}, $T_x(\alpha)=0$, and so $\frac{T_x(\alpha)}{3}$ is an integer. If $\psi_{\alpha}(x^{\frac{k}{3}})=1$ then by Lemma~\ref{Tn(q)IsIntegerForAll} and Lemma~\ref{Tn(q)SumEqualTo3TimesSum}, $\frac{T_x(\alpha)}{3}=i\sqrt{3} \sum\limits_{r \in G_{\frac{k}{3},3}^1(1)} (\psi_{\alpha}(x^r) - \psi_{\alpha}(-x^r))$ is a rational algebraic integer, and hence an integer. Thus $\frac{T_x(\alpha)}{3}$ is an integer for each $\alpha \in \Gamma$.
\end{proof}
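Lemma~\ref{3DividesTn(q)} can be spot-checked in a small cyclic group. The sketch below (ours, not from the paper) takes $\Gamma=\mathbb{Z}_9$ and $x=1$ of order $9=3^2$, identifies $\langle\!\langle x \rangle\!\rangle$ with the exponent set $\{1,4,7\}$, and verifies that $T_x(\alpha)$ is a real integer divisible by $3$ for every $\alpha$.

```python
import cmath
from math import gcd

# Spot check (ours) of Lemma 3DividesTn(q) in Z_9 with x = 1 (order 9):
# T_x(alpha)/3 should be an integer for every character index alpha.
k = 9
expts = [a for a in range(k) if gcd(a, k) == 1 and a % 3 == 1]   # [1, 4, 7]
psi = lambda a, s: cmath.exp(2j * cmath.pi * a * s / k)

for alpha in range(k):
    T = 1j * 3**0.5 * sum(psi(alpha, a) - psi(alpha, -a) for a in expts)
    assert abs(T.imag) < 1e-9                          # T_x(alpha) is real
    assert abs(T.real / 3 - round(T.real / 3)) < 1e-9  # and divisible by 3
```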
\begin{lema}\label{3DividesTn(q)SameParity}
Let $\Gamma$ be a finite abelian group and $x\in \Gamma(3)$. Then $C_x(\alpha)$ and $\frac{T_x(\alpha)}{3}$ are integers of the same parity for each $\alpha \in \Gamma $.
\end{lema}
\begin{proof}
Let $x\in \Gamma(3)$ and $\alpha \in \Gamma $. By Lemma~\ref{integerEigenvalue}, $T_x(\alpha)+ C_x(\alpha)= 2 Z_x(\alpha)$ is an even integer, and therefore $T_x(\alpha)$ and $C_x(\alpha)$ are integers of the same parity. By Lemma~\ref{3DividesTn(q)}, $\frac{T_x(\alpha)}{3}$ is an integer; since $3$ is odd, $\frac{T_x(\alpha)}{3}$ has the same parity as $T_x(\alpha)$. Hence $C_x(\alpha)$ and $\frac{T_x(\alpha)}{3}$ are integers of the same parity.
\end{proof}
Let $S$ be a subset of $\Gamma$. For each $\alpha \in \Gamma$, define $$f_{\alpha}(S) = \sum_{s \in S \setminus \overline{S}} \psi_{\alpha}(s) \hspace{0.5cm}\textnormal{and}\hspace{0.5cm} g_{\alpha}(S)= \sum_{s \in \overline{S}}(\omega \psi_{\alpha}(s) + \overline{\omega}\psi_{\alpha}(-s)),$$ where $\omega=\frac{1}{2} - \frac{i\sqrt{3}}{6}$. It is clear that $f_{\alpha}(S)$ and $g_{\alpha}(S)$ are real numbers. We have $$\sum_{s \in S} \psi_{\alpha}(s)= f_{\alpha}(S)+ g_{\alpha}(S) + \left( \frac{-1}{2} + \frac{i\sqrt{3}}{2} \right)( g_{\alpha}(S) - g_{-\alpha}(S) ).$$ Note that $f_{\alpha}(S)=f_{-\alpha}(S)$ for each $\alpha \in \Gamma$. Therefore if $f_{\alpha}(S) + g_{\alpha}(S)$ is an integer for each $\alpha\in \Gamma$, then $g_{\alpha}(S)-g_{-\alpha}(S)= \left[ f_{\alpha}(S)+g_{\alpha}(S)\right]-\left[ f_{-\alpha}(S)+g_{-\alpha}(S)\right]$ is also an integer for each $\alpha \in \Gamma$. Hence, the mixed Cayley graph $\text{Cay}(\Gamma,S)$ is Eisenstein integral if and only if $f_{\alpha}(S) + g_{\alpha}(S)$ is an integer for each $\alpha \in \Gamma$.
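The displayed decomposition of $\sum_{s\in S}\psi_{\alpha}(s)$ into $f_{\alpha}(S)$ and $g_{\alpha}(S)$ holds for every subset $S$ and can be sanity-checked numerically. The sketch below (our addition, not part of the paper) does so for a random subset of $\mathbb{Z}_{12}$, assuming the standard characters $\psi_{\alpha}(s)=e^{2\pi i \alpha s/12}$.

```python
import cmath
import random

# Numeric check (ours) of the decomposition: for any S in Z_n,
# sum_{s in S} psi_a(s)
#   = f_a(S) + g_a(S) + (-1/2 + i*sqrt(3)/2)*(g_a(S) - g_{-a}(S)),
# with omega = 1/2 - i*sqrt(3)/6.
n = 12
random.seed(1)
S = random.sample(range(1, n), 5)
psi = lambda a, s: cmath.exp(2j * cmath.pi * a * s / n)

Ssym = [s for s in S if (-s) % n in S]      # S \ S-bar (symmetric part)
Sbar = [s for s in S if (-s) % n not in S]  # S-bar (skew part)
om = 0.5 - 1j * 3**0.5 / 6                  # omega
zeta = -0.5 + 1j * 3**0.5 / 2               # -1/2 + i*sqrt(3)/2

def f(a):
    return sum(psi(a, s) for s in Ssym)

def g(a):
    return sum(om * psi(a, s) + om.conjugate() * psi(a, -s) for s in Sbar)

for a in range(n):
    lhs = sum(psi(a, s) for s in S)
    rhs = f(a) + g(a) + zeta * (g(a) - g(-a))
    assert abs(lhs - rhs) < 1e-9
```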
\begin{lema}\label{CharaEisensteinIntegral} Let $S$ be a subset of a finite abelian group $\Gamma$ with $0 \not\in S$. Then the mixed Cayley graph $\text{Cay}(\Gamma,S)$ is Eisenstein integral if and only if $2 f_{\alpha}(S)$ and $2 g_{\alpha}(S)$ are integers of the same parity for each $\alpha \in \Gamma$. \end{lema} \begin{proof} Suppose the mixed Cayley graph $\text{Cay}(\Gamma,S)$ is Eisenstein integral and $\alpha \in \Gamma$. Then $f_{\alpha}(S) + g_{\alpha}(S)$ and $g_{\alpha}(S)-g_{-\alpha}(S)= \sum\limits_{s\in \overline{S}} \frac{-1}{3}\left[i\sqrt{3}\left(\psi_{\alpha}(s)-\psi_{\alpha}(-s)\right)\right]$ are integers. By Lemma~\ref{Sqrt3NecessIntSum}, $\sum\limits_{s\in \overline{S}\cup \overline{S}^{-1}} \psi_{\alpha}(s) \in \mathbb{Z}$. Since $$2 g_{\alpha}(S)= \sum\limits_{s\in \overline{S}\cup \overline{S}^{-1}} \psi_{\alpha}(s)- \sum\limits_{s\in \overline{S}}\frac{i\sqrt{3}}{3} (\psi_{\alpha}(s)- \psi_{\alpha}(-s)),$$ we find that $2 g_{\alpha}(S)$ is an integer. Therefore $2 f_{\alpha}(S)=2(f_{\alpha}(S)+g_{\alpha}(S))-2g_{\alpha}(S)$ is also an integer, and since $2(f_{\alpha}(S)+g_{\alpha}(S))$ is even, $2 f_{\alpha}(S)$ and $2 g_{\alpha}(S)$ have the same parity.
Conversely, assume that $2 f_{\alpha}(S)$ and $2 g_{\alpha}(S)$ are integers of the same parity for each $\alpha \in \Gamma$. Then $f_{\alpha}(S) + g_{\alpha}(S)$ is an integer for each $\alpha \in \Gamma$. Hence the mixed Cayley graph $\text{Cay}(\Gamma,S)$ is Eisenstein integral. \end{proof}
\begin{lema}\label{CharaEisensteinIntegral1} Let $S$ be a subset of a finite abelian group $\Gamma$ with $0 \not\in S$. Then the mixed Cayley graph $\text{Cay}(\Gamma,S)$ is Eisenstein integral if and only if $f_{\alpha}(S)$ and $g_{\alpha}(S)$ are integers for each $\alpha \in \Gamma$. \end{lema} \begin{proof} By Lemma~\ref{CharaEisensteinIntegral}, it is enough to show that $2 f_{\alpha}(S)$ and $2 g_{\alpha}(S)$ are integers of the same parity if and only if $f_{\alpha}(S)$ and $g_{\alpha}(S)$ are integers. If $f_{\alpha}(S)$ and $g_{\alpha}(S)$ are integers, then clearly $2 f_{\alpha}(S)$ and $2 g_{\alpha}(S)$ are even integers. Conversely, assume that $2 f_{\alpha}(S)$ and $2 g_{\alpha}(S)$ are integers of the same parity. Since $f_{\alpha}(S)$ is an algebraic integer, the integrality of $2 f_{\alpha}(S)$ implies that $f_{\alpha}(S)$ is an integer. Thus $2 f_{\alpha}(S)$ is even, and so by the assumption $2 g_{\alpha}(S)$ is also even. Hence $g_{\alpha}(S)$ is an integer. \end{proof}
\begin{theorem}\label{MinCharacEisensteinInteg} Let $S$ be a subset of a finite abelian group $\Gamma$ with $0\notin S$. Then the mixed Cayley graph $\text{Cay}(\Gamma,S)$ is Eisenstein integral if and only if $\text{Cay}(\Gamma,S)$ is HS-integral. \end{theorem} \begin{proof} By Lemma~\ref{CharaEisensteinIntegral1}, it is enough to show that $f_{\alpha}(S)$ and $g_{\alpha}(S)$ are integers for each $\alpha \in \Gamma$ if and only if $\text{Cay}(\Gamma,S)$ is HS-integral. Note that $f_{\alpha}(S)$ is an eigenvalue of the Cayley graph $\text{Cay}(\Gamma,S\setminus \overline{S})$. By Theorem~\ref{Cayint}, $f_{\alpha}(S)$ is an integer for each $\alpha \in \Gamma$ if and only if $S\setminus \overline{S}\in \mathbb{B}(\Gamma)$.
Assume that $f_{\alpha}(S)$ and $g_{\alpha}(S)$ are integers for each $\alpha \in \Gamma$. Then $ \sum\limits_{s\in \overline{S}} \frac{-i\sqrt{3}}{3}( \psi_{\alpha}(s)- \psi_{\alpha}(-s))=g_{\alpha}(S)-g_{-\alpha}(S)$ is also an integer for each $\alpha \in \Gamma$. Using Theorem~\ref{Cayint} and Lemma~\ref{CharaNewIntegSum}, we see that $S \setminus \overline{S}$ and $\overline{S}$ satisfy the conditions of Theorem~\ref{CharaHSintMixed}. Hence $\text{Cay}(\Gamma,S)$ is HS-integral.
Conversely, assume that $\text{Cay}(\Gamma,S)$ is HS-integral. Then $\text{Cay}(\Gamma,S\setminus \overline{S})$ is integral, and hence $f_{\alpha}(S)$ is an integer for each $\alpha \in \Gamma$. By Theorem~\ref{CharaHSintMixed}, we have $\overline{S} \in \mathbb{E}(\Gamma)$, and so $\overline{S}= \langle\!\langle x_1 \rangle\!\rangle \cup \ldots \cup \langle\!\langle x_k \rangle\!\rangle $ for some $x_1,\ldots,x_k \in \Gamma(3)$. Then
\begin{equation*}
\begin{split} g_{\alpha}(S)&=\frac{1}{2} \sum\limits_{s\in \overline{S}\cup \overline{S}^{-1}}\psi_{\alpha}(s) -\frac{1}{6} \sum\limits_{s\in \overline{S}}i \sqrt{3}\left(\psi_{\alpha}(s)-\psi_{\alpha}(-s)\right)\\ &=\frac{1}{2} \sum\limits_{j=1}^{k} \sum\limits_{s\in [ x_j ] }\psi_{\alpha}(s) -\frac{1}{6} \sum\limits_{j=1}^{k} \sum\limits_{s\in \langle\!\langle x_j \rangle\!\rangle}i \sqrt{3}\left(\psi_{\alpha}(s)-\psi_{\alpha}(-s)\right)\\ &= \frac{1}{2} \sum\limits_{j=1}^{k} C_{x_j}(\alpha) - \frac{1}{6} \sum\limits_{j=1}^{k} T_{x_j}(\alpha)\\ &= \frac{1}{2} \sum\limits_{j=1}^{k} \left( C_{x_j}(\alpha) - \frac{1}{3} T_{x_j}(\alpha) \right).
\end{split}
\end{equation*} By Lemma~\ref{3DividesTn(q)SameParity}, $C_{x_j}(\alpha) - \frac{1}{3} T_{x_j}(\alpha)$ is an even integer for each $j\in \{1,\ldots,k\}$. Hence $g_{\alpha}(S)$ is an integer for each $\alpha \in \Gamma$. \end{proof}
The following example illustrates Theorem~\ref{MinCharacEisensteinInteg}. \begin{ex} Consider the HS-integral graph $\text{Cay}(\mathbb{Z}_3 \times \mathbb{Z}_3, S)$ of Example~\ref{ex2}. By Theorem~\ref{MinCharacEisensteinInteg}, the graph $\text{Cay}(\mathbb{Z}_3 \times \mathbb{Z}_3, S)$ is Eisenstein integral. Indeed, the eigenvalues of $\text{Cay}(\mathbb{Z}_3 \times \mathbb{Z}_3, S)$ are obtained as \begin{equation*}
\begin{split} \gamma_{\alpha}=\psi_{\alpha}(0,1)+\psi_{\alpha}(1,0) + \psi_{\alpha}(2,0).
\end{split}\end{equation*} We have $\gamma_{(0,0)}=3, \gamma_{(0,1)}=2+ \omega_3, \gamma_{(0,2)}=1-\omega_3, \gamma_{(1,0)}=0, \gamma_{(1,1)}=-1+\omega_3,\gamma_{(1,2)}=-2-\omega_3, \gamma_{(2,0)}=0,$ $ \gamma_{(2,1)}=-1+\omega_3$ and $\gamma_{(2,2)}=-2-\omega_3$. Thus $\gamma_{\alpha}$ is an Eisenstein integer for each $\alpha \in \mathbb{Z}_3 \times \mathbb{Z}_3$. \qed \end{ex}
\end{document}
\begin{document}
\title{The recurrence formulas for primes and non-trivial zeros of the Riemann zeta function}
\author{Artur Kawalec}
\date{} \maketitle
\begin{abstract} In this article, we explore the Riemann zeta function with a perspective on primes and non-trivial zeros. We develop Golomb's recurrence formula for the $(n+1)$th prime and, assuming (RH), we propose an analytical recurrence formula for the $(n+1)$th non-trivial zero of the Riemann zeta function. Thus all non-trivial zeros up to the $n$th order must be known to generate the $(n+1)$th non-trivial zero. We also explore a variation of the recurrence formulas for primes based on the prime zeta function, which will be a basis for the development of the recurrence formulas for non-trivial zeros based on the secondary zeta function. In the last part, we review the presented formulas and outline the duality between primes and non-trivial zeros. The proposed formulas imply that all primes can be converted into an individual non-trivial zero (assuming RH) and, conversely, all non-trivial zeros can be converted into an individual prime (not assuming RH). Also, throughout this article, we summarize numerical computations and verify the presented results to high precision. \end{abstract}
\section{Introduction} The Riemann zeta function is defined by the infinite series \begin{equation}\label{eq:20} \zeta(s)=\sum_{n=1}^{\infty}\frac{1}{n^s}, \end{equation} which is absolutely convergent for $\Re(s)>1$, where $s=\sigma+it$ is a complex variable. The values for the first few special cases are:
\begin{equation}\label{eq:9} \begin{aligned} \zeta(1) &\sim\sum_{n=1}^{k}\frac{1}{n}\sim\gamma+\log(k) \quad \text{as}\quad k\to \infty,\\ \zeta(2) &=\frac{\pi^2}{6}, \\ \zeta(3) &=1.20205690315959\dots, \\ \zeta(4) &=\frac{\pi^4}{90}, \\ \zeta(5) &=1.03692775514337\dots. \end{aligned} \end{equation} For $s=1$, the series diverges asymptotically as $\gamma+\log(k)$, where $\gamma=0.5772156649\dots$ is the Euler-Mascheroni constant. The special values for even positive integer argument are given by Euler's formula \begin{equation}\label{eq:9} \zeta(2k) = \frac{\mid B_{2k}\mid}{2(2k)!}(2\pi)^{2k}, \end{equation} for which the value is expressed as a rational multiple of $\pi^{2k}$, where the constants $B_{2k}$ are Bernoulli numbers denoted such that $B_0=1$, $B_1=-\frac{1}{2}$, $B_2=\frac{1}{6}$ and so on. For odd positive integer argument, the values of $\zeta(s)$ converge to unique constants, which are not known to be expressible as a rational multiple of $\pi^{2k+1}$ as occurs in the even positive integer case. For $s=3$, the value is commonly known as Ap\'ery's constant, after Ap\'ery, who proved its irrationality.
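As a quick numerical illustration of Euler's formula (a minimal Python sketch of our own, independent of the PARI scripts used in this article; the helper names are ours), the Bernoulli numbers can be generated by the standard recurrence and compared against direct summation of (1):

```python
from fractions import Fraction
from math import comb, factorial, pi

def bernoulli(m):
    """Bernoulli numbers B_0..B_m with the convention B_1 = -1/2."""
    B = [Fraction(0)] * (m + 1)
    B[0] = Fraction(1)
    for n in range(1, m + 1):
        # recurrence sum_{k=0}^{n} binom(n+1, k) B_k = 0, solved for B_n
        B[n] = -Fraction(sum(comb(n + 1, k) * B[k] for k in range(n)), n + 1)
    return B

def zeta_euler(k, B):
    """Euler's formula: zeta(2k) = |B_{2k}| (2 pi)^{2k} / (2 (2k)!)."""
    return abs(float(B[2 * k])) * (2 * pi) ** (2 * k) / (2 * factorial(2 * k))

def zeta_direct(s, N=100000):
    """Partial sum of the defining series plus a first-order tail estimate."""
    return sum(n ** -s for n in range(1, N + 1)) + N ** (1 - s) / (s - 1)

B = bernoulli(8)
print(zeta_euler(1, B))  # pi^2/6  = 1.644934...
print(zeta_euler(2, B))  # pi^4/90 = 1.082323...
```

The exact rational arithmetic of `Fraction` avoids any rounding in the Bernoulli recurrence itself; only the final comparison is done in floating point.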
At the heart of the Riemann zeta function are prime numbers, which are encoded by Euler's product formula \begin{equation}\label{eq:20} \zeta(s)=\prod_{n=1}^{\infty}\left(1-\frac{1}{p_n^s}\right)^{-1} \end{equation} also valid for $\Re(s)>1$, where $p_1=2$, $p_2=3$, $p_3=5$, and so on denote the prime number sequence. The expression for the complex magnitude, or modulus, of the Euler prime product is
\begin{equation}\label{eq:7} \mid \zeta(\sigma+it) \mid^2 = \frac{\zeta(4\sigma)}{\zeta(2\sigma)}\prod_{n=1}^\infty \left(1-\frac{\cos(t\log p_n)}{\cosh(\sigma\log p_n)}\right)^{-1} \end{equation} for $\sigma>1$, which for a positive integer argument $\sigma=k$ simplifies the zeta terms using (3), resulting in \begin{equation}\label{eq:11}
\mid \zeta(k+it) \mid=(2\pi)^{k}\sqrt{\frac{|B_{4k}|(2k)!}{|B_{2k}|(4k)!}}\prod_{n=1}^\infty \left(1-\frac{\cos(t\log p_n)}{\cosh(k\log p_n)}\right)^{-1/2}. \end{equation} Using this form, the first few special values of this representation are
\begin{equation}\label{eq:29} \begin{aligned} \zeta(1) &\sim \frac{\pi}{\sqrt{15}}\prod_{n=1}^k \left(1-\frac{2}{p_n+p_n^{-1}}\right)^{-1/2} \sim e^{\gamma}\log(p_k),\\ \zeta(2) &= \frac{\pi^2}{\sqrt{105}}\prod_{n=1}^\infty \left(1-\frac{2}{p_n^2+p_n^{-2}}\right)^{-1/2}, \\ \zeta(3) &= \frac{\pi^3}{15} \sqrt{\frac{691}{3003}}\prod_{n=1}^\infty \left(1-\frac{2}{p_n^{3}+p_n^{-3}}\right)^{-1/2}, \\ \zeta(4) &= \frac{\pi^4}{45} \sqrt{\frac{3617}{17017}}\prod_{n=1}^\infty \left(1-\frac{2}{p_n^{4}+p_n^{-4}}\right)^{-1/2}, \\ \zeta(5) &= \frac{\pi^5}{225} \sqrt{\frac{174611}{323323}}\prod_{n=1}^\infty \left(1-\frac{2}{p_n^{5}+p_n^{-5}}\right)^{-1/2}, \end{aligned} \end{equation} where we let $t=0$ and reduced the hyperbolic cosine term as we have shown in [7][9]. The value for $\zeta(1)$ in terms of Euler prime product representation is asymptotic to $e^{\gamma}\log(p_k)$ due to Mertens's theorem as $k\to \infty$ [5][14]. Also, the arg of the Euler product can be found as \begin{equation}\label{eq:20} \text{arg } \zeta(\sigma+it) = -\sum_{n=1}^{\infty}\tan^{-1}\left(\frac{\sin(t\log p_n)}{p_n^{\sigma}-\cos(t\log p_n)}\right) \end{equation} thus writing the Euler product in polar form: \begin{equation}\label{eq:20}
\zeta(s) = |\zeta(s)|e^{\text{i arg } \zeta(s)}. \end{equation} The Euler prime product permits the primes to be individually extracted from the infinite product under certain limiting conditions, as we have shown in [6], thus yielding Golomb's formula for primes [4]. To illustrate this, when we expand the product, we have
\begin{equation}\label{eq:20} \zeta(s)=\left(1-\frac{1}{p_1^s}\right)^{-1}\left(1-\frac{1}{p_2^s}\right)^{-1}\left(1-\frac{1}{p_3^s}\right)^{-1}\ldots, \end{equation} and next, solving for the first prime $p_1$, we have \begin{equation}\label{eq:20} p_1=\left(1-\frac{\epsilon_2(s)}{\zeta(s)}\right)^{-1/s}, \end{equation} where \begin{equation}\label{eq:20} \epsilon_k(s)=\prod_{n=k}^{\infty}\left(1-\frac{1}{p_n^s}\right)^{-1} \end{equation} is the tail of the Euler product starting at $p_k$. When we then consider the limit
\begin{equation}\label{eq:20} p_1=\lim_{s\to \infty}\left(1-\frac{\epsilon_2(s)}{\zeta(s)}\right)^{-1/s}, \end{equation} then $\epsilon_2(s)\to 1$ at a faster rate than the Riemann zeta function, that is, $\zeta(s) \sim 1+O(p_1^{-s})$ while $\epsilon_2(s) \sim 1+O(p_2^{-s})$, and the gap $p_1^{-s}\gg p_2^{-s}$ only widens as $s\to \infty$; hence the contribution due to the Riemann zeta function dominates the limit, and the formula for the first prime becomes
\begin{equation}\label{eq:20} p_1=\lim_{s\to \infty}\left(1-\frac{1}{\zeta(s)}\right)^{-1/s}. \end{equation} Numerical computation of (14) for $s=10$ and $s=100$ is summarized in Table $1$, where we observe convergence to $p_1$. The next prime in the sequence is found the same way by solving for $p_2$ in (10) to obtain \begin{equation}\label{eq:20} p_2=\lim_{s\to \infty}\left[1-\frac{\left(1-\frac{1}{p_1^s}\right)^{-1}\epsilon_3(s)}{\zeta(s)}\right]^{-1/s}, \end{equation} where, similarly to before, $\epsilon_3(s)\to 1$ at a faster rate than the Riemann zeta function and the contribution due to the first prime product $(1-p_1^{-s})^{-1}$ as $s\to\infty$, where it cancels the first prime product in $\zeta(s)$, so that $(1-p_1^{-s})\zeta(s) \sim 1+O(p_2^{-s})$, while $\epsilon_3(s) \sim 1+O(p_3^{-s})$, and the gap $p_2^{-s}\gg p_3^{-s}$ increases rapidly as $s\to \infty$; hence the contribution due to the Riemann zeta function and the first prime product dominates the limit, and we have \begin{equation}\label{eq:20} p_2=\lim_{s\to \infty}\left[1-\frac{\left(1-\frac{1}{p_1^s}\right)^{-1}}{\zeta(s)}\right]^{-1/s}. \end{equation} Numerical computation of (16) for $s=10$ and $s=100$ is summarized in Table 1, and we observe convergence to $p_2$. The next prime follows the same pattern $(1-p_1^{-s})(1-p_2^{-s})\zeta(s) \sim 1+O(p_3^{-s})$, while $\epsilon_4(s) \sim 1+O(p_4^{-s})$, which results in \begin{equation}\label{eq:20} p_3=\lim_{s\to \infty}\left[1-\frac{\left(1-\frac{1}{p_1^s}\right)^{-1}\left(1-\frac{1}{p_2^s}\right)^{-1}}{\zeta(s)}\right]^{-1/s}. 
\end{equation} Hence, this process continues for the $(n+1)$th prime, and so if we define a partial Euler product up to the $n$th order as \begin{equation}\label{eq:20} Q_{n}(s)=\prod_{k=1}^{n}\left(1-\frac{1}{p_k^s}\right)^{-1} \end{equation} for $n\geq 1$, with $Q_0(s)=1$, then we obtain Golomb's formula for the prime $p_{n+1}$ \begin{equation}\label{eq:20} p_{n+1}=\lim_{s\to \infty}\left(1-\frac{Q_n(s)}{\zeta(s)}\right)^{-1/s}. \end{equation} We performed numerical computation of (19) in the PARI/GP software package, as it is an excellent platform for performing arbitrary precision computations [10], and its functionality will be very useful for the rest of this article. Before running any script, we recommend allocating a generous amount of memory, $\textbf{allocatemem(1000000000)}$, and setting the precision to a high value, for example $\textbf{\textbackslash p 2000}$. We tabulate the computational results in Table 1 for the $s=10$ and $s=100$ cases, and observe convergence to the prime $p_{n+1}$ based on knowledge of all primes up to the $n$th order. When we compute the $p_{1000}$ case, the $s=100$ variable is still too small to observe correct convergence; hence we performed a very high precision computation for $n=999$ and $s=10000$ with precision set to $50000$ decimal places, and now the true value of the prime is revealed:
\begin{equation}\label{eq:20} p_{1000}\approx7926.99958710978789301541492167\ldots. \end{equation} This formula will always converge because $p_n^{-s}\gg p_{n+1}^{-s}$ as $s\to \infty$, and also because the prime gaps are always bounded, which prevents higher order primes from modifying the main asymptote. It is just a matter of allowing the limit variable $s$ to tend to a large value; however, as seen, it is not very practical for computing large primes, as very high arbitrary precision is required. The script in PARI to compute the next prime using Golomb's formula (19) is shown in Listing $1$, and it was used to compute Table $1$. The precision must be set very high; we generally set it to $2000$ digits by default. \begin{table}[hbt!] \caption{The $p_{n+1}$ prime computed by equation (19) shown to $15$ decimal places.} \centering \begin{tabular}{c c c c} \hline\hline $n$ & $p_{n+1}$ & $s=10$ & $s=100$ \\[0.5ex]
\hline $0$ & $p_1$ & 1.996546424130332 & 1.999999999999999 \\ $1$ & $p_2$ & 2.998128944738979 & 2.999999999999999 \\ $2$ & $p_3$ & 4.982816482987932 & 4.9999999999999991\\ $3$ & $p_4$ & 6.990872151877531 & 6.999999999999999 \\ $4$ & $p_5$ & 10.795904253794409 & 10.999999993885992 \\ $5$ & $p_6$ & 12.882858209904345 & 12.999999999999709 \\ $6$ & $p_7$ & 16.454690036492369 & 16.999997488242396 \\ $7$ & $p_8$ & 18.700432429563358 & 18.999999999042078 \\ $8$ & $p_9$ & 22.653649208924189 & 22.999999999980263 \\ $9$ & $p_{10}$ & 27.560268802131417 & 28.999632082761238 \\ $99$ & $p_{100}$ & 429.143320774398099 & 539.114941393037977 \\ $999$ & $p_{1000}$ & 5017.353999786395028 & 7747.370093956440561 \\ [1ex] \hline \end{tabular} \label{table:nonlin} \end{table}
\lstset{language=C,caption={PARI script for computing equation (19).},label=DescriptiveLabel,captionpos=b} \begin{lstlisting}[frame=single] \\ Define partial Euler product up to nth order Qn(x,n)= {
prod(i=1,n,(1-1/prime(i)^x)^(-1)); }
\\ Compute the next prime {
n=10; \\ set n
s=100; \\ set limit variable
\\ compute next prime
pnext=(1-Qn(s,n)/zeta(s))^(-1/s);
print(pnext); } \end{lstlisting}
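For readers without PARI/GP, the mechanics of (19) can also be sketched in ordinary double-precision Python (our own illustration; without arbitrary precision it is only usable for the first few primes and moderate $s$):

```python
from math import fsum

def zeta(s, N=1000):
    """Partial sum of (1); for s = 20 the truncation error is about N^(1-s)."""
    return fsum(n ** -s for n in range(1, N + 1))

def golomb_next(known_primes, s):
    """Equation (19): p_{n+1} = (1 - Q_n(s)/zeta(s))^(-1/s)."""
    Q = 1.0
    for p in known_primes:
        Q /= 1.0 - p ** -s  # accumulate the partial Euler product Q_n(s)
    return (1.0 - Q / zeta(s)) ** (-1.0 / s)

print(golomb_next([], 20))      # ~2
print(golomb_next([2], 20))     # ~3
print(golomb_next([2, 3], 20))  # ~5
```

The exact summation `math.fsum` keeps the tiny difference $1-Q_n(s)/\zeta(s)$ from being swamped by rounding noise, which matters already at $p_3$ for $s=20$.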
The Riemann zeta function has many representations. One common form is the alternating series representation
\begin{equation}\label{eq:20} \zeta(s) = \frac{1}{1-2^{1-s}}\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n^s}, \end{equation} which is convergent for $\Re(s)>0$, with some exceptions at $\Re(s)=1$ due to the constant factor. By the application of the Euler-Maclaurin summation formula, the main series (1) can also be extended to the domain $\Re(s)>0$ by subtracting the pole in the limit as
\begin{equation}\label{eq:20} \zeta(s)=\lim_{k\to \infty}\Big\{\sum_{n=1}^{k-1}\frac{1}{n^s}-\frac{k^{1-s}}{1-s}\Big\}. \end{equation} Equations (21) and (22) are hence valid in the critical strip region $0<\Re(s)<1$.
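The alternating-series representation is easy to test numerically; here is a Python sketch of (21) (our own helper), checked at $s=2$ and at the real point $s=\frac{1}{2}$ inside the critical strip, where $\zeta(\frac{1}{2})=-1.4603545\ldots$:

```python
from math import fsum, pi

def zeta_alt(s, N=2 * 10 ** 5):
    """Equation (21): Dirichlet eta series divided by the factor 1 - 2^(1-s)."""
    eta = fsum((-1) ** (n + 1) * n ** -s for n in range(1, N + 1))
    return eta / (1.0 - 2.0 ** (1 - s))

print(zeta_alt(2))    # 1.6449... = pi^2/6
print(zeta_alt(0.5))  # -1.46... = zeta(1/2)
```

The alternating series converges slowly for small $\Re(s)$ (the truncation error is roughly half the first omitted term), so the accuracy at $s=\frac{1}{2}$ is only a few digits at this $N$.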
Another important representation of $\zeta(s)$ is the Laurent expansion about $s=1$ that gives a globally convergent series valid anywhere in the complex plane except at $s=1$ as
\begin{equation}\label{eq:20} \zeta(s)=\frac{1}{s-1}+\sum_{n=0}^{\infty}\gamma_n\frac{(-1)^n(s-1)^n}{n!}. \end{equation} The coefficients $\gamma_n$ are the Stieltjes constants, and $\gamma_0=\gamma$ is the usual Euler-Mascheroni constant. We observe that the $\gamma_n$ appear linearly in the series; hence, if we form a system of linear equations, then using Cramer's rule and some properties of a Vandermonde matrix, we find that the Stieltjes constants can be represented by the determinant of a certain matrix:
\begin{equation}\label{eq:20} \gamma_n = \pm\det(A_{n+1}) \end{equation} where the matrix $A_n(k)$ is the matrix $A(k)$ with its $n$th column replaced by the vector $B(k)$, as given next
\begin{gather} A(k)= \begin{pmatrix}1 & -\frac{1}{1!} & \frac{1^2}{2!} & -\frac{1^3}{3!} &\dots & \frac{1^k}{k!}\\ 1 & -\frac{2}{1!} & \frac{2^2}{2!} & -\frac{2^3}{3!} & \dots & \frac{2^k}{k!} \\ 1 & -\frac{3}{1!} & \frac{3^2}{2!} & -\frac{3^3}{3!} & \dots & \frac{3^k}{k!}\\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ 1 & -\frac{(k+1)}{1!} & \frac{(k+1)^2}{2!} & -\frac{(k+1)^3}{3!} &\dots & \frac{(k+1)^k}{k!}\end{pmatrix} \end{gather} and
\begin{gather} B(k)=
\begin{pmatrix}
\zeta(2)-1\\
\zeta(3)-\frac{1}{2}\\
\zeta(4)-\frac{1}{3}\\
\vdots \\
\ \zeta(k+1)-\frac{1}{k}\\
\end{pmatrix}. \end{gather} The $\pm$ sign depends on $k$, but to ensure a positive sign, the size of $k$ must be a multiple of $4$. Hence, the first few Stieltjes constants can be represented as:
\begin{gather} \gamma_0= \lim_{k\to\infty}\begin{vmatrix} \zeta(2)-1 & -\frac{1}{1!} & \frac{1^2}{2!} & -\frac{1^3}{3!} &\dots & \frac{(-1)^k 1^k}{k!}\\ \zeta(3)-\frac{1}{2} & -\frac{2}{1!} & \frac{2^2}{2!} & -\frac{2^3}{3!} & \dots & \frac{(-1)^k 2^k}{k!} \\ \zeta(4)-\frac{1}{3} & -\frac{3}{1!} & \frac{3^2}{2!} & -\frac{3^3}{3!} & \dots & \frac{(-1)^k 3^k}{k!}\\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ \zeta(k+1)-\frac{1}{k} & -\frac{(k+1)}{1!} & \frac{(k+1)^2}{2!} & -\frac{(k+1)^3}{3!} &\dots & \frac{(-1)^k(k+1)^k}{k!}\end{vmatrix}, \end{gather} and the next Stieltjes constant is \begin{gather} \gamma_1= \lim_{k\to\infty}\begin{vmatrix} 1 & \zeta(2)-1 & \frac{1^2}{2!} & -\frac{1^3}{3!} &\dots & \frac{(-1)^k 1^k}{k!}\\ 1 & \zeta(3)-\frac{1}{2} & \frac{2^2}{2!} & -\frac{2^3}{3!} & \dots & \frac{(-1)^k 2^k}{k!} \\ 1 & \zeta(4)-\frac{1}{3} & \frac{3^2}{2!} & -\frac{3^3}{3!} & \dots & \frac{(-1)^k 3^k}{k!}\\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ 1 & \zeta(k+1)-\frac{1}{k} & \frac{(k+1)^2}{2!} & -\frac{(k+1)^3}{3!} &\dots & \frac{(-1)^k(k+1)^k}{k!}\end{vmatrix}, \end{gather} and the next is \begin{gather} \gamma_2= \lim_{k\to\infty}\begin{vmatrix}1 & -\frac{1}{1!} & \zeta(2)-1 & -\frac{1^3}{3!} &\dots & \frac{(-1)^k 1^k}{k!}\\ 1 & -\frac{2}{1!} & \zeta(3)-\frac{1}{2} & -\frac{2^3}{3!} & \dots & \frac{(-1)^k 2^k}{k!} \\ 1 & -\frac{3}{1!} & \zeta(4)-\frac{1}{3}& -\frac{3^3}{3!} & \dots & \frac{(-1)^k 3^k}{k!}\\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ 1 & -\frac{(k+1)}{1!} & \zeta(k+1)-\frac{1}{k} & -\frac{(k+1)^3}{3!} &\dots & \frac{(-1)^k(k+1)^k}{k!}\end{vmatrix}, \end{gather} and so on. In Table $2$, we compute the determinant formula (24) for the first $10$ Stieltjes constants for $k=500$, and observe the convergence. In Listing $2$, the script in PARI to generate values for Table $2$ is also given. 
This shows that the constants $\gamma_n$ can be represented with the values $\zeta(m)$ at positive integer arguments as a basis \begin{equation}\label{eq:20} \gamma_n = \lim_{k\to\infty}\Bigg\{C_{n,1}(k)+\sum_{m=2}^{k+1}(-1)^m C_{n,m}(k)\zeta(m)\Bigg\} \end{equation} where the expansion coefficients $C_{n,m}$ are rational and grow very fast (diverge) as $k$ increases. The index $n\geq 0$ labels the $n$th Stieltjes constant, and the index $m\geq 1$ labels the $\zeta(m)$ basis value. These coefficients can be generated by expanding the determinant of $A_n$ using the Leibniz determinant rule along the column containing the zeta values. For example, for $k=12$, which is a multiple of $4$, the first few expansion coefficients are
\begin{equation}\label{eq:20} \gamma_0 \approx -\frac{86021}{27720}+12\zeta(2)-66\zeta(3)+220\zeta(4)-495\zeta(5)+792\zeta(6)-\ldots \end{equation} The $C_{0,1}$ coefficient is minus the harmonic number $H_{12}$ \begin{equation}\label{eq:20} C_{0,1}=-H_{k} = -\sum_{n=1}^{k}\frac{1}{n} \end{equation} and the next ones are \begin{equation}\label{eq:20} C_{0,m}=\binom{k}{m-1}. \end{equation} For the next $\gamma_n$, the first few coefficients for $k=12$ are
\begin{equation}\label{eq:20} \gamma_1 \approx -\frac{1676701}{415800}+\frac{58301}{2310}\zeta(2)-\frac{72161}{420}\zeta(3)+\frac{76781}{126}\zeta(4)-\frac{79091}{56}\zeta(5)+\frac{80477}{35}\zeta(6)-\ldots, \end{equation} and for the next $\gamma_n$, we have
\begin{equation}\label{eq:20} \gamma_2 \approx -\frac{5356117}{907200}+\frac{10418}{225}\zeta(2)-\frac{2270987}{6300}\zeta(3)+\frac{143644}{105}\zeta(4)-\frac{5520439}{1680}\zeta(5)+\frac{574108}{105}\zeta(6)-\ldots, \end{equation} and so on, but these coefficients are more difficult to determine and they diverge very fast.
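To see these expansions in action numerically, the $\gamma_0$ expansion with $C_{0,1}=-H_{k}$ and $C_{0,m}=\binom{k}{m-1}$ can be evaluated directly (a Python sketch of our own at $k=12$, where double precision still survives the large cancellations between terms):

```python
from math import comb, fsum

def zeta(s, N=100000):
    """Partial sum of (1) with a first-order Euler-Maclaurin tail correction."""
    return fsum(n ** -s for n in range(1, N + 1)) + N ** (1 - s) / (s - 1)

def gamma0_approx(k):
    """gamma_0 ~ -H_k + sum_{m=2}^{k+1} (-1)^m binom(k, m-1) zeta(m)."""
    H_k = fsum(1.0 / n for n in range(1, k + 1))
    return -H_k + fsum((-1) ** m * comb(k, m - 1) * zeta(m)
                       for m in range(2, k + 2))

print(gamma0_approx(12))  # 0.5772..., approaching the Euler-Mascheroni constant
```

For larger $k$ the binomial coefficients overwhelm double precision, which is why the paper's determinant computations at $k=500$ require the arbitrary precision of PARI/GP.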
\begin{table}[hbt!] \caption{The first $30$ digits of $\gamma_n$ computed by equation (24) for $k=500$.} \centering \begin{tabular}{c c c} \hline\hline $n$ & $\gamma_n$ & Significant Digits \\ [0.5ex]
\hline $0$ & 0.577215664901532860606512090082 & 34 \\ $1$ & -0.072815845483676724860586375874 & 34 \\ $2$ & -0.009690363192872318484530386035 & 33 \\ $3$ & 0.002053834420303345866160046542 & 32 \\ $4$ & 0.002325370065467300057468170177 & 31 \\ $5$ & 0.00079332381730106270175333487\underline{7} & 30 \\ $6$ & -0.0002387693454301996098724218\underline{4}2 & 29 \\ $7$ & -0.0005272895670577510460740975\underline{0}7 & 29 \\ $8$ & -0.000352123353803039509602052\underline{1}77 & 28 \\ $9$ & -0.000034394774418088048177914\underline{6}91 & 28 \\ $10$ & 0.0002053328149090647946837\underline{2}1922 & 26 \\ [1ex] \hline \end{tabular} \label{table:nonlin} \end{table}
\lstset{language=C,caption={PARI script for computing equation (24)},label=DescriptiveLabel,captionpos=b} \begin{lstlisting}[frame=single] {
n = 0; \\ set nth Stieltjes constant
k = 100; \\ set limit variable
An=matrix(k,k); \\ allocate matrix
\\ load matrix An
for(j=1,k,
for(i=1,k,
if(j==1+n,An[i,j]=zeta(i+1)-1/i,
An[i,j]=(-i)^(j-1)/factorial(j-1))));
\\ compute determinant of An
yn = matdet(An);
print(yn); } \end{lstlisting}
The Hadamard infinite product formula is another global analytically continued representation of (1) to the whole complex plane
\begin{equation}\label{eq:20} \zeta(s)=\frac{\pi^{\frac{s}{2}}}{2(s-1)\Gamma(1+\frac{s}{2})}\prod_{\rho}^{}\left(1-\frac{s}{\rho}\right) \end{equation} having a simple pole at $s=1$, and at the heart of this form is an infinity of complex non-trivial zeros $\rho_n=\sigma_n+it_n$, which are constrained to lie in the critical strip region $0<\Re(s)<1$. The infinite product is assumed to be taken over zeros in conjugate pairs. Hardy proved that there is an infinity of non-trivial zeros on the critical line at $\sigma=\frac{1}{2}$. It is not yet known whether there are non-trivial zeros off the critical line in the range $0<\Re(s)<1$ other than $\sigma=\frac{1}{2}$; this is the problem of the Riemann Hypothesis (RH). To date, a very large number of zeros has been verified numerically to lie on the critical line, and none has ever been found off the critical line. The first few non-trivial zeros on the critical line $\rho_n=\frac{1}{2}+it_n$ have imaginary components $t_1 = 14.13472514...$, $t_2 = 21.02203964...$, $t_3 = 25.01085758...$, which were originally found numerically using a solver but, if (RH) is true, can be computed analytically by the formula presented later in this article. Also, we will interchangeably refer to $\rho_n$ or $t_n$ to imply a non-trivial zero.
The Hadamard product representation can be interpreted as the volume of an s-ball (that is, a ball of complex dimension $s$). For a positive integer $n$, the n-ball consists of all points of the region $\Omega=\{x_1^2+x_2^2+x_3^2+\dots +x_n^2\leq R^2\}$, and integrating gives the total volume
\begin{equation}\label{eq:20} V(n)=\underset{\Omega}{\int\int\int\ldots\int} dx_1 dx_2 dx_3\ldots dx_n=K(n)R^n, \end{equation} where \begin{equation}\label{eq:20} K(n) = \frac{\pi^{\frac{n}{2}}}{\Gamma(1+\frac{n}{2})} \end{equation} is the proportionality constant. Now, when generalizing the n-ball to an s-ball of complex dimension $s$ for $\zeta(s)$, we can identify the factor involving $\pi$ and the $\Gamma$ function as $K(s)$, and the radius of the s-ball as the remaining product involving the non-trivial zeros \begin{equation}\label{eq:25} R(s)^s = \frac{1}{2(s-1)}\prod_{\rho}^{}\left(1-\frac{s}{\rho}\right) \end{equation} which is actually the Riemann xi function $\xi(s)$ multiplied by $(s-1)^{-1}$. Thus \begin{equation}\label{eq:25} \zeta(s) = V_s = K(s)R(s)^s \end{equation} can be understood as a volume quantity which, when packed into an s-ball, has its radius described explicitly by the non-trivial zeros. The trivial zeros at the negative even integers $-2,-4,-6,\ldots$ are then the zeros of the proportionality constant due to the poles of $\Gamma(1+\frac{s}{2})$. For example, if we consider $s=2$, then
\begin{equation}\label{eq:25} \begin{aligned} \zeta(2) & = K(2)R(2)^2 \\
& = \pi R^2 \end{aligned} \end{equation} where $R=\sqrt{\pi/6}=0.7236012545\ldots$ is the radius to give the volume quantity for $\zeta(2)$, which from (1) can be understood as packing the areas of squares with $1/n$ sides into a circle. And similarly for $s=3$ \begin{equation}\label{eq:25} \begin{aligned} \zeta(3) & = K(3)R(3)^3 \\
& = \frac{4}{3}\pi R^3 \end{aligned} \end{equation} where $R=0.6595972037\ldots$ is the radius to give the volume quantity for Ap\'ery's constant $\zeta(3)$, which from (1) can be understood as packing the volumes of cubes with $1/n$ sides into a sphere. Hence, in this view, the non-trivial zeros govern the radius of the s-ball, essentially encoding the volume information of $\zeta(s)$, while the trivial zeros are just the zeros of the proportionality constant $K(s)$, whose role is to scale the values of the non-trivial zeros across the dimension $s$ to the values that they currently are, perhaps even on the critical line. If we plot the radius in the range $1<\sigma<\infty$, we find a local minimum of $R$ between $s=2$ and $s=3$, at $s_{min}= 2.8992592006...$ with $R_{min}=0.6592484066\ldots$. This means that the s-ball reaches its minimum radius $R_{min}$ at $s_{min}$.
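The radius interpretation is easy to check on the real axis; a Python sketch (our own helpers) evaluating $R(s)=(\zeta(s)/K(s))^{1/s}$ with $\zeta$ computed by direct summation:

```python
from math import fsum, gamma, pi

def zeta(s, N=2000):
    """Partial sum of (1) with a first-order tail correction (fine for s >= 2)."""
    return fsum(n ** -s for n in range(1, N + 1)) + N ** (1 - s) / (s - 1)

def K(s):
    """Proportionality constant of the s-ball volume."""
    return pi ** (s / 2) / gamma(1 + s / 2)

def R(s):
    """Radius of the s-ball whose volume equals zeta(s)."""
    return (zeta(s) / K(s)) ** (1.0 / s)

print(R(2))  # 0.7236... = sqrt(pi/6)
print(R(3))  # 0.6595...
# crude grid search for the minimum of R(s) on (2, 3)
s_min = min((j / 500.0 for j in range(1000, 1500)), key=R)
print(s_min)  # close to 2.899
```

The grid search is deliberately coarse; it only confirms that the minimum lies between $s=2$ and $s=3$, roughly where the text locates $s_{min}$.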
Furthermore, if we consider the complex magnitude of $\zeta(s)$ for representations (21) and (22), we note that at each non-trivial zero on the critical line a harmonic series is induced, from which we can obtain formulas for the Euler-Mascheroni constant $\gamma$ expressed as a function of a single non-trivial zero on the critical line, namely \begin{equation}\label{eq:20} \gamma = \lim_{k\to \infty}\Bigg\{2\sum_{v=1}^{k}\sum_{u=v+1}^{k}\frac{(-1)^{u}(-1)^{v+1}}{\sqrt{uv}}\cos(t^{}_n \log(u/v))-\log(k)\Bigg\} \end{equation} and the second formula \begin{equation}\label{eq:20} \gamma = \lim_{k\to \infty}\Bigg\{\frac{k+1}{(\frac{1}{2})^2+t^{2}_{n}}-2\sum_{v=1}^{k}\sum_{u=v+1}^{k}\frac{1}{\sqrt{uv}}\cos(t^{}_n \log(u/v))-\log(k)\Bigg\}, \end{equation} where it is assumed that the index variables satisfy $u>v$ starting with $v=1$, as we have shown in [8][9]. Thus, any individual non-trivial zero $t_n$ on the critical line can be converted to $\gamma$, independently of (RH). As a numerical example, for $t_1$ and $k=10^5$, we compute $\gamma=0.5772\underline{1}81648\ldots$, accurate to $5$ decimal places; however, the computation becomes more demanding since the cost grows as $O(k^2)$ due to the double series. If we subtract equations (43) and (44), then we obtain the relation
\begin{equation}\label{eq:20}
\frac{1}{|\rho_n|^2}=\frac{1}{(\frac{1}{2})^2+t^{2}_{n}} = \lim_{k\to\infty}\frac{2}{\sqrt{k}}\sum_{m=1}^{k}\frac{1}{\sqrt{m}}\cos(t_n\log(m/k)) \end{equation} whereby any individual non-trivial zero can be converted to its absolute value on the critical line. Next, the infinite sum over non-trivial zeros
\begin{equation}\label{eq:20}
\sum_{n=1}^{\infty}\frac{1}{|\rho_n|^2}=\frac{1}{2}\gamma+1-\frac{1}{2}\log(4\pi), \end{equation} is an example of the secondary zeta function family, which will be discussed later.
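Relation (45) can be tested directly at a finite $k$; a Python sketch (our own helper names) using $t_1=14.1347251417\ldots$ and $k=10^5$:

```python
from math import cos, fsum, log, sqrt

t1 = 14.134725141734693  # imaginary part of the first non-trivial zero

def rhs(t, k=100000):
    """Right-hand side of (45) evaluated at finite k."""
    return (2.0 / sqrt(k)) * fsum(cos(t * log(m / k)) / sqrt(m)
                                  for m in range(1, k + 1))

lhs = 1.0 / (0.25 + t1 * t1)
print(lhs)      # 0.004999...
print(rhs(t1))  # close to the left-hand side
```

Note that the agreement relies on $t_1$ being a genuine zero ordinate; at a generic $t$ the finite-$k$ sum does not settle to $1/(\frac{1}{4}+t^2)$ this quickly.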
There is also another whole side to the theory of the Riemann zeta function, concerning the prime counting function $\pi(n)$ up to a given quantity $n$ and the non-trivial zero counting function $N(T)$ up to a given quantity $T$. It is natural to take the logarithm of the Euler prime product, yielding the sum
\begin{equation}\label{eq:20} \log[\zeta(s)] = \sum_{n=1}^{\infty}\sum_{m=1}^{\infty}\frac{1}{m}\frac{1}{p_n^{ms}}=s\int_{0}^{\infty}J(x)x^{-s-1}dx \quad \Re(s)>1 \end{equation} which motivates defining a step function $J(x)$ that increases by $1$ at each prime, by $\frac{1}{2}$ at each prime square, by $\frac{1}{3}$ at each prime cube, and so on, as shown in [3, p.22] and [15]. Riemann then expressed $J(x)$ by Fourier inversion as
\begin{equation}\label{eq:20} J(x)=\frac{1}{2\pi i}\int_{a-i\infty}^{a+i\infty}\log[\zeta(s)]\frac{x^s}{s}ds \quad(a>1). \end{equation} After finding a suitable expansion for $\log[\zeta(s)]$ in terms of non-trivial zeros using the xi function Weierstrass product over non-trivial zeros \begin{equation}\label{eq:25} \xi(s) = \frac{1}{2}\prod_{\rho}^{}\left(1-\frac{s}{\rho}\right), \end{equation} which is related to the zeta function by $\xi(s)=\pi^{-\frac{s}{2}}\zeta(s)(s-1)\Gamma(1+\frac{s}{2})$, and after a very detailed and lengthy analysis as shown in Edwards [3], the main formula for $J(x)$ appears as
\begin{equation}\label{eq:20} J(x) = \text{Li}(x)-\sum_{\rho}^{}\text{Li}(x^{\rho})-\log(2)+\int_{x}^{\infty}\frac{dt}{t(t^2-1)\log(t)} \end{equation} for $x>1$, where Li$(x)$ is the logarithmic integral, and then applying M\"{o}bius inversion leads to recovering \begin{equation}\label{eq:20} \pi(x)=\sum_{n=1}^{\infty}\frac{\mu(n)}{n}J(x^{1/n}). \end{equation} Hence, through this formula, the non-trivial zeros are shown to be involved in the generation of primes. Applying M\"{o}bius inversion in (51) to recover $\pi(x)$ is somewhat circular in this case, because one needs knowledge of all primes through $\mu(n)$; however, the main prime content is still in $J(x)$, which comes from the contribution of the non-trivial zero terms. In Fig 1, we plot $J(x)$ using (50) and observe that the curve approaches the step function as more non-trivial zeros (taken in conjugate pairs) are used.
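Away from the analytic machinery, the combinatorial content of (51) can be illustrated exactly with the step function $J$ built from a small prime table (a Python sketch of our own; exact integer roots and rational arithmetic keep the inversion exact):

```python
from fractions import Fraction

def primes_upto(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, n + 1, i):
                sieve[j] = False
    return [p for p in range(2, n + 1) if sieve[p]]

PRIMES = primes_upto(10000)

def prime_pi(x):
    """pi(x) from the table (valid for x <= 10000)."""
    return sum(1 for p in PRIMES if p <= x)

def iroot(x, k):
    """Largest integer r with r^k <= x (exact, avoids float roots)."""
    r = round(x ** (1.0 / k))
    while r ** k > x:
        r -= 1
    while (r + 1) ** k <= x:
        r += 1
    return r

def mobius(n):
    """Moebius function by trial division (fine for the tiny n needed here)."""
    mu, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            mu = -mu
        d += 1
    return -mu if n > 1 else mu

def J_at_root(x, n=1):
    """J(x^(1/n)) = sum_m pi(x^(1/(n m))) / m, a finite sum."""
    total, m = Fraction(0), 1
    while iroot(x, n * m) >= 2:
        total += Fraction(prime_pi(iroot(x, n * m)), m)
        m += 1
    return total

def pi_from_J(x):
    """Equation (51): pi(x) = sum_n mu(n)/n * J(x^(1/n))."""
    total, n = Fraction(0), 1
    while iroot(x, n) >= 2:
        total += Fraction(mobius(n), n) * J_at_root(x, n)
        n += 1
    return total

print(J_at_root(100))  # 428/15 = 28.533...
print(pi_from_J(100))  # 25, the exact prime count
```

Both sums terminate as soon as the relevant root drops below $2$, so for $x=100$ only a handful of terms are involved, and the inversion returns $\pi(100)=25$ exactly.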
Furthermore, in the analysis by LeClair [11] concerning $N(T)$, it is found that the $n$th non-trivial zero satisfies the following transcendental equation:
\begin{equation}\label{eq:20} \frac{t_n}{2\pi}\log\left(\frac{t_n}{2\pi e}\right)+\lim_{\delta\to 0}\frac{1}{\pi}\text{arg } \zeta(\frac{1}{2}+it_n+\delta)=n-\frac{11}{8}, \end{equation} however, the contribution due to the arg function is very small and only provides fine-level tuning to the overall equation; hence, dropping the arg term, LeClair obtained an approximate asymptotic formula for non-trivial zeros via the Lambert function $W(x)e^{W(x)}=x$ transformation:
\begin{equation}\label{eq:20} t_{n}\approx 2\pi\frac{n-\frac{11}{8}}{W\left(\frac{n-\frac{11}{8}}{e}\right)}. \end{equation} It turns out that this approximation works very well, with an accuracy down to a decimal place. For example, with this formula, we can quickly approximate the $10^{100}$th zero:
\begin{equation}\label{eq:20} \begin{aligned} t_{10^{100}}\approx && 28069038384289406990319544583825640008454803016284\\ && 6045192360059224930922349073043060335653109252473.23351 \end{aligned} \end{equation} in less than one second, and it should be accurate to within a decimal place. The Lambert $W$ function can be computed efficiently for large input arguments, and the approximate values for $t_n$ improve for higher zeros as $n\to\infty$. In fact, LeClair computed the largest non-trivial zero known to date, for $n=10^{10^6}$, using this method [12].
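As an aside, LeClair's asymptotic formula is easy to evaluate numerically. The following is a minimal Python sketch (our own illustration, not part of the original workflow; the helper names are our own) that implements the principal branch of the Lambert $W$ function by Newton's method and evaluates the approximation:

```python
import math

def lambert_w(x, tol=1e-15):
    """Principal branch of W(x)*e^W(x) = x via Newton's method (x > -1/e)."""
    w = math.log(x) if x > math.e else 1.0  # rough starting guess
    for _ in range(200):
        ew = math.exp(w)
        w_new = w - (w * ew - x) / (ew * (w + 1.0))
        if abs(w_new - w) < tol:
            break
        w = w_new
    return w

def t_approx(n):
    """LeClair's approximation t_n ~ 2*pi*(n - 11/8) / W((n - 11/8)/e)."""
    a = n - 11.0 / 8.0
    return 2.0 * math.pi * a / lambert_w(a / math.e)

print(t_approx(1))    # roughly 14.4, vs the true t_1 = 14.1347...
print(t_approx(100))  # roughly 236, vs the true t_100 = 236.5242...
```

For small $n$ the error is a few tenths of a unit; the approximation improves for larger $n$, as noted above.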
Also, very little is known about the properties of the non-trivial zeros. For example, they are strongly believed to be simple, but this remains unproven. And in the works of Wolf [16], a large sample of non-trivial zeros was numerically expanded into continued fractions, from which it was possible to compute the Khinchin constant, strongly suggesting that they are irrational.
In this article, we propose an analytical recurrence formula for $t_{n+1}$, very similar to Golomb's formula for primes; thus all non-trivial zeros up to $t_n$ must be known in order to compute the zero $t_{n+1}$. The formula is based on a certain representation of the secondary zeta function
\begin{equation}\label{eq:20} Z(s) = \sum_{n=1}^{\infty}\frac{1}{t_{n}^{s}} \end{equation} in the works of Voros [13], valid for $s>1$, which does not involve the non-trivial zeros, thus avoiding circular reasoning. There is already a substantial body of work on secondary zeta functions in the literature, especially concerning the meromorphic extension of $Z(s)$ via Mellin transform techniques and tools of spectral theory.
We now introduce the main result of this paper. Assuming (RH), the full recurrence formula for the $t_{n+1}$ non-trivial zero is:
\begin{equation}\label{eq:20}
t_{n+1} = \lim_{m\to\infty}\left[\frac{(-1)^{m+1}}{2}\left(\frac{1}{(2m-1)!}\log (|\zeta|)^{(2m)}\big(\frac{1}{2}\big)+\sum_{k=1}^{\infty}\frac{1}{\left(\frac{1}{2}+2k\right)^{2m}}-2^{2m}\right)-\sum_{k=1}^{n}\frac{1}{t_{k}^{2m}}\right]^{-\frac{1}{2m}} \end{equation} for $n\geq 0$; thus all non-trivial zeros up to the $n$th order must be known in order to generate the $(n+1)$th non-trivial zero. This formula is a solution to
\begin{equation}\label{eq:20} \zeta(s)=0 \end{equation} where $s=\rho_n=\frac{1}{2}+it_n$ for $\sigma_n=\frac{1}{2}$, and the zeros $t_n$ are real and ordered $0<t_1<t_2<t_3<\ldots<t_{n}$. This formula is satisfied by all representations of $\zeta(s)$ on the critical strip, such as (21), (22), (23), (36), and so on. In the next sections, we will develop this formula, explore some of its variations, and then numerically compute non-trivial zeros to high precision. We will also discuss some possible limitations of this formula as $n\to \infty$.
In the last section, we will discuss formulas for $t_{n}$ which can actually be related to the primes themselves, so that one could compute $t_n$ as a function of all primes and, conversely, compute any individual prime $p_{n}$ as a function of all non-trivial zeros.
\section{A variation of the $(n+1)$th prime formula} Golomb described several variations of the prime formulas of the form (19), one such being
\begin{equation}\label{eq:20} p_{n+1}=\lim_{s\to \infty}\left[\zeta(s)-Q_{n}(s)\right]^{-1/s}, \end{equation} which will serve to motivate the next result, based on the prime zeta function, which will in turn serve as a basis for the development of an analogous formula for the $(n+1)$th non-trivial zero in the next section.
The prime zeta function is an analogue of (1), but instead of summing over reciprocal integer powers, we sum over reciprocal prime powers as \begin{equation}\label{eq:20} P(s) = \sum_{n=1}^{\infty}\frac{1}{p_{n}^{s}}. \end{equation} When we consider the expanded sum \begin{equation}\label{eq:20} P(s) = \frac{1}{p_{1}^{s}}+\frac{1}{p_{2}^{s}}+\frac{1}{p_{3}^{s}}+\ldots \end{equation} then, as before, we wish to solve for $p_1$, and obtain
\begin{equation}\label{eq:20} \frac{1}{p_{1}^{s}}=P(s) -\frac{1}{p_{2}^{s}}-\frac{1}{p_{3}^{s}}-\ldots \end{equation} which leads to \begin{equation}\label{eq:20} p_1=\left(P(s) -\frac{1}{p_{2}^{s}}-\frac{1}{p_{3}^{s}}-\ldots\right)^{-1/s}. \end{equation} If we then consider the limit \begin{equation}\label{eq:20} p_1=\lim_{s\to\infty}\left(P(s) -\frac{1}{p_{2}^{s}}-\frac{1}{p_{3}^{s}}-\ldots\right)^{-1/s} \end{equation} then we find that the higher-order primes decay faster than $P(s)$; namely, $P(s)\sim p_1^{-s}$ while the trailing error is $O(p_2^{-s})$, and so $P(s)$ dominates the limit. Since $p_1^{-s}\gg p_2^{-s}$, we have \begin{equation}\label{eq:20} p_1=\lim_{s\to\infty}\left[P(s)\right]^{-1/s}. \end{equation} To find $p_2$ we consider (60) again \begin{equation}\label{eq:20} p_2=\lim_{s\to\infty}\left[P(s)-\frac{1}{p_1^s}-\frac{1}{p_3^s}-\ldots\right]^{-1/s}, \end{equation} and when taking the limit we must now retain $p_1$, while the higher-order primes decay faster; namely, $P(s)-p_1^{-s}\sim p_2^{-s}$ while the trailing error is $O(p_3^{-s})$, and so $P(s)-p_1^{-s}$ dominates the limit. Since $p_2^{-s}\gg p_3^{-s}$, we have \begin{equation}\label{eq:20} p_2=\lim_{s\to\infty}\left[P(s)-\frac{1}{p_1^s}\right]^{-1/s}. \end{equation} Similarly, the next prime is found the same way, but this time we must retain the two previous primes \begin{equation}\label{eq:20} p_3=\lim_{s\to\infty}\left[P(s)-\frac{1}{p_1^s}-\frac{1}{p_2^s}\right]^{-1/s}. \end{equation} Hence in general, if we define a partial prime zeta function up to the $n$th order \begin{equation}\label{eq:20} P_n(s) = \sum_{k=1}^{n}\frac{1}{p_{k}^{s}}, \end{equation} then the $(n+1)$th prime is \begin{equation}\label{eq:20} p_{n+1}=\lim_{s\to\infty}\left[P(s)-P_{n}(s)\right]^{-1/s}. \end{equation} At this point, knowing $P(s)$ only by the original definition (60) leads to circular reasoning, hence we seek other representations for $P(s)$ that do not involve the primes directly.
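To illustrate the mechanism of the recurrence concretely, here is a small Python sketch (our own illustration, not from the source). It evaluates $P(s)-P_4(s)$ at $s=100$ in exact rational arithmetic, since in double precision the tiny difference would be lost to cancellation, and recovers $p_5=11$:

```python
from fractions import Fraction

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, n + 1, i):
                sieve[j] = False
    return [i for i, is_p in enumerate(sieve) if is_p]

s = 100
ps = primes_up_to(300)  # tail beyond 300 contributes < 307^(-100), negligible
P  = sum(Fraction(1, p ** s) for p in ps)       # P(s), exact rational
P4 = sum(Fraction(1, p ** s) for p in ps[:4])   # partial sum over 2, 3, 5, 7
p5 = float(P - P4) ** (-1.0 / s)                # recurrence: [P(s) - P_n(s)]^(-1/s)
print(p5)  # 10.99999999... -> converging to p_5 = 11
```

The exact subtraction is the whole point: $P(100)-P_4(100)\approx 10^{-104}$ sits some $73$ orders of magnitude below $P(100)$ itself, far beyond double-precision cancellation.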
We explore the well-known relation
\begin{equation}\label{eq:20} \log[\zeta(s)]=\sum_{k=1}^{\infty}\frac{P(ks)}{k} \end{equation} and then by applying M\"{o}bius inversion leads to
\begin{equation}\label{eq:20} P(s)=\sum_{k=1}^{\infty}\mu(k)\frac{\log[\zeta(ks)]}{k}, \end{equation} where $\mu(k)$ is the M\"{o}bius function. This representation, however, still depends on the primes through $\mu(k)$, just like (51) for $J(x)$, so it may not be the best candidate for $P(s)$. If there are other representations for $P(s)$ not involving the primes, then one could certainly use them, but we are unaware of any. To verify (69), we instead pre-compute $P(s)$ from the primes to high precision, accepting the circularity for verification purposes. Hence, we pre-compute $P(s)$ for $s=10$ and $s=100$ as \begin{equation}\label{eq:20} P(10)=9.936035744369802178558507001 \times 10^{-4}\ldots \end{equation} and \begin{equation}\label{eq:20} P(100)=7.888609052210118073520537827\times 10^{-31}\ldots \end{equation} using a remainder estimation technique for (71) developed by Cohen in [2]. Next, we summarize the computation of $p_{n+1}$ by the recurrence formula (69) in Table 3, and observe the convergence to the prime $p_{n+1}$, just as with Golomb's formula for primes. And as before, the convergence works because
\begin{equation}\label{eq:20} O(p_n^{-s})\gg O(p_{n+1}^{-s})\quad \text{as }s\to\infty, \end{equation} and also because consecutive primes are strictly separated, which prevents any higher-order primes from modifying the main asymptote.
\begin{table}[hbt!] \caption{The $p_{n+1}$ prime computed by equation (69) shown to $15$ decimal places.} \centering \begin{tabular}{c c c c} \hline\hline $n$ & $p_{n+1}$ & $s=10$ & $s=100$ \\[0.5ex]
\hline $0$ & $p_1$ & 1.996543079767713 & 1.999999999999999 \\ $1$ & $p_2$ & 2.998128913153986 & 2.999999999999999 \\ $2$ & $p_3$ & 4.982816481260483 & 4.999999999999999\\ $3$ & $p_4$ & 6.990872151845387 & 6.999999999999999 \\ $4$ & $p_5$ & 10.79590425378718 & 10.999999993885992 \\ $5$ & $p_6$ & 12.88285820990352 & 12.999999999999709 \\ $6$ & $p_7$ & 16.45469003649213 & 16.999997488242396 \\ $7$ & $p_8$ & 18.70043242956331 & 18.999999999042078 \\ $8$ & $p_9$ & 22.65364920892418 & 22.999999999980263 \\ $9$ & $p_{10}$ & 27.5602688021314 & 28.999632082761238 \\ [1ex] \hline \end{tabular} \label{table:nonlin} \end{table}
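The value $P(10)$ quoted above can also be reproduced from the M\"{o}bius representation (71) itself, since the zeta values and the M\"{o}bius function are cheap to evaluate directly. The following Python sketch (our own sanity check, with self-chosen truncation limits) agrees with the quoted digits:

```python
import math

def zeta(s, terms=2000):
    """Truncated Dirichlet series; ample accuracy for s >= 10."""
    return sum(n ** -s for n in range(terms, 0, -1))  # sum small-to-large

def mobius(k):
    """Moebius function by trial-division factorization."""
    if k == 1:
        return 1
    result, d, n = 1, 2, k
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0       # squared prime factor
            result = -result
        d += 1
    if n > 1:
        result = -result
    return result

# P(s) = sum_k mu(k) * log(zeta(ks)) / k; terms decay like 2^(-ks)
P10 = sum(mobius(k) * math.log(zeta(10 * k)) / k for k in range(1, 9))
print(P10)  # 9.936035744369...e-04
```

Eight terms of the $k$-sum already put the truncation error far below double precision, since the $k$th term is of order $2^{-10k}$.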
\section{The recurrence formula for non-trivial zeros} The secondary zeta function has been studied in the literature, and there have been interesting developments concerning its analytic continuation to the whole complex plane for
\begin{equation}\label{eq:20} Z(s) = \sum_{n=1}^{\infty}\frac{1}{t_{n}^{s}} \end{equation} which has many parallels with the zeta function. In this article, the symbol $Z$ always denotes this function and is not related to the Hardy $Z$-function. The first few special values of $Z(s)$ are
\begin{equation}\label{eq:9} \begin{aligned}
Z(2) &=\frac{1}{2}(\log |\zeta|)^{(2)}\big(\frac{1}{2}\big)+\frac{1}{8}\pi^2+\beta(2)-4 \\
&=0.023104993115418\dots, \\
&\\ Z(3) &= 0.00072954\dots, \\
&\\
Z(4) &=-\frac{1}{12}(\log |\zeta|)^{(4)}\big(\frac{1}{2}\big)-\frac{1}{24}\pi^4-4\beta(4)+16 \\
&= 0.00037172599285\dots, \\
&\\ Z(5) &= 0.00000223\dots. \end{aligned} \end{equation}
The special values at even positive integer arguments $Z(2m)$ are given by: \begin{equation}\label{eq:20} \begin{aligned}
Z(2m) = (-1)^m \bigg[-\frac{1}{2(2m-1)!}(\log |\zeta|)^{(2m)}\big(\frac{1}{2}\big)\\
-\frac{1}{4}\left[(2^{2m}-1)\zeta(2m)+2^{2m}\beta(2m)\right]+2^{2m}\bigg] \end{aligned} \end{equation} and this is found in Voros [13, p. 693], where it is originally denoted as $\mathcal{Z}(2\sigma)$. This formula is a sort of analogue of Euler's formula (3) for $\zeta(2n)$, and is valid for integer $m\geq 1$, where $\beta(s)$ is the Dirichlet beta function
\begin{equation}\label{eq:20} \beta(s) = \sum_{n=0}^{\infty}\frac{(-1)^n}{(2n+1)^s}=\prod_{n=1}^{\infty}\left(1-\frac{\chi_{4}(p_n)}{p_n^s}\right)^{-1}, \end{equation} where $\chi_4$ is the Dirichlet character modulo $4$. The value $\beta(2)$ is the Catalan constant. In (76), the odd values $Z(2m+1)$ were computed numerically by summing $25000$ zeros, as it is not known whether a closed-form representation exists as in the $\zeta(2m+1)$ case, so the given values are only accurate to several decimal places. The formula (77) assumes (RH), and is the result of a complicated development to meromorphically extend (75) to the whole complex plane using tools from spectral theory. Furthermore, using the relation, also found in [13, p. 681], \begin{equation}\label{eq:20} \frac{1}{2^s}\zeta\big(s,\frac{5}{4}\big)=\sum_{k=1}^{\infty}\frac{1}{\left(\frac{1}{2}+2k\right)^s}=2^s\left[\frac{1}{2}\left((1-2^{-s})\zeta(s)+\beta(s)\right)-1\right], \end{equation} we obtain several variations of (77) for $Z(2m)$, one as \begin{equation}\label{eq:20}
Z(2m) = \frac{(-1)^{m+1}}{2} \left[\frac{1}{(2m-1)!}\log (|\zeta|)^{(2m)}\big(\frac{1}{2}\big)+\sum_{k=1}^{\infty}\frac{1}{\left(\frac{1}{2}+2k\right)^{2m}}-2^{2m}\right] \end{equation} and another as \begin{equation}\label{eq:20}
Z(2m) = \frac{(-1)^{m+1}}{2} \left[\frac{1}{(2m-1)!}\log (|\zeta|)^{(2m)}\big(\frac{1}{2}\big)+\frac{1}{2^{2m}}\zeta\big(2m,\frac{5}{4}\big)-2^{2m}\right]. \end{equation}
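Both $\beta(s)$ and the Hurwitz-zeta relation (79) are straightforward to check numerically. The following Python sketch (our own sanity check, with self-chosen truncation limits) computes $\beta(2)$, the Catalan constant, from the alternating series and compares both sides of (79) at $s=2$:

```python
import math

def beta(s, terms=10 ** 6):
    """Dirichlet beta by its alternating series; error < first omitted term."""
    return sum((-1) ** n / (2 * n + 1) ** s for n in range(terms))

catalan = beta(2)
print(catalan)  # 0.915965594... (the Catalan constant)

# Identity (79) at s = 2:
#   sum_{k>=1} (1/2 + 2k)^(-s) = 2^s * [((1 - 2^(-s)) zeta(s) + beta(s))/2 - 1]
lhs = sum((0.5 + 2 * k) ** -2 for k in range(1, 10 ** 6))
rhs = 2 ** 2 * (((1 - 2.0 ** -2) * math.pi ** 2 / 6 + catalan) / 2 - 1)
print(lhs, rhs)  # agree up to the ~1/(4*10^6) truncation tail of lhs
```

Here $\zeta(2)=\pi^2/6$ is used for the right-hand side, so the two sides are computed from genuinely different expressions.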
The expressions involving the $\log (|\zeta|)^{(2m)}\big(\frac{1}{2}\big)$ term can be computed numerically, independently of the non-trivial zeros; there is no known closed-form representation for it, but there is one for the odd-order derivatives \begin{equation}\label{eq:20}
\log (|\zeta|)^{(2m+1)}\big(\frac{1}{2}\big)=\frac{1}{2}(2m)!(2^{2m+1}-1)\zeta(2m+1)+\frac{1}{4}\pi^{2m+1}|E_{2m}|, \end{equation}
where $E_{2m}$ are the Euler numbers [13, p. 686]. Unfortunately, the $\log (|\zeta|)^{(2m+1)}(\frac{1}{2})$ term is not the one involved in the computation of $Z(2m)$. Also, the infinite series in (80) is related to the Hurwitz zeta function, and it can be separated into two parts involving the zeta function and the beta function, which can in turn be related to the primes via the Euler product; we will come back to this shortly.
Now we follow the same program as for the prime zeta function, outlined in equations (58) to (69). If we begin with the secondary zeta function \begin{equation}\label{eq:20} Z(s) = \frac{1}{t_{1}^{s}}+\frac{1}{t_{2}^{s}}+\frac{1}{t_{3}^{s}}+\ldots \end{equation} and then solve for $t_1$, we obtain
\begin{equation}\label{eq:20} \frac{1}{t_{1}^{s}}=Z(s) -\frac{1}{t_{2}^{s}}-\frac{1}{t_{3}^{s}}-\ldots \end{equation} and then \begin{equation}\label{eq:20} t_1=\left(Z(s) -\frac{1}{t_{2}^{s}}-\frac{1}{t_{3}^{s}}-\ldots\right)^{-1/s}. \end{equation} If we then consider the limit \begin{equation}\label{eq:20} t_1=\lim_{s\to\infty}\left(Z(s) -\frac{1}{t_{2}^{s}}-\frac{1}{t_{3}^{s}}-\ldots\right)^{-1/s} \end{equation} then, since $Z(s)\sim t_1^{-s}$ while the higher-order non-trivial zeros contribute only $O(t_2^{-s})$, the term $Z(s)$ dominates the limit, hence we have \begin{equation}\label{eq:20} t_1=\lim_{s\to\infty}\left[Z(s)\right]^{-1/s}. \end{equation} Now, substituting the representation (80) for $Z(s)$ into (87), with $s$ restricted to even integers $2m$ as the limit variable, we get a direct formula for $t_1$ as
\begin{equation}\label{eq:20}
t_{1} = \lim_{m\to\infty}\left[\frac{(-1)^{m+1}}{2}\left(\frac{1}{(2m-1)!}\log (|\zeta|)^{(2m)}\big(\frac{1}{2}\big)+\sum_{k=1}^{\infty}\frac{1}{\left(\frac{1}{2}+2k\right)^{2m}}-2^{2m}\right)\right]^{-\frac{1}{2m}}. \end{equation} Next we numerically verify this formula in PARI, and the script is shown in Listing $3$. We broke the representation (88) into several parts, A to D. Sufficient memory must be allocated, and the precision set high, before running the script. We utilize the Hurwitz zeta function representation, since it is available in PARI, and the \textbf{derivnum} function for computing the $m$th derivative very accurately for high $m$. The results are summarized in Table $4$ for various limit values of $m$ from low to high, and we can observe the convergence to the true value as $m$ increases. Already at $m=10$ we get several digits of $t_1$, and at $m=100$ we get over $30$ digits. We performed even higher precision computations, and the result clearly converges to $t_1$.
\begin{table}[hbt!] \caption{The computation of $t_1$ by equation (88) for different $m$.} \centering \begin{tabular}{c c c} \hline\hline m & $t_1$ (First 30 Digits) & Significant Digits\\ [0.5ex]
\hline $1$ & 6.578805783608427637281793074245 & 0 \\ $2$ & 12.806907343833847091925940068962 & 0 \\ $3$ & 13.809741306055624728153992726341 & 0 \\ $4$ & 14.038096225961619450676758199577 & 0 \\ $5$ & 14.\underline{1}02624784431488524304946186056 & 1 \\ $6$ & 14.\underline{1}23297656314161936112154413740 & 1 \\ $7$ & 14.1\underline{3}0464459254236820197453483721 & 2 \\ $8$ & 14.1\underline{3}3083993992268169646789606564 & 2 \\ $9$ & 14.13\underline{4}077755601528384660110026302 & 3 \\ $10$ & 14.13\underline{4}465134057435907124435534843 & 3 \\ $15$ & 14.1347\underline{2}1950874675119831881762569 & 5 \\ $20$ & 14.13472\underline{5}096741738055664458081219 & 6\\ $25$ & 14.13472514\underline{1}055464326339414131271 & 9 \\ $50$ & 14.134725141734693\underline{7}89641535771021 & 16 \\ $100$ & 14.134725141734693790457251983562 & 34 \\ [1ex] \hline \end{tabular} \label{table:nonlin} \end{table}
\lstset{language=C,caption={PARI script for computing equation (88).},label=DescriptiveLabel,captionpos=b} \begin{lstlisting}[frame=single] {
\\ set limit variable
m = 250;
\\ compute parameters A to D
A = derivnum(x=1/2,log(zeta(x)),2*m);
B = 1/factorial(2*m-1);
C = 2^(2*m);
D = (2^(-2*m))*zetahurwitz(2*m,5/4);
\\ compute Z(2m)
Z = (-1)^(m+1)*(1/2)*(A*B-C+D);
\\ compute t1
t1 = Z^(-1/(2*m));
print(t1); } \end{lstlisting}
Next, we perform a higher-precision computation for the $m=250$ case, and the result is
\begin{equation}\label{eq:20} \begin{aligned} t_1=14.13472514173469379045725198356247027078425711569924 & \\
317568556746014996342980925676494901\underline{0}212214333747\ldots \end{aligned} \end{equation} accurate to $87$ decimal places. In order to find the second non-trivial zero, we come back to (83), and solving for $t_2$ yields
\begin{equation}\label{eq:20} t_2=\lim_{s\to\infty}\left(Z(s) -\frac{1}{t_{1}^{s}}-\frac{1}{t_{3}^{s}}-\ldots\right)^{-1/s} \end{equation} and since the higher order zeros decay faster than $Z(s)-t_1^{-s}$, we then have
\begin{equation}\label{eq:20} t_2=\lim_{s\to\infty}\left(Z(s) -\frac{1}{t_{1}^{s}}\right)^{-1/s} \end{equation} and the zero becomes
\begin{equation}\label{eq:20}
t_{2} = \lim_{m\to\infty}\left[\frac{(-1)^{m+1}}{2}\left(\frac{1}{(2m-1)!}\log (|\zeta|)^{(2m)}\big(\frac{1}{2}\big)+\sum_{k=1}^{\infty}\frac{1}{\left(\frac{1}{2}+2k\right)^{2m}}-2^{2m}\right)-\frac{1}{t_{1}^{2m}}\right]^{-\frac{1}{2m}}. \end{equation} A numerical computation for $m=250$ yields
\begin{equation}\label{eq:20} \begin{aligned} t_2=21.0220396387715549926284795938969027773\underline{3}355195796311 & \\
4759442381621433519190301896683837161904986197676\ldots \end{aligned} \end{equation} which is accurate to $38$ decimal places, where the $t_1$ used was pre-computed to $2000$ decimal places by other means. We cannot use the $t_1$ computed earlier at the same precision, as it would cause a self-cancellation in (91); the numerical accuracy of $t_{n}$ must be much higher than that of $t_{n+1}$ to guarantee convergence. Continuing on, the next zero is computed as \begin{equation}\label{eq:20}
t_{3} = \lim_{m\to\infty}\left[\frac{(-1)^{m+1}}{2}\left(\frac{1}{(2m-1)!}\log (|\zeta|)^{(2m)}\big(\frac{1}{2}\big)+\sum_{k=1}^{\infty}\frac{1}{\left(\frac{1}{2}+2k\right)^{2m}}-2^{2m}\right)-\frac{1}{t_{1}^{2m}}-\frac{1}{t_{2}^{2m}}\right]^{-\frac{1}{2m}}. \end{equation} A numerical computation for $m=250$ yields
\begin{equation}\label{eq:20} \begin{aligned} t_3=25.010857580145688763213790992562821818659549\underline{6}5846378 & \\
3317371101068278652101601382278277606946676481041\ldots \end{aligned} \end{equation} which is accurate to $43$ decimal places, where $t_1$ and $t_2$ were used to sufficiently high precision ($2000$ decimal places in this example). Hence, just as for Golomb's prime recurrence formula and the prime zeta function $P(s)$, the same limit works for the non-trivial zeros. As a result, if we define a partial secondary zeta function up to the $n$th order \begin{equation}\label{eq:20} Z_n(s) = \sum_{k=1}^{n}\frac{1}{t_{k}^{s}}, \end{equation} then the $(n+1)$th non-trivial zero is
\begin{equation}\label{eq:20} t_{n+1}=\lim_{m\to\infty}\left[Z(m)-Z_{n}(m)\right]^{-1/m} \end{equation} and the main recurrence formula:
\begin{equation}\label{eq:20}
t_{n+1} = \lim_{m\to\infty}\left[\frac{(-1)^{m+1}}{2}\left(\frac{1}{(2m-1)!}\log (|\zeta|)^{(2m)}\big(\frac{1}{2}\big)+\sum_{k=1}^{\infty}\frac{1}{\left(\frac{1}{2}+2k\right)^{2m}}-2^{2m}\right)-\sum_{k=1}^{n}\frac{1}{t_{k}^{2m}}\right]^{-\frac{1}{2m}}. \end{equation} One can use any number of representations for $Z(s)$; the challenge will be to find more efficient algorithms to compute them. Finally, we report a numerical result for $Z(2m)$ for $m=250$: \begin{equation}\label{eq:20} \begin{aligned} Z = 7.18316934899718140841650578011166023417090863769600 & \\
8517536818521464413577481501771580460474425539208\times 10^{-576}\ldots. \end{aligned} \end{equation} Using this value in the recurrence, we computed the first $10$ non-trivial zeros, which are summarized in Table $5$ for $m=250$. The previously found non-trivial zeros, already known to high precision ($2000$ decimal places), were used in order to compute the $t_{n+1}$ zero. One cannot use the same $t_n$ obtained earlier, because it would cause self-cancellation in (98); the accuracy of $t_n$ must be much higher than that of $t_{n+1}$ to ensure convergence. Initially we started with an accuracy of $87$ digits after the decimal place for $t_1$, and it dropped to $7$ to $12$ digits by the time we reached the $t_{10}$ zero. There is also a sudden drop in accuracy when the gaps between zeros get too small. Hence, these formulas are not very practical for computing high zeros, as large numerical precision is required; for example, at the first Lehmer pair at $t_{6709}=7005.06288$, the gap to the next zero is only about $0.04$. Also, the average gap between zeros decreases as $t_{n+1}-t_{n}\sim\frac{2\pi}{\log(n)}$, making the formula progressively harder to use.
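The limit mechanism in (98) can also be illustrated in ordinary floating point using a handful of published zeros. The sketch below (our own illustration; it is circular, since it truncates $Z(2m)$ to ten known zeros quoted to 12 digits, but it shows how the subtraction isolates the next zero) uses \texttt{math.fsum} so that the cancellation of the first $n$ terms is exact:

```python
import math

# imaginary parts t_1..t_10 of the first zeta zeros (12-digit table values)
T = [14.134725141735, 21.022039638772, 25.010857580146, 30.424876125860,
     32.935061587739, 37.586178158826, 40.918719012148, 43.327073280915,
     48.005150881167, 49.773832477672]

def next_zero(n, m):
    """[Z(2m) - Z_n(2m)]^(-1/(2m)) with Z truncated to the ten zeros above."""
    terms = [t ** (-2 * m) for t in T] + [-t ** (-2 * m) for t in T[:n]]
    tail = math.fsum(terms)   # exact cancellation of the first n terms
    return tail ** (-1.0 / (2 * m))

print(next_zero(0, 50))  # -> 14.1347..., approaching t_1
print(next_zero(2, 50))  # -> 25.0108..., approaching t_3
```

At $2m=100$ the dominant surviving term exceeds the next one by many orders of magnitude, which is exactly the $O(t_n^{-s})\gg O(t_{n+1}^{-s})$ separation the recurrence relies on.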
\begin{table}[hbt!] \caption{The $t_{n+1}$ computed by equation (98).} \centering \begin{tabular}{c c c c} \hline\hline $n$ & $t_{n+1}$ & $m=250$ & Significant Digits \\ [0.5ex]
\hline $0$ & $t_{1}$ & 14.134725141734693790457251983562 & 87 \\ $1$ & $t_{2}$ & 21.022039638771554992628479593896 & 38 \\ $2$ & $t_{3}$ & 25.010857580145688763213790992562 & 43 \\ $3$ & $t_{4}$ & 30.424876125859513\underline{2}09940851142395 & 16 \\ $4$ & $t_{5}$ & 32.9350615877391896906623689640\underline{7}3 & 29 \\ $5$ & $t_{6}$ & 37.58617815882567125\underline{7}190902153280 & 18 \\ $6$ & $t_{7}$ & 40.918719012147\underline{4}63977678179889317 & 13 \\ $7$ & $t_{8}$ & 43.3270732809149995194961\underline{1}7449701 & 22 \\ $8$ & $t_{9}$ & 48.005150\underline{8}79831498066163921378664 & 7 \\ $9$ & $t_{10}$ & 49.77383247767\underline{2}299146155484901550 & 12 \\ [1ex] \hline \end{tabular} \label{table:nonlin} \end{table}
\section{Duality between primes and non-trivial zeros} We outline the duality between the primes and the non-trivial zeros. Golomb's recurrence formula (19) is an exact formula for the $(n+1)$th prime
\begin{equation}\label{eq:20} p_{n+1}=\lim_{s\to \infty}\left(1-\frac{Q_n(s)}{\zeta(s)}\right)^{-1/s}, \end{equation} and the Hadamard product formula establishes $\zeta(s)$ as a function of non-trivial zeros:
\begin{equation}\label{eq:20} \zeta(s)=\frac{\pi^{\frac{s}{2}}}{2(s-1)\Gamma(1+\frac{s}{2})}\prod_{\rho}^{}\left(1-\frac{s}{\rho}\right). \end{equation} Hence, this is a pathway from non-trivial zeros to the primes and without assuming (RH), as the Hadamard product is over all zeros. On the other hand, the recurrence formula for the $n$th+1 non-trivial zero is \begin{equation}\label{eq:20} \begin{aligned}
t_{n+1} = \lim_{m\to\infty}&\Bigg[\frac{(-1)^{m+1}}{2}\Big(\frac{1}{(2m-1)!}\log (|\zeta|)^{(2m)}\big(\frac{1}{2}\big)-2^{2m+1}+\\
& +2^{2m-1}\big((1-2^{-2m})\zeta(2m)+\beta(2m)\big)\Big)-\sum_{k=1}^{n}\frac{1}{t_{k}^{2m}}\Bigg]^{-\frac{1}{2m}} \end{aligned} \end{equation} where now one can substitute the Euler product for the zeta function, the beta function, or both, which is what we do next. We have
\begin{equation}\label{eq:20} \left(1-2^{-2m}\right)\zeta(2m)=\prod_{n=2}^{\infty}\left(1-\frac{1}{p_n^{2m}}\right)^{-1} \end{equation} and \begin{equation}\label{eq:20} \beta(2m)=\prod_{n=2}^{\infty}\left(1-\frac{\chi_4(p_n)}{p_n^{2m}}\right)^{-1}. \end{equation} As a result,
\begin{equation}\label{eq:20} \begin{aligned}
t_{n+1} = \lim_{m\to\infty}&\Bigg[\frac{(-1)^{m+1}}{2}\Bigg(\frac{1}{(2m-1)!}\log (|\zeta|)^{(2m)}\big(\frac{1}{2}\big)-2^{2m+1}+\\
& +2^{2m-1}\Big(\prod_{n=2}^{\infty}\big(1-p_n^{-2m}\big)^{-1}+\prod_{n=2}^{\infty}\big(1-\chi_4(p_n)p_n^{-2m}\big)^{-1}\Big)\Bigg)-\sum_{k=1}^{n}\frac{1}{t_{k}^{2m}}\Bigg]^{-\frac{1}{2m}} \end{aligned} \end{equation} which completes the pathway from the primes to the non-trivial zeros. We note that these formulas are independent, and thus avoid any circularity; however, the recurrence formula for $t_{n+1}$ depends on (RH). Finally, in Appendix A, we present a PARI script to compute (105) recursively for several zeros.
\section{Conclusion}
We explored various representations of the Riemann zeta function, such as the Euler prime product, the Laurent expansion, and Golomb's recurrence formula for primes. Golomb's formula is the basis for developing similar recurrence formulas for the $(n+1)$th non-trivial zero via an independent formula for the secondary zeta function $Z(2m)$, which does not involve the non-trivial zeros. Hence, the non-trivial zeros can be extracted in the appropriate limit, just like the prime numbers. We verified these formulas numerically, and they indeed converge to $t_{n+1}$. The difficulty lies in the computation of the $\log (|\zeta|)^{(2m)}(\frac{1}{2})$ term. We utilized the PARI/GP software package for computing $Z(2m)$ for $m=250$, and the first zero $t_1$ achieves $87$ correct digits after the decimal place. Computing beyond that presently causes the test computer to run out of memory. If better and more efficient methods for computing $Z(2m)$ are developed, then higher zeros can be computed accurately; but even then, computing up to the millionth zero, for example, would be almost insurmountable. An open question is whether the recurrence for the non-trivial zeros will hold up, namely whether $O(t_n^{-s})\gg O(t_{n+1}^{-s})$ as $s\to\infty$ remains effective, as the average gap between non-trivial zeros decreases as $t_{n+1}-t_{n}\sim\frac{2\pi}{\log(n)}$ when $n\to\infty$; in the case of Golomb's formula for primes, the separation between consecutive primes does not shrink.
These formulas also suggest a new criterion for (RH). It suffices to take the first zero $t_1$ represented by (88), which depends on (RH), as \begin{equation}\label{eq:20}
t_{1} = \lim_{m\to\infty}\left[\frac{(-1)^{m+1}}{2}\left(\frac{1}{(2m-1)!}\log (|\zeta|)^{(2m)}\big(\frac{1}{2}\big)+\sum_{k=1}^{\infty}\frac{1}{\left(\frac{1}{2}+2k\right)^{2m}}-2^{2m}\right)\right]^{-\frac{1}{2m}} \end{equation} and substituting it into any representation of $\zeta(s)$ valid in the critical strip to verify that \begin{equation}\label{eq:20} \zeta(\frac{1}{2}+i t_1)=0. \end{equation} For example, if we take equation (45) and substitute $t_1$ as \begin{equation}\label{eq:20}
\frac{1}{|\rho_1|^2}=\frac{1}{(\frac{1}{2})^2+t_1^{2}} = \lim_{k\to\infty}\frac{2}{\sqrt{k}}\sum_{m=1}^{k}\frac{1}{\sqrt{m}}\cos(t_1\log(m/k)), \end{equation} then recovering $t_1$ would imply (RH), if there were a way to evaluate the series. We would also like to extend these formulas to the secondary beta function
\begin{equation}\label{eq:20} B(s) = \sum_{n=1}^{\infty}\frac{1}{r_{n}^{s}}, \end{equation} where $r_n$ are the imaginary parts of the non-trivial zeros of $\beta(s)$ on the critical line. For example, the first few zeros are $r^{}_1 = 6.02094890...$, $r^{}_2 = 10.24377030...$, $r^{}_3 = 12.98809801...$. Then, the proposed recurrence formula would be \begin{equation}\label{eq:20} r_{n+1}=\lim_{s\to\infty}\left[B(s)-B_{n}(s)\right]^{-1/s}, \end{equation} where \begin{equation}\label{eq:20} B_n(s) = \sum_{k=1}^{n}\frac{1}{r_{k}^{s}} \end{equation} is the partial secondary beta function up to the $n$th order. And just as for the Dirichlet beta function, the same could potentially apply to other Dirichlet $L$-functions.
Finally, we highlighted the duality between the primes and the non-trivial zeros, whereby it is possible to convert the non-trivial zeros into an individual prime and, conversely, to convert all primes into an individual non-trivial zero.
\texttt{Email: art.kawalec@gmail.com}
\section{Appendix A} The script in Listing $4$ computes the $(n+1)$th non-trivial zero recursively from a set of primes by equation (105). The parameter \textbf{pmax} specifies the number of primes to use for the Euler products. The starting limit variable is \textbf{m}, and at each iteration \textbf{m} is decreased by a pre-set amount \textbf{step\textunderscore m}, so that the accuracy for $t_n$ will be greater than for $t_{n+1}$, in order to avoid self-cancellation. The computed zeros are stored in an array, and the partial secondary zeta function $Z_n$ is recomputed at every iteration. Depending on these parameters, the output can converge to different values, and in some cases will not converge. We tuned them to give $4$ zeros accurately; beyond that the iteration does not converge, and $m$ would have to be increased to a larger value. The results of running this script are summarized in Table $6$. As before, we obtain $t_1$ accurate to $87$ decimal places, but $t_2$ is now accurate to $26$ decimal places, and the next two zeros to $12$ and $1$ decimal places, respectively. At this point the iteration has run its course. We would like to increase $m$, but this is presently outside the range of the test computer.
\begin{table}[hbt!] \caption{The $t_{n+1}$ by PARI script in Listing $4$} \centering \begin{tabular}{c c c c c} \hline\hline $m$ & $n$ & $t_{n+1}$ & First $30$ digits of computed results & Significant Digits \\ [0.5ex]
\hline 250 & $0$ & $t_{1}$ & 14.134725141734693790457251983562 & 87 \\ 175 & $1$ & $t_{2}$ & 21.0220396387715549926284795\underline{9}4245 & 26 \\ 100 & $2$ & $t_{3}$ & 25.01085758014\underline{5}177681574221575793 & 12 \\ 25 & $3$ & $t_{4}$ & 30.\underline{4}13415903597141481192661667214 & 1 \\ [1ex] \hline \end{tabular} \label{table:nonlin} \end{table}
\lstset{language=C,caption={PARI script for generating non-trivial zeros from primes.},label=DescriptiveLabel,captionpos=b} \begin{lstlisting}[frame=single] {
m = 250; \\ starting limit variable m
step_m = -75; \\ decrease limit step_m
pmax = 2000; \\ set max number of primes
tn = vector(100); \\ allocate vector to hold zeros
n=1; \\ init non-trivial zero counter
\\ start loop
while(m > 0,
\\ compute parameters A to D
A = derivnum(x=1/2,log(zeta(x)),2*m);
B = 1/(factorial(2*m-1));
C = 2^(2*m+1); D = 2^(2*m-1);
\\ compute Euler products
P1 = prod(i=2,pmax,(1-1/prime(i)^(2*m))^(-1));
P2 = prod(i=2,pmax,
(1-(-1)^((prime(i)-1)/2)/prime(i)^(2*m))^(-1));
\\ compute Z(2m)
Z = 0.5*(-1)^(m+1)*(A*B-C+D*(P1+P2));
\\ compute Zn up to the (n-1)th order (empty sum is 0 when n==1)
Zn = sum(j=1, n-1, 1/tn[j]^(2*m));
\\ compute and print tn
tn[n] = (Z-Zn)^(-1/(2*m));
print(m, ":", tn[n]);
m = m+step_m; \\ decrease m by step_m
n = n+1; \\ increment zero counter
) } \end{lstlisting}
\end{document} |
\begin{document}
\begin{center}
{\LARGE \textbf{NARS vs. Reinforcement learning}}\\
{\Large ONA vs. $Q$-Learning}\\
{\large Project of AGI course}\\
{\large Ali Beikmohammadi}\\
\textit{Department of Computer and Systems Sciences\\
Stockholm University\\
SE-164 07 Kista, Sweden \\
\texttt{beikmohammadi@dsv.su.se} \\}
\end{center}
\begin{center}
\rule{\textwidth}{0.2mm}
\end{center}
\begin{abstract} A common real-world scenario requires an agent to take a sequence of optimal actions to accomplish a task. Reinforcement learning (RL) is the best-known approach to this kind of task in the machine learning community. Finding a suitable alternative is always an interesting, out-of-the-box question. Therefore, in this project, we investigate the capability of NARS and ask whether NARS has the potential to be a substitute for RL. In particular, we compare $Q$-Learning and ONA on several environments provided by the OpenAI Gym. The source code for the experiments is publicly available at the following link: \url{https://github.com/AliBeikmohammadi/OpenNARS-for-Applications/tree/master/misc/Python}.
\end{abstract}
\begin{center}
\rule{\textwidth}{0.2mm}
\end{center}
\section{Introduction and Background} Model-free reinforcement learning (RL) algorithms, which combine RL with high-capacity function approximators, give hope of automating a broad range of decision-making and control tasks \cite{wang2022sample,deisenroth2011pilco, sutton2018}. Researchers have solved challenging problems, e.g., in game playing \cite{mnih2015,silver2016, silver2017}, financial markets \cite{meng2019, fischer2018, 10.1145/3383455.3422540}, robotic control \cite{haarnoja2018soft, kober2013, lillicrap2015, schulman2015trust}, optimal control \cite{126844, PERRUSQUIA2021145}, healthcare \cite{CORONATO2020101964, 10.1145/3477600}, autonomous driving \cite{shalev2016, ref4}, and recommendation systems~\cite{rec1,rec2, ref3}. While RL works well for environments where infinite data can be generated, it does not feature compositional representations that would allow for data-efficient learning \cite{hammer2021autonomy}.
Recently, the Non-Axiomatic Reasoning System (NARS) has been proposed as a general-purpose reasoner that adapts under the Assumption of Insufficient Knowledge and Resources (AIKR) \cite{hammer2020reasoning, wang2009insufficient, wang2013non}. There have been several implementations based on this non-axiomatic logic, including OpenNARS \cite{hammer2016opennars} and ONA (OpenNARS for Applications) \cite{hammer2020opennars}. ONA is more capable than OpenNARS in terms of reasoning performance, and has recently been compared experimentally with RL \cite{eberding2020sage, hammer2021autonomy}. Specifically, in \cite{hammer2021autonomy}, a comparison was made between ONA and $Q$-Learning \cite{watkins1992q} on three simple environments: Space Invaders, Pong, and a grid robot. The results showed that ONA provided more stable results while maintaining almost the same performance in terms of success ratio.
In this software project, the aim is to compare $Q$-Learning, a basic RL algorithm, with ONA on several more challenging tasks. Comparing $Q$-Learning with ONA raises several challenges, which can be summarized as: \begin{itemize}
\item Statements instead of states,
\item Unobservable information,
\item One action in each step,
\item Multiple objectives,
\item Hierarchical abstraction,
\item Changing objectives,
\item Goal achievement as reward. \end{itemize} Their details can be found in \cite{hammer2021autonomy}. Since ONA does not assume that an action has to be chosen in every step, the authors of \cite{hammer2021autonomy} added an additional \textit{nothing} action for the $Q$-Learner in each example to make the techniques comparable. In this work, by contrast, we choose a random action whenever ONA does not recommend any action. We decided on this to preserve the originality of the tasks/environments as much as possible; moreover, taking random actions can benefit the agent in terms of exploring the environment. Unless stated otherwise, we adopt the assumptions of \cite{hammer2021autonomy} when comparing the two algorithms. The rest of this work is organized as follows. In Section \ref{section2}, we describe the environments as well as the hyperparameters used for both algorithms. Experimental results and analyses are reported in Section \ref{section3}, and additional figures are included in Appendix \ref{section4}.
\section{Hyperparameters and Environments} \label{section2} We compare ONA with a standard table-based $Q$-Learner implementation \cite{watkins1992q} with an exponentially decaying $\epsilon$ value. Specifically, we used the following fixed hyperparameters in all tasks: \begin{itemize}
\item $\alpha = 0.7$,
\item $\gamma = 0.618$,
\item $\epsilon_{max} = 1$,
\item $\epsilon_{min} = 0.01$,
\item decay $= 0.01$, \end{itemize} where $\alpha$ is the learning rate and $\gamma$, known as the discount factor, controls how much future rewards are favoured over short-term rewards. Besides, $\epsilon$ is the exploration rate: at step $t+1$ the agent selects a random action with probability $\epsilon$ instead of the action with the highest expected reward. For $\epsilon$ we employed an exponentially decaying schedule, $\epsilon = \epsilon_{min} + (\epsilon_{max} - \epsilon_{min})\exp(-\mathrm{decay} \cdot \mathrm{episode})$, where $\mathrm{episode}$ is the episode counter.
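The decay schedule and the tabular update can be sketched as follows (a minimal sketch; variable and function names are ours, and the $Q$-update shown is the standard Watkins rule):

```python
import math

# Hyperparameters used in all tasks.
ALPHA, GAMMA = 0.7, 0.618
EPS_MAX, EPS_MIN, DECAY = 1.0, 0.01, 0.01

def epsilon(episode):
    """Exponentially decaying exploration rate."""
    return EPS_MIN + (EPS_MAX - EPS_MIN) * math.exp(-DECAY * episode)

def q_update(Q, s, a, r, s_next):
    """Standard tabular Q-Learning update."""
    Q[s][a] += ALPHA * (r + GAMMA * max(Q[s_next]) - Q[s][a])

print(round(epsilon(0), 3))    # 1.0 at the first episode
print(round(epsilon(460), 3))  # 0.02 after many episodes
```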
On the other hand, regarding the ONA hyperparameters, specifically \textit{motorbabbling}, we used the default configuration of ONA v0.9.1 \cite{hammer2020opennars}. However, \textit{babblingops} was changed to account for the varying number of available actions in each environment. We also used \textit{setopname} to set the allowed actions in ONA.
\begin{figure}
\caption{CliffWalking-v0}
\label{CliffWalking-v0}
\caption{Taxi-v3}
\label{Taxi-v3}
\caption{FrozenLake-v1 4x4}
\label{FrozenLake-v1 4x4}
\caption{FrozenLake-v1 8x8}
\label{FrozenLake-v1 8x8}
\caption{FlappyBird-v0}
\label{FlappyBird-v0}
\caption{OpenAI gym environments used as experiment tasks}
\label{environments}
\end{figure}
We compare ONA to $Q$-Learning on a variety of challenging control tasks from the OpenAI gym benchmark suite \cite{brockman2016openai} (Figure \ref{environments}). Note that since both the ONA and $Q$-Learning algorithms were developed specifically for discrete tasks, we had to discretize the FlappyBird-v0 observation space; we did so in a different way for each algorithm, as described in detail below. Except for FlappyBird-v0, we use the original environments from \cite{brockman2016openai} without any modifications to the environment or the reward.
To make the practical comparison possible, the events ONA receives hold the same information as the corresponding states the $Q$-Learner receives in the simulated experiments, except for FlappyBird-v0. More specifically, as mentioned in \cite{hammer2021autonomy}, an observation $s$ is interpreted by ONA as the event (s. $:|:$), and by the $Q$-Learner simply as the current state. Both algorithms then suggest an operation/action, either by exploitation or sometimes randomly. After feeding the action to the environment, we receive a new observation, a reward, and some information about whether the goal was reached. The reward is used by the $Q$-Learner without any change, while ONA receives the event (G. $:|:$) only when the task is completely done; hence no event is produced for rewards related to anything other than finishing the task. This of course assumes that the goal does not change, as otherwise the $Q$-table entries would have to be re-learned, meaning the learned behavior would often no longer apply. For the purposes of this work, and for a fair comparison with Reinforcement Learning, the examples include a fixed objective.
\textbf{CliffWalking-v0:} This environment is part of the Toy Text environments. It is a simple implementation of the Gridworld Cliff reinforcement learning task, adapted from Example 6.6 (page 106) of Reinforcement Learning: An Introduction by Sutton and Barto \cite{sutton2018}. As shown in Figure \ref{CliffWalking-v0}, the board is a 4x12 matrix, with (using NumPy matrix indexing): [3, 0] as the start at bottom-left, [3, 11] as the goal at bottom-right, and [3, 1..10] as the cliff at bottom-center. If the agent steps on the cliff, it returns to the start, and an episode terminates when the agent reaches the goal. There are 4 discrete deterministic actions: 0: move up, 1: move right, 2: move down, 3: move left. As for observations, there are 3x12 + 1 possible states: the agent cannot be at the cliff, nor at the goal (as this ends the episode), which leaves all positions of the first 3 rows plus the bottom-left cell. The observation is simply the current position encoded as a flattened index. Regarding reward, each time step incurs -1 reward, and stepping into the cliff incurs -100 reward.
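The flattened-index observation can be sketched as follows (the helper name is ours):

```python
NCOLS = 12  # the CliffWalking-v0 board has 12 columns

def cliff_state(row, col):
    """Encode a board position as CliffWalking-v0's flattened index."""
    return row * NCOLS + col

print(cliff_state(3, 0))   # start state: 36
print(cliff_state(3, 11))  # goal state: 47
```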
\textbf{Taxi-v3:} This environment is also part of the Toy Text environments, based on the work of Tom Dietterich \cite{dietterich2000hierarchical}. In this environment, as depicted in Figure \ref{Taxi-v3}, there are four designated locations in the grid world indicated by R(ed), G(reen), Y(ellow), and B(lue). When the episode starts, the taxi starts off at a random square and the passenger is at a random location. The taxi drives to the passenger’s location, picks up the passenger, drives to the passenger’s destination (another one of the four specified locations), and then drops off the passenger. Once the passenger is dropped off, the episode ends.
There are 6 discrete deterministic actions: 0: move south, 1: move north, 2: move east, 3: move west, 4: pickup passenger, 5: drop off passenger. As for observations, there are 500 discrete states, since there are 25 taxi positions, 5 possible locations of the passenger (including the case when the passenger is in the taxi), and 4 destination locations. Note that 400 of these states can actually be reached during an episode; the missing states correspond to situations in which the passenger is at the same location as their destination, as this typically signals the end of an episode. Four additional states can be observed right after a successful episode, when both the passenger and the taxi are at the destination, giving a total of 404 reachable discrete states. Each state is represented by the tuple (taxi\_row, taxi\_col, passenger\_location, destination), and an observation is an integer that encodes the corresponding state. Passenger locations are 0: R(ed), 1: G(reen), 2: Y(ellow), 3: B(lue), and 4: in taxi. Destinations are likewise 0: R(ed), 1: G(reen), 2: Y(ellow), 3: B(lue). The agent is rewarded -1 per step unless another reward is triggered, +20 for delivering the passenger, and -10 for executing “pickup” or “drop-off” actions illegally.
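The integer encoding of the state tuple can be sketched as follows (the helper name is ours; the arithmetic mirrors the encoding used by the gym Taxi environment):

```python
def taxi_state(taxi_row, taxi_col, passenger_location, destination):
    """Encode (row, col, passenger, destination) as a single integer:
    25 taxi positions x 5 passenger locations x 4 destinations = 500."""
    return ((taxi_row * 5 + taxi_col) * 5 + passenger_location) * 4 + destination

print(taxi_state(0, 0, 0, 0))  # 0
print(taxi_state(4, 4, 4, 3))  # 499
```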
\textbf{FrozenLake-v1:} As depicted in Figures \ref{FrozenLake-v1 4x4} and \ref{FrozenLake-v1 8x8}, Frozen Lake involves crossing a frozen lake from Start (S) to Goal (G) without falling into any Holes (H) by walking over the Frozen (F) surface. The agent may not always move in the intended direction due to the slippery nature of the frozen lake. The agent takes a 1-element vector for actions. The action space is (dir), where dir decides the direction to move in, which can be: 0: LEFT, 1: DOWN, 2: RIGHT, 3: UP. The observation is a value representing the agent’s current position, computed as current\_row * ncols + current\_col (where both the row and col start at 0). For example, the goal position in the 4x4 map can be calculated as 3 x 4 + 3 = 15. The number of possible observations depends on the size of the map: the 4x4 map has 16 possible observations, while the 8x8 map has 64. The reward schedule is as follows: \begin{itemize}
\item Reach goal(G): +1,
\item Reach hole(H): 0,
\item Reach frozen(F): 0, \end{itemize}
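The position encoding described above can be sketched as follows (the function name is ours):

```python
def frozen_state(row, col, ncols):
    """FrozenLake-v1 position encoding: current_row * ncols + current_col."""
    return row * ncols + col

print(frozen_state(3, 3, 4))  # goal of the 4x4 map: 15
print(frozen_state(7, 7, 8))  # goal of the 8x8 map: 63
```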
While one could specify a custom map for Frozen Lake via the "desc" argument, for example desc=["SFFF", "FHFH", "FFFH", "HFFG"], we used the preloaded maps shown in Figures \ref{FrozenLake-v1 4x4} and \ref{FrozenLake-v1 8x8}, identified by the IDs 4x4 and 8x8, respectively. Thanks to the "is\_slippery" argument, we can even obtain a non-deterministic environment, which makes it very interesting to see how the algorithms behave on such a problem. In particular, if "is\_slippery" is True, the agent moves in the intended direction with probability 1/3 and otherwise moves in either perpendicular direction, each with probability 1/3. For example, if the action is left and "is\_slippery" is True, then P(move left) = 1/3, P(move up) = 1/3, and P(move down) = 1/3.
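A minimal sketch of these slippery transitions (names are ours; this mirrors the stated probabilities, not gym's internal implementation):

```python
import random

# Perpendicular directions for each action (0: LEFT, 1: DOWN, 2: RIGHT, 3: UP).
PERPENDICULAR = {0: (1, 3), 1: (0, 2), 2: (1, 3), 3: (0, 2)}

def slippery_move(action, rng=random):
    """With is_slippery=True, the intended direction and the two
    perpendicular ones are each taken with probability 1/3."""
    return rng.choice([action, *PERPENDICULAR[action]])
```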
\textbf{FlappyBird-v0:} As shown in Figure \ref{FlappyBird-v0}, the last environment is the Flappy Bird game. The implementation of the game's logic and graphics is based on the FlapPyBird project\footnote{https://github.com/sourabhv/FlapPyBird}. This environment yields simple numerical information about the game's state as observations. Specifically, the yielded attributes are: ($O_1$) the horizontal distance to the next pipe, and ($O_2$) the difference between the player's $y$ position and the next hole's $y$ position. The reward received by the agent in each step is one if the bird is still alive, and a score point is obtained every time the bird passes a pipe. The action taken by the agent can be 0: "do nothing" or 1: "flap". We also have access to a status report, which is true if the game is over and false otherwise. Since the observation space is continuous, we mapped it to a discrete space in order to use ONA and $Q$-Learning. Specifically, for ONA, the event is "round(100x$O_1$)\_round(1000x$O_2$). $:|:$", which could be, for instance, "138\_-4. $:|:$". However, since the states in a $Q$-table must correspond to specific rows, we had to subtly change the mapping to "$|$round(100x$O_1$)$|$+$|$round(1000x$O_2$)$|$", which yields "142" for our instance. One could conceivably find a better way to do this mapping.
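The two discretization mappings can be sketched as follows (function names are ours):

```python
def ona_event(o1, o2):
    """ONA event string for a FlappyBird-v0 observation (our mapping)."""
    return f"{round(100 * o1)}_{round(1000 * o2)}. :|:"

def q_state(o1, o2):
    """Q-table key: sum of the absolute rounded values (our mapping)."""
    return abs(round(100 * o1)) + abs(round(1000 * o2))

print(ona_event(1.38, -0.004))  # 138_-4. :|:
print(q_state(1.38, -0.004))    # 142
```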
In the next section, we examine both algorithms' performance in detail on all seven of these tasks.
\section{Results and Discussion} \label{section3} Due to time limits, each technique is run once per experiment (with random seed = 1), and the behavior of many quantities, shown in Figures \ref{Reward_vs_Time Step}, \ref{Cumulative_Successful_Episodes_vs_Time Step}, \ref{Epsilon_vs_Episodes}, \ref{Cumulative_Random_Action_vs_Time Step}, and \ref{Cumulative_Non-Random_Action_vs_Time Step}, is tracked at each time step across 100000 iterations. Details can be found in the source code at the following link: \url{https://github.com/AliBeikmohammadi/OpenNARS-for-Applications/tree/master/misc/Python}.
As can be seen from Figures \ref{Reward_vs_Time Step} and \ref{Cumulative_Successful_Episodes_vs_Time Step}, the results of the two algorithms depend strongly on the task, and neither can be considered superior across all environments. Specifically, the $Q$-Learning algorithm performed better on the CliffWalking-v0, Taxi-v3, and FlappyBird-v0 environments, while ONA is more promising on the environments based on FrozenLake-v1.
A very interesting point is ONA's ability to handle non-deterministic problems: it solves the slippery-enabled problems, as shown in Figures \ref{Reward_vs_Time_Step_FrozenLake-v1_4x4_Slippery}, \ref{Reward_vs_Time_Step_FrozenLake-v1_8x8_Slippery}, \ref{Cumulative_Successful_Episodes_vs_Time_Step_FrozenLake-v1_4x4_Slippery}, and \ref{Cumulative_Successful_Episodes_vs_Time_Step_FrozenLake-v1_8x8_Slippery}, while $Q$-Learning does not. By changing the hyperparameters of $Q$-Learning, it might eventually succeed as well. Note, however, that all time dependencies of hyperparameters are implicitly example-specific, and hence have to be avoided when generality is evaluated. As time passes, the reduction of the learning rate makes the $Q$-Learner take longer to change its policy when new circumstances demand it.
On the other hand, ONA generally offers more reliability thanks to having fewer hyperparameters. More specifically, ONA does not need an example-specific reduction of the learning rate and exploration rate to work well, and hence requires less parameter tuning. For instance, ONA does not rely on learning-rate decay: how much new evidence changes an existing belief depends only on the amount of evidence that already supports it, which automatically makes high-confidence beliefs more stable. This is why the learning behavior of ONA is more consistent.
\begin{figure}
\caption{CliffWalking-v0}
\label{Reward_vs_Time_Step_CliffWalking-v0}
\caption{Taxi-v3}
\label{Reward_vs_Time_Step_Taxi-v3}
\caption{FrozenLake-v1 4x4}
\label{Reward_vs_Time_Step_FrozenLake-v1 4x4}
\caption{FrozenLake-v1 4x4 Slippery}
\label{Reward_vs_Time_Step_FrozenLake-v1_4x4_Slippery}
\caption{FrozenLake-v1 8x8}
\label{Reward_vs_Time_Step_FrozenLake-v1 8x8}
\caption{FrozenLake-v1 8x8 Slippery}
\label{Reward_vs_Time_Step_FrozenLake-v1_8x8_Slippery}
\caption{FlappyBird-v0}
\label{Reward_vs_Time_Step_FlappyBird-v0}
\caption{Reward vs. Time steps. The reward is measured at time steps where the episode ends (by reaching the goal, truncating the episode length, falling into the hole, falling from the cliff, hitting the pipe.)}
\label{Reward_vs_Time Step}
\end{figure}
\begin{figure}
\caption{CliffWalking-v0}
\label{Cumulative_Successful_Episodes_vs_Time_Step_CliffWalking-v0}
\caption{Taxi-v3}
\label{Cumulative_Successful_Episodes_vs_Time_Step_Taxi-v3}
\caption{FrozenLake-v1 4x4}
\label{Cumulative_Successful_Episodes_vs_Time_Step_FrozenLake-v1 4x4}
\caption{FrozenLake-v1 4x4 Slippery}
\label{Cumulative_Successful_Episodes_vs_Time_Step_FrozenLake-v1_4x4_Slippery}
\caption{FrozenLake-v1 8x8}
\label{Cumulative_Successful_Episodes_vs_Time_Step_FrozenLake-v1 8x8}
\caption{FrozenLake-v1 8x8 Slippery}
\label{Cumulative_Successful_Episodes_vs_Time_Step_FrozenLake-v1_8x8_Slippery}
\caption{Cumulative Successful Episodes vs. Time steps.}
\label{Cumulative_Successful_Episodes_vs_Time Step}
\end{figure}
In Figure \ref{Epsilon_vs_Episodes}, we plot the decay of $\epsilon$, which directly determines the probability of choosing a random action in $Q$-Learning. Figures \ref{Cumulative_Random_Action_vs_Time Step} and \ref{Cumulative_Non-Random_Action_vs_Time Step} then show the behavior of the two algorithms in terms of selecting random versus non-random actions. Regarding $Q$-Learning, since the selection of random actions is controlled solely by $\epsilon$, the probability of performing a random action clearly drops sharply after a short period of time. This means that if a good policy has not yet been found, or if the environment changes, the agent will not be able to solve the task: the reduction of $\epsilon$ over time makes it increasingly unlikely to attempt alternative solutions.
ONA, on the other hand, sometimes suggests no action at all, in which case we choose a random action, as shown in Figure \ref{Cumulative_Random_Action_vs_Time Step}. In addition, thanks to the \textit{motorbabbling} parameter, the system itself can suggest a random action in order to explore the environment. ONA reduces \textit{motorbabbling} by itself once the hypotheses it bases its decisions on are stable and predict successfully, and hence does not depend on a time-dependent reduction of the exploration rate either. Note, however, that some of the actions plotted in Figure \ref{Cumulative_Non-Random_Action_vs_Time Step}, despite being labeled non-random (since they were chosen by the system and not by us), are actually random and exploratory in nature. This could be a reason for ONA's success, despite not receiving rewards as frequently as $Q$-Learning. In summary, ONA has the capability to deal with multiple and changing objectives, while also demanding less implicitly example-dependent parameter tuning than $Q$-Learning.
\begin{figure}
\caption{CliffWalking-v0}
\label{Epsilon_vs_Episodes_CliffWalking-v0}
\caption{Taxi-v3}
\label{Epsilon_vs_Episodes_Taxi-v3}
\caption{FrozenLake-v1 4x4}
\label{Epsilon_vs_Episodes_FrozenLake-v1 4x4}
\caption{FrozenLake-v1 4x4 Slippery}
\label{Epsilon_vs_Episodes_FrozenLake-v1_4x4_Slippery}
\caption{FrozenLake-v1 8x8}
\label{Epsilon_vs_Episodes_FrozenLake-v1 8x8}
\caption{FrozenLake-v1 8x8 Slippery}
\label{Epsilon_vs_Episodes_FrozenLake-v1_8x8_Slippery}
\caption{FlappyBird-v0}
\label{Epsilon_vs_Episodes_FlappyBird-v0}
\caption{Decaying $\epsilon$ used for $Q$-Learning vs. Episodes. The $\epsilon$ value is updated just before starting a new episode. }
\label{Epsilon_vs_Episodes}
\end{figure}
\begin{figure}
\caption{CliffWalking-v0}
\label{Cumulative_Random_Action_vs_Time_Step_CliffWalking-v0}
\caption{Taxi-v3}
\label{Cumulative_Random_Action_vs_Time_Step_Taxi-v3}
\caption{FrozenLake-v1 4x4}
\label{Cumulative_Random_Action_vs_Time_Step_FrozenLake-v1 4x4}
\caption{FrozenLake-v1 4x4 Slippery}
\label{Cumulative_Random_Action_vs_Time_Step_FrozenLake-v1_4x4_Slippery}
\caption{FrozenLake-v1 8x8}
\label{Cumulative_Random_Action_vs_Time_Step_FrozenLake-v1 8x8}
\caption{FrozenLake-v1 8x8 Slippery}
\label{Cumulative_Random_Action_vs_Time_Step_FrozenLake-v1_8x8_Slippery}
\caption{FlappyBird-v0}
\label{Cumulative_Random_Action_vs_Time_Step_FlappyBird-v0}
\caption{Cumulative Random Action vs. Time steps.}
\label{Cumulative_Random_Action_vs_Time Step}
\end{figure}
\begin{figure}
\caption{CliffWalking-v0}
\label{Cumulative_Non-Random_Action_vs_Time_Step_CliffWalking-v0}
\caption{Taxi-v3}
\label{Cumulative_Non-Random_Action_vs_Time_Step_Taxi-v3}
\caption{FrozenLake-v1 4x4}
\label{Cumulative_Non-Random_Action_vs_Time_Step_FrozenLake-v1 4x4}
\caption{FrozenLake-v1 4x4 Slippery}
\label{Cumulative_Non-Random_Action_vs_Time_Step_FrozenLake-v1_4x4_Slippery}
\caption{FrozenLake-v1 8x8}
\label{Cumulative_Non-Random_Action_vs_Time_Step_FrozenLake-v1 8x8}
\caption{FrozenLake-v1 8x8 Slippery}
\label{Cumulative_Non-Random_Action_vs_Time_Step_FrozenLake-v1_8x8_Slippery}
\caption{FlappyBird-v0}
\label{Cumulative_Non-Random_Action_vs_Time_Step_FlappyBird-v0}
\caption{Cumulative Non-Random Action vs. Time steps.}
\label{Cumulative_Non-Random_Action_vs_Time Step}
\end{figure}
As future work, some of our examples can easily be extended to multi-objective scenarios and scenarios with changing objectives, to show merits in these areas. This technique could also be used to address the sample-efficiency challenge of the new generation of RL algorithms that use deep learning methods as function approximators, methods which have achieved outstanding performance on complicated tasks such as plant identification \cite{beikmohammadi2020swp, BEIKMOHAMMADI2022117470}, handwritten digit recognition \cite{beikmohammadi2021hierarchical}, and human action detection \cite{beikmohammadi2019mixture}. In addition, one could consider aggregating both approaches to solve a problem, in scenarios including voting, hierarchical combination, and teacher-student learning. Overall, both approaches were comparable in performance on average; the reasoning-based approach therefore provides a likely alternative for such problems, while performing better whenever the problem is non-deterministic.
\appendix \section{Additional Learning Curves} \label{section4} Additional figures are included in this section.
\begin{figure}
\caption{CliffWalking-v0}
\label{Reward_vs_Episodes_CliffWalking-v0}
\caption{Taxi-v3}
\label{Reward_vs_Episodes_Taxi-v3}
\caption{FrozenLake-v1 4x4}
\label{Reward_vs_Episodes_FrozenLake-v1 4x4}
\caption{FrozenLake-v1 4x4 Slippery}
\label{Reward_vs_Episodes_FrozenLake-v1_4x4_Slippery}
\caption{FrozenLake-v1 8x8}
\label{Reward_vs_Episodes_FrozenLake-v1 8x8}
\caption{FrozenLake-v1 8x8 Slippery}
\label{Reward_vs_Episodes_FrozenLake-v1_8x8_Slippery}
\caption{FlappyBird-v0}
\label{Reward_vs_Episodes_FlappyBird-v0}
\caption{Reward vs. Episodes. The reward is measured at time steps where the episode ends (by reaching the goal, truncating the episode length, falling into the hole, falling from the cliff, hitting the pipe.)}
\label{Reward_vs_Episodes}
\end{figure}
\begin{figure}
\caption{CliffWalking-v0}
\label{Cumulative_Successful_Episodes_vs_Episodes_CliffWalking-v0}
\caption{Taxi-v3}
\label{Cumulative_Successful_Episodes_vs_Episodes_Taxi-v3}
\caption{FrozenLake-v1 4x4}
\label{Cumulative_Successful_Episodes_vs_Episodes_FrozenLake-v1 4x4}
\caption{FrozenLake-v1 4x4 Slippery}
\label{Cumulative_Successful_Episodes_vs_Episodes_FrozenLake-v1_4x4_Slippery}
\caption{FrozenLake-v1 8x8}
\label{Cumulative_Successful_Episodes_vs_Episodes_FrozenLake-v1 8x8}
\caption{FrozenLake-v1 8x8 Slippery}
\label{Cumulative_Successful_Episodes_vs_Episodes_FrozenLake-v1_8x8_Slippery}
\caption{Cumulative Successful Episodes vs. Episodes.}
\label{Cumulative_Successful_Episodes_vs_Episodes}
\end{figure}
\begin{figure}
\caption{CliffWalking-v0}
\label{Epsilon_vs_Time_Step_CliffWalking-v0}
\caption{Taxi-v3}
\label{Epsilon_vs_Time_Step_Taxi-v3}
\caption{FrozenLake-v1 4x4}
\label{Epsilon_vs_Time_Step_FrozenLake-v1 4x4}
\caption{FrozenLake-v1 4x4 Slippery}
\label{Epsilon_vs_Time_Step_FrozenLake-v1_4x4_Slippery}
\caption{FrozenLake-v1 8x8}
\label{Epsilon_vs_Time_Step_FrozenLake-v1 8x8}
\caption{FrozenLake-v1 8x8 Slippery}
\label{Epsilon_vs_Time_Step_FrozenLake-v1_8x8_Slippery}
\caption{FlappyBird-v0}
\label{Epsilon_vs_Time_Step_FlappyBird-v0}
\caption{Decaying $\epsilon$ used for $Q$-Learning vs. Time steps. The $\epsilon$ value is updated just before starting a new episode. }
\label{Epsilon_vs_Time Step}
\end{figure}
\begin{figure}
\caption{CliffWalking-v0}
\label{Cumulative_Random_Action_vs_Episodes_CliffWalking-v0}
\caption{Taxi-v3}
\label{Cumulative_Random_Action_vs_Episodes_Taxi-v3}
\caption{FrozenLake-v1 4x4}
\label{Cumulative_Random_Action_vs_Episodes_FrozenLake-v1 4x4}
\caption{FrozenLake-v1 4x4 Slippery}
\label{Cumulative_Random_Action_vs_Episodes_FrozenLake-v1_4x4_Slippery}
\caption{FrozenLake-v1 8x8}
\label{Cumulative_Random_Action_vs_Episodes_FrozenLake-v1 8x8}
\caption{FrozenLake-v1 8x8 Slippery}
\label{Cumulative_Random_Action_vs_Episodes_FrozenLake-v1_8x8_Slippery}
\caption{FlappyBird-v0}
\label{Cumulative_Random_Action_vs_Episodes_FlappyBird-v0}
\caption{Cumulative Random Action vs. Episodes.}
\label{Cumulative_Random_Action_vs_Episodes}
\end{figure}
\begin{figure}
\caption{CliffWalking-v0}
\label{Cumulative_Non-Random_Action_vs_Episodes_CliffWalking-v0}
\caption{Taxi-v3}
\label{Cumulative_Non-Random_Action_vs_Episodes_Taxi-v3}
\caption{FrozenLake-v1 4x4}
\label{Cumulative_Non-Random_Action_vs_Episodes_FrozenLake-v1 4x4}
\caption{FrozenLake-v1 4x4 Slippery}
\label{Cumulative_Non-Random_Action_vs_Episodes_FrozenLake-v1_4x4_Slippery}
\caption{FrozenLake-v1 8x8}
\label{Cumulative_Non-Random_Action_vs_Episodes_FrozenLake-v1 8x8}
\caption{FrozenLake-v1 8x8 Slippery}
\label{Cumulative_Non-Random_Action_vs_Episodes_FrozenLake-v1_8x8_Slippery}
\caption{FlappyBird-v0}
\label{Cumulative_Non-Random_Action_vs_Episodes_FlappyBird-v0}
\caption{Cumulative Non-Random Action vs. Episodes.}
\label{Cumulative_Non-Random_Action_vs_Episodes}
\end{figure}
\begin{figure}
\caption{CliffWalking-v0}
\label{Episodes_vs_Time_Step_CliffWalking-v0}
\caption{Taxi-v3}
\label{Episodes_vs_Time_Step_Taxi-v3}
\caption{FrozenLake-v1 4x4}
\label{Episodes_vs_Time_Step_FrozenLake-v1 4x4}
\caption{FrozenLake-v1 4x4 Slippery}
\label{Episodes_vs_Time_Step_FrozenLake-v1_4x4_Slippery}
\caption{FrozenLake-v1 8x8}
\label{Episodes_vs_Time_Step_FrozenLake-v1 8x8}
\caption{FrozenLake-v1 8x8 Slippery}
\label{Episodes_vs_Time_Step_FrozenLake-v1_8x8_Slippery}
\caption{FlappyBird-v0}
\label{Episodes_vs_Time_Step_FlappyBird-v0}
\caption{Episodes vs. Time steps.}
\label{Episodes_vs_Time Step}
\end{figure}
\begin{figure}
\caption{CliffWalking-v0}
\label{Episode_Length_vs_Time_Step_CliffWalking-v0}
\caption{Taxi-v3}
\label{Episode_Length_vs_Time_Step_Taxi-v3}
\caption{FrozenLake-v1 4x4}
\label{Episode_Length_vs_Time_Step_FrozenLake-v1 4x4}
\caption{FrozenLake-v1 4x4 Slippery}
\label{Episode_Length_vs_Time_Step_FrozenLake-v1_4x4_Slippery}
\caption{FrozenLake-v1 8x8}
\label{Episode_Length_vs_Time_Step_FrozenLake-v1 8x8}
\caption{FrozenLake-v1 8x8 Slippery}
\label{Episode_Length_vs_Time_Step_FrozenLake-v1_8x8_Slippery}
\caption{FlappyBird-v0}
\label{Episode_Length_vs_Time_Step_FlappyBird-v0}
\caption{Episode Length vs. Time steps.}
\label{Episode_Length_vs_Time Step}
\end{figure}
\begin{figure}
\caption{CliffWalking-v0}
\label{Episode_Length_vs_Episodes_CliffWalking-v0}
\caption{Taxi-v3}
\label{Episode_Length_vs_Episodes_Taxi-v3}
\caption{FrozenLake-v1 4x4}
\label{Episode_Length_vs_Episodes_FrozenLake-v1 4x4}
\caption{FrozenLake-v1 4x4 Slippery}
\label{Episode_Length_vs_Episodes_FrozenLake-v1_4x4_Slippery}
\caption{FrozenLake-v1 8x8}
\label{Episode_Length_vs_Episodes_FrozenLake-v1 8x8}
\caption{FrozenLake-v1 8x8 Slippery}
\label{Episode_Length_vs_Episodes_FrozenLake-v1_8x8_Slippery}
\caption{FlappyBird-v0}
\label{Episode_Length_vs_Episodes_FlappyBird-v0}
\caption{Episode Length vs. Episodes.}
\label{Episode_Length_vs_Episodes}
\end{figure}
\begin{figure}
\caption{CliffWalking-v0}
\label{Cumulative_Episode_Length_vs_Episodes_CliffWalking-v0}
\caption{Taxi-v3}
\label{Cumulative_Episode_Length_vs_Episodes_Taxi-v3}
\caption{FrozenLake-v1 4x4}
\label{Cumulative_Episode_Length_vs_Episodes_FrozenLake-v1 4x4}
\caption{FrozenLake-v1 4x4 Slippery}
\label{Cumulative_Episode_Length_vs_Episodes_FrozenLake-v1_4x4_Slippery}
\caption{FrozenLake-v1 8x8}
\label{Cumulative_Episode_Length_vs_Episodes_FrozenLake-v1 8x8}
\caption{FrozenLake-v1 8x8 Slippery}
\label{Cumulative_Episode_Length_vs_Episodes_FrozenLake-v1_8x8_Slippery}
\caption{FlappyBird-v0}
\label{Cumulative_Episode_Length_vs_Episodes_FlappyBird-v0}
\caption{Cumulative Episode Length vs. Episodes.}
\label{Cumulative_Episode_Length_vs_Episodes}
\end{figure}
\end{document}
\begin{document}
\setlength{\abovedisplayskip}{3pt} \setlength{\belowdisplayskip}{3pt}
\title{Matching Algorithms under Diversity-Based Reservations}
\author{Haris Aziz}
\affiliation{
\institution{UNSW Sydney}
\streetaddress{}
\city{Sydney}
\state{} \country{Australia}
\postcode{} } \email{haris.aziz@unsw.edu.au}
\author{Sean Morota Chu}
\affiliation{
\institution{UNSW Sydney}
\streetaddress{}
\city{Sydney}
\state{} \country{Australia}
\postcode{} } \email{seanmorotachu@gmail.com}
\author{Zhaohong Sun}
\affiliation{
\institution{CyberAgent Inc.}
\streetaddress{}
\city{Tokyo}
\state{} \country{Japan}
\postcode{} } \email{sunzhaohong1991@gmail.com}
\begin{abstract}
Selection under category or diversity constraints is a ubiquitous and widely applicable problem that is encountered in immigration, school choice, hiring, and healthcare rationing. These diversity constraints are typically represented by minimum and maximum quotas on various categories or types. We undertake a detailed comparative study of applicant selection algorithms with respect to the diversity goals.
\end{abstract}
\keywords{Matching; diversity constraints; affirmative action; school selection}
\maketitle
\section{Introduction} How should we hire job applicants when we want to take both the overall merit as well as requirements of various departments into account? How should we decide on student intake while considering both entrance test scores and target numbers of scholarships for different categories? How should we ration healthcare resources when patients can avail {resources} under various categories? Which applicants should be given an immigration slot when the government has targets for various categories?
These fundamental and important questions constitute a recurring theme in allocation and selection decisions. We consider a natural mathematical model for the problem that captures the main features of many of the problems discussed above. Although various choice rules and algorithms for selecting agents have been proposed, there has been little work carefully comparing the relative performance of these algorithms, especially with an experimental methodology. In this paper, we undertake one of the first detailed experimental studies to understand how well the algorithms perform with respect to capturing the intended diversity goals as well as selecting the highest priority applicants. We also try to understand the tradeoffs between merit and diversity.
We consider a very widely studied model of selection under diversity constraints. Firstly, there is a baseline ordering over the applicants. The baseline ordering could be the merit ordering in the context of school admissions, or the need for treatment in the context of healthcare rationing. If no diversity constraints are present, the selection of agents is made only with respect to the baseline priority ordering. If the diversity constraints are additionally present, then both the priority ordering and the diversity constraints are used to make selection decisions.
The diversity constraints or goals are represented by imposing minimum and maximum quotas on each of the types. In particular, {given one school $c$}, there is a lower quota of $q_{c,t}^1$ for the number of slots taken by agents for type $t$ and there is an upper quota of $q_{c,t}^2$ for the number of slots taken by agents for type $t$.
In this line of literature~(see, e.g., \citet{EHYY14a}), both lower and upper quotas are viewed as guidelines towards reaching diversity goals: first, fill up slots of those types whose minimum quotas have not been reached; as a secondary consideration, fill up slots of those types whose minimum quotas have been reached, but not their maximum quotas.
Another feature of our setting is that applicants can satisfy multiple types, such as being extra talented or being from a disadvantaged group. Each applicant who is selected is assumed to count towards one of the types satisfied by them; such a type could include a general public type. This way of accounting for representation has been referred to as the one-to-one convention, which is popular in Indian college admissions~\citep{SoYe19b}. Since we are interested not only in which agents are selected but also in how many of the target spots corresponding to relevant types are filled up, the output for our problems is not just a set of selected agents. Instead, it is a matching that matches each selected student to some type that the student satisfies. Such a matching not only identifies the set of selected agents but also gives a count of how many seats of each type are used.
In this paper, we examine the following problem.
\begin{quote} \emph{In selection problems under minimum and maximum quota diversity goals, how do various algorithms perform with respect to satisfying diversity goals as well as merit?} \end{quote}
With respect to performance on merit, we will compare the outcomes of algorithms according to various objectives, including average rank, worst rank, and best rank. When considering diversity constraints captured by lower and upper diversity quotas, a natural question is how to gauge the level of diversity captured by a given set of applicants or a matching. A natural solution was provided by \citet{AzSu21a}, who viewed each type $t$ as two ranks of slots corresponding to the lower and upper quotas.
A set of agents provides \emph{maximal diversity} if there is a matching of the agents to the types such that the number of filled rank 1 slots is maximized and, subject to that, the number of filled rank 2 slots is maximized.
One of the first algorithms for the problem was presented by \citet{EHYY14a} who assumed that each applicant can satisfy at most one type. The algorithm takes a natural greedy approach to first fill up slots corresponding to rank 1 and then to rank 2. It can suitably be extended to the case where agents may have multiple types. We will use the natural extension as one of the main algorithms whose performance we examine. We will refer to the algorithm as EHYY.
Another algorithm that we consider is the \emph{horizontal choice rule} of \citet{SoYe20a}, which was designed to optimally fill up seats when there is a single rank of slots. We consider two versions of the rule of \citet{SoYe20a}: SY1 optimizes the use of the first ranked slots, and SY2 merges the first and second ranked slots and then optimizes the use of these slots.
\citet{AzSu21a} presented an algorithm that achieves maximal diversity. We will refer to this algorithm as A-S. There are several other algorithms that have been proposed or are used in real-world systems. The goal of this paper is to undertake a comparative study of various algorithms for the problem and see how they fare in terms of maximal diversity. We check how various algorithms do in terms of filling up the first ranked slots. We also check how various algorithms do in filling up the first two ranks.
By design, A-S maximizes the use of rank 1 slots and, subject to that, the use of rank 2 slots. One of the goals of the paper is to understand how it performs in relation to other existing approaches. We will also compare the algorithms with two baseline algorithms that predominantly care about the priority of the agents rather than diversity concerns.
In this paper, {we} present several contributions. Firstly, we present {a} consistent specification of various algorithms for our setting with minimum and maximum quotas or equivalently rank 1 and rank 2 seats. Secondly, we perform one of the first experimental comparisons of prominent selection algorithms in achieving optimal diversity goals as well as average merit ranking of the agents. Next, we investigate the performance of prominent selection algorithms across a variety of different environments, thereby determining the environmental parameters affecting their performance.
Some of the conclusions from the experiments include the following. The total number of reserves and the selection capacity of a problem instance influence the performance of each algorithm. As the number of reserves relative to selection capacity increases, the performance of diversity-based algorithms deteriorates with respect to satisfying merit compared to matching algorithms that ignore reserves. When the total number of reserves does not exceed the selection capacity, A-S and SY2 have equivalent performance, despite behaving differently when total reserves exceed selection capacity. Overall, A-S is the best algorithm at fulfilling reserves across the two ranks but performs worse in selecting for merit compared to SY1 and SY2, which are optimal for filling the first rank and the first two ranks of reserves respectively. The performance of EHYY is close to optimal on average when satisfying the first rank reserves, but its worst case performance degrades as the selection capacity and the number of reserves increase.
We find that, due to the various different characteristics of each algorithm, there is a necessary tradeoff between achieving merit and diversity goals, and the choice of algorithm can help negotiate between these two goals for any specific problem instance.
\section{Related Work}
The literature on matching under diversity and other distributional constraints is vast. We discuss work that is closely related to our problem.
Affirmative action in two-sided matching has been considered in early work on school choice~\citep{AbSo03b,Abdu05a}. In many of the diversity models, each school puts a minimum quota on each type \citep{HYY13a,Koji12a,KoSo13a,EHYY14a}. \citet{EHYY14a} treated the quotas in a soft manner since hard constraints can lead to infeasibility. We pursue the same approach. In contrast to \citet{EHYY14a}, we allow agents to have multiple types.
The issue of agents having multiple `overlapping types' has been considered in recent papers and deployed applications in the past few years, including those in Brazil, Chile, Israel, and India (see, e.g.,~\citealp{AyTu16a,BCC+19a,CEE+19a,KHIY17a,GNKR19a}). There are two ways to perform accounting when agents have multiple types~\citep{SoYe20a}. In the \textit{one-for-all} convention, an agent is viewed as taking slots for all the types that they satisfy~\citep{GNKR19a,AGS20a}. In the \textit{one-for-one} convention, they take the slot of one of the types they satisfy. In this paper, we pursue the one-for-one convention, which has the `more widespread interpretation'~\citep{SoYe20a}. The one-for-one convention has been explicitly or implicitly considered in several recent papers~\citep{AyTu16a,KHIY17a,BCC+19a,CEE+19a,EHYY14a}. Most of these approaches do not achieve diversity optimally. In contrast, \citet{AzSu21a} presented a rule that achieves diversity optimally. When there is only one rank of reserves, or equivalently when there are no maximum quotas, \citet{SoYe20a} presented a rule that also satisfies diversity optimally. We will consider two extensions of this algorithm for our model.
\section{Preliminaries}
An instance $I$ of the problem consists of a tuple $(S,c,q_c,T,\succ_c, \eta_c)$
where $S$ denotes the set of agents. There is one school $c$ with capacity $q_c$. We denote by $T$ the set of types, and overload the notation to also capture the types of each agent: for each agent $s$, let $T(s) \subseteq T$ denote the subset of types to which agent $s$ belongs. If $T(s) = \emptyset$, then agent $s$ does not have any privileged type. The term $\eta_c$ specifies the diversity goals of school $c$. In this work, we consider two ranks of slots: $\eta_{c,t}^1$ denotes the number of slots of rank 1 of type $t$ (minimum quotas) and $\eta_{c,t}^2$ denotes the number of slots of rank 2 of type $t$ (maximum quotas).
The school $c$ has a strict priority ordering $\succ_c$ over $S \cup \{\emptyset\}$ where $\emptyset$ represents the option of leaving seats vacant for school $c$. An agent $s$ is \emph{acceptable} to school $c$ if $s \succ_c \emptyset$ holds. The priority ordering of the school could be based on the entrance exam scores, or in the case of automated hiring, on some objective measure that captures the suitability of the applicants.
\begin{example}
\label{ex:1}
Consider the setting in which there are six students $S = \{s_1,s_2,s_3,s_4,s_5,s_6\}$, applying for seats at one school $c$.
The type profile of the students is $T(s_1) = \{\}$, $T(s_2) = \{t_{4}\}$ , $T(s_3) = \{t_{3}\}$, $T(s_4) = \{t_{1},t_{2},t_{3}\}$, $T(s_5) = \{t_{1}\}$ , $T(s_6) = \{t_{2},t_{3}\}$.
The capacity of the school is $q_c = 3$ and the school has diversity goals specified as follows: $\eta^1_{c,t_{1}} = 1, \eta^1_{c,t_{2}} = 1, \eta^2_{c,t_{3}} = 1, \eta^2_{c,t_{4}} = 1$.
The priority ordering of students is $s_1 \succ_c s_2 \succ_c s_3 \succ_c s_4 \succ_c s_5 \succ_c s_6$.
The interpretation of the diversity goals outlined for school $c$ {is} as follows: school $c$ wishes to admit 3 students while matching as many students to slots of rank 1 as possible.
In the event that no further rank 1 slots can be matched, school $c$ would like to match as many rank 2 seats as possible.
School $c$ has one rank 1 slot each for types $t_{1},t_{2}$ and one rank 2 slot each for types $t_{3},t_{4}$. \end{example}
\section{A Tool Box of Algorithms} \label{sec:toolbox}
In this section, we describe the algorithms that we consider in the experiments. Diversity goals have been defined over two ranks, in the sense that first rank diversity goals are to be satisfied before second rank diversity goals whenever possible. This is analogous to diversity settings in which minimum quotas are to be satisfied as far as possible before targeting maximum quotas. Within this setting, we also allow for overlapping types, so that an agent may belong to multiple types.
\subsection*{A-S algorithm of \citeauthor{AzSu21a}}
The A-S algorithm creates a ranked reservation graph and then computes a rank-maximal matching within this graph to find a matching which optimizes first rank seat usage before second rank usage.
Given a set of students $S'$ and a school $c$ with reserved quotas $\eta_c$,
a corresponding \emph{ranked reservation graph} $G=(S' \cup V, E, \eta_c)$
is a bipartite graph whose vertices consist of a set of students $S'$ and a set of reserved seats $V$. Each reserved seat $v_{t,i}^j \in V$ has a rank $j$, a type $t$ and an index $i$. For each rank $j$ and each type $t$, we create $\eta_{c,t}^j$ reserved seats in $G$. The edge set $E$ is specified as follows. There is an edge between a student $s$ and a reserved seat
$v_{t,i}^j$ if {student $s$ belongs to type $t$},
i.e., $t \in T(s)$.
Each edge $(s, v_{t,i}^j)$ has a \emph{rank} $j$ corresponding to the rank $j$ of the reserved seat $v_{t,i}^j$. We refer to all edges with rank $j$ as $j$-ranked edges. Since we are focusing on problems arising from lower and upper quotas, we assume that there are two main ranks $1$ and $2$. We also create an artificial universal type $t_0$ for which each agent is eligible and that has zero seats of rank $1$ and $2$ but $q_c$ seats of rank $3$. This type is only present to match those agents who cannot be matched to seats of the real types.
To keep our figures simple, we will not depict vertices corresponding to $t_0$.
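The construction above can be sketched in a few lines of Python. This is a minimal illustration rather than the authors' implementation; each reserved seat $v_{t,i}^j$ is represented as a (type, rank, index) tuple, and the function name is our own.

```python
# A sketch of the ranked reservation graph: one seat vertex per
# (type, rank, index), an edge to every student holding that type,
# plus q_c rank-3 seats of the artificial universal type t0.
def ranked_reservation_graph(students, eta, q_c):
    """students: {name: set of types}; eta: {(type, rank): seat count}."""
    seats = [(t, j, i) for (t, j), n in eta.items() for i in range(n)]
    seats += [("t0", 3, i) for i in range(q_c)]  # universal type t0
    edges = [(s, v) for s, types in students.items()
             for v in seats if v[0] == "t0" or v[0] in types]
    return seats, edges
```

For Example~\ref{ex:1}, this yields four real seats (two of rank 1, two of rank 2) and three $t_0$ seats of rank 3.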
\begin{figure}
\caption{The ranked reservation graph for the problem instance in Example~\ref{ex:1}.}
\label{fig:A-S-example1}
\end{figure}
\begin{algorithm}[h!]
\begin{algorithmic}[scale=1]
\REQUIRE $S'\subseteq S$, $q_c$, $\eta_c$, $\succ_c$.
\ENSURE A matching $M\subseteq S'\times V$ and
a set of matched agents $S^*\subseteq S'$
\end{algorithmic}
\begin{algorithmic}[1]
\caption{A-S algorithm from \citet{AzSu21a}}
\label{alg:choice_school1-for-1}
\STATE Selected agents $S^* \leftarrow \emptyset$
\STATE {Matching $M \leftarrow \emptyset$}
\STATE Construct the corresponding ranked reservation graph $G=(S'\cup V, E, \eta_c)$.
\FOR{agent $s\notin S^*$ down the list in $\succ_c$}
\IF{there exists a matching in $G$ of size at most $q_c$ that satisfies the following two conditions
\begin{enumerate}
\item it is rank maximal among all matchings in $G$ of size at most $q_c$
\item it matches all agents in $S^*\cup \{s\}$
\end{enumerate}}
\STATE Add $s$ to $S^*$
\ENDIF
\ENDFOR
\STATE Compute a rank maximal matching $M$ of $G$ that matches all the students in $S^*$
\RETURN $M$ and $S^*$.
\end{algorithmic}
\end{algorithm}
\begin{example}[A-S Algorithm]
\label{ex:A-S}
For Example~\ref{ex:1},
the A-S algorithm will select the students $S^{*} = \{s_2,s_4,s_5\}$, which fills both rank 1 slots and a rank 2 slot for type $t_4$.
This arises from the ranked reservation graph $G$ pictured in Figure~\ref{fig:A-S-example1}.
Evidently, the 5 possible rank maximal matchings on $G$ of size 3 are
$\{\{(s_2,v^{2}_{t_4,1}),(s_4,v^{1}_{t_2,1}),(s_5,v^{1}_{t_1,1})\}$,$\{(s_2,v^{2}_{t_4,1}),(s_4,v^{1}_{t_1,1}),(s_6,v^{1}_{t_2,1})\}$,
$\{(s_3,v^{2}_{t_3,1}),(s_4,v^{1}_{t_2,1}),(s_5,v^{1}_{t_1,1})\}$,$\{(s_3,v^{2}_{t_3,1}),(s_4,v^{1}_{t_1,1}),(s_6,v^{1}_{t_2,1})\}$,
$\{(s_4,v^{2}_{t_3,1}),(s_5,v^{1}_{t_1,1}),(s_6,v^{1}_{t_2,1})\}\}$.
The A-S algorithm will first select $s_2$ when scanning the students by priority ordering, as there exists a rank maximal matching in which $s_2$ is matched.
It will then select $s_4$, as $s_3$ cannot be in the same rank-maximal matching as $s_2$, before finally selecting $s_5$.
As the A-S algorithm selects the highest priority students possible while maintaining a rank-maximal matching, the final matching in this instance will be $\{(s_2,v^{2}_{t_4,1}),(s_4,v^{1}_{t_2,1}),(s_5,v^{1}_{t_1,1})\}$, as pictured in Figure~\ref{fig:A-Sexample2}. To keep our figure simple, we have not depicted vertices corresponding to $t_0$.
\end{example}
\begin{figure}
\caption{The matching returned by the A-S Algorithm for the problem instance in Example~\ref{ex:1}.}
\label{fig:A-Sexample2}
\end{figure}
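For tiny instances such as Example~\ref{ex:1}, the behaviour of the A-S rule can be emulated by brute force. The sketch below is a naive enumeration over all matchings of size at most $q_c$, not the polynomial-time rank-maximal matching computation a real implementation would use; seats are (type, rank, index) tuples, students map to sets of types, and all names are our own.

```python
from itertools import combinations

def rank_signature(matching):
    # (rank-1 count, rank-2 count, rank-3 count), maximized lexicographically
    return tuple(sum(1 for _s, (_t, j, _i) in matching if j == r)
                 for r in (1, 2, 3))

def _assign(group, students, seats, used=frozenset()):
    # all injective assignments of the students in `group` to compatible seats
    if not group:
        return [frozenset()]
    s, rest = group[0], group[1:]
    out = []
    for v in seats:
        if v in used or (v[0] != "t0" and v[0] not in students[s]):
            continue
        for m in _assign(rest, students, seats, used | {v}):
            out.append(m | {(s, v)})
    return out

def all_matchings(students, seats, q_c):
    # every matching of size at most q_c (including the empty one)
    result = [frozenset()]
    for k in range(1, q_c + 1):
        for group in combinations(list(students), k):
            result.extend(_assign(group, students, seats))
    return result

def a_s(students, priority, seats, q_c):
    # greedily lock in students by priority while a rank-maximal
    # matching that matches all locked-in students still exists
    ms = all_matchings(students, seats, q_c)
    best = max(rank_signature(m) for m in ms)
    locked = set()
    for s in priority:
        if any(rank_signature(m) == best and
               locked | {s} <= {st for st, _ in m} for m in ms):
            locked.add(s)
    return locked
```

On Example~\ref{ex:1} this selects $\{s_2,s_4,s_5\}$, in agreement with Example~\ref{ex:A-S}.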
\subsection*{EHYY Algorithm of \citeauthor{EHYY14a}}
One of the first algorithms for the problem was presented by \citet{EHYY14a}, who assumed that each applicant can satisfy at most one type. When each agent has at most one type, the choice of which type's slot an agent should take is straightforward. The algorithm proposed by \citet{EHYY14a} follows a natural idea that provides the blueprint for many of the other algorithms in the literature. It works as follows. The algorithm goes down the priority list and selects the highest priority agent who has a type that is undersubscribed (whose count has not reached the lower quota). If there is no such agent, the algorithm selects the highest priority agent who has some type that is not oversubscribed (whose count has not reached the upper quota). If there are no such agents, then the highest priority agents are selected until the total capacity is reached. When agents may have multiple types, the algorithm of \citet{EHYY14a} can be suitably generalized to handle `overlapping types': when choosing an agent, we select the highest priority agent who has some type that is undersubscribed, and otherwise the highest priority agent who has some type that is not oversubscribed. We will refer to the algorithm as EHYY.
The EHYY algorithm below approaches the two-ranked school choice problem by greedily selecting agents with first rank types before selecting agents with second rank types. This approach is not optimal when there are multiple overlapping types for agents. \begin{algorithm}[h!] \begin{algorithmic}[scale=1]
\REQUIRE $S'\subseteq S$, $q_c$, $\eta_c$, $\succ_c$.
\ENSURE A matching $M\subseteq S'\times V$ and
a set of matched agents $S^*\subseteq S'$
\end{algorithmic}
\begin{algorithmic}[1]
\caption{EHYY Algorithm of \citet{EHYY14a}}
\label{alg:choice_school1-for-2}
\STATE Selected agents $S^* \leftarrow \emptyset$
\STATE {Matching $M \leftarrow \emptyset$}
\FOR{agent $s\notin S^*$ down the list in $\succ_c$}
\IF{ $|S^*| < q_c$ and if there exists an unmatched first rank seat $v_{t,i}^1$ for some type $t$ satisfied by $s$}
\STATE Add $s$ to $S^*$ and $(s,v_{t,i}^1)$ to $M$
\ENDIF
\ENDFOR
\FOR{agent $s\notin S^*$ down the list in $\succ_c$}
\IF{ $|S^*| < q_c$ and there exists an unmatched second rank seat $v_{t,i}^2$ for some type $t$ satisfied by $s$}
\STATE Add $s$ to $S^*$ and $(s,v_{t,i}^2)$ to $M$
\ENDIF
\ENDFOR
\FOR{agent $s$ down the list in $\succ_c$}
\IF{$|S^*|<q_c$ and $s\notin S^*$}
\STATE Add $s$ to $S^*$ and add a new edge $(s,v_{t_0,i}^3)$ to $M$ for an unmatched rank 3 seat of the universal type $t_0$.
\ENDIF
\ENDFOR
\RETURN $M$ and $S^*$. \end{algorithmic} \end{algorithm}
\begin{example}[EHYY Algorithm]
\label{ex:EHYY}
Consider the problem instance described in Example~\ref{ex:1}.
In this problem instance, EHYY may select $S^{*} = \{s_2,s_4,s_6\}$, filling two rank 1 slots and one rank 2 slot.
In the first traversal of the priority list of students, we match students $s_4$ and $s_6$ to the unmatched slots $v^{1}_{t_1,1}$ and $v^{1}_{t_2,1}$ respectively.
When matching $s_4$, we have two options, $v^{1}_{t_1,1}$ and $v^{1}_{t_2,1}$; by random tie-breaking, we choose $v^{1}_{t_1,1}$.
In the second traversal of the priority list of students, we select student $s_2$ by matching her to the unmatched slot $v^{2}_{t_4,1}$.
Our final matching is $\{(s_2,v^{2}_{t_4,1}),(s_4,v^{1}_{t_1,1}),(s_6,v^{1}_{t_2,1})\}$. \end{example}
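The greedy passes of EHYY are straightforward to sketch. The version below is an illustrative implementation of ours, not the authors' code; it breaks ties among a student's eligible types by their listed order rather than randomly, and records each match as a (type, rank) pair.

```python
# A sketch of EHYY: pass 1 fills unmatched rank-1 seats by priority,
# pass 2 fills rank-2 seats, and a final pass tops up to capacity
# with rank-3 seats of the universal type t0.
def ehyy(students, priority, eta, q_c):
    """students: {name: ordered list of types}; eta: {(type, rank): count}."""
    free = dict(eta)  # remaining seats per (type, rank)
    selected, matching = set(), {}
    for rank in (1, 2):
        for s in priority:
            if s in selected or len(selected) >= q_c:
                continue
            for t in students[s]:
                if free.get((t, rank), 0) > 0:
                    free[(t, rank)] -= 1
                    selected.add(s)
                    matching[s] = (t, rank)
                    break
    for s in priority:  # fill remaining capacity by priority alone
        if len(selected) >= q_c:
            break
        if s not in selected:
            selected.add(s)
            matching[s] = ("t0", 3)
    return selected, matching
```

On Example~\ref{ex:1}, with the first-listed-type tie-break, this reproduces the outcome of Example~\ref{ex:EHYY}: $s_4 \mapsto (t_1,1)$, $s_6 \mapsto (t_2,1)$, $s_2 \mapsto (t_4,2)$.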
\subsection*{Horizontal Choice Algorithms of \citeauthor{SoYe20a}}
The horizontal choice algorithm was proposed by \citet{SoYe20a} for the case where a school has only one rank of reserves and once the reserves are filled up, the remaining seats are filled according to the priority ranking. We will refer to the algorithm as the SY algorithm. The algorithm gives the same outcome as the A-S algorithm for the case of one rank of reserves (along with the generic type $t_0$ that takes up any remaining agents who are not matched to reserves types) so we do not define it formally in the way \citet{SoYe20a} did. We adapt the SY algorithm from \citet{SoYe20a} by either focusing on one rank only, or by merging the two ranks of quotas into one. The original algorithm SY does not allow for any type that has a rank $3$. However, we can view SY as having one type of rank 3 that matches all agents who were unable to be matched to an actual reserved seat.
As a result of this preprocessing, we are testing two different algorithms, which we will call SY1 and SY2. SY1 below eliminates second rank seats from consideration when running the selection algorithm. \begin{algorithm}[h!] \begin{algorithmic}[scale=1]
\REQUIRE $S'\subseteq S$, $q_c$, $\eta_c$, $\succ_c$.
\ENSURE A matching $M\subseteq S'\times V$ and
a set of matched agents $S^*\subseteq S'$
\end{algorithmic}
\begin{algorithmic}[1]
\caption{SY1 per \citet{SoYe20a}}
\label{alg:choice_school1-for-3}
\STATE Eliminate second rank seats from $\eta_c$ to get $\eta_c'$
\STATE Run the horizontal choice rule of \citet{SoYe20a} with respect to $\eta_c'$;
{let $M$ denote the} matching returned by the horizontal choice rule
\RETURN $M$ and $S^*$ (the set of agents matched by $M$).
\end{algorithmic}
\end{algorithm}
\begin{example}
\label{ex:SY1}
Consider the problem instance described in Example \ref{ex:1}.
In this instance, SY1 will select students $S^{*} = \{s_1,s_4,s_5\}$,
filling two rank 1 slots and no rank 2 slots.
This is as we remove all rank 2 slots from our instance, so the remaining slots are $v^{1}_{t_1,1}$ and $v^{1}_{t_2,1}$.
Any matching that includes these two slots will be rank maximal.
In order to fill the two rank 1 slots, SY1 will pair $s_4$ with $v^1_{t_2,1}$, and $s_5$ with $v^{1}_{t_1,1}$.
Due to the lack of second rank seats, SY1 will then select the highest priority student $s_1$, pairing $s_1$ with $v^{3}_{t_0,1}$, where $t_0$ is a generic type shared by all students.
Hence the final matching for SY1 will be $\{(s_4,v^1_{t_2,1}),(s_5,v^{1}_{t_1,1}),(s_1,v^{3}_{t_0,1})\}$.
\end{example}
SY2 below merges first and second rank seats into seats for a single rank, such that both ranks are considered at the same priority. \begin{algorithm}[h!] \begin{algorithmic}[scale=1]
\REQUIRE $S'\subseteq S$, $q_c$, $\eta_c$, $\succ_c$.
\ENSURE A matching $M\subseteq S'\times V$ and
a set of matched agents $S^*\subseteq S'$
\end{algorithmic}
\begin{algorithmic}[1]
\caption{SY2 per \citet{SoYe20a}}
\label{alg:choice_school1-for-4}
\STATE Merge first and second rank seats from $\eta_c$ into one rank to get $\eta_c'$
\STATE Run the horizontal choice rule as per \citet{SoYe20a} with respect to $\eta_c'$
\STATE $M \leftarrow$ matching returned by the horizontal choice rule
\RETURN $M$ and $S^*$ (the set of agents matched by $M$).
\end{algorithmic}
\end{algorithm}
\begin{example}[SY2 Algorithm]
\label{ex:SY2}
Consider the problem instance described in Example \ref{ex:1}.
Since SY2 considers all seats as equal rank, for this instance any matching where all students are matched to a ranked seat is considered rank maximal.
Hence, SY2 will select students $S^{*} = \{s_2,s_3,s_4\}$, filling one rank 1 slot and two rank 2 slots.
This is because we skip $s_1$, who has no types, and then match $s_2,s_3,s_4$ sequentially, as they can be matched as $\{(s_2,v^2_{t_4,1}),(s_3,v^{2}_{t_3,1}),(s_4,v^{1}_{t_1,1})\}$, which is our final matching.
Other rank-maximal matchings are ignored in this case as they would require selecting students lower in the priority list.
\end{example}
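Since SY1 and SY2 differ from the underlying one-rank horizontal choice rule only in how the quota profile is preprocessed, that preprocessing can be sketched directly. The sketch below is our own illustration, representing a quota profile as a map from (type, rank) pairs to seat counts.

```python
# Quota preprocessing distinguishing SY1 and SY2; the selection itself
# then runs a one-rank horizontal choice rule (which, on a single rank,
# coincides with A-S plus the universal type t0).
def sy1_quotas(eta):
    # SY1: drop all rank-2 seats from consideration
    return {(t, j): n for (t, j), n in eta.items() if j == 1}

def sy2_quotas(eta):
    # SY2: merge rank-1 and rank-2 seats of each type into a single rank
    merged = {}
    for (t, _j), n in eta.items():
        merged[(t, 1)] = merged.get((t, 1), 0) + n
    return merged
```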
\subsection*{Priority Only Algorithms}
Next, we discuss two algorithms that select agents only on the basis of their priority. In other words, they select the top $q_c$ agents. Since we are not only interested in the selection of agents but also want to check the type used by each selected student, the algorithms return matchings {by specifying which student is matched with which slot}.
The first priority only algorithm, Priority Only Greedy (POG), goes down the priority list and, for the current agent, gives them a rank 1 seat from an eligible type; if no such seat is available, then a rank 2 seat from an eligible type; and if neither is available, the agent is matched to a rank 3 seat of the universal type $t_0$.
The second priority only algorithm selects the same set of agents but matches them to ranked seats in a smart way. For this reason, we refer to it as Priority Only Smart (POS). The algorithm uses A-S to match the set of selected students in an optimal way to the ranked seats.
\begin{algorithm}[h!] \begin{algorithmic}[scale=1]
\REQUIRE $S'\subseteq S$, $q_c$, $\eta_c$, $\succ_c$.
\ENSURE A matching $M\subseteq S'\times V$ and
a set of matched agents $S^*\subseteq S'$
\end{algorithmic}
\begin{algorithmic}[1]
\caption{Priority Only Greedy (POG)}
\label{alg:pog}
\STATE Selected agents $S^* \leftarrow \emptyset$
\STATE {Matching $M \leftarrow \emptyset$}
\FOR{agent $s\notin S^*$ down the list in $\succ_c$}
\IF{$|S^*| < q_c$}
\STATE Add $s$ to $S^*$
\IF{there exists an unmatched first rank seat $v_{t,i}^1$ for some type $t$ satisfied by $s$}
\STATE Add $(s,v_{t,i}^1)$ to $M$
\ELSIF{there exists an unmatched second rank seat $v_{t,i}^2$ for some type $t$ satisfied by $s$}
\STATE Add $(s,v_{t,i}^2)$ to $M$
\ELSE
\STATE Add a new edge $(s,v_{t_0,i}^3)$ to $M$ for an unmatched rank 3 seat of the universal type $t_0$
\ENDIF
\ENDIF
\ENDFOR
\RETURN $M$ and $S^*$. \end{algorithmic} \end{algorithm}
\begin{algorithm}[h!] \begin{algorithmic}[scale=1]
\REQUIRE $S'\subseteq S$, $q_c$, $\eta_c$, $\succ_c$.
\ENSURE A matching $M\subseteq S'\times V$ and
a set of matched agents $S^*\subseteq S'$
\end{algorithmic}
\begin{algorithmic}[1]
\caption{Priority Only Smart (POS)}
\label{alg:pos} \STATE Take top $q_c$ students $S^*$ with respect to $\succ_c$ \RETURN A-S applied to $(S^*$, $q_c$, $\eta_c$, $\succ_c$). \end{algorithmic} \end{algorithm}
\begin{example}[Priority Only Algorithms]
\label{ex:PO}
Consider the problem instance described in Example~\ref{ex:1}.
Both priority based algorithms will select students $S^{*} = \{s_1,s_2,s_3\}$, filling no rank 1 seats and two rank 2 seats.
The matching generated for both algorithms will be $\{(s_1,v^3_{t_0,1}),(s_2,v^{2}_{t_4,1}),(s_3,v^{2}_{t_3,1})\}$.
Both algorithms simply select the three highest-priority students; their main difference arises in the way in which they assign seats. \end{example}
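POG can be sketched as follows. This is an illustrative implementation of ours (types are scanned in their listed order), not production code; matches are recorded as (type, rank) pairs.

```python
# A sketch of Priority Only Greedy (POG): take the top q_c students by
# priority, and give each a rank-1 seat of one of their types if
# available, else a rank-2 seat, else a rank-3 seat of the universal
# type t0.
def pog(students, priority, eta, q_c):
    """students: {name: ordered list of types}; eta: {(type, rank): count}."""
    free = dict(eta)  # remaining seats per (type, rank)
    selected, matching = set(), {}
    for s in priority[:q_c]:
        selected.add(s)
        for rank in (1, 2):
            t = next((t for t in students[s]
                      if free.get((t, rank), 0) > 0), None)
            if t is not None:
                free[(t, rank)] -= 1
                matching[s] = (t, rank)
                break
        else:  # no rank-1 or rank-2 seat available for s
            matching[s] = ("t0", 3)
    return selected, matching
```

On Example~\ref{ex:1} this reproduces the outcome of Example~\ref{ex:PO}: $s_1 \mapsto (t_0,3)$, $s_2 \mapsto (t_4,2)$, $s_3 \mapsto (t_3,2)$.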
\section{Experimental Comparison} We use two sets of synthetic data in order to compare our algorithms. The first dataset is based on the SAT, the US university entrance examination. In this dataset, we generate data to match the relative diversity of test-takers in the US. The goal of this dataset is to compare the performance of our selected matching algorithms in a real-world setting in which they can be utilised.
The first dataset is limited in testing scope by the total number of first and second rank reserves, which we define as $\psi = \sum_{t \in T} (\eta_{t}^{1}+\eta_{t}^{2})$, being less than $q_c$. To overcome this limitation, for our second dataset, described in Section~\ref{section:synthetic}, we generate data with $\psi$ values exceeding $q_c$.
\subsection{Comparison using synthetic SAT data} \label{subsection:SAT} In this section, we compare the performance of our selected algorithms when selecting a variable number of applicants from an input of 100 applicants with randomly generated types and priority ranking. The diversity types we consider are ``disadvantaged minority'', ``low parental education'', and ``low income household''. These types are generated based on \citet{Coll20a}. The ``disadvantaged minority'' type is an aggregation of the Black, Hispanic, and American Indian ethnicities, ``low parental education'' consists of applicants whose highest level of parental education is less than a bachelor's degree, and ``low income household'' applicants are those who used an SAT fee waiver.
For each selection capacity level, we simulate 100 datasets for consistency. \subsubsection{Dataset generation}
In this dataset, we have a consistent number of applicants $|S| = 100$, and examine the performance of the algorithms across a range of different selection capacities $q_c$. For each $q_c$, we generate 100 different applicant pools and aggregate the performance of the selected algorithms across all applicant pools.
For each applicant pool, we generate the diversity types and SAT score for every applicant individually. We assign students' types with probabilities such that the total number of students with a given type matches the type frequencies outlined in \citet{Coll20a}. When assigning overlapping types, we consider the conditional probability that a student has a type given that they have already been assigned another type. For example, disadvantaged minorities have a 1.7 times higher chance of coming from a low education household \citep{NiSc18a}. We assign each applicant the ``disadvantaged minority'' type with a 39\% probability. If the applicant is a disadvantaged minority, we assign them to be from a low education household with a 64\% probability; otherwise, we assign them to be from a low education household with a 30\% probability. For applicants who have both of the previous two types, we assign them to be from a low income household with a 30\% probability; if they have exactly one type so far, this probability is 26\%; and if an applicant has no types so far, the probability of them being low income is 10\%.
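The conditional type-assignment scheme described above can be sketched as follows, using exactly the probabilities stated in the text; the function and type names are our own.

```python
import random

# A sketch of the type-generation scheme: 39% disadvantaged minority;
# 64%/30% low parental education conditional on minority status;
# 30%/26%/10% low income depending on how many types are held so far.
def assign_types(rng=random):
    types = []
    if rng.random() < 0.39:
        types.append("disadvantaged_minority")
    p_low_ed = 0.64 if types else 0.30
    if rng.random() < p_low_ed:
        types.append("low_parental_education")
    p_low_inc = {2: 0.30, 1: 0.26, 0: 0.10}[len(types)]
    if rng.random() < p_low_inc:
        types.append("low_income")
    return types
```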
We generate a student's SAT score using a truncated normal distribution. In this truncated normal distribution, the domain is 0 to 1600, the standard deviation we use is 211 (as per \citealp{Coll20a}), and the mean of the distribution is based on the types the applicant satisfies. Students without any types have a mean score of 1135, while disadvantaged minorities score 172 points lower, applicants with low parental education score 171 points lower, and low income household applicants score 86 points lower on average.
We then reduce the expected score of each applicant in the truncated normal distribution based on the types they satisfy. For students with multiple types, we reduce the impact of each type harmonically such that the impact of overlapping types is reduced. For example, a student with all 3 types will have the expected value of their score reduced by 172 due to the disadvantaged minority type, $\lceil 171/2 \rceil = 86$ due to the low parental education type, and $\lceil 86/3 \rceil = 29$ due to the low income household type. Having generated types and SAT scores for every student, we create a priority list $\succ_c$ by descending order of SAT scores.
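The score model can be sketched as follows; the truncation to $[0,1600]$ is implemented here by simple rejection sampling, and the harmonic reduction uses ceiling division as in the worked example above. This is our own illustration under the stated parameters, not the authors' generator.

```python
import random

# Type-specific deductions, applied harmonically: the k-th type a
# student holds contributes ceil(deduction / k) to the reduction.
DEDUCTIONS = {"disadvantaged_minority": 172,
              "low_parental_education": 171,
              "low_income": 86}

def expected_score(types):
    mean = 1135  # base mean for students without any types
    for k, t in enumerate(types, start=1):
        mean -= -(-DEDUCTIONS[t] // k)  # ceiling division
    return mean

def sample_score(types, sd=211, rng=random):
    mean = expected_score(types)
    while True:  # rejection sampling for the truncation to [0, 1600]
        x = rng.gauss(mean, sd)
        if 0 <= x <= 1600:
            return x
```

For a student with all three types, the mean is reduced by $172 + \lceil 171/2 \rceil + \lceil 86/3 \rceil = 172 + 86 + 29$, giving an expected score of $848$.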
The reserves $\eta_c$ were generated in a consistent manner proportional to the selection capacity $q_c$ for all datasets. Writing the disadvantaged minority type as $t_1$, the low education household type as $t_2$, and the low income household type as $t_3$, we have: $\eta^{1}_{t_1} = 0.15 \times q_c$, $\eta^{2}_{t_1} = 0.2 \times q_c$, $\eta^{1}_{t_2} = 0.1 \times q_c$, $\eta^{2}_{t_2} = 0.1 \times q_c$, $\eta^{1}_{t_3} = 0.05 \times q_c$, $\eta^{2}_{t_3} = 0.05 \times q_c$. Recall that $\eta^{j}_{t}$ denotes the quota of type $t$ and rank $j$. Hence, the total number of reserves is $\psi = 0.65 q_c$ throughout this section. \subsubsection{Performance of algorithms} For each of the synthetic applicant pools generated above, we applied all of the algorithms outlined in Section~\ref{sec:toolbox}, then calculated performance metrics to compare their performance with respect to maximising diversity and selection of top performers.
For an instance $I$ and an algorithm $f$, let the outcome (selected students) of applying $f$ be $f(I)$. For an outcome $f(I)$, let $P(f(I))$ denote the performance of $f(I)$ with respect to a performance parameter $P$, where $P$ may be the number of rank 1 reserves filled, total reserves filled, or the average percentile rank of students.
For an algorithm $f$, performance measure $P$ and a set of instances $\mathcal{I}$, we define the average performance of an algorithm $f$ as $\text{avg}_{I\in \mathcal{I}} \{\frac{P(f(I))}{\text{opt}(P(I))}\}$ where $\text{opt}({P}({I}))$ denotes the maximum value of $P(I)$ reached by all algorithms for instance $I$. Informally, this calculates the ratio of the performance achieved by an algorithm $f$ for a metric $P$ relative to the best performance achieved by all algorithms for the same metric for each instance, and averages these ratios across all instances.
For an algorithm $f$, a performance measure $P$, and a set of instances $\mathcal{I}$, we define the worst case performance of $f$ as $\text{min}_{I\in \mathcal{I}} \{\frac{P(f(I))}{\text{opt}(P(I))}\}$. Informally, we compute the same ratios as in the average case, but take the minimum across all instances rather than the average.
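The two ratio definitions above can be sketched as follows; the instance encoding, the algorithms, and the metric function are illustrative placeholders, not our implementation.

```python
# Sketch of the average and worst-case performance ratios defined above.
# `algorithms` maps names to functions f(I) -> outcome; `P` scores an outcome.
# All concrete names here are hypothetical stand-ins.

def performance_ratios(f, P, instances, algorithms):
    """Return (average, worst-case) performance ratio of algorithm f under metric P."""
    ratios = []
    for I in instances:
        # opt(P(I)): best value of P reached by any algorithm on instance I
        opt = max(P(g(I)) for g in algorithms.values())
        ratios.append(P(f(I)) / opt)
    return sum(ratios) / len(ratios), min(ratios)
```

In our experiments, $\text{opt}(P(I))$ is taken over all tested algorithms for each instance, so a ratio of 1 means the algorithm matched the best observed performance.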
We define three main performance metrics by which we evaluate our matching algorithms. For a given algorithm $f$ and an outcome $f(I)$: \begin{enumerate}
\item $P_{1}(f(I))$ denotes the number of first rank reserves satisfied by $f$,
\item $P_{2}(f(I))$ denotes the total number of first and second rank reserves satisfied by $f$, and
\item $P_{3}(f(I))$ denotes the average percentile rank of students in $f(I)$. \end{enumerate}
We present below the average and worst case performance of each algorithm relative to our performance metrics outlined above.
In Figure~\ref{fig:p1} we see that the four diversity based algorithms (A-S, EHYY, SY1, SY2) are equivalent on average when selecting rank 1 seats, while the priority algorithms (POG, POS) trail behind. \begin{figure}
\caption{Average performance with respect to $P_1$}
\label{fig:p1}
\end{figure}
In Figure~\ref{fig:p2}, we see that with respect to $P_2$, A-S and SY2 perform optimally, while EHYY is optimal except at higher values of $q_c$. SY1 outperforms POG and POS (which overlap here) for lower values of $q_c$, yet converges with the priority algorithms at higher $q_c$ levels. \begin{figure}
\caption{Average performance with respect to $P_2$}
\label{fig:p2}
\end{figure}
In Figure~\ref{fig:p3}, POG and POS are optimal for all $q_c$ with respect to $P_3$, while SY1 trails closely behind. A-S and SY2 overlap in performance, while EHYY exhibits the worst performance across the tested algorithms. \begin{figure}
\caption{Average performance with respect to $P_3$}
\label{fig:p3}
\end{figure}
In Figure~\ref{fig:p4}, the four diversity algorithms all overlap with equivalent performances. POG and POS are equivalent for lower values of $q_c$ ($\leq 30$), while POS outperforms POG on higher values of $q_c$. \begin{figure}
\caption{Worst case performance with respect to $P_1$}
\label{fig:p4}
\end{figure}
In Figure~\ref{fig:p5}, A-S and SY2 perform equivalently, achieving an optimal result for all values of $q_c$. EHYY is optimal for smaller $q_c$ values ($\leq 50$), however, its performance worsens for larger values. SY1 outperforms the two priority algorithms at lower values, but converges to POG and POS at higher $q_c$. \begin{figure}
\caption{Worst case performance with respect to $P_2$}
\label{fig:p5}
\end{figure}
In Figure~\ref{fig:p6}, POG and POS are optimal, SY1 clearly outperforms other diversity algorithms, and A-S, EHYY, and SY2 are largely equivalent, with EHYY marginally underperforming. \begin{figure}
\caption{Worst case performance with respect to $P_3$}
\label{fig:p6}
\end{figure}
\subsubsection{Analysis}
By observing all of the figures above, we make the general observation that A-S and SY2 have identical performance on this dataset. This is because $\psi$ is less than $q_c$ by a relatively large margin ($\psi = 0.65q_c$), meaning that both SY2 and A-S are able to fill every reserve (regardless of rank) with the highest ranked students possible.
We also notice that POG and POS converge with SY1 for larger $q_c$ values. To explain this, we first note that since SY1 is unaware of rank 2 reserves, any difference between the two algorithms is purely based on rank 1 selection. Since typed students are, on average, distributed lower in the priority order than students without types, a greater $q_c$ forces the priority algorithms to select lower ranked students, among whom typed students are more abundant, allowing POG and POS to achieve greater diversity. Since POG and POS both fill rank 1 seats before rank 2 seats (after selection), they improve their first rank diversity before their second rank diversity as $q_c$ increases. Hence, POG and POS approach SY1 quickly, before filling rank 2 seats to approach the other diversity algorithms.
Figures \ref{fig:p1} and \ref{fig:p4} show that all of the diversity based algorithms are equivalent with respect to one rank ($P_1$), while the performance by POG and POS improves drastically as $q_c$ increases, approaching optimality.
Figures \ref{fig:p2} and \ref{fig:p5} show that A-S and SY2 are optimal with respect to the first two ranks, while EHYY is optimal for lower values of $q_c$. We note that, since SY1 is unaware of rank 2 seats, the performance gap between SY1 and the other diversity algorithms is entirely made up of the number of rank 2 reserves filled.
From Figures \ref{fig:p3} and \ref{fig:p6} we see that SY1 outperforms the other diversity based algorithms as a result of satisfying fewer reserves (and hence filling a larger proportion of $q_c$ based only on priority). We also notice that SY2 and A-S marginally outperform EHYY for all values of $q_c$, as the greedy selection approach of EHYY does not ensure optimality with respect to priority.
From this dataset (where the number of rank 1 and 2 reserves is less than $q_c$) we find that: \begin{enumerate}
\item{When maximising diversity with respect to one rank, all diversity algorithms are optimal.}
\item{When maximising diversity with respect to the first two ranks, A-S, SY2, and EHYY produce near-identical results.}
\item{For $q_c \geq 30$, priority based algorithms fill more than 90\% of the optimal number of rank 1 seats.}
\item{For $q_c \geq 70$, priority based algorithms fill more than 80\% of the optimal number of rank 1 and 2 seats.} \end{enumerate}
\subsection{Comparison using random synthetic data} \label{section:synthetic} In our comparison of the algorithms using the synthetic SAT data above, we have been limited in scope by keeping a mostly consistent set of parameters in order to simulate a student admission problem. In this section, we explore scenarios in which $\psi > q_c$. We thus vary both the number of total (rank 1 + rank 2) reserves available for agents, as well as our acceptance capacity ($q_c$), to further compare the performance of our matching algorithms.
\subsubsection{Data procurement} We maintain the same testing conditions as used for the SAT data above, but with the key difference of varying $\psi$ across different snapshots, while varying $q_c$ within each snapshot. We choose our three main values for $\psi$ as 1.3, 1.5, and 1.7 times $q_c$, which we achieve by scaling the quotas found in Section~\ref{subsection:SAT}. For example, instances with $\psi = 1.3q_c$ have double the number of reserves of each type and rank compared to the corresponding instances in Section~\ref{subsection:SAT}, as the instances in Section~\ref{subsection:SAT} have $\psi = 0.65q_c$.
For each of these three values, we compare our selection algorithms across four values of $q_c$, namely 20, 40, 60, and 80, with a consistent $|S| = 100$. When evaluating the performance of our algorithms, we use the same metrics for comparison as in Section \ref{subsection:SAT}. While we have carried out average and worst case testing for $\psi \in \{1.3q_c, 1.5q_c, 1.7q_c\}$, we only include the worst case graphs for $\psi = 1.7q_c$ for the sake of space. \subsubsection{Performance of algorithms for $\psi = 1.7q_c$}
In Figure~\ref{fig:p22}, A-S and SY1 overlap at $P_1 = 1$. The next best algorithm is EHYY which trends downwards as $q_c$ increases. SY2 outperforms POG and POS (which overlap) at lower $q_c$, but is overtaken at higher values of $q_c$. \begin{figure}
\caption{Worst case performance for $P_1$, $\psi = 1.7q_c$}
\label{fig:p22}
\end{figure}
In Figure~\ref{fig:p23}, A-S, SY2, and EHYY overlap at $P_2 = 1$. SY1 then heavily outperforms POG and POS, which are overlapping. \begin{figure}
\caption{Worst case performance for $P_2$, $\psi = 1.7q_c$}
\label{fig:p23}
\end{figure}
In Figure~\ref{fig:p24}, POG and POS overlap at $P_3 = 1$. SY1 is the next best performing, then SY2 outperforms EHYY and A-S, which overlap. \begin{figure}
\caption{Worst case performance for $P_3$, $\psi = 1.7q_c$}
\label{fig:p24}
\end{figure}
\subsubsection{Analysis} We see significantly different performance from each algorithm compared to what was demonstrated in Section \ref{subsection:SAT}, as well as between different $\psi$ levels. The most notable difference from Section \ref{subsection:SAT} is that, for instances where $\psi > q_c$, A-S and SY2 are no longer equivalent, whereas POG and POS now are.
From observing performance relative to $P_1$, we have gained the following results. \begin{enumerate}
\item A-S and SY1 remain optimal for $P_1$.
\item EHYY's $P_1$ performance is close to 1 for all $q_c$ in the average case, but drops drastically in the worst case, especially at higher values of $\psi$ and $q_c$.
\item SY2 outperforms POG and POS in satisfying $P_1$ for lower $q_c$ values, but is overtaken for higher $q_c$. \end{enumerate}
Relative to $P_2$, we obtain the following results. \begin{enumerate}
\item A-S, SY2, and EHYY are optimal for $P_2$.
\item SY1 is strictly better than POG and POS when satisfying $P_2$.
\item The performance of SY1 increases as $\psi$ increases, both in terms of average and worst case. \end{enumerate}
By comparing $P_3$ performances, we obtain the following results. \begin{enumerate}
\item POG and POS remain optimal for $P_3$.
\item SY1 is significantly better at satisfying $P_3$ than other diversity based algorithms.
\item The performance of all diversity based algorithms decreases relative to $P_3$ as $\psi$ increases.
\item Diversity based algorithms perform worst at $P_3$ for intermediate levels of $q_c$, that is, $q_c \in \{40, 60\}$. \end{enumerate}
\section{Conclusions} We have examined the effectiveness of prominent matching algorithms in satisfying a range of performance metrics across a variety of different instances. We find that there is a necessary tradeoff when balancing performance between priority and reserves, and this tradeoff can be negotiated through our choice of selection algorithm.
When we wish to optimise our matching toward fulfilling reserves across multiple ranks, the A-S algorithm will always provide the best solution while maintaining the highest possible priority of selected agents. However, if we wish to optimise across only one rank, SY1 and SY2 can achieve this while outperforming A-S in terms of priority ranking. It also becomes clear that, for most instances where $q_c$ is not high, reserve based matching algorithms provide markedly different outcomes from priority-only algorithms such as POG and POS, further emphasizing the tradeoff between priority and reserve satisfaction.
Therefore, when selecting an algorithm to solve a problem, we must carefully consider the following points: \begin{enumerate}
\item Whether or not the problem requires optimisation for priority or reserves.
\item The relative importance of filling reserves according to rank against the importance of maximising priority.
\item The value of {capacity} $q_c$ relative to {the number of students} $|S|$.
\item The number of reserves available relative to {capacity} $q_c$. \end{enumerate}
\end{document} |
\begin{document}
\title{Exploring Physical Latent Spaces for Deep Learning}
\begin{abstract} We explore training deep neural network models in conjunction with physical simulations via partial differential equations (PDEs), using the simulated degrees of freedom as latent space for the neural network. In contrast to previous work, we do not impose constraints on the simulated space, but rather treat its degrees of freedom purely as tools to be used by the neural network. We demonstrate this concept for learning reduced representations. It is typically extremely challenging for conventional simulations to faithfully preserve the correct solutions over long time-spans with traditional, reduced representations. This problem is particularly pronounced for solutions with large amounts of small scale features. Here, data-driven methods can learn to restore the details as required for accurate solutions of the underlying PDE problem. We explore the use of physical, reduced latent space within this context, and train models such that they can modify the content of physical states as much as needed to best satisfy the learning objective. Surprisingly, this autonomy allows the neural network to discover alternate dynamics that enable a significantly improved performance in the given tasks. We demonstrate this concept for a range of challenging test cases, among others, for Navier-Stokes based turbulence simulations. \end{abstract}
\section{Introduction}
Partial differential equations (PDEs) are crucial tools for a wide range of science and engineering problems, ranging from blood flow simulations \citep{johnston2004non} and aerodynamics \citep{cummins2018separated} to climate and weather \citep{randall2007climate}. For these PDE problems, numerous numerical methods have been developed to compute solutions as efficiently as possible. However, a central challenge in utilizing these traditional methods lies in the fact that the PDE problems of real-world scenarios are extremely costly to resolve. Solving the given PDE problem in a reduced space is often considered as an alternative, although the desired solutions usually require very fine degrees of freedom both temporally and spatially.
Due to the super-linear scaling of many existing solvers, data-driven methods pose a highly attractive alternative \citep{morton2018deep,wiewel2020latent,kochkov2021machine}. Here the reduced space typically has much fewer degrees of freedom $d_r$, compared to the size of the full solutions $d_f$, i.e., $d_r \! \ll \! d_f$.
\begin{figure}
\caption{The autonomous exploration of the reduced space leads to a reduced representation (a) that differs from the linear down-sampling (b) of the reference (d). It enables our model to restore accurate solutions (c)
that lead to relative improvements of 82\% for the Karman vortex street case with respect to the baseline.}
\label{fig:teaser}
\end{figure}
In this paper, we present a novel training method that explores using physical states as latent spaces for deep learning. Hence, in contrast to many previous works \cite{morton2018deep,kim2019,mohan2019compressed,stachenfeld2022learned}, we do not consider the output or intermediate states of a neural network as latent space, but rather the physical states $\mathbf{r}\in \mathbb{R}^{d_r}$ of a PDE solver. A deep neural network (DNN) is trained to exploit the content of $\mathbf{r}$, and fill it in a way that best satisfies the given learning objective. This \emph{shaping} of the physical latent space gives the neural network the chance to discover modified dynamics, of which an example is shown in Fig.~\ref{fig:teaser}, that simplify the learning task in comparison to, e.g., being constrained to a lower-dimensional representation of the target dynamics.
For a given PDE, our approach consists of (a) an \emph{encoder} model that reduces the degrees of freedom of a physical state, (b) a \emph{physics solver} and \emph{adjustment} DNN both working in the reduced space, and (c) a \emph{decoder} turning the reduced state back into the target high resolution space.
To train our models with a numerical solver, we adopt a differentiable simulator approach \citep{hu2019difftaichi,holl2020,thuerey2021physicsbased}.
Within the interactions of the encoder, differentiable solver, adjustment, and decoder, we let the encoder model learn the latent space representation for the given target solutions without additional constraints. An end-to-end training of this pipeline
gives the encoder a complete autonomy to shape the reduced representation.
Our experiments demonstrate that this autonomy leads to a better performance, especially in terms of generalization. Moreover, interestingly, we observe that the learned reduced representation differs from a typical down-sampled representation. Our experiments also show that training the models while interacting with the physics solver for the latent space is a crucial factor for accurately restoring the solutions. We demonstrate this concept with three complex, non-linear PDE problems. For all these scenarios, our models outperform conventional and more tightly constrained models in terms of restoration accuracy.
\textbf{Previous work:} The study of machine learning (ML) techniques for PDEs has a long history in science and engineering \citep{crutchfield1987equations,kevrekidis2003equation,brunton2016discovering}. A popular direction in using ML for PDEs is to aim for the replacement of entire PDE solvers by neural network models that can efficiently approximate the solutions as accurately as possible \citep{lusch2018deep,kim2019,wang2020physicsinformed,bhattacharya2021model}. Moreover, Fourier Neural Operator \citep{li2021fourier} and Neural Message Passing \citep{brandstetter2022message} models have been introduced for learning PDEs, aiming at a better representation of full solvers with neural network models. Instead of this pure ML-driven approach to solve target PDEs, an alternative approach exists in the form of hybrid methods that combine ML with traditional numerical methods. For example, a learned model can replace the most expensive part of an iterative PDE solver \citep{tompson2017} or supplement inexpensive yet under-resolved simulations \citep{um2018,sirignano2020dpm}. These approaches have demonstrated the promising capabilities of ML to solve PDE problems for many different applications.
Recently, differentiable components for ML have received a significant amount of attention, particularly when training neural network models in recurrent setups for spatio-temporal problems \citep{amosKolter2017,de2018end,toussaint2018differentiable,chen2018neural,schenck2018spnets,liang2019differentiable,wang2020differentiable,um2020solverintheloop,kochkov2021machine,zhuang2021learned}. Consequently, a variety of differentiable programming frameworks have been developed for different domains \citep{schoenholz2019jax,hu2019difftaichi,innes2019differentiable,holl2020}. These differentiable frameworks allow neural networks to closely interact with PDE solvers, which provides the model with important feedback about the temporal evolution of the target problem from the recurrent evaluations. Targeting similar problems for temporal evolution, we also employ a differentiable framework in our training procedure.
Effectively utilizing latent spaces lies at the heart of many ML-based approaches for solving PDEs. A central role of the latent space is to embed important (often non-linear) information for the given training task into a set of reduced degrees of freedom. For example, with an autoencoder network architecture, the latent space can be used for discovering interpretable, low-dimensional dynamical models and their associated coordinates from high-dimensional data \citep{champion2019datadriven}. Moreover, thanks to their effectiveness in terms of embedding information and reducing the degrees of freedom, latent space solvers have been proposed for different problems such as advection-dominated systems \citep{maulik2021reducedorder} and fluid flows \citep{wiewel2020latent,fukami2021sparse}. While those studies typically focus on training equation-free evolution models, we focus on latent states that result from the interaction with a PDE solver. Neural network models have also been studied for the integration of a dynamical system with an ordinary differential equation (ODE) solver in the latent space \citep{chen2018neural}. This approach targets general neural network approximations with a simple physical model in the form of an ODE, whereas we focus on learning tasks for complex non-linear PDE systems.
The ability to learn underlying PDEs has allowed neural networks to improve reduced, approximate solutions. Residual correction models are trained to address numerical errors of PDE solvers \citep{um2020solverintheloop}. Details at sub-grid scales are improved via learning discretizations of PDEs \citep{barsinai2019data} and learning solvers \citep{kochkov2021machine,stachenfeld2022learned} from high-resolution solutions. Moreover, multi-scale models with downsampled skip-connections have been used for super-resolution tasks of turbulent flows \citep{fukami2019super}. These methods, however, typically employ a constrained solution manifold for the reduced representation. Indeed, the reduced solutions are produced using coarse-grained simulations with standard numerical methods, while our work shows the advantages of autonomously exploring the latent space representation through our joint training methodology.
\begin{figure*}
\caption{Architecture of our autonomous training approach
for $N$ integrated solver steps. The initial state is encoded into the reduced space, the solver and adjustment are applied $N$ times, and the adjusted states are decoded back into the reference space.}
\label{fig:architecture}
\end{figure*}
\section{Exploring Physical Latent Spaces} \label{sec:models}
Our goal is to explore how the training process for neural networks can leverage the environment of a PDE in order to achieve a given learning objective. Let $\mathbf{f} \in \mathbb{R}^{d_f}$ and $\mathbf{r} \in \mathbb{R}^{d_r}$ denote two discretized solutions of this PDE, where $\mathbf{f}$ and $\mathbf{r}$ denote a fine representation and its reduced version, respectively, with
$d_r \! \ll \! d_f$. Considering an encoder $\mathcal{E}$ with $\mathcal{E}(\mathbf{f} | \theta_E): \mathbb{R}^{d_f} \rightarrow \mathbb{R}^{d_r}$, a decoder function
$\mathcal{D}$ transforms the reduced representation back to the higher dimensional references, i.e., $\mathcal{D}(\mathbf{r}| \theta_D): \mathbb{R}^{d_r} \rightarrow \mathbb{R}^{d_f}$. The weights $\theta_E$ and $\theta_D$ denote the encoder and decoder weights, respectively.
We focus on the numerical integration of a target PDE problem and indicate the temporal evolution of each state as a subscript. A reference solution trajectory integrated from a given initial state $\mathbf{f}_t$ at time $t$ for $n$ steps is the finite set of states $\{ \mathbf{f}_t, \mathbf{f}_{t+1}, \cdots, \mathbf{f}_{t+n} \}$. Without loss of generality, each reference state is integrated over time with a fixed time step size using a numerical solver $\mathcal{P}_f$, i.e., $\mathbf{f}_{t+1} = \mathcal{P}_f(\mathbf{f}_{t})$. Similarly, we integrate a reduced state $\mathbf{r}_{t}$ over time using a corresponding numerical solver $\mathcal{P}$ at the reduced space, which we will call \emph{reduced
solver} henceforth, i.e., $\mathbf{r}_{t+1} = \mathcal{P}(\mathbf{r}_{t})$.
As our primary goal is to facilitate the restoration of the reference states, we allow for modifications of the reduced states. The output at time $t+1$ as given by $\mathcal{P}$ is additionally transformed by an adjustment DNN $\mathcal{A}$ with $\mathbf{\hat{r}}_{i} = \mathcal{A}(\mathbf{r}_{i}| \theta_A)$ with $\theta_A$ denoting the weights. We focus on cases where $\mathcal{P}$ is the same for reduced and fine discretizations. However, the exact content of the reduced states $\mathbf{\hat{r}}$ does not matter, and $\mathcal{A}$ primarily serves to ensure temporal coherence in terms of the outputs as produced by the encoder. Hence, we do not constrain the adjusted states to match direct transformations of the references, but constrain them to the output of the encoder by minimizing $\|\mathbf{\hat{r}}_{t+i} - \mathcal{E}(\mathbf{f}_{t+i}| \theta_E)\|_2$.
The decoder model $\mathcal{D}$ has the objective to restore a reference solution trajectory $\{ \mathbf{\hat{f}}_{t}, \mathbf{\hat{f}}_{t+1}, \cdots, \mathbf{\hat{f}}_{t+n} \}$ from the reduced trajectory
$\{ \mathbf{\hat{r}}_{t}, \mathbf{\hat{r}}_{t+1}, \cdots, \mathbf{\hat{r}}_{t+n} \}$, and thus $\mathbf{\hat{f}}_{i} = \mathcal{D}(\mathbf{\hat{r}}_{i}| \theta_D)$. The joint learning objective of the three involved DNNs is to minimize the error between the approximate solutions and their corresponding reference solutions:\\ \begin{equation} \label{eq:loss}
\mathcal{L} = \sum_{i=1}^n ||\mathbf{\hat{f}}_{t+i} - \mathbf{f}_{t+i}||_2 + ||\mathbf{\hat{r}}_{t+i} - \mathcal{E}(\mathbf{f}_{t+i}| \theta_E)||_2 , \end{equation}
where $\mathbf{\hat{r}}_{t+i}$ is obtained by $i$ recurrent evaluations of $\mathcal{P}$ and $\mathcal{A}$, starting from the initial reduced state, i.e., $\mathbf{\hat{r}}_t = \mathcal{E}(\mathbf{f}_t | \theta_E)$. Hence, at training time, gradients through all $i$ steps of both modules are computed for back-propagation and, consequently, all the models get jointly updated at each training iteration.
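The recurrent rollout behind Eq.~\ref{eq:loss} can be sketched as follows. The encoder $\mathcal{E}$, reduced solver $\mathcal{P}$, adjustment $\mathcal{A}$, and decoder $\mathcal{D}$ are stand-in functions here; in practice they are the trained CNNs and the differentiable PDE solver, and the loss is back-propagated through all steps.

```python
import numpy as np

# Minimal sketch of the rollout and loss of Eq. (1). E, P, A, D are
# placeholder callables standing in for the encoder, reduced solver,
# adjustment network, and decoder.

def rollout_loss(E, P, A, D, f_traj):
    """f_traj: list of reference states f_t, ..., f_{t+n}."""
    r_hat = E(f_traj[0])        # encoder is evaluated once, on the initial state
    loss = 0.0
    for f_ref in f_traj[1:]:
        r_hat = A(P(r_hat))     # reduced solver step, then adjustment
        f_hat = D(r_hat)        # decode back into the reference space
        loss += np.linalg.norm(f_hat - f_ref)      # restoration term
        loss += np.linalg.norm(r_hat - E(f_ref))   # temporal-coherence term
    return loss
```

With a mean-pooling encoder, a nearest-neighbor decoder, and identity solver and adjustment, a constant trajectory yields zero loss, illustrating that both terms vanish only when the decoded states match the references and the adjusted states match the encoded references.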
Fig.~\ref{fig:architecture} shows the architecture of our approach. As the encoder does not receive any explicit constraints and has the complete freedom to autonomously explore the reduced space to arrive at a suitable representation, we denote this approach by \emph{ATO}.
We note that the evolution of the fine solution restored by the decoder purely depends on the recurrent chain of reduced simulator and adjustment inferences. The encoder is only evaluated a single time to produce the initial reduced state, while the interplay of the encoder and adjustment networks is taken into account by the temporal coherence term, the second term in Eq.~\ref{eq:loss}.
\textbf{Comparisons and baselines}: To illustrate the capabilities of physical latent spaces, we compare to the following simplified baselines and state-of-the-art approaches from previous work.
A first simplified variant
replaces the $\mathcal E$ network with a
linear down-sampling operator to produce the reduced states.
This represents the
most commonly used operation for domain reduction. Therefore, only the adjustment and decoder models are trainable. These two networks still interact with the differentiable solver in order to fulfill both tasks of adjusting the reduced states and transforming them back into the reference space, but now the physical latent space states are tightly constrained to match down-sampled versions of the references. Hence, this variant denoted by \emph{ATO-CE} (as \emph{constrained encoder}) mimics our full architecture, but removes most of the latent space autonomy.
We additionally compare to two state-of-the-art solvers that operate in reduced spaces \citep{stachenfeld2022learned, um2020solverintheloop}. The former, denoted by
\emph{Stach} in the following, represents a neural network model that aims at directly predicting solution states in the reduced space at each time-step. Hence, it does not make use of the reduced physics solver $\mathcal P$. On the other hand, the second variant \citep{um2020solverintheloop}, denoted by \emph{SOL}, consists of a differentiable physics solver and a trainable corrector model that addresses numerical errors of the solution states. In both cases, unlike \emph{ATO}, the target solution is the linearly down-sampled high-resolution reference; thus, the models are trained to reach these prescribed reduced space solutions.
We note that the solutions of these state-of-the-art solvers stay in the reduced space. To be fair in performance comparison, we transform their reduced outputs back into the target high-resolution space using a separate neural network. To fulfill that task, we employ a super-resolution model proposed for spatio-temporal turbulence problems \citep{fukami2019super}. The reduced states produced by \emph{Stach} and \emph{SOL} are fed to the trained super-resolution model, resulting in high-resolution states. Henceforth, these results will be denoted as \emph{Stach + SR} and \emph{SOL + SR}.
\section{Experiments} \label{sec:experiments}
For each of the following scenarios, represented in 2D, the reference solution trajectories are generated for 200 steps from different initial conditions with a fixed time step size. We consider a four times coarser discretization for the reduced representation and the velocity field as our restoration target.
\subsection{Non-linear advection-diffusion} \label{sec:burgers}
We first consider a two-dimensional non-linear advection-diffusion
problem, also known as Burgers' equation. This problem is modeled as follows:
\begin{equation*}
\label{eq:burgers}
\partial{\mathbf{v}}/\partial{t} = - (\mathbf{v} \cdot \nabla)\mathbf{v} +
\nu \nabla^2\mathbf{v}
\end{equation*}
where $\mathbf{v}$ is the velocity and $\nu$ is the kinematic viscosity coefficient.
In this scenario, we let the random initial velocity fields decay without any external perturbation. Thus, the velocity is slowly smoothing out over time. Each solution uses a domain discretized with $128^2$ cells and a centered layout with periodic boundary conditions. The test set is generated using different randomized initial velocities.
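An explicit finite-difference step for this equation on a periodic grid can be sketched as below. This is an illustrative discretization with central differences, not the solver used in our experiments; step sizes are arbitrary placeholders.

```python
import numpy as np

# Illustrative explicit step for the 2D Burgers equation on a periodic grid:
# dv/dt = -(v . grad) v + nu * laplace(v). Central differences via np.roll;
# a minimal sketch of the decaying dynamics, not the authors' solver.

def burgers_step(vx, vy, nu=0.01, dt=0.01, dx=1.0):
    def ddx(f): return (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1)) / (2 * dx)
    def ddy(f): return (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0)) / (2 * dx)
    def lap(f):
        return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f) / dx**2
    vx_new = vx + dt * (-(vx * ddx(vx) + vy * ddy(vx)) + nu * lap(vx))
    vy_new = vy + dt * (-(vx * ddx(vy) + vy * ddy(vy)) + nu * lap(vy))
    return vx_new, vy_new
```

The `np.roll` calls implement the periodic boundary conditions of the $128^2$ domain directly.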
\subsection{Forced advection-diffusion} \label{sec:forced-burgers}
As a more challenging example, we add external forces $\mathbf{g}$ to the advection-diffusion problem. Here, the
external forces, i.e., $\mathbf{g} = [g_x~g_y]^T$,
consist of
20 overlapping sine functions, each with a random amplitude $a_i$, wave vector $\alpha_i$ (encoding wave direction and wave number $k_i$), frequency $w_i$, and phase shift $\phi_i$,
as follows:
\begin{align*}
g_x(\mathbf{x}, t) = \sum_{i=1}^{20}a_i\sin(\alpha_i \cdot \mathbf{x} + w_i t + \phi_i) \\
g_y(\mathbf{x}, t) = \sum_{i=1}^{20}a_i\sin(\alpha_i \cdot \mathbf{x} + w_i t + \phi_i).
\end{align*}
This example adopts the same setup as the non-linear advection-diffusion, but, for each simulation, a different randomized force sequence is applied. For most of the models, $\mathrm{lerp(\mathbf{g})}$ is directly applied as a force field to the reduced states. However, for the \emph{ATO} model, we let our encoder infer a reduced force field $\mathbf{\hat{g}}$ that affects the simulation over time in the reduced solver.
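Sampling one component of this forcing can be sketched as follows. The parameter ranges are illustrative assumptions, not the values used for our data-set, and the wave number is absorbed into the wave-vector samples.

```python
import math, random

# Sketch of one scalar component of the randomized forcing g(x, t): a sum of
# 20 sine waves with random amplitude, wave vector, frequency, and phase.
# All parameter ranges below are hypothetical, for illustration only.

def make_force_component(n_waves=20, seed=0):
    rng = random.Random(seed)
    waves = [(rng.uniform(0.1, 1.0),                     # amplitude a_i
              (rng.uniform(-1, 1), rng.uniform(-1, 1)),  # wave vector alpha_i
              rng.uniform(0.1, 2.0),                     # frequency w_i
              rng.uniform(0.0, 2 * math.pi))             # phase shift phi_i
             for _ in range(n_waves)]

    def g(x, y, t):
        return sum(a * math.sin(ax * x + ay * y + w * t + phi)
                   for a, (ax, ay), w, phi in waves)
    return g
```

Sampling the parameters once per simulation and evaluating $g$ over space and time yields the smoothly varying, randomized force sequences applied to each training run.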
\subsection{Karman vortex street} \label{sec:karman}
We additionally consider a complex constrained PDE problem in the form of the Navier-Stokes equations. This problem is modeled as follows:
\begin{align*}
\label{eq:ns}
\partial{\mathbf{v}}/\partial{t} = - (\mathbf{v} \cdot \nabla)\mathbf{v} -
\nabla{p}/\rho + \nu \nabla^2\mathbf{v} + \mathbf{g} \\
\textrm{subject to} \quad \nabla \cdot \mathbf{v} = 0
\end{align*} where $\rho$ is the density and $p$ is the pressure.
In this scenario, a continuous inflow collides with a fixed circular obstacle. It creates an unsteady wake flow, which evolves differently depending on the Reynolds number. For the reference solutions, we use a numerical fluid solver that adopts the operator splitting scheme, Chorin projection for implicit pressure solve \citep{chorin1967numerical}, semi-Lagrangian advection \citep{stam1999}, and explicit integration for diffusion. We choose Reynolds numbers between Re = 100 and Re = 1190 for our training data-set, and Reynolds numbers from Re = 1630 to Re = 2230 for testing. The encoder of our \emph{ATO} setup takes the Reynolds number as an additional input in order to guide the exploration of reduced spaces. The \emph{Stach} model also receives the Reynolds number to let the model learn different physics evolutions. Finally, in order to be fair in our comparisons, the obstacle mask is applied to each state output by the \emph{Stach} solver. All the target solutions use a domain discretized with $128\times256$ cells and a staggered layout with closed boundaries, except for the bottom boundary for the constant inflow velocity and the top boundary that remains open.
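The semi-Lagrangian advection step \citep{stam1999} used by the reference solver can be sketched as below: each grid point traces back along the velocity and samples the field there with bilinear interpolation. Periodic wrapping is assumed here for brevity; the actual solver uses the boundary setup described above.

```python
import numpy as np

# Simplified semi-Lagrangian advection (Stam 1999): backtrace each grid point
# along the velocity field and bilinearly interpolate the advected quantity.
# Periodic wrapping stands in for the real boundary handling.

def advect(field, vx, vy, dt=1.0):
    ny, nx = field.shape
    ys, xs = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    x0, y0 = xs - dt * vx, ys - dt * vy          # departure points
    i0, j0 = np.floor(y0).astype(int), np.floor(x0).astype(int)
    fy, fx = y0 - i0, x0 - j0                    # interpolation weights

    def at(i, j):
        return field[i % ny, j % nx]             # periodic lookup

    return ((1 - fy) * ((1 - fx) * at(i0, j0) + fx * at(i0, j0 + 1)) +
            fy * ((1 - fx) * at(i0 + 1, j0) + fx * at(i0 + 1, j0 + 1)))
```

Its unconditional stability is what makes the scheme attractive for the large time steps of the reference simulations.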
\subsection{Forced turbulence}
\label{sec:forced_turb}
This last scenario is likewise based on the Navier-Stokes equations and uses initial conditions with vortices randomly initialized in the domain. In this scenario, the external force field $\mathbf{g}$ serves to slow down the decaying motion of the vortex structures. A single viscosity $\nu = 0.1$ is used in the training data-set, but the multiplicity of initial velocities and force fields enables a great variety of evolutions. In this scenario, as in the forced advection-diffusion case, we let the encoder of the \emph{ATO} setup infer a reduced force field $\mathbf{\hat{g}}$ that is integrated by the reduced solver. $\mathrm{lerp}(\mathbf{g})$ is applied to the reduced spaces for the other setups. Each solution uses a domain discretized with $128^2$ cells and a centered layout with periodic boundary conditions. A different randomized force sequence is generated for each simulation, and the test set is generated using different randomized initial vortices and force fields than for training.
\begin{figure*}
\caption{Example frames of the absolute error in velocity for (left) the non-linear advection-diffusion with randomized forcing, (middle) the Karman vortex street with Re = 1800, and (right) the forced turbulence cases.}
\label{fig:errormaps}
\end{figure*}
\subsection{Network architecture and training procedure} \label{sec:archi}
The encoder $\mathcal{E}(\cdot | \theta_E)$ is implemented as a convolutional neural network (CNN). It contains two max-pooling layers that reduce the input to a four times lower dimension at the output. For the convolutional layers, we adopt circular padding for the periodic boundary condition problems and zero padding for the closed boundary condition problem. The adjustment architecture follows the corrector network of \cite{um2020solverintheloop}, although the two serve different purposes. The decoder is adapted from the multi-scale model \cite{fukami2019super} for the Karman vortex street and forced turbulence cases, and implemented as a conventional CNN model with two up-sampling layers for the others. The separate SR model employs the same multi-scale model. For all models, every convolutional layer, except for the last one, is followed by the Leaky ReLU activation function.
When training the \emph{ATO} model for $N$ integrated steps, the encoder, differentiable physics solver, adjustment, and decoder are evaluated recurrently $N$ times in the forward pass of each training iteration. The network weights are then updated by back-propagating through all $N$ steps to accumulate their gradients. At each iteration, for a given batch size, we randomly sample initial states from the reference solution trajectories and integrate the approximate solution trajectories for $N$ steps. All our training runs use the Adam optimizer \citep{kingma2014adam} and a decaying learning rate schedule.
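To make the unrolled training loop concrete, a minimal sketch of the recurrent forward pass and loss accumulation is given below. All module names and implementations (a block-average encoder, simple linear stand-ins for the solver, adjustment, and decoder) are hypothetical placeholders, not the actual networks; in a framework with automatic differentiation, the returned loss would be back-propagated through all $N$ steps.

```python
import numpy as np

# Hypothetical stand-ins for the four modules; each is a simple map so
# that the recurrent control flow can run end to end.
def encoder(x):
    # reduce the resolution 4x by block averaging
    return x.reshape(x.shape[0] // 4, 4, x.shape[1] // 4, 4).mean(axis=(1, 3))

def solver(z):
    # placeholder reduced physics step
    return 0.99 * z

def adjust(z):
    # placeholder learned correction
    return z + 0.01 * np.ones_like(z)

def decoder(z):
    # restore the resolution 4x by nearest-neighbour upsampling
    return np.repeat(np.repeat(z, 4, axis=0), 4, axis=1)

def rollout_loss(x0, refs):
    """Encoder -> (solver -> adjust) repeated N times -> decoder,
    accumulating the reconstruction loss over all N integrated steps."""
    z = encoder(x0)
    loss = 0.0
    for ref in refs:  # one entry per integrated step
        z = adjust(solver(z))
        loss += np.mean(np.abs(decoder(z) - ref))
    return loss / len(refs)
```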
\section{Results} \label{sec:results}
We evaluate the trained models based on relative improvements over a reduced \emph{baseline} simulation. The baseline solutions are integrated purely by the reduced solver, starting from the linear down-sampling of the initial state, without interaction with any neural network; they are mapped back to the reference space with linear interpolation. Errors are computed with respect to the reference solutions, hence an improvement of 100\% means that the restored solutions are identical to the reference. We evaluate each model using the mean absolute error (MAE) as metric. Example frames of the absolute error in velocity for our experiments are shown in Fig.~\ref{fig:errormaps}. For the turbulent flow scenarios, we measure the errors in both velocity and vorticity. For all scenarios, we examine models trained with different numbers of integrated steps; as models trained with more steps generally perform better, we report the results for the largest number of integrated steps in each scenario.
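The relative-improvement metric described above can be sketched as follows; the function name is hypothetical:

```python
import numpy as np

def improvement(reference, restored, baseline):
    """Relative improvement over the baseline in percent; 100% means
    the restored solution matches the reference exactly."""
    mae_model = np.mean(np.abs(restored - reference))
    mae_base = np.mean(np.abs(baseline - reference))
    return 100.0 * (1.0 - mae_model / mae_base)
```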
\textbf{Reduced representations:} The images of Fig.~\ref{fig:latent_images} show visual examples of the reduced representations for all three scenarios, for different timesteps. The graphs of Fig.~\ref{fig:latent_plot} show the quantified differences between the reduced states produced by the different trained models and the conventionally down-sampled reference states. When our models are trained with complete autonomy, we observe that our training procedure leads the reduced representation to be visually close to conventional down-sampling, while being considerably different quantitatively. We note that multiple reduced representations of the same scenario, obtained from different training initializations, stay close to each other, which indicates that there exists a manifold of reduced solutions to which our \emph{ATO} model converges in order to achieve the best performance.
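As a hedged illustration of this quantification, one may measure the distance of a reduced state to a down-sampled reference; here the mean absolute difference is an assumed choice of distance, and block averaging stands in for the linear down-sampling:

```python
import numpy as np

def latent_distance(z, reference):
    """Mean absolute difference between a reduced state z and a
    down-sampled reference; block averaging is used as a stand-in
    for linear down-sampling (an assumption for illustration)."""
    n = reference.shape[0] // z.shape[0]
    m = reference.shape[1] // z.shape[1]
    down = reference.reshape(z.shape[0], n, z.shape[1], m).mean(axis=(1, 3))
    return float(np.mean(np.abs(z - down)))
```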
\begin{figure}
\caption{Example frames of (left) the non-linear advection-diffusion velocity fields with
randomized forcing, (right) the Karman vortex street vorticities with Re = 1830, and (bottom) the forced turbulence vorticities. From top to bottom for each: reduced space of ATO, down-sampled reference, and baseline.}
\label{fig:latent_images}
\end{figure}
\textbf{Non-linear advection-diffusion}: This example comprises two cases: one without external forces and one with randomized forcing, which influences the solution trajectories. We evaluate the models trained with 32 integrated steps on five test trajectories of 200 steps each. We note that these simulations were performed on a four times larger domain than the one used for training, i.e., $256\times256$ instead of $128\times128$.
\begin{figure}
\caption{Distance between each model's reduced space and the down-sampled reference. The error bars indicate the standard deviation over the test runs. The reduced representation of ATO considerably differs from the down-sampled reference for each case.}
\label{fig:latent_plot}
\end{figure}
This example demonstrates that complete autonomy improves the training quality. As shown in Fig.~\ref{fig:results_plot}, the \emph{ATO} model outperforms the others in both cases. The benefit of the autonomy stands out when comparing \emph{ATO} with its variant with a constrained encoder, \emph{ATO-CE}, which is comparable to \emph{SOL}. The \emph{ATO} model shows an average improvement of 57\% over the five test trajectories in the case without forces, and of 38\% in the case with random forcing. The \emph{Stach} model, on the other hand, fails to yield any significant improvement, with errors similar to, or even higher than, the baseline. We note that the super-resolution model, which was originally proposed for turbulent flows, was not able to improve the \emph{Stach} and \emph{SOL} performance, and thus is not shown here.
\textbf{Karman vortex street}: This example considers different vortex shedding behaviors depending on the Reynolds number of each simulation. We evaluate the models trained with 32 integrated steps on five test simulations with Reynolds numbers ranging from 1630 to 2230, consisting of 800 steps each. In this scenario, we test the extrapolation ability of the models in both physically and temporally different test simulations, which use higher Reynolds numbers, and are thus more turbulent, than the training trajectories, and last four times longer.
For the generalization in terms of turbulence, Fig.~\ref{fig:results_plot} shows that \emph{ATO} significantly outperforms the other setups, with a relative improvement of 82\% (and 68\%) in terms of velocity (and vorticity) MAE. In comparison, \emph{Stach + SR} improves by 60\% (and -1\%), \emph{ATO-CE} by 57\% (and 39\%), and \emph{SOL + SR} by 54\% (and 42\%). We observe that \emph{Stach + SR} suffers from noise perturbations, which explains why this model performs worse in the vorticity metric. Moreover, the tests on temporal extrapolation show that \emph{Stach + SR} diverges from the reference much faster than the models that integrate a physics solver. Additional results are provided in the appendix.
These results show that our \emph{ATO} model, thanks to its latent-space content, has learned to approximate the physical dynamics of the simulation more accurately, which yields improved capabilities for generalization.
\textbf{Forced turbulence}: As a more complex scenario based on the Navier-Stokes
equations, this example considers the evolution of initial random vortices with
randomized forcing, which leads to different, highly chaotic turbulent flows
as shown in Fig.~\ref{fig:results_images} (right).
We evaluate the models trained with 16 integrated steps, with
five randomized setups in both initial velocity and forcing, lasting 200 steps
each. With these setups, we test the generalization ability of the models.
Fig.~\ref{fig:results_plot}
shows that the \emph{ATO} model
yields substantially improved results
with a relative improvement of 69\% (and 64\%) in terms of velocity MAE
(and vorticity).
In comparison, \emph{SOL +
SR} improves by 49\% (and 43\%), \emph{ATO-CE} by 48\% (and 42\%), and \emph{Stach + SR} by 28\% (and 16\%).
\begin{figure}
\caption{MAEs for the different models' velocities. Top: the non-linear advection-diffusion cases without and with forcing, bottom: the Karman vortex street case (left), and the forced turbulence case (right). The ATO model outperforms the others for all scenarios.}
\label{fig:results_plot}
\end{figure}
\begin{figure*}
\caption{Example frames of (left) the non-linear advection-diffusion with randomized forcing, (middle) the Karman vortex street with Re = 1830, and (right) the forced turbulence cases.}
\label{fig:results_images}
\end{figure*}
\textbf{Runtime performances:} For each scenario, we compare the runtime performance of the trained models, and in particular \emph{ATO}, with that of the reference, measuring the timing of one simulation of 100 frames, averaged over ten different runs. For the reference, the computations include the solving cost of each step but not the evaluation cost of the force fields, since these are considered external factors. For \emph{ATO}, the computations start with the initial velocity inference by the encoder model and stop when all 100 frames have been output by the decoder. Similarly, the encoding cost of each force field is not included because it can be pre-computed.
We do not include the timings of disk I/O, which are identical for all trained models and the reference. All timings were computed using a single \emph{GeForce RTX 2080 Ti} with 11GiB of VRAM. For both the Karman vortex street and forced turbulence cases, our \emph{ATO} model yields improvement in runtime compared to the reference.
Table~\ref{perf_table} summarizes the computational timings for the reference, the baseline (reduced solver without any DNN model), and the trained models. For the Karman vortex street, the \emph{ATO} inference for 100 frames takes 15\% of the reference computing time, against 11\% for the baseline. Yet, as shown in Sec.~\ref{sec:results}, it improves upon the baseline by 82\% in velocity MAE. Similarly, for the forced turbulence case, the \emph{ATO} inference takes 60\% of the reference runtime, against 50\% for the baseline, and yet improves upon the baseline by 69\%. We note that, for the non-linear advection-diffusion case, the solver does not involve any iterative solving steps; all solving steps are explicit and highly parallelized, which explains the efficient reference run and the negligible difference between the reference and the baseline. Moreover, in the forced turbulence case, the external forces make the flows more complex and thus the iterative solving harder. This explains why the computation time of the baseline is larger than in the Karman vortex street case, despite the domain containing half as many cells. For the reference, however, the Karman case also captures complex flows, which leads to a slightly larger computation time. The \emph{Stach + SR} model often has the best runtime performance because it does not contain any numerical solver, but it barely improves upon the baseline in most cases.
\begin{table}[ht]
\caption{MAEs of the different models' velocities, along with the runtime of one simulation of 100 frames (averaged over ten runs, in seconds) for each model.}
\label{perf_table}
\centering
\resizebox{.9\columnwidth}{!}{
\begin{tabular}{ccccccc}
\toprule
& \multicolumn{2}{c}{Non-lin. adv.-diff.} & \multicolumn{2}{c}{Karman vort. st.} & \multicolumn{2}{c}{Forced turb.} \\
& \multicolumn{2}{c}{($256\times256$)} & \multicolumn{2}{c}{($128\times256$)} & \multicolumn{2}{c}{($128\times128$)} \\
\cmidrule{2-7}
& MAE & runtime & MAE & runtime & MAE & runtime \\
\midrule
Reference & N/A & 2.71 & N/A & 26.12 & N/A & 20.69 \\
\midrule
Baseline & 0.041 & 2.41 & 0.121 & 2.78 & 0.50 & 10.29 \\
\midrule
Stach+SR & 0.076 & 5.47 & 0.029 & 4.41 & 0.36 & 7.94 \\
\midrule
SOL+SR & 0.024 & 7.13 & 0.035 & 6.75 & 0.25 & 15.66 \\
\midrule
ATO-CE & 0.022 & 8.04 & 0.024 & 6.68 & 0.26 & 14.12 \\
\midrule
\textbf{ATO (ours)} & \textbf{0.017} & 6.82 & \textbf{0.012} & 3.91 & \textbf{0.16} & 12.39 \\
\bottomrule
\end{tabular}
} \end{table}
\section{Discussion} \label{sec:limit}
To summarize, our results show that autonomously explored latent spaces of physics simulations can yield significantly improved inference results for complex tasks. However, these improvements are not guaranteed in all cases. Rather, our results indicate that the proposed method works especially well in scenarios where the reduced representation has a chance to successfully interact with the reduced solver and decoder (e.g., reasonable timesteps), and where the model can leverage its degrees of freedom to discover a new space that is more suitable for our tasks.
Our evaluation shows that the distance of the latent space content to the linearly down-sampled reference solutions reliably indicates whether a model successfully made use of the latent space content. A large distance typically means that the \emph{ATO} model deviated from the regular physical evolution and was able to encode additional information in the physical states of the simulator. This information in turn can be processed by the decoder model to infer an improved output.
As our work has focused on scenarios where the reduced simulator uses the same solving scheme as the reference, it only provides a starting point for the exploration of physical latent spaces. Similarly to the use of ODEs for generic deep learning tasks \cite{chen2018neural}, our results show that the content of complex PDEs could provide a powerful basis for deep learning algorithms.
\section{Conclusions} \label{sec:conclusion}
We have demonstrated that the degrees of freedom of physical simulations can be leveraged by neural networks to achieve an improved performance at inference time. Our results show across a wide range of challenging PDE scenarios that DNNs can learn to develop modified dynamics that encode useful information about learning objectives. Taking our results as a starting point, it will be highly interesting to investigate other PDEs and different combinations of reduced simulations with learning objectives in future work.
\appendix
\twocolumn[\centering \LARGE \textbf{Appendix: Exploring Physical Latent Spaces for Deep Learning}\\
]
\section{Implementation details}
We provide the implementation details with the code, which
can be found in the supplemental material. The code includes the neural network architectures with the hyper-parameters used for training.
We refer the reader to the guidelines (\emph{README.md}) to reproduce the results presented in this work. The code and trained models of our experiments will be published upon acceptance.
\section{Experiments}\label{appx:experiments}
In order to acquire a training data-set for each scenario, we generate a set of solution sequences of the given PDE problem with different setups. All considered PDEs operate on a continuous velocity field $\mathbf{v}$ in two-dimensional space, i.e., $\mathbf{v} = [v_x, v_y]^T$. As the reference simulations are performed on regularly discretized grids, we focus on exploring latent spaces (i.e., reduced representations) that are four times coarser than the reference grids.
\subsection{Advection-diffusion}\label{appx:burgers-no-forcing}
For the advection-diffusion problem, which is based on Burgers' equation, we randomly initialize a velocity field and let it evolve over time through advection and diffusion:
\begin{align*}
\frac{\partial{v_x}}{\partial{t}} &= - (\mathbf{v} \cdot \nabla)v_x + \nu \nabla^2v_x \\
\frac{\partial{v_y}}{\partial{t}} &= - (\mathbf{v} \cdot \nabla)v_y + \nu \nabla^2v_y
\end{align*}
where $\nu$ is the diffusion coefficient, set to 0.1 in our experiments. For the reference solution, the simulation domain is discretized with 128$^2$ cells and a cell spacing of 0.5, and the discrete velocity values are stored at the center of each cell. We use periodic boundary conditions. For the temporal discretization, a unit time step size is used with an explicit Euler scheme for diffusion and a semi-Lagrangian advection scheme \cite{stam1999}. We generate 20 simulations of 200 steps each and use a randomly selected 5\% for the validation set and the remaining 95\% for the training set. An example sequence of the data is shown in Fig.~\ref{appx:fig:adv-diff-dataset}. Due to its diffusive nature, this setup starts with high-frequency details, which become smoothed out over time.
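A minimal sketch of the two solver steps on a periodic grid is given below, assuming the standard positive sign for the diffusion term and, for brevity, nearest-neighbour sampling in the semi-Lagrangian backtrace instead of the usual bilinear interpolation:

```python
import numpy as np

def diffuse(v, nu=0.1, dt=1.0, dx=0.5):
    # explicit Euler step for the diffusion term nu * laplacian(v)
    # on a periodic grid (5-point stencil)
    lap = (np.roll(v, 1, 0) + np.roll(v, -1, 0)
           + np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4.0 * v) / dx**2
    return v + dt * nu * lap

def advect(q, vx, vy, dt=1.0, dx=0.5):
    # semi-Lagrangian step: backtrace each cell center against the
    # velocity and sample the field there (nearest neighbour here,
    # instead of the usual bilinear interpolation, for brevity)
    n, m = q.shape
    i, j = np.meshgrid(np.arange(n), np.arange(m), indexing="ij")
    ii = np.round(i - dt * vx / dx).astype(int) % n
    jj = np.round(j - dt * vy / dx).astype(int) % m
    return q[ii, jj]
```

Note that the periodic Laplacian stencil sums to zero over the domain, so the explicit diffusion step conserves the mean of the field.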
For a recurrent setup, we observe that training stability improves significantly when using pre-trained networks with a small number of integrated steps, as discussed in other studies \cite{um2020solverintheloop}. Accordingly, we first train all models with 8 integrated steps and use them as a ``warm start'' for training the models with 16 integrated steps; these models are then used for training the models with 32 integrated steps. Each training takes 100 epochs with a batch size of ten. The learning rate starts from 5$\times$10$^{-4}$ and decays exponentially with a decay rate of 0.9 every ten epochs. In the event of divergence in training, we restart with a smaller learning rate, e.g., 4$\times$10$^{-4}$ or 2$\times$10$^{-4}$. In this advection-diffusion example, we compare all the models trained with 32 integrated steps.
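The learning rate schedule described above can be sketched as follows; the function name is hypothetical:

```python
def lr_at(epoch, lr0=5e-4, decay=0.9, every=10):
    """Learning rate used at a given epoch: starts at lr0 and is
    multiplied by the decay factor every ten epochs."""
    return lr0 * decay ** (epoch // every)
```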
The test set consists of five solution trajectories evaluated from different initial velocity fields. The simulation domain of the test set is four times larger than the one of the training data-set, i.e., discretized with 256$^2$ cells and the same cell spacing of 0.5. Example sequences of the data and the inference results of different models are shown in Fig.~\ref{appx:fig:adv-diff-test}, along with the spatial distribution of the error in Fig.~\ref{appx:fig:adv-diff_error_maps}.
Fig.~\ref{appx:fig:adv-diff-imp} presents the velocity error improvements of different models in the five test cases as well as the overall averaged improvement per model. In this scenario, the benefit of our autonomous latent space shaping stands out particularly at the early frames. As shown in Fig.~\ref{appx:fig:adv-diff-time}, our \emph{ATO} model has the smallest MAE in terms of velocity, while the reduced representation is distant from the down-sampled reference and getting closer to it over time. We presume that our \emph{ATO} setup shows no significant benefit during the later frames since those are already close to the down-sampled reference. We note that the initial high-frequency details are smoothed out at the later frames; thus, the encoded latent space has a smaller chance of deviating from the down-sampled version.
\subsection{Forced advection-diffusion}\label{appx:burgers}
In this problem, we add external forces $\mathbf{g}=[g_x, g_y]^T$ to the previous advection-diffusion scenario such that the velocity field evolves not only by advection and diffusion but also by an external factor:
\begin{align*}
\frac{\partial{v_x}}{\partial{t}} &= - (\mathbf{v} \cdot \nabla)v_x + \nu \nabla^2v_x + g_x(\mathbf{x}, t) \\
\frac{\partial{v_y}}{\partial{t}} &= - (\mathbf{v} \cdot \nabla)v_y + \nu \nabla^2v_y + g_y(\mathbf{x}, t).
\end{align*}
As explained in the main text, for each simulation trajectory, we use different force field sequences that are composed of 20 overlapping sine functions:
\begin{equation} \begin{aligned} \label{appx:eq:extforce}
g_x(\mathbf{x}, t) &= \sum_{i=1}^{20}a_i\sin(k_i\alpha_i \cdot \mathbf{x} + w_i t + \phi_i) \\
g_y(\mathbf{x}, t) &= \sum_{i=1}^{20}a_i\sin(k_i\alpha_i \cdot \mathbf{x} + w_i t + \phi_i)
\end{aligned} \end{equation}
where $a_i$ is the amplitude, $k_i$ is the wave number, $\alpha_i$ is the wave direction, $w_i$ is the frequency, and $\phi_i$ is the phase shift.
\begin{figure*}
\caption{Example frames from one simulation of the training data-set of the advection-diffusion scenario.}
\label{appx:fig:adv-diff-dataset}
\end{figure*}
\begin{figure*}
\caption{Example frames of a test case for different models for the advection-diffusion scenario. The domain is larger than for training.}
\label{appx:fig:adv-diff-test}
\end{figure*}
\begin{figure*}
\caption{Absolute error in velocity for the different models, for the advection-diffusion scenario.}
\label{appx:fig:adv-diff_error_maps}
\end{figure*}
\begin{figure*}
\caption{Velocity error improvements for the advection-diffusion scenario. The \emph{ATO} model improves the baseline the most for every test case.}
\label{appx:fig:adv-diff-imp}
\end{figure*}
\begin{figure*}
\caption{MAEs of recovered velocities (left) and distances of the reduced
spaces to the down-sampled reference (right) over time for the advection-diffusion scenario.}
\label{appx:fig:adv-diff-time}
\end{figure*}
\begin{figure*}
\caption{Example frames from one simulation of the training data-set of the forced advection-diffusion scenario.}
\label{appx:fig:forced-adv-diff-dataset}
\end{figure*}
\begin{figure*}
\caption{Example frames of a test case for different models for the forced advection-diffusion scenario. The domain is larger than for training.}
\label{appx:fig:forced-adv-diff-test}
\end{figure*}
\begin{figure*}
\caption{Absolute error in velocity for the different models, for the forced advection-diffusion scenario.}
\label{appx:fig:forced-adv-diff_error_maps}
\end{figure*}
\begin{figure*}
\caption{Velocity error improvements for the forced advection-diffusion scenario. The \emph{ATO} model improves upon the baseline the most for every test case.}
\label{appx:fig:forced-adv-diff-imp}
\end{figure*}
\begin{figure*}
\caption{MAEs of recovered velocities (left) and distances of the reduced
spaces to the down-sampled reference (right) over time for the forced advection-diffusion scenario.}
\label{appx:fig:forced-adv-diff-time}
\end{figure*}
These values are randomly sampled from uniform distributions. The wave direction $\alpha_i$ is determined by a random angle in $[0, 2\pi]$. The ranges of the random values are: $a_i \in [-0.05, 0.05]$, $k_i \in \{0.6, 0.8, 1.0, 1.2\}$, $w_i \in [-0.04, 0.04]$, and $\phi_i \in [0, \pi]$. An example sequence of the data is shown in Fig.~\ref{appx:fig:forced-adv-diff-dataset}. Unlike the previous advection-diffusion scenario, this setup preserves high-frequency details over the full duration of the simulations.
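Sampling the wave parameters within the stated ranges and evaluating one force component can be sketched as follows; the random-number setup and the function name are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)  # seed chosen arbitrarily for illustration
K = 20
a = rng.uniform(-0.05, 0.05, K)            # amplitudes a_i
k = rng.choice([0.6, 0.8, 1.0, 1.2], K)    # wave numbers k_i
angle = rng.uniform(0.0, 2.0 * np.pi, K)   # random direction angles
alpha = np.stack([np.cos(angle), np.sin(angle)], axis=1)  # directions alpha_i
w = rng.uniform(-0.04, 0.04, K)            # frequencies w_i
phi = rng.uniform(0.0, np.pi, K)           # phase shifts phi_i

def g_component(x, y, t):
    """One force component as a sum of 20 overlapping sine waves."""
    phase = k * (alpha[:, 0] * x + alpha[:, 1] * y) + w * t + phi
    return float(np.sum(a * np.sin(phase)))
```

Since every $|a_i| \le 0.05$, each force component is bounded in magnitude by $20 \times 0.05 = 1$.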
We observe that pre-loading the previous advection-diffusion models stabilizes and accelerates our training for this scenario. Thus, we load the models trained with 32 integrated steps without forcing and continue the training with this new data-set for 100 epochs. In this example, the encoder of our \emph{ATO} setup is also trained for a reduced representation of the force fields, which are used in the reduced solver. We note that the encoder shares the weights for velocity and force such that it can learn a unified operation for both reduced representations.
The test set is generated as in the previous scenario, yet the five solution trajectories evolve not only from the different initial velocity fields but also with different force field sequences. As in the previous scenario, the simulation domain of the test set is four times larger than the one of the training set. Fig.~\ref{appx:fig:forced-adv-diff-test} shows the inference results of the different models for one example, and Fig.~\ref{appx:fig:forced-adv-diff_error_maps} shows the spatial distribution of the error for the same example.
As shown in Fig.~\ref{appx:fig:forced-adv-diff-imp}, our \emph{ATO} model consistently performs best in all five test cases, while using a latent space representation (Fig.~\ref{appx:fig:forced-adv-diff-time}) that differs from that of the other models, which aim for the conventional linearly down-sampled representation. In this example, the \emph{Stach} model does not yield significant improvement, showing performance similar to the baseline.
\subsection{Karman vortex street}\label{appx:karman}
This example targets a complex PDE problem within a constrained setup, where the velocity field evolves over time while being constrained to be divergence-free. We evaluate the incompressible Navier-Stokes equations for Newtonian fluids: \begin{equation} \label{appx:eq:ns} \frac{\partial{\mathbf{v}}}{\partial{t}} = - (\mathbf{v} \cdot \nabla)\mathbf{v} - \frac{\nabla{p}}{\rho} + \nu \nabla^2\mathbf{v} \quad \textrm{subject to} \quad \nabla \cdot \mathbf{v} = 0 \end{equation} where $p$ is the pressure, $\rho$ is the density, and $\nu$ is the viscosity coefficient. The reference simulation domain is discretized with 128$\times$256 cells and a cell spacing of one, adopting a staggered grid scheme.
We use closed boundary conditions for the sides and open boundary conditions for the top of the domain; at the bottom, we set a constant inflow velocity. The continuous inflow collides with a fixed circular obstacle placed in the domain and creates an unsteady wake flow, which evolves differently depending on the Reynolds number. For the temporal discretization, the unit time step size is used. We generate 20 simulations of 200 steps each, 16 being used for training and the remaining four for validation. We use Reynolds numbers in \{90, 120, 150, 160, 170, 180, 190, 200, 220, 290, 390, 490, 590, 690, 790, 1190\} for training and \{140, 340, 540, 740\} for validation. Both the least and most turbulent simulations of the training set are shown in Fig.~\ref{appx:fig:karman_dataset}.
As for the advection-diffusion case, we use pre-trained networks with a smaller number of integrated steps as warm starts for our final models. Each training uses 100 epochs with a batch size of four. The learning rate starts from 1$\times$10$^{-4}$ and decays exponentially with a decay rate of 0.9 every ten epochs. If training diverges, we restart it with a smaller learning rate. In this example, we compare all the models trained with 32 integrated steps. We also note that the encoder of \emph{ATO} and the solver of \emph{Stach} take the Reynolds number as an additional input.
The test set consists of four solution trajectories evaluated with the following Reynolds numbers: \{1630, 1830, 2000, 2230\}. Example sequences of the test data and the inference results of different models are shown in Fig.~\ref{appx:fig:karman_test} for Re = 2000, along with the spatial distribution of the velocity error in Fig.~\ref{appx:fig:karman_error_maps}. Fig.~\ref{appx:fig:karman_time} shows the temporal evolution of the velocity and vorticity MAEs for the test data. The \emph{ATO} model maintains an almost constant error after 100 time-steps, whereas the other models suffer from errors accumulating over time. In particular, \emph{Stach + SR} performs better at the early frames yet significantly diverges from the reference with large errors. Despite the regular evolution of vortex structures in each sequence, the restorations of the other models deteriorate over time, whereas our \emph{ATO} model consistently performs well over a long time span.
Fig.~\ref{appx:fig:karman_imp} shows the velocity error improvements of the models on an extended test set, which consists of eight simulations of Reynolds numbers regularly distributed between 1200 and 2600. In this test, \emph{ATO} clearly presents its improved capabilities for generalization. For the cases that are close to the training data, all the models perform comparably well, whereas the performance decreases drastically for the higher Reynolds numbers for most models. Only the \emph{ATO} model is able to infer actual improvements in terms of velocity error up until the largest Reynolds number of 2600.
\subsection{Forced turbulence}\label{appx:forced-turb}
This example tackles the same incompressible Navier-Stokes equations as in the Karman vortex street scenario, yet with an external force sequence $\mathbf{g}(\mathbf{x}, t)$ added to Eq.~(\ref{appx:eq:ns}). This force sequence yields complex, chaotic evolutions of vortices over time. To generate the force sequences, we employ the same scheme as in Eq.~(\ref{appx:eq:extforce}) with different ranges for the random values: $a_i \in [-0.1, 0.1]$, $k_i \in \{6, 8, 10, 12\}$, $w_i \in [-0.2, 0.2]$, and $\phi_i \in [0, \pi]$. The composed sine functions are then evaluated over the domain mapped into $[0, 2\pi]$ in each dimension.
In this scenario, the viscosity stays constant at 0.1 within the training and test data-sets. The reference simulation domain is discretized with 128$^2$ cells and a cell spacing of one. Both the discrete velocity and pressure values are stored at the center of each cell, and periodic boundary conditions are applied. For the temporal discretization, a time step size of 0.2 is used. The training data-set consists of 20 simulations of 200 steps each, which evolve from different initial velocity fields with different force sequences. We use a randomly selected 5\% for the validation set and the remaining 95\% for the training set. An example sequence of the data is shown in Fig.~\ref{appx:fig:forced-turb-dataset}.
\begin{figure*}
\caption{Two examples from the training data-set of the Karman vortex street scenario: Re = 90 and Re = 1190.}
\label{appx:fig:karman_dataset}
\end{figure*}
\begin{figure*}
\caption{Restored frames of different models for the Karman vortex street scenario with Re = 2000.}
\label{appx:fig:karman_test}
\end{figure*}
\begin{figure*}
\caption{Absolute error in velocity for the different models, for the Karman vortex street scenario with Re = 2000.}
\label{appx:fig:karman_error_maps}
\end{figure*}
\begin{figure*}
\caption{MAEs of recovered velocities (left) and vorticities (right) over time, averaged on the four test cases of the Karman vortex street scenario. The \emph{ATO} model has a low and almost constant error over time.}
\label{appx:fig:karman_time}
\end{figure*}
\begin{figure*}
\caption{Velocity (left) and vorticity (right) error improvements for eight different Reynolds numbers, increasing regularly from 1200 to 2600. The highest Reynolds number used for training is 1190. The \emph{ATO} model generalizes better than the others.}
\label{appx:fig:karman_imp}
\end{figure*}
We first train our models on a decaying turbulence case, i.e., without external forces. Then, we pre-load the trained models with the most integrated steps and continue the training with the forcing data-set for 100 epochs. We compare all the models trained with 16 integrated steps.
The encoder of \emph{ATO} again shares its weights for velocity and force in order to learn a unified operation for both reduced representations. The training curves of the \emph{ATO} model are shown in Fig.~\ref{appx:fig:train_curves}, for both the high-resolution component and the latent one (Eq.~\ref{eq:loss}).
The five test trajectories evolve from both different initial velocity fields and different force field sequences on a domain discretized as in the reference simulation. Fig.~\ref{appx:fig:forced-turb-test} shows the inference results of the different models for one example, and Fig.~\ref{appx:fig:turb_error_maps} shows the spatial distribution of the error for the same example.
Fig.~\ref{appx:fig:forced-turb-imp} and \ref{appx:fig:forced-turb-imp-vorticity} show that our \emph{ATO} model improves on the baseline the most in all five test cases in both velocity and vorticity metrics, while presenting a latent space representation (Fig.~\ref{appx:fig:forced-turb-time}) that is more distant from the linearly down-sampled representation than those of the other models.
\begin{figure*}
\caption{Example frames from one simulation of the training data-set of the forced turbulence scenario.}
\label{appx:fig:forced-turb-dataset}
\end{figure*}
\begin{figure*}
\caption{Training curves of the \emph{ATO} model for the forced turbulence scenario. From left to right: the high-resolution space loss, the latent space loss and the sum of both (final loss).}
\label{appx:fig:train_curves}
\end{figure*}
\begin{figure*}
\caption{Example frames of a test case for different models for the forced turbulence scenario.}
\label{appx:fig:forced-turb-test}
\end{figure*}
\begin{figure*}
\caption{Absolute error in velocity for the different models, for the forced turbulence scenario.}
\label{appx:fig:turb_error_maps}
\end{figure*}
\begin{figure*}
\caption{Velocity error improvements for the forced turbulence scenario. The \emph{ATO} model improves the baseline the most for every test case.}
\label{appx:fig:forced-turb-imp}
\end{figure*}
\begin{figure*}
\caption{Vorticity error improvements for the forced turbulence scenario.}
\label{appx:fig:forced-turb-imp-vorticity}
\end{figure*}
\begin{figure*}
\caption{MAEs of recovered velocities (left) and distances of the reduced
spaces to the down-sampled reference (right) over time for the forced turbulence scenario.}
\label{appx:fig:forced-turb-time}
\end{figure*}
\section{Neural Network Architectures}\label{appx:nn-arch}
In this section, we detail our network architectures for each model. We note that the practical implementations of all the models can be found in the supplemental code.
The encoder of the \emph{ATO} setup consists of two convolutional layers with 32 and 16 features each with a kernel size of five. Each convolutional layer is followed by the Leaky ReLU activation function and a max-pooling layer. The two max-pooling layers reduce the input domain into a four times coarser domain. After that, a last layer with two features follows with the same kernel size but without the activation. This model has approximately 15k trainable weights.
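The stated weight count can be reproduced with a short calculation. The sketch below assumes 2 input channels (the two components of the 2D velocity field), which is not stated explicitly in the text.

```python
# Rough parameter count for the ATO encoder described above.
# Assumption (not stated in the text): 2 input channels, i.e. the two
# components of the 2D velocity field.
def conv2d_params(c_in, c_out, k):
    """Weights plus biases of a 2D convolution with kernel size k."""
    return c_in * c_out * k * k + c_out

encoder = (
    conv2d_params(2, 32, 5)     # conv 1 (max-pooling layers add no weights)
    + conv2d_params(32, 16, 5)  # conv 2
    + conv2d_params(16, 2, 5)   # final layer, no activation
)
print(encoder)  # 15250, i.e. approximately 15k trainable weights
```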
The adjustment model used for the \emph{ATO}, \emph{ATO-CE}, and \emph{SOL} setups employs the network model proposed by \cite{um2020solverintheloop}. This model consists of a first block of one convolutional layer with 32 features with a kernel size of five, followed by five blocks of two convolutional layers with 32 features each with a kernel size of five. Each layer is followed by the Leaky ReLU activation function, and each block is connected to the next with a skip-connection. A last layer with two features follows with the same kernel size yet without the activation. This architecture has approximately 260k trainable weights.
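The same kind of calculation recovers the weight count of the adjustment network. As before, 2 input channels (2D velocity) are assumed.

```python
# Rough parameter count for the adjustment network adapted from
# um2020solverintheloop, as described above.
# Assumption: 2 input channels (2D velocity).
def conv2d_params(c_in, c_out, k):
    return c_in * c_out * k * k + c_out

total = (
    conv2d_params(2, 32, 5)             # first block: one conv layer
    + 5 * 2 * conv2d_params(32, 32, 5)  # five blocks of two conv layers each
    + conv2d_params(32, 2, 5)           # final layer (skip-connections add no weights)
)
print(total)  # 259554, i.e. approximately 260k trainable weights
```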
The decoder used for the \emph{ATO} and \emph{ATO-CE} models in the non-linear advection-diffusion cases consists of two blocks of two convolutional layers. Each block contains two convolutional layers with 64 and 32 features with a kernel size of five as well as an up-sampling layer. Each convolutional layer is followed by the Leaky ReLU activation function, and a last layer with two features follows with the same kernel size but without the activation. This model has approximately 160k weights. In the Karman vortex street and forced turbulence scenarios, the decoder is adapted from the multi-scale architecture from \cite{fukami2019super} such that the total number of trainable weights is similar to the previously described decoder's.
The \emph{Stach} model is adapted from the architecture of the state-of-the-art network model proposed for turbulent flow problems \cite{stachenfeld2022learned}. Our model has a first convolutional layer with 32 features with a kernel size of three and no activation. It is followed by four identical blocks of seven convolutional layers with 32 features each with a kernel size of three and varying dilation rates from one to eight (respectively: 1, 2, 4, 8, 4, 2, 1). Each layer is followed by the Leaky ReLU activation function, and each block is linked to the next via a skip-connection. A last layer with two features follows with the same kernel size yet without the activation. This model has a similar number of trainable weights to the adjustment model (i.e., approximately 260k).
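The dilation pattern above determines both the weight count and the receptive field of this model; the sketch below checks both, again assuming 2 input channels (2D velocity).

```python
# Parameter count and receptive field of the dilated architecture adapted
# from stachenfeld2022learned. Assumption: 2 input channels (2D velocity).
def conv2d_params(c_in, c_out, k):
    return c_in * c_out * k * k + c_out

dilations = [1, 2, 4, 8, 4, 2, 1]

total = (
    conv2d_params(2, 32, 3)             # first layer
    + 4 * 7 * conv2d_params(32, 32, 3)  # four blocks of seven layers each
    + conv2d_params(32, 2, 3)           # final layer
)

# Receptive field: each 3x3 layer with dilation d widens it by 2*d cells.
rf = 3  # first layer
for _ in range(4):
    for d in dilations:
        rf += 2 * d
rf += 2  # final layer
print(total, rf)  # 260130 weights, 181x181 receptive field
```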
For the super-resolution (\emph{SR}) model, we employ the same multi-scale architecture adapted from \cite{fukami2019super} as for the decoder of \emph{ATO} and \emph{ATO-CE}.
For the advection-diffusion and forced turbulence cases, which use periodic boundary conditions, we use periodic padding for all models. For the Karman vortex street case, we adopt zero-padding.
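The periodic padding can be illustrated with a minimal, framework-free sketch; wrapping one cell around each axis mimics what, e.g., a circular padding mode does in common deep-learning libraries.

```python
# Minimal sketch of periodic (circular) padding on a 2D field, as used for
# the advection-diffusion and forced turbulence models. Pure Python for
# illustration; deep-learning frameworks provide equivalent built-in modes.
def periodic_pad(field, pad):
    """Pad a 2D list-of-lists by wrapping around both axes."""
    rows = field[-pad:] + field + field[:pad]
    return [row[-pad:] + row + row[:pad] for row in rows]

grid = [[1, 2], [3, 4]]
padded = periodic_pad(grid, 1)
# padded is 4x4; the top-left corner wraps around to the value 4
```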
\end{document} |
\begin{document}
\title{Optical Nonlinear Dynamics with cold atoms in a cavity}
\author{A.Lambrecht, E.Giacobino, J.M.Courty} \address{Laboratoire Kastler Brossel \thanks{Laboratoire de l'Universit\'{e} Pierre et Marie Curie et de l'Ecole Normale Sup\'{e}rieure, associ\'{e} au CNRS}\\ Universit\'{e} Pierre et Marie Curie, case 74, 4 place Jussieu, 75252 Paris Cedex 05, France} \date{{\sc Optics Communications} {\bf 115}, p.199 (1995)} \maketitle
\begin{abstract} This paper presents the nonlinear dynamics of laser cooled and trapped cesium atoms placed inside an optical cavity and interacting with a probe light beam slightly detuned from the $6S_{1/2} F=4$ to $6P_{3/2} F=5$ transition. The system exhibits very strong bistability and instabilities. The origin of the latter is found to be a competition between optical pumping and non-linearities due to saturation of the optical transition.\\[2mm] PACS: 32.80.Bx, 32.80.Pj, 42.65.Pc \end{abstract}
\section{Introduction}
In an atomic vapor, the interaction of the atoms with one or several near resonant electromagnetic fields is complicated by the fact that the various velocity classes have different detunings from the fields, except when this detuning is very large. Laser cooled atoms in a magneto-optic trap \cite{Raab87} can have velocities as low as a centimeter per second. This means that their Doppler width is smaller than the natural linewidth and that a laser field can be set close to resonance with an atomic transition, all the atoms having the same detuning from the field. In such conditions, the interaction between the atoms and the field is well characterized and one can take advantage of the strong non-linearities of atomic systems while keeping the absorption rather small. In particular, it was shown that a probe beam going through a cloud of cold atoms could experience a strong gain due to Raman transitions involving the trapping beams \cite{Grison91}. When the atoms are placed in a resonant optical cavity without a probe beam, laser action corresponding to that gain feature was demonstrated \cite{Hilico92}.
When cold atoms interact with a probe laser beam inside an optical cavity, bistability is easily observed at very low input powers (as low as $5 \mu$W) \cite{Giac94}. This bistability effect was observed in the presence of the cooling beams. However, to investigate the nonlinear dynamics of a collection of cold atoms in a cavity in more detail, we needed better controlled conditions, and we studied the behaviour of the system in the absence of cooling beams, right after the trap is turned off. In that case, in addition to bistability, new features were found, such as very pronounced self pulsing oscillations in a wide range of experimental parameters.
While instabilities have been observed in similar conditions in atomic vapors \cite{Penna93}, the present situation is much easier to analyse, and we show hereafter that we have been able to find a rather simple model for these instabilities, that invokes a competition between optical pumping and saturation.
The experimental procedure is described in section 2. In section 3 we give a model explaining the dynamical behaviour of the system and compare its results with the measurements.
\section{Bistability and instabilities with cold atoms}
In the experiments described in the following, we prepare a cloud of cold cesium atoms in a cell using the background pressure to fill the trap. The trap operates in the standard way \cite{Raab87}, with three orthogonal circularly polarized trapping beams generated by a Ti:Sapphire laser and an inhomogeneous magnetic field. The Ti:Sapphire laser is detuned by 2.5 $\Gamma$ ($\Gamma$ being the linewidth of the upper state) on the low frequency side of the $6S_{1/2}F=4$ to $6P_{3/2}F=5$ transition. We obtain a cloud of cesium atoms the typical temperature of which is of the order of 1 mK, which gives a Doppler width much smaller than the natural width. The diameter (2.5 cm) and power (20~mW/cm$^{2}$) of our trapping beams allow us to obtain large clouds (about 5~mm in diameter) with densities of the order of 10$^{10}$~atoms/cm$^{3}$. The relevant parameter in the experiment is actually the number of trapped atoms in the probe beam, which is measured from the change in the intensity of the probe beam transmitted through the cavity with and without trapped atoms. This number is found to be ranging between 10$^{7}$ and 10$^{8}$ depending on the pressure of the background gas. As usual, the atoms non resonantly excited into the $6P_{3/2} F=4$ state and falling back into the $F=3$ ground state are repumped into the cooling cycle by a laser diode tuned to the $6S_{1/2} F=3$ to $6P_{3/2} F=4$ transition.
The cavity is a 25 cm long linear asymmetrical cavity, close to half-confocal, with a waist of $260 \mu$m. Because the cell has optical quality antireflecting windows, we can build a good finesse optical cavity around the atomic cloud (Fig.1). Losses due to the two windows are of the order of 1\%. The input mirror has a transmission coefficient of 10\%, the end mirror is a highly reflecting mirror. The cavity is in the symmetry plane of the trap, making a 45${^\circ}$ angle with the two trapping beams that propagate in this plane. \begin{figure}
\caption{Experimental set-up showing the cell containing the cold atom cloud in an optical cavity; PBS: polarizing beamsplitter, QWP: quarter wave plate, PD1, PD2: photodiodes; PD1 and PD2 measure the powers respectively transmitted and reflected by the cavity.}
\end{figure}
To look for bistable behaviour of the optical cavity containing cold atoms, we send a circularly polarized probe beam into the cavity. It can be detuned by 0 to 130 MHz on either side of the $6S_{1/2} F=4$ to $6P_{3/2} F=5$ atomic transition.
We measure the power of the beam transmitted through the cavity while scanning the cavity length for a fixed value of the input intensity, as shown in Fig.2, or the input intensity for a fixed value of the detuning. The recording shows the characteristic hysteresis cycle due to bistability, where the output power switches abruptly between low and high values when the length of the cavity is scanned. Switching and hysteresis were also observed when the input power is scanned at fixed cavity length. In some cases overshoot and oscillations in the output power were recorded \cite{Giac94}. \begin{figure}
\caption{Recording showing the bistable switching from low to high transmission and back when the cavity length is scanned across the cavity resonance. The trapping laser beams are on. The laser is detuned by $22 \Gamma$ on the high frequency side of the atomic transition, the input power is 100~$\mu$W.}
\end{figure}
The shapes of the curves in such a case can be roughly interpreted from the theory of bistability with two-level atoms. However, the fit is approximate and the theory, even including single mode instabilities \cite{Boni78,Orozco89}, cannot explain the overshoot and oscillations \cite{Bram}. Actually the transition under investigation in cesium is far from being a two-level one in the presence of the trap. That standard bistability is nevertheless observed in most cases in fair agreement with two-level atom theory can be explained by the fact that the trapping beams randomize the ground state population among the various Zeeman sublevels.
To thoroughly investigate the phenomenon, it was thus indispensable to study the system without the trapping beams. After the trap is loaded, we turn off the trapping laser beams in order to get unperturbed atoms. We have about 20 ms to perform the measurements before most of the atoms have escaped out of the interaction region due to free fall and expansion of the cloud. In the experiment, the bistability parameter $C$ (see definition in section 3) can be as high as 400 just after the atoms have been released.
In a broad range of experimental parameters, we observed bistability and instabilities that exhibit unusual features. Instabilities are present within the whole range of accessible detunings (from $-25\Gamma$ to $25\Gamma$, where $\Gamma$ is the linewidth of the excited state, $\Gamma/2\pi=5.2$~MHz), and for input powers ranging from 50 to 300$\mu$W. These oscillations are somewhat similar to the ones observed in the presence of trapping beams but those were obtained only at high intensity or small atomic detuning. Here, on the contrary, they are observed very easily.
Fig. 3 shows a set of recordings of the output power of the cavity when the length is scanned, for different values of the input power. At very low power (not shown), the curves exhibit neither bistability nor instabilities. Starting from an input power of the order of $30\mu$W, oscillations appear on the left hand side of the cavity resonance curve, within some range of cavity detuning. This side is the one on which a bistable switching would occur in a saturated two-level atomic system. At higher input powers, the oscillations disappear and only bistability persists. Let us mention that when the power of the probe beam is too high, the atoms are expelled very rapidly from the beam, and only the very early part of the signal can then be considered representative of the nonlinear behaviour of the atoms. \begin{figure}
\caption{Recording of the output power P$_{\rm out}$ of the cavity containing cold atoms when the cavity length is scanned, for four different values of the input power P$_{\rm in}$. The trapping beams are off. The atomic detuning is the same as in Fig. 2.}
\end{figure}
When magnified, these oscillations clearly appear as self-pulsing. Their frequency lies between 100 kHz and a few MHz, and they are not due to the scan of the cavity length. To investigate them in more detail, one would like to record them at a fixed cavity length. However, it is not possible to keep the optical cavity length perfectly constant in time, because the atoms escape from the original cloud and thereby change the index of refraction. In these conditions the effective cavity scan is slow, and the oscillations can be observed over longer durations. Such a recording is shown in Fig.~4. \begin{figure}
\caption{Recording of the instabilities for a fixed geometrical cavity length. The optical length is slowly scanned by the decrease of the number of atoms. The detuning is the same as in Fig. 2, the input power is 80~$\mu$W, the trapping beams are switched off at $t=0$.}
\end{figure}
\section{Model for instabilities}
To understand these oscillations, one has to take into account the hyperfine and Zeeman structure of the considered states. Various optical pumping effects can occur and phenomena linked to it like bistability, multistability \cite{Mlynek82,Cecchi82,Hamil83,Giac85} and instabilities \cite{Penna93,Boni78,Orozco89,Bram,Mlynek82,Cecchi82,Hamil83,Giac85,Gius87} have been predicted and observed in alkali vapors. Instabilities necessitate a strong coupling between the atoms and the field, that is a small detuning. Considering that the Doppler width is of the same order as the hyperfine structure in the ground and excited state of cesium, the system is rather intricate to describe, due to the competitive action of the various velocity classes and hyperfine sublevels.
As already pointed out, the situation is much simpler with cold atoms, where one can consider that the field interacts with one hyperfine transition only. Owing to this, we have been able to develop a simple model to understand the origin of the observed oscillations. Roughly, as shown below, they result from the competition between a fast nonlinear process, the saturation of the optical transition and a much slower one, the optical pumping.
The saturation of the optical transition causes a decrease of the linear index of refraction of the atomic medium when the intensity increases. On the contrary, optical pumping by circularly polarized light is at the origin of a non-linearity that increases the index of refraction with the light intensity: when the atoms are submitted to circularly polarized light, they tend to accumulate in the magnetic sublevels with high $m_{F}$ number, which have the largest coupling coefficient with the electromagnetic field. Therefore, the two nonlinear processes have opposite effects and compete in our system. The relaxation oscillations are a consequence of the significant difference in the characteristic times of these processes.
Optical pumping tends to empty out the magnetic sublevels of low magnetic number to accumulate all the atoms in the sublevels with highest magnetic number ($m_{F}=4$ to $m_{F}=5$ transition). Due to the high number of magnetic substates, it takes a rather long time, starting from an equally distributed population, to complete the optical pumping to the highest $m_{F}$ sublevels of the ground and excited states.
The time evolution of the sum of the populations of these two sublevels for several values of the Rabi frequency of the probe beam (calculated as an average over the hyperfine transitions) is shown in Fig. 5. One can see that even for values close to or larger than the saturation value, the optical pumping rate is much smaller than the natural linewidth. Thus the response time of the nonlinear susceptibility due to optical pumping is much larger than the one due to saturation of the optical transition, which, for a field detuned from resonance, can be considered as being of the order of the detuning. \begin{figure}
\caption{Optical pumping as a function of time: the curves show the calculated variation of the sum ${\cal N}$ of the populations of the $m_{F}=4$ sublevel of the ground state and of the $m_{F}$=5 sublevel of the excited state as a function of time, starting from a population equally distributed among the ground state sublevels, when the intensity I of the pumping light (defined by Eq. (1)) is equal to (from bottom to top): 1, 5, 10, 12, 15, 20, 30, 40, 60. Time is in units of $\Gamma$. The atomic detuning is $20 \Gamma$.}
\end{figure}
Although the behaviour of the system, involving many variables (all the hyperfine Zeeman populations and coherences and the field), is quite complex, the underlying mechanism can be explained with a simple model, involving basically two differential equations that give the evolution of the intracavity field and that of the atomic orientation in the ground state. To write these equations, we first introduce our basic notations.
The intracavity intensity $I$ of the laser is normalized to the saturation intensity:
\begin{equation}
I = \frac{g^2 |\alpha|^2}{\Gamma^2/4}, \label{int} \end{equation} where $g$ is the coupling constant of the atoms with the field, \begin{equation} g^2 = \frac{d^2 \omega_{\rm L}}{2 \epsilon_0 \hbar S c}, \label{g} \end{equation} $d$ is the atomic dipole, $\omega_{\rm L}$ the frequency of the probe laser and $S$ the cross section area of the beam.
$|\alpha|^2$ is the electric field squared, expressed in units of number of photons per second.
The round trip phase shift $\Phi_{\rm cav}$ of the field in the cavity (assumed to be a ring cavity) is the sum of four contributions. First, the phase shift $\Phi_{0}$ proportional to the geometrical length of the cavity. Second, we have two contributions due to the presence of atoms in the cavity, a linear phase shift,
\begin{equation} \Phi_{\rm L} = 2Ng^2/\Gamma\delta \label{PhiL} \end{equation} and a nonlinear Kerr-like phase shift \begin{equation} \Phi_{\rm NL} = -KI. \label{PhiNL} \end{equation} The nonlinear coefficient K is given by \begin{equation} K = 4Ng^2/\Gamma \delta^3, \label{K} \end{equation} where $N$ is the number of atoms and $\delta$ the detuning of the atomic transition frequency $\omega_0$ from the field frequency $\omega_{\rm L}$, normalized to the atomic transition linewidth $\Gamma/2$ \begin{equation} \delta = 2(\omega_0 - \omega_{\rm L})/\Gamma. \label{delta} \end{equation}
$\Phi_{\rm L}$ and $\Phi_{\rm NL}$ are the phase shifts corresponding to the presence of two level atoms in the cavity. If we now consider that the ground and excited states have several Zeeman sublevels, the main additional contribution when the atoms interact with circularly polarized light is a term $\Phi_{p}$ coming from the change in the populations of the ground state sublevels and proportional to the ground state orientation $p$ (normalized to 1). Since the square of the Clebsch-Gordan coefficient of the $m_{F}$=4 to $m_{F}=5$ transition is about twice the mean square Clebsch-Gordan coefficient for the F=4 to F=5 transition, $\Phi_{p}$ is equal to $\Phi_{\rm L}$ for $p=1$, that is when the ground state is completely pumped. Thus, we can write \begin{equation} \Phi_p = \Phi_{\rm L}p. \label{Phip} \end{equation} The total phase shift in the cavity is then: \begin{equation} \Phi_{\rm cav} = \Phi_0 + \Phi_{\rm L} + \Phi_{\rm NL} + p\Phi_{\rm L} \label{Phicav} \end{equation}
The ground state orientation $p$ increases with the intracavity intensity $I$ at rate $\beta I$ and decays at a rate $\gamma_{p}$ due to magnetic precession in transverse fields and to transitions to other hyperfine sublevels (via non resonant transitions): \begin{equation} {\rm d}p/{\rm d}t = -\gamma_p p + \beta I(1-p). \label{pump} \end{equation} The pumping rate coefficient $\beta$ is computed from the calculation presented in Fig. 5 and the relaxation rate $\gamma_{p}$ is evaluated from the experimental parameters.
The change of the intracavity field $\alpha$ on a round trip of time duration $\tau$ is due to the driving field $\alpha_{\rm in}$ entering through the coupling mirror of transmission {\it t}, to the mirror decay coefficient $\gamma_{\rm cav}$ (with $\gamma_{\rm cav}=t^2/2$) and to the round trip phase shift $\Phi_{\rm cav}$: \begin{equation} \tau {\rm d}\alpha/{\rm d}t = t\alpha_{\rm in} - (\gamma_{\rm cav} - i \Phi_{\rm cav})\alpha. \label{champ} \end{equation}
The Kerr-like non-linearity has been assumed to have an instantaneous response. In the absence of optical pumping this system is well known to become bistable when the intensity is larger than a threshold intensity $I_{\rm bist}$ given by \begin{equation} I_{\rm bist} = 8\gamma_{\rm cav}^2/3\sqrt{3} K. \label{bist} \end{equation}
In the presence of optical pumping eqs. (8) and (9) have to be solved numerically. They involve two different rates: the optical pumping rate $\beta I$ and the field evolution rate in the cavity $\gamma_{\rm cav}/\tau$ ($\gamma_{\rm cav}/2\pi \tau \approx 5$ MHz) which is usually larger than the optical pumping rate, except at very high pump powers.
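A minimal forward-Euler integration of eqs. (8) and (9) can be sketched as below. All parameter values are illustrative placeholders rather than the experimental ones, and the normalized intensity is taken directly as $|\alpha|^2$ in dimensionless units; with this mild drive and choice of signs the system simply relaxes, and reaching the self-pulsing regime requires scanning $\Phi_0$ at suitable powers.

```python
# Forward-Euler sketch of the two-equation model, eqs. (8)-(9):
#   dp/dt     = -gamma_p * p + beta * I * (1 - p)
#   tau dA/dt = t_m * A_in - (gamma_cav - i * Phi_cav) * A
# with Phi_cav = Phi_0 + Phi_L * (1 + p) - K * I and, here, I = |A|^2.
# All parameter values are illustrative placeholders, not fitted to the experiment.
gamma_cav, tau = 0.1, 1.0
t_m = (2.0 * gamma_cav) ** 0.5    # input-mirror transmission, gamma_cav = t^2/2
A_in = 3.0                        # driving field amplitude
Phi_0 = 0.2                       # fixed geometric phase (no scan in this sketch)
Phi_L, K = -1.0, 0.05             # linear and Kerr-like phase coefficients
gamma_p, beta = 1e-3, 2e-3        # orientation decay and pumping rates

A, p, dt = 0.0 + 0.0j, 0.0, 0.02
history = []
for step in range(20000):
    I = abs(A) ** 2
    Phi_cav = Phi_0 + Phi_L * (1 + p) - K * I
    A += dt / tau * (t_m * A_in - (gamma_cav - 1j * Phi_cav) * A)
    p += dt * (-gamma_p * p + beta * I * (1 - p))
    history.append(I)
```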
These equations have been used to calculate the motion of the system. In some range of initial conditions, oscillations and limit cycles are found in the intracavity intensity as well as in the output intensity. To compare these results with the experimental data, we have calculated the output intensity when $\Phi_{0}$, i.e. the cavity length, is slowly scanned for several values of the input intensity. The result is shown in Fig. 6. It can be seen that the curves reproduce the experimental recordings in a satisfactory way.
In particular, they show oscillations setting in for intermediate powers. When the input power is strong enough, the oscillations disappear, due to the fact that the optical pumping is fast enough to bring all the atoms in the sublevels of high magnetic number before the oscillations can start. The calculations show that in the unstable region, the ground state orientation may vary by quantities as small as 1\%. As a consequence, the period of the instabilities can be much smaller than the typical optical pumping time shown in Fig. 5. \begin{figure}
\caption{Calculated output power of the cavity containing cold atoms when the cavity length is scanned, for four values of the input power chosen below and above the bistability threshold $P_{\rm bist}$ ($P_{\rm bist}$ is defined in the absence of optical pumping, cf. eq. (10)).}
\end{figure}
This oscillatory behaviour can be understood by considering that the optical pumping changes the optical length of the cavity. It can then scan the cavity length back and forth in the vicinity of the bistable regime, causing periodic abrupt changes in the cavity transmission as shown in Fig. 7. A similar behaviour was observed in a different system with thermal effects \cite{Call78}.
Instabilities can also be found in the absence of relaxation for the orientation when the cavity length is scanned. This occurs for high intensities and large numbers of atoms. In such a case, in the vicinity of a resonance, the optical pumping takes over, brings the cavity first into resonance and then beyond the resonance. After this overshoot, the cavity length scan brings the cavity back into resonance, the optical pumping starts again, and so on. Thus the orientation increases by steps, each time the light enters the cavity. \begin{figure}
\caption{Diagram showing the self-pulsing of the bistable system due to optical pumping (for the case of $\Phi_{\rm L}$ and $\Phi_{p}<0$); (a) the cavity is in the high transmission state and optical pumping increases the orientation and decreases $\Phi_{\rm L}$; (b) abrupt switch towards the low transmission state; (c) the cavity is in the low transmission state, depumping is dominant and the orientation decreases; (d) switch towards the high transmission state.}
\end{figure}
The validity of the model was checked by further experiments. When the atoms are released from the trap, it is possible to optically pump them into the $m_{F}=4$ magnetic sublevel of the ground state with an additional circularly polarized beam parallel to the probe beam, but closer to the atomic resonance. This prepumping is done in the presence of a magnetic field directed along the cavity. In such conditions the instabilities disappear. If the magnetic field is absent, the orientation created by the pump field is destroyed by the Larmor precession in transverse magnetic fields and the instabilities persist.
A more complete treatment of the atomic non-linearity and of the optical pumping was also performed, where absorption and saturation of the optical non-linearity were taken into account. In this case, the separation of linear and nonlinear phase shifts is no longer possible. Instead, one defines a total phase shift due to two-level atoms \begin{equation} \Phi_1 = \frac{2Ng^2}{\Gamma} \frac{\delta+i}{1+\delta^2+2I} \label{Phi1} \end{equation} or with the help of the bistability parameter $C$ given by \begin{eqnarray} C &=& g^2N/\gamma_{\rm cav}\Gamma,\\ \Phi_1 &=& \frac{2C\gamma_{\rm cav}(\delta+i)}{1+\delta^2+2I}. \end{eqnarray}
$\Phi_{1}$ now includes a contribution due to the linear absorption and dispersion of the atoms. The contributions $\Phi_{\rm L}$ and $\Phi_{\rm NL}$ introduced above are simply the first two real terms of the expansion of formula (11) in powers of $I$. The total phase shift, including optical pumping, now reads \begin{equation} \Phi_{\rm cav} = \Phi_0 + \Phi_1 (1 + p). \end{equation}
For consistency, the saturation of the optical pumping was also taken into account: \begin{equation} {\rm d}p/{\rm d}t = -\gamma_p p+\beta \frac{I}{1+\delta^2+2I}(1-p). \end{equation}
The evolution of the system was calculated again using this more elaborate model. This yielded only minor changes in the results, the general behaviour predicted by the simple model for the instabilities remaining the same.
Let us note that for high powers, the probe laser tends to push the atoms out of the beam, especially when its frequency is close to resonance. This phenomenon could give rise to bistability due to the change of the effective linear index of refraction of the medium with the number of atoms in the interaction zone. A careful study of the behaviour of the cold atoms in the probe beam has shown that such mechanical effects were negligible under our experimental conditions.
\section{Conclusion}
The recent development of the magneto-optic trap, which makes it possible to obtain clouds of nearly motionless atoms with a density comparable to that of atomic beams, is of great interest for nonlinear optics. In such traps, one can get large non-linearities by setting the driving field close to resonance without having much absorption. The instabilities described and interpreted in this paper show that it is possible to get new nonlinear phenomena in well characterized conditions. These experiments open the way to nonlinear and quantum optics using cold atoms in resonant cavities.
\end{document}
\begin{document}
\allowdisplaybreaks
\begin{abstract} We prove genuinely multilinear weighted estimates for singular integrals in product spaces. These estimates complete the qualitative weighted theory in this setting. Such estimates were previously known only in the one-parameter situation. Extrapolation gives powerful applications -- for example, free access to mixed-norm estimates in the full range of exponents. \end{abstract}
\title{Genuinely multilinear weighted estimates for singular integrals in product spaces}
\section{Introduction} For given exponents $1 < p_1, \ldots, p_n < \infty$ and $1/p = \sum_i 1/p_i > 0$, a natural weighted estimate in the $n$-variable context takes the form $$
\Big \|g \prod_{i=1}^n w_i \Big \|_{L^p} \lesssim \prod_{i=1}^n \|f_i w_i\|_{L^{p_i}} $$ for some functions $f_1, \ldots, f_n$ and $g$. It is natural to initially assume that $w_i^{p_i} \in A_{p_i}$, where $A_q$ stands for the classical Muckenhoupt weights. Even with this assumption the target weight only satisfies $\prod_{i=1}^n w_i^p \in A_{np} \supsetneq A_p$ making the case $n \ge 2$ have a different flavour than the classical case $n=1$. Importantly, it turns out to be very advantageous -- we get to the application later -- to only impose a weaker \emph{joint} condition on the tuple of weights $\vec w = (w_1, \ldots, w_n)$ rather than to assume individual conditions on the weights $w_i^{p_i}$. This gives the problem a genuinely multilinear nature. For many fundamental mappings $(f_1, \ldots, f_n) \mapsto g(f_1, \ldots, f_n)$, such as the $n$-linear maximal function, these joint conditions on the tuple $\vec w$ are necessary and sufficient for the weighted bounds.
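In fact, the individual conditions always imply the joint one: with the normalisation of Theorem \ref{thm:intro1} below and $1 < p_i < \infty$, H\"older's inequality with the exponents $p_i/p$, $\sum_{i=1}^n p/p_i = 1$, gives for any cube or rectangle $R$ that $$ \Big\langle \prod_{i=1}^n w_i^p \Big\rangle_R^{\frac 1p} \le \prod_{i=1}^n \langle w_i^{p_i} \rangle_R^{\frac 1{p_i}}, $$ and hence $$ \sup_R \, \Big\langle \prod_{i=1}^n w_i^p \Big\rangle_R^{\frac 1p} \prod_{i=1}^n \langle w_i^{-p_i'} \rangle_R^{\frac 1{p_i'}} \le \prod_{i=1}^n \sup_R \Big[ \langle w_i^{p_i} \rangle_R \langle w_i^{-p_i'} \rangle_R^{p_i-1} \Big]^{\frac{1}{p_i}} = \prod_{i=1}^n [w_i^{p_i}]_{A_{p_i}}^{\frac{1}{p_i}}. $$ The converse fails in general, so the joint condition is strictly weaker.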
Genuinely multilinear weighted estimates were first proved for $n$-linear \emph{one-parameter} singular integral operators (SIOs) by Lerner, Ombrosi, P\'erez, Torres and Trujillo-Gonz\'alez in the extremely influential paper \cite{LOPTT}. A basic model of an $n$-linear SIO $T$ in $\mathbb{R}^d$ is obtained by setting \begin{equation*}\label{eq:multilinHEUR} T(f_1,\ldots, f_n)(x) = U(f_1 \otimes \cdots \otimes f_n)(x,\ldots,x), \qquad x \in \mathbb{R}^d,\, f_i \colon \mathbb{R}^d \to \mathbb{C}, \end{equation*} where $U$ is a linear SIO in $\mathbb{R}^{nd}$. See e.g. Grafakos--Torres \cite{GT} for the basic theory. Estimates for SIOs play a fundamental role in pure and applied analysis --
for example, $L^p$ estimates for the homogeneous fractional derivative $D^{\alpha} f=\mathcal F^{-1}(|\xi|^{\alpha} \widehat f(\xi))$ of a product of two or more functions, the \emph{fractional Leibniz rules}, are used in the area of dispersive equations, see e.g. Kato--Ponce \cite{KP} and Grafakos--Oh \cite{GO}.
In the usual one-parameter context of \cite{LOPTT} there is a general philosophy that the maximal function controls SIOs $T$ -- in fact, we have the concrete estimate \begin{equation}\label{eq:CF}
\|T(f_1, \ldots, f_n)w\|_{L^p} \lesssim \|M(f_1, \ldots, f_n)w\|_{L^p}, \qquad p > 0, \, w^p \in A_{\infty}. \end{equation}
Thus, the heart of the matter of \cite{LOPTT} reduces to the maximal function $$M(f_1, \ldots, f_n) = \sup_I 1_I \prod_{i=1}^n \langle |f_i|\rangle_I,$$ where $\langle |f_i|\rangle_I
= \fint_I |f_i| = \frac{1}{|I|} \int_I |f_i|$ and the supremum is over cubes $I \subset \mathbb{R}^d$.
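We note in passing that the pointwise bound $$ M(f_1, \ldots, f_n) \le \prod_{i=1}^n \sup_I 1_I \langle |f_i| \rangle_I = \prod_{i=1}^n Mf_i, $$ together with H\"older's inequality and the boundedness of the Hardy--Littlewood maximal operator $M$, already yields the unweighted estimates $\|M(f_1, \ldots, f_n)\|_{L^p} \lesssim \prod_{i=1}^n \|f_i\|_{L^{p_i}}$ for $1 < p_i < \infty$. This route is, however, unavailable with the genuinely multilinear weights, where the factors $Mf_i$ need not be bounded individually -- a point we return to below.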
In this paper we prove genuinely multilinear weighted estimates for multi-parameter SIOs in the product space $\mathbb{R}^d = \prod_{i=1}^m \mathbb{R}^{d_i}$. For the classical linear multi-parameter theory and some of its original applications see e.g. \cite{CF1, CF2, RF1, RF2, RF3, FS, FL, Jo}. Multilinear multi-parameter estimates arise naturally in applications whenever multilinear phenomena, like the fractional Leibniz rules, are combined with product type estimates, such as those that arise when we take different partial fractional derivatives $D^{\alpha}_{x_1} D^{\beta}_{x_2} f$. We refer to our recent work \cite{LMV} for a thorough general background on the subject.
It is already known \cite{GLTP} that the multi-parameter maximal function $(f_1, \ldots, f_n) \mapsto \sup_R 1_R \prod_{i=1}^n \langle |f_i|\rangle_R$, where the supremum is over rectangles $R = \prod_{i=1}^m I^i \subset \prod_{i=1}^m \mathbb{R}^{d_i}$ with sides parallel to the axes, satisfies the desired genuinely multilinear weighted estimates. However, in contrast to the one-parameter case, there is no known general principle which would automatically imply the corresponding weighted estimate for multi-parameter SIOs from the maximal function estimate. In particular, no estimate like \eqref{eq:CF} is known. In the paper \cite{LMV} we developed the general theory of bilinear bi-parameter SIOs including weighted estimates under the more restrictive assumption $w_i^{p_i} \in A_{p_i}$. In fact, it was only in \cite{ALMV} that we reached these weighted estimates without any additional cancellation assumptions of the type $T1=0$.
There are no genuinely multilinear weighted estimates for \textbf{any} multi-parameter SIOs in the literature -- not even for the bi-parameter analogues (see e.g. \cite[Appendix A]{LMV}) of Coifman--Meyer \cite{CM} type multilinear multipliers. Almost ten years after the maximal function result \cite{GLTP} we establish these missing bounds, not only for some special SIOs but for a very general class of $n$-linear $m$-parameter SIOs. With weighted bounds previously known both in the linear multi-parameter setting \cite{RF1, RF2, HPW} and in the multilinear one-parameter setting \cite{LOPTT}, we finally establish a holistic view completing the theory of qualitative weighted estimates in the joint presence of multilinearity and product space theory.
With the understanding that a Calder\'on--Zygmund operator (CZO) is an SIO satisfying natural $T1$ type assumptions, our main result reads as follows. \begin{thm}\label{thm:intro1} Suppose $T$ is an $n$-linear $m$-parameter CZO in $\mathbb{R}^d = \prod_{i=1}^m \mathbb{R}^{d_i}$. If $1 < p_1, \ldots, p_n \le \infty$ and $1/p = \sum_{i=1}^n 1/p_i> 0$, we have $$
\|T(f_1, \ldots, f_n)w \|_{L^p} \lesssim \prod_{i=1}^n \|f_i w_i\|_{L^{p_i}}, \qquad w = \prod_{i=1}^n w_i, $$ for all $n$-linear $m$-parameter weights $\vec w = (w_1, \ldots, w_n) \in A_{\vec p}$, $\vec p = (p_1, \ldots, p_n)$. Here $\vec w \in A_{\vec p}$ if $$ [\vec{w}]_{A_{\vec p}} :=\sup_R \, \ave{w^p}_R^{\frac 1 p} \prod_{i=1}^n \ave{w_i^{-p_i'}}_R^{\frac 1{p_i'}} < \infty, $$ where the supremum is over all rectangles $R \subset \mathbb{R}^d$. \end{thm} \noindent For the exact definitions, see the main text.
Recent extrapolation methods are crucial both for the proof and for the applications. The extrapolation theorem of Rubio de Francia says that if $\|g\|_{L^{p_0}(w)} \lesssim \|f\|_{L^{p_0}(w)}$ for some $p_0 \in (1,\infty)$ and all $w \in A_{p_0}$, then $\|g\|_{L^{p}(w)} \lesssim \|f\|_{L^{p}(w)}$ for all $p \in (1,\infty)$ and all $w \in A_{p}$. In \cite{GM} (see also \cite{DU}) a multivariable analogue was developed in the setting $w_i^{p_i} \in A_{p_i}$, $i = 1, \ldots, n$. Such extrapolation results are already of fundamental use in proving other estimates -- often simply to deduce the full $n$-linear range of unweighted estimates $\prod_{j=1}^n L^{p_j}\to L^p$, $\sum_j 1/p_j = 1/p$, $1 < p_j < \infty$, $1/n <p < \infty$, from some particular single tuple $(p_1, \ldots, p_n, p)$. Indeed, reaching $p \le 1$ can often be a crucial challenge, particularly so in multi-parameter settings where many other tools are completely missing.
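To illustrate in the bilinear case: if one proves, for a bilinear operator $T$, the single-tuple estimate $$ \|T(f_1,f_2)w_1w_2\|_{L^1} \lesssim \|f_1w_1\|_{L^{2}}\|f_2w_2\|_{L^{2}} $$ for all weights with $w_i^{2} \in A_2$, then the extrapolation of \cite{GM} yields the weighted -- and in particular the unweighted -- bounds $T \colon L^{p_1} \times L^{p_2} \to L^p$ in the full range $1 < p_1, p_2 < \infty$, $1/p = 1/p_1 + 1/p_2$; for example, $T \colon L^{3/2} \times L^{6/5} \to L^{2/3}$, an exponent $p < 1$ that can be hard to reach directly.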
Very recently, in \cite{LMO} it was shown that the genuinely multilinear weighted estimates can also be extrapolated. In the subsequent paper \cite{LMMOV} (see also \cite{Nieraeth}) a key advantage of extrapolating using the general weight classes was identified: it is possible both to start the extrapolation from weighted estimates with $p_i = \infty$ and, importantly, to reach such estimates as a consequence of the extrapolation. See Theorem \ref{thm:ext} for a formulation of these general extrapolation principles. Moreover, extrapolation is flexible in the sense that one can extrapolate both $1$-parameter and $m$-parameter, $m\ge 2$, weighted estimates.
These new extrapolation results are extremely useful e.g. in proving mixed-norm estimates -- for example, in the bi-parameter case they yield that \[
\| T(f_1,\ldots, f_n)\|_{L^p(\mathbb{R}^{d_1}; L^q(\mathbb{R}^{d_2}))}\lesssim \prod_{i=1}^n \| f_i\|_{L^{p_i}(\mathbb{R}^{d_1}; L^{q_i}(\mathbb{R}^{d_2}))}, \] where $1<p_i, q_i\le \infty$, $\frac 1p= \sum_i \frac{1}{p_i} >0$ and $\frac 1q=\sum_i \frac{1}{q_i} >0$. The point is that even all of the various cases involving $\infty$ become immediate. See e.g. \cite{DO, LMMOV, LMV} for some of the previous mixed-norm estimates. Compared to \cite{LMMOV} we can work with completely general $n$-linear $m$-parameter SIOs instead of bi-parameter tensor products of $1$-parameter SIOs, and the proof is much simplified due to the optimal weighted estimates, Theorem \ref{thm:intro1}.
We also use extrapolation to give a new short proof of the boundedness of the multi-parameter $n$-linear maximal function \cite{GLTP} -- see Proposition \ref{prop:prop2}.
On the technical level there is no existing approach to our result: the modern one-parameter tools (such as sparse domination in the multilinear setting, see e.g. \cite{CUDPOU})
are missing and many of the bi-parameter methods \cite{LMV} used in conjunction with the assumption that each weight individually satisfies $w_i^{p_i} \in A_{p_i}$ are of little use. Aside from maximal function estimates, multi-parameter estimates require various square function estimates (and combinations of maximal function and square function estimates). Just as one cannot use $\prod_i Mf_i$ instead of $M(f_1, \ldots, f_n)$ due to the nature of the multilinear weights, it is not possible to use classical square function estimates separately for the functions $f_i$. This interplay makes it impossible to decouple estimates into terms like $\|M f_1 \cdot w_1\|_{L^{p_1}} \|S f_2 \cdot w_2\|_{L^{p_2}}$, since neither factor would be bounded separately, as in general $w_1^{p_1} \not \in A_{p_1}$ and $w_2^{p_2} \not \in A_{p_2}$. However, such decoupling of estimates has previously seemed almost indispensable.
Our proof starts with the reduction to dyadic model operators \cite{AMV} (see also \cite{DLMV2, Hy1, LMOV, Ma1, Ou}), which is a standard idea. After this we introduce a family of $n$-linear multi-parameter square function type objects $A_k$. On the idea level, a big part of the proof works by taking a dyadic model operator $S$ and finding an appropriate square function $A_k$ so that $$
\|S(f_1, \ldots, f_n)w \|_{L^p} \lesssim \|A_k(f_1, \ldots, f_n)w \|_{L^p}. $$ This requires different tools depending on the model operator in question and is a new way to estimate model operators that respects the $n$-linear structure fully. We then prove that all of our operators $A_k$ satisfy the genuinely $n$-linear weighted estimates $$
\|A_{k}(f_1, \ldots, f_n)w \|_{L^p} \lesssim \prod_{i=1}^n \|f_i w_i\|_{L^{p_i}}. $$ This is done with an argument that is based on using duality and lower square function estimates iteratively until all of the cancellation present in these square functions has been exploited.
Aside from the full range of mixed-norm estimates, the weighted estimates immediately give other applications as well. We present here a result on commutators, which greatly generalises \cite{LMV}. Commutator estimates appear all over analysis: they imply e.g. factorizations for Hardy functions \cite{CRW} and certain div-curl lemmas relevant in compensated compactness, and they were recently connected to the Jacobian problem $Ju = f$ in $L^p$ (see \cite{HyCom}). For a small sample of commutator estimates in various other key settings see e.g. \cite{FL, HLW, HPW, LOR1}. \begin{thm} Suppose $T$ is an $n$-linear $m$-parameter CZO in $\mathbb{R}^d = \prod_{i=1}^m \mathbb{R}^{d_i}$,
$1 < p_1, \ldots, p_n \le \infty$ and $1/p = \sum_{i=1}^n 1/p_i> 0$. Suppose also that $\|b\|_{\operatorname{bmo}} = \sup_R \frac{1}{|R|}
\int_R |b-\ave{b}_R| < \infty$. Then for all $1\le k\le n$ we have the commutator estimate \begin{equation*} \begin{split}
\| [b, T]_k(f_1,\ldots, f_n) w\|_{L^p} &\lesssim \|b\|_{\operatorname{bmo}} \prod_{i=1}^n \|f_iw_i\|_{L^{p_i}}, \\ [b, T]_k(f_1,\ldots, f_n) &:= bT(f_1, \ldots, f_n) - T(f_1, \ldots, f_{k-1}, bf_k, f_{k+1}, \ldots, f_n), \end{split} \end{equation*} for all $n$-linear $m$-parameter weights $\vec w = (w_1, \ldots, w_n) \in A_{\vec p}$. Analogous results hold for iterated commutators. \end{thm} We note that we can also finally dispose of some of the sparse domination tools that restricted some of the theory of \cite{LMV} to bi-parameter.
\section{Preliminaries}\label{sec:def}
Throughout this paper $A\lesssim B$ means that $A\le CB$ with some constant $C$ that we deem unimportant to track at that point. We write $A\sim B$ if $A\lesssim B\lesssim A$. Sometimes we e.g. write $A \lesssim_{\epsilon} B$ if we want to make the point that $A \le C(\epsilon) B$.
\subsection{Dyadic notation} Given a dyadic grid $\calD$ in $\mathbb{R}^d$, $I \in \calD$ and $k \in \mathbb{Z}$, $k \ge 0$, we use the following notation: \begin{enumerate} \item $\ell(I)$ is the side length of $I$. \item $I^{(k)} \in \calD$ is the $k$th parent of $I$, i.e., $I \subset I^{(k)}$ and $\ell(I^{(k)}) = 2^k \ell(I)$. \item $\ch(I)$ is the collection of the children of $I$, i.e., $\ch(I) = \{J \in \calD \colon J^{(1)} = I\}$.
\item $E_I f=\langle f \rangle_I 1_I$ is the averaging operator, where $\langle f \rangle_I = \fint_{I} f = \frac{1}{|I|} \int _I f$.
\item $\Delta_If$ is the martingale difference $\Delta_I f= \sum_{J \in \ch (I)} E_{J} f - E_{I} f$. \item $\Delta_{I,k} f$ is the martingale difference block $$ \Delta_{I,k} f=\sum_{\substack{J \in \calD \\ J^{(k)}=I}} \Delta_{J} f. $$
\end{enumerate}
For an interval $J \subset \mathbb{R}$ we denote by $J_{l}$ and $J_{r}$ the left and right halves of $J$, respectively. We define $h_{J}^0 = |J|^{-1/2}1_{J}$ and $h_{J}^1 = |J|^{-1/2}(1_{J_{l}} - 1_{J_{r}})$. Let now $I = I_1 \times \cdots \times I_d \subset \mathbb{R}^d$ be a cube, and define the Haar function $h_I^{\eta}$, $\eta = (\eta_1, \ldots, \eta_d) \in \{0,1\}^d$, by setting \begin{displaymath} h_I^{\eta} = h_{I_1}^{\eta_1} \otimes \cdots \otimes h_{I_d}^{\eta_d}. \end{displaymath} If $\eta \ne 0$ the Haar function is cancellative: $\int h_I^{\eta} = 0$. We abuse notation by suppressing the presence of $\eta$, and write $h_I$ for some $h_I^{\eta}$, $\eta \ne 0$. Notice that for $I \in \calD$ we have $\Delta_I f = \langle f, h_I \rangle h_I$ (where the finite $\eta$ summation is suppressed), $\langle f, h_I\rangle := \int fh_I$.
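For example, when $d = 1$ the basic properties can be checked by hand: $\int h_J^1 = |J|^{-\frac12}(|J_l| - |J_r|) = 0$, $\|h_J^1\|_{L^2} = 1$, and, since $\langle f \rangle_J = \frac12(\langle f \rangle_{J_l} + \langle f \rangle_{J_r})$ and $\langle f, h_J^1 \rangle = |J|^{-\frac12}\big( \int_{J_l} f - \int_{J_r} f \big)$, we have $$ \Delta_J f = \big( \langle f \rangle_{J_l} - \langle f \rangle_J \big)1_{J_l} + \big( \langle f \rangle_{J_r} - \langle f \rangle_J \big)1_{J_r} = \frac{\langle f \rangle_{J_l} - \langle f \rangle_{J_r}}{2}\big( 1_{J_l} - 1_{J_r} \big) = \langle f, h_J^1 \rangle h_J^1. $$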
We make a few clarifying comments related to the use of Haar functions. In the model operators coming from the representation theorem there are Haar functions involved. There we use the just mentioned convention that $h_I$ stands for some cancellative Haar function $h_I^{\eta}$, $\eta \ne 0$, which we do not further specify. On the other hand, the square function estimates in Section \ref{sec:SquareFunctions} are formulated using martingale differences (which involve multiple Haar functions as $\Delta_I f = \sum_{\eta \ne 0} \langle f, h_I^{\eta} \rangle h_I^{\eta}$). When we estimate the model operators, we carefully account for this difference by passing from the Haar functions to martingale differences via the simple identity \begin{equation}\label{eq:HaarMart} \langle f, h_I \rangle=\langle \Delta_I f, h_I \rangle, \end{equation} which follows from $\Delta_I f=\sum_{\eta \not=0} \langle f, h^{\eta}_I \rangle h^{\eta}_I$ and orthogonality.
\subsection{Multi-parameter notation} We will be working on the $m$-parameter product space $\mathbb{R}^d = \prod_{i=1}^m \mathbb{R}^{d_i}$. We denote a general dyadic grid in $\mathbb{R}^{d_i}$ by $\calD^i$. We denote cubes in $\calD^i$ by $I^i, J^i, K^i$, etc. Thus, our dyadic rectangles take the forms $\prod_{i=1}^m I^i$, $\prod_{i=1}^m J^i$, $\prod_{i=1}^m K^i$ etc. We usually denote the collection of dyadic rectangles by $\calD = \prod_{i=1}^m \calD^i$.
If $A$ is an operator acting on $\mathbb{R}^{d_1}$, we can always let it act on the product space $\mathbb{R}^d$ by setting $A^1f(x) = A(f(\cdot, x_2, \ldots, x_m))(x_1)$. Similarly, we use the notation $A^i f$ if $A$ is originally an operator acting on $\mathbb{R}^{d_i}$. Our basic multi-parameter dyadic operators -- martingale differences and averaging operators -- are obtained by simply chaining together relevant one-parameter operators. For instance, an $m$-parameter martingale difference is $$ \Delta_R f = \Delta_{I^1}^1 \cdots \Delta_{I^m}^m f, \qquad R = \prod_{i=1}^m I^i. $$ When we integrate with respect to only one of the parameters we may e.g. write \[ \langle f, h_{I^1} \rangle_1(x_2, \ldots, x_m):=\int_{\mathbb{R}^{d_1}} f(x_1, \ldots, x_m)h_{I^1}(x_1) \ud x_1 \] or $$ \langle f \rangle_{I^1, 1}(x_2, \ldots, x_m) := \fint_{I^1} f(x_1, \ldots, x_m) \ud x_1. $$
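For example, on tensor products $f = f^1 \otimes \cdots \otimes f^m$, $f^i \colon \mathbb{R}^{d_i} \to \mathbb{C}$, all of these operators factor: $$ \Delta_R f = \Delta_{I^1} f^1 \otimes \cdots \otimes \Delta_{I^m} f^m, \qquad \langle f, h_{I^1} \rangle_1 = \langle f^1, h_{I^1} \rangle \, f^2 \otimes \cdots \otimes f^m, \qquad \langle f \rangle_{I^1,1} = \langle f^1 \rangle_{I^1} \, f^2 \otimes \cdots \otimes f^m. $$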
\subsection{Adjoints}\label{sec:adjoints} Consider an $n$-linear operator $T$ on $\mathbb{R}^d = \mathbb{R}^{d_1} \times \mathbb{R}^{d_2}$. Let $f_j = f_j^1 \otimes f_j^2$, $j = 1, \ldots, n+1$. We set up notation for the adjoints of $T$ in the bi-parameter situation. We let $T^{j*}$, $j \in \{0, \ldots, n\}$, denote the full adjoints, i.e., $T^{0*} = T$ and otherwise $$ \langle T(f_1, \dots, f_n), f_{n+1} \rangle = \langle T^{j*}(f_1, \dots, f_{j-1}, f_{n+1}, f_{j+1}, \dots, f_n), f_j \rangle. $$ A subscript $1$ or $2$ denotes a partial adjoint in the given parameter -- for example, we define $$ \langle T(f_1, \dots, f_n), f_{n+1} \rangle = \langle T^{j*}_1(f_1, \dots, f_{j-1}, f_{n+1}^1 \otimes f_j^2, f_{j+1}, \dots, f_n), f_j^1 \otimes f_{n+1}^2 \rangle. $$ Finally, we can take partial adjoints with respect to different parameters in different slots also -- in that case we denote the adjoint by $T^{j_1*, j_2*}_{1,2}$. It simply interchanges the functions $f_{j_1}^1$ and $f_{n+1}^1$ and the functions $f_{j_2}^2$ and $f_{n+1}^2$. Of course, we e.g. have $T^{j*, j*}_{1,2} = T^{j*}$ and $T^{0*, j*}_{1,2} = T^{j*}_{2}$, so everything can be obtained, if desired, with the most general notation $T^{j_1*, j_2*}_{1,2}$. In any case, there are $(n+1)^2$ adjoints (including $T$ itself). These notions have obvious extensions to $m$-parameters.
\subsection{Structure of the paper} To avoid unnecessarily complicating the notation, we start by proving everything in the bi-parameter case $m=2$. Importantly, we present a proof which does not exploit this in a way that would not be extendable to $m$-parameters (e.g., our proof for the partial paraproducts does not exploit sparse domination for the appearing one-parameter paraproducts). At the end, we demonstrate for some key model operators how the general case can be dealt with.
\section{Weights}\label{sec:weights} The following notions have an obvious extension to $m$-parameters. A weight $w(x_1, x_2)$ (i.e. a locally integrable a.e. positive function) belongs to the bi-parameter weight class $A_p = A_p(\mathbb{R}^{d_1} \times \mathbb{R}^{d_2})$, $1 < p < \infty$, if $$ [w]_{A_p} := \sup_{R}\, \ave{w}_R \ave{w^{1-p'}}^{p-1}_R = \sup_{R}\, \ave{w}_R \ave{w^{-\frac{1}{p-1}}}^{p-1}_R
< \infty, $$ where the supremum is taken over rectangles $R$ -- that is, over $R = I^1 \times I^2$ where $I^i \subset \mathbb{R}^{d_i}$ is a cube. Thus, this is the one-parameter definition but cubes are replaced by rectangles.
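For example, tensor products behave well: if $u \in A_p(\mathbb{R}^{d_1})$ and $v \in A_p(\mathbb{R}^{d_2})$, then for $w = u \otimes v$ the averages factor over rectangles $R = I^1 \times I^2$, $$ \ave{w}_R = \ave{u}_{I^1} \ave{v}_{I^2}, \qquad \ave{w^{1-p'}}_R = \ave{u^{1-p'}}_{I^1} \ave{v^{1-p'}}_{I^2}, $$ so that $[u \otimes v]_{A_p(\mathbb{R}^{d_1} \times \mathbb{R}^{d_2})} = [u]_{A_p(\mathbb{R}^{d_1})}\, [v]_{A_p(\mathbb{R}^{d_2})}$. The class $A_p(\mathbb{R}^{d_1} \times \mathbb{R}^{d_2})$ is, of course, much larger than the class of such tensor products.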
We have \begin{equation}\label{eq:eq28} [w]_{A_p(\mathbb{R}^{d_1} \times \mathbb{R}^{d_2})} < \infty \textup { iff } \max\big( \esssup_{x_1 \in \mathbb{R}^{d_1}} \,[w(x_1, \cdot)]_{A_p(\mathbb{R}^{d_2})}, \esssup_{x_2 \in \mathbb{R}^{d_2}}\, [w(\cdot, x_2)]_{A_p(\mathbb{R}^{d_1})} \big) < \infty, \end{equation} and, in fact, $$ \max\big( \esssup_{x_1 \in \mathbb{R}^{d_1}} \,[w(x_1, \cdot)]_{A_p(\mathbb{R}^{d_2})}, \esssup_{x_2 \in \mathbb{R}^{d_2}}\, [w(\cdot, x_2)]_{A_p(\mathbb{R}^{d_1})} \big) \le [w]_{A_p(\mathbb{R}^{d_1}\times \mathbb{R}^{d_2})}, $$ while the constant $[w]_{A_p}$ is dominated by the maximum to some power. It is also useful that $\ave{w}_{I^2,2} \in A_p(\mathbb{R}^{d_1})$ uniformly in the cube $I^2 \subset \mathbb{R}^{d_2}$.
For basic bi-parameter weighted theory see e.g. \cite{HPW}. We say $w\in A_\infty(\mathbb{R}^{d_1}\times \mathbb{R}^{d_2})$ if \[ [w]_{A_\infty}:=\sup_R \, \ave{w}_R \exp\big( \ave{\log w^{-1}}_R \big)<\infty. \] It is well-known that $$A_\infty(\mathbb{R}^{d_1}\times \mathbb{R}^{d_2})=\bigcup_{1<p<\infty}A_p(\mathbb{R}^{d_1}\times \mathbb{R}^{d_2}).$$ We also define $$ [w]_{A_1} = \sup_R \, \ave{w}_R \esssup_R w^{-1}. $$ \begin{comment} The following multilinear reverse H\"older property is well-known -- for the history and a very short proof see e.g. \cite[Lemma 2.5]{Li}. The proof in our bi-parameter setting is the same. \begin{lem}\label{lem:lem6} Let $u_i \in (0, \infty)$ and $w_i \in A_{\infty}$, $i = 1, \ldots, N$, be bi-parameter weights. Then for every rectangle $R$ we have $$ \prod_{i=1}^N \ave{w_i}_R^{u_i} \lesssim \Big\langle \prod_{i=1}^N w_i^{u_i} \Big\rangle_R. $$ \end{lem} \end{comment}
We introduce the classes of multilinear Muckenhoupt weights that we will use. \begin{defn}\label{defn:defn1} Given $\vec p=(p_1, \ldots, p_n)$ with $1 \le p_1, \ldots, p_n \le \infty$ we say that $\vec{w}=(w_1, \ldots, w_n)\in A_{\vec p} = A_{\vec p}(\mathbb{R}^{d_1} \times \mathbb{R}^{d_2})$, if $$ 0<w_i <\infty, \qquad i = 1, \ldots, n, $$ almost everywhere and $$ [\vec{w}]_{A_{\vec p}} :=\sup_R \, \ave{w^p}_R^{\frac 1 p} \prod_{i=1}^n \ave{w_i^{-p_i'}}_R^{\frac 1{p_i'}} < \infty, $$ where the supremum is over rectangles $R$, $$ w := \prod_{i=1}^n w_i \qquad \textup{and} \qquad \frac 1 p = \sum_{i=1}^n \frac 1 {p_i}. $$ If $p_i = 1$ we interpret $\ave{w_i^{-p_i'}}_R^{\frac 1{p_i'}}$ as $\esssup_R w_i^{-1}$, and if $p = \infty$ we interpret $\ave{w^p}_R^{\frac 1 p}$ as $\esssup_R w$. \end{defn} \begin{rem} \begin{enumerate} \item It is important that the lower bound \begin{equation}\label{eq:eq7}
\ave{w^p}_R^{\frac 1 p} \prod_{i=1}^n \ave{w_i^{-p_i'}}_R^{\frac 1{p_i'}} \ge 1 \end{equation} holds always. To see this recall that for $\alpha_1, \alpha_2 > 0$ we have by H\"older's inequality that \begin{equation}\label{eq:eq8} \begin{split} 1 \le \ave{ w^{-\alpha_1}}_R^{\frac{1}{\alpha_1}} \ave{ w^{\alpha_2}}_R^{\frac{1}{\alpha_2}}. \end{split} \end{equation} Apply this with $\alpha_2 = p$ and $\alpha_1 = \frac{1}{n-\frac{1}{p}}$. Then apply H\"older's inequality with the exponents $u_i = \Big(n-\frac{1}{p}\Big)p_i'$ to get $\Big \langle \Big( \prod_{i=1}^n w_i \Big)^{-\frac{1}{n-\frac{1}{p}}} \Big \rangle_R^{n-\frac{1}{p}} \le \prod_{i=1}^n \ave{ w_i^{-p_i'}}_R^{\frac{1}{p_i'}}$. \item Our definition is essentially the usual one-parameter definition \cite{LOPTT} with the difference that cubes are replaced by rectangles. However, we are also using the renormalised definition from \cite{LMMOV} that works better with the exponents $p_i = \infty$. Compared to the usual formulation of \cite{LOPTT} the relation is that $[w_1^{p_1}, \ldots, w_n^{p_n}]_{A_{\vec p}}^{\frac 1p}$ with $A_{\vec p}$ defined as in \cite{LOPTT} agrees with our $[\vec w]_{A_{\vec p}}$ when $p_i < \infty$. \item The case $p_1 = \cdots = p_n = \infty = p$ can be used as the starting point of extrapolation. This is rarely useful but we will find use for it when we consider the multilinear maximal function. \end{enumerate} \end{rem}
The following characterization of the class $ A_{\vec p}$ is convenient. The one-parameter result with the different normalization is \cite[Theorem 3.6]{LOPTT}. We record the proof for the convenience of the reader. \begin{lem}\label{lem:lem1} Let $\vec p=(p_1, \ldots, p_n)$ with $1 \le p_1, \ldots, p_n \le \infty$, $1/p = \sum_{i=1}^n 1/p_i \ge 0$, $\vec{w}=(w_1, \ldots, w_n)$ and $w = \prod_{i=1}^n w_i$. We have $$ [w_i^{-p_i'}]_{A_{np_i'}} \le [\vec{w}]_{A_{\vec p}}^{p_i'}, \qquad i = 1, \ldots, n, $$ and $$ [w^p]_{A_{np}} \le [\vec{w}]_{A_{\vec p}}^{p}. $$ In the case $p_i = 1$ the estimate is interpreted as $[w_i^{\frac 1n}]_{A_1} \le [\vec{w}]_{A_{\vec p}}^{1/n}$, and in the case $p=\infty$ we have $[w^{-\frac{1}{n}}]_{A_1} \le [\vec{w}]_{A_{\vec p}}^{1/n}$.
Conversely, we have $$
[\vec{w}]_{A_{\vec p}} \le [w^p]_{A_{np}}^{\frac{1}{p}} \prod_{i=1}^n [w_i^{-p_i'}]_{A_{np_i'}}^{\frac{1}{p_i'}}. $$ \end{lem} \begin{proof} We fix an arbitrary $j \in \{1, \ldots, n\}$ for which we will show $[w_j^{-p_j'}]_{A_{np_j'}} \le [\vec{w}]_{A_{\vec p}}^{p_j'}$. Notice that \begin{equation}\label{eq:eq1} \frac{1}{p} + \sum_{i \ne j} \frac{1}{p_i'} = n - 1 + \frac{1}{p_j}. \end{equation} We define $q_j$ via the identity $$ \frac{1}{q_j} = \frac{1}{n - 1 + \frac{1}{p_j}}\cdot \frac 1p $$ and for $i \ne j$ we set $$ \frac{1}{q_i} = \frac{1}{n - 1 + \frac{1}{p_j}} \cdot\frac{1}{p_i'}. $$ From \eqref{eq:eq1} we have that $\sum_i \frac{1}{q_i} = 1$. By definition we have \begin{equation}\label{eq:eq3} [w_j^{-p_j'}]_{A_{np_j'}} = \sup_R \, \ave{ w_j^{-p_j'}}_R \ave{ w_j^{p_j' \frac{1}{np_j'-1}}}_R^{np_j' - 1}. \end{equation} Notice that $$ p_j' \frac{1}{np_j'-1} = \frac{1}{n-\frac{1}{p_j'}} = \frac{1}{n - 1 + \frac{1}{p_j}}. $$
Using H\"older's inequality with the exponents $q_1, \ldots, q_n$ we have the desired estimate $$ \ave{ w_j^{-p_j'}}_R^{\frac 1{p_j'}} \ave{ w_j^{\frac{p}{q_j}}}_R^{\frac{q_j}{p}} = \ave{ w_j^{-p_j'}}_R^{\frac 1{p_j'}}\ave{ w^{\frac{p}{q_j}} \prod_{i \ne j } w_i^{-\frac{p}{q_j}}}_R^{\frac{q_j}{p}} \le \ave{w^p}_R^{\frac{1}{p} } \prod_{i} \ave{ w_i^{-p_i'} }_R^{\frac{1}{p_i'} } \le [\vec{w}]_{A_{\vec p}}. $$ When $p_j=1$ this is $\esssup_R w_j^{-1} \ave{ w_j^{\frac{1}{n}}}_R^{n}\le [\vec{w}]_{A_{\vec p}}$, and so $[w_j^{\frac 1n}]_{A_1}^n \le [\vec{w}]_{A_{\vec p}}$.
We now move on to bounding $[w^p]_{A_{np}}$. Notice that by definition \begin{equation}\label{eq:eq4} [w^p]_{A_{np}} = \sup_R \, \ave{w^p}_R \ave{ w^{-\frac{p}{np-1}}}_R^{np-1}. \end{equation} We define $s_i$ via $$ -\frac{p}{np-1} \cdot s_i = - p_i' $$ and notice that then $\sum_i \frac{1}{s_i} = 1$. Then, by H\"older's inequality with the exponents $s_1, \ldots, s_n$ we have $$ \ave{w^p}_R \ave{ w^{-\frac{p}{np-1}}}_R^{np-1} \le \ave{w^p}_R \prod_i \ave{ w_i^{-p_i'}}_R^{\big(\frac{p}{np-1}\big)\frac{1}{p_i'}(np-1)} = \Big[ \ave{w^p}_R^{\frac{1}{p}} \prod_i \ave{ w_i^{-p_i'}}_R^{\frac{1}{p_i'}} \Big]^{p} \le [\vec{w}]_{A_{\vec p}}^{p}, $$ which is the desired bound for $[w^p]_{A_{np}}$. Notice that in the case $p=\infty$ we get $$ [w^{-\frac{1}{n}}]_{A_1}^n = \sup_R \big\langle w^{-\frac{1}{n}} \big \rangle_R^{n} \esssup_R w \le \sup_R\Big[ \prod_i \langle w_i^{-1} \rangle_R \Big]\esssup_R w = [\vec{w}]_{A_{\vec p}}. $$
We then move on to bounding $ [\vec{w}]_{A_{\vec p}}$. It is based on the following inequality \begin{equation}\label{eq:eq2} 1 \le \ave{ w^{-\frac{p}{np-1}} }_R^{n-\frac{1}{p}} \prod_i \Big\langle w_i^{\frac{1}{n - 1 + \frac{1}{p_i}}} \Big\rangle_R^{n-1+\frac{1}{p_i}}. \end{equation} Before proving this, we show how it implies the desired bound. We have \begin{align*} [\vec{w}&]_{A_{\vec p}} = \sup_R \, \ave{w^p}_R^{\frac 1 p} \prod_i \ave{w_i^{-p_i'}}_R^{\frac 1{p_i'}} \\ &\le \sup_R \Big[ \ave{w^p}_R \ave{ w^{-\frac{p}{np-1}} }_R^{np-1} \Big]^{\frac 1 p}
\prod_i \Big[ \ave{w_i^{-p_i'}}_R \big \langle w_i^{p_i' \frac{1}{np_i'-1}}\big \rangle_R^{np_i' - 1} \Big]^{\frac 1{p_i'}}
\le [w^p]_{A_{np}}^{\frac{1}{p}} \prod_i [w_i^{-p_i'}]_{A_{np_i'}}^{\frac{1}{p_i'}}, \end{align*} where in the last estimate we recalled \eqref{eq:eq3} and \eqref{eq:eq4}.
Let us now give the details of \eqref{eq:eq2}. We apply \eqref{eq:eq8} with $\alpha_1 = \frac{p}{np-1}$ and $\alpha_2 = \frac{1}{n(n-1)+\frac{1}{p}}$ to get $$ 1 \le \ave{ w^{-\frac{p}{np-1}} }_R^{n-\frac{1}{p}} \Big\langle w^{ \frac{1}{n(n-1)+\frac{1}{p}} }\Big\rangle_R^{n(n-1)+\frac{1}{p}}. $$ The first term is already as in \eqref{eq:eq2}. Define $u_i$ via $$
\frac{1}{n(n-1)+\frac{1}{p}} u_i = \frac{1}{n - 1 + \frac{1}{p_i}} $$
and notice that by H\"older's inequality with these exponents ($\sum_i \frac{1}{u_i} = 1$) we have $$ \Big\langle w^{ \frac{1}{n(n-1)+\frac{1}{p}} }\Big\rangle_R^{n(n-1)+\frac{1}{p}} \le \prod_i \Big\langle w_i^{\frac{1}{n - 1 + \frac{1}{p_i}} }\Big\rangle_R^{ n - 1 + \frac{1}{p_i}}, $$ which matches the second term in \eqref{eq:eq2}. \end{proof} The following duality of multilinear weights is handy -- see \cite[Lemma 3.1]{LMS}. We give the short proof for convenience. \begin{lem}\label{lem:lem7} Let $\vec p=(p_1, \ldots, p_n)$ with $1 < p_1, \ldots, p_n < \infty$ and $\frac 1 p = \sum_{i=1}^n \frac 1 {p_i} \in (0,1)$. Let $\vec{w}=(w_1, \ldots, w_n)\in A_{\vec p}$ with $w = \prod_{i=1}^n w_i$ and define \begin{align*} \vec w^{\, i} &= (w_1, \ldots, w_{i-1}, w^{-1}, w_{i+1}, \ldots, w_n), \\ \vec p^{\,i} &= (p_1, \ldots, p_{i-1}, p', p_{i+1}, \ldots, p_n). \end{align*} Then we have $$ [\vec{w}^{\,i}]_{A_{\vec p^{\, i}}} = [\vec{w}]_{A_{\vec p}}. $$ \end{lem} \begin{proof} We take $i=1$ for notational convenience. Notice that $\frac{1}{p'} + \sum_{i=2}^n \frac{1}{p_i} = \frac{1}{p_1'}$. Notice also that $w^{-1} \prod_{i=2}^n w_i = w_1^{-1}$. Therefore, we have $$ [\vec{w}^{\,i}]_{A_{\vec p^{\, i}}} = \ave{w_1^{-p_1'}}_R^{\frac{1}{p_1'}} \ave{ w^{p}}_R^{\frac{1}{p}} \prod_{i=2}^n \ave{ w_i^{-p_i'}}_R^{\frac{1}{p_i'}} = [\vec{w}]_{A_{\vec p}}. $$ \end{proof}
\begin{comment} The partial adjoints have to be always considered separately. However, using Lemma \ref{lem:lem7} we see that the weighted boundedness of $T$ transfers to the adjoints $T^{j*}$. Let us show this. Suppose we know that for some exponents $1 < p_1, \ldots, p_n \le \infty$ and $1/p = \sum_i 1/p_i> 0$ we have \begin{equation}\label{eq:eq5}
\|T(f_1, \ldots, f_n) w \|_{L^p} \lesssim \prod_i \|f_i w_i\|_{L^{p_i}} \end{equation}
for all multilinear bi-parameter weights $\vec w = (w_1, \ldots, w_n) \in A_{\vec p}$. By extrapolation \cite{LMMOV} we know that this holds with all exponents in this range. Fix some $\vec p$ with $1 < p_1, \ldots, p_n, p < \infty$ and $\vec w = (w_1, \ldots, w_n) \in A_{\vec p}$. Let $\|f_{n+1}\|_{L^{p'}} \le 1$. We will bound $\|T^{1*}(f_1, \ldots, f_n)w\|_{L^p}$ by controlling \begin{align*}
|\langle T^{1*}(f_1, \ldots, f_n)w, f_{n+1}\rangle| &= |\langle T( f_{n+1}w, f_2, \ldots, f_n)w_1^{-1}, f_1w_1 \rangle| \\
&\le \|f_1 w_1\|_{L^{p_1}} \| T( f_{n+1}w, f_2, \ldots, f_n)w_1^{-1} \|_{L^{p_1'}}. \end{align*} By Lemma \ref{lem:lem7} we know that $(w^{-1}, w_2, \ldots, w_n) \in A_{(p', p_2, \ldots, p_n)}$. The product of these weights is $w_1^{-1}$ and the associated target exponent is $p_1'$. Applying \eqref{eq:eq5} with this data we get $$
\| T( f_{n+1}w, f_2, \ldots, f_n)w_1^{-1} \|_{L^{p_1'}} \le \|(f_{n+1}w) w^{-1}\|_{L^{p'}} \prod_{i=2}^n \| f_i w_i\|_{L^{p_i}} \le \prod_{i=2}^n \| f_i w_i\|_{L^{p_i}}. $$ We have shown that $$
\|T^{1*}(f_1, \ldots, f_n)w\|_{L^p} \le \prod_{i=1}^n \| f_i w_i\|_{L^{p_i}}. $$ By using the extrapolation \cite{LMMOV} again this holds for all exponents $1 < p_1, \ldots, p_n \le \infty$ with $1/p = \sum_i 1/p_i> 0$ and for all multilinear bi-parameter weights $\vec w = (w_1, \ldots, w_n) \in A_{\vec p}$. \end{comment}
We now recall the recent extrapolation result of \cite{LMMOV}. The previous version, which did not yet allow exponents to be $\infty$, appeared in \cite{LMO}. For related independent work see \cite{Nieraeth}. The previous extrapolation results with the separate assumptions $w_i^{p_i} \in A_{p_i}$ appear in \cite{GM} and \cite{DU}. An even more general result than the one below appears in \cite{LMMOV}, but we will not need that generality here. Finally, we note that the proof of this extrapolation result can be made to work in $m$-parameters even though \cite{LMMOV} provides the details only in the one-parameter case -- we give more details later in Section \ref{app:app1}. \begin{thm}\label{thm:ext}
Let $f_1, \ldots, f_n$ and $g$ be given functions. Given $\vec p=(p_1,\dots, p_n)$ with $1\le p_1,\dots, p_n\le \infty$ let $\frac1p= \sum_{i=1}^n \frac{1}{p_i}$. Assume that given any $\vec w=(w_1,\dots, w_n) \in A_{\vec p}$ the inequality
\begin{equation}\label{extrapol:H*}
\|gw\|_{L^{p}} \lesssim \prod_{i=1}^n \|f_iw_i\|_{L^{p_i} }
\end{equation}
holds, where $w:=\prod_{i=1}^n w_i $. Then for all exponents $\vec q=(q_1,\dots,q_n)$, with $1< q_1,\dots, q_n\le \infty$ and $\frac1q= \sum_{i=1}^n \frac{1}{q_i} >0$, and for all weights $\vec v=(v_1,\dots, v_n) \in A_{\vec q}$ the inequality
\begin{equation*}\label{extrapol:C*}
\|gv\|_{L^{q} } \lesssim \prod_{i=1}^n \|f_iv_i\|_{L^{q_i} }
\end{equation*}
holds, where $v:=\prod_{i=1}^n v_i $.
Given functions $f_1^j, \ldots, f_n^j$ and $g^j$ so that \eqref{extrapol:H*} holds uniformly in $j$, we have
for the same family of exponents and
weights as above, and for all exponents $\vec{s}=(s_1,\dots, s_n)$ with $1< s_1,\dots, s_n\le \infty$ and $\frac1s=\sum_i \frac{1}{s_i} >0$ the inequality
\begin{equation}\label{extrapol:vv*}
\| (g^j v)_j\|_{L^{q}(\ell^s)}
\lesssim
\prod_{i=1}^n \|(f_i^j v_i)_j\|_{L^{q_i}(\ell^{s_i}) }.
\end{equation} \end{thm} \begin{rem} Using Lemma \ref{lem:lem7} and extrapolation, Theorem \ref{thm:ext}, we see that the weighted boundedness of $T$ transfers to the adjoints $T^{j*}$. Partial adjoints, however, always have to be considered separately. \end{rem}
We end this section by demonstrating the necessity of the $A_{\vec p}$ condition for the weighted boundedness of SIOs. We work in the $m$-parameter setting and let $\mathbb{R}^d=\mathbb{R}^{d_1} \times \dots \times \mathbb{R}^{d_m}$. Let $R_j$ be the following version of the $n$-linear one-parameter Riesz transform in $\mathbb{R}^{d_j}$: \[
R_j(f_1,\dots, f_n)=\textup{p.v.} \int_{\mathbb{R}^{d_jn}}\frac{\sum_{i=1}^n\sum_{k=1}^{d_j}(x-y_i)_k}{(\sum_{i=1}^{n}|x-y_i|)^{d_j n+1}}f_1(y_1)\cdots f_n(y_n) \ud y_1\cdots \ud y_n, \] where $(x-y_i)_k$ is the $k$-th coordinate of $x-y_i \in \mathbb{R}^{d_j}$. Consider the tensor product $ R_{1}\otimes R_{2}\otimes \cdots \otimes R_{m}. $ Let $\vec w =(w_1, \dots, w_n)$ be a multilinear weight, that is, $0 < w_i < \infty$ a.e., and denote $w=\prod_{i=1}^n w_i$. Suppose that for some exponents $1< p_1,\dots, p_n\le \infty$ with $1/p=\sum_{i=1}^n 1/{p_i}>0$ the estimate \[
\| R_{1}\otimes R_{2}\otimes \cdots \otimes R_{m} (f_1, \dots, f_n)\|_{L^{p,\infty}(w^p)}
\lesssim \prod_{i=1}^n \|f_iw_i\|_{L^{p_i}} \] holds for all $f_i \in L^\infty _c$. We show that $\vec w$ is an $m$-parameter $A_{\vec p}$ weight.
Define $\sigma_i=w_i^{-p_i'}$. Let $E \subset \mathbb{R}^d$ be an arbitrary set such that $1_E \sigma_i \in L^\infty_c$ for all $i=1, \dots, n$. Fix an $m$-parameter rectangle $R=R^1 \times \cdots \times R^m \subset \mathbb{R}^d$, where each $R^j$ is a cube. Let $R^+= (R^1)^+ \times \cdots \times (R^m)^+$, where $(R^j)^+:=R^j+(\ell(R^j), \dots, \ell(R^j))$.
Using the kernel of $R_1 \otimes \cdots \otimes R_m$ we have for all $x \in R^+$ that \begin{align*} R_{1}\otimes R_{2}\otimes \cdots \otimes R_{m} (1_E \sigma_11_R, \dots, 1_E\sigma_n1_R)(x) \gtrsim \prod_{i=1}^n \langle 1_E\sigma_i\rangle_R. \end{align*} Hence \begin{equation*} w^p(R^+)^{\frac 1p}\prod_{i=1}^n \langle 1_E\sigma_i\rangle_R
\lesssim \prod_{i=1}^n \|1_E\sigma_i1_Rw_i \|_{L^{p_i}} =\prod_{i=1}^n \sigma_i(E \cap R)^{\frac{1}{p_i}}, \end{equation*} which gives that $ \langle w^p\rangle_{R^+}^{\frac 1p}\prod_{i=1}^n \langle 1_E\sigma_i\rangle_R^{\frac{1}{p_i'}} \lesssim 1. $ Since $E$ was arbitrary this implies the estimate \begin{equation}\label{eq:eq29} \langle w^p\rangle_{R^+}^{\frac 1p}\prod_{i=1}^n \langle \sigma_i\rangle_R^{\frac{1}{p_i'}} \lesssim 1. \end{equation} Similarly, we can show that \begin{equation}\label{eq:eq30} \langle w^p\rangle_{R}^{\frac 1p}\prod_{i=1}^n \langle \sigma_i\rangle_{R^+}^{\frac{1}{p_i'}} \lesssim 1. \end{equation}
By H\"older's inequality we have that \[ \langle w^{-\frac p{np-1}}\rangle_{R^+}^{\frac{np-1}p}\le \prod_{i=1}^n \langle \sigma_i\rangle_{R^+}^{\frac 1{p_i'}}. \] Hence, \eqref{eq:eq30} shows that \[ \langle w^p\rangle_{R}^{\frac 1p} \langle w^{-\frac p{np-1}}\rangle_{R^+}^{\frac{np-1}p} \lesssim 1. \] Therefore, \begin{align*} \frac{\langle w^p\rangle_{R}^{\frac 1p}}{\langle w^p\rangle_{R^+}^{\frac 1p}} =\frac{\langle w^p\rangle_{R}^{\frac 1p}\langle w^{-\frac p{np-1}}\rangle_{R^+}^{\frac{np-1}p}}{\langle w^p\rangle_{R^+}^{\frac 1p}\langle w^{-\frac p{np-1}}\rangle_{R^+}^{\frac{np-1}p}} \lesssim 1, \end{align*} where the denominator in the middle term was $\ge 1$. Thus, $\langle w^p\rangle_{R}^{\frac 1p} \lesssim \langle w^p\rangle_{R^+}^{\frac 1p}$, which together with \eqref{eq:eq29} gives that $ \langle w^p\rangle_{R}^{\frac 1p}\prod_{i=1}^n \langle \sigma_i\rangle_R^{\frac 1{p_i'}} \lesssim 1. $
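For the reader's convenience we verify the application of H\"older's inequality above. Since $\sigma_i = w_i^{-p_i'}$, we have $$ w^{-\frac p{np-1}}=\prod_{i=1}^n w_i^{-\frac p{np-1}}=\prod_{i=1}^n \sigma_i^{\frac{p}{(np-1)p_i'}}, \qquad \sum_{i=1}^n \frac{p}{(np-1)p_i'}=\frac{p}{np-1}\Big(n-\frac 1p\Big)=1. $$ Hence H\"older's inequality yields $$ \Big\langle \prod_{i=1}^n \sigma_i^{\frac{p}{(np-1)p_i'}}\Big\rangle_{R^+} \le \prod_{i=1}^n \langle \sigma_i\rangle_{R^+}^{\frac{p}{(np-1)p_i'}}, $$ and the stated estimate follows by raising this to the power $\frac{np-1}p$.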
\section{Maximal functions} It was proved in \cite{GLTP} that the multilinear bi-parameter (or multi-parameter) maximal function is bounded with respect to the genuinely multilinear bi-parameter weights. We give a new efficient proof of this. Let $\calD = \calD^1 \times \calD^2$ be a fixed lattice of dyadic rectangles and define $$
M_{\calD}(f_1, \ldots, f_n) = \sup_{R \in \calD} \prod_{i=1}^n \ave{ |f_i| }_R 1_R. $$ \begin{prop}\label{prop:prop2} If $1 < p_1, \ldots, p_n \le \infty$ and $1/p = \sum_{i=1}^n 1/p_i$ we have $$
\|M_{\calD}(f_1, \ldots, f_n)w \|_{L^p} \lesssim \prod_{i=1}^n \|f_i w_i\|_{L^{p_i}} $$ for all multilinear bi-parameter weights $\vec w \in A_{\vec p}$. \end{prop} \begin{proof} Our proof is based on first proving the case $\vec p = (p_1, \ldots, p_n) = (\infty, \ldots, \infty)$ and then applying extrapolation, Theorem \ref{thm:ext}. In this case we have $$ \sup_R \Big[\prod_i \langle w_i^{-1} \rangle_R \Big]\cdot \esssup_R w = [\vec{w}]_{A_{\vec p}}, $$ and therefore $$ \prod_i \langle w_i^{-1} \rangle_R \lesssim \frac{1}{\esssup_R w}. $$
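Here, in the case $\vec p = (\infty, \ldots, \infty)$, the constant $[\vec{w}]_{A_{\vec p}}$ is interpreted in the natural way: for $p_i = \infty$ we have $p_i' = 1$, so that the factor $\langle w_i^{-p_i'} \rangle_R^{1/p_i'}$ reads $\langle w_i^{-1} \rangle_R$, while for $p = \infty$ the factor $\langle w^p \rangle_R^{1/p}$ is replaced by $\esssup_R w$.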
For every $R \in \calD$ let $N_R \subset R$ be such that $|N_R| = 0$ and $w(x) \le \esssup_R w$ for all $x \in R \setminus N_R$. Let $N = \bigcup_{R \in \calD} N_R$. Then $|N| = 0$ and for every $x \in \mathbb{R}^d \setminus N$ we have $$ \frac{1}{w(x)} \ge \sup_{R \in \calD} \frac{1_R(x)}{\esssup_R w}. $$ Thus, we have \begin{align*}
M_{\calD}(f_1, \ldots, f_n)(x)w(x) &\le \Big[\prod_i \|f_i w_i\|_{L^{\infty}} \Big]\sup_{R \in \calD} \Big[ 1_R(x) \prod_i \ave{ w_i^{-1}}_R \Big] \cdot w(x) \\
&\lesssim \Big[\prod_i \|f_i w_i\|_{L^{\infty}}\Big] \sup_{R \in \calD} \Big[ \frac{1_R(x) }{\esssup_R w} \Big] \cdot w(x) \le \prod_i \|f_i w_i\|_{L^{\infty}} \end{align*}
almost everywhere, and so $\|M_{\calD}(f_1, \ldots, f_n)w\|_{L^{\infty}} \lesssim \prod_i \|f_i w_i\|_{L^{\infty}}$ as desired.
\begin{comment} We now give another proof, which is not so special and, thus, gives more intuition on operators different from the maximal function. This time we also use extrapolation but instead directly prove the case $\overline p = (p_1, \ldots, p_n) = (n, \ldots, n)$. Then we have $p = 1$, $p_i = n$ and $p_i' = \frac{n}{n-1}$. We denote $$ \sigma_i = w_i^{-p_i'} = w_i^{-\frac{n}{n-1}} $$ and use the definition of $[\vec w]_{A_{\vec p}}$ to estimate $$ \prod_i \ave{\sigma_i}_R \lesssim_{[\vec w]_{A_{\vec p}}} \ave{w}_R^{-\frac{n}{n-1}}. $$ Fix $x$ and consider an arbitrary $R \in \calD$ so that $x \in R$. Then we have \begin{align*}
\prod_i \ave{ |f_i| }_R &= \prod_i \frac{\sigma_i(R)}{|R|} \frac{1}{\sigma_i(R)} \int_R |f_i| \sigma_i^{-1} \sigma_i \\
&= \prod_i \ave{\sigma_i}_R \ave{ |f_i| \sigma_i^{-1}}_R^{\sigma_i} \\
&\lesssim \ave{w}_R^{-\frac{n}{n-1}} \prod_i \ave{ |f_i| \sigma_i^{-1}}_R^{\sigma_i} \\
&= \Big[ \ave{w}_R^{-1} \prod_i [ \ave{ |f_i| \sigma_i^{-1}}_R^{\sigma_i}]^{\frac{n-1}{n}} \Big]^{\frac{n}{n-1}} \\
&= \Big[ \frac{1}{w(R)} \int_R \prod_i [ \ave{ |f_i| \sigma_i^{-1}}_R^{\sigma_i}]^{\frac{n-1}{n}} \Big]^{\frac{n}{n-1}} \\
&\le \Big[ \frac{1}{w(R)} \int_R \prod_i [ M_{\calD}^{\sigma_i}( f_i \sigma_i^{-1}) ]^{\frac{n-1}{n}} \cdot w^{-1} w \Big]^{\frac{n}{n-1}} \\
&\le \Big[ M_{\calD}^w\Big(\prod_i [ M_{\calD}^{\sigma_i}( f_i \sigma_i^{-1})]^{\frac{n-1}{n}} \cdot w^{-1} \Big)(x) \Big]^{\frac{n}{n-1}}. \end{align*} Therefore, we have \begin{align*}
\|M_{\calD}(f_1, \ldots, f_n) \|_{L^1(w) } &\lesssim \Big\| M_{\calD}^w\Big(\prod_i [ M_{\calD}^{\sigma_i}( f_i \sigma_i^{-1})]^{\frac{n-1}{n}} \cdot w^{-1} \Big) \Big\|_{L^{\frac{n}{n-1}}(w)}^{\frac{n}{n-1}}. \end{align*} As $p=1$ we have by Lemma \ref{lem:lem1} that $w \in A_{n}$ (the exact class does not matter as long as we are in $A_{\infty}$) and so by Proposition \ref{prop:prop1} this is dominated by \begin{align*}
\Big\| \Big[ \prod_i M_{\calD}^{\sigma_i}( f_i \sigma_i^{-1}) \Big]^{\frac{n-1}{n}} w^{-1} \Big\|_{L^{\frac{n}{n-1}}(w)}^{\frac{n}{n-1}}
&= \Big\| \prod_i M_{\calD}^{\sigma_i}( f_i \sigma_i^{-1})w_i^{-\frac{1}{n-1}} \Big\|_{L^1} \\
&\le \prod_i \| M_{\calD}^{\sigma_i}( f_i \sigma_i^{-1})w_i^{-\frac{1}{n-1}} \|_{L^n} \\
&= \prod_i \| M_{\calD}^{\sigma_i}( f_i \sigma_i^{-1}) \|_{L^n(\sigma_i)}. \end{align*} By Lemma \ref{lem:lem1} we have $\sigma_i \in A_{\infty}$ and so by Proposition \ref{prop:prop1} we again have $$
\prod_i \| M_{\calD}^{\sigma_i}( f_i \sigma_i^{-1}) \|_{L^n(\sigma_i)} \lesssim \prod_i \| f_i \sigma_i^{-1} \|_{L^n(\sigma_i)} = \prod_i \| f_i w_i \|_{L^n}, $$ and so we are done. \end{comment} \end{proof}
If an average is with respect to a different measure $\mu$ than the Lebesgue measure, we write $\langle f \rangle_R^{\mu} := \frac{1}{\mu(R)} \int_R f\ud \mu$ and define $$
M_{\calD}^{\mu} f = \sup_R 1_R \langle |f| \rangle_R^{\mu}. $$ The following is a result of R. Fefferman \cite{RF3}. Recently, we also recorded a proof in \cite[Appendix B]{LMV:Bloom}. \begin{prop}\label{prop:prop1} Let $\lambda \in A_p$, $p \in (1,\infty)$, be a bi-parameter weight. Then for all $s \in (1,\infty)$ we have $$
\| M_{\calD}^{\lambda} f \|_{L^s(\lambda)} \lesssim [\lambda]_{A_p}^{1+1/s} \|f\|_{L^s(\lambda)}. $$ \end{prop}
We formulate some vector-valued versions of Proposition \ref{prop:prop1}. We state the following version with two sequence spaces -- of course, a version with arbitrarily many also works. Proposition \ref{prop:vecvalmax} is proved at the end of Section \ref{app:app1}.
\begin{prop}\label{prop:vecvalmax} Let $\mu\in A_\infty$, $w\in A_p(\mu)$ and $1<p,s,t<\infty$. Then we have \[
\left\| \Big\|\big\|\{M^\mu f_j^i\}\big\|_{\ell^s}\Big\|_{\ell^t}\right\|_{L^p(w\mu)}\lesssim \left\|\Big\|\big\|\{f_j^i\}\big\|_{\ell^s}\Big\|_{\ell^t}\right\|_{L^p(w\mu)}. \]In particular, we have \[
\left\| \Big\|\big\|\{M^\mu f_j^i\}\big\|_{\ell^s}\Big\|_{\ell^t}\right\|_{L^p(\mu)}\lesssim \left\|\Big\|\big\|\{f_j^i\}\big\|_{\ell^s}\Big\|_{\ell^t}\right\|_{L^p(\mu)}. \] \end{prop}
Finally, we point out that everything in this section works easily in the general multi-parameter situation.
\section{Square functions}\label{sec:SquareFunctions} Let $\calD = \calD^1 \times \calD^2$ be a fixed lattice of dyadic rectangles. We define the square functions $$
S_{\calD} f = \Big( \sum_{R \in \calD} |\Delta_R f|^2 \Big)^{1/2}, \,\, S_{\calD^1}^1 f = \Big( \sum_{I^1 \in \calD^1} |\Delta_{I^1}^1 f|^2 \Big)^{1/2} $$ and define $S_{\calD^2}^2 f$ analogously.
\begin{comment} The following lemma present the well-known and most basic square function estimates. We cannot rely on it as we will be working with the genuinely multilinear weights. \begin{lem}\label{lem:lem2} For $p \in (1,\infty)$ and a bi-parameter weight $w \in A_p$ we have $$
\| f \|_{L^p(w)}
\sim \| S_{\calD} f\|_{L^p(w)}
\sim \| S_{\calD^1}^1 f \|_{L^p(w)}
\sim \| S_{\calD^2}^2 f \|_{L^p(w)}. $$ \end{lem} \end{comment}
The following lower square function estimate, valid for $A_{\infty}$ weights, is important for us. The importance comes from the fact that, by Lemma \ref{lem:lem1}, some of the key weights $w^p$ and $w_i^{-p_i'}$ are at least $A_{\infty}$ for the multilinear weights of Definition \ref{defn:defn1}. \begin{lem}\label{lem:lem3} There holds $$
\|f\|_{L^p(w)} \lesssim \|S_{\calD^i}^i f\|_{L^p(w)} \lesssim \|S_{\calD} f\|_{L^p(w)} $$ for all $p \in (0, \infty)$ and bi-parameter weights $w \in A_{\infty}$. \end{lem} For a proof of the one-parameter estimate see \cite[Theorem 2.5]{Wi}. The bi-parameter results can be deduced using the following extremely useful $A_{\infty}$ extrapolation result \cite{CUMP}, which will be applied several times throughout the paper. We also mention that square function estimates related to Lemma \ref{lem:lem3} appear in \cite{BM3}.
\begin{lem}\label{lem:lem4} Let $(f,g)$ be a pair of non-negative functions. Suppose that there exists some $0<p_0<\infty$ such that for every $w\in A_\infty$ we have $$ \int f^{p_0} w \lesssim \int g^{p_0} w. $$ Then for all $0<p<\infty$ and $w\in A_\infty$ we have $$ \int f^{p} w \lesssim \int g^{p} w. $$ \end{lem}
\begin{proof}[Proof of Lemma \ref{lem:lem3}] Let $w \in A_\infty$ be a bi-parameter weight. The first estimate in the statement follows from the one-parameter result \cite[Theorem 2.5]{Wi} and the fact that $w(x_1, \cdot) \in A_\infty(\mathbb{R}^{d_2})$ and $w(\cdot, x_2) \in A_\infty(\mathbb{R}^{d_1})$. Using this, we have that $$
\| f \|_{L^2(w)} \lesssim \| S^1_{\calD^1}f\|_{L^2(w)}
= \Big(\sum_{I^1 \in \calD^1} \| \Delta^1_{I^1}f\|_{L^2(w)}^2 \Big)^{\frac 12}. $$ For each $I^1$ we again use the one-parameter estimate to get $$
\| \Delta^1_{I^1}f\|_{L^2(w)}
\lesssim \|S^2_{\calD^2} \Delta^1_{I^1}f\|_{L^2(w)}
=\Big( \sum_{I^2 \in \calD^2}\|\Delta^2_{I^2} \Delta^1_{I^1}f\|_{L^2(w)}^2 \Big)^{\frac 12}. $$ Since $\Delta^2_{I^2} \Delta^1_{I^1}f= \Delta_{I^1 \times I^2} f$, inserting the last estimate into the previous one shows that $$
\| f \|_{L^2(w)} \lesssim \Big( \sum_{I^1 \times I^2 \in \calD^1 \times \calD^2}
\|\Delta_{I^1 \times I^2} f\|_{L^2(w)}^2 \Big)^{\frac 12}
= \| S_{\calD} f \|_{L^2(w)}. $$ Since this holds for every bi-parameter weight $w \in A_\infty$, Lemma \ref{lem:lem4} concludes the proof. We point out that with further extrapolation we could obtain vector-valued versions analogous to Proposition \ref{prop:vecvalmax}, see the end of Section \ref{app:app1}. \end{proof}
\begin{rem}\label{rem:rem1} We often use the lower square function estimate together with the additional observation that, for example, we have for all $k = (k_1, k_2) \in \{0,1,\ldots\}^2$ that $$
S_{\calD} f = \Big( \sum_{K = K^1 \times K^2 \in \calD} |\Delta_{K,k} f|^2 \Big)^{1/2}, \qquad \Delta_{K,k} = \Delta_{K^1,k_1}^1 \Delta_{K^2, k_2}^2. $$ This simply follows from disjointness: the rectangles $R$ appearing in $\Delta_{K,k} = \sum_{R^{(k)} = K} \Delta_R$ are pairwise disjoint, so that pointwise $|\Delta_{K,k} f|^2 = \sum_{R^{(k)} = K} |\Delta_R f|^2$, and summing over $K \in \calD$ regroups all of $\calD$. \end{rem}
For $k= (k_1, k_2)$ we define the following family of $n$-linear square functions. First, we set $$ A_1(f_1, \ldots, f_n) = A_{1,k}(f_1, \ldots, f_n)
= \Big( \sum_{K \in \calD} \langle | \Delta_{K,k} f_1 | \rangle_K ^2 \prod_{j=2}^n \langle |f_j| \rangle_K^2 1_K \Big)^{\frac{1}{2}}. $$ In addition, we understand this so that $A_{1,k}$ can also take any one of the symmetric forms, where each $\Delta_{K^i, k_i}^i$ appearing in $\Delta_{K,k} = \Delta_{K^1,k_1}^1 \Delta_{K^2, k_2}^2$ can alternatively be associated with any of the other functions $f_2, \ldots, f_n$. That is, $A_{1,k}$ can, for example, also take the form $$ A_{1,k}(f_1, \dots, f_n) =
\Big( \sum_{K \in \calD} \langle | \Delta^2_{K^2,k_2} f_1 | \rangle_K^2
\langle | \Delta^1_{K^1,k_1} f_2| \rangle_K^2 \prod_{j=3}^{n} \langle |f_j| \rangle_K^2 1_K \Big)^{\frac 12}. $$ For $k = (k_1, k_2, k_3)$ we define \begin{equation}\label{eq:eq11} \begin{split} &A_{2,k}(f_1, \ldots, f_n) \\ &= \Big( \sum_{K^2 \in \calD^2} \Big( \sum_{K^1 \in \calD^1}
\langle|\Delta^2_{K^2, k_1}f_1|\rangle_{K}\langle|\Delta^1_{K^1, k_2}f_2|\rangle_{K}
\langle|\Delta^1_{K^1, k_3}f_3|\rangle_{K} \prod_{j=4}^{n} \langle |f_j|\rangle_K1_{K} \Big)^2\Big)^{ \frac 12}, \end{split} \end{equation} where we again understand this as a family of square functions. First, the three appearing martingale blocks can again be associated with different functions. Second, we can interchange the summations, taking the $K^1$ summation out and the $K^2$ summation in, but then we have two martingale blocks with $K^2$ and one martingale block with $K^1$.
Finally, for $k = (k_1, k_2, k_3, k_4)$ we define $$
A_{3,k}(f_1, \ldots, f_n) = \sum_{K \in \calD} \langle | \Delta_{K,(k_1, k_2)} f_1| \rangle_K
\langle | \Delta_{K,(k_3, k_4)} f_2| \rangle_K \prod_{j=3}^n \langle |f_j| \rangle_K 1_K, $$ where this is a family with two martingale blocks in each parameter, which can be moved around. \begin{thm}\label{thm:thm3} If $1 < p_1, \ldots, p_n \le \infty$ and $\frac{1}{p} = \sum_{i=1}^n \frac{1}{p_i}> 0$ we have $$
\|A_{j,k}(f_1, \ldots, f_n)w \|_{L^p} \lesssim \prod_{i=1}^n \|f_i w_i\|_{L^{p_i}}, \quad j=1,2,3, $$ for all multilinear bi-parameter weights $\vec w \in A_{\vec p}$. \end{thm}
\begin{proof} The proofs of all of the cases have the same underlying idea based on an iterative use of duality and the lower square function estimate until all of the cancellation has been utilised. One can also realise that the result for $A_{3,k}$ follows using the above scheme just once if the result is first proved for $A_{1,k}$ and $A_{2,k}$.
We show the proof for $A_{2,k}$ with the explicit form \eqref{eq:eq11}. Fix some $\vec p = (p_1, \ldots, p_n)$ with $1 < p_i < \infty$ and $p > 1$. This is enough by extrapolation, Theorem \ref{thm:ext}. To estimate
$\|A_{2,k}(f_1, \ldots, f_n)w \|_{L^p}$ we take a sequence $(f_{n+1,K^2})_{K^2} \subset L^{p'}(\ell^2)$ with a norm $\| (f_{n+1,K^2})_{K^2} \|_{L^{p'}(\ell^2)} \le 1$ and look at \begin{equation}\label{eq:A1Dual}
\sum_{K}\langle|\Delta^2_{K^2, k_1}f_1|,1_K\rangle\langle|\Delta^1_{K^1, k_2}f_2|\rangle_{K}
\langle|\Delta^1_{K^1, k_3}f_3|\rangle_{K} \prod_{j=4}^{n} \langle |f_j|\rangle_K \langle f_{n+1,K^2}w \rangle_{K}. \end{equation} There holds that \begin{equation}\label{eq:RemAbs}
\langle|\Delta^2_{K^2, k_1}f_1|,1_K\rangle = \langle \Delta^2_{K^2, k_1}f_1 , \varphi_{K^2, f_1} \rangle
= \langle f_1 , \Delta^2_{K^2, k_1} \varphi_{K^2, f_1} \rangle, \qquad |\varphi_{K^2, f_1}| \le 1_K. \end{equation}
We now get that \eqref{eq:A1Dual} is less than $\| f_1 w_1\|_{L^{p_1}}$ multiplied by \begin{equation*}
\Big \| \sum_{K} \langle f_{n+1,K^2}w \rangle_{K} \langle|\Delta^1_{K^1, k_2}f_2|\rangle_{K}
\langle|\Delta^1_{K^1, k_3}f_3|\rangle_{K} \prod_{j=4}^{n} \langle |f_j|\rangle_K \Delta^2_{K^2, k_1} \varphi_{K^2, f_1} w_1^{-1}\Big \|_{L^{p_1'}}. \end{equation*}
We will now apply the lower square function estimate $\|gw_1^{-1}\|_{L^{p_1'}} \lesssim \|S_{\calD^2}^2(g) w_1^{-1} \|_{L^{p_1'}}$, Lemma \ref{lem:lem3}, with the weight $w_1^{-p_1'} \in A_{\infty}$ (see Lemma \ref{lem:lem1}). Here we use the block form of Remark \ref{rem:rem1}. Using also that $|\Delta^2_{K^2, k_1} \varphi_{K^2, f_1}| \lesssim 1_K$ we get that the last norm is dominated by \begin{equation*}
\Big \|\Big( \sum_{K^2} \Big( \sum_{K^1} \langle |f_{n+1,K^2}|w \rangle_{K} \langle|\Delta^1_{K^1, k_2}f_2|\rangle_{K}
\langle|\Delta^1_{K^1, k_3}f_3|\rangle_{K} \prod_{j=4}^{n} \langle |f_j|\rangle_K
1_K \Big)^2 \Big)^{\frac 12} w_1^{-1}\Big \|_{L^{p_1'}}. \end{equation*} We still have cancellation to use in the form of the other two martingale differences and will continue the process.
We repeat the argument from above -- this gives that the previous term is dominated by $\| f_2 w_2 \|_{L^{p_2}}$ multiplied by $$
\Big \|\Big( \sum_{K^1} \Big( \sum_{K^2} \langle |f_{n+1,K^2}|w \rangle_{K} \langle |f_{1,K^2}|w_1^{-1}\rangle_K
\langle|\Delta^1_{K^1, k_3}f_3|\rangle_{K} \prod_{j=4}^{n} \langle |f_j|\rangle_K
1_K \Big)^2 \Big)^{\frac 12} w_2^{-1}\Big \|_{L^{p_2'}} $$
where $\| (f_{1,K^2})_{K^2} \|_{L^{p_1}(\ell^2)} \le 1$. Running this argument one more time finally gives us that this is dominated by $\| f_3w_3\|_{L^{p_3}}$ multiplied by \begin{equation*} \begin{split}
\Big \| \Big(\sum_{K^1} & \Big( \sum_{K^2} \langle |f_{n+1,K^2}|w \rangle_{K} \langle |f_{1,K^2}|w_1^{-1}\rangle_K \langle |f_{2, K^1}|w_2^{-1}\rangle_K
\prod_{j=4}^{n} \langle |f_j|\rangle_K 1_K \Big)^2 \Big)^{\frac 12} w_3^{-1}\Big\|_{L^{p_3'}} \\
& \le \Big \| \Big(\sum_{K^1} \Big( \sum_{K^2} M_\calD( f_{n+1,K^2}w, f_{1,K^2}w_1^{-1}, f_{2, K^1}w_2^{-1}, f_4, \dots, f_n) \Big)^2 \Big)^{\frac 12} w_3^{-1}\Big\|_{L^{p_3'}}, \end{split} \end{equation*}
where $\| (f_{2,K^1})_{K^1} \|_{L^{p_2}(\ell^2)} \le 1$.
Using Lemma \ref{lem:lem7} three times (we dualised three times) shows that $$ (w^{-1}, w_1,w_2,w_4, \dots, w_n) \in A_{(p',p_1,p_2,p_4, \dots, p_n)}. $$ The maximal function satisfies the weighted $$ L^{p'}(\ell^\infty_{K^1}(\ell^2_{K^2})) \times L^{p_1}(\ell^\infty_{K^1}(\ell^2_{K^2})) \times L^{p_2}(\ell^2_{K^1}(\ell^\infty_{K^2})) \times L^{p_4} \times \dots \times L^{p_n} \to L^{p_3'}(\ell^2_{K^1}(\ell^1_{K^2})) $$ estimate. This gives that the last norm above is dominated by \begin{equation*}
\| ( f_{n+1,K^2}ww^{-1})_{K^2} \|_{L^{p'}(\ell^2)}
\| ( f_{1,K^2}w_1^{-1}w_1)_{K^2} \|_{L^{p_1}(\ell^2)}
\| ( f_{2, K^1}w_2^{-1}w_2)_{K^1} \|_{L^{p_2}(\ell^2)}
\prod_{i=4}^n \| f_iw_i \|_{L^{p_i}}, \end{equation*} where the first three norms are $\le 1$. This concludes the proof for $A_{2,k}$ and the rest of the cases are similar. \end{proof} We also record some linear estimates. We will need these when we deal with the most complicated model operators -- the partial paraproducts. \begin{prop}\label{prop:prop3} For $u \in A_{\infty}$ and $p, s \in (1, \infty)$ we have $$
\Big\| \Big[ \sum_m \Big( \sum_{K \in \calD} \langle |\Delta_{K,k} f_m| \rangle_K^2 \frac{1_K}{\langle u \rangle_K^2} \Big)^{\frac{s}{2}} \Big]^{\frac{1}{s}} u^{\frac{1}{p}}
\Big\|_{L^p}
\lesssim \Big\| \Big( \sum_m |f_m|^s \Big)^{\frac{1}{s}} u^{-\frac{1}{p'}} \Big\|_{L^p}. $$ \end{prop} \begin{proof} By \eqref{eq:eq8} we have for all $n \ge 2$ that $$ 1 \le \langle u \rangle_K \Big\langle u^{-\frac{1}{n-1}} \Big\rangle_K^{n-1}. $$ Simply using this we reduce to \begin{align*}
\Big\| \Big[& \sum_m \Big( \sum_{K \in \calD} \langle |\Delta_{K,k} f_m| \rangle_K^2 \Big\langle u^{-\frac{1}{n-1}} \Big\rangle_K^{2(n-1)}1_K\Big)^{\frac{s}{2}} \Big]^{\frac{1}{s}} u^{\frac{1}{p}}
\Big\|_{L^p} \\
&= \Big\| \Big[ \sum_m A_{1,k}\big(f_m, u^{-\frac{1}{n-1}}, \ldots, u^{-\frac{1}{n-1}}\big)^s \Big]^{\frac{1}{s}} u^{\frac{1}{p}}
\Big\|_{L^p}, \end{align*} where $A_{1,k}$ is a suitable square function as in Theorem \ref{thm:thm3}.
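We note in passing that the inequality $1 \le \langle u \rangle_K \big\langle u^{-\frac{1}{n-1}} \big\rangle_K^{n-1}$ used above is itself a direct consequence of H\"older's inequality: $$ 1 = \big\langle u^{\frac 1n} u^{-\frac 1n} \big\rangle_K \le \langle u \rangle_K^{\frac 1n} \Big\langle u^{-\frac{1}{n-1}} \Big\rangle_K^{\frac{n-1}{n}}, $$ and it remains to raise this to the power $n$.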
We then fix $n$ large enough so that $u \in A_{n}$, and notice that this implies that \begin{equation}\label{eq:InftynLin}
\big(u^{-\frac{1}{p'}}, u^{\frac{1}{n-1}}, \ldots, u^{\frac{1}{n-1}}\big) \in A_{(p, \infty, \ldots, \infty)}.
\end{equation} To see this, notice that the target weight associated with this tuple is $u^{-\frac{1}{p'}} u = u^{\frac{1}{p}}$ and that the target exponent is $p$, and so $$ \big[\big(u^{-\frac{1}{p'}}, u^{\frac{1}{n-1}}, \ldots, u^{\frac{1}{n-1}}\big)\big]_{A_{(p, \infty, \ldots, \infty)}} = \sup_R \langle u \rangle_R^{1/p} \langle u \rangle_R^{1/p'} \big\langle u^{-\frac{1}{n-1}} \big\rangle_R^{n-1} = [u]_{A_n} < \infty. $$
It remains to use the weighted (with the weight \eqref{eq:InftynLin}) vector-valued estimate $L^p(\ell^s) \times L^{\infty}\times \cdots \times L^{\infty} \to L^p(\ell^s)$ of $A_{1,k}$, which follows by Theorem \ref{thm:thm3} and \eqref{extrapol:vv*}. \end{proof} \begin{rem} It is possible to prove the above proposition also directly with the duality and lower square function strategy that was used in the proof of Theorem \ref{thm:thm3}. \end{rem}
\section{Dyadic model operators}\label{sec:dmo} In this section we are working with a fixed set of dyadic rectangles $\calD = \calD^1 \times \calD^2$. All the model operators depend on this lattice, but it is not emphasised in the notation. \subsection{Shifts} Let $k=(k_1, \dots, k_{n+1})$, where $k_j = (k_j^1, k_j^2) \in \{0,1,\ldots\}^2$. An $n$-linear bi-parameter shift $S_k$ takes the form \begin{equation*}\label{eq:S2par} \langle S_k(f_1, \ldots, f_n), f_{n+1}\rangle = \sum_{K} \sum_{\substack{R_1, \ldots, R_{n+1} \\ R_j^{(k_j)} = K }} a_{K, (R_j)} \prod_{j=1}^{n+1} \langle f_j, \wt h_{R_j} \rangle. \end{equation*} Here $K, R_1, \ldots, R_{n+1} \in \calD = \calD^1 \times \calD^2$, $R_j = I_j^1 \times I_j^2$, $R_j^{(k_j)} := (I_j^1)^{(k_j^1)} \times (I_j^2)^{(k_j^2)}$ and $\wt h_{R_j} = \wt h_{I_j^1} \otimes \wt h_{I_j^2}$. Here we assume that for $m \in \{1,2\}$ there exist two indices $j^m_0,j_1^m \in \{1, \ldots, n+1\}$, $j^m_0 \not =j^m_1$, so that $\wt h_{I_{j^m_0}^m}=h_{I_{j^m_0}^m}$, $\wt h_{I_{j^m_1}^m}=h_{I_{j^m_1}^m}$ and for the remaining indices $j \not \in \{j^m_0, j^m_1\}$ we have $\wt h_{I_j^m} \in \{h_{I_j^m}^0, h_{I_j^m}\}$. Moreover, $a_{K,(R_j)} = a_{K, R_1, \ldots ,R_{n+1}}$ is a scalar satisfying the normalization \begin{equation}\label{eq:Snorm2par}
|a_{K,(R_j)}| \le \frac{\prod_{j=1}^{n+1} |R_j|^{1/2}}{|K|^{n}}. \end{equation}
\begin{thm} Suppose $S_k$ is an $n$-linear bi-parameter shift, $1 < p_1, \ldots, p_n \le \infty$ and $\frac{1}{p} = \sum_{i=1}^n \frac{1}{p_i}> 0$. Then we have $$
\|S_k(f_1, \ldots, f_n)w \|_{L^p} \lesssim \prod_{i=1}^n \|f_i w_i\|_{L^{p_i}} $$ for all multilinear bi-parameter weights $\vec w \in A_{\vec p}$. The implicit constant does not depend on $k$. \end{thm}
\begin{proof} We use duality to always reduce to one of the operators of type $A_{3}$ in Theorem \ref{thm:thm3}. Performing the proof like this has the advantage that the form of the shift really plays no role -- it just affects which type of $A_3$ operator we get. For example, we consider the explicit case $$ S_k(f_1, \dots, f_n) = \sum_{K} A_K(f_1, \ldots, f_n), $$ where $$ A_K(f_1, \ldots, f_n) = \sum_{\substack{R_1, \ldots, R_{n+1} \\ R_j^{(k_j)} = K }} a_{K, (R_j)} \langle f_1, h_{R_1} \rangle \prod_{j=2}^{n} \langle f_j, \wt h_{R_j} \rangle h_{R_{n+1}}. $$
Fix some $\vec p = (p_1, \ldots, p_n)$ with $1 < p_i < \infty$ and $p > 1$, which is enough by extrapolation. We will dualise using $f_{n+1}$ with $\|f_{n+1}w^{-1}\|_{L^{p'}} \le 1$. The normalisation of the shift coefficients gives the direct estimate \begin{equation*} \begin{split}
\sum_{K}& |\langle A_K(f_1, \ldots, f_n), f_{n+1} \rangle | \\ & \le \sum_K \sum_{\substack{R_1, \ldots, R_{n+1} \\ R_j^{(k_j)} = K }}
\frac{\prod_{j=1}^{n+1} |R_j|^{1/2}}{|K|^{n}}
\Big|\langle \Delta_{K,k_1} f_1, h_{R_1} \rangle \prod_{j=2}^{n} \langle f_j, \wt h_{R_j} \rangle
\langle \Delta_{K,k_{n+1}}f_{n+1},h_{R_{n+1}}\rangle\Big| \\ & \le \sum_K \sum_{\substack{R_1, \ldots, R_{n+1} \\ R_j^{(k_j)} = K }}
\frac{1}{|K|^{n}}
\langle |\Delta_{K,k_1} f_1|, 1_{R_1} \rangle \prod_{j=2}^{n} \langle |f_j|, 1_{R_j} \rangle
\langle |\Delta_{K,k_{n+1}}f_{n+1}|,1_{R_{n+1}}\rangle \\
&\le \sum_K \langle | \Delta_{K,k_1} f_1 | \rangle_K
\prod_{j=2}^{n} \langle |f_j| \rangle_K\langle |\Delta_{K,k_{n+1}} f_{n+1}| \rangle_K |K| \\
& = \Big\| \sum_K \langle | \Delta_{K,k_1} f_1 | \rangle_K
\prod_{j=2}^{n} \langle |f_j| \rangle_K\langle |\Delta_{K,k_{n+1}} f_{n+1}| \rangle_K 1_K \Big\|_{L^1}, \end{split} \end{equation*} where in the first step we used \eqref{eq:HaarMart} to pass from Haar functions to martingale differences. Notice that \begin{equation}\label{eq:eq12} (w_1, \dots, w_n, w^{-1})\in A_{(p_1,\dots, p_n, p')}, \qquad w=\prod_{i=1}^n w_i. \end{equation} The target weight associated with this data is $ww^{-1} = 1$ and the target exponent is $1/p + 1/p' = 1$. By using Theorem \ref{thm:thm3} with a suitable $A_{3}(f_1, \ldots, f_{n+1})$ and the above weight we can directly dominate this by $$
\Big[\prod_{i=1}^n \|f_i w_i\|_{L^{p_i}} \Big]\cdot \|f_{n+1} w^{-1}\|_{L^{p'}} \le \prod_{i=1}^n \|f_i w_i\|_{L^{p_i}}. $$ We are done. \end{proof}
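\begin{rem} The membership \eqref{eq:eq12} can be verified directly. The target weight associated with the tuple $(w_1, \dots, w_n, w^{-1})$ is $w w^{-1} = 1$, and since $(p')' = p$, the last component contributes the factor $\langle (w^{-1})^{-p} \rangle_R^{1/p} = \langle w^p \rangle_R^{1/p}$. Therefore, $$ [(w_1, \dots, w_n, w^{-1})]_{A_{(p_1, \dots, p_n, p')}} = \sup_R \, \langle w^p \rangle_R^{\frac 1p} \prod_{i=1}^n \langle w_i^{-p_i'} \rangle_R^{\frac{1}{p_i'}} = [\vec{w}]_{A_{\vec p}}. $$ \end{rem}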
\subsection{Partial paraproducts} Let $k=(k_1, \dots, k_{n+1})$, where $k_j \in \{0,1,\ldots\}$. An $n$-linear bi-parameter partial paraproduct $(S\pi)_k$ with the paraproduct component on $\mathbb{R}^{d_2}$ takes the form \begin{equation}\label{eq:Spi} \langle (S\pi)_k(f_1, \ldots, f_n), f_{n+1} \rangle = \sum_{K = K^1 \times K^2} \sum_{\substack{ I^1_1, \ldots, I_{n+1}^1 \\ (I_j^1)^{(k_j)} = K^1}} a_{K, (I_j^1)} \prod_{j=1}^{n+1} \langle f_j, \wt h_{I_j^1} \otimes u_{j, K^2} \rangle, \end{equation} where the functions $\wt h_{I_j^1}$ and $u_{j, K^2}$ satisfy the following. There are $j_0,j_1 \in \{1, \ldots, n+1\}$, $j_0 \not =j_1$, so that $\wt h_{I_{j_0}^1}=h_{I_{j_0}^1}$, $\wt h_{I_{j_1}^1}=h_{I_{j_1}^1}$ and for the remaining indices $j \not \in \{j_0, j_1\}$ we have $\wt h_{I_j^1} \in \{h_{I_j^1}^0, h_{I_j^1}\}$. There is $j_2 \in \{1, \ldots, n+1\}$ so that $u_{j_2, K^2} = h_{K^2}$ and for the remaining indices $j \ne j_2$ we have
$u_{j, K^2} = \frac{1_{K^2}}{|K^2|}$. Moreover, the coefficients are assumed to satisfy \begin{equation}\label{eq:PPNorma}
\| (a_{K, (I_j^1)})_{K^2} \|_{\BMO} = \sup_{K^2_0 \in \calD^2} \Big( \frac{1}{|K^2_0|} \sum_{K^2 \subset K^2_0} |a_{K, (I_j^1)}|^2 \Big)^{1/2}
\le \frac{\prod_{j=1}^{n+1} |I_j^1|^{\frac 12}}{|K^1|^{n}}. \end{equation} Of course, $(\pi S)_k$ is defined symmetrically.
The following $H^1$-$\BMO$ duality type estimate is well-known and elementary: \begin{equation}\label{eq:H1BMO}
\sum_{K^2} |a_{K^2}| |b_{K^2}| \lesssim \| (a_{K^2} ) \|_{\BMO} \Big\| \Big( \sum_{K^2} |b_{K^2}|^2 \frac{1_{K^2}}{|K^2|} \Big)^{1/2} \Big \|_{L^1}. \end{equation} Such estimates also have natural multi-parameter analogues, and the proofs are the same in all parameters. See e.g. \cite[Equation (4.1)]{MO}.
The proof of our result for the partial paraproducts is significantly more difficult than the proofs for the other model operators. It is also more inefficient in that it produces an exponential -- although, crucially, with an arbitrarily small exponent -- dependence on the complexity. This has some significance for the required kernel regularity of CZOs, but a standard modulus of continuity of the type $t \mapsto t^{\alpha}$ will still suffice. \begin{thm}\label{thm:thm4} Suppose $(S\pi)_k$ is an $n$-linear partial paraproduct, $1 < p_1, \ldots, p_n \le \infty$ and $\frac{1}{p} = \sum_{i=1}^n \frac{1}{p_i}> 0$. Then, for every $0<\beta \le 1$ we have $$
\|(S\pi)_k(f_1, \ldots, f_n)w \|_{L^p} \lesssim_\beta 2^{\max_j k_j \beta}\prod_{i=1}^n \|f_i w_i\|_{L^{p_i}} $$ for all multilinear bi-parameter weights $\vec w \in A_{\vec p}$. \end{thm} \begin{proof} Recall that $(S\pi)_k$ is of the form \eqref{eq:Spi}. Recall also the indices $j_0$ and $j_1$, which say that $\wt h_{I^1_j}=h_{I^1_j}$ at least for $j\in \{j_0, j_1\}$, and the index $j_2$, which specifies the place of $h_{K^2}$ in the second parameter. It makes no difference for the argument what the indices $j_0$ and $j_1$ are, so we assume that $j_0=1$ and $j_1=2$. It makes a small difference whether $j_2 \in \{j_0,j_1\}$ or $j_2 \not \in \{j_0,j_1\}$, so we do not specify $j_2$ yet. To make the following formulae shorter we write $\wt h_{I^1_j}$ for every $j$ but keep in mind that these are cancellative at least for $j \in \{1,2\}$. We define $$ A_{K^2}(g_1, \dots, g_{n+1}) = \prod_{j=1}^{n+1} \langle g_j, u_{j,K^2} \rangle $$ and write $(S\pi)_k$ in the form $$ \langle (S\pi)_k(f_1, \ldots, f_n), f_{n+1} \rangle = \sum_{K = K^1 \times K^2} \sum_{\substack{ I^1_1, \ldots, I_{n+1}^1 \\ (I_j^1)^{(k_j)} = K^1}} a_{K, (I_j^1)} A_{K^2}(\langle f_{1}, \wt h_{I_{1}^1}\rangle_1, \dots, \langle f_{n+1}, \wt h_{I_{n+1}^1}\rangle_1 ). $$
Fix some $\vec p = (p_1, \ldots, p_n)$ with $1 < p_i < \infty$ and $p > 1$, which is enough by extrapolation. We will dualise using $f_{n+1}$ with $\|f_{n+1}w^{-1}\|_{L^{p'}} \le 1$. We may assume $f_j \in L^{\infty}_c$. The $H^1$-$\BMO$ duality \eqref{eq:H1BMO} gives that \begin{equation}\label{eq:eq13} \begin{split}
|\langle (S\pi)_k(f_1, \ldots, f_n), f_{n+1} \rangle|
&\lesssim \sum_{K^1} \sum_{\substack{ I^1_1, \ldots, I_{n+1}^1 \\ (I_j^1)^{(k_j)} = K^1}}\Bigg[ \frac{\prod_{j=1}^{n+1}|I^1_j|^{\frac 12}}{|K^1|^n} \\
&\int_{\mathbb{R}^{d_2}} \Big( \sum_{K^2} |A_{K^2}(\langle f_{1}, \wt h_{I_{1}^1}\rangle_1, \dots, \langle f_{n+1}, \wt h_{I_{n+1}^1}\rangle_1 )|^2
\frac{1_{K^2}}{|K^2|} \Big)^{\frac{1}{2}}\Bigg]. \end{split} \end{equation}
Suppose $j \in \{3, \dots, n+1\}$ is such that $\wt h_{I^1_j}=h_{I^1_j}^0$ and $k_j>0$, that is, the Haar function is non-cancellative and the complexity is non-zero. We expand $$
|I^1_j|^{-\frac{1}{2}} \langle f_j, h^0_{I^1_j} \rangle_1 =\langle f_j\rangle_{I_j^1,1} =\langle f_j \rangle_{K^1,1}+\sum_{i_j=1}^{k_j} \langle \Delta^1_{(I^1_j)^{(i_j)}} f_j\rangle_{(I_j^1)^{(i_j-1)},1}. $$ For convenience, we further write that $$ \langle \Delta^1_{(I^1_j)^{(i_j)}} f_j\rangle_{(I_j^1)^{(i_j-1)},1} = \langle h_{(I^1_j)^{(i_j)}} \rangle_{(I^1_j)^{(i_j-1)}} \langle f_j, h_{(I^1_j)^{(i_j)}} \rangle_1, $$ where we are suppressing the summation over the $2^{d_1}-1$ different Haar functions. We perform these expansions inside the operators $A_{K^2}$, and take the sums out of the $\ell^2_{K^2}$ norm. This gives that the right hand side of \eqref{eq:eq13} is less than a sum of at most $\prod_{j=3}^n(1+k_j)$ terms of the form \begin{equation*} \begin{split}
\sum_{K^1} \sum_{\substack{ I^1_1, \ldots, I_{n+1}^1 \\ (I_j^1)^{(k_j)} = K^1}}& \Bigg[ \frac{\prod_{j=1}^{n+1} |I^1_j| |(I^1_j)^{(i_j)}|^{-\frac 12}}{|K^1|^n} \\
&\int_{\mathbb{R}^{d_2}} \Big( \sum_{K^2} |A_{K^2}(\langle f_{1}, \wt h_{(I_{1}^1)^{(i_1)}}\rangle_1, \dots, \langle f_{n+1}, \wt h_{(I_{n+1}^1)^{(i_{n+1})}}\rangle_1 )|^2
\frac{1_{K^2}}{|K^2|} \Big)^{\frac{1}{2}} \Bigg]. \end{split} \end{equation*} Here we have the following properties. If $j$ is an index such that we did not perform the expansion related to $j$, then $i_j=0$. Thus, at least $i_1=i_2=0$. We also recall that $\wt h_{(I_{j}^1)^{(i_j)}}=h_{(I_{j}^1)^{(i_j)}}$ for $j=1,2$. If $i_j<k_j$, then $\wt h_{(I_{j}^1)^{(i_j)}}=h_{(I_{j}^1)^{(i_j)}}$. If $i_j=k_j$, then $\wt h_{(I_{j}^1)^{(i_j)}} \in \{h_{K^1}, h_{K^1}^0\}$. We can further rewrite this as \begin{equation}\label{eq:eq14}
\sum_{K^1} \sum_{\substack{ L^1_1, \ldots, L_{n+1}^1 \\ (L_j^1)^{(l_j)} = K^1}} \frac{\prod_{j=1}^{n+1} |L^1_j|^{\frac 12} }{|K^1|^n}
\int_{\mathbb{R}^{d_2}} \Big( \sum_{K^2} |A_{K^2}(\langle f_{1}, \wt h_{L_1}\rangle_1, \dots, \langle f_{n+1}, \wt h_{L_{n+1}}\rangle_1 )|^2
\frac{1_{K^2}}{|K^2|} \Big)^{\frac{1}{2}}. \end{equation} This is otherwise analogous to the right hand side of \eqref{eq:eq13} except for the key difference that if a non-cancellative Haar function appears, then the related complexity is zero.
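For clarity, the expansion performed above is the standard martingale telescoping: since $(I_j^1)^{(i_j-1)}$ is a dyadic child of $(I_j^1)^{(i_j)}$, there holds that
$$
\langle \Delta^1_{(I^1_j)^{(i_j)}} f_j\rangle_{(I_j^1)^{(i_j-1)},1} = \langle f_j \rangle_{(I_j^1)^{(i_j-1)},1}-\langle f_j \rangle_{(I_j^1)^{(i_j)},1},
$$
and summing over $i_j=1, \dots, k_j$ telescopes to $\langle f_j \rangle_{I_j^1,1}-\langle f_j \rangle_{K^1,1}$, using that $(I_j^1)^{(0)}=I^1_j$ and $(I_j^1)^{(k_j)}=K^1$.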
We turn to estimate \eqref{eq:eq14}. We show that \begin{equation}\label{eq:eq15}
\eqref{eq:eq14} \lesssim_\beta 2^{\max_j k_j \frac \beta 2 }\Big[\prod_{j=1}^n \|f_j w_j\|_{L^{p_j}}\Big]\| f_{n+1} w^{-1} \|_{L^{p'}}. \end{equation}
Recalling that $\| f_{n+1} w^{-1} \|_{L^{p'}} \le 1$ this implies that the left hand side of \eqref{eq:eq13} satisfies $$
LHS\eqref{eq:eq13} \lesssim_ \beta (1+\max_j k_j)^{n-1} 2^{\max_j k_j \frac \beta 2 } \prod_{j=1}^n \|f_j w_j\|_{L^{p_j}}
\lesssim_\beta 2^{\max_j k_j \beta } \prod_{j=1}^n \|f_j w_j\|_{L^{p_j}}, $$ which proves the theorem.
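To spell out the last absorption: for any fixed $\beta>0$ there holds that $(1+m)^{n-1} \lesssim_{\beta} 2^{\frac{m\beta}{2}}$ for all $m \ge 0$, and applying this with $m=\max_j k_j$ gives
$$
(1+\max_j k_j)^{n-1} 2^{\max_j k_j \frac \beta 2} \lesssim_\beta 2^{\max_j k_j \beta}.
$$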
Let $(v_1, \dots, v_{n+1}) \in A_{(2, \dots, 2)}$ and $v=\prod_{j=1}^{n+1} v_j$. We will prove the $(n+1)$-linear estimate \begin{equation}\label{eq:eq16} \begin{split}
\Bigg\| & \sum_{K^1} \sum_{\substack{ L^1_1, \ldots, L_{n+1}^1 \\ (L_j^1)^{(l_j)} = K^1}} \Bigg[
\frac{\prod_{j=1}^{n+1} |L^1_j|^{\frac 12} }{|K^1|^n} \frac{1_{K^1}}{|K^1|} \\
&\Big( \sum_{K^2} |A_{K^2}(\langle f_{1}, \wt h_{L_1}\rangle_1, \dots, \langle f_{n+1}, \wt h_{L_{n+1}}\rangle_1 )|^2
\frac{1_{K^2}}{|K^2|} \Big)^{\frac{1}{2}}\Bigg]
v\Bigg\|_{L^{\frac{2}{n+1}}}
\lesssim 2^{\max_j k_j \frac \beta 2 } \prod_{j=1}^{n+1} \| f_j v_j \|_{L^2}. \end{split} \end{equation} Extrapolation, Theorem \ref{thm:ext}, then gives that \begin{equation*} \begin{split}
\Bigg\| \sum_{K^1} \sum_{\substack{ L^1_1, \ldots, L_{n+1}^1 \\ (L_j^1)^{(l_j)} = K^1}}
\frac{\prod_{j=1}^{n+1} |L^1_j|^{\frac 12} }{|K^1|^n}
\frac{1_{K^1}}{|K^1|}
\Big( \sum_{K^2} |A_{K^2}(\langle f_{1}, \wt h_{L_1}\rangle_1, \dots,& \langle f_{n+1}, \wt h_{L_{n+1}}\rangle_1 )|^2
\frac{1_{K^2}}{|K^2|} \Big)^{\frac{1}{2}}
v\Bigg\|_{L^{q}} \\
& \lesssim 2^{\max_j k_j \frac \beta 2 }\prod_{j=1}^{n+1} \| f_j v_j \|_{L^{q_j}} \end{split} \end{equation*} for all $q_1, \dots, q_{n+1} \in (1,\infty]$ such that $\frac{1}{q}=\sum_{j=1}^{n+1} \frac{1}{q_j}>0$ and for all $(v_1, \dots, v_{n+1}) \in A_{(q_1, \dots, q_{n+1})}$. Applying this with the exponent tuple $(p_1, \dots, p_n, p')$ and the weight tuple $(w_1, \dots, w_n,w^{-1}) \in A_{(p_1, \dots, p_n,p')}$ gives \eqref{eq:eq15}.
It remains to prove \eqref{eq:eq16}. We denote $\sigma_j=v_j^{-2}$. The $A_{(2, \dots, 2)}$ condition gives that $$ \langle v^{\frac{2}{n+1}} \rangle_K^{n+1} \prod_{j=1}^{n+1} \langle \sigma_j \rangle_K \lesssim 1. $$ Using this we have $$
|A_{K^2}(\langle f_{1}, \wt h_{L^1_1}\rangle_1, \dots, \langle f_{n+1}, \wt h_{L^1_{n+1}}\rangle_1 )| \lesssim \frac{1}{\langle v^{\frac{2}{n+1}} \rangle_K^{n+1}}
\Bigg|A_{K^2}\Bigg(\frac{\langle f_{1}, \wt h_{L^1_1}\rangle_1}{\langle \sigma_1\rangle_K}, \dots,
\frac{\langle f_{n+1}, \wt h_{L^1_{n+1}}\rangle_1}{\langle \sigma_{n+1} \rangle_K} \Bigg)\Bigg|. $$
For the moment we abbreviate the last $|A_{K^2}( \cdots)|$ as $c_{K,(L^1_j)}$. There holds that \begin{equation*} \begin{split} \frac{1}{\langle v^{\frac{2}{n+1}} \rangle_K^{n+1}}c_{K,(L^1_j)} &= \Bigg[ \frac{1}{\langle v^{\frac{2}{n+1}} \rangle_K} \Big\langle c_{K,(L^1_j)}^{\frac{1}{n+1}}1_K v^{-\frac{2}{n+1}}v^{\frac{2}{n+1}} \Big\rangle_K \Bigg ]^{n+1} \\ &\le \Big(M_\calD^{v^{\frac{2}{n+1}}}\Big(c_{K,(L^1_j)}^{\frac{1}{n+1}}1_K v^{-\frac{2}{n+1}}\Big)(x)\Big)^{n+1} \end{split} \end{equation*} for all $x \in K$.
We substitute this into the left hand side of \eqref{eq:eq16}. This gives that the term there is dominated by \begin{equation*}
\Bigg\| \sum_{K^1} \sum_{\substack{ L^1_1, \ldots, L_{n+1}^1 \\ (L_j^1)^{(l_j)} = K^1}}
\frac{\prod_{j=1}^{n+1} |L^1_j|^{\frac 12} }{|K^1|^{n+1}} \Big( \sum_{K^2} M_\calD^{v^{\frac{2}{n+1}}}\Big(c_{K,(L^1_j)}^{\frac{1}{n+1}}1_K v^{-\frac{2}{n+1}}\Big)^{2(n+1)}
\frac{1}{|K^2|} \Big)^{\frac{1}{2}}
v\Bigg\|_{L^{\frac{2}{n+1}}}. \end{equation*} We use the $L^2(\ell_{K^1,(L^1_j)}^{n+1}(\ell_{K^2}^{2(n+1)}))$ boundedness of the maximal function $M_\calD^{v^{\frac{2}{n+1}}}$, see Proposition \ref{prop:vecvalmax}. This gives that the last norm is dominated by \begin{equation}\label{eq:eq17}
\Bigg\| \sum_{K^1} \sum_{\substack{ L^1_1, \ldots, L_{n+1}^1 \\ (L_j^1)^{(l_j)} = K^1}}
\frac{\prod_{j=1}^{n+1} |L^1_j|^{\frac 12} }{|K^1|^{n+1}} 1_{K^1} \Big( \sum_{K^2} c_{K,(L^1_j)}^{2}
\frac{1_{K^2}}{|K^2|} \Big)^{\frac{1}{2}}
v^{-1}\Bigg\|_{L^{\frac{2}{n+1}}}. \end{equation}
Now, we recall what the numbers $c_{K,(L^1_j)}$ are. At this point it becomes relevant which of the Haar functions $\wt h_{L^1_j}$ are cancellative and what the form of the operators $A_{K^2}$ is. We assume that $\wt h_{L^1_j}=h_{L^1_j}$ for $j=1, \dots, n$ and $\wt h_{L^1_{n+1}}=h_{L^1_{n+1}}^0=h_{K^1}^0$ (recall that $L^1_{n+1}=K^1$), which is a good representative of the general case. First, we assume that the index $j_2$, which specifies the place of $h_{K^2}$ in $A_{K^2}$, satisfies $j_2 \in \{1, \dots, n\}$. The point is that then $\wt h_{L^1_{j_2}}=h_{L^1_{j_2}}$. For convenience of notation we assume that $j_2=1$. With these assumptions there holds that \begin{equation}\label{eq:eq20}
c_{K,(L^1_j)} =\Bigg|\frac{\langle f_{1}, h_{L^1_1} \otimes h_{K^2}\rangle}{\langle \sigma_1\rangle_K}
\prod_{j=2}^n\frac{\Big\langle f_{j}, h_{L^1_j} \otimes \frac{1_{K^2}}{|K^2|}\Big\rangle}{\langle \sigma_j\rangle_K}
\cdot \frac{\Big\langle f_{n+1}, h^0_{K^1}\otimes \frac{1_{K^2}}{|K^2|}\Big\rangle}{\langle \sigma_{n+1} \rangle_K} \Bigg|. \end{equation}
For $j=2, \dots,n$ we estimate that \begin{equation}\label{eq:eq21} \begin{split}
\frac{\Big|\Big\langle f_{j}, h_{L^1_j} \otimes \frac{1_{K^2}}{|K^2|}\Big\rangle\Big|}{\langle \sigma_j\rangle_K}
&= \frac{\Big| \Big\langle \langle f_{j}, h_{L^1_j} \rangle_1 \langle \sigma_j \rangle_{K^1,1}^{-1}\langle \sigma_j \rangle_{K^1,1},
\frac{1_{K^2}}{|K^2|}\Big\rangle\Big|}{\langle \langle \sigma_j\rangle_{K^1,1}\rangle_{K^2}}\\ &\le M^{\langle \sigma_j\rangle_{K^1,1}}_{\calD^2}(\langle f_{j}, h_{L^1_j} \rangle_1 \langle \sigma_j \rangle_{K^1,1}^{-1})(x_2) \end{split} \end{equation} for all $x_2 \in K^2$. Also, there holds that $$
\frac{\Big|\Big\langle f_{n+1}, h^0_{K^1}\otimes \frac{1_{K^2}}{|K^2|}\Big\rangle\Big|}{\langle \sigma_{n+1} \rangle_K}
\le |K^1|^{\frac 12} M_\calD^{\sigma_{n+1}}(f_{n+1}\sigma_{n+1}^{-1})(x) $$ for all $x \in K$. These give (recall that $L^1_{n+1}=K^1$) that \begin{equation*} \begin{split} &\sum_{\substack{ L^1_1, \ldots, L_{n+1}^1 \\ (L_j^1)^{(l_j)} = K^1}}
\frac{\prod_{j=1}^{n+1} |L^1_j|^{\frac 12} }{|K^1|^{n+1}} 1_{K^1}\Big( \sum_{K^2} c_{K,(L^1_j)}^{2}
\frac{1_{K^2}}{|K^2|} \Big)^{\frac{1}{2}} \le \prod_{j=1}^n F_{j,K^1} \cdot M_\calD^{\sigma_{n+1}}(f_{n+1}\sigma_{n+1}^{-1}), \end{split} \end{equation*} where \begin{equation}\label{eq:eq24}
F_{1,K^1}= 1_{K^1}\sum_{(L_1^1)^{(l_1)}=K^1} \frac{|L^1_1|^{\frac 12}}{|K^1|} \Big( \sum_{K^2}\frac{|\langle f_{1}, h_{L^1_1} \otimes h_{K^2}\rangle|^2}{\langle \sigma_1\rangle_K^2}\frac{1_{K^2}}{|K^2|} \Big)^{\frac{1}{2}} \end{equation} and \begin{equation}\label{eq:eq25} F_{j,K^1}
=1_{K^1}\sum_{(L_j^1)^{(l_j)}=K^1} \frac{|L^1_j|^{\frac 12}}{|K^1|} M^{\langle \sigma_j\rangle_{K^1,1}}_{\calD^2}(\langle f_{j}, h_{L^1_j} \rangle_1 \langle \sigma_j \rangle_{K^1,1}^{-1}) \end{equation} for $j=2, \dots, n$.
We will now continue from \eqref{eq:eq17} using the above pointwise estimates. Notice that \begin{equation*} \begin{split} \sum_{K^1}\prod_{j=1}^n F_{j,K^1} \le \prod_{j=1}^2 \Big(\sum_{K^1} (F_{j,K^1} )^2 \Big)^{1/2}\prod_{j=3}^n \sup_{K^1}F_{j,K^1} \le \prod_{j=1}^n \Big(\sum_{K^1} (F_{j,K^1} )^2 \Big)^{1/2}. \end{split} \end{equation*} Since $v^{-1}=\prod_{j=1}^{n+1}v_j^{-1}$, we have that \begin{equation*} \eqref{eq:eq17}
\lesssim \prod_{j=1}^n \Big \| \Big( \sum_{K^1} F_{j,K^1}^2 \Big)^{\frac 12} v_{j}^{-1} \Big \|_{L^2}
\big\|M_\calD^{\sigma_{n+1}}(f_{n+1}\sigma_{n+1}^{-1})v_{n+1}^{-1} \big \|_{L^2}. \end{equation*} Since $\sigma_j=v_j^{-2}$ there holds by Proposition \ref{prop:prop1} that $$
\|M_\calD^{\sigma_{n+1}}(f_{n+1}\sigma_{n+1}^{-1})v_{n+1}^{-1} \|_{L^2}
=\|M_\calD^{\sigma_{n+1}}(f_{n+1}\sigma_{n+1}^{-1}) \|_{L^2(\sigma_{n+1})}
\lesssim \| f_{n+1} v_{n+1} \|_{L^2}. $$ It remains to estimate the norms for $j=1, \dots, n$.
We begin with $j=1$. If $l_1=0$, then we directly have that $$ \Big(\sum_{K^1} F_{1,K^1}^2 \Big)^{\frac 12}
=\Big( \sum_{K}\frac{|\langle f_{1}, h_K\rangle|^2}{\langle \sigma_1\rangle_K^2}\frac{1_{K}}{|K|} \Big)^{\frac{1}{2}}. $$
Since $|\langle f_{1}, h_K\rangle| |K|^{-\frac 12} \le \langle | \Delta_K f_1 | \rangle_K$, we may use Proposition \ref{prop:prop3} to have that $$
\Big \| \Big( \sum_{K^1} F_{1,K^1}^2 \Big)^{\frac 12} v_{1}^{-1} \Big \|_{L^2}
\lesssim \| f_1 \sigma_1^{-\frac 12} \|_{L^2}=\| f_1 v_1 \|_{L^2}. $$
Suppose then $l_1>0$. There holds that $$
\Big \| \Big( \sum_{K^1} F_{1,K^1}^2 \Big)^{\frac 12} v_{1}^{-1} \Big \|_{L^2}
= \Big( \sum_{K^1} \| F_{1,K^1} v_1^{-1} \|_{L^2}^2\Big)^{\frac 12}. $$ Let $s \in (1, \infty)$ be such that $d_1/s'=\beta/(2n)$. Then \begin{equation*} F_{1,K^1}
\le 2^{\frac{l_1\beta}{2n}}1_{K^1}\bigg(\sum_{(L_1^1)^{(l_1)}=K^1} \frac{|L^1_1|^{\frac s2}}{|K^1|^s} \Big( \sum_{K^2}\frac{|\langle f_{1}, h_{L^1_1} \otimes h_{K^2}\rangle|^2}{\langle \sigma_1\rangle_K^2}\frac{1_{K^2}}{|K^2|} \Big)^{\frac{s}{2}} \bigg)^{\frac 1s}. \end{equation*}
Therefore, $\| F_{1,K^1} v_1^{-1} \|_{L^2}^2$ is less than \begin{equation}\label{eq:eq18}
2^{\frac{l_1\beta}{n}} \bigg\| \bigg(\sum_{(L_1^1)^{(l_1)}=K^1} \frac{|L^1_1|^{\frac s2}}{|K^1|^s}
\Big( \sum_{K^2}\frac{|\langle f_{1}, h_{L^1_1} \otimes h_{K^2}\rangle|^2}{\langle \sigma_1\rangle_K^2}\frac{1_{K^2}}{|K^2|} \Big)^{\frac{s}{2}}
\bigg)^{\frac 1s} \langle \sigma_1\rangle_{K^1,1}^{\frac 12} \bigg\|_{L^2}^2 |K^1|. \end{equation} Notice that $
|\langle f_{1}, h_{L^1_1} \otimes h_{K^2}\rangle | |K^2|^{-\frac 12}
\le \langle | \Delta_{K^2} \langle f_1, h_{L_1^1} \rangle_1| \rangle_{K^2}. $ Therefore, the one-parameter case of Proposition \ref{prop:prop3} gives that \begin{equation}\label{eq:eq19} \begin{split} \eqref{eq:eq18}
&\lesssim 2^{\frac{l_1\beta}{n}} \bigg\| \Big(\sum_{(L_1^1)^{(l_1)}=K^1} \frac{|L^1_1|^{\frac s2}}{|K^1|^s}
|\langle f_{1}, h_{L^1_1}\rangle_1|^s\Big)^{\frac 1s} \langle \sigma_1\rangle_{K^1,1}^{-\frac 12} \bigg\|_{L^2}^2 |K^1| \\
& \le 2^{\frac{l_1\beta}{n}} \bigg\| \sum_{(L_1^1)^{(l_1)}=K^1} \frac{|L^1_1|^{\frac 12}}{|K^1|}
|\langle f_{1}, h_{L^1_1}\rangle_1| \langle \sigma_1\rangle_{K^1,1}^{-\frac 12} \bigg\|_{L^2}^2 |K^1|. \end{split} \end{equation} Notice that $$
\sum_{(L_1^1)^{(l_1)}=K^1} \frac{|L^1_1|^{\frac 12}}{|K^1|}
|\langle f_{1}, h_{L^1_1}\rangle_1|
\le \langle | \Delta_{K^1,l_1}^1 f_1 | \rangle_{K^1,1}. $$ Thus, summing the right hand side of \eqref{eq:eq19} over $K^1$ leads to \begin{equation*} \begin{split}
2^{\frac{l_1\beta}{n}} \int_{\mathbb{R}^{d_2}} \sum_{K^1} \langle | \Delta_{K^1,l_1}^1 f_1 | \rangle_{K^1,1}^2 \langle \sigma_1\rangle_{K^1,1}^{-1} |K^1|
&=2^{\frac{l_1\beta}{n}} \int_{\mathbb{R}^d} \sum_{K^1} \langle | \Delta_{K^1,l_1}^1 f_1 | \rangle_{K^1,1}^2
\frac{1_{K^1}}{\langle \sigma_1 \rangle_{K^1,1}^2} \sigma_1 \\
& \lesssim 2^{\frac{l_1\beta}{n}} \int_{\mathbb{R}^{d}} | f_1 |^2 v_1^2,
\end{split} \end{equation*} where we used Proposition \ref{prop:prop3} again.
Finally, we estimate the norms related to $j=2, \dots, n$, which are all similar. We assume that $l_j>0$. It will be clear how to do the case $l_j=0$. As above we have \begin{equation*} F_{j,K^1}
\le 2^{\frac{l_j\beta}{2n}}1_{K^1}\bigg(\sum_{(L_j^1)^{(l_j)}=K^1} \frac{|L^1_j|^{\frac s2}}{|K^1|^s} M^{\langle \sigma_j\rangle_{K^1,1}}_{\calD^2}(\langle f_{j}, h_{L^1_j} \rangle_1 \langle \sigma_j \rangle_{K^1,1}^{-1})^s \bigg)^{\frac 1s}. \end{equation*} Therefore, we get that \begin{equation*} \begin{split}
\| F_{j,K^1} v_{j}^{-1}\|_{L^2}^2
&\le 2^{\frac{l_j\beta}{n}} \bigg\| \bigg(\sum_{(L_j^1)^{(l_j)}=K^1} \frac{|L^1_j|^{\frac s2}}{|K^1|^s} M^{\langle \sigma_j\rangle_{K^1,1}}_{\calD^2}(\langle f_{j}, h_{L^1_j} \rangle_1 \langle \sigma_j \rangle_{K^1,1}^{-1})^s
\bigg)^{\frac 1s} \langle \sigma_j \rangle_{K^1,1}^{\frac 12} \bigg \|_{L^2}^2|K^1| \\ &\lesssim 2^{\frac{l_j\beta}{n}}
\bigg\| \bigg(\sum_{(L_j^1)^{(l_j)}=K^1} \frac{|L^1_j|^{\frac s2}}{|K^1|^s}
|\langle f_{j}, h_{L^1_j} \rangle_1 \langle \sigma_j \rangle_{K^1,1}^{-1}|^s
\bigg)^{\frac 1s} \langle \sigma_j \rangle_{K^1,1}^{\frac 12} \bigg \|_{L^2}^2|K^1| \\ & \le 2^{\frac{l_j\beta}{n}}
\bigg\| \sum_{(L_j^1)^{(l_j)}=K^1} \frac{|L^1_j|^{\frac 12}}{|K^1|}
|\langle f_{j}, h_{L^1_j} \rangle_1|
\langle \sigma_j \rangle_{K^1,1}^{-\frac 12} \bigg \|_{L^2}^2|K^1|, \end{split} \end{equation*} where we applied the one-parameter version of Proposition \ref{prop:vecvalmax}. The last norm is like the last norm in \eqref{eq:eq19}, and therefore the estimate can be concluded with familiar steps. Combining the estimates we have shown that $$
\prod_{j=1}^n \Big \| \Big( \sum_{K^1} F_{j,K^1}^2 \Big)^{\frac 12} v_{j}^{-1} \Big \|_{L^2}
\lesssim \prod_{j=1}^n 2^{\frac{l_j\beta}{2n}} \| f_j v_j \|_{L^2}
\le 2^{\max k_j\frac{\beta}{2}} \prod_{j=1}^n \| f_j v_j \|_{L^2}. $$
Above, we assumed that the index $j_2$ related to the form of the paraproduct satisfied $j_2=1$, see the discussion before \eqref{eq:eq20}. It remains to comment on the case $j_2=n+1$. In this case the formula corresponding to \eqref{eq:eq20} is $$
c_{K,(L^1_j)} =\Bigg|\prod_{j=1}^n\frac{\Big\langle f_{j}, h_{L^1_j} \otimes \frac{1_{K^2}}{|K^2|}\Big\rangle}{\langle \sigma_j\rangle_K}
\cdot \frac{\langle f_{n+1}, h^0_{K^1}\otimes h_{K^2}\rangle}{\langle \sigma_{n+1} \rangle_K} \Bigg|. $$ For $j=1, \dots, n$ we do the estimate \eqref{eq:eq21}. Also, there holds that \begin{equation*} \begin{split}
\frac{|\langle f_{n+1}, h^0_{K^1}\otimes h_{K^2}\rangle |}{\langle \sigma_{n+1} \rangle_K}
&= |K^1|^{\frac 12}\frac{\big|\big \langle \langle f_{n+1}, h_{K^2}\rangle_2 \langle \sigma_{n+1} \rangle_{K^2,2}^{-1}\langle \sigma_{n+1} \rangle_{K^2,2} \big \rangle_{K^1}\big|} {\langle \langle \sigma_{n+1} \rangle_{K^2,2} \rangle_{K^1}} \\
& \le |K^1|^{\frac 12} M_{\calD^1}^{\langle \sigma_{n+1} \rangle_{K^2,2}}(\langle f_{n+1}, h_{K^2}\rangle_2 \langle \sigma_{n+1} \rangle_{K^2,2}^{-1})(x_1) \end{split} \end{equation*} for any $x_1 \in K^1$. With the pointwise estimates we proceed as above. Related to $f_j$, $j=1, \dots, n$, this leads to terms which we know how to estimate. Related to $f_{n+1}$ we get a similar term except that the parameters are in opposite roles. We are done. \end{proof}
\subsection{Full paraproducts} An $n$-linear bi-parameter full paraproduct $\Pi$ takes the form \begin{equation}\label{eq:pi2bar} \langle \Pi(f_1, \ldots, f_n) , f_{n+1} \rangle = \sum_{K = K^1 \times K^2} a_{K} \prod_{j=1}^{n+1} \langle f_j, u_{j, K^1} \otimes u_{j, K^2} \rangle, \end{equation} where the functions $u_{j, K^1}$ and $u_{j, K^2}$ are as in \eqref{eq:Spi}. The coefficients are assumed to satisfy $$
\| (a_{K} ) \|_{\BMO_{\operatorname{prod}}} = \sup_{\Omega} \Big(\frac{1}{|\Omega|} \sum_{K\subset \Omega} |a_{K}|^2 \Big)^{1/2} \le 1, $$
where the supremum is over open sets $\Omega \subset \mathbb{R}^d = \mathbb{R}^{d_1} \times \mathbb{R}^{d_2}$ with $0 < |\Omega| < \infty$. As already discussed, the $H^1$-$\BMO$ duality also holds in the bi-parameter setting (see again \cite[Equation (4.1)]{MO}): \begin{equation}\label{eq:BiParH1BMO}
\sum_{K} |a_{K}| |b_{K}| \lesssim \| (a_{K} ) \|_{\BMO_{\operatorname{prod}}} \Big\| \Big( \sum_{K} |b_{K}|^2 \frac{1_{K}}{|K|} \Big)^{1/2} \Big \|_{L^1}. \end{equation}
We are ready to bound the full paraproducts. \begin{thm} Suppose $\Pi$ is an $n$-linear bi-parameter full paraproduct, $1 < p_1, \ldots, p_n \le \infty$ and $1/p = \sum_{i=1}^n 1/p_i> 0$. Then we have $$
\|\Pi(f_1, \ldots, f_n)w \|_{L^p} \lesssim \prod_{i=1}^n \|f_i w_i\|_{L^{p_i}} $$ for all multilinear bi-parameter weights $\vec w \in A_{\vec p}$. \end{thm} \begin{proof}
We use duality to always reduce to one of the operators of type $A_{1}$ in Theorem \ref{thm:thm3}. Fix some $\vec p = (p_1, \ldots, p_n)$ with $1 < p_i < \infty$ and $p > 1$, which is enough by extrapolation. We will dualise using $f_{n+1}$ with $\|f_{n+1}w^{-1}\|_{L^{p'}} \le 1$. The particular form of $\Pi$ does not matter -- it only affects the form of the operator $A_1$ we will get. We may, for example, look at $$
\Pi(f_1, \ldots, f_n) = \sum_{K = K^1 \times K^2} a_K \Big \langle f_{1}, h_{K^1} \otimes \frac{1_{K^2}}{|K^2|} \Big\rangle
\Big \langle f_{2}, \frac{1_{K^1}}{|K^1|} \otimes h_{K^2} \Big\rangle \prod_{j=3}^{n} \langle f_j \rangle_K \cdot \frac{1_K}{|K|}. $$ We have \begin{align*}
|\langle \Pi(f_1, \ldots, f_n), f_{n+1} \rangle| \le \sum_{K} |a_K| \Big| \Big \langle f_{1}, h_{K^1} \otimes \frac{1_{K^2}}{|K^2|} \Big\rangle
\Big \langle f_{2}, \frac{1_{K^1}}{|K^1|} \otimes h_{K^2} \Big\rangle \Big| \prod_{j=3}^{n+1} \langle |f_j| \rangle_K. \end{align*} We now apply the unweighted $H^1$-$\BMO$ duality estimate from above to bound this with \begin{align*}
\Big\| \Big( \sum_{K} \langle | \Delta_{K^1}^1 f_1 | \rangle_K^2
\langle | \Delta_{K^2}^2 f_2 | \rangle_K^2 \prod_{j=3}^{n+1} \langle |f_j| \rangle_K^2 1_K \Big)^{\frac{1}{2}} \Big\|_{L^1}. \end{align*} Recalling \eqref{eq:eq12} it remains to apply Theorem \ref{thm:thm3} with a suitable $A_1(f_1, \ldots, f_{n+1})$. \end{proof}
\section{Singular integrals}\label{sec:SIOs} Let $\omega$ be a modulus of continuity: an increasing and subadditive function with $\omega(0) = 0$. A relevant quantity is the modified Dini condition \begin{equation}\label{eq:Dini}
\|\omega\|_{\operatorname{Dini}_{\alpha}} := \int_0^1 \omega(t) \Big( 1 + \log \frac{1}{t} \Big)^{\alpha} \frac{dt}{t}, \qquad \alpha \ge 0. \end{equation} In practice, the quantity \eqref{eq:Dini} arises as follows: \begin{equation}\label{eq:diniuse} \sum_{k=1}^{\infty} \omega(2^{-k}) k^{\alpha} = \sum_{k=1}^{\infty} \frac{1}{\log 2} \int_{2^{-k}}^{2^{-k+1}} \omega(2^{-k}) k^{\alpha} \frac{dt}{t} \lesssim \int_0^1 \omega(t) \Big( 1 + \log \frac{1}{t} \Big)^{\alpha} \frac{dt}{t}. \end{equation}
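For example, for the H\"older modulus $\omega(t)=t^{\gamma}$ with $\gamma \in (0,1]$ we have
$$
\|\omega\|_{\operatorname{Dini}_{\alpha}} = \int_0^1 t^{\gamma-1} \Big( 1 + \log \frac{1}{t} \Big)^{\alpha} dt < \infty
$$
for every $\alpha \ge 0$, so that H\"older type kernels satisfy all of the conditions $\operatorname{Dini}_{\alpha}$.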
We define what it means to be an $n$-linear bi-parameter SIO. Let $\mathscr{F}_{d_i}$ denote the space of finite linear combinations of indicators of cubes in $\mathbb{R}^{d_i}$, and let $\mathscr{F}$ denote the space of finite linear combinations of indicators of rectangles in $\mathbb{R}^{d}$. Suppose that we have $n$-linear operators $T^{j_1*,j_2*}_{1,2}$, $j_1,j_2 \in \{0, \dots, n\}$, each mapping $\mathscr{F} \times \dots \times \mathscr{F}$ into locally integrable functions. We denote $T=T^{0*,0*}_{1,2}$ and assume that the operators $T^{j_1*,j_2*}_{1,2}$ satisfy the duality relations as described in Section \ref{sec:adjoints}.
Let $\omega_i$ be a modulus of continuity on $\mathbb{R}^{d_i}$. Assume $f_j = f_j^1 \otimes f_j^2$, $j = 1, \ldots, n+1$, where $f_{j}^i \in \mathscr{F}_{d_i}$.
\subsection*{Bi-parameter SIOs} \subsubsection*{Full kernel representation} Here we assume that for each $m \in \{1,2\}$ there exist $j_1, j_2 \in \{1, \ldots, n+1\}$, $j_1 \ne j_2$, so that $\operatorname{spt} f_{j_1}^m \cap \operatorname{spt} f_{j_2}^m = \emptyset$. In this case we demand that $$ \langle T(f_1, \ldots, f_n), f_{n+1}\rangle = \int_{\mathbb{R}^{(n+1)d}} K(x_{n+1},x_1, \dots, x_n)\prod_{j=1}^{n+1} f_j(x_j) \ud x, $$ where $$ K \colon \mathbb{R}^{(n+1)d} \setminus \{ (x_1, \ldots, x_{n+1}) \in \mathbb{R}^{(n+1)d}\colon x_1^1 = \cdots = x_{n+1}^1 \textup{ or } x_1^2 = \cdots = x_{n+1}^2\} \to \mathbb{C} $$ is a kernel satisfying a set of estimates which we specify next.
The kernel $K$ is assumed to satisfy the size estimate \begin{displaymath}
|K(x_{n+1},x_1, \dots, x_n)| \lesssim \prod_{m=1}^2 \frac{1}{\Big(\sum_{j=1}^{n} |x_{n+1}^m-x_j^m|\Big)^{d_mn}}. \end{displaymath}
We also require the following continuity estimates. For example, we require that we have \begin{align*}
|K(x_{n+1}, x_1, \ldots, x_n)-&K(x_{n+1},x_1, \dots, x_{n-1}, (c^1,x^2_n))\\
&-K((x_{n+1}^1,c^2),x_1, \dots, x_n)+K((x_{n+1}^1,c^2),x_1, \dots, x_{n-1}, (c^1,x^2_n))| \\
&\qquad \lesssim \omega_1 \Big( \frac{|x_{n}^1-c^1| }{ \sum_{j=1}^{n} |x_{n+1}^1-x_j^1|} \Big)
\frac{1}{\Big(\sum_{j=1}^{n} |x_{n+1}^1-x_j^1|\Big)^{d_1n}} \\ &\qquad\times
\omega_2 \Big( \frac{|x_{n+1}^2-c^2| }{ \sum_{j=1}^{n} |x_{n+1}^2-x_j^2|} \Big)
\frac{1}{\Big(\sum_{j=1}^{n} |x_{n+1}^2-x_j^2|\Big)^{d_2n}} \end{align*}
whenever $|x_n^1-c^1| \le 2^{-1} \max_{1 \le i \le n} |x_{n+1}^1-x_i^1|$
and $|x_{n+1}^2-c^2| \le 2^{-1} \max_{1 \le i \le n} |x_{n+1}^2-x_i^2|$. Of course, we also require all the other natural symmetric estimates, where $c^1$ can be in any of the given $n+1$ slots and similarly for $c^2$. There are $(n+1)^2$ different estimates.
Finally, we require the following mixed continuity and size estimates. For example, we ask that \begin{align*}
|K(x_{n+1}&, x_1, \ldots, x_n)-K(x_{n+1},x_1, \dots, x_{n-1}, (c^1,x^2_n))| \\
& \lesssim \omega_1 \Big( \frac{|x_{n}^1-c^1| }{ \sum_{j=1}^{n} |x_{n+1}^1-x_j^1|} \Big)
\frac{1}{\Big(\sum_{j=1}^{n} |x_{n+1}^1-x_j^1|\Big)^{d_1n}} \cdot \frac{1}{\Big(\sum_{j=1}^{n} |x_{n+1}^2-x_j^2|\Big)^{d_2n}} \end{align*}
whenever $|x_n^1-c^1| \le 2^{-1} \max_{1 \le i \le n} |x_{n+1}^1-x_i^1|$. Again, we also require all the other natural symmetric estimates. \subsubsection*{Partial kernel representations} Suppose now only that there exist $j_1, j_2 \in \{1, \ldots, n+1\}$ so that $\operatorname{spt} f_{j_1}^1 \cap \operatorname{spt} f_{j_2}^1 = \emptyset$.
Then we assume that $$ \langle T(f_1, \ldots, f_n), f_{n+1}\rangle = \int_{\mathbb{R}^{(n+1)d_1}} K_{(f_j^2)}(x_{n+1}^1, x_1^1, \ldots, x_n^1) \prod_{j=1}^{n+1} f_j^1(x^1_j) \ud x^1, $$ where $K_{(f_j^2)}$ is a one-parameter $\omega_1$-Calder\'on--Zygmund kernel but with a constant depending on the fixed functions $f_1^2, \ldots, f_{n+1}^2$. For example, this means that the size estimate takes the form $$
|K_{(f_j^2)}(x_{n+1}^1, x_1^1, \ldots, x_n^1)| \le C(f_1^2, \ldots, f_{n+1}^2) \frac{1}{\Big(\sum_{j=1}^{n} |x_{n+1}^1-x_j^1|\Big)^{d_1n}}. $$ The continuity estimates are analogous.
We assume the following $T1$ type control on the constant $C(f_1^2, \ldots, f_{n+1}^2)$. We have \begin{equation*}\label{eq:PKWBP}
C(1_{I^2}, \ldots, 1_{I^2}) \lesssim |I^2| \end{equation*} and \begin{equation}\label{eq:pest}
C(a_{I^2}, 1_{I^2}, \ldots, 1_{I^2}) + C(1_{I^2}, a_{I^2}, 1_{I^2}, \ldots, 1_{I^2}) + \cdots + C(1_{I^2}, \ldots, 1_{I^2}, a_{I^2}) \lesssim |I^2| \end{equation} for all cubes $I^2 \subset \mathbb{R}^{d_2}$
and all functions $a_{I^2} \in \mathscr{F}_{d_2}$ satisfying $a_{I^2} = 1_{I^2}a_{I^2}$, $|a_{I^2}| \le 1$ and $\int a_{I^2} = 0$.
An analogous partial kernel representation in the second parameter is assumed when $\operatorname{spt} f_{j_1}^2 \cap \operatorname{spt} f_{j_2}^2 = \emptyset$ for some $j_1, j_2$.
\begin{defn} If $T$ is an $n$-linear operator with full and partial kernel representations as defined above, we call $T$ an $n$-linear bi-parameter $(\omega_1, \omega_2)$-SIO. \end{defn}
\subsection*{Bi-parameter CZOs} We say that $T$ satisfies the weak boundedness property if \begin{equation*}\label{eq:2ParWBP}
|\langle T(1_R, \ldots, 1_R), 1_R \rangle| \lesssim |R| \end{equation*} for all rectangles $R = I^1 \times I^2 \subset \mathbb{R}^{d} = \mathbb{R}^{d_1} \times \mathbb{R}^{d_2}$.
An SIO $T$ satisfies the diagonal BMO assumption if the following holds. For all rectangles $R = I^1 \times I^2 \subset \mathbb{R}^{d} = \mathbb{R}^{d_1} \times \mathbb{R}^{d_2}$
and functions $a_{I^i}\in \mathscr{F}_{d_i}$ with $a_{I^i} = 1_{I^i}a_{I^i}$, $|a_{I^i}| \le 1$ and $\int a_{I^i} = 0$ we have \begin{equation*}\label{eq:DiagBMO}
|\langle T(a_{I^1} \otimes 1_{I^2}, 1_R, \ldots, 1_R), 1_R \rangle| + \cdots + |\langle T(1_R, \ldots, 1_R), a_{I^1} \otimes 1_{I^2} \rangle| \lesssim |R| \end{equation*} and $$
|\langle T(1_{I^1} \otimes a_{I^2}, 1_R, \ldots, 1_R), 1_R \rangle| + \cdots + |\langle T(1_R, \ldots, 1_R), 1_{I^1} \otimes a_{I^2} \rangle| \lesssim |R|. $$
An SIO $T$ satisfies the product BMO assumption if $$S(1, \ldots, 1) \in \BMO_{\textup{prod}}$$ for all the $(n+1)^2$ adjoints $S = T^{j_1*, j_2*}_{1,2}$. This can be interpreted in the sense that $$
\| S(1, \ldots, 1) \|_{\BMO_{\operatorname{prod}}} = \sup_{\calD = \calD^1 \times \calD^2} \sup_{\Omega} \Big(\frac{1}{|\Omega|} \sum_{ \substack{ R = I^1 \times I^2 \in \calD \\
R \subset \Omega}} |\langle S(1, \ldots, 1), h_R \rangle|^2 \Big)^{1/2} < \infty, $$
where the supremum is over all dyadic grids $\calD^i$ on $\mathbb{R}^{d_i}$ and open sets $\Omega \subset \mathbb{R}^d = \mathbb{R}^{d_1} \times \mathbb{R}^{d_2}$ with $0 < |\Omega| < \infty$, and the pairings $\langle S(1, \ldots, 1), h_R\rangle$ can be defined, in a natural way, using the kernel representations.
\begin{defn}\label{defn:CZO} An $n$-linear bi-parameter $(\omega_1, \omega_2)$-SIO $T$ satisfying the weak boundedness property, the diagonal BMO assumption and the product BMO assumption is called an $n$-linear bi-parameter $(\omega_1, \omega_2)$-Calder\'on--Zygmund operator ($(\omega_1, \omega_2)$-CZO). \end{defn}
\subsection*{Dyadic representation theorem} In Section \ref{sec:dmo} we have introduced the three different dyadic model operators (DMOs). In this section we explain how and why the DMOs are linked to the CZOs. Before stating the known representation theorem, to aid the reader, we first outline the main structure and idea of representation theorems -- for the lengthy details in this generality see \cite{AMV}.
\textbf{Step 1.} There is a natural probability space $\Omega = \Omega_1 \times \Omega_2$, the details of which are not relevant for us here (but see \cite{Hy1}), so that to each $\sigma = (\sigma_1, \sigma_2) \in \Omega$ we can associate a random collection of dyadic rectangles $\calD_{\sigma} = \calD_{\sigma_1} \times \calD_{\sigma_2}$. The starting point is the martingale difference decomposition \begin{equation*} \langle T(f_1, \ldots, f_n),f_{n+1} \rangle = \sum_{j_1, j_2 =1}^{n+1} \mathbb{E}_{\sigma} \Sigma_{j_1, j_2, \sigma} + \mathbb{E}_{\sigma} \operatorname{Rem}_{\sigma}, \end{equation*} where $$ \Sigma_{j_1, j_2, \sigma} = \sum_{ \substack{ R_1, \ldots, R_{n+1} \\ \ell(I_{i_1}^1) > \ell(I_{j_1}^1) \textup{ for } i_1 \ne j_1
\\ \ell(I_{i_2}^2) > \ell(I_{j_2}^2) \textup{ for } i_2 \ne j_2}} \langle T(\Delta_{R_1}f_1, \ldots, \Delta_{R_n}f_n),\Delta_{R_{n+1}}f_{n+1} \rangle $$ and $R_1 = I_1^1 \times I_1^2, \ldots, R_{n+1} = I_{n+1}^1 \times I_{n+1}^2 \in \calD_\sigma = \calD_{\sigma_1} \times \calD_{\sigma_2}$. Notice how we have already started the proof working parameter by parameter. At this point the randomization is not yet important: it is used at a later point in the proof to find suitable common parents for dyadic cubes. Looking at the definition of shifts this is clearly critical: everything is organised under the common parent $K$, and such a parent cannot be arbitrarily large.
\textbf{Step 2.} There are $(n+1)^2$ main terms $\Sigma_{j_1, j_2, \sigma}$ -- these are similar to each other and all of them produce shifts, partial paraproducts and \emph{exactly one} full paraproduct. For example, a further parameter by parameter $T1$ style decomposition of $\Sigma_{n, n+1, \sigma}$ produces the full paraproduct \begin{align*} &\sum_{R = K^1 \times K^2} \langle T(1, \ldots, 1, h_{K^1} \otimes 1), 1 \otimes h_{K^2} \rangle
\prod_{j=1}^{n-1} \langle f_j \rangle_{R}\Big \langle f_n, h_{K^1} \otimes \frac{1_{K^2}}{|K^2|} \Big\rangle
\Big \langle f_{n+1}, \frac{1_{K^1}}{|K^1|} \otimes h_{K^2} \Big\rangle, \end{align*} where $$ \langle T(1, \ldots, 1, h_{K^1} \otimes 1), 1 \otimes h_{K^2} \rangle = \langle T^{n*}_1(1, \ldots, 1), h_R \rangle. $$ For the quantitative part the product $\BMO$ assumption of $T^{n*}_1$ is critical here, and the remaining product $\BMO$ assumptions are used to control the full paraproducts coming from the other main terms.
The shifts and partial paraproducts structurally arise from the $T1$ decomposition combined with probability. Again, the randomization is simply used to find suitably sized common parents. After this completely structural part (for full details see \cite{AMV}), the focus is on providing estimates for the coefficients, like the coefficient $a_{K, (R_j)}$ of the shifts. \emph{As in the full paraproduct case above, it is important to understand that the coefficients always have a concrete form in terms of pairings involving $T$ and various Haar functions.} These pairings are estimated in various ways: \begin{itemize}
\item The shift coefficients are handled with kernel estimates only (various size and continuity estimates).
Only in the part $\operatorname{Rem}_{\sigma}$, which we have not yet discussed, is the weak boundedness property also used; it handles the diagonal case, where kernel estimates are not valid.
\item In the partial paraproduct case a size estimate does not suffice, as there is not enough
cancellation. A more refined $\BMO$ estimate needs to be proved -- this is done via a duality argument.
This duality is the source of the atoms $a_I$ appearing in some of the assumptions -- e.g. in \eqref{eq:pest}. \end{itemize}
\textbf{Step 3.} The final step is to deal with the remainder $\operatorname{Rem}_{\sigma}$. This only produces shifts and partial paraproducts. Another difference to the main terms is that all the diagonal parts of the summation are here -- to deal with them we need to assume the weak boundedness property and the diagonal $\BMO$ assumptions.
\begin{rem}\label{rem:mrem}
An $m$-parameter representation theorem is structurally identical: the pairing
$\langle T(f_1, \ldots, f_n),f_{n+1} \rangle$ is split into $(n+1)^m$ main terms
and the remainder. These are then further split into shifts, partial paraproducts and full paraproducts.
The full paraproduct is produced parameter by parameter, as in the bi-parameter case, and along the way this generates partial paraproducts, whose
paraproduct component can vary from being $1$-parameter to being $(m-1)$-parameter.
The definition of a CZO is adjusted so that all of the appearing
coefficients of the appearing model operators involving $T$ and Haar functions can be estimated.
For the linear $m$-parameter representation theorem see Ou \cite{Ou} -- this establishes the appropriate
definition of a multi-parameter CZO.
We discuss the $m$-parameter case in more detail in Section \ref{sec:multi}. The point there is the following:
while the representation theorem itself is straightforward, some
of the estimates of Section \ref{sec:dmo} are harder in $m$-parameter. \end{rem}
In the paper \cite{AMV}, among other things, a dyadic representation theorem for $n$-linear bi-parameter CZOs was proved. The minimal regularity required is $\omega_i \in \operatorname{Dini}_{\frac{1}{2}}$, but then the dyadic representation is in terms of certain modified versions of the model operators we have presented, and bounded, in this paper. It appears to be difficult to prove weighted bounds for the modified operators with the optimal dependency on the complexity. Instead, we will rely on a lemma, which says that all of the modified operators can be written as suitable sums of the standard model operators. This step essentially loses $\frac{1}{2}$ of kernel regularity outright, which is why we obtain our weighted bounds with $\omega_i \in \operatorname{Dini}_{1}$. The bilinear bi-parameter representation theorem with the usual H\"older type kernel regularity $w_i(t) = t^{\alpha_i}$ appeared first in \cite{LMV}. We now state a representation theorem that we will rely on.
A consequence of \cite[Theorem 5.35]{AMV} and \cite[Lemma 5.12]{AMV} is the following. \begin{prop} Suppose $T$ is an $n$-linear bi-parameter $(\omega_1, \omega_2)$-CZO. Then we have $$ \langle T(f_1,\ldots,f_n), f_{n+1} \rangle= C_T \mathbb{E}_{\sigma} \sum_{u = (u_1, u_2) \in \mathbb{N}^2} \omega_1(2^{-u_1})\omega_2(2^{-u_2}) \langle U_{u, \sigma}(f_1,\ldots,f_n), f_{n+1} \rangle, $$ where $C_T$ enjoys a linear bound with respect to the CZO quantities and $U_{u, \sigma}$ denotes some $n$-linear bi-parameter dyadic operator (defined in the grid $\calD_{\sigma}$) with the following property. We have that $U_u = U_{u, \sigma}$ can be decomposed using the standard dyadic model operators as follows: \begin{equation}\label{eq:eq10} U_{u} = C \sum_{i_1=0}^{u_1-1} \sum_{i_2=0}^{u_2-1} V_{i_1,i_2}, \end{equation} where each $V = V_{i_1,i_2}$ is a dyadic model operator (a shift, a partial paraproduct or a full paraproduct) of complexity $k^m_{j, V}$, $j \in \{1, \ldots, n+1\}$, $m \in \{1,2\}$, satisfying $$ k^{m}_{j, V} \le u_m. $$ \end{prop}
\begin{rem} We assumed that the operator $T$ and its adjoints are initially well-defined for finite linear combinations of indicators of rectangles. However, a careful proof of the representation theorem \cite{LMV} shows that this implies the boundedness of $T$ (for related details see also \cite{GH} and \cite{Hy3}). Therefore, we do not need to worry about this detail any more at this point and we can work with general functions. Moreover, we do not need to work with the CZOs directly -- after the representation theorem we only need to work with the dyadic model operators. \end{rem}
\subsection*{Weighted estimates for CZOs} In this paper we were able to prove a complexity free weighted estimate for the shifts. In contrast, the weighted estimate for the partial paraproducts has exponential complexity dependence, however with an arbitrarily small power. For these reasons, we can prove a weighted estimate with mild kernel regularity for paraproduct free $T$, and otherwise we will deal with the standard kernel regularity $\omega_i(t) = t^{\alpha_i}$. By paraproduct free we mean that the paraproducts in the dyadic representation of $T$ vanish, which could also be stated in terms of (both partial and full) ``$T1=0$'' type conditions (only the partial paraproducts, and not the full paraproducts, are problematic in terms of kernel regularity, of course). In the paraproduct free case the reader can think of convolution form SIOs. \begin{thm} Suppose $T$ is an $n$-linear bi-parameter $(\omega_1, \omega_2)$-CZO. For $1 < p_1, \ldots, p_n \le \infty$ and $1/p = \sum_i 1/p_i> 0$ we have $$
\|T(f_1, \ldots, f_n)w \|_{L^p} \lesssim \prod_i \|f_i w_i\|_{L^{p_i}} $$ for all multilinear bi-parameter weights $\vec w \in A_{\vec p}$, if one of the following conditions holds. \begin{enumerate} \item $T$ is paraproduct free and $\omega_i \in \operatorname{Dini}_{1}$. \item We have $\omega_i(t) = t^{\alpha_i}$ for some $\alpha_i \in (0,1]$. \end{enumerate} \end{thm} \begin{proof} Notice that in the paraproduct free case (1) by our results for the shifts we always have $$
\|U_{u, \sigma}(f_1, \ldots, f_n)w \|_{L^p} \lesssim (1+u_1)(1+u_2) \prod_i \|f_i w_i\|_{L^{p_i}}, $$ where the complexity dependency comes only from the decomposition \eqref{eq:eq10}. We then take some $1 < p_1, \ldots, p_n < \infty$ with $p \in (1, \infty)$, use the dyadic representation theorem and conclude that $T$ satisfies the weighted bound with these fixed exponents -- recall \eqref{eq:diniuse} and that $\omega_i \in \operatorname{Dini}_{1}$. Finally, we extrapolate using Theorem \ref{thm:ext}.
The case of a completely general CZO with the standard kernel regularity is proved completely analogously. Just choose the exponent $\beta$ in the exponential complexity dependency of the partial paraproducts to be small enough compared to $\alpha_1$ and $\alpha_2$. \end{proof}
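For concreteness, the role of the $\operatorname{Dini}_1$ assumption in case (1) is that the complexity summation converges. Assuming the usual normalization $\|\omega\|_{\operatorname{Dini}_\theta} := \int_0^1 \omega(t)\big(1+\log\frac 1t\big)^\theta\,\frac{\ud t}{t}$ and the standard comparison \eqref{eq:diniuse}, the summation behind the above proof can be sketched as:

```latex
\sum_{u = (u_1,u_2) \in \mathbb{N}^2}
\omega_1(2^{-u_1})\,\omega_2(2^{-u_2})\,(1+u_1)(1+u_2)
= \prod_{m=1}^{2} \sum_{u_m=0}^{\infty} \omega_m(2^{-u_m})(1+u_m)
\lesssim \prod_{m=1}^{2} \|\omega_m\|_{\operatorname{Dini}_1} < \infty.
```

With $\omega_i \in \operatorname{Dini}_{\frac12}$ only, the factor $(1+u_1)(1+u_2)$ coming from the decomposition \eqref{eq:eq10} would be too large, which is the reason for the assumption $\omega_i \in \operatorname{Dini}_1$ in case (1).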
\section{Extrapolation}\label{app:app1} This section is devoted to providing more details about Theorem \ref{thm:ext} in the multi-parameter setting. We also obtain the proof of the vector-valued Proposition \ref{prop:vecvalmax}. We give the details in the bi-parameter case with the general case being similar.
We begin with the following definitions. Given $\mu\in A_\infty(\mathbb{R}^{n+m})$, we say $w\in A_p(\mu)$ if $w>0$ a.e. and \[ [w]_{A_p(\mu)}:=\sup_R\, \langle w\rangle_R^{\mu} \left(\big\langle w^{-\frac 1{p-1}}\big\rangle_R^{\mu}\right)^{p-1}<\infty,\qquad 1<p<\infty. \] And we say $w\in A_1(\mu)$ if $w>0$ a.e. and \[ [w]_{A_1(\mu)} = \sup_R \, \ave{w}_R^\mu \esssup_R w^{-1}<\infty. \]
Next we record the following auxiliary result, needed to build the required machinery; it is an extension of Proposition \ref{prop:prop1}. \begin{rem} The so-called three lattice theorem states that there are lattices $\calD^m_j$ in $\mathbb{R}^{d_m}$, $m \in \{1,2\}$, $j \in \{1, \ldots, 3^{d_m}\}$, such that for every cube $Q^m \subset \mathbb{R}^{d_m}$
there exists a $j$ and $I^m \in \calD^m_j$ so that $Q^m \subset I^m$ and $|I^m| \sim |Q^m|$. Given $\lambda \in A_\infty$ we have in particular that $\lambda$ is doubling: $\lambda(2R) \lesssim \lambda(R)$ for all rectangles $R$. It then follows that also the non-dyadic variant $M^{\lambda}$ satisfies Proposition \ref{prop:prop1}. \end{rem} \begin{lem}\label{lem:lem5} Let $\mu\in A_\infty$ and $w\in A_p(\mu)$, $1<p<\infty$. Then we have \[
\| M^\mu f\|_{L^p(w\mu)}\lesssim \|f\|_{L^p(w\mu)}. \] \end{lem} \begin{proof} Fix $x$ and $f\ge 0$ and denote $\sigma=w^{-\frac 1{p-1}}$. For an arbitrary rectangle $R \subset \mathbb{R}^d$ with $x\in R$ we have \begin{align*} \langle f \rangle_R^\mu &= \langle \sigma\rangle_R^\mu \left( \langle w\rangle_R^\mu\right)^{\frac 1{p-1}}\left( \langle w\rangle_R^\mu\right)^{-\frac 1{p-1}}\frac 1{\sigma\mu(R)}\int_R f\mu\\ &\le [w]_{A_p(\mu)}^{\frac 1{p-1}} \left(M^{w\mu} \big( [M^{\sigma \mu}(f \sigma^{-1})]^{p-1} w^{-1} \big)(x)\right)^{\frac 1{p-1}}. \end{align*} The idea of the above pointwise estimate is from \cite{Le}. If $w\mu, \sigma \mu\in A_\infty$, then by (the non-dyadic version of) Proposition \ref{prop:prop1} we have \begin{align*}
\| M^{\mu} f\|_{L^p(w\mu)}&\lesssim \left\| \left(M^{w\mu} \big( [M^{\sigma \mu}(f \sigma^{-1})]^{p-1} w^{-1} \big) \right)^{\frac 1{p-1}}\right\|_{L^p(w\mu)}\\
&\lesssim \left\| \left( [M^{\sigma \mu}(f \sigma^{-1})]^{p-1} w^{-1} \right)^{\frac 1{p-1}}\right\|_{L^p(w\mu)}\\
&= \| M^{\sigma \mu}(f \sigma^{-1})\|_{L^p(\sigma\mu)}\lesssim \|f\|_{L^p(w\mu)}. \end{align*}
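To unpack the pointwise estimate used at the beginning of the proof (a routine expansion of the idea from \cite{Le}): for every $y \in R$ we have $\frac{1}{\sigma\mu(R)}\int_R f\mu = \langle f\sigma^{-1}\rangle_R^{\sigma\mu} \le M^{\sigma\mu}(f\sigma^{-1})(y)$, and hence, averaging the $(p-1)$:th power against $w\mu$ over $R$ and using $x \in R$,

```latex
\big( \langle w \rangle_R^\mu \big)^{-1}
\Big( \frac{1}{\sigma\mu(R)} \int_R f \mu \Big)^{p-1}
\le \frac{1}{w\mu(R)} \int_R \big[ M^{\sigma\mu}(f\sigma^{-1}) \big]^{p-1} w^{-1}\, w\mu
\le M^{w\mu}\big( [M^{\sigma\mu}(f\sigma^{-1})]^{p-1} w^{-1} \big)(x).
```

Raising this to the power $\frac{1}{p-1}$ and combining it with $\langle \sigma\rangle_R^\mu \big(\langle w\rangle_R^\mu\big)^{\frac 1{p-1}} \le [w]_{A_p(\mu)}^{\frac 1{p-1}}$ gives the displayed estimate.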
Therefore, it remains to check that $w\mu, \sigma \mu\in A_\infty$. We only explicitly prove that $w\mu\in A_\infty$, since the other one is symmetric. First of all, write \[ \langle w\rangle_R^{\mu} \left(\big\langle w^{-\frac 1{p-1}}\big\rangle_R^{\mu}\right)^{p-1}\le [w]_{A_p(\mu)} \] in the form \[ \langle w\mu\rangle_R \big\langle w^{-\frac 1{p-1}}\mu\big\rangle_R^{p-1}\le [w]_{A_p(\mu)} \langle \mu\rangle_R^p. \] Then by the Lebesgue differentiation theorem, we have for all cubes $I^1 \subset \mathbb{R}^{d_1}$ that \[ \langle w\mu\rangle_{I^1,1}(x_2) \big\langle w^{-\frac 1{p-1}}\mu\big\rangle_{I^1,1}^{p-1}(x_2) \le [w]_{A_p(\mu)} \langle \mu\rangle_{I^1,1}^p(x_2),\quad x_2\in \mathbb{R}^{d_2}\setminus N_{I^1},
\]where $|N_{I^1}|=0$. By standard considerations there exists $N$ so that $|N| = 0$ and for all cubes $I^1 \subset \mathbb{R}^{d_1}$ we have \[ \langle w\mu\rangle_{I^1,1}(x_2) \big\langle w^{-\frac 1{p-1}}\mu\big\rangle_{I^1,1}^{p-1}(x_2) \le [w]_{A_p(\mu)} \langle \mu\rangle_{I^1,1}^p(x_2) ,\quad x_2\in \mathbb{R}^{d_2}\setminus N. \] In other words, $w(\cdot, x_2)\in A_p(\mu(\cdot, x_2))$ (uniformly) for all $x_2\in \mathbb{R}^{d_2}\setminus N$. As $\mu \in A_{\infty}$ there exists $s < \infty$ so that $\mu \in A_s$.
Then for all cubes $I^1 \subset \mathbb{R}^{d_1}$ and arbitrary $E\subset I^1$ we have \begin{align*}
\frac{|E|}{|I^1|}\lesssim \Big(\frac{\mu(\cdot, x_2)(E)}{\mu(\cdot, x_2)(I^1)} \Big)^{\frac{1}{s}} \lesssim \Big(\frac{w\mu(\cdot, x_2)(E)}{w\mu(\cdot, x_2)(I^1)}\Big)^{\frac{1}{ps}},\quad {\rm {a.e.}} \, x_2\in \mathbb{R}^{d_2}, \end{align*}where the implicit constant is independent of $x_2$. This means $w\mu(\cdot, x_2)\in A_\infty(\mathbb{R}^{d_1})$ uniformly for a.e. $x_2\in \mathbb{R}^{d_2}$. Likewise we can show that $w\mu(x_1,\cdot)\in A_\infty(\mathbb{R}^{d_2})$ uniformly for a.e. $x_1\in \mathbb{R}^{d_1}$. This completes the proof. \end{proof}
Now we are ready to formulate the following version of the Rubio de Francia algorithm. \begin{lem}
Let $\mu \in A_{\infty}$ and $p \in (1,\infty)$. Let $f$ be a non-negative function in $L^p(w\mu)$ for some $w\in A_p(\mu)$. Let $M^\mu_k$ be the $k$-th iterate of $M^\mu$, $M^\mu_0f=f$, and $\|M^\mu\|_{L^p(w\mu)} := \|M^\mu\|_{L^p(w\mu) \to L^p(w\mu)}$ be the norm of $M^\mu$ as a bounded operator on $L^p(w\mu)$. Define \[
Rf(x)= \sum_{k=0}^\infty \frac{M^\mu_k f}{(2\|M^\mu\|_{L^p(w\mu)})^k}. \]
Then $f(x)\le Rf(x)$, $\|Rf\|_{L^p(w\mu)}\le 2 \|f\|_{L^p(w\mu)}$, and $Rf$ is an $A_1(\mu)$ weight with constant $[Rf]_{A_1(\mu)}\le 2 \|M^\mu\|_{L^p(w\mu)}$. \end{lem} \begin{proof}
The statements $f(x)\le Rf(x)$ and $\|Rf\|_{L^p(w\mu)}\le 2 \|f\|_{L^p(w\mu)}$ are obvious. Since \[
M^\mu(Rf)\le \sum_{k=0}^\infty \frac{M^\mu_{k+1} f}{(2\|M^\mu\|_{L^p(w\mu)})^k}\le 2\|M^\mu\|_{L^p(w\mu)} Rf, \] we have \[
[Rf]_{A_1(\mu)}\le \sup_R \big(\inf_R M^\mu(Rf) \big) \big(\operatornamewithlimits{ess\,inf}_R Rf \big) ^{-1}\le 2\|M^\mu\|_{L^p(w\mu)}. \] We are done. \end{proof} With the above Rubio de Francia algorithm at hand, we are able to prove the bi-parameter version of \cite[Theorem 3.1]{LMO} and the corresponding endpoint cases similarly as in \cite[Theorem 2.3]{LMMOV}. On the other hand, the key technical lemma \cite[Lemma 2.14]{LMMOV} can be extended to the bi-parameter setting very easily. Using these as in \cite{LMMOV} we obtain Theorem \ref{thm:ext}.
The above Rubio de Francia algorithm, of course, also yields the following standard linear extrapolation. Let $\mu\in A_\infty$ and assume that \begin{equation}\label{eq:eq23}
\| g\|_{L^{p_0}(w\mu)}\lesssim \|f\|_{L^{p_0}(w\mu)} \end{equation} for all $w \in A_{p_0}(\mu)$. Then the same inequality holds for all $p \in (1,\infty)$ and $w \in A_p(\mu)$. Using this and Lemma \ref{lem:lem5} we obtain Proposition \ref{prop:vecvalmax} via the following standard argument.
\begin{proof}[Proof of Proposition \ref{prop:vecvalmax}] From Lemma \ref{lem:lem5} we directly have that $$
\Big\| \Big( \sum_i (M^\mu f^i_j)^s \Big)^{\frac 1s} \Big\|_{L^s(w\mu)}
\lesssim \Big\| \Big( \sum_i |f^i_j|^s \Big)^{\frac 1s} \Big\|_{L^s(w\mu)} , \quad w \in A_s(\mu). $$ Then, the extrapolation described around \eqref{eq:eq23} gives that $$
\Big\| \Big( \sum_i (M^\mu f^i_j)^s \Big)^{\frac 1s} \Big\|_{L^t(w\mu)}
\lesssim \Big\| \Big( \sum_i |f^i_j|^s \Big)^{\frac 1s} \Big\|_{L^t(w\mu)} , \quad w \in A_t(\mu). $$ This in turn gives $$
\Big\| \Big( \sum_j \Big( \sum_i (M^\mu f^i_j)^s \Big)^{\frac ts}\Big)^{\frac 1t} \Big\|_{L^t(w\mu)}
\lesssim \Big\|\Big( \sum_j \Big( \sum_i |f^i_j|^s \Big)^{\frac ts}\Big)^{\frac 1t} \Big\|_{L^t(w\mu)} , \quad w \in A_t(\mu). $$ Extrapolating once more concludes the proof. \end{proof}
\section{The multi-parameter case}\label{sec:multi} One can approach the multi-parameter case as follows. \begin{enumerate} \item What is the definition of an SIO/CZO? The important base case is the linear multi-parameter definition given in \cite{Ou}. That can be straightforwardly extended to the multilinear situation as in Section \ref{sec:SIOs}. The definition becomes extremely lengthy due to the large number of different partial kernel representations, and for this reason we do not write it down explicitly. However, there is no complication in combining the linear multi-parameter definition \cite{Ou} and our multilinear bi-parameter definition.
We mention that another way to define the operators would be to adapt a Journ\'e \cite{Jo} style definition -- this kind of vector-valued definition would be shorter to state. In this paper we do not use the Journ\'e style formulation. However, for the equivalence of the Journ\'e style definitions and the style we use here see \cite{Grau, LMV, Ou}.
\item Is there a representation theorem in this generality? Yes -- see Remark \ref{rem:mrem}. The linear multi-parameter representation theorem is proved in \cite{Ou}. The multilinear representation theorems \cite{AMV, LMV} are stated only in the bi-parameter setting for convenience. However, using the multilinear methods from \cite{AMV, LMV} the multi-parameter theorem \cite{Ou} can easily be generalised to the multilinear setting.
\item What do the model operators look like? Studying the above presented bi-parameter model operators, one realises that the philosophies in each parameter are independent of each other -- for example, if one has a shift type of philosophy in a given parameter, one needs at least two cancellative Haar functions in that parameter. With this logic it is clear how to define the $m$-parameter analogues just by working parameter by parameter.
Alternatively, one can take all possible $m$-fold tensor products of one-parameter $n$-linear model operators, and then just replace the appearing product coefficients by general coefficients. This yields the form of the model operators.
We demonstrate this with an example of a bilinear tri-parameter partial paraproduct. Tri-parameter partial paraproducts have the shift structure in one or two of the parameters. In the remaining parameters there is a paraproduct structure. The following is an example of a partial paraproduct with the shift structure in the first parameter and the paraproduct structure in the second and third parameters: \begin{equation*} \begin{split} &\sum_{K=K^1 \times K^2 \times K^3 \in \calD} \sum_{\substack{I_1^1, I^1_2, I_{3}^1 \in \calD^1 \\ (I^1_j)^{(k_j)}=K^1}} \Big[a_{K,(I^1_j)} \\
&\hspace{1cm}\Big\langle f_1, h_{I_1^1} \otimes \frac{1_{K^2}}{|K^2|} \otimes \frac{1_{K^3}}{|K^3|}\Big\rangle
\Big\langle f_2, h_{I^1_2}^0 \otimes \frac{1_{K^2}}{|K^2|} \otimes h_{K^3} \Big\rangle
\Big\langle f_3, h_{I^1_3} \otimes h_{K^2} \otimes \frac{1_{K^3}}{|K^3|} \Big\rangle\Big]. \end{split} \end{equation*} Here $\calD= \calD^1 \times \calD^2 \times \calD^3$. The assumption on the coefficients is that when $K^1$, $I^1_1$, $I^1_2$ and $I^1_3$ are fixed, then $$
\| (a_{K,(I^1_j)})_{K^2 \times K^3 \in \calD^2 \times \calD^3} \|_{\BMO_{{\rm{prod}}}}
\le \frac{|I^1_1|^{ \frac 12}|I^1_2|^{\frac12} |I^1_3|^{\frac12}}{|K^1|^2}. $$ In the shift parameter there are at least two cancellative Haar functions, and in each paraproduct parameter there is exactly one cancellative Haar function while the remaining functions are normalised indicators. Thus, this is a generalization of $S \otimes \pi \otimes \pi$, where $S$ is a one-parameter shift and $\pi$ is a one-parameter paraproduct -- and all model operators arise like this.
\item Finally, is it more difficult to show the genuinely multilinear weighted estimates for the $m$-parameter, $m \ge 3$, model operators compared to the bi-parameter model operators? When it comes to shifts and full paraproducts, there is no essential difference -- their boundedness always reduces to Theorem \ref{thm:thm3}, which has an obvious $m$-parameter version. With our current proof, the answer for the partial paraproducts is more complicated. Thus, we will elaborate on how to prove the weighted estimates for $m$-parameter partial paraproducts. Notice that previously e.g. in \cite{LMV} we could only handle bi-parameter partial paraproducts, as our proof exploited the one-parameter nature of the paraproducts via sparse domination. Here we have already dispensed with sparse domination, but the proof is still complicated and leads to some new philosophies in higher parameters. \end{enumerate}
We now discuss how to prove a tri-parameter analogue of Theorem \ref{thm:thm4}. We can have a partial paraproduct with a bi-parameter paraproduct component and a one-parameter shift component, or the other way around. Regardless of the form, the initial stages of the proof of Theorem \ref{thm:thm4} can be used to reduce to estimating the weighted $L^2$ norms of certain functions which are analogous to \eqref{eq:eq24} and \eqref{eq:eq25}. Most of these norms can be estimated with similar steps as in the bi-parameter case. However, also a new type of variant appears. An example of such a variant is given by \begin{equation}\label{eq:eq27}
F_{j,K^1}=1_{K^1} \sum_{(L^1_j)^{(l_j)}=K^1} \frac{|L^1_j|^{\frac 12 }}{|K^1|}\Big( \sum_{K^2} \frac{1_{K^2}}{|K^2|} M_{\calD^3}^{\langle \sigma_j \rangle_{K^{1,2}}}\big(\langle f_j, h_{L^1_j} \otimes h_{K^2} \rangle \langle \sigma_j \rangle_{K^{1,2}}^{-1}\big)^2\Big)^{\frac 12}, \end{equation}
where $f_j \colon \mathbb{R}^{d_1} \times \mathbb{R}^{d_2} \times \mathbb{R}^{d_3} \to \mathbb{C}$. The goal is to estimate $\sum_{K^1} \| F_{j,K^1} \|_{L^2(\sigma_j)}^2$. Here we denote $K^{1,2} = K^1 \times K^2$, where the original tri-parameter rectangle is $K = K^1 \times K^2 \times K^3$, and for brevity we write $\langle \sigma_j \rangle_{K^{1,2}}$ instead of $ \langle \sigma_j \rangle_{K^{1,2}, 1, 2}$.
Comparing with \eqref{eq:eq25}, the key difference is that in \eqref{eq:eq25} the measure of the maximal function depended only on $K^1$. Here, it depends also on $K^2$, and therefore we have maximal functions with respect to different measures inside the norms. We will use the following lemma and the appearing new type of extrapolation trick to overcome this. \begin{lem}\label{lem:lem8} Let $\mu \in A_\infty(\mathbb{R}^{d_1} \times \mathbb{R}^{d_2})$ be a bi-parameter weight. Let $\calD=\calD^1\times \calD^2$ be a grid of bi-parameter dyadic rectangles in $\mathbb{R}^{d_1} \times \mathbb{R}^{d_2}$. Suppose that for each $m \in \mathbb{Z}$ and $K^1 \in \calD^1$ we have a function $f_{m,K^1} \colon \mathbb{R}^{d_2} \to \mathbb{C}$. Then, for all $p,s,t \in (1, \infty)$, the estimate $$
\Big\| \Big( \sum_{m } \Big( \sum_{K^1} 1_{K^1}
M_{\calD^2}^{\langle \mu \rangle_{K^1}} (f_{m,K^1})^t \Big)^{\frac st} \Big)^{\frac 1s} \Big\|_{L^p(w\mu)}
\lesssim \Big\| \Big( \sum_{m } \Big( \sum_{K^1} 1_{K^1}
|f_{m,K^1}|^t \Big)^{\frac st} \Big)^{\frac 1s} \Big\|_{L^p(w\mu)} $$ holds for all $w \in A_p(\mu)$. \end{lem}
\begin{proof} By extrapolation, see the discussion around \eqref{eq:eq23}, it suffices to take a function $f \colon \mathbb{R}^{d_2} \to \mathbb{C}$ and show that $$
\big\| 1_{K^1} M_{\calD^2}^{\langle \mu \rangle_{K^1}} f \big\|_{L^q(w\mu)}
\lesssim \| 1_{K^1} f \|_{L^q(w\mu)} $$ for some $q \in (1, \infty)$ and for all $w \in A_q(\mu)$. We fix some $w \in A_{q}(\mu)$. The above estimate can be rewritten as $$
\big\| M_{\calD^2}^{\langle \mu \rangle_{K^1}} f \big\|_{L^q(\langle w\mu\rangle_{K^1})}|K^1|^{\frac 1q}
\lesssim \| f \|_{L^q(\langle w\mu\rangle_{K^1})}|K^1|^{\frac 1q}. $$
We have the identity \begin{equation}\label{eq:eq22} \langle w\mu\rangle_{K^1}(x_2) = \frac{\langle w\mu\rangle_{K^1}(x_2)}{\langle \mu \rangle_{K^1}(x_2)}\langle \mu \rangle_{K^1}(x_2) = \langle w(\cdot,x_2) \rangle_{K^1}^{\mu(\cdot, x_2)}\langle \mu \rangle_{K^1}(x_2). \end{equation} Define $v(x_2) =\langle w(\cdot,x_2) \rangle_{K^1}^{\mu(\cdot, x_2)}$. We show that $v \in A_q(\langle \mu \rangle_{K^1})$. Let $I^2$ be a cube in $\mathbb{R}^{d_2}$. First, we have that $$ \int_{I^2} v \langle \mu \rangle_{K^1} = \int_{I^2} \langle w(\cdot,x_2) \rangle_{K^1}^{\mu(\cdot, x_2)}\langle \mu(\cdot,x_2) \rangle_{K^1} \ud x_2
=\int_{K^1 \times I^2}w \mu |K^1|^{-1}. $$ Therefore, $ \langle v \rangle_{I^2}^{\langle \mu \rangle_{K^1}} =\langle w \rangle_{K^1 \times I^2 }^\mu. $ H\"older's inequality gives that $$ \big(\langle w(\cdot,x_2) \rangle_{K^1}^{\mu(\cdot, x_2)}\big)^{-\frac{1}{q-1}} \le \big\langle w(\cdot,x_2)^{-\frac{1}{q-1}} \big \rangle_{K^1}^{\mu(\cdot, x_2)}, $$ which shows that $$ \int_{I^2} v^{-\frac{1}{q-1}} \langle \mu \rangle_{K^1} \le \int_{I^2}\big\langle w(\cdot,x_2)^{-\frac{1}{q-1}} \big \rangle_{K^1}^{\mu(\cdot, x_2)} \langle \mu(\cdot, x_2) \rangle_{K^1} \ud x_2
=\int_{K^1\times I^2} w^{-\frac{1}{q-1}} \mu |K^1|^{-1}. $$ Thus, we have that $$ \big(\langle v^{-\frac{1}{q-1}} \rangle_{I^2}^{\langle \mu \rangle_{K^1}}\big)^{q-1} \le \big(\langle w^{-\frac{1}{q-1}} \rangle_{K^1 \times I^2}^{\mu}\big)^{q-1}. $$ These estimates yield that $[v]_{A_q(\langle \mu \rangle_{K^1})} \le [w]_{A_q(\mu)}$.
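The application of H\"older's inequality above is simply Jensen's inequality for the convex function $\varphi(t) = t^{-\frac{1}{q-1}}$, $q > 1$, with respect to the probability measure $1_{K^1}\mu(\cdot,x_2)/\mu(\cdot,x_2)(K^1)$:

```latex
\big( \langle w(\cdot,x_2) \rangle_{K^1}^{\mu(\cdot, x_2)} \big)^{-\frac{1}{q-1}}
= \varphi\big( \langle w(\cdot,x_2) \rangle_{K^1}^{\mu(\cdot, x_2)} \big)
\le \big\langle \varphi(w(\cdot,x_2)) \big\rangle_{K^1}^{\mu(\cdot, x_2)}
= \big\langle w(\cdot,x_2)^{-\frac{1}{q-1}} \big\rangle_{K^1}^{\mu(\cdot, x_2)}.
```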
Recall the identity \eqref{eq:eq22}. Since $\langle \mu \rangle_{K^1} \in A_\infty$, we have that \begin{equation*}
\big\| M_{\calD^2}^{\langle \mu \rangle_{K^1}} f \big\|_{L^q(\langle w\mu\rangle_{K^1})}
=\big\| M_{\calD^2}^{\langle \mu \rangle_{K^1}} f \big\|_{L^q(v\langle \mu\rangle_{K^1})}
\lesssim \| f \|_{L^q(v\langle \mu\rangle_{K^1}))}
=\| f \|_{L^q(\langle w\mu\rangle_{K^1}))}, \end{equation*} where we used Lemma \ref{lem:lem5}. This concludes the proof. \end{proof}
We now show how to estimate \eqref{eq:eq27}. First, we have that $\|F_{j,K^1}\|_{L^2(\sigma_j)}^2$ is less than $ 2^{\frac{2l^1_jd_1}{s'}}$ multiplied by \begin{equation*}
\bigg\| \bigg[\sum_{(L^1_j)^{(l_j)}=K^1} \bigg[\frac{|L^1_j|^{\frac 12 }}{|K^1|}\Big( \sum_{K^2} \frac{1_{K^2}}{|K^2|} M_{\calD^3}^{\langle \sigma_j \rangle_{K^{1,2}}} \big(\langle f_j, h_{L^1_j} \otimes h_{K^2} \rangle \langle \sigma_j \rangle_{K^{1,2}}^{-1}\big)^2\Big)^{\frac 12}\bigg]^{s} \bigg]^{\frac 1s}
\bigg \|_{L^2(\langle\sigma_j\rangle_{K^1})}^2|K^1|. \end{equation*} The exponent $s \in (1, \infty)$ is chosen small enough so that we get a suitable dependence on the complexity through $2^{\frac{2l^1_jd_1}{s'}}$, see the corresponding step in the bi-parameter case. Since $\langle \sigma_j \rangle_{K^1} \in A_\infty(\mathbb{R}^{d_2} \times \mathbb{R}^{d_3})$, we can use Lemma \ref{lem:lem8} to have that the last term is dominated by $$
\bigg\| \bigg[\sum_{(L^1_j)^{(l_j)}=K^1} \bigg[\frac{|L^1_j|^{\frac 12 }}{|K^1|}\Big( \sum_{K^2} \frac{1_{K^2}}{|K^2|}
\big|\langle f_j, h_{L^1_j} \otimes h_{K^2} \rangle \langle \sigma_j \rangle_{K^{1,2}}^{-1}\big|^2\Big)^{\frac 12}\bigg]^{s} \bigg]^{\frac 1s}
\bigg \|_{L^2(\langle\sigma_j\rangle_{K^1})}^2|K^1|. $$ After these key steps it only remains to use Proposition \ref{prop:prop3} twice in a very similar way as in the bi-parameter proof. We are done.
\section{Applications}
\subsection{Mixed-norm estimates} With our main result, Theorem \ref{thm:intro1}, and extrapolation, Theorem \ref{thm:ext}, the following result becomes immediate. \begin{thm} Let $T$ be an $n$-linear $m$-parameter Calder\'on-Zygmund operator. Let $1<p_i^j \le \infty$, $i = 1, \ldots, n$, with $\frac {1}{p^j}= \sum_i \frac{1}{p_i^j} >0$, $j = 1, \ldots, m$. Then we have that \[
\| T(f_1,\ldots, f_n)\|_{L^{p^1} \cdots L^{p^m}}\lesssim \prod_{i=1}^n \| f_i\|_{L^{p^1_i} \cdots L^{p^m_i}}. \] \end{thm} \begin{rem} We understand this as an a priori estimate with $f_i\in L_c^\infty$ -- this is only a concern when some $p_i^j$ is $\infty$. In \cite{LMMOV}, which concerned the bilinear bi-parameter case with \emph{tensor} form CZOs, we went to great lengths to check that this restriction can always be removed. We do not want to get into such considerations here, and prefer this a priori interpretation at least when $n \ge 3$. See also \cite{LMV} for some previous results for bilinear bi-parameter CZOs that are not of tensor form, but where, compared to \cite{LMMOV}, the range of exponents had some limitations in the $\infty$ cases. See also \cite{DO}.
We also mention that mixed-norm estimates for multilinear bi-parameter Coifman-Meyer operators have been previously obtained in \cite{BM1} and \cite{BM2}. Related to this, bi-parameter mixed norm Leibniz rules were proved in \cite{OW}.
\end{rem}
The proof is immediate by extrapolating with tensor form weights. For the general idea see \cite[Theorem 4.5]{LMMOV} -- here the major simplification is that everything can be done with extrapolation and the operator-valued analysis is not needed. This is because the weighted estimate, Theorem \ref{thm:intro1}, is now with the genuinely multilinear weights unlike in \cite{LMMOV, LMV}.
\subsection{Commutators} We will state these applications in the bi-parameter case $\mathbb{R}^d = \mathbb{R}^{d_1} \times \mathbb{R}^{d_2}$. The $m$-parameter versions are obvious. We define $$ [b, T]_k(f_1,\ldots, f_n) := bT(f_1, \ldots, f_n) - T(f_1, \ldots, f_{k-1}, bf_k, f_{k+1}, \ldots, f_n). $$ One can also define the iterated commutators as usual. We say that $b \in \operatorname{bmo}$ if $$
\|b\|_{\operatorname{bmo}} = \sup_R \frac{1}{|R|}
\int_R |b-\ave{b}_R| < \infty, $$ where the supremum is over rectangles. Recall that given $b \in \operatorname{bmo}$, we have \begin{equation}\label{e:litbmo}
\|b\|_{\rm{bmo}}\sim \max\big(\esssup_{x_1\in \mathbb{R}^{d_1}} \|b(x_1,\cdot)\|_{\BMO(\mathbb{R}^{d_2})}, \esssup_{x_2\in \mathbb{R}^{d_2}} \|b(\cdot,x_2)\|_{\BMO(\mathbb{R}^{d_1})}\big). \end{equation} See e.g. \cite{HPW}. In the one-parameter case the following was proved in \cite[Lemma 5.6]{DHL}. \begin{prop}\label{prop:dhl} Let $\vec p=(p_1,\dots, p_n)$ with $1<p_1,\ldots, p_n<\infty$ and $\frac 1p=\sum_i \frac 1{p_i} <1$. Let $\vec w = (w_1,\dots, w_n)\in A_{\vec p}$. Then for any $1\le j\le n$ we have \[ \vec w_{b,z}:=(w_1,\dots, w_je^{\Re(bz)},\dots, w_n)\in A_{\vec p} \]with $[\vec w_{b,z}]_{A_{\vec p}}\lesssim [\vec w]_{A_{\vec p}}$ provided that $$
|z|\le \frac{\epsilon}{\max([w^p]_{A_\infty}, \max_i [w_i^{-p_i'}]_{A_\infty}) \|b\|_{\BMO}}, $$ where $\epsilon$ depends on $\vec p$ and the dimension of the underlying space. \end{prop}
If $1< p_1, \dots, p_n < \infty$, then there holds that $[\vec w]_{A_{\vec p}(\mathbb{R}^{d})} < \infty$ if and only if \[ \max \big(\esssup_{x_1\in \mathbb{R}^{d_1}}[\vec w(x_1,\cdot)]_{A_{\vec p}(\mathbb{R}^{d_2})}, \esssup_{x_2\in \mathbb{R}^{d_2}}[\vec w(\cdot,x_2)]_{A_{\vec p}(\mathbb{R}^{d_1})}\big) < \infty. \] Moreover, we have that the above maximum satisfies $\max (\cdot, \cdot) \le [\vec w]_{A_{\vec p}(\mathbb{R}^{d})}\lesssim \max (\cdot, \cdot)^{\gamma}$ where $\gamma$ is allowed to depend on $\vec p$ and $d$. The first estimate follows from the Lebesgue differentiation theorem. The second estimate can be proved by using Lemma \ref{lem:lem1} and the corresponding linear statement, see \eqref{eq:eq28}. Using this, \eqref{e:litbmo} and Proposition \ref{prop:dhl} gives a bi-parameter version of Proposition \ref{prop:dhl} -- the statement is obtained by replacing $\BMO$ with $\operatorname{bmo}$, and the quantitative estimate is of the form $[w_{b,z}]_{A_{\vec p}} \lesssim [ \vec w ]_{A_{\vec p}}^\gamma$. Now, we have everything ready to prove the following commutator estimate. \begin{thm} Suppose $T$ is an $n$-linear bi-parameter CZO in $\mathbb{R}^d = \mathbb{R}^{d_1} \times \mathbb{R}^{d_2}$, $1 < p_1, \ldots, p_n \le \infty$ and $1/p = \sum_i 1/p_i> 0$. Suppose also that $b \in \operatorname{bmo}$. Then for all $1\le k\le n$ we have the commutator estimate $$
\| [b, T]_k(f_1,\dots, f_n) w\|_{L^p} \lesssim \|b\|_{\operatorname{bmo}} \prod_{i=1}^n \|f_iw_i\|_{L^{p_i}} $$ for all $n$-linear bi-parameter weights $\vec w = (w_1, \ldots, w_n) \in A_{\vec p}$. Analogous results hold for iterated commutators. \end{thm} \begin{proof}
We assume $\|b\|_{\operatorname{bmo}} = 1$. It suffices to study $[b, T]_1$, and in fact we shall prove the following principle: if we have \[
\| T(f_1,\dots, f_n) w\|_{L^p}\lesssim \prod_{i=1}^n \|f_iw_i\|_{L^{p_i}} \] for some $\vec p$ in the Banach range, then \[
\| [b,T]_1(f_1,\dots, f_n) w\|_{L^p}\lesssim \prod_{i=1}^n \|f_iw_i\|_{L^{p_i}}. \] In this principle the form of the $n$-linear operator plays no role ($T$ does not need to be a CZO). The iterated cases follow immediately from this principle and the full range then follows from extrapolation.
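We note that the normalization $\|b\|_{\operatorname{bmo}} = 1$ is harmless, since the commutator is linear in $b$: for a general $b \ne 0$ we may write

```latex
[b, T]_1(f_1,\dots, f_n)
= \|b\|_{\operatorname{bmo}}\, \big[\tilde b, T\big]_1(f_1,\dots, f_n),
\qquad \tilde b := \frac{b}{\|b\|_{\operatorname{bmo}}},
\quad \|\tilde b\|_{\operatorname{bmo}} = 1,
```

which yields the claimed estimate with the factor $\|b\|_{\operatorname{bmo}}$.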
Define \[ T_z^1(f_1, \dots, f_n)=e^{zb} T(e^{-zb}f_1, f_2,\dots, f_n). \] Then, by the Cauchy integral theorem, we get for nice functions $f_1,\dots, f_n$, that \[
[b, T]_1(f_1,\dots, f_n)=\frac{\ud}{\ud z}T_z^1(f_1,\dots, f_n)\Big|_{z=0}
=\frac {1}{2\pi i} \int_{|z|=\delta} \frac{T_z^1(f_1,\dots, f_n)}{z^2}\ud z,\quad \delta>0. \] Since $p\ge 1$, by Minkowski's inequality \[
\| [b, T]_1(f_1,\dots, f_n)w\|_{L^p}\le \frac 1{2\pi \delta^2}\int_{|z|=\delta} \|T_z^1(f_1,\dots, f_n) w\|_{L^p}|\ud z|. \] We choose $$ \delta \sim \frac{1}{\max([w^p]_{A_\infty}, \max_i [w_i^{-p_i'}]_{A_\infty}) }. $$ This allows us to use the bi-parameter version of Proposition \ref{prop:dhl} to have that \begin{align*}
\|T_z^1(f_1,\dots, f_n) w\|_{L^p} &=\|T(e^{-zb}f_1,f_2,\dots, f_n) we^{\Re(bz)}\|_{L^p}\\
&\lesssim \| e^{-zb} f_1w_1e^{\Re(bz)}\|_{L^{p_1}}\prod_{i=2}^n \|f_iw_i\|_{L^{p_i}}
= \prod_{i=1}^n \|f_iw_i\|_{L^{p_i}}. \end{align*} The claim follows. \end{proof}
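For completeness, the derivative identity behind the Cauchy integral trick in the above proof can be verified directly (a formal computation, valid for nice functions by the $n$-linearity of $T$):

```latex
\frac{\ud}{\ud z} T_z^1(f_1,\dots, f_n)\Big|_{z=0}
= \frac{\ud}{\ud z}\Big( e^{zb}\, T(e^{-zb} f_1, f_2, \dots, f_n) \Big)\Big|_{z=0}
= b\, T(f_1,\dots, f_n) - T(b f_1, f_2, \dots, f_n)
= [b, T]_1(f_1,\dots, f_n).
```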
\end{document}
\begin{document}
\begin{abstract} We solve a class of weighted isoperimetric problems of the form \[
\min\left\{\int_{\partial E}w e^V\,dx:\int_E e^V\,dx={\rm constant}\right\} \] where $w$ and $V$ are suitable functions on $\mathbb R^d$. As a consequence, we prove a comparison result for the solutions of degenerate elliptic equations. \end{abstract}
\maketitle
\section{Introduction}
In the celebrated paper \cite{T}, G. Talenti established several comparison results between the solutions of the Poisson equation with Dirichlet boundary condition (with suitable data $f$ and $E$): \begin{equation}\label{problematalenti}
-\Delta u=f \mbox{ in $E$},\qquad u=0 \mbox{ on $\partial E$} \end{equation} and the solutions of the corresponding problem where $f$ and $E$ are replaced by their spherical rearrangements (see \cite[Chapter 3]{LL} for the definition and main properties of spherical rearrangement). Precisely, he proves that if we denote by $v$ the solution of the problem with symmetrized data, then the rearrangement $u^*$ of the (unique) solution $u$ of \eqref{problematalenti} is pointwise bounded by $v$. Moreover he shows that the $L^q$ norm of $\nabla u$ is bounded, as well, by the $L^q$ norm of $\nabla v$, for $q\in(0,2]$. The proof of these facts basically relies on two ingredients: the Hardy-Littlewood-Sobolev inequality and the isoperimetric inequality (see \cite{amfupa} and \cite{LL} for comprehensive accounts on the subjects).
Later on, following such a scheme, many other works have been developed to prove analogous comparison results related to the solutions of PDEs involving different kinds of operators, see for instance \cite{bbmp1,bbmp2,bbmp3,bcm1,bcm2,bla,blafeopos,cinesi} and the references therein. A recurring idea in these works is, roughly speaking, the following: the operator considered is usually linked to a sort of {\em weighted perimeter}. Thus initially it is necessary to solve a corresponding isoperimetric problem; then
the desired comparison results can be obtained following the ideas contained in \cite{T}.
\noindent
For example in \cite{bbmp2} the authors consider a class of weighted perimeters of the form \[
P_w(E)=\int_{\partial E}w(|x|)\,d\mathcal{H}^{d-1}(x), \] where $E$ is a set with Lipschitz boundary and $w:\mathbb R\to[0,\infty)$ a non-negative function, and prove, under suitable convexity assumptions on the weight $w$, that the ball centered at the origin is the unique solution of the mixed isoperimetric problem \[
\min\{P_w(E):|E|={\rm constant}\} \]
where $|\cdot|$ denotes the $d$-dimensional Lebesgue measure. As a consequence they prove comparison results, analogous to those considered by Talenti in \cite{T}, for the solutions of \[ -{\rm div}(w^2\nabla u)=f \mbox{ in $E$},\qquad u=0 \mbox{ on $\partial E$}. \] \noindent Recently in \cite{bdr}, L. Brasco, G. De Philippis and the second author proved a quantitative version of the weighted isoperimetric inequality considered in \cite{bbmp1}. Their proof is achieved by means of a sort of {\em calibration technique}. One advantage of this technique is that it is adaptable to other kinds of problems, such as considering other kinds of functions in the weighted perimeter (e.g.\ Wulff-type weights, see \cite{bf}), or considering different measure spaces, such as $\mathbb R^d$ endowed with the Gauss measure.
\noindent In this paper we consider degenerate elliptic equations with Dirichlet boundary condition of the form \begin{equation}\label{problem}
-\mathrm{div}(w^2\,e^{V}\nabla u)=f\,e^{V} \, \mbox{ in $E$},\qquad u=0 \mbox{ on $\partial E$} \end{equation} where $w$ and $V$ are two given functions, and we aim to prove
comparison results analogous to those in \cite{T}. The particular form in which the measure $e^V$ is written is motivated by the later applications, whose main examples are Gauss-type measures, that is $V(x)=-c|x|^2$. Bearing this instance in mind, we consider a class of mixed isoperimetric problems of the form \[
\min\left\{P_{we^V}(E):\int_E e^{V}={\rm constant}\right\} \] and prove, by means of a calibration technique reminiscent of that developed in \cite{bdr}, that the solutions, under suitable assumptions on $V$ and $w$, are half-spaces, see Proposition \ref{semispazides} and Theorem \ref{brasco0}. Then, using a suitable concept of rearrangement related to the measures considered, we prove, in the Main Theorem in Section \ref{main}, comparison results between the solutions of \eqref{problem} and the solutions of the same equation with rearranged data.
\section{Preliminaries on rearrangement inequalities}\label{prerequisiti}
In this section we introduce the main definitions and properties about the concept of symmetrization and rearrangement we shall make use of.
\noindent Let $\mu$ be a finite Radon measure on $\mathbb R^d$. The \emph{right rearrangement} with respect to $\mu$
is defined, for any Borel set $A$, as
\[ R_A^\mu=\{(x_1,x')\in\mathbb R\times\mathbb R^{d-1}\,:\,x_1> t_A\}, \] where $t_A=\inf\left\{t\,: \mu(A)=\mu(\{(x_1,x')\in\mathbb R\times\mathbb R^{d-1}\,:\,x_1> t\})\,\right\}$. Notice that if $d\mu=fdx$, for some positive and measurable function $f$, then the value of $t_A$ is uniquely determined.\\
\noindent Given a non-negative Borel function $f:\mathbb R^d\to[0,+\infty)$, we call \emph{right increasing rearrangement} of $f$ the function $f^{*\mu}$ given by \[ f^{*\mu}(x)=\int_0^{+\infty}\chi_{R^\mu_{\{f>t\}}}(x)\,dt \] where $\chi_A$ is the characteristic function of the set $A$. As an aside we notice that the right increasing rearrangement of the characteristic function of a Borel set $A$ coincides with the characteristic function of $R_A^\mu$. Clearly $f^{*\mu}$ is non-negative, increasing with respect to the first variable $x_1$, and constant on the sets $\{(x_1,x')\in\mathbb R\times\mathbb R^{d-1}:x_1=t\}$, for $t\in\mathbb R$. Moreover $f$ and $f^{*\mu}$ share the same distribution function: \[ \mu_f(t):=\mu(\{f>t\})=\mu(\{f^{*\mu}>t\})=\mu_{f^{*\mu}}(t). \]
We furthermore define $f^{\star\mu}:\mathbb R^+\rightarrow\mathbb R^+$ as the smallest decreasing function satisfying $f^{\star\mu}(\mu_f(t))\geq t$; in other words \[ f^{\star\mu}(s)=\inf\{t>0\,:\, \mu_f(t)<s\}. \] It is useful to bear in mind that $\{s:f^{\star\mu}(s)>t\}=[0,\mu_f(t)]$, so that by the Layer-Cake Representation Theorem (see for instance \cite{LL}) we have \begin{equation}\label{symlc} \int_0^{\mu(\{x_1>t\})}f^{\star\mu}(s)\,ds=\int_0^{\infty}\min\left(\mu_f(s),\mu(\{x_1>t\})\right)ds=\int_{\{x_1>t\}} f^{*\mu}\,d\mu. \end{equation}
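As a purely illustrative aside (not part of the paper), these definitions can be tested in a discrete setting: on a one-dimensional grid whose cells all carry the same $\mu$-weight, the right increasing rearrangement $f^{*\mu}$ amounts to sorting the sampled values in increasing order along $x_1$, and $f^{\star\mu}$ to sorting them in decreasing order. The Python sketch below (all names are ours) checks equimeasurability and the discrete analogue of the outer equality in \eqref{symlc}.

```python
import numpy as np

rng = np.random.default_rng(0)

n, dx = 200, 0.05                 # grid cells of equal mu-weight dx
f = rng.random(n) * 3.0           # samples of a non-negative function along x_1

f_star  = np.sort(f)              # discrete right increasing rearrangement f^{*mu}
f_dstar = np.sort(f)[::-1]        # discrete decreasing rearrangement f^{star mu}

# equimeasurability: f and f^{*mu} share the same distribution function
for t in np.linspace(0.0, 3.0, 31):
    assert np.isclose(np.sum(f > t) * dx, np.sum(f_star > t) * dx)

# discrete layer-cake identity: for a cut {x_1 > t} keeping the k rightmost
# cells, int_0^{mu({x_1>t})} f^{star mu} equals int_{x_1>t} f^{*mu} dmu
for k in range(n + 1):
    lhs = np.sum(f_dstar[:k]) * dx     # integral of f^{star mu} over (0, k*dx)
    rhs = np.sum(f_star[n - k:]) * dx  # integral of f^{*mu} over the k rightmost cells
    assert np.isclose(lhs, rhs)
```

Both sides pick out the $k$ largest sampled values, which is the discrete content of \eqref{symlc}.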
\noindent We conclude this section by proving the {\it Hardy-Littlewood} rearrangement inequality related to the right symmetrization. \begin{lem}[Hardy-Littlewood rearrangement inequality]\label{hardy} Let $f$ and $g$ be non-negative Borel functions from $\mathbb R^d$ to $\mathbb R$. Then for any non-negative Borel measure $\mu$ we have \[ \int_{\mathbb R^d}f\,g\,d\mu\leq\int_{\mathbb R^d}f^{*\mu}g^{*\mu}d\mu. \] \end{lem} \begin{proof} We have \[ \begin{aligned} \int_{\mathbb R^d}f\,g\,d\mu&=\int_{\mathbb R^d}\int_0^\infty\int_0^\infty \chi_{\{f>t\}}(x)\chi_{\{g>s\}}(x)\,dt\,ds\,d\mu(x)\\ &=\int_0^\infty\int_0^\infty\int_{\mathbb R^d} \chi_{\{f>t\}\cap \{g>s\}}(x)\,d\mu(x)\,dt\,ds\\ &=\int_0^\infty\int_0^\infty\mu(\{f>t\}\cap\{g>s\})\,dt\,ds \\ &\le\int_0^\infty\int_0^\infty\min(\mu(\{f>t\}),\,\mu(\{g>s\}))\,dt\,ds \\& =\int_0^\infty\int_0^\infty\min(\mu(\{f^{*\mu}>t\}),\,\mu(\{g^{*\mu}>s\}))\,dt\,ds \\ &=\int_0^\infty\int_0^\infty\mu(\{f^{*\mu}>t\}\cap\{g^{*\mu}>s\})\,dt\,ds= \int_{\mathbb R^d}f^{*\mu}\,g^{*\mu}\,d\mu, \end{aligned} \] where we used the fact that $\{f^{*\mu}>t\}$ and $\{g^{*\mu}>s\}$ are half-spaces of the form $\{(x_1,x')\in\mathbb R\times\mathbb R^{d-1}:x_1>r\}$ for some $r\in\mathbb R$ and so \[
\min(\mu(\{f^{*\mu}>t\}),\,\mu(\{g^{*\mu}>s\}))=\mu(\{f^{*\mu}>t\}\cap\{g^{*\mu}>s\}). \]
\end{proof} \begin{remark} Setting $g=\chi_A$ in Lemma \ref{hardy} and thanks to \eqref{symlc} we get \begin{equation}\label{hardy2} \int_A f\,d\mu\leq \int_{R_A^\mu}f^{*\mu}\,d\mu=\int_0^{\mu(A)}f^{\star\mu}(s)\,ds. \end{equation}
\end{remark}
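In the same discrete setting with equal cell weights, Lemma \ref{hardy} reduces to the classical rearrangement inequality for finite sequences; the following Python sketch (ours, purely illustrative) checks it on random samples.

```python
import numpy as np

rng = np.random.default_rng(1)

# Discrete Hardy-Littlewood check: with equal cell weights the right
# increasing rearrangement is ascending sorting, and the lemma becomes
# the classical rearrangement inequality  sum f_i g_i <= sum f_(i) g_(i).
for _ in range(200):
    n = int(rng.integers(2, 100))
    f = rng.random(n)
    g = rng.random(n)
    lhs = np.sum(f * g)                    # int f g dmu (up to the cell weight)
    rhs = np.sum(np.sort(f) * np.sort(g))  # int f^{*mu} g^{*mu} dmu
    assert lhs <= rhs + 1e-12
```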
\section{A class of weighted isoperimetric inequalities}
Given a measurable function $V:\mathbb R^d\to \mathbb R$ we denote by $\mu[V]$ the absolutely continuous measure whose density equals $e^ V$, that is, for any measurable set $E\subset\mathbb R^d$ \[
\mu[V](E)=\int_E e^{V(x)} dx; \] in what follows, in order to simplify the notation and when there is no risk of confusion, we will drop the dependence on $V$, writing $\mu$ instead of $\mu[V]$. Moreover we will often adopt the notation $x=(x_1,x')\in\mathbb R\times\mathbb R^{d-1}$ and write $R_A$ instead of $R_A^{\mu[V]}$ for the right rearrangement of $A$ with respect to the measure $\mu[V]$. Given a Borel {\it weight} function $w:\mathbb R\to[0,+\infty]$ we define, for any open set $A$ with Lipschitz boundary, the following concept of {\em weighted perimeter}: \[ P_{w,V}(A)=\int_{\partial A}w(x_1)e^{V(x)}d\mathcal H^{d-1}(x). \] In the following proposition we show that, under suitable conditions on $w$ and $V$, the half-spaces of the form $\{(x_1,x'):x_1>t\}$ are the only minimizers of the weighted perimeter among sets of fixed volume with respect to the measure $\mu[V]$.
\begin{pro}\label{semispazides} Let $A\subset\mathbb R^d$ be a set with Lipschitz boundary. Suppose that $w : \mathbb R\to\mathbb R^+$ and $V:\mathbb R^d\to\mathbb R$ are $C^1$-regular functions satisfying the following assumptions: \begin{itemize} \item[{\it(i)}] $\mu(A)=\mu(R_A)<+\infty$,
\item[{\it (ii)}] the function $\partial_1 V(x)$ depends only on $x_1$ and $g(x):=-w'(x_1)-w(x_1)\partial_1 V(x)$ is a non-negative decreasing function on the real line.
\end{itemize} Then \begin{equation}\label{disprincipale} P_{w,V}(A)\geq P_{w,V}(R_A). \end{equation}
\end{pro}
\begin{proof} We start by noticing that if $P_{w,V}(A)=+\infty$ there is nothing to prove. Hence we can suppose that \begin{equation}\label{perimetrofinito}
P_{w,V}(A)<+\infty. \end{equation} Let $e_1=(1,0,\dots,0)\in\mathbb R^d$ and consider the vector field $-e_1w(x_1)e^{V(x)}$. Its divergence is given by \[ \mathrm{div} (-e_1w(x_1)e^{V(x)})=(-w'(x_1)-w(x_1)\partial_1 V(x))e^{V(x)}=g(x)e^{V(x)}. \] By an application of the Divergence Theorem we have \begin{equation}\label{perimestimate} \begin{aligned} \int_Ag(x)d\mu(x)&=\int_{A}\mathrm{div} (-e_1w(x_1)e^{V(x)})dx\\ &=\int_{\partial A}w(x_1)e^{V(x)}\langle\nu_A(x),-e_1\rangle d\mathcal H^{d-1}(x)\\ &\le\int_{\partial A}w(x_1)e^{V(x)}d\mathcal H^{d-1}(x)=P_{w,V}(A), \end{aligned} \end{equation} \noindent where $\nu_A(x)$ is the outer unit normal to $\partial A$ at $x$. Let $t_A$ be a real number such that the right half-space $R_A=\{(x_1,x'):x_1\ge t_A\}$ satisfies $\mu(R_A)=\mu(A)$. Then, since the outer normal of $R_A$ is the constant vector field $-e_1$, the inequality in \eqref{perimestimate} turns into an equality if we replace $A$ with $R_A$. Notice that by condition $(ii)$ and \eqref{perimestimate} we have \[
P_{w,V}(R_A)=\int_{R_A\setminus A}g\,d\mu+\int_{R_A\cap A}g\, d\mu\le g(t_A)\mu(A)+P_{w,V}(A). \] Thanks to assumption $(i)$ and \eqref{perimetrofinito} such quantities are finite and so we get \[ P_{w,V}(A)-P_{w,V}(R_A)\geq\int_Ag(x)d\mu(x)-\int_{R_A}g(x)d\mu(x). \] Since, by definition, $\mu(A)=\mu(R_A)<+\infty$ again by condition $(i)$ we obtain $\mu(A\setminus R_A)=\mu(R_A\setminus A)<+\infty$. Thus \begin{equation}\label{finito} \begin{aligned} \int_Ag(x)&d\mu(x)-\int_{R_A}g(x)d\mu(x)=\int_{A\setminus R_A}g(x)d\mu(x)-\int_{R_A\setminus A}g(x)d\mu(x)\\ &=\int_{A\setminus R_A}(g(x)-g(t_Ae_1))d\mu(x)-\int_{R_A\setminus A}(g(x)-g(t_Ae_1))d\mu(x). \end{aligned} \end{equation} Since every $x\in A\setminus R_A$ (respectively $x\in R_A\setminus A$) satisfies $\langle x,e_1\rangle<t_A$ (respectively $\langle x,e_1\rangle>t_A$), by condition {\it (ii)} we deduce \begin{equation}\label{quantitativa} \begin{aligned}
P_{w,V}(A)-P_{w,V}(R_A)&\ge\int_{A\setminus R_A}|g(x)-g(t_Ae_1)|d\mu(x)+\int_{R_A\setminus A}|g(x)-g(t_Ae_1)|d\mu(x)
\\&=\int_{A\Delta R_A}|g(x)-g(t_Ae_1)|d\mu\geq 0, \end{aligned} \end{equation} where $A\Delta R_A=(A\setminus R_A)\cup (R_A\setminus A)$ stands for the symmetric difference between $A$ and $R_A$. This concludes the proof. \end{proof}
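In dimension $d=1$ the weighted perimeter of a finite union of intervals reduces to the sum of $w\,e^V$ over its endpoints, so inequality \eqref{disprincipale} can be probed numerically. The following Python sketch is purely illustrative (not part of the paper); it uses $w(x_1)=e^{-2x_1}$ and $V(x_1)=-x_1|x_1|$, for which condition $(ii)$ holds, and compares random two-interval sets with the right half-line of equal $\mu$-measure.

```python
import numpy as np

a, c = 2.0, 1.0
V    = lambda x: -c * x * np.abs(x)
dens = lambda x: np.exp(V(x))            # density of mu
w    = lambda x: np.exp(-a * x)

# cumulative mass on a fine grid (e^V is negligible beyond x = 8)
X   = np.linspace(-5.0, 8.0, 200001)
seg = 0.5 * (dens(X)[1:] + dens(X)[:-1]) * np.diff(X)
CumL = np.concatenate([[0.0], np.cumsum(seg)])   # mu((-5, x))
Phi  = CumL[-1] - CumL                           # Phi(x) ~ mu({x_1 > x})

mu_upto = lambda x: np.interp(x, X, CumL)

rng = np.random.default_rng(2)
for _ in range(50):
    p = np.sort(rng.uniform(-2.0, 4.0, size=4))  # A = (p0,p1) U (p2,p3)
    mass = (mu_upto(p[1]) - mu_upto(p[0])) + (mu_upto(p[3]) - mu_upto(p[2]))
    P_A  = np.sum(w(p) * dens(p))                # weighted perimeter of A
    tA   = np.interp(mass, Phi[::-1], X[::-1])   # Phi(tA) = mu(A)
    P_R  = w(tA) * dens(tA)                      # perimeter of R_A = {x_1 > tA}
    assert P_A >= P_R * (1.0 - 1e-6)
```

Since $w\,e^V$ is strictly decreasing here, the half-line's single boundary point always beats the leftmost endpoint of $A$, in accordance with \eqref{disprincipale}.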
\begin{oss}[Necessity of the assumptions]\rm We stress that the integrability condition $(i)$ is necessary for formulas \eqref{perimestimate} and \eqref{finito} (and thus for our proof) to work.
\noindent Concerning condition $(ii)$, we note that it is needed just for technical reasons.
Nonetheless we stress that our proof offers a slightly stronger inequality than \eqref{disprincipale}. Indeed the right-hand side of \eqref{quantitativa} may be seen as a modulus of continuity of the $L^1$ distance between $A$ and $R_A$. Thus it would be interesting to understand how far our hypotheses are from optimality (compare also with \cite[Remark $2.3$]{bdr}). \end{oss}
\begin{oss}[Equality cases]\rm An inspection of the proof of Proposition \ref{semispazides}, and in particular of inequality \eqref{perimestimate}, shows that if $w>0$, then we have equality in \eqref{disprincipale} only if $A$ is equal to the half-space $R_A$, up to a set of zero $d$-dimensional Lebesgue measure. On the other hand, if the set $\{w=0\}$ has positive Lebesgue measure, we cannot expect any kind of uniqueness for the equality cases of such an inequality. \end{oss}
\begin{example}\label{esempione}\rm A non-trivial example fulfilling condition $(ii)$ of Proposition \ref{semispazides} is the following \[
V(x_1,x')=-c(x_1|x_1|+|x'|^2),\quad w(x_1)=e^{-ax_1}, \] with $a,c>0$ constants satisfying $a^2-2c\ge0$. To prove this fact we first observe that, for $x_1\ne0$, such a condition
is equivalent to requiring that \begin{equation}\label{oder} w''(x_1)+V_1''(x_1)w(x_1)+V_1'(x_1)w'(x_1)\ge0, \end{equation} where $V_1'(x_1):=\partial_1 V(x)$ (which depends only on $x_1$); this turns out to be equivalent, in our example, to \[
a^2-2c+2ac|x_1|\ge0. \] Then, since $-w'(x_1)-w(x_1)\partial_1 V(x_1)$ is continuous at $x_1=0$, condition $(ii)$ is satisfied everywhere.
\end{example}
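The computation above can be sanity-checked numerically. The following Python sketch (ours, purely illustrative) verifies that $g(x_1)=-w'(x_1)-w(x_1)\partial_1V$ is non-negative and decreasing when $a^2-2c\ge0$, and that monotonicity fails when $a^2-2c<0$.

```python
import numpy as np

x = np.linspace(-3.0, 3.0, 60001)

def g(x, a, c):
    # g = -w' - w * d_1 V  for  w = e^{-a x_1},  V = -c(x_1|x_1| + |x'|^2),
    # so d_1 V = -2c|x_1| and g = (a + 2c|x_1|) e^{-a x_1}
    return (a + 2.0 * c * np.abs(x)) * np.exp(-a * x)

# a^2 - 2c >= 0: g is non-negative and decreasing, i.e. condition (ii) holds
ga = g(x, 2.0, 1.0)
assert np.all(ga >= 0.0) and np.all(np.diff(ga) <= 1e-12)

# a^2 - 2c < 0: monotonicity fails (near x_1 = 0+ one has g' = 2c - a^2 > 0)
gb = g(x, 1.0, 1.0)
assert np.any(np.diff(gb) > 0.0)
```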
To transform inequality \eqref{disprincipale} into a well posed isoperimetric problem, it would be advisable
to eliminate the integrability hypothesis $(i)$ in Proposition \ref{semispazides} by requiring that
$\mu(\mathbb R^d)<+\infty$. This condition, together with the ordinary differential inequality required in assumption $(ii)$, is however
seldom satisfied.
Hence, to get other instances of functions which fulfill
inequality
\eqref{oder} together with the integrability property {\it (i)} of Proposition
\ref{semispazides} it is worth restricting our attention to the half-space
\[
\mathbb R^d_+=\{(x_1,x')\in\mathbb R\times\mathbb R^{d-1}: x_1>0\}.
\]
As an immediate corollary of Proposition \ref{semispazides} we get that the solution of the problem
\begin{equation}\label{errore}
\min\left\{P_{w,V}(A): A\subseteq\mathbb R^d_+,\,\,\mu(A)=c, \,\,\partial A\,\,{\rm Lipschitz}\right\}
\end{equation} \noindent is given by $R_c=\{x_1\ge t_c\}$ where $t_c$ is such that $\mu(R_c)=c$.\\
\begin{oss}\label{osservazionciona2}\rm Notice that the non-mixed Gauss case, $w$ constant and $V(x)={-c|x|^2}$, is not covered by our hypotheses.
Nevertheless in this case examples of functions $w$ which satisfy the hypotheses of Proposition \ref{semispazides} are given by $w(t)=t^{-a}$ with $a\ge1$ or $w(t)=b+e^{-at}$, with $a,b\ge 0$ such that $a^2-2c(1+b)>0$ (as can be easily seen reasoning as in the previous example). In the latter case, at least when $b=0$, we have that \[
we^V=e^{a^2/(4c)}\exp{\left(-c\left|x+{\bf e_1}\frac{a}{2c}\right|^2\right)}, \] where ${\bf e_1}=(1,0,\dots,0)\in\mathbb R^d$, which can be rephrased\footnote{as suggested to us by an anonymous Referee.} as the fact that the solutions of the isoperimetric problem in the half-space $\mathbb R^d_+$ with (suitable) mixed Gaussian conditions \[
\min\left\{P_{\gamma_{\sigma,\eta}}(E): \gamma_{\sigma,0}(E)={\rm constant},\,\, E\subseteq \mathbb R^d_+,\,\,\partial E \,\,{\rm Lipschitz} \right\} \] are right half-spaces. Here we denote by $\gamma_{\sigma,\eta}$ the normal distribution whose covariance matrix is $\sigma\,{\rm Id}$ and whose mean vector is $\eta=-\frac{a}{2c}{\bf e_1}$. If $b\ne0$, the only change is that the perimeter is weighted by the sum of two Gaussian measures. We recall that, as pointed out in the Introduction, similar problems related to the Gauss measure are considered in \cite{bbmp3,bcm1,bla,blafeopos,cinesi}. \end{oss}
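The completing-the-square identity used above (for $b=0$) can be confirmed by a direct numerical comparison; the following Python snippet is a trivial illustration of ours, in dimension one.

```python
import numpy as np

a, c, b = 2.0, 1.0, 0.0                 # b = 0 case of the remark; a^2 - 2c(1+b) > 0
x1 = np.linspace(-4.0, 4.0, 2001)

lhs = (b + np.exp(-a * x1)) * np.exp(-c * x1**2)                 # w e^V
rhs = np.exp(a**2 / (4 * c)) * np.exp(-c * (x1 + a / (2 * c))**2)  # shifted Gaussian

# e^{-a x_1 - c x_1^2} = e^{a^2/(4c)} e^{-c (x_1 + a/(2c))^2}
assert np.allclose(lhs, rhs)
```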
Notice that we defined the perimeter $P_{w,V}$ only for sets with Lipschitz boundary, but for our later applications it will be useful to have a definition of perimeter which also covers less regular subsets of $\mathbb R^d$. A measurable set $A$ is said to have locally finite (Euclidean) perimeter (we refer to \cite{M} for a complete overview on the subject) if there exists a vector-valued Radon measure $\nu_A$ called \emph{Gauss--Green measure} of the set $A$ such that, for every $T\in C_c^1(\mathbb R^d;\mathbb R^d)$, it holds true that \[ \int_A \mathrm{div} T =\int_{\mathbb R^d}\langle T,d\nu_A\rangle. \] The perimeter of $A$ is defined in terms of the total variation of the Gauss--Green measure of $A$ as
$P(A)=|\nu_A|(\mathbb R^d)$. For any set $A$ of locally finite perimeter we then define the \emph{weighted perimeter} $P_{w,V}$ by \[
P_{w,V}(A)=\int_{\mathbb R^d}w(x_1)e^{V(x)}\,d|\nu_A|(x). \]
Since when $A$ has Lipschitz boundary $|\nu_A|=\mathcal H^{d-1}\llcorner \partial A$, the above definition is coherent with the one given at the beginning of this section on such sets. \\
\begin{thm}\label{brasco0} Let $w:\mathbb R\to\mathbb R^+$ and $V:\mathbb R^d\to\mathbb R$ be $C^1$-regular functions satisfying condition $(ii)$ of Proposition \ref{semispazides}. Suppose moreover that $\mu(\mathbb R_+^d)<+\infty$; then the problem \[ \min\left\{P_{w,V}(A): A\subseteq\mathbb R^d_+,\,\,\mu(A)=c\right\} \] admits a solution, and this solution coincides with the one of \eqref{errore}. \end{thm}
\begin{proof} Let $A$ be a measurable set of locally finite perimeter and suppose, by contradiction, that $P_{w,V}(A)<P_{w,V}(R_A)$. We start by noticing that $P_{w,V}(R_A)<+\infty$; indeed, recalling \eqref{perimestimate}, we have \[ P_{w,V}(R_A)=\int_{R_A}g(x)\,d\mu(x)\le g(0)\mu(A). \]
By \cite[Theorem II.2.8]{M} we can find a sequence of sets $A_n$ with smooth boundary such that
$\chi_{A_n}\rightarrow\chi_A$ in $L_{\rm{loc}}^1(\mathbb R^d)$ and $|\nu_{A_n}| \rightharpoonup ^* |\nu_A|$, where $\rightharpoonup ^*$ indicates the weak* convergence of Radon measures. Since $\mu(\mathbb R^d_+)<+\infty$, we also have that \begin{equation}\label{convvol} \chi_{A_n}\rightarrow\chi_A \,\,\,\, \mathrm{in }\, L^1(\mathbb R^d, \mu) \end{equation}
and, since $we^V$ is a continuous function, \begin{equation}\label{convper}
we^V|\nu_{A_n}| \rightharpoonup ^* we^V|\nu_A|. \end{equation} Thanks to \eqref{convper} we get \[ P_{w,V}(A)=\lim_{n\to\infty} P_{w,V}(A_n)\ge \lim_{n\to\infty} P_{w,V}(R_{A_n}). \] We are left to show that $\lim_{n\to\infty} P_{w,V}(R_{A_n})=P_{w,V}(R_A)$, but \[
|P_{w,V}(R_A)-P_{w,V}(R_{A_n})|\le g(0)|\mu(A)-\mu(A_n)|, \] and we can conclude thanks to \eqref{convvol} and the fact that $\mu(\mathbb R^d_+)<+\infty$. \end{proof}
\section{Main result}\label{main}
In this section we consider sets $E\subseteq\mathbb R^d_+$ and we define $d\mu=e^V\,dx$, $R_E=\{x_1>t_E\}$, where $t_E\in\mathbb R$ is such that $\mu(R_E)=\mu(E)$, and $f^*=f^{*\mu}$ the right rearrangement of a function $f$ with respect to $\mu$. In what follows we consider problems of the form \begin{equation}\label{brasco} \left\{ \begin{array}{ll}
-\mathrm{div}(w^2\,e^{V}\nabla u)=f\,e^{V} & \mbox{in $E$}\\
u=0 & \mbox{on $\partial E$}\end{array} \right. \end{equation} which must be intended in the weak sense. Precisely, a solution of \eqref{brasco} is a function $u\in H^1_0(e^V,w^2e^V,E)$, defined as the space of functions in $L^2(E,e^V)$ whose weak gradients are in $L^2(E,w^2e^V)$ and which vanish on the boundary of $E$ in the trace sense\footnote{which is possible thanks to the regularity of $w$ and $V$, provided $E$ has Lipschitz boundary.}, and which satisfies \begin{equation}\label{weakproblem0} \int_E\langle \nabla u,\nabla \phi\rangle w^2e^V\,dx=\int_E f\,\phi\,e^V\,dx \end{equation}
for any $\phi\in H^1_0(e^V,w^2e^V,E)$. \\ The main aim of this section is to prove {\em a priori} estimates for the solutions of problem \eqref{brasco}. For this reason we shall always assume that a solution $u$ exists. Clearly this requirement depends on the choice of $w$, $V$ and $f$. General instances of such functions for which the existence of a solution for problem \eqref{brasco} is guaranteed can be found in \cite{tru} (see also \cite{bla,cinesi,bbmp3,blafeopos}). Here we limit ourselves to state that most of the examples considered in Remark \ref{osservazionciona2}, such as the {\em mixed-Gaussian case}
$V(x)=-c|x|^2$, $w(t)=b+e^{-at}$ with $a^2- 2c(1 + b) > 0$ and $b$ strictly positive, are covered by the cases considered in \cite{tru}, whenever $f\in L^2(E,e^V)$.
\begin{mt}\label{mainthm} Suppose that the set $E\subset\mathbb R^d_+=\{(x_1,x'):x_1>0\}$ and the functions $w:[0,+\infty)\to(0,+\infty)$ and $V:\mathbb R^d \to\mathbb R$ satisfy the hypotheses of Proposition \ref{semispazides}. Consider the two problems \begin{equation}\label{problemthm} \left\{ \begin{array}{ll}
-\mathrm{div}(w^2\,e^{V}\nabla u)=f\,e^{V} & \mbox{in $E$}\\
u=0 & \mbox{on $\partial E$}\end{array} \right. \end{equation} and \begin{equation}\label{problemsym} \left\{ \begin{array}{ll}
-\mathrm{div}(w^2\,e^{V}\nabla v)=f^{*}e^{V} & \mbox{in $R_E$}\\
v=0 & \mbox{on $\partial R_E$}\end{array} \right. \end{equation} where $0<f\in L^2(\mathbb R^d_+,\mu)$. Then the problem \eqref{problemsym} has as solution the one variable function $v(z)$ given by \begin{equation}\label{v}
v(z,z')=v(z)=\int_{\mu(\{x_1> z\})}^{\mu(R_E)}\frac{1}{h^2(s)}\left(\int_0^sf^\star(\xi)\,d\xi\right)\,ds, \end{equation} where \begin{equation}\label{acca}
h(m)=w(\Phi^{-1}(m))\int_{\mathbb R^{d-1}}e^{V(\Phi^{-1}(m),x')}\,dx', \end{equation} where $\Phi(t)=\mu(\{x_1>t\})$. Moreover, for any solution $u$ of the problem \eqref{problemthm}, we have
\begin{equation}\label{tesi1}
u^*(x)\le v(x), \end{equation} and, for any $q\in(0,2]$, \begin{equation}\label{tesi2}
\int_E |\nabla u|^qw^q \,d\mu \le \int_{R_E} |\nabla v|^qw^q \,d\mu. \end{equation}
\end{mt}
\begin{proof} Let us suppose for the moment that the function $v$ given in \eqref{v} is a solution for the problem \eqref{problemsym}. To prove \eqref{tesi1} and \eqref{tesi2} we consider the functions $\phi_h$ defined as \[ \phi_h(x)=\left\{ \begin{array}{ll}
\mathrm{sign\ }(u) & \mbox{if $|u|>t+h$}\\
\frac{u(x)-t\mathrm{sign\ } u(x)}{h} & \mbox{if $|u|\in [t,t+h)$}\\
0 & \mbox{if $|u|<t$},\end{array} \right. \]
where $0 \leq t< \mathrm{ess\,sup}|u|$ and $h >0$. Notice that, for every $h>0$, $\phi_h$ is an admissible test function, since the solution $u$ belongs to the space $H^1_0(e^V,w^2e^V,E)$. Then \eqref{weakproblem0} turns into \[
\frac 1 h \int_{\{|u|\in[t,t+h)\}}\langle \nabla u,\nabla u\rangle w^2\,d\mu=\frac 1 h\int_{\{|u|\in[t,t+h)\}} f\,
(u-t\frac{u}{|u|}) d\mu+\int_{\{|u|>t+h\}}f\,\mathrm{sign\ }(u)\,d\mu. \] Taking the limit for $h\to 0$, we get \begin{equation}\label{stima1}
-\frac{d}{dt}\int_{\{|u|>t\}}|\nabla u|^2w^2\,d\mu=\int_{\{|u|>t\}}f\,d\mu. \end{equation} Let us analyze the left-hand side of equation \eqref{stima1}. We claim that the following inequality holds true for almost every $t$: \begin{equation}\label{s2}
-\frac{d}{dt}\int_{\{|u|>t\}}|\nabla u|^2w^2\,d\mu\ge\frac{\left(-\frac{d}{dt}
\int_{\{|u|>t\}}|\nabla u|w\,d\mu\right)^2}{-\mu'_u(t)}, \end{equation} where $\mu_u(t)$ is the distribution function of $u$ introduced in Section \ref{prerequisiti}.\\ \noindent Indeed, $\mu_u(t)$ is a decreasing function and hence differentiable for almost every $t$; thanks to the H\"older inequality we get \[
\begin{aligned}
-\frac{d}{dt}\int_{\{|u|>t\}}&|\nabla u|w\,d\mu=\lim_{h\rightarrow 0}\frac 1 h \int_{t<|u|<t+h}|\nabla u|w\,d\mu\\
&\le\lim_{h\rightarrow 0}\left(\int_{\{t<|u|<t+h\}}|\nabla u|^2w^2\,d\mu\right)^{1/2}\left(\int_{\{t<|u|<t+h\}}\frac{1}{h^2}\,d\mu\right)^{1/2}\\
&=\lim_{h\rightarrow 0}\left(\frac 1 h\int_{\{t<|u|<t+h\}}|\nabla u|^2w^2\,d\mu\right)^{1/2}\left(\frac 1 h\int_{\{t<|u|<t+h\}} 1\,d\mu\right)^{1/2}\\
&=\left(-\frac{d}{dt}\int_{\{|u|>t\}}|\nabla u|^2w^2\,d\mu\right)^{1/2}\left(-\mu_u'(t)\right)^{1/2}.
\end{aligned} \] By the Co-Area formula and the fact that $w$ is strictly positive and $C^1$-regular, we easily get that the set $\{u>t\}$ is a set of locally finite (Euclidean) perimeter. Thus, thanks to Proposition \ref{semispazides} and Theorem \ref{brasco0} we get \begin{equation}\label{stima2}
-\frac{d}{dt}\int_{\{|u|>t\}}|\nabla u|w\,d\mu=\int_{\{|u|=t\}}w\,e^V\,d\mathcal H^{d-1}=P_{w,V}(\{|u|>t\})\ge P_{w,V}({\{u^*>t\}}). \end{equation} Recall the function \begin{equation}
\Phi(t)=\mu(\{x_1>t\}). \end{equation} We recall that the weight function $w$ is constant on the boundary of the super level sets of $u^*$, so that the perimeter of $\{u^*>t\}$ can be written as \[
P_{w,V}(\{u^*>t\})=w(\tau)\int_{\mathbb R^{d-1}}e^{V(\tau,x')}\,dx'. \] Moreover $\tau\in\mathbb R$ satisfies $\mu_{u^*}(t)=\Phi(\tau)$, that is, $\tau=\Phi^{-1}(\mu_{u^*}(t))$ (notice that $\Phi$ is a strictly decreasing function and thus invertible), so that we can write the previous formula as \begin{equation}\label{stima3}
P_{w,V}(\{u^*>t\})=w(\Phi^{-1}(\mu_{u^*}(t)))\int_{\mathbb R^{d-1}}e^{V(\Phi^{-1}(\mu_{u^*}(t)),x')}\, dx'=h(\mu_{u^*}(t)). \end{equation} Plugging \eqref{stima2} in \eqref{s2}, and recalling \eqref{stima3}, we get that \begin{equation}\label{stima4}
-\frac{d}{dt}\int_{\{|u|>t\}}|\nabla u|^2w^2\,d\mu\ge\frac{h(\mu_{u^*}(t))^2}{-\mu_{u^*}'(t)}. \end{equation}
We now estimate the right-hand side of \eqref{stima1}: inequality \eqref{hardy2} with $A=\{|u|>t\}$ turns into \begin{equation}\label{stima5}
\int_{\{|u|>t\}}f\,d\mu\le \int_{\{u^*>t\}}f^*\,d\mu=\int_0^{\mu_{u^*}(t)}f^\star(s)\,ds. \end{equation} Combining \eqref{stima5} and \eqref{stima4} we get \begin{equation}\label{stima6}
\frac{\left(\int_0^{\mu_{u^*}(t)}f^\star(s)\,ds\right)\mu_{u^*}'(t)}{h^2(\mu_{u^*}(t))}\le-1. \end{equation} Reasoning analogously for the function $v$, we easily see that, since $v$ is constant on every set $\{x_1=t\}$ and since $v=v^*$, \eqref{stima6} holds for $v$ as an equality. Consider now the real function \[
F(r)=\frac{\int_0^r f^\star(s)\,ds}{h(r)^2}, \] and let $G$ be a primitive of $F$. Since $F\ge0$, we have that $G$ is increasing. Moreover by our previous analysis we have that \[ F(\mu_{u^*}(t))\mu_{u^*}'(t)\le-1= F(\mu_v(t))\mu_v'(t). \] We recall that here $\mu_{u^*}'(t)$ denotes the derivative almost everywhere of the function $\mu_{u^*}(t)$. Moreover $t\mapsto G(\mu_{u^*}(t))$ is a monotone non-increasing function which satisfies the chain rule at any point of differentiability of $\mu_{u^*}$, so that, by \cite[Corollary $3.29$]{amfupa}, we get that \begin{equation}\label{palle1}
G(\mu_{u^*}(t))\le G(\mu_{u^*}(0))+\int_0^tF(\mu_{u^*}(\tau))\mu_{u^*}'(\tau)\,d\tau. \end{equation} On the other hand, being $\mu_v(t)$ an absolutely continuous function (since $v$ is a $C^1$-regular one variable function with positive derivative) we have \begin{equation}\label{palle2} G(\mu_{v}(t))=G(\mu_{u^*}(0))+ \int_0^tF(\mu_{v}(\tau))\mu_{v}'(\tau)\,d\tau, \end{equation} so that, since $G(\mu_{v}(0))=G(\mu_{u^*}(0))$, we get that $G(\mu_{u^*}(t))\le G(\mu_v(t))$. This implies that $\mu_{u^*}(t)\le\mu_v(t)$ for any $t$ and hence that $u^*\le v$, since $u^*$ and $v$ depend only on $x_1$ and are increasing functions of such a variable.
We pass now to the proof of \eqref{tesi2}. Using the H\"older inequality and reasoning as before we obtain, for $0< q\le2$, \[ \begin{aligned}
-\frac{d}{dt} \int_{\{|u|>t\}}&|\nabla u|^q w^q\,d\mu =\lim_{h\rightarrow 0}\frac 1 h \int_{\{t<|u|<t+h\}}|\nabla u|^q w^q\,d\mu\\
& \le \lim_{h\rightarrow 0}\left(\frac 1 h \int_{\{t<|u|<t+h\}}|\nabla u|^2w^2\,d\mu\right)^{q/2}\left(\frac 1 h\int_{\{t<|u|<t+h\}}d\mu\right)^{1-q/2}\\
&=\left(-\frac{d}{dt}\int_{\{|u|>t\}}|\nabla u|^2w^2\,d\mu \right)^{q/2}(-\mu_u'(t))^{1-q/2}. \end{aligned} \] Recalling \eqref{stima1} and \eqref{stima5} we have \[
-\frac{d}{dt}\int_{\{|u|>t\}}|\nabla u|^2w^2\,d\mu \leq \int_0^{\mu_{u^*}(t)}f^\star(s)\,ds, \]
thus \begin{equation}\label{stima9}
-\frac{d}{dt} \int_{\{|u|>t\}}|\nabla u|^q w^q\,d\mu\le \left(\int_0^{\mu_{u^*}(t)}f^\star(s)\,ds\right)^{q/2}(-\mu_u'(t))^{1-q/2}. \end{equation} Combining \eqref{stima9} and \eqref{stima6} we finally get \[
-\frac{d}{dt} \int_{\{|u|>t\}}|\nabla u|^q w^q\,d\mu\le (-\mu_{u^*}'(t))\left(h(\mu_{u^*}(t))^{-1}\int_0^{\mu_{u^*}(t)}f^\star(s)\,ds\right)^q. \] By integrating both sides between $0$ and $+\infty$, we get \[
\int_E |\nabla u|^q w^q\,d\mu\le\int_0^\infty (-\mu_{u^*}'(t))\left(h(\mu_{u^*}(t))^{-1}\int_0^{\mu_{u^*}(t)}f^\star(s)\,ds\right)^q dt. \] We perform the change of variables $r=\mu_{u^*}(t)$, so that the above equation turns into \[
\int_E |\nabla u|^q w^q\,d\mu\le \int_0^{\mu(E)} \left(h(r)^{-1}\int_0^{r}f^\star(s)\,ds\right)^q dr. \] By a straightforward inspection of those steps we notice that $v$ satisfies \[
\int_{R_E} |\nabla v|^q w^q\,d\mu=\int_0^\infty (-\mu_{v}'(t))\left(h(\mu_{v}(t))^{-1}\int_0^{\mu_{v}(t)}f^\star(s)\,ds\right)^q dt; \] By performing the change of variables $r=\mu_v(t)$ we find \[
\int_{R_E} |\nabla v|^q w^q\,d\mu= \int_0^{\mu(R_E)} \left(h(r)^{-1}\int_0^{r}f^\star(s)\,ds\right)^q dr. \] Since $\mu(E)=\mu(R_E)$ we get the desired result.
We are left to prove that the function $v$ given by \eqref{v} is a solution of problem \eqref{problemsym}. We start by noticing that
equation \eqref{stima6} suggests how to derive \eqref{v}: indeed, as we pointed out, any solution $v$ of \eqref{problemsym} such that $v=v^*$ satisfies \[
\frac{\int_0^{\mu_{v}(t)}f^\star(s)\,ds}{h^2(\mu_{v}(t))}\mu_{v}'(t)=-1. \] By integrating both sides between $0$ and $r$ we obtain \[ \int_0^r \frac{\int_0^{\mu_{v}(t)}f^\star(s)\,ds}{h^2(\mu_{v}(t))} \mu_{v}'(t)\,dt=-r, \] so that, by performing the change of variables $m=\mu_v(t)$, we get \[ \int_{\mu_v(r)}^{\mu(R_E)}\frac{\int_0^{m}f^\star(s)\,ds}{h^2(m)} dm=r,\] which is equivalent to \[ v(z,z')=\int_{\mu\{x_1>z\}}^{\mu(R_E)}\frac{\int_0^{m}f^\star(s)\,ds}{h^2(m)} dm, \] that is \eqref{v}. Notice that $v$ is strictly increasing in $x_1$ and belongs to $C_{\mathrm{loc}}^{1,1}(R_E)$. Indeed, recalling \eqref{acca} one can explicitly compute \[ \nabla v(z,z')=e_1 \frac{\partial v}{\partial z}(z,z')=e_1 \frac{\int_0^{\mu\{x_1>z\}}f^\star(s)\,ds}{w^2(z) \int_{\mathbb R^{d-1}}e^{V(z,x')}\,dx'}, \] where $e_1=(1,0,\dots,0)\in\mathbb R^d$. Since $f^\star$ is a decreasing and locally integrable function, we have $f^\star\in L_{\mathrm{loc}}^\infty(\mathbb R)$; thus, being $z\mapsto \mu(\{x_1>z\})$ $C^1$-regular, we get that $\int_0^{\mu\{x_1>z\}}f^\star(s)\,ds$ is a locally Lipschitz function. Moreover the denominator is locally Lipschitz as well, and locally bounded away from zero. Hence we have that $\nabla v$ is locally Lipschitz.
Thus, recalling that $\partial_1 V$ depends only on the first variable $x_1$ it is possible to explicitly compute the divergence of $w^2\nabla v e^V$ and check that it satisfies \eqref{problemsym}. This concludes the proof of the theorem. \end{proof}
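To complement the proof, the conclusions \eqref{tesi1} and \eqref{tesi2} can be probed numerically in dimension one (our illustration; all parameters are ad hoc choices): we solve \eqref{problemthm} on an interval $E=(\alpha,\beta)\subset\mathbb R_+$ with $f\equiv1$ by double integration, build $v$ on $R_E=(t_E,+\infty)$ from the representation formula, and compare distribution functions and weighted Dirichlet energies.

```python
import numpy as np

a, c = 2.0, 1.0
w2 = lambda x: np.exp(-2.0 * a * x)     # w(x_1)^2 with w = e^{-a x_1}
eV = lambda x: np.exp(-c * x**2)        # density of mu, V = -c x_1^2

def cumtrapz(y, x):
    return np.concatenate([[0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))])

# solve -(w^2 e^V u')' = e^V on E = (alpha, beta) with u = 0 on the boundary
alpha, beta = 0.3, 1.5
X = np.linspace(alpha, beta, 40001)
M = cumtrapz(eV(X), X)                  # int_alpha^x e^V dy
k = w2(X) * eV(X)
C = cumtrapz(M / k, X)[-1] / cumtrapz(1.0 / k, X)[-1]   # constant so that u(beta) = 0
du = (C - M) / k                        # u' = flux / (w^2 e^V)
u = cumtrapz(du, X)
muE = M[-1]

# the half-line R_E = (tE, +inf) with mu(R_E) = mu(E), and v with f^star = 1
Z = np.linspace(0.0, 6.0, 120001)
T = cumtrapz(eV(Z), Z)
tE = np.interp(muE, (T[-1] - T)[::-1], Z[::-1])          # Phi(tE) = mu(E)
Zv = np.linspace(tE, 6.0, 120001)
P = cumtrapz(eV(Zv), Zv)
Phi = P[-1] - P                          # Phi(z) = mu({x_1 > z}) (tail cut at 6)
dv = Phi / (w2(Zv) * eV(Zv))             # v' from the representation formula
v = cumtrapz(dv, Zv)

# u* <= v, checked through the distribution functions mu_u(t) <= mu_v(t)
for t in np.linspace(0.0, 0.999 * u.max(), 40):
    mu_u = cumtrapz(eV(X) * (u > t), X)[-1]
    mu_v = np.interp(t, v, Phi)          # = Phi(v^{-1}(t)), v being increasing
    assert mu_u <= mu_v + 1e-4

# gradient estimate for q = 2
Au = cumtrapz(du**2 * w2(X) * eV(X), X)[-1]
Av = cumtrapz(dv**2 * w2(Zv) * eV(Zv), Zv)[-1]
assert Au <= Av + 1e-8
```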
\end{document}
\begin{document}
\date{}
\title{Geometry of a weak para-$f$-structure}
\begin{abstract} We study the geometry of the weak almost para-$f$-structure and its satellites. This allows us to produce totally geodesic foliations and Killing vector fields and also to take a fresh look at the para-$f$-structure introduced by A.\,Bucki and A.\,Miernowski. We demonstrate this by generalizing several known results on almost para-$f$-manifolds. First, we express the covariant derivative of $f$ using a new tensor on a metric weak para-$f$-structure; then we prove that on a weak para-${\cal K}$-manifold the characteristic vector fields are Killing and $\ker f$ defines a totally geodesic foliation. Next, we show that a para-${\cal S}$-structure is rigid (i.e., a weak para-${\cal S}$-structure is a para-${\cal S}$-structure), and that a metric weak para-$f$-structure with parallel tensor $f$ reduces to a weak para-${\cal C}$-structure. We obtain corollaries for $p=1$, i.e., for a weak almost paracontact~structure.
\vskip1.5mm\noindent \textbf{Keywords}: para-$f$-structure; distribution; totally geodesic foliation; Killing vector field
\vskip1.5mm \noindent \textbf{Mathematics Subject Classifications (2010)} 53C15, 53C25, 53D15
\end{abstract}
\section*{Introduction}
A distribution (or a foliation, associated with an integrable distribution) on a pseudo-Riemannian manifold is \textit{totally geodesic} if any geodesic of the manifold that is tangent to the distribution at one point is tangent to it at all points. Such foliations have the simplest extrinsic geometry of the leaves and appear in Riemannian geometry, e.g., in the theory of $\mathfrak{g}$-foliations, as kernels of degenerate tensors, e.g., \cite{AM-1995,FP-2017}. We are motivated by the problem of finding structures on manifolds which lead to totally geodesic foliations and Killing vector fields,
see~\cite{fip}.
A well-known source of totally geodesic foliations
is a~para-$f$-structure on a smooth manifold $M^{2n+p}$, defined
using a $(1,1)$-tensor field $f$ satisfying $f^3 = f$ and having constant rank $2n$, see \cite{BN-1985,m1976}. Paracontact geometry is a counterpart to contact geometry, and a para-$f$-structure
is a higher dimensional analog of almost product ($p=0$) \cite{g1967} and almost paracontact ($p=1$) structures \cite{CFG-survey}.
A~para-$f$-structure with $p=2$ arises in the study of hypersurfaces in almost contact manifolds, e.g., \cite{BL-69}. Interest in para-Sasakian manifolds is due to their connection with para-K\"{a}hler manifolds and their role in mathematical~physics.
If there exists a set of vector fields $\xi_1, \ldots , \xi_p$ with certain properties, then $M^{2n+p}$ is said to have a para-$f$-structure with complemented frames. In this case, the tangent bundle $TM$ splits into three complementary subbundles: the $\pm1$-eigen-distributions of $f$, which compose a $2n$-dimensional distribution $f(TM)$, and a $p$-dimensional distribution $\ker f$ (the kernel of~$f$).
In \cite{RWo-2}, we introduced the ``weak" metric structures that generalize an $f$-structure and a para-$f$-structure, and allow us to take a fresh look at the classical theory. In~\cite{Rov-arxiv}, we studied the geometry of a weak $f$-structure and its satellites, which are analogs of ${\mathcal K}$-, ${\mathcal S}$- and ${\mathcal C}$-manifolds. In~this paper, using a similar approach, we study the geometry of a weak para-$f$-structure and its important special cases related to a pseudo-Riemannian manifold endowed with a totally geodesic foliation. A~natu\-ral question arises: {how rich are weak para-$f$-structures compared to the classical ones}? We~study this question for weak analogs of para-${\mathcal K}$-, para-${\mathcal S}$- and para-${\mathcal C}$- structures.
The proofs of main results use the properties of new tensors, as well as the constructions required in the classical~case.
The~theory presented here can be used to deepen our knowledge of pseudo-Riemannian geometry of manifolds equipped with distributions.
This article consists of an introduction and five sections. In Section~\ref{sec:1}, we discuss the properties of ``weak" metric structures generalizing some classes of para-$f$-manifolds. In Section~\ref{sec:2}, we express the covariant derivative of $f$ of a weak para-$f$-structure using a new tensor and show that on a weak para-${\mathcal K}$-manifold the characteristic vector fields are Killing and $\ker f$ defines a totally geodesic foliation; moreover, for a weak almost para-${\mathcal C}$-structure and a weak almost para-${\mathcal S}$-structure, $\ker f$ defines a totally geodesic foliation. In Section~\ref{sec:3a}, we apply the tensor~$h$ to weak almost para-${\mathcal S}$-manifolds and prove the stability of some known results. In Section~\ref{sec:3}, we complete the result in \cite{RWo-2} and prove the rigidity theorem: a weak para-${\mathcal S}$-structure is a para-${\mathcal S}$-structure. In Section~\ref{sec:4}, we show that a weak para-$f$-structure with parallel tensor $f$ reduces to a weak para-${\mathcal C}$-structure, and we give an example of such a structure.
\section{Preliminaries} \label{sec:1}
Here, we describe ``weak" metric structures generalizing certain classes of para-$f$-manifolds and discuss their properties. A \textit{weak para-$f$-structure} on a smooth manifold $M^{\,2n+p}$ is defined by a $(1,1)$-tensor field $f$ of rank $2\,n$ and a~nonsingular $(1,1)$-tensor field $Q$ satisfying, see \cite{RWo-2}, \begin{equation}\label{E-fQ-1}
f^3 - fQ = 0,\qquad
Q\,\xi=\xi\quad (\xi\in\ker f). \end{equation} If $\ker f=\{X\in TM: f(X)=0\}$ is parallelizable, then we fix vector fields $\xi_i\ (1\le i\le p)$, which span $\ker f$, and their dual one-forms $\eta^i$. We get a~\textit{weak almost para-$f$-structure} (a weak almost paracontact structure for $p=1$), see~\cite{RWo-2}, \begin{equation}\label{2.1}
f^2 = Q -\sum\nolimits_{i}\eta^i\otimes\xi_i, \quad \eta^i(\xi_j)=\delta^i_j \,. \end{equation}
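As a consistency check (using $f\,\xi_i=0$, proved in the proposition below), \eqref{2.1} recovers the first condition of \eqref{E-fQ-1}:

```latex
% Apply f to (2.1); the frame terms are annihilated since f\xi_i = 0:
\[
  f^3 = f\circ f^2
      = f\,\Big(Q - \sum\nolimits_{i}\eta^i\otimes\xi_i\Big)
      = f\,Q - \sum\nolimits_{i}\eta^i\otimes f\,\xi_i
      = f\,Q .
\]
```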
Using \eqref{2.1} we get $f(TM)=\bigcap_{i}\ker\eta^i$ and that $f(TM)$ is $f$-invariant, i.e., \begin{equation}\label{2.1-D}
{f} X\in f(TM),\quad X\in f(TM). \end{equation} By \eqref{2.1}-\eqref{2.1-D}, $f(TM)$ is invariant under $Q$. A weak almost para-$f$-structure is called \textit{normal} if the following tensor (known for $Q={\rm id}_{TM}$, e.g., \cite{FP-2017}) is identically~zero: \begin{align}\label{2.6X}
N^{\,(1)}(X,Y) = [{f},{f}](X,Y) - 2\sum\nolimits_{i} d\eta^i(X,Y)\,\xi_i . \end{align} The Nijenhuis torsion
of ${f}$ and the exterior derivative
of $\eta^i$ are given~by \begin{align}\label{2.5}
[{f},{f}](X,Y) & = {f}^2 [X,Y] + [{f} X, {f} Y] - {f}[{f} X,Y] - {f}[X,{f} Y],\ X,Y\in\mathfrak{X}_M , \\ \label{3.3A}
d\eta^i(X,Y) & = \frac12\,\{X(\eta^i(Y)) - Y(\eta^i(X)) - \eta^i([X,Y])\},\quad X,Y\in\mathfrak{X}_M . \end{align}
\begin{remark}\rm A differential $k$-\textit{form} on a smooth manifold $M$ is a skew-symmetric tensor field $\omega$ of type $(0, k)$. According to the conventions of \cite{KN-69}, \begin{eqnarray}\label{eq:extdiff} \nonumber
& d\omega ({X}_1, \ldots , {X}_{k+1}) = \frac1{k+1}\sum\nolimits_{\,i=1}^{k+1} (-1)^{i+1} {X}_i(\omega({X}_1, \ldots , \widehat{{X}}_i\ldots, {X}_{k+1}))\\
& +\sum\nolimits_{\,i<j}(-1)^{i+j}\,\omega ([{X}_i, {X}_j], {X}_1, \ldots,\widehat{{X}}_i,\ldots,\widehat{{X}}_j, \ldots, {X}_{k+1}), \end{eqnarray} where ${X}_1,\ldots, {X}_{k+1}\in\mathfrak{X}_M$ and $\,\widehat{\cdot}\,$ denotes the operator of omission, defines a $(k+1)$-form $d\omega$ -- the \textit{exterior differential} of $\omega$. Thus, \eqref{eq:extdiff} with $k=1$ gives~\eqref{3.3A}. \end{remark}
If there exists a pseudo-Riemannian metric $g$ such that
\begin{align}\label{2.2}
g({f} X,{f} Y)= -g(X,Q\,Y) +\sum\nolimits_{i}
\eta^i(X)\,\eta^i(Y),\quad X,Y\in\mathfrak{X}_M, \end{align} then $({f},Q,\xi_i,\eta^i,g)$ is called a {\it metric weak para-$f$-structure}, $M({f},Q,\xi_i,\eta^i,g)$ is called a \textit{metric weak para-$f$-manifold}, and $g$ is called a \textit{compatible metric}.
Putting $Y=\xi_i$ in \eqref{2.2} and using \eqref{E-fQ-1}, we get
$g(X,\xi_i) = \eta^i(X)$, thus, $f(TM)\,\bot\,\ker f$ and $\{\xi_i\}$ is an orthonormal frame of $\ker f$.
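In detail, setting $Y=\xi_i$ in \eqref{2.2} and using $f\,\xi_i=0$ and $Q\,\xi_i=\xi_i$ gives:

```latex
\[
  0 \;=\; g(fX,\, f\,\xi_i)
    \;\overset{\eqref{2.2}}{=}\; -\,g(X, Q\,\xi_i) + \sum\nolimits_{j}\eta^j(X)\,\eta^j(\xi_i)
    \;\overset{\eqref{E-fQ-1}}{=}\; -\,g(X, \xi_i) + \eta^i(X) .
\]
```

Hence $g(X,\xi_i)=\eta^i(X)$; for $X\in f(TM)=\bigcap_j\ker\eta^j$ this gives $f(TM)\,\bot\,\ker f$, and $g(\xi_j,\xi_i)=\eta^i(\xi_j)=\delta^i_j$.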
\begin{remark}\rm According to \cite{RWo-2}, a weak almost para-$f$-structure admits a compatible pseudo-Riemannian metric if ${f}$ admits a skew-symmetric representation, i.e., for any $x\in M$ there exist a neighborhood $U_x\subset M$ and a~frame $\{e_k\}$ on $U_x$, for which ${f}$ has a skew-symmetric matrix. \end{remark}
The following statement is well-known for the case of $Q={\rm id}_{TM}$.
\begin{proposition}
{\rm (a)} For a weak almost para-$f$-structure the following hold: \[
{f}\,\xi_i=0,\quad \eta^i\circ{f}=0,\quad \eta^i\circ Q=\eta^i\quad (1\le i\le p),\quad [Q,\,{f}]=0. \] {\rm (b)} For a metric weak almost para-$f$-structure
the tensor ${f}$ is skew-symmetric and the tensor $Q$ is self-adjoint, i.e., \begin{equation}\label{E-Q2-g}
g({f} X, Y) = -g(X, {f} Y),\quad
g(QX,Y)=g(X,QY). \end{equation} \end{proposition}
\begin{proof} (a) By \eqref{E-fQ-1} and \eqref{2.1}, ${f}^2\xi_i=0$. Applying \eqref{E-fQ-1} to $f\xi_i$, we get $f\xi_i=0$. To show $\eta^i\circ{f}=0$, note that $\eta^i({f}\,\xi_i)=\eta^i(0)=0$, and, using \eqref{2.1-D}, we get $\eta^i({f} X)=0$ for $X\in f(TM)$. Next, using \eqref{2.1} and ${f}(Q\,\xi_i) = {f}\,\xi_i=0$, we get \begin{align*}
{f}^3 X = {f}({f}^2 X) = {f}\,QX -\sum\nolimits_{i}\eta^i(X)\,{f}\xi_i = {f}\,QX,\\
{f}^3 X = {f}^2({f} X) = Q\,{f} X -\sum\nolimits_{i}\eta^i({f} X)\,\xi_i = Q\,{f} X \end{align*} for any $X\in f(TM)$. This and $[Q,\,{f}]\,\xi_i=0$ provide $[Q,\,{f}]=Q\,{f} - {f} Q = 0$.
(b) By~\eqref{2.2}, the~restriction $Q_{|\,f(TM)}$ is self-adjoint. This and \eqref{E-fQ-1} provide (\ref{E-Q2-g}b). For any $Y\in f(TM)$ there is $\tilde Y\in f(TM)$ such that ${f}Y=\tilde Y$. From \eqref{2.1} and \eqref{2.2} with $X\in f(TM)$ and $\tilde Y$ we get \begin{eqnarray*}
g(fX,\tilde Y) = g(fX, fY) \overset{\eqref{2.2}}= -g(X, QY) \overset{\eqref{2.1}}
= -g(X, f^2 Y) = -g(X, f\tilde Y), \end{eqnarray*} and (\ref{E-Q2-g}a) follows. \end{proof}
\begin{remark}\rm For a weak almost para-$f$-structure, the tangent bundle
decomposes as
$TM=f(TM)\oplus\ker f$, where $\ker f$ is a $p$-dimensional characte\-ristic distribution; moreover,
if we assume that the symmetric tensor $Q$ is positive definite, then $f(TM)$ decomposes into the sum of two $n$-dimensional subbundles: $f(TM)={\mathcal D}_+\oplus{\mathcal D}_-$,
corresponding to positive and negative eigenvalues of $f$,
and in this case we get
$TM={\mathcal D}_+\oplus{\mathcal D}_-\oplus\ker f$.
\end{remark}
Define the difference tensor $\widetilde{Q}$ (vanishing on a para-$f$-structure) by \[
\widetilde{Q} = Q - {\rm id}_{TM}. \] By the above, $\widetilde{Q}\,\xi_i=0$ and $[\widetilde{Q},{f}]=0$.
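Both identities are one-line consequences of \eqref{E-fQ-1} and the commutation relation $[Q,f]=0$ established above:

```latex
\[
  \widetilde{Q}\,\xi_i = Q\,\xi_i - \xi_i \overset{\eqref{E-fQ-1}}{=} 0,
  \qquad
  [\widetilde{Q},\,f] = [Q,\,f] - [{\rm id}_{TM},\,f] = 0 .
\]
```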
We can rewrite \eqref{2.5} in terms of the Levi-Civita connection $\nabla$ as \begin{align}\label{4.NN}
[{f},{f}](X,Y) = ({f}\nabla_Y{f} - \nabla_{{f} Y}{f}) X - ({f}\nabla_X{f} - \nabla_{{f} X}{f}) Y; \end{align} in particular, since ${f}\,\xi_i=0$, \begin{align}\label{4.NNxi}
[{f},{f}](X,\xi_i)= {f}(\nabla_{\xi_i}{f})X +\nabla_{{f} X}\,\xi_i -{f}\,\nabla_{X}\,\xi_i, \quad X\in \mathfrak{X}_M . \end{align}
The {fundamental $2$-form} $\Phi$ on $M({f},Q,\xi_i,\eta^i,g)$ is defined by \begin{align*}
\Phi(X,Y)=g(X,{f} Y),\quad X,Y\in\mathfrak{X}_M. \end{align*} Since $\eta^1\wedge\ldots\wedge\eta^p\wedge\Phi^n\ne0$,
a metric weak para-${f}$-manifold is orientable.
\begin{definition}\rm A metric weak para-$f$-structure $({f},Q,\xi_i,\eta^i,g)$ is called a \textit{weak para-${\mathcal K}$-structure} if it is normal and the form $\Phi$ is closed, i.e., $d \Phi=0$.
We define two subclasses of weak para-${\mathcal K}$-manifolds as follows: \textit{weak para-${\mathcal C}$-manifolds} if $d\eta^i = 0$ for any $i$, and \textit{weak para-${\mathcal S}$-manifolds}~if \begin{align}\label{2.3}
d\eta^i = \Phi,\quad 1\le i\le p . \end{align} Omitting the normality condition, we get the following: a metric weak para-$f$-structure
is called (i)~a \textit{weak almost para-${\mathcal S}$-structure} if \eqref{2.3} is valid; (ii)~a \textit{weak almost para-${\mathcal C}$-structure} if $\Phi$ and $\eta^i$ are closed forms. \end{definition}
For $p=1$, weak para-${\mathcal C}$- and weak para-${\mathcal S}$- manifolds reduce to weak
para-cosymplectic manifolds and weak
para-Sasakian manifolds, respectively.
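For orientation, we write out the $p=1$ case explicitly (with $\varphi=f$, $\xi=\xi_1$, $\eta=\eta^1$, as is customary for (weak) almost paracontact structures):

```latex
\[
  \varphi^2 = Q - \eta\otimes\xi, \qquad \eta(\xi)=1, \qquad
  g(\varphi X,\varphi Y) = -\,g(X, Q\,Y) + \eta(X)\,\eta(Y);
\]
\[
  \text{weak para-Sasakian:}\quad d\eta=\Phi, \qquad\qquad
  \text{weak para-cosymplectic:}\quad d\eta=0 ,
\]
```

together with the normality and $d\Phi=0$ conditions of the definition above.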
Recall the formulas with the Lie derivative $\pounds_{Z}$ in the $Z$-direction and $X,Y\in\mathfrak{X}_M$: \begin{eqnarray}\label{3.3B}
(\pounds_{Z}{f})X & = & [Z, {f} X] - {f} [Z, X],\\ \label{3.3C}
(\pounds_{Z}\,\eta^j)X & = & Z(\eta^j(X)) - \eta^j([Z, X]) , \\ \label{3.7} \nonumber
(\pounds_{Z}\,g)(X,Y) &= & Z(g(X,Y)) - g([Z, X], Y) - g(X, [Z,Y])\\
& = & g(\nabla_{X}\,Z, Y) + g(\nabla_{Y}\,Z, X). \end{eqnarray} The following tensors are known in the theory of para-$f$-manifolds, e.g., \cite{FP-2017}:
\begin{align} \label{2.7X}
N^{\,(2)}_i(X,Y) &= (\pounds_{{f} X}\,\eta^i)Y - (\pounds_{{f} Y}\,\eta^i)X \overset{\eqref{3.3A}}= 2\,d\eta^i({f} X,Y) - 2\,d\eta^i({f} Y,X), \\ \label{2.8X}
N^{\,(3)}_i(X) &= (\pounds_{\xi_i}{f})X \overset{\eqref{3.3B}}= [\xi_i, {f} X] - {f} [\xi_i, X],\\ \label{2.9X}
N^{\,(4)}_{ij}(X) &= (\pounds_{\xi_i}\,\eta^j)X \overset{\eqref{3.3C}}= \xi_i(\eta^j(X)) - \eta^j([\xi_i, X])
= 2\,d\eta^j(\xi_i, X). \end{align} For $p=1$, the tensors \eqref{2.7X}--\eqref{2.9X} reduce to the following tensors on (weak) almost paracontact manifolds:
$N^{\,(2)}(X,Y) = (\pounds_{\varphi X}\,\eta)Y - (\pounds_{\varphi Y}\,\eta)X, \
N^{\,(3)} = \pounds_{\xi}\,\varphi,\
N^{\,(4)} = \pounds_{\xi}\,\eta$ .
\begin{remark}\rm Let $M^{2n+p}(f,Q,\xi_i,\eta^i)$ be a framed weak para-$f$-manifold. Consider the product manifold $\bar M = M^{2n+p}\times\mathbb{R}^p$, where $\mathbb{R}^p$ is a Euclidean space with a basis $\partial_1,\ldots,\partial_p$, and define tensor fields $\bar f$ and $\bar Q$ on $\bar M$ putting \begin{align*}
\bar f(X, \sum a^i\partial_i) = (fX +\sum a^i\xi_i, \sum \eta^j(X)\partial_j),
\quad
\bar Q(X, \sum a^i\partial_i) = (QX, \sum a^i\partial_i) . \end{align*} Hence, $\bar f(X,0)=(fX,0)$, $\bar Q(X,0)=(QX,0)$ for $X\in f(TM)$, $\bar f(\xi_i,0)=(0,\partial_i)$, $\bar Q(\xi_i,0)=(\xi_i,0)$ and $\bar f(0,\partial_i)=(\xi_i,0)$, $\bar Q(0,\partial_i)=(0,\partial_i)$.
Then it is easy to verify that $\bar f^{\,2}=\bar Q$. The tensors $N^{\,(i)}\ (i=1,2,3,4)$ appear when we use the integrability condition $[\bar f, \bar f]=0$ of $\bar f$ to express the normality condition
of a weak almost para-$f$-structure. \end{remark}
\section{The geometry of a metric weak para-$f$-structure} \label{sec:2}
Here, we study the geometry of the characteristic distribution $\ker f$, supplement the sequence of tensors \eqref{2.6X} and \eqref{2.7X}--\eqref{2.9X} with a new tensor $N^{\,(5)}$ and calculate the covariant derivative of $f$ on a metric weak para-$f$-structure.
A distribution ${\mathcal D}\subset TM$ is \textit{totally geodesic} if and only if its second fundamental form vanishes, i.e., $\nabla_X Y+\nabla_Y X\in{\mathcal D}$ for any vector fields $X,Y\in{\mathcal D}$;
equivalently, any geodesic of $M$ that is tangent to ${\mathcal D}$ at one point is tangent to ${\mathcal D}$ at all its points.
Any integrable and totally geodesic distribution determines a totally geodesic foliation. A foliation whose orthogonal distribution is totally geodesic is said to be a Riemannian foliation. For example, a foliation is Riemannian if it is invariant under the one-parameter groups of isometries generated by Killing vector fields.
Note that $X = X^\top + X^\bot$, where $X^\top$ is the projection of the vector $X\in TM$ onto $f(TM)$, and $X^\bot = \sum\nolimits_{i}\eta^i(X)\,\xi_i$.
The next statement generalizes
\cite[Proposition~3]{FP-2017}
(the case $Q={\rm id}_{TM}$).
\begin{proposition}\label{thm6.1} Let a metric weak para-$f$-structure be normal. Then $N^{\,(3)}_i$ and $N^{\,(4)}_{ij}$ vanish~and \begin{align}\label{3.1KK}
N^{\,(2)}_i(X,Y) =\eta^i([\widetilde{Q} X,\,{f} Y]); \end{align} moreover, the characteristic distribution $\ker f$ is totally geodesic. \end{proposition}
\begin{proof}
Assume $N^{\,(1)}(X,Y)=0$ for any $X,Y\in TM$. Taking $\xi_i$ instead of $Y$ and using the formula of Nijenhuis tensor \eqref{2.5}, we~get \begin{eqnarray}\label{3.11}
0 & =& [{f},{f}](X,\xi_i) - 2\sum\nolimits_{j} d\eta^j(X,\xi_i)\,\xi_j \notag\\
& =& {f}^2[X,\xi_i] - {f}[{f} X,\xi_i] - 2\sum\nolimits_{j} d\eta^j(X,\xi_i)\,\xi_j. \end{eqnarray} For the scalar product of \eqref{3.11} with $\xi_j$, using
${f}\,\xi_i=0$, we~get \begin{align}\label{3.11A}
d\eta^j(\xi_i,\,\cdot)=0;
\end{align} hence, $N^{\,(4)}_{ij}=0$, see \eqref{2.9X}.
Next, combining \eqref{3.11} and \eqref{3.11A}, we get \begin{align*}
0 = [{f},{f}](X,\xi_i) = {f}^2[X,\xi_i] - {f}[{f} X,\xi_i] = {f}\,(\pounds_{\xi_i}{f})X . \end{align*} Applying ${f}$ and using \eqref{2.1} and $\eta^i\circ{f}=0$, we achieve \begin{eqnarray}\label{3.14} \nonumber
0 & = {f}^2 (\pounds_{\xi_i}{f})X
= Q(\pounds_{\xi_i}{f})X - \sum\nolimits_{j}\eta^j((\pounds_{\xi_i}{f})X)\,\xi_j \\
& = Q(\pounds_{\xi_i}{f})X - \sum\nolimits_{j}\eta^j([\xi_i,{f} X])\,\xi_j. \end{eqnarray} Further, \eqref{3.11A} and \eqref{3.3A} yield \begin{align}\label{3.11B}
0=2\,d\eta^j({f} X, \xi_i)
=({f} X)(\eta^j(\xi_i)) - \xi_i(\eta^j({f} X)) - \eta^j([{f} X, \xi_i])
=\eta^j([\xi_i, {f} X]). \end{align} Since $Q$ is non-singular, from \eqref{3.14}--\eqref{3.11B} we get $\pounds_{\xi_i}{f}=0$, i.e, $N^{\,(3)}_i=0$, see~\eqref{2.8X}.
Replacing $X$ by ${f} X$ in our assumption $N^{\,(1)}=0$ and using \eqref{2.5} and \eqref{3.3A}, we get \begin{align}\label{2.6}
0 &= g([{f},{f}]({f} X,Y) - 2\sum\nolimits_{j} d\eta^j({f} X,Y)\,\xi_j,\ \xi_i) \notag\\
&= g([{f}^2 X,{f} Y],\xi_i) - ({f} X)(\eta^i(Y)) + \eta^i([{f} X,Y]) ,\quad 1\le i\le p. \end{align} Using \eqref{2.1} and $[{f} Y, \eta^j(X) \xi_i] = ({f} Y)(\eta^j(X)) \xi_i + \eta^j(X)[{f} Y, \xi_i]$, we rewrite \eqref{2.6}~as \begin{equation*}
0 = \eta^i([QX, {f} Y]) -\sum \eta^j(X)\,\eta^i([\xi_j, {f} Y])
+ {f} Y(\eta^i(X)) - {f} X(\eta^i(Y)) + \eta^i([{f} X,Y]). \end{equation*} Since \eqref{3.11B} gives $\eta^i([{f} Y, \xi_j])=0$, the above equation becomes
\begin{align}\label{2.9}
\eta^i([QX, {f} Y]) + ({f} Y)(\eta^i(X)) - ({f} X)(\eta^i(Y)) + \eta^i([{f} X,Y]) = 0. \end{align} Finally, combining \eqref{2.9} with \eqref{2.7X}, we get \eqref{3.1KK}.
Using the identity \begin{align}\label{3.Ld}
\pounds_{\xi_i}=\iota_{{\xi_i}}\,d + d\,\iota_{{\xi_i}}, \end{align} from \eqref{3.11A} and $\eta^i(\xi_j)=\delta^i_j$ we obtain
$\pounds_{\xi_i}\,\eta^j = d (\eta^j(\xi_i)) + \iota_{\xi_i}\, d\eta^j = 0$.
On the other hand, by \eqref{3.3C} we have \[
(\pounds_{\xi_i}\,\eta^j)X= g(X,\nabla_{\xi_i}\,\xi_j)+g(\nabla_{X}\,\xi_i,\,\xi_j),\quad
X\in\mathfrak{X}_M. \] Symmetrizing this and using $\pounds_{\xi_i}\,\eta^j =0$ and $g(\xi_i,\, \xi_j)=\delta_{ij}$ yield \begin{align}\label{3.30}
\nabla_{\xi_i}\,\xi_j+\nabla_{\xi_j}\,\xi_i =0, \end{align} thus, the distribution $\ker f$ is totally geodesic. \end{proof}
Recall the co-boundary formula for the exterior derivative $d$ of a $2$-form $\Phi$, \begin{eqnarray}\label{3.3} \nonumber
d\Phi(X,Y,Z) & =& \frac{1}{3}\,\big\{X\,\Phi(Y,Z) + Y\,\Phi(Z,X) + Z\,\Phi(X,Y) \\
&& -\Phi([X,Y],Z) - \Phi([Z,X],Y) - \Phi([Y,Z],X)\big\}. \end{eqnarray} By direct calculation we get the following: \begin{align}\label{3.9A}
(\pounds_{\xi_i}\,\Phi)(X,Y) = (\pounds_{\xi_i}\,g)(X, {f}Y) + g(X,(\pounds_{\xi_i}{f})Y) . \end{align}
The following result generalizes \cite[Proposition~4]{FP-2017}.
\begin{theorem}\label{C-K} On a weak para-${\mathcal K}$-manifold the vector fields $\xi_1,\ldots,\xi_p$ are Killing and \begin{align}\label{6.1e}
\nabla_{\xi_i}\,\xi_j = 0,\quad 1\le i,j\le p ; \end{align}
thus,
$\ker f$ is integrable and defines a totally geodesic Riemannian foliation with flat leaves. \end{theorem}
\begin{proof} By Proposition~\ref{thm6.1}, the distribution $\ker f$ is totally geodesic, see \eqref{3.30}, and $N^{\,(3)}_i=\pounds_{\xi_i}{f}=0$. Using $\iota_{{\xi_i}}\Phi=0$ and condition $d\Phi=0$ in the identity \eqref{3.Ld}, we get $\pounds_{\xi_i}\Phi=0$. Thus, from \eqref{3.9A} we obtain $(\pounds_{\xi_i}\,g)(X, {f}Y)=0$. To show $\pounds_{\xi_i}\,g=0$, we will examine $(\pounds_{\xi_i}\,g)(fX, \xi_j)$ and $(\pounds_{\xi_i}\,g)(\xi_k, \xi_j)$. Using $\pounds_{\xi_i}\,\eta^j =0$, we get \[
(\pounds_{\xi_i}\,g)(fX, \xi_j)=(\pounds_{\xi_i}\,\eta^j)fX -g(fX, [\xi_i,\xi_j])=-g(fX, [\xi_i,\xi_j])=0. \] Using \eqref{3.30}, we get
$(\pounds_{\xi_i}\,g)(\xi_k, \xi_j)= -g(\xi_i, \nabla_{\xi_k}\,\xi_j+\nabla_{\xi_j}\,\xi_k) = 0$.
Thus, $\xi_i$ is a Killing vector field, i.e., $\pounds_{\xi_i} g=0$. By $d\Phi(X,\xi_i,\xi_j)=0$ and \eqref{3.3} we obtain $g([\xi_i,\xi_j], fX)=0$, i.e., $\ker f$ is integrable. From this and \eqref{3.30} we get $\nabla_{\xi_k}\,\xi_j=0$; thus, the sectional curvature is $K(\xi_i,\xi_j)=0$. \end{proof}
\begin{theorem}\label{thm6.2} For a weak almost para-${\mathcal S}$-structure,
we get $N^{\,(2)}_i=N^{\,(4)}_{ij}=0$ and \begin{equation}\label{E-N1}
(N^{\,(1)}(X,Y))^\bot = 2\,g(X, f\widetilde{Q} Y)\,\bar\xi, \quad\text{where }\ \bar\xi=\sum\nolimits_{i}\xi_i \,; \end{equation} moreover, $N^{\,(3)}_i$ vanishes if and only if $\,\xi_i$ is a Killing vector field. \end{theorem}
\begin{proof} Applying \eqref{2.3} in \eqref{2.7X} and using skew-symmetry of ${f}$ we get $N^{\,(2)}_i=0$. Equation \eqref{2.3} with $Y=\xi_i$ yields $d\eta^j(X,\xi_i)=g(X,{f}\,\xi_i)=0$ for any $X\in\mathfrak{X}_M$; thus, we get \eqref{3.11A}, i.e., $N^{\,(4)}_{ij}=0$. Using \eqref{2.3} and \[
g([f,f](X,Y), \xi_i) = g([fX,fY], \xi_i) = -2\,d\eta^i(fX,fY) = -2\,\Phi(fX, fY) \] for all $i$, we also calculate \begin{eqnarray*}
& \frac12\,g(N^{\,(1)}(X,Y), \xi_i) = -d\eta^i(fX,fY) - g(\sum\nolimits_{j}d\eta^j(X,Y)\,\xi_j, \xi_i) \\
&= -\Phi(fX, fY) -\Phi(X, Y) = g(X, (f^3-f)Y) = g(X, \widetilde{Q} f Y), \end{eqnarray*} that proves \eqref{E-N1}. Next, invoking \eqref{2.3} in the equality
\begin{align*}
(\pounds_{\xi_i}\,d\eta^j)(X,Y) = \xi_i(d\eta^j(X,Y)) - d\eta^j([\xi_i,X], Y) - d\eta^j(X,[\xi_i,Y]), \end{align*} and using \eqref{3.7}, we obtain for all $i,j$ \begin{align}\label{3.9}
(\pounds_{\xi_i}\,d\eta^j)(X,Y) = (\pounds_{\xi_i}\,g)(X, {f}Y) + g(X,(\pounds_{\xi_i}{f})Y). \end{align} Since $\pounds_V=\iota_{V}\circ d+d\circ\iota_{V}$, the exterior derivative $d$ commutes with the Lie derivative, i.e., $d\circ\pounds_V = \pounds_V\circ d$; hence, as in the proof of Theorem~\ref{C-K}, $d\eta^j$ is invariant under the flow of $\xi_i$, i.e., $\pounds_{\xi_i}\,d\eta^j=0$. Therefore, \eqref{3.9} implies that $\xi_i$ is a Killing vector field if and only if $N^{\,(3)}_i=0$. \end{proof}
\begin{theorem}\label{thm6.2C} For a weak almost para-${\mathcal C}$-structure, we get $N^{\,(2)}_i=N^{\,(4)}_{ij}=0$, $N^{\,(1)}=[{f},{f}]$, and \eqref{6.1e}; thus, the distribution $\ker f$ is tangent to a totally geodesic foliation with the sectional curvature $K(\xi_i,\xi_j)=0$. Moreover, $N^{\,(3)}_i=0$ if and only if $\,\xi_i$ is a Killing vector~field. \end{theorem}
\begin{proof} By \eqref{2.7X} and \eqref{2.9X} and since $d\eta^i=0$, the tensors $N^{\,(2)}_i$ and $N^{\,(4)}_{ij}$ vanish on a weak almost para-${\mathcal C}$-structure. Moreover, by \eqref{2.6X} and \eqref{3.9}, respectively, the tensor $N^{\,(1)}$ coincides with $[f,f]$, and $N^{\,(3)}_i=\pounds_{\xi_i}{f}\ (1\le i\le p)$ vanish if and only if each $\xi_i$ is a Killing~vector. From the equalities \begin{align*}
3\,d\Phi(X,\xi_i,\xi_j) = g([\xi_i,\xi_j], fX), \qquad
2\,d\eta^k(\xi_j, \xi_i) = g([\xi_i,\xi_j],\xi_k) \end{align*} and conditions $d\Phi=0$ and $d\eta^i=0$ we obtain \begin{align}\label{6.1d}
[\xi_i, \xi_j] & = 0,\quad 1\le i,j\le p . \end{align} Next, from $d\eta^i=0$ and the equality \[
2\,d\eta^i(\xi_j,X)+2\,d\eta^j(\xi_i,X) = g(\nabla_{\xi_i}\,\xi_j+\nabla_{\xi_j}\,\xi_i, X) \] we obtain \eqref{3.30}: $\nabla_{\xi_i}\,\xi_j+\nabla_{\xi_j}\,\xi_i=0$. From this and \eqref{6.1d} we get \eqref{6.1e}. \end{proof}
We will express $\nabla_{X}{f}$ using a new tensor on a metric weak para-$f$-structure. The following assertion
generalizes \cite[Proposition~1]{FP-2017}.
\begin{proposition}\label{lem6.1} For a metric weak para-$f$-structure
we get \begin{eqnarray}\label{3.1}
& 2\,g((\nabla_{X}{f})Y,Z) = -3\,d\Phi(X,{f} Y,{f} Z) - 3\, d\Phi(X,Y,Z) - g(N^{\,(1)}(Y,Z),{f} X)\notag\\
& +\sum\nolimits_{i}\big( N^{\,(2)}_i(Y,Z)\,\eta^i(X) + 2\,d\eta^i({f} Y,X)\,\eta^i(Z) - 2\,d\eta^i({f} Z,X)\,\eta^i(Y)\big)\notag\\
& + N^{\,(5)}(X,Y,Z), \end{eqnarray} where the tensor $N^{\,(5)}(X,Y,Z)$, skew-symmetric in $Y$ and $Z$, is defined by \begin{eqnarray*}
N^{\,(5)}(X,Y,Z) &=& ({f} Z)\,(g(X, \widetilde{Q}Y)) -({f} Y)\,(g(X, \widetilde{Q}Z)) +g([X, {f} Z], \widetilde{Q}Y)\\
&&-\,g([X,{f} Y], \widetilde{Q}Z) + g([Y,{f} Z] -[Z, {f} Y] - {f}[Y,Z],\ \widetilde{Q} X). \end{eqnarray*} \end{proposition}
\begin{proof} Using
the skew-symmetry of ${f}$, one can compute \begin{eqnarray}\label{3.4}
& 2\,g((\nabla_{X}{f})Y,Z) = 2\,g(\nabla_{X}({f} Y),Z) + 2\,g( \nabla_{X}Y,{f} Z) \notag\\
& = X\,g({f} Y,Z) + ({f} Y)\,g(X,Z) - Z\,g(X,{f} Y) \notag\\
& +\, g([X,{f} Y],Z) +g([Z,X],{f} Y) - g([{f} Y,Z],X) \notag\\
& +\, X\,g(Y,{f} Z) + Y\,g(X,{f} Z) - ({f} Z)\,g(X,Y)\notag\\
& +\, g([X,Y],{f} Z) + g([{f} Z,X],Y) - g([Y,{f} Z],X). \end{eqnarray} Using \eqref{2.2}, we obtain \begin{align}\label{XZ} \notag
g(X,Z) &= -\Phi({f} X, Z) -g(X,\widetilde{Q} Z) +\sum\nolimits_{i}\big(\eta^i(X)\,\eta^i(Z) +\eta^i(X)\,\eta^i(\widetilde{Q} Z)\big)\\
&= -\Phi({f} X, Z) + \sum\nolimits_{i}\eta^i(X)\,\eta^i(Z) - g(X, \widetilde{Q}Z). \end{align} Thus, in view of the skew-symmetry of ${f}$ and applying \eqref{XZ} six times, \eqref{3.4} can be written~as \begin{align*}
& 2\,g((\nabla_{X}{f})Y,Z) = X\,\Phi(Y, Z) +({f} Y)\,\big(-\Phi({f} X, {Z})+\sum\nolimits_{i}\eta^i(X)\,\eta^i(Z) \big) \\ & - ({f} Y)\,g(X,\widetilde{Q}Z) - Z\,\Phi(X,Y) \\ & +\Phi([X,{f} Y],{f} {Z}) + \sum\nolimits_{i}\eta^i([X,{f} Y])\eta^i(Z) - g([X,{f} Y],\widetilde{Q}Z) +\Phi([Z,X],Y) \notag\\ & -\Phi([{f} Y,Z],{f} {X}) - \sum\nolimits_{i}\eta^i([{f} Y,Z])\,\eta^i(X) + g([{f} Y, Z], \widetilde{Q}X) + X\,\Phi(Y,Z) \\ & +Y\,\Phi(X,Z) - ({f} Z)\,\big(-\Phi({f} X, {Y}) + \sum\nolimits_{i}\eta^i(X)\,\eta^i(Y)\big) + ({f} Z) g(X, \widetilde{Q}Y) \\ & +\Phi([X,Y],Z) + g({f}[-{f} Z,X],{f} {Y}) + \sum\nolimits_{i}\eta^i([{f} Z,X])\eta^i(Y) - g([{f} Z,X],\widetilde{Q}Y)\\ & +g({f}[Y,{f} Z],{f} {X}) - \sum\nolimits_{i}\eta^i([Y,{f} Z])\,\eta^i(X) + g([Y,{f} Z], \widetilde{Q}X) . \end{align*} We also have \begin{eqnarray*}
g(N^{\,(1)}(Y,Z),{f} X) = g({f}^2 [Y,Z] + [{f} Y, {f} Z] - {f}[{f} Y,Z] - {f}[Y,{f} Z], {f} X)\\
= - g({f}[Y,Z], \widetilde{Q} X) + g([{f} Y, {f} Z] - {f}[{f} Y,Z] - {f}[Y,{f} Z] - [Y,Z], {f} X). \end{eqnarray*} From this and \eqref{3.3} we get the required result. \end{proof}
\begin{remark}\rm
For particular values of the tensor $N^{\,(5)}$ we get \begin{eqnarray}\label{KK} \nonumber
N^{\,(5)}(X,\xi_i,Z) & = & - N^{\,(5)}(X, Z, \xi_i) = g( N^{\,(3)}_i(Z),\, \widetilde{Q} X),\\
\nonumber
N^{\,(5)}(\xi_i,Y,Z) &=& g([\xi_i, {f} Z], \widetilde{Q}Y) -g([\xi_i,{f} Y], \widetilde{Q}Z),\\
N^{\,(5)}(\xi_i,Y,\xi_j) &=& N^{\,(5)}(\xi_i,\xi_j, Y) =0. \end{eqnarray} \end{remark}
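The first line of \eqref{KK} shows how $N^{\,(5)}$ collapses on characteristic arguments: substituting $Y=\xi_i$ into the definition of $N^{\,(5)}$, every term containing $f\,\xi_i=0$ or $\widetilde{Q}\,\xi_i=0$ drops out, leaving

```latex
\begin{align*}
  N^{\,(5)}(X,\xi_i,Z)
  &= g\big([\xi_i, fZ] - f[\xi_i, Z],\ \widetilde{Q}X\big) \\
  &\overset{\eqref{3.3B}}{=} g\big((\pounds_{\xi_i}f)Z,\ \widetilde{Q}X\big)
   \overset{\eqref{2.8X}}{=} g\big(N^{\,(3)}_i(Z),\ \widetilde{Q}X\big) .
\end{align*}
```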
We will discuss the meaning of $\nabla_{X}{f}$ for weak almost para-${\mathcal S}$- and weak para-${\mathcal K}$- structures.
The following corollary of Proposition~\ref{lem6.1} and Theorem~\ref{thm6.2} generalizes well-known results with $Q={\rm id}_{TM}$.
\begin{corollary}\label{cor3.1} For a weak almost para-${\mathcal S}$-structure we get \begin{align}\label{3.1A} \nonumber
2\,g((\nabla_{X}{f})Y,Z) & = - g(N^{\,(1)}(Y,Z),{f} X) +2\,g(fX,fY)\,\bar\eta(Z) \\
& -2\,g(fX,fZ)\,\bar\eta(Y) + N^{\,(5)}(X,Y,Z),
\end{align} where $\bar\eta=\sum\nolimits_{i}\eta^i$. In particular, taking $X=\xi_i$ and then $Y=\xi_j$ in \eqref{3.1A}, we get \begin{align}\label{3.1AA}
2\,g((\nabla_{\xi_i}{f})Y,Z) &= N^{\,(5)}(\xi_i,Y,Z) ,\quad 1\le i\le p,
\end{align} and \eqref{6.1e}; thus, the characteristic distribution
is tangent to a totally geodesic foliation with flat leaves.
\end{corollary}
\begin{proof} According to Theorem~\ref{thm6.2}, for a weak almost para-${\mathcal S}$-structure we have
$d\eta^i = \Phi$ and $N^{\,(2)}_i= N^{\,(4)}_{ij}=0$.
Thus, invoking \eqref{2.3} and using Theorem~\ref{thm6.2} in \eqref{3.1}, we get \eqref{3.1A}.
From \eqref{3.1AA} with $Y=\xi_j$ we get $g(f\nabla_{\xi_i}\,\xi_j, Z)=0$, thus $\nabla_{\xi_i}\,\xi_j\in\ker f$.~Also, \[
\eta^k([\xi_i,\xi_j])= -2\,d\eta^k(\xi_i,\xi_j) =-2\,g(\xi_i, f\xi_j)=0; \] hence, $[\xi_i,\xi_j]=0$, i.e., $\nabla_{\xi_i}\,\xi_j=\nabla_{\xi_j}\,\xi_i$. Finally, from $g(\xi_j,\xi_k)=\delta_{jk}$, using the covariant derivative with respect to $\xi_i$ and the above equality, we get $\nabla_{\xi_i}\,\xi_j\in f(TM)$. This together with $\nabla_{\xi_i}\,\xi_j\in\ker f$ proves \eqref{6.1e}. \end{proof}
\section{The tensor field $h$} \label{sec:3a}
Here, we apply the tensor field $h=(h_1,\ldots,h_p)$ to a weak almost para-${\mathcal S}$-manifold, where
$h_i=\frac{1}{2}\, N^{\,(3)}_i = \frac{1}{2}\,\pounds_{\xi_i}{f}$ .
By Theorem~\ref{thm6.2}, $h_i=0$ if and only if $\xi_i$ is a Killing field. First, we calculate \begin{align}\label{4.2} \nonumber
& (\pounds_{\xi_i}{f})X \overset{\eqref{3.3B}} = \nabla_{\xi_i}({f} X) - \nabla_{{f} X}\,\xi_i - {f}(\nabla_{\xi_i}X - \nabla_{X}\,\xi_i) \\
&\ = (\nabla_{\xi_i}{f})X - \nabla_{{f} X}\,\xi_i + {f}\nabla_X\,\xi_i. \end{align} For $X=\xi_i$ in \eqref{4.2}, using $g((\nabla_{\xi_i}{f})\,\xi_j,Z)=\frac12 N^{\,(5)}(\xi_i,\xi_j,Z)=0$, see \eqref{3.1AA}, and $\nabla_{\xi_i}\,\xi_j=0$, see Corollary~\ref{cor3.1}, we get \begin{align}\label{4.2b}
h_i\,\xi_j = 0. \end{align} The following result generalizes the fact that for an almost para-${\mathcal S}$-structure, each tensor $h_i$ is self-adjoint and anti-commutes with ${f}$.
\begin{proposition}
For a weak almost para-${\mathcal S}$-structure, the tensor $h_i$ and its conjugate $h_i^*$ satisfy \begin{eqnarray}\label{E-31}
g((h_i-h_i^*)X, Y) &=& \frac{1}{2}\,N^{\,(5)}(\xi_i, X, Y),\\
\label{E-30b}
\nabla\,\xi_i &=&
Q^{-1} {f}\, h^*_i - f , \\
\label{E-31A}
h_i{f}+{f}\, h_i &=& -\frac12\,\pounds_{\xi_i}\widetilde{Q}.
\end{eqnarray} \end{proposition}
\begin{proof} (i) The scalar product of \eqref{4.2} with $Y$,
using \eqref{3.1AA}, gives \begin{align}\label{4.3}
g((\pounds_{\xi_i}{f})X,Y) &= \frac12\,N^{\,(5)}(\xi_i, X, Y) + g({f}\nabla_{X}\,\xi_i - \nabla_{{f} X}\,\xi_i,\ Y). \end{align} Similarly, \begin{align}\label{4.3b}
g((\pounds_{\xi_i}{f})Y,X) &=
\frac12\,N^{\,(5)}(\xi_i, Y, X) +g({f}\nabla_{Y}\,\xi_i - \nabla_{{f} Y}\,\xi_i,\ X). \end{align} Using \eqref{2.7X} and $(fX)(\eta^i(Y)) -(fY)(\eta^i(X))\equiv0$ (this vanishes if either $X$ or $Y$ equals $\xi_j$ and also for $X$ and $Y$ in~$f(TM)$), we get
$N^{\,(2)}_i (X,Y) = \eta^i([f Y, X]-[f X, Y])$.
Thus, the difference of \eqref{4.3} and \eqref{4.3b} gives \begin{align*}
2\,g((h_i-h_i^*)X,Y) = N^{\,(5)}(\xi_i, X, Y) - N^{\,(2)}_i (X,Y). \end{align*} From this and equality $N^{\,(2)}_i=0$ (see Theorem~\ref{thm6.2}) we get \eqref{E-31}.
(ii) From Corollary \ref{cor3.1} with $Y=\xi_i$, we find \begin{align}\label{4.4}
g((\nabla_{X}{f})\xi_i,Z) &= -\frac12\,g(N^{\,(1)}(\xi_i,Z),{f} X)
-g({f} X, {f} Z)
+ \frac12\,N^{\,(5)}(X,\xi_i,Z). \end{align}
Note that $\frac 12\,N^{\,(5)}(X,\xi_i,Z)= g(h_i Z, \widetilde Q X)$, see \eqref{KK}.
By \eqref{2.5} with $Y=\xi_i$, we get \begin{align}\label{2.5B}
[{f},{f}](X,\xi_i) = {f}^2 [X,\xi_i] - {f}[{f} X,\xi_i] = f N^{\,(3)}_i (X). \end{align} Using \eqref{2.2}, \eqref{3.3B} and \eqref{2.5B}, we calculate \begin{align} \label{4.4A}
g([{f},{f}](\xi_i,Z),{f} X)&= g({f}^2\,[\xi_i,Z] - {f}[\xi_i,{f} Z],{f} X) = - g({f}(\pounds_{\xi_i}{f})Z,{f} X)\notag\\
&= g((\pounds_{\xi_i}{f})Z,QX) -\sum\nolimits_{j}\eta^j(X)\,\eta^j((\pounds_{\xi_i}{f})Z) .
\end{align}
From \eqref{2.3} we have $g([X,\xi_i], \xi_k) = 2\,d\eta^k(\xi_i,X)=2\,\Phi(\xi_i,X)=0$. By \eqref{6.1e}, we get $g(\nabla_X\,\xi_i, \xi_k) = g(\nabla_{\xi_i}X, \xi_k) = -g(\nabla_{\xi_i}\xi_k, X) = 0$ for $X\in f(TM)$, thus \begin{align}\label{E-30-xi}
g(\nabla_{X}\,\xi_i,\ \xi_k) = 0,\quad X\in TM,\ 1\le i,k \le p . \end{align} Using \eqref{4.2}, we get \begin{align}\label{3.1A3}
2\,g((\nabla_{\xi_i}{f})Y,\xi_j) \overset{\eqref{3.1AA}}= N^{\,(5)}(\xi_i,Y,\xi_j) \overset{\eqref{KK}}=0 . \end{align}
From \eqref{4.2}, \eqref{E-30-xi} and \eqref{3.1A3} we get \begin{align}\label{4.5}
g((\pounds_{\xi_i}{f})X,\xi_j) = -g(\nabla_{{f} X}\,\xi_i,\xi_j) = 0. \end{align} Since ${f}\,\xi_i=0$, we find \begin{align}\label{4.5A}
(\nabla_{X}{f})\,\xi_i = -{f}\,\nabla_{X}\,\xi_i. \end{align} Thus, combining \eqref{4.4}, \eqref{4.4A} and \eqref{4.5}, we find \begin{eqnarray}\label{4.6} \nonumber
& -g({f}\,\nabla_{X}\xi_i, Z) = g(X,QZ) - g(h_iZ,QX) - \sum\nolimits_{j}\eta^j(X)\eta^j(Z) + g(h_i Z, \widetilde Q X) \\
& = g(h_iZ, X) + g(X,QZ) - \sum\nolimits_{j}\eta^j(X)\,\eta^j(Z) + g(h_i Z, \widetilde Q X) .
\end{eqnarray} Replacing $Z$ by ${f} Z$ in \eqref{4.6} and using \eqref{2.1}, \eqref{E-30-xi} and ${f}\,\xi_i=0$, we achieve \eqref{E-30b}: \begin{align*}
g(Q\,\nabla_{X}\,\xi_i, Z) = g(({f} Q -h_i{f}) Z, X)
= g( {f}( h^*_i - Q) X, Z) .
\end{align*}
(iii) Using \eqref{2.1}, we obtain \begin{eqnarray*}
& {f}\nabla_{\xi_i}{f} +(\nabla_{\xi_i}{f}){f} = \nabla_{\xi_i}\,({f}^2)
= \nabla_{\xi_i}\widetilde{Q} -\nabla_{\xi_i}(\sum\nolimits_{j}\eta^j\otimes \xi_j) , \end{eqnarray*} where in view of \eqref{6.1e}, we get $\nabla_{\xi_i}(\sum\nolimits_{j}\eta^j\otimes \xi_j)=0$. From the above and \eqref{4.2}, we get \eqref{E-31A}: \begin{eqnarray*}
&& 2(h_i{f}+{f} h_i)X = {f}(\pounds_{\xi_i}{f})X +(\pounds_{\xi_i}{f}){f} X \\
&& = {f}(\nabla_{\xi_i}{f})X +(\nabla_{\xi_i}{f}){f} X +{f}^2\nabla_X\,\xi_i -\nabla_{{f}^2 X}\,\xi_i \\
&& = -(\nabla_{\xi_i}\widetilde{Q})X -\widetilde{Q}\nabla_X\,\xi_i+\nabla_{\widetilde{Q}X}\,\xi_i
+\sum\nolimits_{j}\big(g(\nabla_X\,\xi_i, \xi_j)\,\xi_j -g(X, \xi_j)\nabla_{\xi_j}\,\xi_i\big) \\
&& = [\widetilde{Q}X, \xi_i] - \widetilde{Q}\,[X, \xi_i]
= -(\pounds_{\xi_i}\widetilde{Q})X .
\end{eqnarray*}
We used \eqref{6.1e} and
\eqref{E-30-xi} to show $\sum\nolimits_{j}\big(g(\nabla_X\,\xi_i, \xi_j)\,\xi_j -g(X, \xi_j)\nabla_{\xi_j}\,\xi_i\big)=0$.
\end{proof}
\begin{remark}\rm For a weak almost para-${\mathcal S}$-structure, using \eqref{3.1A3}, we find
\[
2\,g(h_i X, \xi_j)=-g(\nabla_{f X}\,\xi_i, \xi_j) \overset{\eqref{E-30-xi}}= 0; \] thus, the distribution $f(TM)$ is invariant under $h_i$; moreover, $h^*_i\,\xi_j = 0$, see also \eqref{4.2b}. \end{remark}
The next statement follows from Propositions~\ref{thm6.1} and \ref{lem6.1}.
\begin{corollary}
For a weak para-${\mathcal K}$-structure, we have
\begin{eqnarray}\label{3.1K} \nonumber
&& 2\,g((\nabla_{X}{f})Y,Z) =
\sum\nolimits_{i}\big( 2\,d\eta^i({f} Y,X)\,\eta^i(Z) - 2\,d\eta^i({f} Z,X)\,\eta^i(Y) \\
&& +\,\eta^i([\widetilde{Q}Y,\,{f} Z])\,\eta^i(X) \big)
+ N^{\,(5)}(X,Y,Z).
\end{eqnarray} In particular, using \eqref{E-31} with $h_i=0$, gives
$2\,g((\nabla_{\xi_i}{f})Y,Z) = \eta^i([\widetilde{Q} Y,\,{f} Z])$ for $1\le i\le p$. \end{corollary}
\section{The rigidity of a para-${\mathcal S}$-structure} \label{sec:3}
An important class of metric para-$f$-manifolds is given by para-${\mathcal S}$-manifolds. Here, we study a wider class of weak para-${\mathcal S}$-manifolds and prove the rigidity theorem for para-${\mathcal S}$-manifolds.
\begin{proposition}
For a weak para-${\mathcal S}$-structure we get \begin{eqnarray}\label{4.10} \nonumber
& g((\nabla_{X}{f})Y,Z) = g(QX,Z)\,\bar\eta(Y) - g(QX,Y)\,\bar\eta(Z) +\frac{1}{2}\, N^{\,(5)}(X,Y,Z) \\
& -\sum\nolimits_{j} \eta^j(X)\big(\bar\eta(Y)\eta^j(Z) - \eta^j(Y)\bar\eta(Z)\big) . \end{eqnarray} \end{proposition}
\begin{proof} Since $({f},Q,\xi_i,\eta^i,g)$ is a metric weak $f$-structure with $N^{\,(1)}=0$, by Corollary~\ref{cor3.1}, we get~\eqref{4.10}.
\end{proof}
\begin{remark}\rm Using $Y=\xi_i$ in \eqref{4.10}, we get $f\nabla_{X}\,\xi_i = -f^2X - \frac{1}{2}\,(N^{\,(5)}(X,\xi_i,\,\cdot))^\flat$, which gene\-ralizes the equality $\nabla_{X}\,\xi_i=-fX$ for a para-${\mathcal S}$-structure, e.g., \cite{FP-2017}.
\end{remark}
It was shown in \cite{RWo-2} that a weak almost para-${\mathcal S}$-structure with positive partial Ricci curvature can be deformed to an almost para-${\mathcal S}$-structure. The~main result in this section is the following rigidity theorem.
\begin{theorem}\label{T-4.1} A metric weak para-$f$-structure
is a weak para-${\mathcal S}$-structure if and only if it is a para-${\mathcal S}$-structure. \end{theorem}
\begin{proof} Let $({f},Q,\xi_i,\eta^i,g)$ be a weak para-${\mathcal S}$-structure. Since $N^{\,(1)}=0$, by Proposition~\ref{thm6.1}, we get $N^{\,(3)}_i=0$. By \eqref{KK}, we then obtain $N^{\,(5)}(\cdot\,,\xi_i,\,\cdot\,)=0$.
Recall that $\widetilde{Q}X = Q X - X$ and $\eta^j(\widetilde{Q} X)=0$. Using the above and $Y=\xi_i$ in \eqref{4.10}, we~get \begin{align}\label{4.12} \nonumber
& g((\nabla_{X}{f})\,\xi_i,Z) = g(QX,Z) -\eta^i(Q X)\,\bar\eta(Z)
+\sum\nolimits_{j} \eta^j(X)\big(\eta^j(Z) - \delta^j_i\,\bar\eta(Z)\big)\\ \nonumber
& = g(Q X^\top, Z) +\sum\nolimits_{j} \eta^j(Z)\big(\eta^j(Q X) - \eta^i(Q X)\big) -\sum\nolimits_{j} \eta^j(Z)\big(\eta^j(X) - \eta^i(X)\big)\\
& = g(Q X^\top, Z) +\sum\nolimits_{j} \eta^j(Z)\big(\eta^j(\widetilde{Q} X) - \eta^i(\widetilde{Q} X)\big)
= g(Q X^\top, Z) . \end{align} Using \eqref{4.5A}, we rewrite \eqref{4.12} as $g(\nabla_{X}\,\xi_i,{f} Z) = g(Q X^\top,Z)$. By the above and \eqref{2.1}, we find \begin{align}\label{4.14}
g(\nabla_X\,\xi_i +{f} X^\top, \,{f}\,Z) = 0. \end{align}
Since ${f}$ is skew-symmetric, applying \eqref{4.10} with $Z=\xi_i$ in \eqref{4.NN}, we obtain \begin{eqnarray}\label{4.17} && g( [{f},{f}](X,Y),\xi_i) = g([{f} X, {f} Y], \xi_i) = g((\nabla_{{f} X}{f})Y, \xi_i) - g((\nabla_{{f} Y}{f})X, \xi_i) \notag\\ &&\quad = g(Q\,{f} Y,X) - g(Q\,{f} Y, \xi_i)\,\bar\eta(X) -g(Q\,{f} X,Y) +g(Q\,{f} X, \xi_i)\,\bar\eta(Y) . \end{eqnarray} Recall that $[Q,\,{f}]=0$ and $f\,\xi_i=0$. Thus, \eqref{4.17} yields for all $i$, \begin{align*}
g( [{f},{f}](X,Y), \xi_i) = 2\,g(QX,{f} Y) . \end{align*} From this, using the definition of $N^{\,(1)}$, we get for all $i$, \begin{align}\label{4.18}
g(N^{\,(1)}(X,Y), \xi_i) = 2\,g(\widetilde{Q} X, {f} Y) . \end{align} From $N^{\,(1)}=0$ and \eqref{4.18} we get $g(\widetilde{Q} X, {f} Y)=0$ for all $X,Y\in \mathfrak{X}_M$; thus, $\widetilde Q=0$. \end{proof}
For a weak almost para-${\mathcal S}$-structure all $\xi_i$ are Killing if and only if $h=0$, see Theorem~\ref{thm6.2}. The equality $h=0$ holds for a weak para-${\mathcal S}$-structure since it is true for a para-${\mathcal S}$-structure, see Theorem~\ref{T-4.1}. We~will prove this property of a weak para-${\mathcal S}$-structure directly.
\begin{corollary}
For a weak para-${\mathcal S}$-structure, $\xi_1,\ldots,\xi_p$ are Killing vector fields; moreover, $\ker f$ is integrable and defines a Riemannian totally geodesic foliation. \end{corollary}
\begin{proof} In view of \eqref{4.5A} and $\bar\eta(\xi_i)=1$, Eq. \eqref{4.10} with $Y=\xi_i$ becomes \begin{align}\label{4.6A}
g(\nabla_{X}\,\xi_i, {f} Z) = -\eta^i(X)\,\bar\eta(Z) +g(X,QZ) + \frac 12\,N^{\,(5)}(X,\xi_i,Z) . \end{align} Combining \eqref{4.6} and \eqref{4.6A}, and using \eqref{E-30-xi}, we achieve for all $i$ and $X,Z$, \begin{align*}
g(h_iZ,QX) = \sum\nolimits_{j}\eta^j(X)\,\eta^j(Z) -\eta^i(X)\,\bar\eta(Z),
\end{align*} which implies $h_iZ=0$ for all $i$ and $Z\in f(TM)$ (since $Q$ is nonsingular). This and \eqref{4.2b} yield $h=0$. By~Theorem~\ref{thm6.2}, $\ker f$ defines a totally geodesic foliation. Since $\xi_i$ is a Killing field, we~get \[
0 = (\pounds_{\xi_i}\,g)(X,Y) = g(\nabla_{X}\,\xi_i, Y) + g(\nabla_{Y}\,\xi_i, X)
= -g(\nabla_{X} Y + \nabla_{Y} X,\ \xi_i) \] for all $i$ and $X,Y\bot\,\ker f$. Thus, $f(TM)$ is totally geodesic, i.e., $\ker f$ defines a Riemannian foliation. \end{proof}
For $p=1$, from Theorem~\ref{T-4.1} we have the following
\begin{corollary}
A weak almost paracontact metric structure on $M^{2n+1}$ is a weak para-Sasakian structure if and only if it is a para-Sasakian structure, i.e., a normal weak paracontact metric structure, on $M^{2n+1}$. \end{corollary}
\section{The characteristic of a weak para-${\mathcal C}$-structure} \label{sec:4}
An important class of metric para-$f$-manifolds is given by para-${\mathcal C}$-mani\-folds. Recall that $\nabla_{X}\,\xi_i=0$ holds on para-${\mathcal C}$-manifolds.
\begin{proposition} Let $({f},Q,\xi_i,\eta^i,g)$ be a weak
para-${\mathcal C}$-structure. Then \begin{align}\label{6.1}
& 2\,g((\nabla_{X}{f})Y,Z) = N^{\,(5)}(X,Y,Z),\\ \label{6.1b}
& 0 = N^{\,(5)}(X,Y,Z) + N^{\,(5)}(Y,Z,X) + N^{\,(5)}(Z, X, Y) ,\\ \label{6.1c}
0 & = N^{\,(5)}({f} X,Y,Z) + N^{\,(5)}({f} Y,Z,X) + N^{\,(5)}({f} Z, X, Y) . \end{align}
Using \eqref{6.1} with $Y=\xi_i$ and \eqref{2.1}, we get \begin{align*}
g(\nabla_{X}\,\xi_i,\,Q Z) = -\frac12\,N^{\,(5)}(X,\xi_i,{f} Z). \end{align*} \end{proposition}
\begin{proof} For a weak almost para-${\mathcal C}$-structure $({f},Q,\xi_i,\eta^i,g)$, using Theorem~\ref{thm6.2C}, from \eqref{3.1}
we get \begin{equation}\label{6.1a}
2\,g((\nabla_{X}{f})Y,Z)= - g([{f},{f}](Y,Z),{f} X) + N^{\,(5)}(X,Y,Z). \end{equation} From \eqref{6.1a}, using condition $[{f},{f}]=0$ we get \eqref{6.1}. Using \eqref{3.3} and \eqref{6.1}, we write \[
0 = 3\,d\Phi(X,Y,Z) = g((\nabla_X\,{f})Z,Y) +g((\nabla_Y\,{f})X, Z) +g((\nabla_Z\,{f})Y, X); \] hence, \eqref{6.1b} is true. Using \eqref{4.NN}, \eqref{6.1} and the skew-symmetry of ${f}$, we obtain \begin{align*}
0 & = 2\,g([{f},{f}](X,Y),Z) \\
& = N^{\,(5)}(X, Y, {f} Z) + N^{\,(5)}({f} X, Y, Z)
- N^{\,(5)}(Y, X, {f} Z) - N^{\,(5)}({f} Y,X,Z) . \end{align*} This and \eqref{6.1b} with $X$ replaced by ${f} X$ provide \eqref{6.1c}. \end{proof}
Recall that $X^\bot = \sum\nolimits_{i}\eta^i(X)\,\xi_i$. Consider a weaker condition than \eqref{6.1d}: \begin{align}\label{E-xi31}
[\xi_i,\xi_j]^\bot =0,\quad 1\le i,j\le p. \end{align}
In the following theorem, we characterize weak para-${\mathcal C}$-manifolds in a wider class of metric weak para-$f$-manifolds using the condition $\nabla{f}=0$.
\begin{theorem}\label{thm6.2D} A metric weak para-$f$-structure with $\nabla{f}=0$ and \eqref{E-xi31} is a~weak para-${\mathcal C}$-structure with $N^{\,(5)}=0$. \end{theorem}
\begin{proof} Using condition $\nabla{f}=0$, from \eqref{4.NN} we obtain $[{f},{f}]=0$. Hence, from \eqref{2.6X} we get $N^{\,(1)}(X,Y)=-2\,\sum\nolimits_{i} d\eta^i(X,Y)\,\xi_i$, and from \eqref{4.NNxi} we obtain \begin{align}\label{E-cond1}
\nabla_{{f} X}\,\xi_i - {f}\,\nabla_{X}\,\xi_i = 0,\quad X\in \mathfrak{X}_M. \end{align} From \eqref{3.3}, we calculate \[
3\,d\Phi(X,Y,Z) = g((\nabla_{X}{f})Z, Y) + g((\nabla_{Y}{f})X,Z) + g((\nabla_{Z}{f})Y,X); \] hence, using condition $\nabla{f}=0$ again, we get $d\Phi=0$. Next,
$N^{\,(2)}_i(Y,\xi_j) = -\eta^i([{f} Y,\xi_j]) = g(\xi_j, {f}\nabla_{\xi_i} Y) =0$.
Setting $Z=\xi_j$ in \eqref{3.1} and using the condition $\nabla{f}=0$ and the properties $d\Phi=0$, $N^{\,(2)}_i(Y,\xi_j)=0$ and $N^{\,(1)}(X,Y)=-2\sum\nolimits_{i} d\eta^i(X,Y)\,\xi_i$, we find
$0 = 2\,d\eta^j({f} Y, X) - N^{\,(5)}(X,\xi_j, Y)$.
By~\eqref{KK} and~\eqref{E-cond1},
\[
N^{\,(5)}(X,\xi_j, Y) = g([\xi_j,{f} Y] -{f}[\xi_j,Y],\, \widetilde{Q} X)
= g(\nabla_{{f} Y}\,\xi_j - {f}\,\nabla_{Y}\,\xi_j,\, \widetilde{Q} X) = 0; \] hence, $d\eta^j({f} Y, X)=0$. From this and $g([\xi_i,\xi_j],\xi_k)=2\,d\eta^k(\xi_j, \xi_i)=0$ we get $d\eta^j=0$. By the above, $N^{\,(1)}=0$. Thus, $({f},Q,\xi_i,\eta^i,g)$ is a weak para-${\mathcal C}$-structure. Finally, from \eqref{6.1} and condition $\nabla{f}=0$ we get $N^{\,(5)}=0$. \end{proof}
\begin{corollary} A normal metric weak para-$f$-structure with
$\nabla f=0$ is a~weak para-${\mathcal C}$-structure with
$N^{\,(5)}=0$. \end{corollary}
\begin{proof} By $N^{\,(1)}=0$, we get $d\eta^i =0$ for all $i$. As in Theorem~\ref{thm6.2D}, we get $d\Phi=0$. \end{proof}
\begin{example}\rm Let $M$ be a $2n$-dimensional smooth manifold and $\tilde{f}:TM\to TM$ an endomorphism of rank $2n$ such that $\nabla\tilde{f}=0$. To construct a weak para-${\mathcal C}$-structure on $M\times\mathbb{R}^p$ (or $M\times \mathbb{T}^p$, where
$\mathbb{T}^p$ is a $p$-dimensional flat torus), take any point $(x, t_1,\ldots,t_p)$
and set $\xi_i = (0, d/dt_i)$, $\eta^i =(0, dt_i)$~and \[
{f}(X, Y) = (\tilde{f} X,\, 0),\quad
Q(X, Y) = (\tilde{f}^{\,2} X,\, Y), \] where $X\in T_xM$ and $Y=\sum_i Y^i\xi_i$ is tangent to the $\mathbb{R}^p$ (or $\mathbb{T}^p$) factor. Then \eqref{2.1} holds and Theorem~\ref{thm6.2D} can be used. \end{example}
For $p=1$, from Theorem~\ref{thm6.2D} we have the following
\begin{corollary}
Any weak almost paracontact structure $(\varphi,Q,\xi,\eta,g)$ with the property $\nabla\varphi=0$ is a~weak para-cosymplectic structure.
\end{corollary}
\end{document} |
\begin{document}
\begin{center} {\large \bf Some Notes on Pairs in Binary Strings}\\ Jeremy M. Dover \end{center} \begin{abstract} Seth~\cite{1812699} posed a problem that is equivalent to the following: how many binary strings of length $n$ have exactly $k$ pairs of consecutive 0s and exactly $m$ pairs of consecutive 1s, where the first and last bits are considered as being consecutive? In this paper, we provide a closed form solution which also solves a related problem with some interesting connections to other combinatorial sequences. \end{abstract}
\section{The Setup} Seth~\cite{1812699} posed the following problem: consider a microstate consisting of 8 spins, where a microstate is a linear ordering of spins, each of which may be in the up or down state. Seth asks how many of these 8-spin microstates have exactly 2 ``up parallel pairs" and 2 ``down parallel pairs", where an ``up parallel pair" is two consecutive up states, and the obvious meaning of consecutive for the linear ordering is extended so that the first and last states are also considered consecutive. A ``down parallel pair" is defined analogously. Note that despite the first and last states being considered consecutive, the microstate is still considered to have a first and last state, so rotations of the state pattern are counted as being different.
It is not hard to cast this problem into a question about binary strings, where an ``up spin" is a 0, and a ``down spin" a 1, namely finding the cardinality of the set $S^\circ(n,k,m)$, the set of all binary strings of length $n$ with $k$ pairs of consecutive 0s and $m$ pairs of consecutive 1s, with the first and last bits considered consecutive. In order to address this problem, we define the related set $Z(n,k,m)$ to be the set of binary strings of length $n$ that start with 0 and have $k$ pairs of consecutive 0s and $m$ pairs of consecutive 1s, where the first and last bits are {\em not} considered consecutive. We denote the cardinality of $Z(n,k,m)$ as $z(n,k,m)$. We now show that $\left|S^\circ(n,k,m)\right|$ can be determined in terms of the values $z(n,k,m)$. In what follows, we will assume unless otherwise stated that the first and last bits of a binary string are not considered consecutive. For brevity, we refer to a pair of consecutive 0s (resp. 1s) in a binary string as a 0-pair (resp. 1-pair); this terminology will specifically not be used for the first and last bits, in those cases where they are considered consecutive.
The first issue to address is that some of the binary strings in $S^\circ(n,k,m)$ begin with 1. However, we note that the operation of inverting each bit of a binary string of length $n$ is obviously a bijective involution from the set of all binary strings of length $n$ onto itself, and shows that the number of binary strings that start with a 1 and have $k$ 0-pairs and $m$ 1-pairs is exactly $z(n,m,k)$.
We know that the elements of $Z(n,k,m)$ begin with 0, but we do not necessarily know how they end, which is an important consideration when analyzing $S^\circ(n,k,m)$. The following lemma provides an answer to this question.
\begin{lemma} \label{end} Let $n,k,m$ be integers such that $n \ge 1 $ and $k,m \ge 0$, and let $b$ be a binary string of length $n$ containing $k$ 0-pairs and $m$ 1-pairs. Then the last bit of $b$ is the same as the first bit of $b$ if and only if $n+k+m$ is odd. \end{lemma}
\begin{proof} Given a binary string $b$ of length $n$, assign to each pair of consecutive bits (of which there are $n-1$) the letter S if they are the same, and D if they are different. Since $b$ has $k$ 0-pairs and $m$ 1-pairs, there are exactly $m+k$ Ss, and thus there are $n-1-m-k$ Ds. Reading from left to right, we only change values in the string when we encounter a D, so it is easy to see that the last bit of the string depends only on the parity of $n-1-m-k$. If this value is odd, then the last bit of $b$ is different from the first bit, while these bits will be the same if the number of Ds is even. Since $n+m+k$ and $n-1-m-k$ have opposite parity, we obtain the result. \end{proof}
Let's use these facts to determine $\left|S^\circ(n,k,m)\right|$. Let $b \in S^\circ(n,k,m)$ be a binary string. If the first and last bits of $b$ are different, then by Lemma~\ref{end} we must have $n+k+m$ even, since $b$ has $k$ 0-pairs and $m$ 1-pairs. If the first and last bits of $b$ are the same, then $b$ has either $k-1$ 0-pairs and $m$ 1-pairs, or $k$ 0-pairs and $m-1$ 1-pairs; in either case Lemma~\ref{end} shows that $n+k+m-1$ must be odd, or $n+k+m$ is even. This shows that if $n+k+m$ is odd, then there are {\em no} binary strings in $S^\circ(n,k,m)$. But if $n+k+m$ is even, then all of the binary strings in $Z(n,k,m)$ and $Z(n,k-1,m)$ are in $S^\circ(n,k,m)$, as are the inverses of the strings in $Z(n,m,k)$ and $Z(n,m-1,k)$. Thus we have: $$\left|S^\circ(n,k,m)\right| = \begin{cases} 0 & n+k+m\,{\rm odd}\\z(n,k,m)+z(n,k-1,m)+{}\\ \quad z(n,m,k)+z(n,m-1,k) & n+k+m\,{\rm even}\end{cases}$$
The important takeaway from this section is that our original problem can be solved strictly by consideration of the numbers $z(n,k,m)$, which we focus on exclusively in what follows.
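The reduction above is easy to check numerically. The following sketch (our own illustration, not part of the paper; all function names are ours) brute-forces both $\left|S^\circ(n,k,m)\right|$ and $z(n,k,m)$ directly from their definitions and compares against the case formula for small $n$.

```python
from itertools import product

def pair_counts(bits, cyclic=False):
    """Return (#00-pairs, #11-pairs) over consecutive positions of bits."""
    n = len(bits)
    idx = range(n) if cyclic else range(n - 1)
    zz = sum(1 for i in idx if bits[i] == bits[(i + 1) % n] == 0)
    oo = sum(1 for i in idx if bits[i] == bits[(i + 1) % n] == 1)
    return zz, oo

def z_brute(n, k, m):
    """|Z(n,k,m)|: strings of length n starting with 0, linear pair counts."""
    if n < 1 or k < 0 or m < 0:
        return 0
    return sum(1 for b in product((0, 1), repeat=n)
               if b[0] == 0 and pair_counts(b) == (k, m))

def s_circ_brute(n, k, m):
    """|S^circ(n,k,m)|: all strings of length n, cyclic pair counts."""
    return sum(1 for b in product((0, 1), repeat=n)
               if pair_counts(b, cyclic=True) == (k, m))

def s_circ_formula(n, k, m):
    """The case formula: 0 when n+k+m is odd, a sum of four z-values otherwise."""
    if (n + k + m) % 2 == 1:
        return 0
    return (z_brute(n, k, m) + z_brute(n, k - 1, m)
            + z_brute(n, m, k) + z_brute(n, m - 1, k))
```

For instance, $\left|S^\circ(4,1,1)\right| = 4$: the four rotations of 0011.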
\section{Some recurrence relations for $z(n,k,m)$}
The boundary conditions for $z(n,k,m)$ are fairly straightforward; for convenience we define $z(n,k,m) = 0$ for all $n < 0$, $k < 0$ or $m < 0$. Noting that a binary string of length $n$ only has $n-1$ pairs of consecutive bits, we know that $z(n,k,m) = 0$ for all integers $k,m$ such that $k+m \ge n$. Moreover, if $k+m = n-1$, our numbers $z(n,k,m)$ count the number of strings where all pairs of consecutive bits are identical, with the first bit of the string being 0. This forces the string to be entirely 0s, showing that for integers $k,m$ with $k+m = n-1$, $z(n,k,m) = 0$ unless $k=n-1$ and $m=0$, in which case $z(n,n-1,0) = 1$.
We now derive several different recurrence relations for the $z(n,k,m)$, which each have different uses.
\begin{thm} \label{recur1} Let $n,k,m$ be integers such that $n \ge 3$ and $k,m \ge 0$. Then $z(n,k,m) = z(n-1,k-1,m) + z(n-2,k,m) + z(n-2,m-1,k)$. \end{thm}
\begin{proof} To prove this result, we count the size of $Z(n,k,m)$ in two ways, one of which is $z(n,k,m)$ by definition. For the other count, let $b \in Z(n,k,m)$ and consider three cases: \begin{enumerate} \item If $b$ starts with 00, then $b$ consists of a 0 followed by a string of $n-1$ bits starting with 0 with $k-1$ 0-pairs and $m$ 1-pairs, of which there are $z(n-1,k-1,m)$. \item If $b$ starts with 010, then $b$ consists of 01 followed by a string of $n-2$ bits starting with 0 with $k$ 0-pairs and $m$ 1-pairs, of which there are $z(n-2,k,m)$. \item If $b$ starts with 011, then $b$ consists of 01 followed by a string of $n-2$ bits starting with 1 with $k$ 0-pairs and $m-1$ 1-pairs, of which there are $z(n-2,m-1,k)$. \end{enumerate} Note that if $m=0$, no strings in $Z(n,k,m)$ start with 011, but $z(n-2,-1,k)$ is defined to be 0, so this remains correct. Since these three cases count sets that form a disjoint union of $Z(n,k,m)$, we have the result. \end{proof}
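Theorem~\ref{recur1}, together with the boundary conditions stated above, determines every value $z(n,k,m)$. A short memoized sketch (our own illustration; the base case $n=2$ is spelled out since the recurrence requires $n\ge 3$):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def z_rec(n, k, m):
    """z(n,k,m) via Theorem recur1 plus the boundary conditions."""
    if n < 1 or k < 0 or m < 0 or k + m >= n:
        return 0
    if k + m == n - 1:
        # all consecutive pairs identical: only the all-zeros string qualifies
        return 1 if (k == n - 1 and m == 0) else 0
    if n == 2:
        return 1  # only (k, m) = (0, 0) reaches here: the string 01
    return z_rec(n - 1, k - 1, m) + z_rec(n - 2, k, m) + z_rec(n - 2, m - 1, k)
```

The values agree with a direct enumeration of $Z(n,k,m)$ for all small $n$.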
Our next recurrence is somewhat more complicated and is not universally applicable, but it counts in a way which quickly reveals an important corollary.
\begin{thm} \label{recur2} Let $n,k,m$ be integers such that $n \ge 1$ and $k+m < n-1$. Then $$z(n,k,m) = \displaystyle \sum_{f=1}^{k+1} z(n-f,m,k+1-f)$$ \end{thm}
\begin{proof} Again we proceed by counting the cardinality of $Z(n,k,m)$ in two ways, the first yielding $z(n,k,m)$. To count this set in another way, we note that since $k+m < n-1$, any $b \in Z(n,k,m)$ must contain at least one 1. Let $f$ be the position of the first 1 in $b$, so $f$ may vary between $1$ and $n-1$ (note: the first bit of $b$ has index 0). Prior to the first 1, the string consists entirely of 0s, creating $f-1$ pairs of consecutive 0s. Starting from the first 1, the remainder of the string is a binary string of length $n-f$ starting with 1 that contains exactly $k+1-f$ 0-pairs and $m$ 1-pairs, of which there are $z(n-f,m,k+1-f)$ such strings. Thus we can write $$z(n,k,m) = \displaystyle \sum_{f=1}^{n-1} z(n-f,m,k+1-f)$$ Noting that $z(n-f,m,k+1-f) = 0$ for all $f > {\rm min}\{n,k+1\}$ and that $k+1 < n$ yields the indices of summation shown in the Theorem statement. \end{proof}
\begin{cor} \label{zeroflip} Let $n,m$ be integers with $n \ge 1$ and $0 \le m < n-1$. Then $z(n,0,m) = z(n-1,m,0)$. \end{cor}
\begin{proof} Since $m < n-1$, we can apply Theorem~\ref{recur2} to $z(n,0,m)$ to obtain $z(n,0,m) = \sum_{f=1}^{1} z(n-f,m,1-f)$. Evaluating the single term of the summation yields the result. \end{proof}
Our final recurrence is nice because it shows how we can reduce the problem of calculating $z(n,k,m)$ to just those values with $m=0$.
\begin{thm} \label{generalm} Let $n,k,m$ be integers such that $n \ge 3$, $0 \le k \le n-1$, $0 < m \le n-1-k$. Then \footnotesize $$z(n,k,m) = \begin{cases}\displaystyle\sum_{f=1}^m {{k+f}\choose{f}}{{m-1}\choose{f-1}}z(n-m-f,k+f,0) & n+k+m\,{\rm odd}\\ \displaystyle\sum_{f=1}^m {{k+f-1}\choose{f-1}}{{m-1}\choose{f-1}}z(n-m-f,k+f-1,0)+{}\\ \displaystyle\sum_{f=1}^m {{k+f}\choose{f}}{{m-1}\choose{f-1}}z(n-m-f,k+f,0) & n+k+m\,{\rm even}\end{cases}$$ \normalsize \end{thm}
\begin{proof} Define the mapping $\psi$ on a string $b \in Z(n,k,m)$ to be the string obtained by deleting from $b$ all substrings of consecutive 1s of length greater than 1; notice that this operation is well-defined, and that $\psi(b)$ is unique. Let $f$ be the number of substrings of 1s removed from $b$, so clearly $f$ is between 1 and $m$. The length of $\psi(b)$ must be $n-(m+f)$, since the removal of a substring of $g$ consecutive 1s removes only $g-1$ 1-pairs. Clearly $\psi(b)$ has no 1-pairs, since all such strings are removed, and no 1-pair can be created by our deletion process. Indeed, the removal of a string of consecutive 1s creates an additional 0-pair, unless that substring is removed from the end of the original string (it cannot come from the beginning since the string starts with a zero, by definition of $Z(n,k,m)$).
So to summarize, for any $b \in Z(n,k,m)$ ending in either 0 or 01, there exists a unique integer $1 \le f \le m$ and a unique binary string $\psi(b) \in Z(n-m-f,k+f,0)$ such that $b$ can be obtained from $\psi(b)$ by injecting a total of $m+f$ 1s into $f$ 0-pairs such that each injected substring contains at least two 1s. Also, for any $b \in Z(n,k,m)$ ending in 11, there exists a unique integer $1 \le f \le m$ and a unique binary string $\psi(b) \in Z(n-m-f,k+f-1,0)$ ending in 0 such that $b$ can be obtained from $\psi(b)$ by injecting a total of $m+f$ 1s into $f-1$ 0-pairs and at the end of $\psi(b)$ such that each injected substring contains at least two 1s.
To count the number of strings in $Z(n,k,m)$ that end in 0 or 01, we follow the recipe above. Given any $1 \le f \le m$, let $b' \in Z(n-m-f,k+f,0)$, for which there are $z(n-m-f,k+f,0)$ choices. We can pick any $f$ 0-pairs in $b'$, which can be done in ${k+f}\choose{f}$ ways. To inject our substrings of 1s, we first place two 1s into each of the $f$ chosen 0-pairs, since each injected substring must contain at least two 1s; the remaining $m-f$ 1s are then ``identical balls" to be distributed into the $f$ ``distinguishable urns", which can be done in ${m-1}\choose{f-1}$ ways. Each of the binary strings constructed this way is contained in $Z(n,k,m)$, and all are distinct as discussed above. Hence $Z(n,k,m)$ contains $$\displaystyle\sum_{f=1}^m {{k+f}\choose{f}}{{m-1}\choose{f-1}}z(n-m-f,k+f,0)$$ strings ending in either 0 or 01.
To count the number of strings in $Z(n,k,m)$ that end in 11, we proceed as before. Given any $1 \le f \le m$, we pick $b' \in Z(n-m-f,k+f-1,0)$ ending in zero. From Lemma~\ref{end}, the number of possibilities for $b'$ is 0 if $n-m-f+k+f-1 = n-m+k-1$ is even, or equivalently if $n+k+m$ is odd. However, if $n+k+m$ is even, every string in $Z(n-m-f,k+f-1,0)$ ends in 0, so there are $z(n-m-f,k+f-1,0)$ choices for $b'$. Then from the $k+f-1$ 0-pairs in $b'$, we choose $f-1$ into which to inject 1s, which can be done in ${k+f-1}\choose{f-1}$ ways. Finally, we can distribute the $m+f$ 1s we need to add between these $f-1$ 0-pairs and at the end of $b'$, such that each injected string has at least two 1s, in ${m-1}\choose{f-1}$ ways, exactly as above. Therefore, if $n+k+m$ is odd, $Z(n,k,m)$ contains no strings ending in 11, while if $n+k+m$ is even, $Z(n,k,m)$ contains: $$\displaystyle\sum_{f=1}^m {{k+f-1}\choose{f-1}}{{m-1}\choose{f-1}}z(n-m-f,k+f-1,0)$$ strings ending in 11. This proves the result. \end{proof}
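The identity of Theorem~\ref{generalm} can be verified numerically. In the sketch below (our own illustration; function names are ours), the values $z(\cdot,\cdot,0)$ on the right-hand side are themselves obtained by brute force, so the check is independent of the closed form derived in the next section.

```python
from itertools import product
from math import comb

def z_bf(n, k, m):
    """Brute-force z(n,k,m), used only to cross-check the identity."""
    if n < 1 or k < 0 or m < 0:
        return 0
    return sum(1 for b in product((0, 1), repeat=n)
               if b[0] == 0
               and sum(b[i] == b[i + 1] == 0 for i in range(n - 1)) == k
               and sum(b[i] == b[i + 1] == 1 for i in range(n - 1)) == m)

def z_reduced(n, k, m):
    """Right-hand side of Theorem generalm: z(n,k,m) from values z(., ., 0)."""
    rhs = sum(comb(k + f, f) * comb(m - 1, f - 1) * z_bf(n - m - f, k + f, 0)
              for f in range(1, m + 1))
    if (n + k + m) % 2 == 0:  # strings ending in 11 occur only in this case
        rhs += sum(comb(k + f - 1, f - 1) * comb(m - 1, f - 1)
                   * z_bf(n - m - f, k + f - 1, 0) for f in range(1, m + 1))
    return rhs
```

For example, $z(6,0,2)=3$ (the strings 011011, 010111, 011101), which both sides reproduce.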
\section{The case $m=0$ and Terquem's problem}
Theorem~\ref{generalm} reduces our problem of computing $z(n,k,m)$ to the values of $z(n,k,0)$, but does not help us find these values. Fortunately, when analyzing computational data for $z(n,k,m)$, we searched the OEIS~\cite{oeis} for the case $m=0$, which yielded an unexpected connection with the sequence A046854. This sequence, which we call $T(n,k)$, represents a triangle of numbers with $n \ge 0$, $0 \le k \le n-1$ defined via $$T(n,k) = {{\left \lfloor \frac{n+k}{2} \right \rfloor}\choose {k}}$$ Combinatorially, this sequence arises as a solution to Terquem's problem~\cite{riordan}, namely providing the number of length $k$, increasing sequences of integers from $\{1 \ldots n\}$ that alternate parity and start with an odd number, as well as the number of length $k$, increasing sequences of integers from $\{1 \ldots n+1\}$ that alternate parity and start with an even number.
We can verify this numerical relationship with a quick induction proof: \begin{thm} \label{terquem} Let $n,k$ be integers such that $n > 0$ and $0 \le k \le n-1$. Then $z(n,k,0) = T(n-1,k) = {{\left \lfloor \frac{n+k-1}{2} \right \rfloor}\choose {k}}$. \end{thm}
\begin{proof} We proceed via strong induction. It is easy to calculate $z(n,k,0)$ for small values of $n$ via enumeration: \begin{description} \item[$n=1$] $Z(1,0,0) = \{0\}$, $z(1,0,0) = T(0,0) = 1$ \item[$n=2$] $Z(2,0,0) = \{01\}$, $z(2,0,0) = 1$; $Z(2,1,0) = \{00\}$, $z(2,1,0) = 1$ \item[$n=3$] $Z(3,0,0) = \{010\}$, $Z(3,1,0) = \{001\}$, $Z(3,2,0) = \{000\}$, \\$Z(3,0,1) =\{011\}$ \end{description}
By way of induction, assume $z(n,k,0) = {{\left \lfloor \frac{n+k-1}{2} \right \rfloor}\choose {k}}$ for all $n < N$, $0 \le k \le n-1$, and consider $z(N,k,0)$. By Theorem~\ref{recur1}, we have \begin{eqnarray*} z(N,k,0) &=& z(N-1,k-1,0) +z(N-2,k,0) + z(N-2,-1,k)\\ & = & z(N-1,k-1,0) +z(N-2,k,0) \\ \end{eqnarray*} Using our induction hypothesis, we have $z(N,k,0) = {{\left \lfloor \frac{N+k-3}{2} \right \rfloor}\choose {k-1}} + {{\left \lfloor \frac{N+k-3}{2} \right \rfloor}\choose {k}}$. A simple application of Pascal's identity yields the result. \end{proof}
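The closed form of Theorem~\ref{terquem} is also easy to confirm by enumeration; the sketch below (our illustration, with hypothetical names) compares it against a direct count of the strings in $Z(n,k,0)$.

```python
from itertools import product
from math import comb

def z_closed(n, k):
    """Theorem terquem: z(n,k,0) = C(floor((n+k-1)/2), k) for n >= 1."""
    # math.comb(a, b) returns 0 when b > a, which covers k > n-1 gracefully
    return comb((n + k - 1) // 2, k)

def z_bf0(n, k):
    """Brute-force count: strings starting with 0, k 0-pairs, no 1-pair."""
    return sum(1 for b in product((0, 1), repeat=n)
               if b[0] == 0
               and sum(b[i] == b[i + 1] == 0 for i in range(n - 1)) == k
               and not any(b[i] == b[i + 1] == 1 for i in range(n - 1)))
```

For instance, $z(4,1,0)=\binom{2}{1}=2$, realized by 0010 and 0100.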
Interestingly, it is possible to generate a bijection between all of the binary strings in $Z(n,k,0)$ and Terquem's sequences. A rigorous proof of this fact is tedious, but beginning with $b \in Z(n,k,0)$, define $t$ to be the sequence of positions of the first 0 in each 0-pair, where the first position in $b$ is position 1. The key is to note that to be in $Z(n,k,0)$, $b$ basically consists of runs of two or more 0s, with alternating strings of 0s and 1s between them; this alternation forces the first element of $t$ to be odd, as well as alternate parity thereafter. As an example: $$001010001010001 \rightarrow 1,6,7,12,13$$
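In code, the map $b \mapsto t$ described above is a one-liner; the following sketch (our illustration) reproduces the worked example and the parity pattern of Terquem's sequences.

```python
def terquem_sequence(b):
    """Positions (1-indexed) of the first 0 of each 0-pair in the string b."""
    return [i + 1 for i in range(len(b) - 1) if b[i] == b[i + 1] == '0']

t = terquem_sequence('001010001010001')
# t == [1, 6, 7, 12, 13]: increasing, starting odd, and alternating parity
```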
\end{document} |
\begin{document}
\title{K\"{a}hler Gradient Ricci Solitons with Large Symmetry}
\author{Hung Tran}
\date{\today}
\begin{abstract} Let $(M, g, J, f)$ be an irreducible non-trivial K\"{a}hler gradient Ricci soliton of real dimension $2n$. We show that its group of isometries is of dimension at most $n^2$ and the case of equality is characterized. As a consequence, our framework shows the uniqueness of $U(n)$-invariant K\"{a}hler gradient Ricci solitons constructed earlier. There are corollaries regarding the groups of automorphisms or affine transformations and a general version for almost Hermitian GRS. The approach is based on a connection to the geometry of an almost contact metric structure. \end{abstract} \maketitle
\section{Introduction} Let $(M, g)$ be an orientable connected Riemannian manifold. In \cite{H3}, R. Hamilton introduced the Ricci flow equation, for ${\mathrm {Rc}}$ denoting the Ricci curvature, \begin{equation}
\label{rf}
\frac{\partial}{\partial t} g(t)= -2 {\mathrm {Rc}}(t). \end{equation}
The theory has been utilized to solve fundamental problems; see \cite{perelman1, perelman2, perelman3, bohmwilking, bs091, bs072}.
As a weakly parabolic system, it generically develops singularities and the study of such models is essential in any potential applications. Gradient Ricci solitons (GRS) are self-similar solutions to (\ref{rf}) and arise naturally in that context. Consequently, there have been numerous efforts to study them; see \cite{Hsurface, chow, perelman2, naber07, munse09, caozhou10, pewy10, b12rot, brendle14rotahigh, caotran1, LNW16fourPIC, MW17compact} and references therein.
A GRS $(M, g, f)$ is a Riemannian manifold such that, for a constant $\lambda$, \begin{equation}
\label{grs}
{\mathrm {Rc}}+\text{Hess}{f}=\lambda g. \end{equation}
It is called shrinking, steady, or expanding depending on the sign of $\lambda$ being positive, zero, or negative. Clearly, any Einstein manifold is an example with $\text{Hess}{f}\equiv 0$ and $\lambda$ being the Einstein constant. Moreover, the Gaussian soliton refers to $(\mathbb{R}^{m}, g_{Euc}, \lambda \frac{|x|^2}{2})$ for $g_{Euc}$ the Euclidean metric. It is natural to combine these examples and, in that case, a soliton is called rigid, namely isometric to a quotient of $N^{n-k}\times \mathbb{R}^k$ with $f= \frac{|x|^2}{2}$ on the Euclidean factor. A soliton is called non-trivial (or non-rigid) if at least one factor in its de Rham decomposition is non-Einstein.
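For the Gaussian soliton the defining equation can be checked directly: the flat metric has ${\mathrm {Rc}}\equiv 0$, and $\text{Hess}(\lambda |x|^2/2)=\lambda\, g_{Euc}$. A minimal numerical sanity check of this Hessian computation (our own sketch using plain finite differences; no geometry library is assumed):

```python
def hessian_fd(f, x, h=1e-4):
    """Central finite-difference Hessian of f: R^m -> R at the point x."""
    m = len(x)
    H = [[0.0] * m for _ in range(m)]
    for i in range(m):
        for j in range(m):
            def fs(si, sj):
                y = list(x)
                y[i] += si * h
                y[j] += sj * h
                return f(y)
            H[i][j] = (fs(1, 1) - fs(1, -1) - fs(-1, 1) + fs(-1, -1)) / (4 * h * h)
    return H

lam = 0.7  # an arbitrary choice of soliton constant
potential = lambda x: lam * sum(t * t for t in x) / 2.0  # f = lam |x|^2 / 2
H = hessian_fd(potential, [0.3, -1.2, 2.0])
# With Rc = 0 on flat R^3, the GRS equation reduces to Hess f = lam * Id.
```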
Many non-trivial examples are K\"ahler and the topic receives tremendous interest; see, for example, \cite{tian1, WZ04toric, caohd09, CZ12Kahler, MW15topo, CS2018classification, CF16conical, Kotschwar18Kahlercone, CD2020expanding, DZ2020rigidity}. In particular, significant efforts led to the classification of all K\"{a}hler Ricci shrinker surfaces \cite{CDSexpandshriking19, CCD22finite, BCCD22KahlerRicci, LW23}. For $m=2n$, $(M^{m}, g, J)$ is called an almost Hermitian manifold if $g$ is compatible with an almost complex structure $J: TM\mapsto TM$, $J^2=-\mathrm{Id}$. The associated K\"{a}hler two-form is defined as, for tangential vector fields $X$ and $Y$,
\[\omega(X, Y)=g(X, JY).\]
Consequently, $(M^{2n}, g, J)$ is called almost K\"{a}hler if $\omega$ is closed and K\"{a}hler if, additionally, $J$ is a complex structure.
A K\"{a}hler GRS $(M, g, J, f)$ is simultaneously a K\"{a}hler manifold and a gradient Ricci soliton.
In this paper, we propose an investigation based on the group of symmetry. An isometry on $(M^m, g)$ is a diffeomorphism preserving the metric $g$. The dimension of the group of isometries is at most $\frac{m(m+1)}{2}$ \cite{KNvolumeI96} and it is attained iff the manifold is a complete space of constant curvature: a round sphere or a real projective space, the Euclidean space, or the hyperbolic space.
For an almost Hermitian manifold, using the terminology of \cite{KNauto57}, an automorphism is an isometry which preserves $J$. S. Tanno \cite{tanno69Hermitian} showed that the maximal dimension of the automorphism group is $n(n+2)$. Additionally, the maximal case is characterized as the manifold must be homothetic to one of the following: the unitary space $\mathbb{C}^n=\mathbb{R}^{2n}$ with $g_{Euc}$, a complex projective space $\mathbb{CP}^n$ with a Fubini-Study metric, or an open ball $B^n_{\mathbb{C}}$ with a Bergman metric. These models play an important role in our work. \begin{definition}
Let $\mathbb{N}^n(k)$ be a simply connected K\"{a}hler manifold of real dimension $2n$ with constant holomorphic sectional curvature and normalized Ricci curvature $k$. \end{definition}
From the above discussion, it is immediate that a Gaussian soliton has an isometry group of dimension $n(2n+1)$ and an automorphism group of dimension $n(n+2)$. Also, P. Petersen and W. Wylie showed that a homogeneous GRS must be rigid and if the Riemannian metric is reducible then the soliton structure is reduced accordingly \cite{PW09grsym}. It is, thus, interesting to ponder the next best scenario.
It is noted that, many non-trivial K\"{a}hler GRS's, see \cite{caohd96, caohd97limits, koi90, CV96, fik03}, are $U(n)$-invariant and $\text{dim}(U(n))=n^2$. According to \cite{DW11coho}, their metrics all belong to the following cohomogeneity one structure:
\textit{ \textbf{An Ansatz:} Let ${N}^{n-1}(k)$ be a K\"{a}hler-Einstein manifold with ${\mathrm {Rc}}_N=k \mathrm{Id}$, $I$ be an interval, and functions $H, F: I\mapsto \mathbb{R}^+$. $(P, g_t)$ is a Riemannian submersion of a line or circle bundle with coordinate $z$ over $({N}, F^2 g_N)$ and a bundle projection $\pi: P\mapsto \mathbb{N}$. $\eta$ is the one-form dual of $\partial_z$ such that $d\eta=q\pi^\ast \omega_{\mathbb{N}}$ for $q\in \mathbb{Z}$. If ${N}=\mathbb{CP}^{n-1}$ and $q=1$ then $P=\mathbb{S}^{2n-1}$ and one recovers the Hopf fibration. If ${N}=\mathbb{N}\neq \mathbb{CP}^{n-1}$, the bundle is trivial. The metric on $I\times P$ is given by}
\begin{equation}
\label{ansatz}
g = dt^2+g_t=dt^2+ H^2(t) \eta\otimes \eta + F^2(t)\pi^\ast g_\mathbb{N}.
\end{equation}
Our first result asserts the uniqueness.
\begin{theorem}
\label{main0}
Let $(M^{2n}, g, J, f, \lambda)$ be an irreducible non-trivial complete K\"{a}hler GRS. Its group of isometries is of dimension at most $n^2$ and equality happens iff it is smoothly constructed by the ansatz \ref{ansatz} for ${N}=\mathbb{N}(k)$ and $q=1$. If $\lambda\geq 0$ then $\mathbb{N}=\mathbb{CP}^{n-1}$. \end{theorem}
\begin{remark} When $N=\mathbb{CP}^{n-1}$, the construction of complete K\"{a}hler GRS with ansatz \ref{ansatz} is considered in \cite{DW11coho}. By their analysis, there must be exactly one or two singular orbits (two only if $\lambda >0$); to smoothly compactify each, one must collapse either the whole sphere (both $H$ and $F$ going to zero) or just the fiber ($H$ going to zero). Here are all possible configurations:
\begin{itemize}
\item $\lambda=0$, $I=[0,\infty)$, the singular orbit is either a point ($M$ is topologically $\mathbb{C}^n$) or $\mathbb{CP}^{n-1}$ ($M$ is $\mathbb{C}^n$ blown up at one point) \cite{caohd96, CV96}.
\item $\lambda<0$, $I=[0,\infty)$, the original construction is due to \cite{caohd97limits, CV96, fik03}.
\item $\lambda>0$, $I=[0, 1]$, each singular orbit is $\mathbb{CP}^{n-1}$ \cite{koi90, caohd96, CV96}.
\item $\lambda>0$, $I=[0, \infty)$, the singular orbit is $\mathbb{CP}^{n-1}$ \cite{fik03} (if the singular orbit is a point, it recovers a Gaussian soliton).
\end{itemize} \end{remark} \begin{remark}
The metric in Theorem \ref{main0} has each $(P, g_t)$ being a deformed homogeneous Sasakian structure with constant holomorphic sectional curvature. \end{remark}
For the reducible case, the group of isometries is potentially skewed by a Gaussian soliton factor of a large dimension. Thus, it is natural to consider the following.
\begin{corollary}
\label{main3}
Let $(M^{2n}, g, J, f, \lambda)$ be a complete simply connected non-trivial K\"{a}hler GRS. Its group of automorphisms is of dimension at most $n^2$ and equality happens iff it is either irreducible as in Theorem \ref{main0} or isometric to
\begin{enumerate}[label=(\roman*)]
\item a product of a Gaussian soliton and Hamilton's cigar soliton ($\lambda=0$);
\item a product of $\mathbb{N}(k)$ ($k\leq 0$) with a complete K\"{a}hler expanding GRS in real dimension two ($\lambda<0$).
\end{enumerate} \end{corollary} \begin{remark}
There is a list of \textit{all} models of GRS in real dimension two \cite{BM15}. \end{remark}
Under certain conditions, an infinitesimal isometry is closely related to conformal \cite{schoen95conformal} and affine vector fields \cite{KNvolumeI96}.
For example, following \cite[Chapter 9]{KNvolumeII96}, one recalls that a K\"{a}hler manifold is non-degenerate if the restricted linear holonomy group at each point $x\in M$ contains the endomorphism $J_x$.
\begin{corollary}
\label{nondegenrate}
Let $(M^{2n}, g, J, f)$ be a non-degenerate complete simply connected K\"{a}hler GRS. If $f$ is non-constant, then the group of affine transformations is of dimension at most $n^2$ and equality happens iff it is either irreducible as in Theorem \ref{main0} or a product of $\mathbb{N}(k)$ ($k<0$) with a complete K\"{a}hler expanding GRS in real dimension two. \end{corollary}
Indeed, Theorem \ref{main0} follows from a more general version for (possibly incomplete) almost Hermitian GRS $(M^{2n}, g, J, f, \lambda)$. These structures are compatible: \begin{align*}
g(X, Y) &= g(JX, JY),\\
\mathrm {Hess} f (X, Y) &= \mathrm {Hess} f(JX, JY). \end{align*} The symmetry group is required to preserve each of $g$, $J$, and $f$. \begin{theorem}
\label{main1}
Let $(M^{2n}, g, J, f)$ be an almost Hermitian GRS with symmetry group $G$. If $f$ is non-constant then $\text{dim}(G)\leq n^2$ and equality happens iff locally it is either
\begin{enumerate}[label=(\roman*)]
\item constructed by the ansatz \ref{ansatz} for ${N}=\mathbb{N}(k)$ and $q=0, 1$.
\item a product of a line/circle with a hyperbolic space.
\end{enumerate} \end{theorem} \begin{remark} The metric in case (ii) above can be written as, for non-zero constants $H$ and $A$, \begin{align*}
g = dt^2+g_t &=dt^2+ H^2 dz^2+ e^{2Az}g_{\mathbb{C}^{n-1}},\\
\lambda=\frac{\partial^2 f}{\partial t^2}&=-2(\frac{A}{H})^2(n-1). \end{align*} \end{remark} \begin{remark}
For Ansatz \ref{ansatz}, equation (\ref{grs}) is equivalent to an ODE system:
\begin{align}
{\lambda} &=-\frac{H''}{H}-\frac{(2n-2)F''}{F}+f''=\frac{H^2 q^2(2n-2)}{F^4}-\frac{H''}{H}-\frac{2(n-1)H'F'}{HF}+f'\frac{H'}{H}\nonumber\\
\label{ODEsasa}
&=\frac{k}{F^2}-\frac{2H^2 q^2}{F^4}- \frac{F''}{F}-(2n-3)(\frac{F'}{F})^2-\frac{H'F'}{FH}+f'\frac{F'}{F}.
\end{align}
The almost K\"{a}hler condition is equivalent to
\[FF'= qH.\]
\end{remark} \begin{remark}
It is possible to construct local solutions for (\ref{ODEsasa}) giving (possibly incomplete) manifolds with maximal symmetry. For $\mathbb{N}=\mathbb{CP}^{n-1}$, generalized versions of (\ref{ODEsasa}) were investigated by \cite{DW11coho} and \cite{BDW15}. \end{remark}
\begin{remark}
The Gaussian soliton $(\mathbb{R}^{2n}, g_{Euc}, f=\lambda \frac{|x|^2}{2},\lambda) \text{ for } \lambda\neq 0$ belongs to family $q=1$ with $P$ being the round sphere, $H=F=t$, and $k=2n$. For $\lambda=0$, the soliton $(\mathbb{R}^{2n}, g_{Euc}, f=ax_i+b)$ belongs to family $q=0$ with $P$ being the Euclidean space. \end{remark} To illustrate the dimension $n^2$, let's consider the case of a Gaussian soliton for $\lambda\neq 0.$ The isometry group consists of $2n$ translations and $\frac{2n(2n-1)}{2}$ rotations. With a standard coordinate $\{x_i, y_i\}_{i=1}^n$, one specifies an almost complex structure such that \[J(\partial_{x_i})= \partial_{y_i}, ~~~~ J(\partial_{y_i})= -\partial_{x_i}. \]
Then it is clear that not all rotations preserve this tensor field; this is why the automorphism group is only of dimension $n(n+2)$. Among these, the translations do not preserve the potential function $f=\lambda \frac{|x|^2}{2}$. Consequently, the group of symmetries preserving $g$, $J$ and $f$ is of dimension $n^2$.
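The three counts just described can be tallied explicitly; this is a routine verification, recorded here for the reader's convenience:
\begin{align*}
\dim \mathrm{Isom}(\mathbb{R}^{2n}, g_{Euc}) &= \underbrace{2n}_{\text{translations}}+\underbrace{\tfrac{2n(2n-1)}{2}}_{\text{rotations}}=n(2n+1),\\
\dim \mathrm{Aut} &= \underbrace{2n}_{\text{translations}}+\underbrace{\dim U(n)}_{J\text{-preserving rotations}}=2n+n^2=n(n+2),\\
\dim G &= \dim U(n)=n^2 \quad (\text{the translations are discarded, as they move } f).
\end{align*}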
The paper is organized as follows. Section \ref{prelim} recalls general and useful preliminaries while Section \ref{coho1ansatz} is devoted to calculations for Ansatz \ref{ansatz}. Afterward, we'll discuss the relation between the symmetry of an almost Hermitian GRS and that of the level sets of $f$. The key idea is that a symmetry group on $(M, g, J, f)$ induces a symmetry group of the regular level sets considered as almost contact metric structures. In Section \ref{rigid}, the rigidity of the maximal dimension is examined and proofs of all theorems are collected. Finally, the appendix explains our convention and recalls Riemannian submersions.
\subsection{Acknowledgment} H. T. was partially supported by grants from the Simons Foundation [709791], the National Science Foundation [DMS-2104988], and the Vietnam Institute for Advanced Study in Mathematics. He also benefited greatly from discussions with Profs. McKenzie Wang, Catherine Searle, and Ronan Conlon.
\section{Preliminaries} \label{prelim} We'll recall fundamental concepts and useful results about an almost complex structure, a gradient Ricci soliton, group actions on a manifold, an almost contact structure, and certain model spaces. \subsection{Almost Complex Structure}
Let $M$ be a smooth manifold of dimension $2n$. \begin{definition}
An almost complex structure is a smooth section $J$ of the bundle of endomorphisms $\text{End}(TM)$ such that
\[J^2=-\mathrm{Id}. \]
\end{definition}
One can immediately extend $J$ to an endomorphism of the complexified tangent bundle $TM\otimes_{\mathbb{R}}\mathbb{C}$ via $\mathbb{C}$-linearity. An almost complex structure is said to be integrable if $M$ admits an atlas of complex charts with holomorphic transition functions such that $J$ corresponds to the induced complex multiplication on $TM\otimes_{\mathbb{R}}\mathbb{C}$. A real differentiable manifold with an integrable almost complex structure is, by definition, a complex manifold. Thanks to the work of Newlander and Nirenberg \cite{NN57}, the integrability of $J$ is equivalent to the vanishing of the Nijenhuis tensor
\[N_J(X, Y)=[JX, JY]-[X, Y]-J[X, JY]-J[JX, Y]. \]
\begin{definition}
Let $(M^{2n}, g)$ be a Riemannian manifold with an almost complex structure $J$. $(M, g, J)$ is called an almost Hermitian manifold and $g$ a Hermitian metric if
\[g(JX, JY)=g(X, Y).\]
The fundamental $2$-form or K\"{a}hler form is given by
\[\omega(X, Y)= g(X, JY).\]
$(M, g, J)$ is called almost K\"{a}hler if $d\omega=0$. When $J$ is integrable, we upgrade an almost Hermitian to Hermitian and almost K\"{a}hler to K\"{a}hler. \end{definition} For a Riemannian manifold to be K\"{a}hler, the following is well-known. \begin{proposition}\cite[Proposition 3.1.9]{BGbookSasakian08}
Let $(M, g, J)$ be an almost Hermitian (real) manifold. The following are equivalent:
\begin{enumerate}
\item $\nabla J=0$,
\item $\nabla \omega_g=0$,
\item $(M, g, J)$ is K\"{a}hler.
\end{enumerate} \end{proposition} On a K\"{a}hler manifold, one observes that \begin{align*}
J({\mathrm R}(X, Y)Z) &={\mathrm R}(X, Y) JZ,\\
{\mathrm R}(X, Y, JZ, JW)=g({\mathrm R}(X, Y) JZ, JW) &= g({\mathrm R}(X, Y) Z, W)={\mathrm R}(X, Y, Z, W). \end{align*} Naturally, it leads to the notion of the Ricci form. \begin{definition}
The Ricci form $\rho$ is the image of $\omega_g$ via the curvature operator:
\begin{align*}
\rho(X, Y) &=g({\mathrm R}(\omega_g)(X), Y).
\end{align*} \end{definition}
A priori, it is not clear how the Ricci form $\rho$ is related to the Ricci curvature tensor. \begin{proposition}\cite[Proposition 2.45]{besse} On a K\"{a}hler manifold $(M, g, J, \omega_g)$, we have
\[ {\mathrm {Rc}}(X, Y)=\rho(X, JY). \] \end{proposition}
\begin{corollary}
On a K\"{a}hler manifold $(M, g, J)$, ${\mathrm {Rc}}$ is $J$-invariant. \end{corollary}
\subsection{Gradient Ricci Solitons}
In this subsection, we recall how a GRS is compatible with a complex setup.
\begin{definition}
$(M, g, J, f)$ is an almost Hermitian GRS if $(M, g, f)$ is a GRS, $(M, g, J)$ is an almost Hermitian manifold, and $\mathcal{L}_{\nabla f} g$ is $J$-invariant. \end{definition} Because of (\ref{grs}), $\mathcal{L}_{\nabla f} g$ is $J$-invariant if and only if ${\mathrm {Rc}}$ is $J$-invariant. Thus, the assumption is automatic for K\"{a}hler manifolds.
\begin{definition}
$(M, g, J, f)$ is a K\"{a}hler GRS if $(M, g, f)$ is a GRS, $(M, g, J)$ is K\"{a}hler manifold. \end{definition} In a complex coordinate system, the $J$-invariant property is equivalent to $\nabla f$ being a holomorphic vector field. That is, \[\mathcal{L}_{\nabla f} g(\partial_{z_i}, \partial_{z_j})=\mathcal{L}_{\nabla f} g(\partial_{\overline{z_i}}, \partial_{\overline{z_j}})=0. \]
\subsection{Group Actions on a manifold} In this subsection, we review the basic setup and properties of group actions on a manifold. The standard texts are \cite{KNvolumeI96, KNvolumeII96, kobayashi95}. Let $G$ be a topological group. An action of $G$ on a manifold $M$ is a homomorphism from $G$ to the group of homeomorphisms of $M$ \[g\mapsto A_g \text{ such that } A_g: M\mapsto M, ~x\mapsto g\cdot x.\]
The action is \textit{continuous/smooth} if the map $G\times M\mapsto M$, given by $(g, x)\mapsto g\cdot x$, is continuous/smooth (smoothness requires $G$ to be a Lie group). The action is said to be \textit{proper} if the associated map $G\times M \mapsto M\times M$, given by $(g, x)\mapsto (x, g\cdot x)$, is proper (that is, the preimage of any compact set is compact).
For each $x\in M$, the subgroup $G_x=\{g\in G, g\cdot x=x\}$ is called the \textit{isotropy} subgroup or the \textit{stabilizer}. The orbit through $x$ is an immersed sub-manifold and there is a natural identification \[G\cdot x=\{y\in M, y=g\cdot x, g\in G\} \equiv G/G_x. \] Orbits are also classified based on the relative size of the associated isotropy groups. In particular, principal orbits correspond to the smallest possible isotropy groups and singular ones have isotropy groups of higher dimensions.
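As a standard illustration (not taken from the cited texts), let $G=O(2n)$ act linearly on $M=\mathbb{R}^{2n}$. For $x\neq 0$, the orbit is the sphere of radius $|x|$ and the isotropy group is a copy of $O(2n-1)$, while the origin is a fixed point:
\[G\cdot x \cong O(2n)/O(2n-1)=\mathbb{S}^{2n-1}(|x|), \qquad G_0=O(2n).\]
Thus the spheres are the principal orbits, and $\{0\}$ is a singular orbit, its isotropy group having strictly larger dimension.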
At the infinitesimal level, a smooth vector field $X$ on $M$ generates a (local) one-parameter family of maps between domains in $M$. If the vector field is complete, then it generates global diffeomorphisms. If the corresponding maps preserve certain geometric quantities and structures then the vector field is called a (local) infinitesimal transformation of the same property. A vector field preserves a tensor $T$ if and only if \[\mathcal{L}_X T=0.\]
It is also noted that the set of all vector fields can be seen as a Lie algebra $\mathfrak{X}(M)$ by the natural bracket \[[X, Y]= XY-YX.\] Since \[\mathcal{L}_{[X, Y]}=\mathcal{L}_X\circ \mathcal{L}_Y-\mathcal{L}_Y\circ \mathcal{L}_X,\] the set of all infinitesimal transformations preserving a tensor $T$ is always a Lie sub-algebra of $\mathfrak{X}(M)$. It is noted that the group of transformations preserving a tensor $T$ is not necessarily a Lie group.
Nevertheless, on a Riemannian manifold, the group of isometries (preserving the Riemannian metric) is a Lie group \cite[Chapter 6. Theorem 3.4]{KNvolumeI96}. The infinitesimal transformation corresponding to a subgroup of isometries is called a Killing vector field. That is, \[\mathcal{L}_X g=0.\] The Lie algebra of such vector fields corresponds to the Lie algebra of the Lie group of all isometries on $M$. It is well-known that a Killing vector field is totally determined by its zero and first order values at a point $(X_p, (\nabla X)_p)$ \cite[Chapter VI]{KNvolumeI96}. \\
The K\"{a}hler and GRS structures impose rigidity on the Riemannian manifold as the followings are well-known \cite{fik03}. \begin{lemma}
\label{killing}
Let $(M, g, J, f)$ be a K\"{a}hler gradient Ricci soliton. Then, we have the followings:
\begin{enumerate}
\item $J(\nabla f)$ is a Killing vector field.
\item $\mathcal{L}_{\nabla f} J\equiv 0$.
\end{enumerate} \end{lemma} On an almost Hermitian GRS $(M, g, J, f)$, one may consider transformations and vector fields preserving each individual structure: the metric $g$, the almost complex structure $J$, and the potential function $f$. The group of such symmetries is clearly a closed subgroup of the group of isometries and, thus, is a Lie group.
\subsection{Almost Contact Structure} In this subsection, we recall important notions about an almost contact structure following the book by C. Boyer and K. Galicki \cite{BGbookSasakian08}.
\begin{definition}
A $(2n+1)$-dimensional manifold $M$ is an almost contact manifold if there exists a triple $(\zeta, \eta, \Phi)$ where $\zeta$ is a vector field, $\eta$ is a $1$-form, $\Phi$ is a tensor field of type $(1, 1)$, and they satisfy, everywhere on $M$,
\[\eta(\zeta)=1 \text{ and } \Phi^2=-\mathrm{Id}+\zeta\otimes \eta.\]
\end{definition} \begin{definition} An almost contact manifold $(M, \zeta, \eta, \Phi)$ with a Riemannian metric $g$ is called an almost contact metric structure if
\[g(\Phi(X), \Phi(Y))=g(X, Y)-\eta(X)\eta(Y). \] \end{definition}
\begin{definition}
The holomorphic or $\Phi$-sectional curvature of an almost contact manifold $(M, \zeta, \eta, \Phi)$ is given by, for $\eta(X)=0$ and $g(X, X)=1$,
\[K_\Phi(X)=K(X, \Phi(X)).\] \end{definition}
Closely related is the notion of a contact structure. \begin{definition}
A $(2n+1)$-dimensional manifold $M$ is a contact manifold if there exists a $1$-form $\eta$, called a contact $1$-form, on $M$ such that
\[\eta \wedge (d\eta)^n \neq 0\]
everywhere on $M$. A contact structure is an equivalence class of such $1$-forms. \end{definition}
\begin{definition}
An almost contact metric structure $(M, \zeta, \eta, \Phi, g)$ is called a contact metric structure if one further assumes
\[g(X, \Phi(Y))=d\eta(X, Y).\]
\end{definition} It is immediate to check that a contact metric structure is indeed a contact manifold by the above definition. As $\zeta$ and $\Phi$ are uniquely determined by $\eta$ and $g$, we also denote a contact metric manifold by $(M, \eta, g)$.
\begin{definition} A contact metric structure $(M, g, \eta)$ is called Sasakian if the cone $C(M)=M\times \mathbb{R}^+$ with the cone metric $r^2 g+ dr^2$ is K\"{a}hler. \end{definition}
Next we recall certain transformations which will play crucial roles.
\begin{definition}
\label{Sasahomothety}
Let $(M, \zeta,\eta, \Phi, g)$ be an almost contact metric structure. For $a>0$, a transverse $a$-homothety deformation is given by
\[ \hat{\zeta} =\frac{1}{a}\zeta, ~~~ \hat{\eta} =a\eta, ~~~ \hat{\Phi}=\Phi,~~~ \hat{g}= ag+(a^2-a)\eta\otimes \eta. \] \end{definition} If $(M, \zeta,\eta, \Phi, g)$ is Sasakian, then so is its homothety transformation. \begin{definition}
\label{Sasadeform}
Let $(M, \zeta,\eta, \Phi, g)$ be an almost contact metric structure. For $a>0$, a $\pm a$-deformation is given by
\[ \zeta^\ast =\zeta, ~~~ \eta^\ast =a\eta, ~~~ \Phi^\ast=\pm\Phi,~~~ g^\ast= ag+(1-a)\eta\otimes \eta. \] \end{definition}
A $\pm a$-deformation of a Sasakian manifold is not necessarily Sasakian.
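For the transverse $a$-homothety of Definition \ref{Sasahomothety}, by contrast, a direct computation (recorded here as a sanity check) shows that the compatibility condition of an almost contact metric structure is always preserved; using $\eta\circ\Phi=0$,
\begin{align*}
\hat{\eta}(\hat{\zeta}) &= a\eta\big(\tfrac{1}{a}\zeta\big)=1,\\
\hat{g}(\hat{\Phi}X, \hat{\Phi}Y) &= a\,g(\Phi X, \Phi Y)= a\,g(X, Y)-a\,\eta(X)\eta(Y),\\
\hat{g}(X, Y)-\hat{\eta}(X)\hat{\eta}(Y) &= a\,g(X, Y)+(a^2-a)\eta(X)\eta(Y)-a^2\eta(X)\eta(Y)\\
&= a\,g(X, Y)-a\,\eta(X)\eta(Y).
\end{align*}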
\subsection{Model Spaces}
Using the submersion toolkit, we can describe several model spaces that will appear in our classification. First, the {unitary space} is the complex formulation of the Euclidean space $\mathbb{C}^n=\mathbb{R}^{2n}$ with standard coordinates $\{x_1, y_1,..., x_n, y_n\}$. The metric, the almost complex structure, and the fundamental $2$-form are as follows: \begin{align*}
g&=\sum_i (d{x^i})^2+ (dy^i)^2,\\
J &= \sum_{i} (\partial_{y_i}\otimes dx^i-\partial_{x_i}\otimes dy^i),\\
\omega_{\mathbb{C}^n} &= -2\sum_i dx^i\wedge dy^i. \end{align*}
\textbf{The flat Sasakian space $(P, g_P)=\mathbb{R}^{2n+1}(-3)$}: the total space of a real line bundle over $\mathbb{C}^n$ with coordinates $\{x_1, y_1,..., x_n, y_n, z\}$. For $\eta = dz+2\sum_i y_i dx_i$, one considers: \begin{align*}
g_P &=\sum_i \big((d{x_i})^2+ (dy_i)^2 \big)+\eta\otimes \eta,\\
\Phi &=\sum_i \big(\partial_{y_i}\otimes dx^i-(\partial_{x_i}-2y_i\partial_z)\otimes dy^i \big). \end{align*} It is readily verified, by Lemma \ref{submersioncurvature}, that $(P, g_P, \eta, \partial_z, \Phi)$ is Sasakian with constant $\Phi$-sectional curvature $-3$.
\textbf{The spherical Sasakian $(P, g_P)=\mathbb{S}^{2n+1}(a)$}: For simplicity, we utilize the ambient coordinates of $\mathbb{R}^{2n+2}$, $\{x_1, y_1,..., x_{n+1}, y_{n+1}\}$. All tensors described below are understood as their restriction to the unit sphere.
With the induced metric, the canonical Sasakian structure on $\mathbb{S}^{2n+1}$ is given by \begin{align*}
\zeta &= \sum_i (y_i\partial_{x_i}-x_i\partial_{y_i}),\\
\eta &= \sum_i (y_i dx^i- x_i dy^i),\\
\Phi &=\sum_{i, j}(x_ix_j-\delta_{ij})\partial_{x_i}\otimes dy_j-(y_iy_j-\delta_{ij})\partial_{y_i}\otimes dx_j+ x_jy_i\partial_{y_i}\otimes dy_j- x_iy_j\partial_{x_i}\otimes dx_j \end{align*} Let $\pi: \mathbb{S}^{2n+1}\mapsto N=\mathbb{CP}^n$ be the Hopf fibration and $g_N$ the Fubini-Study metric. The Sasakian metric can be realized as \[g = \pi^\ast g_N +\eta\otimes \eta, ~~ d\eta =\pi^\ast \omega_N.\] Via a homothetic deformation (Definition \ref{Sasahomothety}), if $g_N$ is scaled to have Ricci curvature $k\mathrm{Id}$, $k>0$, then the constant $\Phi$-sectional curvature $a$ of $g_P$ is, by Lemma \ref{submersioncurvature}, \[a=\frac{4k}{n+1}-3>-3.\]
\textbf{The hyperbolic Sasakian $(P, g_P)=\mathbb{SB}^{2n+1}(a)$}: Let $g_0$ be the Bergman metric of constant holomorphic sectional curvature $-1$ on the unit ball in $\mathbb{C}^n$. One then scales it to have Ricci curvature $k\mathrm{Id}$, for $k<0$, and denotes this construction by $N=B^n_\mathbb{C}(k)$ with metric $g_N$. Let $\omega_N$ be the corresponding K\"{a}hler form and, since $B^n_\mathbb{C}(k)$ is simply connected, there exists a $1$-form $\alpha$ such that $d\alpha=\omega_N$. On the total space of the line bundle $P=B^n_\mathbb{C}(k)\times \mathbb{R}$ with natural projection $\pi$, one considers: \begin{align*}
g_P &= \pi^\ast g_N+\eta\otimes \eta,\\
\eta &= dz+ \pi^\ast\alpha. \end{align*}
By Lemma \ref{submersioncurvature}, the $\Phi$-sectional curvature of $(P, g_P)$ is \[a=\frac{k}{2n-1}-3<-3.\]
\begin{theorem}\cite{tanno69sasa}
\label{tannoSasa}
Let $(M^{2n+1}, g, \eta, \Phi, \zeta)$ be a simply connected complete Sasakian manifold with constant $\Phi$-sectional curvature $H$. Then it is isometric to one of the following:
\begin{enumerate}[label=(\roman*)]
\item ($H>-3$) the Sasakian sphere $\mathbb{S}^{2n+1}(H)$,
\item ($H=-3$) the flat Sasakian space $\mathbb{R}^{2n+1}(-3)$,
\item ($H<-3$) the Sasakian disk model $\mathbb{SB}^{2n+1}(H)$.
\end{enumerate} \end{theorem}
As described earlier, Sasakian manifolds belong to the family of almost contact metric structures, which also includes certain Riemannian products and the following. For $P=\mathbb{R}\times \mathbb{C}^n$ and a constant $A$, \[g_P=dz^2+e^{2A z}g_{\mathbb{C}^{n}}.\]
One realizes it as a hyperbolic metric $\mathbb{H}^{2n+1}(-A^2)$. \begin{lemma}
\label{curvatrehyper}
The sectional and Ricci curvature of $g_P$, for orthonormal vectors $X, Y$ on $\mathbb{C}^n$ and $\partial_z$ along $\mathbb{R}$,
\begin{align*}
K(\partial_z, X) &=-A^2 =K(X, Y)\\
{\mathrm {Rc}}(\partial_z, \partial_z) &=-2nA^2={\mathrm {Rc}}(X, X).
\end{align*} \end{lemma}
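The values in Lemma \ref{curvatrehyper} can be read off from the standard warped-product curvature formulas with warping function $\varphi(z)=e^{Az}$ over the flat fiber $\mathbb{C}^n$; this routine check is included for completeness:
\begin{align*}
K(\partial_z, X) &= -\frac{\varphi''}{\varphi}=-A^2, & K(X, Y) &= -\frac{(\varphi')^2}{\varphi^2}=-A^2,\\
{\mathrm {Rc}}(\partial_z, \partial_z) &= -2n\,\frac{\varphi''}{\varphi}=-2nA^2.
\end{align*}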
All aforementioned models appear in the following result. Let $(P^{2n+1}, g, \eta, \zeta, \Phi)$ be an almost contact metric structure. The symmetry group is required to preserve $g$, $\eta$, $\zeta$, and $\Phi$. \begin{theorem}\cite{tanno69}
\label{tannoclassificationalmost} The maximum dimension of the symmetry group is $(n+1)^2$. It is attained iff the sectional curvature for $2$-planes which contain $\zeta$ is a constant $C$ and the manifold is one of the following spaces:
\begin{enumerate} [label=(\roman*)]
\item $C>0$: a $\pm b$-deformation of a homogeneous Sasakian manifold with constant $\Phi$-sectional curvature $H$ or, precisely,
\begin{itemize}
\item $H>-3:$ the Sasakian sphere $\mathbb{S}^{2n+1}(H)$ or its quotient by a finite group generated by $\text{exp}(t\zeta)$ for $2\pi/t$ being an integer,
\item $H=-3$: the flat Sasakian space $\mathbb{R}^{2n+1}(-3)$ or its quotient by a cyclic group generated by $\text{exp}(t\zeta)$,
\item $H<-3$: the Sasakian disk model $\mathbb{SB}(H)$ or its quotient by a cyclic group generated by $\text{exp}(t\zeta)$.
\end{itemize}
\item $C=0$: six global Riemannian products $X\times \mathbb{CP}^{n-1}(k)$, $X\times \mathbb{C}^{n-1}$, $X\times B^{n-1}_{\mathbb{C}}(k)$ where $X$ is a line or a circle;
\item $C<0$ the hyperbolic space $\mathbb{H}^{2n+1}(C)$.
\end{enumerate} \end{theorem} For a Sasakian model with submersion $\pi: P\mapsto N$ with $N=\mathbb{N}(k)$, the metric can always be written as \[g=\pi^\ast g_N+ \eta\otimes \eta, ~~ d\eta=\pi^\ast \omega_N.\]
\begin{lemma}
\label{curvaturedeform}
If $(M, \zeta', \eta', \Phi', g')$ is obtained via a transverse $a$-homothety and a $\pm b$-deformation then
\[g'=ba g_N+a^2 \eta\otimes \eta,~~ \zeta'=\frac{1}{a}\zeta, ~~~, \eta'=a \eta, ~~~\Phi'=\pm \Phi.\] \end{lemma} \begin{proof}
Via a transverse $a$-homothety deformation:
\begin{align*}
g^\ast &=a g+(a^2-a)\eta\otimes \eta= a g_N+a^2 \eta\otimes \eta=a g_N+\eta^\ast \otimes \eta^\ast;\\
\eta^\ast &= a \eta;~~ \zeta^\ast = \frac{1}{a}\zeta;~~ \Phi^\ast = \Phi.
\end{align*}
Via a $\pm b$-deformation:
\begin{align*}
g' &=b g^\ast+ (1-b)\eta^\ast\otimes \eta^\ast= ba g_N+b \eta^\ast\otimes \eta^\ast+ (1-b)\eta^\ast\otimes \eta^\ast\\
&= ba g_N+\eta^\ast\otimes \eta^\ast= ba g_N+a^2 \eta\otimes \eta\\
\eta' &=\eta^\ast=a \eta,~~\zeta'=\zeta^\ast=\frac{1}{a}\zeta,~~
\Phi' = (\pm)\Phi.
\end{align*} \end{proof}
\section{Cohomogeneity One Ansatz} \label{coho1ansatz} Here we assume the cohomogeneity one condition and collect calculations related to Ansatz \ref{ansatz}. The setup follows \cite{DW11coho} closely. Let $G$ be a Lie group acting isometrically on a Riemannian manifold $(M, {g})$. Suppose that there is a dense subset $M_0\subset M$ such that, locally, there is a $G$-equivariant diffeomorphism: \[\Psi: I\times P\mapsto M_0 \text{ given by } \Psi(t, hK)=h\cdot \gamma(t).\] Here, $I$ is an interval; $\gamma(t)$ is a unit speed geodesic intersecting all orbits orthogonally; $P=G/K$ where $K$ is the isotropy group along $\gamma(t)$. It follows that \[\Psi^\ast ({g})= dt^2+ {g}_t\] where ${g}_t$ is a one-parameter family of $G$-invariant metrics on $G/K$. For the unit normal vector field $N=\Psi_{\ast}(\partial_t)$,
let $L$ denote the shape operator \begin{align*} L(X) &= {\nabla}_X N. \end{align*}
We will consider $L_t=L_{\mid_{\Psi(t\times P)}}$ to be a one-parameter family of endomorphisms on $TP$ via the identification $T(\Psi(t\times P))=TP$. Following \cite{DW11coho}, one observes \[\partial_t g= 2g_t\circ L_t.\] Thanks to the Gauss, Codazzi, and Riccati equations, the Ricci curvature of $(M_0, {g})$ is totally determined by the geometry of the shape operator and how it evolves. Thus, the gradient Ricci soliton equation ${\mathrm {Rc}}+\mathrm {Hess}{f}=\lambda g$ is reduced to \begin{align} 0 &=-(\delta L)-\nabla \text{tr}{L},\nonumber \\ \label{reducedsolitionsystem} \lambda &=-\text{tr}(L')-\text{tr}(L^2)+f'',\\ \lambda g_t(X, Y) &={\mathrm {Rc}}_t(X, Y)-(\text{tr}{L}) g_t(L X, Y)-g_t({L'}(X), Y) +f' g_t(LX, Y). \nonumber \end{align}
Here ${\mathrm {Rc}}_t$ denotes the Ricci curvature of $(P, g_t)$, $\delta L=\sum_{i}\nabla_{e_i}L(e_i)$ for an orthonormal basis and $\text{tr}{T}=\text{tr}_{g_t}T_t$.\\
We are particularly interested in the metric given by Ansatz \ref{ansatz}. For convenience, let $m=n-1$, the dimension of $N$. We recall \begin{align*}
g=dt^2+ g_t&= dt^2+ F(t)^2 \pi^\ast g_N+ H(t)^2\eta\otimes \eta,\\
\eta &= (dz+ q\pi^\ast\alpha), ~~d\alpha =\omega_{\mathbb{N}}.
\end{align*} We have \[2g_t L_t= g'_t= 2\frac{H'}{H} H^2\eta\otimes \eta+2\frac{F'}{F} F^2\pi^\ast g_N.\] Thus, for $\mathrm{Id}$ denoting the identity operator on the horizontal subspace of $TP$, which is $g_t$-perpendicular to $\partial_z$, \begin{align*}
L_t &= \frac{H'}{H} \partial_z \otimes \eta + \frac{F'}{F}\mathrm{Id},\\
{L'}_t &=\Big(\frac{H''}{H}-\big(\frac{H'}{H}\big)^2\Big) \partial_z \otimes {\eta}+\Big(\frac{F''}{F}-\big(\frac{F'}{F}\big)^2\Big)\mathrm{Id}. \end{align*} Consequently, \begin{align*}
\text{tr}{L_t} &=\frac{H'}{H}+(2m)\frac{F'}{F},\\
\text{tr}{L^2_t} &=(\frac{H'}{H})^2+(2m)\frac{(F')^2}{F^2},\\
\text{tr}{L'_t} &=\frac{H''}{H}+(2m)\frac{F''}{F}-\frac{(H')^2}{H^2}-(2m)\frac{(F')^2}{F^2}. \end{align*} \begin{lemma}
\label{solitonsubmer1}
The gradient Ricci soliton equation becomes
\begin{align*}
{\lambda} &=-\frac{H''}{H}-(2m)\frac{F''}{F}+f''=\frac{H^2 q^2}{F^4}(2m)-\frac{H''}{H}-2m\frac{H'F'}{HF}+f'\frac{H'}{H}\\
&=\frac{k}{F^2}-\frac{H^2 q^2}{F^4}2- \frac{F''}{F}-(2m-1)(\frac{F'}{F})^2-\frac{H'F'}{FH}+f'\frac{F'}{F}.
\end{align*} \end{lemma} \begin{proof}
By Lemma \ref{submersioncurvature} and system \ref{reducedsolitionsystem}, we obtain
\begin{align*}
0 &= -\delta L-\nabla (\text{tr}{L})=-0-0=0;\\
\lambda &= -\text{tr}{L'_t}-\text{tr}{L^2_t}+f''\\ &=-\frac{H''}{H}-(2m)\frac{F''}{F}+\frac{(H')^2}{H^2}+(2m)\frac{(F')^2}{F^2}-\frac{(H')^2}{H^2}-(2m)\frac{(F')^2}{F^2}+f''\\
&=-\frac{H''}{H}-(2m)\frac{F''}{F}+f'';\\
H^2\lambda &= {\mathrm {Rc}}(\partial_z, \partial_z)-\text{tr}{L}g(L\partial_z, \partial_z)-g(L'\partial_z, \partial_z)+f' g(L\partial_z, \partial_z)\\
&= \frac{H^4 q^2}{F^4}(2m)-(\frac{H'}{H}+(2m)\frac{F'}{F})H^2\frac{H'}{H}-H^2 \Big(\frac{H''}{H}-\big(\frac{H'}{H}\big)^2\Big)+f'H^2\frac{H'}{H}\\
&=H^2 \Big(\frac{H^2 q^2}{F^4}(2m)-\frac{H''}{H}-(2m)\frac{F'}{F}\frac{H'}{H}+f'\frac{H'}{H} \Big);\\
F^2\lambda &= k-2\frac{H^2 q^2}{F^2}-(\frac{H'}{H}+(2m)\frac{F'}{F})F^2\frac{F'}{F}-F^2 \Big(\frac{F''}{F}-\big(\frac{F'}{F}\big)^2\Big)+f'F^2\frac{F'}{F}\\
&=F^2 \Big(\frac{k}{F^2}-\frac{H^2 q^2}{F^4}2-\frac{F''}{F}-\frac{F'}{F}\frac{H'}{H}-(2m-1)\big(\frac{F'}{F}\big)^2+f'\frac{F'}{F} \Big).
\end{align*} \end{proof} The almost complex structure on $I\times P$ is constructed from one on $(N, g_N)$: \[J = \partial_t \otimes H\eta - \frac{1}{H}\partial_z\otimes dt +\pi^\ast J_N.\] Thus, the K\"{a}hler form becomes: \begin{align*}
\omega &= 2dt\wedge H\eta+ F^2 \pi^\ast\omega_N,\\
d\omega &= -2qH dt \wedge \pi^\ast \omega_N+ 2FF' dt\wedge\pi^\ast\omega_N. \end{align*} Thus the metric is almost K\"{a}hler if and only if \begin{equation}
\label{Kahlercondition}FF'=qH. \end{equation} Following \cite{DW11coho}, we consider the change of variables: \begin{equation} \label{transform1} ds = H dt, ~~ \alpha(s):= H^2(t), ~~ \beta(s):=F^2(t), ~~ \varphi(s):= f(t). \end{equation} Consequently, for $\dot{X}=\partial_s X$, \begin{align*}
\dot{\alpha} &= 2H', &\ddot{\alpha} &= \frac{2 {H''}}{H}\\
\dot{\beta} &= \frac{2F{F'}}{H}, &\ddot{\beta}&=\frac{2{F'}^2+2F{F''}}{H^2}-\frac{2F{F'}{H'}}{H^3},\\
\dot{\varphi} &= \frac{{f'}}{H}, &\ddot{\varphi}&= \frac{{f''}}{H^2}-\frac{{f'}{H'}}{H^3}.
\end{align*} The almost K\"{a}hlerity of $g$ implies $\beta(s)=2s+A$, $\varphi(s)=Bs+C$. The soliton equation becomes \begin{align*}
{\lambda} &=-\frac{\ddot{\alpha}}{2}+\frac{2m\alpha}{(2s+A)^2}-\frac{m\dot{\alpha}}{2s+A} +B\frac{\dot{\alpha}}{2}=-\frac{\ddot{\alpha}}{2}+B\frac{\dot{\alpha}}{2}-\frac{d}{ds}\Big(\frac{m\alpha}{2s+A}\Big)\\
{\lambda}(2s+A) &={k}-\dot{\alpha}-\frac{2(m-1+q^2)\alpha}{2s+A}+B\alpha. \end{align*} We summarize the above calculation in the following lemma. \begin{lemma}
\label{almostKahler}
Let $(I\times P, g)$ be given as in Ansatz \ref{ansatz}. $g$ is almost K\"{a}hler GRS if and only if, under transformation (\ref{transform1}), we have:
\begin{align*}
{\lambda}
&=-\frac{\ddot{\alpha}}{2}+B\frac{\dot{\alpha}}{2}-\frac{d}{ds}\Big(\frac{m\alpha}{2s+A}\Big)\\
{\lambda}(2s+A) &={k}-\dot{\alpha}-\frac{2(m-1+q^2)\alpha}{2s+A}+B\alpha
\end{align*} \end{lemma} In order to obtain a global complete metric, one needs to smoothly extend the construction to singular orbits (if any). The following follows from the proof of \cite[Lemma 1.1]{EW00ivp}. We provide a direct proof as our ansatz (\ref{ansatz}) is fairly explicit.
\begin{lemma}
\label{smoothness}
Let $I=(0, r)$ for $r>0$ and $(I\times P, g)$ be given by ansatz \ref{ansatz} for $H(0)=0$, $F(0)>0$. The metric can be extended smoothly to $t=0$ if and only if, for every non-negative integer $j$,
\[H'(0)=1,~~ H^{(2j)}(0)=0=F^{(2j+1)}(0).\] \end{lemma} \begin{proof}
We rewrite the metric in polar coordinates, for $x=t\cos(z)$ and $y=t\sin(z)$,
\begin{align*}
dt &= \frac{x}{t} dx +\frac{y}{t}dy,\\
dz &= \frac{-y}{t^2} dx +\frac{x}{t^2}dy
\end{align*}
Then,
\begin{align*}
g &= dt^2+ H^2 (dz + q \alpha) \otimes (dz+ q\alpha)+ F^2 g_N\\
&=t^{-2}(x^2 dx^2+y^2 dy^2+ xy dx\otimes dy+ xy dy\otimes dx)\\
&+\frac{H^2}{t^4}(y^2 dx^2+x^2 dy^2- xy dx\otimes dy- xy dy\otimes dx)\\
&+\frac{-qy H^2}{t^2}(\alpha \otimes dx + dx\otimes \alpha)+ \frac{qx H^2}{t^2}(\alpha \otimes dy+ dy\otimes \alpha)\\
&+H^2 q^2 \alpha\otimes \alpha+F^2 g_N,\\
&= dx^2 (\frac{H^2 y^2}{t^4}+\frac{x^2}{t^2})+ dy^2 (\frac{H^2 x^2}{t^4}+\frac{y^2}{t^2})+ (dx\otimes dy+ dy\otimes dx)(-\frac{H^2 xy}{t^4}+\frac{xy}{t^2})\\
&+\frac{-qy H^2}{t^2}(\alpha \otimes dx + dx\otimes \alpha)+ \frac{qx H^2}{t^2}(\alpha \otimes dy+ dy\otimes \alpha)\\
&+H^2 q^2 \alpha\otimes \alpha+F^2 g_N.
\end{align*}
Thus, the metric can be extended smoothly to $t=0$ if and only if the metric components
\[ \frac{y^2}{t^2}(\frac{H^2}{t^2}-1), \frac{x^2}{t^2}(\frac{H^2}{t^2}-1), \frac{xy}{t^2}(\frac{H^2}{t^2}-1), \frac{qy H^2}{t^2}, \frac{qx H^2}{t^2}, F^2 \]
can be smoothly extended to $x=y=0$. According to \cite[Prop. 2.7]{KW74curv}, a function $\tilde{f}(x, y)=f(t, z)$ is smooth if and only if
\begin{itemize}
\item $f(t, z)=f(-t, z+\pi)$ for all $t, z$;
\item $t^k \big(\frac{\partial^k f}{\partial t^k}(0, z)\big)$ is a homogeneous polynomial of degree $k$ in $x$ and $y$.
\end{itemize}
Applying such criteria to our case yields
\begin{itemize}
\item $H'(0)=1$ and $H^{(2j)}(0)=0$ for every non-negative integer $j$;
\item $F^{(2j+1)}(0)=0$ for every non-negative integer $j$.
\end{itemize} \end{proof}
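It is instructive to test Lemma \ref{almostKahler} against the Gaussian soliton data recorded earlier ($H=F=t$, $q=1$, $k=2n$, $f=\lambda t^2/2$); this consistency check is ours:
\begin{align*}
ds &= t\,dt \implies s=\tfrac{t^2}{2}, \quad \alpha=\beta=2s \ (A=0), \quad \varphi=\lambda s \ (B=\lambda,\ C=0);\\
\text{first equation:}\quad \lambda &= -\tfrac{\ddot{\alpha}}{2}+B\tfrac{\dot{\alpha}}{2}-\tfrac{d}{ds}\Big(\tfrac{m\alpha}{2s}\Big)=0+\lambda-0;\\
\text{second equation:}\quad \lambda(2s) &= k-2-2m+\lambda(2s) \iff k=2m+2=2n.
\end{align*}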
Additionally, the hyperbolic case gives rise to the following. \begin{lemma}
\label{solitonsubmer2}
Let $I\times P$ be equipped with the metric
\[dt^2+ g_t=dt^2+ e^{2A(t)z} \pi^\ast g_N+ H^2(t)\eta\otimes \eta\]
for $\eta= dz$ and ${\mathrm {Rc}}_N =0$. The gradient Ricci soliton equation becomes
\begin{align*}
A' &=0= H',\\
{\lambda} &=-(\frac{A}{H})^2 (2m)=f''.
\end{align*} \end{lemma} \begin{proof}
We have
\begin{align*}
2g_t L_t &= g'_t=2\frac{H'}{H}H^2 dz^2+ 2z A' e^{2Az}\pi^\ast g_N,\\
L_t &= \frac{H'}{H}\partial_z \otimes dz+ zA'\mathrm{Id},\\
L'_t &=(\frac{H''}{H}-(\frac{H'}{H})^2) \partial_z \otimes dz+zA''\mathrm{Id}.
\end{align*}
Consequently,
\begin{align*}
\text{tr}{L_t} &=\frac{H'}{H}+zA'(2m),\\
\text{tr}{L^2_t} &=(\frac{H'}{H})^2+2m z^2 (A')^2,\\
\text{tr}{L'_t} &=(\frac{H''}{H}-(\frac{H'}{H})^2)+(2m)zA''.
\end{align*}
By the first equation of (\ref{reducedsolitionsystem}), one deduces that $A'=0$; that is, $A$ is constant. By the third equation of (\ref{reducedsolitionsystem}) and Lemma \ref{curvatrehyper}, $H$ is constant. The result then follows.
\end{proof} \section{Induced Symmetry} \label{symm} Let $(M, g, J, f)$ be an almost Hermitian GRS. In this section, we examine how the symmetry of $(M, g, J, f)$ induces a certain symmetry on the level sets of the function $f$. For each $c\in f(M)$, $M_c:=f^{-1}(c)$ is called a level set of $f$. By the regular level set theorem \cite{tubook11}, if $c$ is a regular value, then the level set is a smooth submanifold of codimension one. From now on, we assume that $c$ is a regular value unless stated otherwise.
As $V=\frac{\nabla f}{|\nabla f|}$ is well-defined on $M_c$, let $\zeta=-J(V)$ and $\eta$ be the dual $1$-form to $\zeta$.
We define $\Phi$ on $TM_c$ by \[\Phi X+\eta(X) V= JX. \] \begin{proposition}
\label{levelalmost}
Let $(M, g, J, f)$ be an almost Hermitian GRS. If $c$ is a regular value of $f$, then $(M_c, g, \zeta, \eta, \Phi)$ is an almost contact metric structure. \end{proposition} \begin{proof}
If $X=a\zeta+X_1$ for $X_1$ a section of $TM_c$ and $X_1\perp \zeta$, then $\Phi(X)=JX_1$ is also a section of $TM_c$, since $g(JX_1, V)=-g(X_1, JV)=g(X_1, \zeta)=0$. We check
\begin{align*}
\Phi^2(X) &=\Phi(J(X_1))=J(J(X_1))-\eta(J(X_1))V,\\
&=-X_1-g(\zeta, J(X_1))V=-X_1+g(J(\zeta), X_1)V=-X_1+g(V, X_1)V,\\
&=-X_1=-X+a\zeta=-X+\eta(X)\zeta.
\end{align*}
Therefore, $\Phi^2=-\text{Id}+\zeta\otimes \eta$ and $(M_c, \Phi, \zeta, \eta)$ is an almost contact structure.
Next, for $X=a\zeta+X_1$ and $Y=b\zeta+Y_1$, we compute
\begin{align*}
g(\Phi X, \Phi Y) &= g(J(X_1), J(Y_1))=g(X_1, Y_1)= g(X-a\zeta, Y-b\zeta)\\
&=g(X, Y)-a g(\zeta, Y)-b g(X,\zeta)+ab g(\zeta, \zeta)\\
&=g(X, Y)-2ab+ab=g(X, Y)-\eta(X)\eta(Y).
\end{align*}
Thus, $(M_c, \Phi, \zeta, \eta)$ is an almost contact metric structure. \end{proof}
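As a quick consistency check (not part of the proof above), taking $X=\zeta$, i.e., $a=1$ and $X_1=0$, in the definition of $\Phi$ gives
\[ \Phi(\zeta)=J(0)=0, \qquad \eta(\zeta)=g(\zeta,\zeta)=1, \]
recovering the standard identities $\Phi\zeta=0$ and $\eta(\zeta)=1$ of an almost contact structure.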
We now collect some useful observations. \begin{lemma}
\label{lie0}
Suppose that $\mathcal{L}_X g=0$.
\begin{enumerate} [label=(\roman*)]
\item $\mathcal{L}_X f=0 \iff \mathcal{L}_X \nabla f=0 \iff \mathcal{L}_X |\nabla f|^2=0$.
\item Let $\gamma$ be the $1$-form dual to a vector field $Z$. Then $\mathcal{L}_X Z=0 \iff \mathcal{L}_X \gamma=0$.
\end{enumerate} \end{lemma} \begin{proof}
We only show the first statement, as the others follow from similar calculations. One computes
\begin{align*}
(\mathcal{L}_X \nabla f)Y &= g(\nabla_X \nabla f-\nabla_{\nabla f}X, Y)\\
&=\mathrm {Hess} f(X, Y)+g(\nabla_Y X, \nabla f)-\mathcal{L}_X g(Y, \nabla f)\\
&=\mathrm {Hess} f(X, Y)-g(X, \nabla_Y \nabla f)+Y(\mathcal{L}_X f)-\mathcal{L}_X g(Y, \nabla f)\\
&=Y(\mathcal{L}_X f)-\mathcal{L}_X g(Y, \nabla f).
\end{align*}
Furthermore,
\begin{align*}
\mathcal{L}_X|\nabla f|^2 &= 2g(\nabla_X \nabla f, \nabla f)\\
&= 2g([X,\nabla f], \nabla f)-2g(\nabla_{\nabla f}X, \nabla f)\\
&= 2g([X,\nabla f], \nabla f)-(\mathcal{L}_X g)(\nabla f, \nabla f).
\end{align*} The conclusion then follows. \end{proof}
\begin{lemma} \label{lie4}
Suppose that $\mathcal{L}_X V=0$ and $\mathcal{L}_X\eta=0$. Then $\mathcal{L}_X \Phi=0$ if and only if $\mathcal{L}_X J=0$.
\end{lemma} \begin{proof} We compute
\begin{align*}
(\mathcal{L}_X \Phi)Y &=[X, \Phi(Y)]-\Phi([X, Y])\\
&=[X, J(Y)-\eta(Y)V]-J([X, Y])+\eta([X, Y])V\\
&= (\mathcal{L}_X J)Y-[X, \eta(Y)V]+\eta([X, Y])V\\
&= (\mathcal{L}_X J)Y-\eta(Y)[X, V]-\nabla_X(\eta(Y))V+\eta([X, Y])V\\
&=(\mathcal{L}_X J)Y-\eta(Y)\mathcal{L}_X V-(\mathcal{L}_X\eta)(Y) V.
\end{align*} Since $\mathcal{L}_X V=0$ and $\mathcal{L}_X\eta=0$, the result follows. \end{proof}
\begin{proposition}
\label{restrictionsymmetry} On each $M_c$, if $\mathcal{L}_X g=\mathcal{L}_X J=\mathcal{L}_X f=0$ then we have
\begin{align*}
\mathcal{L}_X \zeta &=0, &\mathcal{L}_X \eta &=0, &\mathcal{L}_X \Phi &=0.
\end{align*} \end{proposition}
\begin{proof}
It follows from Lemmas \ref{lie0} and \ref{lie4}. \end{proof}
\begin{proposition}
\label{restrictiontrivial}
Suppose that $\mathcal{L}_X g=0$ and $\mathcal{L}_X \nabla f=0$. If $X_{\mid_{M_c}}\equiv 0$ then $X\equiv 0$. \end{proposition} \begin{proof}
We compute
\begin{align*}
\nabla_{\nabla f} X &= -[X, \nabla f]+\nabla_X \nabla f\\
&= \mathrm {Hess}{f}(X, \cdot)=0.
\end{align*}
Since a Killing vector field is completely determined by its value and its first covariant derivative at a single point, $X$ must be trivial. \end{proof}
\begin{theorem}
\label{inducegroup}
Let $(M^{2n}, g, J, f)$ be an almost Hermitian GRS with a non-trivial $f$ and $G$ be a group of symmetry preserving $g, J,$ and $f$. Then $G$ is also a group of symmetry for $(M_c, g, \zeta, \eta, \Phi)$ as an almost contact metric structure. \end{theorem} \begin{proof}
Let $u: M\to M$ be an isometry preserving $J$ and $f$. As $f(u(a))=f(a)$, $u(M_c)=M_c$ and $u$ induces a map $u_c: M_c\to M_c$. The proof will follow from the following claims.
\textit{Claim:} If $u_c$ is an identity map then so is $u$.
\textit{Proof:} An isometry is necessarily an affine transformation which preserves parallelism \cite[Chapter VI]{KNvolumeI96}. The result then follows.
\textit{Claim:} $\frac{\nabla f}{|\nabla f|}$ is $u$-invariant. Consequently, so is $\zeta= -J(\frac{\nabla f}{|\nabla f|})$.
\textit{Proof:} Since $f$ is $u$-invariant, so is $df$. As $\nabla f$ is the dual of $df$ via $g$ and each is $u$-invariant, the first statement follows via Lemma \ref{lie0}. The second is because $J$ is $u$-invariant.
\textit{Claim:} $\eta$ and $\Phi$ are $u$-invariant.
\textit{Proof:} Because $\eta$ is the dual of a $u$-invariant vector field and $u$ is an isometry, $\eta$ is $u$-invariant. Next, one recalls
\[\Phi(\cdot)= J(\cdot)-\eta(\cdot)\frac{\nabla f}{|\nabla f|}\]
and each component is $u$-invariant. The result then follows. \end{proof}
\begin{corollary}
\label{maxdim}
Let $(M^{2n}, g, J, f)$ be an almost Hermitian GRS with a non-trivial $f$. The dimension of the group of symmetry is at most $n^2$.
\end{corollary}
\begin{proof}
It follows from Theorem \ref{inducegroup} and the corresponding result for an almost contact metric structure, Theorem \ref{tannoclassificationalmost}.
\end{proof}
Under the setup of cohomogeneity one, there is a converse statement. Let $(M, g, J, f)$ be an almost Hermitian GRS such that, over a dense subset, the metric is given by the ansatz \ref{ansatz}. Let $X$ be an infinitesimal automorphism vector field on $N$; that is, \[\mathcal{L}_X g_N= \mathcal{L}_X J_N=0.\] Since $\omega_N= g_N(\cdot, J_N\cdot)$, \[\mathcal{L}_X \omega_N=0.\] Let $X^\ast$ be its horizontal lift to $(P, g_t)$. By Cartan's formula, \[3d\omega (W, Y, Z)= (\mathcal{L}_W\omega)(Y, Z)- d(\mathfrak{i}_W\omega)(Y, Z).\] For $\omega=\pi^\ast(\omega_N)$ and $W=X^\ast$, we have $d\omega=0=\mathcal{L}_{X^\ast} \omega$. Thus, the $1$-form $\mathfrak{i}_{X^\ast} \omega$ is closed. If $P$ is simply connected then there exists a function $\ell$ such that \[ \mathfrak{i}_{X^\ast} \omega= d\ell.\]
\begin{lemma}
\label{symmetryup}
If $P$ is simply connected then the vector field $X^\ast- \ell\partial_z$ is independent of $t$ and is an infinitesimal symmetry of $(P, g_t, \eta, \zeta, \Phi)$. \end{lemma} \begin{proof}
$X^\ast$ is independent of $t$ as it only depends on $\pi$ and the fixed subspace which is $g_t$-perpendicular to $\partial_z$ for all $t$. The function $\ell$ is independent of $t$ as the proof of the Poincar\'e lemma is topological. The rest is straightforward; see \cite[Lemma 5.1]{tanno69} for details. \end{proof} \begin{remark}
If $N$ is simply-connected, one can choose $\ell$ to be constant on each fiber. \end{remark}
Since $(P, g_t)$ is complete for each $t$, it is possible to construct $G$, the group consisting of automorphisms generated by the vector fields of the form $X^\ast -\ell \partial_z$ in Lemma \ref{symmetryup} and the Killing vector field $\partial_z$.
\begin{proposition}
\label{conversesymmetry} If $P$ is simply connected then $G$ is a group of automorphism for $(M, g, J, f)$. \end{proposition} \begin{proof}
Let $X_i$ be either a vector field from Lemma \ref{symmetryup} or $\partial_z$. Then, immediately,
\[\mathcal{L}_{X_i} g= \mathcal{L}_{X_i}(dt^2+g_t)=\mathcal{L}_{X_i}g_t=0. \]
For $V=\partial_t$, by Lemmas \ref{lie4} and \ref{symmetryup}, $\mathcal{L}_X J=0$. The result then follows. \end{proof}
\section{Rigidity of the Maximal Dimension} \label{rigid}
We are now ready to prove the main results.
\begin{proof}[Proof of Theorem \ref{main1}] First, by Corollary \ref{maxdim}, $\text{dim}(G)\leq n^2$. Suppose that $\text{dim}(G)= n^2$. Since $f$ is non-constant, let $\gamma(t)$ be a unit speed integral curve of $\nabla f$. For each regular value $f(t)$, $(M_t, \zeta_t, \eta_t, \Phi_t)$ is an almost contact metric structure by Prop \ref{levelalmost}. By Theorem \ref{inducegroup}, $G$ is also a group of symmetry for each such almost contact metric structure.
By S. Tanno's Theorem \ref{tannoclassificationalmost}, each connected component of $M_t$ must be one of the model spaces therein. Furthermore, as $G$ acts transitively on each connected component of $M_t$, $M_t$ is a principal orbit and the orbit space of $G$-actions on $M$ is of dimension one. Thus, $(M, g, J, f)$ is of cohomogeneity one and each $M_c$ is connected. Let $P=M_{t_0}$, for some fixed value $t_0$, which is a total space of a line or circle bundle over $\mathbb{N}(k)$ with the fiber projection $\pi$. By Sard's theorem, the set of singular values for $f:M \to \mathbb{R}$ is of measure zero in $\mathbb{R}$. Thus, by continuity, nearby $M_t$ must be obtained from the same model. Locally, the metric can be written as \[g= dt^2 +g_t,~~ f=f(t).\]
Next, we consider cases as described in Theorem \ref{tannoclassificationalmost}.
\textbf{Case 1:} $(P, g_t, \zeta_t, \eta_t, \Phi_t)$ is a deformation of a homogeneous Sasakian metric with constant $\Phi$-sectional curvature. Thus, $g_t$ is obtained from a standard Sasakian metric via an $a$-homothety and a $\pm b$ deformation. Hence, fixing a background $\eta$ and $\zeta=\partial_z$ on each fiber, by Theorem \ref{tannoSasa} and Lemma \ref{curvaturedeform}, for $F^2 = ab$ and $H =a$, \begin{align*}
g_t &= F^2(t)\pi^\ast g_{\mathbb{N}}+ H^2(t)\eta\otimes \eta,\\
\eta (\partial_z) &= 1,~~ d\eta =\pi^\ast\omega_N. \end{align*}
\textbf{Case 2:} $P$ is a trivial bundle over $\mathbb{N}(k)$. Thus, \begin{align*} g_t &= H^2(t)dz^2+ F(t)^2 \pi^{\ast} g_{\mathbb{N}}. \end{align*}
\textbf{Case 3:} $(P, g_t)$ is a hyperbolic metric. That is, \begin{align*} g_t &= H^2 dz^2+ e^{2A(t)z} g_N. \end{align*} One direction then follows from Lemma \ref{solitonsubmer1} (for $q=1, 0$) and Lemma \ref{solitonsubmer2}. \\
For the reverse direction, if the soliton is locally constructed by the ansatz \ref{ansatz} and $P$ is simply connected, then its automorphism group is the same as the group of symmetry of $(P, g_t, \eta_t, \zeta_t, \Phi_t)$ for each regular value $t$ by Prop. \ref{conversesymmetry}. For $N=\mathbb{N}(k)$ such a group is of dimension $n^2$ by \cite{tanno69}. The hyperbolic case is trivial as the metric is a product.
\end{proof}
The next results will pave the way to the proof of Theorem \ref{main0}. \begin{proposition}
\label{main2}
Let $(M^{2n}, g, J, f, \lambda)$ be a non-trivial almost K\"{a}hler GRS with $G$ the group of symmetry. If $\text{dim}(G)=n^2$ then either
\begin{enumerate} [label=(\roman*)]
\item the soliton belongs to case (i) of Theorem \ref{main1} with $q=1$. Furthermore, $H, F$ and $f$ can be solved as follows:
\begin{align*}
ds &= H dt,\\
F^2(t)=\beta(s) &= 2s+A,\\
f(t) = \phi(s) &=Bs+C,\\
D &= k-\lambda A,\\
\alpha(x) (2x+A)^m e^{-Bx}\Big\vert_{s_0}^s&= \int_{s_0}^s (-2\lambda x+D)e^{-B x}(2x+A)^m \,dx.
\end{align*}
\item the soliton splits as $(M_1, g_1, J_1, f_1, \lambda)\times (\mathbb{N}, g_{\mathbb{N}}, J_{\mathbb{N}}, f_{\mathbb{N}}, \lambda)$ for $(M_1, g_1, J_1, f_1, \lambda)$ a K\"{a}hler GRS in real dimension two.
\end{enumerate}
\end{proposition} \begin{proof} We continue from the proof of Theorem \ref{main1}. For the first case $q=1$, the result follows from Lemma \ref{almostKahler}. For the case $q=0$, by equation (\ref{Kahlercondition}), $F$ is constant. Thus, the soliton must split as a Riemannian product
\[ M_1 \times M_2.\] By \cite[Lemma 2.1]{PW09grsym} and the discussion after Lemma \ref{solitonsubmer1}, each $(M_i, g_i, J_i, f_i, \lambda)$ is a K\"{a}hler GRS. As $(M_2, g_2, J_2)= (\mathbb{N}, g_{\mathbb{N}}, J_{\mathbb{N}})$, the result follows. Finally, for the hyperbolic case, the product metric is not almost K\"{a}hler.
\end{proof}
\begin{proof}[Proof of Theorem \ref{main0}] We first need the following.
\textit{Claim 1:} The largest connected group of isometries preserves $J$ and $f$.
\textit{Proof:} By A. Lichnerowicz \cite{lich54}, for an irreducible K\"{a}hler manifold, the largest connected group of isometries preserves the almost complex structure if $n$ is odd or if $n$ is even and ${\mathrm {Rc}}$ does not vanish. As the soliton is non-trivial, ${\mathrm {Rc}}\neq 0$.
Furthermore, by \cite{PW09grsym}, for a Killing vector field $X$, either $\mathcal{L}_X f=\nabla_X f\equiv 0$ or $\nabla X\equiv 0$ and the manifold splits off a line or a circle. Since the metric is K\"{a}hler, $\nabla J=0$ and hence $\nabla(JX)=J\nabla X$, so
\[\nabla X\equiv 0 \iff \nabla(JX)\equiv 0.\]
Thus, it splits off a line/circle if and only if there is a decomposition with a flat factor with respect to the K\"{a}hler structure, which contradicts the irreducibility. The claim follows. \\
The claim implies that the largest connected group of isometries is contained in the group of symmetry preserving $g, J,$ and $f$. By Theorem \ref{main1}, the dimension of the group of isometries is at most $n^2$. The maximal dimension is attained only if, locally, the metric is constructed from the ansatz \ref{ansatz} as in Prop. \ref{main2}(i). Since $\alpha(s)$ and $\beta(s)$ do not both approach $\infty$ as $s\rightarrow -A/2$, the metric is only complete if there is a singular orbit and one needs to smoothly compactify such an end. By taking a scaling if necessary, one assumes that the singular orbit is at $s=0=t$ and the metric is defined in a neighborhood where $s>0$. Thus, immediately, $\beta\geq 0$ if and only if $A\geq 0$.\\
\textit {Claim 2:} If $\lambda\geq 0$ then $\mathbb{N}=\mathbb{CP}^{n-1}$.
\textit{Proof:} Assume that $\mathbb{N}\neq \mathbb{CP}^{n-1}$; then it is non-compact. Then, at $t=s=0$, one can only collapse the fiber; that is, $\alpha(0)=H(0)=0, \beta(0)=F^2(0)>0$. By Prop. \ref{smoothness}, the smoothness of the metric requires
\begin{align*}
\frac{\partial \alpha}{\partial s}(0) &= 2\frac{\partial H}{\partial t}(0)= 2.
\end{align*}
Evaluating the equation
${\lambda}(2s+A) ={k}-\dot{\alpha}-\frac{2(m-1+q^2)\alpha}{2s+A}+B\alpha$ at $s=0$ yields
\[k-\lambda A=D= 2. \]
For $\lambda \geq 0$, it implies that $k>0$, a contradiction to $\mathbb{N}(k)\neq \mathbb{CP}^{n-1}$.\\
Finally, the reverse direction follows from Claim 1 and Theorem \ref{main1}.
\end{proof} \begin{remark}
For $\lambda<0$, Lemma \ref{smoothness} shows that it is possible to construct K\"{a}hler GRS with $\mathbb{N}\neq \mathbb{CP}^{n-1}$. The details will appear elsewhere. \end{remark}
\begin{proof}[Proof of Corollary \ref{main3}] If the soliton is irreducible, Theorem \ref{main0} applies. Otherwise, we argue as in the proof of Theorem \ref{main0} to obtain a decomposition:
\[(M, g, f, J, \lambda)= \Pi_{i=0}^k (M_i^{n_i}, g_i, f_i, J_i, \lambda)\]
with $\sum_{i} n_i=n$ (complex dimensions), where the $i=0$ factor is Gaussian and each factor with $i>0$ is an irreducible K\"{a}hler GRS. The group of automorphisms of the $i=0$ factor is of dimension at most
\[n_0(n_0+2).\]
For the rest, by Theorem \ref{main0}, the isometry group of the $i$-th factor is of dimension at most $n_i^2$. As each factor is simply-connected, the isometry group of the product is equal to the product of the isometry groups \cite{KNvolumeI96}. Thus, altogether, the isometry group is of dimension at most
\[n_1^2+\cdots+n_k^2\leq (n_1+\cdots+n_k)^2= (n-n_0)^2.\]
It follows that the automorphism group of $(M, g, J, f)$ is of dimension at most, for $k\leq n-1$,
\[(n-n_0)^2+n_0(n_0+2)=n^2 -2n_0(n-n_0-1)\leq n^2.\]
Equality happens if and only if $k=1$ and either $n_0=n-1$ or $n_0=0$. In the case $n_0=n-1$, one factor is a complete K\"{a}hler GRS in real dimension two. The result then follows.
\end{proof}
\begin{proof}[Proof of Corollary \ref{nondegenrate}]
By \cite{KNauto57}, the largest connected group of affine transformations $G$ consists of isometries. Non-degeneracy implies that the Riemannian metric does not split off any flat factors. Thus, $f$ is $G$-invariant. The result then follows from Theorem \ref{main1} and Corollary \ref{main3}. For the reverse direction, the non-degeneracy follows as the Ricci curvature is non-singular at some point \cite{KNauto57}.
\end{proof}
\section{Appendix} \subsection{Convention} Here are our conventions: \begin{itemize}
\item $\mathcal{L}$ denotes the Lie derivative.
\item Our convention for the exterior derivative is, for an $m$-form $\alpha$,
\begin{align*}(m+1)d\alpha(X_0,\dots,X_m) &= \sum_{i}(-1)^i X_i (\alpha(X_0,\dots,\hat{X_i},\dots,X_m))\\
&+\sum_{i<j}(-1)^{i+j}\alpha([X_i, X_j], X_0,\dots, \hat{X_i},\dots,\hat{X_j},\dots,X_m).\end{align*}
Consequently,
\[(dx^{i_1}\wedge\dots \wedge dx^{i_k})(\partial_{x_{i_1}},\dots, \partial_{x_{i_k}})=\frac{1}{k!}. \]
\begin{remark}
Our convention agrees with \cite{chowluni} but differs from \cite{besse} by a scaling.
\end{remark} \item The interior product of an $m$-form is defined as \[(\mathfrak{i}_X \alpha) (Y_1,\dots, Y_{m-1})=m\, \alpha(X, Y_1,\dots, Y_{m-1}).\] The following identity is the so-called Cartan formula for differential forms: \[\mathcal{L}_X \alpha=(d \circ \mathfrak{i}_X+\mathfrak{i}_X\circ d)\alpha.\]
\item On a Riemannian manifold $(M, g)$, there is a unique Levi-Civita connection $\nabla: TM\times C^\infty(TM)\to C^\infty(TM)$. The connection induces a Riemannian curvature via the covariant second derivative:
\[\nabla^2_{X, Y}= \nabla_X\nabla_Y-\nabla_{\nabla_X Y}.\]
The Riemann curvature $(3,1)$ tensor and $(4,0)$ tensor are defined as follows,
\begin{align*}
{\mathrm R}(X,Y,Z)&=\nabla^2_{X,Y}Z -\nabla^2_{Y,X}Z,\\
{\mathrm R}(X,Y,Z,W)&=g(\nabla^2_{X,Y}Z -\nabla^2_{Y,X}Z, W).
\end{align*}
\begin{remark}
Our sign convention agrees with \cite{chowluni, BGbookSasakian08}. Our $(3,1)$ curvature tensor differs from one of \cite{besse} by a sign.
\end{remark}
Furthermore, the curvature can be seen as an operator on the space of two forms. For an orthonormal basis $\{e_i\}_i$ and any $2$-form $\alpha$, \[{\mathrm R}(\alpha)(e_i, e_j)=\sum_{k<l}{\mathrm R}(e_i, e_j, e_k, e_l)\alpha(e_k, e_l).\]
Consequently, due to our exterior derivative convention,
\[{\mathrm R}(X\wedge Y) (Z, W) =\frac{1}{2}{\mathrm R}(X, Y, Z, W).\]
The sectional curvature and Ricci curvature are defined as follows,
\begin{align*} K(X, Y) &={\mathrm R}(X, Y, Y, X),\\
{\mathrm {Rc}}_{ik} &=\sum_j {\mathrm R}_{ijjk}.
\end{align*}
\end{itemize} \subsection{Submersion} A differentiable map between smooth manifolds $\pi: P\to N$ is a submersion if the pushforward of the tangent space at each point is surjective; that is, for $p\in P$, $\pi_\ast (T_pP)=T_{\pi(p)}N$. A Riemannian submersion is a submersion between Riemannian manifolds such that the differential above, restricted to the orthogonal complement of its kernel, is a linear isometry.
We consider a Riemannian submersion $\pi: (P^{2n+1}, g_p)\to (N^{2n}, g_N)$ such that each fiber is a geodesic line or circle with tangent vector field $\zeta$ satisfying $g(\zeta, \zeta)=1$. The submersion naturally decomposes $TP$ into vertical and horizontal distributions. Let $(\cdot)^{\mathcal{H}}$ and $(\cdot)^{\mathcal{V}}$ denote the horizontal and vertical parts, respectively, of a vector field on $P$. The vertical subspace consists of multiples of $\zeta$. Furthermore, for each vector field on $N$ there is a unique horizontal vector field on $P$ such that they are $\pi$-related. We now collect useful lemmas whose proofs can be found in \cite{pe06book, besse, BGbookSasakian08} or follow from a straightforward calculation.
\begin{lemma} For horizontal vector fields ${X}$ and ${Y}$:
\begin{enumerate} [label=\roman*)]
\item $[\zeta, {X}]$ is vertical,
\item $g([{X}, {Y}], \zeta) =2g(\nabla_{{X}}{Y}, \zeta)= 2g(\nabla_{{Y}}\zeta, {X})=-2g(\nabla_{\zeta}{X}, {Y}),$
\item $\zeta$ is a Killing vector field.
\end{enumerate}
\end{lemma}
The curvature of a submersion can be computed via B. O'Neill's $A$ and $T$ tensors \cite{ONeill66}. Since each fiber is totally geodesic, $T\equiv 0$ and only $A$ is non-trivial. One recalls \begin{align*}
A_X \zeta = (\nabla_X \zeta)^{\mathcal{H}}, &~~A_X Y = (\nabla_X Y)^{\mathcal{V}}. \end{align*} For horizontal vector fields $X, Y, Z, W$, \begin{align*}
{\mathrm R}_P({X}, {Y}, {Z}, {W}) &= {\mathrm R}_{N}(X, Y, Z, W) +2g(A_X Y, A_Z W)+g(A_X Z, A_Y W)-g(A_X W, A_Y Z),\\
{\mathrm R}_P({X}, \zeta, {Z}, \zeta) &= -g(A_X \zeta, A_Z \zeta). \end{align*} \begin{remark}
These formulas differ from ones of \cite[Chapter 9]{besse} by a sign convention. \end{remark}
Consequently, there are corresponding identities for the sectional curvature and Ricci curvature. For orthonormal horizontal vectors $X, Y$ \begin{align*}
K_P(X, Y) = K_{N}(X, Y)-3 |A_X Y|^2; &~~K_P(X, \zeta) = |A_X \zeta|^2, \\
{\mathrm {Rc}}_P(X, Y) = {\mathrm {Rc}}_{N}(X, Y)-2g(A_X, A_Y);~~{\mathrm {Rc}}(X, \zeta) = 0; &~~ {\mathrm {Rc}}(\zeta, \zeta) = g(A\zeta, A\zeta). \end{align*} Here, as $\{E_i\}_{i=1}^{2n}$ denotes a local orthonormal frame for the horizontal distribution, \begin{align*}
g(A_X, A_Y) &= \sum_i g(A_X E_i, A_Y E_i)= g(A_X \zeta, A_Y \zeta),\\
g(A\zeta, A\zeta) &=\sum_i g(A_{E_i}\zeta, A_{E_i}\zeta). \end{align*}
Next, we restrict to the submersion given by the ansatz \ref{ansatz}. It is immediate that $\zeta= \frac{1}{H} \partial_z$ is a Killing vector field. In this situation, the tensor $A$ can be computed directly. \begin{lemma} \label{computeA}
For horizontal vector fields $X$ and $Y$
\begin{align*}
2 H d\eta(X, Y) &= -g([{X}, {Y}], \zeta)= \frac{Hq}{F^2} g (X, JY),\\
A_X Y &=-\frac{Hq}{F^2}g(X, JY)\zeta\\
A_X\zeta &=-\frac{Hq}{F^2} JX.
\end{align*} \end{lemma}
\begin{lemma}
\label{submersioncurvature}
The sectional and Ricci curvature of $(P, g)$ are given by, for orthonormal horizontal vectors $X, Y$
\begin{align*}
K(X, Y) &= \frac{1}{F^2}K_N(FX, FY)-3\frac{H^2 q^2}{F^4} g(X, JY)^2,\\
K(X, \zeta) &= \frac{H^2 q^2}{F^4},\\
{\mathrm {Rc}}(X, Y) &= {\mathrm {Rc}}_N (X, Y)-2\frac{H^2q^2}{F^4}g(X, Y),\\
{\mathrm {Rc}}(\zeta, \zeta) &=\frac{H^2q^2}{F^4}2m.
\end{align*} \end{lemma}
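As a consistency check (ours, not part of the original text), the last identity can be recovered directly from Lemma \ref{computeA} and the submersion formulas above: for a local orthonormal horizontal frame $\{E_i\}_{i=1}^{2m}$,
\[ {\mathrm {Rc}}(\zeta, \zeta)=\sum_{i=1}^{2m} \big|A_{E_i}\zeta\big|^2=\sum_{i=1}^{2m} \frac{H^2 q^2}{F^4}\,|J E_i|^2= \frac{H^2q^2}{F^4}\,2m, \]
since $J$ preserves the horizontal distribution and each $JE_i$ is a unit vector.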
\end{document} |
\begin{document}
\begin{abstract} Recently, Bollen, Draisma, and Pendavingh have introduced the Lindstr\"om valuation on the algebraic matroid of a field extension of characteristic~$p$. Their construction passes through what they call a matroid flock and builds on some of the associated theory of matroid flocks which they develop. In this paper, we give a direct construction of the Lindstr\"om valuated matroid using the theory of inseparable field extensions. In particular, we give a description of the valuation, the valuated circuits, and the valuated cocircuits. \end{abstract}
\maketitle
The algebraic matroid of a field extension records which subsets of a fixed set of elements of the field are algebraically independent. In characteristic~0, the algebraic matroid coincides with the linear matroid of the vector configuration of differentials, and, as a consequence, the class of matroids with algebraic realizations over a field of characteristic~0 is exactly the class of matroids with linear realizations in characteristic~0~\cite{ingleton}. However, in positive characteristic, there are strictly more algebraic matroids than linear matroids, and without an equivalence to linear matroids, the class of algebraic matroids is not well understood.
Pioneering work of Lindstr\"om has shown the power of first applying well-chosen powers of the Frobenius morphism to the field elements, before taking differentials. In particular, he constructed an infinite family of matroids (the Fano matroid among them) for which any algebraic realization over a field of finite characteristic, after applying appropriate powers of Frobenius and taking differentials, yields a linear representation of the same matroid~\cite{lindstrom}.
In general, no single choice of powers of Frobenius may capture the full algebraic matroid, and so Bollen, Draisma, and Pendavingh went one step further by looking at the matroids of differentials after applying all possible powers of Frobenius to the chosen field elements~\cite{bollen-draisma-pendavingh}. These matroids fit together to form what they call a \defi{matroid flock}, and they show that a matroid flock is equivalent to a valuated matroid~\cite{bollen-draisma-pendavingh}*{Thm.~7}. Therefore, the matroid flock of differentials defines a valuation on the algebraic matroid of the field extension, called the \defi{Lindstr\"om valuation} of the algebraic matroid. In this paper we give a direct construction of this valuation, without reference to matroid flocks.
We now explain the construction of the Lindstr\"om valuation of an algebraic matroid. Throughout this paper, we will work with an extension of fields $L \supset K$ of characteristic $p>0$ as well as fixed elements $x_1, \ldots, x_n \in L$. We also assume that $L$ is a finite extension of $K(x_1, \ldots, x_n)$, which may be arranged, for example, by replacing $L$ with $K(x_1, \ldots, x_n)$. The algebraic matroid of this extension can be described in terms of its bases, which are subsets $B \subset E = \{1, \ldots, n\}$ such that the extension of~$L$ over $K(x_B) = K(x_i : i \in B)$ is algebraic. We recall from~\cite{lang}*{Sec.~V.6} that if $K(x_B)^{\sep}$ denotes the set of elements of~$L$ which are separable over $K(x_B)$, then $L$ is a purely inseparable extension of $K(x_B)^{\sep}$, and the degree of this extension, $[L : K(x_B)^{\sep}]$, is called the \defi{inseparable degree} and denoted by adding a subscript: $[L : K(x_B)]_i$.
Now, we define a valuation on the algebraic matroid of $L$ as the following function $\nu$ from the set of bases to $\ZZ$: \begin{equation}\label{eq:valuation} \nu(B) = \log_p [L : K(x_B)]_i. \end{equation} Note that $\nu(B)$ is finite because we assumed that $L$ was a finitely generated algebraic extension of $K(x_B)$ and it is an integer because $[L : K(x_B)]_i$ is the degree of a purely inseparable extension, which is always a power of~$p$~\cite{lang}*{Cor.~V.6.2}.
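As a simple illustration of~\eqref{eq:valuation} (this example is ours, not drawn from~\cite{bollen-draisma-pendavingh}): let $K = \mathbb{F}_p$, $L = \mathbb{F}_p(t)$, $n=1$, and $x_1 = t^p$. Then $B = \{1\}$ is a basis, and $t$ is a root of $X^p - x_1 = (X - t)^p$, so no element of $L \setminus K(x_1)$ is separable over $K(x_1)$; hence $K(x_B)^{\sep} = K(x_1)$ and
\[ \nu(B) = \log_p [L : K(x_1)]_i = \log_p p = 1. \]
Choosing $x_1 = t$ instead gives $\nu(B) = 0$.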
\begin{thm}\label{t:agree} The function $\nu$ in \eqref{eq:valuation} defines a valuation on the algebraic matroid of $L \supset K$, such that the associated matroid flock is the matroid flock of the extension. \end{thm}
In addition to the valuation given in~\eqref{eq:valuation}, we give descriptions of the valuated circuits of the Lindstr\"om valuated matroid in the beginning of Section~\ref{sec:valuated-matroid} and of the valuated cocircuits and minors in Section~\ref{sec:cocircuits-minors}. The description of the circuits gives an algorithm for computing the Lindstr\"om valuated matroid using Gr\"obner bases, assuming that $L$ is finitely generated over a prime field (see Remark~\ref{r:computing} for details).
\begin{rmk}\label{r:sign-convention} There are two different sign conventions used in the literature on valuated matroids. We use the convention which is compatible with the ``min-plus'' convention in tropical geometry, which is the opposite of what was used in the original paper of Dress and Wenzel~\cite{dress-wenzel}, but is consistent with~\cite{bollen-draisma-pendavingh}. \end{rmk}
\subsection*{Acknowledgments} I'd like to thank Jan Draisma for useful discussion about the results in~\cite{bollen-draisma-pendavingh}, which prompted this paper, Rudi Pendavingh for suggesting the results appearing in Section~\ref{sec:cocircuits-minors}, and Felipe Rinc\'on for helpful feedback. The author was supported by NSA Young Investigator grant H98230-16-1-0019.
\section{The Lindstr\"om valuated matroid}\label{sec:valuated-matroid}
In this section, we verify that the function~\eqref{eq:valuation} from the introduction is a valuation on the algebraic matroid of the extension $L \supset K$ and the elements $x_1, \ldots, x_n$. We do this by first constructing the valuated matroid in terms of its valuated circuits, and then showing that the corresponding valuation agrees with the function~\eqref{eq:valuation}. Throughout the rest of the paper, we will use $F$ to denote the Frobenius morphism $x \mapsto x^p$.
Recall that a (non-valuated) circuit of the algebraic matroid of the elements $x_1, \ldots, x_n$ in the extension $L \supset K$ is an inclusion-wise minimal set $C \subset E$ such that $K(x_{C})$ has transcendence degree $\lvert C \rvert - 1$ over $K$. Therefore, there is a unique (up to scaling) polynomial relation among the~$x_i$, which we call the \defi{circuit polynomial}, following~\cite{kiraly-rosen-theran}. More precisely, we let $K[X_C]$ be the polynomial ring whose variables are denoted $X_i$ for $i \in C$. The aforementioned circuit polynomial is a (unique up to scaling) generator $f_C$ of the kernel of the homomorphism $K[X_C] \rightarrow K(x_C)$ which sends $X_i$ to $x_i$. We write this polynomial: \begin{equation*} f_{C} = \sum_{\bfu \in J} c_{\bfu} X^{\bfu} \in K[X_C] \subset K[X_E] \end{equation*} where $J \subset \ZZ_{\geq 0}^n$ is a finite set of exponents and $c_{\bfu} \neq 0$ for all $\bfu \in J$. Then, we define $\vecC(f_{C})$ to be the vector in $(\ZZ \cup \{\infty\})^n$ with components: \begin{equation}\label{eq:c-function} \vecC(f_{C})_i = \min \{\val_p \bfu_i \mid \bfu \in J, \bfu_i \neq 0 \}, \end{equation} where $\val_p \bfu_i$ denotes the $p$-adic valuation, which is defined to be the power of~$p$ in the prime factorization of the positive integer $\bfu_i$. If $\bfu_i = 0$ for all $\bfu \in J$, then we take $\vecC(f_C)_i$ to be $\infty$. For any vector $\vecC \in (\ZZ \cup \{\infty\})^n$, the \defi{support} of~$\vecC$, denoted $\supp \vecC$, is the set $\{i \in E \mid \vecC_i < \infty\}$. Since $f_C$ is a polynomial in the variables~$X_i$ for $i \in C$, but not in any proper subset of them, the support of $\vecC(f_C)$ is exactly the circuit~$C$.
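The vector $\vecC(f_C)$ depends only on the exponent set $J$ of the circuit polynomial and is elementary to compute. The following short Python sketch (our illustration, not part of the construction; the function names are ours) implements \eqref{eq:c-function} directly:

```python
def val_p(n, p):
    """p-adic valuation of a positive integer n: the exponent of p in n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def vec_C(exponents, p, n):
    """Given the exponent set J of a circuit polynomial f_C (tuples of
    length n), return the vector C(f_C): in coordinate i, the minimum of
    val_p over the nonzero i-th exponents, or infinity if all are zero."""
    INF = float("inf")
    out = []
    for i in range(n):
        vals = [val_p(u[i], p) for u in exponents if u[i] != 0]
        out.append(min(vals) if vals else INF)
    return out

# Example: f_C = X_2 - X_1^p with p = 2 has exponent set J = {(2,0), (0,1)},
# giving vec_C = [1, 0], whose support is the circuit {1, 2}.
```

The support of the resulting vector consists of the coordinates with finite entries, matching the definition of $\supp \vecC$ above.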
We will take the valuated circuits of the Lindstr\"om valuation to be the set of vectors: \begin{equation}\label{eq:valuated-circuits} \mathcal C = \{\vecC(f_C) + \lambda \onevector \mid \mbox{$C$ is a circuit of $L \supset K$}, \lambda \in \ZZ\} \subset (\ZZ \cup \{\infty\})^n, \end{equation} where $\onevector$ denotes the vector $(1, \ldots, 1)$. Before verifying that this collection of vectors satisfies the axioms, we prove the following preliminary lemma relating the definition in~\eqref{eq:c-function} to the inseparable degree: \begin{lem}\label{l:poly-insep-degree} Let $S \subset E$ be a set of rank $\lvert S \rvert - 1$, and let $C$ be the unique circuit contained in $S$. If we abbreviate the vector $\vecC(f_C)$ as $\vecC$, then \begin{equation*} [K(x_{S}) : K(x_{S \setminus \{i\}})]_i = p^{\vecC_i} \end{equation*} for any $i \in C$. In particular, $K(x_{S})$ is a separable extension of $K(x_{S \setminus \{i\}})$ if and only if $\vecC_i = 0$. \end{lem}
\begin{proof} For $i \in C$, we let $Y_i$ denote the monomial $X_i^{p^{\vecC_i}}$ in $K[X_S]$. Then, the polynomial~$f_C$ lies in the polynomial subring $K[X_{S \setminus \{i\}}, Y_i]$, by the definition of~$\vecC_i$. Similarly, we let $y_i$ denote the element $x_i^{p^{\vecC_i}} = F^{\vecC_i} x_i$ in $K(x_S)$. Then, $f_C$, as a polynomial in $K[X_{S \setminus \{i\}}, Y_i]$, is the minimal defining relation for $K(x_{S\setminus \{i\}}, y_i)$ as an extension of $K(x_{S \setminus \{i\}})$. By the definition of $\vecC_i$, some term of $f_C$ is of the form $X^uY_i^a$, where $a$ is not divisible by $p$, and so $\partial f_C/\partial Y_i$ is a non-zero polynomial. Therefore, $f_C$ is a separable polynomial of $Y_i$, and so $K(x_{S \setminus \{i\}}, y_i)$ is a separable extension of $K(x_{S \setminus \{i\}})$.
On the other hand, $K(x_S)$ is a purely inseparable extension of $K(x_{S \setminus \{i\}}, y_i)$, defined by the minimal relation \begin{equation*} x_i^{p^{\vecC_i}} - y_i = 0. \end{equation*} Therefore, this extension has degree $p^{\vecC_i}$, which is thus the inseparable degree $[K(x_S) : K(x_{S \setminus \{i\}})]_i$, as desired. \end{proof}
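For a minimal illustration of Lemma~\ref{l:poly-insep-degree} (not drawn from the sequel), take $K = \mathbb F_p$, $L = \mathbb F_p(t)$, $x_1 = t$, and $x_2 = t^p$. The set $S = \{1,2\}$ has rank~$1$, the unique circuit is $C = S$ with circuit polynomial $f_C = X_1^p - X_2$, and so $\vecC = (1, 0)$. Accordingly, \begin{equation*} [K(x_S) : K(x_2)]_i = [\mathbb F_p(t) : \mathbb F_p(t^p)]_i = p = p^{\vecC_1}, \qquad [K(x_S) : K(x_1)]_i = 1 = p^{\vecC_2}, \end{equation*} matching the lemma: $K(x_S)$ is separable (indeed equal) over $K(x_1)$, but purely inseparable over $K(x_2)$.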
We now verify that the collection~\eqref{eq:valuated-circuits} satisfies the axioms of valuated circuits. Several equivalent characterizations of valuated circuits are given in~\cite{murota-tamura}, and we will use the characterization in the following proposition:
\begin{prop}[Thm.~3.2 in \cite{murota-tamura}]\label{p:valuated-circuits} A set of vectors $\mathcal C \subset (\ZZ \cup \{\infty\})^n$ is the set of \defi{valuated circuits} of a valuated matroid if and only if it satisfies the following properties: \begin{enumerate} \item The collection of sets $\{\supp \vecC \mid \vecC \in \mathcal C\}$ satisfies the axioms of the circuits of a non-valuated matroid. \item If $\vecC$ is a valuated circuit, then $\vecC + \lambda \onevector$ is a valuated circuit for all $\lambda \in \ZZ$. \item Conversely, if $\vecC$ and $\vecC'$ are valuated circuits with $\supp \vecC = \supp \vecC'$, then $\vecC = \vecC' + \lambda \onevector$ for some integer $\lambda$. \item Suppose $\vecC$ and $\vecC'$ are in $\mathcal C$ such that \begin{equation*} \rank(\supp \vecC \cup \supp \vecC') = \lvert \supp \vecC \cup \supp \vecC' \rvert - 2, \end{equation*} and $u, v \in E$ are elements such that $\vecC_u = \vecC_u'$ and $\vecC_v < \vecC_v' = \infty$. Then there exists a vector $\vecC'' \in \mathcal C$ such that $\vecC_u'' = \infty$, $\vecC_v'' = \vecC_v$, and $\vecC_i'' \geq \min\{ \vecC_i, \vecC_i'\}$ for all $i \in E$ . \end{enumerate} \end{prop}
The first property from Proposition~\ref{p:valuated-circuits} is equivalent to axioms $\mathrm{VC1}$, $\mathrm{VC2}$, and $\mathrm{MCE}$ from~\cite{murota-tamura}, and the remaining three are denoted $\mathrm{VC3}$, $\mathrm{VC3_e}$, and $\mathrm{VCE_{loc1}}$, respectively.
\begin{prop}\label{p:circuit-axioms} The collection~$\mathcal C$ of vectors given in~\eqref{eq:valuated-circuits} defines the valuated circuits of a valuated matroid. \end{prop}
\begin{proof} The first axiom from Proposition~\ref{p:valuated-circuits} follows because each valuated circuit is constructed to have support equal to a non-valuated circuit. The second axiom follows immediately from the construction, and the third follows from the uniqueness of circuit polynomials.
Thus, it remains only to check (4) from Proposition~\ref{p:valuated-circuits}. Suppose that $\vecC$ and~$\vecC'$ are valuated circuits and $u, v \in E$ are elements satisfying the hypotheses of condition~(4). We can write $\vecC = \vecC(f) + \lambda \onevector$ and $\vecC' = \vecC(f') + \lambda'\onevector$ for circuit polynomials~$f$ and~$f'$ in $K[X_1, \ldots, X_n]$. Note that $\vecC(F^m f) = \vecC(f) + m \onevector$, and so by either replacing $f$ with $F^m f$ or replacing $f'$ with $F^m f'$, for some integer~$m$, we can assume that $\lambda = \lambda'$. Moreover, since the fourth axiom only depends on the relative values of the entries of $\vecC$ and $\vecC'$, it is sufficient to check the axiom for $\vecC$ and~$\vecC'$ replaced by $\vecC(f) = \vecC - \lambda \onevector$ and $\vecC(f') = \vecC' - \lambda \onevector$, respectively.
We now define an injective homomorphism $\psi$ from the polynomial ring $K[Y_1, \ldots, Y_n]$ to $K[X_1, \ldots, X_n]$ by \begin{equation*} \psi(Y_i) = F^{\min\{\vecC_i, \vecC_i'\}} X_i. \end{equation*} Thus, there exist polynomials $g$ and $g'$ in $K[Y_1, \ldots, Y_n]$ such that $f = \psi(g)$ and $f' = \psi(g')$. In particular, since $\vecC(g)_i = \vecC_i - \min\{\vecC_i, \vecC_i'\}$ and $\vecC(g')_i = \vecC_i' - \min\{\vecC_i, \vecC_i'\}$, our assumptions on $u$ and $v$ imply that $\vecC(g)_u = \vecC(g')_u = \vecC(g)_v = 0$.
Likewise, we define $y_i = F^{\min\{\vecC_i, \vecC_i'\}} x_i$ so that the elements $y_i \in L$ satisfy the polynomials $g$ and $g'$. Thus, Lemma~\ref{l:poly-insep-degree} shows that $g$ is separable in the variable $Y_v$, and so if $S$ denotes the set $\supp \vecC \cup \supp \vecC'$, then $K(y_S)$ is a separable extension of $K(y_{S \setminus \{v \}})$. Likewise, $g'$ is separable in the variable $Y_u$ and doesn't use the variable $Y_v$, and so $K(y_{S \setminus \{v\}})$ is a separable extension of $K(y_{S \setminus \{v, u\}})$. Since the composition of separable extensions is separable, $y_v$ is separable over $K(y_{S \setminus \{v,u\}})$~\cite{lang}*{Thm.~V.4.5}.
Since algebraic extensions have transcendence degree $0$, then the field $K(y_{S \setminus \{v, u\}})$ has the same transcendence degree over $K$ as $K(y_S)$ does, and that transcendence degree is $\lvert S \rvert - 2$, because we assumed that $\rank(S) = \lvert S \rvert - 2$. In addition, we have containments $K(y_{S \setminus \{v,u\}}) \subset K(y_{S \setminus\{u\}}) \subset K(y_{S})$, so that $K(y_{S \setminus \{u\}})$ also has transcendence degree $\lvert S \rvert -2$, and therefore there exists a unique (up to scaling) polynomial relation $g'' \in K[Y_{S \setminus \{u\}}]$ among the elements $y_i$ for $i \in S \setminus \{u\}$. Since $y_v$ is finite and separable over $K(y_{S \setminus \{u\}})$, $\vecC(g'')_v = 0$ by Lemma~\ref{l:poly-insep-degree}.
We claim that $\vecC''= \vecC(\psi(g''))$ satisfies the desired conclusions of the axiom. First, \begin{equation*} \vecC_v'' = \vecC(g'')_v + \min\{\vecC_v, \vecC_v'\} = 0 + \min\{\vecC_v, \infty\} = \vecC_v, \end{equation*} as desired. Similarly, \begin{equation*} \vecC''_i = \vecC (g'')_i + \min \{\vecC_i, \vecC_i'\} \geq \min \{\vecC_i, \vecC_i'\}, \end{equation*} and, finally, $\vecC''_u = \infty$ because $g''$ was chosen to be a polynomial in the variables $Y_{S \setminus \{u\}}$. \end{proof}
\begin{rmk}\label{r:computing} The valuated circuits defined in Proposition~\ref{p:circuit-axioms} are effectively computable from a suitable description of $L$ and the $x_i$. More precisely, suppose $K$ is a finitely generated extension of $\mathbb F_p$ and $L$ is given as the fraction field of $K[x_1,\ldots, x_n]/I$ for a prime ideal~$I$. Then $I$ can be represented in computer algebra software, and the elimination ideals $I \cap K[x_S]$ can be computed for any subset $S \subset E$ using Gr\"obner basis methods. The circuits of the algebraic matroid are the minimal subsets $C$ for which $I \cap K[x_C]$ is not the zero ideal, in which case the elimination ideal will be principal, generated by the circuit polynomial~$f_C$. By computing all of these elimination ideals, we can determine the circuits of the algebraic matroid, and from the corresponding generators, we get the valuated circuits by the formula~\eqref{eq:valuated-circuits}. \end{rmk}
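As a toy instance of this computation, the following \texttt{sympy} fragment recovers the circuit polynomial of the elements $x_1 = t^2$, $x_2 = t^3$ by eliminating the parameter with a lexicographic Gr\"obner basis. This is a sketch over $\QQ$ rather than the characteristic-$p$ setting of the remark (where one would pass \texttt{modulus=p} to the same routine), and it presents the ideal $I$ parametrically rather than by generators.

```python
from sympy import symbols, groebner

t, X1, X2 = symbols('t X1 X2')

# Present the field generated by x1 = t^2, x2 = t^3 via the parametrizing
# ideal, then eliminate t: in lex order with t first, the basis elements
# free of t generate the elimination ideal I ∩ K[X1, X2].
G = groebner([X1 - t**2, X2 - t**3], t, X1, X2, order='lex')
circuit_polys = [g for g in G.exprs if t not in g.free_symbols]
print(circuit_polys)  # the principal generator, X1^3 - X2^2 up to sign
```

Here the elimination ideal is principal, and its generator is the circuit polynomial of the circuit $\{1,2\}$, as described in the remark.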
\begin{ex}\label{ex:toric} One case where the connection between the Lindstr\"om valuated matroid and linear algebraic valuated matroids is most transparent is when the variables $x_i$ are monomials. This example is given in~\cite{bollen-draisma-pendavingh}*{Thm.~45}, but we discuss it here in terms of our description of the valuated circuits.
We let $A$ be any $d \times n$ integer matrix, and then we take $L = K(t_1, \ldots, t_d)$ for any field $K$ of characteristic~$p$, and we let $x_i$ be the monomial $t_1^{A_{1i}} \cdots t_d^{A_{di}}$, whose exponents are the $i$th column of $A$. Then the algebraic matroid of $x_1, \ldots, x_n$ is the same as the linear matroid of the vector configuration formed by taking the columns of~$A$. Moreover, we claim that the Lindstr\"om valuated matroid is the same as the valuated matroid of the same vector configuration with respect to the $p$-adic valuation on~$\mathbb Q$.
To see this, we look at the valuated circuits of both valuated matroids. A circuit of the linear matroid is determined by an $n \times 1$ vector $\bfu$ with minimal support such that $A \bfu = \mathbf 0$. The circuit is the support of the vector $\bfu$, and the valuated circuit is the entry-wise $p$-adic valuation of~$\bfu$. The support of~$\bfu$ is also a circuit of $x_1, \ldots, x_n$ with circuit polynomial \begin{equation*} f = X_1^{\bfu^{(+)}_1} \cdots X_n^{\bfu^{(+)}_n} - X_1^{\bfu^{(-)}_1} \cdots X_n^{\bfu^{(-)}_n} \end{equation*} where \begin{equation*} \bfu^{(+)}_i = \max\{0, \bfu_i\} \qquad \bfu^{(-)}_i = -\min\{0, \bfu_i\} \end{equation*} so that $\bfu = \bfu^{(+)} - \bfu^{(-)}$. Then, since one of $\val_p(\bfu_i^{(-)})$ and $\val_p(\bfu_i^{(+)})$ equals $\val_p(\bfu_i)$ and the other is infinite, $\vecC(f)$ is the same as the entry-wise $p$-adic valuation of $\bfu$, which is the valuated circuit of the linear matroid. Thus, the valuated circuits of the linear and algebraic matroids are the same. \end{ex}
\begin{prop}\label{p:valuation-circuits} The Lindstr\"om valuated matroid given by the circuits in~\eqref{eq:valuated-circuits} agrees with the valuation~\eqref{eq:valuation} given in the introduction. \end{prop}
\begin{proof} The essential relation between the valuation and the valuated circuits is that if $B$ is a basis, $u \in B$, $v \in E \setminus B$, and $\vecC$ is a valuated circuit whose support is contained in $B \cup \{v\}$, then: \begin{equation}\label{eq:valuated-basis-circuit} \nu(B) + \vecC_u = \nu(B \setminus \{u\} \cup \{v\}) + \vecC_v \end{equation} This relation is used at the beginning of \cite{murota-tamura}*{Sec. 3.1} to define the valuated circuits in terms of the valuation, and in the other direction by equation~(10) of~\cite{murota-tamura}. In~\eqref{eq:valuated-basis-circuit}, we adopt the convention that $\nu(B \setminus \{u\} \cup \{v\})$ is $\infty$ if $B \setminus \{u\} \cup \{v \}$ is not a basis.
The only quantities in \eqref{eq:valuated-basis-circuit} which can be infinite are $\vecC_u$ and $\nu(B \setminus \{u\} \cup \{v\})$, because if $\vecC_v$ were infinite, then $\supp \vecC$ would be contained in $B$, which contradicts $B$ being a basis. However, $B \setminus \{u\} \cup \{v\}$ is not a basis if and only if the support of $\vecC$ is contained in $B \setminus \{u \} \cup \{v\}$, which is true if and only if $\vecC_u = \infty$. Therefore, the left hand side of~\eqref{eq:valuated-basis-circuit} is infinite if and only if the right hand side is, so for the rest of the proof, we can assume that all of the terms of~\eqref{eq:valuated-basis-circuit} are finite.
By the multiplicativity of inseparable degrees~\cite{lang}*{Cor.~V.6.4}, we have \begin{align*} \nu(B) &= \log_p [L : K(x_B)]_i \\ &= \log_p [L : K(x_{B \cup \{v\}})]_i + \log_p [K(x_{B \cup \{v\}}) : K(x_B)]_i \\ &= \log_p [L : K(x_{B \cup \{v\}})]_i + {\vecC_v}, \end{align*} by Lemma~\ref{l:poly-insep-degree}. Similarly, we also have \begin{align*} \nu(B \setminus \{u\} \cup \{v\}) &= \log_p [L : K(x_{B \setminus \{u\} \cup \{v\}})]_i \\ &= \log_p [L : K(x_{B \cup \{v\}})]_i + \log_p [K(x_{B \cup \{v\}}) : K(x_{B \setminus \{u\} \cup \{v\}})]_i \\ &= \log_p [L : K(x_{B \cup \{v\}})]_i + \vecC_u, \end{align*} again, using Lemma~\ref{l:poly-insep-degree} for the last step. Therefore, \begin{equation*} \nu(B) - \vecC_v = \log_p [L : K(x_{B \cup \{v\}})]_i = \nu(B \setminus \{u\} \cup \{v \}) - \vecC_u, \end{equation*} which is just a rearrangement of the desired equation \eqref{eq:valuated-basis-circuit}. \end{proof}
Thus, we've proved the first part of Theorem~\ref{t:agree}, namely that the function~$\nu$ given in~\eqref{eq:valuation} defines a valuation on the algebraic matroid $M$. In the next section, we turn to the second part of Theorem~\ref{t:agree} and show that this valuation is compatible with the matroid flock studied in~\cite{bollen-draisma-pendavingh}.
\section{Matroid flocks}
We now show that the matroid flock defined by the valuated matroid from the previous section is the same as the matroid flock defined from the extension $L \supset K$ in~\cite{bollen-draisma-pendavingh}. A \defi{matroid flock} is a function~$M$ which maps each vector $\alpha \in \ZZ^n$ to a matroid $M_\alpha$ on the set $E$, such that: \begin{enumerate} \item $M_\alpha \slash i = M_{\alpha + e_i} \backslash i$ for all $\alpha \in \ZZ^n$ and $i \in E$, \item $M_\alpha = M_{\alpha + \onevector}$ for all $\alpha \in \ZZ^n$. \end{enumerate} In the first axiom, the matroids $M_\alpha \slash i$ and $M_{\alpha + e_i} \backslash i$ are the contraction and deletion of the respective matroids with respect to the single element~$i$.
To any valuated matroid $M$, the associated matroid flock, which we also denote by $M$, is defined by letting $M_\alpha$ be the matroid whose bases consist of those bases of $M$ such that $\mathbf e_B \cdot \alpha - \nu(B) = g(\alpha)$, where $\mathbf e_B$ is the indicator vector with entry $(\mathbf e_B)_i = 1$ for $i \in B$ and $(\mathbf e_B)_i = 0$ otherwise, and where \begin{equation}\label{eq:def-g} g(\alpha) = \max\{ \mathbf e_B \cdot \alpha - \nu(B) \mid B \mbox{ is a basis of $M$}\}. \end{equation} Moreover, any matroid flock comes from a valuated matroid in this way by Theorem~7 in~\cite{bollen-draisma-pendavingh}.
On the other hand, \cite{bollen-draisma-pendavingh} also associates a matroid flock directly to the extension $L \supset K$ and the elements $x_1, \ldots, x_n$. Their construction is in terms of algebraic varieties and the tangent spaces at sufficiently general points. Here, we recast their definition using the language of field theory and derivations. Define $\tilde L$ to be the perfect closure of $L$, which is equal to the union $\bigcup_{k \geq 0} L(x^{1/p^k}_1, \ldots, x_n^{1/p^k})$ of the infinite tower of purely inseparable extensions of~$L$. For a vector $\alpha \in \ZZ^n$, we define $F^{-\alpha} x_E$ to be the vector in $\tilde L^n$ with $(F^{-\alpha} x_E)_i = F^{-\alpha_i} x_i$, and $K(F^{-\alpha} x_E)$ to be the field generated by these elements. Recall from field theory, e.g.\ \cite{lang}*{Sec. XIX.3}, that the vector space of differentials $\Omega_{K(F^{-\alpha}x_E)/ K}$ is defined algebraically over $K(F^{-\alpha}x_E)$, generated by the differentials $d (F^{-\alpha_i} x_i)$ as $i$ ranges over the set~$E$. We define $N_\alpha$ to be the matroid on $E$ of the configuration of these vectors $d(F^{-\alpha_i} x_i)$ in $\Omega_{K(F^{-\alpha}x_E)/K}$, and then the function $N$ which sends $\alpha$ to $N_\alpha$ is a matroid flock~\cite{bollen-draisma-pendavingh}*{Thm.~34}.
\begin{proof}[Proof of Theorem~\ref{t:agree}] The function $\nu$ is a valuation on $M$ by Propositions~\ref{p:circuit-axioms} and~\ref{p:valuation-circuits}, so it only remains to show that the matroid flock associated to this valuation coincides with the matroid flock $N$ defined above. Let $\alpha$ be a vector in $\ZZ^n$. Since both $M$ and $N$ are matroid flocks, they are invariant under shifting $\alpha$ by the vector $\onevector$, as in the second axiom of a matroid flock. Therefore, we can shift $\alpha$ by a multiple of $\onevector$ such that all entries of $\alpha$ are non-negative and it suffices to show $M_\alpha = N_\alpha$ in this case.
Now let $B$ be a basis of $M$ and we want to show that the differentials $d(F^{-\alpha_i}x_i)$, for $i \in B$, form a basis for $\Omega_{K(F^{-\alpha} x_E) / K}$ if and only if ${\mathbf e_B \cdot \alpha} - \nu(B)$ equals $g(\alpha)$, as defined in~\eqref{eq:def-g}. Since the field $K(F^{-\alpha} x_B)$ is generated by the algebraically independent elements $F^{-\alpha_i} x_i$ as $i$ ranges over the elements of $B$, the differentials $d(F^{-\alpha_i}x_i)$ do form a basis for $\Omega_{K(F^{-\alpha} x_B)/K}$. Moreover, the natural map $\Omega_{K(F^{-\alpha} x_B)/K} \rightarrow \Omega_{K(F^{-\alpha} x_E)/K}$ is an isomorphism if and only if $K(F^{-\alpha} x_E)$ is a separable extension of $K(F^{-\alpha} x_B)$~\cite{lang}*{Prop.~VIII.5.2}, i.e.\ if and only if its inseparable degree is~$1$. Therefore, $B$ is a basis for $N_\alpha$ if and only if $[K(F^{-\alpha} x_E) : K(F^{-\alpha} x_B)]_i = 1$.
We list the inseparable degrees: \begin{align*} [L : K(x_B)]_i &= p^{\nu(B)} \\ [K(F^{-\alpha} x_B) : K(x_B)]_i &= p^{\mathbf e_B \cdot \alpha} \\ [K(F^{-\alpha} x_E) : K(x_E)]_i &= p^{m(\alpha)} \\ [L : K(x_E)]_i &= p^\ell \end{align*} The first of these equalities is by definition, the second is because $K(F^{-\alpha} x_B)$ is the purely inseparable extension of $K(x_B)$ defined by adjoining a $p^{\alpha_i}$-root of $x_i$ for each~$i$, and the third and fourth we take to be the definitions of the integers $m(\alpha)$ and~$\ell$, respectively. By the multiplicativity of inseparable degrees, and taking logarithms, we have: \begin{align}\label{eq:insep-degree} \log_p [K(F^{-\alpha} x_E) : K(F^{-\alpha} x_B)]_i &= \log_p [K(F^{-\alpha} x_E) : K(x_B)]_i - \mathbf e_B \cdot \alpha \notag \\ &= m(\alpha) + \log_p [K(x_E) : K(x_B)]_i - \mathbf e_B \cdot \alpha \notag \\ &= m(\alpha) - \ell + \nu(B) - \mathbf e_B \cdot \alpha \end{align} As noted above, $B$ is a basis of $N_\alpha$ if and only if the left hand side of \eqref{eq:insep-degree} is zero, and $B$ is a basis of $M_{\alpha}$ if and only if $\mathbf e_B \cdot \alpha - \nu(B) = g(\alpha)$. Thus, it suffices to show that $m(\alpha) - \ell$ equals $g(\alpha)$.
Since \eqref{eq:insep-degree} is always non-negative, we have the inequality \begin{equation*} m(\alpha) - \ell \geq {\mathbf e_B \cdot \alpha} - \nu(B) \end{equation*} for all bases $B$, and thus $m(\alpha) - \ell \geq g(\alpha)$. On the other hand, if $m(\alpha) - \ell > g(\alpha)$, then \eqref{eq:insep-degree} will always be positive, so no subset of the differentials $d(F^{-\alpha_i}x_i)$ will form a basis for $\Omega_{K(F^{-\alpha} x_E)/K}$. However, this would contradict the fact that the complete set of differentials $d(F^{-\alpha_i}x_i)$ for all $i \in E$ forms a generating set for $\Omega_{K(F^{-\alpha} x_E)/K}$, and therefore, some subset forms a basis. Thus, $m(\alpha)$ must equal $g(\alpha) + \ell$, which completes the proof that the two matroid flocks coincide. \end{proof}
\begin{rmk}\label{r:equivalence} By \cite{bollen-draisma-pendavingh}*{Thm.~7}, any matroid flock, such as that of an algebraic extension, comes from a valuated matroid, but the valuation is not unique. In particular, two valuations $\nu$ and $\nu'$ are called \defi{equivalent} if they differ by a shift $\nu'(B) = \nu(B) + \lambda$ for some constant $\lambda$~\cite{dress-wenzel}*{Def.~1.1}, and equivalent valuations define the same matroid flock. However, among all equivalent valuations giving the matroid flock of an algebraic extension, the formula~\eqref{eq:valuation} nevertheless gives a distinguished valuation. For example, if $L = K(x_E)$, then this distinguished valuation $\nu$ is the unique representative such that the minimum $\min_B \nu(B)$ over all bases $B$ is $0$. If $L$ is a proper extension of $K(x_E)$, then the valuation $\nu$ records the inseparable degree $[L: K(x_E)]_i$, which was denoted $p^\ell$ in the proof of Theorem~\ref{t:agree}. \end{rmk}
\begin{ex}\label{ex:non-fano} We look at the matroid flock and Lindstr\"om valuation of an algebraic realization of the non-Fano matroid~$M$ over $K = \mathbb F_2$, which is a special case of the construction in Example~\ref{ex:toric}. The realization is given by the elements \begin{align*} x_1 &= t_1 & x_3 &= t_3 & x_5 &= t_1 t_3 & x_7 &= t_1t_2t_3 \\ x_2 &= t_2 & x_4 &= t_1 t_2 & x_6 &= t_2 t_3 \end{align*} in the field $L = K(t_1, t_2, t_3)$. The differentials of these elements in $\Omega_{L/K}$ are: \begin{align*} dx_1 &= dt_1 & dx_4 &= t_2 \,dt_1 + t_1\, dt_2 \\ dx_2 &= dt_2 & dx_5 &= t_3\, dt_1 + t_1 \,dt_3 \\ dx_3 &= dt_3 & dx_6 &= t_3 \, dt_2 + t_2 \,dt_3 \\ && dx_7 &= t_2 t_3 \,dt_1 + t_1 t_3 \,dt_2 + t_1 t_2 \,dt_3. \end{align*} These vectors are projectively equivalent to the Fano configuration, and, therefore, the matroid $M_{(0,0,0,0,0,0,0)}$ of the matroid flock is the Fano matroid. In particular, we have the linear relation $t_3 \, dx_4 + t_2 \,dx_5 + t_1 \, dx_6 = 0$, among the differentials, even though $\{4, 5, 6\}$ is a basis of the algebraic matroid.
On the other hand, if we let $\alpha = (-1, -1, -1, 0, 0, 0, -1)$, then $K(F^{-\alpha}x_E)$ is the subfield $K(x_4, x_5, x_6) \subset L$, because \begin{align*} Fx_1 = x_1^2 &= x_4 x_5 x_6^{-1} & Fx_3 = x_3^2 &= x_4^{-1} x_5 x_6 \\ Fx_2 = x_2^2 &= x_4 x_5^{-1} x_6 & Fx_7 = x_7^2 &= x_4 x_5 x_6 \end{align*} Therefore, $\{4, 5, 6\}$ is a basis for the matroid $M_\alpha$. Using the basis $dx_4, dx_5, dx_6$ for $\Omega_{K(F^{-\alpha}x_E)/K}$, one can check that the vectors $d(Fx_i)$, for $i = 1, 2, 3, 7$, are all parallel to each other, and thus no basis of $M$ which contains at least two of these indices is a basis for $M_{\alpha}$.
We claim that the Lindstr\"om valuation~$\nu$ of the field extension $L$ of $K$ takes the value $0$ for every basis of $M$ except that $\nu(\{4, 5, 6\}) = 1$. This can be seen directly from the definition~\eqref{eq:valuation} because one can check that every basis other than $\{4, 5, 6\}$ generates the field $L$, and $L \supset K(x_4, x_5, x_6)$ is a degree-$2$, purely inseparable extension.
Alternatively, the fact that the vector configuration of the differentials $dx_i$ in $\Omega_{L/K}$ is the Fano matroid means that its bases consist of all bases of $M$ except for $\{4, 5, 6\}$, and so the bases of the Fano matroid have the same valuation, except for $\{4,5,6\}$, which has larger valuation. As in Remark~\ref{r:equivalence}, the matroid flock only determines the valuation up to equivalence, so we can take $\nu(B) = 0$ for $B$ a basis of the Fano matroid. Then, the computation of $M_{\alpha}$ above shows that both $\{4, 5, 6\}$ and $\{3, 5, 6\}$ are bases of $M_\alpha$, and thus, \begin{equation*} \mathbf e_{\{4, 5, 6\}} \cdot \alpha - \nu(\{4, 5, 6\}) = \mathbf e_{\{3,5,6\}} \cdot \alpha - \nu(\{3, 5, 6\}) = -1 - 0 = -1, \end{equation*} and so we can solve for $\nu(\{4, 5, 6\}) = 1$.
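This bookkeeping can be confirmed mechanically: maximizing $\mathbf e_B \cdot \alpha - \nu(B)$ over the bases recovers exactly the bases of $M_\alpha$ found above. The following self-contained script (our own check, using $0$-based indices so that $\{4,5,6\}$ becomes $(3,4,5)$) is one way to do so.

```python
from itertools import combinations

# Columns of the matrix A realizing the non-Fano matroid; the bases of M
# are the 3-subsets of columns with nonzero determinant over the rationals.
A = [[1, 0, 0, 1, 1, 0, 1],
     [0, 1, 0, 1, 0, 1, 1],
     [0, 0, 1, 0, 1, 1, 1]]

def det3(js):
    m = [[A[r][j] for j in js] for r in range(3)]
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

bases = [B for B in combinations(range(7), 3) if det3(B) != 0]
# Lindstrom valuation from the text: nu = 0 on every basis except {4,5,6}
nu = {B: (1 if B == (3, 4, 5) else 0) for B in bases}

alpha = [-1, -1, -1, 0, 0, 0, -1]
g_alpha = max(sum(alpha[i] for i in B) - nu[B] for B in bases)
M_alpha = [B for B in bases if sum(alpha[i] for i in B) - nu[B] == g_alpha]

print(g_alpha)   # -> -1
print(M_alpha)   # the bases of M meeting {4,5,6} in at least two elements
```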
Finally, a third way of computing the Lindstr\"om valuation is to use Example~\ref{ex:toric}, which shows that the valuation is the same as that of the vector configuration given by the columns of the matrix \begin{equation*} A = \begin{pmatrix} 1 & 0 & 0 & 1 & 1 & 0 & 1 \\ 0 & 1 & 0 & 1 & 0 & 1 & 1 \\ 0 & 0 & 1 & 0 & 1 & 1 & 1 \end{pmatrix} \end{equation*} over the field of rational numbers $\QQ$ with the $2$-adic valuation. The valuation of a vector configuration is given by the $2$-adic valuations of the determinants of the corresponding submatrices. The submatrices of $A$ corresponding to bases of~$M$ all have determinant $\pm 1$, except for the one with columns $\{4, 5, 6\}$, whose determinant is $-2$, which has $2$-adic valuation equal to $1$. \end{ex}
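The determinant computation in this third approach is easy to carry out by machine; the following sketch (again with $0$-based column indices) tabulates the $2$-adic valuations over all bases.

```python
from itertools import combinations

A = [[1, 0, 0, 1, 1, 0, 1],
     [0, 1, 0, 1, 0, 1, 1],
     [0, 0, 1, 0, 1, 1, 1]]

def det3(js):
    # Determinant of the 3x3 submatrix of A on the columns js
    m = [[A[r][j] for j in js] for r in range(3)]
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def val2(m):
    # 2-adic valuation of a nonzero integer
    m, v = abs(m), 0
    while m % 2 == 0:
        m, v = m // 2, v + 1
    return v

vals = {B: val2(det3(B))
        for B in combinations(range(7), 3) if det3(B) != 0}
print(vals[(3, 4, 5)])  # -> 1, the only basis with nonzero valuation
```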
\section{Cocircuits and minors}\label{sec:cocircuits-minors}
In this section, we consider further properties of the Lindstr\"om valuated matroid which can be understood in terms of the field theory of the extension. In particular, we give constructions of the valuated cocircuits and minors of the Lindstr\"om valuated matroid.
First, a \defi{hyperplane} of the algebraic matroid of $L$ is a maximal subset $H$ of~$E$ such that $L$ has transcendence degree $1$ over $K(x_H)$. For any hyperplane~$H$, we define a vector in $(\ZZ \cup \{\infty\})^n$: \begin{equation*} \vecCco(H)_i = \begin{cases} \infty &\mbox{if } i \in H \\ \log_p [L : K(x_{H \cup \{i\}})]_i & \mbox{if } i \notin H \end{cases} \end{equation*} The expression in the second case is an integer by \cite{lang}*{Cor.~V.6.2} and finite because, by the assumption that $H$ is a hyperplane, $L$ must be an algebraic extension of $K(x_{H \cup \{i\}})$, for $i \notin H$.
\begin{prop} The collection of vectors: \begin{equation*} \{\vecCco(H) + \lambda \onevector \mid H \mbox{ is a hyperplane of the algebraic matroid of } L, \lambda \in \ZZ \} \end{equation*} define the cocircuits of the Lindstr\"om valuation of the field $L$ and the elements $x_1, \ldots, x_n$. \end{prop}
\begin{proof} By definition, the cocircuits of a valuated matroid~$M$ are the circuits of the dual $M^*$, and the dual valuation is defined by $\nu^*(B^*) = \nu(E \setminus B^*)$ for any subset $B^* \subset E$ such that $E \setminus B^*$ is a basis of~$M$. Suppose $B^*$ and $B^* \setminus \{u\} \cup \{v\}$ are bases of $M^*$, and $\vecCco(H)$ is a cocircuit contained in $B^* \cup \{v\}$. Then, as in the proof of Proposition~\ref{p:valuation-circuits}, we have to show the relation: \begin{equation}\label{eq:cocircuit} \nu^*(B^*) + \vecCco(H)_u = \nu^*(B^* \setminus \{u\} \cup \{v\}) + \vecCco(H)_v \end{equation} We write $B$ for the complement $E \setminus B^*$, which is a basis of~$M$. We can then expand these expressions using their definitions and multiplicativity of the inseparable degree: \begin{align*} \nu^*(B^*) &= \log_p [L : K(x_{H \cup \{v\}})]_i + \log_p [K(x_{H \cup \{v\}}) : K(x_{B})]_i \\ \vecCco(H)_u &= \log_p [L : K(x_{H \cup \{u\}})]_i \\ \nu^*(B^* \setminus \{u\} \cup \{v\}) &= \log_p [L : K(x_{H \cup \{u\}})]_i \\ &\qquad\qquad + \log_p [K(x_{H \cup \{u\}}) :
K(x_{B \setminus \{v\} \cup \{u\}})]_i \\ \vecCco(H)_v &= \log_p [L : K(x_{H \cup \{v\}})]_i \end{align*} Therefore, to show the relation \eqref{eq:cocircuit}, it is sufficient to show that \begin{equation}\label{eq:cocircuit-rewritten} [K(x_{H \cup \{v\}}) : K(x_{B})]_i = [K(x_{H \cup \{u\}}) : K(x_{B \setminus \{v\} \cup \{u\}})]_i \end{equation}
We claim that \eqref{eq:cocircuit-rewritten} is true because both sides are equal to the inseparable degree $[K(x_{H}) : K(x_{B \setminus \{v\}})]_i$. Indeed, the extensions on either side of \eqref{eq:cocircuit-rewritten} are given by adjoining to the extension $K(x_H) \supset K(x_{B \setminus \{v\}})$ a single transcendental element, namely, $x_v$ on the left, and $x_u$ on the right. Such a transcendental element has no relations with the other elements of $x_H$ and so doesn't affect the inseparable degree. \end{proof}
Minors of a valuated matroid are defined in~\cite{dress-wenzel}*{Prop. 1.2 and~1.3}. Note that the definition of the valuation on the minor depends on an auxiliary choice of a set of vectors, and the valuation is only defined up to equivalence.
\begin{prop} Let $F$ and $G$ be disjoint subsets of $E$. Then the minor $M \backslash G / F$, denoting the deletion of $G$ and the contraction of~$F$, is equivalent to the Lindstr\"om valuation of the extension $K(x_{E \setminus G}) \supset K(x_{F})$ with the elements $x_i$ for $i \in E \setminus (F \cup G)$. \end{prop}
\begin{proof} The valuated circuits of the deletion $M \backslash G$ are the restrictions to the indices $E \setminus G$ of those valuated circuits $\vecC$ of $M$ with $\supp \vecC \cap G = \emptyset$. Likewise, the circuits and circuit polynomials of the algebraic extension $K(x_{E \setminus G}) \supset K$ are those of $L \supset K$ whose support and variable indices, respectively, are disjoint from~$G$. Therefore, the valuated circuits of the Lindstr\"om matroid of $K(x_{E \setminus G})$ as an extension of $K$ are the same as those of the deletion~$M \backslash G$.
Dually, the valuated cocircuits of the contraction $M \backslash G / F$ are the restrictions to the indices $E \setminus (F \cup G)$ of those cocircuits $\vecCco$ of $M \backslash G$ with $\supp \vecCco \cap F = \emptyset$. The hyperplanes of the extension $K(x_{E \setminus G}) \supset K(x_F)$ are the hyperplanes of $K(x_{E \setminus G}) \supset K$ which contain $F$, and so its valuated cocircuits are exactly the valuated cocircuits of $M \backslash G$ whose support is disjoint from $F$, restricted to the indices $E \setminus ( F \cup G)$. Therefore, the Lindstr\"om valuated matroid of $K(x_{E \setminus G}) \supset K(x_F)$ is the same as the minor $M \backslash G / F$. \end{proof}
\begin{bibdiv} \begin{biblist}
\bib{bollen-draisma-pendavingh}{article}{
author = {Bollen, Guus P.},
author = {Draisma, Jan},
author = {Pendavingh, Rudi},
title = {Algebraic matroids and Frobenius flocks},
date = {2018},
journal = {Adv. Math.},
volume = {323},
pages = {688--719}, }
\bib{dress-wenzel}{article}{
author = {Dress, Andreas W. M.},
author = {Wenzel, Walter},
title = {Valuated matroids},
journal = {Adv. Math.},
volume = {93},
number = {2},
date = {1992},
pages = {214--250},
}
\bib{ingleton}{article}{
author={Ingleton, A. W.},
title={Representation of matroids},
conference={
title={Combinatorial Mathematics and its Applications (Proc. Conf.,
Oxford, 1969)}},
book={
publisher={Academic Press, London}},
date={1971},
pages={149--167}, }
\bib{kiraly-rosen-theran}{unpublished}{
author = {Kir\'aly, Franz J.},
author = {Rosen, Zvi},
author = {Theran, Louis},
title = {Algebraic matroids with graph symmetry},
year = {2013},
note = {preprint, \arxiv{1312.3777}}, }
\bib{lang}{book}{
author = {Lang, Serge},
title = {Algebra},
year = {2002},
publisher = {Springer},
series = {Graduate Texts in Mathematics},
volume = {211}, }
\bib{lindstrom}{article}{
author = {Lindstr\"om, Bernt},
title = {On the algebraic characteristic set for a class of matroids},
journal = {Proc. Amer. Math. Soc.},
volume = {95},
number = {1},
pages = {147--151},
year = {1985}, }
\bib{murota-tamura}{article}{
author = {Murota, Kazuo},
author = {Tamura, Akihisa},
title = {On circuit valuations of matroids},
journal = {Adv. Appl. Math.},
volume = {26},
pages = {192--225},
year = {2001},
}
\end{biblist} \end{bibdiv}
\end{document}
\begin{document}
\title[Configuration space of four points in the torus] {The ${\rm PSL}(2,{{\mathbb R}})^2$-configuration space of four points in the torus $S^1\times S^1$}
\author[I.D. Platis ]{Ioannis D. Platis } \email{jplatis@math.uoc.gr} \address{Department of Mathematics and Applied Mathematics\\ University of Crete\\ University Campus\\ GR 700 13 Voutes\\ Heraklion Crete\\Greece}
\begin{abstract} The torus ${{\mathbb T}}=S^1\times S^1$ appears as the ideal boundary $\partial_\infty AdS^3$ of the three-dimensional anti-de Sitter space $AdS^3$, as well as the Furstenberg boundary ${{\mathbb F}}(X)$ of the rank-2 symmetric space $X={\rm SO}_0(2,2)/{\rm SO}(2)\times{\rm SO}(2)$. We introduce cross-ratios on the torus in order to parametrise the ${\rm PSL}(2,{{\mathbb R}})^2$-configuration space of quadruples of pairwise distinct points in ${{\mathbb T}}$, and we define a natural M\"obius structure on ${{\mathbb T}}$, and therefore on ${{\mathbb F}}(X)$ and $\partial_\infty AdS^3$ as well.
\end{abstract}
\date{{\today}\\ {\it 2010 Mathematics Subject Classifications.} 57M50, 51F99. \\ \it{Key words. Configuration space, torus, M\"obius structures.} }
\maketitle
\section{Introduction} Let $S$ be a topological space and denote by ${{\mathcal C}}_4={{\mathcal C}}_4(S)$ the space of quadruples of pairwise distinct points of $S$. Let $G$ be a group of homeomorphisms of $S$ acting diagonally on ${{\mathcal C}}_4$ from the left: for each $g\in G$ and ${{\mathfrak p}}=(p_1,p_2,p_3,p_4)\in{{\mathcal C}}_4$, $$ (g,{{\mathfrak p}})\mapsto g({{\mathfrak p}})=(g(p_1),g(p_2),g(p_3),g(p_4)). $$ The $G$-{\it configuration space of quadruples of pairwise distinct points in $S$} is the quotient space ${{\mathcal F}}_4={{\mathcal F}}_4(S)$ of ${{\mathcal C}}_4$ under the action of $G$. In this general setting, it is not obvious what kind of object ${{\mathcal F}}_4$ is; however, there are tractable cases. If for instance $S$ is a smooth manifold of dimension $s$ and $G$ is a Lie subgroup of diffeomorphisms of $S$ of dimension $g$, then ${{\mathcal F}}_4$ carries the structure of a smooth manifold of dimension $4s-g$ if the diagonal action is proper and free.
The particular cases where $S$ is a sphere bounding a hyperbolic space and $G$ is the set ${{\mathcal M}}(S)$ of M\"obius transformations acting on $S$ are prototypical; in some of these cases there exist neat parametrisations of the configuration spaces by cross-ratios. The most illustrative (and simplest) example of all is that of the ${{\mathcal M}}(S^1)$-configuration space ${{\mathcal F}}_4(S^1)$ of quadruples of pairwise distinct points in the unit circle $S^1$. What follows is classical and well known, see for instance \cite{Be}, but we include it here both for clarity as well as for setting up the notation we shall use throughout the paper.
We identify the unit circle $S^1$ with $\overline{{{\mathbb R}}}={{\mathbb R}}\cup\{\infty\}$. Recall that if ${{\mathfrak x}}=(x_1,x_2,x_3,x_4)$ $\in{{\mathcal C}}_4(S^1)$ then its real cross-ratio is defined by $$ {{\rm X}}({{\mathfrak x}})=[x_1,x_2,x_3,x_4]=\frac{(x_4-x_2)(x_3-x_1)}{(x_4-x_1)(x_3-x_2)}, $$ where we agree that if one of the points is $\infty$ then $\infty:\infty=1$. The set of M\"obius transformations ${{\mathcal M}}(S^1)$ of $S^1$ comprises maps $g:S^1\to S^1$ of the form $$ g(x)=\frac{ax+b}{cx+d},\quad x\in\overline{{{\mathbb R}}}, $$ where the matrix $$ A=\left(\begin{matrix} a&b\\ c&d\end{matrix}\right) $$ is in ${\rm PSL}(2,{{\mathbb R}})={\rm SL}(2,{{\mathbb R}})/\{\pm I\}$. The cross-ratio ${\rm X}$ is invariant under the diagonal action of ${{\mathcal M}}(S^1)$ on ${{\mathcal C}}_4={{\mathcal C}}_4(S^1)$: if $g\in{{\mathcal M}}(S^1)$, then $$ {{\rm X}}(g({{\mathfrak x}}))=[g(x_1),g(x_2),g(x_3),g(x_4)]=[x_1,x_2,x_3,x_4]={{\rm X}}({{\mathfrak x}}), $$ for every ${{\mathfrak x}}\in{{\mathcal C}}_4$. Also, ${\rm X}$ takes values in ${{\mathbb R}}\setminus\{0,1\}$ and for each quadruple $(x_1,x_2,x_3,x_4)$ satisfies the standard symmetry properties: \begin{eqnarray*} && ({\rm S1})\quad {{\rm X}}(x_1,x_2,x_3,x_4)={{\rm X}}(x_2,x_1,x_4,x_3)={{\rm X}}(x_3,x_4,x_1,x_2)= {{\rm X}}(x_4,x_3,x_2,x_1),\\ && ({\rm S2})\quad{{\rm X}}(x_1,x_2,x_3,x_4)\cdot {{\rm X}}(x_1,x_2,x_4,x_3)=1,\\
&&
({\rm S3})\quad
{{\rm X}}(x_1,x_2,x_3,x_4)\cdot {{\rm X}}(x_1,x_4,x_2,x_3)\cdot {{\rm X}}(x_1,x_3,x_4,x_2)=-1.
\end{eqnarray*} Hence all 24 real cross-ratios corresponding to a given quadruple ${{\mathfrak x}}$ are functions of ${{\rm X}}({{\mathfrak x}})=[x_1,x_2,x_3,x_4]$. We let ${{\mathcal M}}(S^1)$ act diagonally on ${{\mathcal C}}_4(S^1)$ and let ${{\mathcal F}}_4={{\mathcal F}}_4(S^1)$ be the ${{\mathcal M}}(S^1)$-configuration space of quadruples of pairwise distinct points in $S^1$. From the invariance of the cross-ratio we have that the map $$ {{\mathcal G}}:{{\mathcal F}}_4(S^1)\ni[{{\mathfrak x}}]\mapsto {{\rm X}}({{\mathfrak x}})\in{{\mathbb R}}\setminus\{0,1\}, $$ is well-defined. Also, ${{\mathcal G}}$ is surjective; to see this, recall that the action of ${{\mathcal M}}(S^1)$ on $S^1$ is triply-transitive: if $(x_1,x_2,x_3)$ is a triple of pairwise distinct points in $S^1$, then there is a unique $f\in{{\mathcal M}}(S^1)$ such that $$ f(x_1)=0,\quad f(x_2)=\infty,\quad f(x_3)=1. $$ Recall at this point that $f$ is actually given in terms of cross-ratios: $$ [0,\infty,f(x),1]=[x_1,x_2,x,x_3]. $$ Hence if $x\in{{\mathbb R}}\setminus\{0,1\}$, then $[{{\mathfrak x}}]\mapsto x$, where ${{\mathfrak x}}=(0,\infty,x,1)$. Finally, ${{\mathcal G}}$ is injective: if ${{\mathfrak x}}$ and ${{\mathfrak x}}'$ are in ${{\mathcal C}}_4$ and ${{\rm X}}({{\mathfrak x}})={{\rm X}}({{\mathfrak x}}')=x$, then there exists a $g\in{{\mathcal M}}(S^1)$ such that ${{\mathfrak x}}=g({{\mathfrak x}}')$. The above discussion boils down to the well-known fact that the configuration space ${{\mathcal F}}_4={{\mathcal F}}_4(S^1)$ of quadruples of pairwise distinct points in $S^1$ is isomorphic to ${{\mathbb R}}\setminus\{0,1\}$ and therefore it inherits the structure of a one-dimensional disconnected real manifold. Moreover, the following possibilities occur for the relative position of the points $x_i$ of ${{\mathfrak x}}$ on the circle: \begin{enumerate} \item $x_1,x_2$ separate $x_3,x_4$. This happens if and only if ${{\rm X}}({{\mathfrak x}})<0$. \item $x_1,x_3$ separate $x_2,x_4$. This happens if and only if ${{\rm X}}({{\mathfrak x}})>1$. \item $x_1,x_4$ separate $x_2,x_3$. This happens if and only if $0<{{\rm X}}({{\mathfrak x}})<1$. \end{enumerate} The cases (1), (2) and (3) correspond to the three connected components of ${{\mathcal F}}_4$.
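The symmetry properties and the separation trichotomy above are easy to confirm numerically. The following sketch is our own illustration (not part of the paper): it checks (S1)--(S3), the M\"obius invariance of ${\rm X}$, and a separation case for a sample quadruple of finite points, using exact rational arithmetic; \texttt{cross\_ratio} and \texttt{mobius} are hypothetical helper names.

```python
from fractions import Fraction

def cross_ratio(x1, x2, x3, x4):
    # [x1,x2,x3,x4] = (x4-x2)(x3-x1) / ((x4-x1)(x3-x2)), finite points only
    return Fraction((x4 - x2) * (x3 - x1), (x4 - x1) * (x3 - x2))

def mobius(a, b, c, d):
    # x -> (ax+b)/(cx+d) with ad - bc != 0; assumes no image is infinity
    return lambda x: Fraction(a * x + b, c * x + d)

X = cross_ratio(1, 2, 3, 4)     # equals 4/3 > 1: x1, x3 separate x2, x4
assert X == Fraction(4, 3)

# (S1): invariance under the three listed permutations
assert X == cross_ratio(2, 1, 4, 3) == cross_ratio(3, 4, 1, 2) == cross_ratio(4, 3, 2, 1)
# (S2): swapping the last two points inverts the cross-ratio
assert X * cross_ratio(1, 2, 4, 3) == 1
# (S3): the triple product equals -1
assert X * cross_ratio(1, 4, 2, 3) * cross_ratio(1, 3, 4, 2) == -1

# Moebius invariance under g(x) = (2x+1)/(x+1)
g = mobius(2, 1, 1, 1)
assert cross_ratio(*(g(x) for x in (1, 2, 3, 4))) == X
```

Exact rationals avoid the floating-point noise that would otherwise blur the equalities being tested.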
In an analogous manner, see again \cite{Be}, the ${\rm PSL}(2,{{\mathbb C}})$-configuration space ${{\mathcal F}}_4(S^2)$ of quadruples of pairwise distinct points in the sphere $S^2$ is isomorphic to ${{\mathbb C}}\setminus\{0,1\}$ and therefore inherits the structure of a one-dimensional complex manifold. The case of the ${\rm PU}(2,1)$-configuration space ${{\mathcal F}}_4(S^3)$ of quadruples of pairwise distinct points in $S^3$ is much harder but is treated in the same spirit, see \cite{FP}: using complex cross-ratios we find that, apart from a subset of lower dimension, ${{\mathcal F}}_4(S^3)$ is isomorphic to $({{\mathbb C}}\setminus{{\mathbb R}})\times{{\mathbb C}} P^1$, a two-dimensional disconnected complex manifold. Finally, recent treatments of the cases of ${\rm PSp}(1,1)$ and ${\rm PSp}(2,1)$-configuration spaces of quadruples of pairwise distinct points in $S^3$ and $S^7$ may be found in \cite{GM} and \cite{C}, respectively.
As we have mentioned above, spheres may be viewed as boundaries of symmetric spaces of non-compact type and of rank-1, that is, of hyperbolic spaces ${{\bf H}}_{{\mathbb K}}^n$, where ${{\mathbb K}}$ can be: a) ${{\mathbb R}}$, the set of real numbers, b) ${{\mathbb C}}$, the set of complex numbers, c) $\mathbb{H}$, the set of quaternions, and d) $\mathbb{O}$, the set of octonions (in the last case $n=2$). Two problems arise naturally here: first, to describe configuration spaces of four points on products of such spheres by parametrising them with cross-ratios defined on those products; and second, to describe configuration spaces of four points in boundaries of symmetric spaces of rank $>1$, again by parametrising them using cross-ratios defined on those boundaries. These two problems are sometimes intertwined and the crucial issue here is the definition of an appropriate cross-ratio; this is directly linked to the M\"obius geometry of the spaces we wish to study, as we explain below. In this paper we deal with both problems by describing in the manner above the configuration space of four points in the torus ${{\mathbb T}}=S^1\times S^1$; the torus is the F\"urstenberg boundary of the symmetric space ${\rm SO}_0(2,2)/{\rm SO}(2)\times{\rm SO}(2)$, which is of rank-2, as well as the ideal boundary of anti-de Sitter space $AdS^3$, see Section \ref{sec:cons}.
Returning to our original general setting, suppose that the $G$-configuration space ${{\mathcal F}}_4(S)$ has a real manifold structure due to a proper and free action of $G$ on ${{\mathcal C}}_4(S)$. By taking the product $S\times S$ and the space ${{\mathcal C}}_4(S\times S)$ of quadruples of pairwise distinct points of $S\times S$, the group $G\times G$ acts diagonally as follows: for $g=(g_1,g_2)$ and ${{\mathfrak p}}=(p_1,p_2,p_3,p_4)\in{{\mathcal C}}_4(S\times S)$, $p_i=(x_i,y_i)$, $i=1,2,3,4$, $$ (g,{{\mathfrak p}})\mapsto g({{\mathfrak p}})=\left((g_1(x_1),g_2(y_1)),(g_1(x_2),g_2(y_2)),(g_1(x_3),g_2(y_3)),(g_1(x_4),g_2(y_4))\right). $$ Using elementary arguments, one deduces that the action of $G\times G$ on ${{\mathcal C}}_4(S\times S)$ is proper. Concerning the freeness of the action of $G\times G$, we observe that from the obvious injection $ {{\mathcal C}}_4(S)\times{{\mathcal C}}_4(S)\to{{\mathcal C}}_4(S\times S) $ which assigns to each $({{\mathfrak x}},{{\mathfrak y}})\in{{\mathcal C}}_4(S)\times{{\mathcal C}}_4(S)$, ${{\mathfrak x}}=(x_1,x_2,x_3,x_4)$, ${{\mathfrak y}}=(y_1,y_2,y_3,y_4)$, the quadruple ${{\mathfrak p}}=(p_1,p_2,p_3,p_4)$ where $p_i=(x_i,y_i)$, $i=1,2,3,4$, we obtain an injection $$ {{\mathcal F}}_4(S)\times{{\mathcal F}}_4(S)\ni([{{\mathfrak x}}],[{{\mathfrak y}}])\mapsto [{{\mathfrak p}}]\in{{\mathcal F}}_4(S\times S). $$ The image of this map is the subset ${{\mathcal F}}_4^\sharp(S\times S)$ of ${{\mathcal F}}_4(S\times S)$ comprising classes of quadruples ${{\mathfrak p}}$ such that both ${{\mathfrak x}}$ and ${{\mathfrak y}}$ are in ${{\mathcal C}}_4(S)$. We may straightforwardly show that ${{\mathcal F}}_4^\sharp$ comprises principal orbits, that is, orbits of the maximal dimension of the $G\times G$ action; these are orbits of quadruples with trivial isotropy groups. Therefore ${{\mathcal F}}_4^\sharp(S\times S)$ is a manifold of dimension $2n$, where $n=\dim({{\mathcal F}}_4(S))$.
If the action is free only on the quadruples corresponding to ${{\mathcal F}}^\sharp_4(S\times S)$, the orbits of the remaining points are of dimension less than $2n$. This is exactly the case we study in Section \ref{sec:config}, i.e., the configuration space ${{\mathcal F}}_4({{\mathbb T}})$ of quadruples of pairwise distinct points in the torus ${{\mathbb T}}=S^1\times S^1$. The subset of ${{\mathcal F}}_4({{\mathbb T}})$ of maximal dimension two is isomorphic to $$ {{\mathcal F}}_4^\sharp({{\mathbb T}})={{\mathcal F}}_4(S^1)\times{{\mathcal F}}_4(S^1)=({{\mathbb R}}\setminus\{0,1\})^2, $$ a disconnected subset of ${{\mathbb R}}^2$ comprising nine connected components, see Theorem \ref{thm:vec}. Also, by considering the natural involution $\iota_0:{{\mathbb T}}\to {{\mathbb T}}$ which maps each $(x,y)$ to $(y,x)$ and the group $\overline{{{\mathcal M}}({{\mathbb T}})}$ comprising M\"obius transformations of ${{\mathbb T}}$ followed by $\iota_0$, and by taking $\overline{{{\mathcal F}}^\sharp_4({{\mathbb T}})}$ to be the quotient of ${{\mathcal C}}_4^\sharp({{\mathbb T}})$ by the diagonal action of $\overline{{{\mathcal M}}({{\mathbb T}})}$, we find that it is isomorphic to a disconnected subset ${{\mathcal Q}}$ of ${{\mathbb R}}^2$ comprising three open connected components and three components with 1-dimensional boundary (Theorem \ref{thm:band}).
At this point, the goal of parametrising the configuration space by cross-ratios defined on the torus itself has not yet been achieved: Theorem \ref{thm:vec} parametrises ${{\mathcal F}}_4({{\mathbb T}})$ by assigning to each ${{\mathfrak p}}$ the pair $({{\rm X}}({{\mathfrak x}}),{{\rm X}}({{\mathfrak y}}))$ of circle cross-ratios.
To this end, for each ${{\mathfrak p}}\in{{\mathcal C}}_4^\sharp$ we define $$ {{\mathbb X}}({{\mathfrak p}})={{\rm X}}({{\mathfrak x}})\cdot {{\rm X}}({{\mathfrak y}}), $$ see Section \ref{sec:realX}, which is ${{\mathcal M}}({{\mathbb T}})$-invariant. Certain symmetries for ${{\mathbb X}}$ exist so that for each quadruple ${{\mathfrak p}}$ all 24 cross-ratios of quadruples resulting from permutations of points of ${{\mathfrak p}}$ are functions of two cross-ratios which we denote by ${{\mathbb X}}_1={{\mathbb X}}_1({{\mathfrak p}})$ and ${{\mathbb X}}_2={{\mathbb X}}_2({{\mathfrak p}})$. According to Proposition \ref{prop:fundX}, $({{\mathbb X}}_1,{{\mathbb X}}_2)$ lies in a disconnected subset ${{\mathcal P}}$ of ${{\mathbb R}}^2$ comprising six components. Three of these components are open and the remaining three have boundaries which are pieces of the parabola $$ \Delta(u,v)=u^2+v^2-2u-2v+1-2uv=0. $$ In Theorem \ref{thm:F4} we prove that ${{\mathcal F}}_4^\sharp({{\mathbb T}})$ is in a 2-1 surjection with ${{\mathcal P}}$ and therefore $\overline{{{\mathcal F}}_4^\sharp({{\mathbb T}})}$ is isomorphic to ${{\mathcal P}}$. Remark that boundary components of ${{\mathcal P}}$ correspond to quadruples ${{\mathfrak p}}$ such that all points of ${{\mathfrak p}}$ lie on a Circle, that is, an ${{\mathcal M}}({{\mathbb T}})$-image of the diagonal curve $\gamma(x)=(x,x)$, $x\in S^1$, which is fixed by the involution $\iota_0$. Remark also that the parametrisations of $\overline{{{\mathcal F}}_4^\sharp({{\mathbb T}})}$ by ${{\mathcal Q}}$ and by ${{\mathcal P}}$ induce the same differentiable structure.
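The constraint $\Delta({{\mathbb X}}_1,{{\mathbb X}}_2)\ge 0$ can be probed numerically on sample quadruples. The sketch below is our own illustration (helper names are hypothetical); it also uses the closed form $\Delta({{\mathbb X}}_1,{{\mathbb X}}_2)=({\rm X}({{\mathfrak x}})-{\rm X}({{\mathfrak y}}))^2$, which is our own computation and not a statement of the paper, though it is consistent with the boundary $\Delta=0$ corresponding to quadruples on a Circle, where ${\rm X}({{\mathfrak x}})={\rm X}({{\mathfrak y}})$.

```python
from fractions import Fraction

def cross_ratio(x1, x2, x3, x4):
    # [x1,x2,x3,x4] = (x4-x2)(x3-x1) / ((x4-x1)(x3-x2)), finite points only
    return Fraction((x4 - x2) * (x3 - x1), (x4 - x1) * (x3 - x2))

def Delta(u, v):
    # Delta(u, v) = u^2 + v^2 - 2u - 2v + 1 - 2uv
    return u * u + v * v - 2 * u - 2 * v + 1 - 2 * u * v

fx = (0, 5, 2, 1)            # quadruple of first coordinates
fy = (0, 7, 3, 1)            # quadruple of second coordinates (admissible pair)

X1 = cross_ratio(*fx) * cross_ratio(*fy)
X2 = cross_ratio(fx[0], fx[2], fx[1], fx[3]) * cross_ratio(fy[0], fy[2], fy[1], fy[3])

# Delta(X1, X2) equals (X(fx) - X(fy))^2 >= 0, so (X1, X2) lies in P;
# it vanishes exactly when X(fx) = X(fy), i.e. when p lies on a Circle.
assert Delta(X1, X2) == (cross_ratio(*fx) - cross_ratio(*fy)) ** 2
assert Delta(X1, X2) >= 0
```

For this sample, ${\rm X}({{\mathfrak x}})=8/3$ and ${\rm X}({{\mathfrak y}})=9/2$, so the point $({{\mathbb X}}_1,{{\mathbb X}}_2)$ lies strictly inside the region $\Delta>0$.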
We now discuss in brief some general aspects of M\"obius geometry. Let $S$ be a set comprising at least four points and denote by ${{\mathcal C}}_4={{\mathcal C}}_4(S)$ the space of quadruples of pairwise distinct points of $S$. A {\it positive cross-ratio} ${{\bf X}}$ on ${{\mathcal C}}_4$ is a map ${{\mathcal C}}_4\to{{\mathbb R}}_+$ such that for each ${{\mathfrak p}}=(p_1,p_2,p_3,p_4)\in{{\mathcal C}}_4$, a list of symmetry properties holds; explicitly, \begin{eqnarray*} && ({\rm S1})\quad {{\bf X}}(p_1,p_2,p_3,p_4)={{\bf X}}(p_2,p_1,p_4,p_3)={{\bf X}}(p_3,p_4,p_1,p_2) ={{\bf X}}(p_4,p_3,p_2,p_1),\\ && ({\rm S2})\quad{{\bf X}}(p_1,p_2,p_3,p_4)\cdot {{\bf X}}(p_1,p_2,p_4,p_3)=1,\\ &&
({\rm S3})\quad
{{\bf X}}(p_1,p_2,p_3,p_4)\cdot {{\bf X}}(p_1,p_4,p_2,p_3)\cdot {{\bf X}}(p_1,p_3,p_4,p_2)=1.
\end{eqnarray*} Hence all 24 positive cross-ratios corresponding to a given quadruple ${{\mathfrak p}}$ are functions of ${{\bf X}}_1({{\mathfrak p}})=$ ${{\bf X}}(p_1,p_2,p_3,p_4)$ and ${{\bf X}}_2({{\mathfrak p}})={{\bf X}}(p_1,p_3,p_2,p_4)$. The {\it M\"obius structure} of $S$ is then defined to be the map $$ {{\mathfrak M}}_S:{{\mathcal C}}_4(S)\ni{{\mathfrak p}}\mapsto({{\bf X}}_1({{\mathfrak p}}),{{\bf X}}_2({{\mathfrak p}}))\in({{\mathbb R}}_+)^2. $$ The {\it M\"obius group} ${{\mathfrak M}}(S)$ comprises bijections $g:S\to S$ that leave ${{\bf X}}$ invariant, that is, ${{\bf X}}(g({{\mathfrak p}}))={{\bf X}}({{\mathfrak p}})$. We stress here that the definitions above vary from author to author; however, all existing definitions are equivalent. In particular, our definition is equivalent to that of a {\it sub-M\"obius structure} in \cite{Bu}.
Frequently, a M\"obius structure is obtained from a metric (or even a semi-metric) $\rho$ on $S$, and it is then called a {\it M\"obius structure associated to $\rho$}. In the primitive case of the circle $S^1$, from the real cross-ratio ${\rm X}$ we obtain a positive cross-ratio ${{\bf X}}$ on ${{\mathcal C}}_4(S^1)$ by assigning to each ${{\mathfrak p}}\in{{\mathcal C}}_4(S^1)$ the number $$
{{\bf X}}({{\mathfrak p}})=|{\rm X}({{\mathfrak p}})|=\frac{|x_4-x_2||x_3-x_1|}{|x_4-x_1||x_3-x_2|}=\frac{\rho(x_4,x_2)\cdot \rho(x_3,x_1)}{\rho(x_4,x_1)\cdot \rho(x_3,x_2)}. $$
The metric $\rho$ here is the extension of the euclidean metric in ${{\mathbb R}}$ to $\overline{{{\mathbb R}}}$: $\rho(x,y)=|x-y|$ if $x,y\in{{\mathbb R}}$, $\rho(x,\infty)=+\infty$ and $\rho(\infty,\infty)=0$. One verifies that this positive cross-ratio satisfies properties (S1), (S2) and (S3). The M\"obius structure of $S^1$ is thus the map $$ {{\mathfrak M}}_{S^1}({{\mathfrak p}})=({{\bf X}}_1({{\mathfrak p}}),{{\bf X}}_2({{\mathfrak p}}))\in({{\mathbb R}}^+\setminus\{0,1\})^2. $$ Note that the M\"obius group ${{\mathfrak M}}(S^1)$ for this M\"obius structure is ${\rm SL}(2,{{\mathbb R}})$, a double cover of ${{\mathcal M}}(S^1)$. Note also that since the corresponding real cross-ratios satisfy ${\rm X}_1({{\mathfrak p}})+{\rm X}_2({{\mathfrak p}})=1$, we obtain by the triangle inequality \begin{equation}\label{eq:ptol}
\left|{{\bf X}}_1({{\mathfrak p}})-{{\bf X}}_2({{\mathfrak p}})\right|\le 1\quad\text{and}\quad{{\bf X}}_1({{\mathfrak p}})+{{\bf X}}_2({{\mathfrak p}})\ge 1. \end{equation}
Explicitly,
${{\bf X}}_1({{\mathfrak p}})-{{\bf X}}_2({{\mathfrak p}})=1$ if $x_1$ and $x_3$ separate $x_2$ and $x_4$; ${{\bf X}}_2({{\mathfrak p}})-{{\bf X}}_1({{\mathfrak p}})=1$ if $x_1$ and $x_2$ separate $x_3$ and $x_4$;
${{\bf X}}_1({{\mathfrak p}})+{{\bf X}}_2({{\mathfrak p}})=1$ if $x_1$ and $x_4$ separate $x_2$ and $x_3$.
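The inequalities (\ref{eq:ptol}), together with the trichotomy of equalities just listed, can be confirmed numerically. A minimal sketch of our own (not the paper's; \texttt{cross\_ratio} is a hypothetical helper), iterating over all orderings of a sample quadruple of finite points:

```python
from fractions import Fraction
from itertools import permutations

def cross_ratio(x1, x2, x3, x4):
    # real cross-ratio [x1,x2,x3,x4] for finite points
    return Fraction((x4 - x2) * (x3 - x1), (x4 - x1) * (x3 - x2))

for q in permutations((0, 1, 3, 7)):
    B1 = abs(cross_ratio(q[0], q[1], q[2], q[3]))   # bold X_1
    B2 = abs(cross_ratio(q[0], q[2], q[1], q[3]))   # bold X_2
    # the Ptolemaean inequalities (eq:ptol)
    assert abs(B1 - B2) <= 1 and B1 + B2 >= 1
    # exactly one of the three equalities holds, according to which
    # pair of points separates the other pair on the circle
    assert 1 in (B1 - B2, B2 - B1, B1 + B2)
```

Since ${\rm X}_1+{\rm X}_2=1$ for the real cross-ratios, the sign pattern of $({\rm X}_1,{\rm X}_2)$ determines which equality holds after taking absolute values.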
If the M\"obius structure ${{\mathfrak M}}_S$ of a space $S$ satisfies (\ref{eq:ptol}) then it is called Ptolemaean. The subsets of $S$ at which equalities hold in (\ref{eq:ptol}) are called Ptolemaean circles. In this way the M\"obius structure of $S^1$ associated to the euclidean metric is Ptolemaean and $S^1$ itself is a Ptolemaean circle for this M\"obius structure. $S^1$ is the boundary of the hyperbolic disc ${{\bf H}}_{{\mathbb C}}^1$; it is proved in \cite{P} that all M\"obius structures in boundaries of hyperbolic spaces ${{\bf H}}_{{\mathbb K}}^n$, $n=1,2,\dots$, are Ptolemaean; all these M\"obius structures are associated to the Kor\'anyi metric.
Therefore all boundaries of symmetric spaces of non-compact type and of rank-1 have M\"obius structures which are all associated to a metric, and they are all Ptolemaean. In the case of the torus we study here, this does not happen: the M\"obius structure which is defined in Section \ref{sec:mob} \begin{enumerate} \item is not associated to any semi-metric on ${{\mathbb T}}$; \item is not Ptolemaean, but \item there exist Ptolemaean circles for this structure. \end{enumerate} In the direction of defining M\"obius structures in boundaries of symmetric spaces of rank $>1$, little was known until recently; in his recent work \cite{B}, Beyrer explicitly constructs cross-ratio triples in F\"urstenberg boundaries of symmetric spaces of higher rank. We have already mentioned that the torus ${{\mathbb T}}=S^1\times S^1$ which we study here appears naturally as the F\"urstenberg boundary of the rank-2 symmetric space ${\rm SO}_0(2,2)/{\rm SO}(2)\times{\rm SO}(2)$ and is also isomorphic to the ideal boundary of 3-dimensional anti-de Sitter space $AdS^3$. Our results apply to these spaces, which we discuss in Section \ref{sec:cons}.
\noindent{\it Acknowledgements:} Part of this work was carried out while the author visited the University of Zurich, whose hospitality is gratefully acknowledged. The author also wishes to thank Viktor Schroeder and Jonas Beyrer for fruitful discussions.
\section{The Configuration Space of Four Points in the Torus }\label{sec:config} Our main results lie in this section. In Section \ref{sec:trans} we study the transitive action of the group of M\"obius transformations of the torus. The results about the configuration space are in Sections \ref{sec:confT} and \ref{sec:realX}. \subsection{The action of M\"obius transformations in the torus }\label{sec:trans} The torus ${{\mathbb T}}=S^1\times S^1$ is isomorphic to $\overline{{{\mathbb R}}}\times\overline{{{\mathbb R}}}$, where $\overline{{{\mathbb R}}}={{\mathbb R}}\cup\{\infty\}$. Let $(x,y)\in{{\mathbb T}}$; a M\"obius transformation of ${{\mathbb T}}$ is a map $g:{{\mathbb T}}\to{{\mathbb T}}$ of the form $$ g(x,y)=\left(g_1(x),g_2(y)\right), $$ where $g_1$ and $g_2$ are in ${{\mathcal M}}(S^1)$, that is, $$ g_1(x)=\frac{ax+b}{cx+d},\quad g_2(y)=\frac{a'y+b'}{c'y+d'}, $$ where the matrices $$ A_1=\left(\begin{matrix} a&b\\ c&d\end{matrix}\right)\quad\text{and}\quad A_2=\left(\begin{matrix} a'&b'\\ c'&d'\end{matrix}\right) $$ are both in ${\rm PSL}(2,{{\mathbb R}})={\rm SL}(2,{{\mathbb R}})/\{\pm I\}$. Thus the set of M\"obius transformations ${{\mathcal M}}({{\mathbb T}})$ is ${{\mathcal M}}(S^1)\times{{\mathcal M}}(S^1)={\rm PSL}(2,{{\mathbb R}})\times{\rm PSL}(2,{{\mathbb R}})$.
We wish to describe the action of ${{\mathcal M}}({{\mathbb T}})$ on ${{\mathbb T}}$. First, the action is transitive; this follows directly from the transitive action of ${{\mathcal M}}(S^1)$ on $S^1$. Secondly, the action is not doubly-transitive in the usual sense. If $$ {{\mathfrak c}}=(p_1,p_2)=((x_1,y_1),(x_2,y_2)), $$ is a pair of distinct points on the torus, then the cases: a) $x_1=x_2$ or $y_1=y_2$ and b) $x_1\neq x_2$ and $y_1\neq y_2$, are completely distinguished: a transformation $g\in{{\mathcal M}}({{\mathbb T}})$ maps pairs of the form a) (resp. of the form b)) to pairs of the same form; this prevents ${{\mathcal M}}({{\mathbb T}})$ from acting doubly-transitively on ${{\mathbb T}}$ in the usual sense, so the doubly-transitive action of ${{\mathcal M}}({{\mathbb T}})$ is only partial, in the sense above. Thirdly, as far as triple transitivity of ${{\mathcal M}}({{\mathbb T}})$ is concerned, distinguished cases appear again. Indeed, consider an arbitrary triple $$ {{\mathfrak t}}=(p_1,p_2,p_3)=\left((x_1,y_1),(x_2,y_2),(x_3,y_3)\right), $$ of pairwise distinct points in ${{\mathbb T}}$
and we have the following distinguished cases: \begin{enumerate} \item [{a)}] Both $(x_1,x_2,x_3)$ and $(y_1,y_2,y_3)$ are triples of pairwise distinct points in $S^1$; \item [{b)}] $y_1=y_2=y_3$ and $(x_1,x_2,x_3)$ is a triple of pairwise distinct points of $S^1$; \item [{c)}] $x_1=x_2=x_3$ and $(y_1,y_2,y_3)$ is a triple of pairwise distinct points of $S^1$; \item [{d)}] $x_i=x_j=x$, $x_l\neq x$, $i,j=1,2,3$, $i\neq j$, $l\neq i,j$, and $(y_1,y_2,y_3)$ is a triple of pairwise distinct points in $S^1$; \item [{e)}] $y_i=y_j=y$, $y_l\neq y$, $i,j=1,2,3$, $i\neq j$, $l\neq i,j$, and $(x_1,x_2,x_3)$ is a triple of pairwise distinct points in $S^1$; \item [{f)}] Two $x_i$'s and two $y_j$'s are equal. \end{enumerate} No two of the above cases are M\"obius equivalent: a $g\in{{\mathcal M}}({{\mathbb T}})$ maps triples of each of the above categories to triples of the same category. However, there is a triply-transitive action of ${{\mathcal M}}({{\mathbb T}})$ on triples which belong to the same category. Notice for instance that in case a) there exist $g_1$ and $g_2$ in ${{\mathcal M}}(S^1)$ such that \begin{equation*}\label{eq:g12} g_1(x_1)=g_2(y_1)=0,\quad g_1(x_2)=g_2(y_2)=\infty,\quad g_1(x_3)=g_2(y_3)=1. \end{equation*} We derive that $g=(g_1,g_2)\in{{\mathcal M}}({{\mathbb T}})$ satisfies \begin{equation*}\label{eq:g} g({{\mathfrak t}})=\left((0,0),(\infty,\infty),(1,1)\right). \end{equation*} In case c), where $x_1=x_2=x_3$, the triple $(y_1,y_2,y_3)$ consists of distinct points in $S^1$. Therefore there exists a $g=(g_1,g_2)$ in ${{\mathcal M}}({{\mathbb T}})$ such that $g_1(x_i)=0$, $g_2(y_1)=0$, $g_2(y_2)=\infty$, $g_2(y_3)=1$, that is, $$ g({{\mathfrak t}})=\left((0,0),(0,\infty),(0,1)\right). $$ Analogously, in case b), where $y_1=y_2=y_3$, we find that there exists a $g\in{{\mathcal M}}({{\mathbb T}})$ such that $$ g({{\mathfrak t}})=\left((0,0),(\infty,0),(1,0)\right).
$$ The remaining cases are treated in the same manner and we leave them to the reader. \subsubsection{Circles} For each $g=(g_1,g_2)\in{{\mathcal M}}({{\mathbb T}})$ we get an embedding of $S^1$ into ${{\mathbb T}}$ which is given by the parametrisation $$ \gamma(x)=(g_1(x),g_2(x)),\quad x\in S^1. $$ Such embeddings of $S^1$ into ${{\mathbb T}}$ will be called {\it M\"obius embeddings of $S^1$}, or {\it Circles}, on ${{\mathbb T}}$. Notice first that each Circle is the image of the {\it standard Circle} $R_0$ via an element of ${{\mathcal M}}({{\mathbb T}})$; here, $R_0$ is the curve $\gamma(x)=(x,x)$, $x\in S^1$. Secondly, the involution $\iota_0$ of ${{\mathbb T}}$ defined by $\iota_0(x,y)=(y,x)$ fixes $R_0$ point-wise. Hence to each Circle $R$ is associated an involution $\iota_R$ of ${{\mathbb T}}$ which fixes $R$ point-wise. Moreover, we have \begin{prop} Given a triple ${{\mathfrak t}}=(p_1,p_2,p_3)$ of the form a) above, there exists a Circle $R$ passing through the points of ${{\mathfrak t}}$ and thus the involution $\iota_R$ of ${{\mathbb T}}$ associated to $R$ fixes all points of ${{\mathfrak t}}$. \end{prop} \begin{proof} We normalise so that ${{\mathfrak t}}=(p_1,p_2,p_3)$ where $$ p_1=(0,0),\quad p_2=(\infty,\infty),\quad p_3=(1,1). $$ Then the Circle passing through the $p_i$ is $R_0$ and the involution is $\iota_0$. \end{proof}
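The normalisation used in the proof above rests on the cross-ratio formula $[0,\infty,f(x),1]=[x_1,x_2,x,x_3]$ from the introduction, applied coordinate-wise. A small sketch of our own (\texttt{normaliser} is a hypothetical helper restricted to finite points, with \texttt{None} standing for $\infty$) illustrating this for a type-a) triple:

```python
from fractions import Fraction

def normaliser(x1, x2, x3):
    # the Moebius map sending x1 -> 0, x2 -> inf, x3 -> 1,
    # given by f(x) = [x1, x2, x, x3] (finite data assumed)
    def f(x):
        num = (x3 - x2) * (x - x1)
        den = (x3 - x1) * (x - x2)
        return Fraction(num, den) if den != 0 else None   # None encodes infinity
    return f

f1 = normaliser(0, 2, 5)     # first coordinates of a type-a) triple
f2 = normaliser(1, 4, 6)     # second coordinates

assert (f1(0), f1(2), f1(5)) == (0, None, 1)
assert (f2(1), f2(4), f2(6)) == (0, None, 1)
# hence g = (f1, f2) maps the triple to ((0,0), (inf,inf), (1,1)),
# whose Circle is the standard Circle R_0
```

The same two-helper pattern normalises triples of the other categories, with the appropriate coordinate held constant.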
Three distinct points of ${{\mathbb T}}$ might lie on various embeddings of $S^1$; for instance, triples of the form b) and c) lie on $\gamma_y(x)=(g_1(x),y)$ for fixed $y$ and $\gamma_x(y)=(x,g_2(y))$ for fixed $x$, respectively, where $g_1,g_2\in{{\mathcal M}}(S^1)$. But in any case, only triples of points of the form a) lie on Circles.
\subsection{The configuration space of four points in ${{\mathbb T}}$}\label{sec:confT} According to the notation which was set up in the introduction, let ${{\mathcal C}}_4={{\mathcal C}}_4({{\mathbb T}})$ be the space of quadruples of pairwise distinct points in ${{\mathbb T}}$ and let also ${{\mathcal F}}_4={{\mathcal F}}_4({{\mathbb T}})$ be the {\it configuration space of quadruples of pairwise distinct points in} ${{\mathbb T}}$, that is, the quotient of ${{\mathcal C}}_4$ by the diagonal action of the M\"obius group ${{\mathcal M}}({{\mathbb T}})$ on ${{\mathcal C}}_4$. Let ${{\mathfrak p}}=(p_1,p_2,p_3,p_4)\in{{\mathcal C}}_4$ be arbitrary;
if $p_i=(x_i,y_i)$, $i=1,2,3,4$, we shall denote by ${{\mathfrak x}}$ the quadruple $(x_1,x_2,x_3,x_4)$ and by ${{\mathfrak y}}$ the quadruple $(y_1,y_2,y_3,y_4)$. The isotropy group of ${{\mathfrak p}}$ is \begin{eqnarray*}
{{\mathcal M}}({{\mathbb T}})({{\mathfrak p}})&=&\{g\in{{\mathcal M}}({{\mathbb T}})\;|\;g({{\mathfrak p}})={{\mathfrak p}}\}\\
&=&\{(g_1,g_2)\in{{\mathcal M}}(S^1)\times{{\mathcal M}}(S^1)\;|\;g_1({{\mathfrak x}})={{\mathfrak x}}\;\text{and}\;g_2({{\mathfrak y}})={{\mathfrak y}}\}\\ &=&{{\mathcal M}}(S^1)({{\mathfrak x}})\times{{\mathcal M}}(S^1)({{\mathfrak y}}). \end{eqnarray*} Therefore the isotropy group ${{\mathcal M}}({{\mathbb T}})({{\mathfrak p}})$ is trivial if and only if both isotropy groups ${{\mathcal M}}(S^1)({{\mathfrak x}})$ and ${{\mathcal M}}(S^1)({{\mathfrak y}})$ are trivial as well. If ${{\mathfrak p}}$ is such that $[{{\mathfrak p}}]$ is of maximal dimension (that is, both $[{{\mathfrak x}}]$ and $[{{\mathfrak y}}]$ are of maximal dimension), then we call ${{\mathfrak p}}$ {\it admissible}. Note that the dimension of the orbit of an admissible ${{\mathfrak p}}$ is 2. In the opposite case, we call ${{\mathfrak p}}$ {\it non-admissible}.
We start with the non-admissible case first.
Let ${{\mathfrak p}}=(p_1,p_2,p_3,p_4)$, ${{\mathfrak x}}=(x_1,x_2,x_3,x_4)$ and ${{\mathfrak y}}=(y_1,y_2,y_3,y_4)$ as above. We distinguish the following cases for ${{\mathcal M}}(S^1)({{\mathfrak x}})$: \begin{enumerate} \item [{${{\mathfrak x}}$-1)}] ${{\mathcal M}}(S^1)({{\mathfrak x}})$ is trivial and ${{\mathfrak x}}\in{{\mathcal C}}_4(S^1)$. We may then normalise so that $$ x_1=0,\quad x_2=\infty,\quad x_3={{\rm X}}({{\mathfrak x}}),\quad x_4=1. $$ \item [{${{\mathfrak x}}$-2)}] ${{\mathcal M}}(S^1)({{\mathfrak x}})$ is trivial and two points $x_i$ in ${{\mathfrak x}}$, $i\in\{1,2,3,4\}$, are equal. If for instance $x_1=x_2$, we may normalise so that $$ x_1=x_2=0,\quad x_3=1,\quad x_4=\infty; $$ we normalise similarly for the remaining cases. \item [{${{\mathfrak x}}$-3)}] ${{\mathcal M}}(S^1)({{\mathfrak x}})$ is isomorphic to ${{\mathcal M}}(S^1)(0,\infty)$: Three points $x_i$ in ${{\mathfrak x}}$, $i\in\{1,2,3,4\}$, are equal. If for instance $x_1=x_2=x_3$, we may normalise so that $$ x_1=x_2=x_3=0,\quad x_4=\infty; $$ we normalise similarly for the remaining cases. \item [{${{\mathfrak x}}$-4)}] ${{\mathcal M}}(S^1)({{\mathfrak x}})$ is isomorphic to ${{\mathcal M}}(S^1)(\infty)$: All points $x_i$ in ${{\mathfrak x}}$, $i=1,2,3,4$, are equal and we may normalise so that $x_i=\infty$. \end{enumerate} Notice that there are six sub-cases in ${{\mathfrak x}}$-2) and four sub-cases in ${{\mathfrak x}}$-3); in all, we have twelve distinguished cases. Entirely analogous distinguished cases ${{\mathfrak y}}$-1), ${{\mathfrak y}}$-2), ${{\mathfrak y}}$-3) and ${{\mathfrak y}}$-4) appear for ${{\mathcal M}}(S^1)({{\mathfrak y}})$. Non-admissible quadruples ${{\mathfrak p}}$ are such that all combinations of cases for ${{\mathfrak x}}$ and ${{\mathfrak y}}$ may appear except when ${{\mathfrak x}}$ falls into the case ${{\mathfrak x}}$-1) and ${{\mathfrak y}}$ falls into the case ${{\mathfrak y}}$-1). 
Mind that not all combinations are valid; for instance, there can be no ${{\mathfrak p}}$ such that ${{\mathfrak x}}$ is as in ${{\mathfrak x}}$-4) and ${{\mathfrak y}}$ is as in ${{\mathfrak y}}$-2) or ${{\mathfrak y}}$-3). Subsets of ${{\mathcal F}}_4$ corresponding to each valid combination are either of dimension 0 or 1. One-dimensional subsets appear when ${{\mathfrak x}}$ belongs to the ${{\mathfrak x}}$-1) case or ${{\mathfrak y}}$ belongs to the ${{\mathfrak y}}$-1) case; the corresponding subset is then isomorphic to the product of ${{\mathbb R}}\setminus\{0,1\}$ with a point. For clarity, we will treat two cases. First, suppose that the non-admissible ${{\mathfrak p}}$ is such that ${{\mathfrak x}}$ is as in ${{\mathfrak x}}$-1) and ${{\mathfrak y}}$ is as in ${{\mathfrak y}}$-2) with $y_1=y_2$. Then we may normalise so that $$ p_1=(0,0),\quad p_2=(\infty, 0),\quad p_3=({{\rm X}}({{\mathfrak x}}),1),\quad p_4=(1,\infty). $$ Therefore the subset of ${{\mathcal F}}_4({{\mathbb T}})$ comprising orbits of such ${{\mathfrak p}}$ is isomorphic to $({{\mathbb R}}\setminus\{0,1\})\times\{b_{12}\}$, where $b_{12}$ is the abstract point corresponding to quadruples ${{\mathfrak y}}$ such that $y_1=y_2$. Secondly, suppose that ${{\mathfrak p}}$ is such that ${{\mathfrak x}}$ is as in ${{\mathfrak x}}$-3) with $x_1=x_2=x_3$ and ${{\mathfrak y}}$ is as in ${{\mathfrak y}}$-2) with $y_3=y_4$. Then we may normalise so that $$ p_1=(0,0),\quad p_2=(0, \infty),\quad p_3=(0,1),\quad p_4=(\infty,1). $$ The corresponding subset of the orbit space is then isomorphic to $\{a_{123}\}\times \{b_{34}\}$, where $a_{123}$ is the abstract point corresponding to quadruples ${{\mathfrak x}}$ such that $x_1=x_2=x_3$ and $b_{34}$ is the abstract point corresponding to quadruples ${{\mathfrak y}}$ such that $y_3=y_4$.
Table \ref{table:1} shows all 68 distinguished subsets of ${{\mathcal F}}_4({{\mathbb T}})$ comprising non-admissible orbits of quadruples ${{\mathfrak p}}$ such that ${{\mathfrak x}}$ and ${{\mathfrak y}}$ belong to the above categories:
\begin{table}[h!] \centering
\begin{tabular}{||c c c c||}
\hline
${{\mathfrak x}}$ & ${{\mathfrak y}}$ & Corresponding subset of ${{\mathcal F}}_4$ & Components \\ [0.5ex]
\hline\hline
${{\mathfrak x}}$-1) & ${{\mathfrak y}}$-2) & $({{\mathbb R}}\setminus\{0,1\})\times\{b_{ij}\}$
& 6\\
${{\mathfrak x}}$-1) & ${{\mathfrak y}}$-3) & $({{\mathbb R}}\setminus\{0,1\})\times\{b_{ijk}\}$
& 3\\
${{\mathfrak x}}$-1) & ${{\mathfrak y}}$-4) & $({{\mathbb R}}\setminus\{0,1\})\times\{\infty\}$ & 1\\
${{\mathfrak x}}$-2) & ${{\mathfrak y}}$-1) & $\{a_{ij}\}\times({{\mathbb R}}\setminus\{0,1\})$ & 6\\
${{\mathfrak x}}$-2) & ${{\mathfrak y}}$-2) & $\{a_{ij}\}\times\{b_{kl}\}$ & 24\\
${{\mathfrak x}}$-2) & ${{\mathfrak y}}$-3) & $\{a_{ij}\}\times\{b_{klm}\}$ & 12\\
${{\mathfrak x}}$-3) & ${{\mathfrak y}}$-1) & $\{a_{ijk}\}\times({{\mathbb R}}\setminus\{0,1\})$ & 3\\
${{\mathfrak x}}$-3) & ${{\mathfrak y}}$-2) & $\{a_{ijk}\}\times\{b_{lm}\}$ & 12\\
${{\mathfrak x}}$-4) & ${{\mathfrak y}}$-1) & $\{\infty\}\times({{\mathbb R}}\setminus\{0,1\})$ & 1 \\ [1ex]
\hline \end{tabular} \caption{Subspaces of non-admissible orbits} \label{table:1} \end{table}
If now ${{\mathfrak p}}$ is an admissible quadruple, we have that both ${{\mathfrak x}}=(x_1,x_2,x_3,x_4)$ and ${{\mathfrak y}}=(y_1,y_2,y_3,y_4)$ are in ${{\mathcal C}}_4(S^1)$. Let ${{\mathcal C}}^\sharp_4={{\mathcal C}}_4^\sharp({{\mathbb T}})$ be the subspace of ${{\mathcal C}}_4({{\mathbb T}})$ comprising admissible quadruples and denote by ${{\mathcal F}}_4^\sharp={{\mathcal F}}_4^\sharp({{\mathbb T}})$ the corresponding orbit space.
The bijection $$ {{\mathfrak C}}:{{\mathcal C}}_4^\sharp({{\mathbb T}})\ni {{\mathfrak p}}\mapsto ({{\mathfrak x}},{{\mathfrak y}})\in{{\mathcal C}}_4(S^1)\times{{\mathcal C}}_4(S^1), $$ descends to the bijection $$ {{\mathfrak F}}:{{\mathcal F}}_4^\sharp({{\mathbb T}})\ni [{{\mathfrak p}}]\mapsto ([{{\mathfrak x}}],[{{\mathfrak y}}])\in{{\mathcal F}}_4(S^1)\times{{\mathcal F}}_4(S^1), $$ and therefore we obtain \begin{thm}\label{thm:vec} The configuration space ${{\mathcal F}}_4({{\mathbb T}})$ of quadruples of pairwise distinct points of the torus ${{\mathbb T}}$ is isomorphic to a set comprising 69 distinguished components: 20 one-dimensional components, 48 points and a 2-dimensional subset corresponding to the subset ${{\mathcal F}}_4^\sharp({{\mathbb T}})$ of admissible quadruples. This subset may be identified with $({{\mathbb R}}\setminus\{0,1\})^2$. The identification is given by assigning to each $[{{\mathfrak p}}]$ the vector-valued cross-ratio $ \vec{{\mathbb X}}({{\mathfrak p}})=({{\rm X}}({{\mathfrak x}}),{{\rm X}}({{\mathfrak y}})). $ \end{thm}
The set ${{\mathcal F}}_4^\sharp=({{\mathbb R}}\setminus\{0,1\})^2$ is a subset of ${{\mathbb R}}^2$ comprising nine connected open components. We consider the space $\overline{{{\mathcal F}}_4^\sharp({{\mathbb T}})}$; this is ${{\mathcal C}}_4^\sharp({{\mathbb T}})$ factored by the diagonal action of $\overline{{{\mathcal M}}({{\mathbb T}})}$: the latter comprises elements of ${{\mathcal M}}({{\mathbb T}})$ followed by the involution $\iota_0:(x,y)\mapsto(y,x)$ of ${{\mathbb T}}$.
We thus have \begin{thm}\label{thm:band} The configuration space $\overline{{{\mathcal F}}_4^\sharp({{\mathbb T}})}$ is identified to the disconnected subset ${{\mathcal Q}}$ of ${{\mathbb R}}^2$ which is obtained by identifying points of $({{\mathbb R}}\setminus\{0,1\})^2$ that are symmetric with respect to the diagonal straight line $y=x$. Explicitly, ${{\mathcal Q}}$ has three open components: \begin{eqnarray*} && {{\mathcal Q}}_1^0=(-\infty,0)\times(0,1);\\ && {{\mathcal Q}}_2^0=(-\infty,0)\times(1,+\infty);\\ && {{\mathcal Q}}_3^0=(0,1)\times(1,+\infty), \end{eqnarray*} and three components with boundary: \begin{eqnarray*} &&
{{\mathcal Q}}_1^1=\{(x,y)\in{{\mathbb R}}^2\;|\;x<0,\;x\le y<0\};\\ &&
{{\mathcal Q}}_2^1=\{(x,y)\in{{\mathbb R}}^2\;|\;0<x<1,\;x\le y<1\};\\ &&
{{\mathcal Q}}_3^1=\{(x,y)\in{{\mathbb R}}^2\;|\;x>1,\;y\ge x\}. \end{eqnarray*} \end{thm} \subsection{Real Cross-Ratios and another parametrisation}\label{sec:realX} Using the vector-valued $\vec{{\mathbb X}}$ as in Theorem \ref{thm:vec} we define a real cross-ratio in ${{\mathcal C}}_4^\sharp({{\mathbb T}})$ by $$ {{\mathbb X}}({{\mathfrak p}})={{\rm X}}({{\mathfrak x}})\cdot {{\rm X}}({{\mathfrak y}}). $$ One may show that all 24 cross-ratios corresponding to an admissible quadruple ${{\mathfrak p}}$ depend on the following two: $$ {{\mathbb X}}_1({{\mathfrak p}})=[x_1,x_2,x_3,x_4]\cdot[y_1,y_2,y_3,y_4],\quad {{\mathbb X}}_2({{\mathfrak p}})=[x_1,x_3,x_2,x_4]\cdot[y_1,y_3,y_2,y_4]. $$ We now consider the map ${{\mathcal G}}^\sharp:{{\mathcal F}}_4^\sharp\to({{\mathbb R}}_*)^2$, where $$ {{\mathcal G}}^\sharp([{{\mathfrak p}}])=\left({{\mathbb X}}_1({{\mathfrak p}}),{{\mathbb X}}_2({{\mathfrak p}})\right). $$ The map ${{\mathcal G}}^\sharp$ is well defined since ${{\mathbb X}}_1$ and ${{\mathbb X}}_2$ remain invariant under the action of ${{\mathcal M}}({{\mathbb T}})$. Let \begin{equation}\label{eq:P}
{{\mathcal P}}=\{(u,v)\in({{\mathbb R}}_*)^2\;|\;\Delta(u,v)=u^2+v^2-2u-2v+1-2uv\ge 0\}. \end{equation} The fundamental inequality for cross-ratios, stated in the following proposition, shows that ${{\mathcal G}}^\sharp$ indeed takes its values in ${{\mathcal P}}$:
\begin{prop}\label{prop:fundX} Let ${{\mathfrak p}}$ be an admissible quadruple of points in ${{\mathbb T}}$ and ${{\mathcal P}}$ as in (\ref{eq:P}). Then $$ ({{\mathbb X}}_1({{\mathfrak p}}),{{\mathbb X}}_2({{\mathfrak p}}))\in{{\mathcal P}}. $$ Moreover, $\Delta({{\mathbb X}}_1({{\mathfrak p}}),{{\mathbb X}}_2({{\mathfrak p}}))=0$ if and only if all points of ${{\mathfrak p}}$ lie on a Circle. \end{prop} \begin{proof} To prove the first statement, we may normalise so that ${{\mathfrak p}}=(p_1,p_2,p_3,p_4)$ where $$ p_1=(0,0),\quad p_2=(\infty,\infty),\quad p_3=(x,y),\quad p_4=(1,1). $$ We then calculate $$ {{\mathbb X}}_1=xy,\quad {{\mathbb X}}_2=(1-x)(1-y). $$ Therefore $ {{\mathbb X}}_2=1+{{\mathbb X}}_1-x-y $, from which we derive $$ 1+{{\mathbb X}}_1-{{\mathbb X}}_2=x+y. $$ Squaring both sides we find $$ (1+{{\mathbb X}}_1-{{\mathbb X}}_2)^2=(x+y)^2\ge 4xy=4{{\mathbb X}}_1, $$ and the inequality follows. For the second statement, observe that equality holds if and only if $x=y$, i.e. all points lie on the standard Circle $R_0$ on ${{\mathbb T}}$. \end{proof} We proceed by showing that ${{\mathcal G}}^\sharp$ is surjective: \begin{prop} Let $(u,v)\in{{\mathcal P}}$. Then there exists a ${{\mathfrak p}} \in{{\mathcal C}}_4^\sharp({{\mathbb T}})$ such that $$ {{\mathbb X}}_1({{\mathfrak p}})= u\quad\text{and}\quad {{\mathbb X}}_2({{\mathfrak p}})= v. $$ \end{prop} \begin{proof} Since $\Delta=(1+u-v)^2-4u\ge 0$ there exist $x,y$ such that $$ xy=u\quad\text{and}\quad x+y=1+u-v. $$ In fact, $$ x,y=\frac{1+u-v\pm\sqrt{\Delta}}{2}. $$ Now one verifies that the admissible quadruple ${{\mathfrak p}}=(p_1,p_2,p_3,p_4)$ where $$ p_1=(0,0),\quad p_2=(\infty,\infty),\quad p_3=(x,y),\quad p_4=(1,1), $$
is the quadruple in question. The proof is complete. \end{proof} Let $\iota_0$ be the involution associated to the standard Circle $R_0$. Notice in the proof that the quadruple $\iota_0({{\mathfrak p}})$, that is, $$ \iota_0(p_1)=(0,0),\quad \iota_0(p_2)=(\infty,\infty),\quad \iota_0(p_3)=(y,x),\quad \iota_0(p_4)=(1,1), $$
also satisfies ${{\mathbb X}}_1(\iota_0({{\mathfrak p}}))= u$ and ${{\mathbb X}}_2(\iota_0({{\mathfrak p}}))=v$.
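As a concrete illustration, take $(u,v)=(6,2)\in{{\mathcal P}}$: here $\Delta=36+4-12-4+1-24=1>0$, so that $x,y=(1+6-2\pm 1)/2$, i.e. $\{x,y\}=\{3,2\}$, and the quadruple with $p_3=(3,2)$ satisfies ${{\mathbb X}}_1=xy=6$ and ${{\mathbb X}}_2=(1-x)(1-y)=2$. For $(u,v)=(4,1)$ we get $\Delta=0$ and $x=y=2$, so that all four points of the quadruple lie on the standard Circle $R_0$, in accordance with Proposition \ref{prop:fundX}.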
\begin{prop} Suppose that ${{\mathfrak p}}$ and ${{\mathfrak p}}'$ are two quadruples in ${{\mathcal C}}_4^\sharp({{\mathbb T}})$ such that $$ {{\mathbb X}}_i({{\mathfrak p}})={{\mathbb X}}_i({{\mathfrak p}}'),\quad i=1,2. $$ Then one of the following cases occurs: \begin{enumerate} \item There exists a $g\in{{\mathcal M}}({{\mathbb T}})$ such that $g({{\mathfrak p}})={{\mathfrak p}}'$; \item There exists a $g\in\overline{{{\mathcal M}}({{\mathbb T}})}$ such that $g({{\mathfrak p}})={{\mathfrak p}}'$. \end{enumerate} \end{prop} \begin{proof} We may normalise so that $$ p_1=(0,0),\quad p_2=(\infty,\infty),\quad p_3=(x,y),\quad p_4=(1,1), $$ and $$ p_1'=(0,0),\quad p_2'=(\infty,\infty),\quad p_3'=(x',y'),\quad p_4'=(1,1). $$ Then $ {{\mathbb X}}_i({{\mathfrak p}})={{\mathbb X}}_i({{\mathfrak p}}'),\quad i=1,2, $ imply $$ xy=x'y'\quad \text{and}\quad (1-x)(1-y)=(1-x')(1-y'). $$ It follows that $xy=x'y'$ and $x+y=x'+y'$. But then either $x=x'$ and $y=y'$ or $x=y'$ and $y=x'$. The proof is complete. \end{proof} The above discussion boils down to the following theorem: \begin{thm}\label{thm:F4} The ${{\mathcal M}}({{\mathbb T}})$-(resp. $\overline{{{\mathcal M}}({{\mathbb T}})}$)-configuration space ${{\mathcal F}}_4^\sharp={{\mathcal F}}_4^\sharp({{\mathbb T}})$ (resp. $\overline{{{\mathcal F}}_4^\sharp}=\overline{{{\mathcal F}}_4^\sharp({{\mathbb T}})})$ of admissible quadruples of points in the torus ${{\mathbb T}}$ is in a 2-1 (resp. 1-1) correspondence with the set ${{\mathcal P}}$ given in (\ref{eq:P}).
The subset ${{\mathcal F}}_4^{\sharp,0}$ of both ${{\mathcal F}}_4^\sharp$ and $\overline{{{\mathcal F}}_4^\sharp}$ comprising equivalence classes of quadruples of points in the same Circle is in a bijection with the subset of ${{\mathcal P}}$ comprising $(u,v)$ such that $$ \Delta=u^2+v^2-2u-2v+1-2uv= 0. $$ Explicitly, ${{\mathcal P}}$ has three open components \begin{eqnarray*} &&
{{\mathcal P}}_1^0=(-\infty, 0)\times(0,+\infty);\\ &&
{{\mathcal P}}_2^0=(-\infty, 0)\times(-\infty,0);\\ &&
{{\mathcal P}}_3^0=(0,+\infty)\times(-\infty,0), \end{eqnarray*} and three components with one-dimensional boundaries: \begin{eqnarray*} &&
{{\mathcal P}}_1^1=\{(u,v)\in{{\mathcal P}}\;|\;0<u<1,\; 0<v<1,\;\Delta\ge 0\};\\ &&
{{\mathcal P}}_2^1=\{(u,v)\in{{\mathcal P}}\;|\;u>1,\;v>0,\;\Delta\ge 0\};\\ &&
{{\mathcal P}}_3^1=\{(u,v)\in{{\mathcal P}}\;|\;u>0,\;v>1,\;\Delta\ge 0\}. \end{eqnarray*} \end{thm} \begin{rem} The change of coordinates $$ u=xy,\quad v=1-x-y+xy $$ maps the set ${{\mathcal Q}}$ of Theorem \ref{thm:band} onto the set ${{\mathcal P}}$ in a bijective manner. \end{rem} \section{M\"obius Structure} Towards defining a M\"obius structure from the real cross-ratio ${{\mathbb X}}$ on the torus ${{\mathbb T}}$, we first study the case where both cross-ratios ${{\mathbb X}}_i({{\mathfrak p}})$, $i=1,2$, of an admissible quadruple of points are positive (Section \ref{sec:pos}). We then define ${{\mathfrak M}}_{{\mathbb T}}$ and prove in Section \ref{sec:mob} that it is not Ptolemaean. \subsection{When both cross-ratios are positive}\label{sec:pos} Let $$ {{\mathcal P}}^1={{\mathcal P}}_1^1\;\dot\cup\;{{\mathcal P}}_2^1\;\dot\cup\;{{\mathcal P}}_3^1. $$ This set corresponds exactly to quadruples ${{\mathfrak p}}$ with both ${{\mathbb X}}_1({{\mathfrak p}})$ and ${{\mathbb X}}_2({{\mathfrak p}})$ positive. Equivalently, ${{\rm X}}({{\mathfrak x}})$ and ${{\rm X}}({{\mathfrak y}})$ belong to the same connected component of ${{\mathbb R}}\setminus\{0,1\}$, which means that the points of ${{\mathfrak x}}$ and ${{\mathfrak y}}$ have exactly the same type of ordering on the circle: if $x_1,x_2$ separate $x_3,x_4$ then also $y_1,y_2$ separate $y_3,y_4$ and so forth. Let ${{\mathcal F}}^{\sharp,+}_4=({{\mathcal G}}^\sharp)^{-1}({{\mathcal P}}^1)$. \begin{prop}\label{prop:Ptol-eq-T} Let ${{\mathfrak p}}=(p_1,p_2,p_3,p_4)$ be such that $[{{\mathfrak p}}]\in{{\mathcal F}}^{\sharp,+}_4$ and let ${{\mathbb X}}_i={{\mathbb X}}_i({{\mathfrak p}})$, $i=1,2$. Then
\begin{equation}\label{eq:X12}
{{\mathbb X}}_1^{1/2}+{{\mathbb X}}_2^{1/2}\ge 1\;\;\text{and}\;\;|{{\mathbb X}}_1^{1/2}-{{\mathbb X}}_2^{1/2}|\ge 1,\quad\text{or}\quad{{\mathbb X}}_1^{1/2}+{{\mathbb X}}_2^{1/2}\le 1\;\;\text{and}\;\;|{{\mathbb X}}_1^{1/2}-{{\mathbb X}}_2^{1/2}|\le 1. \end{equation} Moreover, $[{{\mathfrak p}}]\in{{\mathcal F}}_4^{\sharp,0}$, that is, all points of ${{\mathfrak p}}$ lie in the same Circle if and only if $$
{{\mathbb X}}_1^{1/2}+{{\mathbb X}}_2^{1/2}=1\quad \text{or}\quad |{{\mathbb X}}_1^{1/2}-{{\mathbb X}}_2^{1/2}|=1. $$
Explicitly, \begin{enumerate}
\item ${{\mathbb X}}_1^{1/2}-{{\mathbb X}}_2^{1/2}=1$ if $p_1$ and $p_3$ separate $p_2$ and $p_4$;
\item ${{\mathbb X}}_2^{1/2}-{{\mathbb X}}_1^{1/2}=1$ if $p_1$ and $p_2$ separate $p_3$ and $p_4$;
\item ${{\mathbb X}}_1^{1/2}+{{\mathbb X}}_2^{1/2}=1$ if $p_1$ and $p_4$ separate $p_2$ and $p_3$. \end{enumerate} \end{prop} \begin{proof}
From the fundamental inequality for cross-ratios (Proposition \ref{prop:fundX}) we have \begin{eqnarray*} 0&\le&{{\mathbb X}}_1^2+{{\mathbb X}}_2^2-2{{\mathbb X}}_1-2{{\mathbb X}}_2+1-2{{\mathbb X}}_1{{\mathbb X}}_2\\ &=&({{\mathbb X}}_1+{{\mathbb X}}_2-1)^2-4{{\mathbb X}}_1{{\mathbb X}}_2\\ &=&({{\mathbb X}}_1+{{\mathbb X}}_2-2{{\mathbb X}}_1^{1/2}{{\mathbb X}}_2^{1/2}-1)({{\mathbb X}}_1+{{\mathbb X}}_2+2{{\mathbb X}}_1^{1/2}{{\mathbb X}}_2^{1/2}-1)\\ &=&\left(({{\mathbb X}}_1^{1/2}+{{\mathbb X}}_2^{1/2})^2-1\right)\left(({{\mathbb X}}_1^{1/2} -{{\mathbb X}}_2^{1/2})^2-1\right), \end{eqnarray*} and this proves the inequalities in (\ref{eq:X12}). The details of the proof of the last statement are left to the reader. \end{proof}
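For instance, the quadruple $p_1=(0,0)$, $p_2=(\infty,\infty)$, $p_3=(x,x)$, $p_4=(1,1)$ with $0<x<1$, whose points all lie on the standard Circle $R_0$, has $$ {{\mathbb X}}_1=x^2,\quad{{\mathbb X}}_2=(1-x)^2,\quad\text{so that}\quad {{\mathbb X}}_1^{1/2}+{{\mathbb X}}_2^{1/2}=x+(1-x)=1, $$ in accordance with case (3): here $p_1$ and $p_4$ separate $p_2$ and $p_3$, since $x$ lies on the arc from $0$ to $1$ not containing $\infty$.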
\subsection{M\"obius structure}\label{sec:mob}
From the real cross-ratio ${{\mathbb X}}:{{\mathcal C}}^\sharp_4({{\mathbb T}})\to{{\mathbb R}}$ we define a positive cross-ratio ${{\bf X}}:{{\mathcal C}}^\sharp_4({{\mathbb T}})\to{{\mathbb R}}_+$ by setting
$$
{{\bf X}}({{\mathfrak p}})=|{{\mathbb X}}({{\mathfrak p}})|^{1/2},
$$ for each ${{\mathfrak p}}\in{{\mathcal C}}_4^\sharp$. The positive cross-ratio is invariant under $\overline{{{\mathcal M}}({{\mathbb T}})}$. The M\"obius structure on ${{\mathbb T}}$ associated to ${{\bf X}}$ and restricted to ${{\mathcal C}}_4^\sharp$ is the map $$ {{\mathfrak M}}_{{\mathbb T}}:{{\mathcal C}}^\sharp_4({{\mathbb T}})\ni{{\mathfrak p}}\mapsto({{\bf X}}_1({{\mathfrak p}}),{{\bf X}}_2({{\mathfrak p}})). $$ Recall that $(S,\rho)$ is a pseudo-semi-metric space if $\rho:S\times S\to{{\mathbb R}}_+$ satisfies a)
$x=y$ implies $\rho(x,y)=0$ and b)
$\rho(x,y)=\rho(y,x)$,
for all $x,y\in S$. The M\"obius structure ${{\mathfrak M}}_{{\mathbb T}}$ is associated to the pseudo-semi-metric $\rho:{{\mathbb T}}\times{{\mathbb T}}\to{{\mathbb R}}_+$, given by $$
\rho\left((x_1,y_1),(x_2,y_2)\right)=|x_1-x_2|^{1/2}\cdot|y_1-y_2|^{1/2}, $$ for each $(x_1,y_1)$ and $(x_2,y_2)$ in ${{\mathbb T}}$. In Section \ref{sec:cons} we explain why we cannot have a natural positive cross-ratio compatible with the group action that is associated to any metric on ${{\mathbb T}}$. The following corollary concerning ${{\mathfrak M}}_{{\mathbb T}}$ follows directly from Proposition \ref{prop:Ptol-eq-T}. \begin{cor} The M\"obius structure ${{\mathfrak M}}_{{\mathbb T}}$ is not Ptolemaean. However, Circles are Ptolemaean circles for ${{\mathfrak M}}_{{\mathbb T}}$. \end{cor}
\section{Application to Boundaries of ${\rm SO_0}(2,2)/{\rm SO}(2)\times{\rm SO}(2)$ and $AdS^3$} \label{sec:cons} In this section we show how the torus appears as the F\"urstenberg boundary of the symmetric space ${\rm SO_0}(2,2)/{\rm SO}(2)\times{\rm SO}(2)$ as well as the ideal boundary of the 3-dimensional anti-de Sitter space $AdS^3$. We refer to \cite{BJ} for compactifications of symmetric spaces, to \cite{B} for recent developments on M\"obius structures in F\"urstenberg boundaries and finally to \cite{D} for a comprehensive treatment of anti-de Sitter space and its relations to Hyperbolic Geometry.
Let ${{\mathbb R}}^{2,2}$ denote the real vector space of dimension 4 equipped with a non-degenerate, indefinite pseudo-hermitian form $\langle\cdot,\cdot\rangle$ of signature $(2,2)$; when projectivising below we consider only its non-zero vectors. Such a form is given by a $4\times 4$ matrix with 2 positive and 2 negative eigenvalues.
Let ${{\bf x}}=\left[\begin{matrix} x_1 & x_2 &x_3 &x_4\end{matrix}\right]^T$ and ${{\bf y}}=\left[\begin{matrix} y_1 & y_2 &y_3 &y_4\end{matrix}\right]^T$ be column vectors. The pseudo-hermitian form is then defined by $$ \langle{{\bf x}},{{\bf y}}\rangle=x_1y_4-x_2y_3-x_3y_2+x_4y_1 $$ and it is given by the matrix $$
J=\left(\begin{matrix}
0&0&0&1\\
0&0&-1&0\\
0&-1&0&0\\
1&0&0&0
\end{matrix}\right). $$ The isometry group of this pseudo-hermitian form is $G={\rm SO_0}(2,2)$. There is a natural identification of $G$ with ${\rm SL}(2,{{\mathbb R}})\times{\rm SL}(2,{{\mathbb R}})$, see \cite{GPP}; if $$ A_1=\left(\begin{matrix} a_1&b_1\\ c_1&d_1\end{matrix}\right)\quad\text{and}\quad A_2=\left(\begin{matrix} a_2&b_2\\ c_2&d_2\end{matrix}\right)\in{\rm SL}(2,{{\mathbb R}}), $$ then the pair $(A_1,A_2)$ is identified to $$ A=\left(\begin{matrix} a_1A_2^{-1}&b_1A_2^{-1}\\ c_1A_2^{-1}&d_1A_2^{-1}\end{matrix}\right)\in{\rm SO}_0(2,2). $$ The group $K={\rm SO}(2)\times{\rm SO}(2)$ is a maximal compact subgroup of $G$ and $X=G/K$ is a symmetric space of rank 2. The symmetric space $X$ is also realised as ${{\bf H}}_{{\mathbb C}}^1\times{{\bf H}}_{{\mathbb C}}^1$, where ${{\bf H}}_{{\mathbb C}}^1=D$ is the Poincar\'e hyperbolic unit disc. The torus ${{\mathbb T}}=S^1\times S^1$ is the {\it maximal F\"urstenberg boundary} $\mathbb{F}(X)$ of the symmetric space $X$. Recall that if $G$ is a connected semi-simple Lie group and $X=G/K$ is the associated symmetric space, then the maximal F\"urstenberg boundary $\mathbb{F}(X)$ may be thought of as $G/P_0$, where $P_0$ is a minimal parabolic subgroup of $G$, see for instance \cite{BJ}. If $X$ is of rank $\ge 2$ then $\mathbb{F}(X)$ cannot be the whole boundary of any compactification of $X$. In particular, if $X=D\times D$, then $\mathbb{F}(X)={\rm SO}_0(2,2)/P_0$, $P_0=AN\times AN$, where $AN$ is the $AN$ group in the Iwasawa $KAN$ decomposition of ${\rm SL}(2,{{\mathbb R}})$. In this manner $\mathbb{F}(X)$ is just the corner of the boundary $(\overline{D}\times S^1)\cup(S^1\times\overline{D})$ of the compactification $\overline{D}\times\overline{D}$ of $X$.
A rather neat way to represent ${{\mathbb T}}={{\mathbb F}}(X)$ is via its isomorphism to the ideal boundary of anti-de Sitter space which is obtained as follows: For the pseudo-hermitian product there are subspaces of positive (space-like) vectors $V_+$, of null (light-like) vectors $V_0$ and of negative (time-like) vectors $V_-$: \begin{eqnarray*}
&&
V_+=\left\{{{\bf x}}\in{{\mathbb R}}^{2,2}\;|\;\langle{{\bf x}},{{\bf x}}\rangle> 0\right\},\\ &&
V_0=\left\{{{\bf x}}\in{{\mathbb R}}^{2,2}\;|\;\langle{{\bf x}},{{\bf x}}\rangle=0\right\},\\ &&
V_-=\left\{{{\bf x}}\in{{\mathbb R}}^{2,2}\;|\;\langle{{\bf x}},{{\bf x}}\rangle<0\right\}.
\end{eqnarray*} If $\lambda$ is a non-zero real, then $\langle\lambda{{\bf x}},\lambda{{\bf x}}\rangle=\lambda^2\langle{{\bf x}},{{\bf x}}\rangle$. Therefore $\lambda{{\bf x}}$ is positive, null or negative if and only if ${{\bf x}}$ is positive, null or negative, respectively. Let $P$ be the projection map from ${{\mathbb R}}^{2,2}$ to projective space $P{{\mathbb R}}^3$. The {\it projective model of anti-de Sitter space} $AdS^3$ is now defined as the collection of negative vectors $PV_-$ in $P{{\mathbb R}}^3$ and its {\it ideal boundary} $\partial_\infty AdS^3$ is defined as the collection $PV_0$ of null vectors. Anti-de Sitter space $AdS^3$ carries a natural Lorentz structure; the isometry group of this structure is the projectivisation of the group ${\rm SO}_0(2,2)$ of matrices preserving the pseudo-hermitian form with matrix $J$, that is, ${\rm PSO}_0(2,2)={\rm SO}_0(2,2)/\{\pm I\}$; here $I$ is the identity $4\times 4$ matrix. From the discussion above we have that ${\rm PSO_0}(2,2)$ is identified to ${\rm PSL}(2,{{\mathbb R}})^2={{\mathcal M}}({{\mathbb T}})$. Now the identification of $\partial_\infty AdS^3$ with the torus ${{\mathbb T}}={{\mathbb F}}(X)$ is given in terms of the Segre embedding ${{\mathcal S}}:{{\mathbb R}} P^1\times{{\mathbb R}} P^1\to{{\mathbb R}} P^3$. Recall that in homogeneous coordinates the Segre embedding $ w={{\mathcal S}}(x,y) $ is defined by $$ ({{\bf x}},{{\bf y}})=\left(\left[\begin{matrix} x_1\\
x_2
\end{matrix}\right]\;,\; \left[\begin{matrix} y_1\\
y_2
\end{matrix}\right]\right)\mapsto {{\bf w}}=\left[\begin{matrix} x_1y_1\\
x_1y_2\\
x_2y_1\\
x_2y_2
\end{matrix}\right]. $$ Notice that ${{\bf w}}$ is a null vector: indeed, $\langle{{\bf w}},{{\bf w}}\rangle=2(x_1y_1\cdot x_2y_2-x_1y_2\cdot x_2y_1)=0$.
The action of the isometry group ${\rm PSO_0}(2,2)={\rm PSL}(2,{{\mathbb R}})^2$ of $AdS^3$ extends naturally to the ideal boundary $\partial_\infty AdS^3$, which in this manner is identified to ${{\mathbb T}}$.
We stress at this point that in contrast with the case of hyperbolic spaces, there are distinct points in $\partial_\infty AdS^3$ which may be orthogonal. To see this, let $p={{\mathcal S}}(x,y)$ and $p'={{\mathcal S}}(x',y')$ be any distinct points; if $$ {{\bf x}}=\left[\begin{matrix} x_1\\x_2\end{matrix}\right],\quad {{\bf y}}=\left[\begin{matrix} y_1\\y_2\end{matrix}\right],\quad {{\bf x}}'=\left[\begin{matrix} x_1'\\x_2'\end{matrix}\right],\quad {{\bf y}}'=\left[\begin{matrix} y'_1\\y'_2\end{matrix}\right], $$ then $$ {{\bf p}}=\left[\begin{matrix}x_1y_1\\ x_1y_2\\ x_2y_1\\ x_2y_2 \end{matrix}\right],\quad{{\bf p}}'=\left[\begin{matrix} x_1'y_1'\\ x_1'y_2'\\ x_2'y_1'\\ x_2'y_2'\end{matrix}\right]. $$ One then calculates \begin{equation*}
\langle{{\bf p}},{{\bf p}}'\rangle=(x_1x_2'-x_2x_1')(y_1y_2'-y_2y_1'). \end{equation*} Therefore $\langle{{\bf p}},{{\bf p}}'\rangle=0$ if and only if $x=x'$ or $y=y'$. For fixed $p\in\partial_\infty AdS^3$, $p={{\mathcal S}}(x,y)$ the locus $$
p^c=\{{{\mathcal S}}(x,z)\;|\;z\in S^1\}\cup \{{{\mathcal S}}(z,y)\;|\;z\in S^1\}\equiv (\{x\}\times S^1)\cup(S^1\times\{y\}) $$ comprises the points of the ideal boundary which are orthogonal to $p$. We call $p^c$ the {\it cross-completion} of $p$; transferring the picture to the context of a point $p=(x,y)$ lying on ${{\mathbb T}}$, the points orthogonal to $p$ are exactly the points of $(\{x\}\times S^1)\cup(S^1\times\{y\})$.
The ideal boundary $\partial_\infty AdS^3$ may thus be thought of as the union of the cross-completion of $\infty={{\mathcal S}}(\infty,\infty)$ and the remaining region of the torus, which we denote by $N$. The set $N$ comprises points $p={{\mathcal S}}(x,y)$, $x,y\neq \infty$, with standard lifts $$ {{\bf p}}=\left[\begin{matrix}xy&x&y&1\end{matrix}\right]^T, $$ and can be viewed as the saddle surface $x_1=x_2x_3$ embedded in ${{\mathbb R}}^3$. But actually, $N$ admits a group structure: First, if $p={{\mathcal S}}(x,y)\in N$, we call $(x,y)$ the $N$-coordinates of $p$. To each such $p$ we assign the matrix $$ T(x,y)=\left(\begin{matrix}
1&y&x&xy\\
0&1&0&x\\
0&0&1&y\\
0&0&0&1
\end{matrix}\right), $$ whose projectivisation gives an element of ${\rm PSO}_0(2,2)$ in the unipotent isotropy group of $\infty$. Note that if $G$ is the isomorphism ${\rm SL}(2,{{\mathbb R}})^2\to{\rm SO}_0(2,2)$, and $KAN$ is the Iwasawa decomposition of ${\rm SL}(2,{{\mathbb R}})$, then $T(x,y)$ lies in the image $G(N,N)$. It is straightforward to verify that $T(x,y)$ leaves the cross-completion $\infty^c$ of infinity invariant and maps $o={{\mathcal S}}(0,0)$ to $p$. Also, for $p=(x,y)$ and for $p'=(x',y')$ we have $$ T(x,y)T(x',y')=T(x+x',y+y'),\quad \left(T(x,y)\right)^{-1}=T(-x,-y). $$ Thus $T$ is a group homomorphism from ${{\mathbb R}}^2$ to ${\rm PSO_0}(2,2)$ with group law $$ (x,y)\star(x',y')=(x+x',y+y'). $$ In other words $N$ admits the structure of the additive group $({{\mathbb R}}^2,+)$. The natural Euclidean metric $e:N\times N\to{{\mathbb R}}_+$, where $$ e\left((x,y),(x',y')\right)=((x-x')^{2}+(y-y')^2)^{1/2}, $$ is invariant under the left action of $N$, but its similarity group is not ${\rm PSO}_0(2,2)$. To see this, consider $$ D_\delta=\left(\begin{matrix} \delta&0\\ 0&1/\delta\end{matrix}\right)\quad \text{and}\quad D_{\delta'}=\left(\begin{matrix} \delta'&0\\ 0&1/\delta'\end{matrix}\right),\quad\delta,\delta'>0. $$ Then $G(D_\delta,D_{\delta'})(\xi_1,\xi_2)=(\delta^2\xi_1,(1/\delta')^2\xi_2)$, and $A=G(D_\delta,D_{\delta'})$ does not scale $e$ unless $\delta\delta'=1$, which is not always the case. Since all metrics in ${{\mathbb R}}^2$ are equivalent to $e$, there is no natural metric in $N$ whose similarity group equals ${\rm PSO}_0(2,2)$. In contrast, we define a function $a:N\to{{\mathbb R}}$, $$ a(x,y)=xy $$ and a gauge $$
\|(x,y)\|=|a(x,y)|^{1/2}=|x|^{1/2}|y|^{1/2}. $$ Essentially, we are mimicking here Kor\'anyi and Reimann and their construction for the Heisenberg group case, see \cite{KR}. The pseudo-semi-metric $\rho:N\times N\to{{\mathbb R}}_+$ is then defined by $$
\rho\left((x,y),(x',y')\right)=\|(x',y')^{-1}\star(x,y)\|=|x-x'|^{1/2}|y-y'|^{1/2}. $$
Let $\overline{N}=N\cup\{\infty\}$. The set ${{\mathcal C}}_4^\sharp({{\mathbb T}})$ of admissible quadruples is actually the set ${{\mathcal C}}_4^\sharp(\overline{N})$ of quadruples of points of $\overline{N}$ such that none of these points belongs to the cross-completion of any other point in the quadruple. The configuration space ${{\mathcal F}}_4^\sharp({{\mathbb T}})$ is thus identified to ${{\mathcal C}}_4^\sharp(\overline{N})$ factored by the diagonal action of ${\rm PSO}_0(2,2)$, and the configuration space $\overline{{{\mathcal F}}_4^\sharp({{\mathbb T}})}$ is ${{\mathcal C}}_4^\sharp(\overline{N})$ factored by the diagonal action of $\overline{{\rm PSO}_0(2,2)}$, which comprises elements of ${\rm PSO}_0(2,2)$ followed by the involution $\iota_0:(x,y)\mapsto(y,x)$ of $\overline{N}$. The real cross-ratio ${{\mathbb X}}$ is thus defined in ${{\mathcal C}}_4^\sharp(\overline{N})$ by $$ {{\mathbb X}}({{\mathfrak p}})=\frac{a(p_4\star p_2^{-1})\cdot a(p_3\star p_1^{-1})}{a(p_4\star p_1^{-1})\cdot a(p_3\star p_2^{-1})}, $$ for each ${{\mathfrak p}}\in{{\mathcal C}}_4^\sharp(\overline{N})$. The positive cross-ratio is $$ {{\bf X}}({{\mathfrak p}})=\frac{\rho(p_4,p_2)\cdot\rho(p_3,p_1)}{\rho(p_4,p_1)\cdot\rho(p_3,p_2)}. $$ The results of the previous section apply now immediately.
\end{document}
\begin{document}
\begin{frontmatter} \title{On Induced Colourful Paths in Triangle-free Graphs} \author{Jasine Babu\footnote{Indian Institute of Technology, Palakkad, India. E-mail: \texttt{jasine@iitpkd.ac.in}}\hspace{.5in} Manu Basavaraju\footnote{National Institute of Technology Karnataka, Surathkal, India. E-mail: \texttt{manub@nitk.ac.in}}\hspace{.5in} L.~Sunil Chandran\footnote{Indian Institute of Science, Bangalore, India. E-mail: \texttt{sunil@iisc.ac.in}}\hspace{.5in} Mathew~C. Francis\footnote{Indian Statistical Institute, Chennai, India. E-mail: \texttt{mathew@isichennai.res.in}}}
\begin{abstract} Given a graph $G=(V,E)$ whose vertices have been properly coloured, we say that a path in $G$ is \textit{colourful} if no two vertices in the path have the same colour. It is a corollary of the Gallai-Roy-Vitaver Theorem that every properly coloured graph contains a colourful path on $\chi(G)$ vertices. We explore a conjecture that states that every properly coloured triangle-free graph $G$ contains an induced colourful path on $\chi(G)$ vertices and prove its correctness when the girth of $G$ is at least $\chi(G)$. Recent work on this conjecture by Gy\'arf\'as and S\'ark\"ozy, and Scott and Seymour has shown the existence of a function $f$ such that if $\chi(G)\geq f(k)$, then an induced colourful path on $k$ vertices is guaranteed to exist in any properly coloured triangle-free graph $G$. \end{abstract} \begin{keyword} Induced Path \sep Colourful Path \sep Triangle-free Graph \end{keyword} \end{frontmatter} \section{Introduction} All graphs considered in this paper are simple, undirected and finite. For a graph $G=(V,E)$, we denote the vertex set of $G$ by $V(G)$ and the edge set of $G$ by $E(G)$. A function $c:V(G)\rightarrow\{1,2,\ldots,k\}$ is said to be a \emph{proper $k$-colouring} of $G$ if for any edge $uv\in E(G)$, we have $c(u)\neq c(v)$. A graph is \textit{properly coloured} if it has an associated proper $k$-colouring $c$ specified (for some $k$). The minimum integer $k$ for which a graph $G$ has a proper $k$-colouring is the \emph{chromatic number} of $G$, denoted by $\chi(G)$. A subgraph $H$ of a properly coloured graph $G$ is said to be \emph{colourful} if no two vertices of $H$ have the same colour. If a colourful subgraph $H$ of $G$ is also an induced subgraph, then we say that $H$ is an \emph{induced} colourful subgraph of $G$. The \emph{length} of a path or cycle in $G$ is the number of edges in the path or cycle. Therefore, a path on $t$ vertices has length $t-1$ and a cycle on $t$ vertices has length $t$.
The following classical result of Gallai, Roy and Vitaver (see~\cite{West}) tells us that every (not necessarily optimally) properly coloured graph $G$ has a colourful path on $\chi(G)$ vertices (an alternative proof for this is given in Theorem~\ref{thm:colpath}).
\begin{theorem}[Gallai-Roy-Vitaver] Let $G$ be a graph and let $H$ be any directed graph obtained by orienting the edges of $G$. Then $H$ contains a directed path on $\chi(G)$ vertices. \end{theorem}
Indeed, given a properly coloured graph $G$, we can construct a directed graph $H$ by fixing an arbitrary order on the colours and orienting every edge from the vertex of lower colour to the vertex of higher colour. Then, by the Gallai-Roy-Vitaver Theorem, we know that there is a directed path on $\chi(G)$ vertices in $H$, and this is a colourful path in $G$ as the colours on this path are strictly increasing.
We are interested in the question of when one can find colourful paths on $\chi(G)$ vertices that are also induced in a given properly coloured graph $G$. Note that the colourful path on $\chi(G)$ vertices that should exist in any properly coloured graph $G$, as noted above, may not always be an induced path. In fact, when $G$ is a complete graph, there is no induced path on more than two vertices in the graph. The following conjecture is due to N.~R.~Aravind~\cite{Aravind}.
\begin{conjecture}\label{conj:main} Let $G$ be a triangle-free graph that is properly coloured. Then there is an induced colourful path on $\chi(G)$ vertices in $G$. \end{conjecture}
Recently, Gy\'arf\'as and S\'ark\"ozy~\cite{GyarfasSarkozy} studied this conjecture and showed that there exists a function $f(k)$ such that in any properly coloured graph $G$ with girth at least 5 and $\chi(G)\geq f(k)$, there is an induced colourful path on $k$ vertices. This was improved by Scott and Seymour~\cite{ScottSeymour}, who removed the girth condition, and showed that for any two integers $k$ and $t$, there exists a function $f(k,t)$ such that in any properly coloured graph $G$ with $\omega(G)\leq t$ and $\chi(G)\geq f(k,t)$, there is an induced colourful path on $k$ vertices (here, $\omega(G)$ denotes the maximum size of a clique in $G$).
A necessary condition for Conjecture~\ref{conj:main} to hold is the presence of an induced path on $\chi(G)$ vertices in any triangle-free graph $G$. Indeed, something stronger is known to be true: each vertex in a triangle-free graph $G$ is the starting point of an induced path on $\chi(G)$ vertices~\cite{Gyarfas}. Concerning induced trees, Gy\'{a}rf\'{a}s~\cite{Gyarfas1} and Sumner~\cite{Sumner} conjectured that there exists an integer-valued function $f$ defined on finite trees with the property that every triangle-free graph $G$ with $\chi(G)\geq f(T)$ contains $T$ as an induced subgraph. This was proven true for trees of radius two by Gy\'{a}rf\'{a}s, Szemer\'{e}di, and Tuza~\cite{GyarSzTu}. There have been several investigations on variants of the Gallai-Roy-Vitaver Theorem~\cite{AlishahiTT11, Li}. Every connected graph $G$ other than $C_7$ admits a proper $\chi(G)$-colouring such that every vertex of $G$ is the beginning of a (not necessarily induced) colourful path on $\chi(G)-1$ vertices~\cite{AlishahiTT11}. A stronger version of the Gallai-Roy-Vitaver Theorem that guarantees an induced directed path on $\chi(G)$ vertices in any directed graph $G$ would have easily implied Conjecture~\ref{conj:main}. Clearly, such a theorem cannot be true for every directed graph. But Kierstead and Trotter~\cite{KiersteadT92} show that no such result can be obtained even if the underlying undirected graph of $G$ is triangle-free. They show that for every natural number $k$, there exists a digraph $G$ such that its underlying undirected graph is triangle-free and has chromatic number $k$, but $G$ has no induced directed path on 4 vertices.
Conjecture~\ref{conj:main} is readily seen to be true for any triangle-free graph $G$ with $\chi(G)=3$, because the colourful path guaranteed to exist in $G$ by the Gallai-Roy-Vitaver Theorem is also an induced path in $G$. For the same reason, the conjecture is true for any graph $G$ with $g(G)>\chi(G)$, where $g(G)$ is the \emph{girth} of $G$, or the length of the shortest cycle in $G$. In this paper, we first prove Conjecture~\ref{conj:main} for the case when $\chi(G)=4$. Note that it follows from the above observation that to prove this, we only need to consider graphs $G$ with $g(G)=4$. Proving Conjecture~\ref{conj:main} in its full generality even for the case when $\chi(G)=5$ does not seem to be easy. As explained above, in order to prove the conjecture for the case $\chi(G)=5$, we only need to prove it for graphs with $g(G)\in\{4,5\}$. Our approach shows that the conjecture is true when $g(G)=5$; the case when $g(G)=4$ is open. Scott and Seymour~\cite{ScottSeymour} mention that they verified by hand that the conjecture holds for all possible colourings of the Mycielski graph on 23 vertices having chromatic number 5. One natural way to weaken the conjecture would be to restrict the girth of the graph to be above some constant fraction of $\chi(G)$. Even this appears to be difficult. The main result of this paper shows that for each value of $\chi(G)\geq 4$, the conjecture is true for graphs with $g(G)=\chi(G)$. \section{Preliminaries} Notation used in this paper is the standard notation used in graph theory (see e.g.~\cite{West}). We shall now describe a special greedy colouring procedure for an already coloured graph that will later help us in proving our main result. \paragraph{\textbf{The refined greedy algorithm}}Given a properly coloured graph $G$ with the colouring $\beta$, we will construct a new proper colouring $\alpha:V(G)\rightarrow \mathbb{N}^{>0}$ of $G$, using the algorithm given below. Let $b_1<b_2<\cdots<b_t$ be the colours used by $\beta$. 
\begin{tabbing} ~~~~~\=~~~~~\=~~\kill For every vertex $v\in V(G)$, set $\alpha(v)\leftarrow 0$\\ \textbf{for} $i$ from 1 to $t$ \textbf{do}\\ \>\textbf{for each} vertex $v$ with $\beta(v)=b_i$ and $\alpha(v)=0$ \textbf{do}\\ \>\>Colour $v$ with the least positive integer that has \\ \>\>not already been assigned to a neighbour of it,\\ \>\>i.e., set $\alpha(v)\leftarrow\min (\mathbb{N}^{>0} \setminus \{\alpha(u)\colon u\in N(v)\})$. \end{tabbing}
Let $G$ be a graph with a proper colouring $\beta$ and let $\alpha$ be the proper colouring that is constructed by the refined greedy algorithm. We now define a \emph{decreasing path} in $G$ as follows:
\begin{definition}[Decreasing path] A path $u_1u_2\ldots u_l$ in $G$ is said to be a ``decreasing path'' if for $2\leq i\leq l$, $\alpha(u_i)<\alpha(u_{i-1})$ and $\beta(u_i)<\beta(u_{i-1})$. \end{definition}
\begin{lemma}\label{lem:path}
Let $v\in V(G)$ and $X=\{a_1,a_2,\ldots,a_{|X|}\}\subseteq\{1,2,\ldots,\alpha(v)-1\}$ such that $a_1<a_2<\cdots<a_{|X|}$. Then there is a decreasing path $vu_{|X|}u_{|X|-1} \ldots u_1$ in $G$ such that for $1\leq i\leq |X|$, $\alpha(u_i)=a_i$. \end{lemma} \begin{proof}
We shall prove this by induction on $\alpha(v)$. It is easy to see that the statement is true for the base case when $\alpha(v)=1$ (because $X=\emptyset$ in that case). Suppose that the statement is true for vertices $u$ with $\alpha(u)<\alpha(v)$. Note that the refined greedy algorithm colours each vertex exactly once. The fact that the algorithm assigned the colour $\alpha(v)$ to $v$ implies that at the time of colouring $v$, we had $\alpha(v)=\min(\mathbb{N}^{>0}\setminus \{\alpha(u)\colon u\in N(v)\})$. Since $a_{|X|}<\alpha(v)$, this means that at that point of time, there was $w\in N(v)$ having $\alpha(w)=a_{|X|}$. Also, since $w$ had previously been coloured by the algorithm, we have $\beta(w)\leq\beta(v)$. But as $w\in N(v)$, we know that $\beta(w)\neq\beta(v)$, giving us $\beta(w)<\beta(v)$. Now, applying the induction hypothesis on $w$ and the set $X\setminus\{a_{|X|}\}$, we get that there is a decreasing path $wu'_{|X|-1}u'_{|X|-2}\ldots u'_1$ in $G$ such that for $1\leq i\leq |X|-1$, $\alpha(u'_i)=a_i$. It is clear that $vwu'_{|X|-1}u'_{|X|-2}\ldots u'_1$ is then a decreasing path of the form $vu_{|X|}u_{|X|-1}\ldots u_1$, where for $1\leq i\leq |X|$, $\alpha(u_i)=a_i$. \end{proof}
The above observation about the refined greedy algorithm can be used to show that there is a colourful path on $\chi(G)$ vertices in every properly coloured graph $G$ (without using the Gallai-Roy-Vitaver Theorem).
\begin{theorem}\label{thm:colpath} If $G$ is any graph whose vertices are properly coloured, then there is a colourful path on $\chi(G)$ vertices in $G$. \end{theorem} \begin{proof} Let $\beta$ denote the proper colouring of $G$. Run the refined greedy algorithm on $G$ to generate the colouring $\alpha$. Clearly, the algorithm will use at least $\chi(G)$ colours as the colouring $\alpha$ generated by the algorithm is also a proper colouring of $G$. Let $v$ be any vertex in $G$ with $\alpha(v)\geq\chi(G)$. Now consider the set $X=\{1,2,\ldots,\chi(G)-1\}$. By applying Lemma~\ref{lem:path} on $v$ and $X$, we can conclude that there is a path on $\chi(G)$ vertices starting at $v$ on which the colours in the colouring $\beta$ are strictly decreasing. This path is a colourful path on $\chi(G)$ vertices in $G$. \end{proof}
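The proof of Theorem~\ref{thm:colpath} can be traced computationally: starting from a vertex of largest label, repeatedly step to a neighbour carrying the next smaller requested label. The sketch below assumes \texttt{alpha} was produced by the refined greedy algorithm, so that (as Lemma~\ref{lem:path} guarantees) every label below a vertex's own label occurs on some neighbour of strictly smaller $\beta$-colour; the function name and data layout are our own illustrative conventions:

```python
def decreasing_path(adj, alpha, beta, v, labels):
    """Trace the decreasing path of Lemma lem:path: from v, step to a
    neighbour carrying each requested label, in decreasing order of
    label; beta strictly decreases along the way."""
    path = [v]
    for a in sorted(labels, reverse=True):
        # such a neighbour exists when alpha came from the refined
        # greedy algorithm and a < alpha(path[-1])
        nxt = next(u for u in adj[path[-1]]
                   if alpha[u] == a and beta[u] < beta[path[-1]])
        path.append(nxt)
    return path

# C5 with a proper colouring beta; alpha below is what the refined
# greedy algorithm produces for it. Vertex 3 has the largest label,
# namely 3 = chi(C5).
adj = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
beta = {0: 2, 1: 1, 2: 2, 3: 3, 4: 1}
alpha = {0: 2, 1: 1, 2: 2, 3: 3, 4: 1}
path = decreasing_path(adj, alpha, beta, 3, {1, 2})
```

The returned path visits $\chi(C_5)=3$ vertices on which the $\beta$-colours strictly decrease, i.e., a colourful path.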
\begin{corollary}\label{cor:colpath} Any properly coloured graph $G$ with $g(G)>\chi(G)$ has an induced colourful path on $\chi(G)$ vertices. \end{corollary} \begin{proof} If $g(G)>\chi(G)$, then the colourful path given by Theorem~\ref{thm:colpath} is an induced path in $G$. \end{proof}
This implies that the conjecture is true for all triangle-free graphs with chromatic number at most 3. It also implies that in order to prove Conjecture~\ref{conj:main}, one only has to consider graphs $G$ with $4\leq g(G)\leq\chi(G)$. The main result of this paper is that in any properly coloured graph $G$ with $4\leq g(G)=\chi(G)$, there exists an induced colourful path on $\chi(G)$ vertices.
\section{Induced colourful paths in graphs with girth equal to chromatic number} In this section, we shall prove our main result, given by the theorem below. \begin{theorem}\label{thm:main} Let $G$ be a graph with $g(G)\geq\chi(G)=k$, where $k\geq 4$, and whose vertices have been properly coloured. Then there exists an induced colourful path on $k$ vertices in $G$. \end{theorem} We can assume that $G$ is ``critical'', i.e., every proper induced subgraph of $G$ has chromatic number less than $k=\chi(G)$. This is because if the theorem is proven for critical graphs, then since $G$ contains a critical induced subgraph $G'$ having $\chi(G')=k$, we can apply the theorem to $G'$, whose vertices are coloured with the same colours that they had in $G$, to get an induced colourful path in $G'$ containing $k$ vertices (note that $g(G')\geq g(G)$). This path is clearly also an induced colourful path in $G$. Assuming that $G$ is critical, the following observation is not too hard to see~\cite{West}.
\begin{observation}\label{obs:mindegree} Every vertex in $G$ has degree at least $k-1$. \end{observation}
Note that critical graphs are connected, and hence we assume that $G$ is connected. Note also that by Corollary~\ref{cor:colpath}, we can assume $g(G)=k$. Let $\beta:V(G)\rightarrow\mathbb{N}^{>0}$ denote the proper colouring of $G$ that is given.
A $k$-cycle in $G$ in which no colour repeats is said to be a \emph{colourful $k$-cycle}, sometimes shortened to just ``colourful cycle''. Notice that every colourful cycle in $G$ is also an induced cycle as $g(G)=k$. Also, from here onwards, we shorten ``colourful path on $k$ vertices'' to just ``colourful path''.
Suppose, for the sake of contradiction, that there is no induced colourful path on $k$ vertices in $G$.
\begin{observation}\label{obs:colpath} If $y_1y_2\ldots y_k$ is a colourful path on $k$ vertices in $G$, then the edge $y_1y_k\in E(G)$: since we have assumed that $G$ contains no induced colourful path on $k$ vertices, the path must have a chord, and since $g(G)=k$, the only possible chord is $y_1y_k$. Thus, $y_1y_2\ldots y_ky_1$ is a colourful $k$-cycle in $G$. \end{observation}
Let $\alpha$ be a proper colouring of $G$ generated by running the refined greedy algorithm on $G$. We shall refer to the colours of the colouring $\alpha$ as ``labels''. From here onwards, we shall reserve the word ``colour'' to refer to a colour in the colouring $\beta$. As before, whenever we say that a path or a cycle is ``colourful'', we are actually saying that it is colourful in the colouring $\beta$.
We say that a path with no repeating colours is an ``almost decreasing path'' if the subpath induced by the vertices other than the starting vertex is a decreasing path. Note that any decreasing path is also an almost decreasing path.
\begin{definition} We say that a set of vertices or a subgraph in $G$ ``sees'' the colour $i$ if one of the vertices in it has colour~$i$. \end{definition}
If $G_1$ and $G_2$ are two subgraphs of $G$, then we define $G_1\cup G_2$ to be the subgraph of $G$ with vertex set $V(G_1)\cup V(G_2)$ and edge set $E(G_1)\cup E(G_2)$. In particular, if $G_1$ is a subgraph of $G$ and $xy\in E(G)$, we denote by $G_1\cup xy$ the subgraph with vertex set $V(G_1)\cup\{x,y\}$ and edge set $E(G_1)\cup\{xy\}$.
The proof of Theorem~\ref{thm:main} is split into two cases: when $k=4$ and when $k>4$.
\subsection{Case when $k=4$}
In this case, we have $\chi(G)=g(G)=4$.
As $\alpha$ is also a proper colouring of $G$, we know that there exists a vertex $v$ in $G$ with label 4 (i.e., $\alpha(v)=4$). By Lemma~\ref{lem:path}, there exists a decreasing path $v_4v_3v_2v_1$ where $v_4=v$ and for $1\leq i\leq 3$, we have $\beta(v_i)<\beta(v_{i+1})$ and $\alpha(v_i)=i$. Clearly, as $v_4v_3v_2v_1$ is a decreasing and hence colourful path, by Observation~\ref{obs:colpath}, we have $v_1v_4\in E(G)$. Again by Lemma~\ref{lem:path}, we have a path $vv'_2v'_1$ in which we have $\beta(v'_1)<\beta(v'_2)<\beta(v)$, $\alpha(v'_2)=2$ and $\alpha(v'_1)=1$. Note that $v'_2\neq v_2$ (as otherwise $v_4v_3v_2v_4$ would be a triangle in $G$) and $v'_1\neq v_1$ (as otherwise $vv'_2v_1v$ would be a triangle in $G$). This means that the vertices in $\{v_4,v_3,v_2,v_1,v'_2,v'_1\}$ are all pairwise distinct. Let $\beta(v_i)=b_i$ for each $i$, where $1 \le i \le 4$. We shall call the colours $b_1,b_2,b_3,b_4$ ``primary colours''. Clearly, we have $b_1<b_2<b_3<b_4$.
\begin{claim} $\beta(v'_2)=b_2$ and $\beta(v'_1)=b_1$. \end{claim} \begin{proof} Suppose that $\beta(v'_2) \neq b_2$. Then we have that either the path $v'_2v_4v_3v_2$ or the path $v'_2v_4v_1v_2$ is colourful, which implies that $v'_2v_2 \in E(G)$, a contradiction since $\alpha(v'_2)=\alpha(v_2)$. Therefore we have $\beta(v'_2)=b_2$. Since $vv'_2v'_1$ is a decreasing path, this tells us that $\beta(v'_1)<b_2$. Thus, if $\beta(v'_1)\neq b_1$, the path $v'_1v'_2v_4v_1$ is colourful, implying that $v'_1v_1 \in E(G)$, which is a contradiction since $\alpha(v'_1)=\alpha(v_1)$. We can thus conclude that $\beta(v'_1)=b_1$. \end{proof}
Now notice that the path $v'_1v'_2v_4v_3$ is colourful and hence we have that $v'_1v_3 \in E(G)$. We call the vertices in the set $\{v_4,v_3,v_2,v_1,v'_2,v'_1\}$ ``forced vertices''. The vertices of $G$ other than these are called ``optional vertices''. Figure~\ref{fig:4} shows the subgraph induced in $G$ by the forced vertices. From our previous observations, we have that for any forced vertex $w$, $\beta(w)=b_{\alpha(w)}$. The following two observations about forced vertices are easy to verify.
\begin{figure}
\caption{The forced vertices when $k=4$.}
\label{fig:4}
\end{figure}
\begin{observation}\label{obs:3path}
Let $u$ be a forced vertex. For $X\subseteq\{b_1,b_2,b_3,b_4\}\setminus\{\beta(u)\}$ such that $|X|=2$, there exists a colourful path on 3 vertices starting at $u$ that sees exactly the colours in $X\cup\{\beta(u)\}$ and contains only forced vertices. \end{observation}
\begin{observation}\label{obs:nonadj} Let $x,y$ be forced vertices such that $xy\notin E(G)$ and $\beta(x)\neq\beta(y)$. Then either there exists a forced vertex $x'\in N(x)$ such that $\beta(x')=\beta(y)$ or there exists a forced vertex $y'\in N(y)$ such that $\beta(y')=\beta(x)$. \end{observation}
\begin{lemma}\label{lem:optionalprops} Let $u$ be an optional vertex. If $N(u)$ contains a forced vertex $x$, then there exist forced vertices $y,z$ such that $uxzyu$ is a colourful cycle. \end{lemma} \begin{proof}
Choose some $X\subseteq\{b_1,b_2,b_3,b_4\}\setminus\{\beta(u),\beta(x)\}$ such that $|X|=2$. From Observation~\ref{obs:3path}, there exists a colourful path $xzy$, where $z$ and $y$ are forced vertices, that sees exactly the colours in $X\cup\{\beta(x)\}$. Clearly, $uxzy$ is a colourful path, which implies by Observation~\ref{obs:colpath} that $uxzyu$ is a colourful cycle. \end{proof} \begin{lemma}\label{lem:optforcedneigh} Every optional vertex is adjacent to at least one forced vertex. \end{lemma} \begin{proof} Consider the set of all optional vertices that have no forced vertices as neighbours. For the sake of contradiction, assume that this set is nonempty. Let $w$ be a vertex in this set that is closest to a forced vertex. As $G$ is connected, $w$ has a neighbour $w'$ such that $w'$ is an optional vertex and $N(w')$ contains a forced vertex. From Lemma~\ref{lem:optionalprops}, there is a colourful cycle $w'xzyw'$, where $x,z,y$ are all forced vertices. If $\beta(w)\neq\beta(z)$, then at least one of the paths $ww'xz$ or $ww'yz$ will be a colourful path, and by Observation~\ref{obs:colpath} we will have $wz\in E(G)$, which contradicts the fact that there was no forced vertex in $N(w)$. We can therefore assume that $\beta(w)=\beta(z)$. Notice that $x$ and $y$ are two nonadjacent forced vertices with $\beta(x)\neq\beta(y)$. By Observation~\ref{obs:nonadj}, we then get that either $N(x)$ contains a forced vertex $x'$ such that $\beta(x')=\beta(y)$ or $N(y)$ contains a forced vertex $y'$ such that $\beta(y')=\beta(x)$. In the former case, $ww'xx'$ is a colourful path and in the latter case, $ww'yy'$ is a colourful path. By Observation~\ref{obs:colpath}, we now have that either $wx'\in E(G)$ or $wy'\in E(G)$. This again contradicts the fact that there are no forced vertices in $N(w)$.
\end{proof} Let $S_1$ denote the set of optional vertices adjacent to at least one of the forced vertices $\{v_4,v_2,v'_1\}$ and let $S_2$ denote the set of optional vertices adjacent to at least one of the forced vertices $\{v_3,v_1,v'_2\}$.
\begin{lemma}\label{lem:s1s2} \begin{enumerate} \itemsep 0in \renewcommand{\theenumi}{\textit{(\roman{enumi})}} \renewcommand{\labelenumi}{\theenumi} \item\label{lem:s1s2disjoint} $S_1$ and $S_2$ are disjoint, and \item\label{lem:s1s2independent} $S_1$ and $S_2$ are both independent sets. \end{enumerate} \end{lemma} \begin{proof} First let us show that $S_1$ and $S_2$ are disjoint. Suppose that there is a vertex $w\in S_1\cap S_2$. Consider any two forced vertices $x$ and $y$ in $N(w)$ such that
$x\in \{v_4,v_2,v'_1\}$ and $y\in \{v_3,v_1,v'_2\}$. As $G$ is triangle-free, we only have the two possibilities $(x=v'_1, y=v_1)$ or $(x=v_2, y=v'_2)$. Note that this implies that $|N(w)\cap\{v_4,v_2,v'_1\}|=|N(w)\cap\{v_3,v_1,v'_2\}|=1$. This lets us conclude that the set of forced vertices in $N(w)$ is either $\{v'_1,v_1\}$ or $\{v_2,v'_2\}$. We now have that the two forced vertices in the neighbourhood of $w$ have the same colour. Then there cannot exist a colourful cycle containing $w$ in which all the other vertices are forced vertices, contradicting Lemma~\ref{lem:optionalprops}. This proves~\ref{lem:s1s2disjoint}.
From~\ref{lem:s1s2disjoint}, we have that for each vertex $w\in S_1$, the forced vertices in $N(w)$ all lie in $\{v_4,v_2,v'_1\}$ and for each vertex $w'\in S_2$, the forced vertices in $N(w')$ all lie in $\{v_3,v_1,v'_2\}$. Since we know from Lemma~\ref{lem:optionalprops} and Lemma~\ref{lem:optforcedneigh} that each vertex in $S_1\cup S_2$ has at least two forced vertices in its neighbourhood, we can conclude that each vertex in $S_1$ has at least two neighbours from $\{v_4,v_2,v'_1\}$ and that each vertex in $S_2$ has at least two neighbours from $\{v_3,v_1,v'_2\}$. This means that for any two $w,w'\in S_1$, there is at least one vertex in $\{v_4,v_2,v'_1\}$ that is a neighbour of both $w$ and $w'$. As $G$ is triangle-free, we can conclude that $ww'\notin E(G)$. For the same reason, for any two vertices $w,w'\in S_2$, we have $ww'\notin E(G)$. This proves~\ref{lem:s1s2independent}. \end{proof}
From Lemma~\ref{lem:s1s2}\ref{lem:s1s2disjoint}, we know that there are no edges between $S_1$ and $\{v_3$, $v_1,v'_2\}$. Similarly, there are no edges between $S_2$ and $\{v_4,v_2,v'_1\}$. Now, by Lemma~\ref{lem:s1s2}\ref{lem:s1s2independent}, we have that $S_1 \cup\{v_3,v_1,v'_2\}$ is an independent set and $S_2 \cup \{v_4,v_2,v'_1\}$ is an independent set. Since from Lemma~\ref{lem:optforcedneigh}, we know that $V(G)=S_1\cup S_2\cup\{v_4,v_3,v_2,v_1,v'_2,v'_1\}$, this tells us that $G$ is bipartite, which contradicts the assumption that $\chi(G)=4$. Therefore, there can be no properly coloured graph $G$ such that $g(G)=\chi(G)=4$ with no induced colourful path on 4 vertices. This completes the proof of Theorem~\ref{thm:main} for the case $k=4$.
\subsection{Case when $k>4$}
We now use the fact that $g(G)=k>4$ to complete the proof. In this case we define forced vertices, primary colours and the primary cycle in a more general way. First, we give a useful lemma.
\begin{lemma}\label{lem:colourfulcycle} Let $y_1y_2\ldots y_ky_1$ be a colourful $k$-cycle. Let $z\in N(y_i)\setminus \{y_{i-1},y_{i+1}\}$ for some $i\in\{1,2,\ldots,k\}$. Then $\beta(z)\in \{\beta(y_1),\ldots,\beta(y_k)\}\setminus \{\beta(y_{i-1}),\beta(y_i),$ $\beta(y_{i+1})\}$. (Here we assume that $y_{i+1}=y_1$ when $i=k$ and that $y_{i-1}=y_k$ when $i=1$.) \end{lemma} \begin{proof} Clearly, $z\not\in\{y_1,y_2,\ldots,y_k\}$ as every colourful cycle is an induced cycle. Suppose $\beta(z)\notin \{\beta(y_1),\ldots,\beta(y_k)\}\setminus \{\beta(y_{i-1}),\beta(y_i),\beta(y_{i+1})\}$. Clearly, $\beta(z)\neq\beta(y_i)$. Suppose that $\beta(z) \neq \beta(y_{i+1})$. Then observe that $zy_iy_{i+1}\ldots y_ky_1\ldots y_{i-2}$ is a colourful path on $k$ vertices and hence $zy_{i-2}\in E(G)$. This implies that $zy_iy_{i-1}y_{i-2}z$ is a 4-cycle in $G$, which is a contradiction to the fact that $g(G)=k>4$. Therefore, $\beta(z)=\beta(y_{i+1})$. Then the path $zy_iy_{i-1}\ldots y_1y_k\ldots y_{i+2}$ is a colourful path and the same reasoning as above tells us that there is a 4-cycle $zy_iy_{i+1}y_{i+2}z$ in $G$, which is again a contradiction. \end{proof} \begin{corollary}\label{cor:colourfulcycle} Let $y_1y_2\ldots y_ky_1$ be a colourful $k$-cycle. Let $z\in N(y_i)$ for some $i\in\{1,2,\ldots,k\}$. Then $\beta(z)\in\{\beta(y_1),\ldots,\beta(y_k)\}$. \end{corollary}
\noindent\textbf{The vertex \textit{v}:} Fix $v$ to be a vertex which has the largest label. Since $\alpha$ is also a proper vertex colouring of $G$, it must use at least $k$ labels. In other words, $\alpha(v)\geq k$. (For the proof to go through, we could have chosen any vertex with label $k$ as $v$. But as Lemma~\ref{lem:largestlabel} shows, the vertex with largest label will have label $k$.)
\noindent\textbf{Primary cycle:} By applying Lemma~\ref{lem:path} to $v$ and the set of labels $\{1,2,\ldots,k-1\}$, we can conclude that there exists a decreasing path $v_kv_{k-1}\ldots v_1$ where $v_k=v$ and such that $\alpha(v_i)=i$ for all $1\leq i<k$ and $\beta(v_i)<\beta(v_{i+1})$ for all $1\leq i\leq k-1$. Since this path is colourful, by Observation~\ref{obs:colpath}, $vv_{k-1}v_{k-2}\ldots v_1v$ is a colourful cycle, which we shall call the ``primary cycle''. For $1\leq i\leq k$, we shall denote by $b_i$ the colour $\beta(v_i)$. The set of colours $\{b_k,b_{k-1},\ldots,b_1\}$ shall be called the set of ``primary colours''.
\noindent\textbf{Forced vertices:} A vertex $u\in V(G)$ is said to be a ``forced vertex'' if there is a decreasing path from $v$ to $u$. Note that every vertex on the primary cycle is a forced vertex.
\begin{lemma}\label{lem:largestlabel} $\alpha(v)=k$. Hence, for $1\leq i\leq k$, $\alpha(v_i)=i$. \end{lemma} \begin{proof} Suppose for the sake of contradiction that $\alpha(v)>k$. By Lemma~\ref{lem:path}, there exists a decreasing path $y_{k+1}y_k\ldots y_1$ where $y_{k+1}=v$ and for $1\leq i\leq k$, we have $\alpha(y_i)=i$ and $\beta(y_i)<\beta(y_{i+1})$. As the paths $y_{k+1}y_k\ldots y_2$ and $y_ky_{k-1}\ldots y_1$ are both colourful, it must be the case that $y_{k+1}y_2,y_ky_1\in E(G)$. But then, $y_{k+1}y_2y_1y_ky_{k+1}$ is a cycle on four vertices in $G$, which is a contradiction to the fact that $g(G)=k>4$. \end{proof}
\begin{lemma}\label{lem:uis} For each $i\in\{1,2,\ldots,k-1\}$ there is exactly one vertex $u_i$ in $N(v)$ with label $i$. Moreover, $\beta(u_i)=b_i$ and there is a colourful cycle $C_i$ containing $u_i$ and $v$ that contains only forced vertices. \end{lemma} \begin{proof} For each $i\in\{1,2,\ldots,k-1\}$, by applying Lemma~\ref{lem:path} to $v$ and the set $\{i\}$, we get that there exists a decreasing path $vu_i$ where $\alpha(u_i)=i$ and $\beta(u_i)<\beta(v)=b_k$. We shall choose $u_{k-1}$ to be $v_{k-1}$. Because $u_i$ is adjacent to $v$ which is on the primary cycle, by Corollary~\ref{cor:colourfulcycle}, we know that $\beta(u_i)$ is a primary colour.
We claim that $\beta(u_i)=b_i$ and that there is a colourful cycle containing $v$ and $u_i$ that contains only forced vertices. We shall use backward induction on $i$ to prove this. Consider the base case when $i=k-1$. Since $u_{k-1}=v_{k-1}$, we know that $\beta(u_{k-1})=b_{k-1}$ and that there is a colourful cycle (the primary cycle) that contains $u_{k-1}$ and $v$ and also contains only forced vertices. Thus the claim is true for the base case. Let us assume that the claim has been proved for $u_{k-1},u_{k-2},\ldots,u_{i+1}$. If $\beta(u_i)=b_j>b_i$, then $b_j\in\{b_{i+1},b_{i+2},\ldots,b_{k-1}\}$ (recall that $\beta(u_i)$ is a primary colour). By the induction hypothesis, we know that the vertex $u_j\in N(v)$ has $\beta(u_j)=b_j$ and that there is a colourful cycle $C_j$ containing $u_j$ and $v$. Note that $u_j\neq u_i$ (as $\alpha(u_i)\neq\alpha(u_j)$), but $\beta(u_j)=\beta(u_i)=b_j$. Therefore, as $C_j$ contains $u_j$ and is a colourful cycle, it cannot contain $u_i$. Since $u_i$ is adjacent to $v$ which is on $C_j$, and $\beta(u_i)=b_j$, we now have a contradiction to Lemma~\ref{lem:colourfulcycle} (note that $u_jv$ is an edge of $C_j$ as every colourful cycle is a chordless cycle). So it has to be the case that $\beta(u_i)\leq b_i$. By Lemma~\ref{lem:path}, there exists a path $y_iy_{i-1}y_{i-2}\ldots y_1$, where $y_i=u_i$, such that for $1\leq j\leq i-1$, $\alpha(y_j)=j$ and $\beta(y_j)<\beta(y_{j+1})$. Notice that $y_1y_2\ldots y_ivv_{k-1}\ldots v_{i+1}$ is a colourful path and therefore by Observation~\ref{obs:colpath}, $C_i=y_1y_2\ldots y_ivv_{k-1}\ldots v_{i+1}y_1$ is a colourful cycle containing both $u_i$ and $v$. Since $v_i$ is adjacent to $v_{i+1}$ which is on $C_i$, by Corollary~\ref{cor:colourfulcycle}, we know that there is some vertex $z$ on $C_i$ such that $\beta(z)=b_i$. Clearly, $z\in\{y_i,y_{i-1},\ldots,y_1\}$.
Since $\beta(z)=b_i\geq\beta(y_i)$ (recall that $u_i=y_i$) and $y_iy_{i-1}\ldots y_1$ is a decreasing path, we have $z\notin\{y_{i-1},y_{i-2},\ldots,y_1\}$. Therefore, we have $z=y_i$, which implies that $\beta(u_i)=b_i$. Notice that each $y_j\in\{y_i,y_{i-1},\ldots,y_1\}$, because of the decreasing path $vy_iy_{i-1}\ldots y_j$, is a forced vertex. Thus, $C_i$ is a colourful cycle containing $u_i$ and $v$ that contains only forced vertices. This shows that for any $i\in\{1,2,\ldots,k-1\}$, $\beta(u_i)=b_i$ and there is a colourful cycle containing $v$ and $u_i$ that contains only forced vertices.
We shall now show that $u_i$ is the only vertex in $N(v)$ which has the label $i$. Suppose that there is a vertex $u\in N(v)$ such that $\alpha(u)=i$ and $u\neq u_i$. Since $u$ is adjacent to a colourful cycle containing only primary colours (the primary cycle), we can conclude from Corollary~\ref{cor:colourfulcycle} that $\beta(u)$ is a primary colour. Therefore, $\beta(u)=b_j$ for some $j\in\{1,2,\ldots,k-1\}$. From what we observed above, $\beta(u_j)=b_j$ and there exists a colourful cycle $C_j$ containing the vertices $v$ and $u_j$. Note that $u_j\neq u$ since if $j\neq i$, then $u_j$ and $u$ have different labels and if $j=i$, we know that $u_j\neq u$ (as we have assumed that $u_i\neq u$). Hence $u$ is not in $C_j$ (as $C_j$ already has a vertex $u_j$ with $\beta(u_j)=b_j$) but is adjacent to it. But now $C_j$ and $u$ contradict Lemma~\ref{lem:colourfulcycle} as $u_jv$ is an edge of $C_j$. Therefore, $u$ cannot exist. \end{proof}
\begin{corollary}\label{cor:cyclewithv} Let $C$ be any colourful cycle containing $v$. Then $C$ sees only primary colours. \end{corollary} \begin{proof} Notice that from Lemma~\ref{lem:uis}, we know that for every primary colour $b_j\in\{b_1,b_2,\ldots,b_{k-1}\}$, there is a vertex $u_j$ with $\beta(u_j)=b_j$ that is adjacent to $v$. Because $v$ is in $C$, we can apply Corollary~\ref{cor:colourfulcycle} to $C$ and $u_j$ to conclude that $b_j$ is present in $C$. This means that every primary colour appears on at least one vertex of $C$ (the colour $b_k$ appears on $v$ itself). Since $C$ is a $k$-cycle, this means that $C$ sees only primary colours. \end{proof}
The following corollary shows that $k\leq 6$. However, we do not need this fact to prove Theorem~\ref{thm:main}.
\begin{corollary}\label{cor:5or6} $k$ is 5 or 6. \end{corollary} \begin{proof} By Lemma~\ref{lem:uis}, there exists a vertex $u_2\in N(v)$ such that $\alpha(u_2)=2$ and $\beta(u_2)=b_2$. By applying Lemma~\ref{lem:path} to $u_2$ and the set $\{1\}$, we know that there exists $z\in N(u_2)$ such that $\alpha(z)=1$ and $\beta(z)<\beta(u_2)=b_2$. Now, the path $zu_2vv_{k-1}v_{k-2}\ldots v_3$ is a colourful path and hence by Observation~\ref{obs:colpath}, we have that $zv_3\in E(G)$. By just comparing labels, it is clear that $u_2\neq v_1$ and $z\neq v_2$. Further, $u_2\neq v_2$ as $vv_2\notin E(G)$ and $z\neq v_1$ as otherwise, there would be a triangle $vu_2zv$ in $G$. We then have the 6-cycle $vu_2zv_3v_2v_1v$ in $G$, which implies that $k=g(G)\leq 6$. Since $k>4$, we now have $k\in\{5,6\}$. \end{proof}
\begin{lemma}\label{lem:primarycolours} If $u\in V(G)$ is a forced vertex such that $\alpha(u)=i$, then $\beta(u)=b_i$. Moreover, if $P$ is any decreasing path from $v$ to $u$, then there is a colourful cycle which has $P$ as a subpath, contains only forced vertices, and sees exactly the primary colours. \end{lemma} \begin{proof} Consider a forced vertex $u$. We shall prove the statement of the lemma for $u$ by backward induction on $\alpha(u)$. The statement is true for $\alpha(u)\in\{k,k-1\}$ as there is only one forced vertex each with labels $k$ and $k-1$---which are $v$ and $v_{k-1}$ respectively (recall that from Lemma~\ref{lem:uis}, $u_{k-1}=v_{k-1}$ is the only vertex in $N(v)$ with label $k-1$). Also, note that they are both in a colourful cycle (the primary cycle) that satisfies the required conditions. Let us assume that the statement of the lemma has been proved for $\alpha(u)\in\{k,k-1,\ldots,i+1\}$. Let us look at the case when $\alpha(u)=i$. Let $z$ be the predecessor of $u$ in the path $P$ and let $P_z$ be the subpath of $P$ that starts at $v$ and ends at $z$. Let $\alpha(z)=j$. By the induction hypothesis, $\beta(z)=b_j$ and $z$ is in a colourful cycle $C$ that contains only primary colours. By Corollary~\ref{cor:colourfulcycle}, we can infer that $\beta(u)$ is a primary colour. Since $P$ was a decreasing path, $\beta(u)\in\{b_1,b_2,\ldots,b_{j-1}\}$. If $\beta(u)=b_l$ with $b_j>b_l>b_i$, then notice that there already exists a neighbour $y$ of $z$ with $\alpha(y)=l$ and $\beta(y)<\beta(z)$, because the refined greedy algorithm set $\alpha(z)=j$. Note that $P_z\cup zy$ is a decreasing path from $v$ to $y$, which implies that $y$ is a forced vertex. Clearly, $u\neq y$ as $\alpha(u)\neq\alpha(y)$. Because of our induction hypothesis, $\beta(y)=b_l$ and there is a colourful cycle containing the path $P_z\cup zy$ as a subpath. As $\beta(u)=\beta(y)$, $u$ is outside this cycle but is a neighbour of $z$. This contradicts Lemma~\ref{lem:colourfulcycle}. 
Therefore, $\beta(u)\leq b_i$. Consider the decreasing path $y_iy_{i-1}\ldots y_1$ where $y_i=u$, and for $s\in\{1,2,\ldots,i-1\}$, $\alpha(y_s)=s$ and $\beta(y_s)<\beta(y_{s+1})$, which exists by Lemma~\ref{lem:path}. Again by Lemma~\ref{lem:path}, there exists a decreasing path $Q$ starting from $v$ whose vertices other than $v$ have exactly the labels in $\{i+1,i+2,\ldots,k\}$ that are not seen on $P_z$. By the induction hypothesis, we can now see that every colour in $\{b_{i+1},b_{i+2},\ldots,b_k\}$ occurs exactly once in the path $Q\cup P_z$. Since $y_iy_{i-1}\ldots y_1$ is a decreasing path in which every vertex has colour at most $b_i$, we can conclude that the path $P'=Q\cup P_z \cup zy_iy_{i-1}\ldots y_1$ is a colourful path. By Observation~\ref{obs:colpath}, the graph induced by $V(P')$ is a colourful cycle containing $v$, which we shall call $C'$. By Corollary~\ref{cor:cyclewithv}, we know that $C'$ contains only primary colours. Now, if $\beta(u)<b_i$, then because $uy_{i-1}\ldots y_1$ was a decreasing path, it would mean that $\beta(y_1)<b_1$, which is a contradiction. Thus, $\beta(u)=b_i$ and $C'$ is a cycle containing $P$ as a subpath and which contains only forced vertices and primary colours (note that each $y_s$, for $1\leq s\leq i$, is a forced vertex as there is the decreasing path $P_z\cup zy_iy_{i-1}\ldots y_s$ from $v$ to $y_s$). \end{proof}
\begin{corollary}\label{cor:forced} Let $u$ be a forced vertex with $\alpha(u)=i$. Then, for each $j\in\{1,2,\ldots,i-1\}$ there is exactly one forced vertex $u_j$ in $N(u)$ with label $j$. Moreover, for each $j\in\{1,2,\ldots,i-1\}$, there is a colourful cycle that contains the edge $uu_j$, sees exactly the primary colours, and contains only forced vertices. \end{corollary} \begin{proof} As $u$ is a forced vertex, we have from Lemma~\ref{lem:primarycolours} that $\beta(u)=b_i$. We further know that there exists a decreasing path $P$ from $v$ to $u$. Lemma~\ref{lem:path} can be used to infer that there exists a vertex $u_j\in N(u)$ such that $\alpha(u_j)=j$ and $\beta(u_j)<\beta(u)$. As $P\cup uu_j$ is a decreasing path, $u_j$ is a forced vertex. Suppose for the sake of contradiction that there exists $u'_j\neq u_j$ such that $\alpha(u'_j)=j$ and $u'_j$ is a forced vertex in $N(u)$. By Lemma~\ref{lem:primarycolours}, we have that $\beta(u_j)=\beta(u'_j)=b_j$. Applying Lemma~\ref{lem:primarycolours} to $u_j$ and the decreasing path $P\cup uu_j$, we know that there exists a colourful cycle that contains the edge $uu_j$, sees exactly the primary colours, and contains only forced vertices. As $\beta(u_j)=\beta(u'_j)$, the vertex $u'_j$ is not on this cycle. This contradicts Lemma~\ref{lem:colourfulcycle}. \end{proof}
It might be helpful to note that combining Corollaries~\ref{cor:5or6} and~\ref{cor:forced}, we get that the forced vertices in $G$ are as shown in Figure~\ref{fig:5} or Figure~\ref{fig:6}. But we do not use this observation for the proof.
\begin{figure}
\caption{The forced vertices when $k=5$. The vertices in the primary cycle are named, while only the labels of the other forced vertices are shown.}
\label{fig:5}
\end{figure}
\begin{figure}
\caption{The forced vertices when $k=6$. The vertices in the primary cycle are named, while only the labels of the other forced vertices are shown.}
\label{fig:6}
\end{figure}
Clearly, the only forced vertex with label $k$ is $v_k$. And by Lemma~\ref{lem:uis}, we also have that the only forced vertex with label $k-1$ is $v_{k-1}$. This gives us the following observation.
\begin{observation}\label{obs:final} There are exactly $k-2$ forced vertices in $N(v_{k-2})$. \end{observation} \begin{proof} From Corollary~\ref{cor:forced}, we know that for each $i\in\{1,2,\ldots,k-3\}$, there is exactly one forced vertex with label $i$ in $N(v_{k-2})$. The only forced vertices with labels higher than $k-2$ are $v_{k-1}$ and $v_k$. Since $v_{k-2}v_k\notin E(G)$ and $v_{k-2}v_{k-1}\in E(G)$, there are exactly $k-2$ forced vertices in $N(v_{k-2})$. \end{proof}
\begin{lemma}\label{lem:final} Every vertex in $N(v_{k-2})$ is a forced vertex. \end{lemma} \begin{proof} Suppose for the sake of contradiction that there exists a vertex $u\in N(v_{k-2})$ that is not a forced vertex. Applying Lemma~\ref{lem:colourfulcycle} to the primary cycle and $u$, we get that $\beta(u)$ is a primary colour other than $b_{k-1},b_{k-2},b_{k-3}$. By applying Corollary~\ref{cor:forced} to $v_{k-2}$, we know that for each $j\in\{1,2,\ldots,k-3\}$, there is a forced vertex $u_j\in N(v_{k-2})$ such that $\alpha(u_j)=j$ and there is a colourful cycle $C_j$ that contains the edge $v_{k-2}u_j$, sees exactly the primary colours, and contains only forced vertices. Note that by Lemma~\ref{lem:primarycolours}, we have $\beta(u_j)=b_j$. Then by applying Lemma~\ref{lem:colourfulcycle} to $C_j$ and $u$, for each $j\in\{1,2,\ldots,k-3\}$, we can conclude that $\beta(u)\notin\{b_1,b_2,\ldots,b_{k-3}\}$. Therefore, $\beta(u)=b_k$. By Lemma~\ref{lem:path} applied to the vertex $v_{k-1}$ and the set $\{k-3,k-4,\ldots,1\}$, there exists a decreasing path $v_{k-1}v'_{k-3}v'_{k-4}\ldots v'_1$ where $\alpha(v'_i)=i$. Each $v'_i$, for $1\leq i\leq k-3$, is a forced vertex since $v_kv_{k-1}v'_{k-3}v'_{k-4}\ldots v'_i$ is a decreasing path from $v_k$ to $v'_i$. From Lemma~\ref{lem:primarycolours}, we now have that for each $1\leq i\leq k-3$, $\beta(v'_i)=b_i$. Again applying Lemma~\ref{lem:path} to $v_{k-1}$ and the set $\{k-4,k-5,\ldots,1\}$, we get a decreasing path $v_{k-1}u'_{k-4}u'_{k-5}\ldots u'_1$, for which using similar arguments as before, we can see that $u'_i$, for $1\leq i\leq k-4$, is a forced vertex with $\beta(u'_i)=b_i$.
Now applying Lemma~\ref{lem:path} to $v_k$ and the set $\{k-2,k-3\}$, we get a decreasing path $v_kx_{k-2}x_{k-3}$, and using similar arguments as before we get that $x_i$, for $i\in\{k-2,k-3\}$, is a forced vertex with $\beta(x_i)=b_i$. Note that $uv_{k-2}v_{k-1}v'_{k-3}v'_{k-4}\ldots v'_1$ is a colourful path, implying that $uv'_1\in E(G)$. Also, $u'_1u'_2\ldots u'_{k-4}v_{k-1}v_kx_{k-2}x_{k-3}$ is a colourful path, and hence $u'_1x_{k-3}\in E(G)$. Now, $uv_{k-2}v_{k-1}u'_{k-4}u'_{k-5}\ldots u'_1x_{k-3}$ is a colourful path, which implies that $ux_{k-3}\in E(G)$. Further, $v'_1v'_2\ldots v'_{k-3}v_{k-1}v_kx_{k-2}$ is a colourful path, which gives us $v'_1 x_{k-2}\in E(G)$. Therefore, we have the 4-cycle $uv'_1x_{k-2}x_{k-3}u$ in $G$, which is a contradiction. \end{proof}
From Observation~\ref{obs:final} and Lemma~\ref{lem:final}, we have that $v_{k-2}$ has exactly $k-2$ neighbours, which is a contradiction to Observation~\ref{obs:mindegree}. This completes the proof of Theorem~\ref{thm:main}.
\section{Conclusion} The results of this paper imply that for any properly coloured graph $G$ with $g(G)\geq\chi(G)>3$, there exists an induced colourful path on $\chi(G)$ vertices in $G$. The question of whether every properly coloured graph $G$ contains an induced colourful path on $\chi(G)$ vertices remains open for the case $3<g(G)<\chi(G)$.
\end{document}
\begin{document}
\title{On Homogeneous Landsberg Surfaces}
\begin{abstract} In this paper, we prove that every homogeneous Landsberg surface has isotropic flag curvature. Using this special form of the flag curvature, we prove a rigidity result on homogeneous Landsberg surfaces: every homogeneous Landsberg surface is Riemannian or locally Minkowskian. This gives a positive answer to the well-known conjecture of Xu and Deng for 2-dimensional homogeneous Finsler manifolds.\\\\ {\bf {Keywords}}: Homogeneous Finsler surface, Landsberg metric, Berwald metric, flag curvature.\footnote{ 2000 Mathematics Subject Classification: 53B40, 53C60.} \end{abstract}
\section{Introduction}
Let $(M, F)$ be a Finsler manifold and $c: [a, b]\rightarrow M$ be a piecewise $C^\infty$ curve from $c(a)=p$ to $c(b)=q$. For every $u\in T_pM$, let us define $P_c:T_pM\rightarrow T_qM$ by $P_c(u):=U(b)$, where $U=U(t)$ is the parallel vector field along $c$ such that $U(a)=u$. $P_c$ is called the parallel translation along $c$. In \cite{I}, Ichijy\={o} showed that if $F$ is a Berwald metric, then all tangent spaces $(T_xM, F_x)$ are linearly isometric to each other. Let us consider the Riemannian metric ${\hat g}_x$ on $T_xM_0:=T_xM-\{0\}$ which is defined by
${\hat g}_x:=g_{ij}(x, y)\delta y^i\otimes \delta y^j$, where $g_{ij}:={1}/{2}[F^2]_{y^iy^j}$ is the fundamental tensor of $F$ and $\{\delta y^i:= dy^i+N^i_j dx^j\}$ is the natural coframe on $T_xM$ associated with the natural basis $\{{\partial}/{\partial x^i}|_x\}$ for $T_xM$. If $F$ is a Landsberg metric, then for any $C^\infty$ curve $c$, $P_c$ preserves the induced Riemannian metrics on the tangent spaces, i.e., $P_c:(T_pM, {\hat g}_p)\rightarrow (T_qM, {\hat g}_q)$ is an isometry. By definition, every Berwald metric is a Landsberg metric, but the converse may not hold.
In 1996, Matsumoto compiled a list of rigidity results which almost suggest that a pure Landsberg metric (a non-Berwaldian Landsberg metric) does not exist \cite{Mat96}. In 2003, Matsumoto emphasized this problem again and regarded it as the most important open problem in Finsler geometry: to find Landsberg metrics which are not Berwaldian. Bao called such metrics unicorns in Finsler geometry, mythical single-horned horse-like creatures that exist in legend but have never been seen by human beings \cite{Bao2}. There have been many unsuccessful attempts to find explicit examples of unicorns. In \cite{Szabo2}, Szab\'{o} gave an argument purporting to prove that any regular Landsberg metric must be of Berwald type. Unfortunately, there is a gap in Szab\'{o}'s argument. As pointed out in Szab\'{o}'s correction to \cite{Szabo2}, his argument only applies to the so-called dual Landsberg spaces. Hence, the unicorn problem remains open in Finsler geometry. Taking into account the many unsuccessful efforts of researchers, one can conclude that the unicorn problem is becoming more and more puzzling.
The unicorn problem in Finsler geometry is well-studied. However, up to now, very little attention has been paid to it in the setting of homogeneous Finsler metrics. A Finsler manifold $(M, F)$ is said to be homogeneous if its group of isometries acts transitively on $M$. In \cite{TN1}, we considered the unicorn problem in the class of homogeneous $(\alpha, \beta)$-metrics and proved that every homogeneous $(\alpha, \beta)$-metric is a stretch metric if and only if it is a Berwald metric. In \cite{XD}, Xu-Deng introduced a generalization of $(\alpha,\beta)$-metrics,
called $(\alpha_1,\alpha_2)$-metrics. Let $(M, \alpha)$ be an $n$-dimensional Riemannian manifold. Then one can define an $\alpha$-orthogonal decomposition of the tangent bundle by $TM=\mathcal{V}_1\oplus\mathcal{V}_2$, where $\mathcal{V}_1$ and $\mathcal{V}_2$ are two linear subbundles with dimensions $n_1$ and $n_2$ respectively, and
$\alpha_i=\alpha|_{\mathcal{V}_i}$, $i=1,2$, are naturally viewed as functions on $TM$. An $(\alpha_1,\alpha_2)$-metric on $M$ is a Finsler metric $F$ which can be written as $F=\sqrt{L(\alpha_1^2,\alpha_2^2)}$. An $(\alpha_1,\alpha_2)$-metric can also be represented as $F=\alpha\phi(\alpha_2/\alpha)=\alpha\psi(\alpha_1/\alpha)$, in which $\phi(s)=\psi(\sqrt{1-s^2})$. They proved that every Landsberg $(\alpha_1,\alpha_2)$-metric reduces to a Berwald metric. This result shows that the search for a unicorn cannot succeed even in the very broad class of $(\alpha_1,\alpha_2)$-metrics. Then, Xu-Deng conjectured the following: \begin{con}{\rm (\cite{XD})} A homogeneous Landsberg space must be a Berwald space. \end{con}
Taking a look at the rigidity theorems in Finsler geometry, one can find that this type of result often takes a special form in dimension two. For example, in \cite{Sz} Szab\'{o} proved that any connected Berwald surface is locally Minkowskian or Riemannian. In \cite{BCS}, Bao-Chern-Shen proved a rigidity result for compact Landsberg surfaces: they showed that a compact Landsberg surface with non-positive flag curvature is locally Minkowskian or Riemannian. Therefore, we have chosen to consider the issue of unicorns for homogeneous Finsler surfaces. In this paper, we prove the following rigidity result. \begin{thm}\label{MainTHM1} Any homogeneous Landsberg surface is Riemannian or locally Minkowskian. \end{thm} This result tells the hunters of unicorns that they should not expect to see such a creature in the jungle of homogeneous Finsler surfaces.
In order to prove Theorem \ref{MainTHM1}, we consider the flag curvature of Landsberg surface and prove the following rigidity result. \begin{thm}\label{MainTHM2} Every homogeneous Landsberg surface has isotropic flag curvature. \end{thm}
\section{Preliminaries}\label{sectionP}
Let $(M, F)$ be an $n$-dimensional Finsler manifold, and $TM$ be its tangent bundle. We denote the slit tangent bundle of $M$ by $TM_0$, i.e., $T_xM_0=T_xM-\{0\}$ at every $x\in M$. The fundamental tensor $\textbf{g}_y:T_xM\times T_xM\rightarrow \mathbb{R}$ of $F$ is defined by \[
\textbf{g}_{y}(u,v):={1 \over 2}\frac{{\partial} ^2}{{\partial} s {\partial} t} \Big[ F^2 (y+su+tv)\Big]|_{s,t=0}, \ \ u,v\in T_xM. \]
Let $x\in M$ and $F_x:=F|_{T_xM}$. To measure the non-Euclidean feature of $F_x$, define ${\bf C}_y:T_xM\times T_xM\times T_xM\rightarrow \mathbb{R}$ by \[ {\bf C}_{y}(u,v,w):={1 \over 2} \frac{d}{dt}\Big[\textbf{g}_{y+tw}(u,v)
\Big]|_{t=0}, \ \ u,v,w\in T_xM. \] The family ${\bf C}:=\{{\bf C}_y\}_{y\in TM_0}$ is called the Cartan torsion. By definition, ${\bf C}_y$ is a symmetric trilinear form on $T_xM$. It is well known that ${\bf{C}}=0$ if and only if $F$ is Riemannian.
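As a quick illustration of the last statement, one direction can be checked directly: for a Riemannian metric $F(x,y)=\sqrt{a_{ij}(x)y^iy^j}$ the fundamental tensor $\textbf{g}_{y}(u,v)=a_{ij}(x)u^iv^j$ does not depend on $y$, hence
\[
{\bf C}_{y}(u,v,w)={1 \over 2}\frac{d}{dt}\Big[a_{ij}(x)u^iv^j\Big]\Big|_{t=0}=0.
\]
The converse direction is the nontrivial part of the statement.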
Let $(M, F)$ be a Finsler manifold. For $y\in T_x M_0$, define ${\bf I}_y:T_xM\rightarrow \mathbb{R}$ by \[ {\bf I}_y(u)=\sum^n_{i=1}g^{ij}(y) {\bf C}_y(u, \partial_i, \partial_j), \] where $\{\partial_i\}$ is a basis for $T_xM$ at $x\in M$. The family ${\bf I}:=\{{\bf I}_y\}_{y\in TM_0}$ is called the mean Cartan torsion. By definition, ${\bf I}_y(u):=I_i(y)u^i$, where $I_i:=g^{jk}C_{ijk}$. By Deicke's theorem, a positive-definite Finsler metric $F$ is Riemannian if and only if ${\bf I}=0$.
Given a Finsler manifold $(M, F)$, a global vector field ${\bf G}$ is induced by $F$ on $TM_0$, which in standard coordinates $(x^i,y^i)$ for $TM_0$ is given by ${\bf G}=y^i {{\partial} / {\partial x^i}}-2G^i(x,y){{\partial}/ {\partial y^i}}$, where $G^i=G^i(x, y)$ are scalar functions on $TM_0$ given by \[ G^i:=\frac{1}{4}g^{ij}\Bigg\{\frac{\partial^2[F^2]}{\partial x^k \partial y^j}y^k-\frac{\partial[F^2]}{\partial x^j}\Bigg\},\ \ y\in T_xM.\label{G} \] The vector field ${\bf G}$ is called the spray associated with $(M, F)$.
For $y \in T_xM_0$, define ${\bf B}_y:T_xM\times T_xM \times T_xM\rightarrow T_xM$ by ${\bf B}_y(u, v, w):=B^i_{\ jkl}(y)u^jv^kw^l{{\partial } \over {\partial x^i}}|_x$ where \[ B^i_{\ jkl}:={{\partial^3 G^i} \over {\partial y^j \partial y^k \partial y^l}}. \] The quantity $\bf B$ is called the Berwald curvature of the Finsler metric $F$. We call a Finsler metric $F$ a Berwald metric, if ${\bf{B}}=0$.
Define the mean Berwald curvature ${\bf E}_y:T_xM\times T_xM \rightarrow \mathbb{R}$ by \[ {\bf E}_y (u, v) := {1\over 2} \sum_{i=1}^n g^{ij}(y) g_y \Big ( {\bf B}_y (u, v, e_i ) , e_j \Big ). \] The family ${\bf E}=\{ {\bf E}_y \}_{y\in TM\setminus\{0\}}$ is called the {\it mean Berwald curvature} or {\it E-curvature}. In local coordinates, ${\bf E}_y(u, v):=E_{ij}(y)u^iv^j$, where \[ E_{ij}:=\frac{1}{2}B^m_{\ mij}. \] $F$ is called a weakly Berwald metric if ${\bf{E}}=0$. Also, define ${\bf H}_y:T_xM\otimes T_xM \rightarrow \mathbb{R}$ by ${\bf H}_y(u,v):=H_{ij}(y)u^iv^j$, where \[
H_{ij}:= E_{ij|s} y^s. \] Thus ${\bf H}$ is the covariant derivative of ${\bf E}$ along geodesics.
For a non-zero vector $y \in T_xM_0$, define ${\bf D}_y:T_xM\otimes T_xM \otimes T_xM\rightarrow T_xM$
by ${\bf D}_y(u,v,w):=D^i_{\ jkl}(y)u^jv^kw^l\frac{\partial}{\partial x^i}|_{x}$, where \[ D^i_{\ jkl}:=\frac{\partial^3}{\partial y^j\partial y^k\partial y^l}\Bigg[G^i-\frac{2}{n+1}\frac{\partial G^m}{\partial y^m} y^i\Bigg].\label{Douglas1} \] $\bf D$ is called the Douglas curvature. $F$ is called a Douglas metric if ${\bf D}=0$. According to the definition, the Douglas tensor can be written as follows \[ D^i_{\ jkl}=B^i_{\ jkl}-\frac{2}{n+1}\Big\{E_{jk}\delta^i_{\ l}+E_{kl}\delta^i_{\ j}+E_{lj}\delta^i_{\ k}+E_{jk,l}y^i\Big\}. \]
For $y\in T_xM$, define the Landsberg curvature ${\bf L}_y:T_xM\times T_xM \times T_xM\rightarrow \mathbb{R}$ by \[ {\bf L}_y(u, v,w):=-\frac{1}{2}{\bf g}_y\big({\bf B}_y(u, v, w), y\big). \] $F$ is called a Landsberg metric if ${\bf L}_y=0$. By definition, every Berwald metric is a Landsberg metric.
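In local coordinates the definition above reads, with $y_i:=g_{ij}(y)y^j$,
\[
L_{jkl}=-\frac{1}{2}\,y_i\,B^i_{\ jkl},
\]
so that $F$ is a Landsberg metric if and only if $y_iB^i_{\ jkl}=0$.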
Let $(M, F)$ be a Finsler manifold. For $y\in T_x M_0$, define ${\bf J}_y:T_xM\rightarrow \mathbb{R}$ by \[ {\bf J}_y(u)=\sum^n_{i=1}g^{ij}(y) {\bf L}_y(u, \partial_i, \partial_j). \] The quantity $\bf J$ is called the mean Landsberg curvature or J-curvature of the Finsler metric $F$. A Finsler metric $F$ is called a weakly Landsberg metric if ${\bf J}=0$. By definition, every Landsberg metric is a weakly Landsberg metric. The mean Landsberg curvature can also be written as follows \[ J_i: = y^m {{\partial} I_i \over {\partial} x^m} -I_m {{\partial} G^m\over {\partial} y^i} - 2 G^m {{\partial} I_i \over {\partial} y^m}. \] By definition, we get \begin{eqnarray*} {\bf J}_y (u):= {d\over dt} \Big [ {\bf I}_{\dot{\sigma}(t) } \big ( U(t) \big )\Big ]_{t=0}, \end{eqnarray*} where $y\in T_xM$, $\sigma=\sigma(t)$ is the geodesic with $\sigma(0)=x$, $\dot{\sigma}(0)=y$, and $U(t)$ is a linearly parallel vector field along $\sigma$ with $U(0)=u$. Hence the mean Landsberg curvature ${\bf J}_y$ is the rate of change of ${\bf I}_y$ along geodesics for any $y\in T_xM_0$.
For an arbitrary non-zero vector $y \in T_xM_{0}$, the Riemann curvature is a linear transformation $\textbf{R}_y: T_xM \rightarrow T_xM$ with homogeneity ${\bf R}_{\lambda y}=\lambda^2 {\bf R}_y$, $\forall \lambda>0$, which is defined by $\textbf{R}_y(u):=R^i_{k}(y)u^k {\partial / {\partial x^i}}$, where \begin{equation} R^i_{k}(y)=2{\partial G^i \over {\partial x^k}}-{\partial^2 G^i \over {{\partial x^j}{\partial y^k}}}y^j+2G^j{\partial^2 G^i \over {{\partial y^j}{\partial y^k}}}-{\partial G^i \over {\partial y^j}}{\partial G^j \over {\partial y^k}}.\label{Riemannx} \end{equation} The family $\textbf{R}:=\{\textbf{R}_y\}_{y\in TM_0}$ is called the Riemann curvature of the Finsler manifold $(M, F)$.
For a flag $P:={\rm span}\{y, u\} \subset T_xM$ with flagpole $y$, the flag curvature ${\bf K}={\bf K}(x, y, P)$ is defined by \begin{equation} {\bf K}(x, y, P):= {{\bf g}_y \big(u, {\bf R}_y(u)\big) \over {\bf g}_y(y, y) {\bf g}_y(u,u) -{\bf g}_y(y, u)^2 }.\label{FC0} \end{equation} The flag curvature ${\bf K}(x, y, P)$ is a function of tangent planes $P={\rm span}\{ y, u\}\subset T_xM$. This quantity tells us how curved the space is at a point. A Finsler metric $F$ is of scalar flag curvature if $\textbf{K} = \textbf{K}(x, y, P)$ is independent of flags $P$ containing $y\in T_xM_0$.
\section{Proof of Theorems}
In this section, we prove Theorems \ref{MainTHM1} and \ref{MainTHM2}. In order to prove Theorem \ref{MainTHM1}, we first study the flag curvature of homogeneous Landsberg surfaces; more precisely, we prove Theorem \ref{MainTHM2}. For this aim, we need some useful lemmas.
In \cite{LR}, Latifi-Razavi proved that every homogeneous Finsler manifold is forward geodesically complete. In \cite{TN1}, Tayebi-Najafi improved their result and proved the following. \begin{lem}{\rm (\cite{TN2})}\label{lem1} Every homogeneous Finsler manifold is complete. \end{lem}
By definition, any two points of a homogeneous Finsler manifold $(M, F)$ are mapped to each other by an isometry. It follows that the norm of any tensor that is invariant under the isometries of a homogeneous Finsler manifold is a constant function on $M$ and, consequently, bounded. Then, we conclude the following. \begin{lem}{\rm (\cite{TN1})}\label{lem2} Let $(M, F)$ be a homogeneous Finsler manifold. Then, every invariant tensor under the isometries of $F$ has a bounded norm with respect to $F$. \end{lem}
\noindent {\bf Proof of Theorem \ref{MainTHM2}:} We first deal with Finsler surfaces. The special and useful Berwald frame was introduced and developed by Berwald \cite{B}. Let $(M, F)$ be a two-dimensional Finsler manifold. One can define a local field of orthonormal frames $(\ell^i,m^i)$, called the Berwald frame, where $\ell^i=y^i/F$, $m^i$ is the unit vector with $\ell_i m^i=0$, $\ell_i=g_{ij}\ell^j$, and $g_{ij}$ is given by $g_{ij}=\ell_i\ell_j+m_im_j$. In \cite{BM}, it is proved that the Douglas curvature of the Finsler surface $(M, F)$ is given by \[
D^i_{\ jkl} = -\frac{1}{ 3F^2}\Big(6I_{,1}+ I_{2|2} + 2II_2\Big)m_jm_km_ly^i. \] We rewrite it equivalently as \begin{equation} {\bf D}_y(u,v,w)={\bf T}(u, v, w) y\label{GDW1}
\end{equation} where ${\bf T}(u, v, w):=T_{ijk}u^iv^jw^k$ and $T_{ijk}:=-{1}/(3F^2)(6I_{,1}+ I_{2|2} + 2II_2)m_im_jm_k$. It is easy to see that ${\bf T}$ is a symmetric Finslerian tensor field and satisfies the following \[ {\bf T}(y, v, w)=0. \] Let us denote the Berwald connection of $F$ by $D$. The horizontal and vertical derivatives of a Finsler tensor field are denoted by `` $D_{u}$ " and `` $D_{\dot u}$ ", respectively. Taking the horizontal derivative of \eqref{GDW1} along Finslerian geodesics implies that \begin{equation} D_0{\bf D}_y(u,v,w)=D_0{\bf T}(u, v, w)y,\label{GDW1.5} \end{equation} where $D_0:=D_iy^i$. Let us define ${\bf h}_y:T_xM\to T_xM$ by \[ {\bf h}_y(u)=u-{1 \over F^2 }{\bf g}_y(u,y)y. \] Since ${\bf h}_y(y)=0$, it follows from \eqref{GDW1.5} that \begin{equation} {\bf h}_y\big(D_0{\bf D}_y(u,v,w)\big)=0.\label{GDW2} \end{equation} On the other hand, the Douglas tensor of $F$ is given by \begin{equation} {\bf D}_y(u,v,w)={\bf B}_y(u,v,w)-\frac{2}{3}\Big\{{\bf E}_y(v, w)u+{\bf E}_y(w, u)v+{\bf E}_y(u, v)w+(D_{\dot u}{\bf E}_y)(v,w)y\Big\}.\label{GD2} \end{equation} Then \begin{equation} {\bf h}_y\big(D_0{\bf D}_y(u, v, w)\big)={\bf h}_y\big(D_0{\bf B}_y(u, v, w)\big)-\frac{2}{3}\Big\{{\bf H}_y(u,v) {\bf h}_y(w)+{\bf H}_y(v, w) {\bf h}_y(u)+{\bf H}_y(w, u) {\bf h}_y(v)\Big\}.\label{GD3} \end{equation} Let us define \[ \tilde{\bf B}_y:=D_0{\bf B}_y. \] Indeed, $\tilde{\bf B}_y$ is the horizontal derivative of the Berwald curvature along Finslerian geodesics. 
By (\ref{GDW2}) and (\ref{GD3}), we get \begin{equation} {\bf h}_y\big(\tilde{\bf B}_y(u, v, w)\big)=\frac{2}{3}\Big\{{\bf H}_y(u,v) {\bf h}_y(w)+{\bf H}_y(v, w) {\bf h}_y(u)+{\bf H}_y(w, u) {\bf h}_y(v)\Big\}.\label{GD4a} \end{equation} Using $D_i{\bf h}=0$ yields \begin{equation} {\bf h}_y\big(D_i\tilde{\bf B}_y(u, v, w)\big)=\frac{2}{3}\Big\{D_i{\bf H}_y(u,v) {\bf h}_y(w)+D_i{\bf H}_y(v, w) {\bf h}_y(u)+D_i{\bf H}_y(w, u) {\bf h}_y(v)\Big\}.\label{GD5} \end{equation} Using $g_y({\bf B}_y(u,v,w),y)=-2{\bf L}_y(u,v,w)$, we get \begin{eqnarray}\label{GD7} D_i\big( {\bf h}_y\tilde{\bf B}_y(u, v, w)\big)\!\!\!\!&=&\!\!\!\! {\bf h}_y\big(D_i\tilde{\bf B}_y(u, v, w)\big)\nonumber\\ \!\!\!\!&=&\!\!\!\! D_iD_0\big( {\bf h}_y{\bf B}_y(u, v, w)\big)\nonumber\\ \!\!\!\!&=&\!\!\!\! D_iD_0\Big ({\bf B}_y(u, v, w)-{1 \over F^2}{\bf g}_y\big({\bf B}_y(u, v, w), y\big)y\Big )\nonumber\\ \!\!\!\!&=&\!\!\!\! D_i\tilde{\bf B}_y(u, v, w)+{2 \over F^2}D_iD_0{\bf L}_y(u,v,w)y. \end{eqnarray} By (\ref{GD5}), (\ref{GD7}), and ${\bf L}=0$, we obtain \begin{equation} D_i\tilde{\bf B}_y(u, v, w)=\frac{2}{3}\Big\{D_i{\bf H}_y(u,v) {\bf h}_y(w)+D_i{\bf H}_y(v, w) {\bf h}_y(u)+D_i{\bf H}_y(w, u) {\bf h}_y(v)\Big\}.\label{GD1} \end{equation} The relation \eqref{GD1} yields \begin{eqnarray} D_h\tilde{\bf B}_y(u, v, \partial_k)-D_k\tilde{\bf B}_y(u, v, \partial_h)&=&\frac{2}{3}\Big\{D_h{\bf H}_y(u,v) {\bf h}_y(\partial_k)-D_k{\bf H}_y(u,v) {\bf h}_y(\partial_h)\Big\}\nonumber\\ &+&\frac{2}{3}\Big\{\big(D_h{\bf H}_y(v, \partial_k)-D_k{\bf H}_y(v, \partial_h) \big) {\bf h}_y(u)\Big\}\nonumber\\ &+&\frac{2}{3}\Big\{\big(D_h{\bf H}_y(\partial_k, u)-D_k{\bf H}_y(\partial_h, u)\big) {\bf h}_y(v)\Big\}.\label{GD1b} \end{eqnarray}
By definition, we have $tr(\tilde{\bf B})=2{\bf H}$ and $tr({\bf h})=1$. Then, (\ref{GD1b}) implies that \begin{eqnarray} D_h{\bf H}_y(u,\partial_k)-D_k{\bf H}_y(u,\partial_h)=2\Big\{D_h{\bf H}_y(u,\partial_k)-D_k{\bf H}_y(u,\partial_h)\Big\},\label{GD11} \end{eqnarray} which yields \begin{equation} D_h{\bf H}_y(u,\partial_k)=D_k{\bf H}_y(u,\partial_h).\label{GD12} \end{equation} Contracting (\ref{GD12}) with $y^h$ and using $D_k{\bf H}_y(u,y)=0$, we get \begin{equation} D_0{\bf H}_y(u,w)=0.\label{GD13} \end{equation} Take an arbitrary unit vector $y\in T_xM$ and an arbitrary vector $v\in T_xM$. Let $c(t)$ be the geodesic with $\dot c(0)=y$ and $U=U(t)$ the parallel vector field along $c$ with $U(0)=v$. In order to avoid clutter, we put \begin{equation} {\bf E}(t)={\bf E}_{\dot c}(U(t),U(t)), \ \ \ \ \ {\bf H}(t)={\bf H}_{\dot c}(U(t),U(t)).\label{GD14} \end{equation} From the definition of ${\bf H}_y$, we have \begin{equation} {\bf H}(t)={\bf E}^{'}(t).\label{GD15x} \end{equation} By (\ref{GD13}) we have ${\bf H}^{'}(t)=0$, which implies that \begin{equation} {\bf H}(t)={\bf H}(0).\label{GD15xx} \end{equation} Then by (\ref{GD15x}) and (\ref{GD15xx}), we get \begin{equation} {\bf E}(t)={\bf H}(0)t+{\bf E}(0).\label{GD10} \end{equation} Since ${\bf E}(t)$ is a bounded function on $(-\infty,+\infty)$, letting $ t\rightarrow +\infty $ or $ t\rightarrow -\infty $ implies that \[ {\bf H}_y(v,v)={\bf H}(0)=0. \] Therefore ${\bf H}=0$. According to Akbar-Zadeh's theorem, every Finsler metric $F=F(x, y)$ of scalar flag curvature ${\bf K}={\bf K}(x, y)$ on an $n$-dimensional manifold $M$ has isotropic flag curvature ${\bf K}={\bf K}(x)$ if and only if $\textbf{H}=0$ \cite{AZ}. Every Finsler surface has scalar flag curvature ${\bf K}={\bf K}(x, y)$. Then, by the Akbar-Zadeh theorem, we get ${\bf K}={\bf K}(x)$. \qed
Now, we can prove Theorem \ref{MainTHM1}.
\noindent {\bf Proof of Theorem \ref{MainTHM1}:} Let $(M, F)$ be a homogeneous Landsberg surface and fix a point $x\in M$. Suppose that $y=y(t)$ is a unit speed parametrization of the indicatrix of $F$ at $x$. We know that the curvature along $y(t)$ is completely determined by the Cartan scalar of $F$, i.e., we have \[ {\bf K}(t)={\bf K}(0)\ e^{\int_0^tI(s)ds}. \] Thus either ${\bf K}(t)$ vanishes everywhere or it is nowhere zero, in which case ${\bf K}(t)$ has the same sign as ${\bf K}(0)$. On the other hand, for homogeneous Finsler surfaces the flag curvature is a bounded scalar function on $SM$. Suppose that $\lambda_1\leq {\bf K}(t)\leq \lambda_2$. In this case, we have
\[ e^{\lambda_1 t}\leq e^{\int_0^t {\bf K}(s)ds}\leq e^{\lambda_2 t}. \] Suppose that $C(0)\neq 0$, where $C(t)=C(0)e^{\int_0^t {\bf K}(s)ds}$ denotes the Cartan scalar along the geodesic. Then we consider the following two cases:\\\\ {\bf Case 1:} If $\lambda_1$ and $\lambda_2$ are positive, then letting $t\to \infty$ implies that $C(t)$ is unbounded, which is a contradiction.\\\\ {\bf Case 2:} If $\lambda_1$ and $\lambda_2$ are negative, then letting $t\to -\infty$ implies that $C(t)$ is unbounded, which is a contradiction.\\\\ Thus, every homogeneous Landsberg surface is Riemannian or flat. On the other hand, by Akbar-Zadeh's theorem, any positively complete Finsler metric with zero flag curvature must be locally Minkowskian if the first and second Cartan torsions are bounded \cite{AZ}. For homogeneous Finsler metrics, the first and second Cartan torsions are bounded. Then in this case, $F$ reduces to a locally Minkowskian metric. This completes the proof. \qed
It is worth mentioning that, in general, every Landsberg metric of non-zero scalar flag curvature is Riemannian, provided that its dimension is greater than two. Theorem \ref{MainTHM1} is a Numata-type theorem for homogeneous Finsler surfaces.
\begin{cor} Let $(M, F)$ be a homogeneous Finsler surface of non-positive flag curvature. Then $F$ is a Landsberg metric if and only if it has isotropic flag curvature. In this case, $F$ is Riemannian or locally Minkowskian. \end{cor} \begin{proof} According to Theorem 8.1 of \cite{BCS}, every geodesically complete Finsler surface of non-positive isotropic flag curvature ${\bf K}(x)\leq 0$ and bounded Cartan scalar is a Landsberg metric. Then, by Theorem \ref{MainTHM1}, we get the proof. \end{proof}
In \cite{1924}, L. Berwald introduced a non-Riemannian quantity, the so-called stretch curvature, denoted by ${\bf \Sigma}_y$. He showed that this tensor vanishes if and only if the length of a vector remains unchanged under parallel displacement along an infinitesimal parallelogram. \begin{cor} Every homogeneous stretch surface is Riemannian or locally Minkowskian. \end{cor} \begin{proof} Every Landsberg metric is a stretch metric. In \cite{TN1}, it is proved that every homogeneous stretch metric is a Landsberg metric. Then, by Theorem \ref{MainTHM1}, we get the proof. \end{proof}
In \cite{BF}, Bejancu-Farran introduced a new class of Finsler metrics, called generalized Landsberg metrics. This class of Finsler metrics contains the class of Landsberg metrics as a special case. A Finsler metric $F$ on a manifold $M$ is called a generalized Landsberg metric if the Riemann curvature tensors of the Berwald and Chern connections coincide. \begin{cor} Every homogeneous generalized Landsberg surface is Riemannian or locally Minkowskian. \end{cor} \begin{proof} By definition, we have
\begin{equation} L^i_{\ jl|k}-L^i_{\ jk|l}+L^i_{\ sk}L^s_{\ jl}-L^i_{\ sl}L^s_{\ jk}=0,\label{GL}
\end{equation} where ``$|$" denotes the horizontal derivation with respect to the Berwald connection of $F$. By \eqref{GL}, we get \begin{eqnarray} &&L_{isk}L^s_{\ jl}-L_{isl}L^s_{\ jk}=0,\label{GL4} \\
&&L_{ijl|k}-L_{ijk|l}=0.\label{GL5} \end{eqnarray} The Landsberg curvature of Finsler surface satisfies \begin{equation}\label{B3b} L_{jkl}+\mu FC_{jkl}=0. \end{equation} where $\mu:=-{4I_{,1}}/{I}$. By \eqref{GL4} and \eqref{B3b}, we get \begin{eqnarray} \mu F\Big\{C_{isk}C^s_{\ jl}-C_{isl}C^s_{\ jk}\Big\}=0.\label{GL5} \end{eqnarray} We have two cases: If $C_{isk}C^s_{\ jl}-C_{isl}C^s_{\ jk}=0$, then the vv-curvature is vanishing. In \cite{Sc}, Schneider proved that vv-curvature is vanishing if and only if $F$ is Riemannian. If $\mu=0$, then by \eqref{B3b} it follows that $F$ is a Landsberg metric. By Theorem \ref{MainTHM1}, we get the proof. \end{proof}
Let us define ${\bf \tilde J}= {\tilde J}_{ij}dx^i\otimes dx^j$, by \begin{equation}
{\bf \tilde J}:=\big(J_{i,j}+J_{j,i}\big)_{|m}y^m.\label{X1} \end{equation} In \cite{X}, Xia proved that every $n$-dimensional compact Finsler manifold with ${\bf \tilde J}=2{\bf \tilde H}$ is weakly Landsberg. Here, we prove the following. \begin{cor}\label{Prop1} Every homogeneous Finsler surface satisfying ${\bf \tilde J}=2{\bf \tilde H}$ is Riemannian or locally Minkowskian. \end{cor} \begin{proof} The following Bianchi identity holds
\begin{equation} H_{ij}:=\frac{1}{2}\big(J_{i,j}+J_{j,i}-(I_{i,j})_{|p}y^p\big)_{|m}y^m.\label{X2}
\end{equation} See \cite{X}. By \eqref{X1} and \eqref{X2}, we get $(I_{i,j})_{|p}y^p=0$ and contracting it with $y^j$ yields
\begin{equation} J_{i|p}y^p=0.\label{X3} \end{equation} For any geodesic $c=c(t)$ and any parallel vector field $U=U(t)$ along $c$, let us put \[ {\bf I}(t)={\bf I}_{\dot c}\big(U(t),U(t), U(t)\big), \ \ \ \ {\bf J}(t)={\bf J}_{\dot c}\big(U(t),U(t), U(t)\big). \] Thus, we have \begin{equation} {\bf J}(t)={\bf I}^{'}(t).\label{GD15} \end{equation} Integrating \eqref{X3} implies that \[ {\bf I}(t)={\bf J}(0)t+{\bf I}(0). \] Every homogeneous manifold $M$ is complete, so the parameter $t$ takes all values in $(-\infty,+\infty)$. Letting $ t\rightarrow +\infty $ or $ t\rightarrow -\infty $, we see that if ${\bf J}(0)\neq 0$ then $\textbf{I}(t)$ is unbounded, which is a contradiction. Therefore $ \textbf{J}(0)={\bf J}(t)=0 $. On the other hand, every Finsler surface is C-reducible, i.e., \begin{equation} {\bf C}_y(u, v, w)= {1\over 3}\Big\{{\bf I}_y(u){\bf h}_y(v, w)+{\bf I}_y(v){\bf h}_y(u, w)+{\bf I}_y(w){\bf h}_y(u, v) \Big\}.\label{CR} \end{equation} Taking the horizontal derivative of \eqref{CR} yields \begin{equation} {\bf L}_y(u, v, w)= {1\over 3}\Big\{{\bf J}_y(u){\bf h}_y(v, w)+{\bf J}_y(v){\bf h}_y(u, w)+{\bf J}_y(w){\bf h}_y(u, v) \Big\}.\label{LR} \end{equation} Putting ${\bf J}=0$ in \eqref{LR} implies that ${\bf L}=0$. By Theorem \ref{MainTHM1}, we get the proof. \end{proof}
\noindent Akbar Tayebi\\ Department of Mathematics, Faculty of Science\\ University of Qom \\ Qom. Iran\\ Email:\ akbar.tayebi@gmail.com
\noindent Behzad Najafi\\ Department of Mathematics and Computer Sciences\\ Amirkabir University (Tehran Polytechnic)\\ Hafez Ave.\\ Tehran. Iran\\ Email:\ behzad.najafi@aut.ac.ir
\end{document}
\begin{document}
\title[]{Tur\'an number of disjoint triangles in 4-partite graphs} \thanks{ Jie Han is partially supported by a Simons Collaboration Grant 630884. Yi Zhao is partially supported by an NSF grant DMS 1700622 and Simons Collaboration Grant 710094.}
\author{Jie Han} \address{Department of Mathematics, University of Rhode Island, 5 Lippitt Road, Kingston, RI, 02881, USA} \email{jie\_han@uri.edu}
\author{Yi Zhao}
\address {Department of Mathematics and Statistics, Georgia State University, Atlanta, GA 30303} \email{yzhao6@gsu.edu}
\maketitle
\begin{abstract} Let $k\ge 2$ and $n_1\ge n_2\ge n_3\ge n_4$ be integers such that $n_4$ is sufficiently large relative to $k$. We determine the maximum number of edges of a 4-partite graph with parts of sizes $n_1,\dots, n_4$ that does not contain $k$ vertex-disjoint triangles. For any $r> t\ge 3$, we give a conjecture on the maximum number of edges of an $r$-partite graph that does not contain $k$ vertex-disjoint cliques $K_t$.
\end{abstract}
\section{Introduction} \label{sec:intro}
Given two graphs $G$ and $F$, we say that $G$ is \emph{$F$-free} if $G$ does not contain $F$ as a subgraph. Let $K_t$ denote a complete graph on $t$ vertices, and $T_{n,t}$ denote a balanced complete $t$-partite graph on $n$ vertices (now known as the \emph{Tur\'an graph}). In 1941, Tur\'an~\cite{turan} proved that $T_{n,t}$ has the maximum number of edges among all $K_{t+1}$-free graphs (the case $t=2$ was previously solved by Mantel~\cite{mantel}). Tur\'an's result initiated the study of Extremal Graph Theory, an important area of research in modern Combinatorics (see the monograph of Bollob\'{a}s \cite{MR506522}).
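As a quick numerical illustration (our own sketch, not part of the original argument; the function name is ours), the edge count of the Tur\'an graph $T_{n,t}$ can be computed from its part sizes and compared with the familiar bound $(1-1/t)n^2/2$:

```python
from itertools import combinations

def turan_edges(n: int, t: int) -> int:
    """Number of edges of the Turan graph T_{n,t} (balanced complete t-partite)."""
    parts = [n // t + (1 if i < n % t else 0) for i in range(t)]
    # Edges run only between distinct parts.
    return sum(a * b for a, b in combinations(parts, 2))

for n, t in [(6, 3), (7, 3), (10, 2)]:
    # e(T_{n,t}) <= (1 - 1/t) n^2 / 2, with equality when t divides n.
    assert turan_edges(n, t) <= (1 - 1 / t) * n * n / 2
    print(n, t, turan_edges(n, t))
```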
Let $kK_t$ denote $k$ disjoint copies of $K_t$. Simonovits~\cite{Simonovits} studied the Tur\'an problem for $kK_t$ and showed that when $n$ is sufficiently large, the (unique) extremal graph on $n$ vertices is the join of $K_{k-1}$ and the Tur\'an graph $T_{n-k+1, t-1}$.
In this paper we consider Tur\'an problems in multi-partite graphs. Let $K_{n_1,n_2,\dots, n_r}$ denote the complete $r$-partite graph on parts of sizes $n_1,n_2,\dots, n_r$. This variant of the Tur\'an problem was first considered by Zarankiewicz~\cite{zaran}, who was interested in the case of forbidding $K_{s,t}$ in (subgraphs of) $K_{a,b}$. Formally, given graphs $H$ and $F$, we define ex$(H, F)$ as the maximum number of edges in an $F$-free subgraph of $H$. Bollob\'{a}s, Erd\H{o}s, and Straus \cite{MR379256} (see also \cite[Page 544]{MR506522}) proved the following result. For any subset $I\subseteq [r]$, write $n_I := \sum_{i\in I} n_i$.
\begin{thm}\cite{MR379256}\label{thm:BES} The extremal number $\ex(K_{n_1,\dots, n_r}, K_t)$ is equal to \[ \max_{\mathcal{P}} \sum_{I\ne I'\in \mathcal P} n_I\cdot n_{I'}, \] where the maximum is taken over all partitions $\mathcal P$ of $[r]$ into $t-1$ parts. \end{thm}
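The theorem can be sanity-checked by brute force on a tiny instance; the following sketch (our own code, not from the paper) computes $\ex(K_{2,2,2},K_3)$ by enumerating edge subsets and compares it with the partition formula.

```python
from itertools import combinations, product

# Vertices of K_{2,2,2} are pairs (part, index); edges join distinct parts.
parts = [[(i, j) for j in range(2)] for i in range(3)]
verts = [v for p in parts for v in p]
edges = [frozenset((u, v)) for u, v in combinations(verts, 2) if u[0] != v[0]]

# Every triangle of K_{2,2,2} picks one vertex from each part.
tri_edges = [{frozenset((a, b)), frozenset((b, c)), frozenset((a, c))}
             for a, b, c in product(*parts)]

def max_triangle_free():
    """ex(K_{2,2,2}, K_3) by exhaustive search over edge subsets."""
    for r in range(len(edges), 0, -1):
        for E in combinations(edges, r):
            s = set(E)
            if all(not t <= s for t in tri_edges):
                return r
    return 0

# Formula: max over partitions of the 3 parts into t-1 = 2 groups of the
# product of group sizes (every such partition is a singleton vs. the rest).
sizes = [2, 2, 2]
formula = max(sizes[i] * (sum(sizes) - sizes[i]) for i in range(3))
print(max_triangle_free(), formula)  # both equal 8
```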
The problem of forbidding disjoint copies of cliques in multi-partite graphs has been studied recently. Chen, Li and Tu~\cite{CLT} determined $\ex(K_{n_1,n_2}, kK_2)$ and De Silva, Heysse and Young~\cite{DHY} showed that $\ex(K_{n_1, \dots, n_r}, kK_2) = (k-1)(n_1 + \dots + n_{r-1})$ for $n_1\ge \cdots \ge n_r$.
De Silva, Heysse, Kapilow, Schenfisch and Young~\cite{DHKSY} determined $\ex(K_{n_1,\dots, n_r}, kK_r)$ and raised the question of determining $\ex(K_{n_1,\dots, n_r}, kK_t)$ when $r>t$. After giving another proof of Theorem~\ref{thm:BES}, Bennett, English and Talanda-Fisher~\cite{MR3952133} reiterated this question.
\begin{problem}\cite{DHKSY}\label{pro1} Determine $\ex(K_{n_1,\dots, n_r}, kK_t)$ when $r>t$. \end{problem}
In this paper we solve Problem~\ref{pro1} for $r=4$ and $t=3$ when all $n_i$'s are sufficiently large. To state our result, for $k\ge 1$, we define a function of positive integers $n_1\ge n_2\ge n_3\ge n_4$: \begin{align*}
g_k(n_1, n_2, n_3, n_4) &:=\max\left\{ (n_1+n_4)(n_2+n_3)+ (k-1)n_1, n_1(n_2+n_3+n_4) + (k-1)(n_2+n_3) \right\} \\ &= \left\{\begin{array}{lr}
(n_1+n_4)(n_2+n_3) + (k-1) n_1, & \text{if } n_1\le n_2+n_3, \\
n_1(n_2+n_3+n_4) + (k-1)(n_2+n_3), & \text{if } n_1> n_2+n_3.
\end{array}\right. \end{align*} When $G$ is a 4-partite graph with parts of sizes $n_1\ge n_2\ge n_3\ge n_4$, we define $g_k(G):= g_k(n_1, n_2, n_3, n_4)$. For arbitrary positive integers $a, b, c, d$, we define
$g_k(a,b,c,d)=g_k(a_1, a_2, a_3, a_4)$, where $a_1\ge a_2\ge a_3\ge a_4$ is a reordering of $a, b, c, d$.
\begin{thm}\label{thm:main} Given $k\ge 1$, there exists $N_0(k)$ such that if $G$ is a $kK_3$-free 4-partite graph with parts of sizes $n_1\ge n_2\ge n_3\ge n_4\ge 6k^2$ and $n_1+n_2+n_3+n_4\ge N_0(k)$, then $e(G)\le g_k(n_1, n_2, n_3, n_4) $. In other words, $\ex(K_{n_1, n_2, n_3, n_4}, kK_3)\le g_k(n_1, n_2, n_3, n_4)$.
\end{thm}
Theorem~\ref{thm:main} is tight due to two constructions $G_1$ and $G_2$ below. In fact, a subgraph of $G_2$ was given by De Silva et al. \cite{DHKSY} as a potential extremal construction; later Wagner~\cite{Wag19} realized that $G_1$ was a better construction for the $n_1 = n_2 = n_3 = n_4$ case.
Let $n_1\ge n_2\ge n_3\ge n_4\ge k$. We define two 4-partite graphs with parts $V_1,\dots, V_4$ such that $|V_i|=n_i$. Fix a set $Z$ of $k-1$ vertices in $V_4$. Let \[ G_1: =K_{V_1\cup V_4, \,V_2\cup V_3}\cup K_{Z, \,V_1} \text{ and } G_2:=K_{V_1, \,V_2\cup V_3\cup V_4}\cup K_{Z, \,V_2\cup V_3}, \] where $K_{V_1, \dots, V_r}$ denotes the complete $r$-partite graph with parts $V_1, \dots, V_r$.
Note that in each graph every triangle must use an edge incident to $Z$ (the remaining edges form a bipartite graph), so both $G_1$ and $G_2$ are $kK_3$-free. Moreover, $e(G_1)=(n_1+n_4)(n_2+n_3)+ (k-1)n_1$ and $e(G_2)=n_1(n_2+n_3+n_4) + (k-1)(n_2+n_3)$.
Thus $e(G_2)\le e(G_1)$ if and only if $n_1\le n_2+n_3$, with equality when $n_1=n_2+n_3$; indeed, $e(G_1)-e(G_2)=(n_2+n_3-n_1)(n_4-k+1)$ and $n_4\ge k$.
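The edge counts of $G_1$ and $G_2$ can be checked by brute force on small instances; the Python sketch below (an illustration only) builds both graphs explicitly and compares their sizes against the formulas above.

```python
def extremal_edges(sizes, k, which):
    # Build G1 or G2 on parts V1..V4 realised as disjoint integer ranges,
    # with Z the first k-1 vertices of V4, and return the number of edges.
    n1, n2, n3, n4 = sizes
    offs = [0, n1, n1 + n2, n1 + n2 + n3]
    V = [set(range(offs[i], offs[i] + sizes[i])) for i in range(4)]
    Z = set(sorted(V[3])[:k - 1])
    E = set()
    def join(A, B):  # add all edges of the complete bipartite graph (A, B)
        E.update(frozenset((a, b)) for a in A for b in B)
    if which == 1:   # G1 = K_{V1 u V4, V2 u V3}  together with  K_{Z, V1}
        join(V[0] | V[3], V[1] | V[2]); join(Z, V[0])
    else:            # G2 = K_{V1, V2 u V3 u V4}  together with  K_{Z, V2 u V3}
        join(V[0], V[1] | V[2] | V[3]); join(Z, V[1] | V[2])
    return len(E)

n1, n2, n3, n4, k = 6, 5, 4, 3, 2
assert extremal_edges((n1, n2, n3, n4), k, 1) == (n1 + n4)*(n2 + n3) + (k - 1)*n1
assert extremal_edges((n1, n2, n3, n4), k, 2) == n1*(n2 + n3 + n4) + (k - 1)*(n2 + n3)
```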
\begin{figure}
\caption{The extremal graphs $G_1$ and $G_2$}
\end{figure}
Our proof uses a \emph{progressive induction} (an induction without a base case) on the total number of vertices and a standard induction on~$k$ that uses Theorem~\ref{thm:BES} as the base case.
We conjecture an answer to Problem~\ref{pro1} in general, which includes all aforementioned results \cite{MR3952133, CLT, DHY} and Theorem~\ref{thm:main}. \begin{conj} \label{conj} Given $r> t\ge 3$ and $k\ge 2$, let $n_1, \dots, n_r$ be sufficiently large. For $I\subseteq [r]$, write $n_I := \sum_{i\in I}n_i$ and $m_I := \min_{i\in I}n_i$. Given a partition $\mathcal P$ of $[r]$, let $n_{\mathcal P}:=\max_{I\in \mathcal P}\{n_I-m_I\}$. The Tur\'an number $\ex(K_{n_1,\dots, n_r}, kK_t)$ is equal to \begin{equation}\label{eq:conj} \max_{\mathcal{P}} \left\{ (k-1)n_{\mathcal P} + \sum_{I\ne I'\in \mathcal P} n_I\cdot n_{I'} \right\}, \end{equation} where the maximum is taken over all partitions $\mathcal P$ of $[r]$ into $t-1$ parts. \end{conj}
The bound~\eqref{eq:conj} is achieved by the following graph. Given integers $k, t$ and $n_1,\dots, n_r$ with $r>t$ and $n_i\ge k$ for all $i$, let ${\mathcal P}$ be a partition of $[r]$ into $t-1$ parts that maximizes~\eqref{eq:conj}. Let $G$ be an $r$-partite graph whose parts have sizes $n_1,\dots, n_r$. Partition $G$ into $t-1$ parts according to ${\mathcal P}$, namely, let $V_{I}=\bigcup_{i\in I}V_i$ for every $I\in {\mathcal P}$ and include all edges between $V_I$ and $V_{I'}$ for all $I\ne I'\in {\mathcal P}$. In addition, let $I_0\in \mathcal P$ such that $n_{{\mathcal P}}=n_{I_0}-m_{I_0}$ and let $V_{i_0}$ be the smallest part in $V_{I_0}$. We choose a set $Z\subseteq V_{i_0}$ of $k-1$ vertices and add all edges between $Z$ and $V_{I_0}\setminus V_{i_0}$.
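To make the bound \eqref{eq:conj} concrete, here is a small Python evaluator (an illustration, not part of the paper) that enumerates the partitions of $[r]$ into $t-1$ parts and computes the conjectured maximum; for $r=4$, $t=3$ and small $k$ it reproduces $g_k$ on the sampled instances.

```python
from itertools import combinations

def partitions(elems, blocks):
    # All partitions of the list `elems` into exactly `blocks` nonempty blocks.
    if blocks == 1:
        yield [list(elems)]
        return
    first, rest = elems[0], elems[1:]
    for size in range(len(rest) + 1):
        for comb in combinations(rest, size):
            remaining = [e for e in rest if e not in comb]
            if len(remaining) >= blocks - 1:
                for tail in partitions(remaining, blocks - 1):
                    yield [[first, *comb]] + tail

def conj_bound(k, t, sizes):
    # The maximum in (eq:conj) over partitions of [r] into t-1 parts.
    best = 0
    for P in partitions(list(range(len(sizes))), t - 1):
        nI = [sum(sizes[i] for i in I) for I in P]
        nP = max(sum(sizes[i] for i in I) - min(sizes[i] for i in I) for I in P)
        cross = sum(nI[a] * nI[b] for a in range(len(P)) for b in range(a + 1, len(P)))
        best = max(best, (k - 1) * nP + cross)
    return best

def g(k, sizes):
    n1, n2, n3, n4 = sorted(sizes, reverse=True)
    return max((n1 + n4)*(n2 + n3) + (k - 1)*n1,
               n1*(n2 + n3 + n4) + (k - 1)*(n2 + n3))

# For r = 4, t = 3 and small k, the conjectured bound coincides with g_k.
for sizes in [(5, 4, 3, 2), (9, 4, 4, 4), (7, 7, 7, 7), (12, 5, 4, 3)]:
    for k in (1, 2):
        assert conj_bound(k, 3, sizes) == g(k, sizes)
```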
Verifying Conjecture~\ref{conj} seems hard due to the complexity of \eqref{eq:conj} -- we shall discuss this in the last section.
\noindent\textbf{Notation.}
Given a graph $G=(V, E)$, let $|G|$ denote the order of $G$. Suppose $A, B$ are two disjoint subsets of $V$.
Let $e(A):=e(G[A])$ be the number of edges of $G$ in $A$ and $e(A, B)$ be the number of edges of $G$ with one end in $A$ and the other in $B$. Moreover, let $G\setminus A:=G[V\setminus A]$. Denote by \[ e(A; G) := e(G) - e(G\setminus A), \] the number of edges of $G$ incident to $A$. Given a vertex $x$, let $N(x)$ denote the set of neighbors of $x$. For vertices $x, y$ and $z$, we often write $xyz$ for $\{x, y, z\}$. We sometimes abuse this notation by using $xy\in A\times B$ to indicate that $x\in A$ and $y\in B$. Given an $r$-partite graph $G$, a \emph{crossing set} is a set that contains at most one vertex from each part of $G$.
\section{Proof of Theorem~\ref{thm:main}}
In this section we prove Theorem~\ref{thm:main}. Define two sequences $N_0(k)$ and $M_0(k)$ recursively by letting $N_0(1)=1$, \begin{equation}\label{eq:M0} M_0(k)=\max\{72(k-1)^3, 96k^2, N_0(k-1)+3\}, \quad \text{and} \quad N_0(k)= M_0(k)^2 \end{equation} for $k\ge 2$. Given a 4-partite graph $G$, let $v_4(G)$ denote the size of the smallest part of $G$. Define $\phi(G) := e(G) - g_k(G)$. The following theorem is the main step in the proof of Theorem~\ref{thm:main}. \begin{thm} \label{thm:key} Suppose $k\ge 2$ and Theorem~\ref{thm:main} holds for $k-1$.
Let $G$ be a 4-partite graph of order $|G| > M_0(k)$ and with $v_4(G) \ge 6k^2$. If $G$ is $k K_3$-free and $\phi(G)> 0$, then we can find a subgraph $G'$ of $G$ such that $|G| -2 \le |G'| \le |G| - 1$, $v_4(G')\ge 6k^2$, and $\phi(G') > \phi(G)$. \end{thm}
Theorem~\ref{thm:main} now follows from Theorem~\ref{thm:key} by an induction on $k$ and a progressive induction on $|G|$ (e.g., used in \cite{Simonovits}).
\begin{proof}[Proof of Theorem~\ref{thm:main}]
The base case $k=1$ follows from Theorem~\ref{thm:BES} with $N_0(1)=1$. Let $k\ge 2$ and $G$ be a 4-partite graph of order $|G| \ge N_0(k)$ and with $v_4(G) \ge 6k^2$. Suppose $G$ is $k K_3$-free and $\phi(G)> 0$. By Theorem~\ref{thm:key}, we find a subgraph $G_1\subset G$ such that $|G| -2 \le |G_1| \le |G| - 1$, $v_4(G_1)\ge 6k^2$, and $\phi(G_1) > \phi(G)\ge 1$. Repeating this process, we obtain subgraphs $G_1\supset G_2\supset G_3 \supset \cdots \supset G_t$ such that $|G| - 2i \le |G_i|\le |G| - i$ and $\phi(G_i) > i$ for $i=1, \dots, t$. We stop at $G_t$ because $|G_t|\le M_0(k)$. Hence, \[
t \ge \frac{|G|- |G_t|}2 \ge \frac{N_0(k)- M_0(k)}2 = \frac{M_0(k)^2 - M_0(k)}2 = \binom{M_0(k)}{2}. \] Consequently, $\phi(G_t) > \binom{M_0(k)}{2}$. However, this is impossible because $\phi(G_t)\le e(G_t) \le \binom{M_0(k)}{2}$. \end{proof}
The rest of this section is devoted to the proof of Theorem~\ref{thm:key}.
\begin{proof}[Proof of Theorem~\ref{thm:key}] Let $k\ge 2$ and suppose that \begin{enumerate}[label=($\ast$)] \item for any $(k-1)K_3$-free 4-partite graph $\tilde{G}$ with part sizes $n_1'\ge n_2'\ge n_3'\ge n_4'\ge 6(k-1)^2$ and $\sum_{i\in [4]}n_i' \ge N_0(k-1)$, we have $e(\tilde{G})\le g_{k-1}(n_1', n_2', n_3', n_4')$. \label{item:IH} \end{enumerate}
Let $G$ be a 4-partite graph of order $|G| > M_0(k)$ and with parts of size $n_1\ge n_2\ge n_3\ge n_4 \ge 6k^2$. Assume that $G$ is $kK_3$-free and $\phi(G)> 0$. Without loss of generality, we assume that $G$ contains $k-1$ disjoint triangles -- otherwise we keep adding edges to $G$ until it contains $k-1$ disjoint triangles (as a result, $\phi(G)$ increases).
Our goal is to show that there exists a crossing set $T\subset V(G)$ of size at most 2 such that $\phi(G) < \phi(G\setminus T)$ and $v_4(G\setminus T)\ge 6k^2$.
We proceed in the following cases. It is easy to see that these cases cover all possibilities. In each case we verify $v_4(G\setminus T)\ge 6k^2$ immediately.
\noindent\textbf{Case 0. $n_1 > n_2 + n_3$.} We will select a one-element set $T\subset V_1$. Since $n_1> 2 n_4$, we have $n_1 - 1> n_4$ and thus $v_4(G\setminus T)= n_4 \ge 6k^2$.
We assume $n_1\le n_2+n_3$ in the remaining cases.
\noindent\textbf{Case 1. $n_1 > n_3$ and $n_2> n_4$.} We will select a crossing set $T\subset V_1\cup V_2$. Since $n_1 - 1 \ge n_2 - 1 \ge n_4$, we have $v_4(G\setminus T) = n_4 \ge 6k^2$.
\noindent\textbf{Case 2. $n_1 = n_2 = n_3 \ge n_4 > 6k^2$.} We select a one-element set $T\subset V(G)$. Then $v_4(G\setminus T)\ge n_4-1\ge 6k^2$.
\noindent\textbf{Case 3. $n_1 = n_2 = n_3 > n_4 = 6k^2$.} We will select a one-element set $T\subset V_1\cup V_2\cup V_3$. Since $n_3 -1 \ge n_4$, we have $v_4(G\setminus T) = n_4 = 6k^2$.
\noindent\textbf{Case 4. $n_1 > n_2 = n_3 = n_4 $.} We will select a one-element set $T\subset V_1$. Since $n_1>n_4$, $v_4(G\setminus T)= n_4\ge 6k^2$.
It remains to show $\phi(G) < \phi(G\setminus T)$ in \textbf{Cases 0--4}. This is actually easy in \textbf{Case 0}.
\noindent \textbf{Case 0.} Recall that $\phi(G) = e(G) - g_k(n_1, n_2, n_3, n_4) > 0$. Since $n_1 > n_2 + n_3$, \[ g_k(n_1, n_2, n_3, n_4) = n_1(n_2+n_3+n_4) + (k-1)(n_2+n_3). \] First assume that some vertex $v\in V_1$ satisfies $d(v)<n_2+n_3+n_4$. Let $T=\{v\}$. Since $n_1-1\ge n_2+n_3$, \begin{align*} g_k(n_1-1, n_2, n_3, n_4) &= (n_1 - 1)(n_2+n_3+n_4) + (k-1)(n_2+n_3)\\ &= g_k(n_1, n_2, n_3, n_4) - (n_2+n_3+n_4). \end{align*} It follows that \[ \phi(G\setminus \{v\}) = e(G) - d(v) - g_k(n_1-1, n_2, n_3, n_4) > e(G) - g_k(n_1, n_2, n_3, n_4) = \phi(G), \]
as desired. Otherwise, $G[V_1, V_2\cup V_3\cup V_4]$ must be complete. Since $G$ is $k K_3$-free, it follows that $G[V_2\cup V_3\cup V_4]$ contains no matching of size $k$. The result of~\cite{DHY} or a simple induction on $k$\footnote{If there is a vertex of degree at least $2k-1$, then we can delete it and apply induction; otherwise, as the size of the maximum matching is $k-1$, there are at most $2(k-1)(2k-1)\le (k-1)(n_2 + n_3)$ edges (using $k\ll n_3\le n_2$).} yields that $e(G[V_2\cup V_3\cup V_4]) \le (k-1)(n_2+n_3)$. This shows that $e(G)\le n_1(n_2+n_3+n_4) + (k-1)(n_2+n_3)$, namely, $\phi(G)=0$, a contradiction.
In the rest of the proof we assume $n_1\le n_2+n_3$ and will resolve \textbf{Cases 1--4}.
One difficulty in these cases is that, after we delete a set $T\subseteq V(G)$, the sizes of the four parts of $G\setminus T$ may not follow the order in $G$. For instance, suppose $n_1\le n_2 + n_3$ and $T=\{v\}\subseteq V_1$. If $n_1>n_2$, then the order of the part sizes of $G\setminus T$ is $n_1-1\ge n_2\ge n_3\ge n_4$, the same as in $G$.
However, when $n_1=n_2>n_3\ge n_4$, the order of the part sizes of $G\setminus T$ is $n_2\ge n_1-1\ge n_3\ge n_4$, and the degree estimates we obtain are quite different.
Another complication comes from the fact that there are two possible extremal graphs. Even under the assumption that $n_1 \le n_2 + n_3$, we still have to consider the possibility of $n'_1> n'_2 + n'_3$ in $G\setminus T$, where $n'_1, n'_2, n'_3, n'_4$ are the part sizes of $G\setminus T$.
Although a case analysis is inevitable, we study the structure of $G$ in Section~\ref{sec:21} and use it to simplify the presentation of the proofs of \textbf{Cases 1--4} in Section~\ref{sec:22}.
\subsection{Preparation} \label{sec:21}
We first give several preliminary results.
An edge of $G$ is called \emph{rich} if it is contained in at least $k$ triangles whose third vertices are located in the same part of $V(G)$. We show that every triangle in $G$ must contain a rich edge and $G$ contains at most $6(k-1)^2$ rich edges. Let $Z$ be the set of vertices incident to at least one rich edge. Thus, not only is $G\setminus Z$ triangle-free,
but also \emph{every edge in $G\setminus Z$ is not contained in any triangle of $G$} because such a triangle would not contain any rich edge.
We shall use the following simple fact. \begin{fact}\label{fact:com_neigh}
Let $G$ be a 4-partite graph with parts $V_1,\dots, V_4$ and suppose $x\in V_1$ and $y\in V_2$. Let $n_i:=|V_i|$ for $i\in [4]$. Then $x$ and $y$ have at least $d(x)+d(y) - \sum_{i\in [4]}n_i$ common neighbors in $G$.
In particular, if $x$ and $y$ have no common neighbor, then $d(x)+d(y) = \sum_{i\in [4]}n_i$ implies that $xy\in E(G)$, $V_2\subseteq N(x)$ and $V_1\subseteq N(y)$. Moreover, if $d(x)+d(y)\ge \sum_{i\in [4]}n_i+2k-1$, then $xy$ is rich. \end{fact}
\begin{proof}
Note that $|N(x)\cap (V_3\cup V_4)| = d(x) - |N(x)\cap V_2| \ge d(x) - n_2$ and $|N(y)\cap ( V_3\cup V_4)|=d(y) - |N(y)\cap V_1|\ge d(y) - n_1$. Let $m$ denote the number of common neighbors of $x$ and $y$. Then $m\ge |N(x)\cap (V_3\cup V_4)| + |N(y)\cap (V_3\cup V_4)| - n_3 - n_4 \ge d(x)+d(y) - \sum_{i\in [4]}n_i$. So the first part of the fact follows. In particular, if $m=0$, then $d(x)+d(y) \le \sum_{i\in [4]}n_i$. Moreover, if the equality holds, then the inequalities in previous calculations must be equalities. In particular, $V_2 \subseteq N(x)$ and $V_1\subseteq N(y)$, which also imply that $xy\in E(G)$.
For the ``moreover'' part, note that $d(x)+d(y)\ge \sum_{i\in [4]}n_i+2k-1$ implies that $x$ and $y$ have at least $2k-1$ common neighbors and thus at least $k$ common neighbors in one part. Therefore $xy$ is rich. \end{proof}
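The counting in Fact~\ref{fact:com_neigh} is easy to stress-test on random 4-partite graphs; the Python sketch below (not part of the paper) does so for vertices $x\in V_1$ and $y\in V_2$.

```python
import random

def check_common_neighbor_bound(trials=300, seed=1):
    rng = random.Random(seed)
    for _ in range(trials):
        sizes = [rng.randint(2, 5) for _ in range(4)]
        offs = [sum(sizes[:i]) for i in range(4)]
        parts = [list(range(offs[i], offs[i] + sizes[i])) for i in range(4)]
        adj = {v: set() for part in parts for v in part}
        # Put each crossing pair in the edge set independently with prob. 1/2.
        for i in range(4):
            for j in range(i + 1, 4):
                for u in parts[i]:
                    for v in parts[j]:
                        if rng.random() < 0.5:
                            adj[u].add(v)
                            adj[v].add(u)
        x, y = parts[0][0], parts[1][0]
        # Fact: x and y have at least d(x) + d(y) - sum(n_i) common neighbours.
        assert len(adj[x] & adj[y]) >= len(adj[x]) + len(adj[y]) - sum(sizes)
    return True

assert check_common_neighbor_bound()
```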
Recall that we have assumed that $\phi(G)>0$ and $n_1\le n_2 + n_3$. Thus, \begin{equation} \label{eq:contra} e(G)> g_k(n_1, n_2, n_3, n_4) = (n_1+n_4)(n_2+n_3)+ (k-1)n_1. \end{equation}
Let $R$ be the subgraph of $G$ induced by the rich edges of $G$, and let $Z= V(R)$ be the set of the vertices of $G$ that are incident to at least one rich edge.
\begin{clm} Suppose \ref{item:IH}, \eqref{eq:contra}, and $G$ is $k K_3$-free. Then the following assertions hold: \begin{enumerate}[label=$(\roman*)$] \item every vertex is contained in at most $k-1$ edges of $R$ whose other ends are located in the same part of $G$; in particular, the maximum degree of $R$ is at most $3k-3$; \label{item:1}
\item $e(R)\le 6(k-1)^2$ and $|Z|\le 6(k-1)^2$; \label{item:2} \item every triangle in $G$ contains an edge in $R$. \label{item:3}
\end{enumerate} \end{clm}
\begin{proof} We first show \ref{item:1} $\Rightarrow$ \ref{item:2}. Note that if $R$ has a matching of size $k$, then we can greedily build $k$ vertex-disjoint triangles by extending each rich edge in the matching. This contradicts the assumption that $G$ is $k K_3$-free. Therefore, the largest matching in $R$ is of size at most $k-1$ and consequently, $R$ has a vertex cover of size at most $2(k-1)$. If the maximum degree of $R$ is at most $3k-3$, then
$e(R)\le 2(k-1)(3k-4) + k-1 < 6(k-1)^2$ and $|Z|\le 2(k-1)(3k-4) + 2(k-1) = 6(k-1)^2$, confirming \ref{item:2}.
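The two numerical bounds used here are elementary but easy to misread, since one is strict and the other is an exact identity; a quick check (illustration only):

```python
# e(R) <= 2(k-1)(3k-4) + (k-1) < 6(k-1)^2, and
# |Z| <= 2(k-1)(3k-4) + 2(k-1) = 2(k-1)(3k-3) = 6(k-1)^2 exactly.
for k in range(2, 200):
    assert 2*(k - 1)*(3*k - 4) + (k - 1) < 6*(k - 1)**2
    assert 2*(k - 1)*(3*k - 4) + 2*(k - 1) == 6*(k - 1)**2
```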
To see \ref{item:1}, we assume that some vertex $v$ is incident to $k$ rich edges whose other ends are in the same part of $G$.
If there is a copy $S$ of $(k-1)K_3$ in $G\setminus \{v\}$, then we can pick a rich edge in $G\setminus S$ that contains $v$ and then extend this rich edge to a triangle that does not intersect $S$. This gives a $kK_3$ in $G$, a contradiction. Thus, we infer that $G\setminus \{v\}$ is $(k-1)K_3$-free.
Let $n'_1 \ge n'_2 \ge n'_3 \ge n'_4$ be the sizes of four parts of $G\setminus \{v\}$.
By \ref{item:IH}, we have $e(G\setminus \{v\})\le g_{k-1}(n'_1, n'_2, n'_3, n'_4)$. To estimate $g_{k-1}(n'_1, n'_2, n'_3, n'_4)$, we first observe that there exists $i_0\in [4]$ such that $n'_i = n_i$ for all $i\ne i_0$ and $n'_{i_0} = n_{i_0} - 1$; and furthermore, $n'_i = |V_i\setminus \{v\}|$ for $i\in [4]$ after relabeling $V_1, V_2, V_3, V_4$ if necessary (but maintaining $n_i = |V_i|$). This is obvious when $v\in V_{i_0}$ and $n_{i_0} > n_{i_0 + 1}$. Otherwise, for example, assume that $v\in V_1$ and $n_1 = n_2> n_3$ (other cases are similar). Then $n'_1 = n_2 = n_1$ and $n'_2 = n_1 -1 = n_2 -1$. After relabeling $V_1$ and $V_2$, we have $v\in V_2$, and $n'_i = |V_i\setminus \{v\}|$ for $i\in [4]$.
By the definition of $g$, we consider two cases. When $n'_1 \le n'_2 + n'_3$, we have \begin{align} g_{k-1}(n'_1, n'_2, n'_3, n'_4) &= (n'_1+n'_4)(n'_2+n'_3)+ (k-2)n'_1 \nonumber \\ &\le \left\{\begin{array}{lr}
(n_1+n_4-1)(n_2+n_3) + (k-2)n_1 & \text{if } v\in V_1\cup V_4, \\
(n_1+n_4)(n_2+n_3-1) + (k-2)n_1, & \text{if } v\in V_2\cup V_3.
\end{array}\right.
\label{eq:23i} \end{align} Together with \eqref{eq:contra} and \ref{item:IH}, this implies that \begin{align*} d_G(v) &= e(G) - e(G\setminus \{v\}) > g_k(n_1, n_2, n_3, n_4) - g_{k-1}(n'_1, n'_2, n'_3, n'_4) \\ &\ge
\left\{\begin{array}{lr}
n_1+ n_2+n_3 & \text{if } v\in V_1\cup V_4, \\
2n_1+n_4, & \text{if } v\in V_2\cup V_3,
\end{array}\right. \end{align*} which is impossible. When $n'_1 > n'_2 + n'_3$, it must be the case that $n_1 = n_2 + n_3$ and $n'_{i_0} = n_{i_0} - 1$ for $i_0\in \{2, 3\}$. Thus \begin{align*} g_{k-1}(n'_1, n'_2, n'_3, n'_4) &= n'_1(n'_2+n'_3+ n'_4)+ (k-2)(n'_2 + n'_3)\\ &= (n_2+n_3)(n_1+n_4-1) + (k-2)(n_1-1). \end{align*} Together with \eqref{eq:contra} and \ref{item:IH}, this implies that $d_G(v)> n_1 + n_2 + n_3$, which is impossible for any $v\in V(G)$.
To see \ref{item:3}, let $S$ be a triangle in $G$ and consider $G\setminus S$.
Since $G$ is $k K_3$-free, $G\setminus S$ is $(k-1) K_3$-free. By~\ref{item:IH}, we have $e(G\setminus S)\le g_{k-1}(n_1', n_2', n_3', n_4')$ where $n_1'\ge n_2'\ge n_3'\ge n_4'$ are the sizes of parts of $G\setminus S$. We observe that there exists $i_0\in [4]$ such that $n'_i = n_i - 1$ for $i\ne i_0$ and $n'_{i_0} = n_{i_0}$; furthermore, $n'_i = |V_i\setminus S|$ after relabeling $V_1, V_2, V_3, V_4$ if necessary (while maintaining $n_i = |V_i|$). This is obvious when $S\subset \bigcup_{i\ne i_0} V_i$ and either $i_0=1$ or $n_{i_0 - 1}> n_{i_0}$. Otherwise, for example, assume that $S\subset V_1\cup V_2\cup V_3$ and $n_2 > n_3 = n_4$ (other cases are similar). We have $n'_1 = n_1 -1$, $n'_2 = n_2 - 1$, $n'_3 = n_4 = n_3$ and $n'_4 = n_3 -1 = n_4 -1$. After swapping $V_3$ and $V_4$, we have $S\subset V_1\cup V_2\cup V_4$.
If $n_1' \le n_2'+n_3'$, then $g_{k-1}(n_1', n_2', n_3', n_4') = (n_1'+n_4')(n_2'+n_3')+ (k-2)n_1'$. By our observation on the values of $n'_1, n'_2, n'_3, n'_4$, it follows that \[ g_{k-1}(n_1', n_2', n_3', n_4') \le \max_{j=1,2}\{(n_1+n_4- j)(n_2+n_3-(3- j))\}+ (k-2)n_1. \] If $n_1'> n_2'+n_3'$, then $g_{k-1}(n_1', n_2', n_3', n_4') = n_1'(n_2'+n_3' + n'_4)+ (k-2)(n_2' + n'_3)$. In this case, we must have $n_1=n_2+n_3-t$ for $t=0,1$, $n'_2= n_2 -1$, and $n'_3 = n_3 -1$. Thus $n'_i = n_i -1$ either for $i\in [3]$ or for $i\in \{2, 3, 4\}$, and consequently \begin{align*} g_{k-1}(n_1', n_2', n_3', n_4')\le \max\{ & (n_1 -1) (n_2 + n_3 + n_4 - 2) + (k-2) (n_2 + n_3 - 2), \\
& n_1 (n_2 + n_3 + n_4 - 3) + (k-2) (n_2 + n_3 - 2) \}. \end{align*} Since $n_1=n_2+n_3-t$ for $t=0,1$, it follows that \[ g_{k-1}(n_1', n_2', n_3', n_4') \le \max_{j=1,2,3}\{(n_2+n_3- (3-j))(n_1+n_4 -j)\}+ (k-2)(n_1-1). \]
Putting all cases together with $e(G\setminus S)\le g_{k-1}(n_1', n_2', n_3', n_4')$, we conclude that \begin{equation} \label{eq:eGS} e(G\setminus S) \le \max_{j=1,2,3}\{(n_1+n_4- j)(n_2+n_3-(3- j))\}+ (k-2)n_1. \end{equation}
Recall that $e(S; G) := e(G) - e(G\setminus S)$. We next claim that $e(S; G)\ge \frac32 \sum_{i\in [4]}n_i + 3k$.
Indeed, if the maximum in~\eqref{eq:eGS} is achieved by $j=1,2$, then, together with~\eqref{eq:contra}, it gives
\[
e(S; G)> \sum_{i\in [4]} n_i + \min\{n_1+n_4, n_2+n_3\} + n_1-2 \ge \frac32 \sum_{i\in [4]} n_i + n_4-2\ge \frac32 \sum_{i\in [4]} n_i + 3k, \] where we used $n_4\ge 6k^2$ in the last inequality.
Otherwise, the maximum in~\eqref{eq:eGS} is achieved by $j=3$, that is, $e(G\setminus S) \le (n_1+n_4-3)(n_2+n_3)+ (k-2)n_1$.
By~\eqref{eq:contra}, we get \begin{align*} e(S; G)&> (n_1+n_4)(n_2+n_3)+ (k-1)n_1 - (n_1+n_4-3)(n_2+n_3)- (k-2)n_1 \\ &= n_1+3n_2 +3n_3 \ge \frac32 \sum_{i\in [4]}n_i + \frac{n_4}2 \ge \frac32 \sum_{i\in [4]}n_i + 3k, \end{align*} where we used the assumption $n_2+n_3\ge n_1$ and $n_2, n_3\ge n_4$.
Let $S=xyz$ and note that $d(x)+d(y)+d(z) = e(S; G)+3$. By averaging, without loss of generality, we may assume that \[ d(x)+d(y) \ge \frac23 \left(\frac32 \sum_{i\in [4]} n_i + 3k \right) = \sum_{i\in [4]} n_i + 2k. \] By the moreover part of Fact~\ref{fact:com_neigh}, $xy$ is rich and we are done. \end{proof}
For two disjoint sets $A, B\subseteq V(G)$, let $d(A, B) = e(A, B)/(|A| |B|)$ be the density of the bipartite graph with parts $A$ and $B$. A pair $(V_i, V_j)$ is called \emph{full} if $d(V_i\setminus Z, V_j)=d(V_j\setminus Z, V_i)=1$; $(V_i, V_j)$ is called \emph{empty} if $e(V_i\setminus Z, V_j)=e(V_i, V_j\setminus Z)=0$. We have the following observation.
\begin{obs}\label{obs:empty}
For distinct $i, j, t\in [4]$, if $d(V_i\setminus Z, V_j)=d(V_i\setminus Z, V_t)=1$, then $(V_j, V_t)$ must be empty because any edge in $(V_j,V_t)$ but not in $(V_j\cap Z, V_t\cap Z)$ will create a triangle with at most one vertex in $Z$, contradicting~\ref{item:3}. In particular, if both $(V_i, V_j)$ and $(V_i, V_t)$ are full, then $(V_j, V_t)$ is empty. \end{obs}
\begin{clm} \label{claim:1}
Fix $i\ne j\in [4]$. If $d(x)+d(y) \ge \sum_{i\in [4]}n_i$ for every edge $x y\in V_i\times V_j$,
then either \begin{itemize} \item $e(V_i\setminus Z, V_j\setminus Z)=0$ (this is weaker than $(V_i, V_j)$ being empty) or \item $d(V_i\setminus Z, V_j)=d(V_j\setminus Z, V_i)=1$, and $d(x)+d(y) = \sum_{i\in [4]}n_i$. \end{itemize} Moreover, if $d(x)+d(y) > \sum_{i\in [4]}n_i$ for every edge $x y\in V_i\times V_j$, then $(V_i, V_j)$ is empty. \end{clm}
\begin{proof}
Assume that $\{i,j,t,\ell\}=[4]$. Suppose there is an edge $xy \in (V_i\setminus Z)\times (V_j\setminus Z)$. Note that if $x$ and $y$ have a common neighbor $z$, then as $x, y\notin Z$, none of the edges of $xyz$ is rich, contradicting~\ref{item:3}. Thus, $x$ and $y$ have no common neighbor. By Fact~\ref{fact:com_neigh}, $d(x)+d(y)\le \sum_{i\in [4]} n_i$. If $d(x)+d(y) \ge \sum_{i\in [4]}n_i$, then Fact~\ref{fact:com_neigh} implies that $V_j \subseteq N(x)$ and $V_i \subseteq N(y)$. In particular, $x y'\in E(G)$ for every $y'\in V_j\setminus Z$. Applying the same argument to the edge $x y'$, we obtain that $V_i \subseteq N(y')$. Similarly, we can derive that $V_j \subseteq N(x')$ for every $x'\in V_i\setminus Z$. Thus, $d(V_i\setminus Z, V_j)=d(V_j\setminus Z, V_i)=1$.
Now assume $d(x)+d(y) > \sum_{i\in [4]}n_i$ for every edge $x y\in V_i\times V_j$. If $e(V_i\setminus Z, V_j\setminus Z) \ne 0$, then the arguments in the previous paragraph provide a contradiction. Suppose there is an edge $xy\in (V_i\cap Z)\times (V_j\setminus Z)$. As $d(x)+d(y) > \sum_{i\in [4]}n_i$, $x$ and $y$ have some common neighbors in $V_t\cup V_\ell$. But since $y\notin Z$, by~\ref{item:3}, their common neighbors must be in $(V_t\cup V_\ell)\cap Z$. Since $e(V_i\setminus Z, V_j\setminus Z)=0$, we know that $N(y)\cap V_i\subseteq V_i\cap Z$. Altogether, we obtain that $d(x)+d(y)\le n_j+n_t+n_\ell+|Z| < \sum_{i\in [4]}n_i$, a contradiction. Analogous arguments show that there is no edge in $(V_i\setminus Z)\times (V_j\cap Z)$. Thus, $e(V_i\setminus Z, V_j)=e(V_i, V_j\setminus Z)=0$, that is, $(V_i, V_j)$ is empty. \end{proof}
Consider a set $T\subseteq V(G)$ defined in \textbf{Cases~1--4} and let $n'_1, n'_2, n'_3, n'_4$ denote the sizes of the parts of $G\setminus T$.
Then $\phi(G) < \phi(G\setminus T)$ is equivalent to \[ e(G) - g_k(n_1, n_2, n_3, n_4) < e(G\setminus T) - g_k(n'_1, n'_2, n'_3, n'_4), \] or $e(T; G)< g_k(n_1, n_2, n_3, n_4) - g_k(n'_1, n'_2, n'_3, n'_4)$. We will prove by contradiction, assuming that $\phi(G) \ge \phi(G\setminus T)$, equivalently,
\begin{equation}\label{eq:etg} e(T; G) \ge (n_1+n_4)(n_2+n_3) + (k-1)n_1 - g_k(n'_1, n'_2, n'_3, n'_4) \end{equation} for every $T\subseteq V(G)$ defined in \textbf{Cases~1--4}.
The case when $T=\{v\}\subseteq V_1$ occurs in all four cases, so we consider it first. Since $n_1\le n_2+n_3$, we have three possibilities: \begin{itemize} \item if $n_1>n_2$, then $g_k(n_1-1, n_2, n_3, n_4) = (n_1-1+n_4)(n_2+n_3) + (k-1)(n_1-1)$; \item if $n_1=n_2>n_4$, then $g_k(n_1-1, n_2, n_3, n_4) = (n_1+n_4)(n_2+n_3-1) + (k-1)n_1$; \item if $n_1=n_4$, then $g_k(n_1-1, n_2, n_3, n_4) = (n_1+n_4-1)(n_2+n_3) + (k-1)n_1$. \end{itemize} Thus~\eqref{eq:etg} implies that for every $v\in V_1$,
\begin{align}
\label{eq:dV1}
d(v)\ge
\left\{\begin{array}{lr}
n_2 + n_3+k-1, & \text{if } n_1 > n_2, \\
n_1 + n_4, & \text{if } n_1 = n_2.
\end{array}\right.
\end{align}
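The degree thresholds in \eqref{eq:dV1} are differences of $g_k$ at neighboring arguments, which can be confirmed numerically (illustration only; the sample sizes below are hypothetical).

```python
def g(k, sizes):
    n1, n2, n3, n4 = sorted(sizes, reverse=True)
    return max((n1 + n4)*(n2 + n3) + (k - 1)*n1,
               n1*(n2 + n3 + n4) + (k - 1)*(n2 + n3))

def loss_from_V1(k, n1, n2, n3, n4):
    # g_k(n1, n2, n3, n4) - g_k(n1 - 1, n2, n3, n4): the degree bound
    # that (eq:etg) forces on every vertex of V1.
    return g(k, (n1, n2, n3, n4)) - g(k, (n1 - 1, n2, n3, n4))

k = 3
assert loss_from_V1(k, 10, 8, 7, 6) == 8 + 7 + k - 1    # n1 > n2
assert loss_from_V1(k, 10, 10, 7, 6) == 10 + 6          # n1 = n2 > n4
assert loss_from_V1(k, 10, 10, 10, 10) == 10 + 10       # n1 = n4
```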
\subsection{Proof of Cases 1--4} \label{sec:22}
After these preparations, we return to the proof of \textbf{Cases 1--4}.
Recall that $n_1\le n_2+n_3$ in all these cases. Recall also that $n_i\ge 6k^2$ for $i\in [4]$, so we can always assume that $V_i\setminus Z\neq\emptyset$. Moreover, by~\eqref{eq:M0}, we have $M_0(k)\ge N_0(k-1)+3$, and thus we can apply the induction hypothesis \ref{item:IH} on any \emph{$(k-1)K_3$-free} subgraph $G\setminus S$, whenever $|S|\le 3$ (and thus $v_4(G\setminus S)\ge 6k^2 - 3 \ge 6(k-1)^2$).
\noindent\textbf{Case 1. $n_1 > n_3$ and $n_2> n_4$.}
In this case \eqref{eq:etg} holds for every crossing set $T= xy\in V_1\times V_2$.
The part sizes of $G\setminus \{x,y\}$ are $n_1-1$, $n_2-1$, $n_3$, $n_4$, with $n_1-1\ge \max\{n_2-1, n_3\}$ and $\min\{n_2-1, n_3\} \ge n_4$.
By~\eqref{eq:etg}, we have \begin{align*} e(xy; G) &\ge (n_1+n_4)(n_2+n_3) + (k-1)n_1 - ((n_1+n_4-1)(n_2+n_3-1) + (k-1)(n_1-1)) \\ &= \sum_{i\in [4]} n_i + k-2. \end{align*} If $xy\in E(G)$, then $d(x)+d(y) = e(xy; G) + 1\ge \sum_{i\in [4]} n_i + k-1>\sum_{i\in [4]} n_i$.
By Claim~\ref{claim:1}, $(V_1, V_2)$ is empty. For every $x\in V_1\setminus Z$, we thus have $d(x)\le n_3+n_4<\min\{n_2+n_3, n_1+n_4\}$, contradicting~\eqref{eq:dV1}.
\noindent\textbf{Case 2. $n_1 = n_2 = n_3 \ge n_4 > 6k^2$.}
In this case \eqref{eq:etg} holds for any one-element set $T\subset V(G)$.
Write $n_1=n_2=n_3=n$. For any $x\in V_1\cup V_2\cup V_3$, by~\eqref{eq:etg}, we have \[ d(x)=e(\{x\}; G) \ge 2n(n+n_4) + (k-1)n - g_k(n, n, n-1, n_4), \] where $g_k(n, n, n-1, n_4) = (2n-1)(n+n_4)+ (k-1)n$ if $n>n_4$ and $g_k(n, n, n-1, n_4) = 2n(n+n_4-1)+ (k-1)n$ if $n=n_4$. Thus, we have $d(x)\ge \min\{2n, n+n_4\}=n+n_4$. Similarly, for $y\in V_4$, by~\eqref{eq:etg}, we have \begin{align}\label{eq:dy} d(y)=e(\{y\}; G) \ge 2n(n+n_4) + (k-1)n - \big(2n(n+n_4-1) + (k-1)n \big) = 2n. \end{align} These together imply $d(x)+d(y) \ge \sum n_i$ for every edge $x y\in (V_1\cup V_2\cup V_3)\times V_4$. For $i=1, 2, 3$, Claim~\ref{claim:1} implies that either $(V_i, V_4)$ is full or $e(V_i\setminus Z, V_4\setminus Z)=0$.
If $e(V_i\setminus Z, V_4\setminus Z)=0$ holds for at least two values of $i\in \{1, 2, 3\}$, then for every $y\in V_4\setminus Z$, we have $d(y)\le n + |Z| < 2n$ (as $n\ge M_0(k) /4 > 6 k^2$), contradicting \eqref{eq:dy}.
This implies that at least two of $(V_1, V_4)$, $(V_2, V_4)$, and $(V_3, V_4)$ must be full. Without loss of generality, assume $(V_1, V_4)$ and $(V_2, V_4)$ are full. By Observation~\ref{obs:empty}, $(V_1, V_2)$ is empty. Next, we claim that $(V_3, V_4)$ is empty. Indeed, let $x\in V_2\setminus Z$ and recall that $d(x)\ge n+n_4$.
Since $(V_1, V_2)$ is empty, we have $d(x)\le n+n_4$. Thus, $d(x)=n+n_4$ and in particular $V_3\subseteq N(x)$. Since this holds for every $x\in V_2\setminus Z$, it follows that $d(V_2\setminus Z, V_3)=1$. Thus $(V_3, V_4)$ is empty by Observation~\ref{obs:empty}. Together with~\ref{item:2}, we infer \[
e(G)= e(G[Z]) + e(V\setminus Z; G) < \binom{|Z|}2 + (n_1+n_2)(n_3+n_4)\le (n_1+n_2)(n_3+n_4) + (k-1)n_1, \]
contradicting~\eqref{eq:contra}. The previous inequality follows from $ \binom{|Z|}2 \le 18(k-1)^4\le (k-1)n_1$, which follows from $n_1\ge M_0(k)/4$ and~\eqref{eq:M0}.
\noindent\textbf{Case 3. $n_1 = n_2 = n_3 > n_4 = 6k^2$.}
Write $n_1=n_2=n_3=n$. We assume that \begin{align}\label{eq:n1} n_1 \ge 30k^2, \end{align}
as otherwise $\sum n_i\le 3\cdot 30k^2 + 6k^2 \le M_0(k)$ by \eqref{eq:M0}, contradicting the assumption $|G|> M_0(k)$. By~\eqref{eq:dV1} and the similarity of $V_1, V_2$, and $V_3$, we have $d(x) \ge n+n_4$ for every $x \in V_1\cup V_2\cup V_3$. We claim that for $y\in V_4$, \begin{equation} \label{eq:dyV4} d(y)\le 2n+2k-1. \end{equation} Otherwise, pick $k$ neighbors $x_1, \dots, x_{k}$ of $y$ from the same part of $G$. For each $i$, since $d(x_i) \ge n+n_4$, we have $d(x_i)+d(y) \ge \sum n_i+2k-1$, yielding that $x_iy$ is rich by Fact~\ref{fact:com_neigh}. However, this contradicts~\ref{item:1}.
\noindent \textbf{Claim.} The graph $G[V_1\cup V_2\cup V_3]$ is $K_3$-free.
\begin{proof} Suppose instead, there exists a triangle $xyz\in V_1\times V_2\times V_3$. Without loss of generality, assume that $d(x)\ge d(y)\ge d(z)$. We first claim that \begin{align}\label{eq:xyz} d(x)+d(y)+d(z) \ge 5n+2n_4+k. \end{align} Otherwise $d(x)+d(y)+d(z)\le 5n+2n_4+k-1$ and $e(xyz; G)= d(x)+d(y)+d(z) - 3 \le 5n+2n_4+k-4$. Then, by~\eqref{eq:contra}, \begin{align*} e(G\setminus \{x, y, z\}) &= e(G) - e(xyz; G) > g_k(n, \,n,n,n_4) - (5n+2n_4+k-4)\\ &= 2n(n+n_4) +(k-1)n - (5n+2n_4+k-4)\\ &= (2n-2)(n-1+n_4) + (k-2)(n-1) \\ &=g_{k-1}(n-1, n-1, n-1, n_4). \end{align*}
By induction hypothesis \ref{item:IH}, we obtain a copy of $(k-1) K_3$ in $ G\setminus \{x, y, z\} $. Together with the triangle $xyz$, this contradicts the assumption $G$ is $k K_3$-free.
We next claim that at least two of $xy, yz, xz$ are rich and thus all $x, y, z\in Z$.
Indeed, if $d(x)<2n+n_4-k$, then by \eqref{eq:xyz}, \[ d(y)+d(z)> 5n+2n_4+k- (2n+n_4-k) = 3n+n_4+2k > \sum n_i + 2k-1. \] By Fact~\ref{fact:com_neigh}, $yz$ is rich. Since $d(x)$ is the largest, this argument implies that all three edges of $xyz$ are rich, as desired. Otherwise, $d(x)\ge 2n+n_4-k$ and recall that $d(y)\ge d(z)\ge n+n_4$. Thus \[ d(x)+d(y) \ge d(x)+d(z)\ge 3n+2n_4-k \ge \sum n_i + 2k-1 \] because $n_4= 6k^2\ge 3k-1$. By Fact~\ref{fact:com_neigh}, both $xy$ and $xz$ are rich, as desired.
The claim in the previous paragraph applies to all triangles in $V_1\cup V_2 \cup V_3$. Therefore, all the common neighbors of $x$ and $y$ in $V_1\cup V_2 \cup V_3$ are in $Z$; consequently, $| N(x)\cap N(y)|\le |Z|+|V_4|\le 6k^2+n_4$, and $d(x)+d(y)\le \sum n_i + 6k^2+n_4 = 3n+2n_4+ 6k^2$.
On the other hand, \eqref{eq:xyz} and the assumption $d(x)\ge d(y)\ge d(z)$ imply that \begin{equation}\label{eq:xyz-rich} d(x)+d(y)\ge \frac23 (5n+2n_4+k) = \frac{10}3n + \frac43 n_4 + \frac23k > 3n+2n_4+ 6k^2 \end{equation} because $n\ge 30k^2 = 2n_4+18k^2$ by~\eqref{eq:n1}. This gives a contradiction.
\end{proof}
By the claim, $G[V_1\cup V_2\cup V_3]$ is $K_3$-free, and thus has at most $2n^2$ edges by Theorem~\ref{thm:BES}. Together with~\eqref{eq:dyV4} and~\eqref{eq:n1}, we obtain that \[ e(G)\le 2n^2 + n_4\cdot (2n+2k-1) = 2n(n+n_4) +(2k-1)n_4 < 2n(n+n_4) +(k-1)n, \] contradicting \eqref{eq:contra}.
\noindent\textbf{Case 4. $n_1 > n_2 = n_3 = n_4 $.}
Assume $n_2=n_3=n_4=n$ and recall that $n_1\le 2n$. We first claim that \begin{align}\label{eq:dy4} d(x)\le 3n \text{ for all } x\in V_1, \ \text{and} \ d(y)\le n_1+n+k-1 \text{ for all } y\in V_2\cup V_3\cup V_4. \end{align}
Indeed, the bound $d(x)\le 3n$ for $x\in V_1$ is trivial. Suppose to the contrary, that there is a vertex $y\in V_2\cup V_3\cup V_4$ with $d(y)\ge n_1+n+k$. It follows that $|N(y)\cap V_1| \ge d(y) - 2n \ge k$. Assume that $x_1,\dots, x_k\in N(y)\cap V_1$. By~\eqref{eq:dV1}, we have $d(x_j)\ge 2n+k-1$. Thus, we infer that $d(x_j)+d(y)\ge n_1+3n+ 2k-1$. By Fact~\ref{fact:com_neigh}, we have $x_1 y, \dots, x_k y\in E(R)$. However, this contradicts~\ref{item:1}.
We next claim that there is no rich edge in $V_1 \times (V_2\cup V_3\cup V_4)$. Suppose to the contrary, that $xy\in V_1 \times (V_2\cup V_3\cup V_4)$ is a rich edge. By \eqref{eq:dy4}, we have $e(xy; G)=d(x)+d(y)-1\le n_1+4n+k-2$. By \eqref{eq:contra}, it follows that \begin{align*} e(G\setminus \{x, y\}) &= e(G) -e(xy; G) > 2n (n_1+n) +(k-1)n_1 - (n_1+4n+k-2) \\ &= 2n (n_1+n-2) + (k-2)(n_1-1) \\ &= g_{k-1}(n_1-1, n, n, n-1). \end{align*} By induction hypothesis \ref{item:IH}, $G\setminus \{x, y\}$ contains a copy $S$ of $(k-1) K_3$. Since $xy$ is rich, we can find a triangle in $G\setminus S$ containing $xy$, contradicting the assumption that $G$ is $k K_3$-free.
Now we show that there is no triangle intersecting $V_1$. Suppose to the contrary, there is a triangle $xyz$ with $x\in V_1$. If $d(x)+d(z) \ge n_1+3n+ 2k-1$, then, by Fact~\ref{fact:com_neigh}, $xy$ is rich, contradicting our earlier claim.
We thus assume that $d(x)+d(z) < n_1+3n+2k-1$. Together with \eqref{eq:dy4}, it gives that $d(x)+d(y)+d(z) < 2n_1+4n+3k-2$, and $e(xyz; G)= d(x)+d(y)+d(z)-3 < 2n_1+ 4n+ 3k- 5$. By \eqref{eq:contra}, it follows that \begin{align*} e(G\setminus \{x, y, z\}) &= e(G) - e(xyz; G) > 2n (n_1+n) +(k-1)n_1 - (2n_1+ 4n+ 3k- 5) \\ &= (n_1+n-2)(2n-1) + (k-2)(n_1-1) + n - 2k + 1\\ &= g_{k-1}(n_1-1, n, n-1, n-1) + n - 2k + 1. \end{align*}
By \ref{item:IH}, $G\setminus \{x, y, z\}$ contains a copy of $(k-1) K_3$. Together with the triangle $xyz$, this contradicts the assumption that $G$ is $k K_3$-free.
We assumed that $G$ contains $k-1$ disjoint triangles. Let $T_1$ be a triangle of $G$.
By the claim of the previous paragraph, $T_1$ must be in $V_2\cup V_3\cup V_4$.
Moreover, by~\ref{item:3}, $T_1$ must contain a rich edge $xy$.
Below we show that \begin{align}\label{eq:Gxy} e(G\setminus \{x, y\}) > g_{k-1}(n_1, n, n-1, n-1). \end{align} Then, by \ref{item:IH}, $G\setminus \{x, y\}$ contains a copy $S$ of $(k-1) K_3$. Since $xy$ is rich, we can find a triangle in $G\setminus S$ containing $xy$, contradicting the assumption that $G$ is $k K_3$-free.
We first assume that $n_1=2n$. If $d(x)+d(y) > 6n$, then $x$ and $y$ have a common neighbor in $V_1$, contradicting the earlier claim that there is no triangle intersecting $V_1$. We thus assume that $d(x) + d(y)\le 6n$. Thus $e(xy; G) \le 6n-1$. By \eqref{eq:contra}, it follows that \begin{align*} e(G\setminus \{x, y\}) &> g_k(2n, n,n, n) - (6n-1) \\ &= 3n\cdot 2n + 2n (k-1) - (6n-1) \\ &=2n(3n-2) + (k-2)(2n-1) + k-1 \\ &= g_{k-1}(2n, n, n-1, n-1) + k-1. \end{align*} Thus \eqref{eq:Gxy} holds.
Second, assume $n_1<2n$. By \eqref{eq:dy4}, we have $e(xy; G) = d(x)+d(y)-1 \le 2(n_1+n+k-1) -1$. By \eqref{eq:contra}, it follows that \begin{align*} e(G\setminus \{x, y\}) &> g_k(n_1, \,n,n,n) - (2n_1+ 2n+ 2k-3)\\ &= (n_1+n) 2n +(k-1)n_1 - (2n_1+ 2n+ 2k-3)\\ &= (n_1+n-1)(2n-1) + (k-2) n_1 + n -2k + 2\\ &= g_{k-1}(n_1, n, n-1, n-1) + n- 2k+2. \end{align*} Thus \eqref{eq:Gxy} holds.
The proof of Theorem~\ref{thm:key} is now completed. \end{proof}
\section{Concluding remarks}
In this paper we solved Problem~\ref{pro1} for $r=4$ and $t=3$ when all $n_i$'s are large.
The ideas in our proof should be helpful in proving Conjecture~\ref{conj} in general. However, to determine the maximum in \eqref{eq:conj}, there are quite a few cases to consider even when $r=5$ and $t=3$. Indeed, suppose $n_1\ge n_2\ge \cdots \ge n_5$ and $\{I, I'\}$ is the bipartition of $[5]$ that attains the maximum in \eqref{eq:conj}. Assume $1\in I$. Depending on the values of $n_1, \dots, n_5$, it is possible to have \[ I = \{1\} \text{ or } \{1, 2\} \text{ or } \{1, 3\} \text{ or } \{1, 4\} \text{ or } \{1, 5\} \text{ or } \{1, 4, 5\}. \]
Another open problem is to find the smallest $N_0(k)$ such that Theorem~\ref{thm:main} holds. The $N_0(k)$ provided in our proof is a double exponential function of $k$. Indeed, by \eqref{eq:M0} and $N_0(1)=1$, we have $M_0(2)= 96\cdot 2^2 = 384$ and $N_0(2)= 384^2$. It is easy to see that $N_0(k) = ( N_0(k-1) + 3)^2$ for $k\ge 3$. Thus $N_0(k-1)^2 \le N_0(k)\le 2 N_0(k-1)^2$ for $k\ge 3$. It follows that \[ N_0(2)^{2^{k-2}} \le N_0(k) \le \big(2 N_0(2)\big)^{2^{k-2}}. \]
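The double{\hyp}exponential bounds stated above follow from the recursion by a short induction; the following sketch (ours, not part of the original argument) unwinds it:

```latex
% From N_0(k-1)^2 \le N_0(k) \le 2 N_0(k-1)^2 for k \ge 3, with c := N_0(2),
% induction on k gives the closed-form estimates
\[
  c^{2^{k-2}} \;\le\; N_0(k) \;\le\; 2^{2^{k-2}-1}\, c^{2^{k-2}} \;\le\; (2c)^{2^{k-2}},
\]
% since the lower bound simply squares at each step, while the upper bound
% U_2 = c, U_k = 2 U_{k-1}^2 solves to U_k = 2^{2^{k-2}-1} c^{2^{k-2}}:
% indeed 2 (2^{2^{k-3}-1} c^{2^{k-3}})^2 = 2^{2^{k-2}-1} c^{2^{k-2}}.
```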
It is interesting to know whether one can reduce $N_0(k)$ to a polynomial function (or even a linear function) of $k$.
\noindent \textbf{Acknowledgements.} We would like to thank Chunqiu Fang and Longtu Yuan for valuable feedback on an earlier version of the manuscript, and Ming Chen, Jie Hu and Donglei Yang for helpful discussions. We also thank two anonymous referees for their helpful comments that improved the presentation of this paper.
\end{document}
\begin{document}
\title{\huge\textbf{On piecewise hyperdefinable groups}}
\maketitle
\begin{abstract} The aim of this paper is to generalise and improve two of the main model{\hyp}theoretic results of ``Stable group theory and approximate subgroups'' by E. Hrushovski to the context of piecewise hyperdefinable sets. The first one is the existence of Lie models. The second one is the Stabilizer Theorem. In the process, a systematic study of the structure of piecewise hyperdefinable sets is developed. In particular, we show the most significant properties of their logic topologies. \end{abstract}
\section*{Introduction} Several enlightening results with significant consequences for model theory and additive combinatorics were obtained in \cite{hrushovski2011stable}. Two of them are particularly relevant: the Stabilizer Theorem and the existence of Lie models.
The Stabilizer Theorem \cite[Theorem 3.5]{hrushovski2011stable}, subsequently improved in \cite[Theorem 2.12]{montenegro2018stabilizers}, was itself originally a generalisation of the classical Stabilizer Theorem for stable and simple groups (see for example \cite[Section 4.5]{wagner2010simple}), replacing the stability and simplicity hypotheses with measure{\hyp}theoretic ones. Here, we extend that theorem to piecewise hyperdefinable groups in Section 3. Once one has the right definition of dividing and forking for piecewise hyperdefinable sets, the original proof of \cite{hrushovski2011stable}, and its improved version in \cite{montenegro2018stabilizers}, can be naturally adapted. However, we also manage to simplify the proof in such a way that we obtain a slightly stronger result.
In \cite{hrushovski2011stable}, Hrushovski studied piecewise definable groups generated by near{\hyp}subgroups, i.e. approximate subgroups satisfying a kind of measure{\hyp}theoretic condition. In particular, he worked with ultraproducts of finite approximate subgroups. In that context, Hrushovski proved that there exist some Lie groups, named \emph{Lie models}\footnote{In \cite{hrushovski2011stable} they are simply called \emph{associated Lie groups}. The term \emph{Lie model} was later introduced in \cite{breuillard2012structure}.}, deeply connected to the model{\hyp}theoretic structure of the piecewise definable group \cite[Theorem 4.2]{hrushovski2011stable}. Furthermore, among all these Lie groups, Hrushovski focused on the minimal one, showing its uniqueness and its independence of expansions of the language.
Here, in Section 2, we improve these results by defining the more general notion of \emph{Lie core}. Then, we prove the existence of Lie cores for any piecewise hyperdefinable group with a generic piece, and the uniqueness of the minimal Lie core. In the process, we adapt the classical model{\hyp}theoretic components (with parameters) $G^0$, $G^{00}$ and $G^{000}$ to piecewise hyperdefinable groups --- our definitions extend some particular cases already studied (e.g. \cite{hrushovski2022amenability}). We also introduce a new component $G^{\mathrm{ap}}$, which is the smallest possible kernel of a continuous projection onto a locally compact topological group without non{\hyp}trivial compact normal subgroups. We use these components to show that the minimal Lie core is precisely $\rfrac{G^0}{G^{\mathrm{ap}}}$. Using this canonical presentation, we conclude that the minimal Lie core is piecewise $0${\hyp}hyperdefinable and independent of expansions of the language.
Hyperdefinable sets, originally introduced in \cite{hart2000coordinatisation}, are quotients of $\bigwedge${\hyp}definable (read infinitely{\hyp}definable or type{\hyp}definable) sets over $\bigwedge${\hyp}definable equivalence relations. Hyperdefinable sets have already been well studied by various authors (e.g. \cite{wagner2010simple} and \cite{kim2014simplicity}). Here we extend this study to piecewise hyperdefinable sets.
A piecewise hyperdefinable set is a strict direct limit of hyperdefinable sets. We are interested in piecewise hyperdefinable sets as elementary objects in themselves, inheriting an underlying model{\hyp}theoretic structure. In particular, we principally study the natural logic topologies, which generalise the usual Stone topology and can be defined on any piecewise hyperdefinable set.
We discuss the general theory of piecewise hyperdefinable sets in Section 1, developing the basic material we need for the rest of the paper. Mainly, we study their basic topological properties, such as compactness, local compactness, normality, Hausdorffness and metrizability, as well as their relations with the quotient, product and subspace topologies. All of this leads naturally to the definition of \emph{locally hyperdefinable sets}, which is in fact one of the main notions of this document.
The original motivation of this paper comes from the study of rough approximate subgroups. Rough approximate subgroups are the natural generalisation of approximate subgroups in which we also allow a small thickening of the cosets. This generalisation is particularly natural in the context of metric groups, where the thickenings are given by balls. In \cite{tao2008product} (see also \cite{tao2014metric}), Tao showed that a significant part of the basic theory of approximate subgroups can be extended to rough approximate subgroups of metric groups via a discretisation process. This generalisation to the context of metric groups has recently found interesting applications \cite{gowers2020partial}.
It is a well{\hyp}known fact in first{\hyp}order model theory that, when working with metric spaces, non{\hyp}standard real numbers appear as soon as we have saturation. To solve this issue, we have to deal with some kind of continuous logic. Piecewise hyperdefinable sets provide a natural way of doing so. Roughly speaking, the idea behind continuous logic is to restrict the universe to the bounded elements and to quotient out by infinitesimals via the standard part function, as can be done using piecewise hyperdefinable sets.
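As a folklore illustration of this idea (our own example, not taken from the results of this paper), consider a saturated elementary extension $\mathbb{R}^*\succ\mathbb{R}$ of the ordered field of real numbers:

```latex
% The bounded (finite) elements form a countable increasing union of
% \bigwedge-definable (indeed definable) pieces:
\[
  \mathrm{Fin}(\mathbb{R}^*) \;=\; \bigcup_{n\in\mathbb{N}}\, \{x \mathrel{:} |x|\le n\},
\]
% and infinitesimal closeness is the \bigwedge-definable equivalence relation
\[
  x \approx y \quad\Longleftrightarrow\quad
  \bigwedge_{n\in\mathbb{N}} |x-y|\le \tfrac{1}{n}.
\]
% The quotient Fin(R^*)/\approx is then a piecewise hyperdefinable set, the
% quotient map being exactly the standard part function
% st : Fin(R^*) -> R, which recovers R with its usual locally compact topology.
```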
Hrushovski already indicated in unpublished work that, using piecewise hyperdefinable groups, it should be possible to extend some of the results of \cite{hrushovski2011stable} to the context of metric groups. The aim of this paper is to provide the abstract basis for results of that kind, with a view towards eventual applications in combinatorics. We conclude the paper by giving the natural generalisation of the Lie Model Theorem to rough approximate subgroups. Applications of this result to the case of metric groups will be studied in a future paper.
We study piecewise hyperdefinable sets in general in Section 1, focusing on the properties of their logic topologies. The most important results of this section are given after the introduction of locally hyperdefinable sets in Section 1.4. Section 2 is the core of this paper and is devoted to the general study of piecewise hyperdefinable groups. The first fundamental result of the section is Theorem \ref{t:generic set lemma}, in which we show that piecewise hyperdefinable groups satisfying a natural combinatorial condition are locally hyperdefinable. In Section 2.3, we define the model{\hyp}theoretic components for piecewise hyperdefinable groups, proving the existence of $G^{\mathrm{ap}}$ in Theorem \ref{t:the aperiodic component}. Finally, we focus on the study of Lie cores, proving their existence (Theorem \ref{t:logic existence of lie core}) and the uniqueness of the minimal one (Theorem \ref{t:logic uniqueness of minimal lie core}), giving a canonical representation of the minimal one in terms of the model{\hyp}theoretic components (Theorem \ref{t:canonical representation of the minimal lie core}) and showing its independence of expansions of the language (Corollary \ref{c:independence of expansions of the minimal lie core}). Section 3 is devoted to the Stabilizer Theorem for piecewise hyperdefinable groups, which is divided over Theorem \ref{t:stabilizer theorem 1}, Corollary \ref{c:stabilizer theorem mos b2} and Theorem \ref{t:stabilizer theorem 2} --- the latter being the standard statement of the Stabilizer Theorem. We conclude the paper by stating the Rough Lie Model Theorem \ref{t:rough lie model}, which generalises Hrushovski's Lie Model Theorem to the case of rough approximate subgroups.
{\textnormal{\textbf{Notations and conventions:}}} From now on, unless otherwise stated, fix a many{\hyp}sort first order language, $\mathscr{L}$, and an infinite $\kappa${\hyp}saturated and strongly $\kappa${\hyp}homogeneous $\mathscr{L}${\hyp}structure, $\mathfrak{M}$, with $\kappa>|\mathscr{L}|$ a strong limit cardinal. We say that a subset of $\mathfrak{M}$ is \emph{small} if its cardinality is smaller than $\kappa$.
We would like to remark that our assumptions on $\mathfrak{M}$ are far stronger than needed. While saturation is fundamental, strong homogeneity is actually irrelevant. Also, assuming that $\kappa$ is a strong limit cardinal is much more than necessary. Most of the results and arguments work when we only assume $\kappa>|\mathscr{L}|$. The only significant exceptions are Sections 2.3 and 2.4, where we need to assume $\kappa> 2^{|A|+|\mathscr{L}|}$, with $A$ the set of parameters we are using.
Contrary to common practice, we consider the logic topologies on the models themselves, rather than on the spaces of types. This has the unpleasant consequence of having to deal with non{\hyp}$\mathrm{T}_0$ topological spaces. In particular, we need to study normality and local compactness without Hausdorffness.
In any case, whether on the model or on the space of types, these two topologies are in fact the ``same''. Indeed, the spaces of types are obtained from the logic topologies on the model after saturation (i.e. compactification) as the quotient by the equivalence relation of being topologically indistinguishable (i.e. the \emph{Kolmogorov quotient}). In particular, the canonical projections ${\mathrm{tp}}_A:\ a\mapsto {\mathrm{tp}}(a/A)$ are continuous, open and closed maps\footnote{In fact, $X\mapsto\{{\mathrm{tp}}(x/A)\mathrel{:} x\in X\}$ is an isomorphism (in the sense of lattices of sets) between the topologies. Also, ${\mathrm{tp}}^{-1}_A[{\mathrm{tp}}_A[X]]=X$ for any $A${\hyp}invariant set.} from the $A${\hyp}logic topologies to the spaces of types over $A$. Moreover, based on this, we will show that piecewise hyperdefinable sets with their logic topologies generalise type spaces, the latter being the particular case of $\mathrm{T}_0$ logic topologies.
We choose to work with logic topologies over the models for three reasons. The first is that we save one unneeded step (the Kolmogorov quotient). The second is that the spaces of types of products are not the products of the spaces of types, while we want to be able to prove and naturally use Proposition \ref{p:product logic topology}, which says that the global logic topology of a product is the product topology of the global logic topologies. The last, related to the previous one, is that the space of types of a group does not usually preserve the group structure, which would make it awkward to state the Isomorphism Theorem \ref{t:isomorphism theorem} for piecewise hyperdefinable groups, fundamental in Section 2, through the spaces of types.
\rule{0.5em}{0.5em} From now on, except when otherwise stated, we use $a,b,\ldots$ and $A,B,\ldots$ to denote hyperimaginaries and small sets of hyperimaginaries respectively, while $a^*,a^{**},\ldots$ and $A^*,A^{**},\ldots$ denote associated representatives. If we use $a^*$ or $A^*$ without mention of $a$ or $A$, we mean real elements. To avoid technical issues, we assume that $A$ contains a subset $A_{\mathrm{re}}\subseteq A$ of real elements such that every element of $A$ is hyperimaginary over $A_{\mathrm{re}}$.
\rule{0.5em}{0.5em} We use $x,y,z,\ldots$ to denote variables. Except when otherwise stated, variables are always finite tuples of (single) variables. Sometimes, to simplify the notation, we also allow infinite tuples of variables. We write $\overline{x},\overline{y},\overline{z},\ldots$ to denote infinite tuples of variables. If in some exceptional place we need to consider single variables, we will explicitly indicate it.
\rule{0.5em}{0.5em} Except when otherwise stated, elements are finite tuples of (single) elements. We write $\overline{a},\overline{b},\ldots$ and $\overline{a}^*,\overline{b}^*,\ldots$ to indicate when we consider infinite tuples of elements. In this paper, we do not study piecewise hyperdefinable sets of infinite tuples. It seems that our results can be generalised in that direction by also taking inverse limits.
\rule{0.5em}{0.5em} By a type we always mean a complete type. The Stone space of types contained in the set $X$ with parameters $A^*$ is denoted by $\mathbf{S}_{X}(A^*)$. The set of formulas of $\mathscr{L}$ with parameters in $A^*$ and variables in the (possibly infinite) tuple $\overline{x}$ is denoted by $\mathrm{For}^{\bar{x}}(\mathscr{L}(A^*))$. The cardinality of the language is the cardinality of its set of formulas; $|\mathscr{L}|\coloneqq |\mathrm{For}(\mathscr{L})|$. In particular, $|\mathscr{L}|$ is always at least $\aleph_0$.
\rule{0.5em}{0.5em} Here, definable means definable with parameters. Also, $\bigwedge${\hyp}definable means infinitely{\hyp}definable (or type{\hyp}definable) over a small set of parameters. Similarly, $\bigvee${\hyp}definable means coinfinitely{\hyp}definable (or cotype{\hyp}definable) over a small set of parameters. To indicate that we use parameters from $A$, we write $A${\hyp}definable, $\bigwedge_A${\hyp}definable and $\bigvee_A${\hyp}definable. We also use cardinals and cardinal inequalities. In that case, the subscript should be read as an anonymous set of parameters whose size satisfies the indicated condition. For example, $\bigwedge_{<\omega}${\hyp}definable means $\bigwedge_A${\hyp}definable for some subset $A$ with $|A|<\omega$. The same notation will be naturally used for hyperdefinable, piecewise hyperdefinable and piecewise $\bigwedge${\hyp}definable sets.
\rule{0.5em}{0.5em} Following the terminology of \cite{tent2012course}, a $\kappa${\hyp}homogeneous structure is a structure such that every partial elementary internal map defined on a subset with cardinality less than $\kappa$ can be extended to any further element, while a strongly $\kappa${\hyp}homogeneous structure is a structure such that every partial elementary internal map defined on a subset with cardinality less than $\kappa$ can be extended to an automorphism.
\rule{0.5em}{0.5em} Let $R\subseteq X\times Y$ be a set{\hyp}theoretic binary relation. For $x\in X$, we write $R(x)\coloneqq \{y\in Y\mathrel{:} (x,y)\in R\}$. For a subset $V\subseteq X$, we write $R[V]\coloneqq\{y\in Y\mathrel{:} \exists x\in V\ (x,y)\in R\}$. We also write $R^{-1}\coloneqq\{(y,x)\mathrel{:} (x,y)\in R\}$, so $R^{-1}(y)\coloneqq \{x\in X\mathrel{:} (x,y)\in R\}$ for $y\in Y$ and $R^{-1}[W]\coloneqq \{x\in X\mathrel{:} \exists y\in W\ (x,y)\in R\}$ for $W\subseteq Y$. We denote the image and preimage functions between the power sets by ${\mathrm{Im}}\ R:\ V\mapsto R[V]$ and ${\mathrm{Im}}^{-1}R:\ W\mapsto R^{-1}[W]$. Most of the time, this notation is used for partial functions, which are always identified with their graphs --- note that $f^{-1}$ is only a function when $f$ is invertible.
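For instance, here is a toy example of this notation (ours, not from the text):

```latex
% With X = \{1,2\}, Y = \{a,b\} and R = \{(1,a),(1,b),(2,b)\} \subseteq X \times Y:
\[
  R(1)=\{a,b\},\qquad R[\{2\}]=\{b\},\qquad
  R^{-1}(b)=\{1,2\},\qquad R^{-1}[\{a\}]=\{1\}.
\]
% Here neither R nor R^{-1} is a partial function, since R(1) and R^{-1}(b)
% each contain two elements.
```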
\rule{0.5em}{0.5em} Cartesian projections are denoted by ${\mathop{\mbox{\normalfont{\large{\Fontauri{p}}}}}}$, quotient maps are denoted by ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}$, quotient homomorphisms in groups are denoted by $\pi$, inclusion maps are denoted by ${\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}$ and identity maps are denoted by ${\mathrm{id}}$.
\rule{0.5em}{0.5em} A lattice of sets is a family of sets closed under finite unions and intersections. A complete algebra on a set $X$ is a family of subsets of $X$ closed under complements and arbitrary unions.
\rule{0.5em}{0.5em} The class of ordinals is denoted by ${\mathbf{\mathbbm{O}\mathrm{n}}}$. The cardinal of a set $X$ is written $|X|$.
\rule{0.5em}{0.5em} We use product notation for groups. Also, unless otherwise stated, we consider the group acting on itself on the left. In particular, by a coset we mean a left coset. A subset $X$ of a group is called symmetric if $1\in X=X^{-1}$. For subsets $X$ and $Y$ of a group, we write $XY$ for the set of pairwise products, and abbreviate $X^n\coloneqq XX^{n-1}$ and $X^{-n}\coloneqq (X^{-1})^n$ for $n\in\mathbb{N}$. We say that $X$ normalises $Y$ if $x^{-1}Yx\subseteq Y$ for every $x\in X$.
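As an elementary illustration of this product{\hyp}set notation, and of the notion of approximate subgroup mentioned in the introduction (a folklore example of ours, written additively, not taken from the text):

```latex
% In (Z,+), take the symmetric set X = [-N,N] = \{-N,...,N\}. Then
\[
  X^2 \;=\; X+X \;=\; [-2N,2N] \;\subseteq\; (X-N)\cup(X+N),
\]
% i.e. X^2 is covered by two translates of X (namely [-2N,0] and [0,2N]),
% so X is a 2-approximate subgroup of (Z,+).
```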
\rule{0.5em}{0.5em} By a Lie group we always mean here a finite{\hyp}dimensional real Lie group.
\section{Piecewise hyperdefinable sets} \subsection{Hyperdefinable sets} Let $A^*$ be a small set of parameters. An \emph{$A^*${\hyp}hyperdefinable} set is a quotient $P=\rfrac{X}{E}$ where $X$ is a non{\hyp}empty $\bigwedge_{A^*}${\hyp}definable set and $E$ is an $\bigwedge_{A^*}${\hyp}definable equivalence relation. If we do not indicate the set of parameters, we mean that it is hyperdefinable for some small set of parameters. Write ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_P:\ X\rightarrow P$ for the \emph{quotient map} given by $x\mapsto [x]_E\coloneqq \rfrac{x}{E}\coloneqq E(x)=\{x'\in X\mathrel{:} (x,x')\in E\}$. The elements of $A${\hyp}hyperdefinable sets are called \emph{hyperimaginaries over $A$}. Given a hyperimaginary element $a$, a \emph{representative} of $a$ is an element $a^*\in a$ of the structure such that ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_P(a^*)=a$. The elements of the structure will be called \emph{real}.
An \emph{$\bigwedge_{A^*}${\hyp}definable} subset, $V\subseteq P$, is a subset such that ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}^{-1}_P[V]$ is $\bigwedge_{A^*}${\hyp}definable in $X$. We will say that a partial type defines $V\subseteq P$ if it defines ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}^{-1}_P[V]$. If $V\subseteq P$ is a non{\hyp}empty $\bigwedge${\hyp}definable set, after declaring the parameters, we will write $\underline{V}$ to denote a partial type defining ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}^{-1}_P[V]$. The following basic lemma is the starting point for the study of hyperdefinable sets.
\begin{lem}[Correspondence Lemma] \label{l:correspondence lemma} Let $P=\rfrac{X}{E}$ be an $A^*${\hyp}hyperdefinable set. Then, the image by ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_P$ of any $\bigwedge_{A^*}${\hyp}definable subset of $X$ is an $\bigwedge_{A^*}${\hyp}definable subset of $P$. Moreover, the preimage function ${\mathrm{Im}}^{-1}{\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_P$ is an isomorphism, whose inverse is ${\mathrm{Im}}\ {\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_P$, between the lattice of $\bigwedge_{A^*}${\hyp}definable subsets of $P$ and the lattice of $\bigwedge_{A^*}${\hyp}definable subsets of $X$ closed under $E$. \end{lem} The main part of this lemma can be further generalised to Lemma \ref{l:infinite definable functions with real parameters}. For this, note firstly that, given two $A^*${\hyp}hyperdefinable sets $P=\rfrac{X}{E}$ and $Q=\rfrac{Y}{F}$, the Cartesian product $P\times Q$ is canonically identified with the hyperdefinable set $\rfrac{X\times Y}{E\hat{\times} F}$ via $([x]_E,[y]_F)\mapsto [x,y]_{E\hat{\times} F}$, where $E\hat{\times} F\coloneqq\{((x,y),(x',y'))\mathrel{:} (x,x')\in E$, $(y,y')\in F\}$. Then, we can talk about $\bigwedge_{A^*}${\hyp}definable relations and partial functions. \begin{ejems} The inclusion ${\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}:\ V\rightarrow P$ of an $\bigwedge_{A^*}${\hyp}definable set $V$ is $\bigwedge_{A^*}${\hyp}definable. The Cartesian projections ${\mathop{\mbox{\normalfont{\large{\Fontauri{p}}}}}}_P:\ P\times Q\rightarrow P$ and ${\mathop{\mbox{\normalfont{\large{\Fontauri{p}}}}}}_Q:\ P\times Q\rightarrow Q$ are $\bigwedge_{A^*}${\hyp}definable. Also, the quotient map ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_P:\ X\rightarrow P$ is $\bigwedge_{A^*}${\hyp}definable. 
\end{ejems} \begin{lem} \label{l:infinite definable functions with real parameters} Let $P=\rfrac{X}{E}$ and $Q=\rfrac{Y}{F}$ be two $A^*${\hyp}hyperdefinable sets and $f$ an $\bigwedge_{A^*}${\hyp}definable partial function from $P$ to $Q$. Then, for any $\bigwedge_{A^*}${\hyp}definable sets $V\subseteq P$ and $W\subseteq Q$, $f[V]$ and $f^{-1}[W]$ are $\bigwedge_{A^*}${\hyp}definable. \begin{dem} It is enough to check that the projection maps satisfy the proposition. It is trivial that ${\mathop{\mbox{\normalfont{\large{\Fontauri{p}}}}}}^{-1}_P[V]=V\times Q$ is $\bigwedge_{A^*}${\hyp}definable for any $\bigwedge_{A^*}${\hyp}definable subset $V$. On the other hand, if $V\subseteq P\times Q$ is $\bigwedge_{A^*}${\hyp}definable, by compactness, $\Sigma(x)=\{\exists y\mathrel{} \bigwedge \Delta(x,y)\mathrel{:} \Delta\subseteq \underline{V}\mbox{ finite}\}$ defines ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}^{-1}_P[{\mathop{\mbox{\normalfont{\large{\Fontauri{p}}}}}}_P[V]]$. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{lem} \begin{obs} Let $f:\ P\rightarrow Q$ and $g:\ Q\rightarrow R$ be functions. As subsets of $P\times R$, we have $g\circ f={\mathop{\mbox{\normalfont{\large{\Fontauri{p}}}}}}_{P\times R}[(f\times R)\cap (P\times g)]$, where ${\mathop{\mbox{\normalfont{\large{\Fontauri{p}}}}}}_{P\times R}:\ P\times Q\times R\rightarrow P\times R$ is the natural projection. Thus, compositions of $\bigwedge_{A^*}${\hyp}definable partial functions are also $\bigwedge_{A^*}${\hyp}definable partial functions. \end{obs}
A \emph{type over $A^*$, or $A^*${\hyp}type}, in $P$ is an $\bigwedge_{A^*}${\hyp}definable subset of $P$ which is $\subset${\hyp}minimal in the family of non{\hyp}empty $\bigwedge_{A^*}${\hyp}definable subsets. For $a\in P$, we write ${\mathrm{tp}}(a/A^*)$ for the type over $A^*$ containing $a$. As the lattice of $\bigwedge_{A^*}${\hyp}definable sets is closed under arbitrary intersections, the type of a hyperimaginary element always exists.
As usual, for infinite tuples $\overline{a}=(a_t)_{t\in T}$ and $\overline{b}=(b_t)_{t\in T}$ of hyperimaginaries over $A^*$, we write ${\mathrm{tp}}(\overline{a}/A^*)={\mathrm{tp}}(\overline{b}/A^*)$ to mean that ${\mathrm{tp}}(\overline{a}_{\mid T_0}/A^*)={\mathrm{tp}}(\overline{b}_{\mid T_0}/A^*)$ for any $T_0\subseteq T$ finite. \begin{lem} \label{l:types of hyperimaginaries} Let $P$ be an $A^*${\hyp}hyperdefinable set, $a\in P$ and $a^*\in a$. Then, ${\mathrm{tp}}(a/A^*)={\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_P[{\mathrm{tp}}(a^*/A^*)]$. In particular, $b\in{\mathrm{tp}}(a/A^*)$ if and only if there is $b^*\in b$ such that ${\mathrm{tp}}(a^*/A^*)={\mathrm{tp}}(b^*/A^*)$. In other words, ${\mathrm{tp}}(a/A^*)$ is the orbit of $a$ under the action of $\mathrm{Aut}(\mathfrak{M}/A^*)$ on $P$. \begin{dem} By the Correspondence Lemma \ref{l:correspondence lemma} and minimality. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{lem} Let $P$ be $A^*${\hyp}hyperdefinable. A subset $V\subseteq P$ is \emph{$A^*${\hyp}invariant} if ${\mathrm{tp}}(a/A^*)\subseteq V$ for any $a\in V$. As immediate corollaries of Lemma \ref{l:types of hyperimaginaries} we get the following results: \begin{coro} \label{c:invariance} Let $P$ be $A^*${\hyp}hyperdefinable and $V\subseteq P$. Then, $V$ is $A^*${\hyp}invariant if and only if it is setwise invariant under the action of $\mathrm{Aut}(\mathfrak{M}/A^*)$ on $P$, if and only if ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}^{-1}[V]$ is $A^*${\hyp}invariant. \begin{dem} Obvious by Lemma \ref{l:types of hyperimaginaries}. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{coro} \begin{coro} \label{c:parameters of definition} Let $P$ be $A^*${\hyp}hyperdefinable and $V\subseteq P$ an $\bigwedge${\hyp}definable subset. Then, $V$ is $\bigwedge_{A^*}${\hyp}definable if and only if it is $A^*${\hyp}invariant. \end{coro} We also want to be able to use hyperimaginary parameters. To do that we need to redefine the previous notions. Let $A$ be a set of hyperimaginaries. We start by defining types over $A$ and $A${\hyp}invariance for real elements. Then, we can say what is an $A${\hyp}hyperdefinable set, an $A${\hyp}invariant set, an $\bigwedge_A${\hyp}definable set and a type over $A$ for hyperimaginaries.
Since we assume that every element in $A$ is hyperimaginary over $A_{\mathrm{re}}$, the group $\mathrm{Aut}(\mathfrak{M}/A_{\mathrm{re}})$ naturally acts on the elements of $A$, i.e. $\sigma(a)$ makes sense for $a\in A$ and $\sigma\in\mathrm{Aut}(\mathfrak{M}/A_{\mathrm{re}})$. The \emph{group of automorphisms $\mathrm{Aut}(\mathfrak{M}/A)$ pointwise fixing $A$} is defined as the subgroup of elements of $\mathrm{Aut}(\mathfrak{M}/A_{\mathrm{re}})$ fixing each element of $A$. Note that, obviously, $\mathrm{Aut}(\mathfrak{M}/A^*)\leq \mathrm{Aut}(\mathfrak{M}/A)$ for any set of representatives $A^*$ of $A$.
Let $a^*$ be a tuple of real elements. The type of $a^*$ over $A$ is the set ${\mathrm{tp}}(a^*/A)\coloneqq\{b^*\mathrel{:} {\mathrm{tp}}(a^*,A)={\mathrm{tp}}(b^*,A)\}$. Equivalently, by Lemma \ref{l:types of hyperimaginaries}, ${\mathrm{tp}}(a^*/A)$ is the orbit of $a^*$ under $\mathrm{Aut}(\mathfrak{M}/A)$. A set of real elements $X$ is \emph{$A${\hyp}invariant} if we have ${\mathrm{tp}}(a^*/A)\subseteq X$ for any $a^*\in X$. Equivalently, $X$ is $A${\hyp}invariant if and only if it is setwise invariant under the action of $\mathrm{Aut}(\mathfrak{M}/A)$. We say that a hyperdefinable set $P=\rfrac{X}{E}$ is \emph{$A${\hyp}hyperdefinable} if $X$ and $E$ are $A${\hyp}invariant. Note that, as $\mathrm{Aut}(\mathfrak{M}/A^*)\leq \mathrm{Aut}(\mathfrak{M}/A)$, an $A${\hyp}hyperdefinable set is, in particular, $A^*${\hyp}hyperdefinable for any set of representatives $A^*$ of $A$. If $P$ is $A${\hyp}hyperdefinable, it follows that $\mathrm{Aut}(\mathfrak{M}/A)$ naturally acts on $P$.
A subset of an $A${\hyp}hyperdefinable set is \emph{$A${\hyp}invariant} if its preimage by the quotient map is $A${\hyp}invariant. Equivalently, it is $A${\hyp}invariant if and only if it is setwise invariant under the action of $\mathrm{Aut}(\mathfrak{M}/A)$. By Corollary \ref{c:invariance}, as $\mathrm{Aut}(\mathfrak{M}/A^*)\leq \mathrm{Aut}(\mathfrak{M}/A)$, every $A${\hyp}invariant set is in particular $A^*${\hyp}invariant for any set of representatives. An \emph{$\bigwedge_A${\hyp}definable} subset is an $A${\hyp}invariant $\bigwedge${\hyp}definable subset. By Corollary \ref{c:parameters of definition}, every $\bigwedge_A${\hyp}definable set is $\bigwedge_{A^*}${\hyp}definable for any set of representatives. Note that the $A${\hyp}invariant subsets of $P$ form a complete algebra of subsets of $P$. Thus, $\bigwedge_A${\hyp}definable sets are a lattice of sets closed under arbitrary intersections. \begin{lem} \label{l:infinite definable functions} Let $P=\rfrac{X}{E}$ and $Q=\rfrac{Y}{F}$ be two $A${\hyp}hyperdefinable sets and $f$ an $\bigwedge_A${\hyp}definable partial function. Then, for any $\bigwedge_A${\hyp}definable sets $V\subseteq P$ and $W\subseteq Q$, $f[V]$ and $f^{-1}[W]$ are $\bigwedge_A${\hyp}definable. \begin{dem} By Lemma \ref{l:infinite definable functions with real parameters}, $f[V]$ and $f^{-1}[W]$ are $\bigwedge${\hyp}definable. As $f$ is $A${\hyp}invariant, $f(\sigma(a))=\sigma(f(a))$ for $a\in P$ and $\sigma\in\mathrm{Aut}(\mathfrak{M}/A)$, so $f[V]$ and $f^{-1}[W]$ are $A${\hyp}invariant. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{lem}
A \emph{type over $A$, or $A${\hyp}type}, is a $\subset${\hyp}minimal non{\hyp}empty $\bigwedge_A${\hyp}definable subset. For $a\in P$, we write ${\mathrm{tp}}(a/A)$ for the type over $A$ containing $a$. As the lattice of $\bigwedge_{A}${\hyp}definable sets is closed under arbitrary intersections, the type of a hyperimaginary element always exists. As usual, for infinite tuples $\overline{a}=(a_t)_{t\in T}$ and $\overline{b}=(b_t)_{t\in T}$, we write ${\mathrm{tp}}(\overline{a}/A)={\mathrm{tp}}(\overline{b}/A)$ to mean that ${\mathrm{tp}}(\overline{a}_{\mid T_0}/A)={\mathrm{tp}}(\overline{b}_{\mid T_0}/A)$ for any $T_0\subseteq T$ finite. \begin{lem} \label{l:types and infinite definable functions} Let $P,Q$ be $A${\hyp}hyperdefinable sets, $f:\ P\rightarrow Q$ an $\bigwedge_{A}${\hyp}definable function and $a\in P$. Then, $f[{\mathrm{tp}}(a/A)]={\mathrm{tp}}(f(a)/A)$. \begin{dem} By Lemma \ref{l:infinite definable functions}, we have $f[{\mathrm{tp}}(a/A)]$ and $f^{-1}[{\mathrm{tp}}(f(a)/A)]$ are $\bigwedge_A${\hyp}definable. Then, ${\mathrm{tp}}(a/A)\subseteq f^{-1}[{\mathrm{tp}}(f(a)/A)]$ and ${\mathrm{tp}}(f(a)/A)\subseteq f[{\mathrm{tp}}(a/A)]$ by minimality. Thus, $f[{\mathrm{tp}}(a/A)]={\mathrm{tp}}(f(a)/A)$. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{lem} As an immediate corollary we get the following result: \begin{coro}\label{c:types over hyperimaginaries} Let $P$ be an $A${\hyp}hyperdefinable set, $a\in P$ and $a^*\in a$. Then, ${\mathrm{tp}}(a/A)={\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_P[{\mathrm{tp}}(a^*/A)]$. In particular, ${\mathrm{tp}}(a/A)$ is the orbit of $a$ under the action of $\mathrm{Aut}(\mathfrak{M}/A)$ on $P$. In other words, for any $A^*$ representatives of $A$, we have $b\in{\mathrm{tp}}(a/A)$ if and only if there are $b^{**}\in b$ and $A^{**}$ representatives of $A$ such that ${\mathrm{tp}}(a^*,A^*)={\mathrm{tp}}(b^{**},A^{**})$.
Consequently, $V$ is $A${\hyp}invariant if and only if ${\mathrm{tp}}(a/A)\subseteq V$ for any $a\in V$. \begin{dem} Clear by Lemma \ref{l:types and infinite definable functions} and Lemma \ref{l:types of hyperimaginaries}. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{coro}
We explain now how to substitute hyperimaginary parameters. Let $P$ be $A${\hyp}hyperdefinable, $\overline{b}$ a small set of hyperimaginaries over $A$ and $V\subseteq P$ an $\bigwedge_{\overline{b}}${\hyp}definable subset. Let $\overline{c}$ be such that ${\mathrm{tp}}(\overline{b}/A)={\mathrm{tp}}(\overline{c}/A)$. The set \emph{$V(\overline{c})$ given from $V$ by replacing $\overline{b}$ by $\overline{c}$} is the set $\sigma[V]$ for $\sigma\in\mathrm{Aut}(\mathfrak{M}/A)$ such that $\sigma(\overline{b})=\overline{c}$. Note that this does not depend on the choice of $\sigma\in\mathrm{Aut}(\mathfrak{M}/A)$.
Alternatively, we present a more explicit methodology using \emph{uniform definitions} that does not require the use of automorphisms. Let $P=\rfrac{X}{E}$ be an $A${\hyp}hyperdefinable set. We say that a partial type $\Sigma(x,A^*)$ is \emph{weakly uniform on $P$ over $A$} if $\Sigma(x,A^{**})$ defines the same set on $P$ for any set of representatives $A^{**}$ of $A$, i.e. $\rfrac{\Sigma(\mathfrak{M},A^*)}{E}=\rfrac{\Sigma(\mathfrak{M},A^{**})}{E}$. We say that $\Sigma(x,A^*)$ is \emph{uniform on $P$ over $A$} if $\Sigma(x,A^*)\cap \mathrm{For}^x(\mathscr{L}(B^*))$ is weakly uniform on $P$ over $B$ for any $B^*\subseteq A^*$ such that $P$ is still $B${\hyp}hyperdefinable and $A_{\mathrm{re}}\subseteq B$. Let $V\subseteq P$ be $\bigwedge_A${\hyp}definable. A \emph{uniform definition of $V$ over $A$} is a partial type $\underline{V}$ uniform on $P$ over $A$ that defines $V$. \begin{lem} \label{l:uniform definition} Let $P=\rfrac{X}{E}$ be an $A${\hyp}hyperdefinable set and $V\subseteq P$ a non{\hyp}empty $\bigwedge_A${\hyp}definable set. Then, there is a uniform definition of $V$ over $A$. \begin{dem} Write $A=\{a_i\}_{i\in I}$ and $a_i\in Q_i=\rfrac{Y_i}{F_i}$. Pick $A^*$ representatives of $A$ and let $\Sigma(x,A^*)$ be any partial type defining $V$ over $A^*$. Write $\underline{F}=\bigwedge_i \underline{F}_i$ and $\Gamma={\mathrm{tp}}(A^*)$. Using saturation, take the partial type $\underline{V}(x,A^*)$ expressing $\exists \overline{y}\mathrel{} \Sigma(x,\overline{y})\wedge \underline{F}(\overline{y},A^*)\wedge \Gamma(\overline{y})$. By Corollary \ref{c:types over hyperimaginaries}, it is a uniform definition of $V$ over $A$. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{lem}
\begin{lem}\label{l:substitution and uniform definitions} Let $P$ be an $A${\hyp}hyperdefinable set and $V\subseteq P$ be a non{\hyp}empty $\bigwedge_{\overline{b}}${\hyp}definable subset with $\overline{b}$ a small set of hyperimaginaries over $A$ and $A\subseteq \overline{b}$. Let $\underline{V}(x,\overline{b}^*)$ be a uniform definition of $V$. Let $\overline{c}$ be such that ${\mathrm{tp}}(\overline{b}/A)={\mathrm{tp}}(\overline{c}/A)$ and $\overline{c}^*$ be representatives. Then, $\underline{V}(x,\overline{c}^*)$ is a uniform definition of $V(\overline{c})$. \begin{dem} Take $\sigma\in\mathrm{Aut}(\mathfrak{M}/A)$ with $\sigma(\overline{b})=\overline{c}$, so $\overline{b}^{**}=\sigma^{-1}(\overline{c}^*)$ is a representative of $\overline{b}$. As $\underline{V}(x,\overline{b}^*)$ is a uniform definition of $V$ on $P$ over $\overline{b}$, we have that $\underline{V}(x,\overline{b}^{**})$ also defines $V$. Consequently, $V(\overline{c})$ is defined by $\underline{V}(x,\overline{c}^*)$. Now, take $\overline{c}_0\subseteq\overline{c}$ such that $P$ is still $ \overline{c}_0${\hyp}hyperdefinable and say $\overline{c}_0=\sigma(\overline{b}_0)$. Then, $P$ is still $\overline{b}_0${\hyp}hyperdefinable and, as $\underline{V}(x,\overline{b}^*)$ is uniform on $P$ over $\overline{b}$, we have that $\underline{W}(x,\overline{b}^{**}_0)=\underline{V}(x,\overline{b}^{**})\cap \mathrm{For}(\mathscr{L}(\overline{b}^{**}_0))$ is weakly uniform on $P$ over $\overline{b}_0$. Therefore, as ${\mathrm{tp}}(\overline{c}^*_0/A)={\mathrm{tp}}(\overline{b}^{**}_0/A)$, $\underline{W}(x,\overline{c}^{*}_0)$ is weakly uniform on $P$ over $\overline{c}_0$. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{lem} \begin{lem} \label{l:uniform definition finite} Let $P$ and $Q$ be $A${\hyp}hyperdefinable sets and $b\in Q$. Then, for any $\bigwedge_{A,b}${\hyp}definable subset $V\subseteq P$ there is an $\bigwedge_A${\hyp}definable set $W\subseteq Q\times P$ such that $V(c)=W(c)\coloneqq {\mathop{\mbox{\normalfont{\large{\Fontauri{p}}}}}}_P[W\cap (\{c\}\times P)]$ for any $c\in {\mathrm{tp}}(b/A)$. \begin{dem} Take $\Sigma(A^*,b^*,x)$ a uniform definition of $V$ on $P$ over $A,b$. Then, by Lemma \ref{l:substitution and uniform definitions}, $\Sigma(A^*,y,x)$ defines an $\bigwedge_A${\hyp}definable subset $W$ of $Q\times P$ such that $V(c)=W(c)\coloneqq {\mathop{\mbox{\normalfont{\large{\Fontauri{p}}}}}}_P[W\cap (\{c\}\times P)]$ for any $c\in{\mathrm{tp}}(b/A)$. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{lem} \begin{lem} \label{l:equivalence relation of equality of types} Let $P$ be an $A${\hyp}hyperdefinable set. Then, \[\Delta_P(A)=\{(a,b)\in P\times P\mathrel{:} {\mathrm{tp}}(a/A)={\mathrm{tp}}(b/A)\}\] is an $\bigwedge_{A}${\hyp}definable equivalence relation. Furthermore, it has a uniform definition $\underline{\Delta_P}(A^*)$ such that $\underline{\Delta_P}(A^*)\cap \mathrm{For}(\mathscr{L}(B^*))$ defines $\Delta_P(B)$ for any subset $B\subseteq A$ such that $P$ is $B${\hyp}hyperdefinable. \end{lem} \subsection{The logic topologies of hyperdefinable sets} Let $P=\rfrac{X}{E}$ be an $A${\hyp}hyperdefinable subset. The \emph{$A${\hyp}logic topology} of $P$ is the one given by taking as closed sets the $\bigwedge_A${\hyp}definable subsets of $P$. In particular, by Lemma \ref{l:infinite definable functions}, the $A${\hyp}logic topology of $P$ is the quotient topology of the $A${\hyp}logic topology of $X$. \begin{prop} Let $P$ and $Q$ be $A${\hyp}hyperdefinable sets and $f:\ P\rightarrow Q$ an $\bigwedge_A${\hyp}definable function. Then, $f$ is a continuous and closed function between the $A${\hyp}logic topologies. \begin{dem} By Lemma \ref{l:infinite definable functions}. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{prop} \begin{prop}\label{p:topology hyperdefinables} Let $P$ be an $A${\hyp}hyperdefinable set. Then, the $A${\hyp}logic topology of $P$ is compact and normal (i.e. any two disjoint closed sets can be separated by open sets). The closure of a point $a\in P$ is ${\mathrm{tp}}(a/A)$, so the properties $\mathrm{T}_0$, $\mathrm{T}_1$ and $\mathrm{T}_2$ are equivalent to ${\mathrm{tp}}(a/A)=\{a\}$ for all $a\in P$. \begin{dem} Compactness follows from saturation. For normality, note that the image of a normal space under a continuous closed map is always normal. Therefore, using ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_P$, from normality of the $A^*${\hyp}logic topology of $X$ we conclude the normality of the $A^*${\hyp}logic topology of $P$. Using that $a\mapsto \rfrac{a}{\Delta_P(A)}$ is continuous and closed from the $A^*${\hyp}logic topology of $P$ to the $A${\hyp}logic topology of $\rfrac{P}{\Delta_P(A)}$, we get normality of the latter. From there, we trivially conclude normality of the $A${\hyp}logic topology of $P$.
Finally, as the closure of a point is its type by definition, $\mathrm{T}_1$ is equivalent to ${\mathrm{tp}}(a/A)=\{a\}$ for every $a\in P$. By Corollary \ref{c:types over hyperimaginaries}, $a\in{\mathrm{tp}}(b/A)$ if and only if ${\mathrm{tp}}(a/A)={\mathrm{tp}}(b/A)$, so $b\in \overline{\{a\}}$ if and only if $a\in \overline{\{b\}}$ --- this topological property is sometimes called $\mathrm{R}_0$. Therefore, $\mathrm{T}_0$ is also equivalent to ${\mathrm{tp}}(a/A)=\{a\}$ for every $a\in P$. By normality, $\mathrm{T}_1$ and $\mathrm{T}_2$ are equivalent. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{prop} \begin{prop} \label{p:uniqueness of hausdorff logic topologies} Let $P$ be an $A${\hyp}hyperdefinable set and $A\subseteq A'$. If the $A${\hyp}logic topology of $P$ is Hausdorff, then the $A${\hyp}logic topology and the $A'${\hyp}logic topology are equal. Thus, there is at most one Hausdorff logic topology and, if it exists, it will be called the \emph{global logic topology}. \end{prop} \begin{obs} Furthermore, the global logic topologies are preserved by expansion of the language. Indeed, let $P$ be a hyperdefinable set whose $A${\hyp}logic topology is Hausdorff and $\mathfrak{M}'$ a $\kappa${\hyp}saturated and strongly $\kappa${\hyp}homogeneous expansion of $\mathfrak{M}$. Then, obviously, $\mathrm{id}:\ P\rightarrow P$ is a continuous bijection from the $A${\hyp}logic topology in $\mathfrak{M}'$ to the $A${\hyp}logic topology in $\mathfrak{M}$. As both are compact and the second one is Hausdorff, we conclude that both are the same. \end{obs}
Let $P$ be an $A${\hyp}hyperdefinable set. Assuming that $\kappa> 2^{|A|+|\mathscr{L}|}$, either $|P|\geq \kappa$ or $|P|\leq 2^{|A|+|\mathscr{L}|}$. Then, $P$ has a global logic topology if and only if $P$ is small, if and only if $|P|\leq 2^{|A|+|\mathscr{L}|}$. Furthermore, write $\mathrm{bdd}(A)$ for the set of \emph{bounded hyperimaginaries over $A$}, i.e. hyperimaginaries over $A$ such that $|{\mathrm{tp}}(a/A)|<\kappa$. Note that $|\mathrm{bdd}(A)|\leq 2^{|A|+|\mathscr{L}|}$. It follows that an $A${\hyp}hyperdefinable set $P$ has a global logic topology if and only if the $\mathrm{bdd}(A)${\hyp}logic topology is the global logic topology. \begin{prop} \label{p:product logic topology} Let $P$ and $Q$ be $A${\hyp}hyperdefinable sets. Then, the $A${\hyp}logic topology in $P\times Q$ is at least as fine as the product topology of the $A${\hyp}logic topologies. Furthermore, $P\times Q$ has a global logic topology if and only if $P$ and $Q$ do, and then the global logic topology of $P\times Q$ is the product topology of the global logic topologies of $P$ and $Q$. \end{prop}
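The following standard example illustrates a small hyperdefinable set and its global logic topology. \begin{obs} Let $\mathfrak{M}$ be a sufficiently saturated real closed field and let $E$ be the $\bigwedge_{\emptyset}${\hyp}definable equivalence relation of infinitesimal closeness on $X=\{x\mathrel{:} 0\leq x\leq 1\}$, given by $xEy$ if and only if $n\cdot|x-y|\leq 1$ for every $n\in\mathbb{N}$. Then $P=\rfrac{X}{E}$ is $\emptyset${\hyp}hyperdefinable and the standard part map identifies $P$ with the real interval $[0,1]$. One can check that, under this identification, the $\emptyset${\hyp}logic topology corresponds to the Euclidean topology of $[0,1]$; in particular, $P$ is small and its $\emptyset${\hyp}logic topology is Hausdorff, so it is the global logic topology of $P$. \end{obs}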
\subsection{Piecewise hyperdefinable sets} A \emph{piecewise $A${\hyp}hyperdefinable} set is a strict direct limit of $A${\hyp}hyperdefinable sets with $\bigwedge_A${\hyp}definable inclusions. In other words, a piecewise $A${\hyp}hyperdefinable set is a direct limit \[P\coloneqq \underrightarrow{\lim}_{(I,\prec)}(P_i,\varphi_{ji})_{j\succeq i}=\rfrac{\bigsqcup_I P_i}{\sim_P}\] of a direct system of $A${\hyp}hyperdefinable sets and $1${\hyp}to{\hyp}$1$ $\bigwedge_A${\hyp}definable functions $\varphi_{ji}:\ P_i\rightarrow P_j$.
Recall that $(I,\prec)$ is a directed set and, for each $i,j,k\in I$ with $i\preceq j\preceq k$, $\varphi_{kj}\circ\varphi_{ji}=\varphi_{ki}$ and $\varphi_{ii}=\mathrm{id}$. Also, recall that $\sim_{P}$ is the equivalence relation on $\bigsqcup_I P_i$ defined by $x\sim_{P}y$ for $x\in P_i$ and $y\in P_j$ if and only if there is some $k\in I$ with $i\preceq k$ and $j\preceq k$ and $\varphi_{ki}(x)=\varphi_{kj}(y)$. Note that, in fact, as we consider only direct systems where the functions $\varphi_{ji}$ are $1${\hyp}to{\hyp}$1$, the equivalence relation is given by $x\sim_{P}y$ with $x\in P_i$ and $y\in P_j$ if and only if $\varphi_{ki}(x)=\varphi_{kj}(y)$ for any $k\in I$ such that $i\preceq k$ and $j\preceq k$.
The \emph{pieces} of $P$ are the subsets $\rfrac{P_i}{\sim_{P}}$. The \emph{canonical inclusions} are the maps ${\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}_{P_i}:\ P_i\rightarrow \rfrac{P_i}{\sim_P}\subseteq P$ given by $a\mapsto [a]_{\sim_P}$ for $a\in P_i$.
The \emph{cofinality} $\mathrm{cf}(P)$ of $P$ is the cofinality of $I$, which is the minimal ordinal $\alpha$ from which there is a function $f:\ \alpha\rightarrow I$ such that for every $i\in I$ there is $\xi\in \alpha$ with $i\preceq f(\xi)$. We say that $P$ is \emph{countably} piecewise hyperdefinable if it has countable cofinality. From now on, we always assume that $\mathrm{cf}(P)<\kappa$.
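To illustrate these definitions, we recall a standard example. \begin{obs} In a sufficiently saturated real closed field, let $E$ denote the $\bigwedge_{\emptyset}${\hyp}definable equivalence relation of infinitesimal closeness and, for $n\in\mathbb{N}$, let $P_n=\rfrac{\{x\mathrel{:} -n\leq x\leq n\}}{E}$, with the $\bigwedge_{\emptyset}${\hyp}definable maps $\varphi_{mn}$ induced by inclusion for $n\leq m$. This is a strict direct system, and the standard part map identifies $P=\underrightarrow{\lim}_{\mathbb{N}}P_n$ with $\mathbb{R}$, the pieces corresponding to the intervals $[-n,n]$. As $\mathrm{cf}(P)=\omega$, $P$ is countably piecewise hyperdefinable. \end{obs}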
A \emph{piecewise $\bigwedge_{A}${\hyp}definable} subset of $P$ is a subset $V\subseteq P$ such that ${\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}^{\ -1}_{P_i}[V]$ is $\bigwedge_{A}${\hyp}definable in $P_i$ for each $i\in I$. An \emph{$\bigwedge_A${\hyp}definable} subset of $P$ is a piecewise $\bigwedge_A${\hyp}definable subset contained in some piece. If $V\subseteq P$ is a non{\hyp}empty $\bigwedge${\hyp}definable set, after fixing a piece $\rfrac{P_i}{\sim_P}$ containing it and declaring the parameters, we will write $\underline{V}$ to denote a partial type defining it in $P_i$. In that case, we will also say that $\underline{V}$ defines $V$. If $a\in P$ is an element, after fixing a piece $\rfrac{P_i}{\sim_P}$ containing it, we will say that $a^*$ is a representative of $a$ if ${\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}_{P_i}({\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_{P_i}(a^*))=a$, where ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_{P_i}$ is the quotient map of $P_i$ as hyperdefinable set.
\begin{obs} The set of piecewise $\bigwedge_A${\hyp}definable subsets is a lattice of sets closed under arbitrary intersections, and the collection of $\bigwedge_A${\hyp}definable subsets is the ideal of that lattice generated by the pieces.\end{obs}
Note that, in the previous definitions, piecewise hyperdefinable sets are not only sets but sets together with a particular structure. This structure is given by the lattices of piecewise $\bigwedge${\hyp}definable subsets and the ideals of $\bigwedge${\hyp}definable subsets. It is very important to remember this as the same set could be represented as a piecewise hyperdefinable set in several different ways --- see Example \ref{e:example 1}.
By the strictness condition we get the following fundamental lemma. \begin{lem}[Correspondence Lemma]\label{l:correspondence lemma in pieces} Let $P$ be piecewise $A${\hyp}hyperdefinable and $\rfrac{P_i}{\sim_P}$ a piece of $P$. Then, ${\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}_{P_i}:\ P_i\rightarrow \rfrac{P_i}{\sim_P}$ is a bijection. Furthermore, $V\subseteq \rfrac{P_i}{\sim_P}$ is $\bigwedge_A${\hyp}definable as subset of $P$ if and only if ${\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}_{P_i}^{\ -1}[V]$ is $\bigwedge_A${\hyp}definable as subset of $P_i$. \begin{dem} Obviously ${\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}_{P_i}:\ P_i\rightarrow\rfrac{P_i}{\sim_P}$ is a bijection by the strictness condition. Say $V\subseteq \rfrac{P_i}{\sim_P}$. Using again the strictness condition, for any $j,k\in I$ with $i,j\preceq k$, we have that ${\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}^{\ -1}_{P_j}[V]=\varphi_{kj}^{-1}[{\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}^{\ -1}_{P_k}[V]]=\varphi_{kj}^{-1}[\varphi_{ki}[{\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}^{\ -1}_{P_i}[V]]]$. Therefore, by $\bigwedge_A${\hyp}definability of the maps and Lemma \ref{l:infinite definable functions}, we conclude that $V$ is $\bigwedge_A${\hyp}definable if and only if ${\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}_{P_i}^{\ -1}[V]$ is $\bigwedge_A${\hyp}definable as subset of $P_i$. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{lem} The Correspondence Lemma \ref{l:correspondence lemma in pieces} says that ${\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}_{P_i}$ is a true identification between $P_i$ and $\rfrac{P_i}{\sim_P}$ in terms of the model theoretic structure. In other words, it says that pieces of piecewise hyperdefinable sets are, indeed, hyperdefinable. From now on, slightly abusing notation, we make no distinction between $P_i$ and $\rfrac{P_i}{\sim_P}$, i.e. $P_i\coloneqq\rfrac{P_i}{\sim_P}$ and ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_{P_i}\coloneqq {\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}_{P_i}\circ {\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_{P_i}$. \begin{coro} \label{c:infinite definable in piecewise hyperdefinable} Let $P=\underrightarrow{\lim}_I P_i$ be piecewise $A${\hyp}hyperdefinable. Then, a subset $V\subseteq P$ is piecewise $\bigwedge_A${\hyp}definable if and only if $V\cap P_i$ is $\bigwedge_A${\hyp}definable as subset of $P$ for each $i\in I$. \end{coro} \begin{obs}\label{o:remark ideal infinite definable sets} As every piece is $\bigwedge${\hyp}definable, the lattice of piecewise $\bigwedge_A${\hyp}definable subsets can be recovered from the ideal of $\bigwedge_A${\hyp}definable subsets. In other words, the structure of $P$ is completely determined by the ideals of $\bigwedge${\hyp}definable sets. However, in general, the lattice of piecewise $\bigwedge_A${\hyp}definable subsets does not determine the ideal of $\bigwedge_A${\hyp}definable subsets --- see Example \ref{e:example 1}. \end{obs}
A \emph{type over $A$, or $A${\hyp}type}, is a $\subset${\hyp}minimal non{\hyp}empty piecewise $\bigwedge_A${\hyp}definable subset. The \emph{type of $a\in P$ over $A$}, ${\mathrm{tp}}(a/A)$, is the minimal piecewise $\bigwedge_A${\hyp}definable subset of $P$ containing $a$. Since $a\in P_i$ for some $i\in I$ and pieces are $\bigwedge_A${\hyp}definable, ${\mathrm{tp}}(a/A)$ is actually $\bigwedge_A${\hyp}definable.
Let $R\subseteq P\times Q$ be a binary relation between two piecewise hyperdefinable sets. We say that $R$ is \emph{piecewise bounded (or piecewise continuous)} if the image of any piece of $P$ is contained in some piece of $Q$. We say that it is \emph{piecewise proper} if the preimage of any piece of $Q$ is contained in some piece of $P$. We use this terminology in particular for partial functions. To simplify the terminology, we often omit repeated occurrences of ``piecewise''. So, for example, we say ``a piecewise bounded and proper $\bigwedge${\hyp}definable function'' instead of ``a piecewise bounded and piecewise proper piecewise $\bigwedge${\hyp}definable function''.
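A minimal example, with the pieces chosen ad hoc, separates the two notions. \begin{obs} Let $P=\underrightarrow{\lim}_{\mathbb{N}}P_n$ be a piecewise hyperdefinable set such that no piece equals $P$ itself, and let $Q$ be a hyperdefinable set regarded as a piecewise hyperdefinable set with a single piece. Then any piecewise $\bigwedge${\hyp}definable function $f:\ P\rightarrow Q$ is piecewise bounded, since every piece of $P$ maps into the unique piece of $Q$, but no such $f$ is piecewise proper, since $f^{-1}[Q]=P$ is not contained in any piece. \end{obs}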
Given two piecewise $A${\hyp}hyperdefinable sets $P=\underrightarrow{\lim}_{I}P_i$ and $Q=\underrightarrow{\lim}_{J}Q_j$, the Cartesian product $P\times Q$ is canonically identified with $\underrightarrow{\lim}_{I\times J}P_i\times Q_j$ via $([x]_{P},[y]_{Q})\mapsto [x,y]_{P\times Q}$, where $(x,y)\sim_{P\times Q}(x',y')$ if and only if $x\sim_Px'$ and $y\sim_Q y'$. Thus, we say that a binary relation $R$ between $P$ and $Q$ is piecewise $\bigwedge_A${\hyp}definable if it is so as subset of the Cartesian product. \begin{lem} \label{l:types and piecewise infinite definable functions} Let $P$ and $Q$ be piecewise $A${\hyp}hyperdefinable sets, $f:\ P\rightarrow Q$ a piecewise $\bigwedge_{A}${\hyp}definable function and $a\in P$. Then, $f[{\mathrm{tp}}(a/A)]={\mathrm{tp}}(f(a)/A)$. \begin{dem} Clear from Lemma \ref{l:types and infinite definable functions}. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{lem} \begin{prop} \label{p:piecewise infinite definable functions} Let $P$ and $Q$ be two piecewise $A${\hyp}hyperdefinable sets and $f$ a piecewise $\bigwedge_A${\hyp}definable partial function from $P$ to $Q$. Then:
{\textnormal{\textbf{(1)}}} If $f$ is piecewise bounded, images of $\bigwedge_A${\hyp}definable sets are $\bigwedge_A${\hyp}definable, and preimages of piecewise $\bigwedge_A${\hyp}definable sets are piecewise $\bigwedge_A${\hyp}definable.
{\textnormal{\textbf{(2)}}} If $f$ is piecewise proper, images of piecewise $\bigwedge_A${\hyp}definable sets are piecewise $\bigwedge_A${\hyp}definable, and preimages of $\bigwedge_A${\hyp}definable sets are $\bigwedge_A${\hyp}definable. \begin{dem} Both are quite similar, so let us show only {\textnormal{\textbf{(1)}}}. For $i\in I$ and $j\in J$ with $f[P_i]\subseteq Q_j$, consider $f_{ji}:\ P_i\rightarrow Q_j$ given by the restriction of $f$. As $f$ is piecewise $\bigwedge_A${\hyp}definable, each $f_{ji}$ is $\bigwedge_A${\hyp}definable. Given an $\bigwedge_A${\hyp}definable subset $V\subseteq P_i$, we have that $f[V]=f_{ji}[V]$, concluding that the image of $V$ is $\bigwedge_A${\hyp}definable in $Q$ by Lemma \ref{l:infinite definable functions} and the Correspondence Lemma \ref{l:correspondence lemma in pieces}. On the other hand, given a piecewise $\bigwedge_A${\hyp}definable subset $W\subseteq Q$, we have that $f^{-1}[W]\cap P_i=f^{-1}_{ji}[W\cap Q_j]$, so $f^{-1}[W]$ is piecewise $\bigwedge_A${\hyp}definable in $P$ by Lemma \ref{l:infinite definable functions}. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{prop} An \emph{isomorphism of piecewise $A${\hyp}hyperdefinable sets} is a piecewise bounded and proper $\bigwedge_A${\hyp}definable bijection. In that case, we will say that $P$ and $Q$ are \emph{isomorphic over $A$}.
\subsection{The logic topologies of piecewise hyperdefinable sets} The \emph{$A${\hyp}logic topology} of $P$ is the respective direct limit topology. In other words, a subset of $P$ is closed if and only if it is piecewise $\bigwedge_A${\hyp}definable. By the Correspondence Lemma \ref{l:correspondence lemma in pieces}, each piece is compact and, further, every $\bigwedge_A${\hyp}definable subset is compact.
As in the case of hyperdefinable sets, $\overline{\{a\}}={\mathrm{tp}}(a/A)$ for any $a\in P$. Thus, the properties $\mathrm{T}_0$ and $\mathrm{T}_1$ are equivalent. If the $A${\hyp}logic topology is $\mathrm{T}_1$, for any other small set of hyperimaginary parameters $A'$ containing $A$, the logic topologies over $A$ and over $A'$ are the same. Thus, there is at most one $\mathrm{T}_1$ logic topology on $P$ and, if it exists, it is called the \emph{global logic topology}. It follows that $P$ has a global logic topology if and only if every piece has size at most $2^{|A|+|\mathscr{L}|}$, if and only if the $\mathrm{bdd}(A)${\hyp}logic topology is the global logic topology. In particular, if $P$ has a global logic topology, then $|P|\leq 2^{|A|+|\mathscr{L}|}+\mathrm{cf}(P)$. Thus, assuming $2^{|A|+|\mathscr{L}|}+\mathrm{cf}(P)<\kappa$, we conclude that $P$ has a global logic topology if and only if it is small.
There are still some topological properties that we want to extend from hyperdefinable sets to piecewise hyperdefinable sets. Ideally, we would like to show that these topologies are locally compact, normal and satisfy $\mathrm{T}_1\Leftrightarrow \mathrm{T}_2$. Also, we would like to show that they behave well under finite products. In general, these properties may fail --- see Example \ref{e:example 4}, Example \ref{e:example 6} and Example \ref{e:example no topological group} for some counterexamples. The rest of the subsection is dedicated to giving natural sufficient conditions for these properties.
\
In general, for a topological space $X$, a \emph{covering} of $X$ is a set $\mathcal{C}\subseteq \mathcal{P}(X)$ such that $\bigcup\mathcal{C}=X$, whose elements are called \emph{pieces}. A covering $\mathcal{C}$ is \emph{coherent} when, for every $U\subseteq X$, $U$ is open in $X$ if and only if $U\cap P$ is open in $P$ for each $P\in\mathcal{C}$. Equivalently, $\mathcal{C}$ is coherent when, for every $V\subseteq X$, $V$ is closed in $X$ if and only if $V\cap P$ is closed in $P$ for each $P\in\mathcal{C}$. For example, $X$ is \emph{compactly generated} if the family of all the compact subsets is a coherent covering.
We say that a covering is \emph{local} if for every point of $X$ there is a piece that is a neighbourhood of it. The following topological results are straightforward:
\begin{lem}[Local coverings] \label{l:local coherent} Let $X$ be a topological space and $\mathcal{C}$ a local covering. Then, $\mathcal{C}$ is coherent. \end{lem}
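The following examples, standard in general topology, illustrate the lemma. \begin{obs} Consider $\mathbb{R}$ with the Euclidean topology. The covering $\{[-n,n]\}_{n\in\mathbb{N}}$ is local, as every point lies in the interior of some piece, hence coherent by Lemma \ref{l:local coherent}. On the other hand, the covering of $\mathbb{R}$ by singletons is a covering by compact sets which is not coherent: every subset of $\mathbb{R}$ meets each singleton in a closed set, while not every subset of $\mathbb{R}$ is closed. \end{obs}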
\begin{lem} \label{l:lemma normal coverings} Let $X$ be a topological space and $\{P_n\}_{n\in\mathbb{N}}$ a closed coherent covering with $P_n\subseteq P_{n+1}$ for each $n\in\mathbb{N}$. Suppose that $P_n$ is normal for each $n\in\mathbb{N}$. Then, $X$ is normal. \begin{dem} Let $V$ and $W$ be closed disjoint subsets of $X$. Take the continuous map $g:\ V\sqcup W\rightarrow\{-1,1\}$ given by $g_{\mid V}=1$ and $g_{\mid W}=-1$. By recursion, using Tietze's Theorem \cite[Theorem 35.1]{munkres1999topology}, we construct a chain $(f_n)_{n\in\mathbb{N}}$ of continuous maps $f_n:\ P_n\rightarrow [-1,1]$, each one extending $g_{\mid P_n}$. Taking $f=\bigcup f_n$, we get a continuous map separating $V$ and $W$. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{lem} In the case of a piecewise hyperdefinable set $P$, we have a directed closed and compact coherent covering $\{P_i\}_{i\in I}$. In particular, it follows that logic topologies are always compactly generated. If $P$ is countably piecewise hyperdefinable, then it is $\sigma${\hyp}compact (i.e. a countable union of compact sets). Furthermore, by Lemma \ref{l:lemma normal coverings}, we get normality: \begin{prop} \label{p:normal countably piecewise hyperdefinable} Let $P$ be a countably piecewise $A${\hyp}hyperdefinable set. Then, $P$ is normal with the $A${\hyp}logic topology. In particular, global logic topologies of countably piecewise hyperdefinable sets are Hausdorff. \end{prop} Say that a piecewise hyperdefinable set $P$ is \emph{locally $A${\hyp}hyperdefinable} if its covering is local in the $A${\hyp}logic topology, i.e. if for every point of $P$ there is an $\bigwedge_A${\hyp}definable set which is a neighbourhood of it. Say that $P$ is locally hyperdefinable if it is so for some small set of parameters. \begin{obs} \emph{Piecewise definable} sets are the special case of piecewise hyperdefinable sets when we only consider strict direct limits of definable sets with definable maps. Definable sets are always open in the logic topology so, following the terminology of this paper, every piecewise definable set is trivially locally (hyper)definable. The distinction between these two notions only appears in the general context of piecewise hyperdefinable sets. In particular, note that our terminology is consistent with the typical use of the terms ``piecewise definable'' and ``locally definable'' as synonyms in the literature. \end{obs} \begin{prop} \label{p:mapping locally hyperdefinable sets} Let $P$ and $Q$ be piecewise $A${\hyp}hyperdefinable sets. Assume that $P$ is locally $A${\hyp}hyperdefinable. Let $f:\ P\rightarrow Q$ be a piecewise bounded and proper $\bigwedge_A${\hyp}definable onto map. 
Then, $Q$ is locally $A${\hyp}hyperdefinable. \begin{dem} Pick $y\in Q$. By Proposition \ref{p:piecewise infinite definable functions}, $f^{-1}[{\mathrm{tp}}(y/A)]$ is $\bigwedge_A${\hyp}definable, so compact in the $A${\hyp}logic topology of $P$. As $P$ is locally $A${\hyp}hyperdefinable, covering the compact set $f^{-1}[{\mathrm{tp}}(y/A)]$ by finitely many $\bigwedge_A${\hyp}definable neighbourhoods, we find an open set $U$ (i.e. $P\setminus U$ is piecewise $\bigwedge_A${\hyp}definable) and an $\bigwedge_A${\hyp}definable set $V$ such that $f^{-1}[{\mathrm{tp}}(y/A)]\subseteq U\subseteq V$. By Proposition \ref{p:piecewise infinite definable functions}, $f[V]$ is $\bigwedge_A${\hyp}definable, and $f[P\setminus U]$ is piecewise $\bigwedge_A${\hyp}definable. As $f^{-1}[{\mathrm{tp}}(y/A)]\subseteq U$, ${\mathrm{tp}}(y/A)\cap f[P\setminus U]=\emptyset$. Also, as $f$ is onto, $Q=f[V\cup (P\setminus U)]= f[V]\cup f[P\setminus U]$. Therefore, $y\in {\mathrm{tp}}(y/A)\subseteq Q\setminus f[P\setminus U]\subseteq f[V]$, concluding that $f[V]$ is an $\bigwedge_A${\hyp}definable neighbourhood of $y$ in the $A${\hyp}logic topology. As $y$ is arbitrary, we conclude that $Q$ is locally $A${\hyp}hyperdefinable. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{prop} For locally hyperdefinable sets we have a really good control of the logic topology. \begin{prop}\label{p:locally compact locally hyperdefinable} Let $P$ be a locally $A${\hyp}hyperdefinable set. Then, $P$ is locally closed compact in the $A${\hyp}logic topology, i.e. every point has a local base of closed compact neighbourhoods. \begin{dem} Say $P=\underrightarrow{\lim}\, P_i$. Pick $x\in P$ and $U$ open neighbourhood of $x$ in the $A${\hyp}logic topology. Take a piece $P_i$ and $U_0$ such that $x\in U_0\subseteq P_i$ with $U_0$ open in the $A${\hyp}logic topology of $P$. Then, $U_1\coloneqq U\cap U_0$ is an open neighbourhood of $x$ in the $A${\hyp}logic topology of $P$. Note that ${\mathrm{tp}}(x/A)\subseteq U_1\subseteq P_i$, so ${\mathrm{tp}}(x/A)$ and $P_i\setminus U_1$ are disjoint closed subsets of $P_i$. By Proposition \ref{p:topology hyperdefinables}, $P_i$ is normal, so there are ${\mathrm{tp}}(x/A)\subseteq U'$ and $P_i\setminus U_1\subseteq P_i\setminus V'$ with $U'\cap (P_i\setminus V')=\emptyset$ such that $P_i\setminus U'$ and $V'$ are $\bigwedge_A${\hyp}definable in $P_i$. Therefore, $x\in{\mathrm{tp}}(x/A)\subseteq U'\subseteq V'\subseteq U_1\subseteq P_i$. Now, since $U_1$ is open in the $A${\hyp}logic topology of $P$ and $U'\subseteq U_1$ is open in the subspace topology of $U_1$, we conclude that $U'$ is open in the $A${\hyp}logic topology of $P$. Hence, $V'$ is an $\bigwedge_A${\hyp}definable neighbourhood of $x$ in the $A${\hyp}logic topology contained in $U$. As $x$ and $U$ are arbitrary, we conclude that $P$ is locally closed compact. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{prop} \begin{prop} \label{p:compact locally hyperdefinable} Let $P$ be a locally $A${\hyp}hyperdefinable set. Then, every compact subset of $P$ in the $A${\hyp}logic topology is contained in the interior of some piece. \begin{dem} Clear from the definition: cover the compact set by open sets, each contained in some piece, take a finite subcover and use directedness of the system of pieces to find a single piece containing their union. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{prop} \begin{prop} \label{p:hausdorff locally hyperdefinable} Let $P$ be a locally $A${\hyp}hyperdefinable set. Then, any two closed compact disjoint subsets in the $A${\hyp}logic topology are separated by open sets. In particular, it is $\mathrm{T}_1$ if and only if it is $\mathrm{T}_2$. \begin{dem} Say $P=\underrightarrow{\lim}\, P_i$. Let $K_1$ and $K_2$ be two disjoint closed compact subsets of $P$ in the $A${\hyp}logic topology. By Proposition \ref{p:compact locally hyperdefinable}, there is a piece $P_i$ and an open subset $U$ of $P$ such that $K_1,K_2\subseteq U\subseteq P_i$. By normality of $P_i$, there are disjoint open subsets $U_1$ and $U_2$ in the $A${\hyp}logic topology of $P_i$ such that $K_1\subseteq U_1$ and $K_2\subseteq U_2$. Thus, $U\cap U_1$ and $U\cap U_2$ are disjoint open subsets in the subspace topology of $U$ separating $K_1$ and $K_2$. As $U$ is open in the $A${\hyp}logic topology of $P$, we conclude that $U\cap U_1$ and $U\cap U_2$ are disjoint open subsets in the $A${\hyp}logic topology of $P$ separating $K_1$ and $K_2$. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{prop} Recall that a function between topological spaces is \emph{proper} if the preimage of every compact set is compact. For instance, every closed function with compact fibres is proper \cite[Theorem 3.7.2]{engelking1989general}. \begin{prop} \label{p:functions logic topology} Let $P$ and $Q$ be piecewise $A${\hyp}hyperdefinable sets and $f:\ P\rightarrow Q$ a piecewise $\bigwedge_A${\hyp}definable function.
{\textnormal{\textbf{(1)}}} If $f$ is piecewise bounded, then $f$ is continuous between the $A${\hyp}logic topologies.
{\textnormal{\textbf{(2)}}} If $f$ is piecewise proper, then $f$ is closed and has compact fibres between the $A${\hyp}logic topologies. In particular, it is proper.
{\textnormal{\textbf{(3)}}} If $f$ is an isomorphism of piecewise $A${\hyp}hyperdefinable sets, then $f$ is a homeomorphism between the $A${\hyp}logic topologies.
{\textnormal{\textbf{(4)}}} If $Q$ is locally $A${\hyp}hyperdefinable, then $f$ is continuous between the $A${\hyp}logic topologies if and only if it is piecewise bounded.
{\textnormal{\textbf{(5)}}} If $P$ is locally $A${\hyp}hyperdefinable, then $f$ is proper between the $A${\hyp}logic topologies if and only if it is piecewise proper.
{\textnormal{\textbf{(6)}}} If $P$ and $Q$ are locally $A${\hyp}hyperdefinable, then $f$ is an isomorphism of piecewise $A${\hyp}hyperdefinable sets if and only if it is a homeomorphism between the $A${\hyp}logic topologies. \begin{dem} Point \textbf{(1)} is given by Proposition \ref{p:piecewise infinite definable functions}(1).
Point \textbf{(2)}: closedness is given by Proposition \ref{p:piecewise infinite definable functions}(2). On the other hand, for any point $a\in Q$, $f^{-1}(a)\subseteq f^{-1}[{\mathrm{tp}}(a/A)]$ and $f^{-1}[{\mathrm{tp}}(a/A)]$ is $\bigwedge_A${\hyp}definable, so compact. As $f$ is $A${\hyp}invariant and ${\mathrm{tp}}(a/A)$ is the orbit of $a$ under $\mathrm{Aut}(\mathfrak{M}/A)$ by Corollary \ref{c:types over hyperimaginaries}, it follows that $f^{-1}[{\mathrm{tp}}(a/A)]$ is the orbit of $f^{-1}(a)$ under $\mathrm{Aut}(\mathfrak{M}/A)$, i.e. the smallest $A${\hyp}invariant set containing $f^{-1}(a)$. Consequently, any open covering of $f^{-1}(a)$ in the $A${\hyp}logic topology is also a covering of $f^{-1}[{\mathrm{tp}}(a/A)]$, so $f^{-1}(a)$ is compact too.
Point \textbf{(3)} is given by points \textbf{(1)} and \textbf{(2)}. Point \textbf{(4)} is given by \textbf{(1)} and Proposition \ref{p:compact locally hyperdefinable}. Point \textbf{(5)} is given by \textbf{(2)} and Proposition \ref{p:compact locally hyperdefinable}. Point \textbf{(6)} is given by points \textbf{(4)} and \textbf{(5)}.\setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{prop} \begin{prop} \label{p:functions logic topology 2} Let $P$ be a piecewise $A${\hyp}hyperdefinable set and $Q$ a locally $A${\hyp}hyperdefinable set whose $A${\hyp}logic topology is its global logic topology. Let $f:\ P\rightarrow Q$ be a function. Then, $f$ is continuous between the $A${\hyp}logic topologies if and only if $f$ is a piecewise bounded $\bigwedge_A${\hyp}definable function. \begin{dem} By the Closed Graph Theorem \cite[Exercise 8, Section 26]{munkres1999topology}, Proposition \ref{p:functions logic topology} and Proposition \ref{p:compact locally hyperdefinable}. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{prop} By Proposition \ref{p:functions logic topology}(1), Cartesian projection maps are continuous between the logic topologies. Therefore, the $A${\hyp}logic topology of a finite product of piecewise $A${\hyp}hyperdefinable sets is at least as fine as the product topology of the $A${\hyp}logic topologies. In the case of locally hyperdefinable sets with global logic topologies, the two topologies in fact coincide. \begin{prop} \label{p:product of locally hyperdefinable sets} Let $P$ and $Q$ be locally hyperdefinable sets with global logic topologies. Then, $P\times Q$ is a locally hyperdefinable set and its product topology is its global logic topology. \begin{dem} It suffices to note that, for any two topological spaces $X$ and $Y$ with respective local coverings $\mathcal{C}$ and $\mathcal{D}$, $\mathcal{C}\times \mathcal{D}\coloneqq \{C\times D\mathrel{:} C\in\mathcal{C},\ D\in\mathcal{D}\}$ is a local covering of the product topology. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{prop} Similarly, in the case of countably piecewise hyperdefinable sets with global logic topologies, the product topology likewise coincides with the global logic topology. \begin{prop} \label{p:product of countably piecewise hyperdefinable sets} Let $P$ and $Q$ be two countably piecewise hyperdefinable sets with global logic topologies. Then, $P\times Q$ is a countably piecewise hyperdefinable set and its product topology is its global logic topology. \begin{dem} Say $P=\underrightarrow{\lim}\, P_n$ and $Q=\underrightarrow{\lim}\, Q_n$. Let $\Gamma\subseteq P\times Q$ be closed in the global logic topology and $(a,b)\notin \Gamma$ arbitrary. We recursively define a sequence $(U_n,V_n)_{n\in\mathbb{N}}$ such that $a\in U_0$, $b\in V_0$, $U_n\subseteq U_{n+1}$, $V_n\subseteq V_{n+1}$, $U_n\subseteq P_n$ is open in $P_n$, $V_n\subseteq Q_n$ is open in $Q_n$ and $(\overline{U}_n\times \overline{V}_n)\cap \Gamma=\emptyset$.
For any $n\in\mathbb{N}$, note that the product topology and the global logic topology in $P_n\times Q_n$ coincide. As $(a,b)$ and $\Gamma\cap (P_0\times Q_0)$ are disjoint and closed, by normality in $P_0\times Q_0$, we can find $U_0\subseteq P_0$ open in $P_0$ and $V_0\subseteq Q_0$ open in $Q_0$ such that $(a,b)\in U_0\times V_0$ and $(\overline{U}_0\times\overline{V}_0)\cap \Gamma=\emptyset$. Now, suppose $U_n$ and $V_n$ are defined. By normality of $P_{n+1}\times Q_{n+1}$, for any $(x,y)\in \overline{U}_n\times \overline{V}_n$, we can find $U^{xy}_{n+1}\subseteq P_{n+1}$ open in $P_{n+1}$ and $V^{xy}_{n+1}\subseteq Q_{n+1}$ open in $Q_{n+1}$ such that $x\in U^{xy}_{n+1}$, $y\in V^{xy}_{n+1}$ and $(\overline{U}^{xy}_{n+1}\times \overline{V}^{xy}_{n+1})\cap \Gamma=\emptyset$. As $\overline{V}_n$ is compact, for each $x\in \overline{U}_n$ there is $F_x\subseteq \overline{V}_n$ finite such that $\overline{V}_n\subseteq \bigcup_{y\in F_x}V^{xy}_{n+1}$. Take $V^x_{n+1}=\bigcup_{y\in F_x}V^{xy}_{n+1}$ and $U^x_{n+1}=\bigcap_{y\in F_x}U^{xy}_{n+1}$. Then, $U^x_{n+1}\subseteq P_{n+1}$ is open in $P_{n+1}$, $V^x_{n+1}\subseteq Q_{n+1}$ is open in $Q_{n+1}$, $x\in U^x_{n+1}$, $\overline{V}_n\subseteq V^x_{n+1}$ and $(\overline{U}^x_{n+1}\times \overline{V}^x_{n+1})\cap \Gamma=\emptyset$. As $\overline{U}_n$ is compact, there is $F\subseteq \overline{U}_n$ finite such that $\overline{U}_n\subseteq \bigcup_{x\in F}U^x_{n+1}$. Take $U_{n+1}=\bigcup_{x\in F}U^x_{n+1}$ and $V_{n+1}=\bigcap_{x\in F} V^x_{n+1}$. Then, $U_{n+1}\subseteq P_{n+1}$ is open in $P_{n+1}$, $V_{n+1}\subseteq Q_{n+1}$ is open in $Q_{n+1}$, $\overline{U}_n\subseteq U_{n+1}$, $\overline{V}_n\subseteq V_{n+1}$ and $(\overline{U}_{n+1}\times \overline{V}_{n+1})\cap\Gamma=\emptyset$.
Let $U=\bigcup_{n\in\mathbb{N}} U_n$ and $V=\bigcup_{n\in\mathbb{N}} V_n$. For $n\in\mathbb{N}$, we have $U\cap P_n=\bigcup_{m\geq n} (U_m\cap P_n)$ and $V\cap Q_n=\bigcup_{m\geq n}(V_m\cap Q_n)$. As $P_n\subseteq P_m$ and $Q_n\subseteq Q_m$ for $m>n$, we get that $U_m\cap P_n$ is open in $P_n$ and $V_m\cap Q_n$ is open in $Q_n$, so $U\cap P_n$ is open in $P_n$ and $V\cap Q_n$ is open in $Q_n$ for each $n\in\mathbb{N}$. Therefore, $U\times V$ is open in the product topology and $(a,b)\in U\times V$ with $(U\times V)\cap \Gamma=\emptyset$. As $(a,b)\notin \Gamma$ is arbitrary, $\Gamma$ is closed in the product topology. As $\Gamma$ is arbitrary, we conclude that the global logic topology in $P\times Q$ is the same as the product topology. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{prop} Let $P=\underrightarrow{\lim}\, P_i$ be a piecewise $A${\hyp}hyperdefinable set and $V\subseteq P$ a non{\hyp}empty piecewise $\bigwedge_A${\hyp}definable subset. Note that the subspace topology of $V$ inherited from the $A${\hyp}logic topology of $P$ is the $A${\hyp}logic topology of $V$ given as the piecewise $A${\hyp}hyperdefinable set $\underrightarrow{\lim}\ V\cap P_i$. We conclude by showing that local hyperdefinability is hereditary. \begin{prop} \label{p:subspace locally hyperdefinable} Let $P=\underrightarrow{\lim}\, P_i$ be a locally $A${\hyp}hyperdefinable set and $V\subseteq P$ a piecewise $\bigwedge_A${\hyp}definable subset. Then, $V$ with the induced piecewise hyperdefinable substructure $\underrightarrow{\lim}\ V\cap P_i$ is a locally $A${\hyp}hyperdefinable set. \begin{dem} It suffices to note that, for any topological space $X$ with local covering $\mathcal{C}$ and any $Y\subseteq X$, $\mathcal{C}_{\mid Y}=\{C\cap Y\mathrel{:} C\in\mathcal{C}\}$ is a local covering of the subspace topology. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{prop} \subsection{Spaces of types} Let $P=\rfrac{X}{E}$ be an $A${\hyp}hyperdefinable set and $F\subseteq P\times P$ an $\bigwedge_A${\hyp}definable equivalence relation in $P$. Then, $\rfrac{P}{F}$ is actually identified with the $A${\hyp}hyperdefinable set $\rfrac{X}{{\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}^{-1}_{P\times P}[F]}$ via the canonical bijection \[\eta:\ [x]_{{\mathop{\mbox{\normalfont{\small{\Fontauri{q}}}}}}^{-1}_{P\times P}[F]}\mapsto [[x]_P]_F.\] Furthermore, by the definition of the quotient topologies, this map is a homeomorphism.
Let $P=\underrightarrow{\lim}_{(I,\prec)}(P_i,\varphi_{ji})_{j\succeq i}$ be a piecewise $A${\hyp}hyperdefinable set. Write $P_i=\rfrac{X_i}{E_i}$. Let $F\subseteq P\times P$ be a piecewise $\bigwedge_A${\hyp}definable equivalence relation. Write $F_i\coloneqq F_{\mid P_i\times P_i}$. Define $\widetilde{\varphi}_{ji}:\ \rfrac{P_i}{F_i}\rightarrow \rfrac{P_j}{F_j}$ by $\widetilde{\varphi}_{ji}([x]_{F_i})=[\varphi_{ji}(x)]_{F_j}$. Clearly, they are well{\hyp}defined $\bigwedge_A${\hyp}definable and $1${\hyp}to{\hyp}$1$ functions and $\widetilde{\varphi}_{ii}=\mathrm{id}$ and $\widetilde{\varphi}_{ki}=\widetilde{\varphi}_{kj}\circ\widetilde{\varphi}_{ji}$. Then, we have canonically the piecewise $A${\hyp}hyperdefinable structure in $\rfrac{P}{F}$ given by \[\rfrac{P}{F}\coloneqq \underrightarrow{\lim}_{(I,\prec)}(\rfrac{P_i}{F_i},\widetilde{\varphi}_{ji})_{j\succeq i},\] via the map $\eta:\ [{\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}_{P_i}(x)]_F\mapsto {\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}_{P_i/F_i}([x]_{F_i})$ for $i$ such that $x\in P_i$. Furthermore, topologically, by the definitions of the quotient and the direct limit topologies, this bijection is clearly a homeomorphism.
Let $P=\rfrac{X}{E}$ be an $A^*${\hyp}hyperdefinable set. Consider the space of types $\mathbf{S}_X(A^*)=\{{\mathrm{tp}}(x/A^*)\mathrel{:} x\in X\}$ with the usual topology. We define the equivalence relation $\sim_E$ in $\mathbf{S}_X(A^*)$ as $p\sim_Eq$ if and only if there are $E${\hyp}equivalent realisations of $p(x)$ and $q(y)$. The \emph{space of types} of $P$ over $A^*$ is the space $\mathbf{S}_P(A^*)\coloneqq \rfrac{\mathbf{S}_X(A^*)}{\sim_E}$ with the quotient topology.
On the other hand, by Lemma \ref{l:equivalence relation of equality of types}, $\rfrac{P}{\Delta_P(A^*)}=\{{\mathrm{tp}}(a/A^*)\mathrel{:} a\in P\}$ is an $A^*${\hyp}hyperdefinable set and has its $A^*${\hyp}logic topology. \begin{prop} \label{p:space of types} Let $P=\rfrac{X}{E}$ be an $A^*${\hyp}hyperdefinable set. Then, $\rfrac{P}{\Delta_P(A^*)}$ and $\mathbf{S}_P(A^*)$ are homeomorphic. \begin{dem} Consider the map \[\begin{array}{lccc} f:& \mathbf{S}_{P}(A^*)&\rightarrow & \rfrac{P}{\Delta_P(A^*)}\\ & \left[{\mathrm{tp}}(a^*/A^*)\right]_{\sim_{E}}&\mapsto& {\mathrm{tp}}([a^*]_{E}/A^*).\end{array}\] By saturation, it is well{\hyp}defined. Clearly, it is onto. It is $1${\hyp}to{\hyp}$1$ by Lemma \ref{l:types of hyperimaginaries}. By the definition of the quotient topology, it is clear that $f$ is continuous. As $\mathbf{S}_{X}(A^*)$ is a compact topological space, $\mathbf{S}_{P}(A^*)$ is also compact. On the other hand, $\rfrac{P}{\Delta_P(A^*)}$ is a Hausdorff space. Then, as the domain is compact, the image is Hausdorff and $f$ is a continuous bijection, we conclude that $f$ is a homeomorphism. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{prop} Therefore, for hyperimaginary parameters, we define the space of types of $P$ over $A$, $\mathbf{S}_P(A)$, as $\rfrac{P}{\Delta_P(A)}$ with its global logic topology. \begin{obs} By Lemma \ref{l:infinite definable functions} and Lemma \ref{l:types and infinite definable functions}, any $\bigwedge_A${\hyp}definable function $f:\ P\rightarrow Q$ induces a continuous closed map between the spaces of types. \end{obs} Similarly, let $P=\underrightarrow{\lim}_{I}P_i$ be a piecewise $A^*${\hyp}hyperdefinable set. For each $i\in I$, we have the space of types $\mathbf{S}_{P_i}(A^*)=\{{\mathrm{tp}}(a/A^*)\mathrel{:} a\in P_i\}$. Given $i,j\in I$ with $i\preceq j$, the map $\varphi_{ji}:\ P_i\rightarrow P_j$ induces a continuous and closed $1${\hyp}to{\hyp}$1$ function ${\mathrm{Im}}\ \varphi_{ji}:\ \mathbf{S}_{P_i}(A^*)\rightarrow \mathbf{S}_{P_j}(A^*)$ given by ${\mathrm{tp}}(a/A^*)\mapsto {\mathrm{tp}}(\varphi_{ji}(a)/A^*)$. Clearly, ${\mathrm{Im}}\ \varphi_{kj}\circ{\mathrm{Im}}\ \varphi_{ji}={\mathrm{Im}}\ \varphi_{ki}$ and ${\mathrm{Im}}\ \varphi_{ii}=\mathrm{id}$. Therefore, we have a topological direct sequence $(\mathbf{S}_{P_i}(A^*),{\mathrm{Im}}\ \varphi_{ji})_{j\succeq i}$. The \emph{space of types} of $P$ over $A^*$ is then the direct limit topological space \[\mathbf{S}_{P}(A^*)\coloneqq \underrightarrow{\lim}_I\mathbf{S}_{P_i}(A^*).\] By Proposition \ref{p:space of types}, we have \[\rfrac{P}{\Delta_{P}(A^*)}=\underrightarrow{\lim}\,\rfrac{P_i}{\Delta_{P_i}(A^*)}\cong\underrightarrow{\lim}\,\mathbf{S}_{P_i}(A^*)=\mathbf{S}_{P}(A^*).\]
Therefore, for a piecewise hyperdefinable set $P$, as in the case of hyperdefinable sets, we also define the \emph{space of types} of $P$ over hyperimaginary parameters $A$, $\mathbf{S}_P(A)$, as $\rfrac{P}{\Delta_{P}(A)}$ with its global logic topology. Moreover, we say that a piecewise hyperdefinable set $S$ is a \emph{space of types} if it has a global logic topology.
Let $X$ be a topological space. Recall that two points are \emph{topologically indistinguishable} if they have the same neighbourhoods. A \emph{Kolmogorov map} is an onto continuous and closed map $k:\ X\rightarrow Y$ between topological spaces such that $k(a)$ and $k(b)$ are topologically indistinguishable if and only if $a$ and $b$ are topologically indistinguishable. We will use the following basic characterisation --- see \cite{pirttimaki2019survey} for more details: \begin{lem} \label{l:kolmogorov maps} Let $X$ and $Y$ be topological spaces and $k:\ X\rightarrow Y$ a function. Then, the following are equivalent:
{\textnormal{\textbf{(1)}}} $k$ is a Kolmogorov map.
{\textnormal{\textbf{(2)}}} ${\mathrm{Im}}\, k$ is a lattice isomorphism between the topologies with inverse ${\mathrm{Im}}^{-1}k$.
\end{lem} \begin{obs} When $Y$ is $\mathrm{T}_0$, we call a Kolmogorov map $k:\ X\rightarrow Y$ the \emph{Kolmogorov quotient} of $X$. In that case, up to a homeomorphism, $Y$ is the quotient space $\rfrac{X}{\sim}$, where $\sim$ is the relation of topological indistinguishability, and $k$ is the respective quotient map. \end{obs} As discussed in the introduction, it is useful to note that the quotient map ${\mathrm{tp}}_A:\ P\rightarrow \mathbf{S}_{P}(A)$ given by ${\mathrm{tp}}_A:\ a\mapsto {\mathrm{tp}}(a/A)=\rfrac{a}{\Delta_P(A)}$ is the Kolmogorov quotient between the $A${\hyp}logic topologies. By definition, it also satisfies that ${\mathrm{tp}}^{-1}_A[{\mathrm{tp}}_A[V]]=V$ for any $A${\hyp}invariant set $V$. Thus, when studying $P$, it is typically possible to transfer the discussion via ${\mathrm{tp}}_A$, argue in $\mathbf{S}_P(A)$ and then lift the conclusions to $P$ via ${\mathrm{tp}}^{-1}_A$. Following this procedure, one can usually assume without loss of generality that $P$ is $\mathrm{T}_0$. This technique will be illustrated in the following subsection --- see Theorem \ref{t:uniform structure} and Metrisation Theorem \ref{t:metrisation theorem}. \begin{obs} By Proposition \ref{p:functions logic topology}(1) and Lemma \ref{l:types and piecewise infinite definable functions}, any piecewise bounded $\bigwedge_A${\hyp}definable function induces a continuous map between the spaces of $A${\hyp}types. \end{obs}
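A standard example, recorded here only to fix ideas, is the Kolmogorov quotient of a pseudo{\hyp}metric space. \begin{obs} In a pseudo{\hyp}metric space $(X,\rho)$, two points $a,b$ are topologically indistinguishable if and only if $\rho(a,b)=0$: if $\rho(a,b)=0$, then $b$ lies in every open ball around $a$ and vice versa, so they have the same neighbourhoods, while if $\rho(a,b)>0$, the ball of radius $\rho(a,b)$ around $a$ separates them. Hence the canonical map onto the associated metric space \[X\rightarrow \rfrac{X}{\sim},\qquad a\sim b\ :\Leftrightarrow\ \rho(a,b)=0,\] is the Kolmogorov quotient of $X$. \end{obs}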
\subsection[Metrisation results]{Metrisation results \footnote{This section is inspired by results from \cite{ben2005uncountable}.}} A \emph{uniform space} is a pair $(X,\Phi)$ formed by a non{\hyp}empty set $X$ and a filter $\Phi$ of binary relations in $X$ satisfying that, for every $U\in\Phi$, \[\begin{array}{ll} \mathbf{(i)}& \Delta\subseteq U,\\ \mathbf{(ii)}& U^{-1}\in \Phi \mathrm{\ and\ }\\ \mathbf{(iii)}& \mathrm{there\ is\ }V\in \Phi\mathrm{\ such\ that\ }V\circ V\subseteq U; \end{array}\] where $\Delta\coloneqq\{(x,x)\mathrel{:} x\in X\}$ is the diagonal (equality) relation, $U^{-1}\coloneqq\{(y,x)\mathrel{:} (x,y)\in U\}$ and $W\circ V\coloneqq\{(x,z)\mathrel{:} \exists y\mathrel{} (x,y)\in V,\ (y,z)\in W\}$. The filter $\Phi$ is called the \emph{uniform structure} of the uniform space $(X,\Phi)$.
A \emph{uniformity base} of $(X,\Phi)$ is a filter base of $\Phi$. Note that a filter base $\mathcal{B}$ of reflexive binary relations on $X$ is a uniformity base of some uniform structure if and only if, for any $U\in\mathcal{B}$, there are $V,W\in \mathcal{B}$ such that $V\circ V\subseteq U$ and $W\subseteq U^{-1}$.
Uniform spaces generalise (pseudo{\hyp})metric spaces. Indeed, every pseudo{\hyp}metric $\rho$ induces a uniform space given by the uniformity base $\{\rho^{-1}[0,\varepsilon)\mathrel{:} \varepsilon\in\mathbb{R}_{>0}\}$. We then say that a uniform structure $\Phi$ is \emph{pseudo{\hyp}metrisable} if it arises in this way, i.e. if there is a pseudo{\hyp}metric $\rho$ such that $\{\rho^{-1}[0,\varepsilon)\mathrel{:} \varepsilon\in\mathbb{R}_{>0}\}$ is a uniformity base of $\Phi$.
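The criterion for a uniformity base stated above can be checked directly for this family; we include the routine verification for the reader's convenience. Writing $U_\varepsilon\coloneqq\rho^{-1}[0,\varepsilon)$, reflexivity $\Delta\subseteq U_\varepsilon$ follows from $\rho(x,x)=0$, symmetry of $\rho$ gives $U_\varepsilon^{-1}=U_\varepsilon$, and the triangle inequality yields \[U_{\varepsilon/2}\circ U_{\varepsilon/2}\subseteq U_\varepsilon,\] since $\rho(x,y)<\varepsilon/2$ and $\rho(y,z)<\varepsilon/2$ imply $\rho(x,z)\leq\rho(x,y)+\rho(y,z)<\varepsilon$.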
As in the case of metric spaces, every uniform space has a topology given by the system of local bases of neighbourhoods $\{U(a)\mathrel{:} U\in \Phi\}_{a\in X}$, where $U(a)\coloneqq\{b\mathrel{:} (a,b)\in U\}$. We say that a topology $\mathcal{T}$ \emph{admits a uniform structure} if there is a uniform structure $\Phi$ on $X$ with uniform topology $\mathcal{T}$. Note that a topological space may admit many different uniform structures. Uniform structures inducing the same topology are called \emph{equivalent}. \begin{obs} Uniform structures are the natural abstract context in which to study uniform continuity, uniform convergence, Cauchy sequences and completeness. It is important to note that two equivalent uniform structures may differ in these respects. For instance, on $\mathbb{R}$ the usual metric and the metric $(x,y)\mapsto |\arctan(x)-\arctan(y)|$ induce the same topology, hence equivalent uniform structures, yet only the former is complete. Likewise, a uniformly continuous function with respect to one uniform structure might not be uniformly continuous in another equivalent uniform structure. \end{obs} Recall that a topological space $X$ is \emph{functionally regular} if, for any point $x\in X$ and any closed set $V\subseteq X$ such that $x\notin V$, there is a continuous function $f:\ X\rightarrow [0,1]$ such that $f(x)=0$ and $f_{\mid V}=1$. It is a well{\hyp}known fact from the theory of uniform spaces that a topological space admits a uniform structure if and only if it is functionally regular --- see \cite[Theorem 38.2]{willard1970general} for a proof. Hence, we get the following general result for countably piecewise $A${\hyp}hyperdefinable sets. \begin{prop} \label{p:uniformity of countably piecewise hyperdefinable sets} Every countably piecewise $A${\hyp}hyperdefinable set $P$ with its $A${\hyp}logic topology admits a uniform structure. \begin{dem} We know that $P$ is normal by Proposition \ref{p:normal countably piecewise hyperdefinable}. Let $V$ be piecewise $\bigwedge_A${\hyp}definable and $a\in P\setminus V$. As $\overline{\{a\}}={\mathrm{tp}}(a/A)$, we have that $\overline{\{a\}}$ and $V$ are disjoint. 
By normality, using Urysohn's Lemma \cite[Theorem 33.1]{munkres1999topology}, we conclude that $P$ is functionally regular. Thus, it admits a uniform structure. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{prop} Our aim now is to give a better description of the uniform structure in each piece; specifically, we want to exhibit an explicit uniformity base for the logic topology.
Let $P$ be an $A${\hyp}hyperdefinable set. We say that an $\bigwedge_A${\hyp}definable binary relation $\varepsilon\subseteq P\times P$ is \emph{positive} if $\Delta_P(A)\subseteq \mathring{\varepsilon}$, where the interior is taken in the product topology of the $A${\hyp}logic topology. Write $\mathcal{E}_P(A)$ for the set of all positive $\bigwedge_A${\hyp}definable binary relations of $P$. It may seem odd that we take the interior in the product topology; the following lemma explains why.
\begin{lem} Let $P$ be an $A${\hyp}hyperdefinable set and $\pi:\ P\times P\rightarrow \mathbf{S}_P(A)\times\mathbf{S}_P(A)$ the quotient map $(a,b)\mapsto ({\mathrm{tp}}(a/A),{\mathrm{tp}}(b/A))$. Then, for any $\varepsilon\in\mathcal{E}_P(A)$, $\pi[\varepsilon]\in\mathcal{E}_{\mathbf{S}_P(A)}(A)$, and for any $\varepsilon'\in \mathcal{E}_{\mathbf{S}_P(A)}(A)$, $\pi^{-1}[\varepsilon']\in \mathcal{E}_P(A)$. \end{lem} \begin{obs} When the $A${\hyp}logic topology is the global topology, $\Delta_P(A)$ is precisely the diagonal $\Delta$ and $\mathcal{E}_P(A)$ is the set of closed neighbourhoods of $\Delta$ in the global logic topology of $P\times P$, which coincides with the product topology by Proposition \ref{p:product logic topology}. This in particular applies to $\mathbf{S}_P(A)$. \end{obs} We now prove the main result of this subsection: \begin{teo} \label{t:uniform structure} Let $P$ be an $A${\hyp}hyperdefinable set. Then, $P$ with the $A${\hyp}logic topology admits a unique uniform structure and $\mathcal{E}_P(A)$ is a uniformity base of it. \begin{dem} We start by proving the proposition for $\mathbf{S}_P(A)$ rather than $P$, so consider the set $\mathcal{E}_{\mathbf{S}_P(A)}(A)$ of positive $\bigwedge_A${\hyp}definable binary relations on $\mathbf{S}_P(A)$. By \cite[Theorem 36.19]{willard1970general}, since $\mathbf{S}_P(A)$ with the global logic topology is compact and Hausdorff, it admits one and only one uniform structure, which is precisely the filter of neighbourhoods of the diagonal in the product topology. Since the global logic topology and the product topology on $\mathbf{S}_P(A)\times\mathbf{S}_P(A)$ coincide by Proposition \ref{p:product logic topology}, $\mathcal{E}_{\mathbf{S}_P(A)}(A)$ is the collection of all closed neighbourhoods of the diagonal. 
Thus, by normality of $\mathbf{S}_P(A)\times\mathbf{S}_P(A)$ and closedness of $\Delta$ (which follows from Hausdorffness of $\mathbf{S}_P(A)$), the filter generated by $\mathcal{E}_{\mathbf{S}_P(A)}(A)$ is exactly the collection of all neighbourhoods of $\Delta$, which is precisely the unique uniform structure admitted by $\mathbf{S}_P(A)$. Hence, $\mathcal{E}_{\mathbf{S}_P(A)}(A)$ is a base of the unique uniform structure of $\mathbf{S}_P(A)$.
We now use the quotient maps ${\mathrm{tp}}_A:\ P\rightarrow \mathbf{S}_P(A)$ and $\tau:\ P\times P\rightarrow\mathbf{S}_P(A)\times\mathbf{S}_P(A)$ given by ${\mathrm{tp}}_A(a)={\mathrm{tp}}(a/A)$ and $\tau(a,b)=({\mathrm{tp}}(a/A),{\mathrm{tp}}(b/A))$ to extend the result to $P$.
Let $\Phi$ be any uniform structure on $P$ inducing the $A${\hyp}logic topology. Note that at least one $\Phi$ exists by Proposition \ref{p:uniformity of countably piecewise hyperdefinable sets}. Consider $\tau\Phi\coloneqq \{\tau[U]\mathrel{:} U\in \Phi\}$. Obviously, $\tau\Phi$ is a filter. Also, $\tau[U^{-1}]=\tau[U]^{-1}$ and $\tau[U]\circ\tau[U]\subseteq \tau[U\circ U\circ U]$ for any $U\in\Phi$, so $\tau\Phi$ is a uniform structure on $\mathbf{S}_P(A)$. On the other hand, for any $a\in P$, ${\mathrm{tp}}_A[U(a)]\subseteq\tau[U]({\mathrm{tp}}(a/A))\subseteq {\mathrm{tp}}_A[U\circ U(a)]$. Thus, by the properties of ${\mathrm{tp}}_A$, $\tau\Phi$ induces the global logic topology on $\mathbf{S}_P(A)$. As $\mathbf{S}_P(A)$ with the global logic topology only admits one uniform structure, it follows that $\mathcal{E}_{\mathbf{S}_P(A)}(A)$ is a uniformity base of $\tau\Phi$. In particular, we have $\tau^{-1}\mathcal{E}_{\mathbf{S}_P(A)}(A)\subseteq \Phi$, where $\tau^{-1}\mathcal{E}_{\mathbf{S}_P(A)}(A)\coloneqq \{\tau^{-1}[\varepsilon]\mathrel{:} \varepsilon\in\mathcal{E}_{\mathbf{S}_P(A)}(A)\}$. On the other hand, for $U\in\Phi$, take $V\in\Phi$ such that $V\circ V\circ V\subseteq U$ and find $\varepsilon\in\mathcal{E}_{\mathbf{S}_P(A)}(A)$ such that $\varepsilon\subseteq \tau[V]$. Therefore, $\tau^{-1}[\varepsilon]\subseteq \tau^{-1}[\tau[V]]=\Delta_P(A)\circ V\circ\Delta_P(A)\subseteq V\circ V\circ V\subseteq U$. Hence, $\tau^{-1}\mathcal{E}_{\mathbf{S}_P(A)}(A)$ is a uniformity base of $\Phi$, concluding uniqueness.
We now show that $\mathcal{E}_P(A)$ is a uniformity base of $\Phi$. Since $\tau^{-1}[\varepsilon]\in\mathcal{E}_P(A)$ for any $\varepsilon\in \mathcal{E}_{\mathbf{S}_P(A)}(A)$, it is enough to show that for any $\varepsilon\in \mathcal{E}_P(A)$ there is $\varepsilon'\in\mathcal{E}_{\mathbf{S}_P(A)}(A)$ such that $\tau^{-1}[\varepsilon']\subseteq \varepsilon$. Take $\varepsilon''\in \mathcal{E}_P(A)$ such that $\varepsilon''\circ\varepsilon''\circ\varepsilon''\subseteq \varepsilon$ and set $\varepsilon'=\tau[\varepsilon'']$. Then, $\tau^{-1}[\varepsilon']=\Delta_P(A)\circ\varepsilon''\circ\Delta_P(A)\subseteq \varepsilon$. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{teo} Using compactness, we can get an even smaller uniformity base:
\begin{lem} \label{l:size uniformity base} Let $P$ be an $A${\hyp}hyperdefinable set. There is a family $\{\varepsilon_i\}_{i<|A|+|\mathscr{L}|}$ of $\bigwedge_A${\hyp}definable positive binary relations on $P$ which is a uniformity base of $P$ with the $A${\hyp}logic topology.
\begin{dem} Say $\underline{\Delta}_P(A)=\bigwedge_{i< |A|+|\mathscr{L}|} \varphi_i$ with $\varphi_i\in\mathrm{For}(\mathscr{L}(A^*))$. For each $i<|A|+|\mathscr{L}|$, write $E_i\coloneqq {\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_{P\times P}[\varphi_i(\mathfrak{M})]$ and $U_i\coloneqq P\times P\setminus {\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_{P\times P}[\neg\varphi_i(\mathfrak{M})]$. Since ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_{P\times P}^{-1}[\Delta_P(A)]\cap \neg\varphi_i(\mathfrak{M})=\emptyset$, $\Delta_P(A)\cap {\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_{P\times P}[\neg\varphi_i(\mathfrak{M})]=\emptyset$, so $\Delta_P(A)\subseteq U_i\subseteq E_i$ where $E_i$ is closed and $U_i$ is open in the $A${\hyp}logic topology. By compactness of $P\times P$ in the $A${\hyp}logic topology, we can take $\varepsilon_i\in \mathcal{E}_P(A)$ such that $\varepsilon_i\subseteq U_i\subseteq E_i$. It follows that $\{\varepsilon_i\}_{i<|A|+|\mathscr{L}|}$ is a sequence of $\bigwedge_A${\hyp}definable positive binary relations on $P$ such that $\Delta_P(A)=\bigcap_{i<|A|+|\mathscr{L}|}\varepsilon_i$. Now we show that $\{\varepsilon_i\}_{i<|A|+|\mathscr{L}|}$ is a uniformity base. Take $\varepsilon\in \mathcal{E}_P(A)$ and $U\subseteq \varepsilon$ open in the product topology of $P\times P$ such that $\bigcap_{i<|A|+|\mathscr{L}|} \varepsilon_i=\Delta_P(A)\subseteq U\subseteq \varepsilon$. Since $U$ is open in the product topology, it is also open in the $A${\hyp}logic topology. Then, by compactness, there is $i<|A|+|\mathscr{L}|$ such that $\varepsilon_i\subseteq U\subseteq \varepsilon$, concluding that $\{\varepsilon_i\}_{i<|A|+|\mathscr{L}|}$ is a uniformity base of $P$ with the $A${\hyp}logic topology.\setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{lem} Recall that a uniform structure is pseudo{\hyp}metrisable if and only if it has a countable uniformity base --- see \cite[Theorem 38.3]{willard1970general}. Therefore, we get the following metrisation results: \begin{coro} Let $P$ be an $A${\hyp}hyperdefinable set. Then, $P$ with its $A${\hyp}logic topology is pseudo{\hyp}metrisable if and only if there is a countable family $\{\varepsilon_n\}_{n\in\mathbb{N}}$ of $\bigwedge_A${\hyp}definable positive binary relations on $P$ such that $\Delta_P(A)=\bigcap_{n\in \mathbb{N}} \varepsilon_n$. In particular, $P$ with the $A${\hyp}logic topology is pseudo{\hyp}metrisable if $\mathscr{L}$ and $A$ are countable. \end{coro} \begin{teo}[Metrisation Theorem] \label{t:metrisation theorem} Let $P$ be a locally $A${\hyp}hyperdefinable set of countable cofinality. Then, $P$ with the $A${\hyp}logic topology is pseudo{\hyp}metrisable if and only if each piece is pseudo{\hyp}metrisable. In particular, $P$ with the $A${\hyp}logic topology is pseudo{\hyp}metrisable if $\mathscr{L}$ and $A$ are countable. \begin{dem} Assume that each piece is pseudo{\hyp}metrisable. Taking the quotient map $a\mapsto {\mathrm{tp}}(a/A)$, we get that $\mathbf{S}_P(A)$ is locally metrisable and $\sigma${\hyp}compact. By $\sigma${\hyp}compactness, $\mathbf{S}_P(A)$ is trivially Lindel\"of (i.e. every open cover has a countable subcover). By Proposition \ref{p:normal countably piecewise hyperdefinable}, $\mathbf{S}_P(A)$ is normal and Hausdorff, so it is in particular regular (i.e. any closed set and any point outside it can be separated by open sets). Therefore, by \cite[Theorem 41.5]{munkres1999topology}, $\mathbf{S}_P(A)$ is paracompact (i.e. every open cover has a locally finite open refinement). By Smirnov's Metrisation Theorem \cite[Theorem 42.1]{munkres1999topology}, we conclude that it is metrisable. 
Taking the composition with the quotient map $a\mapsto {\mathrm{tp}}(a/A)$, we conclude that $P$ with the $A${\hyp}logic topology is pseudo{\hyp}metrisable. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{teo} Using uniformities, it is also easy to find small dense subsets:
\begin{coro} Let $P$ be a piecewise $A${\hyp}hyperdefinable set. Then, there is a subset $D\subseteq P$ with $|D|\leq |A|+|\mathscr{L}|+\mathrm{cf}(P)$ which is dense in the $A${\hyp}logic topology. In particular, when $A$ and $\mathscr{L}$ are countable, every countably piecewise $A${\hyp}hyperdefinable set is separable (i.e. has a countable dense subset) with the $A${\hyp}logic topology.
\begin{dem} Say $P=\underrightarrow{\lim}_{i<\mathrm{cf}(P)}\, P_i$. By Lemma \ref{l:size uniformity base}, for each $i$, there is a uniformity base $\mathcal{B}_i\subseteq \mathcal{E}_{P_i}(A)$ with $|\mathcal{B}_i|\leq |A|+|\mathscr{L}|$. By compactness, for each $\varepsilon\in \mathcal{B}_i$, there is $D_{\varepsilon,i}\subseteq P_i$ finite such that $P_i\subseteq\bigcup_{a\in D_{\varepsilon,i}} \varepsilon(a)$. Take $D=\bigcup_{i<\mathrm{cf}(P)} \bigcup_{\varepsilon\in\mathcal{B}_i} D_{\varepsilon,i}$, so $|D|\leq |A|+|\mathscr{L}|+\mathrm{cf}(P)$. Let $U$ be open. Then, $\varepsilon(a)\subseteq U\cap P_i$ for some $a\in P$, $i<\mathrm{cf}(P)$ and $\varepsilon\in\mathcal{B}_i$. Find $\varepsilon_0\in\mathcal{B}_i$ with $\varepsilon_0^{-1}\subseteq \varepsilon$ and $d\in D_{\varepsilon_0,i}\subseteq D$ with $a\in\varepsilon_0(d)$, so $d\in\varepsilon(a)\subseteq U$, concluding $D\cap U\neq \emptyset$. As $U$ is arbitrary, $D$ is dense. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{coro}
Arguing similarly for global logic topologies, we can reduce the number of parameters from $2^{|A|+|\mathscr{L}|}$ to $|A|+|\mathscr{L}|+\mathrm{cf}(P)$.
\begin{prop}\label{p:reduce number of parameters} Let $P$ be a piecewise $A${\hyp}hyperdefinable set with a global logic topology. Then, there is $B\subseteq P\cup A$ with $|B|\leq |A|+|\mathscr{L}|+\mathrm{cf}(P)$ such that the $B${\hyp}logic topology of $P$ is its global logic topology.
\begin{dem} Say $P=\underrightarrow{\lim}_{i\in I}\, P_i$ and $P_i=\rfrac{X_i}{E_i}$ with $|I|=\mathrm{cf}(P)$. Write $\lambda=|\mathscr{L}|+|A|+\mathrm{cf}(P)$. For $i\in I$, $\Delta_i\coloneqq \{(x,x)\mathrel{:} x\in P_i\}$ is $\bigwedge_{A^*}${\hyp}definable as $\underline{\Delta}_i=\underline{E}_i$. Say $\underline{\Delta}_i=\bigwedge_{j\in \lambda} \varphi_j(x,y)$ with $\varphi_j(x,y)\in\mathrm{For}(\mathscr{L}(A^*))$. Write $V_j\coloneqq {\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_{P_i\times P_i}[\varphi_j(\mathfrak{M})]$ and $U_j\coloneqq P_i\times P_i\setminus {\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_{P_i\times P_i}[\neg\varphi_j(\mathfrak{M})]$. Then, $\Delta_i\subseteq U_j\subseteq V_j$ where $V_j$ is closed and $U_j$ is open in the global logic topology of $P_i\times P_i$. By compactness of $P_i\times P_i$, there is a positive $\bigwedge${\hyp}definable binary relation $\varepsilon_j$ such that $\varepsilon_j\subseteq U_j\subseteq V_j$. It follows that $\mathcal{B}_i\coloneqq \{\varepsilon_j\}_{j\in \lambda}$ forms a uniformity base of $P_i$ with the global logic topology.
By compactness of $P_i$, for each $\varepsilon\in \mathcal{B}_i$, there is $D_{\varepsilon,i}\subseteq P_i$ finite such that $P_i\subseteq\bigcup_{a\in D_{\varepsilon,i}} \varepsilon(a)$. Take $D_i=\bigcup_{\varepsilon\in\mathcal{B}_i} D_{\varepsilon,i}$ and $D=\bigcup_{i\in I}D_i$, so $|D|\leq \lambda$. It follows that $D_i$ is dense in the global logic topology of $P_i$.
Take $B=D\cup A\subseteq \mathrm{bdd}(A)$, so $|B|\leq \lambda$. We claim that the $B${\hyp}logic topology of $P$ is its global logic topology. Indeed, take $a\in P$ and $\sigma\in \mathrm{Aut}(\mathfrak{M}/B)$ arbitrary and, aiming for a contradiction, suppose $\sigma(a)\neq a$, so $\sigma^{-1}(a)\neq a$. Pick $i\in I$ such that $a\in P_i$. Note that $\sigma[P_i]=P_i$. By Hausdorffness of $P_i$ in the global logic topology, there are $U$ and $V$ such that $a\in U\subseteq V\subseteq P_i$ with $\sigma^{-1}(a)\notin V$, where $U$ is open and $V$ is closed in the global logic topology of $P_i$. Now, $D_i\cap U=\sigma[D_i\cap U]\subseteq \sigma[V]$. As $U$ is open and $D_i$ is dense in $P_i$, $U\cap D_i$ is dense in $U$. As $\sigma[V]$ is closed in the global logic topology, we conclude $a\in U\subseteq \sigma[V]$. Therefore, $\sigma^{-1}(a)\in V$, getting a contradiction. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{prop} \subsection{Examples} Most of the examples of locally hyperdefinable sets come from the following two basic remarks, which were, in fact, already known. First, note that any piecewise $A${\hyp}definable set is trivially locally $A${\hyp}definable. Secondly, note that if $P$ is locally $A${\hyp}hyperdefinable and $E$ is a piecewise bounded $\bigwedge_A${\hyp}definable equivalence relation on $P$, then $\rfrac{P}{E}$ is locally hyperdefinable by Proposition \ref{p:mapping locally hyperdefinable sets}.
\begin{ejem}\label{e:example 1} A classical example is the field of real numbers $\mathbb{R}$ with its usual topology, which is a locally hyperdefinable set of countable cofinality in the theory of real closed fields. It is explicitly given as $\rfrac{O(1)}{o(1)}$ with $O(1)=\bigcup_{n\in\mathbb{N}} [-n,n]$ and $o(1)=\bigcap_{n\in\mathbb{N}} [-\rfrac{1}{n},\rfrac{1}{n}]$. Furthermore, up to isomorphism of piecewise hyperdefinable sets, $\rfrac{O(1)}{o(1)}$ is the unique representation of $\mathbb{R}$ as a locally hyperdefinable set. Indeed, by Proposition \ref{p:functions logic topology}(8), any two locally hyperdefinable sets homeomorphic with the logic topologies are isomorphic as piecewise hyperdefinable sets.
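Explicitly, the identification is realised by the standard part map: every bounded element is infinitesimally close to a unique real. The following sketch spells this out, using only standard facts about real closed fields:

```latex
% The identification R = O(1)/o(1) via the standard part map:
% every x in O(1) has a unique standard real infinitely close to it.
\[
\mathrm{st}\colon\ O(1)\longrightarrow \mathbb{R},\qquad
\mathrm{st}(x)=\sup\{q\in\mathbb{Q}\mathrel{:} q\leq x\},
\]
% and st(x)=st(y) exactly when x-y is infinitesimal, so
\[
\mathrm{st}(x)=\mathrm{st}(y)\iff x-y\in o(1),
\qquad\text{whence}\qquad
\rfrac{O(1)}{o(1)}\;\cong\;\mathbb{R}.
\]
```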
However, note that the real numbers with the usual topology can be represented as a piecewise hyperdefinable set (non{\hyp}locally hyperdefinable) in other non{\hyp}isomorphic ways. For instance, consider the direct system of all compact subsets of $\rfrac{O(1)}{o(1)}$ with empty interior in the global logic topology with the natural inclusion maps. Using that the topology is first{\hyp}countable, it is easy to note that this is a coherent covering. Therefore, the resulting direct limit with the global logic topology is a piecewise hyperdefinable set homeomorphic to $\mathbb{R}$ with the usual topology. However, it is not locally hyperdefinable, so it is not isomorphic to $\rfrac{O(1)}{o(1)}$. \end{ejem}
\begin{ejem} \label{e:example 2} More generally, any topological manifold $X$ (i.e. a locally Euclidean Hausdorff topological space) is a locally hyperdefinable set in the usual theory of real closed fields. Indeed, for any $m$, $\mathbb{R}^m$ is locally hyperdefinable, so every compact subset of $\mathbb{R}^m$ is hyperdefinable, concluding that the compact charts of $X$ are hyperdefinable. Using now Proposition \ref{p:functions logic topology 2}, the chart changing maps are $\bigwedge${\hyp}definable, so the finite unions of compact chart neighbourhoods are hyperdefinable. It follows then that the whole manifold is locally hyperdefinable. If it is also second countable, then it has countable cofinality too. \end{ejem}
\begin{ejem} \label{e:example 3} As $\mathbb{Q}$ with the usual metric topology is not locally compact, it cannot be given as a locally hyperdefinable set. However, it is possible to give it as a piecewise hyperdefinable set in the theory of real closed fields. Indeed, using that $\mathbb{Q}$ is first countable, it is clear that $\mathbb{Q}$ is compactly generated. Every compact subset of $\mathbb{Q}$ is hyperdefinable, being $\bigwedge${\hyp}definable in $\mathbb{R}=\rfrac{O(1)}{o(1)}$. In other words, $\mathbb{Q}$ is the direct limit of all the $\bigwedge${\hyp}definable subsets of $\mathbb{R}=\rfrac{O(1)}{o(1)}$ contained in $\mathbb{Q}$ with the standard inclusion maps. Note that it has uncountable cofinality. \end{ejem}
\begin{ejem} \label{e:example 4} More generally, any first countable Hausdorff topological space $X$ can be given as a piecewise hyperdefinable set with a global logic topology in the theory of real closed fields. Indeed, let $\mathcal{C}$ be the family of countable compact subsets of $X$. As $X$ is first countable, $\mathcal{C}$ is coherent. Now, by Mazurkiewicz{\hyp}Sierpi\'{n}ski Theorem \cite[Theorem 4]{milliet2011remark}, every countable compact Hausdorff topological space is homeomorphic to a countable successor ordinal with the order topology. On the other hand, by induction, we easily see that every countable ordinal with the order topology is homeomorphic to a subset of $\mathbb{Q}$. Therefore, for every $A\in \mathcal{C}$, there is a compact subset $P_A\subseteq \mathbb{Q}$ such that $A$ with the subspace topology is homeomorphic to $P_A$ with the subspace topology. For each $A\in\mathcal{C}$, pick $\eta_A: A\rightarrow P_A$ a homeomorphism. For $A,B\in \mathcal{C}$ with $A\subseteq B$, take $\varphi_{BA}=\eta_B\circ \eta_A^{-1}:\ P_{A}\rightarrow P_{B}$. Now, as noted in the previous example, every compact subset of $\mathbb{Q}$ is homeomorphic to a hyperdefinable set with a global logic topology in the theory of real closed fields. Also, for $A,B\in\mathcal{C}$ with $A\subseteq B$, $\varphi_{BA}$ is continuous, so it is $\bigwedge${\hyp}definable by Proposition \ref{p:functions logic topology 2}. Then, we conclude that $X$ is homeomorphic to $\underrightarrow{\lim}_{A\in \mathcal{C}}\, P_A$ with the global logic topology. \end{ejem}
\begin{ejem} \label{e:example 5} For a countably piecewise hyperdefinable set that is not locally hyperdefinable, consider the infinite countable rose, i.e. the infinite countable bouquet of circles. This is $\rfrac{\mathbb{R}}{\sim}$ with the equivalence relation $x\sim y\Leftrightarrow x,y\in\mathbb{Z}\vee x=y$. Note that $\mathbb{R}$ is countably piecewise hyperdefinable and the relation $\sim$ is piecewise $\bigwedge${\hyp}definable in the theory of real closed fields. Thus, the infinite countable rose is a countably piecewise hyperdefinable set. It is not locally hyperdefinable and not first countable, so it is not pseudo{\hyp}metrisable. \end{ejem}
\begin{ejem} \label{e:example 6} For a piecewise hyperdefinable set which is not normal, consider $\mathbb{R}$ with the rational sequence topology. For $i\in\mathbb{R}$, pick a sequence $(i_n)_{n\in\mathbb{N}}$ of rational numbers converging to $i$. When $i\in \mathbb{Q}$, take $i_n=i$ for every $n\in\mathbb{N}$. Take $P_i=\{i_n\mathrel{:} n\in\mathbb{N}\}\cup\{i\}$ for $i\in\mathbb{R}$. For a finite subset $I\subseteq \mathbb{R}$, take $P_I=\bigcup_{i\in I} P_i$. Note that, for any finite $I\subseteq \mathbb{R}$, $P_I$ is compact in $\mathbb{R}$ with the usual topology, so each $P_I$ is hyperdefinable in the language of real closed fields. Take $P=\underrightarrow{\lim}\ P_I$ with the usual inclusion maps. We now check that $P$ with the global logic topology is homeomorphic to $\mathbb{R}$ with the rational sequence topology, i.e. the topology given by the local bases of open neighbourhoods $U_n(i)\coloneqq \{i_k\mathrel{:} k\geq n\}\cup \{i\}$ for $i\in\mathbb{R}$.
Note first that $U_n(i)$ is open in $P$ for each $i\in\mathbb{R}$ and $n\in\mathbb{N}$. Obviously, $U_n(i)\cap P_i$ is open in $P_i$. For $j\neq i$, as $(i_n)_{n\in\mathbb{N}}$ converges to $i$ and $(j_n)_{n\in\mathbb{N}}$ converges to $j$, we conclude that there is $m\in\mathbb{N}$ such that $U_n(i)\cap P_j\subseteq \{i_k\mathrel{:} k\leq m\}$. Thus, $U_n(i)\cap P_j$ is open in $P_j$.
On the other hand, suppose $U\subseteq P$ is open in $P$. Take $i\in U$. As $U\cap P_i$ is open in $P_i$, there is $n\in\mathbb{N}$ such that $U_n(i)\subseteq U\cap P_i$, so $U_n(i)\subseteq U$. As $i\in U$ is arbitrary, we conclude that $U$ is open in the rational sequence topology.
By Jones's Lemma \cite[Lemma 15.2]{willard1970general}, $P$ is not normal. Indeed, $\mathbb{Q}$ is dense, $\mathbb{R}\setminus \mathbb{Q}$ is discrete and closed, and $|\mathbb{R}\setminus\mathbb{Q}|\geq 2^{|\mathbb{Q}|}$. \end{ejem}
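The cardinality check behind this application of Jones's Lemma is routine and can be spelled out as follows; we add it for completeness:

```latex
% Jones's Lemma: in a separable normal space, every closed discrete
% subspace D satisfies 2^{|D|} <= 2^{aleph_0}.  In the previous
% example, with Q dense and D = R \ Q closed discrete, this fails:
\[
2^{|\mathbb{R}\setminus\mathbb{Q}|}
 \;=\; 2^{2^{\aleph_0}}
 \;>\; 2^{\aleph_0}
 \;=\; 2^{|\mathbb{Q}|},
\]
% by Cantor's theorem, so P cannot be normal.
```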
For a counterexample where the product topology of global logic topologies is not the global logic topology on the product, see Example \ref{e:example no topological group}. We have no counterexample of a piecewise hyperdefinable set with a non{\hyp}Hausdorff global logic topology, nor of a countably piecewise $A${\hyp}hyperdefinable set that is not locally $A${\hyp}hyperdefinable but has a locally compact $A${\hyp}logic topology.
\section{Piecewise hyperdefinable groups} In this section we study the particular case of piecewise hyperdefinable groups. Our main aim is to find necessary and sufficient conditions for them to be locally compact topological groups with the logic topology. Then, we will discuss how to extend the classical Gleason{\hyp}Yamabe Theorem and some related results to piecewise hyperdefinable groups. We start with an introduction recalling some fundamental facts about topological groups.
\subsection{Preliminaries on topological groups} Recall that a \emph{topological group} is a Hausdorff topological space with a group structure whose group operations are continuous. \begin{obss} \label{o:remarks topological groups} Let $G$ be a topological group. The following are the basic facts that we need:
{\textnormal{\textbf{(1)}}} Let $H\trianglelefteq G$ be closed. Then, $\rfrac{G}{H}$ is a topological group and $\pi_{G/H}:\ G\rightarrow \rfrac{G}{H}$ is a continuous and open surjective homomorphism. Furthermore, if $H$ is compact, then $\pi_{G/H}$ is also a closed map and has compact fibres. In particular, it is proper by \cite[Theorem 3.7.2]{engelking1989general}.
{\textnormal{\textbf{(2)}}} Let $H\leq G$ be an open subgroup. Then, $H$ is also closed.
{\textnormal{\textbf{(3)}}} The connected component $G^0$ of the identity is a normal closed subgroup of $G$. If $G$ is locally connected (e.g. a Lie group), then $G^0$ is also open.
{\textnormal{\textbf{(4)}}} {\textnormal{\textbf{(Closed Isomorphism Theorem)}}} Let $f:\ G\rightarrow H$ be a continuous and closed surjective homomorphism between topological groups. Then, for any closed subgroup $S\trianglelefteq K\coloneqq \ker(f)$ with $S\trianglelefteq G$, the map $f_S:\ \rfrac{G}{S}\rightarrow H$ defined by $f=f_S\circ\pi_{G/S}$ is a continuous, closed and open homomorphism. In particular, $f_K:\ \rfrac{G}{K}\rightarrow H$ is an isomorphism of topological groups and $f$ is an open map.
\end{obss} In the theory of topological groups, a \emph{Yamabe pair} of a topological group $G$ is a pair $(K,H)$ with $K\trianglelefteq H\leq G$ such that $K$ is compact, $H$ is open and $L=\rfrac{H}{K}$ is a finite dimensional Lie group. We say that $H$ is the \emph{domain}, $K$ is the \emph{kernel} and $L$ is the \emph{Lie core}. We write $\pi_{H/K}:\ H\rightarrow L$ for the quotient map. A Lie group is a \emph{Lie core} of $G$ if it is isomorphic, as topological group, to the Lie core of some Yamabe pair of $G$. \begin{obs} Let $G$ be a topological group and suppose that it has a Yamabe pair $(K,H)$ with Lie core $L$. By Remark \ref{o:remarks topological groups}(1), $\pi_{H/K}:\ H\rightarrow L$ is a continuous, open, closed and proper surjective group homomorphism. In particular, as $L$ is locally compact, $H$ must be a locally compact topological group too. Since $H$ is open in $G$, we conclude that $G$ is locally compact as well. \end{obs} The following celebrated theorem, claiming that every locally compact topological group has Lie cores, is usually considered the solution to Hilbert's fifth problem. \begin{teo}[Gleason{\hyp}Yamabe] \label{t:gleason yamabe} Let $G$ be a locally compact topological group and $U\subseteq G$ a neighbourhood of the identity. Then, $G$ has a Yamabe pair $(K,H)$ with $K\subseteq U$. In particular, a topological group has a Lie core if and only if it is locally compact. \begin{dem} The original papers are \cite{gleason1951structure} and \cite{yamabe1953generalization}. A complete proof can be found in \cite{tao2014hilbert}. Model{\hyp}theoretic treatments can be found in \cite{hirschfeld1990nonstandard} and \cite{van2015hilbert}. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{teo} In this paper we mainly use this classical version of Gleason{\hyp}Yamabe Theorem \ref{t:gleason yamabe}. Alternatively, we can use the following variation proved in \cite[Theorem 1.25]{carolino2015structure} which provides some extra control over some parameters. Recall that two subsets of a group are \emph{$k${\hyp}commensurable} if $k$ left translates of each one cover the other. Recall that a \emph{$k${\hyp}approximate subgroup} is a symmetric subset $X$ which is $k${\hyp}commensurable with its set of pairwise products $X^2$. \begin{teo}[Gleason{\hyp}Yamabe{\hyp}Carolino Theorem] \label{t:gleason yamabe carolino} There are functions $c:\ \mathbb{N}\rightarrow \mathbb{N}$ and $d:\ \mathbb{N}\rightarrow\mathbb{N}$ such that the following holds:
Let $G$ be a locally compact topological group and $U\subseteq G$ an open precompact $k${\hyp}approximate subgroup for some $k\in\mathbb{N}$. Then, $G$ has a Yamabe pair $(K,H)$ with $K\subseteq U^4$ such that $\rfrac{H}{K}$ is a Lie group of dimension at most $d(k)$ and $H\cap U^4$ generates $H$ and is $c(k)${\hyp}commensurable to $U$. \end{teo} A Yamabe pair $(K',H')$ is \emph{smaller than or equal to} $(K,H)$ if $K\trianglelefteq K'\trianglelefteq H'\leq H$. A Yamabe pair is \emph{minimal} if it has no smaller ones. A Lie core is \emph{minimal} if it is the Lie core of some minimal Yamabe pair. In other words, let us define an \emph{aperiodic} topological group to be a topological group without non{\hyp}trivial compact normal subgroups. Then, by Remark \ref{o:remarks topological groups}(1), a Lie core is minimal if and only if it is an aperiodic connected Lie core. The following basic proposition implies that every Yamabe pair has a smaller or equal minimal Yamabe pair. \begin{lem} \label{l:maximal compact normal subgroup} Every connected Lie group has a unique maximal compact normal subgroup. \begin{dem} By Cartan{\hyp}Iwasawa{\hyp}Malcev Theorem \cite[Chapter XV Theorem 3.1]{hochschild1965structure}, there is a maximal compact subgroup $T$, and every compact subgroup is contained in a conjugate of it. Hence, $\bigcap_{g\in G} gTg^{-1}$ is the unique maximal compact normal subgroup. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{lem} \begin{obs} A different proof of the previous result, without using Cartan{\hyp}Iwasawa{\hyp}Malcev Theorem, was explained in \cite{hrushovski2011stable}. \end{obs} \begin{coro} \label{c:minimal yamabe pairs} Let $G$ be a topological group and $(K_1,H_1)$ a Yamabe pair. Then, there is a minimal Yamabe pair $(K,H)$ smaller than or equal to $(K_1,H_1)$. Furthermore, for any clopen subset $U$ containing $K_1$, we have $H\subseteq U^2$. \begin{dem} Write $\pi_1\coloneqq \pi_{H_1/K_1}:\ H_1\rightarrow L_1$ for the quotient map to the Lie core of $(K_1,H_1)$. Let $\widetilde{L}\subseteq L_1$ be the topological connected component of the identity. As Lie groups are locally connected, $\widetilde{L}$ is open by Remarks \ref{o:remarks topological groups}(3). Let $\widetilde{K}\trianglelefteq \widetilde{L}$ be its maximal compact normal subgroup, given by Lemma \ref{l:maximal compact normal subgroup}. Take $H\coloneqq \pi^{-1}_1[\widetilde{L}]$ and $K\coloneqq \pi^{-1}_1[\widetilde{K}]$. Then, by Remark \ref{o:remarks topological groups}(1) and the Closed Isomorphism Theorem (Remark \ref{o:remarks topological groups}(4)), $(K,H)$ is a minimal Yamabe pair of $G$ smaller than or equal to $(K_1,H_1)$. Finally, if $U$ is clopen and $K_1\subseteq U$, $\pi_1[U]$ is clopen with $1\in \pi_1[U]$ as $\pi_1$ is open and closed by Remark \ref{o:remarks topological groups}(1). Thus, $\widetilde{L}\subseteq \pi_1[U]$ as $\widetilde{L}$ is connected, concluding that $H\subseteq UK_1\subseteq U^2$. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{coro} Two Yamabe pairs $(K,H)$ and $(K',H')$ of $G$ with Lie cores $\pi\coloneqq \pi_{H/K}:\ H\rightarrow L$ and $\pi'\coloneqq \pi_{H'/K'}:\ H'\rightarrow L'$ are \emph{equivalent} if the map $\eta:\ \pi(h)\mapsto \pi'(h)$ for $h\in H\cap H'$ is a well{\hyp}defined isomorphism of topological groups between $L$ and $L'$. Equivalently, by the Closed Isomorphism Theorem (Remark \ref{o:remarks topological groups}(4)), $(K,H)$ and $(K',H')$ are equivalent if and only if $H\cap K'\subseteq K$, $H'\cap K\subseteq K'$, $(H\cap H')K=H$ and $(H\cap H')K'=H'$. It follows that minimal Yamabe pairs are unique up to equivalence: \begin{lem} \label{l:lemma uniqueness minimal lie core} Let $G$ be a locally compact topological group and $(K_1,H_1)$ and $(K_2,H_2)$ two minimal Yamabe pairs with Lie cores $\pi_1\coloneqq \pi_{H_1/K_1}:\ H_1\rightarrow L_1$ and $\pi_2\coloneqq \pi_{H_2/K_2}:\ H_2\rightarrow L_2$:
{\textnormal{\textbf{(1)}}} Let $H'\leq H_1$ be an open subgroup. Then, $(K_1\cap H',H')$ is a minimal Yamabe pair of $G$ equivalent to $(K_1,H_1)$ and $[K_1:K_1\cap H']$ is finite.
{\textnormal{\textbf{(2)}}} $K_1\subseteq K_2\Leftrightarrow H_1\subseteq H_2\Leftrightarrow K_2\cap H_1=K_1$. In particular, $K_1=K_2$ if and only if $H_1=H_2$.
{\textnormal{\textbf{(3)}}} $(K_1\cap K_2,H_1\cap H_2)$ is a minimal Yamabe pair with $K_1\cap H_2=K_1\cap K_2=K_2\cap H_1$. In particular, $[K_1:K_1\cap K_2]$ and $[K_2:K_1\cap K_2]$ are finite. \begin{dem} {\textnormal{\textbf{(1)}}} By connectedness, $\pi_1[H']=L_1$. Thus, by the Closed Isomorphism Theorem (Remark \ref{o:remarks topological groups}(4)), we conclude that $(K_1\cap H',H')$ is a minimal Yamabe pair of $G$ equivalent to $(K_1,H_1)$. Finally, as $K_1\cap H'$ is an open subset of $K_1$, by compactness, $[K_1:K_1\cap H']$ is finite.
{\textnormal{\textbf{(2)}}} We already have $H_1\subseteq H_2\Rightarrow K_2\cap H_1=K_1\Rightarrow K_1\subseteq K_2$ by {\textnormal{\textbf{(1)}}}. On the other hand, by connectedness, $\pi_1[H_1\cap H_2]=L_1$. If $K_1\subseteq K_2$, then $K_1\leq H_1\cap H_2$, so $H_1=\pi^{-1}_1[\pi_1[H_1\cap H_2]]=H_1\cap H_2\subseteq H_2$.
{\textnormal{\textbf{(3)}}} By {\textnormal{\textbf{(1)}}}, $(K_1\cap H_2,H_1\cap H_2)$ and $(K_2\cap H_1,H_2\cap H_1)$ are minimal Yamabe pairs with $[K_1:K_1\cap H_2]$ and $[K_2:H_1\cap K_2]$ finite. By point {\textnormal{\textbf{(2)}}}, $K_1\cap K_2=K_1\cap H_2=K_2\cap H_1$. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{lem} As an immediate consequence of the previous lemma we get the following corollary: \begin{coro} \label{c:uniqueness minimal lie core} Every locally compact topological group has a unique minimal Yamabe pair up to equivalence. \end{coro} The previous uniqueness statement implies that the minimal Lie core $L$ is unique up to isomorphism of topological groups, but it is far stronger than that. Indeed, it also says that there is a \emph{global minimal Lie core map} extending all the minimal Yamabe pairs which is unique up to isomorphisms of $L$: \begin{prop}\label{p:global minimal lie core map} Let $G$ be a locally compact topological group and $L$ its minimal Lie core. Let $D_L$ be the union of all the domains of minimal Yamabe pairs of $G$. Then, there is a map $\pi_L:\ D_L\rightarrow L$ such that, for any minimal Yamabe pair $(K,H)$ of $G$, $\pi_{L\mid H}$ is a continuous, closed, open and proper surjective homomorphism with kernel $K$. Furthermore, $\pi_L$ is unique up to isomorphisms of $L$. \begin{dem} Let $\mathcal{Y}$ be the set of all minimal Yamabe pairs of $G$ and fix any minimal Yamabe pair $(K_0,H_0)\in \mathcal{Y}$ and $L=\rfrac{H_0}{K_0}$. Now, for any minimal Yamabe pair $(K,H)\in\mathcal{Y}$, we define $\pi_{L\mid H}\coloneqq \eta_{(K,H)}\circ \pi_{H/K}:\ H\rightarrow L$ where $\eta_{(K,H)}:\ \rfrac{H}{K}\rightarrow L$ is the canonical isomorphism given by the equivalence between $(K,H)$ and $(K_0,H_0)$. Take any $(K,H),(K',H')\in \mathcal{Y}$. By Lemma \ref{l:lemma uniqueness minimal lie core} and Corollary \ref{c:uniqueness minimal lie core}, $(K\cap K',H\cap H')$ is a minimal Yamabe pair and equivalent to $(K,H)$, $(K',H')$ and $(K_0,H_0)$. For any $h\in H\cap H'$, as $(K\cap K',H\cap H')$ is equivalent to $(K_0,H_0)$, there is $h_0\in H\cap H'\cap H_0$ such that $h^{-1}h_0\in K\cap K'\cap K_0$. Thus, $\pi_{L\mid H}(h)=\pi_{L\mid H}(h_0)=\pi_{H_0/K_0}(h_0)=\pi_{L\mid H'}(h_0)=\pi_{L\mid H'}(h)$, so the definitions agree on $H\cap H'$. 
Take $D_L=\bigcup_{\mathcal{Y}} H$ and define $\pi_L=\bigcup_{\mathcal{Y}} \pi_{L\mid H}$. By Remark \ref{o:remarks topological groups}(1), we get a global map $\pi_L:\ D_L\rightarrow L$ such that $\pi_{L\mid H}:\ H\rightarrow L$ is a continuous, closed, open and proper surjective homomorphism with kernel $K$ for each minimal Yamabe pair $(K,H)$.
Suppose $\pi'_L:\ D_L\rightarrow L$ is any other map such that $\pi'_{L\mid H}$ is a continuous, closed, open and proper surjective group homomorphism with kernel $K$ for any minimal Yamabe pair $(K,H)$. Then, by the Closed Isomorphism Theorem (Remark \ref{o:remarks topological groups}(4)), we get an isomorphism $\eta:\ \rfrac{H_0}{K_0}\rightarrow L$ such that $\pi'_{L\mid H_0}=\eta\circ\pi_{L\mid H_0}$. Now, for any $(K,H)\in\mathcal{Y}$ and $g\in H$, there is $g_0\in H\cap H_0$ such that $g\in g_0K$. Then, $\pi'_L(g)=\pi'_L(g_0)=\eta\circ\pi_L(g_0)=\eta\circ\pi_L(g)$, concluding that $\pi'_L=\eta\circ\pi_L$. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{prop} Note that $D_L=\mathrm{Dom}(\pi_L)$ is the union of all the domains of minimal Yamabe pairs of $G$ and $\ker(\pi_L)\coloneqq \pi_L^{-1}(1)$ is the union of all the kernels of minimal Yamabe pairs of $G$. Consequently, $D_L$ and $\ker(\pi_L)$ are invariant under any automorphism of $G$ as a topological group. In particular, both are normal sets (i.e. conjugate invariant sets).
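As an illustration of Yamabe pairs and minimality, consider the locally compact group $G=\mathbb{Z}_p\times\mathbb{R}$ (a standard example, included here for orientation and not taken from the surrounding text):

```latex
% For G = Z_p x R, each K_n := p^n Z_p x {0} is a compact normal
% subgroup and H := G is open, so (K_n, G) is a Yamabe pair with
% Lie core the one-dimensional Lie group
\[
\rfrac{G}{K_n}\;\cong\;(\mathbb{Z}/p^n\mathbb{Z})\times\mathbb{R}.
\]
% Among these, (Z_p x {0}, G) is a minimal Yamabe pair: its Lie
% core R is connected and has no non-trivial compact subgroups,
% i.e. it is an aperiodic connected Lie core.
```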
Among all the minimal Yamabe pairs, it could be natural to look for the ones with maximal domain. We have the following criterion. \begin{prop} \label{p:minimal yamabe pairs with maximal domain} Let $G$ be a locally compact topological group and $(K,H)$ a minimal Yamabe pair of $G$. Let $K'$ be a compact subgroup of $G$ with $K\leq K'$ such that $H$ normalises $K'$ (i.e. $hK'=K'h$ for any $h\in H$). Then, $K=H\cap K'$, $[K':K]$ is finite and $(K',H')$ is a minimal Yamabe pair of $G$ with $H'=K'H$. Furthermore, $H'$ is a finite union of cosets of $H$.
In particular, $(K,H)$ is a minimal Yamabe pair with maximal domain if and only if there is no compact subgroup $K'\leq G$ normalised by $H$ with $K< K'$. \begin{dem} Clearly, $K\leq K'\cap H\trianglelefteq H$ is compact. Then, as $K$ is the maximal compact normal subgroup of $H$, we conclude $K'\cap H=K$ and $\rfrac{H}{K'\cap H}=\rfrac{H}{K}=L$. As $K=K'\cap H$ is open in $K'$ compact, $[K':K]$ is finite. Take $\Delta\subseteq K'$ finite such that $K'=\Delta K$. Note that $\Delta H=K'H$ is a clopen subgroup and $K'\trianglelefteq K'H$. Write $H'=\Delta H$. Then, $\pi_{H'/K'\ \mid H}:\ H\rightarrow \rfrac{H'}{K'}$ is a continuous and closed onto homomorphism. Therefore, by the Closed Isomorphism Theorem (Remark \ref{o:remarks topological groups}(4)), $\rfrac{H'}{K'}$ and $\rfrac{H}{K'\cap H}=L$ are isomorphic. That concludes that $(K',H')$ is also a minimal Yamabe pair of $G$. Using also Lemma \ref{l:lemma uniqueness minimal lie core}(1,2), we conclude that this gives a necessary and sufficient condition for the maximality of the domain. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{prop} Similarly, it is natural to look at minimal Yamabe pairs with minimal kernel. In this case, this question is related to the connected component of $G$.
Recall that the \emph{quasicomponent} of a point in a topological space is the intersection of all its clopen neighbourhoods. By definition, quasicomponents are closed sets containing the connected components. In locally connected spaces, connected components are clopen, so quasicomponents and connected components coincide. Similarly, in every compact Hausdorff space, connected components and quasicomponents coincide \cite[Lemma 29.6]{willard1970general}. In general, however, they may be different --- even for locally compact Hausdorff topological spaces.
In a topological group $G$, as the inversion, the conjugations and the translations are homeomorphisms, the connected component $G^0$ and the quasicomponent $G^{\mathrm{qs}}$ of the identity are both normal closed subgroups of $G$. When $G$ is locally compact, it is a well{\hyp}known fact that $G^0=G^{\mathrm{qs}}$ is the intersection of all the open subgroups of $G$ \cite[Theorem 2.1.4(b)]{dikranjan1990topological}.
Hence, we conclude the following criterion for the existence of a minimal Yamabe pair with minimal kernel. \begin{prop} \label{p:minimal yamabe pair with minimal domain} Let $G$ be a locally compact topological group. Then, there is a minimal Yamabe pair with minimal kernel if and only if $G^0$ is open (i.e. $G$ is locally connected). Furthermore, in that case, for any other minimal Yamabe pair $(K,H)$, $(K\cap G^0,G^0)$ is the minimal Yamabe pair of $G$ with minimal kernel. \begin{dem} Suppose that $(K,H)$ is a minimal Yamabe pair with minimal kernel. As $H$ is clopen by Remark \ref{o:remarks topological groups}(2), we have that $G^0\subseteq H$. On the other hand, for any other open subgroup $H'\leq G$, by Lemma \ref{l:lemma uniqueness minimal lie core}(1), we have that $(K\cap H',H\cap H')$ is a minimal Yamabe pair. As $(K,H)$ is the one with minimal kernel, it follows that $H\cap H'=H$, so $H\subseteq H'$. As $G^0$ is the intersection of all the open subgroups, we conclude that $G^0=H$. Conversely, suppose that $G^0$ is open. Then, by Lemma \ref{l:lemma uniqueness minimal lie core}(1), for any minimal Yamabe pair $(K,H)$, we have that $(K\cap G^0,G^0)$ is a minimal Yamabe pair. Thus, for any minimal Yamabe pairs $(K,H)$ and $(K',H')$, we have that $(K\cap G^0,G^0)$ and $(K'\cap G^0,G^0)$ are minimal Yamabe pairs, and so $K\cap G^0=K'\cap G^0$ by Lemma \ref{l:lemma uniqueness minimal lie core}(2). Therefore, $(K\cap G^0,G^0)$ is the minimal Yamabe pair with minimal kernel. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{prop} In general, even if $G^0$ is not open, a similar conclusion is ``asymptotically'' true: \begin{prop} \label{p:asymptotic minimal yamabe pair with minimal domain} Let $G$ be a locally compact topological group and $L$ its minimal Lie core. Then, the restriction to $G^0$ of the global minimal Lie core map $\pi_{L\mid G^0}:\ G^{0}\rightarrow L$ is a continuous, open, closed and proper surjective group homomorphism. \begin{dem} Take $(K,H)$ minimal Yamabe pair and $\pi_{L\mid H}:\ H\rightarrow L$. By Proposition \ref{p:global minimal lie core map}, $\pi_{L\mid H}$ is a continuous, open, closed and proper surjective group homomorphism. By definition, $G^0\leq H$. Thus, consider the restriction $\pi_{L\mid G^0}:\ G^0\rightarrow L$. As $G^0$ is a closed subgroup, $\pi_{L\mid G^0}$ is also a continuous, closed and proper group homomorphism. It remains to show that it is onto and open. Let $b\in L$. We want to show that $\pi^{-1}_{L\mid H}(b)\cap G^0\neq \emptyset$. Let $H'\leq G$ be a clopen subgroup such that $H'\leq H$. Then, $\pi_{L\mid H}[H']$ is clopen in $L$. As $L$ is connected, we get that $\pi^{-1}_{L\mid H}(b)\cap H'\neq \emptyset$. Since $\pi^{-1}_{L\mid H}(b)$ is compact and $H'$ is arbitrary, we conclude that $\pi^{-1}_{L\mid H}(b)\cap G^0\neq \emptyset$. Therefore, $\pi_{L\mid G^0}:\ G^0\rightarrow L$ is onto. We conclude that it is also open by the Closed Isomorphism Theorem (Remark \ref{o:remarks topological groups}(4)). \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{prop}
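For a concrete instance of the last two propositions, one can check the following for $G=\mathbb{Z}_p\times\mathbb{R}$, whose connected component of the identity is $G^0=\{0\}\times\mathbb{R}$, closed but not open (again an illustration we add, using only standard facts about $p$-adic groups):

```latex
% The open subgroups of G = Z_p x R are the U x R with U an open
% subgroup of Z_p, so the minimal Yamabe pairs are
%   ( p^n Z_p x {0} ,  p^n Z_p x R ),   n in N,
% and their kernels decrease without a minimum: there is no minimal
% Yamabe pair with minimal kernel, matching the fact that G^0 is
% not open.  Nevertheless, the restriction of the global minimal
% Lie core map,
\[
\pi_{L\mid G^0}\colon\ \{0\}\times\mathbb{R}\longrightarrow L\cong\mathbb{R},
\qquad (0,t)\longmapsto t,
\]
% is still a continuous, open, closed and proper surjective
% homomorphism (here even an isomorphism of topological groups),
% as the proposition predicts.
```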
\subsection{Local compactness and generic pieces} A \emph{piecewise $A${\hyp}hyperdefinable group} is a group whose universe is piecewise $A${\hyp}hyperdefinable and whose operations are piecewise bounded $\bigwedge_A${\hyp}definable. \begin{ejem} Let $G$ be a definable group and $X\subseteq G$ a symmetric definable subset. Then, the subgroup $H\leq G$ generated by $X$ is a countably piecewise definable group. If $K\trianglelefteq H$ is a piecewise $\bigwedge${\hyp}definable normal subgroup, the quotient $\rfrac{H}{K}=\underrightarrow{\lim} \rfrac{X^n}{K}$ is a countably piecewise hyperdefinable group too. If $K\subseteq X^n$ for some $n$, then $\rfrac{H}{K}$ is also locally hyperdefinable. This corresponds to the case studied in \cite{hrushovski2011stable}. \end{ejem} \begin{obs} Piecewise $\bigwedge${\hyp}definable subgroups of piecewise hyperdefinable groups are piecewise hyperdefinable groups. The quotient of a piecewise hyperdefinable group by a normal piecewise $\bigwedge${\hyp}definable subgroup is a piecewise hyperdefinable group. \end{obs} Note that the group operations are continuous between the logic topologies by Proposition \ref{p:functions logic topology}(1). However, the product topology and the logic topology may differ, so piecewise hyperdefinable groups with the logic topologies do not need to be topological groups. \begin{prop} \label{p:small piecewise hyperdefinable groups and translations} Let $G$ be a piecewise hyperdefinable group with a global logic topology. Then, every translation is a homeomorphism in its global logic topology. \begin{dem} Trivial by Proposition \ref{p:functions logic topology}(1) and Proposition \ref{p:uniqueness of hausdorff logic topologies}. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{prop} \begin{obs} Groups with a $\mathrm{T}_1$ topology such that every translation is continuous are called \emph{semitopological groups} --- see \cite{husain2018introduction} for an introduction to semitopological groups. \end{obs} \begin{teo}\label{t:countably piecewise hyperdefinable groups and topological groups} Let $G$ be a countably piecewise hyperdefinable group with a global logic topology. Then, $G$ is a topological group with the global logic topology. \begin{dem} Clear by Proposition \ref{p:functions logic topology}(1) and Proposition \ref{p:product of countably piecewise hyperdefinable sets}. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{teo} \begin{ejem} \label{e:example no topological group} We now give an example of a piecewise hyperdefinable group with a global logic topology that is not a topological group. We simply adapt the fundamental example given in \cite[Example 1.2]{tatsuuma1998group} to the case of piecewise hyperdefinable groups.
First, recall that $\mathbb{Q}^n$ with the usual topology is piecewise hyperdefinable with a global logic topology in the theory of real closed fields by Example \ref{e:example 4}. Now, the inclusion $\psi_{n,m}:\ \mathbb{Q}^m\rightarrow \mathbb{Q}^n$ given by $\psi_{n,m}(x)=(x,0,\ldots,0)$ for $n>m$ is a piecewise bounded $\bigwedge${\hyp}definable $1${\hyp}to{\hyp}$1$ map. Also, the set of pairwise sums of two compact countable subsets of $\mathbb{Q}^n$ is a compact countable subset of $\mathbb{Q}^n$, so $+$ is a piecewise bounded $\bigwedge${\hyp}definable map. Then, $\bigoplus_{\mathbb{N}}\mathbb{Q}=\underrightarrow{\lim}\, \mathbb{Q}^n$ is a piecewise hyperdefinable group. Now, $\bigoplus_\mathbb{N} \mathbb{Q}$ is not a topological group. Consider the set $U=\{x\mathrel{:} |x_j|<|\cos(jx_0)|\mathrm{\ for\ }j\in\mathbb{N}_{>0}\}$. As $x_0\in\mathbb{Q}$ for any $x\in \bigoplus_{\mathbb{N}} \mathbb{Q}$, we have $\cos(jx_0)\neq 0$, so $U$ is an open neighbourhood of $0$. However, there is no open neighbourhood $V$ of $0$ such that $V+V\subseteq U$, concluding that $\bigoplus_{\mathbb{N}}\mathbb{Q}$ is not a topological group. Aiming at a contradiction, suppose otherwise; take $V$ an open neighbourhood of $0$ such that $V+V\subseteq U$. As $V$ is an open neighbourhood of $0$, there is $\varepsilon_0\in\mathbb{R}_{>0}$ such that $\{x\mathrel{:} |x_0|<\varepsilon_0 \mathrm{\ and\ }x_i=0\mathrm{\ for\ }i\in\mathbb{N}_{>0}\}\subseteq V$. Take $n\in\mathbb{N}_{>0}$ such that $2n\varepsilon_0>\pi$. There is then $\varepsilon_1\in\mathbb{R}_{>0}$ such that $\{x\mathrel{:} |x_n|<\varepsilon_1\mathrm{\ and\ }x_i=0\mathrm{\ for\ }i\neq n\}\subseteq V$. Hence, $\{x\mathrel{:} |x_0|<\varepsilon_0,\ |x_n|<\varepsilon_1\mathrm{\ and\ }x_i=0\mathrm{\ for\ }i\in\mathbb{N}\setminus \{0,n\}\}\subseteq V+V\subseteq U$. 
In particular, $(-\varepsilon_0,\varepsilon_0)_{\mathbb{Q}}\times(-\varepsilon_1,\varepsilon_1)_{\mathbb{Q}}\subseteq \{(x_0,x_1)\in \mathbb{Q}\times\mathbb{Q}\mathrel{:} |x_1|<|\cos(nx_0)|\}$. However, this is impossible when $2n\varepsilon_0>\pi$: as $\pi/(2n)<\varepsilon_0$ and $\cos(nx)$ vanishes at $x=\pi/(2n)$, by continuity there is a rational $x_0\in(0,\varepsilon_0)$ with $|\cos(nx_0)|<\varepsilon_1$, and then any rational $x_1\in(|\cos(nx_0)|,\varepsilon_1)$ gives a pair $(x_0,x_1)$ in the left{\hyp}hand side but not in the right{\hyp}hand side, a contradiction. \end{ejem} \begin{teo} \label{t:piecewise hyperdefinable and local compactness} Let $G$ be a locally hyperdefinable group with a global logic topology. Then, $G$ is a locally compact topological group with this topology.
Furthermore, a countably piecewise hyperdefinable group $G$ is a locally compact topological group with some logic topology if and only if this logic topology is the global logic topology and $G$ is locally hyperdefinable. \begin{dem} We know that the global logic topology of $G$ is locally compact by Proposition \ref{p:locally compact locally hyperdefinable} and Hausdorff by Proposition \ref{p:hausdorff locally hyperdefinable}. By Proposition \ref{p:product of locally hyperdefinable sets} and Proposition \ref{p:functions logic topology}(1), we conclude that $G$ is a locally compact topological group.
By definition, a logic topology is $\mathrm{T}_1$ if and only if it is the global logic topology. On the other hand, assuming $G=\underrightarrow{\lim}_{n\in\mathbb{N}}G_n$, by Baire's Category Theorem \cite[Theorem 48.2]{munkres1999topology}, if $G$ is locally compact Hausdorff, there are $h\in G$ and $n\in \mathbb{N}$ such that $G_n$ is a neighbourhood of $h$. Thus, for any $g\in G$, $gh^{-1}G_n$ is an $\bigwedge${\hyp}definable neighbourhood of $g$. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{teo}
\begin{ejem} \label{e:example no locally compact topological group} We give an example of a countably piecewise hyperdefinable group with a global logic topology which is not locally hyperdefinable; namely, the countably infinite direct sum of circles with the inductive topology. Denote the unit circle by $\mathbb{S}\coloneqq\{ x\in \mathbb{C}\mathrel{:} |x|=1\}$, which is hyperdefinable in the theory of real closed fields as the quotient of the usual definable circle by the infinitesimals. For $n>m$, we take the map $\psi_{n,m}:\ \mathbb{S}^m\rightarrow \mathbb{S}^n$ by $\psi_{n,m}(x)=(x,1,\ldots,1)$. Then, $\bigoplus_{\mathbb{N}} \mathbb{S}\coloneqq \underrightarrow{\lim}\, \mathbb{S}^n$ is a countably piecewise hyperdefinable group with a global logic topology that is not locally hyperdefinable.
A local base of open neighbourhoods of the identity in the global logic topology of $\bigoplus_{\mathbb{N}} \mathbb{S}$ is the family of subsets $U_\varepsilon\coloneqq\{x\mathrel{:} \mathrm{d}_{\mathbb{S}}(x_i,1)<\varepsilon_i\mathrm{\ for\ }i\in\mathbb{N}\}$ for sequences $\varepsilon=(\varepsilon_i)_{i\in\mathbb{N}}$ with $\varepsilon_i\in (0,1]$, where $\mathrm{d}_{\mathbb{S}}$ is the normalised usual distance in the unit circle. Indeed, suppose $U$ is an open neighbourhood of $1$ in $\bigoplus_{\mathbb{N}} \mathbb{S}$. By using compactness in $\mathbb{S}^n$ for each $n\in\mathbb{N}$, we recursively find a sequence $\varepsilon=(\varepsilon_i)_{i\in\mathbb{N}}$ with $\varepsilon_i\in (0,1]$ such that $\{x\mathrel{:} \mathrm{d}_{\mathbb{S}}(x_i,1)\leq \varepsilon_i\mathrm{\ for\ }i\leq n\}\subseteq U\cap \mathbb{S}^n$. On the other hand, every $U_\varepsilon$ contains points $x$ with $x_i\neq 1$ for arbitrarily large $i$, so no $U_\varepsilon$ is contained in a piece $\mathbb{S}^n$; as translations are homeomorphisms and every $\bigwedge${\hyp}definable subset is contained in some piece, no point has an $\bigwedge${\hyp}definable neighbourhood, so $\bigoplus_{\mathbb{N}} \mathbb{S}$ is not locally hyperdefinable. \end{ejem} Unfortunately, proving that a particular piecewise hyperdefinable set is locally hyperdefinable may be truly hard, as it requires checking a property of a topological space that we understand only vaguely. Until now, the only method available to show that a piecewise hyperdefinable set is locally hyperdefinable relies on Proposition \ref{p:mapping locally hyperdefinable sets} and the fact that piecewise definable sets are trivially locally hyperdefinable (i.e. piecewise definable and locally definable are the same). Sometimes that is not enough. To solve this problem we introduce generic sets.
Let $G$ be a piecewise hyperdefinable group. A \emph{generic} subset is an $\bigwedge${\hyp}definable subset $V$ such that, for any other $\bigwedge${\hyp}definable subset $W$, $[W:V]\coloneqq\min\{|\Delta|\mathrel{:} W\subseteq \Delta V\}$ is finite, i.e. there is a finite $\Delta\subseteq G$ with $W\subseteq \Delta V$. Obviously, if $V$ is a generic set and $V\subseteq W$ for $\bigwedge${\hyp}definable $W$, then $W$ is also a generic set. In other words, if there are generic sets, the generic sets form an upper set of the family of $\bigwedge${\hyp}definable subsets. Hence, if there is a generic set, then there is in particular a generic piece, i.e. a piece which is a generic set.
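For a concrete illustration (a standard example, stated here only to fix intuition), work in the theory of real closed fields and consider the piecewise definable group of finite elements $G=\underrightarrow{\lim}\,[-n,n]$ under addition. One can check that the piece $V=[-1,1]$ is a generic set: every $\bigwedge${\hyp}definable subset $W$ is contained in some piece $[-n,n]$, and \[ [-n,n]\subseteq\bigcup_{j=-(n-1)}^{n-1}\bigl(j+[-1,1]\bigr), \] so $[W:V]\leq 2n-1$.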
The following theorem is a generalisation of an unpublished example due to Hrushovski. \begin{teo}[Generic Set Lemma] \label{t:generic set lemma} Let $G$ be a piecewise hyperdefinable group and $V$ a symmetric generic set. Then, for each $n\in\mathbb{N}$, $V^{n+2}$ is a neighbourhood of $V^n$ in some logic topology. In particular, if $G$ has a generic piece, then $G$ is locally hyperdefinable. Furthermore, when $G$ is small, $G$ is locally hyperdefinable if and only if it has a generic piece. \begin{dem}Using that $V$ is generic, find a well{\hyp}ordered sequence $(a_\xi)_{\xi\in \alpha}$ in $G$ such that, for every $\bigwedge${\hyp}definable subset $W\subseteq G$, there is $\Delta_W\subseteq \alpha$ finite with $W\subseteq \bigcup_{\xi\in \Delta_W}a_\xi V$. Let $A$ be a set of parameters containing $\{a_\xi\}_{\xi\in\alpha}$ and such that $G=\underrightarrow{\lim}\ G_i$ is a piecewise $A${\hyp}hyperdefinable group and $V$ is $\bigwedge_A${\hyp}definable. From now on, we work on the $A${\hyp}logic topology. We want to show that $V^{n+2}$ is a neighbourhood of $V^n$. Let $\Delta\subseteq \alpha$ be finite and minimal such that $V^{n+4}\subseteq V^{n+2}\cup \bigcup_{\xi\in \Delta}a_\xi V$. Let $U=V^{n+4}\setminus \bigcup_{\xi\in \Delta}a_\xi V$. Obviously, $U\subseteq V^{n+2}$ and $U$ is open in $V^{n+4}$. Note also that $V^n\subseteq U$; otherwise, taking $a\in V^n\setminus U$, there is $\xi\in\Delta$ such that $a\in a_{\xi}V$, so $a_{\xi}\in V^{n+1}$ and $a_{\xi}V\subseteq V^{n+2}$, contradicting minimality of $\Delta$. Similarly, for each piece $G_i$ such that $V^{n+4}\subseteq G_i$, pick a finite and minimal subset $\Delta_i\subseteq \alpha$ such that $G_i\subseteq V^{n+2}\cup \bigcup_{\xi\in \Delta_i}a_\xi V$ and $\Delta\subseteq \Delta_i$. Define $U_i=G_i\setminus \bigcup_{\xi\in\Delta_i}a_\xi V$. Again, it is clear that $U_i\subseteq V^{n+2}\subseteq V^{n+4}$ and $U_i$ is open in $G_i$. Also, by minimality of $\Delta_i$, it follows that $V^n\subseteq U_i$.
We claim that $U_i=U$ for any $i\in I$ such that $V^{n+4}\subseteq G_i$. It is clear by definition that $U_i\subseteq U$. On the other hand, take $a\in V^{n+2}\setminus U_i$. As $a\notin U_i$ and $a\in V^{n+2}\subseteq V^{n+4}\subseteq G_i$, there is $\xi\in\Delta_i$ such that $a\in a_\xi V$. As $a\in V^{n+2}$, $a_\xi\in a\cdot V\subseteq V^{n+3}$, concluding $a_\xi V\subseteq V^{n+4}$. Then, by minimality of $\Delta_i$, it follows that $\xi\in \Delta$, so $a\notin U$. That shows that $U$ is open in $G$, so $V^{n+2}$ is a neighbourhood of $V^n$.
In particular, $V^3$ is a neighbourhood of $V$. As $G=\bigcup_{\xi\in\alpha} a_{\xi}V$, for every $a\in G$ there is $\xi\in\alpha$ such that $a\in a_\xi V\subseteq a_\xi U\subseteq a_\xi V^3$ with $a_\xi U$ open in the $A${\hyp}logic topology. Thus, $a_\xi V^3$ is an $\bigwedge_A${\hyp}definable neighbourhood of $a$ in the $A${\hyp}logic topology, concluding that $G$ is locally $A${\hyp}hyperdefinable.
On the other hand, if $G$ is locally hyperdefinable and small, it is a locally compact topological group by Theorem \ref{t:piecewise hyperdefinable and local compactness}. Therefore, the identity is in the interior of some piece of $G$. Every $\bigwedge${\hyp}definable subset $W$ of $G$ is compact, and so covered by finitely many translates of this piece. As $W$ is arbitrary, this piece is generic. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{teo}
If $G$ has a generic subset, then so does $\rfrac{G}{K}$ for any piecewise $\bigwedge${\hyp}definable $K\trianglelefteq G$. Therefore, by the Generic Set Lemma (Theorem \ref{t:generic set lemma}), when $G$ has a generic subset, $\rfrac{G}{K}$ is a locally hyperdefinable group for any piecewise $\bigwedge${\hyp}definable normal subgroup $K\trianglelefteq G$.
A \emph{$T${\hyp}rough $k${\hyp}approximate subgroup} of a group $G$ is a subset $X\subseteq G$ such that $X^2\subseteq \Delta XT$ with $|\Delta|\leq k\in\mathbb{N}_{>0}$ and $1\in T\subseteq G$. In particular, a \emph{$k${\hyp}approximate subgroup} is a $1${\hyp}rough $k${\hyp}approximate subgroup.
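For instance, in $(\mathbb{R},+)$ the interval $X=[-1,1]$ is a $2${\hyp}approximate subgroup, since \[ X+X=[-2,2]=(-1+X)\cup(1+X), \] i.e. $X+X\subseteq \Delta+X$ with $\Delta=\{-1,1\}$; the same computation shows that $[-r,r]$ is a $2${\hyp}approximate subgroup of $(\mathbb{R},+)$ for any $r>0$.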
It is clear from the definitions that every symmetric generic set is in particular an approximate subgroup. Conversely, any $\bigwedge${\hyp}definable approximate subgroup is a generic set of the piecewise hyperdefinable group that it generates. Thus, we can understand generic sets as a strengthening of $\bigwedge${\hyp}definable approximate subgroups.
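To spell out the last claim, suppose $X$ is a symmetric $\bigwedge${\hyp}definable $k${\hyp}approximate subgroup with $1\in X$, say $X^2\subseteq \Delta X$ with $|\Delta|\leq k$, and let $G=\underrightarrow{\lim}\, X^n$ be the piecewise hyperdefinable group it generates. By induction on $n$, \[ X^{n+1}=X^2X^{n-1}\subseteq \Delta X^n\subseteq \Delta^{n}X, \] so $[X^n:X]\leq k^{n-1}$ for every $n$. Since every $\bigwedge${\hyp}definable subset of $G$ is contained in some piece $X^n$, it follows that $X$ is a generic set of $G$.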
The next corollary follows from Theorem \ref{t:piecewise hyperdefinable and local compactness} and the Generic Set Lemma (Theorem \ref{t:generic set lemma}) as a particular case: \begin{coro} \label{c:corollary locally hyperdefinable and generic piece} Let $G=\underrightarrow{\lim}\, X^n$ be a piecewise hyperdefinable group generated by an $\bigwedge${\hyp}definable symmetric set $X$ and $T\trianglelefteq G$ be a normal piecewise $\bigwedge${\hyp}definable subgroup of small index. Then, $\rfrac{G}{T}$ is a locally compact topological group if and only if $X^n$ is a $T${\hyp}rough approximate subgroup for some $n$. In particular, if $T$ is $\bigwedge${\hyp}definable, $\rfrac{G}{T}$ is a locally compact topological group if and only if $X^n$ is an approximate subgroup for some $n$. \end{coro}
An \emph{isomorphism} of piecewise $A${\hyp}hyperdefinable groups is an isomorphism of groups which is also an isomorphism of piecewise $A${\hyp}hyperdefinable sets.
\begin{teo}[Isomorphism Theorem] \label{t:isomorphism theorem} Let $f:\ G\rightarrow H$ be an onto piecewise bounded and proper $\bigwedge_A${\hyp}definable homomorphism of piecewise $A${\hyp}hyperdefinable groups. Then, for each $\bigwedge_A${\hyp}definable subgroup $S\leq K\coloneqq \ker(f)$ with $S\trianglelefteq G$, there is a unique map $\widetilde{f}_S:\ \rfrac{G}{S}\rightarrow H$ such that $f=\widetilde{f}_S\circ \pi_{G/S}$. This map $\widetilde{f}_S$ is a piecewise bounded and proper $\bigwedge_A${\hyp}definable homomorphism of groups with kernel $\rfrac{K}{S}$. In particular, $\widetilde{f}_K:\ \rfrac{G}{K}\rightarrow H$ is a piecewise $\bigwedge_A${\hyp}definable isomorphism. Furthermore, if $G$ has a global logic topology, then $f$ is an open map between the global logic topologies. \begin{dem} Existence and uniqueness are given by the usual Isomorphism Theorem. Say $G=\underrightarrow{\lim}\, G_i$ and $H=\underrightarrow{\lim}\, H_j$. We get that $\widetilde{f}_S$ is obviously piecewise $\bigwedge_A${\hyp}definable as, for any pieces $G_i$ and $H_j$, ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}^{-1}_{(\rfrac{G_i}{S})\times H_j}[\widetilde{f}_S]={\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}^{-1}_{G_i\times H_j}[f]$. It is trivially piecewise bounded and proper as $f$ and $\pi_{G/S}$ are. Assuming that $G$ has a global logic topology, one{\hyp}sided translations are continuous. Since, by Proposition \ref{p:functions logic topology}(3), $\widetilde{f}_K$ is a piecewise $\bigwedge_A${\hyp}definable homeomorphism, it is in particular an open map. Since $\pi^{-1}_{G/K} [\pi_{G/K}[U]]=KU=\bigcup_{x\in K} xU$, we see that $\pi_{G/K}$ is open. Therefore, $f=\widetilde{f}_K\circ\pi_{G/K}$ is open too.\setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{teo}
{\textbf{Digression on Machado's Closed Approximate Subgroup Theorem and de Saxc\'{e}'s Product Theorem:}} The Generic Set Lemma (Theorem \ref{t:generic set lemma}) can be rewritten in purely topological terms. Written in this way, it is likely that this result was already (partially) known in the theory of topological groups. Recall that a semitopological group is a group with a $\mathrm{T}_1$ topology such that left and right translations are continuous. \begin{teo} \label{t:generics} Let $G$ be a semitopological group with a coherent covering $\mathcal{C}$ by closed symmetric subsets such that, for any $A,B\in\mathcal{C}$, there is $C\in\mathcal{C}$ with $AB\subseteq C$. Let $V$ be a \emph{generic} piece, i.e. an element $V\in\mathcal{C}$ such that $[W:V]$ is finite for every $W\in \mathcal{C}$. Then, $V$ has non{\hyp}empty interior. In particular, if $V$ is compact Hausdorff with the subspace topology, then $G$ is a locally compact topological group. Furthermore, if $\mathcal{C}$ is a countable covering by compact Hausdorff subsets, then $G$ is locally compact Hausdorff if and only if $\mathcal{C}$ contains a generic piece. \begin{dem} Mimicking the proof of Theorem \ref{t:generic set lemma}, we show that $V^2$ is a neighbourhood of the identity, and so $V$ has non{\hyp}empty interior as finitely many translates of it cover $V^2$. If $V$ is compact Hausdorff, as left translations are homeomorphisms, we get that every point has a compact Hausdorff neighbourhood. Therefore, $G$ is locally compact Hausdorff, so a topological group by Ellis's Theorem \cite[Theorem 2]{ellis1957locally}. The ``furthermore'' part is an immediate consequence of Baire's Category Theorem \cite[Theorem 48.2]{munkres1999topology}.
\end{dem} \end{teo} As a corollary we get the following notable Closed Approximate Subgroups Theorem, which was first proved by Machado in \cite[Theorem 1.4]{machado2021good}. Recall that the \emph{commensurator} in $G$ of an approximate subgroup $\Lambda\subseteq G$ is the subset $\mathrm{Comm}(\Lambda)=\{g\in G\mathrel{:} g^{-1}\Lambda g\mathrm{\ and\ }\Lambda$ are commensurable$\}$. The commensurator was first introduced by Hrushovski in \cite{hrushovski2022beyond}. The following is one of the most fundamental results about the commensurator: \begin{lem}{\textnormal{\textbf{\cite[Lemma 5.1]{hrushovski2022beyond}}}} \label{l:commensurator} Let $G$ be a group and $\Lambda\subseteq G$ an approximate subgroup. Then, $\mathrm{Comm}(\Lambda)=\bigcup \{\Lambda'\subseteq G$ approximate $\mathrm{subgroup}\mathrel{:} \Lambda'$ and $\Lambda$ are commensurable$\}$ and $\mathrm{Comm}(\Lambda)\leq G$.
\end{lem} \begin{coro}[Machado's Closed Approximate Subgroups Theorem] \label{c:machado} Let $G$ be a locally compact topological group and $X$ a closed approximate subgroup. Take $\Lambda=\overline{X^2\cap K^2}$ where $K$ is a compact symmetric neighbourhood of the identity. Then, $\mathrm{Comm}(\Lambda)$, with the direct limit topology given by $\Omega=\{\Lambda'\mathrel{:} \Lambda'$ compact approximate subgroup commensurable to $\Lambda\}$, is a locally compact topological group such that ${\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}:\ \mathrm{Comm}(\Lambda)\rightarrow G$ is a continuous $1${\hyp}to{\hyp}$1$ group homomorphism, $X\subseteq \mathrm{Comm}(\Lambda)$, ${\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}_{\mid X}$ is a homeomorphism and $X$ has non{\hyp}empty interior. Furthermore, if $G$ is a Lie group, then $\mathrm{Comm}(\Lambda)$ is a Lie group too. \begin{dem} As $\Lambda$ is compact, for any $B$ commensurable to $\Lambda$, we have that $\overline{B}$ is compact and commensurable to $\Lambda$. Thus, $\Omega$ is a covering of $\mathrm{Comm}(\Lambda)$ by Lemma \ref{l:commensurator}. By construction, $\Lambda$ is generic in $\Omega$. Also, by an easy computation, if $\Lambda_1$ and $\Lambda_2$ are commensurable approximate subgroups, then $\Lambda_1\Lambda_2\cup \Lambda_2\Lambda_1$ is an approximate subgroup commensurable to $\Lambda_1$ and to $\Lambda_2$. Thus, by Theorem \ref{t:generics}, $\mathrm{Comm}(\Lambda)$ with the direct limit topology is a locally compact topological group such that ${\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}:\ \mathrm{Comm}(\Lambda)\rightarrow G$ is a continuous proper $1${\hyp}to{\hyp}$1$ group homomorphism and $\Lambda$ has non{\hyp}empty interior. 
As any two compact neighbourhoods of the identity are commensurable, by \cite[Lemma 2.2, Lemma 2.3]{machado2021good}, we get that $X^2\cap (K')^2\subseteq \mathrm{Comm}(\Lambda)$ for any compact neighbourhood of the identity $K'$, concluding that $X\subseteq X^2\subseteq \mathrm{Comm}(\Lambda)$. Since $\Lambda\subseteq X^2$ has non{\hyp}empty interior, we get that $X^2$ has non{\hyp}empty interior and so $X$ has non{\hyp}empty interior. If $Y\subseteq X$ is closed in $G$, by continuity of ${\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}$, it is closed in $\mathrm{Comm}(\Lambda)$. On the other hand, if $Y\subseteq X$ is closed in $\mathrm{Comm}(\Lambda)$, then $Y\cap (K')^2=Y\cap \overline{X^2\cap (K')^2}$ is closed in $\overline{X^2\cap (K')^2}$ (and so in $G$) for any compact symmetric neighbourhood of the identity $K'$ of $G$. By local compactness of $G$, the family of compact symmetric neighbourhoods of the identity forms a local covering of $G$. Thus, by the Local Covering Lemma \ref{l:local coherent}, $Y$ is closed in $G$. Since $X$ is closed, we conclude that the subspace topologies of $X$ in $G$ and in $\mathrm{Comm}(\Lambda)$ coincide. Finally, if $G$ is a Lie group, $\mathrm{Comm}(\Lambda)$ is a Lie group by \cite[Chapter III, \S 8.2, Corollary 1]{bourbaki1975lie}. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{coro} \begin{obs}{\textnormal{\textbf{\cite[Theorem 4.1]{machado2021good}}}} If $X$ is compact, we do not need to assume that $G$ is locally compact; in that case, we can simply take $\Lambda=X$. \end{obs}
Corollary \ref{c:machado} implies the following remarkable result by using Poguntke's Theorem \cite[Theorem 3.3]{poguntke1994dense}. Recall that a connected Lie group is \emph{semi{\hyp}simple} if it has no non{\hyp}trivial connected solvable normal subgroups. \begin{coro}\label{c:poguntke} Let $G$ be a semi{\hyp}simple Lie group and $X$ a closed approximate subgroup with empty interior. Then, $X$ is contained in a closed subgroup $H\lneq G$. In particular, the subgroup generated by $X$ is not dense. \begin{dem} By Machado's Closed Approximate Subgroups Theorem (Corollary \ref{c:machado}), there is a subgroup $C\leq G$ containing $X$ that admits a Lie group structure such that $X$ has non{\hyp}empty interior and the map ${\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}:\ C\rightarrow G$ is a $1${\hyp}to{\hyp}$1$ continuous group homomorphism. By \cite[Theorem 3.3]{poguntke1994dense} and \cite[Proposition 6.5]{lee2002introduction}, as $G$ is semi{\hyp}simple, ${\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}$ has dense image if and only if it is actually an isomorphism. Therefore, the group generated by $X$ cannot be dense in $G$.\setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{coro} As a consequence, by an easy ultraproduct argument, we get the following corollary, which provides a variation of de Saxc\'{e}'s Product Theorem \cite[Theorem 1]{desaxce2014product} valid for semi{\hyp}simple Lie groups. For a metric space, write $\overline{\mathbb{D}}_r(x)$ for the closed ball at $x$ of radius $r$, and write $\mathrm{N}_r(X)$ for the maximum size of a finite $r${\hyp}separated subset of $X$ (or $\infty$ if no such maximum exists). \begin{coro}[A Product Theorem for Semisimple Lie Groups] \label{c:de saxce} Let $G$ be a semi{\hyp}simple Lie group and $U$ a compact neighbourhood of the identity. Fix a left{\hyp}invariant metric. Let $\sigma<\mathrm{dim}(G)$ and $C,k,s\in \mathbb{N}$. Then, there is $m\in\mathbb{N}$ such that, for any $\overline{\mathbb{D}}_{2^{-m}}(1)${\hyp}rough $k${\hyp}approximate subgroup $X\subseteq U$ satisfying $\mathrm{N}_{2^{-i}}(X)\leq C2^{i\sigma}$ for each $i\leq m$, there is a closed subgroup $H\lneq G$ with $X\subseteq \overline{\mathbb{D}}_{2^{-s}}(H)$. \begin{dem} Aiming for a contradiction, suppose otherwise. Then, we have a counterexample $X_m$ for each $m\in\mathbb{N}$. Take an ultraproduct in the sense of (unbounded) continuous logic, i.e. take an ultraproduct, take the subgroup generated by $U$ and quotient by the infinitesimals. By compactness of $U$, we then end up with a subset $X\subseteq U$ of $G$. By {\L}o\'{s}'s Theorem, $X$ is a $k${\hyp}approximate subgroup and satisfies $\mathrm{N}_{2^{-i}}(X)\leq C\cdot 2^{i\sigma}$ for every $i\in\mathbb{N}$. Then, $\dim(X)\leq \sigma<\mathrm{dim}(G)$, where $\dim$ denotes the (large) inductive dimension, by \cite[Theorem VII 2]{hurewicz2015dimension} and \cite[Eq.3.17 p.46]{falconer1990fractal}. In particular, $X$ has empty interior in $G$ by \cite[Corollary 1 of Theorem IV 3]{hurewicz2015dimension}. 
By Corollary \ref{c:poguntke}, as $G$ is semi{\hyp}simple, the closure of the subgroup generated by $X$ is a proper closed subgroup $H\lneq G$ containing $X$. Thus, by {\L}o\'{s}'s Theorem, there is some $m$ such that $X_m\subseteq \overline{\mathbb{D}}_{2^{-s}}(H)$, contradicting our initial assumption. \end{dem} \end{coro}
\subsection{Model{\hyp}theoretic components} We now define some model{\hyp}theoretic components for piecewise hyperdefinable groups. Let $G$ be a piecewise $A${\hyp}hyperdefinable group.
\\ The \emph{invariant component of $G$ over $A$} is \[G^{000}_A\coloneqq\bigcap\left\{H\leq G\mathrel{:} H\mbox{ is }A\mbox{{\hyp}invariant with }[G:H]\mbox{ small}\right\}.\] The \emph{infinitesimal component of $G$ over $A$} is \[G^{00}_A\coloneqq\bigcap\left\{H\leq G\mathrel{:} H\mbox{ is p/w. }{\textstyle{\bigwedge}}\mbox{{\hyp}def. with } G^{000}_A\leq H\right\}.\] The \emph{connected component of $G$ over $A$} is \[G^0_A\coloneqq\bigcap \left\{ H\leq G\mathrel{:} H\ \mathrm{and}\ G\setminus H \mbox{ are p/w. }{\textstyle{\bigwedge}}\mbox{{\hyp}def. with }G^{000}_A\leq H\right\}.\] Obviously, $G^{000}_A\leq G^{00}_A\leq G^0_A\leq G$. \begin{lem} \label{l:normal subgroup of small index} Let $G$ be a piecewise $A${\hyp}hyperdefinable group and $T\leq G$ be an $A${\hyp}invariant subgroup of small index. Then, there is a unique maximal normal subgroup $\widetilde{T}\trianglelefteq G$ contained in $T$. Furthermore, $\widetilde{T}$ is $A${\hyp}invariant and has small index. Moreover, $\widetilde{T}$ is piecewise $\bigwedge${\hyp}definable when $T$ is so. \begin{dem} Take $\widetilde{T}=\bigcap_{i\in [G:T]} T^{g_i}$, where $\{g_i\}_{i\in [G:T]}$ is a set of representatives of the cosets of $T$. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{lem} Let $B$ be a set of parameters with $A\subseteq B$. The \emph{$B${\hyp}logic topology} in $\rfrac{G}{G^{000}_A}$ is the one such that $V\subseteq \rfrac{G}{G^{000}_A}$ is closed if and only if $\pi^{-1}_{G/G^{000}_A}[V]$ is piecewise $\bigwedge_B${\hyp}definable. The global logic topology in $\rfrac{G}{G^{000}_A}$ is the one such that $V\subseteq \rfrac{G}{G^{000}_A}$ is closed if and only if $\pi^{-1}_{G/G^{000}_A}[V]$ is piecewise $\bigwedge${\hyp}definable. \begin{teo} Let $G$ be a piecewise $A${\hyp}hyperdefinable group. Then:
{\textnormal{\textbf{(1)}}} $G^{000}_A$ is an $A${\hyp}invariant normal subgroup of $G$ and $[G:G^{000}_A]$ is small. In fact, $[V:G^{000}_A]\leq 2^{|A|+|\mathscr{L}|}$ for any $\bigwedge${\hyp}definable subset $V\subseteq G$.
{\textnormal{\textbf{(2)}}} Let $B$ be a small set of parameters with $A\subseteq B$. The inversion map is continuous on $\rfrac{G}{G^{000}_A}$ with the $B${\hyp}logic topology.
{\textnormal{\textbf{(3)}}} The global logic topology on $\rfrac{G}{G^{000}_A}$ coincides with the $B${\hyp}logic topology for every small $B$ containing $A$ and a set of representatives of $\rfrac{G}{G^{000}_A}$. Every translation map is continuous on $\rfrac{G}{G^{000}_A}$ with the global logic topology.
{\textnormal{\textbf{(4)}}} $G^{00}_A$ is a piecewise $\bigwedge_A${\hyp}definable normal subgroup of $G$. Furthermore, $\rfrac{G^{00}_A}{G^{000}_A}$ is the closure of the identity in the global logic topology of $\rfrac{G}{G^{000}_A}$.
{\textnormal{\textbf{(5)}}} Let $\pi:\ \rfrac{G}{G^{000}_A}\rightarrow \rfrac{G}{G^{00}_A}$ be the natural quotient map given by $\pi_{G/G^{00}_A}=\pi\circ\pi_{G/G^{000}_A}$. Then, $\pi$ is a Kolmogorov map between the $B${\hyp}logic topologies for any $B$ small with $A\subseteq B$. In particular, it is the Kolmogorov quotient between the global logic topologies. If $\rfrac{G}{G^{00}_A}$ is a topological group, then the group operations in $\rfrac{G}{G^{000}_A}$ are continuous.
{\textnormal{\textbf{(6)}}} $G^0_A$ is a piecewise $\bigwedge_A${\hyp}definable normal subgroup of $G$.
{\textnormal{\textbf{(7)}}} $\rfrac{G}{G^{00}_A}$ is a locally compact topological group whenever $G$ has a generic piece modulo $G^{00}_A$. In that case, $\rfrac{G^0_A}{G^{00}_A}$ is the connected component of $\rfrac{G}{G^{00}_A}$ in the global logic topology and $\rfrac{G^0_A}{G^{000}_A}$ is the connected component of $\rfrac{G}{G^{000}_A}$ in the global logic topology.
\begin{dem} {\textnormal{\textbf{(1)}}} Trivially, $G^{000}_A$ is $A${\hyp}invariant and $[G:G^{000}_A]$ is small. Using Lemma \ref{l:normal subgroup of small index}, it is obvious that $G^{000}_A$ is a normal subgroup. Finally, recall that, for real elements, an $A^*${\hyp}invariant equivalence relation on a definable set with a small number of equivalence classes has at most $2^{|A^*|+|\mathscr{L}|}$ equivalence classes --- indeed, such a relation is coarser than having the same type over an elementary substructure containing $A^*$. The same holds for $A${\hyp}invariant equivalence relations on hyperdefinable sets. Consequently, we actually have $[V:G^{000}_A]\leq 2^{|A|+|\mathscr{L}|}$ for any $\bigwedge${\hyp}definable $V$.
{\textnormal{\textbf{(2)}}} Trivial.
{\textnormal{\textbf{(3)}}} Take $B$ small containing $A$ and a set of representatives of $\rfrac{G}{G^{000}_A}$. We have that, for any $V\subseteq \rfrac{G}{G^{000}_A}$, $\pi^{-1}_{G/G^{000}_A}[V]$ is $B${\hyp}invariant. In particular, the global logic topology on $\rfrac{G}{G^{000}_A}$ is the $B${\hyp}logic topology, so it is a well{\hyp}defined topology. Finally, as $\pi_{G/G^{000}_A}$ is a homomorphism and every translation map is piecewise bounded $\bigwedge${\hyp}definable, we conclude, by Proposition \ref{p:functions logic topology}(1), that every translation map is continuous in $\rfrac{G}{G^{000}_A}$ with the global logic topology.
{\textnormal{\textbf{(4)}}} Let $\widehat{V}$ be the closure of the identity in the global logic topology of $\rfrac{G}{G^{000}_A}$ and $V=\pi^{-1}_{G/G^{000}_A}[\widehat{V}]$. Obviously, $V\subseteq G^{00}_A$. On the other hand, as translations are continuous by point {\textnormal{\textbf{(3)}}}, it follows that $\widehat{V}^{-1}\widehat{V}\subseteq \widehat{V}$ and $\bar{g}\widehat{V}\bar{g}^{-1}\subseteq \widehat{V}$ for any $\bar{g}\in\rfrac{G}{G^{000}_A}$. Thus, $V$ is a normal subgroup, concluding $G^{00}_A=V$. In particular, $G^{00}_A$ is a piecewise $\bigwedge_A${\hyp}definable normal subgroup.
{\textnormal{\textbf{(5)}}} As translations are continuous, for any $\bar{g}\in\rfrac{G}{G^{000}_A}$, the closure of $\bar{g}$ in the global logic topology is $\bar{g}\cdot\rfrac{G^{00}_A}{G^{000}_A}$. Thus, $\pi$ is the Kolmogorov quotient between the global logic topologies. By Lemma \ref{l:kolmogorov maps}, ${\mathrm{Im}}\, \pi$ is a lattice isomorphism between the global logic topologies with inverse ${\mathrm{Im}}^{-1}\pi$. Now, as $\pi$ is $A${\hyp}invariant, it follows that ${\mathrm{Im}}\, \pi$ is a lattice isomorphism between the $B${\hyp}logic topologies with inverse ${\mathrm{Im}}^{-1}\pi$. Therefore, by Lemma \ref{l:kolmogorov maps}, we conclude that $\pi$ is a Kolmogorov map between the $B${\hyp}logic topologies.
Now, suppose $\rfrac{G}{G^{00}_A}$ is a topological group. Take $U\subseteq \rfrac{G}{G^{000}_A}$ open and $\bar{g}_1\bar{g}_2\in U$. Then, $\pi(\bar{g}_1)\pi(\bar{g}_2)\in \pi[U]$. As $\pi[U]$ is open, there are $\widehat{U}_1$ and $\widehat{U}_2$ open in $\rfrac{G}{G^{000}_A}$ such that $\pi(\bar{g}_1)\in \widehat{U}_1$ and $\pi(\bar{g}_2)\in \widehat{U}_2$ and $\widehat{U}_1\widehat{U}_2\subseteq \pi[U]$. Write $U_1=\pi^{-1}[\widehat{U}_1]$ and $U_2=\pi^{-1}[\widehat{U}_2]$. By Lemma \ref{l:kolmogorov maps}, $\pi^{-1}[\pi[U]]=U$. Then, $U_1U_2\subseteq U$ with $\bar{g}_1\in U_1$, $\bar{g}_2\in U_2$ and $U_1$ and $U_2$ open. As $U$, $\bar{g}_1$ and $\bar{g}_2$ are arbitrary, we conclude that the product operation is continuous.
{\textnormal{\textbf{(6)}}} For any $H\leq G$ with $G^{000}_A\leq H$ such that $H$ and $G\setminus H$ are piecewise $\bigwedge${\hyp}definable, we have that $\rfrac{H}{G^{00}_A}$ is a clopen subgroup in the global logic topology. Hence, $\rfrac{G^0_A}{G^{00}_A}$ is the intersection of all the clopen subgroups of $\rfrac{G}{G^{00}_A}$ in the global logic topology. It follows that $\rfrac{G^0_A}{G^{00}_A}$ is closed in the global logic topology, so $G^0_A$ is piecewise $\bigwedge${\hyp}definable. As it is $A${\hyp}invariant, we conclude that $G^0_A$ is piecewise $\bigwedge_A${\hyp}definable. Finally, as every translation is a homeomorphism in $\rfrac{G}{G^{00}_A}$, we conclude that $\rfrac{G^0_A}{G^{00}_A}$ is a normal subgroup of $\rfrac{G}{G^{00}_A}$. Thus, $G^0_A\trianglelefteq G$.
{\textnormal{\textbf{(7)}}} By the Generic Set Lemma (Theorem \ref{t:generic set lemma}) and Theorem \ref{t:piecewise hyperdefinable and local compactness}, $\rfrac{G}{G^{00}_A}$ with the global logic topology is a locally compact topological group if it has a generic piece. In that case, by \cite[Theorem 2.1.4(b)]{dikranjan1990topological}, $\rfrac{G^0_A}{G^{00}_A}$ is the connected component of $\rfrac{G}{G^{00}_A}$. By point \textbf{(5)}, $\rfrac{G^0_A}{G^{000}_A}$ is the connected component of $\rfrac{G}{G^{000}_A}$. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{teo} We now define a final special model{\hyp}theoretic component, which has no analogue in the definable or hyperdefinable cases. I thank Hrushovski for his help, via private conversations, in relation to this result. Recall that an aperiodic topological group is a topological group that has no non{\hyp}trivial compact normal subgroups.
Let $G$ be a piecewise hyperdefinable group with a generic piece. The \emph{aperiodic component} $G^{\mathrm{ap}}$ of $G$ is the smallest piecewise $\bigwedge${\hyp}definable normal subgroup of $G$ with small index such that $\rfrac{G}{G^{\mathrm{ap}}}$ is an aperiodic locally compact topological group with its global logic topology.
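To illustrate the notion of aperiodicity with a standard example: $(\mathbb{R},+)$ is aperiodic, since if $K\leq \mathbb{R}$ is a compact subgroup and $x\in K\setminus\{0\}$, then \[ \{nx\mathrel{:} n\in\mathbb{N}\}\subseteq K \] is unbounded, contradicting compactness, so $K=\{0\}$. On the other hand, the circle group $\mathbb{S}$ is not aperiodic, as it is itself a non{\hyp}trivial compact normal subgroup.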
\begin{lem} \label{l:lemma aperiodic} Let $G$ be an aperiodic locally compact topological group which is the union of $\lambda$ compact subsets with $\lambda\geq \aleph_0$. Then, $|G|\leq (\lambda+2^{\aleph_0})^{\lambda}$.
\begin{dem} By the Gleason{\hyp}Yamabe Theorem \ref{t:gleason yamabe} and Corollary \ref{c:minimal yamabe pairs}, there is an open subgroup $H\leq G$ and a compact subgroup $K\trianglelefteq H$ such that $\rfrac{H}{K}$ is an aperiodic connected Lie group. Thus, $[H:K]\leq 2^{\aleph_0}$. On the other hand, as $G$ is a union of $\lambda$ compact subsets, $[G:H]\leq \lambda$, so $[G:K]\leq \lambda+2^{\aleph_0}$. Now, take $\{a_i\}_{i\in \lambda}$ a set of representatives of $\rfrac{G}{H}$ and write $K_i\coloneqq K^{a_i}=a_iKa_i^{-1}$ for each $i$. As $K\trianglelefteq H$, for any $g\in G$, we have $K^g=K_i$ for some $i\in \lambda$. Note that $[G:K_i]\leq \lambda+ 2^{\aleph_0}$, so $[G:\bigcap_{i\in \lambda} K_i]\leq (\lambda+ 2^{\aleph_0})^{\lambda}$, as the map $g\bigl(\bigcap_{i\in\lambda} K_i\bigr)\mapsto (gK_i)_{i\in\lambda}$ injectively maps $\rfrac{G}{\bigcap_{i\in\lambda} K_i}$ into $\prod_{i\in\lambda}\rfrac{G}{K_i}$. As $G$ is aperiodic, $\bigcap_{i\in \lambda} K_i=1$, so we conclude $|G|\leq (\lambda+ 2^{\aleph_0})^{\lambda}$. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{lem} \begin{teo} \label{t:the aperiodic component} Let $G$ be a piecewise $0${\hyp}hyperdefinable group with a generic piece. Then, $G^{\mathrm{ap}}$ exists, is piecewise $\bigwedge_0${\hyp}definable and does not change by expansions of the language.
\begin{dem} Let $\lambda=\mathrm{cf}(G)$ and $\tau=(\lambda+2^{\aleph_0})^{\lambda}$. By assumption, $\kappa$ is a strong limit cardinal with $\kappa>\lambda+2^{\aleph_0}$, so $\kappa>\tau^+$. For any small subset of parameters $A$, let $\mathcal{H}_A$ be the family of normal subgroups $H\trianglelefteq G$ of small index which are piecewise $\bigwedge_{\leq \tau}${\hyp}definable with parameters from $A$ (i.e. piecewise $\bigwedge_B${\hyp}definable for some $B\subseteq A$ with $|B|\leq \tau$) such that $\rfrac{G}{H}$, with its global logic topology, is an aperiodic locally compact topological group.
{\textnormal{\textbf{Claim:}}} For any $A$ small, $\mathcal{H}_A$ is closed under arbitrary intersections. Furthermore, for any $\mathcal{F}\subseteq \mathcal{H}_A$ there is $\mathcal{F}_0\subseteq \mathcal{F}$ with $|\mathcal{F}_0|\leq \tau$ and $\bigcap \mathcal{F}_0=\bigcap\mathcal{F}$.
{\textnormal{\textbf{Proof of claim:}}} Take $\mathcal{F}\subseteq \mathcal{H}_A$. Then, $H\coloneqq \bigcap \mathcal{F}$ is a piecewise $\bigwedge_A${\hyp}definable normal subgroup of $G$ of small index. As $G$ has a generic piece, $\rfrac{G}{H}$ has a generic piece as well, so $\rfrac{G}{H}$ is a locally compact topological group with its global logic topology by the Generic Set Lemma (Theorem \ref{t:generic set lemma}) and Theorem \ref{t:piecewise hyperdefinable and local compactness}. Take $K\leq G$ such that $\rfrac{K}{H}\trianglelefteq \rfrac{G}{H}$ is a compact normal subgroup in its global logic topology. Take an arbitrary $F\in\mathcal{F}$. We know that $\pi:\ \rfrac{G}{H}\rightarrow\rfrac{G}{F}$ is a continuous homomorphism between the global logic topologies, so the image $\rfrac{KF}{F}$ of $\rfrac{K}{H}$ is a compact normal subgroup of $\rfrac{G}{F}$. As $\rfrac{G}{F}$ is aperiodic, we conclude that $K\leq F$. As $F\in\mathcal{F}$ is arbitrary, we conclude that $K\subseteq\bigcap \mathcal{F}=H$, so $\rfrac{G}{H}$ is aperiodic. Since $\rfrac{G}{H}$ is an aperiodic locally compact topological group, by Lemma \ref{l:lemma aperiodic}, $[G:H]\leq \tau$.
If $|\mathcal{F}|\leq \tau$, then $H$ is piecewise $\bigwedge_{\leq \tau}${\hyp}definable, so $H\in\mathcal{H}_A$. Now, we claim that there is $\mathcal{F}_0\subseteq \mathcal{F}$ with $|\mathcal{F}_0|\leq \tau$ such that $\bigcap \mathcal{F}_0=H$. Indeed, suppose otherwise; then we can find recursively a sequence $(F_i)_{i\in\tau^+}$ of elements in $\mathcal{F}$ such that $H<\bigcap_{i\in \alpha} F_i\subset \bigcap_{i\in \beta}F_i$ for $\beta<\alpha<\tau^+$. Thus, we can find a sequence $(g_i)_{i\in \tau^+}$ such that $g_i\in \rfrac{F_j}{H}$ for $j\leq i$ and $g_i\notin \rfrac{F_{i+1}}{H}$, contradicting that $[G:H]\leq \tau$. \qed
Take a $\tau^+${\hyp}saturated elementary substructure $\mathfrak{N}\preceq \mathfrak{M}$. Set $G^{\mathrm{ap}}=\bigcap \mathcal{H}_N$, so $G^{\mathrm{ap}}\in \mathcal{H}_N$. By $\tau^+${\hyp}saturation of $\mathfrak{N}$, we have that $G^{\mathrm{ap}}$ is $0${\hyp}invariant, so it is piecewise $\bigwedge_0${\hyp}definable. As $G^{\mathrm{ap}}$ is $0${\hyp}invariant and $\mathfrak{N}$ is $\tau^+${\hyp}saturated, we conclude that $G^{\mathrm{ap}}=\bigcap\mathcal{H}_A$ for any $A$. Therefore, $G^{\mathrm{ap}}$ is the smallest piecewise $\bigwedge_{\leq \tau}${\hyp}definable normal subgroup of $G$ with small index such that $\rfrac{G}{G^{\mathrm{ap}}}$ is an aperiodic locally compact topological group with its global logic topology.
Now, we show that $G^{\mathrm{ap}}$ does not change by expansions of the language. In any $\kappa${\hyp}saturated and strongly $\kappa${\hyp}homogeneous $\mathscr{L}'/\mathscr{L}${\hyp}expansion $\mathfrak{M}'$ of an elementary extension of $\mathfrak{M}$, we find a piecewise $\bigwedge_0${\hyp}definable normal subgroup $G^{\mathrm{ap}}_{\mathscr{L}'}\trianglelefteq G$ which is the smallest piecewise $\bigwedge_{\leq \tau}${\hyp}definable normal subgroup of $G$ of small index such that $\rfrac{G}{G^{\mathrm{ap}}_{\mathscr{L}'}}$ is an aperiodic locally compact topological group with its global logic topology. Since $[G:G^{\mathrm{ap}}_{\mathscr{L}'}]\leq\tau$ by Lemma \ref{l:lemma aperiodic}, there is a large enough $\kappa${\hyp}saturated and strongly $\kappa${\hyp}homogeneous $\mathscr{L}_0/\mathscr{L}${\hyp}expansion $\mathfrak{M}_0$ of an elementary extension of $\mathfrak{M}$ such that, for any further $\kappa${\hyp}saturated and strongly $\kappa${\hyp}homogeneous $\mathscr{L}_1/\mathscr{L}_0${\hyp}expansion $\mathfrak{M}_1$ of an elementary extension of $\mathfrak{M}_0$, we have that $G^{\mathrm{ap}}_0(\mathfrak{M}_1)=G^{\mathrm{ap}}_1$ --- where $G^{\mathrm{ap}}_1$ is computed in $\mathfrak{M}_1$, $G^{\mathrm{ap}}_0$ is computed in $\mathfrak{M}_0$ and $G^{\mathrm{ap}}_0(\mathfrak{M}_1)$ is the realisation of $G^{\mathrm{ap}}_0$ in ${\mathfrak{M}_1}_{\mid \mathscr{L}_0}$.
Take this base expansion $\mathfrak{M}_0$ of an elementary extension of $\mathfrak{M}$, with language $\mathscr{L}_0$. By replacing $\mathfrak{M}_0$ by an elementary extension, we can assume that the $\mathscr{L}${\hyp}reduct ${\mathfrak{M}_0}_{\mid \mathscr{L}}$ of $\mathfrak{M}_0$ is a $\kappa${\hyp}saturated and strongly $\kappa${\hyp}homogeneous elementary extension of $\mathfrak{M}$. Let $G^{\mathrm{ap}}_0$ be computed in $\mathfrak{M}_0$. Take $\alpha\in\mathrm{Aut}({\mathfrak{M}_0}_{\mid\mathscr{L}})$. Consider the language $\mathscr{L}_1$ expanding $\mathscr{L}_0$ by adding a new symbol $\alpha x$ of the same sort as $x$ for each symbol $x$ of $\mathscr{L}_0$ which is not in $\mathscr{L}$. Take the $\mathscr{L}_1${\hyp}expansion $\mathfrak{M}'_1$ of $\mathfrak{M}_0$ given by interpreting $(\alpha x)^{\mathfrak{M}'_1}=\alpha( x^{\mathfrak{M}_0})$. Let $\mathfrak{M}_1$ be a $\kappa${\hyp}saturated and strongly $\kappa${\hyp}homogeneous elementary extension of $\mathfrak{M}'_1$. Let $G^{\mathrm{ap}}_0(\mathfrak{M}_1)$ be the realisation of $G^{\mathrm{ap}}_0$ in ${\mathfrak{M}_1}_{\mid\mathscr{L}_0}$, $G^{\mathrm{ap}}_1$ be computed in $\mathfrak{M}_1$ and $G^{\mathrm{ap}}_\alpha$ be computed in ${\mathfrak{M}_1}_{\mid \mathscr{L}_1\setminus (\mathscr{L}_0\setminus \mathscr{L})}$. By the choice of $\mathfrak{M}_0$, we know that $G^{\mathrm{ap}}_0(\mathfrak{M}_1)=G^{\mathrm{ap}}_1\subseteq G^{\mathrm{ap}}_0\cap G^{\mathrm{ap}}_\alpha$. Since $G^{\mathrm{ap}}_\alpha(\mathfrak{M}_0)$ is precisely $\alpha(G^{\mathrm{ap}}_0)$, we get that $G^{\mathrm{ap}}_0\subseteq \alpha(G^{\mathrm{ap}}_0)$. Therefore, as $\alpha$ is arbitrary, we conclude that $G^{\mathrm{ap}}_0$ is $\mathrm{Aut}({\mathfrak{M}_0}_{\mid\mathscr{L}})${\hyp}invariant. By Beth's Definability Theorem \cite[Exercise 6.1.4]{tent2012course}, we conclude that $G^{\mathrm{ap}}_0$ is already piecewise $\bigwedge_0${\hyp}definable in $\mathscr{L}$.
Therefore, $G^{\mathrm{ap}}_0=G^{\mathrm{ap}}$ and $G^{\mathrm{ap}}$ does not change by expansions of the language.
Finally, by invariance under arbitrary expansions of the language, we have that $G^{\mathrm{ap}}$ is the smallest piecewise $\bigwedge${\hyp}definable normal subgroup of $G$ of small index such that $\rfrac{G}{G^{\mathrm{ap}}}$ is an aperiodic locally compact topological group with its global logic topology. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{teo}
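As a quick sanity check on the definition (our remark, not from the text; it uses only that $G^{00}_A\leq G^{\mathrm{ap}}$, so that $\rfrac{G}{G^{\mathrm{ap}}}$ is a continuous image of $\rfrac{G}{G^{00}_A}$):

```latex
% If G/G^{00}_A is compact, then its continuous image G/G^{ap} is a
% compact group; a compact group is a compact normal subgroup of itself,
% so aperiodicity forces it to be trivial:
\[
\rfrac{G}{G^{00}_A}\ \mathrm{compact}
\;\Longrightarrow\;
\rfrac{G}{G^{\mathrm{ap}}}\ \mathrm{compact\ and\ aperiodic}
\;\Longrightarrow\;
\rfrac{G}{G^{\mathrm{ap}}}=1,
\quad \mathrm{i.e.}\quad G^{\mathrm{ap}}=G.
\]
```

In other words, the aperiodic component only carries information in the non{\hyp}compact (locally compact) regime.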
\subsection{Lie cores} Now, we adapt the results about Lie cores to piecewise hyperdefinable groups.
Let $G$ be a piecewise $A${\hyp}hyperdefinable group. An \emph{$A${\hyp}Yamabe pair} of $G$ is a pair $(K,H)$ of subgroups $K\trianglelefteq H\leq G$ which is a Yamabe pair modulo $G^{00}_A$ for the global logic topology of $\rfrac{G}{G^{00}_A}$ and such that $\rfrac{K}{G^{00}_A}$ is $\bigwedge${\hyp}definable. In other words, it is a pair satisfying the following three properties: \begin{enumerate}[(i)] \item $H\leq G$ is a piecewise $\bigwedge${\hyp}definable subgroup whose complement is also piecewise $\bigwedge${\hyp}definable. \item $K\trianglelefteq H$ is a piecewise $\bigwedge${\hyp}definable normal subgroup of $H$ such that $G^{00}_A\leq K$ and $\rfrac{K}{G^{00}_A}$ is $\bigwedge${\hyp}definable. \item $L\coloneqq \rfrac{H}{K}$ with the respective global logic topology is a finite dimensional Lie group. \end{enumerate} We say that $H$ is the \emph{domain}, $K$ is the \emph{kernel} and $L$ is the \emph{Lie core}. Write $\pi\coloneqq\pi_{H/K}:\ H\rightarrow L$ and $\widetilde{\pi}\coloneqq\widetilde{\pi}_{H/K}:\ \rfrac{H}{G^{00}_A}\rightarrow L$ for the quotient maps with $\pi=\widetilde{\pi}\circ\pi_{G/G^{00}_A}$ where $\pi_{G/G^{00}_A}:\ G\rightarrow \rfrac{G}{G^{00}_A}$. We say that a Lie group is an \emph{$A${\hyp}Lie core} of $G$ if it is isomorphic, as a Lie group, to the Lie core of some $A${\hyp}Yamabe pair of $G$. \begin{obss} \label{o:remarks lie core} Let $G$ be a piecewise $A${\hyp}hyperdefinable group. Let $(K,H)$ be an $A${\hyp}Yamabe pair of $G$ and $\pi\coloneqq \pi_{H/K}:\ H\rightarrow L$ its Lie core and $\widetilde{\pi}\coloneqq\widetilde{\pi}_{H/K}:\ \rfrac{H}{G^{00}_A}\rightarrow L$ such that $\pi=\widetilde{\pi}\circ\pi_{G/G^{00}_A}$.
{\textnormal{\textbf{(1)}}} The condition that $G^{00}\leq K$ amounts to requiring that $[G:H]<\kappa$. Indeed, as $\rfrac{H}{K}$ is Hausdorff, we already have that $[H:K]<\kappa$. Therefore, if $[G:H]<\kappa$, we conclude that $[G:K]<\kappa$, so $G^{00}\leq K$ (for some set of parameters). Thus, working modulo $G^{00}$ amounts to saying that $H$ is large in some sense.
On the other hand, the condition that $\rfrac{K}{G^{00}_A}$ is $\bigwedge${\hyp}definable may seem superfluous. Indeed, as $(\rfrac{K}{G^{00}_A},\rfrac{H}{G^{00}_A})$ is a Yamabe pair, $\rfrac{K}{G^{00}_A}$ is compact. If $\rfrac{G}{G^{00}_A}$ has a generic piece (as we will assume in the rest of the section), this is enough to conclude that $\rfrac{K}{G^{00}_A}$ is $\bigwedge${\hyp}definable. However, in general, $\rfrac{G}{G^{00}_A}$ may have compact sets that are not $\bigwedge${\hyp}definable, so we need to add this condition.
{\textnormal{\textbf{(2)}}} Note that $\widetilde{\pi}$ is a piecewise bounded and proper $\bigwedge${\hyp}definable map. Thus, by Proposition \ref{p:functions logic topology}, $\widetilde{\pi}$ is continuous and closed between the global logic topologies. By the Isomorphism Theorem \ref{t:isomorphism theorem}, since $\widetilde{\pi}$ is an onto homomorphism, we have that $\rfrac{(H/G^{00}_A)}{(K/G^{00}_A)}$ and $\rfrac{H}{K}$ are isomorphic as piecewise hyperdefinable groups and $\widetilde{\pi}$ is also an open map between the global logic topologies. Finally, as it has compact fibres, $\widetilde{\pi}$ is also proper \cite[Theorem 3.7.2]{engelking1989general}. In sum, $\widetilde{\pi}$ deeply connects the logic topology of $\rfrac{G}{G^{00}_A}$ and the geometry of $L$.
When $G^{00}_A$ is $\bigwedge_A${\hyp}definable, $\pi$ is also a piecewise bounded and proper $\bigwedge${\hyp}definable surjective homomorphism. Consequently, $\pi$ is a continuous, closed and proper map between the logic topologies (with enough parameters), concluding that the relation between $\rfrac{G}{G^{00}_A}$ and $L$ can be mostly lifted into a relation between $G$ and $L$. In that special case, we say that $\pi:\ H\rightarrow L$ is a \emph{Lie model} of $G$. In general, however, $G^{00}_A$ is only piecewise $\bigwedge_A${\hyp}definable and $\pi$ is only piecewise bounded --- we could even have $G=G^{00}_A$ (e.g. take $G=\bigcup_{n\in\mathbb{N}} [-a_n,a_n]$ with $a_n\in o(a_{n+1})$ for each $n\in\mathbb{N}$ in the theory of real closed fields). In Section 3, we give sufficient conditions to show that $G^{00}_A$ is $\bigwedge_A${\hyp}definable.
{\textnormal{\textbf{(3)}}} $L$ is a minimal Lie core if and only if it is an aperiodic connected Lie group. In that case, $\rfrac{K}{G^{00}_A}$ is the maximal compact normal subgroup of $\rfrac{H}{G^{00}_A}$, so it is its maximal $\bigwedge${\hyp}definable normal subgroup. In particular, every automorphism over $A$ leaving $H$ invariant leaves $K$ invariant.
{\textnormal{\textbf{(4)}}} As $L$ is locally compact and $\widetilde{\pi}$ is proper and continuous, we conclude that $\rfrac{H}{G^{00}_A}$ is locally compact. Since $\rfrac{H}{G^{00}_A}$ is open, we conclude that $\rfrac{G}{G^{00}_A}$ is locally compact with the global logic topology. Thus, if $G$ is a countably piecewise hyperdefinable group, by Theorem \ref{t:countably piecewise hyperdefinable groups and topological groups}, Theorem \ref{t:piecewise hyperdefinable and local compactness} and the Generic Set Lemma (Theorem \ref{t:generic set lemma}), we conclude that $G$ has a generic piece modulo $G^{00}_A$. For that reason, we will assume from now on that $G$ has a generic piece modulo $G^{00}_A$.
{\textnormal{\textbf{(5)}}} Note that our definitions of Lie core and Lie model extend the notion of Lie model used in \cite{hrushovski2011stable}. Suppose that $G$ is a piecewise $A${\hyp}definable group and $G^{00}_A$ is $\bigwedge_A${\hyp}definable. Then, $\pi$ is proper and continuous so, for any $\Gamma\subseteq U\subseteq L$ with $\Gamma$ compact and $U$ open, $\pi^{-1}[\Gamma]$ is $\bigwedge${\hyp}definable and $\pi^{-1}[U]$ is $\bigvee${\hyp}definable. Hence, there is a definable subset $D\subseteq G$ such that \[\pi^{-1}[\Gamma]\subseteq D\subseteq \pi^{-1}[U].\] As $L$ is first countable, it is metrisable by Birkhoff{\hyp}Kakutani Theorem \cite[Theorem 1.5.2]{tao2014hilbert}. Thus, every closed set in $L$ is $\mathrm{G_{\delta}}$ \cite[Example 2, page 249]{munkres1999topology}. In particular, it follows that the preimage of any compact set is $\bigwedge_{\omega}${\hyp}definable. When $L$ is connected, it is also second countable, so the preimage of every open set is $\bigvee_{\omega}${\hyp}definable. \end{obss} \begin{teo} \label{t:logic existence of lie core} Let $G$ be a piecewise $A${\hyp}hyperdefinable group with a generic piece modulo $G^{00}_A$. Then, $G$ has an $A${\hyp}Lie core. If $G$ is countably piecewise hyperdefinable, this condition is also necessary.
Furthermore, for any $U$ such that $\rfrac{U}{G^{00}_A}$ is an open neighbourhood of the identity in the global logic topology of $\rfrac{G}{G^{00}_A}$, there is an $A${\hyp}Yamabe pair $(K,H)$ of $G$ with $K\subseteq UG^{00}_A$. \begin{dem} ($\Rightarrow$) From Gleason{\hyp}Yamabe Theorem \ref{t:gleason yamabe}, using the Generic Set Lemma (Theorem \ref{t:generic set lemma}), Theorem \ref{t:piecewise hyperdefinable and local compactness} and Proposition \ref{p:compact locally hyperdefinable}. ($\Leftarrow$) We have explained the necessity of the generic piece assumption in Remark \ref{o:remarks lie core}(4).
\setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{teo}
\begin{teo} \label{t:logic existence of minimal lie core} Let $G$ be a piecewise $A${\hyp}hyperdefinable group with a generic piece modulo $G^{00}_A$ and $(K_1,H_1)$ an $A${\hyp}Yamabe pair of $G$. Then, $G$ has a minimal $A${\hyp}Yamabe pair $(K,H)$ smaller than or equal to $(K_1,H_1)$. Furthermore, $H\subseteq U^2$ for any $U$ containing $K_1$ such that $\rfrac{U}{G^{00}_A}$ is a clopen neighbourhood of $\rfrac{K_1}{G^{00}_A}$ in the global logic topology. \begin{dem} Clear from Corollary \ref{c:minimal yamabe pairs}, using the Generic Set Lemma (Theorem \ref{t:generic set lemma}), Theorem \ref{t:piecewise hyperdefinable and local compactness} and Proposition \ref{p:compact locally hyperdefinable}. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{teo} Applying also Corollary \ref{c:uniqueness minimal lie core}, we conclude the following result. \begin{teo} \label{t:logic uniqueness of minimal lie core} Let $G$ be a piecewise hyperdefinable group with a generic piece modulo $G^{00}_A$. Then, $G$ has a unique minimal $A${\hyp}Yamabe pair up to equivalence. \end{teo} By Proposition \ref{p:global minimal lie core map}, we get a \emph{global minimal Lie core map} $\widetilde{\pi}_L:\ \rfrac{D_L}{G^{00}_A}\rightarrow L$ extending all the minimal $A${\hyp}Yamabe pairs, which is unique up to isomorphisms of $L$. Let $\pi_L:\ D_L\rightarrow L$ be the map given by $\pi_L=\widetilde{\pi}_L\circ \pi_{G/G^{00}_A\ \mid D_L}$. Here, $D_L$ is the union of all the domains of minimal $A${\hyp}Yamabe pairs and $\ker(\pi_L)\coloneqq \pi_L^{-1}(1)$ is the union of all the kernels of minimal $A${\hyp}Yamabe pairs. Consequently, $D_L$ and $\ker(\pi_L)$ are $A${\hyp}invariant. \begin{obs} As noted in \cite{hrushovski2011stable}, the uniqueness of the minimal Lie core is achieved at a price. Indeed, while Gleason{\hyp}Yamabe Theorem \ref{t:gleason yamabe} gives us Yamabe pairs of arbitrarily small kernel, we have lost the control over the kernel in Theorem \ref{t:logic existence of minimal lie core}. If we do not care about uniqueness (as in \cite{breuillard2012structure} or \cite{massicot2015approximate}), it could be better just to apply Theorem \ref{t:logic existence of lie core} to find a Yamabe pair $(K,H)$ with arbitrarily small kernel. Also, it may be natural to apply Gleason{\hyp}Yamabe{\hyp}Carolino Theorem \ref{t:gleason yamabe carolino} rather than Gleason{\hyp}Yamabe Theorem \ref{t:gleason yamabe} to get some extra control on some parameters. \end{obs} In Proposition \ref{p:minimal yamabe pairs with maximal domain}, we gave a criterion to find minimal Yamabe pairs with maximal domain in topological groups. 
Applying it modulo $G^{00}_A$, this result can be easily adapted to piecewise hyperdefinable groups.
Similarly, we can easily adapt Proposition \ref{p:minimal yamabe pair with minimal domain} and Proposition \ref{p:asymptotic minimal yamabe pair with minimal domain} to the context of piecewise hyperdefinable groups by applying them modulo $G^{00}_A$. In particular, it follows that the minimal $A${\hyp}Lie core is piecewise $A${\hyp}hyperdefinable: \begin{prop} \label{p:connected component and minimal lie core} Let $G$ be a piecewise $A${\hyp}hyperdefinable group with a generic piece modulo $G^{00}_A$ and let $L$ be the minimal $A${\hyp}Lie core of $G$. Then, the restriction to $G^0_A$ of the global minimal Lie core map $\pi_{L\mid G^0_A}:\ G^0_A\rightarrow L$ is a piecewise bounded $\bigwedge${\hyp}definable surjective group homomorphism. Furthermore, we conclude that $L\cong \rfrac{G^0_A}{\ker(\pi_{L\mid G^0_A})}$. \end{prop} The previous Proposition \ref{p:connected component and minimal lie core} gives us a canonical presentation of the minimal $A${\hyp}Lie core of $G$ as the piecewise $A${\hyp}hyperdefinable group $\rfrac{G^0_A}{K}$ with $K\coloneqq\ker(\pi_{L\mid G^0_A})$. Similarly, we get a canonical $A${\hyp}invariant presentation of the global minimal Lie core map $\pi_L:\ D_L\rightarrow L$ by taking $\pi_{L\mid G^0_A}=\pi_{G^0_A/K}$. We now give a more precise description of $K\coloneqq\ker(\pi_{L\mid G^0_A})$ using $G^{\mathrm{ap}}$. \begin{lem} \label{l:minimal yamabe pair and aperiodic component} Let $G$ be a piecewise $0${\hyp}hyperdefinable group with a generic piece and $G^{\mathrm{ap}}$ the aperiodic component of $G$. Let $(K,H)$ be a minimal $A${\hyp}Yamabe pair of $G$ with Lie core $\pi_{H/K}:\ H\rightarrow L$. Then, $G^{\mathrm{ap}}\cap H\leq K$. \begin{dem} Let $B$ be the small set of parameters with $A\subseteq B$ such that the $B${\hyp}logic topology of $\rfrac{G}{G^{00}_A}$ is its global logic topology. Denote by $\mathrm{cl}_B$ the closure in the $B${\hyp}logic topology of $G$. 
We define by recursion the sequence $(J_\alpha)_{\alpha\in{\mathbf{\mathbbm{O}\mathrm{n}}}}$ of piecewise $\bigwedge_B${\hyp}definable normal subgroups of $G$ given by $J_0=G^{00}_A$, $J_{\gamma}=\mathrm{cl}_B(\bigcup_{i\in \gamma} J_i)$ for $\gamma$ limit and $J_{\alpha+1}=\mathrm{cl}_B(\bigcup \mathcal{K}_\alpha)$ where $\mathcal{K}_\alpha$ is the family of piecewise $\bigwedge${\hyp}definable normal subgroups which are $\bigwedge${\hyp}definable modulo $J_\alpha$.
{\textnormal{\textbf{Claim:}}} For $\alpha_0\in{\mathbf{\mathbbm{O}\mathrm{n}}}$ large enough, $J_\alpha=G^{\mathrm{ap}}$ for any $\alpha\geq \alpha_0$.
{\textnormal{\textbf{Proof of claim:}}} We prove inductively that $J_\alpha\subseteq G^{\mathrm{ap}}$ for any $\alpha\in{\mathbf{\mathbbm{O}\mathrm{n}}}$. Obviously, $J_0\subseteq G^{\mathrm{ap}}$. For $\gamma$ limit, assuming that $J_i\subseteq G^{\mathrm{ap}}$ for each $i\in \gamma$, as $G^{\mathrm{ap}}$ is piecewise $\bigwedge_B${\hyp}definable, we get that $J_\gamma\subseteq G^{\mathrm{ap}}$. Finally, assuming that $J_\alpha\subseteq G^{\mathrm{ap}}$, we have a piecewise bounded $\bigwedge${\hyp}definable onto homomorphism $\pi:\ \rfrac{G}{J_{\alpha}}\rightarrow \rfrac{G}{G^{\mathrm{ap}}}$. As $\rfrac{G}{G^{\mathrm{ap}}}$ is aperiodic, we have that every piecewise $\bigwedge${\hyp}definable normal subgroup of $G$ which is $\bigwedge${\hyp}definable modulo $J_{\alpha}$ must be contained in $G^{\mathrm{ap}}$. Therefore, $\bigcup \mathcal{K}_\alpha\subseteq G^{\mathrm{ap}}$, concluding that $J_{\alpha+1}\subseteq G^{\mathrm{ap}}$.
On the other hand, as there are only a small number of piecewise $\bigwedge_B${\hyp}definable normal subgroups of $G$, for large enough $\alpha_0$ we must have $J_{\alpha_0+1}=J_{\alpha_0}$. In particular, this means that $\rfrac{G}{J_{\alpha_0}}$ is an aperiodic locally compact topological group with its global logic topology, so $G^{\mathrm{ap}}\leq J_{\alpha_0}$. Thus, $G^{\mathrm{ap}}=J_\beta$ for $\beta\geq \alpha_0$. \ $\scriptscriptstyle{\square}$
Now, we prove by induction that, for any $\alpha\in {\mathbf{\mathbbm{O}\mathrm{n}}}$, we have $J_\alpha\cap H\leq K$. Obviously, $J_0\cap H\leq K$. For $\gamma$ limit, assuming that $J_i\cap H\leq K$ for any $i\in \gamma$, we have $H\cap \bigcup_{i\in\gamma} J_i\leq K$. Therefore, $H\cap J_{\gamma}=\mathrm{cl}_B(H\cap \bigcup_{i\in\gamma}J_i)\subseteq K$, as $H$ is clopen and $K$ is closed in the $B${\hyp}logic topology. Finally, assuming $J_\alpha\cap H\leq K$, we have a piecewise bounded $\bigwedge${\hyp}definable onto homomorphism $\pi:\ \rfrac{H}{J_\alpha}\rightarrow L=\rfrac{H}{K}$. As $L$ is aperiodic, we have that $H\cap \bigcup\mathcal{K}_\alpha\subseteq K$. Therefore, $H\cap J_{\alpha+1}=\mathrm{cl}_B(H\cap \bigcup \mathcal{K}_\alpha)\subseteq K$, as $H$ is clopen and $K$ is closed in the $B${\hyp}logic topology. In particular, by the claim, we conclude that $G^{\mathrm{ap}}\cap H\leq K$. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{lem}
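The transfinite construction in the proof above can be compressed into a single chain (a restatement of the claim, in the notation of the proof):

```latex
% J_0 = G^{00}_A, and each successor step closes off all piecewise
% wedge-definable normal subgroups that are wedge-definable modulo the
% previous stage; the chain is increasing and must stabilise because
% there are only a small number of piecewise \bigwedge_B-definable
% normal subgroups of G. The stable value is the aperiodic component:
\[
G^{00}_A=J_0\ \subseteq\ J_1\ \subseteq\ \cdots\ \subseteq\
J_{\alpha_0}=J_{\alpha_0+1}=G^{\mathrm{ap}}.
\]
```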
\begin{teo} \label{t:canonical representation of the minimal lie core} Let $G$ be a piecewise $A${\hyp}hyperdefinable group with a generic piece. Let $L$ be the minimal $A${\hyp}Lie core of $G$ and $\pi_{L\mid G^0_A}:\ G^0_A\rightarrow L$ the restriction to $G^0_A$ of the global minimal Lie core map of $L$. Then, $G^0_A\cap G^{\mathrm{ap}}=\ker(\pi_{L\mid G^0_A})$. In particular, $L\cong\rfrac{G^0_A}{G^{\mathrm{ap}}}$. \begin{dem} By Lemma \ref{l:minimal yamabe pair and aperiodic component}, we know that $G^{\mathrm{ap}}\cap G^0_A\leq \ker(\pi_{L\mid G^0_A})$. On the other hand, $\ker(\pi_{L\mid G^0_A})=G^0_A\cap \pi_L^{-1}(1)\trianglelefteq G$ is $\bigwedge${\hyp}definable modulo $G^{00}_A\leq G^{\mathrm{ap}}$. Therefore, $\rfrac{\ker(\pi_{L\mid G^0_A})}{G^{\mathrm{ap}}}$ is a compact normal subgroup of $\rfrac{G}{G^{\mathrm{ap}}}$. As it is aperiodic, we conclude that $\ker(\pi_{L\mid G^0_A})\leq G^{\mathrm{ap}}$, so $\ker(\pi_{L\mid G^0_A})=G^{\mathrm{ap}}\cap G^0_A$. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{teo} In sum, for any piecewise $A${\hyp}hyperdefinable group $G$ with a generic piece, we have the following structure in terms of the components $G^0_A$, $G^{00}_A$ and $G^{\mathrm{ap}}$: \[\begin{array}{ccl}1.& \rfrac{G^{\mathrm{ap}}\cap G^0_A}{G^{00}_A}& \mathrm{is\ a\ compact\ topological\ group.}\\ 2.&\rfrac{G^0_A}{G^{\mathrm{ap}}}&\mathrm{is\ an\ aperiodic\ connected\ Lie\ group.}\\ 3.&\rfrac{G}{G^0_A}&\mathrm{is\ a\ totally\ disconnected\ locally\ compact}\\ & & \mathrm{topological\ group,\ i.e.\ a}\ locally\ profinite\ group. \end{array}\] Unfortunately, if $G^{00}_A=G^{\mathrm{ap}}=G^0_A=G$, all the previous results say nothing about $G$. In the following section, we extend the Stabilizer Theorem to the context of piecewise hyperdefinable groups. This theorem gives sufficient conditions to conclude that, with enough parameters, $G^{00}$ is $\bigwedge${\hyp}definable. As we pointed out at the beginning of the section, in this particular situation, the minimal Lie core gives very precise information about $G$ since the quotient homomorphism is piecewise bounded and proper.
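The three quotients just listed can be read off a single tower of normal subgroups (a summary of the statements above, using $G^{00}_A\leq G^{\mathrm{ap}}$ from Theorem \ref{t:canonical representation of the minimal lie core}):

```latex
% Tower of components; all four subgroups are normal in G.
\[
G^{00}_A\ \trianglelefteq\ G^{\mathrm{ap}}\cap G^0_A\ \trianglelefteq\ G^0_A\ \trianglelefteq\ G,
\]
% with successive quotients: a compact topological group, an aperiodic
% connected Lie group (the minimal Lie core), and a locally profinite group.
```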
\
We now note that the minimal Lie core is independent of the parameters. Furthermore, as $G^{\mathrm{ap}}$ is independent of expansions of the language, so is the minimal Lie core. \begin{coro} \label{c:independence of expansions of the minimal lie core} Let $G$ be a piecewise $0${\hyp}hyperdefinable group with a generic piece. Then, the minimal $A${\hyp}Lie core of $G$ is isomorphic to the minimal $0${\hyp}Lie core of $G$. Furthermore, the minimal Lie core of $G$ does not change by expansions of the language. \begin{dem} By Lemma \ref{l:minimal yamabe pair and aperiodic component}, we have that the minimal $A${\hyp}Lie core of $G$ is isomorphic to the minimal Lie core of $\rfrac{G}{G^{\mathrm{ap}}}$. As the latter does not depend on parameters or expansions of the language, we conclude.
\setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{coro}
The \emph{$A${\hyp}Lie rank} $\mathrm{Lrank}_A(G)$ of a piecewise hyperdefinable group $G$ with a generic piece modulo $G^{00}_A$ is the dimension of its minimal $A${\hyp}Lie core. As a consequence of Theorem \ref{t:logic uniqueness of minimal lie core}, $\mathrm{Lrank}_A(G)$ is a well{\hyp}defined invariant. Note that, by Corollary \ref{c:independence of expansions of the minimal lie core}, we have that $\mathrm{Lrank}(G)\coloneqq\mathrm{Lrank}_A(G)$ does not depend on the parameters when $G$ has a generic piece. \begin{prop} \label{p:lie rank} Let $G$ be a piecewise $A${\hyp}hyperdefinable group with a generic piece modulo $G^{00}_A$ and $N\trianglelefteq G$ a piecewise $\bigwedge_A${\hyp}definable normal subgroup of small index. Then, \[\mathrm{Lrank}_A(G)\geq \mathrm{Lrank}_A(G/N)+\mathrm{Lrank}_A(N).\] More precisely, let $L$ and $L_{G/N}$ be the minimal $A${\hyp}Lie cores of $G$ and $\rfrac{G}{N}$ respectively, presented with the canonical piecewise $A${\hyp}hyperdefinable structures given by Proposition \ref{p:connected component and minimal lie core}. Let $\pi_L$ and $\pi_{L_{G/N}}$ be their canonical global minimal Lie core maps. Let $(K,H)$ be a minimal $A${\hyp}Yamabe pair of $G$. Then:
{\textnormal{\textbf{(1)}}} $(K\cap N,H\cap N)$ is an $A${\hyp}Yamabe pair of $N$ with Lie core $\rfrac{H\cap N}{K\cap N}\cong L_N\coloneqq\pi_{L\mid H}[N]$. The connected component of $L_N$ is aperiodic so, in particular, $\mathrm{Lrank}_A(N)=\mathrm{dim}(L_N)$.
{\textnormal{\textbf{(2)}}} There is an $\bigwedge${\hyp}definable normal subgroup $T\trianglelefteq \rfrac{H}{N}$ with $\rfrac{K}{N}\trianglelefteq T$ such that $(T,\rfrac{H}{N})$ is a minimal $A${\hyp}Yamabe pair of $\rfrac{G}{N}$.
{\textnormal{\textbf{(3)}}} There is a piecewise bounded $\bigwedge_A${\hyp}definable surjective group homomorphism $\psi:\ L\rightarrow L_{G/N}$ such that $\psi\circ \pi_L=\pi_{L_{G/N}}\circ \pi_{G/N}$ (on the domain of $\pi_L$). \begin{dem} {\textnormal{\textbf{(1)}}} Firstly, note that $G^{00}_A\leq N$, so we get $N^{00}_A=G^{00}_A$. We have that $\rfrac{H\cap N}{K\cap N}\cong \rfrac{H\cap N}{K}\cong L_N$. Note that $\rfrac{H\cap N}{G^{00}_A}$ is a closed normal subgroup of $\rfrac{H}{G^{00}_A}$. As $\widetilde{\pi}_{H/K}:\ \rfrac{H}{G^{00}_A}\rightarrow \rfrac{H}{K}$ is closed between the global logic topologies, we get that $L_N\cong \rfrac{H\cap N}{K}\cong \rfrac{((H\cap N)/G^{00}_A)}{(K/G^{00}_A)}$ is closed in $L\cong \rfrac{H}{K}\cong \rfrac{(H/G^{00}_A)}{(K/G^{00}_A)}$. Therefore, $L_N$ is a closed normal subgroup of $L$, concluding that $L_N$ is a Lie group. Then, $(K\cap N,H\cap N)$ is an $A${\hyp}Yamabe pair of $N$ with Lie core (isomorphic to) $L_N$. Let $L_N^0$ be the connected component of $L_N$. By Lemma \ref{l:maximal compact normal subgroup}, $L^0_N$ has a unique maximal compact normal subgroup $K_{N}$. Thus, by uniqueness, $K_N$ is characteristic in $L^0_N$, which is characteristic in $L_N$. Therefore, $K_N$ is a compact normal subgroup of $L$, so it is trivial by aperiodicity of $L$. In other words, $L^0_N$ is aperiodic. Thus, $L^0_N$ is the minimal $A${\hyp}Lie core of $N$. In particular, $\mathrm{Lrank}_A(N)=\mathrm{dim}(L_N)$.
{\textnormal{\textbf{(2)}}} By \textbf{(1)}, we know that $L_N\trianglelefteq L$ is a closed subgroup, so $L_0\coloneqq\rfrac{L}{L_N}$ is a Lie group too. As $L$ is connected, $L_0$ is connected too. By Lemma \ref{l:maximal compact normal subgroup}, there is a maximal compact normal subgroup $T_0\trianglelefteq L_0$. Then, $\rfrac{L_0}{T_0}$ is a connected aperiodic Lie group.
Since $\rfrac{G}{G^{00}_A}$ is a topological group, we know that the quotient homomorphism \[\pi_{(G/G^{00}_A)/(N/G^{00}_A)}:\ \rfrac{G}{G^{00}_A}\rightarrow \rfrac{(G/G^{00}_A)}{(N/G^{00}_A)}\cong \rfrac{G}{N}\] is an open map. Therefore, $\rfrac{H}{N}$ is an open subgroup of $\rfrac{G}{N}$. Now, $\rfrac{G}{N}$ is small and contains a generic set, so $\rfrac{G}{N}$ is a locally compact topological group by the Generic Set Lemma (Theorem \ref{t:generic set lemma}) and Theorem \ref{t:piecewise hyperdefinable and local compactness}. Thus, $\rfrac{H}{N}$ is clopen. Consider $\phi_0:\ \rfrac{H}{N}\rightarrow L_0$ given by $\phi_0\circ\pi_{(G/N)\mid H}=\pi_{L/L_N}\circ\pi_{L\mid H}$. It is clear that $\phi_0$ is a piecewise bounded $\bigwedge${\hyp}definable surjective group homomorphism with kernel $\rfrac{K}{N}$, which is $\bigwedge${\hyp}definable. Therefore, it is also piecewise proper. By the Isomorphism Theorem \ref{t:isomorphism theorem}, it follows that $(\rfrac{K}{N},\rfrac{H}{N})$ is an $A${\hyp}Yamabe pair of $\rfrac{G}{N}$ with Lie core (isomorphic to) $L_0$.
As $L_0$ is locally hyperdefinable, $T_0$ is $\bigwedge${\hyp}definable. Thus, $\pi_{L_0/T_0}:\ L_0\rightarrow \rfrac{L_0}{T_0}$ is a piecewise bounded and proper $\bigwedge${\hyp}definable surjective group homomorphism. Take $\phi=\pi_{L_0/T_0}\circ\phi_0:\ \rfrac{H}{N}\rightarrow \rfrac{L_0}{T_0}$. Then, $\phi$ is a piecewise bounded and proper $\bigwedge${\hyp}definable surjective group homomorphism. By the Isomorphism Theorem \ref{t:isomorphism theorem}, we conclude that $\rfrac{L_0}{T_0}\cong \rfrac{(H/N)}{T}$ where $T\coloneqq \ker(\phi)$ is an $\bigwedge${\hyp}definable normal subgroup of $\rfrac{H}{N}$ with $\rfrac{K}{N}\trianglelefteq T$. Consequently, $(T,\rfrac{H}{N})$ is a minimal $A${\hyp}Yamabe pair of $\rfrac{G}{N}$ with Lie core (isomorphic to) $\rfrac{L_0}{T_0}\cong L_{G/N}$.
{\textnormal{\textbf{(3)}}} By the Isomorphism Theorem \ref{t:isomorphism theorem}, take an isomorphism $\eta:\ \rfrac{L_0}{T_0}\rightarrow L_{G/N}$ such that $\pi_{L_{G/N}\mid (\rfrac{H}{N})}=\eta\circ\phi$. Consider $\psi=\eta \circ \pi_{L_0/T_0}\circ\pi_{L/L_N}:\ L\rightarrow L_{G/N}$ --- note that, a priori, the definition of $\psi$ depends on $(K,H)$. Obviously, $\psi$ is a piecewise bounded $\bigwedge${\hyp}definable onto group homomorphism. Also, $\psi\circ \pi_{L\mid H}=\pi_{L_{G/N}\mid (\rfrac{H}{N})}\circ \pi_{(G/N)\mid H}$.
Let $(K',H')$ be any other minimal $A${\hyp}Yamabe pair of $G$. For $h'\in H'$, there is $h\in H\cap H'$ such that $\pi_L(h')=\pi_L(h)$, i.e. $h^{-1}h'\in K'$. By point \textbf{(2)}, it follows that $\pi_{L_{G/N}}(\pi_{G/N}(h))=\pi_{L_{G/N}}(\pi_{G/N}(h'))$. Thus, $\psi(\pi_L(h'))=\psi(\pi_L(h))=\pi_{L_{G/N}}(\pi_{G/N}(h))=\pi_{L_{G/N}}(\pi_{G/N}(h'))$. Therefore, $\psi\circ\pi_L=\pi_{L_{G/N}}\circ\pi_{G/N}$ on the domain of $\pi_L$.
It remains to show that $\psi$ is $A${\hyp}invariant. Take $\sigma\in\mathrm{Aut}(\mathfrak{M}/A)$ and $x\in L$. Take $h$ such that $\pi_L(h)=x$. Using that $\pi_L$, $\pi_{L_{G/N}}$ and $\pi_{G/N}$ are $A${\hyp}invariant, we get that \[\psi(\sigma(x))=\psi\circ \pi_L(\sigma(h))=\pi_{L_{G/N}}\circ\pi_{G/N}(\sigma(h))=\sigma(\pi_{L_{G/N}}\circ\pi_{G/N}(h))=\sigma(\psi(x)).\]
Finally, putting everything together, we conclude that \[\mathrm{Lrank}_A(G/N)=\mathrm{dim}(L_{G/N})\leq \mathrm{dim}(L_0)=\mathrm{Lrank}_A(G)-\mathrm{Lrank}_A(N).\] \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{prop} The following proposition is a direct consequence of the results of \cite{jing2021nonabelian}. I am very grateful to Chieu{\hyp}Minh for telling me about it.
\begin{prop}[An{\hyp}Jing{\hyp}Tran{\hyp}Zhang bound] Let $G$ be a piecewise hyperdefinable group and $X$ a symmetric generic set of $G$. Then, $\mathrm{Lrank}(G)\leq 12 \log_2(k)^2$ where $k=[X^2:X]$. \begin{dem} By working modulo $G^{00}$, we may assume that $G$ is small. Let $(K,H)$ be a minimal Yamabe pair of $G$ and $\pi\coloneqq \pi_{H/K}: H \rightarrow L$ the minimal Lie core. By the Generic Set Lemma (Theorem \ref{t:generic set lemma}), we have that $X^2$ is a symmetric compact neighbourhood of the identity in the global logic topology. Thus, as $H$ is open, $Y=H\cap X^2$ is also a symmetric compact neighbourhood of the identity in the global logic topology. By \cite[Lemma 2.3]{machado2021good}, $Y$ is a $k^3${\hyp}approximate subgroup. As $\pi$ is a continuous and open homomorphism, $\pi[Y]$ is a compact neighbourhood of the identity and a $k^3${\hyp}approximate subgroup. As it is a neighbourhood of the identity, it has positive Haar measure, so the general Brunn{\hyp}Minkowski Inequality \cite[Theorem 1.1]{jing2021nonabelian} applies and we get that $2 \leq k^{\rfrac{3}{\alpha}}$ with $\alpha\coloneqq d-m-h$, where $d=\dim(L)$, $m=\max\{\dim(\Gamma)\mathrel{:} \Gamma\leq L \mathrm{\ compact}\}$ and $h$ is the helix dimension of $L$. On the other hand, by \cite[Corollary 2.15]{jing2021nonabelian}, $h\leq \rfrac{n}{3}$ with $n\coloneqq d-m$, so $2n\leq 9\log_2(k)$. Finally, as $L$ has no compact normal subgroups, by \cite[Fact 3.6, Lemma 3.9]{an2021small}, we conclude that $d\leq \frac{n(n+1)}{2}$, so \[d\leq \frac{1}{8}\lfloor 9\log_2(k)\rfloor^2+\frac{1}{4}\lfloor 9\log_2(k)\rfloor\leq 12\log_2(k)^2.\] \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{prop}
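For the reader's convenience, we spell out the arithmetic of the final display. Write $m\coloneqq\lfloor 9\log_2(k)\rfloor$. Since $2n$ is an integer, the bound $2n\leq 9\log_2(k)$ gives $2n\leq m$, hence \[d\;\leq\; \frac{n(n+1)}{2}\;\leq\; \frac{1}{2}\cdot\frac{m}{2}\left(\frac{m}{2}+1\right)\;=\;\frac{1}{8}m^2+\frac{1}{4}m.\]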
\textbf{Breuillard{\hyp}Green{\hyp}Tao Theorem.} Let $\mathcal{A}$ be a particular class of piecewise hyperdefinable groups and $L$ a finite dimensional Lie group. It is then natural to study the \emph{direct problem} for $\mathcal{A}$ or (dually) the \emph{inverse problem} for $L$: \begin{center} What are the Lie cores of elements of $\mathcal{A}$?
Is $L$ a Lie core of an element of $\mathcal{A}$? \end{center}
The classification theorem by Breuillard, Green and Tao for finite approximate subgroups \cite{breuillard2012structure} can be interpreted as an answer to the direct problem for the class of piecewise definable subgroups generated by pseudo{\hyp}finite definable approximate subgroups. Indeed, it can be restated to say that Lie cores of piecewise definable subgroups generated by pseudo{\hyp}finite definable approximate subgroups are nilpotent \cite[Proposition 9.6]{breuillard2012structure}. We consider it interesting to study the direct and inverse problems in general; indeed, one may expect that solutions to these questions would yield classification results similar to that of \cite{breuillard2012structure}.
The following easy examples show that the inverse problem is trivially solved for some basic classes of piecewise hyperdefinable groups. In particular, they show that no general classification result for Lie cores analogous to that of \cite{breuillard2012structure} can be expected in these cases.
\begin{ejems} Any Lie group $L$ is a Lie core of some piecewise hyperdefinable group in the theory of real closed fields. Indeed, $L$ is in particular a second countable manifold, so it is a locally hyperdefinable subset of countable cofinality. By Proposition \ref{p:functions logic topology 2}, the group operations are piecewise bounded $\bigwedge${\hyp}definable, so $L$ is a locally hyperdefinable group. Clearly, $L$ is its own Lie core.
For a slightly more explicit construction, note that any connected Lie group $L$ is the Lie core of some piecewise definable group generated by a definable approximate subgroup. Indeed, connected Lie groups are metrisable by the Birkhoff{\hyp}Kakutani Theorem \cite[Theorem 1.5.2]{tao2014hilbert}, so take a left invariant metric $d$ for $L$. As $L$ is locally compact, we may assume that the closed unit ball $\overline{\mathbb{D}}$ is a compact symmetric neighbourhood of the identity. Consider the structure of $L$ with the language of groups, a sort for $\mathbb{R}$ with the language of ordered rings and a function symbol for the metric $d$. Let $L'$ be an $|L|${\hyp}saturated elementary extension of it. Consider the subgroup $H\leq L'$ generated by the closed unit ball, and let $E=\{(a,b)\mathrel{:} d(a,b)<\rfrac{1}{n}\mathrm{\ for\ all\ }n\in \mathbb{N}\}$. Clearly, $\overline{\mathbb{D}}$ is a definable approximate subgroup and $H$ is the piecewise definable group generated by it. Then, $L$ is a Lie core of $H$, as, in fact, we have $\rfrac{H}{E}\cong L$.
In the case of a linear connected Lie group $L$, we can combine both examples. Indeed, in that case, in the real numbers, $L$ is piecewise definable and its metric and group operations are definable, so, after saturation, we just need to take the piecewise definable group generated by the closed unit ball and quotient out by the infinitesimals as in the previous example. \end{ejems}
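In the second example, the isomorphism $\rfrac{H}{E}\cong L$ is, essentially, induced by the standard part map: every $h\in H$ lies in $\overline{\mathbb{D}}^n(L')$ for some $n\in\mathbb{N}$ and, since $\overline{\mathbb{D}}^n(L)$ is compact, there is a unique point $\mathrm{st}(h)\in L$ with $(h,\mathrm{st}(h))\in E$. Using the left invariance of $d$ and the continuity of the group operations, one can then check that \[\mathrm{st}:\ H\rightarrow L\] is a surjective group homomorphism whose fibres are exactly the $E${\hyp}classes.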
\section{Stabilizer Theorem} In this section, we aim to extend the Stabilizer Theorem \cite[Theorem 3.5]{hrushovski2011stable} to piecewise hyperdefinable groups. The main point of this theorem is that it provides sufficient conditions to conclude that $G^{00}$ is $\bigwedge${\hyp}definable. As we already noted in the previous section, this result has significant consequences for the projection $\pi_L$ of the minimal Lie core.
To prove the Stabilizer Theorem, we need first to extend the model{\hyp}theoretic notions of dividing, forking and stable relation to piecewise hyperdefinable sets. Once these model{\hyp}theoretic notions have been properly defined for piecewise hyperdefinable groups, adapting the original proof of the Stabilizer Theorem is straightforward.
Forking and dividing for hyperimaginaries have already been well studied by many authors (e.g. \cite{hart2000coordinatisation}, \cite{wagner2010simple} and \cite{kim2014simplicity}). Here, in the first subsection, we rewrite the definitions of dividing and forking for hyperdefinable sets in a slightly different way which we find more natural from the point of view of this paper. After that, it is trivial to extend the definition to the context of piecewise hyperdefinable sets. \subsection{Dividing and forking} Let $(P_t)_{t\in T}$ be $A${\hyp}hyperdefinable sets. Write $P\coloneqq \prod P_t$. For infinite tuples $\overline{a},\overline{b}\in P$ of hyperimaginaries, we write ${\mathrm{tp}}(\overline{a}/A)={\mathrm{tp}}(\overline{b}/A)$ to mean that ${\mathrm{tp}}(\overline{a}_{\mid T_0}/A)={\mathrm{tp}}(\overline{b}_{\mid T_0}/A)$ for any $T_0\subseteq T$ finite.
Let $(I,<)$ be a linear order. An \emph{$A${\hyp}indiscernible} sequence in $P$ indexed by $(I,<)$ is a sequence $(\overline{a}_i)_{i\in I}$ of hyperimaginary tuples $\overline{a}_i\in P$ such that, for any $n\in \mathbb{N}$ and any $i_1<\cdots<i_n$ and $j_1<\cdots<j_n$, \[{\mathrm{tp}}(\overline{a}_{i_1},\ldots,\overline{a}_{i_n}/A)={\mathrm{tp}}(\overline{a}_{j_1},\ldots,\overline{a}_{j_n}/A).\]
\begin{lem}[Standard Lemma] \label{l:standard lemma for hyperimaginaries} Let $P=\prod_{t\in T}P_t$ be a product of $A${\hyp}hyperdefinable sets and $(I,<)$ and $(J,<)$ two infinite linear orders, with $|T|^+ +|J|\leq\kappa$. Then, given a sequence $(\overline{a}_i)_{i\in I}$ of hyperimaginary tuples in $P$, for any set of representatives $A^*$, there is a sequence $(\overline{b}_j)_{j\in J}$ of hyperimaginary tuples in $P$ with an $A^*${\hyp}indiscernible sequence of representatives $(\overline{b}^*_j)_{j\in J}$ such that \[({\overline{b}_{j_1}}_{\mid T_0},\ldots,{\overline{b}_{j_n}}_{\mid T_0})\in W\] for any $n\in \mathbb{N}$, any $j_1<\ldots<j_n$, any $T_0\subseteq T$ finite and any $\bigwedge_{A}${\hyp}definable set $W\subseteq \left(\prod_{t\in T_0} P_t\right)^n$ such that $({\overline{a}_{i_1}}_{\mid T_0},\ldots,{\overline{a}_{i_n}}_{\mid T_0})\in W$ for every $i_1<\cdots<i_n$. \begin{dem} Trivial from the classic Ehrenfeucht{\hyp}Mostowski Standard Lemma proved using Ramsey's Theorem \cite[Theorem 5.1.5]{tent2012course}. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{lem} \begin{coro} \label{c:indiscernible hyperimaginaries}
Let $P=\prod_{t\in T}P_t$ be a product of $A${\hyp}hyperdefinable sets and $(I,<)$ an infinite linear order, with $|T|^+ +|I|\leq\kappa$. Then, a sequence $(\overline{b}_i)_{i\in I}$ of hyperimaginary tuples in $P$ is $A${\hyp}indiscernible if and only if, for some set of representatives $A^*$ of $A$, there is an $A^*${\hyp}indiscernible sequence of representatives $(\overline{b}^*_i)_{i\in I}$.
Furthermore, if $(\overline{b}_i)_{i\in I}$ is $A${\hyp}indiscernible, for any set of representatives $A^*$ there is an $A^{**}${\hyp}indiscernible sequence of representatives $(\overline{b}^*_i)_{i\in I}$ of $(\overline{b}_i)_{i\in I}$ where $A^{**}$ is another set of representatives of $A$ with ${\mathrm{tp}}(A^*)={\mathrm{tp}}(A^{**})$. \begin{dem} By the Standard Lemma \ref{l:standard lemma for hyperimaginaries} and Corollary \ref{c:types over hyperimaginaries}. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{coro} Let $P=\rfrac{X}{E}$ be an $A${\hyp}hyperdefinable set and ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_P:\ X\rightarrow P$ its quotient map. An $\bigwedge${\hyp}definable subset $V\subseteq P$ \emph{divides over $A$} if and only if ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}^{-1}_P[V]$ divides over $A^*$ for some set of representatives of $A$. \begin{lem} \label{l:finite character of dividing} Let $P$ be an $A${\hyp}hyperdefinable set and $V\subseteq P$ an $\bigwedge_{A,B}${\hyp}definable subset. Then, $V$ divides over $A$ if and only if there are a finite tuple $b$ from $B$ and an $\bigwedge_{A,b}${\hyp}definable subset $W\subseteq P$ such that $V\subseteq W$ and $W$ divides over $A$. \begin{dem} One direction is trivial. Let us check the other. Take a uniform definition $\underline{V}$ of $V$, which exists by Lemma \ref{l:uniform definition}. Take representatives $A^*$ of $A$ such that $\underline{V}(x,A^*,B^*)$ divides over $A^*$. There is then $b$ finite such that $\underline{W}(x,A^*,b^*)\coloneqq \underline{V}(x,A^*,B^*)\cap \mathrm{For}^x(\mathscr{L}(A^*,b^*))$ divides over $A^*$. Now, as $\underline{V}$ is a uniform definition, $W$ is $\bigwedge_{A,b}${\hyp}definable. Obviously, $V\subseteq W$ and $W$ divides over $A$. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{lem} \begin{lem} \label{l:dividing} Let $P$ and $Q$ be $A${\hyp}hyperdefinable sets. Let $V\subseteq Q\times P$ be an $\bigwedge_{A}${\hyp}definable set and $b\in Q$. Then, $V(b)$ divides over $A$ if and only if there is an $A${\hyp}indiscernible sequence $(b_i)_{i\in \omega}$ of hyperimaginaries from $Q$ such that ${\mathrm{tp}}(b_0/A)={\mathrm{tp}}(b/A)$ and $\bigcap_{i\in \omega} V(b_i)=\emptyset$.
Furthermore, $V(b)$ divides over $A$ if and only if for any set of representatives $A^*$ of $A$ there is another set of representatives $A^{**}$ such that ${\mathrm{tp}}(A^*)={\mathrm{tp}}(A^{**})$ and $V(b)$ divides over $A^{**}$. \begin{dem} Assume $V(b)$ divides over $A$. Take a uniform definition $\underline{V}$ of $V$, which exists by Lemma \ref{l:uniform definition}. Take representatives $A^*$ of $A$ and $b^*$ of $b$ such that $\underline{V}(x,A^*,b^*)$ divides over $A^*$. There is then an $A^*${\hyp}indiscernible sequence $(b^*_i)_{i\in \omega}$ such that ${\mathrm{tp}}(b^*_0/A^*)={\mathrm{tp}}(b^*/A^*)$ and $\bigcup_{i\in\omega}\underline{V}(x,A^*,b^*_i)$ is not finitely satisfiable. Then, $(b_i)_{i\in \omega}$ given by $b_i={\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_Q(b^*_i)$ is $A${\hyp}indiscernible and ${\mathrm{tp}}(b_0/A)={\mathrm{tp}}(b/A)$. Also, ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}^{-1}_P\left[\bigcap_{i\in \omega} V(b_i)\right]=\bigcap_{i\in \omega} {\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}^{-1}_P[V(b_i)]=\bigcap_{i\in\omega}\underline{V}(\mathfrak{M},A^*,b^*_i)=\emptyset$, so $\bigcap_{i\in\omega} V(b_i)=\emptyset$ by surjectivity of ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_P$.
On the other hand, assume there is an $A${\hyp}indiscernible sequence $(b_i)_{i\in \omega}$ in $Q$ such that ${\mathrm{tp}}(b_0/A)={\mathrm{tp}}(b/A)$ and $\bigcap_{i\in \omega} V(b_i)=\emptyset$. Take a uniform definition $\underline{V}$ of $V$, which exists by Lemma \ref{l:uniform definition}. By Corollary \ref{c:indiscernible hyperimaginaries}, there is an $A^{*}${\hyp}indiscernible sequence $(b^*_i)_{i\in \omega}$ of representatives of $(b_i)_{i\in \omega}$ with $A^{*}$ representatives of $A$. Now, as ${\mathrm{tp}}(b_0/A)={\mathrm{tp}}(b/A)$, by Corollary \ref{c:types over hyperimaginaries}, there is a representative $b^{**}\in b$ and a set of representatives $A^{**}$ of $A$ such that ${\mathrm{tp}}(b^*_0, A^{*})={\mathrm{tp}}(b^{**},A^{**})$. Take $\sigma\in\mathrm{Aut}(\mathfrak{M})$ mapping $(b_0^*,A^*)$ to $(b^{**},A^{**})$, and write $b'_i\coloneqq \sigma(b_i)$ and ${b'}^*_i\coloneqq \sigma(b^*_i)$ for $i\in\omega$. Then, $({b'}^*_i)_{i\in\omega}$ is an $A^{**}${\hyp}indiscernible sequence with ${b'}^*_0=b^{**}$. For this sequence, it follows that $\emptyset={\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}^{-1}_P\left[\bigcap_{i\in \omega}V(b'_i)\right]=\bigcap_{i\in\omega}{\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}^{-1}_P[V(b'_i)]=\bigcap_{i\in \omega}\underline{V}(\mathfrak{M},A^{**},{b'}^{*}_i),$ concluding that $\underline{V}$ divides over $A^{**}$. In particular, $V$ divides over $A$.
For the ``furthermore part'', note that in the previous paragraph ${\mathrm{tp}}(A^*)$ can be chosen arbitrarily by Corollary \ref{c:indiscernible hyperimaginaries}. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{lem} Combining the two lemmas above, we conclude that the definition given here is equivalent to the one studied previously by other authors (e.g. \cite{hart2000coordinatisation}, \cite{wagner2010simple} and \cite{kim2014simplicity}). It is now straightforward to prove the following basic lemma. \begin{lem} \label{l:dividing and functions} Let $P$ and $Q$ be $A${\hyp}hyperdefinable sets. Let $V\subseteq P$ be $\bigwedge${\hyp}definable and $f:\ P\rightarrow Q$ a $1${\hyp}to{\hyp}$1$ $\bigwedge_A${\hyp}definable function. Then, $f[V]$ divides over $A$ provided that $V$ divides over $A$. \end{lem} Let $P$ be an $A${\hyp}hyperdefinable set. The \emph{forking ideal} $\mathfrak{f}_P(A)$ of $P$ over $A$ is the ideal of $\bigwedge${\hyp}definable subsets of $P$ generated by the ones dividing over $A$. An $\bigwedge${\hyp}definable subset of $P$ \emph{forks over $A$} if it is in that forking ideal. \begin{obs} Trivially, if $V$ forks over $A$, then $\underline{V}$ forks over $A$. In the case of simple theories, the converse also holds as forking and dividing are the same \cite[Proposition 3.2.7]{wagner2010simple}. In general, however, the converse is not true. For instance, let $\mathfrak{M}$ be the monster model of the theory of dense circular orders in the usual language, let $M$ be the whole $1${\hyp}ary universe of $\mathfrak{M}$ and $E$ the trivial equivalence relation given by $xEy$ for every $x,y\in M$. Obviously, $\rfrac{M}{E}$ is a singleton, so it does not fork over $\emptyset$. However, $M$ forks over $\emptyset$. \end{obs} Let $P$ be a piecewise $A${\hyp}hyperdefinable set. An $\bigwedge${\hyp}definable set $V\subseteq P$ \emph{divides} over $A$ if it divides over $A$ as a subset of some/any piece $P_i$ containing $V$. Note that, by Lemma \ref{l:dividing and functions}, this is well{\hyp}defined.
An $\bigwedge${\hyp}definable subset $V$ of $P$ \emph{forks over $A$} if it forks over $A$ as a subset of some/any piece $P_i$ containing $V$. The \emph{forking ideal} $\mathfrak{f}_{P}(A)$ of $P$ over $A$ is the family of $\bigwedge${\hyp}definable subsets of $P$ forking over $A$. Clearly, $\mathfrak{f}_P(A)$ is the ideal of $\bigwedge${\hyp}definable subsets of $P$ generated by the ones dividing over $A$.
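Let us also recall the classical computation behind the observation about dense circular orders above. Write $C$ for the circular order relation and take three distinct parameters $a,b,c$. Then \[M=\{x\mathrel{:} C(a,x,b)\}\cup\{x\mathrel{:} C(b,x,c)\}\cup\{x\mathrel{:} C(c,x,a)\}\cup\{a,b,c\},\] and each of these sets divides over $\emptyset$: for the arcs, an indiscernible sequence of pairwise disjoint arcs is $2${\hyp}inconsistent, and likewise for the singletons. Hence $M$ forks over $\emptyset$, while, being $\emptyset${\hyp}definable and non{\hyp}empty, $M$ itself does not divide over $\emptyset$.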
\subsection{Ideals}
From now on, $\lambda$ is a cardinal with $\kappa\geq\lambda>|\mathscr{L}|+|A|$.
Let $P$ be a piecewise $A${\hyp}hyperdefinable set and $\mu$ an ideal of $\bigwedge_{<\lambda}${\hyp}definable subsets in $P$. We say that $\mu$ is \emph{$A${\hyp}invariant} if it is invariant under $\mathrm{Aut}(\mathfrak{M}/A)$, i.e. $W(\overline{b})\in\mu$ implies $W(\overline{b}')\in\mu$ for any $\overline{b}'$ with ${\mathrm{tp}}(\overline{b}/A)={\mathrm{tp}}(\overline{b}'/A)$ and any $\bigwedge_{\bar{b}}${\hyp}definable subset $W(\overline{b})\subseteq P$ with $|\overline{b}|<\lambda$. We say that an $\bigwedge_{<\lambda}${\hyp}definable subset is \emph{$\mu${\hyp}negligible} if it is in $\mu$ and that it is \emph{$\mu${\hyp}wide} if it is not in $\mu$. We say that $\mu$ is \emph{locally atomic} if, for any wide $\bigwedge_B${\hyp}definable subset $V$ with $|B|<\lambda$, there is $a\in V$ such that ${\mathrm{tp}}(a/B)$ is wide. \begin{obs} For $X\subseteq P$, write $\mu_{\mid X}\coloneqq\{W\in \mu\mathrel{:} W\subseteq X\}$. Clearly, $\mu_{\mid X}$ is an ideal of $\bigwedge_{<\lambda}${\hyp}definable subsets of $X$. Say $P=\underrightarrow{\lim}\, P_i$, then write $\mu_i\coloneqq \mu_{\mid P_i}$ for each piece. We have from the definitions that $\mu$ is $A${\hyp}invariant if and only if each $\mu_i$ is so. Similarly, it is locally atomic if and only if each $\mu_i$ is so. \end{obs}
We say that an $\bigwedge_A${\hyp}definable subset $V$ is \emph{$A${\hyp}medium} if for any $\bigwedge_{\bar{b}}${\hyp}definable subset $W(\overline{b})\subseteq V$ with $|\bar{b}|<\lambda$, we have \[W(\overline{b})\in \mu\Leftrightarrow W(\overline{b}_0)\cap W(\overline{b}_1)\in\mu\] for any $A${\hyp}indiscernible sequence $(\overline{b}_i)_{i\in\omega}$ realising ${\mathrm{tp}}(\overline{b}/A)$. We say that an $\bigwedge_{<\lambda}${\hyp}definable subset $V$ is $A${\hyp}medium if there is an $A${\hyp}medium $\bigwedge_A${\hyp}definable subset $V_0$ with $V\subseteq V_0$. We say that $V$ is \emph{strictly $A${\hyp}medium} if it is wide and $A${\hyp}medium. We say that $\mu$ is \emph{$A${\hyp}medium\footnote{We use the terminology of \cite{montenegro2018stabilizers}. In \cite{hrushovski2011stable}, it is said that the ideal has the $S_1$ property.}} if every $\bigwedge_{<\lambda}${\hyp}definable subset is $A${\hyp}medium for $\mu$. \begin{obs} Note that, by definition, if $\mu$ is $A${\hyp}medium, $\mu$ is in particular $A${\hyp}invariant. Note that the family of $A${\hyp}medium sets of $\mu$ is an $A${\hyp}invariant ideal of $\bigwedge_{<\lambda}${\hyp}definable subsets. \end{obs} \begin{ejems} {\textnormal{\textbf{(1)}}} The forking ideal over $A$ is an $A${\hyp}invariant ideal of $\bigwedge${\hyp}definable subsets. In simple theories, the forking ideal over $A$ is locally atomic and $A${\hyp}medium. Indeed, it is locally atomic by the extension property of forking \cite[Theorem 3.2.8]{wagner2010simple}. On the other hand, suppose $V(\overline{b})$ is $\bigwedge_{\overline{b}}${\hyp}definable and $V(\overline{b}_0)\cap V(\overline{b}_1)$ forks over $A$ with $(\overline{b}_i)_{i\in\omega}$ an $A${\hyp}indiscernible sequence realising ${\mathrm{tp}}(\overline{b}/A)$. Then, by simplicity, $V(\overline{b}_0)\cap V(\overline{b}_1)$ divides over $A$, so $\bigcap^n_{i=0} V(\overline{b}_{2i})\cap V(\overline{b}_{2i+1})=\emptyset$ for some $n\in\mathbb{N}$. 
Thus, $V(\overline{b})$ divides over $A$, so it forks over $A$. As $V$ is arbitrary, we conclude that the forking ideal over $A$ is $A${\hyp}medium.
{\textnormal{\textbf{(2)}}} If we have an $A${\hyp}invariant measure on the lattice of $\bigwedge_{<\lambda}${\hyp}definable sets, the ideal of zero measure $\bigwedge_{<\lambda}${\hyp}definable subsets is an $A${\hyp}invariant ideal. In this case, every $\bigwedge_A${\hyp}definable subset of finite measure is $A${\hyp}medium. \end{ejems} \begin{lem}{\textnormal{\textbf{\cite[Lemma 2.9]{hrushovski2011stable}}}} \label{l:s1 and forking} Let $P$ be a piecewise $A${\hyp}hyperdefinable set and $\mu$ an ideal of $\bigwedge_{<\lambda}${\hyp}definable subsets on $P$. Let $V\subseteq P$ be a strictly $A${\hyp}medium $\bigwedge_{<\lambda}${\hyp}definable subset. Then, $V$ does not fork over $A$. \begin{dem} Take an $A${\hyp}medium $\bigwedge_A${\hyp}definable set $V_0$ such that $V\subseteq V_0$. Suppose that $V$ forks over $A$. Then, there are $\bigwedge${\hyp}definable subsets $W'_1,\ldots,W'_n$ dividing over $A$ such that $V\subseteq\bigcup_i W'_i$. Applying Lemma \ref{l:finite character of dividing}, we may assume that each $W'_i$ is $\bigwedge_{<\lambda}${\hyp}definable. Now, $V\subseteq \bigcup (W'_i\cap V_0)$ and each $W'_i\cap V_0$ divides over $A$. Write $W_i(\overline{b})=W'_i\cap V_0$. By Lemma \ref{l:dividing}, there is an $A${\hyp}indiscernible sequence $(\overline{b}_j)_{j\in\omega}$ realising ${\mathrm{tp}}(\overline{b}/A)$ such that $\bigcap_{j\in\omega}W_i(\overline{b}_j)=\emptyset$. Hence, $\bigcap^k_{j=0}W_i(\overline{b}_j)=\emptyset\in \mu$ for some $k$, concluding that $W_i(\overline{b})\in \mu$ by $A${\hyp}mediumness of $V_0$. Therefore, $V\in\mu$, contradicting that $V$ is wide. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{lem}
\begin{lem}\label{l:translation of medium}
Let $\mathfrak{N}\preceq \mathfrak{M}$ with $|N|<\lambda$ and $G=\underrightarrow{\lim}\, G_k$ be a piecewise $N${\hyp}hyperdefinable group. Let $\mu$ be an $N${\hyp}invariant ideal of $\bigwedge_{<\lambda}${\hyp}definable subsets of $G$. Let $W$ and $U$ be non{\hyp}empty $\bigwedge_N${\hyp}definable subsets of $G$. If $\mu$ is invariant under left translations and $U\cdot W$ is $N${\hyp}medium, then $W$ is $N${\hyp}medium too. Similarly, if $\mu$ is invariant under right translations and $W\cdot U$ is $N${\hyp}medium, then $W$ is $N${\hyp}medium too.
\begin{dem} Let $X(\overline{b})\subseteq W$ be $\bigwedge_{\bar{b}}${\hyp}definable with $|\overline{b}|<\lambda$. Take a sequence $(\overline{b}_i)_{i\in\omega}$ of realisations of ${\mathrm{tp}}(\overline{b}/N)$ with an $N${\hyp}indiscernible sequence of representatives $(\overline{b}^*_i)_{i\in\omega}$, and write $X_i\coloneqq X(\overline{b}_i)$. Fix $k$ such that $U\subseteq G_k$ and take $\underline{U}$ defining $U$. As $\mathfrak{N}\prec\mathfrak{M}$, $\underline{U}$ is finitely satisfiable in $N$, so $\underline{U}$ does not fork over $N$ \cite[Lemma 7.1.10]{tent2012course}. Thus, $\underline{U}$ has a non{\hyp}forking extension to a complete type $p$ over $N,\overline{b}^*_0$. As $p$ does not fork over $N$, it does not divide over $N$. By \cite[Lemma 7.1.5]{tent2012course}, there is $a^*$ realising $p$ such that $(\overline{b}^*_i)_{i\in \omega}$ is $N,a^*${\hyp}indiscernible. As $a^*$ realises $\underline{U}$, writing $a$ for the element of $G$ represented by $a^*$, we get $a\in U$.
Suppose $\mu$ is invariant under left translations and $U\cdot W$ is $N${\hyp}medium. Then, we get that $a\cdot X_0\cap a\cdot X_1\in \mu$ if and only if $a\cdot X_0\in\mu$. Therefore, provided left translational invariance, \[X_0\cap X_1\in \mu\Leftrightarrow a\cdot X_0\cap a\cdot X_1\in \mu\Leftrightarrow a\cdot X_0\in\mu\Leftrightarrow X_0\in\mu.\]
We similarly prove the case with right translations.
\setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{lem} \subsection{Stable relations} Let $P$ and $Q$ be piecewise $A${\hyp}hyperdefinable sets and $V,W\subseteq P\times Q$ disjoint $A${\hyp}invariant subsets. We say that $V$ and $W$ are \emph{unstably separated over $A$} if there is an infinite $A${\hyp}indiscernible sequence $(a_i,b_i)_{i\in \omega}$ such that $(a_0,b_1)\in V$ and $(a_1,b_0)\in W$. We say that they are \emph{stably separated over $A$} if they are not unstably separated. We say that an $A${\hyp}invariant binary relation $R\subseteq P\times Q$ is \emph{stable over $A$} if $R$ and $R^c$ are stably separated over $A$. \begin{obss} {\textnormal{\textbf{(1)}}} Clearly, being stably separated over $A$ is invariant under piecewise $\bigwedge_A${\hyp}definable isomorphisms.
{\textnormal{\textbf{(2)}}} Note that $V$ and $W$ are stably separated over $A$ if and only if $V\cap (P_i\times Q_j)$ and $W\cap (P_i\times Q_j)$ are stably separated over $A$ for any pieces $P_i$ and $Q_j$.
{\textnormal{\textbf{(3)}}} Let $P$ and $Q$ be $A${\hyp}hyperdefinable. By Corollary \ref{c:indiscernible hyperimaginaries}, $V$ and $W$ are stably separated over $A$ if and only if ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}^{-1}_{P\times Q}[V]$ and ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}^{-1}_{P\times Q}[W]$ are stably separated over $A^*$ for any set of representatives $A^*$ of $A$. In particular, if $V$ and $W$ are $\bigwedge_A${\hyp}definable, we have that $V$ and $W$ are stably separated over $A$ if and only if $\underline{V}$ and $\underline{W}$ are stably separated over $A^*$ for any set of representatives $A^*$ of $A$ and any partial types $\underline{V}$ and $\underline{W}$ defining $V$ and $W$.
{\textnormal{\textbf{(4)}}} Note that, by the symmetry of stability, $R$ is stable over $A$ if and only if $R^c$ is stable over $A$. \end{obss} We need the following two lemmas from \cite{hrushovski2011stable} for the Stabilizer Theorem. Their original proofs can be easily adapted.
Recall that we say that a partial type $\Sigma(x,y)$ over $A^*$ divides over $A^*$ \emph{with respect to} a global $A^*${\hyp}invariant type $\widehat{q}(y)$ if there is a sequence $(b^*_i)_{i\in \omega}$ such that ${\mathrm{tp}}(b^*_i/A^*,b^*_0,\ldots,b^*_{i-1})\models \widehat{q}$ and $\bigwedge \Sigma(x,b^*_i)$ is inconsistent. \begin{lem}{\textnormal{\textbf{\cite[Lemma 2.2]{hrushovski2011stable}}}} Let $P=\rfrac{X}{E}$ and $Q=\rfrac{Y}{F}$ be $A^*${\hyp}hyperdefinable sets and $\widehat{q}$ a global $A^*${\hyp}invariant type with $\underline{Y}\in\widehat{q}$. Let $W_1,W_2\subseteq P\times Q$ be $\bigwedge_{A^*}${\hyp}definable sets stably separated over $A^*$. Assume that $a\in P$ is such that $\underline{W_2(a)}\subseteq \widehat{q}$. Then, $V=W_1\cap \left({\mathrm{tp}}(a/A^*)\times Q\right)$ divides over $A^*$. Furthermore, $\underline{V}$ divides over $A^*$ with respect to $\widehat{q}$. \end{lem} For sets $V$ and $W$, write \[V\times_{\mathrm{nf}(A)}W\coloneqq\{(a,b)\in V\times W\mathrel{:} {\mathrm{tp}}(b/aA)\mbox{ does not fork over }A\}.\] Similarly, $V\times_{\mathrm{ndiv}(A)}W$, $V\hbox{\hss $\prescript{}{\mathrm{nf}(A)}{\times}$ \hss} W$ and $V\hbox{\hss $\prescript{}{\mathrm{ndiv}(A)}{\times}$ \hss} W$. \begin{lem}{\textnormal{\textbf{\cite[Lemma 2.3]{hrushovski2011stable}}}} \label{l:fundamental lemma for the stabilizer theorem} Let $P$ and $Q$ be piecewise $A^*${\hyp}hyperdefinable sets and $R\subseteq P\times Q$ a stable binary relation over $A^*$. Let $p\subseteq P$ and $q\subseteq Q$ be types over $A^*$. Assume that there is an $A^*${\hyp}invariant global type $\widehat{q}$ extending a partial type $\underline{q}$ over $A^*$ defining $q$.
\textnormal{\textbf{(1)}} Take $a\in p$ and $b\in q$ such that $(a,b)\in R$. Suppose that there are representatives $a^*\in a$ and $b^*\in b$ such that $b^*\models\widehat{q}_{\mid A^*,a^*}$. Then, we have that $(a',b)\in R$ for any $a'\in p$ such that ${\mathrm{tp}}(a'/A^*,b)$ does not divide over $A^*$.
\textnormal{\textbf{(2)}} Take $a',a\in p$ and $b\in q$. Suppose that ${\mathrm{tp}}(a/A^*,b)$ and ${\mathrm{tp}}(a'/A^*,b)$ do not divide over $A^*$. Then, $(a,b)\in R$ if and only if $(a',b)\in R$.
\textnormal{\textbf{(3)}} Assume that there is also an $A^*${\hyp}invariant global type $\widehat{p}$ extending a partial type $\underline{p}$ over $A^*$ defining $p$. Then, the following conditions are equivalent: \[\begin{array}{llll} \mbox{{\textbf{i.}}}& p\times_{\mathrm{ndiv}(A^*)}q\cap R\neq \emptyset & \mbox{{\textbf{v.}}}& p\hbox{\hss $\prescript{}{\mathrm{ndiv}(A^*)}{\times}$ \hss} q\cap R\neq \emptyset\\ \mbox{{\textbf{ii.}}}& p\times_{\mathrm{nf}(A^*)}q\cap R\neq \emptyset & \mbox{{\textbf{vi.}}}& p\hbox{\hss $\prescript{}{\mathrm{nf}(A^*)}{\times}$ \hss} q\cap R\neq \emptyset\\ \mbox{{\textbf{iii.}}}& p\times_{\mathrm{nf}(A^*)}q\subseteq R &\mbox{{\textbf{vii.}}} & p\hbox{\hss $\prescript{}{\mathrm{nf}(A^*)}{\times}$ \hss} q\subseteq R\\ \mbox{{\textbf{iv.}}}& p\times_{\mathrm{ndiv}(A^*)}q\subseteq R &\mbox{{\textbf{viii.}}} & p\hbox{\hss $\prescript{}{\mathrm{ndiv}(A^*)}{\times}$ \hss} q\subseteq R\end{array}\] \end{lem} \subsection{Stabilizer Theorem}
From now on, $\lambda$ is a cardinal with $\kappa\geq\lambda>|\mathscr{L}|+|A|$.
Let $G$ be a piecewise $A${\hyp}hyperdefinable group and $\mu$ an ideal of $\bigwedge_{<\lambda}${\hyp}definable subsets of $G$. Let $V,W\subseteq G$ be two $\bigwedge_{<\lambda}${\hyp}definable subsets. The \emph{(left) $\mu${\hyp}stabilizer of $V$ with respect to $W$} is the set ${\mathrm{St}}_\mu(V,W)=\{a\in G\mathrel{:} a^{-1}\cdot V\cap W\notin \mu\}$, and the \emph{(left) $\mu${\hyp}stabilizer group of $V$ with respect to $W$} is the subgroup ${\mathrm{Stab}}_\mu(V,W)$ of $G$ generated by ${\mathrm{St}}_\mu(V,W)$. Write ${\mathrm{St}}_\mu(V)\coloneqq {\mathrm{St}}_\mu(V,V)$ and ${\mathrm{Stab}}_\mu(V)\coloneqq {\mathrm{Stab}}_\mu(V,V)$. We omit the subscript $\mu$ if there is no confusion. \begin{lem} \label{l:basic remarks} Let $G$ be a piecewise $A${\hyp}hyperdefinable group, $\mu$ an $A${\hyp}invariant ideal of $\bigwedge_{<\lambda}${\hyp}definable subsets of $G$ and $V$ and $W$ two $\bigwedge_A${\hyp}definable wide subsets. Then:
{\textnormal{\textbf{(1)}}} Take any $B\supseteq A$ with $|B|<\lambda$. If $\mu$ is locally atomic, $g\in {\mathrm{St}}(V,W)$ if and only if there are $a\in V$ and $b\in W$ such that $g=ab^{-1}$ with ${\mathrm{tp}}(b/B,g)\notin \mu$.
{\textnormal{\textbf{(2)}}} If $\mu$ is invariant under left translations, then ${\mathrm{St}}(V,W)^{-1}={\mathrm{St}}(W,V)$. In particular, ${\mathrm{St}}(V)$ is a symmetric subset. \end{lem} We say that a subset $X\subseteq G$ is \emph{stable over $A$} if the relation $\{(a,b)\mathrel{:} a^{-1}b\in X\}$ is stable over $A$. \begin{ejem}{\textnormal{\textbf{\cite[Lemma 2.10]{hrushovski2011stable} \& \cite[Lemma 2.8]{montenegro2018stabilizers}}}} \label{e:mos 2.8} Let $\mu$ be an $A${\hyp}invariant ideal of $\bigwedge_{<\lambda}${\hyp}definable subsets of a piecewise $A${\hyp}hyperdefinable group. If $\mu$ is invariant under left translations and $V$ and $W$ are $A${\hyp}medium $\bigwedge_A${\hyp}definable subsets, then ${\mathrm{St}}(V,W)$ is stable over $A$. \end{ejem} \begin{obs} Since ${\scriptstyle \bullet}^{-1}:\ G\rightarrow G$ is a piecewise $\bigwedge_A${\hyp}definable isomorphism, $X$ is stable over $A$ if and only if $X^{-1}$ is stable over $A$. Also, $X$ is stable over $A$ if and only if $\{(a,b)\mathrel{:} ab\in X\}$ is stable over $A$, if and only if $\{(a,b)\mathrel{:} ab^{-1}\in X\}$ is stable over $A$. \end{obs} For subsets $V$ and $W$ of a piecewise hyperdefinable group $G$, write \[V\cdot_{\mathrm{nf}(A)}W\coloneqq \{a\cdot b\mathrel{:} (a,b)\in V\times_{\mathrm{nf}(A)}W\}.\] Similarly, $V\cdot_{\mathrm{ndiv}(A)}W$, $V\hbox{\hss $\prescript{}{\mathrm{nf}(A)}{\cdot}$ \hss} W$ and $V\hbox{\hss $\prescript{}{\mathrm{ndiv}(A)}{\cdot}$ \hss} W$. \begin{lem} \label{l:lemma stabilizer} Let $G$ be a piecewise $A^*${\hyp}hyperdefinable group and $\mu$ a locally atomic $A^*${\hyp}invariant ideal of $\bigwedge_{<\lambda}${\hyp}definable subsets of $G$ which is also invariant under left translations. Let $p$ be an $A^*${\hyp}medium $A^*${\hyp}type and $X\subseteq G$ a stable subset over $A^*$ containing $p$. Suppose that there is an $A^*${\hyp}invariant global type $\widehat{p}$ extending a partial type $\underline{p}$ over $A^*$ defining $p$. 
Then, ${\mathrm{Stab}}(p)\subseteq X\cdot X^{-1}$. Furthermore, ${\mathrm{Stab}}(p)\cdot_{\mathrm{nf}(A^*)} p\subseteq X$. \begin{dem} We may assume that $p$ is strictly $A^*${\hyp}medium; otherwise, ${\mathrm{St}}(p)=\emptyset$ and ${\mathrm{Stab}}(p)=\{1\}$, so the lemma holds trivially. Since $\mu$ is invariant under left translations, ${\mathrm{St}}(p)^{-1}={\mathrm{St}}(p)$ by Lemma \ref{l:basic remarks}(2). Thus, by definition, ${\mathrm{Stab}}(p)=\bigcup^{\infty}_{n=0} {\mathrm{St}}(p)^n$. We prove by induction on $n$ that $b\cdot c\in X$ for $b\in{\mathrm{St}}(p)^n$ and $c\in p$ such that ${\mathrm{tp}}(c/A^*,b)$ does not fork over $A^*$.
By hypothesis, $p\subseteq X$, so we are done for $n=0$. Assume it is true for $n-1$ with $n\in\mathbb{N}_{>0}$. Take $b=b_1\cdot \cdots\cdot b_n$ with $b_i\in {\mathrm{St}}(p)$ for each $i\in\{1,\ldots,n\}$ and $c\in p$ such that ${\mathrm{tp}}(c/A^*,b)$ does not fork over $A^*$. We want to prove that $b\cdot c\in X$. As $b_n\in{\mathrm{St}}(p)$, by Lemma \ref{l:basic remarks}(1), there is $c'\in p$ such that $b_n\cdot c'\in p$ and ${\mathrm{tp}}(c'/A^*,b_1,\ldots,b_n)\notin \mu$. In particular, as $p$ is $A^*${\hyp}medium, ${\mathrm{tp}}(c'/A^*,b_1,\ldots,b_n)$ does not fork over $A^*$ by Lemma \ref{l:s1 and forking}. Hence, ${\mathrm{tp}}(c'/A^*,b)$ and ${\mathrm{tp}}(b_n\cdot c'/A^*,b_1,\ldots,b_{n-1})$ do not fork over $A^*$. By induction hypothesis, $b\cdot c'=(b_1\cdots b_{n-1})\cdot b_n\cdot c'\in X$. Since $(b,c')\in {\mathrm{tp}}(b/A^*)\times_{\mathrm{nf}(A^*)}p$, by Lemma \ref{l:fundamental lemma for the stabilizer theorem}(2), we conclude that $b\cdot c\in X$ too.
Now, for any $b\in {\mathrm{Stab}}(p)$ we can find $c\in p$ such that ${\mathrm{tp}}(c/A^*,b)$ does not fork over $A^*$ --- namely, choose $c^*\models \widehat{p}_{\mid A^*,b^*}$. Therefore, we conclude that ${\mathrm{Stab}}(p)\subseteq X\cdot p^{-1}\subseteq X\cdot X^{-1}$.\setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{lem} As an immediate corollary we get the following result:
\begin{coro} Let $\mathfrak{N}\prec\mathfrak{M}$ with $|N|<\lambda$. Let $G$ be a piecewise $N${\hyp}hyperdefinable group and $\mu$ a locally atomic $N${\hyp}invariant ideal of $\bigwedge_{<\lambda}${\hyp}definable subsets invariant under left translations. Let $p$ be an $N${\hyp}medium $N${\hyp}type with $p\subseteq {\mathrm{St}}(p)$. Then, ${\mathrm{Stab}}(p)={\mathrm{St}}(p)^2=(p\cdot p^{-1})^2$. \begin{dem} If $p\subseteq {\mathrm{St}}(p)$, by Lemma \ref{l:basic remarks}, we get that ${\mathrm{St}}(p)\subseteq p\cdot p^{-1}\subseteq {\mathrm{St}}(p)^2$. By Example \ref{e:mos 2.8}, ${\mathrm{St}}(p)$ is a stable subset. Hence, by Lemma \ref{l:lemma stabilizer}, taking $\widehat{p}$ a coheir of $\underline{p}$, we conclude that ${\mathrm{Stab}}(p)\subseteq {\mathrm{St}}(p)^2\subseteq (p\cdot p^{-1})^2\subseteq {\mathrm{Stab}}(p)$.\setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{coro} \begin{lem}{\textnormal{\textbf{\cite[Lemma 2.11]{montenegro2018stabilizers}}}} \label{l:mos 2.11} Let $G$ be a piecewise $A^*${\hyp}hyperdefinable group and $\mu$ an $A^*${\hyp}invariant ideal of $\bigwedge_{<\lambda}${\hyp}definable subsets of $G$ which is also invariant under left translations. Let $p$ be an $A^*${\hyp}type and $W$ a strictly $A^*${\hyp}medium $\bigwedge_{A^*}${\hyp}definable subset such that $p^{-1}W$ is $A^*${\hyp}medium too. Suppose that there is an $A^*${\hyp}invariant global type $\widehat{p}$ extending a partial type $\underline{p}$ over $A^*$ defining $p$. Then, $p\cdot_{\mathrm{nf}(A^*)} p^{-1}\subseteq {\mathrm{St}}(W)$. \begin{dem} Take $(a^*_i)_{i\in \omega}$ such that $a^*_i\models \widehat{p}_{\mid A^*,a^*_0,\ldots,a^*_{i-1}}$. Then, $(a_i)_{i\in \omega}$ is an $A^*${\hyp}indiscernible sequence of realisations of $p$ with ${\mathrm{tp}}(a_1/A^*,a_0)$ non{\hyp}forking. As $\mu$ is invariant under left translations, $a^{-1}_0W$ is wide. Since $p^{-1}W$ is $A^*${\hyp}medium, we conclude that $a^{-1}_0W\cap a^{-1}_1W\not\in\mu$. Thus, by invariance under left translations, $a_0a^{-1}_1\in {\mathrm{St}}(W)$. By Lemma \ref{l:fundamental lemma for the stabilizer theorem}(2) and Example \ref{e:mos 2.8}, we conclude that $p\cdot_{\mathrm{nf}(A^*)}p^{-1}\subseteq {\mathrm{St}}(W)$. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{lem}
For two subsets $X$ and $Y$ of a group $G$, recall that $[X:Y]\coloneqq \min\{|\Delta|\mathrel{:} X\subseteq \Delta Y\}$. Note that $[X^{-1}:Y^{-1}]=\min\{|\Delta|\mathrel{:} X\subseteq Y\Delta\}$, since $X^{-1}\subseteq \Delta Y^{-1}$ if and only if $X\subseteq Y\Delta^{-1}$, and $|\Delta^{-1}|=|\Delta|$. \begin{lem} \label{l:index lemma 1} Let $G$ be a piecewise $A^*${\hyp}hyperdefinable group and $S\leq G$ an $\bigwedge_{A^*}${\hyp}definable subgroup. Let $p\subseteq G$ be an $A^*${\hyp}type such that there is an $A^*${\hyp}invariant global type $\widehat{p}$ extending a partial type $\underline{p}$ defining $p$ over $A^*$.
\textnormal{\textbf{(1)}} If $[p:S]$ is small, then $[p:S]=1$ and $p^{-1}\cdot p\subseteq S$.
\textnormal{\textbf{(2)}} If $[p^{-1}:S]$ is small, then $[p^{-1}:S]=1$ and $p\cdot p^{-1}\subseteq S$. \begin{dem} \textnormal{\textbf{(1)}} Let $(a_i)_{i\in \alpha}$ be a sequence in $p$ such that $\alpha=[p:S]$ and $a_jS\cap a_iS=\emptyset$ for $i\neq j$, and $p\subseteq \bigcup_{i\in\alpha}a_iS$. Suppose that there is no $i\in\alpha$ such that $\underline{(a_iS)\cap p}$ is contained in $\widehat{p}$. Then, there are formulas $\varphi_i\in \underline{(a_iS)\cap p}$ for each $i\in\alpha$ such that $\{\neg\varphi_i\}_{i\in\alpha}\subseteq \widehat{p}$. In particular, $\{\neg\varphi_i\}_{i\in\alpha}\cup\underline{p}$ is finitely satisfiable. Thus, by saturation, there is $c\in p$ such that $c\notin \bigcup_{i\in \alpha} a_iS$, a contradiction. Hence, there is $i\in \alpha$ such that $\underline{(a_iS)\cap p}\subseteq \widehat{p}$. Since $a_iS\cap a_jS=\emptyset$ for any $i\neq j$, we conclude that there is exactly one such $i\in\alpha$. Since $\widehat{p}$ and $S$ are $A^*${\hyp}invariant, we get that $a_iS$ is $A^*${\hyp}invariant too. Indeed, take an arbitrary $\sigma\in\mathrm{Aut}(\mathfrak{M}/A^*)$. Then, $\sigma[\underline{(a_iS)\cap p}]\subseteq \widehat{p}$ defines $(\sigma(a_i)S)\cap p$, concluding that $\sigma(a_i)S=a_iS$. As $\sigma$ is arbitrary, we conclude that $a_iS$ is $A^*${\hyp}invariant. Thus, $a_iS$ is $\bigwedge_{A^*}${\hyp}definable by Corollary \ref{c:parameters of definition}. As $(a_iS)\cap p\neq \emptyset$, by minimality of the type $p$, $p\subseteq a_iS$. So, $[p:S]=1$ and $p^{-1}\cdot p\subseteq S$.
\textnormal{\textbf{(2)}} Analogous to point \textnormal{\textbf{(1)}}, but now working with right cosets --- as $[p^{-1}:S]=\min\{|\Delta|\mathrel{:} p\subseteq S\Delta\}$.
\setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{lem} \begin{lem} \label{l:index lemma 2} Let $G$ be a piecewise $A${\hyp}hyperdefinable group, $S\leq G$ an $\bigwedge_A${\hyp}definable subgroup and $V\subseteq G$ an $\bigwedge_A${\hyp}definable set. Suppose $[V:S]$ is not small. Then, there is an $A${\hyp}indiscernible sequence $(a_i)_{i\in\omega}$ in $V$ such that $a_i\cdot S\cap a_j\cdot S=\emptyset$ for $i\neq j$. \begin{dem}
Let $\mathbf{G}$, $\mathbf{S}$ and $\mathbf{V}$ be the interpretations in the monster model $\boldsymbol{\mathfrak{C}}$. Since $\boldsymbol{\mathfrak{C}}$ is the monster model, $[\mathbf{V}: \mathbf{S}]\notin{\mathbf{\mathbbm{O}\mathrm{n}}}$. Recall that there is $\tau\in{\mathbf{\mathbbm{O}\mathrm{n}}}$ depending only on $|A^*|$ such that, for any sequence $(a^*_i)_{i\in\tau}$ of elements in $\boldsymbol{\mathfrak{C}}$, there is an $A^*${\hyp}indiscernible sequence $(b^*_j)_{j\in\omega}$ of elements in $\boldsymbol{\mathfrak{C}}$ with the property that, for any $j_1<\cdots<j_n$, ${\mathrm{tp}}(a^*_{i_1},\ldots,a^*_{i_n}/A^*)={\mathrm{tp}}(b^*_{j_1},\ldots,b^*_{j_n}/A^*)$ for some $i_1<\cdots<i_n$ --- see \cite[Lemma 7.2.12]{tent2012course}. In particular, take a sequence $(a_i)_{i\in \tau}$ of hyperimaginaries in $\mathbf{V}$ such that $a_i\cdot \mathbf{S}\cap a_j\cdot \mathbf{S}=\emptyset$ for each $i\neq j$. Let $(a^*_i)_{i\in\tau}$ be representatives of $(a_i)_{i\in \tau}$. Then, there is an $A^*${\hyp}indiscernible sequence $(\widetilde{b}^*_j)_{j\in\omega}$ such that ${\mathrm{tp}}(\widetilde{b}^*_0,\widetilde{b}^*_1/A^*)={\mathrm{tp}}(a^*_{j'},a^*_{j''}/A^*)$ for some $j'<j''$. Now, by $\kappa${\hyp}saturation of $\mathfrak{M}$, we can find elements $(b^*_j)_{j\in\omega}$ in $\mathfrak{M}$ realising the same type as $(\widetilde{b}^*_j)_{j\in\omega}$ over $A^*$. So, the projections $b_j\in V$ form an $A${\hyp}indiscernible sequence $(b_j)_{j\in\omega}$ in $V$ such that $b_i\cdot S\cap b_j\cdot S=\emptyset$ for $i\neq j$. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{lem} We now prove the Stabilizer Theorem for piecewise hyperdefinable groups. Below, Corollary \ref{c:stabilizer theorem mos b2} corresponds to \cite[Theorem 2.12 (B2)]{montenegro2018stabilizers}; Theorem \ref{t:stabilizer theorem 2} corresponds to \cite[Theorem 3.5]{hrushovski2011stable} and \cite[Theorem 2.12 (B1)]{montenegro2018stabilizers}; and Theorem \ref{t:stabilizer theorem 2}(c) corresponds to \cite[Proposition 2.14]{montenegro2018stabilizers}.
\begin{teo} \label{t:stabilizer theorem 1} Let $\mathfrak{N}\prec\mathfrak{M}$ with $|N|<\lambda$. Let $G$ be a piecewise $N${\hyp}hyperdefinable group and $\mu$ a locally atomic $N${\hyp}invariant ideal of $\bigwedge_{<\lambda}${\hyp}definable subsets invariant under left translations. Let $p\subseteq G$ be an $N${\hyp}medium $N${\hyp}type. Assume that there is a wide $N${\hyp}type $q\subseteq {\mathrm{St}}(p)$ such that $p^{-1}\cdot q$ is $N${\hyp}medium. Then, ${\mathrm{Stab}}(p)={\mathrm{Stab}}(q)={\mathrm{St}}(p)^2={\mathrm{St}}(q)^4=(p\cdot p^{-1})^2$ is a wide $\bigwedge_N${\hyp}definable subgroup of $G$ without proper $\bigwedge_N${\hyp}definable subgroups of small index. Furthermore:
{\textnormal{\textbf{(a)}}} ${\mathrm{Stab}}(q)\cdot_{\mathrm{nf}(N)}q\subseteq {\mathrm{St}}(p)$.
{\textnormal{\textbf{(b)}}} Every strictly $N${\hyp}medium $N${\hyp}type of ${\mathrm{Stab}}(p)$ lies in ${\mathrm{St}}(p)$. \begin{dem} We take $\widehat{p}$ and $\widehat{q}$ to be coheirs of $\underline{p}$ and $\underline{q}$, respectively. By Lemma \ref{l:translation of medium}, $q$ is $N${\hyp}medium. By Lemma \ref{l:mos 2.11}, $p\cdot_{\mathrm{nf}(N)}p^{-1}\subseteq {\mathrm{St}}(q)$. Then, ${\mathrm{St}}(p)\subseteq p\cdot p^{-1}\subseteq {\mathrm{St}}(q)^2$. Indeed, for any $a,b\in p$ we can find $c\in p$ such that ${\mathrm{tp}}(c/N,a,b)$ does not fork over $N$ --- simply, take $c^*$ realising $\widehat{p}_{\mid N,a^*,b^*}$. Thus, $ac^{-1},bc^{-1}\in {\mathrm{St}}(q)$, concluding that $ab^{-1}\in{\mathrm{St}}(q)^2$ by Lemma \ref{l:basic remarks}(2).
Since $q\subseteq {\mathrm{St}}(p)$, ${\mathrm{Stab}}(q)\cdot_{\mathrm{nf}(N)} q\subseteq {\mathrm{St}}(p)$ by Lemma \ref{l:lemma stabilizer} and Example \ref{e:mos 2.8}. In particular, ${\mathrm{Stab}}(q)\subseteq {\mathrm{St}}(p)^2$ by Lemma \ref{l:basic remarks}(2). Thus, ${\mathrm{Stab}}(p)={\mathrm{Stab}}(q)={\mathrm{St}}(p)^2=(p\cdot p^{-1})^2={\mathrm{St}}(q)^4$ is an $\bigwedge_N${\hyp}definable subgroup. Since $q\subseteq {\mathrm{St}}(p)\subseteq {\mathrm{Stab}}(p)$, we have that ${\mathrm{Stab}}(p)$ is wide.
Take an $\bigwedge_N${\hyp}definable subgroup $T\leq {\mathrm{Stab}}(p)$ such that $[{\mathrm{Stab}}(p):T]$ is small. Since $p\cdot p^{-1}\subseteq {\mathrm{Stab}}(p)$, $[p^{-1}:T]$ is also small. By Lemma \ref{l:index lemma 1}(2), $p\cdot p^{-1}\subseteq T$. Therefore, ${\mathrm{Stab}}(p)=(p\cdot p^{-1})^2\subseteq T$. In other words, ${\mathrm{Stab}}(p)$ does not have proper $\bigwedge_N${\hyp}definable subgroups of small index.
Finally, we prove property {\textnormal{\textbf{(b)}}}. Take a strictly $N${\hyp}medium $N${\hyp}type $r\subseteq {\mathrm{Stab}}(p)$. Fix $c\in q$. Since $\mu$ is locally atomic, there is $b\in r$ such that ${\mathrm{tp}}(b/N,c)\notin \mu$. By Lemma \ref{l:s1 and forking}, ${\mathrm{tp}}(b/N,c)$ does not fork over $N$. Then, by Lemma \ref{l:dividing and functions} and Lemma \ref{l:types and infinite definable functions}, ${\mathrm{tp}}(b^{-1}c^{-1}/N,c)$ does not fork over $N$. Since $c,b\in {\mathrm{Stab}}(q)$, we have $b^{-1}c^{-1}\in{\mathrm{Stab}}(q)$. Write $r'={\mathrm{tp}}(b^{-1}c^{-1}/N)\subseteq {\mathrm{Stab}}(q)$. By {\textnormal{\textbf{(a)}}}, $r'\cdot_{\mathrm{nf}(N)}q\subseteq {\mathrm{St}}(p)$. By Lemma \ref{l:fundamental lemma for the stabilizer theorem} and Example \ref{e:mos 2.8}, we conclude that $b^{-1}=b^{-1}\cdot c^{-1}\cdot c\in {\mathrm{St}}(p)$, so $b\in {\mathrm{St}}(p)$. Since $r$ is a type over $N$, we conclude $r\subseteq {\mathrm{St}}(p)$. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{teo} \begin{coro} \label{c:stabilizer theorem mos b2}
Let $\mathfrak{N}\prec\mathfrak{M}$ with $|N|<\lambda$ and $G$ a piecewise $N${\hyp}hyperdefinable group. Let $\mu$ be an $N${\hyp}invariant locally atomic ideal of $\bigwedge_{<\lambda}${\hyp}definable subsets of $G$ invariant under left and right translations. Let $p$ be a wide type over $N$ and assume that $p^{-1}\cdot p\cdot p^{-1}$ is $N${\hyp}medium. Then, ${\mathrm{Stab}}(p)={\mathrm{St}}(p)^2=(p\cdot p^{-1})^2$ is a wide $\bigwedge_N${\hyp}definable subgroup of $G$ without proper $\bigwedge_N${\hyp}definable subgroups of small index such that every strictly $N${\hyp}medium $N${\hyp}type of ${\mathrm{Stab}}(p)$ lies in ${\mathrm{St}}(p)$. \begin{dem} Take $a\in p$ arbitrary. By the local atomic property, there is $b\in p$ such that ${\mathrm{tp}}(b/N,a)$ is wide. By Lemma \ref{l:translation of medium}, $p\cdot p^{-1}$ is $N${\hyp}medium. Again by Lemma \ref{l:translation of medium} but now using invariance under right translations, we get that $p$ is $N${\hyp}medium. By Lemma \ref{l:s1 and forking}, it follows that ${\mathrm{tp}}(b/N,a)$ does not fork over $N$. Consider the $N${\hyp}type $q={\mathrm{tp}}(ba^{-1}/N)\subseteq p\prescript{}{\mathrm{nf}(N)}{\cdot} p^{-1}$.
By invariance under right translations, we know that $q$ is a wide $\bigwedge_N${\hyp}definable set. As $q\subseteq p\cdot p^{-1}$ with $p\cdot p^{-1}$ $N${\hyp}medium, we have that $q$ is $N${\hyp}medium too. Also, note that $p^{-1}\cdot q\subseteq p^{-1}\cdot p\cdot p^{-1}$, so $p^{-1}\cdot q$ is $N${\hyp}medium too. On the other hand, by Lemma \ref{l:translation of medium} using invariance under right translations, $p^{-1}\cdot p$ is $N${\hyp}medium, so $p\cdot_{\mathrm{nf}(N)}p^{-1}\subseteq {\mathrm{St}}(p)$ by Lemma \ref{l:mos 2.11}. By Example \ref{e:mos 2.8} and Lemma \ref{l:fundamental lemma for the stabilizer theorem}(3), $p\prescript{}{\mathrm{nf}(N)}{\cdot} p^{-1}\subseteq {\mathrm{St}}(p)$ too, so $q\subseteq {\mathrm{St}}(p)$. By Theorem \ref{t:stabilizer theorem 1}, we conclude that ${\mathrm{Stab}}(p)={\mathrm{St}}(p)^2=(p\cdot p^{-1})^2$ is a wide $\bigwedge_N${\hyp}definable subgroup of $G$ without proper $\bigwedge_N${\hyp}definable subgroups of small index such that every strictly $N${\hyp}medium $N${\hyp}type of ${\mathrm{Stab}}(p)$ lies in ${\mathrm{St}}(p)$. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{coro} \begin{teo}[Stabilizer Theorem] \label{t:stabilizer theorem 2}
Let $\mathfrak{N}\prec\mathfrak{M}$ with $|N|<\lambda$ and $G=\underrightarrow{\lim}\, X^n$ a piecewise $N${\hyp}hyperdefinable group generated by a symmetric $\bigwedge_N${\hyp}definable subset $X$. Let $\mu$ be an $N${\hyp}invariant locally atomic ideal of $\bigwedge_{<\lambda}${\hyp}definable subsets of $G$ invariant under left translations. Let $p\subseteq X$ be a wide type over $N$ and assume that $X^3$ is $N${\hyp}medium. Then, ${\mathrm{Stab}}(p)={\mathrm{St}}(p)^2=(p\cdot p^{-1})^2$ is a wide and $N${\hyp}medium $\bigwedge_N${\hyp}definable normal subgroup of small index of $G$ without proper $\bigwedge_N${\hyp}definable subgroups of small index. Furthermore:
{\textnormal{\textbf{(a)}}} $p\cdot p\cdot p^{-1}=p\cdot {\mathrm{Stab}}(p)$ is a left coset of ${\mathrm{Stab}}(p)$.
{\textnormal{\textbf{(b)}}} Every wide $N${\hyp}type of ${\mathrm{Stab}}(p)$ is contained in ${\mathrm{St}}(p)$.
{\textnormal{\textbf{(c)}}} Assume $\mu$ is also invariant under right translations. Then, $p\cdot p^{-1}\cdot p={\mathrm{Stab}}(p)\cdot p$ is a right coset of ${\mathrm{Stab}}(p)$. \begin{dem} Take $a\in p$ arbitrary. By the local atomic property, we find $b\in p$ such that ${\mathrm{tp}}(b/N,a)$ is wide. By Lemma \ref{l:translation of medium}, $X$ is $N${\hyp}medium. As ${\mathrm{tp}}(b/N,a)\subseteq p\subseteq X$, we conclude that ${\mathrm{tp}}(b/N,a)$ is $N${\hyp}medium, so ${\mathrm{tp}}(b/N,a)$ does not fork over $N$ by Lemma \ref{l:s1 and forking}. Also, ${\mathrm{tp}}(a^{-1}b/N,a)=a^{-1}\cdot {\mathrm{tp}}(b/N,a)$ is wide by left invariance of $\mu$, so $q={\mathrm{tp}}(a^{-1}b/N)\subseteq p^{-1}\cdot_{\mathrm{nf}(N)}p$ is wide. By Lemma \ref{l:translation of medium}, $X^2$ is $N${\hyp}medium, so $p^2$ is $N${\hyp}medium too. Using any coheir extending $\underline{p^{-1}}$, we get $p^{-1}\cdot_{\mathrm{nf}(N)}p\subseteq {\mathrm{St}}(p)$ by Lemma \ref{l:mos 2.11}. In particular, $q\subseteq {\mathrm{St}}(p)$. As $X$ is symmetric, $p^{-1}\cdot q\subseteq p^{-1}\cdot p^{-1}\cdot p\subseteq X^3$, so $p^{-1}\cdot q$ is $N${\hyp}medium. Thus, by Theorem \ref{t:stabilizer theorem 1}, ${\mathrm{Stab}}(p)={\mathrm{Stab}}(q)={\mathrm{St}}(p)^2={\mathrm{St}}(q)^4=(p\cdot p^{-1})^2$ is a wide $\bigwedge_N${\hyp}definable subgroup without proper $\bigwedge_N${\hyp}definable subgroups of small index and every strictly $N${\hyp}medium type of ${\mathrm{Stab}}(p)$ is contained in ${\mathrm{St}}(p)$. Also, ${\mathrm{Stab}}(q)\cdot_{\mathrm{nf}(N)}q\subseteq {\mathrm{St}}(p)$.
We now show that ${\mathrm{Stab}}(p)$ is a normal subgroup of small index. First of all, note that $[X^2:{\mathrm{Stab}}(p)]$ is small. Otherwise, by Lemma \ref{l:index lemma 2}, there is an $N${\hyp}indiscernible sequence $(b_j)_{j\in\omega}$ in $X^2$ such that $b_i\cdot {\mathrm{Stab}}(p)\cap b_j\cdot {\mathrm{Stab}}(p)=\emptyset$ for $i\neq j$. Take $d\in p^{-1}$ arbitrary. As $p\cdot p^{-1}\subseteq {\mathrm{Stab}}(p)$, we get $(b_i\cdot p\cdot d)\cap (b_j\cdot p\cdot d)=\emptyset$ for $i\neq j$, so $b_i\cdot p\cap b_j\cdot p=\emptyset$ for $i\neq j$. As $X^3$ is $N${\hyp}medium, this implies $b_0\cdot p\in\mu$, so $p\in\mu$ by invariance under left translations, contradicting our hypotheses. For any $c\in X$, we have $[{\mathrm{tp}}(c/N):{\mathrm{Stab}}(p)]=1$ by Lemma \ref{l:index lemma 1}(1), so ${\mathrm{tp}}(c/N)\cdot {\mathrm{Stab}}(p)\cdot {\mathrm{tp}}(c/N)^{-1}=c\cdot {\mathrm{Stab}}(p)\cdot c^{-1}$ is $\bigwedge_N${\hyp}definable. As $X$ is symmetric, we also get that $[p^{-1}:c\cdot {\mathrm{Stab}}(p)\cdot c^{-1}]=[p^{-1}\cdot c:{\mathrm{Stab}}(p)]\leq [X^2:{\mathrm{Stab}}(p)]$ is small, so $p\cdot p^{-1}\subseteq c\cdot {\mathrm{Stab}}(p)\cdot c^{-1}$ by Lemma \ref{l:index lemma 1}(2). Therefore, ${\mathrm{Stab}}(p)\subseteq c\cdot {\mathrm{Stab}}(p)\cdot c^{-1}$. As $c\in X$ is arbitrary and $X$ is symmetric, ${\mathrm{Stab}}(p)=c\cdot {\mathrm{Stab}}(p)\cdot c^{-1}$. Thus, $X\subseteq \mathrm{N}_G({\mathrm{Stab}}(p))$, concluding ${\mathrm{Stab}}(p)\trianglelefteq G$. Since we have proved that $[X:{\mathrm{Stab}}(p)]$ is small, we conclude that ${\mathrm{Stab}}(p)$ has small index by normality.
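For the reader's convenience, we note that the index identity $[p^{-1}:c\cdot {\mathrm{Stab}}(p)\cdot c^{-1}]=[p^{-1}\cdot c:{\mathrm{Stab}}(p)]$ used above amounts to the following elementary observation: for any set $\Delta$,
\[p^{-1}\subseteq \Delta\cdot\left(c\cdot {\mathrm{Stab}}(p)\cdot c^{-1}\right)\ \Longleftrightarrow\ p^{-1}\cdot c\subseteq (\Delta\cdot c)\cdot {\mathrm{Stab}}(p),\]
and $|\Delta\cdot c|=|\Delta|$, so the two minima agree.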
We now show property {\textnormal{\textbf{(a)}}}, i.e. $p\cdot p\cdot p^{-1}=p\cdot {\mathrm{Stab}}(p)=y\cdot{\mathrm{Stab}}(p)$ for any $y\in p$. As $(p\cdot p^{-1})^2={\mathrm{Stab}}(p)$, we have $p\cdot p\cdot p^{-1}\subseteq p\cdot {\mathrm{Stab}}(p)$. As $[p:{\mathrm{Stab}}(p)]\leq [X:{\mathrm{Stab}}(p)]$ is small, we get $[p:{\mathrm{Stab}}(p)]=1$ by Lemma \ref{l:index lemma 1}(1). Thus, $p\cdot {\mathrm{Stab}}(p)=y\cdot {\mathrm{Stab}}(p)$. On the other hand, take $x\in y\cdot {\mathrm{Stab}}(p)$ arbitrary, so $x^{-1}=c\cdot y^{-1}$ with $c\in{\mathrm{Stab}}(p)={\mathrm{Stab}}(q)$. By definition of $q$, we know that $q={\mathrm{tp}}(a^{-1}b/N)$ with ${\mathrm{tp}}(b/N,a)$ wide and $a,b\in p$. Take $z_0$ such that ${\mathrm{tp}}(z_0,y/N)={\mathrm{tp}}(b,a/N)$. By $N${\hyp}invariance of $\mu$, ${\mathrm{tp}}(z_0/N,y)$ is wide. By the local atomic property, we can find $z\in {\mathrm{tp}}(z_0/N,y)$ such that ${\mathrm{tp}}(z/N,y,c)$ is wide. Thus, ${\mathrm{tp}}(y^{-1}z/N,c)$ is wide by invariance under left translations. By Lemma \ref{l:s1 and forking}, ${\mathrm{tp}}(y^{-1}z/N,c)$ does not fork over $N$. Thus, $x^{-1}\cdot z=c\cdot y^{-1}z\in {\mathrm{Stab}}(q)\cdot_{\mathrm{nf}(N)}q\subseteq {\mathrm{St}}(p)$. In particular, we conclude $x^{-1}\in {\mathrm{St}}(p)\cdot p^{-1}\subseteq p\cdot p^{-1}\cdot p^{-1}$, so $x\in p\cdot p\cdot p^{-1}$.
As $X$ is symmetric, $p\cdot {\mathrm{Stab}}(p)\subseteq X^3$, so $p\cdot {\mathrm{Stab}}(p)$ is $N${\hyp}medium. By Lemma \ref{l:translation of medium}, we conclude that ${\mathrm{Stab}}(p)$ is $N${\hyp}medium. Thus, we conclude property \textnormal{\textbf{(b)}}.
Finally, it remains to prove property \textnormal{\textbf{(c)}}. In other words, assuming that $\mu$ is also invariant under right translations, we want to show that $p\cdot p^{-1}\cdot p={\mathrm{Stab}}(p)\cdot p={\mathrm{Stab}}(p)\cdot y$ for any $y\in p$. Take $a\in p$ arbitrary and, using the local atomic property, find $b\in p$ such that ${\mathrm{tp}}(b/N,a)$ is wide. As $p$ is $N${\hyp}medium, by Lemma \ref{l:s1 and forking}, ${\mathrm{tp}}(b/N,a)$ does not fork over $N$. Consider $q_2={\mathrm{tp}}(ba^{-1}/N)\subseteq p\prescript{}{\mathrm{nf}(N)}{\cdot} p^{-1}$. Then, by invariance under right translations, $q_2$ is a wide $\bigwedge_N${\hyp}definable set. As $q_2\subseteq X^2$ and $p^{-1}\cdot q_2\subseteq X^3$, we conclude that $q_2$ and $p^{-1}\cdot q_2$ are $N${\hyp}medium. As $p\cdot_{\mathrm{nf}(N)} p^{-1}\subseteq {\mathrm{St}}(p)$ by Lemma \ref{l:lemma stabilizer}, we get using Example \ref{e:mos 2.8} and Lemma \ref{l:fundamental lemma for the stabilizer theorem}(3) that $p\prescript{}{\mathrm{nf}(N)}{\cdot} p^{-1}\subseteq {\mathrm{St}}(p)$ too, so $q_2\subseteq {\mathrm{St}}(p)$. By Theorem \ref{t:stabilizer theorem 1}, we conclude that ${\mathrm{Stab}}(p)={\mathrm{Stab}}(q_2)$ and ${\mathrm{Stab}}(q_2)\cdot_{\mathrm{nf}(N)}q_2\subseteq {\mathrm{St}}(p)$. Take $x\in {\mathrm{Stab}}(p)\cdot y$ arbitrary, so $x=cy$ with $c\in{\mathrm{Stab}}(p)={\mathrm{Stab}}(q_2)$ and $y\in p$. Find $z_0$ such that ${\mathrm{tp}}(z_0,y/N)={\mathrm{tp}}(a,b/N)$, so ${\mathrm{tp}}(y/N,z_0)$ is wide. Using the local atomic property, we may find $y_1\in {\mathrm{tp}}(y/N,z_0)$ such that ${\mathrm{tp}}(y_1/N,z_0,c)$ is wide. Take $z$ such that ${\mathrm{tp}}(z,y/N,c)={\mathrm{tp}}(z_0,y_1/N,c)$. Thus, $yz^{-1}\in q_2$ and ${\mathrm{tp}}(y/N,z,c)$ is wide. In particular, as $X^2$ is $N${\hyp}medium, by Lemma \ref{l:s1 and forking}, ${\mathrm{tp}}(yz^{-1}/N,c)$ does not fork over $N$. 
Thus, $x\cdot z^{-1}=c\cdot yz^{-1}\in {\mathrm{Stab}}(q_2)\cdot_{\mathrm{nf}(N)}q_2\subseteq {\mathrm{St}}(p)$, concluding $x\in p\cdot p^{-1}\cdot p$. As $x$ is arbitrary, ${\mathrm{Stab}}(p)\cdot y\subseteq p\cdot p^{-1}\cdot p$. As $p\cdot p^{-1}\cdot p\subseteq {\mathrm{Stab}}(p)\cdot y$, we get $p\cdot p^{-1}\cdot p={\mathrm{Stab}}(p)\cdot p={\mathrm{Stab}}(p)\cdot y$ for any $y\in p$. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{teo} \begin{obss} \label{o:remarks stabilizer} {\textnormal{\textbf{(1)}}} The main improvement of Theorem \ref{t:stabilizer theorem 2} with respect to the original formulations studied in \cite{hrushovski2011stable} and \cite{montenegro2018stabilizers} is that we do not require invariance of $\mu$ under right translations. In both papers the authors primarily considered ideals invariant under two{\hyp}sided translations, but they already anticipated that the hypothesis of invariance under right translations could be weakened, perhaps at the cost of a few properties of ${\mathrm{Stab}}(p)$. We have shown that, in fact, it can be completely eliminated without significant consequences. Indeed, we only use invariance under right translations for \textnormal{\textbf{(c)}}, but this is essentially replaced by \textnormal{\textbf{(a)}}.
{\textnormal{\textbf{(2)}}} Although in \cite[Theorem 2.12]{montenegro2018stabilizers} it was assumed that $X^4$ is $N${\hyp}medium, we only need $N${\hyp}mediumness of $X^3$.
{\textnormal{\textbf{(3)}}} Note that the locally atomic property is used almost exclusively for subsets of $p$. The only place where we apply it to other subsets is in the proof of Theorem \ref{t:stabilizer theorem 1}, to obtain property {\textnormal{\textbf{(b)}}}. Thus, when $\mu$ satisfies the locally atomic property only for subsets of $p$, we lose only property {\textnormal{\textbf{(b)}}}.
For property {\textnormal{\textbf{(b)}}}, we have used the locally atomic property for subsets of ${\mathrm{Stab}}(p)=(p\cdot p^{-1})^2$. However, in fact, it suffices to assume it only for subsets of $p\cdot p\cdot p^{-1}$. Indeed, by Theorem \ref{t:stabilizer theorem 2}{\textnormal{\textbf{(a)}}} and invariance under left translations, if $V\subseteq {\mathrm{Stab}}(p)$ is $\bigwedge_B${\hyp}definable and wide, then $y\cdot V\subseteq p\cdot p\cdot p^{-1}$ is $\bigwedge_{B,y}${\hyp}definable and wide for any $y\in p$. By the locally atomic property on $p\cdot p\cdot p^{-1}$, there is $b\in V$ such that ${\mathrm{tp}}(yb/B,y)$ is wide, concluding that ${\mathrm{tp}}(b/B)$ is wide by invariance under left translations.
{\textnormal{\textbf{(4)}}} It suffices to have $\mu$ defined only on $X^3$, as, in that case, we may extend the ideal by taking the one generated by the finite unions of left translates of elements of $\mu$. As $\mu$ is invariant under left{\hyp}translations within $X^3$ (i.e. $V\in\mu$ if and only if $gV\in\mu$ for any $V\subseteq X^3$ and $g\in G$ such that $gV\subseteq X^3$), this extension coincides with $\mu$ inside of $X^3$, so we can apply the Stabilizer Theorem \ref{t:stabilizer theorem 2} by Remark \ref{o:remarks stabilizer}(3).
\end{obss}
\begin{prop}{\textnormal{\textbf{\cite[Corollary 3.11]{hrushovski2011stable}\&\cite[Proposition 2.13]{montenegro2018stabilizers}}}} Let $\mathfrak{N}\prec\mathfrak{M}$ with $|N|<\lambda$ and $G=\underrightarrow{\lim}\, X^n$ a piecewise $N${\hyp}hyperdefinable group generated by a symmetric $\bigwedge_N${\hyp}definable subset $X$. Let $\mu$ be an $N${\hyp}invariant locally atomic ideal of $\bigwedge_{<\lambda}${\hyp}definable subsets of $G$ invariant under left translations such that $X$ is wide and $X^3$ is $N${\hyp}medium. Then, $\mu$ is $N${\hyp}medium on $G$.
\begin{dem} By the locally atomic property, there is a wide $N${\hyp}type $p\subseteq X$. By the Stabilizer Theorem \ref{t:stabilizer theorem 2}, $S\coloneqq {\mathrm{Stab}}(p)=(p\cdot p^{-1})^2$ is an $N${\hyp}medium normal subgroup of small index. Let $Y(\overline{b})\subseteq G$ be $\bigwedge_{N,\overline{b}}${\hyp}definable with $|\overline{b}|<\lambda$. Let $(\overline{b}_i)_{i\in\omega}$ be an $N${\hyp}indiscernible sequence with ${\mathrm{tp}}(\overline{b}/N)={\mathrm{tp}}(\overline{b}_i/N)$. Suppose that $Y(\overline{b})\notin \mu$. By the locally atomic property of $\mu$, there is a wide $N,\overline{b}${\hyp}type $q(\overline{b})\subseteq Y(\overline{b})$. By Lemma \ref{l:index lemma 1}(1), there is $a$ such that $q(\overline{b})\subseteq a\cdot S$. Let $(\overline{b}'_i)_{i\in \omega}$ be an $N,a^*${\hyp}indiscernible sequence realising ${\mathrm{tp}}((\overline{b}_i)_{i\in\omega}/N)$. By invariance under left translations, $a^{-1}q(\overline{b})\notin \mu$. As $S$ is $N${\hyp}medium, $a^{-1}q(\overline{b}'_0)\cap a^{-1}q(\overline{b}'_1)\notin \mu$. Thus, by $N${\hyp}invariance and invariance under left translations, we get that $q(\overline{b}_0)\cap q(\overline{b}_1)\notin \mu$, concluding $Y(\overline{b}_0)\cap Y(\overline{b}_1)\notin \mu$. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{prop} \subsection{Near{\hyp}subgroups}
Let $G$ be a piecewise $A${\hyp}hyperdefinable group and $\mu$ an ideal of $\bigwedge_{<\lambda}${\hyp}definable subsets invariant under left translations with $\kappa\geq \lambda>|\mathscr{L}|+|A|$. A \emph{$\mu${\hyp}near{\hyp}subgroup} of $G$ over $A$ is a wide $\bigwedge_A${\hyp}definable symmetric set $X$ generating $G$ such that $\mu_{\mid X^3}$ is locally atomic and $A${\hyp}medium. We say that $X$ is a near{\hyp}subgroup if it is a $\mu${\hyp}near{\hyp}subgroup for some such ideal $\mu$. \begin{teo} \label{t:stabilizer nearsubgroups}
Let $G$ be a piecewise $A${\hyp}hyperdefinable group, $\mu$ an ideal of $\bigwedge_{<\lambda}${\hyp}definable subsets and $X$ a $\mu${\hyp}near{\hyp}subgroup over $A$. Then, there is a wide $\bigwedge_{|A|+|\mathscr{L}|}${\hyp}definable normal subgroup $S$ of small index which is contained in $X^4$ and in every $\bigwedge_A${\hyp}definable subgroup of small index. Furthermore, $S=(p\cdot p^{-1})^2$ and $ppp^{-1}=pS=yS$ for any $y\in p$, where $p\subseteq X$ is a wide type over an elementary substructure of size $|A|+|\mathscr{L}|$.
\begin{dem} By the L\"{o}wenheim{\hyp}Skolem Theorem \cite[Theorem 2.3.1]{tent2012course}, we find an elementary substructure $\mathfrak{N}\preceq \mathfrak{M}$ containing $A^*$ with $|N|=|A^*|+|\mathscr{L}|<\lambda$. As $A^*\subseteq N$, $\mu_{\mid X^3}$ is $N${\hyp}medium. By the locally atomic property, there is a wide $N${\hyp}type $p\subseteq X$. Applying the Stabilizer Theorem \ref{t:stabilizer theorem 2} (and Remark \ref{o:remarks stabilizer}(3)), we conclude that there is a wide $\bigwedge_N${\hyp}definable normal subgroup $S\trianglelefteq G$ of small index which does not have proper $\bigwedge_N${\hyp}definable subgroups of small index. Furthermore, $S={\mathrm{Stab}}(p)=(p\cdot p^{-1})^2\subseteq X^4$ and $ppp^{-1}=pS=yS$ for any $y\in p$. As $A^*\subseteq N$, we conclude. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{teo} \begin{teo}\label{t:general lie model} Let $G=\underrightarrow{\lim}\, X^n$ be a piecewise $A${\hyp}hyperdefinable group generated by a near{\hyp}subgroup $X$ over $A$. Assume that $X^n$ is an approximate subgroup for some $n\in\mathbb{N}$. Then, $G$ has a connected Lie model $\pi_{H/K}:\ H\rightarrow L=\rfrac{H}{K}$ with $K\subseteq X^{2n+4}$ such that
{\textnormal{\textbf{(1)}}} $H\cap X^{2n}$ is $\bigwedge_{|A|+|\mathscr{L}|}${\hyp}definable and commensurable to $X^n$,
{\textnormal{\textbf{(2)}}} $\pi_{H/K}[H\cap X^{2n}]$ is a compact neighbourhood of the identity in $L$,
{\textnormal{\textbf{(3)}}} $H$ is generated by $H\cap X^{2n+4}$, and
{\textnormal{\textbf{(4)}}} $\pi_{H/K}$ is continuous and proper from the logic topology using $|A|+|\mathscr{L}|$ many parameters.
\begin{dem} As $X$ is a near{\hyp}subgroup, by Theorem \ref{t:stabilizer nearsubgroups}, $G^{00}_N\subseteq X^4$ with $|N|\leq |A|+|\mathscr{L}|$. As $X^n$ is an approximate subgroup, $\rfrac{X^n}{G^{00}_N}$ is an approximate subgroup. Thus, $\rfrac{X^{2n}}{G^{00}_N}$ is a neighbourhood of the identity by the Generic Set Lemma (Theorem \ref{t:generic set lemma}). Then, by Theorem \ref{t:logic existence of lie core}, we can find a connected Lie core $\pi_{H/K}:\ H\rightarrow L=\rfrac{H}{K}$ with $K\subseteq X^{2n}G^{00}_N\subseteq X^{2n+4}$.
{\textbf{(1)}} Since $\rfrac{X^{n}}{G^{00}_N}$ is compact and $\rfrac{H}{G^{00}_N}$ is an open subgroup in the global logic topology, we conclude that $\left[\rfrac{X^n}{G^{00}_N}:\rfrac{H}{G^{00}_N}\right]$ is finite. As $G^{00}_N\leq H$, we get that $[X^n:H]$ is finite. Thus, by \cite[Lemma 2.2, Lemma 2.3]{machado2021good}, $H\cap X^{2n}$ is an approximate subgroup commensurable to $X^{n}$.
{\textbf{(2)}} As $\rfrac{X^{2n}}{G^{00}_N}$ is a compact neighbourhood of the identity and $\rfrac{H}{G^{00}_N}$ is an open subgroup in the global logic topology, we conclude that $\rfrac{H\cap X^{2n}}{G^{00}_N}$ is a compact neighbourhood of the identity. Recall that $\widetilde{\pi}_{H/K}:\ \rfrac{H}{G^{00}_N}\rightarrow L$ given by $\pi_{H/K}=\widetilde{\pi}_{H/K}\circ\pi_{G/G^{00}_N}$ is a continuous, closed, open and proper onto group homomorphism. Thus, we conclude that $\pi_{H/K}[H\cap X^{2n}]$ is a compact neighbourhood of the identity.
{\textbf{(3)}} Since $L$ is connected and $\pi_{H/K}[X^{2n}\cap H]$ is a neighbourhood of the identity, $\pi_{H/K}[H\cap X^{2n}]$ generates $L$. Therefore, we have that $\pi^{-1}_{H/K}[\pi_{H/K}[H\cap X^{2n}]]=(H\cap X^{2n})\cdot K$ generates $H$. Now, $K\subseteq H\cap X^{2n+4}$, so $(H\cap X^{2n})\cdot K\subseteq (H\cap X^{2n+4})^2$, concluding that $H\cap X^{2n+4}$ generates $H$.
{\textbf{(4)}} By Proposition \ref{p:reduce number of parameters}, the global logic topology of $\rfrac{G}{G^{00}_N}$ is given using $|A|+|\mathscr{L}|$ many parameters, so $H\cap X^{2n}$ is $\bigwedge_{|A|+|\mathscr{L}|}${\hyp}definable and $\pi_{H/K}$ is continuous and closed using $|A|+|\mathscr{L}|$ many parameters. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{teo} Alternatively, using Gleason{\hyp}Yamabe{\hyp}Carolino Theorem \ref{t:gleason yamabe carolino} rather than Gleason{\hyp}Yamabe Theorem \ref{t:gleason yamabe}, we get the following variation which provides some extra control over some of the parameters. \begin{teo}\label{t:general lie model carolino}
There are functions $c:\ \mathbb{N}\rightarrow \mathbb{N}$ and $d:\ \mathbb{N}\rightarrow\mathbb{N}$ such that the following holds for any $\kappa${\hyp}saturated $\mathscr{L}${\hyp}structure $\mathfrak{M}$ and any set of parameters $A$ with $\kappa>|\mathscr{L}|+|A|$:
Let $G=\underrightarrow{\lim}\, X^n$ be a piecewise $A${\hyp}hyperdefinable group generated by a near{\hyp}subgroup $X$ over $A$. Assume that $X^n$ is a $k${\hyp}approximate subgroup for some $n$ and $k$. Then, $G$ has a Lie model $\pi_{H/K}:\ H\rightarrow L=\rfrac{H}{K}$ with $K\subseteq X^{12n+4}$ and $\dim(L)\leq d(k)$ such that
{\textnormal{\textbf{(1)}}} $H\cap X^{2n}$ is $\bigwedge_{|A|+|\mathscr{L}|}${\hyp}definable and $c(k)${\hyp}commensurable to $X^n$,
{\textnormal{\textbf{(2)}}} $\pi_{H/K}[H\cap X^{2n}]$ is a compact neighbourhood of the identity in $L$,
{\textnormal{\textbf{(3)}}} $H$ is generated by $H\cap X^{12n+4}$, and
{\textnormal{\textbf{(4)}}} $\pi_{H/K}$ is continuous and proper from the logic topology using $|A|+|\mathscr{L}|$ many parameters.
\begin{dem} As $X$ is a near{\hyp}subgroup, by Theorem \ref{t:stabilizer nearsubgroups}, $G^{00}_N\subseteq X^4$ with $|N|\leq |A|+|\mathscr{L}|$. As $X^n$ is a $k${\hyp}approximate subgroup, $\rfrac{X^n}{G^{00}_N}$ is an approximate subgroup. Thus, $\rfrac{X^{2n}}{G^{00}_N}$ is a neighbourhood of the identity by the Generic Set Lemma (Theorem \ref{t:generic set lemma}), and, in particular, $\rfrac{G}{G^{00}_N}$ is a locally compact topological group with the global logic topology. Thus, $\rfrac{X^n}{G^{00}_N}$ is contained in the interior, $\rfrac{U}{G^{00}_N}$, of $\rfrac{X^{3n}}{G^{00}_N}$ in the global logic topology. Now, we obviously have that $\rfrac{U^2}{G^{00}_N}\subseteq \rfrac{X^{6n}}{G^{00}_N}$, which is covered by $k^5$ left translates of $\rfrac{X^n}{G^{00}_N}\subseteq \rfrac{U}{G^{00}_N}$. Hence, $\rfrac{U}{G^{00}_N}$ is an open precompact $k^5${\hyp}approximate subgroup.
Let $c_0$ and $d_0$ be the functions provided by Gleason{\hyp}Yamabe{\hyp}Carolino Theorem \ref{t:gleason yamabe carolino}. Applying Theorem \ref{t:gleason yamabe carolino}, we get a Lie model $\pi_{H/K}:\ H\rightarrow L=\rfrac{H}{K}$ with $G^{00}_N\leq K\subseteq U^4\subseteq X^{12n}G^{00}_N\subseteq X^{12n+4}$ and $\dim(L)\leq d(k)\coloneqq d_0(k^5)$ such that $H\cap U^4$ generates $H$ and is $c_0(k^5)${\hyp}commensurable to $U$. Thus, $H\cap X^{12n+4}$ generates $H$ and $c_0(k^5)$ left translates of $H\cap X^{12n+4}$ cover $X^n$. As $X^n$ is a $k${\hyp}approximate subgroup, $k^{15}$ left translates of $X^n$ cover $X^{16n}\supseteq H\cap X^{12n+4}$. By \cite[Lemma 2.2]{machado2021good}, we get that $k^{15}$ left translates of $H\cap X^{2n}$ cover $H\cap X^{12n+4}$, so $H\cap X^{2n}$ and $X^n$ are $c(k)${\hyp}commensurable, where $c(k)\coloneqq c_0(k^5)k^{15}$.
Now, $\rfrac{H\cap X^{2n}}{G^{00}_N}$ is a compact neighbourhood of the identity in the global logic topology. Recall that $\widetilde{\pi}_{H/K}:\ \rfrac{H}{G^{00}_N}\rightarrow L$ given by $\pi_{H/K}=\widetilde{\pi}_{H/K}\circ\pi_{G/G^{00}_N}$ is a continuous, closed, open and proper onto group homomorphism. Thus, $\pi_{H/K}[H\cap X^{2n}]$ is a compact neighbourhood of the identity of $L$.
By Proposition \ref{p:reduce number of parameters}, the global logic topology of $\rfrac{G}{G^{00}_N}$ is given using $|A|+|\mathscr{L}|$ many parameters, so $H\cap X^{2n}$ is $\bigwedge_{|A|+|\mathscr{L}|}${\hyp}definable and $\pi_{H/K}$ is continuous and proper using $|A|+|\mathscr{L}|$ many parameters. \setbox0=\hbox{\ \ \ \footnotesize{\textbf{\normalfont Q.E.D.}}}\kern\wd0 \strut
\kern-\wd0 \box0 \end{dem} \end{teo}
We conclude by applying Theorem \ref{t:general lie model} to the case of rough approximate subgroups. Recall that a \emph{$T${\hyp}rough $k${\hyp}approximate subgroup} of a group $G$ is a subset $X\subseteq G$ such that $X^2\subseteq \Delta XT$ with $|\Delta|\leq k\in\mathbb{N}_{>0}$ and $1\in T\subseteq G$.
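As a simple illustration, take $G=\mathbb{Z}$ written additively and $X=\{-N,\ldots,N\}$. Then $X+X=\{-2N,\ldots,2N\}=(X-N)\cup(X+N)$, so $X$ is a $\{0\}${\hyp}rough $2${\hyp}approximate subgroup, witnessed by $\Delta=\{-N,N\}$; enlarging the thickening to any symmetric $T$ with $0\in T$ keeps $X$ a $T${\hyp}rough $2${\hyp}approximate subgroup.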
Let $G$ be a group and $X\subseteq G$ a symmetric subset. For some fixed $n$, assume that $X^n$ is a $T_i${\hyp}rough $k${\hyp}approximate subgroup for a sequence $(T_i)_{i\in\mathbb{N}}$ of thickenings such that
\textbf{(a)} it decreases in doubling scales, i.e. $T_{i+1}T^{-1}_{i+1}\subseteq T_i$ for each $i\in\mathbb{N}$, and
\textbf{(b)} it is asymptotically normalised by $X$, i.e. $x^{-1}T_{i+1}x\subseteq T_i$ for each $x\in X$ and $i\in \mathbb{N}$.
Assuming saturation and $\bigwedge${\hyp}definability, we have that $T=\bigcap T_i$ is an $\bigwedge${\hyp}definable subgroup of $G$ normalised by $X$ and $X^n$ is a $T${\hyp}rough $k${\hyp}approximate subgroup. \begin{teo}[Rough Lie Model Theorem] \label{t:rough lie model} Let $G$ be an $A${\hyp}definable group, $T\leq G$ an $\bigwedge_{A}${\hyp}definable subgroup and $X\subseteq G$ a symmetric $\bigwedge_{A}${\hyp}definable subset. Write $\widetilde{G}$ for the subgroup generated by $X$. Assume that $X$ normalises $T$, $\rfrac{X}{\kern 0.1em T}$ is a near{\hyp}subgroup of $\rfrac{\widetilde{G}}{\kern 0.1em T}$ and $X^n$ is a $T${\hyp}rough approximate subgroup for some $n$. Then, $\widetilde{G}T\leq G$ has a connected Lie model $\pi_{H/K}:\ H\rightarrow L=\rfrac{H}{K}$ with $T\subseteq K\subseteq X^{2n+4}T$ such that
\textnormal{\textbf{(1)}} $H\cap X^{2n}T$ is $\bigwedge_{|A|+|\mathscr{L}|}${\hyp}definable and commensurable to $X^nT$,
\textnormal{\textbf{(2)}} $\pi_{H/K}[H\cap X^{2n}]$ is a compact neighbourhood of the identity in $L$,
\textnormal{\textbf{(3)}} $H$ is generated by $H\cap X^{2n+4}T$, and
\textnormal{\textbf{(4)}} $\pi_{H/K}$ is continuous and proper from the logic topology using $|A|+|\mathscr{L}|$ many parameters. \end{teo} Alternatively, using Theorem \ref{t:general lie model carolino}: \begin{teo}[Rough Lie model Theorem, version 2] \label{t:rough lie model carolino}
There are functions $c:\ \mathbb{N}\rightarrow \mathbb{N}$ and $d:\ \mathbb{N}\rightarrow\mathbb{N}$ such that the following holds for any $\kappa${\hyp}saturated $\mathscr{L}${\hyp}structure $\mathfrak{M}$ and any set of parameters $A$ with $\kappa>|\mathscr{L}|+|A|$:
Let $G$ be an $A${\hyp}definable group, $T\leq G$ an $\bigwedge_{A}${\hyp}definable subgroup and $X\subseteq G$ a symmetric $\bigwedge_{A}${\hyp}definable subset. Write $\widetilde{G}$ for the subgroup generated by $X$. Assume that $X$ normalises $T$, $\rfrac{X}{\kern 0.1em T}$ is a near{\hyp}subgroup of $\rfrac{\widetilde{G}}{\kern 0.1em T}$ and $X^n$ is a $T${\hyp}rough $k${\hyp}approximate subgroup for some $n$ and $k$. Then, $\widetilde{G}T\leq G$ has a Lie model $\pi_{H/K}:\ H\rightarrow L=\rfrac{H}{K}$ with $T\subseteq K\subseteq X^{12n+4}T$ and $\dim(L)\leq d(k)$ such that
\textnormal{\textbf{(1)}} $H\cap X^{2n}T$ is $\bigwedge_{|A|+|\mathscr{L}|}${\hyp}definable and $c(k)${\hyp}commensurable to $X^nT$,
\textnormal{\textbf{(2)}} $\pi_{H/K}[H\cap X^{2n}]$ is a compact neighbourhood of the identity in $L$,
\textnormal{\textbf{(3)}} $H$ is generated by $H\cap X^{12n+4}T$, and
\textnormal{\textbf{(4)}} $\pi_{H/K}$ is continuous and proper from a logic topology using $|A|+|\mathscr{L}|$ many parameters. \end{teo}
\end{document} |
\begin{document}
\title {On intermediate extensions of generic extensions by a random real}
\author { Vladimir Kanovei\thanks{IITP RAS and MIIT,
Moscow, Russia, \ {\tt kanovei@googlemail.com} --- contact author. Partial support of RFFI grant 17-01-00705 acknowledged. } \and Vassily Lyubetsky\thanks{IITP RAS,
Moscow, Russia, \ {\tt lyubetsk@iitp.ru}. Partial support of Russian Scientific Fund grant 14-50-00150 acknowledged. } }
\date {\today}
\maketitle
\begin{abstract} The paper is the second in our series of notes aimed at bringing back into circulation some bright ideas of early modern set theory, mainly due to Harrington and Sami, which have never been adequately presented in set theoretic publications. We prove that if a real $a$ is random over a model $M$ and $x\in M[a]$ is another real, then either (1) $x\in M$, or (2) $M[x]=M[a]$, or (3) $M[x]$ is a random extension of $M$ and $M[a]$ is a random extension of $M[x]$. This is a little-known result of old set theoretic folklore which, as far as we know, has never been published.
As a corollary, we prove that $\fs1n$-Reduction holds for all $n\ge3$, in a model extending $\rL$ by $\ali$-many random reals.
{\footnotesize \tableofcontents }
\end{abstract}
\np
\parf{Introduction}
It is known from Solovay \cite{sol}, and especially Grigorieff \cite{gri} in most general form, that any subextension $\rV[x]$ of a generic extension $\rV[G]$, generated by a set $x\in\rV[G]$, is itself a generic extension $\rV[x]=\rV[G_0]$ of the same ground universe $\rV$, and the whole extension $\rV[G]$ is equal to a generic extension $\rV[G_0][G_1]$ of the subextension $\rV[x]=\rV[G_0]$. See a more recent treatment of this question in \cite{jechmill,zapt,kl21,kl32e}. In particular, it is demonstrated in \cite{kl32e} that if $\dP=\stk\dP\leq\in \rV$ is a forcing notion, a set $G\sq\dP$ is $\dP$-generic over $\rV$, $t\in\rV[G]$ is a $\dP$-name, $x=t[G]\in\rV[G]$ is the $G$-valuation of $t$, and $x\sq\rV$, then \ben \Renu \itlb{i1} there is a set $\Sg\sq\dP$ such that $\rV[\Sg]=\rV[x]$ and $G$ is $\Sg$-generic over $\rV[x]$;
\itlb{i2} there exists a stronger order $\leq_t$ on $\dP$ (so that $p\leq q$ implies $p\leq_t q$) in $\rV$ such that $\Sg$ itself is $\stk\dP{\leq_t}$-generic over $\rV[\Sg]=\rV[x]$. \een However the nature and forcing properties of the derived forcing notions $\dP_0=\stk\dP{\leq_t}\in\rV$ and $\dP_1(x)=\stk\Sg{\leq}\in\rV[x]$ are not immediately clear.
On the trivial side, we have the Cohen forcing $\dP=\dC=\bse.$ In this case, $\dP_0$ and $\dP_1(x)$ are countable forcing notions, hence the corresponding extensions, $\rV\to\rV[x]$ and $\rV[x]\to \rV[G]$ in the above scheme, are Cohen generic or trivial. As observed in \cite{kl32e}, this leads to the following result of set theoretic folklore, which has never explicitly appeared in set theoretic publications, except for \cite[Lemma 1.9]{samiPHD}. (It can also be derived from some results in \cite{gri}, especially 4.7.1 and 2.14.1.)
\bte [folklore, Sami] \lam{intC} Let\/ $a\in\dn$ be Cohen-generic over the ground set universe\/ $\rV$. Let\/ $x$ be a real in\/ $\rV[a]$. Then we have exactly one of the following$:$ \ben
\tenu{{\rm(C\arabic{enumi})}} \itlb{intC1} $x\in\rV\,;$ \qquad\qquad
{\rm(C2)} $\,\rV[x]=\rV[a]\,;$
\atc \itlb{intC2} {\rm (a)} $\rV[x]$ is a Cohen-generic extension of\/ $\rV$, and\/ \\[0.5ex] {\rm (b)} $\rV[a]$ is a Cohen-generic extension of\/ $\rV[x]$.\snos {Theorem~\ref{intC} dramatically fails for intermediate extensions not generated by sets, \cite{abyss}.} \qed \een \ete
A much more complex case is the Levy -- Solovay extension of $\rL$, the constructible universe. As established in \cite{sol}, such an extension is equal to a Levy -- Solovay extension of $\rL[x]$ for any real $x$ it contains.
The following theorem, proved below, is a result of the same type.
\bte \lam{intR} Let\/ $a\in\dn$ be Solovay-random over the ground set universe\/ $\rV$.
Let\/ $x$ be a real in\/ $\rV[a]$. Then we have exactly one of the following$:$ \ben \tenu{{\rm(R\arabic{enumi})}} \itlb{intR1} $x\in\rV\,;$ \qquad\qquad
{\rm(R2)} $\,\rV[x]=\rV[a]\,;$
\atc \itlb{intR2} {\rm (a)} $\rV[x]$ is a Solovay-random extension of\/ $\rV$, and\/ \\[0.5ex] {\rm (b)} $\rV[a]$ is a Solovay-random extension of\/ $\rV[x]$. \een \ete
It is {\bfit not\/} asserted though that the real $x$ itself is random over $\rV$ in (a) and/or the real $a$ itself is random over $\rV[x]$ in (b).
Note that Theorem~\ref{intR} contains two separate dichotomies: \ref{intR1} vs.\ \ref{intR2}(a) and (R2) vs.\ \ref{intR2}(b). In spite of its obvious resemblance to Theorem~\ref{intC}, this theorem takes more effort to prove. Its proof (it begins in Section~\ref{XX1}) involves some results belonging rather to real analysis and measure theory.
Now we proceed with an application of Theorem~\ref{intR}.
\parf{A corollary: Reduction in extensions by random reals} \las{redf}
\vyk{ The separation property for a pointclass $K$, or simply \rit{\dd KSeparation}, is the assertion that any two disjoint sets $X,Y$ in $K$ (in the same Polish space) can be separated by a set in $K\cap \dop K$, where $\dop K$ is the pointclass of complements of sets in $K$. }
The reduction property for a pointclass $K$, or simply \rit{\dd KReduction}, is the assertion that for any two sets $X,Y$ in $K$ (in the same Polish space) there exist \rit{disjoint} sets $X'\sq X$, $Y'\sq Y$ in the same class $K$, such that $X'\cup Y'=X\cup Y$.
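The mechanism behind Reduction is transparent at the lowest level: if $X=\bigcup_nX_n$ and $Y=\bigcup_nY_n$ are open subsets of the space $\dn,$ all $X_n,Y_n$ being basic clopen sets, then the open sets
$$
X'=\bigcup_n\Big(X_n\bez\bigcup_{m\le n}Y_m\Big),
\qquad
Y'=\bigcup_n\Big(Y_n\bez\bigcup_{m<n}X_m\Big)
$$
reduce the pair $X,Y$: a point of $X\cap Y$ goes to $X'$ if it enters some $X_n$ strictly before entering any $Y_m$, and to $Y'$ otherwise. The deeper reduction results quoted below implement the same \lap{who comes first} idea with appropriate transfinite systems of approximations.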
It is known classically from studies of
Kuratowski \cite{kursep} that
Reduction holds for $\fp11$ and $\fs12$, but fails for $\fs11$ and $\fp12$.
As for the higher projective classes, Addison \cite{add2} proved that the axiom of constructibility $\rV=\rL$ implies that
Reduction holds for $\fs1n\yd n\ge 3$, but fails for $\fp1n\yd n\ge 3$. On the other hand, by Martin \cite{martAD}, the axiom of projective determinacy $\mathbf{PD}$ implies that, similarly to projective level $1$,
$\fp1n$-Reduction holds for all odd numbers $n\ge3$, and, similarly to projective level $2$,
$\fs1n$-Reduction holds for all even $n\ge4$.
Apparently not much is known about Reduction for higher projective classes in generic models. One can expect that rather homogeneous, well-behaved forcing notions produce generic extensions of $\rL$ in which Reduction continues to hold for the projective classes $\fs1n$ and accordingly fails for $\fp1n$, $n\ge3$, while in specially designed non-homogeneous extensions this pattern can be violated. This idea is supported by a few known results. Ramez Sami \cite{samiPHD} proved
\bte [Sami] \lam{mtC} It is true in any extension of\/ $\rL$ by\/ $\ali$ Cohen reals that if\/ $n\ge3$ then\/ $\is1n$-Reduction holds, and hence\/ $\fs1n$-Reduction holds, too.\snos {To prove that $\is1n$-Reduction implies the boldface $\fs1n$-Reduction, it suffices to use a double-universal pair of $\is1n$ sets, as those used in a typical proof that $\fs1n$-Reduction and $\fs1n$-Separation contradict each other. This argument does not work for Separation though.}\qed \ete
On the other hand, we proved in \cite{kl28} that Reduction fails for $\fs13$ (and in fact Separation fails for both $\fs13$ and $\fp13$) in a rather complicated model related to an $\ali$-product of forcings similar to Jensen's minimal forcing \cite{jenmin}. See also \cite{kl30e,kl38} on similar models in which the Uniformization principle fails for $\fp12$ (or $\fp1n$ for a given $n\ge3$) sets with countable sections, and \cite{kl47a} on some related (and very complex) models of Harrington.
Here we prove the following theorem.
\bte \lam{mt'} It is true in any extension of\/ $\rL$ by\/ $\ali$ Solovay-random reals that if\/ $n\ge3$ then\/ $\is1n$-Reduction holds, and hence\/ $\fs1n$-Reduction holds, too. \ete
Note that the theorem also holds in models obtained by adding any uncountable (not necessarily $\ali$) number $\ka$ of random reals. (Because such models are elementarily equivalent to the extension by $\ali$ random reals.)
Sami's proof of Theorem~\ref{mtC} involves Theorem~\ref{intC}. Accordingly, we'll use Theorem~\ref{intR} in a rather similar way. The following lemma is the key ingredient.
\ble [proof see Section~\ref{lok}] \lam{llok} If\/ $n\ge 2$ and\/ $\vpi(x)$ is a parameter-free\/ $\is1n$ formula then there is a parameter-free\/ $\is1n$ formula\/ $\vpa(x)$ such that if\/ $x$ is a real in an\/ $\ali$-random extension\/ $N$ of\/ $\rL$ then\/ $\vpi(x)$ holds in\/ $N$ iff\/ $\rL[x]\mo\vpa(x)$. \ele
A similar result was obtained by Solovay \cite{sol} (\poo\ Levy -- Solovay extensions) and by Sami \cite{samiPHD} (\poo\ extensions by $\ali$ Cohen reals).
\bpf[Theorem \ref{mt'}, sketch] The idea, due to Sami \cite[Lemma 1.11]{samiPHD}, is to closely emulate Addison's proof of $\is1n$-Reduction in $\rL$.
Arguing in an\/ $\ali$-random extension\/ $N$ of\/ $\rL$, we suppose that $n\ge3$, and $X=\ens{x}{\vpi(x)}$ and $Y=\ens{x}{\psi(x)}$ are sets of reals, $\vpi$ and $\psi$ being $\is1n$ formulas. Then, by Lemma~\ref{llok}, we have $X=\ens{x}{\rL[x]\mo\vpa(x)}$ and $Y=\ens{x}{\rL[x]\mo\psa(x)}$, where $\vpa$ and $\psa$ are still $\is1n$-formulas. Thus $\vpa(x)$ is $\sus y\,\Phi(x,y)$ and $\psa(x)$ is $\sus y\,\Psi(x,y)$, $\Phi$ and $\Psi$ being $\ip1{n-1}$.
Still arguing in $N$, if $x\in\dn$ then let $\llx x$ be the canonical G\"odel wellordering of the reals in $\rL[x]$, of order type $\omi$. The crucial property of this system of order relations says that the \rit{bounded quantifiers} $\kaz y'\llx x y$ and $\kaz y'\lelx x y$, applied to a $\is1n$ formula, yield a $\is1n$ formula. It follows that the sets $$ \bay{rcl} X' &=& \ens{x}{\rL[x]\mo \sus y \big( \Phi(x,y)\land \kaz y'\llx x y \:\neg\:\Psi(x,y') \big)}\\[1ex]
Y' &=& \ens{x}{\rL[x]\mo \sus y \big( \Psi(x,y)\land \kaz y'\lelx x y \:\neg\:\Phi(x,y') \big)} \eay $$ are $\is1n$, because the relativization to $\rL[x]$ does not violate being $\is1n$ ($n\ge 2$).\vtm
\epF{Theorem~\ref{mt'}, modulo Lemma~\ref{llok} and Theorem~\ref{intR}}
\parf{Randomness is measure-independent} \las{XX0}
\rit{Random} (or \rit{Solovay-random}) reals, over a set universe $\rV$, are usually defined as those reals in $\dn,$ or true reals in the unit interval $[0,1]=\dI$, which avoid Borel sets, coded in $\rV$ and null with respect to, resp., the usual product probability measure $\mu$ on $\dn,$ or the Lebesgue measure $\la$ on $\dI$.
That the $\mu$-random reals in $\dn$ and $\la$-random reals in $\dI$ produce the same generic extensions and thereby both notions can be identified, is witnessed by the Borel map $f(a)=\sum_{a(n)=1}2^{-n-1}:\dn\onto\dI$. It satisfies $\la(\im fX)=\mu(X)$ for any Borel $X\sq\dn,$ therefore if $a\in\dn$ and $x=f(a)\in\dI$ then $a$ is $\mu$-random iff $x$ is $\la$-random, and $\rV[a]=\rV[x]$, of course. There is a general version of such a correspondence, which will be used in the proof of Theorem~\ref{intR} below.
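For instance, for the basic cylinder $X=\ens{a\in\dn}{a(0)=1}$ we have $\mu(X)=1/2$, while $\im fX=[1/2,1]$ and $\la([1/2,1])=1/2$. The general Borel case follows, since $\la$ is the image measure of $\mu$ under $f$ and $f$ is injective off the countable set of eventually constant sequences.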
\ble \lam{xxl} Assume that\/ $\nu$ is a continuous\/ {\rm(that is, all singletons are null sets)} Borel probability measure defined on\/ $\dn$ in a set universe\/ $\rV$. Then there is a Borel map\/ $g:\dn\onto\dI$, coded in\/ $\rV$, and such that if\/ $a\in\dn$ and\/ $x=g(a)\in\dI$ then\/ $a$ is $\nu$-random over\/ $\rV$ iff\/ $x$ is\/ $\la$-random over\/ $\rV$, and\/ $\rV[a]=\rV[x]$. \ele \bpf Let $\lx$ be the lexicographical order on $\dn,$ and let $\inl ab=\ens{a'}{a\lx a' \lx b}$ denote $\lx$-intervals.
Let $g(a)=\nu(\inl{\olex}{a})$, where $\olex\in\dn$ is the $\lx$-least element, $\olex(k)=0\yt\kaz k$. Easily $g$ is measure-preserving: if $X\sq\dn$ is Borel then $\nu(X)=\la(\im gX)$. (See \eg\ the proof of Theorem 17.41 in Kechris~\cite{dst}.) It follows that $a$ is $\nu$-random iff $x$ is $\la$-random, whenever $a\in\dn$ and $x=g(a)$. To see that $a\in\rV[x]$, note that $J=\aim gx$ is a closed $\lex$-interval in $\dn$, the interior of which (if non-empty) is a $\nu$-null set, hence $a$ is equal to one of the two endpoints of $J$. \epf
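For a concrete instance of Lemma~\ref{xxl}, let $\nu$ be the biased coin measure on $\dn$ under which the values $a(n)$ are independent and $\nu(\ens{a}{a(n)=1})=p\in(0,1)$ for every $n$. Writing $q_1=p$ and $q_0=1-p$, the interval $\inl{\olex}{a}$ is, up to a $\nu$-null set, the disjoint union over those $n$ with $a(n)=1$ of the cylinders $\ens{a'}{a'(n)=0\land \kaz m<n\,(a'(m)=a(m))}$, so that
$$
g(a)=\sum_{n:\,a(n)=1}(1-p)\prod_{m<n}q_{a(m)}\,.
$$
For $p=1/2$ this reduces to the map $f(a)=\sum_{a(n)=1}2^{-n-1}$ considered above.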
\parf{Intermediate submodels of random extensions: case split} \las{XX1}
We begin here a proof of Theorem~\ref{intR}. It will use only basic forcing ideas and some classical theorems related to real analysis.
Thus let $\jao\in\dn$ be Solovay-random over the background set universe $\rV$. We shall assume that ${\jxo}\in\rV[\jao]$ is a real in the unit segment $[0,1]$ of the true real line $\dR$. As the Solovay-random forcing admits continuous reading of names, there is a continuous map $f:\dn\to \dI$, coded in $\rV$, such that ${\jxo}=f({\jao})$.
Let $\muo$ be the usual product probability measure on $\dn,$ and $\la$ be the Lebesgue measure on the segment $\dI=[0,1]$.
{\ubf We have to prove the trichotomy $\ref{intR1}$ vs.\ {\rm(R2)} vs.\ $\ref{intR2}$ of Theorem~\ref{intR}}.\vom
{\ubf First split.} \rit{Arguing in\/ $\rV$}, consider the set $C=\ens{x\in\dI}{\muo(\aim fx)>0}$. It is at most countable. Consider the sets $D=\aim fC$ and $A_1=\dn\bez D$. These are resp.\ $\Fs$ and $\Gd$ sets coded in $\rV$; we identify them with \lap{the same} (\ie, coded by the same codes) sets in the extensions $\rV[\jao]$, $\rV[\jxo]$.
{\ubf Case 1:} $\jao\in D$. Then there is a real $\bar y\in\dI\cap\rV$ such that $\jao\in\aim f{\bar y}$, hence ${\jxo}=\bar y\in\rV$, and \ref{intR1} holds.\vom
{\ubf Case 2:} $\jao\in A_1$. Then $\muo(A_1)>0$ by the randomness. In $\rV$, there is an $\Fs$ set $A'_1\sq A_1$ of the same measure, so the Borel set $A_1\bez A'_1$, coded in $\rV$, is null, and hence $\jao\in A'_1$. Therefore there is, in $\rV$, a perfect set $A_2\sq A'_1$, satisfying $\jao\in A_2$ and $\muo(A_2)>0$. We let $\mu(A)={\muo(A)}/{\muo(A_2)}$, for any measurable $A\sq A_2$, so $\mu$ is a continuous probability measure on $A_2$, and the real $\jao\in A_2$ is $\mu$-random over $\rV$. The set $Y_2=\im f{A_2}$ is closed, and by construction we have \ben \fenu \itlb{assf1} if $x\in Y_2$ then $\mu(\aim fx)=0$ (\ie, $f$-preimages of singletons are $\mu$-null). \een
The set $R$ of all rational intervals $J\sq\dI$, such that $\mu(\aim f{J\cap Y_2})=0$, is at most countable. Therefore ${\Ao}=A_2\bez\bigcup_{J\in R}\aim f{J\cap Y_2}$ is a closed subset of $A_2$, of the same measure, $f$ maps ${\Ao}$ onto the closed set ${\Yo}=Y_2\bez\bigcup R$, and we have \ben \fenu \atc \itlb{assf2} if $J$ is an open interval in $\dI$ and ${\Yo}\cap J\ne\pu$ then $\mu(\aim f{{\Yo}\cap J})>0$. \een
\bdf \lam{hatf} If $x\in\dI$ then let $\haf(x)=\mu(\aim f{{\Yo}\cap[0,x)})$, so $\haf:\dI\to\dI$. \edf
\ble \lam{xx1} The map\/ $\haf$ is continuous, $\ran\haf=\dI$, and $\haf$ is strictly increasing, except that\/ $\haf(x)=\haf(x')$ in case when\/ $x<x'$ belong to\/ $\dI$ and\/ ${\Yo}\cap(x,x')=\pu$. \ele \bpf Let $x<x'$ belong to $\dI$. Then $\haf(x)\le\haf(x')$ is clear. To prove the strict inequality, note that $\haf(x')-\haf(x)=\mu(\aim f{{\Yo}\cap[x,x')})>0$ provided ${\Yo}\cap(x,x')\ne\pu$, and apply \ref{assf1}, \ref{assf2}. \epf
\ble \lam{xx2} The superposition map\/ $\baf(a)=\haf(f(a)):{\Ao}\onto\dI$ is continuous and measure-preserving in the sense that if\/ $X\sq\dI$ is Borel then\/ $\mu(\aim\baf{X})=\la(X)$, while if\/ $A\sq{{\Ao}}$ is Borel then\/ $\la(\im\baf A)\ge\mu(A)$. \ele \bpf Consider any interval $X=[0,m)$ in $\dI$; $0\le m\le1$. By definition, $\haf(x)\in X$ iff $\mu(\aim f{{\Yo}\cap[0,x)})<m$. Therefore the $\haf$-preimage $\aim\haf{X}$ is equal to $Z=[0,M)$, where $M$ is the largest real in $\dI$ satisfying the inequality $\mu(\aim f{{\Yo}\cap[0,M)})\le m$. Then clearly $\mu(\aim f{{\Yo}\cap Z})=m$.
But $\aim f{{\Yo}\cap Z}= \aim f{\aim\haf{X}}=\aim\baf{X}$. We conclude that $\mu(\aim\baf{X})=\la(X)=m$ for any $X=[0,m)$, as above. By induction, this implies $\mu(\aim\baf{X})=\la(X)$ for any Borel $X\sq\dI$, the first claim. The second claim follows, since $A\sq \aim\baf{\im\baf{A}}$, and any analytic set has a Borel subset of the same measure. \epf
\bcor [under Case 2] \lam{xxC} The real ${\jyo}=\baf({\jao})=\haf({\jxo})\in\dI$ is $\la$-ran\-dom over $\rV$. Therefore\/ $\rV[{\jxo}]=\rV[{\jyo}]$ is a Solovay-random extension of $\rV$. \ecor \bpf To prove the second claim, note that $\haf$ is \lap{almost} $1-1$ on ${\Yo}$ by Lemma~\ref{xx1}, and hence $\rV[{\jxo}]=\rV[{\jyo}]$. \epf
We have another case split. In $\rV$, let $\cB$ be the family of all Borel sets $B\sq \Ao$ such that $\mu(B)>0$ and $F$ is $1-1$ on $B$. The set $\cB$ can be empty or not, but in any case there is a Borel set $B_0$, equal to a union of $\le\alo$ sets in $\cB$, such that $\mu(B'\bez B_0)=0$ for any $B'\in\cB$. (If $\cB=\pu$ then $B_0=\pu$ as well.) We let $\Ai=\Ao\bez B_0$ and $\Yi=\im F{\Ai}$. Thus $\Ai$ is Borel, $\Yi\sq \Yo$ analytic, and \ben \fenu \atc \atc \itlb{assf3} if $B\sq \Ai$ is Borel and $\mu(B)>0$ then $F$ is {\ubf not} 1-1 on $B$. \een
{\ubf Subcase 2a of Case 2:} $\jao\in\Ao\bez\Ai$. By construction there is a Borel set $B\sq\Ao$ such that $\jao\in B$, $\mu(B)>0$, and $F$ is $1-1$ on $B$. Then $\jao\in\id11(p,p',\jyo)$ for some $p,p'\in\rV$ (codes for $F,B$), hence ${\jao}\in\rV[{\jyo}]=\rV[{\jxo}]$, thus (R2) holds.\vom
{\ubf Subcase 2b of Case 2:} not Subcase 2a. This is the {\ubf key subcase}, and it will be considered in the two following sections.
\vyk{ By classical theorems of descriptive set theory, the subcase assumption can be reduced to the assumption that \ben \fenu \atc \atc \itlb{assf3} if $y\in \dI$ then the preimage $\aim Fy$ is uncountable. \een And the goal is to prove \ref{intR2}(b) of Thm~\ref{intR} under the assumptions \ref{assf1},\ref{assf2},\ref{assf3}. }
\parf{The key subcase, measure construction} \las{XX2}
Here we prove that $\rV[{\jao}]$ is a random extension of $\rV[{\jxo}]$. First of all, we define, in $\rV[{\jxo}]$, a measure on the set $\Om=\aim F{{\jxo}}$, with respect to which ${\jao}$ itself will be random. We'll make use of the following lemma which combines effects of forcing and the Shoenfield absoluteness theorem.
\ble \lam{xxA} Let\/ $\vpi(x)$ be a combination of\/ $\is11$-formulas and\/ $\ip11$-formulas, by means of $\land$, $\lor$, $\neg$, and quantifiers over\/ $\om$, and with reals in\/ $\rV$ as parameters. If\/ $\vpi(\jyo)$ is true then there is a closed set\/ $Y\sq\dI$ of positive measure\/ $\la(Y)>0$, coded in\/ $\rV$, containing\/ ${\jyo}$, and satisfying\/ $\vpi(y)$ for all\/ $y\in Y$. \ele \bpf The set $\ens{y}{\vpi(y)}$ is measurable, hence, it is true in $\rV$ that any Borel $Y_0\sq\dI$ of positive measure contains a perfect subset $Y\sq Y_0$ still of positive measure $\la(Y)>0$, satisfying either (1) $\kaz y\in Y\,\vpi(y)$ or (2) $\kaz y\in Y\,\neg\,\vpi(y)$. These statements are $\ip12$, hence absolute by Shoenfield. It follows by the randomness of ${\jyo}$ that there is a perfect subset $Y\sq\dI$ of positive measure, containing ${\jyo}$ and satisfying (1) or (2). But (2) is impossible because of $\vpi({\jyo})$. \epf
Now suppose that $B\sq{{\Ai}}$ is a Borel set.
If $X\sq\dI$ then let $\og BX=B\cap\aim\baf X=\ens{a\in B}{\baf(a)\in X}$.
In particular, if $x\in\dI$ then $\og Bx=\ens{a\in B}{\baf(a)=x}$, a cross-section.
Note that $\mu(B)\le \la(\im\baf B)$ by Lemma~\ref{xx2}, and if $X\sq\im FB$ is Borel then $\mu(\og BX)\le \la(X)$.
If $X\sq\dI$ is Borel then put $\la_B(X)=\mu(\og BX)$; $\la_B$ is a $\sg$-additive Borel measure on $\dI$, concentrated on $\im FB$ and satisfying $\la_B(X)\le\la(X)$.
If $x\in\dI$ then let $U_B(x)=\la_B([0,x))=\mu(\og B{[0,x)})$. It is important that $U_B:\dI\to\dI$ is non-decreasing ($x<y\imp U_B(x)\le U_B(y)$). We'll make use of the following collection of classical results related to monotone real functions.
\bpro [see \eg\ \cite{riss}, Chapters I and II] \label{ris} \ben \renu \itlb{ris1} If\/ $B\sq{{\Ai}}$ is a Borel set then the derivative\/ $U'_B(x)<\iy$ exists for\/ $\la$-almost all\/ $x\in\dI\,;$
\itlb{ris2}
If\/ $B\sq{{\Ai}}$ is a Borel set and\/ $U'_B(x)=0$ for\/ $\la$-almost all\/ $x\in\dI$, then\/ $U'_B(x)=0$ for all\/ $x\in\dI\,;$
\itlb{ris3} if\/ $B_0,B_1,\dots\sq{{\Ai}}$ are pairwise disjoint Borel sets, and\/ $B=\bigcup_nB_n$, then\/ $U_B(x)=\sum_nU_{B_n}(x)$, $\kaz x$, and\/ $U'_B(x)=\sum_nU'_{B_n}(x)$ for\/ $\la$-almost all\/ $x\in\dI$.\qed \een \epro
\ble \lam{xx3} If\/ $C\sq{{\Ai}}$ and\/ $X\sq\im FC$ are Borel sets, $\mu(C)>0$, and\/ $B=\og CX$, then\/ $U_C'(x)=U'_B(x)$ for\/ $\la$-almost all\/ $x\in X$. \ele \bpf Let $A=C\bez B$, so that $X$ and $Y=\im FA$ are disjoint sets satisfying $X\cup Y=\im FC$. Accordingly we have $U_C(x)=U_B(x)+U_A(x)$ for all $x\in \dI$, therefore $U'_C(x)=U'_B(x)+U'_A(x)$ for $\la$-almost all $x$ (those at which all three derivatives are defined). However we have $U'_A(x)=0$ for $\la$-almost all $x\in X$; in fact, the equality holds at all points $x\in X$ of density 1. As required. \epf
\bdf \lam{dfom} Let $\Om=\aim f{{\jxo}}=\aim\baf{{\jyo}}$ (a
closed set, containing ${\jao}$). \edf
\ble \lam{xx4} If\/ $P\sq \Om$ is a Borel set coded in\/ $\rV[{\jyo}]$ then there is a Borel set\/ $B\sq\dI$, coded in\/ $\rV$ and such that\/ $P=\og B {{\jyo}}$. \ele \bpf There is a Borel set $W\sq\dI\ti{{\Ai}}$, coded in $\rV$, such that $P=W_{{\jyo}}=\ens{a}{\ang{{\jyo},a}\in W}$ (a cross-section). Thus $W_{{\jyo}}\sq \Om=\aim\baf{{\jyo}}$. By Lemma~\ref{xxA}, there is a Borel set $X\sq\dI$ of positive measure $\la(X)>0$, coded in $\rV$, containing ${\jyo}$, and such that $W_{y}\sq \aim\baf{y}$ holds for all $y\in X$. Then $B=\ens{a}{\baf(a)\in X\land\ang{\baf(a),a}\in W}$ is a Borel set coded in $\rV$. Moreover $W_y=\og B y$ for all $y\in X$ by construction, in particular, $P=\og B {{\jyo}}$. \epF{Lemma}
\bdf \lam{xxD} If $P\sq \Om$ is a Borel set coded in\/ $\rV[{\jyo}]$ then let $\nu(P)=U_B'({\jyo})$, for any $B$ as in the lemma. It follows from Proposition~\ref{ris}\ref{ris1} that $U_B'({\jyo})$ is defined, because ${\jyo}$ is random over $\rV$ by the above. \edf
\ble \lam{xx6} $\nu(P)$ is independent of the choice of $B$. \ele \bpf Suppose that $C\sq {{\Ai}}$ is another Borel set satisfying $P=\og C {{\jyo}}$. By Lemma~\ref{xxA}, there is a Borel set $X\sq\dI$ of positive measure $\la(X)>0$, coded in $\rV$, containing ${\jyo}$, and such that $\og C {y}=\og B {y}$ holds for all $y\in X$. Then $U_B'(y)= U_C'(y)$ for $\la$-almost all $y\in X$ by Lemma~\ref{xx3}. Therefore $U_B'({\jyo})= U_C'({\jyo})$, as ${\jyo}\in X$ is random.
\epf
Thus $\nu$ is a well-defined measure on Borel sets $P\sq\Om$ in $\rV[{\jyo}]$.
\parf{The key subcase, proof of randomness} \las{XX3}
To finalize the proof of Theorem~\ref{intR} in Case 2b, we are going to show that ${\jao}$ is $\nu$-random over\/ $\rV[{\jyo}]$. Then it suffices to apply Lemma~\ref{xxl} to transform $\jao$ into a \lap{standard} $\la$-random real in $\dI$. We first of all show that $\nu$ is a \lap{good} measure.
\ble \lam{111} In\/ $\rV[{\jyo}]$, $\nu$ is a\/ $\sg$-additive continuous probability measure on\/ $\Om$. \ele \bpf (A) To prove $\nu(\Om)=1$ take $B={{\Ai}}.$ Then $\og B{{\jyo}}=\aim \baf{{\jyo}}=\Om$. Lemma~\ref{xx2} implies $$ U_B(x)=\la_B([0,x))=\mu(\og B{[0,x)})= \mu(\aim\baf{[0,x)})=\la([0,x))=x\,, $$ and hence $U'_B(x)=1$ for all $x$. In particular, $\nu(\Om)=U'_B({\jyo})=1$.\vom
(B) Prove $\sg$-additivity. Lemma~\ref{xx4} reduces this to the following claim: \rit{if\/ $\sis{C_n}{n<\om}\in\rV$ is a sequence of Borel sets\/ $C_n\sq{{\Ai}},$ and\/ $(\og{C_k}{{\jyo}})\cap (\og{C_n}{{\jyo}})=\pu$ for all\/ $k\ne n$, and\/ $C=\bigcup_nC_n$, then\/ $U'_C({\jyo})=\sum_n U'_{C_n}({\jyo})$.} By Lemma~\ref{xxA}, there is a Borel set $X\sq\dI$ with $\la(X)>0$, coded in $\rV$, containing ${\jyo}$, and such that $(\og{C_k}{{y}})\cap (\og{C_n}{{y}})=\pu$ for all $y\in X$, $k\ne n$. The Borel sets $B_n=\og{C_n}X\sq{{\Ai}}$ are pairwise disjoint, and the set $B=\og CX$ satisfies $B=\bigcup_nB_n$.
Moreover, we have $U_B(x)=\sum_nU_{B_n}(x)$ for all $x$, and\/ $U'_B(x)=\sum_nU'_{B_n}(x)$ for $\la$-almost all $x\in\dI$ by Proposition~\ref{ris}\ref{ris3}. Finally, Lemma~\ref{xx3} implies that $U'_B(x)=U'_C(x)$ and $U'_{B_n}(x)=U'_{C_n}(x)$ for all $n$ and $\la$-almost all $x\in X$. It follows that $U'_C(x)=\sum_nU'_{C_n}(x)$ for $\la$-almost all $x\in X$, hence, $U'_C({\jyo})=\sum_nU'_{C_n}({\jyo})$ by the randomness, as required.\vom
(C) To prove that $\nu$ is continuous, suppose to the contrary that $z_0\in\Om$ and $\nu(\ans{z_0})>0$. By definition there is a Borel set $C\sq{{\Ai}},$ coded in $\rV$ and satisfying $\og C{{\jyo}}=\ans{z_0}$ and $U'_C({\jyo})>0$. By Lemma~\ref{xxA}, there is a Borel set $X\sq\dI$ with $\la(X)>0$, coded in $\rV$, containing ${\jyo}$, and such that $\og C{{y}}$ is a singleton and $U'_C({y})>0$ for all $y\in X$. Let $B=\og CX$. Then $\og B{{\jyo}}=\ans{z_0}$, $\og B{{y}}$ is a singleton for all $y\in X$, and $U'_B(y)>0$ for $\la$-almost all $y\in X$, by Lemma~\ref{xx3}. It follows that $U_B(1)>0$, hence $\mu(B)=U_B(1)>0$. Moreover, by the singleton condition, the preimage $\aim Fy\cap B=\og By$ is a singleton for all $y\in\im FB\sq X$. But this contradicts the Case 2b assumption. \epf
\ble \lam{xx8} ${\jao}$ is\/ $\nu$-random over\/ $\rV[{\jyo}]$. \ele \bpf Assume that $P\sq\Om$ is a Borel set, coded in $\rV[{\jyo}]$, and $\nu(P)=0$; we have to prove that ${\jao}\nin P$. By definition there is a Borel set $C\sq{{\Ai}},$ coded in $\rV$ and satisfying $P=\og C{{\jyo}}$ and $U'_C({\jyo})=0$. By Lemma~\ref{xxA}, there is a closed (here, this is more suitable than Borel) set $X\sq\dI$ of positive measure $\la(X)>0$, coded in $\rV$, containing ${\jyo}$, and such that $U'_C(y)=0$ for all $y\in X$.
Let $B=\og CX$. Then $P=\og B{{\jyo}}$, and $U'_B(y)=0$ for $\la$-almost all $y\in X$ by Lemma~\ref{xx3}. Note that $\im FB\sq X$, thus $U_B(x)$ is constant inside any open interval disjoint from $X$. Hence $U'_B(y)=0$ for all $y\in\dI\bez X$, and overall $U'_B(y)=0$ for $\la$-almost all $y\in \dI$. This implies $U_B(x)=0$ for all $x\in\dI$ by Proposition~\ref{ris}\ref{ris2}. Therefore $\la_B(\dI)=\mu(B)=0$ by construction. We conclude that ${\jao}\nin B$, by the $\mu$-randomness of ${\jao}$. Then ${\jao}\nin P=\og B{{\jyo}}$, as required. \epf
\qeD{Theorem~\ref{intR}}
\bcor \lam{rl1+} If\/ $x,y$ are reals in an\/ $\ali$-random extension\/ $N=\rL[\sis{a_\xi}{\xi<\omi}]$ of\/ $\rL$, then\/ $y$ belongs to a random extension of\/ $\rL[x]$ inside\/ $N$. \ecor \bpf We have $x\in N_\al=\rL[\sis{a_\xi}{\xi<\al}]$ and $y\in N_\ba$, for some $\al<\ba<\omi$. The model $N_\al$ is equal to a simple extension of $\rL$ by one random real. Thus, by Theorem~\ref{intR}, either $N_\al=\rL[x]$ or $N_\al$ is a random extension of $\rL[x]$. In addition, $N_\ba$ is a random extension of $N_\al$. This implies the result required. \epf
\parf{Proof of the localization lemma} \las{lok}
\bpf[Lemma~\ref{llok}] Let $\don$ be the weakest element of any forcing considered, and $\dox=\ans\bon\ti x$ be the canonical name for any set $x$ in the ground set universe $\rV$. Let $\raf$ be the random forcing and $\for_\raf$ be the associated forcing relation.
\bcl \lam{rfc} If\/ $n\ge2$ and $\vpi(\cdot)$ is a parameter-free\/ $\is1n$-formula, resp., $\ip1n$-formula, then the set\/ $F_\vpi=\ens{x}{\don\for_\raf\vpi(\dox)}$ is\/ $\is1n$, resp., $\ip1n$. \ecl \bpf We make use of a standard Borel coding system for subsets of $\dn.$ It consists of $\ip11$ sets $\kos\sq\dn$ and $W_+\yi W_-\sq\bn\ti\bn,$ and an assignment $c\mto \bks c\sq\dn$, such that (1) $\ens{\bks c}{c\in\kos}$ is exactly the family of all Borel sets $X\sq\dn,$ and (2) if $c\in\kos$ and $x\in\dn$ then $x\in\bks c$ iff $W_+(c,x)$ iff $\neg\:W_-(c,x)$.
To define an associated coding system for Borel maps, let $e\mto \sis{(e)_n}{n<\om}$ be a recursive homeomorphism $\dn\onto\dnp\om$. Let $\kof=\ens{e\in\dn}{\kaz n\,((e)_n\in\kos)}$ --- \imar{kof} the set of codes of Borel maps $f:\dn\to\dn.$ If $e\in\kof$ then define a Borel map $\bkf e:\dn\to\dn$ so that $\bkf e(x)(n)=1$ iff $x\in \bks{(e)_n}$, for all $x\in\dn,$ $n<\om$.
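For example (an immediate consequence of the definition, stated for orientation), if $e\in\kof$ is such that $\bks{(e)_n}=\ens{x\in\dn}{x(n)=1}$ for every $n$, then $\bkf e(x)(n)=1$ iff $x(n)=1$, so that $\bkf e$ is the identity map $\dn\to\dn$.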
If $\vpi(v_1,\dots,v_k)$ is any formula, $e_1,\dots,e_k\in \kof$, and $x\in\bn,$ then let $\vpi(e_1,\dots,e_k)[x]$ be the formula $\vpi(\bkf{e_1}(x),\dots,\bkf{e_k}(x))$, and let $$ \TS \forc\vpi=\ens{\ang{c,e_1,\dots,e_k}\in\kos\ti\kof^k} {\mu(\bks c)>0\land \bks c\for_\raf \vpi(e_1,\dots,e_k)[\ja]}\,, $$ where $\ja$ is a canonical name for the random real. We assert the following. \ben \fenu \itlb{rfc*} If $\vpi$ is a $\ip11$ formula then $\forc\vpi\in\is12$. If $\vpi$ is a $\is1n$ formula, $n\ge2$, then $\forc\vpi\in\is1n$. If $\vpi$ is a $\ip1n$ formula, $n\ge2$, then $\forc\vpi\in\ip1n$. \een This is proved by induction. If $\vpi(v)$ is $\ip11$ then $\ang{c,e}\in\forc\vpi$ iff the set $X=\ens{x\in \bks c}{\neg \:\vpi(\bkf e(x))}$ is null, which is roughly estimated to be $\is12$ via coverings by $\Gd$ sets. To pass $\ip1n\to\is1{n+1}$, assume that $\vpi(v_1):=\sus v_2\,\psi(v_1,v_2)$, where $\psi$ is $\ip1n$. Then $\ang{c,e_1}\in \forc\vpi$ iff $ \sus e_2\in\kof\,(\ang{c,e_1,e_2}\in\forc\psi) $. (We make use of the fact that the random forcing admits Borel reading of names.) Thus if $\forc\psi$ is $\is1{n+1}$ then so is $\forc\vpi$. To pass $\is1n\to\ip1{n}$, let $\vpi(v)$ be $\is1n$. Then $$ \ang{c,e}\in \forc{\neg\:\vpi} \leqv \kaz c'\in\kos\,(\bks{c'}\sq \bks{c}\land \mu(\bks{c'})>0 \imp\ang{c',e}\nin\forc\vpi)\,. $$ Thus if $\forc\vpi$ is $\is1{n}$ then $\forc{\neg\,\vpi}$ is $\ip1{n}$. This ends the proof of \ref{rfc*}.
Now to prove the claim note that $x\in F_\vpi$ iff $\ang{c_0,e_x}\in\forc\vpi$, where $c_0\in\kos$ satisfies $\bks{c_0}=\dn,$ while $e_x\in\kof$ is such that $\bkf{e_x}$ is the constant map $\bkf{e_x}(a)=x$, $\kaz a\in\dn$. \epF{Claim}
To finalize the proof of Lemma~\ref{llok}, we define formulas $\vpa(x)$ by induction. If $\vpi$ is $\is12$ or $\ip12$ then $\vpa:=\vpi$ works by Shoenfield. Suppose that $n\ge2$, $\psi(x,y)$ is $\ip1n$, and a $\ip1n$-formula $\psa$ is defined, satisfying $\psi(x,y)\leqv \rL[x,y]\mo\psa(x,y)$ in $N=\rL[\sis{a_\xi}{\xi<\omi}]$ (a given $\ali$-random extension). We define $\vpa(x)$ to be the formula $\don \for_\raf\sus y\,(\rL[\dox,y]\mo\psa(\dox,y))$. This is a $\is1{n+1}$-formula by Claim~\ref{rfc}, so it remains to show that $\vpi(x)\leqv\rL[x]\mo\vpa(x)$ in $N$.
Assume that $x$ is a real in $N$ satisfying $\vpi(x)$. Thus there is a real $y\in N$ satisfying $\psi(x,y)$, or equivalently, $\rL[x,y]\mo\psa(x,y)$. By Corollary \ref{rl1+}, $y$ belongs to a random extension of $\rL[x]$ inside $N$. Therefore, as the random forcing is homogeneous, it is true in $\rL[x]$ that $\don \for_\raf\sus y\,(\rL[\dox,y]\mo\psa(\dox,y))$. In other words, $\rL[x]\mo\vpa(x)$.
To prove the converse, assume that $\rL[x]\mo \big(\don \for_\raf \sus y\,(\rL[\dox,y]\mo\psa(\dox,y))\big)$. Consider any real $z\in N$ random over $\rL[x]$. Then $\sus y\,(\rL[x,y]\mo\psa(x,y))$ holds in $\rL[x,z]$, so there is a real $y\in\rL[x,z]$ satisfying $\rL[x,y]\mo\psa(x,y)$. Then $N\mo\psi(x,y)$ by the choice of $\psa$, hence finally $N\mo\vpi(x)$.\vtm
\epF{Lemma~\ref{llok} and Theorem~\ref{mt'}}
\end{document}
\begin{document}
\title{\bf Asymptotics and scalings for large product-form networks via
the Central Limit Theorem} \author{Guy Fayolle
\thanks{Postal address: INRIA ---
Domaine de Voluceau, Rocquencourt, BP105 ---
78153 Le Chesnay, France.}
\and Jean-Marc Lasgouttes$\ ^*$}
\date{February 1996}
\maketitle
\begin{abstract} The asymptotic behaviour of a closed BCMP network, with $n$ queues and $m_n$ clients, is analyzed when $n$ and $m_n$ become simultaneously large. Our method relies on Berry-Esseen type approximations arising in the Central Limit Theorem. We construct {\em critical sequences\/} $m^{\scriptscriptstyle0}_n$, which are necessary and sufficient to distinguish between saturated and non-saturated regimes for the network. Several applications of these results are presented. It is shown that some queues can act as bottlenecks, thus limiting the global efficiency of the system. \end{abstract} \nocite{FayLas:3}
\let\oldphi=\phi \let\phi=\varphi \let\varphi=\oldphi \let\eps=\varepsilon \renewcommand\liminf{\mathop{\rm \underline{lim}}} \renewcommand\limsup{\mathop{\rm \overline{lim}}} \def\Re{\mathop{\rm Re}\nolimits} \def\Im{\mathop{\rm Im}\nolimits} \def\i{{\rm i}}
\def\*#1{{\cal #1}} \def\m#1{\beta^{\scriptscriptstyle(#1)}} \def\0{^{\scriptscriptstyle0}} \def\wh{\widehat} \def\wt{\widetilde}
\hyphenation{uni-form-ly}
\input{bn-intro} \input{bn-model} \input{bn-llt} \input{bn-scal} \input{bn-ass} \input{bn-appl} \input{bn-concl} \begin{secappendix}{Appendix} \input{bn-bound} \input{bn-proof} \end{secappendix}
\end{document}
\begin{document}
\title{A young person's guide to mixed Hodge modules} \author[M. Saito]{Morihiko Saito} \address{RIMS Kyoto University, Kyoto 606-8502 Japan} \begin{abstract} We give a rather informal introduction to the theory of mixed Hodge modules for young mathematicians. \end{abstract} \maketitle \centerline{\bf Introduction} \par
\par\noindent The theory of mixed Hodge modules (\cite{mhp}, \cite{mhm}) was originally constructed as a Hodge-theoretic analogue of the theory of $\ell$-adic mixed perverse sheaves (\cite{th1}, \cite{weil}, \cite{BBD}) and also as an extension of Deligne's mixed Hodge theory (\cite{th2}, \cite{th3}) including the theory of degenerations of variations of pure or mixed Hodge structures (see \cite{Sch}, \cite{St}, \cite{CK}, \cite{CKS1}, \cite{StZ}, \cite{Kadm}, etc.). The main point is the ``stability'' under the direct images and the pull-backs by morphisms of complex algebraic varieties and also under the dual and the nearby and vanishing cycle functors. Here ``stability'' means more precisely the ``stability theorems'' saying that there are {\it canonically defined functors} between the derived categories. These stability theorems become quite useful by combining them with the fundamental theorem of mixed Hodge modules asserting that any admissible variations of mixed Hodge structure on smooth complex algebraic varieties in the sense of \cite{StZ}, \cite{Kadm} are mixed Hodge modules \cite{mhm}; in particular, mixed Hodge modules on a point are naturally identified with graded-polarizable mixed ${\mathbf Q}$-Hodge structures in the sense of Deligne \cite{th2}. \par
Technically the {\it strictness} of the Hodge filtration $F$ on the underlying complexes of ${\mathcal D}$-modules is very important in the theory of mixed Hodge modules. Here ${\mathcal D}$-modules are indispensable for the generalization of Deligne's theory in the absolute case (\cite{th2}, \cite{th3}) to the {\it relative} case. For instance, this includes the assertion that any morphism of mixed Hodge modules is {\it bistrict} for the Hodge and weight filtrations $F,W$, generalizing the case of mixed Hodge structures by Deligne \cite{th2}. ${\mathcal D}$-modules are also essential for the construction of a relative version of ${\rm Dec}\,W$ in the proof of the stability theorem of mixed Hodge modules by the direct images under projective morphisms extending Deligne's argument in the absolute case, where we have {\it bistrict} complexes of ${\mathcal D}$-modules with filtrations $(F,{\rm Dec}\,W)$ under the direct images by projective morphisms, see \cite[Proposition 2.15]{mhm}. \par
In order to understand the theory of mixed Hodge modules, the general theory of ${\mathcal D}$-modules does not seem to be absolutely indispensable. In fact, ${\mathcal D}$-modules appearing in the theory of mixed Hodge modules are rather special ones, and we have to deal with them always as {\it $F$-filtered ${\mathcal D}$-modules}, where a slightly different kind of argument is usually required. For instance, although the {\it regularity} of ${\mathcal D}$-modules is quite useful for the construction of the direct images by affine open immersions, this can be reduced essentially to the normal crossing case by using Beilinson's functor together with the stability theorem under the direct images by projective morphisms (see \cite{def}), and in the latter case, a generalization of the Deligne canonical extensions \cite{eq} to the case of ${\mathcal D}$-modules with normal crossing singular supports is sufficient. Also the pull-backs of mixed Hodge modules under closed immersions are constructed by using nearby and vanishing cycle functors, which is entirely different from the usual construction of pull-backs of ${\mathcal D}$-modules, see \cite[Section 4.4]{mhm}. \par
Note finally that we need only Zucker's Hodge theory in the curve case \cite{Zu} for the proof of the stability theorem of mixed Hodge modules under the direct images by proper morphisms, and classical Hodge theory is not used, see (2.5) below. \par
The author thanks the referee for useful comments. The author is partially supported by Kakenhi 15K04816. \par
In Section~1 we explain the main properties of pure Hodge modules. In Section~2 we give an inductive definition of pure Hodge modules, and explain an outline of proofs of Theorems~(1.3) and (1.4). In Section~3 we explain a simplified definition of mixed Hodge modules following \cite{def}. In Appendices we explain some basics of hypersheaves, ${\mathcal D}$-modules, and compatible filtrations. \par
\par\noindent {\bf Conventions 1.} We assume that algebraic varieties in this paper are always defined over ${\mathbf C}$ and are reduced (but not necessarily irreducible). More precisely, a variety means a separated reduced scheme of finite type over ${\mathbf C}$, but we consider only its {\it closed points}, that is, its {\it ${\mathbf C}$-valued points}. So it is close to a variety in the sense of Serre (except that reducible varieties are allowed here). We also assume that a variety is always quasi-projective, or more generally, globally embeddable into a smooth variety (where morphisms of varieties are assumed quasi-projective) in order to simplify some arguments. The reader may assume that all the varieties in this paper are reduced quasi-projective complex algebraic varieties. \par
\par\noindent {\bf 2.} We use {\it analytic} sheaves on complex algebraic varieties; in particular, any ${\mathcal D}$-modules are analytic ${\mathcal D}$-modules. (These are suitable for calculations using local coordinates.) For the underlying filtered ${\mathcal D}$-module $(M,F)$ of a mixed Hodge module on $X$, one can pass to the corresponding {\it algebraic} filtered ${\mathcal D}$-module by applying GAGA to each $F_pM$ after taking an extension of the mixed Hodge module over a compactification of $X$. \par
\par\noindent {\bf 3.} In this paper, perverse sheaves (which do not seem appropriate at least for book titles, see \cite{Di}, \cite{KS}) are mainly called hypersheaves by an analogy with hypercohomology versus cohomology (although the word ``sheaf'' may not be of Greek origin). The abelian category of hypersheaves on $X$ is denoted by ${\mathbf H\mathbf S}(X,A)$, where $A$ is a subfield of ${\mathbf C}$. Hypersheaves are not sheaves in the usual sense, but they behave like sheaves in some sense; for instance, they can be defined locally provided that gluing data satisfying some compatibility condition are also given. \par
\par
\vbox{\centerline{\bf 1. Main properties of pure Hodge modules} \par
\par\noindent In this section we explain the main properties of pure Hodge modules.} \par
\par\noindent {\bf 1.1.~Filtered ${\mathcal D}$-modules with ${\mathbf Q}$-structure.} A pure Hodge module ${\mathcal M}$ on a smooth complex algebraic variety $X$ of dimension $d_X$ is basically a coherent left ${\mathcal D}_X$-module $M$ endowed with the Hodge filtration $F$ such that $(M,F)$ is a filtered left ${\mathcal D}_X$-module with $F_pM$ coherent over ${\mathcal O}_X$. The filtration $F$ on ${\mathcal D}_X$ is by the order of differential operators, and we have the filtered ${\mathcal D}$-module condition $$(F_p{\mathcal D}_X)\,F_qM\subset F_{p+q}M\quad\quad(p\in{\mathbf N},\,q\in{\mathbf Z}), \leqno(1.1.1)$$ (which is equivalent to {\it Griffiths transversality} in the case of variations of Hodge structure), and the equality holds in (1.1.1) for $q\gg0$. Moreover $M$ has a ${\mathbf Q}$-{\it structure} given by an isomorphism $$\alpha_{{\mathcal M}}:{\rm DR}_X(M)\cong{\mathbf C}\otimes_{{\mathbf Q}}K\quad\hbox{in}\,\,\,{\mathbf H\mathbf S}(X,{\mathbf C}), \leqno(1.1.2)$$ with $K\in{\mathbf H\mathbf S}(X,{\mathbf Q})$ (see (A.1) below). Here ${\rm DR}_X(M)$ is the de Rham complex of the ${\mathcal D}_X$-module $M$ viewed as a quasi-coherent ${\mathcal O}_X$-module with an integrable connection: $${\rm DR}_X(M):=\bigl[M\to\Omega_X^1\otimes_{{\mathcal O}_X}M\to\cdots\to\Omega_X^{d_X}\otimes_{{\mathcal O}_X}M\bigr], \leqno(1.1.3)$$ with last term put at degree 0. In (1.1.2) we {\it assume} $${\rm DR}_X(M)\in{\mathbf H\mathbf S}(X,{\mathbf C}), \leqno(1.1.4)$$ and moreover (1.1.4) holds with $M$ replaced by any subquotients of $M$ as coherent ${\mathcal D}$-modules. (More precisely, these properties follow from the condition that $M$ is {\it regular holonomic}, see (B.5.2) below. We effectively assume the last condition in the definition of Hodge modules, see Remark~(ii) after Theorem~(1.3).) In this paper we also assume $$\hbox{$M,K$ are quasi-unipotent (see (B.6) below).} \leqno(1.1.5)$$ \par
We will denote by ${\rm MF}(X,{\mathbf Q})$ the category of $${\mathcal M}=\bigl((M,F),K,\alpha_{{\mathcal M}}\bigr)$$ satisfying the above conditions. (Sometimes $\alpha_{{\mathcal M}}$ will be omitted to simplify the notation.) We also use the notation $${\rm rat}({\mathcal M}):=K. \leqno(1.1.6)$$ The category ${\rm MH}(X,w)$ of pure Hodge modules of weight $w$ on $X$ will be defined as a full subcategory of ${\rm MF}(X,{\mathbf Q})$. \par
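\par A standard first example to keep in mind (stated here for orientation; it is not taken from the text above): for $X$ smooth of dimension $d_X$, take $M={\mathcal O}_X$ with the trivial filtration $F$ (so that $F_pM=M$ for $p\geqslant0$ and $F_pM=0$ for $p<0$), $K={\mathbf Q}_X[d_X]$, and $\alpha_{{\mathcal M}}$ induced by the holomorphic Poincar\'e lemma ${\rm DR}_X({\mathcal O}_X)\cong{\mathbf C}_X[d_X]$ (note that by (1.1.3) the de Rham complex sits in degrees $[-d_X,0]$). This object, usually denoted ${\mathbf Q}_X^H[d_X]$, belongs to ${\rm MF}(X,{\mathbf Q})$, and it is in fact a pure Hodge module of weight $d_X$ with strict support $X$. \par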
\par\noindent {\bf 1.2.~Strict support decomposition.} The first condition for pure Hodge modules is the {\it decomposition by strict support} $${\mathcal M}=\h{$\bigoplus$}_{Z\subset X}\,{\mathcal M}_Z\quad\hbox{in}\,\,\,{\rm MF}(X,{\mathbf Q}), \leqno(1.2.1)$$ where $Z$ runs over irreducible closed subvarieties of $X$, and $${\mathcal M}_Z=\bigl((M_Z,F),K_Z,\alpha_{{\mathcal M}_Z}\bigr)$$ has {\it strict support} $Z$ (that is, its support is $Z$, and it has no nontrivial sub nor quotient object supported on a proper subvariety of $Z$). More precisely, we assume that the last condition is satisfied for both $M_Z$ and $K_Z$. Note that the condition for $K_Z$ is equivalent to that $K_Z$ is an intersection complex with local system coefficients, see \cite{BBD}. We then get $${\rm Hom}({\mathcal M}_Z,{\mathcal M}_{Z'})=0\quad\hbox{if}\,\,\,Z\ne Z'. \leqno(1.2.2)$$ In fact, this holds with ${\mathcal M}_Z,{\mathcal M}_{Z'}$ replaced by $M_Z,M_{Z'}$ or by $K_Z,K_{Z'}$. \par
In (2.2) below, we will define the full subcategory $${\rm MH}_Z(X,w)\subset{\rm MF}(X,{\mathbf Q})$$ consisting of {\it pure Hodge modules of weight $w$ with strict support $Z$} by increasing induction on $\dim Z$, and put $${\rm MH}(X,w):=\h{$\bigoplus$}_{Z\subset X}\,{\rm MH}_Z(X,w)\subset{\rm MF}(X,{\mathbf Q}), \leqno(1.2.3)$$ where the direct sum over closed irreducible subvarieties $Z$ of $X$ is justified by (1.2.1--2). \par
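\par As an illustration of (1.2.1--3) (a standard example, not spelled out in the text): let $X$ be smooth irreducible, $z\in X$ a point with inclusion $i_z:\{z\}\hookrightarrow X$, and $H$ a polarizable pure Hodge structure of weight $d_X$. Then the constant Hodge module ${\mathbf Q}_X^H[d_X]$ (with underlying ${\mathcal D}$-module ${\mathcal O}_X$ and ${\mathbf Q}$-structure ${\mathbf Q}_X[d_X]$; the notation is standard) and the skyscraper $i_{z*}H$ are both pure of weight $d_X$, with strict supports $X$ and $\{z\}$ respectively, so that ${\mathcal M}={\mathbf Q}_X^H[d_X]\oplus i_{z*}H$ decomposes as in (1.2.1), and (1.2.2) says that there are no nonzero morphisms between the two summands. \par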
This full subcategory ${\rm MH}_Z(X,w)\subset{\rm MF}(X,{\mathbf Q})$ can be defined effectively by the following {\it fundamental theorem of pure Hodge modules}, which may be viewed as a {\it working definition} of ${\rm MH}_Z(X,w)$. \par
\par\noindent {\bf Theorem~1.3} (\cite[Theorem~3.21]{mhm}). {\it For any closed irreducible subvariety $Z\subset X$, the restriction to sufficiently small open subvarieties of $Z$ induces an equivalence of categories $${\rm MH}_Z(X,w)\buildrel{\sim}\over\longrightarrow {\rm VHS}_{\rm gen}(Z,w-\dim Z)^p, \leqno(1.3.1)$$ where the right-hand side is the category of polarizable variations of pure Hodge structure of weight $w-\dim Z$ defined on smooth dense open subvarieties $U$ of $Z$. $($More precisely, we take the inductive limit over $U\subset Z.)$ Moreover, $(1.3.1)$ induces a one-to-one correspondence between polarizations of ${\mathcal M}\in{\rm MH}_Z(X,w)$ $($see $(2.2.2)$ below$)$ and those of the corresponding generic variation of Hodge structure.} \par
\par\noindent {\bf Remarks.} (i) The equivalence of categories (1.3.1) means that any pure Hodge module with strict support $Z$ is generically a polarizable variation of pure Hodge structure, and conversely any polarizable variation of pure Hodge structure defined on a smooth dense Zariski-open subset $U\subset Z$ can be extended uniquely to a pure Hodge module with strict support $Z$. \par
(ii) By using the stability theorem of pure Hodge modules under the direct images by projective morphisms (see Theorem~(1.4) below), the proof of Theorem~(1.3) can be reduced to the case where $Z=X$ and a variation of Hodge structure is defined on the complement $U$ of a divisor with normal crossings. In this case we use the original definition of pure Hodge modules in \cite{mhp} given by induction on the dimension of the support of $M$ and using the nearby and vanishing cycle functors. \par
In the case where $D:=X\setminus U$ is a divisor with normal crossings, the above extension of a variation of Hodge structure on $U$ to a Hodge module on $X$ is rather easy to describe by using Deligne's canonical extension \cite{eq}. In fact, we have the following explicit formula (see \cite[(3.10.12)]{mhm}): $$F_pM=\h{$\sum$}_{i\geqslant 0}\,F_i\hskip1pt{\mathcal D}_X\hskip1pt\bigl(j_*F_{p-i}\hskip1pt{\mathcal L}\cap\widehat{{\mathcal L}}^{\,>-1}\bigr)\subset\widehat{{\mathcal L}}^{\,>-1}(*D), \leqno(1.3.2)$$ where $j:U\hookrightarrow X$ is the inclusion. Here ${\mathcal L}$ is the locally free ${\mathcal O}_U$-module underlying the variation of Hodge structure with $F$ the Hodge filtration, $\widehat{{\mathcal L}}^{\,>-1}$ is the Deligne extension of ${\mathcal L}$ over $X$ such that the eigenvalues of the residues of the connection are contained in $(-1,0]$, and the last term of (1.3.2) is the Deligne meromorphic extension, see \cite{eq}. (Note that a decreasing filtration $F$ is identified with an increasing filtration by setting $F_p:=F^{-p}$.) Taking the union over $p\in{\mathbf Z}$, we have $$M={\mathcal D}_X\,\widehat{{\mathcal L}}^{\,>-1}\subset\widehat{{\mathcal L}}^{\,>-1}(*D). \leqno(1.3.3)$$ \par
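\par As a sanity check on (1.3.2--3) in a trivial case (a standard computation, not carried out in the text): let ${\mathcal L}={\mathcal O}_U$ with the trivial connection and the trivial filtration ($F_p{\mathcal L}={\mathcal L}$ for $p\geqslant0$, $F_p{\mathcal L}=0$ for $p<0$). The residues vanish, so $\widehat{{\mathcal L}}^{\,>-1}={\mathcal O}_X$, and (1.3.2) gives $$F_pM=\h{$\sum$}_{0\leqslant i\leqslant p}\,F_i\hskip1pt{\mathcal D}_X\hskip1pt{\mathcal O}_X={\mathcal O}_X\quad(p\geqslant0),\qquad F_pM=0\quad(p<0),$$ while (1.3.3) gives $M={\mathcal O}_X$; that is, the extension of the constant variation across a normal crossing divisor is just $({\mathcal O}_X,F)$, in accordance with the fact that the intersection complex of a smooth $X$ with constant coefficients is ${\mathbf Q}_X[d_X]$. \par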
The underlying ${\mathbf Q}$-local system $L$ of a polarizable variation of Hodge structure on $U$ is canonically extended over $X$ as an intersection complex (see \cite{BBD}) where $L$ must be shifted by $\dim Z$. This can be done without assuming that $D=X\setminus U$ is a divisor with normal crossings on a smooth variety. \par
The second main theorem in the theory of pure Hodge modules is the stability theorem of pure Hodge modules under the direct images by projective morphisms. \par
\par\noindent {\bf Theorem~1.4} (\cite[Theorem~5.3.1]{mhp}). {\it Let $f:X\to Y$ be a projective morphism of smooth complex algebraic varieties, and ${\mathcal M}=((M,F),K,\alpha_{{\mathcal M}})\in{\rm MH}_Z(X,w)$. Let $\ell$ be the first Chern class of an $f$-ample line bundle. Then the direct image $f_*^{{\mathcal D}}(M,F)$ as a filtered ${\mathcal D}$-module $($see {\rm (B.3)} below$)$ is strict, and we have $${\mathcal H}^if_*{\mathcal M}:=({\mathcal H}^if_*^{{\mathcal D}}(M,F),{}^{\mathfrak m}{\mathcal H}^if_*K,{}^{\mathfrak m}{\mathcal H}^if_*\alpha_{{\mathcal M}})\in{\rm MH}(Y,w+i)\quad(i\in{\mathbf Z}), \leqno(1.4.1)$$ together with the isomorphisms $$\ell^i:{\mathcal H}^{-i}f_*{\mathcal M}\buildrel{\sim}\over\longrightarrow{\mathcal H}^if_*{\mathcal M}(i)\quad(i>0), \leqno(1.4.2)$$ where $(i)$ denotes the Tate twist shifting the filtration $F$ by $i$, see a remark after $(2.1.10)$ below. \par
Moreover, if $S:K\otimes K\to{\mathbf D}_X(-w)$ is a polarization of ${\mathcal M}$ {\rm(}see $(2.2.2)$ below$)$, then a polarization of the $\ell$-primitive part $${}^P{\mathcal H}^{-i}f_*{\mathcal M}:={\rm Ker}\,\ell^{\,i+1}\subset{\mathcal H}^{-i}f_*{\mathcal M}\quad(i\geqslant 0)$$ is given by the restriction to the $\ell$-primitive part of the induced pairing} $$(-1)^{i(i-1)/2}\,{}^{\mathfrak m}{\mathcal H} f_*S\,\raise.15ex\hbox{${\scriptstyle\circ}$}\,(id\otimes\ell^{\,i}):{}^{\mathfrak m}{\mathcal H}^{-i}f_*K\otimes{}^{\mathfrak m}{\mathcal H}^{-i}f_*K\to{\mathbf D}_Y(i-w). \leqno(1.4.3)$$ \par
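\par A standard special case of Theorem~(1.4) (recorded here as a sketch, under the hypotheses of the theorem): for $X$ smooth projective, $f:X\to pt$, and ${\mathcal M}$ the constant Hodge module of weight $d_X$ with $K={\mathbf Q}_X[d_X]$, (1.4.1) puts pure Hodge structures of weight $d_X+i$ on ${}^{\mathfrak m}{\mathcal H}^if_*K=H^{d_X+i}(X,{\mathbf Q})$, and (1.4.2) becomes the hard Lefschetz isomorphism $$\ell^{\,i}:H^{d_X-i}(X,{\mathbf Q})\buildrel{\sim}\over\longrightarrow H^{d_X+i}(X,{\mathbf Q})(i)\quad(i>0),$$ recovering classical Hodge theory for smooth projective varieties. \par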
\par\noindent {\bf Remarks.} (i) The action of $\ell$ can be defined by using $C^{\infty}$ forms and the Dolbeault resolution to get a filtered complex which is filtered quasi-isomorphic to the relative de Rham complex $${\rm DR}_{X\times Y/Y}\bigl((i_f)_*^{{\mathcal D}}(M,F)\bigr),$$ which is used in the definition of the direct image in (B.3) below. Note that the first Chern class is represented by a closed 2-form of type $(1,1)$ on $X$, see also \cite[Lemma 5.3.2]{mhp}. (It is also possible to define the action of $\ell$ by using the restriction to a sufficiently general hyperplane section of $(M,F)$.) \par
(ii) We use Deligne's sign convention of a polarization $S$ of a Hodge structure $((H_{{\mathbf C}},F),H_{{\mathbf Q}})$, see \cite[Definition 2.1.15]{th2}; that is, $$S(v,C\,\overline{\!v})>0\quad(v\in H_{{\mathbf C}}\setminus\{0\}), \leqno(1.4.4)$$ where $C$ is the Weil operator defined by $i^{p-q}$ on $H^{p,q}_{{\mathbf C}}$, and the Tate twist $(2\pi i)^w$ is omitted to simplify the notation. (This sign convention seems to be theoretically natural if one considers the action of the Weil restriction of ${\mathbf G}_{\mathbf m}$, see \cite[Section 2.1]{th1}.) \par
Recall that the usual sign convention is $$S(C\hskip1pt v,\,\overline{\!v})>0\quad(v\in H_{{\mathbf C}}\setminus\{0\}), \leqno(1.4.5)$$ and the difference is given by the multiplication by $$(-1)^w, \leqno(1.4.6)$$ where $w$ is the weight of the Hodge structure. \par
If we use the usual sign convention (1.4.5) instead of (1.4.4), then the difference (1.4.6) implies a considerable change of the formula (1.4.3) of Theorem~(1.4). In fact, we have to change the sign for each direct factor with strict support $Z$ {\it depending on} $d_Z:=\dim Z$, since the difference of sign (1.4.6) depends on the {\it pointwise weight} $$w-i-d_Z \leqno(1.4.7)$$ of the generic variations of Hodge structure of each direct factor with strict support $Z$: $$({\mathcal H}^{-i}f_*{\mathcal M})_Z\subset{\mathcal H}^{-i}f_*{\mathcal M}.$$ \par
(iii) For a polarization $S$ of a generic variation of Hodge structure of a pure Hodge module with strict support $Z$, the associated polarization of the Hodge module is defined by
$$(-1)^{d_Z(d_Z-1)/2}\,S:K|_U\otimes K|_U\to{\mathbf D}_U(-w), \leqno(1.4.8)$$ on an open subvariety $U\subset Z$ where the variation of Hodge structure is defined, see \cite[Proposition 5.2.16]{mhp} for the constant coefficient case. This can be extended to a pairing of $K$ by using the theory of intersection complexes \cite{BBD}. \par
For a smooth projective variety $X$, it is well-known that $$\int_X(-1)^{j(j-1)/2}\,i^{p-q}\,v\,\wedge\,\overline{\!v}{}\wedge\omega^{d_X-j}>0, \leqno(1.4.9)$$ for a nonzero element $v$ in the primitive cohomology ${}^P\!H^j(X,{\mathbf C})$ of type $(p,q)$, where $\omega$ is a K\"ahler form. One may think that (1.4.9) contradicts (1.4.8) combined with (1.4.3) in Theorem~(1.4) for $f:X\to pt$. In fact, by setting $\xi(k)=k(k-1)/2$ for $k\in{\mathbf N}$, the difference between the sign $(-1)^{\xi(j)}$ in (1.4.9) and the product of $(-1)^{\xi(d_X)}$ in (1.4.8) with $Z=X$ and $(-1)^{\xi(d_X-j)}$ in (1.4.3) with $i=d_X-j$ does not coincide with the difference between the two sign conventions (1.4.4--5) which is equal to $(-1)^j$ by (1.4.6) for $w=j$, since $$\xi(d_X)-\xi(d_X-j)-\xi(j)-j=jd_X\mod 2.$$ However, the remaining sign $(-1)^{jd_X}$ just comes from the isomorphism $${\mathbf R}\Gamma(X,{\mathbf C}_X)[d_X]\otimes_{{\mathbf C}}{\mathbf R}\Gamma(X,{\mathbf C}_X)[d_X]\cong\bigl({\mathbf R}\Gamma(X,{\mathbf C}_X)\otimes_{{\mathbf C}}{\mathbf R}\Gamma(X,{\mathbf C}_X)\bigr)[2d_X]. 
\leqno(1.4.10)$$ In fact, if we set ${\mathcal K}^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}:={\mathbf R}\Gamma(X,{\mathbf C}_X)$, then ${\mathcal K}^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}[d_X]$ is identified with ${\mathbf C}_X[d_X]\otimes_{{\mathbf C}}{\mathcal K}^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}$, and the above isomorphism is given by $${\mathbf C}[d_X]\otimes_{{\mathbf C}}{\mathcal K}^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}\otimes_{{\mathbf C}}{\mathbf C}[d_X]\otimes_{{\mathbf C}}{\mathcal K}^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}\cong{\mathbf C}[d_X]\otimes_{{\mathbf C}}{\mathbf C}[d_X]\otimes_{{\mathbf C}}{\mathcal K}^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}\otimes_{{\mathbf C}}{\mathcal K}^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}},$$ where the middle two components are exchanged, but the remaining ones are unchanged (see \cite{coh} for the sign about single complexes associated with $n$-ple complexes). A similar argument is used in the proof of the anti-commutativity of the last diagram in \cite[Section 5.3.10]{mhp} where $d_X=1$. \par
\par
\vbox{\centerline{\bf 2. Outline of proofs of Theorems~(1.3) and (1.4)} \par
\par\noindent In this section we give an inductive definition of pure Hodge modules (see \cite{via}, \cite{mhp}), and explain an outline of proofs of Theorems~(1.3) and (1.4).} \par
\par\noindent {\bf 2.1.~Admissibility condition along $g=0$.} Let $X$ be a smooth algebraic variety, and $Z$ be an irreducible closed subvariety of $X$. Let $${\mathcal M}=((M,F),K,\alpha_{{\mathcal M}})\in{\rm MF}(X,{\mathbf Q}),$$ with strict support $Z$, see (1.2). Let $g$ be a function on $X$, that is, $g\in\Gamma(X,{\mathcal O}_X)$. Let $i_g:X\hookrightarrow X\times{\mathbf C}$ be the graph embedding by $g$. Set $$(\widetilde{M},F):=(i_g)_*^{{\mathcal D}}(M,F),$$ see (B.3) below for $(i_g)_*^{{\mathcal D}}$. (Note that the filtration $F$ is {\it shifted by} $1$, which is the codimension of the embedding.) We have the filtration $V$ on $\widetilde{M}$ (see (B.6) below). \par
\par\noindent {\bf Definition.} We say that $(M,F)$ is {\it admissible along} $g=0$ (or $g$-{\it admissible} for short) in this paper if the following two conditions are satisfied: $$t(F_pV^{\alpha}\widetilde{M})=F_pV^{\alpha+1}\widetilde{M}\quad\quad(\forall\,\alpha>0), \leqno(2.1.1)$$ \vskip-6mm $$\partial_t(F_p{\rm Gr}_V^{\alpha}\widetilde{M})=F_{p+1}{\rm Gr}_V^{\alpha-1}\widetilde{M}\quad\quad(\forall\,\alpha<1,\,p\in{\mathbf Z}), \leqno(2.1.2)$$ see \cite[3.2.1]{mhp}. (These properties were first found in the one-dimensional case, see \cite{hfgs2}.) \par
In the case $Z\subset g^{-1}(0)$, $(M,F)$ is $g$-admissible if and only if the following condition is satisfied: $$g\,F_pM\subset F_{p-1}M\quad\quad(\forall\,p\in{\mathbf Z}), \leqno(2.1.3)$$ see \cite[Lemma 3.2.6]{mhp}. \par
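\par To illustrate (2.1.3) (a minimal sketch, with a normalization of $F$ chosen for convenience): let $X={\mathbf C}$ with coordinate $g=t$, and let $M={\mathcal D}_X/{\mathcal D}_X\hskip1pt t$, so that $M=\h{$\bigoplus$}_{i\geqslant0}\,{\mathbf C}\,\partial_t^i\delta$ with $\delta$ the class of $1$ and $t\hskip1pt\delta=0$. Putting $F_pM=\h{$\bigoplus$}_{0\leqslant i\leqslant p}\,{\mathbf C}\,\partial_t^i\delta$, the relation $t\,\partial_t^i\delta=-i\,\partial_t^{i-1}\delta$ gives $g\,F_pM\subset F_{p-1}M$, so (2.1.3) holds. \par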
In the case $Z\not\subset g^{-1}(0)$, let $j:X\times{\mathbf C}^*\hookrightarrow X\times{\mathbf C}$ be the natural inclusion. We have the isomorphisms $$F_p\widetilde{M}=\h{$\sum$}_{i\geqslant 0}\,\partial_t^i(j_*F_{p-i}\widetilde{M}\cap V^{>0}\widetilde{M})\quad\quad(p\in{\mathbf Z}), \leqno(2.1.4)$$ if $(M,F)$ is $g$-admissible and moreover the following condition holds: $$\partial_t:{\rm Gr}_V^1(\widetilde{M},F)\to{\rm Gr}_V^0(\widetilde{M},F[-1])\,\,\,\hbox{is strictly surjective,} \leqno(2.1.5)$$ see \cite[Remark 3.2.3]{mhp}. This is closely related to (1.3.2). (Forgetting $F$, condition (2.1.5) is equivalent to the condition that $M$ has no nontrivial quotient supported in $g^{-1}(0)$, see (B.6.6) below. The strictness of $F$ follows from the properties of Hodge modules as is seen in (2.3.5) below.)
Assume $(M,F)$ is $g$-admissible. We have the {\it nearby and vanishing cycle functors} $\psi_g$, $\varphi_g$ defined by $$\psi_g(M,F):=\h{$\bigoplus$}_{\lambda\in{\mathbf C}^*_1}\,\psi_{g,\lambda}(M,F),\quad\varphi_g(M,F):=\h{$\bigoplus$}_{\lambda\in{\mathbf C}^*_1}\,\varphi_{g,\lambda}(M,F), \leqno(2.1.6)$$ \vskip-5mm $$\aligned\psi_{g,{\mathbf e}(-\alpha)}(M,F)&:={\rm Gr}_V^{\alpha}(\widetilde{M},F)\quad(\alpha\in(0,1]),\\\varphi_{g,1}(M,F)&:={\rm Gr}_V^0(\widetilde{M},F[-1]),\endaligned \leqno(2.1.7)$$
where ${\mathbf C}^*_1:=\{\lambda\in{\mathbf C}^*\mid|\lambda|=1\}$, ${\mathbf e}(-\alpha):=\exp(-2\pi i\alpha)$, and $\psi_{g,\lambda}=\varphi_{g,\lambda}$ ($\lambda\ne 1$) as in (A.2.8) below. We have $\varphi_{g,1}(M,F)=(M,F)$ by \cite[Lemma 3.2.6]{mhp} if ${\rm supp}\,M\subset g^{-1}(0)$ and $g\,F_pM\subset F_{p-1}M$ ($p\in{\mathbf Z}$). Note that $F$ is shifted by 1 when the direct image $(i_g)_*^{{\mathcal D}}$ is taken. This is why $F$ is shifted for $\varphi$ in (2.1.7), and not for $\psi$ (for left ${\mathcal D}$-modules). \par
Combining these with (B.6.7) below, we get $$\aligned\psi_g{\mathcal M}&:=(\psi_g(M,F),{}^{\mathfrak m}\psi_gK,{}^{\mathfrak m}\psi_g\alpha_{{\mathcal M}}),\\\varphi_g{\mathcal M}&:=(\varphi_g(M,F),{}^{\mathfrak m}\varphi_gK,{}^{\mathfrak m}\varphi_g\alpha_{{\mathcal M}})\quad\hbox{in}\,\,\,\,{\rm MF}(X,{\mathbf Q}).\endaligned \leqno(2.1.8)$$ We have similarly $\psi_{g,1}{\mathcal M}$, $\varphi_{g,1}{\mathcal M}$ together with the morphisms $$\aligned{\rm can}:\psi_{g,1}{\mathcal M}\to\varphi_{g,1}{\mathcal M},\quad&{\rm Var}:\varphi_{g,1}{\mathcal M}\to\psi_{g,1}{\mathcal M}(-1),\\ N:\psi_g{\mathcal M}\to\psi_g{\mathcal M}(-1),\quad&N:\varphi_{g,1}{\mathcal M}\to\varphi_{g,1}{\mathcal M}(-1),\endaligned \leqno(2.1.9)$$ such that the restrictions of $\,{\rm can}$, $\,{\rm Var}$, $N$ to the ${\mathcal D}$-module part ${\rm Gr}_V^1{\mathcal M}$, ${\rm Gr}_V^0{\mathcal M}$, ${\rm Gr}_V^{\alpha}{\mathcal M}$ ($\alpha\in[0,1]$) are respectively given by $-\partial_t$, $t$, $-(\partial_tt-\alpha)$, and we have $${\rm Var}\,\raise.15ex\hbox{${\scriptstyle\circ}$}\,{\rm can}=N\,\,\,\,\hbox{on}\,\,\,\psi_{g,1}{\mathcal M},\quad{\rm can}\,\raise.15ex\hbox{${\scriptstyle\circ}$}\,{\rm Var}=N\,\,\,\,\hbox{on}\,\,\,\varphi_{g,1}{\mathcal M}. \leqno(2.1.10)$$ Here the Tate twist $(k)$ for $k\in{\mathbf Z}$ in general is essentially the shift of the filtration $F$ by $[k]$. For the ${\mathbf Q}$-coefficient part, it is defined by the tensor product of ${\mathbf Q}(k):=(2\pi i)^k{\mathbf Q}\subset{\mathbf C}$ over ${\mathbf Q}$, see \cite[Definition 2.1.13]{th2}. (Similarly with ${\mathbf Q}$ replaced by any subfield $A\subset{\mathbf C}$.)
\par
By \cite[Lemma 5.1.4]{mhp} we see that the strict support decomposition (1.2.1) holds if for any $g\in\Gamma(U,{\mathcal O}_U)$ with $U$ an open subvariety of $X$, $(M,F)|_U$ is $g$-admissible and moreover
$$\varphi_{g,1}{\mathcal M}|_U={\rm Im}\,{\rm can}\oplus{\rm Ker}\,{\rm Var}\quad\hbox{in}\quad{\rm MF}(U,{\mathbf Q}). \leqno(2.1.11)$$ \par
\par\noindent {\bf 2.2.~Inductive definition of pure Hodge modules.} For a smooth complex algebraic variety $X$ and an irreducible closed subvariety $Z\subset X$, we define the full subcategory ${\rm MH}_Z(X,w)\subset{\rm MF}(X,{\mathbf Q})$ by increasing induction on $d_Z:=\dim Z$ as follows (see \cite{via}, \cite{mhp}): \par
\par\noindent {\bf Case 1.} If $Z$ is a point $x\in X$, then we have an equivalence of categories $$\aligned&(i_x)_*:{\rm HS}(w)^p\buildrel{\sim}\over\longrightarrow{\rm MH}_{\{x\}}(X,w)\\ \hbox{with}\quad\quad&(i_x)_*\bigl((H_{{\mathbf C}},F),H_{{\mathbf Q}}\bigr)=\bigl((i_x)_*^{{\mathcal D}}(H_{{\mathbf C}},F),(i_x)_*H_{{\mathbf Q}}\bigr),\endaligned \leqno(2.2.1)$$ where $i_x:\{x\}\hookrightarrow X$ denotes the canonical inclusion, and ${\rm HS}(w)^p$ denotes the category of polarizable ${\mathbf Q}$-Hodge structures of weight $w$ (see \cite{th2}). The latter is naturally identified with a full subcategory of ${\rm MF}(\{x\},{\mathbf Q})$ (by setting $F_p=F^{-p}$ as usual). \par
\par\noindent {\bf Case 2.} If $d_Z>0$, then ${\mathcal M}=((M,F),K)\in{\rm MF}(X,{\mathbf Q})$ with strict support $Z$ belongs to ${\rm MH}_Z(X,w)$ if there is a perfect pairing (see (A.3) below) $$S:K\otimes_{{\mathbf Q}}K\to{\mathbf D}_X(-w)={\mathbf Q}_X(d_X-w)[2d_X], \leqno(2.2.2)$$ which is called a {\it polarization} of ${\mathcal M}$, and the following two conditions are satisfied: \par
\par\noindent (i) The pairing $S$ is compatible with the Hodge filtration $F$ in the following sense: \par
\par\noindent There is an isomorphism of filtered ${\mathcal D}$-modules $${\mathbf D}(M,F)=(M,F)(w),$$ which corresponds (by using (B.4.6) below) to an isomorphism defined over ${\mathbf Q}$: $${\mathbf D}(K)=K(w),$$ and the latter is identified with the perfect pairing $S$ via (A.3.1) below. \par
\par\noindent (ii) For any Zariski-open subset $U\subset X$ and $g\in\Gamma(U,{\mathcal O}_U)$, the restriction of $(M,F)$ to $U$ is $g$-admissible, and moreover, in the case $Z\cap U\not\subset g^{-1}(0)$, we have
$${\rm Gr}_k^W\psi_g{\mathcal M}|_U,\,{\rm Gr}_k^W\varphi_{g,1}{\mathcal M}|_U\in{\rm MH}_{<d_Z}(U,w), \leqno(2.2.3)$$
$$\hbox{${}^{\mathfrak m}\psi_gS\,\raise.15ex\hbox{${\scriptstyle\circ}$}\,(id\otimes N^i)$ gives a polarization of ${}^P{\rm Gr}^W_{w-1+i}\psi_g{\mathcal M}|_U\,\,\,(i\geqslant 0)$}. \leqno(2.2.4)$$
\par
The category ${\rm MH}_{<d_Z}(U,w)$ in (2.2.3) is the direct sum of the ${\rm MH}_{Z'}(U,w)$ with $Z'$ running over closed irreducible subvarieties of $U$ with $d_{Z'}<d_Z$. The weight filtrations $W$ on $\psi_g{\mathcal M}|_U$, $\varphi_{g,1}\hskip1pt{\mathcal M}|_U$ are the {\it monodromy filtrations} associated with the action of $N:=(2\pi i)^{-1}\log T_u$ (see \cite{weil}), shifted by $w-1$ and $w$ respectively. This means that $W$ on $\psi_g{\mathcal M}|_U$ {\it with filtration $F$ forgotten} is uniquely determined by the following conditions:
$$\aligned N(W_i\psi_g{\mathcal M}|_U)&\subset(W_{i-2}\psi_g{\mathcal M}|_U)(-1)\quad(i\in{\mathbf Z}),\\ N^i:{\rm Gr}_{w-1+i}^W\psi_g{\mathcal M}|_U&\buildrel{\sim}\over\longrightarrow({\rm Gr}^W_{w-1-i}\psi_g{\mathcal M}|_U)(-i)\quad(i\in{\mathbf N}).\endaligned \leqno(2.2.5)$$ Here the Tate twists may be neglected since $F$ is forgotten in (2.2.5).
(However, the last isomorphism of (2.2.5) is strictly compatible with $F$ by (2.3.3) below if $\psi_g{\mathcal M}|_U$ belongs to ${\rm MHW}(U)$ in (2.3) below.)
A similar assertion holds for $\varphi_{g,1}{\mathcal M}|_U$ with $w-1$ replaced by $w$.
\par
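\par\noindent {\bf Example.} (The following illustration is not part of the argument above; it is the standard linear-algebra computation behind (2.2.5), cf. \cite{weil}.) Let $N$ be a single nilpotent Jordan block on $V=\h{$\bigoplus$}_{j=1}^n\,{\mathbf Q}\,e_j$, so that $Ne_j=e_{j+1}$ with $e_{n+1}:=0$. Assigning $e_j$ the weight $n+1-2j$, the monodromy filtration centered at $0$ is $$W_kV=\h{$\bigoplus$}_{n+1-2j\leqslant k}\,{\mathbf Q}\,e_j\quad\quad(k\in{\mathbf Z}),$$ so that ${\rm Gr}_k^WV$ is $1$-dimensional for $k=n-1,n-3,\dots,1-n$ and vanishes otherwise. Indeed $N(W_kV)\subset W_{k-2}V$, since $N$ lowers the weight of each $e_j$ by $2$, and $N^i:{\rm Gr}_i^WV\buildrel{\sim}\over\longrightarrow{\rm Gr}_{-i}^WV$, since $N^ie_j=e_{j+i}$. For an arbitrary nilpotent $N$ one takes the direct sum over the Jordan blocks; the conditions in (2.2.5) (with $w-1$ replaced by $0$ and the Tate twists neglected) then determine $W$ uniquely. \par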
In (2.2.4), the primitive part ${}^P{\rm Gr}^W_{w-1+i}\psi_g{\mathcal M}|_U$ is defined by
$${}^P{\rm Gr}^W_{w-1+i}\psi_g{\mathcal M}|_U:={\rm Ker}\,N^{i+1}\subset{\rm Gr}^W_{w-1+i}\psi_g{\mathcal M}|_U, \leqno(2.2.6)$$
by using the induced filtration $F$ on the kernel. For $\psi_gS$, see (A.3.2) below. Note that the condition for a polarization $S$ is also defined by induction on $d_Z$. For each direct factor of ${}^P{\rm Gr}^W_{w-1+i}\psi_g{\mathcal M}|_U$ with $0$-dimensional strict support, we assume that $\psi_gS\,\raise.15ex\hbox{${\scriptstyle\circ}$}\,(id\otimes N^i)$ induces a polarization of ${\mathbf Q}$-Hodge structure in the sense of \cite{th2}, where the place of the Weil operator is different from the usual one as is noted in Remark (ii) after Theorem (1.4). \par
\par\noindent {\bf 2.3. Some properties of pure Hodge modules.} Let ${\rm MH}(X,w)$ be as in (1.2.3). Let $${\rm MHW}(X)$$ be the category of {\it weakly mixed Hodge modules\,} consisting of $({\mathcal M},W)$ with ${\mathcal M}\in{\rm MF}(X,{\mathbf Q})$ and $W$ a finite increasing filtration of ${\mathcal M}$, which satisfy $${\rm Gr}_w^W{\mathcal M}\in{\rm MH}(X,w)\quad(\forall\,w\in{\mathbf Z}). \leqno(2.3.1)$$ We have by definition the stability of pure Hodge modules by the nearby and vanishing cycle functors:
$$\psi_g{\mathcal M}|_U,\,\varphi_{g,1}{\mathcal M}|_U\in{\rm MHW}(U). \leqno(2.3.2)$$ It is easy to show the following (see \cite[Proposition 5.1.14]{mhp}): \par
\par\noindent (2.3.3)\,\,\,\, ${\rm MHW}(X)$ and ${\rm MH}(X,w)$ ($w\in{\mathbf Z}$) are abelian categories such that \par
\quad\quad\quad any morphisms are strictly compatible with $(F,W)$ and $F$ respectively. \par
This assertion is proved by using $${\rm Hom}({\mathcal M},{\mathcal M}')=0\,\,\,\hbox{if}\,\,\,{\mathcal M}\in{\rm MH}(X,w),\,{\mathcal M}'\in{\rm MH}(X,w')\,\,\,\hbox{with}\,\,\,\,w>w'. \leqno(2.3.4)$$ This is reduced to \cite{th2} by using the assertion that any ${\mathcal M}\in{\rm MH}_Z(X,w)$ is generically a variation of Hodge structure of weight $w-d_Z$. (The latter is an easy part of Theorem~(1.3).) \par
These assertions hold without assuming polarizability (see \cite[Section 5.1]{mhp}), and imply $${\rm can}:\psi_{g,1}{\mathcal M}\to\varphi_{g,1}{\mathcal M}\,\,\,\,\hbox{is strictly surjective for $(F,W)$}. \leqno(2.3.5)$$ This assures the strict surjectivity in (2.1.5). It also gives a reason why condition~(2.2.4) is imposed only for $\psi$. \par
We prove Theorems~(1.3) and (1.4) by induction on $\dim Z$ using the following rather technical key theorem: \par
\par\noindent {\bf Theorem~2.4.} {\it Let $f:X\to Y$ be as in Theorem~$(1.4)$. Let $g\in\Gamma(Y,{\mathcal O}_Y)$. Put $h:=g\circ f$. Let ${\mathcal M}=((M,F),K)\in{\rm MF}(X,{\mathbf Q})$ with strict support $Z\not\subset h^{-1}(0)$. Let $S:K\otimes K\to{\mathbf D}_X(-w)$ be a perfect pairing compatible with the filtration $F$ as in condition~{\rm (ii)} in Case $2$ of $(2.2)$. Assume that $(M,F)$ is $h$-admissible, that we have $${\rm Gr}_i^W\psi_h{\mathcal M},\,{\rm Gr}_i^W\varphi_{h,1}{\mathcal M}\in{\rm MH}(X,i)\quad(\forall\,i\in{\mathbf Z}), \leqno(2.4.1)$$ with $W$ as in $(2.2.5)$, and that the conclusions of Theorem~$(1.4)$ are satisfied for the $N$-primitive part $${}^{P_N}{\rm Gr}_{w-1+i}^W\psi_h{\mathcal M}\,\,\,\,\,\hbox{with polarization}\,\,\,\,\,\psi_hS\,\raise.15ex\hbox{${\scriptstyle\circ}$}\,(id\otimes N^i)\quad(i\geqslant 0). \leqno(2.4.2)$$ Then \par
\par\noindent {\rm (i)} The filtered direct image $\,f_*^{{\mathcal D}}(M,F)$ is strict on a sufficiently small neighborhood of $g^{-1}(0)$ $($in the classical topology$)$, and the ${\mathcal H}^if_*^{{\mathcal D}}(M,F)$ are $g$-admissible. \par
\par\noindent {\rm (ii)} The shifted direct image filtration $f_*^{{\mathcal D}}W[j]$ induces the monodromy filtration shifted by $w+j-1$ on $$\psi_g{\mathcal H}^jf_*^{{\mathcal D}}{\mathcal M}={\mathcal H}^jf_*^{{\mathcal D}}\psi_h{\mathcal M}\quad(\forall\,j\in{\mathbf Z}), \leqno(2.4.3)$$ which is denoted by $W$ so that $${\rm Gr}_i^W\psi_g{\mathcal H}^jf_*^{{\mathcal D}}{\mathcal M}\in{\rm MH}(Y,i)\quad(\forall\,i\in{\mathbf Z}). \leqno(2.4.4)$$ \par
\par\noindent {\rm (iii)} We have isomorphisms on a sufficiently small neighborhood of $g^{-1}(0)$: $$\ell^j:{\mathcal H}^{-j}f_*^{{\mathcal D}}{\mathcal M}\buildrel{\sim}\over\longrightarrow({\mathcal H}^jf_*^{{\mathcal D}}{\mathcal M})(j)\quad(\forall\,j\geqslant 0). \leqno(2.4.5)$$ \par
\par\noindent {\rm (iv)} On the bi-primitive part ${}^{P_{\ell}}{}^{P_N}{\rm Gr}_{w-1-j+i}^W\psi_g{\mathcal H}^{-j}f_*^{{\mathcal D}}{\mathcal M}$ defined by $${\rm Ker}\,\ell^{j+1}\cap{\rm Ker}\,N^{i+1}\subset{\rm Gr}_{w-1-j+i}^W\psi_g{\mathcal H}^{-j}f_*^{{\mathcal D}}{\mathcal M}, \leqno(2.4.6)$$ we have a polarization of Hodge module given by the induced pairing} $$(-1)^{j(j-1)/2}\hskip1pt{\rm Gr}^W\psi_g{\mathcal H} f_*S\,\raise.15ex\hbox{${\scriptstyle\circ}$}\,(id\otimes N^i\ell^j)\quad(\forall\,i,j\geqslant 0). \leqno(2.4.7)$$ \par
We first explain how the above theorem is used in the proofs of Theorems~(1.3) and (1.4). \par
\par\noindent {\bf 2.5.~Outline of proofs of Theorems~(1.3) and (1.4).} We show the assertions by increasing induction on the dimension of the strict support $Z$. The order of the induction is rather complicated as is explained below: \par
Assume Theorems~(1.3) and (1.4) are proved for $\dim Z<d$. Then Theorem~(1.4) for $\dim Z=d$ with $f(Z)\ne pt$ follows from Theorem~(2.4) where the decomposition by strict support follows from (2.1.11) (which is satisfied by using \cite[Corollary 4.2.4]{mhp}). Using this, we can reduce the proof of Theorem~(1.3) to the normal crossing case where the singular locus of $M$ is a divisor with normal crossings. Here the filtration $F$ can be defined by (1.3.2), and we have to show that the conditions for Hodge modules are satisfied for any locally defined functions $g$. This can be further reduced by using Theorem~(2.4) (but not Theorem~(1.4)) to the case where the union of $g^{-1}(0)$ and the singular locus of $M$ is a divisor with normal crossings. Here we can calculate explicitly the nearby and vanishing cycle functors together with the induced pairing (although these are rather complicated), see \cite{mhm} for details. \par
Now we have to prove Theorem~(1.4) in the case $\dim Z=d$ and $f(Z)=pt$. Here we may assume that $X={\mathbf P}^n$, $Y=pt$. Let $\pi:\widetilde{X}\to X$ be the blow-up along the intersection $Z$ of two sufficiently general hyperplanes of ${\mathbf P}^n$. By Theorem~(1.3) for $\dim Z=d$, there is
$$\widetilde{\mathcal M}=((\widetilde{M},F),\widetilde{K})\in{\rm MH}(\widetilde{X},w)\quad\hbox{with}\quad\widetilde{\mathcal M}|_{\widetilde{X}\setminus\pi^{-1}(Z)}={\mathcal M}|_{X\setminus Z}.$$ We have a polarization $\widetilde{S}$ of $\widetilde{\mathcal M}$ extending the restriction of a polarization $S$ of ${\mathcal M}$ to the complement of $Z\subset X$. \par
By Theorem~(1.4) for $f(Z)\ne pt$, we see that ${\mathcal M}$ is a direct factor of ${\mathcal H}^0\pi_*\widetilde{\mathcal M}$. Moreover its complement is isomorphic to ${\mathcal M}_Z(-1)$ since $Z$ is sufficiently general, where ${\mathcal M}_Z$ is the noncharacteristic restriction of ${\mathcal M}$ to $Z$; more precisely, $${\mathcal M}_Z=((M_Z,F),K_Z[-2]),$$ with $(M_Z,F)$ and $K_Z$ respectively the noncharacteristic restrictions of $(M,F)$ and $K$ to $Z$. \par
So we have the direct sum decomposition $${\mathcal H}^0\pi_*\widetilde{\mathcal M}={\mathcal M}\oplus{\mathcal M}_Z(-1), \leqno(2.5.1)$$ with ${\mathcal H}^j\pi_*\widetilde{\mathcal M}=0$ for $j\ne 0$. Here $\pi_*\widetilde{S}$ is compatible with this decomposition, and its restriction to ${\mathcal M}$ coincides with $S$ (by using (A.3.1) below together with the remark after (1.2.2)). We then get the direct sum decompositions $$H^j(\widetilde{X},\widetilde{\mathcal M})=H^j(X,{\mathcal M})\oplus H^{j-2}(Z,{\mathcal M}_Z)(-1)\quad\quad(j\in{\mathbf Z}), \leqno(2.5.2)$$ where $H^j(X,{\mathcal M}):=H^j(a_X)_*{\mathcal M}$ for the structure morphism $a_X:X\to pt$, and similarly for $H^j(\widetilde{X},\widetilde{\mathcal M})$, etc. The above argument implies that these direct sum decompositions are compatible with the induced pairing by $\widetilde{S}$, and moreover its restriction to the first factors coincides with the induced pairing by $S$. \par
We have the Lefschetz pencil $$p:\widetilde{X}\to{\mathbf P}^1,$$ and Theorem~(1.4) for $f(Z)\ne pt$ can be applied to this. So the proof of Theorem~(1.4) for $\dim Z=d$ and $f(Z)=pt$ is reduced to the case $X={\mathbf P}^1$, where we can apply Zucker's result \cite{Zu}. (Note that Zucker gave an ``algebraic description" of the Hodge filtration $F$ using holomorphic differential forms, see \cite[Corollary 6.15 and Proposition 9.1]{Zu}.) However, we have to make some more calculations about polarizations on primitive classes related to the Leray spectral sequence, etc.\ (which are not quite trivial, see \cite[Sections 5.3.8\,-11]{mhp} for details). \par
The Lefschetz pencil is also used in an essential way for the proof of the Weil conjecture and the hard Lefschetz theorem in \cite{weil}. The reduction to the curve case is by analogy with the $\ell$-adic case in some sense. \par
\par\noindent {\bf Remarks.} (i) For the calculation of the nearby and vanishing cycle functors in the normal crossing case, we use the so-called ``combinatorial description" of mixed Hodge modules with normal crossing singular loci. Here the {\it compatibility} of the $d_X+1$ filtrations $F,V_{(i)}$ ($i\in[1,d_X]$) is quite essential, where $V_{(i)}$ is the $V$-filtration along $x_i=0$, and the $x_i$ are local coordinates compatible with the singular locus of a mixed Hodge module. Note, however, that this description {\it never} gives an equivalence of categories (consider, for instance, the case of variations of mixed Hodge structure having no singular loci; in fact, this ``Hodge-theoretic combinatorial description" gives only the information of the fiber at the fixed point). Nevertheless it is quite useful when it is combined with the Verdier-type extension theorem \cite{Ve2} inductively, see also \cite[Proposition 3.13]{mhm}, etc. \par
It seems rather easy to predict Hodge-theoretic combinatorial formulas for the nearby and vanishing cycle functors together with the induced pairing in the normal crossing case. These are implicitly related with Beilinson's construction of nearby cycles \cite{Bei2}, see \cite{ext} for the mixed Hodge module case. It seems more difficult to prove that these formulas actually hold, see for instance \cite{dual}. (Note that the argument in the Appendix of \cite{mhm} was simplified by the writer. The original argument used the reduction to the 2-dimensional case, and was much more complicated.) \par
(ii) The results of Cattani, Kaplan, Schmid (\cite{CK}, \cite{CKS1}, \cite{CKS2}, \cite{CKS3}) are used in an essential way for the above ``Hodge-theoretic combinatorial description". For instance, the {\it descent lemma} in \cite{CKS1}, \cite{CKS3} is crucial to the ``combinatorial description" of the pure Hodge module corresponding to the intersection complex. (This lemma is called ``the vanishing cycle theorem" in \cite{KK4}, which does not seem to be contained in \cite{KK2}.) \par
(iii) It is still an open problem whether the Hodge structure obtained by the $L^2$ method in \cite{CKS3}, \cite{KK4} has an ``algebraic description" using holomorphic differential forms, and in the algebraic case, whether it coincides with the Hodge structure obtained by the theory of mixed Hodge modules. (It has been expected that the detailed versions of \cite{KK3}, \cite{KK5} would give a positive answer to these problems, and some people thought that \cite{KK2}, \cite{KK4} were written for this purpose. As for \cite{BrSY}, it seems rather difficult to apply it to filtered $L^2$-sheaf complexes.) We have to assume that polarizable variations of Hodge structure are geometric ones in \cite{toh}, and also in \cite{PS} for the analytic case. \par
(iv) In the curve case, the answer to the above first problem was already given in \cite[Corollary 6.15 and Proposition 9.1]{Zu}, and the second problem is then easy to solve, see \cite[Section 5.3.10]{mhp}. \par
(v) It does not seem easy to generalize the results in \cite{CKS3}, \cite{KK4} to the case of a ``tubular neighborhood" of a subvariety in a smooth complex algebraic variety, since we would have to take a {\it complete} metric to get a Hodge structure by applying a standard method. \par
\par\noindent {\bf 2.6.~Outline of proof of Theorem~(2.4).} We have to study the weight spectral sequence $$E_1^{-k,j+k}={\mathcal H}^jf_*{\rm Gr}_k^W\psi_h{\mathcal M}\Longrightarrow{\mathcal H}^jf_*\psi_h{\mathcal M}, \leqno(2.6.1)$$ and similarly with $\psi_h$ replaced by $\varphi_{h,1}$. This spectral sequence is defined in ${\rm MF}(Y,{\mathbf Q})$ if the differentials $d_r$ ($r\geqslant 1$) are strictly compatible with the filtration $F$ inductively, see \cite[1.3.6]{mhp}. The last assertion for $r=1$ follows from (2.3.3), since $d_1$ preserves the Hodge filtration $F$ and also the weight filtration $W$ (the latter is shifted depending on the degree $j$ of the cohomology sheaf ${\mathcal H}^jf_*\psi_h{\mathcal M}$). For $r\geqslant 2$, we can show $d_r=0$ inductively by using (2.3.4). In particular, the $E_2$-degeneration of the spectral sequence follows. The above argument also implies the strictness of the filtration $F$ on the direct image $f_*^{{\mathcal D}}\psi_h{\mathcal M}$ by using the theory of compatible filtrations ({\it loc.~cit.}). A similar assertion holds for $\varphi_{h,1}{\mathcal M}$. These assertions imply the assertion~(i) of Theorem~(2.4) by using the completion by the $V$-filtration, see \cite[Section 3.3]{mhp} for details. \par
To show the remaining assertions, we have to show that the ``bi-symmetry" for the actions of $\ell$, $N$ on the $E_1$-term is preserved on the $E_2$-terms, and moreover the ``bi-primitive part" of the $E_2$-term is {\it represented} by the bi-primitive part of the $E_1$-term. Using the strict support decomposition together with the easy part of Theorem~(1.3), we can reduce the assertions to the case where the spectral sequence is defined in the category of Hodge structures. Theorem~(2.4) then follows from the theory of bi-graded Hodge structures of Lefschetz type as in the proof of \cite[Proposition 4.2.2]{mhp} (see also \cite{pos} where a slightly better explanation is given). \par
\par\noindent {\bf Remarks.} (i) Signs were not determined in \cite[Proposition 4.2.2]{mhp}, since this was a very subtle issue at that time (see for instance \cite[Section 2.2.5]{th2} where a problem of sign for Chern classes was raised.) In the case of the nearby cycles of the constant sheaf ${\mathbf Q}_X$ in the normal crossing case (that is, in the case of Steenbrink \cite{St}), the primitive part $${}^P{\rm Gr}^W_{d_X-1+i}\psi_{h,1}({\mathbf Q}_X[d_X-1])$$ is the direct sum of the constant sheaves supported on intersections of $i+1$ irreducible components of $h^{-1}(0)$. In particular, its support is pure dimensional, and the signs should depend only on this dimension. Then there would be no problem for using \cite[Proposition 4.2.2]{mhp} in this case. In the general case, however, the situation is much more complicated. In fact, there may be direct factors $({}^P{\rm Gr}^W_k\psi_h{\mathcal M})_Z$ of the primitive part ${}^P{\rm Gr}^W_k\psi_h{\mathcal M}$ which have strict supports $Z$ of {\it various dimensions}, and surject to the {\it same} closed subvariety of $Y$ (for instance, if $g^{-1}(0)=\{0\}$). In this case, we have to determine exactly the sign for each direct factor so that the {\it positivity} becomes compatible among the direct images of direct factors with various strict support dimensions. \par
(ii) Precise signs are written in \cite[Lemma 5.3.6]{mhp} following Deligne's sign system \cite{sign} (see also \cite{GN}). Note that the conclusion of Lemma 5.3.6 follows from the {\it proof} of \cite[Proposition 4.2.2]{mhp}, since the hypothesis of the lemma is stronger than that of the proposition and the conclusion is essentially the same (that is, the $E_2$-term is bi-symmetric for $\ell,N$ and its bi-primitive part is represented by the bi-primitive part of the $E_1$-term). A pairing on the $E_1$-term of the Steenbrink weight spectral sequence \cite{St} satisfying Deligne's sign system \cite{sign} is constructed in \cite{GN} although its relation with the pairing induced by the nearby cycle functor does not seem to be clear (for instance, an isomorphism like (1.4.10) does not seem to be used there). \par
(iii) An argument showing the decomposition (2.1.11) is noted in \cite[Lemma 5.2.5]{mhp} which can replace \cite[Corollary 4.2.4]{mhp} if one is quite sure that the signs in the lemma really hold in the case one is considering. In fact, the assertion is very sensitive to the signs: if the signs are modified, then the role of $H$ and $H'$ can be reversed, and we may get a decomposition of $H$ instead of $H'$. (There is a misprint in the last line of the lemma: $S'$ should be $H'$.) If one is not completely sure whether the signs of the lemma really hold in the case under consideration, then it is still possible to use \cite[Corollary 4.2.4]{mhp} at least in the constant sheaf case with normal crossing singularities as is explained in Remark~(i) above. \par
\par
\vbox{\centerline{\bf 3. Mixed Hodge modules} \par
\par\noindent In this section we explain a simplified definition of mixed Hodge modules following \cite{def}.} \par
\par\noindent {\bf 3.1.~Admissibility condition for weakly mixed Hodge modules.} Let $X$ be a smooth complex algebraic variety, and $g\in\Gamma(X,{\mathcal O}_X)$. For a weakly mixed Hodge module $$({\mathcal M},W)=((M;F,W),(K,W))\in{\rm MHW}(X),$$ (see (2.3.1)), set $$(\widetilde{M};F,W)=(i_g)_*^{{\mathcal D}}(M;F,W),$$ where $W$ is not shifted under the direct image. We have the filtration $V$ on $\widetilde{M}$ as in (B.6) below. We define the filtration $L$ on the nearby and vanishing cycle functors by $$L_k\psi_g{\mathcal M}:=\psi_gW_{k+1}{\mathcal M},\quad L_k\varphi_{g,1}{\mathcal M}=\varphi_{g,1}W_k{\mathcal M}\quad(\forall\,k\in{\mathbf Z}). \leqno(3.1.1)$$ Here the filtration $F$ can be neglected when the filtration $L$ is defined. \par
We say that ${\mathcal M}$ is {\it admissible} along $g=0$ (or $g$-{\it admissible} for short) if the following two conditions are satisfied: $$\hbox{The three filtrations $F,W,V$ on $\widetilde{M}$ are compatible filtrations (see (C.2) below).} \leqno(3.1.2)$$ \vskip-7mm $$\hbox{There is the relative monodromy filtration $W$ on $(\psi_g{\mathcal M},L)$, $(\varphi_{g,1}{\mathcal M},L)$.} \leqno(3.1.3)$$ The last condition means that there is a unique filtration $W$ on $\psi_g{\mathcal M}$ satisfying the following two conditions: $$\aligned N(W_i\psi_g{\mathcal M})&\subset W_{i-2}\psi_g{\mathcal M}(-1)\quad(\forall\,i\in{\mathbf Z}),\\ N^i:{\rm Gr}^W_{k+i}{\rm Gr}^L_k\psi_g{\mathcal M}&\buildrel{\sim}\over\longrightarrow{\rm Gr}^W_{k-i}{\rm Gr}^L_k\psi_g{\mathcal M}(-i)\quad(\forall\,i\in{\mathbf N},\,k\in{\mathbf Z}),\endaligned \leqno(3.1.4)$$ and similarly for $\varphi_{g,1}{\mathcal M}$, see \cite{weil}, \cite{StZ}. (Here the filtration $F$ can be forgotten. However, the last isomorphism is compatible with $F$ by (2.3.3) if $(\psi_g{\mathcal M},W),(\varphi_{g,1}{\mathcal M},W)\in{\rm MHW}(X)$.) \par
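\par\noindent {\bf Example.} (This elementary example is not part of the original text; the phenomenon is standard, cf. \cite{StZ}.) Condition (3.1.3) is a genuine restriction, since a relative monodromy filtration need not exist. Let $V={\mathbf Q}\,e\oplus{\mathbf Q}\,f$ with $L_{-1}V=0$, $L_0V={\mathbf Q}\,e$, $L_1V=V$, and let $N$ be given by $Ne=0$, $Nf=e$. The induced action of $N$ on ${\rm Gr}^L_kV$ vanishes, so the second condition of (3.1.4) forces $${\rm Gr}^W_j{\rm Gr}^L_kV=0\quad(j\ne k),$$ that is, $W_{-1}V=0$, $W_0V={\mathbf Q}\,e$, $W_1V=V$. But then the first condition $N(W_1V)\subset W_{-1}V=0$ fails, since $Nf=e\ne 0$. Hence no filtration $W$ satisfying the analogue of (3.1.4) exists for $(V,L,N)$. \par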
\par\noindent {\bf Remark.} Let $X$ be a smooth complex variety, and $\Xo$ be a smooth compactification such that $D:=\Xo\setminus X$ is a divisor with simple normal crossings. Let $$({\mathcal M},W)=\bigl((M;F,W),(H,W)\bigr)$$ be a variation of mixed Hodge structure on $X$ such that ${\rm Gr}_k^W{\mathcal M}$ are polarizable pure Hodge structures of weight $k$ for any $k\in{\mathbf Z}$, where $(M;F,W)$ is the underlying bi-filtered ${\mathcal O}_X$-module, and $(H,W)$ is the underlying filtered ${\mathbf Q}$-local system. \par
Assume the local monodromies of $H$ are all {\it unipotent}. Let $\Mo$ be the {\it canonical}\, Deligne extension of $M$ over $\Xo$ (that is, the residues of the logarithmic connections are all {\it nilpotent}, see \cite{eq}). The filtrations $F,W$ are naturally extended on $\Mo$ by taking the intersection with $\Mo$ of the open direct images of $F,W$ under the inclusion $j:X\hookrightarrow\Xo$. Let $D_i$ be the irreducible components of $D$ ($i\in[1,r]$). Let $T_i$ be the local monodromy around a general point of $D_i$, which is defined on the fiber $H_{x_0}$ at a base point $x_0\in X$, and is unipotent by hypothesis. (This is well-defined up to a conjugate compatible with $W$.) Set $N_i:=(2\pi i)^{-1}\log T_i$. We denote by $L$ the filtration $W$ on $H_{x_0}$. \par
Under the above notation and assumption, $({\mathcal M},W)$ is an {\it admissible variation of mixed Hodge structure} in the sense of \cite{StZ}, \cite{Kadm} if and only if the following two conditions are satisfied: $${\rm Gr}_F^p{\rm Gr}_k^W\Mo\,\,\hbox{are locally free ${\mathcal O}_{\Xo}$-modules for any $p,k\in{\mathbf Z}$.} \leqno(3.1.5)$$ \vskip-7mm $$\hbox{There is the relative monodromy filtration on $(H_{x_0},L)$ for each $N_i$.} \leqno(3.1.6)$$ (The last condition means that (3.1.4) holds with $\psi_g{\mathcal M}$ replaced by $H_{x_0}$.) In fact, this equivalence follows from \cite[Theorem 4.5.2]{Kadm}. \par
In the non-unipotent local monodromy case, let $\rho:X'\to X$ be a generically finite morphism of complex algebraic varieties such that $\rho^*{\mathcal M}$ has unipotent local monodromies around the divisor at infinity of a compactification of $X'$ (by replacing $X$ with a non-empty open subvariety if necessary). Then ${\mathcal M}$ is an admissible variation if and only if $\rho^*{\mathcal M}$ is. (In fact, we may assume that $\rho$ is finite \'etale by shrinking $X$. Then we can take the direct image.) \par
\par\noindent {\bf 3.2.~Well-definedness of open direct images.} Let $D$ be a locally principal divisor on a smooth complex algebraic variety $X$. Set $X':=X\setminus D$ with $j:X'\hookrightarrow X$ the inclusion. We say that the open direct images $j_!,j_*$ are well-defined for ${\mathcal M}'\in{\rm MHW}(X')$, if there are $$\aligned&\quad\quad{\mathcal M}'_!,\,{\mathcal M}'_*\in{\rm MHW}(X),\,\,\,\,\hbox{satisfying}\\{\rm rat}({\mathcal M}'_!)&=j_!K',\,\,\,{\rm rat}({\mathcal M}'_*)={\mathbf R} j_*K'\quad\hbox{with}\quad K':={\rm rat}({\mathcal M}'),\endaligned \leqno(3.2.1)$$ (see (1.1.6) for rat), and moreover the following condition is satisfied: $$\hbox{${\mathcal M}'_!$, ${\mathcal M}'_*$ are $g$-admissible for any $g\in\Gamma(U,{\mathcal O}_U)$ with $g^{-1}(0)_{\rm red}=D_{\rm red}\cap U$.} \leqno({\rm A3})$$ Here $U$ is any open subvariety of $X$. If the above condition is satisfied, we then define $$j_!{\mathcal M}':={\mathcal M}'_!,\quad j_*{\mathcal M}':={\mathcal M}'_*. \leqno(3.2.2)$$ If ${\mathcal M}'=j^{-1}{\mathcal M}$ with ${\mathcal M}\in{\rm MHW}(X)$ and ${\mathcal M}$ is $g$-admissible, then we have the canonical morphisms (see \cite[Proposition~2.11]{mhm}) $$j_!j^{-1}{\mathcal M}\to{\mathcal M},\quad{\mathcal M}\to j_*j^{-1}{\mathcal M}. \leqno(3.2.3)$$ \par
\par\noindent {\bf 3.3. Definition of mixed Hodge modules.} Let $X$ be a smooth complex algebraic variety. The category of mixed Hodge modules ${\rm MHM}(X)$ is the abelian full subcategory of ${\rm MHW}(X)$ in (2.3) defined by increasing induction on the dimension $d$ of the support as follows: \par
For ${\mathcal M}\in{\rm MHW}(X)$ with ${\rm supp}\,{\mathcal M}=Z$, we have ${\mathcal M}\in{\rm MHM}(X)$ if and only if, for any $x\in X$, there is a Zariski-open neighborhood $U_x$ of $x$ in $X$ together with $g_x\in\Gamma(U_x,{\mathcal O}_{U_x})$ such that $$\dim Z\cap U_x\cap g_x^{-1}(0)<\dim Z,$$ $Z'_x:=Z\cap U_x\setminus g_x^{-1}(0)$ is smooth, and moreover the following two conditions are satisfied:
$$\hbox{${\mathcal M}|_{Z'_x}$ is an admissible variation of mixed Hodge structure.} \leqno(3.3.1)$$
$$\hbox{${\mathcal M}|_{U_x}$ is $g_x$-admissible, and $\varphi_{g_x,1}{\mathcal M}|_{U_x}\in{\rm MHM}(U_x)$.} \leqno(3.3.2)$$
More precisely, (3.3.1) means that ${\mathcal M}|_{U'_x}$ is isomorphic to the direct image of an admissible variation of mixed Hodge structure on $Z'_x$ by the closed embedding $$i_{Z'_x}:Z'_x\hookrightarrow U'_x:=U_x\setminus g_x^{-1}(0).$$ \par
If $Z=\{x\}$ for $x\in X$, then we set $${\rm MHM}_{\{x\}}(X):={\rm MHW}_{\{x\}}(X)={\rm MHS}({\mathbf Q}), \leqno(3.3.3)$$ where the first and second categories are respectively full subcategories of ${\rm MHM}(X)$ and ${\rm MHW}(X)$ consisting of objects supported on $x$, and the last one is the category of graded-polarizable mixed ${\mathbf Q}$-Hodge structures \cite{th2}. (Here the direct image by $\{x\}\hookrightarrow X$ is used.) \par
This definition is justified by the following (see \cite[Theorem 1]{def}). \par
\par\noindent {\bf Theorem~3.4.} {\it Conditions {\rm (3.3.1--2)} are independent of the choice of $U_x$, $g_x$. More precisely, if they are satisfied for some $U_x$, $g_x$ for each $x\in Z$, then $(3.3.2)$ is satisfied for any $U_x$, $g_x$, and $(3.3.1)$ is satisfied in case ${\rm rat}({\mathcal M}')$ is a local system up to a shift of complexes, see $(1.1.6)$ for ${\rm rat}$.} \par
We have moreover the following (see \cite[Theorem 2]{def}). \par
\par\noindent {\bf Theorem~3.5.} {\it The categories ${\rm MHM}(X)$ for smooth complex algebraic varieties $X$ are stable by the canonically defined cohomological functors ${\mathcal H}^jf_*$, ${\mathcal H}^jf_!$, ${\mathcal H}^jf^*$, ${\mathcal H}^jf^!$, $\psi_g$, $\varphi_{g,1}$, $\boxtimes$, ${\mathbf D}$, where $f$ is a morphism of smooth complex algebraic varieties and $g\in\Gamma(X,{\mathcal O}_X)$. Moreover these functors are compatible with the corresponding functors of the underlying ${\mathbf Q}$-complexes via the forgetful functor {\rm rat} in $(1.1.6)$.} \par
The proofs of these theorems use Beilinson's maximal extension together with the stability by subquotients systematically. The well-definedness of open direct images in (3.3) is reduced to the normal crossing case, see \cite{def}. Combining Theorem~(3.5) with the construction in \cite{mhm}, we can get the following (see \cite[Corollary 1]{def}). \par
\par\noindent {\bf Theorem~3.6.} {\it There are canonically defined functors $f_*$, $f_!$, $f^*$, $f^!$, $\psi_g$, $\varphi_{g,1}$, $\boxtimes$, ${\mathbf D}$, $\otimes$, ${{\mathcal H}}om$ between the bounded derived categories $D^b{\rm MHM}(X)$ for smooth complex algebraic varieties $X$ so that we have the canonical isomorphisms $H^jf_*={\mathcal H}^jf_*$, etc., where $f$ is a morphism of smooth complex algebraic varieties, $g\in\Gamma(X,{\mathcal O}_X)$, $H^j$ is the usual cohomology functor of the derived categories, and ${\mathcal H}^jf_*$, etc.\ are as in Theorem~$(3.5)$. Moreover the above functors between the $D^b{\rm MHM}(X)$ are compatible with the corresponding functors of the underlying ${\mathbf Q}$-complexes via the forgetful functor {\rm rat}.} \par
\par\noindent {\bf 3.7. Some notes on references about applications.} Since there is no space left to explain applications of mixed Hodge modules, we indicate some references here. These lists are not intended to be complete. \par
For applications to algebraic cycles, see \cite{BRS}, \cite{BFNP}, \cite{MuS}, \cite{NS}, \cite{RS1}, \cite{RS2}, \cite{int}, \cite{HC1}, \cite{HC2}, \cite{HTC}, \cite{ari}, \cite{rcy}, \cite{FCh}, \cite{Dir}, \cite{Tho}, \cite{HCh}, \cite{SS2}, etc. Some of them are related to normal functions. For the latter, see also \cite{ext}, \cite{anf}, \cite{spr}, \cite{SS1}, \cite{Schn}, etc. \par
Related to mixed Hodge structures on cohomologies of algebraic varieties, see \cite{BDS}, \cite{DS1}, \cite{DS2}, \cite{DS3}, \cite{DS5}, \cite{DS6}, \cite{DSW}, \cite{DuS}, \cite{OS}, \cite{PS}, \cite{cmp}, \cite{Fil}, \cite{SZ}, etc. \par
About Bernstein-Sato polynomials, Steenbrink spectra, and multiplier ideals, see \cite{Bu}, \cite{BMS}, \cite{BS1}, \cite{BS2}, \cite{BSY}, \cite{DMS}, \cite{DMST}, \cite{DS4}, \cite{DS7}, \cite{MP}, \cite{ste}, \cite{rat}, \cite{mic}, \cite{bhyp}, \cite{Mul}, \cite{pow}, etc. \par
For direct images of dualizing sheaves and vanishing theorems, see \cite{FFT}, \cite{kol}, \cite{Su}, etc. Concerning Hirzebruch characteristic classes, see \cite{BrScY}, \cite{MS}, \cite{MSS1}, \cite{MSS2}, \cite{MSS3}, \cite{MSS4}, etc. \par
\par
\vbox{\centerline{\bf Appendix A. Hypersheaves} \par
\par\noindent In this appendix we review some basics of hypersheaves, see Convention~3.} \par
\par\noindent {\bf A.1.} Let $X$ be a complex algebraic variety or a complex analytic space, and $A$ be a subfield of ${\mathbf C}$. We denote by $D_c^b(X,A)$ the derived category of bounded complexes of $A$-modules with constructible cohomology sheaves, see \cite{Ve1}, etc. In the algebraic case, we use the classical topology for the sheaf complexes although we assume that {\it stratifications are algebraic}. \par
The category of hypersheaves ${\mathbf H\mathbf S}(X,A)$ is the {\it full subcategory} of $D_c^b(X,A)$ consisting of objects $K$ satisfying the condition: $$\dim{\rm supp}\,{\mathcal H}^iK\leqslant-i,\quad\dim{\rm supp}\,{\mathcal H}^i\hskip1pt{\mathbf D}(K)\leqslant-i\quad(\forall\,i\in{\mathbf Z}). \leqno({\rm A}.1.1)$$ Here ${\mathcal H}^iK$ is the $i$\,th cohomology sheaf of $K$ in the usual sense, and ${\mathbf D}(K)$ is the dual of $K$. The latter can be defined by $${\mathbf D}(K):={\mathbf R}{\mathcal H} om_A(K,{\mathbf D}_X), \leqno({\rm A}.1.2)$$ with ${\mathbf D}_X$ the dualizing sheaf in $D^b_c(X,A)$. In the {\it smooth} case (by taking an embedding into smooth varieties), it can be defined by $${\mathbf D}_X:=A_X(d_X)[2d_X], \leqno({\rm A}.1.3)$$ with $d_X:=\dim X$. \par
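\par\noindent {\bf Example.} If $X$ is smooth of pure dimension $d_X$ and $L$ is a local system of finite-dimensional $A$-vector spaces on $X$, then $L[d_X]\in{\mathbf H\mathbf S}(X,A)$. Indeed ${\mathcal H}^i(L[d_X])$ vanishes unless $i=-d_X$, where its support has dimension $d_X$, and by (A.1.2--3) we have $${\mathbf D}(L[d_X])=L^{\vee}(d_X)[d_X],$$ with $L^{\vee}:={\mathcal H} om_A(L,A_X)$ the dual local system, so that the second condition in (A.1.1) holds by the same computation. \par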
By \cite{BBD}, ${\mathbf H\mathbf S}(X,A)$ is an {\it abelian category}, and there are canonical cohomological functors $${}^{\mathfrak m}{\mathcal H}^i:D^b_c(X,A)\to{\mathbf H\mathbf S}(X,A)\quad(i\in{\mathbf Z}), \leqno({\rm A}.1.4)$$ where the superscript $\,{}^{\mathfrak m}\,$ means the ``middle perversity". \par
\par\noindent {\bf A.2.~Nearby and vanishing cycles.} Let $g$ be a holomorphic function on an analytic space $X$. Let $\Delta\subset{\mathbf C}$ be a sufficiently small open disk with center $0$, and $\pi:\widetilde{\Delta^*}\to\Delta^*$ be a universal covering of the punctured disk $\Delta^*$. Let $\pi':\widetilde{\Delta^*}\to{\mathbf C}$ be its composition with the inclusion $\Delta^*\hookrightarrow{\mathbf C}$. Let $X_{\infty}$ be the base change of $X$ by $\pi'$. We denote by $\widetilde{j}:X_{\infty}\to X$ the base change of $\pi'$ by $g$. Set $X_0:=g^{-1}(0)$ with $i_0:X_0\hookrightarrow X$ the canonical inclusion. The nearby and vanishing cycle functors $$\psi_g,\,\varphi_g:D_c^b(X,A)\to D_c^b(X_0,A)$$ are defined as in \cite{van} by $$\psi_gK:=i_0^*\,{\mathbf R}\widetilde{j}_*\widetilde{j}^*K,\,\,\,\varphi_gK:=C(i_0^*\,K\to\psi_gK)\quad\hbox{for}\,\,\,\,K\in D_c^b(X,A), \leqno({\rm A}.2.1)$$ where we take a flasque resolution of $K$ to define ${\mathbf R}\widetilde{j}_*\widetilde{j}^*K$ and also the mapping cone. By definition we have a distinguished triangle $$i_0^*K\to\psi_gK\to\varphi_gK\buildrel{+1}\over\to. \leqno({\rm A}.2.2)$$ \par
The action of the monodromy $T$ is defined by $\gamma^*$ with $\gamma$ the automorphism of $\widetilde{\Delta^*}$ over $\Delta^*$ defined by $z\mapsto z+1$. Here $\widetilde{\Delta^*}$ is identified with $\{z\in{\mathbf C}\mid{\rm Im}\,z>r\}$ for some $r>0$ and $\pi$ is given by $z\mapsto t:=\exp(2\pi iz)$. (This is compatible with the usual definition of the monodromy of a local system $L$ on $\Delta^*$. In fact, $(\gamma^*\sigma)(z_0)=\sigma(z_0+1)$ for $\sigma\in\Gamma(\widetilde{\Delta^*},\pi^*L)$ with $z_0$ a base point of $\widetilde{\Delta^*}$, and the monodromy is given by the composition of canonical isomorphisms: $L_{\pi(z_0)}=(\pi^*L)_{z_0}=\Gamma(\widetilde{\Delta^*},\pi^*L)=(\pi^*L)_{z_0+1}=L_{\pi(z_0)}$.) There is a nonzero minimal polynomial for $T$ locally on $X$, and this implies the Jordan decomposition $T=T_sT_u$ (with $T_s,T_u$ polynomials in $T$ locally on $X$). \par
Assume $K\in{\mathbf H\mathbf S}(X,A)$. Set $${}^{\mathfrak m}\psi_gK:=\psi_gK[-1],\,\,{}^{\mathfrak m}\varphi_gK:=\varphi_gK[-1]. \leqno({\rm A}.2.3)$$ Then $${}^{\mathfrak m}\psi_gK,\,{}^{\mathfrak m}\varphi_gK\in{\mathbf H\mathbf S}(X_0,A). \leqno({\rm A}.2.4)$$ This follows for instance from \cite{Kvan}, \cite{Ma3} by using the Riemann-Hilbert correspondence. \par
In the case $A={\mathbf C}$, this implies the decompositions in the abelian category ${\mathbf H\mathbf S}(X_0,A)$: $${}^{\mathfrak m}\psi_gK=\h{$\bigoplus$}_{\lambda\in{\mathbf C}^*}\,{}^{\mathfrak m}\psi_{g,\lambda}K,\quad{}^{\mathfrak m}\varphi_gK=\h{$\bigoplus$}_{\lambda\in{\mathbf C}^*}\,{}^{\mathfrak m}\varphi_{g,\lambda}K, \leqno({\rm A}.2.5)$$ (which are locally finite direct sum decompositions), where $${}^{\mathfrak m}\psi_{g,\lambda}K:={\rm Ker}(T_s-\lambda)\subset{}^{\mathfrak m}\psi_gK,\quad{}^{\mathfrak m}\varphi_{g,\lambda}K:={\rm Ker}(T_s-\lambda)\subset{}^{\mathfrak m}\varphi_gK. \leqno({\rm A}.2.6)$$ \par
In the case $A\subset{\mathbf C}$, we have only the decompositions $${}^{\mathfrak m}\psi_gK={}^{\mathfrak m}\psi_{g,1}K\oplus{}^{\mathfrak m}\psi_{g,\ne 1}K,\quad{}^{\mathfrak m}\varphi_gK={}^{\mathfrak m}\varphi_{g,1}K\oplus{}^{\mathfrak m}\varphi_{g,\ne 1}K, \leqno({\rm A}.2.7)$$ which are compatible with the above decompositions after taking the scalar extension by $A\hookrightarrow{\mathbf C}$. \par
By (A.2.2) we have the canonical isomorphisms $${}^{\mathfrak m}\psi_{g,\ne1}K\buildrel{\sim}\over\longrightarrow{}^{\mathfrak m}\varphi_{g,\ne1}K,\quad{}^{\mathfrak m}\psi_{g,\lambda}K\buildrel{\sim}\over\longrightarrow{}^{\mathfrak m}\varphi_{g,\lambda}K\,\,\,\,(\lambda\ne 1,\,A={\mathbf C}), \leqno({\rm A}.2.8)$$ since the action of $T$ on $i_0^*K$ is trivial. \par
If $K=A_X$ and $X$ is a smooth algebraic variety (or a complex manifold with $X_0$ compact), then the nearby cycle functor $\psi_gA_X$ is also defined by $$\psi_gA_X={\mathbf R}\rho_*A_{X_c}, \leqno({\rm A}.2.9)$$ where $X_c:=g^{-1}(c)\subset X$ with $c\in{\mathbf C}^*$ sufficiently near $0$, and $\rho:X_c\to X_0$ is an appropriate contraction morphism. The latter is constructed by using an embedded resolution of $X_0\subset X$. \par
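\par\noindent {\bf Example.} If $g$ is a submersion, then locally over $X_0$ we have $X\cong X_0\times\Delta$ with $g$ the projection to $\Delta$, and $\psi_gA_X=A_{X_0}$, $\varphi_gA_X=0$. In general, for $x\in X_0$ the stalk of ${\mathcal H}^i\psi_gA_X$ at $x$ is the cohomology $H^i(F_x,A)$ of the Milnor fibre $F_x$ of $g$ at $x$, and $T$ induces the Milnor monodromy, so that $\varphi_gA_X$ measures the deviation of the fibres of $g$ from being locally cohomologically trivial. \par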
\par\noindent {\bf A.3.~Compatibility with the dual functor ${\mathbf D}$.} We say that a pairing $$K\otimes_AK'\to{\mathbf D}_X(k)$$ is a {\it perfect pairing} (with $k\in{\mathbf Z}$) if its corresponding morphism $$K\to{\mathbf D}(K')(k)={\mathbf R}{\mathcal H} om_A(K',{\mathbf D}_X)(k)$$ is an isomorphism in $D^b_c(X,A)$. Here ${\mathbf D}_X$ is as in (A.1.2), and the above correspondence comes from the isomorphism $${\rm Hom}(K\otimes K',{\mathbf D}_X(k))={\rm Hom}(K,{\mathbf R}{\mathcal H} om_A(K',{\mathbf D}_X(k))). \leqno({\rm A}.3.1)$$ The Tate twist $(k)$ is defined as in \cite[Definition 2.1.13]{th1}, see also a remark after (2.1.10). This can be neglected if $i=\sqrt{-1}$ is chosen. However, this twist is quite useful in order to keep track of ``weight". In fact, if $K=K'$ and it has pure weight $w$, then ${\mathbf D} K$ should have weight $-w$, and the above $k$ must be equal to $-w$, since the Tate twist $(k)$ changes the weight by $-2k$ ({\it loc.~cit.}). \par
Assume there is a perfect pairing $$S:K\otimes_A K'\to{\mathbf D}_X(-w)\quad\hbox{for}\,\,\,K,K'\in{\mathbf H\mathbf S}(X,A).$$ It induces a canonical pairing $$\psi_gS:\psi_gK\otimes_A\psi_gK'\to\psi_g{\mathbf D}_X(-w). \leqno({\rm A}.3.2)$$ \par
Assume $X$ is {\it smooth} (by taking an embedding into a smooth variety), and $X_0$ is also {\it smooth} (by replacing $K$ with its direct image under the graph embedding by $g$). Then we have $$\psi_g{\mathbf D}_X=A_{X_0}(d_X-w)[2d_X]={\mathbf D}_{X_0}(1-w)[2], \leqno({\rm A}.3.3)$$ and (A.3.2) induces a canonical perfect pairing $${}^{\mathfrak m}\psi_gS:{}^{\mathfrak m}\psi_gK\otimes_A{}^{\mathfrak m}\psi_gK'\to{\mathbf D}_{X_0}(1-w). \leqno({\rm A}.3.4)$$ Here some sign appears, and this is closely related to the sign in (1.4.8). \par
The above construction is compatible with the monodromy $T$, that is, $${}^{\mathfrak m}\psi_gS={}^{\mathfrak m}\psi_gS\,\raise.15ex\hbox{${\scriptstyle\circ}$}\,(T\otimes T).$$ Since $T^e$ is unipotent for some $e\in{\mathbf Z}_{>0}$, this implies $${}^{\mathfrak m}\psi_gS\,\raise.15ex\hbox{${\scriptstyle\circ}$}\,(N\otimes id)=-{}^{\mathfrak m}\psi_gS\,\raise.15ex\hbox{${\scriptstyle\circ}$}\,(id\otimes N),\quad{}^{\mathfrak m}\psi_gS={}^{\mathfrak m}\psi_gS\,\raise.15ex\hbox{${\scriptstyle\circ}$}\,(T_s\otimes T_s), \leqno({\rm A}.3.5)$$ where $T=T_sT_u$ is the Jordan decomposition, and $N:=(2\pi i)^{-1}\log T_u$. \par
We then get the induced perfect pairings $$\aligned{}^{\mathfrak m}\psi_{g,1}S:{}^{\mathfrak m}\psi_{g,1}K&\otimes_A{}^{\mathfrak m}\psi_{g,1}K'\to{\mathbf D}_{X_0}(1-w),\\{}^{\mathfrak m}\psi_{g,\ne1}S:{}^{\mathfrak m}\psi_{g,\ne1}K&\otimes_A{}^{\mathfrak m}\psi_{g,\ne1}K'\to{\mathbf D}_{X_0}(1-w),\\{}^{\mathfrak m}\psi_{g,\lambda}S:{}^{\mathfrak m}\psi_{g,\lambda}K&\otimes_A{}^{\mathfrak m}\psi_{g,\lambda^{-1}}K'\to{\mathbf D}_{X_0}(1-w)\quad(A={\mathbf C}).\endaligned \leqno({\rm A}.3.6)$$ \par
For the vanishing cycle functor $\varphi_g$, we have the induced perfect pairing $${}^{\mathfrak m}\varphi_{g,1}S:{}^{\mathfrak m}\varphi_{g,1}K\otimes_A{}^{\mathfrak m}\varphi_{g,1}K'\to{\mathbf D}_{X_0}(-w), \leqno({\rm A}.3.7)$$ satisfying $${}^{\mathfrak m}\varphi_{g,1}S\,\raise.15ex\hbox{${\scriptstyle\circ}$}\,({\rm can}\otimes id)={}^{\mathfrak m}\psi_{g,1}S\,\raise.15ex\hbox{${\scriptstyle\circ}$}\,(id\otimes{\rm Var}), \leqno({\rm A}.3.8)$$ where the morphisms $${\rm can}:{}^{\mathfrak m}\psi_{g,1}K\to{}^{\mathfrak m}\varphi_{g,1}K,\quad{\rm Var}:{}^{\mathfrak m}\varphi_{g,1}K'\to{}^{\mathfrak m}\psi_{g,1}K'(-1), \leqno({\rm A}.3.9)$$ are constructed in \cite[Section 5.2.1]{mhp}. (These correspond respectively to the morphisms $-{\rm Gr}_V\partial_t$, ${\rm Gr}_Vt$ in (B.6.8) below if $K={\rm DR}_X(M)$, $K'={\rm DR}_X(M')$ with $A={\mathbf C}$.) We have $${\rm Var}\,\raise.15ex\hbox{${\scriptstyle\circ}$}\,{\rm can}=N,\,\,\,{\rm can}\,\raise.15ex\hbox{${\scriptstyle\circ}$}\,{\rm Var}=N. \leqno({\rm A}.3.10)$$ \par
Note that the target of (A.3.7) is different from that of (A.3.6) by the Tate twist, and ${}^{\mathfrak m}\varphi_{g,\ne 1}S$ is given by ${}^{\mathfrak m}\psi_{g,\ne 1}S$ together with the isomorphism (A.2.8). The construction of (A.3.7) is not quite trivial, see \cite[Sections 5.2.1 and 5.2.3 and Lemma 5.2.4]{mhp}. For instance, we used there the isomorphism $$i^!K'=[i^*K'\to\psi_{g,1}K'\buildrel{-N}\over\longrightarrow\psi_{g,1}K'(-1)], \leqno({\rm A}.3.11)$$ where $i:X_0\hookrightarrow X$ is the inclusion. If we replace this with $$i^!K'=[i^*K'\to\psi_{g,1}K'\buildrel{id-T}\over\longrightarrow\psi_{g,1}K'], \leqno({\rm A}.3.12)$$ then (A.3.8) would hold with Var, $N$ respectively replaced by var, $T-id$, where the Tate twist should be omitted (since $T-id$ is not compatible with the weight structure). \par
\par
\vbox{\centerline{\bf Appendix B. ${\mathcal D}$-modules} \par
\par\noindent In this appendix we review some basics of ${\mathcal D}$-modules.} \par
\par\noindent {\bf B.1.~Holonomic ${\mathcal D}$-modules.} Let $X$ be a complex manifold of dimension $d_X$, and $M$ be a coherent left ${\mathcal D}_X$-module. This means that $M$ has locally a finite presentation
$$\h{$\bigoplus$}^p\,{\mathcal D}_U\to\h{$\bigoplus$}^q\,{\mathcal D}_U\to M|_U\to 0,$$ over sufficiently small open subsets $U\subset X$. (This is equivalent to the condition that $M$ is quasi-coherent over ${\mathcal O}_X$ and is locally finitely generated over ${\mathcal D}_X$.) \par
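\par\noindent {\bf Example.} For a single nonzero operator $P\in\Gamma(U,{\mathcal D}_U)$, the quotient $M={\mathcal D}_U/{\mathcal D}_UP$ is coherent. With the filtration induced by $F$ on ${\mathcal D}_U$ below, ${\rm Gr}^F_{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}M$ is annihilated by the principal symbol $\sigma(P)$, so that the characteristic variety of $M$ defined below is contained in the hypersurface $\sigma(P)^{-1}(0)\subset T^*U$; in particular $M$ is holonomic if $\dim U=1$. \par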
A filtration $F$ on $M$ is called a {\it good filtration} if $(M,F)$ satisfies $$(F_p{\mathcal D}_X)\hskip1pt(F_qM)\subset F_{p+q}M\quad(p\in{\mathbf N},\,q\in{\mathbf Z}), \leqno({\rm B}.1.1)$$ and ${\rm Gr}^F_{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}M$ is a {\it coherent} ${\rm Gr}^F_{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}{\mathcal D}_X$-module. (The last condition is equivalent to the conditions that each $F_pM$ is coherent over ${\mathcal O}_X$ and the equality holds for $q\gg0$ in (B.1.1).) Here $F$ on ${\mathcal D}_X$ is by the order of differential operators, that is, we have for local coordinates $(x_1,\dots,x_{d_X})$
$$F_p{\mathcal D}_X=\h{$\sum$}_{|\nu|\leqslant p}\,{\mathcal O}_X\,\partial_x^{\nu}\quad\hbox{with}\quad\partial_x^{\nu}:=\h{$\prod$}_i\,\partial_{x_i}^{\nu_i}\quad(\nu\in{\mathbf N}^{d_X}).$$ \par
The {\it characteristic variety} ${\rm Ch}(M)\subset T^*X$ of a coherent left ${\mathcal D}_X$-module $M$ is defined to be the support of the ${\rm Gr}^F_{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}{\mathcal D}_X$-module ${\rm Gr}^F_{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}M$ in the cotangent bundle $T^*X$. (The latter can be defined to be the union of the analytic subspaces of $T^*X$ defined by the ideal of ${\rm Gr}^F_{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}{\mathcal D}_X$ annihilating $g_i$ with $g_i$ local generators of ${\rm Gr}^F_{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}M$. Here ${\rm Gr}^F_{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}{\mathcal D}_X$ is identified with the sheaf of holomorphic functions on $T^*X$ which are polynomials on fibers of $T^*X\to X$.) This is independent of a choice of a good filtration $F$. \par
By the involutivity of the characteristic varieties (see \cite{SKK}, \cite{Ma2}, \cite{Ga}), it is known that $$\dim{\rm Ch}(M)\geqslant\dim X. \leqno({\rm B}.1.2)$$ (See also \cite{Bo} for the algebraic ${\mathcal D}$-module case.) \par
A coherent left ${\mathcal D}_X$-module $M$ is called {\it holonomic} if $$\dim{\rm Ch}(M)=\dim X. \leqno({\rm B}.1.3)$$ We will denote by $M_{\rm hol}({\mathcal D}_X)$ the abelian category of holonomic ${\mathcal D}_X$-modules. \par
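\par\noindent {\bf Example.} For $M={\mathcal O}_X$ with the good filtration defined by $F_p{\mathcal O}_X={\mathcal O}_X$ $(p\geqslant 0)$, $F_p{\mathcal O}_X=0$ $(p<0)$, the ${\rm Gr}^F_{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}{\mathcal D}_X$-module ${\rm Gr}^F_{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}{\mathcal O}_X={\mathcal O}_X$ is annihilated by the symbols of all vector fields, so that ${\rm Ch}({\mathcal O}_X)$ is the zero-section $T^*_XX\subset T^*X$ and ${\mathcal O}_X$ is holonomic. On the other hand ${\rm Ch}({\mathcal D}_X)=T^*X$ has dimension $2d_X$, so that ${\mathcal D}_X$ is not holonomic if $d_X>0$. More generally, ${\rm Ch}(M)\subset T^*_XX$ if and only if $M$ is coherent over ${\mathcal O}_X$, that is, a vector bundle with integrable connection. \par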
\par\noindent {\bf B.2.~Left and right ${\mathcal D}$-modules.} We have the transformation between filtered left and right ${\mathcal D}_X$-modules on a complex manifold $X$ of dimension $d_X$ which associates the following to a filtered left ${\mathcal D}_X$-module $(M,F)$: $$(\Omega_X^{d_X},F)\otimes_{{\mathcal O}_X}(M,F), \leqno({\rm B}.2.1)$$ where the filtration $F$ on $\Omega_X^{d_X}$ is defined by the condition $${\rm Gr}^F_p\Omega_X^{d_X}=0\quad(p\ne-d_X). \leqno({\rm B}.2.2)$$ So the filtration is shifted by $-d_X$. Here it is better to distinguish $\Omega_X^{d_X}$ and the dualizing sheaf $\omega_X$, since the Hodge filtration $F$ on $\omega_X$ is usually defined by $${\rm Gr}^F_p\omega_X=0\quad(p\ne 0). \leqno({\rm B}.2.3)$$ \par
By choosing local coordinates $x_1,\dots,x_{d_X}$, the sheaf $\Omega_X^{d_X}$ is trivialized by ${\rm d} x_1\wedge\cdots\wedge {\rm d} x_{d_X}$ locally on $X$, and forgetting $F$, the transformation is given by the anti-involution $^*$ of ${\mathcal D}_X$ defined by the conditions (see for instance \cite{Ma1}): $$\partial_{x_i}^*=-\partial_{x_i},\quad g^*=g\,\,\,(g\in{\mathcal O}_X),\quad(PQ)^*=Q^*P^*\,\,\,(P,Q\in{\mathcal D}_X). \leqno({\rm B}.2.4)$$ \par
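\par\noindent More explicitly, the right ${\mathcal D}_X$-module structure on $\Omega_X^{d_X}\otimes_{{\mathcal O}_X}M$ for a left ${\mathcal D}_X$-module $M$ is given on a vector field $\xi$ by $$(\omega\otimes m)\,\xi=-(L_{\xi}\hskip1pt\omega)\otimes m-\omega\otimes\xi\hskip1pt m,$$ with $L_{\xi}$ the Lie derivative. For $\omega={\rm d} x_1\wedge\cdots\wedge{\rm d} x_{d_X}$ this gives $(\omega\otimes m)\,\partial_{x_i}=-\,\omega\otimes\partial_{x_i}m$, in accordance with (B.2.4). \par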
For a right ${\mathcal D}_X$-module $N$, the left ${\mathcal D}_X$-module corresponding to it is denoted often by $$N\otimes_{{\mathcal O}_X}(\Omega_X^{d_X})^{\vee}. \leqno({\rm B}.2.5)$$ Here $L^{\vee}$ denotes the dual of a locally free sheaf $L$ in general, that is, $L^{\vee}:={\mathcal H} om_{{\mathcal O}_X}(L,{\mathcal O}_X)$. \par
\par\noindent {\bf B.3.~Direct images.} For a closed embedding $i:X\to Y$ of smooth complex algebraic varieties, the direct image of a filtered {\it right} ${\mathcal D}$-module $(M,F)$ is defined by $$i_*^{{\mathcal D}}(M,F)=(M,F)\otimes_{{\mathcal D}_X}({\mathcal D}_{X\hookrightarrow Y},F), \leqno({\rm B}.3.1)$$ where the sheaf-theoretic direct image is omitted to simplify the notation, and $$({\mathcal D}_{X\hookrightarrow Y},F):={\mathcal O}_X\otimes_{{\mathcal O}_Y}({\mathcal D}_Y,F). \leqno({\rm B}.3.2)$$ \par
For a filtered {\it left} ${\mathcal D}$-module $(M,F)$, the ${\mathcal D}$-module is twisted by $\omega_{X/Y}$ and the filtration $F$ is shifted by $r:={\rm codim}_YX$ because of the transformation between filtered left and right ${\mathcal D}$-modules in (B.2). If $X$ is locally defined by $y_1=\cdots=y_r=0$ with $y_1,\dots,y_{d_Y}$ local coordinates of $Y$, then, setting $\partial_{y_i}:=\partial/\partial y_i$, the direct image is {\it locally} defined by $$i_*^{{\mathcal D}}(M,F)=(M,F[r])\otimes_{{\mathbf C}}({\mathbf C}[\partial_{y_1},\dots,\partial_{y_r}],F). \leqno({\rm B}.3.3)$$ \par
For a smooth projection $p:Z:=X\times Y\to Y$ with $X,Y$ smooth, the direct image of a filtered {\it left} ${\mathcal D}_Z$-module $(M,F)$ is defined by the sheaf-theoretic direct image of the relative de Rham complex ${\rm DR}_{Z/Y}(M,F)$, that is, $$p_*^{{\mathcal D}}(M,F):={\mathbf R}\1p_*{\rm DR}_{Z/Y}(M,F), \leqno({\rm B}.3.4)$$ where ${\rm DR}_{Z/Y}(M,F)$ is the filtered complex defined by $$(M,F)\to\Omega_{Z/Y}^1\otimes_{{\mathcal O}_Z}(M,F[-1])\to\cdots\to\Omega_{Z/Y}^{d_X}\otimes_{{\mathcal O}_Z}(M,F[-d_X]), \leqno({\rm B}.3.5)$$ with the last term put at degree $0$. The differential of this complex is defined as in the absolute case (see (1.2.3)), and is locally given as the Koszul complex associated with the action of $\partial/\partial x_i$ on $M$ if $x_1,\dots,x_{d_X}$ are local coordinates of $X$. (Note, however, that this does not work for {\it smooth morphisms} which are not necessarily {\it smooth projections}, since there is no canonical lift of vector fields on $Y$ to $Z$.) \par
In general, the direct image of a filtered right ${\mathcal D}_X$-module $(M,F)$ by a morphism of smooth varieties $f:X\to Y$ is defined by $$f_*^{{\mathcal D}}(M,F)=p_*^{{\mathcal D}}\,\raise.15ex\hbox{${\scriptstyle\circ}$}\,(i_f)_*^{{\mathcal D}}(M,F), \leqno({\rm B}.3.6)$$ where $i_f:X\to X\times Y$ is the graph embedding, and $p:X\times Y\to Y$ is the second projection so that $f=p\,\raise.15ex\hbox{${\scriptstyle\circ}$}\, i_f$. \par
\par\noindent {\bf Remarks.} (i) We can verify that the direct image in (B.3.6) is naturally isomorphic to the complex of the induced ${\mathcal D}_Y$-module associated with the (sheaf-theoretic) direct image of the filtered differential complex ${\rm DR}_X(M,F)$. This means the compatibility between the direct images of filtered differential complexes and filtered ${\mathcal D}$-modules. \par
(ii) It seems simpler to use the above construction of direct images instead of the induced ${\mathcal D}$-module construction as in \cite[3.3.6]{mhp} for the definition of direct images of $V$-filtrations. \par
(iii) The direct image for a morphism of singular varieties is rather complicated. For the direct image of mixed Hodge modules, we may assume that the morphism is projective by using a Beilinson-type resolution (see \cite[Section 3]{Bei1} and the proof of \cite[Theorem 4.3]{mhm}), and the cohomological direct image is actually enough. So it is reduced to the case of a morphism of smooth varieties. \par
\par\noindent {\bf B.4.~Dual functor.} For a holonomic ${\mathcal D}_X$-module $M$ on a complex manifold $X$ of dimension $d_X$, its dual ${\mathbf D}(M)$ is defined so that $$(\Omega_X^{d_X})^{\vee}\otimes_{{\mathcal O}_X}{\mathbf D}(M)={{\mathcal E}}xt_{{\mathcal D}_X}^{d_X}(M,{\mathcal D}_X), \leqno({\rm B}.4.1)$$ and ${\mathbf D}$ is called the dual functor. This is a contravariant functor. It is well-known that $${\mathcal E} xt_{{\mathcal D}_X}^p(M,{\mathcal D}_X)=0\quad(p\ne d_X), \leqno({\rm B}.4.2)$$ and $${\mathbf D}^2=id. \leqno({\rm B}.4.3)$$ \par
We say that a filtered holonomic ${\mathcal D}_X$-module $(M,F)$ is {\it Cohen-Macaulay} if ${\rm Gr}_{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}^FM$ is a Cohen-Macaulay ${\rm Gr}_{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}^F{\mathcal D}_X$-module. In this case, we have $${{\mathcal E} xt}^i_{{\rm Gr}_{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}^F{\mathcal D}_X}\bigl({\rm Gr}_{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}^FM,{\rm Gr}_{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}^F{\mathcal D}_X\bigr)=0\quad\quad(i\ne d_X), \leqno({\rm B}.4.4)$$ and the dual filtered ${\mathcal D}_X$-module ${\mathbf D}(M,F)$ can be defined so that $$(\Omega_X^{d_X},F)\otimes_{{\mathcal O}_X}{\mathbf D}(M,F)={\mathbf R}{{\mathcal H} om}_{{\mathcal D}_X}\bigl((M,F),({\mathcal D}_X,F[d_X])\bigr)[d_X]. \leqno({\rm B}.4.5)$$ This means that the last filtered complex is filtered quasi-isomorphic to a filtered ${\mathcal D}$-module. \par
It is known that the dual functor ${\mathbf D}$ commutes with the de Rham functor ${\rm DR}_X$, that is, for a regular holonomic ${\mathcal D}_X$-module $M$, there is a canonical isomorphism (see for instance \cite[Proposition 2.4.12]{mhp}): $${\mathbf D}({\rm DR}_X(M))={\rm DR}_X({\mathbf D}(M)). \leqno({\rm B}.4.6)$$ \par
\par\noindent {\bf B.5.~Regular holonomic ${\mathcal D}$-modules.} Let $Z$ be a closed analytic subset of a complex manifold $X$. Let ${\mathcal I}_Z\subset{\mathcal O}_X$ be the ideal of $Z$. For a bounded complex of ${\mathcal D}_X$-modules $M^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}$, set $$\aligned{\mathcal H}_{[Z]}^iM^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}&:=\rlap{\raise-10pt\hbox{$\,\,\scriptstyle k$}}\rlap{\raise-6pt\hbox{$\,\rightarrow$}}{\rm lim}\,{\mathcal E}{xt}^i_{{\mathcal O}_X}({\mathcal O}_X/{\mathcal I}_Z^k,M^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}})\,\bigl(=\rlap{\raise-10pt\hbox{$\,\,\scriptstyle k$}}\rlap{\raise-6pt\hbox{$\,\rightarrow$}}{\rm lim}\,{\mathcal E}{xt}^i_{{\mathcal D}_X}({\mathcal D}_X/{\mathcal D}_X{\mathcal I}_Z^k,M^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}})\bigr),\\
{\mathcal H}_{[X|Z]}^iM^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}&:=\rlap{\raise-10pt\hbox{$\,\,\scriptstyle k$}}\rlap{\raise-6pt\hbox{$\,\rightarrow$}}{\rm lim}\,{\mathcal E}{xt}^i_{{\mathcal O}_X}({\mathcal I}_Z^k,M^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}})\,\bigl(=\rlap{\raise-10pt\hbox{$\,\,\scriptstyle k$}}\rlap{\raise-6pt\hbox{$\,\rightarrow$}}{\rm lim}\,{\mathcal E}{xt}^i_{{\mathcal D}_X}({\mathcal D}_X{\mathcal I}_Z^k,M^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}})\bigr),\endaligned$$ so that we have a long exact sequence of ${\mathcal D}_X$-modules
$$\to{\mathcal H}_{[Z]}^iM^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}\to{\mathcal H}^iM^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}\to{\mathcal H}_{[X|Z]}^iM^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}\to{\mathcal H}_{[Z]}^{i+1}M^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}\to \leqno({\rm B}.5.1)$$ see \cite{Khol2} (and also \cite{Gr} for the algebraic case). Note that ${\mathcal H}_{[Z]}^0M$ for a holonomic ${\mathcal D}_X$-module $M$ is the largest holonomic ${\mathcal D}_X$-submodule supported in $Z$. \par
It is known that a holonomic ${\mathcal D}_X$-module $M$ with support $Z$ is {\it regular holonomic} if and only if there is a closed analytic subset $Z'\subset Z$ together with a proper morphism $\pi:\widetilde{Z}\to Z$ such that $\dim Z'<\dim Z$, $Z\setminus Z'$ is smooth and equi-dimensional, $\widetilde{Z}$ is smooth, $\pi$ induces an isomorphism over $Z\setminus Z'$, $\pi^{-1}(Z')$ is a divisor with normal crossings on $\widetilde{Z}$, and moreover, by setting $\pi_X:=j_Z\,\raise.15ex\hbox{${\scriptstyle\circ}$}\,\pi$ with $j_Z:Z\hookrightarrow X$ the canonical inclusion, the following two conditions are satisfied: \par
\par\noindent (i) $\,{\mathcal H}^0_{[Z']}M$ is a regular holonomic ${\mathcal D}_X$-module. \par
\par\noindent
(ii) $\,{\mathcal H}^0_{[X|Z']}M={\mathcal H}^0(\pi_X)_*^{{\mathcal D}}\widetilde{M}$ with $\widetilde{M}$ the Deligne meromorphic extension of $M|_{Z\setminus Z'}$ on $\widetilde{Z}$.
\par
We may assume that $Z'$ is the union of ${\rm Sing}\,Z$ and the lower dimensional irreducible components of $Z$, and $\pi$ is given by the desingularization of the union of the maximal dimensional irreducible components of $Z$. Note that in the case of algebraic ${\mathcal D}$-modules, we have ${\mathcal H}^0_{[X|Z']}M=j_*j^*M$ with $j:X\setminus Z'\hookrightarrow X$ the canonical inclusion. \par
The above criterion is proved by induction on the dimension of the support, using \cite{eq}, \cite{Khol2}. In fact, it is known that regular holonomic ${\mathcal D}$-modules are stable by subquotients and extensions in the category of holonomic ${\mathcal D}$-modules and also by the direct images under proper morphisms, and contain Deligne meromorphic extensions in the normal crossing case. \par
Let $M_{rh}({\mathcal D}_X)\subset M_{\rm hol}({\mathcal D}_X)$ be the full subcategory of regular holonomic ${\mathcal D}_X$-modules on a complex manifold $X$. Let $D^b_{rh}({\mathcal D}_X)\subset D^b({\mathcal D}_X)$ be the full subcategory of bounded complexes with regular holonomic cohomologies. We have the equivalence of categories, that is, the Riemann-Hilbert correspondence: $${\rm DR}_X:D^b_{rh}({\mathcal D}_X)\buildrel{\sim}\over\longrightarrow D_c^b(X,{\mathbf C})\,\,\,\bigl(\hbox{inducing}\,\,\,{\rm DR}_X:M_{rh}({\mathcal D}_X)\buildrel{\sim}\over\longrightarrow{\mathbf H\mathbf S}(X,{\mathbf C})\bigr), \leqno({\rm B}.5.2)$$ see \cite{Krh1}, \cite{Krh2}, \cite{KK1}, \cite{Mthe}, \cite{Mrh}, \cite{Meq} (and also \cite{Bo}, \cite{HT}, etc.\ for the algebraic case). \par
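\par\noindent {\bf Example.} For $M={\mathcal O}_X$ we have ${\rm DR}_X({\mathcal O}_X)={\mathbf C}_X[d_X]$, since the shifted de Rham complex $[{\mathcal O}_X\to\Omega_X^1\to\cdots\to\Omega_X^{d_X}]$ (with ${\mathcal O}_X$ put at degree $-d_X$) is quasi-isomorphic to ${\mathbf C}_X[d_X]$ by the holomorphic Poincar\'e lemma. More generally, ${\rm DR}_X$ induces an equivalence between vector bundles with integrable connection $(M,\nabla)$ and ${\mathbf C}$-local systems via $M\mapsto({\rm Ker}\,\nabla)[d_X]$. \par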
\par\noindent {\bf Remarks.} (i) There are many ways to define the full subcategory $M_{rh}({\mathcal D}_X)\subset M_{\rm hol}({\mathcal D}_X)$. Their equivalences may follow by using the above argument in certain cases. \par
(ii) There is a nontrivial point about the commutativity of some diagram used in certain proofs of (B.5.2), and this is studied in \cite[Section 4]{ind}. \par
(iii) Some people say that (B.5.2) is essentially proved in \cite{KK1} where the full faithfulness of the de Rham functor is essentially shown (compare the assertion (ii) of Theorem 5.4.1 written in \cite[p.~825]{KK1} with \cite[Proposition 3.3]{Mbid}). The essential surjectivity is not difficult to show by using the full faithfulness. In \cite{Krh1}, \cite{Krh2}, a quasi-inverse is explicitly constructed. \par
(iv) In the algebraic case, the argument is more complicated, since we have the regularity {\it at infinity}, see \cite{Bo}, \cite{eq}, \cite{HT}, etc. (This is essential to get a {\it canonical algebraic structure} on the vector bundle associated with a local system on a smooth complex variety by using the Deligne extension, see \cite{eq}.) The Riemann-Hilbert correspondence is used in an essential way in representation theory, see \cite{BB}, \cite{BK}. (This point does not seem to be sufficiently clarified in the last one.) \par
\par\noindent {\bf B.6.~$V$-filtration.} For a complex manifold $X$, set $Y:=X\times{\mathbf C}$ with $t$ the coordinate of ${\mathbf C}$. We have the filtration $V$ on ${\mathcal D}_Y$ indexed by ${\mathbf Z}$ and such that \par
\par\noindent (i) $\,\,V^0{\mathcal D}_Y\subset{\mathcal D}_Y$ is the subring generated by ${\mathcal O}_Y$, $\partial_{x_i}$, and $t\partial_t$, \par
\par\noindent (ii) $\,\,V^j{\mathcal D}_Y=t^j\,V^0{\mathcal D}_Y$,\,\,\,$V^{-j}{\mathcal D}_Y=\h{$\sum$}_{0\leqslant k\leqslant j}\,\partial_t^k\,V^0{\mathcal D}_Y$\quad($j\in{\mathbf Z}_{>0}$), \par
\par\noindent where the $x_i$ are local coordinates of $X$, and $\partial_{x_i}:=\partial/\partial x_i$. \par
Let $M$ be a regular holonomic ${\mathcal D}_Y$-module. We say that $M$ is {\it quasi-unipotent} if so is $K:={\rm DR}_Y(M)$, that is, if there is a stratification $\{S\}$ of $Y$ such that the restrictions of the cohomology sheaves ${\mathcal H}^iK$ to each stratum $S$ are ${\mathbf C}$-local systems having quasi-unipotent local monodromies around $\overline S\setminus S$. \par
For a quasi-unipotent regular holonomic left ${\mathcal D}_Y$-module $M$, there is a unique exhaustive filtration $V$ of Kashiwara \cite{Kvan} and Malgrange \cite{Ma3} indexed discretely by ${\mathbf Q}$ and satisfying the following three conditions: \par
\par\noindent (iii)\,\, $V^{\alpha}M$ ($\forall\,\alpha\in{\mathbf Q}$) are locally finitely generated $V^0{\mathcal D}_Y$-submodules, \par
\par\noindent (iv)\,\, $t\,V^{\alpha}M\subset V^{\alpha+1}M$ (with equality if $\alpha>0$),\,\,\,$\partial_t\,V^{\alpha}M\subset V^{\alpha-1}M\,\,$ for any $\alpha\in{\mathbf Q}$, \par
\par\noindent (v)\,\,\, $\partial_tt-\alpha$ is locally nilpotent on ${\rm Gr}_V^{\alpha}M\,\,$ $(\forall\,\alpha\in{\mathbf Q})$. \par
Here we say that $V$ is {\it indexed discretely by} ${\mathbf Q}$ if there is a positive integer $m$ satisfying $$V^{\alpha}M=V^{j/m}M\quad\hbox{if}\quad(j-1)/m<\alpha\leqslant j/m\quad\hbox{with}\quad j\in{\mathbf Z}. \leqno({\rm B}.6.1)$$ The existence of $V$ follows from that of $b$-functions in \cite{Khol2} where the holonomicity is actually sufficient (see \cite[Proposition 1.9]{rat}). \par
From condition (v) we can easily deduce the isomorphisms $$t:{\rm Gr}_V^{\alpha}M\buildrel{\sim}\over\longrightarrow{\rm Gr}_V^{\alpha+1}M,\quad\partial_t:{\rm Gr}_V^{\alpha+1}M\buildrel{\sim}\over\longrightarrow{\rm Gr}_V^{\alpha}M\quad\quad(\forall\,\alpha\ne 0). \leqno({\rm B}.6.2)$$ It is also easy to show the following: $$V^{\alpha}M=0\,\,\,(\forall\,\alpha>0)\quad\hbox{if}\quad{\rm supp}\,M\subset X\times\{0\}\subset Y, \leqno({\rm B}.6.3)$$ $$t:V^{\alpha}M\buildrel{\sim}\over\longrightarrow V^{\alpha+1}M\quad\quad(\forall\,\alpha>0), \leqno({\rm B}.6.4)$$ $$\hbox{$M\mapsto V^{\alpha}M$ (or ${\rm Gr}_V^{\alpha}M$) are exact functors ($\forall\,\alpha\in{\mathbf Q}$),} \leqno({\rm B}.6.5)$$ see \cite[Lemma 3.1.3, Lemma 3.1.4, Corollary 3.1.5]{mhp}, where right ${\mathcal D}$-modules are used so that the action of $t\partial_t$ there corresponds to that of $-\partial_tt$ in this paper by (B.2.4), and an increasing filtration $V_{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}$ is used there so that $V_{\alpha}=V^{-\alpha}$. \par
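\par\noindent {\bf Example.} Let $X$ be a point, so that $Y={\mathbf C}$ with coordinate $t$, and let $M={\mathcal O}_Y$. Then $$V^{\alpha}{\mathcal O}_Y=t^k\,{\mathcal O}_Y\,\,\,\hbox{if}\,\,\,k<\alpha\leqslant k+1\,\,\,(k\in{\mathbf Z}_{\geqslant 0}),\quad V^{\alpha}{\mathcal O}_Y={\mathcal O}_Y\,\,\,(\alpha\leqslant 0).$$ Indeed conditions (iii)--(iv) are immediate, and ${\rm Gr}_V^{\alpha}{\mathcal O}_Y$ vanishes unless $\alpha=k+1\in{\mathbf Z}_{>0}$, where it is spanned by the class of $t^k$ with $\partial_tt\cdot t^k=(k+1)\,t^k$, so that (v) holds. This is in accordance with (B.6.7) below, since ${}^{\mathfrak m}\psi_t({\mathbf C}_Y[1])={\mathbf C}_{\{0\}}$ with trivial monodromy and ${}^{\mathfrak m}\varphi_{t,1}({\mathbf C}_Y[1])=0$. \par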
Set $Z:=g^{-1}(0)\subset X$. Let $M'_Z$ be the largest holonomic ${\mathcal D}_X$-submodule of $M$ supported in $Z$, and similarly for $M''_Z$ with submodule replaced with quotient module. Then we have the following canonical isomorphisms of ${\mathcal D}_X$-modules (see \cite[Proposition 3.1.8]{mhp}): $$\aligned M'_Z&={\rm Ker}\bigl(t:{\rm Gr}_V^0\widetilde{M}\to{\rm Gr}_V^1\widetilde{M}\bigr),\\ M''_Z&={\rm Coker}\bigl(\partial_t:{\rm Gr}_V^1\widetilde{M}\to{\rm Gr}_V^0\widetilde{M}\bigr).\endaligned \leqno({\rm B}.6.6)$$ Set $K:={\rm DR}_Y(M)\in{\mathbf H\mathbf S}(Y,{\mathbf C})$. In the notation of (A.2), there are canonical isomorphisms $$\aligned{\rm DR}_X({\rm Gr}_V^{\alpha}M)&\buildrel{\sim}\over\longrightarrow{}^{\mathfrak m}\psi_{t,{\mathbf e}(-\alpha)}K\quad(\alpha\in(0,1]),\\ {\rm DR}_X({\rm Gr}_V^0M)&\buildrel{\sim}\over\longrightarrow{}^{\mathfrak m}\varphi_{t,1}K,\endaligned \leqno({\rm B}.6.7)$$ such that $\exp(-2\pi i(\partial_tt-\alpha))$ on the left-hand side corresponds to the monodromy $T$ on the right-hand side, where ${\mathbf e}(-\alpha):=\exp(-2\pi i\alpha)$. In particular, $-(\partial_tt-\alpha)$ corresponds to $N:=(2\pi i)^{-1}\log T_u$ with $T=T_sT_u$ the Jordan decomposition. Moreover the morphisms $$-{\rm Gr}_V\partial_t:{\rm Gr}_V^1M\to{\rm Gr}_V^0M,\quad{\rm Gr}_Vt:{\rm Gr}_V^0M\to{\rm Gr}_V^1M \leqno({\rm B}.6.8)$$ respectively correspond to the morphisms can and Var in (A.3.9) with $K'=K$ and $f=t$. (The sign before ${\rm Gr}_V\partial_t$ in (B.6.8) comes from the transformation between left and right ${\mathcal D}$-modules as in (B.2.4).) \par
In the case $M=(i_g)_*^{{\mathcal D}}{\mathcal O}_X$ with $i_g:X\hookrightarrow X\times{\mathbf C}$ the graph embedding by a holomorphic function $g$ on $X$, the proof of (B.6.7) is given in \cite{Ma3}, and this can be extended to the general regular holonomic case (see also the proof of \cite[Proposition 3.4.12]{mhp}). There are canonical morphisms inducing the isomorphisms of (B.6.7) by using logarithmic functions, and these canonical morphisms are quite important for the proof of the stability theorem of Hodge modules by direct images. \par
\par
\vbox{\centerline{\bf Appendix C. Compatible filtrations} \par
\par\noindent In this appendix we review some basics of compatible filtrations.} \par
\par\noindent {\bf C.1.~Compatible subobjects.} Let ${\mathcal A}$ be an abelian category, and $n\in{\mathbf Z}_{>0}$. We say that subobjects $B_i$ ($i\in[1,n]$) of $A\in{\mathcal A}$ are {\it compatible subobjects} if there is a {\it short exact $n$-ple complex} $K$ in ${\mathcal A}$ such that \par
\par\noindent
(i) $\,\,K^p=0$ if $|p_i|>1$ for some $i\in[1,n]$, where $p=(p_1,\dots,p_n)\in{\mathbf Z}^n$. \par
\par\noindent (ii) $\,\,K^{p-{\mathbf 1}_i}\to K^p\to K^{p+{\mathbf 1}_i}$ is exact for any $p\in{\mathbf Z}^n$, $i\in[1,n]$. \par
\par\noindent (iii) $\,\,K^0=A$, $K^{-{\mathbf 1}_i}=B_i$ for any $i\in[1,n]$. \par
\par\noindent Here ${\mathbf 1}_i=(({\mathbf 1}_i)_1,\dots,({\mathbf 1}_i)_n)$ with $({\mathbf 1}_i)_j=\delta_{i,j}$. Note that conditions (i) and (ii) respectively correspond to ``short'' and ``exact''. In the case $n=2$, $K$ is the diagram of the {\it nine lemma}. \par
\par\noindent {\bf C.2.~Compatible filtrations.} We say that $n$ filtrations $F_{(i)}$ ($i\in[1,n]$) of $A\in{\mathcal A}$ form {\it compatible filtrations} if $$F_{(1)}^{\nu_1}A,\,\dots\,,\,F_{(n)}^{\nu_n}A$$ are compatible subobjects of $A$ for any $\nu=(\nu_1,\dots,\nu_n)\in{\mathbf Z}^n$. \par
If $n=2$, then any two filtrations $F_{(1)}$, $F_{(2)}$ are compatible filtrations. However, this does not necessarily hold for $n>2$. \par
We can show that if the $F_{(i)}$ ($i\in[1,n]$) form compatible filtrations of $A$, then their restrictions to $F^{\nu}A:=F_{(1)}^{\nu_1}\cdots F_{(n)}^{\nu_n}A$ also form compatible filtrations (see the proof of \cite[Corollary 1.2.13]{mhp}). Using the short exact complex $K$ with $$K^0=F^{\nu}A,\quad K^{-{\mathbf 1}_i}=F^{\nu+{\mathbf 1}_i}A\,\,\,\,(i\in[1,n]),$$ we can show that $$\hbox{${\rm Gr}_{F_{(1)}}^{\nu_1}\cdots\,{\rm Gr}_{F_{(n)}}^{\nu_n}A\,$ does not depend on the order of $\{1,\dots,n\}$.} \leqno({\rm C}.2.1)$$ In fact, ${\rm Gr}_{F_{(i)}}^{\nu_i}$ corresponds to restricting $K$ to the subcomplex defined by $p_i=1$, and $${\rm Gr}_{F_{(1)}}^{\nu_1}\cdots\,{\rm Gr}_{F_{(n)}}^{\nu_n}A=K^{1,\dots,1}. \leqno({\rm C}.2.2)$$ Note that (C.2.1) is not completely trivial even in the case $n=2$, where Zassenhaus lemma is usually used, and we can replace it by the diagram of the nine lemma as is explained above. \par
\par\noindent {\bf C.3.~Strict complexes.} Let $A^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}$ be a complex in ${\mathcal A}$ with $n$ filtrations $F_{(i)}$ ($i\in[1,n]$). We say that $\bigl(A^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}},F_{(i)}\,(i\in[1,n])\bigr)$ is {\it strict} if for any $j\in{\mathbf Z}$, $\nu=(\nu_1,\dots,\nu_n)\in{\mathbf Z}^n$, there is a short exact $n$-ple complex $K$ as in (C.1) such that $$H^j\bigl(\h{$\bigcap$}_{i\in I}\,F_{(i)}^{\nu_i}A^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}\bigr)=K^{-{\mathbf 1}_I}\quad\bigl(\forall\,I\subset\{1,\dots,n\}\bigr), \leqno({\rm C}.3.1)$$ where ${\mathbf 1}_I=(({\mathbf 1}_I)_1,\dots,({\mathbf 1}_I)_n)$ with $({\mathbf 1}_I)_j=1$ if $j\in I$, and $0$ otherwise (and $I$ can be empty in (C.3.1)). \par
We can show (see \cite[Corollary 1.2.13]{mhp}) that if $\bigl(A^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}},F_{(i)}\,(i\in[1,n])\bigr)$ is strict, then $$\hbox{$H^j,\,{\rm Gr}_{F_{(1)}}^{\nu_1},\,\dots\,,\,{\rm Gr}_{F_{(n)}}^{\nu_n}$ on $A^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}$ commute with each other $(\forall\,j\in{\mathbf Z},\,\nu\in{\mathbf Z}^n)$.} \leqno({\rm C}.3.2)$$ \vskip-7mm $$\hbox{The induced filtrations $F_{(i)}$ ($i\in[1,n]$) on $H^jA^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}$ form compatible filtrations.} \leqno({\rm C}.3.3)$$ \par
Note that $\bigl(A^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}},F_{(i)}\,(i\in[1,2])\bigr)$ is strict if $(A^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}},F_{(2)})$, $({\rm Gr}_{F_{(2)}}^pA^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}},F_{(1)})$ ($\forall\,p\in{\mathbf Z})$ are strict, and $F_{(2)}^pA^{\raise.15ex\hbox{${\scriptscriptstyle\bullet}$}}=0$ for $p\gg 0$, where we assume that the filtered inductive limit in ${\mathcal A}$ is an exact functor, see \cite[Theorem~1.2.9]{mhp}.
\end{document}
\begin{document}
\title{Changing the past by forgetting} \author{Saibal Mitra} \date{\today} \maketitle \begin{abstract} Assuming the validity of the Many Worlds Interpretation (MWI) of quantum mechanics, we show that memory erasure can cause one to end up in a different sector of the multiverse where the contents of the erased memory are different.
\end{abstract} \section{Introduction} As pointed out by David Deutsch \cite{deutsch}, it is possible to experimentally disprove all collapse interpretations of quantum mechanics if one could make measurements in a reversible way. Suppose an observer measures the $z$-component of a spin that is polarized in the $x$-direction. Then there exists a unitary operator that disentangles the observer from the spin, causing the observer to forget the result of the measurement. However, he would still remember having measured the $z$-component of the spin. In the MWI, the spin will be in its original state and therefore measuring the $x$-component will yield spin up with 100\% probability. In any collapse interpretation, measuring the $x$-component will yield spin up or spin down with 50\% probability.
Unfortunately, such reversible measurements are not possible with current technology. In the MWI, it is still the case that simply dumping the information about the result of the measurement in the environment will cause the observer to become disentangled from the spin, but now the spin is entangled with the rest of the universe. While subsequent measurements cannot distinguish the MWI from collapse interpretations, in the MWI the outcome of measuring the $z$-component again is not predetermined. In this article, we show how machine observers can benefit from resetting their memories to previous states. First, we state the assumptions that define what we mean by the MWI in this article.
\section{Assumptions} The conclusions of this article depend only on the following assumptions about time evolution and the physical nature of the subjective experiences of observers. \begin{assumption} The time evolution of the universe, including any observers, is always of form \begin{equation} \ket{\psi\haak{t_{2}}}=\hat{U}\haak{t_{2},t_{1}}\ket{\psi\haak{t_{1}}}, \end{equation} where $\ket{\psi\haak{t}}$ represents the state of the universe at time $t$ and $\hat{U}$ is some unitary operator. \end{assumption}
\begin{assumption} The states any given observer can subjectively find herself in can be described classically using a finite amount of information, similar to specifying the computational state of a classical computer by specifying the state of each of the individual bits to be either zero or one. \end{assumption} Describing the full quantum mechanical state vector of an observer will, of course, require a huge amount of additional information. In this article we will take the view that, whatever the exact quantum mechanical state vector is, the possible states the observer's consciousness can be in can be identified with certain classically describable macrostates of the observer. We will consider the additional information needed to specify the exact quantum state of the observer as part of the rest of the universe.
Denoting the macrostates an observer can find herself in formally as orthonormal ket vectors $\ket{O_{n}}$, the generic form of a quantum state of the universe containing the observer can be written as: \begin{equation}\label{formm} \ket{\psi} = \sum_{n,m}a_{n,m}\ket{O_{n}}\ket{\phi_{m}}, \end{equation} where the $\ket{\phi_{m}}$ form a complete set of orthonormal states describing the rest of the universe. Carrying out the sum over $m$ in \eqref{formm} and defining $\ket{U_{n}}:=\sum_{m}a_{n,m}\ket{\phi_{m}}$, we can write it formally as \begin{equation}\label{form} \ket{\psi} = \sum_{n}\ket{O_{n}}\ket{U_{n}}. \end{equation} The $\ket{U_{n}}$ are then arbitrary, in general unnormalized, states describing the rest of the universe.
If an initial state \begin{equation} \ket{\psi_{\text{Initial}}}=\ket{O_{1}}\ket{U_{1}} \end{equation} evolves in time to become a superposition of the form \begin{equation} \ket{\psi_{\text{A while later}}} = \ket{O_{2}}\ket{U_{2}} + \ket{O_{3}}\ket{U_{3}}, \end{equation} then we interpret this as two parallel universes, one containing the observer in the state $\ket{O_{2}}$, the other containing the observer in the state $\ket{O_{3}}$.
\begin{assumption} Born rule: In a state described by a superposition of the form \begin{equation} \sum_{k}\ket{O_{k}}\ket{U_{k}}, \end{equation} the squared norms of the states $\ket{U_{k}}$ give the relative probabilities for the observer to find herself in the states $\ket{O_{k}}$. \end{assumption}
\section{Memory erasure} Consider a future machine observer who backs up its memory every day. It will reset its memory to the last backed up state when it learns about an impending disaster. The memory is also reset pseudo-randomly from macrostates that do not contain any information about a disaster. The fraction of such macrostates from which a resetting is done in the next clock cycle is $q$. Let's focus on the sector of the multiverse where at the time of a memory backup the observer is in some macrostate $\ket{O_j}$. The normalized state of the universe can be formally denoted as: \begin{equation}\label{psi0} \ket{\psi\haak{0}}=\ket{O_{j}}\ket{U_{j}}. \end{equation} This state then evolves to become at some clock cycle at time $t$, a state of the general form: \begin{equation}\label{psit} \ket{\psi\haak{t;j}}=\sum_{k}\ket{O_{k}}\ket{\tilde{U}_{k;j}}. \end{equation} Here the $j$ indicates that the information contained in the macrostate $\ket{O_{j}}$ has been stored. In the superposition described by \eqref{psit}, the observer in each of the macrostates $\ket{O_{k}}$ knows whether or not it will reset its memory in the next clock cycle. We split the summation in \eqref{psit} into three parts: \begin{equation} \ket{\psi\haak{t;j}}=\sum_{k_{1}}\ket{O_{k_{1}}}\ket{\tilde{U}_{k_{1};j}}+\sum_{k_{2}}\ket{O_{k_{2}}}\ket{\tilde{U}_{k_{2};j}} + \sum_{k_{3}}\ket{O_{k_{3}}}\ket{\tilde{U}_{k_{3};j}}, \end{equation} where the summation over $k_{1}$ is over those values for which $\ket{O_{k_{1}}}$ will reset its memory because of a disaster, the summation over $k_{2}$ is over the states for which memory resetting is triggered by the pseudo-random generator, and the summation over $k_{3}$ is over states in which no memory resetting will happen at the next clock cycle.
Suppose that the probability for the observer to learn about an impending disaster during a clock cycle is $p$. The squared norm of the summation over $k_{1}$ is then $p$, and the combined squared norm of the summations over $k_{2}$ and $k_{3}$ is $\haak{1-p}$. Since a fraction $q$ of the macrostates that don't contain any information about a disaster will do a pseudo-random memory resetting, the squared norm of the summation over $k_{2}$ is $\haak{1-p}q$. The probability for the observer to reset its memory is thus: \begin{equation}\label{reset} P_{\text{reset}} = p + \haak{1-p}q. \end{equation} The normalized wavefunction describing the sector of the multiverse containing the observer with its reset memory is given by: \begin{equation}\label{resetst} \ket{\psi\haak{t;j}}=\frac{1}{\sqrt{P_{\text{reset}}}}\ket{O_{j}}\rhaak{\sum_{k_{1}}\ket{\tilde{U}'_{k_{1};j}}+\sum_{k_{2}}\ket{\tilde{U}'_{k_{2};j}}}, \end{equation} where the primes indicate that due to the dumping of the memory, the state of the rest of the universe has been modified. Since this process is unitary, it follows that upon making new observations, the observer with the reset memory will learn about a disaster coming its way with a probability of: \begin{equation}\label{dis} P_{\text{dis}} = \frac{p}{p + \haak{1-p}q}. \end{equation} The probability for the observer before memory resetting to ultimately learn of the disaster after the memory resetting is $P_{\text{reset}}P_{\text{dis}} = p$, so the memory resetting doesn't appear to have had any effect. Moreover, the above formulas for $P_{\text{reset}}$ and $P_{\text{dis}}$ are also valid in a purely classical setting. However, while one can give the probabilities a trivial classical interpretation, according to quantum physics, the outcome of the measurement by the observer of the state described by \eqref{resetst} is not pre-determined.
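Since the bookkeeping behind \eqref{reset} and \eqref{dis} is purely classical, it can be checked numerically. The following sketch is our own illustrative check, not part of the paper's argument; the parameter values are arbitrary assumptions.

```python
# Illustrative numerical check (not from the paper) of the probability
# bookkeeping: P_reset = p + (1-p)q and P_dis = p / P_reset.
import random

def simulate(p, q, n_trials=200_000, seed=1):
    """Simulate one clock cycle of the reset rule over many trials."""
    rng = random.Random(seed)
    resets = 0           # cycles in which the memory is reset
    disaster_resets = 0  # resets that were triggered by a disaster
    for _ in range(n_trials):
        disaster = rng.random() < p             # disaster learned this cycle
        reset = disaster or (rng.random() < q)  # disaster forces a reset
        if reset:
            resets += 1
            if disaster:
                disaster_resets += 1
    # empirical P_reset, and empirical P_dis as the fraction of resets
    # that ultimately face the disaster
    return resets / n_trials, disaster_resets / resets

p, q = 0.01, 0.2
p_reset, p_dis = simulate(p, q)
```

The product $P_{\text{reset}}P_{\text{dis}}$ recovers $p$, matching the observation above that the resetting has no net classical effect, and for $p\ll q$ the empirical $P_{\text{dis}}$ approaches $p/q$.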
The only way to escape this conclusion is to assume that the stored state to which the memory is reset is always different in the sectors in which a disaster has happened and in which it has not happened. This means that one has to assume that the state \eqref{psi0} can only evolve into a state where a disaster is certain to happen at the clock cycle at time $t$ or into a state in which it is certain that no disaster will happen at that time.
It is certainly the case that the different sectors of the multiverse where a disaster strikes and where it doesn't strike in the future will usually already be very different some time before. Therefore, it is reasonable to expect that, due to decoherence, the information about the coming disaster has already affected the exact wavefunction of the observer. Nevertheless, this doesn't necessarily have to happen, and even if it did, it is unreasonable to assume that the macrostate of the observer must then be affected, as that would mean that the observer's state of consciousness would necessarily have to differ in the two sectors.
\section{Discussion} Assuming the validity of the MWI, we are forced to accept that by resetting the memory to a previous state, the reason why the memory was reset is no longer determined. In the limit $p\ll q$ the probabilities for memory resetting \eqref{reset} and for having to face a disaster after memory resetting \eqref{dis}, reduce to $P_{\text{reset}}=q$ and $P_{\text{dis}}=p/q$. The observer facing disaster can thus be almost sure to escape the disaster by doing a memory resetting. The observer who learns of a disaster after a memory resetting should think that he would not have faced the disaster had he not reset his memory.
Of course, none of this is directly verifiable to the observer, as the observer only has access to information present in the sector of the multiverse where he resides. Although we cannot reset our memories, we can often choose to test or not to test for possible disasters. If the MWI is true, then there is no benefit in testing for impending disasters against which nothing can be done. After learning about the impending disaster all we can do is sit and wait for the disaster to happen, while we could be almost sure that we would not have to endure the coming disaster had we not tested for it in the first place. It was the detection of the impending disaster that trapped us in the wrong sector of the multiverse.
But perhaps when we forget something, this is equivalent to the memory resetting scenario discussed in this article. This depends on whether or not the lost memory has affected our consciousness. So, if we watch a recording of a soccer match played a long time ago, the outcome is undetermined, not just if we are watching the match for the first time and never read about the outcome, but perhaps also if we've seen the match before and forgot about the outcome.
\end{document}
\begin{document}
\title{Superconducting metamaterials for waveguide quantum electrodynamics}
\author{Mohammad~Mirhosseini}
\author{Eunjong~Kim} \author{Vinicius~S.~Ferreira} \author{Mahmoud~Kalaee} \author{Alp~Sipahigil} \author{Andrew~J.~Keller} \affiliation{Kavli Nanoscience Institute and Thomas J. Watson, Sr., Laboratory of Applied Physics, California Institute of Technology, Pasadena, California 91125, USA.} \affiliation{Institute for Quantum Information and Matter, California Institute of Technology, Pasadena, California 91125, USA.}
\author{Oskar~Painter} \email{opainter@caltech.edu} \homepage{http://copilot.caltech.edu} \affiliation{Kavli Nanoscience Institute and Thomas J. Watson, Sr., Laboratory of Applied Physics, California Institute of Technology, Pasadena, California 91125, USA.} \affiliation{Institute for Quantum Information and Matter, California Institute of Technology, Pasadena, California 91125, USA.}
\date{\today}
\begin{abstract} {The embedding of tunable quantum emitters in a photonic bandgap structure enables the control of dissipative and dispersive interactions between emitters and their photonic bath. Operation in the transmission band, outside the gap, allows for studying waveguide quantum electrodynamics in the slow-light regime. Alternatively, tuning the emitter into the bandgap results in finite range emitter-emitter interactions via bound photonic states. Here we couple a transmon qubit to a superconducting metamaterial with a deep sub-wavelength lattice constant ($\lambda/60$). The metamaterial is formed by periodically loading a transmission line with compact, low loss, low disorder lumped element microwave resonators. We probe the coherent and dissipative dynamics of the system by measuring the Lamb shift and the change in the lifetime of the transmon qubit. Tuning the qubit frequency in the vicinity of a band-edge with a group index of $n_g = 450$, we observe an anomalous Lamb shift of $10$~MHz accompanied by a $24$-fold enhancement in the qubit lifetime. In addition, we demonstrate selective enhancement and inhibition of spontaneous emission of different transmon transitions, which provide simultaneous access to long-lived metastable qubit states and states strongly coupled to propagating waveguide modes.
} \end{abstract} \maketitle
Cavity quantum electrodynamics (QED) studies the interaction of an atom with a single electromagnetic mode of a high-finesse cavity with a discrete spectrum \cite{Raimond:2006wd,Reiserer:2015en}. In this canonical setting, a large photon-atom coupling is achieved by repeated interaction of the atom with a single photon bouncing many times between the cavity mirrors. Recently, there has been much interest in achieving strong light-matter interaction in a cavity-free system such as a waveguide. Waveguide QED refers to a system where a chain of atoms are coupled to a common optical channel with a continuum of electromagnetic modes over a large bandwidth. Slow-light photonic crystal waveguides are of particular interest in waveguide QED because the reduced group velocity near a bandgap preferentially amplifies the desired radiation of the atoms into the waveguide modes \cite{Yao:2009gj,Goban:2015dr,Calajo:2016fz}. Moreover, in this configuration an interesting paradigm can be achieved by placing the resonance frequency of the atom inside the bandgap of the waveguide \cite{Bykov:1975gp,Yablonovitch:1987eb, John:1990cn, Kofman:1994gi, Hood:2016ic}. In this case the atom cannot radiate into the waveguide but the evanescent field surrounding it gives rise to a photonic bound state \cite{John:1990cn}. The interaction of such localized bound states has been proposed for realizing tunable spin-exchange interaction between atoms in a chain \cite{Munro:2017ba, Douglas:2015hd}, and also for realizing effective non-local interactions between photons \cite{Shahmoon:2016kv,Douglas:2016kr}.
While achieving efficient waveguide coupling in the optical regime requires the challenging task of interfacing atoms or atomic-like systems with nanoscale dielectric structures~\cite{Vetch2010,Yu2014,Javadi2015,Bhaskar2017}, superconducting circuits provide an entirely different platform for studying the physics of light-matter interaction in the microwave regime \cite{Blais:2004kn}. Development of the field of circuit QED has enabled fabrication of fast and tunable qubits with long coherence times \cite{Koch:2007gz,Barends:2013kz,Chen:2014cwa}. Moreover, strong coupling is readily achieved in this platform due to the deep sub-wavelength transverse confinement of photons attainable in microwave waveguides and the large electrical dipole of superconducting qubits \cite{Wallraff:2004dy}. Microwave waveguides with strong dispersion, even ``bandgaps'' in frequency, can also be simply realized by periodically modulating the geometry of a coplanar transmission line~\cite{Pozar:1998wp}. Such an approach was recently demonstrated in a pioneering experiment by Liu and Houck~\cite{Liu:2016ic}, whereby a qubit was coupled to the localized photonic state within the bandgap of a modulated coplanar waveguide (CPW). Satisfying the Bragg condition in a periodically modulated waveguide requires a lattice constant on the order of the wavelength \cite{Joannopoulos:2011tg}, however, which translates to a device size of approximately a few centimeters for complete confinement of the evanescent fields in the frequency range suitable for microwave qubits. Such a restriction significantly limits the scaling in this approach, both in qubit number and qubit connectivity.
An alternative approach for tailoring dispersion in the microwave domain is to take advantage of the metamaterial concept. Metamaterials are composite structures with sub-wavelength components which are designed to provide an effective electromagnetic response \cite{Smith2000,Itoh:2006uw}. Since the early microwave work, the electromagnetic metamaterial concept has been expanded and extensively studied across a broad range of classical optical sciences~\cite{Koschny2017,Alu2017,Chen2016,Genevet2017}; however, their role in quantum optics has remained relatively unexplored, at least in part due to the lossy nature of many sub-wavelength components. Improvements in design and fabrication of low-loss superconducting circuit components in circuit QED offer a new prospect for utilizing microwave metamaterials in quantum applications. Indeed, high quality-factor superconducting components such as resonators can be readily fabricated on a chip \cite{Goppl:2008iu, Megrant:2012cd}, and such elements have been used as a tool for achieving phase-matching in near quantum-limited traveling wave amplifiers \cite{OBrien:2014fc,Macklin:2015ek,White:2015bb} and for tailoring qubit interactions in a multimode cavity QED architecture \cite{McKay:2015hn}.
\begin{figure}
\caption{\textbf{Microwave metamaterial waveguide.} \textbf{a}, Dispersion relation of a CPW loaded with a periodic array of microwave resonators. The dashed line shows the dispersion relation of the waveguide without the resonators. Inset: circuit diagram for a unit cell of the periodic structure. \textbf{b}, Scanning electron microscope (SEM) image of the fabricated capacitively coupled microwave resonator with a wire width of 500 nm. The resonator region is false-colored in purple, the waveguide central conductor and the ground plane are colored green, and the coupling capacitor is shown in orange. We have used pairs of identical resonators symmetrically placed on the two sides of the transmission line to preserve the symmetry of the structure. \textbf{c}, Transmission measurement for the realized metamaterial waveguide made from 9 unit cells of resonator pairs with a wire width of 1 $\mu$m, repeated with a lattice constant of $d = 350\, \mu$m. The blue curve depicts the experimental data and the red curve shows the lumped-element model fit to the data.}
\label{fig:Lin}
\end{figure}
In this paper, we utilize an array of coupled lumped-element microwave resonators to form a compact bandgap waveguide with a deep sub-wavelength lattice constant ($\lambda/60$) based on the metamaterial concept. In addition to a compact footprint, such structures can exhibit highly nonlinear band dispersion surrounding the bandgap, leading to exceptionally strong confinement of localized intra-gap photon states. We present the design and fabrication of such a metamaterial waveguide, and characterize the resulting waveguide dispersion and bandgap properties via interaction with a tunable superconducting transmon qubit. We measure the Lamb shift and lifetime of the qubit in the bandgap and its vicinity, demonstrating the anomalous Lamb shift of the fundamental qubit transition as well as selective inhibition and enhancement of spontaneous emission for the first two excited states of the transmon qubit.
\begin{figure*}
\caption{\textbf{Disorder effects and qubit-waveguide coupling.} \textbf{a}, Calculated localization length for a metamaterial waveguide with structural disorder and resonator loss are shown as blue dots. The waveguide parameters are determined from the fit to a lumped element model with resonator loss to the transmission data in Fig.\,\ref{fig:Lin}. Numerical simulation has been performed for $N=100$ unit cells, averaged over $10^5$ randomly realized values of the resonance frequency $\omega_{0}$, with the standard deviation $\delta\omega_0/\omega_0 = 0.5\%$. The red curve outside the gap is an analytic model based on Ref.\,\cite{HernandezHerrejon:2010ef}. \textbf{b}, SEM image of the fabricated qubit-waveguide system. The metamaterial waveguide (gray) consists of 9 periods of the resonator unit cell. The waveguide is capacitively coupled to an external CPW (red) for reflective read-out. Bottom left inset: The transmon qubit is capacitively coupled to the resonator at the end of the array. The Z drive is used to tune the qubit resonance frequency by controlling the external flux bias in the superconducting quantum interference device (SQUID) loop. The XY drive is used to coherently excite the qubit. Top right inset: capacitively coupled microwave resonator. \textbf{c}, Calculated local density of states (LDOS) at the qubit position for a metamaterial waveguide with a length of 9 unit cells and open boundary conditions. The band-edges for an infinite structure are marked with dashed red lines.}
\label{fig:QBWGSEM}
\end{figure*}
We begin by considering the circuit model of a CPW that is periodically loaded with microwave resonators as shown in the inset to Fig.\,\ref{fig:Lin}a. The Lagrangian for this system can be constructed as a function of the node fluxes of the resonator and waveguide sections ${{\Phi}^b_n}$ and ${{\Phi}^a_n}$ \cite{Devoret:1995vn}. Assuming periodic boundary conditions and applying the rotating wave approximation, we derive the Hamiltonian for this system and find the eigenstates and energies to be (see App.~\ref{App:A}),
\begin{align}\label{Eq:ResDisp_rwa} &\omega_{\pm,k} = \frac{1}{2} \left[ \left(\Omega_k + \omega_0 \right) \pm \sqrt{{\left( \Omega_k - \omega_0 \right)}^2 + 4 g_k^2} \right], \\ &\hat{\alpha}_{\pm,k} = \frac{(\omega_{\pm,k} - \omega_0)}{\sqrt{ {(\omega_{\pm,k} - \omega_0)}^2 + g_k^2}}\hat{a}_{k} + \frac{g_k}{\sqrt{ {(\omega_{\pm,k} - \omega_0)}^2 + g_k^2}}\hat{b}_{k}, \end{align}
\noindent where $\hat{a}_k$ and $\hat{b}_k$ describe the momentum-space annihilation operators for the bare waveguide and bare resonator sections, the index $k$ denotes the wavevector, and the parameters $\Omega_k$, ${\omega_0}$, and $g_k$ quantify the frequency of traveling modes of the bare waveguide, the resonance frequency of the microwave resonators, and the coupling rate between resonator and waveguide modes, respectively. The operators $\hat{\alpha}_{\pm,k}$ represent quasi-particle solutions of the composite waveguide, where far from the bandgap the quasi-particle is primarily composed of the bare waveguide mode, while in the vicinity of $\omega_0$ most of its energy is confined in the microwave resonators.
Figure\,\ref{fig:Lin}a depicts the numerically calculated energy bands $\omega_{\pm,k}$ as a function of the wavevector $k$. It is evident that the dispersion has the form of an avoided crossing between the energy bands of the bare waveguide and the uncoupled resonators. For small gap sizes, the midgap frequency is close to the resonance frequency of uncoupled resonators $\omega_0$, and unlike the case of a periodically modulated waveguide, there is no fundamental relation tying the midgap frequency to the lattice constant in this case. The form of the band structure near the higher cut-off frequency $\omega_{c+}$ can be approximated as a quadratic function $(\omega -\omega_{c+}) \propto k^2$, whereas the band structure near the lower band-edge $\omega_{c-}$ is inversely proportional to the square of the wavenumber $(\omega -\omega_{c-}) \propto 1/k^2$. The analysis above has been presented for resonators which are capacitively coupled to a waveguide in a parallel geometry; a similar band structure can also be achieved using series inductive coupling of resonators (see App.~\ref{App:A}).
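As a concrete illustration of this avoided-crossing dispersion, the sketch below evaluates $\omega_{\pm,k}$ from Eq.~\eqref{Eq:ResDisp_rwa} for a linearly dispersing bare waveguide, $\Omega_k = v|k|$, and locates the band-edges. The parameter values are illustrative assumptions, not the fitted parameters of the fabricated device.

```python
# Sketch: evaluate the two quasi-particle bands omega_{+/-,k} of the
# avoided-crossing dispersion relation. Parameter values are illustrative.
import numpy as np

def bands(Omega_k, omega_0, g_k):
    """Upper and lower bands for a bare waveguide dispersion Omega_k."""
    s = np.sqrt((Omega_k - omega_0) ** 2 + 4.0 * g_k ** 2)
    return 0.5 * (Omega_k + omega_0 + s), 0.5 * (Omega_k + omega_0 - s)

omega_0, v, g = 5.83, 2.0, 0.9        # resonator freq., bare slope, coupling
k = np.linspace(0.01, np.pi, 2001)    # wavevector in units of 1/d
w_plus, w_minus = bands(v * k, omega_0, g)

# The local splitting sqrt((Omega_k - omega_0)^2 + 4 g^2) is at least 2g,
# so the bands never touch; the gap edges are the extrema of the two bands.
gap_lower_edge = w_minus.max()
gap_upper_edge = w_plus.min()
```

With these parameters the gap straddles $\omega_0$, consistent with the statement that the midgap frequency sits near the bare resonator frequency rather than being tied to the lattice constant.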
A coplanar microwave resonator is often realized by terminating a short segment of a coplanar transmission line with a length set to an integer multiple of $\lambda/4$, where $\lambda$ is the wavelength corresponding to the fundamental resonance frequency \cite{Pozar:1998wp,Goppl:2008iu,Gao:2008td}. However, it is possible to significantly reduce the footprint of a resonator by using components that mimic the behavior of lumped elements. We have used the design presented in Ref.\,\cite{Zhou:2003wf} to realize resonators in the frequency range of $6$-$10$~GHz. This design provides compact resonators by placing interdigital capacitors at the anti-nodes of the charge waves and double spiral coils near the peak of the current waves at the fundamental frequency (see Fig.\,\ref{fig:Lin}b). Further, the symmetry of the geometry results in the suppression of the second harmonic frequency and thus the elimination of an undesired bandgap at twice the fundamental resonance frequency of the band-gap waveguide.
We fabricate individual resonator pairs using an electron-beam deposited $120$~nm Al film, patterned via lift-off, on a high resistivity silicon wafer substrate of thickness $500$~$\mu$m (see Ref.~\cite{Keller2017} for further details of fabrication techniques). In this work we have made a periodic array of 9 resonator pairs with a wire width of 1 $\mu$m and coupled them to a CPW in a periodic fashion with a lattice constant of $350$~$\mu$m to realize a metamaterial waveguide. The resonators are arranged in identical pairs placed on opposite sides of the central waveguide conductor to preserve the symmetry of the waveguide. Figure\,\ref{fig:Lin}c shows the measured power transmission through such a finite-length metamaterial waveguide. Here $50$-$\Omega$ CPW segments, galvanically coupled to the metamaterial waveguide, are used at the input and output ports. We find a midgap frequency of $5.83$~GHz for the structure, and a gap frequency span of $1.82$~GHz. Using the simulated value of the effective refractive index of $2.54$, the midgap frequency gives a lattice constant-to-wavelength ratio of $d/\lambda \approx 1/60$.
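The quoted $d/\lambda$ ratio follows directly from the measured midgap frequency and the simulated effective index; a quick arithmetic cross-check (ours, not from the text) using the stated values:

```python
# Arithmetic cross-check of the lattice-constant-to-wavelength ratio,
# using the values quoted in the text.
c = 2.998e8        # vacuum speed of light (m/s)
n_eff = 2.54       # simulated effective refractive index
f_mid = 5.83e9     # measured midgap frequency (Hz)
d = 350e-6         # lattice constant (m)

lam = c / (n_eff * f_mid)  # guided wavelength at the midgap frequency
inv_ratio = lam / d        # wavelength in units of the lattice constant
```

This gives $\lambda/d \approx 58$, consistent with the quoted $d/\lambda \approx 1/60$.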
Propagation of electromagnetic fields at frequencies within the bandgap is exponentially attenuated, with a localization length set by the imaginary part of the wavenumber. In addition, statistical variations in the electromagnetic properties of the periodic structure cause random scattering of the traveling waves in the transmission band. Such random scattering can lead to complete trapping of propagating photons in the presence of strong disorder, and to exponential extinction for weak disorder, a phenomenon known as Anderson localization of light \cite{Wiersma:1997fp}. We have measured a standard deviation of $0.3\%$ in the resonance frequencies of the fabricated lumped-element resonators. Figure\,\ref{fig:QBWGSEM}a shows the localization length as a function of frequency, calculated from numerical simulations incorporating the independently measured disorder and loss of the resonators in the metamaterial waveguide (see App.~\ref{App:C} and \ref{App:D} for further details). Near the edges of the bandgap the localization length from disorder dominates that from loss, rapidly approaching zero at the lower band-edge, where the group index is largest, and maintaining a large value ($6\times 10^3$ periods) at the higher band-edge, where the group index is smaller. Similarly, the localization length inside the gap is inversely proportional to the curvature of the energy bands \cite{Douglas:2015hd}. Owing to the divergence (in the loss-less case) of the lower band curvature for the waveguide studied here, the localization length inside the gap approaches zero near the lower band-edge frequency as well. These results indicate that, even with practical limitations on disorder and loss in such metamaterial waveguides, a range of photon length scales of nearly four orders of magnitude should be accessible for frequencies within a few hundred MHz of the band-edges.
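The qualitative scaling of localization length with disorder strength can be illustrated with a toy model that is independent of the circuit-level simulations described in the appendices: a one-dimensional tight-binding chain with random on-site energies, for which the inverse localization length is the Lyapunov exponent of a product of random transfer matrices. This is a generic sketch, not the calculation behind Fig.\,\ref{fig:QBWGSEM}a.

```python
import math
import numpy as np

def localization_length(W, E=0.0, N=100_000, seed=0):
    """Localization length (in lattice units) of a 1D Anderson chain.

    Iterates psi_{n+1} = (E - eps_n) psi_n - psi_{n-1} with on-site
    energies eps_n uniform in [-W/2, W/2]; the average log growth rate
    of the transfer-matrix product is the inverse localization length.
    """
    rng = np.random.default_rng(seed)
    a, b = 1.0, 0.0        # (psi_{n+1}, psi_n) two-component state
    log_norm = 0.0
    for eps in rng.uniform(-W / 2, W / 2, size=N):
        a, b = (E - eps) * a - b, a    # apply one transfer matrix
        norm = math.hypot(a, b)
        log_norm += math.log(norm)     # accumulate growth rate
        a /= norm                      # renormalize to avoid overflow
        b /= norm
    gamma = log_norm / N               # Lyapunov exponent
    return 1.0 / gamma

# Stronger disorder gives a shorter localization length.
print(localization_length(W=1.0), localization_length(W=2.0))
```

For weak disorder the localization length falls off roughly as $1/W^2$ at the band center, mirroring the trend in the main text: small fabrication disorder still yields localization lengths of many lattice periods.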
\begin{figure}\label{fig:ExpBandGap}
\end{figure}
To further probe the electromagnetic properties of the metamaterial waveguide we couple it to a superconducting qubit. In this work we use a transmon qubit \cite{Koch:2007gz, Barends:2013kz} with fundamental resonance frequency $\nu_{ge} = 7.9$~GHz and a ratio of Josephson energy to single-electron charging energy of $E_J/E_C \approx 100$ at zero flux bias (details of our qubit fabrication methods can be found in Ref.~\cite{Keller2017}). Figure\,\ref{fig:QBWGSEM}b shows the geometry of the device, in which the qubit is capacitively coupled to one end of the waveguide while the other end is capacitively coupled to a $50$-$\Omega$ CPW transmission line. This geometry gives rise to narrow individual modes in the transmission band of the metamaterial, which can be used for dispersive qubit state read-out \cite{Wallraff:2005in} from reflection measurements at the $50$-$\Omega$ CPW input port (see Fig.\,\ref{fig:QBWGSEM}b). Within the bandgap the qubit is self-dressed by virtual photons that are emitted and re-absorbed due to the lack of escape channels for the radiation. Near the band-edges surrounding the bandgap, where the LDOS varies rapidly with frequency, this can result in a large anomalous Lamb shift of the dressed qubit frequency \cite{John:1991es,Kofman:1994gi}. To observe this effect, we tune the qubit frequency using a flux bias \cite{Barends:2013kz} and obtain the frequency shift by subtracting the measured frequency from the expected bare qubit frequency as a function of flux bias. Figure\,\ref{fig:ExpBandGap}a shows the measured frequency shift as a function of tuning. The qubit frequency is evidently repelled from the band-edges on both sides, a result of the asymmetric density of states near the cut-off frequencies. The measured frequency shift is approximately $10$~MHz at the band-edges ($0.2\%$ of the qubit frequency), in excellent agreement with the circuit theory model (see App.~\ref{App:E}).
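The sign of the shift on either side of the gap follows from second-order perturbation theory; as a sketch, assuming a frequency-dependent coupling $g(\omega)$ and waveguide LDOS $\rho(\omega)$, the Lamb shift takes the form
\[
\delta\omega_{ge} \;=\; \mathcal{P}\!\int \mathrm{d}\omega\;
\frac{g^{2}(\omega)\,\rho(\omega)}{\omega_{ge}-\omega} ,
\]
so for a qubit inside the gap the lower-band modes ($\omega<\omega_{ge}$) push the frequency up while the upper-band modes push it down; near either band-edge the rapidly growing $\rho(\omega)$ on the nearer side dominates the integral, repelling the qubit frequency from that edge.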
\begin{figure}
\caption{\textbf{State-selective enhancement and inhibition of radiative decay.} \textbf{a}, Measurement with the $g$-$e$ transition tuned into the bandgap and the $f$-$e$ transition in the lower transmission band. \textbf{b}, Measurement with the $g$-$e$ transition tuned into the upper transmission band and the $f$-$e$ transition in the bandgap. To measure the $f$-$e$ lifetime, we initially excite the third energy level $\ket{f}$ via a two-photon $\pi$ pulse at frequency $\omega_{gf}/2$. After the population decays for a selected time interval, the population in $\ket{f}$ is mapped to the ground state using a second $\pi$ pulse. Finally, the ground state population is read out using the dispersive shift of a nearby band-edge resonance of the waveguide. $g$-$e$ ($f$-$e$) transition data are shown as red squares (blue circles).}
\label{fig:ThreeLevels}
\end{figure}
Another signature of the qubit-waveguide interaction is the change in the rate of spontaneous emission of the qubit. Tuning the qubit into the bandgap changes the localization length of the waveguide photonic state that dresses the qubit. Since the finite waveguide is connected to an external port which acts as a dissipative environment, the change in localization length $\ell(\omega)$ is accompanied by a change in the radiative lifetime of the qubit $T_\text{rad}(\omega)\propto e^{2x/\ell(\omega)}$, where $x$ is the total length of the waveguide. Figure\,\ref{fig:ExpBandGap}b shows the measured qubit lifetime ($T_1$) as a function of its frequency in the bandgap. It is evident that the qubit lifetime drastically increases inside the bandgap, where spontaneous emission into the output port is greatly suppressed due to the reduced localization length of the photon bound state. Deep within the bandgap one observes the appearance of multiple narrow spectral features in the measured frequency dependence of the qubit lifetime. These features, attributable to parasitic ``box'' modes of our chip packaging, highlight the ability of the metamaterial waveguide to enable effectively-dissipation-free probing of the qubit's environment over a broad spectral range ($> 1$~GHz). As the qubit frequency approaches the band-edges, the lifetime is sharply reduced because of the increase in the localization length of the waveguide modes. The slope of the lifetime curve at the band-edge can be shown to be directly proportional to the group delay, $\left|{\partial T_\text{rad}}/{\partial \omega}\right| = T_\text{rad} \tau_\text{delay}$ (see App.~\ref{App:E}). We observe a $24$-fold enhancement in the lifetime of the qubit near the upper band-edge, corresponding to a maximum group index of $n_g=450$ right at the band-edge.
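The relation between the lifetime slope and the group delay quoted above can be seen in a few lines; this sketch writes the bandgap attenuation constant as $\kappa(\omega)=1/\ell(\omega)$ and identifies $\tau_\text{delay} \equiv 2x\,|\partial\kappa/\partial\omega|$ with the group delay at the band-edge (the full treatment is given in App.~\ref{App:E}):
\[
T_\text{rad}(\omega) \propto e^{2\kappa(\omega)x}
\quad\Longrightarrow\quad
\frac{\partial T_\text{rad}}{\partial\omega}
= 2x\,\frac{\partial\kappa}{\partial\omega}\,T_\text{rad}(\omega),
\qquad
\left|\frac{\partial T_\text{rad}}{\partial\omega}\right|
= T_\text{rad}\,\tau_\text{delay}.
\]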
In addition to radiative decay into the output channel, loss in the waveguide resonators also contributes to the qubit's excited-state decay. Using a low-power probe in the single-photon regime we have measured intrinsic $Q$-factors of $(7.2 \pm 0.4) \times 10^4$ for the individual waveguide modes between $4.6$ and $7.4$~GHz. The solid line in Fig.\,\ref{fig:ExpBandGap}b shows a fitted theoretical curve that takes into account the loss in the waveguide along with a phenomenological intrinsic lifetime of the qubit. While the measured lifetime near the upper band is in excellent agreement with the theoretical model, the data near the lower band show a significant departure from it. We attribute this departure to the presence of one or more spurious resonances near the lower band-edge. Possible candidates for such spurious modes include the asymmetric ``slotline'' modes of the metamaterial waveguide, which are weakly coupled to our symmetrically grounded CPW line but may couple to the qubit. Further study of the spectrum of these modes, and of possible methods for suppressing them using cross-over connections \cite{Chen:2014he}, will be a topic of future work.
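For scale, an intrinsic quality factor maps to a photon (and hence a limiting qubit) lifetime via $\tau = Q_i/\omega$; a quick estimate using the measured $Q_i \approx 7.2\times10^4$ at a representative mode frequency of $6$~GHz (the choice of frequency here is illustrative, not a fitted value):

```python
import math

Q_i = 7.2e4     # measured intrinsic quality factor of the waveguide modes
f = 6.0e9       # representative mode frequency (Hz); illustrative choice

tau = Q_i / (2 * math.pi * f)   # energy decay time tau = Q / omega
print(f"photon lifetime ~ {tau * 1e6:.1f} us")
```

This microsecond-scale photon lifetime sets the ceiling that waveguide loss imposes on the qubit lifetime inside the bandgap.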
The sharp variation in the photonic LDOS near the metamaterial waveguide band-edges may also be used to engineer the multi-level dynamics of the qubit. A transmon qubit, by construction, is a nonlinear quantum oscillator and thus has a multilevel energy spectrum. In particular, a third energy level ($\ket{f}$) exists at the frequency $\omega_{gf} = 2\omega_{ge} - E_C/\hbar$. Although the $g$-$f$ transition is forbidden by selection rules, the $f$-$e$ transition is allowed and has a dipole moment $\sqrt{2}$ times larger than that of the fundamental transition \cite{Koch:2007gz}. This is reminiscent of the scaling of transition amplitudes in a harmonic oscillator, and for a uniform density of states in the electromagnetic bath it results in a second-transition lifetime half that of the fundamental transition. Nonetheless, the sharply varying density of states in the metamaterial can strongly suppress or enhance the spontaneous emission of each transition. Figure\,\ref{fig:ThreeLevels} shows the measured lifetimes of the two transitions for two different spectral configurations. In the first scenario, we enhance the lifetime ratio $T_{eg}/T_{fe}$ by situating the fundamental transition frequency inside the bandgap while the second transition lies inside the lower transmission band. The situation is reversed in the second configuration, where the fundamental frequency is tuned into the upper energy band while the second transition lies inside the gap. In our fabricated qubit, the second transition is $290$~MHz lower than the fundamental transition frequency at zero flux bias, which allows a large lifetime contrast in both configurations.
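The quoted level structure is consistent with the standard transmon asymptotics $\hbar\omega_{ge} \approx \sqrt{8E_JE_C} - E_C$ with anharmonicity $\approx -E_C$ \cite{Koch:2007gz}; a sketch using $E_C/h = 0.29$~GHz inferred from the quoted $290$~MHz splitting and $E_J/E_C = 100$ (parameter values read off from the text, not fitted):

```python
import math

E_C = 0.29          # charging energy / h, in GHz (from the 290 MHz anharmonicity)
E_J = 100 * E_C     # Josephson energy / h, using E_J/E_C ~ 100 at zero flux bias

# Leading-order transmon expressions (Koch et al., Phys. Rev. A 76, 042319):
nu_ge = math.sqrt(8 * E_J * E_C) - E_C   # g-e transition frequency (GHz)
nu_fe = nu_ge - E_C                      # f-e transition sits E_C/h lower

print(f"nu_ge ~ {nu_ge:.2f} GHz, nu_fe ~ {nu_fe:.2f} GHz")
```

With these two parameters alone, the fundamental transition comes out close to the quoted $7.9$~GHz, and the $f$-$e$ transition lands $290$~MHz below it, as in the measurement.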
Compact, low-loss, low-disorder superconducting metamaterials, as presented here, can help realize more scalable superconducting quantum circuits with higher levels of complexity and functionality in several regards. They offer a method for densely packing qubits -- both in spatial and frequency dimensions -- with isolation from the environment by operation in forbidden bandgaps, and yet with controllable connectivity achieved via bound qubit-waveguide polaritons~\cite{Douglas:2015hd,Calajo:2016fz}. Moreover, the ability to selectively modify the transition lifetimes provides simultaneous access to long-lived metastable qubit states as well as short-lived states strongly coupled to waveguide modes. This approach realizes an effective $\Lambda$-type level structure for the transmon, and can be used to create state-dependent bound state localization lengths, quantum nonlinear media for propagating microwave photons~\cite{Nikoghosyan2010,Douglas:2016kr,Albrecht2017}, or, as recently demonstrated, to realize spin-photon entanglement and high-bandwidth itinerant single microwave photon detection~\cite{Inomata:2016jc, Besse:2017wh}. Combined, these attributes provide a unique platform for studying the many-body physics of quantum photonic matter~\cite{Greentree:2006jg,Hartmann:2006kv,Houck2012,Noh2016}.
\section*{Acknowledgments}
We would like to thank Paul Dieterle, Ana Asenjo Garcia and Darrick Chang for fruitful discussions regarding waveguide QED. This work was supported by the AFOSR MURI Quantum Photonic Matter (grant 16RT0696), the AFOSR MURI Wiring Quantum Networks with Mechanical Transducers (grant FA9550-15-1-0015), the Institute for Quantum Information and Matter, an NSF Physics Frontiers Center (grant PHY-1125565) with support of the Gordon and Betty Moore Foundation, and the Kavli Nanoscience Institute at Caltech. M.M. (A.J.K., A.S.) gratefully acknowledges support from a KNI (IQIM) Postdoctoral Fellowship.
\begin{thebibliography}{60} \makeatletter \providecommand \@ifxundefined [1]{
\@ifx{#1\undefined} } \providecommand \@ifnum [1]{
\ifnum #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi } \providecommand \@ifx [1]{
\ifx #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode
`\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{http://dx.doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty
\bibitem [{\citenamefont {Raimond}\ and\ \citenamefont
{Haroche}(2006)}]{Raimond:2006wd}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont
{Raimond}}\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Haroche}},\ }\href@noop {} {\emph {\bibinfo {title} {{Exploring the
quantum}}}}\ (\bibinfo {publisher} {Oxford University Press},\ \bibinfo
{address} {Oxford},\ \bibinfo {year} {2006})\BibitemShut {NoStop} \bibitem [{\citenamefont {Reiserer}\ and\ \citenamefont
{Rempe}(2015)}]{Reiserer:2015en}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Reiserer}}\ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Rempe}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Reviews
of Modern Physics}\ }\textbf {\bibinfo {volume} {87}},\ \bibinfo {pages}
{1379} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yao}\ \emph {et~al.}(2009)\citenamefont {Yao},
\citenamefont {Van~Vlack}, \citenamefont {Reza}, \citenamefont {Patterson},
\citenamefont {Dignam},\ and\ \citenamefont {Hughes}}]{Yao:2009gj}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Yao}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Van~Vlack}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Reza}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Patterson}}, \bibinfo {author}
{\bibfnamefont {M.~M.}\ \bibnamefont {Dignam}}, \ and\ \bibinfo {author}
{\bibfnamefont {S.}~\bibnamefont {Hughes}},\ }\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. B}\ }\textbf {\bibinfo {volume}
{80}} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Goban}\ \emph {et~al.}(2015)\citenamefont {Goban},
\citenamefont {Hung}, \citenamefont {Hood}, \citenamefont {Yu}, \citenamefont
{Muniz}, \citenamefont {Painter},\ and\ \citenamefont
{Kimble}}]{Goban:2015dr}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Goban}}, \bibinfo {author} {\bibfnamefont {C.~L.}\ \bibnamefont {Hung}},
\bibinfo {author} {\bibfnamefont {J.~D.}\ \bibnamefont {Hood}}, \bibinfo
{author} {\bibfnamefont {S.~P.}\ \bibnamefont {Yu}}, \bibinfo {author}
{\bibfnamefont {J.~A.}\ \bibnamefont {Muniz}}, \bibinfo {author}
{\bibfnamefont {O.}~\bibnamefont {Painter}}, \ and\ \bibinfo {author}
{\bibfnamefont {H.~J.}\ \bibnamefont {Kimble}},\ }\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {115}},\ \bibinfo {pages} {063601} (\bibinfo {year}
{2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Calaj{\'o}}\ \emph {et~al.}(2016)\citenamefont
{Calaj{\'o}}, \citenamefont {Ciccarello}, \citenamefont {Chang},\ and\
\citenamefont {Rabl}}]{Calajo:2016fz}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Calaj{\'o}}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Ciccarello}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Chang}}, \
and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Rabl}},\ }\href@noop
{} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf
{\bibinfo {volume} {93}},\ \bibinfo {pages} {033833} (\bibinfo {year}
{2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bykov}(1975)}]{Bykov:1975gp}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {V.~P.}\ \bibnamefont
{Bykov}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Soviet
Journal of Quantum Electronics}\ }\textbf {\bibinfo {volume} {4}},\ \bibinfo
{pages} {861} (\bibinfo {year} {1975})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yablonovitch}(1987)}]{Yablonovitch:1987eb}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Yablonovitch}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {58}},\ \bibinfo {pages}
{2059} (\bibinfo {year} {1987})}\BibitemShut {NoStop} \bibitem [{\citenamefont {John}\ and\ \citenamefont
{Wang}(1990)}]{John:1990cn}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{John}}\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Wang}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\
}\textbf {\bibinfo {volume} {64}},\ \bibinfo {pages} {2418} (\bibinfo {year}
{1990})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kofman}\ \emph {et~al.}(1994)\citenamefont {Kofman},
\citenamefont {Kurizki},\ and\ \citenamefont {Sherman}}]{Kofman:1994gi}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~G.}\ \bibnamefont
{Kofman}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Kurizki}}, \
and\ \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Sherman}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Journal of Modern
Optics}\ }\textbf {\bibinfo {volume} {41}},\ \bibinfo {pages} {353} (\bibinfo
{year} {1994})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hood}\ \emph {et~al.}(2016)\citenamefont {Hood},
\citenamefont {Goban}, \citenamefont {Asenjo-Garcia}, \citenamefont {Lu},
\citenamefont {Yu}, \citenamefont {Chang},\ and\ \citenamefont
{Kimble}}]{Hood:2016ic}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~D.}\ \bibnamefont
{Hood}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Goban}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Asenjo-Garcia}}, \bibinfo
{author} {\bibfnamefont {M.}~\bibnamefont {Lu}}, \bibinfo {author}
{\bibfnamefont {S.-P.}\ \bibnamefont {Yu}}, \bibinfo {author} {\bibfnamefont
{D.~E.}\ \bibnamefont {Chang}}, \ and\ \bibinfo {author} {\bibfnamefont
{H.~J.}\ \bibnamefont {Kimble}},\ }\href@noop {} {\bibfield {journal}
{\bibinfo {journal} {Proceedings of the National Academy of Sciences}\
}\textbf {\bibinfo {volume} {113}},\ \bibinfo {pages} {10507} (\bibinfo
{year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Munro}\ \emph {et~al.}(2017)\citenamefont {Munro},
\citenamefont {Kwek},\ and\ \citenamefont {Chang}}]{Munro:2017ba}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Munro}}, \bibinfo {author} {\bibfnamefont {L.~C.}\ \bibnamefont {Kwek}}, \
and\ \bibinfo {author} {\bibfnamefont {D.~E.}\ \bibnamefont {Chang}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {New Journal of
Physics}\ }\textbf {\bibinfo {volume} {19}},\ \bibinfo {pages} {083018}
(\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Douglas}\ \emph {et~al.}(2015)\citenamefont
{Douglas}, \citenamefont {Habibian}, \citenamefont {Hung}, \citenamefont
{Gorshkov}, \citenamefont {Kimble},\ and\ \citenamefont
{Chang}}]{Douglas:2015hd}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~S.}\ \bibnamefont
{Douglas}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Habibian}},
\bibinfo {author} {\bibfnamefont {C.~L.}\ \bibnamefont {Hung}}, \bibinfo
{author} {\bibfnamefont {A.~V.}\ \bibnamefont {Gorshkov}}, \bibinfo {author}
{\bibfnamefont {H.~J.}\ \bibnamefont {Kimble}}, \ and\ \bibinfo {author}
{\bibfnamefont {D.~E.}\ \bibnamefont {Chang}},\ }\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Nature Photonics}\ }\textbf {\bibinfo
{volume} {9}},\ \bibinfo {pages} {326} (\bibinfo {year} {2015})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Shahmoon}\ \emph {et~al.}(2016)\citenamefont
{Shahmoon}, \citenamefont {Kurizki}, \citenamefont {Stimming}, \citenamefont
{Mazets},\ and\ \citenamefont {Gri{\v s}ins}}]{Shahmoon:2016kv}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Shahmoon}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Kurizki}},
\bibinfo {author} {\bibfnamefont {H.~P.}\ \bibnamefont {Stimming}}, \bibinfo
{author} {\bibfnamefont {I.}~\bibnamefont {Mazets}}, \ and\ \bibinfo {author}
{\bibfnamefont {P.}~\bibnamefont {Gri{\v s}ins}},\ }\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Optica}\ }\textbf {\bibinfo {volume} {3}},\
\bibinfo {pages} {725} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Douglas}\ \emph {et~al.}(2016)\citenamefont
{Douglas}, \citenamefont {Caneva},\ and\ \citenamefont
{Chang}}]{Douglas:2016kr}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~S.}\ \bibnamefont
{Douglas}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Caneva}}, \
and\ \bibinfo {author} {\bibfnamefont {D.~E.}\ \bibnamefont {Chang}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. X}\
}\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages} {031017} (\bibinfo {year}
{2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Vetsch}\ \emph {et~al.}(2010)\citenamefont {Vetsch},
\citenamefont {Reitz}, \citenamefont {Sagu\'{e}}, \citenamefont {Schmidt},
\citenamefont {Dawkins},\ and\ \citenamefont {Rauschenbeutel}}]{Vetch2010}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont
{Vetsch}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Reitz}},
\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Sagu\'{e}}}, \bibinfo
{author} {\bibfnamefont {R.}~\bibnamefont {Schmidt}}, \bibinfo {author}
{\bibfnamefont {S.~T.}\ \bibnamefont {Dawkins}}, \ and\ \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Rauschenbeutel}},\ }\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf
{\bibinfo {volume} {104}},\ \bibinfo {pages} {203603} (\bibinfo {year}
{2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yu}\ \emph {et~al.}(2014)\citenamefont {Yu},
\citenamefont {Hood}, \citenamefont {Muniz}, \citenamefont {Martin},
\citenamefont {Norte}, \citenamefont {Hung}, \citenamefont {Meenehan},
\citenamefont {Cohen}, \citenamefont {Painter},\ and\ \citenamefont
{Kimble}}]{Yu2014}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.-P.}\ \bibnamefont
{Yu}}, \bibinfo {author} {\bibfnamefont {J.~D.}\ \bibnamefont {Hood}},
\bibinfo {author} {\bibfnamefont {J.~A.}\ \bibnamefont {Muniz}}, \bibinfo
{author} {\bibfnamefont {M.~J.}\ \bibnamefont {Martin}}, \bibinfo {author}
{\bibfnamefont {R.}~\bibnamefont {Norte}}, \bibinfo {author} {\bibfnamefont
{C.-L.}\ \bibnamefont {Hung}}, \bibinfo {author} {\bibfnamefont {S.~M.}\
\bibnamefont {Meenehan}}, \bibinfo {author} {\bibfnamefont {J.~D.}\
\bibnamefont {Cohen}}, \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont
{Painter}}, \ and\ \bibinfo {author} {\bibfnamefont {H.~J.}\ \bibnamefont
{Kimble}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {App.
Phys. Lett.}\ }\textbf {\bibinfo {volume} {104}},\ \bibinfo {pages} {111103}
(\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Javadi}\ \emph {et~al.}(2015)\citenamefont {Javadi},
\citenamefont {S\"{o}llner}, \citenamefont {Arcari}, \citenamefont {Hansen},
\citenamefont {Midolo}, \citenamefont {Mahmoodian}, \citenamefont
{Kir\v{s}ansk\.{e}}, \citenamefont {Pregnolato}, \citenamefont {Lee},
\citenamefont {Song}, \citenamefont {Stobbe1},\ and\ \citenamefont
{Lodahl}}]{Javadi2015}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Javadi}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {S\"{o}llner}},
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Arcari}}, \bibinfo
{author} {\bibfnamefont {S.~L.}\ \bibnamefont {Hansen}}, \bibinfo {author}
{\bibfnamefont {L.}~\bibnamefont {Midolo}}, \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Mahmoodian}}, \bibinfo {author} {\bibfnamefont
{G.}~\bibnamefont {Kir\v{s}ansk\.{e}}}, \bibinfo {author} {\bibfnamefont
{T.}~\bibnamefont {Pregnolato}}, \bibinfo {author} {\bibfnamefont
{E.}~\bibnamefont {Lee}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Song}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Stobbe1}}, \
and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Lodahl}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature Comm.}\
}\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages} {8655} (\bibinfo {year}
{2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bhaskar}\ \emph {et~al.}(2017)\citenamefont
{Bhaskar}, \citenamefont {Sukachev}, \citenamefont {Sipahigil}, \citenamefont
{Evans}, \citenamefont {Burek}, \citenamefont {Nguyen}, \citenamefont
{Rogers}, \citenamefont {Siyushev}, \citenamefont {Metsch}, \citenamefont
{Park}, \citenamefont {Jelezko}, \citenamefont {Lon\v{c}ar},\ and\
\citenamefont {Lukin}}]{Bhaskar2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~K.}\ \bibnamefont
{Bhaskar}}, \bibinfo {author} {\bibfnamefont {D.~D.}\ \bibnamefont
{Sukachev}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Sipahigil}},
\bibinfo {author} {\bibfnamefont {R.~E.}\ \bibnamefont {Evans}}, \bibinfo
{author} {\bibfnamefont {M.~J.}\ \bibnamefont {Burek}}, \bibinfo {author}
{\bibfnamefont {C.~T.}\ \bibnamefont {Nguyen}}, \bibinfo {author}
{\bibfnamefont {L.~J.}\ \bibnamefont {Rogers}}, \bibinfo {author}
{\bibfnamefont {P.}~\bibnamefont {Siyushev}}, \bibinfo {author}
{\bibfnamefont {M.~H.}\ \bibnamefont {Metsch}}, \bibinfo {author}
{\bibfnamefont {H.}~\bibnamefont {Park}}, \bibinfo {author} {\bibfnamefont
{F.}~\bibnamefont {Jelezko}}, \bibinfo {author} {\bibfnamefont
{M.}~\bibnamefont {Lon\v{c}ar}}, \ and\ \bibinfo {author} {\bibfnamefont
{M.~D.}\ \bibnamefont {Lukin}},\ }\href@noop {} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {118}},\
\bibinfo {pages} {223603} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Blais}\ \emph {et~al.}(2004)\citenamefont {Blais},
\citenamefont {Huang}, \citenamefont {Wallraff}, \citenamefont {Girvin},\
and\ \citenamefont {Schoelkopf}}]{Blais:2004kn}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Blais}}, \bibinfo {author} {\bibfnamefont {R.-S.}\ \bibnamefont {Huang}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Wallraff}}, \bibinfo
{author} {\bibfnamefont {S.~M.}\ \bibnamefont {Girvin}}, \ and\ \bibinfo
{author} {\bibfnamefont {R.~J.}\ \bibnamefont {Schoelkopf}},\ }\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo
{volume} {69}},\ \bibinfo {pages} {062320} (\bibinfo {year}
{2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Koch}\ \emph {et~al.}(2007)\citenamefont {Koch},
\citenamefont {Yu}, \citenamefont {Gambetta}, \citenamefont {Houck},
\citenamefont {Schuster}, \citenamefont {Majer}, \citenamefont {Blais},
\citenamefont {Devoret}, \citenamefont {Girvin},\ and\ \citenamefont
{Schoelkopf}}]{Koch:2007gz}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Koch}}, \bibinfo {author} {\bibfnamefont {T.~M.}\ \bibnamefont {Yu}},
\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Gambetta}}, \bibinfo
{author} {\bibfnamefont {A.~A.}\ \bibnamefont {Houck}}, \bibinfo {author}
{\bibfnamefont {D.~I.}\ \bibnamefont {Schuster}}, \bibinfo {author}
{\bibfnamefont {J.}~\bibnamefont {Majer}}, \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Blais}}, \bibinfo {author} {\bibfnamefont {M.~H.}\
\bibnamefont {Devoret}}, \bibinfo {author} {\bibfnamefont {S.~M.}\
\bibnamefont {Girvin}}, \ and\ \bibinfo {author} {\bibfnamefont {R.~J.}\
\bibnamefont {Schoelkopf}},\ }\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {76}},\ \bibinfo
{pages} {042319} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Barends}\ \emph {et~al.}(2013)\citenamefont
{Barends}, \citenamefont {Kelly}, \citenamefont {Megrant}, \citenamefont
{Sank}, \citenamefont {Jeffrey}, \citenamefont {Chen}, \citenamefont {Yin},
\citenamefont {Chiaro}, \citenamefont {Mutus}, \citenamefont {Neill},
\citenamefont {O{\textquoteright}Malley}, \citenamefont {Roushan},
\citenamefont {Wenner}, \citenamefont {White}, \citenamefont {Cleland},\ and\
\citenamefont {Martinis}}]{Barends:2013kz}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont
{Barends}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Kelly}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Megrant}}, \bibinfo
{author} {\bibfnamefont {D.}~\bibnamefont {Sank}}, \bibinfo {author}
{\bibfnamefont {E.}~\bibnamefont {Jeffrey}}, \bibinfo {author} {\bibfnamefont
{Y.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Yin}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Chiaro}},
\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Mutus}}, \bibinfo
{author} {\bibfnamefont {C.}~\bibnamefont {Neill}}, \bibinfo {author}
{\bibfnamefont {P.}~\bibnamefont {O{\textquoteright}Malley}}, \bibinfo
{author} {\bibfnamefont {P.}~\bibnamefont {Roushan}}, \bibinfo {author}
{\bibfnamefont {J.}~\bibnamefont {Wenner}}, \bibinfo {author} {\bibfnamefont
{T.~C.}\ \bibnamefont {White}}, \bibinfo {author} {\bibfnamefont {A.~N.}\
\bibnamefont {Cleland}}, \ and\ \bibinfo {author} {\bibfnamefont {J.~M.}\
\bibnamefont {Martinis}},\ }\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {111}},\ \bibinfo
{pages} {080502} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Chen}\ \emph
{et~al.}(2014{\natexlab{a}})\citenamefont {Chen}, \citenamefont {Neill},
\citenamefont {Roushan}, \citenamefont {Leung}, \citenamefont {Fang},
\citenamefont {Barends}, \citenamefont {Kelly}, \citenamefont {Campbell},
\citenamefont {Chen}, \citenamefont {Chiaro}, \citenamefont {Dunsworth},
\citenamefont {Jeffrey}, \citenamefont {Megrant}, \citenamefont {Mutus},
\citenamefont {O{\textquoteright}Malley}, \citenamefont {Quintana},
\citenamefont {Sank}, \citenamefont {Vainsencher}, \citenamefont {Wenner},
\citenamefont {White}, \citenamefont {Geller}, \citenamefont {Cleland},\ and\
\citenamefont {Martinis}}]{Chen:2014cwa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Chen}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Neill}},
\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Roushan}}, \bibinfo
{author} {\bibfnamefont {N.}~\bibnamefont {Leung}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Fang}}, \bibinfo {author} {\bibfnamefont
{R.}~\bibnamefont {Barends}}, \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Kelly}}, \bibinfo {author} {\bibfnamefont
{B.}~\bibnamefont {Campbell}}, \bibinfo {author} {\bibfnamefont
{Z.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Chiaro}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Dunsworth}},
\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Jeffrey}}, \bibinfo
{author} {\bibfnamefont {A.}~\bibnamefont {Megrant}}, \bibinfo {author}
{\bibfnamefont {J.~Y.}\ \bibnamefont {Mutus}}, \bibinfo {author}
{\bibfnamefont {P.~J.~J.}\ \bibnamefont {O{\textquoteright}Malley}}, \bibinfo
{author} {\bibfnamefont {C.~M.}\ \bibnamefont {Quintana}}, \bibinfo {author}
{\bibfnamefont {D.}~\bibnamefont {Sank}}, \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Vainsencher}}, \bibinfo {author} {\bibfnamefont
{J.}~\bibnamefont {Wenner}}, \bibinfo {author} {\bibfnamefont {T.~C.}\
\bibnamefont {White}}, \bibinfo {author} {\bibfnamefont {M.~R.}\ \bibnamefont
{Geller}}, \bibinfo {author} {\bibfnamefont {A.~N.}\ \bibnamefont {Cleland}},
\ and\ \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Martinis}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\
}\textbf {\bibinfo {volume} {113}},\ \bibinfo {pages} {220502} (\bibinfo
{year} {2014}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wallraff}\ \emph {et~al.}(2004)\citenamefont
{Wallraff}, \citenamefont {Schuster}, \citenamefont {Blais}, \citenamefont
{Frunzio}, \citenamefont {Huang}, \citenamefont {Majer}, \citenamefont
{Kumar}, \citenamefont {Girvin},\ and\ \citenamefont
{Schoelkopf}}]{Wallraff:2004dy}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Wallraff}}, \bibinfo {author} {\bibfnamefont {D.~I.}\ \bibnamefont
{Schuster}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Blais}},
\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Frunzio}}, \bibinfo
{author} {\bibfnamefont {R.~S.}\ \bibnamefont {Huang}}, \bibinfo {author}
{\bibfnamefont {J.}~\bibnamefont {Majer}}, \bibinfo {author} {\bibfnamefont
{S.}~\bibnamefont {Kumar}}, \bibinfo {author} {\bibfnamefont {S.~M.}\
\bibnamefont {Girvin}}, \ and\ \bibinfo {author} {\bibfnamefont {R.~J.}\
\bibnamefont {Schoelkopf}},\ }\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {Nature}\ }\textbf {\bibinfo {volume} {431}},\ \bibinfo {pages}
{162} (\bibinfo {year} {2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Pozar}(1998)}]{Pozar:1998wp}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~M.}\ \bibnamefont
{Pozar}},\ }\href@noop {} {\emph {\bibinfo {title} {{Microwave Engineering,
4th Edition}}}}\ (\bibinfo {publisher} {John Wiley},\ \bibinfo {year}
{1998})\BibitemShut {NoStop} \bibitem [{\citenamefont {Liu}\ and\ \citenamefont {Houck}(2017)}]{Liu:2016ic}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Liu}}\ and\ \bibinfo {author} {\bibfnamefont {A.~A.}\ \bibnamefont
{Houck}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature Physics}\ }\textbf
{\bibinfo {volume} {13}},\ \bibinfo {pages} {48} (\bibinfo {year}
{2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Joannopoulos}\ \emph {et~al.}(2011)\citenamefont
{Joannopoulos}, \citenamefont {Johnson}, \citenamefont {Winn},\ and\
\citenamefont {Meade}}]{Joannopoulos:2011tg}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~D.}\ \bibnamefont
{Joannopoulos}}, \bibinfo {author} {\bibfnamefont {S.~G.}\ \bibnamefont
{Johnson}}, \bibinfo {author} {\bibfnamefont {J.~N.}\ \bibnamefont {Winn}}, \
and\ \bibinfo {author} {\bibfnamefont {R.~D.}\ \bibnamefont {Meade}},\
}\href@noop {} {\enquote {\bibinfo {title} {{Photonic crystals: molding the
flow of light}},}\ } (\bibinfo {year} {2011})\BibitemShut {NoStop} \bibitem [{\citenamefont {Smith}\ \emph {et~al.}(2000)\citenamefont {Smith},
\citenamefont {Padilla}, \citenamefont {Vier}, \citenamefont {Nemat-Nasser},\
and\ \citenamefont {Schultz}}]{Smith2000}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~R.}\ \bibnamefont
{Smith}}, \bibinfo {author} {\bibfnamefont {W.~J.}\ \bibnamefont {Padilla}},
\bibinfo {author} {\bibfnamefont {D.~C.}\ \bibnamefont {Vier}}, \bibinfo
{author} {\bibfnamefont {S.~C.}\ \bibnamefont {Nemat-Nasser}}, \ and\
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Schultz}},\ }\href@noop
{} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf
{\bibinfo {volume} {84}},\ \bibinfo {pages} {4184} (\bibinfo {year}
{2000})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Itoh}(2006)}]{Itoh:2006uw}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Itoh}},\ }\href@noop {} {\emph {\bibinfo {title} {{Electromagnetic
metamaterials: transmission line theory and microwave applications (the
engineering approach)}}}}\ (\bibinfo {publisher} {New Jersey: A John Wiley
{\&} Sons Inc},\ \bibinfo {year} {2006})\BibitemShut {NoStop} \bibitem [{\citenamefont {Koschny}\ \emph {et~al.}(2017)\citenamefont
{Koschny}, \citenamefont {Soukoulis},\ and\ \citenamefont
{Wegener}}]{Koschny2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Koschny}}, \bibinfo {author} {\bibfnamefont {C.~M.}\ \bibnamefont
{Soukoulis}}, \ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Wegener}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Journal of Optics}\ }\textbf {\bibinfo {volume} {19}},\ \bibinfo {pages}
{084005} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Al\`{u}}\ and\ \citenamefont
{Engheta}(2017)}]{Alu2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Al\`{u}}}\ and\ \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont
{Engheta}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Journal of Optics}\ }\textbf {\bibinfo {volume} {19}},\ \bibinfo {pages}
{084008} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Chen}\ \emph {et~al.}(2016)\citenamefont {Chen},
\citenamefont {Taylor},\ and\ \citenamefont {Yu}}]{Chen2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {H.-T.}\ \bibnamefont
{Chen}}, \bibinfo {author} {\bibfnamefont {A.~J.}\ \bibnamefont {Taylor}}, \
and\ \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Yu}},\ }\href@noop
{} {\bibfield {journal} {\bibinfo {journal} {Rep. Prog. Phys.}\ }\textbf
{\bibinfo {volume} {79}},\ \bibinfo {pages} {076401} (\bibinfo {year}
{2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Genevet}\ \emph {et~al.}(2017)\citenamefont
{Genevet}, \citenamefont {Capasso}, \citenamefont {Aieta}, \citenamefont
{Khorasaninejad},\ and\ \citenamefont {Devlin}}]{Genevet2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Genevet}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Capasso}},
\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Aieta}}, \bibinfo
{author} {\bibfnamefont {M.}~\bibnamefont {Khorasaninejad}}, \ and\ \bibinfo
{author} {\bibfnamefont {R.}~\bibnamefont {Devlin}},\ }\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Optica}\ }\textbf {\bibinfo
{volume} {4}},\ \bibinfo {pages} {139} (\bibinfo {year} {2017})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {G{\"o}ppl}\ \emph {et~al.}(2008)\citenamefont
{G{\"o}ppl}, \citenamefont {Fragner}, \citenamefont {Baur}, \citenamefont
{Bianchetti}, \citenamefont {Filipp}, \citenamefont {Fink}, \citenamefont
{Leek}, \citenamefont {Puebla}, \citenamefont {Steffen},\ and\ \citenamefont
{Wallraff}}]{Goppl:2008iu}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{G{\"o}ppl}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Fragner}},
\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Baur}}, \bibinfo {author}
{\bibfnamefont {R.}~\bibnamefont {Bianchetti}}, \bibinfo {author}
{\bibfnamefont {S.}~\bibnamefont {Filipp}}, \bibinfo {author} {\bibfnamefont
{J.~M.}\ \bibnamefont {Fink}}, \bibinfo {author} {\bibfnamefont {P.~J.}\
\bibnamefont {Leek}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Puebla}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Steffen}}, \
and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Wallraff}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Journal of Applied
Physics}\ }\textbf {\bibinfo {volume} {104}},\ \bibinfo {pages} {113904}
(\bibinfo {year} {2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Megrant}\ \emph {et~al.}(2012)\citenamefont
{Megrant}, \citenamefont {Neill}, \citenamefont {Barends},\ and\
\citenamefont {Chiaro}}]{Megrant:2012cd}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Megrant}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Neill}},
\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Barends}}, \ and\
\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Chiaro}},\ }\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {App. Phys. Lett.}\ }\textbf
{\bibinfo {volume} {100}},\ \bibinfo {pages} {113510} (\bibinfo {year}
{2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {O{\textquoteright}Brien}\ \emph
{et~al.}(2014)\citenamefont {O{\textquoteright}Brien}, \citenamefont
{Macklin}, \citenamefont {Siddiqi},\ and\ \citenamefont
{Zhang}}]{OBrien:2014fc}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{O{\textquoteright}Brien}}, \bibinfo {author} {\bibfnamefont
{C.}~\bibnamefont {Macklin}}, \bibinfo {author} {\bibfnamefont
{I.}~\bibnamefont {Siddiqi}}, \ and\ \bibinfo {author} {\bibfnamefont
{X.}~\bibnamefont {Zhang}},\ }\href@noop {} {\bibfield {journal} {\bibinfo
{journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {113}},\ \bibinfo
{pages} {157001} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Macklin}\ \emph {et~al.}(2015)\citenamefont
{Macklin}, \citenamefont {O{\textquoteright}Brien}, \citenamefont {Hover},
\citenamefont {Schwartz}, \citenamefont {Bolkhovsky}, \citenamefont {Zhang},
\citenamefont {Oliver},\ and\ \citenamefont {Siddiqi}}]{Macklin:2015ek}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Macklin}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{O{\textquoteright}Brien}}, \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Hover}}, \bibinfo {author} {\bibfnamefont {M.~E.}\
\bibnamefont {Schwartz}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont
{Bolkhovsky}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Zhang}},
\bibinfo {author} {\bibfnamefont {W.~D.}\ \bibnamefont {Oliver}}, \ and\
\bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Siddiqi}},\ }\href@noop
{} {\bibfield {journal} {\bibinfo {journal} {Science}\ }\textbf {\bibinfo
{volume} {350}},\ \bibinfo {pages} {307} (\bibinfo {year}
{2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {White}\ \emph {et~al.}(2015)\citenamefont {White},
\citenamefont {Mutus}, \citenamefont {Hoi}, \citenamefont {Barends},
\citenamefont {Campbell}, \citenamefont {Chen}, \citenamefont {Chen},
\citenamefont {Chiaro}, \citenamefont {Dunsworth}, \citenamefont {Jeffrey},
\citenamefont {Kelly}, \citenamefont {Megrant}, \citenamefont {Neill},
\citenamefont {O{\textquoteright}Malley}, \citenamefont {Roushan},
\citenamefont {Sank}, \citenamefont {Vainsencher}, \citenamefont {Wenner},
\citenamefont {Chaudhuri}, \citenamefont {Gao},\ and\ \citenamefont
{Martinis}}]{White:2015bb}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.~C.}\ \bibnamefont
{White}}, \bibinfo {author} {\bibfnamefont {J.~Y.}\ \bibnamefont {Mutus}},
\bibinfo {author} {\bibfnamefont {I.~C.}\ \bibnamefont {Hoi}}, \bibinfo
{author} {\bibfnamefont {R.}~\bibnamefont {Barends}}, \bibinfo {author}
{\bibfnamefont {B.}~\bibnamefont {Campbell}}, \bibinfo {author}
{\bibfnamefont {Y.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont
{Z.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Chiaro}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Dunsworth}},
\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Jeffrey}}, \bibinfo
{author} {\bibfnamefont {J.}~\bibnamefont {Kelly}}, \bibinfo {author}
{\bibfnamefont {A.}~\bibnamefont {Megrant}}, \bibinfo {author} {\bibfnamefont
{C.}~\bibnamefont {Neill}}, \bibinfo {author} {\bibfnamefont {P.~J.~J.}\
\bibnamefont {O{\textquoteright}Malley}}, \bibinfo {author} {\bibfnamefont
{P.}~\bibnamefont {Roushan}}, \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Sank}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Vainsencher}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Wenner}},
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Chaudhuri}}, \bibinfo
{author} {\bibfnamefont {J.}~\bibnamefont {Gao}}, \ and\ \bibinfo {author}
{\bibfnamefont {J.~M.}\ \bibnamefont {Martinis}},\ }\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {App. Phys. Lett.}\ }\textbf {\bibinfo
{volume} {106}} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {McKay}\ \emph {et~al.}(2015)\citenamefont {McKay},
\citenamefont {Naik}, \citenamefont {Reinhold}, \citenamefont {Bishop},\ and\
\citenamefont {Schuster}}]{McKay:2015hn}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~C.}\ \bibnamefont
{McKay}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Naik}},
\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Reinhold}}, \bibinfo
{author} {\bibfnamefont {L.~S.}\ \bibnamefont {Bishop}}, \ and\ \bibinfo
{author} {\bibfnamefont {D.~I.}\ \bibnamefont {Schuster}},\ }\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf
{\bibinfo {volume} {114}},\ \bibinfo {pages} {080501} (\bibinfo {year}
{2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hern{\'a}ndez-Herrej{\'o}n}\ \emph
{et~al.}(2010)\citenamefont {Hern{\'a}ndez-Herrej{\'o}n}, \citenamefont
{Izrailev},\ and\ \citenamefont {Tessieri}}]{HernandezHerrejon:2010ef}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~C.}\ \bibnamefont
{Hern{\'a}ndez-Herrej{\'o}n}}, \bibinfo {author} {\bibfnamefont {F.~M.}\
\bibnamefont {Izrailev}}, \ and\ \bibinfo {author} {\bibfnamefont
{L.}~\bibnamefont {Tessieri}},\ }\href@noop {} {\bibfield {journal}
{\bibinfo {journal} {Journal of Physics A: Mathematical and Theoretical}\
}\textbf {\bibinfo {volume} {43}},\ \bibinfo {pages} {425004} (\bibinfo
{year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Devoret}(1995)}]{Devoret:1995vn}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~H.}\ \bibnamefont
{Devoret}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Les
Houches Lectures}\ } (\bibinfo {year} {1995})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gao}(2008)}]{Gao:2008td}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Gao}},\ }\emph {\bibinfo {title} {{The physics of superconducting microwave
resonators}}},\ \href@noop {} {Ph.D. thesis} (\bibinfo {year}
{2008})\BibitemShut {NoStop} \bibitem [{\citenamefont {Zhou}\ \emph {et~al.}(2003)\citenamefont {Zhou},
\citenamefont {Lancaster},\ and\ \citenamefont {Huang}}]{Zhou:2003wf}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Zhou}}, \bibinfo {author} {\bibfnamefont {M.~J.}\ \bibnamefont {Lancaster}},
\ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Huang}},\ }in\
\href@noop {} {\emph {\bibinfo {booktitle} {Microwave Symposium Digest, 2003
IEEE MTT-S International.}}}\ (\bibinfo {year} {2003})\BibitemShut {NoStop} \bibitem [{\citenamefont {Keller}\ \emph {et~al.}(2017)\citenamefont {Keller},
\citenamefont {Dieterle}, \citenamefont {Fang}, \citenamefont {Berger},
\citenamefont {Fink},\ and\ \citenamefont {Painter}}]{Keller2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~J.}\ \bibnamefont
{Keller}}, \bibinfo {author} {\bibfnamefont {P.~B.}\ \bibnamefont
{Dieterle}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Fang}},
\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Berger}}, \bibinfo
{author} {\bibfnamefont {J.~M.}\ \bibnamefont {Fink}}, \ and\ \bibinfo
{author} {\bibfnamefont {O.}~\bibnamefont {Painter}},\ }\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {App. Phys. Lett.}\ }\textbf
{\bibinfo {volume} {111}},\ \bibinfo {pages} {042603} (\bibinfo {year}
{2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wiersma}\ \emph {et~al.}(1997)\citenamefont
{Wiersma}, \citenamefont {Bartolini}, \citenamefont {Lagendijk},\ and\
\citenamefont {Righini}}]{Wiersma:1997fp}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~S.}\ \bibnamefont
{Wiersma}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Bartolini}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Lagendijk}}, \ and\
\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Righini}},\ }\href@noop
{} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo
{volume} {390}},\ \bibinfo {pages} {671} (\bibinfo {year}
{1997})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wallraff}\ \emph {et~al.}(2005)\citenamefont
{Wallraff}, \citenamefont {Schuster}, \citenamefont {Blais}, \citenamefont
{Frunzio}, \citenamefont {Majer}, \citenamefont {Devoret}, \citenamefont
{Girvin},\ and\ \citenamefont {Schoelkopf}}]{Wallraff:2005in}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Wallraff}}, \bibinfo {author} {\bibfnamefont {D.~I.}\ \bibnamefont
{Schuster}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Blais}},
\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Frunzio}}, \bibinfo
{author} {\bibfnamefont {J.}~\bibnamefont {Majer}}, \bibinfo {author}
{\bibfnamefont {M.~H.}\ \bibnamefont {Devoret}}, \bibinfo {author}
{\bibfnamefont {S.~M.}\ \bibnamefont {Girvin}}, \ and\ \bibinfo {author}
{\bibfnamefont {R.~J.}\ \bibnamefont {Schoelkopf}},\ }\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf
{\bibinfo {volume} {95}},\ \bibinfo {pages} {060501} (\bibinfo {year}
{2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {John}\ and\ \citenamefont
{Wang}(1991)}]{John:1991es}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{John}}\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Wang}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. B}\
}\textbf {\bibinfo {volume} {43}},\ \bibinfo {pages} {12772} (\bibinfo {year}
{1991})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Chen}\ \emph
{et~al.}(2014{\natexlab{b}})\citenamefont {Chen}, \citenamefont {Megrant},
\citenamefont {Kelly}, \citenamefont {Barends}, \citenamefont {Bochmann},
\citenamefont {Chen}, \citenamefont {Chiaro}, \citenamefont {Dunsworth},
\citenamefont {Jeffrey}, \citenamefont {Mutus}, \citenamefont
{O{\textquoteright}Malley}, \citenamefont {Neill}, \citenamefont {Roushan},
\citenamefont {Sank}, \citenamefont {Vainsencher}, \citenamefont {Wenner},
\citenamefont {White}, \citenamefont {Cleland},\ and\ \citenamefont
{Martinis}}]{Chen:2014he}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont
{Chen}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Megrant}},
\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Kelly}}, \bibinfo
{author} {\bibfnamefont {R.}~\bibnamefont {Barends}}, \bibinfo {author}
{\bibfnamefont {J.}~\bibnamefont {Bochmann}}, \bibinfo {author}
{\bibfnamefont {Y.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont
{B.}~\bibnamefont {Chiaro}}, \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Dunsworth}}, \bibinfo {author} {\bibfnamefont
{E.}~\bibnamefont {Jeffrey}}, \bibinfo {author} {\bibfnamefont {J.~Y.}\
\bibnamefont {Mutus}}, \bibinfo {author} {\bibfnamefont {P.~J.~J.}\
\bibnamefont {O{\textquoteright}Malley}}, \bibinfo {author} {\bibfnamefont
{C.}~\bibnamefont {Neill}}, \bibinfo {author} {\bibfnamefont
{P.}~\bibnamefont {Roushan}}, \bibinfo {author} {\bibfnamefont
{D.}~\bibnamefont {Sank}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Vainsencher}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Wenner}},
\bibinfo {author} {\bibfnamefont {T.~C.}\ \bibnamefont {White}}, \bibinfo
{author} {\bibfnamefont {A.~N.}\ \bibnamefont {Cleland}}, \ and\ \bibinfo
{author} {\bibfnamefont {J.~M.}\ \bibnamefont {Martinis}},\ }\href@noop {}
{\bibfield {journal} {\bibinfo {journal} {App. Phys. Lett.}\ }\textbf
{\bibinfo {volume} {104}} (\bibinfo {year} {2014}{\natexlab{b}})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Nikoghosyan}\ and\ \citenamefont
{Fleischhauer}(2010)}]{Nikoghosyan2010}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Nikoghosyan}}\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Fleischhauer}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Phys. Rev. Lett.}\ }\textbf {\bibinfo {volume} {105}},\ \bibinfo {pages}
{013601} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Albrecht}\ \emph {et~al.}(2017)\citenamefont
{Albrecht}, \citenamefont {Caneva},\ and\ \citenamefont
{Chang}}]{Albrecht2017}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Albrecht}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Caneva}}, \
and\ \bibinfo {author} {\bibfnamefont {D.~E.}\ \bibnamefont {Chang}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {New Journal of
Physics}\ }\textbf {\bibinfo {volume} {19}},\ \bibinfo {pages} {115002}
(\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Inomata}\ \emph {et~al.}(2016)\citenamefont
{Inomata}, \citenamefont {Lin}, \citenamefont {Koshino}, \citenamefont
{Oliver}, \citenamefont {Tsai}, \citenamefont {Yamamoto},\ and\ \citenamefont
{Nakamura}}]{Inomata:2016jc}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{Inomata}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Lin}},
\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Koshino}}, \bibinfo
{author} {\bibfnamefont {W.~D.}\ \bibnamefont {Oliver}}, \bibinfo {author}
{\bibfnamefont {J.-S.}\ \bibnamefont {Tsai}}, \bibinfo {author}
{\bibfnamefont {T.}~\bibnamefont {Yamamoto}}, \ and\ \bibinfo {author}
{\bibfnamefont {Y.}~\bibnamefont {Nakamura}},\ }\href@noop {} {\bibfield
{journal} {\bibinfo {journal} {Nature Communications}\ }\textbf {\bibinfo
{volume} {7}},\ \bibinfo {pages} {12303} (\bibinfo {year}
{2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Besse}\ \emph {et~al.}(2017)\citenamefont {Besse},
\citenamefont {Gasparinetti}, \citenamefont {Collodo}, \citenamefont
{Walter}, \citenamefont {Kurpiers}, \citenamefont {Pechal}, \citenamefont
{Eichler},\ and\ \citenamefont {Wallraff}}]{Besse:2017wh}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {J.-C.}\ \bibnamefont
{Besse}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Gasparinetti}},
\bibinfo {author} {\bibfnamefont {M.~C.}\ \bibnamefont {Collodo}}, \bibinfo
{author} {\bibfnamefont {T.}~\bibnamefont {Walter}}, \bibinfo {author}
{\bibfnamefont {P.}~\bibnamefont {Kurpiers}}, \bibinfo {author}
{\bibfnamefont {M.}~\bibnamefont {Pechal}}, \bibinfo {author} {\bibfnamefont
{C.}~\bibnamefont {Eichler}}, \ and\ \bibinfo {author} {\bibfnamefont
{A.}~\bibnamefont {Wallraff}},\ }\href@noop {} {\bibfield {journal}
{\bibinfo {journal} {arXiv:1711.11569}\ } (\bibinfo {year}
{2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Greentree}\ \emph {et~al.}(2006)\citenamefont
{Greentree}, \citenamefont {Tahan}, \citenamefont {Cole},\ and\ \citenamefont
{Hollenberg}}]{Greentree:2006jg}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~D.}\ \bibnamefont
{Greentree}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Tahan}},
\bibinfo {author} {\bibfnamefont {J.~H.}\ \bibnamefont {Cole}}, \ and\
\bibinfo {author} {\bibfnamefont {L.~C.~L.}\ \bibnamefont {Hollenberg}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature Physics}\
}\textbf {\bibinfo {volume} {2}},\ \bibinfo {pages} {856} (\bibinfo {year}
{2006})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hartmann}\ \emph {et~al.}(2006)\citenamefont
{Hartmann}, \citenamefont {Brand{\~a}o},\ and\ \citenamefont
{Plenio}}]{Hartmann:2006kv}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~J.}\ \bibnamefont
{Hartmann}}, \bibinfo {author} {\bibfnamefont {F.~G. S.~L.}\ \bibnamefont
{Brand{\~a}o}}, \ and\ \bibinfo {author} {\bibfnamefont {M.~B.}\ \bibnamefont
{Plenio}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature
Physics}\ }\textbf {\bibinfo {volume} {2}},\ \bibinfo {pages} {849} (\bibinfo
{year} {2006})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Houck}\ \emph {et~al.}(2012)\citenamefont {Houck},
\citenamefont {T\"{u}reci1},\ and\ \citenamefont {Koch}}]{Houck2012}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~A.}\ \bibnamefont
{Houck}}, \bibinfo {author} {\bibfnamefont {H.~E.}\ \bibnamefont
{T\"{u}reci1}}, \ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont
{Koch}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Nature
Physics}\ }\textbf {\bibinfo {volume} {8}},\ \bibinfo {pages} {292} (\bibinfo
{year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Noh}\ and\ \citenamefont
{Angelakis}(2017)}]{Noh2016}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Noh}}\ and\ \bibinfo {author} {\bibfnamefont {D.~G.}\ \bibnamefont
{Angelakis}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Rep. Prog. Phys.}\ }\textbf {\bibinfo {volume} {80}},\ \bibinfo {pages}
{016401} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Karyamapudi}\ and\ \citenamefont
{Hong}(2003)}]{Karyamapudi:iz}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.~M.}\ \bibnamefont
{Karyamapudi}}\ and\ \bibinfo {author} {\bibfnamefont {J.-S.}\ \bibnamefont
{Hong}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {IEEE
MTT-S International Microwave Symposium - IMS 2003}\ }\textbf {\bibinfo
{volume} {3}},\ \bibinfo {pages} {1619} (\bibinfo {year} {2003})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Dossetti-Romero}\ \emph {et~al.}(2004)\citenamefont
{Dossetti-Romero}, \citenamefont {Izrailev},\ and\ \citenamefont
{Krokhin}}]{DossettiRomero:2004ck}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont
{Dossetti-Romero}}, \bibinfo {author} {\bibfnamefont {F.~M.}\ \bibnamefont
{Izrailev}}, \ and\ \bibinfo {author} {\bibfnamefont {A.~A.}\ \bibnamefont
{Krokhin}},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal}
{Physica E: Low-dimensional Systems and Nanostructures}\ }\textbf {\bibinfo
{volume} {25}},\ \bibinfo {pages} {13} (\bibinfo {year} {2004})}\BibitemShut
{NoStop} \bibitem [{\citenamefont {Izrailev}\ \emph {et~al.}(1995)\citenamefont
{Izrailev}, \citenamefont {Kottos},\ and\ \citenamefont
{Tsironis}}]{Izrailev:1995by}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.~M.}\ \bibnamefont
{Izrailev}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Kottos}}, \
and\ \bibinfo {author} {\bibfnamefont {G.~P.}\ \bibnamefont {Tsironis}},\
}\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. B}\
}\textbf {\bibinfo {volume} {52}},\ \bibinfo {pages} {3274} (\bibinfo {year}
{1995})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Dieterle}\ \emph {et~al.}(2016)\citenamefont
{Dieterle}, \citenamefont {Kalaee}, \citenamefont {Fink},\ and\ \citenamefont
{Painter}}]{Dieterle:2016kj}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {P.~B.}\ \bibnamefont
{Dieterle}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Kalaee}},
\bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Fink}}, \ and\
\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Painter}},\ }\href@noop
{} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Applied}\ }\textbf
{\bibinfo {volume} {6}},\ \bibinfo {pages} {014013} (\bibinfo {year}
{2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Underwood}\ \emph {et~al.}(2012)\citenamefont
{Underwood}, \citenamefont {Shanks}, \citenamefont {Koch},\ and\
\citenamefont {Houck}}]{Underwood:2012hx}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~L.}\ \bibnamefont
{Underwood}}, \bibinfo {author} {\bibfnamefont {W.~E.}\ \bibnamefont
{Shanks}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Koch}}, \ and\
\bibinfo {author} {\bibfnamefont {A.~A.}\ \bibnamefont {Houck}},\ }\href@noop
{} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf
{\bibinfo {volume} {86}},\ \bibinfo {pages} {023837} (\bibinfo {year}
{2012})}\BibitemShut {NoStop} \end{thebibliography}
\appendix
\section{Band structure calculation} \label{App:A}
\subsection{Quantization of a periodic resonator-loaded waveguide}
\begin{figure*}
\caption{Circuit diagrams of metamaterial waveguides made from periodic arrays of transmission-line sections loaded with capacitively coupled resonators (top) and inductively coupled resonators (bottom). }
\label{fig:setup-transmission-line}
\end{figure*}
We consider the case of a waveguide that is periodically loaded with microwave resonators. Fig.\,\ref{fig:setup-transmission-line} depicts a unit cell for this configuration. The Lagrangian for this system can be readily written as \cite{Devoret:1995vn} \begin{align}\label{eq:hamiltonian1} {L} =& \sum_n \bigg[ \frac{1}{2} C_0 {[\dot{\Phi}^a_n]}^2 - \frac{[{{\Phi}^a_n-{\Phi}^a_{n-1}}]^2}{2 L_0} \nonumber\\ & + \frac{1}{2} C_r {[\dot{\Phi}^b_n]}^2 + \frac{1}{2} C_k {[\dot{\Phi}^a_n-\dot{\Phi}^b_n]}^2 - \frac{{[{\Phi}^b_n]}^2}{2L_r} \bigg]. \end{align} To find solutions in the form of traveling waves, it is convenient to work with the Fourier transforms of the node fluxes. We adopt the following convention for the discrete Fourier transformation \begin{align} &{\Phi}^{a,b}_\kappa = \frac{1}{{\sqrt{M}}} \sum_{n=-N}^N e^{-i2\pi(\kappa/M)n}\Phi^{a,b}_n ,
\end{align} where $M = 2N+1$ is the total number of periods in the waveguide.
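The Fourier convention above is unitary, so it preserves $\sum_n |\Phi_n|^2$ (Parseval's identity), which is what allows the Lagrangian to retain its quadratic form after the change of variables. The following minimal sketch checks this numerically for a small chain; the number of periods and the flux values are illustrative assumptions, not values from the text.

```python
import cmath
import math

# Discrete Fourier transform with the convention of the text:
# Phi_kappa = (1/sqrt(M)) * sum_{n=-N}^{N} exp(-i 2 pi (kappa/M) n) Phi_n,
# for a chain of M = 2N+1 periods.  N and the fluxes are illustrative.
N = 3
M = 2 * N + 1

def dft(phi):
    """Unitary DFT; sites and momenta are indexed from -N to N."""
    return [
        sum(cmath.exp(-2j * math.pi * kappa * n / M) * phi[n + N]
            for n in range(-N, N + 1)) / math.sqrt(M)
        for kappa in range(-N, N + 1)
    ]

phi = [1.0, 2.0, 0.0, -1.0, 0.5, 3.0, -2.0]  # node fluxes Phi_n, n = -N..N
phi_k = dft(phi)
```

The $\kappa=0$ component reduces to the mean flux scaled by $\sqrt{M}$, and the total weight $\sum_\kappa |\Phi_\kappa|^2$ equals $\sum_n |\Phi_n|^2$.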
Using the Fourier relation, we find the Lagrangian in $k$-space as \begin{align}
{L} =& \sum_\kappa \bigg[ \frac{1}{2}( C_0 +C_k ) {|\dot{\Phi}^a_\kappa|}^2 - \left|1- e^{-i2\pi(\kappa/M)}\right|^2\frac{{|{\Phi}^a_\kappa|}^2}{2L_0} \nonumber\\
& + \frac{1}{2}( C_k +C_r ) {|\dot{\Phi}^b_\kappa|}^2 - \frac{{|{\Phi}^b_\kappa|}^2}{2L_r} -C_k \frac{\dot{\Phi}^b_\kappa\dot{\Phi}^a_{-\kappa} +\dot{\Phi}^b_{-\kappa}\dot{\Phi}^a_{\kappa}}{2} \bigg]. \end{align} To proceed further, we introduce the canonical node charges, defined as $Q^{a,b}_\kappa = \frac{\partial L}{\partial \dot{\Phi}^{a,b}_\kappa} $, and obtain the Hamiltonian of the system via a Legendre transformation. Doing so we find \begin{align}
{H} =& \sum_\kappa \bigg[ \frac{{Q^a_\kappa}{Q^a_{-\kappa}}}{2C_0'} + \left|1- e^{-i2\pi(\kappa/M)}\right|^2\frac{{{\Phi}^a_\kappa}{{\Phi}^a_{-\kappa}}}{2L_0}+ \nonumber\\ &\frac{{Q^b_\kappa}{Q^b_{-\kappa}}}{2 C_r'} + \frac{{{\Phi}^b_\kappa}{{\Phi}^b_{-\kappa}}}{2L_r} +\frac{{Q^a_\kappa}{Q^b_{-\kappa}} +{Q^a_{-\kappa}}{Q^b_{\kappa}}}{2 C_k' } \bigg]. \end{align} Here, we have defined the following quantities \begin{align} C_0' = \frac{C_kC_r + C_kC_0 +C_0C_r }{C_k +C_r},\\ C_r' = \frac{C_kC_r + C_kC_0 +C_0C_r }{C_k +C_0},\\ C_k' = \frac{C_kC_r + C_kC_0 +C_0C_r }{C_k }. \end{align} The canonical commutation relation \mbox{$[{\Phi}^i_{\kappa},Q^j_{-\kappa'}] = i\hbar \delta_{i,j}\delta_{\kappa,\kappa'}$} allows us to define the following annihilation operators as a function of charge and flux operators \begin{align} \hat{a}_\kappa = &\sqrt{\frac{C_0' \Omega_k}{2\hbar}} \left( {\Phi}^a_{\kappa} +\frac{i}{C_0' \Omega_k} Q^a_{\kappa} \right),\\ \hat{b}_\kappa = &\sqrt{\frac{C_r' \omega_0}{2\hbar}} \left( {\Phi}^b_{\kappa} +\frac{i}{C_r' \omega_0} Q^b_{\kappa} \right). \end{align} Here, we have defined the resonance frequency for each mode as \begin{align} &\Omega_k = \sqrt{\frac{4 {\mathrm{sin}}^2(kd/2)}{L_0 C_0'}},\\ &\omega_0 = \frac{1}{\sqrt{L_r C_r'}}, \end{align} where $k = (2\pi\kappa)/(Md)$ is the wavenumber. It is evident that $\Omega_k $ has the expected dispersion relation of a discrete periodic transmission line and $\omega_0 $ is the resonance frequency of the loaded microwave resonators. 
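The mode frequencies $\Omega_k$ and $\omega_0$ follow directly from the modified capacitances. The short sketch below evaluates them for illustrative component values (the capacitances, inductances, and unit-cell length are assumptions, not parameters from the text) and confirms the expected dispersion: $\Omega_k$ vanishes at $k=0$ and is maximal at the band edge $k=\pi/d$.

```python
import math

# Numerical sketch of the mode frequencies; all component values are
# illustrative assumptions, not values from the text.
C0, Ck, Cr = 100e-15, 5e-15, 250e-15   # capacitances (F)
L0, Lr = 2e-9, 2e-9                    # inductances (H)
d = 1.0                                # unit-cell length (arbitrary units)

# Modified capacitances from the Legendre transformation
den = Ck * Cr + Ck * C0 + C0 * Cr
C0p = den / (Ck + Cr)                  # C_0'
Crp = den / (Ck + C0)                  # C_r'

def Omega(k):
    """Discrete transmission-line dispersion Omega_k."""
    return math.sqrt(4 * math.sin(k * d / 2) ** 2 / (L0 * C0p))

omega0 = 1 / math.sqrt(Lr * Crp)       # loaded-resonator frequency
```

At the band edge the dispersion saturates at $2/\sqrt{L_0 C_0'}$, the familiar cutoff of a discrete LC ladder.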
In terms of the operators $\hat{a}_\kappa$ and $\hat{b}_\kappa$ defined above, the Hamiltonian becomes \begin{align}\label{Eq:HamiltonianExact} \hat{H} &= \frac{\hbar}{2} \sum_{k } \bigg[ \Omega_k \left( \hat{a}_k^\dagger\hat{a}_k + \hat{a}_{-k}\hat{a}_{-k}^\dagger \right)+ {\omega_0} \left( \hat{b}^\dagger_k\hat{b}_k + \hat{b}_{-k}\hat{b}_{-k}^\dagger \right)\nonumber \\ & - g_k \left( \hat{b}_{-k} - \hat{b}_{k}^\dagger \right) \left( \hat{a}_{k} - \hat{a}_{-k}^\dagger \right) - g_k \left( \hat{a}^\dagger_{k} - \hat{a}_{-k} \right)\left( \hat{b}^\dagger_{-k} - \hat{b}_{k} \right) \bigg], \end{align} with the coupling coefficient \begin{align} g_k = \frac{\sqrt{{C_0' C_r'}}}{2 C_k'} \sqrt{{\omega_0 \Omega_k}} = \frac{C_k \sqrt{{\omega_0 \Omega_k}}}{2\sqrt{{(C_0+C_k) (C_r+C_k)}}} . \end{align} An alternative structure for coupling microwave resonators is depicted in the bottom panel of Fig.\,\ref{fig:setup-transmission-line}. In this geometry, the coupling is controlled by the inductive element $L_k$. Repeating the analysis above for this case, we find
\begin{align} &\Omega_k = \sqrt{\frac{4 {\mathrm{sin}}^2(kd/2)}{C_0 L_0'}},\\ &\omega_0 = \frac{1}{\sqrt{C_r L_r'}},\\ &g_k = \frac{\sqrt{{L_0' L_r'}}}{2 L_k'} \sqrt{{\omega_0 \Omega_k}}. \end{align} We have defined the modified inductance values as \begin{align} L_0' = \frac{L_kL_r + L_kL_0 +L_0L_r }{L_k +L_r},\\ L_r' = \frac{L_kL_r + L_kL_0 +L_0L_r }{L_k +L_0},\\ L_k' = \frac{L_kL_r + L_kL_0 +L_0L_r }{L_k }. \end{align}
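The two expressions quoted above for the capacitive coupling $g_k$ are algebraically identical; a short numerical check (with illustrative element values, not taken from the paper) confirms this.

```python
import math

# Illustrative check that the two expressions for the capacitive coupling
#   g_k = sqrt(C0'*Cr')/(2*Ck') * sqrt(w0*Wk)
#       = Ck*sqrt(w0*Wk)/(2*sqrt((C0+Ck)*(Cr+Ck)))
# agree; element values are assumptions, not from the paper.
C0, Cr, Ck = 50e-15, 250e-15, 10e-15
S = Ck*Cr + Ck*C0 + C0*Cr
C0p, Crp, Ckp = S/(Ck+Cr), S/(Ck+C0), S/Ck
w0, Wk = 2*math.pi*6e9, 2*math.pi*5.5e9        # assumed mode frequencies

g1 = math.sqrt(C0p*Crp)/(2*Ckp) * math.sqrt(w0*Wk)
g2 = Ck*math.sqrt(w0*Wk) / (2*math.sqrt((C0+Ck)*(Cr+Ck)))
print(g1, g2)
```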
\subsection{Band structure calculation with RWA} Using the rotating wave approximation, the Hamiltonian in Eq.\,(\ref{Eq:HamiltonianExact}) can be simplified to \begin{align} \hat{H} &= {\hbar} \sum_{k } \bigg[ \Omega_k \hat{a}_k^\dagger\hat{a}_k + {\omega_0} \hat{b}^\dagger_k\hat{b}_k + g_k \left( \hat{b}_{k}^\dagger \hat{a}_{k} + \hat{a}^\dagger_{k} \hat{b}_{k} \right) \bigg]. \end{align} The simplified Hamiltonian can be written in the compact form \begin{align} \hat{H} = \hbar \sum_{k } {\mathbf{x}}_k^\dagger \mathbf{H}_k {\mathbf{x}}_k, \end{align} where \begin{align}
\mathbf{H}_k=
\begin{bmatrix}
\Omega_k & g_k \\
g_k & \omega_0 \\
\end{bmatrix}
,
{\mathbf{x}}_k=
\begin{bmatrix}
\hat{a}_{k} \\
\hat{b}_{k}
\end{bmatrix}. \end{align} We wish to transform the Hamiltonian into the diagonal form
\begin{align}
\mathbf{\tilde{H}}_k=
\begin{bmatrix}
\omega_{+,k} & 0 \\
0 & \omega_{-,k}
\end{bmatrix}.
\end {align} It is straightforward to use the eigenvalue decomposition to find $\omega_{\pm,k}$ as
\begin{align} \omega_{\pm,k} = \frac{1}{2} \left[ \left(\Omega_k + \omega_0 \right) \pm \sqrt{{\left( \Omega_k - \omega_0 \right)}^2 + 4 g_k^2} \right], \end {align} along with the corresponding eigenstates
\begin{align} \hat{\alpha}_{\pm,k} = \frac{(\omega_{\pm,k} - \omega_0)}{\sqrt{ {(\omega_{\pm,k} - \omega_0)}^2 + g_k^2}}\hat{a}_{k} + \frac{g_k}{\sqrt{ {(\omega_{\pm,k} - \omega_0)}^2 + g_k^2}}\hat{b}_{k} . \end {align} \subsection{Band structure calculation beyond RWA} The exact Hamiltonian in Eq.\,(\ref{Eq:HamiltonianExact}) can be written in the compact form \begin{align} \hat{H} = \frac{\hbar}{2} \sum_{k } {\mathbf{x}}_k^\dagger \mathbf{H}_k {\mathbf{x}}_k, \end{align} where \begin{align}
\mathbf{H}_k=
\begin{bmatrix}
\Omega_k & 0 & g_k & -g_k \\
0 & \Omega_k & -g_k & g_k \\
g_k & -g_k & \omega_0 & 0 \\
-g_k & g_k & 0 & \omega_0
\end{bmatrix}
,
{\mathbf{x}}_k=
\begin{bmatrix}
\hat{a}_{k} \\
\hat{a}_{-k}^\dagger \\
\hat{b}_{k} \\
\hat{b}_{-k}^\dagger
\end{bmatrix}. \end {align}
To find the eigenstates of the system, we can use a linear transform to map the state vector ${\mathbf{\tilde{x}}}_k = \mathbf{S}_k{\mathbf{{x}}}_k$ such that ${\mathbf{x}}_k^\dagger \mathbf{H}_k {\mathbf{x}}_k = {\mathbf{\tilde{x}}}_k^\dagger \mathbf{\tilde{H}}_k {\mathbf{\tilde{x}}}_k$ with the transformed diagonal Hamiltonian matrix
\begin{align}
\mathbf{\tilde{H}}_k=
\begin{bmatrix}
\omega_{+,k} & 0 & 0 & 0 \\
0 & \omega_{+,k} & 0 & 0 \\
0 & 0 & \omega_{-,k} & 0 \\
0 & 0 & 0 & \omega_{-,k}
\end{bmatrix}. \end{align} In order to preserve the canonical commutation relations, the matrix $\mathbf{S}_k$ has to be symplectic, i.e.\ $\mathbf{S}_k \mathbf{J} \mathbf{S}_k^\dagger = \mathbf{J}$, with the matrix $\mathbf{J}$ defined as
\begin{align} \mathbf{J}=
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & -1
\end{bmatrix}. \end{align} A linear transformation (such as $\mathbf{S}_k$) that diagonalizes a set of quadratically coupled boson fields while preserving their canonical commutation relations is often referred to as a Bogoliubov-Valatin transformation. While it is generally difficult to find the transform matrix $\mathbf{S}_k$, it is easy to find the eigenvalues of the diagonalized Hamiltonian by exploiting some of the properties of $\mathbf{S}_k$. Note that since $\mathbf{S}_k \mathbf{J} \mathbf{S}_k^\dagger = \mathbf{J}$, the matrices $\mathbf{J}\mathbf{\tilde{H}}_k$ and $\mathbf{J}\mathbf{{H}}_k$ are similar ($\mathbf{J}\mathbf{{H}}_k = \mathbf{S}_k^{-1}\mathbf{J}\mathbf{\tilde{H}}_k\mathbf{S}_k$) and hence share the same set of eigenvalues. The eigenvalues of $\mathbf{J}\mathbf{\tilde{H}}_k$ are $\pm\omega_{\pm,k}$, and thus we have
\begin{align}\label{Eq:ExactDisp} \omega_{\pm,k}^2 = \frac{1}{2} \left[ \left(\Omega_k^2 + \omega_0^2 \right) \pm \sqrt{{\left( \Omega_k^2 - \omega_0^2 \right)}^2 + 16 \omega_0 \Omega_k g_k^2} \right]. \end {align}
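Eq.\,(\ref{Eq:ExactDisp}) can be verified numerically by building $\mathbf{H}_k$ and $\mathbf{J}$ explicitly and diagonalizing $\mathbf{J}\mathbf{H}_k$. The parameter values below are illustrative, not from the paper.

```python
import numpy as np

# Numerical check of the beyond-RWA dispersion: the eigenvalues of J*H_k
# come in pairs +/- w_{+,k}, +/- w_{-,k}, with the squares of w_{+,k},
# w_{-,k} given by Eq. (ExactDisp).  Parameter values are illustrative.
W, w0, g = 5.8, 6.0, 0.25                     # Omega_k, omega_0, g_k
H = np.array([[W, 0, g, -g],
              [0, W, -g, g],
              [g, -g, w0, 0],
              [-g, g, 0, w0]], dtype=float)
J = np.diag([1.0, -1.0, 1.0, -1.0])

lam = np.linalg.eigvals(J @ H)
num_sq = np.sort(np.abs(lam)**2)              # four values, two +/- pairs

disc = np.sqrt((W**2 - w0**2)**2 + 16 * w0 * W * g**2)
lo = 0.5*((W**2 + w0**2) - disc)
hi = 0.5*((W**2 + w0**2) + disc)
print(num_sq, [lo, hi])
```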
\subsection{Circuit theory derivation of the band structure}
Consider the pair of equations that describe the propagation of a monochromatic electromagnetic wave of the form $v(x,t) =V(x) e^{-ikx}e^{i\omega t}$ (along with the corresponding current relation) inside a transmission line \begin{align} \label{Eq:Telegrapher} \frac{\mathrm d}{\mathrm d x} V(x)& = - Z(\omega) I(x), \nonumber \\ \frac{\mathrm d }{\mathrm d x} I(x)& = - Y(\omega) V(x). \end{align} Here, $Z(\omega)$ and $Y(\omega)$ are frequency-dependent impedance and admittance functions that model the linear response of the series and parallel portions of a transmission line with length $d$. It is straightforward to check that the solutions to these equations satisfy $k(\omega) = n \omega/c = \sqrt{-Z(\omega)Y(\omega)}/d$. For a lossless waveguide and in the absence of dispersion we have $Z(\omega) = i \omega L_0$ and $Y(\omega) = i \omega C_0$, and thus we find the familiar dispersion relation $k(\omega) = \omega\sqrt{L_0 C_0}/d$. Nevertheless, the pair of equations above remains valid for arbitrary impedance and admittance functions $Z(\omega)$ and $Y(\omega)$, provided that the dimension of the model circuit remains much smaller than the wavelength under consideration. In this model, a real and positive product $Z(\omega)Y(\omega)$ results in an imaginary wavenumber and subsequently creates a stop band in the dispersion relation. This situation can be achieved by periodically loading a transmission line with an array of resonators \cite{Karyamapudi:iz, OBrien:2014fc}. Assuming a unit-cell length of $d$, we find \begin{align} \label{Eq:ResDisp} k^2 = {\left(\frac{\omega}{c}\right)}^2 {n}^2 \left[1+ \frac{2c \gamma_e}{n d} \frac{1}{\omega_0^2 - \omega^2} \right] . \end{align} Here, $\omega_0$ is the resonance frequency, and $\gamma_e$ is the external coupling decay rate of an individual resonator in the array.
For moderate values of gap-midgap ratio ($\Delta/\omega_m$), the frequency gap can be found as \begin{align} \Delta = \frac{c }{n d} \left(\frac{\gamma_e}{\omega_0}\right), \end{align} and $\omega_m = \omega_0 + \Delta/2$. We have defined the gap as the range of frequencies where the wavenumber is imaginary.
Although a microwave resonator can be realized by using a two-element $LC$ circuit, the three-element circuits in Fig.\,\ref{fig:setup-transmission-line} provide an additional degree of freedom which enables setting the coupling $\gamma_e$ independent of the resonance frequency $\omega_0$. Using circuit theory, it is straightforward to show \begin{align} \omega_0 = \frac{1}{\sqrt{L_r(C_r+ C_k)}},\\ \gamma_e =\frac{Z_0}{2L_r} {\left(\frac{C_k}{C_r+ C_k}\right)}^2 . \end{align} Here, $Z_0$ is the characteristic impedance of the unloaded waveguide. It is easy to check that for small values of $C_k/C_r$, the resonance frequency is only a weak function of $C_k$. As a result, it is possible to adjust the coupling rate $\gamma_e$ by setting the capacitor $C_k$ while keeping the resonance frequency almost constant. Figure\,\ref{fig:setup-transmission-line} also depicts an alternative strategy for coupling microwave resonators to the waveguide. In this design, the inductive element $L_k$ is used to set the coupling in a ``current divider'' geometry. We provide experimental results for implementations of bandgap waveguides based on both designs in the next section.
While the ``continuum'' model described above provides a heuristic explanation for the formation of a bandgap in a waveguide loaded with resonators, its results remain valid only as long as $k \ll 2\pi/d$. To avoid this approximation, we can use the transfer matrix method to find the exact dispersion relation for a system with discrete periodic symmetry \cite{Pozar:1998wp}. In this case Eq.\,(\ref{Eq:ResDisp}) is modified to \begin{align}\label{Eq:ResDisp_discrete}
\cos{(kd)} = 1-{\left(\frac{\omega}{c}\right)}^2 \frac{{n}^2 d^2}{2} - \frac{ n d \gamma_e}{c} \frac{\omega^2}{\omega_0^2 - \omega^2} . \end{align} Note that this relation still requires $d$ to be much smaller than the wavelength of the unloaded waveguide $\lambda = 2\pi c/ (n\omega)$.
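Eq.\,(\ref{Eq:ResDisp_discrete}) can be explored numerically: wherever the right-hand side exceeds unity in magnitude, $kd$ has no real solution and the wave is evanescent. The sketch below scans the relation over frequency with illustrative parameters (the values of $n$, $d$, $\gamma_e$, and $\omega_0$ are assumptions, not the device values).

```python
import math

# Illustrative sketch: scan Eq. (ResDisp_discrete) over frequency and flag
# the stop band, i.e. frequencies where |cos(kd)| > 1 so k must be complex.
# All parameters (n, d, gamma_e, omega_0) are assumed values.
c = 3e8
n, d = 2.5, 3e-4                     # effective index and unit-cell length (m)
w0 = 2*math.pi*6.0005e9              # resonator frequency (off the scan grid)
ge = 2*math.pi*1e9                   # external coupling rate

def cos_kd(w):
    return (1 - (w/c)**2 * n**2 * d**2 / 2
            - (n*d*ge/c) * w**2 / (w0**2 - w**2))

stop = [w for w in (2*math.pi*1e6*f for f in range(5000, 7000))
        if abs(cos_kd(w)) > 1]
# stop-band edges found inside the scan window, in GHz:
print(min(stop)/(2*math.pi*1e9), max(stop)/(2*math.pi*1e9))
```

The evanescent region brackets the resonator frequency $\omega_0$, in line with the continuum picture above.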
\subsection{Dispersion and group index near the band-edges}
Equation (\ref{Eq:ExactDisp}) can be inverted to find the wavenumber $k$ as a function of frequency. Assuming a linear dispersion relation of the form $k= n\Omega_k /c$ for the bare waveguide, we find \begin{align}\label{Eq:ExactDisp2} k = \frac{n\omega}{c} \sqrt{\frac{\omega^2-\omega_{c+}^2}{\omega^2-\omega_{c-}^2}}. \end{align} Here, $\omega_{c+} = \omega_0$ and $\omega_{c-} = \omega_0 \sqrt{1- 4g_k^2/(\Omega_k\omega_0)} $ are the upper and lower cut-off frequencies, respectively. The quantity $g_k^2/(\Omega_k\omega_0)$ is a unitless parameter quantifying the size of the bandgap and is independent of the wavenumber $k$.
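The sketch below (with illustrative cutoff frequencies, not the device values) confirms that Eq.\,(\ref{Eq:ExactDisp2}) yields a purely imaginary $k$ inside the gap and a purely real $k$ in the band below it.

```python
import cmath, math

# Illustrative check of Eq. (ExactDisp2): inside the gap (w_c- < w < w_c+)
# the wavenumber is purely imaginary; in the band below it is purely real.
# The cutoff frequencies are assumed values, not the device parameters.
c, n = 3e8, 2.5
w_cp = 2*math.pi*6.0e9             # upper cutoff omega_{c+}
w_cm = 2*math.pi*5.7e9             # lower cutoff omega_{c-}

def k(w):
    return (n*w/c) * cmath.sqrt((w**2 - w_cp**2) / (w**2 - w_cm**2))

inside = k(2*math.pi*5.85e9)       # mid-gap frequency
below = k(2*math.pi*5.0e9)         # propagating band below the gap
print(inside, below)
```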
The dispersion relation can be written in simpler forms by expanding the wavenumber in the vicinity of the two band-edges \begin{align} k =
\begin{cases}
\frac{n\omega_{c-}}{c} \sqrt{ \frac{\Delta}{-\delta_{-}} } & \quad \text{for } \omega\approx \omega_{c-}, \\
\frac{n\omega_{c+}}{c} \sqrt{ \frac{\delta_{+}}{\Delta} } & \quad \text{for } \omega\approx \omega_{c+}.
\end{cases} \end{align} Here, $\Delta = \omega_{c+}-\omega_{c-}$ is the frequency span of the bandgap and $\delta_\pm = \omega- \omega_{c\pm}$ are the detunings from the band-edges.
The form of the dispersion relation Eq.\,(\ref{Eq:ExactDisp}) suggests that the maxima of the group index occur near the band-edges. Given the wavenumber, we can readily evaluate the group velocity $v_g = {\partial \omega }/{\partial k}$ and find the group index $n_g = c/v_g$ as \begin{align}\label{Eq:suppGroupIndx} n_g =
\begin{cases}
{\frac{{n\omega_{c-}} \sqrt{\Delta}}{\sqrt{-4{(\delta_{-} - i\gamma_i)}^3}} } & \quad \text{for } \omega\approx \omega_{c-}, \\
{ \frac{ {n\omega_{c+}} }{\sqrt{4\Delta (\delta_{+} - i\gamma_i)}} } & \quad \text{for } \omega\approx \omega_{c+}.
\end{cases} \end{align} Note that we have replaced $\delta_\pm$ with $\delta_\pm - i\gamma_i$ to account for finite internal quality factor of the resonators in the structure.
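A short numerical sketch (all parameter values are illustrative assumptions) evaluates the band-edge group index of Eq.\,(\ref{Eq:suppGroupIndx}) at the upper cutoff, $\delta_+ = 0$, and confirms the $1/\sqrt{\gamma_i}$ scaling with the resonators' internal loss.

```python
import cmath, math

# Illustrative evaluation of the band-edge group index, Eq. (suppGroupIndx),
# at the upper cutoff (delta_+ = 0), confirming the 1/sqrt(gamma_i) scaling
# with the resonators' internal loss.  All parameters are assumed values.
n = 2.5
w_cp = 2*math.pi*6.0e9             # upper cutoff frequency
Delta = 2*math.pi*0.3e9            # bandgap width

def ng_upper(delta, gamma_i):
    return n*w_cp / cmath.sqrt(4*Delta*(delta - 1j*gamma_i))

g1 = abs(ng_upper(0.0, 2*math.pi*1e6))
g2 = abs(ng_upper(0.0, 2*math.pi*0.25e6))   # four times smaller loss
print(g2/g1)                                # ~2: quartering the loss doubles n_g
```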
\section{Coupling a Josephson junction qubit to a metamaterial waveguide} \label{App:B}
We consider the coupling of a Josephson junction qubit to the metamaterial waveguide. Assuming the rotating-wave approximation, the Hamiltonian of this system can be written as \begin{align} \hat{H} &= {\hbar} \sum_{k } \bigg[ \omega_k \hat{a}_k^\dagger\hat{a}_k + \frac{\omega_q}{2} \hat{\sigma}_z + f_k \left( \hat{a}_{k}^\dagger \hat{\sigma}^{-} + \hat{a}_{k} \hat{\sigma}^+ \right) \bigg]. \end{align} Here $f_k$ is the coupling factor of the qubit to the waveguide photons, and $\omega_k = \omega_{\pm,k}$, where the plus or minus sign is chosen such that the qubit frequency $\omega_q$ lies within the band. Without loss of generality, we assume $f_k$ to be a real number. The Heisenberg equations of motion for the qubit and the photon operators can be written as \begin{align} &\frac{\partial }{\partial t} \hat{a}_k = -i \omega_k \hat{a}_k - i f_k \hat{\sigma}^-, \\ &\frac{\partial }{\partial t} \hat{\sigma}^- = -i \omega_q \hat{\sigma}^- - i \sum_k f_k \hat{a}_k. \end{align} The equation for $\hat{a}_k$ can be formally integrated and substituted into the equation for $\hat{\sigma}^-$ to find \begin{align} \frac{\partial }{\partial t} \hat{\sigma}^- = -i \omega_q \hat{\sigma}^- - i \sum_k f_k e^{-i\omega_k (t-t_0)} \hat{a}_k(t_0) \\ \nonumber - \sum_k f_k^2 \int_{t_0}^t e^{-i\omega_k (t-\tau)} \hat{\sigma}^-(\tau) \mathrm{d} \tau. \end{align} We now use the Markov approximation to write $\hat{\sigma}^-(\tau) \approx \hat{\sigma}^-(t) e^{-i\omega_q (\tau-t)}$, and thus \begin{align} \frac{\partial }{\partial t} \hat{\sigma}^- = -i \omega_q \hat{\sigma}^- - i \sum_k f_k e^{-i\omega_k (t-t_0)} \hat{a}_k(t_0) \\ \nonumber - \sum_k f_k^2 \left(\int_{t_0}^t e^{-i(\omega_k - \omega_q) (t-\tau)} \mathrm{d} \tau\right) \hat{\sigma}^-(t). 
\end{align} Considering the generic equation of motion for a linearly decaying qubit, $({\partial }/{\partial t} ) \hat{\sigma}^- = -i \omega_q \hat{\sigma}^- - (\gamma/2) \hat{\sigma}^- $, we can identify the real part of the last term in the equation above as the decay rate due to radiation of the qubit into the waveguide. We can extend the integral's upper bound to infinity to approximately evaluate this term as
\begin{align} \gamma &= 2 \Re \left[\sum_k f_k^2 \int_{t_0}^t e^{-i(\omega_k - \omega_q) (t-\tau)} \mathrm{d} \tau \right] \nonumber \\ &\approx 2 \Re \left[\sum_k f_k^2 \int_{t_0}^{\infty} e^{-i(\omega_k - \omega_q) (t-\tau)} \mathrm{d} \tau \right] \nonumber \\ &= 2\pi \sum_k f_k^2 \delta(\omega_k - \omega_q). \end{align} Assuming the coupling rate $f_k$ is a smooth function of the $k$-vector, we can evaluate this sum in the continuum limit as \begin{align} \gamma &= 2\pi \sum_k f_k^2 \delta(\omega_k - \omega_q) \\&\approx {Md} \int \mathrm{d} k f_k^2 \delta(\omega_k - \omega_q)\\& ={L} \int \mathrm{d} \omega \left(\frac{\partial k}{\partial \omega}\right) f_k^2 \delta(\omega_k - \omega_q) \\&= \frac{L}{c} f(\omega_q)^2 n_g(\omega_q) . \end{align} It is evident that reducing the group velocity increases the radiation decay rate of the qubit. A similar analysis can be applied to find the decay rate of a linear cavity with resonance frequency of $\omega_0$ (i.e. a harmonic oscillator) that has been coupled to the waveguide with coupling constant $g(\omega)$. In this case we find \begin{align} &\gamma = \frac{L}{c} g(\omega_0)^2 n_g(\omega_0), \nonumber\\ &Q_e = \omega_0/\gamma = \frac{\omega_0 c}{L} \frac{1}{g(\omega_0)^2 n_g(\omega_0) }. \end{align}
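The continuum-limit (density-of-states) step above can be checked with a small counting argument. The sketch below (illustrative parameter values, not from the paper) confirms that, for a linear dispersion with $k$-spacing $2\pi/L$, the number of modes in a frequency window equals the 1D density of states $L n\,\mathrm{d}\omega/(2\pi c)$ used to turn the sum over $k$ into $(L/c)\,n_g$.

```python
import math

# Illustrative check of the continuum-limit (density-of-states) step behind
#   gamma = (L/c) f^2 n_g:
# for a linear dispersion w_m = (c/n)*k_m with k-spacing 2*pi/L, the number
# of modes in a window equals L*n*d_omega/(2*pi*c).  Values are assumptions.
c, n, L = 3e8, 2.5, 0.02
spacing = 2*math.pi*c/(n*L)                   # frequency spacing of the modes
w_lo, w_hi = 2*math.pi*5e9, 2*math.pi*105e9

count = sum(1 for m in range(1, 10**5) if w_lo < spacing*m < w_hi)
dos = L*n*(w_hi - w_lo)/(2*math.pi*c)
print(count, dos)
```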
\section{Disorder and Anderson localization} \label{App:C}
Propagation of electron waves in a one dimensional quasi-periodic potential is described by \begin{align}\label{Eq:KP} \left[-\frac{\partial^2}{{\partial x}^2} + \sum_n (U+ U_n) \delta(x-an)\right]\psi_q(x) = q^2\psi_q(x). \end{align} Here, $q$ is the quasi momentum and $U_n$ is the random variable that models compositional disorder at position $x = na$. Disorder leads to localization of waves with a characteristic length defined as \begin{align}
\ell^{-1} = \lim_{N \rightarrow \infty} \left< \frac{1}{N} \sum_{n=0}^{N-1} \ln{\left|\frac{\psi_{n+1}}{\psi_n}\right|} \right>. \end{align} Here, the brackets represent averaging over different realizations of the disorder, whereas the summation accounts for spatial/temporal averaging for traveling waves. For this model, previous authors have found the localization length to be \cite{HernandezHerrejon:2010ef,DossettiRomero:2004ck,Izrailev:1995by} \begin{align} \frac{\ell}{d} = \frac{2\Gamma(1/6)}{6^{1/3} \sqrt{\pi}} \sigma ^{-2/3}\approx 3.45\sigma ^{-2/3}. \end{align} In this model $\sigma^2 = \langle U_n^2\rangle \sin^2{(q_0 a)}/q_0^2$ is a parameter that quantifies the strength of disorder, and $q_0$ is the value of quasi-momentum at the band-edge.
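The numerical prefactor in the localization-length formula can be evaluated directly; the disorder strength used in the second line is an illustrative assumption.

```python
import math

# Check the numerical prefactor 2*Gamma(1/6)/(6**(1/3)*sqrt(pi)) ~ 3.45
# quoted in the localization-length formula, then evaluate l/d for an
# illustrative disorder strength sigma.
prefactor = 2*math.gamma(1/6) / (6**(1/3) * math.sqrt(math.pi))
print(prefactor)

sigma = 0.003                       # assumed disorder strength
print(prefactor * sigma**(-2/3))    # localization length in units of d
```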
\begin{figure*}
\caption{\textbf{a}, Optical and SEM images of the microwave resonator array chip. Middle: optical image of the chip with two arrays of coupled resonators on a $1\times1$ cm silicon chip. Left and Right: SEM images (false-color) of the fabricated inductively (left) and capacitively (right) coupled microwave resonator pairs. The resonator region is colored red and the waveguide central conductor is colored blue. \textbf{b-c}, Amplitude and phase response of two capacitively-coupled microwave resonator pairs measured at the fridge temperature $T_f \approx 7$ mK. The legends show the intrinsic ($Q_i = \omega_0/\gamma_i$) and extrinsic ($Q_e = \omega_0/\gamma_e$) quality factors extracted from a Fano line shape fit. \textbf{d}, Statistical variations in the resonance frequency of 9 resonators with a wire width of $500$ nm. The dashed lines mark the standard deviation of the normalized error equal to $\sigma = 0.3\%$.}
\label{fig:ResImage}
\end{figure*}
Now, we consider the propagation of current waves in a one dimensional waveguide that has been periodically loaded with resonators (a similar analysis can be applied to the voltage waves for the case of inductively coupled resonators). Starting from Eq.\,(\ref{Eq:ResDisp}), it is straightforward to find \begin{align} \frac{\partial^2 I(x)}{{\partial x}^2} + I(x){\left(\frac{\omega}{c}\right)}^2 n^2 \left[1 + \sum_n\frac{d \Delta \delta(x-an)}{\omega_{0,n} - \omega + i\gamma_i} \right] = 0 . \end{align} By comparing this equation with the Schr\"{o}dinger equation for the Kronig-Penney model Eq.\,(\ref{Eq:KP}) we find \begin{align} q^2 &\rightarrow {\left(\frac{\omega}{c}\right)}^2 n^2\nonumber \\ U + U_n &\rightarrow -{\left(\frac{\omega}{c}\right)}^2 n^2 \left[\frac{d \Delta}{\omega_{0,n} - \omega + i\gamma_i}\right]. \end{align} For small variations in the resonance frequencies, $\delta \omega_{0}$, we can expand the resonance potential term to find \begin{align} U_n = -{\left(\frac{\omega_0}{c}\right)}^2 n^2 \frac{\partial}{\partial \omega_{0,n}} \left(\frac{d \Delta }{\omega_{0,n} - \omega + i\gamma_i}\right) \delta \omega_{0}. \end{align} By evaluating the expression for $U_n$ and substituting it in the relation above for $\sigma^2$, we find
\begin{align} \sigma^2_{\text{low}} ={\left(\frac{\gamma_e}{\gamma_i}\right)}^4{\left(\frac{\delta \omega_0}{\Delta}\right)}^2, \nonumber \\ \sigma^2_{\text{high}} ={\left(\frac{\gamma_e}{\Delta}\right)}^4{\left(\frac{\delta \omega_0}{\Delta}\right)}^2 . \end{align}
The analysis above gives us the localization length from disorder, ${\ell_\mathrm{diss}}$. In addition to disorder, the loss in the waveguide leads to an exponential extinction of the wave's amplitude. The localization length from loss, ${\ell_\mathrm{loss}}$, can be found by solving for the complex band structure and setting $\ell_\text{loss} = 1/\text{Im}(k)$. Finally, the total localization length can be found by adding the effect of both contributions as \begin{align} \frac{1}{\ell_\mathrm{total}} = \frac{1}{\ell_\mathrm{diss}} + \frac{1}{\ell_\mathrm{loss}}. \end{align}
\section{Characterization of lumped-element microwave resonators} \label{App:D}
We have achieved a characteristic size of $\lambda_0/150$ ($130 \mu$m by $76 \mu$m for $\omega_0/2\pi \approx$ 6 GHz) and $\lambda_0/76$ ($155 \mu$m by $92 \mu$m for $\omega_0/2\pi \approx$ 10 GHz), using a wire width of 500 nm and 1 $\mu$m, respectively.
Figure\,\ref{fig:ResImage} shows the typical amplitude and phase response measured for a waveguide coupled to a pair of identical resonators. Microwave spectroscopy of the fabricated resonators is performed in a dilution refrigerator cooled down to a temperature of $T_f \approx 7$~mK. The input microwave is launched onto the chip via a 50-$\Omega$ CPW. The output microwave signal is subsequently amplified and analyzed using a network analyzer (for more details regarding the measurement setup, refer to Ref.~\cite{Dieterle:2016kj}). We have extracted the intrinsic and extrinsic decay rates of the cavity by fitting the transmission data to a Fano line shape of the form \begin{align} S_{21} (\omega)= 1- \frac{\gamma_e e^{i\phi_0}}{\gamma_i+\gamma_e + 2i (\omega - \omega_0)}. \end{align} Here $\gamma_e$ and $\gamma_i$ are the extrinsic and intrinsic decay rates of the resonator, respectively. The phase $\phi_0$ is a parameter that sets the asymmetry of the Fano line shape \cite{Gao:2008td}. The data demonstrate that it is possible to adjust the external coupling to the resonator over a wide range without much degradation in the internal quality factor (it is straightforward to convert the extrinsic quality factor $Q_e$ to the coupling constants $g_k$ used in our theoretical analysis above). We have compared the measured resonance frequency with the resonance frequency found from numerical simulations in Fig.\,\ref{fig:ResImage}d. We find that the measured resonance frequencies are in agreement with the simulated values, with a multiplicative scaling factor of 0.85. Using this scale factor, we have measured a random variation of $0.3\%$ in the resonance frequency. It has been previously suggested that the shift in the resonance frequency and its statistical variation can be attributed to the kinetic inductance of the free charge carriers in the superconductor, and the variations can be mitigated by increasing the wire width \cite{Underwood:2012hx}.
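The fitted rates relate to simple features of the line shape. The sketch below (taking $\phi_0 = 0$ and illustrative rates, not the measured values) verifies that the on-resonance dip of $|S_{21}|$ equals $\gamma_i/(\gamma_i+\gamma_e)$ and that $|1-S_{21}|$ falls by $\sqrt{2}$ one half-linewidth $(\gamma_i+\gamma_e)/2$ away from resonance.

```python
import math

# Illustrative sketch (phi_0 = 0, assumed rates): the on-resonance dip of
# |S21| equals gamma_i/(gamma_i+gamma_e), and |1 - S21| falls by sqrt(2)
# one half-linewidth (kappa/2) away from resonance.
w0 = 2*math.pi*6e9
gi, ge = 2*math.pi*0.1e6, 2*math.pi*1.0e6      # assumed intrinsic/extrinsic rates
kappa = gi + ge

def S21(w):
    return 1 - ge / complex(kappa, 2*(w - w0))

dip = abs(S21(w0))                             # = gi/kappa on resonance
ratio = abs(1 - S21(w0 + kappa/2)) / abs(1 - S21(w0))
gi_est, ge_est = kappa*dip, kappa*(1 - dip)    # rates recovered from the dip
print(dip, ratio, gi_est, ge_est)
```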
\section{Qubit frequency shift and the Purcell-limited lifetime} \label{App:E}
The qubit frequency shift can be derived from circuit theory by modeling the qubit as a linear resonator. Consider the circuit diagram in Fig.\,\ref{fig:circuit-model}. The load impedance seen from the qubit port can be written as \begin{equation} Z_L (\omega) = \frac{1}{i\omega C_g} + Z_\text{line}(\omega), \end{equation} and \begin{equation} Y_L (\omega) = \frac{i\omega C_g}{1+ Z_\text{line}(\omega)i\omega C_g}. \end{equation} For weak coupling, the decay rate can be found using the real part of the load impedance as \begin{align}\label{Eq.kappaE} \kappa \simeq {\omega_q^2 L_J \Re\left[Y_{L}(\omega_q) \right]}. \end{align} Here, $\omega_q$ is the resonance frequency of the qubit. Similarly, the shift in qubit frequency is found as \begin{align}\label{Eq.FreqShift} {\Delta \omega_q} \simeq - \frac{ \omega_q^2 L_J }{2} \Im\left[Y_{L}(\omega_q) \right] . \end{align}
For a transmon qubit, we have the following relations that approximate its behavior in the linearized regime \begin{align} L_J = \frac{{\left( \frac{\Phi_0}{2\pi} \right)}^2}{E_J},\\ \omega_q = \frac{1}{\sqrt{L_J C_q}}. \end{align}
We first use the simplified continuum model to find the input impedance $Z_\text{line}$ \begin{align} Z_\text{line}(\omega) = Z_B(\omega) \frac{ R_L+ Z_B(\omega) \tanh{\left[\Im{(k)} x \right]} }{ Z_B(\omega)+ R_L \tanh{\left[\Im{(k)} x \right]}}. \end{align} Here, $\Im{(k)}(\omega)$ is the attenuation constant (we are assuming $\Re{(k)}(\omega) = 0$, i.e. valid when the value of $\omega$ is within the bandgap), $Z_B(\omega)$ is the Bloch impedance of the periodic structure, and $x$ is the length of the waveguide. Assuming $\Im{(k)} x \gg 1$, this expression can be simplified as \begin{align}\label{Eq:Bloch}
Z_\text{line}(\omega) \approx & Z_B(\omega)+ \frac{4R_L {| Z_B(\omega)|}^2}{{R_L}^2+ {| Z_B(\omega)|}^2} e^{-2\Im{(k)} x} \nonumber\\
\approx &Z_B(\omega)+ 4R_L e^{-2\Im{(k)} x}. \end{align}
Note that we have assumed $R_L \ll {| Z_B(\omega)|}$ to make the last approximation. For weak coupling, the qubit coupling capacitance, $C_g$, should be chosen such that the magnitude of the impedance $Z_g = 1/(i \omega C_g)$ is much larger than $|Z_\text{line}|$. In this situation, we use Eq.\,(\ref{Eq.FreqShift}) and Eq.\,(\ref{Eq:Bloch}) to find
\begin{align}
\frac{\Delta \omega_q}{\omega_q} & = -\frac{1}{2}(L_J \omega_q)(C_g \omega_q)-\frac{1}{2}(L_J \omega_q){(C_g \omega_q)}^2 \text{Im}[Z_B(\omega_q)] \nonumber \\
& = -\frac{{C_g}}{2C_q} -\frac{{C_g}}{2C_q} \text{Im}[Z_B(\omega_q)]C_g \omega_q .
\end{align} Note that the first term in the frequency shift is merely caused by addition of the coupling capacitor to the overall qubit capacitance.
\begin{figure}
\caption{{Circuit diagram for qubit that is capacitively coupled to a metamaterial waveguide with a resistive termination.}}
\label{fig:circuit-model}
\end{figure}
We find the qubit's radiation decay rate by substituting Eq.\,(\ref{Eq:Bloch}) in Eq.\,(\ref{Eq.kappaE}) \begin{align} \kappa = \frac{4\omega_q^2 {C_g}^2 }{C_q} R_L e^{-2\Im{(k)}(\omega) x}. \end{align} Subsequently, the radiative lifetime of the qubit can be written as \begin{align}\label{Eq:suppLifetime} T_\text{rad}= \frac{C_q}{4\omega_q^2 {C_g}^2 R_L} e^{2x/\ell(\omega_q)}, \end{align} where $\ell = 1/\Im{(k)}$ is the localization length in the bandgap. We note that the analysis from circuit theory is only valid for weak qubit-waveguide coupling rates, where the Markov approximation can be applied. In the strong coupling regime, the qubit frequency and lifetime can be found by numerically finding the zeros of the circuit's admittance function $Y = Y_L +Y_q$, where $Y_q = i\omega_q C_q + 1/(i\omega_qL_J)$.
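Eq.\,(\ref{Eq:suppLifetime}) can be evaluated with representative numbers; all element values below are illustrative assumptions, not the device parameters.

```python
import math

# Illustrative evaluation of the Purcell-limited lifetime, Eq. (suppLifetime):
#   T_rad = Cq/(4*wq^2*Cg^2*R_L) * exp(2*x/l).
# All element values below are assumptions, not the device parameters.
wq = 2*math.pi*6e9
Cq, Cg = 70e-15, 1e-15
RL = 50.0                          # termination resistance (ohm)
ell = 2e-3                         # localization length inside the gap (m)

def T_rad(x):
    return Cq/(4*wq**2*Cg**2*RL) * math.exp(2*x/ell)

# five localization lengths of waveguide give an exp(10) lifetime enhancement
print(T_rad(0.0), T_rad(0.01) / T_rad(0.0))
```

The exponential dependence on $x/\ell$ is the key point: each additional localization length of metamaterial waveguide multiplies the radiative lifetime by $e^2$.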
\subsection{Group delay and the qubit lifetime profile}
Equation\,(\ref{Eq:suppLifetime}) demonstrates the relation between the qubit lifetime and the localization length. Moving the qubit frequency beyond the gap results in a drastic increase in the localization length and subsequently reduces the qubit lifetime. The normalized slope of the lifetime profile in the vicinity of the band-edge can be written as \begin{align}
\left|\frac{1}{T_\text{rad}} \frac{\partial T_\text{rad}}{\partial \omega}\right| = \left|x \frac{\partial \Im{(k)} }{\partial \omega}\right| = \left|x \Im{(n_g)}/c\right|. \end{align} We now evaluate Eq.\,(\ref{Eq:suppGroupIndx}) to find the group index at the upper and lower band-edges $\delta_\pm = 0$ \begin{align}
|\Re(n_g)| = |\Im(n_g)| =
\begin{cases}
{n\omega_{c-}} \sqrt{\frac{\Delta}{{8\gamma_i}^3} } & \quad \text{for } \omega= \omega_{c-}, \\
{ {n\omega_{c+}} \frac{1 }{\sqrt{8\Delta \gamma_i}} } & \quad \text{for } \omega= \omega_{c+}.
\end{cases} \end{align} Consequently, we can write the normalized slope of the lifetime profile at the band-edge as \begin{align}
\left.\left(\frac{1}{T_\text{rad}} \left|\frac{\partial T_\text{rad}}{\partial \omega}\right|\right) \right|_{\omega = \omega_{c\pm}} = & \left|x\Im{\left[n_g(\omega_{c\pm})\right]}/c\right| \nonumber \\ = & \left|x \Re{\left[n_g(\omega_{c\pm})\right]}/c\right| \nonumber \\ = & \tau_\text{delay}. \end{align} This result has a simple description: the normalized slope of the lifetime profile at the band-edge is equal to the (maximum) group delay.
\end{document}
\begin{document}
\title{Entanglement witness operator for quantum teleportation} \author{Nirman Ganguly} \thanks{nirmanganguly@gmail.com} \affiliation{Dept. of Mathematics, Heritage Institute of Technology, Kolkata-107, West Bengal, India} \affiliation{S. N. Bose National Centre for Basic Sciences, Salt Lake, Kolkata-700 098, India} \author{Satyabrata Adhikari} \thanks{tapisatya@iopb.res.in} \affiliation{Institute of Physics, Sainik School Post, Bhubaneshwar-751005, Orissa, India} \author{A. S. Majumdar} \thanks{archan@bose.res.in} \affiliation{S. N. Bose National Centre for Basic Sciences, Salt Lake, Kolkata-700 098, India} \author{Jyotishman Chatterjee} \thanks{jyotishman.chatterjee@heritageit.edu} \affiliation{Dept. of Mathematics, Heritage Institute of Technology, Kolkata-107, West Bengal, India} \date{\today}
\begin{abstract} The ability of entangled states to act as resource for teleportation is linked to a property of the fully entangled fraction. We show that the set of states with their fully entangled fraction bounded by a threshold value required for performing teleportation is both convex and compact. This feature enables for the existence of hermitian witness operators the measurement of which could distinguish unknown states useful for performing teleportation. We present an example of such a witness operator illustrating it for different classes of states. \end{abstract}
\pacs{03.67.-a, 03.67.Mn}
\maketitle
\paragraph{A. Introduction.\textemdash}
Quantum information processing is now widely recognized as a powerful tool for implementing tasks that cannot be performed using classical means \cite{nielsen}. A large number of algorithms for various information processing tasks such as superdense coding \cite{dense}, teleportation \cite{teleport} and key generation \cite{crypto} have been proposed and experimentally demonstrated. At the practical level, information processing is implemented by manipulating states of quantum particles, and it is well known that not all quantum states can be used for such purposes. Hence, given an unknown state, one of the most relevant issues here is to determine whether it is useful for quantum information processing.
The key ingredient for performing many information processing tasks is provided by quantum entanglement. The experimental detection of entanglement is facilitated by the existence of entanglement witnesses \cite{horodecki,terhal} which are hermitian operators with at least one negative eigenvalue. The existence of entanglement witnesses is a consequence of the Hahn-Banach theorem in functional analysis \cite{hahn1,hahn2} providing a necessary and sufficient condition to detect entanglement. Motivated by the nature of different classes of entangled states, various methods have been suggested to construct entanglement witnesses \cite{lewenstein,terhal2,sperling,adhikari}. Study of entanglement witnesses \cite{review} has proceeded in directions such as the construction of optimal witnesses \cite{lewenstein,sperling}, Schmidt number witnesses \cite{sanpera}, and common witnesses \cite{wu}. The possibility of experimental detection of entanglement through the measurement of expectation values of witness operators for unknown states is facilitated by the decomposition of witnesses in terms of Pauli spin matrices for qubits \cite{guhne} and Gell-Mann matrices in higher dimensions \cite{bertl}. For macroscopic systems the properties of thermodynamic quantities provide a useful avenue for detection of entanglement \cite{vedral}.
Teleportation \cite{teleport} is a typical information processing task where at present there is intense activity in extending the experimental frontiers \cite{telex}. However, it is well known that not all entangled states are useful for teleportation. For example, while the entangled Werner state \cite{werner} in $2 \otimes 2$ dimensions is a useful resource \cite{lee}, another class of maximally entangled mixed states \cite{mems}, as well as other non-maximally entangled mixed states achieve a fidelity higher than the classical limit only when their magnitude of entanglement exceeds a certain value \cite{adhikari2}. The problem of determining states useful for teleportation becomes conceptually more involved in higher dimensions where bound entangled states \cite{bound} also exist.
The motivation for this study is to enquire how to determine whether an unknown entangled state could be used as a resource for performing information processing tasks. In the present work we consider this question for the specific task of quantum teleportation. We propose and demonstrate the existence of measurable witness operators connected to teleportation, by making use of a property of entangled states, {\it viz.}, the fully entangled fraction (FEF) \cite{ben,horodecki2}, which can be related to the efficacy of teleportation. In spite of the conceptual relevance of the FEF as a characteristic trait of entangled states \cite{vidal}, its actual determination could be complicated for higher-dimensional systems \cite{zhao,gu}. Our proof of the existence of witnesses connected to a relevant threshold value for the FEF enables us to construct a suitable witness operator for teleportation, as is illustrated with certain examples.
\paragraph{B. Proof of existence of witness.\textemdash}
The fully entangled fraction (FEF) \cite{horodecki2} is defined for a bipartite state $\rho$ in $d \otimes d$ dimensions as \begin{equation} F(\rho)= \max_{U}\langle \psi^{+} \vert U^{\dagger} \otimes I \rho U \otimes I \vert \psi^{+} \rangle \label{fef} \end{equation} where $\vert \psi^{+} \rangle = \frac{1}{\sqrt{d}} \sum_{i=0}^{d-1} \vert ii \rangle$ and $U$ is a unitary operator. A quantum channel is useful for teleportation if it can provide a fidelity higher than what can be done classically. The fidelity depends on the FEF of the state, e.g., a state in $d \otimes d$ dimensions works as a teleportation channel if its FEF $ > \frac{1}{d}$ \cite{horodecki2,vidal,zhao}.
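Since the maximization over $U$ has no simple closed form in general, a crude Monte-Carlo lower bound is easy to sketch. The example below (illustrative, not from this paper) samples random unitaries for the two-qubit state $\rho = p\vert\psi^{+}\rangle\langle\psi^{+}\vert + (1-p)I/4$, whose FEF equals $p + (1-p)/4$ (attained at $U = I$).

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(d):
    # Haar-like random unitary from the QR decomposition of a Gaussian matrix
    z = rng.normal(size=(d, d)) + 1j*rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

d, p = 2, 0.8
psi = np.zeros(d*d); psi[0] = psi[3] = 1/np.sqrt(2)    # (|00>+|11>)/sqrt(2)
rho = p*np.outer(psi, psi) + (1 - p)*np.eye(d*d)/(d*d)

best = 0.0                       # Monte-Carlo lower bound on F(rho)
for _ in range(2000):
    v = np.kron(random_unitary(d), np.eye(d)) @ psi
    best = max(best, float(np.real(np.conj(v) @ rho @ v)))
print(best)                      # approaches p + (1-p)/4 = 0.85 from below
```

Since the bound exceeds $\tfrac{1}{d} = \tfrac{1}{2}$, this state clears the FEF threshold for teleportation.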
Here we propose the existence of a hermitian operator which serves to distinguish between states having FEF higher than a given threshold value from other states. FEF $>\frac{1}{d}$ is a benchmark which measures the viability of quantum states in teleportation. Let us consider the set $S$ of states having FEF $\leq \frac{1}{d}$. A special geometric form of the Hahn-Banach theorem in functional analysis \cite{hahn1,hahn2} states that if a set is convex and compact, then a point lying outside the set can be separated from it by a hyperplane. The existence of entanglement witnesses are indeed also an outcome of this theorem \cite{horodecki,terhal}. We now present the proof that the set $S$ of states with FEF $\leq \frac{1}{d}$ is indeed convex and compact, so that the separation axiom in the form of the Hahn-Banach theorem could be applied in order to demonstrate the existence of hermitian witness operators for teleportation.
\noindent \textit{Proposition:} The set $S=\lbrace \rho: F(\rho) \leq \frac{1}{d} \rbrace$ is convex and compact. \textit{Proof:} The proof proceeds in two steps. \textit{(i) We first show that $S$ is convex}. Let $\rho_{1},\rho_{2} \in S$, so that \begin{equation} F(\rho_{1})\leq \frac{1}{d}, ~~~ F(\rho_{2})\leq \frac{1}{d}. \label{frho12} \end{equation} Consider $\rho_{c}=\lambda \rho_{1} + (1-\lambda) \rho_{2}$, where $\lambda \in [0,1]$, and let $U_{c}$ be the unitary attaining the maximum in the definition of $F(\rho_{c})$, i.e., $F(\rho_{c})= \langle \psi^{+} \vert U_{c}^{\dagger} \otimes I \rho_{c} U_{c} \otimes I \vert \psi^{+} \rangle$; such a maximizer exists since the group of unitary matrices is compact. By linearity, $F(\rho_{c})= \lambda \langle \psi^{+} \vert U_{c}^{\dagger} \otimes I \rho_{1} U_{c} \otimes I \vert \psi^{+} \rangle +(1-\lambda)\langle \psi^{+} \vert U_{c}^{\dagger} \otimes I \rho_{2} U_{c} \otimes I \vert \psi^{+} \rangle$. Since each term is bounded above by the corresponding maximum $F(\rho_{i})= \max_{U}\langle \psi^{+} \vert U^{\dagger} \otimes I \rho_{i} U \otimes I \vert \psi^{+} \rangle,~~(i=1,2)$, it follows that $F(\rho_{c}) \leq \lambda F(\rho_{1})+ (1-\lambda)F(\rho_{2}).$ Using Eq.(\ref{frho12}) we have \begin{eqnarray} F(\rho_{c}) \leq \frac{1}{d}. \label{convex} \end{eqnarray} Thus, $\rho_{c}$ lies in $S$, and hence $S$ is convex.
\textit{(ii) We now show that $S$ is compact}. Note that in a finite dimensional Hilbert space, in order to show that a set is compact it is enough to show that the set is closed and bounded. The set $S$ is bounded as every density matrix has a bounded spectrum, i.e., eigenvalues lying between $0$ and $1$. In order to prove that
the set $S$ is closed, consider first the following lemma. \textit{Lemma}: Let $A$ and $B$ be two matrices of size $m \times n$ and $n \times r$ respectively. Then $ \Vert AB \Vert \leq \Vert A \Vert \Vert B \Vert $, where the norm of a matrix $A$ is defined as $\Vert A \Vert = \sqrt{TrA^{\dagger}A}=\sqrt{\sum_{i}\sum_{j}\vert A_{ij} \vert^{2}}$.\\ \textit{Proof of the lemma}: Let $A= \left( \begin{array}{c}
A_{1} \\
A_{2} \\
.\\
.\\
A_{m} \end{array} \right)$
and $B=[B^{(1)} B^{(2)}.... B^{(r)}]$, where the $A_{i}$'s are row vectors of size $n$ and the $B^{(j)}$'s are column vectors of size $n$. Using the Cauchy-Schwarz inequality, it follows that $\vert (AB)_{ij} \vert = \vert A_{i}B^{(j)} \vert \leq \Vert A_{i} \Vert \Vert B^{(j)} \Vert.$ Therefore, one has \begin{eqnarray} \Vert AB \Vert^{2} = \sum_{i=1}^{m}\sum_{j=1}^{r}\vert (AB)_{ij} \vert ^{2} \leq \sum_{i=1}^{m}\sum_{j=1}^{r} \Vert A_{i} \Vert^{2} \Vert B^{(j)} \Vert^{2}. \label{lemma} \end{eqnarray} The r.h.s. of the above inequality can be expressed as $\sum_{i=1}^{m} \Vert A_{i} \Vert^{2} \sum_{j=1}^{r} \Vert B^{(j)} \Vert^{2} = \Vert A \Vert^{2} \Vert B \Vert^{2}$, from which it follows that $ \Vert AB \Vert \leq \Vert A \Vert \Vert B \Vert $.
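The lemma is the submultiplicativity of the Frobenius norm, which is easy to confirm numerically. A brief NumPy check (illustrative only, not part of the original proof), noting that `np.linalg.norm` on a matrix is exactly the norm $\sqrt{Tr A^{\dagger}A}$ used above:

```python
import numpy as np

rng = np.random.default_rng(1)
# Frobenius norm ||A|| = sqrt(Tr A^dag A); the lemma asserts ||AB|| <= ||A|| ||B||
for _ in range(100):
    m, n, r = rng.integers(1, 6, size=3)
    A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
    B = rng.standard_normal((n, r)) + 1j * rng.standard_normal((n, r))
    assert np.linalg.norm(A @ B) <= np.linalg.norm(A) * np.linalg.norm(B) + 1e-9
print("lemma holds on all random instances")
```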
For any two density matrices $\rho_{a}$ and $\rho_{b}$, assume that the maximum in the FEF is attained at $U_{a}$ and $U_{b}$ respectively, i.e.,
$F(\rho_{a})= \langle \psi^{+} \vert U_{a}^{\dagger} \otimes I \rho_{a} U_{a} \otimes I \vert \psi^{+} \rangle$ and $F(\rho_{b})= \langle \psi^{+} \vert U_{b}^{\dagger} \otimes I \rho_{b} U_{b} \otimes I \vert \psi^{+} \rangle$. Therefore, we have
$F(\rho_{a})-F(\rho_{b})= \langle \psi^{+} \vert U_{a}^{\dagger} \otimes I \rho_{a} U_{a} \otimes I \vert \psi^{+} \rangle - \langle \psi^{+} \vert U_{b}^{\dagger} \otimes I \rho_{b} U_{b} \otimes I \vert \psi^{+} \rangle$ from which it follows that $F(\rho_{a})-F(\rho_{b}) \leq \langle \psi^{+} \vert U_{a}^{\dagger} \otimes I \rho_{a} U_{a} \otimes I \vert \psi^{+} \rangle - \langle \psi^{+} \vert U_{a}^{\dagger} \otimes I \rho_{b} U_{a} \otimes I \vert \psi^{+} \rangle$ since $\langle \psi^{+} \vert U_{a}^{\dagger} \otimes I \rho_{b} U_{a} \otimes I \vert \psi^{+} \rangle \leq \langle \psi^{+} \vert U_{b}^{\dagger} \otimes I \rho_{b} U_{b} \otimes I \vert \psi^{+} \rangle$. Hence, $F(\rho_{a})-F(\rho_{b}) \leq \langle \psi^{+} \vert U_{a}^{\dagger} \otimes I (\rho_{a}-\rho_{b}) U_{a} \otimes I \vert \psi^{+} \rangle$, implying \begin{equation} F(\rho_{a})-F(\rho_{b}) \leq \vert \langle \psi^{+} \vert U_{a}^{\dagger} \otimes I (\rho_{a}-\rho_{b}) U_{a} \otimes I \vert \psi^{+} \rangle \vert. \end{equation} Now, using the above lemma, one gets $F(\rho_{a})-F(\rho_{b}) \leq \Vert \langle \psi^{+} \vert \Vert \Vert U_{a}^{\dagger} \otimes I \Vert \Vert (\rho_{a}-\rho_{b}) \Vert \Vert U_{a} \otimes I \Vert \Vert \vert \psi^{+} \rangle \Vert$, or $F(\rho_{a})-F(\rho_{b}) \leq C^{2}K_{1}^{2} \Vert \rho_{a}-\rho_{b} \Vert $, where $C, K_1$ are positive real numbers. The last step follows from the fact that $\Vert \langle \psi^{+} \vert \Vert = C$. Since the set of all unitary operators is compact, it is bounded, and thus for any $U$, $\Vert U \otimes I \Vert \leq K_{1}$. Similarly $F(\rho_{b})-F(\rho_{a}) \leq C^{2}K_{1}^{2} \Vert \rho_{b}-\rho_{a} \Vert = C^{2}K_{1}^{2} \Vert \rho_{a}-\rho_{b} \Vert $. So finally, one may write \begin{equation} \vert F(\rho_{a})-F(\rho_{b}) \vert \leq
C^2K_1^2\Vert \rho_{a}-\rho_{b} \Vert.
\label{compact} \end{equation} This implies that $F$ is a continuous function. Moreover, for any density matrix $\rho$, one has $F(\rho) \in [\frac{1}{d^{2}},1]$, with $F(\rho)=1$ iff $\rho$ is a maximally entangled pure state, and $F(\rho)= \frac{1}{d^{2}}$ iff $\rho$ is the maximally mixed state \cite{zhao}. For the set $S$ under consideration, $F(\rho) \in [\frac{1}{d^{2}},\frac{1}{d}]$. Hence $S=\lbrace \rho: F(\rho) \leq \frac{1}{d} \rbrace = F^{-1}([\frac{1}{d^{2}},\frac{1}{d}])$ is closed, being the preimage of a closed interval under the continuous function $F$ \cite{hahn2}. This completes the proof of our proposition that the set $S=\lbrace \rho: F(\rho) \leq \frac{1}{d} \rbrace$ is convex and compact.
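The Lipschitz-type bound $\vert F(\rho_{a})-F(\rho_{b})\vert \leq C^{2}K_{1}^{2}\Vert\rho_{a}-\rho_{b}\Vert$ can be tested numerically. As a sketch outside the original text, the maximization is restricted to a fixed finite set of unitaries (a lower bound on $F$, but the same inequality applies to a maximum over any shared set of unitaries); here $C=1$ for the normalized $\vert\psi^{+}\rangle$ and $K_{1}=d$ since $\Vert U\otimes I\Vert=\sqrt{d\cdot d}$ in $d\otimes d$.

```python
import numpy as np

d = 2
psi = np.eye(d).reshape(d * d) / np.sqrt(d)

def random_state(rng):
    # random density matrix from a Ginibre matrix
    g = rng.standard_normal((d * d, d * d)) + 1j * rng.standard_normal((d * d, d * d))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

def random_unitary(rng):
    z = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

rng = np.random.default_rng(2)
Us = [np.eye(d)] + [random_unitary(rng) for _ in range(200)]

def F_hat(rho):
    # FEF restricted to a fixed finite set of unitaries (a lower bound on F)
    return max(np.real(psi.conj() @ np.kron(U, np.eye(d)).conj().T
                       @ rho @ np.kron(U, np.eye(d)) @ psi) for U in Us)

for _ in range(50):
    ra, rb = random_state(rng), random_state(rng)
    lhs = abs(F_hat(ra) - F_hat(rb))
    rhs = d**2 * np.linalg.norm(ra - rb)     # C^2 K_1^2 ||rho_a - rho_b||, C=1, K_1=d
    assert lhs <= rhs + 1e-9
print("Lipschitz bound verified on random pairs")
```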
It now follows from the Hahn-Banach theorem \cite{hahn1,hahn2}, that any $\chi \not \in S$ can be separated from $S$ by a hyperplane. In other words, any state useful for teleportation can be separated from the states not useful for teleportation by a hyperplane and thus allows for the definition of a witness. The witness operator, if so defined, identifies the states which are useful in the teleportation protocol, i.e., provides a fidelity higher than the classical optimum.
\begin{figure}
\caption{The set $S=\lbrace \rho: F(\rho) \leq \frac{1}{d} \rbrace$ is convex and compact, and using the Hahn-Banach theorem it follows that any state useful for teleportation can be separated from the states not useful for teleportation by a hyperplane, thus providing for the existence of a witness for teleportation.}
\end{figure}
\paragraph{C. A witness operator for teleportation.\textemdash}
A hermitian operator $W$ may be called a teleportation witness if the following conditions are satisfied: (i) $Tr(W\sigma)\geq 0$, for all states $\sigma$ which are not useful for teleportation. (ii) $Tr(W\chi) < 0$, for at least one state $\chi$ which is useful for teleportation. We propose a hermitian operator for a $d \otimes d$ system of the form (using $\vert \psi^{+} \rangle = \frac{1}{\sqrt{d}}\sum_{i=0}^{d-1}\vert ii \rangle$) \begin{equation} W = \frac{1}{d}I - \vert \psi^{+} \rangle \langle \psi^{+} \vert \label{TW} \end{equation} In order to prove that $W$ is indeed a witness operator, we first show that the operator $W$ gives a non-negative expectation over all states which are not useful for teleportation. Let $\sigma$ be an arbitrary state chosen from the set $S$ not useful for teleportation, i.e., $\sigma \in S$. Hence, \begin{equation} Tr(W\sigma)= \frac{1}{d} - \langle \psi^{+} \vert \sigma \vert \psi^{+} \rangle \end{equation} from which it follows that $Tr(W\sigma) \geq \frac{1}{d} - max_{U}\langle \psi^{+} \vert U^{\dagger} \otimes I \sigma U \otimes I \vert \psi^{+} \rangle$. Now, using the definition of the FEF, $F(\sigma)$ from Eq.(\ref{fef}), and the fact that $\sigma \in S$, one gets \begin{equation} Tr(W\sigma) \geq 0 \end{equation} Our task now is to show that the operator $W$ detects at least one entangled state $\chi$ which is useful for teleportation, i.e., $Tr(W\chi)<0$, which we do by providing the following illustrations.
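Condition (i) can be spot-checked numerically. The sketch below (illustrative, not from the text) builds $W=\frac{1}{d}I-\vert\psi^{+}\rangle\langle\psi^{+}\vert$ for $d=3$ and verifies $Tr(W\sigma)\geq 0$ on random product states, which lie in $S$ by the known fact that separable states have FEF at most $1/d$:

```python
import numpy as np

d = 3
psi = np.eye(d).reshape(d * d) / np.sqrt(d)
W = np.eye(d * d) / d - np.outer(psi, psi.conj())   # W = I/d - |psi+><psi+|

def random_state(dim, rng):
    # random density matrix from a Ginibre matrix
    g = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

rng = np.random.default_rng(3)
# product states sigma = rho_A ⊗ rho_B are separable, hence in S
worst = min(np.real(np.trace(W @ np.kron(random_state(d, rng), random_state(d, rng))))
            for _ in range(1000))
print(worst >= -1e-12)   # Tr(W sigma) non-negative on all sampled states
```

Indeed, for a product state $\langle\psi^{+}\vert\rho_{A}\otimes\rho_{B}\vert\psi^{+}\rangle = \frac{1}{d}Tr(\rho_{A}\rho_{B}^{T})\leq\frac{1}{d}$, so the minimum above is exactly zero up to numerical noise.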
Let us first consider the isotropic state \begin{equation} \chi_{\beta}= \beta \vert \psi^{+} \rangle \langle \psi^{+} \vert + \frac{1-\beta}{d^{2}}I~~~~~~~~~(-\frac{1}{d^{2}-1}\leq \beta \leq 1). \end{equation} The isotropic state is entangled $\forall \beta > \frac{1}{d+1}$ \cite{bertl2}. Now, $Tr(W\chi_{\beta})=\frac{(d-1)(1-\beta(d+1))}{d^{2}}$, from which it follows that $Tr(W\chi_{\beta})< 0$ when $\beta>\frac{1}{d+1}$. Therefore, all entangled isotropic states are useful for teleportation. The same conclusion was obtained in Ref. \cite{zhao} on explicit calculation of the FEF for isotropic states. We next consider the generalized Werner state \cite{werner,pitt} in $d \otimes d$ given by \begin{equation} \chi_{wer}=(1-v)\frac{I}{d^{2}}+ v\vert \psi_{d}\rangle \langle \psi_{d} \vert \end{equation} where $0 \leq v \leq 1$ and $\vert \psi_{d}\rangle = \sum_{i=0}^{d-1}\alpha_{i}\vert ii \rangle$, with $\sum_{i}\vert \alpha_{i} \vert^2=1$, for which one obtains $Tr(W\chi_{wer})=\frac{1}{d}-\frac{1-v}{d^{2}}-\frac{v}{d}\vert\sum_{i=0}^{d-1}\alpha_{i}\vert^{2}$. The witness $W$ detects those Werner states which are useful for teleportation, i.e., $Tr(W\chi_{wer})< 0$, which is the case when \begin{equation} \frac{1}{d}-\frac{1-v}{d^{2}}-\frac{v}{d}\vert\sum_{i=0}^{d-1}\alpha_{i}\vert^{2} < 0. \end{equation} In $2 \otimes 2$ dimensions with $\alpha_i = 1/\sqrt{2}$, one gets $Tr(W\chi_{wer})=\frac{1-3v}{4}<0~~\text{when}~~ v>\frac{1}{3}$. Thus, all entangled Werner states are useful for teleportation, a result which is well-known \cite{lee}.
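Both expectation values above are simple traces and can be confirmed numerically. A NumPy sketch (not part of the original text) checking the isotropic formula for $d=3$ and the $2\otimes2$ Werner value $\frac{1-3v}{4}$:

```python
import numpy as np

d = 3
psi = np.eye(d).reshape(d * d) / np.sqrt(d)
phi = np.outer(psi, psi.conj())
W = np.eye(d * d) / d - phi

# isotropic state: Tr(W chi_beta) = (d-1)(1 - beta(d+1))/d^2
for beta in (0.1, 1 / (d + 1), 0.6, 1.0):
    chi = beta * phi + (1 - beta) * np.eye(d * d) / d**2
    val = np.real(np.trace(W @ chi))
    assert abs(val - (d - 1) * (1 - beta * (d + 1)) / d**2) < 1e-12

# 2x2 Werner state with alpha_i = 1/sqrt(2): Tr(W chi_wer) = (1-3v)/4
psi2 = np.eye(2).reshape(4) / np.sqrt(2)
phi2 = np.outer(psi2, psi2.conj())
W2 = np.eye(4) / 2 - phi2
for v in (0.0, 1 / 3, 0.8):
    chi = (1 - v) * np.eye(4) / 4 + v * phi2   # |psi_d> = |psi+> for alpha_i = 1/sqrt(2)
    assert abs(np.real(np.trace(W2 @ chi)) - (1 - 3 * v) / 4) < 1e-12
print("formulas confirmed")
```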
Now, consider another class of maximally entangled mixed states in $2 \otimes 2$ dimensions, which possess the maximum amount of entanglement for a given purity \cite{mems}: \begin{equation} \chi_{MEMS}= \left( \begin{array}{cccc}
h(C) & 0 & 0 & C/2 \\
0 & 1-2h(C) & 0 & 0 \\
0 & 0 & 0 & 0 \\
C/2 & 0 & 0 & h(C) \\ \end{array} \right) \label{mems} \end{equation} where, $h(C)= C/2$ for $C\geq2/3$, and $h(C)= 1/3$ for $C<2/3$, with
$C$ the concurrence of $\chi_{MEMS}$. Here we obtain $Tr(W\chi_{MEMS})= \frac{1}{2}-h(C)-\frac{C}{2}$. It follows that $Tr(W\chi_{MEMS})\geq 0$ when $0\leq C \leq \frac{1}{3}$, implying that for a magnitude of entanglement in this range, the state $\chi_{MEMS}$ is not useful for teleportation. But for $C>\frac{1}{3}$, the state $\chi_{MEMS}$ is suitable for teleportation, as one obtains $Tr(W\chi_{MEMS}) < 0$ in this case, confirming the results derived earlier in the literature \cite{adhikari2}. However, as expected with any witness, our proposed witness operator may fail to identify certain other states that are known to be useful for teleportation. For example, the state (for $\vert \phi \rangle = \frac{1}{\sqrt{2}}(\vert 01 \rangle + \vert 10 \rangle)$ and $0 \leq a \leq 1$) \begin{equation} \rho_{\phi} = a\vert \phi \rangle \langle \phi \vert + (1-a)\vert 11 \rangle \langle 11 \vert \end{equation} was recently studied in the context of quantum discord \cite{ali}. This class of states is useful for teleportation, but the witness $W$ is unable to detect it since $Tr(W\rho_{\phi})=\frac{a}{2}\geq 0$.
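As a numerical sanity check outside the original text, the MEMS expectation value $\frac{1}{2}-h(C)-\frac{C}{2}$ and the undetected value $\frac{a}{2}$ for $\rho_{\phi}$ can both be reproduced directly:

```python
import numpy as np

psi = np.eye(2).reshape(4) / np.sqrt(2)
W = np.eye(4) / 2 - np.outer(psi, psi)

def chi_mems(C):
    # maximally entangled mixed state of Eq. (mems), in the basis |00>,|01>,|10>,|11>
    h = C / 2 if C >= 2 / 3 else 1 / 3
    return np.array([[h,     0,       0, C / 2],
                     [0,     1 - 2*h, 0, 0],
                     [0,     0,       0, 0],
                     [C / 2, 0,       0, h]])

for C in (0.2, 1 / 3, 0.5, 0.8):
    h = C / 2 if C >= 2 / 3 else 1 / 3
    assert abs(np.trace(W @ chi_mems(C)) - (0.5 - h - C / 2)) < 1e-12

# rho_phi = a|phi><phi| + (1-a)|11><11|, with |phi> = (|01>+|10>)/sqrt(2)
phi = np.array([0, 1, 1, 0]) / np.sqrt(2)
for a in (0.3, 0.9):
    rho = a * np.outer(phi, phi) + (1 - a) * np.diag([0, 0, 0, 1.0])
    assert abs(np.trace(W @ rho) - a / 2) < 1e-12   # always >= 0: undetected
print("MEMS and rho_phi expectation values confirmed")
```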
Let us now briefly discuss the measurability of the witness operator. For experimental realization of the witness it is necessary to decompose the witness into operators that can be measured locally,
i.e, a decomposition into projectors of the form $W = \sum_{i=1}^{k}c_{i}\vert e_{i}\rangle \langle e_{i}\vert \otimes \vert f_{i}\rangle \langle f_{i}\vert$ \cite{review,guhne}. For implementation using polarized photons as in \cite{barbieri}, one may
take $\vert H \rangle = \vert 0 \rangle ,\vert V \rangle = \vert 1 \rangle , \vert D \rangle = \frac{\vert H \rangle + \vert V \rangle }{\sqrt{2}}, \vert F \rangle = \frac{\vert H \rangle - \vert V \rangle }{\sqrt{2}}, \vert L \rangle = \frac{\vert H \rangle + i\vert V \rangle }{\sqrt{2}}, \vert R \rangle = \frac{\vert H \rangle - i\vert V \rangle }{\sqrt{2}} $ as the horizontal, vertical, diagonal, and the left and right circular polarization states, respectively. Using a result given in \cite{hyllus}, our witness operator
can be recast for qubits into the required form, given by \begin{eqnarray} W = \frac{1}{2}(\vert HV \rangle \langle HV \vert + \vert VH \rangle \langle VH \vert - \vert DD \rangle \langle DD \vert \nonumber \\ - \vert FF \rangle \langle FF \vert +\vert LL \rangle \langle LL \vert + \vert RR \rangle \langle RR \vert). \end{eqnarray} Using this technique for an unknown two-qubit state $\chi$, the estimation of $\langle W \rangle$ requires only three local measurement settings \cite{hyllus}, as is also evident from the decomposition of our witness operator for qubits in terms of Pauli spin matrices, i.e., $W=\frac{1}{4}[I \otimes I - \sigma_{x} \otimes \sigma_{x} + \sigma_{y} \otimes \sigma_{y} - \sigma_{z} \otimes \sigma_{z}]$; this is far fewer than the $15$ parameters required for full state tomography \cite{munro}. In higher dimensions, the witness operator may be decomposed in terms of Gell-Mann matrices \cite{bertl}, and the advantage over tomography grows with the dimension. Therefore, compared to full state tomography, the witness operator offers a clear practical advantage whenever one needs to discriminate entangled states that are useful for teleportation.
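Both decompositions of the qubit witness can be verified by direct matrix algebra. An illustrative NumPy check (not part of the original text):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1, -1])

psi = np.eye(2).reshape(4) / np.sqrt(2)
W = np.eye(4) / 2 - np.outer(psi, psi)

# Pauli decomposition W = (1/4)[I⊗I - XX + YY - ZZ]
W_pauli = 0.25 * (np.kron(I2, I2) - np.kron(X, X) + np.kron(Y, Y) - np.kron(Z, Z))
assert np.allclose(W, W_pauli)

# local polarization-projector decomposition
H, V = np.array([1, 0]), np.array([0, 1])
D, F = (H + V) / np.sqrt(2), (H - V) / np.sqrt(2)
L, R = (H + 1j * V) / np.sqrt(2), (H - 1j * V) / np.sqrt(2)
proj = lambda a, b: np.outer(np.kron(a, b), np.kron(a, b).conj())
W_loc = 0.5 * (proj(H, V) + proj(V, H) - proj(D, D) - proj(F, F)
               + proj(L, L) + proj(R, R))
assert np.allclose(W, W_loc)
print("both decompositions reproduce W")
```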
Before concluding, it may be noted that it is possible to relate the FEF (\ref{fef}) to the maximum fidelity of other information processing tasks, such as super dense coding and entanglement swapping \cite{grond}. In generalized dense coding for $d \otimes d$ systems, one can use a maximally entangled state $\vert\phi\rangle$ to encode $\log_{2}d^{2}$ bits in the $d^{2}$ orthogonal states $(I \otimes U_{i})\vert\phi\rangle$ \cite{liu}. If the maximally entangled state is replaced with a general density operator, the dense coding fidelity is defined as an average over the $d^{2}$ results. A relation between the maximum fidelity $F_{DC}^{max}$ of dense coding and the FEF was established for $d \otimes d$ systems to be $F_{DC}^{max} = F$. Similarly, for two-qubit systems the maximum fidelity of entanglement swapping \cite{swap} $F_{ES}^{max}$ is also related to the FEF by $F_{ES}^{max} = F$ \cite{grond}. However, dense coding is a different information processing task compared to teleportation, and there $F > 1/d$ does not guarantee a higher than classical fidelity \cite{bru}. Hence, it is not possible to apply the above witness (\ref{TW}) to super dense coding and entanglement swapping.
\paragraph{D. Conclusions.\textemdash}
To summarize, in this work we have proposed a framework for discriminating quantum states useful for performing teleportation through the measurement of a hermitian witness operator. The ability
of an entangled state to act as a resource for teleportation is connected with its fully entangled fraction. The estimation of the fully entangled fraction is difficult in general, except for some known classes of states. We have shown that the set of states whose fully entangled fraction is bounded by the threshold value required for teleportation is both convex and compact. Exploiting this feature, we have demonstrated the existence of a witness operator for teleportation. The measurement of the expectation value of the witness on unknown states reveals which of them are useful as a resource for performing teleportation. We have provided illustrations of the applicability of the witness for isotropic and Werner states in $d \otimes d$ dimensions, and for a class of maximally entangled mixed states of qubits. The measurement of such a witness operator requires the determination of far fewer parameters than full state tomography of an unknown state, signifying the practical utility of our proposal. It would be interesting to explore the existence of witnesses for various other information processing tasks as well. In this context, further studies on finding optimal and common witnesses are called for.
\emph{Acknowledgments:} ASM would like to acknowledge support from the DST Project SR/S2/PU-16/2007.
\end{document} |
\begin{document}
\title{General bounds for sender-receiver capacities in multipoint quantum communications} \author{Riccardo Laurenza} \affiliation{Computer Science and York Centre for Quantum Technologies, University of York, York YO10 5GH, UK} \author{Stefano Pirandola} \affiliation{Computer Science and York Centre for Quantum Technologies, University of York, York YO10 5GH, UK}
\begin{abstract} We investigate the maximum rates for transmitting quantum information, distilling entanglement and distributing secret keys between a sender and a receiver in a multipoint communication scenario, with the assistance of unlimited two-way classical communication involving all parties. First we consider the case where a sender communicates with an arbitrary number of receivers, the so-called quantum broadcast channel. Here we also provide a simple analysis in the bosonic setting, where we consider quantum broadcasting through a sequence of beamsplitters. Then we consider the opposite case, where an arbitrary number of senders communicate with a single receiver, the so-called quantum multiple-access channel. Finally, we study the general case of all-in-all quantum communication, where an arbitrary number of senders communicate with an arbitrary number of receivers. Since our bounds are formulated for quantum systems of arbitrary dimension, they can be applied to many different physical scenarios involving multipoint quantum communication.
\end{abstract} \maketitle
\section{Introduction}
Today a huge effort is devoted to the development of robust quantum technologies, directly inspired by the field of quantum information~\cite{NielsenChuang,SamRMPm,RMP,hybrid,hybrid2,HolevoBOOK}. The most typical communication tasks are quantum key distribution (QKD)~\cite{BB84,Ekert,Gisin,Scarani,LM05,Gross,Chris,Twoway,Madsen,MDI1,CVMDIQKD}, reliable transmission of quantum information~\cite{QC1,QC2} and distribution of entanglement~\cite{distillDV,B2main,distillCV}. The latter allows two remote parties to implement powerful protocols such as quantum teleportation~\cite{tele,teleCV,telereview}, which is a crucial tool for the construction of a future quantum Internet~\cite{Kimble2008,HybridINTERNET}. Unfortunately, practical implementations are affected by decoherence~\cite{Zurek}. For this reason, the performance of any point-to-point protocol of quantum and private communication suffers from fundamental limitations, which become more severe as the distance increases; this is why we need quantum repeaters~\cite{repeaters}.
In this context, an open problem was to find the optimal rates for quantum and private communication that are achievable by two remote parties, say Alice and Bob, assuming the most general strategies allowed by quantum mechanics, i.e., assuming arbitrary local operations (LOs) assisted by unlimited two-way classical communication (CCs), briefly called adaptive LOCCs. These optimal rates are known as two-way (assisted) capacities and their determination has been notoriously difficult. Only recently, after about $20$ years~\cite{ErasureChannelm}, Ref.~\cite{PLOB} finally addressed this problem and established the two-way capacities at which two remote parties can distribute entanglement ($D_{2}$), transmit quantum information ($Q_{2}$), and generate secret keys ($K$) over a number of fundamental quantum channels at both finite and infinite dimension, including erasure channels, dephasing channels, bosonic lossy channels and quantum-limited amplifiers. For a review of these results, see also Ref.~\cite{TQCreview}.
For the specific case of a bosonic lossy channel with transmissivity $\eta$, Ref.~\cite{PLOB} proved that $D_{2}=Q_{2}=K=-\log_{2}(1-\eta)$ corresponding to $\simeq1.44\eta$ bits per channel use at high loss. The latter result completely characterizes the fundamental rate-loss scaling that affects any point-to-point protocol of QKD through a lossy communication line, such as an optical fiber or free-space link. The novel and general methodology that led to these results is based on a suitable combination of quantum teleportation~\cite{tele,teleCV,telereview} with a LOCC-monotonic functional, such as the relative entropy of entanglement (REE)~\cite{VedFORMm,Pleniom,RMPrelent}. Thanks to this combination, Ref.~\cite{PLOB} was able to upper-bound the generic two-way capacity $\mathcal{C}=D_{2}$, $Q_{2}$, $K$ of an arbitrary quantum channel $\mathcal{E}$ with a computable single-letter quantity: This is the REE $E_{R}(\sigma)$ of a suitable resource state $\sigma$ that is able to simulate the quantum channel by means of a generalized teleportation protocol. In particular, Ref.~\cite{PLOB} showed that $\sigma$ corresponds to the Choi matrix of the channel when the channel is teleportation-covariant, i.e., suitably commutes with the random unitaries induced by the teleportation process.
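The high-loss figure quoted above follows from the expansion $-\log_{2}(1-\eta)\simeq\eta/\ln 2\simeq1.44\,\eta$ for small $\eta$, which a one-line numeric check confirms (a sketch outside the original text):

```python
import numpy as np

def C_loss(eta):
    # two-way capacity of the bosonic lossy channel: -log2(1 - eta)
    return -np.log2(1 - eta)

# at eta = 1/2 the capacity is exactly 1 bit per use
print(C_loss(0.5))               # 1.0
# at high loss (eta -> 0) the capacity scales as eta/ln(2) ≈ 1.44*eta bits/use
for eta in (1e-3, 1e-4, 1e-5):
    assert abs(C_loss(eta) / eta - 1 / np.log(2)) < 1e-2
print(1 / np.log(2))             # ≈ 1.4427
```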
The goal of the present paper is to extend such \textquotedblleft REE+teleportation\textquotedblright\ methodology to a more complex communication scenario, in particular that of a single-hop quantum network, where multiple senders and/or receivers are involved. The basic configurations are represented by the quantum broadcast channel~\cite{brd1,brd2,brd3} where information is broadcast from a single sender to multiple receivers, and the quantum multiple-access channel~\cite{mac2}, where multiple senders communicate with a single receiver. More generally, we also consider the combination of these two cases, where many senders communicate with many receivers in a sort of all-in-all quantum communication or quantum interference channel. In practical implementations, this may represent a quantum bus~\cite{MaBus,BreBus} where quantum information is transmitted among an arbitrary number of qubit registers.
In all these multipoint scenarios, we characterize the most general protocols for entanglement distillation, quantum communication and key generation, assisted by adaptive LOCCs. This leads to the definition of the two-way capacities $\mathcal{C}=D_{2}$, $Q_{2}$, $K$ between any pair of sender and receiver. We then consider those quantum channels (for broadcasting, multiple-accessing, and all-in-all communication) which \text{are teleportation-covariant}. For these channels, we can completely reduce an adaptive protocol into a block form involving a tensor product of Choi matrices. Combining this reduction with the REE, we then bound their two-way capacities by means of the REE of their Choi matrix, therefore extending the methods of Ref.~\cite{PLOB} to multipoint communication.
Our upper bounds apply to both discrete-variable (DV) and continuous-variable (CV) channels. As an example, we consider the specific case of a $1$-to-$M$ thermal-loss broadcast channel through a sequence of beamsplitters subject to thermal noise. In particular, we show that the two-way capacities $Q_{2}$, $D_{2}$ and $K$ between the sender and each receiver are all bounded by those of the first point-to-point channel in the \textquotedblleft multisplitter\textquotedblright. This bottleneck result can be extended to other Gaussian broadcast channels. In the specific case of a lossy broadcast channel (without thermal noise), we find a straightforward extension of the fundamental rate-loss scaling, so that any sender-receiver capacity is bounded by $-\log_{2}(1-\eta)$, with $\eta$\ being the transmissivity of the first beamsplitter.
The paper is organized as follows. In Sec.~\ref{review}, we review the basic ideas, methods, and results of Ref.~\cite{PLOB} in relation to point-to-point quantum and private communication. This serves as a background to the reader, in order to better understand the novel developments which are presented in the following sections about the multi-point communication. Specifically, we consider the quantum broadcast channel in Sec.~\ref{SECbroadcasting}, the quantum multiple-access channel in Sec.~\ref{SECmulti} and, finally, the quantum interference channel in Sec.~\ref{SECall}. Sec.~\ref{SECconclu} is for conclusions.
\section{Theory of point-to-point quantum and private communication\label{review}}
Let us briefly review the general methodology and results established in Ref.~\cite{PLOB}. Let us define an adaptive point-to-point protocol through a quantum channel $\mathcal{E}$. We assume that Alice has a local register $\mathbf{a}$ (i.e., countable set of systems) and Bob has another local register $\mathbf{b}$. These registers are prepared in some initial state $\rho_{\mathbf{ab}}^{0}$ by means of an adaptive LOCC $\Lambda_{0}$. Then, Alice picks a system $a_{1}\in\mathbf{a}$ and sends it through channel $\mathcal{E}$; at the output, Bob gets a system $b_{1}$ which becomes part of his register, i.e., $b_{1}\mathbf{b}\rightarrow\mathbf{b}$. Another adaptive LOCC $\Lambda_{1}$ is applied to the registers. Then, there is the second transmission $\mathbf{a}\ni a_{2}\rightarrow b_{2}$ through $\mathcal{E}$, followed by another LOCC $\Lambda_{2}$ and so on (see Fig.~\ref{longPIC}). After $n$ uses, Alice and Bob share an output state $\rho_{\mathbf{ab}}^{n}$ which is epsilon-close to a target state with $nR^{n}$ bits~\cite{Eps}. The generic two-way capacity is defined by maximizing the asymptotic rate over all the adaptive LOCCs $\mathcal{L}=\{\Lambda_{0},\ldots,\Lambda_{n}\}$, i.e., \begin{equation} \mathcal{C}(\mathcal{E}):=\sup_{\mathcal{L}}\lim_{n}R^{n}. \end{equation} \begin{figure}\label{longPIC}
\end{figure}
In particular, by specifying the target state we identify a corresponding two-way capacity. If the target state is a maximally-entangled state, then $\mathcal{C}$ is equal to the two-way entanglement-distribution capacity ($D_{2}$). This is in turn equal to the two-way quantum capacity $Q_{2}$, because the reliable transmission of a qubit implies the distribution of an entanglement bit (ebit), and the distribution of an ebit implies the reliable teleportation of a qubit. These operations are completely equivalent under two-way CCs. If the target state is a private state~\cite{KD}, then $\mathcal{C}$ is equal to the (two-way) secret key capacity ($K$). Since a maximally-entangled state is a type of private state, we have $D_{2}\leq K$. Then also note that $K=P_{2}$, where the latter is the two-way private capacity of the channel, i.e., the maximum rate at which Alice may deterministically transmit secret bits over the channel by means of adaptive protocols~\cite{Devetak}. In summary, one has the hierarchy \begin{equation} D_{2}=Q_{2}\leq K=P_{2}. \label{hierarchy} \end{equation}
In order to bound these capacities by means of a computable single-letter quantity, Ref.~\cite{PLOB} introduced a reduction\ method whose application can be adapted to many other scenarios. The first ingredient is the extension of the relative entropy of entanglement (REE)~\cite{RMPrelent,VedFORMm,Pleniom} from quantum states to quantum channels. Ref.~\cite[Theorem 1]{PLOB} showed that, for any quantum channel $\mathcal{E}$ (at any dimension, finite or infinite), the generic two-way capacity $\mathcal{C}(\mathcal{E})$ [i.e., any of the quantities in Eq.~(\ref{hierarchy})] satisfies the weak converse bound \begin{equation} \mathcal{C}(\mathcal{E})\leq E_{R}^{\bigstar}(\mathcal{E}):=\sup_{\mathcal{L} }\underset{n}{\lim}\frac{E_{R}(\rho_{\mathbf{ab}}^{n})}{n}~. \label{mainweak} \end{equation}
Here the adaptive channel's REE\ $E_{R}^{\bigstar}(\mathcal{E})$\ is defined by computing the REE of the output state $\rho_{\mathbf{ab}}^{n}$, taking the asymptotic limit in the number of channel uses $n$, and optimizing over all the adaptive protocols $\mathcal{L}$. Recall that the REE of a quantum state $\rho$ is defined as \begin{equation}
E_{R}(\rho)=\inf_{\sigma_{s}}S(\rho||\sigma_{s}), \end{equation}
where $\sigma_{s}$ is an arbitrary separable state and $S(\rho||\sigma _{s}):=\mathrm{Tr}\left[ \rho(\log_{2}\rho-\log_{2}\sigma_{s})\right] $ is the relative entropy~\cite{RMPrelent}. For an asymptotic state $\sigma :=\lim_{\mu}\sigma^{\mu}$ defined by a sequence of states $\sigma^{\mu}$, the REE can be defined as~\cite{PLOB} \begin{equation} E_{R}(\sigma):=\inf_{\sigma_{s}^{\mu}}\underset{\mu\rightarrow+\infty}
{\lim\inf}S(\sigma^{\mu}||\sigma_{s}^{\mu}), \label{REE_weaker} \end{equation}
where $\sigma_{s}^{\mu}$ is an arbitrary sequence of separable states which is convergent in trace-norm, i.e., such that $||\sigma_{s}^{\mu}-\sigma _{s}||\rightarrow0$ for some separable $\sigma_{s}$. This mathematical form is directly inherited from the lower semi-continuity of the relative entropy for CV\ systems~\cite{HolevoBOOK}.
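As a concrete (finite-dimensional) illustration outside the original text, for a two-qubit Bell state the REE equals $1$ ebit, attained by the separable state $\frac{1}{2}(\vert00\rangle\langle00\vert+\vert11\rangle\langle11\vert)$. The NumPy sketch below evaluates $S(\rho||\sigma_{s})$ for this pair, which upper-bounds $E_{R}$ and here matches the known value; the spectral-log helper assumes $\mathrm{supp}(\rho)\subseteq\mathrm{supp}(\sigma_{s})$:

```python
import numpy as np

def rel_entropy(rho, sigma):
    # S(rho||sigma) = Tr[rho(log2 rho - log2 sigma)], assuming supp(rho) ⊆ supp(sigma)
    def log2m(m):
        w, v = np.linalg.eigh(m)
        lw = np.where(w > 1e-12, np.log2(np.maximum(w, 1e-300)), 0.0)
        return v @ np.diag(lw) @ v.conj().T
    return np.real(np.trace(rho @ (log2m(rho) - log2m(sigma))))

psi = np.eye(2).reshape(4) / np.sqrt(2)
bell = np.outer(psi, psi)
sep = 0.5 * np.diag([1.0, 0, 0, 1.0])   # separable candidate (|00><00|+|11><11|)/2
print(rel_entropy(bell, sep))           # 1.0 -> E_R(Bell) <= 1 ebit (in fact = 1)
```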
\begin{remark} The demonstration of Eq.~(\ref{mainweak}) exploits several tools from Refs.~\cite{VedFORMm,Pleniom, KD,Donaldmain}. In particular, Ref.~\cite{PLOB} provided three equivalent proofs, based on alternative treatments of the private state involved in the definition of the secret key capacity. The first proof exploits the fact that the dimension of the shield system~\cite{KD} of the private state has an effective exponential scaling in the number of channel uses; this scaling is an immediate application of well-known results in the literature~\cite{Matthias1a,Matthias2a}, whose adaptation to CVs is trivial as discussed in~\cite[Supplementary Note~3]{PLOB} and also in the recent review~\cite{TQCreview}. The second proof assumes an exponential energy growth in the channel uses, while the third proof does not depend on the shield size. For a full discussion of these details see Supplementary Note~3 of Ref.~\cite{PLOB}. \end{remark}
To simplify the upper bound of Eq.~(\ref{mainweak}) into a single-letter quantity, Ref.~\cite{PLOB} devised a second ingredient. This consists of a technique, dubbed \textquotedblleft teleportation stretching\textquotedblright , which reduces an adaptive protocol (with any communication task) into a much simpler block-type protocol. More recently, this technique has been extended to simplify adaptive protocols of quantum metrology and quantum channel discrimination~\cite{Metro}. The first step of the technique is the LOCC simulation of a quantum channel. This leads to a stretching of the channel into a quantum state. Then, the second step is the exploitation of this simulation argument in the adaptive protocol, so that all the transmissions through the channel are replaced by a tensor product of quantum states subject to a trace-preserving LOCC.
For any quantum channel $\mathcal{E}$, we may consider an LOCC-simulation. This consists of an LOCC $\mathcal{T}$ and a resource state $\sigma$ such that, for any input state $\rho$, the output of the channel can be expressed as~\cite{PLOB} \begin{equation} \mathcal{E}(\rho)=\mathcal{T}(\rho\otimes\sigma). \label{sigma} \end{equation} A channel $\mathcal{E}$ which is LOCC-simulable with a resource state $\sigma$ as in Eq.~(\ref{sigma}) is called \textquotedblleft$\sigma$ -stretchable\textquotedblright~\cite{PLOB}. For the same channel $\mathcal{E}$ there may be different choices for $\mathcal{T}$ and $\sigma$. Furthermore, the simulation can also be asymptotic. This means that we may consider sequences of LOCCs $\mathcal{T}^{\mu}$ and resource states $\sigma^{\mu}$ such that~\cite{PLOB} \begin{equation} \mathcal{E}(\rho)=\lim_{\mu}\mathcal{T}^{\mu}(\rho\otimes\sigma^{\mu}), \label{asymptotic} \end{equation} for any input state $\rho$. In other words, a quantum channel may be defined as a point-wise limit as in Eq.~(\ref{asymptotic}). This generalization is relevant for the simulation of CV\ channels, such as bosonic channels, or certain DV channels, such as the amplitude damping channel, whose\ simulation is based on mappings between CVs and DVs.
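Eq.~(\ref{sigma}) can be made concrete for a simple Pauli channel, where the LOCC $\mathcal{T}$ is the standard teleportation protocol and the resource $\sigma$ is the Choi matrix. The following NumPy sketch (illustrative only; conventions for Bell projectors and corrections are the standard ones, not spelled out in the text) verifies $\mathcal{E}(\rho)=\mathcal{T}(\rho\otimes\rho_{\mathcal{E}})$ for a dephasing channel:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
paulis = [I2, X, Y, Z]

p = 0.3
def E(rho):
    # dephasing channel, a Pauli (hence teleportation-simulable) channel
    return (1 - p) * rho + p * Z @ rho @ Z

phi = np.eye(2).reshape(4) / np.sqrt(2)          # |Phi> on (A,B)
Phi = np.outer(phi, phi)
# Choi matrix rho_E = (I ⊗ E)(Phi), via the Kraus operators of E
choi = sum(np.kron(I2, K) @ Phi @ np.kron(I2, K).conj().T
           for K in (np.sqrt(1 - p) * I2, np.sqrt(p) * Z))

def teleport(rho):
    # T(rho ⊗ rho_E): Bell measurement on (a,A), Pauli correction on B, averaged
    out = np.zeros((2, 2), dtype=complex)
    big = np.kron(rho, choi)                     # joint state of (a, A, B)
    for sk in paulis:
        phik = np.kron(sk, I2) @ phi             # Bell vector |Phi_k> on (a,A)
        P = np.kron(np.outer(phik, phik.conj()), I2)
        cond = P @ big @ P.conj().T
        bob = cond.reshape(4, 2, 4, 2).trace(axis1=0, axis2=2)   # trace out (a,A)
        out += sk @ bob @ sk.conj().T            # correction unitary
    return out

rng = np.random.default_rng(7)
g = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
rho = g @ g.conj().T
rho /= np.trace(rho).real
assert np.allclose(teleport(rho), E(rho))
print("teleportation over the Choi matrix reproduces E(rho)")
```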
\begin{remark} The LOCC-simulation of an arbitrary quantum channel at any dimension (finite or infinite) was introduced in Ref.~\cite{PLOB}. The first relevant idea was the teleportation simulation of Ref.~\cite[Section~V]{B2main} whose application was however limited to Pauli channels (as shown in Ref.~\cite{SougatoBowen}). Other approaches known in the literature~\cite{Niset,MHthesis,Wolfnotes,Leung} did not consider arbitrary LOCC\ simulations but teleportation protocols, therefore restricting the classes of simulable channels. In particular, the amplitude damping channel is a simple example of a quantum channel that could not be deterministically simulated by any approach prior to Ref.~\cite{PLOB}. Finally note that the simulations in Refs.~\cite{Gatearray,Qsim0,Qsim1,Qsim2} are not suitable for quantum communications because they generally imply non-local operations between the remote parties. See Ref.~\cite[Supplementary Note~8]{PLOB} for detailed discussions on the literature and advances in channel simulation. See also the recent review~\cite{TQCreview} and Table~I therein. \end{remark}
At any dimension (finite or infinite), an important class of quantum channels is that of the \textquotedblleft Choi-stretchable\textquotedblright\ channels. These are channels $\mathcal{E}$\ for which we may write the LOCC simulation of Eq.~(\ref{sigma}) with $\sigma$ being the Choi matrix of the channel, which is defined as \begin{equation} \rho_{\mathcal{E}}:=\mathcal{I}\otimes\mathcal{E}(\Phi), \end{equation} where $\Phi$ is a maximally-entangled state. If the simulation is asymptotic, then the Choi-stretchable channel satisfies Eq.~(\ref{asymptotic}) with $\sigma^{\mu}$ being a sequence of Choi-approximating states $\rho_{\mathcal{E}}^{\mu}$, i.e., such that their limit defines the asymptotic Choi matrix of the channel as $\rho_{\mathcal{E}}:=\lim_{\mu}\rho_{\mathcal{E}}^{\mu}$. In particular, for a bosonic channel, we may set \begin{equation} \rho_{\mathcal{E}}^{\mu}:=\mathcal{I}\otimes\mathcal{E}(\Phi^{\mu}), \end{equation} where $\Phi^{\mu}$ is a two-mode squeezed vacuum (TMSV) state~\cite{RMP}, whose asymptotic limit defines the ideal CV\ Einstein-Podolsky-Rosen (EPR) state.
A simple criterion to identify Choi-stretchable channels is that of teleportation-covariance. By definition, we say that a quantum channel $\mathcal{E}$ is teleportation-covariant if, for any teleportation unitary $U$ (Pauli operators in DVs, phase-space displacements in CVs~\cite{telereview} ),\ we may write \begin{equation} \mathcal{E}(U\rho U^{\dagger})=V\mathcal{E}(\rho)V^{\dagger}~, \label{stretchability} \end{equation} for another (generally-different) unitary $V$~\cite{PLOB}. This is a large family which includes Pauli, erasure and bosonic Gaussian channels. With this definition in hand, Ref.~\cite[Proposition 2]{PLOB} showed that a teleportation-covariant channel is certainly a Choi-stretchable channel, where the LOCC\ simulation is simply given by quantum teleportation. In other words, we may write $\mathcal{E}(\rho)=\mathcal{T}(\rho\otimes\rho_{\mathcal{E}})$ where $\mathcal{T}$ is a teleportation-LOCC. For asymptotic simulations, we have \begin{equation} \mathcal{E}(\rho)=\lim_{\mu}\mathcal{T}^{\mu}(\rho\otimes\rho_{\mathcal{E} }^{\mu}), \label{asymptotics} \end{equation} where $\mathcal{T}^{\mu}$\ is a sequence of teleportation-LOCCs. These are built on finite-energy Gaussian measurements whose asymptotic limit defines the ideal CV Bell detection~\cite{PLOB}.
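As a sanity check of the covariance condition in Eq.~(\ref{stretchability}), the following minimal Python sketch (purely illustrative, not code from Ref.~\cite{PLOB}) verifies teleportation-covariance for a qubit depolarizing channel: since the maximally mixed state is invariant under unitaries, the channel commutes with every Pauli teleportation unitary with $V=U$.

```python
import numpy as np

# Pauli (teleportation) unitaries for a qubit
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def depolarizing(rho, p):
    """Qubit depolarizing channel E(rho) = (1 - p) rho + p I/2."""
    return (1 - p) * rho + p * I2 / 2

# Random density matrix
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = A @ A.conj().T
rho /= np.trace(rho).real

p = 0.3
for U in (I2, X, Y, Z):
    # E(U rho U^dag) = U E(rho) U^dag, i.e. covariance with V = U
    lhs = depolarizing(U @ rho @ U.conj().T, p)
    rhs = U @ depolarizing(rho, p) @ U.conj().T
    assert np.allclose(lhs, rhs)
```

The same check fails for a non-covariant channel, e.g., an amplitude damping channel, which indeed requires the more general asymptotic LOCC simulations discussed above.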
\begin{remark} The fact that teleportation-covariance implies the simulation of the channel by means of teleportation over the channel's Choi matrix was first discussed in Ref.~\cite{Leung}\ in the context of DV channels. It was later generalized in Ref.~\cite{PLOB} to include CV channels and asymptotic simulations. See also Ref.~\cite{TQCreview}. \end{remark}
Thanks to the LOCC-simulation $(\mathcal{T},\sigma)$ of a quantum channel $\mathcal{E}$ as in Eq.~(\ref{sigma}), one may completely simplify the structure of an adaptive protocol. In fact, the output $\rho_{\mathbf{ab}}^{n}$ can be reduced to a tensor-product $\sigma^{\otimes n}$ up to a trace-preserving LOCC $\bar{\Lambda}$~\cite{PLOB}. In other words, we may write \begin{equation} \rho_{\mathbf{ab}}^{n}=\bar{\Lambda}\left( \sigma^{\otimes n}\right) ~. \label{StretchingMAIN} \end{equation} The reduction proceeds as follows: (i)~first we replace each transmission through the channel $\mathcal{E}$ with an LOCC-simulation $(\mathcal{T},\sigma)$; (ii)~then we stretch the resource state $\sigma$ \textquotedblleft back in time\textquotedblright; (iii)~finally, we collapse all the LOCCs (and also the initial separable state $\rho_{\mathbf{ab}}^{0}$) into a single trace-preserving LOCC (which is suitably averaged over all the possible measurements in the simulated protocol). These steps are depicted in Fig.~\ref{pppPIC} and lead to the decomposition in Eq.~(\ref{StretchingMAIN}).\begin{figure}\label{pppPIC}
\end{figure}
For a quantum channel with asymptotic simulation as in Eq.~(\ref{asymptotic}), the procedure is more involved. One first considers an imperfect channel simulation $\mathcal{E}^{\mu}(\rho):=\mathcal{T}^{\mu}(\rho\otimes\sigma^{\mu })$ in each transmission. By adopting this simulation, we realize an imperfect stretching of the protocol, with output state $\rho_{\mathbf{ab}}^{\mu ,n}:=\bar{\Lambda}_{\mu}\left( \sigma^{\mu\otimes n}\right) $\ for a trace-preserving LOCC $\bar{\Lambda}_{\mu}$. This is done similarly to the steps in Fig.~\ref{pppPIC}, but considering $\mathcal{E}^{\mu}$ in the place of the original channel $\mathcal{E}$. A crucial point is now the estimation of the error in the channel simulation, which must be suitably controlled and propagated to the output state.
Assume that the registers have a total number $m$ of modes (the value of $m$ can be taken to be finite and then relaxed at the very end to include countable registers). Then assume that, during the $n$ transmissions of the protocol, the total mean number of photons in the registers is bounded by some large but finite value $N$. We may therefore define the set of energy-constrained states \begin{equation}
\mathcal{D}_{N}:=\{\rho_{\mathbf{ab}}~|~\mathrm{Tr}(\hat{N}\rho_{\mathbf{ab} })\leq N\}\subset\mathcal{D}(\mathcal{H}^{\otimes m}), \end{equation} where $\hat{N}$ is the $m$-mode number operator. For the $i$th transmission $a_{i}\mathbf{\rightarrow}b_{i}$, the simulation error\ may be quantified in terms of the energy-bounded diamond norm~\cite{PLOB} \begin{align} \varepsilon_{N} & :=\left\Vert \mathcal{E}-\mathcal{E}^{\mu}\right\Vert _{\diamond N}=\\ & \sup_{\rho_{a_{i}\mathbf{ab}}\in\mathcal{D}_{N}}\left\Vert \mathcal{E} \otimes\mathcal{I}_{\mathbf{ab}}(\rho_{a_{i}\mathbf{ab}})-\mathcal{E}^{\mu }\otimes\mathcal{I}_{\mathbf{ab}}(\rho_{a_{i}\mathbf{ab}})\right\Vert .\nonumber \end{align}
Because $\mathcal{D}_{N}$ is compact~\cite{HolevoCOMPACT} and channel $\mathcal{E}$ is defined by the point-wise limit $\mathcal{E}(\rho)=\lim_{\mu }\mathcal{E}^{\mu}(\rho)$, we may write the following uniform limit \begin{equation} \varepsilon_{N}\overset{\mu}{\rightarrow}0\mathrm{~~}\text{for~any }N. \end{equation} This error has to be propagated to the output state, so that we can suitably bound the trace distance between the actual output $\rho_{\mathbf{ab}}^{n}$ and the simulated output $\rho_{\mathbf{ab}}^{n,\mu}$. By using basic properties of the trace distance (triangle inequality and monotonicity under maps), Ref.~\cite{PLOB} showed that the simulation error in the output state satisfies \begin{equation} \left\Vert \rho_{\mathbf{ab}}^{n}-\rho_{\mathbf{ab}}^{n,\mu}\right\Vert \leq n\left\Vert \mathcal{E}-\mathcal{E}^{\mu}\right\Vert _{\diamond N}~. \end{equation} Therefore, for any $N$, we may write the trace-norm limit \begin{equation} \left\Vert \rho_{\mathbf{ab}}^{n}-\bar{\Lambda}_{\mu}\left( \sigma ^{\mu\otimes n}\right) \right\Vert \overset{\mu}{\rightarrow}0, \label{stretch2} \end{equation} i.e., the asymptotic stretching $\rho_{\mathbf{ab}}^{n}=\lim_{\mu}\bar {\Lambda}_{\mu}(\sigma^{\mu\otimes n})$.
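The linear propagation of the simulation error can be illustrated with a toy numerical check (our own sketch, not from Ref.~\cite{PLOB}): composing $n$ uses of two nearby qubit depolarizing channels, the output trace distance never exceeds $n$ times the single-use distance. The rigorous bound above uses the energy-bounded diamond norm; here we simply fix one input state.

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
rho = (I2 + 0.9 * Z) / 2                       # fixed input state

def depolarize_n(rho, p, n):
    """n sequential uses: E_p^n(rho) = (1-p)^n rho + (1 - (1-p)^n) I/2."""
    c = (1 - p) ** n
    return c * rho + (1 - c) * I2 / 2

def trace_distance(a, b):
    """Trace distance (1/2)||a - b||_1 for Hermitian matrices."""
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(a - b)))

p, q, n = 0.10, 0.11, 20                       # two nearby channels
out = trace_distance(depolarize_n(rho, p, n), depolarize_n(rho, q, n))
one = trace_distance(depolarize_n(rho, p, 1), depolarize_n(rho, q, 1))
assert out <= n * one + 1e-12                  # at most linear accumulation
```

This mirrors the telescoping argument (triangle inequality plus monotonicity of the trace distance) behind the bound $\left\Vert \rho_{\mathbf{ab}}^{n}-\rho_{\mathbf{ab}}^{n,\mu}\right\Vert \leq n\left\Vert \mathcal{E}-\mathcal{E}^{\mu}\right\Vert _{\diamond N}$.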
\begin{remark} Teleportation stretching simplifies an arbitrary adaptive protocol implemented over an arbitrary channel at any dimension, finite or infinite.\ In particular, it works by maintaining the original communication task. This means that an adaptive protocol of quantum communication (QC), entanglement distribution (ED) or key generation (KG) is reduced to a corresponding block protocol with exactly the same original task (QC, ED, or KG), but with the output state being decomposed in the form of Eq.~(\ref{StretchingMAIN}) or Eq.~(\ref{stretch2}). In the literature, there were some precursory but restricted arguments, such as those in Refs.~\cite{B2main,Niset}. These were limited to the transformation of a protocol of QC into a protocol of ED, over specific classes of channels (e.g., Pauli channels in Ref.~\cite{B2main}). Furthermore, no control of the simulation error was considered in previous literature~\cite{Niset}, while this is crucial for the rigorous simulation of bosonic channels. \end{remark}
The most crucial insight of Ref.~\cite{PLOB} has been the combination of the previous two ingredients, i.e., the channel's REE and teleportation stretching, which is the key observation leading to a single-letter upper bound for all the two-way capacities of a quantum channel. In fact, let us compute the REE\ of the output state decomposed as in Eq.~(\ref{StretchingMAIN}). We derive \begin{equation} E_{R}(\rho_{\mathbf{ab}}^{n})\overset{(1)}{\leq}E_{R}(\sigma^{\otimes n})\overset{(2)}{\leq}nE_{R}(\sigma)~, \label{toREP} \end{equation} using (1) the monotonicity of the REE under trace-preserving LOCCs and (2) its subadditivity over tensor products. By replacing Eq.~(\ref{toREP}) in Eq.~(\ref{mainweak}), we then find the single-letter upper bound~\cite[Theorem~5]{PLOB} \begin{equation} \mathcal{C}(\mathcal{E})\leq E_{R}(\sigma)~. \label{UB1} \end{equation} In particular, if the channel $\mathcal{E}$ is teleportation-covariant, it is Choi-stretchable, and we may write~\cite[Theorem 5]{PLOB} \begin{equation} \mathcal{C}(\mathcal{E})\leq E_{R}(\rho_{\mathcal{E}}). \label{UB2} \end{equation}
These results are suitably extended to asymptotic simulations. By adopting the extended definition of REE in Eq.~(\ref{REE_weaker}), Ref.~\cite{PLOB} showed that Eqs.~(\ref{UB1}) and~(\ref{UB2}) are valid for channels with asymptotic simulations, such as bosonic channels. In particular, the proof exploits the fact that Eq.~(\ref{mainweak}) involves a supremum over all protocols $\mathcal{L}$, so that we may extend the upper bound to the asymptotic limit of energy-unconstrained protocols where the total mean photon number $N$ tends to infinity (and the local registers become countable sets).
The upper bound of Eq.~(\ref{UB2}) is valid for any teleportation-covariant channel, in particular for Pauli channels (e.g., depolarizing or dephasing), erasure channels and bosonic Gaussian channels. Then, by showing coincidence of this upper bound with lower bounds based on the coherent~\cite{QC1,QC2} and reverse coherent information~\cite{RevCohINFO,ReverseCAP}, Ref.~\cite{PLOB} established strikingly simple formulas for the two-way capacities of the most fundamental quantum channels. For a bosonic lossy channel $\mathcal{E}_{\eta}$\ with transmissivity $\eta$~\cite{RMP}, one has~\cite{PLOB} \begin{equation} D_{2}(\eta)=Q_{2}(\eta)=K(\eta)=P_{2}(\eta)=-\log_{2}(1-\eta)~. \label{formCloss} \end{equation} In particular, the secret-key capacity\ of the lossy channel determines the maximum rate achievable by any QKD protocol. At high loss $\eta\simeq0$, one has the optimal rate-loss scaling of $K\simeq1.44\eta$ secret bits per channel use. Because it sets the upper limit of any point-to-point quantum optical communication, Eq.~(\ref{formCloss}) also establishes a \textquotedblleft repeaterless bound\textquotedblright, i.e., the benchmark that quantum repeaters must surpass in order to be effective.
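As a numerical illustration of Eq.~(\ref{formCloss}) (a sketch based only on the stated formula; the helper name is ours), one can evaluate the repeaterless bound and its high-loss expansion:

```python
import numpy as np

def lossy_capacity(eta):
    """Two-way capacities of the bosonic lossy channel:
    K(eta) = -log2(1 - eta) secret bits per channel use."""
    return -np.log2(1 - eta)

print(lossy_capacity(0.5))   # prints 1.0: one secret bit per use at 3 dB loss

# High-loss regime eta -> 0: K ~ eta / ln 2 ~ 1.44 * eta
eta = 1e-3
assert abs(lossy_capacity(eta) - eta / np.log(2)) < 1e-5
```

The leading-order expansion recovers the $K\simeq1.44\eta$ scaling quoted above.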
Then, for a quantum-limited amplifier $\mathcal{E}_{g}$ with gain $g>1$~\cite{RMP}, one finds~\cite{PLOB} \begin{equation} D_{2}(g)=Q_{2}(g)=K(g)=P_{2}(g)=-\log_{2}\left( 1-g^{-1}\right) ~. \label{Campli} \end{equation} In particular, this proves that $Q_{2}(g)$ coincides with the unassisted quantum capacity $Q(g)$~\cite{HolevoWerner,Wolf}. For a qubit dephasing channel $\mathcal{E}_{p}^{\text{deph}}$ with dephasing probability $p$, one has~\cite{PLOB} \begin{equation} D_{2}(p)=Q_{2}(p)=K(p)=P_{2}(p)=1-H_{2}(p)~, \label{dep2} \end{equation} where $H_{2}$ is the binary Shannon entropy. Note that this also proves $Q_{2}(p)=Q(p)$ for a dephasing channel, where $Q(p)$ was found in Ref.~\cite{degradable}. Eq.~(\ref{dep2}) can be extended to dephasing channels $\mathcal{E}_{p,d}^{\text{deph}}$ in arbitrary dimension $d$, so that all the two-way capacities are given by~\cite{PLOB} \begin{equation} \mathcal{C}(p,d)=\log_{2}d-H(\{P_{i}\})~, \end{equation} where $H$ is the Shannon entropy and $P_{i}$ is the probability of $i$ phase flips. Finally, for the qudit erasure channel $\mathcal{E}_{p,d}^{\text{erase}}$ with erasure probability $p$, one finds~\cite{PLOB} \begin{equation} D_{2}(p)=Q_{2}(p)=K(p)=P_{2}(p)=(1-p)\log_{2}d~. \label{erase2} \end{equation} As mentioned above, only the $Q_{2}$ of the erasure channel was previously known~\cite{ErasureChannelm}. Simultaneously with Ref.~\cite{PLOB}, an independent study of the erasure channel was provided by Ref.~\cite{GEWa}, which showed how its $K$ can be computed from the squashed entanglement (see also Ref.~\cite[Supplementary~Discussion (page 38)]{PLOB}).
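For quick reference, the closed formulas of Eqs.~(\ref{Campli})--(\ref{erase2}) can be evaluated directly (a sketch transcribing only the formulas stated above; the function names are ours):

```python
import numpy as np

def amplifier_capacity(g):
    """Quantum-limited amplifier, gain g > 1: -log2(1 - 1/g)."""
    return -np.log2(1 - 1 / g)

def H2(p):
    """Binary Shannon entropy in bits, with H2(0) = H2(1) = 0."""
    if p in (0, 1):
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def dephasing_capacity(p):
    """Qubit dephasing channel: 1 - H2(p)."""
    return 1 - H2(p)

def erasure_capacity(p, d):
    """Qudit erasure channel: (1 - p) log2(d)."""
    return (1 - p) * np.log2(d)

assert amplifier_capacity(2) == 1.0       # 3 dB gain: 1 qubit per use
assert dephasing_capacity(0.5) == 0.0     # complete dephasing: zero capacity
assert erasure_capacity(0.25, 2) == 0.75  # qubit erasure with p = 1/4
```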
\section{Quantum broadcast channel\label{SECbroadcasting}}
Here we consider quantum and private communication in a single-hop point-to-multipoint network. We adapt the techniques of Ref.~\cite{PLOB} to bound the optimal rates that are achievable in adaptive protocols involving multiple receivers. For the sake of simplicity, we present the theory for non-asymptotic simulations. The theoretical treatment of asymptotic simulations goes along the lines described in the previous Sec.~\ref{review} and is discussed afterwards.
Consider a quantum broadcast channel $\mathcal{E}$\ where Alice (local register $\mathbf{a}$) transmits a system $a\in\mathbf{a}$ to $M$ different Bobs; the generic $i$th Bob (with $i=1,\ldots,M$) receives an output system $b^{i}$ which may be combined with a local register $\mathbf{b}^{i}$ for further processing. Denote by $\mathcal{D}(\mathcal{H}_{s})$ the space of density operators defined over the Hilbert space $\mathcal{H}_{s}$ of quantum system $s$. Then, the quantum broadcast channel is a completely positive trace-preserving (CPTP) map from Alice's input space $\mathcal{D}(\mathcal{H}_{a})$ to the Bobs' output space $\mathcal{D}(\otimes_{i}\mathcal{H}_{b^{i}})$. The most general adaptive protocol over this channel goes as follows.
All the parties prepare their initial systems by means of a LOCC $\Lambda_{0} $. Then, Alice picks the first system $a_{1}\in\mathbf{a}$ which is broadcast to all Bobs $a_{1}\rightarrow\{b_{1}^{i}\}$ through channel $\mathcal{E}$. This is followed by another LOCC $\Lambda_{1}$ involving all parties. Bobs' ensembles are updated as $b_{1}^{i}\mathbf{b}^{i}\rightarrow\mathbf{b}^{i}$. Then, there is the second broadcast $\mathbf{a}\ni a_{2}\rightarrow\{b_{2} ^{i}\}$ through $\mathcal{E}$, followed by another LOCC $\Lambda_{2}$ and so on. After $n$ uses, Alice and the $i$th Bob share an output state $\rho_{\mathbf{ab}^{i}}^{n}$ which is epsilon-close to a target state with $nR_{i}^{n}$ bits. The generic broadcast capacity for the $i$th Bob is defined by maximizing the asymptotic rate over all the adaptive LOCCs $\mathcal{L} =\{\Lambda_{0},\Lambda_{1},\ldots\}$, i.e., we have \begin{equation} \mathcal{C}^{i}:=\sup_{\mathcal{L}}\lim_{n}R_{i}^{n}. \end{equation}
By specifying the adaptive protocol to a particular target state, i.e., to a particular task (entanglement distribution, reliable transmission of quantum information, key generation or deterministic transmission of secret bits), one derives the entanglement-distribution broadcast capacity ($D_{2}^{i}$), the quantum broadcast capacity ($Q_{2}^{i}$), the secret-key broadcast capacity ($K^{i}$), and the private broadcast capacity ($P_{2}^{i}$). These are all assisted by unlimited two-way CCs between the parties and it is easy to check that they must satisfy $D_{2}^{i}=Q_{2}^{i}\leq K^{i}=P_{2}^{i}$.
In order to bound the previous capacities, let us introduce the notion of teleportation-covariant broadcast channel. It is explained for the case of two receivers, Bob and Charlie, with the extension to arbitrary $M$ receivers being just a matter of technicalities. This is a broadcast channel which suitably commutes with teleportation. Formally, this means that, for any teleportation unitary $U_{k}$ at the channel input, we may write \begin{equation} \mathcal{E}(U_{k}\rho U_{k}^{\dagger})=(B_{k}\otimes C_{k})\mathcal{E}(\rho)(B_{k}\otimes C_{k})^{\dagger}~, \end{equation} for unitaries $B_{k}$ and $C_{k}$ at the two outputs. If this is the case, it is immediate to prove that $\mathcal{E}$ can be simulated by a generalized teleportation protocol over its Choi matrix \begin{equation} \rho_{\mathcal{E}}=\mathcal{I}_{A}\otimes\mathcal{E}_{A^{\prime}}(\Phi_{AA^{\prime}}), \label{choi1} \end{equation} where the latter is defined by sending half of an EPR state $\Phi_{AA^{\prime}}$ through the broadcast channel. In other words, the broadcast channel is Choi-stretchable and its LOCC simulation is based on teleportation. See Fig.~\ref{broad}.\begin{figure}\label{broad}
\end{figure}
Following and extending the ideas of Ref.~\cite{PLOB}, we may simplify any adaptive protocol performed over a teleportation-covariant broadcast channel. The steps of the procedure are shown in Fig.~\ref{broadST}. As a result, the total output state of Alice, Bob and Charlie can be decomposed in the form \begin{equation} \rho_{\mathbf{abc}}^{n}:=\rho_{\mathbf{abc}}(\mathcal{E}^{\otimes n} )=\bar{\Lambda}\left( \rho_{\mathcal{E}}^{\otimes n}\right) ~, \label{tensorOUT} \end{equation} where $\bar{\Lambda}$\ is a trace-preserving LOCC. If we now trace one of the two receivers, e.g., Charlie, we still have a trace-preserving LOCC between Alice and Bob, and we may write the following \begin{equation} \rho_{\mathbf{ab}}^{n}=\mathrm{Tr}_{\mathbf{c}}\bar{\Lambda}\left(
\rho_{\mathcal{E}}^{\otimes n}\right) =\bar{\Lambda}_{\mathbf{a|bc}}\left( \rho_{\mathcal{E}}^{\otimes n}\right) ~, \label{broadppp} \end{equation}
where $\bar{\Lambda}_{\mathbf{a|bc}}$ is local with respect to the cut
$\mathbf{a|bc}$.\begin{figure}\label{broadST}
\end{figure}
Let us now compute the REE\ of Alice and Bob's output state $\rho _{\mathbf{ab}}^{n}$. Using Eq.~(\ref{broadppp}) and the monotonicity of the REE under $\bar{\Lambda}_{\mathbf{a|bc}}$, we derive \begin{align}
E_{R}(\rho_{\mathbf{ab}}^{n}) & :=\inf_{\sigma_{s}(\mathbf{a}|\mathbf{b}
)}S\left( \rho_{\mathbf{ab}}^{n}||\sigma_{s}\right) \nonumber\\
& \leq\inf_{\sigma_{s}(\mathbf{a}|\mathbf{bc})}S\left( \rho_{\mathcal{E}
}^{\otimes n}||\sigma_{s}\right) :=E_{R(\mathbf{a}|\mathbf{bc})} (\rho_{\mathcal{E}}^{\otimes n}), \label{ooo1} \end{align}
where we call $E_{R(\mathbf{a}|\mathbf{bc})}$ the REE\ with respect to the bipartite cut $\mathbf{a}|\mathbf{bc}$~\cite{monotonocity}. Note that the set of states $\{\sigma_{s}(\mathbf{a}|\mathbf{bc})\}$, separable between
$\mathbf{a}$ and $\mathbf{bc}$, includes the set of states $\{\sigma _{s}(\mathbf{a}|\mathbf{b}|\mathbf{c})\}$ which are separable with respect to $\mathbf{a}$, $\mathbf{b}$ and $\mathbf{c}$. Therefore, we may write the further upper-bound \begin{equation}
E_{R(\mathbf{a}|\mathbf{bc})}(\rho_{\mathcal{E}}^{\otimes n})\leq\inf _{\sigma_{s}(\mathbf{a}|\mathbf{b}|\mathbf{c})}S\left( \rho_{\mathcal{E}
}^{\otimes n}||\sigma_{s}\right) :=E_{R}(\rho_{\mathcal{E}}^{\otimes n}). \label{ooo2} \end{equation}
For Alice and Bob ($i=B$), we can then exploit the weak converse bound in Eq.~(\ref{mainweak}) where the optimization must be done over all the adaptive broadcast protocols. Combining this bound with Eqs.~(\ref{ooo1}) and~(\ref{ooo2}), we get \begin{equation} \mathcal{C}^{B}\leq\sup_{\mathcal{L}}\underset{n}{\lim}\frac{E_{R}
(\rho_{\mathbf{ab}}^{n})}{n}\leq E_{R(\mathbf{a}|\mathbf{bc})}^{\infty} (\rho_{\mathcal{E}})\leq E_{R}^{\infty}(\rho_{\mathcal{E}}), \end{equation} where $E_{R}^{\infty}(\rho):=\lim_{n}n^{-1}E_{R}(\rho^{\otimes n})$ is the regularized version of the REE. Then, using the subadditivity of the REE over tensor products, we may also write \begin{equation}
E_{R}(\rho_{\mathbf{ab}}^{n})\leq nE_{R(\mathbf{a}|\mathbf{bc})} (\rho_{\mathcal{E}})\leq nE_{R}(\rho_{\mathcal{E}}), \end{equation} which clearly leads to the single-letter upper bounds \begin{equation}
\mathcal{C}^{B}\leq E_{R(\mathbf{a}|\mathbf{bc})}(\rho_{\mathcal{E}})\leq E_{R}(\rho_{\mathcal{E}}). \label{ERRdv} \end{equation}
We find the same bounds for the capacity of Alice and Charlie ($i=C$). In general, for arbitrary $M$ receivers, we may extend the reasoning and write the following upper bounds for the capacity between Alice and the $i$th Bob \begin{equation}
\mathcal{C}^{i}\leq E_{R(\mathbf{a}|\mathbf{b}^{1}\cdots\mathbf{b}^{M})} (\rho_{\mathcal{E}})\leq E_{R}(\rho_{\mathcal{E}}):=\Phi(\mathcal{E})~, \label{upper} \end{equation} where $\Phi(\mathcal{E})$\ is the entanglement flux of the broadcast channel $\mathcal{E}$, defined as the REE of its Choi matrix $\rho_{\mathcal{E}}$.
\subsection{Extension to continuous variables\label{SecCV}}
As explained in Sec.~\ref{review}, one cannot directly apply the DV formulation of channel simulation and teleportation stretching to CV systems. There are non-trivial issues to be taken into account, related to the infinite energy of the asymptotic Choi matrices of the bosonic channels. These issues require a suitable treatment~\cite{PLOB,TQCreview}.
The Choi matrix of a bosonic broadcast channel can be defined as the following asymptotic state \begin{equation} \rho_{\mathcal{E}}:=\lim_{\mu}\rho_{\mathcal{E}}^{\mu},~~~\rho_{\mathcal{E} }^{\mu}=\mathcal{I}_{A}\otimes\mathcal{E}_{A^{\prime}}(\Phi_{AA^{\prime}} ^{\mu}), \end{equation} with $\Phi_{AA^{\prime}}^{\mu}$ being a TMSV state. The simulation of a teleportation-covariant bosonic broadcast channel is based on the sequence of Choi-approximating states $\rho_{\mathcal{E}}^{\mu}$, so that we may write the generalization of Eq.~(\ref{asymptotics}), i.e., \begin{equation} \mathcal{E}(\rho)=\lim_{\mu}\mathcal{T}^{\mu}(\rho\otimes\rho_{\mathcal{E} }^{\mu}), \end{equation} where $\mathcal{T}^{\mu}$\ is a sequence of teleportation-LOCCs. By repeating the reasoning in Sec.~\ref{review}, the error in the channel simulation can be propagated to the output state of the adaptive protocol, so that, for any energy constraint on the local registers, we may write the trace-norm limit \begin{equation} \left\Vert \rho_{\mathbf{abc}}^{n}-\bar{\Lambda}_{\mu}\left( \rho _{\mathcal{E}}^{\mu\otimes n}\right) \right\Vert \overset{\mu}{\rightarrow}0, \end{equation} where $\bar{\Lambda}_{\mu}$\ is an imperfect stretching-LOCC associated with the imperfect teleportation LOCC $\mathcal{T}^{\mu}$. By tracing one of the outputs, e.g., Charlie, one gets \begin{equation}
\left\Vert \rho_{\mathbf{ab}}^{n}-\bar{\Lambda}_{\mu}^{\mathbf{a|bc}}\left( \rho_{\mathcal{E}}^{\mu\otimes n}\right) \right\Vert \overset{\mu }{\rightarrow}0, \label{stttt} \end{equation}
where $\bar{\Lambda}_{\mu}^{\mathbf{a|bc}}$ is an imperfect stretching-LOCC associated with Alice and Bob, which is local with respect to the bipartite cut $\mathbf{a|bc}$.
The next step is to extend the definition of REE to asymptotic states as in Eq.~(\ref{REE_weaker}). In particular, we define \begin{equation}
E_{R(\mathbf{a|bc})}(\rho_{\mathcal{E}}):=\inf_{\sigma_{s}^{\mu}
(\mathbf{a|bc})}\underset{\mu\rightarrow+\infty}{\lim\inf}S(\rho_{\mathcal{E}
}^{\mu}||\sigma_{s}^{\mu}), \label{broadFFLL} \end{equation}
where $\sigma_{s}^{\mu}(\mathbf{a|bc})$ is an arbitrary converging sequence of states that is separable with respect to the cut $\mathbf{a|bc}$. Then, we also define the entanglement flux of the bosonic broadcast channel as \begin{equation} \Phi(\mathcal{E})=E_{R}(\rho_{\mathcal{E}}):=\inf_{\sigma_{s}^{\mu}} \underset{\mu\rightarrow+\infty}{\lim\inf}S(\rho_{\mathcal{E}}^{\mu}
||\sigma_{s}^{\mu}), \label{broadFFFLLL} \end{equation}
where $\sigma_{s}^{\mu}$ is an arbitrary converging sequence of separable states (with respect to all the local systems $\mathbf{a|b|c}$). By applying a direct extension of the weak converse bound in Eq.~(\ref{mainweak}), we then derive the same result as in Eq.~(\ref{ERRdv}) for the capacity $\mathcal{C}^{B}$ between Alice and Bob, provided that the REE\ quantities are suitably extended as in Eqs.~(\ref{broadFFLL}) and~(\ref{broadFFFLLL}). In general, for arbitrary $M$ receivers, we have the corresponding extension of Eq.~(\ref{upper}).
\subsection{Thermal-loss quantum broadcast channel}
Now that we have rigorously extended the treatment to CV systems, we study the example of a bosonic broadcast channel from Alice to $M$ Bobs which introduces both loss and thermal noise. This is an optical scenario that may easily occur in practice. For instance, it may represent the practical implementation of a single-hop QKD\ network, where a party wants to share keys with several other parties for broadcasting private information. The latter may also be a common key to enable a quantum-secure conferencing among all the trusted parties.
One possible physical representation is a chain of $M+1$ beamsplitters with transmissivities $(\eta_{0},\eta_{1},\ldots,\eta_{M})$ in which Alice's input mode $A^{\prime}$ subsequently interacts with $M+1$ modes $(E_{0},E_{1},E_{2},\ldots,E_{M})$ described by thermal states $\rho_{E_{i}}(\bar{n}_{i})$ with mean photon number $\bar{n}_{i}$. The $M$ output modes $(B_{1},B_{2},\ldots,B_{M})$ are then given to the different Bobs, with the extra modes $E$ and $E^{\prime}$\ being the leakage to the environment (or an eavesdropper). See Fig.~\ref{broadBS} for a schematic representation of this thermal-loss broadcast channel $\mathcal{E}=\mathcal{E}_{A^{\prime}\rightarrow B_{1}\ldots B_{M}}$.\begin{figure}\label{broadBS}
\end{figure}
The generic capacity $\mathcal{C}^{i}$ between Alice and the $i$th Bob is upper bounded by \begin{equation}
\mathcal{C}^{i}\leq E_{R(A|B_{1}\cdots B_{M})}(\rho_{\mathcal{E}}
):=\inf_{\sigma_{s}^{\mu}(A|B_{1}\cdots B_{M})}\underset{\mu\rightarrow
+\infty}{\lim\inf}S(\rho_{\mathcal{E}}^{\mu}||\sigma_{s}^{\mu}), \end{equation} where the state $\rho_{\mathcal{E}}^{\mu}:=\mathcal{I}_{A}\otimes \mathcal{E}_{A^{\prime}\rightarrow B_{1}\ldots B_{M}}(\Phi_{AA^{\prime}}^{\mu
})$ is the Choi-approximating state obtained by sending one half of a TMSV state $\Phi_{AA^{\prime}}^{\mu}$, and $\sigma_{s}^{\mu}(A|B_{1}\cdots B_{M})$ is a converging sequence of states that are separable with respect to the cut
$A|B_{1}\cdots B_{M}$. Now notice that we may write \begin{equation}
\rho_{\mathcal{E}}^{\mu}=\mathbf{L}_{A|B_{1}^{\prime}E_{1}\cdots E_{M}}\left[ \rho_{\mathcal{E}_{A^{\prime}\rightarrow B_{1}^{\prime}}}^{\mu}\otimes \bigotimes\nolimits_{i=1}^{M}\rho_{E_{i}}(\bar{n}_{i})\right] , \label{ff1} \end{equation} where $\rho_{\mathcal{E}_{A^{\prime}\rightarrow B_{1}^{\prime}}}^{\mu }:=\mathcal{I}_{A}\otimes\mathcal{E}_{A^{\prime}\rightarrow B_{1}^{\prime} }(\Phi_{AA^{\prime}}^{\mu})$ is associated with the first beamsplitter, and
$\mathbf{L}_{A|B_{1}^{\prime}E_{1}\cdots E_{M}}$ is a trace-preserving LOCC, local with respect to the cut $A|B_{1}^{\prime}E_{1}\cdots E_{M}$. Also note that, for any separable state $\sigma_{s}^{\mu}(A|B_{1}^{\prime})$, the output state \begin{equation}
\tilde{\sigma}_{s}^{\mu}=\mathbf{L}_{A|B_{1}^{\prime}E_{1}\cdots E_{M}}\left[
\sigma_{s}^{\mu}(A|B_{1}^{\prime})\otimes\bigotimes\nolimits_{i=1}^{M} \rho_{E_{i}}(\bar{n}_{i})\right] \label{ff2} \end{equation}
is separable with respect to the cut $A|B_{1}\cdots B_{M}$. As a result we have that \begin{gather}
E_{R(A|B_{1}\cdots B_{M})}(\rho_{\mathcal{E}})\overset{(1)}{\leq}\inf _{\tilde{\sigma}_{s}^{\mu}(A|B_{1}\cdots B_{M})}\underset{\mu\rightarrow
+\infty}{\lim\inf}S(\rho_{\mathcal{E}}^{\mu}||\tilde{\sigma}_{s}^{\mu })\nonumber\\
\overset{(2)}{\leq}\inf_{\sigma_{s}^{\mu}(A|B_{1}^{\prime})}\underset
{\mu\rightarrow+\infty}{\lim\inf}S(\rho_{\mathcal{E}_{A^{\prime}\rightarrow B_{1}^{\prime}}}^{\mu}||\sigma_{s}^{\mu}):=\Phi(\mathcal{E}_{A^{\prime }\rightarrow B_{1}^{\prime}}), \end{gather}
where we use: (1) the fact that $\tilde{\sigma}_{s}^{\mu}(A|B_{1}\cdots B_{M})$ are specific types of $\sigma_{s}^{\mu}(A|B_{1}\cdots B_{M})$; and (2) monotonicity and additivity of the relative entropy with respect to the decompositions in Eqs.~(\ref{ff1}) and~(\ref{ff2}).
Because $\mathcal{E}_{A^{\prime}\rightarrow B_{1}^{\prime}}$ is a thermal-loss channel with transmissivity $\eta_{0}$ and mean photon number $\bar{n}_{0}$, its entanglement flux is bounded by~\cite{PLOB} \begin{equation} \Phi(\mathcal{E}_{A^{\prime}\rightarrow B_{1}^{\prime}})\leq-\log_{2}\left[ (1-\eta_{0})\eta_{0}^{\bar{n}_{0}}\right] -h(\bar{n}_{0}), \label{LossUB} \end{equation} for $\bar{n}_{0}<\eta_{0}/(1-\eta_{0})$, while zero otherwise. Here we set \begin{equation} h(x):=(x+1)\log_{2}(x+1)-x\log_{2}x. \label{hEntropyMAIN} \end{equation} Thus, we find that the capacity between Alice and the $i$th Bob must satisfy \begin{equation} \mathcal{C}^{i}\leq\left\{ \begin{array} [c]{c} -\log_{2}\left[ (1-\eta_{0})\eta_{0}^{\bar{n}_{0}}\right] -h(\bar{n} _{0})~~~~\text{for~~}\bar{n}_{0}<\frac{\eta_{0}}{1-\eta_{0}},\\ \\ 0~~~~\text{for~~}\bar{n}_{0}\geq\frac{\eta_{0}}{1-\eta_{0}} .~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \end{array} \right. \label{Cithermal} \end{equation} As expected, the first beamsplitter is a universal bottleneck which restricts the capacities between Alice and any of the receiving Bobs.
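A direct numerical transcription of the bound in Eq.~(\ref{Cithermal}) reads as follows (our own sketch; the function names are illustrative):

```python
import numpy as np

def h(x):
    """h(x) = (x+1) log2(x+1) - x log2(x), with h(0) = 0."""
    if x == 0:
        return 0.0
    return (x + 1) * np.log2(x + 1) - x * np.log2(x)

def thermal_loss_bound(eta0, n0):
    """Upper bound on C^i for the thermal-loss broadcast channel:
    -log2[(1 - eta0) eta0^n0] - h(n0) for n0 < eta0/(1 - eta0),
    and zero at or above that threshold."""
    if n0 >= eta0 / (1 - eta0):
        return 0.0
    return -np.log2((1 - eta0) * eta0 ** n0) - h(n0)

assert thermal_loss_bound(0.5, 0) == 1.0   # no thermal noise: -log2(1 - eta0)
assert thermal_loss_bound(0.5, 1) == 0.0   # at threshold eta0/(1 - eta0) = 1
```

The two assertions check the noiseless limit and the vanishing of the bound at the threshold $\bar{n}_{0}=\eta_{0}/(1-\eta_{0})$.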
In the specific case of a lossy broadcast channel with no thermal noise ($\bar{n}_{i}=0$ for any $i$), Eq.~(\ref{Cithermal}) specializes to the simple bound \begin{equation} \mathcal{C}^{i}\leq-\log_{2}(1-\eta_{0})~. \end{equation} Let us note that, contrary to another work~\cite{Wilde} also inspired by Ref.~\cite{PLOB}, our analysis of the lossy broadcast channel builds upon a rigorous extension of channel simulation and teleportation stretching to CV systems, which includes a suitable generalization of the REE to asymptotic states. Since our results represent a rigorous extension of Ref.~\cite{PLOB}, they may also be used to solidify the claims presented in Ref.~\cite{Wilde} on the capacity region of the lossy broadcast channel. For further details see Ref.~\cite{TQCreview}.
Most importantly, notice that our derivation can be generalized to other bosonic broadcast channels, where the $M+1$ beamsplitters are replaced by arbitrary Gaussian unitaries $U_{A^{\prime}E_{0}},U_{B_{1}^{\prime}E_{1}},\ldots,U_{B_{M}^{\prime}E_{M}}$. In this general case, we repeat the previous reasoning to find that the capacities must satisfy the bottleneck relation \begin{equation} \mathcal{C}^{i}\leq\Phi(\mathcal{E}_{A^{\prime}\rightarrow B_{1}^{\prime}}), \end{equation} where the latter is the entanglement flux of the first Gaussian channel $\mathcal{E}_{A^{\prime}\rightarrow B_{1}^{\prime}}$, determined by the action of the Gaussian unitary $U_{A^{\prime}E_{0}}$ on the input mode $A^{\prime}$ and the thermal mode $E_{0}$.
\section{Quantum multiple-access channel\label{SECmulti}}
Let us now study multipoint-to-point quantum communication, i.e., a quantum multiple-access channel from $M$ senders (Alices) to a single receiver (Bob). This channel is a CPTP map from the Alices' input space $\mathcal{D}(\otimes_{i}\mathcal{H}_{a^{i}})$ to Bob's output space $\mathcal{D}(\mathcal{H}_{b})$. The most general adaptive protocol over this channel goes as follows. All the parties prepare their initial systems by means of a LOCC $\Lambda_{0}$. Then, the $i$th Alice picks the first system from her local ensemble, i.e., $a_{1}^{i}\in\mathbf{a}^{i}$. All the Alices' input systems are sent through the quantum multiple-access channel $\mathcal{E}$ with output $b_{1}$ for Bob, i.e., \begin{equation} a_{1}^{1},\ldots,a_{1}^{i},\ldots,a_{1}^{M}\overset{\mathcal{E}}{\rightarrow}b_{1}~. \end{equation} This is followed by another LOCC $\Lambda_{1}$ involving all parties. Bob's ensemble is updated as $b_{1}\mathbf{b}\rightarrow\mathbf{b}$. Then, there is the second transmission $\{\mathbf{a}^{i}\}\ni\{a_{2}^{i}\}\rightarrow b_{2}$ through $\mathcal{E}$, followed by another LOCC $\Lambda_{2}$ and so on. After $n$ uses, the $i$th Alice and Bob share an output state $\rho_{\mathbf{a}^{i}\mathbf{b}}^{n}$ which is epsilon-close to a target state with $nR_{i}^{n}$ bits.
The generic multiple-access capacity for the $i$th Alice is defined by maximizing the asymptotic rate over all the adaptive LOCCs $\mathcal{L} =\{\Lambda_{0},\Lambda_{1},\ldots\}$, i.e., we have $\mathcal{C}^{i} :=\sup_{\mathcal{L}}\lim_{n}R_{i}^{n}$. As before, by specifying the adaptive protocol to a particular task, one derives the entanglement distribution multiple-access capacity ($D_{2}^{i}$), the quantum multiple-access capacity ($Q_{2}^{i}$), the secret-key multiple-access capacity ($K^{i}$) and the private multiple-access capacity ($P_{2}^{i}$). These are all assisted by unlimited two-way CCs between the parties and satisfy $D_{2}^{i}=Q_{2}^{i}\leq K^{i}=P_{2}^{i}$.
Let us introduce the notion of teleportation-covariant multiple-access channel. For the sake of simplicity, this is explained for the case of two senders, with the extension to arbitrary $M$ senders being just a matter of technicalities. We also consider the case of DV\ channels, with the extension to CV\ channels left implicit and following the basic methodology of Sec.~\ref{review}. A quantum multiple-access channel is teleportation-covariant if, for any teleportation unitaries, $U_{k_{1}}^{1}$ and $U_{k_{2}}^{2}$, we may write \begin{equation} \mathcal{E}\left[ (U_{k_{1}}^{1}\otimes U_{k_{2}}^{2})\rho(U_{k_{1}} ^{1}\otimes U_{k_{2}}^{2})^{\dagger}\right] =V_{k}\mathcal{E}(\rho )V_{k}^{\dagger}, \label{stretchMULTI} \end{equation} for some unitary $V_{k}$, with $k$ depending on both $k_{1}$ and $k_{2}$. If this is the case, then we can replace $\mathcal{E}$ with teleportation over its Choi matrix, which is defined by sending halves of two EPR states through the channel, i.e., \begin{equation} \rho_{\mathcal{E}}=\mathcal{I}_{A^{1}A^{2}}\otimes\mathcal{E}_{A^{\prime 1}A^{\prime2}}(\Phi_{A^{1}A^{\prime1}}\otimes\Phi_{A^{2}A^{\prime2}}). \label{choi} \end{equation} See also Fig.~\ref{multiple} for further explanations.\begin{figure}\label{multiple}
\end{figure}
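To make the covariance condition concrete, consider (as a purely illustrative assumption, not an example from the text) the trivial multiple-access channel that discards the second sender's input, $\mathcal{E}(\rho)=\mathrm{Tr}_{2}(\rho)$. A short numerical sketch checks that it commutes with the Pauli (teleportation) unitaries as in Eq.~(\ref{stretchMULTI}), here with $V_{k}=U_{k_{1}}^{1}$:

```python
# Toy check of teleportation covariance for a simple two-sender channel:
# E(rho) = tr_2(rho), i.e., the second sender's input is discarded.
# For Pauli (teleportation) unitaries U1, U2 on the inputs we expect
# E[(U1 (x) U2) rho (U1 (x) U2)^dag] = U1 E(rho) U1^dag, so V_k = U_{k_1}.
# This channel is an illustrative assumption, not one from the paper.
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I, X, Y, Z]

def channel(rho):
    """Trace out the second qubit of a two-qubit state (4x4 -> 2x2)."""
    rho = rho.reshape(2, 2, 2, 2)
    return np.trace(rho, axis1=1, axis2=3)

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = A @ A.conj().T
rho /= np.trace(rho)          # random two-qubit density matrix

for U1 in paulis:
    for U2 in paulis:
        U = np.kron(U1, U2)
        lhs = channel(U @ rho @ U.conj().T)
        rhs = U1 @ channel(rho) @ U1.conj().T
        assert np.allclose(lhs, rhs)
print("teleportation covariance holds with V_k = U_{k_1}")
```

Because the correction unitary depends only on the first sender's teleportation index, this toy channel is teleportation-covariant in the sense required above.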
By using the channel simulation, we may fully simplify any adaptive protocol performed over a teleportation-covariant multiple-access channel $\mathcal{E}$. In fact, each transmission through $\mathcal{E}$ can be replaced by double teleportation on its Choi matrix $\rho_{\mathcal{E}}$, with the Bell detections and Bob's correction unitary being included in the LOCCs of the protocol. By stretching $n$ uses of the adaptive protocol (see Fig.~\ref{mutlpleST}), we find that the total output state of Alice 1, Alice 2 and Bob reads \begin{equation} \rho_{\mathbf{a}^{1}\mathbf{a}^{2}\mathbf{b}}^{n}=\bar{\Lambda}\left( \rho_{\mathcal{E}}^{\otimes n}\right) . \end{equation} If we now trace out one of the two senders, e.g., Alice 2, we still have an LOCC between Alice 1 and Bob. In other words, we may write \begin{equation} \rho_{\mathbf{a}^{1}\mathbf{b}}^{n}=\bar{\Lambda}_{\mathbf{a}^{1}
\mathbf{a}^{2}|\mathbf{b}}\left( \rho_{\mathcal{E}}^{\otimes n}\right) , \end{equation}
where $\bar{\Lambda}_{\mathbf{a}^{1}\mathbf{a}^{2}|\mathbf{b}}$ is local with respect to the cut $\mathbf{a}^{1}\mathbf{a}^{2}|\mathbf{b}$ .\begin{figure}\label{mutlpleST}
\end{figure}
For Alice 1 and Bob ($i=1$), we can now write \begin{align} E_{R}(\rho_{\mathbf{a}^{1}\mathbf{b}}^{n}) & :=\inf_{\sigma_{s}
(\mathbf{a}^{1}|\mathbf{b})}S\left( \rho_{\mathbf{a}^{1}\mathbf{b}}
^{n}||\sigma_{s}\right) \nonumber\\
& \leq\inf_{\sigma_{s}(\mathbf{a}^{1}\mathbf{a}^{2}|\mathbf{b})}S\left(
\rho_{\mathcal{E}}^{\otimes n}||\sigma_{s}\right) :=E_{R(\mathbf{a}
^{1}\mathbf{a}^{2}|\mathbf{b})}(\rho_{\mathcal{E}}^{\otimes n})\nonumber\\
& \leq\inf_{\sigma_{s}(\mathbf{a}^{1}|\mathbf{a}^{2}|\mathbf{b})}S\left(
\rho_{\mathcal{E}}^{\otimes n}||\sigma_{s}\right) :=E_{R}(\rho_{\mathcal{E} }^{\otimes n}). \end{align} By applying the weak converse bound, we then derive \begin{equation} \mathcal{C}^{1}\leq\sup_{\mathcal{L}}\underset{n}{\lim}\frac{E_{R} (\rho_{\mathbf{a}^{1}\mathbf{b}}^{n})}{n}\leq E_{R(\mathbf{a}^{1}
\mathbf{a}^{2}|\mathbf{b})}^{\infty}(\rho_{\mathcal{E}})\leq E_{R}^{\infty }(\rho_{\mathcal{E}}), \end{equation} and using the subadditivity of the REE over tensor products, it is easy to show the single-letter version \begin{equation}
\mathcal{C}^{1}\leq E_{R(\mathbf{a}^{1}\mathbf{a}^{2}|\mathbf{b})} (\rho_{\mathcal{E}})\leq E_{R}(\rho_{\mathcal{E}}). \end{equation}
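The single-letter reduction follows from the subadditivity of the REE over tensor products: a tensor product of separable states is a (generally suboptimal) candidate in the infimum defining $E_{R}(\rho_{\mathcal{E}}^{\otimes n})$, so that
\begin{equation*}
E_{R}(\rho_{\mathcal{E}}^{\otimes n})\leq n\,E_{R}(\rho_{\mathcal{E}})\Longrightarrow E_{R}^{\infty}(\rho_{\mathcal{E}}):=\lim_{n}\frac{E_{R}(\rho_{\mathcal{E}}^{\otimes n})}{n}\leq E_{R}(\rho_{\mathcal{E}}),
\end{equation*}
and the same argument applies to the version computed with respect to the cut $\mathbf{a}^{1}\mathbf{a}^{2}|\mathbf{b}$.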
Note that the same bound holds for the capacity between Alice 2 and Bob ($i=2$). The reasoning readily extends to arbitrary $M$ senders, so that the capacity between the $i$th Alice and Bob satisfies \begin{equation}
\mathcal{C}^{i}\leq E_{R(\mathbf{a}^{1}\cdots\mathbf{a}^{M}|\mathbf{b})} (\rho_{\mathcal{E}})\leq E_{R}(\rho_{\mathcal{E}}):=\Phi(\mathcal{E}), \end{equation} where $\Phi(\mathcal{E})$ is the entanglement flux of the quantum multiple-access channel. As previously mentioned, the result can be extended to CV systems by employing asymptotic simulations and extending the notions.
\section{All-in-all quantum communication\label{SECall}}
In this section we extend our technique to a single-hop quantum network involving multiple ($M_{A}$) senders and multiple ($M_{B}$)\ receivers, which is also known as a quantum interference channel. This is a CPTP map from Alices' input space $\mathcal{D}(\otimes_{i=1}^{M_{A}}\mathcal{H}_{a^{i}})$ to Bobs' output space $\mathcal{D}(\otimes_{j=1}^{M_{B}}\mathcal{H}_{b^{j}})$. As a straightforward generalization of the previous cases, the most general adaptive protocol over this channel can be described as follows. At the initial stage the parties exploit a LOCC $\Lambda_{0}$ for their systems' preparation. Then, each Alice picks the first system from her local ensemble $a_{1}^{i}\in\mathbf{a}^{i}$. The inputs of all Alices are sent to all Bobs through channel $\mathcal{E}$, resulting in the outputs $\{b_{1}^{j}\}$, i.e., \begin{equation} a_{1}^{1},\ldots,a_{1}^{i},\ldots,a_{1}^{M_{A}}\overset{\mathcal{E} }{\rightarrow}b_{1}^{1},\ldots,b_{1}^{j},\ldots,b_{1}^{M_{B}}~. \end{equation} After this first transmission, there is another LOCC $\Lambda_{1}$, after which all Bobs' ensembles are updated as $b_{1}^{j}\mathbf{b}^{j}\rightarrow \mathbf{b}^{j}$. Next, there is the second transmission $\mathbf{a}^{i} \ni\{a_{2}^{i}\}\rightarrow\{b_{2}^{j}\}$ through $\mathcal{E}$, followed by another LOCC $\Lambda_{2}$ and so on.
Thus, after $n$ uses of the channel, the $i$th Alice and the $j$th Bob share an output state $\rho_{\mathbf{a}^{i}\mathbf{b}^{j}}^{n}$, which is $\epsilon $-close to a target state of $nR_{ij}^{n}$ bits. By maximizing the asymptotic rate over all the adaptive LOCCs $\mathcal{L}=\{\Lambda_{0},\Lambda_{1} ,\ldots\}$ we can define the generic interference capacity for the $i$th Alice and the $j$th Bob as \begin{equation} \mathcal{C}^{ij}:=\sup_{\mathcal{L}}\lim_{n}R_{ij}^{n}~. \end{equation} As usual, depending on the task, one specifies four different capacities assisted by unlimited two-way CCs: the entanglement distribution capacity ($D_{2}^{ij}$), the quantum capacity ($Q_{2}^{ij}$), the secret-key capacity ($K^{ij}$) and the private capacity ($P_{2}^{ij}$)\ of the quantum interference channel (with $D_{2}^{ij}=Q_{2}^{ij}\leq K^{ij}=P_{2}^{ij}$).
As for the case of the broadcast and the multiple-access channels, we bound these capacities by combining the REE with teleportation stretching. We proceed by considering two senders and two receivers, with the extension to arbitrary $M_{A}$ and $M_{B}$ being just a matter of technicalities. The definition of a teleportation-covariant quantum interference channel relies once again on commutation with teleportation, i.e., for any teleportation unitaries $U_{k_{1}}^{1}$ and $U_{k_{2}}^{2}$ we must have \begin{equation} \mathcal{E}\left[ (U_{k_{1}}^{1}\otimes U_{k_{2}}^{2})\rho(U_{k_{1}} ^{1}\otimes U_{k_{2}}^{2})^{\dagger}\right] =\mathcal{V}\mathcal{E} (\rho)\mathcal{V}^{\dagger}, \label{allSTRE} \end{equation} where $\mathcal{V}=V_{l_{1}}^{1}\otimes V_{l_{2}}^{2}$ for unitaries $V_{l_{1}}^{1}$ and $V_{l_{2}}^{2}$, with both $l_{1}$ and $l_{2}$ depending on $k_{1}$ and $k_{2}$. If this condition holds, then the channel can be simulated by teleportation over its Choi matrix, which is formally defined as in Eq.~(\ref{choi}). See Fig.~\ref{bus1}\ for this simulation.\begin{figure}\label{bus1}
\end{figure}
Thus, an adaptive protocol can be simplified since each use of channel $\mathcal{E}$ can be replaced by teleportation, with both the Bell detections and Bobs' correction unitaries becoming part of the LOCCs. By stretching $n$ uses of the channel (see Fig.~\ref{bus2}), we have the following output state shared between Alice~1, Alice~2, Bob~1 and Bob~2 \begin{equation} \rho_{\mathbf{a}^{1}\mathbf{a}^{2}\mathbf{b}^{1}\mathbf{b}^{2}}^{n}=\bar{\Lambda }(\rho_{\mathcal{E}}^{\otimes n}). \end{equation} By tracing over one sender and one receiver, say Alice~2 and Bob~2, we then derive \begin{equation} \rho_{\mathbf{a}^{1}\mathbf{b}^{1}}^{n}=\bar{\Lambda}_{\mathbf{a}
^{1}\mathbf{a}^{2}|\mathbf{b}^{1}\mathbf{b}^{2}}(\rho_{\mathcal{E}}^{\otimes n})~, \end{equation}
where $\bar{\Lambda}_{\mathbf{a}^{1}\mathbf{a}^{2}|\mathbf{b}^{1}
\mathbf{b}^{2}}$ is a trace-preserving LOCC between Alice~1 and Bob~1, local with respect to the cut $\mathbf{a}^{1}\mathbf{a}^{2}|\mathbf{b}^{1} \mathbf{b}^{2}$. \begin{figure}\label{bus2}
\end{figure}
It follows that the capacity for Alice 1 and Bob 1 ($i=j=1$) is upper bounded by
\begin{equation} \mathcal{C}^{11}\leq\sup_{\mathcal{L}}\underset{n}{\lim}\frac{E_{R} (\rho_{\mathbf{a}^{1}\mathbf{b}^{1}}^{n})}{n}\leq E_{R(\mathbf{a}
^{1}\mathbf{a}^{2}|\mathbf{b}^{1}\mathbf{b}^{2})}^{\infty}(\rho_{\mathcal{E} })\leq E_{R}^{\infty}(\rho_{\mathcal{E}}). \end{equation} In terms of single-letter bounds we find \begin{equation}
\mathcal{C}^{11}\leq E_{R(\mathbf{a}^{1}\mathbf{a}^{2}|\mathbf{b} ^{1}\mathbf{b}^{2})}(\rho_{\mathcal{E}})\leq E_{R}(\rho_{\mathcal{E}}). \end{equation} Clearly, we find the same result in all other cases, i.e., for any sender-receiver pair $(i,j)$. In general, for arbitrary $M_{A}$ senders and $M_{B}$\ receivers, we may write \begin{equation} \mathcal{C}^{ij}\leq E_{R(\mathbf{a}^{1}\cdots\mathbf{a}^{M_{A}}
|\mathbf{b}^{1}\cdots\mathbf{b}^{M_{B}})}(\rho_{\mathcal{E}})\leq E_{R} (\rho_{\mathcal{E}}):=\Phi(\mathcal{E}), \end{equation} where $\Phi(\mathcal{E})$\ is the entanglement flux of the quantum interference channel. The extension to CV systems exploits asymptotic simulations along the lines of Sec.~\ref{review}.
\section{Conclusions\label{SECconclu}}
In this work we have studied the capacities for quantum communication, entanglement distribution and secret key generation in a single-hop quantum network, involving a direct channel between multiple senders and/or multiple receivers. More precisely, we have considered the quantum broadcast channel (point-to-multipoint), the multiple-access channel (multipoint-to-point), and the quantum interference channel (multipoint-to-multipoint), assuming that all the parties may apply the most general local operations assisted by unlimited two-way CCs (adaptive protocols).
By extending the methodology of Ref.~\cite{PLOB}, which suitably combines the relative entropy of entanglement (REE) with teleportation stretching, we have reduced the most general adaptive protocols implemented over these multipoint channels to the computation of a one-shot quantity, namely their entanglement flux (i.e., the REE of their Choi matrix). This is achieved in any dimension, finite or infinite (CV channels).
Further research should be directed toward showing how a rigorous application of our reduction method can be used to upper-bound the entire capacity regions of multipoint channels, defined as the convex closure of the set of all the rates which are achievable by the parties assisted by unlimited two-way CCs. Other important directions relate to the study of multipoint quantum communication within a multi-hop quantum network, following the general methods and results of Ref.~\cite{networkPIRS}.
\textbf{Acknowledgments}. This work has been supported by the EPSRC via the `UK Quantum Communications Hub' (EP/M013472/1). The authors would like to thank S. L. Braunstein, S. Lloyd, G. Spedalieri, and C. Ottaviani for valuable feedback and comments.
\end{document}
\begin{document}
\title{A violation of the uncertainty principle implies a violation of the second law of thermodynamics}
\author{Esther H\"anggi} \email[]{esther@locc.la} \affiliation{Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, 117543 Singapore} \author{Stephanie Wehner} \email[]{steph@locc.la} \affiliation{Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, 117543 Singapore} \date{\today}
\begin{abstract} Uncertainty relations state that there exist certain incompatible measurements whose outcomes cannot be simultaneously predicted. While the exact incompatibility of quantum measurements dictated by such uncertainty relations can be inferred from the mathematical formalism of quantum theory, the question remains whether there is any more fundamental reason for the uncertainty relations to have this exact form. What, if any, would be the operational consequences if we were able to go beyond any of these uncertainty relations? We give a strong argument that justifies uncertainty relations in quantum theory by showing that violating them implies that it is also possible to violate the second law of thermodynamics. More precisely, we show that violating the uncertainty relations in quantum mechanics leads to a thermodynamic cycle with positive net work gain, which is very unlikely to exist in nature. \end{abstract}
\maketitle
Many features commonly associated with quantum physics, such as the uncertainty principle~\cite{heisenberg27} or non-locality~\cite{bell}, appear highly counter-intuitive at first sight. The fact that quantum mechanics is more non-local than any classical theory~\cite{bell}, but yet more limited~\cite{tsirel:original,tsirel:separated} than what the no-signalling principle alone demands~\cite{PR,PR1,PR2}, has been the subject of much investigation~\cite{function,glance,s3:nonlocal,causality,OppenheimWehner}. Several reasons and principles were put forward that explain the origin of such quantum mechanical limits~\cite{s3:nonlocal,causality,OppenheimWehner}.
In~\cite{OppenheimWehner} it was shown that the amount of non-locality in quantum mechanics is indeed directly related to another fundamental quantum mechanical limit, namely the uncertainty principle~\cite{heisenberg27}. This forged a relation between two fundamental quantum mechanical concepts. We may, however, still ask why the uncertainty principle itself is not stronger or weaker than predicted by quantum physics, and what would happen if it were.
Here we relate this question to the second law of thermodynamics. We show that any violation of uncertainty relations in quantum mechanics also leads to a violation of the second law.
\section{Background}
To state our result, we need to explain three different concepts. First, we need some properties of generalized physical theories (see e.g.~\cite{barrett,leifer,hardy,dariano,howard:survey}). Second, we recall the concept of uncertainty relations, and finally the second law of thermodynamics.
{\bf Physical theories} While our result is not hard to prove for quantum theory, we extend it to some more general physical theories. These are described by a probabilistic framework that makes the minimal assumption that there are \emph{states} and \emph{measurements} which can be made on a physical system (see, e.g.,~\cite{teleportation,entropy}). Even for general theories, we denote a state as $\rho \in \Omega$, where $\Omega$ is a convex state space. In quantum mechanics, $\rho$ is simply a density matrix. The assumption that the state space is convex is generally made~\cite{howard:survey} and says that if we can prepare states $\rho_1$ and $\rho_2$, then the probabilistic mixture $\rho = \rho_1/2 + \rho_2/2$ prepared by tossing a coin and preparing $\rho_1$ or $\rho_2$ with probability $1/2$ each is also an element of $\Omega$. A state is called \emph{pure} if it cannot be written as a convex combination of other states. Measurements consist of linear functionals $e_j: \Omega \rightarrow [0,1]$ called \emph{effects}. We call an effect $e_j$ \emph{pure} if it cannot be written as a
positive linear combination of any other allowed effects. Intuitively, each effect corresponds to a possible measurement outcome, where $p(e_j|\rho) = e_j(\rho)$
is the probability of obtaining ``outcome'' $e_j$ given the state $\rho$. More precisely, a measurement is thus given by $\textbf{e} = \{e_j \mid \sum_j p(e_j|\rho)=1\}$. For quantum mechanics, we will simply label effects by measurement operators. For example, a projective measurement in the eigenbasis $\{0_Z,1_Z\}$ of the Pauli $Z$
operator is denoted by $p(0_Z|\rho) = \operatorname{tr}(\proj{0_Z}\rho)$. The assumption that effects are linear, i.e., that $p(e_j|\rho)$ is linear in $\rho$, is made for essentially all probabilistic theories~\cite{howard:survey} and says that when we prepare a probabilistic mixture of states, the distribution of measurement outcomes scales accordingly.
{\bf Uncertainty relations} A modern way of quantifying uncertainty~\cite{deutsch83,maassen88} is by means of \emph{entropic uncertainty relations} (see~\cite{WehnerWinter} for a survey), or the closely related \emph{fine-grained uncertainty relations}~\cite{OppenheimWehner}. Here we will use the latter. For our cycle we will only need two measurements with two outcomes, each measurement being chosen with probability $1/2$, so we state the definition only for this simple case. Let $\textbf{f} = \{f_0,f_1\}$ and $\textbf{g} = \{g_0,g_1\}$ denote the two measurements with effects $f_{y_1}$ and $g_{y_2}$ respectively. A fine-grained uncertainty relation for these measurements is a set of inequalities \begin{align}
\label{eq:fineGrained}
\left\lbrace \forall \rho:\ \frac{1}{2}\left(p(f_{y_1}|\rho) + p(g_{y_2}|\rho)\right) \leq \zeta_{\vec{y}} \middle|
\vec{y} \in \{0,1\}^2
\right\rbrace\,. \end{align} To see why this quantifies uncertainty, note that if $\zeta_{\vec{y}} < 1$ for some $\vec{y} = (y_1,y_2)$, then we have that if the outcome is certain for one of the
measurements (e.g., $p(f_{y_1}|\rho) = 1$) it is uncertain ($p(g_{y_2}|\rho) < 1$) for the other. As an example from quantum mechanics, consider measurements in the $X = \{0_X,1_X\}$ and $Z = \{0_Z,1_Z\}$ eigenbases.~\footnote{We use the common convention of labelling the $X$ and $Z$ eigenbases states as $\{\ket{+},\ket{-}\}$ and $\{\ket{0},\ket{1}\}$ respectively.} We then have for all pure quantum states $\rho$ \begin{align}\label{eq:quantumFG}
\frac{1}{2}\left(p(0_X|\rho) + p(0_Z|\rho)\right) \leq \frac{1}{2}+ \frac{1}{2\sqrt{2}}\,. \end{align} The same relation holds for all other pairs of outcomes $(0_X,1_Z)$,$(1_X,0_Z)$ and $(1_X,1_Z)$. Depending on $\vec{y}$, the eigenstates of either $(X+Z)/\sqrt{2}$ or $(X-Z)/\sqrt{2}$ saturate these inequalities. A state that saturates a particular inequality is also called a \emph{maximally certain state}~\cite{OppenheimWehner}.
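The bound~\eqref{eq:quantumFG} can also be checked numerically: since the left-hand side is linear in $\rho$, its maximum over states is the largest eigenvalue of $\frac{1}{2}(\proj{+}+\proj{0})$. A minimal sketch (illustrative, not part of the original argument):

```python
# Numerical check of the fine-grained bound zeta = 1/2 + 1/(2*sqrt(2)):
# max_rho (p(0_X|rho) + p(0_Z|rho))/2 = lambda_max( (|+><+| + |0><0|)/2 ).
import numpy as np

plus = np.array([1., 1.]) / np.sqrt(2)    # |+>, eigenstate of X
zero = np.array([1., 0.])                 # |0>, eigenstate of Z
M = (np.outer(plus, plus) + np.outer(zero, zero)) / 2

lam, vec = np.linalg.eigh(M)
zeta = lam[-1]                            # largest eigenvalue
print(zeta)                               # 0.8535... = 1/2 + 1/(2*sqrt(2))

# The maximizer (top eigenvector of M) coincides, up to sign, with the +1
# eigenstate of (X + Z)/sqrt(2), i.e., with a maximally certain state.
XplusZ = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)
w, v = np.linalg.eigh(XplusZ)
assert np.isclose(abs(np.vdot(vec[:, -1], v[:, -1])), 1)
```

The same computation with $\proj{+}$ replaced by $\proj{-}$, or $\proj{0}$ by $\proj{1}$, reproduces the bound for the remaining outcome pairs.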
For any theory, such as quantum mechanics, in which there is a direct correspondence between \emph{states} and \emph{measurements}, uncertainty relations can also be stated in terms of \emph{states} instead of measurements. More precisely, uncertainty relations can be written in terms of states if pure effects and pure states are dual to each other, in the sense that for any pure effect $f$ there exists a corresponding pure state $\rho_f$, and conversely for every pure state $\sigma$ an effect
$e_\sigma$ such that $p(f|\sigma) = p(e_\sigma|\rho_f)$. Here, we restrict ourselves to theories that exhibit such a duality. This is often (but not always) assumed~\cite{howard:survey,entropy}. As a quantum mechanical example, consider the effect $f = 0_X$ and the state $\sigma = \proj{0}$. We then
have $p(f|\sigma) = \operatorname{tr}(\proj{+}\sigma) = \operatorname{tr}(\proj{+}\proj{0}) = p(e_\sigma|\rho_f)$ with $\rho_f = \proj{+}$ and $e_{\sigma} = 0_Z$.
For measurements $\textbf{f} = \{f_0,f_1\}$ and $\textbf{g} = \{g_0,g_1\}$ consisting of pure effects, let $\{\rho_{f_0},\rho_{f_1}\}$ and $\{\rho_{g_0},\rho_{g_1}\}$ denote the corresponding dual states. The equations of~\eqref{eq:fineGrained} then take the dual form \begin{align}\label{eq:stateUR}
\forall \mbox{ pure effects } e:\ \frac{1}{2}\left(p(e|\rho_{f_{y_1}}) + p(e|\rho_{g_{y_2}})\right) \leq \zeta_{\vec{y}}\,. \end{align} For our quantum example of measuring in the $X$ and $Z$ eigenbasis, we have $\rho_{0_X} = \proj{+}$, $\rho_{1_X} = \proj{-}$, $\rho_{0_Z} = \proj{0}$ and $\rho_{1_Z} = \proj{1}$. We then have that for all pure quantum effects $e$ \begin{align}
\frac{1}{2}\left(p(e|\rho_{0_X}) + p(e|\rho_{0_Z})\right) \leq \frac{1}{2}+ \frac{1}{2\sqrt{2}}\,. \end{align} The same relation holds for all other pairs $(0_X,1_Z)$, $(1_X,0_Z)$ and $(1_X,1_Z)$. Again, measurement effects from the eigenstates of either $(X+Z)/\sqrt{2}$ or $(X-Z)/\sqrt{2}$ saturate these inequalities. In analogy with maximally certain states, we refer to effects that saturate the inequalities~\eqref{eq:stateUR} as \emph{maximally certain effects}. From now on, we will always consider uncertainty relations in terms of~\emph{states}.
{\bf 2nd law} The second law of thermodynamics is usually stated in terms of entropies. One way to state it is to say that the entropy of an isolated system cannot decrease. These entropies can be defined for general physical theories even for systems which are not described by the quantum formalism~\cite{entropy,barnum:entropy,japanese:entropy} (see appendix). However, for our case it will be sufficient to consider one operational consequence of the second law of thermodynamics~\cite{peres,demon}: there cannot exist a cyclic physical process with a net work gain over the cycle.
\section{Result}
Our main result is that if it were possible to violate the fine-grained uncertainty relations predicted by quantum physics, then we could create a cycle with net work gain. This holds for \emph{any} two projective measurements with two outcomes on a qubit. By the results of~\cite{OppenheimWehner}, which showed that the amount of non-locality is solely determined by the uncertainty relations of quantum mechanics and our ability to steer, our result extends to a link between the amount of non-locality and the second law of thermodynamics.
In the following we focus on the quantum case, i.e., in the situation where all the properties except the uncertainty relations hold as for quantum theory. In the appendix, we extend our result to more general physical theories that satisfy certain assumptions. In essence, different forms of entropies coincide in quantum mechanics, but can differ in more general theories~\cite{entropy}. This has consequences on whether a net work gain in our cycle is due to a violation of uncertainty alone, or can also be understood as the closely related question of whether certain entropies can differ.
Let us now first state our result for quantum mechanics more precisely. We consider the following process as depicted in Figure~\ref{fig:imp}. We start with a box which contains two types of particles described by states $\rho_0$ and $\rho_1$ in two separated volumes. The state $\rho_0$ is the equal mixture of the eigenstates $\rho_{f_0}$ and $\rho_{g_0}$ of two measurements (observables) $\textbf{f} = \{f_0,f_1\}$ and $\textbf{g} = \{g_0,g_1\}$. The state $\rho_1$ is the equal mixture of $\rho_{f_1}$ and $\rho_{g_1}$. We choose the measurements such that the equal mixture $\rho = (\rho_0 + \rho_1)/2$ is the completely mixed state in dimension $2$. We then replace the wall separating $\rho_0$ from $\rho_1$ by two semi-transparent membranes, i.e., membranes which measure any arriving particle in a certain basis $\textbf{e} = \{e_0,e_1\}$ and only let it pass for a certain outcome. In the first part of the cycle we separate the two membranes until they are in equilibrium, which happens when the state everywhere in the box can be described as $\rho$. Then, in the second part of the cycle, we separate $\rho$ again into its different components.
\begin{figure*}
\caption{The impossible process.}
\label{fig:imp}
\end{figure*}
We find that the total work which can be extracted by performing this cycle is given by \begin{align} \nonumber \Delta W &=
NkT \ln2 \left( \sum_{i=0}^1 p_i S(\rho_i) \right. \\ \nonumber & \quad \left. -\frac{1}{2} H\left(\zeta_{(f_0,g_0)}\right) -\frac{1}{2} H\left(\zeta_{(f_1,g_1)}\right)
\right)\,. \end{align} Here, $S(\rho) = - \operatorname{tr}(\rho \log \rho)$ is the \emph{von Neumann entropy} of the state. The entropy $H$ appearing in the above expression is simply the Shannon entropy of the distribution over measurement outcomes when measuring in the basis $\textbf{f}$ and $\textbf{g}$, respectively.~\footnote{The Shannon entropy of a probability distribution $\{p_1,\ldots,p_d\}$ is given by $H(\{p_1,\ldots,p_d\}) = - \sum_j p_j \log p_j$. All logarithms in this paper are to base $2$.}
{\bf Example} To illustrate our result, consider the concrete quantum example, where the states are given by \begin{align} \label{eq:stateDef} \rho_0 &= \frac{1}{2}\left( \rho_{0_X}+ \rho_{0_Z} \right)=\frac{\mathds{1} + \frac{X+Z}{2}}{2}\ \text{and}\\ \nonumber \rho_1&=\frac{1}{2}\left( \rho_{1_{X}}+ \rho_{1_{Z}} \right)= \frac{\mathds{1} - \frac{X+Z}{2}}{2} \,. \end{align} The work which can be extracted from the cycle then becomes \begin{align} \nonumber \Delta W &=
NkT \ln2 \left( H\left( \frac{1}{2}+\frac{1}{2\sqrt{2}}\right) \right. \\ \nonumber & \quad \left. -\frac{1}{2} H\left(\zeta_{(0_X,0_Z)}\right) -\frac{1}{2} H\left(\zeta_{(1_X,1_Z)}\right)
\right)\,. \end{align} The fine-grained uncertainty relations predict in the quantum case that $\zeta_{(0_X,0_Z)}$ and $\zeta_{(1_X,1_Z)}$ are at most $\frac{1}{2}+\frac{1}{2\sqrt{2}}$. We see that a theory which can violate this uncertainty relation, i.e., reach a larger value of $\zeta$, would lead to $\Delta W>0$ --- a violation of the second law of thermodynamics.
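In units of $NkT\ln 2$, the extractable work is simply the entropy difference $H\!\left(\tfrac{1}{2}+\tfrac{1}{2\sqrt{2}}\right)-H(\zeta)$ when both $\zeta$'s take a common value $\zeta$. A small numerical sketch makes the sign explicit (the value $\zeta=0.9$ is an arbitrary illustration of a hypothetical violation):

```python
# Delta W / (N k T ln 2) = H(1/2 + 1/(2*sqrt(2))) - H(zeta), where H is the
# binary Shannon entropy. At the quantum value of zeta the gain vanishes;
# any violation (larger zeta) makes it strictly positive.
import numpy as np

def H(p):
    """Binary Shannon entropy in bits."""
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

q = 0.5 + 0.5 / np.sqrt(2)      # quantum bound 1/2 + 1/(2*sqrt(2)) ~ 0.854

def work_gain(zeta):
    # bracket of Delta W for zeta_(0X,0Z) = zeta_(1X,1Z) = zeta
    return H(q) - H(zeta)

print(work_gain(q))             # 0.0: no net work in quantum theory
print(work_gain(0.9) > 0)       # True: a violation yields net work
```

Since $H$ is strictly decreasing on $[1/2,1]$, any $\zeta$ above the quantum bound gives a strictly positive gain.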
\section{Methods}
We now explain in more detail how we obtain the work which can be extracted from the cycle in quantum mechanics. In the appendix, we consider the case of general physical theories.
\subsection*{First part of the cycle}
For the first part of the cycle we start with two separate parts of the box in each of which there are $N/2$ particles in the states $\rho_0$ and $\rho_1$ respectively. These states are described by \begin{align} \nonumber \rho_0 &= \frac{1}{2}\left( \rho_{f_0}+ \rho_{g_0} \right)\ \text{and}\\ \nonumber \rho_1&=\frac{1}{2}\left( \rho_{f_1}+ \rho_{g_1} \right) \,, \end{align} where $\textbf{f} = \{f_0,f_1\}$ and $\textbf{g} = \{g_0,g_1\}$ are chosen such that the state $\rho=\rho_0/2+\rho_1/2$ corresponds to the completely mixed state in dimension $2$. We then make a projective measurement $\textbf{e}=\{e_0,e_1\}$ with two possible outcomes denoted by $0,1$. More precisely, we insert two semi-transparent membranes instead of the wall separating the two volumes. One of the membranes is transparent to $e_0$ but completely opaque to $e_1$, while the other lets the particle pass if the outcome is $e_1$, but not if it was $e_0$. Letting these membranes move apart until they are in equilibrium, we can extract work from the system. The equilibrium is reached when, on both sides of the membrane which is opaque for $e_1$, there is the same density of particles in this state, and similarly for the membrane which is opaque for $e_0$.
The work which can be extracted from the first part of the cycle (i.e., by going from {\ding{192}} to \ding{194} in Figure~\ref{fig:imp}) is given by the following (see appendix). \begin{align} \nonumber W &=
NkT \ln 2\left( 1
- \frac{1}{2} H\left(\frac{1}{2}p(e_0|\rho_{f_0})
+ \frac{1}{2}p(e_0|\rho_{g_0}) \right) \right. \\ \nonumber & \quad \left.
- \frac{1}{2} H\left(\frac{1}{2}p(e_1|\rho_{f_1})+ \frac{1}{2}p(e_1|\rho_{g_1}) \right) \right)\\ \nonumber & \leq NkT \ln 2\left( 1 - \frac{1}{2} H\left(\zeta_{(f_0,g_0)}\right) - \frac{1}{2} H\left(\zeta_{(f_1,g_1)}\right) \right)\,, \end{align} where we denoted by $\zeta$ the fine-grained uncertainty relations. The inequality can be saturated by choosing $e_0$ and $e_1$ to be maximally certain effects.~\footnote{It is easy to see that in quantum mechanics the maximally certain effects $e_0$ and $e_1$ do indeed form a complete measurement in dimension $2$.} Note that our argument is not specific to the outcome combinations $(f_0,g_0)$ and $(f_1,g_1)$ used in the fine-grained uncertainty relation; choosing the remaining two inequalities, corresponding to outcomes $(f_0,g_1)$ and $(f_1,g_0)$, leads to an analogous argument.
\textbf{Example} For our quantum example given by the states~\eqref{eq:stateDef} we obtain \begin{align} \nonumber W & \leq NkT \ln 2\left( 1 - \frac{1}{2} H\left(\zeta_{(0_X,0_Z)}\right) - \frac{1}{2} H\left(\zeta_{(1_X,1_Z)}\right) \right)\,. \end{align} Equality is attained by taking $\{e_0,e_1\}$ to be the maximally certain effects given by the two eigenstates of $(X+Z)/\sqrt{2}$.
\subsection*{Second part of the cycle}
In the second part we form a cycle (i.e., we go from {\ding{194}} to \ding{192} in Figure~\ref{fig:imp}). We start with the completely mixed state $\rho$. Denote the different pure components of $\rho$ by $\{q_j,\sigma_j\}_j$, i.e., $\rho=\sum_j q_j\sigma_j$. We can now `decompose' $\rho$ into its components by inserting a semi-transparent membrane which is opaque for a specific component $\sigma_j$, but completely transparent for all other components. Effectively, this membrane measures using the effects $h_{\sigma_j}$ that are dual to the states $\sigma_j$. This membrane is used to confine all states $\sigma_j$ in a volume $q_jV$. This is done for all components and we end up with a box where each component of $\rho$ is sorted in a volume proportional to its weight in the convex combination. This process needs work proportional to $S(\rho)$.
In a second step, we create the (pure) components $\tau_j^i$ of $\rho_0=\sum_j r_j^0 \tau_j^0$ and $\rho_1=\sum_j r_j^1 \tau_j^1$ from the pure components of $\rho$ and then `reassemble' the states $\rho_0$ and $\rho_1$. In order to do so, we subdivide the volumes containing $\sigma_j$ into smaller volumes, such that the number of particles contained in these smaller volumes is proportional to $p_0r_j^0$ and $p_1r_j^1$. The pure state contained in each small volume is then transformed into the pure state $\tau_j^0$ or $\tau_j^1$. Since these last states are also pure, no work is needed for this transformation. Finally, we `mix' the different components of $\rho_0$ together, which allows us to extract work $p_0S(\rho_0)$. Similarly we obtain work $p_1S(\rho_1)$ from $\rho_1$.
In total, the transformation $\rho \rightarrow \{p_i,\rho_i\}$ needs work \begin{align} \nonumber W&=NkT \ln 2 (S(\rho)-\sum_i p_i S(\rho_i))\,. \end{align}
{\bf Example} Returning to the example above and using that the two eigenvalues of $\rho$ are $1/2$, we obtain \begin{align} \nonumber S(\rho)&= -2\cdot \frac{1}{2} \log_2 \frac{1}{2}=1\,. \end{align} Both $\rho_0$ and $\rho_1$ have the two eigenvalues $\lbrace \frac{1}{2}+\frac{1}{2\sqrt{2}}, \frac{1}{2}-\frac{1}{2\sqrt{2}}\rbrace$. Therefore, \begin{align} \nonumber S(\rho_i)&= H\left( \frac{1}{2}+\frac{1}{2\sqrt{2}} \right)\approx H(0.85)\,. \end{align} The total work which has to be invested for this process is therefore given by \begin{align} \nonumber W&=NkT \ln2\left( 1- H\left( \frac{1}{2}+\frac{1}{2\sqrt{2}} \right)\right) \,. \end{align}
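The eigenvalues and entropies used in this example can be verified directly; a minimal numerical sketch:

```python
# Verify: rho_0 = (I + (X+Z)/2)/2 and rho_1 = (I - (X+Z)/2)/2 have
# eigenvalues 1/2 +- 1/(2*sqrt(2)), their equal mixture is maximally
# mixed, hence S(rho) = 1 while S(rho_i) = H(1/2 + 1/(2*sqrt(2))).
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

rho0 = (I2 + (X + Z) / 2) / 2
rho1 = (I2 - (X + Z) / 2) / 2
rho = (rho0 + rho1) / 2                 # completely mixed state

def S(r):
    """von Neumann entropy in bits."""
    lam = np.linalg.eigvalsh(r)
    lam = lam[lam > 1e-12]
    return float(-(lam * np.log2(lam)).sum())

print(np.linalg.eigvalsh(rho0))         # ~[0.1464, 0.8536]
print(S(rho))                           # 1.0
print(round(S(rho0), 4))                # ~0.6009 = H(0.8536)
```

The eigenvalues of $(X+Z)/2$ are $\pm 1/\sqrt{2}$, which is exactly why the spectrum of $\rho_0$ (and of $\rho_1$) is $\{1/2 \pm 1/(2\sqrt{2})\}$.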
\subsection*{Closing the cycle}
If we now perform the first and second process described above one after another (i.e., we perform a cycle, as depicted in Figure~\ref{fig:imp}), the total work which can be extracted is given by \begin{align} \nonumber \Delta W &= NkT \ln2 \left(
- \left( S(\rho)- \sum_i p_i S(\rho_i) \right) \right. \\ \nonumber & \quad \left. + \left( 1 - \frac{1}{2} H\left(\zeta_{(f_0,g_0)}\right) - \frac{1}{2} H\left(\zeta_{(f_1,g_1)}\right)\right)
\right)\,. \end{align} In general, we can see that when the uncertainty relation is violated, this quantity can become positive and a positive $\Delta W$ corresponds to a violation of the second law of thermodynamics.
{\bf Example} In our example, the above quantity corresponds to \begin{align} \nonumber \Delta W &=
NkT \ln2 \left( H\left( \frac{1}{2}+\frac{1}{2\sqrt{2}}\right) \right. \\ \nonumber & \quad \left. -\frac{1}{2} H\left(\zeta_{(0_X,0_Z)}\right) -\frac{1}{2} H\left(\zeta_{(1_X,1_Z)}\right)
\right)\,. \end{align} The fine-grained uncertainty relations for quantum mechanics state that $\zeta_{(0_X,0_Z)},\zeta_{(1_X,1_Z)}\leq \frac{1}{2}+\frac{1}{2\sqrt{2}}$. When this value is reached with equality, then $\Delta W=0$ in the above calculation.
On the other hand, if these values were larger, i.e., if the uncertainty relation could be violated, then their binary entropy would be smaller and $\Delta W$ would become positive.
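This conclusion is easy to verify numerically. The sketch below (an illustration, not taken from the paper) evaluates $\Delta W$ in units of $NkT\ln 2$: at the quantum bound $\zeta=\frac{1}{2}+\frac{1}{2\sqrt{2}}$ no net work is extracted, while any hypothetical larger certainty yields $\Delta W>0$.

```python
import math

def H(p):
    """Binary Shannon entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

zeta_qm = 0.5 + 0.5 / math.sqrt(2)   # quantum bound on the certainties

def delta_W(zeta0, zeta1):
    """Net work per cycle, in units of N k T ln 2, for certainties zeta."""
    return H(zeta_qm) - 0.5 * H(zeta0) - 0.5 * H(zeta1)

# Quantum mechanics saturates the fine-grained bound: no net work.
print(delta_W(zeta_qm, zeta_qm))
# A hypothetical violation (larger certainty) would yield net work.
print(delta_W(0.95, 0.95) > 0)
```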
\section{Discussion}
We have given a strong argument for why quantum mechanical uncertainty relations should not be violated. Indeed, as we show, a violation of the uncertainty relations would lead to an `impossible machine' which could extract net work from a cycle. Our result extends to theories more general than quantum theory; however, this raises the question of which general form of entropy~\cite{entropy} is the most significant. In quantum mechanics the different entropies of~\cite{entropy} coincide, meaning that if a physical theory were just like quantum mechanics, but with a different amount of uncertainty, net work could be extracted.
Our cycle is similar to the ones given in~\cite{peres,cost,demon}, which study related questions. Uncertainty relations as given in~\eqref{eq:fineGrained} can be understood as imposing a limit on how well one of several bits of information can be extracted from a qubit using the given measurements~\cite{OppenheimWehner}. This means that the amount of uncertainty for all pairs of measurements we could make directly limits how much classical information we can store in each qubit. Indeed, in any system that is finite dimensional (possibly due to an energetic constraint), the mere fact that we can only store a finite amount of information in a finite-dimensional system (Holevo's bound~\cite{holevo}) demands that non-commuting measurements obey uncertainty relations. This shows that our example is closely related to the ones given in~\cite{plenio,pleniovitelli,cost,demon}, where it has been shown that if it were possible to encode more than one bit of information in a qubit, and therefore to violate the Holevo bound~\cite{holevo}, then it would also be possible to violate the second law of thermodynamics.
In~\cite{peres}, similar consequences were shown to follow if one were able to perfectly distinguish non-orthogonal quantum states. The possibility of distinguishing non-orthogonal states is again directly related to the question of how much information can be stored in a quantum state.
In future work, it might be interesting to investigate whether an implication also holds in the other direction: does any violation of the second law lead to a violation of the uncertainty relations?
We have investigated the relation between uncertainty and the second law of thermodynamics. A concept related to uncertainty is that of complementarity. It is an open question whether a violation of complementarity could also be used to build such an impossible machine.
\textbf{Acknowledgments:} We thank Christian Gogolin, Markus M\"uller, and Jonathan Oppenheim for helpful discussions and comments on an earlier draft. EH and SW acknowledge support from the National Research Foundation (Singapore), and the Ministry of Education (Singapore).
\begin{thebibliography}{37} \expandafter\ifx\csname natexlab\endcsname\relax\def\natexlab#1{#1}\fi \expandafter\ifx\csname bibnamefont\endcsname\relax
\def\bibnamefont#1{#1}\fi \expandafter\ifx\csname bibfnamefont\endcsname\relax
\def\bibfnamefont#1{#1}\fi \expandafter\ifx\csname citenamefont\endcsname\relax
\def\citenamefont#1{#1}\fi \expandafter\ifx\csname url\endcsname\relax
\def\url#1{\texttt{#1}}\fi \expandafter\ifx\csname urlprefix\endcsname\relax\def\urlprefix{URL }\fi \providecommand{\bibinfo}[2]{#2} \providecommand{\eprint}[2][]{\url{#2}}
\bibitem[{\citenamefont{Heisenberg}(1927)}]{heisenberg27} \bibinfo{author}{\bibfnamefont{W.}~\bibnamefont{Heisenberg}},
\bibinfo{journal}{Zeitschrift f{\"u}r Physik} \textbf{\bibinfo{volume}{43}},
\bibinfo{pages}{172} (\bibinfo{year}{1927}).
\bibitem[{\citenamefont{Bell}(1964)}]{bell} \bibinfo{author}{\bibfnamefont{J.~S.} \bibnamefont{Bell}},
\bibinfo{journal}{Physics} \textbf{\bibinfo{volume}{1}}, \bibinfo{pages}{195}
(\bibinfo{year}{1964}).
\bibitem[{\citenamefont{Tsirelson}(1980)}]{tsirel:original} \bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{Tsirelson}},
\bibinfo{journal}{Letters in Mathematical Physics}
\textbf{\bibinfo{volume}{4}}, \bibinfo{pages}{93} (\bibinfo{year}{1980}).
\bibitem[{\citenamefont{Tsirelson}(1987)}]{tsirel:separated} \bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{Tsirelson}},
\bibinfo{journal}{Journal of Soviet Mathematics}
\textbf{\bibinfo{volume}{36}}, \bibinfo{pages}{557} (\bibinfo{year}{1987}).
\bibitem[{\citenamefont{Popescu and Rohrlich}(1994)}]{PR} \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Popescu}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Rohrlich}},
\bibinfo{journal}{Foundations of Physics} \textbf{\bibinfo{volume}{24}},
\bibinfo{pages}{379} (\bibinfo{year}{1994}).
\bibitem[{\citenamefont{Popescu and Rohrlich}(1996)}]{PR1} \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Popescu}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Rohrlich}}, in
\emph{\bibinfo{booktitle}{The dilemma of Einstein, Podolsky and Rosen, 60
years later: International symposium in honour of Nathan Rosen}}, edited by
\bibinfo{editor}{\bibfnamefont{A.}~\bibnamefont{Mann}} \bibnamefont{and}
\bibinfo{editor}{\bibfnamefont{M.}~\bibnamefont{Revzen}}
(\bibinfo{publisher}{Israel Physical Society, Haifa, Israel},
\bibinfo{year}{1996}).
\bibitem[{\citenamefont{Popescu and Rohrlich}(1997)}]{PR2} \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Popescu}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Rohrlich}}, in
\emph{\bibinfo{booktitle}{Proceedings of the Symposium of Causality and
Locality in Modern Physics and Astronomy: Open Questions and Possible
Solutions}}, edited by
\bibinfo{editor}{\bibfnamefont{G.}~\bibnamefont{Hunter}},
\bibinfo{editor}{\bibfnamefont{S.}~\bibnamefont{Jeffers}}, \bibnamefont{and}
\bibinfo{editor}{\bibfnamefont{J.-P.} \bibnamefont{Vigier}}
(\bibinfo{publisher}{Kluwer Academic Publishers, Dordrecht/Boston/London},
\bibinfo{year}{1997}), p. \bibinfo{pages}{383}.
\bibitem[{\citenamefont{Brassard et~al.}(2006)\citenamefont{Brassard, Buhrman,
Linden, M\'ethot, Tapp, and Unger}}]{function} \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Brassard}},
\bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Buhrman}},
\bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{Linden}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{M\'ethot}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Tapp}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{F.}~\bibnamefont{Unger}},
\bibinfo{journal}{Physical Review Letters} \textbf{\bibinfo{volume}{96}},
\bibinfo{pages}{250401} (\bibinfo{year}{2006}).
\bibitem[{\citenamefont{Navascu\'{e}s and Wunderlich}(2010)}]{glance} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Navascu\'{e}s}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Wunderlich}}, \bibinfo{journal}{Proceedings of the Royal
Society A: Mathematical, Physical and Engineering Science}
\textbf{\bibinfo{volume}{466}}, \bibinfo{pages}{881} (\bibinfo{year}{2010}).
\bibitem[{\citenamefont{Barnum et~al.}(2010{\natexlab{a}})\citenamefont{Barnum,
Beigi, Boixo, Elliot, and Wehner}}]{s3:nonlocal} \bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Barnum}},
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Beigi}},
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Boixo}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Elliot}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Wehner}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{104}},
\bibinfo{pages}{140401} (\bibinfo{year}{2010}{\natexlab{a}}).
\bibitem[{\citenamefont{Pawlowski et~al.}(2009)\citenamefont{Pawlowski,
Paterek, Kaszlikowski, Scarani, Winter, and Zukowski}}]{causality} \bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Pawlowski}},
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Paterek}},
\bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Kaszlikowski}},
\bibinfo{author}{\bibfnamefont{V.}~\bibnamefont{Scarani}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Winter}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Zukowski}},
\bibinfo{journal}{Nature} \textbf{\bibinfo{volume}{461}},
\bibinfo{pages}{1101} (\bibinfo{year}{2009}).
\bibitem[{\citenamefont{Oppenheim and Wehner}(2010)}]{OppenheimWehner} \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Oppenheim}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Wehner}},
\bibinfo{journal}{Science} \textbf{\bibinfo{volume}{330}},
\bibinfo{pages}{1072} (\bibinfo{year}{2010}).
\bibitem[{\citenamefont{Barrett}(2007)}]{barrett} \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Barrett}},
\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{75}},
\bibinfo{pages}{032304} (\bibinfo{year}{2007}).
\bibitem[{\citenamefont{Barnum et~al.}(2007)\citenamefont{Barnum, Barrett,
Leifer, and Wilce}}]{leifer} \bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Barnum}},
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Barrett}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Leifer}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Wilce}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{99}},
\bibinfo{pages}{240501} (\bibinfo{year}{2007}).
\bibitem[{\citenamefont{Hardy}(2001)}]{hardy} \bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Hardy}} (\bibinfo{year}{2001}),
\bibinfo{note}{quant-ph/0101012}.
\bibitem[{\citenamefont{D'Ariano}(2008)}]{dariano} \bibinfo{author}{\bibfnamefont{G.~M.} \bibnamefont{D'Ariano}}
(\bibinfo{year}{2008}), \bibinfo{note}{arXiv:0807.4383}.
\bibitem[{\citenamefont{Barnum and Wilce}(2008)}]{howard:survey} \bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Barnum}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Wilce}}
(\bibinfo{year}{2008}), \bibinfo{note}{arXiv:0908.2352}.
\bibitem[{\citenamefont{Barnum et~al.}(2008)\citenamefont{Barnum, Barrett,
Leifer, and Wilce}}]{teleportation} \bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Barnum}},
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Barrett}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Leifer}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Wilce}}, in
\emph{\bibinfo{booktitle}{Proceedings of the Clifford Lectures}}
(\bibinfo{year}{2008}), \eprint{0805.3553}.
\bibitem[{\citenamefont{Short and Wehner}(2010)}]{entropy} \bibinfo{author}{\bibfnamefont{A.~J.} \bibnamefont{Short}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Wehner}},
\bibinfo{journal}{New Journal of Physics} \textbf{\bibinfo{volume}{12}},
\bibinfo{pages}{033023} (\bibinfo{year}{2010}).
\bibitem[{\citenamefont{Deutsch}(1983)}]{deutsch83} \bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Deutsch}},
\bibinfo{journal}{Physical Review Letters} \textbf{\bibinfo{volume}{50}},
\bibinfo{pages}{631} (\bibinfo{year}{1983}).
\bibitem[{\citenamefont{Maassen and Uffink}(1988)}]{maassen88} \bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Maassen}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Uffink}},
\bibinfo{journal}{Physical Review Letters} \textbf{\bibinfo{volume}{60}},
\bibinfo{pages}{1103} (\bibinfo{year}{1988}).
\bibitem[{\citenamefont{Wehner and Winter}(2010)}]{WehnerWinter} \bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Wehner}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Winter}},
\bibinfo{journal}{New Journal of Physics} \textbf{\bibinfo{volume}{12}},
\bibinfo{pages}{025009} (\bibinfo{year}{2010}).
\bibitem[{\citenamefont{Barnum et~al.}(2010{\natexlab{b}})\citenamefont{Barnum,
Barrett, Clark, Leifer, Spekkens, Stepanik, Wilce, and
Wilke}}]{barnum:entropy} \bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Barnum}},
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Barrett}},
\bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Clark}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Leifer}},
\bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Spekkens}},
\bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{Stepanik}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Wilce}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Wilke}},
\bibinfo{journal}{New Journal of Physics} \textbf{\bibinfo{volume}{12}},
\bibinfo{pages}{033024} (\bibinfo{year}{2010}{\natexlab{b}}).
\bibitem[{\citenamefont{Kimura et~al.}(2010)\citenamefont{Kimura, Nuida, and
Imai}}]{japanese:entropy} \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Kimura}},
\bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Nuida}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Imai}},
\bibinfo{journal}{Rep. Math. Phys} \textbf{\bibinfo{volume}{66}},
\bibinfo{pages}{175} (\bibinfo{year}{2010}).
\bibitem[{\citenamefont{Peres}(1993)}]{peres} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Peres}},
\emph{\bibinfo{title}{Quantum theory: concepts and methods}}, Fundamental
theories of physics (\bibinfo{publisher}{Kluwer Academic},
\bibinfo{year}{1993}).
\bibitem[{\citenamefont{Maruyama et~al.}(2009)\citenamefont{Maruyama, Nori, and
Vedral}}]{demon} \bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Maruyama}},
\bibinfo{author}{\bibfnamefont{F.}~\bibnamefont{Nori}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{V.}~\bibnamefont{Vedral}},
\bibinfo{journal}{Reviews of Modern Physics} \textbf{\bibinfo{volume}{81}},
\bibinfo{pages}{1} (\bibinfo{year}{2009}).
\bibitem[{\citenamefont{Maruyama et~al.}(2005)\citenamefont{Maruyama, Brukner,
and Vedral}}]{cost} \bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Maruyama}},
\bibinfo{author}{\bibfnamefont{{\v{C}}.}~\bibnamefont{Brukner}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{V.}~\bibnamefont{Vedral}},
\bibinfo{journal}{Journal of Physics A: Mathematical and General}
\textbf{\bibinfo{volume}{38}}, \bibinfo{pages}{7175} (\bibinfo{year}{2005}).
\bibitem[{\citenamefont{Holevo}(1973)}]{holevo} \bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Holevo}},
\bibinfo{journal}{Problems of Information Transmission}
\textbf{\bibinfo{volume}{9}}, \bibinfo{pages}{3} (\bibinfo{year}{1973}).
\bibitem[{\citenamefont{Plenio}(1999)}]{plenio} \bibinfo{author}{\bibfnamefont{M.~B.} \bibnamefont{Plenio}},
\bibinfo{journal}{Physics Letters~A} \textbf{\bibinfo{volume}{263}},
\bibinfo{pages}{281 } (\bibinfo{year}{1999}).
\bibitem[{\citenamefont{Plenio and Vitelli}(2001)}]{pleniovitelli} \bibinfo{author}{\bibfnamefont{M.~B.} \bibnamefont{Plenio}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{V.}~\bibnamefont{Vitelli}},
\bibinfo{journal}{Contemporary Physics} \textbf{\bibinfo{volume}{42}},
\bibinfo{pages}{25} (\bibinfo{year}{2001}).
\bibitem[{\citenamefont{M{\"u}ller and Ududec}(2012)}]{markusprl} \bibinfo{author}{\bibfnamefont{M.~P.} \bibnamefont{M{\"u}ller}}
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Ududec}},
\bibinfo{journal}{Physical Review Letters} \textbf{\bibinfo{volume}{108}},
\bibinfo{pages}{130401} (\bibinfo{year}{2012}).
\bibitem[{\citenamefont{Pfister}(2012)}]{corsin:mthesis} \bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Pfister}}, Master's thesis,
\bibinfo{school}{ETH Zurich and CQT Singapore} (\bibinfo{year}{2012}),
\bibinfo{note}{arXiv:1203.5622}.
\bibitem[{\citenamefont{{Von Neumann}}(1955)}]{vonNeumann:entropyDef} \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{{Von Neumann}}},
\emph{\bibinfo{title}{Mathematische Grundlagen der Quantenmechanik}}
(\bibinfo{publisher}{Springer, Berlin}, \bibinfo{year}{1955}).
\bibitem[{\citenamefont{Barrett}(2011)}]{barrett:entropy} \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Barrett}}
(\bibinfo{year}{2011}), \bibinfo{note}{personal communication}.
\bibitem[{\citenamefont{M{\"u}ller and Oppenheim}(2012)}]{jonathan:entropy} \bibinfo{author}{\bibfnamefont{M.~P.} \bibnamefont{M{\"u}ller}}
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Oppenheim}}
(\bibinfo{year}{2012}), \bibinfo{note}{personal communication}.
\bibitem[{\citenamefont{M{\"u}ller et~al.}(2011)\citenamefont{M{\"u}ller,
Dahlsten, and Vedral}}]{markus:decoupling} \bibinfo{author}{\bibfnamefont{M.~P.} \bibnamefont{M{\"u}ller}},
\bibinfo{author}{\bibfnamefont{O.}~\bibnamefont{Dahlsten}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{V.}~\bibnamefont{Vedral}}
(\bibinfo{year}{2011}), \bibinfo{note}{arXiv:1107.6029}.
\bibitem[{\citenamefont{M{\"u}ller et~al.}(2012)\citenamefont{M{\"u}ller,
Oppenheim, and Dahlsten}}]{markus:newdecoupling} \bibinfo{author}{\bibfnamefont{M.~P.} \bibnamefont{M{\"u}ller}},
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Oppenheim}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{O.}~\bibnamefont{Dahlsten}}
(\bibinfo{year}{2012}), \bibinfo{note}{in preparation}.
\end{thebibliography} \appendix
\section*{General theories}
In this appendix, we extend our result to more general physical theories. To this end, we need to introduce several additional assumptions and entropies. As quantum mechanics satisfies all assumptions made here, the derivation below can also be taken as a detailed explanation of the results claimed in the main text.
In the main text, we have shown that a violation of uncertainty relations leads to a violation of the second law of thermodynamics for the quantum case. More precisely, we have assumed that all processes can be described by the quantum formalism, with the exception of the uncertainty relations. We now want to show that our result still holds when the physical processes have to be described by a general convex theory. We need several assumptions on these theories, which we clearly state below.
\subsection*{General assumptions}
As already outlined, we will make three very common assumptions on a generalized physical theory. The first two are essentially made everywhere~\cite{howard:survey}; the third is made very often, but not always (see e.g.~\cite{entropy}). We label the assumptions A1, A2, and so on. While such assumptions may seem rather elaborate, there are physical reasons for making them. For example, a property known as bit symmetry~\cite{markusprl} implies A3, A6 and A7.
\begin{description} \item[A1] The state space $\Omega$ is convex. \item[A2\label{it:lin}] Effects are \emph{linear} functionals. \item[A3\label{it:dual}] Pure states are dual to pure effects as outlined in the background section. Uncertainty relations can thus be stated equivalently in terms of states or measurements\footnote{In~\cite{markusprl} it was shown that such a duality holds at least for any theory which has a property called `bit symmetry', which means that it allows for reversible computation.}. \end{description} Next, we will assume that pure effects are \emph{projective} in that there exists a way to implement them in a physical measurement such that if we repeatedly apply an effect $e_i$ by repeating the measurement we again obtain
the same outcome. That is, $p(e_ie_i|\rho)=p(e_i|\rho)$ for all $\rho$, where with some abuse of notation we take $p(e_ie_i|\rho)$ to be the probability of observing $e_i$ when making the measurement twice in a row, and do not consider the outcome of the first. A measurement is projective if it consists only of projective effects. Note that this is not the same as demanding that post-measurement states are the same for all $\rho$, which has significant consequences~\cite{corsin:mthesis}. \begin{description} \item[A4] Pure effects are projective. \end{description} We will also assume that the unit effect $u$, i.e. the effect satisfying $u(\rho) = 1$ for all $\rho$, has a dual state that is analogous to the maximally mixed state. If we accept duality between states and measurements, then this assumption is very natural. \begin{description} \item[A5\label{it:halfe}] If $f_0 + f_1 = u$ for two effects $f_0$ and $f_1$, then the dual states $\rho_{f_0} + \rho_{f_1} = \rho_{u}$ and $e(\rho_{u}/2) = 1/2$ for all pure effects $e$. \end{description} Note that this assumption again implies that we are dealing with the analogue of a qubit, i.e. a two-level system. It is possible to extend our statements for quantum mechanics to traceless two-outcome observables, but as this requires additional assumptions in the general setting, we omit it. Our next assumption, however, is rather strong and significant, and extends beyond the duality of states and measurements. It is of course satisfied by quantum mechanics. \begin{description} \item[A6\label{it:cycle2}] Let $\rho = \sum_{j=1}^d q_j \sigma_j$ be a decomposition of $\rho$ into perfectly distinguishable pure states $\sigma_j$, and let $h_{\sigma_j}$ denote the pure effect dual to $\sigma_j$. Then $\sum_{j=1}^d h_{\sigma_j} = u$ and $h_{\sigma_k}(\sigma_j) = \delta_{jk}$ for all $j$ and $k$. \end{description} Finally, we will also need that pure states can be transformed into different pure states and that this does not require any work.
In quantum mechanics, this is justified since the transformation just corresponds to a unitary. \begin{description} \item[A7\label{it:pure}] Let $\rho$ and $\sigma$ be pure states. Then the transformation $\rho\rightarrow \sigma$ is reversible (and thus does not require any work, neither can any work be gained from it). \end{description}
\subsection*{Entropies}
Several definitions of entropy are possible in generalized theories~\cite{entropy}; they happen to coincide in quantum mechanics. The first is the so-called \emph{decomposition entropy} given as \begin{align} S(\rho) = \min_{\substack{\{p_j,\sigma_j\}_j\\\rho = \sum_{j=1}^d p_j \sigma_j}} H(\{p_1,\ldots,p_d\})\,, \end{align} where the minimization is taken over decompositions into pure states $\sigma_j$ and $H$ is the Shannon entropy. Here, we will take the minimum over decompositions into perfectly distinguishable pure states. Note that the resulting quantity is equally well defined, but avoids an unnecessary strengthening of assumption A6. To define the other notion of entropy, we will need the following definition of \emph{maximally fine-grained measurements}~\cite{entropy} as measurements each of whose effects cannot be re-expressed as a non-trivial linear combination, i.e., \begin{align} \nonumber \textbf{e}&=\{e_i\}_i :\text{ maximally fine-grained }\Leftrightarrow \\ \nonumber & \text{ for all }e_i:\ e_i=\alpha e_\alpha^\prime+\beta e_\beta^\prime,\ \alpha,\beta>0 \ \Rightarrow\ e_\alpha^\prime=e_\beta^\prime\,. \end{align} We also call any effect satisfying this condition fine-grained. Note that pure effects are automatically maximally fine-grained. The \emph{measurement entropy} is then given by \begin{align} H(\rho) = \min_{\{e_j\}_{j=1}^\ell} H(\{e_1(\rho),\ldots,e_{\ell}(\rho)\})\,, \end{align} where the minimization is taken over maximally fine-grained measurements. Finally, it would be possible to define entropies by a thermodynamical process itself~\cite{vonNeumann:entropyDef}, even in general physical theories~\cite{barrett:entropy}. In such a setting, a difference between the decomposition and measurement entropy can also lead to a violation of the second law~\cite{jonathan:entropy}.
As such, it is still under investigation what is the most relevant entropy~\cite{entropy} in general theories, also when it comes to operational tasks from quantum information such as decoupling~\cite{markus:decoupling,markus:newdecoupling}.
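For the quantum case, the decomposition entropy reduces to the von Neumann entropy: the optimal decomposition into perfectly distinguishable pure states is the eigendecomposition. The following is a minimal numeric sketch of this (the helper names are ours, not from the text); for simplicity it handles only real $2\times 2$ density matrices, whose eigenvalues can be written in closed form.

```python
import math

def shannon_bits(probs):
    """Shannon entropy (bits) of a probability list."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def qubit_entropy(rho):
    """Decomposition (= von Neumann) entropy of a real 2x2 density
    matrix ((a, b), (b, d)); eigenvalues computed analytically as
    (a+d)/2 +- sqrt(((a-d)/2)**2 + b**2)."""
    (a, b), (_, d) = rho
    m = (a + d) / 2.0
    r = math.sqrt(((a - d) / 2.0) ** 2 + b * b)
    return shannon_bits([m + r, m - r])

rho_mixed = ((0.5, 0.0), (0.0, 0.5))   # maximally mixed qubit: 1 bit
rho_pure = ((1.0, 0.0), (0.0, 0.0))    # a pure state: 0 bits
print(qubit_entropy(rho_mixed), qubit_entropy(rho_pure))
```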
\subsection*{First part of the cycle}
Below we state the explicit calculation of the work which can be extracted from the first part of the cycle. The measurements $\textbf{f} = \{f_0,f_1\}$ and $\textbf{g} = \{g_0,g_1\}$ are now not necessarily quantum mechanical, but do obey the assumptions above. We use a generalized notion of the completely mixed state and a projective measurement. In order to determine the position of the semi-transparent membranes in equilibrium, we assume that they perform a projective measurement, in the sense that a particle which is once measured to be $e_0$ ($e_1$) will give outcome $e_0$ ($e_1$) with certainty when the measurement is repeated, and never outcome $e_1$ ($e_0$). Additionally, we will use the following definitions in our calculation. \begin{description} \item[D1\label{it:def}] $\rho_0$ is the mixture of $\rho_{f_0}$ and $\rho_{g_0}$, and $\rho_1$ of $\rho_{f_1}$ and $\rho_{g_1}$, i.e., \begin{align} \nonumber \rho_0 &= \frac{1}{2}\left( \rho_{f_0}+ \rho_{g_0} \right)\\ \nonumber \rho_1&=\frac{1}{2}\left( \rho_{f_1}+ \rho_{g_1} \right) \,. \end{align} \item[D2\label{it:halfp}] We choose an equal mixture of $\rho_0$ and $\rho_1$, i.e., $p_i=1/2$ for all $i$. \end{description} Note
that by assumption A5 the state $\rho=\rho_0/2+\rho_1/2$ has analogous properties to the completely mixed state in dimension $2$, i.e., $p(e_j)=\sum_i p_i p(e_j|\rho_i)=1/2$ for all~$j$. \begin{description}
\item[D3\label{it:binary}] We make a measurement with binary outcomes, i.e., $p(e_j|\rho_i)=1-p(e_{\bar{j}}|\rho_i)$. \end{description} The numbers above the equality signs refer to the assumptions and/or definitions stated above which are used in the corresponding step of the calculation. \begin{align} \nonumber W &= NkT\left( \sum_{i,j}
p_i p(e_j|\rho_i)\ln(p_i p(e_j|\rho_i)) \right. \\ \nonumber & \quad \left. - \sum_j p(e_j)\ln p(e_j) - \sum_i p_i \ln p_i \right) \\ \nonumber &\mathop{=}^{\ref{it:halfp},\ref{it:halfe}} NkT \ln 2\left( \sum_{i,j}
p_i p(e_j|\rho_i)\log (p_i p(e_j|\rho_i)) \right. \\ \nonumber & \quad \left. - \log \frac{1}{2} - \log \frac{1}{2}\right) \\ \nonumber &\mathop{=}^{\ref{it:halfp}} \nonumber NkT \ln 2\left( 2 \vphantom{\frac{1}{2}}\right. \\ \nonumber & \quad \left. + \frac{1}{2} \sum_{i,j}
p(e_j|\rho_i)\left( \log \frac{1}{2}+\log p(e_j|\rho_i)\right) \right) \\ &= \nonumber NkT \ln 2\left( 1+ \frac{1}{2} \sum_{i,j}
p(e_j|\rho_i)\left( \log p(e_j|\rho_i)\right) \right) \\ &\mathop{=}^{\ref{it:binary}} \nonumber NkT \ln 2\left( 1+ \frac{1}{2} \sum_{i}\left(
p(e_j|\rho_i) \log p(e_j|\rho_i) \right. \right. \\ \nonumber & \quad \left. \vphantom{\frac{1}{2}} \left. +
(1-p(e_j|\rho_i)) \log (1- p(e_j|\rho_i)) \right) \right) \\ &= \nonumber NkT \ln 2\left( 1
- \frac{1}{2} \sum_{i}H(p(e_j|\rho_i)) \right) \\ &= \nonumber NkT \ln 2\left( 1
- \frac{1}{2} H\left(p\left(e_j\middle|\frac{1}{2}\rho_{f_0}+\frac{1}{2}\rho_{g_0} \right)\right) \right. \\ \nonumber & \quad \left. -
\frac{1}{2} H\left(p\left(e_j \middle| \frac{1}{2}\rho_{f_1}+\frac{1}{2}\rho_{g_1}\right) \right) \right)\\ &\mathop{=}^{\ref{it:lin}} \nonumber NkT \ln 2\left( 1
- \frac{1}{2} H\left(\frac{1}{2} p\left(e_j\middle |\rho_{f_0}\right)+\frac{1}{2}p\left(e_j\middle|\rho_{g_0}\right)\right) \right. \\ \nonumber & \quad \left. -
\frac{1}{2} H\left(\frac{1}{2}p\left(e_j\middle|\rho_{f_1}\right)+\frac{1}{2}p\left(e_j\middle|\rho_{g_1}\right)\right) \right)\\ \nonumber &\mathop{=}^{\ref{it:dual}} NkT \ln 2\left( 1
- \frac{1}{2} H\left(\frac{1}{2}p(f_0|\rho_{e_j})
+ \frac{1}{2}p(g_0|\rho_{e_j}) \right) \right. \\ \nonumber & \quad \left.
- \frac{1}{2} H\left(\frac{1}{2}p(f_1|\rho_{e_j})+ \frac{1}{2}p(g_1|\rho_{e_j}) \right) \right)\\ \label{eq:cycle1} & \leq NkT \ln 2\left( 1 - \frac{1}{2} H\left(\zeta_{(f_0,g_0)}\right) - \frac{1}{2} H\left(\zeta_{(f_1,g_1)}\right) \right)\,. \end{align}
Equality is achieved when the measurement can be formed from the maximally certain effects.
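In the quantum example from the main text this bound is saturated, which can be checked numerically. The sketch below (an illustration under our own variable names) builds $\rho_0=\frac{1}{2}(|0\rangle\langle 0|+|+\rangle\langle +|)$, whose largest eigenvalue is exactly the fine-grained certainty $\frac{1}{2}+\frac{1}{2\sqrt{2}}$; measuring $\rho_0$ in its own eigenbasis is the maximally certain binary measurement.

```python
import math

def H(p):
    """Binary Shannon entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# rho_0 = (|0><0| + |+><+|)/2 as a real 2x2 matrix ((a, b), (b, d)).
a, b, d = 0.75, 0.25, 0.25
m = (a + d) / 2.0
r = math.sqrt(((a - d) / 2.0) ** 2 + b * b)
lam = m + r   # largest eigenvalue: equals 1/2 + 1/(2*sqrt(2))

# Work extracted in the first part of the cycle, units of N k T ln 2;
# measuring in the eigenbasis of rho_0 saturates the bound derived above.
W_units = 1 - H(lam)
print(lam, W_units)
```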
\subsection*{Second part of the cycle}
We calculate the work needed for the second part of the cycle in two parts. First let us calculate the work needed to `decompose' $\rho$ into its different pure components. We use the effects $h_{\sigma_j}$, which are dual to the pure states $\sigma_j$ which form the components of $\rho$, i.e., $\rho=\sum_j q_j \sigma_j$. Note that we can do this for any decomposition, in particular the one minimizing $S(\rho)$. \begin{align}
\nonumber W&=-NkT \ln 2 \left(\sum_j p(h_{\sigma_j}|\rho)\log p(h_{\sigma_j}|\rho) \right)\\
&=
\nonumber
-NkT \ln 2 \left(\sum_j p\left(h_{\sigma_j}\middle|\sum_{j^\prime}q_{j^\prime}\sigma_{j^\prime}\right)
\right. \\ & \quad \left.
\nonumber
\log p\left(h_{\sigma_j}\middle|\sum_{j^\prime}q_{j^\prime}\sigma_{j^\prime}\right) \right)\\
&\mathop{=}^{\ref{it:lin}} \nonumber
-NkT \ln 2 \left(\sum_j \left(\sum_{j^\prime} q_{j^\prime} p\left(h_{\sigma_j}\middle|\sigma_{j^\prime}\right)\right)
\right. \\ & \quad \left.
\nonumber
\log \left(\sum_{j^\prime} q_{j^\prime} p\left(h_{\sigma_j}\middle|\sigma_{j^\prime}\right)\right) \right)\\
&\mathop{=}^{\ref{it:cycle2}}
\nonumber
-NkT \ln 2 \left(\sum_{j^\prime} q_{j^\prime} \log q_{j^\prime} \right)\\ \label{eq:decompentropy}
&= NkT \ln 2 S(\rho)\,. \end{align} We then transform the pure states $\sigma_j$ into the pure states $\tau^0_j$ or $\tau^1_j$. By~\ref{it:pure}, this does not require any work. By performing a process analogous to~\eqref{eq:decompentropy} but in the reverse direction, we can then extract work $NkT \ln 2 \sum_i p_i S(\rho_i)$ by `reassembling' the states $\rho_0$ and $\rho_1$. Overall, the work needed for the second part of the cycle is given by \begin{align}
W&=NkT \ln 2 (S(\rho)-\sum_i p_i S(\rho_i))\,. \label{eq:cycle2} \end{align}
\subsection*{Closing the cycle}
From the above calculation, i.e., by subtracting~\eqref{eq:cycle2} from~\eqref{eq:cycle1}, we see that for the total cycle the amount of work which can be extracted is given by \begin{align} \nonumber \Delta W &= NkT \ln2 \left(
- \left( S(\rho)- \sum_i p_i S(\rho_i) \right) \right. \\ \nonumber & \quad \left. + \left( 1 - \frac{1}{2} H\left(\zeta_{(f_0,g_0)}\right) - \frac{1}{2} H\left(\zeta_{(f_1,g_1)}\right)\right)
\right)\,, \end{align} where $S$ is now the general decomposition entropy. A net work gain of this cycle, and therefore a violation of the second law of thermodynamics, can thus be achieved if the uncertainty relations can be violated without at the same time changing the decomposition entropy.
\end{document} |
\begin{document}
\title{\Large $(2,3)$-GENERATION OF THE SPECIAL LINEAR GROUPS OF DIMENSIONS $9$, $10$ and $11$} \author{\Large E. Gencheva, Ts. Genchev and K. Tabakov} \date{} \maketitle \begin{abstract} In the present paper we prove that the groups $PSL_{n}(q)$ ($n = 9$, $10$ or $11$) are $(2,3)$-generated for any $q$. Actually, we provide explicit generators $x_{n}$ and $y_{n}$ of respective orders $2$ and $3$, for the special linear group $SL_{n}(q)$. \indent\\ \noindent\textbf{Key words:}\quad(2,3)-generated group.\\ \noindent\textbf{2010 Mathematics Subject Classification:} \,20F05, 20D06. \end{abstract}
\indent\indent\indent\textbf{1.\,\,Introduction.} $(2,3)$-generated groups are those groups which can be generated by an involution and an element of order $3$ or, equivalently, they appear as homomorphic images of the famous modular group $PSL_{2}(\mathbb{Z})$. It is known that many series of finite simple groups are $(2,3)$-generated. The most powerful result, due to Liebeck--Shalev and L\"{u}beck--Malle, states that all finite simple groups, except the infinite families $PSp_{4}(2^{m})$, $PSp_{4}(3^{m})$, $^{2}B_{2}(2^{2m+1})$, and a finite number of other groups, are $(2,3)$-generated (see \cite{12}). We have especially focused our attention on the projective special linear groups defined over finite fields. Many authors have investigated the groups $PSL_{n}(q)$ with respect to this generation property. $(2,3)$-generation has been proved in the cases $n=2$, $q\neq 9$ \cite{7}, $n=3$, $q\neq 4$ \cite{5},\cite{2}, $n=4$, $q\neq 2$ \cite{14}, \cite{13}, \cite{8}, \cite {10}, $n=5$, any $q$ \cite{17}, \cite{9}, $n=6$, any $q$ \cite{16}, $n=7$, any $q$ \cite{15}, $n=8$, any $q$ \cite{6}, $n\geq 5$, odd $q\neq 9$ \cite{3}, \cite{4}, and $n\geq 13$, any $q$ \cite{11}. In this way the only cases that still remain open are those for $9\leq n \leq 12$, even $q$ or $q=9$. In the present work we continue our investigation by considering the next portion of the infinite series of finite special linear groups and their projective images. We shall treat the groups $SL_{9}(q)$ and $SL_{10}(q)$ simultaneously. Based on the results obtained below, we deduce the following.\\
\indent\indent\textbf{Theorem.} \emph{The groups $SL_{9}(q)$, $SL_{10}(q)$, $SL_{11}(q)$ and their simple projective images $PSL_{9}(q)$, $PSL_{10}(q)$ and $PSL_{11}(q)$ are $(2,3)$-generated for all $q$.}\\
\indent\indent\textbf{2.\,\,Proof of the Theorem.} First let $G = SL_{n}(q)$ and $\overline{G} = G/Z(G) = PSL_{n}(q)$, where $n = 9$ or $10$, and $q = p^{m}$ for a prime number $p$. Set $Q = q^{n-1}-1$ if $q \neq 3, 7$ and $Q = (q^{n-1}-1)/2$ if $q = 3, 7$. The group $G$ acts (naturally) on the left on the $n$-dimensional column vector space $V = F^{n}$ over the field $F = GF(q)$. We denote by $v_{1}, \ldots, v_{n}$ the standard basis of the space $V$, i.e., $v_{i}$ is the column which has $1$ as its $i$-th coordinate, while all other coordinates are zeros.\\ \indent\indent\ We shall need the following result, which can easily be obtained from the list of maximal subgroups of $G$ given in \cite{1} and simple arithmetic considerations using (for example) Zsigmondy's well-known theorem. \\
\indent\indent\textbf{Lemma 1.} \emph{Any maximal subgroup $M$ of the group $G$ either stabilizes a one-dimensional subspace or a hyperplane of $V$ (i.e., $M$ is reducible on the space $V$), or $M$ has no element of order $Q$}.\\
\indent\indent \textbf{2.1.} We suppose first that $q \neq 2, 4$ if $n = 9$ and $q > 4$ if $n = 10$. Let us choose an element $\omega$ of order $Q$ in the multiplicative group of the field $GF(q^{n-1})$ and set \begin{center} $f_{n}(t) = (t - \omega)(t - \omega^{q})(t - \omega^{q^{2}})(t - \omega^{q^{3}}) \cdots (t - \omega^{q^{n-3}}) (t - \omega^{q^{n-2}}) = t^{n-1} - \alpha_{1}t^{n-2} + \alpha_{2}t^{n-3} - \alpha_{3}t^{n-4} + \cdots + (-1)^{n-2}\alpha_{n-2}t + (-1)^{n-1}\alpha_{n-1}$. \end{center} Then $f_{n}(t) \in F[t]$ and the polynomial $f_{n}(t)$ is irreducible over the field $F$. Note that $\alpha_{n-1} = \omega^\frac{q^{n-1} - 1}{q - 1}$ has order $q - 1$ if $q \neq 3, 7$, $\alpha_{n-1} = 1$ if $q = 3$, and $\alpha_{n-1}^{3} = 1 \neq \alpha_{n-1}$ if $q = 7$.\\ \indent\indent Now let \begin{center} \[ x_{9} = \left[ \begin{array}{ccccccccc} -1 & 0 & 0 & 0 & 0 & 0 & \alpha_{5}\alpha_{8}^{-1} & 0 & \alpha_{5}\\ 0 & -1 & 0 & 0 & 0 & 0 & \alpha_{4}\alpha_{8}^{-1} & 0 & \alpha_{4}\\ 0 & 0 & 0 & -1 & 0 & 0 & \alpha_{3}\alpha_{8}^{-1} & 0 & \alpha_{6}\\ 0 & 0 & -1 & 0 & 0 & 0 & \alpha_{6}\alpha_{8}^{-1} & 0 & \alpha_{3}\\ 0 & 0 & 0 & 0 & -1 & 0 & \alpha_{2}\alpha_{8}^{-1} & 0 & \alpha_{2}\\ 0 & 0 & 0 & 0 & 0 & 0 & \alpha_{1}\alpha_{8}^{-1} & -1 & \alpha_{7}\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \alpha_{8}\\ 0 & 0 & 0 & 0 & 0 & -1 & \alpha_{7}\alpha_{8}^{-1} & 0 & \alpha_{1}\\ 0 & 0 & 0 & 0 & 0 & 0 & \alpha_{8}^{-1} & 0 & 0\\
\end{array} \right],\] \[y_{9} = \left[ \begin{array}{ccccccccc} 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\
\end{array} \right];\] \[ x_{10} = \left[ \begin{array}{cccccccccc} 0 & 0 & 0 & -1 & 0 & 0 & 0 & \alpha_{2}\alpha_{9}^{-1} & 0 & \alpha_{3}\\ 0 & 0 & 0 & 0 & 0 & -1 & 0 & \alpha_{4}\alpha_{9}^{-1} & 0 & \alpha_{7}\\ 0 & 0 & -1 & 0 & 0 & 0 & 0 & \alpha_{5}\alpha_{9}^{-1} & 0 & \alpha_{5}\\ -1 & 0 & 0 & 0 & 0 & 0 & 0 & \alpha_{3}\alpha_{9}^{-1} & 0 & \alpha_{2}\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \alpha_{1}\alpha_{9}^{-1} & -1 & \alpha_{8}\\ 0 & -1 & 0 & 0 & 0 & 0 & 0 & \alpha_{7}\alpha_{9}^{-1} & 0 & \alpha_{4}\\ 0 & 0 & 0 & 0 & 0 & 0 & -1 & \alpha_{6}\alpha_{9}^{-1} & 0 & \alpha_{6}\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \alpha_{9}\\ 0 & 0 & 0 & 0 & -1 & 0 & 0 & \alpha_{8}\alpha_{9}^{-1} & 0 & \alpha_{1}\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \alpha_{9}^{-1} & 0 & 0\\
\end{array} \right],\] \[y_{10} = \left[ \begin{array}{cccccccccc} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\ \end{array} \right].\] \end{center} Then $x_{n}$ and $y_{n}$ are elements of $G (= SL_{n}(q))$ of orders $2$ and $3$, respectively. Denote \begin{center} \[z_{9} = x_{9}y_{9} = \left[ \begin{array}{ccccccccc} 0 & 0 & -1 & 0 & 0 & 0 & 0 & \alpha_{5} & \alpha_{5}\alpha_{8}^{-1}\\ -1 & 0 & 0 & 0 & 0 & 0 & 0 & \alpha_{4} & \alpha_{4}\alpha_{8}^{-1}\\ 0 & 0 & 0 & 0 & 0 & -1 & 0 & \alpha_{6} & \alpha_{3}\alpha_{8}^{-1}\\ 0 & -1 & 0 & 0 & 0 & 0 & 0 & \alpha_{3} & \alpha_{6}\alpha_{8}^{-1}\\ 0 & 0 & 0 & -1 & 0 & 0 & 0 & \alpha_{2} & \alpha_{2}\alpha_{8}^{-1}\\ 0 & 0 & 0 & 0 & 0 & 0 & -1 & \alpha_{7} & \alpha_{1}\alpha_{8}^{-1}\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & \alpha_{8} & 0\\ 0 & 0 & 0 & 0 & -1 & 0 & 0 & \alpha_{1} & \alpha_{7}\alpha_{8}^{-1}\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \alpha_{8}^{-1}\\
\end{array} \right],\] \[z_{10} = x_{10}y_{10} = \left[ \begin{array}{cccccccccc} 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & \alpha_{3} & \alpha_{2}\alpha_{9}^{-1}\\ 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & \alpha_{7} & \alpha_{4}\alpha_{9}^{-1}\\ 0 & 0 & 0 & 0 & 0 & 0 & -1 & 0 & \alpha_{5} & \alpha_{5}\alpha_{9}^{-1}\\ -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \alpha_{2} & \alpha_{3}\alpha_{9}^{-1}\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & \alpha_{8} & \alpha_{1}\alpha_{9}^{-1}\\ 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & \alpha_{4} & \alpha_{7}\alpha_{9}^{-1}\\ 0 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & \alpha_{6} & \alpha_{6}\alpha_{9}^{-1}\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \alpha_{9} & 0\\ 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & \alpha_{1} & \alpha_{8}\alpha_{9}^{-1}\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \alpha_{9}^{-1}\\ \end{array} \right].\] \end{center} The characteristic polynomial of $z_{n}$ is $f_{z_{n}}(t) = (t - \alpha_{n-1}^{-1})f_{n}(t)$ and the characteristic roots $\alpha_{n-1}^{-1}$, $\omega$, $\omega^{q}$, $\omega^{q^{2}}$, $\omega^{q^{3}}$, \ldots, $\omega^{q^{n-3}}$, and $\omega^{q^{n-2}}$ of $z_{n}$ are pairwise distinct. Then, in $GL_{n}(q^{n-1})$, $z_{n}$ is conjugate to the matrix diag $(\alpha_{n-1}^{-1}$, $\omega$, $\omega^{q}$, $\omega^{q^{2}}$, $\omega^{q^{3}}$, \ldots, $\omega^{q^{n-3}}$, $\omega^{q^{n-2}})$ and hence $z_{n}$ is an element of $SL_{n}(q)$ of order $Q$.\\ \indent\indent Let $H_{n}$ be the subgroup of $G (= SL_{n}(q))$ generated by the above elements $x_{n}$ and $y_{n}$.\\
\indent\indent\textbf{Lemma 2.} \emph{The group $H_{n}$ stabilizes neither a one-dimensional subspace nor a hyperplane of the space $V$; equivalently, $H_{n}$ acts irreducibly on $V$.}\\
\indent\indent P r o o f. Assume that $W$ is an $H_{n}$-invariant subspace of $V$ and $k$ = dim $W$, $k=1$ or $n-1$.\\ \indent\indent Let first $k = 1$ and $0 \neq w \in W$. Then $y_{n}(w) = \lambda w$ where $\lambda \in F$ and $\lambda^{3} = 1$. This yields \begin{center} $w = \mu_{1}(v_{1} + \lambda^{2} v_{2} + \lambda v_{3}) + \mu_{2}(v_{4} + \lambda^{2} v_{5} + \lambda v_{6}) + \mu_{3}(v_{7} + \lambda^{2} v_{8} + \lambda v_{9})$ $(\mu_{1}, \mu_{2}, \mu_{3} \in F)$ \end{center} if $n = 9$, and \begin{center} $w = \mu_{1}^{'}v_{1} + \mu_{2}^{'}(v_{2} + \lambda v_{3}) + \mu_{3}^{'}(\lambda v_{4} + v_{5} + \lambda^{2} v_{6}) + \mu_{2}^{'} \lambda^{2} v_{7} + \mu_{4}^{'}(\lambda v_{8} + v_{9} + \lambda^{2} v_{10}) (\mu_{i}^{'} \in F)$ \end{center} if $n = 10$. Moreover $\mu_{1}^{'} = 0$ if $\lambda \neq 1$.\\ Now $x_{n}(w) = \nu w$ where $\nu = \pm 1$. This yields consecutively $\mu_{3} \neq 0$ , $\mu_{4}^{'} \neq 0$, $\alpha_{n - 1} = \lambda^{2}\nu$, and (in case $n = 9$) \begin{enumerate}[(1)]
\item \rule{0pt}{0pt}\vspace*{-12pt}
\begin{equation*}
\lambda\nu\mu_{1} + \mu_{2} = (\lambda\nu\alpha_{3} + \lambda\alpha_{6})\mu_{3},
\end{equation*} \end{enumerate}
\begin{enumerate}[(2)]
\item \rule{0pt}{0pt}\vspace*{-12pt}
\begin{equation*}
\mu_{2} = (\alpha_{1} - \lambda\nu + \nu\alpha_{7})\mu_{3},
\end{equation*} \end{enumerate}
\begin{enumerate}[(3)]
\item \rule{0pt}{0pt}\vspace*{-12pt}
\begin{equation*}
(\nu + 1)(\lambda\mu_{2} - \alpha_{2}\mu_{3}) = 0,
\end{equation*} \end{enumerate}
\begin{enumerate}[(4)]
\item \rule{0pt}{0pt}\vspace*{-12pt}
\begin{equation*}
(\nu + 1)(\mu_{1} - \lambda\alpha_{5}\mu_{3}) = 0,
\end{equation*} \end{enumerate}
\begin{enumerate}[(5)]
\item \rule{0pt}{0pt}\vspace*{-12pt}
\begin{equation*}
(\nu + 1)(\mu_{1} - \lambda^{2}\alpha_{4}\mu_{3}) = 0;
\end{equation*} \end{enumerate} in case $n = 10$ we obtain the following relations:\\ \begin{enumerate}[(1.)]
\item \rule{0pt}{0pt}\vspace*{-12pt}
\begin{equation*}
\mu_{1}^{'} = -\nu\lambda\mu_{3}^{'} + (\lambda\alpha_{3}\alpha_{9}^{-1} + \lambda^{2}\alpha_{2})\mu_{4}^{'},
\end{equation*} \end{enumerate}
\begin{enumerate}[(2.)]
\item \rule{0pt}{0pt}\vspace*{-12pt}
\begin{equation*}
\mu_{2}^{'} = -\nu\lambda^{2}\mu_{3}^{'} + (\lambda\alpha_{7}\alpha_{9}^{-1} + \lambda^{2}\alpha_{4})\mu_{4}^{'},
\end{equation*} \end{enumerate}
\begin{enumerate}[(3.)]
\item \rule{0pt}{0pt}\vspace*{-12pt}
\begin{equation*}
\mu_{3}^{'} = (-\nu + \lambda_{2}\alpha_{1} + \lambda\alpha_{8}\alpha_{9}^{-1})\mu_{4}^{'},
\end{equation*} \end{enumerate}
\begin{enumerate}[(4.)]
\item \rule{0pt}{0pt}\vspace*{-12pt}
\begin{equation*}
(\nu + 1)(\mu_{2}^{'} - \lambda\alpha_{5}\mu_{4}^{'}) = 0,
\end{equation*} \end{enumerate}
\begin{enumerate}[(5.)]
\item \rule{0pt}{0pt}\vspace*{-12pt}
\begin{equation*}
(\nu + 1)(\lambda^{2}\alpha_{6} - \alpha_{5}) = 0.
\end{equation*} \end{enumerate} In particular, we have $\alpha_{n-1}^{3} = \nu$ and $\alpha_{n-1}^{6} = 1$. This is impossible if $q = 5$ or $q > 7$ since then $\alpha_{n-1}$ has order $q - 1$.\\ Next, let us continue with the case $n = 9$. According to our assumption ($q \neq 2, 4$) only two possibilities are left: $q = 3$ (and $\alpha_{8} = 1$) or $q = 7$ (and $\alpha_{8}^{3} = 1 \neq \alpha_{8}$). So $\nu = 1$, $\alpha_{8} = \lambda^{2}$, and (1), (2), (3), (4), (5) produce $\alpha_{1} = \lambda^{2}\alpha_{2} - \alpha_{7} + \lambda$, $\alpha_{3} = \lambda\alpha_{2} + \lambda^{2}\alpha_{4} - \alpha_{6}$ and $\alpha_{5} = \lambda\alpha_{4}$. Now $f_{9}(-1) = (1 + \lambda + \lambda^{2})(1 + \alpha_{2} + \alpha_{4}) = 0$ both for $q = 3$ and $q = 7$, an impossibility as $f_{9}(t)$ is irreducible over the field $F$.\\ Lastly, we treat the case $n = 10$; as $q > 4$, the only possibility is $q = 7$ (and $\alpha_{9}^{3} = 1 \neq \alpha_{9}$). Thus $\nu = 1$ and $\alpha_{9} = \lambda^{2} \neq 1$. So $\lambda \neq 1$, $\mu_{1}^{'} = 0$, and from $(1.)$, $(2.)$, $(3.)$, $(4.)$, $(5.)$ we can extract that $\alpha_{1} = \lambda^{2}\alpha_{2} + \lambda^{2}\alpha_{3} - \alpha_{8} + \lambda$, $\alpha_{5} = -\lambda^{2}\alpha_{2} - \lambda^{2}\alpha_{3} + \lambda\alpha_{4} + \lambda\alpha_{7}$ and $\alpha_{6} = -\alpha_{2} -\alpha_{3}+ \lambda^{2}\alpha_{4} + \lambda^{2}\alpha_{7}$. Then $f_{10}(-1) = -(1 + \lambda + \lambda^{2})(1 + \alpha_{4} + \alpha_{7}) = 0$, again an impossibility as $f_{10}(t)$ is irreducible over the field $F$.\\
\indent\indent Now let $k=n-1$. The subspace $U$ of $V$ which is generated by the vectors $v_{1}$, $v_{2}$, $v_{3}$, \ldots, $v_{n-1}$ is $\left\langle {z_{n}}\right\rangle$-invariant. If $W \neq U$ then $U \cap W$ is $\left\langle {z_{n}}\right\rangle$-invariant and dim $(U \cap W) = n-2$. This means that the characteristic polynomial of $z_{n}|_{U \cap W}$ has degree $n-2$ and must divide $f_{z_{n}}(t)$, which is impossible as $f_{n}(t)$ is irreducible over $F$. Thus $W = U$, but obviously $U$ is not $\left\langle {y_{n}}\right\rangle$-invariant, a contradiction.\\ \indent\indent The lemma is proved. (Note that the statement is false if $q = 2$ or $4$ in both cases, and additionally if $q = 3$ in case $n = 10$.)
$\square$\\
\indent\indent Now, as $H_{n} = \left\langle{x_{n},y_{n}}\right\rangle$ acts irreducibly on the space $V$ and it has an element of order $Q$, we conclude (by Lemma 1) that $H_{n}$ cannot be contained in any maximal subgroup of $G (= SL_{n}(q))$. Thus $H_{n} = G$ and $G = \left\langle {x_{n},y_{n}}\right\rangle$ is a $(2,3)$-generated group. Obviously $\overline{x_{n}}$ and $\overline{y_{n}}$ are elements of respective orders $2$ and $3$ in the group $\overline{G} = PSL_{n}(q)$, and $\overline{G} = \left\langle {\overline{x_{n}},\overline{y_{n}}}\right\rangle$ is a $(2,3)$-generated group too.\\
\indent\indent \textbf{2.2.} Now we proceed to prove the $(2,3)$-generation of the remaining groups $SL_{9}(2)$, $SL_{9}(4)$, $SL_{10}(2)$, $SL_{10}(3)$ and $SL_{10}(4)$. Below we provide elements $x_{n}^{(q)}$ and $y_{n}^{(q)}$ of orders $2$ and $3$, respectively, for each one of the groups $SL_{n}(q)$ in this list, and prove that $\left\langle {x_{n}^{(q)},y_{n}^{(q)}}\right\rangle = SL_{n}(q)$. In our considerations, when computing the orders of certain elements of the corresponding groups $\left\langle {x_{n}^{(q)},y_{n}^{(q)}}\right\rangle$, we rely on the Magma computational algebra system. We also use the orders of the maximal subgroups of the groups in the list above. (The maximal subgroups of these groups are classified in \cite{1}.)\\
\indent\indent Take the following two matrices of $SL_{9}(2)$: \[ x_{9}^{(2)} = \left[ \begin{array}{ccccccccc} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1\\ 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 0\\ \end{array} \right], y_{9}^{(2)} = \left[ \begin{array}{ccccccccc} 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\ \end{array} \right].\] Then $x_{9}^{(2)}$ and $y_{9}^{(2)}$ are elements of respective orders $2$ and $3$ in the group $SL_{9}(2)$, and $x_{9}^{(2)}y_{9}^{(2)}$ has order $73$; also $x_{9}^{(2)}y_{9}^{(2)}(x_{9}^{(2)}(y_{9}^{(2)})^{2})^{2}$ is an element in $\left\langle {x_{9}^{(2)},y_{9}^{(2)}}\right\rangle$ of order $3.127$. Since in $SL_{9}(2)$ there is no maximal subgroup of order divisible by $73.127$ it follows that $SL_{9}(2) = \left\langle {x_{9}^{(2)},y_{9}^{(2)}}\right\rangle$.\\
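The order computations above were carried out in Magma, but for $SL_{9}(2)$ they are small enough to recheck independently. The following sketch (Python, plain integer arithmetic modulo $2$; the helper names are ours, not from the paper) encodes $x_{9}^{(2)}$ and $y_{9}^{(2)}$ and verifies their orders by brute force:

```python
# Verify the orders of the generators x, y of SL_9(2) given above.
# Matrices are lists of rows over GF(2); arithmetic is done mod 2.

def mat_mul(A, B, p=2):
    """Multiply two square matrices over GF(p)."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % p
             for j in range(n)] for i in range(n)]

def mat_order(A, bound, p=2):
    """Smallest m <= bound with A^m = I, or None if no such m."""
    n = len(A)
    I = [[int(i == j) for j in range(n)] for i in range(n)]
    P = A
    for m in range(1, bound + 1):
        if P == I:
            return m
        P = mat_mul(P, A, p)
    return None

x = [[1,0,0,0,0,0,0,0,0],
     [0,0,1,0,0,0,0,0,0],
     [0,1,0,0,0,0,0,0,0],
     [0,0,0,0,1,0,0,0,0],
     [0,0,0,1,0,0,0,0,0],
     [0,0,0,0,0,1,1,0,1],
     [0,0,0,0,0,1,0,1,1],
     [0,0,0,0,0,0,1,1,1],
     [0,0,0,0,0,1,1,1,0]]

y = [[0,1,0,0,0,0,0,0,0],
     [1,1,0,0,0,0,0,0,0],
     [0,0,0,1,0,0,0,0,0],
     [0,0,1,1,0,0,0,0,0],
     [0,0,0,0,0,1,0,0,0],
     [0,0,0,0,1,1,0,0,0],
     [0,0,0,0,0,0,0,0,1],
     [0,0,0,0,0,0,1,0,0],
     [0,0,0,0,0,0,0,1,0]]

print(mat_order(x, 4))                 # order of x
print(mat_order(y, 4))                 # order of y
print(mat_order(mat_mul(x, y), 600))   # order of xy (divides 2^9 - 1 = 511)
```

The same helpers check the other generating pairs of this subsection after reducing their entries modulo the appropriate prime.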
\indent\indent Now we continue with the desired matrices of $SL_{9}(4)$: \[ x_{9}^{(4)} = \left[ \begin{array}{ccccccccc} 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & \eta\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\ \end{array} \right], y_{9}^{(4)} = \left[ \begin{array}{ccccccccc} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\ \end{array} \right].\] (Here $\eta$ is a generator of $GF(4)^{*}$.)\\ Besides the fact that $x_{9}^{(4)}$ and $y_{9}^{(4)}$ have orders $2$ and $3$, respectively, $x_{9}^{(4)}y_{9}^{(4)}$ has order $3.5.43.127$, and in $\left\langle {x_{9}^{(4)},y_{9}^{(4)}}\right\rangle$ the order of the following element \begin{center} $(x_{9}^{(4)}(y_{9}^{(4)})^{2})^{2}(x_{9}^{(4)}y_{9}^{(4)})^{3}x_{9}^{(4)}(y_{9}^{(4)})^{2}(x_{9}^{(4)}y_{9}^{(4)})^{2}x_{9}^{(4)}(y_{9}^{(4)})^{2}(x_{9}^{(4)}y_{9}^{(4)})^{2}x_{9}^{(4)}(y_{9}^{(4)})^{2}x_{9}^{(4)}y_{9}^{(4)}$ \end{center} is $3.7.19.73$. But no maximal subgroup of $SL_{9}(4)$ has order divisible by $43.73$. Thus $ SL_{9}(4) = \left\langle {x_{9}^{(4)},y_{9}^{(4)}}\right\rangle$ is a $(2,3)$-generated group too.\\
\indent\indent Next, we consider the appropriate pair of elements in the group $SL_{10}(2)$: \[ x_{10}^{(2)} = \left[ \begin{array}{cccccccccc} 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 0\\ \end{array} \right], y_{10}^{(2)} = \left[ \begin{array}{cccccccccc} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\ \end{array} \right].\] Here we obtain that the order of $x_{10}^{(2)}y_{10}^{(2)}$ is $3.11.31$ and $x_{10}^{(2)}y_{10}^{(2)}(x_{10}^{(2)}(y_{10}^{(2)})^{2})^{2}$ has order $73$. But there is no maximal subgroup in $SL_{10}(2)$ of order divisible by $11.73$. So $SL_{10}(2) = \left\langle {x_{10}^{(2)},y_{10}^{(2)}}\right\rangle$.\\
\indent\indent Further, let us deal with the group $SL_{10}(3)$ and choose: \[ x_{10}^{(3)} = \left[ \begin{array}{cccccccccc} 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\ \end{array} \right], y_{10}^{(3)} = \left[ \begin{array}{cccccccccc} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & -1\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\ \end{array} \right].\] The product of the last two matrices has order $11^{2}.61$ and the following element \begin{center} $(x_{10}^{(3)}y_{10}^{(3)})^{2}x_{10}^{(3)}(y_{10}^{(3)})^{2}(x_{10}^{(3)}y_{10}^{(3)})^{2}x_{10}^{(3)}(y_{10}^{(3)})^{2}x_{10}^{(3)}y_{10}^{(3)}x_{10}^{(3)}(y_{10}^{(3)})^{2}x_{10}^{(3)}y_{10}^{(3)}$ \end{center} is of order $2.13.757$. Checking the orders of the maximal subgroups of $SL_{10}(3)$ we see that none of them is a multiple of $61.757$, which means that $SL_{10}(3) = \left\langle {x_{10}^{(3)},y_{10}^{(3)}}\right\rangle$.\\
\indent\indent Lastly, we finish the proof of the $(2,3)$-generation of the group $SL_{10}(4)$ by taking its elements: \[ x_{10}^{(4)} = \left[ \begin{array}{cccccccccc} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & \eta\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\ \end{array} \right], y_{10}^{(4)} = \left[ \begin{array}{cccccccccc} 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\ \end{array} \right].\] (Recall that $\eta$ is a generator of $GF(4)^{*}$.)\\ In this case $x_{10}^{(4)}y_{10}^{(4)}$ has order $3.19.73$ and the element $(x_{10}^{(4)}(y_{10}^{(4)})^{2})^{3}x_{10}^{(4)}y_{10}^{(4)}(x_{10}^{(4)}(y_{10}^{(4)})^{2})^{6}$ has order $5.11.31.41$. As in the previous cases, we conclude that $SL_{10}(4)$ is generated by the above matrices because none of its maximal subgroups has order divisible by $41.73$.\\ \indent\indent Finally, it is obvious that the projective images (where necessary) of all these elements $x_{n}^{(q)}$ and $y_{n}^{(q)}$ (again of orders $2$ and $3$, respectively) generate the corresponding simple projective special linear group.\\
\indent\indent \textbf{Acknowledgement}. We express our gratitude to Prof. Marco Antonio Pellegrini, who provided us with all the generators above for the groups $SL_{9}(2)$, $SL_{9}(4)$, $SL_{10}(2)$, $SL_{10}(3)$ and $SL_{10}(4)$.\\
\indent\indent \textbf{2.3.} Finally let $G = SL_{11}(q)$ and $\overline{G} = G/Z(G) = PSL_{11}(q)$, where $q = p^{e}$ and $p$ is a prime. Set $d = (11,q - 1)$ and $Q=(q^{11}-1)/(q-1)$. It is easily seen that here $(6,Q)=1$. The group $G$ acts (naturally) on an eleven-dimensional vector space $V = F^{11}$ over the field $F = GF(q)$.\\ \indent\indent\ We shall make use of the known list of maximal subgroups of ${G}$ given in \cite{1}. In Aschbacher's notation any maximal subgroup of ${G}$ belongs to one of the following families \emph{$C_{1}, C_{2}, C_{3}, C_{5}, C_{6}, C_{8}$}, and \emph{S}. Roughly speaking, they are: \begin{itemize} \item \emph{$C_{1}$}: stabilizers of subspaces of $V$, \item \emph{$C_{2}$}: stabilizers of direct sum decompositions of $V$, \item \emph{$C_{3}$}: stabilizers of extension fields of $F$ of prime degree, \item \emph{$C_{5}$}: stabilizers of subfields of $F$ of prime index, \item \emph{$C_{6}$}: normalizers of extraspecial groups in absolutely irreducible representations, \item \emph{$C_{8}$}: classical groups on $V$ contained in $G$, \item \emph{S}: almost simple groups, absolutely irreducible on $V$, whose representations of their (simple) \emph{socles} on $V$ cannot be realized over proper subfields of $F$, and which are not contained in members of \emph{$C_{8}$}. \end{itemize} In \cite{1} the representatives of the conjugacy classes of maximal subgroups of ${G}$ are specified in Tables $8.70$ and $8.71$. For the reader's convenience we provide the exact list of maximal subgroups of $G$ together with their orders. The notation used here for group structures is standard group-theoretic notation as in \cite{1}. 
In particular, $A \times B$ is the direct product of groups $A$ and $B$, and we write $A:B$ or $A.B$ to denote a split extension of $A$ by $B$ or an extension of $A$ by $B$ of unspecified type, respectively; the cyclic group of order $n$ is simply denoted by $n$, and $E_{q^{k}}$ stands for an elementary abelian group of order $q^{k}$.\\ If $M$ is a maximal subgroup of $G$ then one of the following holds. \begin{enumerate} \item $M \cong E_{q^{10}}:GL_{10}(q)$ of order $q^{55}(q - 1)(q^{2} - 1)(q^{3} - 1)(q^{4} - 1)(q^{5} - 1)(q^{6} - 1)(q^{7} - 1)(q^{8} - 1)(q^{9} - 1)(q^{10} - 1)$. \item $M \cong E_{q^{18}}:(SL_{9}(q)\times SL_{2}(q)):(q - 1)$ of order $q^{55}(q - 1)(q^{2} - 1)^{2}(q^{3} - 1)(q^{4} - 1)(q^{5} - 1)(q^{6} - 1)(q^{7} - 1)(q^{8} - 1)(q^{9} - 1)$. \item $M \cong E_{q^{24}}:(SL_{8}(q)\times SL_{3}(q)):(q - 1)$ of order $q^{55}(q - 1)(q^{2} - 1)^{2}(q^{3} - 1)^{2}(q^{4} - 1)(q^{5} - 1)(q^{6} - 1)(q^{7} - 1)(q^{8} - 1)$. \item $M \cong E_{q^{28}}:(SL_{7}(q)\times SL_{4}(q)):(q - 1)$ of order $q^{55}(q - 1)(q^{2} - 1)^{2}(q^{3} - 1)^{2}(q^{4} - 1)^{2}(q^{5} - 1)(q^{6} - 1)(q^{7} - 1)$. \item $M \cong E_{q^{30}}:(SL_{6}(q)\times SL_{5}(q)):(q - 1)$ of order $q^{55}(q - 1)(q^{2} - 1)^{2}(q^{3} - 1)^{2}(q^{4} - 1)^{2}(q^{5} - 1)^{2}(q^{6} - 1)$. \item $M \cong (q - 1)^{10}: S_{11}$ (if $q \geq 5$) of order $2^{8}.3^{4}.5^{2}.7.11.(q - 1)^{10}$. \item $M \cong \frac{q^{11} - 1}{q - 1}:11$ of order $11.\frac{q^{11} - 1}{q - 1}$. \item $M \cong SL_{11}(q_{0}). (11,\frac{q - 1}{q_{0} - 1})$ (if $q = q_{0}^{r}$, $r$ prime) of order $q_{0}^{55}(q_{0}^{2} - 1)(q_{0}^{3} - 1)(q_{0}^{4} - 1)(q_{0}^{5} - 1)(q_{0}^{6} - 1)(q_{0}^{7} - 1)(q_{0}^{8} - 1)(q_{0}^{9} - 1)(q_{0}^{10} - 1)(q_{0}^{11} - 1).(11,\frac{q - 1}{q_{0} - 1})$. \item $M \cong 11_{+}^{1+2}:Sp_{2}(11)$ (if $q = p \equiv 1$ (mod $11$) or $q=p^{5}$ and $p \equiv 3, 4, 5, 9$ (mod $11$)) of order $2^{3}.3.5.11^{4}$ (here $11_{+}^{1+2}$ stands for an extraspecial group of order $11^{3}$ and exponent $11$). 
\item $M \cong d \times SO_{11}(q)$ (if $q$ is odd) of order $d.q^{25}(q^{2} - 1)(q^{4} - 1)(q^{6} - 1)(q^{8} - 1)(q^{10} - 1)$. \item $M \cong (11,q_{0} - 1) \times SU_{11}(q_{0})$ (if $q = q_{0}^{2}$) of order $q_{0}^{55}(q_{0}^{2} - 1)(q_{0}^{3} + 1)(q_{0}^{4} - 1)(q_{0}^{5} + 1)(q_{0}^{6} - 1)(q_{0}^{7} + 1)(q_{0}^{8} - 1)(q_{0}^{9} + 1)(q_{0}^{10} - 1)(q_{0}^{11} + 1).(11,q_{0} - 1)$. \item $M \cong d \times L_{2}(23)$ (if $q = p \equiv 1, 2, 3, 4, 6, 8, 9, 12, 13, 16, 18$ (mod $23$), $q \neq 2$) of order $2^{3}.3.11.23.d$. \item $M \cong d \times U_{5}(2)$ (if $q = p \equiv 1$ (mod $3$)) of order $2^{10}.3^{5}.5.11.d$. \item $M \cong M_{24}$ (if $q = 2$) of order $2^{10}.3^{3}.5.7.11.23$. \end{enumerate}
\indent\indent Now we proceed as follows. First we prove that there is only one type of maximal subgroups of $G$ whose order is a multiple of $Q$; these are the groups (in Aschbacher's class \emph{$C_{3}$}) in case $7$ above, of order $11.\frac{q^{11} - 1}{q - 1}$. In the second step we find two elements $x_{11}$ and $y_{11}$ of respective orders $2$ and $3$ in $G$ such that their product has order $Q$. Finally, we deduce that the group $G$ is generated by these two elements. Then the projective images of these elements will generate the group $\overline{G}$.\\
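The first step rests on Zsigmondy primes: a primitive prime divisor $r$ of $p^{11e} - 1$ divides $p^{11e} - 1$ but no $p^{i} - 1$ for $0 < i < 11e$; equivalently, the multiplicative order of $p$ modulo $r$ is exactly $11e$, so $r \equiv 1 \pmod{11e}$ and in particular $r \geq 23$. A small sketch (pure Python; the function names are ours, not from the paper) computes such divisors in small cases:

```python
# Primitive prime divisors of p^k - 1: primes r dividing p^k - 1
# but none of p^i - 1 for 0 < i < k. Equivalently, the multiplicative
# order of p modulo r is exactly k.

def prime_factors(n):
    """Distinct prime factors of n by trial division."""
    fs, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            fs.add(d)
            n //= d
        d += 1
    if n > 1:
        fs.add(n)
    return sorted(fs)

def mult_order(a, r):
    """Multiplicative order of a modulo the prime r (assumes r does not divide a)."""
    m, x = 1, a % r
    while x != 1:
        x = (x * a) % r
        m += 1
    return m

def primitive_prime_divisors(p, k):
    return [r for r in prime_factors(p**k - 1) if mult_order(p, r) == k]

print(primitive_prime_divisors(2, 11))  # 2^11 - 1 = 2047 = 23 * 89
print(primitive_prime_divisors(2, 6))   # Zsigmondy's exceptional case: none
```

For $q = 2$ this recovers $r \in \{23, 89\}$, the primes used in the divisibility check below; the case $(p, k) = (2, 6)$ illustrates the single exceptional case in Zsigmondy's theorem.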
\indent\indent Let us start with the first step in our strategy. In order to prove the above mentioned arithmetic fact we use Zsigmondy's well-known theorem, and take a primitive prime divisor of $p^{11e} - 1$, i.e., a prime $r$ which divides $p^{11e} - 1$ but does not divide $p^{i} - 1$ for $0 < i < 11e$. Obviously $r \geq 23$ (as $r - 1$ is a multiple of $11e$) and also $r$ divides $Q$. Now it is easily seen that the only maximal subgroups of orders divisible by $r$ are those in cases $11$, and $12$ or $14$ with $r = 23$. In case $11$, if $Q = \frac{q_{0}^{22} - 1}{q_{0}^{2} - 1}$ divides the order of $M$, then $\frac{q_{0}^{11} - 1}{q_{0} - 1}$ should be a factor of the integer $q_{0}^{55}(q_{0} + 1)(q_{0}^{2} - 1)(q_{0}^{3} + 1)(q_{0}^{4} - 1)(q_{0}^{5} + 1)(q_{0}^{6} - 1)(q_{0}^{7} + 1)(q_{0}^{8} - 1)(q_{0}^{9} + 1)(q_{0}^{10} - 1).(11,q_{0} - 1)$, an impossibility, again by Zsigmondy's theorem. As for the groups in case $12$, we have $Q = \frac{p^{11} - 1}{p - 1} \geq \frac{3^{11} - 1}{3 - 1} > 2^{3}.3.11^{2}.23 \geq |M|$. Lastly, in case $14$, $Q = 2^{11} - 1 = 23.89$ does not divide the order of $M_{24}$.\\ \indent\indent Further, let us choose for $x_{11}$ the matrix \begin{center} \[ x_{11} = \left[ \begin{array}{rrrrrrrrrrr} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
\end{array} \right]\] \end{center} and $y_{11}$ to be in the form \begin{center} \[ y_{11} = \left[ \begin{array}{rrrrrrrrrrl} -1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \delta_{1}\\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \delta_{2}\\ 0 & 0 & -1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & \delta_{3}\\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \delta_{4}\\ 0 & 0 & 0 & 0 & -1 & -1 & 0 & 0 & 0 & 0 & \delta_{5}\\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & \delta_{6}\\ 0 & 0 & 0 & 0 & 0 & 0 & -1 & -1 & 0 & 0 & \delta_{7}\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & \delta_{8}\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & -1 & \delta_{9}\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & \delta_{10}\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\
\end{array} \right].\] \end{center} Then $x_{11}$ is an involution of $G$ and $y_{11}$ is an element of order $3$ in $G$ for any $\delta_{1}$, $\delta_{2}$, $\delta_{3}$, $\delta_{4}$, $\delta_{5}$, $\delta_{6}$, $\delta_{7}$, $\delta_{8}$, $\delta_{9}$, $\delta_{10} \in GF(q)$, also \begin{center} \[ z_{11}=x_{11}y_{11} = \left[ \begin{array}{rrrrrrrrrrr} 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & \delta_{10}\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & -1 & \delta_{9}\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & \delta_{8}\\ 0 & 0 & 0 & 0 & 0 & 0 & -1 & -1 & 0 & 0 & \delta_{7}\\ 0 & 0 & 0 & 0 & -1 & 0 & 0 & 0 & 0 & 0 & -\delta_{6}\\ 0 & 0 & 0 & 0 & -1 & -1 & 0 & 0 & 0 & 0 & \delta_{5}\\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \delta_{4}\\ 0 & 0 & -1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & \delta_{3}\\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \delta_{2}\\ -1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \delta_{1}\\
\end{array} \right].\] \end{center} The characteristic polynomial of $z_{11}$ is \begin{center} $f_{z_{11}}(t) = t^{11}-\delta_{1}t^{10}+(\delta_{10}-1)t^{9}+(2\delta_{1}+\delta_{3}+1)t^{8}-(\delta_{1}+\delta_{8}+\delta_{9}+2\delta_{10}+1)t^{7}-(\delta_{1}-\delta_{2}+\delta_{3}+\delta_{5}-\delta_{10})t^{6}+(\delta_{1}+\delta_{3}-\delta_{6}+\delta_{7}+\delta_{8}+\delta_{9}+\delta_{10}+1)t^{5}+(\delta_{1}-\delta_{2}-\delta_{4}-\delta_{7}-\delta_{8}-\delta_{9}-\delta_{10})t^{4}-(\delta_{1}-\delta_{2}-\delta_{4}+\delta_{9}+\delta_{10}+2)t^{3}+(\delta_{2}+\delta_{9}+\delta_{10}+2)t^{2}-(\delta_{2}-1)t-1$. \end{center} Now let us take an element $\omega$ of order $Q$ in the multiplicative group of the field $GF(q^{11})$ and put \begin{center} $l(t) = (t - \omega)(t - \omega^{q})(t - \omega^{q^{2}})(t - \omega^{q^{3}})(t - \omega^{q^{4}})(t - \omega^{q^{5}})(t - \omega^{q^{6}})(t - \omega^{q^{7}})(t - \omega^{q^{8}})(t - \omega^{q^{9}})(t - \omega^{q^{10}}) = t^{11} - at^{10} + bt^{9} - ct^{8} + dt^{7} - et^{6} + ft^{5}-gt^{4}+ht^{3}-kt^{2}+mt- 1$. \end{center} The last polynomial has all its coefficients in the field $GF(q)$ and the roots of $l(t)$ are pairwise distinct (in fact, the polynomial $l(t)$ is irreducible over $GF(q)$, though this is not necessary for our considerations). 
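That $x_{11}$ is an involution and that $y_{11}$ has order $3$ for every choice of $\delta_{1}, \ldots, \delta_{10}$ can be checked mechanically: both identities hold over $\mathbb{Z}$ with the $\delta_{i}$ as free integer values, hence over every $GF(q)$ after reduction. A sketch (Python, exact integer arithmetic, randomly chosen $\delta_{i}$; helper names are ours):

```python
import random

def mat_mul(A, B):
    """Multiply two square integer matrices."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def identity(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

# x_11: the reversal permutation matrix with a -1 in the middle entry (6,6).
x = [[0] * 11 for _ in range(11)]
for i in range(11):
    x[i][10 - i] = 1
x[5][5] = -1

def build_y(d):
    """y_11 with last-column entries delta_1..delta_10 = d[0..9]."""
    y = [[0] * 11 for _ in range(11)]
    for b in range(5):            # five 2x2 blocks [[-1,-1],[1,0]] of order 3
        i = 2 * b
        y[i][i], y[i][i + 1] = -1, -1
        y[i + 1][i] = 1
    for i in range(10):
        y[i][10] = d[i]
    y[10][10] = 1
    return y

deltas = [random.randint(-5, 5) for _ in range(10)]
y = build_y(deltas)

print(mat_mul(x, x) == identity(11))              # True: x^2 = I
print(mat_mul(y, mat_mul(y, y)) == identity(11))  # True: y^3 = I
```

The deltas drop out of $y^{3}$ because each diagonal block $A$ of $y_{11}$ satisfies $A^{2} + A + I = 0$, so the last-column contribution $(B^{2} + B + I)\,d$ vanishes identically.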
The polynomials $f_{z_{11}}(t)$ and $l(t)$ are identically equal if \begin{center} $\delta_{1}=a$, $\delta_{2}=-m+1$, $\delta_{3}=-2a-c-1$, $\delta_{4}=a+2m-2-k+h$, $\delta_{5}=a-m+3+c+b+e$, $\delta_{6}=-a-c+1+g-m+k-h-f$, $\delta_{7}=3-m+k-h+g+a+b+d$, $\delta_{8}=-a+k-m+1-b-d$, $\delta_{9}=m-4-b-k$, $\delta_{10}= b+1$. \end{center} For these values of $\delta_{i}$ $(i=1,\ldots,10)$ we have $f_{z_{11}}(t)=l(t)$ and then, in $GL_{11}(q^{11})$, $z_{11}$ is conjugate to diag $(\omega, \omega^{q}, \omega^{q^{2}}, \omega^{q^{3}}, \omega^{q^{4}}, \omega^{q^{5}}, \omega^{q^{6}}, \omega^{q^{7}}, \omega^{q^{8}}, \omega^{q^{9}}, \omega^{q^{10}})$, and hence $z_{11}$ is an element of $G$ of order $Q$.\\ \indent\indent Now, $H = \left\langle x_{11},y_{11}\right\rangle$ is a subgroup of $G$ of order divisible by $6Q$. We have already proved above that the only maximal subgroup of $G$ whose order is a multiple of $Q$ is that in Aschbacher's class \emph{$C_{3}$}, of order $11Q$, which means that $H$ cannot be contained in any maximal subgroup of $G$. Thus $H = G$ and $G = \left\langle x_{11},y_{11}\right\rangle$ is a $(2,3)$-generated group; $\overline{G} = \left\langle {\overline{x_{11}},\overline{y_{11}}}\right\rangle$ is a $(2,3)$-generated group too. \\ \indent\indent This completes the proof of the theorem.
$\square$\\
E. Gencheva and Ts. Genchev\\ Department of Mathematics\\ Technical University of Varna \\ Varna, Bulgaria\\ e-mail: elenkag@abv.bg; genchev57@yahoo.com\\
K. Tabakov\\ Faculty of Mathematics and Informatics\\ Department of Algebra\\ "St. Kliment Ohridski" University of Sofia\\ Sofia, Bulgaria\\ e-mail: ktabakov@fmi.uni-sofia.bg\\
\end{document} |
\begin{document}
\title{A structural geometrical analysis of weakly infeasible SDPs} \begin{abstract} In this article, we present a geometric and theoretical analysis of semidefinite feasibility problems (SDFPs). This is done by decomposing an SDFP into smaller problems, in a way that preserves most feasibility properties of the original problem. With this technique, we develop a detailed analysis of weakly infeasible SDFPs to understand clearly and systematically how weak infeasibility arises in semidefinite programming. In particular, we show that for a weakly infeasible problem over $n\times n$ matrices, at most $n-1$ directions are required to approach the positive semidefinite cone. We also present a discussion on feasibility certificates for SDFPs and related complexity results. \end{abstract}
\section{Introduction.} In this paper, we deal with the following semidefinite feasibility problem \begin{equation} \max \; 0 \ \ \text{s.t.}\ \; x \in (L+c)\cap K_n, \label{sdpf} \end{equation} where $L \subseteq \mathbb{S}_n$ is a vector subspace and $c \in \mathbb{S}_n$. By $\mathbb{S}_n$ we denote the linear space of $n\times n$ real symmetric matrices and $K_{n} \subseteq \mathbb{S}_{n}$ denotes the cone of $n\times n$ positive semidefinite matrices. We denote the problem \eqref{sdpf} by $(K_n,L,c)$.
It is known that every instance of a semidefinite program falls into one of the following four statuses: \begin{itemize}
\item \emph{Strongly feasible}: $(L+c)\cap \mathrm{int}\, (K_n) \neq \emptyset$, where
$\mathrm{int}\, (K_n)$ denotes the interior of $K_n$.
\item \emph{Weakly feasible}: $(L+c)\cap \mathrm{int}\, (K_n) = \emptyset$, but $(L+c)\cap K_n \neq \emptyset$.
\item \emph{Weakly infeasible}: $(L+c)\cap K_n = \emptyset$ and $\mathrm{dist}(K_n,L+c) = 0$.
\item \emph{Strongly infeasible}: $(L+c)\cap K_n = \emptyset$ and $\mathrm{dist}(K_n,L+c) > 0$. \end{itemize}
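The contrast between the last two statuses can be seen on the classical $2\times 2$ instance $L + c = \left\{ \left(\begin{smallmatrix} t & 1 \\ 1 & 0 \end{smallmatrix}\right) : t \in \mathbb{R} \right\}$: a positive semidefinite matrix with a zero diagonal entry must have zeros in the corresponding row and column, so the affine set misses $K_2$, yet its distance to $K_2$ vanishes as $t \to \infty$. The following numerical sketch (the helper \texttt{dist\_to\_psd} is our own illustration, not notation used below) makes this visible:

```python
import numpy as np

def dist_to_psd(x):
    # Frobenius distance from a symmetric matrix to the PSD cone:
    # the projection zeroes out negative eigenvalues, so the distance
    # is the norm of the negative part of the spectrum.
    w = np.linalg.eigvalsh(x)
    return float(np.linalg.norm(np.minimum(w, 0.0)))

# The affine set {[[t, 1], [1, 0]] : t real} contains no PSD matrix,
# but its distance to K_2 tends to zero: weak infeasibility.
for t in [1.0, 10.0, 100.0, 1000.0]:
    x = np.array([[t, 1.0], [1.0, 0.0]])
    print(t, dist_to_psd(x))
```

Every matrix in the set has a strictly negative eigenvalue, while the printed distances decay roughly like $1/t$; for a strongly infeasible problem the distances would instead stay bounded away from zero.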
Among the four feasibility statuses, all but weak infeasibility afford simple finite certificates: an interior-feasible solution for strong feasibility, a pair consisting of a feasible solution and a normal vector of a separating hyperplane for weak feasibility, and a dual improving direction for strong infeasibility. The last one is sometimes called a Farkas-type certificate, and plays an important role in optimization theory. However, it is not evident whether weak infeasibility affords such a finite certificate.
By ``finite certificate'' we mean a finite sequence in some finite dimensional vector space. In this paper, we focus on the structural analysis of weak infeasibility in semidefinite programming and develop a procedure which distinguishes the four statuses. We also obtain a finite certificate for weak infeasibility. But we emphasize that the main feature of our approach is concreteness in analyzing weak infeasibility.
In view of finite certificates, we mention that it is possible to obtain a finite and polynomially bounded certificate of weak infeasibility by using Ramana's extended Lagrangian dual \cite{Ramana95anexact}. This result is based on the fact that
{\it $(K_n,L,c)$ is weakly infeasible if and only if it is \emph{infeasible} and \emph{not strongly infeasible} }
\noindent as we will discuss in Section 2. Ramana developed a generalized Farkas' Lemma for SDP which holds without any assumptions. Since \emph{infeasibility} and \emph{not strong infeasibility} have finite certificates, the same is true for weak infeasibility. As this argument has an existential flavour, its implications for the structure of the problem are not so clear. In this paper, we study weak infeasibility in semidefinite programming from a more constructive point of view to answer, for instance, the following basic question:
{\it Given a weakly infeasible SDFP, how can we generate a sequence
$\{u^{(i)} \mid u^{(i)}\in L + c,\ i = 1, 2, \ldots\}$ such that $\lim_{i\rightarrow\infty}{\rm dist}(u^{(i)}, K_n) =0$? }
\noindent Since the distance between $K_n$ and $L+c$ is zero, we readily see that there exists a nonzero element $a$ in $K_n\cap L$. However, it is not clear how $a$ is related to the weak infeasibility of $(K_n,L,c)$. Since the problem is infeasible, ${\rm dist}(ta + b, K_n)> 0$ for any $t > 0$ and $b\in L+c$. It is natural to ask what happens as $t$ goes to infinity: does $\lim_{t\rightarrow \infty} {\rm dist}(ta + b, K_n)$ diverge, approach a finite nonzero value, or vanish? If we cannot find any $b\in L+c$ such that $\lim_{t\rightarrow \infty} {\rm dist}(ta + b, K_n)= 0$ holds, how can $a$ be used to construct points close to the cone?
We will show that $a$ alone is not enough to generate such a sequence, but $(n-1)$ directions including $a$ are sufficient (with an appropriate choice of $b$), whenever the problem is weakly infeasible. In other words, if $(K_n, L, c)$ is weakly infeasible then there exists an $(n-1)$-dimensional affine subspace ${\cal F} \subseteq L + c$ such that ${\cal F}\cap K_n = \emptyset$ but ${\rm dist}({\cal F}, K_n)=0$. This result is a bit surprising to us, because, in general, if $K$ is a closed convex cone and $(K,L,c)$ is weakly infeasible, then the number of directions necessary to approach the cone could be as large as the dimension of $L$, which could be up to $ (\frac{n(n+1)}{2} - 1)$ in our context.
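For $n = 2$ the bound $n-1 = 1$ can be observed directly: take $L$ spanned by $\mathrm{diag}(1,0)$ and $c$ the matrix with ones off the diagonal and zeros on it. The resulting problem is weakly infeasible, and the single hyper feasible direction $a = \mathrm{diag}(1,0)$, together with $b = c$, already approaches the cone. A quick numerical check (our own illustration, not part of the development):

```python
import numpy as np

# Weakly infeasible instance over S_2: L + c = {[[t, 1], [1, 0]] : t real}.
# a = diag(1, 0) is a nonzero element of K_2 ∩ L; with b = c, the single
# direction a (n - 1 = 1 for n = 2) suffices to approach the cone.
a = np.array([[1.0, 0.0], [0.0, 0.0]])
b = np.array([[0.0, 1.0], [1.0, 0.0]])
for t in [1.0, 1e2, 1e4]:
    lam_min = np.linalg.eigvalsh(t * a + b).min()
    print(t, lam_min)   # stays negative, but rises to 0 as t grows
```

The minimum eigenvalue of $ta + b$ equals $(t - \sqrt{t^2+4})/2 \approx -1/t$, so the points $ta+b$ get arbitrarily close to $K_2$ without ever entering it.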
The proof is done by constructing a set of directions in $L$ which we call a {\it hyper feasible partition}. These directions are obtained recursively starting from a nonzero element in $K_n\cap L$. An important feature of this set is that, even though each direction is not necessarily positive (semi)definite, we can always find a positive linear combination which is almost positive semidefinite (the minimum eigenvalue can be made arbitrarily close to zero). The introduction of hyper feasible partitions is another main contribution of this paper and they provide new insight into the analysis of ill-conditioned semidefinite programs.
One possible application of our results is as follows. Consider the following SDP \begin{equation*} \max \; \inProd{b}{x} \ \ \text{s.t.} \; x \in (L+c)\cap K_n \tag{P}\label{sdp_primal}, \end{equation*} and suppose that the optimal value $b^*$ is finite but not attained. The set $\{x \in L+c \mid \inProd{b}{x} = b^*\}$ is non-empty and is also an affine space. Denoting by $\widetilde {L}$ the underlying vector space and letting $\widetilde c$ be any point which belongs to the affine space, we have that $(K_n,\widetilde{L}, \widetilde{c})$ is weakly infeasible. Indeed such problems arise in many applications in semidefinite programming including control theory and polynomial optimization \cite{waki_how_2012}.
The main tool we use is a simple decomposition result (Theorem \ref{theo_decomp}), which implies that some semidefinite feasibility problems (SDFPs) can be decomposed into smaller subproblems in a way that the feasibility properties are mostly preserved. We also discuss two procedures for analyzing feasibility problems, a \emph{forward procedure} (\textbf{FP}) and a \emph{backward procedure} (\textbf{BP}). In particular, \textbf{BP} can distinguish the 4 different feasibility statuses in a systematic manner.
We review related previous works. The existence of weak infeasibility/feasibility and of a finite duality gap is one of the main difficulties in semidefinite programming. These situations may occur in the absence of interior-feasible solutions to the primal and/or dual. Two possible techniques to recover interior-feasibility, by reducing the feasible region of the problem or by expanding the feasible region of its dual counterpart, are the facial reduction algorithm (FRA) and the conic expansion approach (CEA), respectively. FRA was developed by Borwein and Wolkowicz \cite{borwein_facial_1981} for problems more general than conic programming, whereas CEA was developed by Luo, Sturm and Zhang \cite{Luo96dualityand} for conic programming.
In the early stages of research on semidefinite programming, Ramana \cite{Ramana95anexact} developed an extended Lagrange-Slater dual (ELSD) that has no duality gap. ELSD has the remarkable feature that the size of the extended problem is bounded by a polynomial in terms of the size of the original problem. In \cite{ramana_strong_1997}, Ramana, Tun\c{c}el and Wolkowicz demonstrated that ELSD can be interpreted as a facial reduction problem; however, we should note that in the original FRA, the size of the problem is not polynomially bounded, see also \cite{pataki_strong_2013}. In \cite{polik07b}, Polik and Terlaky provided strong duals for conic programming over symmetric cones. Recently, Klep and Schweighofer developed another dual based on real algebraic geometry where strong duality holds without any constraint qualification \cite{klep_exact_2013}. Like ELSD, their dual is represented purely in terms of the data of the original problem and its size is bounded by a polynomial in terms of the size of the original problem. The complexity of SDFPs remains a subtle issue. This topic was studied extensively by Porkolab and Khachiyan \cite{Porkolab_Khachiyan_97}.
Waki and Muramatsu \cite{article_waki_muramatsu} considered a FRA for conic programming and showed that FRA can be regarded as a dual version of CEA. See the excellent review of FRA by Pataki \cite{pataki_strong_2013}, where he points out the relation between facial reduction and extended duals. Pataki also found that all ill-conditioned semidefinite programs can be reduced to a common $2\times 2$ semidefinite program \cite{pataki_bad_sdps}. Finally, we mention that Waki showed that weakly infeasible instances can be obtained from semidefinite relaxation of polynomial optimization problems \cite{waki_how_2012}.
The problem of weak infeasibility is closely related to closedness of the image of $K_n$ by a certain linear map. A comprehensive treatment of the subject was given by Pataki \cite{pataki_closedness_2007}. We will discuss the connection between Pataki's results and weak infeasibility in Section 2.
This paper is organized as follows. In Section \ref{sec:certificates}, we discuss certificates for the different feasibility statuses and point out the connections to previous works. In Section \ref{sec:decomp} we present Theorem \ref{theo_decomp} and discuss how certain SDFPs can be broken into smaller problems. We also prove the bound $n-1$ for the number of directions needed to approach $K_n$. In Section \ref{sec:backward_procedure}, a procedure to distinguish between the $4$ different feasibility statuses is given. Section \ref{sec:conc} summarizes this work.
\section{Characterization of different feasibility statuses}\label{sec:certificates}
In this section, we review the characterization of different feasibility statuses of semidefinite programs with emphasis on weak infeasibility. \subsection{Certificates and \NP\ class in the Blum-Shub-Smale model}
Our main interest is in \emph{finite certificates} and computational complexity. The model of computation we use is the Blum-Shub-Smale model (BSS model) \cite{blum_complexity_1997} of real computation. The main aspects are that we do not care about the bit length of a real number, we can evaluate any rational function over $\mathbb{R}$, and the machine can branch the flow of execution by evaluating a linear inequality. ``Finite'' in this context means that the certificates are composed of a finite number of vectors contained in some finite dimensional vector space. The length of the certificate is then the total number of coordinates among all the vectors it contains. It is also required that a verifier procedure exists. Such a procedure receives as input the problem and the certificate and attests that the certificate is indeed valid in a finite amount of time. If a decision problem admits a finite certificate with a verification procedure such that the length of the former and time complexity of the latter are polynomials in terms of the size of the problem then it is in \NP\ under the BSS model. The main decision problem we are interested in is: \emph{given $(K_n,L,c)$, what is its feasibility status?}\footnote{Strictly speaking, a decision problem should have ``yes'' or ``no'' as answers, but in our case the possible answers are strong/weakly feasible, strong/weakly infeasible. We could have broken down the decision problem into 4 different decision problems having ``yes'' or ``no'' as answers. We did not do so, because we wanted to treat them in a unified manner in our procedure \textbf{BP}.}
Suppose we have an algorithm for a decision problem which employs oracles for some problems, for instance, returning a feasible solution to an SDP. (This is a typical situation in the literature when talking about regularization procedures.) In such a situation, if we want to show that the decision problem is in \NP, we can do the following. First, we prove that correctness of the output of each oracle can be verified in polynomial time (with respect to the size of the problem). Then, we evaluate the time complexity of the algorithm assuming that the cost for each call of the oracle is one. If the running time of the algorithm is bounded by a polynomial in the size of the problem, then the decision problem is in $\NP$. The set of outputs given by the oracles can be used as a certificate and the algorithm itself acts as a verifier procedure.
Throughout this paper, we assume that $L$ is represented as the solution set of a system of linear equations, where the coefficients and the right-hand side are explicitly given. We also note that checking positive semidefiniteness of a symmetric matrix can be done in polynomial time by using a variant of $LDL^T$ decomposition.
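One way to realize such a test is by pivoted symmetric elimination: a symmetric matrix is positive semidefinite exactly when no elimination step meets a negative diagonal entry and any remaining block with (numerically) zero diagonal is itself zero. The sketch below is our own illustration of this $LDL^T$-style check, not code from the paper:

```python
import numpy as np

def is_psd(a, tol=1e-10):
    """LDL^T-style elimination test for positive semidefiniteness.
    Each round pivots on the largest diagonal entry and passes to the
    Schur complement; a PSD matrix whose diagonal vanishes must vanish
    entirely. Runs in O(n^3) time."""
    a = np.array(a, dtype=float)
    while a.shape[0] > 0:
        d = np.diag(a)
        if d.min() < -tol:
            return False                  # negative diagonal entry: not PSD
        k = int(d.argmax())
        if d[k] <= tol:
            # diagonal is (numerically) zero: PSD forces the block to be zero
            return bool(np.abs(a).max() <= tol)
        col = a[:, k] / a[k, k]
        a = a - np.outer(col, a[k, :])    # Schur complement step
        a = np.delete(np.delete(a, k, axis=0), k, axis=1)
    return True

print(is_psd([[2.0, 1.0], [1.0, 1.0]]))  # True
print(is_psd([[1.0, 1.0], [1.0, 0.0]]))  # False
```

Unlike a plain Cholesky factorization, this variant handles singular PSD matrices correctly, which matters here since hyper feasible directions are typically rank deficient.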
\subsection{Characterization of feasibility statuses}
We start with the following proposition which characterizes strong feasibility, weak feasibility and strong infeasibility.
\begin{proposition}\label{prop_certificate} Let $L$ be a subspace of $\mathbb{S}_n$ and $c \in \mathbb{S}_n$ then $(K_n,L,c)$ is: \begin{enumerate} \item \emph{Strongly feasible}, if and only if there is $ x \in L +c$ such that $x$ is positive definite. \item \emph{Weakly feasible} if and only if there is \begin{enumerate}[i.]
\item $x \in L +c$ such that $x$ is positive semidefinite
\item $y \in L^\perp \cap K_n$ such that $y \neq 0$, $\inProd{y}{c} = 0$. \end{enumerate} \item \emph{Strongly infeasible} if and only if $c\neq 0$ and $(K_n, L_{\rm SI}, c_{\rm SI})$ is feasible, where $L_{\rm SI} = L^\perp \cap {c}^\perp$, and $c_{\rm SI} = -\frac{c}{\norm{c}^2}$. \end{enumerate} \end{proposition} \begin{proof} Item $1$ is immediate. Items $2$ and $3$ follow easily from Theorems 11.3 and 11.4 of \cite{rockafellar}. Item $2$ corresponds to the situation where $K_n$ and $L+c$ can be properly separated but a feasible point still exists, and item $3$ to the case where they can be strongly separated. Also, for a proof of item $3$, see, for instance, Lemma 5 of \cite{Luo97dualityresults}. \end{proof} Proposition \ref{prop_certificate} already implies that deciding strong feasibility, weak feasibility and strong infeasibility is in $\NP$, in the BSS model. In addition, $(K_n,L,c)$ is not strongly feasible if and only if item $2.ii$ holds (but not necessarily $2.i$). This means that deciding strong feasibility lies in $\coNP$ as well\footnote{In this case, the decision problem has either ``yes'' or ``no'' as possible answers, so it makes sense to talk about $\coNP$.}.
In Proposition \ref{prop_certificate}, weak infeasibility is absent. When proving weak infeasibility, it is necessary to show that the distance between $K_n$ and $L+c$ is $0$. The obvious way is to produce a sequence $\{x_k\} \subseteq L+c$ such that $\lim _{k \to +\infty} \text{dist}(x_k,K_n) = 0$. In \cite{Luo96dualityand} it was shown that $(K_n,L,c)$ is weakly infeasible if and only if there is no dual improving direction \emph{and} there is a dual improving sequence (see Lemma 6 and Table 1 in \cite{Luo96dualityand}). But this is not a finite certificate of weak infeasibility.
A finite certificate for weak infeasibility can be obtained by using Ramana's results on an extended Lagrangian dual for semidefinite programming \cite{Ramana95anexact}. Ramana's dual has a number of key properties: it is written explicitly in terms of problem data, it has no duality gap and the optimal value is always attained when finite. With his dual, it was possible to develop an exact Farkas-type lemma for semidefinite programming. In Theorem 19 of \cite{Ramana95anexact}, he constructed another SDFP $\mathcal{RD}(K_n,L,c)$ for which the following holds without any regularity conditions:
\ \ \ \ \ \ \ \ {\it $(K_n,L,c)$ is feasible if and only if $\mathcal{RD}(K_n,L,c)$ is infeasible.}
\noindent Furthermore, the size of $\mathcal{RD}(K_n,L,c)$ is bounded by a polynomial that depends only on the size of the system $(K_n,L,c)$. Based on this strong result, we obtain a finite certificate of weak infeasibility as in the following proposition:
\begin{proposition}\label{col_weak_inf} We have the following: \begin{enumerate}[$i.$] \item $(K_n,L,c)$ is weakly infeasible $\Leftrightarrow$ $c \neq 0$, $\mathcal{RD}(K_n,L,c)$ and $\mathcal{RD}(K_n,{L}_{\rm SI},{c}_{\rm SI})$ are feasible. \item The problem of deciding whether a given $(K_n,L,c)$ is weakly infeasible is in $\NP \cap \coNP$ in the BSS model. \end{enumerate}
\end{proposition} \begin{proof} A feasible solution to $\mathcal{RD}(K_n,L,c)$ attests the infeasibility of $(K_n,L,c)$. As $\mathcal{RD}(K_n,L,c)$ has polynomial size, it is possible to check that a point is indeed a solution to it in polynomial time.
Note that a problem is weakly infeasible if and only if it is infeasible and is not strongly infeasible. Due to Proposition 1, we have that $(K_n,L,c)$ is not strongly infeasible if and only if $c = 0$ or $c\neq 0$ and $\mathcal{RD}(K_n,L_{\rm SI}, c_{\rm SI})$ is feasible. Hence, feasible solutions to $ \mathcal{RD}(K_n,L,c)$ and $\mathcal{RD}(K_n,L_{\rm SI},c_{\rm SI})$ can be used together as a certificate for weak infeasibility. Such a certificate can be checked in polynomial time, hence the problem is in $\NP$.
Now, either the fact that $c = 0$, a feasible solution to $(K_n,L_{\rm SI}, c_{\rm SI})$, \emph{or} a feasible solution to $(K_n,L,c)$ can be used to certify that a system is not weakly infeasible. This shows that deciding weak infeasibility is indeed in $\coNP$. \end{proof}
The important point in the argument above is having both a certificate of infeasibility for the original system and a certificate of infeasibility for the system $(K_n, L_{\rm SI}, c_{\rm SI})$. Any method of obtaining finite certificates of infeasibility can be used in place of $\mathcal{RD}$, as long as it takes polynomial time to verify them. See the comments after Theorem 3.5 in Sturm \cite{sturm_error_2000} and also Theorem 7.5.1 of \cite{sturm_handbook} for another certificate of infeasibility. Klep and Schweighofer \cite{klep_exact_2013} also developed certificates for infeasibility and
a hierarchy of infeasibility in which $0$-infeasibility corresponds to strong infeasibility and $k$-infeasibility to weak infeasibility, when $k > 0$. Liu and Pataki \cite{pataki_liu_2014} also introduced an infeasibility certificate for semidefinite programming. They defined the notion of a reformulation of a feasibility system and showed that $(K_n,L,c)$ is infeasible if and only if it admits a reformulation that converts the system to a special format, see Theorem 1 therein.
We mention a few more related works on weak infeasibility. The feasibility problem $(K_n,L,c)$ is weakly infeasible if and only if $c \in \mathrm{cl}\, (K_n+L) \setminus (K_n+L)$, where $\mathrm{cl}\,$ denotes the closure operator. Hence, a \emph{necessary} condition for weak infeasibility is that $K_n+L$ fails to be closed. This problem is closely related to closedness of the image of $K_n$ by a linear map which is the problem analyzed in detail by Pataki \cite{pataki_closedness_2007}. Theorem 1.1 in \cite{pataki_closedness_2007} provides a necessary and sufficient condition for the failure of closedness of $K_n+L$. Pataki's result implies that there is some $c \in \mathbb{S}_n$ such that $(K_n,L,c)$ is weakly infeasible if and only if $L^\perp \cap (\mathrm{cl}\, \mathrm{dir}\,(x,K_n) \setminus \mathrm{dir}\, (x,K_n)) \neq \emptyset$, where $x$ belongs to the relative interior of $L\cap K_n$ and $\mathrm{dir}\, (x,K_n)$ is the cone of feasible directions at $x$. This tells us whether $K_n$ and $L$ can accommodate a weakly infeasible problem. If it is indeed possible, Corollary 3.1 of \cite{pataki_closedness_2007} shows how to find an appropriate $c$. Bonnans and Shapiro \cite{bonnans_perturbation_2000} also discussed generation of weakly infeasible semidefinite programming problems. As a by-product of the proof of Proposition 2.193 therein, it is shown how to construct weakly infeasible problems.
In \cite{pataki_bad_sdps}, Pataki introduced the notion of \emph{well-behaved} system. $(K_n,L,c)$ is said to be well-behaved if for all $b \in \mathbb{S}_n$, the optimal value of \eqref{sdp_primal} and of its dual are the same and the dual is attained whenever it is finite. A SDP which is not well-behaved is said to be \emph{badly-behaved}. Pataki showed that badly-behaved SDPs can be put into a special shape, see Theorem 6 in \cite{pataki_bad_sdps}. Then, a necessary condition for weak infeasibility is that the homogenized system $(K_n,\widetilde {L},0)$ be badly-behaved, where $\widetilde {L}$ is spanned by $L$ and $c$. See the comments before Section 4 in \cite{pataki_bad_sdps}.
\section{A decomposition result.}\label{sec:decomp}
In this section, we develop a key decomposition result. Given an SDFP, we show how to construct a smaller dimensional SDFP which preserves most of the feasibility properties.
\subsection{Preliminaries}
First we introduce the notation. If $\mathcal{C},\mathcal{D}$ are subsets of some real
space, we write $\mathrm{dist}(\mathcal{C},\mathcal{D}) = \inf \{\|x-y\| \mid x \in \mathcal{C}, y \in \mathcal{D}\}$, where $\| \cdot \|$ is the Euclidean norm or the Frobenius norm, in the case of subsets of $\mathbb{S}_{n}$. By $\mathrm{int}\, (\mathcal{C})$ and $\mathrm{ri}\,(\mathcal{C})$ we denote the interior and the relative interior of $\mathcal{C}$, respectively. We use $I_{n}$ to denote the $n\times n$ identity matrix. Given $(K_n,L,c)$ and a matrix $A \in K_n\cap L$ with rank $k$, we will call $A$ a \emph{hyper feasible direction} of rank $k$. We remark that when $(K_n,L,c)$ is feasible, $A$ is also a recession direction of the feasible region.
Let $x$ be an $n\times n$ matrix, and $0\leq k \leq n$. We denote by $\pi_k(x)$ the upper left $k\times k$ principal submatrix of $x$. For instance, if \[ x = \left(\begin{array}{ccc} 1 & 2 & 3 \\ 2 & 4 & 5 \\ 3 & 5 & 6 \end{array}\right), \] then, \[ \pi_{2}(x) = \left(\begin{array}{cc}1&2\\2&4\end{array}\right). \] We define the subproblem $\pi_k(K_n, L, c)$ of $(K_n, L, c)$ to be \[ {\rm find}\ u \in \pi_k(L+c),\ \ \ u \succeq 0. \] In other words, it is the feasibility problem $(\pi_k(K_n), \pi_k(L), \pi_k(c)) $. We denote by ${\overline \pi}_k(x)$ the lower right $(n-k)\times (n-k)$ principal submatrix. In the example above, we have $ \overline{\pi}_{2}(x) = 6. $ In a similar manner, we write $\overline\pi_k(K_n, L, c)$ for the feasibility problem $(\overline \pi_k(K_n), \overline \pi_k(L), \overline \pi_k(c)) $. We remark that $\pi _n(x) = \overline \pi _0(x) = x$ and we define $\pi _0 (x) = \overline \pi _n(x) = 0$.
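In code, the two projections are plain submatrix extractions; the following sketch (the function names \texttt{pi} and \texttt{pi\_bar} are ours) reproduces the example above:

```python
import numpy as np

def pi(k, x):
    return x[:k, :k]        # upper-left k x k principal submatrix

def pi_bar(k, x):
    return x[k:, k:]        # lower-right (n-k) x (n-k) principal submatrix

x = np.array([[1, 2, 3],
              [2, 4, 5],
              [3, 5, 6]])
print(pi(2, x))       # [[1 2], [2 4]]
print(pi_bar(2, x))   # [[6]]
```

Note that the boundary cases match the conventions in the text: `pi(3, x)` returns $x$ itself, while `pi(0, x)` and `pi_bar(3, x)` are empty.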
The proposition below summarizes the properties of the Schur Complement. For proofs, see Theorem 7.7.6 of \cite{matrix_analysis}. \begin{proposition}[Schur Complement] Suppose $M =\bigl(\begin{smallmatrix} A&B\\ B^T&C \end{smallmatrix} \bigr)$ is a symmetric matrix partitioned into blocks such that $A$ is positive definite, then: \begin{itemize}
\item $M$ is positive definite if and only if $C - B^TA^{-1}B$ is.
\item $M$ is positive semidefinite if and only if $C - B^TA^{-1}B$ is. \end{itemize} \end{proposition}
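A quick numerical confirmation of the proposition on a small block matrix (the helper \texttt{min\_eig} and the sample data are our own illustration):

```python
import numpy as np

def min_eig(m):
    return float(np.linalg.eigvalsh(m).min())

# M = [[A, B], [B^T, C]] with A positive definite; (semi)definiteness
# of M is decided by the Schur complement C - B^T A^{-1} B.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
B = np.array([[1.0], [1.0]])
for C in (np.array([[2.0]]), np.array([[1.0]])):
    M = np.block([[A, B], [B.T, C]])
    S = C - B.T @ np.linalg.inv(A) @ B     # Schur complement of A in M
    print(min_eig(M) >= 0, min_eig(S) >= 0)   # signs always agree
```

Here $B^T A^{-1} B = 1.5$, so $C = (2)$ gives a positive definite $M$, while $C = (1)$ makes both $M$ and the Schur complement indefinite.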
The feasibility properties of a semidefinite program are not changed when a congruence transformation is applied, i.e., for any non-singular matrix $P$, we have that $(K_{n},L,c)$ and $(K_{n}, PLP^T, PcP^T)$ have the same feasibility status, where $PLP^{T} = \{PlP^{T} \mid l \in L \}$.
\subsection{The main result} It will be convenient for now to collapse weak feasibility and weak infeasibility into a single status. We say that $(K_n,L,c)$ is in \emph{weak status} if it is either weakly feasible or weakly infeasible. We start with the following basic observation. The proof is left to the reader. \begin{proposition}\label{prop_non_empty} If $(K_n,L,c)$ is weakly infeasible, there exists a nonzero vector in $K_n \cap L $. \end{proposition}
Now we present a key result in our paper. The following theorem says that if $(K_n,L,c)$ has a hyper feasible direction, then, we can construct another SDFP of smaller size whose feasibility status is {\it almost} identical to the original problem.
\begin{theorem}\label{theo_decomp} Let $(K_n,L,c)$ be an SDFP, and consider a subproblem $\pi_k(K_n,L,c)$ for some $k > 0$. If the subproblem $\pi_k(K_n,L,c)$ admits an interior hyper feasible direction (i.e., $\mathrm{int}\, \pi_k(K_n)\cap \pi_k(L) \neq \emptyset $) then: \begin{enumerate} \item $(K_n, L, c)$ is strongly feasible if and only if $\overline\pi_{k}(K_n,L,c)$ is. \item $(K_n, L, c)$ is strongly infeasible if and only if $\overline \pi_{k}(K_n,L,c)$ is. \item $(K_n, L, c)$ is in weak status if and only if $\overline \pi_{k}(K_n,L,c)$ is. \end{enumerate}
\end{theorem}\begin{proof}
Due to the assumption, there exists an $n\times n$ matrix $x \in L$ of the form \[ x = \left(\begin{array}{cc} A & 0 \\ 0 & 0 \end{array}\right), \] where $A$ is a $k\times k$ positive definite matrix.
We now prove items $1$ and $2$. Item $3$ will follow by elimination.
$(1) \Rightarrow )$ If $y \in L+c$ is positive definite, all its principal submatrices are also positive definite. Therefore, $\overline \pi _{k}(y)$ is positive definite.
$(1) \Leftarrow )$ Suppose that $y \in L +c$ is such that $\overline \pi _{k}(y) \in \mathrm{int}\, K_{n-k}$. Then, we may write $y = \left(\begin{smallmatrix} F & E \\E^T & G\end{smallmatrix} \right)$, where $G$ is $(n-k)\times(n-k)$ and positive definite. For large and positive $\alpha$, $F +\alpha A$ is positive definite and the Schur complement of $y +x\alpha$ is $G - E^T(F +\alpha A)^{-1}E $. Since $G$ is positive definite, it is clear that, increasing $\alpha$ if necessary, the Schur complement is also positive definite. For such an $\alpha$, $ y +x\alpha \in (L+c)\cap \mathrm{int}\, K_n$.
$(2) \Rightarrow)$. Suppose $(K_n,L,c)$ is strongly infeasible. Then there exists $s \in K_n$ such that $s\in L^\perp$ and $\inProd{s}{c} = -1$. As $x \in L$, we have $s \in K_n \cap \{x\}^\perp$. This means that $s$ can be written as $\left( \begin{smallmatrix} 0 & 0 \\ 0 & D\end{smallmatrix} \right)$, where $D$ belongs to $K_{n-k}$. It follows that $\overline \pi _{k}(s) \in \overline \pi _{k}(L)^\perp$ and $\inProd{\overline \pi _{ k}(s)}{\overline\pi _{ k}(c)} = -1$. By item $3$ of Proposition \ref{prop_certificate}, $\overline \pi _k(K_n,L,c)$ is strongly infeasible.
$(2) \Leftarrow)$. Now, suppose $\overline \pi _k(K_n,L,c)$ is strongly infeasible. Note that $\overline \pi _k$ is a non-expansive map, i.e., $\norm {\overline \pi _k (y) - \overline \pi _k (z)} \leq \norm {y - z}$ holds. In particular, if $\inf _{y \in L+c, z \in K_n} \norm {\overline \pi _k(y) - \overline \pi _k (z)} > 0$, then the same is true for $\inf _{y \in L+c, z \in K_n} \norm {y - z} $. \end{proof} \subsection{Forward Procedure}
Assume that $(K_n, L, c)$ admits a hyper feasible direction $\widetilde A_1$ of rank $k_1$. Theorem \ref{theo_decomp} might not be directly applicable but after appropriate congruence transformation by a nonsingular matrix $P_1$, we have that $(K_n, P_1^T L P_1, P_1^T c P_1)$ admits a hyper feasible direction of the form \[ A_1 = \left(\begin{array}{cc} \widehat A_1 & 0 \\ 0 & 0 \end{array}\right) = P_1^T\widetilde A _{1} P_1, \] where $\widehat A_1$ is a $k_1\times k_1$ positive definite matrix. The feasibility status of $(K_n, L, c)$ and \[ (K_{n-k_1}, \overline\pi_{k_1}(P_1^T L P_1), \overline\pi_{k_1}(P_1^T c P_1)) \] are mostly the same in the sense that items $1-3$ of Theorem \ref{theo_decomp} hold.
Now, suppose that $(K_{n-k_1}, \overline\pi_{k_1}(P_1^T L P_1), \overline\pi_{k_1}(P_1^T c P_1))$ admits a hyper feasible direction $\widetilde A_2$ of rank $k_2$. Then, after appropriate congruence transformation by $\widetilde P_2$, we obtain that \[ (K_{n-k_1}, {\widetilde P_2}^T\overline\pi_{k_1}(P_1^T L P_1) {\widetilde P_2}, {\widetilde P_2}^T\overline\pi_{k_1}(P_1^T c P_1) {\widetilde P_2}) \] admits a hyper feasible direction of the form \[ \left(\begin{array}{cc} \widehat A_2 & 0 \\ 0 & 0 \end{array}\right), \] where $\widehat A_2$ is a $k_2\times k_2$ positive definite matrix.
Now, the feasibility status of $(K_{n-k_1}, \overline\pi_{k_1}(P_1^T L P_1), \overline\pi_{k_1}(P_1^T c P_1))$ and \[ (K_{n-k_1-k_2}, \overline\pi_{k_2}( {\widetilde P_2}^T\overline\pi_{k_1}(P_1^T L P_1) {\widetilde P_2}), \overline\pi_{k_2} ({\widetilde P_2}^T\overline\pi_{k_1}(P_1^T c P_1) {\widetilde P_2})) \] are mostly the same. Note that instead of applying a congruence transformation by ${\widetilde P_2}$ to \linebreak $(K_{n-k_1}, \overline\pi_{k_1}(P_1^T L P_1), \overline\pi_{k_1}(P_1^T c P_1))$, we can apply a congruence transformation by \[ P_2 = \left(\begin{array}{cc} I_{k_1} & 0 \\ 0 & \widetilde P_2 \end{array}\right) \] to the original problem $(K_n, P_1^T L P_1, P_1^T c P_1)$, i.e., we consider \[ \left(K_n, P_2^T P_1^T L P_1 P_2, P_2^T P_1^T c P_1 P_2
\right). \] Then the subproblem defined by the $(n-k_1)\times(n-k_1)$ lower right block matrix is precisely \[ (K_{n-k_1}, \widetilde{P}_2^T\overline\pi_{k_1}(P_1^T L P_1) \widetilde{P}_2, \widetilde{P}_2^T\overline\pi_{k_1}(P_1^T c P_1) \widetilde{P}_2), \] and we may pick $A_2 \in P_2^TP_1^T L P_1P_2$ such that \[ \overline\pi_{k_1+k_2}(A_2) =\left(\begin{array}{cc} \widehat A_2 & 0 \\ 0 & 0 \end{array}\right). \] Note that $A_2$ has the following shape: \[ A_2 = \left(\begin{array}{ccc} * & * & * \\ * & \widehat A_2 & 0 \\ * & 0 & 0 \end{array}\right). \]
Generalizing the process outlined above, we obtain the following procedure, which we call ``forward procedure''. The set of matrices $\{A_1, \ldots, A_m\}$ obtained in this way will be called a \emph{hyper feasible partition}. After each application of Theorem \ref{theo_decomp}, the size of the matrices is reduced at least by one. This means that after at most $n$ iterations, a subproblem with no nonzero hyper feasible directions is found. At this point, no further directions can be added and we will say that the partition is \emph{maximal}.
We note that the problem of checking whether an SDFP $(K_{\tilde n}, \widetilde L, \tilde c)$ has a nonzero hyper-feasible direction lies in $\NP\cap \coNP$, in the real computation model. In fact, by Gordan's Theorem, $(K_{\tilde n}, \widetilde L, \tilde c)$ does not have a nonzero hyper-feasible direction if and only if $(K_{\tilde n}, {\widetilde L}^{\perp}, 0)$ is strongly feasible.
\noindent {\bf [Procedure FP]}
{\bf Input:} $(K_n,L,c)$
{\bf Output:} a non-singular matrix $P$, a sequence $k_1, \ldots , k_m$ and a maximal hyper feasible partition $\{A_1, \ldots , A_m\}$ contained in $P^TLP$. The $A_i$ are such that $A_1 = \left(\begin{smallmatrix}\widehat {A}_1 & 0 \\ 0 & 0 \end{smallmatrix} \right)$, $A_2 = \left(\begin{smallmatrix} * & * & * \\ * & \widehat {A}_2 & 0 \\ * & 0 & 0 \end{smallmatrix} \right)$, $A_3 = \left(\begin{smallmatrix} * & * & * & *\\ * & * & * & * \\ * & * & \widehat {A}_3 & 0 \\ * & * & 0 & 0 \end{smallmatrix} \right)$ and so forth, where $\widehat{A}_i$ is positive definite and lies in $K_{k_i}$, for every $i$. \begin{enumerate} \item Set $i := 1$, $\widetilde{L} := L$, $\widetilde{c} := c$, $K := K_n$, $P := I_n$. \item Find (i) $\widetilde{A}_i \in \widetilde L \cap K$, ${\rm tr}(\widetilde{A}_i)=1$ or (ii) $\widetilde B \in \widetilde L^\perp \cap {\rm int} K$, ${\rm tr}(\widetilde{B})=1$. (Exactly one of (i) and (ii) is solvable.) If (ii) is solvable, then stop. (No nonzero hyper-feasible direction exists.) \item Compute a non-singular $\widetilde{P}$ such that \[ \widetilde{P}^T \widetilde{A}_i \widetilde{P} = \left(\begin{array}{cc} \widehat A _i& 0 \\ 0 & 0 \end{array}\right) \] where $\widehat A _i$ is a positive definite matrix. Let $k_i := \text{rank}(\widetilde{A}_i)$. \item Compute $M = \begin{pmatrix} I_{k_1 + \ldots + k_{i-1}} & 0 \\ 0 & \widetilde{P} \end{pmatrix}\;$ and set $P^T := M^TP^T$. (If $i = 1$, take $M = \widetilde{P}$.) \item Let $A_i$ be any matrix in $P^TL P $ such that $\overline \pi_{k_1 + \ldots + k_{i-1}}(A_i) = \widetilde{P}^T\widetilde{A}_i\widetilde{P}$.
For each $1 \leq j < i$ exchange $A_{j}$ for $M^TA_{j}M$. \item Set $\widetilde L:= \overline\pi_{k_{i}}(\widetilde{P}^T\widetilde L \widetilde{P}), \widetilde c:= \overline\pi_{k_{i}}(\widetilde{P}^T\widetilde c \widetilde{P})$, $K := \overline\pi_{k_{i}}(K_n)$, $i := i+1$ and return to Step 2. (This step is just to pick the lower-right block after the congruence transformation.) \end{enumerate}
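Step 3 of \textbf{FP} can be realized with a plain eigendecomposition. The following numpy sketch (our own illustration; the function name and tolerance are not part of the procedure) computes a non-singular $\widetilde P$ with $\widetilde P^T\widetilde A_i\widetilde P = \left(\begin{smallmatrix}\widehat A_i & 0\\ 0 & 0\end{smallmatrix}\right)$, where $\widehat A_i$ is positive definite:

```python
import numpy as np

def congruence_to_block_form(A, tol=1e-10):
    """Given a symmetric PSD matrix A, return a non-singular P_tilde with
    P_tilde.T @ A @ P_tilde = diag(A_hat, 0), A_hat positive definite,
    together with k = rank(A)."""
    w, V = np.linalg.eigh(A)          # eigenvalues in ascending order
    order = np.argsort(-w)            # put positive eigenvalues first
    P_tilde = V[:, order]             # orthogonal, hence non-singular
    k = int(np.sum(w > tol))
    return P_tilde, k

# example: a rank-2 PSD matrix in S_4
B = np.array([[1., 2., 0., 1.], [0., 1., 1., 0.]])
A = B.T @ B
P_tilde, k = congruence_to_block_form(A)
D = P_tilde.T @ A @ P_tilde
# upper-left k x k block is positive definite, the rest vanishes
assert k == 2
assert np.all(np.linalg.eigvalsh(D[:k, :k]) > 1e-8)
assert np.allclose(D[k:, :], 0) and np.allclose(D[:, k:], 0)
```

Since the eigenvector matrix is orthogonal, this choice of $\widetilde P$ is automatically non-singular, and $\widehat A_i$ comes out diagonal.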
\begin{proposition}\label{prop_last_problem} Suppose that $(K_n,L,c)$ is such that there is a nonzero element in $K_n \cap L$. Applying \textbf{FP} to $(K_n,L,c)$ we have that: \begin{enumerate} \item $(K_n, L, c)$ is strongly feasible if and only if $\overline{\pi}_{k_1 + \ldots + k_m}(K_n,P^TLP,P^TcP)$ is. \item $(K_n, L, c)$ is strongly infeasible if and only if $\overline{\pi}_{k_1 + \ldots + k_m}(K_n,P^TLP,P^TcP)$ is. \item $(K_n, L, c)$ is in weak status if and only if $\overline{\pi}_{k_1 + \ldots + k_m}(K_n,P^TLP,P^TcP)$ is weakly feasible. \end{enumerate} \end{proposition} \begin{proof} If $m = 0$, then the proposition follows because $\overline{\pi}_{0}$ is equal to the identity map. In the case $m = 1$, the result follows from Theorem \ref{theo_decomp}.
Note that at the $i$-th iteration, if a direction $\widetilde A_i$ is found then, after applying the congruence transformation $\widetilde{P}$, the problem $\overline\pi_{k_{i}}(K,\widetilde{P}^T\widetilde L \widetilde{P},\widetilde{P}^T\widetilde c \widetilde{P})$ preserves the feasibility properties in the sense of Theorem \ref{theo_decomp}, and it is a SDFP over $\mathbb{S}_{n -k_1 - \ldots - k_i}$. Also, due to the way $M$ is selected, the equation $\overline\pi_{k_{i}}(K,\widetilde{P}^T\widetilde L \widetilde{P},\widetilde{P}^T\widetilde c \widetilde{P}) = \overline\pi_{k_1 + \ldots + k_{i}}(K_n,P^TLP,P^TcP)$ holds after Step 4 and before $\widetilde{L}$ and $K$ are updated. This justifies items $1$ and $2$.
Consider the case where $(K_n,L,c)$ is in weak status. When $(K,\widetilde L , \widetilde c)$ is weakly infeasible we can always find a new direction $\widetilde A_i$, and the size of the problem decreases by a positive amount, so $(K,\widetilde L , \widetilde c)$ cannot remain weakly infeasible through all iterations. The only other possibility is weak feasibility, which justifies item $3$. \end{proof}
The matrices $A_1, \ldots, A_m $ obtained through \textbf{FP} have the shape \[ \left(\begin{array}{cccc}\widehat A_1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \end{array}\right),\ \ \ \left(\begin{array}{cccc}* & * & * & *\\ * &\widehat A_2 & 0 & 0 \\ * & 0 & 0 & 0 \\ * & 0 & 0 & 0\end{array}\right),\ \ \ \left(\begin{array}{cccc}* & * & * & * \\ * &* & * & * \\ * & * & \widehat A_3 & 0 \\ * & *& 0 & 0\end{array}\right), \ldots \] where $\widehat A_1, \widehat A_2, \widehat A_3, \ldots $ are positive definite. The matrices $A_i$ are referred to as {\it sub-hyper feasible directions}, since the $\widehat A_i$ are hyper feasible directions. The problem $\overline{\pi}_{k_1 + \ldots + k_m}(K_n,P^TLP,P^TcP)$ will be referred to as the \emph{last subproblem} of $(K_n,L,c)$.
\begin{example}\label{example_partition} Let \begin{equation}\label{equation_example_partition}
L+c = \left \{
\begin{pmatrix}
t & v & 1 & u\\
v & z+2 & v+1 & z+1 \\
1 & v+1 & u-1 & s \\
u & z+1 & s & 0
\end{pmatrix}
\mid t,u,v,s,z \in \mathbb{R} \right \} \end{equation} and let us apply \textbf{FP} to $(K_4,L,c)$. The first direction can be, for instance, $A_1 = \left(\begin{smallmatrix} 1 & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0\\
0 & 0 & 0 & 0
\end{smallmatrix}\right)$. Then $k_1 = 1$ and $\widetilde{P}$ is the identity at this step. At the next iteration, we have $K = K_3$ and $\widetilde {L} = \left \{
\left(\begin{smallmatrix}
z & v & z \\
v & u & s \\
z & s & 0
\end{smallmatrix}\right)
\mid u,s,v,z \in \mathbb{R} \right \}.$ Then, $\widetilde {A}_2$ can be taken
as $ \left(\begin{smallmatrix}
0 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 0
\end{smallmatrix}\right)$ and $k_2$ is $1$. A possible choice of $\widetilde {P}$ is
$ \left(\begin{smallmatrix}
0 & 1 & 0 \\
1 & 0 & 0 \\
0 & 0 & 1
\end{smallmatrix}\right)$. Then $P$ is $\left(\begin{smallmatrix} 1 & 0 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 1 & 0 & 0\\
0 & 0 & 0 & 1
\end{smallmatrix}\right)$ and we can take $A_2 = \left(\begin{smallmatrix} 0 & 0 & 0 & 1\\
0 & 1 & 0 & 0\\
0 & 0 & 0 & 0\\
1 & 0 & 0 & 0
\end{smallmatrix}\right)$. $\tilde{L}$ is then updated and it becomes $\left \{
\left(\begin{smallmatrix}
z & z \\
z & 0
\end{smallmatrix}\right)
\mid z \in \mathbb{R} \right \}.$ The procedure stops here, because $0$ is the only positive
semidefinite matrix in $\tilde{L}$.
Now, $\overline{\pi}_2(P^T(L+c)P)$ is $\left \{
\left(\begin{smallmatrix}
z +2 & z + 1 \\
z + 1 & 0
\end{smallmatrix}\right)
\mid z \in \mathbb{R} \right \}$, so $\overline{\pi}_2(K_4,P^TLP,P^TcP)$ is a weakly feasible
system. Therefore, by Proposition \ref{prop_last_problem}, $(K_4,L,c)$ has
weak status and is either weakly infeasible or weakly feasible. The
$0$ in the lower right corner of \eqref{equation_example_partition}
forces $u = 0$, $z = -1$ and $s = 0$, but
this assignment produces the negative diagonal entry $u-1=-1$ in position $(3,3)$. This tells
us that $(K_4,L,c)$ is infeasible, so it must be weakly infeasible. \end{example}
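The conclusion of the example can be checked numerically. The sketch below is our own illustration, not part of the paper's procedures: it first confirms that the forced assignment $u=0$, $z=-1$, $s=0$ leaves a negative diagonal entry, so no feasible point exists, and then exhibits points of $L+c$ arbitrarily close to $K_4$ by pushing along the directions found by \textbf{FP} (in the original coordinates this amounts to fixing $z=-1$, $s=0$, $v=-1$, taking $u=2$ along the second direction, and letting $t\to\infty$ along the first; these particular parameter values are our choice):

```python
import numpy as np

def M(t, u, v, s, z):
    """A point of L + c from the example, parametrized by t, u, v, s, z."""
    return np.array([[t,   v,   1.,  u],
                     [v,   z+2, v+1, z+1],
                     [1.,  v+1, u-1, s],
                     [u,   z+1, s,   0.]])

def dist_psd(X):
    """Frobenius distance from a symmetric matrix to the PSD cone."""
    return np.linalg.norm(np.clip(np.linalg.eigh(X)[0], None, 0))

# infeasibility: the forced assignment u=0, z=-1, s=0 gives the diagonal
# entry u-1 = -1, hence a negative eigenvalue for every t, v
assert np.linalg.eigvalsh(M(1., 0., 0., 0., -1.))[0] <= -0.9

# weak infeasibility: the distance to K_4 tends to 0 as t grows
d = [dist_psd(M(t, 2., -1., 0., -1.)) for t in (1e2, 1e4, 1e6)]
assert d[0] > d[1] > d[2] and d[2] < 1e-3
```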
\begin{corollary} \label{Certificate_Weak} The matrices $A_1, \ldots , A_m, P, \widetilde B$ as described in \textbf{FP} together with a
finite weak feasibility certificate for $\overline{\pi}_{k_1 + \ldots + k_m}(K_n,P^TLP,P^TcP)$ form a finite certificate that $(K_n,L,c)$ is in weak status. If no such certificate exists, then either
item $1$ or item $3$ of Proposition \ref{prop_certificate} holds. This shows that deciding whether a SDFP is in weak status is in $\NP \cap \coNP$.
\end{corollary} \begin{proof} Follows directly from Proposition \ref{prop_last_problem}. \end{proof}
\subsection{Maximum number of directions required to approach the positive semidefinite cone} According to Proposition \ref{prop_non_empty}, there is always a nonzero element in $K_n\cap L$ when $(K_n,L,c)$ is weakly infeasible. A natural question is therefore whether, given a weakly infeasible $(K_n,L,c)$, it is always possible to select a point $x \in L+c$ and a nonzero direction $d \in K_n\cap L$ such that $\lim _{t \to +\infty } \text{dist}(x +td, K_n) = 0$. We call weakly infeasible problems having this property \emph{directionally weakly infeasible} (DWI). The simplest instance of a DWI problem is \[ \max \; 0\ \text{s.t.} \; \begin{pmatrix}
t & 1 \\
1 & 0
\end{pmatrix} \in K_2,\ t\in \mathbb{R}. \] Unfortunately, not all weakly infeasible problems are DWI, as shown in the following instance.
\begin{example}[A weakly infeasible problem that is not directionally weakly infeasible]\label{example_not_dwi} Let $(K_3,L,c)$ be such that $L+c = \left \{
\left(\begin{smallmatrix}
t & 1 & s \\
1 & s & 1 \\
s & 1 & 0
\end{smallmatrix}\right) \mid t,s \in \mathbb{R}\right\} $ and let $A_1 = \left(\begin{smallmatrix}
1 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0
\end{smallmatrix}\right)$.
Applying Theorem \ref{theo_decomp} twice, we see that the problem is in weak status. Looking at its $2\times 2$ lower right block, we see that this problem is infeasible and hence weakly infeasible. But this problem is not DWI. If $(K_{3},L,c)$ were DWI, we would have $\lim _{t \to +\infty}\mathrm{dist}(tA_1 + c', K_3) = 0$ for some $c' \in L+c$. To show this does not hold, fix $s$. Regardless of the value of $t\geq 0$, the minimum eigenvalue of the matrix is uniformly negative, since the $2\times 2$ lower right block is strongly infeasible. \end{example}
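The failure of the DWI property can also be seen numerically. In the sketch below (our own check, with $s=5$ an arbitrary fixed value), eigenvalue interlacing bounds the distance from $tA_1+c'$ to $K_3$ below by the modulus of the negative eigenvalue of the fixed lower right block, uniformly in $t$:

```python
import numpy as np

def dist_psd(X):
    """Frobenius distance from a symmetric matrix to the PSD cone."""
    return np.linalg.norm(np.clip(np.linalg.eigh(X)[0], None, 0))

def M(t, s):
    """A point of L + c from the example; t A_1 only moves the (1,1) entry."""
    return np.array([[t, 1., s], [1., s, 1.], [s, 1., 0.]])

s = 5.0                                      # fixed once and for all
lower_block = np.array([[s, 1.], [1., 0.]])  # strongly infeasible 2x2 block
gap = -np.linalg.eigvalsh(lower_block)[0]    # positive, independent of t
assert gap > 0.1
for t in (1e2, 1e4, 1e6):
    # by Cauchy interlacing, lambda_min(M) <= lambda_min(lower block) < 0
    assert dist_psd(M(t, s)) >= gap - 1e-8
```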
Thus, a weakly infeasible problem is not DWI in general. If we take $s$ sufficiently large in the example, then the minimum eigenvalue of the lower right ${2\times 2}$ block gets very close to zero, but this makes the $(1,3)$ and $(3,1)$ entries large. However, we can then take $t$ much larger than $s$, so that the minimum eigenvalue of the submatrix $\left(\begin{smallmatrix}
t & s \\
s & 0
\end{smallmatrix}\right) $ is also close to zero. Intuitively, this neutralizes the effect of the big off-diagonal entries, and we obtain points in $L+c$ arbitrarily close to $K_3$ by taking $s$ large and $t$ much larger than $s$.
Generalizing this intuition, we show in the following that $n-1$ directions are enough to approach the positive semidefinite cone. First we discuss how the hyper feasible partition $\{A_1,\ldots , A_m \}$ of \textbf{FP} relates to the concept of the tangent cone. We recall that for $x \in K_n$ the cone of feasible directions is the set $\mathrm{dir}\,(x,K_n) = \{d \in \mathbb{S}_n \mid \exists t > 0 \text{ s.t.\ } x +td \in K_n \}$. The tangent cone at $x$ is then the closure of $\mathrm{dir}\, (x,K_n)$ and is denoted by $\mathrm{tanCone}\, (x,K_n)$. It can be shown that if $d \in \mathrm{tanCone}\, (x,K_n)$ then $\lim _{t\to +\infty} \text{dist}(tx +d,K_n) = 0$.
We remark that if $x = \left(\begin{smallmatrix} D & 0 \\ 0 & 0 \end{smallmatrix}\right)$, where $D$ is a positive definite $k \times k$ matrix, then $\mathrm{tanCone}\, (x,K_n)$ consists of all symmetric matrices $\left(\begin{smallmatrix} * & * \\ * & E \end{smallmatrix}\right)$, where $*$ denotes arbitrary entries and $E$ is a positive semidefinite $(n-k)\times (n-k)$ matrix. See \cite{pataki_handbook} for more details.
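This description of the tangent cone, together with the fact that $d\in\mathrm{tanCone}(x,K_n)$ implies $\mathrm{dist}(tx+d,K_n)\to 0$, can be illustrated with a small numerical sketch; the matrices below are of our own choosing:

```python
import numpy as np

def dist_psd(X):
    """Frobenius distance from a symmetric matrix to the PSD cone."""
    return np.linalg.norm(np.clip(np.linalg.eigh(X)[0], None, 0))

# x = diag(D, 0) with D positive definite 2x2; d has arbitrary entries
# except that its lower-right block E is PSD, so d lies in tanCone(x, K_3)
x = np.diag([1., 1., 0.])
d = np.array([[0., 0., 3.],
              [0., 0., 2.],
              [3., 2., 1.]])   # E = [[1.]] is PSD

prev = np.inf
for t in (1e1, 1e2, 1e3):
    cur = dist_psd(t * x + d)
    assert cur <= prev         # distance decreases to 0 along t*x + d
    prev = cur
assert prev < 1e-6
```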
The output $\{A_1,\ldots , A_m \}$ of \textbf{FP} is such that $A_2 \in \mathrm{tanCone}\,(A_1,K_{n} )$. This is clear from the shape of $A_1$ and $A_2$, and from a simple argument using the Schur Complement. Now, $A_3$ is such that $\overline{\pi} _{k_1+k_2}(A_3)$ is positive semidefinite. We have $A_2 = \left(\begin{smallmatrix} * & * & * & *\\ * & \widehat {A}_2 & 0 & 0 \\ * & 0 & 0 & 0 \\ * & 0 & 0 & 0 \end{smallmatrix} \right)$ and $A_3 = \left(\begin{smallmatrix} * & * & * & *\\ * & * & * & * \\ * & * & \widehat {A}_3 & 0 \\ * & * & 0 & 0 \end{smallmatrix} \right)$. Then $ \left(\begin{smallmatrix} * & * & * \\ * & \widehat {A}_3 & 0 \\ * & 0 & 0 \end{smallmatrix} \right) \in \mathrm{tanCone}\, \left( \left(\begin{smallmatrix} \widehat {A}_2 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{smallmatrix} \right), K_{n-k_1} \right)$, i.e., $\overline{\pi} _{k_1}(A_3) \in \mathrm{tanCone}\,(\overline{\pi} _{k_{1}}(A_{2}),K_{n-k_1} )$. Denote $k_1 + \ldots + k_i$ by $N_i$ and set $N_0 = 0$. Then, for $i > 2$, we have: \begin{align*} \overline{\pi} _{N_{i-2}}(A_i) \in \mathrm{tanCone}\,(\overline{\pi} _{N_{i-2}}(A_{i-1}),K_{n-N_{i-2}} ). \end{align*}
Moreover, if the last subproblem $\overline{\pi}_{N_m}(K_n,L,c)$ has a feasible solution, we can pick some $c' \in L+c$ such that $\overline{\pi}_{N_m}(c')$ is positive semidefinite. Then $\overline{\pi}_{N_{m-1}}(c') \in \mathrm{tanCone}\,(\overline{\pi} _{N_{m-1}}(A_{m}),K_{n-N_{m-1}} )$. Given $\epsilon > 0$, by picking $\alpha _m > 0$ sufficiently large we have $\text{dist} (\overline{\pi}_{N_{m-1}}(c'+\alpha _m A_m ), K_{n-N_{m-1}}) < \epsilon $. Now, $\overline{\pi}_{N_{m-2}}(c'+\alpha _m A_m )$ does not necessarily lie in the tangent cone of $\overline{\pi}_{N_{m-2}}(A_{m-1})$ at $K_{n-N_{m-2}}$, but it is still possible to pick $\alpha _{m-1} > 0$ such that \begin{equation*} \text{dist} (\overline{\pi}_{N_{m-2}}(c'+\alpha _m A_m +\alpha _{m-1} A_{m-1} ), K_{n-N_{m-2}}) < 2\epsilon. \end{equation*} In order to show this, let $h \in K_{n-N_{m-1}}$ be such that \begin{equation*} \norm{\overline{\pi}_{N_{m-1}}(c'+\alpha _m A_m) -h} = \text{dist} (\overline{\pi}_{N_{m-1}}(c'+\alpha _m A_m ), K_{n-N_{m-1}}). \end{equation*} Now, define $\widetilde h$ to be the matrix $\overline{\pi}_{N_{m-2}}(c'+\alpha _m A_m)$, except that the lower right $(n-N_{m-1})\times (n-N_{m-1})$ block is replaced by $h$. It follows readily that $\widetilde h$ lies in the tangent cone of $\overline{\pi}_{N_{m-2}}(A_{m-1})$. Then, we may pick $\alpha _{m-1} > 0$ sufficiently large such that $\text{dist} (\alpha _{m-1}\overline{\pi}_{N_{m-2}}(A_{m-1}) + \widetilde h, K_{n-N_{m-2}}) < \epsilon $. Let $y_1 = \overline{\pi}_{N_{m-2}}(c'+\alpha _m A_m)$, $y_2 = \overline{\pi}_{N_{m-2}}(\alpha _{m-1} A_{m-1})$. We then have the following implications: \begin{align*} \text{dist} (y_1 + y_2, K_{n-N_{m-2}}) & \leq \norm{y_1 - \widetilde h} + \text{dist} (y_2 +\widetilde h, K_{n-N_{m-2}}) \\ & \leq \norm{\overline{\pi}_{N_{m-1}}(c'+\alpha _m A_m) - h} + \epsilon \leq 2\epsilon. \end{align*}
If we continue in this way, it becomes clear that $\alpha _1, \ldots , \alpha _m$ can be selected such that $\text{dist} (c'+\alpha _m A_m +\alpha _{m-1} A_{m-1} + \ldots + \alpha _1 A_1, K_{n}) < m\epsilon $. This shows how the directions $\{A_1,\ldots , A_m \}$ can be used to construct points that are arbitrarily close to $K_n$, when the last subproblem is feasible. This leads to the next theorem.
\begin{theorem}\label{theo_weakly_infeasible_subspace} If $(K_{n},L,c)$ is weakly infeasible then there exists an affine space of dimension at most $n-1$ such that $L' + c' \subseteq L+c$ and $(K_{n},L',c')$ is weakly infeasible. \end{theorem} \begin{proof} The construction above shows that if $L'$ is the space spanned by $\{A_1, \ldots , A_m \}$ and $c'$ is taken as above, then $(K_n,L',c')$ is weakly infeasible. As $(K_n,L,c)$ is weakly infeasible, we have $m > 0$. We also have $k_1 + \ldots + k_m \leq n$, which implies $m \leq n$. Notice that $\overline{\pi} _{n}(K_n,P^TLP,P^TcP)$ is strongly feasible, because it is equal to the system $(\{0\}, \{0\},0)$. Therefore $k_1 + \ldots + k_m < n$, which forces $m < n$. \end{proof}
\section{Backward Procedure}\label{sec:backward_procedure} In this section, we discuss a ``backward procedure'' for distinguishing the four feasibility statuses. The main difficulty arises when the problem is in weak status. In that case, due to Proposition \ref{prop_last_problem}, the last subproblem is weakly feasible. This offers the opportunity to shrink both the last subproblem and the whole problem, as discussed in our next theorem.
\begin{theorem}\label{theo_backwards} Let $(K_n,L,c)$ be a given SDFP, satisfying the following assumptions: \begin{enumerate}
\item for some $k > 0$, $\overline \pi _{ k}(K_n,L,c)$ is weakly feasible.
\item for some $l$ such that $0 \leq l < n-k$, the face $F = \left\{ \left(\begin{smallmatrix} A & 0 \\0 & 0 \end{smallmatrix}\right) \mid A \in K_{l} \right\}$ contains the feasible region of $\overline \pi _{k}(K_n,L,c)$. \end{enumerate} Schematically, writing the blocks with sizes $k$, $l$ and $n-k-l$, every feasible point satisfies \begin{equation*}
\left(\begin{array}{c|c} * & * \\\hline * & \overline \pi_k(L+c) \end{array} \right)
=
\left(\begin{array}{c|cc} * & * & * \\\hline * & A & 0 \\ * & 0 & 0 \end{array} \right), \qquad A \in K_l.
\end{equation*}
Furthermore, let $E$ be the set of $(k+l)\times(k+l)$ upper left principal submatrices in $\mathbb{S}_n$, i.e., $E = \left\{ \left(\begin{smallmatrix} B & 0 \\0 & 0 \end{smallmatrix}\right) \mid B \in \mathbb{S}_{k+l} \right\}$, and define $\widetilde L$ and $\widetilde c$ as the vector subspace and the vector such that $E\cap (L+c)$ is equal to $\widetilde L + \widetilde c$, i.e., \begin{equation} E\cap (L + c) = \widetilde{L} + \widetilde{c} =
\left(\begin{array}{cc|c}
\multicolumn{2}{c|}{\pi_{k+l}(\widetilde{L} + \widetilde{c})} & 0 \\
 & & 0 \\\hline
0 & 0 & 0
\end{array} \right),
\label{aaa} \end{equation} where the column blocks have sizes $k$, $l$ and $n-k-l$. ($\widetilde L+\widetilde c$ is the affine subspace of $L + c$ on which all entries outside the upper left $(k+l)\times(k+l)$ block vanish.)
Then, the following holds: if $E\cap(L+c)$ is empty, then $(K_n,L,c)$ is infeasible. Otherwise, $(K_n,L,c)$ is feasible if and only if the SDFP subproblem $\pi_{k+l}(K_n,\widetilde{L},\widetilde{c})$ is feasible.
\end{theorem} \begin{proof} Due to Assumption $1$, $\overline \pi _{k}(K_n,L,c)$ is weakly feasible, so if $x \in L+c$ and $\overline \pi _{k}(x) $ is positive semidefinite then $x$ has the format $ \left(\begin{smallmatrix} * & * \\ * & C \end{smallmatrix}\right) $, where $C$ is an $(n-k)\times (n-k)$ positive semidefinite matrix and $*$ denotes arbitrary entries. Now, due to Assumption $2$, $C$ itself has the format $\left(\begin{smallmatrix} A & 0 \\0 & 0 \end{smallmatrix}\right)$, where $A$ is an $l\times l$ positive semidefinite matrix. So, in fact, $x$ has the format $ \left(\begin{smallmatrix} * & * & * \\ * & A & 0 \\ * & 0 & 0 \end{smallmatrix}\right) $.
In order for $x$ itself to be positive semidefinite, the entries in the upper right and lower left blocks must be $0$. In other words, $x$ must have the format $ \left(\begin{smallmatrix} * & * & 0 \\ * & A & 0 \\ 0 & 0 & 0 \end{smallmatrix}\right) $, where $ \left(\begin{smallmatrix} * & * \\ * & A \end{smallmatrix}\right) $ is a $(k+l)\times (k+l)$ positive semidefinite matrix. Therefore, if $E \cap (L+c) = \emptyset$ there is no way $(K_n,L,c)$ could be feasible.
If $E \cap (L+c)$ is not empty, since $\pi _{k+l}(x) = \left(\begin{smallmatrix} * & * \\ * & A \end{smallmatrix}\right) $, it is clear that the feasibility of $(K_n,L,c)$ is equivalent to the feasibility of $\pi _{k+l}(K_n,\widetilde L,\widetilde c)$. \end{proof}
We remark that whenever $(K_n,L,c)$ satisfies Assumption $1$, it is possible to apply a congruence transformation to $(K_n,L,c)$ in order to meet Assumption $2$. Using Theorem 11.3 of \cite{rockafellar}, the weak feasibility of $\overline \pi _{k}(K_n,L,c)$ implies the existence of $w \neq 0$ such that $\inProd{w}{ \overline \pi _{k}(l+c)} \leq \inProd{w}{x}$, for every $l \in L$ and $x \in K_{n-k}$. The only way this inequality can hold is if $w \in K_{n-k}\cap \overline \pi _{k}(L)^\perp $ and $\inProd{w}{ \overline \pi _{k}(c)}\leq 0 $. As $\overline \pi _{k}(K_n,L,c)$ is not strongly infeasible, we have $\inProd{w}{ \overline \pi _{k}(c)}= 0 $. Changing $L+c$ and $w$ by a congruence transformation if necessary, we may assume that $w = \left(\begin{smallmatrix} 0 & 0 \\ 0 & \widetilde w\end{smallmatrix} \right) $, where $ \widetilde w$ is an $(n-k-l)\times (n-k-l)$ positive definite matrix and $l < n-k$. Then, it is clear that Assumption $2$ holds for the transformed problem.
The search for a $w$ as above is essentially one step of a facial reduction algorithm \cite{borwein_facial_1981,pataki_strong_2013,article_waki_muramatsu}. Each iteration of a facial reduction algorithm aims to find a proper face of $K_n$ that still contains the feasible region. Usually, however, the search is done on the whole problem. Let $w' = \left(\begin{smallmatrix} 0 & 0 \\ 0 & w \end{smallmatrix} \right) $; then $w' \in K_{n}\cap L^\perp \cap \{c \}^\perp$, which means that $w'$ can be used to perform a step of facial reduction on the whole problem. In particular, if $x$ is a feasible point, since $\inProd{x}{w'} = 0$, it must be true that $x = \left(\begin{smallmatrix} D & 0 \\ 0 & 0 \end{smallmatrix} \right)$, where $D$ is a positive semidefinite $(k+l)\times (k+l)$ matrix. The idea is that the knowledge of a weakly feasible subproblem makes it possible to confine the search to a smaller subproblem and still find a smaller face of $K_n$ that contains the feasible region of the original problem.
If we apply Theorems \ref{theo_decomp} and \ref{theo_backwards} repeatedly, we obtain a facial reduction-like procedure which is able to determine the feasibility status of a given $(K_n,L,c)$ as shown below.
\noindent {\bf [Procedure BP]} \begin{enumerate}[Step 1.] \item Apply \textbf{FP} to $(K_n,L,c)$. If the last subproblem $\overline {\pi} _{k_1 + \ldots + k_m}(K_n,P^TLP,P^TcP)$ is strongly infeasible, then $(K_n,L,c)$ is also strongly infeasible. If $\overline {\pi} _{k_1 + \ldots + k_m}(K_n,P^TLP,P^TcP)$ is strongly feasible, then $(K_n,L,c)$ is also strongly feasible. In both cases we stop the procedure. Otherwise set $i = 0,F_0 = K_n, L_0 = L, c_0 = c$.
\item If we reach this step, $\overline {\pi} _{k_1 + \ldots + k_m}(F_i,P^TL_iP,P^Tc_iP)$, is \emph{weakly feasible}, i.e., $(F_i,L_i,c_i)$ is in weak status. Applying a congruence transformation to $(F_i,P^TL_iP,P^Tc_iP)$, if necessary, both assumptions of Theorem \ref{theo_backwards} can be met. Let $K_{k+l} $, $\widetilde L$, $\widetilde c$ and $E$ be as in Theorem \ref{theo_backwards}. If $E \cap P^T(L_i+c_i)P$ is empty, we stop and declare $(K_n,L,c)$ to be weakly infeasible. Otherwise, we obtain $\widetilde L+ \widetilde c$ such that $E \cap P^T(L_i+c_i)P = \widetilde L+\widetilde c$ and a projection $\pi _{k+l}$.
\item Apply \textbf{FP} to $\pi _{k+l}(F_i,\widetilde L,\widetilde c)$ and obtain a new projection $\overline {\pi} _{k_1 + \ldots + k_m}$. If $\overline {\pi} _{k_1 + \ldots + k_m}(K_{k+l},\pi _{k+l}(\widetilde L),\pi _{k+l}(\widetilde c))$ is strongly feasible, then $(K_n,L,c)$ is weakly feasible. If $\overline {\pi} _{k_1 + \ldots + k_m}(K_{k+l},\pi _{k+l}(\widetilde L),\pi _{k+l}(\widetilde c))$ is strongly infeasible, then $(K_n,L,c)$ is weakly infeasible. In both cases, we end the procedure. Otherwise, set $F_{i+1} := K_{k+l}$, $L_{i+1} := \pi _{k+l}(\widetilde L)$, $c_{i+1} := \pi _{k+l}(\widetilde c)$, $i := i+1$ and return to Step $2.$ \end{enumerate}
\noindent{\bf Remark:} The procedure terminates in at most $n$ iterations, because the size of the problem is reduced by at least one at each iteration.
\begin{example}\label{example_bp} Let $L$ and $c$ be as in Example \ref{example_partition} and let us apply \textbf{BP} to $(K_4,L,c)$. At Step 1, we apply \textbf{FP} and obtain $\overline{\pi}_2(P^T(L+c)P) = \left \{
\left(\begin{smallmatrix}
z +2 & z + 1 \\
z + 1 & 0
\end{smallmatrix}\right)
\mid z \in \mathbb{R} \right \}$. Since $\overline{\pi}_2(K_4,P^TLP,P^TcP)$ is
weakly feasible, we move on to Step 2. The feasible region
of $\overline{\pi}_2(K_4,P^TLP,P^TcP)$ consists of a single matrix, which
is $\left(\begin{smallmatrix}
1 & 0 \\
0 & 0
\end{smallmatrix}\right)$. We are under the conditions of Theorem
$\ref{theo_backwards}$ and
\begin{equation*}
E \cap P^T(L+c)P = \left \{ \begin{pmatrix} t & 1 & v & 0 \\ 1 & -1 & v+1 & 0 \\ v & v+1 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \mid t,v \in \mathbb{R} \right \}.
\end{equation*} Then $\pi _3 (\widetilde{L} + \widetilde{c}) = \left \{ \left(\begin{smallmatrix} t & 1 & v \\ 1 & -1 & v+1 \\ v & v+1 & 1 \end{smallmatrix}\right) \mid t,v \in \mathbb{R} \right \}$. Applying \textbf{FP} to $ (K_3,\pi _3(\widetilde{L}),\pi _3(\widetilde{c}))$ we obtain as output $P = I_3 $, $m = 1$, $k_1 = 1$ and $A_1 = \left(\begin{smallmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{smallmatrix}\right)$. Then $\overline {\pi} _1(\pi _3 (\widetilde{L} + \widetilde{c}) ) = \left \{ \left(\begin{smallmatrix} -1 & v+1 \\ v+1 & 1 \end{smallmatrix}\right) \mid v \in \mathbb{R} \right \}$. The $-1$ in the upper left entry ensures that the system $(K_2,\overline {\pi} _1(\pi _3 (\widetilde{L})),\overline {\pi} _1(\pi _3 (\widetilde{c}))) $ is strongly infeasible, which shows that $(K_4,L,c)$ itself is weakly infeasible.
\end{example}
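The strong infeasibility claimed at the end of the example admits a quick numerical sanity check (our own, not part of \textbf{BP}): every matrix in the final pencil has $(1,1)$ entry $-1$, so its smallest eigenvalue is at most $-1$ and the distance to $K_2$ is bounded below by $1$, uniformly in $v$:

```python
import numpy as np

def dist_psd(X):
    """Frobenius distance from a symmetric matrix to the PSD cone."""
    return np.linalg.norm(np.clip(np.linalg.eigh(X)[0], None, 0))

# the last subproblem of the example: a 2x2 pencil in the variable v
vals = [dist_psd(np.array([[-1., v + 1.], [v + 1., 1.]]))
        for v in np.linspace(-10., 10., 41)]

# lambda_min <= smallest diagonal entry = -1, so dist >= 1 for every v
assert min(vals) >= 1.0 - 1e-9
```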
\subsection{Complexity aspects of \textbf{BP} } Let us briefly discuss certificates and complexity issues regarding \textbf{BP}. Both \textbf{BP} and \textbf{FP} can be thought of as procedures that invoke several oracles. For instance, we can consider that a nonzero hyper feasible direction, as required in \textbf{FP}, is obtained by querying an oracle. According to the recipe explained in the second paragraph of Section 2.1, we can show that the problem of deciding the feasibility status of $(K_n, L, c)$ has a finite certificate and that \textbf{BP} acts as a verifier procedure. All we have to do is argue that all the computations required by \textbf{BP} can be checked in polynomial time.
First note that all computations done by \textbf{FP} can be checked either by the certificates discussed in Proposition \ref{prop_certificate} or by Gordan's Theorem. The same is true for Steps 1 and 3 of \textbf{BP}. The only part of \textbf{BP} that needs further analysis is when Theorem \ref{theo_backwards} is invoked at Step 2, where we need to check that assumption 2 of Theorem \ref{theo_backwards} holds. However, we can use as certificate a nonzero element $w$ satisfying $w \in F_i \cap (P^TL_iP)^\perp \cap \{P^Tc_iP \}^\perp$, as in the discussion that follows the proof of Theorem \ref{theo_backwards} and, if necessary, a non-singular matrix which puts the problem in the correct shape.
This provides an alternative proof of the fact that, for each feasibility status, the problem of deciding whether $(K_n, L, c)$ has that status lies in $\NP\cap \coNP$ in the BSS model of real computation.
\section{Conclusion}\label{sec:conc} In this article we presented an analysis of weakly infeasible problems via two procedures: \textbf{FP} and \textbf{BP}. The procedure \textbf{FP} produces as output a finite set of directions; for weakly infeasible problems, these directions can be used to construct points of $L+c$ arbitrarily close to $K_n$. The procedure \textbf{BP} uses \textbf{FP} and is able to distinguish between the four feasibility statuses. The computations involved in both procedures might be hard, but they are verifiable in polynomial time in the BSS model. Extension of our analysis to blockwise SDPs and to other classes of conic linear programs is an interesting topic for future research.
\section*{Acknowledgments.} M. Muramatsu and T. Tsuchiya are supported in part by Grant-in-Aid for Scientific Research (B)24310112 and (C)2630025. M. Muramatsu is also supported in part by Grant-in-Aid for Scientific Research (B)26280005.
\end{document} |
\begin{document}
\begin{abstract} We make rigorous an old idea of using mean curvature flow to prove a theorem of Richard Hamilton on the compactness of proper hypersurfaces with pinched, bounded curvature. \end{abstract}
\maketitle
A famous theorem of Myers \cite{Myers} implies that a complete Riemannian manifold with uniformly positive Ricci curvature is necessarily compact. By the Gauss equation, this implies that a properly immersed hypersurface of Euclidean space $\ensuremath{\mathbb{R}}^{n+1}$, $n\ge 2$, with uniformly positive second fundamental form is compact.
Hamilton obtained a scale invariant version of this result \cite{HamiltonPinched}: a smooth, proper, \emph{locally} uniformly convex hypersurface $M^n\to \ensuremath{\mathbb{R}}^{n+1}$, $n\ge 2$, which is \emph{pointwise pinched}, i.e. \[
\kappa_1\ge \alpha\kappa_n\;\;\text{for some}\;\;\alpha>0, \] where $\kappa_1\le\dots\le\kappa_n$ denote the principal curvatures (eigenvalues of the second fundamental form $A$), is necessarily compact. Hamilton's argument exploits the fact that the Gauss map of such a hypersurface is quasiconformal.
Inspired by Hamilton's theorem, Ni and Wu \cite{NiWuPinched} (see also \cite{ChZh}) proved an analogous \emph{intrinsic} pinching theorem: any smooth, complete Riemannian manifold $M^n$, $n\ge 3$, with bounded, non-negative curvature operator $\Rm$ which is \emph{pointwise pinched}, i.e., \[
\rho_1\ge \alpha\rho_N\;\;\text{for some}\;\;\alpha>0, \] where $\rho_1\le \dots\le\rho_N$ are the eigenvalues of $\Rm$, is either flat or compact. This intrinsic result is proved using Ricci flow. The main tools in the argument are the improvement-of-pinching theorem of B\"ohm and Wilking \cite{BohmWilking} and a non-existence result for pinched Ricci solitons.
It has long been believed that Hamilton's theorem can be proved using extrinsic geometric flows such as the mean curvature flow. The idea, which combines many classical ideas of Hamilton, is as follows: suppose, to the contrary, that there exists a non-flat pinched proper hypersurface which is \emph{not} compact. Evolve it by curvature. If the evolving hypersurface becomes singular in finite time, then, due to the uniform pinching, we can blow up \emph{\`a la} Huisken \cite{Hu84} to obtain a shrinking sphere solution, contradicting noncompactness. Otherwise, the flow exists for all time, and we can blow up at time infinity \emph{\`a la} Hamilton \cite{HamiltonSingularities} to obtain a non-flat, pinched (translating or expanding) soliton solution. But these may be ruled out directly.
This argument is quite simple and elegant in concept but will require some deep analytic facts about mean curvature flow to make rigorous. Unfortunately, as for the result of Ni and Wu, we require the additional hypothesis of bounded curvature (Hamilton's argument requires no such hypothesis). Removing this hypothesis in the analysis of noncompact solutions to mean curvature flow is a deep issue of independent interest.\footnote{Compare the very recent work of Deruelle--Schulze--Simon \cite{DSSRiccipinched}, Lott \cite{LottRiccipinched}, and Lee--Topping \cite{LTRiccipinched,LTPIC1} on Ricci flow and Daskalopoulos--S\'aez \cite{DaskSaez} and the authors \cite{BLLnoncollapsing} on mean curvature flow.} On the other hand, we only require weak local convexity and our proof appears to be generalizable to other settings.
The key novel ingredient we will need is the following localization of Huisken's umbilic estimate \cite[5.1 Theorem]{Hu84}, which is an immediate corollary of the local pinching estimate recently proved in \cite{LocalPinching}.
\begin{comment} \begin{prop}[Local umbilic estimate]\label{prop:local pinching} If a mean curvature flow is properly defined in $B_{\lambda R}\times (0,\frac{1}{2n}R^2)\subset \ensuremath{\mathbb{R}}^{n+1}\times \ensuremath{\mathbb{R}}$ and satisfies \[ \inf_{B_{2\lambda R}\times\{0\}\cup \partial B_{2\lambda R}\times (0,\frac{1}{2n}R^2)}\frac{\kappa_1}{H}\ge \alpha>0\,, \]
then, given any $\varepsilon>0$, it satisfies \ba\label{eq:m convexity} \kappa_n-\kappa_1\le \varepsilon H+C_\varepsilon \Theta R^{-1} \ea in $B_{\frac{\lambda}{2}R}\times(0,\tfrac{1}{2n}R^2)$, where $C_\varepsilon=C(n,\alpha,\varepsilon)$ and \[ \Theta\doteqdot \sup_{B_{2\lambda R}\times\{0\}\cup B_{2\lambda R}\setminus B_{\lambda R}\times (0,\frac{1}{2n}R^2)}\tfrac{1}{n}RH \] \end{prop} \begin{prop}[Local umbilic estimate]\label{prop:local pinching} Given any $\varepsilon>0$, there exists $C_\varepsilon=C(n,\alpha,\varepsilon)$ with the following property. If a mean curvature flow is properly defined in $B_{L}\times (0,T)\subset \ensuremath{\mathbb{R}}^{n+1}\times \ensuremath{\mathbb{R}}$ and satisfies \[ \inf_{B_{2L}\times\{0\}\cup \partial B_{2L}\times (0,T)}\frac{\kappa_1}{H}\ge \alpha>0\,, \] then it satisfies \ba\label{eq:m convexity} \vert\mathring A\vert \le \varepsilon H+C_\varepsilon \Theta\;\;\text{in}\;\; B_{L/2}\times(0,T)\,, \ea
where $\mathring A$ is the trace-free part of $A$ and $\displaystyle\Theta\doteqdot \sup_{B_{2L}\times\{0\}\cup B_{2L}\setminus B_{L}\times (0,T)}\tfrac{1}{n}H$. \end{prop} \end{comment} \begin{prop}[Local umbilic estimate]\label{prop:local pinching} Every mean curvature flow in $\ensuremath{\mathbb{R}}^{n+1}$ which is properly defined and $\alpha$-pinched in $B_{2L}\times [0,T)$ satisfies \ba\label{eq:m convexity} \vert\mathring A\vert \le \varepsilon H+C_\varepsilon \Theta\;\;\text{in}\;\; B_{L/2}\times[0,T)\,, \ea
where $\mathring A$ is the trace-free part of $A$, $\Theta\doteqdot \sup_{B_{2L}\times\{0\}\cup B_{2L}\setminus B_{L}\times (0,T)}H$, and $C_\varepsilon=C(n,\alpha,\varepsilon)$. \end{prop} \begin{proof} The claim follows from the $m=0$ case of \cite[Theorem 1]{LocalPinching} since the integral hypothesis is in this case superfluous. Indeed, by the pinching hypothesis and the area formula, \[ \int_{M_t\cap B_{2L}}H^n\,d\mu\le \left(\frac{n}{\alpha}\right)^n\int_{M_t\cap B_{2L}}K\,d\mu\le \left(\frac{n}{\alpha}\right)^n\operatorname{area}(S^n) \] for each $t\in[0,T)$, where $K$ is the Gauss curvature. \end{proof}
We also require a suitable existence result for a mean curvature flow out of the boundary of a convex body. This is a straightforward consequence of the following Chou--Ecker--Huisken-type \cite{Ts85,EckerHuisken91} estimate for radial graphs (see, for example, \cite[Corollary 2.2]{LynchUniqueness} for a proof).
\begin{lem}\label{lem:global in time est} There exists $C=C(n)$ with the following property. Let $\{\partial\Omega_t\}_{t \in [0,T)}$ be a convex solution to mean curvature flow in $\ensuremath{\mathbb{R}}^{n+1}$. If $B_r(p)\subset\Omega_t$ and $\sup_{B_{2Lr}(p)}H(\cdot,0)\le\Theta r^{-1}$, where $\Theta\ge 1$ and $L>1$, then \begin{equation*} \sup_{X\in B_{Lr}(p)\cap \partial\Omega_t}\left(1-\frac{\vert X-p\vert^2}{L^2r^2}\right)H(X,t) \leq CL^3\Theta r^{-1}\,. \end{equation*} \end{lem}
\begin{prop}[Existence]\label{prop:existence} Let $\Omega\subset\ensuremath{\mathbb{R}}^{n+1}$ be a (possibly unbounded) convex body with smooth boundary $\partial\Omega$ satisfying $\sup_{\partial\Omega}\vert A\vert<\infty$. There exist $\delta>0$ and a family of (possibly unbounded) convex bodies $\{\Omega_t\}_{t\in(0,\delta]}$ which converge locally uniformly in the Hausdorff topology to $\Omega_0\doteqdot\Omega$ as $t\to 0$ and whose boundaries $\{\partial\Omega_t\}_{t\in(0,\delta]}$ are smooth, evolve by mean curvature, and satisfy $\sup_{t\in [0,\delta]}\sup_{\partial\Omega_t}\vert A\vert<\infty$.
\end{prop} \begin{proof}
We may assume that $\Omega$ contains no lines --- else it splits as a product of an affine subspace with a lower dimensional convex body which contains no lines; the solution we seek is then obtained as a product of an affine subspace with the lower dimensional solution we shall construct. In that case, $\Omega$ is either bounded or, up to a rotation, the graph of a function $u:D\to\ensuremath{\mathbb{R}}$ over some convex domain $D\subset \ensuremath{\mathbb{R}}^n\times\{0\}$ with compact sublevel sets $\{u\le h\}$. When $\Omega$ is unbounded, we consider for each height $h>0$ the bounded convex body $\Omega^h$ defined by intersecting $\Omega$ with its reflection about the plane $\{x_{n+1}=h\}$ (if $\Omega$ is bounded, just take $\Omega^h=\Omega$ in what follows). Let $B^h$ be the largest ball contained in $\Omega^h$ and denote by $r_h:S^n\to\ensuremath{\mathbb{R}}$ the radial graph height of $\partial\Omega^h$ over the centre of $B^h$. We now mollify $\Omega^h$ to obtain, for any sufficiently small $\varepsilon>0$, a smooth convex body $\Omega^{h,\varepsilon}$ with corresponding smooth radial graph function $r_{h,\varepsilon}:S^n\to\ensuremath{\mathbb{R}}$ (for example, we could mollify the radial graph functions $r_h$ using, say, the heat kernel on $S^n$).
For $\varepsilon$ sufficiently small, we can evolve $r_{h,\varepsilon}$ smoothly by radial graphical mean curvature flow using parabolic existence theory. Since, by the avoidance principle, the time of existence of the approximating solutions is bounded from below by a dimensional multiple of the square of their inradius (which is bounded uniformly from below as $\varepsilon\to 0$ and $h\to\infty$), Lemma \ref{lem:global in time est} and the higher order estimates of Ecker and Huisken \cite{EckerHuisken91} yield the claims upon taking $\varepsilon\to 0$ and then $h\to\infty$.
\end{proof}
In order to exploit Proposition \ref{prop:local pinching}, we will need two ingredients. First, we need to preserve the initial pinching condition.
\begin{prop}[Pinching is preserved]\label{prop:pinching preserved} Let $\{M_t\}_{t\in[0,\delta]}$ be a family of convex, locally uniformly convex hypersurfaces evolving by mean curvature flow with $\sup_{t\in[0,\delta]}\sup_{M_t}\vert A\vert<\infty$. If $\kappa_1\ge \alpha H$ on $M_0$, then \[
\kappa_1\ge \alpha H\;\;\text{on}\;\; M_t\;\;\text{for all}\;\;t\in[0,\delta]\,. \] \end{prop} \begin{proof} Since $M_0$ does not contain any lines, we can find $p\in \ensuremath{\mathbb{R}}^{n+1}$ and $e\in S^n$ so that $\beta\doteqdot \inf_{X\in M_0}\inner{\frac{X-p}{\vert X-p\vert}}{e}>0$. If we define \[ \psi(X)\doteqdot \inner{X-p}{e}\mathrm{e}^{(C+1)t}\,, \] where $C\doteqdot\sup_{t\in[0,\delta]}\sup_{M_t}\vert A\vert^2$, then \[ \psi\ge \beta\vert X-p\vert\;\;\text{and}\;\;(\partial_t-\Delta)\psi=(C+1)\psi\,. \]
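Here the identity $(\partial_t-\Delta)\psi=(C+1)\psi$ follows from the standard fact that the position vector satisfies $\partial_tX=\Delta X$ along mean curvature flow: for the fixed vector $e$,
\[
(\partial_t-\Delta)\psi=\inner{(\partial_t-\Delta)X}{e}\,\mathrm{e}^{(C+1)t}+(C+1)\inner{X-p}{e}\,\mathrm{e}^{(C+1)t}=(C+1)\psi\,.
\]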
Fix $\varepsilon>0$ and set $S\doteqdot A-\alpha Hg$. We claim that the tensor \[ S^{\varepsilon}\doteqdot S+\varepsilon\psi g \] remains non-negative definite on $[0,\delta]$.
Suppose that this is not the case. Then, since $S^{\varepsilon}$ is positive definite on $M_0$ and $\psi\to\infty$ as $\vert X\vert\to\infty$, there must exist $t_0\in(0,\delta]$, $X_0\in M^n_{t_0}$, and a unit vector $V_0\in T_{X_0}M^n_{t_0}$ such that $S^{\varepsilon}_{(X,t)}>0$ for each $X\in M^n_t$, $t\in [0,t_0)$, but $S^{\varepsilon}_{(X_0,t_0)}(V_0,V_0)=0$. Extend $V_0$ locally in space by solving \[ \nabla_{\gamma'} V\equiv 0 \] along radial $g_{t_0}$-geodesics $\gamma$ emanating from $X_0$ and then extend the resulting local vector field locally in the time direction by solving \[ \nabla_tV\equiv 0\,, \] where $\nabla_t$ is the time-dependent connection of Andrews and Baker \cite{AnBa10}. We find that $\nabla V(X_0,t_0) = 0$, $\nabla_t V(X_0,t_0) = 0$, and $\Delta V (X_0,t_0) = 0$.
Now set $s_{\varepsilon}(X,t)\doteqdot S^{\varepsilon}_{(X,t)}(V_{(X,t)},V_{(X,t)})$ for $(X,t)$ near $(X_0,t_0)$. We now find at $(X_0,t_0)$ that
\bann 0\geq(\partial_t-\Delta)s_{\varepsilon}={}&(\nabla_t-\Delta)S^{\varepsilon}(V,V)\\ \geq{}&\vert A\vert^2S(V,V)+\varepsilon(C+1)\psi\\ ={}&-\varepsilon \psi\vert A\vert^2+\varepsilon(C+1)\psi\\ \geq{}&\varepsilon\psi\\ >{}&0\,, \eann which is absurd. So $S^{\varepsilon}$ indeed remains positive definite on $[0,\delta]$. Now take $\varepsilon\to 0$. \end{proof}
Second, we need a bound for the curvature at infinity which is uniform in time.
\begin{prop}[Curvature bound at infinity]\label{prop:curvature bound at infinity} Let $\{\partial\Omega_t\}_{t\in[-\frac{1}{2n}R^2,0]}$ be a family of convex boundaries evolving by mean curvature. Suppose that $0 \in \Omega_0$. Given $\varepsilon>0$ and $\delta>0$, there exists $L<\infty$ such that, given any $X \in \partial\Omega_0 \setminus B_{LR}(0)$, \[H(X,0) \geq \delta R^{-1}\;\;\implies\;\;\inf_{B_{\frac{\delta}{2H(X,0)}}(X)\times(-\frac{1}{2n}\frac{\delta^2}{4H^2(X,0)},0]}\frac{\kappa_1}{H}\leq \varepsilon\,.\] \end{prop} \begin{proof} It suffices to prove the claim when $R=1$. So, defining $P_r(X,t)\doteqdot B_r(X)\times(t-\frac{1}{2n}r^2,t]$, suppose, contrary to the claim, that we can find $\varepsilon>0$, $\delta>0$, and a sequence of points $X_j\in \partial\Omega_0$ such that \[ \vert X_j\vert\underset{j\to\infty}{\to}\infty\,,\;\; H(X_j,0)\ge \delta\,,\;\;\text{and yet}\;\;\inf_{P_{\frac{\delta}{2H(X_j,0)}}(X_j,0)}\frac{\kappa_1}{H}>\varepsilon\,. \] Point selection yields a sequence of points $(Y_j,s_j)$ with the following properties: \begin{enumerate} \item \label{eq:pp1} $(Y_j,s_j) \in P_{\frac{\delta}{2H(X_j,0)}}(X_j,0)$ (and hence $\frac{\kappa_1}{H}(Y_j,s_j)>\varepsilon$). \item \label{eq:pp2} $H(Y_j,s_j) \geq H(X_j,0)$. \item \label{eq:pp3} $H \leq 2H(Y_j,s_j)$ in $P_{\frac{\delta}{4H(Y_j,s_j)}}(Y_j,s_j)$. \end{enumerate}
Indeed, if the choice $(Y_j,s_j)=(X_j,0)$ satisfies \eqref{eq:pp3}, then we take $(Y_j,s_j)=(X_j,0)$. If not, choose $(X_j^1,t_j^1) \in P_{\frac{\delta}{4H(X_j,0)}}(X_j,0)$ such that \[ H(X_j^1,t_j^1) \geq 2H(X_j,0)\,. \] So $(X_j^1,t_j^1)$ satisfies \eqref{eq:pp1} and \eqref{eq:pp2}. If $(X_j^1,t_j^1)$ also satisfies \eqref{eq:pp3}, choose $(Y_j,s_j)=(X_j^1,t_j^1)$; if not, choose $(X_j^2,t_j^2) \in P_{\frac{\delta}{4H(X_j^1,t_j^1)}}(X_j^1,t_j^1)$ such that \[ H(X_j^2,t_j^2) \geq 2H(X_j^1,t_j^1)\,. \] Since $P_{\frac{\delta}{4H(X_j^1,t_j^1)}}(X_j^1,t_j^1)\subset P_{\frac{\delta}{4H(X_j,0)}+\frac{\delta}{8H(X_j,0)}}(X_j,0)$, this point satisfies properties \eqref{eq:pp1} and \eqref{eq:pp2}. Since $\sum_{k=1}^\infty 2^{-k}=1$ and $H$ is finite on the set $P_{\frac{\delta}{H(X_j,0)}}(X_j,0)$, continuing in this way we find, after some finite number of steps $k$, a point $(Y_j,s_j)\doteqdot (X_j^k,t_j^k)$ satisfying \eqref{eq:pp1}, \eqref{eq:pp2} and \eqref{eq:pp3}.
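Explicitly, writing $H_i\doteqdot H(X_j^i,t_j^i)$, so that $H_i\ge 2^iH(X_j,0)$ by construction, the centres satisfy
\[
\vert X_j^k-X_j\vert\le \sum_{i=0}^{k-1}\frac{\delta}{4H_i}\le \frac{\delta}{4H(X_j,0)}\sum_{i=0}^{\infty}2^{-i}=\frac{\delta}{2H(X_j,0)}\,,
\]
with the analogous bound for the elapsed times, so each iterate remains in $P_{\frac{\delta}{2H(X_j,0)}}(X_j,0)$.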
Now translate $(Y_j,s_j)$ to the spacetime origin and rescale by $\lambda_j\doteqdot H(Y_j,s_j)$ to obtain a sequence of rescaled flows $\{M^j_t\}_{t\in (-\lambda_j^2(s_j+\frac{1}{2n}),0]}$ defined by $M^j_t\doteqdot \lambda_j(M_{\lambda_j^{-2}t+s_j}-Y_j)$. Passing to a subsequence, the final timeslices $M^j_0$ converge locally uniformly in the Hausdorff topology to a convex hypersurface $M^\infty_0=\partial\Omega^\infty_0$.
We claim that $M^\infty_0$ splits off a line. To see this, consider the segments $\ell_j$ joining $Y_j\in \partial\Omega_{s_j}$ to $0\in \Omega_{0}\subset \Omega_{s_j}$. Passing to a subsequence, these segments converge to a ray, $\ell$, emanating from $0$. Observe that $\ell\subset \Omega_{0}$. Indeed, if this were not the case, then we could find a point $p\in \ell\cap \partial\Omega_{0}$. But then, since $\ell\cap\partial\Omega_{-\frac{1}{2n}}=\emptyset$, the tangent hyperplane to $\partial\Omega_t$ parallel to $T_p\partial\Omega_0$ would have to travel an infinite distance in finite time, contradicting the uniform curvature bound. Now consider the segment $\ell'_j$ obtained from $\ell_j$ by reflection across the hyperplane orthogonal to $\ell$ and through $Y_j$. Since $\Omega_{s_j}$ is convex, the triangle bounded by $\ell_j$, $\ell_j'$ and $\ell$ lies in $\overline\Omega_{s_j}$ for each $j$. The claim follows, since the angle between $\ell_j$ and $\ell_j'$ goes to $\pi$ as $j\to\infty$ and $\lambda_j\ge \delta>0$.
Since (after passing to a subsequence) the convergence is smooth in $P_{\frac{\delta}{8}}$, we find that $H=1$, $\kappa_1=0$ and $\frac{\kappa_1}{H}\ge \varepsilon$ at the spacetime origin, which is absurd. \end{proof}
Finally, we rule out pinched expanding or translating solutions.
\begin{prop}[No pinched solitons]\label{prop:translator and expander} There exist no locally uniformly convex pinched mean curvature flow translators or expanders. \end{prop} \begin{proof} We proceed as in \cite{BL17} and \cite{Ni05}. First, let $M^n\subset \ensuremath{\mathbb{R}}^{n+1}$, $n\ge 2$, be a locally uniformly convex, pinched mean curvature flow translator. Then $M$ satisfies \[ H=-\inner{e}{\nu} \] for some $e\in \ensuremath{\mathbb{R}}^{n+1}\setminus\{0\}$. Observe that the vector field $V\doteqdot e^\top$ satisfies \begin{equation}\label{eq:gradient field} L(V)+\nabla H=0\,, \end{equation} and \begin{equation}\label{eq:gradient of gradient field translator} \nabla V=HL\,, \end{equation} where $L$ denotes the Weingarten map (see, for example, Lemma 13.32 in the book \cite{EGF} of Chow \emph{et al.}).
By Proposition \ref{prop:curvature bound at infinity}, $H$ attains its maximum at some point $o\in M$. Since $A>0$, \eqref{eq:gradient field} implies that $o$ is a zero of $V$. We claim that $V$ vanishes nowhere else. To see this, fix $X\in M\setminus\{o\}$ and let $\gamma:[0,d(X)]\to M$ be any minimizing unit speed geodesic joining $o$ to $X$, where $d$ denotes the intrinsic distance to $o$. Observe that \ba\label{eq:distance bound} \inner{V_X}{\gamma'(d(X))}={}&\int_0^{d(X)}\!\!\frac{d}{ds}\inner{V\circ\gamma}{\gamma'}ds=\int_0^{d(X)}\!\!\inner{\nabla_{\gamma'}V}{\gamma'}ds\,.
\ea This is positive (and hence $V_X\neq 0$) since, by \eqref{eq:gradient of gradient field translator}, $\nabla V$ is positive definite. In fact, since $\vert V\vert^2=1-H^2$ (normalizing $\vert e\vert=1$ and using $e=e^\top-H\nu$) and $A\ge \alpha Hg$ for some $\alpha>0$, we obtain the estimate \begin{equation}\label{eq:translator estimate} 1\ge \vert V\vert \ge \alpha\int_0^{d}H^2\,ds=\alpha d-\alpha\int_0^{d}\vert V\vert^2\,ds\,, \end{equation} which we will use in a moment.
It follows that $M\setminus\{o\}$ is foliated by integral curves $\sigma:(0,\infty)\to M$ of $V$ with $\sigma(s)\to o$ as $s\to 0$. Let $\sigma$ be such a curve. By \eqref{eq:gradient field} and the pinching hypothesis, \begin{equation}\label{eq:H estimate} -\frac{d}{ds}\log(H\circ\sigma)=-\frac{\nabla_{V}H}{H}=\frac{A(V,V)}{H}\ge \alpha\vert V\vert^2\,. \end{equation} Since \eqref{eq:translator estimate} and the bound $\vert V\vert\le 1$ give $\alpha\int_0^{d}\vert V\vert^2\,ds\ge \alpha d-1$, integrating \eqref{eq:H estimate} yields \[ H\le H(o)\mathrm{e}^{1-\alpha d}\,. \]
Since $M$ is convex and non-flat, it follows from the Ecker--Huisken interior estimates that $\lambda M$ converges as $\lambda\to 0$ locally uniformly in $C^\infty(\ensuremath{\mathbb{R}}^{n+1}\setminus\{0\})$ to a non-planar convex cone. But this violates pinching.
Now, let $M^n\subset \ensuremath{\mathbb{R}}^{n+1}$, $n\ge 2$, be a locally uniformly convex, pinched mean curvature flow expander. Then $M$ satisfies \[ H=-\frac{1}{2}\inner{X}{\nu}\,. \] Observe that the vector field $V\doteqdot \tfrac{1}{2}X^\top$ satisfies \eqref{eq:gradient field} and \begin{equation}\label{eq:gradient of gradient field expander} \nabla V=HL+\tfrac{1}{2}I\,. \end{equation} (See, for example, \cite[Lemma 10.14]{EGF}).
By Proposition \ref{prop:curvature bound at infinity}, $H$ attains its maximum at some point $o\in M$. Since $A>0$, \eqref{eq:gradient field} implies that $o$ is a zero of $V$. Arguing as in \eqref{eq:distance bound}, we find that $V$ vanishes nowhere else. In fact, since $\nabla V\ge \tfrac{1}{2}g$ by \eqref{eq:gradient of gradient field expander}, we obtain \begin{equation}\label{eq:expander estimate} \vert V\vert \ge \tfrac{1}{2}d\,. \end{equation} On the other hand, \begin{equation}\label{eq:expander distance} \frac{d}{ds}d\circ\sigma=\inner{V\circ \sigma}{\nabla d\circ \sigma}\le\vert V\vert\,. \end{equation} Since \eqref{eq:gradient field} holds, \eqref{eq:H estimate} also holds (with $V=\frac{1}{2}X^\top$) on an expander. Combining \eqref{eq:H estimate}, \eqref{eq:expander estimate} and \eqref{eq:expander distance}, we find that \[ -\frac{d}{ds}\log(H\circ\sigma)\ge\alpha\vert V\vert^2\ge \frac{\alpha}{2}\,(d\circ\sigma)\,\frac{d}{ds}(d\circ\sigma)=\frac{\alpha}{4}\frac{d}{ds}(d\circ\sigma)^2\,, \] and hence, integrating along $\sigma$, \[ H\le H(o)\mathrm{e}^{-\frac{\alpha}{4}d^2}\,. \] The claim now follows as before. \end{proof}
Putting these ingredients together yields the result.
\begin{thm}[Hamilton \cite{HamiltonPinched}] Every pinched, convex hypersurface with bounded curvature in $\ensuremath{\mathbb{R}}^{n+1}$, $n\geq 2$, is either a hyperplane or compact. \end{thm} \begin{proof} Let $M=\partial\Omega\subset \ensuremath{\mathbb{R}}^{n+1}$, $n\ge 2$, be a convex hypersurface with bounded curvature which is $\alpha$-pinched for some $\alpha>0$.
By Proposition \ref{prop:existence}, we obtain a family $\{M_t\}_{t\in[0,T)}$ of convex boundaries $M_t=\partial\Omega_t$ with $M_0=M$ which evolve by mean curvature and satisfy $\sup_{t\in[0,\sigma]}\sup_{M_t}\vert A\vert<\infty$ for all $\sigma\in[0,T)$, and either $T=\infty$, or \[ \limsup_{t\to T}\sup_{M_t}\vert A\vert=\infty\,. \] By Proposition \ref{prop:pinching preserved}, $M_t$ is $\alpha$-pinched for each $t\in(0,T)$.
By applying the strong maximum principle to the evolution equation for $H$ (see, for example, \cite[Equation (6.18)]{EGF}), we may assume that $H>0$ on $M_t$ for all $t\in(0,T)$.
\subsection*{Case 1: $T<\infty$} Since Proposition \ref{prop:curvature bound at infinity} implies that \begin{equation}\label{eq:type III at infinity} \limsup_{\vert X\vert\to\infty}\vert A_{(X,t)}\vert^2\le 0\;\;\text{for all}\;\; t\in(0,T)\,, \end{equation} the local umbilic estimate (Proposition \ref{prop:local pinching}) yields \begin{equation}\label{eq:umbilic} \vert \mathring A\vert \le \varepsilon H+C(n,\alpha,\varepsilon)\Theta \end{equation} for every $\varepsilon>0$, where $\Theta\doteqdot \sup_{M_{0}}H<\infty$.
Since $\vert A_{(X,t)}\vert\to 0$ as $\vert X\vert\to\infty$, we can find a sequence of times $t_j\to T$ and points $X_j\in M_{t_j}$ such that \[ \lambda_j\doteqdot H(X_j,t_j)=\max_{t\in [0,t_j]}\max_{M_{t}}H>0\,. \] Translating $(X_j,t_j)$ to the spacetime origin and rescaling by $\lambda_j$ yields a sequence of mean curvature flows $\{M^j_t\}_{t\in(-\lambda_j^2t_j, 0]}$ defined by $M^j_t\doteqdot \lambda_j(M_{\lambda_j^{-2}t+t_j}-X_j)$. Since $\max_{M^j_t}H\le 1=H(0,0)$ and $\lambda_j\to\infty$ as $j\to\infty$, a subsequence of the rescaled flows converges locally uniformly in $C^\infty(\ensuremath{\mathbb{R}}^{n+1}\times(-\infty,0])$ to an ancient flow $\{M^\infty_t\}_{t\in(-\infty,0]}$. By \eqref{eq:umbilic}, this limit is umbilic, and hence a shrinking sphere. So $M$ is compact.
\subsection*{Case 2: $T=\infty$.}
Suppose first that the flow is \emph{type-III}; that is, $\Lambda\doteqdot \sup_{t\in(0,\infty)}\sqrt{t}\max_{M_t}H<\infty$. \begin{comment} Given a sequence of times $t_j>0$ with $t_j\to\infty$, choose $x_j\in M$ so that \[ \frac{\vert\mathring{A}\vert}{H}(x_j,t_j)\ge \sup_{M\times\{t_j\}}\frac{\vert\mathring{A}\vert}{H}-\frac{1}{j} \] and consider the rescaled flow defined by \[ X_j(x,t)\doteqdot \lambda_j\big(X(x,\lambda_j^{-2}t+t_j)-X(x_j,t_j)\big)\,,\;\;t\in I_j\doteqdot (-\lambda_j^2t_j,\infty)\,, \] where $\lambda_j\doteqdot \frac{1}{\sqrt{t_j}}$. Since $\lambda_j^2t_j\equiv 1$ and \bann H_j(\cdot,t)={}&\lambda_j^{-1}H(\cdot,\lambda_j^{-2}t+t_j)\le\frac{\Lambda}{\sqrt{t+1}}\,, \eann we obtain a subsequential limit defined on the time interval $(-1,\infty)$. We claim that $\sup_{M\times\{t\}}\frac{\vert\mathring A\vert}{H}$ is constant on the limit. Recall that (see, for example, \cite[Equation (8.7)]{EGF}) \begin{equation}\label{eq:evolve trace free} (\partial_t-\Delta)\frac{\vert\mathring A\vert^2}{H^2}=2\inner{\nabla\frac{\vert\mathring A\vert^2}{H^2}}{\frac{\nabla H}{H}}-2\left\vert\nabla\frac{\mathring A}{H}\right\vert^2\,. \end{equation} By a similar argument to that of Proposition \ref{prop:pinching preserved}, we find that $\sup_{M\times\{t\}}\frac{\vert\mathring A\vert}{H}$ is non-increasing. It therefore converges to some limit as $t\to\infty$. Thus, given any $a,b\in(-1,\infty)$, \bann \sup_{M\times\{b\}}\frac{\vert\mathring A_j\vert}{H_j}-\sup_{M\times\{a\}}\frac{\vert\mathring A_j\vert}{H_j}={}&\sup_{M}\frac{\vert\mathring A\vert}{H}(\cdot,\lambda_j^{-2}b+t_j)-\sup_{M}\frac{\vert\mathring A\vert}{H}(\cdot,\lambda_j^{-2}a+t_j)\\ \to{}&0\;\;\text{as}\;\;j\to\infty\,. \eann It follows that $\sup_{M\times\{0\}}\frac{\vert\mathring A\vert}{H}$ is attained at the origin, so the strong maximum principle implies that $\frac{\vert\mathring A\vert}{H}$ is constant on the limit solution. 
The pinching hypothesis and the evolution equation \eqref{eq:evolve trace free} then imply that the limit flow is totally umbilic (and non-flat), and hence a sphere (this well-known argument appears in \cite{Hu93}), which is impossible. \end{comment}
Given a sequence of times $t_j>0$ with $t_j\to\infty$, consider the rescaled flow $\{M^j_t\}_{t\in(-\lambda_j^2t_j,\infty)}$ defined by $M^j_t\doteqdot \lambda_jM_{\lambda_j^{-2}t+t_j}$, where $\lambda_j\doteqdot \frac{1}{\sqrt{t_j}}$. Since $\lambda_j^2t_j\equiv 1$ and \bann H_j(\cdot,t)={}&\lambda_j^{-1}H(\cdot,\lambda_j^{-2}t+t_j)\le\frac{\Lambda}{\sqrt{t+1}}\,, \eann we obtain a subsequential limit defined for $t\in(-1,\infty)$. Furthermore, \[ \sqrt{t+1}\,H_\infty(\cdot,t)=\lim_{j\to\infty}\sqrt{t+1}H_j(\cdot,t)=\lim_{j\to\infty}\sqrt{\frac{t+1}{\lambda_j^2}}H\left(\cdot,\frac{t+1}{\lambda_j^2}\right)\,. \] By the differential Harnack inequality and the type-III hypothesis, the limit on the right exists (and is positive) independently of $t$. Thus, by the rigidity case of the differential Harnack inequality, the limit is a nontrivial expander, which violates Proposition \ref{prop:translator and expander}.
So suppose that the flow is \emph{type-IIb}; that is, $\sup_{t\in(0,\infty)}\sqrt{t}\max_{M_t}H=\infty$. For each $j$, choose $(x_j,t_j)$ such that \[ t_j(j-t_j)H^2(x_j,t_j)=\max_{M\times [0,j]}t(j-t)H^2(\cdot,t) \] and set $\lambda_j\doteqdot H(x_j,t_j)$. The corresponding rescaled flows satisfy \bann H_j(\cdot,t)
\le{}&\sqrt{\frac{-\alpha_j}{t-\alpha_j}\frac{\omega_j}{\omega_j-t}} \eann for all $t\in(\alpha_j,\omega_j)$, where $\alpha_j\doteqdot -\lambda_j^{2}t_j$ and $\omega_j\doteqdot \lambda_j^{2}(j-t_j)$. Since \bann \frac{1}{\omega_j^{-1}-\alpha_j^{-1}}
\ge{}& \frac{1}{2}\max_{M\times[0,j/2]}tH^2\,,
\eann we find that $\alpha_j\to-\infty$ and $\omega_j\to\infty$, and hence obtain an eternal limit flow $\{M^\infty_t\}_{t\in(-\infty,\infty)}$. Since $\max H$ is attained at the spacetime origin (where it is positive), the differential Harnack inequality implies that $\{M^\infty_t\}_{t\in(-\infty,\infty)}$ evolves by translation. This violates Proposition \ref{prop:translator and expander}.
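For the reader's convenience, the earlier bound on $\frac{1}{\omega_j^{-1}-\alpha_j^{-1}}$ is a consequence of the elementary identity
\[
\omega_j^{-1}-\alpha_j^{-1}=\frac{1}{\lambda_j^{2}(j-t_j)}+\frac{1}{\lambda_j^{2}t_j}=\frac{j}{\lambda_j^{2}\,t_j(j-t_j)}\,,
\]
which, by the choice of $(x_j,t_j)$, yields
\[
\frac{1}{\omega_j^{-1}-\alpha_j^{-1}}=\frac{t_j(j-t_j)H^2(x_j,t_j)}{j}\ge \frac{t(j-t)H^2(\cdot,t)}{j}\ge \frac{1}{2}\,tH^2(\cdot,t)
\]
for all $t\in[0,j/2]$, since $j-t\ge j/2$ there.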
\end{proof}
\begin{comment} It may also be possible to use the pinching estimate in the blow-up analysis?
\begin{proof}[Pinching estimate for translators?] Let $\{M_t\}_{t\in(-\infty,\infty)}$, $M_t=M+te_{n+1}$, be a mean convex translator satisfying \[ \inf\frac{\kappa_1}{H}>-\infty \] or \[ \inf\frac{\kappa_1+\dots+\kappa_{m+1}}{H}>0 \] for some $m\in\{0,\dots,n-2\}$. Consider the parabolic rescaling \[ M^\lambda_t\doteqdot \lambda M_{\lambda^{-2}t}=\lambda M+\lambda^{-1}te_{n+1} \] for $\lambda\le 1$. Fix $\sigma\in(0,1)$ and consider the parabolic cylinders \[ P^\sigma\doteqdot B_1\times(-\tfrac{1}{2n},-\tfrac{\sigma}{2n}]\;\;\text{and}\;\; P^\sigma_{\lambda^{-1}}\doteqdot B_{\lambda^{-1}}\times(-\tfrac{1}{2n}\lambda^{-2},-\tfrac{\sigma}{2n}\lambda^{-2}]\,. \] Observe that \bann \sup_{\partial P^\sigma}\vert A_\lambda\vert={}&\sup_{\partial P_{\lambda^{-1}}^\sigma}\lambda^{-1}\vert A\vert\\ ={}& \lambda^{-1}\max\left\{\sup_{M\cap B_{\lambda^{-1}}(\frac{1}{2n}\lambda^{-2})}\vert A\vert,\sup_{t\in (-\frac{1}{2n}\lambda^{-2},-\frac{\sigma}{2n}\lambda^{-2}]}\sup_{M\cap\partial B_{\lambda^{-1}}(t e_{n+1})}\vert A\vert\right\} \eann and \bann \int\!\!\!\int_{C_{\delta}(P^\sigma)}\!\!\!\!\!\!d\mu^\lambda\,dt={}&\lambda^{n+2}\int\!\!\!\int_{C_{\delta\lambda^{-1}}(P_{\lambda^{-1}}^\sigma)}d\mu\,dt\\ ={}&\lambda^{n+2}\left(\int\!\!\!\int_{P_{\lambda^{-1}}^\sigma}d\mu\,dt-\int\!\!\!\int_{P_{(1-\delta)\lambda^{-1}}^\sigma}d\mu\,dt\right)\\ ={}&\lambda^{n+2}\left(\int^{\frac{1}{2n\lambda^{2}}}_{\frac{\sigma}{2n\lambda^{2}}}\!\!\!\!\int_{M\cap B_{\lambda^{-1}}(se_{n+1})}\!\!\!\!\!\!\!\!\!d\mu\,ds-\int^{\frac{1-\delta}{2n\lambda^{2}}}_{\frac{\sigma}{2n\lambda^{2}}}\!\!\int_{M\cap B_{(1-\delta)\lambda^{-1}}(se_{n+1})}\!\!\!\!\!\!\!\!\!d\mu\,ds\right) \eann Can we bound these two guys uniformly in $\lambda$ (for $\lambda$ sufficiently small)? 
It would suffice to bound \begin{equation}\label{eq:curvature bound} \sup_{M\cap B_{\sqrt{2nh}}(he_{n+1})}H\le \frac{C}{\sqrt{h}} \end{equation} and \begin{equation}\label{eq:area bound} \sup_{s\in[\sigma h,h]}\mu(M\cap B_{\sqrt{2nh}}(se_{n+1}))\le Ch^\frac{n}{2} \end{equation} for $h$ sufficiently large.
If these estimates hold, we obtain \[ \kappa^\lambda_1\ge -\varepsilon H_\lambda- C_\varepsilon\lambda \;\;\text{in}\;\; P^\sigma \] or \[ \sum_{j=m+1}^n(\kappa^\lambda_n-\kappa_j^\lambda)\le\sum_{j=1}^m\kappa_j^\lambda+\varepsilon H_\lambda+C_\varepsilon\lambda \;\;\text{in}\;\; P^\sigma \] respectively. Taking $\lambda\to 0$ then yields a ``blow-down'' limit solution satisfying the sharp estimates. The case $m=0$ would be impossible and in the two convex case the blow-down would have to be the corresponding shrinking cylinder (and in this case the cylindrical estimate passes to the interior by the maximum principle).
For the curvature estimate, it suffices to estimate $H=-\inner{\nu}{e_{n+1}}=\frac{1}{\sqrt{1+\vert Du\vert^2}}$ at points $x$ in the ball in $\ensuremath{\mathbb{R}}^n$ of radius $\lambda^{-1}$ satisfying $u(x)\sim \lambda^{-2}$. (In \cite[Lemma 3.4]{BL17}, we obtained an estimate for the infimum). The area estimate seems very mild. \end{proof} \end{comment}
\begin{comment}
{\color{blue}
In the following formulation the lemma may also be useful in studying convex solutions that are not uniformly convex. \begin{lem} \label{lem:pp} Let $X: M\times [0, T] \to \mathbb{R}^{n+1}$, $T= R^2$, be a solution of mean curvature flow such that $M_t := X(M,t)$ is the boundary of a convex set $\Omega_t$. Fix a point $x_0 \in \Omega_T$. Given $\varepsilon, \delta > 0$ there is a constant $L<\infty$ such that \[\frac{\kappa_1}{H} (x,T) \leq \varepsilon \] for all $x \in M_T \setminus B_{LR}(x_0)$ at which $H(x,T) \geq \delta R^{-1}$. \end{lem}
\begin{proof} For $x \in M_t$ we write \[\hat P(x,t, r) := B_{r H(x,t)^{-1}}(x) \times [t - r^2 H(x,t)^{-2}, t].\]
If the claim is false for some $\varepsilon$ and $\delta$ choose a sequence $x_i \in M_T$ such that $|x_i| \to \infty$, $H(x_i, T) \geq \delta R^{-1}$ and $\tfrac{\kappa_1}{H}(x_i, T) \geq \varepsilon$. In particular, \[\delta H(x_i, T)^{-1} \leq R = \sqrt{T}\] so $M_t$ is well-defined in $\hat P(x_i, T, \delta)$. Set $t_i := T$.
Employing Ecker's point-picking trick we find another sequence $(y_i, \tau_i)$ with the following properties: \begin{enumerate} \item \label{eq:pp_1}$(y_i,\tau_i) \in \hat P (x_i, t_i, \tfrac{\delta}{2})$. \item \label{eq:pp_2}$H(y_i, \tau_i) \geq \delta R^{-1}$. \item \label{eq:pp_3}$H(x,t) \leq 4 H(y_i,\tau_i)$ in $\hat P(y_i, \tau_i, \tfrac{\delta}{4})$. \end{enumerate} The construction is as follows. If $(x_i,t_i)$ satisfies \eqref{eq:pp_3} set $(y_i, \tau_i) = (x_i, t_i)$. If not, choose $(x_i^1, t_i^1) \in \hat P(x_i,t_i, \tfrac{\delta}{4})$ such that \[H(x_i^1, t_i^1) \geq 4 H(x_i,t_i).\] Then $(x_i^1, t_i^1)$ certainly satisfies \eqref{eq:pp_1} and \eqref{eq:pp_2}. If $(x_i^1, t_i^1)$ also satisfies \eqref{eq:pp_3} choose $(y_i, \tau_i) = (x_i^1, t_i^1)$, and if not fix $(x_i^2, t_i^2) \in \hat P(x_i^1, t_i^1, \tfrac{\delta}{4})$. Continuing in this way, after some finite number of steps $k$, we find $(x_i^k,t_i^k)$ satisfying \eqref{eq:pp_1}, \eqref{eq:pp_2} and \eqref{eq:pp_3}.
We have \begin{align*}
|x_i^k - x_i| \leq{}& \frac{\delta \varepsilon}{4}\big( H(x_i, t_i)^{-1} + \sum_{l = 1}^{k-1} H(x_i^l ,t_i^l)^{-1} \big)\\ \leq{}& \frac{\delta \varepsilon}{4} (1 + \sum_{l=1}^{k-1} 4^{-l}) H(x_i,t_i)^{-1}\\ \leq{}& \frac{\delta \varepsilon}{2} H(x_i,t_i)^{-1}, \end{align*} and \begin{align*} t_i^k \geq{}& t_i - \frac{\delta^2 \varepsilon^2}{16} \big(H(x_i,t_i)^{-2} + \sum_{l=1}^{k-1} H(x_i^l, t_i^l)^{-2} \big)\\ \geq{}&t_i - \frac{\delta^2 \varepsilon^2}{16} \big(1 + \sum_{l=1}^{k-1} 4^{-2l}\big) H(x_i,t_i)^{-2}\\ \geq{}&t_i - \frac{\delta^2\varepsilon^2}{4} H(x_i,t_i)^{-2}. \end{align*} That is, $(x_i^k,t_i^k)$ also satisfies \eqref{eq:pp_1}, so we may set $(y_i,\tau_i) = (x_i^k, t_i^k)$.
Observe that $|y_i| \to \infty$. By convexity $\ell_i := \overline{x_0 y_i}$ is contained in $\Omega_{\tau_i}$, and thus in $\Omega_{T/2}$. Passing to a subsequence we may assume the sequence $\ell_i$ converges to a ray $\ell$ emanating from $x_0$ and contained in $\bar \Omega_{T/2}$. The avoidance principle then implies $\ell$ is in $\Omega_T$. Let $\ell_i '$ be the line segment from $y_i$ to $\ell$ with $|\ell_i'| = |\ell_i|$. Then by convexity $\ell_i'$ is contained in $\Omega_{\tau_i}$. This setup ensures the angle between $\ell_i$ and $\ell_i'$ tends to $\pi$ as $i \to \infty$.
Shift $(y_i, \tau_i)$ to $(0,0)$ and parabolically rescale by the factor $H(y_i, \tau_i)$. The resulting sequence of solutions each satisfy $\tfrac{\kappa_1}{H}(0,0) \geq \varepsilon$ and have mean curvature bounded above by $4$ in $P(0,0,\tfrac{\delta}{2})$. Therefore, by the Ecker-Huisken interior estimates we may extract a smoothly converging subsequence in, say, $P(0,0,\tfrac{\delta}{4})$. However, the shifted and rescaled lines $\ell_i$ and $\ell_i'$ are converging to a line segment through the origin, with which our limiting hypersurface makes tangential contact at $(0,0)$. The second fundamental form of the limit is nonnegative, so it has a zero eigenvalue at $(0,0)$, contradicting $\kappa_1 \geq \varepsilon H$. \end{proof}
The lemma shows if $\tfrac{\kappa_1}{H} \geq \varepsilon$ then $H$ is close to zero outside of a fixed compact set on any closed time interval. Suppose however $M_t$ is only defined for $t \in[0,T)$. If $\cap_{t < T}\,\Omega_t$ is nonempty we can apply the lemma centered at some $x_0$ in this set to find a fixed ball outside of which $H$ is small for all $t \in [0,T)$. However, if $\Omega_t$ is escaping to infinity in finite time there is no such $x_0$. To rule out this behaviour we need to know $M_t$ is not moving too quickly. {\color{red}Maybe we can argue as follows:}
\begin{lem} Let $X: M\times [0,T) \to \mathbb{R}^{n+1}$ be a solution of mean curvature flow such that $M_t := X(M,t)$ is the boundary of a convex region. Suppose in addition $\tfrac{\kappa_1}{H} \geq \varepsilon$. Then there is a $C<\infty$ such that $H(\cdot,t) \leq C/\sqrt{T- t}$. \end{lem}
\begin{proof} Lemma \ref{lem:pp} implies, for all $t>0$, the mean curvature of $M_t$ attains its maximum. Therefore, let us define $\bar H := \max_{M_t} H$. If $\bar H$ is bounded on $[0,T)$ there is nothing to prove, so assume $\bar H \to \infty$ as $t \to T$. If the claim is false then $M_t$ is forming a Type II singularity at $T$; that is, \[\limsup_{t \to T} \sqrt{T-t}\,\bar H = \infty.\] Thus, as in Huisken-Sinestrari, Hamilton's differential Harnack inequality implies there is a sequence of rescalings converging to a uniformly convex translator moving with unit speed. This is incompatible with uniform convexity by Lemma \ref{lem:translator and expander}, so our assumption was false and we conclude \[\limsup_{t \to T} \sqrt{T-t}\, \bar H < \infty.\] \end{proof} }
\end{comment}
\begin{comment} \section{Riemannian ambient spaces}
Let $(\overline M{}^{n+1},\overline g)$ be a complete Riemannian manifold with sectional curvature bounded below by $K\le 0$. Alternatively, suppose that the Ricci curvature is bounded below by $(n-1)Kg$. If a hypersurface $M^n\to \overline M{}^{n+1}$ satisfies \begin{equation}\label{eq:Riemannian_pinching} A\ge \sqrt{-K}g\,, \end{equation} then, by the Gauss equation, the sectional curvature of a two-plane $u\wedge v$ (with $u$ and $v$ orthonormal) satisfies \[ \sec(u\wedge v)=\overline{\sec}(u\wedge v)+A(u,u)A(v,v)\ge 0\,. \] Alternatively, the Ricci curvature satisfies \[ \Rc(u,v)=\overline{\Rc}(u,v)+HA(u,v)-A^2(u,v)\ge 0\,. \] So the pinching condition \eqref{eq:Riemannian_pinching} is borderline for the Bonnet--Myers theorem. We might be able to prove, using the harmonic mean curvature flow (following \cite{AndrewsRiemannian}) that any hypersurface of $(\overline M{}^{n+1},\overline g)$ satisfying \[ \inf_{M}\frac{\kappa_1-\sqrt{-K}}{\kappa_n-\sqrt{-K}}>0 \] is compact.
{\color{blue}
\begin{lem} Let $\Omega \subset \mathbb{R}^{n+1}$ be an open convex set with smooth boundary and unbounded curvature. There exists a sequence of points $Y_k$ such that the sequence $\Omega_k:=H(Y_k)(\Omega - Y_k)$ satisfies \[\limsup_{k\to \infty} \max_{B_R(0)} H_k \leq 2\] for every $R<\infty$. In particular, for any $\alpha \in (0,1)$, we can pass to a subsequence so that $\Omega_k \to \Omega'$ in $C^{1,\alpha}_{\mathrm{loc}}$, where $\Omega'$ is an open convex set. Moreover, the outward unit normal to $\partial \Omega'$ is Lipschitz continuous with Lipschitz constant not exceeding 2. \end{lem}
\begin{proof} Since there is a sequence $X_k \to \infty$ such that $H(X_k) \to \infty$, there is a sequence $Y_k \in B_1(X_k)$ with the property that
\[(1-|Y_k-X_k|) H(Y_k) = \max_{Y \in B_1(X_k)} (1-|Y-X_k|) H(Y) \to \infty.\]
Rearranging the definition of $Y_k$ and using the triangle inequality we find
\[\frac{H(Y)}{H(Y_k)} \leq \frac{1-|Y_k-X_k|}{1-|Y-X_k|} \leq \bigg(1- \frac{|Y-Y_k|}{1-|Y_k-X_k|}\bigg)^{-1}\leq 2\]
for all $Y$ satisfying $|Y-Y_k| \leq \tfrac{1}{2}(1-|Y_k- X_k|)$. Therefore, the sequence of shifted and rescaled domains $\Omega_k := H(Y_k) (\Omega - Y_k)$ satisfies $H_k \leq 2$ in the ball of radius $\tfrac{1}{2}(1-|Y_k- X_k|)H(Y_k) \to \infty$. For any fixed $\alpha \in (0,1)$, passing to a subsequence and applying the Arzela--Ascoli theorem, we may assume the $\Omega_k$ converge in $C^{1,\alpha}_{\mathrm{loc}}$ to an open convex set $\Omega'$. The Lipschitz norm of $\nu_k$ is at most 2, and $\nu_k \to \nu'$ uniformly on compact subsets, so the Lipschitz norm of $\nu'$ is at most 2. \end{proof}
Maybe the limit $\Omega'$ is still uniformly pinched in some weak sense/almost everywhere? Maybe it can be mollified to obtain a smooth region whose boundary satisfies $H\leq 2$ and is uniformly pinched? }
\end{comment}
\end{document}
\appendix
\section{Ideas for removing the curvature bound}
We want to prove something like the following.
\begin{claim} Let $\Omega\subset \ensuremath{\mathbb{R}}^{n+1}$ be a convex body with smooth, locally uniformly convex boundary $\partial\Omega$. Suppose that $0\in\Omega$. Given any $\varepsilon>0$, there exist $h$ and $L$ such that, for any $X\in \ensuremath{\mathbb{R}}^{n+1}\setminus B_{LR}$, \[ H(X)\ge hR^{-1}\;\;\implies\;\;\inf_{B_{\frac{h}{2H(X)}}(X)}\frac{\kappa_1}{H}\le \varepsilon. \] \end{claim}
This could be proved in a similar manner to Proposition \ref{prop:curvature bound at infinity} if we had a H\"older estimate for $H$:
Suppose the claim is false. Then there exist $\varepsilon>0$ and, for each $j\in\ensuremath{\mathbb{N}}$, a point $X_j\in\partial\Omega$ such that \[ \vert X_j\vert\to\infty,\;\; H(X_j)\ge j\;\;\text{and yet}\;\;\inf_{B_{\frac{j}{2H(X_j)}}(X_j)}\frac{\kappa_1}{H}>\varepsilon. \] By point-picking, we find $Y_j\in B_{\frac{j}{2H(X_j)}}(X_j)$ such that $H(Y_j)\ge H(X_j)$ and $H\le 2 H(Y_j)$ in $B_{\frac{j}{4H(Y_j)}}(Y_j)$. Set $\lambda_j\doteqdot H(Y_j)$. After passing to a subsequence, the convex bodies $\Omega_j\doteqdot \lambda_j(\Omega-Y_j)$ converge in the Hausdorff topology on compact subsets of $\ensuremath{\mathbb{R}}^{n+1}$ to a limit body $\Omega_\infty$ which splits off a line. If the convergence were in $C^2$, we would arrive at a contradiction. As it stands, it is only in $C^{1,\alpha}$. Here are some ideas...
(1) Myers' theorem and the Gauss equation immediately imply that \[ \min_{\gamma}H\le \frac{1}{\varepsilon d} \] on any geodesic $\gamma$ of length at least $\pi d$ contained in a region satisfying $\kappa_1\ge \varepsilon H$. Thus, \[ B_{r}(X)\subset B_{\frac{j}{2H(X_j)}}(X_j)\;\;\implies\;\; \inf_{B_r(X)}H\le \frac{1}{2\varepsilon r}\,. \]
(2) Since the normal is continuous under the above convergence, \[ \int_{\partial\Omega_j\cap B_{j/4}}H_j^n\le \left(\frac{n}{\varepsilon}\right)^n\area(\mathrm{G}(\partial\Omega_j\cap B_{j/4}))\to 0\,, \] where $\mathrm{G}$ is the Gauss map. In particular, $H_j\to 0$ a.e. in any compact subset of $\ensuremath{\mathbb{R}}^{n+1}$. Can we bootstrap this somehow?
(3) Simons' identity gives a little bit of higher regularity. Indeed, defining the skew-symmetric parts \[ \Lambda_{ijkl}\doteqdot \frac{1}{2}\left(\nabla_{(i}\nabla_{j)}A_{kl}-\nabla_{(k}\nabla_{l)}A_{ij}\right) \] of $\nabla_{(i}\nabla_{j)}A_{kl}$, and \[ \mathrm{C}_{ijkl}\doteqdot \frac{1}{2}\left(A_{ij}A^2_{kl}-A_{kl}A^2_{ij}\right) \] of $A_{ij}A^2_{kl}$, Simons' identity states that \[ \Lambda=\mathrm C\,. \] Pinching implies that $\mathrm C$ controls $A_{ij}A^2_{kl}$. Does this mean that $\Lambda$ controls $\nabla_{(i}\nabla_{j)}A_{kl}$?
(4) It might also be possible to integrate (some appropriately contracted version of) Simons' identity by parts to obtain some weak control on $\nabla H$.
\end{document}
\begin{document}
\maketitle
\begin{abstract} We generalize the notion of calibrated submanifolds to smooth maps and show that several examples of smooth maps appearing in differential geometry fit into this framework. Moreover, we apply this notion to obtain lower bounds for the energy of smooth maps in a given homotopy class between Riemannian manifolds, and we consider energy functionals which are minimized by the identity maps of Riemannian manifolds with special holonomy groups. \end{abstract}
\tableofcontents
\section{Introduction} In this article, we introduce the notion of calibrated geometry for smooth maps between Riemannian manifolds and consider lower bounds for, and minimizers of, several energies of smooth maps. Let $(X,g)$ and $(Y,h)$ be compact Riemannian manifolds and let $f\colon X\to Y$ be a smooth map. Then the $p$-energy of $f$ is defined by \begin{align*}
\mathcal{E}_p(f):=\int_X |df|^pd\mu_g \end{align*} for $p\ge 1$, where $\mu_g$ is the volume measure of $g$. A harmonic map is defined to be a critical point of $\mathcal{E}_2$, and the study of harmonic maps is a significant area of differential geometry. In 1964, Eells and Sampson \cite{ES1964} showed that there is a harmonic map $f'$ homotopic to $f$ whenever the sectional curvature of $h$ is nonpositive. Moreover, Hartman \cite{Hartman1967} showed that such harmonic maps minimize $\mathcal{E}_2|_{[f]}$, where $[f]$ is the homotopy class represented by $f$.
In general, harmonic maps need not minimize the energy. For example, although the identity map of any Riemannian manifold is harmonic, on the $n$-sphere $S^n$ with the standard metric, $n\ge 3$, it is known that for every $\varepsilon>0$ there is a smooth map $f_\varepsilon$ homotopic to the identity map such that $\mathcal{E}_2(f_\varepsilon)=\varepsilon$. By a result of White \cite{White1986}, if $\pi_l(X)$ is trivial for all $1\le l\le k$, then $\inf \mathcal{E}_k|_{[1_X]}=0$, where $1_X$ is the identity map of $X$.
Now, we consider how to obtain lower bounds for the energy restricted to a given homotopy class $[f]$, and minimizers realizing them. Such a lower bound was first obtained by Lichnerowicz \cite{Lichnerowicz1969} in the case where $(X,g)$ and $(Y,h)$ are K\"ahler manifolds: he showed that any holomorphic map between K\"ahler manifolds minimizes $\mathcal{E}_2$ in its homotopy class. Moreover, Croke \cite{Croke1987} showed that the identity map on the real projective space with the standard metric minimizes $\mathcal{E}_2$ in its homotopy class, and Croke and Fathi \cite{CrokeFathi1990} introduced a new homotopy invariant, called the intersection, which gives a lower bound for $\mathcal{E}_2|_{[f]}$ for a given homotopy class $[f]$. Recently, Hoisington \cite{hoisington2021} gave lower bounds for $\mathcal{E}_p$, for appropriate $p$, in the case where $X$ is a real, complex, or quaternionic projective space with the standard metric.
In this article, we generalize the notion of calibrated geometry to smooth maps between smooth manifolds and use it to obtain lower bounds for several energies. The origin of calibrated geometry is Wirtinger's inequality for even-dimensional subspaces of hermitian inner product spaces \cite{Wirtinger1936}, which has since been refined and generalized by many researchers. In \cite{harvey1982calibrated}, Harvey and Lawson defined calibrated submanifolds of Calabi-Yau, $G_2$ or $Spin(7)$ manifolds, which minimize volume in their homology classes. Similarly, we define a new class of smooth maps, called calibrated maps, and show that they minimize the appropriate energy in each given situation. Moreover, we obtain the following results as applications.
The first application is a lower bound for the $p$-energy restricted to a given homotopy class. We assume $X$ is oriented. The pullback of $f$ induces a linear map $[f^*]^k\colon H^k(Y,\mathbb{R})\to H^k(X,\mathbb{R})$. Fixing bases of $H^k(X,\mathbb{R})$ and $H^k(Y,\mathbb{R})$, we obtain the matrix $P([f^*]^k)$ of $[f^*]^k$ and put $|P([f^*]^k)|:=\sqrt{{\rm tr}({}^tP([f^*]^k)\cdot P([f^*]^k))}$. \begin{thm} Let $(X,g)$ and $(Y,h)$ be as above. For any $1\le k\le \dim X$, there is a positive constant $C$ depending only on $k$, $(X,g)$, $(Y,h)$ and the bases of $H^k(X,\mathbb{R})$, $H^k(Y,\mathbb{R})$ such that for any $f\in C^\infty(X,Y)$ we have \begin{align*}
\mathcal{E}_k(f)\ge C|P([f^*]^k)|. \end{align*} In particular, if $[f^*]^k$ is nonzero, then $\inf (\mathcal{E}_k|_{[f]})$ is positive. \label{thm main1} \end{thm} In the above theorem, the compactness of $Y$ is not essential; see Theorem \ref{thm lower bdd of k-energy}.
The second application is to show that the identity maps of certain Riemannian manifolds with special holonomy groups minimize appropriate energies. As mentioned above, the identity map on the real or complex projective space minimizes $\mathcal{E}_2$ in its homotopy class by \cite{Croke1987} and \cite{Lichnerowicz1969}, respectively. It was shown by Wei \cite{Wei1998} that the identity map on the quaternionic projective space $\mathbb{H}\mathbb{P}^n$ with the standard metric is an unstable critical point of $\mathcal{E}_p$ for $1\le p< 2+4n/(n+1)$. Moreover, Hoisington gave a nontrivial lower bound for $\mathcal{E}_p|_{[1_{\mathbb{H}\mathbb{P}^n}]}$ for $p\ge 4$. Here, the quaternionic projective space is a typical example of a quaternionic K\"ahler manifold, that is, a Riemannian manifold of dimension $4n$ whose holonomy group is contained in $Sp(n)\cdot Sp(1)$. Now, let $A$ be an $n\times m$ real-valued matrix, denote by $a_1,\ldots,a_m\in\mathbb{R}$ the nonnegative eigenvalues of ${}^tAA$, and put $|A|_p:=(\sum_{i=1}^m a_i^{p/2})^{1/p}$. Moreover, we define an energy $\mathcal{E}_{p,q}$ by \begin{align*}
\mathcal{E}_{p,q}(f):=\int_X|df|_p^qd\mu_g, \end{align*} so that $\mathcal{E}_p=\mathcal{E}_{2,p}$. \begin{thm}\label{thm main2} Let $(X,g)$ be a compact quaternionic K\"ahler manifold of dimension $4n\ge 8$. Then the identity map of $X$ minimizes $\mathcal{E}_{4,4}$ in its homotopy class. \end{thm} We can also prove similar theorems for other holonomy groups. If $(X,g)$ is a compact $G_2$ manifold, then $1_X$ minimizes $\mathcal{E}_{3,3}|_{[1_X]}$, and if $(X,g)$ is a compact $Spin(7)$ manifold, then $1_X$ minimizes $\mathcal{E}_{4,4}|_{[1_X]}$ (see Theorem \ref{thm id calib}). Moreover, it is easy to see from H\"older's inequality that if the identity map minimizes $\mathcal{E}_{p,q}$, then it also minimizes $\mathcal{E}_{p',q'}$ for all $p'\ge p$ and $q'\ge q$. One could also consider the K\"ahler, Calabi-Yau and hyper-K\"ahler cases, respectively; however, the results in these cases also follow from \cite{Lichnerowicz1969}.
This paper is organized as follows. In Section \ref{sec energy of map}, we define the notion of calibrated maps, in analogy with calibrated submanifolds. In Section \ref{sec ex}, we give some examples of calibrated maps. In particular, we show that holomorphic maps between K\"ahler manifolds and the inclusion maps of calibrated submanifolds can be regarded as calibrated maps. Moreover, we show that fibrations whose regular fibers are calibrated submanifolds are also calibrated maps. We prove Theorem \ref{thm main1} in Section \ref{sec lower bdd}, and Theorem \ref{thm main2} in Section \ref{sec id map}. In Section \ref{sec intersection}, we compare the homotopy invariant introduced in \cite{CrokeFathi1990} with the invariants defined in this paper.
\paragraph{\bf Acknowledgment} I would like to thank Professor Frank Morgan for his advice on this paper. This work was supported by JSPS KAKENHI Grant Numbers JP19K03474, JP20H01799.
\section{Calibrated maps}\label{sec energy of map}
Let $X,Y$ be smooth manifolds with $\dim X=m$ and $\dim Y=n$. Throughout this paper, we suppose that $X$ is compact and oriented. We fix a volume form ${\rm vol}\in\Omega^m(X)$ on $X$, namely, a nowhere vanishing $m$-form which determines an orientation and a measure on $X$. For $m$-forms $v_1,v_2\in\Omega^m(X)$, there are $\varphi_i\in C^\infty(X)$ with $v_i=\varphi_i{\rm vol}$. Then we write $v_1\le v_2$ if $\varphi_1(x)\le \varphi_2(x)$ for all $x\in X$.
If a map $\sigma\colon C^\infty(X,Y)\to L^1(X)$ is given, then we can define an energy $\mathcal{E}\colon C^\infty(X,Y)\to \mathbb{R}$ by \begin{align*} \mathcal{E}(f):=\int_X \sigma(f){\rm vol}. \end{align*}
Now, $f_0,f_1\in C^\infty(X,Y)$ are said to be {\it homotopic} if there is a smooth map $F\colon [0,1]\times X\to Y$ such that $F(0,\cdot)=f_0$ and $F(1,\cdot)=f_1$. By the Whitney approximation theorem, this is equivalent to the existence of a continuous homotopy joining $f_0$ and $f_1$. For $f\in C^\infty(X,Y)$, denote by $[f]\subset C^\infty(X,Y)$ the homotopy class represented by $f$.
In this paper we consider lower bounds for, and the minimum of, $\mathcal{E}|_{[f]}$.
Denote by $1_X\colon X\to X$ the identity map on $X$. We define a smooth map $(1_X,f)\colon X\to X\times Y$ by \begin{align*} (1_X,f)(x):=(x,f(x)). \end{align*}
The next definition is an analogue of the one in \cite{harvey1982calibrated}. \begin{definition} \normalfont $\Phi \in\Omega^m(X\times Y)$ is a {\it $\sigma$-calibration} if $d\Phi=0$ and \begin{align*} (1_X,f)^*\Phi\le \sigma(f){\rm vol} \end{align*} for any smooth map $f\colon X\to Y$. Moreover, $f$ is a {\it $(\sigma,\Phi)$-calibrated map} if \begin{align*} (1_X,f)^*\Phi = \sigma(f){\rm vol}. \end{align*} \end{definition}
\begin{thm} Let $\sigma$ be an energy density and $\Phi$ be a $\sigma$-calibration. \begin{itemize} \setlength{\parskip}{0cm} \setlength{\itemsep}{0cm}
\item[$({\rm i})$] The quantity $\int_X(1_X,f)^*\Phi$ depends only on the homotopy class $[f]$. In other words, $\int_X(1_X,f_0)^*\Phi=\int_X(1_X,f_1)^*\Phi$ if $[f_0]=[f_1]$.
\item[$({\rm ii})$] We have $\inf\mathcal{E}|_{[f]} \ge \int_X(1_X,f)^*\Phi$ for any $f\in C^\infty(X,Y)$.
\item[$({\rm iii})$] We have $\mathcal{E}(f)=\int_X(1_X,f)^*\Phi$ iff $f$ is a $(\sigma,\Phi)$-calibrated map. In particular, any $(\sigma,\Phi)$-calibrated map minimizes $\mathcal{E}$ in its homotopy class. \end{itemize} \end{thm} \begin{proof} $({\rm i})$ If $f_0,f_1$ are homotopic, then $(1_X,f_0)$ and $(1_X,f_1)$ are homotopic, and accordingly $(1_X,f_0)^*\Phi $ and $(1_X,f_1)^*\Phi$ represent the same cohomology class by \cite[Corollary 4.1.2]{BT1982}.
$({\rm ii})$ follows from the definition of a $\sigma$-calibration together with $({\rm i})$.
$({\rm iii})$ By the point-wise inequality $(1_X,f)^*\Phi\le \sigma(f){\rm vol}$, we have $\mathcal{E}(f)=\int_X(1_X,f)^*\Phi$ iff $(1_X,f)^*\Phi= \sigma(f){\rm vol}$. \end{proof}
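For clarity, the argument for $({\rm ii})$ unwinds as follows: for any $f'\in[f]$,
\[
\mathcal{E}(f')=\int_X \sigma(f'){\rm vol}\ \ge\ \int_X(1_X,f')^*\Phi\ =\ \int_X(1_X,f)^*\Phi\,,
\]
where the inequality is the defining property of a $\sigma$-calibration and the equality is $({\rm i})$.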
\section{Examples}\label{sec ex}
A typical example of an energy of maps is the $p$-energy, defined for smooth maps between Riemannian manifolds. Let $(X,g)$ and $(Y,h)$ be Riemannian manifolds and $f\colon X\to Y$ be a smooth map. Since the pullback $f^*h$ is a section of $T^*X\otimes T^*X$, we can take the trace ${\rm tr}_g(f^*h)$. For $p\ge 1$, put $\sigma_p(f):=\{ {\rm tr}_g(f^*h)\}^{p/2}$. We assume that $X$ is oriented and denote by ${\rm vol}_g$ the volume form of $g$. The $p$-energy $\mathcal{E}_p(f)$ is defined by \begin{align*} \mathcal{E}_p(f):=\int_X\sigma_p(f){\rm vol}_g. \end{align*}
Now, the differential $df_x$ is an element of $T^*_xX\otimes T_{f(x)}Y$ for every $x\in X$. Since $g_x$ and $h_{f(x)}$ induce a natural inner product and norm on $T^*_xX\otimes T_{f(x)}Y$, we may also write $\sigma_p(f)(x)=|df_x|^p$.
By the H\"older's inequality, we have \begin{align*} \mathcal{E}_p(f)&\le {\rm vol}_g(X)^{1-p/q}\mathcal{E}_q(f)^{p/q} \end{align*} for $1\le p\le q$. Thus we have the following proposition. \begin{prop} Let $\Phi\in\Omega^m(X\times Y)$ be a $\sigma_p$-calibration. Then \begin{align*} {\rm vol}_g(X)^{-1+p/q}\int_X(1_X,f)^*\Phi\le \mathcal{E}_q(f)^{p/q} \end{align*} for any $q\ge p$ and $f\in C^\infty(X,Y)$. \label{prop holder} \end{prop}
\subsection{Holomorphic maps} Here, assume that $X,Y$ are complex manifolds and $g,h$ are K\"ahler metrics. Let $m=\dim_\mathbb{C} X$ and $n=\dim_\mathbb{C} Y$. Then we have the decomposition \begin{align*} T^*X\otimes \mathbb{C}&=\Lambda^{1,0}T^*X\oplus\Lambda^{0,1}T^*X,\\ TY\otimes \mathbb{C}&=T^{1,0}Y\oplus T^{0,1}Y, \end{align*} and accordingly the derivative $df\in \Gamma(T^*X\otimes f^*TY)$ is decomposed into \begin{align*} df&=(\partial f)^{1,0}+(\partial f)^{0,1}+(\overline{\partial} f)^{1,0}+(\overline{\partial} f)^{0,1}\\ &\in (\Lambda^{1,0}T^*X\otimes T^{1,0}Y) \oplus (\Lambda^{1,0}T^*X\otimes T^{0,1}Y)\\ &\quad\quad\oplus (\Lambda^{0,1}T^*X\otimes T^{1,0}Y) \oplus (\Lambda^{0,1}T^*X\otimes T^{0,1}Y). \end{align*} Since $df$ is real, we have \begin{align*} \overline{(\partial f)^{1,0}}=(\overline{\partial} f)^{0,1}, \quad\overline{(\partial f)^{0,1}}=(\overline{\partial} f)^{1,0}. \end{align*} Denote by $\omega_g,\omega_h$ the K\"ahler forms of $g$ and $h$, respectively; then the volume form is given by ${\rm vol}_g=\frac{1}{m!}\omega_g^m$. The following observation was given by Lichnerowicz. \begin{thm}[{\cite{Lichnerowicz1969}}] For any smooth map $f\colon X\to Y$, we have \begin{align*} \omega_g^{m-1}\wedge f^*\omega_h
&=(m-1)!(|(\partial f)^{1,0}|^2-|(\overline{\partial} f)^{1,0}|^2){\rm vol}_g,\\
|df|^2&=2|(\partial f)^{1,0}|^2+2|(\overline{\partial} f)^{1,0}|^2. \end{align*} In particular, we have \begin{align*} \mathcal{E}_2(f)\ge \frac{2}{(m-1)!}\int_X\omega_g^{m-1}\wedge f^*\omega_h \end{align*} and the equality holds iff $f$ is holomorphic. \label{thm Lich} \end{thm} Now, we consider $\omega_g^{m-1}\wedge \omega_h\in \Omega^m(X\times Y)$. The first two equalities in Theorem \ref{thm Lich} imply that $\frac{2}{(m-1)!}\omega_g^{m-1}\wedge \omega_h$ is a $\sigma_2$-calibration. Moreover, the second statement implies that $f$ is a $(\sigma_2,\frac{2}{(m-1)!}\omega_g^{m-1}\wedge \omega_h)$-calibrated map iff $f$ is holomorphic. One can also see that $f$ is a $(\sigma_2,-\frac{2}{(m-1)!}\omega_g^{m-1}\wedge \omega_h)$-calibrated map iff $f$ is anti-holomorphic.
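For example, when $m=n=1$ and $X,Y$ are compact Riemann surfaces, we have $\omega_g^{m-1}=\omega_g^0=1$ and $\int_Xf^*\omega_h=\deg(f)\int_Y\omega_h$, so Theorem \ref{thm Lich} gives \begin{align*} \mathcal{E}_2(f)\ge 2\deg(f)\,{\rm vol}_h(Y), \end{align*} with equality iff $f$ is holomorphic.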
\subsection{Calibrated submanifolds} In this subsection, we describe the relation between calibrated submanifolds in the sense of \cite{harvey1982calibrated} and calibrated maps. Let $(Y^n,h)$ be a Riemannian manifold. \begin{definition}[\cite{harvey1982calibrated}] \normalfont For an integer $0<m<n$, $\psi \in\Omega^m(Y)$ is a {\it calibration} if $d\psi=0$ and \begin{align*}
\psi|_V\le {\rm vol}_{h|_V} \end{align*} for any $y\in Y$ and $m$-dimensional oriented subspace $V\subset T_yY$.
Here, $h|_V$ is the induced metric on $V$ and
${\rm vol}_{h|_V}$ is its volume form whose orientation is compatible with the one equipped with $V$. Moreover, an oriented submanifold $X\subset Y$ is a {\it calibrated submanifold} if \begin{align*}
\psi|_{T_xX}={\rm vol}_{h|_{T_xX}} \end{align*} for any $x\in X$. \label{def HL cal} \end{definition}
Now, if $X$ is an oriented manifold with a volume form ${\rm vol}\in\Omega^m(X)$, then every linear map $A\colon T_xX\to T_yY$ can be regarded as an $n\times m$-matrix by taking a basis $e_1,\ldots,e_m$ of $T_xX$ with ${\rm vol}_x(e_1,\ldots,e_m)=1$ and an orthonormal basis of $T_yY$. Then $\sqrt{\det( {}^tA\cdot A)}$ does not depend on the choice of these bases. Therefore, for $f\in C^\infty(X,Y)$, we can define the energy density $\tau_m(f)(x):=\sqrt{\det({}^tdf_x\cdot df_x)}$ and the energy $\mathcal{E}_{\tau_m}(f):=\int_X\tau_m(f){\rm vol}$.
\begin{prop} Let $(X,{\rm vol})$ be an oriented manifold equipped with a volume form and $\psi\in\Omega^m(Y)$ be closed. Assume that $\dim_\mathbb{R} X=m<n$ and denote by $\pi_Y\colon X\times Y\to Y$ the natural projection. Then $\psi$ is a calibration iff $\pi_Y^*\psi\in\Omega^m(X\times Y)$ is a $\tau_m$-calibration. Moreover, for any embedding $f\colon X\to Y$, the following conditions are equivalent. \begin{itemize} \setlength{\parskip}{0cm} \setlength{\itemsep}{0cm}
\item[$({\rm i})$]
$f(X)$ is a calibrated submanifold, where the orientation of $f(X)$ is determined such that $f$ preserves the orientation.
\item[$({\rm ii})$] $f$ is a $(\tau_m,\pi_Y^*\psi)$-calibrated map. \end{itemize} \end{prop} \begin{proof} Note that $(1_X,f)^*(\pi_Y^*\psi)=f^*\psi$ and $\tau_m(f){\rm vol}={\rm vol}_{f^*h}$. Hence $\psi$ is a calibration iff $\pi_Y^*\psi\in\Omega^m(X\times Y)$ is a $\tau_m$-calibration. Moreover, if $f$ is an embedding, then $f$ is a $(\tau_m,\pi_Y^*\psi)$-calibrated map iff $f(X)$ is a calibrated submanifold. \end{proof}
\subsection{Fibrations} Let $(X^m,g)$ be an oriented Riemannian manifold and $Y^n$ be a smooth manifold equipped with a volume form ${\rm vol}_Y\in\Omega^n(Y)$. Here, we suppose $n< m$ and let $\varphi\in\Omega^{m-n}(X)$ be a calibration in the sense of Definition \ref{def HL cal}. Fixing an orthonormal basis of $T_xX$ and a basis $e_1',\ldots,e_n'\in T_yY$ with ${\rm vol}_Y(e_1',\ldots,e_n')=1$, we can regard a linear map $A\colon T_xX\to T_yY$ as an $n\times m$-matrix. Then the value of $\sqrt{\det(A\cdot{}^tA)}$ does not depend on the choice of the above bases. For a smooth map $f\colon X\to Y$, put
$\tilde{\tau}_{m,n}(f)|_x:=\sqrt{\det(df_x\cdot{}^tdf_x)}$ and $\Phi:={\rm vol}_Y\wedge \varphi$.
Put \begin{align*}
X_{\rm reg}:=\{ x\in X|\, x\mbox{ is a regular point of }f\}. \end{align*} Note that $X_{\rm reg}$ is open in $X$. If $x\in X_{\rm reg}$,
we have the orthogonal decomposition $T_xX={\rm Ker}(df_x)\oplus H$, where $H$ is the orthogonal complement of ${\rm Ker}(df_x)$, and $df_x|_H\colon H\to T_{f(x)}Y$ is a linear isomorphism. Put $y=f(x)$ and suppose that $f^{-1}(y)$ is a calibrated submanifold with respect to a suitable orientation. We say that $df_x$ is {\it orientation preserving} if there is a basis $v_1,\ldots,v_m$ of $T_xX$ such that \begin{align*} v_1,\ldots,v_n&\in H, \quad {\rm vol}_Y(df_x(v_1),\ldots,df_x(v_n))>0,\\ v_{n+1},\ldots,v_m&\in {\rm Ker}(df_x),\quad \varphi_x(v_{n+1},\ldots,v_m)>0,\\ {\rm vol}_g(v_1,\ldots,v_m)&>0. \end{align*} \begin{prop} $\Phi$ is a $\tilde{\tau}_{m,n}$-calibration. Moreover, a smooth map $f\colon X\to Y$ is a $(\tilde{\tau}_{m,n},\Phi)$-calibrated map iff \begin{itemize} \setlength{\parskip}{0cm} \setlength{\itemsep}{0cm}
\item[$({\rm i})$] $f^{-1}(y)\cap X_{\rm reg}$ is a calibrated submanifold with respect to $\varphi$ and the suitable orientation for any $y\in Y$,
\item[$({\rm ii})$] $df_x$ is orientation preserving for any $x\in X_{\rm reg}$. \end{itemize} \end{prop}
\begin{proof} If $x\in X$ is a critical point of $f$, then we can see \begin{align*}
f^*{\rm vol}_Y\wedge \varphi|_x=\tilde{\tau}_{m,n}(f){\rm vol}_g|_x=0. \end{align*} Fix a regular point $x$ and an oriented orthonormal basis $e_1,\ldots,e_m\in T_xX$ such that $e_{n+1},\ldots,e_m\in {\rm Ker}(df_x)$. Then we have \begin{align*} f^*{\rm vol}_Y\wedge \varphi(e_1,\ldots,e_m) &={\rm vol}_Y(df_x(e_1),\ldots,df_x(e_n)) \varphi(e_{n+1},\ldots,e_m),\\
\tilde{\tau}_{m,n}(f)|_x&=
\left| {\rm vol}_Y(df_x(e_1),\ldots,df_x(e_n))\right|. \end{align*} Since $\varphi$ is a calibration, we have $\varphi(\pm e_{n+1},e_{n+2},\ldots,e_m)\le 1$,
hence $|\varphi(e_{n+1},\ldots,e_m)|\le 1$. Therefore, \begin{align*} f^*{\rm vol}_Y\wedge \varphi(e_1,\ldots,e_m)
&\le \left| {\rm vol}_Y(df_x(e_1),\ldots,df_x(e_n))\right|
= \tilde{\tau}_{m,n}(f)|_x, \end{align*} which implies that $\Phi$ is a $\tilde{\tau}_{m,n}$-calibration.
Next we consider the condition \begin{align*}
f^*{\rm vol}_Y\wedge \varphi|_x=\tilde{\tau}_{m,n}(f){\rm vol}_g|_x, \end{align*} where $x$ is a regular point of $f$. In this case we have the orthogonal decomposition $T_xX={\rm Ker}(df_x)\oplus H$, where $H$ is an $n$-dimensional subspace. We can take an orthonormal basis $e_1,\ldots,e_m\in T_xX$ such that \begin{align*} e_1,\ldots,e_n&\in H,\\ e_{n+1},\ldots,e_m&\in {\rm Ker}(df_x),\\ a:={\rm vol}_Y(df_x(e_1),\ldots,df_x(e_n))&>0,\\ {\rm vol}_g(e_1,\ldots,e_m)&>0. \end{align*} Then we have \begin{align*} f^*{\rm vol}_Y\wedge \varphi(e_1,\ldots,e_m) &=a \varphi(e_{n+1},\ldots,e_m),\\
\tilde{\tau}_{m,n}(f){\rm vol}_g(e_1,\ldots,e_m)&=|a|=a. \end{align*} Therefore, we have \begin{align*}
f^*{\rm vol}_Y\wedge \varphi|_x
&=\tilde{\tau}_{m,n}(f){\rm vol}_g|_x \end{align*} iff $\varphi(e_{n+1},\ldots,e_m)=1$. Since $x\in X_{\rm reg}$ was taken arbitrarily, we have \begin{align*} f^*{\rm vol}_Y\wedge \varphi &=\tilde{\tau}_{m,n}(f){\rm vol}_g \end{align*} iff $f^{-1}(y)\cap X_{\rm reg}$ is a calibrated submanifold for any $y\in Y$ and $df_x$ is orientation preserving for all $x\in X_{\rm reg}$. \end{proof}
\subsection{Totally geodesic maps between tori}\label{subsec tori} Let $\mathbb{T}^n=\mathbb{R}^n/\mathbb{Z}^n$ be the $n$-dimensional torus and consider smooth maps from $\mathbb{T}^m$ to $\mathbb{T}^n$. Let $G=(g_{ij})\in M_m(\mathbb{R})$ and $H=(h_{ij})\in M_n(\mathbb{R})$ be symmetric positive definite matrices. Denote by $x=(x^1,\ldots,x^m)$ and $y=(y^1,\ldots,y^n)$ the Cartesian coordinates on $\mathbb{R}^m$ and $\mathbb{R}^n$, respectively; then we have closed $1$-forms $dx^i\in\Omega^1(\mathbb{T}^m)$ and $dy^i\in\Omega^1(\mathbb{T}^n)$. We define the flat Riemannian metrics $g=\sum_{i,j}g_{ij}dx^i\otimes dx^j$ on $\mathbb{T}^m$ and $h=\sum_{i,j}h_{ij}dy^i\otimes dy^j$ on $\mathbb{T}^n$.
For a smooth map $f\colon \mathbb{T}^m\to \mathbb{T}^n$, we have the pullback $f^*\colon H^1(\mathbb{T}^n,\mathbb{R})\to H^1(\mathbb{T}^m,\mathbb{R})$. Here, since \begin{align*} H^1(\mathbb{T}^m,\mathbb{Z})&={\rm span}_\mathbb{Z}\{ [dx^1],\ldots,[dx^m]\},\\ H^1(\mathbb{T}^n,\mathbb{Z})&={\rm span}_\mathbb{Z}\{ [dy^1],\ldots,[dy^n]\}, \end{align*} there is $P=(P_i^j)\in M_{m,n}(\mathbb{Z})$ such that $f^*[dy^j]=\sum_i P_i^j[dx^i]$. The matrix $P$ is determined by the homotopy class of $f$. Now, let $*_g$ be the Hodge star operator of $g$ and put \begin{align} \Phi:=\sum_{i,j,k}h_{jk} P_i^j *_gdx^i\wedge dy^k \in \Omega^m(\mathbb{T}^m\times \mathbb{T}^n).\label{eq phi on torus} \end{align} Then we can check that \begin{align*} \int_{\mathbb{T}^m}(1_{\mathbb{T}^m},f)^*\Phi &=\sum_{i,j,k,l}h_{jk} P_i^jP_l^k\int_{\mathbb{T}^m} *_gdx^i\wedge dx^l\\ &=\sum_{i,j,k,l}h_{jk} P_i^jP_l^k g^{il}{\rm vol}_g(\mathbb{T}^m)\\
&={\rm tr}({}^tPG^{-1}PH){\rm vol}_g(\mathbb{T}^m)=:\| P\|^2{\rm vol}_g(\mathbb{T}^m)\ge 0. \end{align*} Consequently, by the positivity of $G^{-1}$ and $H$, $\int_{\mathbb{T}^m}(1_{\mathbb{T}^m},f)^*\Phi=0$ iff $P=0$. \begin{prop} Assume that $f^*\colon H^1(\mathbb{T}^n,\mathbb{R})\to H^1(\mathbb{T}^m,\mathbb{R})$ is not the zero map.
$\| P\|^{-1}\Phi$ is a $\sigma_1$-calibration and
$f$ is a $(\sigma_1,\| P\|^{-1}\Phi)$-calibrated map if $f(x)={}^tPx+a$ for some $a\in \mathbb{T}^n$. Moreover, $f$ minimizes $\mathcal{E}_2$ in its homotopy class iff $f(x)={}^tPx+a$ for some $a\in \mathbb{T}^n$. \label{prop lower torus} \end{prop} \begin{proof} We fix $x\in\mathbb{T}^m$ and put $df_x:=A=(A_i^j)\in M_{n,m}(\mathbb{R})$, and show $(1_{\mathbb{T}^m},f)^*\Phi\le \| P\|\sigma_1(f){\rm vol}_g$ at $x$. Since \begin{align*}
(1_{\mathbb{T}^m},f)^*\Phi|_x
&=\sum_{i,j,k,l}h_{jk} P_i^jA_l^k *_gdx^i\wedge dx^l|_x\\
&=\left(\sum_{i,j,k,l}h_{jk} P_i^jA_l^k g^{il}\right){\rm vol}_g|_x\\
&=\left({\rm tr}({}^tPG^{-1}\,{}^tAH)\right){\rm vol}_g|_x. \end{align*} Here, $\langle B,C\rangle:={\rm tr}({}^tBG^{-1}CH)$ defines an inner product on $M_{m,n}(\mathbb{R})$ since $G^{-1}$ and $H$ are positive definite, and $\|{}^tA\|=\sigma_1(f)(x)$. By the Cauchy-Schwarz inequality, we have \begin{align*}
{\rm tr}({}^tPG^{-1}\,{}^tAH)=\langle P,{}^tA\rangle\le \| P\|\,\|{}^tA\|, \end{align*} and the equality holds iff ${}^tA=\lambda P$ for a constant $\lambda\ge 0$. Therefore, we have \begin{align*} (1_{\mathbb{T}^m},f)^*\Phi
&\le \| P\|\sigma_1(f){\rm vol}_g, \end{align*}
which implies that $\| P\|^{-1}\Phi$ is a $\sigma_1$-calibration. Moreover, the equality holds iff $df_x=\lambda_x \cdot {}^t P$ for some $\lambda_x\ge 0$. Therefore, any map of the form $f(x)={}^tPx+a$ with $a\in\mathbb{T}^n$ is
a $(\sigma_1,\| P\|^{-1}\Phi)$-calibrated map.
For any $f\in C^\infty(\mathbb{T}^m,\mathbb{T}^n)$, we have \begin{align*} \int_X(1_{\mathbb{T}^m},f)^*\Phi
&\le \| P\|\int_X\sigma_1(f){\rm vol}_g
\le \| P\|\sqrt{{\rm vol}_g(\mathbb{T}^m)\mathcal{E}_2(f)} \end{align*} by the Cauchy-Schwarz inequality. Moreover, we have the equality \begin{align*}
\int_X(1_{\mathbb{T}^m},f)^*\Phi=\| P\|\sqrt{{\rm vol}_g(\mathbb{T}^m)\mathcal{E}_2(f)} \end{align*} iff $df_x=\lambda_x\cdot {}^tP$ for some $\lambda_x\ge 0$ and $\sigma_1(f)$ is a constant function on $\mathbb{T}^m$.
Since $\sigma_1(f)(x)=\lambda_x\| P\|$, if $\sigma_1(f)$ is constant, then $\lambda_x=\lambda$ is independent of $x$. Hence we may write $f(x)=\lambda \cdot {}^tPx+a$ for some $a\in \mathbb{T}^n$. Moreover, since $f^*=P$ on $H^1(\mathbb{T}^n)$, we have $\lambda=1$. \end{proof}
In the above proposition, we cannot show that
every $(\sigma_1,\| P\|^{-1}\Phi)$-calibrated map is given by $f(x)={}^tPx+a$ for some $a$; indeed, the following is a counterexample.
Suppose $m=n=1$ and let $P=1$. If we put $f(x)=x+\frac{1}{2\pi}\sin (2\pi x)$, then it gives a smooth map $\mathbb{T}^1\to \mathbb{T}^1$ homotopic to the identity map. One can easily check that
$f$ is a $(\sigma_1,\| P\|^{-1}\Phi)$-calibrated map since $f'(x)=1+\cos(2\pi x)\ge 0$.
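To illustrate the invariant $\| P\|$, consider the simple case $m=n=2$ with the standard square tori $G=H=1_2$ and, say, \begin{align*} P=\begin{pmatrix} 1 & 0\\ 0 & 2 \end{pmatrix},\qquad \| P\|^2={\rm tr}({}^tPG^{-1}PH)={\rm tr}({}^tPP)=1+4=5. \end{align*} Then $\int_{\mathbb{T}^2}(1_{\mathbb{T}^2},f)^*\Phi=5\,{\rm vol}_g(\mathbb{T}^2)$ for every $f$ in the homotopy class, and the energy-minimizing representatives are the affine maps $f(x)={}^tPx+a$.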
\section{The lower bound of $p$-energy}\label{sec lower bdd} In this section, we give a lower bound for the $p$-energy in a general situation. Let $(X,g)$ and $(Y,h)$ be Riemannian manifolds and assume $X$ is compact and oriented. Now we have the decomposition \begin{align*} \Lambda^kT^*_{(x,y)}(X\times Y) \cong \bigoplus_{l=0}^k\Lambda^lT^*_xX\otimes \Lambda^{k-l}T^*_yY, \end{align*} and we denote by $\Omega^{l,k-l}(X\times Y)\subset \Omega^k(X\times Y)$ the set consisting of smooth sections of $\Lambda^lT^*_xX\otimes \Lambda^{k-l}T^*_yY$. For $\Phi\in \Omega^k(X\times Y)$, let
$|\Phi_{(x,y)}|$ be the norm with respect to the metric $g\oplus h$ on $X\times Y$. \begin{lem} Let $\Phi\in \Omega^{m-k,k}(X\times Y)$ be closed and
$\sup_{x,y}|\Phi_{(x,y)}|<\infty$. Then there is a constant $C>0$ depending only on $\Phi,m,n,k$ such that $C\Phi$ is a $\sigma_k$-calibration. \label{lem cal k} \end{lem} \begin{proof} Let $f\colon X\to Y$ be a smooth map. Fix $x\in X$ and let $\{ e_1,\ldots,e_m\}$ and $\{ e_1',\ldots,e_n'\}$ be orthonormal bases of $T_xX$ and $T_{f(x)}Y$, respectively. Put \begin{align*} \mathcal{I}_k^m
:=\left\{ I=(i_1,\ldots,i_k)\in\mathbb{Z}^k\left|\, 1\le i_1<\cdots<i_k\le m\right.\right\}. \end{align*} For $I=(i_1,\ldots,i_k)\in \mathcal{I}_k^m$, $J=(j_1,\ldots,j_k)\in\mathcal{I}_k^n$, we write \begin{align*} e_I:=e_{i_1}\wedge\cdots\wedge e_{i_k},\quad e_J':=e_{j_1}'\wedge\cdots\wedge e_{j_k}'. \end{align*} Then we have \begin{align*} \Phi_{(x,f(x))}=\sum_{I\in\mathcal{I}_k^m,J\in\mathcal{I}_k^n}\Phi_{IJ}(*_ge_I)\wedge e'_J \end{align*} for some $\Phi_{IJ}\in\mathbb{R}$ and \begin{align*} \left\{ (1_X,f)^*\Phi\right\}_x &=\sum_{I,J}\Phi_{IJ}(*_ge_I)\wedge df_x^*e'_J. \end{align*} If we denote by $(df_x)_{IJ}$ the $k\times k$ matrix whose $(p,q)$-component is given by $h(df_x(e_{i_q}), e'_{j_p})$, then we have \begin{align*} (*_ge_I)\wedge df_x^*e'_J
&=\det((df_x)_{IJ}){\rm vol}_g|_x\le k!|df_x|^k{\rm vol}_g|_x, \end{align*} therefore, we can see \begin{align*} \left\{ (1_X,f)^*\Phi\right\}_x
&\le \left(\sum_{I,J}|\Phi_{IJ}|\right) k! |df_x|^k{\rm vol}_g|_x. \end{align*}
Since $|\Phi_{(x,f(x))}|^2=\sum_{I,J}|\Phi_{IJ}|^2$, we have \begin{align*} (1_X,f)^*\Phi
&\le k! (\#\mathcal{I}_k^m)(\#\mathcal{I}_k^n)\sup_{x,y}|\Phi_{(x,y)}| \sigma_k(f){\rm vol}_g, \end{align*} which implies the assertion. \end{proof}
For $f\in C^\infty(X,Y)$, denote by $[f^*]^k$ the pullback $H^k(Y,\mathbb{R})\to H^k(X,\mathbb{R})$ of $f$. For a closed form $\alpha\in\Omega^k(Y)$, denote by $[\alpha]\in H^k(Y,\mathbb{R})$ its cohomology class. Put \begin{align*} H^k_{\rm bdd}(Y,\mathbb{R})
:=\left\{ [\alpha]\in H^k(Y,\mathbb{R})|\, \alpha\in \Omega^k(Y),\, d\alpha=0,\, \sup_{y\in Y}h(\alpha_y,\alpha_y)<\infty\right\}. \end{align*} This is a subspace of $H^k(Y,\mathbb{R})$, and we have $H^k_{\rm bdd}(Y,\mathbb{R})=H^k(Y,\mathbb{R})$ if $Y$ is compact. Denote by $[f^*]^k_{\rm bdd}$ the restriction of $[f^*]^k$ to $H^k_{\rm bdd}(Y,\mathbb{R})$. Fixing a basis of $H^k(X,\mathbb{R})$ and $H^k_{\rm bdd}(Y,\mathbb{R})$, we obtain the matrix $P=P([f^*]^k_{\rm bdd})\in M_{N,d}(\mathbb{R})$ of $[f^*]^k_{\rm bdd}$, where $d=\dim H^k_{\rm bdd}(Y,\mathbb{R})$ and $N=\dim H^k(X,\mathbb{R})$.
Put $|P|:=\sqrt{{\rm tr}({}^tPP)}$, which may depend on the choice of bases. Here, since $d$ may be infinite,
we may have $|P|=\infty$.
\begin{thm} Let $(X^m,g)$ and $(Y^n,h)$ be Riemannian manifolds and $X$ be compact and oriented. For any $1\le k\le m$, there is a constant $C>0$ depending only on $k$, $(X,g)$, $(Y,h)$ and the bases of $H^k(X,\mathbb{R})$ and $H^k_{\rm bdd}(Y,\mathbb{R})$ such that for any $f\in C^\infty(X,Y)$ we have \begin{align*}
\mathcal{E}_k(f)\ge C|P([f^*]^k_{\rm bdd})|. \end{align*} In particular, if $[f^*]^k_{\rm bdd}$ is a nonzero map,
then the infimum of $\mathcal{E}_k|_{[f]}$ is positive. \label{thm lower bdd of k-energy} \end{thm} \begin{proof} Take bounded closed $k$-forms $\beta_1,\ldots,\beta_d\in \Omega^k(Y)$ such that $\{ [\beta_l]\}_l$ is a basis of $H^k_{\rm bdd}(Y,\mathbb{R})$.
By Hodge theory, $H^k(X,\mathbb{R})$ is isomorphic, as a vector space, to the space of harmonic $k$-forms. Therefore, for any basis of $H^k(X,\mathbb{R})$, there is a corresponding basis $\alpha_1,\ldots,\alpha_N\in\Omega^k(X)$ of the space of harmonic $k$-forms. Let $G_{ij}:=\int_X\alpha_i\wedge *_g\alpha_j$; the matrix $(G_{ij})$ is symmetric positive definite.
Define $P=(P_{ij})\in M_{N,d}(\mathbb{R})$ by $[f^*]^k_{\rm bdd}([\beta_j])=\sum_iP_{ij}[\alpha_i]$. If we put \begin{align*} \Phi:=\sum_{i,j} P_{ij}\beta_j\wedge (*_g\alpha_i), \end{align*} then every $\beta_j\wedge (*_g\alpha_i)$ is closed and satisfies the assumption of Lemma \ref{lem cal k}, since $X$ is compact and $\beta_j$ is bounded. Take the constant $C_{ij}>0$ as in Lemma \ref{lem cal k}. Here, $C_{ij}$ depends only on $m,n,k$ and $\alpha_i,\beta_j$. Then for any $f\in C^\infty(X,Y)$, we have \begin{align*} (1_X,f)^*\left\{ \beta_j\wedge (*_g\alpha_i)\right\} &\le C_{ij}\sigma_k(f){\rm vol}_g,\\
(1_X,f)^*\Phi&\le \sum_{i,j}C_{ij}|P_{ij}|\sigma_k(f){\rm vol}_g\\
&\le \sqrt{\sum_{i,j}C_{ij}^2}|P|\sigma_k(f){\rm vol}_g, \end{align*} hence \begin{align*} \mathcal{E}_k(f)\ge \left(\sum_{i,j}C_{ij}^2\right)^{-1/2}
|P|^{-1}\int_X(1_X,f)^*\Phi. \end{align*} Moreover, we have \begin{align*} \int_X(1_X,f)^*\Phi &= \sum_{i,j} \int_X P_{ij}f^*\beta_j\wedge (*_g\alpha_i)\\ &=\sum_{i,j} \int_X P_{ij}\sum_kP_{kj}\alpha_k\wedge (*_g\alpha_i)\\ &=\sum_{i,j,k} P_{ij}P_{kj}G_{ki}. \end{align*} If we denote by $\lambda>0$ the minimum eigenvalue of $(G_{ij})_{i,j}$, then we have
$\sum_{i,j,k} P_{ij}P_{kj}G_{ki}\ge \lambda|P|^2$. Hence we obtain \begin{align*} \mathcal{E}_k(f)\ge \lambda\left(\sum_{i,j}C_{ij}^2\right)^{-1/2}
|P|. \end{align*} \end{proof}
\begin{rem} \normalfont Combining the above theorem with Proposition \ref{prop holder}, we also have the lower bound of $\mathcal{E}_p$ for any $p\ge k$. \end{rem}
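Explicitly, for $p\ge k$, Proposition \ref{prop holder} gives $\mathcal{E}_k(f)\le {\rm vol}_g(X)^{1-k/p}\mathcal{E}_p(f)^{k/p}$, hence \begin{align*} \mathcal{E}_p(f)\ge {\rm vol}_g(X)^{1-p/k}\mathcal{E}_k(f)^{p/k} \ge {\rm vol}_g(X)^{1-p/k}\left( C|P([f^*]^k_{\rm bdd})|\right)^{p/k}, \end{align*} so the infimum of $\mathcal{E}_p|_{[f]}$ is also positive whenever $[f^*]^k_{\rm bdd}$ is a nonzero map.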
\section{Energy of the identity maps}\label{sec id map} In this section we consider when the identity map on a compact oriented Riemannian manifold $X$ minimizes the energy. Here, we consider a family of energies. For Riemannian manifolds $(X^m,g)$, $(Y^n,h)$ and points $x\in X$, $y\in Y$, take a linear map $A\colon T_xX\to T_yY$. Fixing orthonormal bases of $T_xX$ and $T_yY$, we can regard $A$ as an $n\times m$-matrix. Denote by $a_1,\ldots ,a_m\in\mathbb{R}_{\ge 0}$ the eigenvalues of ${}^tA\cdot A$, then put \begin{align*}
|A|_p:=\left( \sum_{i=1}^m a_i^{p/2}\right)^{1/p} \end{align*}
for $p>0$. Then $|A|_p$ is independent of the choice of the orthonormal bases of $T_xX$ and $T_yY$. For a smooth map $f\colon X\to Y$, let \begin{align*}
\sigma_{p,q}(f)|_x&:=|df_x|_p^q,\\ \mathcal{E}_{p,q}(f)&:=\int_X\sigma_{p,q}(f){\rm vol}_g. \end{align*} Note that $\sigma_{2,p}=\sigma_p$ and $\mathcal{E}_{2,p}=\mathcal{E}_p$.
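For example, if $s_1,\ldots,s_m\ge 0$ denote the singular values of $A$, so that $a_i=s_i^2$, then $|A|_p=\left(\sum_{i=1}^ms_i^p\right)^{1/p}$. In particular, for the identity matrix all $s_i=1$, hence \begin{align*} |1_m|_p=m^{1/p},\qquad \mathcal{E}_{p,q}(1_X)=m^{q/p}{\rm vol}_g(X). \end{align*}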
From now on we consider $(Y,h)=(X,g)$ and a map $f\colon X\to X$. Let $1_X$ be the identity map of $X$. \begin{prop}
If $1_X$ minimizes $\mathcal{E}_{p,q}|_{[1_X]}$,
then it also minimizes $\mathcal{E}_{p',q'}|_{[1_X]}$ for any $p'\ge p$ and $q'\ge q$. \end{prop} \begin{proof} First of all, for any smooth map $f$, we have \begin{align*}
|df_x|_p&\le m^{1/p-1/p'}| df_x|_{p'},\\ \mathcal{E}_{p,q}(f)&\le m^{q/p-q/p'}{\rm vol}_g(X)^{1-q/q'}
\left( \int_X|df|_{p'}^{q'}{\rm vol}_g\right)^{q/q'}, \end{align*} by H\"older's inequality, which gives $\mathcal{E}_{p',q'}(f)\ge C\mathcal{E}_{p,q}(f)^{q'/q}$ for some constant $C>0$ independent of $f$. Moreover, we have the equality for $f=1_X$. Therefore, we can see \begin{align*}
\inf \mathcal{E}_{p',q'}|_{[1_X]}
\ge \inf C\mathcal{E}_{p,q}^{q'/q}|_{[1_X]} = C\mathcal{E}_{p,q}(1_X)^{q'/q}= \mathcal{E}_{p',q'}(1_X)
\ge \inf \mathcal{E}_{p',q'}|_{[1_X]}. \end{align*} \end{proof}
\begin{prop}[{cf.\ \cite[Lemma 2.2]{hoisington2021}}] Let $(X,g)$ be a compact oriented Riemannian manifold of dimension $m$. Then $1_X$ minimizes $\mathcal{E}_{1,m}$ in its homotopy class. \end{prop} \begin{proof} The proof is essentially given by \cite[Lemma 2.2]{hoisington2021}. For any map $f\colon X\to X$, we can see \begin{align*} f^*{\rm vol}_g=\det(df){\rm vol}_g \le m^{-m}\sigma_{1,m}(f){\rm vol}_g. \end{align*} Here, the inequality follows from the AM--GM inequality \begin{align*} \frac{\sum_{i=1}^ma_i}{m}\ge \left( \prod_{i=1}^ma_i\right)^{1/m} \end{align*} for any $a_i\ge 0$, applied to the singular values of $df$. Therefore, we can see \begin{align*} \mathcal{E}_{1,m}(f)\ge m^m\int_Xf^*{\rm vol}_g, \end{align*} and the right-hand side is a homotopy invariant. Moreover, the equality holds if $f=1_X$. \end{proof}
Next we consider an analogue of the above proposition. We assume that $X$ admits a nontrivial parallel $k$-form.
Denote by $g_0$ the standard metric on $\mathbb{R}^m$, which also induces the metric on $\Lambda^k(\mathbb{R}^m)^*$. Let $\varphi_0\in\Lambda^k(\mathbb{R}^m)^*$ and fix an orientation of $\mathbb{R}^m$. For a $k$-form $\varphi$ and a Riemannian metric $g$ on an oriented manifold $X$, we say that {\it $(g_0,\varphi_0)$ is a local model of $(g,\varphi)$} if for any $x\in X$ there is an orientation preserving isometry $I\colon \mathbb{R}^m\to T_xX$
such that $I^*(\varphi|_x)=\varphi_0$.
Denote by $*_{g_0}\colon \Lambda^k(\mathbb{R}^m)^*\to \Lambda^{m-k}(\mathbb{R}^m)^*$ the Hodge star operator induced by the standard metric and let ${\rm vol}_{g_0}\in\Lambda^m(\mathbb{R}^m)^*$ be the volume form. First of all, we show the following proposition for the local model $(g_0,\varphi_0)$. \begin{prop} Let $(g_0,\varphi_0)$ be as above. Assume that
$\left| \iota_u\varphi_0\right|_{g_0}$
is independent of $u\in\mathbb{R}^m$ if $|u|_{g_0}=1$. We have \begin{align*}
A^*\varphi_0\wedge *_{g_0} \varphi_0\le \frac{|\varphi_0|_{g_0}^2}{m}|A|_k^k{\rm vol}_{g_0} \end{align*} for any $A\in M_m(\mathbb{R})$. Moreover, if $A=\lambda T$ for $\lambda\in\mathbb{R}$, $T\in O(m)$ and $A^*\varphi_0=\lambda'\varphi_0$ for some $\lambda'\ge 0$, then we have the equality. \label{prop ptwise ineq} \end{prop} \begin{proof} For any $A$, we can take oriented orthonormal bases $\{ e_1,\ldots,e_m\}$ and $\{ e'_1,\ldots,e'_m\}$ of $(\mathbb{R}^m)^*$ such that $A^*e'_i=a_ie_i$ for some $a_i\in\mathbb{R}$. We put \begin{align*} \varphi_0=\sum_{I\in\mathcal{I}_k^m}F_Ie_I=\sum_{I\in\mathcal{I}_k^m}F'_Ie'_I \end{align*} for some $F_I,F'_I\in\mathbb{R}$. Now, put $a_I:=a_{i_1}\cdots a_{i_k}$ for $I=(i_1,\ldots,i_k)\in\mathcal{I}_k^m$. Then we have $A^*\varphi_0=\sum_IF'_Ia_Ie_I$ and \begin{align*} A^*\varphi_0\wedge *_{g_0}\varphi_0 = g_0( A^*\varphi_0,\varphi_0){\rm vol}_{g_0} &=\sum_IF_IF'_Ia_I{\rm vol}_{g_0}\\
&\le \sum_I|F_IF'_I||a_I|{\rm vol}_{g_0}. \end{align*} If we put $\{ I\}:=\{ i_1,\ldots,i_k\}$, then \begin{align*}
|a_I|=\left( |a_{i_1}|^k\cdots |a_{i_k}|^k\right)^{1/k}
\le \frac{1}{k} \sum_{j\in \{ I\}}|a_j|^k, \end{align*} therefore, we obtain \begin{align*}
\sum_I|F_IF'_I||a_I|
&\le \frac{1}{k}\sum_I |F_IF'_I|\sum_{j\in \{ I\}}|a_j|^k\\
&= \frac{1}{k}\sum_{j=1}^m |a_j|^k\sum_{I\in\mathcal{I}_k^m,j\in\{ I\}} |F_IF'_I|. \end{align*} Denote by $\hat{g}_0\colon (\mathbb{R}^m)^*\to \mathbb{R}^m$ the isomorphism induced by the metric $g_0$. Put \begin{align*}
\varphi_1:=\sum_{I\in\mathcal{I}_k^m}|F_I|e_I,\quad
\varphi_2:=\sum_{I\in\mathcal{I}_k^m}|F'_I|e_I \end{align*} and define an orthogonal matrix $U\colon \mathbb{R}^m\to \mathbb{R}^m$ by $U\circ \hat{g}_0(e_j)=\hat{g}_0(e'_j)$. Now we can see \begin{align*}
\sum_{I\in\mathcal{I}_k^m,j\in\{ I\}} |F_IF'_I| =g_0\left( \iota_{\hat{g}_0(e_j)}\varphi_1,\iota_{\hat{g}_0(e_j)}\varphi_2\right)
&\le \left| \iota_{\hat{g}_0(e_j)}\varphi_1\right|_{g_0}
\cdot \left| \iota_{\hat{g}_0(e_j)}\varphi_2\right|_{g_0}\\
&=\left| \iota_{\hat{g}_0(e_j)}\varphi_0\right|_{g_0}
\cdot \left| \iota_{\hat{g}_0(e_j)}(U^*\varphi_0)\right|_{g_0} \end{align*} and \begin{align*}
\left| \iota_{\hat{g}_0(e_j)}(U^*\varphi_0)\right|_{g_0}
=\left| U^*(\iota_{U\circ \hat{g}_0(e_j)}\varphi_0)\right|_{g_0}
=\left| \iota_{U\circ\hat{g}_0(e_j)}\varphi_0\right|_{g_0}. \end{align*} Then by the assumption,
we can see that $C=\left| \iota_{\hat{g}_0(e_j)}\varphi_0\right|_{g_0}=\left| \iota_{U\circ\hat{g}_0(e_j)}\varphi_0\right|_{g_0}$ is independent of $j$, therefore, we have
$\sum_{I\in\mathcal{I}_k^m,j\in\{ I\}} |F_IF'_I|\le C^2$ and \begin{align*} A^*\varphi_0\wedge *_{g_0}\varphi_0
\le \frac{C^2}{k}\sum_{j=1}^m |a_j|^k{\rm vol}_{g_0}=\frac{C^2}{k}|A|_k^k{\rm vol}_{g_0}. \end{align*} For $A=1_m$ all the above inequalities become equalities, and comparing with $1_m^*\varphi_0\wedge *_{g_0}\varphi_0=|\varphi_0|_{g_0}^2{\rm vol}_{g_0}$ gives $C^2/k=|\varphi_0|_{g_0}^2/m$, which determines the constant $C$. Moreover, we can also check that the equality holds if $A=\lambda T$, where $\lambda\in\mathbb{R}$, $T\in O(m)$ and $A^*\varphi_0=\lambda'\varphi_0$ for some $\lambda'\ge 0$. \end{proof}
\begin{prop} Let $(X^m,g)$ be a compact oriented Riemannian manifold and $\varphi\in \Omega^k(X)$ be a harmonic form. Assume that there is a local model $(g_0,\varphi_0)$
of $(g,\varphi)$ and $|\iota_u\varphi_0|_{g_0}$
is independent of $u\in\mathbb{R}^m$ if $|u|_{g_0}=1$. Denote by ${\rm pr}_i\colon X\times X\to X$ the projection to the $i$-th component for $i=1,2$.
Then $\Phi=m|\varphi_0|_{g_0}^{-2}{\rm pr}_2^*\varphi\wedge {\rm pr}_1^*(*_g\varphi)$ is a $\sigma_{k,k}$-calibration. Moreover, any isometry $f\colon X\to X$ with $f^*\varphi=\varphi$ is $(\sigma_{k,k},\Phi)$-calibrated. \label{prop id calib} \end{prop} \begin{proof} Note first that $\Phi$ is closed, since $\varphi$ is harmonic and hence both $\varphi$ and $*_g\varphi$ are closed. Now $\Phi$ is a $\sigma_{k,k}$-calibration iff for every smooth $f$ we have \begin{align*} f^*\varphi\wedge *_g\varphi
\le \frac{|\varphi_0|_{g_0}^2}{m}|df|_k^k{\rm vol}_g. \end{align*} By putting $A=df_x$ and identifying $\mathbb{R}^m\cong T_xX$, this is equivalent to the inequality in Proposition \ref{prop ptwise ineq}. Moreover, the equality holds if
$(df_x)^*(\varphi|_{f(x)}) = \varphi|_x$ for all $x\in X$ and $df_x$ is an isometry. \end{proof}
Next we consider when the assumption on $(g_0,\varphi_0)$ is satisfied. If $G\subset SO(m)$ is a closed subgroup, then the linear action of $SO(m)$ on $\mathbb{R}^m$ restricts to an action of $G$ on $\mathbb{R}^m$. Similarly, since $SO(m)$ acts on $\Lambda^k(\mathbb{R}^m)^*$ for all $k$, $G$ also acts on these spaces. Here, $\mathbb{R}^m$ is {\it irreducible as a $G$-representation} if any subspace $W\subset \mathbb{R}^m$ which is closed under the $G$-action is equal to $\mathbb{R}^m$ or $\{ 0\}$. For $\varphi_0\in\Lambda^k(\mathbb{R}^m)^*$, denote by ${\rm Stab}(\varphi_0)\subset SO(m)$ the stabilizer of $\varphi_0$.
\begin{lem} Let $G$ be a closed subgroup of $SO(m)$ and assume that $\mathbb{R}^m$ is irreducible as a $G$-representation. Moreover, assume that $G\subset {\rm Stab}(\varphi_0)$.
Then $|\iota_u\varphi_0|_{g_0}$
is independent of $u\in\mathbb{R}^m$ if $|u|_{g_0}=1$. \label{lem holonomy irred} \end{lem} \begin{proof}
Define a linear map $\Psi\colon \mathbb{R}^m\to \Lambda^{k-1}(\mathbb{R}^m)^*$ by $\Psi(u):=\iota_u\varphi_0$; then $\Psi$ is a $G$-equivariant map since the $G$-action preserves $\varphi_0$. Since the $SO(m)$-action on $\Lambda^{k-1}(\mathbb{R}^m)^*$ preserves the inner product, we can see \begin{align*} g_0( A\Psi(u),A\Psi(v) ) =g_0( \Psi(u),\Psi(v) ) \end{align*} for any $A\in G$ and $u,v\in\mathbb{R}^m$. Moreover, the left-hand-side is equal to $g_0( \Psi( Au),\Psi( Av) )$ since $\Psi$ is $G$-equivariant.
Now, let $e_1,\ldots, e_m$ be the standard orthonormal basis of $\mathbb{R}^m$, and define the symmetric matrix $H=(H_{ij})_{i,j}$ by $H_{ij}:=g_0(\Psi(e_i),\Psi(e_j))$. Then by the above argument we have ${}^tAHA=H$, i.e., $HA=AH$ for all $A\in G$ since ${}^tA=A^{-1}$. Let $\lambda\in \mathbb{R}$ be any eigenvalue of $H$ and denote by $V(\lambda)\subset\mathbb{R}^m$ the eigenspace associated with $\lambda$. Then we can see that $V(\lambda)$ is closed under the $G$-action, hence we have $V(\lambda)=\mathbb{R}^m$ by the irreducibility. Thus $H=\lambda 1_m$, which implies \begin{align*}
|\Psi( u)|_{g_0}^2=\lambda |u|_{g_0}^2 \end{align*} for all $u\in\mathbb{R}^m$; in particular, $|\iota_u\varphi_0|_{g_0}=\sqrt{\lambda}$ whenever $|u|_{g_0}=1$. \end{proof}
Let $(X^m,g)$ be an oriented Riemannian manifold and denote by ${\rm Hol}_g\subset SO(m)$ the holonomy group. We consider tuples $(X,g,\varphi,G,g_0,\varphi_0)$, where $\varphi\in\Omega^k(X)$ is closed, $(g_0,\varphi_0)$ is a local model of $(g,\varphi)$ and $G$ is a closed subgroup of $SO(m)$ such that ${\rm Hol}_g\subset G\subset {\rm Stab}(\varphi_0)$. Table \ref{table holonomy} lists examples.
\begin{table}[h]
\caption{Examples of $(X,g,\varphi,G,g_0,\varphi_0)$}
\centering
\begin{tabular}{lccl} \hline
$(X,g,\varphi)$ & $m$ & $G$ & $k$ \\ \hline \hline
K\"ahler manifold & $2q$ & $U(q)$ & $2$\\ \hline
quaternionic K\"ahler manifold & $4q\ge 8$ & $Sp(q)\cdot Sp(1)$ & $4$\\ \hline $G_2$ manifold & $7$ & $G_2$ & $3$\\ \hline $Spin(7)$ manifold & $8$ & $Spin(7)$ & $4$\\ \hline
\end{tabular}\label{table holonomy} \end{table} We can apply Proposition \ref{prop id calib} and Lemma \ref{lem holonomy irred} to the above cases and obtain the following result. \begin{thm} Let $(X,g,\varphi)$ be an oriented compact Riemannian manifold whose geometric structure is one of Table \ref{table holonomy} and let $\Phi$ be as in Proposition \ref{prop id calib}. Then the identity map $1_X$ is a $(\sigma_{k,k},\Phi)$-calibrated map. In particular, $1_X$ minimizes $\mathcal{E}_{k,k}$ in its homotopy class. \label{thm id calib} \end{thm}
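For example, in the K\"ahler case of Table \ref{table holonomy} we have $k=2$ and $\sigma_{2,2}=\sigma_2$, so Theorem \ref{thm id calib} says that the identity map of a compact K\"ahler manifold minimizes the $2$-energy $\mathcal{E}_2$ in its homotopy class. This is consistent with Theorem \ref{thm Lich}: $1_X$ is holomorphic, and for $q=\dim_\mathbb{C}X$ the lower bound \begin{align*} \mathcal{E}_2(f)\ge \frac{2}{(q-1)!}\int_X\omega_g^{q-1}\wedge f^*\omega_g \end{align*} depends only on the homotopy class of $f$, with equality at $f=1_X$.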
\section{Intersection of smooth maps}\label{sec intersection} In \cite{CrokeFathi1990}, Croke and Fathi introduced a homotopy invariant of a smooth map $f\colon X\to Y$ which gives a lower bound for the $2$-energy $\mathcal{E}_2$. In this section, we compare our invariant with the invariant in \cite{CrokeFathi1990}.
First of all, we review the intersection of smooth maps introduced in \cite{CrokeFathi1990}. Let $(X,g)$ and $(Y,h)$ be Riemannian manifolds and suppose $X$ is compact. Here, we do not assume that $X$ is oriented, and we use the volume measure $\mu_g$ of $g$ instead of the volume form.
Croke and Fathi defined the following quantity \begin{align*} i_f(g,h)=\lim_{t\to \infty}\frac{1}{t}\int_{S_g(X)}\phi_t(v)d {\rm Liou}_g(v) \end{align*} for a smooth map $f\colon X\to Y$ and called it {\it the intersection of $f$}. Here, ${\rm Liou}_g$ is the Liouville measure on the unit tangent bundle $S_g(X)$ and $\phi_t^f(v)=\phi_t(v)$ is the minimum length of all paths in $Y$ homotopic, with fixed endpoints, to \begin{align*} s\mapsto f(\gamma_v(s)),\quad 0\le s\le t, \end{align*} where $\gamma_v$ is the geodesic with initial velocity $\gamma_v'(0)=v\in S_g(X)$.
\begin{thm}[\cite{CrokeFathi1990}] For a smooth map $f\colon X\to Y$, the intersection $i_f(g,h)$ is a homotopy invariant, that is, $i_f(g,h)=i_{f'}(g,h)$ if $[f']=[f]$. Moreover, for any $f$, we have \begin{align*} \int_X \sigma_2(f)d\mu_g \ge\frac{m}{V(S^{m-1})^2\mu_g(X)} i_f(g,h)^2, \end{align*} where $V(S^{m-1})$ is the volume of the unit sphere $S^{m-1}$ in $\mathbb{R}^m$. \label{thm croke fathi} \end{thm} We now introduce a variant of $i_f(g,h)$ and improve the above theorem. We put \begin{align*} j_f(g,h):=\lim_{t\to \infty}\frac{1}{t^2}\int_{S_g(X)}\phi_t(v)^2d {\rm Liou}_g(v). \end{align*}
\begin{thm} For a smooth map $f\colon X\to Y$, $j_f(g,h)$ is a homotopy invariant. Moreover, for any $f$, we have \begin{align*} \int_X \sigma_2(f)d\mu_g \ge\frac{m}{V(S^{m-1})} j_f(g,h), \end{align*} where the equality holds iff the image of every geodesic in $X$ under $f$ minimizes the length in its homotopy class with fixed endpoints. \label{thm croke fathi2} \end{thm} \begin{proof} The proof is parallel to that of Theorem \ref{thm croke fathi}. The homotopy invariance of $j_f(g,h)$ is proved in the same way as for $i_f(g,h)$; see the proof of \cite[Lemma 1.3]{CrokeFathi1990}.
Next we show the inequality. Here we can see \begin{align*} \int_X\sigma_2(f)d\mu_g
&=\frac{m}{V(S^{m-1})}\int_{S_g(X)} | df(v) |_h^2d{\rm Liou}_g(v). \end{align*} For $s\ge 0$, let $g_s\colon S_g(X)\to S_g(X)$ be the geodesic flow. Since $g_s$ preserves the Liouville measure, we can see \begin{align*}
\int_{S_g(X)}|df(v)|_h^2d{\rm Liou}_g(v)
&=\frac{1}{t}\int_{S_g(X)}\left(\int_0^t|df(g_sv)|_h^2 ds\right)d{\rm Liou}_g(v)\\
&=\frac{1}{t}\int_{S_g(X)}\mathcal{E}_2(f\circ\gamma_v|_{[0,t]})d{\rm Liou}_g(v), \end{align*} where $\mathcal{E}_2$ is the $2$-energy of the curves in $(Y,h)$. If $L(c)$ is the length of $c$, then we have \begin{align*} \mathcal{E}_2(c)
=\int_a^b|c'(s)|_h^2ds
&\ge \frac{1}{b-a}\left(\int_a^b|c'(s)|_hds\right)^2 = \frac{1}{b-a}L(c)^2 \ge \frac{1}{b-a}\min L^2, \end{align*} where the minimum is taken over all curves homotopic to $c$ with the same endpoints; therefore
\int_{S_g(X)}|df(v)|_h^2d{\rm Liou}_g(v) &\ge\frac{1}{t^2}\int_{S_g(X)}\phi_t(v)^2d{\rm Liou}_g(v) \end{align*} for any $t>0$. Consequently, we have the second assertion by considering $t\to \infty$. Finally, we consider the condition when \begin{align*}
\int_{S_g(X)}|df(v)|_h^2d{\rm Liou}_g(v) &= \lim_{t\to\infty}\frac{1}{t^2}\int_{S_g(X)}\phi_t(v)^2d{\rm Liou}_g(v) \end{align*} holds. To consider it, we show \begin{align} \lim_{t\to\infty}\frac{1}{t^2}\int_{S_g(X)}\phi_t(v)^2d{\rm Liou}_g(v) =\inf_{t>0}\frac{1}{t^2}\int_{S_g(X)}\phi_t(v)^2d{\rm Liou}_g(v). \label{eq lim inf} \end{align} By \cite[Lemma 1.2]{CrokeFathi1990}, we have \begin{align*} \phi_{t+t'}(v)\le \phi_{t'}(g_tv)+\phi_t(v) \end{align*} for any $t,t'\ge 0$. Then by combining the Cauchy-Schwarz inequality, we have \begin{align*} \int_{S_g(X)}\phi_{t+t'}(v)^2d{\rm Liou}_g(v) &\le \int_{S_g(X)}\phi_{t'}(g_tv)^2d{\rm Liou}_g(v)\\ &\quad\quad +2\sqrt{ \int_{S_g(X)}\phi_{t'}(g_tv)^2d{\rm Liou}_g(v) \int_{S_g(X)}\phi_t(v)^2d{\rm Liou}_g(v) }\\ &\quad\quad +\int_{S_g(X)}\phi_t(v)^2d{\rm Liou}_g(v). \end{align*} Since the Liouville measure is invariant under the geodesic flow, we can see $\int_{S_g(X)}\phi_{t'}(g_tv)^2d{\rm Liou}_g(v) =\int_{S_g(X)}\phi_{t'}(v)^2d{\rm Liou}_g(v)$, hence \begin{align*} \int_{S_g(X)}\phi_{t+t'}(v)^2d{\rm Liou}_g(v) &\le \left( \sqrt{ \int_{S_g(X)}\phi_{t'}(v)^2d{\rm Liou}_g(v)} + \sqrt{ \int_{S_g(X)}\phi_t(v)^2d{\rm Liou}_g(v)} \right)^2. \end{align*}
If we put \begin{align*} P_t:=\sqrt{ \int_{S_g(X)}\phi_t(v)^2d{\rm Liou}_g(v)}, \end{align*} then we have $P_{t+t'}\le P_t+P_{t'}$, hence, by Fekete's subadditive lemma, \begin{align*} \inf_{t>0}\frac{P_t}{t} =\lim_{t\to\infty}\frac{P_t}{t}. \end{align*} Thus we obtain \eqref{eq lim inf}.
Now, suppose \begin{align*} \int_X \sigma_2(f)d\mu_g = \frac{m}{V(S^{m-1})} j_f(g,h). \end{align*} By the above argument,
we can see that $f\circ \gamma_v|_{[0,t]}$ is a geodesic for any $v\in S_g(X)$ and $t>0$,
and $L(f\circ \gamma_v|_{[0,t]})$ gives the minimum of \begin{align*}
\left\{ L(c)|\, c\mbox{ is homotopic with the fixed endpoints to }f\circ \gamma_v|_{[0,t]}\right\}. \end{align*} \end{proof}
\begin{rem} \normalfont By the Cauchy-Schwarz inequality, we have \begin{align*} j_f(g,h)\ge \frac{i_f(g,h)^2}{\mu_g(X)V(S^{m-1})}, \end{align*} therefore, the inequality in Theorem \ref{thm croke fathi2} implies the inequality in Theorem \ref{thm croke fathi}. \end{rem}
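The inequality in this remark is a one-line application of the Cauchy--Schwarz inequality; the following sketch uses that the total Liouville mass of $S_g(X)$ is $V(S^{m-1})\mu_g(X)$, and assumes (as in \cite{CrokeFathi1990}) that $i_f(g,h)=\lim_{t\to\infty}\frac{1}{t}\int_{S_g(X)}\phi_t(v)\,d{\rm Liou}_g(v)$:

```latex
\begin{align*}
\left(\int_{S_g(X)}\phi_t(v)\,d{\rm Liou}_g(v)\right)^2
&\le {\rm Liou}_g\big(S_g(X)\big)\int_{S_g(X)}\phi_t(v)^2\,d{\rm Liou}_g(v)\\
&= V(S^{m-1})\mu_g(X)\int_{S_g(X)}\phi_t(v)^2\,d{\rm Liou}_g(v),
\end{align*}
```

and dividing by $t^2$ and letting $t\to\infty$ yields $i_f(g,h)^2\le V(S^{m-1})\mu_g(X)\, j_f(g,h)$.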
Next we compute $j_f(g,h)$ in the case of flat tori, and compare with the lower bound obtained by Proposition \ref{prop lower torus}. Let $(\mathbb{T}^m,g)$ and $(\mathbb{T}^n,h)$ be as in Subsection \ref{subsec tori}, and take a coordinate $x$ on $\mathbb{T}^m$ and $y$ on $\mathbb{T}^n$ as in Subsection \ref{subsec tori}.
\begin{prop} Let $f\colon \mathbb{T}^m\to \mathbb{T}^n$ be a smooth map such that $f^*([dy^j])=\sum_iP_i^j[dx^i]$ for $P=(P_i^j)\in M_{m,n}(\mathbb{Z})$. If we define $\Phi$ by \eqref{eq phi on torus} in Subsection \ref{subsec tori}, then we have \begin{align*} j_f(g,h)=\frac{V(S^{m-1})}{m}\int_{\mathbb{T}^m}(1_{\mathbb{T}^m},f)^*\Phi. \end{align*} \end{prop} \begin{proof} First of all, we can see that $f$ is homotopic to the map given by $x\mapsto {}^tPx$ for $x\in\mathbb{T}^m$, hence it suffices to show the equality for $f(x)={}^tPx$.
Since the image under $f$ of every geodesic minimizes length in its homotopy class with fixed endpoints, Theorem \ref{thm croke fathi2} gives $\mathcal{E}_2(f)=\frac{m}{V(S^{m-1})}j_f(g,h)$. We can compute $\mathcal{E}_2(f)$ directly as \begin{align*}
\mathcal{E}_2(f)=\int_{\mathbb{T}^m}|df|^2{\rm vol}_g =\sum_{i,j,k,l}h_{ij}P_k^iP_l^jg^{kl}{\rm vol}_g(\mathbb{T}^m)
=\| P\|^2{\rm vol}_g(\mathbb{T}^m). \end{align*} Moreover, by the computation in Subsection \ref{subsec tori}, we have shown that \begin{align*}
\int_{\mathbb{T}^m}(1_{\mathbb{T}^m},f)^*\Phi=\| P\|^2{\rm vol}_g(\mathbb{T}^m). \end{align*} Therefore, \begin{align*} \int_{\mathbb{T}^m}(1_{\mathbb{T}^m},f)^*\Phi
=\| P\|^2{\rm vol}_g(\mathbb{T}^m) =\frac{m}{V(S^{m-1})}j_f(g,h). \end{align*} \end{proof}
\end{document} |
\begin{document}
\thispagestyle{empty}
\title[A finiteness property for Chebyshev polynomials]{A finiteness
property for preperiodic points of Chebyshev polynomials}
\author{Su-Ion Ih} \address{Su-Ion Ih\\ Department of Mathematics \\ University of Colorado at Boulder \\ Campus Box 395 \\ Boulder, CO 80309-0395 \\ USA} \email{ih@math.colorado.edu}
\author{Thomas Tucker} \address{Thomas Tucker\\ Department of Mathematics\\ University of Rochester\\ Rochester, NY 14627} \email{ttucker@math.rochester.edu}
\subjclass[2000]{Primary 11G05, 11G35, 14G05, 37F10, Secondary 11J86, 11J71, 11G50}
\keywords{Chebyshev polynomials, equidistribution, integral points, preperiodic points}
\thanks{The second author was partially supported by NSA
Grant 06G-067.}
\begin{abstract}
Let $K$ be a number field with algebraic closure $\overline K$, let
$S$ be a finite set of places of $K$ containing the archimedean
places, and let $\varphi$ be a Chebyshev polynomial. We prove that
if $\alpha \in \overline K$ is not preperiodic, then there
are only finitely many preperiodic points $\beta \in \overline K$
which are $S$-integral with respect to $\alpha$. \end{abstract} \maketitle
\section{Introduction}
Let $K$ be a number field with algebraic closure $\overline K$, let $S$ be a finite set of places of $K$ containing the archimedean places, and let $\alpha, \beta \in {\overline K}$. We say that $\beta$ is $S$-integral relative to $\alpha$ if no conjugate of $\beta$ meets any conjugate of $\alpha$ at primes lying outside of $S$. More precisely, this means that for any prime $v \notin S$ and any $K$-embeddings $\sigma:K(\alpha) \longrightarrow {\overline K_v}$ and $\tau: K(\alpha) \longrightarrow {\overline K_v}$, we have \begin{equation*} \left\{ \begin{array}{ll}
|\sigma(\beta)-\tau(\alpha)|_v \ge 1 & \text{if $|\tau(\alpha)|_v \le 1$\ ; \text{and} } \\
|\sigma(\beta)|_v \le 1 & \text{if $|\tau(\alpha)|_v > 1$\ .}
\end{array} \right. \end{equation*} Note that this definition extends naturally to the case where $\alpha$
is the point at infinity. We say that $\beta$ is $S$-integral relative to the point at infinity if $|\sigma(\beta)|_v \leq 1$ for all $v \notin S$ and all $K$-embeddings $\sigma:K(\beta) \longrightarrow {\overline
K}_v$. Thus, our $S$-integral points coincide with the usual $S$-integers when $\alpha$ is the point at infinity. \vskip .05 in
In \cite{BIR}, the following conjecture is made. \begin{conj}[Ih]\label{Ih conj}
Let $K$ be a number field, and
let $S$ be a finite set of places of $K$ that contains all the
archimedean places. If $\varphi: \mathbb{P}^1_K \longrightarrow \mathbb{P}^1_K$ is a
nonconstant rational function of degree $d > 1$ and $\alpha \in
\mathbb{P}^1(K)$ is non-preperiodic for $\varphi$, then there are at most
finitely many preperiodic points $\beta \in \mathbb{P}^1(\,\overline{\! K})$ that are
$S$-integral with respect to $\alpha$. \end{conj}
In \cite{BIR}, it is proved that this conjecture holds when $\varphi$ is a multiplication-by-$n$ map (for $n \geq 2$) on $\mathbb{G}_m$ or on an elliptic curve. Recently, Petsche \cite{clay} has proved the conjecture in the case where the point $\alpha$ is in the $v$-adic Fatou set at every place of $K$. A similar problem, dealing with points in inverse images of a single point rather than with preperiodic points, has been treated by Sookdeo \cite{vijay}.
In this paper, we show that this conjecture is true for Chebyshev polynomials. That is, we prove the following, where we note that $\alpha$ may lie on the Julia set.
\begin{thm}\label{main} Let $\varphi$ be a Chebyshev polynomial.
Let $K$ be a number field, and let $S$ be a finite set of places of
$K$, containing all the archimedean places. Suppose that $\alpha \in K$
is not of the form $\zeta + \zeta^{-1}$ for any root of unity $\zeta$.
Then the following set $$ {\mathbb A^1}_{\varphi, \alpha, S} := \{ x \in {\overline {\mathbb Q}} : x \; \textup{{\em{is $S$-integral with respect to}}} \; \alpha \; \textup{{\em{and is}}} \; \textup{${\varphi}$-{\em{preperiodic}}} \} $$ is finite. \end{thm}
This will follow easily from the following theorem.
\begin{thm}\label{comp} Let $( x_n )_{n=1}^\infty$ be a nonrepeating sequence of preperiodic points for a Chebyshev polynomial $\varphi$. Then for any non-preperiodic $\alpha$ in a number field $K$ and any place $v$ of $K$, we have \begin{equation}\label{v} \hat{h}_v(\alpha) = \lim_{n \to \infty} \frac{1}{ [ K(x_n) : K ] } \sum_{\sigma:
K(x_n)/K \hookrightarrow \overline{K}_v } \log | \sigma (x_n) - \alpha
|_v, \end{equation} where $\sigma: K(x_n)/K \hookrightarrow \overline K_v$ means that $\sigma$ is an embedding of $K(x_n)$ into $\overline K_v$, fixing $K$, here and in what follows. \end{thm}
Indeed, we will prove Theorem~\ref{comp} slightly more generally: for any $\alpha \in K$ if $v \nmid \infty$, and for any $\alpha \neq -2$, $0$, or $2$ if $v \mid \infty$. (Note that the proof of Proposition~\ref{most} actually works for any $\alpha \in [-2, 2] - \{-2, 0, 2 \}$.) The proof of Theorem~\ref{main} is then similar to the proof for $\mathbb G_m$ given in \cite{BIR}. Specifically, the proof of Theorem~\ref{comp} breaks down into various cases, depending on whether the place $v$ is finite or infinite and whether or not the point $\alpha$ is in the Julia set at $v$. The fact that the invariant measure for Chebyshev polynomials is not uniform on $[-2,2]$ provides a slight twist.
The proof of Theorem~\ref{comp} is fairly simple when $v$ is nonarchimedean. Likewise, when $v$ is archimedean but $\alpha$ is not in the Julia set at $v$, the proof follows almost immediately from an equidistribution result for continuous functions (see \cite{bilu}). When $v$ is archimedean and $\alpha$ is in the Julia set at $v$, however, the proof becomes quite a bit more difficult. In particular, it is necessary to use A.~Baker's theorem on linear forms in logarithms (see \cite{Baker}). We note that in all cases, our techniques are similar to those of \cite{BIR}.
The derivation of Theorem~\ref{main} from Theorem~\ref{comp} goes as follows: suppose, for contradiction, that Theorem~\ref{main} is false. Then we may further assume that the sequence $(x_n)_{n=1}^\infty$ in Theorem~\ref{comp} is a sequence of points that are $S$-integral with respect to $\alpha$. Then we have
\begin{eqnarray} 0 \ & < & \
\widehat h(\alpha) \nonumber\\ \ & = & \
\sum_{\text{places} \ v \ \text{of} \ K } \widehat h_v (\alpha) \nonumber\\ \ & = & \
\sum_{\text{places} \ v \ \text{of} \ K } \lim_{n \to \infty} \frac{1}{ [ K(x_n) : K ] }
\sum_{\sigma: K(x_n)/K \hookrightarrow \overline{K}_v } \log | \sigma (x_n) - \alpha |_v \nonumber\\ \ & = &
\lim_{n \to \infty} \frac{1}{ [ K(x_n) : K ] } \sum_{\text{places} \ v \ \text{of} \ K } \sum_{\sigma:
K(x_n)/K \hookrightarrow \overline{K}_v } \log | \sigma (x_n) - \alpha
|_v \nonumber\\ \ & = & \ 0, \nonumber \end{eqnarray} where the equality on the third line comes from Theorem~\ref{comp}, the integrality hypothesis on the $x_n$ enables us to switch $\sum_{\text{places} \ v \ \text{of} \ K }$ and $\lim_{n \to \infty}$ to get the equality on the fourth line, and the last equality is immediate from the product formula. This is a contradiction.
\section{Preliminaries}
\subsection{The Chebyshev polynomials}
\begin{defn} Define polynomials $P_m(z)$ ($m \geq 1$) by
$$
P_1 (z) := z, \;\;\; P_2 (z) := z^2 - 2; \;\; \textup{and}
$$
$$
P_{m+1} (z) + P_{m-1} (z) = z P_{m} (z) \;\; \textup{for all} \;
m \geq 2.
$$ Then a \emph{Chebyshev polynomial} is defined to be any of the $P_{m}$ ($m \geq 2$). \end{defn}
These polynomials satisfy the following properties (see \cite[Section 7]{Milnor}).
\begin{enumerate}
\item For any $m \geq 1$, $P_m (\omega +\omega^{-1}) = \omega^m +\omega^{-m}$, equivalently $P_m (2 \cos \theta) = 2 \cos (m \theta)$, where $\omega \in \mathbb C^{\times}$ and $\theta \in \mathbb R$.
\item For any $\ell, m \geq 1$, $P_\ell \circ P_m =
P_{\ell m}$.
\item For any $m \geq 3$, $P_m$ has $m-1$ distinct critical points in the finite plane, but only two critical values, i.e., $\pm 2$.
\end{enumerate}
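As a numerical illustration (not part of the argument), properties (1) and (2) can be checked directly from the defining recurrence; the helper \texttt{P} below is ours, not notation from the text.

```python
import math

def P(m, z):
    """The normalized Chebyshev polynomial P_m, computed from the
    recurrence P_{m+1}(z) = z P_m(z) - P_{m-1}(z), with P_1(z) = z
    and P_2(z) = z^2 - 2."""
    if m == 1:
        return z
    prev, cur = z, z * z - 2.0
    for _ in range(m - 2):
        prev, cur = cur, z * cur - prev
    return cur

# Property (1): P_m(2 cos t) = 2 cos(m t)
t = 0.7
assert abs(P(5, 2 * math.cos(t)) - 2 * math.cos(5 * t)) < 1e-9

# Property (2): P_l o P_m = P_{lm}
z = 1.234
assert abs(P(3, P(4, z)) - P(12, z)) < 1e-6
```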
\subsection{The dynamical systems of Chebyshev polynomials}
\begin{defn} Let $\varphi$ be a Chebyshev polynomial. The dynamical system induced by $\varphi$ on ${\mathbb P}^1$ (or ${\mathbb A}^1$) is called the \emph{(Chebyshev) dynamical system} with respect to $\varphi$ or the \emph{$\varphi$-dynamical system}. If $\varphi$ is clearly understood from the context, we simply call it a \emph{Chebyshev dynamical system} without reference to $\varphi$. \end{defn}
\begin{prop} For any Chebyshev polynomial $\varphi$, the Julia set of the dynamical system induced by $\varphi$ (resp.~$- \varphi$) is \textup{[$-2$, 2]}, which is naturally identified as a subset of the real line on the complex plane. \end{prop}
\begin{proof} See \cite[Section 7]{Milnor}. \end{proof}
\begin{prop}~\label{prop;preper}
Let $\varphi$ be a Chebyshev polynomial. Then the finite
preperiodic points of the $\varphi$-dynamical system are
the elements of ${\overline K}$ of the form $\zeta + \zeta^{-1}$,
where $\zeta$ is a root of
unity. \end{prop} \begin{proof}
Take an element $z \in {\overline K}$. Then there is some $a \in {\overline K}$ such
that $z = a + \frac{1}{a}$, as can be seen by finding $a$ such that
$a^2 - az + 1 = 0$. Note that $a$ cannot be zero. Now if $a$ is not
a root of unity, then there is some place $w$ of $K(a)$ such that
$|a|_w > 1$. Thus, letting $m = \deg \varphi$ $(\geq 2)$, we have
$$ |\varphi^k(z)|_w =
\left| a^{m^k} + \frac{1}{a^{m^k}} \right|_w > |a|_w^{m^k} - 1,$$
so $|\varphi^k(z)|_w$ goes to infinity as $k \to \infty$. Hence $z$ cannot be preperiodic.
Conversely, if $z = \zeta + \zeta^{-1}$, where $\zeta$ is a root of unity then there are some positive integers $j \not= k$ such that $\zeta^{m^k} = \zeta^{m^j}$, which gives $$ \varphi^k(z) = \zeta^{m^k} + \frac{1}{\zeta^{m^k}} = \varphi^j(z),$$ so $z$ is preperiodic for $\varphi$. \end{proof}
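A concrete instance of Proposition~\ref{prop;preper}: $z = 1 = \zeta + \zeta^{-1}$ for $\zeta = e^{\pi i/3}$ a sixth root of unity, so $z = 1$ must be preperiodic for $\varphi = P_2$. A quick numerical sketch:

```python
# z = 1 = zeta + zeta^{-1} for zeta = e^{pi*i/3}, a sixth root of unity,
# so z = 1 should be preperiodic for phi = P_2(z) = z^2 - 2.
orbit = [1.0]
for _ in range(4):
    orbit.append(orbit[-1] ** 2 - 2)  # apply phi
# the orbit 1 -> -1 -> -1 -> ... is eventually constant, hence preperiodic
assert orbit == [1.0, -1.0, -1.0, -1.0, -1.0]
```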
\subsection{The canonical height attached to a dynamical system}
Let $\varphi$ be a Chebyshev polynomial of degree $m$ and let $v$ be a place of a number field $K$. We define the local canonical height $\hat{h}_v(\alpha)$ of a point $\alpha \in {\overline K}_v$ associated to $\varphi$ at any place $v$ of $K$ as \begin{equation}\label{local} \hat{h}_v(\alpha) = \lim_{k \to \infty} \frac{\log \max
(|\varphi^k(\alpha)|_v, 1)}{m^k}. \end{equation}
\noindent This local canonical height has the property that $$ \hat{h}_v(\varphi(\alpha)) = m \hat{h}_v(\alpha)$$ for any $\alpha \in
{\overline K}_v$ (see \cite{CG} for details). Note that if $v$ is a nonarchimedean place, then the Chebyshev dynamical system has good reduction at $v$ and we have $\hat{h}_v(\alpha) = \log \max ( |\alpha|_v, 1)$.
When $\alpha \in {\overline K}$, we have \begin{equation}\label{global} \hat{h}(\alpha) = \sum_{\text{places $v$ of $K$}} \hat{h}_v(\alpha), \end{equation} where the left-hand side is the (global) canonical height of $\alpha$ associated to $\varphi$.
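As an illustration of \eqref{local} (not used in the proofs), one can approximate an archimedean local height numerically for $\varphi = P_2$: for $\alpha = 3 = \omega + \omega^{-1}$ with $\omega = (3+\sqrt 5)/2 > 1$, the iterates satisfy $\varphi^k(3) = \omega^{2^k} + \omega^{-2^k}$, so $\hat{h}_v(3) = \log \omega$, while points of $[-2,2]$ have vanishing local height. A sketch under these assumptions:

```python
import math

def hhat_arch(alpha, k):
    """Approximate the archimedean local height via the limit (local):
    hhat_v(alpha) = lim_k log max(|phi^k(alpha)|, 1) / m^k,
    here with phi = P_2(z) = z^2 - 2, so m = 2."""
    z = alpha
    for _ in range(k):
        z = z * z - 2
    return math.log(max(abs(z), 1.0)) / 2 ** k

# alpha = 3 = w + 1/w with w = (3 + sqrt(5))/2, so hhat_v(3) = log w
w = (3 + math.sqrt(5)) / 2
assert abs(hhat_arch(3.0, 8) - math.log(w)) < 1e-9

# alpha in [-2, 2]: the orbit stays in [-2, 2], so hhat_v(alpha) = 0
assert hhat_arch(0.3, 20) < 1e-5
```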
In the case of the places $v \mid \infty$, we will use the $\varphi$-invariant measure $\mu_v := \mu_{v, \varphi}$ (see \cite{L}) to calculate these local heights. It is worth noticing that this is not the uniform measure on $[-2, 2]$, unlike in the case of the dynamical system on ${\mathbb P}^1$ with respect to the map $z \mapsto z^2$, where the measure at archimedean places is the uniform probability Haar measure on the unit circle centered at the origin (see \cite{bilu}). The measure has more mass toward the endpoints $\pm 2$ of the Julia set $[-2, 2]$; indeed, its density ${
\frac{1}{\pi} } { \frac{1}{ \sqrt {4-x^2} } }$ has singularities at $\pm 2$.
When $v | \infty$, we have the following formula for the local height at $v$ (see \cite[Appendix B]{PST} or \cite{FR2}) for any $\alpha \in \mathbb{C}$: \begin{equation}\label{gen}
\hat{h}_{v} (\alpha)= \int_{\mathbb{C}} \log | z
- \alpha |_v \; d \mu (z), \end{equation} where $\mu := \mu_{v, \varphi}$ is the unique $\varphi$-invariant measure with support on the Julia set of $\varphi$ at $v$.
Since any root of unity $\xi_k$, say $e^{2 \pi i/k}$, is preperiodic for the map sending $z$ to $z^m$, we see that $\xi_k + \xi_k^{-1} = 2 \cos(2\pi/k)$ is preperiodic for $\varphi$. Now, the preperiodic points of $\varphi$ are equidistributed with respect to $\mu$ (see \cite{L, BH}), so for any continuous function $f$ on $[-2,2]$ we have $$ \lim_{k \to \infty} \frac{1}{k} \sum_{j=1}^k f(2 \cos(j \pi /k)) = \int_\mathbb{C} f \, d \mu.$$ Thus $d \mu$ is the push-forward of the uniform distribution on $[0, \pi]$ under the map $\theta \mapsto 2 \cos \theta$, so $$ d \mu(x) = -\frac{1}{\pi} \frac{d}{dx} \cos^{-1} (x/2) \, dx = \frac{1}{\pi} \frac{1}{\sqrt{4 - x^2}} \, dx.$$ Thus, \eqref{gen} becomes \begin{equation}\label{from-PST} \hat{h}_{v} (\alpha) = {
\frac{1}{\pi} } \int_{-2}^{2} { \frac{1}{ \sqrt {4-x^2} } } \log | x
- \alpha |_v \; dx \end{equation} for any $\alpha \in \mathbb{C}$.
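As a sanity check on the equidistribution statement above, one can compare empirical moments of the points $2\cos(j\pi/k)$ with moments of $\mu$; for instance $\int x^2\, d\mu = \frac{1}{\pi}\int_0^\pi 4\cos^2\theta\, d\theta = 2$. A small numerical sketch:

```python
import math

k = 1000
# the preperiodic points 2 cos(j*pi/k) equidistribute with respect to mu
samples = [2 * math.cos(j * math.pi / k) for j in range(1, k + 1)]
mean_sq = sum(x * x for x in samples) / k
# second moment of mu: (1/pi) * int_0^pi 4 cos^2(t) dt = 2
assert abs(mean_sq - 2.0) < 1e-9
```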
\section{Archimedean places}
\subsection{A counting lemma}
Let $K$ be a number field, and let $I \subset [-2, 2]$ be an interval. For any root of unity $\zeta \in \overline K$, write $x_{\zeta} := \zeta + \zeta^{-1}$. Let $$ \mathcal N ( x_{\zeta}, I ) \; := \; \# \{ \sigma (x_{\zeta}) \in I : \sigma \in \Gal \big ( K (x_{\zeta})/K \big ) \}. $$
\begin{lem}~\label{lem;counting} Keep notation just above. Let $-2 \leq c < d \leq 2$, and let $I := (c, d]$ be an interval. Then for any real $0 < \gamma < 1$ and any root of unity $\zeta \in \overline K$, \begin{equation}\label{first}
\mathcal N ( x_{\zeta}, I ) \; = \; { \frac{[ K(x_{\zeta}) : K ]}{\pi} }
\Big ( \cos^{-1} { \frac{c}{2} } - \cos^{-1} { \frac{d}{2} } \Big ) \; + \; O_{\gamma} \big ([ K(x_{\zeta}) : K ]^{\gamma} \big )
\end{equation} where $\cos^{-1}: [-1, 1] \rightarrow [0, \pi]$ is the $\arccos$ function. In particular, when $-2 < c < d < 2$, we may write \begin{equation}\label{M} \mathcal N ( x_{\zeta}, I ) \leq M [ K(x_{\zeta}) : K ](d-c) + O_{\gamma} \big ([ K(x_{\zeta}) : K ]^{\gamma} \big ) \end{equation} where $M := M_{c,d}$ is the supremum of $\frac{1}{\sqrt{4 - x^2}}$ on $(c, d]$. \end{lem}
\begin{proof} Write $\zeta = e^{2\pi i { \frac{a}{N} } }$, where $N$ is a positive integer and $1 \leq a \leq N$. Then note \begin{eqnarray} x_{\zeta} \in I \; &\Longleftrightarrow& \; e^{2\pi i { \frac{a}{N} } } + e^{-2\pi i { \frac{a}{N} } } \in I \nonumber\\ \; &\Longleftrightarrow& \; \cos \Big ( 2 \pi { \frac{a}{N} } \Big ) \in \Big ( { \frac{c}{2} }, { \frac{d}{2} } \Big ] \nonumber\\ \; &\Longleftrightarrow& \; a \in { \frac{N}{2 \pi} } \Big [ \cos^{-1} { \frac{d}{2} }, \cos^{-1} { \frac{c}{2} } \Big ). \nonumber \end{eqnarray} Then \eqref{first} follows immediately from \cite[Prop. 1.3]{BIR}. To see that \eqref{M} holds, note that the derivative of the function $\cos^{-1} (x/2)$ is $\frac{1}{\sqrt{4 -
x^2}}$. Thus, \eqref{M} is a consequence of \eqref{first} along with the Mean Value Theorem. \end{proof}
Remark. More precisely, we could define $M$ above to be the supremum of $\frac{1}{\pi} \frac{1}{\sqrt{4 - x^2}}$ on $(c, d]$. However, this difference will not matter for our later purposes, so we keep the above choice of $M$.
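The asymptotic count in Lemma~\ref{lem;counting} can be illustrated numerically over $K = \mathbb{Q}$, where the conjugates of $x_\zeta$, for $\zeta$ a primitive $N$-th root of unity, are the $2\cos(2\pi a/N)$ with $\gcd(a,N)=1$ and $1 \le a < N/2$; the choice of a prime $N$ below is just for convenience, so that all such $a$ are coprime to $N$.

```python
import math

N = 10007  # prime, so every 1 <= a < N/2 is coprime to N
conj = [2 * math.cos(2 * math.pi * a / N) for a in range(1, (N - 1) // 2 + 1)]
degree = len(conj)  # = phi(N)/2, the degree of x_zeta over Q
c, d = -1.0, 1.0
count = sum(1 for x in conj if c < x <= d)
# main term of (first): (degree/pi) * (arccos(c/2) - arccos(d/2))
predicted = (degree / math.pi) * (math.acos(c / 2) - math.acos(d / 2))
assert abs(count - predicted) / degree < 0.01
```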
\subsection{Baker's lower bounds for linear forms in logarithms}
Here we state a consequence of Baker's theorem on lower bounds for linear forms in logarithms (see \cite[Thm.~3.1, p.~22]{Baker}). \begin{thm}[Baker] \label{thm;baker} Suppose that $e^{2 \pi i \theta_0} \in \overline {\mathbb Q}$. Then there exists a constant $C := C(\theta_0) > 0$ such that for any coprime $a, N \in \mathbb Z$ ($N \neq 0$ or $\pm 1$) with ${ \frac{a}{N} } \neq \theta_0$, $$
\Big | { \frac{a}{N} } - \theta_0 \Big | \; \geq \; M^{-C} $$
where $M := \max ( |a|, |N| )$. \end{thm}
\begin{proof} We write \begin{eqnarray}
\Big | { \frac{a}{N} } - \theta_0 \Big | \; & = & \; { \frac{1}{2 \pi} }
\Big | { \frac{a}{N} } \cdot 2 \pi i - 2 \pi i \theta_0 \Big |. \nonumber \end{eqnarray} Then apply Baker's theorem to the absolute value of the right hand side and adjust the resulting constant for ${ \frac{1} {2 \pi} }$. (Also recall that $N \neq 0$ or $\pm 1$.) \end{proof}
\section{The main theorem and its variant}
\subsection{The main theorem and its proof} We will prove Theorem~\ref{comp} by breaking it into several cases. We begin with the case where the place $v$ is finite. For the sake of precision, we will state when we need $\alpha$ to be in $K$ and when it suffices that it be in ${\overline K}_v$. \begin{prop}\label{prev}
Let $(\zeta_n)_{n=1}^{\infty}$ be a sequence of distinct roots of
unity, and write $x_n := \zeta_n + {\zeta_n}^{-1}$ for any $n \geq
1$. If $v$ is finite, then for any $\alpha \in {\overline K}_v$, we have \begin{equation}\label{finite} \hat{h}_v(\alpha) = \lim_{n \rightarrow \infty} { \frac{1} { [ K(x_n) : K ] } } \sum_{\sigma:
K(x_n)/K \hookrightarrow \overline{K}_v } \log | \sigma (x_n) - \alpha
|_v. \end{equation} \end{prop} \begin{proof}
If $| \alpha |_v > 1$, then, since $| \sigma (x_n) |_v \leq 1$, the ultrametric inequality gives $| \sigma (x_n) - \alpha |_v = |\alpha |_v$. Thus, \eqref{finite} is immediate. Now, suppose that $| \alpha |_v \leq 1$.
Let $r < 1$ be a real
number. Let $x_m$ and $x_n$ satisfy that $| x_m - \alpha |_v \leq r$
and $| x_n - \alpha |_v \leq r$. Then observe \begin{eqnarray} r
& \; \geq \; & | (x_m - \alpha) - (x_n - \alpha) |_v \nonumber\\
& \; = \; & | x_m - x_n |_v \nonumber\\ & \; = \; &
\Big | (\zeta_m - \zeta_n) - { \frac{\zeta_m - \zeta_n}{\zeta_m \zeta_n}}
\Big |_v \nonumber\\
& \; = \; & | \zeta_m - \zeta_n |_v \; | 1 - (\zeta_m \zeta_n)^{-1} |_v \nonumber\\
& \; = \; & | 1 - \zeta_m^{-1} \zeta_n |_v \;
| 1 - (\zeta_m \zeta_n)^{-1} |_v. \nonumber \end{eqnarray}
Hence either $| 1 - \zeta_m^{-1} \zeta_n |_v \leq \sqrt r$ or
$| 1 - (\zeta_m \zeta_n)^{-1} |_v \leq \sqrt r$. In the first (resp.~second) case it follows that $\zeta_m^{-1} \zeta_n$ (resp.~$(\zeta_m \zeta_n)^{-1}$) must have order equal to a power of the rational prime lying below $v$, and hence that there are only finitely many choices for $\zeta_m^{-1} \zeta_n$ (resp.~$(\zeta_m \zeta_n)^{-1}$). Thus, for any real $r < 1$, there are only finitely many indices $n \geq 1$
such that $| x_n - \alpha |_v \leq r$, which immediately implies the desired convergence in this case. \end{proof}
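The key fact used above, that $|1-\zeta|_v < 1$ forces $\zeta$ to have order a power of the prime below $v$, reflects the classical evaluation $\Phi_N(1) = p$ when $N$ is a power of the prime $p$, and $\Phi_N(1) = 1$ when $N$ has at least two distinct prime factors. A numerical sketch (the helper below is for illustration only):

```python
import cmath
import math

def cyclotomic_at_1(N):
    """Phi_N(1) = product of (1 - zeta) over the primitive N-th roots
    of unity zeta, computed numerically."""
    prod = 1.0 + 0j
    for a in range(1, N + 1):
        if math.gcd(a, N) == 1:
            prod *= 1 - cmath.exp(2j * math.pi * a / N)
    return prod.real

# Phi_N(1) = p for prime powers N = p^k, and 1 otherwise: this is why
# |1 - zeta|_v < 1 at a place v above p forces zeta to have p-power order.
assert abs(cyclotomic_at_1(9) - 3) < 1e-6   # N = 3^2
assert abs(cyclotomic_at_1(6) - 1) < 1e-6   # N = 2 * 3
```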
We now treat the archimedean $v$ for which $\alpha$ is outside the Julia set at $v$. \begin{prop}
Let $x_n$ be as in Proposition~\ref{prev}. If $v$ is archimedean, then
for any $\alpha \in \mathbb{C} - [-2,2]$, we have $$\hat{h}_v(\alpha) = \lim_{n \rightarrow \infty} { \frac{1} { [ K(x_n) : K ] } } \sum_{\sigma:
K(x_n)/K \hookrightarrow \overline{K}_v } \log | \sigma (x_n) - \alpha
|_v. $$ \end{prop} \begin{proof}
From \eqref{gen}, we have
$$
\int_{\mathbb{C}} \log |z - \alpha|_v \, d \mu_v (z)= \hat{h}_v(\alpha),$$
where $\mu_v := \mu_{v, \varphi}$ is the invariant measure for $\varphi$ at $v$. This
measure is supported on $[-2,2]$, so if $g$ is a function on $\mathbb{C}$
that agrees with $\log |z - \alpha|$ on $[-2,2]$ we have \begin{equation}\label{agree} \int_{\mathbb{C}} g(z) d \mu_v(z)= \hat{h}_v(\alpha). \end{equation}
Let $\epsilon = \min_{w \in [-2,2]} |w - \alpha|$ (note that $\epsilon \not= 0$ since $\alpha \notin [-2,2]$) and define $g(z)$ as
$$ g(z) = \min \Big(\log \max ( | z - \alpha |, \epsilon ), \log
(|\alpha| + 2) \Big).$$ Then $g$ is continuous and bounded on all of $\mathbb{C}$ and agrees with
$\log |z - \alpha|$ on $[-2,2]$. By \cite[Theorem 1.1]{bilu}, we have $$ \int_{\mathbb{C}} g(z) \, d \mu_v(z) = \lim_{n \rightarrow \infty} { \frac{1} { [ K(x_n) : K ] } } \sum_{\sigma:
K(x_n)/K \hookrightarrow \overline{K}_v } g(\sigma(x_n)).$$ Since all $x_n \in [-2,2]$, this finishes the proof, using \eqref{agree}. \end{proof}
Now, we come to the most difficult case. \begin{prop}~\label{most} Let $x_n$ be as in
Proposition~\ref{prev}. If $v | \infty$ and $\alpha \in [-2,2]$ is not preperiodic, then we have $$\hat{h}_v(\alpha) = \lim_{n \rightarrow \infty} \frac{1}{ [ K(x_n) : K
] } \sum_{\sigma: K(x_n)/K \hookrightarrow \overline{K}_v } \log |
\sigma (x_n) - \alpha |_v. $$ \end{prop} \begin{proof}
We may assume that $\alpha \in K$.
If $v$ is archimedean and $\alpha \in K$ is
in $[-2,2]$, then we have $\hat{h}_v(\alpha) = 0$. This follows from
the fact that $\varphi$ maps $[-2,2]$ to itself, so if $\alpha \in
[-2,2]$, then $|\varphi^n(\alpha)|_v$ is bounded for all $n$, so
$\hat{h}_v(\alpha) = 0$ by \eqref{local}.
Note $|x|_v = |\tau (x)|$ for all $x \in \mathbb{C}$, where $\tau: K(x)/K
\hookrightarrow \mathbb{C}$ is associated to $v$ and $| \cdot |$ is the
usual absolute value on $\mathbb{C}$. To simplify our notation, we will
fix one $v | \infty$, suppress $v$ in the notation of the absolute
value, and use $| \cdot |$ according to this
observation, i.e., without loss of generality we will prove this
theorem for the place $v$ equal to the usual absolute value ($|z|
= \sqrt{z \overline z}$, $z \in \mathbb{C}$). However, we will keep $v$ in
the notation of the local height $\hat{h}_v$ to avoid any confusion with
the global height $\hat{h}$.
We may write $ \alpha = e^{2 \pi i \theta_0} + e^{-2 \pi i \theta_0} = 2 \cos (2 \pi \theta_0), $ where $\theta_0 \in (- { \frac{1} {2} }, { \frac{1} {2} }]$. Note $\alpha$ cannot be equal to $-2$, $2$, or 0 since we assume that $\alpha$ is not preperiodic. Note that $\int_{0}^{\epsilon} \log \big ( { \frac{t} {\epsilon} } \big ) dt = -\epsilon$ for any $\epsilon > 0$.
Write \begin{eqnarray} x & \; = \; & e^{2\pi i {\theta }} + e^{-2\pi i {\theta }} \;\; \;\; = \; 2 \cos (2\pi {\theta}); \;\;\;\;\;\; \textup{and} \nonumber\\ x_n & \; = \; & e^{2\pi i { \frac{a}{N} } } + e^{-2\pi i { \frac{a}{N} } } \; = \; 2 \cos \Big (2\pi { \frac{a}{N} } \Big ) \nonumber \end{eqnarray} where $a$ and $N (\neq 0)$ are integers (depending on $n \geq 1$), and
$\big | { \frac{a}{N} } \big | \leq 1$.
We recall that $\hat{h}_v(\alpha) = 0$ since for any $\alpha$ in $[-2,2]$, the quantity $|P_m^k(\alpha)|$ is bounded for all $k \geq 1$. Thus, we have
\begin{equation}\label{delta} \frac{1}{\pi} \int_{-2}^{2} { \frac{1}{ \sqrt
{4-x^2} } } \log | x - \alpha | \ dx = 0. \end{equation} Hence it will suffice to show, for all $n \gg 1$, that the quantity \begin{equation} \label{quant}
\frac{1}{ [ K(x_n) : K ] } \Bigg |
\sum_{\sigma: K(x_n)/K \hookrightarrow \mathbb{C}} \log
|\sigma(x_n) - \alpha| \Bigg | \end{equation} can be made sufficiently small.
Fix $\epsilon > 0$. Since the integrand in \eqref{delta} is integrable, for all sufficiently small $\delta > 0$ we have \begin{equation}\label{epsi}
\Bigg | \int_{\alpha - \delta}^{\alpha + \delta} { \frac{1}{ \sqrt
{4-x^2} } } \log | x - \alpha | \ dx \Bigg | < \epsilon. \end{equation}
Let $g_\delta (z) = \log \max (|z - \alpha|, \delta)$. By
\eqref{epsi} and the fact that $0 > g_\delta(x) \geq \log |x - \alpha|$ for $x \in [\alpha - \delta, \alpha + \delta]$, we see that \begin{equation}\label{epsi2}
\Bigg | \int_{-2}^{2} { \frac{1}{ \sqrt
{4-x^2} } } g_\delta(x) \ dx \Bigg | < \epsilon. \end{equation}
By the equidistribution theorem of Baker/Rumely (\cite{BREQUI}), Chambert-Loir (\cite{CL}), and Favre/Rivera-Letelier (\cite{FR2}), we see that for all sufficiently large $n$, the quantity
\begin{equation}\label{ep3}
\left| \Big( \frac{1}{\pi} \int_{-2}^{2} \frac{1}{ \sqrt {4-x^2}}
g_\delta(x) \; dx \Big) - \Big( \frac{1}{ [ K(x_n) : K ] }
\sum_{\sigma: K(x_n)/K \hookrightarrow \mathbb{C} }
g_\delta(\sigma(x_n)) \Big) \right| \end{equation} is sufficiently small. Thus, by \eqref{epsi2} it suffices to show that \begin{equation*} \frac{1}{ [ K(x_n) : K ] }
\Bigg | \sum_{\sigma: K(x_n)/K
\hookrightarrow \mathbb{C} } \big ( \log | \sigma(x_n) - \alpha | -
g_\delta(\sigma(x_n)) \big ) \Bigg | \end{equation*} is sufficiently small for all $n \gg 1$ and all sufficiently small
$\delta > 0$. Since $\log | \sigma(x_n) - \alpha | = g_\delta(\sigma(x_n))$ when $\sigma(x_n)$ lies outside $[\alpha-\delta, \alpha+\delta]$, it in turn suffices to show that \begin{equation}\label{a} \frac{1}{ [ K(x_n) : K ] }
\Bigg | \sum_{\substack{\sigma: K(x_n)/K
\hookrightarrow \mathbb{C}\\ \sigma(x_n) \in [\alpha - \delta, \alpha +
\delta] }}
\big ( \log | \sigma (x_n) - \alpha | - g_\delta(\sigma(x_n)) \big )
\Bigg |
\end{equation} is sufficiently small for all $n \gg 1$ and all sufficiently small $\delta > 0$.
Now, when $\delta > 0$ is small and $x$ is in $[\alpha - \delta,
\alpha + \delta]$, we have $0 > g_\delta(x) \geq \log |x - \alpha|$ and the quantity \eqref{a} is bounded above by \begin{equation}\label{b} \frac{1}{ [ K(x_n) : K ] }
\Bigg | \sum_{\substack{\sigma: K(x_n)/K
\hookrightarrow \mathbb{C}\\ \sigma(x_n) \in [\alpha - \delta, \alpha +
\delta] }}
\log | \sigma (x_n) - \alpha |
\Bigg |. \end{equation} Hence, finally it suffices to show that \eqref{b} is sufficiently small whenever $n$ is sufficiently large and $\delta > 0$ is sufficiently small.
If we choose $\delta >0$ sufficiently small, we may assume that \begin{equation}\label{M2} \min_{x \in [\alpha - \delta, \alpha + \delta]} \left( \frac{1}{\sqrt{4 -
x^2}} \right) \geq \frac{1}{2} \max_{x \in [\alpha - \delta, \alpha
+ \delta]} \left( \frac{1}{\sqrt{4 - x^2}} \right). \end{equation} We define $M$ as \begin{equation}\label{M3} M := \max_{x \in [\alpha - \delta, \alpha
+ \delta]} \left( \frac{1}{\sqrt{4 - x^2}} \right). \end{equation}
Choose a large positive integer $D$. For any $1 \leq i \leq D$ denote by $S_i$ the interval $$[\alpha - \delta + (i-1)(\delta/D), \alpha - \delta + i (\delta/D)].$$ Given any $n \gg 1$, let $N_i := N_i (n)$ denote the number of $\sigma(x_n)$'s belonging to $S_i$.
Note that $\log | \sigma (x_n) - \alpha | \leq 0$,
whenever $\sigma (x_n)$ belongs to any of the
$S_i$ $(1 \leq i \leq D)$. For any $1 \leq i \leq D -1$, on $S_i$ we have \begin{equation*} \begin{split}
& \frac{1}{[K(x_n):K]} \biggl | \sum_{\substack{\sigma: K(x_n)/K \hookrightarrow \mathbb{C} \\ \sigma(x_n)
\in S_i}} \log | \sigma(x_n) - \alpha| \biggl | \\ & \leq M
(\delta/D) \Big | \log |(D-i)
(\delta/D)| \Big | + O\left(\frac{1}{\sqrt{[K(x_n):K]}} \right) \Big | \log
\big ( (D-i)\delta/D \big ) \Big | \\ & \quad \quad \text{(by Lemma~\ref{lem;counting} with $\gamma = 1/2$)}\\
& \leq 2 \Bigg | \int_{S_{i+1}} (M/2) \log |x - \alpha| \, dx \Bigg | +
O\left(\frac{1}{\sqrt{[K(x_n):K]}}
\right) \Big | \log \big ((D-i) \delta/D \big ) \Big |\\
& \leq 2 \Bigg |
\int_{S_{i+1}} \frac{1}{\sqrt{4 - x^2}} \log |x - \alpha| \, dx \Bigg |
+ O\left(\frac{1}{\sqrt{[K(x_n):K]}}\right) \Big |\log \big ( (D-i) \delta/D
\big ) \Big | \\ & \ \ \quad \text{ (by \eqref{M2} and \eqref{M3}) }. \end{split} \end{equation*}
Summing up over all $1 \leq i \leq D-1$ and applying \eqref{epsi} we obtain \begin{equation}\label{smaller} \begin{split}
\frac{1}{[K(x_n):K]} \Bigg | \sum_{\substack{\sigma: K(x_n)/K \hookrightarrow \mathbb{C} \\ \sigma(x_n)
\in [\alpha - \delta, \alpha - (\delta/D)]}} & \log |
\sigma(x_n) - \alpha| \Bigg |\\
& \leq 2 \epsilon + \frac{1}{\sqrt{[K(x_n):K]}}
C_2 D \big ( \big | \log (\delta/D) \bigl |
+ \log D \big ), \end{split} \end{equation} for some constant $C_2 > 0$ independent of $n$ and $D$.
Similarly, we see that \begin{equation}\label{bigger} \begin{split}
\frac{1}{[K(x_n):K]} \Bigg | \sum_{\substack{\sigma: K(x_n)/K \hookrightarrow \mathbb{C} \\ \sigma(x_n)
\in [\alpha + (\delta/D), \alpha + \delta]}} & \log |
\sigma(x_n) - \alpha| \Bigg | \\ & \leq 2 \epsilon + \frac{1}{\sqrt{[K(x_n):K]}}
C_3 D \big ( \big| \log (\delta/D) \big| + \log D \big ), \end{split} \end{equation} for some constant $C_3 > 0$ independent of $n$ and $D$. Since
Since $|\log (\delta/D)|$ and $\log D$ grow more slowly than any power of $D$, we see that the quantities in \eqref{smaller} and \eqref{bigger} can be made arbitrarily small when $D$ is large and $[K(x_n):K] \geq D^4$.
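To spell out this last claim (an elementary estimate, included only for the reader's convenience): if $[K(x_n):K] \geq D^4$, then the error term in \eqref{smaller} is bounded by
\begin{equation*}
\frac{C_2 D \big( \big| \log (\delta/D) \big| + \log D \big)}{[K(x_n):K]^{1/2}}
\; \leq \;
\frac{C_2 \big( \big| \log (\delta/D) \big| + \log D \big)}{D}
\; \leq \;
\frac{C_2 \big( | \log \delta | + 2 \log D \big)}{D}
\; \longrightarrow \; 0
\end{equation*}
as $D \to \infty$, and likewise for \eqref{bigger} with $C_3$ in place of $C_2$.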
Now, for all sufficiently small $\delta > 0$, we have $$
0 \geq \log | x_n - \alpha | = \log
\big | 2 \cos \Big (2\pi { \frac{a}{N} } \Big ) -
2 \cos (2\pi {\theta_0}) \big | \geq \log
\Big | { \frac{a}{N} } -
\theta_0 \Big | + O(1) $$ for all $x_n \in [\alpha - \delta, \alpha + \delta]$. When $N$ is sufficiently large, Theorem~\ref{thm;baker} thus yields $$
0 \geq \log | x_n - \alpha | \geq -C_4 {\log N} + O(1) $$ where $C_4 > 0$ is a constant independent of $n$. This inequality is true not only for $x_n$ itself, but also for all its $K$-Galois conjugates that belong to $[\alpha - \delta, \alpha + \delta]$, i.e., after readjusting $C_4$ if necessary, we have
\begin{equation}\label{baker-log}
0 \geq \log | \sigma (x_n) - \alpha | \geq
- C_4 \log N
\end{equation}
for all $\sigma (x_n) \in [\alpha - \delta, \alpha +
\delta]$, where $C_4 > 0$ is a constant independent of
(all) $n \gg 1$. Thus, it follows from \eqref{M}
(again with $\gamma
= 1/4$) that we have \begin{equation}\label{close} \begin{split} \frac{1}{[K(x_n):K]}
\Bigg | \sum_{\substack{\sigma: K(x_n)/K \hookrightarrow \mathbb{C} \\ \sigma(x_n) \in [\alpha - (\delta/D), \alpha + (\delta/D)]}} \log
|\sigma(x_n) - \alpha |
\Bigg | \\ \leq C_4 M (\delta/D) \log N +
\frac{C_5
\log N}{[K(x_n):K]^{1/2} } \end{split} \end{equation}
where $C_5 > 0$ is a constant.
Write $\phi$ for the Euler function, and suppose that $N
\gg 1$. Note that $$ [K(x_n) : K] \; \geq \; \frac{[\mathbb Q (x_n) : {\mathbb Q}] } {[K : \mathbb Q]} \; = \; { \frac{\phi (N)} {[K : \mathbb Q]} } $$ and $\phi (N) \geq \sqrt N$ (see \cite[page 267, Thm 327]{HW}), and hence that
$[K(x_n):K]^{ \frac{1}{2} } \gg \sqrt[4]{N}$. Now, let $D = \lfloor \sqrt[4]{N} \rfloor $. (Note this choice of $D$ is compatible with that of $D$ in \eqref{smaller} and \eqref{bigger}.) Then, when $N$ is sufficiently large, the right-hand sides of \eqref{smaller} and \eqref{bigger} are both sufficiently small and the right-hand side of \eqref{close} is also sufficiently small. Combining equations \eqref{smaller}, \eqref{bigger}, and \eqref{close} we then obtain that $$ \frac{1}{[K(x_n):K]}
\Bigg | \sum_{\sigma: K(x_n)/K \hookrightarrow
\mathbb{C} } \log |\sigma(x_n) - \alpha |
\Bigg | \;\; \textup{is sufficiently small}.$$ Thus, we must have $ \lim_{n \to \infty} \sum_{\sigma: K(x_n)/K \hookrightarrow
\mathbb{C} } \log |\sigma(x_n) - \alpha | = 0,$ as desired. \end{proof}
The proof of Theorem~\ref{comp} is now immediate since the Propositions above cover all $v$ and all non-preperiodic $\alpha \in K$. Now, we are ready to prove our main theorem, Theorem~\ref{main}. \begin{proof}[Proof of Theorem~\ref{main}]
Let $S$ be a finite set of places of $K$ that includes all the archimedean
places. After extending $S$ to a larger finite set if
necessary, which only makes the set ${\mathbb
A^1}_{\varphi, \alpha, S}$ larger, we may assume that $S$ also
contains all the places $v$ for which $|\alpha|_v > 1$. Then for any $v
\notin S$ and any preperiodic point $x_n$ we have \begin{equation}\label{zero}
\log | \sigma (x_n) - \alpha |_v = 0 \; \; \text{for any embedding
$\sigma: K(x_n)/K \longrightarrow {\overline K}_v$.} \end{equation}
Assume that $(x_n)_{n=1}^\infty$ is an infinite nonrepeating
sequence in ${\mathbb A^1}_{\varphi, \alpha, S}$. Since we can
interchange a limit with a finite sum, we have \begin{equation} \begin{split}
& \frac{1}{[ K : {\mathbb Q}]} \hat{h}(\alpha)
= \sum_{v \in S} \lim_{n \rightarrow \infty}
\frac{1}{[ K(x_n) : {\mathbb Q}]} \sum_{\sigma: K(x_n)/K
\hookrightarrow {\overline K}_v } \log | \sigma (x_n) - \alpha |_v\\
& \text {(by \eqref{global}, \eqref{zero}, and Thm.~\ref{comp})
}\\
& = \lim_{n \rightarrow \infty} \sum_{v \in S} \frac{1}{[ K(x_n) :
{\mathbb Q}]} \sum_{\sigma: K(x_n)/K
\hookrightarrow {\overline K}_v } \log | \sigma (x_n) - \alpha |_v
\; \; \text{(switching sum and limit)} \\
&= \lim_{n \rightarrow \infty} \sum_{\text{places $v$ of $K$}}
\frac{1}{[ K(x_n) : {\mathbb Q}]} \sum_{\sigma: K(x_n)/K
\hookrightarrow {\overline K}_v } \log | \sigma (x_n) - \alpha |_v
\quad \text{(by \eqref{zero})}\\
& = 0 \quad \quad \quad \text{(by the product formula)}. \end{split} \end{equation} Since $\alpha$ is not preperiodic, however, we have $\hat{h}(\alpha) > 0$. Thus, we have a contradiction, so ${\mathbb A^1}_{\varphi,
\alpha, S}$ must be finite. \end{proof}
\subsection{A variant of the Chebyshev dynamical systems}
We now consider a variant of the Chebyshev polynomials, defined by the following recursion formula: $$ Q_1 (z) := z, \;\;\; Q_2 (z) := z^2 + 2; \;\; \textup{and} $$ $$ Q_{m+1} (z) - Q_{m-1} (z) = z Q_{m} (z) \;\; \textup{for all} \; m \geq 2. $$ The dynamical system induced by any of the $Q_{m}$ ($m \geq 2$) on ${\mathbb A^{1}}$ (or ${\mathbb P^1}$) has properties similar to those of the Chebyshev dynamical systems, for instance: \begin{enumerate} \item[{\rm (i)}] The Julia set is equal to the interval $[-2, 2]$ on the $y$-axis, i.e., the segment of the imaginary axis joining $-2i$ and $2i$;
\item[{\rm (ii)}] The preperiodic points are (either $\infty$ or) the points of type $\zeta - \zeta^{-1}$, where $\zeta$ is a root of unity.
\item[{\rm (iii)}] The corresponding measures $\mu_v$ satisfy \begin{eqnarray}
\int_{ {\mathbb P^1} (\mathbb C_v)} \log |z-\alpha|_v \; d\mu_v = \left\{
\begin{array}{ll} \log \max \{ |\alpha|_v, \; 1 \}, &
\text{if $v \not | \infty$}; \\ { \frac{1}{\pi} } \int_{-2}^{2} { \frac{1} { \sqrt {4-y^2} } }
\log | yi - \alpha |_v \; dy, & \text{otherwise} \end{array} \right. \nonumber \end{eqnarray} where $\alpha \in K$ ($K$ a number field), $v$ is a place of $K$, and $dy$ is the usual Lebesgue measure on $[-2, 2]$. Note that the measure
$\mu_v$ ($v | \infty$) is supported on the interval $[-2, 2]$ on the $y$-axis.
\end{enumerate}
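As a sanity check on (ii) (a routine verification, not part of the original argument), consider $Q_3(z) = z\,Q_2(z) + Q_1(z) = z^3 + 3z$. For any $\zeta \neq 0$ we have
\begin{equation*}
Q_3 (\zeta - \zeta^{-1}) = (\zeta - \zeta^{-1})^3 + 3 (\zeta - \zeta^{-1}) = \zeta^3 - \zeta^{-3},
\end{equation*}
so the forward orbit of $\zeta - \zeta^{-1}$ under $Q_3$ consists of the points $\zeta^{3^j} - \zeta^{-3^j}$, $j \geq 0$. When $\zeta$ is a root of unity this is a finite set, so such points are indeed $Q_3$-preperiodic.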
\noindent It is then easy to see that arguments similar to the above prove the following:
\begin{thm} Let $\psi$ be any of the $Q_m$ ($m \geq 2$). Let $K$ be a number field, and let $S$ be a finite set of places of $K$, containing all the infinite ones. Suppose that $\alpha \in K$ is not of type $\zeta - \zeta^{-1}$ for any root of unity $\zeta$. Then the following set $$ {\mathbb A^1}_{\psi, \alpha, S} := \{ z \in {\overline {\mathbb Q}} : z \; \textup{{\em{is $S$-integral with respect to}}} \; \alpha \; \textup{{\em{and is}}} \; \textup{${\psi}$-{\em{preperiodic}}} \} $$ is finite. \end{thm}
\end{document}
\begin{document}
\title{Local middle dimensional symplectic non-squeezing in the analytic setting}
\author{Lorenzo Rigolli\footnote{This work is partially supported by the DFG grant AB 360/1-1.}} \maketitle
\begin{abstract}
We prove the following middle-dimensional non-squeezing result for analytic symplectic embeddings of domains in $\mathbb{R}^{2n}$.\\ Let $\varphi: D \hookrightarrow \mathbb{R}^{2n}$ be an analytic symplectic embedding of a domain $D \subset \mathbb{R}^{2n}$ and $P$ be a symplectic projector onto a linear $2k$-dimensional symplectic subspace $V\subset \mathbb{R}^{2n}$. Then there exists a positive function $r_0:D\rightarrow (0,+ \infty)$, bounded away from $0$ on compact subsets $K \subset D$, such that the inequality $Vol_{2k}(P\varphi (B_r(x)),\omega ^k _{0|V})\geq \pi^{k} r^{2k}$ holds for every $x \in D$ and for every $r < r_0(x)$. This claim will be deduced from an analytic middle-dimensional non-squeezing result (stated by considering paths of symplectic embeddings) whose proof will be carried out by taking advantage of a work by \'{A}lvarez Paiva and Balacheff. \end{abstract}
\section*{Introduction} Let $\omega_0=\sum_{i=1}^n d x_i \wedge d y_i$ be the standard symplectic form on $\mathbb{R}^{2n}$, $B_R$ the ball of radius $R$ \begin{align*}
B_R = \{ (x_1,y_1,\ldots,x_n,y_n) \in {\mathbb{R}}^{2n} \ | \sum_{i=1} ^n x_i^2 + \sum_{i=1} ^n y_i^2 < R^2 \}, \end{align*} and $Z_r$ the cylinder \begin{align*}
Z_r = \{ (x_1,y_1,\ldots,x_n,y_n) \in {\mathbb{R}}^{2n} \ | x_1 ^2 + y_1 ^2 < r^2 \}. \end{align*} Gromov's non-squeezing theorem (see \cite{Gro85} or \cite{HZ94}) states that if $\varphi (B_R) \subset Z_r$, where $\varphi$ is a symplectic (open) embedding, then $r\geq R$. Symplectic diffeomorphisms are volume preserving, since they preserve ${\omega_0}^n$, which is a constant multiple of the standard volume form; the non-squeezing theorem shows that, unlike general volume preserving diffeomorphisms, they also exhibit two-dimensional rigidity phenomena.\\ Since symplectic diffeomorphisms preserve the $2k$-form ${\omega_0}^k$ for every integer $1\leq k \leq n$, after Gromov's pioneering result one may ask whether there are also middle dimensional rigidity phenomena. Some work in this direction, concerning symplectic embeddings of polydisks, has been done by Guth. In \cite{Gut08} he considers symplectic embeddings of a polydisk $\Gamma :=B_2(R_1) \times \ldots \times B_2(R_n)$ with $R_1\leq \ldots \leq R_n$ into a polydisk $\Gamma ' := B_2(R_1 ') \times \ldots \times B_2(R_n ')$ with $R_1' \leq \ldots \leq R_n '$. By Gromov's non-squeezing theorem and the conservation of volume under symplectic diffeomorphisms, such symplectic embeddings may exist only if $R_1 \leq R_1 '$ and $R_1 \ldots R_n \leq R_1 ' \ldots R_n '$. On the other hand, Guth proved that there is a constant $C(n)$ such that such embeddings exist whenever the inequalities $C(n) R_1 \leq R_1 '$ and $C(n) R_1 \ldots R_n \leq R_1 ' \ldots R_n '$ hold.
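The two necessary conditions just mentioned are worth spelling out (a standard argument, recalled here only for convenience). Since $\Gamma$ contains the round ball $B_{R_1}$ and $\Gamma '$ is contained in the cylinder $Z_{R_1 '}$ over its first factor, Gromov's theorem forces $R_1 \leq R_1 '$; and since symplectic embeddings preserve volume,
\begin{equation*}
\pi^n R_1^2 \cdots R_n^2 = Vol_{2n}(\Gamma) = Vol_{2n}(\varphi(\Gamma)) \leq Vol_{2n}(\Gamma ') = \pi^n (R_1 ')^2 \cdots (R_n ')^2,
\end{equation*}
which gives $R_1 \cdots R_n \leq R_1 ' \cdots R_n '$.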
In particular, Guth's result excludes any middle dimensional non-squeezing phenomenon in the case of polydisk embeddings.\\ In this paper we proceed in a different way: we keep the ball as the domain of our symplectic embeddings and, in order to search for middle dimensional non-squeezing phenomena, we follow the strategy pursued in \cite{AM13}.\\ First, as in \cite{EG91}, we introduce an alternative formulation of Gromov's theorem, which states that the two-dimensional shadow of the image of a ball of radius $R$ in $\mathbb{R}^{2n}$ under a symplectic diffeomorphism has area at least $\pi R^2$. More precisely, the claim is that every symplectic embedding $\varphi :B_R \hookrightarrow \mathbb{R}^{2n}$ satisfies the inequality
\begin{align}
area( P \varphi (B_R), \omega_{0|V}) \geq \pi R^2, \label{ineq1}
\end{align} where $P$ denotes the symplectic projector onto a symplectic plane $V$, i.e. the projector along the symplectic orthogonal complement of $V$.\\
This second formulation easily implies the classical one. Conversely, if we had $area( P \varphi (B_R),\omega_{0|V}) < \pi R^2$, then, by a theorem of Moser (see \cite{Mos65} or \cite{HZ94}), there would exist a smooth area preserving diffeomorphism\\ $\phi: P \varphi (B_R) \rightarrow B^2_r \cap V$ for some $r< R$, and then the symplectic embedding $(\phi \times id_{{V}^{\bot}}) \circ \varphi$, mapping $B_R$ into $Z_r$, would violate the classical formulation of Gromov's theorem.\\ The alternative formulation of Gromov's theorem has a natural generalization to higher dimensional shadows of a symplectic ball.\\ In other words, if $V$ is a $2k$-dimensional symplectic subspace of $\mathbb{R}^{2n}$ and $P$ is the symplectic projector onto $V$, we may ask whether it is true that \begin{align}
Vol_{2k} ( P \varphi(B_R), \omega_{0|V} ^k) \geq \pi ^k R^{2k}, \label{ineq3} \end{align} for every symplectic embedding $\varphi: B_R \hookrightarrow \mathbb{R}^{2n}$.\\ If $k=1$ or $k=n$ the inequality holds respectively by the non-squeezing theorem and by the volume preserving property of symplectic diffeomorphisms. So we are interested only in the middle dimensional case when $2\leq k \leq n-1$.\\
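Throughout, one may keep in mind the model case (an illustrative normalization, not needed in the proofs): if $V = \{x_{k+1}=y_{k+1}=\ldots=x_n=y_n=0\}$, then its symplectic orthogonal complement is $V^{\omega} = \{x_1=y_1=\ldots=x_k=y_k=0\}$, the splitting $\mathbb{R}^{2n} = V \oplus V^{\omega}$ holds precisely because $V$ is symplectic, and the symplectic projector is simply
\begin{equation*}
P(x_1,y_1,\ldots,x_n,y_n) = (x_1,y_1,\ldots,x_k,y_k,0,\ldots,0).
\end{equation*}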
If the symplectic embedding $\varphi$ is a linear map, an affirmative answer to the middle dimensional non-squeezing question has been given in \cite{AM13}. Nevertheless, in the same paper Abbondandolo and Matveyev show that if $P$ is the symplectic projector onto a $2k$-dimensional symplectic subspace with $2\leq k \leq n-1$, then for every $\epsilon >0$ there exists an open symplectic embedding $\varphi:B_1 \hookrightarrow \mathbb{R}^{2n}$ such that $Vol_{2k} ( P \varphi(B_1), \omega_{0|V} ^k) < \epsilon$. Since this counterexample deforms the unit ball very strongly, one may ask how far the ball can be deformed before the middle dimensional non-squeezing fails, and whether the middle dimensional non-squeezing holds locally. In \cite{AM13} the authors give two different formulations of the local question.\\ The first one asks whether, for a fixed symplectic embedding $\varphi:D \hookrightarrow \mathbb{R}^{2n}$ of a domain $D\subset \mathbb{R}^{2n}$, the inequality \begin{align} \label{eq7}
Vol_{2k} ( P \varphi(B_R(x)), \omega_{0|V} ^k) \geq \pi ^k R^{2k} \end{align}
holds for any $x \in D$ and for $R$ positive and small enough.\\ The second formulation is the following.\\ Let us fix a path of symplectic embeddings of the unit $2n$-dimensional ball \begin{align*} \varphi_t :B_1 \hookrightarrow \mathbb{R}^{2n} \ \ t \in [0,1], \end{align*} such that $\varphi_0$ is linear (i.e. it is the restriction to $B_1$ of a linear symplectomorphism).\\ We would like to know whether there exists a positive number $t_0 \leq 1$ such that \begin{align}
Vol_{2k} (P \varphi_t (B_1), \omega_{0|V} ^k) \geq \pi^k, \textrm{ \ for every \ } 0\leq t < t_0. \label{ineq5} \end{align} The second formulation implies the first one by taking the path of symplectic embeddings \begin{align} \varphi_t (y):= \left\{ \begin{array}{l} \dfrac{1}{t} \big( \varphi (x+t y) - \varphi (x) \big) \ \ \ \textrm{ if } t \in ]0,1],\\ D \varphi (x) [y] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \textrm{ if } t=0,
\end{array}\right. \end{align} in fact \begin{align*}
Vol_{2k}\Big(P \varphi_t (B_1(0)),\omega_{0|V} ^k\Big)=Vol_{2k}\Big(\frac{1}{t} P \varphi (B_t(x)),\omega_{0|V} ^k\Big)=\dfrac{Vol_{2k}(P \varphi (B_t(x)),\omega_{0|V} ^k)}{t^{2k}}, \end{align*} where the first equality uses the linearity of $P$ and the translation invariance of the volume. In this setting Abbondandolo, Bramham, Hryniewicz and Salom\~ao \cite{ABHS15} have recently proved that if the symplectic projection is onto a $4$-dimensional symplectic subspace $V$, then both these local non-squeezing results hold (in the first formulation the diffeomorphism is required to be $C^3$). In this paper we address the same question, but we do not impose any assumption on the dimension of $V$; instead we require an analyticity hypothesis. First we focus on the second local formulation of the middle dimensional non-squeezing and we prove its validity under the additional assumption that the path of embeddings $t \mapsto \varphi_t$ is analytic in $t$, i.e. for every $x \in \overline{B_1}$ the function $t\mapsto \varphi_t(x)$ is analytic. \begin{teo}[Analytic local non-squeezing] \label{Teor2} Let $[0,1] \ni t \mapsto \varphi_t$ be an analytic path of symplectic embeddings $\varphi_t:\overline{B_1} \hookrightarrow \mathbb{R}^{2n}$, such that $\varphi_0$ is linear. Then the middle dimensional non-squeezing inequality \begin{eqnarray*}
Vol_{2k} (P \varphi_t (B_1),\omega_{0|V} ^k) \geq \pi^k \end{eqnarray*} holds for $t$ small enough. \end{teo} To prove this theorem we need some ingredients.\\ In Section \ref{Sect1} we recall some facts about contact geometry together with some results about the minimal action of a Reeb orbit in a contact manifold and we introduce Zoll contact manifolds (also known as \textit{regular contact type manifolds}), namely manifolds with the property that all Reeb orbits are periodic with the same period.\\ In Section \ref{Sect2} we prove a weaker version of the main theorem in \cite{BP14}, which says that if a constant volume deformation of the unit ball does not start tangent to all orders to a deformation by convex domains with Zoll boundaries (i.e. the deformation is \textit{not formally trivial}), then the minimal action $A_{\min}$ on the ball is strictly larger than the one of its small deformations.\\ In Section \ref{Sect4} we will see that this result implies the validity of the non-squeezing inequality \eqref{ineq5} for not formally trivial deformations of the unit ball.\\
On the other hand, in the case of a \textit{formally trivial deformation} we will have to proceed in a different way. Namely, using some results from Section \ref{Sect3}, we will prove that if the deformation of the ball is analytic and formally trivial then the function $t \mapsto Vol (P \varphi_t(B_1),\omega_{0|V} ^k)$ is analytic and has vanishing derivatives of all orders at $t=0$. This will enable us to deduce that the function above is constant and consequently that equality in \eqref{ineq5} holds.\\ Using Theorem \ref{Teor2} we will deduce the local non-squeezing formulation for any fixed analytic symplectic embedding. Moreover, in this latter setting we will prove that, on compact subsets of $\mathbb{R}^{2n}$, the threshold radius below which the estimate \eqref{eq7} holds is bounded away from $0$. More precisely we shall prove the following result. \begin{teo} \label{Teor5}
Let $\varphi: D \hookrightarrow \mathbb{R}^{2n}$ be an analytic symplectic embedding of a domain $D \subset \mathbb{R}^{2n}$. Then there exists a function $r_0:D\rightarrow (0,+ \infty)$ such that the inequality $Vol_{2k}(P \varphi (B_r(x)),\omega ^k _{0|V})\geq \pi^{k} r^{2k}$ holds, for every $x \in D$ and for every $r < r_0(x)$. Moreover $r_0$ is bounded away from $0$ on compact subsets $K \subset D$. \end{teo}
{\textbf{Acknowledgments.}} I would like to warmly thank Alberto Abbondandolo for all the precious help and advice he gave me concerning this paper. \section{Zoll contact manifolds and minimal action} \label{Sect1} Let us start by recalling some basic facts from contact geometry.\\ A $1$-form $\alpha$ on the $(2n-1)$-dimensional manifold $M$ is a \textit{contact form} if $\alpha \wedge (d \alpha)^{n-1}$ is a volume form. In this case $(M,\alpha)$ is called a \textit{contact manifold} and the volume of $M$ with respect to the volume form induced by $\alpha$ is denoted by $Vol(M,\alpha)$.\\ Moreover the contact form $\alpha$ induces a vector field $R_\alpha$ on $M$, called the Reeb vector field of $\alpha$, which is determined by the requirements \begin{eqnarray*} i_{R_{\alpha}} d \alpha =0 \quad \textrm{and} \quad \alpha (R_\alpha)=1. \end{eqnarray*}
The \textit{action} $A (\gamma)$ of a periodic Reeb orbit $\gamma$ on a contact manifold $(M,\alpha)$ is defined as \begin{align*} A (\gamma) :=\int_{\gamma} \alpha \in \mathbb{R} \end{align*} and coincides with the period of $\gamma$. \begin{defin} Given any contact manifold $(M,\alpha)$ with at least one closed Reeb orbit, we define \begin{align*}
A_{\min}(M,\alpha):= \min \{ A( \gamma) \ | \ \gamma \ \textrm{is a closed Reeb orbit of} \ (M,\alpha) \}. \end{align*} \end{defin} Both the volume and the function $A_{\min}$ are invariant under strict contactomorphisms. Indeed we have the following simple result. \begin{prop} Let $(M,\alpha)$ and $(N,\beta)$ be two $(2n-1)$-dimensional contact manifolds and $\phi:M \rightarrow N$ a strict contactomorphism (i.e. $\phi^* \beta = \alpha$). Then \begin{enumerate} \item to each closed Reeb orbit $\gamma$ of $(M,\alpha)$ corresponds a closed Reeb orbit $\phi \circ \gamma$ of $(N,\beta)$, \item $A(\gamma)= A(\phi \circ \gamma)$, \item $A_{\min}(M,\alpha)=A_{\min}(N,\beta)$, \item $Vol(M,\alpha)= Vol(N,\beta)$. \end{enumerate} \end{prop} Another straightforward fact we will use is the following. \begin{oss} \label{oss1}
Let $f:S^{2n-1}\rightarrow (0,+\infty)$ be a $C^1$ function and define the set $M_f:=\{ f(x) x \ | \ x\in S^{2n-1} \} \subset \mathbb{R}^{2n}$. Then $(S^{2n-1}, f^2 \lambda_{0|S^{2n-1}})$ and $(M_f,\lambda_{0|M_f})$ are strictly contactomorphic. \end{oss}
Indeed the radial projection $\theta:(S^{2n-1}, f^2 \lambda_{0|S^{2n-1}}) \rightarrow (M_f,\lambda_{0|M_f})$ defined by $\theta(x)=f(x)x$ is a strict contactomorphism: \begin{align*}
& \theta^* (\lambda_{0|M_f})(x)[v]=\lambda_{0|M_f} (\theta(x)) [ d \theta(x)[v]]=\lambda_{0} (f(x)x)\big[f(x)v+ df(x)[v]\,x\big]=\\
&= f(x)\lambda_{0} (x)[f(x)v]+ f(x)\,df(x)[v]\,\lambda_{0} (x)[x]={f(x)}^2 \lambda_{0|S^{2n-1}}(x)[v]. \end{align*} Now we introduce a special type of contact form. \begin{defin} A contact form on a manifold $M$ is \textit{Zoll} (or \textit{regular}) if its Reeb flow is periodic and all the Reeb orbits have the same period, and hence the same action. \end{defin}
For example the contact form $\lambda _{0|S^{2n-1}}$, induced on the unit sphere $S^{2n-1}$ in $\mathbb{R}^{2n}$ by the standard Liouville 1-form $\lambda_0 := \sum_{i=1}^n x_i d y_i$, is Zoll.\\ Later on we will consider two different kinds of deformations of a contact form: \textit{formally trivial} and \textit{not formally trivial}. \begin{defin} A smooth deformation $\alpha_t$, $t \in [0,t_0)$, of a contact form $\alpha_0$ is \textit{trivial} if there exists a smooth real valued function $r(t)$ and an isotopy $\phi_t$ such that $\alpha_t= r (t) \phi_t ^* \alpha_0$. A deformation $\alpha_t$ is \textit{formally trivial} if for every non negative $m$ there exists a trivial deformation $\alpha_t ^{(m)}$ that has order of contact $m$ with $\alpha_t$ at $t=0$. Otherwise the deformation is \textit{not formally trivial}. \end{defin} If instead of deformations of contact forms we choose to consider deformations of convex domains, we give the following definition. \begin{defin} Consider a smooth convex domain $C_0 \subset \mathbb{R}^{2n}$ with the standard Liouville 1-form ${\lambda_0}_{\vert \partial C_0}$.
A smooth deformation $C_t$ of $C_0$ is \textit{trivial} (resp. \textit{formally trivial}, resp. \textit{not formally trivial}) if $\theta_t ^* ( \lambda_{0|\partial C_t})$ is trivial (resp. formally trivial, resp. not formally trivial), where $\theta_t:S^{2n-1}\rightarrow \partial C_t$ is the radial projection. \end{defin} It is a result by Weinstein that trivial deformations of a Zoll contact form can be characterized in the following way. \begin{prop}{\upshape{\cite{Wei74}}} Let $\alpha_t$, $t\in [0,t_0)$, be a smooth deformation of a Zoll contact form $\alpha_0$. The deformation is trivial if and only if $\alpha_t$ is a Zoll contact form for every $t\in [0,t_0)$. \end{prop}
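A basic example (ours, included only to fix ideas): the dilation $C_t := (1+t) B_1$ is a trivial deformation of the unit ball. Indeed, the radial projection $\theta_t(x) = (1+t)x$ satisfies
\begin{equation*}
\theta_t ^* \big( \lambda_{0 | \partial C_t} \big) = (1+t)^2 \, \lambda_{0 | S^{2n-1}},
\end{equation*}
which is of the form $r(t) \, \phi_t ^* \lambda_{0|S^{2n-1}}$ with $r(t) = (1+t)^2$ and $\phi_t = id$.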
In our case it will turn out that every contact deformation can be reduced to a normal form. \begin{defin} Let $\alpha_t = \rho_t \alpha_0$ be a smooth deformation of the Zoll contact form $\alpha_0$, where $\rho_t$ is a smooth family of positive functions on $M$, and let $m$ be a non negative integer. The deformation $\alpha_t$ is in \textit{normal form up to order $m$} if \begin{align*} \alpha_t = (1+ t \mu ^{(1)} + \ldots + t^m \mu ^{(m)} + t^{m+1} r_t) \alpha_0 \end{align*} where, for $i=1,\ldots,m$, the functions $\mu ^{(i)}$ are integrals of motion for the Reeb flow of $\alpha_0$ (i.e. they are constant on the orbit of that flow) and $r_t$ is a smooth function on $M$ depending smoothly on the parameter $t$. \end{defin}
Using a technique known as the \textit{method of Dragt and Finn} (see \cite{DF76} and \cite{Fin86}), which consists of constructing the required isotopy as a composition of isotopies $\phi_t ^{(m)}$ which are flows of some particular vector fields, \'{A}lvarez Paiva and Balacheff proved the following result. \begin{teo}{\upshape{\cite{BP14}}} \label{Teor1} Let $\alpha_t = \rho_t \alpha_0$ be a smooth deformation of a Zoll contact form $\alpha_0$, where $\rho_t$ is a smooth family of real valued functions on $M$ with $\rho_0=1$. Given a non negative integer $m$, there exists a contact isotopy $\phi_t ^{(m)}$ such that ${\phi_t ^{(m)}}^*\alpha _t$ is in normal form up to order $m$. \end{teo} \proof (Idea) The theorem is proved by induction. The case $m=0$ follows from the Taylor expansion $\rho_t = 1+t r_t$ around $t=0$. Supposing that $\alpha_t$ is already in normal form up to order $m-1$, the point is to find a function $h_m$ such that the flow $\phi_{t,{h_m}}$ of the Hamiltonian vector field $X_{h_m}$ determines an isotopy $t\mapsto \phi_{t,{h_m}}$ for which $\phi_{t,{h_m}}^* \alpha_t$ is in normal form up to order $m$. \endproof
This proposition will be crucial in the next section: \begin{prop}{\upshape{\cite{BP14}}} \label{Prop3} Let $(M,\alpha)$ be a Zoll contact manifold and let $\rho: M \rightarrow \mathbb{R}^+$ be a smooth positive function invariant under the Reeb flow of $\alpha$. Then $A_{\min} (M,\rho \alpha ) \leq A_{\min} (M,\alpha) \min \rho$. \end{prop} \proof This essentially follows from the fact that if $u \in M$ is a minimum point for $\rho$, then the Reeb orbit $\gamma$ of $(M,\alpha)$ passing through $u$ is also a Reeb orbit for $(M,\rho \alpha)$. Once one checks this by a straightforward computation, we have that the action of $\gamma$ in $(M,\rho \alpha)$ is \begin{align*} \int_{\gamma} \rho \alpha = \min \rho \int_{\gamma} \alpha = A_{\min} (M,\alpha) \min \rho \end{align*} and the proof is complete. \endproof We shall make use of the following classical result. \begin{teo}{\upshape{\cite{Rab78}} \upshape{\cite{Wei78}}} \label{Teor4}
Let $C$ be a smooth convex bounded domain of $\mathbb{R}^{2n}$. The contact manifold $(\partial C,{\lambda_0}_{|\partial C})$ admits at least one periodic Reeb orbit. \end{teo} An important fact is that the function $A_{\min}$ coincides with some well known symplectic capacities, such as the Hofer-Zehnder and Ekeland-Hofer capacities. \begin{teo}{\upshape{\cite{EH89}} \upshape{\cite{Vit89}}} \label{Teor7} Let $C$ be a smooth convex bounded domain of $\mathbb{R}^{2n}$. There exists a distinguished closed characteristic $\overline{\gamma} \subset \partial C$ such that $A_{\min}(\partial C,\lambda_0) = A( \overline{\gamma})$. Moreover, restricting to smooth convex domains of $\mathbb{R}^{2n}$, the function $A_{\min}$ is a symplectic capacity, which we will denote by $c$. \end{teo}
Due to the two theorems above, the function $A_{\min}(\partial C, {\lambda_0}_{| \partial C})$ is well defined.\\ Besides the usual properties of a capacity, by carefully choosing one among the equivalent definitions of $c$, the following result can be proved. \begin{prop}{\upshape{\cite{AM15}}} \label{Prop7}
Let $C \subset \mathbb{R}^{2n}$ be a smooth convex bounded domain and $P$ the symplectic projector onto a symplectic linear subspace $V \subset \mathbb{R}^{2n}$. Then $c (PC,{\omega_0}_{|V}) \geq c (C,\omega_0)$. \end{prop}
\section{Deformations of $S^{2n-1}$} \label{Sect2} In this section we would like to get some information on how $A_{\min}$ behaves under a contact deformation of the unit sphere. The results we are going to state hold for an arbitrary Zoll contact manifold, but in this paper we are interested only in deformations of the standard contact form on the sphere $S^{2n-1}$, so we can simplify the proof of the Lipschitz continuity of $A_{\min}$, which relies on a result from \cite{Gin87}. \begin{lem} \label{Lem3} Fix two real numbers $0<\delta<\Delta<\infty$ and consider the family $\mathcal{C}_{\delta, \Delta}$ of the convex domains $C \subset \mathbb{R}^{2n}$ which satisfy the $(\delta,\Delta)$-pinching condition $B_{\delta} \subset C \subset B_{\Delta}$. Every symplectic capacity $c:\mathcal{C}_{\delta, \Delta} \rightarrow \mathbb{R}$ is Lipschitz continuous with respect to the Hausdorff distance. \end{lem} \proof Let us take two elements $C,D \in \mathcal{C}_{\delta, \Delta}$ and let $d$ be their Hausdorff distance.\\ By assumption \begin{eqnarray*} \delta B=B_{\delta} \subset C,D \subset B_{\Delta}=\Delta B, \end{eqnarray*} hence \begin{eqnarray*} C \subset D + d B \subset D + \dfrac{d}{\delta} D= \left(1 + \dfrac{d}{\delta}\right) D, \end{eqnarray*} and by the monotonicity and conformality properties of symplectic capacities \begin{eqnarray*} c(C) \leq \left(1+ \dfrac{d}{\delta}\right)^2 c(D) = \left(1 + 2\dfrac{d}{\delta}+\dfrac{d ^2}{\delta ^2} \right) c(D), \end{eqnarray*} therefore \begin{eqnarray*} c(C)-c(D) \leq d \left(\dfrac{2}{\delta} + \dfrac{d}{\delta ^2}\right) c(D). \end{eqnarray*} Because of the pinching condition we have $c(D)\leq c(\Delta B)=\Delta^2 \pi$ and $d\leq \Delta$, thus there exists a fixed real number $M>0$ such that \begin{eqnarray*} c(C)-c(D) \leq d M. \end{eqnarray*}
If $c(C) \geq c(D)$ we have $c(C)-c(D)=|c(C)-c(D)|$ and the claim follows, otherwise if $c(C) < c(D)$ we repeat the same proof switching the role of sets $C$ and $D$. \endproof
\begin{lem} \label{Lem2} There exists a small open neighbourhood $U$ of zero in the Banach space $C^2 (S^{2n-1})$ such that the map $f \mapsto A_{\min} (S^{2n-1}, (1+ f) {\lambda_0}_{\vert S^{2n-1}})$ is well defined and Lipschitz continuous on $U$ with respect to the $C^0$-topology. \end{lem}
\proof Let us set $M_{\sqrt{1+f}}:=\{ \sqrt{1+f(x)} x \ | \ x\in S^{2n-1} \} \subset \mathbb{R}^{2n}$.\\ The map $A_{\min} (S^{2n-1}, (1+ f) \lambda_{0\vert S^{2n-1}})$ is well defined because, as observed in Remark \ref{oss1}, looking for a periodic orbit of $(S^{2n-1}, (1+ f) \lambda_{0\vert S^{2n-1}})$ is the same as looking for one of $(M_{\sqrt{1+f}}, \lambda_{0\vert M_{\sqrt{1+f}}})$, and such an orbit exists by Theorem \ref{Teor4}, because a $C^2$-small deformation of $S^{2n-1}$ still bounds a convex domain of $\mathbb{R}^{2n}$.\\ The map $f \mapsto A_{\min}(S^{2n-1}, (1+ f) \lambda_{0\vert S^{2n-1}})$ is the composition of the maps\\ $f \mapsto M_{\sqrt{1+f}}$ and $M_{\sqrt{1+f}} \mapsto A_{\min} (M_{\sqrt{1+f}},\lambda_{0\vert M_{\sqrt{1+f}}})$, the first of which is clearly Lipschitz from the $C^0$-distance to the Hausdorff distance. So in order to prove the Lipschitz regularity result we need to show that the minimal action (which is a capacity) of a periodic orbit on a convex domain whose boundary is close to $S^{2n-1}$ is Lipschitz continuous with respect to the Hausdorff distance. But this follows from Lemma \ref{Lem3}. \endproof
The next theorem is a weaker version, sufficient for our purposes, of the one in \cite{BP14}, which holds for every Zoll contact manifold. The proof is the same, except that in our setting we do not need a stronger result about Lipschitz continuity that generalizes Lemma \ref{Lem2}. \begin{teo} \label{Teor3} Consider a domain $U \subset \mathbb{R}^{2n}$ and a smooth simple curve $[0,1] \ni t \mapsto y(t) \in U$ starting at $y(0)=y_0$. Let $(S^{2n-1},\mu_{t,y(t)})$, with $t \in [0,t_0)$, be a smooth constant volume deformation of the Zoll contact manifold $(S^{2n-1},\mu_{0,y_0}:=\lambda_{0\vert S^{2n-1}})$. If it is not formally trivial, the function $t\mapsto A_{\min} (S^{2n-1},\mu_{t,y(t)})$ attains a strict local maximum at $t=0$. \end{teo} \proof To simplify the notation we will denote $\mu_{t,y(t)}$ by $\mu_t$ and, since $y(t)$ depends smoothly on $t$, we will consider that everything depends just on $t$. The proof is carried out in four steps. \begin{enumerate}[1)] \item First we consider the form $(1 + t \nu_t + t^m r_t) \mu_0$, where $m>1$ and both $\nu_t$ and $r_t$ are smooth functions on $S^{2n-1}$ depending smoothly on $t$. By Lemma \ref{Lem2}, the function $f\mapsto A_{\min} (S^{2n-1},(1+ f)\mu_0)$ is Lipschitz if $f$ is in a small enough $C^2$-neighbourhood of zero in $C^{\infty} (S^{2n-1})$; hence, as $t\rightarrow0$, \begin{align*} A_{\min} (S^{2n-1},(1 + t \nu_t + t^m r_t) \mu_0)=A_{\min} (S^{2n-1},(1+ t \nu_t)\mu_0) + O (t^m). \end{align*} \item Let $(1+t^m \rho + t^{m+1} r_t) \mu_0$ be a deformation of $\mu_0$ and $\overline{\rho}$ the function obtained by averaging $\rho$ along the orbits of the Reeb vector field of $\mu_0$ \begin{align*} \overline{\rho}(x):=\dfrac{1}{T}\int_0 ^T \rho (\varphi_t (x))dt, \end{align*} where $T$ is the common period of the periodic orbits of the Reeb flow $\varphi_t$.
According to the induction step of the proof of Theorem \ref{Teor1}, there exists a contact isotopy $\phi_t ^{(m)} :S^{2n-1} \rightarrow S^{2n-1}$ such that \begin{align*} \phi_t ^{(m)*}(1+t^m \rho + t^{m+1} r_t) \mu_0=(1+t^m \overline{\rho} + t^{m+1} r' _t) \mu_0, \end{align*} where $r'_t$ is a smooth function depending smoothly on $t$. \item If $(1+t^m \rho + t^{m+1} r_t) \mu_0$ is a smooth constant volume deformation and $\overline{\rho}$ is not identically zero, then $A_{\min}(S^{2n-1},(1+t^m \rho + t^{m+1} r_t) \mu_0) < A_{\min}(S^{2n-1},\mu_0)$ for $t\neq 0$ small enough.\\ To prove this claim, first note that by 2) and 1) follows \begin{align*}
A_{\min}(S^{2n-1},(1+t^m \rho + t^{m+1} r_t) \mu_0)=A_{\min}(S^{2n-1},(1+t^m \overline{\rho} + t^{m+1} r'_t) \mu_0)= \end{align*} \begin{align} \label{eq3}
= A_{\min}(S^{2n-1},(1+t^m \overline{\rho}) \mu_0) + O({t}^{m+1}). \end{align}
Since $\overline{\rho}$ is an integral of motion for the Reeb flow of $\mu_0$ and $m$ is a positive integer, we have that $(1 + t^m \overline{\rho})$ is a positive (for small $t$) integral of motion of $\mu_0$, thus Proposition \ref{Prop3} implies that \begin{align} \label{eq4} A_{\min}(S^{2n-1},(1 + t^m \overline{\rho}) \mu_0)\leq (1+t^m \min \overline{\rho}) A_{\min}(S^{2n-1},\mu_0). \end{align} The deformation $(1+t^m \overline{\rho} + t^{m+1} r'_t) \mu_0$ is constant volume because contact isotopies preserve the volume. By the properties of the exterior derivative \begin{align*} & Vol (S^{2n-1}, \mu_0) =Vol (S^{2n-1}, (1+t^m \overline{\rho} + t^{m+1} r'_t) \mu_0) =\\ &= \int_{S^{2n-1}} {(1+t^m \overline{\rho} + t^{m+1} r'_t)}^{n} \mu_0 \wedge {d\mu_0}^{n-1}=\\ & = Vol (S^{2n-1}, \mu_0) + n t^m \int_{S^{2n-1}} \overline{\rho}\, \mu_0 \wedge {d\mu_0}^{n-1} + O({t}^{m+1}), \end{align*} and thus the integral of $\overline{\rho}$ over $S^{2n-1}$ is zero. Therefore, if in addition $t \neq 0$ and $\overline{\rho}$ is not identically zero, the extrema of $\overline{\rho}$ must have opposite signs and hence its minimum must be negative. Putting together this fact with \eqref{eq3} and \eqref{eq4}, we deduce that the function $t\mapsto A_{\min}(S^{2n-1},(1+t^m \rho + t^{m+1} r_t) \mu_0)$ attains a strict maximum at $t=0$, namely \begin{align*} A_{\min}(S^{2n-1},(1+t^m \rho + t^{m+1} r_t) \mu_0)<A_{\min}(S^{2n-1},\mu_0), \ \ \textrm{for} \ \ t \neq 0 \ \ \textrm{small enough}. \end{align*}
\item We are finally ready to prove the theorem. Let us consider a constant volume deformation $\mu_t$ of the Zoll contact form $\mu_0$. By Gray's stability theorem we can assume that the contact deformation has the form\\ $\mu_t= \rho_t \mu_0$. Expanding $\rho_t$ around $t=0$, we obtain \begin{align*} \mu_t = (1+ t \rho_{(1)} + t^2 r_t) \mu_0, \end{align*}
where $\rho_{(1)} = {\dfrac{d \rho_t}{dt}}|_{t=0}$ and $r_t$ is a smooth function depending on $t$.\\ By 3), if the average $\overline{\rho_{(1)}}$ is not identically zero, then $t\mapsto A_{\min} (S^{2n-1},\mu_t)$ attains a strict maximum at $t=0$.\\ Otherwise, if $\overline{\rho_{(1)}}$ is identically zero, by 2) there exists a contact isotopy $\phi_t^{(2)}$ such that $\phi_t ^{(2)*} \mu_t = (1+t^2 r'_t) \mu_0$. Since $\phi_t^{(2)}$ is a contact isotopy, then $(1+t^2 r'_t) \mu_0$ is also a constant volume smooth deformation of $\mu_0$ and $A_{\min} (S^{2n-1},(1+t^2 r'_t) \mu_0) =A_{\min} (S^{2n-1},\mu_t)$, so we can rewrite $\mu_t = (1+ t^2 r'_t) \mu_0$ and start anew.\\ If we repeat this process an arbitrary number of times, we see that either $t\mapsto A_{\min} (S^{2n-1},\mu_t)$ attains a strict maximum at $t=0$ or that for any positive integer $m$, there exist a contact isotopy $\phi_t ^{(m)}$ and a smooth function $\nu_t ^{(m)}$ on $S^{2n-1}$ depending smoothly on the parameter $t$, such that $\phi_t ^{(m)*} \mu_t = (1 + t^{m} \nu_t ^{(m)} ) \mu_0$. In other words, either $t\mapsto A_{\min} (S^{2n-1},\mu_t)$ attains a strict maximum at $t=0$ or the deformation $\mu_t$ is formally trivial. \end{enumerate} \endproof
\section{Analyticity of the volume of a projection} \label{Sect3} Our next goal is to prove that the fixed domain formulation of the local middle dimensional non-squeezing theorem holds if we consider an analytic path of symplectic embeddings.\\ To do this we need a result whose proof relies on calculations made in order to prove Theorem 3 of \cite{AM13}. \begin{prop} \label{Prop1} Let $U\ni y_0$ be a domain of $\mathbb{R}^{n}$ and $[0,1] \times U \ni (t,y) \mapsto \varphi_{t,y}$ an analytic map such that $\varphi_{t,y}$ are embeddings of the unit $n$-dimensional ball $\varphi_{t,y}:\overline{B_1} \hookrightarrow \mathbb{R}^{n}$, with $\varphi_{0,y_0}$ linear. Moreover, let $P:\mathbb{R}^n \rightarrow V$ be the orthogonal projector onto a $k$-dimensional linear subspace $V \subset \mathbb{R}^n$ and $\rho$ a constant $k$-volume form on $V$. Then the function $(t,y) \mapsto Vol_{k} (P \varphi_{t,y} (B_1),\rho)$ is analytic in a small enough neighbourhood of $(0,y_0)$. \end{prop} In the proof we will use the following lemma. \begin{lem} Assume the hypotheses of the proposition above. The set $S_{t,y} \subset \partial B_1$ defined as \begin{align}
S_{t,y} := \{x \in \partial B_1 | P_{| T_{\varphi_{t,y} (x)} \varphi_{t,y} (\partial B_1)} \textrm{ is not surjective} \} \label{an1} \end{align} has the property that \begin{align} \partial P \varphi_{t,y} (B_1)= P \varphi_{t,y} (S_{t,y}) \label{an2} \end{align} and can be written as \begin{align}
S_{t,y}=\{x \in \partial B_1 | F_{t,y}(x)=0\}, \label{an3} \end{align} where $F_{t,y}(x):=(I-P) (D \varphi_{t,y} (x)^{*})^{-1} [x]$. If $(t,y)$ is in a small enough neighbourhood of $(0,y_0)$, $S_{t,y}$ is a submanifold of $\partial B_1$ such that $S_{t,y}= \phi_{t,y}(S^{k-1})$, where $\phi_{t,y}$ is an analytic path of diffeomorphisms. \end{lem}
\proof First observe that ~\eqref{an2} is an immediate consequence of the definition of $S_{t,y}$. The function $P_{| T_{\varphi_{t,y} (x)} \varphi_{t,y} (\partial B_1)}$ is not surjective if and only if $P D \varphi_{t,y} (x)_{| T_x \partial B_1} :T_x \partial B_1 \rightarrow T_{P \varphi_{t,y} (x)} V \cong \mathbb{R}^{k}$ is not surjective.\\ This is true iff \begin{align*} &\exists u \in \mathbb{R}^{k}, u \neq 0, \textrm{ such that } \langle P D \varphi_{t,y}(x) [\xi], u\rangle =0 \end{align*} $\forall \xi \in T_x \partial B_1$, i.e. $\forall \xi$ such that $\langle \xi , x\rangle =0$.\\ Since $u = P u$ and $P=P^*$ \begin{align*} \langle P D \varphi_{t,y}(x) [\xi], u\rangle=\langle \xi, {(P D \varphi_{t,y}(x))}^* [u]\rangle=\langle \xi, {D \varphi_{t,y}(x)}^* [u]\rangle \end{align*} and thus the non-surjectivity holds iff \begin{align*} D \varphi_{t,y} (x) ^* [u] = \lambda x, \textrm{ where } \lambda \neq 0 \textrm{ is a real number}. \end{align*} Equivalently \begin{align*} (D \varphi_{t,y} (x)^{*})^{-1} [x] \in \mathbb{R}^{k} \end{align*} which is the same as \begin{align*} F_{t,y}(x):=(I-P) (D \varphi_{t,y} (x)^{*})^{-1} [x] =0 \in \mathbb{R}^{n-k}. \end{align*} Now, consider the analytic function $G(t,y,x):=(I-P)(D \varphi_{t,y} (x)^{*})^{-1} [x]$. We have that $\varphi_{0,y_0}= D \varphi_{0,y_0}$ because $\varphi_{0,y_0}$ is linear, hence $G(0,y_0,z)=0$ if $z \in S_{0,y_0}$. Applying the analytic implicit function theorem we deduce that, for $(t,y)$ close to $(0,y_0)$, $S_{t,y}$ is a submanifold of $\partial B_1$ and $S_{t,y}=\phi_{t,y} ' (S_{0,y_0})$ where $\phi_{t,y} '$ is an analytic path of diffeomorphisms. There is a diffeomorphism between $S^{k-1}$ and $S_{0,y_0}$, induced by the linear map $D \varphi_{0,y_0} ^{*}$ followed by the radial projection onto $\partial B_1$; therefore by composition with $\phi_{t,y} '$ we get an analytic path of diffeomorphisms $\phi_{t,y}$ such that $\phi_{t,y}(S^{k-1})= S_{t,y}$. \endproof Now we are ready to prove Proposition \ref{Prop1}.
\proof Take a primitive $\alpha \in \Omega^{k-1} (V)$ of the volume form $\rho \in \Omega^{k} (V)$, i.e. $d \alpha=\rho$.\\ As observed in the previous lemma $\partial P \varphi_{t,y} (B_1)= P \varphi_{t,y} (S_{t,y}) $ and applying Stokes' theorem we get \begin{align*} Vol_k (P \varphi_{t,y} (B_1),\rho) = \int_{P \varphi_{t,y} (B_1)} d \alpha= \int_{\partial P \varphi_{t,y} (B_1)} \alpha= \end{align*} \begin{align*} = \int_{P \varphi_{t,y} (S_{t,y})} \alpha=\int_{S_{t,y}} (P \varphi_{t,y})^* \alpha = \int_{S^{k-1}} (P \varphi_{t,y} \phi_{t,y})^* \alpha, \end{align*} where $\phi_{t,y}: S^{k-1} \rightarrow S_{t,y}$ is the diffeomorphism introduced in the proof of the lemma above. For $(t,y)$ close to $(0,y_0)$, the function $(t,y) \mapsto P \varphi_{t,y} \phi_{t,y}$ is analytic and this implies the analyticity of $(t,y) \mapsto \int_{S^{k-1}} (P \varphi_{t,y} \phi_{t,y})^* \alpha$.\\ In fact, we can write $\int_{S^{k-1}} (P \varphi_{t,y} \phi_{t,y})^* \alpha=\int_{S^{k-1}} a_{t,y}(x) \nu$ where $a_{t,y}$ is analytic. Differentiating under the integral sign, from the Taylor expansion of $a_{t,y}$ we get a local series expansion of the function $(t,y) \mapsto \int_{S^{k-1}} (P \varphi_{t,y} \phi_{t,y})^* \alpha= Vol_{k} (P \varphi_{t,y} (B_1),\rho)$, which is therefore analytic. \endproof \section{Local non-squeezing} In the following $B_1$ indicates the unit ball in $\mathbb{R}^{2n}$ and $P:\mathbb{R}^{2n} \rightarrow V$ the symplectic projection onto a $2k$-dimensional symplectic linear subspace $V \subset \mathbb{R}^{2n}$. First, we are interested in proving the local non-squeezing formulation for a path of symplectic embeddings starting from a linear one, and to do so we will use the middle dimensional linear non-squeezing result. \label{Sect4} \begin{teo}{\upshape{\cite{AM13}, \cite{AM15}}} \label{teor6} Let $P$ be the symplectic projector onto a $2k$-dimensional symplectic linear subspace $V \subset \mathbb{R}^{2n}$.
Then for every linear symplectic isomorphism $L: \mathbb{R}^{2n} \rightarrow \mathbb{R}^{2n}$ there holds \begin{align*}
Vol_{2k} ( P L (B_1), \omega^k_{0|V}) \geq \pi^k. \end{align*} The equality holds if and only if the linear subspace $L^{-1} V$ is $J$-invariant, where $J$ is the standard complex structure on $\mathbb{R}^{2n}$. \end{teo} We complete the above result by the following: \begin{add} \label{add1}
The equality holds if and only if $(P L(B_1),\omega_{0|V})$ is symplectomorphic to $(B_1 \cap L^{-1} V,\omega_{0|L^{-1} V} )$. \end{add} The following result is useful to prove Theorem \ref{teor6} and the addendum as well. \begin{lem}\upshape{{\cite{Fed69} (Section 1.8.1)}}\\ \label{Lem1} Let $1\leq k \leq n$, then \begin{eqnarray*}
| \omega^k[u_1,\ldots,u_{2k}]| \leq k! \, |u_1 \wedge \ldots \wedge u_{2k}| \ \ \forall u_1,\ldots , u_{2k} \in \mathbb{R}^{2n}. \end{eqnarray*} \end{lem}
\proof (Addendum)
If a symplectomorphism exists, by Lemma \ref{Lem1} we have $Vol_{2k} (P L(B_1),\omega^k _{0|V}) = Vol_{2k}(B_1 \cap L^{-1} V,\omega_{0|L^{-1} V} ^k ) \leq \pi^k$. But at the same time Theorem \ref{teor6} yields $Vol_{2k} (P L(B_1),\omega^k _{0|V}) \geq \pi^k$, hence the equality holds. On the other hand $Vol_{2k} ( P L (B_1), \omega^k_{0|V}) = \pi^k$ iff $L^{-1} V$ is $J$-invariant; and if the claim that $PL(B_1 \cap L^{-1} V)=PL(B_1)$ is true, then $(B_1 \cap L^{-1} V,\omega_{0|L^{-1} V} )$ is symplectomorphic to $(P L(B_1),\omega_{0|V})=(PL(B_1\cap L^{-1} V),\omega_{0|V})$ via the linear symplectic isomorphism $L:L^{-1} V \rightarrow V$. To prove the claim we reduce it to the easier case in which $P$ is orthogonal. First we take an $\omega$-compatible inner product $(\cdot,\cdot) '$ on $\mathbb{R}^{2n}$ such that $P$ is orthogonal and we denote by $B_1 '$ and $J '$ the corresponding unit ball and complex structure. In particular $V$ is $J'$-invariant. Let $\psi :(V ,\omega, J ') \rightarrow (V ,\omega, J)$ be a complex and linear isomorphism. It follows that $\psi$ is an isometry from $(V, (\cdot,\cdot)' )$ to $(V, (\cdot,\cdot) )$, hence $\psi(B_1 ')= B_1$. The image of the unit ball under a linear surjection $M$ is given by \begin{align*} M(B_1)=M( B_1 \cap \textrm{ran}\, M^*), \end{align*} where $\textrm{ran}\, M^{*}$ denotes the range of $M^{*}$. If we take $N=L\psi$, $M=PN$ and we denote by $*'$ the adjoint of a matrix with respect to $(\cdot,\cdot)'$, we get \begin{align*} &PL(B_1)=PL \psi (B_1 ')= PN(B_1 ') =PN(B_1 ' \cap \textrm{ran}\, (PN)^{*'})=\\ &=PN(B_1 ' \cap \textrm{ran}\, (N^{*'}P^{*'}))=PN(B_1 ' \cap \textrm{ran}\, (N^{*'}P))= PN(B_1 ' \cap N^{*'}V).
\end{align*} The identity $\psi J' = J \psi$ implies $ J' = \psi ^{-1} J \psi$ and the fact that $L^{-1}V$ is\\ $J$-invariant is equivalent to $J\psi N^{-1}V=\psi N^{-1}V$, hence \begin{align*} &N^{*'} V=N^{*'} J' V=N^{*'} J'N N^{-1} V=J' N^{-1} V=\\ &= \psi^{-1} J \psi N^{-1} V=\psi^{-1} \psi N^{-1}V=N^{-1}V, \end{align*} thus we obtain \begin{align*} &PL(B_1)=PN(B_1 ' \cap N^{*'}V)=PN(B_1 ' \cap N^{-1}V)=\\ &=PL\psi (B_1 ' \cap \psi^{-1} L^{-1} V)=PL(B_1 \cap L^{-1} V), \end{align*} and the claim is proved. \endproof
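The linear algebra identity $M(B_1)=M( B_1 \cap \textrm{ran}\, M^*)$ used in the proof above, where $\textrm{ran}\, M^{*}$ denotes the range of the adjoint, holds for the unit ball of any inner product and can be checked directly: given $x \in B_1$, decompose $x=x_0+x_1$ with $x_0 \in \ker M$ and $x_1 \in (\ker M)^{\perp} = \textrm{ran}\, M^{*}$; then
\begin{align*}
M x = M x_1 \qquad \textrm{and} \qquad ||x_1||^2 \leq ||x_0||^2 + ||x_1||^2 = ||x||^2 \leq 1,
\end{align*}
so that $x_1 \in B_1 \cap \textrm{ran}\, M^{*}$ and $M(B_1) \subseteq M(B_1 \cap \textrm{ran}\, M^{*})$, while the opposite inclusion is trivial.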
In order to gain some information about the strong formulation of the local non-squeezing inequality we study the function $t\mapsto Vol_{2k} (P \varphi_t (B_1),\omega_{0|V} ^k)$. \begin{prop} \label{prop1} Consider a domain $U \subset \mathbb{R}^{2n}$ and a smooth simple curve $[0,1] \ni t \mapsto y(t) \in U$ starting at $y(0)=y_0$. Let $[0,1] \ni t \mapsto \varphi_{t,y(t)}$ be a smooth path of symplectic embeddings $\varphi_{t,y(t)}:\overline{B_1} \hookrightarrow \mathbb{R}^{2n}$, such that $\varphi_{0,y_0}$ is linear and $\varphi_{0,y_0} ^{-1} V$ is $J$-invariant. The deformation of $P\varphi_{0,y_0}(B_1)$ given by $P\varphi_{t,y(t)}(B_1)$ is either formally trivial or not: \begin{itemize}
\item if the deformation is formally trivial, then every derivative of order $m \in \mathbb{Z}^+$ of $t\mapsto Vol_{2k} (P \varphi_{t,y(t)} (B_1),\omega_{0|V} ^k)$ vanishes at $0$;
\item if the deformation is not formally trivial, then the strict middle dimensional non-squeezing inequality $Vol_{2k} (P \varphi_{t,y(t)} (B_1),\omega_{0|V} ^k) > \pi^k$ holds for $t>0$ small enough. \end{itemize} \end{prop} \proof
By the previous addendum we have that $\psi:=\varphi_{{0,y_0}|\varphi_{0,y_0} ^{-1}V}$ is a linear symplectomorphism between $(B_1 \cap \varphi_{0,y_0} ^{-1}V,\omega_{0|\varphi_{0,y_0} ^{-1}V})$ and $(P\varphi_{0,y_0} (B_1),\omega_{0|V})$. Let us call $M_{t,y(t)}:=\partial P \varphi_{t,y(t)} (B_1)$ and consider two $1$-forms: the Liouville form $\lambda_{0|\psi^{-1} M_{t,y(t)}}$ and its pullback $\mu_{t,y(t)} := {\theta_{t,y(t)}}^* (\lambda_{0|\psi^{-1}M_{t,y(t)}})$, where $\theta_{t,y(t)} : S^{2k-1} \rightarrow \psi^{-1}M_{t,y(t)}$ is the radial diffeomorphism such that ${\theta_{t,y(t)}}^{-1}(x) = \dfrac{x}{||x||}$.\\ Later we will use the capacity $c$, which is defined only for convex domains, so let us notice once and for all that, for small deformations, $\varphi_{t,y(t)} (B_1)$ is still convex and that the projection of a convex domain is still convex.\\ Now we compute the relations between the volumes of the deformations.\\ Using Stokes' theorem we get \begin{align*}
&Vol_{2k-1}(\psi^{-1}M_{t,y(t)}, \lambda_{0|\psi^{-1}M_{t,y(t)}})= \int_{\psi^{-1}\partial P \varphi_{t,y(t)} (B_1)} \lambda_{0|\psi^{-1}M_{t,y(t)}} \wedge (d \lambda_{0|\psi^{-1}M_{t,y(t)}})^{k-1}=\\
&=\int_{ \psi^{-1}P \varphi_{t,y(t)} (B_1)} {\omega_0}^k = Vol_{2k} (\psi^{-1}P \varphi_{t,y(t)} (B_1),\omega_{0|\varphi_{0,y_0} ^{-1}V} ^k). \end{align*}
On the other hand, since $(\psi^{-1}M_{t,y(t)}, \lambda_{0|\psi^{-1}M_{t,y(t)}})$ and $(S^{2k-1},\mu_{t,y(t)})$ are strictly contactomorphic, $Vol_{2k-1}(\psi^{-1}M_{t,y(t)}, \lambda_{0|\psi^{-1}M_{t,y(t)}})=Vol_{2k-1}(S^{2k-1},\mu_{t,y(t)})$. So, if $\mu_{t,y(t)} ' :=\mu_{t,y(t)} \rho (t)$, where $\rho (t):= \dfrac{1}{\sqrt[k]{Vol_{2k} (\psi^{-1}P \varphi_{t,y(t)} (B_1),\omega_{0|\varphi_{0,y_0} ^{-1} V} ^k)}}$, it follows that $Vol_{2k-1}(S^{2k-1},\mu_{t,y(t)} ')= 1$ and in particular that $\mu_{t,y(t)} '$ is a constant volume deformation.\\ Observing that closed characteristics in $(S^{2k-1}, \mu_{t,y(t)} ')$ are the same as in $(S^{2k-1}, \mu_{t,y(t)})$, we can establish the relation between the minimal actions of their closed Reeb orbits \begin{align*}
&A_{\min} (S^{2k-1}, \mu_{t,y(t)} ') = \min_{\gamma} \{ A(\gamma) \ | \ \gamma \textrm{ closed characteristic in } (S^{2k-1}, \mu_{t,y(t)} ') \} = \\
&= \min_{\gamma} \{ \int_{\gamma} \mu_{t,y(t)} ' \ | \ \gamma \textrm{ closed characteristic in } (S^{2k-1}, \mu_{t,y(t)} ') \} = \\
&= \min_{\gamma} \{ \rho (t)\int_{\gamma} \mu_{t,y(t)} \ | \ \gamma \textrm{ closed characteristic in } (S^{2k-1}, \mu_{t,y(t)} ') \}= \\
& =\min_{\gamma} \{ \rho (t)\int_{\gamma} \mu_{t,y(t)} \ | \ \gamma \textrm{ closed characteristic in } (S^{2k-1}, \mu_{t,y(t)} ) \}= \\
&= \rho (t) A_{\min} (S^{2k-1}, \mu_{t,y(t)}). \end{align*}
Since $\theta_{t,y(t)}$ is a strict contactomorphism between $(\psi^{-1}M_{t,y(t)}, \lambda_{0|\psi^{-1}M_{t,y(t)}})$ and $(S^{2k-1}, \mu_{t,y(t)} )$, we also get \begin{align*}
A_{\min} (S^{2k-1}, \mu_{t,y(t)} ) =A_{\min}(\psi^{-1}M_{t,y(t)}, \lambda_{0|\psi^{-1}M_{t,y(t)}})=c (\psi^{-1}P \varphi_{t,y(t)} (B_1)), \end{align*}
where $\psi$ is a symplectomorphism.\\ Thus the quantities $A_{\min}(\psi^{-1}M_{t,y(t)}, \lambda_{0|\psi^{-1}M_{t,y(t)}})$ and $c (\psi^{-1}P \varphi_{t,y(t)} (B_1))$ are equal respectively to $A_{\min}(M_{t,y(t)}, \lambda_{0|M_{t,y(t)}})$ and $c (P \varphi_{t,y(t)} (B_1))$. Notice that the Weinstein conjecture holds in the convex case (Theorem \ref{Teor4}), hence a closed characteristic for $(M_{t,y(t)}, \lambda_{0|M_{t,y(t)}} )$ always exists, moreover by Theorem \ref{Teor7} the quantities above are well defined.\\
Now let us take a deformation $(S^{2k-1}, \mu_{t,y(t)} ' )$ of the standard Zoll contact form $\mu_{0,y_0}=\lambda_{0|S^{2k-1}}$ on $S^{2k-1}$, which could be either formally trivial or not.\\ Suppose first that it is formally trivial, which is equivalent to saying that the deformation $P\varphi_{t,y(t)}(B_1)$ is formally trivial.\\ In this case, in the last part of the proof of Theorem \ref{Teor3} we deduced that for every $m \in \mathbb{Z}^+$ there is a contact isotopy $\phi_{t,y(t)}$ such that \begin{align*} \phi_{t,y(t)}^* \mu_{t,y(t)} = (1+O({t}^{m})) \mu_0. \end{align*} The volume function is invariant by contact isotopy, so \begin{align*}
&Vol_{2k} (\psi^{-1}P \varphi_{t,y(t)} (B_1),\omega_{0|\varphi_{0,y_0} ^{-1}V} ^k)=Vol_{2k-1} (S^{2k-1},\mu_{t,y(t)})=\\ &=\rho(t) Vol_{2k-1} (S^{2k-1},\mu_{t,y(t)} ')=\rho(t) Vol_{2k-1} ( S^{2k-1},(1+O({t}^{m}))\mu_0), \ \ \ \forall m \in \mathbb{Z}^+. \end{align*} By the definition of $\rho(t)$ the above equality is equivalent to \begin{align*}
&{Vol_{2k} (\psi^{-1}P \varphi_{t,y(t)} (B_1),\omega_{0|\varphi_{0,y_0} ^{-1}V} ^k)}^{\frac{k+1}{k}}=\dfrac{Vol_{2k} (\psi^{-1}P \varphi_{t,y(t)} (B_1),\omega_{0|\varphi_{0,y_0} ^{-1}V} ^k)}{\rho(t)}=\\ &= Vol_{2k-1}( S^{2k-1},(1+O({t}^{m}))\mu_0), \ \ \ \forall m \in \mathbb{Z}^+. \end{align*}
Therefore every derivative of order $m$ of ${Vol_{2k} (\psi^{-1}P \varphi_{t,y(t)} (B_1),\omega_{0|\varphi_{0,y_0} ^{-1}V} ^k)}^{\frac{k+1}{k}}$, and hence of $Vol_{2k} (\psi^{-1}P \varphi_{t,y(t)} (B_1),\omega_{0|\varphi_{0,y_0} ^{-1}V} ^k)=Vol_{2k} (P \varphi_{t,y(t)} (B_1),\omega_{0|V} ^k)$, vanishes at $0$.\\ Now we suppose that $(S^{2k-1}, \mu_{t,y(t)} ')$ (equivalently $P\varphi_{t,y(t)}(B_1)$) is not formally trivial. By Theorem \ref{Teor3} and the previous calculations, if $t$ is small enough the following inequality holds \begin{align*}
& 1=\dfrac{\pi}{\sqrt[k]{\pi^k}}= \dfrac{ A_{\min} (\psi^{-1}M_{0,y_0}, \lambda_{0|\psi^{-1} M_{0,y_0}})}{\sqrt[k]{\pi^k}} = \rho (0) A_{\min} (\psi^{-1}M_{0,y_0}, \lambda_{0|\psi^{-1}M_{0,y_0}})= \\
&= A_{\min} (S^{2k-1}, \mu_{0,y_0} ') > A_{\min} (S^{2k-1}, \mu_{t,y(t)} ')= \dfrac{A_{\min} (\psi^{-1}M_{t,y(t)}, \lambda_{0|\psi^{-1}M_{t,y(t)}})}{{\sqrt[k]{Vol_{2k} (\psi^{-1}P \varphi_{t,y(t)} (B_1),\omega_{0|\varphi_{0,y_0} ^{-1} V} ^k)}}}. \end{align*}
So, recalling that $A_{\min} (M_{t,y(t)}, \lambda_{0|M_{t,y(t)}})=A_{\min} (\psi^{-1}M_{t,y(t)}, \lambda_{0|\psi^{-1}M_{t,y(t)}})$ and\\ $Vol_{2k} (\psi^{-1}P \varphi_{t,y(t)} (B_1),\omega_{0|\varphi_{0,y_0} ^{-1}V} ^k)=Vol_{2k} (P \varphi_{t,y(t)} (B_1),\omega_{0|V} ^k)$, if we prove that \\$A_{\min} (M_{t,y(t)}, \lambda_{0|M_{t,y(t)}})\geq \pi$, then ${ Vol_{2k} (P \varphi_{t,y(t)} (B_1),\omega_{0|V} ^k)}^{-\frac{1}{k}} < \dfrac{1}{\pi}$ and the strict local non-squeezing inequality $Vol_{2k} (P \varphi_{t,y(t)} (B_1),\omega_{0|V} ^k) > \pi^k$ holds. But from the behaviour of the capacity $c$ with respect to symplectic projections (Proposition \ref{Prop7}), we deduce \begin{align*}
A_{\min} (M_{t,y(t)}, \lambda_{0|M_{t,y(t)}}) = c (P \varphi_{t,y(t)} (B_1)) \geq c ( \varphi_{t,y(t)} (B_1))= c ( B_1) =\pi, \end{align*} and hence the result. \endproof From this result we cannot deduce the general local non-squeezing inequality \eqref{ineq5} because, in general, we cannot say much when a formally trivial deformation occurs. Nevertheless, if the deformation is analytic, the local non-squeezing inequality follows easily as a consequence of the proposition above.
\begin{teo*}{Teor2}[Analytic local non-squeezing] Let $[0,1] \ni t \mapsto \varphi_t$ be an analytic path of symplectic embeddings $\varphi_t:\overline{B_1}\hookrightarrow \mathbb{R}^{2n}$, such that $\varphi_0$ is linear. Then the middle dimensional non-squeezing inequality \begin{align*}
Vol_{2k} (P \varphi_t (B_1),\omega_{0|V} ^k) \geq \pi^k \end{align*} holds for $t$ small enough. \end{teo*}
\proof By Theorem \ref{teor6} we have that $Vol_{2k} (P \varphi_0 (B_1),\omega_{0|V} ^k) \geq \pi^k$ and the equality holds if and only if $\varphi_0 ^{-1} V$ is $J$-invariant. If the equality does not hold, the theorem is trivially true by the continuity of the volume. On the other hand, if the equality holds, Theorem \ref{teor6} implies that $\varphi_0 ^{-1}V$ is $J$-invariant and thus we are under the hypothesis of Proposition \ref{prop1}.\\ Therefore, if the deformation $P \varphi_t(B_1)$ is not formally trivial, there is nothing to prove. Otherwise, if the deformation is formally trivial, the function $t \mapsto Vol_{2k} (P \varphi_t (B_1),\omega_{0|V} ^k)$ has vanishing derivatives at $0$, but we know by Proposition \ref{Prop1} that if $t$ is small enough this function is analytic and hence constant. Thus we get $Vol_{2k} (P \varphi_t (B_1),\omega_{0|V} ^k)=Vol_{2k} (B^{2k}_1, \omega_{0|V} ^k)= \pi^k$ for $t$ small enough. \endproof Note that to prove the theorem it was sufficient to use Proposition \ref{prop1} in the case where the curve $y(t)$, on which the path $t\mapsto \varphi_{t,y(t)}$ depends, is a constant curve, but the same proof leads to a generalization of Theorem \ref{Teor2} to the case in which $y(t)$ is an arbitrary analytic curve. Thanks to this remark we can say something more about the fixed symplectic embedding formulation of the local non-squeezing, but before doing so we state two preliminary results. The first concerns the local structure of the zero set of an analytic function. \begin{teo}({\L}ojasiewicz's Structure Theorem) \ {\upshape{\cite[Theorem 5.2.3]{KP92}}} \label{Teo4}
Let $f(x_1,\ldots,x_n)$ be a real analytic function in a neighbourhood of a point $y=(y_1,\ldots,y_n)$ in $\mathbb{R}^n$ and assume that $x_n \mapsto f(y_1,\ldots,y_{n-1},x_n)$ is not identically zero. There exist numbers $\delta_j>0$, $j=1,\ldots n,$ and a neighbourhood $Q_n$ (where we define $Q_k:= \{(x_1,\ldots,x_k) \ | \ |y_j - x_j| < \delta_j, \ 1\leq j \leq k \}$) such that the zero set \begin{align*}
Z:=\{ x\in Q_n \ | \ f(x)=0 \} \end{align*} has a decomposition \begin{align*} Z=V^{0} \cup \ldots \cup V^{n-1}, \end{align*} where the set $V^0$ is either empty or consists of the point $y$ alone, while for $1\leq k \leq n-1$ we may write $V^k$ as a finite disjoint union $V^k= \cup_\lambda \Gamma^k_\lambda$ of $k$-dimensional subvarieties $\Gamma^k_\lambda$. Each $\Gamma^k_\lambda$ is defined by a system of $n-k$ equations: \begin{align*} &x_{k+1}=^\lambda \! \! \eta _{k+1}^k (x_1,\ldots, x_k),\\ &\ \ \ \ \ \ \ \ \ \ \ldots\\ &x_n=^\lambda \! \!\eta _{n}^k (x_1,\ldots, x_k), \end{align*} where each function $^\lambda \eta _{k+1}^k$ is real analytic on an open subset $\Omega_\lambda ^k \subseteq Q_k \subseteq \mathbb{R}^k$. \end{teo} \begin{lem} Let $\varphi: D\rightarrow \mathbb{R}^{2n}$ be an analytic symplectic embedding and $x \in D$. As long as $x + ry \in D$, the map \begin{align*} \varphi_{r,x}(y) := \left\{ \begin{array}{l} \dfrac{1}{r} \big( \varphi (x+ r y) - \varphi (x) \big) \ \ \ \textrm{ if } r >0,\\ D \varphi (x) [y] \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \textrm{ if } r=0,
\end{array}\right. \end{align*} is analytic. \end{lem} \proof The function $\varphi(x+ry)$ is analytic in $r$ because it is a composition of analytic maps, thus the map $\dfrac{1}{r} \big( \varphi (x+ r y) - \varphi (x) \big)$ is analytic in $r>0$. Since $\varphi(x+ry)- \varphi(x)$ is analytic at $r=0$, we can express it as a convergent Taylor series centred at $0$. But the $0$-th coefficient of this expansion must vanish since $\varphi(x+0y)- \varphi(x)=0$, hence we can divide by $r$ and we obtain a convergent Taylor expansion for $\dfrac{1}{r} \big( \varphi (x+ r y) - \varphi (x) \big)$ at $r=0$. \endproof \begin{teo*}{Teor5}
Let $\varphi: D\hookrightarrow \mathbb{R}^{2n}$ be an analytic symplectic embedding, where $D$ is a domain of $\mathbb{R}^{2n}$. Then there exists a function $r_0:D\rightarrow (0,+ \infty)$ such that the inequality $Vol_{2k}(P\varphi (B_r(x)),\omega ^k_{0|V})\geq r^{2k}\pi^{k}$ holds for every $x \in D$ and for every $r < r_0(x)$. Moreover $r_0$ is bounded away from $0$ on compact subsets $K \subset D$. \end{teo*} \proof Let $\varphi_{r,x}$ be the map defined in the lemma above. Observing that \begin{align} \label{eq6}
Vol_{2k}(P \varphi_{r,x} (B_1(0)),\omega_{0|V} ^k)=Vol_{2k}(P \frac{1}{r} \varphi (B_r(x)),\omega_{0|V} ^k)=\dfrac{Vol_{2k}(P \varphi (B_r(x)),\omega_{0|V} ^k)}{r^{2k}}, \end{align} for every fixed $x \in D$ we can apply Theorem \ref{Teor2} to the path $r\mapsto \varphi_{r,x}$ and we deduce the first part of the theorem.\\ Now we prove the estimate on compact sets.\\ Define a function \begin{align*}
f(x,r):= Vol_{2k}(P \varphi_{r,x} (B_1(0)),\omega_{0|V} ^k) - \pi^k. \end{align*}
This function is analytic in $\mathcal{D}= \{ (x,r) \in D \times [0,+ \infty) \ | \ 0 \leq r < R(x) \}$, where $R(x)>0$ is the supremum of the radii $r$ for which $f(x,r)$ is defined. To see this, it is enough to apply Proposition \ref{Prop1} to the analytic map $(r,x)\mapsto \varphi_{r,x}$. Now, take an arbitrary point $x_0 \in D$. If $f(x_0,0)>0$, then by continuity there exists a small neighbourhood $B_{\epsilon_{x_0}} \times [0,r_{x_0})$ of $(x_0,0)$ in $\mathcal{D}$, on which $f$ is positive.
On the other hand, if $f(x_0,0)=0$, we denote by $\gamma_A ^{x_0}:[0,1] \rightarrow \mathcal{D}$ a simple analytic curve such that $\gamma_A ^{x_0}(0)=(x_0,0)$. A consequence of Theorem \ref{Teor2} is that $f(\gamma_A ^{x_0} (r))$ must be non-negative in a neighbourhood of $r=0$, i.e. $(x_0,0)$ is a local minimum for the restriction of $f$ to every analytic curve $\gamma_A ^{x_0}$. From this we can deduce that $(x_0,0)$ is a minimum for $f$ in $\mathcal{D}$. To see this, we first extend $f$ to an analytic function in a neighbourhood of $(x_0,0)$ in $\mathbb{R}^{2n+1}$. By Theorem \ref{Teo4}, there is a small ball $B_\delta(x_0,0) \subset \mathbb{R}^{2n+1}$ in which we know how the zeros are distributed, in particular $\mathcal{D} \cap (B_\delta (x_0,0) \backslash f^{-1}(0))$ has at most a finite number $N$ of different connected components $A_i \subset \mathcal{D} \cap B_\delta (x_0,0)$ such that $(x_0,0) \in \overline{A_i}$. The set $(f^{-1}(0) \cup_{i=1}^N A_i) \cap (\mathcal{D} \cap B_\delta (x_0,0))$ contains a neighbourhood of $(x_0,0)$ in $\mathcal{D}$, hence if we prove that $f_{|A_i}>0$ for every $i\in \{1,\ldots ,N\}$, we get the desired result. But if it were $f_{|A_i}<0$, by Theorem \ref{Teo4} we would be able to conclude that there exists an analytic curve $\gamma_A ^{x_0}$ lying in the connected component $A_i$ and this would imply that $0$ is not a local minimum for $f \circ \gamma_A ^{x_0}$, hence a contradiction. Therefore $(x_0,0)$ is a minimum for $f$ in $\mathcal{D}$ and hence there exists a small neighbourhood $B_{\epsilon_{x_0}} \times [0,r_{x_0})$ of $(x_0,0)$ in $\mathcal{D}$ on which $f$ is non-negative. Now we consider an arbitrary compact set $K \subset D$. As we have just seen, to every $x_0 \in D$ we can associate two positive real numbers $r_{x_0}$ and $\epsilon_{x_0}$. The balls of radius $\epsilon_{x_0}$ centred at an arbitrary $x_0 \in K$ produce an open cover of $K$.
From this cover we can extract a finite subcover of balls of radius $\epsilon_{x_i}$ and, if we define $r_0$ as the minimum of the corresponding $r_{x_i}$, we get the result. \endproof
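We remark that the scaling identity \eqref{eq6} used at the beginning of the proof follows from the linearity of $P$: for $r>0$ the set
\begin{align*}
P \varphi_{r,x} (B_1(0))=\dfrac{1}{r} \big( P\varphi (B_r(x)) - P\varphi (x) \big)
\end{align*}
is a translate of $\dfrac{1}{r} P\varphi (B_r(x))$, and the volume with respect to the constant form $\omega_{0|V} ^k$ is invariant under translations and homogeneous of degree $2k$ under dilations.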
\thispagestyle{empty} \section*{References} \begin{bibliography}{A} \bibitem[AM13]{AM13} A. Abbondandolo and R. Matveyev, \emph{How large is the shadow of a symplectic ball?}, J. Topol. Anal. \textbf{5} (2013), 87-119. \bibitem[ABHS15]{ABHS15} A. Abbondandolo, B. Bramham, U. Hryniewicz and P. Salom\~ao, \emph{Sharp systolic inequalities for Reeb flows on the three sphere}, arXiv:1504.05258 [math.SG], 2015. \bibitem[AM15]{AM15} A. Abbondandolo and P. Majer, \emph{A non-squeezing theorem for symplectic images of the Hilbert ball}, Calc. Var. Partial Diff. Equ. (online version). \bibitem[BP14]{BP14} J. C. \'{A}lvarez Paiva and F. Balacheff, \emph{Contact geometry and isosystolic inequalities}, Geom. Funct. Anal. \textbf{24} (2014), no. 2, 648-669. \bibitem[DF76]{DF76} A. J. Dragt and J. M. Finn, \emph{Lie series and invariant functions for analytic symplectic maps}, J. Mathematical Phys. \textbf{17} (1976), no. 12, 2215-2227. \bibitem[EG91]{EG91} Y. Eliashberg and M. Gromov, \emph{Convex symplectic manifolds, Several complex variables and complex geometry, Part 2}, Proc. Sympos. Pure Math. \textbf{52} part 2 (1991), 135-162. \bibitem[EH89]{EH89} I. Ekeland and H. Hofer, \emph{Symplectic topology and Hamiltonian dynamics}, Math. Z. \textbf{200} (1989), 355-378. \bibitem[Fed69]{Fed69} H. Federer, \emph{Geometric measure theory}, Springer (1969). \bibitem[Fin86]{Fin86} J. M. Finn, \emph{Lie transforms: a perspective}, Lecture Notes in Phys. \textbf{252}, Springer, Berlin (1986), 63-86. \bibitem[Gin87]{Gin87} V. L. Ginzburg, \emph{New generalizations of Poincar\'e's geometric theorem}, Funktsional. Anal. i Prilozhen. \textbf{21} (1987), no. 2, 16-22. \bibitem[Gro85]{Gro85} M. Gromov, \emph{Pseudo holomorphic curves in symplectic manifolds}, Invent. Math. \textbf{82} (1985), 307-347. \bibitem[Gut08]{Gut08} L. Guth, \emph{Symplectic embeddings of polydisks}, Invent. Math. \textbf{172} (2008), 477-489. \bibitem[HZ94]{HZ94} H. Hofer and E.
Zehnder, \emph{Symplectic invariants and Hamiltonian dynamics}, Birkh\"auser, 1994. \bibitem[KP92]{KP92} S. G. Krantz and H. R. Parks, \emph{A Primer on Real Analytic Functions}, Birkh\"auser, 1992. \bibitem[Mos65]{Mos65} J. Moser, \emph{On the volume elements of a manifold}, Trans. Amer. Math. Soc. \textbf{120} (1965), 286-294. \bibitem[Rab78]{Rab78} P. Rabinowitz, \emph{Periodic solutions of Hamiltonian systems}, Comm. Pure Appl. Math. \textbf{31} (1978), 157-184. \bibitem[Vit89]{Vit89} C. Viterbo, \emph{Capacit\'es symplectiques et applications}, Ast\'erisque \textbf{177-178} (1989), no. 714, S\'eminaire Bourbaki, 41\`eme ann\'ee, 345-362. \bibitem[Wei74]{Wei74} A. Weinstein, \emph{Fourier integral operators, quantization, and the spectra of Riemannian manifolds}, G\'eom\'etrie symplectique et physique math\'ematique, \'Editions Centre Nat. Recherche Sci., Paris (1975), 289-298. \bibitem[Wei78]{Wei78} A. Weinstein, \emph{Periodic orbits for convex Hamiltonian systems}, Ann. Math. \textbf{108} (1978), 507-518.
\end{bibliography}
\thispagestyle{empty}
\end{document}
\begin{document} \title{Intersection forms, topology of maps and motivic
decomposition for resolutions of threefolds}
\tableofcontents
\section{Introduction} This paper has two aims.
\n The former is to give an introduction to our earlier work \ci{decmightam} and more generally to some of the main themes of the theory of perverse sheaves and to some of its geometric applications. Particular emphasis is put on the topological properties of algebraic maps.
\n The latter is to prove
a motivic version of the decomposition theorem for the resolution of a threefold $Y.$ This result allows one to define a pure motive whose Betti realization is the intersection cohomology of $Y.$
We assume familiarity with Hodge theory and with the formalism of derived categories. On the other hand, we provide a few explicit computations of perverse truncations and intersection cohomology complexes which we could not find in the literature and which may be helpful to understand the machinery.
We discuss in detail the case of surfaces, threefolds and fourfolds. In the surface case, our ``intersection forms'' version of the decomposition theorem stems quite naturally from two well-known and widely used theorems on surfaces, the Grauert contractibility criterion for curves on a surface and the so-called ``Zariski Lemma,'' cf. \ci{BPV}.
The following assumptions are made throughout the paper
\begin{ass} \label{s1} We work with varieties over the complex numbers. A {\em map} $f:X \to Y$ is a proper morphism of varieties. We assume that $X$ is smooth. All (co)homology groups are with rational coefficients. \end{ass} These assumptions are placed for ease of exposition only, for
the main results remain valid when $X$ is singular if one replaces the cohomology of $X$ with its intersection cohomology, or the constant sheaf $\rat_{X}$ with the intersection cohomology complex of $X.$
It is a pleasure to dedicate this work to J. Murre, with admiration and respect.
\section{Intersection forms} \label{intforms}
\subsection{Surfaces} \label{surf} Let $D = \cup D_{k} \subseteq X$ be a finite union of compact irreducible curves on a smooth complex surface. There is a sequence of maps \begin{equation} \label{blle} H_{2}(D) \stackrel{r_{*}}\longrightarrow H^{BM}_{2}(X) \stackrel{PD}\simeq H^{2}(X) \stackrel{r^*}\longrightarrow H^{2}(D). \end{equation} The group $H_{2}(D)$ is freely generated by the fundamental classes $[D_{k}].$
\n The group $H^{2}(D)$ is isomorphic to $H_{2}(D)^{\vee}$ and, via Mayer-Vietoris, it is freely generated by the classes associated with points $p_{k} \in D_{k}.$ The map $$ H_{2}(D) \stackrel{cl}\longrightarrow H^{2}(X), \qquad cl\,:= \, PD\,\circ \, r_{*} $$ is called the {\em class map}; it assigns to the fundamental class $[D_{k}]$ the cohomology class $c_{1}( {\cal O}_{X}(D_{k}) ).$
\n The restriction map $r,$ or rather $r\circ PD,$ assigns to a Borel-Moore $2-$cycle meeting transversely all the $D_{k},$ the points of intersection with the appropriate multiplicities.
\n The composition $ H_{2}(D) \longrightarrow H^{2}(D) $ gives rise to the so-called {\em refined intersection form} on $D \subseteq X$: \begin{equation} \label{iota} \iota: H_{2}(D) \times H_{2}(D) \longrightarrow \rat \end{equation}
with associated symmetric intersection matrix $||D_{h}\cdot
D_{k}||.$
\n If $X$ is replaced by the germ of a neighborhood of $D,$ then $X$ retracts to $D$ so that all four spaces appearing in (\ref{blle}) have the same dimension $b_{2}(D)=$ number of curves in $D.$
\n In this case the restriction map $r$ is an isomorphism: the Borel Moore classes of disks transversal to the $D_{k}$ map to the point of intersection.
\n On the other hand, $cl$ may fail to be injective, e.g. $(\comp \times \pn{1}, \{0\}\times \pn{1}).$
The following are two classical results concerning the properties of the intersection form $\iota$, dealing respectively with resolutions of normal surface singularities and with one dimensional families of curves. They are known as Grauert's Criterion and the Zariski Lemma (cf. \ci{BPV}, p.90).
\begin{tm}
\label{tmgra}
Let $f: X \to Y$ be the contraction of a divisor $D$
to a normal surface singularity.
Then the refined intersection form $\iota$ on $H_{2}(D)$
is negative definite.
In particular, the class map $cl$ is an isomorphism. \end{tm}
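The following standard example may help to visualize the statement.
\begin{ex} {\rm Let $f: X \to Y$ be the minimal resolution of a rational double point of type $A_{r}.$ The exceptional divisor is a chain $D = D_{1} \cup \ldots \cup D_{r}$ of smooth rational curves with $D_{k}\cdot D_{k}=-2$ and $D_{k}\cdot D_{k+1}=1,$ so that the intersection matrix is the negative of the Cartan matrix of the root system $A_{r}:$ $$ ||D_{h}\cdot D_{k}|| \, = \, \left( \begin{array}{rrrr} -2 & 1 & & \\ 1 & -2 & \ddots & \\ & \ddots & \ddots & 1 \\ & & 1 & -2 \end{array} \right), $$ which is negative definite, in accordance with Theorem \ref{tmgra}. Similarly, for the contraction of a single $(-1)-$curve the matrix is $(-1).$ } \end{ex}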
\begin{tm}
\label{wzar}
Let $f: X \to Y$ be a surjective proper map of quasi-projective
smooth varieties, $X$ a surface, $Y$ a curve.
Let $D = f^{-1}(y)$ be any fiber.
Then the rank of $cl$ is $b_{2}(D) -1.$
More precisely, let $F= \sum_k a_k D_{k},$ $a_{k}>0,$ be the
cycle-theoretic fiber. Then $F\cdot F=0$ and the induced bilinear form
$$
\frac{H_{2}(D)}{ \langle [F] \rangle } \times
\frac{H_{2}(D)}{ \langle [F] \rangle }
\longrightarrow \rat
$$ is non degenerate and negative definite. \end{tm}
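The following simple example illustrates the Zariski Lemma.
\begin{ex} {\rm Let $f: X \to Y$ be an elliptic fibration with a fiber $D$ of type $I_{2},$ i.e. $D= D_{1} \cup D_{2}$ with $D_{1}, D_{2}$ smooth rational curves, $D_{k}\cdot D_{k}=-2$ and $D_{1}\cdot D_{2}=2.$ The intersection matrix $$ ||D_{h}\cdot D_{k}|| \, = \, \left( \begin{array}{rr} -2 & 2 \\ 2 & -2 \end{array} \right) $$ has rank $1 = b_{2}(D)-1.$ The cycle-theoretic fiber is $F=D_{1}+D_{2},$ $F\cdot F=0,$ and the form induced on the one dimensional quotient $H_{2}(D)/\langle [F] \rangle$ takes the value $D_{1}\cdot D_{1}=-2$ on the class of $[D_{1}],$ so that it is negative definite. } \end{ex}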
\begin{rmk} \label{link} {\rm Theorem \ref{tmgra} can be interpreted in terms of the topology of the ``link'' ${\cal L}$ of the singularity. Let $N$ be a small contractible neighborhood of a singular point $y$ and ${\cal L}$ be its boundary. Choose analytic disks $\Delta_1, \cdots, \Delta_r$ cutting transversally the divisors $D_1, \cdots, D_r$ at regular points. The classes of these disks generate the Borel-Moore homology $H_2^{BM}(f^{-1}(N)) \simeq H^2(f^{-1}(N)).$ Theorem \ref{tmgra} implies that each class $\Delta_{i}$ is homologous to a rational linear combination of exceptional curves. Equivalently, for every index $i$ some multiple of the $1-$cycle $\Delta_i\cap {\cal L}$ bounds in the link ${\cal L}$ of $y.$ This is precisely what fails in the aforementioned example $(\comp \times \pn{1}, \{0\}\times \pn{1}).$ A similar interpretation is possible for the ``Zariski Lemma.''}
\end{rmk}
In view of the important role played by these theorems in the theory of complex surfaces, it is natural to ask for generalizations to higher dimension. We next define the analogue of the intersection form for a general map $f:X \to Y$ (cf. \ref{s1}).
\subsection{Intersection forms associated to a map} \label{psif} General theorems, due to J. Mather, R. Thom and others (cf. \ci{g-m}) ensure that a projective map $f:X \to Y$ can be stratified, i.e. there is a decomposition ${\frak Y}= \coprod S_l$ of $Y$ with locally closed nonsingular subvarieties $S_l$, the strata, so that $f: f^{-1}(S_l)\to S_l$ is, for any $l$, a topologically locally trivial fibration. Such a stratification allows us, when $X$ is nonsingular, to define a sequence of intersection forms. Let $L$ be the pullback of an ample bundle on $Y.$ The idea is to use sections of $L$ to construct transverse slices and reduce the strata to points, and to use a very ample line bundle $\eta$ on $X$ to fix the ranges:
Let $ \dim S_l=l$, let $s_l$ be a generic point of the stratum $S_l$ and $Y_s$ a complete intersection of $l$ hyperplane sections of $Y$ passing through $s_l$, transverse to $S_l$;
As we did for surfaces, we consider the maps $$ I_{l,0}: H_{n-l}(f^{-1}(s_l)) \times H_{n-l}(f^{-1}(s_l)) \longrightarrow \rat $$ obtained by intersecting cycles supported in $f^{-1}(s_l)$ in the smooth $(n-l)-$dimensional ambient variety $f^{-1}(Y_s):$ $$ H_{n-l}(f^{-1}(s_l))\to H_{n-l}(f^{-1}(Y_s)) \simeq H^{n-l}(f^{-1}(Y_s)) \to H^{n-l}(f^{-1}(s_l)). $$
We can define other intersection forms, in different ranges, cutting the cycles in $f^{-1}(s_l)$ with generic sections of $\eta.$
The composition $$ H_{n-l-k}(f^{-1}(s_l))\to H_{n-l-k}(f^{-1}(Y_s)) \simeq H^{n-l+k}(f^{-1}(Y_s)) \to H^{n-l+k}(f^{-1}(s_l)) $$ gives maps $$ I_{l,k}:H_{n-l-k}(f^{-1}(s_l)) \times H_{n-l+k}(f^{-1}(s_l)) \longrightarrow \rat. $$
Let us denote by
$$\cap \eta^k: H_{n-l+k}(f^{-1}(s_l)) \to H_{n-l-k}(f^{-1}(s_l)), $$ the operation of cutting a cycle in $f^{-1}(s_l)$ with $k$ generic sections of $\eta$.
\n Composing this map with $I_{l,k},$ we obtain the intersection forms we will consider: $$ I_{l,k }(\cap \eta^k \cdot, \cdot ): H_{n-l+k}(f^{-1}(s_l)) \times H_{n-l+k}(f^{-1}(s_l)) \longrightarrow \rat. $$ \begin{rmk} \label{inde} {\rm These intersection forms depend on $\eta$ but not on the particular sections used to cut the dimension. They are independent of $L$. In fact we could define them using a local slice of the stratum $S_l$ and its inverse image, without reference to sections of $L.$ } \end{rmk}
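As a first check, these forms recover the refined intersection form of section \ref{surf}.
\begin{ex} {\rm Let $f: X \to Y$ be a resolution of a normal surface, $n=2.$ The relevant stratum is the finite set of singular points, with $l=0;$ for a singular point $y,$ no hyperplane sections are needed, $Y_{s}=Y,$ and $I_{0,0}$ is the refined intersection form (\ref{iota}) on $H_{2}(f^{-1}(y)),$ which is negative definite by Theorem \ref{tmgra}. } \end{ex}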
\begin{ex} \label{3fold} {\rm Let $f:X \to Y$ be a resolution of singularities of a threefold $Y,$ with a stratification $Y_0 \coprod C \coprod y_0,$ defined so that $f$ is an isomorphism over $Y_0$, the fibers are one-dimensional over $C$, and there is a divisor $D=\cup D_i$ contracted to the point $y_0$. We have the following intersection forms:
-- let $c$ be a general point of $C$ and $s\in H^0(Y, {\cal O}(1))$ be a generic section vanishing at $c;$ there is the form $H_2(f^{-1}(c)) \times H_2(f^{-1}(c)) \longrightarrow \rat$ which is nothing but the Grauert-type form on the surface $f^{-1}(\{s=0\});$
-- similarly, over $y_0$, there is the form on $H_4(D)$ given by $\eta \cap [D_i]\cdot [D_j];$ it is a Grauert-type form, computed on a hyperplane section of $X$ with respect to $\eta;$
-- finally, we have the more interesting $H_3(D)\times H_3(D) \longrightarrow \rat$. } \end{ex}
One of the dominant themes of this paper is that {\em Hodge theory affords non degeneracy results for these forms and that this non degeneracy has strong cohomological consequences.}
To see why Hodge theory is relevant to the study of the intersection forms, let us sketch a proof of Theorem \ref{tmgra}, under the hypothesis that $X$ and $Y$ are projective. The proof we give is certainly not the most natural or economical. Its interest lies in the fact that, while the original proof seems difficult to generalize to higher dimension, this one can be generalized. It is based on the observation that the classes $[D_i]$ of the exceptional curves are ``primitive'' with respect to the cup product with the first Chern class of any ample line bundle pulled back from $Y.$ Even though such a line bundle is certainly not ample, some parts of the ``Hodge package,'' namely the Hard Lefschetz theorem and the Hodge Riemann bilinear relations, go through. To prove this, we introduce a technique, which we call {\em approximation of $L-$primitives,} which plays a decisive role in what follows.
\n {\em Proof of \ref{tmgra} in the case $X$ and $Y$ are projective.}
\n Let $L$ be the pullback to $X$ of an ample line bundle on $Y$. Since the map is dominant, $L^2 \neq 0,$ and we get the Hodge-Lefschetz type decomposition: $$H^2(X,\real) \, = \, \real \langle c_1(L) \rangle \oplus \hbox{\rm Ker \,} \{c_1(L) \wedge : H^2(X) \to H^4(X)\}.$$ Denote the kernel above by $P^{2}.$ This decomposition is orthogonal with respect to the Poincar\'e duality pairing which, in turn, is non degenerate when restricted to the two summands. The decomposition holds with rational coefficients. However, real coefficients are more convenient in view of taking limits.
\n Consider a sequence of Chern classes of ample $\rat -$line bundles $L_n$, converging to the Chern class of $L$, e.g. $L_n=L+\frac{1}{n} \eta,$ $\eta $ ample on $X.$ Define $P^2_{1/n}= \hbox{\rm Ker \,} \{c_1(L_n):H^2(X) \to H^4(X)\}.$ These are $(b_2-1)-$dimensional subspaces of $H^2(X)$. Any limit point of the sequence $P^2_{1/n}$ in the Grassmannian of codimension one subspaces of $H^{2}(X,\real)$ gives a codimension one subspace $W \subseteq H^2(X)$, contained in $ \hbox{\rm Ker \,} \{c_1(L):H^2(X) \to H^4(X)\}=P^2.$ Since $\dim{W} = b_2-1 = \dim{ P^2},$ we must have $\lim_{n}{ P^2_{1/n}}=P^2.$
\n The Hodge Riemann Bilinear Relations hold on $P^2_{1/n}$ by classical Hodge theory, and by continuity the corresponding weak inequalities hold on the limit $P^2.$ Since the duality pairing on $P^2$ is non degenerate, the Hodge Riemann Bilinear Relations hold on $P^2$ as well.
\n The classes of the exceptional curves $D_i$ are in $P^2,$ since we can choose a section of the very ample line bundle on $Y$ not passing through the singular point and pull it back to $X.$
\n The fact that these classes are independent is known classically. Let us briefly mention here that if there is only one component $D_{i}$ then $0 \neq [D_{i}] \in H^{2}(X),$ since $X$ is K\"ahler. In general, one may also argue along the following lines (cf. \ci{demigsemi}, \ci{deca}, $\S8$): use the Leray spectral sequence over an affine neighborhood $V$ of the singularity $y$ to show that $H^{2}(f^{-1}(V)) \to H^{2}(f^{-1}(y))$ is surjective; use the basic properties of mixed Hodge structures to deduce that $H^{2}(X) \to H^{2}(f^{-1}(y))$ is also surjective; conclude by dualizing and by Poincar\'e Duality.
\n
The classes $[D_{i}]$ are real of type $(1,1)$ and for such classes
$\alpha\in P^2 \cap H^{1,1}$ the Hodge Riemann bilinear relations give $$ \int_X \alpha \wedge \alpha \,<\, 0 $$ whence the statement of \ref{tmgra}. \blacksquare
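The following toy linear algebra computation, with no geometric content, may help to visualize the approximation argument used in the proof above.
\begin{rmk} {\rm Suppose $b_{2}=2$ and identify $c_{1}(L)\wedge: H^{2}(X,\real) \to H^{4}(X,\real)$ with the linear form $(x,y) \mapsto x$ on $\real^{2},$ and $c_{1}(\eta)\wedge$ with $(x,y) \mapsto x+y.$ Then $c_{1}(L_{n})\wedge$ corresponds to $(x,y) \mapsto (1+\frac{1}{n})x + \frac{1}{n}y,$ whose kernel $P^{2}_{1/n}$ is spanned by $(\frac{1}{n+1}, -1).$ As $n \to \infty$ these lines converge to the span of $(0,1),$ which is precisely $P^{2}= \hbox{\rm Ker \,} \{(x,y)\mapsto x\}.$ } \end{rmk}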
\subsection{Resolutions of isolated singularities in dimension $3$} \label{lde} In this section we study the intersection forms in the case of the resolution of three-dimensional isolated singularities. Many of the features and techniques used in the general case emerge already in this case. Besides motivating what follows, we believe that the statements and the techniques used here are of some independent interest.
We prove all the relevant Hodge-theoretic results about the intersection forms associated to the resolution of an isolated singular point on a threefold. This example will be reconsidered in the last section, where we give a motivic version of the Hodge theoretic decomposition proved here.
As is suggested in the proof of Theorem \ref{tmgra} sketched at the end of the previous section, in order to draw conclusions on the behaviour of the intersection forms, we must investigate the extent to which the Hard Lefschetz theorem and the Hodge Riemann Bilinear Relations hold when we consider the cup product with the Chern class of the pullback of an ample bundle by a projective map. In order to motivate what follows let us recall an inductive proof of the Hard Lefschetz theorem based on the Hodge Riemann relations:
\n {\em Hard Lefschetz and Hodge-Riemann relations in dimension $(n-1)$ and Weak Lefschetz in dimension $n$ imply Hard Lefschetz in dimension $n.$}
\n Let $X$ be projective nonsingular and $X_H$ be a generic hyperplane section with respect to a very ample bundle $\eta$. Consider the map
$c_1(\eta):H^{n-1}(X)\to H^{n+1}(X).$ The Hard Lefschetz theorem states it is an isomorphism. By the Weak Lefschetz Theorem $i^*: H^{n-1}(X)\to H^{n-1}(X_H)$ is injective, and its dual $i_*: H^{n-1}(X_H)\to H^{n+1}(X),$ with respect to Poincar\'e duality on $X$ and $X_H$, is surjective. The cup product with $c_1(\eta)$ is the composition $i_* \circ i^*$ $$ \xymatrix{ H^{n-1}(X) \ar[rd]^{i^*} \ar[rr]^{c_1(\eta)} & & H^{n+1}(X) \\
& H^{n-1}(X_H) \ar[ur]^{i_*} & } $$ and is therefore an isomorphism if and only if the bilinear form $ \int_{X_{H}}$ remains non degenerate when restricted to the subspace $H^{n-1}(X) \subseteq H^{n-1}(X_H).$ This inclusion is a Hodge substructure. The Hodge Riemann relations on $X_{H}$ imply that the Hodge structure $H^{n-1}({X_H})$ is a direct sum of Hodge structures polarized by the pairing $\int_{X_{H}}.$ It follows that the restriction of the Poincar\'e form $\int_{X_{H}}$ to $H^{n-1}(X)$ is non degenerate, as wanted. The other cases of the Hard Lefschetz Theorem (i.e. $c_1(\eta)^k$ for $k \geq 2$) follow immediately from the weak Lefschetz theorem and the Hard Lefschetz theorem for $X_H$. \blacksquare
\begin{ass} \label{3foldass} $Y$ is projective with an isolated singular point $y$, $\dim Y=3.$ $X$ is a resolution and $f:X \to Y$ is an isomorphism when restricted to $f^{-1}(Y-y)$. Suppose $D=f^{-1}(y)$ is a divisor and let $D_i$ be its irreducible components. \end{ass}
As usual in this paper, we will denote by $\eta$ a very ample line bundle on $X,$ and by $L$ the pullback to $X$ of a very ample line bundle on $Y.$ Of course $L$ is not ample. We want to investigate whether the Hard Lefschetz theorem and the Hodge Riemann relations hold if we consider cup-product with $c_1(L)$ instead of with an ample line bundle.
\begin{rmk} {\rm Since $c_1(L)^3 \neq 0$ we have an isomorphism $c_1(L)^3:H^{0}(X) \to H^{6}(X). $}
\end{rmk}
\begin{rmk} {\rm Clearly the classes $[ D_i ] \in H^2(X)$ are killed by the cup product with $c_1(L),$ since we can pick a generic section of ${\cal O}_Y(1)$ not passing through $y$ and its inverse image in $X$ will not meet the $D_{i}.$ Since $[D_i] \neq 0,$ it follows that $c_1(L):H^{2}(X) \to H^{4}(X)$ is not an isomorphism.} \end{rmk}
We now prove that in fact the subspace $ \hbox{\rm Im \,} \{H_4(D) \to H^2(X) \}$ generated by the classes $[D_{i}]$ is precisely $ \hbox{\rm Ker \,} { c_1(L):H^{2}(X) \to H^{4}(X)}.$
\begin{tm} \label{ht3fold} Let $s \in \Gamma(Y,{\cal O}_Y(1))$ be a generic section and $X_s=f^{-1}(\{s=0\}) \stackrel{i}{\to}X$. Then:
a. $i^*:H^1(X) \to H^1(X_s)$ is an isomorphism.
b. $i_*:H^3(X_s) \to H^5(X)$ is an isomorphism.
c. $i^*:H^2(X)/( \hbox{\rm Im \,} \{H_4(D) \to H^2(X)\}) \to H^2(X_s)$ is injective.
d. $i_*: H^2(X_s) \to \hbox{\rm Ker \,} \{H^4(X) \to H^4(D)\}$ is surjective.
e. The map $H_3(D) \to H^3(X)$ is injective. \end{tm} {\em Proof.} Set $X_0=X \setminus X_s$ and $Y_0=Y \setminus \{s=0\},$ and let us consider the Leray spectral sequence for $f:X_0 \to Y_0.$ Since $Y_0$ is affine, we have $H^k(Y_0)=0$ for $k>3$. $$ \xymatrix{
&H^4(D)\ar[rrd]^{d_2} & & & & \\
&H^3(D)\ar[rrd]^{d_2}\ar@{.}[dddrrr] & & & & \\
&H^2(D) & & & & \\
&H^1(D) & & & & \\
& H^0(Y_0)& H^1(Y_0)& H^2(Y_0) & H^3(Y_0)& \\ \ar@<-4ex>[uuuuu] \ar@<4ex>[rrrr] & & & & } $$ The sequence degenerates so that we have surjections $H^3(X_0) \to H^3(D)$ and $H^4(X_0) \to H^4(D).$ But from \ci{ho3}, Proposition 8.2.6, $H^3(X) \to H^3(D) \to 0$ and $H^4(X) \to H^4(D) \to 0$ are also surjective. We have the long exact sequence $$ \xymatrix{ H^1_c(X_0) \ar[r] \ar@{=}[d] & H^1(X) \ar[r] & H^1(X_s) \ar[rr] \ar[dr]& & H^2_c(X_0) \ar[r] \ar@{=}[d] & H^2(X) \ar[r] & H^2(X_s) \\ H^5(X_0)^*=\{0 \} & & & 0 \ar[ur] & H_4(X_0) \ar@{=}[u] & & \\
& &
& & H_4(D) \ar@{=}[u] \ar@{^{(}->}[uur] & & }
$$ The other statements are obtained by applying duality. \blacksquare
Since the Hodge-Riemann bilinear relations hold on $H^2(X_s),$ the argument given at the beginning of this section shows that $$ c_1(L)^2:H^1(X)\longrightarrow H^5(X) \; \hbox{ is an isomorphism} $$ and $$ c_1(L)\, :\, H^2(X)/H_4(D) \, \longrightarrow \, \hbox{\rm Ker \,} \{H^4(X) \to H^4(D) \} \; \hbox{ is an isomorphism}. $$ The Hodge Riemann relations hold for $P^1:=H^1(X)$ and $P^2:= \hbox{\rm Ker \,} \{ c_1(L)^2:H^2(X)/H_4(D) \to H^6(X) \}$ since, by the weak Lefschetz Theorem, they follow from those for $X_s.$
The Hodge Riemann relations for $P^3= \hbox{\rm Ker \,} \{c_1(L):H^3(X) \to H^5(X)\},$ of which $H_{3}(D)$ is a subspace, must be considered separately: the main technique to be used here is the {\em approximation of primitives} introduced in the previous section to prove Theorem \ref{tmgra}.
\begin{tm} \label{polarizeh3} The Poincar\'e pairing $\int_X $ is a polarization of $P^3.$
\end{tm} {\em Proof.} Since $c_1(L)^2:H^1(X)\to H^5(X) $ is an isomorphism, there is a decomposition, orthogonal with respect to the Poincar\'e pairing, $H^3(X)=P^3 \oplus c_1(L)H^1(X)$ and, in particular, $\dim {P^3}=b_3-b_1, $ just as if $L$ were ample. The Poincar\'e pairing remains nondegenerate when restricted to $P^3$. The classes $c_1(L)+\frac{1}{n}c_1(\eta)$ are Chern classes of ample line bundles, hence $P^3_{1/n}= \hbox{\rm Ker \,} \{c_1(L)+\frac{1}{n}c_1(\eta):H^3(X) \to H^5(X) \}$ are $b_3-b_1-$dimensional subspaces of $H^3(X).$ As in the proof of \ref{tmgra}, a limit point of the sequence $P^3_{1/n},$ considered as points in the real Grassmannian $Gr(b_3-b_1, b_3),$ gives a subspace of $H^3(X)$, contained in $ \hbox{\rm Ker \,} \{c_1(L):H^3(X) \to H^5(X)\}=P^3$ and, by equality of dimensions, $\lim {P^3_{1/n}}=P^3.$ The Hodge Riemann relations must then hold on the limit $P^3$ as explained in the proof of Theorem \ref{tmgra}. \blacksquare
Finally, let us remark that the cup-product with $\eta$ gives an isomorphism $c_1(\eta): H_4(D) \to H^4(D)$ via the bilinear form $\int_X c_1(\eta) \wedge [D_i] \wedge [D_j]$ which is negative definite. As we remarked in \ref{3fold} this form is just the intersection form on the exceptional curves of the restriction of $f$ to a hyperplane section (with respect to $\eta$) of $X$.
Summarizing: $$ \xymatrix{
& & H_4(D) \ar@/_2pc/ @{-->}[ddrr]_>>>>>>>>{c_1(\eta)} & & & & \\ H^0 \ar@/_2pc/[rrrrrr]^{c_1(L)^3} & H^1 \ar@/^3pc/[rrrr]^{c_1(L)^2} & H^2/H_4(D) \ar@/^1pc/[rr]^{c_1(L)} & H^3 & \hbox{\rm Ker \,} \{ H^4 \to H^4(D) \} & H^5 & H^6 \\
& & & & H^4(D) & & } $$ {\em the groups in the central row behave, with respect to $L$, as the cohomology of a projective nonsingular variety on which $L$ is ample.}
We are now in a position to prove the first nontrivial fact on intersection forms which generalizes \ref{tmgra}: \begin{cor} \label{tmgra3dim} $H^3(D)$ has a pure Hodge structure of weight $3$ and the Poincar\'e form is a polarization. In particular, the (skew-symmetric) intersection form $H_3(D) \times H_3(D) \to \rat$ is non degenerate. \end{cor} {\em Proof.} This follows because $H_3(D) \to H^3(X)$ is injective and identifies $H_3(D)$ with a Hodge substructure of the polarized Hodge structure $ P^3 \subseteq H^3(X)$. \blacksquare
This is a nontrivial criterion for a configuration of (singular) surfaces contained in a nonsingular threefold to be contractible to a point. See \ci{decmightam}, Corollary 2.1.11 for a generalization to arbitrary dimension.
\n For example, the purity of the Hodge structure implies that $H^3(D)=\oplus H^3(D_i).$
We will see that the non degeneracy statement of \ref{tmgra3dim} also plays an important role in the motivic decomposition of $X$ described in section \ref{gmdmt}.
\begin{rmk} {\rm The same analysis can be carried out with only notational changes for an arbitrary generically finite map from a nonsingular threefold $X$, e.g. assuming that there is also some divisor which is blown down to a curve etc. In this case the Hodge structure of $X$ can be further decomposed, splitting off a piece corresponding to the contribution to cohomology of this divisor. } \end{rmk}
\begin{rmk} {\rm The classical argument of Ramanujam \ci{rama}, \ci{e-v}, to derive the Aki\-zu\-ki-Kodaira-Nakano Vanishing Theorem from Hodge theory and Weak Lefschetz can be adapted to give the following sharp version: if $L$ is a line bundle on a threefold $X$, with $L^3 \neq 0$, a multiple of which is globally generated, then $$ H^{p}(X, \Omega_X^q \otimes L^{-1})=0 $$ for $p+q<2$, and for $p+q=2$ but $(p,q) \neq (1,1)$. More precisely $H^{1}(X, \Omega_X^1 \otimes L^{-1}) \neq 0$ if and only if some divisor is contracted to a point.} \end{rmk}
\subsection{Resolutions of isolated singularities in dimension $4$} \label{lde4} Let us quickly consider another similar example in dimension $4$:
\begin{ass} \label{4foldass} $f:X \to Y$, where $Y$ still has a unique singular point $y$ and $X$ is a resolution. As before, $\eta$ will denote a very ample bundle on $X$, and $L$ the pull-back of a very ample bundle on $Y$. Set $D=f^{-1}(y).$ \end{ass}
An argument completely analogous to the one used in the previous example shows that the sequence of spaces $H^0(X),$ $H^1(X),$ $ H^2(X)/H_6(D),$ $ H^3(X)/H_5(D),$ $ H^4(X),$ $ \hbox{\rm Ker \,} \{H^5(X) \to H^5(D)\}$, $ \hbox{\rm Ker \,} \{H^6(X) \to H^6(D)\}$, $H^7(X),$ $H^8(X) $ satisfies the Hard Lefschetz Theorem with respect to the cup product with $L$. The corresponding primitive spaces $P^1,P^2,P^3$ are endowed with pairings satisfying the Hodge Riemann bilinear relations. The new fact that we have to face shows up when studying the Hodge Riemann bilinear relations on $H^4(X)$. The ``approximation of primitives'' technique here must be modified, since the dimension of $P^4= \hbox{\rm Ker \,} \{c_1(L):H^4(X) \to H^6(X) \}$ is greater than $b_4-b_2. $ Hence, if we introduce the primitive spaces $P^4_{1/n}= \hbox{\rm Ker \,} \{c_1(L)+\frac{1}{n}c_1(\eta):H^4(X) \to H^6(X)\}$ with respect to the ample classes $c_1(L)+\frac{1}{n}c_1(\eta),$ their limit is a proper subspace, of dimension $b_4-b_2$, of $P^4$. We can determine the exact dimension of $P^4$:
\begin{lm} $\dim{ \hbox{\rm Ker \,} \{c_1(L):H^4(X) \to H^6(X)\}}\,=\, b_4 -b_2 + \dim{H_6(D)}.$ \end{lm} {\em Proof.} Since $c_1(L)^2:H^2(X)/H_6(D) \stackrel{c_1(L)}{\to} H^4(X) \stackrel{c_1(L)}{\to} \hbox{\rm Ker \,} \{H^6(X) \to H^6(D)\}$ is an isomorphism, we have an orthogonal decomposition $$ H^4(X)= P^4 \oplus \hbox{\rm Im \,} \{c_1(L): H^2(X) \to H^4(X)\}. $$ The statement follows from: $ \hbox{\rm Ker \,} \{c_1(L): H^2(X) \to H^4(X) \}=H_6(D).$ \blacksquare
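The following example, a sketch under simplifying assumptions rather than a statement needed later, illustrates the dimension count.
\begin{ex} {\rm Let $f: X \to Y$ be the blow-up of a nonsingular point $y$ of a nonsingular projective fourfold $Y,$ so that $D=f^{-1}(y) \simeq \pn{3}$ and $\dim{H_6(D)}=1.$ Then $b_{2}(X)=b_{2}(Y)+1$ and $b_{4}(X)=b_{4}(Y)+1,$ and $ \hbox{\rm Ker \,} \{c_1(L):H^4(X) \to H^6(X)\}$ is the direct sum of the $L-$primitive part of $H^{4}(Y),$ of dimension $b_{4}(Y)-b_{2}(Y),$ and of the one dimensional space spanned by the class of a plane in $D,$ which is killed by $c_{1}(L)$ since $L_{|D}$ is trivial. Its dimension is therefore $b_{4}(Y)-b_{2}(Y)+1 = b_{4}(X)-b_{2}(X)+\dim{H_6(D)},$ as predicted by the lemma. } \end{ex}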
The ``excess'' dimension of $P^4$ is thus $\dim{H_6(D)}$. On the other hand $P^4$ contains an obvious subspace of this dimension, namely $c_1(\eta)H_6(D)$, the subspace generated by the classes obtained intersecting the irreducible components of the exceptional divisor with a generic hyperplane section.
\begin{rmk} {\rm The intersection form $\int_X c_1(\eta)^2 \wedge [D_i] \wedge [D_j]$ is negative definite, as it is just the intersection form on the exceptional curves of a double hyperplane section of $X$.} \end{rmk}
This last remark implies the following orthogonal decomposition $$ H^4(X)= c_1(\eta)H_6(D) \oplus (c_1(\eta)H_6(D))^{\perp}\cap P^4 \oplus \hbox{\rm Im \,} \{ c_1(L): H^2(X) \to H^4(X) \}, $$ and $(c_1(\eta)H_6(D))^{\perp}\cap P^4$ has dimension $b_4-b_2.$ This subspace turns out to be the subspace of ``approximable $L-$primitives'' we are looking for, as shown in the following theorem.
\begin{tm} $$ \lim_{n \to \infty}{ \hbox{\rm Ker \,} { \{ c_1(L)+\frac{1}{n}c_1(\eta) :H^4(X) \to H^6(X)\}} } \, =\, (c_1(\eta)\,H_6(D))^{\perp}\cap P^4. $$ \end{tm} {\em Proof.} The two subspaces have the same dimension, so it is enough to prove that $$ \hbox{\rm Ker \,} \{ c_1(L)+\frac{1}{n}c_1(\eta): H^4(X) \to H^6(X) \} \subseteq (c_1(\eta)H_6(D))^{\perp}. $$ If $(c_1(L)+\frac{1}{n}c_1(\eta))\wedge \alpha=0,$ then, using $c_{1}(L)\wedge [D_{i}]=0:$
$$ \int_X c_1(\eta)\wedge [D_i] \wedge \alpha \, = \, -n \int_X c_1(L) \wedge [D_i] \wedge \alpha=0. $$ \blacksquare
\begin{cor} The Poincar\'e pairing is a polarization of the weight $4$ pure Hodge structure
$(c_1(\eta)H_6(D))^{\perp}\cap P^4$. \end{cor}
Let us spell out the consequences of this analysis for the intersection form $ H_4(D) \times H_4(D) \to \rat.$ First notice that the same argument used in the proof of \ref{ht3fold}.e shows that the map $H_4(D) \to H^4(X)$ is injective. It follows that $H_4(D) $ has a pure Hodge structure.
The next result shows that in fact $H_4(D) $ is the direct sum of two substructures, polarized (with opposite signs) by the Poincar\'e pairing. This gives a clear indication of what happens in general:
\begin{cor} \label{tmgra4fold} The intersection form $ H_4(D) \times H_4(D) \to \rat$ is non degenerate. There is a direct sum decomposition: $$ H_4(D)= c_1(\eta)H_6(D) \oplus (c_1(\eta)H_6(D))^{\perp} $$ orthogonal with respect to the intersection form, which is negative definite on the first summand and positive on the second. \end{cor}
\section{Intersection forms and Decomposition in the Derived Category} \label{intformdecomp} We now show how the results we quoted at the beginning of the first section can be translated in statements about the decomposition in the derived category of sheaves of the direct image of the constant sheaf. We will freely use the language of derived categories. In particular we will use the notion of a constructible sheaf and the functors $Rf_*,Rf_!,f^*,f^!.$
In section \ref{subsm} we briefly review the classical $E_2-$degeneration criterion of Deligne \ci{dess}, \ci{shockwave} in order to motivate the construction of the perverse cohomology complexes. These complexes are
a natural generalization of the higher direct image local systems for a smooth map. The construction of perverse cohomology is carried out in section \ref{macc}.
We denote by $S(Y)$ the abelian category of sheaves of $\rat -$vector spaces on $Y$, and by $D^b(Y)$ the corresponding derived category of bounded complexes.
We shall make use of the following splitting criterion in the derived category. We state it in the form we need it in this paper. For a more general statement and a proof the reader is referred to \ci{demigsemi} and \ci{decmightam}.
Let $(U,y)$ be a germ of an isolated $n-$dimensional singularity with the obvious stratification $U = V \coprod y,$ let $j: V \to U \leftarrow y : i$ be the
obvious maps, $P$ be a self-dual complex on $U$ with $P_{|V}= {\cal L}[n],$ ${\cal L}$ a local system on $V,$ and $P \simeq \tau_{\leq 0}P. $ We wish to compare $P,$ $IC_{U}({\cal L}):= \td{-1} Rj_{*} {\cal L}[n]$ and the stalk ${\cal H}^{0}(P)_{y}.$
\begin{lm} \label{splitp} The following are equivalent:
\n 1) there is a canonical isomorphism in the derived category $$ P \, \simeq \, IC_{U}({\cal L}) \oplus {\cal H}^{0}(P)_{y}[0]; $$
\n 2) the natural map ${\cal H}^{0}(P) \longrightarrow {\cal H}^{0}(Rj_{*} j^{*}P)=R^{n}j_{*}{\cal L}$ is zero. \end{lm}
\subsection{Resolution of surface singularities } \label{fsts} For a normal surface $Y$, let $j:Y_{reg} \to Y$ be the open embedding of its regular points. The intersection cohomology complex, which we will consider in much more detail in the next section, is $IC_Y= \tau_{\leq -1} Rj_{*}\rat_{Y_{reg} } [2] .$ The following, which we will prove as a consequence of \ref{tmgra}, is the first case of the Decomposition theorem which needs to be stated in the derived category and not just in the category of sheaves.
\begin{tm} \label{tmrs} Let $f:X \to Y$ be a proper birational map of quasi-projective surfaces, $X$ smooth, $Y$ normal. There is a canonical isomorphism $$ Rf_{*} \rat_{X}[2] \stackrel{\simeq}\longrightarrow IC_{Y} \oplus R^{2}f_*\rat_X [0]. $$ \end{tm} {\em Proof.} We work locally on $Y.$ Let $(Y,p)$ be the germ of an analytic normal surface singularity, $f:(X,D) \to (Y,p)$ be a resolution. The fiber $D=f^{-1}(p)$ is a connected union of finitely many irreducible compact curves $D_k.$ Note that $j^*Rf_*\rat \simeq \rat_{Y\setminus p}.$ Consider the following diagram $$
\xymatrix{ Rf_{*}\rat_{X}[2] \ar[rd]_{l_{0}} \ar@/_1pc/[rdd]_{l_{-1}} \ar@/_2pc/[rddd]_{l_{-2}} \ar[r]^{r} & Rj_{*}j^{*} Rf_{*}\rat_{X}[2] & = Rj_{*} \rat_{Y \setminus p}[2] & \\ & \td{0} Rj_{*}j^{*} Rf_{*}\rat_{X}[2] \ar@[=][u] \ar@[=][u] & & \\
& \td{-1} Rj_{*}j^{*} Rf_{*}\rat_{X}[2] \ar[u] &
=: IC_{Y} &\\
& \td{-2} Rj_{*}j^{*} Rf_{*}\rat_{X}[2] \ar[u] & =
\rat_{Y}[2] & } $$ We are looking for the obstructions to the existence of the lifts $l_{0},$ $l_{-1}$ and $l_{-2}.$
\n Since $R^k f_*\rat_X =0,$ $k\geq 3,$ we have that $\td{0} Rf_*\rat_X[2] \simeq Rf_* \rat_X[2].$ In particular, $l_0$ exists and it is unique.
\n From the exact triangle $$
\to \tau_{\leq -1}Rj_*j^*Rf_* \rat_X[2] \to \tau_{\leq 0}Rj_*j^*Rf_* \rat_X[2] \to {\cal H}^0 (Rj_*j^*Rf_* \rat_X[2])\simeq R^2j_* \rat_{Y \setminus p} \stackrel{+1}{\to} $$
$l_{-1}$ exists iff the natural map $$ \rho: R^2f_* \rat_X \to R^2j_* \rat_{Y \setminus p} $$ is trivial. Using the isomorphisms $(R^2f_* \rat_X)_{p} \simeq H^{2}(X)$ and $(R^2j_* \rat_{Y \setminus p})_{p} \simeq H^{2}(X \setminus D),$ the map $\rho$ can be identified with the restriction map $\rho$ appearing in the long exact sequence of the pair $(X, D):$ $$ \ldots \longrightarrow H_2(D) \stackrel{cl}\longrightarrow H^2(X)\simeq H^2(D) \stackrel{\rho}\longrightarrow H^2( X \setminus D ) \longrightarrow \ldots $$ where we have identified $H_{2}(D) \simeq H^{2}(X, X \setminus D)$ via Lefschetz Duality.
\n By Theorem \ref{tmgra}, $cl$ is an isomorphism and $\rho$ is trivial.
\n This lift $l_{-1}$ is unique and splits by Lemma \ref{splitp}. \blacksquare
\begin{rmk} \label{nolift} {\rm It can be shown easily that a lift $Rf_{*}\rat_{X}[2] \longrightarrow \td{-2}\, Rj_{*}\rat_{U}[2] = \rat_{Y}[2]$ exists iff $H^{1}(f^{-1}(p), \rat) =\{0\},$ i.e. iff $IC_{Y}\simeq \rat_{Y}[2],$ iff $(Y,p)$ is a rational homology manifold.
\n It follows that, in general, the natural map $\rat_{Y } \to Rf_{*}\rat_{X}$ does {\em not} split and $Rf_{*}\rat_{X}$ does {\em not}
decompose as a direct sum of its shifted cohomology sheaves
as in (\ref{e1}).
} \end{rmk}
\begin{ex}
\label{rag2}
{\rm Let $f: X = \comp \times \pn{1} \to Y$ be the real algebraic map contracting precisely $D:= \{0 \} \times \pn{1}$ to a point $p \in Y.$ One has a {\em non} split exact sequence in the category $P(Y)$ of perverse sheaves on $Y:$ $$
0 \longrightarrow IC_{Y} \longrightarrow Rf_{*}\rat_{X}[2] \longrightarrow H^{2}(\pn{1})_{p}[0]
\longrightarrow 0. $$ } \end{ex}
It is remarkable that while the lift $l_{-2}$ does not exist in general, the lift $l_{-1}$ always exists. While looking for a nontrivial map $Rf_{*} \rat_{X}[2] \to \rat_{Y}[2],$ one ends up finding another {\em more interesting} map to $IC_{Y}.$
Recall that the dualizing sheaf $\omega_{X} \simeq \rat_{X}[2n].$ Dualizing the canonical isomorphism of Theorem \ref{tmrs} and, keeping in mind that $IC_{Y}$ and $H_{2}(D)_{p}[0]$ are simple objects in $P(Y)$ (cf. section \ref{simple}), we get \begin{cor} \label{ctmrs} There are canonical isomorphisms $$ H_{2}(D)_{p}[0] \oplus IC_{Y}^{*} \stackrel{\simeq}\longrightarrow Rf_{*} \omega_{X}[-2] \stackrel{PD}\simeq Rf_{*} \rat_{X}[2] \stackrel{\simeq}\longrightarrow IC_{Y} \oplus H^{2}(D)_{p}[0], $$ such that the composition is a direct sum map and induces the intersection form $\iota$ on $H_{2}(D)$ and the Poincar\'e-Verdier pairing on the self-dual $IC_{Y}.$
\n In particular, if $X$ is compact, then the induced splitting injection $$ I\!H^{\bullet}(Y) \subseteq H^{\bullet}(X) $$ exhibits the lhs as the pure Hodge substructure of the rhs orthogonal to the space $cl (H_{2}(D)) \subseteq H^{2}(X)$ with respect to the Poincar\'e pairing on $X.$ \end{cor}
\subsection{Fibrations over curves} \label{fstc} Let $f: X \to Y$ be a map from a smooth surface onto a smooth curve. Denote by $\hat{f}: \hat{X} \to \hat{Y}$ the smooth part of the map $f,$
by $j: \hat{Y} \to Y$ the open immersion, and set $T^i:= R^i\hat{f}_*\rat_{\hat{X}} = (R^if_*\rat_X)_{|\hat{Y}}.$ For ease of exposition we assume that $f$ has connected fibers.
\n Fix an ample line bundle $\eta$ on $X.$ The isomorphism stated in the next proposition will depend on $\eta.$
\begin{pr} \label{rhl1} There is an isomorphism $$ Rf_* \rat_X[2] \simeq j_*T^0[2] \oplus P \oplus j_*T^2[0], $$ with $P$ a suitable self-dual (with respect to the Verdier duality functor) object of $D^b(Y).$ \end{pr} {\em Proof.} We work around one critical value $p \in Y$ and replace $Y$ by a small disk centered at $p,$ $X$ by the preimage of this disk, etc.
\n Since the fibers are connected, $\rat_Y \simeq f_*\rat_X \simeq j_*T^0 \simeq j_*T^2.$
\n Since $\eta$ is $f-$ample, $\eta: T^0 \simeq T^2,$ which in this case implies that $\eta: j_*T^0 \simeq j_*T^2.$
\n There are the natural truncation maps $f_*\rat_X \to Rf_*\rat_X \to R^2f_*\rat_X[-2].$
\n There is the natural adjunction map $R^2f_*\rat_X \to j_*T^2.$ It is split, in particular surjective, in view of the presence of $\eta.$
\n Putting these together, there is a sequence of maps \begin{equation} \label{som} j_*T^0[2] \stackrel{c}\to Rf_*\rat_X[2] \stackrel{\eta}\to Rf_* \rat_X[4] \stackrel{\pi}\to j_*T^2[2] \stackrel{\sigma}\to j_*T^0[2] \stackrel{c}\to \ldots \end{equation} where the composition $\pi \eta c$ is the isomorphism $\eta: j_*T^0 \simeq j_*T^2$ mentioned above and $\sigma := ( \pi \eta c )^{-1}.$
\n The reader can verify that the composition $$ j_*T^0[2] \oplus j_*T^2[0] \stackrel{ \gamma:=( c + \eta c \sigma)[-2] }\longrightarrow Rf_* \rat_X[2] \stackrel{ \sigma\pi\eta \oplus \pi [-2]}\longrightarrow j_*T^0[2] \oplus j_*T^2[0] $$ is the identity, i.e. $\gamma$ splits.
\n Let $P:= \mbox{Cone}(\gamma).$ There is a direct sum decomposition \begin{equation} \label{chsp} Rf_* \rat_X [2] \simeq j_*T^0[2] \oplus P \oplus j_*T^2[0]. \end{equation} The self-duality of $P$ follows from the self-duality of $Rf_* \rat_X [2]$ and of $j_*T^0[2] \oplus j_*T^2[0].$ \blacksquare
The object $P$ introduced in the previous proposition has a simple structure: \begin{pr} \label{semisimp1} Assumptions as in \ref{rhl1}. The object $P$ splits in $D^b(Y)$ as $$P=V \oplus j_*T^1[1],$$ where $V = \hbox{\rm Ker \,} \{ R^2f_*\rat_X \to j_*T^2 \}$
is a skyscraper sheaf supported at $Y \setminus \hat{Y}.$ This decomposition is canonical and compatible with Verdier duality. \end{pr} {\em Proof.} By inspecting cohomology sheaves we see that ${\cal H}^i(P) =0$ for $i \neq -1,0,$ that ${\cal H}^{-1}(P) = R^1f_*\rat_X$ and that ${\cal H}^0(P) =V.$
\n In view of Lemma \ref{splitp}, we need to show that \begin{equation}
\label{rpr} r': {\cal H}^0 (P) \to R^1j_*T^1 \end{equation} is the zero map.
\n We now show that this is equivalent to the Zariski Lemma.
\n By applying adjunction to (\ref{chsp}), we obtain a commutative diagram $$
\xymatrix{ Rf_{*}\rat_{X}[2] \ar[d]^{\simeq} \ar[r] & Rj_{*}j^{*} Rf_{*}\rat_{X}[2] \ar[d]^{\simeq} \\ j_{*}T^{0}[2] \oplus P \oplus j_{*}T^{2}[0] \ar[r] & Rj_{*}T^{0}[2] \oplus Rj_{*}P \oplus Rj_{*}T^{2}[0] } $$ The associated map of spectral sequences ${\Bbb H}^p(Y, {\cal H}^q (-) ) \Longrightarrow {\Bbb H}^{p+q}(Y, -)$ gives a commutative diagram
$$
\xymatrix{ H_{2}(D) \ar[r]^{cl} & H^{2}(X) \ar[d]^{\simeq} \ar[r]^{r} & H^{2}(X \setminus D) \ar[d]^{\simeq} \\ & {\cal H}^{0}( P) \oplus (j_{*}T^{2})_{p} \ar[r]^{(r',id)} & \; {\cal H}^{1}( Rj_{*}T^{1}) \oplus (j_{*}T^{2})_{p}. } $$ It follows that, using the identifications above, $\mbox{Im}(cl)= \hbox{\rm Ker \,} {r} = \hbox{\rm Ker \,} {r'} \subseteq {\cal H}^0(P).$
\n In particular, $r'=0,$ iff $\dim{ \hbox{\rm Ker \,} {r} } = \dim{ {\cal H}^0(P) }.$ Note
that $\dim{ {\cal H}^0(P) }=b_2(f^{-1}(p)) -1.$
\n It follows that $r'=0$ iff $\dim{ \mbox{Im} (cl) } = b_2(f^{-1}(p)) -1.$ The latter is implied by Theorem \ref{wzar}.
\n We conclude by Lemma \ref{splitp}. \blacksquare
Finally, we have:
\begin{tm} \label{stc} There is an isomorphism $$ Rf_* \rat_X[2] \simeq j_*T^0[2] \oplus j_*T^1[1] \oplus V [0] \oplus j_*T^2[0]. $$ \end{tm}
\begin{rmk} {\rm From \ref{stc} it follows that $R^1f_* \rat_X \simeq j_*R^1\hat{f}_*\rat_{\hat{X}}.$ Note that this implies the Local Invariant Cycle Theorem.
\n Since $R^2f_* \rat_X=j_*R^2\hat{f}_*\rat_{\hat{X}}\oplus V,$ we have the coarser decomposition $$ Rf_* \rat_X[2] \simeq R^0f_*\rat_{X}[2] \oplus R^1f_*\rat_{X}[1] \oplus R^2 f_*\rat_{X}[0]. $$ In particular, the Leray spectral sequence degenerates at $E_2.$ It is easy to see that the Leray filtration on the cohomology of $X$ is by Hodge substructures: $$ L^2=f^*H(Y), \qquad L^1= \hbox{\rm Ker \,} { \{f_*:H(X) \to H(Y)\}}. $$ } \end{rmk}
\subsection{Smooth maps} \label{subsm} Even in the case of a smooth fibration $f: X\to Y$ of a surface over a curve, the study of the complex $Rf_{*}\rat_{X}$ is nontrivial in the absence of projectivity assumptions.
\begin{ex} \label{hopf} {\rm Let $X$ be a Hopf surface. There is a natural holomorphic smooth fibration $ f:X\to \pn{1}$ with fibers elliptic curves. Since $b_{1}(X)=1,$ one sees easily that the Leray Spectral Sequence for $f$ is not $E_{2}-$degenerate. In particular, $Rf_{*}\rat_{X} $ is {\em not} isomorphic to $\oplus_{i}{ R^{i}f_{*}\rat_{X} [-i]}.$ } \end{ex}
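In Example \ref{hopf} the failure of degeneration can be seen directly. Since $\pn{1}$ is simply connected, the local system $R^{1}f_{*}\rat_{X}$ is constant of rank two, so that $$ E_{2}^{0,1} = H^{0}(\pn{1}, R^{1}f_{*}\rat_{X}) \simeq \rat^{2}, \qquad E_{2}^{1,0} = H^{1}(\pn{1}, \rat) = \{0\}. $$ If the spectral sequence were $E_{2}-$degenerate, these two terms would compute $H^{1}(X,\rat)$ and we would get $b_{1}(X)=2.$ Since $b_{1}(X)=1,$ the differential $d_{2}: E_{2}^{0,1} \to E_{2}^{2,0} = H^{2}(\pn{1},\rat)$ must be nonzero.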
Let us briefly list {\em some} of the important properties of a smooth projective map $f:X \to Y$ of smooth varieties.
The sheaves $R^{i}f_{*}\rat_{X}$ are locally constant over $Y,$ i.e. they are local systems. In fact, $f$ is differentiably locally trivial over $Y$ in view of Ehresmann Lemma.
The model for a general decomposition theorem for $Rf_{*}\rat_{X}$ is the following \begin{tm} Let $f:X^{n} \to Y^{m}$ be a smooth projective map of smooth quasi-projective varieties of the indicated dimensions and $\eta$ be an $f-$ample line bundle on $X.$
\label{del} \begin{equation}
\label{e1}
Rf_{*}\rat_{X} \, \simeq_{D(Y)} \, \oplus_{i}{R^{i}f_{*}\rat_{X} [-i]}. \end{equation}
\begin{equation}
\label{e2} \eta^{i}: R^{n-m-i}f_{*}\rat_{X} \simeq R^{n-m+i}f_{*}\rat_{X}, \quad \forall \,i \geq 0. \end{equation}
\begin{equation}
\label{e3}
\mbox{ The local systems $R^{j}f_{*}\rat_{X}$ on $Y$ are semisimple.} \end{equation} \end{tm}
The first Chern class of the line bundle $\eta \in H^{2}(X, \rat) = Hom_{D(X)}(\rat_{X}, \rat[2]),$ defines maps $$ \eta: \rat_{X} \longrightarrow \rat_{X}[2], \quad \eta: Rf_{*}\rat_{X} \longrightarrow Rf_{*}\rat_{X}[2], \quad \eta^{r}: Rf_{*}\rat_{X} \longrightarrow Rf_{*}\rat_{X}[2r] $$ and finally $$ \eta^{r}: R^{i}f_{*}\rat_{X} \longrightarrow R^{i+2r}f_{*}\rat_{X}. $$ Theorem \ref{del}.\ref{e2} is then just a re-formulation of the Hard Lefschetz Theorem for the fibers of $f$ and can be named the Relative Hard Lefschetz Theorem for smooth maps.
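\n For instance, if $f$ is a smooth family of curves, so that $n-m=1,$ the only nontrivial instance of (\ref{e2}) is $i=1:$ $$ \eta: R^{0}f_{*}\rat_{X} \stackrel{\simeq}\longrightarrow R^{2}f_{*}\rat_{X}, $$ which, fiber by fiber, is the Hard Lefschetz isomorphism $H^{0}(X_{y},\rat) \simeq H^{2}(X_{y},\rat)$ given by cupping with the restriction of $\eta$ to the curve $X_{y}.$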
We remind the reader that a functor ${\cal T}: D(Y) \to A$, $A$ an abelian category, is said to be cohomological (cf. \ci{verd} II, 1.1.5.) if, setting ${\cal T}^i(K)= {\cal T}^0(K[i]),$ to a distinguished triangle $$ K \to L \to M \stackrel{+1}{\to} $$ corresponds a long exact sequence in $A$
$$ \to {\cal T}^i(K) \to {\cal T}^i(L) \to {\cal T}^i(M) \to {\cal T}^{i+1}(K) \to... $$ The cohomology sheaf functor ${\cal H}^{0}: D(Y) \to S(Y)$ is cohomological. Noting that ${\cal H}^{i}(Rf_{*}) = R^{i}f_{*},$ Theorem \ref{del}.\ref{e1} can be re-phrased by saying that $Rf_{*}\rat_{X}$ is decomposable with respect to the functor ${\cal H}^{0}.$
\n It is important to note that (\ref{e2}) implies (\ref{e1}) by the Deligne-Lefschetz Criterion.
\n Theorem \ref{del}.\ref{e3} states that every local subsystem ${\cal L} \subseteq R^{j}f_{*}\rat_{X}$ admits a complement, i.e. a local system ${\cal L}'$ such that ${\cal L} \oplus {\cal L}' = R^{j}f_{*}\rat_{X}.$
Let us note {\em some} of the important consequences of Theorem \ref{del}.
\n The ${\cal H}^{0}-$decomposability (\ref{e1}) of $Rf_{*}\rat_{X}$ implies immediately the $E_{2}-$degeneration of the Leray spectral sequence, i.e. of the spectral sequence associated with the cohomological functor ${\cal H}^{0}:$ \begin{equation}
\label{e4} \mbox{ $H^{p}(Y, R^{q}f_{*}\rat_{X}) \Longrightarrow H^{p+q}(X, \rat)$ is $E_{2}-$degenerate}. \end{equation} This degeneration implies the surjection \begin{equation}
\label{e5a} H^{k}(X, \rat) \longrightarrow H^{0}(Y, R^{k}f_{*}\rat_{X}) = H^{k}(X_{y}, \rat)^{\pi_{1}(Y, y)}, \end{equation} i.e. the so-called Global Invariant Cycle Theorem.
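\n Indeed, (\ref{e5a}) is an edge map of the degenerate spectral sequence: by (\ref{e4}) we have $$ E_{\infty}^{0,k} \, = \, E_{2}^{0,k} \, = \, H^{0}(Y, R^{k}f_{*}\rat_{X}), $$ and $E_{\infty}^{0,k}$ is the quotient of $H^{k}(X,\rat)$ by the first step of the Leray filtration, whence the surjectivity. The identification with the monodromy invariants $H^{k}(X_{y},\rat)^{\pi_{1}(Y,y)}$ holds because $R^{k}f_{*}\rat_{X}$ is a local system with stalk $H^{k}(X_{y},\rat).$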
\n The theory of MHS allows one to show, using a smooth compactification of $X,$ that in fact the monodromy invariants are a Hodge substructure of $H^{k}(X_{y},\rat),$ which as a PHS is independent of $y \in Y$ (Theorem of the Fixed Part).
In fact, (\ref{e3}) is a consequence of this fact.
In general, if $f$ is not smooth, Theorem \ref{del} fails completely.
\n The Relative Hard Lefschetz Theorem (\ref{e2}) fails due to the presence of singular fibers, i.e. fibers along which the differential of $f$ drops rank.
\n The sheaves $R^{j}f_{*}\rat_{X}$ are no longer locally constant. Moreover, they are not semisimple in the category of constructible sheaves: e.g. $j_{!}\rat_{\comp^{*}} \to \rat_{\comp}.$
\n The following example shows that the ${\cal H}^{0}-$decomposability (\ref{e1}) fails in general and so does the $E_{2}-$degeneration of the Leray Spectral Sequence (\ref{e4}).
\begin{ex}
\label{exnondeg}
{\rm Let $X$ be the blowing up of $\comp \pn{2}$ along ten points lying on an irreducible cubic $C'$ and $C$ be the strict transform of $C'$ on $X.$ Since $C^{2}=-1,$ the curve contracts to a point under a birational map $f: X\to Y.$ We leave to the reader the task of verifying that 1) the Leray Spectral Sequences for $H^{2}(X,\rat)$ and for $I\!H^{2}(Y, \rat)$ are not $E_{2}-$degenerate, even though the Leray spectral sequence always degenerates over suitably small Euclidean neighborhoods on $Y,$ and that 2) the complex $IC_{Y}$ does not split as a direct sum of its shifted cohomology sheaves. } \end{ex}
The following more general class of examples shows that the failure of the $E_{2}-$degeneration is very frequent. \begin{ex}
\label{exgen} {\rm Let $f: X \to Y$ be a projective resolution of the singularities of a projective and normal variety $Y$ such that there is at least one index $i$ such that the natural MHS on $H^{i}(Y, \rat)$ is not pure (e.g. $i=2$ in \ref{exnondeg}). Then the Leray Spectral Sequence for $f$ is not $E_{2}-$degenerate. If it were, then the edge sequence would give an injection of MHS $f^{*}: H^{j}(Y, \rat) \to H^{j}(X, \rat),$ forcing the MHS of such a $Y$ to be pure. } \end{ex}
However, not everything is lost.
\section{Perverse sheaves and the Decomposition Theorem} \label{macc} One of the main ideas leading to the theory of perverse sheaves is that Theorem \ref{del}, which holds for smooth maps, can be made to hold for arbitrary proper algebraic maps provided that it is re-formulated using the perverse cohomology functor $^{p}\!{\cal H}^{0}$ in place of the cohomology sheaf functor ${\cal H}^{0}.$ Just as the latter is $\tau_{\leq 0}\tau_{\geq 0},$ with $\tau$ the standard truncation functors of a complex,
the perverse cohomology functor will be expressed as $^{p}\!{\cal H}^{0}= \, ^{p}\!\tau_{\leq 0} \, \!^{p}\!\tau_{\geq 0},$ where $^{p}\!\tau$ is the so-called perverse truncation functor. Roughly speaking, the perverse truncation functor (with respect to middle perversity, which is the only case we will consider) is defined by gluing standard truncations on the strata, shifted by a term which depends on the dimension of the stratum. The choice of the shifting is dictated by the behavior of the standard truncation with respect to duality, as we suggest in \ref{troncodiaz}. In this context and keeping this in mind, perverse truncation becomes quite natural. We believe it can be useful to give a few details of its construction and an example of computation, related to the examples given in section \ref{psif}. In analogy with the cohomology sheaf functor ${\cal H}^0,$ the perverse cohomology functor $^{p}\!{\cal H}^{0}$ will be a cohomological functor which takes values in an abelian subcategory of ${D}^{b}(Y),$ whose objects are the so-called perverse sheaves. For a general proper map these objects play the role played by local systems for smooth maps.
\subsection{Truncation and Perverse sheaves} \label{sps} Let ${D}^{b}(Y)$ be the bounded derived category of the category $S(Y)$ of sheaves of rational vector spaces on $Y.$ We are interested in the full subcategory $D(Y)$ of those complexes whose cohomology sheaves are constructible. This means that, given an object $F$ of $D(Y),$ there is an algebraic Whitney stratification $Y=\coprod{S_{l}},$ depending on $F,$ such that
${\cal H}^{j}(F)_{|S_{l}}$ is a finite rank local system. By the Thom Isotopy Lemmata, $Rf_{*}\rat_{X}$, and in fact any other complex appearing in this paper, is an object of $D(Y).$ One is interested in direct sum decompositions of this complex, in the geometric meaning of the summands and in the consequences, both theoretical and practical, of such splittings.
We now define the $t$-structure on $D(Y)$ associated with the middle perversity. Instead of insisting on its axiomatic characterization (cf. \ci{bbd}), we give the explicit construction of the {\em perverse truncations}
$\ptd{m}: D(Y) \longrightarrow D(Y)$, and $\ptu{m}: D(Y) \longrightarrow D(Y)$. These come with natural morphisms
$\ptd{m}F \longrightarrow F$ and
$F \longrightarrow \ptu{m}F.$
\n We start with the following: \begin{lm} \label{troncodiaz} Let $Z$ be nonsingular of complex dimension $r$, and $F \in D(Z)$ with locally constant cohomology sheaves. Then there are natural isomorphisms: $$ \tau_{\leq k} {\cal D}F \simeq {\cal D}\tau_{\geq -k-2r}F \qquad \tau_{\geq k} {\cal D}F \simeq {\cal D}\tau_{\leq -k-2r}F $$ \end{lm} {\em Proof.} Since the dualizing complex is in this case isomorphic to $\rat_Z[2r],$ it is enough to prove that there are natural isomorphisms $$ \tau_{\leq k} Rhom(F,\rat_Z) \simeq Rhom(\tau_{\geq -k}F, \rat_Z) \qquad \tau_{\geq k} Rhom(F,\rat_Z) \simeq Rhom(\tau_{\leq -k}F, \rat_Z). $$ We prove the first statement. The proof of the second is analogous. Applying $Rhom$ and $\td{k}$ to the map $F \to \tau_{\geq -k}F,$ we get: $$ \begin{array}{ccc}
Rhom(\tau_{\geq -k}F, \rat_Z) & \longrightarrow & Rhom(F, \rat_Z) \\ \uparrow & & \uparrow \\ \td{k} Rhom(\tau_{\geq -k}F, \rat_Z) & \stackrel{}\longrightarrow & \td{k} Rhom( F, \rat_Z). \end{array} $$
\n To prove the statement it is enough to show that the three complexes $ Rhom(\tau_{\geq -k}F, \rat_Z),$ $\td{k} Rhom(\tau_{\geq -k}F, \rat_Z)$ and $ \td{k} Rhom( F, \rat_Z)$ have the same cohomology sheaves. Since $F$ and $\rat_{Z}$ have locally constant cohomology sheaves, there are natural isomorphisms of complexes of vector spaces $Rhom (F,\rat_{Z})_{y} \simeq Rhom(F_{y}, \rat_{y}) \simeq \oplus_i Hom( {\cal H}^{-i}F_{y}, \rat_{y})[-i]$. The cohomology sheaves of the three complexes, are, therefore, equal to $Hom( {\cal H}^{-i}F_{y}, \rat_{y})$ for $i\leq k$ and vanish otherwise. \blacksquare
The construction of the perverse truncation is done by induction on the strata of $Y$ starting from the shifted standard truncation on the open stratum $U_{d}.$ In the sequel we will indicate by $U_l$ the union of strata of dimension bigger than or equal to $l$. With a slight abuse of notation, we will write $U_{l+1}=U_l \coprod S_l,$ with $S_l$ now denoting the union of strata of dimension $l.$ Let $F \in Ob (D(Y))$ be ${\frak Y}-$constructible for some stratification $\frak Y= \coprod S_l.$ All the constructions below will lead to ${\frak Y}-$constructible complexes.
We define $\ptd{0}^{U_{d}}=\tau_{\leq -\dim{Y}}$ and $\ptu{0}^{U_{d}}=\tau_{\geq -\dim{Y} }$.
\n Suppose that $\ptd{0}^{U_{l+1}}: D(U_{l+1}) \longrightarrow D(U_{l+1}) $ and $\ptu{0}^{U_{l+1}}: D(U_{l+1}) \longrightarrow D(U_{l+1})$ have been defined.
\n We proceed to define $\ptd{0}^{U_l}$ and $\ptu{0}^{U_l}$ on $U_l=U_{l+1} \coprod S_l$. Let $i:S_l \to U_l \longleftarrow U_{l+1}: j $ be the inclusions: the exact triangles $$ \tau'_{\leq 0}F \to F \to Rj_* \,^{p}\tau_{ > 0 }^{U_{l+1}}j^*F \stackrel{[1]}\longrightarrow \qquad \tau''_{\leq 0}F \to F \to i_* \tau_{>-\dim{S} } i^*F \stackrel{[1]}\longrightarrow $$ and $$ Rj_!\,^{p}\tau_{ < 0 }^{U_{l+1}}j^!F \longrightarrow F \longrightarrow \tau'_{\geq 0}F \stackrel{[1]}\longrightarrow \qquad i_!\tau_{ < -\dim{S} }i^!F \longrightarrow F \longrightarrow \tau''_{\geq 0}F \stackrel{[1]}\longrightarrow $$ define four functors (cf. \ci{bbd}, 1.1.10, 1.3.3 and 1.4.10), i.e. the four objects $\tau'_{\geq 0}F,$ $ \tau'_{\leq 0}F,$ $\tau''_{\geq 0}F$ and $ \tau''_{\leq 0}F$ which make the corresponding triangles exact, are determined up to unique isomorphism. Define $$ \ptd{0}^{U_{l}}:= \tau''_{\leq 0} \tau'_{\leq 0}, \qquad \ptu{0}^{U_{l}}:= \tau''_{\geq 0} \tau'_{\geq 0}. $$ Define: $$ \ptd{0}:=\ptd{0}^{U_{0}}, \qquad \ptu{0}:=\ptu{0}^{U_{0}}. $$ We have the following compatibilities with respect to shifts.
$$
\ptd{m} (F [l] ) \simeq \ptd{m+l} (F) [l], \qquad
\ptu{m} (F [l] ) \simeq \ptu{m+l} (F) [l].
$$ These formulas hold for the ordinary truncation functors as well and we symbolically summarize them as follows $$ (\tau_{m} ( [l] ))[-l] \, = \, \tau_{m+l}. $$ The perverse truncations so defined have the following properties:
\begin{itemize} \item By the construction above, if $F$ is ${\frak Y}-$cc, then so are
$\ptd{m}F$ and $\ptu{m}F$.
\item Let $P(Y)$ be the full subcategory of complexes $Q$ such that
$$ \dim \hbox{ Supp } ( {\cal H}^{-i}(Q) ) \leq i \hbox{ for every } i \in \zed $$ and the same holds for ${\cal D}(Q),$ the Verdier dual of $Q$. $P(Y) $ is an abelian category.
The functor
$$
\phix{0}{-}: D(Y) \longrightarrow P(Y), \qquad \phix{0}{F}: =
\ptd{0} \ptu{0}F \simeq \ptu{0} \ptd{0} F,
$$ is cohomological.
Define
$$
\phix{m}{F}:= \phix{0}{F[m]}.
$$
These functors are called {\em the perverse cohomology functors}.
Any distinguished triangle $F \longrightarrow G \longrightarrow H \stackrel{[1]}\longrightarrow$
in $D(Y)$
gives rise to a long exact sequence in $P(Y)$:
$$
\ldots \longrightarrow \phix{i}{F} \longrightarrow \phix{i}{G} \longrightarrow
\phix{i}{H} \longrightarrow \phix{i+1}{F} \longrightarrow \ldots.
$$ If $F$ is ${\frak Y}-$cc, then so are $\phix{m}{F},$ $\forall m \in \zed.$
\item
Poincar\'e- Verdier Duality induces functorial
isomorphisms for $F \in Ob ( D(Y) )$
$$\ptd{0}{\cal D}F \simeq {\cal D}\ptu{0}F, \qquad \ptu{0}{\cal D}F \simeq {\cal D}\ptd{0}F \qquad
{\cal D} ( \phix{j}{F}) \simeq \phix{-j}{ {\cal D} (F) }.
$$ This can be seen from the construction above. In fact, by Lemma \ref{troncodiaz}, the isomorphisms hold for $U=U_{d},$ since $\ptd{0}^{U_d}=\tau_{\leq -\dim{Y}}$ and $\ptu{0}^{U_d}=\tau_{\geq -\dim{Y}}$.
\n Suppose that $\ptd{0}^U{\cal D}\simeq {\cal D}\ptu{0}^U$ and $ \ptu{0}^U{\cal D} \simeq {\cal D}\ptd{0}^U$ for $U=U_{l+1}$. It then follows that the same isomorphisms hold for $U=U_l.$ In fact, applying the functor ${\cal D}$ to the triangle defining $\tau'_{\leq 0}{\cal D}F$, and the inductive hypothesis $\ptd{0}^U{\cal D}\simeq {\cal D}\ptu{0}^U$, we get the triangle defining $\tau'_{\geq 0}F$, so that ${\cal D}\tau'_{\leq 0}{\cal D}F \simeq \tau'_{\geq 0}F.$ The argument for $\tau''_{\leq 0}$ is identical. We get ${\cal D}\tau''_{\leq 0}{\cal D}F \simeq \tau''_{\geq 0}F.$ It follows that ${\cal D} \tau''_{\leq 0} \tau'_{\leq 0} \simeq \tau''_{\geq 0} {\cal D} \tau'_{\leq 0}\simeq \tau''_{\geq 0} \tau'_{\geq 0} {\cal D}$ and the first wanted isomorphism follows. The second is equivalent to the first one. The third one follows formally: ${\cal D}( \phix{m}{F}) \simeq {\cal D} \ptd{0} \ptu{0} (F[-m]) \simeq \ptu{0}\ptd{0} ({\cal D}(F) [m]) \simeq \phix{-m}{{\cal D}F}.$
\item
For every $F$ and $m$ one constructs,
functorially, a distinguished triangle
$$
\ptd{m} F \longrightarrow F \longrightarrow \ptu{m+1} F \stackrel{[1]}\longrightarrow.
$$
\end{itemize}
The objects of the abelian category
$P (Y)$ are called {\em perverse sheaves.}
An object $F$ of $D(Y)$ is perverse if and only if
the two natural maps $\ptd{0} F \longrightarrow F$ and $ F \longrightarrow \ptu{0} F $ are isomorphisms.
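For instance, if $Y$ is smooth of dimension $d,$ then $\rat_{Y}[d]$ is perverse: its only nonzero cohomology sheaf is ${\cal H}^{-d}(\rat_{Y}[d]) = \rat_{Y},$ whose support has dimension $d \leq d,$ and the same support condition holds for the Verdier dual, since $$ {\cal D}(\rat_{Y}[d]) \, \simeq \, \omega_{Y}[-d] \, \simeq \, \rat_{Y}[2d][-d] \, = \, \rat_{Y}[d]. $$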
\begin{ex} \label{semismall} {\rm Let $f:X \longrightarrow Y$ be a surjective proper map of surfaces, with $X$ smooth. The direct image $Rf_* \rat_{X} [2]$ is a perverse sheaf.} \end{ex}
\begin{ex} \label{3foldtrunc} {\rm To give an example of how the truncation functors can be computed from the construction given above, let us examine the example of section \ref{lde}. The assumptions in \ref{3foldass} are in force and we use the same notation. We show that: $$ \ptd{0}Rf_* \rat_X[3]\simeq \tau_{\leq 0}Rf_* \rat_X[3],
\qquad
\ptd{-1}Rf_* \rat_X[3]=H_4(D)_y[1]. $$ Since $^{p}\!\tau^{Y-y}_{> 0}=\tau_{> -3}$ and $j^*Rf_*\rat_X[3]=\rat_{Y \setminus y}[3],$ we have $\tau '_{\leq 0}Rf_* \rat_X[3]= Rf_* \rat_X[3].$ The perverse truncation $\ptd{0}Rf_* \rat_X[3]=\tau ''_{\leq 0} \tau'_{\leq 0}Rf_* \rat_X[3]= \tau ''_{\leq 0}Rf_* \rat_X[3]$ is computed by the triangle $$
\tau ''_{\leq 0}Rf_* \rat_X[3] \longrightarrow Rf_* \rat_X[3] \longrightarrow i_*\,
\tau_{>0}\,i^*Rf_* \rat_X[3]
\stackrel{+1}{\longrightarrow}. $$ Since $i^*Rf_* \rat_X[3] = \oplus_{j} H^{3-j}(D)_y[j],$ we have $i_* \tau_{>0}i^*Rf_* \rat_X[3] = H^4(D)_y[-1]$, so that $$ \ptd{0}Rf_* \rat_X[3]\, \simeq \, \mbox{Cone} \, \{Rf_* \rat_X[3] \to H^4(D)_y[-1] \} \, \simeq \, \tau_{\leq 0}Rf_* \rat_X[3]. $$ Keeping in mind the truncation rules, we have the triangle $$
\tau '_{\leq -1}Rf_* \rat_X[3] \longrightarrow Rf_* \rat_X[3] \longrightarrow Rj_* \,^{p}\!\tau_{>-1}^{Y-y} j^* \rat_Y[3] =Rj_* j^*\rat_Y[3] \stackrel{+1}{\longrightarrow} $$ from which we deduce that $$ \tau '_{\leq -1}Rf_* \rat_X[3] \, \simeq \, i_!i^! Rf_* \rat_X[3] \, \simeq \, \oplus_{j} H_j(D)_y[j-3]. $$ The truncation $\ptd{-1}Rf_* \rat_X[3]=\tau ''_{\leq -1} \tau'_{\leq -1}Rf_* \rat_X[3]= \tau ''_{\leq -1}(\oplus_{j}{ H_j(D)_y[j-3])}$ is computed by the triangle $$ \tau ''_{\leq -1}(\oplus_{j}{ H_j(D)_y[j-3])} \longrightarrow \oplus_{j}{ H_j(D)_y[j-3]} \to i_* \tau_{>-1}(\oplus_{j}{ H_j(D)_y[j-3] ) } \stackrel{+1}{\longrightarrow} $$ from which the conclusion follows. } \end{ex}
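Spelling out the last step: in $\oplus_{j}{H_{j}(D)_{y}[j-3]}$ the summand $H_{j}(D)_{y}$ sits in degree $3-j,$ so that $\tau_{>-1}$ retains exactly the summands with $3-j \geq 0,$ i.e. with $j \leq 3.$ The defining triangle then leaves only the summand with $j=4:$ $$ \ptd{-1}Rf_{*}\rat_{X}[3] \, = \, \tau''_{\leq -1} ( \oplus_{j}{H_{j}(D)_{y}[j-3]} ) \, = \, H_{4}(D)_{y}[1]. $$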
It is remarkable that the category of perverse sheaves is Artinian and Noetherian, that its simple objects can be completely characterized and that they have an important geometric meaning: they are the intersection cohomology complexes.
\subsection{The simple objects of $P(Y)$} \label{simple} Goresky and MacPherson introduced the intersection cohomology groups of $Y$ for an arbitrary perversity. Here we deal with the case of middle perversity. These groups were first defined as the homology of a chain sub-complex of the complex of geometric chains with twisted coefficients on $Y.$ Later, following a suggestion by Deligne, they realized these groups as the hypercohomology of what they called the intersection cohomology complexes with twisted coefficients of $Y.$
\n These complexes are the building blocks of $P(Y).$ They are special examples of perverse sheaves and every perverse sheaf can be exhibited as a finite series of non trivial extensions of objects of this kind supported on closed subvarieties of $Y.$
Let $Z \subseteq Y$ be a closed subvariety, $Z^{o} \subseteq Z_{reg} \subseteq Z $ be an inclusion of Zariski-dense open subsets and $L$ be a local system on $Z^{o}.$
Goresky-MacPherson associate with this data the intersection cohomology complex $IC_{Z}(L)$ in $P(Z).$
\n Up to isomorphism, this complex is independent of the choice of $Z^{o}:$ if $L$ and $L'$ are local systems on
$Z^{o}$ and ${Z^{o}}'$ respectively and $L_{|Z^{o}\cap {Z^{o}}' }
\simeq L'_{| Z^{o}\cap {Z^{o}}' },$ then the associated intersection cohomology complexes on $Z$ are canonically isomorphic.
\n The complex $IC_{Z}(L),$ when viewed as a complex on $Y,$ is perverse on $Y.$
\n The intersection cohomology complex of $Y$ is defined to be $IC_{Y}:= IC_{Y}(\rat_{Y_{reg}}).$ If $Y$ is smooth, or a rational homology manifold, then $IC_{Y}\simeq \rat_{Y}[\dim{Y}].$
\n If $Z$ is smooth and $L$ is a local system on $Z,$ then $IC_{Z}(L) \simeq L[\dim{Z}].$
\begin{pr}
\label{so} The simple objects in $P(Y)$ are precisely the ones of the form $IC_{Z}(L),$ $L$ simple on $Z^{o}.$ In particular, if $L$ is simple, then $IC_{Z}(L)$ does not decompose into non-trivial direct summands in $D(Y).$
\n The semisimple objects of $P(Y)$ are finite direct sums of such intersection cohomology complexes on possibly differing subvarieties. \end{pr}
Every perverse sheaf $Q \in P(Y)$ is supported on a finite union of closed subvarieties of $Y.$ Let $Z$ be any one of them. There is a Zariski-dense open subset $Z^{o}\subseteq Z_{reg},$ such that
$Q_{|Z^{o}} \simeq L[\dim{Z}],$ where $L$ is a local system on $Z^{o}.$ The object $Q$ admits a finite filtration in which one of the quotients is $IC_{Z}(L)$ and all the others are supported on $Supp(Q) \setminus Z^{o}.$ It follows that
$Q$ admits a finite filtration where the quotients are intersection cohomology complexes supported on closed subvarieties of $Y.$
An intersection cohomology complex $IC_{Z}(L)$ is characterized by its not admitting subquotients supported on smaller dimensional subspaces of $Z.$ Any splitting of it is entirely due to a corresponding splitting of $L.$
Let us define the intersection cohomology complexes. Assume ${\frak Y}$ is a stratification and $L$ is a local system on the open stratum $U_d.$ We start by defining $IC_{U_{d}}(L):=L[\dim{Y}].$ Now suppose inductively that $IC_{U_{l+1}}(L)$ has been defined on $U_{l+1}$ and we define it on $U_l$ by $$ IC_{U_l}(L):=\tau_{\leq -l-1}Rj_*IC_{U_{l+1}}(L). $$
Let us give formulae for $IC_{Y}(L)$ when $Y$ and $L$ have isolated singularities. It suffices to work in the Euclidean topology.
\n Let $(Y,p)$ be a germ of an isolated singularity, $j: U: = Y \setminus p \to Y$ be the open embedding and $L$ be a local system on $U.$ We have \begin{equation}
\label{icis} IC_{Y}(L) = \td{-1} (Rj_{*}L[\dim{Y}]). \end{equation} If $\dim{Y} =1,$ then $IC_{Y}(L) = j_{*}L[1].$ The stalk at $p$ is the space of invariants of $L,$ i.e. $H^{0}(U, L).$
\n In general, when $\dim{Y} \geq 2,$ then $IC_{Y}(L)$ is a complex, not a sheaf. If $L$ is simple, then $IC_{Y}(L)$ is simple and does not split non-trivially in $D(Y).$ The cohomology sheaves ${\cal H}^{j}(IC_{Y}(L))$ are non trivial only for $j \in [-\dim{Y}, -1]$ and we have \begin{equation}
\label{csisl} {\cal H}^{-\dim{Y}}(IC_{Y}(L)) = j_{*}L, \qquad {\cal H}^{-\dim{Y}+l}(IC_{Y}(L)) = H^{l}(U,L)_{p}, \; 1 \leq l \leq \dim{Y} -1, \end{equation} where $V_{p}$ denotes a skyscraper sheaf at $p\in Y$ with stalk $V.$
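\n For instance, for a surface germ $(Y,p)$ with link ${\cal L}$ and $L = \rat_{U},$ formula (\ref{csisl}) reads $$ {\cal H}^{-2}(IC_{Y}) = j_{*}\rat_{U}, \qquad {\cal H}^{-1}(IC_{Y}) = H^{1}(U, \rat)_{p} = H^{1}({\cal L}, \rat)_{p}, $$ since the punctured neighborhood $U$ retracts onto the link ${\cal L}.$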
In order to familiarize ourselves with these complexes, we compute two important examples:
\begin{ex} \label{3folddoublepoint} {\rm We consider a threefold $Y$ with an ordinary double point $y$ and with associated link ${\cal L}.$ Let $j:Y\setminus y \to Y$ be the open embedding, so that (\ref{icis}) gives $$ IC_Y= \tau_{\leq -1}Rj_*\rat[3]. $$ The stalks of the cohomology sheaves at $y$ are $$ {\cal H}^{k}(IC_{Y})_{y} = H^{k+3}({\cal L}), \hbox{ for } k \leq -1, \qquad {\cal H}^{k}(IC_{Y})_{y} = 0 \hbox{ otherwise. } $$ The singularity is analytically equivalent to a cone over a smooth quadric in projective space, hence its link is homeomorphic to the $S^1-$bundle over $S^2 \times S^2$ with Chern class $(1,1).$ The long exact sequence for this $S^1-$fibration gives $$ H^k({\cal L})= \rat \hbox{ for } k=0,2,3,5, \qquad H^k({\cal L})= 0 \hbox{ otherwise, } $$ which in turn implies that $$ {\cal H}^{k}(IC_{Y})_{y} = \rat \hbox{ for } k= -3, -1, \qquad {\cal H}^{k}(IC_{Y})_{y} = 0 \hbox{ otherwise. } $$ We have a triangle in $D(Y)$ (not of perverse sheaves) $$ \rat_Y[3] \to IC_Y \to \rat_y[1] \stackrel{+1}{\to} $$ The fact that ${\cal H}^{-1}(IC_{Y})_{y} = \rat $ should be compared with the existence of a small resolution with fiber a projective line over the singular point, and the statement of the Decomposition Theorem \ref{dtbbd}.} \end{ex} \begin{ex} \label{cksrecipe} {\rm Let $Y=\comp^2,$ and $L$ be a local system on $\comp^2 \setminus ( x_1x_2=0)$ defined by the two monodromies $T_1$ and $T_2$ acting on the vector space $V=L_{p},$ the stalk of $L$ at $p=(1,1) \in \comp^{2}.$ We first determine the intersection cohomology complex over $\comp^2 \setminus \{0\}.$ Denoting by $j:\comp^2 \setminus \{ x_1x_2=0\} \to \comp^2 \setminus \{0\}$ the natural map, we have
$IC_{\comp^2 \setminus \{0\} }(L)=\tau_{\leq -2}Rj_*L[2]=(j_*L) [2].$ Denoting by $j':\comp^2 \setminus \{0\} \to \comp^2$ the natural map, we have $$ IC_{\comp^2}(L)\, = \, \tau_{\leq -1}Rj'_* IC_{\comp^2 \setminus \{0\}}(L) \, =\, \tau_{ \leq -1}Rj'_*(j_*L [2]). $$ In order to determine the cohomology sheaves of $IC_{\comp^2}(L),$ we compute $H^i(\comp^2 \setminus \{0\}, j_*L)$ for $i=0,1.$ More precisely, we should determine these groups for a fundamental system of neighborhoods of the origin; however, the cohomology groups are in fact constant. Set $N_1=T_1-Id,$ $N_2=T_2-Id.$
\n We have $H^0(\comp^2 \setminus \{0\}, j_*L)=H^0(\comp^2 \setminus \{ x_1x_2=0\},L) = \hbox{\rm Ker \,} { N_1} \cap \hbox{\rm Ker \,} { N_2}=V^{\pi_1}.$ Since fundamental deleted neighborhoods around the axes are homotopy equivalent to circles, we have $$ {\cal H}^{i}(Rj_*L) =0 \hbox{ for } i \geq 2, $$ and, since $j_*L= \tau_{\leq 0}Rj_*L,$ there is the following exact triangle in $D(\comp^2 \setminus \{0\})$
$$ j_*L =\tau_{\leq 0}Rj_*L \longrightarrow Rj_*L \longrightarrow {\cal H}^{1}(Rj_*L)[-1] \stackrel {+1} {\longrightarrow}. $$ The sheaf ${\cal H}^{1}(Rj_*L)$ is the local system on $( x_1x_2=0) \setminus \{(0,0) \}=D_1 \coprod D_2$, with fiber $ \hbox{\rm Coker \,} { N_1}$ and monodromy $T_2$ on $D_1$, and fiber $ \hbox{\rm Coker \,} { N_2}$ and monodromy $T_1$ on $D_2$. Since $\comp^2 \setminus \{ x_1x_2=0\}$ retracts to a torus $T^2$, the cohomology of $L$ is isomorphic to the group cohomology of $\zed ^2$ with values in $V$ as a $\zed ^2$-module via the monodromies $T_1,T_2,$ which can be computed by the Koszul complex (see for instance \ci{weibel} )
$$ 0 \longrightarrow V \stackrel{\phi}{\longrightarrow}
V \oplus V \stackrel{ \psi}{\longrightarrow}V {\longrightarrow} 0, $$ with $$ \phi(v)= (N_1(v),N_2 (v)) \qquad \psi(v_1,v_2)=N_2(v_1) -N_1(v_2). $$ The long exact sequence associated to the exact triangle above gives $$ {\cal H}^{-1}(IC_{\comp^2}(L))_0 \simeq H^1(\comp^2 \setminus \{0\}, j_*L) \,= \, \frac{ \{ (N_1(v_1),N_2(v_2)) \hbox{ such that } N_1N_2(v_1-v_2)=0 \} }{ \{ (N_1( v),N_2( v)) \} }. $$ More generally, a similar recipe holds for the cohomology sheaves of the intersection cohomology complex of a local system defined on the complement of a normal crossing divisor; see \ci{cks}.} \end{ex}
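To illustrate the formula, take $V = \rat^{2}$ and $T_{1} = T_{2}$ unipotent and nontrivial, so that $N := N_{1} = N_{2} \neq 0$ and $N^{2} = 0.$ Then $N_{1}N_{2} = 0$ and the condition $N_{1}N_{2}(v_{1}-v_{2})=0$ is automatic: the numerator is $N(V) \oplus N(V),$ of dimension two, while the denominator $\{ (N(v), N(v)) \}$ has dimension one, whence $$ {\cal H}^{-1}(IC_{\comp^{2}}(L))_{0} \, \simeq \, \rat. $$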
\subsection{Decomposability, $E_{2}-$degenerations and filtrations} \label{de2} \begin{defi} \label{hdec} {\rm Let ${\cal H}= {\cal H}^{0}$ be the sheaf cohomology functor. We say that $F$ in $D(Y)$ is {\em ${\cal H}-$decomposable} if $$ F \simeq_{D(Y)} \bigoplus_{i}{ {\cal H}^{i}(F)[-i] } $$ We say that $F$ in $D(Y)$ is {\em $^{p}{\cal H}-$decomposable} if $$ F \simeq_{D(Y)} \dsdix{i}{F}. $$ } \end{defi}
If $F$ is ${\cal H}-$decomposable, then the spectral sequence $$ {H}^{p}( Y, {\cal H}^{q}(F) ) \Longrightarrow {\Bbb H}^{p+q}(Y,F) $$ is $E_{2}-$degenerate. This spectral sequence is the Leray Spectral Sequence when $F= Rf_{*}(G).$ In this case the corresponding filtration is called the Leray filtration.
\n The analogous statement holds for $^{p}\!{\cal H}-$decomposability. The corresponding spectral sequence is called the Perverse Leray Spectral Sequence: $$ {\Bbb H}^{p}(Y, \, ^{p}{\cal H}^{q}(Rf_{*}G) ) \Longrightarrow {\Bbb H}^{p+q}(X,G) $$ and the corresponding filtration is called the perverse filtration. \begin{defi} \label{pf} {\rm
Let $f: X \to Y$ be a map, $n = \dim{X}.$
The perverse filtration $H^{n+j}_{\leq b}(X) \subseteq H^{n+j}(X),$ $
b,j \in \zed$
is defined to be the perverse filtration
on ${\Bbb H}^{j}(Y, Rf_*\rat_{X}[n]).$
}
\end{defi}
It coincides, up to a shift, with the Leray filtration, when $f$ is smooth.
If these decomposing isomorphisms exist, they are seldom unique. We now give the statement (not in the most general form) of one of the more general criteria for decomposability, see \ci{dess} and \ci{shockwave}:
\begin{tm} \label{delilefsdege} {\bf \rm (Deligne degeneration criterion.)} Let $K$ be an object of $D^b(Y)$ and let $\eta: K \to K[2]$ be a morphism, e.g. the cup product with a cohomology class. Suppose that $\eta^l : \phix{-l}{K} \to \phix{l}{K}$ is an isomorphism for all $l.$ Then $K$ is $^{p}\!{\cal H}-$decomposable. The same statement holds if we consider the functor ${\cal H}$. \end{tm}
\begin{ex} \label{3foldancor} {\rm By the computation done in \ref{3foldtrunc}, we have the following description of the perverse filtration for the resolution of a threefold: $$ H^{i}_{\leq -2}(X)=\{0\}, \qquad H^2_{\leq -1}(X)= \hbox{\rm Im \,} \{H_4(D) \to H^2(X)\}, \qquad H^i_{\leq -1}(X)=0 \hbox{ otherwise, } $$ $$ H^4_{\leq 0}(X)= \hbox{\rm Ker \,} \{H^4(X) \to H^4(D)\}, \qquad H^i_{\leq 0}(X)=H^i(X) \hbox{ otherwise }, $$ $$
H^i_{\leq 1}(X)=H^i(X) \hbox { for all } i. $$ The condition of \ref{delilefsdege}, that $$\eta: \phix{-1}{Rf_* \rat_X[3]}= H_4(D)_y \longrightarrow H^4(D)_y=\phix{1}{Rf_* \rat_X[3]} $$ be an isomorphism, is just the non degeneracy of the intersection form $\eta \cap [D_i]\cdot[D_j].$ Note that in this case, the explicit description makes it clear that the perverse filtration is given by Hodge substructures.} \end{ex}
\subsection{The Decomposition Theorem of Beilinson, Bernstein, Deligne and Gabber} \label{sdtbbd} We can now state the generalization of Deligne's Theorem \ref{del} to the case of arbitrary proper maps. Recall that if $X$ is smooth, then $IC_{X} = \rat_{X}[n].$ \begin{tm} \label{dtbbd} Let $f:X \to Y$ be a proper map of algebraic varieties. Then \begin{equation}
\label{fd}
\mbox{the complex $Rf_{*}IC_{X} \simeq
\dsdix{i}{Rf_{*}IC_{X}}$ is $^p\!{\cal H}-$decomposable }
\end{equation} The complexes $\phix{j}{Rf_{*}IC_{X}}$ are semisimple, i.e. there is a canonical isomorphism \begin{equation}
\label{semisi}
\phix{j}{Rf_{*}IC_{X}} \,\simeq_{P(Y)} \,
\oplus{IC_{Z_{a}}(L_{a} )} \end{equation} for some finite collection, depending on $j,$ of semisimple local systems $L_{a}$ on smooth \underline{distinct} varieties $Z_{a}^{o} \subseteq Z_{a} \subseteq Y.$
\n Let $\eta$ be an $f-$ample line bundle on $X.$
Then \begin{equation}
\label{rhl} \eta^{r} \, : \, \phix{-r }{ Rf_{*}IC_{X} } \, \simeq \, \phix{r }{ Rf_{*}IC_{X} }.
\end{equation} \end{tm}
The Verdier Duality functor is an autoequivalence ${\cal D}: D(Y) \to D(Y)$ which preserves $P(Y)$ and for which one has $$ {\cal D} \circ \, ^{p}\!{\cal H}^{-j} \, \simeq \, ^{p}\!{\cal H}^{j} \circ {\cal D}. $$ This fact implies that the summands appearing in the semisimplicity statement for $j$ are pairwise isomorphic to the ones appearing for $-j$ and that the local systems $L$ are self-dual.
Theorem \ref{dtbbd} is the deepest known fact concerning the homology of algebraic maps.
\noindent The original proof uses algebraic geometry in positive characteristic in an essential way.
\n M. Saito has given a transcendental proof of a more general statement concerning his mixed Hodge modules in the series of papers \ci{samhp}, \ci{samhm}.
\n
We give a proof for the push-forward of intersection cohomology (with constant coefficients) first in the case of semismall maps (cf. \ci{demigsemi}) and
then for arbitrary maps in \ci{decmightam}. Though at present
our methods do not afford results concerning the push-forward with
more general coefficients, they give new and precise results on the perverse filtration and on the refined intersection forms.
\n C. Sabbah \ci{sa} has recently proved a decomposition theorem for push-forwards of semisimple local systems.
\begin{rmk} \label{infatt} {\em It is now evident that the computations in \ref{fsts} and \ref{fstc} establish the Decomposition Theorem for maps from a smooth surface. In the case of the proper birational map $f:X \to Y$ of \ref{fsts}, in fact, the complex $Rf_* \rat_X[2]$ is perverse, as observed in \ref{semismall}, and \ref{tmrs} states that it splits into $IC_Y$ and $R^2f_* \rat_X[0].$ In the case of the family of curves treated in \ref{fstc} we have that $j_*T^0[2]=\phix{-1}{ Rf_{*}\rat_{X}[2] }[1],$ and $j_*T^2[0]=\phix{1}{ Rf_{*}\rat_{X}[2] }[-1],$ and we showed in \ref{rhl1} that $\eta: j_*T^0 \to j_*T^2$ is an isomorphism. The perverse sheaf $P$ splits, see \ref{semisimp1}, into $j_*T^1[1]=IC_Y(T^1),$ and $V$, concentrated on points. } \end{rmk}
\begin{rmk} \label{infatt3fold} {\em For the case of the resolution of a threefold with isolated singularities, whose Hodge theory has been treated in \ref{lde}, we have, as seen in \ref{3foldtrunc}, $$\phix{-1}{ Rf_{*}\rat_{X}[3] }\simeq H_4(D)_y, \qquad \phix{1}{ Rf_{*}\rat_{X}[3] }\simeq H^4(D)_y \simeq \eta \wedge H_4(D)_y,$$ and we have the splitting $$ \phix{0}{ Rf_{*}\rat_{X}[3] }\simeq IC_Y \oplus H_3(D)_y. $$ Similarly, for the 4-fold with isolated singularities, see \ref{lde4}, $$\phix{-2}{ Rf_{*}\rat_{X}[4] }\simeq H_6(D)_y, \qquad \phix{-1}{ Rf_{*}\rat_{X}[4] }\simeq H_5(D)_y ,$$ $$\phix{2}{ Rf_{*}\rat_{X}[4] }\simeq H^6(D)_y\simeq \eta^2 \wedge H_6(D)_y, \qquad \phix{1}{ Rf_{*}\rat_{X}[4] }\simeq H^5(D)_y \simeq \eta \wedge H_5(D)_y,$$ and we have the splitting $$ \phix{0}{ Rf_{*}\rat_{X}[4] }\simeq IC_Y \oplus H_4(D)_y. $$} \end{rmk}
\subsection{Results on intersection forms } \label{inr}
In this section we list some of the results of \ci{decmightam} which are related to the theme of this paper. For simplicity, we state them in the special case when $f: X \to Y$ is a map of projective varieties, $X$ smooth. Let $\eta$ and $A$ be ample line bundles on $X$ and $Y$ respectively, $L:= f^{*}A.$
\begin{tm} \label{uf} For $l\geq 0$ and $b\in \zed,$ the subspaces given by the perverse filtration (cf. \ref{de2}) $$ H^{l}_{\leq b}(X) \, \subseteq\, H^{l}(X) $$ are pure Hodge sub-structures. The quotient spaces $$ H^l_b(X)\, = \, H^{l}_{\leq b}(X) /H^{l}_{\leq b-1}(X) $$ inherit a pure Hodge structure of weight $l.$ \end{tm}
The cup product with $\eta$ satisfies $\eta \, H^l_{\leq a}(X) \subseteq H^{l+2}_{\leq a+2}(X)$ and induces maps, still denoted $\eta: H^l_{a}(X) \to H^{l+2}_{a+2}(X)$. The cup product with $L$ is compatible with the Decomposition Theorem \ref{dtbbd} and induces maps $L: H^l_{a}(X) \to H^{l+2}_{a}(X).$
\n These maps satisfy graded Hard Lefschetz Theorems (cf. \ci{decmightam}, Theorem 2.1.4).
\n Define $P^{-j}_{-i}:= \hbox{\rm Ker \,} {\, \eta^{i+1}} \cap \hbox{\rm Ker \,} {\, L^{j+1}} \subseteq H^{n-i-j}_{-i}(X),$ $i,\,j \geq 0$ and $P^{-j}_{-i} :=0$ otherwise. In the same way in which the classical Hard Lefschetz implies the Primitive Lefschetz Decomposition for the cohomology of $X,$ the graded Hard Lefschetz Theorems imply the following double direct sum decomposition.
\begin{tm} \label{etaldecompo} Let $i, \, j \in \zed.$ There is a Lefschetz-type direct sum decomposition into pure Hodge sub-structures of weight $(n-i-j),$ called the $(\eta,L)-$decomposition: $$ H^{n-i-j}_{-i}(X) = \bigoplus_{l,\,m\, \in \zed }{ \eta^{-i+l} \, L^{-j+m} \, P^{j-2m}_{i-2l}.} $$ \end{tm}
One can define bilinear forms $S^{\eta L}_{ij}$ on $H^{n-i-j}_{-i}(X)$ by modifying the Poincar\'e pairing $$ S^{\eta L}_{ij} ([\alpha], [\beta] ) \, := \, \int_X{ \eta^i \wedge L^j \wedge \alpha \wedge \beta} $$ and descending it to the graded groups. These forms are non degenerate. In fact their signature can be determined in the following generalization of the Hodge-Riemann relations.
\begin{tm} \label{tmboh} The $(\eta,L)-$decomposition of Theorem \ref{etaldecompo} is orthogonal with respect to $S^{\eta L}_{ij}.$ The forms $S^{\eta L}_{ij}$ induce polarizations of each $(\eta,L)-$direct summand. \end{tm}
The homology groups $H^{BM}_{*}(f^{-1}(y))=H_{*}(f^{-1}(y)),$ $y \in Y,$ are filtered by virtue of the decomposition theorem (one may call this the perverse filtration). The natural cycle class map $cl: H^{BM}_{n-*}(f^{-1}(y)) \to H^{n+*}(X)$ is filtered strict.
The following generalizes the Grauert Contractibility Criterion.
\begin{tm} \label{nhrbr} Let $b \in \zed$, $y \in Y$.
The natural class map $$ cl_{b}: H^{BM}_{n-b,b}(f^{-1}(y)) \longrightarrow H^{n+b}_b(X) $$ is injective and identifies $H^{BM}_{n-b,b}(f^{-1}(y))\subseteq \hbox{\rm Ker \,} {\, L}\subseteq H^{n+b}_b(X)$ with a pure Hodge substructure, compatibly with the $(\eta,L)-$decomposition. Each $(\eta,L)-$direct summand of $H^{BM}_{n-b,b}(f^{-1}(y))$ is polarized up to sign by $S^{\eta L}_{-b,0}.$
\n In particular, the restriction of $S^{\eta L}_{-b,0}$ to $H^{BM}_{n-b,b}( f^{-1} (y ))$ is non degenerate. \end{tm}
By intersecting in $X$ cycles supported on $f^{-1}(y),$ we get the refined intersection form (see section \ref{psif}) $H^{BM}_{n-*}(f^{-1}(y)) \to H^{n+*}(f^{-1}(y))$ which is filtered strict as well.
\begin{tm} \label{rcffv} ({\bf The Refined Intersection Form Theorem}) Let $b \in \zed$, $y \in Y$. The graded refined intersection form $$ H^{BM}_{n-b,a}(f^{-1}(y)) \longrightarrow H^{n+b}_{a}(f^{-1}(y)) $$ is zero if $a\neq b$ and it is an isomorphism if $a=b.$
\end{tm}
We have seen in earlier sections how these results can be made explicit in the case of surfaces, threefolds and fourfolds. For more applications in any dimension see \ci{decmightam}.
In fact, the method of proof of the results stated in this section is inspired by the low dimensional examples of surfaces, threefolds and fourfolds.
\subsection{The decomposition mechanism} \label{decmec} It is quite hard to describe what kind of geometric phenomena are expressed by the Decomposition Theorem. The complex $Rf_* \rat_X$ essentially describes $H^*(f^{-1}(U))$ for any neighborhood $U$ of a point $y.$ We gain some geometric insight if we represent, via Poincar\'e duality, the cohomology classes with Borel-Moore cycles in $f^{-1}(U).$ Let $S$ be the stratum containing $y.$ By the exact sequence $$ H_*^{BM}(f^{-1}(U \cap S)) \stackrel{i_*}{\to} H_*^{BM}(f^{-1}(U)) \stackrel{j^*}{\to} H_*^{BM}(f^{-1}(U \setminus S)), $$ the Borel-Moore cycles in $f^{-1}(U)$ are of two kinds: those in $ \hbox{\rm Im \,} i_*$, which are homologous to cycles supported on the inverse image of the stratum $S,$ and those whose restriction to $f^{-1}(U \setminus S)$ is not trivial. In general, $i_*$ is not injective and $j^*$ is not surjective: there are non trivial cycles in $f^{-1}(U \cap S)$ which become homologous to zero in $f^{-1}(U)$, and there are cycles in $f^{-1}(U \setminus S)$ which cannot be closed to cycles in $f^{-1}(U).$
The Decomposition Theorem gives strong information on both types. The first deep aspect of the Theorem is that the subspace $ \hbox{\rm Im \,} i_*$ has a uniform behavior for all projective maps, related to the non degeneracy of the intersection forms. For instance, we already noticed in \ref{link} how the Grauert Theorem \ref{tmgra} implies that the classes of disks transverse to exceptional curves are homologous to linear combinations of the classes of these curves. Such non degeneracy results, see \ref{nhrbr}, \ref{rcffv}, are peculiar to algebraic maps and stem from ``weight'' considerations, either in characteristic $0$ (Hodge Theory) or in positive characteristic (weights of Frobenius, cf. \ci{bbd}).
The Decomposition Theorem, though, contains other deep information. Since $Rf_* \rat_X$ splits as a direct sum of terms associated with the strata, we have a splitting map, which can be made canonical after an ample line bundle $\eta$ on $X$ has been chosen, from the subspace $ \hbox{\rm Im \,} { j^* } \subseteq H_*^{BM}(f^{-1}(U \setminus S))$ to $H_*^{BM}(f^{-1}(U)):$ i.e. the following is split exact $$ 0 \longrightarrow \hbox{\rm Im \,} {i_*} \longrightarrow H_*^{BM}(f^{-1}(U)) \longrightarrow \hbox{\rm Im \,} {j^*} \longrightarrow 0. $$
The image of this map defines a subspace of $H_*^{BM}(f^{-1}(U))$ which is complementary to $ \hbox{\rm Im \,} i_*$ and consists of classes which are closures of some Borel-Moore cycles in $f^{-1}(U \setminus S).$ The deep fact here is that these cycles are governed by the intersection cohomology complex construction on $Y;$ each stratum having $S$ in its closure contributes to $H_*^{BM}(f^{-1}(U))$ via the intersection cohomology of a local system on the stratum.
\section{Grothendieck motive decomposition for maps of threefolds} \label{gmdmt} We assume we are in the situation \ref{3foldass}. Again, this is for ease of exposition only. See Remark \ref{rmkfi}.
We have already shown that $$ H^{2}_{-1}(X) \, = \, \hbox{\rm Im \,} { \{H_{4}(D) \to H^{2}(X)\} } \, =: \, \hbox{\rm Im \,} {i_{*}}, $$ $$ H^4_{0}(X)= \hbox{\rm Ker \,} { \{H^4(X) \to H^4(D)\}} \, =: \, \hbox{\rm Ker \,} { i^*}. $$ The choice of an ample line bundle $\eta$ allows one to split the perverse filtration: $$H^2(X)= \hbox{\rm Im \,} i_* \oplus (c_1( \eta) \wedge \hbox{\rm Im \,} i_*)^{\perp},$$ $$H^4(X)= \hbox{\rm Ker \,} i^* \oplus ( \hbox{\rm Im \,} i_*)^{\perp}.$$ We have, canonically, that $$H^3(X)= \hbox{\rm Im \,} { H_3(D)} \oplus ( \hbox{\rm Im \,} {H_3(D)})^{\perp},$$ so that $$ I\!H^i(Y)\,=\,H^i(X) \hbox{ for } i=0,\,1, \,5,\,6, \qquad I\!H^2(Y)\, = \, (c_1( \eta) \wedge \hbox{\rm Im \,} H_4(D) )^{\perp}, $$ $$ I\!H^3(Y)\, = \, ( \hbox{\rm Im \,} {H_3(D)})^{\perp}, \qquad I\!H^4(Y)\, =\, ( \hbox{\rm Im \,} H_4(D))^{\perp}; $$ here we are using the convention for intersection cohomology compatible with singular cohomology: $I\!H^i(Y):= {\Bbb H}^{i-n}(Y, IC_Y).$
We want to realize these splittings by algebraic cycles on $X \times X$, in order to find a Grothendieck motive for the intersection cohomology of $Y.$ These cycles will be supported on $D \times D.$
We start with the following simple lemma. \begin{lm} \label{support} Let $X$ be a projective $n-$fold, and $Y \subseteq X$ be a subvariety. Let $W \subseteq \hbox{\rm Im \,} { \{ H_s(Y) \to H^{2n-s}(X) \} } \subseteq H^{2n-s}(X)$ be a vector subspace on which the restriction of the Poincar\'e pairing remains non degenerate, i.e. $H(X)=W \oplus W^{\perp}.$ Then the projection $P_W \in End(H(X))\simeq H(X \times X)$ on $W$ relative to the above splitting can be represented by a cycle supported on $Y \times Y$. \end{lm} {\em Proof. } Let $\{e_i\}$ be a basis for $H(X)$ such that $ e_1, \cdots, e_k \in W$ and $ e_{k+1}, \cdots, e_N \in W^{\perp}.$ For $i=1, \cdots, k,$ we can represent $e_i$ by a cycle $\gamma_i$ contained in $Y$. By hypothesis, the dual basis $\{ {e_i}\check{} \}$ is of the form $$ e_i \check{}= \sum_{j=1}^k a_{ij}e_j \qquad \hbox{ for }1 \leq i \leq k \qquad e_i \check{}= \sum_{j=k+1}^N a_{ij}e_j \qquad \hbox{ for }k+1 \leq i \leq N. $$ In particular $e_1 \check{}, \cdots,e_k \check{} $ are represented by the cycles $ \gamma_i \check{}= \sum_{j=1}^k a_{ij}\gamma_j $ supported on $Y$. The projector $P_W= \sum_{i=1}^k e_i \otimes e_i \check{}$ is thus represented by the cycle $ \sum_{i,j=1}^k a_{ij}\gamma_i \times \gamma_j$, which is supported on $Y \times Y$. \blacksquare
The following is a standard but very useful application of ``strictness'' in Hodge Theory.
\begin{lm} \label{pesi} Let $Y \subseteq X$ be a codimension $d$ subvariety of an $n$-fold and let $\pi:\tilde{Y} \to Y$ be a resolution of singularities. Suppose $\beta \in \hbox{\rm Im \,} \{H_{2k}(Y) \to H^{2(n-k)}(X)\} \cap H^{n-k,n-k}(X).$ Then there is $\tilde{\beta}\in H^{n-k-d,n-k-d}(\tilde{Y})$ such that $(i\circ \pi)_*(\tilde{\beta})= \beta.$ \end{lm} {\em Proof.} We consider the weights of the homology groups as given by their being dual of the cohomology groups. Thus $H_{2k}(Y)$ has weights $\geq -2k.$ The map $H_{2k}(Y) \to H^{2(n-k)}(X)$ is of type $(n,n)$. Since the Hodge structure on $ H^{2(n-k)}(X)$ is pure, the strictness of maps of Hodge structures implies that $$ \hbox{\rm Im \,} { \{ H_{2k}(Y) \to H^{2(n-k)}(X) \} } \, = \, \hbox{\rm Im \,} { \{ W_{-2k}H_{2k}(Y) \to H^{2(n-k)}(X) \} }. $$ It follows that $\beta =i_*\beta '$ for some $\beta ' \in W_{-2k}H_{2k}(Y).$ On the other hand this group coincides with $ \hbox{\rm Im \,} { \{ \pi_* : H_{2k}(\tilde{Y}) \to H_{2k}(Y) \}}$
for any resolution, whence the statement. \blacksquare
\begin{tm} \label{cicli} Let $f:X \to Y$, $D$ as before. Then there exist algebraic 3-dimensional cycles $Z_{-1}, Z_{0}, Z_1,$ supported on $D \times D$ such that:
\n $Z_1$ defines the projection of $H(X)$ onto $H^4_1(X)= c_1( \eta) \wedge \hbox{\rm Im \,} \{H_4(D) \to H^2(X)\} \subseteq H^4(X);$
\n $Z_{-1}$ defines the projection of $H(X)$ onto $H^2_{-1}(X)= \hbox{\rm Im \,} \{H_4(D) \to H^2(X)\} \subseteq H^2(X);$
\n $Z_0$ defines the projection of $H(X)$ on $ \hbox{\rm Im \,} \{H_3(D) \to H^3(X)\} \subseteq H^3(X).$ \end{tm} {\em Proof.} Let $\Lambda$ be the inverse of the negative-definite intersection matrix $I_{ij}= \int_X c_1(\eta) \wedge[D_i]\wedge [D_j].$ We denote by $\eta \cap D_i$ the curve obtained by intersecting the divisor $D_i$ with a general section of $\eta$. Set: $$ Z_{-1}= \sum \lambda_{ij}[(\eta \cap D_i) \times D_j] \qquad Z_{1}= \sum \lambda_{ij}[D_i \times (\eta \cap D_j)]. $$ It is immediate to verify that $Z_{-1} $ and $Z_1$ define the sought-for projectors.
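\n To spell out the verification for $Z_{-1}$ (the case of $Z_1$ is symmetric), recall that a cycle $Z \subseteq X \times X$ acts on cohomology by $\alpha \mapsto p_{2*}(p_1^* \alpha \cup [Z]),$ so that a product cycle $\gamma \times \delta$ sends $\alpha$ to $\big( \int_X \alpha \wedge [\gamma] \big) \, [\delta].$ Hence $$ Z_{-1 \, *}\, \alpha \, = \, \sum_{i,j} \lambda_{ij} \Big( \int_X \alpha \wedge c_1(\eta) \wedge [D_i] \Big) \, [D_j], $$ which vanishes on $H^l(X)$ for $l \neq 2$ for degree reasons. For $\alpha = [D_k]$ the integral is $I_{ki},$ so that $Z_{-1 \,*}[D_k]= \sum_j \big( \sum_i I_{ki} \lambda_{ij} \big) [D_j]=[D_k]$ since $\Lambda = I^{-1},$ while $Z_{-1\, *}$ kills $(c_1(\eta) \wedge \hbox{\rm Im \,} i_*)^{\perp}$ by the very definition of the orthogonal complement. This is precisely the projection onto $ \hbox{\rm Im \,} \{H_4(D) \to H^2(X)\}$ relative to the splitting of $H^2(X)$ fixed above.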
\n The construction of $Z_0$ is not so direct:
Since, by \ref{polarizeh3}, the Poincar\'e pairing is non degenerate on $ \hbox{\rm Im \,} \{H_3(D) \to H^3(X)\}$, by \ref{support} we can represent the projection onto this subspace by a cycle supported on $D \times D$. Furthermore, the projection is a map of Hodge structures, hence its representative cycle $P_3 \in H^6(X \times X)$ has type $(3,3).$ By \ref{pesi} we have $P_3=i_* \pi_* \beta$ for some $\beta \in H^{1,1}(\widetilde{D \times D}),$ where $\widetilde{D \times D}$ is any resolution of ${D \times D}.$ By the Lefschetz Theorem on $(1,1)$ classes, there is an algebraic cycle $\tilde{Z}$ such that $\beta =[\tilde{Z}]$. It is clear that $Z_0=i_* \pi_* \tilde{Z}$ does the job. \blacksquare
The following corollary is immediate. \begin{cor}
\label{gm} The Grothendieck motive
$(X, \Delta_{X} -Z_0 -Z_1 -Z_{-1})$ is a Betti realization of the Intersection cohomology of $Y.$ \end{cor}
We can be more specific: the projector $\Delta_{X} -Z_0 -Z_1 -Z_{-1}$ is supported on the fiber product $X \times_Y X$, therefore defines a relative motive over $Y$ in the sense of \ci{ch} (see also \ci{decmigmot}). By \ci{ch}, Lemma 2.23, the isomorphism of algebras $$ End(Rf_* \rat_X [3])= H^{BM}_6(X \times_Y X) $$ ensures that the Betti realization of this relative motive is the projector associated with the splitting for $Rf_* \rat_X [3]$ we have used in this section.
\begin{rmk} {\rm From the construction of the cycles it is evident that $Z_{-1}$ and $Z_1$ define in fact Chow motives, not only Grothendieck motives. Under some hypotheses it is possible to construct a Chow projector for $Z_0$ as well. For instance, suppose $D$ is smooth and irreducible, and its conormal bundle ${\cal I}_D/{\cal I}_D^2$ is ample. In this case, let $Z_0$ be the cycle in $D \times D$ representing the Hodge $\Lambda$ operator with respect to the polarization given by the conormal bundle. It is immediate to verify that $Z_0$ defines the Chow motive we need. In general some knowledge of the nature of the resolution may allow one to find a Chow motive whose Betti realization is intersection cohomology.} \end{rmk}
\begin{rmk}
\label{rmkfi} {\rm It is not difficult to modify the proofs to produce a Grothendieck motive for the intersection cohomology of $Y$ for an arbitrary
three-dimensional variety $Y$ (e.g. with non-isolated singularities). If, for example, some divisor $D'$ is blown down to a curve $C,$ then one needs to construct a further projector, represented by a cycle which is a linear combination of the components of $D' \times_C D'.$ This projector
splits off the contribution of $D'$ to the cohomology of $X$. We leave this task to the reader.} \end{rmk}
\begin{rmk} \label{chissa} { \rm If $Y$ is a fourfold with isolated singularities, then the computations in \ref{lde4} express its intersection cohomology as a Hodge substructure of the cohomology of a resolution $X.$ The method developed in this section does not apply in general since we do not know whether the classes of the projectors, which are pushforwards of classes of type $(p,p)$ on a resolution of the product of the exceptional divisor with itself, are represented by algebraic cycles. On the other hand, this can be achieved in the presence of supplementary information on the singularities of $Y$ or on the exceptional divisor. For example: if the singularities are locally isomorphic to toric singularities. This allows one to define a motive for the intersection cohomology in several interesting cases.} \end{rmk}
\end{document}
\begin{document}
\title[Harmonic tropical morphisms and approximation]
{Harmonic tropical morphisms and approximation}
\author[Lionel Lang]{Lionel Lang} \address{Department of Mathematics, Stockholm University, SE-106 91 Stockholm, Sweden} \email {lang@math.su.se }
\maketitle
\begin{abstract} \textit{Harmonic amoebas} are generalisations of amoebas of algebraic curves immersed in complex tori. Introduced in \cite{Kri}, the consideration of such objects suggests enlarging the scope of tropical geometry. In the present paper, we introduce the notion of harmonic morphisms from tropical curves to affine spaces and show how these morphisms can be systematically described as limits of families of harmonic amoeba maps on Riemann surfaces. It extends previous results about approximation of tropical curves in affine spaces and provides a different point of view on Mikhalkin's approximation Theorem for regular phase-tropical morphisms, as stated e.g. in \cite{Mikh06}. The results presented here follow from the study of imaginary normalised differentials on families of punctured Riemann surfaces and suggest interesting connections with compactifications of moduli spaces. \end{abstract}
\blfootnote{The author is supported by the FNS project 140666 and the ERC TROPGEO.\\ The author is grateful to S. Boucksom, E. Brugallé, C. Favre, N. Kalinin, G. Mikhalkin, J. Rau and K. Shaw for many useful discussions and comments. The author is also indebted to an anonymous referee for a careful reading and numerous comments.}
\begin{center} \begin{Large} \section{Introduction} \end{Large} \end{center}
\subsection{Motivations}
The present work is an attempt to give an appropriate definition of tropical convergence for families of abstract algebraic curves, with a view towards the constructive and enumerative aspects of tropical geometry. In particular, one of the goals of this paper is to provide an alternative proof of Mikhalkin's approximation Theorem \cite[Theorem 1]{Mikh06} on the realisability of phase-tropical curves in $( \mathbb{C}^\star)^m$.
The link between algebraic and tropical geometry is given by the notion of amoeba introduced in \cite{GKZ}. Recall that the amoeba of an algebraic subvariety $\mathcal{V}\subset (\mathbb{C}^\star)^m$ is the image of $\mathcal{V}$ under the amoeba map \[ \begin{array}{rcl} \mathcal{A} \;\; : \;\; (\mathbb{C}^\star)^m & \rightarrow & \mathbb{R}^m \\ (z_1,\dots,z_m) & \mapsto & \big( \log \vert z_1 \vert, \dots, \log \vert z_m \vert \big) \end{array}. \] Classically, a family of algebraic curves $\left\lbrace \mathcal{C}_t \right\rbrace_{t>1} \subset (\mathbb{C}^\star)^m$ is said to converge to a tropical curve $C \subset \mathbb{R}^m$ if \begin{equation}\label{eq:conv} \lim_{t\rightarrow \infty} \; \frac{1}{\log (t)} \cdot \mathcal{A} ( \mathcal{C}_t) = C \end{equation} with respect to the Hausdorff distance on compact sets.
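Recall, for instance, the amoeba of the line $\mathcal{C}=\left\lbrace 1+z_1+z_2=0 \right\rbrace \subset (\mathbb{C}^\star)^2$: a point $(x_1,x_2)$ lies in $\mathcal{A}(\mathcal{C})$ if and only if the numbers $1$, $e^{x_1}$ and $e^{x_2}$ satisfy the three triangle inequalities, that is \[ \mathcal{A}(\mathcal{C}) \; = \; \left\lbrace (x_1,x_2)\in \mathbb{R}^2 \; : \; e^{x_1}\leq e^{x_2}+1, \;\; e^{x_2}\leq e^{x_1}+1, \;\; 1 \leq e^{x_1}+e^{x_2} \right\rbrace. \] For the constant family $\mathcal{C}_t:=\mathcal{C}$, the rescaled amoebas $\frac{1}{\log (t)} \cdot \mathcal{A} ( \mathcal{C}_t)$ converge in the sense of \eqref{eq:conv} to the tropical line with vertex at the origin and rays in the directions $(-1,0)$, $(0,-1)$ and $(1,1)$.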
In this paper, we aim to understand the relations between the moduli of a family of algebraic curves converging in the sense of \eqref{eq:conv} and the limiting tropical curve. Conversely, we want to understand how to prescribe the moduli of a family of algebraic curves so that the latter family converges in the sense of \eqref{eq:conv}.
To that purpose, it will be convenient to leave the algebraic setting and consider the more general theory of harmonic amoebas introduced in \cite{Kri}. We will investigate the above problems for a wider class of objects that we call here harmonic tropical curves. Eventually, we will come back to the algebraic setting and reprove Mikhalkin's approximation Theorem.
\subsection{Main results}
\subsubsection{Harmonic amoebas and harmonic tropical curves.}$\,$
For a proper algebraic map $\phi: S \rightarrow (\mathbb{C}^\star)^m$ from a punctured Riemann surface $S$, the composition of the amoeba map $ \mathcal{A}$ with $\phi$ is given by integrating the real 1-forms $\Re \big( d \log z_j \big)$ on $S$ where $z_1,\dots,z_m$ are the coordinate functions of $\phi$. The forms $d \log z_j $ are special instances of \df{imaginary normalised differentials} (\df{i.n.d.} for short), namely meromorphic differentials with at worst simple poles at the punctures of $S$ and with purely imaginary periods, see \cite{Kri}. It follows from Riemann's bilinear relations that such a differential is determined by the vector of its residues at the $n$ punctures of $S$. Therefore, a collection of $m$ residue vectors encoded in a real $m \times n$ matrix $R$ defines a collection $\omega_{R,1},\dots, \omega_{R,m}$ of $m$ i.n.d. on $S$ and a map \[ \begin{array}{rcl} \mathcal{A}_R \;\; : \;\; S & \rightarrow & \mathbb{R}^m \\ s & \mapsto & \Big( \Re\big( \int_p^s \omega_{R,1}\big), \dots, \Re\big( \int_p^s \omega_{R,m}\big)\Big) \end{array} \] up to the choice of a point $p\in S$. Since the coordinates of $ \mathcal{A}_R$ are harmonic functions, we call the set $\mathcal{A}_R (S) \subset \mathbb{R}^m$ the \df{harmonic amoeba}\footnote{The present terminology is due to the author.} of $S$ with respect to $R$. Observe that the map $ \mathcal{A}_R$ corresponds to the composition $ \mathcal{A}\circ\phi$ for an algebraic map $\phi: S \rightarrow (\mathbb{C}^\star)^m$ if and only if all the periods of the differentials $\omega_{R,1},\dots, \omega_{R,m}$ are integer multiples of $2\pi i$. In such case, the $j^{th}$ coordinate of $\phi$ is given by the exponential of the integral $\int \omega_{R,j}$, see Section \ref{sec:indha} for more details.
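For instance, if $S= \mathbb{CP}^1 \setminus \left\lbrace p_1,\dots,p_n \right\rbrace$ with $p_1,\dots,p_n \in \mathbb{C}$, the i.n.d. with residues $r_1,\dots,r_n$ is simply $\omega = \sum_{k} r_k \frac{dz}{z-p_k}$, where $\sum_k r_k=0$ as imposed by the residue theorem, the latter condition ensuring that $\omega$ has no pole at infinity. The harmonic amoeba map is then explicit, namely \[ \mathcal{A}_R(s) \; = \; \Big( \sum_{k} R_{1k} \log \vert s-p_k \vert , \, \dots, \, \sum_{k} R_{mk} \log \vert s-p_k \vert \Big) \] up to an additive constant, and the periods of $\omega_{R,j}$, taken around the punctures since $S$ has genus $0$, are the numbers $2\pi i R_{jk}$. Accordingly, $\mathcal{A}_R$ is of the form $ \mathcal{A}\circ\phi$ for an algebraic map $\phi$ if and only if all the entries of $R$ are integers, in which case one can take $z_j(s)= \prod_{k} (s-p_k)^{R_{jk}}$.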
In this text, we adapt the above generalisation of amoebas to the case of tropical curves. Following \cite{Mikh05}, tropical curves are graphs with univalent vertices removed and equipped with a complete inner metric (unbounded edges are referred to as leaves). A tropical morphism $\pi:C\rightarrow \mathbb{R}^m$ on a tropical curve $C$ is a continuous map that is affine linear on any edge, with integer slopes and satisfying the so-called balancing condition, see Section \ref{sec:trop}. Alternatively, tropical morphisms can be described by integration of exact 1-forms on tropical curves. Following \cite{MZ}, a 1-form $ \textsc{w}$ on a tropical curve $C$ is equivalent to the data of a current on $C$, seen as an electrical circuit, where a leaf of $C$ is either an electrical source or sink depending on whether the residue of $ \textsc{w}$ at the leaf is positive or negative.
Similarly to i.n.d. on algebraic curves, exact 1-forms on a tropical curve $C$ are in correspondence with residue vectors at the $n$ leaves of $C$, see Proposition \ref{tdiff}. Thus, any $m \times n$ residue matrix $R$ defines a collection $ \textsc{w}_{R,1},\dots, \textsc{w}_{R,m}$ of $m$ exact 1-forms on $C$ and a map \[ \begin{array}{rcl} \pi_R \;\; : \;\; C & \rightarrow & \mathbb{R}^m \\ q & \mapsto & \Big( \Re\big( \int_p^q \textsc{w}_{R,1}\big), \dots, \Re\big( \int_p^q \textsc{w}_{R,m}\big)\Big) \end{array} \] up to the choice of a point $p\in C$. We call the set $\pi_R (C) \subset \mathbb{R}^m$ a \df{harmonic tropical curve}. Observe that the vector of currents induced by $R$ on an edge $e\subset C$ is the slope of $\pi_R(e)$. This vector can alternatively be thought of as a vector of periods of the tropical forms $ \textsc{w}_{R,1},\dots, \textsc{w}_{R,m}$. Therefore, harmonic amoebas (respectively harmonic tropical curves) are honest amoebas (respectively tropical curves) if and only if the periods of the forms involved are in $2\pi i \mathbb{Z}$ (respectively $ \mathbb{Z}$).
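To illustrate, let $C$ be the tropical curve with a single trivalent vertex and three leaves, and take \[ R \; = \; \left( \begin{array}{ccc} -1 & 0 & 1 \\ 0 & -1 & 1 \end{array} \right), \] each row summing to $0$, as imposed by the conservation of currents. The slope of $\pi_R$ along the $k^{th}$ leaf is the $k^{th}$ column of $R$, so that $\pi_R(C) \subset \mathbb{R}^2$ is the tropical line with rays in the directions $(-1,0)$, $(0,-1)$ and $(1,1)$; the balancing condition at the vertex is exactly the vanishing of the row sums of $R$.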
\subsubsection{Tropical convergence and approximation.}$\,$
To understand the relation between tropical curves and the moduli of families of algebraic curves, we will describe the latter moduli
using Fenchel-Nielsen coordinates, see Section \ref{sec:hyp} for more details. In this text, we denote by $\overline{M}_{g,n}$ the (analytic) moduli space of $n$-pointed stable curves of genus $g$ and by $M_{g,n}\subset \overline{M}_{g,n}$ the subset of smooth curves.
Recall that any Riemann surface $S\in M_{g,n}$ with $2g+n>2$ admits a unique complete hyperbolic metric, see \cite{SG}. Any collection of $3g-3+n$ disjoint simple closed hyperbolic geodesics decomposes $S$ into $2g-2+n$ pairs of pants. Each pair of pants is uniquely determined by the length of its boundary geodesics (by convention, a puncture is a geodesic of length $0$) and two pairs of pants adjacent to a common geodesic are glued to each other in $S$ according to a ``twist'' parameter in $S^1$.
Conversely, if $G$ is the graph dual to the above decomposition of $S$ (where pairs of pants and punctures correspond respectively to trivalent and univalent vertices of $G$), we can construct any Riemann surface in $M_{g,n}$ from a length function $\ell:E(G)\rightarrow \mathbb{R}_{>0}$ and a twist function $\theta:E(G)\rightarrow S^1$ on the set $E(G)$ of edges of $G$ not adjacent to a univalent vertex. We denote the resulting Riemann surface $S(G,\ell,\theta)\in M_{g,n}$ where the parameters $(\ell,\theta)$ are the aforementioned Fenchel-Nielsen coordinates, relative to $G$. Observe that the data $(G,\ell)$ is equivalent to the data of a trivalent tropical curve, usually referred to as a simple tropical curve.
\begin{definition}[\textbf{Abstract tropical convergence}]\label{tropconv}$\,$\\ Let $C:=(G,\ell)$ be a simple tropical curve of genus $g$ with $n$ leaves. A family of Riemann surfaces $\left\lbrace S_t \right\rbrace_{t >1} \subset M_{g,n} $ \df{converges to} $C$ if $S_t = S(G,\ell_t, \theta_t)$ for some functions $(\ell_t,\theta_t):E(G) \rightarrow \mathbb{R}_{> 0}\times S^1$
and if for any edge $e\in E(C)$, we have $$ \displaystyle \ell_t(e) \; \underset{t\rightarrow \infty}{\sim} \; \frac{2\pi^2}{\ell(e) \log(t)}.$$ \end{definition}
In the above definition, the family $\left\lbrace S_t \right\rbrace_{t >1}$ converges to the unique stable curve in $\overline{M}_{g,n} $ whose dual graph is $G$. The above family defines $3g-3+n$ vanishing cycles in each individual Riemann surface $S_t$. Then, the length function $\ell$ determines the speed of contraction of each vanishing cycle in terms of the parameter $t$.
The theorem below establishes the connection between the abstract convergence given in Definition \ref{tropconv} and the notion of convergence given in \eqref{eq:conv}, in the more general context of harmonic amoebas and harmonic tropical curves.
\begin{mytheorem}\label{approxtrop} Let $C$ be a simple tropical curve of genus $g$ with $n$ leaves and let $R$ be an $m \times n$ matrix of residues. Then, for any sequence $ \left\lbrace S_t \right\rbrace_{t >1} \subset M_{g,n}$ converging to $C$, the sequence of maps $ \frac{1}{\log(t)} \cdot \mathcal{A}_R : S_t \rightarrow \mathbb{R}^m $ converges to the map $\pi_R : C \rightarrow \mathbb{R}^m$, in the sense of Definition \ref{GHconv}.
\end{mytheorem}
The notion of convergence of Definition \ref{GHconv} implies in particular that $\frac{1}{\log(t)} \cdot \mathcal{A}_R(S_t)$ converges towards $\pi_R(C)$ in Hausdorff distance on compact sets.
Theorem \ref{approxtrop} will arise as a consequence of Theorem \ref{convdif} describing the limit of imaginary normalised differentials on families of algebraic curves converging in the sense of Definition \ref{tropconv}, see Section \ref{sec:convind}.
\subsubsection{Phase-tropical convergence and Mikhalkin's approximation Theorem.}$\,$
Phase-tropical curves play a central role in the computation of planar Gromov-Witten invariants, see \cite{Mikh05}. They are complexified versions of tropical curves and arise as Hausdorff limits of families of curves in $( \mathbb{C}^\star)^m$ after the re-parametrisation \[ \begin{array}{rcl} H_t \; \; : \;\; ( \mathbb{C}^\star)^m & \rightarrow & ( \mathbb{C}^\star)^m \\ (z_1,\dots,z_m) & \mapsto & \left( \vert z_1 \vert^{\frac{1}{\log(t)}} \frac{z_1}{\vert z_1 \vert} ,\dots, \vert z_m \vert^{\frac{1}{\log(t)}} \frac{z_m}{\vert z_m \vert} \right) \end{array}. \] Classically, a family of curves $\left\lbrace \mathcal{C}_t \right\rbrace_{t>1} \subset (\mathbb{C}^\star)^m$ is said to converge to a phase-tropical curve $V \subset (\mathbb{C}^\star)^m$ if \begin{equation}\label{eq:convpt} \lim_{t\rightarrow \infty} \; H_t ( \mathcal{C}_t) = V \end{equation} with respect to the Hausdorff distance on compact sets. In general, phase-tropical curves in $( \mathbb{C}^\star)^m$ can be very singular objects. For this reason, we will restrict our attention to simple phase-tropical curves in this paper.
Simple phase-tropical curves can be considered abstractly. An abstract simple phase-tropical curve $V$ can be constructed by gluing (phase-tropical) pairs of pants together and comes with a map $ \mathcal{A}_V:V\rightarrow C$ onto a simple tropical curve $C=(G,\ell)$. We will see in Section \ref{sec:trop} that any such object can also be described in terms of Fenchel-Nielsen coordinates and be therefore denoted by $V:=V(G,\ell,\theta)$.
\begin{definition}[\textbf{Abstract phase-tropical convergence}]\label{comptropconv}$\,$\\ Let $V:=V(G,\ell,\theta)$ be a simple phase-tropical curve such that $C=(G,\ell)$ is a simple tropical curve of genus $g$ with $n$ leaves.
A family of Riemann surfaces $ \left\lbrace S_t \right\rbrace_{t >1} \subset M_{g,n}$ \df{converges to} $V$ if $ S_t := S(G,\ell_t, \theta_t)$ for some functions $(\ell_t,\theta_t):E(G) \rightarrow \mathbb{R}_{> 0}\times S^1$ and if for any edge $e\in E(G)$, we have $$ \displaystyle \ell_t(e) \;\; \underset{t\rightarrow \infty}{\sim} \;\; \frac{2\pi^2}{\ell(e) \log(t)} \; \;\; \text{ and } \;\;\; \lim_{t\rightarrow \infty} \theta_t(e)=\theta(e).$$ \end{definition}
The above notion of convergence leads to a refined version of Theorem \ref{convdif}. For any $m \times n$ matrix of residues $R$, we can consider the \df{period matrix} $ \mathcal{P}_{R,S}$ of the $m$ i.n.d. on $S \in M_{g,n}$ induced by $R$, where the periods are computed against a fixed basis of $H_1\big( S, \mathbb{Z} \big)$.
\begin{mytheorem} For any sequence $\left\lbrace S_t \right\rbrace_{t>1} \subset M_{g,n} $ converging to a simple phase-tropical curve $V$ and any $m \times n$ matrix of residues $R$, the sequence of period matrices $\mathcal{P}_{R,S_t}$ converges to a matrix $\mathcal{P}_{R,V}$ depending only on $R$ and $V$. \end{mytheorem}
The above theorem will be stated more precisely as Theorem \ref{thm:convperiod} in Section \ref{sec:cop}. Now, for an abstract phase-tropical curve $V$ with underlying tropical curve $C$, any phase-tropical morphism $\phi:V\rightarrow ( \mathbb{C}^\star)^m$ is induced from a tropical morphism $\pi:C\rightarrow \mathbb{R}^m$ induced in turn from an $m\times n$ matrix of residues $R$, see Propositions \ref{prop:phasetropdetermined} and \ref{propharmmorph}. By Theorem \ref{approxtrop}, the family of maps $\frac{1}{\log(t)}\cdot \mathcal{A}_R:S_t\rightarrow \mathbb{R}^m$ converges to $\pi:C\rightarrow \mathbb{R}^m$ for any sequence $\left\lbrace S_t \right\rbrace_{t>1}$ converging to $C$. In order to approximate the phase-tropical morphism $\phi$, we need to guarantee that the maps $ \mathcal{A}_R:S_t\rightarrow \mathbb{R}^m$ factorise through algebraic maps $\phi_t:S_t\rightarrow( \mathbb{C}^\star)^m$, or equivalently that the period matrices $ \mathcal{P}_{R,S_t}$ have coefficients in $\mathbb{Z}$. This will follow from Theorem \ref{thm:convperiod}, leading to a new proof of Mikhalkin's approximation Theorem, stated as Theorem \ref{thmMik} in Section \ref{Mik}.
\subsection{Techniques and perspectives}
The main ingredient behind Theorems \ref{convdif} and \ref{thm:convperiod} is a rather elementary study of the behaviour of meromorphic differentials on families of annuli and pairs of pants respectively, see Sections \ref{secconfinv} and \ref{sec:cop}. The latter study provides us with a good understanding of the behaviour of i.n.d. on families of Riemann surfaces. In particular, it allowed us to design Definitions \ref{tropconv} and \ref{comptropconv} of tropical and phase-tropical convergence so that the desired approximation theorems, namely Theorems \ref{approxtrop} and \ref{thmMik}, hold.
In this paper, we restrict our attention to simple (phase-) tropical curves, mostly for the sake of clarity and concision. It is of primary interest to extend the present results to general tropical curves. The most interesting application is probably related to compactifications of the moduli spaces $M_{g,n}$ with tropical curves. Such compactifications are promising candidates to provide systematic correspondence theorems between algebro-geometric problems and their tropical counterpart.
From this perspective, Theorems \ref{approxtrop} and \ref{thmMik} justify the relevance of Definitions \ref{tropconv} and \ref{comptropconv}. Yet another justification is given in Remark \ref{rem:jacobian}, where we describe the relation between the Jacobians of a family of algebraic curves converging tropically and the Jacobian of the tropical limit, as defined in \cite{MZ}. The latter observation suggests a compactification of the Torelli map by its tropical analogue, as constructed in \cite{BMV}.
To conclude, let us mention that similar compactifications of moduli spaces have been investigated in \cite{O} and \cite{O2}. However, the present approach is quite different, as explained in Remark \ref{rem:odaka}.
\tableofcontents
\section{Prerequisites}\label{sec:prereq}
\subsection{A twisted Hodge bundle}\label{sec:thb}
In order to study the convergence of imaginary normalised differentials over families of Riemann surfaces, we will need an appropriate ambient space.
Recall that the Hodge bundle $\Lambda_{g,n} \rightarrow \overline{M}_{g,n}$ is the rank $g$ vector bundle whose fiber at a point $(S;p_1,\dots,p_n)$ is the space of holomorphic sections of the dualising sheaf over $S$, see \cite{ELSV}. Geometrically, the fiber of $\Lambda_{g,n}$ over a smooth curve $(S;p_1,\dots,p_n) \in M_{g,n}$ is the vector space of holomorphic differentials on $S$. For a singular curve $(S;p_1,\dots,p_n) \in \overline{M}_{g,n}$, the fiber of $\Lambda_{g,n}$ over $S$ is the vector space of meromorphic differentials whose pullback to the normalisation of $S$ has at most simple poles at the preimages of the nodes and such that the residues at the two preimages of any node of $S$ are opposite to each other.
\begin{definition} Define the \df{twisted Hodge bundle} $\Lambda^{\text{twist}}_{g,n} \rightarrow \overline{M}_{g,n}$ to be the vector bundle of rank $g+n-1$ obtained by tensoring $\Lambda_{g,n}$ with $\mathcal{O}_S (p_1+\dots+p_n)$. A point in the fiber of $\Lambda^{\text{twist}}_{g,n}$ over $S$ will be called a \df{generalised meromorphic differential} on $S$. \end{definition}
Compared to meromorphic differentials, generalised meromorphic differentials on $S$ are allowed to have simple poles at the punctures of $S$.
Let us now introduce coordinates on the fiber of $\Lambda^{\text{twist}}_{g,n}$ over a fixed curve $S \in M_{g,n}$. Choose a maximal collection of points $q_1,\dots,q_k$ among the nodes of $S$ such that $S\setminus \big(\cup_j q_j \big)$ is connected. Choose now a maximal collection of simple closed curves $\gamma_1,\dots,\gamma_{g'}$ in $S$ such that the complement of these curves in $S\setminus \big(\cup_j q_j \big)$ is connected (observe that $g=g'+k$). Then, any generalised meromorphic differential $\omega$ on $S$ is uniquely determined by the vector \begin{equation}\label{eq:vector} \textstyle \big( \int_{\gamma_1} \omega,\dots,\int_{\gamma_{g'}}\omega, \Res_{p_1}\omega, \dots,\Res_{p_{n-1}}\omega, \Res_{q_1}\omega, \dots,\Res_{q_{k}}\omega\big) \in \mathbb{C}^{g+n-1}. \end{equation} Observe that we need to choose a branch of the node $q_j$ in order to define $\Res_{q_j}$.
We can also consider similar coordinates along families of algebraic curves. For any continuous family $\left\lbrace S_t \right\rbrace_{t>1}\subset M_{g,n}$ converging to $S$, each node $q_j$ defines an isotopy class of simple closed curves $\gamma'_{j,t}\subset S_t$, the so-called vanishing cycle associated to $q_j$. Now, a family $\left\lbrace (S_t,\omega_t) \right\rbrace_{t>1}\subset \Lambda^{\text{twist}}_{g,n}$ converges to the point $(S,\omega)$ if and only if the vector \[ \textstyle \big( \int_{\gamma_1} \omega_t,\dots,\int_{\gamma_{g'}}\omega_t, \Res_{p_1}\omega_t, \dots,\Res_{p_{n-1}}\omega_t, \int_{\gamma'_{1,t}}\omega_t, \dots,\int_{\gamma'_{k,t}}\omega_t\big) \] converges to the vector \eqref{eq:vector}. Observe that $\gamma'_{j,t}$ has to be oriented coherently with the choice we made to define $\Res_{q_j}$.
\subsection{Imaginary normalised differentials and harmonic amoebas}\label{sec:indha}
In this section, we briefly review the material of \cite{Kri} that is of interest to us. Unless specified otherwise, the proofs of the statements to follow can be found there.
\begin{definition} An \df{imaginary normalised differential} (\df{i.n.d.} for short) $\omega$ on a curve $S \in \overline{M}_{g,n}$ is a generalised meromorphic differential having (at worst) simple poles at the $n$ punctures of $S$ and such that for any simple closed curve $ \gamma \subset S$, we have \[ \Re \left( \int_{\gamma} \omega \right) = 0. \] \end{definition}
\begin{theorem}\label{thmRiem} Let $S \in \overline{M}_{g,n}$ with punctures $p_1,\dots, p_n$ and nodes $q_1,\dots, q_k$. Fix any collection $\left\lbrace r_1,\dots, r_n\right\rbrace \subset \mathbb{R}$ such that $\, \sum_j r_j =0$ and associate a number $ r'_j \in \mathbb{R}$ to one of the two branches of the node $q_j$. Then, there exists a unique i.n.d. $\omega$ on $S$ such that $\, \Res_{p_j} \omega = r_j \,$ for any $1 \leqslant j \leqslant n$ and $\, \Res_{q_j} \omega = r'_j \,$ for any $1 \leqslant j \leqslant k$ where the latter residue is computed at the distinguished branch of $q_j$. \end{theorem}
\begin{proof} For $S \in M_{g,n}$, the proof is given in \cite[Theorem 5.3]{Lang82}. In the general case, normalise the curve $S$, associate the residue $-r'_j$ to the remaining branch of the node $q_j$ and apply \cite[Theorem 5.3]{Lang82} to each connected component of the normalisation. \end{proof}
\begin{definition}\label{defAr} A \df{collection of residues} of dimension $m$ on a Riemann surface $S \in M_{g,n}$ is a real $n \times m$ matrix $R:= \big( r_{k,j} \big)_{k,j}$ such that $\sum_k r_{k,j} =0$ for any $1 \leqslant j \leqslant m$. Denote by $$\omega_R^S :=(\omega_{R,1}^{S}, \dots, \omega_{R,m}^{S})$$ the vector of i.n.d. on $S$ induced by the $m$ columns of $R$, see Theorem \ref{thmRiem}. In practice, we will simply denote the latter vector by $\omega_R=(\omega_{R,1}, \dots, \omega_{R,m})$ when no confusion is possible. Finally, define the \df{harmonic amoeba map} \[ \begin{array}{rcl} \mathcal{A}_R \;\; : \;\; S & \rightarrow & \mathbb{R}^m \\ s & \mapsto & \Big( \Re\big( \int_p^s \omega_{R,1}\big), \dots, \Re\big( \int_p^s \omega_{R,m}\big)\Big) \end{array} \]
up to the choice of an initial point $p \in S$. We call the set $\mathcal{A}_R (S) \subset \mathbb{R}^m$ the \df{harmonic amoeba} of $S$ with respect to $R$. \end{definition}
\begin{remark} According to Theorem \ref{thmRiem}, the space of harmonic amoeba maps from any given Riemann surface $S\in M_{g,n}$ to $\mathbb{R}^m$ is a real vector space of dimension $m(n-1)$, parametrised by the coefficients of the collection of residues. \end{remark}
The terminology of Definition \ref{defAr} is motivated by the fact that the coordinates of $\mathcal{A}_R$ are harmonic functions on $S$ and that the definition of harmonic amoebas generalises that of \cite{GKZ}, as we recall below.
The next proposition illustrates the fact that the main properties of amoebas survive in the context of harmonic amoebas, see \cite[Proposition 2.7]{Kri}.
\begin{proposition}\label{convamoeba} Let $S \in M_{g,n}$ and $R$ be a collection of residues of dimension 2. Then, the harmonic amoeba $\mathcal{A}_R (S)$ is a closed subset of $ \mathbb{R}^2$ with finite area and the connected components of $\mathbb{R}^2 \setminus \mathcal{A}_R (S)$ are convex. \end{proposition}
It is also shown in \cite{Kri} that harmonic amoebas possess a logarithmic Gauss map, a Ronkin function and extend many classical properties of amoebas. In particular, we can carry the definition of the spine of planar amoebas, as introduced in \cite{PR}, over to the case of harmonic amoebas. The latter construction suggests considering a more general class of affine tropical curves with non-rational slopes that we introduce in Section \ref{secharmtrop} under the name of \df{harmonic tropical curves}.
Observe that for any collection of residues $R$ of dimension $m\geqslant 2$, any small disc $D\subset (S\cup p_j)$ around the puncture $p_j$ is mapped by $ \mathcal{A}_R$ into the $\varepsilon$-neighbourhood of a half-ray $r\subset \mathbb{R}^m$, where $\varepsilon$ can be made arbitrarily small by shrinking $D$. Indeed, if we consider a holomorphic coordinate $t$ on $D$ centred at $p_j$ and an initial point $p\in D$, we have that \[ \begin{array}{rl} \mathcal{A}_R(s) & = \; \Big( \Re\big( \int_p^s \frac{r_1}{t}+ O(1) dt\big), \dots, \Re\big( \int_p^s \frac{r_m}{t}+ O(1) dt\big)\Big)\\ & \\ & = \; \big( r_1 \log\vert s \vert + O(1), \dots, r_m \log \vert s \vert + O(1)\big). \end{array} \] The vector $(r_1,\dots,r_m)^\top$ is the $j^{th}$ row of $R$, each $O(1)$ is meant in a neighbourhood of $0$ and can be made as small as desired by shrinking $D$. Following the terminology of \cite{GKZ} and \cite{Kri}, we refer to $ \mathcal{A}_R( D\setminus p_j)$ as a \df{tentacle} of the harmonic amoeba $ \mathcal{A}_R(S)\subset \mathbb{R}^m$ and to $(r_1,\dots,r_m)^\top$ as the \df{slope} of the tentacle.
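The local computation above can be tested numerically on the simplest example: for $S=\mathbb{C}P^1\setminus\{0,\infty\}$ with residues $(r,-r)$, the i.n.d. is $\omega = r\,dz/z$ and the harmonic amoeba map is $s\mapsto r\log\vert s\vert$ up to an additive constant. The following Python sketch (illustrative only; the residue, endpoints and step count are arbitrary choices) integrates $\omega$ numerically along a segment and compares with the logarithmic prediction.

```python
import math

# On CP^1 minus {0, infinity} with residues (r, -r), the i.n.d. is
# omega = r dz / z, so the harmonic amoeba map is s -> r log|s| + const.
r = 1.5                          # arbitrary residue
p, s = 1.0 + 0.0j, 0.02 + 0.03j  # initial point and endpoint near the puncture 0

# midpoint-rule integration of r dz/z along the segment from p to s
# (this segment stays away from the pole at 0)
N = 100_000
total = 0.0j
z0 = p
for k in range(1, N + 1):
    z1 = p + (s - p) * k / N
    total += r / ((z0 + z1) / 2.0) * (z1 - z0)
    z0 = z1

harmonic_coordinate = total.real           # Re(int_p^s omega)
predicted = r * math.log(abs(s) / abs(p))  # r log|s| - r log|p|
```

The agreement between the numerically computed real part and $r\log\vert s\vert - r\log\vert p\vert$ is the one-dimensional shadow of the tentacle asymptotics.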
For $S \in M_{g,n}$, a collection of residues $R$ of dimension $m$ and the integer $k:=2g+n-1$, we can define the (normalised) \df{period matrix} $ \mathcal{P}_{R,S} \in M_{k\times m}( \mathbb{C})$ relative to a basis $\gamma_1,\dots, \gamma_{k}$ of $H_1(S,\mathbb{Z})$ as follows \[ \big( \mathcal{P}_{R,S} \big)_{i,j} := \frac{1}{2\pi i} \int_{\gamma_i} \omega_{R,j} \; . \] The period matrix relative to a different basis $\gamma':= A \cdot \gamma$ is given by $A \cdot \mathcal{P}_{R,S}$ where $A \in \SL_{k}(\mathbb{Z})$.
\begin{definition}\label{defpermat} For $S \in M_{g,n}$ and $R$ a collection of residues of dimension $m$, the \df{period matrix} of $S$ with respect to $R$ is the equivalence class of the matrix $\mathcal{P}_{R,S}$ relative to any basis of $H_1(S,\mathbb{Z})$ in the quotient of $M_{k\times m}( \mathbb{C})$ by $\SL_{k}(\mathbb{Z})$ acting by multiplication on the left. By a slight abuse of notation, we denote the latter class by $\mathcal{P}_{R,S}$. The class $ \mathcal{P}_{R,S} $ is an \df{integer period matrix} if one of its representatives (and then all of them) has coefficients in $\mathbb{Z}$. If $ \mathcal{P}_{R,S} $ is an integer period matrix, define the holomorphic map \[ \df{\begin{array}{rcl} \mathcal{\phi}_{R} \; : \; S & \rightarrow & ( \mathbb{C}^\star)^m\\ z & \mapsto & \left( e^{\int^z_{p} \omega_{R,1}},\dots, e^{ \int^z_{p} \omega_{R,m}} \right) \end{array}} \] up to the choice of an initial point $p\in S$. \end{definition}
The next statement follows from Remark 3.2 and Theorem 2.6 in \cite{GK}.
\begin{theorem}\label{thmcodim} Let $S \in M_{g,n}$ and $R$ be a collection of residues of dimension$ \; 1$. The level set
\[\left\lbrace S' \in M_{g,n} \: \big| \: \mathcal{P}_{R,S} = \mathcal{P}_{R,S'} \right\rbrace\] is a smooth analytic subvariety of $M_{g,n}$ of real codimension $2g$. \end{theorem}
\subsection{Hyperbolic surfaces}\label{sec:hyp}
In this section, we recall some basic facts about the geometry of hyperbolic surfaces. We refer to \cite{B} and \cite{Hub} for more details.
\begin{definition} A \df{pair of pants} $Y$ is a complete Riemannian surface with constant curvature $-1$ and geodesic boundary that is conformally equivalent to $ \mathbb{C} P^1 \setminus \big( E_1 \cup E_2 \cup E_3 \big) $ where $E_j$ is either a point or an open disc and such that the $E_j$'s are pairwise disjoint. \end{definition}
Recall that the upper half-space $ \mathbb{H} := \left\lbrace x+iy \in \mathbb{C} \; \big| \; y>0 \right\rbrace $ equipped with the Riemannian metric \[g:= \dfrac{(dx^2 +dy^2)}{y^2}\] is a complete metric space with constant curvature $-1$, see \cite[Ch.1, \S 1]{B}.
\begin{theorem}[Theorem 2.4.2 in \cite{B}] For any $\alpha, \beta, \gamma >0$, there exists a hyperbolic right-angled hexagon $H(\alpha, \beta, \gamma) \subset \mathbb{H}$, unique up to isometry, whose sides have consecutive lengths $a, \alpha, b, \beta, c, \gamma$. The lengths $a,b,c$ are determined by $\alpha, \beta, \gamma$. \end{theorem}
We can define generalised hyperbolic right-angled hexagons by letting any of the parameters $\alpha, \beta, \gamma$ go to $0$. For any $(\alpha, \beta, \gamma)\in ( \mathbb{R}_{\geqslant 0})^3$, we obtain a Riemannian surface $H(\alpha, \beta, \gamma) \subset \mathbb{H}$, unique up to isometry.
From now on, by a boundary component of a pair of pants $Y$ we mean a boundary geodesic as well as a puncture of $Y$. The theorem below is a combination of Proposition 3.1.5, Theorem 3.1.7 and Lemma 4.4.1 in \cite{B}.
\begin{theorem}\label{thm:buser} For any pair of pants $Y$, there exists a unique orientation reversing isometry $\sigma$ fixing globally each boundary component. The quotient $Y/\sigma$ is isometric to a generalised hyperbolic right-angled hexagon. In particular, the surface $Y$ is determined up to isometry by the length $(\alpha, \beta, \gamma)\in ( \mathbb{R}_{\geqslant 0})^3$ of its $3$ boundary components. \end{theorem}
The isometry $\sigma$ in the above theorem is an involution. By analogy with complex conjugation, we will denote by $ \mathbb{R} Y \subset Y$ the \df{fixed locus of $\sigma$}.
Now let us recall the construction of Riemann surfaces using Fenchel-Nielsen coordinates. Below, we use $l(\gamma)$ to denote the length of a closed geodesic in a hyperbolic surface and $d(\, . \, ,\, . \,)$ to denote the distance induced by the hyperbolic metric.
\begin{definition}\label{def:cubgraph} A \df{cubic graph} $G$ is a connected graph with only trivalent and univalent vertices. An edge adjacent to a univalent vertex is called a \df{leaf}. A \df{ribbon structure} $\mathscr{R}$ on $G$ is the data for every vertex $v$ of $G$ of a cyclical ordering of the edges adjacent to $v$. We denote by $V(G)$ the \df{set of vertices}, by $L(G) $ the \df{set of leaves} and by $E(G)$ the \df{set of non-leaf edges} of $G$. We also denote $LE(G):= L(G) \cup E(G)$. \end{definition}
Let us fix a cubic graph $G$ with $n$ leaves and genus $g$, that is $g:=b_1(G)$, together with a fixed ribbon structure $ \mathscr{R}$. By an Euler characteristic computation, we have that $G$ has $(3g-3+n)$ edges and $(2g-2+n)$ vertices of valence $3$. We assume that $2g+n>2$.
Given a \df{length function} $\ell \, : \, E(G) \rightarrow \mathbb{R}_{> 0} $ and a \df{twist function} $\theta \, : \, E(G) \rightarrow S^1$, we can construct a Riemann surface $S(G,\ell,\theta) \in M_{g,n}$ as follows. To each vertex $v\in V(G)$, associate a pair of pants $Y_v$ together with a correspondence between the $3$ edges of $G$ adjacent to $v$ and the $3$ boundary components of $Y_v$ such that :
-- a boundary component of $Y_v$ is a puncture if and only if the corresponding edge of $G$ is a leaf,
-- the boundary geodesic of $Y_v$ corresponding to the edge $e\in E(G)$ has length $\ell(e)$.\\ Now, we will identify each boundary geodesic $\gamma\subset Y_v$ with $S^1$ with the help of the ribbon structure $ \mathscr{R}$. Any pointed, oriented topological circle equipped with an inner metric of length $2\pi$ is uniquely isomorphic to $S^1$, pointed at $1$, oriented counter-clockwise and equipped with the Euclidean metric. Thus,
we simply need to choose an ``origin" and an orientation on $\gamma$, equipped with the rescaled hyperbolic distance $\frac{2\pi}{l(\gamma)}d(\,.\,,\, .\,)$, in order to identify $\gamma$ with $S^1$. We choose the orientation whose direct normal vector is pointing inside $Y_v$. For the origin, observe that the involution $\sigma$ on $Y_v$ has two fixed points on $\gamma$. Exactly one of these two points belongs to a component of $ \mathbb{R} Y_v$ intersecting the boundary component of $Y_v$ preceding $\gamma$ according to the ordering induced by $ \mathscr{R}$. We choose the latter point as the origin of $\gamma$.
It remains to glue the pairs of pants $Y_v$ together. For two vertices $v_1, v_2 \in V(G)$ connected by an edge $e\in E(G)$, glue the two geodesics of $ Y_{v_1} $ and $ Y_{v_2} $ corresponding to $e$ with the isometric involution \[ \begin{array}{rcl}
S^1 & \rightarrow & S^1\\ z & \mapsto & -\overline{\theta(e)z}. \end{array}\] The result of the above construction is a Riemann surface $S(G,\ell,\theta) \in M_{g,n}$. The functions $(\ell,\theta)$ are the \df{Fenchel-Nielsen coordinates} of the Riemann surface $S(G,\ell,\theta)$, relative to $G$ and $ \mathscr{R}$. These coordinates allow us to define the map \[ \begin{array}{rcl} \Upsilon_G \: : \: ( \mathbb{C}^\star)^{3g-3+n} & \rightarrow & M_{g,n} \\ (\ell,\theta) & \mapsto & S(G,\ell,\theta) \end{array} \]
\begin{theorem}\label{thmhyperb} The map $\Upsilon_{G}$ is a real-analytic surjective map. \end{theorem}
\begin{proof} Consider the Teichmüller space $\mathcal{T}_{g,n}$ together with its canonical complex structure so that the universal map $\mathcal{T}_{g,n} \rightarrow M_{g,n}$ is holomorphic. The Teichmüller space admits Fenchel-Nielsen coordinates $(\ell, \Theta)\in ( \mathbb{R}_{>0})^{3g-3+n}\times \mathbb{R}^{3g-3+n}$ (relative to any choice of $G$ and $ \mathscr{R}$) such that the length function $\ell$ is the same as the one we just defined and the twist function $\theta$ is the projection of $\Theta$ in $( \mathbb{R}/ \mathbb{Z})^{3g-3+n}$, see \cite[Ch.3, \S 3]{B}. It follows that the universal map is the composition of $(\ell,\Theta)\mapsto(\ell,\theta)$ with $\Upsilon_G$. In particular, $\Upsilon_G$ is surjective since the universal map is. Also, we deduce that $\Upsilon_G$ is real-analytic if and only if the universal map is real-analytic in Fenchel-Nielsen coordinates. According to \cite[Theorem 3.12]{Wol}, the Fenchel-Nielsen coordinates $(\ell, \Theta)$ are real-analytic on $\mathcal{T}_{g,n}$. In particular, the universal map $\mathcal{T}_{g,n} \rightarrow M_{g,n}$ is real-analytic in the latter coordinates. The result follows.
\end{proof}
To conclude this section, we recall some classical facts around the collar Theorem, see \cite[Ch.4]{B} for more details.
\begin{definition} Let $Y$ be a pair of pants. For any boundary geodesic $\gamma$ of $Y$, the \df{half-collar} associated to $\gamma$ is the tubular neighbourhood
\[\hc_\gamma := \left\lbrace z \in Y \; \big| \; d(z, \gamma) \leqslant w \big(l(\gamma)\big) \right\rbrace\] where $ w(l) := \arcsinh \big(\frac{1}{\sinh( l/2)}\big)$. For any puncture $p$ of $Y$, the \df{cusp} $K_p$ associated to $p$ is the neighbourhood of $p$ in $Y$ isometric to the space $\left]- \infty, \log(2) \right] \times \mathbb{R}/\mathbb{Z}$ with coordinates $(\rho,y)$ and metric $d\rho^2 + e^{2 \rho}dy^2$.
For any Riemann surface $S \in M_{g,n}$, and any simple closed geodesic $\gamma \subset S$, the \df{collar} associated to $\gamma$ is the tubular neighbourhood
\[K_\gamma := \left\lbrace z \in S \; \big| \; d(z, \gamma) \leqslant w\big(l(\gamma)\big) \right\rbrace. \] \end{definition}
According to \cite[Equation (3.3.6)]{B}, the half-collar $\hc_\gamma$ is isometric to $\left[0, w(l) \right] \times \mathbb{R}/\mathbb{Z}$ equipped with the metric $d\rho^2 + l^2 \cosh(\rho)^2 dy^2$.
\begin{lemma}\label{propcolcusp} The half-collar associated to a geodesic $\gamma$ converges to a cusp (in Gromov-Hausdorff distance) when $l(\gamma)$ tends to $0$. \end{lemma}
\begin{proof} To see this, it suffices to compare the limit of the length of the boundary component of $\hc_{\gamma}$ different from $\gamma$ with the length of the boundary of $K_p$. The latter lengths are easily computed from the metric descriptions given above. The length is $2$ for the cusp and $l\cosh(w(l))$ for the half-collar. Using the formula $\arcsinh(x)=\ln(x+\sqrt{x^2+1})$ and the fact that $\lim_{l \rightarrow 0} w(l)= +\infty$, we compute that \[ \begin{array}{rl} l^2\cosh(w(l))^2 & = \;\, l^2\Big(\frac{e^{w(l)}+e^{-w(l)}}{2} \Big)^2 \;\, \underset{l \rightarrow 0}{\sim}\;\, l^2\Big(\frac{e^{w(l)}}{2} \Big)^2 \;\, \sim \;\, l^2\left(\frac{\frac{1}{\sinh(l/2)}+\sqrt{\frac{1}{\sinh(l/2)^2}+1}}{2} \right)^2 \\ \\ & \sim \;\, l^2\Big(\frac{1}{\sinh(l/2)} \Big)^2 \;\, \sim \;\, l^2 \big(\frac{2}{l}\big)^2 \;\, = \;\, 4 \, . \end{array} \] \end{proof}
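The limit computed in the proof can be checked numerically: the outer boundary length $l\cosh(w(l))$ of the half-collar tends to $2$, the boundary length of the cusp. The Python sketch below (illustrative only, not part of the proof) evaluates this quantity for small $l$.

```python
import math

def w(l):
    # half-collar width w(l) = arcsinh(1 / sinh(l/2))
    return math.asinh(1.0 / math.sinh(l / 2.0))

def boundary_length(l):
    # length of the outer boundary circle of the half-collar: l * cosh(w(l))
    return l * math.cosh(w(l))

# as l -> 0, this tends to 2, the boundary length of the cusp
# computed from the metric d rho^2 + e^{2 rho} dy^2 at rho = log(2)
lengths = [boundary_length(10.0 ** -k) for k in range(1, 7)]
```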
\begin{proposition}[Proposition 4.4.4 in \cite{B}] Let $Y$ be a pair of pants. The half-collars and cusps of $Y$ are pairwise disjoint. \end{proposition}
\subsection{Conformal invariants on holomorphic annuli}\label{secconfinv}
Unlike holomorphic discs, holomorphic annuli have a non-trivial modulus. For example, the annuli $A_{r,R} = \left\lbrace z \in \mathbb{C} \: \big| \: r \leqslant \vert z \vert \leqslant R \right\rbrace$ are determined up to bi-holomorphism by the single conformal invariant $\frac{1}{2\pi}\log \left( \frac{R}{r}\right)$, see \cite[Ch.14]{Rud}. This conformal invariant is related to so-called extremal lengths on complex domains, see e.g. \cite{Ahl}. Since any holomorphic annulus is bi-holomorphic to $A_{1,R}$ for some $R>0$, holomorphic annuli have a single modulus, see either \cite[Ch.5, \S 1, Theorem 10]{Ahl79} or \cite[Proposition 3.2.1]{Hub}. Another useful model for holomorphic annuli is given by
\[A_m = \left\lbrace z \in \mathbb{C} \: \big| \: 0 \leqslant \Re( z ) \leqslant m \right\rbrace \big/ i \mathbb{Z},\] for any $m>0$. The map $ z \mapsto e^{2\pi z} $ identifies $A_m$ to $A_{1,e^{2\pi m}}$ so that $A_m$ has modulus $m$. As a consequence, every holomorphic annulus is bi-holomorphic to $A_m$ for a unique $m>0$.
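The identification of the two models can be checked numerically: the map $z\mapsto e^{2\pi z}$ sends $A_m$ onto $A_{1,e^{2\pi m}}$, and the conformal invariant $\frac{1}{2\pi}\log(R/r)$ of the image equals $m$. The Python sketch below (the value of $m$ is an arbitrary test choice) verifies this.

```python
import cmath
import math

def modulus_round(r, R):
    # conformal invariant (1 / 2pi) log(R / r) of A_{r,R} = {r <= |z| <= R}
    return math.log(R / r) / (2.0 * math.pi)

m = 1.7  # modulus of the strip model A_m (arbitrary test value)
# z -> e^{2 pi z} sends the strip {0 <= Re(z) <= m} / iZ onto
# {1 <= |z| <= e^{2 pi m}}; a point with Re(z) = m lands on the outer circle
R = abs(cmath.exp(2.0 * math.pi * (m + 0.25j)))
```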
In the sequel, we will need to compare the above conformal models with the collar model $K_l$ coming from hyperbolic geometry.
\begin{lemma}\label{lemmaconfcollar} Define $m(l):= \frac{2}{l}\arccos\big(\frac{1}{\cosh(w(l))}\big)$. Then, the collar $K_l$ is bi-holomorphic to the annulus $A_{m(l)}$. In particular, we have \[ m(l) \; \underset{l \rightarrow 0}{\sim} \; \pi/l \, .\] \end{lemma}
\begin{proof} Recall that the collar $K_l$ is isometric to $\left[- w(l), w(l) \right] \times \mathbb{R}/\mathbb{Z}$ equipped with the metric $d\rho^2 + l^2 \cosh(\rho)^2 dy^2$. Let us look for a change of variable $x \mapsto \rho(x)$ so that the pullback of the latter metric to the $(x,y)$-space is conformally equivalent to the Euclidean metric. This amounts to solving the differential equation \[\big(\rho'(x)\big)^2 = \big(l \cosh (\rho(x))\big)^2.\] A solution is given by the odd function $\rho(x)$ whose restriction to $ \mathbb{R}_{\geqslant 0}$ is given by $\rho(x) = \arccosh\big(\frac{1}{\cos(lx)}\big)$.
It follows that $K_l$ is isometric to $\left[ - \rho^{-1}\big(w(l)\big), \rho^{-1}\big(w(l)\big) \right] \times \mathbb{R}/\mathbb{Z}$ with the metric
$\big(\frac{l}{\cos(lx)}\big)^2 (dx^2 + dy^2)$. The latter is the hyperbolic metric on the annulus $A_{2\rho^{-1}(w(l))}$. We deduce that $$m(l) \; = \; 2\, \rho^{-1}\big(w(l)\big)\; = \; \frac{2}{l}\,\arccos\left(\frac{1}{\cosh( w(l))}\right).$$ For the asymptotic of $m(l)$ at $0$, observe that $$\displaystyle \lim_{l\rightarrow 0} \, w(l) = + \infty \; \; \text{ and } \; \; \displaystyle \lim_{x\rightarrow +\infty} \, \arccos\left(\frac{1}{\cosh(x)}\right) = \frac{\pi}{2}.$$ \end{proof}
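The asymptotic $m(l)\sim \pi/l$ from the lemma can also be verified numerically. The following Python sketch (illustrative only) evaluates $l\cdot m(l)$ for small $l$ and compares it with $\pi$.

```python
import math

def w(l):
    # half-collar width w(l) = arcsinh(1 / sinh(l/2))
    return math.asinh(1.0 / math.sinh(l / 2.0))

def m(l):
    # conformal modulus of the collar K_l, as computed in the lemma
    return (2.0 / l) * math.acos(1.0 / math.cosh(w(l)))

# the asymptotic m(l) ~ pi / l means l * m(l) -> pi as l -> 0
ratios = [(10.0 ** -k) * m(10.0 ** -k) / math.pi for k in range(1, 6)]
```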
In the sequel, we will also need to consider families of holomorphic differentials $\left\lbrace \omega_m \right\rbrace_{m>0}$ where $\omega_m$ is defined on $A_m$. In order to deal with such families, it is useful to introduce the model
\[C_\lambda = \left\lbrace (z,w) \in \mathbb{C}^2 \, \big| \; \vert z \vert \leqslant 1, \; \vert w \vert \leqslant 1, \: zw= \lambda \right\rbrace\] for $0 \leqslant \lambda <1$. The parametrisation $z \mapsto (e^{-2\pi z},\lambda e^{2\pi z})$ identifies $C_\lambda$ with $A_{-\log(\lambda)/(2\pi)}$. In particular, the family $\big\lbrace C_\lambda\big\rbrace_{0<\lambda<1}$ is a universal family of annuli. For $\lambda=0$, the set $C_\lambda$ is the union of the 2 unit discs in the $z$- and the $w$-coordinate axes of $ \mathbb{C}^2$.
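The stated identification of $C_\lambda$ with $A_{-\log(\lambda)/(2\pi)}$ can be checked directly: the parametrisation satisfies the defining equation $zw=\lambda$ and sends the two ends of the strip to the two boundary circles. The Python sketch below (the value of $\lambda$ is an arbitrary test choice) verifies this.

```python
import cmath
import math

lam = 1e-3                            # arbitrary value 0 < lam < 1
m = -math.log(lam) / (2.0 * math.pi)  # expected modulus of C_lam

def param(z):
    # parametrisation z -> (e^{-2 pi z}, lam e^{2 pi z}) of C_lam by the strip A_m
    return (cmath.exp(-2.0 * math.pi * z), lam * cmath.exp(2.0 * math.pi * z))

# the image satisfies zw = lam, and the two ends Re(z) = 0 and Re(z) = m
# of the strip are sent to the boundary circles |z| = 1 and |w| = 1
u0, w0 = param(0.3j)
u1, w1 = param(m + 0.3j)
```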
\begin{definition}\label{deflocconv} A family $\left\lbrace \omega_m \right\rbrace_{m>0}$ of holomorphic differentials defined respectively on $A_m$ is \df{continuous} if the differential induced on the total space $\cup_{0<\lambda<1} C_\lambda \subset \mathbb{C}^2$ is continuous. A continuous family $\left\lbrace \omega_m \right\rbrace_{m>0}$ \df{converges} at $+\infty$ if the limit in the $C_\lambda$-model is a meromorphic differential on each of the two irreducible components of $C_0$. \end{definition}
Observe that the limiting differential is holomorphic everywhere on $C_0$ except at the origin, where the residues on the two discs are opposite to each other. Suppose now that we are given a continuous family of holomorphic differentials $\left\lbrace \omega_m \right\rbrace_{m>0}$ as above and normalised in the following way \begin{eqnarray}\label{normdiff} \dfrac{1}{2 \pi i} \int_\gamma \omega_m = 1 \end{eqnarray} where $\gamma(t)=m/2+it$, $0\leqslant t \leqslant 1$. We would like to compute the asymptotic of the integral of $\omega_m$ along the width of $A_m$ as $m$ tends to $+\infty$. To that aim, let us introduce the following terminology.
\begin{definition} An \df{admissible sequence of paths} is a sequence of continuous, piecewise $C^1$ maps $\rho_m : [0,m] \rightarrow A_m$ such that
-- $\Re \big( \rho_m (0) \big) =0$ and $\Re \big( \rho_m (m) \big) =m$,
-- there exists $M>0 $ such that $ \left| \int_0^m \frac{d}{dt}\Im\big( \rho_m(t) \big) dt \right| <M$ for any $m$. \end{definition}
The latter condition is equivalent to saying that the parametrised curve obtained by composition of $\rho_m$ with the projection $A_m\rightarrow \mathbb{R}/ \mathbb{Z}$ has length at most $M$.
\begin{lemma}\label{lemmaconfdiff} Let $\left\lbrace \omega_m \right\rbrace_{m>0}$ be a converging family of holomorphic differentials normalised as in \eqref{normdiff}. For any admissible sequence of paths $\left\lbrace \rho_m \right\rbrace_{m>0}$, we have \[\int_{\rho_m} \omega_m \; \underset{m \rightarrow \infty}{\sim} \; 2 \pi m.\] \end{lemma}
\begin{proof} First, consider the family $\omega_m^0 = 2 \pi dz$ on $A_m$, obviously normalised as in \eqref{normdiff}. Recall that the map $z \mapsto (e^{-2\pi z},\lambda e^{2\pi z})$ allows us to identify $C_{e^{-2\pi m}}$ with $A_{m}$. It follows then from a simple computation that the differential $\omega_m^0$ is given on each $C_{e^{-2\pi m}}$ by the restriction of the differential \[\left(-\frac{1}{2z}+w\cdot h(z,w)\right) dz + \left(\frac{1}{2w}+z\cdot h(z,w)\right) dw\] on $ \mathbb{C}^2$, where $h(z,w)$ is any meromorphic function defined on $( \mathbb{C}^\star)^2$; the $h$-terms cancel on each $C_\lambda$ since $w\, dz + z\, dw = 0$ there. In particular, we see that $\omega_m^0$ converges to the meromorphic differential on $C_0$ that restricts respectively to $-dz/z$ and $dw/w$ on the $z$- and $w$-unit discs, by taking $h(z,w)=- (2zw)^{-1}$ and $h(z,w)=(2zw)^{-1}$ respectively. Consider now the admissible sequence of paths $\rho_m^0 (t)=t$. Then, we have \[\int_{\rho_m^0} \omega_m^0 = 2 \pi m.\] Now, for any converging family $\left\lbrace \omega_m \right\rbrace_{m>0}$ normalised as in \eqref{normdiff}, the sequence $\left\lbrace \omega_m - \omega_m^0 \right\rbrace_{m>0}$ converges to a differential on $C_0$ whose restriction to any component is holomorphic. It follows that $\int_{\rho_m^0} ( \omega_m - \omega_m^0 ) $ converges and that \[\int_{\rho_m^0} \omega_m \; \underset{m \rightarrow \infty}{\sim} \; 2 \pi m.\] Now for any admissible sequence of paths $\left\lbrace \rho_m \right\rbrace_{m>0}$, there exist 2 sequences of locally injective paths $\left\lbrace \rho_{1,m} \right\rbrace_{m>0}$ and $\left\lbrace \rho_{2,m} \right\rbrace_{m>0}$, respectively contained in the boundaries $ \left\lbrace \Re(z)=0 \right\rbrace$ and $ \left\lbrace \Re(z)=m \right\rbrace$ of $A_m$, and such that $\rho_m$ is isotopic to $\rho_{2,m} \circ \rho_m^0 \circ \rho_{1,m}$.
Hence, we have that \[\int_{\rho_m} \omega_m = \int_{\rho_{1,m}} \omega_m + \int_{\rho_m^0} \omega_m + \int_{\rho_{2,m}} \omega_m .\] Since $\left\lbrace \rho_m \right\rbrace_{m>0}$ is admissible, the lengths of the paths $\rho_{1,m}$ and $\rho_{2,m}$ are bounded from above. Since $\left\lbrace \omega_m \right\rbrace_{m>0}$ converges, the integrals of $\omega_m$ along $\rho_{1,m}$ and $\rho_{2,m}$ are also bounded from above. As a consequence, we have \[\int_{\rho_m} \omega_m \; \underset{m \rightarrow \infty}{\sim} \; \int_{\rho_m^0} \omega_m \; \underset{m \rightarrow \infty}{\sim} \; 2 \pi m.\] \end{proof}
\begin{proposition}\label{convdifloc} Let $\left\lbrace l_t \right\rbrace_{t>1}$ be a continuous family of positive numbers such that \[ l_t \; \underset{t \rightarrow \infty}{\sim} \; \dfrac{2\pi^2}{ l \cdot \log(t)} \] for some $l>0$ and let $\alpha$ be a positive number. Then, for any converging sequence $\left\lbrace \omega_t \right\rbrace_{t>1}$ of holomorphic differentials defined respectively on $A_{\alpha \cdot m(l_t)}$ and any admissible sequence of paths $\left\lbrace \rho_t \right\rbrace_{t>1}$, we have \[\dfrac{1}{\log(t)} \int_{\rho_t} \omega_t \; \underset{t \rightarrow \infty}{\sim} \; \alpha \, l \, \Lambda\] where \[ \Lambda \; := \; \lim_{t \rightarrow \infty} \; \dfrac{1}{2\pi i} \int_\gamma \omega_t. \] \end{proposition}
\begin{proof} Denote $\Lambda_t := 1/(2\pi i) \int_\gamma \omega_t$. Observe that, since the family $\left\lbrace \omega_t \right\rbrace_{t>1}$ converges, the integral $\Lambda_t$ converges to the residue of the limit differential on one of the two components of $C_0$. Assume first that $\Lambda \neq 0$ and let $t_0>1$ be such that $\Lambda_t \neq 0$ for any $t>t_0$. Applying Lemma \ref{lemmaconfdiff} to the normalised family $\left\lbrace (1/\Lambda_t)\omega_t \right\rbrace_{t>t_0}$, together with the equivalence given in Lemma \ref{lemmaconfcollar}, we obtain that \[ \frac{1}{\Lambda_t} \int_{\rho_t} \omega_t \;\; \underset{t \rightarrow \infty}{\sim} \;\; 2 \pi \, \alpha \, m(l_t) \;\; \underset{t \rightarrow \infty}{\sim} \;\; 2\pi \,\alpha\, \frac{\pi}{l_t} \;\; \underset{t \rightarrow \infty}{\sim} \;\; \alpha\, l\, \log(t).\] If $\Lambda = 0$, then the limit differential is holomorphic on the two components of $C_0$ and $\int_{\rho_t} \omega_t$ is bounded as a function of $t$. The result follows. \end{proof}
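The final substitution in the proof above can be made explicit: since $l_t \sim 2\pi^2/(l\cdot \log(t))$ by hypothesis, we have \[ 2\pi \,\alpha\, \frac{\pi}{l_t} \;\; \underset{t \rightarrow \infty}{\sim} \;\; 2\pi^2 \alpha \cdot \frac{l\, \log(t)}{2\pi^2} \;=\; \alpha\, l\, \log(t). \]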
\subsection{Tropical curves}\label{sec:trop}
In this section, we give a brief account of (phase-)tropical curves and tropical 1-forms on them.
\subsubsection{Tropical curves and morphisms}\label{sectropcurvmorph}
\begin{definition}\label{deftropcurv} A \df{tropical curve} $C$ is a graph with all univalent vertices removed and equipped with a complete inner metric. Two tropical curves are \df{isomorphic} if they are isometric to each other. A tropical curve is \df{simple} if its underlying graph is a cubic graph (see Definition \ref{def:cubgraph}).
\end{definition}
We carry over the notation $V(C)$, $L(C)$, $E(C)$ and $LE(C)$ from Definition \ref{def:cubgraph}. We recall that in the present terminology, edges have finite length while leaves have infinite length. For any vertex $v \in V(C)$ of a simple tropical curve $C$, denote by $T_v \subset C$ the tripod obtained as the union of $v$ with the $3$ open leaves/edges adjacent to $v$ and by $\htr_v \subset T_v$ the tripod obtained by taking only half of each leaf/edge adjacent to $v$.
\begin{remark}\label{rem:tropGL} A tropical curve can be represented by a pair $C :=(G,\ell)$, where $G$ is the graph supporting $C$ and $\ell : E(G) \rightarrow \mathbb{R}_{>0} $ is the length function induced by the metric on $C$. \end{remark}
\begin{definition}\label{tropmorph} A \df{tropical morphism} $ \pi \, : \, C \rightarrow \mathbb{R}^m $ on a tropical curve $C$ is a proper continuous map subject to the following conditions:
--\textbf{Integrality}: for any $ e \in LE( C )$, the map $ \pi_{\vert e} $ is integer affine linear with respect to the metric on $e$.
--\textbf{Balancing}: For any $ v \in V ( C )$, denote by $\vec{e}_1, \dots, \vec{e}_n$ the outgoing unitary tangent vectors to the $n$ leaves/edges adjacent to $v$. Then \[ \sum_{1\leqslant j \leqslant n} d\pi (\vec{e}_{j}) \; = \; 0. \]
A \df{tropical curve} $\Gamma \subset \mathbb{R}^m$ is the image of a tropical morphism $ \pi : C \rightarrow \mathbb{R}^m$. \end{definition}
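For instance, at a trivalent vertex $v$ whose three adjacent leaves have outgoing primitive slopes $(-1,0)$, $(0,-1)$ and $(1,1)$ (the local model of a tropical line), the balancing condition reads \[ d\pi(\vec{e}_1)+d\pi(\vec{e}_2)+d\pi(\vec{e}_3) \;=\; (-1,0)+(0,-1)+(1,1) \;=\; (0,0). \]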
\subsubsection{Tropical 1-forms}
\begin{definition}\label{def1form} A (tropical) \df{1-form} $ \textsc{w}$ on a tropical curve $C$ is a locally constant, real-valued 1-form on $C\setminus V(C)$ that satisfies the following balancing condition:
-- for any $v \in V( C )$, and outgoing unitary tangent vectors $\vec{e}_1,\dots,\vec{e}_n$ on the $n$ leaves/edges adjacent to $v$, we have the balancing condition \[ \sum_{1\leqslant j \leqslant n} \textsc{w} (\vec{e}_j) \; =\; 0. \] The vector space of 1-forms on $C$ is denoted by $\Omega( C )$. For any inward oriented leaf $\vec{e}$ of $C$, define the \df{residue} of $ \textsc{w}$ at $e$ to be the number $ \textsc{w} (\vec{e})$. An element $ \textsc{w} \in \Omega( C )$ is \df{holomorphic} if all its residues are zero. The vector space of holomorphic 1-form on $C$ is denoted by $ \Omega_\mathcal{H}( C )$. \end{definition}
A 1-form $ \textsc{w}$ on $C$ corresponds locally to the datum $a \, dx$ on any $e\in LE(C)$, where $a \in \mathbb{R}$ and $x \, : \, e \rightarrow \mathbb{R}$ is an isometric coordinate. For any other isometric coordinate $y$, one has $ \textsc{w} = \pm a \, dy$ depending on whether $x \circ y^{-1}$ preserves or reverses orientation. Therefore, the $1$-form $ \textsc{w}$ is equivalent to the data of an orientation on any $e\in LE(C)$ plus the corresponding real number $a$. In other words, the form $ \textsc{w}$ is equivalent to the datum of an electric current on $C$ seen as an electrical circuit. The above balancing condition then corresponds to the conservation of current at the nodes of the circuit.
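To illustrate the electrical analogy, consider a trivalent vertex $v$ with adjacent outgoing vectors $\vec{e}_1$, $\vec{e}_2$, $\vec{e}_3$ and currents $ \textsc{w}(\vec{e}_1)=2$ and $ \textsc{w}(\vec{e}_2)=1$ flowing out of $v$. The balancing condition then forces \[ \textsc{w}(\vec{e}_3) \;=\; - \textsc{w}(\vec{e}_1)- \textsc{w}(\vec{e}_2) \;=\; -3, \] that is, a current of intensity $3$ flowing into $v$ along the third leaf/edge.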
\begin{definition}\label{pathloop} Let $C=(G,\ell)$ be a tropical curve. A \df{path} in $C$ is a continuous map $\rho \, : \, [0,1] \rightarrow LE(C)$ such that $\rho(0)$, $\rho(1) \in V(G)$ and such that the restriction of $\rho$ to $[0,1[$ is injective. In particular, a path can join a univalent vertex of $G$ to another. A \df{loop} is a path $\rho$ such that $\rho(0)=\rho(1)$. Alternatively, we will also represent a path by the induced ordered collection $(\vec{e}_1,\dots,\vec{e}_k)$ of oriented leaves/edges. For a path $\rho=(\vec{e}_1,\dots,\vec{e}_k)$ and any 1-form $ \textsc{w}$ on $C$, we define the integral \[\int_\rho \textsc{w} \; := \; \sum_{1\leqslant j \leqslant k} \ell(e_j)\, \textsc{w}(\vec{e}_j). \] A 1-form $ \textsc{w} \in \Omega(C)$ is \df{exact} if $\int_\rho \textsc{w} = 0$ for any loop $\rho$ in $C$. The vector space of exact 1-forms on $C$ is denoted by $ \Omega_0( C )$. For a path $\rho$ joining two leaves, or a loop $\rho$ in $C$, define the 1-form $ \textsc{w}^\rho \in \Omega(C)$ \df{dual to} $\rho$ by \[ \textsc{w}^\rho(\vec{e}) = \left\lbrace \begin{array}{rl} 1 \quad& \text{if } \; \vec{e} \in \im(\rho) \\ -1 \quad& \text{if }\; -\vec{e} \in \im(\rho) \\ 0 \quad& \text{otherwise} \end{array} \right. . \] \end{definition}
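As a direct consequence of the definitions, integrating the dual $1$-form of a loop $\rho=(\vec{e}_1,\dots,\vec{e}_k)$ along $\rho$ itself recovers the total length of the loop: \[ \int_\rho \textsc{w}^\rho \;=\; \sum_{1\leqslant j \leqslant k} \ell(e_j)\, \textsc{w}^\rho(\vec{e}_j) \;=\; \sum_{1\leqslant j \leqslant k} \ell(e_j) \;>\; 0. \] In particular, the dual $1$-form of a loop is never exact.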
\begin{proposition}\label{tdiff} Let $C$ be a tropical curve of genus $g$ with $n\geqslant 2$ leaves.
-- a) The sum of the residues of any 1-form $ \textsc{w}\in \Omega(C)$ is zero.
-- b) $\dim_ \mathbb{R} \Omega_0 (C)= n-1$, $\dim_ \mathbb{R} \Omega_ \mathcal{H} (C)= g$ and $\Omega (C) = \Omega_0 (C) \oplus \Omega_ \mathcal{H} (C)$.
-- c) Any element of $\Omega_0 (C) $ is determined by its residues. \end{proposition}
\begin{proof} Let us choose $g$ edges $e_1,\dots, e_g$ in $C$ such that the graph $C^*$ obtained from cutting $C$ at the middle of the latter $g$ edges is a connected tree. For $1\leqslant j \leqslant g$, there is exactly one path (up to orientation) in $C^*$ joining the two ends given by the cut at $e_j$. This path induces a loop $\rho_j$ in $C$. Denote by $ \textsc{w}_j \in \Omega(C)$ the $1$-form dual to the latter loop. Given an appropriate orientation $\vec e_j$ on $e_j$, we have that $\int_{\vec e_j} \textsc{w}_k=\delta_{jk}$ where $\delta_{jk}$ is the Kronecker delta. Label the leaves of $C$ by $l_1,\dots,l_n$. For $1\leqslant j \leqslant n-1$, there is only one path from $l_j$ to $l_n$ in $C$ coming from a path in $C^*$. Denote this path by $\rho_{g+j}$ and denote by $ \textsc{w}_{g+j}$ its dual $1$-form, see Definition \ref{pathloop}. We claim that the elements $ \textsc{w}_1,\dots, \textsc{w}_{g+n-1}$ in $\Omega(C)$ are linearly independent. Indeed, the matrix of the linear map from $\big( \bigoplus_j \textsc{w}_j \cdot \mathbb{R} \big)$ to $ \mathbb{R}^{g+n-1}$ given by $\big(\int_{\vec e_1} ,\dots,\int_{\vec e_g} , \Res_{ l_1} ,\dots,\Res_{ l_{n-1}}\big)$ is diagonal and invertible. Now, we claim that $\Omega(C)$ has dimension $g+n-1$ so that $ \textsc{w}_1,\dots, \textsc{w}_{g+n-1}$ form a basis. Indeed, for any $ \textsc{w}\in \Omega(C)$, there exists a unique linear combination $\widetilde \textsc{w}$ of $ \textsc{w}_{g+1},\dots, \textsc{w}_{g+n-1}$ such that $ \textsc{w}-\widetilde \textsc{w}$ has no residues and hence defines an element of $H^1(C, \mathbb{R})$. From the previous arguments, the elements $ \textsc{w}_1,\dots, \textsc{w}_{g}$ are linearly independent in the $g$-dimensional vector space $H^1(C, \mathbb{R})$. It follows that $\Omega(C)\simeq H^1(C, \mathbb{R})\oplus \big(\bigoplus_{j=g+1}^{g+n-1} \textsc{w}_j \cdot \mathbb{R}\big)$. In particular, the vector space $\Omega(C)$ has the expected dimension. 
Now, since the sum of the residues of every element of the basis $ \textsc{w}_1,\dots, \textsc{w}_{g+n-1}$ is zero, point $a)$ holds. As shown previously, we have that $\Omega_ \mathcal{H}(C)\simeq H^1(C, \mathbb{R})$ is $g$-dimensional. In particular, there exist elements $\left\lbrace \widetilde \textsc{w}_1,\dots,\widetilde \textsc{w}_g \right\rbrace$ in $\Omega_ \mathcal{H}(C)$ such that $\int_{\rho_k} \widetilde \textsc{w}_j= \delta_{jk}$ for any $1\leqslant k \leqslant g$. Thus, there exists for any $ \textsc{w}\in \Omega(C)$ a unique linear combination $\widetilde \textsc{w}$ of $\widetilde \textsc{w}_1,\dots,\widetilde \textsc{w}_g$ such that $ \textsc{w}-\widetilde \textsc{w}$ is exact. It follows that $\Omega (C) = \Omega_0 (C) \oplus \Omega_ \mathcal{H} (C)$ and $\dim_ \mathbb{R} \Omega_0 (C)= n-1$, proving $b)$. For $c)$, consider the unique $\widetilde \textsc{w}_j$, $g<j\leqslant g+n-1$, such that $ \textsc{w}_j-\widetilde \textsc{w}_j \in \Omega_0(C)$. By construction, the element $\widetilde \textsc{w}_j$ has residues $1$ at $l_j$ and $-1$ at $l_{n}$ and no other residues. It follows that for any $ \textsc{w}\in\Omega_0(C)$, the element $ \textsc{w} -\big( (\Res_{l_1} \textsc{w})\widetilde \textsc{w}_{g+1} +\dots + (\Res_{l_{n-1}} \textsc{w}) \widetilde \textsc{w}_{g+n-1}\big)$ has no residues. Since it is also exact by construction, it is zero by $b)$. This proves point $c)$. \end{proof}
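As a sanity check of point $b)$, consider the simple tropical curve $C$ of genus $g=1$ with $n=2$ leaves whose underlying cubic graph consists of two vertices joined by two edges, each vertex carrying one leaf. Then \[ \dim_ \mathbb{R} \Omega(C) \;=\; g+n-1 \;=\; 2, \] with $\Omega_ \mathcal{H}(C)$ spanned by the unit current flowing around the cycle and $\Omega_0(C)$ spanned by the unique exact $1$-form with residues $1$ and $-1$ at the two leaves.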
\begin{definition}\label{defPir} A \df{collection of residues} of dimension $m$ on a simple tropical curve $C$ with $n$ leaves is a real $m \times n$ matrix $R:= \big( r_{k,j} \big)_{k,j}$ such that $\sum_j r_{k,j} =0$ for any $1 \leqslant k \leqslant m$. Denote by $$ \textsc{w}_R^C := ( \textsc{w}^{C}_{R,1},\dots, \textsc{w}^{C}_{R,m})$$ the vector of exact 1-forms on $C$ induced by the $m$ rows of $R$, see Proposition \ref{tdiff}. In practice, we will simply denote the latter vector by $ \textsc{w}_R=( \textsc{w}_{R,1}, \dots, \textsc{w}_{R,m})$ when no confusion is possible. Finally, define the map \[ \df{ \begin{array}{rcl} \pi_R \; : \; C & \rightarrow & \mathbb{R}^m\\ q & \mapsto & \big( \int^q_{p} \textsc{w}_{R,1},\dots, \int^q_{p} \textsc{w}_{R,m} \big) \end{array} } \] up to the choice of an initial point $p \in C$. \end{definition}
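For example, with $m=1$ and a simple tropical curve $C$ with $n=3$ leaves, a collection of residues amounts to a single row $(r_1,r_2,r_3)$ with $r_1+r_2+r_3=0$, say \[ (r_1,r_2,r_3) \;=\; (1,-1,0). \] The vector $ \textsc{w}_R$ then consists of the unique exact $1$-form with residues $1$, $-1$ and $0$ at the three leaves (see Proposition \ref{tdiff}, point $c)$), and $\pi_R : C \rightarrow \mathbb{R}$ is obtained by integrating it from the base point $p$.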
\begin{remark} Exact 1-forms are exactly those forms obtained as gradients of certain functions on tropical curves. These functions are the harmonic tropical morphisms that we will introduce in Section \ref{secharmtrop}. \end{remark}
\subsubsection{Phase-tropical curves}
Phase-tropical curves first appeared in \cite{Mikh05} under the name of complex tropical curves. In the latter reference, Mikhalkin considered the diffeomorphism \[ \begin{array}{rcl} H_t \; \; : \;\; ( \mathbb{C}^\star)^2 & \rightarrow & ( \mathbb{C}^\star)^2 \\ (z,w) & \mapsto & \left( \vert z \vert^{\frac{1}{\log(t)}} \frac{z}{\vert z \vert} ,\vert w \vert^{\frac{1}{\log(t)}} \frac{w}{\vert w \vert} \right) \end{array} \] satisfying $ \mathcal{A} \circ H_t = \frac{1}{\log(t)} \cdot \mathcal{A}$. Now,
for a family of algebraic curves $\left\lbrace \mathcal{C}_t \right\rbrace_{t>1} \subset (\mathbb{C}^\star)^2$ converging to a tropical curve $C\subset \mathbb{R}^2$ in the sense of \eqref{eq:conv}, the limit of the family $\left\lbrace H_t( \mathcal{C}_t) \right\rbrace_{t>1} \subset (\mathbb{C}^\star)^2$ in Hausdorff distance on compact sets is a so-called phase-tropical curve $V\subset( \mathbb{C}^{\star} )^{2}$. In particular, we have that $ \mathcal{A}(V)=C$.
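With the convention that $ \mathcal{A}(z,w)=(\log\vert z \vert, \log \vert w \vert)$, the identity $ \mathcal{A} \circ H_t = \frac{1}{\log(t)} \cdot \mathcal{A}$ follows from the direct computation \[ \log \Big\vert \, \vert z \vert^{\frac{1}{\log(t)}} \frac{z}{\vert z \vert} \, \Big\vert \;=\; \frac{\log \vert z \vert}{\log(t)}, \] and similarly for the second coordinate.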
For the purpose of this paper, we will introduce abstract phase-tropical curves. Such objects have already been considered in the unpublished work \cite{Mikhunpub} and we will see below that there are at least two different ways to think about them. The work \cite{Mikhunpub} gives yet another point of view.
Consider the line $ \mathcal{L} := \left\lbrace (z,w) \in ( \mathbb{C}^\star)^2 \: \big| \: z+w+1=0 \right\rbrace$. The sequence of topological surfaces $\big\lbrace H_t(\mathcal{L}) \big\rbrace_{t >1}$ converges in Hausdorff distance to the phase-tropical line $L$ when $t$ tends to $\infty$. The subset $L\subset ( \mathbb{C}^{\star} )^{2}$ can be described as follows. The amoeba $ \mathcal{A}(L)\subset \mathbb{R}^2$ is the tropical line $\Lambda$ consisting of $3$ half-rays emanating from $(0,0)$ and directed by $(-1,0)$, $(0,-1)$ and $(1,1)$ respectively. If $r$ is the open half-ray directed by $(-1,0)$ (respectively $(0,-1)$, respectively $(1,1)$), the set $ \mathcal{A}^{-1}(r) \cap L$ is given by the intersection of $ \mathcal{A}^{-1}(r)$ with the cylinder $\{z=1\}$ (respectively $\{w=1\}$, respectively $\{z=-w\}$) and is therefore an open annulus. These $3$ annuli are glued to the preimage of $(0,0)\in \mathbb{R}^2$ under the map $ \mathcal{A}_{\vert L}$. This preimage is the closure of the so-called \df{coamoeba} $\Arg(\mathcal{L}) \subset (S^1)^2$ where $\Arg$ is the component-wise argument map $\Arg : ( \mathbb{C}^\star)^2\rightarrow (S^1)^2$. The latter (closed) coamoeba is the topological pair of pants illustrated in Figure \ref{fig:coamoeba}: it consists of the union of two triangles delimited by $3$ geodesics in the argument torus $(S^1)^2$. Therefore, the phase-tropical line $L$ is also a pair of pants, as illustrated in Figure \ref{fig:L0}. We also refer to \cite[Proposition 6.11]{Mikh05}.
\begin{figure}
\caption{The map $ \mathcal{A} : L \rightarrow \Lambda$. The preimage of the vertex of $\Lambda$ in $L$ is pictured in grey.}
\label{fig:L0}
\end{figure}
The phase-tropical line $L$ is globally invariant under complex conjugation, since each $H_t(\mathcal{L})$ is. The \df{fixed locus} $ \mathbb{R} L\subset L$ consists of $3$ arcs passing respectively through one of the $3$ vertices of the coamoeba $ \mathcal{A}^{-1}(0,0)\cap L$ and joining $2$ of the $3$ punctures of $L$. Observe that for any point $p\in \Lambda\setminus (0,0)$, the geodesic $ \mathcal{A}_{\vert L}^{-1}(p) \subset (S^1)^2$ intersects $ \mathbb{R} L$ in exactly two points.
In the present text, a \df{toric transformation} is a map $ B : ( \mathbb{C}^{\star} )^{2} \rightarrow ( \mathbb{C}^{\star} )^{2} $ of the form \[ (z,w) \mapsto \big( z_0 z^{a}w^{b}, w_0 z^{c}w^{d} \big) \] where $(z_0,w_0) \in ( \mathbb{C}^{\star} )^{2}$ and $ \begin{psmallmatrix} a & b \\ c & d \end{psmallmatrix} \in \text{GL}_{2} \big( \mathbb{Z} \big)$. The map $B$ descends to an affine linear transformation on $\mathbb{R}^2$ (respectively on $(S^1)^2$) by composition with the projection $ \mathcal{A}$ (respectively $\Arg$) that we still denote by $B$. A \df{generalised phase-tropical line} in $( \mathbb{C}^{\star} )^{2}$ is the image of $L$ by any toric transformation. For any generalised phase-tropical line $L'=B(L)$, we define $ \mathbb{R} L':=B( \mathbb{R} L)$. Since the $6$ toric transformations mapping $L$ to itself fix $ \mathbb{R} L$ globally, the latter definition does not depend on $B$.
We will now define abstract simple phase-tropical curves. A simple phase-tropical curve will be an abstract topological surface $V$ together with a map $ \mathcal{A}_V : V \rightarrow C$ onto a simple tropical curve $C$. For the construction, we need the following data:
-- For any $v\in V(C)$, choose an injective continuous map $\pi_v: T_v \rightarrow \mathbb{R}^2$ satisfying the integrality and balancing conditions of Definition \ref{tropmorph} and choose also a generalised phase-tropical line $L_v \subset ( \mathbb{C}^{\star} )^{2}$ such that $\pi_v(T_v) \subset \mathcal{A}(L_v)$.
-- For any open, oriented edge $\vec e \in E(C)$ from a vertex $v$ to $v'$, choose a toric transformation $B_{\vec e} : ( \mathbb{C}^{\star} )^{2} \rightarrow ( \mathbb{C}^{\star} )^{2} $ such that $B_{\vec e}$ maps the annulus $ \mathcal{A}^{-1}\big(\pi_v(e)\big)\cap L_v$ to the annulus $ \mathcal{A}^{-1}\big(\pi_{v'}(e)\big)\cap L_{v'}$ and such that $(\pi_{v'})^{-1}\circ B_{\vec e}\circ (\pi_v)_{\vert e}=\id_e$. Define $B_{-\vec e}:= (B_{\vec e})^{-1}$.
Define $V$ to be the quotient of the space \[
\coprod_{v \in V(C)} \big\lbrace (p,q) \in L_v \times T_v \, \big| \, \mathcal{A}(p)= \pi_v(q) \big\rbrace \] under the relation: $(p,q) \sim (p',q')$ $\Leftrightarrow$ $\big(q = q'$, $p\in L_v$, $p'\in L_{v'}$, $\exists \, \vec e \in E(C)$ from $v$ to $v'$ such that $B_{\vec e}(p)=p' \big)$.
Each \df{chart} $\big\lbrace (p,q) \in L_v \times T_v \, \big| \, \mathcal{A}(p)= \pi_v(q) \big\rbrace $ is in natural bijection with the topological pair of pants $ \mathcal{A}^{-1}\big(\pi_v(T_v)\big)\cap L_v$ and is equipped with the projection $ \mathcal{A}_V:= (\pi_{v})^{-1}\circ \mathcal{A}$ onto $T_v$. It is an easy exercise to see that the above equivalence relation is compatible with the projections $ \mathcal{A}_V$ defined on every chart. We then obtain a surjective map $ \mathcal{A}_V: V\rightarrow C$ modelled on $ \mathcal{A}:L_v \rightarrow \mathcal{A}(L_v)$ in a neighbourhood of each vertex $v\in V(C)$. In particular, the surface $V$ is a topological surface of genus $g=b_1(C)$ with $n$ punctures, where $n$ is the number of leaves of $C$.
\begin{figure}
\caption{The coamoeba $ \Arg( \mathcal{L})$ (dark grey) and its 3 boundary geodesics (blue) in the fundamental domain $(-1,1]^2$ of $(S^1)^2$.}
\label{fig:coamoeba}
\end{figure}
\begin{definition}\label{def:phasetrop} A \df{simple phase-tropical curve} is a topological surface $V$ together with a map $ \mathcal{A}_V:V\rightarrow C$ onto a simple tropical curve $C$ constructed from the data $\left\lbrace (\pi_v, L_v)\right\rbrace_{v\in V(C)}$, $\left\lbrace B_{\vec e} \right\rbrace_{e\in E(C)}$ as above. \end{definition}
\begin{remark} For a family of algebraic curves $\left\lbrace \mathcal{C}_t \right\rbrace_{t>1} \subset (\mathbb{C}^\star)^2$ converging to a simple tropical curve $C\subset \mathbb{R}^2$ in the sense of \eqref{eq:conv}, the set $V:=\lim_{t\rightarrow \infty} H_t( \mathcal{C}_t) \subset (\mathbb{C}^\star)^2$ is a simple phase-tropical curve in the sense of Definition \ref{def:phasetrop}. The map $ \mathcal{A}_V$ is given by $ \mathcal{A}$, the maps $\pi_v$ are given by restriction of the embedding $C\hookrightarrow \mathbb{R}^2$ and the maps $B_{\vec e}$ are the identity. \end{remark}
It is quite clear that any embedded phase-tropical curve $V\subset ( \mathbb{C}^{\star} )^{2}$ can be constructed from different data $\left\lbrace (\pi_v, L_v)\right\rbrace$, $\left\lbrace B_{\vec e} \right\rbrace$. Also, it seems natural to consider the phase-tropical curves $V\subset ( \mathbb{C}^{\star} )^{2}$ and $B(V)$ as isomorphic for any toric transformation $B:( \mathbb{C}^{\star} )^{2} \rightarrow ( \mathbb{C}^{\star} )^{2}$.
\begin{definition}\label{def:isopt} Two simple phase-tropical curves $ \mathcal{A}_V:V\rightarrow C$ and $ \mathcal{A}_{V'}:V'\rightarrow C'$ constructed respectively from the data $\left\lbrace (\pi_v, L_v)\right\rbrace$, $\left\lbrace B_{\vec e} \right\rbrace$ and $\left\lbrace (\pi'_v, L'_v)\right\rbrace$, $\left\lbrace B'_{\vec e} \right\rbrace$ are \df{isomorphic} if the following conditions are satisfied:
-- There exists an isomorphism $h:C\rightarrow C'$.
-- For any $v\in V(C)$, we denote by $B_v:( \mathbb{C}^{\star} )^{2} \rightarrow ( \mathbb{C}^{\star} )^{2}$ the unique toric transformation such that $\pi'_{h(v)}=B_v \circ \pi_v$ and $B_v(L_v)=L'_{h(v)}$.
-- For any $\vec e\in E(C)$ from $v$ to $v'$, the restriction of $B'_{h(\vec e)}$ to $L'_{h(v)}$ is given by $B_{v'}\circ B_{\vec e} \circ (B_v)^{-1}$. \end{definition}
Let us now show that simple phase-tropical curves can be described in a way that is similar to the description of Riemann surfaces using Fenchel-Nielsen coordinates.
Let $ \mathcal{A}_V:V\rightarrow C$ be a simple phase-tropical curve. The underlying tropical curve $C$ can be described as a pair $(G, \ell)$ of a cubic graph $G$ equipped with a length function $\ell$, see Remark \ref{rem:tropGL}. The collection of maps $\pi_v$ of Definition \ref{def:phasetrop} induces a ribbon structure $ \mathscr{R}$ on $G$ by pulling back the cyclic ordering of the leaves/edges of $\pi_v(T_v)$ given by the counter-clockwise orientation of $ \mathbb{R}^2$. In order to define the twists along the edges of $G$, consider the following construction.
Choose an edge $e\in E(G)$ with adjacent vertices $v$ and $v'$. For any point $p \in e$, the chart $ \big\lbrace (p,q) \in L_v \times T_v \, \big| \, \mathcal{A}(p)= \pi_v(q) \big\rbrace$ allows us to identify $ \mathcal{A}_V^{-1}(p)$ with $S^1$ as follows. In the latter chart, the set $ \mathcal{A}_V^{-1}(p)$ is a geodesic in the argument torus $(S^1)^2\subset ( \mathbb{C}^{\star} )^{2}$. In order to identify $ \mathcal{A}_V^{-1}(p)$ with $S^1$, we need an orientation and the choice of an origin on the geodesic $ \mathcal{A}_V^{-1}(p)\subset (S^1)^2$. We choose the orientation whose direct normal vector $\vec{n}$ is such that $T \mathcal{A}_V (\vec{n})$ is pointing towards $v$. For the origin, recall that there are exactly two points of $ \mathbb{R} L_v$ on the geodesic $ \mathcal{A}_V^{-1}(p)$. These two points belong to different components of $ \mathbb{R} L_v$. The components of $ \mathbb{R} L_v$ correspond to pairs of leaves of $ \mathcal{A}(L_v)$. Leaves are cyclically ordered by the counter-clockwise orientation of $ \mathbb{R}^2$ and so are the pairs of them. In turn, the two points of $ \mathcal{A}_V^{-1}(p)\cap \mathbb{R} L_v$ are ordered. We choose the first of these two points as the origin of $ \mathcal{A}_V^{-1}(p)$. For instance, assume that $L_v=L$ and $p$ sits on the leaf of direction $(1,1)$ of $\Lambda$. Then $ \mathcal{A}_V^{-1}(p)$ is the geodesic $\left\lbrace \arg(z)=-\arg(w)\right\rbrace$. The orientation chosen above is given by the vector $(-1,-1)$ and the origin is the point $(-1,1)\in (S^1)^2$.
The same construction can be repeated on the chart $ \big\lbrace (p,q) \in L_{v'} \times T_{v'} \, \big| \, \mathcal{A}(p)= \pi_{v'} (q) \big\rbrace$ so that we get two identifications $\iota_v, \, \iota_{v'} : \mathcal{A}_V^{-1}(p)\rightarrow S^1$ satisfying \[ \begin{array}{rcl} \iota_{v'} \circ \iota_v^{-1} : S^1 & \rightarrow & S^1\\ z & \mapsto & -\overline{\theta(e)z} \end{array}\] for some $\theta(e)\in S^1$. The latter map is an involution. In particular, the element $\theta(e)$ does not depend on the ordering of $v$ and $v'$. Observe also that $\theta(e)$ does not depend on the choice of the point $p \in e$. We then obtain a map $\theta: E(G)\rightarrow S^1$.
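Writing $f(z):= -\overline{\theta(e)z}$ for the map $\iota_{v'} \circ \iota_v^{-1}$ above, the involution property is the computation \[ f\circ f(z) \;=\; -\,\overline{\theta(e)}\,\overline{f(z)} \;=\; -\,\overline{\theta(e)}\,\big( -\theta(e)\, z \big) \;=\; \vert \theta(e) \vert^2\, z \;=\; z, \] since $\theta(e)\in S^1$.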
\begin{proposition}\label{prop:ptFN} The map $V\mapsto (G, \mathscr{R}, \ell, \theta)$ constructed above establishes a bijective correspondence between isomorphism classes of simple phase-tropical curves and quadruples $(G, \mathscr{R}, \ell, \theta)$ up to the equivalence relation generated by the following:
-- $(G, \mathscr{R}, \ell, \theta)\sim (G', \mathscr{R}', \ell', \theta')$ if there exists an isomorphism of graphs $h:G\rightarrow G'$ such that $ \mathscr{R}$, $\ell$ and $\theta$ are the respective pullbacks of $ \mathscr{R}'$, $\ell'$ and $\theta'$ by $h$.
-- $(G, \mathscr{R}, \ell, \theta)\sim (G, \mathscr{R}', \ell, \theta')$ if $ \mathscr{R}= \mathscr{R}'$ at all vertices except at some vertex $v$ and $\theta=\theta'$ at all edges except at the $3$ edges adjacent to $v$, where $\theta=-\theta'$. \end{proposition}
\begin{proof} Let us first show that the map $V\mapsto (G, \mathscr{R}, \ell, \theta)$ is surjective. Indeed, the pair $(G,\ell)$ is the simple tropical curve supporting $V$ and can therefore be chosen arbitrarily. For the coordinate $ \mathscr{R}$, consider the following construction. For $V$ given by $\left\lbrace (\pi_v, L_v)\right\rbrace$, $\left\lbrace B_{\vec e} \right\rbrace$, a vertex $v_0\in V(C)$ and a toric transformation $B_0$, consider the phase-tropical curve $V'$ given by $\left\lbrace (\pi'_v, L'_v)\right\rbrace$, $\left\lbrace B'_{\vec e} \right\rbrace$ such that: for any vertex $v\neq v_0$, we set $(\pi'_v, L'_v)=(\pi_v, L_v)$, while $(\pi'_{v_0}, L'_{v_0})=(B_0\circ\pi_{v_0}, B_0( L_{v_0}))$; for any edge $e$ not adjacent to $v_0$, we set $B'_{\vec e}=B_{\vec e}$, while $B'_{\vec e}=B_{\vec e}\circ B_0^{-1}$ for any edge $\vec e$ adjacent to $v_0$ and oriented outwards from $v_0$. The resulting phase-tropical curve $V'$ is isomorphic to $V$ by construction. If $B_0: \mathbb{R}^2\rightarrow \mathbb{R}^2$ is orientation preserving, then $V'$ maps to the same quadruple $(G, \mathscr{R}, \ell, \theta)$ as $V$. Otherwise, the curve $V'$ maps to $(G, \mathscr{R}', \ell, \theta')$ where $( \mathscr{R}',\theta')$ is related to $( \mathscr{R},\theta)$ by the second relation of the statement. This implies in particular that the map $V\mapsto (G, \mathscr{R}, \ell)$ is surjective. For the remaining coordinate $\theta$, consider the following construction. Start again from $V$ given by $\left\lbrace (\pi_v, L_v)\right\rbrace$, $\left\lbrace B_{\vec e} \right\rbrace$. Choose a vertex $v_0\in V(C)$, an edge $\vec e_0$ adjacent to $v_0$ and oriented towards $v_0$, and a toric transformation $B_0(z,w)=(e^{i a \nu}z,e^{i b \nu}w)$ such that $(a,b)\in \mathbb{Z}^2$ is the slope of $\pi_{v_0}(\vec e_0)$. 
Consider now the phase-tropical curve $V'$ given by $\left\lbrace (\pi'_v, L'_v)\right\rbrace$, $\left\lbrace B'_{\vec e} \right\rbrace$ such that: for any vertex $v\neq v_0$, we set $(\pi'_v, L'_v)=(\pi_v, L_v)$, while $(\pi'_{v_0}, L'_{v_0})=(B_0\circ\pi_{v_0}, B_0( L_{v_0}))$; for any edge $e$ neither adjacent to $v_0$ nor equal to $e_0$, we set $B'_{\vec e}=B_{\vec e}$, while $B'_{\vec e}=B_{\vec e}\circ B_0^{-1}$ for any edge $\vec e\neq \vec e_0$ adjacent to $v_0$ and oriented outwards from $v_0$. Since $B_0$ fixes globally the annulus $ \mathcal{A}^{-1}\big(\pi_{v_0}(e_0)\big)\cap L_{v_0}$, the data $\left\lbrace (\pi'_v, L'_v)\right\rbrace$, $\left\lbrace B'_{\vec e} \right\rbrace$ define a phase-tropical curve $V'$ with twist function $\theta'$ such that $\theta'=\theta$ for all edges different from $e_0$ and $\theta'(e_0)=\theta(e_0)\cdot e^{i\nu}$. This proves the surjectivity of the map $V\mapsto (G, \mathscr{R}, \ell, \theta)$.
It remains to show that $V$ is isomorphic to $V'$ if and only if the corresponding data $(G, \mathscr{R}, \ell, \theta)$ are equivalent to each other. To see this, observe that any isomorphism $V\rightarrow V'$ can be decomposed as the composition of an isomorphism of the underlying graph $G$ (taken care of by the first relation of the statement) and a composition of isomorphisms as constructed above for the surjectivity on the coordinate $ \mathscr{R}$ (taken care of by the second relation of the statement). Hence, isomorphic phase-tropical curves are mapped to the same equivalence class. It remains to see that non-isomorphic curves $V$ and $V'$ are mapped to different equivalence classes of quadruples. If $V$ and $V'$ do not have isomorphic underlying tropical curves, then they are clearly mapped to different classes. If they do, we can assume with no loss of generality that $V$ and $V'$ are constructed from the same data $\left\lbrace (\pi_v, L_v)\right\rbrace$ by applying an isomorphism to $V'$. In particular, $V$ and $V'$ map to the same triple $(G, \mathscr{R},\ell)$. From there, it is an easy exercise to see that $V$ and $V'$ have necessarily different twist functions. \end{proof}
We denote by $V(G,\ell,\theta)$ the (isomorphism class of) simple phase-tropical curve given by the quadruple $(G, \mathscr{R}, \ell, \theta)$.
\section{Approximation of harmonic tropical curves}
\subsection{Convergence of imaginary normalised differentials}\label{sec:convind}
The purpose of this section is to study the limits of imaginary normalised differentials (i.n.d.) on families of algebraic curves that converge in the sense of Definition \ref{tropconv}. In general, it is complicated to determine the limits of linear systems on families of algebraic curves, as addressed for instance in \cite[\S 5]{GK} and the references therein.
Recall that a vector of residues $R=(r_1,\dots,r_n)$ and a family $ \left\lbrace S_t \right\rbrace_{t>1} \subset M_{g,n}$ give rise to the family $ \left\lbrace ( S_t , \omega_{R}) \right\rbrace_{t>1}\subset \Lambda^{\text{twist}}_{g,n}$, see Definition \ref{defAr}. To any simple tropical curve $C$, we associate the unique stable curve $S(C) \in \overline{M}_{g,n}$ whose dual graph is the underlying cubic graph of $C$. Then, any $ \textsc{w} \in \Omega (C)$ induces a generalised meromorphic differential on $S(C)$ as follows. Since each irreducible component of $S(C)$ is a sphere with $3$ marked points, it suffices to prescribe the residues of the restriction of the differential to each sphere. For a vertex $v\in V(C)$ and any adjacent $\vec{e}\in LE(C)$ oriented toward $v$, prescribe the residue at the marked point dual to $e$ on the component of $S(C)$ dual to $v$ to be $ \textsc{w}(\vec{e})$. We still denote by $ \textsc{w}$ the induced generalised meromorphic differential on $S(C)$ so that the pair $(S(C), \textsc{w})$ is a point in $\Lambda^{\text{twist}}_{g,n}$. Conversely, observe that any generalised meromorphic differential on $S(C)$ comes from a 1-form $ \textsc{w}\in\Omega(C)$.
The main result of this section is the following.
\begin{mytheorem}\label{convdif} Let $\left\lbrace S_t \right\rbrace_{t >1} \subset M_{g,n} $ be a family converging to a simple tropical curve $C$. Then, for any collection of residues $R=(r_1,\dots,r_n)$, the family $\left\lbrace (S_t, \omega_{R}) \right\rbrace_{t >1} \subset \Lambda^{\text{twist}}_{g,n}$ converges to the point $\big(S(C), \textsc{w}_{R}\big)$ defined above, where $ \textsc{w}_R\in\Omega(C)$ is the 1-form of Definition \ref{defPir}. \end{mytheorem}
While proving Theorem \ref{convdif}, we will need some terminology. Recall from Definition \ref{tropconv} that $S_t := \Upsilon_G(\ell_t, \theta_t)$ comes with a decomposition into generalised pairs of pants $\left\lbrace Y_{v,t} \right\rbrace_{v\in V(G)}$ bounded by geodesics $\left\lbrace \gamma_{e,t} \right\rbrace_{e\in E(G)}$.
\begin{definition}\label{assloop} Let $\left\lbrace S_t \right\rbrace_{t >1} \subset M_{g,n}$ be a family converging to a simple tropical curve $C$ and $\rho=\left\lbrace \vec{e}_1,\dots, \vec{e}_k \right\rbrace$ be any loop in $C$ (see Definition \ref{pathloop}). Define $\rho_t \subset S_t$ to be the piecewise-geodesic oriented loop satisfying:
-- $\rho_t \subset \cup_{1\leqslant j \leqslant k} Y_{v_j,t}$ where $v_j$ is the initial vertex of the oriented edge $\vec{e}_j$.
-- $\rho_t \cap \itr\big(Y_{v_j,t}\big)$ is the unique component of $ \mathbb{R} Y_{v_j,t}$ joining $\gamma_{e_{j-1},t}$ to $\gamma_{e_{j},t}$ and oriented towards the latter (take $e_0:=e_k$).
-- $\rho_t \cap \gamma_{e_j,t}$ is either a point or an oriented arc strictly contained in $\gamma_{e_j,t}$ whose direct normal vector points towards $Y_{v_j,t}$. \end{definition}
\begin{remark}\label{rem:pathconv} The family of loops $\rho_t \subset S_t$ in the above definition converges to a loop $\rho_\infty$ in the limiting stable curve $S(C)$. This follows from the fact that the fixed locus $ \mathbb{R} Y_{v_j,t}$ converges to the real part of the corresponding irreducible component of $S(C)$. Recall that such a component is bi-holomorphic to $ \mathbb{C} P^1$ with marked points $-1$, $1$ and $\infty$ and with real part $ \mathbb{R} P^1$. \end{remark}
\begin{proof}[Proof of Theorem \ref{convdif}.]
Assume first that the sequence $\big\lbrace \int_{\gamma_{e,t}} \omega_{R} \big\rbrace_{t>1}$ converges for any $e\in E(G)$ when $t$ tends to $\infty$. According to Section \ref{sec:thb}, the latter is equivalent to the convergence of the sequence $\left\lbrace (S_t, \omega_{R}) \right\rbrace_{t>1}$ in $ \Lambda^{\text{twist}}_{g,n}$. In particular, denoting the limiting differential by $ \textsc{w}$, its restriction to any irreducible component of $S(C)$ is a meromorphic differential whose set of poles is a subset of the $3$ marked points of the component. Let us show that the limit is $\big(S(C), \textsc{w}_{R}\big)$.
As a direct consequence of Definition \ref{tropconv}, the family $\left\lbrace S_t \right\rbrace_{t>1}$ converges to the stable curve $S(C)$. Now, fix a loop $\rho=\left\lbrace \vec{e}_1,\dots, \vec{e}_k \right\rbrace$ in $C$ and consider the corresponding loop $\rho_t \subset S_t$ of Definition \ref{assloop}. For simplicity, denote $\gamma_{j,t}:=\gamma_{e_j,t}$ and $K_{j,t}\subset S_t$ the collar around $\gamma_{j,t}$. Orient $\gamma_{j,t}$ so that the direct normal vector points toward $Y_{v_j,t}$ where $v_j$ is the initial vertex of the oriented edge $\vec{e}_j$. Then, there exists a unique bi-holomorphism $\psi_{j,t} : K_{j,t} \rightarrow A_{m(j,t)}$ such that:
-- $\psi_{j,t} \big( \rho_{t} \cap K_{j,t} \big) $ is a transversal path in $ A_{m(j,t)}$,
-- $\psi_{j,t} \big( \rho_{t} \cap K_{j,t} \big) \cap \left\lbrace z \in A_{m(j,t)} \, \big| \: Re(z) =0 \right\rbrace = \left\lbrace 0 \right\rbrace$.\\ We claim that the family $(\psi_{j,t})_\ast \, \omega_{R}$ defined respectively on $ A_{m(j,t)}$ converges in the sense of Definition \ref{deflocconv}. Indeed, the family of collars $K_{j,t}$ converges to a neighbourhood $K_{j,\infty}$ of the node $q_j\in S(C)$ given by the vanishing cycle $\gamma_{j,t}$. The neighbourhood $K_{j,\infty}$ is actually the union of the two cusps at $q_j$ according to Lemma \ref{propcolcusp}. In particular, the family $\left\lbrace K_{j,t}\right\rbrace_{1<t\leqslant\infty}$ is just another model for $\left\lbrace C_{\lambda} \right\rbrace_{0\leqslant \lambda<\lambda_0}$ for some $\lambda_0\leqslant 1$. Since $\left\lbrace (S_t, \omega_{R}) \right\rbrace_{t>1}$ converges in $\Lambda^{\text{twist}}_{g,n}$ by assumption, the restriction of $\omega_R$ to $K_{j,t}$ converges in the sense of Definition \ref{deflocconv}. The claim follows.
We now claim that the integral of $\omega_R$ on $\rho_t \setminus \big( \bigcup_{1\leqslant j \leqslant k} \, K_{j,t} \big)$ converges when $t$ tends to $\infty$ and is therefore a bounded function of $t$. Indeed, the family of loops $\rho_t$ converges to a loop $\rho_\infty \subset S(C)$ passing through the nodes of $S(C)$ dual to the edges $e_1,\dots,e_k$, see Remark \ref{rem:pathconv}. The latter nodes are the only possible poles of the limiting differential $ \textsc{w}$ lying on $\rho_\infty$. The collars $K_{j,t}$ converge to neighbourhoods $K_{j,\infty}$ of the latter nodes, see Lemma \ref{propcolcusp}. It follows that \[ \lim_{t\rightarrow \infty} \; \textstyle \int_{\rho_t \setminus \big( \bigcup_{1\leqslant j \leqslant k} \, K_{j,t} \big)} \omega_R \; = \; \int_{\rho_\infty \setminus \big( \bigcup_{1\leqslant j \leqslant k} \, K_{j,\infty} \big)} \textsc{w} \; < \; \infty \] and the claim is proven. It follows in turn that \[ \int_{\rho_{t}} \omega_{R} \; = \; \sum_{j=1}^{k} \textstyle \big( \int_{\rho_{t} \cap K_{j,t}} \omega_{R} \big)\; + \; O(1).\] Applying Proposition \ref{convdifloc} to the restriction of $\omega_R$ to each $K_{j,t}$ (with $\alpha=1$ and $l=\ell(e_j)$), we deduce from the above formula that \begin{equation}\label{lim} \lim_{t \rightarrow \infty} \; \frac{1}{\log(t)} \int_{\rho_t} \omega_{R} \; = \; \sum_{j=1}^{k} \ell(e_{j}) \Lambda_{j} \end{equation} where $\Lambda_{j} := \displaystyle \lim_{t \rightarrow \infty} \; \textstyle \frac{1}{2\pi i} \int_{\gamma_{j,t}} \omega_{R}$. Since $\omega_{R}$ is an i.n.d. on any $S_t$, the integrals $\int_{\gamma_{j,t}} \omega_{R}$ and $\int_{\rho_t} \omega_R$ are purely imaginary, for any $t$ and $j$. In particular, we have that $ \Lambda_{j} \in \mathbb{R} $ for any $j$. Taking the real part on both sides of \eqref{lim}, we obtain \begin{equation}\label{lim2} \sum_{j=1}^{k} \ell(e_{j}) \Lambda_{j} = 0. 
\end{equation} Since the above formula holds for any loop $\rho=\left\lbrace \vec{e}_1,\dots, \vec{e}_k \right\rbrace$, it follows that the limiting differential $ \textsc{w}$ comes from an exact 1-form on $C$ with residue vector $R$. According to Proposition \ref{tdiff}, the differential $ \textsc{w}$ on $S(C)$ necessarily comes from $ \textsc{w}_{R}\in \Omega(C)$. This proves the theorem in the case when $\big\lbrace \int_{\gamma_{e,t}} \omega_{R} \big\rbrace_{t>1}$ converges for any $e\in E(C)$.
Assume now that the sequence $\big\lbrace \int_{\gamma_{e,t}} \omega_{R} \big\rbrace_{t>1}$ is bounded uniformly in $t$ for any $e\in E(C)$. For any discrete infinite subset $I \subset (1,\infty)$ such that $\big\lbrace \int_{\gamma_{e,t}} \omega_{R} \big\rbrace_{t\in I}$ converges for any $e\in E(C)$, it follows from the previous case that the family $\left\lbrace (S_t, \omega_{R}) \right\rbrace_{t \in I} \subset \Lambda^{\text{twist}}_{g,n}$ converges to the point $\big(S(C), \textsc{w}_{R}\big)$. As any converging subsequence converges to the same limit, the original sequence $\left\lbrace (S_t, \omega_{R}) \right\rbrace_{t >1} \subset \Lambda^{\text{twist}}_{g,n}$ converges to $\big(S(C), \textsc{w}_{R}\big)$.
Assume finally that the sequence $\big\lbrace \int_{\gamma_{e,t}} \omega_{R} \big\rbrace_{t>1}$ is unbounded for at least one edge $e\in E(C)$. Define $M_t:= \max \big\lbrace 1, \max \big\lbrace \vert \int_{\gamma_{e,t}} \omega_{R} \vert \big\rbrace_{e\in E(C)} \big\rbrace$. Then, there exists a discrete infinite subset $I \subset (1,\infty)$ such that $M_t$ tends to $\infty$ along $I$. In particular, the sequences $\big\lbrace \int_{\gamma_{e,t}} \widetilde \omega_{R} \big\rbrace_{t\in I}$, where $\widetilde \omega_{R}:= \omega_{R}/M_t$, are bounded for any $e\in E(C)$. Extracting a subsequence if necessary, we can therefore assume that $\big\lbrace \int_{\gamma_{e,t}} \widetilde \omega_{R} \big\rbrace_{t\in I}$ converges for any $e\in E(C)$. Consequently, the family $\left\lbrace (S_t, \widetilde \omega_{R}) \right\rbrace_{t\in I}$ converges to a point $\big(S(C), \widetilde \textsc{w}\big)\in \Lambda^{\text{twist}}_{g,n}$. Then, we can apply the same reasoning as in the first case to show that $\widetilde \textsc{w}$, seen as a 1-form on $C$, is exact. By definition of $M_t$, the 1-form $\widetilde \textsc{w} \in \Omega_0(C)$ is non-zero, and it has no residues at the leaves of $C$ since $\widetilde \omega_{R}$ has residues $R/M_t$ and $M_t$ tends to $\infty$ along $I$. In other words, the 1-form $\widetilde \textsc{w}$ is both exact and holomorphic. It follows from Proposition \ref{tdiff}-$b)$ that $\widetilde \textsc{w}$ is zero. This is a contradiction. We deduce that all the sequences $\big\lbrace \int_{\gamma_{e,t}} \omega_{R} \big\rbrace_{t>1}$ are necessarily bounded and the theorem follows from the two previous cases. \end{proof}
\subsection{Convergence of periods}\label{sec:cop}
The refined notion of phase-tropical convergence allows us to obtain a finer result than Theorem \ref{convdif}.
\begin{mytheorem}\label{thm:convperiod} Let $\left\lbrace S_t \right\rbrace_{t >1} \subset M_{g,n} $ be a family converging to a simple phase-tropical curve $V(G,\ell,\theta)$ and $R=(r_1,\dots,r_n)$ be a collection of residues.
For any loop $\rho=\left\lbrace\vec{e}_1,\dots,\vec{e}_k \right\rbrace$ in the simple tropical curve $C:=(G,\ell)$ and $\rho_t$ the associated loop in $S_t$, see Definition \ref{assloop}, we have \[ \lim_{t \rightarrow \infty} \; \int_{\rho_t} \omega_{R}^{S_t} \; = \; \sum_{j=1}^k \; \log \big( \theta (e_j) \big) \cdot \textsc{w}_{R}^{C} \big( \vec{e}_j \big)\] where the branch of $\log$ is chosen such that $\log : S^1 \rightarrow \left[ 0, 2\pi i \right) \subset \mathbb{C}$. \end{mytheorem}
In order to prove Theorem \ref{thm:convperiod}, we will study imaginary normalised differentials on $S \in M_{0,3}$. Let $\omega$ be such a differential on $S$. We are interested in paths $\rho\subset S$ for which the restriction to $\rho$ of the real-valued differential $\Im(\omega)$ is identically zero. We denote by $h$ the hyperbolic metric on $S$ and define $\Im(\omega)^\vee$ (respectively $\Re(\omega)^\vee$) to be the vector field dual to $\Im(\omega)$ (respectively $\Re(\omega)$) with respect to $h$. The vector field $\Im(\omega)^\vee$ is obtained from $\Re(\omega)^\vee$ by a rotation with angle $\pi/2$. As a consequence, a path $\rho\subset S$ is such that $ \Im(\omega)_{\vert \rho} \equiv 0$ if and only if $\rho$ is parallel to the vector field $\Re(\omega)^\vee$.
We identify $ S \simeq \mathbb{C} P^1 \setminus \left\lbrace -1, 1, \infty \right\rbrace$ with affine coordinate $z$ and assume for a moment that none of the residues of $\omega$ is zero. If necessary, we replace $\omega$ with $-\omega$ and apply an automorphism of $S$ exchanging the punctures so that \begin{equation}\label{omega} \omega = \left( \frac{\lambda_{+}}{z-1} + \frac{\lambda_{-}}{z+1}\right) dz \end{equation} with $\lambda_-, \lambda_+ >0$. We denote by $\widetilde S$ the real oriented blow-up of $S$ at $-1$, $1$ and $\infty$ and denote by $\gamma_-$, $\gamma_+$ and $\gamma_\infty$ the respective boundary components of $\widetilde S$. The vector field $\Re(\omega)^\vee$ does not extend to the boundary of $\widetilde S$ as its modulus gets arbitrarily large. However, its asymptotic direction is well-defined. Below, we denote by $ \mathbb{R} S$ the fixed locus of the unique anti-holomorphic involution on $S$ fixing the punctures and denote by $ \mathbb{R} \widetilde S$ its lift to $\widetilde S$.
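Note in passing (a direct computation from \eqref{omega}, recorded here as a sanity check) that the residues of $\omega$ are \[ \operatorname{Res}_{z=1} \omega \; = \; \lambda_{+}, \qquad \operatorname{Res}_{z=-1} \omega \; = \; \lambda_{-}, \qquad \operatorname{Res}_{z=\infty} \omega \; = \; -(\lambda_{+}+\lambda_{-}), \] in accordance with the residue theorem. The signs of the residues already suggest the phase portrait established below: $\Re(\omega)^\vee$ flows out of the punctures $\pm 1$ and towards $\infty$.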
\begin{lemma} For any i.n.d. $\omega$ on $S$ as in \eqref{omega}, we have the following:
-- The 3 connected components of $\mathbb{R} S$ are parallel to $\Re(\omega)^\vee$.
-- $\Re(\omega)^\vee$ is asymptotically orthogonal to $\gamma_{-}$, $\gamma_+$ and $\gamma_\infty$. It points into $\widetilde S$ at $\gamma_{-}$ and $\gamma_+$ and out of $\widetilde S$ at $\gamma_\infty$.
-- For any point $p$ in $\gamma_{-}$ or $\gamma_+$ away from $\mathbb{R} \widetilde S$, the flow line of $\Re(\omega)^\vee$ starting at $p$ ends in $\gamma_\infty$.
-- The segment $[-1,1]$ is the only arc in $\widetilde S$ joining $\gamma_{-}$ to $\gamma_+$ that is parallel to $\Re(\omega)^\vee$. \end{lemma}
\begin{figure}
\caption{Phase portrait of $\Re(\omega)^\vee$ on $\widetilde S$.}
\label{fig:phpt}
\end{figure}
\begin{proof} For the first part, write $z=x+iy$. Then, for any $z\in \mathbb{R} S$, that is $z=x$, we have that $\Re(\omega)= \left( \frac{\lambda_{+}}{x-1} + \frac{\lambda_{-}}{x+1}\right) dx$ at $z$. The dual to $dx$ with respect to the metric $h$ is a multiple of $\partial x$ since $h$ is conformally equivalent to the Euclidean metric. It follows that $\mathbb{R} S$ is parallel to $\Re(\omega)^\vee$.
For the second part, we compute the asymptotic of $\Re(\omega)$ when $z$ tends to $1$ (the computations for $-1$ and $\infty$ are similar). We have that \[ \begin{array}{rcl} \Re(\omega) &\underset{z\rightarrow 1}{\sim} & \displaystyle \Re\left( \frac{\lambda_{+}}{z-1} dz\right) = \Re\left( \frac{\lambda_{+}}{z-1} \right)dx - \Im \left( \frac{\lambda_{+}}{z-1} \right) dy \\ & & \\ & = & \displaystyle \frac{\lambda_+}{\vert z-1\vert^2}\big( \Re(\bar{z}-1) dx - \Im(\bar{z}-1) dy\big) = \frac{\lambda_+}{t\big(\tilde x^2+ \tilde y ^2\big)}\big( \tilde x \, dx+\tilde y\,dy\big) \end{array} \] where we wrote $z=1+t(\tilde x +i \tilde y)$. It follows that if $z$ tends to $1$ with asymptotic direction $(\tilde x, \tilde y)$, the vector field $\Re(\omega)^\vee$ is asymptotically a positive multiple of $\tilde x \, \partial x+\tilde y \,\partial y$. The second part is proven.
For the last two parts, observe that $\Re(\omega)^\vee$ has a unique zero at the point $ \zeta := \frac{\lambda_+ - \lambda_{-}}{\lambda_+ + \lambda_{-}} \in (-1,1)$. Indeed, the point $\zeta$ is the unique zero of $\omega$ and hence a zero of $\Re(\omega)$. Conversely, a zero of $\Re(\omega)$ is also a zero of $\Im(\omega)$ since the two vector fields are obtained from each other by a rotation. Hence, a zero of $\Re(\omega)$ is also a zero of $\omega$. Now, the vector field $\Re(\omega)^\vee$ is the gradient vector field of the harmonic function $z\mapsto \int^z \Re(\omega)$ with critical point at $\zeta$. By the maximum principle and the Morse Lemma, the gradient field of the latter function is the gradient field of the function $(x,y)\mapsto x^2-y^2$ in suitable local coordinates centred at $\zeta$. It follows that there are only $2$ incoming flow lines of $\Re(\omega)^\vee$ at $\zeta$ and $2$ outgoing flow lines; any other flow line does not meet $\zeta$. We deduce from the previous points that the incoming flow lines are the segments $(-1, \zeta)$ and $(\zeta,1)$. This implies that any flow line starting at a point in $\big(\gamma_+ \cup \gamma_-\big) \setminus \mathbb{R} \widetilde S$ avoids the only singular point $\zeta$ of $\Re(\omega)^\vee$ and therefore ends at the boundary of $\widetilde S$. The remaining parts of the lemma now follow from the first two parts. \end{proof}
\begin{lemma} Let $\left\lbrace (Y_t,\omega_t)\right\rbrace_{t >1}$ be a family of pairs of pants $Y_t$ equipped with a meromorphic differential $\omega_t$ whose only poles are at the possible punctures of $Y_t$. Assume that $Y_t$ converges to $S$ and that $\omega_t$ converges to an i.n.d. $ \textsc{w}$ on $S$. For any connected component $\rho_t \subset Y_t$ of $\mathbb{R} Y_t$, we have \[ \lim_{t \rightarrow \infty} \int_{\rho_t} \Im(\omega_t) = 0. \] \end{lemma}
\begin{proof} Assume first that the differential $ \textsc{w}$ is as in \eqref{omega}. For any $t>1$, there exist $r_{t,-}, \, r_{t,+}, \, r_{t, \infty} >0$ such that \begin{equation}\label{eq:model}
Y_t \simeq \left\lbrace z \in \mathbb{C} \; \big| \; \vert z \vert \leqslant r_{t,\infty}, \; \vert z-1 \vert \geqslant r_{t,+}, \; \vert z+1 \vert \geqslant r_{t,-} \right\rbrace \end{equation} with boundary components $\gamma_{t,-}$, $\gamma_{t,+}$, and $\gamma_{t,\infty}$. Let us prove the lemma for $\rho_t = [r_{t,-}, r_{t,+}]$ (the two remaining cases are similar). According to the previous lemma, there exists $N>0$ such that for almost all $t>N$, we have
-- $\int_{\gamma_{t,-}} \Im(\omega_t) >0$ and $\int_{\gamma_{t,+}} \Im(\omega_t) >0$,
-- $\Re(\omega_t)^\vee$ is everywhere transversal to $\gamma_{t,-}$, $\gamma_{t,+}$ and $\gamma_{t,\infty}$,
-- the differential $\omega_t$ has a single zero on $Y_t$.\\ Applying the same reasoning as in the proof of the previous lemma, we deduce the existence of $M>N$ such that for almost all $t>M$, there exists a unique arc $\breve{\rho}_t \subset Y_t$ parallel to $\Re(\omega_t)^\vee$ and joining $\gamma_{t,-}$ to $\gamma_{t,+}$. Necessarily, the arc $\breve{\rho}_t$ is in an $\varepsilon_t$-neighbourhood of $\rho_t$ with $\displaystyle \lim_{t\rightarrow \infty} \varepsilon_t=0$. The segment $\rho_t$ is homotopic to $\bar{\rho}_t:= \rho_{t,+} \circ \breve{\rho}_t \circ \rho_{t,-} $ where $ \rho_{t,-} $ is an arc in $\gamma_{t,-}$ and $ \rho_{t,+} $ is an arc in $\gamma_{t,+}$. As $\breve{\rho}_t$ converges to $\rho_t$, the arcs $\rho_{t,-}$ and $\rho_{t,+}$ eventually shrink to a point. It follows that \[\lim_{t \rightarrow \infty} \int_{\rho_t} \Im(\omega_t) \;=\; \lim_{t \rightarrow \infty} \int_{\bar \rho_t} \Im(\omega_t) \;=\; \lim_{t \rightarrow \infty} \int_{\breve{\rho}_t} \Im(\omega_t) \;=\; 0 \] and the lemma is proven in this case.
For a general sequence $\left\lbrace \omega_t \right\rbrace_{t >1}$, we can consider the sequence $ \omega_t'':=\omega_t+\omega_t'$ where $\omega_t'$ is the restriction of the form \[ \left( \frac{M}{z-1} + \frac{M}{z+1}\right) dz\] to $Y_t$. For $M>0$ arbitrarily large, the sequences $\left\lbrace \omega_t' \right\rbrace_{t >1}$ and $\left\lbrace \omega_t'' \right\rbrace_{t >1}$ are as in the above paragraph. Hence, they satisfy the conclusion of the lemma and so does $\left\lbrace \omega_t \right\rbrace_{t >1}$. \end{proof}
In the course of the proof of Theorem \ref{thm:convperiod} and later on in the text, we will need the following elementary observation.
\begin{remark}\label{rem:blup} Given a family $Y_t$ as in \eqref{eq:model}, it is useful to compare two different types of metric on the boundary components of $Y_t$. On the one hand, we can consider on a given boundary geodesic $\gamma\subset Y_t$ the metric of length $2\pi$ obtained by rescaling the hyperbolic metric by $\frac{2\pi}{l(\gamma)}$. On the other hand, we can consider on the boundary geodesic $\gamma_{t,+}$ (respectively $\gamma_{t,-}$, respectively $\gamma_{t,\infty}$) the pullback of the Euclidean metric on $S^1$ by the function $\arg(z-1)$ (respectively $\arg(z+1)$, respectively $\arg(1/z)$). While the two metrics above might differ for given $t$ and $\gamma\subset Y_t$, the induced metrics on the limiting boundary component of $\widetilde S$ coincide at $t=\infty$. In particular, twist parameters of families $\left\lbrace S_t \right\rbrace_{t>1}$ converging tropically may be interpreted as angles when $t$ tends to $\infty$.
\end{remark}
\begin{proof}[Proof of Theorem \ref{thm:convperiod}.] For any loop $\rho \subset C$, the associated loop $\rho_t \subset S_t$ is made out of connected components of $\mathbb{R} Y_{v,t}$ for all the vertices $v\in V(C)$ in $\rho$ and arcs in the geodesics $\gamma_{e,t}$ for all the edges $\vec e$ in $\rho$, see Definition \ref{assloop}. By the previous lemma, the parts of $\rho_t$ contained in the different $\mathbb{R} Y_{v,t}$ do not contribute to the limit of the integral $\int_{\rho_t}\Im(\omega_R)$. In order to prove the theorem, it then suffices to show that for any $\vec e \in \rho$, we have \[ \lim_{t \rightarrow \infty} \int_{\rho_t \cap \gamma_{e,t}} \Im(\omega_R) \; = \; \log \big( \theta(e) \big) \cdot \textsc{w}_{R} (\vec e). \]
For $t$ large enough, the flow lines of $\Re(\omega_t)^\vee$ are transversal to $\gamma_{e,t}$ and point toward the pair of pants $Y_{v,t}$ for a vertex $v$ adjacent to $e$. Identify the latter pair of pants with a subset of $ \mathbb{C} \setminus \{-1,1\}$ as in \eqref{eq:model} such that $\gamma_{e,t}$ maps to $\gamma_{t,+}$. Then, for $\varepsilon >0$ small enough, the flow lines of $\Re(\omega_t)^\vee$ are also transversal to the boundary of the $\varepsilon$-neighbourhood $U_{\varepsilon,t} \subset Y_{v,t}$ with respect to the Euclidean distance in $ \mathbb{C} \setminus \{-1,1\}$. Denote by $c_{t,1}$ and $c_{t,2}$ the flow lines of $\Re(\omega_t)^\vee$ starting at the end points of $\rho_t \cap \gamma_{e,t}$ and ending on $\partial U_{\varepsilon,t}$. Denote by $\rho_{t,\varepsilon} $ the arc on $\partial U_{\varepsilon,t} $ between the end points of $c_{t,1}$ and $c_{t,2}$ such that $\rho_t \cap \gamma_{e,t}$ is homotopic to $ \bar\rho_{t, \varepsilon}:=c_{t,2}^{-1} \circ \rho_{t, \varepsilon} \circ c_{t,1}$ (or $c_{t,2} \circ \rho_{t, \varepsilon} \circ c_{t,1}^{-1}$ depending on how $c_{t,1}$ and $c_{t,2}$ are oriented). For any arbitrarily large $t$ and small $\varepsilon$, we have \[ \int_{\rho_t \cap \gamma_{e,t}} \Im(\omega_R) = \int_{\bar\rho_{t, \varepsilon}} \Im(\omega_R) = \int_{\rho_{t, \varepsilon}} \Im(\omega_R) \]
since $c_{t,1}$ and $c_{t,2}$ are flow lines of $\Re(\omega_t)^\vee$. By Theorem \ref{convdif}, the differential $\omega_R$ converges to the differential $ \textsc{w}_R$ on $Y_{v,\infty}$ when $t$ tends to $\infty$. Since the twist parameter $\theta_t(e)$ converges to $\theta(e)$ by assumption, the arc $\rho_{\infty, \varepsilon}\subset Y_{v,\infty}$ is joined by flow lines of $\Re( \textsc{w}_R)^\vee$ to an arc $\alpha:=\left\lbrace z \in S^1 \, \big| \, \theta \leqslant z \leqslant \theta \cdot \theta(e) \right\rbrace$ inside the boundary component $\gamma_+ \simeq S^1$ of $\widetilde S$, see Remark \ref{rem:blup}. It follows that \[ \begin{array}{rcl} \displaystyle \lim_{t \rightarrow \infty} \int_{\rho_t \cap \gamma_{e,t}} \Im(\omega_R) &= & \displaystyle \lim_{\varepsilon \rightarrow 0} \lim_{t \rightarrow \infty} \int_{\rho_{t, \varepsilon}} \Im(\omega_{R} ) \\ & & \\ &=& \displaystyle \int_{z\in \alpha} \frac{ \textsc{w}_{R} (\vec e)}{z}dz \; = \;\log \big( \theta(e) \big) \cdot \textsc{w}_{R}(\vec e). \end{array}\]
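The last equality is a routine verification with the branch of $\log$ fixed in the statement of Theorem \ref{thm:convperiod}: parametrising $\alpha$ by $z=\theta e^{is}$ with $s \in [0,s_e]$, where $\log\big(\theta(e)\big)=i s_e$ and $s_e \in [0,2\pi)$, we get \[ \int_{z\in \alpha} \frac{ \textsc{w}_{R}(\vec e)}{z}\, dz \; = \; \textsc{w}_{R}(\vec e) \int_{0}^{s_e} \frac{i \theta e^{is}}{\theta e^{is}} \, ds \; = \; i s_e \cdot \textsc{w}_{R}(\vec e) \; = \; \log \big( \theta(e) \big) \cdot \textsc{w}_{R}(\vec e). \]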
\end{proof}
\begin{corollary}\label{cor:period} Let $\left\lbrace S_t \right\rbrace_{t >1} \subset M_{g,n} $ be a family converging to a simple phase-tropical curve $V:=V(G,\ell,\theta)$ and $R=(r_1,\dots,r_n)$ be a collection of residues. Then, the family of period matrices $ \mathcal{P}_{R,S_t}$ converges to a matrix $ \mathcal{P}_{R,V}$ depending only on $\theta$ and the projective class of $\ell$, that is $ \mathcal{P}_{R,(G,\ell,\theta)}= \mathcal{P}_{R,(G,\lambda\cdot\ell,\theta)}$ for any $\lambda>0$. \end{corollary}
\begin{proof} Denote $C:=(G,\ell)$. According to Theorem \ref{convdif}, the period $ \frac{1}{2\pi i} \int_{\gamma_{e,t}} \omega_{R}^{S_t}$ tends to $ \textsc{w}_{R}^{C} (\vec e)$, where $\gamma_{e,t}$ is the geodesic of the pair of pants decomposition of $S_t$ corresponding to $e \in E(C)$. For any basis $\rho_1,\dots, \rho_g \in H_1(C, \mathbb{Z})$ and associated loops $\rho_{1,t},\dots, \rho_{g,t} \subset S_t$, Theorem \ref{thm:convperiod} implies that the period $\frac{1}{2\pi i} \int_{\rho_{k,t}} \omega_{R}^{S_t} $ converges to a linear combination of the $ \textsc{w}_{R}^{C} (\vec e)$, $e\in E(G)$, whose coefficients are determined by $\theta$. In particular, all the periods computed above depend only on the projective class of $\ell$ since $ \textsc{w}_R^C$ does. As any period of $\omega_{R}^{S_t}$ can be computed from the above periods, the result follows. \end{proof}
\begin{remark}\label{rem:jacobian} The techniques used to prove Theorems \ref{convdif} and \ref{thm:convperiod} can be adapted to the compact case $n=0$ and $g\geqslant 2$ to compute the asymptotic of the period matrix of holomorphic differentials on families of algebraic curves. For a family $\left\lbrace S_t \right\rbrace_{t>1} \subset M_g$ converging to a simple phase-tropical curve $V(G,\ell,\theta)$, consider the geometric symplectic basis $\alpha_{1,t},\dots,\alpha_{g,t}, \beta_{1,t},\dots, \beta_{g,t}$ of $S_t$ (see \cite[\S 6.1.2]{farbmarg}) constructed as follows: the $\alpha_{j,t}$ consist of cycles of the form $\gamma_{e_j,t}$ where the $e_j\in E(G)$ are such that $G\setminus \big( \cup_j \, e_j \big)$ is a connected tree; the cycle $\beta_{j,t}$ is given by $\rho_{j,t}$ (see Definition \ref{assloop}) where $\rho_j$ is the unique cycle in $G\setminus \big(\cup_{k\neq j}\, e_k\big)$. For the basis $\omega_{1,t},\dots,\omega_{g,t}$ of $\Omega^1(S_t)$ dual to $\alpha_{1,t},\dots,\alpha_{g,t}$, that is such that $\int_{\alpha_{j,t}} \omega_{k,t}=\delta_{jk}$, the matrix of periods $B_t$ defined by $(B_t)_{jk}:=\int_{\beta_{j,t}} \omega_{k,t}$ determines the Jacobian $J(S_t)$. We claim that \[ B_t \;\; \underset{t\rightarrow \infty}{\sim} \; \; \log(t)\cdot B_{\Re} \, +\, i \cdot B_{\Im}\] where $B_{\Re}$ is the matrix of the quadratic form $Q$ on $H_1(G, \mathbb{Z})$ defined in \cite[\S 6.1]{MZ} and $B_{\Im}$ is the matrix of the form $Q'$ such that $Q'(\rho_j)=-i \cdot \sum_{\vec e \in \rho_j} \log\big(\theta(e)\big)\in \mathbb{R}$.
We deduce in particular that the rescaled period matrix $\frac{1}{\log(t)}\cdot B_t$ converges to the tropical period matrix $Q$ of $C=(G,\ell)$. Moreover, the knowledge of the asymptotic of the imaginary part of $B_t$ suggests a phase-tropical compactification of the moduli space $A_g$ of principally polarised Abelian varieties, see \cite{BMV} and \cite{O2} for related matters. \end{remark}
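As a sanity check of the claimed asymptotic (a direct specialisation, assuming as in the standard tropical Jacobian that $Q(\rho_j)$ is the total length of the cycle $\rho_j$), the diagonal entries read \[ (B_t)_{jj} \;\; \underset{t\rightarrow \infty}{\sim} \;\; \log(t) \sum_{\vec e \in \rho_j} \ell(e) \; + \; \sum_{\vec e \in \rho_j} \log\big(\theta(e)\big), \] where the second sum is purely imaginary. Dividing by $\log(t)$ recovers the length of $\rho_j$ in $C$, in accordance with the convergence of the rescaled period matrix to the tropical period matrix, while the imaginary part records the twist parameters.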
\subsection{Harmonic tropical morphisms}\label{secharmtrop}
For any planar curve $ \mathcal{C} \subset ( \mathbb{C}^{\star} )^{2}$, the authors of \cite{PR} introduced the spine of the amoeba $ \mathcal{A}( \mathcal{C})$, a canonical tropical curve contained in $ \mathcal{A}( \mathcal{C})$ such that $ \mathcal{A}( \mathcal{C})$ deformation-retracts onto it. A similar construction can be carried over to the case of harmonic amoebas in $ \mathbb{R}^2$, see \cite{Kri}. In this more general case, the spine is still a piecewise linear graph, with the same deformation-retraction property, but its edges now have arbitrary slopes. Such spines are part of a wider class of tropical curves that we introduce below.
\begin{definition}\label{htropmorph} A \df{harmonic tropical morphism} $ \pi \, : \, C \rightarrow \mathbb{R}^m $ on a tropical curve $C$ is a proper continuous map subject to the following conditions:
--\textbf{Linearity}: for any $ e \in LE( C )$, the map $ \pi_{\vert e} $ is affine linear with respect to the metric on $e$.
--\textbf{Balancing}: For any $ v \in V ( C )$, denote by $\vec{e}_1, \dots, \vec{e}_n$ the outgoing unitary tangent vectors to the $n$ leaves/edges adjacent to $v$. Then \[ \sum_{1\leqslant j \leqslant n} d\pi (\vec{e}_{j}) \; = \; 0. \] A \df{harmonic tropical curve} in $\mathbb{R}^m$ is the image of a harmonic morphism. \end{definition}
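To illustrate the balancing condition, consider a hypothetical trivalent vertex $v$ of a harmonic tropical curve in $\mathbb{R}^2$ whose three adjacent leaves/edges have gradients \[ d\pi (\vec{e}_{1}) = (\sqrt{2},0), \qquad d\pi (\vec{e}_{2}) = (0,1), \qquad d\pi (\vec{e}_{3}) = (-\sqrt{2},-1), \] which indeed sum to zero. In contrast with ordinary tropical morphisms, whose slopes are integral, harmonic tropical morphisms allow such irrational gradients.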
Similarly to the case of harmonic amoebas, every harmonic tropical morphism can be described by integration of 1-forms, as stated below.
\begin{proposition}\label{propharmmorph} Let $C$ be a simple tropical curve and $R$ a collection of residues of dimension $m$ on $C$. Then, the map $ \pi_R : C \rightarrow \mathbb{R}^m$ of Definition \ref{defPir} is a harmonic tropical morphism. For any vertex $v \in V(C)$ and any outgoing adjacent leaf/edge $\vec e$, the corresponding unitary tangent vector is mapped to $ \textsc{w}_{R}^{C}(\vec e)$. Conversely, for any harmonic morphism $ \pi \; : \; C \rightarrow \mathbb{R}^m$, there exists a unique collection of residues $R$ of dimension $m$ on $C$ such that $ \pi = \pi_R$. \end{proposition}
\begin{proof} Since the 1-form $ \textsc{w}_{R}^{C}$ is constant on any leaf/edge $e\in LE(C)$, the map $\pi_R$ is affine linear on any $e\in LE(C)$. By definition, the vector $ \textsc{w}_{R}^{C} (\vec e)$ is the gradient of $\pi_R$ along the oriented edge $\vec e$. The balancing condition of Definition \ref{htropmorph} follows from Definition \ref{def1form}. The first part of the statement is proven.
Conversely, it is now clear that the map $\pi$ is given by integration of an $m$-tuple of constant 1-forms on any $e \in LE(C)$. The balancing condition of Definition \ref{htropmorph} is clearly equivalent to the balancing condition of Definition \ref{def1form}. Exactness follows from the fact that this $m$-tuple is given by differentiating the $m$-tuple of coordinate functions of $\pi$. Uniqueness follows from Proposition \ref{tdiff}-$c)$. \end{proof}
\begin{remark} As for harmonic amoeba maps, the deformation space of harmonic morphisms on a fixed tropical curve $C$ in $\mathbb{R}^m$ corresponds to the space of collections of residues of dimension $m$ on $C$, see Proposition \ref{tdiff}. It is then a real vector space of dimension $m(n-1)$, where $n $ is the number of leaves of $C$. \end{remark}
\subsection{Degeneration of harmonic amoebas}\label{sec:dha}
In order to prove Theorem \ref{approxtrop}, we need to make precise the notion of convergence we use. Recall the notations introduced in Section \ref{sectropcurvmorph}.
\begin{definition}\label{GHconv} The sequence of maps $ \frac{1}{\log (t)} \mathcal{A}_R : S_t \rightarrow \mathbb{R}^m $ \df{converges} to the tropical morphism $\pi_R : C \rightarrow \mathbb{R}^m$ if for any $v \in V(C)$ (respectively any $e\in E(C)$), the image of $Y_{v,t}$ (respectively $\gamma_{e,t}$) under the map $ \frac{1}{\log (t)} \mathcal{A}_R $ converges in Hausdorff distance to $\pi_R(\htr_v)$ (respectively to the middle point of $\pi_R(e)$). \end{definition}
Note that the latter notion of convergence is stronger than the convergence of the image of $ \frac{1}{\log (t)} \mathcal{A}_R $ to the image of $\pi_R$ in Hausdorff distance.
\begin{remark}\label{rem:odaka} In \cite{O}, Odaka studied the convergence of families of compact Riemann surfaces with respect to the Gromov-Hausdorff distance. When each surface in the family is equipped with its hyperbolic metric rescaled so that the diameter of the surface is $1$, Odaka showed that the limit of such a family is either a Riemann surface or a tropical curve, see \cite[Theorem 2.4]{O}. Although the idea of exhibiting tropical curves as limits of families of Riemann surfaces is common to \cite{O} and the present work, the two approaches are different. Indeed, for a family of compact Riemann surfaces $\left\lbrace S_t\right\rbrace_{t>1} \subset M_g$ converging to $C:=(G,\ell)$ in the sense of Definition \ref{tropconv}, the width of the collar of any vanishing cycle of the family is asymptotically equivalent to $\log(\log(t))$. Therefore, the diameter of $S_t$ is equivalent to $D\cdot \log(\log(t))$ where $D$ is the combinatorial diameter of the graph $G$. In particular, the family $\left\lbrace S_t\right\rbrace_{t>1}$ converges in the sense of \cite{O} to the graph $G$ equipped with length $1/D$ on every edge. \end{remark}
\begin{proof}[Proof of Theorem \ref{approxtrop}.]
Recall that the maps $\pi_R$ and $ \mathcal{A}_R$ are defined up to the choice of an initial point for integration, see Definitions \ref{defAr} and \ref{defPir}. Let us choose the initial point of $\pi_R$ to be a vertex $v$ and for $ \mathcal{A}_R$, take any sequence of initial points $p_t\in S_t$ such that $p_t\in Y_{v,t}$ and $p_t$ sits in the complement of the half-collars and cusps of $Y_{v,t}$. For this choice of initial points, we will show below that the image of $Y_{v,t}$ (respectively $\gamma_{e,t}$) under $ \frac{1}{\log (t)} \mathcal{A}_R $ converges in Hausdorff distance to $\pi_R(\htr_v)$ (respectively to the middle point of $\pi_R(e)$, for any $e\in E(C)$ adjacent to $v$). Since the integrals defining $\pi_R$ and $ \mathcal{A}_R$ are additive with respect to subdivision of the paths of integration, the above claim implies the statement.
Denote by $R_{v,t}$ the $m \times 3$ matrix of rescaled periods $\frac{1}{2\pi i} \int \omega_{R}^{S_t}$ along the 3 boundary geodesics of $Y_{v,t}$, oriented so that the direct normal vector is pointing inwards. The matrix $R_{v,t}$ is a matrix of residues of dimension $m$ that converges, thanks to Theorem \ref{convdif}, to a matrix $R_v$ whose columns correspond to the slopes of $\pi_R$ on the leaves/edges of $\htr_v$. Recall that $S:= \mathbb{C} P^1 \setminus \{-1,1,\infty\}$ and $Y_{v,t}$ can be identified with a subset of $S$ as in \eqref{eq:model}. In particular, the pair of pants $Y_{v,t}$ converges to the whole $S$ when $t$ tends to $\infty$ and the restriction of $\omega_{R}^{S_t}$ to $Y_{v,t}$ converges to $\omega_{R_v}^{S}$, see again Theorem \ref{convdif}.
It follows that $ \mathcal{A}_R (Y_{v,t}) $ converges in Hausdorff distance to $\mathcal{A}_{R_{v}} (S)$ when $t$ tends to $\infty$ (observe that the latter amoeba might degenerate to a line or a point if $R_v$ has rank $1$, respectively $0$). As a direct consequence, we have that \[ \Gamma_v \; := \; \lim_{t \rightarrow \infty} \; \frac{1}{\log(t)} \mathcal{A}_R (Y_{v,t}) \; \subset \; \lim_{t \rightarrow \infty} \; \frac{1}{\log(t)} \mathcal{A}_{R_{v}} (S) \; =: \; \Gamma_v^\infty .\] The right-hand side $\Gamma^\infty_v$ is the only tripod with infinite edges (respectively the only line, respectively the only point) containing $\pi_R(\htr_v)$ if $R_v$ has rank $2$ (respectively $1$, respectively $0$). We claim that the left-hand side $\Gamma_v$ is $\pi_R(\htr_v)$. First, observe that if $\htr_v$ contains a leaf $e$ of $C$, then $\Gamma_v$ contains $\pi_R (e)$. Indeed, the set $\pi_R (e)$ is an infinite half-ray whose slope is given by the corresponding column of $R$ while the harmonic amoeba $\mathcal{A}_R (Y_{v,t})$ has a tentacle with the same slope. Consider now a bounded edge $e$ of $\htr_v$ such that the corresponding column in $R_v$ is non-zero, otherwise there is nothing to prove. Pick a point $s_t$ in the boundary component $\gamma_{e,t}$ of $Y_{v,t}$ for any $t>1$.
Then, the image of $s_t$ under $\frac{1}{\log(t)} \mathcal{A}_R$ is given by integrating $\frac{1}{\log(t)} \omega_{R}^{S_t}$ along any path $\rho_t$ from $p_t$ to $s_t$. In particular, the path $\rho_t$ has to cross the half-collar $\hc_{\gamma_{e,t}}$. The sequence of paths $\{ \rho_t\}_{t>1}$ can clearly be chosen so that their restrictions to the half-collars form an admissible sequence. Therefore, Proposition \ref{convdifloc} applies with $\alpha=1/2$ and $l=\ell(e)$, since the modulus of the collar $\hc_{\gamma_{e,t}}$ is half the modulus of $K_{\gamma_{e,t}}$, which is $m\big(\ell_t(e)\big)$. It follows that the limit of $\frac{1}{\log(t)} \mathcal{A}_R (s_t)$ is the end point of the edge $\pi_R(e)$. Such points have to be end points of $\Gamma_v$ by the maximum principle. This proves the theorem. \end{proof}
\begin{remark} The approximation of tropical curves by families of amoebas is treated for instance in \cite{Mikh05}, \cite{Mikh06}, \cite{NS}, or \cite{Nish}. Considering harmonic amoebas allows one to extend the various approximation theorems given in the above references, as shown in Theorem \ref{approxtrop}. Not only can we approximate harmonic tropical curves whose edges may have irrational slopes, but we can also approximate honest tropical curves that cannot be obtained as limits of families of amoebas of algebraic curves, see Examples \ref{ex:cubic} and \ref{ex:cubicg2}. \end{remark}
\begin{example}\label{ex:cubic} Consider the tropical curve $C\subset \mathbb{R}^3$ pictured in Figure \ref{fig:tropcubic}. The latter is known not to be approximable by families of amoebas of algebraic curves, see \cite[Example 5.12]{Mikh04}.
As a metric graph, all the edges of $C$ have length $1$ except the edge $e$ which has length $2$. Now, consider a family of curves $\{S_t\}_{t>1} \subset M_{1,12}$ decomposed into pairs of pants as shown on the right-hand side of Figure \ref{fig:tropcubic}. Require that every geodesic of the latter decomposition has length $\ell_t=\frac{2\pi^2}{\log(t)}$ except for the geodesic $\gamma$ which should have length $\frac{\ell_t}{2}$. Consider then the collection of residues $R$ of dimension $3$ on $S_t$ such that the gray, black, yellow and green punctures are respectively assigned the residue vectors $(0,-1,0)$, $(-1,0,0)$, $(0,0,-1)$ and $(1,1,1)$. Theorem \ref{approxtrop} implies that $\frac{1}{\log(t)} \mathcal{A}_R(S_t)$ converges in Hausdorff distance to $C$. \end{example}
\begin{figure}
\caption{Approximation of the tropical curve $C\subset \mathbb{R}^3$ by harmonic amoebas.}
\label{fig:tropcubic}
\end{figure}
\begin{example}\label{ex:cubicg2} It is a classical fact that the geometric genus of an algebraic curve of degree $d$ in $ \mathbb{C} P^n$ is at most $\frac{(d-1)(d-2)}{2}$. This turns out not to be the case in tropical geometry, as shown in \cite{BBM2}. Consider for instance the tropical curve $C\subset \mathbb{R}^3$ of degree $3$ and genus $2$ given in Figure 2 of the latter reference. As in the previous example, it is a simple exercise to construct a family $\{S_t\}_{t>1} \subset M_{2,12}$ and a collection of residues $R$ of dimension $3$ such that $\frac{1}{\log(t)} \mathcal{A}_R(S_t)$ converges in Hausdorff distance to $C$. \end{example}
\section{Approximation of phase-tropical curves}\label{sec:approxphasetrop}
In this last section, we come back to the algebraic case. As illustrated by the above examples, algebraicity imposes severe restrictions on the realisability (or ``approximability") of tropical curves, whereas any harmonic tropical curve in $\mathbb{R}^m$ arises as the Hausdorff limit of a family of harmonic amoebas. Obstructions to realisability in the algebraic setting are discussed for instance in \cite{Mikh06} or \cite{Nish}. There is at least one natural condition under which tropical curves are realisable. This condition, called regularity, is the one we will use in the remainder of the text.
\subsection{Phase-tropical morphisms}\label{sec:ptm}
Recall that a toric morphism $ B : ( \mathbb{C}^\star)^k \rightarrow ( \mathbb{C}^\star)^m $ is a map of the form \[ (z_1, \dots, z_k) \mapsto \big( b_1 z_1^{a_{11}}\dots z_k^{a_{1k}}, \dots, b_m z_1^{a_{m1}}\dots z_k^{a_{mk}} \big) \] where $(b_1,\dots,b_m) \in ( \mathbb{C}^\star)^m $ and $ \big( a_{ij} \big)_{ij} \in M_{m \times k} \big( \mathbb{Z} \big)$.
\begin{definition}\label{defphtropmorph} A \df{phase-tropical morphism} $\phi : V \rightarrow ( \mathbb{C}^\star)^m $ on a simple phase-tropical curve $V = \big(\left\lbrace \pi_v, L_v \right\rbrace_{v\in V(C)}, \left\lbrace B_{\vec e} \right\rbrace_{e\in E(C)} \big)$ is a proper continuous map such that for any $v \in V(C)$, and the corresponding chart $ \mathcal{A}^{-1}\big(\pi_v(T_v)\big)\cap L_v$, there exists a toric morphism $B_v : ( \mathbb{C}^{\star} )^{2} \rightarrow (\mathbb{C}^\star)^m$ such that the restriction of $\phi$ to the latter chart is given by $B_v$. We denote by $\pi_\phi : C \rightarrow \mathbb{R}^m$ the induced tropical morphism. \end{definition}
\begin{remark} Phase-tropical curves originally appeared as immersed objects in $( \mathbb{C}^{\star} )^{2}$, obtained by degeneration of families of algebraic curves, see section $6.2$ in \cite{Mikh05}. Their abstract counterpart and associated morphisms were considered later in \cite{Mikhunpub}. \end{remark}
\begin{proposition}\label{prop:phasetropdetermined} Any phase-tropical morphism $\phi : V \rightarrow ( \mathbb{C}^\star)^m $ is uniquely determined by its underlying tropical morphism $ \pi_\phi : C \rightarrow \mathbb{R}^m$, up to a toric translation $(z_1, \dots, z_m) \mapsto \big( b_1 z_1, \dots, b_m z_m \big)$ on the target space. \end{proposition}
\begin{proof} Since the toric morphism $B_v : (z_1, \dots, z_k) \mapsto \big( b_1 z_1^{a_{11}}\dots z_k^{a_{1k}}, \dots, b_m z_1^{a_{m1}}\dots z_k^{a_{mk}} \big) $ descends to the affine-linear map $$ (x_1, \dots, x_k) \mapsto \big( \log\vert b_1\vert + a_{11} x_1+ \dots+ a_{1k} x_k, \dots, \log \vert b_m \vert + a_{m1} x_1 + \dots + a_{mk} x_k \big)$$ by composition with $ \mathcal{A}$, the tropical morphism $\pi_\phi$ determines $B_v$ up to the coordinate-wise argument of $b_v := (b_1,\dots,b_m) \in (\mathbb{C}^\star)^m $. Up to translation, we can assume that the latter vector is $(1,\dots,1)$ for a chosen vertex $v_0 \in V(C)$. For any edge $\vec e$ from $v_0$ to another vertex $v$, the argument of $b_v$ is determined by $B_{\vec e}$ and hence all the $b_v$ are determined up to a single translation by $\pi_\phi$. The result follows. \end{proof}
Let us now recall the notions of regularity and superabundancy for tropical morphisms. We refer to \cite[\S 2.4-2.6]{Mikh05} for more details. Recall that any simple tropical curve can be presented as a couple $(G,\ell)$ consisting of a cubic graph and a length function.
\begin{definition} Let $C$ and $C'$ be two simple tropical curves supported on the same cubic graph $G$. Two tropical morphisms $\pi : C \rightarrow \mathbb{R}^m$ and $\pi' : C' \rightarrow \mathbb{R}^m$ have the same \df{combinatorial type} if for any oriented leaf/edge $\vec{e} \in LE(G)$, $\pi(\vec{e})$ and $\pi'(\vec{e})$ have the same slope in $S^{m-1} \cup \left\lbrace 0 \right\rbrace$. \end{definition}
The space of deformations of tropical morphisms within a fixed combinatorial type is subject to a finite number of constraints. Therefore, its dimension is bounded from below by the ``expected dimension". Here, we are interested in the space of tropical curves supporting a tropical morphism of a fixed combinatorial type rather than in the space of deformations of the morphism. We refer to \cite[\S 2.4]{Mikh05} for a proof of the next statement.
\begin{proposition} Let $G$ be a cubic graph of genus $g$ with $n$ leaves and $R$ be a collection of residues of dimension $m$ on $G$. For any length function $\ell_0$ on $G$, the space of length functions $\ell$ such that the tropical morphism $\pi_R : (G,\ell) \rightarrow \mathbb{R}^m$ has the same combinatorial type as $\pi_R : (G,\ell_0) \rightarrow \mathbb{R}^m$ is the relative interior of an open convex polyhedral domain of codimension at most $mg$ in the space $( \mathbb{R}_{>0})^{3g-3+n}$ of length functions on $G$. \end{proposition}
The constraints on the above space of tropical curves are given by the $g$ cycles of $G$. Since the image of each of the $g$ cycles has to close up in $ \mathbb{R}^m$, each cycle imposes $m$ conditions so that we obtain $mg$ conditions in total.
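To spell this out (an explicit restatement of the conditions just described, in the slope notation $\textsc{w}_{R}^{C} (\vec e)$ used later in the text), the requirement that the image of a loop $\rho \subset G$ closes up in $\mathbb{R}^m$ reads \[ \sum_{\vec e \in \rho} \ell(e) \cdot \textsc{w}_{R}^{C} (\vec e) \; = \; 0 \; \in \; \mathbb{R}^m, \] which imposes $m$ linear conditions on the length function $\ell$; letting $\rho$ run over a basis of the $g$ independent cycles of $G$ yields the $mg$ conditions in total.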
\begin{definition} Let $G$ and $R$ be as above. A tropical morphism $\pi_R : (G,\ell_0) \rightarrow \mathbb{R}^m$ is \df{regular} (respectively \df{superabundant}) if the space of length functions $\ell$ such that $\pi_R : (G,\ell) \rightarrow \mathbb{R}^m$ has the same combinatorial type as $\pi_R : (G,\ell_0) \rightarrow \mathbb{R}^m$ has codimension $mg$ (respectively smaller than $mg$) in $( \mathbb{R}_{>0})^{3g-3+n} $. A phase-tropical morphism $\phi:V\rightarrow ( \mathbb{C}^\star)^m$ is \df{regular} if its underlying tropical morphism $\pi_\phi$ is. \end{definition}
\subsection{Mikhalkin's approximation Theorem}\label{Mik}
In this last section, we state and prove Mikhalkin's Theorem in a framework close to that of \cite{Mikhunpub}. The main difference is the way regularity enters the proof. In \cite{Mikhunpub}, regularity allows one to find principal divisors on appropriate sequences of Riemann surfaces, giving the desired sequence of maps via the Abel--Jacobi Theorem.
The approximation of phase-tropical morphisms requires more than the approximation of the underlying tropical morphism, guaranteed by Theorem \ref{approxtrop}. In the present framework, we need to construct sequences of harmonic amoeba maps that have a well-defined harmonic conjugate and lift to actual holomorphic maps. Technically, we need harmonic amoeba maps coming from i.n.d. whose periods are integer multiples of $2\pi i$. The existence of such maps will be guaranteed by regularity, see Proposition \ref{propreg}.
Recall the change of complex structure \[ \begin{array}{rcl} H_t \: : \: ( \mathbb{C}^\star)^m & \rightarrow & ( \mathbb{C}^\star)^m \\ (z_1,\dots,z_m) & \mapsto & \left( \vert z_1 \vert^{\frac{1}{\log(t)}} \frac{z_1}{\vert z_1 \vert} ,\dots, \vert z_m \vert^{\frac{1}{\log(t)}} \frac{z_m}{\vert z_m \vert} \right) \end{array}. \] Then, we have the following formulation of Mikhalkin's approximation Theorem.
\begin{mytheorem}\label{thmMik} Let $ \phi : V(G,\ell,\theta) \rightarrow ( \mathbb{C}^\star)^m$ be a regular phase-tropical morphism on a simple phase-tropical curve $V:=V(G,\ell,\theta)$, where $C:=(G,\ell)$ has genus $g$ and $n$ leaves. Then, there exists a sequence $\left\lbrace S_t \right\rbrace_{t>1} \subset M_{g,n}$ converging to $V$, together with algebraic maps $ \phi_t \, : \, S_t \rightarrow ( \mathbb{C}^\star)^m$ such that $ H_t \big( \phi_t ( S_t ) \big) $ converges in Hausdorff distance to $\phi (V)$. Moreover, we can require the twist function $\theta_t$ of $S_t$ to be independent of $t$. \end{mytheorem}
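Let us note in passing (an elementary check, spelled out for convenience) that $H_t$ implements at the level of amoebas the rescaling used in Theorem \ref{approxtrop}: writing $\mathcal{A}(z_1,\dots,z_m) = \big( \log \vert z_1 \vert, \dots, \log \vert z_m \vert \big)$, one has coordinate-wise \[ \mathcal{A} \circ H_t \; = \; \frac{1}{\log(t)} \, \mathcal{A}, \] since $\log \big( \vert z_j \vert^{\frac{1}{\log(t)}} \big) = \frac{\log \vert z_j \vert}{\log(t)}$ while the arguments $\frac{z_j}{\vert z_j \vert}$ are left unchanged.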
\begin{remark} -- The requirement in Theorem \ref{thmMik} that $\theta_t$ does not depend on $t$ is of crucial importance for the construction of real algebraic curves with prescribed topology and in real enumerative geometry, see for instance \cite[Theorem $3$]{L} and \cite[Theorem $3$]{Mikh05}. Indeed, if $V$ is globally invariant under complex conjugation, then the twist function $\theta$ takes values in $\{-1, \,1\}$. If $\theta_t$ is chosen to be constant (and therefore equal to $\theta$), each Riemann surface $S_t$ inherits an anti-holomorphic involution given by the involution $\sigma$ of Theorem \ref{thm:buser} on every pair of pants of the underlying decomposition of $S_t$. Whenever the initial point of Definition \ref{defpermat} in $S_t$ belongs to the fixed locus of the involution, the map $\phi_t$ is real algebraic.
-- Theorem \ref{thmMik} takes care of every phase-tropical immersion to $( \mathbb{C}^{\star} )^{2}$ since any such morphism is regular, see \cite[Proposition $2.23$]{Mikh05}. \end{remark}
\begin{proposition}\label{approxcomptrop} Let $ \phi : V \rightarrow ( \mathbb{C}^\star)^m$ be a phase-tropical morphism on a simple phase-tropical curve $ V:=V(G,\ell,\theta)$ and let $R$ be the collection of residues such that $\pi_\phi : C \rightarrow \mathbb{R}^m$ is equal to $\pi_R$. For any sequence $ \left\lbrace S_t \right\rbrace_{t>1} $ converging to $V$ and such that $\mathcal{P}_{R,S_t}$ is a constant family of integer period matrices, the images $H_t \big(\phi_R(S_t)\big) $ converge in Hausdorff distance to $\phi(V)$, see Definition \ref{defpermat}. \end{proposition}
\begin{proof} We will proceed as in the proof of Theorem \ref{approxtrop}: we first show the convergence locally and then check that the pieces glue together in the expected way.
Let us first show that $ H_t \big( \phi_{R} ( Y_{v,t} ) \big) $ converges to $\phi \big( \mathcal{A}_V^{-1} (\htr _v) \big)$ for any $v \in V(C)$. Recall the notations $R_{v,t}$ and $R_v$ introduced in the proof of Theorem \ref{approxtrop}. By assumption, these matrices have integer coefficients and are all equal to each other. In particular, if $Y_{v,t}$ is presented as a subset of $S$ as in \eqref{eq:model}, the family of maps $\phi_R: Y_{v,t} \rightarrow ( \mathbb{C}^\star)^m$ converges point-wise to the map $\phi_{R_v}: S \rightarrow ( \mathbb{C}^\star)^m$ when a common initial point $p$ is chosen in $\cap_t Y_{v,t}\subset S$. Recall also that $\phi_{R_v}: S \rightarrow ( \mathbb{C}^\star)^m$ is an embedding if and only if the $m\times 3$ matrix $R_v$ has rank $2$. If the latter map is not an embedding, consider the matrix $\widetilde R_v$ obtained by adding the matrix $R_0:=\begin{psmallmatrix} -1 & \;\;0 & \; 1\\ \;\;0 & -1 & \;1 \end{psmallmatrix}$ at the bottom of $R_v$ and consider the map $\widetilde \phi_{R} : Y_{v,t} \rightarrow ( \mathbb{C}^\star)^{m+2}$ obtained by adding the $2$ coordinates $e^{\int \omega_{R_0}^S}$, where the pair of i.n.d. $\omega_{R_0}^S$ is restricted to $Y_{v,t}\subset S$. Since the map $\phi_{R}:Y_{v,t}\rightarrow ( \mathbb{C}^\star)^m$ (respectively $\phi_{R_v}: S \rightarrow ( \mathbb{C}^\star)^m$) is obtained from $\widetilde \phi_{R}$ (respectively $\phi_{\widetilde R_v}$) by projecting onto the first $m$ coordinates, there is no loss of generality in assuming that $R_v$ has rank $2$, and hence that $\phi_{R_v}$ is an embedding. As observed above, the map $\phi_R: Y_{v,t} \rightarrow ( \mathbb{C}^\star)^m$ converges point-wise to $\phi_{R_v}: S \rightarrow ( \mathbb{C}^\star)^m$. We also know from Theorem \ref{approxtrop} that $ \mathcal{A}\circ H_t\circ \phi_R(Y_{v,t})$ converges to $\pi_\phi(\htr_v)$. In particular, each boundary geodesic of $Y_{v,t}$ is mapped to the corresponding end of $\pi_\phi(\htr_v)$. 
From there, showing that $ H_t \big( \phi_{R} ( Y_{v,t} ) \big) $ converges to $\phi\big( \mathcal{A}_V^{-1} (\htr _v)\big)$ amounts to showing that $H_t( \mathcal{L})$ converges to the phase-tropical line $L$. Observe in particular that the image of $ \mathbb{R} Y_{v,t}$ converges to the real part of $\phi\big( \mathcal{A}_V^{-1} (\htr _v)\big)$.
It remains to show that for two vertices $v_1$, $v_2$ connected by an edge $e$, the piece $H_t \big( \phi_{R} ( Y_{v_1,t} \cup Y_{v_2,t})\big)$ converges to $\phi\big( \mathcal{A}_V^{-1} (\htr _{v_1}\cup \htr _{v_2})\big)$. On the hyperbolic side, the two pairs of pants $Y_{v_1,t}$ and $Y_{v_2,t}$ are glued to each other according to the twist $\theta_t(e)$ measuring the angle between the two markings of $\gamma_{e,t}$ given by $ \mathbb{R} Y_{v_1,t}$ and $ \mathbb{R} Y_{v_2,t}$. On the phase-tropical side, the two charts $ \mathcal{A}_V^{-1} (\htr_{v_1})$ and $ \mathcal{A}_V^{-1} (\htr_{v_2})$ are glued to each other according to the twist $\theta(e)$ measuring the angle between the two markings of the geodesic $ \mathcal{A}_V^{-1}(q)$ given by $ \mathbb{R} \mathcal{A}_V^{-1} (\htr_{v_1})$ and $ \mathbb{R} \mathcal{A}_V^{-1} (\htr_{v_2})$, where $q$ is the midpoint of $e$. The result now follows from the facts that the twist $\theta_t(e)$ converges to $\theta(e)$, that the image of $ \mathbb{R} Y_{v_j,t}$ under $H_t\circ \phi_R$ converges to $\phi\big( \mathbb{R} \mathcal{A}_V^{-1} (\htr_{v_j})\big)$, and from Remark \ref{rem:blup}. \end{proof}
\begin{lemma}\label{intperiod} Let $ \phi : V \rightarrow ( \mathbb{C}^\star)^m$ be a phase-tropical morphism on a simple phase-tropical curve $ V:=V(C,\theta)$ and denote by $R$ the collection of residues of the underlying tropical morphism $\pi_\phi : C \rightarrow \mathbb{R}^m$. For any loop $\rho\subset C$, we have \[ \sum_{\vec e \in \rho} \log \big( \theta (e) \big) \cdot \textsc{w}_{R}^{C} (\vec e) \in ( 2 i \pi \mathbb{Z} )^m. \] \end{lemma}
\begin{proof} For any vertex $v \in \rho$, there is a distinguished point $p_v$ among the 3 vertices of the coamoeba $\mathcal{A}_V^{-1} (v)$, namely the intersection point of the $2$ boundary geodesics of $\mathcal{A}_V^{-1} (v)$ corresponding to the 2 edges in $\rho$ adjacent to $v$.
If $\vec e \subset \rho$ joins the vertex $v$ to the vertex $v'$, the image of the distinguished point $p_{v'}$ under $\Arg \circ \phi $ is obtained from the image of $p_v$ by adding $\frac{1}{i} \log \big( \theta(e) \big) \cdot \textsc{w}_{R}^{C} (\vec e)$ in the argument torus $( \mathbb{R}/2\pi \mathbb{Z})^m$ of $( \mathbb{C}^\star)^m$. We deduce that for any $v\in V(C)$, we have the equality \[ p_v + \sum_{\vec e \in \rho} \frac{1}{i} \log \big( \theta (e) \big) \cdot \textsc{w}_{R}^{C} (\vec e) \; = \; p_v\] in the argument torus. The statement follows.
\end{proof}
\begin{lemma}\label{lem:convperiodinteger} Let $ \phi : V \rightarrow ( \mathbb{C}^\star)^m$ be a phase-tropical morphism on a simple phase-tropical curve $ V:=V(C,\theta)$ and denote by $R$ the collection of residues of the underlying tropical morphism $\pi_\phi : C \rightarrow \mathbb{R}^m$. For any family $ \left\lbrace S_t \right\rbrace_{t >1} \subset M_{g,n} $ converging to $V$, the family of period matrices $ \left\lbrace \mathcal{P}_{R,S_t} \right\rbrace_{t >1} $ converges to an integer period matrix $ \mathcal{P}_{R, C}$, see Definition \ref{defpermat}. \end{lemma}
\begin{proof} According to Theorem \ref{convdif}, the period vector $ \frac{1}{2 i \pi} \big( \int_{\gamma_{e,t}} \omega_{R,1}^{S_t},\dots, \int_{\gamma_{e,t}} \omega_{R,m}^{S_t} \big)$ tends to the vector $ \textsc{w}_{R}^{C} (\vec e)$, where $\gamma_{e,t}$ is the geodesic of the pair of pants decomposition of $S_t$ corresponding to $e \in E(C)$. Since $\pi_\phi$ is a tropical morphism, the vector $ \textsc{w}_{R}^{C} (\vec e)$ has integer coordinates for any $e\in E(C)$, and hence so does the limit of the above period vector. Now, consider a basis $\rho_1,\dots, \rho_g$ of $H_1(C, \mathbb{Z})$ and the associated family of piecewise geodesic loops $\rho_{1,t},\dots, \rho_{g,t} \subset S_t$, see Definition \ref{assloop}. By Theorem \ref{thm:convperiod} and Lemma \ref{intperiod}, we have that \[\lim_{t\rightarrow \infty} \int_{\rho_{k,t}} \omega_{R,j}^{S_t} \in 2 i \pi \mathbb{Z}\] for $1 \leqslant k \leqslant g$ and $1 \leqslant j \leqslant m$. Since the closed curves $\gamma_{e,t}$, $e\in E(C)$, and $\rho_{k,t}$, $1 \leqslant k \leqslant g$, generate $H_1(S_t,\mathbb{Z})$ for any $t$, the statement follows. \end{proof}
For the rest of this section, the map $\phi : V \rightarrow ( \mathbb{C}^\star)^m$ is a regular phase-tropical morphism on a phase-tropical curve $ V:=V(C,\theta)$ and $R$ denotes the collection of residues of the underlying tropical morphism $\pi_\phi: C \rightarrow \mathbb{R}^m$. The genus and number of leaves of $C$ will be denoted by $g$ and $n$ respectively and satisfy $2g+n>2$.
We denote by $G$ the cubic graph supporting $C$ and fix a ribbon structure $ \mathscr{R}$ on it. Recall from Section \ref{sec:hyp} that we can define the surjective analytic map \[ \begin{array}{rcl} \Upsilon_{G} \: : \: ( \mathbb{C}^\star)^{3g-3+n} & \rightarrow & M_{g,n} \\ (\ell,\theta) & \mapsto & S(G,\ell, \theta) \end{array}. \] Consider the partial compactification $K$ of $( \mathbb{C}^\star)^{3g-3+n}$ given by first applying the self-diffeomorphism \[(\ell,\theta)\mapsto \left(\frac{\ell}{\Vert \ell\Vert} (\Vert \ell\Vert+1), \theta \right)\]
and then taking the closure of the image in $\big( \mathbb{R}_{>0}\times (S^1)\big)^{3g-3+n} \subset ( \mathbb{C}^\star)^{3g-3+n}$. The latter construction corresponds to the real oriented blow-up of the $\ell$-factor of $( \mathbb{C}^\star)^{3g-3+n}$ at the origin. The total space $K$ is diffeomorphic to $ \left\lbrace \ell \in ( \mathbb{R}_{>0})^{3g-3+n} \: \big| \: \Vert \ell \Vert \geqslant 1 \right\rbrace\times (S^1)^{3g-3+n}$ with ``exceptional divisor"
$$E := \left\lbrace \ell \in ( \mathbb{R}_{>0})^{3g-3+n} \: \big| \: \Vert \ell \Vert = 1 \right\rbrace \times (S^1)^{3g-3+n}. $$ In view of the notion of abstract phase-tropical convergence given in Definition \ref{comptropconv}, it is natural to consider the points of $E$ as equivalence classes of phase-tropical curves with respect to the relation
\begin{equation}\label{eq:eqrelation} V(G,\ell_1, \theta_1) \sim V(G,\ell_2,\theta_2) \;\; \Leftrightarrow \;\; \big(\theta_1 = \theta_2 \; \text{ and } \; \ell_1 = \lambda \ell_2 \; \text{ for some } \; \lambda>0 \big). \end{equation} The map $\Upsilon_{G} $ extends to $K$ and maps the exceptional divisor $E$ to the stable curve $S(G)$ dual to $G$. Now, define $k:=(2g+n-1)$ and consider the map \[ \begin{array}{rcl} \Pi_R \: : \: ( \mathbb{C}^\star)^{3g-3+n} & \rightarrow & M_{k \times m} (\mathbb{R}) \big/ \SL_{k} (\mathbb{Z}) \\ (\ell,\theta) & \mapsto & \mathcal{P}_{R,S(G,\ell,\theta)} \end{array} \] where $\mathcal{P}_{R,S(G,\ell,\theta)}$ is the period matrix of $S(G,\ell,\theta)$ with respect to $R$, see Definition \ref{defpermat}.
\begin{proposition}\label{propreg} The map $\Pi_R$ extends analytically to $K$. If $[V]$ denotes the equivalence class of $V$ in $E$, then each irreducible component of the level set $ \Pi_R^{-1} \big( \Pi_R( [V]) \big) \subset K$ is a smooth analytic subset of real codimension $2mg$ near $E$. \end{proposition}
\begin{proof} For any continuous family $\left\lbrace (\ell_t, \theta_t) \right\rbrace_{t>1}\subset ( \mathbb{C}^\star)^{3g-3+n}$ converging to a point of $E\subset K$, the sequence $S_t:=\Upsilon_G(\ell_t, \theta_t)$ converges to a phase-tropical curve $V$, up to an appropriate re-parametrisation of $t$. For any other re-parametrisation converging to a phase-tropical curve $V'$, the curves $V$ and $V'$ are equivalent under the relation \eqref{eq:eqrelation}. By Corollary \ref{cor:period}, the family of period matrices $ \mathcal{P}_{R,S_t}$ converges to a matrix that only depends on the equivalence class of $V$ and hence does not depend on the choice of a re-parametrisation. It follows that $\Pi_R$ extends to $K$. As seen in Corollary \ref{cor:period}, the limiting periods are polynomial in the coefficients of $ \textsc{w}_{R}^{C} (\vec e)$, $e\in E(C)$, and in the coefficients of $\theta$. Since the $ \textsc{w}_{R}^{C} (\vec e)$ depend linearly on $\ell$, the extension of $\Pi_R$ is analytic.
According to Theorems \ref{thmcodim} and \ref{thmhyperb}, the level set $\Pi_R^{-1} \big( \Pi_R( [V]) \big)$ is the intersection of $m$ smooth subvarieties of real codimension $2g$ in $( \mathbb{C}^\star)^{3g-3+n}$. In order to prove the statement, it remains to show that the level set of the map $\Pi_R$ extended to $K$ intersects $E$ in a smooth subvariety of codimension $2mg$.
We claim that this intersection is described by the classes $[V']$ with $V'= V(C',\theta')$ and $C':=(G,\ell')$ satisfying \begin{equation}\label{condperiod1} \sum_{\vec e \in \rho} \ell'(e) \cdot \textsc{w}_{R}^{C} (\vec e) = 0 \in \mathbb{R}^m \end{equation} \begin{equation}\label{condperiod2} \sum_{\vec e \in \rho} \log \big( \theta'(e) \big)\cdot \textsc{w}_{R}^{C} (\vec e) = 0 \in ( \mathbb{R}/2\pi \mathbb{Z})^m \end{equation} for any loop $\rho \subset G$. To see this, observe first that for any $\vec e\in E(G)$, we have $ \textsc{w}_{R}^{C} (\vec e) = \textsc{w}_{R}^{C'} (\vec e)$. Indeed, by Theorem \ref{convdif}, the vector $ \textsc{w}_{R}^{C} (\vec e)$ is the limit of $\int_{\gamma_{e,t}} \omega_{R}^{S_t}$ where $S_t:=S(G,\ell_t,\theta_t)$ and $(\ell_t,\theta_t)$ is any appropriately scaled family in $\Pi_R^{-1} \big( \Pi_R( [V]) \big)$ converging to $(\ell,\theta)$. The same holds for $ \textsc{w}_{R}^{C'} (\vec e)$ and we have $\int_{\gamma_{e,t}} \omega_{R}^{S_t}=\int_{\gamma'_{e,t}} \omega_{R}^{S'_t}$ since both families $(\ell_t,\theta_t)$ and $(\ell'_t,\theta'_t)$ are in $\Pi_R^{-1} \big( \Pi_R( [V]) \big)$. It follows that $ \textsc{w}_{R}^{C} (\vec e) = \textsc{w}_{R}^{C'} (\vec e)$. Now, the collection of equations \eqref{condperiod1} is equivalent to the exactness of $ \textsc{w}_{R}^{C'} (\vec e)$ under the latter equalities. Therefore, these equations are satisfied on $\Pi_R^{-1} \big( \Pi_R( [V]) \big)\cap E$. For the collection of equations \eqref{condperiod2}, the left-hand side is, according to Theorem \ref{thm:convperiod} and the equalities $ \textsc{w}_{R}^{C} (\vec e) = \textsc{w}_{R}^{C'} (\vec e)$, the limit of the period vector $\int_{\rho'_{t}} \omega_R^{S'_t}$ where $\rho'_t \subset S'_t$ is the loop associated to $\rho\subset C'$, see Definition \ref{assloop}. 
By the same argument as above, we have that $\int_{\rho'_{t}} \omega_R^{S'_t}=\int_{\rho_{t}} \omega_R^{S_t}$ and also that $\int_{\rho_{t}} \omega_R^{S_t}\in(2\pi i \mathbb{Z})^m$ by Theorem \ref{thm:convperiod} and Lemma \ref{intperiod}. This implies that the equations \eqref{condperiod2} are also satisfied on $\Pi_R^{-1} \big( \Pi_R( [V]) \big)\cap E$. In order to see that the equations \eqref{condperiod1} and \eqref{condperiod2} are sufficient to describe $\Pi_R^{-1} \big( \Pi_R( [V]) \big)\cap E$, we need to show that the rank of the entire system of equations is $2mg$. Clearly, the system \eqref{condperiod1} is linearly independent from the system \eqref{condperiod2} and the two systems have the same rank. Now, the system \eqref{condperiod1} describes the space of tropical curves $(G,\ell')$ for which the morphism $\pi_R$ has the same combinatorial type as $\pi_\phi$. Since $\pi_\phi$ is regular by assumption, the latter space has codimension $mg$. The statement follows. \end{proof}
\begin{proof}[Proof of Theorem \ref{thmMik}.] Recall that $2g+n>2$. Thanks to the above proposition, we can construct a sequence $\left\lbrace (\ell_t,\theta_t) \right\rbrace_{ t>1} \subset \Pi_R^{-1} \big( \Pi_R ( [ V ] ) \big) $ converging to the point $\left[ V \right] \in E$. Up to a re-parametrisation, we can assume that $\ell_t \sim 2\pi^2 /\big( \log(t) \cdot \ell\big)$ where $\ell$ is the length function of $V$. In other words, the sequence $S_t : = \Upsilon_{G} (\ell_t,\theta_t) $ converges to $V$ in the sense of Definition \ref{comptropconv}. Since $\left\lbrace (\ell_t,\theta_t) \right\rbrace_{ t>1} \subset \Pi_R^{-1} \big( \Pi_R ( [ V ] ) \big) $, the sequence of period matrices $ \mathcal{P}_{R,S_t}$ is constant and integer, thanks to Lemma \ref{lem:convperiodinteger}. Applying Proposition \ref{approxcomptrop}, we deduce that $H_t \circ \phi_R (S_t) $ converges to $\phi(V)$.
To see that we can choose the sequence $ \left\lbrace \theta_t \right\rbrace_{t>1}$ to be constant, observe that the arguments of the above proof imply that $ \Pi_R^{-1} \big( \Pi_R ( [ V ] ) \big) \cap \left\lbrace \theta = \text{constant} \right\rbrace \subset K$ is analytic, smooth and of real codimension $mg+(3g-3+n)$ in a neighbourhood of $E$. In particular, the latter level set has strictly positive dimension. The statement follows. \end{proof}
\end{document} |
\begin{document}
\title[ \uppercase{Approximation by double second type delayed arithmetic mean}]{ \uppercase{Approximation by double second type delayed arithmetic mean of periodic functions in $H_{p}^{(\omega, \omega)}$ space}} \author[Xh. Z. Krasniqi]{Xh. Z. Krasniqi} \address{Faculty of Education \\ \indent University of Prishtina "Hasan Prishtina" \\ \indent Avenue "Mother Theresa" 5, 10000 Prishtina \\ \indent Kosovo} \author[P. K\'orus]{P. K\'orus} \address{Institute of Applied Pedagogy \\ \indent Juh\'asz Gyula Faculty of Education \\ \indent University of Szeged \\ \indent Hattyas utca 10, H-6725 Szeged \\ \indent Hungary} \author[B. Szal]{B. Szal} \address{Faculty of Mathematics, Computer Science and Econometrics \\ \indent University of Zielona G\'{o}ra \\ \indent ul. Szafrana 4a, 65-516 Zielona G\'{o}ra \\ \indent Poland} \date{}
\begin{abstract} In this paper, we give a degree of approximation of a function in the space $H_{p}^{(\omega, \omega)}$ by using the second type double delayed arithmetic means of its Fourier series. Such degree of approximation is expressed via two functions of moduli of continuity type. To obtain one more general result, we used the even-type double delayed arithmetic means of Fourier series as well. \end{abstract}
\maketitle
\section{Concise historical background and motivation}
On one hand, the approximation of $2\pi$-periodic and integrable functions by their Fourier series in the H\"{o}lder metric has been studied widely and consistently in many papers. Das et al. studied the degree of approximation of functions by matrix means of their Fourier series in the generalized H\"{o}lder metric \cite{DGR}, generalizing many previously known results. Again, Das et al. \cite{DNR} studied the rate of convergence problem of Fourier series in a new Banach space of functions conceived as a generalization of the spaces introduced by Pr\"{o}ssdorf \cite{SP} and Leindler \cite{L1}. Afterwards, Nayak et al. \cite{NDR} studied the rate of convergence problem of the Fourier series by the delayed arithmetic mean in the generalized H\"{o}lder metric space introduced earlier in \cite{DNR}, obtaining a sharper estimate of Jackson's order, which was the main objective of their result. Moreover, De\v{g}er \cite{D} determined the degree of approximation of functions by matrix means of their Fourier series in the same space of functions introduced in \cite{DNR}. In particular, he extended some results of Leindler \cite{L} and some other results by weakening the monotonicity conditions in the results obtained by Singh and Sonker \cite{SS} for some classes of numerical sequences introduced by Mohapatra and Szal \cite{MS}. Leindler's results obtained in \cite{L} are generalized in \cite{XhK} by the first author of the present paper, for functions from a Banach space, mainly using the generalized N\"{o}rlund and Riesz means. Very recently, Kim \cite{KIM1} presented a generalization of a particular case of a result obtained previously in \cite{NDR}. Kim's result treats the degree of approximation of functions in the same generalized H\"{o}lder metric, but uses the so-called even-type delayed arithmetic mean of Fourier series.
On the other hand, the interested reader can find results on the approximation of bivariate integrable functions, $2\pi$-periodic in each variable, by their double Fourier series in the H\"{o}lder metric in \cite{UD2}, \cite{XhK2} and \cite{NH}. In all the results reported in these papers, the degree of approximation of functions by various means of their double Fourier series involves a quantity of the form $\mathcal{O}{(\log n)}$. Such a quantity produces a degree of approximation which is not of Jackson's order. This weakness motivated us to consider some means of double Fourier series which overcome it. Hence, the aim of this paper is to investigate the degree of approximation of bivariate integrable functions, $2\pi$-periodic in each variable, by their double Fourier series in the generalized H\"{o}lder metric, removing the quantities of the form $\mathcal{O}{(\log n)}$ and obtaining a degree of approximation of Jackson's order.
Closing this section, we fix some notation: when comparing two quantities $u$ and $v>0$, throughout this paper we write $u=\mathcal{O}(v)$ whenever there exists a positive constant $c$ such that $u\leq cv$.
\section{Notation and preliminaries}
By $L_p(T^2)$, $p\geq 1$, we denote the space of all functions $f(x,y)$ whose $p$-th power is integrable on $T^2:=(0,2\pi)\times (0,2\pi)$, endowed with the norm \begin{equation*}
\|f\|_p:=\left(\frac{1}{(2\pi)^2}\int_{0}^{2\pi}\int_{0}^{2\pi}|f(x,y)|^p dxdy\right)^{1/p}. \end{equation*} Two functions $\omega _i$ $(i=1,2)$ are called moduli of continuity if they are positive, non-decreasing and continuous on $[0,2\pi]$ with the properties \begin{enumerate} \item[(i)] $\omega _i(0)=0$,
\item[(ii)] $\omega _i(\delta_1+\delta_2)\leq \omega _i(\delta_1)+\omega _i(\delta_2)$,
\item[(iii)] $\omega _{i}(\lambda \delta )\leq (\lambda +1)\omega _{i}(\delta )$, $\lambda \geq 0$. \end{enumerate} We define the space $H_{p}^{(\omega_1, \omega_2)}$ by \begin{equation*} H_{p}^{(\omega_1, \omega_2 )}:=\left\{f\in L^{p}(T^2), p\geq 1: A(f;\omega_1, \omega_2 )<\infty \right\}, \end{equation*} where \begin{equation*}
A(f;\omega_1, \omega_2 ):=\sup_{t_1\neq 0,\,\,t_2\neq 0 }\frac{\|f(x +t_1,y
+t_2)-f(x,y)\|_{p}}{\omega_1 (|t_1|)+\omega_2 (|t_2|)} \end{equation*} and the norm in the space $H_{p}^{(\omega_1, \omega_2)}$ is defined by \begin{equation*}
\|f\|_{p}^{(\omega_1, \omega_2)}:=\|f\|_{p}+A(f;\omega_1, \omega_2 ). \end{equation*} If $\omega_1$, $\omega_2$, $v_1$ and $v_2$ are moduli of continuity such that the two-variable function $\frac{\omega_1 (t_1)+\omega_2 (t_2)}{ v_1(t_1)+v_2(t_2)}$ is bounded on $T^2$, say by $M$, then it is easy to see that \begin{equation*}
\|f\|_{p}^{(v_1,v_2)}\leq \max \left(1,M\right)\|f\|_{p}^{(\omega_1, \omega_2)}, \end{equation*} which shows that in this case, for the given spaces $H_{p}^{(\omega_1, \omega_2)}$ and $H_{p}^{(v_1, v_2)}$ we have \begin{equation*} H_{p}^{(\omega_1, \omega_2)}\subseteq H_{p}^{(v_1, v_2)}\subseteq L_p \quad (p\geq 1). \end{equation*}
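To illustrate this embedding with a concrete example, consider the power moduli $\omega_i(t)=t^{\alpha_i}$ and $v_i(t)=t^{\gamma_i}$ with $0<\gamma_i\leq \alpha_i\leq 1$ $(i=1,2)$. Then on $T^2$
\begin{equation*}
\frac{t_1^{\alpha_1}+t_2^{\alpha_2}}{t_1^{\gamma_1}+t_2^{\gamma_2}}\leq t_1^{\alpha_1-\gamma_1}+t_2^{\alpha_2-\gamma_2}\leq (2\pi)^{\alpha_1-\gamma_1}+(2\pi)^{\alpha_2-\gamma_2},
\end{equation*}
so the quotient is bounded and $H_{p}^{(\omega_1, \omega_2)}\subseteq H_{p}^{(v_1, v_2)}$ holds in this case.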
We write \begin{equation*} \Omega_p(\delta_1,\delta_2;f):=\sup_{0\leq h_1\leq \delta_1; 0\leq h_2\leq
\delta_2}\|f(x+h_1,y+h_2)-f(x,y)\|_{p} \end{equation*} for the integral modulus of continuity of $f(x,y)$, and whenever \begin{equation*}
\|f(x +t_1,y +t_2)-f(x,y)\|_{p}=\mathcal{O}\left(\omega_1 (|t_1|)+\omega_2
(|t_2|)\right) \end{equation*} we write $f\in \text{Lip}(\omega_1,\omega_2,p)$, that is \begin{equation*}
\text{Lip}(\omega_1,\omega_2,p)=\left\{f\in L_p(T^2): \|f(x +t_1,y
+t_2)-f(x,y)\|_{p}=\mathcal{O}\left(\omega_1 (|t_1|)+\omega_2
(|t_2|)\right)\right\}. \end{equation*} Clearly, for $\omega_1(t_1)=\mathcal{O}\left(t_1^{\alpha}\right)$ and $ \omega_2(t_2)=\mathcal{O}\left(t_2^{\beta}\right)$, $0<\alpha \leq 1$, $ 0<\beta \leq 1$, the class $\text{Lip}(\omega_1,\omega_2,p)$ reduces to the class $\text{Lip}(\alpha,\beta,p)$, that is \begin{equation*}
\text{Lip}(\alpha,\beta, p)=\left\{f\in L^p(T^2): \|f(x +t_1,y
+t_2)-f(x,y)\|_{p}=\mathcal{O}\left(t_1^{\alpha}\right)+\mathcal{O} \left(t_2^{\beta}\right)\right\}. \end{equation*} Then for $1 \geq \alpha \geq \gamma \geq 0$ and $1 \geq \beta \geq \delta \geq 0$, by noting $\frac{t_1^{\alpha}+t_2^{\beta}}{t_1^{\gamma}+t_2^{\delta} }$ is bounded on $T^2$, we have \begin{equation*} \text{Lip}(\alpha,\beta,p)\subseteq \text{Lip}(\gamma,\delta,p) \subseteq L_p \quad (p\geq 1). \end{equation*} If $f(x,y)$ is a continuous function periodic in both variables with period $ 2\pi$ and $p=\infty$, then the class $\text{Lip}(\alpha,\beta,p)$ reduces to the H\"older class $\text{H}_{(\alpha,\beta)}$ (also called Lipschitz class), that is \begin{equation*}
\text{H}_{(\alpha,\beta)}=\left\{f: |f(x +t_1,y +t_2)-f(x,y)|=\mathcal{O} \left(t_1^{\alpha}\right)+\mathcal{O}\left(t_2^{\beta}\right)\right\}. \end{equation*} It can be verified that $\text{H}_{(\alpha,\beta)}$ is a Banach space (see \cite
{UD2}) with the norm $\|f \|_{\alpha,\beta}$ defined by \begin{equation*}
\|f \|_{\alpha,\beta}=\|f\|_{C}+\sup_{t_1\neq 0,\,\,t_2\neq 0 }\frac{|f(x
+t_1,y +t_2)-f(x,y)|}{|t_1|^{\alpha}+|t_2|^{\beta}}, \end{equation*} where \begin{equation*}
\|f\|_{C}=\sup_{(x,y)\in T^2}|f(x,y)|. \end{equation*}
Let $f(x,y)\in L^p(T^2)$ be a $2\pi$-periodic function with respect to each variable, with its Fourier series \begin{align*} & f(x,y)\sim \sum_{k=0}^{\infty}\sum_{\ell=0}^{\infty}\lambda_{k,\ell}(a_{k,\ell} \cos kx \cos \ell y+b_{k,\ell} \sin kx \cos \ell y \\ & \hspace{3cm}+c_{k,\ell} \cos kx \sin \ell y+d_{k,\ell} \sin kx \sin \ell y) \end{align*} at the point $(x,y)$, where \begin{align*} \lambda _{k,\ell} & = \left\{ \begin{array}{rcl} \frac{1}{4}, & \mbox{if} & k=\ell=0, \\ \frac{1}{2}, & \mbox{if} & k=0, \,\ell>0;\,\,\,\mbox{or} \,\,\, k>0, \,\ell=0, \\ 1, & \mbox{if} & k, \,\ell>0; \end{array} \right. \\ a_{k,\ell} & = \frac{1}{\pi^{2}}\int_{-\pi}^{\pi}\int_{-\pi}^{\pi}f(u, v) \cos ku \cos \ell v \,du dv, \\ b_{k,\ell} & = \frac{1}{\pi^{2}}\int_{-\pi}^{\pi}\int_{-\pi}^{\pi}f(u, v) \sin ku \cos \ell v \,du dv, \\ c_{k,\ell} & = \frac{1}{\pi^{2}}\int_{-\pi}^{\pi}\int_{-\pi}^{\pi}f(u, v) \cos ku \sin \ell v \,du dv, \\ d_{k,\ell} & = \frac{1}{\pi^{2}}\int_{-\pi}^{\pi}\int_{-\pi}^{\pi}f(u, v)\sin ku \sin \ell v\, du dv,\quad k,\,\ell\in \mathbb{N}\cup\{0\}, \end{align*} and whose partial sums are \begin{align*} & S_{n,m}(f;x,y)= \sum_{k=0}^{n}\sum_{\ell=0}^{m}\lambda_{k,\ell}(a_{k,\ell} \cos kx \cos \ell y+b_{k,\ell} \sin kx \cos \ell y \\ & \hspace{3.8cm}+c_{k,\ell} \cos kx \sin \ell y+d_{k,\ell} \sin kx \sin \ell y),\quad n,m\geq 0 . \end{align*}
To state our results, we recall some further notation and notions.
Let $\sum_{k=0}^{\infty}\sum_{\ell=0}^{\infty}u_{k,\ell}$ be an infinite double series with its sequence of arithmetic means $\{\sigma_{m,n}\}$, where \begin{equation*} \sigma_{m,n}=\frac{1}{(m+1)(n+1)}\sum_{k=0}^{m}\sum_{\ell=0}^{n}U_{k,\ell} \end{equation*} and $U_{k,\ell}:=\sum_{i=0}^{k}\sum_{j=0}^{\ell}u_{i,j}.$
We define the \textit{Double Delayed Arithmetic Mean} $\sigma_{m,k;n,\ell}$ by (see \cite{FW}): \begin{align} \label{eq1} \begin{split} \sigma_{m,k;n,\ell}:=&\left(1+\frac{m}{k}\right)\left(1+\frac{n}{\ell} \right)\sigma_{m+k-1,n+\ell-1}-\left(1+\frac{m}{k}\right)\frac{n}{\ell} \sigma_{m+k-1,n-1} \\ & -\frac{m}{k}\left(1+\frac{n}{\ell}\right)\sigma_{m-1,n+\ell-1}+\frac{mn}{ k\ell}\sigma_{m-1,n-1}, \end{split} \end{align} where $k$ and $\ell$ are positive integers.
If $k$ tends to $\infty$ with $m$ in such a way that $\frac{m}{k}$ remains bounded, and $\ell$ tends to $\infty$ with $n$ in such a way that $\frac{n}{\ell}$ remains bounded, then $\sigma_{m,k;n,\ell}$ defines a method of summability which is at least as strong as the well-known $(C,1,1)$ summability. This means that if $\sigma_{m,n}\to \mu$, then $\sigma_{m,k;n,\ell}\to \mu$ as well. This important fact follows from (\ref{eq1}) if we set $\sigma_{m,n}=\mu+\xi_{m,n}$, where $\xi_{m,n}\to 0$ as $m,n\to \infty$. We expect this mean to be useful in applications, particularly in the approximation of $2\pi$-periodic functions of two variables.
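The regularity claim can also be checked by a direct computation: the four coefficients in (\ref{eq1}) sum to one, since
\begin{equation*}
\left(1+\frac{m}{k}\right)\left(1+\frac{n}{\ell}\right)-\left(1+\frac{m}{k}\right)\frac{n}{\ell}-\frac{m}{k}\left(1+\frac{n}{\ell}\right)+\frac{mn}{k\ell}=1.
\end{equation*}
Hence, writing $\sigma_{m,n}=\mu+\xi_{m,n}$, the mean $\sigma_{m,k;n,\ell}$ equals $\mu$ plus the same combination of the $\xi$'s, which tends to zero because each coefficient stays bounded while $\xi_{m,n}\to 0$.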
We note that for $k=\ell=1$ we obtain $\sigma_{m,1;n,1}=U_{m,n}$, while for $m=n=0$ we get $\sigma_{0,k;0,\ell}=\sigma_{k-1,\ell-1}$. Moreover, for $k=m$ and $\ell=n$, we get \begin{align} \label{eq2} \sigma_{m,m;n,n}=4\sigma_{2m-1,2n-1}-2\sigma_{2m-1,n-1}-2\sigma_{m-1,2n-1}+ \sigma_{m-1,n-1}, \end{align} which we call the \textit{first type} Double Delayed Arithmetic Mean.
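The first of these identities is an instance of inclusion--exclusion: by (\ref{eq1}) with $k=\ell=1$,
\begin{equation*}
\sigma_{m,1;n,1}=(m+1)(n+1)\sigma_{m,n}-(m+1)n\,\sigma_{m,n-1}-m(n+1)\sigma_{m-1,n}+mn\,\sigma_{m-1,n-1}=U_{m,n},
\end{equation*}
since each product such as $(m+1)(n+1)\sigma_{m,n}$ is the corresponding rectangular sum of the $U_{k,\ell}$'s.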
In this paper, however, we take $k=2m$ and $\ell=2n$ in the Double Delayed Arithmetic Mean $\sigma_{m,k;n,\ell}$ to obtain \begin{align} \label{eq3} \sigma_{m,2m;n,2n}=\frac{1}{4}\left(9\sigma_{3m-1,3n-1}-3\sigma_{3m-1,n-1}-3 \sigma_{m-1,3n-1}+\sigma_{m-1,n-1}\right). \end{align} We call these particular sums the \textit{second type} Double Delayed Arithmetic Mean.
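The coefficients in (\ref{eq3}) arise from substituting $k=2m$ and $\ell=2n$ into (\ref{eq1}):
\begin{equation*}
\left(1+\frac{m}{2m}\right)\left(1+\frac{n}{2n}\right)=\frac{9}{4},\qquad \left(1+\frac{m}{2m}\right)\frac{n}{2n}=\frac{m}{2m}\left(1+\frac{n}{2n}\right)=\frac{3}{4},\qquad \frac{mn}{2m\cdot 2n}=\frac{1}{4}.
\end{equation*}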
By $\sigma _{m,n}(f;x,y)$ and $\sigma _{m,2m;n,2n}(f;x,y)$ we denote the arithmetic mean and the second type Double Delayed Arithmetic Mean for $S_{k,\ell }(f;x,y)$, respectively. It is well-known (see e.g. \cite[page 4]{KIM}) that the double Fej\'{e}r kernel is \begin{equation*} F_{m,n}(t_{1},t_{2}):=\frac{4}{(m+1)(n+1)}\left( \frac{\sin \frac{(m+1)t_{1}}{2}\sin \frac{(n+1)t_{2}}{2}}{4\sin \frac{t_{1}}{2}\sin \frac{t_{2}}{2}}\right) ^{2}, \end{equation*} which we will use in the equivalent form \begin{equation}\label{eq4} F_{m,n}(t_{1},t_{2}):=\frac{1}{(m+1)(n+1)}\frac{(1-\cos (m+1)t_{1})(1-\cos (n+1)t_{2})}{\left( 4\sin \frac{t_{1}}{2}\sin \frac{t_{2}}{2}\right) ^{2}}, \end{equation} while the arithmetic mean $\sigma _{m,n}(f;x,y)$ is \begin{equation}\label{eq5} \sigma _{m,n}(f;x,y)=\frac{1}{\pi ^{2}}\int_{0}^{\pi }\int_{0}^{\pi }h_{x,y}(t_{1},t_{2})F_{m,n}(t_{1},t_{2})\,dt_{1}dt_{2}, \end{equation} where \begin{equation*} h_{x,y}(t_{1},t_{2}):=f(x+t_{1},y+t_{2})+f(x-t_{1},y+t_{2})+f(x+t_{1},y-t_{2})+f(x-t_{1},y-t_{2}). \end{equation*} Furthermore, using (\ref{eq4}) successively, we have \begin{align*} F_{3m-1,3n-1}(t_{1},t_{2})& =\frac{(1-\cos (3mt_{1}))(1-\cos (3nt_{2}))}{9mn\left( 4\sin \frac{t_{1}}{2}\sin \frac{t_{2}}{2}\right) ^{2}}, \\ F_{3m-1,n-1}(t_{1},t_{2})& =\frac{(1-\cos (3mt_{1}))(1-\cos (nt_{2}))}{3mn\left( 4\sin \frac{t_{1}}{2}\sin \frac{t_{2}}{2}\right) ^{2}}, \\ F_{m-1,3n-1}(t_{1},t_{2})& =\frac{(1-\cos (mt_{1}))(1-\cos (3nt_{2}))}{3mn\left( 4\sin \frac{t_{1}}{2}\sin \frac{t_{2}}{2}\right) ^{2}}, \\ F_{m-1,n-1}(t_{1},t_{2})& =\frac{(1-\cos (mt_{1}))(1-\cos (nt_{2}))}{mn\left( 4\sin \frac{t_{1}}{2}\sin \frac{t_{2}}{2}\right) ^{2}}. \end{align*} Thus, we can write \begin{equation} \begin{split} F_{m,2m;n,2n}(t_{1},t_{2}):=& \frac{1}{4}\left( 9F_{3m-1,3n-1}(t_{1},t_{2})-3F_{3m-1,n-1}(t_{1},t_{2})\right. \\ & \quad \left. 
-3F_{m-1,3n-1}(t_{1},t_{2})+F_{m-1,n-1}(t_{1},t_{2})\right) \\ =& \frac{1}{4mn\left( 4\sin \frac{t_{1}}{2}\sin \frac{t_{2}}{2}\right) ^{2}} \left( (1-\cos (3mt_{1}))(1-\cos (3nt_{2}))\right. \\ & \left. -(1-\cos (3mt_{1}))(1-\cos (nt_{2}))-(1-\cos (mt_{1}))(1-\cos (3nt_{2}))\right. \\ & \left. +(1-\cos (mt_{1}))(1-\cos (nt_{2}))\right) \\ =& \frac{1}{4mn\left( 4\sin \frac{t_{1}}{2}\sin \frac{t_{2}}{2}\right) ^{2}} \left( \cos (mt_{1})-\cos (3mt_{1})\right) \left( \cos (nt_{2})-\cos (3nt_{2})\right) \\ =& \frac{S(t_{1},t_{2})}{mn\left( 4\sin \frac{t_{1}}{2}\sin \frac{t_{2}}{2} \right) ^{2}}, \end{split} \label{eq6} \end{equation} where \begin{equation*} S(t_{1},t_{2}):=\sin 2mt_{1}\sin mt_{1}\sin 2nt_{2}\sin nt_{2}. \end{equation*} Therefore, using (\ref{eq5}) and (\ref{eq6}), we get the second type Double Delayed Arithmetic Mean as \begin{equation*} \sigma_{m,2m;n,2n}(f;x,y)=\frac{1}{mn\pi^2}\int_{0}^{\pi}\int_{0}^{ \pi}h_{x,y}(t_1,t_2)\frac{S(t_1,t_2)}{\left(4\sin \frac{t_1}{2}\sin \frac{t_2 }{2}\right)^2}dt_1dt_2. \end{equation*}
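The factorization in the penultimate step of (\ref{eq6}) uses the product-to-sum identity $\cos A-\cos B=2\sin \frac{A+B}{2}\sin \frac{B-A}{2}$, which with $A=mt_{1}$, $B=3mt_{1}$ (and analogously in $t_{2}$) gives
\begin{equation*}
\cos mt_{1}-\cos 3mt_{1}=2\sin 2mt_{1}\sin mt_{1},\qquad \cos nt_{2}-\cos 3nt_{2}=2\sin 2nt_{2}\sin nt_{2};
\end{equation*}
the two resulting factors of $2$ cancel the factor $\frac{1}{4}$ in front.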
Throughout this paper we put $h_1:=h_1(m)=\frac{\pi}{m}$, $ h_2:=h_2(n)=\frac{\pi}{n}$, \begin{align*} \phi_{x,y}(t_1,t_2) & := \frac{1}{4} ( f(x+t_1,y+t_2) + f(x-t_1,y+t_2) \\ & \hspace{1cm} + f(x+t_1,y-t_2) + f(x-t_1,y-t_2) - 4 f(x,y) ), \\ H_{x,z_1,y,z_2}(t_1,t_2) & :=\phi_{x+z_1,y+z_2}(t_1,t_2)-\phi_{x,y}(t_1,t_2), \end{align*} and \begin{align*} D_{m,n}(x,y):=\sigma_{m,2m;n,2n}(f;x,y)-f(x,y). \end{align*}
In order to achieve our aim, we need some auxiliary lemmas, given in the next section.
\section{Auxiliary Results}
\begin{lemma} (The generalized Minkowski inequality \cite[p. 21]{SMN})\label{le1} For a function $g(u,v)$ given on a measurable set $E:=E_{1}\times E_{2}\subset \mathbb{R}^{n}$, where $u=(x_{1},x_{2},\dots ,x_{m})\in E_{1}$ and $v=(x_{m+1},x_{m+2},\dots ,x_{n})\in E_{2}$, the following inequality holds: \begin{equation*} \left( \int_{E_{1}}\left\vert \int_{E_{2}}g(u,v)\,dv\right\vert ^{p}du\right) ^{\frac{1}{p}}\leq \int_{E_{2}}\left( \int_{E_{1}}\left\vert g(u,v)\right\vert ^{p}du\right) ^{\frac{1}{p}}dv, \end{equation*} for those values of $p\geq 1$ for which the right-hand side of this inequality is finite. \end{lemma}
\begin{lemma} \label{le2} Let $\omega_1$, $\omega_2$, $v_1$ and $v_2$ be moduli of continuity so that $\frac{\omega_1 (t_1)}{v_1(t_1)}$ is non-decreasing in $ t_1$, $\frac{\omega_2 (t_2)}{v_2(t_2)}$ is non-decreasing in $t_2$, and $ f\in H_{p}^{(\omega_1, \omega_2)}$. Then for $0< t_1\leq \pi$, $0< t_2\leq \pi$, and $p\geq 1$, \begin{enumerate}
\item[(i)] $\|\phi_{x,y}(t_1,t_2)\|_p=\mathcal{O}\left(\omega_1 (t_1)+\omega_2 (t_2)\right)$,
\item[(ii)] $\|H_{x,z_1,y,z_2}(t_1,t_2)\|_p=\mathcal{O}\left(\omega_1 (t_1)+\omega_2 (t_2)\right)$, \newline
$\|H_{x,z_1,y,z_2}(t_1,t_2)\|_p=\mathcal{O}\left(\omega_1 (|z_1|)+\omega_2
(|z_2|)\right)$, \newline
$\|H_{x,z_1,y,z_2}(t_1+h_1,t_2)\|_p=\mathcal{O}\left(\omega_1 (t_1+h_1)+\omega_2 (t_2)\right)$, \newline
$\|H_{x,z_1,y,z_2}(t_1+h_1,t_2)\|_p=\mathcal{O}\left(\omega_1
(|z_1|)+\omega_2 (|z_2|)\right)$, \newline
$\|H_{x,z_1,y,z_2}(t_1,t_2+h_2)\|_p=\mathcal{O}\left(\omega_1 (t_1)+\omega_2 (t_2+h_2)\right)$, \newline
$\|H_{x,z_1,y,z_2}(t_1,t_2+h_2)\|_p=\mathcal{O}\left(\omega_1
(|z_1|)+\omega_2 (|z_2|)\right)$, \newline
$\|H_{x,z_1,y,z_2}(t_1+h_1,t_2+h_2)\|_p=\mathcal{O}\left(\omega_1 (t_1+h_1)+\omega_2 (t_2+h_2)\right)$, \newline
$\|H_{x,z_1,y,z_2}(t_1+h_1,t_2+h_2)\|_p=\mathcal{O}\left(\omega_1
(|z_1|)+\omega_2 (|z_2|)\right)$. \end{enumerate} Moreover, if $\omega_1 = \omega_2$ and $v_1=v_2$, then \begin{enumerate}
\item[(iii)] $\|H_{x,z_1,y,z_2}(t_1,t_2)\|_p=\mathcal{O}\left( \left( v_1 (|z_1|) + v_1 (|z_2|)\right) \left( \frac{\omega_1 (t_1)}{v_1 (t_1)} + \frac{\omega_1 (t_2)}{v_1 (t_2)}\right) \right)$, \newline
$\|H_{x,z_1,y,z_2}(t_1+h_1,t_2)\|_p=\mathcal{O}\left( \left( v_1 (|z_1|) + v_1 (|z_2|)\right) \left( \frac{\omega_1 (t_1+h_1)}{v_1 (t_1+h_1)} + \frac{\omega_1 (t_2)}{v_1 (t_2)}\right) \right)$, \newline
$\|H_{x,z_1,y,z_2}(t_1,t_2+h_2)\|_p=\mathcal{O}\left( \left( v_1 (|z_1|) + v_1 (|z_2|)\right) \left( \frac{\omega_1 (t_1)}{v_1 (t_1)} + \frac{\omega_1 (t_2+h_2)}{v_1 (t_2+h_2)}\right) \right)$, \newline
$\|H_{x,z_1,y,z_2}(t_1+h_1,t_2+h_2)\|_p=\mathcal{O}\left( \left( v_1 (|z_1|) + v_1 (|z_2|)\right) \left( \frac{\omega_1 (t_1+h_1)}{v_1 (t_1+h_1)} + \frac{\omega_1 (t_2+h_2)}{v_1 (t_2+h_2)}\right) \right)$.
\item[(iv)] $\|H_{x,z_1,y,z_2}(t_1,t_2)-H_{x,z_1,y,z_2}(t_1+h_1,t_2)\|_p \newline
=\mathcal{O}\left( \left( v_1 (|z_1|) + v_1 (|z_2|)\right) \left( \frac{\omega_1 (h_1)}{v_1 (h_1)} + \frac{\omega_1 (h_2)}{v_1 (h_2)}\right) \right)$, \newline
$\|H_{x,z_1,y,z_2}(t_1,t_2)-H_{x,z_1,y,z_2}(t_1,t_2+h_2)\|_p\newline
=\mathcal{O}\left( \left( v_1 (|z_1|) + v_1 (|z_2|)\right) \left( \frac{\omega_1 (h_1)}{v_1 (h_1)} + \frac{\omega_1 (h_2)}{v_1 (h_2)}\right) \right)$, \newline
$\|H_{x,z_1,y,z_2}(t_1,t_2+h_2)-H_{x,z_1,y,z_2}(t_1+h_1,t_2+h_2)\|_p\newline
=\mathcal{O}\left( \left( v_1 (|z_1|) + v_1 (|z_2|)\right) \left( \frac{\omega_1 (h_1)}{v_1 (h_1)} + \frac{\omega_1 (h_2)}{v_1 (h_2)}\right) \right)$, \newline
$\|H_{x,z_1,y,z_2}(t_1+h_1,t_2)-H_{x,z_1,y,z_2}(t_1+h_1,t_2+h_2)\|_p\newline
=\mathcal{O}\left( \left( v_1 (|z_1|) + v_1 (|z_2|)\right) \left( \frac{\omega_1 (h_1)}{v_1 (h_1)} + \frac{\omega_1 (h_2)}{v_1 (h_2)}\right) \right)$, \newline
$\|H_{x,z_1,y,z_2}(t_1,t_2)-H_{x,z_1,y,z_2}(t_1+h_1,t_2) - H_{x,z_1,y,z_2}(t_1,t_2+h_2) + H_{x,z_1,y,z_2}(t_1+h_1,t_2+h_2) \|_p= \mathcal{O}\left( \left( v_1 (|z_1|) + v_1 (|z_2|)\right) \left( \frac{\omega_1 (h_1)}{v_1 (h_1)} + \frac{\omega_1 (h_2)}{v_1 (h_2)}\right) \right)$. \end{enumerate} \end{lemma}
\begin{proof} Part (i). We estimate as follows: \begin{equation*} \begin{split}
\|\phi_{x,y}(t_1,t_2)\|_p&\leq \frac{1}{4}\left\{\|f(x+t_1,y+t_2)-f(x,y)
\|_p+\|f(x,y)-f(x-t_1,y+t_2)\|_p \right. \\ &\quad + \left.
\|f(x,y)-f(x+t_1,y-t_2)\|_p+\|f(x,y)-f(x-t_1,y-t_2)\|_p\right\} \\ &=\mathcal{O}\left(\omega_1 (t_1)+\omega_2 (t_2)\right). \end{split} \end{equation*}
Part (ii). Since $f\in \text{Lip}(\omega_1,\omega_2,p)$, we have \begin{equation*} \begin{split}
\|H_{x,z_1,y,z_2}(t_1,t_2)\|_p&=
\|\phi_{x+z_1,y+z_2}(t_1,t_2)-\phi_{x,y}(t_1,t_2)\|_p \\
&\leq\frac{1}{4}\left\{\|f(x+z_1+t_1,y+z_2+t_2)-f(x+z_1,y+z_2)\|_p \right. \\
&\quad +\|f(x+z_1-t_1,y+z_2+t_2)-f(x+z_1,y+z_2)\|_p \\
&\quad +\|f(x+z_1+t_1,y+z_2-t_2)-f(x+z_1,y+z_2)\|_p \\
&\quad +\|f(x+z_1-t_1,y+z_2-t_2)-f(x+z_1,y+z_2)\|_p \\
&\quad +\|f(x+t_1,y+t_2)-f(x,y)\|_p \\
&\quad +\|f(x-t_1,y+t_2)-f(x,y)\|_p \\
&\quad +\|f(x+t_1,y-t_2)-f(x,y)\|_p \\
&\quad + \left. \|f(x-t_1,y-t_2)-f(x,y)\|_p \right\} \\ &=\mathcal{O}\left(\omega_1 (t_1)+\omega_2 (t_2)\right). \end{split} \end{equation*}
For the second part, a similar reasoning yields \begin{equation} \label{eq9} \begin{split}
\|H_{x,z_1,y,z_2}(t_1,t_2)\|_p&=
\|\phi_{x+z_1,y+z_2}(t_1,t_2)-\phi_{x,y}(t_1,t_2)\|_p \\
&\leq\frac{1}{4}\left\{\|f(x+z_1+t_1,y+z_2+t_2)-f(x+t_1,y+t_2)\|_p \right. \\
&\quad +\|f(x+z_1-t_1,y+z_2+t_2)-f(x-t_1,y+t_2)\|_p \\
&\quad +\|f(x+z_1+t_1,y+z_2-t_2)-f(x+t_1,y-t_2)\|_p \\
&\quad + \left. \|f(x+z_1-t_1,y+z_2-t_2)-f(x-t_1,y-t_2)\|_p\right\} \\
&\quad +\|f(x+z_1,y+z_2)-f(x,y)\|_p \\
& =\mathcal{O}\left(\omega_1 (|z_1|)+\omega_2 (|z_2|)\right). \end{split} \end{equation} The other relations can be verified in the same way; we omit their proofs.
Part (iii). Using part (ii) and the fact that $v_1(t_1)$ is non-decreasing, in the case $t_1\leq |z_1|$ and $t_2\leq |z_2|$ we get \begin{equation*} \begin{split}
\|H_{x,z_1,y,z_2}(t_1,t_2)\|_p&=\mathcal{O}\left(\omega_1 (t_1)+\omega_1 (t_2)\right) \\ &=\mathcal{O}\left( v_1 (t_1) \frac{\omega_1 (t_1)}{v_1 (t_1)} + v_1 (t_2) \frac{\omega_1 (t_2)}{v_1 (t_2)}\right) \\
&=\mathcal{O}\left( v_1 (|z_1|) \frac{\omega_1 (t_1)}{v_1 (t_1)} + v_1
(|z_2|)\frac{\omega_1 (t_2)}{v_1 (t_2)}\right). \end{split} \end{equation*}
Since $\frac{\omega_1 (t_1)}{v_1(t_1)}$ is non-decreasing, by the second estimate in part (ii), for $t_1\geq |z_1|$ and $t_2\geq |z_2|$ we also obtain \begin{equation*} \begin{split}
\|H_{x,z_1,y,z_2}(t_1,t_2)\|_p&=\mathcal{O}\left(\omega_1 (|z_1|)+\omega_1
(|z_2|)\right) \\
&=\mathcal{O}\left(v_1 (|z_1|) \frac{\omega_1 (|z_1|)}{v_1 (|z_1|)} + v_1
(|z_2|)\frac{\omega_1 (|z_2|)}{v_1 (|z_2|)}\right) \\
&=\mathcal{O}\left(v_1 (|z_1|) \frac{\omega_1 (t_1)}{v_1 (t_1)} + v_1 (|z_2|) \frac{\omega_1 (t_2)}{v_1 (t_2)}\right). \end{split} \end{equation*}
For $t_1\leq |z_1|$ and $t_2\geq |z_2|$, consider two possibilities. If $
|z_1| \geq t_2$, then \begin{equation*} \begin{split}
\|H_{x,z_1,y,z_2}(t_1,t_2)\|_p&=\mathcal{O}\left(\omega_1 (t_1)+\omega_1 (t_2)\right) \\ &=\mathcal{O}\left( v_1 (t_1) \frac{\omega_1 (t_1)}{v_1 (t_1)} + v_1 (t_2) \frac{\omega_1 (t_2)}{v_1 (t_2)} \right) \\
& = \mathcal{O}\left( v_1 (|z_1|) \left( \frac{\omega_1 (t_1)}{v_1 (t_1)} + \frac{\omega_1 (t_2)}{v_1 (t_2)}\right) \right). \end{split} \end{equation*}
If $t_2 \geq |z_1|$, then \begin{equation*} \begin{split}
\|H_{x,z_1,y,z_2}(t_1,t_2)\|_p&=\mathcal{O}\left(\omega_1 (|z_1|)+\omega_1
(|z_2|)\right) \\
&=\mathcal{O}\left(v_1 (|z_1|) \frac{\omega_1 (|z_1|)}{v_1 (|z_1|)} + v_1
(|z_2|)\frac{\omega_1 (|z_2|)}{v_1 (|z_2|)}\right) \\
&=\mathcal{O}\left( (v_1 (|z_1|) + v_1 (|z_2|)) \frac{\omega_1 (t_2)}{v_1 (t_2)}\right). \end{split} \end{equation*}
For $t_1\geq |z_1|$ and $t_2\leq |z_2|$, we can get \begin{equation*}
\|H_{x,z_1,y,z_2}(t_1,t_2)\|_p = \mathcal{O}\left( v_1 (|z_2|) \left( \frac{ \omega_1 (t_1)}{v_1 (t_1)} + \frac{\omega_1 (t_2)}{v_1 (t_2)}\right) \right) \end{equation*}
in case of $|z_2| \geq t_1$, and \begin{equation*}
\|H_{x,z_1,y,z_2}(t_1,t_2)\|_p=\mathcal{O}\left( (v_1 (|z_1|) + v_1 (|z_2|)) \frac{\omega_1 (t_1)}{v_1 (t_1)}\right) \end{equation*}
in the case $t_1 \geq |z_2|$, similarly as before. This proves the first inequality in part (iii). The other relations can be verified in the same way.
Part (iv). We have \begin{equation*} \begin{split}
&\|H_{x,z_1,y,z_2}(t_1,t_2)-H_{x,z_1,y,z_2}(t_1+h_1,t_2)\|_p \\
&\leq\|\phi_{x,y}(t_1+h_1,t_2)-\phi_{x,y}(t_1,t_2)\|_p \\
&\quad +\|\phi_{x+z_1,y+z_2}(t_1+h_1,t_2)-\phi_{x+z_1,y+z_2}(t_1,t_2)\|_p \\ &=\mathcal{O}\left(\omega_1 (h_1)\right) = \mathcal{O}\left(\omega_1 (h_1) + \omega_1 (h_2)\right) \end{split} \end{equation*} and \begin{equation*} \begin{split}
&\|H_{x,z_1,y,z_2}(t_1,t_2)-H_{x,z_1,y,z_2}(t_1+h_1,t_2)\|_p \\
&\leq\|H_{x,z_1,y,z_2}(t_1,t_2)\|_p + \|H_{x,z_1,y,z_2}(t_1+h_1,t_2)\|_p \\
&=\mathcal{O}\left(\omega_1 (|z_1|)+\omega_2 (|z_2|)\right), \end{split} \end{equation*} while \begin{equation*} \begin{split}
&\|H_{x,z_1,y,z_2}(t_1,t_2)-H_{x,z_1,y,z_2}(t_1,t_2+h_2)\|_p \\
&\leq\|\phi_{x,y}(t_1,t_2+h_2)-\phi_{x,y}(t_1,t_2)\|_p \\
&\quad +\|\phi_{x+z_1,y+z_2}(t_1,t_2+h_2)-\phi_{x+z_1,y+z_2}(t_1,t_2)\|_p \\ &=\mathcal{O}\left(\omega_1 (h_2)\right) = \mathcal{O}\left(\omega_1 (h_1) + \omega_1 (h_2)\right) \end{split} \end{equation*} and \begin{equation*} \begin{split}
&\|H_{x,z_1,y,z_2}(t_1,t_2)-H_{x,z_1,y,z_2}(t_1,t_2+h_2)\|_p \\
&\leq\|H_{x,z_1,y,z_2}(t_1,t_2)\|_p + \|H_{x,z_1,y,z_2}(t_1,t_2+h_2)\|_p \\
&=\mathcal{O}\left(\omega_1 (|z_1|)+\omega_2 (|z_2|)\right), \end{split} \end{equation*} furthermore \begin{equation*} \begin{split}
\| H_{x,z_1,y,z_2}(t_1,t_2)- &H_{x,z_1,y,z_2}(t_1+h_1,t_2)- H_{x,z_1,y,z_2}(t_1,t_2+h_2)\\
&\quad + H_{x,z_1,y,z_2}(t_1+h_1,t_2+h_2) \|_p \\
&\leq \|H_{x,z_1,y,z_2}(t_1,t_2)-H_{x,z_1,y,z_2}(t_1+h_1,t_2)\|_p \\
&\quad + \| H_{x,z_1,y,z_2}(t_1,t_2+h_2) -
H_{x,z_1,y,z_2}(t_1+h_1,t_2+h_2)\|_p \\ &= \mathcal{O}\left(\omega_1 (h_1) + \omega_1 (h_2)\right) \end{split} \end{equation*} and \begin{equation*} \begin{split}
\|H_{x,z_1,y,z_2}(t_1,t_2)&-H_{x,z_1,y,z_2}(t_1+h_1,t_2) - H_{x,z_1,y,z_2}(t_1,t_2+h_2)\\
&\quad + H_{x,z_1,y,z_2}(t_1+h_1,t_2+h_2)\|_p \\
&\leq\|H_{x,z_1,y,z_2}(t_1,t_2)\|_p + \|H_{x,z_1,y,z_2}(t_1+h_1,t_2)\|_p \\
&\quad + \| H_{x,z_1,y,z_2}(t_1,t_2+h_2)\|_p + \|
H_{x,z_1,y,z_2}(t_1+h_1,t_2+h_2)\|_p \\
&=\mathcal{O}\left(\omega_1 (|z_1|)+\omega_2 (|z_2|)\right). \end{split} \end{equation*}
From analogous estimates, we obtain part (iv) by considering the four cases $h_1\leq |z_1|$ and $h_2\leq |z_2|$; $h_1\geq |z_1|$ and $h_2\geq |z_2|$; $h_1\leq |z_1|$ and $h_2\geq |z_2|$; $h_1\geq |z_1|$ and $h_2\leq |z_2|$, respectively, as in part (iii). We omit the details. \end{proof}
\section{Main Results}
We prove the following statement.
\begin{theorem} \label{the01} Let $\omega $ and $v$ be moduli of continuity so that $\frac{ \omega (t)}{v(t)}$ is non-decreasing in $t$. If $f\in H_{p}^{(\omega ,\omega )}$, $p\geq 1$, then \begin{equation*} \Vert \sigma _{m,2m;n,2n}(f)-f\Vert _{p}^{(v,v)}= \mathcal{O} \left( \frac{\omega (h_{1}) }{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) , \end{equation*} where $h_{1}=\frac{\pi }{m}$, $h_{2}=\frac{\pi }{n}$ for $m,n\in
\mathbb{N}
$. \end{theorem}
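To illustrate the theorem, choose the power moduli $\omega(t)=t^{\alpha}$ and $v(t)=t^{\gamma}$ with $0<\gamma<\alpha\leq 1$ (so that $\frac{\omega(t)}{v(t)}=t^{\alpha-\gamma}$ is non-decreasing). Then Theorem \ref{the01} yields the estimate of Jackson's order
\begin{equation*}
\Vert \sigma _{m,2m;n,2n}(f)-f\Vert _{p}^{(v,v)}=\mathcal{O}\left( \frac{1}{m^{\alpha -\gamma }}+\frac{1}{n^{\alpha -\gamma }}\right) ,
\end{equation*}
with no factor of the form $\mathcal{O}(\log n)$.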
\allowdisplaybreaks{\begin{proof} Using the equality \begin{equation} \int_{0}^{\pi }\int_{0}^{\pi }\frac{\sin 2mt_{1}\sin mt_{1}\sin 2nt_{2}\sin nt_{2}}{\left( 4\sin \frac{t_{1}}{2}\sin \frac{t_{2}}{2}\right) ^{2}} dt_{1}dt_{2}=\frac{mn\pi ^{2}}{4} \label{eq11} \end{equation} we have \begin{equation} \begin{split} D_{m,n}(x,y)& =\sigma _{m,2m;n,2n}(f;x,y)-f(x,y) \\ & =\frac{4}{mn\pi ^{2}}\int_{0}^{\pi }\int_{0}^{\pi }\phi _{x,y}(t_{1},t_{2}) \frac{S(t_{1},t_{2})}{\left( 4\sin \frac{t_{1}}{2}\sin \frac{t_{2}}{2} \right) ^{2}}dt_{1}dt_{2}, \end{split} \label{eq12} \end{equation} where \begin{equation*} \begin{split} \phi _{x,y}(t_{1},t_{2})=& \frac{1}{4}\big[ f(x+t_{1},y+t_{2})+f(x-t_{1},y+t_{2}) \\ & \quad +f(x+t_{1},y-t_{2})+f(x-t_{1},y-t_{2})-4f(x,y)\big] \end{split} \end{equation*} and \begin{equation*} S(t_{1},t_{2})=\sin 2mt_{1}\sin mt_{1}\sin 2nt_{2}\sin nt_{2}. \end{equation*} By definition, we have \begin{equation} \Vert D_{m,n}\Vert _{p}^{(v,v)}:=\Vert D_{m,n}\Vert _{p}+\sup_{z_{1}\neq 0,\,\,z_{2}\neq 0}\frac{\Vert D_{m,n}(x+z_{1},y+z_{2})-D_{m,n}(x,y)\Vert _{p}
}{v(|z_{1}|)+v(|z_{2}|)}. \label{eq13} \end{equation} Now, we can write \begin{equation} \begin{split} & D_{m,n}(x+z_{1},y+z_{2})-D_{m,n}(x,y) \\ & =\frac{4}{mn\pi ^{2}}\int_{0}^{\pi }\int_{0}^{\pi }\left[ \phi _{x+z_{1},y+z_{2}}(t_{1},t_{2})-\phi _{x,y}(t_{1},t_{2})\right] \frac{ S(t_{1},t_{2})}{\left( 4\sin \frac{t_{1}}{2}\sin \frac{t_{2}}{2}\right) ^{2}} dt_{1}dt_{2} \\ & =\frac{4}{mn\pi ^{2}}\int_{0}^{\pi }\int_{0}^{\pi }H_{x,z_{1},y,z_{2}}(t_{1},t_{2})\frac{S(t_{1},t_{2})}{\left( 4\sin \frac{ t_{1}}{2}\sin \frac{t_{2}}{2}\right) ^{2}}dt_{1}dt_{2} \\ & =\frac{4}{mn\pi ^{2}}\int_{0}^{h_{1}} \int_{0}^{h_{2}}H_{x,z_{1},y,z_{2}}(t_{1},t_{2})\frac{S(t_{1},t_{2})}{\left( 4\sin \frac{t_{1}}{2}\sin \frac{t_{2}}{2}\right) ^{2}}dt_{1}dt_{2} \\ & \quad +\frac{4}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{0}^{h_{2}}H_{x,z_{1},y,z_{2}}(t_{1},t_{2})\frac{S(t_{1},t_{2})}{ \left( 4\sin \frac{t_{1}}{2}\sin \frac{t_{2}}{2}\right) ^{2}}dt_{1}dt_{2} \\ & \quad +\frac{4}{mn\pi ^{2}}\int_{0}^{h_{1}}\int_{h_{2}}^{\pi }H_{x,z_{1},y,z_{2}}(t_{1},t_{2})\frac{S(t_{1},t_{2})}{\left( 4\sin \frac{ t_{1}}{2}\sin \frac{t_{2}}{2}\right) ^{2}}dt_{1}dt_{2} \\ & \quad +\frac{4}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }H_{x,z_{1},y,z_{2}}(t_{1},t_{2})\frac{S(t_{1},t_{2})}{\left( 4\sin \frac{ t_{1}}{2}\sin \frac{t_{2}}{2}\right) ^{2}}dt_{1}dt_{2}:=\sum_{r=1}^{4}J_{r}, \end{split} \label{eq14} \end{equation}
Using Lemma \ref{le1} for $p\geq 1$, by the estimate \begin{equation*} \left\vert \frac{S(t_{1},t_{2})}{\left( 4\sin \frac{t_{1}}{2}\sin \frac{t_{2} }{2}\right) ^{2}}\right\vert =\mathcal{O}\left( m^{2}n^{2}\right) ,\quad 0<t_{1}\leq \pi ,0<t_{2}\leq \pi , \end{equation*} and Lemma \ref{le2}, we have \begin{equation} \begin{split} \Vert J_{1}\Vert _{p}& \leq \frac{4}{mn\pi ^{2}}\int_{0}^{h_{1}} \int_{0}^{h_{2}}\Vert H_{x,z_{1},y,z_{2}}(t_{1},t_{2})\Vert _{p}\left\vert \frac{S(t_{1},t_{2})}{\left( 4\sin \frac{t_{1}}{2}\sin \frac{t_{2}}{2} \right) ^{2}}\right\vert dt_{1}dt_{2} \\
& =\mathcal{O}\left( \left( v(|z_{1}|)+v(|z_{2}|)\right) mn\int_{0}^{h_{1}}\int_{0}^{h_{2}}\left( \frac{\omega (t_{1})}{v(t_{1})}+ \frac{\omega (t_{2})}{v(t_{2})}\right) dt_{1}dt_{2}\right) \\
& =\mathcal{O}\left( \left( v(|z_{1}|)+v(|z_{2}|)\right) mn\left( \frac{ \omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) \int_{0}^{h_{1}}\int_{0}^{h_{2}}dt_{1}dt_{2}\right) \\
& =\mathcal{O}\left( \left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) \right) . \end{split} \label{eq15} \end{equation}
The quantity $J_{2}$ can be written as follows: \begin{equation} \begin{split} J_{2}& =\frac{4}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{0}^{h_{2}}H_{x,z_{1},y,z_{2}}(t_{1},t_{2})\sin 2mt_{1}\sin mt_{1}\\% &\quad \times \frac{\sin 2nt_{2}\sin nt_{2}}{\left( 2\sin \frac{t_{2}}{2}\right) ^{2}} \left( \frac{1}{\left( 2\sin \frac{t_{1}}{2}\right) ^{2}}-\frac{1}{t_{1}^{2}} \right) dt_{1}dt_{2}\\ &\quad +\frac{4}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{0}^{h_{2}}H_{x,z_{1},y,z_{2}}(t_{1},t_{2})\frac{S(t_{1},t_{2})}{ t_{1}^{2}\left( 2\sin \frac{t_{2}}{2}\right) ^{2}} dt_{1}dt_{2}:=J_{21}+J_{22}. \end{split} \label{eq16} \end{equation} Since the function \begin{equation*} \frac{1}{\left( 2\sin \frac{t_{1}}{2}\right) ^{2}}-\frac{1}{t_{1}^{2}} \end{equation*} is bounded for $0<t_{1}\leq \pi $ and \begin{equation*} \left\vert \frac{\sin 2nt_{2}\sin nt_{2}}{\left( 2\sin \frac{t_{2}}{2} \right) ^{2}}\right\vert =\mathcal{O}\left( n^{2}\right) ,\quad 0<t_{2}\leq \pi , \end{equation*} then by Lemma \ref{le1} with $p\geq 1$ and Lemma \ref{le2}, we get \begin{equation} \begin{split} \Vert J_{21}\Vert _{p}& =\mathcal{O}\left( \frac{n}{m}\right) \int_{h_{1}}^{\pi }\int_{0}^{h_{2}}\Vert H_{x,z_{1},y,z_{2}}(t_{1},t_{2})\Vert _{p}dt_{1}dt_{2} \\
& =\mathcal{O}\left( \left( v(|z_{1}|)+v(|z_{2}|)\right) \frac{n}{m} \int_{h_{1}}^{\pi }\int_{0}^{h_{2}}\left( \frac{\omega (t_{1})}{v(t_{1})}+ \frac{\omega (t_{2})}{v(t_{2})}\right) dt_{1}dt_{2}\right) \\
& =\mathcal{O}\left( \left( v(|z_{1}|)+v(|z_{2}|)\right) \frac{n}{m}\left( \frac{\omega (\pi )}{v(\pi )}+\frac{\omega (h_{2})}{v(h_{2})}\right) \int_{h_{1}}^{\pi }\int_{0}^{h_{2}}dt_{1}dt_{2}\right) \\
& =\mathcal{O}\left( \frac{1}{m}\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (\pi )}{v(\pi )}+\frac{\omega (h_{2})}{v(h_{2})}\right) \right) . \end{split} \label{eq17} \end{equation} By property (iii) of the moduli of continuity, $\omega (\pi )=\omega \left( m\cdot \frac{\pi }{m}\right) \leq (m+1)\,\omega \left( \frac{\pi }{m}\right) \leq 2m\,\omega \left( \frac{\pi }{m}\right) $, so for every $m\in \mathbb{N}$ \begin{equation*} \frac{1}{m}\omega \left( \pi \right) \leq 2\omega \left( \frac{\pi }{m} \right) , \end{equation*} whence \begin{equation*} \left\Vert J_{21}\right\Vert _{p}=\mathcal{O}\left( \left(
v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{ \omega (h_{2})}{v(h_{2})}\right) \right) . \end{equation*} Further, substituting $t_{1}+h_{1}$ in place of $t_{1}$ in \begin{equation} J_{22}=\frac{4}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{0}^{h_{2}}H_{x,z_{1},y,z_{2}}(t_{1},t_{2})\frac{S(t_{1},t_{2})}{ t_{1}^{2}\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}dt_{1}dt_{2} \label{eq18} \end{equation} we obtain \begin{equation} J_{22}=-\frac{4}{mn\pi ^{2}}\int_{0}^{\pi -h_{1}}\int_{0}^{h_{2}}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\frac{ S(t_{1},t_{2})}{(t_{1}+h_{1})^{2}\left( 2\sin \frac{t_{2}}{2}\right) ^{2}} dt_{1}dt_{2}, \label{eq19} \end{equation} Hence, from (\ref{eq18}) and (\ref{eq19}) we get \begin{equation*} \begin{split} J_{22}& =\frac{2}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{0}^{h_{2}}H_{x,z_{1},y,z_{2}}(t_{1},t_{2})\frac{S(t_{1},t_{2})}{ t_{1}^{2}\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}dt_{1}dt_{2} \\ & \quad -\frac{2}{mn\pi ^{2}}\int_{0}^{\pi -h_{1}}\int_{0}^{h_{2}}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\frac{ S(t_{1},t_{2})}{(t_{1}+h_{1})^{2}\left( 2\sin \frac{t_{2}}{2}\right) ^{2}} dt_{1}dt_{2}\\ & =\frac{2}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{0}^{h_{2}}H_{x,z_{1},y,z_{2}}(t_{1},t_{2})\frac{S(t_{1},t_{2})}{ t_{1}^{2}\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}dt_{1}dt_{2} \\ & \quad -\frac{2}{mn\pi ^{2}}\int_{0}^{h_{1}} \int_{0}^{h_{2}}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\frac{S(t_{1},t_{2})}{ (t_{1}+h_{1})^{2}\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}dt_{1}dt_{2} \\ & \quad -\frac{2}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{0}^{h_{2}}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\frac{S(t_{1},t_{2})}{ (t_{1}+h_{1})^{2}\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}dt_{1}dt_{2} \\ & \quad +\frac{2}{mn\pi ^{2}}\int_{\pi -h_{1}}^{\pi }\int_{0}^{h_{2}}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\frac{S(t_{1},t_{2})}{ (t_{1}+h_{1})^{2}\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}dt_{1}dt_{2} \end{split} \end{equation*}
\begin{equation} \begin{split} & =\frac{2}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{0}^{h_{2}}\left( \frac{ H_{x,z_{1},y,z_{2}}(t_{1},t_{2})}{t_{1}^{2}}-\frac{ H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})}{(t_{1}+h_{1})^{2}}\right) \frac{S(t_{1},t_{2})}{\left( 2\sin \frac{t_{2}}{2} \right) ^{2}}dt_{1}dt_{2} \\ & \quad -\frac{2}{mn\pi ^{2}}\int_{0}^{h_{1}} \int_{0}^{h_{2}}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\frac{S(t_{1},t_{2})}{ (t_{1}+h_{1})^{2}\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}dt_{1}dt_{2} \\ & \quad +\frac{2}{mn\pi ^{2}}\int_{\pi -h_{1}}^{\pi }\int_{0}^{h_{2}}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\frac{S(t_{1},t_{2})}{ (t_{1}+h_{1})^{2}\left( 2\sin \frac{t_{2}}{2}\right) ^{2}} dt_{1}dt_{2}:=\sum_{s=1}^{3}J_{22}^{(s)}. \end{split} \label{eq20} \end{equation} Using the inequalities $\frac{2}{\pi }\beta \leq \sin \beta $ for $\beta \in (0,\pi /2)$, $\sin \beta \leq \beta $ for $\beta \in (0,\pi )$, Lemma \ref {le1} and Lemma \ref{le2}, we have \begin{equation} \begin{split} \Vert J_{22}^{(2)}\Vert _{p}& =\mathcal{O}\left( \frac{1}{mn}\right) \int_{0}^{h_{1}}\int_{0}^{h_{2}}\Vert H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\Vert _{p}\frac{(mt_{1}nt_{2})^{2}}{ (t_{1}+h_{1})^{2}\left( \frac{t_{2}}{\pi }\right) ^{2}}dt_{1}dt_{2} \\ & =\mathcal{O}\left( mn\right)
\int_{0}^{h_{1}}\int_{0}^{h_{2}}\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (t_{1}+h_{1})}{v(t_{1}+h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) dt_{1}dt_{2} \\
& =\mathcal{O}\left( mn\right) (v(|z_{1}|)+v(|z_{2}|))\left( \frac{\omega (2h_{1})}{v(2h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) \int_{0}^{h_{1}}\int_{0}^{h_{2}}dt_{1}dt_{2} \\
& =\mathcal{O}\left( mn\right) (v(|z_{1}|)+v(|z_{2}|))\left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) h_{1}h_{2} \\
& =\mathcal{O}\left( (v(|z_{1}|)+v(|z_{2}|))\left( \frac{\omega (h_{1})}{ v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) \right) . \end{split} \label{eq21} \end{equation}
Moreover, the inequalities $|\sin \beta |\leq 1$ for $\beta \in (\pi -h_{1},\pi )$, $\sin \beta \leq \beta $ for $\beta \in (0,\pi )$, Lemma \ref {le1}, and Lemma \ref{le2} imply \begin{equation} \begin{split} \Vert J_{22}^{(3)}\Vert _{p}& =\mathcal{O}\left( \frac{1}{mn}\right) \int_{\pi -h_{1}}^{\pi }\int_{0}^{h_{2}}\Vert H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\Vert _{p}\frac{(nt_{2})^{2}}{ (t_{1}+h_{1})^{2}\left( \frac{t_{2}}{\pi }\right) ^{2}}dt_{1}dt_{2} \\ & =\mathcal{O}\left( \frac{n}{m}\right) \int_{\pi -h_{1}}^{\pi
}\!\int_{0}^{h_{2}}(v(|z_{1}|)+v(|z_{2}|))\left( \frac{\omega (t_{1}+h_{1})}{ v(t_{1}+h_{1})}+\frac{\omega (t_{2})}{v(t_{2})}\right) \frac{1}{ (t_{1}+h_{1})^{2}}dt_{1}dt_{2} \\
& =\mathcal{O}\left( \frac{n}{m}\right) (v(|z_{1}|)+v(|z_{2}|))\int_{\pi }^{\pi +h_{1}}\int_{0}^{h_{2}}\left( \frac{\omega (\theta _{1})}{v(\theta _{1})}+\frac{\omega (t_{2})}{v(t_{2})}\right) \frac{1}{\theta _{1}^{2}} d\theta _{1}dt_{2} \\
& =\mathcal{O}\left( \frac{n}{m}\right) (v(|z_{1}|)+v(|z_{2}|))\left( \frac{ \omega (\pi +h_{1})}{v(\pi )}+\frac{\omega (h_{2})}{v(h_{2})}\right) \int_{\pi }^{\pi +h_{1}}\int_{0}^{h_{2}}\frac{d\theta _{1}dt_{2}}{\theta _{1}^{2}} \\
& =\mathcal{O}\left( \frac{n}{m}\right) (v(|z_{1}|)+v(|z_{2}|))\left( \frac{ \omega (\pi )+\omega (h_{1})}{v(\pi )}+\frac{\omega (h_{2})}{v(h_{2})} \right) \frac{h_{1}h_{2}}{\pi (\pi +h_{1})} \\
& =\mathcal{O}\left( \frac{1}{m^{2}}\right) (v(|z_{1}|)+v(|z_{2}|))\left( \frac{\omega (\pi )}{v(\pi )}+\frac{\omega (h_{2})}{v(h_{2})}\right) \\
& =\mathcal{O}\left( \frac{1}{m}(v(|z_{1}|)+v(|z_{2}|))\left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) \right) . \end{split} \label{eq22} \end{equation} For $J_{22}^{(1)}$ we can write \begin{equation} \begin{split} J_{22}^{(1)}& =\frac{2}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{0}^{h_{2}}\Bigg( \frac{H_{x,z_{1},y,z_{2}}(t_{1},t_{2})}{t_{1}^{2}}-\frac{ H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})}{t_{1}^{2}} \\ & \qquad +\frac{H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})}{t_{1}^{2}}-\frac{ H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})}{(t_{1}+h_{1})^{2}}\Bigg) \\ & \qquad \times \frac{S(t_{1},t_{2})}{\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}dt_{1}dt_{2} \\ & =\frac{2}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{0}^{h_{2}}\left( H_{x,z_{1},y,z_{2}}(t_{1},t_{2})-H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}) \right) \\ & \qquad \times \frac{S(t_{1},t_{2})}{t_{1}^{2}\left( 2\sin \frac{t_{2}}{2} \right) ^{2}}dt_{1}dt_{2} \\ & \quad +\frac{2}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{0}^{h_{2}}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\frac{S(t_{1},t_{2})}{ \left( 2\sin \frac{t_{2}}{2}\right) ^{2}} \\ & \qquad \times \left( \frac{1}{t_{1}^{2}}-\frac{1}{(t_{1}+h_{1})^{2}} \right) dt_{1}dt_{2}:=J_{221}^{(1)}+J_{222}^{(1)}. \end{split} \label{eq23} \end{equation} Then, using Lemma \ref{le1}, and Lemma \ref{le2}, we have \begin{equation} \begin{split} \Vert J_{221}^{(1)}\Vert _{p}& =\mathcal{O}\left( \frac{1}{mn}\right) \int_{h_{1}}^{\pi }\int_{0}^{h_{2}}\left\Vert H_{x,z_{1},y,z_{2}}(t_{1},t_{2})-H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}) \right\Vert _{p}\frac{(nt_{2})^{2}}{t_{1}^{2}\left( 2\frac{t_{2}}{\pi } \right) ^{2}}dt_{1}dt_{2} \\ & =\mathcal{O}\left( \frac{n}{m}\right) \int_{h_{1}}^{\pi
}\int_{0}^{h_{2}}(v(|z_{1}|)+v(|z_{2}|))\left( \frac{\omega (h_{1})}{v(h_{1}) }+\frac{\omega (h_{2})}{v(h_{2})}\right) \frac{dt_{1}dt_{2}}{t_{1}^{2}} \\
& =\mathcal{O}\left( \frac{n}{m}\right) (v(|z_{1}|)+v(|z_{2}|))\left( \frac{ \omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) \frac{h_{2} }{h_{1}} \\
& =\mathcal{O}\left( (v(|z_{1}|)+v(|z_{2}|))\left( \frac{\omega (h_{1})}{ v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) \right) . \end{split} \label{eq24} \end{equation} With similar reasoning we obtain \begin{equation} \begin{split} \Vert J_{222}^{(1)}\Vert _{p}& =\mathcal{O}\left( \frac{1}{mn}\right) \int_{h_{1}}^{\pi }\int_{0}^{h_{2}}\left\Vert H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\right\Vert _{p}\frac{(nt_{2})^{2}}{ \left( 2\frac{t_{2}}{\pi }\right) ^{2}}\frac{h_{1}(2t_{1}+h_{1})}{ t_{1}^{2}(t_{1}+h_{1})^{2}}dt_{1}dt_{2} \\ & =\mathcal{O}\left( \frac{n}{m}\right) \int_{h_{1}}^{\pi
}\int_{0}^{h_{2}}(v(|z_{1}|)+v(|z_{2}|))\left( \frac{\omega (t_{1}+h_{1})}{ v(t_{1}+h_{1})}+\frac{\omega (t_{2})}{v(t_{2})}\right) \frac{ h_{1}dt_{1}dt_{2}}{t_{1}^{2}(t_{1}+h_{1})} \\ & =\mathcal{O}\left( \frac{n}{m^{2}}\right)
(v(|z_{1}|)+v(|z_{2}|))\int_{h_{1}}^{\pi }\left( \frac{\omega (t_{1}+h_{1})}{ (t_{1}+h_{1})v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})t_{1}}\right) \frac{ h_{2}dt_{1}}{t_{1}^{2}} \\ & =\mathcal{O}\left( \frac{1}{m^{2}}\right) \left(
v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (2h_{1})}{2h_{1}v(h_{1})} \int\limits_{h_{1}}^{\pi }\frac{dt_{1}}{t_{1}^{2}}+\frac{\omega (h_{2})}{ v(h_{2})}\int\limits_{h_{1}}^{\pi }\frac{dt_{1}}{t_{1}^{3}}\right) \\ & =\mathcal{O}\left( \frac{1}{m^{2}}\right) \left(
v(|z_{1}|)+v(|z_{2}|)\right) \left( m\frac{\omega (h_{1})}{2h_{1}v(h_{1})} +m^{2}\frac{\omega (h_{2})}{v(h_{2})}\right) \\
& =\mathcal{O}\left( \left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) \right) . \end{split} \label{eq25} \end{equation} So, from (\ref{eq23}), (\ref{eq24}) and (\ref{eq25}), we have \begin{equation}
\Vert J_{22}^{(1)}\Vert _{p}=\mathcal{O}\left( (v(|z_{1}|)+v(|z_{2}|))\left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) \right) . \label{eq26} \end{equation} Now, taking into account (\ref{eq20}), (\ref{eq21}), (\ref{eq22}) and (\ref {eq26}), we have \begin{equation}
\Vert J_{22}\Vert _{p}=\mathcal{O}\left( \left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})} \right) \right) . \label{eq27} \end{equation} Whence, using (\ref{eq16}), (\ref{eq17}) and (\ref{eq27}), we obtain \begin{equation}
\Vert J_{2}\Vert _{p}=\mathcal{O}\left( \left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})} \right) \right) . \label{eq28} \end{equation}
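Several of the reductions above, for instance in (\ref{eq25}), rely on the following elementary integrals (again a sketch, assuming $h_{1}=\pi /m$): \begin{equation*} \int_{h_{1}}^{\pi }\frac{dt_{1}}{t_{1}^{2}}\leq \frac{1}{h_{1}}=\frac{m}{\pi }=\mathcal{O}(m),\qquad \int_{h_{1}}^{\pi }\frac{dt_{1}}{t_{1}^{3}}\leq \frac{1}{2h_{1}^{2}}=\frac{m^{2}}{2\pi ^{2}}=\mathcal{O}(m^{2}). \end{equation*}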
By analogy, we conclude that \begin{equation}
\Vert J_{3}\Vert _{p}=\mathcal{O}\left( \left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})} \right) \right) . \label{eq29} \end{equation}
Finally, let us estimate the quantity $J_{4}$. Indeed, $J_{4}$ can be rewritten as \begin{align*} J_{4}& = \frac{4}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }H_{x,z_{1},y,z_{2}}(t_{1},t_{2})S(t_1,t_2) \\ &\qquad \times \left( \frac{1}{\left( 2\sin \frac{t_{1}}{2}\right) ^{2}}-\frac{1}{ t_{1}{}^{2}}\right) \left( \frac{1}{\left( 2\sin \frac{t_{2}}{2}\right) ^{2}} -\frac{1}{t_{2}{}^{2}}\right) dt_{1}dt_{2}\\ &\quad +\frac{4}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }H_{x,z_{1},y,z_{2}}(t_{1},t_{2})S(t_1,t_2) \\ & \qquad \times \left( \frac{1}{\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}-\frac{1}{ t_{2}{}^{2}}\right) \frac{1}{t_{1}{}^{2}}dt_{1}dt_{2}\\ & \quad +\frac{4}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }H_{x,z_{1},y,z_{2}}(t_{1},t_{2})S(t_1,t_2) \\ & \qquad \times \left( \frac{1}{\left( 2\sin \frac{t_{1}}{2}\right) ^{2}}-\frac{1}{ t_{1}{}^{2}}\right) \frac{1}{t_{2}{}^{2}}dt_{1}dt_{2}\\ & \quad +\frac{4}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }H_{x,z_{1},y,z_{2}}(t_{1},t_{2})\frac{S(t_1,t_2)}{\left( t_{1}t_{2}\right) ^{2}}dt_{1}dt_{2} \\ &:= J_{41}+J_{42}+J_{43}+J_{44}. \end{align*} The boundedness of the function \begin{equation*} \left( \frac{1}{\left( 2\sin \frac{t_{1}}{2}\right) ^{2}}-\frac{1}{ t_{1}{}^{2}}\right) \left( \frac{1}{\left( 2\sin \frac{t_{2}}{2}\right) ^{2}} -\frac{1}{t_{2}{}^{2}}\right) \end{equation*} for $0<t_{1},t_{2}\leq \pi $, Lemma \ref{le1} and Lemma \ref{le2}, implies \begin{align*} \Vert J_{41}\Vert _{p}&=\mathcal{O}\left( \frac{1}{mn}\right) \int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }\Vert H_{x,z_{1},y,z_{2}}(t_{1},t_{2})\Vert _{p}dt_{1}dt_{2}\\ &=\mathcal{O}\left( \frac{1}{mn}\right) \int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi
}(v(|z_{1}|)+v(|z_{2}|))\left( \frac{\omega (t_{1})}{v(t_{1})}+\frac{\omega (t_{2})}{v(t_{2})}\right) dt_{1}dt_{2}\\
&=\mathcal{O}\left( \frac{1}{mn}\right) (v(|z_{1}|)+v(|z_{2}|))\frac{\omega \left( \pi \right) }{v\left( \pi \right) } \\
&=\mathcal{O}\left( \left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) \right) . \end{align*} Substituting $t_{1}$ with $t_{1}+h_{1}$ in \begin{align*} &&\frac{4}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }H_{x,z_{1},y,z_{2}}(t_{1},t_{2})S(t_1,t_2) \left( \frac{1}{\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}-\frac{1}{ t_{2}{}^{2}}\right) \frac{1}{t_{1}{}^{2}}dt_{1}dt_{2} \end{align*} we obtain \begin{align*} J_{42} &= -\frac{4}{mn\pi ^{2}}\int_{0}^{\pi -h_{1}}\int_{h_{2}}^{\pi }H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})S(t_1,t_2) \\ & \qquad \times \left( \frac{1}{\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}-\frac{1}{ t_{2}{}^{2}}\right) \frac{1}{\left( t_{1}+h_{1}\right) {}^{2}}dt_{1}dt_{2}. \end{align*} Hence \begin{align*} J_{42} &= \frac{2}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }H_{x,z_{1},y,z_{2}}(t_{1},t_{2})S(t_{1},t_{2}) \left( \frac{1}{\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}-\frac{1}{ t_{2}{}^{2}}\right) \frac{1}{t_{1}^{2}}dt_{1}dt_{2}\\ & \quad -\frac{2}{mn\pi ^{2}}\int_{0}^{\pi -h_{1}}\int_{h_{2}}^{\pi }H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})S(t_{1},t_{2}) \\ & \qquad \times \left( \frac{1}{\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}-\frac{1}{ t_{2}{}^{2}}\right) \frac{1}{\left( t_{1}+h_{1}\right) {}^{2}}dt_{1}dt_{2}\\ &= \frac{2}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }H_{x,z_{1},y,z_{2}}(t_{1},t_{2})S(t_{1},t_{2}) \left( \frac{1}{\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}-\frac{1}{ t_{2}{}^{2}}\right) \frac{1}{t_{1}^{2}}dt_{1}dt_{2}\\ & \quad -\frac{2}{mn\pi ^{2}}\left( \int_{0}^{h_{1}}\int_{h_{2}}^{\pi }+\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }-\int_{\pi -h_{1}}^{\pi }\int_{h_{2}}^{\pi }\right) H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}) \\ & \qquad \times S(t_{1},t_{2})\left( \frac{1}{\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}-\frac{1}{t_{2}{}^{2}}\right) \frac{1}{\left( t_{1}+h_{1}\right) {}^{2}} dt_{1}dt_{2}\\ &= \frac{2}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }\left( 
\frac{ H_{x,z_{1},y,z_{2}}(t_{1},t_{2})}{t_{1}{}^{2}}-\frac{ H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})}{\left( t_{1}+h_{1}\right) {}^{2}} \right) \\ & \qquad \times S(t_{1},t_{2})\left( \frac{1}{\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}-\frac{1}{t_{2}{}^{2}}\right) dt_{1}dt_{2}\\ & \quad -\frac{2}{mn\pi ^{2}}\left( \int_{0}^{h_{1}}\int_{h_{2}}^{\pi }-\int_{\pi -h_{1}}^{\pi }\int_{h_{2}}^{\pi }\right) H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}) \\ & \qquad \times S(t_{1},t_{2})\left( \frac{1}{\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}-\frac{1}{t_{2}{}^{2}}\right) \frac{1}{\left( t_{1}+h_{1}\right) {}^{2}} dt_{1}dt_{2} \\ &:= \sum\limits_{s=1}^{3}J_{42}^{\left( s\right) }. \end{align*} Since the function \begin{equation*} \frac{1}{\left( 2\sin \frac{t_{2}}{2}\right) ^{2}}-\frac{1}{t_{2}^{2}} \end{equation*} is bounded for $0<t_{2}\leq \pi $, using Lemma \ref{le1} with $p\geq 1$ and Lemma \ref{le2}, we have \begin{align*} \left\Vert J_{42}^{\left( 2\right) }\right\Vert _{p} &= \mathcal{O}\left( \frac{1}{mn}\right) \int_{0}^{h_{1}}\int_{h_{2}}^{\pi }\left(
v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (t_{1}+h_{1})}{ v(t_{1}+h_{1})}+\frac{\omega (t_{2})}{v(t_{2})}\right) \frac{ m^{2}t_{1}^{2}dt_{1}dt_{2}}{(t_{1}+h_{1})^{2}} \\
&= \mathcal{O}\left( \frac{m}{n}\right) \left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (2h_{1})}{v(2h_{1})}+\frac{\omega (\pi )}{v(\pi )} \right) h_{1} \\
&= \mathcal{O}\left( \left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) \right) , \end{align*} \begin{align*} \left\Vert J_{42}^{\left( 3\right) }\right\Vert _{p} &= \mathcal{O}\left( \frac{1}{mn}\right) \int_{\pi -h_{1}}^{\pi }\int_{h_{2}}^{\pi }\left(
v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (t_{1}+h_{1})}{ v(t_{1}+h_{1})}+\frac{\omega (t_{2})}{v(t_{2})}\right) \frac{dt_{1}dt_{2}}{ (t_{1}+h_{1})^{2}} \\
&= \mathcal{O}\left( \frac{1}{mn}\right) \left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (\pi )}{v(\pi )}\right) h_{1} \\
&= \mathcal{O}\left( \frac{1}{m}\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) \right) \end{align*} and \begin{align*} \left\Vert J_{42}^{\left( 1\right) }\right\Vert _{p} &= \mathcal{O}\left( \frac{1}{mn}\right) \int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }\left\Vert H_{x,z_{1},y,z_{2}}(t_{1},t_{2})-H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}) \right\Vert _{p}\frac{1}{t_{1}^{2}}dt_{1}dt_{2} \\ & \quad +\mathcal{O}\left( \frac{1}{mn}\right) \int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }\left\Vert H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\right\Vert _{p}\frac{ h_{1}(2t_{1}+h_{1})}{t_{1}^{2}\left( t_{1}+h_{1}\right) ^{2}}dt_{1}dt_{2} \\
&= \mathcal{O}\left( \frac{1}{mn}\right) \left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})} \right) \int_{h_{1}}^{\pi }\frac{1}{t_{1}^{2}}dt_{1} \\ & \quad +\mathcal{O}\left( \frac{1}{m^{2}n}\right) \int_{h_{1}}^{\pi
}\int_{h_{2}}^{\pi }\left( v(|z_{1}|)+v(|z_{2}|)\right) \\ & \qquad \times \left( \frac{\omega (t_{1}+h_{1})}{\left( t_{1}+h_{1}\right) v(t_{1}+h_{1})}+\frac{\omega (t_{2})}{t_{1}v(t_{2})}\right) \frac{1}{ t_{1}^{2}}dt_{1}dt_{2} \\
&= \mathcal{O}\left( \frac{1}{n}\right) \left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) \\ & \quad +\mathcal{O}\left( \frac{1}{m^{2}n}\right) \left(
v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (2h_{1})}{2h_{1}v(h_{1})} \int_{h_{1}}^{\pi }\frac{1}{t_{1}^{2}}dt_{1}+\frac{\omega (\pi )}{v(\pi )} \int_{h_{1}}^{\pi }\frac{1}{t_{1}^{3}}dt_{1}\right) \\
&= \mathcal{O}\left( \left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) \right) . \end{align*} The quantity $J_{43}$ can be estimated analogously to $J_{42}.$
Substituting in \begin{equation*} J_{44}=\frac{4}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }H_{x,z_{1},y,z_{2}}(t_{1},t_{2})\frac{S(t_1,t_2)}{\left( t_{1}t_{2}\right) ^{2}}dt_{1}dt_{2} \end{equation*} first $t_{1}$ with $t_{1}+h_{1}$, then $t_{2}$ with $t_{2}+h_{2}$, and finally both $t_{1}$ with $t_{1}+h_{1}$ and $t_{2}$ with $t_{2}+h_{2}$, we get \begin{equation*} J_{44}=-\frac{4}{mn\pi ^{2}}\int_{0}^{\pi -h_{1}}\int_{h_{2}}^{\pi }H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\frac{S(t_1,t_2)}{\left( (t_{1}+h_{1})t_{2}\right) ^{2}}dt_{1}dt_{2}, \end{equation*} \begin{equation*} J_{44}=-\frac{4}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{0}^{\pi -h_{2}}H_{x,z_{1},y,z_{2}}(t_{1},t_{2}+h_{2})\frac{S(t_1,t_2)}{\left( t_{1}(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2}, \end{equation*} \begin{equation*} J_{44}=\frac{4}{mn\pi ^{2}}\int_{0}^{\pi -h_{1}}\int_{0}^{\pi -h_{2}}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\frac{S(t_1,t_2)}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2}. \end{equation*}
Hence \begin{align*} J_{44}& =\frac{1}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }H_{x,z_{1},y,z_{2}}(t_{1},t_{2})\frac{S(t_1,t_2)}{\left( t_{1}t_{2}\right) ^{2}}dt_{1}dt_{2}\\ & \quad -\frac{1}{mn\pi ^{2}}\int_{0}^{\pi -h_{1}}\int_{h_{2}}^{\pi }H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\frac{S(t_1,t_2)}{\left( (t_{1}+h_{1})t_{2}\right) ^{2}}dt_{1}dt_{2}\\ & \quad -\frac{1}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{0}^{\pi -h_{2}}H_{x,z_{1},y,z_{2}}(t_{1},t_{2}+h_{2})\frac{S(t_1,t_2)}{\left( t_{1}(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2}\\ & \quad +\frac{1}{mn\pi ^{2}}\int_{0}^{\pi -h_{1}}\int_{0}^{\pi -h_{2}}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\frac{S(t_1,t_2)}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2}\\ & =\frac{1}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }H_{x,z_{1},y,z_{2}}(t_{1},t_{2})\frac{S(t_1,t_2)}{\left( t_{1}t_{2}\right) ^{2}}dt_{1}dt_{2}\\ & \quad -\frac{1}{mn\pi ^{2}}\left( \int_{0}^{h_{1}}\int_{h_{2}}^{\pi }+\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }-\int_{\pi -h_{1}}^{\pi }\int_{h_{2}}^{\pi }\right) \\ & \quad \qquad H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\frac{S(t_1,t_2)}{\left( (t_{1}+h_{1})t_{2}\right) ^{2}}dt_{1}dt_{2} \\ & \quad -\frac{1}{mn\pi ^{2}}\left( \int_{h_{1}}^{\pi }\int_{0}^{h_{2}}+\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }-\int_{h_{1}}^{\pi }\int_{\pi -h_{2}}^{\pi }\right) \\ & \quad \qquad H_{x,z_{1},y,z_{2}}(t_{1},t_{2}+h_{2})\frac{S(t_1,t_2)}{\left( t_{1}(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2}\\ & \quad +\frac{1}{mn\pi ^{2}}\left( \int_{0}^{h_{1}}\int_{0}^{h_{2}}+\int_{0}^{h_{1}}\int_{h_{2}}^{\pi }-\int_{0}^{h_{1}}\int_{\pi -h_{2}}^{\pi }+\int_{h_{1}}^{\pi }\int_{0}^{h_{2}}+\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }\right.\\ & \quad \qquad \qquad \left. 
-\int_{h_{1}}^{\pi }\int_{\pi -h_{2}}^{\pi }-\int_{\pi -h_{1}}^{\pi }\int_{0}^{h_{2}}-\int_{\pi -h_{1}}^{\pi }\int_{h_{2}}^{\pi }+\int_{\pi -h_{1}}^{\pi }\int_{\pi -h_{2}}^{\pi }\right)\\ & \quad \qquad \qquad \quad H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\frac{S(t_1,t_2)}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2}\\ & =\frac{1}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }\left( \frac{ H_{x,z_{1},y,z_{2}}(t_{1},t_{2})}{\left( t_{1}t_{2}\right) ^{2}}-\frac{ H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})}{\left( (t_{1}+h_{1})t_{2}\right) ^{2} }\right.\\ & \quad \qquad \quad \left. -\frac{H_{x,z_{1},y,z_{2}}(t_{1},t_{2}+h_{2})}{\left( t_{1}(t_{2}+h_{2})\right) ^{2}}+\frac{ H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}\right) S(t_1,t_2) dt_{1}dt_{2}\\ & \quad -\frac{1}{mn\pi ^{2}}\left( \int_{0}^{h_{1}}\int_{h_{2}}^{\pi }-\int_{\pi -h_{1}}^{\pi }\int_{h_{2}}^{\pi }\right) H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\frac{S(t_1,t_2)}{\left( (t_{1}+h_{1})t_{2}\right) ^{2}}dt_{1}dt_{2}\\ & \quad -\frac{1}{mn\pi ^{2}}\left( \int_{h_{1}}^{\pi }\int_{0}^{h_{2}}-\int_{h_{1}}^{\pi }\int_{\pi -h_{2}}^{\pi }\right) H_{x,z_{1},y,z_{2}}(t_{1},t_{2}+h_{2})\frac{S(t_1,t_2)}{\left( t_{1}(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2}\\ & \quad +\frac{1}{mn\pi ^{2}}\left( \int_{0}^{h_{1}}\int_{0}^{h_{2}}+\int_{0}^{h_{1}}\int_{h_{2}}^{\pi }-\int_{0}^{h_{1}}\int_{\pi -h_{2}}^{\pi }+\int_{h_{1}}^{\pi }\int_{0}^{h_{2}}\right.\\ & \quad \qquad \qquad \left. -\int_{h_{1}}^{\pi }\int_{\pi -h_{2}}^{\pi }-\int_{\pi -h_{1}}^{\pi }\int_{0}^{h_{2}}-\int_{\pi -h_{1}}^{\pi }\int_{h_{2}}^{\pi }+\int_{\pi -h_{1}}^{\pi }\int_{\pi -h_{2}}^{\pi }\right)\\ & \quad \qquad \qquad \quad H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\frac{S(t_1,t_2)}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2} \\ & :=\sum \limits_{l=1}^{13}J_{44}^{\left( l\right) }. \end{align*}
We start with the estimate of $J_{44}^{\left( 1\right) }$. Indeed, we have \begin{align*} J_{44}^{\left( 1\right) } &= \frac{1}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }\Bigg(\frac{H_{x,z_{1},y,z_{2}}(t_{1},t_{2})}{\left( t_{1}t_{2}\right) ^{2}}-\frac{H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})}{\left( (t_{1}+h_{1})t_{2}\right) ^{2}} \\ & \quad \qquad -\frac{H_{x,z_{1},y,z_{2}}(t_{1},t_{2}+h_{2})}{\left( t_{1}(t_{2}+h_{2})\right) ^{2}}+\frac{ H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}\Bigg)S(t_{1},t_{2})dt_{1}dt_{2} \\ &= \frac{1}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }\Bigg( H_{x,z_{1},y,z_{2}}(t_{1},t_{2})-H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}) \\ &\quad \qquad -H_{x,z_{1},y,z_{2}}(t_{1},t_{2}+h_{2})+H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2}) \Bigg)\frac{S(t_{1},t_{2})dt_{1}dt_{2}}{\left( t_{1}t_{2}\right) ^{2}} \\ &\quad +\frac{1}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }\Bigg( H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})-H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2}) \Bigg) \\ &\qquad \times \Bigg(\frac{1}{\left( t_{1}t_{2}\right) ^{2}}-\frac{1}{\left( (t_{1}+h_{1})t_{2}\right) ^{2}}\Bigg)S(t_{1},t_{2})dt_{1}dt_{2} \\ &\quad +\frac{1}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }\Bigg( H_{x,z_{1},y,z_{2}}(t_{1},t_{2}+h_{2})-H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2}) \Bigg) \\ &\qquad \times \Bigg(\frac{1}{\left( t_{1}t_{2}\right) ^{2}}-\frac{1}{\left( t_{1}(t_{2}+h_{2})\right) ^{2}}\Bigg)S(t_{1},t_{2})dt_{1}dt_{2} \\ &\quad +\frac{1}{mn\pi ^{2}}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\Bigg(\frac{1}{(t_{1}+h_{1})^{2} }-\frac{1}{t_{1}^{2}}\Bigg) \\ &\qquad \times \Bigg(\frac{1}{(t_{2}+h_{2})^{2}}-\frac{1}{t_{2}^{2}}\Bigg) S(t_{1},t_{2})dt_{1}dt_{2} \\ &:= J_{44}^{\left( 1,1\right) }+J_{44}^{\left( 1,2\right) }+J_{44}^{\left( 1,3\right) }+J_{44}^{\left( 1,4\right) }. \end{align*}
It follows from Lemma \ref{le2} (iv) and $|S(t_{1},t_{2})|\leq 1$ for $ t_{1}\in (h_{1},\pi )$, $t_{2}\in (h_{2},\pi )$ that \begin{align*} \Vert J_{44}^{\left( 1,1\right) }\Vert _{p} &= \frac{\mathcal{O}(1)}{mn}
\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }\Big\| H_{x,z_{1},y,z_{2}}(t_{1},t_{2})-H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}) \\ & \quad -H_{x,z_{1},y,z_{2}}(t_{1},t_{2}+h_{2})+H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})
\Big\|_{p}\frac{|S(t_{1},t_{2})|dt_{1}dt_{2}}{\left( t_{1}t_{2}\right) ^{2}} \\
&= \frac{\mathcal{O}(1)}{mn}(v(|z_{1}|)+v(|z_{2}|))\left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) \int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }\frac{dt_{1}dt_{2}}{\left( t_{1}t_{2}\right) ^{2}} \\
&= \mathcal{O}(1)(v(|z_{1}|)+v(|z_{2}|))\left( \frac{\omega (h_{1})}{ v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) . \end{align*} Moreover, from Lemma \ref{le2} (iii) and using the same arguments as above, we obtain \begin{align*} \Vert J_{44}^{\left( 1,2\right) }\Vert _{p} &= \frac{\mathcal{O}(1)}{mn}
\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }\Big\| H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})-H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})
\Big\|_{p} \\ & \quad \times \frac{1}{t_{2}^{2}}\Bigg(\frac{1}{t_{1}^{2}}-\frac{1}{\left(
t_{1}+h_{1}\right) ^{2}}\Bigg)|S(t_{1},t_{2})|dt_{1}dt_{2} \\
&= \frac{\mathcal{O}(1)}{mn}(v(|z_{1}|)+v(|z_{2}|))\left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) \int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }\frac{h_{1}\left( 2t_{1}+h_{1}\right) }{ t_{2}^{2}t_{1}^{2}\left( t_{1}+h_{1}\right) ^{2}}dt_{1}dt_{2} \\
&= \frac{\mathcal{O}(1)}{m^{2}n}\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) \int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }\frac{1}{t_{2}^{2}t_{1}^{3}} dt_{1}dt_{2} \\
&= \mathcal{O}(1)(v(|z_{1}|)+v(|z_{2}|))\left( \frac{\omega (h_{1})}{ v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) . \end{align*} Similarly, we obtain \begin{equation*} \Vert J_{44}^{\left( 1,3\right) }\Vert _{p}=\mathcal{O}
(1)(v(|z_{1}|)+v(|z_{2}|))\left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{ \omega (h_{2})}{v(h_{2})}\right) . \end{equation*} To estimate $\Vert J_{44}^{\left( 1,4\right) }\Vert _{p}$ we use Lemma \ref{le1} and Lemma \ref{le2} (iii) in order to get \begin{align*} \Vert J_{44}^{\left( 1,4\right) }\Vert _{p} &= \frac{\mathcal{O}(1)}{mn}
\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }\Big\|
H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\Big\|_{p} \\ & \quad \times \Bigg(\frac{1}{(t_{1}+h_{1})^{2}}-\frac{1}{t_{1}^{2}}\Bigg)\Bigg( \frac{1}{(t_{2}+h_{2})^{2}}-\frac{1}{t_{2}^{2}}\Bigg)
|S(t_{1},t_{2})|dt_{1}dt_{2} \\
&= \frac{\mathcal{O}(1)}{mn}\left( v(|z_{1}|)+v(|z_{2}|)\right) \int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }\Bigg(\frac{\omega (t_{1}+h_{1})}{ v(t_{1}+h_{1})}+\frac{\omega (t_{2}+h_{2})}{v(t_{2}+h_{2})}\Bigg) \\ & \quad \times \frac{h_{1}h_{2}\left( 2t_{1}+h_{1}\right) \left( 2t_{2}+h_{2}\right) }{\left( t_{1}t_{2}\left( t_{1}+h_{1}\right) \left( t_{2}+h_{2}\right) \right) ^{2}}dt_{1}dt_{2} \\ &= \frac{\mathcal{O}(1)}{\left( mn\right) ^{2}}\left(
v(|z_{1}|)+v(|z_{2}|)\right) \\ & \quad \times \int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }\Bigg(\frac{\omega (t_{1}+h_{1})}{\left( t_{1}+h_{1}\right) v(h_{1})t_{2}^{3}t_{1}^{2}}+\frac{ \omega (t_{2}+h_{2})}{\left( t_{2}+h_{2}\right) v(h_{2})t_{1}^{3}t_{2}^{2}} \Bigg)dt_{1}dt_{2} \\ &= \frac{\mathcal{O}(1)}{\left( mn\right) ^{2}}\left(
v(|z_{1}|)+v(|z_{2}|)\right) \\ & \quad \times \Bigg(\frac{\omega (2h_{1})}{2h_{1}v(h_{1})}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }\frac{1}{t_{2}^{3}t_{1}^{2}}dt_{1}dt_{2}+\frac{\omega (2h_{2})}{2h_{2}v(h_{2})}\int_{h_{1}}^{\pi }\int_{h_{2}}^{\pi }\frac{1}{ t_{2}^{2}t_{1}^{3}}dt_{1}dt_{2}\Bigg) \\
&= \mathcal{O}(1)(v(|z_{1}|)+v(|z_{2}|))\left( \frac{\omega (h_{1})}{ v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) . \end{align*} Whence, using the above estimates, we have \begin{equation} \Vert J_{44}^{\left( 1\right) }\Vert _{p}=\mathcal{O}
(1)(v(|z_{1}|)+v(|z_{2}|))\left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{ \omega (h_{2})}{v(h_{2})}\right) . \label{eqb1} \end{equation}
Replacing $t_{2}$ with $t_{2}+h_{2}$ in \begin{equation*} J_{44}^{\left( 2\right) }=\frac{1}{mn\pi ^{2}}\int_{0}^{h_{1}}\int_{h_{2}}^{ \pi }H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\frac{S(t_{1},t_{2})}{\left( (t_{1}+h_{1})t_{2}\right) ^{2}}dt_{1}dt_{2} \end{equation*} we get \begin{equation*} J_{44}^{\left( 2\right) }=-\frac{1}{mn\pi ^{2}}\int_{0}^{h_{1}}\int_{0}^{\pi -h_{2}}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\frac{S(t_{1},t_{2})}{ \left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2} \end{equation*} and after adding them we obtain \begin{align*} J_{44}^{\left( 2\right) } &= \frac{1}{2mn\pi ^{2}}\bigg(\int_{0}^{h_{1}} \int_{h_{2}}^{\pi }H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\frac{S(t_{1},t_{2}) }{\left( (t_{1}+h_{1})t_{2}\right) ^{2}}dt_{1}dt_{2} \\ & \quad -\int_{0}^{h_{1}}\int_{0}^{\pi -h_{2}}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\frac{S(t_{1},t_{2})}{ \left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2}\bigg) \\ &= \frac{1}{2mn\pi ^{2}}\bigg(\int_{0}^{h_{1}}\int_{h_{2}}^{\pi }\bigg(\frac{ H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})}{\left( (t_{1}+h_{1})t_{2}\right) ^{2} }-\frac{H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}\bigg) \\ & \qquad \times S(t_{1},t_{2})dt_{1}dt_{2}\\ & \quad -\int_{0}^{h_{1}} \int_{0}^{h_{2}}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\frac{ S(t_{1},t_{2})}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2} \\ & \quad +\int_{0}^{h_{1}}\int_{\pi -h_{2}}^{\pi }H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\frac{S(t_{1},t_{2})}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2}\bigg) \\ &:= \sum\limits_{s=1}^{3}J_{44s}^{\left( 2\right) }. \end{align*}
Since \begin{equation*}
|\sin 2mt_{1}\sin mt_{1}\sin 2nt_{2}\sin nt_{2}|\leq 4(mnt_{1}t_{2})^{2}\quad \text{for}\quad 0<t_{1}<\pi ,0<t_{2}<\pi , \end{equation*} then applying Lemma \ref{le1} and Lemma \ref{le2} (iii) we have \begin{align*} \Vert J_{442}^{\left( 2\right) }\Vert _{p} &= \mathcal{O}\left(\frac{1}{mn} \right)\int_{0}^{h_{1}}\int_{0}^{h_{2}}\left\Vert H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\right\Vert _{p}\frac{ (mnt_{1}t_{2})^{2}}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}} dt_{1}dt_{2} \\ &= \mathcal{O}(mn)\int_{0}^{h_{1}}\int_{0}^{h_{2}}\left(
v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (t_{1}+h_{1})}{ v(t_{1}+h_{1})}+\frac{\omega (t_{2}+h_{2})}{v(t_{2}+h_{2})}\right) dt_{1}dt_{2} \\
&= \mathcal{O}(mn)\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (2h_{1})}{v(2h_{1})}+\frac{\omega (2h_{2})}{v(2h_{2})}\right) h_{1}h_{2} \\
&= \mathcal{O}(1)\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) . \end{align*}
Because of \begin{equation*}
|\sin 2mt_{1}\sin mt_{1}|\leq 2(mt_{1})^{2}\quad \text{for}\quad 0<t_{1}<\pi \end{equation*} and \begin{equation*}
|\sin 2nt_{2}\sin nt_{2}|\leq 1\quad \text{for}\quad \pi -h_{2}<t_{2}<\pi , \end{equation*} it follows that \begin{align*} \Vert J_{443}^{\left( 2\right) }\Vert _{p} &= \mathcal{O}\left(\frac{1}{mn} \right)\int_{0}^{h_{1}}\int_{\pi -h_{2}}^{\pi }\left\Vert H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\right\Vert _{p}\frac{ (mt_{1})^{2}}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2} \\ &= \mathcal{O}\left(\frac{m}{n}\right)\int_{0}^{h_{1}}\int_{\pi -h_{2}}^{\pi }\left(
v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (t_{1}+h_{1})}{ v(t_{1}+h_{1})}+\frac{\omega (t_{2}+h_{2})}{v(t_{2}+h_{2})}\right) \frac{ dt_{1}dt_{2}}{(t_{2}+h_{2})^{2}} \\
&= \mathcal{O}\left(\frac{m}{n}\right)\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{ \omega (2h_{1})}{v(2h_{1})}h_{1}h_{2}+h_{1}\int_{\pi }^{\pi +h_{2}}\frac{ \omega (\theta _{2})}{v(\theta _{2})}\frac{d\theta _{2}}{\theta _{2}^2}\right) \\
&= \mathcal{O}\left(\frac{m}{n}\right)\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{ \omega (h_{1})}{v(h_{1})}h_{1}h_{2}+\frac{\omega (\pi +h_{2})}{v(\pi +h_{2})} \frac{h_{1}h_{2}}{\pi (\pi +h_{2})}\right) \\ &= \mathcal{O}\left( \frac{1}{n^{2}}\right) \left(
v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{ \omega (\pi )}{v(\pi )}\right) \\
&= \mathcal{O}\left(\frac{1}{n}\right)\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{ \omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) . \end{align*}
Now we write \begin{align*} J_{441}^{\left( 2\right) } &= \frac{1}{2mn\pi ^{2}}\int_{0}^{h_{1}} \int_{h_{2}}^{\pi }\bigg(\frac{H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})}{ \left( (t_{1}+h_{1})t_{2}\right) ^{2}}-\frac{ H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})}{\left( (t_{1}+h_{1})t_{2}\right) ^{2}} \\ & \quad +\frac{H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})}{\left( (t_{1}+h_{1})t_{2}\right) ^{2}}-\frac{ H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}\bigg)S(t_{1},t_{2})dt_{1}dt_{2} \\ &= \frac{1}{2mn\pi ^{2}}\int_{0}^{h_{1}}\int_{h_{2}}^{\pi }\bigg( H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})-H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2}) \bigg) \\ &\qquad \times \frac{S(t_{1},t_{2})}{\left( (t_{1}+h_{1})t_{2}\right) ^{2}} dt_{1}dt_{2} \\ &\quad +\frac{1}{2mn\pi ^{2}}\int_{0}^{h_{1}}\int_{h_{2}}^{\pi }H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\frac{S(t_{1},t_{2})}{\left( t_{1}+h_{1}\right) ^{2}} \bigg(\frac{1}{t_{2}^{2}}-\frac{1}{\left( t_{2}+h_{2}\right) ^{2}} \bigg)dt_{1}dt_{2} \\ & :=J_{441}^{\left( 21\right) }+J_{441}^{\left( 22\right) }. \end{align*} For $\Vert J_{441}^{\left( 21\right) }\Vert _{p}$ we have \begin{align*} \Vert J_{441}^{\left( 21\right) }\Vert _{p} &= \mathcal{O}\left(\frac{1}{mn} \right)\int_{0}^{h_{1}}\int_{h_{2}}^{\pi }\left\Vert H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})-H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\right\Vert _{p} \\ & \quad \times \frac{(mt_{1})^{2}}{\left( (t_{1}+h_{1})t_{2}\right) ^{2}} dt_{1}dt_{2} \\ &= \mathcal{O}\left(\frac{m}{n}\right)\int_{0}^{h_{1}}\int_{h_{2}}^{\pi
}(v(|z_{1}|)+v(|z_{2}|))\left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) \frac{dt_{1}dt_{2}}{t_{2}^{2}} \\
&= \mathcal{O}(1)(v(|z_{1}|)+v(|z_{2}|))\left( \frac{\omega (h_{1})}{ v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) , \end{align*} while for $\Vert J_{441}^{\left( 22\right) }\Vert _{p}$, we get \begin{align*} \Vert J_{441}^{\left( 22\right) }\Vert _{p} &= \mathcal{O}\left(\frac{1}{mn} \right)\int_{0}^{h_{1}}\int_{h_{2}}^{\pi }\left\Vert H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\right\Vert _{p} \\ & \quad \times \frac{(mt_{1})^{2}}{\left( t_{1}+h_{1}\right) ^{2}}\bigg(\frac{1}{ t_{2}^{2}}-\frac{1}{\left( t_{2}+h_{2}\right) ^{2}}\bigg)dt_{1}dt_{2} \\ &= \mathcal{O}\left(\frac{m}{n}\right)\int_{0}^{h_{1}}\int_{h_{2}}^{\pi
}(v(|z_{1}|)+v(|z_{2}|)) \\ & \quad \times \left( \frac{\omega (t_{1}+h_{1})}{v(t_{1}+h_{1})}+\frac{\omega (t_{2}+h_{2})}{v(t_{2}+h_{2})}\right) \frac{h_{2}(2t_{2}+h_{2})}{ t_{2}^{2}\left( t_{2}+h_{2}\right) ^{2}}dt_{1}dt_{2} \\
&= \mathcal{O}\left(\frac{m}{n^{2}}\right)(v(|z_{1}|)+v(|z_{2}|))\bigg( \int_{0}^{h_{1}}\int_{h_{2}}^{\pi }\frac{\omega (t_{1}+h_{1})}{v(t_{1}+h_{1}) }\frac{1}{t_{2}^{3}}dt_{1}dt_{2} \\ & \quad +\int_{0}^{h_{1}}\int_{h_{2}}^{\pi }\frac{\omega (t_{2}+h_{2})}{\left( t_{2}+h_{2}\right) v(h_{2})}\frac{1}{t_{2}^{2}}dt_{1}dt_{2}\bigg) \\
&= \mathcal{O}\left(\frac{m}{n^{2}}\right)(v(|z_{1}|)+v(|z_{2}|))\bigg(\frac{\omega (2h_{1})}{v(h_{1})}\int_{h_{2}}^{\pi }\frac{1}{t_{2}^{3}}dt_{2}
+\frac{\omega (2h_{2})}{2h_{2}v(h_{2})}\int_{h_{2}}^{\pi }\frac{1}{ t_{2}^{2}}dt_{2}\bigg)h_{1} \\
&= \mathcal{O}(1)\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{ v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) . \end{align*} Hence, \begin{equation*} \Vert J_{441}^{\left( 2\right) }\Vert _{p}=\mathcal{O}
(1)\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{ \omega (h_{2})}{v(h_{2})}\right) . \end{equation*} Thus, we obtain \begin{equation} \Vert J_{44}^{\left( 2\right) }\Vert _{p}=\mathcal{O}(1)\left(
v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{ \omega (h_{2})}{v(h_{2})}\right) . \label{eqb2} \end{equation}
By analogy, we find that \begin{equation} \Vert J_{44}^{\left( 4\right) }\Vert _{p}=\mathcal{O}(1)\left(
v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{ \omega (h_{2})}{v(h_{2})}\right) . \label{eqb3} \end{equation}
Once more, replacing $t_{2}$ with $t_{2}+h_{2}$ in \begin{equation*} J_{44}^{\left( 3\right) }=\frac{1}{mn\pi ^{2}}\int_{\pi -h_{1}}^{\pi }\!\!\int_{h_{2}}^{\pi }H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})\frac{ S(t_{1},t_{2})}{\left( (t_{1}+h_{1})t_{2}\right) ^{2}}dt_{1}dt_{2} \end{equation*} we get \begin{equation*} J_{44}^{\left( 3\right) }=-\frac{1}{mn\pi ^{2}}\int_{\pi -h_{1}}^{\pi }\!\!\int_{0}^{\pi -h_{2}}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\frac{ S(t_{1},t_{2})}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2} \end{equation*} and adding them side by side, we have \begin{align*} J_{44}^{\left( 3\right) } &= \frac{1}{2mn\pi ^{2}}\bigg(\int_{\pi -h_{1}}^{\pi }\!\!\int_{h_{2}}^{\pi }H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}) \frac{S(t_{1},t_{2})}{\left( (t_{1}+h_{1})t_{2}\right) ^{2}}dt_{1}dt_{2} \\ & \quad -\int_{\pi -h_{1}}^{\pi }\!\!\int_{0}^{\pi -h_{2}}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\frac{S(t_{1},t_{2})}{ \left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2}\bigg) \\ &= \frac{1}{2mn\pi ^{2}}\bigg(\int_{\pi -h_{1}}^{\pi }\!\!\int_{h_{2}}^{\pi } \bigg(\frac{H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})}{\left( (t_{1}+h_{1})t_{2}\right) ^{2}}-\frac{ H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}\bigg) \\ &\quad \times S(t_{1},t_{2})dt_{1}dt_{2} \!- \!\int_{\pi -h_{1}}^{\pi }\!\!\int_{0}^{h_{2}}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\frac{ S(t_{1},t_{2})}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2} \\ &\quad +\int_{\pi -h_{1}}^{\pi }\!\!\int_{\pi -h_{2}}^{\pi }H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\frac{S(t_{1},t_{2})}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2}\bigg) \\ &:= \sum\limits_{s=1}^{3}J_{44s}^{\left( 3\right) } \end{align*}
Since \begin{equation*}
|\sin 2mt_{1}\sin mt_{1}|\leq 1,\quad \text{for}\quad \pi -h_{1}<t_{1}<\pi \end{equation*} and \begin{equation*}
|\sin 2nt_{2}\sin nt_{2}|\leq 2(nt_{2})^{2}\quad \text{for}\quad 0<t_{2}<\pi , \end{equation*} applying Lemma \ref{le1} and Lemma \ref{le2} (iii) yields \begin{align*} \Vert J_{442}^{\left( 3\right) }\Vert _{p} &= \mathcal{O}\left(\frac{1}{mn} \right)\int_{\pi -h_{1}}^{\pi }\!\!\int_{0}^{h_{2}}\left\Vert H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\right\Vert _{p}\frac{ (nt_{2})^{2}}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2} \\ &= \mathcal{O}\left(\frac{n}{m}\right)\int_{\pi -h_{1}}^{\pi
}\!\!\int_{0}^{h_{2}}\left( v(|z_{1}|)+v(|z_{2}|)\right)
\left( \frac{\omega (t_{1}+h_{1})}{v(t_{1}+h_{1})}+\frac{\omega (t_{2}+h_{2})}{v(t_{2}+h_{2})}\right) \frac{dt_{1}dt_{2}}{\left( t_{1}+h_{1}\right) ^{2}} \\
&= \mathcal{O}\left(\frac{n}{m}\right)\left( v(|z_{1}|)+v(|z_{2}|)\right) \\ & \quad \times h_{2}\left( \int_{\pi -h_{1}}^{\pi }\frac{\omega (t_{1}+h_{1})}{ v(t_{1}+h_{1})}\frac{dt_{1}}{\left( t_{1}+h_{1}\right) ^{2}}+\frac{\omega (2h_{2})}{v(2h_{2})}\int_{\pi -h_{1}}^{\pi }\frac{dt_{1}}{\left( t_{1}+h_{1}\right) ^{2}}\right) \\
&= \mathcal{O}\left(\frac{1}{m}\right)\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \int_{\pi }^{\pi +h_{1}}\frac{\omega (\theta _{1})}{v(\theta _{1})}\frac{ d\theta _{1}}{\theta _{1}^{2}}+\frac{\omega (2h_{2})}{v(2h_{2})}\int_{\pi }^{\pi +h_{1}}\frac{d\theta _{1}}{\theta _{1}^{2}}\right) \\
&= \mathcal{O}\left(\frac{1}{m}\right)\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{ \omega (\pi +h_{1})}{v(\pi +h_{1})}+\frac{\omega (2h_{2})}{v(2h_{2})}\right) h_{1} \\ &= \mathcal{O}\left( \frac{1}{m^{2}}\right) \left(
v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (\pi )}{v(\pi )}+\frac{ \omega (h_{2})}{v(h_{2})}\right) \\
&= \mathcal{O}\left(\frac{1}{m}\right)\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{ \omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) . \end{align*} Furthermore, the inequality \begin{equation*}
|\sin 2mt_{1}\sin mt_{1}\sin 2nt_{2}\sin nt_{2}|\leq 1\quad \text{for}\quad \pi -h_{1}<t_{1}<\pi ,\quad \pi -h_{2}<t_{2}<\pi , \end{equation*} implies \begin{align*} \Vert J_{443}^{\left( 3\right) }\Vert _{p} &= \mathcal{O}\left(\frac{1}{mn} \right)\int_{\pi -h_{1}}^{\pi }\!\!\int_{\pi -h_{2}}^{\pi }\left\Vert H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\right\Vert _{p}\frac{ dt_{1}dt_{2}}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}} \\ &= \mathcal{O}\left(\frac{1}{mn}\right)\int_{\pi -h_{1}}^{\pi }\!\!\int_{\pi
-h_{2}}^{\pi }\left( v(|z_{1}|)+v(|z_{2}|)\right) \\ & \quad \times \left( \frac{\omega (t_{1}+h_{1})}{v(t_{1}+h_{1})}+\frac{\omega (t_{2}+h_{2})}{v(t_{2}+h_{2})}\right) \frac{dt_{1}dt_{2}}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}} \\
&= \mathcal{O}\left(\frac{1}{mn}\right)\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{ \omega (\pi +h_{1})}{v(\pi +h_{1})}+\frac{\omega (\pi +h_{2})}{v(\pi +h_{2})} \right) \\ & \quad \times \int_{\pi -h_{1}}^{\pi }\!\!\int_{\pi -h_{2}}^{\pi }\frac{ dt_{1}dt_{2}}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}} \\
&= \mathcal{O}\left(\frac{1}{mn}\right)\left( v(|z_{1}|)+v(|z_{2}|)\right) \frac{\omega (2\pi )}{v(2\pi )}\int_{\pi }^{\pi +h_{1}}\!\!\int_{\pi }^{\pi +h_{2}}\frac{ dt_{1}dt_{2}}{\left( t_{1}t_{2}\right) ^{2}} \\ &= \mathcal{O}\left( \frac{1}{\left( mn\right) ^{2}}\right) \left(
v(|z_{1}|)+v(|z_{2}|)\right) \frac{\omega (\pi )}{v(\pi )} \\
&= \mathcal{O}\left(\frac{1}{mn}\right)\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{ \omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) . \end{align*}
It is obvious that \begin{align*} J_{441}^{\left( 3\right) } &= \frac{1}{2mn\pi ^{2}}\int_{\pi -h_{1}}^{\pi }\!\!\int_{h_{2}}^{\pi }\bigg(\frac{H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})}{ \left( (t_{1}+h_{1})t_{2}\right) ^{2}}-\frac{ H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})}{\left( (t_{1}+h_{1})t_{2}\right) ^{2}} \\ & \quad +\frac{H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})}{\left( (t_{1}+h_{1})t_{2}\right) ^{2}}-\frac{ H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}\bigg)S(t_{1},t_{2})dt_{1}dt_{2} \\ &= \int_{\pi -h_{1}}^{\pi }\!\!\int_{h_{2}}^{\pi }\bigg( H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})-H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2}) \bigg) \\ & \quad \times \frac{S(t_{1},t_{2})}{\left( (t_{1}+h_{1})t_{2}\right) ^{2}} dt_{1}dt_{2}+\int_{\pi -h_{1}}^{\pi }\!\!\int_{h_{2}}^{\pi }H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2}) \\ & \quad \times \frac{S(t_{1},t_{2})}{\left( t_{1}+h_{1}\right) ^{2}}\bigg(\frac{1}{ t_{2}^{2}}-\frac{1}{\left( t_{2}+h_{2}\right) ^{2}}\bigg) dt_{1}dt_{2}:=J_{441}^{\left( 31\right) }+J_{441}^{\left( 32\right) }. \end{align*} For $\Vert J_{441}^{\left( 31\right) }\Vert _{p}$, we have \begin{align*} \Vert J_{441}^{\left( 31\right) }\Vert _{p} &= \mathcal{O}\left(\frac{1}{mn} \right)\int_{\pi -h_{1}}^{\pi }\!\!\int_{h_{2}}^{\pi }\left\Vert H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2})-H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\right\Vert _{p} \\ & \quad \times \frac{dt_{1}dt_{2}}{\left( (t_{1}+h_{1})t_{2}\right) ^{2}} \\
&= \mathcal{O}\left(\frac{1}{mn}\right)\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) \int_{\pi -h_{1}}^{\pi }\frac{dt_{1}}{\left( t_{1}+h_{1}\right) ^{2}}\int_{h_{2}}^{\pi }\frac{dt_{2}}{t_{2}^{2}} \\
&= \mathcal{O}\left( \frac{1}{m^{2}}\right) \left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) , \end{align*} while for $\Vert J_{441}^{\left( 32\right) }\Vert _{p}$ we obtain \begin{align*} \Vert J_{441}^{\left( 32\right) }\Vert _{p} &= \mathcal{O}\left(\frac{1}{mn} \right)\int_{\pi -h_{1}}^{\pi }\!\!\int_{h_{2}}^{\pi }\left\Vert H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\right\Vert _{p} \\ & \quad \times \frac{1}{\left( t_{1}+h_{1}\right) ^{2}}\bigg(\frac{1}{t_{2}^{2}}- \frac{1}{\left( t_{2}+h_{2}\right) ^{2}}\bigg)dt_{1}dt_{2} \\ &= \mathcal{O}\left(\frac{1}{mn}\right)\int_{\pi -h_{1}}^{\pi }\!\!\int_{h_{2}}^{\pi
}\left( v(|z_{1}|)+v(|z_{2}|)\right) \\ & \quad \times \left( \frac{\omega (t_{1}+h_{1})}{v(t_{1}+h_{1})}+\frac{\omega (t_{2}+h_{2})}{v(t_{2}+h_{2})}\right) \frac{1}{\left( t_{1}+h_{1}\right) ^{2} }\frac{h_{2}(2t_{2}+h_{2})}{t_{2}^{2}\left( t_{2}+h_{2}\right) ^{2}} dt_{1}dt_{2} \\
&= \mathcal{O}\left(\frac{1}{mn^{2}}\right)\left( v(|z_{1}|)+v(|z_{2}|)\right) \bigg(\frac{\omega (\pi +h_{1})}{v(\pi +h_{1})}\int_{h_{2}}^{\pi }\frac{dt_{2}}{t_{2}^{3}} +\frac{\omega (2h_{2})}{2h_{2}v(h_{2})}\int_{h_{2}}^{\pi }\frac{dt_{2}}{ t_{2}^{2}}\bigg)h_{1} \\
&= \mathcal{O}\left( \frac{1}{m^{2}}\right) \left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (\pi )}{v(\pi )}+\frac{\omega (h_{2})}{v(h_{2})}\right) \\
&= \mathcal{O}\left( \frac{1}{m}\right) \left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{ \omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) . \end{align*}
Hence, we obtain \begin{equation*} \Vert J_{441}^{\left( 3\right) }\Vert _{p}=\mathcal{O}\left( \frac{1}{m}
\right) \left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{ \omega (h_{2})}{v(h_{2})}\right) . \end{equation*}
Consequently, we get \begin{equation} \Vert J_{44}^{\left( 3\right) }\Vert _{p}=\mathcal{O}\left( \frac{1}{m}
\right) \left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{ \omega (h_{2})}{v(h_{2})}\right) . \label{eqb4} \end{equation}
In the same way, we obtain \begin{equation} \Vert J_{44}^{\left( 5\right) }\Vert _{p}=\mathcal{O}\left( \frac{1}{n}
\right) \left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{ \omega (h_{2})}{v(h_{2})}\right) . \label{eqb5} \end{equation}
For $J_{44}^{\left( 6\right) }$, we have \begin{align}\label{eqb6} \begin{split} \Vert J_{44}^{\left( 6\right) }\Vert _{p}& = \mathcal{O}\left( \frac{1}{mn} \right) \int_{0}^{h_{1}}\!\!\int_{0}^{h_{2}}\left\Vert H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\right\Vert _{p}\frac{ (mnt_{1}t_{2})^{2}dt_{1}dt_{2}}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2} }\\ &= \mathcal{O}\left( mn\right)
\int_{0}^{h_{1}}\!\!\int_{0}^{h_{2}}\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{ \omega (t_{1}+h_{1})}{v(t_{1}+h_{1})}+\frac{\omega (t_{2}+h_{2})}{ v(t_{2}+h_{2})}\right) dt_{1}dt_{2} \\
&= \mathcal{O}\left( 1\right) \left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) . \end{split} \end{align} As in the estimation of $J_{44}^{\left( 2\right) }$, we have \begin{align*} J_{44}^{\left( 7\right) } &= \frac{1}{2mn\pi ^{2}}\int_{0}^{h_{1}} \int_{h_{2}}^{\pi }H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\frac{ S(t_{1},t_{2})}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}dt_{1}dt_{2} \\ & \quad -\frac{1}{2mn\pi ^{2}}\int_{0}^{h_{1}}\int_{0}^{\pi -h_{2}}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+2h_{2})\frac{S(t_{1},t_{2})}{ \left( (t_{1}+h_{1})(t_{2}+2h_{2})\right) ^{2}}dt_{1}dt_{2} \\ &= \frac{1}{2mn\pi ^{2}}\bigg(\int_{0}^{h_{1}}\int_{h_{2}}^{\pi }\bigg(\frac{ H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}}-\frac{ H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+2h_{2})}{\left( (t_{1}+h_{1})(t_{2}+2h_{2})\right) ^{2}}\bigg) \\ &\qquad \times S(t_{1},t_{2})dt_{1}dt_{2} \\ & \quad -\int_{0}^{h_{1}} \int_{0}^{h_{2}}H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+2h_{2})\frac{ S(t_{1},t_{2})}{\left( (t_{1}+h_{1})(t_{2}+2h_{2})\right) ^{2}}dt_{1}dt_{2} \\ & \quad +\int_{0}^{h_{1}}\int_{\pi -h_{2}}^{\pi }H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+2h_{2})\frac{S(t_{1},t_{2})}{\left( (t_{1}+h_{1})(t_{2}+2h_{2})\right) ^{2}}dt_{1}dt_{2}\bigg) \end{align*} and consequently \begin{equation} \Vert J_{44}^{\left( 7\right) }\Vert _{p}=\mathcal{O}\left( 1\right)
\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) . \label{eqb7} \end{equation}
Analogously, \begin{equation} \Vert J_{44}^{\left( 9\right) }\Vert _{p}=\mathcal{O}\left( 1\right)
\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) . \label{eqb8} \end{equation}
For $J_{44}^{\left( 8\right) }$, we have \begin{align}\label{eqb9} \begin{split} \Vert J_{44}^{\left( 8\right) }\Vert _{p} &= \mathcal{O}\left( \frac{1}{mn} \right) \int_{0}^{h_{1}}\!\!\int_{\pi -h_{2}}^{\pi }\left\Vert H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\right\Vert _{p}\frac{ (mt_{1})^{2}dt_{1}dt_{2}}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}} \\ &= \mathcal{O}\left( \frac{m}{n}\right) \int_{0}^{h_{1}}\!\!\int_{\pi
-h_{2}}^{\pi }\left( v(|z_{1}|)+v(|z_{2}|)\right) \\ & \quad \times \left( \frac{\omega (t_{1}+h_{1})}{v(t_{1}+h_{1})}+\frac{\omega (t_{2}+h_{2})}{v(t_{2}+h_{2})}\right) \frac{dt_{1}dt_{2}}{\left( t_{2}+h_{2}\right) ^{2}} \\
&= \mathcal{O}\left( \frac{m}{n}\right) \left( v(|z_{1}|)+v(|z_{2}|)\right)
\left( \frac{\omega (h_{1})}{v(h_{1})}h_{2}+\int_{\pi -h_{2}}^{\pi } \frac{\omega (t_{2}+h_{2})}{v(t_{2}+h_{2})}\frac{dt_{2}}{\left( t_{2}+h_{2}\right) ^{2}}\right) h_{1} \\
&= \mathcal{O}\left( \frac{1}{n}\right) \left( v(|z_{1}|)+v(|z_{2}|)\right)
\left( \frac{\omega (h_{1})}{v(h_{1})}h_{2}+\int_{\pi }^{\pi +h_{2}} \frac{\omega (\theta _{2})}{v(\theta _{2})}\frac{d\theta _{2}}{\theta _{2}^{2}}\right)\\
&= \mathcal{O}\left( \frac{1}{n^{2}}\right) \left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega (\pi )}{v(\pi )}\right) \\
&= \mathcal{O}\left( \frac{1}{n}\right) \left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{ \omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) . \end{split} \end{align}
By analogy, we also get \begin{equation} \Vert J_{44}^{\left( 11\right) }\Vert _{p}=\mathcal{O}\left( \frac{1}{m}
\right) \left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{ \omega (h_{2})}{v(h_{2})}\right) . \label{eqb10} \end{equation}
Now we need to give the upper bound for $\Vert J_{44}^{\left( 10\right) }\Vert _{p}$. We have \begin{align}\label{eqb11} \begin{split} \Vert J_{44}^{\left( 10\right) }\Vert _{p} &= \mathcal{O}\left( \frac{1}{mn} \right) \int_{h_{1}}^{\pi }\!\!\int_{\pi -h_{2}}^{\pi }\left\Vert H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\right\Vert _{p}\frac{ dt_{1}dt_{2}}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}} \\ &= \mathcal{O}\left( \frac{1}{mn}\right) \int_{h_{1}}^{\pi }\!\!\int_{\pi
-h_{2}}^{\pi }\left( v(|z_{1}|)+v(|z_{2}|)\right) \\ & \quad \times \left( \frac{\omega (t_{1}+h_{1})}{v(t_{1}+h_{1})}+\frac{\omega (t_{2}+h_{2})}{v(t_{2}+h_{2})}\right) \frac{dt_{1}dt_{2}}{\left( t_{1}+h_{1}\right) ^{2}}\\
&= \mathcal{O}\left( \frac{1}{mn}\right) \left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (\pi )}{v(\pi )}\frac{m}{n}+m\int_{\pi }^{\pi +h_{2}}\frac{ \omega (u_{2})du_{2}}{v(u_{2})u_{2}^{2}}\right) \\
&= \mathcal{O}\left( \frac{1}{n^{2}}\right) \left( v(|z_{1}|)+v(|z_{2}|)\right) \frac{ \omega (\pi )}{v(\pi )} \\
&= \mathcal{O}\left( \frac{1}{n}\right) \left( v(|z_{1}|)+v(|z_{2}|)\right) \frac{\omega (h_{2})}{v(h_{2})}. \end{split} \end{align}
By analogy, we obtain \begin{equation} \Vert J_{44}^{\left( 12\right) }\Vert _{p}=\mathcal{O}\left( \frac{1}{m}
\right) \left( v(|z_{1}|)+v(|z_{2}|)\right) \frac{\omega (h_{1})}{v(h_{1})}. \label{eqb12} \end{equation}
Finally, we have \begin{align}\label{eqb13} \begin{split} \Vert J_{44}^{\left( 13\right) }\Vert _{p} &= \mathcal{O}\left( \frac{1}{mn} \right) \int_{\pi -h_{1}}^{\pi }\!\!\int_{\pi -h_{2}}^{\pi }\left\Vert H_{x,z_{1},y,z_{2}}(t_{1}+h_{1},t_{2}+h_{2})\right\Vert _{p}\frac{ dt_{1}dt_{2}}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}} \\ &= \mathcal{O}\left( \frac{1}{mn}\right) \int_{\pi -h_{1}}^{\pi
}\!\!\int_{\pi -h_{2}}^{\pi }\left( v(|z_{1}|)+v(|z_{2}|)\right) \\ & \quad \times \left( \frac{\omega (t_{1}+h_{1})}{v(t_{1}+h_{1})}+\frac{\omega (t_{2}+h_{2})}{v(t_{2}+h_{2})}\right) \frac{dt_{1}dt_{2}}{\left( (t_{1}+h_{1})(t_{2}+h_{2})\right) ^{2}} \\
&= \mathcal{O}\left( \frac{1}{mn}\right) \left( v(|z_{1}|)+v(|z_{2}|)\right) \\ & \quad \times \left( h_{2}\int_{\pi -h_{1}}^{\pi }\frac{\omega (t_{1}+h_{1})}{ v(t_{1}+h_{1})}\frac{dt_{1}}{(t_{1}+h_{1})^{2}}+h_{1}\int_{\pi -h_{2}}^{\pi } \frac{\omega (t_{2}+h_{2})}{v(t_{2}+h_{2})}\frac{dt_{2}}{(t_{2}+h_{2})^{2}} \right) \\
&= \mathcal{O}\left( \frac{1}{mn}\right) \left( v(|z_{1}|)+v(|z_{2}|)\right) \left( h_{2}\int_{\pi }^{\pi +h_{1}}\frac{\omega (\theta _{1})}{v(\theta _{1})} \frac{d\theta _{1}}{\theta _{1}^{2}}+h_{1}\int_{\pi }^{\pi +h_{2}}\frac{ \omega (\theta _{2})}{v(\theta _{2})}\frac{d\theta _{2}}{\theta _{2}^{2}} \right) \\ &= \mathcal{O}\left( \frac{1}{\left( mn\right) ^{2}}\right)
\left( v(|z_{1}|)+v(|z_{2}|)\right) \frac{\omega (\pi )}{v(\pi )}\\
& = \mathcal{O}\left( \frac{1}{mn}\right) \left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{ \omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) . \end{split} \end{align}
Hence, using \eqref{eqb1}-\eqref{eqb13}, we obtain \begin{equation*} \left\Vert J_{44}\right\Vert _{p}=\mathcal{O}\left( 1\right)
\left( v(|z_{1}|)+v(|z_{2}|)\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) . \end{equation*} Combining the partial estimates, we get \begin{equation*}
\sup_{z_{1}\neq 0,\,\,z_{2}\neq 0}\frac{\Vert D_{m,n}(x+z_{1},y+z_{2})-D_{m,n}(x,y)\Vert _{p}}{v(|z_{1}|)+v(|z_{2}|)}= \mathcal{O}\left( 1\right) \left( \frac{\omega (h_{1})}{v(h_{1})}+\frac{ \omega (h_{2})}{v(h_{2})}\right) . \end{equation*} Proceeding along similar lines, one can prove that \begin{equation*} \Vert D_{m,n}\Vert _{p}=\mathcal{O}\left( 1\right) \left( \omega (h_{1})+\omega (h_{2})\right) . \end{equation*} Hence \begin{equation*} \Vert D_{m,n}\Vert _{p}^{(v,v)}=\mathcal{O}\left( 1\right) \left( \frac{ \omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}\right) \end{equation*} which completes the proof. \end{proof}}
We close this section by deriving some particular results, obtained by specializing the functions $\omega(t)$ and $v(t)$ in our theorem. Taking $\omega(t)=t^{\alpha}$ and $v(t)=t^{\beta}$, $0\leq \beta< \alpha\leq 1$, in Theorem \ref{the01} we get:
\begin{corollary} If $f\in \text{Lip}(\alpha,\beta, p)$, $p\geq 1$, $0\leq \beta< \alpha\leq 1$, then \begin{equation*} \Vert \sigma _{m,2m;n,2n}(f)-f\Vert _{p}^{(v,v)}= \mathcal{O} \left( \frac{1 }{m^{\alpha -\beta}}+\frac{1 }{n^{\alpha -\beta}}\right) \end{equation*} for all $m,n\in
\mathbb{N}
$. \end{corollary}
Taking $p=\infty$ in the above corollary, we obtain the following.
\begin{corollary} If $f\in \text{H}_{(\alpha,\beta)}$, $0\leq \beta< \alpha\leq 1$, then \begin{equation*} \Vert \sigma _{m,2m;n,2n}(f)-f\Vert _{\alpha,\beta}= \mathcal{O} \left( \frac{1 }{m^{\alpha -\beta}}+\frac{1 }{n^{\alpha -\beta}}\right) \end{equation*} for all $m,n\in
\mathbb{N}
$. \end{corollary}
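For the reader's convenience, we note how the rate in the above corollaries follows from Theorem \ref{the01}: for $\omega(t)=t^{\alpha}$ and $v(t)=t^{\beta}$ the quotient $\frac{\omega (t)}{v(t)}=t^{\alpha -\beta }$ is non-decreasing, since $\alpha >\beta $, so the hypotheses of the theorem are satisfied, and with $h_{1}=\frac{\pi }{m}$, $h_{2}=\frac{\pi }{n}$ we have \begin{equation*} \frac{\omega (h_{1})}{v(h_{1})}+\frac{\omega (h_{2})}{v(h_{2})}=\left( \frac{\pi }{m}\right) ^{\alpha -\beta }+\left( \frac{\pi }{n}\right) ^{\alpha -\beta }=\mathcal{O}\left( \frac{1}{m^{\alpha -\beta }}+\frac{1}{n^{\alpha -\beta }}\right) . \end{equation*}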
\section{A generalization of Theorem \ref{the01}}
We denote by $W(L^p((-\pi,\pi)^2); \beta_1, \beta_2)$ the weighted space $L^p((-\pi,\pi)^2)$ with weight function $\left|\sin \left(\frac{x}{2}\right)\right|^{\beta_1 p}\left|\sin \left(\frac{y}{2}\right)\right|^{\beta_2 p}$ ($\beta_1,\beta_2\geq 0$), endowed with the norm
$$\|f\|_{p;\beta_1,\beta_2}:=\left(\frac{1}{(2\pi)^2}\int_{-\pi}^{\pi}\int_{-\pi}^{\pi}\left|f(x,y)\right|^p\left|\sin \left(\frac{x}{2}\right)\right|^{\beta_1 p}\left|\sin \left(\frac{y}{2}\right)\right|^{\beta_2 p}dxdy\right)^{1/p}$$ for $1\leq p<\infty$, and
$$\|f\|_{p;\beta_1,\beta_2}:=\text{ess}\!\!\!\!\!\sup_{-\pi\leq x,y \leq \pi }\left\{\left|f(x,y)\right|\left|\sin \left(\frac{x}{2}\right)\right|^{\beta_1 }\left|\sin \left(\frac{y}{2}\right)\right|^{\beta_2 }\right\}$$ for $p=\infty$ (for $\beta_1=0,\beta_2=0$ see \cite{US}).
Analogously, we define the space $H_{p;\beta_1,\beta_2}^{(\omega_1, \omega_2)}$ by \begin{equation*} H_{p;\beta_1,\beta_2}^{(\omega_1, \omega_2 )}:=\left\{f\in W(L^p((-\pi ,\pi )^2); \beta_1, \beta_2), p\geq 1: A(f;\omega_1 , \omega_2 ;\beta_1 ,\beta_2 )<\infty \right\}, \end{equation*} where \begin{equation*}
A(f; \omega_1 , \omega_2 ;\beta_1 ,\beta_2 ):=\sup_{t_1\neq 0,\,\,t_2\neq 0 }\frac{\|f(x +t_1,y
+t_2)-f(x,y)\|_{p ;\beta_1 ,\beta_2 }}{\omega_1 (|t_1|)+\omega_2 (|t_2|)} \end{equation*} and the norm in the space $H_{p ;\beta_1 ,\beta_2 }^{(\omega_1, \omega_2)}$ is defined by \begin{equation*}
\|f\|_{p ;\beta_1 ,\beta_2 }^{(\omega_1, \omega_2)}:=\|f\|_{p ;\beta_1 ,\beta_2 }+A(f;\omega_1, \omega_2 ;\beta_1 ,\beta_2 ). \end{equation*}
We take $k=rm$ and $\ell =qn$ in (\ref{eq1}), to obtain \begin{align*} \sigma_{m,rm;n,qn}:=&\left( 1+\frac{1}{r}\right)\left( 1+\frac{1}{q} \right)\sigma_{m(r+1)-1,n(q+1)-1}-\left( 1+\frac{1}{r}\right)\frac{1}{q} \sigma_{m(r+1)-1,n-1} \nonumber \\ & -\frac{1}{r}\left( 1+\frac{1}{q}\right)\sigma_{m-1,n(q+1)-1}+\frac{1}{ rq}\sigma_{m-1,n-1}, \end{align*} where $r,q\in \{2,4,6,\dots\}$.
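We also record the elementary observation that the coefficients in this combination sum to $1$, \begin{equation*} \left( 1+\tfrac{1}{r}\right) \left( 1+\tfrac{1}{q}\right) -\left( 1+\tfrac{1}{r}\right) \tfrac{1}{q}-\tfrac{1}{r}\left( 1+\tfrac{1}{q}\right) +\tfrac{1}{rq}=\left( 1+\tfrac{1}{r}\right) -\tfrac{1}{r}=1, \end{equation*} so that $\sigma_{m,rm;n,qn}$ reproduces constant functions exactly.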
Hence, we call the mean $\sigma _{m,rm;n,qn}(f;x,y)$ the {\it Double Even-Type Delayed Arithmetic Mean} of $ S_{k,\ell }(f;x,y)$; it can be represented in integral form as
\begin{equation*} \sigma_{m,rm;n,qn}(f;x,y)=\frac{1}{mn\pi^2}\int_{0}^{\pi}\int_{0}^{ \pi}h_{x,y}(t_1,t_2)L_{m,rm;n,qn}(t_1,t_2)dt_1dt_2, \end{equation*} where $$L_{m,rm;n,qn}(t_1,t_2):=\frac{4}{rq}\frac{\sin \frac{\left(r+2\right)mt_1}{2}\sin \frac{rmt_1}{2}\sin \frac{\left(q+2\right)nt_2}{2}\sin \frac{qnt_2}{2}}{\left( 4\sin \frac{t_1}{2}\sin \frac{t_2 }{2}\right)^2}.$$
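As a consistency check, for $r=q=2$ we have $\frac{4}{rq}=1$, $\frac{(r+2)mt_1}{2}=2mt_1$ and $\frac{rmt_1}{2}=mt_1$ (and similarly in $t_2$), so the kernel reduces to \begin{equation*} L_{m,2m;n,2n}(t_1,t_2)=\frac{\sin 2mt_1\sin mt_1\sin 2nt_2\sin nt_2}{\left( 4\sin \frac{t_1}{2}\sin \frac{t_2}{2}\right)^2}, \end{equation*} which corresponds to the mean $\sigma_{m,2m;n,2n}$ considered above.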
Further, we establish a more general theorem on the degree of approximation, in the norm $\|\cdot\|_{p ;\beta_1 ,\beta_2 }^{(v,v)}$, of a function $f$ belonging to $H_{p ;\beta_1 ,\beta_2 }^{(\omega, \omega)}$ by the Double Even-Type Delayed Arithmetic Mean $\sigma_{m,rm;n,qn}(f;x,y)$. It represents a twofold generalization of Theorem \ref{the01}, both in the space considered and in the mean.
\begin{theorem} \label{the05} Let $\omega $ and $v$ be moduli of continuity so that $\frac{ \omega (t)}{v(t)}$ is non-decreasing in $t$. If $f\in H_{p ;\beta_1 ,\beta_2 }^{(\omega, \omega)}$, $p\geq 1$, then \begin{equation*} \Vert \sigma_{m,rm;n,qn}(f)-f\Vert _{p ;\beta_1 ,\beta_2 }^{(v,v)}= \mathcal{O} \left( r\frac{\omega (h_{1}) }{v(h_{1})}+q\frac{\omega (h_{2})}{v(h_{2})}\right) , \end{equation*} where $h_{1}=\frac{\pi }{m}$, $h_{2}=\frac{\pi }{n}$, $m,n\in
\mathbb{N}
$, and $r,q\in \{2,4,6,\dots\}$. \end{theorem}
\begin{proof} The proof proceeds along the same lines as that of Theorem \ref{the01}; we omit the details. \end{proof}
\begin{remark} For $\beta_1 =0$, $\beta_2=0$, $r=2$ and $q=2$, Theorem \ref{the05} reduces exactly to Theorem \ref{the01}. \end{remark}
\section{Conclusions}
In the one-dimensional case, the degree of approximation of functions in the space $H_{p}^{(\omega)}$ has been studied by several authors; see \cite{DGR,DNR,D,UD2,KIM1,XhK,L,NDR,SS} and other references already mentioned here. Inspired by these papers, we have given two corresponding results using the second (even) type double delayed arithmetic means of the Fourier series of a function from the space $H_{p}^{(\omega , \omega)}$ (respectively, $H_{p ;\beta_1 ,\beta_2 }^{(\omega, \omega)}$).
\label{lastpage}
\end{document}
\begin{document}
\title{Cocycle twists of algebras} \author{Andrew Davies} \address{School of Mathematics\\ University of Manchester\\ Manchester\\ United Kingdom\\ M13 9PL} \email{andrewpdavies@gmail.com} \subjclass{16S35, 16S38, 16W22}
\keywords{Cocycle twists, Zhang twists, AS-regular} \date{\today}
\begin{abstract} Let $A = \bigoplus_{n=0}^{\infty}A_n$ be a connected graded $k$-algebra over an algebraically closed field $k$ (thus $A_0=k$). Assume that a finite abelian group $G$, of order coprime to the characteristic of $k$, acts on $A$ by graded automorphisms. In conjunction with a suitable cocycle this action can be used to twist the multiplication in $A$. We study this new structure and, in particular, we describe when properties like Artin-Schelter regularity are preserved by such a twist. We then apply these results to examples of Rogalski and Zhang. \end{abstract}
\maketitle
\section{Introduction}\label{sec: introduction}
In this paper we study a twisting operation on algebras that can be formulated as a Zhang twist \cite{zhang1998twisted} or defined in the context of group actions as in \cite[\S 7.5]{montgomery1993hopf}. We will be primarily concerned with whether certain properties are preserved under such twists, following the work of Montgomery in \cite{montgomery2005algebra}.
If an algebra $A$ is acted on by a finite abelian group $G$ then a $G$-grading is induced. We will denote a cocycle twist of this grading using a 2-cocycle $\mu$ by $A^{G,\mu}$. Such twists can be described in another manner, with the following result of Bazlov and Berenstein showing their equivalence. \begin{proposition}[{\cite[Lemma 3.6]{bazlov2012cocycle}}] The algebra $A^{G,\mu}$ is isomorphic to the fixed ring $(AG_{\mu})^G$ for some action of $G$ on the twisted group algebra $AG_{\mu}$. \end{proposition}
We show that applying such twists --- in particular to connected graded algebras when the action of $G$ respects this structure --- yields some interesting results.
Our main theorem is as follows. It brings together Proposition \ref{prop: uninoeth}, Theorem \ref{thm: asreg} and Propositions \ref{prop: koszul} and \ref{prop: cohenmac}. \begin{theorem}\label{thm: maintheorem} Assume that $A$ is a noetherian connected graded algebra and $G$ a finite abelian group that acts on $A$ by graded automorphisms. If $A$ has one of the following properties then the cocycle twist $A^{G,\mu}$ shares that property: \begin{itemize} \item[(i)] it is strongly noetherian; \item[(ii)] it is AS-regular; \item[(iii)] it is Koszul; \item[(iv)] it is Auslander regular; \item[(v)] it is Cohen-Macaulay. \end{itemize} \end{theorem} Moreover, some of the twists we have uncovered have not been studied previously (see \cite{davies2014cocycle2}).
Our ability to prove that properties are preserved under twisting stems from the fact that one of the constructions of such twists has not been fully exploited (it was briefly remarked upon in \cite[\S 3.4]{bazlov2012cocycle}). This formulation generalises an example of Odesskii from \cite[pg. 89-90]{odesskii2002elliptic}, and is especially useful since it allows the use of faithful flatness arguments via the following lemma. \begin{lemma}[{Lemma \ref{lem: fflat}}]\label{lem: fflat-intro} As an $(A^{G,\mu},A^{G,\mu})$-bimodule there is a decomposition \begin{equation*} AG_{\mu} \cong \bigoplus_{g \in G} {^{\text{id}}(A^{G,\mu})^{\phi_{g}}}, \end{equation*} for some automorphisms $\phi_{g}$ of $A^{G,\mu}$, with $\phi_e=\text{id}$. Each summand is free as a left and right $A^{G,\mu}$-module. Consequently $AG_{\mu}$ is a faithfully flat extension of both $A^{G,\mu}$ and $A$ on both the left and the right. \end{lemma}
Let us describe Odesskii's example, which uses a 4-dimensional Sklyanin algebra. Such an object, whose construction can be phrased in terms of an elliptic curve and a point on it, is important in noncommutative algebraic geometry in the sense of \cite{artin1990some}.
Consider a 4-dimensional Sklyanin algebra over $\ensuremath{\mathbb{C}}$, which we denote by $A$; its parameters and relations are unimportant for the purpose of the example. There is an action of the Klein four-group $G=(C_2)^2$ on $A$ by graded algebra automorphisms and also on $M_2(\ensuremath{\mathbb{C}})$, the ring of $2 \times 2$ complex matrices. For $M \in M_2(\ensuremath{\mathbb{C}})$ and generators $g_1, g_2 \in G$, the action is defined by \begin{equation}\label{eq: matrixaction} M^{g_{1}} =\begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}M\begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix},\; M^{g_{2}}=\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}M\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}. \end{equation} Now take the tensor product of $\ensuremath{\mathbb{C}}$-algebras $A \otimes_{\ensuremath{\mathbb{C}}} M_2(\ensuremath{\mathbb{C}})$. The example is then given by taking the invariant ring under the diagonal action of $G$, $\left(A \otimes_{\ensuremath{\mathbb{C}}} M_2(\ensuremath{\mathbb{C}})\right)^G$.
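It is worth noting why \eqref{eq: matrixaction} defines a genuine action of $(C_2)^2$ on $M_2(\ensuremath{\mathbb{C}})$: writing $\sigma =\begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}$ and $\tau =\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$, one checks that $\sigma ^2=\tau ^2=I$ and $\sigma \tau =-\tau \sigma $; the matrices themselves only commute up to a sign, but since the action is by conjugation these signs cancel, so the actions of $g_1$ and $g_2$ commute and square to the identity.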
It is natural to wonder if the properties of $A$ are shared by this twisted algebra and if this construction can be generalised to any algebra or group. Our attempts to answer these questions motivated the work in this paper. In the subsequent paper \cite{davies2014cocycle2} we study twists of 4-dimensional Sklyanin algebras and related algebras from the perspective of noncommutative algebraic geometry. Theorem \ref{thm: maintheorem} applies to such algebras, providing new examples of AS-regular algebras of global dimension 4.
In a paper appearing on the arXiv in January 2015, Chirvasitu and Smith \cite{chirvasitu2015exotic} prove some of the same results. As noted in their paper, our results had already appeared on the internet in July 2014 \cite{davies2014thesis}.
\subsection{Contents} We now give a brief description of the contents of this paper. In \S\ref{sec: background} we define classical cocycle twists of group-graded algebras, while in \S\ref{sec: construction} we construct the cocycle twists that we will study in two different ways. Our main results appear in \S\ref{sec: preservation}, where we prove both Theorem \ref{thm: maintheorem} and Lemma \ref{lem: fflat-intro}.
To end the paper we look for cocycle twists in the context of Rogalski and Zhang's classification of AS-regular algebras of dimension 4 with three generators and a proper $\ensuremath{\mathbb{Z}}^{2}$-grading \cite{rogalski2012regular}. We show in \S\ref{sec: rogzhang} that several of the families in their classification are related via cocycle twists.
\subsection{Notation}\label{subsec: notation} Throughout $k$ will denote an algebraically closed field and $G$ a finite abelian group, unless otherwise stated. Further assumptions on the characteristic of $k$ will be made at the beginning of \S\ref{sec: construction}. By $A$ we will denote an associative $k$-algebra, often with the additional assumption that it is $\ensuremath{\mathbb{N}}$-graded. This means that $A = \bigoplus_{n \in \ensuremath{\mathbb{N}}} A_n$ with $ab \in A_{n+m}$ for all $a \in A_n$ and $b \in A_m$. Such an algebra is said to be \emph{connected graded} (or \emph{c.g.}\ for brevity) if $A_0=k$ and $\text{dim}_k A_n < \infty$ for all $n \in \ensuremath{\mathbb{N}}$. The \emph{Hilbert series} of a c.g.\ algebra is the power series $H_A(t)=\sum_{n \in \ensuremath{\mathbb{N}}} (\text{dim}_k A_n)t^n$.
When describing relations in such an algebra, we will use shorthand notation for two kinds of commutator. For $x,y \in A$, define \begin{equation*} [x,y]:=xy-yx \text{ and } [x,y]_+ := xy+yx. \end{equation*}
By $\text{Mod}(A)$ we shall denote the category of $A$-modules and by $\text{GrMod}(A)$ the category of $\ensuremath{\mathbb{N}}$-graded $A$-modules. If necessary we will specify whether these are categories of left $A$-modules or right $A$-modules. By $\text{lgldim }A$ and $\text{rgldim }A$ we will denote the left and right global dimensions of $A$ respectively, and by $\text{pdim }M_A$ ($\text{idim }M_A$) the projective (injective) dimension of a right $A$-module $M$. When $\text{lgldim }A = \text{rgldim }A$ we will write $\text{gldim }A$.
We will often consider an action of the group $G$ on $A$ by $k$-algebra automorphisms. The action of an element $g\in G$ on $a \in A$ will be denoted by the superscript $a^g$. We will denote the group of group automorphisms of $G$ by $\text{Aut}(G)$, and similarly use $\text{Aut}_{\ensuremath{\mathbb{N}}\text{-alg}}(A)$ to denote the group of $\ensuremath{\mathbb{N}}$-graded $k$-algebra automorphisms of $A$ when it is graded.
The tensor product $\otimes$ will denote the tensor product over $k$, $\otimes_k$, if no other subscript appears.
\section{Background}\label{sec: background} We begin by defining cocycle twists of group-graded algebras, a classical construction that underpins the twists that we construct in \S\ref{sec: construction}.
Assume that $A$ is an associative $k$-algebra and $G$ a finite group, not necessarily abelian, such that $A$ admits a $G$-graded structure $A = \bigoplus_{g \in G} A_g$. Consider all functions $\mu: G \times G \rightarrow k^{\times}$ satisfying the following relations for all $g,h,l \in G$: \begin{equation}\label{eq: cocycleid} \mu(g,h)\mu(gh,l)=\mu(g,hl)\mu(h,l),\; \mu(e,g)=\mu(g,e)=1. \end{equation} Such a function is called a \emph{2-cocycle of $G$ with values in $k^{\times}$} (or more formally a \emph{normalised} 2-cocycle). One can define a group structure on the set of 2-cocycles of $G$, denoted by $Z^2(G,k^{\times})$, via pointwise multiplication.
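For a concrete illustration (not needed for the general theory), take $G=(\ensuremath{\mathbb{Z}}/2)^2$ and define \begin{equation*} \mu\big((a_1,a_2),(b_1,b_2)\big):=(-1)^{a_2b_1}. \end{equation*} Writing $g=(a_1,a_2)$, $h=(b_1,b_2)$ and $l=(c_1,c_2)$, both sides of the first identity in \eqref{eq: cocycleid} equal $(-1)^{a_2b_1+a_2c_1+b_2c_1}$, while the normalisation condition holds since $\mu$ is bimultiplicative; thus $\mu \in Z^2(G,k^{\times})$. When $\text{char}(k) \neq 2$ this cocycle is not a coboundary: the ratio $\mu(g,h)\mu(h,g)^{-1}=(-1)^{a_2b_1-a_1b_2}$ is nontrivial, whereas this ratio is identically $1$ for any coboundary of an abelian group.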
A new multiplication $\ast_{\mu}$ can be defined for all homogeneous elements $a \in A_g$ and $b \in A_h$ by \begin{equation*} a \ast_{\mu} b := \mu(g,h) ab, \end{equation*} where juxtaposition denotes the `old' multiplication in $A$. The new multiplication is then extended to the whole of $A$ by linearity. \begin{defn}\label{defn: basiccocycletwist} With $\mu$ as above, set $A_{\mu} := (A, \ast_{\mu})$. \end{defn}
The 2-cocycles are precisely those functions $G \times G \rightarrow k^{\times}$ that preserve associativity when the multiplication in an algebra is deformed in this manner. Moreover, twisting by a 2-cocycle preserves the identity element of an algebra. The simplest examples of such twists are twisted group algebras, denoted by $kG_{\mu}$.
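To illustrate such a twist (assuming here that $\text{char}(k) \neq 2$), take $G=(\ensuremath{\mathbb{Z}}/2)^2$ with generators $g_1=(1,0)$, $g_2=(0,1)$ and the 2-cocycle $\mu\big((a_1,a_2),(b_1,b_2)\big)=(-1)^{a_2b_1}$. In $kG_{\mu}$ one computes $g_1 \ast_{\mu} g_1 = g_2 \ast_{\mu} g_2 = e$ and $g_1 \ast_{\mu} g_2 = -\,g_2 \ast_{\mu} g_1$, so that \begin{equation*} kG_{\mu} \cong k\langle x,y \rangle/(x^2-1,\,y^2-1,\,[x,y]_+) \cong M_2(k), \end{equation*} the second isomorphism being induced by $x \mapsto \left(\begin{smallmatrix} 0 & 1 \\ 1 & 0 \end{smallmatrix}\right)$ and $y \mapsto \left(\begin{smallmatrix} 1 & 0 \\ 0 & -1 \end{smallmatrix}\right)$.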
Now consider a 2-cocycle $\phi$ for which there exists a function $\rho: G \rightarrow k^{\times}$ such that \begin{equation*} \phi(g,h)=\rho(g)\rho(h)\rho(gh)^{-1}, \end{equation*} for all $g, h \in G$. Such a cocycle is called a \emph{2-coboundary}. As \cite[Corollary 33.6]{karpilovsky1987structure} shows in a more general situation, the twisted group algebras $kG_{\mu}$ and $kG_{\phi}$ are isomorphic as $G$-graded algebras if and only if $\mu, \phi \in Z^2(G,k^{\times})$ lie in the same coset modulo the subgroup of coboundaries, $B^2(G,k^{\times})$. Thus the isomorphism classes of $G$-graded deformations of $kG$ are parameterised by $Z^2(G,k^{\times})/B^2(G,k^{\times})$, which is called the \emph{Schur multiplier} of $G$ \cite{karpilovsky1987schur}.
As \cite[Example 2.9]{zhang1998twisted} shows, the cocycle twists defined in Definition \ref{defn: basiccocycletwist} can be formulated as Zhang twists. For cocycle twists of $G$-graded algebras one therefore has an equivalence between their categories of $G$-graded modules by \cite[Theorem 3.1]{zhang1998twisted}.
Finally, it will be useful for us to regard the algebra $AG_{\mu} := A \otimes kG_{\mu}$ as a crossed product, whose definition appears in \cite[\S 1.5.8]{mcconnell2001noncommutative}.
\section{Constructions}\label{sec: construction}
Let us fix the base assumptions under which we will work. \begin{hyp}[General case]\label{hyp: generalcase}
Let $A$ be a $k$-algebra where $k$ is an algebraically closed field. Assume that a finite abelian group $G$ acts on $A$ by algebra automorphisms, where $\text{char}(k) \nmid |G|$. Fix an isomorphism between $G$ and its group of characters $G^{\vee}$, mapping $g \mapsto \chi_g$. \end{hyp}
Our primary interest in cocycle twists is to apply them to $\ensuremath{\mathbb{N}}$-graded algebras. As such, we record the following additional assumptions that will often be used. \begin{hyp}[$\ensuremath{\mathbb{N}}$-graded case]\label{hyp: gradedcase} Further to Hypotheses \ref{hyp: generalcase}, assume that $A$ is $\ensuremath{\mathbb{N}}$-graded and $G$ acts on $A$ by $\ensuremath{\mathbb{N}}$-graded algebra automorphisms, i.e. $G \rightarrow \text{Aut}_{\ensuremath{\mathbb{N}}\text{-alg}}(A)$. \end{hyp}
\subsection{Cocycle twists from group actions}\label{subsec: coycletwistgroupaction} In \S\ref{sec: background} we defined twists of a $G$-graded algebra. We will now show that when $G$ is finite abelian a $G$-grading is induced by an action of $G$ on $A$ by algebra automorphisms.
Assume that Hypotheses \ref{hyp: generalcase} hold. By Maschke's Theorem, $A$ splits into a direct sum of 1-dimensional irreducible $kG$-submodules. One defines a grading on $A$ by setting $A_g:=A^{\chi_{g^{-1}}}$, the isotypic component of $A$ corresponding to the character $\chi_{g^{-1}}$. To see that this defines a grading, note that for all homogeneous elements $a \in A_{g_{1}}, b \in A_{g_{2}}$ and $h \in G$, \begin{equation*} (ab)^h=a^hb^h=\chi_{g_{1}^{-1}}(h)a \chi_{g_{2}^{-1}}(h)b=\chi_{(g_1g_2)^{-1}}(h)ab, \end{equation*} which implies that $ab \in A_{g_{1}g_{2}}$. \begin{defn}\label{defn: fullcocycletwist} With the $G$-graded structure described above and a cocycle $\mu$, the resulting twisted algebra is written $A^{G,\mu}:= (A = \bigoplus_{g \in G} A_g, \ast_{\mu})$. Thus for all $a \in A_{g}, b \in A_{h}$ one has $a \ast_{\mu} b = \mu(g,h)ab$. \end{defn}
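As a simple example of the induced grading, let $G=(\ensuremath{\mathbb{Z}}/2)^2$ with generators $g_1=(1,0)$, $g_2=(0,1)$ act on $A=k[x,y]$ by \begin{equation*} x^{g_1}=-x,\quad y^{g_1}=y,\quad x^{g_2}=x,\quad y^{g_2}=-y, \end{equation*} where $\text{char}(k) \neq 2$, and fix the duality $\chi_{(b_1,b_2)}(a_1,a_2)=(-1)^{a_1b_1+a_2b_2}$. Since every element of $G$ is its own inverse one has $A_g=A^{\chi_{g}}$, and the induced $G$-grading records the parity of exponents: $A_{(b_1,b_2)}$ is spanned by the monomials $x^iy^j$ with $i \equiv b_1$ and $j \equiv b_2 \pmod{2}$. In particular $x \in A_{g_{1}}$ and $y \in A_{g_{2}}$.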
We now describe another construction, once again working under Hypotheses \ref{hyp: generalcase}. Define an action of $G$ on the twisted group algebra $kG_{\mu}$ by $g^{h}:=\chi_g(h)g$ for all $g, h \in G$ and extending $k$-linearly. Observe that under this action $kG_{\mu}$ is the regular representation of $G$, with isotypic components of the form $\left(kG_{\mu}\right)^{\chi_{g}}=kg$.
Given this action we can consider the diagonal action of $G$ on the tensor product $A \otimes kG_{\mu} = AG_{\mu}$, where $(ag)^h=a^h g^h= \chi_{g}(h)a^h g$ for all $a \in A$, $g, h \in G$. The algebra in which we are interested is the invariant ring under this action, $(A \otimes kG_{\mu})^G=(AG_{\mu})^G$.
The constructions just defined are related by the following result of Bazlov and Berenstein. \begin{proposition}[{\cite[Lemma 3.6]{bazlov2012cocycle}}]\label{prop: twoconstrequal} Assume that Hypotheses \ref{hyp: generalcase} hold. Then $A^{G,\mu} \cong (AG_{\mu})^G$ as $k$-algebras. \end{proposition}
Henceforth, our use of the term \emph{cocycle twist} and the notation $A^{G,\mu}$ will refer to either of the equivalent twists in this proposition.
Odesskii's example from the introduction is an illustration of the invariant ring construction of a cocycle twist. The only subtlety is that the ring $M_2(\ensuremath{\mathbb{C}})$ is in fact isomorphic to a twisted group algebra over the Klein four-group for a nontrivial 2-cocycle.
\subsection{Twisting the $G$-grading}\label{subsec: twistggrading} In this section we investigate the effect of twisting a $G$-grading by a group automorphism. This idea is described in \cite[Example 3.8]{zhang1998twisted}. Given a $G$-graded algebra $A$ and $\sigma \in \text{Aut}(G)$, one can define a new grading on $A$ by $A_{\sigma}=\bigoplus_{g \in G} B_g$ where $B_g :=A_{\sigma(g)}$ for all $g \in G$. When $G$ is abelian this grading corresponds to another action of $G$ on $A$ by $k$-algebra automorphisms.
We wish to connect this new grading with a cocycle twist. Given a cocycle $\mu$ we can define an action of $\sigma$ on $\mu$ by \begin{equation*} \mu^{\sigma}(g,h):=\mu(\sigma(g),\sigma(h)), \end{equation*} for all $g,h \in G$. It is clear that this is also a 2-cocycle. Moreover, the action of $\sigma$ preserves 2-coboundaries and so there is an action of $\text{Aut}(G)$ on the Schur multiplier of $G$.
\begin{lemma}\label{lem: autoncocycle} For a 2-cocycle $\mu$ the cocycle twist $(A_{\sigma},\ast_{\mu})$ is isomorphic as a $k$-algebra to $(A,\ast_{\mu^{\left(\sigma^{-1}\right)}})$. \end{lemma} \begin{proof}
In the twist $(A_{\sigma},\ast_{\mu})$ consider homogeneous elements $a \in B_g$ and $b \in B_h$. Under the graded structure in $(A,\ast_{\mu^{\left(\sigma^{-1}\right)}})$ one has $a \in A_{\sigma(g)}$ and $b \in A_{\sigma(h)}$. Writing the multiplication of $a$ and $b$ in $(A_{\sigma},\ast_{\mu})$ gives \begin{equation}\label{eq: autactoncocyclemult} a \ast_{\mu} b = \mu(g,h)ab = \mu^{(\sigma^{-1})}(\sigma(g),\sigma(h))ab. \end{equation} Notice that the right-hand side of \eqref{eq: autactoncocyclemult} is precisely the multiplication $a \ast_{\mu^{\left(\sigma^{-1}\right)}} b$ in $(A,\ast_{\mu^{\left(\sigma^{-1}\right)}})$. \end{proof}
We now examine the choice of isomorphism $G \rightarrow G^{\vee}$. Let us use the notation $(A,\phi,\mu)$ for a triple consisting of an algebra, an isomorphism $G \rightarrow G^{\vee}$ and a 2-cocycle. When $G$ acts on $A$ by algebra automorphisms each such triple can be naturally associated to a cocycle twist.
\begin{proposition}\label{prop: benign} Let $G$ act on $A$ by algebra automorphisms. Let $\phi$ and $\rho$ be isomorphisms $G \rightarrow G^{\vee}$ and $\mu$ be a 2-cocycle. Then there exists an automorphism of $G$, $\tau$ say, such that the cocycle twists corresponding to the triples $(A,\phi,\mu)$ and $(A,\rho,\mu^{(\tau^{-1})})$ are isomorphic as $k$-algebras. \end{proposition} \begin{proof} Given $\phi$, we will relate $\rho$ to $\phi$ via an automorphism of $G$ as follows. Firstly, there exists an automorphism $\psi: G^{\vee} \rightarrow G^{\vee}$ such that $\phi = \psi \circ \rho$. Suppose we have an element $x \in A_g$, where the grading is determined under the duality given by $\phi$. This means that for all $h \in G$, \begin{equation*} x^h = \phi(g)^{-1}(h)x=\psi(\rho(g))^{-1}(h)x. \end{equation*}
Since all maps involved are isomorphisms, for all $g \in G$ there exists $k_g \in G$ such that $\psi(\rho(g))=\rho(k_g)$. We claim that the map $\tau:\; g \mapsto k_g$ defines an automorphism of $G$. To see this, note that for all $g,h \in G$ one has \begin{equation*} \rho(k_{gh})=\psi(\rho(gh))=\psi(\rho(g)) \psi(\rho(h)) =\rho(k_{g})\rho(k_{h})=\rho(k_{g}k_{h}). \end{equation*}
As $\rho$ is an isomorphism, it follows that $\tau$ is indeed an automorphism of $G$ as claimed. Under the duality isomorphism $\rho$ one has $x \in A_{k_{g}}$, since \begin{equation}\label{eq: dualgrading1} x^h =\psi(\rho(g))^{-1}(h)x=\rho(k_g)^{-1}(h)x, \end{equation} for all $h \in G$.
Suppose that $x \in A_g$ and $y \in A_h$ for some $g,h \in G$ under the duality given by $\phi$. Thus $x \ast_{\mu} y=\mu(g,h)xy$ in $(A,\phi,\mu)$. Under the duality given by $\rho$ one has $x \in A_{k_{g}}$ by \eqref{eq: dualgrading1}, and in a similar manner $y \in A_{k_{h}}$. Thus in $(A,\rho,\mu^{(\tau^{-1})})$ the multiplication is \begin{equation*} x \ast_{\mu^{\left(\tau^{-1}\right)}} y = \mu^{(\tau^{-1})}(k_g,k_h)xy=\mu(\tau^{-1}(k_g),\tau^{-1}(k_h))xy=\mu(g,h)xy. \end{equation*} Since the multiplications agree on homogeneous elements, this completes the proof. \end{proof}
\section{Preservation of properties}\label{sec: preservation} In this section we prove that many properties are preserved by the twists defined in \S\ref{sec: construction}.
\subsection{Basic properties}\label{subsec: basicprops} \emph{Unless otherwise stated we assume that Hypotheses \ref{hyp: generalcase} hold for all results in this section.}
We first state a useful result regarding the behaviour of regular and normal elements under a cocycle twist. This result is not stated explicitly in \cite{zhang1998twisted}, although the proof is essentially contained in that of Proposition 2.2(1) op.\ cit. \begin{lemma}\label{lem: stillregular} Any element $a \in A$ that is homogeneous with respect to the $G$-grading is regular (normal) in $A$ if and only if it is regular (normal) in $A^{G,\mu}$. \end{lemma}
The next two lemmas (the latter from \cite{montgomery2005algebra}) will be particularly useful when working with algebras defined by generators and relations. \begin{lemma}\label{lem: defrelns} Let $I$ be a $G$-graded ideal of $A$. Then $I$ remains an ideal in $A^{G,\mu}$. Furthermore, a generating set for $I$ that is homogeneous with respect to the $G$-grading is also a generating set for the ideal under twisting. \end{lemma} \begin{proof} That $I$ is still an ideal in the twist is proved in \cite[Proposition 3.1(2)]{montgomery2005algebra}. To complete the proof it suffices to deal with the case that $I=(f)$ for some homogeneous element $f \in A_g$. Suppose that $a \in A$ with homogeneous decomposition $a = \sum_{h \in G} a_h$. One has \begin{equation*} fa = f \ast_{\mu} \left(\sum_{h \in G} \frac{a_h}{\mu(g,h)} \right)\; \text{ and }\; af = \left(\sum_{h \in G} \frac{a_h}{\mu(h,g)} \right) \ast_{\mu} f, \end{equation*} which proves the result. \end{proof} \begin{lemma}[{\cite[ Proposition 3.1(1)]{montgomery2005algebra}}]\label{lemma: finitelygenerated} $A$ is finitely generated as a $k$-algebra if and only if $A^{G,\mu}$ is also finitely generated. Furthermore, if Hypotheses \ref{hyp: gradedcase} hold then $A$ is finitely generated in degree 1 if and only if $A^{G,\mu}$ is too. \end{lemma} \begin{proof} The first part of the statement is proved by Montgomery. By consulting the proof in \cite{montgomery2005algebra}, one can see that a generating set for $A^{G,\mu}$ can be obtained as follows: take a generating set of $A$ and find a vector space $V$ which contains this generating set and is preserved by the action of $G$. Then $A^{G,\mu}$ will be generated by $V$ under the new multiplication on the shared underlying vector space.
One may therefore conclude that under the additional hypotheses of the second statement of this lemma, the property of being finitely generated in degree 1 is preserved. \end{proof}
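The effect of twisting on defining relations can be made explicit in a small example. Let $G=(\ensuremath{\mathbb{Z}}/2)^2$ act on $A=k[x,y]$, with $\text{char}(k) \neq 2$, in such a way that the induced grading satisfies $x \in A_{(1,0)}$ and $y \in A_{(0,1)}$, and take the 2-cocycle $\mu\big((a_1,a_2),(b_1,b_2)\big)=(-1)^{a_2b_1}$. Since $\mu((1,0),(0,1))=1$ and $\mu((0,1),(1,0))=-1$, one has $xy=x \ast_{\mu} y$ and $yx=-\,y \ast_{\mu} x$, so the $G$-homogeneous relation $[x,y]$ of $A$ becomes $[x,y]_+$ when rewritten in terms of the twisted multiplication. One checks that the corresponding twist of $k\langle x,y \rangle$ is again free on $x$ and $y$, and Lemma \ref{lem: defrelns} then yields \begin{equation*} A^{G,\mu} \cong k\langle x,y \rangle/([x,y]_+) = k_{-1}[x,y], \end{equation*} the skew polynomial ring with parameter $-1$.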
We now show that gradings are sometimes preserved under cocycle twists. \begin{lemma}\label{lem: autpresgrad} Suppose that $A$ has an $H$-grading for some arbitrary group $H$ and that a finite abelian group $G$ acts on $A$ by $H$-graded algebra automorphisms. Then any cocycle twist $A^{G,\mu}$ will inherit the $H$-grading from $A$. \end{lemma} \begin{proof} We must show that for all $h_1,h_2 \in H$ and homogeneous elements $x \in A_{h_{1}}$ and $y \in A_{h_{2}}$ one has $x \ast_{\mu} y \in A_{h_{1}h_{2}}$, since then $A^{G,\mu}_{h_{1}} \cdot A^{G,\mu}_{h_{2}} \subseteq A^{G,\mu}_{h_{1}h_{2}}$. As $G$ acts on $A$ by $H$-graded algebra automorphisms, one can apply Maschke's theorem to the $H$-graded components of $A$ under the action of $G$. This allows us to further assume that $x$ and $y$ are homogeneous with respect to the $G$-grading, thus $x \in A_{g_{1}}$ and $y \in A_{g_{2}}$ for some $g_1,g_2 \in G$. Then \begin{equation*} x \ast_{\mu} y = \mu(g_1,g_2)xy \in A_{h_{1}h_{2}}, \end{equation*} which completes the proof. \end{proof} \begin{rem} In particular, Lemma \ref{lem: autpresgrad} implies that $A^{G,\mu}$ inherits the $G$-grading from $A$. \end{rem}
Before stating our next result, we recall the concept of twisting a module by an automorphism. Let $A$ be a $k$-algebra and $\phi$ be a $k$-algebra automorphism. For a right $A$-module $M$, one can define a new right $A$-module $M^{\phi}$ via the multiplication $m \ast_{\phi} a=m\phi(a)$ for all $a \in A$, $m \in M$. One can twist both sides of an $(A,A)$-bimodule in this manner simultaneously; in particular, if such an $(A,A)$-bimodule is free on each side and the same generator can be used in each module structure, then one may assume that the bimodule is untwisted on one side (see \cite[\S2.3]{brown2008dualising}). \begin{lemma}\label{lem: fflat} As an $(A^{G,\mu},A^{G,\mu})$-bimodule there is a decomposition \begin{equation*} AG_{\mu} \cong \bigoplus_{g \in G} {^{\text{id}}(A^{G,\mu})^{\phi_{g}}}, \end{equation*} for some automorphisms $\phi_{g}$ of $A^{G,\mu}$, with $\phi_e=\text{id}$. Each summand is free of rank 1 as a left and right $A^{G,\mu}$-module. Consequently, $AG_{\mu}$ is a faithfully flat extension of $A^{G,\mu}$ on both the left and the right.
Similarly, $_A(AG_{\mu})$ and $(AG_{\mu})_A$ are free modules of finite rank, thus $AG_{\mu}$ is a faithfully flat extension of $A$ on both the left and the right. \end{lemma} \begin{proof} We will proceed as in the proof of the main theorem of \cite{smith1989can}. Let $AG_{\mu}=\bigoplus_{g \in G} M^{\chi_{g}}$ be the isotypic decomposition of $AG_{\mu}$ under the action of $G$. Observe that $A^{G,\mu}=M^{\chi_{e}}$ and $M^{\chi_{g}}M^{\chi_{h}}=M^{\chi_{gh}}$ for all $g,h \in G$, since $G$ acts by algebra automorphisms. This means that each isotypic component $M^{\chi_{g}}$ has an $(A^{G,\mu},A^{G,\mu})$-bimodule structure.
The isotypic component $M^{\chi_{g}}$ contains the element $1 \otimes g$. An arbitrary element in this component has the form $a \otimes h$ for some $a \in A_{g^{-1}h}=A^{\chi_{gh^{-1}}}$. Thus $a \otimes g^{-1}h \in A^{G,\mu}$ and therefore \begin{equation*} a \otimes h= \left(\frac{a \otimes g^{-1}h}{\mu(g^{-1}h,g)}\right) \cdot (1 \otimes g)=(1 \otimes g) \cdot \left(\frac{a \otimes g^{-1}h}{\mu(g,g^{-1}h)}\right). \end{equation*} Consequently, $M^{\chi_{g}}$ is cyclic as a left or a right $A^{G,\mu}$-module. Note that $1 \otimes g$ is regular in $AG$, therefore by Lemma \ref{lem: stillregular} it is also regular in $AG_{\mu}$. This proves that $M^{\chi_{g}}$ is a free $A^{G,\mu}$-module of rank 1 on both the left and the right.
By the discussion prior to the statement of the proposition, we know that the bimodule generated by $1 \otimes g$ is isomorphic to ${^{\text{id}}}(A^{G,\mu})^{\phi_{g}}$ for some algebra automorphism $\phi_{g}$. To describe $\phi_g$ it suffices to look at the left action of a homogeneous element in $A^{G,\mu}$ on $1 \otimes g$, which can be taken to be a free generator for the left $A^{G,\mu}$-module structure. Consider a homogeneous element $a \otimes h \in A^{G,\mu}_h$. One has \begin{equation*} (a \otimes h) \cdot (1 \otimes g) = \mu(h,g) a \otimes hg = (1 \otimes g) \cdot \frac{\mu(h,g)}{\mu(g,h)}(a \otimes h). \end{equation*}
Define a map $\phi_g: A^{G,\mu} \rightarrow A^{G,\mu}$ by $a \otimes h \mapsto \frac{\mu(h,g)}{\mu(g,h)}(a \otimes h)$ on homogeneous elements and extending $k$-linearly. To see that this is a $G$-graded automorphism, consider homogeneous elements $a \otimes h \in A^{G,\mu}_h$ and $b \otimes l \in A^{G,\mu}_l$. Then \begin{equation}\label{eq: phighom} \phi_g(a \otimes h)\phi_g(b \otimes l) = \frac{\mu(h,g)\mu(l,g)\mu(h,l)}{\mu(g,h)\mu(g,l)}(ab\otimes hl). \end{equation}
On the other hand, one can use \eqref{eq: cocycleid} to see that \begin{gather} \begin{aligned}\label{eq: phighom1} \phi_g(\mu(h,l)(ab\otimes hl)) &= \frac{\mu(hl,g)\mu(h,l)}{\mu(g,hl)} (ab\otimes hl) \\ &= \frac{\mu(h,lg)\mu(l,g)\mu(h,l)}{\mu(g,h)\mu(gh,l)} (ab\otimes hl). \end{aligned} \end{gather} Observe that $\frac{\mu(h,lg)}{\mu(gh,l)} = \frac{\mu(h,g)}{\mu(g,l)}$, which follows from $G$ being abelian together with another use of \eqref{eq: cocycleid}. Substituting this expression into \eqref{eq: phighom1} produces the expression in \eqref{eq: phighom}. It is clear that $\phi_g$ is injective, therefore it must be a $G$-graded automorphism of $A^{G,\mu}$ as claimed.
The result is trivial for $A$ by the definition of $AG_{\mu}$. \end{proof}
The previous result allows us to begin proving that various properties are preserved under twisting, beginning with GK dimension. Part (ii) of the following lemma is implicit in \cite[pp.\ 89--90]{odesskii2002elliptic}. \begin{lemma}\label{lem: hilbseries} The following statements are true: \begin{itemize}
\item[(i)] $\text{GKdim }A= \text{GKdim }A^{G,\mu}$;
\item[(ii)] Under Hypotheses \ref{hyp: gradedcase} one has $H_A(t)=H_{A^{G,\mu}}(t)$. In particular, if $A$ is connected graded then so is $A^{G,\mu}$. \end{itemize} \end{lemma} \begin{proof} By Lemma \ref{lem: fflat}, $AG_{\mu}$ is a f.g.\ module over $A$ and $A^{G,\mu}$ on both sides. Applying \cite[Proposition 5.5]{krause2000growth} twice, first to $A \subset AG_{\mu}$, then to $A^{G,\mu} \subset AG_{\mu}$, proves part (i) of the lemma.
Now assume that Hypotheses \ref{hyp: gradedcase} hold. By Lemma \ref{lem: autpresgrad}, $A^{G,\mu}$ possesses the same $\ensuremath{\mathbb{N}}$-graded structure as $A$, thus the dimensions of the graded components are the same for both. \end{proof}
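For instance (purely as an illustration), twisting $A=k[x,y]$ by a suitable action of $(\ensuremath{\mathbb{Z}}/2)^2$ and the cocycle $\mu\big((a_1,a_2),(b_1,b_2)\big)=(-1)^{a_2b_1}$ produces $A^{G,\mu} \cong k_{-1}[x,y]=k\langle x,y \rangle/([x,y]_+)$. Both algebras have the monomial basis $\{x^iy^j\}_{i,j \in \ensuremath{\mathbb{N}}}$, so \begin{equation*} H_{A}(t)=H_{A^{G,\mu}}(t)=\frac{1}{(1-t)^2}, \qquad \text{GKdim }A = \text{GKdim }A^{G,\mu}=2, \end{equation*} in accordance with Lemma \ref{lem: hilbseries}.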
We now turn to the strongly noetherian property which, as demonstrated by results in \cite{rogalski2008canonical}, has strong geometric consequences. \begin{defn}[{\cite[cf. \S 4]{artin1999generic}}] \label{defn: strongnoeth} Let $A$ be a noetherian $k$-algebra. $A$ is \emph{strongly (right) noetherian} if for all commutative noetherian $k$-algebras $R$, $A \otimes R$ is (right) noetherian. \end{defn}
Our result is a partial generalisation of \cite[Proposition 3.1(3)]{montgomery2005algebra} which was concerned with the noetherian condition. \begin{proposition}\label{prop: uninoeth} $A$ is strongly noetherian if and only if $A^{G,\mu}$ is. \end{proposition} \begin{proof} We prove that $A^{G,\mu}$ is strongly right noetherian. Assume that $A$ is strongly noetherian. Then $AG_{\mu}$ is a f.g.\ right $A$-module by the proof of Lemma \ref{lem: fflat}, hence by \cite[Proposition 4.1(1a)]{artin1999generic} $AG_{\mu}$ is strongly right noetherian. Using Lemma \ref{lem: fflat} again, the extension $A^{G,\mu} \subset AG_{\mu}$ is faithfully flat on the right, therefore we can apply \cite[Proposition 4.1(2a)]{artin1999generic} to show that $A^{G, \mu}$ is strongly right noetherian.
In the other direction we can use the same argument but with $AG_{\mu}$ replaced with $A^{G,\mu}G_{\mu^{-1}}$; Lemma \ref{lem: fflat} tells us that $A^{G,\mu}G_{\mu^{-1}}$ is a right faithfully flat extension of both $A^{G,\mu}$ and of $A \cong (A^{G,\mu}G_{\mu^{-1}})^G$.
An identical proof works for the left-sided part of the result. \end{proof}
\subsection{AS-regularity}\label{subsec: asregular} The aim of this section is to prove that being AS-regular is preserved under cocycle twists. Although this property only concerns c.g.\ algebras, we will prove that certain components of the definition are preserved without any assumption on graded structure.
\begin{defn}[{\cite{artin1987graded}}]\label{defn: asregular} Let $d \in \mathbb{N}$. A c.g.\ $k$-algebra $A$ is said to be \emph{AS-regular of dimension $d$} if the following conditions are satisfied: \begin{itemize}
\item[(i)] $A$ has finite GK dimension;
\item[(ii)] $\text{gldim }A=d$;
\item[(iii)] $A$ is AS-Gorenstein; that is, $\text{Ext}^i_A(k,A)= k \delta_{i,d}$ when $k$ and $A$ are considered as left or as right $\ensuremath{\mathbb{N}}$-graded $A$-modules. \end{itemize} \end{defn} One can calculate the Ext group in (iii) in either $\text{Mod}(A)$ or in $\text{GrMod}(A)$; since $A$ and $k$ are f.g.\ modules, the discussion in \cite[\S 1.4]{levasseur1992some} shows that the two Ext groups in question are the same.
Let us first consider condition (ii) regarding global dimension. For the purposes of this section, we only need to show that finite global dimension is preserved under cocycle twists. However, we will prove the more general result that left and right global dimension are preserved, regardless of whether they are equal or not.
We will need the following technical result to compare the global dimension of $A^{G,\mu}$ with that of the twisted group algebra $AG_{\mu}$. There is an analogous version for left modules. \begin{theorem}[{\cite[Theorem 7.2.8(i)]{mcconnell2001noncommutative}}]\label{theorem: mcrobtechnical} Let $R,S$ be rings with $R \subseteq S$ such that $R$ is an $(R,R)$-bimodule direct summand of $S$. Then \begin{equation*} \text{rgldim }R \leq \text{rgldim }S +\text{pdim }S_R. \end{equation*} \end{theorem}
We now show that left and right global dimension are preserved under cocycle twists. \begin{proposition}\label{prop: gldim} One has $\text{rgldim }A=\text{rgldim }A^{G,\mu}$ and $\text{lgldim }A=\text{lgldim }A^{G,\mu}$. \end{proposition} \begin{proof} We will give the proof for right global dimension, from which a left-sided proof can easily be derived. Since $AG_{\mu}$ has the structure of a crossed product we can apply \cite[Theorem 7.5.6(iii)]{mcconnell2001noncommutative} to conclude that $\text{rgldim }A=\text{rgldim }AG_{\mu}$. By Lemma \ref{lem: fflat} we know that $A^{G,\mu}$ is an $(A^{G,\mu},A^{G,\mu})$-bimodule direct summand of $AG_{\mu}$. We may therefore apply Theorem \ref{theorem: mcrobtechnical}, which tells us that \begin{equation*} \text{rgldim }A^{G,\mu} \leq \text{rgldim }AG_{\mu} + \text{pdim }(AG_{\mu})_{A^{G,\mu}}= \text{rgldim }AG_{\mu}. \end{equation*}
Here $\text{pdim }(AG_{\mu})_{A^{G,\mu}}=0$ since the module is free by Lemma \ref{lem: fflat}. We have proved that $\text{rgldim }A^{G,\mu} \leq \text{rgldim }A$.
Now we repeat the argument with the roles of $A$ and $A^{G,\mu}$ reversed, considering them as subrings of $A^{G,\mu}G_{\mu^{-1}}$. One obtains the opposite inequality, which completes the proof. \end{proof}
Let us now address the AS-Gorenstein property. Our main tool to prove the preservation of this property will be the following result of Brown and Levasseur. \begin{proposition}[{\cite[Proposition 1.6]{brown1985cohomology}}]\label{prop: brownlevass} Let $R$ and $S$ be rings and $R \rightarrow S$ a ring homomorphism such that $S$ is flat as a left and right $R$-module. Let $X$ be an $(R,R)$-bimodule such that the $(R,S)$-bimodule $X \otimes_R S$ is an $(S,S)$-bimodule. Then for every f.g.\ left $R$-module $M$ and all $i \geq 0$, there are isomorphisms of right $S$-modules, \begin{equation*} \text{Ext}_R^i(M,X)\otimes_R S \cong \text{Ext}_S^i(S \otimes_R M,X \otimes_R S). \end{equation*} \end{proposition} \begin{rem}\label{rem: x=s} For some of our applications of Proposition \ref{prop: brownlevass} we will take $R=X$. In that case $X \otimes_R S \cong S$ as an $(R,S)$-bimodule, from which $X \otimes_R S$ inherits a natural $(S,S)$-bimodule structure. \end{rem}
\begin{proposition}\label{prop: asgor} Assume that Hypotheses \ref{hyp: gradedcase} hold and that $A$ is connected graded. Then $A$ is AS-Gorenstein of global dimension $d$ if and only if $A^{G,\mu}$ shares this property. \end{proposition} \begin{proof} We will give the proof in the `only if' direction when $k$ and $A$ are considered as left $A$-modules. The proof in the opposite direction is identical by untwisting, while the proof for right modules is almost identical to that below; the only difference is that it requires the use of a right-handed version of Proposition \ref{prop: brownlevass}.
First, observe that under the $(\ensuremath{\mathbb{N}},G)$-bigrading on $AG_{\mu}$ the subalgebra consisting of elements that have degree zero under the \ensuremath{\mathbb{N}}-grading is the twisted group algebra $kG_{\mu}$. As right $AG_{\mu}$-modules one has isomorphisms \begin{equation}\label{eq: degree1factor} k \otimes_A AG_{\mu} \cong k \otimes_{A^{G,\mu}} AG_{\mu} \cong kG_{\mu}, \end{equation} and as left $AG_{\mu}$-modules we have \begin{equation}\label{eq: degree1factor1} AG_{\mu} \otimes_A k \cong AG_{\mu} \otimes_{A^{G,\mu}} k \cong kG_{\mu}. \end{equation}
We now proceed by applying Proposition \ref{prop: brownlevass} with $R=X=A$, $S=AG_{\mu}$ and $M=k$. To see that the hypotheses of that result are satisfied, observe that $A \subset AG_{\mu}$ is flat by Lemma \ref{lem: fflat} and recall Remark \ref{rem: x=s}. Applying Proposition \ref{prop: brownlevass} gives \begin{gather} \begin{aligned}\label{eq: asgorfirststep} \text{Ext}^i_A(k,A) \otimes_A AG_{\mu} &\cong \text{Ext}^i_{AG_{\mu}}(AG_{\mu} \otimes_A k,A \otimes_A AG_{\mu}) \\ &\cong \text{Ext}^i_{AG_{\mu}}(kG_{\mu},AG_{\mu}), \end{aligned} \end{gather} by using \eqref{eq: degree1factor1}. Since $A$ is AS-Gorenstein of global dimension $d$ we know that the left hand side is non-zero only for $i=d$. In that case it is equal to $k \otimes_A AG_{\mu} \cong kG_{\mu}$ by using \eqref{eq: degree1factor}.
We would now like to apply Proposition \ref{prop: brownlevass} a second time, using $R=X=A^{G,\mu}$, $S=AG_{\mu}$ and $M=k$. We may apply the same argument as used earlier in the proof, mutatis mutandis, to see that the hypotheses of that result are satisfied. Applying Proposition \ref{prop: brownlevass} we obtain \begin{gather} \begin{aligned}\label{eq: asgorsecondstep} \text{Ext}^i_{A^{G,\mu}}(k,A^{G,\mu}) \otimes_{A^{G,\mu}} AG_{\mu} & \cong \text{Ext}^i_{AG_{\mu}}(AG_{\mu} \otimes_{A^{G,\mu}} k, A^{G,\mu} \otimes_{A^{G,\mu}} AG_{\mu}) \\
& \cong \text{Ext}^i_{AG_{\mu}}(kG_{\mu}, AG_{\mu}). \end{aligned} \end{gather} Combining the information from \eqref{eq: asgorfirststep} and \eqref{eq: asgorsecondstep} gives \begin{equation*} \text{Ext}^i_{A^{G,\mu}}(k,A^{G,\mu}) \otimes_{A^{G,\mu}} AG_{\mu} \cong \left\{ \begin{array}{cl} kG_{\mu} & \text{if }i=d, \\ 0 & \text{if }i \neq d. \end{array}\right. \end{equation*}
As $A^{G,\mu} \subset AG_{\mu}$ is a faithfully flat extension on the left, $\text{Ext}^i_{A^{G,\mu}}(k,A^{G,\mu})$ must vanish in all degrees for which $i \neq d$. When $i=d$ we have \begin{equation}\label{eq: asgorfinalstep} \text{Ext}^d_{A^{G,\mu}}(k,A^{G,\mu}) \otimes_{A^{G,\mu}} AG_{\mu} \cong kG_{\mu}, \end{equation} as right $AG_{\mu}$-modules.
Since $k$ and $A^{G,\mu}$ are f.g.\ $\ensuremath{\mathbb{N}}$-graded left $A^{G,\mu}$-modules, the module structure defined in \cite[Theorem 1.15]{rotman2008introduction} implies that $\text{Ext}^i_{A^{G,\mu}}(k,A^{G,\mu})$ is a $\ensuremath{\mathbb{Z}}$-graded group. This $\ensuremath{\mathbb{Z}}$-grading is compatible with the right $A^{G,\mu}$-module structure, in which case the graded module structure on $\text{Ext}^i_{A^{G,\mu}}(k,A^{G,\mu})$ allows us to complete the proof as follows. Consider the $(A^{G,\mu},A^{G,\mu})$-bimodule structure of $AG_{\mu}$ described in Lemma \ref{lem: fflat}. Upon restricting the isomorphism in \eqref{eq: asgorfinalstep} to $A^{G,\mu}$, one obtains \begin{equation*} \bigoplus_{g\in G} \text{Ext}^d_{A^{G,\mu}}(k,A^{G,\mu})^{\phi_{g}} \cong (kG_{\mu})_{A^{G,\mu}}. \end{equation*} By considering the $G$-graded components of this isomorphism and noting that $\phi_e$ is the identity, one obtains the isomorphism of right $A^{G,\mu}$-modules $\text{Ext}^d_{A^{G,\mu}}(k,A^{G,\mu}) \cong k$, which proves the result. \end{proof}
We can now combine several previous results to prove the main theorem of this section. \begin{theorem}\label{thm: asreg} Assume that Hypotheses \ref{hyp: gradedcase} hold and $A$ is connected graded. Then $A$ is AS-regular if and only if $A^{G,\mu}$ is. If in addition $A$ has global and GK dimension $\leq 4$, then $A$ is a domain if and only if $A^{G,\mu}$ is a domain. \end{theorem} \begin{proof} The statement about AS-regularity follows from Lemma \ref{lem: hilbseries} and Propositions \ref{prop: gldim} and \ref{prop: asgor}. The second part of the theorem follows from \cite[Theorem 3.9]{artin1991modules}. \end{proof} The property of being a domain is \emph{not} preserved by Zhang twists. Such twists preserve this property when $G$ is an ordered semigroup \cite[Proposition 5.2]{zhang1998twisted} but not in general. See \cite{davies2014cocycle2} for further examples.
\subsection{The Koszul property}\label{subsec: koszul} Our next aim is to show that the Koszul property is preserved under cocycle twists. We begin by giving one of several equivalent definitions for the property.
\begin{defn}[{\cite[Proposition 2.1.3]{koszul1996beilinson}}]\label{defn: koszulcomplex} A c.g.\ $k$-algebra $A$ is \emph{Koszul} if and only if for all $i \geq 0$ the $\ensuremath{\mathbb{Z}}$-graded components of $\text{Ext}_A^i(k,k)$ vanish in all degrees other than degree $i$. \end{defn}
Let us now prove our result concerning preservation of the Koszul property. \begin{proposition}\label{prop: koszul} In addition to Hypotheses \ref{hyp: gradedcase}, assume that $A$ is connected graded and its defining relations are in degree 2. Then $A$ is Koszul if and only if $A^{G,\mu}$ is. \end{proposition} \begin{proof} We wish to apply Proposition \ref{prop: brownlevass} with $R=A$, $S=AG_{\mu}$, $X={_A}k_A$ and $M={_A}k$. Let us check that the hypotheses are satisfied: observe that $A \subset AG_{\mu}$ is flat by Lemma \ref{lem: fflat}, while $X \otimes_R S = kG_\mu$ by \eqref{eq: degree1factor}, whence it has a natural $(AG_{\mu},AG_{\mu})$-bimodule structure. We may therefore apply Proposition \ref{prop: brownlevass}, in which case one has \begin{gather} \begin{aligned}\label{eq: koszulbrown} \text{Ext}_A^i(k,k)\otimes_A AG_{\mu} &\cong \text{Ext}_{AG_{\mu}}^i(AG_{\mu} \otimes_A k,k \otimes_A AG_{\mu}) \\
&\cong \text{Ext}_{AG_{\mu}}^i(kG_{\mu},kG_{\mu}), \end{aligned} \end{gather} using \eqref{eq: degree1factor} and \eqref{eq: degree1factor1} to pass from the first line of this equation to the second.
Now set $R= A^{G,\mu}$, $S=AG_{\mu}$, $X={_A}k_A$ and $M={_A}k$. One can use the same argument as earlier in the proof, mutatis mutandis, to see that the hypotheses of Proposition \ref{prop: brownlevass} are satisfied. Applying that result we obtain \begin{gather} \begin{aligned}\label{eq: koszulbrown1} \text{Ext}_{A^{G,\mu}}^i(k,k)\otimes_{A^{G,\mu}} AG_{\mu} &\cong \text{Ext}_{AG_{\mu}}^i(AG_{\mu} \otimes_{A^{G,\mu}} k,k \otimes_{A^{G,\mu}} AG_{\mu}) \\
&\cong \text{Ext}_{AG_{\mu}}^i(kG_{\mu},kG_{\mu}), \end{aligned} \end{gather} using \eqref{eq: degree1factor} and \eqref{eq: degree1factor1} once again.
The $\ensuremath{\mathbb{Z}}$-grading on $\text{Ext}_A^i(k,k)$ and $\text{Ext}_{A^{G,\mu}}^i(k,k)$ is compatible with their right $A$- and $A^{G,\mu}$-module structures respectively. Thus the tensor products $\text{Ext}_A^i(k,k)\otimes_A AG_{\mu}$ and $\text{Ext}_{A^{G,\mu}}^i(k,k)\otimes_{A^{G,\mu}} AG_{\mu}$ are naturally $\ensuremath{\mathbb{Z}}$-graded right $AG_{\mu}$-modules. The $\ensuremath{\mathbb{Z}}$-grading on the cohomology group $\text{Ext}_{AG_{\mu}}^i(kG_{\mu},kG_{\mu})$ is also compatible with its right $AG_{\mu}$-module structure. Moreover, one can see from the proof of Proposition \ref{prop: brownlevass} (in \cite[Proposition 1.6]{brown1985cohomology}) that the isomorphisms in \eqref{eq: koszulbrown} and \eqref{eq: koszulbrown1} respect these $\ensuremath{\mathbb{Z}}$-graded structures. We may therefore conclude that there is an isomorphism \begin{equation}\label{eq: comparedims} \text{Ext}_A^i(k,k)\otimes_A AG_{\mu} \cong \text{Ext}_{A^{G,\mu}}^i(k,k)\otimes_{A^{G,\mu}} AG_{\mu} \end{equation} of $\ensuremath{\mathbb{Z}}$-graded right $AG_{\mu}$-modules.
Using the free module structures of $_A(AG_{\mu})$ and $_{A^{G,\mu}}(AG_{\mu})$ described in Lemma \ref{lem: fflat}, we may express the isomorphism in \eqref{eq: comparedims} as \begin{equation}\label{eq: comparedims1}
\bigoplus_{|G|}\text{Ext}_A^i(k,k) \cong \bigoplus_{|G|}\text{Ext}_{A^{G,\mu}}^i(k,k), \end{equation} at the level of vector spaces. Furthermore, as $AG_{\mu}$ is an $\ensuremath{\mathbb{N}}$-graded left module over $A$ and over $A^{G,\mu}$, the isomorphism in \eqref{eq: comparedims1} respects the $\ensuremath{\mathbb{Z}}$-graded structure.
Since $A$ is Koszul, we know that the $\ensuremath{\mathbb{Z}}$-graded components of the left hand side of \eqref{eq: comparedims1} vanish in all degrees other than degree $i$. It follows that $\text{Ext}_{A^{G,\mu}}^i(k,k)$ must also vanish in all degrees other than degree $i$, hence $A^{G,\mu}$ must be Koszul. \end{proof}
\subsection{The Cohen-Macaulay property and Auslander regularity}\label{subsec: cohenmac} In this section we will prove that several more homological properties of algebras are preserved under cocycle twists.
The definitions that follow can all be found in \cite[\S 1.2]{levasseur1993modules}. The \emph{grade} of a finitely generated left or right $A$-module $M$ is defined to be the value \begin{equation*} j_A(M)=\text{inf}\{i: \text{Ext}_A^i(M,A)\neq 0\} \in \ensuremath{\mathbb{N}} \cup \{+\infty\}. \end{equation*} \begin{defn}\label{def: cm} A ring $A$ is said to satisfy the \emph{Cohen-Macaulay property} or be \emph{Cohen-Macaulay} (CM) if for all non-zero finitely generated $A$-modules $M$, one has \begin{equation*} \text{GKdim }M+j_A(M)=\text{GKdim }A. \end{equation*} \end{defn} \begin{defn}\label{def: auslanderprops} The ring $A$ satisfies the \emph{Auslander-Gorenstein condition} if for every finitely generated left or right module $M$, all $i \geq 0$ and every $A$-submodule $N$ of $\text{Ext}^i_A(M,A)$, one has $j_A(N)\geq i$.
The ring is \emph{Auslander-Gorenstein} if it satisfies the Auslander-Gorenstein condition and it has finite left and right injective dimension. It is said to be \emph{Auslander regular} if in addition to satisfying the Auslander-Gorenstein condition it has finite global dimension. \end{defn}
The following result shows that all of the properties defined above are preserved under a cocycle twist. \begin{proposition}\label{prop: cohenmac} In addition to Hypotheses \ref{hyp: generalcase}, assume that $A$ is noetherian. Then $A$ has one of the following properties if and only if $A^{G,\mu}$ does as well: \begin{itemize}
\item[(i)] it is Cohen-Macaulay;
\item[(ii)] it is Auslander-Gorenstein;
\item[(iii)] it is Auslander regular. \end{itemize} \end{proposition} \begin{proof} (i) Assume that $A$ is Cohen-Macaulay. As we proved in Lemma \ref{lem: hilbseries}(i), $\text{GKdim }A=\text{GKdim }AG_{\mu}=\text{GKdim }A^{G,\mu}$. Let $M$ be a f.g.\ right $AG_{\mu}$-module. It must also be f.g.\ as an $A$-module since the extension $A \subset AG_{\mu}$ is finite by Lemma \ref{lem: fflat}. By \cite[Lemma 5.4]{ardakov2007primeness} it is clear that the grades of $M_{AG_{\mu}}$ and $M_A$ are equal. One can then apply \cite[Lemma 1.6]{lorenz1988on} to conclude that $\text{GKdim }M_{AG_{\mu}}=\text{GKdim }M_{A}$.
Piecing this together, we find that \begin{gather} \begin{aligned}\label{eq: cohenaagmu} \text{GKdim }M_{AG_{\mu}}+ j_{AG_{\mu}}(M)= \text{GKdim }M_A+j_A(M) &=\text{GKdim }A \\ &=\text{GKdim }AG_{\mu}, \end{aligned} \end{gather} and therefore $AG_{\mu}$ is Cohen-Macaulay.
Now let $M$ be a f.g.\ right $A^{G,\mu}$-module. By applying a right-sided version of Proposition \ref{prop: brownlevass} with $R=X=A^{G,\mu}$ and $S=AG_{\mu}$ we obtain \begin{equation*} AG_{\mu} \otimes_{A^{G,\mu}} \text{Ext}_{A^{G,\mu}}^i \left(M,A^{G,\mu}\right) \cong \text{Ext}_{AG_{\mu}}^i\left(M \otimes_{A^{G,\mu}} AG_{\mu},AG_{\mu}\right). \end{equation*} When combined with faithful flatness of the extension $A^{G,\mu} \subset AG_{\mu}$ by Lemma \ref{lem: fflat}, this implies that \begin{equation*} j_{A^{G,\mu}}(M)=j_{AG_{\mu}}(M \otimes_{A^{G,\mu}} AG_{\mu}). \end{equation*}
By faithful flatness of the extension $A^{G,\mu} \subset AG_{\mu}$, $M$ is contained in $M \otimes_{A^{G,\mu}} AG_{\mu}$. Therefore by the definition of GK dimension one has \begin{equation}\label{eq: gkineq1} \text{GKdim }M_{A^{G,\mu}} \leq \text{GKdim }(M \otimes_{A^{G,\mu}} AG_{\mu})_{A^{G,\mu}}. \end{equation} By \cite[Proposition 5.6]{krause2000growth} one has the inequality \begin{equation}\label{eq: gkineq2} \text{GKdim }M_{A^{G,\mu}} \geq \text{GKdim }(M \otimes_{A^{G,\mu}} AG_{\mu})_{AG_{\mu}}. \end{equation}
Applying \cite[Lemma 1.6]{lorenz1988on} to $M \otimes_{A^{G,\mu}} AG_{\mu}$ and using the inequalities in \eqref{eq: gkineq1} and \eqref{eq: gkineq2} shows that $\text{GKdim }M_{A^{G,\mu}}=\text{GKdim }(M \otimes_{A^{G,\mu}} AG_{\mu})_{AG_{\mu}}$. One can then see from an equality like that in \eqref{eq: cohenaagmu} that the Cohen-Macaulay property is preserved under cocycle twists.
(ii) Using \cite[Proposition 3.9(i)]{yi1995injective} one can see that if $A$ satisfies the Auslander condition then so must $AG_{\mu}$. The twist $A^{G,\mu}$ then satisfies the Auslander condition by \cite[Theorem 2.2(iv)]{teo1996homological}, since the only hypothesis needed is that the extension be flat -- this is true by Lemma \ref{lem: fflat}.
It remains to show that injective dimension is preserved and, by symmetry it suffices to show this for left injective dimension. Consider the $G$-grading on $AG_{\mu}$ for which $(AG_{\mu})_g = A \otimes g$ for all $g \in G$. Under this grading $AG_{\mu}$ is a strongly $G$-graded ring, thus one can apply \cite[Corollary 2.7]{nastasescu1983strongly} with $R = N = AG_{\mu}$ and $\sigma = e$. That result implies that \begin{equation*} \text{idim }_{AG_{\mu}}AG_{\mu} = \text{idim }_{(AG_{\mu})_{e}}(AG_{\mu})_{e}= \text{idim }_{A}A. \end{equation*}
Now consider the $G$-grading on $AG_{\mu}$ under which $(AG_{\mu})_g = A^{G,\mu}(1 \otimes g)$ for all $g \in G$. This $G$-grading is induced by the diagonal action of $G$ on $AG_{\mu}$. It is clear that $AG_{\mu}$ is a strongly $G$-graded ring under this grading as well. One can therefore apply \cite[Corollary 2.7]{nastasescu1983strongly} once again to see that \begin{equation*} \text{idim }_{AG_{\mu}}AG_{\mu} = \text{idim }_{(AG_{\mu})_{e}}(AG_{\mu})_{e}= \text{idim }_{A^{G,\mu}}A^{G,\mu}. \end{equation*}
(iii) We saw in the proof of (ii) that the Auslander condition is preserved. One can then use Proposition \ref{prop: gldim} to show that the global dimensions of $A$ and $A^{G,\mu}$ are equal, which completes the proof. \end{proof}
\section{Twists in relation to Rogalski and Zhang's classification}\label{sec: rogzhang} We will now apply the prior theory to the work in \cite{rogalski2012regular}. As such, we will assume that $\text{char}(k)=0$ for the duration of this section.
Rogalski and Zhang classify AS-regular domains of dimension 4 satisfying two extra conditions: they are generated by three degree 1 elements and admit a proper $\ensuremath{\mathbb{Z}}^{2}$-grading. Properness of such a grading, $A= \bigoplus_{n,m \in \ensuremath{\mathbb{Z}}} A_{m,n}$ say, means that $A_{0,1}\neq 0$ and $A_{1,0}\neq 0$. Their main results are summarised in the following theorem.
\begin{theorem}[{\cite[Theorems 0.1 and 0.2]{rogalski2012regular}}]\label{theorem: rogzhangmain} Let $A$ be an AS-regular domain of dimension 4 which is generated by three degree 1 elements and properly $\ensuremath{\mathbb{Z}}^{2}$-graded. Then either $A$ is a normal extension of an AS-regular algebra of dimension 3, or up to isomorphism it falls into one of eight 1 or 2 parameter families, $\mathcal{A}-\mathcal{H}$. Moreover, any such algebra is strongly noetherian, Auslander regular and Cohen-Macaulay. \end{theorem}
In order to study cocycle twists of such algebras we require graded algebra automorphisms, where in this case graded refers to the c.g.\ structure rather than the additional $\ensuremath{\mathbb{Z}}^{2}$-grading. Section 5 of Rogalski and Zhang's paper is concerned with precisely this topic. The key result is the following, where generic means avoiding some finite set of parameters given in the statement of \cite[Lemma 5.1]{rogalski2012regular}. \begin{theorem}[{\cite[Theorem 5.2(a)]{rogalski2012regular}}]\label{theorem: rogzhangauts} Consider a generic AS-regular algebra $A$ in one of the families $\mathcal{A} - \mathcal{H}$. The graded automorphism group of $A$ is isomorphic either to $k^{\times}\times k^{\times}$ or to $k^{\times}\times k^{\times} \times C_2$. The first case occurs for the families $\mathcal{A}(b,q)$ with $q \neq -1$, $\mathcal{D}(h,b)$ with $h \neq b^4$, $\mathcal{F}$ and $\mathcal{H}$. The second case occurs if $A$ belongs to one of the families $\mathcal{A}(b,-1)$, $\mathcal{B}$, $\mathcal{C}$, $\mathcal{D}(h,b)$ with $h=b^4$, $\mathcal{E}$ or $\mathcal{G}$. \end{theorem} The automorphisms corresponding to $k^{\times}\times k^{\times}$ come from scaling components of the $\ensuremath{\mathbb{Z}}^{2}$-grading.
Let us fix some notation for the remainder of the section. The algebra $A$ will be generated by the three degree 1 elements $x_1,x_2$ and $x_3$, where $x_1, x_2 \in A_{1,0}$ and $x_3 \in A_{0,1}$. We will follow Rogalski and Zhang in referring to the extra automorphism of order 2 as the \emph{quasi-trivial} automorphism. This automorphism interchanges $x_1$ and $x_2$ whilst fixing $x_3$.
We now move on to the main result of this section, in which we show that the presence of the quasi-trivial automorphism implies the existence of cocycle twists relating algebras in different families. We will use the notation $v_i$ to denote generators of a cocycle twist when we wish to suppress the new multiplication symbol $\ast_{\mu}$. \begin{theorem}\label{theorem: rogzhangmymain} Let $G=(C_2)^2=\langle g_1, g_2\rangle$ and let $\mu$ denote the 2-cocycle on $G$ defined by \begin{equation*} \mu(g_1^p g_2^q, g_1^r g_2^s) = (-1)^{ps} \end{equation*} for all $p,q,r,s \in \{0,1\}$. Fix the isomorphism $G \cong G^{\vee}$ given by $g \mapsto \chi_g$, where \begin{equation*} \chi_g(h) = \left\{ \begin{array}{cl} 1 & \text{if }g=e\text{ or }h \in \{e, g\} \\ -1 & \text{otherwise}, \end{array}\right. \end{equation*} for all $g, h \in G$. Then there are $k$-algebra isomorphisms \begin{align*} \mathcal{A}(1,-1)^{G,\mu}\cong \mathcal{D}(1,1),\;\; \mathcal{B}(1)^{G,\mu} &\cong \mathcal{C}(1), \;\; \mathcal{E}(1,\gamma)^{G,\mu}\cong \mathcal{E}(1,-\gamma), \\ \mathcal{G}(1,\gamma)^{G,\mu} &\cong \mathcal{G}(1,\overline{\gamma}). \end{align*} \end{theorem}
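As a sanity check (ours, not part of the proof), one can verify by brute force that the map $\mu$ above really is a normalised 2-cocycle, i.e.\ that $\mu(g,h)\mu(gh,l)=\mu(h,l)\mu(g,hl)$ for all $g,h,l \in G$ and $\mu(e,g)=\mu(g,e)=1$. A short illustrative script, with elements of $(C_2)^2$ encoded as bit pairs:

```python
from itertools import product

# Elements of G = (C2)^2 encoded as pairs (p, q) with addition mod 2.
def mult(g, h):
    return ((g[0] + h[0]) % 2, (g[1] + h[1]) % 2)

# mu(g1^p g2^q, g1^r g2^s) = (-1)^(p*s), as defined in the theorem.
def mu(g, h):
    return (-1) ** (g[0] * h[1])

G = list(product([0, 1], repeat=2))
e = (0, 0)

# 2-cocycle identity over all 64 triples (g, h, l).
assert all(
    mu(g, h) * mu(mult(g, h), l) == mu(h, l) * mu(g, mult(h, l))
    for g, h, l in product(G, repeat=3)
)
# mu is normalised.
assert all(mu(e, g) == 1 and mu(g, e) == 1 for g in G)
```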
In order to save space we will not write out the defining relations of these algebras, but recommend that the reader has \cite{rogalski2012regular} to refer to when reading the proof. \begin{proofof}{\ref{theorem: rogzhangmymain}} Let us begin by defining the action of $G$ which we will use for each of the cocycle twists we perform. Note that all of the algebras in the statement of the result admit the quasi-trivial automorphism. Therefore we can let $g_1$ act via the quasi-trivial automorphism and $g_2$ act by multiplying $x_3$ by -1 and fixing the other two generators.
Since the standard generators are not diagonal with respect to this action, we will instead use the generators
\begin{equation*}
w_1=x_1+x_2,\;\; w_2=x_1-x_2,\;\; w_3=x_3,
\end{equation*} which are homogeneous with respect to any induced $G$-grading, since all automorphisms act on them diagonally. Denoting the algebra we wish to twist by $A$, the induced $G$-grading on the new generators is given by \begin{equation*} w_1 \in A_e,\;\; w_2 \in A_{g_{2}},\;\; w_3 \in A_{g_{1}}. \end{equation*}
The defining relations of any algebra in one of the eight families belong to different components of the $\ensuremath{\mathbb{Z}}^2$-grading. Observe that the algebras $\mathcal{A}(1,-1)$, $\mathcal{B}(1)$, $\mathcal{C}(1)$ and $\mathcal{D}(1,1)$ share three relations, only being distinguished from each other by their relations in the $(2,1)$-component. Writing the shared relations in terms of the diagonal basis we show that they are left invariant under the twist.
The first two are quadratic relations: \begin{align*} 0 &= w_1^2 - w_2^2 = \frac{w_1 \ast_{\mu} w_1}{\mu(e,e)} - \frac{w_2 \ast_{\mu} w_2}{\mu(g_2,g_2)} = w_1 \ast_{\mu} w_1 - w_2 \ast_{\mu} w_2 = v_1^2 - v_2^2,\\ 0 &= w_3 w_1 - w_1 w_3 = \frac{w_3 \ast_{\mu} w_1}{\mu(g_1,e)} - \frac{w_1 \ast_{\mu} w_3}{\mu(e,g_1)} = w_3 \ast_{\mu} w_1 - w_1 \ast_{\mu} w_3 = v_3 v_1 - v_1 v_3, \end{align*} while the third relation is cubic: \begin{align*} 0 = w_3^2w_2 - w_2 w_3^2 &= \frac{w_3 \ast_{\mu} w_3 \ast_{\mu} w_2}{\mu(g_1,g_1)\mu(e,g_2)} - \frac{w_2 \ast_{\mu} w_3 \ast_{\mu} w_3}{\mu(g_2,g_1)\mu(g_1g_2,g_1)} \\ &= w_3 \ast_{\mu} w_3 \ast_{\mu} w_2 - w_2 \ast_{\mu} w_3 \ast_{\mu} w_3 \\ &= v_3^2 v_2 - v_2 v_3^2. \end{align*}
Thus, to verify the first two isomorphisms in the statement of the result it suffices to consider the behaviour under the twist of the only relation they do not share. We first twist this relation in the algebra $\mathcal{A}(1,-1)$, having once again written it in terms of the new generators beforehand: \begin{align*} 0 &= [w_3,[w_1,w_2]_+] \\ &= w_3 w_1 w_2 + w_3 w_2 w_1 - w_1 w_2 w_3 - w_2 w_1 w_3 \\ &= \frac{w_3 \ast_{\mu} w_1 \ast_{\mu} w_2}{\mu(g_1,e)\mu(g_1,g_2)} + \frac{w_3 \ast_{\mu} w_2 \ast_{\mu} w_1}{\mu(g_1,g_2)\mu(g_1g_2,e)} - \frac{w_1 \ast_{\mu} w_2 \ast_{\mu} w_3}{\mu(e,g_2)\mu(g_2,g_1)} - \frac{w_2 \ast_{\mu} w_1 \ast_{\mu} w_3}{\mu(g_2,e)\mu(g_2,g_1)} \\ &= -w_3 \ast_{\mu} w_1 \ast_{\mu} w_2 - w_3 \ast_{\mu} w_2 \ast_{\mu} w_1 - w_1 \ast_{\mu} w_2 \ast_{\mu} w_3 - w_2 \ast_{\mu} w_1 \ast_{\mu} w_3 \\ &= -[v_3,[v_1,v_2]_+]_+. \end{align*} This relation is the same as that in $\mathcal{D}(1,1)$ under the new generators, which proves the first isomorphism.
Let us now move on to $\mathcal{B}(1)$. Twisting the non-shared relation we see that \begin{align*} 0 &= [w_3,[w_2,w_1]]_+ \\ &= w_3 w_2 w_1 - w_3 w_1 w_2 + w_2 w_1 w_3 - w_1 w_2 w_3 \\ &= \frac{w_3 \ast_{\mu} w_2 \ast_{\mu} w_1}{\mu(g_1,g_2)\mu(g_1g_2,e)} - \frac{w_3 \ast_{\mu} w_1 \ast_{\mu} w_2}{\mu(g_1,e)\mu(g_1,g_2)} + \frac{w_2 \ast_{\mu} w_1 \ast_{\mu} w_3}{\mu(g_2,e)\mu(g_2,g_1)} - \frac{w_1 \ast_{\mu} w_2 \ast_{\mu} w_3}{\mu(e,g_2)\mu(g_2,g_1)} \\ &= -w_3 \ast_{\mu} w_2 \ast_{\mu} w_1 + w_3 \ast_{\mu} w_1 \ast_{\mu} w_2 + w_2 \ast_{\mu} w_1 \ast_{\mu} w_3 - w_1 \ast_{\mu} w_2 \ast_{\mu} w_3 \\ &= [v_3,[v_1,v_2]]. \end{align*} This relation is shared by $\mathcal{C}(1)$ under the new generating set, which proves the second isomorphism.
We now move on to the remaining two isomorphisms. The algebras in the relevant families share three relations, two of which we have already shown are preserved under the cocycle twist. This is also true for the third relation, which as yet we have not encountered: \begin{align*} 0 = w_3^2w_2 +w_2w_3^2 &= \frac{w_3 \ast_{\mu} w_3 \ast_{\mu} w_2}{\mu(g_1,g_1)\mu(e,g_2)} + \frac{w_2 \ast_{\mu} w_3 \ast_{\mu} w_3}{\mu(g_2,g_1)\mu(g_1g_2,g_1)} \\&= w_3 \ast_{\mu} w_3 \ast_{\mu} w_2 + w_2 \ast_{\mu} w_3 \ast_{\mu} w_3 \\ &= v_3^2 v_2 +v_2 v_3^2. \end{align*}
Once again, it suffices to see what happens to the non-shared relation. In $\mathcal{E}(1,\gamma)$, where $i = \sqrt{-1}$ and $\gamma = \pm i$, one has \begin{align*} 0 &= w_3w_2w_1 - w_1w_3w_2 +\gamma w_1w_2w_3 - \gamma w_2w_1w_3 \\ &= \frac{w_3 \ast_{\mu} w_2 \ast_{\mu} w_1}{\mu(g_1,g_2)\mu(g_1g_2,e)} - \frac{w_1 \ast_{\mu} w_3 \ast_{\mu} w_2}{\mu(e,g_1)\mu(g_1,g_2)} + \gamma \frac{w_1 \ast_{\mu} w_2 \ast_{\mu} w_3}{\mu(e,g_2)\mu(g_2,g_1)} - \gamma \frac{w_2 \ast_{\mu} w_1 \ast_{\mu} w_3}{\mu(g_2,e)\mu(g_2,g_1)} \\ &= -w_3 \ast_{\mu} w_2 \ast_{\mu} w_1 + w_1 \ast_{\mu} w_3 \ast_{\mu} w_2 + \gamma w_1 \ast_{\mu} w_2 \ast_{\mu} w_3 - \gamma w_2 \ast_{\mu} w_1 \ast_{\mu} w_3 \\ &= -v_3v_2v_1 + v_1v_3v_2 +\gamma v_1v_2v_3 - \gamma v_2v_1v_3. \end{align*} This is the final relation in $\mathcal{E}(1,-\gamma)$ under the new generators, which proves the penultimate isomorphism.
We now twist the final relation of $\mathcal{G}(1,\gamma)$, where $\gamma=\frac{1 + i}{2}$ and so $\overline{\gamma}=\frac{1}{2 \gamma}$: \begin{align*} 0 &= w_3 w_1 w_2 +w_3 w_2 w_1 +i w_1w_2w_3 + i w_2w_1w_3 \\ &= \frac{w_3 \ast_{\mu} w_1 \ast_{\mu} w_2}{\mu(g_1,e)\mu(g_1,g_2)} + \frac{w_3 \ast_{\mu} w_2 \ast_{\mu} w_1}{\mu(g_1,g_2)\mu(g_1g_2,e)} +i\frac{w_1 \ast_{\mu} w_2 \ast_{\mu} w_3}{\mu(e,g_2)\mu(g_2,g_1)} + i \frac{w_2 \ast_{\mu} w_1 \ast_{\mu} w_3}{\mu(g_2,e)\mu(g_2,g_1)} \\ &= -w_3 \ast_{\mu} w_1 \ast_{\mu} w_2 - w_3 \ast_{\mu} w_2 \ast_{\mu} w_1 + i w_1 \ast_{\mu} w_2 \ast_{\mu} w_3 + i w_2 \ast_{\mu} w_1 \ast_{\mu} w_3 \\ &= -v_3 v_1 v_2 - v_3 v_2 v_1 + i v_1 v_2 v_3 + i v_2 v_1 v_3. \end{align*} This is precisely the final relation of $\mathcal{G}(1,\overline{\gamma})$ under the new generators, which proves the last isomorphism in the statement of the theorem. \end{proofof}
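The cocycle factors appearing in the displayed computations of this proof can also be cross-checked mechanically. The following illustrative script (ours, not part of the paper) accumulates the factor $\mu(\deg(w_{i_1}\cdots w_{i_{j-1}}), \deg w_{i_j})$ along each word, using the $G$-degrees $w_1 \in A_e$, $w_2 \in A_{g_2}$, $w_3 \in A_{g_1}$ fixed earlier:

```python
# G = (C2)^2 with elements encoded as pairs (p, q) and addition mod 2;
# mu(g1^p g2^q, g1^r g2^s) = (-1)^(p*s) as in the statement of the theorem.
def mult(g, h):
    return ((g[0] + h[0]) % 2, (g[1] + h[1]) % 2)

def mu(g, h):
    return (-1) ** (g[0] * h[1])

# G-degrees of the diagonal generators: w1 in A_e, w2 in A_{g2}, w3 in A_{g1}.
deg = {1: (0, 0), 2: (0, 1), 3: (1, 0)}

def factor(word):
    """Accumulated cocycle factor mu(deg(prefix), deg(next letter)) relating
    the original product w_{i1}...w_{in} to the twisted one v_{i1}...v_{in}."""
    total, g = 1, (0, 0)
    for i in word:
        total *= mu(g, deg[i])
        g = mult(g, deg[i])
    return total

# Signs in the twisted relation of A(1,-1): the four words w3w1w2, w3w2w1,
# w1w2w3, w2w1w3 pick up exactly the factors shown in the proof.
assert [factor(w) for w in ([3, 1, 2], [3, 2, 1], [1, 2, 3], [2, 1, 3])] == [-1, -1, 1, 1]
# The shared quadratic and cubic relations are unchanged by the twist.
assert all(factor(w) == 1 for w in ([1, 1], [2, 2], [3, 1], [1, 3], [3, 3, 2], [2, 3, 3]))
```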
Combined with the fact that $\mathcal{A}(b,-1)$, $\mathcal{B}(b)$, $\mathcal{C}(b)$, $\mathcal{D}(b,b^4)$, $\mathcal{E}(b,\gamma)$ and $\mathcal{G}(b,\gamma)$ are Zhang twists of the respective algebras in the statement of Theorem \ref{theorem: rogzhangmymain} for any parameter $b \in k^{\times}$ \cite[\S 3]{rogalski2012regular}, this result gives further information about such algebras up to Zhang twists.
\end{document}
\begin{document}
\title[$p$-Proximal contraction]{A remark on the paper ``A note on the paper Best proximity point results for $p$-proximal contractions''}
\author[S.\ Som] {Sumit Som}
\address{ Sumit Som,
Department of Mathematics,
School of Basic and Applied Sciences, Adamas University, Barasat-700126, India.}
\email{somkakdwip@gmail.com}
\subjclass {$54H25$, $47H10$}
\keywords{Best proximity point, $p$-proximal contraction, Banach contraction principle}
\begin{abstract}
In 2020, Altun et al. \cite{AL} introduced the notion of $p$-proximal contractions and proved best proximity point results for this class of mappings. In 2021, Gabeleh and Markin \cite{GB} showed that the best proximity point theorem proved by Altun et al. in \cite{AL} follows from fixed point theory. In this short note, we show that if the $p$-proximal contraction constant satisfies $k<\frac{1}{3},$ then the existence of best proximity points for $p$-proximal contractions follows from the celebrated Banach contraction principle. \end{abstract}
\maketitle
\section{\bf{Introduction}} Metric fixed point theory is an essential part of mathematics, as it gives sufficient conditions ensuring the existence of solutions of the equation $F(x)=x,$ where $F$ is a self-mapping on a metric space $(M,d).$ The Banach contraction principle for standard metric spaces is one of the most important results in metric fixed point theory and has many applications. Let $A,B$ be non-empty subsets of a metric space $(M,d)$ and let $Q:A\rightarrow B$ be a non-self mapping. A necessary condition for the equation $Qx=x$ to have a solution is $Q(A)\cap A\neq \phi$; if $Q(A)\cap A= \phi,$ then $Q$ has no fixed points. In this case, one seeks an element of the domain whose distance from its image is minimal, that is, one aims to minimize $d(x,Qx)$ over $x\in A.$ Since $d(x,Qx)\geq d(A,B)=\inf~\{d(x,y):x\in A, y\in B\},$ one searches for an element $x\in A$ such that $d(x,Qx)= d(A,B).$ Best proximity point problems deal with this situation, and best proximity point theorems are typically established as generalizations of fixed point theorems in metric spaces. In 2020, Altun et al. \cite{AL} introduced the notion of $p$-proximal contractions and proved best proximity point results for this class of mappings. In 2021, Gabeleh and Markin \cite{GB} showed that the best proximity point theorem proved by Altun et al. in \cite{AL} follows from a result in fixed point theory. In this short note, we show that if the $p$-proximal contraction constant satisfies $k<\frac{1}{3},$ then the existence of best proximity points for $p$-proximal contractions follows from the Banach contraction principle.
\section{\bf{Main results}}
We first recall the following definition of $p$-proximal contraction from \cite{AL}.
\begin{definition}\cite{AL} Let $(A, B)$ be a pair of nonempty subsets of a metric space $(M,d).$ A mapping $f:A\rightarrow B$ is said to be a $p$-proximal contraction if there exists $k\in (0,1)$ such that \[ \begin{rcases}
d(u_1, f(x_1))= d(A, B)\\
d(u_2, f(x_2))= d(A, B)
\end{rcases}
{\Longrightarrow d(u_1,u_2)\leq k \Big(d(x_1,x_2)+|d(u_1,x_1)-d(u_2,x_2)|\Big)}
\] for all $u_1, u_2, x_1, x_2 \in A,$ where $d(A,B) = \inf\Big\{d(x, y): x\in A,\ y\in B\Big\}.$ \end{definition} In this paper, we call the constant $k$ in the above definition the $p$-proximal contraction constant. The following notations will be needed. Let $(M,d)$ be a metric space and $A,B$ be nonempty subsets of $M.$ Then $$A_0=\{x\in A: d(x,y)=d(A,B)~\mbox{for some}~y\in B\}.$$ $$B_0=\{y\in B: d(x,y)=d(A,B)~\mbox{for some}~x\in A\}.$$
\begin{definition}\cite{BS} Let $(M,d)$ be a metric space and $A,B$ be two non-empty subsets of $M.$ Then $B$ is said to be approximatively compact with respect to $A$ if every sequence $\{y_n\}$ in $B$ satisfying $d(x,y_n)\rightarrow d(x,B)$ as $n\rightarrow \infty$ for some $x\in A$ has a convergent subsequence. \end{definition}
We need the following lemma from \cite{FE}. \begin{lemma}\cite[Proposition 3.3]{FE}\label{b} Let $(A,B)$ be a nonempty and closed pair of subsets of a metric space $(X,d)$ such that $B$ is approximatively compact with respect to $A.$ Then $A_0$ is closed. \end{lemma}
In \cite{AL}, Altun et al. proved the following best proximity point result. \begin{theorem}\cite{AL}\label{a} Let $A,B$ be nonempty and closed subsets of a complete metric space $(M,d)$ such that $B$ is approximatively compact with respect to $A.$ Let $T:A\rightarrow B$ be a $p$-proximal contraction such that $A_0\neq \phi$ and $T(A_0)\subseteq B_0.$ Then $T$ has a unique best proximity point. \end{theorem}
In \cite{GB}, Gabeleh and Markin showed that Theorem \ref{a} follows from the following fixed point theorem. \begin{theorem}\cite{OP}\label{c} Let $(M,d)$ be a complete metric space and $T:M\rightarrow M$ be a $p$-contraction mapping. Then $T$ has a unique fixed point and, for any $x_0\in M,$ the Picard iteration sequence $\{T^{n}(x_0)\}$ converges to the fixed point of $T.$ \end{theorem}
We now state our main result. \begin{theorem} If the $p$-proximal contraction constant satisfies $0<k<\frac{1}{3},$ then Theorem \ref{a} follows from the Banach contraction principle. \end{theorem}
\begin{proof} Let $x\in A_0.$ As $T(A_0)\subseteq B_0,$ we have $T(x)\in B_0,$ so there exists $y\in A_0$ such that $d(y,T(x))=d(A,B).$ We first show that such a $y\in A_0$ is unique. Suppose there exist $y_1,y_2\in A_0$ such that $d(y_1,T(x))=d(A,B)$ and $d(y_2,T(x))=d(A,B).$ Since $T:A\rightarrow B$ is a $p$-proximal contraction, we have
$$d(y_1,y_2)\leq k\Big(d(x,x)+|d(y_1,x)-d(y_2,x)|\Big)\leq k d(y_1,y_2),$$ which forces $y_1=y_2$ since $k<1.$ Let $S_1:A_0\rightarrow A_0$ be defined by $S_1(x)=y.$ Now, we will show that $S_1$ is a contraction mapping. Let $x_1,x_2\in A_0.$ Since $d(S_1(x_1),T(x_1))=d(A,B)$, $d(S_1(x_2),T(x_2))=d(A,B)$ and $T$ is a $p$-proximal contraction, we have
$$d(S_1(x_1),S_1(x_2))\leq k\Big(d(x_1,x_2)+|d(S_1(x_1),x_1)-d(S_1(x_2),x_2)|\Big)$$
$$\Longrightarrow d(S_1(x_1),S_1(x_2))\leq k\Big(d(x_1,x_2)+ d(S_1(x_1),S_1(x_2))+d(x_1,x_2)\Big),$$ where we used $|d(S_1(x_1),x_1)-d(S_1(x_2),x_2)|\leq d(S_1(x_1),S_1(x_2))+d(x_1,x_2),$ $$\Longrightarrow d(S_1(x_1),S_1(x_2))\leq \frac{2k}{1-k} d(x_1,x_2).$$ Since $0<k<\frac{1}{3},$ we have $0<\frac{2k}{1-k}<1.$ This shows that $S_1:A_0\rightarrow A_0$ is a Banach contraction mapping. From Lemma \ref{b}, $A_0$ is closed, hence $A_0$ is a complete metric space. By the Banach contraction principle, the mapping $S_1$ has a unique fixed point $z\in A_0.$ Now, $d(z,T(z))=d(S_1(z),T(z))=d(A,B),$ which shows that $z$ is a best proximity point for $T.$ Uniqueness follows from the definition of a $p$-proximal contraction. Moreover, for any $x_0\in A_0$ the sequence $\{S_1^{n}(x_0)\}$ converges to the unique best proximity point of $T.$ \end{proof}
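The threshold $k<\frac{1}{3}$ is exactly the condition for the induced constant $\frac{2k}{1-k}$ of $S_1$ to be smaller than $1$. A quick numeric sanity check of this (the function name is ours, purely illustrative):

```python
from fractions import Fraction

# From the proof: d(S1(x1), S1(x2)) <= (2k/(1-k)) d(x1, x2), so S1 is a
# Banach contraction exactly when 2k/(1-k) < 1, i.e. when k < 1/3.
def induced_constant(k):
    assert 0 < k < 1
    return 2 * k / (1 - k)

assert all(induced_constant(k) < 1 for k in (0.05, 0.2, 0.3))
assert induced_constant(Fraction(1, 3)) == 1   # exact boundary case
assert all(induced_constant(k) > 1 for k in (0.4, 0.5, 0.9))
```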
\section{\bf{Conclusion}} The main contribution of the current paper is to show that if the $p$-proximal contraction constant satisfies $0<k<\frac{1}{3},$ then the best proximity point theorem of Altun et al. \cite{AL} follows from the Banach contraction principle. If $\frac{1}{3}\leq k<1,$ then the theorem follows from Theorem \ref{c}, as already shown by Gabeleh and Markin in \cite{GB}.
\end{document}
\begin{document}
\begin{abstract} An ideal $I$ is said to be ``well-poised'' if all of the initial ideals obtained from points in the tropical variety $\textup{Trop}(I)$ are prime. This condition was first defined by Nathan Ilten and the third author. We classify all well-poised hypersurfaces over an algebraically closed field. We also compute the tropical varieties and associated Newton-Okounkov bodies of these hypersurfaces. \end{abstract}
\title{Well-poised hypersurfaces}
\tableofcontents
\section{Introduction}
Let $\Bbbk [{\bf x}] = \Bbbk [x_1, \ldots, x_n]$ be the polynomial ring in $n$ variables over an algebraically closed field $\Bbbk $ and let $I \subset \Bbbk [{\bf x}]$ be a monomial-free ideal. The \emph{tropical variety} $\textup{Trop}(I)$ associated to $I$ is the set of vectors $\boldsymbol \omega \in \mathbb{R}^n$ whose associated ideal of \emph{initial forms} $\ini{\boldsymbol \omega}(I)$ (see \cref{eq-initialform}) also contains no monomials, \cite{Maclagan-Sturmfels}. The zero locus $V(\ini{\boldsymbol \omega}(I))$ of such an initial ideal is a flat degeneration of the affine variety $V(I) \subseteq \mathbb{A}^n(\Bbbk )$. When $\ini{\boldsymbol \omega}(I)$ is a prime binomial ideal, the variety $V(\ini{\boldsymbol \omega}(I))$ is an affine (possibly non-normal) toric variety, and we say that $\boldsymbol \omega \in \textup{Trop}(I)$ defines a \emph{toric degeneration} of $V(I)$. In this case, $\boldsymbol \omega$ is said to be a \emph{prime point} of $\textup{Trop}(I)$, and the open face $\sigma$ of the Gr\"obner fan of $I$ containing $\boldsymbol \omega$ in its relative interior (likewise contained in $\textup{Trop}(I)$) is a \emph{prime cone}. In the following, we give $\textup{Trop}(I)$ the fan structure inherited from the Gr\"obner fan of $I$, and we let $\ini{\sigma}(I)$ denote the initial ideal associated to a relatively open face $\sigma \in \textup{Trop}(I)$.
Due to their close connection with polyhedral geometry, prime binomial ideals and their associated toric varieties are often easier to handle than general prime ideals. For example, the Gorenstein property, the Cohen-Macaulay property, the Koszul property, normality of the corresponding variety, and bounds on the Betti numbers can be more easily checked for prime binomial ideals. Moreover, if these properties hold for a flat degeneration of a variety, they can be established for the variety itself. In this way toric degeneration can be a useful tool for studying both the geometry of the original variety $V(I)$ and its coordinate algebra $\Bbbk [{\bf x}]/I$. In particular, it can be shown that there is a \emph{Newton-Okounkov body} (\cite{KK}, \cite{LM}, \cite{Ok}) associated to each prime cone of maximal dimension in $\textup{Trop}(I)$ (\cite{Kaveh-Manon-NOK}) from which many invariants of $V(I)$ can be extracted. Recently, Escobar and Harada \cite{Escobar-Harada} have shown that maximal prime cones in $\textup{Trop}(I)$ which share a facet give rise to a \emph{wall-crossing} phenomenon between their associated Newton-Okounkov bodies. For this reason it is of interest to know when $\ini{\boldsymbol \omega}(I)$ is a prime ideal for every $\boldsymbol \omega \in \textup{Trop}(I)$. Following work of the third author and Ilten in \cite{Ilten-Manon} such an ideal is said to be \emph{well-poised}. In this paper, we classify all well-poised principal ideals (Theorem \ref{maintheorem}). A description of Newton-Okounkov bodies for well-poised hypersurfaces appears in Section \ref{Angela}.
We write $f = \sum c_i {\bf x}^{{\bf a}_i}$ to mean a polynomial in $\Bbbk [{\bf x}]$ with monomial terms $c_i{\bf x}^{{\bf a}_i}$, for $c_i \in \Bbbk $ and ${\bf x}^{{\bf a}_i} = x_1^{a_{i,1}}\cdots x_n^{a_{i,n}}$. The initial form:
\begin{equation}\label{eq-initialform}
\ini{\boldsymbol \omega}(f) = \sum_{{\bf a}_i \in M} c_i{\bf x}^{{\bf a}_i}
\end{equation}
\noindent
for a real vector $\boldsymbol \omega = (\omega_1, \ldots, \omega_n) \in \mathbb{R}^n$ is the sum of those monomial terms $c_i{\bf x}^{{\bf a}_i}$, where ${\bf a}_i$ belongs to the set $M$ of those exponents whose inner product with $\boldsymbol \omega$ is maximal (see \cite{Sturmfels96}). Likewise, the initial ideal $\ini{\boldsymbol \omega}(I) \subset \Bbbk [{\bf x}]$ is the ideal generated by the initial forms $\{\ini{\boldsymbol \omega}(f) \mid f \in I\}$. If $I$ is principal, say generated by $f \in \Bbbk [{\bf x}]$, then $\ini{\boldsymbol \omega}(I) = \langle \ini{\boldsymbol \omega}(f) \rangle$; with this in mind we get the following definition.
\begin{definition}\label{wellpoiseddefinition} A polynomial $f$ is said to be {\bf well-poised} if every initial form which is not a monomial is irreducible. \end{definition}
\noindent We introduce the following conventions. We say that $\gcd({\bf a}_i, {\bf a}_j)$ is the gcd of all entries of the two exponent vectors. We recall that the \emph{support} of a monomial term ${\bf x}^{{\bf a_i}}$, denoted $supp({\bf x}^{{\bf a_i}})$, is the set $\{j : {\bf a}_{i,j} \neq 0\}$. Several of our results will involve the condition that $supp(x^{{\bf a}_i})\cap supp(x^{{\bf a}_j})=\varnothing$ for two monomials in a polynomial $f$, and so we give the following definition: \begin{definition}\label{def-disjointly} We say a polynomial $f = \sum_{i =1}^n c_i{\bf x}^{{\bf a}_i}$ is disjointly supported if $supp(x^{{\bf a}_i})\cap supp(x^{{\bf a}_j})=\varnothing$ for all $i \neq j$ with $c_i, c_j \neq 0$. \end{definition}
The following is our main result.
\begin{restatable}{theorem}{maintheorem} \label{maintheorem} A polynomial $f = \sum_{i \in \mathbb{N}}c_i {\bf x}^{{\bf a}_i}$ is well-poised if and only if $f$ is disjointly supported and $\gcd({\bf a}_i, {\bf a}_j) = 1$ for any pair $i,j\in \mathbb{N}$. \end{restatable}
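The criterion of the theorem is easy to test mechanically. The sketch below, with names of our own choosing, checks a list of exponent vectors for pairwise disjoint supports and pairwise gcd $1$ (the gcd taken over all entries of the two vectors, as in our convention above).

```python
from math import gcd
from functools import reduce
from itertools import combinations

def is_well_poised(exponents):
    """Check the criterion of the main theorem: pairwise disjoint
    supports, and for each pair of exponent vectors the gcd of all
    their entries equals 1. (Our own sketch of the stated criterion.)"""
    supports = [{j for j, e in enumerate(a) if e} for a in exponents]
    for (s, a), (t, b) in combinations(zip(supports, exponents), 2):
        if s & t:
            return False  # overlapping supports: a variable factors out
        if reduce(gcd, list(a) + list(b)) != 1:
            return False  # a common factor d > 1 gives a reducible binomial
    return True

# E8 singularity x^2 + y^3 + z^5: well-poised (Example E8)
assert is_well_poised([(2, 0, 0), (0, 3, 0), (0, 0, 5)])
# Plucker relation p12*p34 - p13*p24 + p14*p23: well-poised (Example Grass)
assert is_well_poised([(1, 0, 0, 0, 0, 1), (0, 1, 0, 0, 1, 0), (0, 0, 1, 1, 0, 0)])
# x^2 + y^2: the gcd of all entries is 2, so not well-poised
assert not is_well_poised([(2, 0), (0, 2)])
```

Note that `gcd` ignores zero entries (since $\gcd(0, m) = m$), so reducing over the concatenated vectors implements the convention directly.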
We also give a complete description of two combinatorial invariants of a well-poised hypersurface. We show that the \emph{Newton polytope}, denoted $N(f)$, of any well-poised hypersurface contains no lattice points other than its vertices (\cref{interiorlatticepoints}), and we give a complete description of the tropical variety $\textup{Trop}(f)$ (\cref{Angela}).
\begin{restatable}{theorem}{interiorlatticepoints} \label{interiorlatticepoints} Let $f$ be well-poised. Then $N(f)$ is a simplex. Further, $N(f)\cap \mathbb{Z}^n$ is precisely the vertex set of $N(f)$. \end{restatable}
In Section \ref{Angela} we determine the structure of the tropical variety of a polynomial with disjoint supports. We also note that for these disjointly supported hypersurfaces (and consequently well-poised hypersurfaces), the computation of the singular locus is a straightforward exercise in computing the Jacobian. By examining these conditions, one can easily compute the codimension of the singular locus and determine normality by applying Serre's criterion.
\begin{example}[E8 Singularity] \label{E8} \emph{ The Du Val $E_8$ singularity is given by the solution set of $x^2 + y^3 + z^5=0$. This is both well-poised and normal. The hypersurface $x^2 + y^3 + z^5=0$ defines a normal, rational, complexity-1 $T$-variety as studied in \cite{Ilten-Manon}. In particular, this is a \emph{semi-canonical} embedding of the hypersurface, and is therefore well-poised by the results in \cite{Ilten-Manon}. } \end{example}
\begin{example}[The Grassmannian $Gr_2(4)$] \label{Grass} \emph{ The affine cone over the Pl\"ucker embedding of the Grassmannian variety $Gr_2(4)$ is given by the solution set of $p_{12}p_{34} - p_{13}p_{24} + p_{14}p_{23}=0$ in $\bigwedge^2(\Bbbk ^4)$. A simple application of our results show that this is well-poised. } \end{example}
Polynomials of the type described in Theorem \ref{maintheorem} have appeared recently in work of Hausen, Hische, and Wr\"obel on Mori dream spaces of low Picard number (\cite{HHW}). The Cox rings of smooth general arrangement varieties of true complexity $2$ and Picard number $2$ of all types except $14$ are presented by well-poised hypersurfaces. Type $14$ is well-poised as well, but it is not a hypersurface. As a consequence, any projective coordinate ring of one of these varieties carries a full rank valuation with finite Khovanskii basis for each maximal cone of the tropicalization of the hypersurface, and the varieties themselves have a toric degeneration for each maximal cone of the tropical variety of the hypersurface. We illustrate this construction with an example of a singular del Pezzo surface. We remark that the spectra of these Cox rings are also arrangement varieties, and so can be proved to be well-poised by recent results of Joseph Cummings and the third author \cite{Cummings-Manon} by a different method.
\begin{example} \emph{ We consider the example of the singular, $\mathbb{Q}$-factorial Gorenstein del Pezzo surface $X$ described in \cite[Example 3.2.1.6]{Cox-book}. The Cox ring of $X$ is presented by a well-poised hypersurface:} \[Cox(X) = \Bbbk [T_1,T_2,T_3,T_4,T_5]/\langle T_1T_2+T_3^2+T_4T_5\rangle.\]
\noindent \emph{The lineality space of $T_1T_2+T_3^2+T_4T_5$ has basis given by the rows of the matrix:}
\[M = \begin{bmatrix} 1 & 1 & 1 & 1 & 1\\ 1 & -1 & 0 & -1 & 1\\ 1 & 1 & 1 & 0 & 2\\ \end{bmatrix}.\]
\emph{ \noindent By Theorem \ref{maintheorem}, we obtain a toric degeneration of $Cox(X)$ for each of the three maximal cones of $\textup{Trop}(\langle T_1T_2+T_3^2+T_4T_5 \rangle)$. The corresponding \emph{global Newton-Okounkov bodies} of $X$ are obtained as the $\mathbb{Q}_{\geq 0}$ spans of the columns of the matrices $M_1, M_2, M_3$ obtained by appending $w_1 = (1, 1, 1, 0, 0)$, $w_2 = (1, 1, 0, 1, 1)$, or $w_3 = (0, 0, 1, 1, 1)$ to the bottom of $M$, respectively. }
\emph{The grading on $Cox(X)$ defined by the 2nd and 3rd rows of $M$ coincides with the class group grading. Let $M'$ be the matrix given by these rows, so that the effective classes on $X$ coincide with the $\mathbb{Z}_{\geq 0}$ span of the columns of $M'$. For $(i, j)$ we let $\Gamma(i,j) \subset Cox(X)$ be the $(i, j)$-th graded component. The polynomial ring $S = \Bbbk [T_1, T_2, T_3, T_4, T_5]$ also inherits a grading by the columns of $M'$, and for each $(i, j)$ there is a surjection $S(i, j) \to \Gamma(i, j) \to 0$. The initial forms $in_{w_i}(T_1T_2+T_3^2+T_4T_5)$ are each homogeneous with respect to the grading by $M'$, as a consequence $S(i, j)$ also surjects onto the $(i, j)$-th component of each degeneration $\Bbbk [T_1, T_2, T_3, T_4, T_5]/\langle in_{w_i}(T_1T_2 + T_3^2 + T_4T_5)\rangle$.}
\begin{figure}
\caption{Newton-Okounkov bodies $\Delta_1(X, (0, 6))$ and $\Delta_2(X, (0, 6))$.}
\label{NOKbodies}
\end{figure}
\emph{The divisor class $(0, 6)$ is ample; let $R = \bigoplus_{n \geq 0} \Gamma(0, 6n)$ be its projective coordinate ring. The space $S(0, 6)$ has basis given by the $23$ monomials $T^{\bf a}$ such that ${\bf a} = (a_1, a_2, a_3, a_4, a_5) \in \mathbb{Z}_{\geq 0}^5$ satisfies $a_1 -a_2 -a_4 + a_5 = 0$ and $a_1 + a_2 + a_3 + 2a_5 = 6$. The subring $S_{0, 6} = \bigoplus_{ n \geq0} S(0, 6n) \subset S$ can be shown to be generated by the component $S(0, 6)$. As a consequence, $R$ and each of the three degenerations \[gr_i(R) \subset \Bbbk [T_1, T_2, T_3, T_4, T_5]/\langle in_{w_i}(T_1T_2 + T_3^2 + T_4T_5)\rangle,\ i \in \{1, 2, 3\}\] are generated by their $(0, 6)$ components as well. It follows that the images of the $23$ monomials in $\Gamma(0, 6) \subset R$ form a Khovanskii basis for each of the three valuations defined by the matrices $M_1, M_2$ and $M_3$ on $R \subset Cox(X)$. The ideal of relations which vanishes on these generators is generated by $117$ elements: $4$ linear, $112$ quadratic, and $1$ sextic. Two of the three Newton-Okounkov bodies $\Delta_i(X, (0, 6))$, $i \in \{1, 2, 3\}$, of the class $(0, 6)$ emerging from this construction are depicted in Figure \ref{NOKbodies}, and $\Delta_3(X, (0, 6)) = \Delta_1(X, (0, 6))$. Each body $\Delta_i(X, (0, 6))$ is the image of the polytope $P(0, 6) \subset \mathbb{Q}_{\geq 0}^5$ cut out by the relations $a_1 -a_2 -a_4 + a_5 = 0$ and $a_1 + a_2 + a_3 + 2a_5 = 6$ under the linear projection defined by the matrix with rows $(1, 1, 1, 1, 1)$ and $w_i$, respectively. Both Newton-Okounkov bodies have volume $24$.} \end{example}
The Cox ring of a projective variety with a finitely generated and free class group is known to be factorial \cite{Arz}. With this in mind, we ask the following question.
\begin{question} For $f$ well-poised, when is the ring $\Bbbk [{\bf x}]/\langle f \rangle$ factorial? In other words, when does $f$ well-poised present the Cox ring of a Mori dream space? \end{question}
\noindent An answer to this question would provide a large class of Cox rings of Mori dream spaces, each equipped with a large combinatorial class of toric degenerations.
\section{The Newton Polytope and Supporting Lemmas} \label{ProofIL} Here we prove the results necessary to establish \cref{maintheorem} and \cref{interiorlatticepoints} in \cref{section3}. These proofs rely on the properties of the Newton polytope of a hypersurface. Terminology and notation are taken from \cite{Maclagan-Sturmfels}.
\begin{definition}
The Newton polytope $N(f)$ of a polynomial $f = \sum c_i {\bf x}^{{\bf a}_i}$ is the convex hull of the set $\{{{\bf a}_i} : c_i \neq 0\} \subset \mathbb{R}^n$. \cite[61]{Maclagan-Sturmfels} \end{definition}
Recall that the faces of $N(f)$ are in one-to-one correspondence with the initial forms $\ini{w}(f).$ In particular, each face is of the form $N(\ini{w}(f))$ for some weight vector ${w}$, and if $N(f)$ is a simplex with no interior lattice points, then every sub-sum of $f = \sum c_i{\bf x}^{{\bf a_i}}$ is an initial form of $f$. Also, recall that if $f=pq$ then $N(f) = N(p) + N(q)$ where the right side denotes the Minkowski sum of the Newton polytopes (\cite{Sturmfels96}). The lemmas in this section serve to restrict the combinatorial type of $N(f)$ when $f$ is well-poised.
\begin{restatable}{lemma}{WellPDis}
\label{WellPDis}
If $f$ is well-poised, then all monomials corresponding to the vertices of $N(f)$ have disjoint supports. \end{restatable}
\begin{proof}
Let $x_j$ be an indeterminate appearing in $f$ and let \[S:=\{{\bf a_i} : j\in supp( {\bf x}^{\bf a_i}), {\bf a_i} \text{ is a vertex}\}\] denote the set of vertices of $N(f)$ corresponding to monomials that contain $x_j$. Note that this set consists of vertices only, not interior lattice points. We show that $S$ contains exactly one element.
First, note that any face of $N(f)$ corresponds to an initial form of $f$. Should any two of the vertices ${\bf a_i} \in S$ share a (potentially degenerate) edge, this would correspond to a factorable initial binomial (or more general initial form, if the edge is degenerate) of $f$, as the lowest power of $x_j$ could be factored out. Therefore, for $f$ to be well-poised, no two vertices in $S$ may share an edge. Let ${\bf e_j}$ be the unit basis vector in the $j^{th}$ coordinate and choose a vertex $\overline{\bf{{a_i}}}$ in $S$ such that the inner product $\langle \overline{\bf{a_i}}, \bf{e_j}\rangle$ is maximal. Now consider the edges from $\overline{\bf{{a_i}}}$ to its adjacent vertices. By the above reasoning, $\overline{\bf{{a_i}}}$ cannot be joined to any other member of $S$, so all of its edges connect to vertices lying in the coordinate hyperplane $\{x_j=0\}\subset \mathbb{R}^n$.
We can now conclude that $\overline{\bf{a_i}}$ is the only element of $S$. If we suppose there is some other vertex $\mathbf{a'} \in S$, we are forced to choose between convexity and irreducibility: if these two vertices are not joined by an edge, then the polytope is not convex, but if they are joined, then the edge corresponds to a reducible initial form. Therefore there can only be one element in $S$, and the vertex monomials have disjoint supports.
\end{proof}
\begin{restatable}{lemma}{PolyDis}
\label{PolyDis}
If the monomials corresponding to vertices in $N(f)$ have disjoint supports, then $N(f)$ is a simplex. \end{restatable}
\begin{proof} Since the monomials have disjoint supports, the corresponding exponent vectors are affinely independent; thus $N(f)$ is the convex hull of an affinely independent set of points, i.e., a simplex. \end{proof}
\cref{RedBinom} and \cref{interiorlatticepoints} require the following lemma, which restricts the number of lattice points in $N(f)$, when $f$ is of a specific form. The proof of this lemma is reserved for the end of this section to improve readability.
\begin{restatable}{lemma}{VertPoints} \label{VertPoints} Let $f$ be a polynomial such that the vertices of $N(f)$ have disjoint supports. If $f$ also has $\gcd({\bf a_i, a_j})=1$ for all pairs of vertices $\bf{a_i, a_j}$, then $N(f)$ contains no lattice points besides its vertices. \end{restatable} Now, we prove \cref{RedBinom}. \begin{restatable}{lemma}{RedBinom}
\label{RedBinom}
A binomial $f=c_{i} {\bf x}^{\bf a_i} + c_{j} {\bf x}^{\bf a_j}$ is irreducible if and only if $\gcd(\bf a_i, a_j)=1$ and $supp(x^{{\bf a}_i})\cap supp(x^{{\bf a}_j})=\varnothing$. \end{restatable} \begin{proof}
$\Leftarrow$ Consider the Newton polytope $L=N(f)$. This is a line segment with endpoints $\mathbf{a}_i$ and $\mathbf{a}_j$. Now suppose $f=pq$. We will show that one of the factors, say $q$, must be a constant. We must have $L=N(f)=N(p)+N(q)$. Since $f$ is a binomial in $\Bbbk [\mathbf{x}]$, we have \[\max(\dim(N(p)),\dim(N(q))) \leq \dim(N(f))\leq 1,\] \noindent so $N(q)$ is a line segment, a point distinct from the origin, or the origin itself. We assume without loss of generality that $N(p)$ is a line segment. If $N(q)$ is a point $\mathbf{a_0}$, then the monomial ${\bf x}^{{\bf a}_0}$ must divide $f$, so $\bf{a_i}$ and $\bf{a_j}$ do not have disjoint supports unless $a_0 = 0$. If $N(p)$ and $N(q)$ are both line segments, they must be collinear, as otherwise $N(f)$ would be two-dimensional. The Minkowski sum of two collinear segments with integer endpoints must contain an interior lattice point, corresponding to the sum of one endpoint from each segment. However, $f$ satisfies the hypotheses of \cref{VertPoints}, and thus $N(f)$ contains no interior lattice points. Therefore, $N(q)$ can only be the point at the origin, meaning that $q$ must be a constant. \\
$\Rightarrow$
If $f$ is irreducible, then ${\bf x}^{\bf a_i}$ and ${\bf x}^{\bf a_j}$ must have disjoint supports: if they did not, we could factor out a power of $x_k$ for any $k\in supp({\bf x}^{\bf a_i})\cap supp({\bf x}^{\bf a_j})$. Now suppose $\gcd({\bf a_i, a_j})=d>1$, write ${\bf a'_i}d={\bf a_i}$ and ${\bf a'_j}d={\bf a_j}$, and set $c = -c_j$. By rearranging constants and factoring we get the following:
\[
f=c_{i} {\bf x}^{\bf a_i} + c_{j} {\bf x}^{\bf a_j}=c_i{\bf x}^{{\bf a'_i}d} - c {\bf x}^{{\bf a'_j}d}=(\sqrt[d]{c} {\bf x}^{\bf a'_j})^{d}\left(\left(\frac{\sqrt[d]{c_i}{\bf x}^{{\bf a'_i}}}{\sqrt[d]{c}{\bf x}^{{\bf a'_j}}}\right)^d - 1\right).
\]
Now, if we relabel $\frac{\sqrt[d]{c_i}{\bf x}^{{\bf a'_i}}}{\sqrt[d]{c}{\bf x}^{{\bf a'_j}}}$ as $z$, the above simplifies to
\[f=c {\bf x}^{{\bf a'_j}d}(z^d - 1),\]
where $z^d-1$ factors as $\prod_{k=1}^{d}(z-\delta_k)$, with $\delta_k$ ranging over the $d^{th}$ roots of unity. Resubstituting and distributing, we get:
\[
\begin{split}
f&=(\sqrt[d]{c} {\bf x}^{\bf a'_j})^{d}(z^d - 1)=(\sqrt[d]{c} {\bf x}^{\bf a'_j})^d \prod_{k=1}^{d}(z-\delta_k)\\
&= \prod_{k=1}^{d} \left(\sqrt[d]{c} {\bf x}^{\bf a'_j}z-\delta_k \sqrt[d]{c} {\bf x}^{\bf a'_j}\right)= \prod_{k=1}^{d} \left(\sqrt[d]{c_i}{\bf x}^{\bf a'_i}-\delta_k \sqrt[d]{c} {\bf x}^{\bf a'_j}\right),
\end{split}
\]
which shows that $f$ factors into $d > 1$ nonconstant factors, contradicting irreducibility.
\end{proof}
Now we return to the proof of \cref{VertPoints}, for which we need the following lemma, whose proof is straightforward: \begin{lemma}
If the fractions $\frac{b_1}{a_1}, \dots, \frac{b_s}{a_s}$, with positive integer numerators and denominators, are all equal, then their common value is $\frac{\gcd(b_1, \ldots, b_s)}{\gcd(a_1, \ldots, a_s)}$. \end{lemma}
Recall the statement: \VertPoints*
\begin{proof}[Proof of Lemma \ref{VertPoints} ]
We will prove that the existence of a non-vertex lattice point in a polytope whose vertices $\mathbf{a_1},\dots,\mathbf{a_k}$ have disjoint supports implies that $\gcd(\mathbf{a}_i, \mathbf{a}_j) > 1$ for some $i$ and $j$. Suppose such a lattice point $\mathbf{b}$ exists. Writing $\mathbf{b}$ as a convex combination of the vertices (so each $p_j \geq 0$), it is of the form
\[ \mathbf{b}=\sum^k_{j=1}p_j \mathbf{a}_j.
\]
As the point $\mathbf{b}$ lies in the convex hull of our polytope, we have the added restriction that
\[\sum^k_{j=1}p_j=1.\]
As $\mathbf{b}$ is an integer lattice point, each of its coordinates must be an integer. By assumption the vectors $\mathbf{a}_j$ have disjoint supports, so we can break up $\mathbf{b}$ into a sum of vectors $\mathbf{b}_j$, where each $\mathbf{b}_j$ has the same support as $\mathbf{a}_j$; thus for $i\in supp(\mathbf{a_j})$ the entry $b_{j,i}=p_j a_{j,i}$, and therefore $p_j=\frac{b_{j,i}}{a_{j,i}}$. Writing $s$ for the number of indices in the support of $\mathbf{a_j}$, this gives \[p_j=\frac{b_{j,1}}{a_{j,1}}=\dots=\frac{b_{j,s}}{a_{j,s}},\]
which, by the above lemma, gives $p_j=\frac{\gcd(\mathbf{b}_j)}{\gcd(\mathbf{a}_j)}$, where $\gcd(\mathbf{b}_j)$ denotes the gcd of the nonzero entries of $\mathbf{b}_j$. Isolating the $k$-th term in the equation $\sum^k_{j=1}p_j=1$, we may rewrite it as follows:
\[
\sum^{k-1}_{j=1}\frac{\gcd(\mathbf{b}_j)}{\gcd(\mathbf{a}_j)}=\frac{\gcd(\mathbf{a}_k)-\gcd(\mathbf{b}_k)}{\gcd(\mathbf{a}_k)}
\]
Giving the left-hand side the common denominator $\Pi^{k-1}_{j=1}\gcd(\mathbf{a}_j)$, the above becomes:
\[
\frac{\sum^{k-1}_{i=1}\gcd(\mathbf{b}_i)\Pi_{j\neq i}\gcd(\mathbf{a}_j) }{\Pi^{k-1}_{j=1}\gcd(\mathbf{a}_j)}=\frac{\gcd(\mathbf{a}_k)-\gcd(\mathbf{b}_k)}{\gcd(\mathbf{a}_k)}
\]
Once again, this is in a form where we may apply the above lemma. For ease of notation, relabel the two numerators above as $B_1$ and $B_2$; the fraction then reduces to:
\[\frac{\gcd(B_1,B_2)}{\gcd(\Pi^{k-1}_{j=1}\gcd(\mathbf{a}_j),\gcd(\mathbf{a}_k))}.\]
Now if we examine the denominator, it must be greater than one, as the fraction is less than one and greater than zero. This implies that $\gcd(\mathbf{a}_k,\mathbf{a}_i)>1$ for some $i$, thus proving the statement. \end{proof}
\section{Proof of Theorem \ref{maintheorem} and \ref{interiorlatticepoints}} \label{section3} We can now proceed with the proofs of \cref{maintheorem} and \cref{interiorlatticepoints}. These are largely corollaries of the lemmas given in the previous section. We first prove \cref{interiorlatticepoints}, which we will use in the proof of \cref{maintheorem}. Recall the statement: \interiorlatticepoints* \begin{proof}
If $f$ is well-poised, then by Lemma \ref{WellPDis} the vertices of $N(f)$ have disjoint supports, establishing the first condition; additionally, by Lemma \ref{PolyDis}, $N(f)$ is a simplex. Since $N(f)$ is a simplex, any two vertices $\mathbf{a_i}, \mathbf{a_j}$ are connected by an edge corresponding to the irreducible initial form $c_i \mathbf{x^{a_i}} + c_j \mathbf{x^{a_j}}$. Then by Lemma \ref{RedBinom}, any pair of vertices has $\gcd(\bf{a_i}, \bf{a_j})=1$. We may now apply \cref{VertPoints} to conclude there are no non-vertex lattice points, and consequently all monomial terms of $f$ must correspond to vertices. \end{proof}
\maintheorem* \begin{proof}[Proof] $\Rightarrow$ This follows as in the proof of \cref{interiorlatticepoints}: applying the lemmas shows that all monomial terms of $f$ correspond to vertices of $N(f)$, that these vertices are disjointly supported, and that every pair satisfies $\gcd(\bf{a_i}, \bf{a_j})=1$. \\ $\Leftarrow$ By \cref{PolyDis}, the Newton polytope $N(f)$ is a simplex. By \cref{VertPoints} each edge of $N(f)$ is an empty simplex and thus by \cref{RedBinom} corresponds to an irreducible binomial. Consider an arbitrary initial form $\ini{\boldsymbol \omega}(f)$ of $f$, and suppose it can be written $\ini{\boldsymbol \omega}(f) = g_{1}g_{2}$. We show that, without loss of generality, $g_1$ is a constant. We have noted above that any binomial initial form of $f$ is irreducible, so if $\ini{\boldsymbol \omega}(f)$ is a binomial we are finished. The argument for the general case is similar. If $\ini{\boldsymbol \omega}(f) = g_{1}g_{2}$, then we get a Minkowski sum decomposition $N(\ini{\boldsymbol \omega}(f))=N(g_{1})+N(g_{2})$. We know that $N(\ini{\boldsymbol \omega}(f))$ is a face of $N(f)$ and therefore a simplex. Any Minkowski decomposition of a simplex is a sum of a point and a simplex, and by the disjoint supports condition this point must be the origin, so one of the factors must be constant.
\end{proof}
\section{The Tropical Variety}\label{Angela}
Let $f = \sum_{i = 1}^K c_i {\bf x}^{{\bf a}_i}$ be a disjointly supported polynomial with no constant term. In this section we explicitly construct the faces of the Gr\"obner fan of the principal ideal $\langle f \rangle$ whose support is $\textup{Trop}(f)$.
Let ${\bf 1}$ be the vector of $1$'s; then $ \ell_i := \langle {\bf 1}, {\bf a}_i \rangle = \sum_{j =1}^n a_i^j > 0$ for all $c_i \neq 0$.
Letting $\ell = \textup{lcm}\{ \ell_i \mid c_i \neq 0\}$ and letting $\mathbf{v}_f$ be the vector with entry $\frac{\ell}{\ell_i}$ at each index in $supp({\bf a}_i)$ and $0$ otherwise, we see that $\langle \mathbf{v}_f, {\bf a}_i\rangle = \langle\mathbf{v}_f, {\bf a}_j\rangle > 0$ for all $i, j$.
It follows that $f$ is homogeneous with respect to $\mathbf{v}_f$, and that the support of the Gr\"obner fan of $f$ is all of $\mathbb{R}^n$ (see \cite[Proposition 1.12]{Sturmfels96}).
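The construction of $\mathbf{v}_f$ is entirely mechanical, and a small Python sketch (function name ours) can reproduce the example $f = x + y^2 + zw$ treated later in this section, for which $\mathbf{v}_f = (2, 1, 1, 1)$:

```python
from math import lcm

def homogenizing_vector(exponents):
    """Compute v_f for a disjointly supported polynomial: with
    l_i = <1, a_i> and l the lcm of the l_i, place l/l_i at each index
    in supp(a_i) and 0 elsewhere, so that <v_f, a_i> = l for every i.
    (A sketch following the construction in the text.)"""
    n = len(exponents[0])
    ls = [sum(a) for a in exponents]
    l = lcm(*ls)
    v = [0] * n
    for a, li in zip(exponents, ls):
        for j, e in enumerate(a):
            if e:
                v[j] = l // li
    return v

# f = x + y^2 + z*w: l_1 = 1, l_2 = l_3 = 2, l = 2
exps = [(1, 0, 0, 0), (0, 2, 0, 0), (0, 0, 1, 1)]
v = homogenizing_vector(exps)
assert v == [2, 1, 1, 1]
# all monomials receive the same weight l
assert len({sum(vi * ai for vi, ai in zip(v, a)) for a in exps}) == 1
```

Disjointness of the supports is what makes the assignment well-defined: each coordinate index lies in the support of at most one ${\bf a}_i$.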
We let $L_f$ denote the homogeneity space of $f$, in particular $L_f = \{{\bf u} \mid \langle {\bf u}, {\bf a}_i\rangle = \langle {\bf u}, {\bf a}_j \rangle, \ \ 1 \leq i < j \leq K\}$. Moreover, for each ${\bf a}_i$ with $c_i \neq 0$ we let $\mathbf{w}_i$ be the vector with entry $0$ for $j \notin supp({\bf a}_i)$ and $-1$ for $j \in supp({\bf a}_i)$.
\begin{proposition} Let $S \subseteq [K]$, $f_S = \sum_{j \in S} c_j {\bf x}^{{\bf a}_j}$, and let $C_S := \{\boldsymbol\omega \mid \ini{\boldsymbol\omega}(f) = f_S\}$, then: \[ C_S = L_f + \sum_{i \notin S} \mathbb{R}_{> 0}\mathbf{w}_i. \] \end{proposition}
\begin{proof} Let $\boldsymbol\omega \in L_f+ \sum_{i \notin S} \mathbb{R}_{> 0}\mathbf{w}_i$ and consider $\ini{\boldsymbol\omega}(f)$. Without loss of generality we may assume that $\boldsymbol\omega = \sum_{i \notin S} \; n_i\mathbf{w}_i$ with $n_i > 0$. The polynomial $f$ has disjoint supports, so $\langle \boldsymbol\omega, {\bf a}_i \rangle$ is $0$ if $i \in S$ and $< 0$ if $i \notin S$. It follows that $\ini{\boldsymbol\omega}(f) = f_S$. This proves that $L_f + \sum_{i \notin S} \mathbb{R}_{> 0}\mathbf{w}_i \subseteq C_S$.
If $\boldsymbol\omega \in C_S$, then $\boldsymbol\omega$ weights each term of $f_S$ equally. We let $k = \langle \boldsymbol\omega, {\bf a}_j \rangle$, where $j$ is any element of $S$, and $k_i = \langle \boldsymbol\omega, {\bf a}_i\rangle$ for $i \notin S$. Observe that $k_i < k$, and that $\boldsymbol\omega - \sum_{i \notin S}(k-k_i)\frac{1}{\ell_i}\mathbf{w}_i$ weights all monomials of $f$ equally. It follows that $\boldsymbol\omega - \sum_{i \notin S} (k - k_i)\frac{1}{\ell_i}\mathbf{w}_i \in L_f$. As $k - k_i > 0$, we conclude that $\boldsymbol\omega \in L_f + \sum_{i \notin S} \mathbb{R}_{> 0}\mathbf{w}_i$, and that $C_S \subseteq L_f + \sum_{i \notin S} \mathbb{R}_{> 0}\mathbf{w}_i$. \end{proof}
Each $f_S$ is a polynomial which corresponds to a face of the Newton polytope of $f$ and likewise, to a cone of the Gr\"obner fan. Observe that by definition the tropical variety $\textup{Trop}(f)$ is the union of the cones $C_S$ where $|S| \geq 2$.
To complete the description of each cone $C_S$ we compute a basis for $L_f$. First, we observe that $\mathbf{v}_f \in L_f$, and for any $\boldsymbol\lambda \in L_f$ there is some $q$ such that $\langle \boldsymbol\lambda - q\mathbf{v}_f, {\bf a}_i \rangle = 0$ for all $i \in [K]$. The space $N_f = \{\boldsymbol\lambda' \mid \langle \boldsymbol\lambda', {\bf a}_i \rangle = 0 \text{ for all } i \in [K]\}$ is certainly contained in $L_f$, so it follows that $\mathbf{v}_f$ together with a basis of $N_f$ gives a basis of $L_f$. For a basis of $N_f$ we take the integral vectors $\mathbf{v}_{i,j} = {\bf a}_{i}^{j}e_{i}^{1} - {\bf a}_{i}^{1}e_{i}^{j}$ for $2 \leq j \leq k_i$, where $e_{i}^{j}$ is the elementary basis vector at the $j$-th index of the support of ${\bf a}_i$.
Theorem 1 of \cite{Kaveh-Manon-NOK} gives a recipe for producing a full rank valuation with associated Newton-Okounkov body given a prime cone from a tropical variety. It is required to choose a linearly independent set of vectors from the cone which span a full-dimensional subcone. For the cone $C_S$ with $|S| = 2$ we select the set $W_S$ composed of the basis $\{\mathbf{v}_f, \ldots, \mathbf{v}_{i,j}, \ldots\} \subset L_f$, and the extremal vectors $\mathbf{w}_i$ for $i \in S^c$. If the sets $S$ and $S'$ differ by a single index, then $W_S$ and $W_{S'}$ differ by a single vector. We let $M_S$ be the matrix with rows equal to the elements of $W_S$.
\[\begin{bmatrix} \mathbf{v}_f\\ \vdots\\ \mathbf{v}_{2,i}\\ \mathbf{v}_{3, i}\\ \vdots\\ \mathbf{v}_{k_i,i}\\ \vdots\\ \mathbf{w}_{i}\\ \vdots \end{bmatrix} = \begin{bmatrix} \cdots & \frac{\ell}{\ell_i} & \frac{\ell}{\ell_i} & \frac{\ell}{\ell_i} & \cdots & \frac{\ell}{\ell_i} & \cdots \\ \cdots & \vdots & \vdots & \vdots & \vdots & \vdots & \cdots \\ \cdots & {\bf a}_{2}^i & -{\bf a}_{1}^i& 0 & \cdots & 0 & \cdots\\ \cdots & {\bf a}_{3}^i & 0 & -{\bf a}_{1}^i & \cdots & 0 & \cdots \\ \cdots & \vdots & \vdots & \vdots & \ddots & \vdots & \cdots \\ \cdots & {\bf a}_{k_i}^i & 0 & 0 & \cdots & -{\bf a}_{1}^i & \cdots \\ \cdots & \vdots & \vdots & \vdots & \vdots & \vdots & \cdots \\ \cdots & -1 & -1 & -1 & \cdots & -1 & \cdots \\ \cdots & \vdots & \vdots & \vdots & \vdots & \vdots & \cdots \\ \end{bmatrix}\]
By \cite[Proposition 4.2]{Kaveh-Manon-NOK}, the matrix $M_S$ defines a full rank valuation $\mathfrak{v}_S : \Bbbk [{\bf x}]/\langle f \rangle \to \mathbb{R}^{n-1}$. The image $S(\Bbbk [{\bf x}]/\langle f \rangle, \mathfrak{v}_S) \subseteq \mathbb{R}^{n-1}$ of $\mathfrak{v}_S$ is the semigroup generated by the columns of $M_S$ under addition, where $\mathfrak{v}_S(x_{ij})$ is the $ij$-th column of $M_S$. If $i \in S^c$, $\mathfrak{v}_S$ sends $x_{ij}$ to the $j$-th column of the block displayed above. If $i \in S$, $\mathfrak{v}_S$ sends $x_{ij}$ to the $j$-th column of the block displayed above, except the $-1$ entries are $0$.
The convex hull $P(\Bbbk [{\bf x}]/\langle f \rangle, \mathfrak{v}_S) \subseteq \mathbb{R}^{n-1}$ of $S(\Bbbk [{\bf x}]/\langle f \rangle, \mathfrak{v}_S) \subseteq \mathbb{R}^{n-1}$ is called the \emph{Newton-Okounkov} cone of $\mathfrak{v}_S$. For each choice of $S$ with $|S|=2$, there is a flat family $\pi_S: E_S \to \mathbb{A}^1(\Bbbk )$ such that the coordinate ring of the fiber $\pi_S^{-1}(c)$ for $c \neq 0$ is $\Bbbk [{\bf x}]/\langle f \rangle$ and the coordinate ring of the fiber $\pi_S^{-1}(0)$ is the affine semigroup algebra $\Bbbk [S(\Bbbk [{\bf x}]/\langle f \rangle, \mathfrak{v}_S)]$. In particular, $\Bbbk [S(\Bbbk [{\bf x}]/\langle f \rangle, \mathfrak{v}_S)] \cong \Bbbk [{\bf x}]/\langle \ini{\boldsymbol \omega}(f) \rangle$ for any $\boldsymbol \omega \in C_S$.
If there is a vector ${\bf d} \in \mathbb{Z}_{> 0}^n$ such that $\langle {\bf d}, {\bf a}_i \rangle$ is a fixed integer for all monomial exponents ${\bf a}_i$ appearing in $f$, we say that $f$ is homogeneous with respect to ${\bf d}$. For example, if $f$ is homogeneous in the classical sense we may take ${\bf d}$ to be the all $1$'s vector. Assuming a fixed ${\bf d}$, the algebra $\Bbbk [{\bf x}]/\langle f \rangle$ is positively graded; that is, it can be expressed as a direct sum of finite dimensional vector spaces $A_N$:
\[\Bbbk [{\bf x}]/\langle f \rangle \cong \bigoplus_{N \geq 0} A_N.\]
\noindent In this setting, the projective variety $X = \textup{Proj}(\Bbbk [{\bf x}]/\langle f \rangle)$ carries a flat degeneration to the projective toric variety $X_S = \textup{Proj}(\Bbbk [S(\Bbbk [{\bf x}]/\langle f \rangle, \mathfrak{v}_S)])$. It is possible that $X_S$ is non-normal; however, its normalization is the projective toric variety associated to the \emph{Newton-Okounkov body} $\Delta(\Bbbk [{\bf x}]/\langle f \rangle, \mathfrak{v}_S)$. Following \cite[Corollary 4.7]{Kaveh-Manon-NOK}, the Newton-Okounkov body $\Delta(\Bbbk [{\bf x}]/\langle f \rangle, \mathfrak{v}_S)$ is obtained from $M_S$ by dividing the $ij$-th column by the degree of $x_{ij}$ assigned by ${\bf d}$, and taking the convex hull of the resulting column vectors.
\begin{example} \emph{ We compute the matrices $M_S$ for $f = x + y^2 + zw \in \Bbbk [x, y, z, w]$. Let $x, y^2$, and $zw$ be the $i =1, 2$ and $3$ monomials, respectively. The space $L_f \subset \mathbb{Q}^4$ can be generated by the vectors ${\bf v}_f = (2, 1, 1, 1)$ and ${\bf v}_{2,3} = (0, 0, 1, -1)$. From this we deduce that $A = \Bbbk [x, y, z, w]/\langle f \rangle$ is graded by the semigroup in $\mathbb{Z}^2$ generated by $(1, 0), (1, 1),$ and $(1, -1)$. The third row of $M_S$ is ${\bf w}_i$, where $\{i\} = S^c$: } \[M_{12} = \begin{bmatrix} 2 & 1 & 1 & 1\\ 0 & 0 & 1 & -1\\ 0 & 0 & -1 & -1 \end{bmatrix}
M_{13} = \begin{bmatrix} 2 & 1 & 1 & 1\\ 0 & 0 & 1 & -1\\ 0 & -1 & 0 & 0 \end{bmatrix}
M_{23} = \begin{bmatrix} 2 & 1 & 1 & 1\\ 0 & 0 & 1 & -1\\ -1 & 0 & 0 & 0 \end{bmatrix}\] \emph{ \noindent For each $S$, the columns of $M_S$ generate a semigroup in $\mathbb{Z}^3$ whose semigroup algebra is the coordinate ring of a toric degeneration of $\Bbbk [x, y, z, w]/\langle f \rangle$. } \end{example}
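As a numerical sanity check of the proposition for this example, the weight $\mathbf{w}_3 = (0, 0, -1, -1)$ (the third row of $M_{12}$) should select the initial form $x + y^2$, i.e. $S = \{1, 2\}$, while any vector in $L_f$, such as $\mathbf{v}_f = (2, 1, 1, 1)$, keeps every term. A short Python sketch, with names of our own choosing:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# exponent vectors of x, y^2, and z*w in k[x, y, z, w]
exps = {1: (1, 0, 0, 0), 2: (0, 2, 0, 0), 3: (0, 0, 1, 1)}

def initial_terms(w):
    """Indices of the monomials surviving in in_w(f)."""
    m = max(dot(w, a) for a in exps.values())
    return {i for i, a in exps.items() if dot(w, a) == m}

# w_3 = (0,0,-1,-1) kills the z*w term: in_w(f) = x + y^2, so S = {1, 2}
assert initial_terms((0, 0, -1, -1)) == {1, 2}
# v_f in L_f weights all three monomials equally
assert initial_terms((2, 1, 1, 1)) == {1, 2, 3}
```

The analogous checks for $M_{13}$ and $M_{23}$ use $\mathbf{w}_2 = (0, -1, 0, 0)$ and $\mathbf{w}_1 = (-1, 0, 0, 0)$ respectively.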
\end{document} |
\begin{document}
\title{Entanglement Monogamy of Tripartite Quantum States} \author{Chang-shui Yu} \author{He-shan Song} \email{hssong@dlut.edu.cn} \affiliation{School of Physics and Optoelectronic Technology, Dalian University of Technology, Dalian 116024, P. R. China} \date{\today }
\begin{abstract} An interesting monogamy equation with the form of the Pythagorean theorem is found for $2\otimes 2\otimes n$-dimensional pure states, which reveals the relation among bipartite concurrence, concurrence of assistance, and genuine tripartite entanglement. At the same time, a genuine tripartite entanglement monotone generalizing the 3-tangle is naturally obtained for $(2\otimes 2\otimes n)$-dimensional pure states in terms of a distinct idea. For mixed states, the monogamy equation is reduced to a monogamy inequality. Both results for tripartite quantum states can be extended to multipartite quantum states. \end{abstract}
\pacs{03.67.Mn, 03.65.Ta, 03.65.Ud} \maketitle \section{I. Introduction} Entanglement is an essential feature of quantum mechanics, which distinguishes the quantum world from the classical one. A key property of entanglement, as well as one of the fundamental differences between quantum entanglement and classical correlations, is the degree of sharing among many parties: unlike classical correlations, quantum entanglement is monogamous [1-3], i.e., the degree to which either of two parties can be entangled with anything else seems to be constrained by the entanglement that may exist between the two quantum parties. For systems of three qubits, a kind of monogamy of bipartite quantum entanglement measured by concurrence [4] was described by the Coffman-Kundu-Wootters (CKW) inequality [1]. The generalization to the case of multiple qubits was conjectured by CKW and has been proven recently by Osborne et al. [5]. The monogamy inequality dual to the CKW inequality, based on concurrence of assistance (CoA) [6], was presented for tripartite systems of qubits by Gour et al. [7], and the generalized one for multiple qubits was proven in Ref. [8]. In this paper, we find a new and very interesting monogamy equation for $\left( 2\otimes 2\otimes n\right) $-dimensional (or multiple-qubit) quantum pure states which relates the bipartite concurrence, CoA and genuine tripartite entanglement.
In fact, CKW inequality and the dual one correspond to a residual quantity, respectively. It is only for tripartite pure states of qubits that so far the two residual quantities have been shown to be the same and have clear physical meanings. From Ref. [1] and [6], one can learn that it just corresponds to 3-tangle [1]. One of the distinguished advantages of our monogamy equation will be found that the residual quantity has clear physical meanings not only for $\left( 2\otimes 2\otimes n\right) $ -dimensional quantum pure state but also for a general multipartite pure state including a pair of qubits.
Recently it has been realized that entanglement is a useful physical resource for various kinds of quantum information processing [9-12]. Depending on the physics of the implementation, there are usually three alternative ways [7] of producing entanglement. An especially important way for quantum communication is the reduction of a multipartite entangled state to an entangled state with fewer parties, which is called ``assisted entanglement'' and is quantified by the entanglement of assistance (EoA) [13]. An important application of EoA is, for a tripartite quantum entangled state, to maximize the entanglement of two parties (qubits), denoted Alice and Bob, with the assistance of the third party (a qudit), named Charlie, who is only allowed to perform local operations. However, because EoA is not an entanglement monotone [14], one prefers the remarkable entanglement monotone, concurrence of assistance (CoA), where concurrence is employed to quantify the entanglement between Alice and Bob. In this process of entanglement preparation, Charlie only makes local operations and classical communications in order to increase the entanglement shared by Alice and Bob, so it is impossible to produce new entanglement. There must therefore exist some trade-off between the increment of entanglement shared by Alice and Bob induced by Charlie and quantum correlations of other forms. What, then, are these correlations?
The question is answered in this paper by our monogamy equation. From the equation one can find that the increment of entanglement shared by Alice and Bob corresponds exactly to the degree of genuine tripartite entanglement (3-way entanglement) of the $\left( 2\otimes 2\otimes n\right) $-dimensional quantum pure state and is analytically calculable. Hence the increment naturally characterizes the genuine tripartite entanglement, which is shown to be an entanglement monotone and can be considered an interesting generalization of the 3-tangle based on a new idea. In addition, the monogamy equation reduces to a monogamy inequality for mixed states. The results also apply to multipartite quantum states. This paper is organized as follows. We first introduce our monogamy equation for pure states; then, for mixed states, we reduce the monogamy equation to a monogamy inequality; next we point out that these results apply to multipartite quantum states; conclusions are drawn at the end.
\section{II. Monogamy equation for pure states} Consider a tripartite $\left( 2\otimes 2\otimes n\right) $-dimensional quantum pure state $\left\vert \Psi \right\rangle _{ABC}$ shared by three parties Alice, Bob and Charlie, where Charlie's aim is to maximize the entanglement shared by Alice and Bob by local measurements on his particle C. The reduced density matrix obtained by tracing over party C is $\rho _{AB}=Tr_{C}\left( \left\vert \Psi \right\rangle _{ABC}\left\langle \Psi \right\vert \right) $. Let $ \mathcal{E}=\{p_{i},\left\vert \varphi _{i}^{AB}\right\rangle \}$ be any decomposition of $\rho _{AB}$ such that \begin{equation} \rho _{AB}=\sum\limits_{i}p_{i}\left\vert \varphi _{i}^{AB}\right\rangle \left\langle \varphi _{i}^{AB}\right\vert ,\quad \sum\limits_{i}p_{i}=1, \end{equation} then CoA is defined [5,6] by \begin{eqnarray} C_{a}\left( \left\vert \Psi \right\rangle _{ABC}\right) &=&\max_{\mathcal{E} }\sum\limits_{i}p_{i}C\left( \left\vert \varphi _{i}^{AB}\right\rangle \right) \\ &=&C_{a}\left( \rho _{AB}\right) =tr\sqrt{\sqrt{\rho _{AB}}\tilde{\rho}_{AB} \sqrt{\rho _{AB}}} \\ &=&\sum\limits_{i=1}^{4}\lambda _{i}, \end{eqnarray} where $\tilde{\rho}_{AB}=\left( \sigma _{y}\otimes \sigma _{y}\right) \rho _{AB}^{\ast }\left( \sigma _{y}\otimes \sigma _{y}\right) $, $\sigma_y$ is the Pauli matrix, and \begin{equation} C\left( \rho _{AB}\right) =\max \{0,\lambda _{1}-\sum\limits_{i>1}\lambda _{i}\} \end{equation} is the concurrence of the reduced density matrix $\rho _{AB}$, with $\lambda _{i}$ being the square roots of the eigenvalues of $\rho _{AB}\tilde{\rho} _{AB}$ in decreasing order. With the definitions of CoA and concurrence, we can obtain the following theorem.
\textbf{Theorem 1: } \emph{For a }$\left( 2\otimes 2\otimes n\right) $\emph{ - dimensional quantum pure state }$\left\vert \Psi \right\rangle _{ABC}$ \emph{, } \begin{equation} C_{a}^{2}\left( \rho _{AB}\right) =C^{2}\left( \rho _{AB}\right) +\tau ^{2}\left( \rho _{AB}\right) , \end{equation} \emph{\newline where }$\tau \left( \rho _{AB}\right) =\tau \left( \left\vert \Psi \right\rangle _{ABC}\right) $\emph{\ is the genuine tripartite entanglement measure for }$\left\vert \Psi \right\rangle _{ABC}$\emph{.}
It is very interesting that eq. (6) has an elegant form analogous to the Pythagorean theorem, if one regards CoA as the length of the hypotenuse of a right-angled triangle and the bipartite concurrence and the genuine tripartite entanglement as the lengths of the other two sides. Note that the lengths of all the sides are allowed to be zero. The relation is illustrated in Fig. 1 (see the left triangle).
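As a computational aside not found in the original text, the quantities in Eqs. (4) and (5) are directly computable. The following Python sketch (using only \texttt{numpy}) evaluates the Wootters $\lambda_i$, the concurrence $C$, and the CoA $C_a$ for a given two-qubit density matrix; the Bell state below serves as a sanity check, since for it both quantities equal 1.

```python
import numpy as np

SY = np.array([[0, -1j], [1j, 0]])      # Pauli sigma_y
FLIP = np.kron(SY, SY)                  # sigma_y (x) sigma_y

def lambdas(rho):
    """Square roots of the eigenvalues of rho * rho_tilde in decreasing
    order, where rho_tilde = (sy x sy) rho^* (sy x sy) as in the text."""
    rho_tilde = FLIP @ rho.conj() @ FLIP
    ev = np.linalg.eigvals(rho @ rho_tilde)
    return np.sort(np.sqrt(np.clip(ev.real, 0.0, None)))[::-1]

def concurrence(rho):
    """Eq. (5): C = max{0, lambda_1 - lambda_2 - lambda_3 - lambda_4}."""
    l = lambdas(rho)
    return max(0.0, l[0] - l[1:].sum())

def coa(rho):
    """Eq. (4): C_a = lambda_1 + lambda_2 + lambda_3 + lambda_4."""
    return lambdas(rho).sum()

# Bell state (|00> + |11>)/sqrt(2): maximally entangled, C = C_a = 1.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho_bell = np.outer(bell, bell.conj())
print(concurrence(rho_bell), coa(rho_bell))
```

For any state $C_a \geq C$, since the eigenvalue-based formulas differ only in the signs of $\lambda_2,\lambda_3,\lambda_4$; this is the inequality (7) used at the start of the proof.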
\textbf{Proof.} According to the definitions of CoA and concurrence, it is obvious that \begin{equation} C_{a}^{2}\left( \rho _{AB}\right) -C^{2}\left( \rho _{AB}\right) \geq 0. \end{equation} It remains to prove that $\tau \left( \rho _{AB}\right) $ is an entanglement monotone and characterizes the genuine tripartite entanglement of $\left\vert \Psi \right\rangle _{ABC}$. We first prove that $\tau \left( \rho _{AB}\right) $ does not increase under a general tripartite local operation and classical communication (LOCC), denoted by $\mathcal{M}_{k}$, where the subscript $k$ labels the different outcomes. Assume that Alice and Bob perform quantum operations $M_{Akj}$ and $M_{Bkj}$ on their respective qubits, where $\sum\limits_{k,j}M_{Akj}^{\dagger }M_{Akj}\leq I_{A}$ and $\sum\limits_{k,j}M_{Bkj}^{\dagger }M_{Bkj}\leq I_{B}$ are the most general local operations given in terms of Kraus operators [15], with $I_{A}$ and $I_{B}$ the identity operators on Alice's and Bob's systems. After the local operations, the averaged residual quantity can be written as \begin{eqnarray} &&\sum\limits_{kk^{\prime }}P_{kk^{\prime }}\tau \left( \mathcal{M}_{kk^{\prime }}\left( \rho _{AB}\right) \right) \notag \\ &=&\sum\limits_{kk^{\prime}}P_{kk^{\prime }}\sqrt{C_{a}^{2}\left( \mathcal{M}_{kk^{\prime }}\left( \rho _{AB}\right) \right) -C^{2}\left( \mathcal{M}_{kk^{\prime }}\left( \rho _{AB}\right) \right) } \notag \\ &\leq &\left\{ \left[ \sum\limits_{kk^{\prime}}P_{kk^{\prime}}C_{a}\left( \mathcal{M}_{kk^{\prime }}\left( \rho _{AB}\right) \right) \right] ^{2}\right. \notag \\ &&\left. 
-\left[ \sum\limits_{kk^\prime}P_{kk^{\prime}}C\left( \mathcal{M}_{kk^{\prime }}\left( \rho _{AB}\right) \right) \right] ^{2}\right\} ^{1/2} \notag \\ &=& \sum\limits_{kk^{\prime }jj^{\prime }}\left\vert \det \left( M_{Akj}\right) \det \left( M_{Bk^{\prime }j^{\prime }}\right) \right\vert \sqrt{C_{a}^{2}\left( \rho _{AB}\right) -C^{2}\left( \rho _{AB}\right) } \notag \\ &\leq &\sqrt{C_{a}^{2}\left( \rho _{AB}\right) -C^{2}\left( \rho _{AB}\right) }=\tau \left( \rho _{AB}\right) , \end{eqnarray} where \begin{eqnarray} &&\mathcal{M}_{kk^{\prime }}\left( \rho _{AB}\right) =\sum\limits_{jj^{\prime }}\left( M_{Akj}\otimes M_{Bk^{\prime }j^{\prime }}\otimes I_{C}\right) \notag \\ &\times& \left\vert \Psi \right\rangle _{ABC}\left\langle \Psi \right\vert \left( M_{Akj}^{\dagger }\otimes M_{Bk^{\prime }j^{\prime }}^{\dagger }\otimes I_{C}\right) /P_{kk^\prime}, \end{eqnarray} and $P_{kk^{\prime }}=tr\mathcal{M}_{kk^{\prime }}\left( \left\vert \Psi \right\rangle _{ABC}\left\langle\Psi\right\vert\right) $. Here the first inequality follows from the Cauchy-Schwarz inequality \begin{equation} \sum\limits_{i}x_{i}y_{i}\leq \left( \sum\limits_{i}x_{i}^{2}\right) ^{1/2}\left( \sum\limits_{j}y_{j}^{2}\right) ^{1/2}, \end{equation} the second inequality follows from the geometric-arithmetic inequality \begin{equation} \sum\limits_{kj}\left\vert \det \left( M_{xkj}\right) \right\vert \leq \frac{1}{2}\sum\limits_{kj}trM_{xkj}^{\dagger }M_{xkj}\leq 1,\quad x=A,B, \end{equation} and the second equality is derived from the fact [4,16] that \begin{equation} C_{a}\left( M_{Akj}\rho _{AB}M_{Akj}^{\dagger }\right) =\left\vert \det \left( M_{Akj}\right) \right\vert C_{a}\left( \rho _{AB}\right) , \end{equation} \begin{equation} C\left( M_{Akj}\rho _{AB}M_{Akj}^{\dagger }\right) =\left\vert \det \left( M_{Akj}\right) \right\vert C\left( \rho _{AB}\right) \end{equation} and the analogous relations for $M_{Bk^{\prime }j^{\prime }}$. Eq. 
(8) shows that $\tau \left( \rho _{AB}\right) $ does not increase under Alice's and Bob's local operations. \begin{figure}
\caption{The illustration of the relation among CoA, bipartite concurrence, and genuine tripartite entanglement. The left right-angled triangle corresponds to Theorem 1 (for pure states) and the right obtuse-angled triangle to Theorem 2 (for mixed states). All quantities in the figure are defined as in the corresponding theorems.}
\end{figure}
Next we prove that $\tau \left( \rho _{AB}\right) $ does not increase under Charlie's local operations either. Suppose $\rho _{AB}=\lambda \rho _{1}^{AB}+(1-\lambda )\rho _{2}^{AB}$, $\lambda \in \lbrack 0,1]$; then \begin{eqnarray} &&\lambda \tau \left( \rho _{1}^{AB}\right) +(1-\lambda )\tau \left( \rho _{2}^{AB}\right) \notag \\ &=&\lambda \sqrt{C_{a}^{2}\left( \rho _{1}^{AB}\right) -C^{2}(\rho _{1}^{AB}) } \notag \\ &&+(1-\lambda )\sqrt{C_{a}^{2}\left( \rho _{2}^{AB}\right) -C^{2}(\rho _{2}^{AB})} \notag \\ &\leq &\left\{ \left[ \lambda C_{a}\left( \rho _{1}^{AB}\right) +(1-\lambda )C_{a}\left( \rho _{2}^{AB}\right) \right] ^{2}\right. \notag \\ &&\left. -\left[ \lambda C\left( \rho _{1}^{AB}\right) +(1-\lambda )C\left( \rho _{2}^{AB}\right) \right] ^{2}\right\} ^{1/2} \notag \\ &\leq &\sqrt{C_{a}^{2}\left( \rho _{AB}\right) -C^{2}\left( \rho _{AB}\right) }=\tau \left( \rho _{AB}\right) , \end{eqnarray} where the first inequality follows from the Cauchy-Schwarz inequality (10) and the second follows from the definitions of $C_{a}\left( \rho _{AB}\right) $ and $C\left( \rho _{AB}\right) $. Eq. (14) shows that $\tau \left( \rho _{AB}\right) $ is a concave function of $\rho _{AB}$, which proves that $\tau \left( \rho _{AB}\right) $ does not increase under Charlie's local operations, by the same procedure as (\textbf{Theorem 3} of) Ref. [17]. All of the above shows that $\tau \left( \rho _{AB}\right) $ is an entanglement monotone.
Now we prove that $\tau\left(\rho_{AB}\right)$ characterizes genuine tripartite entanglement. Based on eqs. (4) and (5), it is obvious that \begin{equation} \tau \left( \rho _{AB}\right) =\left\{ \begin{array}{cc} \sum\limits_{i=1}^{4}\lambda _{i}, & \lambda _{1}\leq \sum\limits_{i=2}^{4}\lambda _{i}, \\ 2\sqrt{\lambda _{1}\sum\limits_{i=2}^{4}\lambda _{i}}, & \lambda _{1}>\sum\limits_{i=2}^{4}\lambda _{i}, \end{array} \right. \end{equation} which is an explicit formula. Ref. [18] introduced a quantity, named an "entanglement semi-monotone", that characterizes genuine tripartite entanglement. One can check that $\tau \left( \rho _{AB}\right) $ vanishes under exactly the same conditions as the quantity of Ref. [18], which shows that $\tau \left( \rho _{AB}\right) $ characterizes the genuine tripartite entanglement. The proof is completed.$
\Box $
In general, multipartite entanglement is quantified in terms of different classifications [19-21]. However, $\tau \left( \rho _{AB}\right)$ quantifies genuine tripartite entanglement in a new way: we consider the entanglement of the GHZ-state class as the minimal unit [22] in terms of the tensor treatment [23] and collect all the genuine tripartite inseparability without further classification. It is an interesting generalization of the 3-tangle. \textbf{Theorem 1} has a very clear physical meaning: the increment of entanglement between Alice and Bob induced by Charlie is exactly the genuine tripartite entanglement among them. The meaning is especially easy to understand for tripartite quantum states of qubits. In this case, $\tau \left( \rho _{AB}\right) =2\sqrt{\lambda_1\lambda_2}$. The two most obvious examples are the GHZ state and the W state. The entanglement of the reduced density matrix of the GHZ state is zero, hence \textbf{Theorem 1} shows that the CoA of the GHZ state comes entirely from the three-way entanglement and equals 1 (the value of the 3-tangle). On the contrary, the W state has no three-way entanglement (only two-way entanglement) [24], hence its CoA equals the concurrence ($\frac{2}{3}$) of the two parties. That is to say, for the W state Charlie cannot provide any help to increase the entanglement between Alice and Bob.
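The GHZ and W examples are easy to verify numerically. The following self-contained Python sketch (our addition, using \texttt{numpy}) evaluates the explicit formula (15) for $\tau$ on the reduced state $\rho_{AB}$ of each three-qubit state; it reproduces $\tau = 1$ for GHZ (all of the CoA is three-way entanglement) and $\tau = 0$ for W.

```python
import numpy as np

SY = np.array([[0, -1j], [1j, 0]])
FLIP = np.kron(SY, SY)

def lambdas(rho):
    """Wootters lambdas: decreasing square roots of eig(rho * rho_tilde)."""
    ev = np.linalg.eigvals(rho @ (FLIP @ rho.conj() @ FLIP))
    return np.sort(np.sqrt(np.clip(ev.real, 0.0, None)))[::-1]

def tau(rho):
    """Explicit formula (15) for the residual quantity tau(rho_AB)."""
    l = lambdas(rho)
    rest = l[1:].sum()
    return l.sum() if l[0] <= rest else 2.0 * np.sqrt(l[0] * rest)

def reduced_AB(psi):
    """Trace out the third qubit of a 3-qubit pure state (length-8 vector)."""
    m = psi.reshape(4, 2)              # rows: AB basis, cols: C basis
    return m @ m.conj().T

ghz = np.zeros(8); ghz[0] = ghz[7] = 1/np.sqrt(2)    # (|000>+|111>)/sqrt2
w   = np.zeros(8); w[[1, 2, 4]] = 1/np.sqrt(3)       # (|001>+|010>+|100>)/sqrt3

print(tau(reduced_AB(ghz)))   # GHZ: all CoA is three-way entanglement
print(tau(reduced_AB(w)))     # W: no three-way entanglement
```

The tolerances in any numerical check must be loose for the W state, since the spurious $\lambda_2, \lambda_3, \lambda_4$ are only zero up to floating-point noise and enter $\tau$ through a square root.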
\section{III. Monogamy inequality for mixed states} For a given mixed state $\rho _{ABC}$, CoA can be extended to mixed states via the convex roof construction [15], i.e., \begin{equation} C_{a}(\rho _{ABC})=\min \sum\limits_{i}p_{i}C_{a}(\left\vert \psi ^{i}\right\rangle _{ABC}), \end{equation} where the minimum is taken over all decompositions $\{p_{i},\left\vert \psi ^{i}\right\rangle _{ABC}\}$ of $\rho _{ABC}$. Thus we have the following theorem.
\textbf{Theorem 2: }\emph{For a }$\left( 2\otimes 2\otimes n\right) $\emph{- dimensional mixed state }$\rho _{ABC}$\emph{,} \begin{equation} C_{a}^{2}\left( \rho _{ABC}\right) \geqslant C^{2}\left( \rho _{AB}\right) +\tau ^{2}\left( \rho _{ABC}\right) , \end{equation} \emph{where }$\tau \left( \rho _{ABC}\right) $\emph{\ is the genuine tripartite entanglement measure for mixed states, obtained by extending }$\tau \left( \cdot\right) $\emph{\ of pure states via the convex roof construction, and }$\rho _{AB}=tr_{C}\rho _{ABC}$\emph{.}
Analogous to \textbf{Theorem 1}, one can easily find after some simple algebra that the relation of \textbf{Theorem 2} corresponds to an obtuse-angled triangle, where CoA corresponds to the length of the side opposite the obtuse angle. See the right triangle in Fig. 1 for the illustration.
\textbf{Proof.} Suppose $\{p_{k},\left\vert \psi ^{k}\right\rangle _{ABC}\} $ is the optimal decomposition in the sense of
\begin{equation} \tau \left( \rho _{ABC}\right) =\sum\limits_{k}p_{k}\tau \left( \left\vert \psi ^{k}\right\rangle _{ABC}\right) =\sum\limits_{k}p_{k}\tau \left( \sigma _{AB}^{k}\right) , \end{equation} where $\sigma _{AB}^{k}=tr_{C}\left[ \left\vert \psi ^{k}\right\rangle _{ABC}\left\langle \psi ^{k}\right\vert \right] $. According to \textbf{ Theorem 1}, we have \begin{eqnarray} \tau \left( \rho _{ABC}\right) &=&\sum\limits_{k}p_{k}\sqrt{C_{a}^{2}\left( \sigma _{AB}^{k}\right) -C^{2}\left( \sigma _{AB}^{k}\right) } \notag\\ &\leq &\sqrt{\left[ \sum\limits_{k}p_{k}C_{a}\left( \sigma _{AB}^{k}\right) \right] ^{2}-\left[ \sum\limits_{k}p_{k}C\left( \sigma _{AB}^{k}\right) \right] ^{2}} \notag \\ &\leq &\sqrt{C_{a}^{2}\left( \rho _{ABC}\right) -C^{2}\left( \rho _{AB}\right) }, \end{eqnarray} where the first inequality follows from the Cauchy-Schwarz inequality (10) and the second inequality holds by the definitions of $C_{a}\left( \rho _{ABC}\right) $ and $C\left( \rho _{AB}\right) $. Eq. (19) finishes the proof. $
\Box $
\section{IV. Monogamy for multipartite quantum states}
Any given $N$-partite quantum state can always be considered as a $\left( 2\otimes 2\otimes X \right)$- dimensional tripartite quantum state, with $X$ denoting the total dimension of the remaining $N-2$ subsystems, so long as the state includes at least two qubits; hence both theorems hold in these cases. It is worth noting, however, that the two qubits must be owned by Alice and Bob respectively, and the other $N-2$ subsystems should be in Charlie's hands and be considered as a whole; Charlie is allowed to perform any nonlocal operation on these $N-2$ subsystems. In addition, there may be different groupings [25] of a multipartite quantum state, especially for multipartite states of qubits, hence there exist many analogous monogamy equations (for pure states) or monogamy inequalities (for mixed states) for the same quantum state. For pure states, every monogamy equation leads to a genuine $\left( 2\otimes 2\otimes X\right) $- dimensional tripartite entanglement monotone that quantifies the genuine tripartite entanglement of the tripartite state generated by the corresponding grouping.
\section{V. Conclusion and discussion}We have presented a monogamy equation with an elegant form for $(2\otimes 2\otimes n)$- dimensional quantum pure states, which, for the first time, reveals the relation among bipartite concurrence, CoA, and genuine tripartite entanglement. The equation naturally leads to a genuine tripartite entanglement measure for $(2\otimes 2\otimes n)$-dimensional tripartite quantum pure states, which quantifies tripartite entanglement based on a new idea. The monogamy equation reduces to a monogamy inequality for mixed states. Both results for tripartite quantum states also apply to multipartite quantum states. We hope that the current results can shed new light not only on the monogamy of entanglement but also on the quantification of multipartite entanglement.
\section{ Acknowledgement} This work was supported by the National Natural Science Foundation of China, under Grant No. 10747112 and No. 10575017.
\end{document}
\begin{document}
\title[Invariant Theory] {invariant theory of artin-schelter regular algebras: a survey}
\author{Ellen E. Kirkman}
\address{ Department of Mathematics, P. O. Box 7388, Wake Forest University, Winston-Salem, NC 27109} \email{kirkman@wfu.edu}
\begin{abstract} This is a survey of results that extend notions of the classical invariant theory of linear actions by finite groups on $k[x_1, \dots, x_n]$ to the setting of finite group or Hopf algebra $H$ actions on an Artin-Schelter regular algebra $A$. We investigate when $A^H$ is AS regular, or AS Gorenstein, or a ``complete intersection" in a sense that is defined. Directions of related research are explored briefly. \end{abstract}
\maketitle
\setcounter{section}{-1}
\section{Introduction} \label{xxsec0} The study of invariants of finite groups acting on a commutative polynomial ring $k[x_1, \dots, x_n]$ has played a major role in the development of commutative algebra, algebraic geometry, and representation theory. This paper is a survey of more recent work that extends these techniques to a noncommutative setting. We will be particularly concerned with algebraic properties of the subring of invariants.
We begin with some basic definitions. Throughout we let $k$ be an algebraically closed field of characteristic zero and $A$ be a $k$-algebra. A $k$-algebra $A$ is said to be {\it connected graded} if $A = k \oplus A_1 \oplus A_2 \oplus \cdots$ with $A_i \cdot A_j \subseteq A_{i+j}$ for all $i,j \in \mathbb{N}$; we denote the trivial module of a connected graded algebra by $k$. Throughout we assume that a connected graded algebra $A$ is generated in degree 1. The {\it Hilbert series} of $A$ is defined to be the formal power series $H_A(t) = \sum_{i \in \mathbb{N}} (\dim_k A_i) t^i$. The Gelfand-Kirillov dimension of an algebra $A$ is denoted by $\GKdim A$; it is related to the rate of growth in the dimensions of the graded pieces $A_n$ of $A$ (see \cite{KL}). The commutative polynomial ring $k[x_1, \dots, x_n]$ has Gelfand-Kirillov dimension $n$. In our noncommutative setting we replace the commutative polynomial ring $k[x_1, \dots, x_n]$ with an Artin-Schelter regular algebra, defined as follows.
\begin{definition} \label{zzdef1.1} Let $A$ be a connected graded algebra. We call $A$ {\it Artin-Schelter Gorenstein} (or {\it AS Gorenstein} for short) {\it of dimension $d$} if the following conditions hold: \begin{enumerate} \item[(a)] $A$ has injective dimension $d<\infty$ on the left and on the right, \item[(b)] $\Ext^i_A(_Ak,_AA)=\Ext^i_{A}(k_A,A_A)=0$ for all $i\neq d$, and \item[(c)] $\Ext^d_A(_Ak,_AA)\cong \Ext^d_{A}(k_A,A_A)\cong k(-l)$ for some $l$ (where $l$, the shift in the grading, is called the {\it AS index} of $A$). \end{enumerate} If, in addition, \begin{enumerate} \item[(d)] $A$ has finite global dimension, and \item[(e)] $A$ has finite Gelfand-Kirillov dimension, \end{enumerate} then $A$ is called {\it Artin-Schelter regular} (or {\it AS regular} for short) {\it of dimension $d$}. \end{definition}
Note that polynomial rings $k[x_1,\dots, x_n]$ for $n\geq 1$, with $\deg x_i=1$, are AS regular of dimension $n$, and they are the only commutative AS regular algebras. Hence AS regular algebras are natural generalizations of commutative polynomial rings.
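As a small computational aside (not part of the survey), the definitions above can be cross-checked in the commutative case: for $A = k[x_1, \dots, x_n]$ the degree-$d$ piece has dimension $\binom{n+d-1}{d}$ (the number of degree-$d$ monomials), so $H_A(t) = (1-t)^{-n}$, consistent with $A$ having Gelfand-Kirillov dimension $n$. The Python sketch below computes the coefficients of $(1-t)^{-n}$ by iterated prefix sums (each factor $1/(1-t) = 1 + t + t^2 + \cdots$ acts on a power series as a cumulative sum) and compares them with the monomial count.

```python
from math import comb

def monomial_dim(n, d):
    """dim_k A_d for A = k[x_1,...,x_n]: monomials of total degree d."""
    return comb(n + d - 1, d)

def hilbert_coeffs(n, N):
    """Coefficients of H_A(t) = 1/(1-t)^n up to t^N.  Start from the
    series of 1/(1-t) = 1 + t + t^2 + ... and multiply by 1/(1-t)
    another n-1 times; each such multiplication is a running prefix sum."""
    c = [1] * (N + 1)
    for _ in range(n - 1):
        for i in range(1, N + 1):
            c[i] += c[i - 1]
    return c

n, N = 3, 10
assert hilbert_coeffs(n, N) == [monomial_dim(n, d) for d in range(N + 1)]
print(hilbert_coeffs(3, 5))  # [1, 3, 6, 10, 15, 21]
```

The polynomial growth of these coefficients (degree $n-1$ in $d$) is exactly what makes $\GKdim A = n$.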
In some cases we are able to prove stronger results for a special class of AS regular algebras that we call quantum polynomial rings. \begin{definition} \label{quantumpolynomial} Let $A$ be a connected graded algebra. If $A$ is a noetherian, AS regular graded domain of global dimension $n$ and $H_A(t)=(1-t)^{-n}$, then we call $A$ {\it a quantum polynomial ring of dimension $n$}. \end{definition}
By \cite[Theorem 5.11]{Sm2}, a quantum polynomial ring is Koszul and hence it is generated in degree 1. The GK-dimension of a quantum polynomial ring of global dimension $n$ is $n$.
Artin-Schelter regular algebras of dimension 3 were classified in \cite{ASc, ATV}. They occur in two families: (1) the quantum polynomial algebras having 3 generators and 3 quadratic relations (that include the {\it 3-dimensional Sklyanin algebra}), and (2) algebras having 2 generators and 2 cubic relations (that include the noetherian graded {\it down-up algebras}, which will be discussed in Section 3). There are many examples of AS regular algebras of higher dimensions, but their classification has not been completed (current research centers on dimension 4, which appears to be much more complex). The invariant theory of AS regular algebras may become richer as higher dimensional AS regular algebras are discovered and classified.
We consider finite groups $G$ of graded automorphisms acting on $A$, and throughout we denote the graded automorphism group of $A$ by $\Aut(A)$. We note that any $n \times n$ matrix acts naturally on a commutative polynomial ring in $n$ indeterminates. However, in order for a linear map defined on the degree 1 piece of a noncommutative algebra $A$ to give a well-defined graded homomorphism of $A$, one must check that the map preserves the ideal of relations; for example, the transposition of $x$ and $y$ is an automorphism of $k_q[x,y]$, the skew polynomial ring with the relation $yx=qxy$, if and only if $q = \pm 1$.
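To make the last point concrete, here is a small \texttt{sympy} check (our addition, not from the survey). In the free algebra on $x, y$, the swap sends the relation $r = yx - qxy$ to $xy - qyx$; since the degree-2 part of the ideal of relations is spanned by $r$, the swap is a well-defined automorphism of $k_q[x,y]$ exactly when this image is a scalar multiple of $r$.

```python
import sympy as sp

q, c = sp.symbols('q c')

# The swap x <-> y sends r = yx - q*xy to xy - q*yx.  It preserves the
# ideal (r) iff xy - q*yx = c*(yx - q*xy) for some scalar c; matching
# the coefficients of the words xy and yx gives two equations.
solutions = sp.solve([sp.Eq(1, -c * q),    # coefficient of xy
                      sp.Eq(-q, c)],       # coefficient of yx
                     [q, c], dict=True)
qs = sorted(s[q] for s in solutions)
print(qs)  # the swap is an automorphism only for q = -1 or q = 1
```

Eliminating $c$ gives $q^2 = 1$ by hand as well, so the symbolic solve is just a mechanical confirmation.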
Typically there is a limited supply of graded automorphisms of $A$, so we introduce further noncommutativity by allowing actions on $A$ by a finite dimensional Hopf algebra $H$. Throughout we adopt the usual notation for the Hopf structure of a Hopf algebra $H$, namely $(H, m, \epsilon, \Delta, u, S)$ (see \cite{Mon}, our basic reference for Hopf algebra actions), and we denote the $H$-action on $A$ by $\cdot : H \otimes A \rightarrow A$. All groups $G$ we consider are finite, and all Hopf algebras $H$ are finite dimensional.
\begin{definition}\ \label{def1.3} \begin{enumerate} \item Let $H$ be a Hopf algebra and $A$ be a $k$-algebra. We say that {\it $H$ acts on $A$} (from the left), or that $A$ is a {\it left $H$-module algebra}, if $A$ is a left $H$-module and, for all $h \in H$, $h \cdot (ab) = \sum (h_1 \cdot a)(h_2 \cdot b)$ for all $a,b \in A$, where $\Delta(h) = \sum h_1 \otimes h_2$ (using the usual Sweedler notation convention),
and $h \cdot 1_A = \epsilon(h) 1_A$. \item The {\it invariant subring} of such an action is defined to be
$$A^H = \{ a \in A ~|~ h \cdot a = \epsilon(h) a, ~\forall ~h \in H \}.$$ \end{enumerate} \end{definition} When the Hopf algebra $H=k[G]$ is the group algebra of a finite group $G$, the usual coproduct on $k[G]$ is $\Delta(g) = g \otimes g$, and the Hopf algebra action on $A$ means that $g$ acts as a homomorphism on elements of $A$. Similarly, since $\epsilon(g) = 1$, the invariant subring $A^{k[G]} = A^G$ is the usual subring of invariants.
Sometimes it is more convenient to view a left $H$-module algebra $A$ as a right $H^\circ$-comodule algebra, where $H^\circ$ is the Hopf dual of $H$. When $H$ is finite dimensional the Hopf dual is the vector space dual, and a left $H$-module can be viewed as a right $H^\circ$-module. To be a right $K$-comodule algebra over a Hopf algebra $K$ we require a coaction $\rho: A \rightarrow A \otimes K$ with properties: $\rho(1_A) = 1_A \otimes 1_K$ and $\rho(ab) = \rho(a) \rho(b)$ for all $a,b \in A$. The coinvariant subring of such a coaction is defined to be
$$A^{co K} = \{ a \in A | \rho(a) = a \otimes 1_K \}.$$ It follows (\cite[Lemma 6.1 (c)]{KKZ2}) that $A^H = A^{co H^\circ}$.
Throughout we require that the Hopf algebra $H$ acts on $A$ so that \begin{hypotheses} \label{standing}\ \begin{itemize} \item $A$ is an $H$-module algebra, \item the grading on $A$ is preserved, and \item the action of $H$ on $A$ is inner faithful. \end{itemize} \end{hypotheses}
The inner faithful assumption guarantees that the Hopf algebra action is not actually an action over a homomorphic image of $H$ that might be a group algebra (in which case the action is actually a group action). \begin{definition}\cite{BB}. \label{def1.4} Let $M$ be a left $H$-module. We say that $M$ is an {\it inner faithful} $H$-module, or $H$ \emph{acts inner faithfully} on $M$, if $IM\neq 0$ for every nonzero Hopf ideal $I$ of $H$. Dually, let $N$ be a right $K$-comodule. We say that $N$ is an {\it inner faithful} $K$-comodule, or that $K$ {\it acts inner faithfully} on $N$, if for any proper Hopf subalgebra $K' \subsetneq K$, $\rho(N)$ is not contained in $N \otimes K'$. \end{definition}
\begin{lemma} \cite[Lemma 1.6(a)]{CKWZ1}. Let $H$ be a finite dimensional Hopf algebra, $K = H^\circ$, and $U$ be a left $H$-module. Then $U$ is a right $K$-comodule, and the $H$-action on $U$ is inner faithful if and only if the induced $K$-coaction on $U$ is inner faithful. \end{lemma}
Despite the hope that Hopf algebra actions would provide many new actions on $A$, we note the result of Etingof and Walton, that states that if $H$ is a semisimple Hopf algebra acting inner faithfully on a commutative domain, then $H$ is a group algebra \cite{EW1}; this result does not require our assumptions that the action of $H$ on $A$ preserves the grading on $A$. Other results showing that Hopf actions must factor through group actions are mentioned in Section 4. There are, however, interesting actions by pointed Hopf algebras on domains and fields \cite{EW2}. Moreover, there are subrings of invariants that occur under Hopf actions that are not invariants under group actions, warranting continued study of these actions.
In the first section we discuss the question of when $A^H$ is AS regular, extending the Shephard-Todd-Chevalley Theorem to our noncommutative context. In the second section we discuss the question of when $A^H$ is AS Gorenstein; these results include extending Watanabe's Theorem that $k[x_1, \dots, x_n]^G$ is Gorenstein when $G$ is a finite subgroup of ${\rm SL}_n(k)$, and Felix Klein's classification of the invariants of finite subgroups of ${\rm SL}_2(k)$ acting on $k[x,y]$. In the third section we discuss the question of when $A^G$ is a ``complete intersection" (and what a complete intersection might mean in this context); here our goal is to extend results of Gordeev, Kac, Nakajima, and Watanabe to our noncommutative setting. In the final section we briefly discuss some related directions of research.
\section{Artin-Schelter regular subrings of invariants}
\label{xxsec1} A result attributed to Carl F. Gauss states that the subring of invariants of the commutative polynomial ring $k[x_1, \dots, x_n]$ under the action of the symmetric group $S_n$, permuting the indeterminates, is generated by the $n$ elementary symmetric polynomials. Further, the symmetric polynomials are algebraically independent, so that the subring of invariants is also a polynomial ring. This result raised the question: for which finite groups $G$ acting on $k[x_1, \dots, x_n]$ is the fixed subring a polynomial ring? This question was answered in 1954 for $k$ an algebraically closed field of characteristic zero by G. C. Shephard and J. A. Todd \cite{ShT}, who classified the complex reflection groups and produced their invariants. Shortly afterward, C. Chevalley \cite{C} gave a more abstract argument that showed that for real reflection groups $G$, the fixed subring $k[x_1, \dots, x_n]^G$ is a polynomial ring, and J.-P. Serre \cite{S} showed that Chevalley's argument could be used to prove the result for all unitary reflection groups. A. Borel's history \cite[Chapter VII]{B} provides details on the origins of the ``Shephard-Todd-Chevalley Theorem". Invariants in a commutative polynomial ring under the action of a reflection group do \underline{not} always form a polynomial ring in characteristic $p$ (see \cite[Example 3.7.7]{DK} (and following) for a discussion, including an example of Nakajima \cite{N1}). However, in characteristic $p$, to obtain a polynomial subring of invariants it is necessary (but not sufficient) that the group be a reflection group (see \cite[proof of Theorem 7.2.1]{Be} for a proof that works in any characteristic). In characteristic zero the necessity of $G$ being a reflection group follows from the sufficiency by considering the normal subgroup of reflections in the group. \begin{theorem}[{\bf Shephard-Todd-Chevalley Theorem}] \cite{ShT} {\rm and} \cite{C}. 
For $k$ a field of characteristic zero, the subring of invariants $k[x_1, \dots, x_n]^G$ under a finite group $G$ is a polynomial ring if and only if $G$ is generated by reflections. \end{theorem} In this context a linear map $g$ of finite order (hence diagonalizable) on a vector space $V$ is called a {\it reflection of $V$} if all but one of the eigenvalues of $g$ are 1 (i.e. $\dim V^g = \dim V - 1$). We note that sometimes the term ``reflection" is reserved for the case that $g$ has real eigenvalues (so that the single non-identity eigenvalue is $-1$), and the term ``pseudo-reflection" is used for the case that the single non-identity eigenvalue is a complex root of unity. We will call a graded automorphism that is a reflection (in this sense) of $V=A_1$, the $k$-space of elements of degree 1, a ``classical reflection".
In the setting of the Shephard-Todd-Chevalley Theorem the fixed subring $A^G$ is isomorphic to the original ring $A$ (both being commutative polynomial rings), and early noncommutative generalizations of the Shephard-Todd-Chevalley Theorem focused on this property. S.P. Smith \cite{Sm1} showed that if $G$ is a finite group acting on the first Weyl algebra $A= A_1(k)$ then $A^G$ is isomorphic to $A$ if and only if $G = \{ 1 \}$. Alev and Polo \cite{AP} extended Smith's result to the higher Weyl algebras, and showed further that if ${\mathfrak g}$ and ${\mathfrak g}'$ are two semisimple Lie algebras, and $G$ is a finite group of algebra automorphisms of the universal enveloping algebra $U({\mathfrak g})$ such that $U({\mathfrak g})^G \cong U({\mathfrak g}')$, then $G$ is trivial and ${\mathfrak g}\cong {\mathfrak g}'$. The preceding results were attributed to the ``rigidity" of noncommutative algebras, and these early results suggested that there is no noncommutative analogue of the Shephard-Todd-Chevalley Theorem. However, as we shall see, in the case that the algebra $A$ is graded there are other ways to generalize the Shephard-Todd-Chevalley Theorem.
We begin with an illustrative noncommutative example. \begin{example} \label{classicalref} Let $A=k_{-1}[x,y]$ be the skew polynomial ring with the relation $yx=-xy$; the ring $A$ is an AS regular algebra of dimension 2. Let $G= \langle g \rangle $ be the cyclic group generated by the graded automorphism $g$, where $g(x) = \lambda_n x$ and $g(y) = y$, for $\lambda_n$ a primitive $n$-th root of unity. The linear map $g$ acting on $V=A_1$ is a classical reflection. The fixed ring ${A}^{G} = k \langle x^n, y \rangle = k_{(-1)^n}[x,y]$ is isomorphic to $A$ when $n$ is odd, but not when $n$ is even (when it is a commutative polynomial ring). However, $A^G$ is AS regular for all $n$. This example suggests that a reasonable generalization of the Shephard-Todd-Chevalley Theorem is that $G$ should be thought of as a ``reflection group" when $A^G$ is AS regular, rather than when $A^G$ is isomorphic to $A$. When $A$ is a commutative polynomial ring the two conditions coincide: $A^G$ is AS regular if and only if $A^G \cong A$. \end{example}
\begin{definition} We call a finite group $G$ of graded automorphisms of an AS regular algebra $A$ a {\it reflection group for $A$} if the fixed subring $A^G$ is an AS regular algebra. \end{definition}
In our terminology the classical reflection groups are reflection groups for\linebreak $k[x_1, \dots, x_n]$. The next example demonstrates that a classical reflection group will not always produce an AS regular invariant subring when acting on some other AS regular algebra.
\begin{example} \label{transposition} Again, let $A=k_{-1}[x,y]$ be the skew polynomial ring with the relation $yx=-xy$. The transposition $g$ that interchanges $x$ and $y$ induces a graded automorphism of $A$, and $g$ generates the symmetric group $S_2$, which is a classical reflection group. One set of generators for the fixed ring $A^{\langle g \rangle}$ is $x+y$ and $x^3 +y^3$. Here $xy$ is not fixed, as it is in the commutative case, and the invariant $x^2 + y^2 = (x + y)^2$ is not a generator. The generators $x+y$ and $x^3 +y^3$ are not algebraically independent, and the algebra they generate is not an AS regular algebra. However, as we shall see (Example \ref{transagain}), the fixed ring is AS Gorenstein, and it can be viewed as a hypersurface in an AS regular algebra of dimension 3. \end{example}
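The computations in this example are easy to verify mechanically. Below is a small Python sketch (our addition) that represents elements of $k_{-1}[x,y]$ in the normal form $\sum c_{ab}\, x^a y^b$ and implements the rewriting rule $yx = -xy$; it checks that $(x+y)^2 = x^2+y^2$, that $x+y$ and $x^3+y^3$ are fixed by the transposition $g$, and that $xy$ is not.

```python
from collections import defaultdict

def mul(p, q):
    """Multiply elements of k_{-1}[x,y] stored as {(a, b): coeff} in the
    normal form x^a y^b; moving y^b past x^c introduces the sign (-1)^(b*c)."""
    r = defaultdict(int)
    for (a, b), s in p.items():
        for (c, d), t in q.items():
            r[(a + c, b + d)] += s * t * (-1) ** (b * c)
    return {m: v for m, v in r.items() if v}

def g(p):
    """The transposition x <-> y: x^a y^b |-> y^a x^b = (-1)^(a*b) x^b y^a."""
    return {(b, a): (-1) ** (a * b) * v for (a, b), v in p.items()}

s1 = {(1, 0): 1, (0, 1): 1}        # x + y
s3 = {(3, 0): 1, (0, 3): 1}        # x^3 + y^3
xy = {(1, 1): 1}                   # xy

print(mul(s1, s1))                 # {(2, 0): 1, (0, 2): 1}, i.e. x^2 + y^2
print(g(s1) == s1, g(s3) == s3)    # both invariant
print(g(xy) == xy)                 # xy is not fixed: g(xy) = -xy
```

The cancellation of the cross terms in $(x+y)^2$ is exactly the relation $xy + yx = 0$, which is why $x^2 + y^2$ fails to be a new generator here.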
Example \ref{transposition} shows that when $A$ is noncommutative we need a different notion of ``reflection" to obtain an AS regular fixed subring, as the eigenvalues of the linear map $g$ no longer control the AS regularity of the fixed subring. Our results suggest that it is the trace function, defined below, that determines whether a linear map is a ``reflection". In Example \ref{toshowtraces} below we will show that the classical reflection $g$ in Example \ref{transposition} is not a reflection in this new sense, while the automorphism $g$ of Example \ref{classicalref} is a reflection (under both the classical definition and our new definition).
\begin{definition} Let $A$ be a graded algebra with $A_j$ denoting the elements of degree $j$. The {\it trace function} of a graded automorphism $g$ acting on $A$ is defined to be the formal power series
$$Tr_A(g,t) = \sum_{j=0}^{\infty} trace(g|_{A_j})\; t^j,$$
where $trace(g|_{A_j})$ is the usual trace of the linear map $g$ restricted to $A_j$. \end{definition} In this setting, by \cite[Theorem 2.3(4)]{JiZ} $Tr_A(g,t)$ is a rational function of the form $1/e_g(t)$, where $e_g(t)$ is a polynomial in $k[t]$, and the zeroes of $e_g(t)$ are all roots of unity \cite[Lemma 1.6(e)]{KKZ1} (in the case $A=k[x_1, \dots, x_n]$, the roots of $e_g(t)$ are the inverses of the eigenvalues of $g$). The next proposition, a version of the classical Molien's Theorem, shows that trace functions can be used to compute the Hilbert series of the fixed subring; knowing this Hilbert series is very useful in computing the invariants themselves. (For the Hopf algebra action case of the proposition below, see \cite[Definition 2.1.1]{Mon} for the definition of the integral.) \begin{proposition}[{\bf{Molien's Theorem}}] \label{Molien} \cite[Lemma 5.2]{JiZ}, \cite[Lemma 7.3]{KKZ2}. The Hilbert series of the fixed subring $A^G$ is
$$H_{A^G}(t) = \frac{1}{|G|} \sum_{g \in G} Tr_A(g,t).$$ Similarly, for a semisimple Hopf algebra $H$ acting on $A$ with integral $\int$ that has $\epsilon(\int) = 1 \in k$, the Hilbert series of the fixed subring $A^H$ is $H_{A^H}(t) = Tr_A(\int,t)$. \end{proposition}
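As a consistency check (a direct computation, not taken from the cited sources), Molien's Theorem recovers the Hilbert series of the fixed ring of Example \ref{classicalref}: for $g = \mathrm{diag}(\lambda_n, 1)$ acting on $A = k_{-1}[x,y]$ we have $Tr_A(g^j,t) = 1/\big((1-\lambda_n^j t)(1-t)\big)$, so
$$H_{A^G}(t) = \frac{1}{n}\sum_{j=0}^{n-1} \frac{1}{(1-\lambda_n^j t)(1-t)} = \frac{1}{(1-t)(1-t^n)},$$
using the identity $\sum_{j=0}^{n-1}(1-\lambda_n^j t)^{-1} = n/(1-t^n)$. This matches the fixed ring $A^G = k\langle x^n, y\rangle$, whose generators have degrees $n$ and $1$.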
In our setting it is the order of the pole of $Tr_A(g,t)$ at $1$, rather than the eigenvalues of $g$, that determines whether $g$ is a ``reflection". \begin{definition} \cite[Definition 1.4]{KKZ1}. Let $A$ be an AS regular algebra with $\GKdim A = n$. We call a graded automorphism $g$ of $A$ a {\it reflection of $A$} if the trace function of $g$ has the form $$Tr_A(g,t) = \frac{1}{(1-t)^{n-1} q(t)} \mbox{ where } q(1) \neq 0.$$ \end{definition} We note that in \cite{KKZ1} we used the term ``quasi-reflection" to distinguish our use of reflection from the usual notion of reflection. Here we will use the term ``reflection" as defined above, and refer to ``classical reflection" when we are referring to a reflection defined in terms of its eigenvalues. The following examples can be used to justify our definition of reflection, as we want a group generated by ``reflections" to have a fixed subring that is AS regular (i.e. to be a reflection group for $A$). \begin{example} \label{toshowtraces} Let $A= k_{-1}[x,y]$, an AS regular algebra of dimension 2, and in each case let $G= \langle g \rangle $ be the cyclic group generated by the graded automorphism $g$, expressed as a matrix acting on the vector space $A_1 = kx \oplus ky$. Let $\lambda_n$ be a primitive n-th root of unity. \begin{enumerate}
\item As in Example \ref{classicalref}, let ${\displaystyle {g = \mattwo{ \lambda_n & 0\\ 0 & 1}}}$; the automorphism $g$ is a classical reflection of $A$. The trace function is ${\displaystyle Tr_{A}(g ,t) = \frac{1} {(1-t)(1- \lambda_n t)}}$, so that $g$ is a reflection of $A$ under our definition. Furthermore ${A}^{G} = k \langle x^n, y \rangle = k_{(-1)^n}[x,y]$ is AS regular for all $n$.
\item As in Example \ref{transposition}, let ${\displaystyle {g= \mattwo{0 & 1\\1 & 0}}}$. The trace function is ${\displaystyle Tr_{{A}}({g},t) = \frac{1}{1+t^2}}$. As we noted earlier, ${A}^G$ is not AS regular. The automorphism $g$ is a classical reflection, but it is not a reflection of $A$ in our sense (the order of the pole at $t=1$ is not $2-1=1$, and the fixed subring is not AS regular).
\item Let ${\displaystyle {g = \mattwo{0 & -1\\1 & 0}}}$; the automorphism $g$ is not a classical reflection. The trace function is ${\displaystyle Tr_{{A}}({g},t) = \frac{1}{{(1-t)}(1+t)}}$ and ${A}^{G} = k[x^2 + y^2, xy]$ is a commutative polynomial ring so is AS regular. The automorphism $g$ is a reflection of $A$ in our sense, and we called it a ``mystic reflection" to distinguish it from a classical reflection (a reflection such as in (1)).\\ \end{enumerate} \end{example} Proposition \ref{Molien} is used in computing Example \ref{toshowtraces}. For example, in (3) Molien's Theorem shows us that $$H_{{A}^{G}}(t) = \frac{1}{4(1-t)^2} + \frac{2}{4(1-t^2)} + \frac{1}{4(1+t)^2} = \frac{1}{(1-t^2)^2},$$ hence the two invariants we have found generate the invariant subring, because they are fixed and the ring they generate has this Hilbert series.
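The trace function in part (2) can also be verified by a direct degree-by-degree check (not taken from \cite{KKZ1}), which shows how the relation $yx=-xy$ enters. On $A_1$ the transposition $g$ has trace $0$; on $A_2$, with basis $\{x^2, xy, y^2\}$, we have $g(x^2) = y^2$, $g(y^2) = x^2$ and $g(xy) = yx = -xy$, so $trace(g|_{A_2}) = -1$. Continuing in this way,
$$Tr_A(g,t) = 1 - t^2 + t^4 - t^6 + \cdots = \frac{1}{1+t^2}.$$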
We have shown that for $A$ a quantum polynomial ring, there are only two kinds of reflections of $A$: classical reflections and new reflections (as in Example \ref{toshowtraces} (3)) that we call ``mystic reflections". \begin{theorem}\cite[Theorem 3.1]{KKZ1}. \label{xxthm3.1} Let $A$ be a quantum polynomial ring of global dimension $n$. If $g\in \Aut(A)$ is a reflection of $A$ of finite order, then $g$ is in one of the following two cases: \begin{enumerate} \item There is a basis of $A_1$, say $\{b_1,\cdots,b_n\}$, such that $g(b_j)=b_j$ for all $j\geq 2$ and $g(b_1)
=\lambda_n b_1$ for $\lambda_n$ a root of unity. Hence $g|_{A_1}$ is a classical reflection. \item The order of $g$ is $4$ and there is a basis of $A_1$, say $\{b_1,\cdots,b_n\}$, such that $g(b_j)=b_j$ for all $j\geq 3$ and $g(b_1) =i\; b_1$ and $g(b_2)=-i\; b_2$ (where $i^2=-1$). We call such a reflection a {\it mystic reflection}. \end{enumerate} \end{theorem}
It is quite possible that other AS regular algebras have different kinds of reflections, as AS regular algebras have been completely classified only in dimensions $\leq 3$.
We conjecture the following generalization of the Shephard-Todd-Chevalley Theorem. \begin{conjecture}\label{STCconjecture} Let $A$ be an AS regular algebra and $G$ be a finite group of graded automorphisms of $A$. Then $A^G$ is AS regular if and only if $G$ is generated by reflections of $A$. \end{conjecture} We have proved a number of partial results that support this conjecture. First we note that, due to the following theorem, only one direction of this conjecture needs to be proved when $A$ is noetherian. \begin{theorem}\cite[Proposition 2.5(b)]{KKZ3}. Let $A$ be a noetherian AS regular algebra, and suppose that $A^G$ is AS regular for every finite group $G$ of graded automorphisms of $A$ that is generated by reflections of $A$. Then Conjecture \ref{STCconjecture} is true. \end{theorem} Although we have not proved that if $A^G$ is AS regular, then $G$ is generated by reflections, we have shown that $G$ must contain at least one reflection. \begin{theorem}\cite[Theorem 2.4]{KKZ1}. \label{xxthm2.4} Let $A$ be noetherian and AS regular, and let
$G$ be a finite group of graded automorphisms of $A$. If $A^G$ has finite global dimension, then $G$ contains a reflection of $A$. \end{theorem} We have shown that a number of algebras have no reflections (hence no AS regular fixed algebras). These algebras include: non-PI Sklyanin algebras \cite[Corollary 6.3]{KKZ1}, homogenizations of the universal enveloping algebra of a finite dimensional Lie algebra $\mathfrak{g}$ (\cite[Lemma 6.5(d)]{KKZ1}), and down-up algebras (or any noetherian AS regular algebra of dimension 3 generated by two elements of degree 1) (\cite[Proposition 6.4]{KKZ1}). We will say more about down-up algebras in Section 3.
The skew polynomial ring $k_{p_{ij}}[x_1, \dots, x_n]$ is defined to be the $k$-algebra generated by $x_1, \dots, x_n$ with relations $x_jx_i = p_{ij}x_ix_j$ for all $1 \leq i < j \leq n$ and $p_{ii} =1$. \begin{theorem}\label{skewed} \cite[Theorem 5.5]{KKZ3}. Let $A =k_{p_{ij}}[x_1, \dots, x_n]$, and let $G$ be a finite group of graded automorphisms of $A$. Then $A^G$ has finite global dimension if and only if $G$ is generated by reflections of $A$ (in which case $A^G$ is again a skew polynomial ring). \end{theorem} Theorem \ref{skewed} has been proved using different techniques by Y. Bazlov and A. Berenstein in \cite{BB2}, and we will say more about their results shortly. \begin{theorem}\cite[Theorem 6.3]{KKZ3}. Let $A$ be a quantum polynomial ring and let $G$ be a finite abelian group of graded automorphisms of $A$. Then $A^G$ has finite global dimension if and only if $G$ is generated by reflections of $A$. \end{theorem}
In their seminal paper Shephard and Todd classified the (classical) reflection groups, i.e. the reflection groups for $k[x_1, \dots, x_n]$. When $A$ is a noncommutative AS regular algebra, whether or not a group is a reflection group depends upon the algebra $A$ on which the group acts, and groups different from the classical reflection groups can occur as reflection groups for some noncommutative AS regular algebra.
We present two examples of reflection groups for $k_{p_{ij}}[x_1, \dots, x_n]$ with $p_{ij} = -1$ for all $i \neq j$. \begin{example}\cite[Example 7.1]{KKZ3}. Let $G$ be the group generated by the mystic reflections $$ g_1 = \begin{pmatrix} 0 & -1 & 0\\ 1 & 0 & 0\\ 0 & 0 &1\\ \end{pmatrix} \text{and } g_2 = \begin{pmatrix} 1 & 0 & 0\\ 0 & 0 & -1\\ 0 & 1 & 0\\ \end{pmatrix}.$$ $G$ is the rotation group of the cube, and is isomorphic to the symmetric group $S_4$. $G$ acts on $k_{-1}[x_1, x_2, x_3]$ where $p_{ij} = -1$ for $i \neq j$, with fixed ring the commutative polynomial ring $k[x_1^2+x_2^2+x_3^2, \; x_1x_2x_3, \; x_1^4+x_2^4+x_3^4]$. Hence under this representation (but not the permutation representation) $S_4$ is a reflection group for $k_{-1}[x_1, x_2, x_3]$. \end{example} \begin{example} \cite[Example 7.2]{KKZ3}. The binary dihedral groups $G = BD_{4 \ell}$ are generated by the mystic reflections $$ g_1 = \begin{pmatrix}
0& -\lambda^{-1}\\ \lambda & 0 \end{pmatrix} \text{ and } {g_2 = \begin{pmatrix} 0 & -1\\ 1 & 0 \end{pmatrix}}$$ for $\lambda$ a primitive $2 \ell$-th root of unity. These groups act on $A=k_{-1}[x,y]$ with fixed ring $A^G = k[x^{2 \ell}+ y^{2 \ell}, xy]$, a commutative polynomial ring. Hence the binary dihedral groups are reflection groups for $k_{-1}[x,y]$. When $\ell= 2$, $G$ is the quaternion group of order 8, which is not a classical reflection group. \end{example}
Other examples of reflection groups for $k_{-1}[x_1, \dots, x_n]$ include the infinite family $M(n,\alpha, \beta)$ (\cite[Section 7]{KKZ3}); there are infinite families of these groups that are not isomorphic as groups to classical reflection groups. Bazlov and Berenstein found this same class of groups occurring in their work \cite{BB1} related to Cherednik algebras, and in \cite{BB2} gave a different proof of Theorem \ref{skewed} by introducing a non-trivial correspondence between reflection groups $G$ for $k_{p_{ij}}[x_1, \dots, x_n]$ and classical reflection groups $G'$. In particular, they showed that for $G$ a reflection group for $k_{p_{ij}}[x_1, \dots, x_n]$ the group algebra $k[G]$ is isomorphic to the group algebra $k[G']$ (as algebras) for $G'$ a classical reflection group, even though $G$ and $G'$ are not isomorphic as groups.
Actions of noncocommutative Hopf algebras can also produce AS regular fixed subrings. Hence we expand our notion of reflection group to include ``reflection Hopf algebras" for a given AS regular algebra.
\begin{definition} We call a Hopf algebra $H$ a {\it reflection Hopf algebra for $A$} if $A^H$ is AS regular. \end{definition}
The group algebra of a classical reflection group is a reflection Hopf algebra for $k[x_1, \dots,x_n]$. Next we present an example of a Hopf algebra that is neither commutative nor cocommutative but has an AS regular ring of invariants. \begin{example}\cite[Section 7]{KKZ2}. The smallest dimensional semisimple Hopf algebra $H$ that is not isomorphic to a group algebra or its dual is the 8-dimensional semisimple Hopf algebra $H_8$, defined by Kac and Paljutkin \cite{KP} (see also \cite{Ma1}).
As an algebra {$H_8$} is generated by {$x, y, z$} with the following relations: $$x^2 = y^2 =1, \;\; xy=yx,\;\; zx=yz,$$ $$ zy=xz,\;\; z^2= \frac{1}{2}(1+x+y-xy). $$ The coproduct, counit and antipode are given as follows: $$\Delta(x) = x\otimes x, \;\;\; \Delta(y)=y\otimes y,$$ $$ \Delta(z) = \frac{1}{2}(1\otimes 1 + 1\otimes x + y\otimes 1 - y\otimes x)(z\otimes z), $$ $$ \epsilon(x) = \epsilon(y) = \epsilon(z) =1, \quad S(x)=x^{-1},\; S(y)=y^{-1},\; S(z)=z.$$
${H_8}$ has a unique irreducible 2-dimensional representation on $k u \oplus k v$ given by {$$ x \mapsto \begin{pmatrix}-1 & 0\\ 0 & 1 \end{pmatrix}, \quad \; y \mapsto \begin{pmatrix}1 & 0\\ 0& -1 \end{pmatrix}, \quad \; z \mapsto \begin{pmatrix}0 & 1 \\1& 0 \end{pmatrix}.$$}
\begin{enumerate} \item Let ${A = k\langle u, v \rangle / \langle u^2-v^2 \rangle}$ ($A$ is isomorphic to $k_{-1}[u,v]$). Then $H_8$ acts on $A$ and the fixed subring is ${A}^{H_8} = k[u^2, (uv)^2 - (vu)^2],$ a commutative polynomial ring, and so ${H_8}$ is a reflection Hopf algebra for ${A}$. \item Let ${A =k\langle u, v \rangle / \langle vu \pm iuv \rangle}$. Then $H_8$ acts on $A$ and the fixed subring is ${A}^{H_8} = k[u^2v^2, u^2 + v^2]$, a commutative polynomial ring. Hence ${H_8}$ is a reflection Hopf algebra for ${A}$. \end{enumerate}
\end{example} Furthermore, actions of non-semisimple Hopf algebras can produce AS regular fixed algebras. \begin{example}\cite[Section 3.2.1]{All}. The Sweedler algebra ${H(-1)}$ is generated by ${g}$ and ${x}$ with algebra relations: $$ g^2=1,\;\; x^2 = 0, \;\; xg = - gx,$$ and coproduct, counit, and antipode: $$ \Delta(g) = g\otimes g \;\;\;\; \Delta(x) = g\otimes x + x\otimes 1, $$ $$ \epsilon(g) = 1, \; \epsilon(x) = 0 \quad S(g)=g,\; S(x)=-g x.$$
Then ${H(-1)}$ acts on the commutative polynomial algebra ${k[u,v]}$ as {$$ x \mapsto \begin{pmatrix} 0 & 1\\ 0 & 0 \end{pmatrix}, \quad \; g \mapsto \begin{pmatrix}1 & 0\\ 0& -1 \end{pmatrix} $$} with fixed subring ${k[u,v]}^{H(-1)} = k[u,v^2],$ a commutative polynomial ring. Hence $H(-1)$ is a reflection Hopf algebra for $k[u,v]$. \end{example}
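As a direct check (with the convention that the matrices above give the actions on the basis $\{u,v\}$ of the degree-one part, so that $x(v) = u$, $x(u) = 0$, $g(u) = u$ and $g(v) = -v$), the coproduct $\Delta(x) = g\otimes x + x\otimes 1$ forces the twisted Leibniz rule $x(ab) = g(a)\,x(b) + x(a)\,b$, and therefore
$$x(v^2) = g(v)\,x(v) + x(v)\,v = -vu + uv = 0, \qquad g(v^2) = (-v)^2 = v^2,$$
so $v^2$ (and clearly $u$) lies in the fixed subring, consistent with ${k[u,v]}^{H(-1)} = k[u,v^2]$.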
In work in progress we have shown that (using the notation of \cite{Ma2}) the Hopf algebras $H= A_{4m}$ (for $m$ odd) and $H= B_{4m}$ are reflection Hopf algebras for $A=k_{-1}[x,y]$; in both cases as an algebra (but not as a Hopf algebra) $H$ is isomorphic to $k[D_{4m}]$, the group algebra of the dihedral group of order $4m$, a classical reflection group. Further $A_{12}$ acts on a 3-dimensional (non-PI) AS regular algebra $A$ with (non-PI) regular fixed ring. These examples, along with the examples of group algebras acting on skew-polynomial algebras, suggest that the algebra structure of $H$ and its relation to the group algebra of a classical reflection group may be related to conditions that guarantee that the Hopf algebra is a reflection Hopf algebra. We also have some examples of commutative Hopf algebras that are reflection Hopf algebras; in this case the algebra structure of $H$ is not informative.
While we have made Conjecture \ref{STCconjecture} on the properties of a group $G$ that make it a reflection group for $A$, we have made no conjectures on the properties of a general Hopf algebra $H$ that make it a reflection Hopf algebra for $A$. One can still take trace functions of elements in $H$, but one does not always have a nice set of elements for which the trace functions should be computed. Moreover, properties of the trace functions that were used in proving results for groups are not true for the elements of the Hopf algebra $H$. For example, in the group case we showed that if $Tr_A(g,t) = Tr_A(1_G, t)$ then $g = 1_G$ (\cite[Proposition 1.8]{KKZ1}). However, in the Hopf algebra case, one can add the difference of any two elements with the same trace functions without changing the trace function of an element, so we do not have the strong uniqueness of trace functions that we had for groups. Characterizing reflection Hopf algebras for AS regular algebras remains an interesting unsolved problem.
\begin{question} For a Hopf algebra $H$ acting on an AS regular algebra $A$, when is $H$ a reflection Hopf algebra for $A$? \end{question}
\section{Artin-Schelter Gorenstein subrings of invariants}
Artin-Schelter regular invariant subrings occur under only very special circumstances, but, as H. Bass has noted, Gorenstein rings are ubiquitous, and many of the interesting fixed subrings are Gorenstein rings. Twenty years after the Shephard-Todd-Chevalley Theorem was proved, Watanabe \cite{W1} showed that if $G$ is a finite subgroup of ${\rm SL}_n(k)$, then $k[x_1, \dots, x_n]^G$ is a Gorenstein ring, and, in \cite{W2} he showed that the converse is true if $G$ contains no (classical) reflections. In our setting, where $A$ is an AS regular algebra, a reasonable generalization of the condition that $A^G$ is a Gorenstein ring is that $A^G$ is an AS Gorenstein algebra. Next, one must generalize the notion of ``determinant equal to 1". This generalization was accomplished by P. J{\o}rgensen and J. Zhang \cite{JoZ} with their introduction of the notion of the homological determinant of a graded automorphism $g$ of $A$.
The homological determinant $\hdet$ is a group homomorphism $$\hdet: \quad \Aut(A) \rightarrow k^\times$$ that arises in local cohomology; in the case that $A=k[x_1, \dots, x_n]$ it is the determinant (or its inverse, depending upon how $G$ acts on $A$). The original definition of $\hdet$ is given in \cite[Section 2]{JoZ}, and, in Definition \ref{defhomdet} below, we will give the general definition in the context of Hopf algebra actions. Fortunately, in many circumstances $\hdet$ can be computed without using the definition. When $A$ is AS regular, the conditions of the following theorem are satisfied by \cite[Proposition 3.3]{JiZ} and \cite[Proposition 5.5]{JoZ}, and $\hdet g$ can be computed from the trace function of $g$, using the following result.
\begin{lemma} \label{xxlem1.4} \label{hdetbytrace} \cite[Lemma 2.6]{JoZ}. Let $A$ be noetherian and AS Gorenstein, and let $g\in \Aut(A)$. If $g$ is $k$-rational in the sense of \cite[Definition 1.3]{JoZ}, then the rational function ${ Tr}_A(g,t)$ has the form $${ Tr}_A(g,t) = (-1)^n (\hdet g)^{-1} t^{-\ell} + {\rm higher~terms}$$ when it is written as a Laurent series in $t^{-1}$. \end{lemma}
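For example (a computation not taken from \cite{JoZ}, but immediate from the lemma), consider the classical reflection $g = \mathrm{diag}(\lambda_n, 1)$ of Example \ref{classicalref} acting on $A = k_{-1}[x,y]$, so the dimension is $n = 2$ in the lemma. Rewriting the trace function as a Laurent series in $t^{-1}$ gives
$$Tr_A(g,t) = \frac{1}{(1-t)(1-\lambda_n t)} = \frac{1}{\lambda_n t^{2}} + {\rm higher~terms},$$
so $(-1)^2 (\hdet g)^{-1} = \lambda_n^{-1}$, i.e. $\hdet g = \lambda_n$, agreeing with the usual determinant of $g$.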
Our results have shown that the condition that all elements of the group have homological determinant equal to 1 plays the role in our noncommutative setting that the condition that the group is a subgroup of ${\rm SL}_n(k)$ plays in classical invariant theory. Using homological determinant to replace the usual determinant, J{\o}rgensen and Zhang proved the following generalization of Watanabe's Theorem. \begin{theorem}\label{watanabe}\cite[Theorem 3.3]{JoZ}. If $G$ is a finite group of graded automorphisms of an AS regular algebra $A$ with $\hdet(g) = 1$ for all $g \in G$ then $A^G$ is AS Gorenstein. \end{theorem}
In the classical case, the symmetric group $S_n$, acting as permutations, is a reflection group for $k[x_1, \dots, x_n]$, but we have already seen (Example \ref{transposition}) that this is not the case for $k_{-1}[x,y]$. However, next we note that if $A= k_{-1}[x_1, \dots, x_n]$ (where for each $i \neq j $ we have the relations $x_j x_i = -x_i x_j$), and if $S_n$ acts on $A$ as permutations, then all subgroups $G$ of $S_n$ have trivial homological determinant, and so produce AS Gorenstein invariant subrings. It follows that the fixed subring in Example \ref{transposition} is AS Gorenstein.
\begin{example} \cite[Theorem 5.1]{KKZ5}.
Let $g$ be a 2-cycle and $A = k_{-1}[x_1, \ldots, x_n]$; then $$Tr_{{A}}({g},t) = \frac{1}{(1+t^2)(1-t)^{n-2}} = (-1)^n \frac{1}{t^n} + \text{ higher terms, }$$ so $\hdet g = 1$, and hence by Theorem \ref{watanabe}, for {\it all} groups $G$ of $n \times n$ permutation matrices, ${A}^{G}$ is AS Gorenstein. This, of course, is not true for permutation actions on a commutative polynomial ring -- e.g. ${k[x_1, x_2, x_3, x_4]}^{\langle (1,2,3,4) \rangle}$ is not Gorenstein, while ${k_{-1}[x_1, x_2, x_3, x_4]}^{\langle (1,2,3,4) \rangle}$ is AS Gorenstein. \end{example} The invariants of $ k_{-1}[x_1, \dots, x_n]$ under permutation actions are studied in detail in \cite{KKZ5}, producing an interesting contrast to the classical case. As one example, these groups of permutations contain no reflections of $k_{-1}[x_1, \dots, x_n]$ \cite[Lemma 1.7(4)]{KKZ5}, and so are ``small groups", while the permutation representation of $S_n$ is a classical reflection group.
A theorem of R. Stanley \cite[Theorem 4.4]{Sta} states that the fixed subring $B= k[x_1, \dots,x_n]^G$ is Gorenstein if and only if the Hilbert series of $B$ satisfies the functional equation $H_B(t) = \pm t^{-m} H_B(t^{-1})$ for some integer $m$. J{\o}rgensen and Zhang extended that result to the more general setting of finite groups acting on AS regular algebras. \begin{theorem} \cite[Proposition 3.8]{KKZ2}. \label{stanley} Let $A$ be an AS regular algebra that satisfies a polynomial identity, and $G$ be a finite group of graded automorphisms of $A$. Then $B= A^G$ is AS Gorenstein if and only if the Hilbert series of $B$ satisfies the functional equation $H_B(t) = \pm t^{-m} H_B(t^{-1})$ for some integer $m$. \end{theorem} Theorem \ref{stanley} is also true under more general (but technical) conditions (see \cite{JoZ} for details).
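As a quick illustration of Theorem \ref{stanley} (a direct check, using the Hilbert series computed in Example \ref{toshowtraces}(3)), take $B = A^G = k[x^2+y^2, xy]$, so $H_B(t) = 1/(1-t^2)^2$. Then
$$H_B(t^{-1}) = \frac{1}{(1-t^{-2})^2} = \frac{t^4}{(t^2-1)^2} = t^4\, H_B(t),$$
so $H_B(t) = t^{-4} H_B(t^{-1})$, and the functional equation holds with the $+$ sign and $m = 4$, consistent with $B$ being AS Gorenstein (indeed AS regular).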
The homological (co)determinant and Theorems \ref{watanabe} and \ref{stanley} were extended to actions by semisimple Hopf algebras in \cite{KKZ2}. In \cite{CKWZ1} the homological (co)determinant is defined for any finite dimensional Hopf algebra. Let $A$ be AS Gorenstein of injective dimension $d$, and let $H$ be a finite dimensional Hopf algebra acting on $A$. Let $\mathfrak{m}$ denote the maximal graded ideal of $A$ consisting of all elements of positive degree, and let $H_{\mathfrak{m}}^d(A)$ be the $d$-th local cohomology of $A$ with respect to $\mathfrak{m}$. The lowest degree nonzero homogeneous component of $H_{\mathfrak{m}}^d(A)$ is 1-dimensional; let $\mathfrak{e}$ be a basis element. Then there is an algebra homomorphism $\eta: H \rightarrow k$ such that the right $H$-action on $H_{\mathfrak{m}}^d(A)^*$ satisfies $\mathfrak{e} \cdot h = \eta(h)\,\mathfrak{e}$ for all $h \in H$ (identifying $\mathfrak{e}$ with the corresponding dual basis element). \begin{definition} \cite[Definition 3.3]{KKZ2} and \cite[Definition 1.7]{CKWZ1}. \label{defhomdet} Retaining the notation above, the composition $\eta \circ S: H \rightarrow k$ is called the {\it homological determinant of the $H$-action on $A$}, and is denoted by $\hdet_H A$. We say that {\it $\hdet_H A$ is trivial} if $\hdet_H A = \epsilon$, the counit of $H$. \end{definition} Dually, if $K$ coacts on $A$ on the right, then $K$ coacts on $k\mathfrak{e}$ and $\rho(\mathfrak{e}) = \mathfrak{e} \otimes {\rm D}^{-1}$ for some grouplike element ${\rm D}$ in $K$. \begin{definition}\cite[Definition 6.2]{KKZ2} and \cite[Definition 1.7]{CKWZ1}. Retaining the notation above, the {\it homological codeterminant of the $K$-coaction on $A$} is defined to be hcodet$_K A = {\rm D}$. We say that {\it hcodet$_K(A)$ is trivial} if hcodet$_K A = 1_K$. \end{definition} The homological determinant $\hdet_H A$ is trivial if and only if the homological codeterminant hcodet$_{H^0} A$ is trivial (\cite[Remark 6.3]{KKZ2}).
Watanabe's Theorem was proved for semisimple Hopf actions on AS regular algebras in \cite[Theorem 3.6]{KKZ2}; it was also shown for all finite dimensional Hopf actions on AS regular algebras of dimension 2 with trivial homological determinant in \cite[Proposition 0.5]{CKWZ1}.
\begin{theorem}\label{hopfwat} \cite[Theorem 3.6]{KKZ2}. Let $H$ be a semisimple Hopf algebra acting on an AS regular algebra $A$ with trivial homological determinant. Then $A^H$ is AS Gorenstein. \end{theorem} A partial converse to Theorem \ref{hopfwat} is given in \cite[Theorem 4.10]{KKZ2}; in particular if $G$ is a finite group containing no reflections (i.e. a ``small group"), then $A^G$ is AS Gorenstein if and only if $\hdet_G(A)$ is trivial, recovering the classical result in \cite{W2}.
If $G$ is a nontrivial finite subgroup of ${\rm SL}_n(k)$ acting on $A = k[x_1, \dots, x_n]$, then $G$ contains no reflections (when $g$ is a classical reflection, $\det(g)$ is a root of unity $\neq 1$), so by the Shephard-Todd-Chevalley Theorem $A^G$ is not a polynomial ring. We obtain a similar result for the homological determinant.
\begin{theorem} \label{trivialnotregular} \cite[Theorem 2.3]{CKWZ1}. Let $H$ be a semisimple Hopf algebra, and $A$ be a noetherian AS regular algebra equipped with an $H$-algebra action. If $A^H \neq A$ and $\hdet_H(A)$ is trivial, then $A^H$ is not AS regular. \end{theorem}
Since $\hdet$ is a group homomorphism into $k^{\times}$, it is trivial on any group $G$ with $[G,G] = G$ (e.g. any nonabelian simple group), and it then follows from Theorem \ref{trivialnotregular} that $A^G$ is never AS regular for such $G$ (when $A^G \neq A$). Hence such groups are never reflection groups for any AS regular algebra.
We also obtain a version of Stanley's Theorem for semisimple Hopf actions.
\begin{theorem} \cite[Proposition 3.8]{KKZ2}. \label{hopfstanley} Let $A$ be an AS regular algebra that satisfies a polynomial identity, and $H$ be a semisimple Hopf algebra acting on $A$. Then $B= A^H$ is AS Gorenstein if and only if the Hilbert series of $B$ satisfies the functional equation $H_B(t) = \pm t^{-m} H_B(t^{-1})$ for some integer $m$. \end{theorem}
In 1884 Felix Klein (\cite{Kl1} \cite{Kl2} \cite{Su}) classified the finite subgroups of ${\rm SL}_2(k)$ and calculated the invariants $k[x,y]^G$, the ``Kleinian singularities", that are important in commutative algebra, algebraic geometry, and representation theory. These rings of invariants are hypersurfaces in $k[x,y,z]$, and the singularity is of type A, D, or E corresponding to the type of the McKay quiver of the irreducible representations of the group $G$. The paper \cite{CKWZ1} begins the analogous project for any AS regular algebra of dimension 2, finding all finite dimensional Hopf algebras $H$ that act on $A$, with our standing assumptions (\ref{standing}), and having trivial homological determinant; such a Hopf algebra is called a {\it quantum binary polyhedral group}. The AS regular algebras of dimension 2 generated in degree one are isomorphic to: $$k_J[u,v]:=k\langle u,v\rangle/ (vu-uv-u^2)\;\; {\text{ or}}$$ $$\qquad k_q[u,v]:=k\langle u,v\rangle/(vu-quv).$$ The groups that act on one of these algebras with trivial homological determinant are the cyclic groups, the symmetric group $S_2$, the classical binary polyhedral groups, as well as the dihedral groups (which classically are reflection groups). The additional semisimple Hopf algebras that occur are the dual of the group algebra of the dihedral group of order 8, the duals of various finite Hopf quotients of the coordinate Hopf algebra $\mathcal{O}_q({\rm SL}_2(k))$, Hopf algebras that have been studied by \cite{BN}, \cite{Ma2}, \cite{Mu}, and \cite{Ste}. In addition, there are actions by non-semisimple Hopf algebras: the dual of the generalized Taft Hopf algebras $T_{q,\alpha,n}^\circ$, and Hopf algebras whose duals are extensions of the duals of various group algebras by the duals of certain quantum groups. 
The table below (reproduced from \cite[Table 1]{CKWZ1}) gives the corresponding AS regular algebras $A$ of dimension 2 and the finite dimensional Hopf algebras $H$ acting on $A$ with trivial homological determinant.\\
\noindent \underline{Notation.} [$\tilde{\Gamma}$, $\Gamma$, $C_n$, $D_{2n}$] Let $\tilde{\Gamma}$ denote a finite subgroup of ${\rm SL}_2(k)$, $\Gamma$ denote a finite subgroup of ${\rm PSL}_2(k)$, $C_n$ denote a cyclic group of order $n$, and $D_{2n}$ denote a dihedral group of order $2n$. Let $\text{o}(q)$ denote the order of $q$, for $q \in k^{\times}$ a root of unity. We write $A= k(U)/I$, where $U=ku \oplus kv$, and $I$ is the two-sided ideal generated by the relation.\\
\centerline{The quantum binary polyhedral groups $H$ and the AS regular algebras $A$ they act upon:} \[
\begin{array}{|l|l|} \hline \text{AS regular algebra $A$ gldim 2} & \text{finite dimensional Hopf algebra(s) $H$ acting on $A$}\\ \hline \hline & \\ k[u,v] & k\tilde{\Gamma}\\ \hline &\\ k_{-1}[u,v] & kC_n~\text{for}~n\geq 2; \hspace{.2in} kS_2, \hspace{.2in} kD_{2n}; \\ & (kD_{2n})^{\circ}; \hspace*{.2in} \mathcal{D}(\tilde{\Gamma})^{\circ} \text{ for } \tilde{\Gamma} \text{ nonabelian}\\
\hline &\\ k_q[u,v], ~q \text{ root of 1, } \,q^2 \neq 1 & \\
&\\ {\text{ if $U$ non-simple}} & kC_n \text{~for~} n\geq 3; \hspace{.2in} (T_{q, \alpha, n})^{\circ};\\ \text{ if $U$ simple, $o(q)$ odd} & H \text{ with } 1 \to (k\tilde{\Gamma})^{\circ} \to H^{\circ} \to \mathfrak{u}_{q}(\mathfrak{sl}_2)^{\circ} \to 1;\\ \text{ if $U$ simple, $o(q)$ even, }\, q^4 \neq 1& H \text{ with } 1 \to (k\Gamma)^{\circ} \to H^{\circ} \to \mathfrak{u}_{2,q}(\mathfrak{sl}_2)^{\circ} \to 1;\\
\text{ if $U$ simple}, ~q^4 =1 & \begin{tabular}{l} \hspace{-.13in} $H$ \text{ with } $1 \to (k\Gamma)^{\circ} \to H^{\circ} \to \mathfrak{u}_{2,q}(\mathfrak{sl}_2)^{\circ} \to 1$ \\
\hspace{-.13in} $H$ \text{ with } $1 \to (k\Gamma)^{\circ} \to H^{\circ} \to \frac{\mathfrak{u}_{2,q}(\mathfrak{sl}_2)^{\circ}}{(e_{12}-e_{21} e_{11}^2)}\to 1$ \end{tabular} \\ &\\ \hline &\\ k_q[u,v], ~q \text{ not root 1} & kC_n, n \geq 2\\ \hline &\\ k_J[u,v] & kC_2 \\ \hline \end{array} \] ~\\ \centerline{Table 1 (\cite[Table 1]{CKWZ1})} ~\\
An interesting next question is to determine when a theorem of Auslander is true in the noncommutative setting. Recall that a group is called {\it small} if it contains no classical reflections; for example, subgroups of ${\rm SL}_2(k)$ are small. \begin{theorem}{\rm({\bf Auslander's Theorem})} \cite[Proposition 3.4]{Aus} {\rm and} \cite[Theorem 5.15]{LW}. Let $G$ be a small finite subgroup of ${\rm GL}_n(k)$ acting linearly on $A=k[x_1, \dots, x_n]$. Then the skew group ring $A\#G$ is naturally isomorphic as a graded algebra to the endomorphism ring $\Hom_{A^G}(A,A)$. \end{theorem} Some generalizations of Auslander's Theorem were proved by I. Mori and K. Ueyama for groups with trivial homological determinant. They show \cite[Theorem 3.7]{MU} that if $G$ is ``ample for $A$" in their sense, then $A\#G$ and $\Hom_{(A^G)^{op}}(A,A)$ are isomorphic as graded algebras, and they give a condition \cite[Corollary 3.11 (3)]{MU} that can be checked for the groups with trivial $\hdet$ acting on AS regular algebras of dimension 2. They relate Auslander's Theorem to Ueyama's notion of graded isolated singularity \cite{U}. In \cite{CKWZ2} Auslander's Theorem is proved when $A$ has dimension 2 and $H$ is a semisimple Hopf algebra acting on $A$ under hypotheses (\ref{standing}) with trivial $\hdet_H(A)$. It is conjectured that Auslander's Theorem holds for noetherian AS regular algebras $A$ in any dimension.
\begin{conjecture} If $A$ is an AS regular noetherian algebra and $H$ is a semisimple Hopf algebra acting on $A$ under hypotheses (\ref{standing}) with trivial homological determinant, then
$A\#H$ is naturally isomorphic to $\Hom_{(A^H)^{op}}(A,A)$ as graded algebras. \end{conjecture}
Auslander's theorem was used to relate finitely generated projective modules over the skew group ring $k[x,y]\#G$ and maximal Cohen-Macaulay modules over $k[x,y]^G$ when $G$ is a finite subgroup of ${\rm SL}_2(k)$. Furthermore, a theorem of Herzog \cite{H} states that the indecomposable maximal Cohen-Macaulay $k[x,y]^G$-modules are precisely the indecomposable $k[x,y]^G$ direct summands of $k[x,y]$. These and other results of the McKay correspondence are explored in \cite{CKWZ2} for $A$ a noetherian AS regular algebra of dimension 2 and $H$ a semisimple Hopf algebra acting on $A$ under hypotheses (\ref{standing}) with trivial homological determinant. We call a graded $A$-module $M$ an {\it initial} $A$-module if it is generated by $M_0$, and $M_i = 0$ for $i < 0$. Among the results of \cite{CKWZ2} is the following theorem. \begin{theorem} \cite{CKWZ2} Let $A$ be a noetherian AS regular algebra of dimension 2 and let $H$ be a semisimple Hopf algebra acting on $A$ under hypotheses (\ref{standing}) with trivial homological determinant. Then there is a bijective correspondence between the isomorphism classes of \begin{enumerate} \item indecomposable direct summands of $A$ as right $A^H$-modules \item indecomposable finitely generated, projective, initial left $\Hom_{(A^H)^{op}}(A,A)$-modules \item indecomposable finitely generated, projective, initial left $A\#H$-modules \item simple left $H$-modules \item indecomposable maximal Cohen-Macaulay $A^H$-modules, up to a degree shift. 
\end{enumerate} \end{theorem} When $A$ is AS regular of dimension 2 and $H$ is a semisimple Hopf algebra acting on $A$ with trivial homological determinant the invariant subalgebras $A^H$ (called ``Kleinian quantum singularities") are all of the form $C/\Omega C$ for $C$ a noetherian AS regular algebra of dimension 3 and $\Omega$ a normal regular element of $C$, and hence can be regarded as hypersurfaces in an AS regular algebra of dimension 3 (see \cite[Theorem 0.1]{KKZ6} and \cite{CKWZ2}), and the explicit singularity $\Omega$ is given for each case. In \cite{CKWZ2} it is shown further that the McKay quiver of $H$ is isomorphic to the Gabriel quiver of the $H$-action on $A$, and the quivers that occur are Euclidean diagrams of types $\widetilde{A}, \widetilde{D}, \widetilde{E}, \widetilde{DL}$, and $\widetilde{L}$.
To conclude this section, we note that it is interesting to compare the roles various groups and Hopf algebras play in the invariant theory of $k[x_1, \dots, x_n]$ and $k_{-1}[x_1, \dots, x_n]$. In Table 2 we give the classical reflection groups and finite subgroups of ${\rm SL}_n(k)$, and the analogous groups and Hopf algebras for $k_{-1}[x_1, \dots, x_n]$. Here we use the notation $H_8$ for the Kac-Paljutkin algebra, $A_{4n}$ and $B_{4n}$ as in \cite{Ma2}, and $A(\widetilde{\Gamma})$ and $B(\widetilde{\Gamma})$ as in \cite{BN}. We notice that some of the same groups play different roles in the two contexts. For example, the dihedral groups are classical reflection groups, but can act with trivial $\hdet$ on $k_{-1}[x_1, \dots, x_n]$. The binary dihedral groups are subgroups of ${\rm SL}_2(k)$ but reflection groups for $k_{-1}[x,y]$.\\
~\\
\begin{tabular}{|c|c|c|} \hline & $A= \mathbb{C}[x_1, \cdots, x_n]$ & $A=\mathbb{C}_{-1}[x_1, \cdots, x_n]$\\ \hline &&\\ Reflection Group for $A$ & &\\ &&\\$n= 2$ & $D_{2n}=G(n,n,2)$ & $ BD_{4n},$ \\
&& $H_8 =B_8$,\\ && $A_{4m}$ ($m$ odd), $ B_{4m}^\circ =B_{4m}$\\ &&\\ $n=3$ & & $A_{12}, S_4$ (rotations of cube)\\ &&\\ Any $n$ & $C_n$, $S_n$, $G(m,p,n)$ & $C_n$\\ &(34 Exceptional &\\ &for various $n$)&\\ &&\\ \hline &&\\ Special Linear for $A$ &&\\&&\\ $n=2 $ & $C_n, BD_{4n}, \widetilde{\mathcal{T}}, \widetilde{\mathcal{O}}, \widetilde{\mathcal{I}} $& $C_n$, $D_{2n}$, $S_2$\\ & &$A_{4m}^\circ, B_{4m}^\circ =B_{4m}, k D_{2n}^\circ$,\\ && $A(\widetilde{\mathcal{T}})^\circ$, $B(\widetilde{\mathcal{T}})^\circ$, $B(\widetilde{\mathcal{I}})^\circ$,\\ && $A(\widetilde{\mathcal{O}})^\circ$, $B(\widetilde{\mathcal{O}})^\circ$\\ &&\\ Any $n$ &Finite subgroups of ${\rm SL}_n(k)$& $S_n$ and all subgroups\\ &&\\ \hline \end{tabular}
\vspace*{.2in} \centerline{Table 2}
\section{Complete intersection subrings of invariants} Gorenstein commutative rings can have pathological properties, but a well-behaved class of Gorenstein commutative rings is the class of graded complete intersections, i.e. the rings of the form $k[x_1, \dots, x_n]/(f_1, \dots, f_m)$ where $f_1, \dots, f_m$ is a regular sequence of homogeneous elements in $k[x_1, \dots, x_n]$. When $A$ is a commutative polynomial ring over $\mathbb{C}$, the problem of determining which finite groups $G$ have the property that $A^G$ is a complete intersection was solved by N.L. Gordeev \cite{G2} (1986) and (independently) by H. Nakajima \cite{N2}, \cite{N3} (1984) (see the survey \cite{NW}). A key result in this classification is the theorem of Kac-Watanabe \cite{KW} and Gordeev \cite{G1} that provides a necessary condition: if the fixed subring $k[x_1,\cdots,x_n]^G$ (for any finite subgroup $G\subset GL_n(k)$) is a complete intersection, then $G$ is generated by bireflections (i.e., elements $g\in GL_n(k)$ such that $\operatorname{rank} (g-I)\leq 2$, equivalently all but two eigenvalues of $g$ are 1). However, the condition that $G$ is generated by bireflections is not sufficient for $k[x_1,\cdots,x_n]^G$ to be a complete intersection.
A first problem in generalizing these results to our setting is that there is no established notion of a complete intersection for noncommutative rings. In the commutative graded case a connected graded algebra $A$ is a complete intersection if one of the following four equivalent conditions holds \cite[Lemma 1.8]{KKZ4} (which references well-known results from \cite{BH}, \cite{FHT}, \cite{FT}, \cite{Gu}, \cite{Ta}).\\ \begin{enumerate} \item[(cci$^\prime$)] $A\cong k[x_1, \dots, x_d]/(\Omega_1, \cdots, \Omega_n)$, where $\{\Omega_1, \dots, \Omega_n\}$ is a regular sequence of homogeneous elements in $k[x_1, \dots, x_d]$ with $\deg x_i>0$. \item[(cci)] $A\cong C/(\Omega_1, \cdots, \Omega_n)$, where $C$ is a noetherian AS regular algebra and $\{\Omega_1, \dots, \Omega_n\}$ is a regular sequence of normalizing homogeneous elements in $C$. \item[(gci)] The $\Ext$-algebra $E(A):=\bigoplus_{n=0}^\infty \Ext^n_A(k,k)$ of $A$ has finite Gelfand-Kirillov dimension. \item[(nci)] The $\Ext$-algebra $E(A)$ is noetherian. \end{enumerate} ~\\
In \cite{KKZ4} we proposed calling a connected graded ring a {\it cci, gci, or nci} if the respective condition above holds for $A$; we called $A$ a {\it hypersurface} if it is a cci when $n=1$ (i.e. of the form $C/(\Omega)$, where $C$ is a noetherian AS regular algebra and $\Omega$ is a regular, normal element of $C$). In the noncommutative case, unfortunately, the conditions (cci), (gci) and (nci) are not all equivalent, nor does (gci) or (nci) force $A$ to be Gorenstein \cite[Example 6.3]{KKZ4}, making it unclear which property to use as the proper generalization of a commutative complete intersection. A direct generalization to the noncommutative case is condition (cci) which involves considering regular sequences in {\it any} AS regular algebra (in the commutative case the only AS regular algebras are the polynomial algebras), and several researchers have taken an approach to complete intersections that uses regular sequences. Though the condition (cci) seems to be a good definition of a noncommutative complete intersection, there are very few tools available to work with condition (cci), except for explicit construction and computation, and it is not easy to show condition (cci) fails, since one needs to consider regular sequences in {\it any} AS regular algebra.
One relation between these properties that holds in the noncommutative setting is given in the following theorem. \begin{theorem}\cite[Theorem 1.12(a)]{KKZ4}. \label{xxthm0.1} Let $A$ be a connected graded noncommutative algebra.
If $A$ satisfies (cci), then it satisfies (gci). \end{theorem} \cite[Example 6.3]{KKZ4} shows that even both (gci) and (nci) together do not imply (cci), and \cite[Example 6.2]{KKZ4} shows that (gci) does not imply (nci).
The Hilbert series of a commutative complete intersection is a quotient of cyclotomic polynomials; we call a noncommutative ring whose Hilbert series has this property {\it cyclotomic}. A commutative complete intersection is also a Gorenstein ring; we call $A$ {\it cyclotomic Gorenstein} if it is cyclotomic and AS Gorenstein \cite[Definition 1.9]{KKZ4}. \begin{theorem}\cite [Theorem 1.12(b, c)]{KKZ4}. \label{notcyclotomic} If $A$ satisfies (gci) or (nci), and if the Hilbert series of $A$ is a rational function $p(t)/q(t)$ for some coprime integral polynomials $p(t), q(t) \in \mathbb{Z}[t]$ with $p(0) = q(0) =1$, then $A$ is cyclotomic. \end{theorem} In \cite[Section 2]{KKZ4} we show that certain AS Gorenstein Veronese algebras are not cyclotomic, and hence by Theorem \ref{notcyclotomic} these algebras satisfy none of our conditions for a complete intersection.
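For orientation, the commutative case explains the terminology; the following display is a standard computation, included here only as a sketch. If $f_1, \dots, f_m$ is a regular sequence of homogeneous elements in $k[x_1, \dots, x_n]$ with $\deg f_i = d_i$, then

```latex
H_{k[x_1,\dots,x_n]/(f_1,\dots,f_m)}(t)
  \;=\; \frac{\prod_{i=1}^{m}\bigl(1-t^{d_i}\bigr)}{(1-t)^{n}},
\qquad
1-t^{d} \;=\; \prod_{e \mid d} \Phi_e(t),
```

where $\Phi_e$ denotes the $e$-th cyclotomic polynomial; hence the Hilbert series of a commutative complete intersection is a quotient of products of cyclotomic polynomials.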
In our noncommutative invariant theory context we have produced some examples of invariant subrings that are cci algebras. Classically the invariants $k[x_1, \dots, x_n]^{S_n}$ under the permutation representation form a polynomial ring, but, as we noted, this is not the case for $k_{-1}[x_1, \dots, x_n]^{S_n}$. Under the alternating subgroup $A_n$ of these permutation matrices, the invariants $k[x_1, \dots, x_n]^{A_n}$ form a hypersurface in an $(n+1)$-dimensional polynomial ring. In \cite{KKZ5} we show that the two invariant subrings of the skew polynomial rings, $k_{-1}[x_1, \dots, x_n]^{S_n}$ and $k_{-1}[x_1, \dots, x_n]^{A_n}$, are each a cci, and we provide generators for the subring of invariants in each case. We return to Example \ref{transposition}.
\begin{example} \label{transagain} \cite[Remark 2.6]{KKZ6}. As in Example \ref{transposition} let $S_2$ act on $A = k_{-1}[x,y]$ by interchanging $x$ and $y$. One set of generators for the fixed subring $A^{S_2}$ is $X:= x+y$ and $Y:=(x-y)(xy)$. These elements generate a down-up algebra $C$ (an AS regular algebra of dimension 3) with relations $$YX^2 = X^2Y \;\;\; \text{ and } \;\;\; Y^2X = XY^2,$$ and $A^{S_2} \cong C/\langle \Omega \rangle$, where $\Omega:= Y^2 - \frac{1}{4}X^2 (XY+YX)$, a central regular element of $C$, so that $A^{S_2}$ is a hypersurface in $C$. \end{example}
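One can verify the invariance of $X$ and $Y$ directly; writing $\sigma$ for the automorphism interchanging $x$ and $y$, and using the relation $yx=-xy$ in $A$, a short check gives

```latex
\sigma(X) = y + x = X,
\qquad
\sigma(Y) = (y-x)(yx) = -(y-x)(xy) = (x-y)(xy) = Y .
```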
To classify the groups that produce complete intersections, one would like to begin by proving the Kac-Watanabe-Gordeev Theorem: that if $A^G$ is a complete intersection (of some kind) then $G$ must be generated by bireflections. Toward this end we extend the notion of bireflection, as we extended the notion of a classical reflection, using trace functions. \begin{definition}\cite[Definition 3.7]{KKZ4}.
Let $A$ be a noetherian connected graded AS regular algebra of GK-dimension $n$. We call $g\in \Aut(A)$ a {\it bireflection} if its trace function has the form: $$Tr_A(g,t) = \frac{1}{(1-t)^{n-2} q(t)}$$ where $q(t)$ is an integral polynomial with $q(1) \neq 0$ (i.e. $1$ is a pole of order $n-2$). We call it a {\it classical bireflection} if all but two of its eigenvalues are 1. \end{definition} The following example suggests that this notion of bireflection based on the trace function may be useful; in this example the fixed subring is a commutative complete intersection, so it satisfies all the equivalent conditions (cci), (nci), and (gci).
\begin{example} \cite[Example 6.6]{KKZ4}. ${A=k_{-1}[x,y,z]}$ is AS regular of dimension 3, and the automorphism \[{g = \matthree{0 & -1 & 0\\1& 0 &0\\0 & 0 & -1}}\] acts on it. The eigenvalues of ${g}$ are $-1, i ,-i$ so ${g}$ is not a classical bireflection. However, $Tr_{A}({g},t) = 1/((1+t)^2(1-t)) = -1/t^3 +$ higher degree terms and ${g}$ is a bireflection with hdet ${g} =1$. The fixed subring is \[{A}^{\langle g \rangle} \cong \frac{k[X,Y,Z,W]}{\langle W^2-(X^2+4Y^2)Z\rangle},\] a commutative complete intersection. \end{example}
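The eigenvalue claim in the example above can be read off from the characteristic polynomial, a one-line computation:

```latex
\det(tI-g) \;=\; (t^2+1)(t+1)
  \quad\Longrightarrow\quad
  \text{eigenvalues } i,\ -i,\ -1 .
```

Thus no eigenvalue of $g$ equals $1$, so $g$ is certainly not a classical bireflection, while the pole of $Tr_A(g,t)=1/((1+t)^2(1-t))$ at $t=1$ has order exactly $1 = n-2$ for $n=3$, which is the trace-function condition.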
In the context of permutation actions on $A=k_{-1}[x_1, \dots, x_n]$ we have proved the converse of the Kac-Watanabe-Gordeev Theorem (a result which is not true in the case $A=k[x_1, \dots, x_n]$).
\begin{theorem}\cite[Theorem 5.4]{KKZ5}. \label{permbi} If $G$ is a subgroup of $S_n$, represented as permutations of $\{x_1, \dots, x_n\}$, and if $G$ is generated by bireflections (defined in terms of the trace functions), then $k_{-1}[x_1, \dots, x_n]^G$ is a cci. \end{theorem}
We conjecture that the Kac-Watanabe-Gordeev Theorem is also true in this context, and we have verified it for $n \leq 4$.
In dimension 2, by \cite[Theorem 0.1]{KKZ6} all AS Gorenstein invariant subrings under the actions of finite groups are hypersurfaces in AS regular algebras of dimension 3, and all automorphisms of finite order are trivially bireflections. Hence the first interesting case of the Kac-Watanabe-Gordeev Theorem is in dimension 3, and it is natural to investigate generalizations of this theorem for down-up algebras.
Down-up algebras were defined by Benkart and Roby \cite{BR} in 1998 as a tool to study the structure of certain posets. Noetherian graded down-up algebras $A(\alpha, \beta)$ form a class of AS regular algebras of global dimension 3 that are generated in degree 1 by two elements $x$ and $y$, with two cubic relations: $$ y^2x = \alpha yxy + \beta xy^2 \text{ and } yx^2 = \alpha xyx + \beta x^2y$$ for scalars $\alpha, \beta \in k$ with $\beta \neq 0$. These algebras are not Koszul, but they are (3)-Koszul. Their graded automorphism groups, which depend upon the parameters $\alpha$ and $\beta$, were computed in \cite{KK}, and are sufficiently rich to provide many non-trivial examples (e.g. in two cases the automorphism group is the entire group ${\rm GL}_2(k)$). However, it follows from \cite[Proposition 6.4]{KKZ1} that these algebras have no reflections, so all finite subgroups are ``small", and hence from \cite[Corollary 4.11]{KKZ2} $A^G$ is AS Gorenstein if and only if $\hdet $ is trivial. Noetherian graded down-up algebras satisfy the following version of the Kac-Watanabe-Gordeev Theorem.
\begin{theorem}\cite[Theorem 0.3]{KKZ6}.
Let $A$ be a graded noetherian down-up algebra and $G$ be a finite subgroup of $\Aut(A)$. Then the following are equivalent. \begin{enumerate} \item[(C1)] $A^G$ is a gci. \item[(C2)] $A^G$ is cyclotomic Gorenstein and $G$ is generated by bireflections. \item[(C3)] $A^G$ is cyclotomic Gorenstein. \end{enumerate} \end{theorem}
In many cases, for $A$ a noetherian graded down-up algebra, the fixed subrings $A^G$ are shown to be cci algebras, and it is an open question whether that is always the case.
It would be interesting to study the relation of these conditions for other classes of 3-dimensional AS regular algebras, and we have work in progress on actions of groups with trivial homological determinant acting on the generic 3-dimensional Sklyanin algebra.
\section{Related research directions} In this section we briefly sketch some related directions of research, many of which contain open questions.\\ ~\\ \noindent A. {\bf Degree bounds.} When computing invariant subrings, it is very useful to have an upper bound on the degrees of the algebra generators of the fixed subring. In 1916 Emmy Noether \cite{No}
proved that $|G|$, the order of the group $G$, is an upper bound on the degrees of the algebra generators of $k[x_1, \dots, x_n]^G$, for any finite group $G$, when $k$ is a field of characteristic zero. The Noether upper bound does not always hold in characteristic $p$ (see e.g. \cite[Example 3.5.7 (a) p. 94]{DK}); the survey paper \cite{Ne} is a good introduction to the problem of finding upper bounds on the degrees of the algebra generators of $k[x_1, \dots, x_n]^G$. We have seen (Example \ref{transposition}) that the Noether upper bound does not always hold in our noncommutative setting, for in that example the symmetric group $S_2$ has order 2 and the fixed subring requires a degree 3 generator. In 2011
P. Symonds \cite{Sy} proved the upper bound $n(|G|-1)$ (if $n >1$ and $|G| > 1$) on the degrees of the generators of $k[x_1, \dots, x_n]^G$, when $k$ is a field of characteristic $p$; letting $n$ be the number of generators of $A$, this bound is also too small for the degrees of the generators in Example \ref{transposition}.
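To make the failure of the classical bounds concrete for Example \ref{transposition} (where $n=2$, $G=S_2$, and the fixed subring of $k_{-1}[x,y]$ requires the degree $3$ generator $(x-y)(xy)$), a quick arithmetic check gives

```latex
\underbrace{|G|}_{\text{Noether bound}} = 2 \;<\; 3,
\qquad
\underbrace{n\,(|G|-1)}_{\text{Symonds bound}} = 2\cdot(2-1) = 2 \;<\; 3 .
```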
In the case of permutation actions on $k[x_1, \dots, x_n]$ there is a smaller upper bound, $\max\{n, {n\choose 2}\}$, on the degrees of the generators of the fixed subring under groups of permutations; this upper bound was proved by M. G{\"o}bel in 1995 \cite{Go}, and is true in any characteristic. In \cite[Theorem 2.5]{KKZ5} we prove the bound ${n \choose 2}+\lfloor \frac{n}{2}\rfloor (\lfloor\frac{n}{2}\rfloor+1)$ (which is roughly $3n^2/4$) on the degrees of the generators of $k_{-1}[x_1, \dots, x_n]^G$ for $G$ a group of permutations of $\{x_1, \dots, x_n\}$. This upper bound follows from a more general upper bound that we state below, a bound that holds for semisimple Hopf actions on quantum polynomial algebras under certain technical conditions; this upper bound can be viewed as a generalization of Broer's Bound (see \cite[Proposition 3.8.5]{DK}) in the classical case. In the lemma that follows the field $k$ need not have characteristic zero. \begin{lemma}[\bf Broer's Bound] \cite[Lemma 2.2]{KKZ5}. \label{zzlem2.2} Let $A$ be a quantum polynomial algebra of dimension $n$ and $C$ an iterated Ore extension $k[f_1][f_2;\tau_2,\delta_2]\cdots [f_n;\tau_n,\delta_n]$. Assume that \begin{enumerate} \item $B=A^{H}$ where $H$ is a semisimple Hopf algebra acting on $A$, \item $C\subset B\subset A$ and $A_C$ is finitely generated, and \item $\deg f_{i}>1$ for at least two distinct $i$'s. \end{enumerate}
Then $d_{A^H}$, the maximal degree of the algebra generators of $A^H$, satisfies the inequality $$d_{A^H}\leq \ell_C-\ell_A=\sum_{i=1}^n \deg f_i -n,$$ where $\ell_A$ and $\ell_C$ are the AS indices of $A$ and $C$ respectively. \end{lemma} It would be useful to have further upper bounds on the degrees of the generators of the subring of invariants.\\ ~\\ B. {\bf Actions on other algebras.} In the work described in this survey thus far we have assumed that $A$ is a graded algebra, and all actions preserve the grading on $A$. There is recent work on actions on filtered algebras, such as the Weyl algebras. Basic properties of this approach were established in \cite{CWWZ}, where it was assumed that the actions preserve the filtration on $A$.
More generally one can consider automorphisms or Hopf actions that may not preserve the filtration on $A$. Etingof and Walton began a program to show that in rather general circumstances a Hopf action must factor through a group action, beginning with \cite[Theorem 1.3]{EW1} that shows that semisimple Hopf actions on commutative domains must factor through group actions. In \cite{EW2} actions of finite dimensional Hopf algebras (that are not necessarily semisimple) on commutative domains, particularly when $H$ is pointed of finite Cartan type, are studied; in this setting there are nontrivial Hopf actions by Taft algebras, Frobenius-Lusztig kernels $u_q({\mathfrak{sl}}_2)$, and Drinfeld twists of some other small quantum groups. In \cite[Theorem 4.1]{CEW} Cuadra, Etingof and Walton show that if a semisimple Hopf algebra $H$ acts inner faithfully on a Weyl algebra $A_n(k)$ for $k$ an algebraically closed field of characteristic zero, then $H$ is cocommutative; in this setting they show further \cite[Theorem 4.2]{CEW} that if $H$ is not necessarily semisimple, but gives rise to a Hopf-Galois extension, then $H$ must be cocommutative. All of these results have no assumptions regarding preserving a grading or filtration.
Relaxing the noetherian assumption of \cite{CKWZ1}, universal quantum linear group coactions on non-noetherian AS regular algebras of dimension 2 are considered in \cite{WW}. In another direction, Hopf algebras (including Taft algebras, doubles of Taft algebras, and $u_q({\mathfrak{sl}}_2)$), that act on certain path algebras, preserving the path length filtration, are studied in \cite{KiW}.\\ ~\\ C. {\bf Nakayama automorphism.} Considering further generalizations of the algebra $A$ on which the group or Hopf algebra acts, let $A$ be a (not necessarily graded) algebra over $k$, and let $A^e = A \otimes A^{op}$ denote the enveloping algebra of $A$.
\begin{definition}\cite[Definition 0.1]{RRZ} and \cite[Section 3.2]{Gi}.
\begin{enumerate}
\item $A$ is called {\it skew Calabi-Yau} (or {\it skew CY} for short) if \begin{enumerate} \item $A$ has a projective resolution of finite length in the category $A^e$-Mod, with every term in the projective resolution finitely generated, and \item there is an integer $d$ and an automorphism $\mu$ of $A$ such that $$\Ext^i_{A^e}(A,A^e) \cong ~ ^1 A^\mu \text{ for } i = d, \text{ and } \Ext^i_{A^e}(A,A^e) \cong 0 \text{ if } i \neq d$$ as $A$-bimodules, where $1$ denotes the identity map on $A$. The map $\mu$ is usually denoted $\mu_A$ and is called the {\it Nakayama automorphism of $A$}. \end{enumerate} \item \cite[Definition 3.2.3]{Gi} $A$ is called {\it Calabi-Yau} (or {\it CY} for short) if $A$ is skew Calabi-Yau and $\mu_A$ is an inner automorphism of $A$. \end{enumerate} \end{definition} By \cite[Lemma 1.2]{RRZ}
if $A$ is a connected graded algebra then $A$ is an AS regular algebra if and only if $A$ is skew CY.
A homological identity is given in \cite[Theorem 0.1]{CWZ} that has been used to show that the Nakayama automorphism plays a role in determining the class of Hopf algebras that can act on a given AS regular algebra. These techniques were used to show that if a finite dimensional Hopf algebra $H$ acts on $A=k_p[x_1, \dots, x_n]$ (under Hypotheses \ref{standing}) and $p$ is not a root of unity then $H$ is a group algebra (\cite[Theorem 0.4]{CWZ}); further, if it acts on the 3 or 4-dimensional Sklyanin algebras with trivial $\hdet$ then $H$ is semisimple (\cite[Theorem 0.6]{CWZ}). In \cite{LMZ} the Nakayama automorphism is used to characterize the kinds of Hopf algebras that can act on various families of 3-dimensional AS regular algebras; in several generic cases it is shown that the Hopf algebra must be a commutative group algebra or (when the $\hdet$ is trivial) the dual of a group algebra.
Further investigation of the Nakayama automorphism is likely to be useful in the study of Hopf actions, including actions on algebras that are not graded.\\ ~\\ D. {\bf Computing the full automorphism group.} The first step in proving properties of the invariant subring of an algebra $A$ under {\bf any} finite group of automorphisms of $A$ usually is to determine the complete automorphism group of $A$. Such computations are notoriously difficult for commutative polynomial rings. Noncommutative algebras are more rigid than commutative algebras, so sometimes this task is more tractable for noncommutative algebras, even for some PI algebras.
A sequence of recent papers provide some new techniques for computing the full automorphism group of some algebras, including some filtered algebras whose associated graded algebras are AS regular algebras.
In \cite{CPWZ1} an invariant, the discriminant of the algebra over a central subring, is defined and used to compute the full automorphism group of some noncommutative algebras, including, for $n$ even, the filtered algebra (a ``quantum Weyl algebra") $W_n = k \langle x_1, \dots, x_n\rangle$ with relations $x_ix_j + x_j x_i = 1$ for $i > j$, and its associated graded algebra $k_{-1}[x_1, \dots, x_n]$. To cite another example, the discriminant is used to show that the full automorphism group of $B=k_{-1}[x,y]^{S_2}$ (the hypersurface of Example \ref{transposition}) is $k^\times \rtimes S_2$ \cite[Example 5.8]{CPWZ1}. In \cite{CPWZ2}, automorphism groups of tensor products of quantum Weyl algebras and certain skew polynomial rings $k_{q_{i,j}}[x_1, \dots,x_n]$ are computed. In \cite{CPWZ3} it is shown that, when $n$ is even and $n \geq 4$, the fixed subring of $W_n$ under any group of automorphisms of $W_n$ is a filtered AS Gorenstein algebra, but for $n \geq 3$ and odd the full automorphism group of $W_n$ contains a free subgroup on two (and hence countably many) generators. In \cite{CYZ} further results on computing the discriminant are proved, and some applications to Zariski cancellation problems and isomorphism questions of algebras are given (these two areas of application will be explored further in \cite{BZ} and \cite{CPWZ4}). Further results on the discriminant and its applications remain to be explored.
This new information about the full automorphism group of many families of noncommutative algebras suggests many interesting open questions about the structure of the invariant subrings. The questions considered in this survey can be investigated for actions of ANY finite group of (not necessarily graded) automorphisms of $A$, for larger classes of algebras than AS regular algebras.
\subsection*{Acknowledgments} The author wishes to thank Chelsea Walton and James Zhang, as well as the referee, for making helpful suggestions on this paper. Ellen Kirkman was partially supported by the Simons Foundation grant no. 208314.
\end{document}
\begin{document}
\baselineskip16pt \begin{abstract} The aim of this article is to prove an \lq\lq almost\rq\rq \, global existence result for some semilinear wave equations in the plane outside a bounded convex obstacle with the Neumann boundary condition. \end{abstract}
\maketitle
\thispagestyle{empty}
\section{Introduction} Let ${\mathcal O}$ be an open bounded convex domain with smooth boundary in ${\mathbb R}^2$ and put $\Omega:={\mathbb R}^2 \setminus \overline{\mathcal O}$. Let $\partial_\nu$ denote the outer normal derivative on $\partial \Omega$.
We consider the mixed problem for semilinear wave equations in $\Omega$ with the Neumann boundary condition: \begin{equation}\label{eq.PMN} \begin{array}{ll}
(\partial_t^2-\Delta) u =G(\partial_t u, \nabla_x u), & (t,x) \in (0,\infty)\times \Omega,\\
\partial_\nu u(t,x)=0, &(t,x) \in (0,\infty)\times \partial\Omega,\\
u(0,x)=\phi(x), &x\in \Omega,\\
\partial_t u(0,x)=\psi(x), & x\in \Omega,\\ \end{array} \end{equation} where $\phi$ and $\psi$ are ${\mathcal C}^\infty$-functions compactly supported in $\overline\Omega$, and $G: \mathbb R^3\to \mathbb R$ is a nonlinear function. We will study the case of the cubic nonlinearity with small initial data and obtain an estimate from below for the lifespan of the solution in terms of the size of the initial data. Here by the expression \lq\lq small initial data'' we mean that there exist $m\in \mathbb N$, $s\in \mathbb R$ and a small number $\varepsilon>0$ such that \begin{equation*}
\|\phi\|_{H^{m+1,s}(\Omega)}+\|\psi\|_{H^{m,s}(\Omega)}\le \varepsilon, \end{equation*} where the weighted Sobolev space $H^{m,s}(\Omega)$ is endowed with the norm \begin{equation}\label{eq.datanorm}
\|\varphi\|_{H^{m,s}(\Omega)}^2:=\sum_{|\alpha|\le m}\int_{\Omega}
(1+|x|^2)^s
|\partial_x^\alpha \varphi(x)|^2 d x. \end{equation}
A large amount of work has been devoted to the study of the mixed problem for nonlinear wave equations in an exterior domain $\Omega \subset \mathbb R^n$ for $n\ge 3$, mostly with the Dirichlet boundary condition. To our knowledge very few results deal with the global existence or the lifespan estimate for the exterior mixed problems of nonlinear wave equations in $2$D; in \cite{SSW} the global existence for the case of the Dirichlet boundary condition and the nonlinear terms depending only on $u$ is considered; in \cite{KP} one of the authors obtained an almost global existence result for small initial data under the assumptions that $|G(\partial u)|\simeq (\partial u)^3$, the obstacle is star-shaped and the boundary condition is of the Dirichlet type (see Remark~\ref{Rem1.4} below for the detail).
Here we will treat the problem with the Neumann boundary condition in $2$D and obtain an analogous result to \cite{KP}. However, because we have a weaker decay property for the solution to the Neumann exterior problem of linear wave equations in $2$D (see Secchi and Shibata \cite{SeSh03}), we will obtain a slightly worse lifespan estimate than in the Dirichlet case.
For simplicity, we assume that the nonlinear function $G$ in \eqref{eq.PMN} is a homogeneous polynomial of cubic order. Equivalently, writing $\partial u=(\partial_tu, \nabla_x u)$, this means that \begin{equation}\label{eq.semiG} G(\partial u)=\sum_{0\le \alpha\le \beta\le \gamma\le 2} g_{\alpha,\beta,\gamma}(\partial_\alpha u)(\partial_\beta u)(\partial_\gamma u) \end{equation} with $g_{\alpha,\beta,\gamma}\in \mathbb R$ and $(\partial_0,\partial_1,\partial_2):=(\partial_t, \partial_{x_1},\partial_{x_2})$.
As usual, to consider smooth solutions to the mixed problem, we need some compatibility conditions (see \cite{KaKu08}). Note that, for a nonnegative integer $k$ and a smooth function $u=u(t,x)$ on $[0,T)\times \Omega$, we have \begin{equation} \label{CC0} \partial_t^k\left(G(\partial u)\right)=G^{(k)}[u, \partial_t u, \ldots, \partial_t^{k+1}u], \end{equation} where for $\mathcal C^1$ functions $(p_0, p_1, \ldots, p_{k+1})$ we put \begin{align*} G^{(k)}[p_0, p_1, \ldots, p_{k+1}]=&\sum_{k_1+k_2+k_3=k} g_{0, 0, 0} p_{k_1+1}p_{k_2+1}p_{k_3+1}+ \sum_{k_1+k_2+k_3=k} \sum_{\gamma=1}^2 g_{0,0,\gamma}p_{k_1+1}p_{k_2+1}(\partial_\gamma p_{k_3})\\ &{}+\sum_{k_1+k_2+k_3=k} \sum_{1\le \beta\le \gamma\le 2} g_{0,\beta,\gamma} p_{k_1+1}(\partial_\beta p_{k_2})(\partial_\gamma p_{k_3})\\ &{}+\sum_{k_1+k_2+k_3=k} \sum_{1\le \alpha\le \beta\le \gamma\le 2} g_{\alpha,\beta,\gamma} (\partial_\alpha p_{k_1})(\partial_\beta p_{k_2})(\partial_\gamma p_{k_3}). \end{align*} \begin{definition}\label{CCN} To the mixed problem \eqref{eq.PMN} we can associate the recurrence sequence $\{v_j\}_{j\in\mathbb N^*}$ with $v_j:\overline{\Omega}\to \mathbb R$ such that $$ \begin{array}{l}
v_0= \phi,\\
v_1= \psi,\\
v_j=\Delta v_{j-2}+G^{(j-2)}[v_0, v_1, \ldots, v_{j-1}], \quad j\ge 2, \end{array} $$ where $\mathbb N^*$ denotes the set of nonnegative integers and $G^{(k)}$ is defined as above {\rm(}cf.~\eqref{CC0}{\rm)}. We say that $(\phi, \psi, G)$ satisfies the compatibility condition of infinite order in $\Omega$ for \eqref{eq.PMN} if $\phi,\psi\in {\mathcal C}^\infty(\overline{\Omega})$, and one has $$ \partial_\nu v_j(x)=0, \quad x\in \partial \Omega $$ for all $j\in \mathbb N^*$. \end{definition}
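Unwinding the recurrence for small $j$ may clarify the definition; by the formula for $G^{(0)}$ above (a direct computation, included as a sketch),

```latex
v_2 \;=\; \Delta v_0 + G^{(0)}[v_0, v_1]
    \;=\; \Delta\phi + G(\psi, \nabla_x\phi),
```

so the first three conditions in Definition \ref{CCN} read $\partial_\nu\phi = \partial_\nu\psi = \partial_\nu\bigl(\Delta\phi + G(\psi,\nabla_x\phi)\bigr) = 0$ on $\partial\Omega$.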
Our aim is to prove the following result.
\begin{theorem}\label{thm.mainsemi} Let ${\mathcal O}$ be a convex obstacle. Consider the semilinear mixed problem \eqref{eq.PMN} with given compactly supported initial data $(\phi,\psi)\in \mathcal C^\infty(\overline{\Omega})\times {\mathcal C}^\infty(\overline{\Omega}) $ and a given nonlinear term $G(\partial u)$ which is a homogeneous polynomial of cubic order as in \eqref{eq.semiG}. Assume that $(\phi, \psi, G)$ satisfies the compatibility condition
of infinite order in $\Omega$ for \eqref{eq.PMN}.
Under these assumptions, there exist $\varepsilon_0>0$, $m\in \mathbb N$, $s\in \mathbb R$ such that, if $\varepsilon \in (0,\varepsilon_0]$ and \begin{equation}\label{eq.smalldata.semi}
\|\phi\|_{H^{m+1,s}(\Omega)}+\|\psi\|_{H^{m,s}(\Omega)} \le \varepsilon, \end{equation} then the mixed problem \eqref{eq.PMN} admits a unique solution $u \in {\mathcal C}^\infty([0,T_\varepsilon)\times \Omega)$ with \begin{equation}\label{eq.lifespanT1.semi} T_\varepsilon \ge \exp(C \varepsilon^{-1}), \end{equation} where $C>0$ is a suitable constant which is uniform with respect to $\varepsilon\in (0,\varepsilon_0]$. \end{theorem}
\begin{rem} \normalfont The only point where we require that the obstacle ${\mathcal O}$ is convex is to gain the local energy decay (see Lemma~\ref{LocalEnergyDecay} below). In general one
can treat the obstacles for which Lemma~\ref{LocalEnergyDecay} holds. Unfortunately, for the Neumann problem in $2$D, to our knowledge it is not known whether there exist non-convex obstacles satisfying such a local energy decay. \end{rem}
\begin{rem}
\normalfont One can ask if it is possible to gain a global existence result maintaining our assumption on the growth of $G$. In general the answer to this question is negative, since blow-up in finite time occurs for $G=(\partial_t u)^3$ when $n=2$. Indeed, it was proved in \cite{God93} that for any $R>0$ we can find initial data such that the blow-up for the corresponding Cauchy problem occurs in the region $|x|>t+R$. This result shows the blow-up for the exterior problem with any boundary condition if we choose sufficiently large $R$, because the solution in $|x|>t+R$ is not affected by the obstacle and the boundary condition, thanks to the finite propagation property (see \cite{KaKu12} for the corresponding discussion in $3$D).
In order to look for global solutions one could investigate the exterior problem with suitable nonlinearity satisfying the so-called \it null condition. \rm \end{rem}
\begin{rem}\label{Rem1.4} \normalfont If we consider the Cauchy problem in $\mathbb R^2$, or the Dirichlet problem in a domain exterior to a star-shaped obstacle in $2$D, an analogous result to Theorem~$\ref{thm.mainsemi}$ holds with \begin{equation} \label{sharplife} T_\varepsilon \ge \exp (C\varepsilon^{-2}), \end{equation} and this lifespan estimate is known to be sharp (see \cite{God93} for the Cauchy problem and \cite{KP} for the Dirichlet problem). The loss of one logarithmic factor in the decay estimates causes this difference between the lifespan estimates \eqref{eq.lifespanT1.semi} and \eqref{sharplife} (see Theorem~\ref{main} and Remark~\ref{Rem83} below). It is an interesting problem whether our lower bound \eqref{eq.lifespanT1.semi} is sharp or not for the Neumann problem. \end{rem}
\section{Preliminaries} In this section we introduce some notation which will be used throughout this paper and some basic lemmas for the proof of Theorem \ref{thm.mainsemi}.
Throughout the paper we shall assume $0\in {\mathcal O}$ so that we have
$|x|\ge c_0$ for $x\in \Omega$ for some positive constant $c_0$. We shall also assume that $\overline{\mathcal O}\subset B_1$, where
$B_r$ stands for an open ball with radius $r$ centered at the origin of ${\mathbb R}^2$. Thus a function $v=v(x)$ on $\Omega$ vanishing for $|x|\le 1$ can be naturally regarded as a function on $\mathbb R^2$.
\subsection{Notation}
Let us start with some standard notation. \begin{itemize}
\item We put $\langle y\rangle :=\sqrt{1+|y|^2}$ for $y\in \mathbb R^d$ with $d\in \mathbb N$. \item Let $A=A(y)$ and $B=B(y)$ be two positive functions of some variable $y$, such as $y=(t,x)$ or $y=x$, on suitable domains. We write $A\lesssim B$ if there exists a positive constant $C$ such that $A(y)\le C B(y)$ for all $y$ in the intersection of the domains of $A$ and $B$.
\item The $L^2(\Omega)$ norm is denoted by $\|\cdot\|_{L^2_{\Omega}}$, while the norm $\| \cdot \|_{L^2}$ without any other index stands for $\|\cdot \|_{L^2(\mathbb R^2)}$. Similar notation will be used for the $L^\infty$ norms. \item For a time-space depending function $u$ satisfying
$u(t,\cdot)\in X$ for $0\le t< T$ with a Banach space $X$, we put $\|u\|_{L^\infty_TX}:=\sup_{0\le t< T}\|u(t,\cdot)\|_X$. For brevity, we sometimes use the expression
$\|h(s,y)\|_{L^\infty_tL^\infty_\Omega}$ with dummy variables $(s,y)$ for a function $h$ on $[0,t)\times \Omega$, which means
$\sup_{0\le s<t}\|h(s, \cdot)\|_{L^\infty_\Omega}$. \item For $m\in \mathbb N$ and $s\in \mathbb R$, by $H^{m,s}(\Omega)$ we denote the weighted Sobolev space with norm defined by \eqref{eq.datanorm}. Moreover $H^m(\Omega)$ and $H^m(\mathbb R^2)$ are the standard Sobolev spaces. \item We denote by $\mathcal C_0^\infty(\overline\Omega)$ the set of smooth functions defined on $\overline{\Omega}$ which vanish outside $B_R$ for some $R>1$. \end{itemize}
Let $\nu\in \mathbb R$. We put \begin{equation*}
w_\nu(t,x)=\langle x \rangle^{-1/2} \langle t-|x|\rangle^{-\nu}+\langle t+|x|\rangle^{-1/2}\langle t-|x|\rangle^{-1/2}. \end{equation*} This weight function $w_\nu$ will be used repeatedly in the {\it a priori} estimates of the solution $u$ to \eqref{eq.PMN}. We shall often use the following inequality \begin{equation}\label{eq.ome}
w_\nu(t,x)\lesssim \langle t+|x|\rangle^{-1/2}(\min\{\langle x\rangle, \langle t-|x| \rangle\})^{-1/2}, \quad \nu \ge 1/2. \end{equation}
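For the reader's convenience, we indicate a short verification of \eqref{eq.ome}. Since $|x|+|t-|x||\ge \max\{t,|x|\}$, we have $\langle x\rangle\langle t-|x|\rangle=\max\{\langle x\rangle, \langle t-|x|\rangle\}\min\{\langle x\rangle, \langle t-|x|\rangle\}\gtrsim \langle t+|x|\rangle \min\{\langle x\rangle, \langle t-|x|\rangle\}$, and hence, for $\nu\ge 1/2$,
\begin{equation*}
\langle x\rangle^{-1/2}\langle t-|x|\rangle^{-\nu}\le \left(\langle x\rangle\langle t-|x|\rangle\right)^{-1/2}\lesssim \langle t+|x|\rangle^{-1/2}\left(\min\{\langle x\rangle, \langle t-|x|\rangle\}\right)^{-1/2},
\end{equation*}
while the second term in the definition of $w_\nu$ is bounded by the right-hand side of \eqref{eq.ome} directly, because $\langle t-|x|\rangle\ge \min\{\langle x\rangle, \langle t-|x|\rangle\}$.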
For $\nu$, $\kappa>0$ we put $$ W_{\nu,\kappa}(t,x)=
\langle t+|x|\rangle^\nu \left (\min\{\langle x\rangle, \langle t-|x|\rangle\}\right)^\kappa. $$
Finally, for $a\ge 1$ we set $$ \Omega_a=\Omega \cap B_a. $$ Since $\overline{\mathcal O}\subset B_1$, we see that $\Omega_a\not= \emptyset $ for any $a\ge 1$.
\subsection{Vector fields associated with the wave operator} We introduce the vector fields\,: $$ \Gamma_0:=\partial_0=\partial_t, \quad \Gamma_1:=\partial_1=\partial_{x_1},\quad \Gamma_2:=\partial_2=\partial_{x_2}, \quad \Gamma_3:=\Lambda: =x_1 \partial_2-x_2\partial_1. $$ Denoting $[A,B]:=AB-BA$, we have \begin{equation}\label{eq.commute} [\Gamma_i, \partial_{t}^2-\Delta]=0, \quad i=0,\dots,3, \end{equation} and also $$ \begin{array}{ll} [\Gamma_i,\Gamma_j]=0, & i,j=0,1,2,\cr [\Gamma_0,\Gamma_3]=0, &\ \cr [\Gamma_1,\Gamma_3]=\Gamma_2, &\cr [\Gamma_2,\Gamma_3]=-\Gamma_1. \end{array} $$ Hence, for $i,j=0,1,2,3$, we have $[\Gamma_i, \Gamma_j]=\sum_{k=0}^3 c_{ij}^k\,\Gamma_k$ with suitable constants $c_{ij}^k$. Moreover, for $i=0,1,2$ and $j=0,1,2,3$ we also have $[\partial_i, \Gamma_j]=\sum_{k=1}^2 d_{ij}^k \partial_k$ with suitable constants $d_{ij}^k$.
We put $\partial=(\partial_0,\partial_1, \partial_2)$, $\partial_x=(\partial_1, \partial_2)$, $\Gamma=(\Gamma_0, \Gamma_1, \Gamma_2, \Gamma_3)=(\partial, \Lambda)$ and $\widetilde{\Gamma}=(\Gamma_1, \Gamma_2, \Gamma_3)=(\partial_x, \Lambda)=
(\nabla_x, \Lambda)
$. The standard multi-index notation will be used for these sets of vector fields, such as $\partial^\alpha=\partial_0^{\alpha_0}\partial_1^{\alpha_1}\partial_2^{\alpha_2}$ with $\alpha=(\alpha_0,\alpha_1,\alpha_2)$ and $\Gamma^\gamma=\Gamma_0^{\gamma_0} \cdots \Gamma_3^{\gamma_3}$ with $\gamma=(\gamma_0, \dots, \gamma_3)$.
For $\rho\ge 0$, $k\in \mathbb N$ and functions $v_0=v_0(x)$ and $v_1=v_1(x)$, we put \begin{eqnarray*}
\mathcal A_{\rho, k}[v_0,v_1]:= \sum_{|\gamma|\le k}\big(
\|\langle \cdot \rangle^\rho \widetilde{\Gamma}^\gamma v_0\|_{L^\infty_{_{\Omega}}}
+\|\langle \cdot \rangle^\rho\widetilde{\Gamma}^\gamma \nabla_x v_0\|_{L^\infty_{_{\Omega}}}
+\|\langle \cdot \rangle^\rho\widetilde{\Gamma}^\gamma v_1\|_{L^\infty_{_{\Omega}}} \big);\\
\mathcal B_{\rho, k}[v_0,v_1]:= \sum_{|\gamma|\le k}\big(
\|\langle \cdot \rangle^\rho \widetilde{\Gamma}^\gamma v_0\|_{L^\infty}
+\|\langle \cdot \rangle^\rho\widetilde{\Gamma}^\gamma \nabla_x v_0\|_{L^\infty}
+\|\langle \cdot \rangle^\rho\widetilde{\Gamma}^\gamma v_1\|_{L^\infty} \big). \end{eqnarray*} These quantities will be used to control the influence of the initial data on the $L^\infty$ norms of the solution.
Using the vector fields in $\widetilde{\Gamma}$, we obtain the following Sobolev-type inequality. \begin{lemma}\label{KlainermanSobolev}\ Let $v \in C_0^2(\overline{\Omega})$. Then we have \begin{eqnarray*}
\sup_{x \in \Omega} |x|^{1/2}|v(x)| \lesssim
\sum_{\substack{|\alpha|+\beta\le 2 \\ \beta \ne 2}}
\|\partial_x^\alpha \Lambda^\beta v\|_{L^2(\Omega)}. \end{eqnarray*} \end{lemma}
\noindent{\it Proof.}\ It is well known that for $w \in C_0^2(\mathbb R^2)$ we have \begin{equation}
|x|^{1/2}|w(x)| \lesssim
\sum_{\substack{|\alpha|+\beta \le 2\\ \beta\not=2}}\|\partial_x^\alpha \Lambda^\beta w\|_{L^2(\mathbb R^2)}, \quad x\in \mathbb R^2 \label{KlainermanIneq} \end{equation} (see Klainerman \cite{kl0} for the proof).
Let $\chi=\chi(x)$ be a nonnegative smooth function satisfying
$\chi(x)\equiv 0$ for $|x|\le 1$ and $\chi(x)\equiv 1$ for $|x|\ge 2$. If we write $v=\chi v+(1-\chi) v$, then we have $\chi v\in C^2_0(\mathbb R^2)$ and \eqref{KlainermanIneq} leads to $$
\sup_{x \in \Omega} |x|^{1/2}|v(x)|\lesssim
\sum_{\substack{|\alpha|+\beta \le 2\\ \beta\ne 2}} \|\partial_x^\alpha \Lambda^\beta (\chi v)\|_{L^2(\mathbb R^2)}+\|(1-\chi)v\|_{L^\infty(\Omega)}. $$ By using the Sobolev embedding to estimate the last term, we arrive at $$
\sup_{x \in \Omega} |x|^{1/2}|v(x)|\lesssim
\sum_{\substack{|\alpha|+\beta \le 2\\ \beta\not=2}} \|\partial_x^\alpha \Lambda^\beta v\|_{L^2(\Omega)}
+\sum_{|\alpha| \le 2} \|\partial_x^\alpha v\|_{L^2(\Omega)}. $$ This completes the proof.
$\qed$
\subsection{Elliptic estimates}
The following elliptic estimates will be used in the energy estimates.
\begin{lemma}\label{elliptic2}\ Let $R>1$, $m$ be an integer with $m \ge 2$ and $v \in H^m(\Omega)$ such that $\partial_\nu v=0$ on $\partial \Omega$. Then we have \begin{eqnarray}\label{eq.ap1}
\|\partial^\alpha_x v\|_{L^2(\Omega)} \lesssim \|\Delta v\|_{H^{|\alpha|-2}(\Omega)}+\|v\|_{H^{|\alpha|-1}(\Omega_{R+1})} \end{eqnarray}
for $2\le |\alpha| \le m$. \end{lemma}
\begin{proof} Let $\chi$ be a $C^\infty_0({\mathbb R}^2)$ function such that
$\chi(x) \equiv 1$ for $|x|\le R$ and $\chi (x)\equiv 0$ for $|x|\ge R+1$. We set $v_1=\chi v$ and $v_2=(1-\chi) v$, so that $v=v_1+v_2$.
If we put $h=\Delta v_1$, the function $v_1$ solves the elliptic problem \begin{equation*} \left\{ \begin{array}{ll} \Delta v_1= h & \text{ on } \Omega_{R+1}, \\ \partial_{\nu} v_1=0 & \text{ on } \partial \Omega, \\ v_1=0 & \text{ on } \partial B_{R+1}. \end{array} \right. \end{equation*} From Theorem 15.2 of \cite{ADN}, we have \begin{equation}\label{eq.ADN}
\|v_1\|_{H^{l}(\Omega_{R+1})}\lesssim
\|h\|_{H^{l-2}(\Omega_{R+1})}+\|v_1\|_{L^{2}(\Omega_{R+1})}=
\|\Delta v_1
\|_{H^{l-2}(\Omega_{R+1})}+\|v_1\|_{L^{2}(\Omega_{R+1})} \end{equation} for $l\ge 2$. Since $\Delta v_1=\chi\Delta v+2\nabla \chi\cdot\nabla v+(\Delta \chi)v$, we hence obtain \begin{eqnarray*}
\|\partial^\alpha_x v_1\|_{L^2(\Omega)}
& \lesssim & \|\Delta v\|_{H^{|\alpha|-2}(\Omega_{R+1})}
+\|\nabla v\|_{H^{|\alpha|-2}(\Omega_{R+1})}+\|v\|_{H^{|\alpha|-2}(\Omega_{R+1})}\\
& \lesssim & \|\Delta v\|_{H^{|\alpha|-2}(\Omega_{R+1})}
+\|v\|_{H^{|\alpha|-1}(\Omega_{R+1})}. \end{eqnarray*}
Now we consider $v_2$. Note that $v_2$ can be regarded as a function in $\mathbb R^2$ and we can write $\|\partial^\alpha_x v_2\|_{L^2(\Omega)}=\| \partial^\alpha_x v_2\|_{L^2(\mathbb R^2)}$. Let us recall that
$\|\partial^\beta_x w\|_{L^2({\mathbb R}^n)}\lesssim \|\Delta w\|_{L^2({\mathbb R}^n)}$
for any $w \in H^2({\mathbb R}^n)$ and $|\beta|=2$. Writing $\alpha=\beta+\gamma$ with $|\beta|=2$ and $|\gamma|=|\alpha|-2$, we have \begin{eqnarray*}\nonumber
\|\partial^\alpha_x v_2\|_{L^2(\Omega)}&\lesssim & \|\Delta \partial^\gamma_x v_2\|_{L^2(\mathbb R^2)}
\lesssim \|\Delta v_2\|_{ H^{|\alpha|-2}(\mathbb R^2)}\\
&\lesssim &\|\Delta v\|_{ H^{|\alpha|-2}(\Omega)}+\|v\|_{ H^{|\alpha|-1}(\Omega_{R+1})}. \end{eqnarray*} Combining this inequality with the estimate for $v_1$, we find \eqref{eq.ap1}. \end{proof}
\subsection{Decay estimates for the linear wave equation with Neumann boundary condition}\label{LinearNeumann}\rm Given $T>0$, we consider the mixed problem \begin{equation}\label{eq.PMfT} \begin{array}{ll} (\partial_t^2-\Delta) u =f, & (t,x) \in(0,T)\times \Omega,\\ \partial_\nu u(t,x)=0, & (t,x) \in(0,T)\times \partial\Omega,\\ u(0,x)=u_0(x), & x\in \Omega,\\ (\partial_t u)(0,x)=u_1(x), & x\in \Omega. \end{array} \end{equation} It is known that for $u_0\in H^2(\Omega)$, $u_1\in H^1(\Omega)$ and $f\in {\mathcal C}^1\bigl([0,T); L^2(\Omega)\bigr)$, the mixed problem \eqref{eq.PMfT} admits a unique solution $$ u\in \bigcap_{j=0}^2 {\mathcal C}^j\bigl([0,T); H^{2-j}(\Omega)\bigr), $$ provided that $(u_0, u_1, f)$ satisfies the compatibility condition of order $0$, that is to say, \begin{equation}
\label{CCo0}
\partial_\nu u_0(x)=0,\quad x\in \partial \Omega \end{equation} (see \cite{I68} for instance). Under these assumptions for $\vec{u}_0:=(u_0,u_1)$, the solution $u$ of \eqref{eq.PMfT} will be denoted by $S[\vec{u}_0,f](t,x)$. We set $K[\vec{u}_0](t,x)$ for the solution of \eqref{eq.PMfT} with $f\equiv 0$ and $L[f](t,x)$ for the solution of \eqref{eq.PMfT} with $\vec{u}_0\equiv (0,0)$; in other words we put $$ K[\vec{u}_0](t,x):=S[\vec{u}_0, 0](t,x), \quad L[f](t,x):= S[(0,0),f](t,x) $$ so that we get $$ S[\vec{u}_0,f](t,x)=K[\vec{u}_0](t,x)+L[f](t,x), $$ where $K[\vec{u_0}]$ and $L[f]$ are well defined because both of $(u_0, u_1, 0)$ and $(0,0,f)$ satisfy the compatibility condition of order $0$.
In order to obtain a smooth solution to \eqref{eq.PMfT}, we need the compatibility condition of infinite order. \begin{definition}\label{def.complinear} Suppose that $u_0$, $u_1$ and $f$ are smooth. Define $u_j$ for $j\ge 2$ inductively by $$ u_j(x)=\Delta u_{j-2}(x)+(\partial_t^{j-2}f)(0,x),\quad j\ge 2. $$ We say that $(u_0, u_1, f)$ satisfies the compatibility condition of infinite order in $\Omega$ for \eqref{eq.PMfT}, if one has \begin{equation*} \partial_\nu u_j=0 \quad\text{on}\quad \partial\Omega \end{equation*} for any nonnegative integer $j$. \end{definition} We say that $(u_0, u_1, f)\in X(T)$ if the following three conditions are satisfied: \begin{itemize} \item $(u_0,u_1)\in \mathcal C_0^{\infty}(\overline{\Omega})
\times \mathcal C_0^{\infty}(\overline{\Omega})$, \item $f\in C^{\infty}([0,T) \times \overline{\Omega})$;
moreover,
$f(t,\cdot)\in \mathcal C_0^{\infty}(\overline{\Omega})$ for any $t\in [0, T)$, \item $(u_0, u_1, f)$ satisfies the compatibility condition of infinite order. \end{itemize} It is known that if $(u_0, u_1, f)\in X(T)$, then we have $S[\vec{u}_0,f]\in {\mathcal C}^\infty\bigl([0,T)\times \overline{\Omega}\bigr)$ (see \cite{I68} for instance).
The following decay estimates play important roles in our proof of the main theorem.
\begin{theorem}\ \label{main} Let ${\mathcal O}$ be a convex set and $k$ be a nonnegative integer. Suppose that $\Xi=(\vec{u}_0,f)=({u}_0, {u}_1,f) \in X(T)$.
\noindent {\rm (i)}\ Let $\mu>0$. Then we have \begin{eqnarray}\label{ba3}
\sum_{|\delta|\le k} |\Gamma^\delta S[\Xi](t,x)| \lesssim {\mathcal A}_{2+\mu,3+k} [\vec{u}_0]
+ \log(e+t)\sum_{|\delta|\le 3+k}\| |y|^{1/2}W_{1,1+\mu}(s,y)\Gamma^\delta f(s,y)\|_{L^\infty_{t}L^\infty_\Omega} \end{eqnarray} for $(t,x)\in [0,T) \times {\overline{\Omega}}$.
\noindent {\rm (ii)}\ Let $0<\eta<1/2$ and $\mu>0$. Then we have \begin{eqnarray}
&& w_{(1/2)-\eta}^{-1}(t,x) \sum_{|\delta|\le k} |\Gamma^\delta \partial S[\Xi](t,x)|\lesssim \nonumber\\ && \quad \lesssim \mathcal A_{2+\mu,k+4}[\vec{u}_0]+
\log^2(e+t+|x|)
\sum_{|\delta|\le k+4}\||y|^{1/2}W_{1,1}(s,y)\Gamma^\delta f(s,y)\|_{L^\infty_{t}L^\infty_\Omega}, \label{ba4weak} \\
&& w_{1/2}^{-1}(t,x) \sum_{|\delta|\le k} |\Gamma^\delta \partial S[\Xi](t,x)|\lesssim \nonumber\\ && \quad \lesssim \mathcal A_{2+\mu,k+4}[\vec{u}_0]+
\log^2(e+t+|x|)
\sum_{|\delta|\le k+4}\||y|^{1/2}W_{1,1+\mu}(s,y)\Gamma^\delta f(s,y)\|_{L^\infty_{t}L^\infty_\Omega}
\label{ba4} \end{eqnarray} for $(t,x)\in [0,T)\times {\overline{\Omega}}$.
\noindent {\rm (iii)}\ Let $0<\eta<1$ and $\mu>0$. Then we have \begin{eqnarray} && w_{1-\eta}^{-1}(t,x)
\sum_{|\delta|\le k} |\Gamma^\delta \partial\partial_t S[\Xi](t,x)|\lesssim \nonumber\\ && \quad \lesssim {\mathcal A}_{2+\mu,k+ 5}[\vec{u}_0]+
\log^{2} (e+t+|x|)\sum_{|\delta|\le k+ 5}\||y|^{1/2}W_{1,1}(s,y) \Gamma^\delta f(s,y)\|_{L^\infty_{t}L^\infty_\Omega} \label{ba4t} \end{eqnarray} for $(t,x)\in [0,T)\times {\overline{\Omega}}$. \end{theorem}
We will prove Theorem~\ref{main} in Section~\ref{sec.pointwise} below, by using the so-called cut-off method to combine the corresponding decay estimates for the Cauchy problem with the local energy decay.
\section{The abstract argument for the proof of the main theorem}\label{AAPMT} Since the local existence of smooth solutions for the mixed problem \eqref{eq.PMN} has been shown in \cite{SN89} (see also the Appendix), in order to prove the large-time existence of the solution it suffices to derive suitable {\it a priori} estimates: following
\cite{SN89}, we need to control $\|u(t)\|_{H^9(\Omega)}+\|\partial_t u(t)\|_{H^8(\Omega)}$ for the solution $u$.
Let $u$ be the local solution of \eqref{eq.PMN}, assuming \eqref{eq.smalldata.semi} holds for large $m\in \mathbb N$ and $s>0$. Let $T^*$ be the supremum of $T$ such that \eqref{eq.PMN} admits a (unique) classical solution in $[0,T)\times \overline{\Omega}$. For $0<T\le T^*$, a small $\eta>0$, and nonnegative integers $H$ and $K$ we define \begin{align*}
{\mathcal E}_{H,K}(T)\equiv &
\sum_{|\gamma|\le H-1} \|w_{1/2}^{-1} \Gamma^\gamma \partial u\|_{L^\infty_TL^\infty_\Omega}
+\sum_{1\le j+ |\alpha|\le K}\|\partial_t^j \partial_x^\alpha u\|_{L^\infty_T L^2_{\Omega}} \\ &
+\sum_{|\delta|\leq K-2} \| \jb{s}^{-1/2} \Gamma^\delta \partial u(s,y)\|_{L^\infty_T L^2_{\Omega}}
+\sum_{|\delta|\leq K-8} \| \jb{s}^{-(1/4)-\eta} \Gamma^\delta \partial u(s,y)\|_{L^\infty_T L^2_{\Omega}} \\ &
+\sum_{|\delta|\leq K-14} \| \jb{s}^{-2\eta} \Gamma^\delta \partial u(s,y)\|_{L^\infty_T L^2_{\Omega}}
+\sum_{|\delta|\leq K-20} \| \Gamma^\delta \partial u\|_{L^\infty_T L^2_{\Omega}}. \end{align*} We neglect the first sum when $H=0$; similarly, any summation taken over an empty index set is understood to be zero as $K$ varies. We also put $${\mathcal E}_{H,K}(0)=\lim_{T\to 0^+}{\mathcal E}_{H,K}(T).$$ Observe that ${\mathcal E}_{H,K}(0)$ is determined only by $\phi$, $\psi$ and $G$, and that we have $$
{\mathcal E}_{H,K}(0)\lesssim \|\phi\|_{H^{m+1,s}(\Omega)} +\|\psi\|_{H^{m,s}(\Omega)} $$ for suitably large $m \in\mathbb N$ and $s>0$ depending on $H$ and $K$. From \eqref{eq.smalldata.semi} for such $m\in \mathbb N$ and $s>0$, we see that ${\mathcal E}_{H,K}(0)$ is finite. The previous inequality can be obtained combining the embedding $H^r(\Omega)\hookrightarrow L^\infty(\Omega)$ for $r>1$ with the trivial inequality
$|\Gamma_3 f|\le \langle x\rangle|\partial_1 f|+\langle x\rangle|\partial_2 f|$ and the equivalence between
$\sum_{ |\alpha|\le m}\|\langle \cdot
\rangle^s \partial_x^\alpha f\|_{ L^2_\Omega}$
and $\|\langle \cdot
\rangle^s f\|_{H^m(\Omega)}$. In order to optimize $m$ or $s$, one can use the sharpest embedding theorems in weighted Sobolev spaces, proved for example in \cite{GeLu04}.
Our goal is to show the following claim. \begin{claim}\label{claim} \normalfont We can take suitable $H$ and $K$ and sufficiently large $m$ and $s$, so that there exist positive numbers $C_1$, $P$ and $Q$ and a strictly increasing continuous function ${\mathcal R}:[0,\infty)\to[0,\infty)$ with ${\mathcal R}(0)=0$ such that if ${\mathcal E}_{H,K}(T) \le 1$, then \begin{equation}\label{eq.energy}
{\mathcal E}_{H,K}(T)\le C_1\varepsilon + {\mathcal R}\left({\mathcal E}_{H,K}^{P}(T)\log^{Q}(e+T)\right) (\varepsilon+{\mathcal E}_{H,K}(T)), \end{equation} provided that \eqref{eq.smalldata.semi} holds with $\varepsilon\le 1$. Here $C_1$, $P$, $Q$ and ${\mathcal R}$ are independent of $\varepsilon$ and $T$. \end{claim}
Let us explain how the lifespan estimate can be obtained from \eqref{eq.energy}. Suppose that the above claim is true. If we assume \eqref{eq.smalldata.semi} for some $m$ and $s$ which are sufficiently large, then, as we have mentioned, there exists $C_*>0$ such that ${\mathcal E}_{H,K}(0)< 2C_*\varepsilon$. We may assume $C_*\ge \max\{C_1,1\}$.
We set $\varepsilon_0=\min \{(2C_*)^{-1}, 1\}$ and suppose that $0<\varepsilon\le \varepsilon_0$, so that we have $\varepsilon\le 1$ and $2C_*\varepsilon\le 1$. We put $$
T_*(\varepsilon):=\sup \left\{T\in [0, T^*):\, {\mathcal E}_{H,K}(T)\le 2C_*\varepsilon\right\}. $$ In particular, for any $T\le T_*(\varepsilon)$, we have $ {\mathcal E}_{H,K}(T)\le 1$. From \eqref{eq.energy} with $T=T_*(\varepsilon)$, we get \begin{equation*} {\mathcal E}_{H,K}\left(T_*(\varepsilon)\right)\le C_*\varepsilon + {\mathcal R}\left((2C_* \varepsilon)^{P} \log^{Q}\left(e+T_*(\varepsilon)\right)\right) (3C_* \varepsilon). \end{equation*} We are going to prove \begin{equation} \label{eq.lifespan00} {\mathcal R}\left((2C_* \varepsilon)^{P} \log^{Q}\left(e+T^*\right) \right) >\frac{1}{4} \end{equation} by contradiction. Suppose that $T^*$ satisfies \begin{equation} \label{eq.lifespan} {\mathcal R}\left((2C_* \varepsilon)^{P} \log^{Q}\left(e+T^*\right) \right) \le \frac{1}{4}. \end{equation} Since $T_*(\varepsilon)\le T^*$, and $\mathcal R$ is an increasing function, we obtain \begin{equation*} {\mathcal E}_{H,K}\left(T_*(\varepsilon)\right)
\le \frac{7}{4} C_* \varepsilon
<2C_* \varepsilon. \end{equation*} Therefore we get $T_*(\varepsilon)=T^*$, because otherwise the continuity of ${\mathcal E}_{H,K}(T)$ implies that there exists $\widetilde{T}>T_*(\varepsilon)$ satisfying ${\mathcal E}_{H,K}(\widetilde{T})\le 2C_*\varepsilon$, which contradicts the definition of $T_*(\varepsilon)$. However, if $T_*(\varepsilon)=T^*$, and $H,K$ are sufficiently large, we can prove \begin{align}
\|u\|_{L^\infty_{T^*}H^9(\Omega)}+\|\partial_tu\|_{L^\infty_{T^*}H^8(\Omega)} & \lesssim \varepsilon + (1+T^*) {\mathcal E}_{H,K}(T^*) \label{eq.stima98}\\ &=\varepsilon + (1+T_*(\varepsilon)) {\mathcal E}_{H,K}\left(T_*(\varepsilon)\right)
\lesssim \varepsilon + (1+T_*(\varepsilon)) 2C_*\varepsilon, \nonumber \end{align} and we can extend the solution beyond the time $T^*$ by the local existence theorem, which contradicts the definition of $T^*$. Therefore \eqref{eq.lifespan} is not true, and we obtain \eqref{eq.lifespan00}. This means that, for any $\varepsilon \le \varepsilon_0$, there exists $\tilde{C}>0$ such that \begin{equation}\label{eq.lifePQ} T^*>\exp\{\tilde C\varepsilon^{-P/Q}\}. \end{equation} It remains to show \eqref{eq.stima98}. It is evident that $$
\|u\|_{L^\infty_{T^*}H^9(\Omega)}+\|\partial_tu\|_{L^\infty_{T^*}H^8(\Omega)}\lesssim \|u\|_{L^\infty_{T^*}L^2_\Omega}+{\mathcal E}_{0,9}(T^*). $$
In order to estimate $\|u\|_{L^\infty_{T^*}L^2_\Omega}$ we will use the expression $$ u(t,x)=u(0,x)
+\int_0^t \partial_t u(\tau, x)\, d\tau, $$ which leads to $$
\|u\|_{L^\infty_{T^*}L^2_\Omega}\lesssim \varepsilon +T^*{\mathcal E}_{0,1}(T^*). $$
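Let us also record how \eqref{eq.lifePQ} follows from \eqref{eq.lifespan00}. If $1/4$ lies in the range of $\mathcal R$ (otherwise \eqref{eq.lifespan00} forces $T^*=\infty$ and there is nothing to prove), then, since $\mathcal R$ is strictly increasing and continuous, \eqref{eq.lifespan00} yields
\begin{equation*}
\log(e+T^*)>\left(\mathcal R^{-1}(1/4)\right)^{1/Q}(2C_*\varepsilon)^{-P/Q},
\end{equation*}
which gives \eqref{eq.lifePQ} with $\tilde C$ slightly smaller than $(2C_*)^{-P/Q}\left(\mathcal R^{-1}(1/4)\right)^{1/Q}$, the loss in the constant absorbing the subtraction of $e$ for small $\varepsilon$.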
In conclusion, we will obtain \eqref{eq.lifespanT1.semi} once we show that Claim~\ref{claim} is true with $P=Q=1$. This will be done in the next three sections.
\section{Energy estimates for the standard derivatives} In this section we are going to estimate
$\|\partial_t^j\partial_x^\alpha u\|_{L^\infty_TL^2_\Omega}$ for $j+|\alpha|\ge 1$. In the first subsection, we consider the case where $j\ge 0$ and $ |\alpha|=1$. This can be done directly through the standard energy inequalities. In the second subsection, the case where $j\ge 1$ and $|\alpha|\ge 2$
will be treated with the help of the elliptic estimate, Lemma~\ref{elliptic2}. In the third subsection, we consider the case where $j=0$ and $|\alpha|\ge 2$. Lemma~\ref{elliptic2} will be used again, but this time we need the estimate of $\| u\|_{L^\infty_TL^2(\Omega_{R+1})}$ for some $R>0$, which is not included in the definition of ${\mathcal E}_{H,K}(T)$. Since we are considering the $2$D Neumann problem, it seems difficult to use an embedding theorem to estimate $\| u\|_{L^\infty_TL^2(\Omega_{R+1})}$ by $\|\nabla_x u \|_{L^\infty_TH^k(\Omega)}$ with some positive integer $k$. Instead, we will employ the $L^\infty$ estimate, Theorem~\ref{main}, for this purpose. \subsection{On the energy estimates for the derivatives in time} First we set \begin{equation*}
E(v;t)=\frac12 \int_{\Omega} \{|\partial_t v(t,x)|^2+|\nabla_x v(t,x)|^2\} dx \end{equation*} for a smooth function $v=v(t,x)$.
Let $j$ be a nonnegative integer. Since $\partial_t$ commutes with the restriction of the function to $\partial \Omega$, we have $\partial_\nu \partial_t^j u(t,x)=0$ for all $(t,x)\in (0,T) \times \partial \Omega$. Therefore, by the standard energy method, we find \begin{equation*} \frac{d}{dt} E(\partial_t^j u;t)= \int_{\Omega} \partial_t^j( G(\partial u))(t,x)\,\partial_t^{j+1} u(t,x) dx. \end{equation*}
Recalling the definition of ${\mathcal E}_{H,K}(T)$, for $j+|\alpha|\ge 1$ we have \begin{equation}\label{eq.nu}
|\partial^j_t\partial_x^\alpha u(t,x)|\le w_{1/2}(t,x)
{\mathcal E}_{j+|\alpha|, 0}(T),
\quad x\in \Omega, t\in [0,T). \end{equation} Applying \eqref{eq.nu} and the Leibniz rule we find $$ \frac{d}{dt} E(\partial_t^j u;t)\lesssim
\|w_{1/2}(t)\|_{L^\infty_\Omega}^2{\mathcal E}_{[j/2]+1,0}^2(T)\sum_{h=0}^{j}
\int_{\Omega} |\partial_t^h \partial u(t,x) |\,|\partial_t^{j+1} u(t,x) |dx. $$
It is also clear that if $j+|\alpha| \ge 1$, one has $$
\|\partial_t^j\partial_x^\alpha u(t)\|_{L^2_{\Omega}} \le
{\mathcal E}_{0,j+|\alpha|} (T), \quad t\in [0,T). $$ This gives $$ \frac{d}{dt} E(\partial_t^j u;t)\lesssim
\|w_{1/2}(t)\|_{L^\infty_\Omega}^2 {\mathcal E}^2_{[j/2]+1,0}(T) {\mathcal E}^2_{0,j+1}(T). $$ Since ${\mathcal E}_{H,K}(T)$ is increasing in $H$ and $K$, we get $$
\frac{d}{dt} E(\partial_t^j u;t)\lesssim \|w_{1/2}(t)\|_{L^\infty_\Omega}^2 {\mathcal E}^4_{[j/2]+1,j+1}(T). $$ As a trivial consequence of \eqref{eq.ome}, we find $w_{1/2}(t,x)\le \langle t\rangle^{-1/2}$, so that \begin{equation*} \frac{d}{dt} E(\partial_t^j u;t)\lesssim \langle t \rangle^{-1}{\mathcal E}^4_{[j/2]+1,j+1}(T). \end{equation*} After integration this gives \begin{eqnarray}\label{eq.ap9}
\sum_{l=0}^{j} \|\partial_t^{l+1} u(t)\|_{L^2_\Omega}+
\sum_{l=0}^{j} \|\partial_t^{l} \nabla_x u(t)\|_{L^2_\Omega} \lesssim {\mathcal E}_{j+1}(0)+{\mathcal E}^2_{j+1}(T)\log^{1/2}(e+t) \end{eqnarray} for any $j\ge 0$ and $t \in [0,T)$, where $${\mathcal E}_s(T)={\mathcal E}_{[(s-1)/2]+1,s}(T)$$ for any integer $s\ge 0$.
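Here the integration step can be spelled out as follows: by the previous display,
\begin{equation*}
E(\partial_t^j u;t)\le E(\partial_t^j u;0)+C\,{\mathcal E}^4_{[j/2]+1,j+1}(T)\int_0^t\langle\tau\rangle^{-1}\,d\tau\lesssim E(\partial_t^j u;0)+{\mathcal E}^4_{[j/2]+1,j+1}(T)\log(e+t),
\end{equation*}
and \eqref{eq.ap9} follows by taking the square root, using $\sqrt{a+b}\le\sqrt{a}+\sqrt{b}$ together with $E(\partial_t^j u;0)^{1/2}\lesssim {\mathcal E}_{j+1}(0)$.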
\subsection{On the energy estimates for the space-time derivatives.}\label{sec.32} Since the spatial derivatives do not preserve the Neumann boundary condition, we need to use elliptic regularity results.
We shall show that for $j\ge 1$ and $k\ge 0$ we have \begin{eqnarray}\label{eq.ap11}
\sum_{|\alpha|=k} \|\partial_t^{j} \partial_x^\alpha u(t)\|_{L^2_\Omega}
\lesssim {\mathcal E}_{j+k}(0)+{\mathcal E}^2_{j+k}(T)\log^{1/2}(e+T)+{\mathcal E}^3_{j+k-1}(T) \end{eqnarray} with ${\mathcal E}_s(T)={\mathcal E}_{[(s-1)/2]+1, s}(T)$ as before.
It is clear that \eqref{eq.ap11} follows from \eqref{eq.ap9} when $j\ge 1$ and $k=0,1$.
Next we suppose that \eqref{eq.ap11} holds for $j\ge 1$ and $k\le l$
with some positive integer $l$. Let $|\alpha|=l+1$ and $j \ge 1$. Since $|\alpha|\ge 2$, we apply to $\partial_t^{j} u$ the elliptic estimate (Lemma~\ref{elliptic2}) and we obtain $$
\|\partial_x^\alpha \partial_t^j u(t)\|_{L^2_\Omega}
\lesssim \|\Delta \partial_t^{j} u(t)\|_{H^{l-1}(\Omega)}
+\|\partial_t^{j} u(t)\|_{H^{l}(\Omega)}. $$ By \eqref{eq.ap11} for $k\le l$, we see that the second term has the desired bound. On the other hand, using the fact that $u$ is a solution to \eqref{eq.PMN}, for the first term we have \begin{eqnarray}\nonumber
\|\Delta \partial_t^{j} u(t)\|_{H^{l-1}(\Omega)}\lesssim \|\partial_t^{j+2} u(t)\|_{H^{l-1}(\Omega)}
+ \|\partial_t^{j} (G(\partial u))(t)\|_{H^{l-1}(\Omega)}. \end{eqnarray} Since $(j+2)+(l-1)=j+l+1$, it follows from \eqref{eq.ap11} for $k=l-1$ with $j$ replaced by $j+2$ that $$
\|\partial_t^{j+2} u(t)\|_{H^{l-1}(\Omega)}\lesssim {\mathcal E}_{j+l+1}(0)+{\mathcal E}^2_{j+l+1}(T)\log^{1/2}(e+T)+{\mathcal E}^3_{j+l}(T), $$ which is the desired bound. Finally, observing that $w_{1/2}(t,x)\le 1$, we get \begin{align*}
\|\partial^j_t G(\partial u)(t) \|_{H^{l-1}(\Omega)}\lesssim
\sum_{1\le |\beta|\le [(j+l-1)/2]+1}\|\partial^\beta u(t)\|_{L^\infty_\Omega}^2
\sum_{1\le |\gamma|\le j+l}\| \partial^\gamma u(t)\|_{L^2_\Omega} \lesssim {\mathcal E}^3_{j+l}(T). \end{align*} Combining these estimates, we obtain \eqref{eq.ap11} for $j\ge 1$ and $k=l+1$. This completes the proof of \eqref{eq.ap11} for $j\ge 1$ and $k\ge 0$.
\subsection{On the energy estimates for the space derivatives}\label{sec.spaceder} Our aim here is to
estimate $\|\partial_x^\alpha u\|_{L^\infty_TL^2_{\Omega}}$ for $|\alpha|=k\ge 1$. The estimate for $k=1$ is included in \eqref{eq.ap9}. Let us consider the case $|\alpha|=k\ge 2$. Let us fix $R>1$. The elliptic estimate \eqref{eq.ap1} gives \begin{eqnarray*}
\sum_{|\alpha|=k}
\|\partial_x^\alpha u\|_{L^\infty_T L^2_{\Omega}}
&\lesssim & \|\Delta u\|_{L^\infty_T H^{k-2}(\Omega)}+\|u\|_{L^\infty_TH^{k-1}(\Omega_{R+1})}\\
&\lesssim & \|\partial_{t}^2 u\|_{L^\infty_T H^{k-2}(\Omega)}+\|G(\partial u)\|_{L^\infty_T H^{k-2}(\Omega)}+
\|u\|_{L^\infty_T H^{k-1}(\Omega_{R+1})}. \end{eqnarray*} The first term can be estimated by \eqref{eq.ap11} and we get $$
\|\partial_t^2 u\|_{L^\infty_T H^{k-2}(\Omega)}
\lesssim {\mathcal E}_{k}(0)+{\mathcal E}^2_{k}(T)\log^{1/2}(e+T)+{\mathcal E}^3_{k-1}(T). $$ For the second term, we obtain the following inequality as before: $$
\|G(\partial u)\|_{L^\infty_T H^{k-2}(\Omega)} \lesssim {\mathcal E}_{k-1}^3(T). $$ As for the third term, we get \begin{align}
\|u\|_{L^\infty_TH^{k-1}(\Omega_{R+1})}\lesssim &
\sum_{1\le |\beta|\le k-1}\|\partial_x^\beta u\|_{L^\infty_TL^2(\Omega_{R+1})}
+\|u\|_{L^\infty_TL^2(\Omega_{R+1})} \nonumber\\ \lesssim &
\sum_{1\le |\beta|\le k-1}\|\partial_x^\beta u\|_{L^\infty_TL^2_\Omega}
+\|u\|_{L^\infty_TL^\infty(\Omega_{R+1})}. \nonumber \end{align} Now we fix $\mu\in (0, 1/2)$ and use \eqref{ba3} with $k=0$ to obtain \begin{align}
\|u\|_{L^\infty_TL^\infty_\Omega} \lesssim & {\mathcal A}_{2+\mu, 3}[\phi, \psi]
+\log(e+T) \sum_{|\delta|\le 3}\left\|\jb{y}^{1/2}W_{1,1+\mu}(s,y)
\Gamma^\delta \left(G(\partial u)\right)(s,y)\right\|_{L^\infty_TL^\infty_\Omega}. \label{eq.infty11a} \end{align} By using \eqref{eq.ome}, for any $ s\in [0,T)$ we have $$
\sum_{|\delta|\le 3}
|\Gamma^{\delta}G(\partial u)( s,y)| \lesssim
\langle s+|y|\rangle^{-3/2}\left(\min\{ \langle y \rangle, \langle |y|- s \rangle\}\right)^{-3/2} {\mathcal E}^3_{4,0}(T). $$ This implies $$
\sum_{|\delta|\le 3}\left\| |y|^{1/2} W_{1,1+\mu}(s,y) \Gamma^\delta \left(G(\partial u)\right)(s,y) \right\|_{L^\infty_T L^\infty_{\Omega}} \lesssim
{\mathcal E}^3_{4,0}(T), $$ and \eqref{eq.infty11a} gives \begin{equation}
\|u\|_{L^\infty_TL^\infty_\Omega}
\lesssim \mathcal A_{2+\mu, 3}[\phi, \psi]+{\mathcal E}^3_{4,0}(T)\log (e+T). \label{eq.inftyOK} \end{equation}
Summing up the estimates above, for $|\alpha|=k\ge 2$, we get \begin{align*}
\sum_{|\alpha|=k} \|\partial_x^\alpha u\|_{L^\infty_TL^2_\Omega} \lesssim &{\mathcal A}_{2+\mu,3}[\phi, \psi] {}+{\mathcal E}_{k}(0)+{\mathcal E}_{k}^2(T) \log^{1/2}(e+T) {}+{\mathcal E}_{k-1}^3(T)+{\mathcal E}_{4,0}^3(T)\log(e+T)\\
& {}+\sum_{1\le |\alpha|\le k-1} \|\partial_x^\alpha u\|_{L^\infty_TL^2_\Omega}. \end{align*} Finally we inductively obtain $$
\sum_{|\alpha|=k} \|\partial_x^\alpha u\|_{L^\infty_TL^2_\Omega} \lesssim {\mathcal A}_{2+\mu,3}[\phi, \psi] {}+{\mathcal E}_{k}(0)+{\mathcal E}_{k}^2(T) \log^{1/2}(e+T) {}+{\mathcal E}_{k-1}^3(T)+{\mathcal E}_{4,0}^3(T)\log(e+T) $$ for $k\ge 1$.
\subsection{Conclusion for the energy estimates of the standard derivatives} If $m$ and $s$ are sufficiently large, \eqref{eq.smalldata.semi} and the Sobolev embedding theorem lead to $$
{\mathcal A}_{2+\mu,3}[\phi,\psi]+{\mathcal E}_{K}(0) \lesssim \|\phi\|_{H^{m+1,s}(\Omega)}+\|\psi\|_{H^{m,s}(\Omega)} \lesssim \varepsilon. $$ Summing up the estimates in this section, we get \begin{equation}
\sum_{1\le j+|\alpha|\le K} \|\partial_t^j \partial_x^\alpha u\|_{L^\infty_TL^2_\Omega} \lesssim \varepsilon+ {\mathcal E}_{K}^2(T) \log^{1/2}(e+T) {}+{\mathcal E}_{K}^3(T)\log(e+T) \label{Concl01} \end{equation} for each $K\ge 7$.
\section{On the energy estimates for the generalized derivatives}\label{sec.eegdI} Throughout this section and the next one, we suppose that $K$ is sufficiently large, and we assume that ${\mathcal E}_K(T)\le 1$. \subsection{Direct energy estimates for the generalized derivatives}
Let $|\delta|\le K-2$. Recalling (\ref{eq.commute}), it follows that \begin{eqnarray} \frac{d}{dt} E(\Gamma^\delta u;t) &=& \int_{\Omega} \Gamma^\delta G(\partial u)(t,x)\,\partial_t \Gamma^\delta u(t,x) dx \nonumber\\ &&+\int_{\partial \Omega} \nu\cdot \nabla_x \Gamma^\delta u(t,x)\,\partial_t \Gamma^\delta u(t,x) dS=:I_{\delta}(t)+I\!I_{\delta}(t), \label{eq.eegd} \end{eqnarray} where $\nu=\nu(x)$ is the unit outer normal vector at $x \in \partial \Omega$ and $dS$ is the surface measure on $\partial \Omega$.
Since $G(\partial u)$ is a homogeneous polynomial of degree three, we have \begin{equation}\label{eq.eegdn}
|\Gamma^\delta G(\partial u)\,\partial_t \Gamma^\delta u| \lesssim \sum_{|\delta_1|\le [|\delta|/2]}|\Gamma^{\delta_1} \partial u|^2
\sum_{|\delta_2|\le |\delta|}|\Gamma^{\delta_2} \partial u(t,x)|^2. \end{equation} Applying the H\"older inequality and taking the $L^\infty$ norm of the first factor, we arrive at \begin{equation}\label{eq.eegdI}
|I_{\delta}(t)| \lesssim \langle t\rangle^{-1}{\mathcal E}^2_{
[|\delta|/2]+1, 0}(T) \jb{t}
{\mathcal E}^2_{0,
K }(T) \lesssim {\mathcal E}^4_{K}(T), \end{equation}
since $|\delta|\le K-2$.
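To see how \eqref{eq.eegdI} arises: the first sum in the definition of ${\mathcal E}_{H,K}(T)$, combined with $w_{1/2}(t,x)\le \langle t\rangle^{-1/2}$, controls the $L^\infty$ factor in \eqref{eq.eegdn}, while the third sum controls the $L^2$ factor; namely
\begin{equation*}
\sum_{|\delta_1|\le [|\delta|/2]}\|\Gamma^{\delta_1}\partial u(t)\|_{L^\infty_\Omega}^2 \lesssim \langle t\rangle^{-1}{\mathcal E}^2_{[|\delta|/2]+1,0}(T), \qquad \sum_{|\delta_2|\le |\delta|}\|\Gamma^{\delta_2}\partial u(t)\|_{L^2_\Omega}^2 \lesssim \langle t\rangle\,{\mathcal E}^2_{0,K}(T),
\end{equation*}
the latter being valid because $|\delta|\le K-2$. Multiplying the two bounds, via the H\"older inequality, gives \eqref{eq.eegdI}.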
Now we treat the boundary term, by means of the trace theorem. Since $\partial \Omega \subset B_{1}$, the norms of the generalized derivatives on $\partial \Omega$ are equivalent to the norms of the standard derivatives. Hence for all $t\in (0,T)$ we have $$
|I\!I_{\delta}(t)|\lesssim \sum_{1\le |\gamma|+k \le |\delta|+1} \|\partial_t^{k}
\partial_x^\gamma u(t)\|^2_{L^2(\partial \Omega)}. $$ Moreover, by the trace theorem and \eqref{Concl01}, we see that $$
|I\!I_{\delta}(t)|
\lesssim \sum_{1\le |\gamma|+k \le |\delta|+2}
\|\partial_t^{k} \partial_x^\gamma u(t)\|_{L^2_{\Omega}}^2 \lesssim \left( \varepsilon+{\mathcal R}_0( {\mathcal E}_{K}(T) \log^{1/2}(e+T)) {\mathcal E}_K(T) \right)^2, $$
because of the assumption $|\delta|\le K-2$. Here we put $${\mathcal R}_0(s)=s+s^2.$$ Summarizing the above estimates, for any $K\ge 7$ and
$|\delta|\le K-2$, it holds \begin{align*} \frac{d}{dt} E(\Gamma^\delta u;t) &\lesssim \left( \varepsilon+{\mathcal R}_0( {\mathcal E}_{K}(T) \log^{1/2}(e+T)) {\mathcal E}_K(T) \right)^2 +{\mathcal E}_{K}^4(T) \\ & \lesssim \left( \varepsilon+{\mathcal R}_0( {\mathcal E}_{K}(T) \log^{1/2}(e+T)) {\mathcal E}_K(T) \right)^2. \end{align*} For the last inequality, we recall that
${\mathcal E}_K(T)\le 1$. After integration, this gives \begin{align}
\sum_{|\delta| \le K-2} \| \Gamma^\delta \partial u(t)\|_{L^2_\Omega}
& \lesssim {\mathcal E}_{K}(0)+ t^{1/2} \left(\varepsilon+{\mathcal R}_0( {\mathcal E}_{K}(T) \log^{1/2}(e+T)) {\mathcal E}_K(T)\right) \notag \\
& \lesssim \jb{t}^{1/2} \left(\varepsilon+{\mathcal R}_0( {\mathcal E}_{K}(T) \log^{1/2}(e+T)) {\mathcal E}_K(T)\right). \label{eq.seceegdI} \end{align}
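For the reader's convenience, we spell out the integration step; this is a sketch under the standard assumption that the energy functional controls the $L^2$ norm, i.e. $\|\Gamma^\delta \partial u(t)\|_{L^2_\Omega}^2\lesssim E(\Gamma^\delta u;t)$, and that ${\mathcal E}_K(0)\lesssim \varepsilon$ for data of size $\varepsilon$. Setting $M:=\varepsilon+{\mathcal R}_0({\mathcal E}_{K}(T)\log^{1/2}(e+T)){\mathcal E}_K(T)$, integration of the differential inequality over $(0,t)$ yields
$$
E(\Gamma^\delta u;t)\le E(\Gamma^\delta u;0)+C\,t\,M^2 \lesssim {\mathcal E}_{K}^2(0)+t\,M^2,
$$
so that taking square roots and summing over $|\delta|\le K-2$ produces the factor $t^{1/2}\le \jb{t}^{1/2}$ appearing in \eqref{eq.seceegdI}.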
\subsection{Refinement of the energy estimates for the generalized derivatives}
Let $1\le |\delta|\le K-8$. Since $\partial \Omega$ is a bounded set, it follows from \eqref{eq.eegd} that \begin{align*}
|I\!I_{\delta}(t)|\lesssim & \| \Gamma^\delta \partial_t u(t)\|_{L^2(\partial \Omega)}
\sum_{|\gamma|\le |\delta|} \| \Gamma^\gamma \nabla_x u(t)\|_{L^2(\partial \Omega)}\\
\lesssim & \sum_{1\le |\gamma|\le |\delta|} \|\partial^\gamma\partial_t u(t)\|_{L^\infty(\partial \Omega)}
\sum_{|\gamma|\le |\delta|} \| \partial^\gamma \nabla_x u(t)\|_{L^\infty(\partial \Omega)}. \end{align*}
Since $|x|\le 1$ for $x\in \partial \Omega$, we have $\langle |x|+t\rangle {} \simeq {}\langle t\rangle \simeq
\langle |x|-t\rangle$ on $\partial\Omega$. In particular, $\sup_{x\in \partial\Omega} w_\nu(t,x) \lesssim \langle t\rangle^{-\nu}$ for $0<\nu \le 1$. We fix sufficiently small positive constants $\eta\in(0,1/4)$ and $\mu>0$. Applying the pointwise estimates \eqref{ba4weak} and \eqref{ba4t} in Theorem~\ref{main}, we get $$
|I\!I_{\delta}(t)|\lesssim \langle t\rangle^{-(3/2)+\eta} \log^4 (e+t)
\left(\mathcal A^2_{2+\mu,|\delta|+4}[\phi,\psi]+A^{2}_{|\delta|+4}(t)\right), $$ where $$
A_{s}(t)=\sum_{|\gamma|\le s}\left\|\,|y|^{1/2}W_{1,1}(s,y)\Gamma^\gamma \left(G(\partial u)\right)(s,y) \right\|_{L^\infty_tL^\infty_\Omega}. $$ If $m$ and $s$ are sufficiently large, by the Sobolev embedding theorem we have
${\mathcal A}_{2+\mu,|\delta|+4}[\phi,\psi]\lesssim \varepsilon$ and we obtain \begin{equation}\label{eq.IIA1}
|I\!I_{\delta}(t)|\lesssim \langle t\rangle^{-(3/2)+\eta} \log^4 (e+t)
\left(\varepsilon^2+A^2_{|\delta|+4}(t)\right). \end{equation}
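For completeness, we record an elementary verification of the weight equivalence on the boundary used above. Since $|x|\le 1$ for $x\in\partial\Omega$, for $t\ge 2$ we have $t-1\ge t/2$ and hence
$$
\langle t\rangle \le \langle t+|x|\rangle \le \langle t+1\rangle \le 2\langle t\rangle,
\qquad
\tfrac12\langle t\rangle\le\langle t-1\rangle\le \langle t-|x|\rangle \le \langle t\rangle,
$$
while for $0\le t\le 2$ all three quantities $\langle t+|x|\rangle$, $\langle t\rangle$, $\langle t-|x|\rangle$ lie between $1$ and $\langle 3\rangle$. Hence $\langle t+|x|\rangle\simeq\langle t\rangle\simeq\langle t-|x|\rangle$ uniformly for $x\in\partial\Omega$, which underlies the bound $\sup_{x\in\partial\Omega}w_\nu(t,x)\lesssim\langle t\rangle^{-\nu}$ stated above.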
In order to estimate $A_{|\delta|+4}(t)$, we argue as in \eqref{eq.eegdn}, so that $$
\sum_{|\gamma|\le |\delta|+4} |\Gamma^\gamma G(\partial u)(s,y)|
\lesssim w_{1/2}^2(s,y) {\mathcal E}^2_{[(|\delta|+4)/2]+1,0}(T)
\sum_{|\gamma'|\le |\delta|+4} |\Gamma^{\gamma'} \partial u(s,y)|. $$
Now using \eqref{eq.ome} and applying Lemma~\ref{KlainermanSobolev} to estimate $|\Gamma^{\gamma'} \partial u|$, we obtain $$
\sum_{|\gamma|\le|\delta|+4}
|\Gamma^\gamma G(\partial u)(s,y)|\lesssim
|y|^{-1/2} W_{1,1}^{-1}(s,y){\mathcal E}^2_{[(|\delta|+4)/2]+1,0}(T)
\sum_{|\gamma|\le |\delta|+6} {\|\Gamma^{\gamma} \partial u(s,\cdot)\|_{L^2_\Omega}}, $$ which yields \begin{equation}
\label{Est_A_1}
A_{|\delta|+4}(t) \lesssim {\mathcal E}^2_{K}(T)
\sum_{|\gamma|\le |\delta|+6} \|\Gamma^{\gamma}
\partial u(s,y)\|_{L^\infty_t{L^2_\Omega}}
\end{equation}
because we have $[(|\delta|+4)/2]\le [(K-1)/2]$
for $|\delta|\le K-8$. Observing that $$
\sum_{|\gamma|\le |\delta|+6} \|\Gamma^{\gamma}
\partial u(s,y)\|_{L^\infty_t{L^2_\Omega}} \lesssim \jb{t}^{1/2} {\mathcal E}_{K}(T) $$
for $|\delta|\le K-8$, we see from \eqref{eq.IIA1} and \eqref{Est_A_1} that $$
|I\!I_{\delta}(t)| \lesssim
\langle t\rangle^{-(1/2)+2\eta} \left(\varepsilon^2+{\mathcal E}^6_{K}(T)\right). $$
Moreover, for $|\delta|\le K-8$ the inequality \eqref{eq.eegdI} can be improved to $$
|I_{\delta}(t)| \lesssim \langle t\rangle^{-1}{\mathcal E}^2_{
[|\delta|/2]+1, 0}(T) \left(\jb{t}^{1/4+\eta} {\mathcal E}_{0,K} (T)\right)^2 \lesssim \langle t\rangle^{-(1/2)+2\eta} {\mathcal E}^4_{K}(T). $$ Coming back to \eqref{eq.eegd}, one can conclude from the assumption ${\mathcal E}_K(T)\le 1$ that \begin{eqnarray}
\sum_{1 \le|\delta| \le K-8} \| \Gamma^\delta \partial u(t)\|_{L^2_\Omega}
&\lesssim& {\mathcal E}_{K}(0)
+\jb{t}^{(1/4)+\eta} \left(\varepsilon+{\mathcal E}^2_{K}(T)\right) \nonumber
\\ \label{eq.inftyeegd} &\lesssim &
\jb{t}^{(1/4)+\eta} \left(\varepsilon+{\mathcal E}^2_{K}(T)\right). \end{eqnarray}
The next step is to improve this estimate for smaller $|\delta|$ in order to avoid the polynomial growth in $t$. Let $1\le|\delta|\le K-14$. From \eqref{Est_A_1} and the definition of ${\mathcal E}_K(T)$ we get $$
A_{|\delta|+4}(t)\lesssim {\mathcal E}^3_K(T) \jb{t}^{(1/4)+\eta}. $$ From \eqref{eq.IIA1}, it follows that \begin{eqnarray*}
|I\!I_{\delta}(t)| &\lesssim & \langle t \rangle^{-(3/2)+\eta} \log^{4} (e+t)
\left(\varepsilon^2 + \langle t\rangle^{(1/2)+2\eta} {\mathcal E}^6_K(T)\right) \\ &\lesssim&
\langle t \rangle^{-1+4\eta} \left(\varepsilon^2+{\mathcal E}^6_K(T)\right). \end{eqnarray*}
On the other hand, for $|\delta|\le K-14$ it holds $$
|I_{\delta}(t)| \lesssim \langle t\rangle^{-1}{\mathcal E}^2_{
[|\delta|/2]+1, 0}(T) \left(\jb{t}^{2\eta} {\mathcal E}_{0,K}(T)\right)^2 \lesssim \langle t\rangle^{-1+4\eta} {\mathcal E}^4_{K}(T). $$ Summing up these estimates and integrating \eqref{eq.eegd}, we get \begin{equation}\label{eq.seceegdIIj00}
\sum_{1\le |\delta| \le K-14} \| \Gamma^\delta \partial u(t)\|_{L^2_\Omega} \lesssim \jb{t}^{2\eta} \left(\varepsilon +{\mathcal E}^2_K(T) \right). \end{equation}
We repeat the above procedure once again with $1\le|\delta|\le K-20$. Since $|\delta|+6\le K-14$, from \eqref{Est_A_1}
we have $A_{|\delta|+4}(t)\lesssim \jb{t}^{2\eta}{\mathcal E}^3_K(T)$. In turn this implies \begin{eqnarray*}
|I\!I_{\delta}(t)| & \lesssim & \langle t \rangle^{-(3/2)+\eta} \log^{4} (e+t)
\left(\varepsilon^2 + \langle t\rangle^{4\eta} {\mathcal E}^6_K(T)\right)\\ & \lesssim & \jb{t}^{-(3/2)+6\eta} \left(\varepsilon^2+{\mathcal E}_K^6(T)\right). \end{eqnarray*} In this case $|I_{\delta}(t)| \lesssim \langle t \rangle^{-1}{\mathcal E}_K^4(T)$. After integration we get \begin{eqnarray}
\sum_{ 1\le|\delta| \le K-20} \| \Gamma^\delta \partial u(t)\|_{L^2_\Omega}
&\lesssim & {\mathcal E}_{K}(0)+{\mathcal E}^2_{K}(T)\log^{1/2}(e+t)+\varepsilon+{\mathcal E}^3_K(T) \nonumber\\ &\lesssim &\varepsilon + {\mathcal E}^2_K(T)\log^{1/2}(e+t). \label{eq.eegdfinal} \end{eqnarray} This estimate is the best we can obtain with our methods due to the estimate of $I_{\delta}(t)$.
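To summarize the outcome of the iteration carried out in this subsection: the bounds on the generalized energies improve step by step as the range of $|\delta|$ shrinks,
$$
\jb{t}^{1/2}\ \ (|\delta|\le K-2)\ \longrightarrow\ \jb{t}^{(1/4)+\eta}\ \ (|\delta|\le K-8)\ \longrightarrow\ \jb{t}^{2\eta}\ \ (|\delta|\le K-14)\ \longrightarrow\ \log^{1/2}(e+t)\ \ (|\delta|\le K-20),
$$
see \eqref{eq.seceegdI}, \eqref{eq.inftyeegd}, \eqref{eq.seceegdIIj00} and \eqref{eq.eegdfinal}; each gain is obtained by feeding the previous bound back into the estimates for $I_\delta$ and $I\!I_\delta$.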
\section{Boundedness for the $L^\infty$ norm and the conclusion of the proof of Theorem~\ref{thm.mainsemi}}
Summarizing \eqref{Concl01}, \eqref{eq.seceegdI}, \eqref{eq.inftyeegd}, \eqref{eq.seceegdIIj00} and \eqref{eq.eegdfinal}, we have \begin{equation}\label{eq.e0K} {\mathcal E}_{0,K}(T)\lesssim \varepsilon +\mathcal R_0({\mathcal E}_{[(K-1)/2]+1,K}(T) \log^{1/2}(e+T)){\mathcal E}_{[(K-1)/2]+1,K}(T) \end{equation} with $K\ge 20$ and $\mathcal R_0(s)=s+s^2$. If ${\mathcal E}_{H,0}(T)$ with $H=[(K-1)/2]+1$ satisfied the same bound as that for ${\mathcal E}_{0,K}(T)$ in \eqref{eq.e0K}, then we could conclude that the estimate \eqref{eq.energy} in Claim~\ref{claim} holds with $P=1$ and $Q=1/2$, and hence that $T^* \ge \exp(\tilde C \varepsilon^{-2})$. However, $\mathcal R_0$ (and hence $Q$) must be changed in view of the following argument, and this modification yields a worse estimate for the lifespan.
Since we assume $\phi,\psi \in \mathcal{C}_0^\infty(\overline{\Omega})$, there is a positive constant $M$ such that $|x|\le t+M$ in $\supp u(t,\cdot)$ for $t\ge 0$. Hence we have $\log(e+t+|x|)\lesssim \log(e+t)$ in $\supp u(t, \cdot)$.
From \eqref{Est_A_1} and the definition of ${\mathcal E}_K(T)$, it follows that
$A_{|\delta|+4}(t)\lesssim {\mathcal E}^3_K(T)$
for $K\ge 26$ and $|\delta|\le K-26$. Let $\mu>0$. Then we have ${\mathcal A}_{2+\mu, K-22}[\phi,\psi]\lesssim \varepsilon$ if $m$ and $s$ are sufficiently large. For fixed $0<\eta<1/2$, by \eqref{ba4weak}, we obtain \[
\sum_{|\gamma|\le K-26}| \Gamma^\gamma \partial u(t, x)| \lesssim \mathcal B(\varepsilon,t) \,w_{(1/2)-\eta}(t,x) \] where \[ \mathcal B(\varepsilon,t) :=\varepsilon +\log^{2}(e+t){\mathcal E}_K^3(T). \] Using this estimate, we obtain \[
\sum_{|\gamma|\le K-26}|\Gamma^\gamma G(\partial u)(t,x)|\lesssim w_{1/2}^2(t,x){\mathcal E}^2_{[(K-1)/2]+1,0}(T)w_{(1/2)-\eta}(t,x)\mathcal B(\varepsilon,t). \]
Since $|y|^{1/2} w_{1/2-\eta}\lesssim 1$, this implies \[
A_{|\delta|+4}(t)\lesssim {\mathcal E}_K^2(T)\mathcal B(\varepsilon,t) \]
for any $|\delta|+4\le K-26$. Therefore, \eqref{ba4} in Theorem~\ref{main} yields \begin{eqnarray*}
\sum_{|\gamma|\le K-30}| \Gamma^\gamma \partial u(t, x)|\lesssim
\left( \varepsilon+ \mathcal B(\varepsilon,t) {\mathcal E}^2_{K}(T)\log^2(e+t)
\right) w_{1/2}(t,x). \end{eqnarray*} For $K\ge 61$ we have $[(K-1)/2]+1 \le K-30$, and we conclude that \begin{equation} \label{Concl02}
\sum_{|\gamma|\le [(K-1)/2]+1} \|w_{1/2}^{-1} \Gamma^\gamma \partial u\|_{L^\infty_TL^\infty_\Omega}\lesssim
\varepsilon+ \mathcal B(\varepsilon,t) {\mathcal E}_K^2(T)\log^2(e+T). \end{equation}
Finally, we combine \eqref{eq.e0K} and \eqref{Concl02} to obtain \begin{align*} {\mathcal E}_K(T) \lesssim & {} \varepsilon+ (\varepsilon+{\mathcal E}_K(T))\times\\ &\times \left( {\mathcal E}_K(T)\log^{1/2}(e+T)+{\mathcal E}_K^2(T)\log^2(e+T)+{\mathcal E}_K^4(T)\log^{4}(e+T) \right).
\end{align*} In order to find \begin{align*} {\mathcal E}_K(T)\le C_1 \varepsilon+{\mathcal R}\left({\mathcal E}^P_K(T)\log^Q(e+T)\right) (\varepsilon+{\mathcal E}_K(T)) \end{align*} with $P/Q$ as large as possible, we take $$ \mathcal R(\tau):=C_2 (\tau+\tau^2+\tau^4) $$ and $P=Q=1$. Recalling the discussion in Section~\ref{AAPMT}, we obtain Theorem~\ref{thm.mainsemi}.
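Although the full details are given in Section~\ref{AAPMT}, let us sketch how an estimate of this form produces the lifespan bound; the specific constants below are for illustration only. Suppose ${\mathcal E}_K(T)\le C_1\varepsilon+{\mathcal R}({\mathcal E}^P_K(T)\log^Q(e+T))(\varepsilon+{\mathcal E}_K(T))$ with $C_1\ge 1$. On any interval on which the bootstrap bound ${\mathcal E}_K(T)\le 2C_1\varepsilon$ holds and on which
$$
{\mathcal R}\bigl((2C_1\varepsilon)^P\log^Q(e+T)\bigr)\le \tfrac14,
$$
the right-hand side is at most $C_1\varepsilon+\tfrac14(\varepsilon+2C_1\varepsilon)\le\tfrac74 C_1\varepsilon<2C_1\varepsilon$, so the bootstrap bound strictly improves and, by continuity, persists. Since ${\mathcal R}(\tau)\simeq\tau$ for small $\tau$, the smallness condition holds as long as $\log(e+T)\lesssim\varepsilon^{-P/Q}$, which yields $T^*\ge\exp(c\,\varepsilon^{-P/Q})$; in particular $P=Q=1$ gives $T^*\ge\exp(c\,\varepsilon^{-1})$.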
\section{Proof of pointwise estimates}\label{sec.pointwise}
In this section we return to the Neumann problem~\eqref{eq.PMfT} and prove Theorem~\ref{main} by combining the decay estimates for the Cauchy problem in $\mathbb R^2$ with the local energy decay estimate, through a cut-off argument.
\subsection{Decomposition of solutions} Recall the definitions of $X(T)$ and $S[\vec{u}_0, f](t,x)$, $K[\vec{u}_0](t,x)$, $L[f](t,x)$ in Subsection~\ref{LinearNeumann}. In the same manner, the solution of the Cauchy problem \begin{equation}\label{eq.PCgT} \begin{array}{ll} (\partial_t^2-\Delta) v = g & (t,x) \in (0,T)\times {\mathbb R}^2,\\
v(0,x)=v_0(x), & x\in \mathbb R^2,\\ (\partial_t v)(0,x)=v_1(x), & x\in {\mathbb R}^2, \end{array} \end{equation} will be denoted by $S_0[\vec{v}_0, g](t,x)$ with $\vec{v}_0=(v_0, v_1)$. Then we have $$ S_0[\vec{v}_0,g](t,x)= K_0[\vec{v}_0](t,x)+L_0[g](t,x), $$ where $K_0[\vec{v}_0](t,x)$ and $L_0[g](t,x)$ are the solutions of \eqref{eq.PCgT} with $g=0$ and $\vec{v}_{0}=(0,0)$, respectively. In other words, $K_0[\vec{v}_0](t,x)=S_0[\vec{v}_0, 0](t,x)$ and $L_0[g](t,x)=S_0[(0,0), g](t,x)$.
Now we proceed to introduce the cut-off argument. For $a >0$, we denote by $\psi_a$ a smooth radially symmetric function on ${\mathbb R}^2$ satisfying \begin{equation}\label{eq.psia} \begin{cases}
\psi_a(x)=0, & |x| \le a, \\
\psi_a(x)=1, & |x| \ge a+1. \end{cases} \end{equation}
\begin{lemma}\label{lemma.decomposition} Fix $a\ge 1$. Let $(u_0, u_1, f)\in X(T)$.
Assume that for any $t\in (0,T)$ one has $$ \text{supp}\,f(t,\cdot) \subset \overline{\Omega_{t+a}}\quad \text{and}\quad \text{supp}\,u_0 \subset \overline{\Omega_a}, \ \text{supp}\,u_1 \subset \overline{\Omega_a}. $$ Then we have \begin{eqnarray}\label{eq.omo} S[\vec{u}_0, f](t,x)=\psi_a(x) S_0[ \psi_{2a} \vec{u}_0, \psi_{2a}f](t,x)+\sum_{i=1}^4 S_i[ \vec{u}_0, f](t,x), \end{eqnarray} where \begin{eqnarray}\label{eq.S1} && S_1[\vec{u}_0,f](t,x)=(1-\psi_{2a}(x))L[\,[\psi_a,-\Delta]S_0[\psi_{2a} \vec{u}_0, \psi_{2a}f] ](t,x), \\ \label{eq.S2} && S_2[\vec{u}_0,f](t,x)=-L_0[\,[\psi_{2a},-\Delta]L[\,[\psi_a,-\Delta]S_0[\psi_{2a} \vec{u}_0, \psi_{2a}f] ] ](t,x), \\ \label{eq.S3} && S_3[\vec{u}_0, f](t,x)=(1-\psi_{3a} (x)) S[(1-\psi_{2a}) \vec{u}_0, (1-\psi_{2a})f](t,x), \\ \label{eq.S4} && S_4[\vec{u}_0, f](t,x)=-L_0[\,[\psi_{3a},-\Delta] S[(1-\psi_{2a}) \vec{u}_0, (1-\psi_{2a})f] ](t,x). \end{eqnarray} \end{lemma}
For the proof, we refer to \cite{Kub06}.
Observe that the first term on the right-hand side of (\ref{eq.omo}) can be evaluated by applying the decay estimates for the whole-space problem. In contrast, the local energy decay estimates for the mixed problem work well in estimating $S_j[\vec{u}_0, f]$ for $1\le j\le 4$, because we always have a localized factor in front of the operators $L$, $S$ and in their arguments.
\subsection{Known estimates for the $2$D linear Cauchy problem} In this subsection we recall the decay estimates for solutions of the linear wave equation. Since $\Lambda K_0[v_0,v_1]=K_0[\Lambda v_0, \Lambda v_1]$ by \eqref{eq.commute}, we find that Proposition 2.1 of \cite{k93} leads to the following.
\begin{lemma} Let $m\in\mathbb N$. For any $(v_0,v_1) \in \mathcal C^\infty_0(\mathbb R^2)\times \mathcal C^\infty_0(\mathbb R^2)$, it holds that \begin{equation}\label{eq.kubota}
\langle t+|x| \rangle^{1/2}\log^{-1}\left(e+ \frac{\langle t+|x|\rangle}{\langle t-|x|\rangle}\right)\sum_{|\beta|\le m}|\Gamma^\beta K_0[v_0,v_1](t,x)| \lesssim \mathcal B_{3/2,m}[v_0,v_1]. \end{equation} Under the same assumption, for any $\mu>0$ we have \begin{equation}\label{eq.kubota2}
\langle t+|x|\rangle^{1/2}\langle t-|x|\rangle^{1/2}\sum_{|\beta|\le m}|\Gamma^\beta K_0[v_0,v_1](t,x)|\lesssim \mathcal B_{2+\mu,m}[v_0,v_1]. \end{equation} \end{lemma}
For $\kappa\ge 1$ and $\tau\ge 0$, we define $$ \Psi_\kappa(\tau):=\begin{cases}
1, & \kappa>1, \\
\log(e+\tau), & \kappa=1.
\end{cases} $$ The following two lemmas are proved for $m=0$ in \cite{DiF03}. For the general case, see \cite{KP}. \begin{lemma} Let $\kappa \ge 1$ and $m\in \mathbb N$. Then we have \begin{eqnarray}\label{eq.decayMGbis}
\sum_{|\delta|\le m}|\Gamma^\delta L_0[g](t,x)|
\lesssim \Psi_{\kappa}(t+|x|)\sum_{|\delta|\le m}\|\langle y\rangle^{1/2}W_{1/2,\kappa}(s,y) \Gamma^\delta g(s,y)\|_{L^\infty_tL^\infty}, \end{eqnarray} and \begin{eqnarray} \label{eq.decayMG}
&&\langle t+|x| \rangle^{1/2}\log^{-1}\left(e+ \frac{\langle t+|x|\rangle}{\langle t-|x|\rangle}\right)
\sum_{|\delta|\le m}|\Gamma^\delta L_0[g](t,x)|\lesssim\nonumber \\&&\quad
\lesssim \Psi_{\kappa}(t+|x|) \sum_{|\delta|\le m}\|\langle y\rangle^{1/2}W_{1,\kappa}(s,y) \Gamma^\delta g(s,y)\|_{L^\infty_tL^\infty} \end{eqnarray} for any $(t,x)\in [0,T)\times \mathbb R^2$. \end{lemma}
\begin{lemma} Let $0<\sigma<3/2$, $\kappa>1$, $\mu\ge 0$, $0<\eta<1$ and $m\in \mathbb N$. Then, for any $(t,x)\in [0,T)\times \mathbb R^2$, one has \begin{eqnarray}
&&\sum_{|\delta|\le m}|\Gamma^\delta \partial L_0[g](t,x)| \lesssim \nonumber\\
&&\quad \lesssim w_{\sigma}(t,x) \Psi_{\mu+1}(t+|x|)
\sum_{|\delta|\le m+1}\|\langle y\rangle^{1/2+\kappa}\langle s+
|y| \rangle^{\sigma+\mu} \Gamma^\delta g(s,y)\|_{L^\infty_tL^\infty}, \label{eq.decayMG2} \\
&&\sum_{|\delta|\le m}|\Gamma^\delta \partial L_0[g](t,x)| \lesssim \nonumber\\
&&\quad \lesssim w_{1-\eta}(t,x)\log(e+t+|x|)
\sum_{|\delta|\le m+1}\|\langle y\rangle^{1/2}W_{1,1}(s,y) \Gamma^\delta g(s,y)\|_{L^\infty_tL^\infty}. \label{eq.decayMG4} \end{eqnarray} \end{lemma}
\subsection{The local energy decay estimates}
We come back to the linear problem \eqref{eq.PMfT}. Let $X_a(T)$ be the set of all $({u}_0, {u}_1,f)\in X(T)$ such that \begin{align}\label{eq.Xa}
& {u}_0(x)={u}_1(x)=0 \text{ for } |x|\ge a, \\
& f(t,x)=0 \text{ for } |x|\ge a, \; t\in [0,T).\label{eq.Xaa} \end{align}
The following local energy decay estimate will be used in the proof of the pointwise estimates.
\begin{lemma}\label{LocalEnergyDecay} Assume that ${\mathcal O}$ is convex. Let $a, b>1$, $\gamma\in (0, 1]$
and $m \in {\mathbb N}$. If $\Xi=({u}_0, {u}_1, f) \in X_{a}(T)$, then for any $t\in [0,T)$ one has \begin{eqnarray}
&& \sum_{|\alpha| \le m} \jb{t}^\gamma \| \partial^\alpha
S[\Xi](t)\|_{L^2(\Omega_b)}\lesssim \nonumber\\
&&\quad \lesssim \| {u}_0 \|_{{H}^{m}(\Omega)}
+\| {u}_1 \|_{{H}^{m-1}(\Omega)}
+\log(e+t)\sum_{|\alpha|\le m-1}\| \jb{s}^{\gamma} (\partial^\alpha f)(s,y) \|_{L^\infty_t L^2_\Omega}. \label{eq.LE} \end{eqnarray} \end{lemma} \noindent{\it Proof.} For $a,b>1$, it is known that, with an implicit constant depending only on $a$ and $b$, \begin{align}
\int_{\Omega_b}(|\partial_t K[\vec \phi_0](t,x)|^2+
|\nabla_x K[\vec \phi_0](t,x)|^2+&
|K[\vec \phi_0](t,x)|^2) \,dx \lesssim \nonumber\\ &
\lesssim \jb{t}^{-2}\left(\|\phi_0\|^2_{H^1(\Omega)}+ \|\phi_1\|^2_{L^2(\Omega)}\right) \label{obstacle} \end{align} for any $\vec \phi_0=(\phi_0, \phi_1)\in H^{2}(\Omega)\times H^1(\Omega)$
satisfying $\phi_0(x)=\phi_1(x)\equiv 0$ for $|x|\ge a$ and satisfying also the compatibility condition of order $0$, that is to say, $\partial_\nu \phi_0(x)=0$ for $x\in \partial \Omega$ (see for instance Lemma~2.1 of \cite{SeSh03}; see also Morawetz \cite{Mor75} and Vainberg \cite{Vai75}).
Now let $({u}_0, {u}_1, f)\in X_{a}(T)$ with some $a>1$. Let $u_j$ for $j\ge 2$ be defined as in Definition~\ref{def.complinear}. Then, by Duhamel's principle, it follows that \begin{align} & \partial_t^j S[({u}_0, {u}_1,f)](t,x) \nonumber\\ & \qquad =K[({u}_j, {u}_{j+1})](t,x)+ \int_0^t K\bigl[\bigl(0,(\partial_t^j f)(s) \bigr) \bigr](t-s,x) ds \label{DP} \end{align} for any nonnegative integer $j$ and any $(t,x) \in [0,T) \times \Omega$. Observe that $(u_j, u_{j+1}, 0)$ satisfies the compatibility condition of order $0$, because $(u_0, u_1, f)\in X(T)$ implies $\partial_\nu u_j=0$ on $\partial \Omega$; the compatibility condition of order $0$ is also trivially satisfied for $\bigl(0,(\partial_t^j f)(s),0\bigr)$ for all $s\ge 0$.
\noindent Therefore, by (\ref{obstacle}) we have \begin{eqnarray*}
\sum_{|\alpha|\le 1}
\|\partial^\alpha K[{u}_j, {u}_{j+1}](t)\|_{L^2({\Omega_b})}
& \lesssim & \jb{t}^{-1} \left(\|{u}_j\|_{H^1(\Omega)}+\|{u}_{j+1}\|_{L^2(\Omega)}\right) \\
& \lesssim & \jb{t}^{-1} \bigl(\|{u}_0\|_{H^{j+1}(\Omega)}+\|{u}_1\|_{H^{j}(\Omega)}
+\sum_{k=0}^{j-1} \|(\partial_t^k f)(0)\|_{L^2(\Omega)}\bigr) \end{eqnarray*} and \begin{eqnarray*}
\sum_{|\alpha|\le 1}\int_0^t \|\partial^\alpha K[(0,(\partial_t^j f)(s))](t-s)\|_{L^2({\Omega_b})} ds
&\lesssim &\int_0^t \jb{t-s}^{-1}
\,\|(\partial_t^j f)(s)\|_{L^2(\Omega)} ds \\ \qquad &\lesssim &\jb{t}^{-\gamma} \log(e+t) \sup_{0\le s \le t} \jb{s}^\gamma
\|(\partial_t^j f)(s)\|_{L^2(\Omega)} \end{eqnarray*} for any $\gamma \in (0,1]$. In conclusion, for any integer $j\ge 0$, we have \begin{eqnarray}
&& \sum_{|\alpha|\le 1}\| \partial^\alpha \partial^j_{t} S[({u}_0, {u}_1, f)](t)\|_{L^2(\Omega_b)}\lesssim \nonumber\\ && \quad \lesssim \jb{t}^{-\gamma} \bigl(
\|{u}_0\|_{H^{j+1}(\Omega)}+\|{u}_1\|_{H^{j}(\Omega)}
+\sum_{k=0}^j\log(e+t) \sup_{0\le s \le t} \jb{s}^\gamma\|(\partial_t^k f)(s)\|_{L^2(\Omega)}\bigr). \label{LE1} \end{eqnarray}
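The convolution bound used in the second estimate above is a standard computation, which we include for completeness: for $\gamma\in(0,1]$,
$$
\int_0^t\langle t-s\rangle^{-1}\langle s\rangle^{-\gamma}\,ds\lesssim\langle t\rangle^{-\gamma}\log(e+t).
$$
Indeed, on $[0,t/2]$ one has $\langle t-s\rangle\gtrsim\langle t\rangle$ and $\int_0^{t/2}\langle s\rangle^{-\gamma}\,ds\lesssim\langle t\rangle^{1-\gamma}\log(e+t)$, while on $[t/2,t]$ one has $\langle s\rangle\gtrsim\langle t\rangle$ and $\int_{t/2}^t\langle t-s\rangle^{-1}\,ds\lesssim\log(e+t)$; summing the two contributions gives the claim.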
In order to evaluate $\partial^\alpha S[\Xi]$ for $2\le |\alpha| \le m$, we have only to combine (\ref{LE1}) with a variant of (\ref{eq.ap1})\,: \begin{equation}\label{LE2}
\|\varphi\|_{H^m(\Omega_b)} \lesssim \|\Delta_x \varphi\|_{H^{m-2}(\Omega_{b^\prime})}+\|\varphi\|_{H^{m-1}(\Omega_{b^\prime})}, \end{equation}
where $1<b<b^\prime$ and $\varphi \in H^m(\Omega)$ with $m \ge 2$; we can easily obtain \eqref{LE2} from \eqref{eq.ap1} by cutting off $\varphi$ for $|x|\ge b'$.
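For completeness we indicate the cut-off computation, assuming that \eqref{eq.ap1} is the corresponding elliptic estimate on all of $\Omega$. Choose $\chi\in\mathcal C_0^\infty(\mathbb R^2)$ with $\chi=1$ on $B_b$ and ${\rm supp}\,\chi\subset B_{b'}$. Applying \eqref{eq.ap1} to $\chi\varphi$ and using
$$
\Delta(\chi\varphi)=\chi\,\Delta\varphi+2\nabla\chi\cdot\nabla\varphi+(\Delta\chi)\varphi,
$$
whose right-hand side is supported in $\overline{B_{b'}}$ and is bounded in $H^{m-2}(\Omega)$ by $\|\Delta\varphi\|_{H^{m-2}(\Omega_{b'})}+\|\varphi\|_{H^{m-1}(\Omega_{b'})}$, we obtain \eqref{LE2} from $\|\varphi\|_{H^m(\Omega_b)}\le\|\chi\varphi\|_{H^m(\Omega)}$.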
In order to complete the proof, one has to apply this inequality recalling the equation $\Delta S[\Xi]=\partial_t^2 S[\Xi]-f$. Invoking \eqref{LE1}, we finally obtain the desired estimate \eqref{eq.LE}.
$\qed$
\subsection{Proof of Theorem~\ref{main}} The following lemma is the main tool for the proof of Theorem~\ref{main}. \begin{lemma}\label{KataLem} Let ${\mathcal O}$ be a convex set. Let $a,b>1$, $0<\rho\le 1$, $m\in \mathbb N^*$ and $\kappa \ge 1$.
\noindent {\rm (i)} Suppose that $\chi$ is a smooth function on $\mathbb R^2$ satisfying ${\rm supp}\, \chi \subset B_b$. If $\Xi=({u}_0, {u}_1,f) \in X_{a}(T)$, then \begin{eqnarray} && \langle t \rangle^\rho
\sum_{|\delta|\le m}|\Gamma^\delta (\chi S[ \Xi ])(t,x)|\lesssim \nonumber\\
&&\lesssim \| {u}_0 \|_{{H}^{m+2}(\Omega)}+\| {u}_1 \|_{{H}^{m+1}(\Omega)}
+ \log(e+t) \sum_{|\beta|\le m+1} \|\langle s\rangle^{\rho} \partial^\beta f(s,y)\|_{L^\infty_tL^\infty({\Omega_a})} \label{KataL01} \end{eqnarray} for $(t,x)\in[0, T)\times \overline{\Omega}$.
\noindent {\rm (ii)} Let $g\in \mathcal C^{\infty}([0,T)\times \mathbb R^2)$ such that $\supp g (t,\cdot)\subset \overline{B_a\setminus B_1}$ for any $t\in [0,T)$. Then \begin{eqnarray} \label{KataL02}
\sum_{|\delta|\le m}|\Gamma^\delta L_0[g](t,x)| \lesssim
\sum_{|\beta|\le m} \|\langle s\rangle^{1/2} \partial^\beta g(s,y)\|_{L^\infty_tL^\infty({\Omega_a})}, \end{eqnarray} and for any $0\le \eta<\rho$ we have \begin{align} \label{KataL03} & w^{-1}_{\rho-\eta}(t,x)
\sum_{|\delta|\le m}
|\Gamma^\delta \partial L_0[g](t,x)| \lesssim
\Psi_{\eta+1}(t+|x|)
\sum_{|\beta|\le m+1} \|\langle s\rangle^{\rho} \partial^\beta g(s,y)\|_{L^\infty_tL^\infty({\Omega_a})} \end{align} for $(t,x)\in [0,T)\times \overline{\Omega}$.
\noindent {\rm (iii)} Let $(v_0,v_1,g)\in \mathcal C^{\infty}(\mathbb R^2)\times \mathcal C^{\infty}(\mathbb R^2)\times \mathcal C^{\infty}([0,T)\times \mathbb R^2)$. If $v_0=v_1=g(t,\cdot)=0$ for any $x\in B_1$ and $t\in [0,T)$, then \begin{eqnarray}
&& \langle t \rangle^{1/2} \sum_{|\beta|\le m}
|\Gamma^\beta S_0[v_0,v_1, g ](t,x)| \lesssim \nonumber\\ &&\quad \lesssim {\mathcal A}_{3/2, m}[v_0,v_1]
+\Psi_\kappa(t+|x|) \sum_{|\beta|\le m} \|\langle y\rangle^{1/2} W_{1,\kappa}(s,y)\Gamma^\beta g(s,y)\|_{L^\infty_tL^\infty({\Omega})} \label{KataL04} \end{eqnarray} for $(t,x)\in [0,T)\times {\overline{\Omega}_b}$. \end{lemma}
\noindent{\it Proof.}\ \ First we note that for any smooth function $h:[0,T)\times \overline{\Omega}\to \mathbb R$ such that ${\rm supp}\, h(t, \cdot)\subset B_R$ for any $t\in [0,T)$ and suitable $R>1$,
it holds that \begin{equation}
\sum_{|\beta|\le m}|\Gamma^\beta h(t,x)|\lesssim \sum_{|\beta|\le m} |\partial^\beta h(t,x)|. \label{KataM01} \end{equation} Clearly the same estimate holds for $h:[0,T)\times \mathbb R^2\to \mathbb R$.
We start with the proof of \eqref{KataL01}. Let $\Xi \in X_{a}(T)$ and $0<\rho\le 1$. For $(t,x)\in [0, T)\times \overline{\Omega}$, combining \eqref{KataM01} with the standard Sobolev inequality and then applying the local energy decay \eqref{eq.LE}, we get \begin{eqnarray*} &&\langle t \rangle^\rho
\sum_{|\beta|\le m}|\Gamma^{\beta}(\chi S[ \Xi ]) (t,x)|
\lesssim \langle t\rangle^\rho \!\!\! \sum_{|\beta|\le m+2}
\|{\partial^\beta S[ \Xi ](t)}\|_{L^2(\Omega_b)} \\ && \quad \lesssim
\| {u}_0 \|_{{H}^{m+2}(\Omega)}+\| {u}_1 \|_{{H}^{m+1}(\Omega)}
+\log(e+t) \sum_{|\beta|\le m+1} \| \jb{s}^{\rho} \partial^\beta f(s,y) \|_{L^\infty_t L^2_\Omega}. \end{eqnarray*} Since $\text{supp}\,f(t,\cdot) \subset \overline{\Omega}_{a}$
implies $\|{\partial^\beta f(s)}\|_{L^2(\Omega)}\lesssim \|{\partial^\beta f(s)}\|_{L^\infty({\Omega}_a)}$, we obtain \eqref{KataL01}.
Next we prove \eqref{KataL02} with the aid of the decay estimates for the linear Cauchy problem. By \eqref{eq.decayMGbis} for some $\kappa>1$, we find \begin{eqnarray*}
\sum_{|\delta|\le m}|\Gamma^\delta
L_0[g](t,x)| \lesssim
\sum_{|\delta|\le m} \|\langle y\rangle^{1/2}W_{1/2,\kappa}(s,y)\Gamma^\delta g(s,y)\|_{L^\infty_tL^\infty}. \end{eqnarray*} Using the assumption ${\rm supp}\, g(t,\cdot) \subset \overline{B_a\setminus B_1}\subset {\overline{\Omega}_{a}}$, we obtain \eqref{KataL02}.
Similarly, if we use \eqref{eq.decayMG2} (with $\sigma $ being replaced by $\rho-\eta$ and $\mu$ by $\eta$), instead of \eqref{eq.decayMGbis}, then we get \eqref{KataL03}.
Finally we prove \eqref{KataL04} by using \eqref{eq.kubota} and \eqref{eq.decayMG}. It follows that \begin{eqnarray*}
&& \langle t+|x| \rangle^{1/2} \log\left(e+\frac{\langle t+|x|\rangle }{\langle t-|x|\rangle }\right)
\sum_{|\beta|\le m} |\Gamma^\beta S_0[\vec{v}_0, g ](t,x)|\lesssim \\&& \quad \lesssim {\mathcal B}_{3/2, m}[\vec{v}_0]
+ \Psi_\kappa(t+|x|) \sum_{|\beta|\le m} \|\langle y\rangle^{1/2}W_{1,\kappa}(s,y) \Gamma^\beta g(s,y)\|_{L^\infty_tL^\infty} \end{eqnarray*} for $(t,x)\in [0, T)\times \mathbb R^2$. Observe that the logarithmic term on the left-hand side is equivalent to a constant when $x \in \overline{\Omega_b}$. Thus we get \eqref{KataL04}, because our assumption ensures that the supports of the data and of $g(t,\cdot)$ are contained in $\Omega$. This completes the proof.
$\qed$
Now we are in a position to prove Theorem~\ref{main}. \begin{proof}[Proof of Theorem~$\ref{main}$]
According to Lemma \ref{lemma.decomposition} with $a=1$, we can write \begin{equation}\label{decomposition} S[{\Xi}](t,x)=\psi_1(x) S_0[\psi_2 \Xi](t,x) {}+\sum_{i=1}^4 S_i[\Xi](t,x) \end{equation} for $(t,x)\in [0,T)\times {\overline{\Omega}}$, where $\psi_a$ is defined by (\ref{eq.psia}) and $S_i[\Xi]$ for $1\le i\le 4$ are defined by \eqref{eq.S1}--\eqref{eq.S4} with $a=1$. It is easy to check that \begin{equation}\label{eq.compsi} [\psi_a,-\Delta]h(t,x)=
h(t,x) \Delta \psi_a(x)+2\nabla_{\!x}\,h(t,x) \cdot \nabla_{\!x}\, \psi_a(x) \end{equation} for $(t, x) \in [0,T)\times {\overline{\Omega}}$, $a \ge 1$ and any smooth function $h$. Note that this identity implies \begin{equation}\label{eq.comX} (0,0, [\psi_a, -\Delta]h)\in X_{a+1}(T) \end{equation} because ${\rm supp}\, \nabla_x \psi_a\cup {\rm supp}\, \Delta \psi_a\subset \overline{B_{a+1}\setminus B_a}$.
First we prove \eqref{ba3}. Applying \eqref{eq.kubota} and \eqref{eq.decayMG}, we have \begin{eqnarray*}
&& \jb{t+|x|}^{1/2}
\log^{-1}\left(e+\frac{\langle t+|x|\rangle}{\langle t-|x|\rangle}\right)
\sum_{|\delta|\le k}\left|\Gamma^\delta S_0[\psi_2\Xi](t,x)\right|\lesssim \\ && \quad \lesssim {\mathcal B}_{3/2,k}[\psi_2\vec{u}_0]
+ \sum_{|\delta|\le k}\|\langle y\rangle^{1/2}W_{1,1+\mu}(s,y)\Gamma^\delta (\psi_2 f)(s,y)\|_{L^\infty_tL^\infty} \\ && \quad \lesssim {\mathcal A}_{3/2,k}[\vec{{u}}_0]
+\sum_{|\delta|\le k} \| |y|^{1/2} W_{1,1+\mu}(s,y) \Gamma^\delta f(s,y)\|_{L^\infty_tL^\infty_{\Omega}}, \end{eqnarray*} so that \begin{align}
& \jb{t+|x|}^{1/2}\log^{-1}\left(e+\frac{\jb{t+|x|}}{\jb{t-|x|}}\right)
\sum_{|\delta|\le k}\left| \Gamma^{\delta}\bigl(\psi_1(x) S_0[\psi_2\Xi](t,x)\bigr)\right|
\lesssim \nonumber\\ & \qquad\qquad \lesssim {\mathcal A}_{3/2,k}[\vec{u}_0]
+ \sum_{|\delta|\le k}\| |y|^{1/2} W_{1,1+\mu}(s,y) \Gamma^\delta f(s,y)\|_{L^\infty_tL^\infty_{\Omega}}.
\label{eq.finalS0} \end{align}
Now we write $$ S_1[\Xi]=(1-\psi_2)L[[\psi_1,-\Delta]K_0[\psi_2 \vec{u}_0]]+(1-\psi_2)L[[\psi_1,-\Delta]L_0[\psi_2 f]]=: S_{1,1}[\Xi]+S_{1,2}[\Xi]. $$ We can apply \eqref{KataL01} to estimate $S_{1,2}[\Xi]$, because we have $L[h]=S[0,0,h]$ and ${\rm supp}(1-\psi_2)\subset B_3$, and because \eqref{eq.comX} guarantees $(0,0,[\psi_1,-\Delta]L_0[\psi_2f]) \in X_{2}(T)$. Therefore we get \begin{eqnarray*}
\langle t \rangle^{1/2}\sum_{|\delta|\le k} |\Gamma^\delta S_{1,2}[\Xi](t,x)|&\lesssim&
\log(e+t)\sum_{|\beta|\le k+ 1}
\bigl\|\langle s\rangle^{1/2} \partial^\beta \bigl([\psi_1,-\Delta]L_0[\psi_2 f] \bigr)(s,x)\bigr\|_{L^\infty_tL^\infty(\Omega_2)}\\ &\lesssim &
\log(e+t)\sum_{|\beta|\le k+2}
\|\langle s\rangle^{1/2} \partial^\beta L_0[\psi_2 f](s,x)\|_{L^\infty_tL^\infty(\Omega_2)}, \end{eqnarray*}
where we have used \eqref{eq.compsi} to obtain the second line. Recalling that $L_0[h]=S_0[0,0,h]$ and noting that $\psi_2 f(t,x)=0$ if $|x|\le 2$, we can use \eqref{KataL04} to obtain \begin{equation}\label{eq.stimaS12}
\langle t \rangle^{1/2} \sum_{|\delta|\le k}|\Gamma^\delta S_{1,2}[\Xi](t,x)| \lesssim \log(e+t)
\sum_{|\beta|\le k+2}\| |y|^{1/2} W_{1,1+\mu}(s,y)\Gamma^\beta f(s,y)\|_{L^\infty_tL^\infty_{\Omega}} \end{equation} for $(t,x)\in [0,T)\times {\overline{\Omega}}$.
In order to estimate $S_{1,1}[\Xi]$, we combine the Sobolev embedding and the local energy decay estimate \eqref{eq.LE} with $\gamma=1$. Then we get \begin{eqnarray*}
\sum_{|\delta|\le k}|\Gamma^\delta S_{1,1}[\Xi](t,x)| &\lesssim&
\|(1-\psi_2) L[[\psi_1,-\Delta]K_0[\psi_2 \vec{u}_0]](t,\cdot)\|_{H^{2+k}(\Omega)}\\ &\lesssim&
\|S[0,0,[\psi_1,-\Delta]K_0[\psi_2 \vec{u}_0]](t,\cdot)\|_{H^{2+k}(\Omega_3)} \\ &\lesssim& \langle t \rangle^{-1} \log(e+t)
\sum_{|\delta|\le k+1}
\|\langle s\rangle\partial^\delta\bigl( [\psi_1,-\Delta] K_0[\psi_2 \vec{u}_0]\bigr)(s,y) \|_{L^\infty_{t} L^2_{\Omega} } \\ &\lesssim&
\langle t \rangle^{-1} \log(e+t)\sum_{|\beta|\le k+2}\| \langle s\rangle \partial^\beta K_0 [\psi_2
\vec{u}_0](s,y) \|_{L^\infty_{t} L^\infty(\Omega_2)}. \end{eqnarray*} Then we use \eqref{eq.kubota2}; recalling that we are in a bounded $y$-domain, for any $\mu>0$ we get \begin{equation}\label{eq.stimaS11}
\jb{t}^{1/2} \jb{t+|x|}^{1/2} \log^{-1} (e+t)
\sum_{|\delta|\le k}|\Gamma^\delta S_{1,1}[\Xi](t,x)|\lesssim \mathcal B_{2+\mu,2+k}[\psi_2\vec{u}_0] \lesssim \mathcal A_{2+\mu,2+k}[\vec{u}_0] \end{equation} for any $(t,x)\in [0,T)\times {\overline{\Omega}}$.
Now we proceed to estimate $S_3[\Xi]$. Because $(1-\psi_2)\Xi\in X_{3}(T)$ for any $\Xi\in X(T)$, taking $\rho=1-\mu$ in \eqref{KataL01} we get \begin{eqnarray}
&& \langle t\rangle^{1/2} \sum_{|\delta|\le k}|\Gamma^\delta S_3[\Xi](t,x)|\lesssim \label{eq.stimaS3}\\ &&\quad \lesssim \langle t\rangle^{-1/2+\mu} \Big(
\|{u}_0\|_{H^{k+2}(\Omega_{3})}+\|{u}_1\|_{H^{k+1}(\Omega_{3})}+
\log(e+t) \sum_{|\beta|\le k+1} \|\langle s\rangle^{1-\mu} \partial^\beta f(s,y)\|_{L^\infty_tL^\infty(\Omega_3)}
\Big) \nonumber \end{eqnarray} for $(t,x)\in [0,T)\times {\overline{\Omega}}$.
By using the trivial inequality $\langle s\rangle^{1-\mu}\lesssim |y|^{1/2} W_{1, 1}(s,y)$ in $[0,T)\times\Omega_3$, from \eqref{eq.stimaS12}, \eqref{eq.stimaS11} and \eqref{eq.stimaS3} we can conclude that \begin{align}
&\langle t\rangle^{1/2}\sum_{|\delta|\le k}|\Gamma^\delta S_1[\Xi]|+\langle t\rangle^{1/2}\sum_{|\delta|\le k}|\Gamma^\delta S_3[\Xi]|\lesssim \nonumber\\ & \lesssim \jb{t}^{-(1/2)+\mu}
\mathcal A_{2+\mu,2+k}[\vec{u}_0]+
\log(e+t)\sum_{|\beta|\le 2+k}\||y|^{1/2}W_{1,1+\mu}(s,y)\Gamma^\beta f(s,y)\|_{L^\infty_tL^\infty_{\Omega}}. \label{eq.finalS1S3log} \end{align}
Finally we consider the terms $S_2[\Xi]$, $S_4[\Xi]$. Let us set $g_j[\Xi]=(\partial_t^2-\Delta) S_j[\Xi]$ for $j=2, 4$. Recalling the definition of $L_0$, we find \begin{eqnarray*} g_2[\Xi]&=& -[\psi_2,-\Delta] L\bigl[\,[\psi_1,-\Delta]
S_0[\psi_2 \Xi]\bigr];\\ g_4[\Xi]&=& -[\psi_3,-\Delta] S[(1-\psi_2)\Xi]. \end{eqnarray*} Having \eqref{eq.compsi} in mind, we see that $g_2$ and $g_4$ have the same structure as $S_1$ and $S_3$, but contain one more derivative. Therefore, arguing similarly to the derivation of \eqref{eq.finalS1S3log}, we arrive at \begin{align}
&\langle t\rangle^{1/2}\sum_{|\delta|\le k}|\Gamma^\delta g_2[\Xi]|+\langle t\rangle^{1/2}\sum_{|\delta|\le k}|\Gamma^\delta g_4[\Xi]|\lesssim \nonumber\\ &\lesssim \jb{t}^{-(1/2)+\mu} \mathcal A_{2+\mu,3+k}[\vec{u}_0]+
\log(e+t)\sum_{|\beta|\le 3+k}\| |y|^{1/2} W_{1,1+\mu}(s,y)\Gamma^\beta f(s,y)\|_{L^\infty_tL^\infty_{{\Omega}}}. \label{Star01} \end{align} On the other hand, we have $S_i[\Xi]= L_0[g_i]$ for $i=2,4$. Thus, since $g_2$ and $g_4$ are supported on $\overline{B_4\setminus B_2}$, we are in a position to apply \eqref{KataL02} and we get \begin{eqnarray}
&& \sum_{|\delta|\le k}\left(|\Gamma^\delta S_2[\Xi]|+|\Gamma^\delta S_4[\Xi]|\right)(t,x) \lesssim\nonumber\\ &&\quad \lesssim \mathcal A_{2+\mu,3+k}[\vec{u}_0]+\log(e+t)
\sum_{|\beta|\le 3+k}\| |y|^{1/2} W_{1,1+\mu}(s,y)\Gamma^\beta f(s,y)\|_{L^\infty_tL^\infty_\Omega}. \label{eq.finalS2S4} \end{eqnarray} Now \eqref{ba3} follows from \eqref{eq.finalS0}, \eqref{eq.finalS1S3log} and \eqref{eq.finalS2S4}.
Next we prove \eqref{ba4}. Trivially one has \begin{eqnarray*}
&& \sum_{|\delta|\le k} |\Gamma^\delta \partial (\psi_1(x) S_0[\psi_2 \Xi](t,x))|\lesssim \\ && \quad \lesssim
\sum_{|\delta|\le k} |\Gamma^\delta \partial S_0[\psi_2 \Xi](t,x)|
+\sum_{|\delta|\le k} |\Gamma^\delta \nabla_x \psi_1(x)| |\Gamma^{\delta} S_0[\psi_2 \Xi](t,x)|. \end{eqnarray*} Since in $\Omega$
one has $|y|\simeq \langle y\rangle$, by \eqref{eq.kubota2} and \eqref{eq.decayMG4} with $\eta=1/2$, we see that \begin{eqnarray*}
&&\sum_{|\delta|\le k} |\Gamma^\delta \partial S_0[\psi_2 \Xi](t,x)|
\lesssim \jb{t+|x|}^{-1/2}\jb{t-|x|}^{-1/2} {\mathcal A}_{2+\mu,k+1}[\vec{u}_0]+ \\
&& \quad +w_{1/2}(t,x) \log (e+t+|x|)
\sum_{|\delta|\le k+1}\|
|y|^{1/2}W_{1,1}(s,y) \Gamma^\delta f(s,y)\|_{L^\infty_tL^\infty_{\Omega}}. \end{eqnarray*} On the other hand, by \eqref{eq.kubota} and \eqref{eq.decayMG} with $\kappa=1$, we have \begin{eqnarray*}
&& \jb{t+|x|}^{1/2}
\log^{-1}\left(e+\frac{\langle t+|x|\rangle}{\langle t-|x|\rangle}\right)
\sum_{|\delta|\le k}\left|\Gamma^\delta S_0[\psi_2\Xi](t,x)\right|\lesssim \\
&& \quad \lesssim {\mathcal A}_{3/2,k}[\vec{u}_0]+\log (e+t+|x|)
\sum_{|\delta|\le k}\| |y|^{1/2}W_{1,1}(s,y) \Gamma^\delta f(s,y)\|_{L^\infty_tL^\infty_{\Omega}}. \end{eqnarray*} Since the logarithmic term on the left-hand side does not appear when $x \in \Omega_{2}$, we get \begin{eqnarray}
&& w_{1/2}^{-1}(t,x) \sum_{|\delta|\le k}\left| \Gamma^{\delta} \partial
\bigl(\psi_1(x) S_0[\psi_2\Xi]\bigr)(t,x)\right|
\nonumber\\ && \quad \lesssim
{\mathcal A}_{2+\mu,k+1}[\vec{u}_0]+\log (e+t+|x|)
\sum_{|\delta|\le k+1} \| |y|^{1/2}W_{1,1}(s,y) \Gamma^{\delta} f(s,y)\|_{L^\infty_tL^\infty_{\Omega}}. \label{eq.finalS0bis} \end{eqnarray} Therefore, $\partial(\psi_1S_0[\psi_2\Xi])$ has the desired bound.
Let us recall that $|x|$ is bounded in $\supp S_1[\Xi](t,\cdot)\cup \supp S_3[\Xi](t,\cdot)$. In particular we get $ w_{1/2}^{-1}(t,x)\lesssim \langle t\rangle^{1/2}$. From \eqref{eq.finalS1S3log} we deduce \begin{eqnarray}
&& \sum_{|\delta|\le k} w_{1/2}^{-1}(t,x)
\left( |\Gamma^\delta \partial S_1[\Xi](t,x)|+|\Gamma^\delta \partial S_3[\Xi](t,x)| \right) \lesssim \nonumber \\ && \quad \lesssim \mathcal A_{2+\mu,3+k}[\vec{u}_0]+ \log(e+t)
\sum_{|\beta|\le 3+k}\| |y|^{1/2}W_{1,1+\mu}(s,y)\Gamma^\beta f(s,y)
\|_{L^\infty_tL^\infty_{\Omega}}. \label{eq.finalS1S3logder} \end{eqnarray} As for $S_4[\Xi]$, we use an estimate similar to \eqref{eq.stimaS3} with $k$ replaced by $k+1$, that is \begin{align} & \langle t\rangle^{1-\mu}
\sum_{|\delta|\le k+1}|\Gamma^\delta g_4[\Xi](t,x)|\lesssim \nonumber\\ & \lesssim
\mathcal A_{2+\mu,k+4}[\vec{u}_0]+ \log(e+t)
\sum_{|\beta|\le k+3}\| |y|^{1/2} W_{1,1+\mu}(s,y)\Gamma^\beta f(s,y)\|_{L^\infty_tL^\infty_{\Omega}}. \label{g2g4} \end{align} Applying \eqref{KataL03} with $\rho=1-\mu$ and $\eta=\mu$~($0<\mu\le 1/4$), we find that \begin{eqnarray}
&& \sum_{|\delta|\le k} w_{1-2\mu}^{-1}(t,x)
|\Gamma^\delta \partial S_4[\Xi]|(t,x) \lesssim \nonumber\\ &&\quad \lesssim \mathcal A_{2+\mu,k+4}[\vec{u}_0]+ \log(e+t)
\sum_{|\beta|\le k+3}\| |y|^{1/2} W_{1,1+\mu}(s,y)\Gamma^\beta f(s,y)\|_{L^\infty_tL^\infty_{\Omega}}. \label{eq.finalS2S4der} \end{eqnarray} For treating $S_2[\Xi]$, we decompose $g_2[\Xi]$ into $g_{2,1}[\Xi]$ and $g_{2,2}[\Xi]$ as was done for evaluating $S_1[\Xi]$. Then $L_0[g_{2,1}]$ can be estimated as $S_4[\Xi]$. On the other hand, using \eqref{KataL03} with $\rho=1/2$ and $\eta=0$ for $L_0[g_{2,2}]$, we arrive at \begin{eqnarray}
&& \sum_{|\delta|\le k} w_{1/2}^{-1}(t,x)
|\Gamma^\delta \partial S_2[\Xi]|(t,x) \lesssim \nonumber\\ &&\quad \lesssim \mathcal A_{2+\mu,4+k}[\vec{u}_0]+
\log^2(e+t+|x|)
\sum_{|\beta|\le 4+k}\| |y|^{1/2} W_{1,1+\mu}(s,y)\Gamma^\beta f(s,y)\|_{L^\infty_tL^\infty_{\Omega}}. \label{eq.finalS2S4derBis} \end{eqnarray} Thus we obtain \eqref{ba4} from \eqref{eq.finalS0bis}, \eqref{eq.finalS1S3logder}, \eqref{eq.finalS2S4der}, and \eqref{eq.finalS2S4derBis}.
In order to show \eqref{ba4weak}, we remark that $w_{1/2}\le w_{(1/2)-\eta}$ so that in \eqref{eq.finalS0bis} we can replace $w_{1/2}$ with $w_{1/2-\eta}$. Moreover, \eqref{eq.finalS1S3logder} and \eqref{g2g4} hold with $\mu=0$ if we replace $\log(e+t)$ by $\log^2(e+t)$, thanks to \eqref{KataL04} with $\kappa=1$. Therefore, the application of \eqref{KataL03} with $\rho=1/2$ and $0<\eta<1/2$ leads to \eqref{eq.finalS2S4der} with $w_{1/2}^{-1}$ replaced by $w^{-1}_{(1/2)-\eta}$ and $\mu=0$ in the second term of the right-hand side. Hence we get \eqref{ba4weak}.
Finally, we prove \eqref{ba4t}. We put $\eta'=\eta/2$. By \eqref{eq.kubota2} and \eqref{eq.decayMG4}, we see that \begin{eqnarray*}
&&\sum_{|\delta|\le k +1} |\Gamma^\delta \partial_t (\psi_1(x) S_0[\psi_2 \Xi](t,x))|
\lesssim \sum_{|\delta|\le k +1} |\Gamma^\delta \partial_t S_0[\psi_2\Xi](t,x)|\lesssim\\
&& \quad \lesssim \jb{t+|x|}^{-1/2}\jb{t-|x|}^{-1/2} {\mathcal A}_{2+\mu,k+ 2}[\vec{u}_0]+ \\
&& \qquad +w_{1-\eta'}(t,x) \log (e+t+|x|)
\sum_{|\delta|\le k+ 2}\| |y|^{1/2}W_{1,1}(s,y) \Gamma^\delta f(s,y)\|_{L^\infty_tL^\infty_{\Omega}}. \end{eqnarray*} Therefore, $\partial_t\bigl(\psi_1S_0[\psi_2\Xi] \bigr)$ has the desired bound because $w_{1-\eta'}\le w_{1-\eta}$.
Combining this estimate with \eqref{KataL01}, we obtain the estimate for $S_1[\Xi]$. Indeed, for $0<\eta<1$ we have \begin{eqnarray*} \langle t \rangle^{1-\eta'}
\sum_{|\delta|\le k +1}|\Gamma^\delta \partial_t S_1[ \Xi ](t,x)|\lesssim
\log(e+t) \sum_{|\beta|\le k+ 2}
\bigl\|\langle s\rangle^{1-\eta'} \partial^\beta \partial_t \bigl([\psi_1, -\Delta]S_0[\psi_2\Xi]
\bigr)(s,y)\bigr\|_{L^\infty_tL^\infty({\Omega_2})}. \end{eqnarray*} Recalling \eqref{eq.compsi}, we can use the estimate of $\partial_t (\psi_1 S_0[\psi_2 \Xi])$ with two additional derivatives. In conclusion, we have \begin{eqnarray*} && \langle t \rangle^{1-\eta'}
\sum_{|\delta|\le k +1} |\Gamma^\delta \partial_t S_1[\Xi](t,x)|\lesssim \Theta_{\mu, k+4}(t) \end{eqnarray*} for $(t,x)\in [0,T)\times {\overline{\Omega}}$,
where $$
\Theta_{\mu, m}(t):=
{\mathcal A}_{2+\mu, m}[\vec{u}_0]+ \log^2(e+t)
\sum_{|\delta|\le m}\| |y|^{1/2}W_{1,1}(s,y) \Gamma^\delta f(s,y)\|_{L^\infty_tL^\infty_{\Omega}}. $$ Since we have $(1-\psi_2)\Xi\in X_{3}(T)$ for any $\Xi\in X(T)$, by using \eqref{KataL01} with $\rho=1-\eta'$ we have $$
\jb{t}^{1-\eta'}\sum_{|\delta|\le k+1} |\Gamma^\delta \partial_t S_3[\Xi](t,x)|\lesssim \Theta_{\mu, k+3}(t). $$
In order to treat $S_2[\Xi]$ and $S_4[\Xi]$, we set $g_j[\Xi]=(\partial_t^2-\Delta) S_j[\Xi]$ for $j=2, 4$ as before. Proceeding along the same lines as for the estimates of $S_1[\Xi]$ and $S_3[\Xi]$, with one more derivative, we arrive at \begin{eqnarray*}
\langle t\rangle^{1-\eta'} \sum_{|\delta|\le k +1}|\Gamma^\delta \partial_t g_2[\Xi]|+\langle t\rangle^{1-\eta'}
\sum_{|\delta|\le k +1}|\Gamma^\delta \partial_t g_4[\Xi]|\lesssim \Theta_{\mu, k+5}(t). \end{eqnarray*} Let us recall that $g_2$ and $g_4$ are supported on $\overline{B_4\setminus B_2}$ and $ \partial_t S_i[\Xi]= L_0[{\partial_t}g_i]$ for $i=2,4$. We are in a position to apply \eqref{KataL03} (with $\rho=1-\eta'$, and $\eta$ replaced by $\eta'$) and obtain $$
w_{1-\eta}^{-1}(t,x)\sum_{|\delta|\le k} \sum_{i=2, 4} |\Gamma^\delta \partial \partial_t S_i[\Xi](t,x)|
\lesssim \sum_{i=2,4}\sum_{|\beta|\le k+1}\|\jb{s}^{1-\eta'}\partial^\beta\partial_t g_i(s,y)\|_{L^\infty_tL^\infty(\Omega_4)} \lesssim \Theta_{\mu, k+5}(t). $$
The proof of Theorem~\ref{main} is complete. \end{proof}
\begin{rem}\label{Rem83} \normalfont The main difference between the Dirichlet and the Neumann boundary cases is in the logarithmic loss in the local energy decay estimate \eqref{eq.LE}. Due to this term, comparing our result with the one in \cite{KP}, we see that the estimates for $S_2[\Xi]$ and $S_4[\Xi]$ are worse in the Neumann case. \end{rem}
\renewcommand{\theequation}{A.\arabic{equation}}
\setcounter{equation}{0} \renewcommand{\thelemma}{A.\arabic{lemma}}
\setcounter{lemma}{0} \renewcommand{\thetheorem}{A.\arabic{theorem}}
\setcounter{theorem}{0}
\section*{Appendix: A local existence theorem for smooth solutions} Here we sketch a proof of the following local existence theorem in the semilinear case (for the general case, see \cite{SN89}). We emphasize that the convexity assumption on the obstacle is not needed for the local existence result.
\begin{theorem}\label{LE} Let ${\mathcal O}$ be a bounded obstacle with $\mathcal C^\infty$ boundary and $\Omega=\mathbb R^2\setminus \mathcal O$. For any $\phi$, $\psi\in {\mathcal C}^\infty_0(\overline{\Omega})$ satisfying the compatibility condition of infinite order and \begin{equation} \label{Ori-init}
\|\phi\|_{H^{5}(\Omega)}+\|\psi\|_{H^{4}(\Omega)}\le R, \end{equation} there exists a positive constant $T=T(R)$ such that the mixed problem \eqref{eq.PMN} admits a unique solution $u\in C^\infty\bigl([0,T)\times \overline{\Omega}\bigr)$. Here $T$ is a constant depending only on $R$. \end{theorem}
For nonnegative integer $s$, we put $$ Y^s_T:=\bigcap_{j=0}^{s} {\mathcal C}^{j}\bigl([0,T]; H^{s-j}(\Omega)\bigr), $$ and $$
\|h\|_{Y^s_T}:=\sum_{j=0}^s \sup_{t\in [0,T]} \|\partial_t^j h(t,\cdot)\|_{H^{s-j}(\Omega)}. $$ Let $v_j$ for $j\ge 0$ be given as in Definition~\ref{CCN}. First we show the following result.
\begin{lemma}\label{LEw} Let $m\ge 2$. Suppose that $(\phi$, $\psi)\in H^{m+2}(\Omega)\times H^{m+1}(\Omega)$ satisfies the compatibility condition of order $m+1$, that is to say,
$\left.\partial_\nu v_j\right|_{\partial\Omega}=0$ for $j\in \{0,1,\ldots, m+1\}$, and \begin{equation}
\|\phi\|_{H^{m+2}(\Omega)}+\|\psi\|_{H^{m+1}(\Omega)}\le M. \label{init} \end{equation} Then\footnote{The assumption on the initial data here is made for simplicity; in fact, the same result can be proved for initial data satisfying the compatibility condition of order $m$.}, there exists a positive constant $T=T(m, M)$ such that the mixed problem \eqref{eq.PMN} admits a unique solution $u\in Y_{T}^{m+2}$. Here $T$ is a constant depending only on $m$ and $M$. \end{lemma} \begin{proof} To begin with, we note that the Sobolev embedding theorem implies \begin{equation} \label{Sob01}
\sum_{|\beta|\le [(m+1)/2]+1} \|\partial^\beta h(t,\cdot)\|_{L^\infty_\Omega}
\lesssim \sum_{|\beta|\le [(m+1)/2]+3}\|\partial^\beta h(t,\cdot)\|_{L^2_\Omega}
\le \sum_{|\beta|\le m+2}\|\partial^\beta h(t,\cdot)\|_{L^2_\Omega} \end{equation} for $m\ge 2$.
We show the existence of $u$ by constructing an approximate sequence $\bigl\{u^{(n)}\bigr\}\subset Y_T^{m+2}$, and proving its convergence for suitably small $T>0$. Throughout this proof, $C_M$ denotes a positive constant depending on $M$, but being independent of $T$. In order to keep the compatibility condition, we need to choose an appropriate function for the first step: for a moment, we suppose that we can choose a function $u^{(0)}\in Y_T^{m+2}$ satisfying $(\partial_t^j u^{(0)})(0,x)=v_j$ for all $j\in \{0,1,\ldots, m+2\}$. For $n\ge 1$ we inductively define $u^{(n)}$ as \begin{equation} u^{(n)}=S\bigl[\phi, \psi, G\bigl(\partial u^{(n-1)}\bigr) \bigr]. \end{equation} We have to check that $u^{(n)}$ is well defined. Let $v_0^{(n)}:=\phi$, $v_1^{(n)}:=\psi$, and $v_j^{(n)}:=\Delta v_{j-2}^{(n)}
+\partial_t^{j-2} G(\partial u^{(n-1)})\bigr|_{t=0}$ for $j\ge 2$. Suppose that $u^{(n-1)}\in Y_T^{m+2}$ with $(\partial_t^j u^{(n-1)})(0)=v_j$ for $0\le j\le m+2$. Then we can see that $v_j^{(n)}=v_j$ for $0\le j\le m+2$, and consequently the compatibility condition of order $m+1$ is satisfied for the equation of $u^{(n)}$. Since \eqref{Sob01} implies $G(\partial u^{(n-1)})\in Y_T^{m+1}$, the linear theory (see \cite{I68}) shows that $u^{(n)}\in Y_T^{m+2}$. Therefore, by induction with respect to $n$, we see that $\{u^{(n)}\}\subset Y_T^{m+2}$ is well defined, and that $(\partial_t^j u^{(n)})(0)=v_j^{(n)}=v_j$ for $0\le j\le m+2$ and $n\ge 0$.
Now we are going to explain how to construct $u^{(0)}$. We can show that $v_j\in H^{m+2-j}(\Omega)$ for $0\le j\le m+2$ by its definition and \eqref{Sob01}. By the well-known extension theorem, there is $V_j\in H^{m+2-j}(\mathbb R^2)$
such that $\left. V_j\right|_\Omega=v_j$ and
$\|V_j\|_{H^{m+2-j}(\mathbb R^2)}\lesssim \|v_j\|_{H^{m+2-j}(\Omega)}$. Let $(a_{kl})_{0\le k, l\le m+2}$ be the inverse matrix of $(i^k(l+1)^k)_{0\le k, l\le m+2}$, where $i=\sqrt{-1}$. We put $$ \widehat{V} (t, \xi)=\sum_{k,l=0}^{m+2} \exp(i(k+1)\jb{\xi}t)a_{kl}\widehat{V_l}(\xi)\jb{\xi}^{-l}, $$ where $\widehat{V_l}$ is the Fourier transform of $V_l$.
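The matrix $(a_{kl})$ is chosen precisely so that $\widehat V$ matches all prescribed time derivatives at $t=0$. Indeed, differentiating under the sum,
\begin{equation*}
\partial_t^j \widehat V(0,\xi)
  =\sum_{l=0}^{m+2}\Bigl(\sum_{k=0}^{m+2} i^j(k+1)^j a_{kl}\Bigr)
   \jb{\xi}^{\,j-l}\,\widehat{V_l}(\xi)
  =\sum_{l=0}^{m+2}\delta_{jl}\,\jb{\xi}^{\,j-l}\,\widehat{V_l}(\xi)
  =\widehat{V_j}(\xi),\qquad 0\le j\le m+2,
\end{equation*}
since $(a_{kl})$ is the inverse of the matrix with entries $i^k(l+1)^k$. Hence $(\partial_t^j u^{(0)})(0,\cdot)=\left.V_j\right|_\Omega=v_j$ for $0\le j\le m+2$.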
We set $u^{(0)}(t)=\left. V(t) \right|_{\Omega}$ with the inverse Fourier transform $V(t)$ of $\widehat{V}(t)$. Now we can show that $u^{(0)}(t)$ has the desired property, and
$\|u^{(0)}\|_{Y_T^{m+2}}\le C_M$ (see \cite{SN89} where this kind of function is used to reduce the problem to the case of zero-data).
Now we are in a position to show that $u^{(n)}$ converges to a local solution of \eqref{eq.PMN} on $[0,T]$ with appropriately chosen $T$. For simplicity of description, we put $$
\Norm{h(t)}_k=\sum_{j=0}^{m+2-k}\|\partial_t^j h(t)\|_{H^k(\Omega)} $$
for $0\le k\le m+2$. Note that we have $\|h\|_{Y_T^{m+2}}\lesssim \sup_{t\in[0,T]}\sum_{k=0}^{m+2}\Norm{h(t)}_k$. We also set $G_n(t,x)=G\bigl(\partial u^{(n)}(t,x)\bigr)$ for $n\ge 0$. Combining the elementary inequality $$
\|h(t)\|_{L^2_\Omega}\le \|h(0)\|_{L^2_\Omega}+\int_0^t \|(\partial_t h)(\tau)\|_{L^2_\Omega}d\tau $$ with the standard energy inequality for $\partial_t^j u^{(n)}$ with $0\le j\le m+1$, we get $$
\Norm{u^{(n)}(t)}_0+\Norm{u^{(n)}(t)}_1\le (1+T)\left(C_M+C\sum_{j=0}^{m+1}\int_0^t \|(\partial_t^j G_{n-1})(\tau)\|_{L^2_\Omega}d\tau\right). $$ Writing $$ \Delta \partial^\beta u^{(n)}(t,x)=\partial_t^2 \partial^\beta u^{(n)}-(\partial^\beta G_{n-1})(0,x) {}-\int_0^t (\partial_t\partial^\beta G_{n-1})(\tau,x)d\tau $$ for a multi-index $\beta$ and using the elliptic estimate, given in Lemma \ref{elliptic2}, we have $$
\Norm{u^{(n)}(t)}_k\le C \left(\Norm{u^{(n)}(t)}_{k-2}+\Norm{u^{(n)}(t)}_{k-1}+C_M+\sum_{|\alpha|\le k-1}\int_0^t \|(\partial^\alpha G_{n-1})(\tau)\|_{L^2_\Omega}d\tau\right) $$ for $2\le k\le m+2$. By induction we get control of $\Norm{u^{(n)}(t)}_k$ for $0\le k\le m+2$, and obtain \begin{equation} \label{EnergyLocal}
\sum_{k=0}^{m+2} \Norm{u^{(n)}(t)}_k\le (1+T)\left(C_M+C\sum_{|\alpha|\le m+1}\int_0^t
\|(\partial^\alpha G_{n-1})(\tau)\|_{L^2_\Omega}d\tau\right). \end{equation} It follows from \eqref{Sob01} that \begin{equation} \label{EstNonlinearity01}
\sum_{|\alpha|\le m+1}
\|(\partial^\alpha G_{n-1})(\tau)\|_{L^2_\Omega}
\le C\|u^{(n-1)}\|_{Y^{m+2}_T}^3, \quad 0\le \tau\le T, \end{equation}
and \eqref{EnergyLocal} implies $\|u^{(n)}\|_{Y_T^{m+2}}\le (1+T)\left(C_M+CT\|u^{(n-1)}\|_{Y_T^{m+2}}^3\right)$
for $n\ge 1$. From this, if we take appropriate constants $N_M$ and $T_M$ which can be determined by $M$, we can show that $\|u^{(n)}\|_{Y_T^{m+2}}\le N_M$ for all $n\ge 0$, provided that $T\le T_M$. In the same manner, we can also show that there is some $T_M'(\le T_M)$ such that $$
\|u^{(n+1)}-u^{(n)}\|_{Y_T^{m+2}}\le \frac{1}{2}\|u^{(n)}-u^{(n-1)}\|_{Y_T^{m+2}} $$
for all $n\ge 1$, provided that $T\le T_M'$. Now we see that if $T\le T_M'$, then $\{u^{(n)}\}$ is a Cauchy sequence in $Y_T^{m+2}$, and there is $u\in Y_T^{m+2}$ such that $\lim_{n\to\infty}\|u^{(n)}-u\|_{Y_T^{m+2}}=0$. It is not difficult to see that this $u$ is the desired solution to \eqref{eq.PMN}.
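For definiteness, one admissible choice of the constants in the fixed-point argument above (assuming, as we may, $T\le 1$) is
\begin{equation*}
N_M:=4C_M,\qquad T_M:=\min\Bigl\{1,\ \frac{1}{4\,C\,N_M^{2}}\Bigr\}.
\end{equation*}
Then $\|u^{(n-1)}\|_{Y_T^{m+2}}\le N_M$ gives, for $T\le T_M$,
$\|u^{(n)}\|_{Y_T^{m+2}}\le 2C_M+2CT N_M^3\le \tfrac{N_M}{2}+\tfrac{N_M}{2}=N_M$, and the induction closes since $\|u^{(0)}\|_{Y_T^{m+2}}\le C_M\le N_M$.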
Uniqueness can be easily obtained by the energy inequality. \end{proof}
Theorem~\ref{LE} is a corollary of Lemma~\ref{LEw}.
\begin{proof}[ Proof of Theorem~\ref{LE}]
The assumption on the initial data guarantees that for each $m\ge 3$, there is a positive constant $M_m$ such that $\|\phi\|_{H^{m+2}(\Omega)}+\|\psi\|_{H^{m+1}(\Omega)}\le M_m$. Hence, by Lemma~\ref{LEw}, there is $T_m=T(m, M_m)>0$ such that \eqref{eq.PMN} admits a unique solution $u\in Y_{T_m}^{m+2}$. Note that we may take $T_3=T(3,R)$. We put \begin{equation}\label{eq.bi}
C_0:=\|u\|_{Y_{T_3}^{3+2}}. \end{equation} Our aim is to prove that \eqref{eq.PMN} admits a solution $u\in \bigcap_{m\ge 3}Y_{T_3}^{m+2}$. Then the Sobolev embedding theorem implies that $u\in {\mathcal C}^\infty\left([0,T_3]\times \overline{\Omega}\right)$, which is the desired result. For this purpose, we are going to prove the following {\it a priori} estimate: for each $m\ge 3$, if $u\in Y^{m+2}_T$ is a solution to \eqref{eq.PMN} with some $T\in (0, T_3]$, then there is a positive constant $C_m$, which is independent of $T$, such that \begin{equation} \label{LW_conclusion}
\|u(t)\|_{Y_T^{m+2}}\le C_m. \end{equation} Once we obtain this estimate, by applying Lemma~\ref{LEw} repeatedly, we can see that $u\in Y_{T_3}^{m+2}$ for each $m\ge 3$, which concludes the proof of Theorem~\ref{LE}.
Now we show \eqref{LW_conclusion} by induction. For $m=3$ \eqref{LW_conclusion} follows immediately from \eqref{eq.bi}. Suppose that we have \eqref{LW_conclusion} for some $m=l\ge 3$. If we put $$
\Norm{h(t)}_k=\sum_{j=0}^{l+3-k}\|\partial_t^j h(t)\|_{H^k(\Omega)}, $$ then, similarly to \eqref{EnergyLocal}, we obtain $$
\sum_{k=0}^{l+3} \Norm{u(t)}_k\le (1+T_3)\left(C+C\sum_{|\alpha|\le l+2}\int_0^t
\bigl\|\partial^\alpha \bigl(G\bigl(\partial u(\tau)\bigr)\bigr)\bigr\|_{L^2_\Omega}d\tau\right). $$ Since $[(m+1)/2]+3\le m+1$ for $m\ge 4$, we have \begin{equation} \label{Sob02}
\sum_{|\beta|\le [(m+1)/2]+1} \|\partial^\beta h(t,\cdot)\|_{L^\infty_\Omega}
\le C \sum_{|\beta|\le m+1}\|\partial^\beta h(t,\cdot)\|_{L^2_\Omega},\quad m\ge 4, \end{equation} in place of \eqref{Sob01}. Combining this estimate for $m=l+1$ with the inductive assumption, we get $$
\sum_{|\alpha|\le l+2}\bigl\|\partial^\alpha\bigl(G(\partial u(\tau))\bigr)\bigr\|_{L^2_\Omega}\le CC_l^2 \sum_{k=0}^{l+3}\Norm{u(\tau)}_k, $$ which yields $$ \sum_{k=0}^{l+3} \Norm{u(t)}_k\le (1+T_3)\left(C+CC_l^2\int_0^t \sum_{k=0}^{l+3} \Norm{u(\tau)}_kd\tau\right). $$ Now the Gronwall Lemma implies $\sum_{k=0}^{l+3} \Norm{u(t)}_k\le C(1+T_3)\exp\bigl(C C_l^2(1+T_3)T_3\bigr)=:C_{l+1}$
for $0\le t\le T(\le T_3)$, which implies $\|u\|_{Y_T^{l+3}}\le C_{l+1}$ for $0\le T\le T_3$. This completes the proof of \eqref{LW_conclusion}. \end{proof}
\begin{center} {\bf Acknowledgments} \end{center} The first author is partially supported by Grant-in-Aid for Scientific Research (C) (No. 23540241), JSPS. The second author is partially supported by Grant-in-Aid for Scientific Research (B) (No. 24340024), JSPS. The third author is partially supported by GNAMPA Projects 2010 and 2011, coordinated by Prof. P. D'Ancona.
\end{document}
\begin{document}
\title{Entanglement in bosonic systems}
\author{Stein Olav Skr\o vseth} \email{stein.skrovseth@ntnu.no} \affiliation{ Department of Physics, Norwegian University of Science and Technology, N-7491 Trondheim, Norway }
\date{9 September 2005}
\begin{abstract}
We present a technique to resolve a Gaussian density matrix and its
time evolution through known expectation values in position and
momentum. Further we find the full spectrum of this density matrix
and apply the technique to a chain of harmonic oscillators to find
agreement with conformal field theory in this domain. We also
observe that a nonconformal state has a divergent entanglement
entropy. \end{abstract}
\pacs{03.67.Mn, 11.10.-z,11.25.Hf}
\maketitle
\section{Introduction} Entanglement is today considered a fundamental resource in nature when it comes to quantum computation and information \cite{Nielsen&Chuang}, and measures of entanglement have become a major field of research. In particular, entanglement in condensed-matter systems and its critical behavior are well investigated \cite{Osterloh:2002,Vidal:2002rm,Wei04,SOS1,Verstraete2004}. In these terms, the entanglement entropy is an analytically well suited tool for investigating the properties of ground states in condensed-matter systems. In this paper, we focus on bosonic states with Gaussian wave functions, and the entanglement properties of the ground state of a simple harmonic chain, which belongs to this class \cite{Plenio2005}.
We consider the notion of entanglement entropy, that is, considering a quantum system denoted $\mathcal C$ in a pure state with wave function $\ket\psi$, we trace out some portion $\mathcal B$ to obtain the density matrix of the remaining space $\mathcal A$ as $\rho_{\mathcal A}=\trace_{\mathcal B}\ket\psi\bra\psi$. Then the entanglement of $\mathcal A$ with respect to $\mathcal B$ is well defined by the entropy (measured in ebits) of the reduced density matrix \cite{Nielsen&Chuang, Bennett:1995tk}; \begin{equation}
S_{\mathcal A}=-\trace\rho_{\mathcal A}\log_2\rho_{\mathcal A}.
\label{Sdef} \end{equation} Measuring entropy in units of ebits is customary, and we will use base-two logarithms throughout the paper. This procedure is well established and works for all cases where the entire strip $\mathcal C$ is in a pure state, though entanglement measures for mixed states are still incomplete. Most work has concentrated on entanglement in spin chains, but here we study a one-dimensional bosonic strip \cite{calabrese-2005-0504, Audenaert2002}.
At critical points in a parameter space we have scale and translational invariance, and thus expect the theory to be conformally invariant \cite{Francesco97}; one can use this fact to efficiently detect critical systems \cite{SOS1}. As was computed by Holzhey, Larsen, and Wilczek \cite{Holzhey94}, a conformally invariant system in 1+1 dimensions can be considered as a string of length $\Lambda$ of which we trace out some fraction $1-\sigma\in[0,1]$, and then the entanglement entropy of the remaining space with respect to the rest is \begin{equation}
S(\sigma)=\frac{c+\bar
c}6\log\left[\frac\Lambda{\pi\epsilon}\sin(\pi\sigma)\right].
\label{Holzhey} \end{equation} Here $c$ and $\bar c$ are the holomorphic and antiholomorphic central charges, respectively, and $\epsilon$ is a cutoff parameter that we will consider arbitrary. In the limit $\sigma\ll1$ the formula reduces to $S\sim\log\sigma\Lambda$, which has been a matter of keen interest \cite{Vidal:2002rm}. However, we will consider arbitrary $\sigma$; in particular, when $\Lambda$ is kept constant, (\ref{Holzhey}) provides a very specific signature of a conformally invariant system. Thus this formula presents two independent (as long as $\epsilon$ is considered arbitrary) signatures of a finite conformal system: first, the logarithmic divergence of the entropy as $\sigma$ is held constant while $\Lambda$ increases, and second, the characteristic $\log\sin$ signature when $\Lambda$ is constant and $\sigma$ varies.
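Both signatures of Eq. (\ref{Holzhey}) are easy to probe numerically. The sketch below (base-two logarithms to match the ebit convention above, and $\epsilon$ set to $1$ since it is arbitrary) checks the $\sigma\leftrightarrow1-\sigma$ symmetry and the logarithmic growth in $\Lambda$; the parameter values are illustrative only.

```python
import math

def holzhey_entropy(sigma, Lam, c=1.0, cbar=1.0, eps=1.0):
    """Eq. (Holzhey): entropy of a fraction sigma of a conformal string
    of length Lam, in ebits (base-2 logarithm)."""
    return (c + cbar) / 6.0 * math.log2(
        Lam / (math.pi * eps) * math.sin(math.pi * sigma))

# Signature 1: symmetry under sigma -> 1 - sigma (tracing out A or B).
s1 = holzhey_entropy(0.3, 100.0)
s2 = holzhey_entropy(0.7, 100.0)

# Signature 2: at fixed sigma, doubling Lam adds (c + cbar)/6 ebits.
growth = holzhey_entropy(0.5, 200.0) - holzhey_entropy(0.5, 100.0)
```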
Another, more trivial, measure of the entanglement of a reduced density matrix is the product state identification \begin{equation}
E_M=1-\trace\rho_{\mathcal A}^M,\qquad M\geq2, \end{equation} which is zero for a product state, and unity for a maximally entangled state. This measure is equivalent to the Rényi entropy \cite{Jin2004}, but it is not well suited for much more than singling out a product state, since with increasing $M$ any entangled state converges to unity in this measure, \[\lim_{M\to\infty}E_M=\left\{\begin{matrix}1\quad&\mbox{entangled state}\\ 0\quad&\mbox{product state.}\end{matrix}\right.\]
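As an illustration of this saturation, consider a hypothetical reduced state with geometric spectrum $\lambda_n=(1-\xi)\xi^n$ (the form that arises for single-mode Gaussian states); then $\trace\rho^M=(1-\xi)^M/(1-\xi^M)$ in closed form, and a few lines of Python show $E_M$ approaching its limiting values.

```python
def E_M(xi, M):
    """Product-state identification E_M = 1 - Tr rho^M for a state with
    geometric spectrum lambda_n = (1 - xi) xi^n (xi = 0 is a pure state)."""
    return 1.0 - (1.0 - xi) ** M / (1.0 - xi ** M)

pure = E_M(0.0, 5)                          # product state: E_M = 0 for all M
weak = [E_M(0.1, M) for M in (2, 10, 50)]   # weakly entangled: E_M -> 1
```

Even a weakly entangled state ($\xi=0.1$) is driven toward unity as $M$ grows, which is why $E_M$ discriminates poorly between entangled states.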
The density matrix contains all the information about a system in a mixed or pure state, and vital physical quantities of such a system are mostly determined by the eigenvalues of the density matrix. Therefore there is a great need to compute these efficiently. In particular, a bosonic system in one spatial dimension in its ground state can be modeled as a harmonic chain. The quantum correlations as measured by the entanglement are nonzero, as will be shown; thus the vacuum itself is entangled, which is a highly intriguing result. Our purpose in this paper is twofold: first, we wish to demonstrate how to compute this entanglement from the vacuum ground state, and second, we wish to see whether and how the conformal invariance identified by Eq. (\ref{Holzhey}) arises in this case. The free boson is known to have central charges $c=\bar c=1$, so the prefactor in $S(\sigma)$ reduces to simply $1/3$. It turns out that we indeed have a conformally invariant ground state, as identified by the two conformal signatures provided. As we will demonstrate in Sec. \ref{sec:application}, the entropy diverges in the massless limit, but it nevertheless seems conclusive that the theory is conformally invariant below a certain threshold mass.
In this paper we will in Sec. \ref{sec:timeev} compute how a Gaussian density matrix can be recovered from expectation values in position and momentum, and how this can be used to compute the time evolution of the density matrix. This can also be computed in terms of Weyl representation, though we apply a more explicit representation here. In Sec. \ref{sec:entmeasures} we compute the two entanglement measures written in this section from the Gaussian density matrix, and finally in Sec. \ref{sec:application} we apply the formulas to the ground state of a harmonic chain.
\section{Gaussian density matrices} \label{sec:timeev} \subsection{Recovery of the density matrix from expectation values} It is known that a Gaussian state in a quantum harmonic oscillator potential will evolve in such a way that the Gaussian shape is preserved at all times. However, the different parameters of the Gaussian state may also evolve in time so that the exact appearance of the density matrix may be difficult to predict. Consider a Gaussian density matrix of $N$ particles with positions $q_i$, \begin{align}
\rho(\vkt{q},\vkt{q}')&=
\sqrt{\frac{\det\left(\matr{A}'-\matr{C}'\right)}{\pi^N}}\exp\left[-d'_i\left(A'_{ij}-C'_{ij}\right)^{-1}d'_j\right]\nonumber\\
&\times\exp\Big[-\frac{1}{2}\left(q_iA_{ij}q_j+q'_iA^*_{ij}q'_j\right)+q_iC_{ij}q'_j+d_iq_i\nonumber\\
&+d^*_iq'_i\Big]
\label{DM} \end{align}
summing over repeated indices. Here $\matr{A}$ is a positive, symmetric $N\times N$ matrix, while $\matr{C}$ is a Hermitian $N\times N$ matrix, and $\vkt{d}$ is an $N$-dimensional vector. We use $\matr{A}'$, $\matr{C}'$, and $\vkt{d}'$ to denote the real parts of $\matr{A}$, $\matr{C}$, and $\vkt{d}$, respectively. We further denote the imaginary parts such that $\matr{A}=\matr{A}'+\mathrm{i}\matr{A}''$, etc. Positions and momenta are real valued. The matrix $\matr{A}'-\matr{C}'$ must be invertible and positive in order to have a normalizable density matrix. Also, in order for $\rho$ to be positive, that is, $\bra{\psi}\rho\ket{\psi}>0$ for any $\ket\psi$, one must have $\matr{A}'>0$. It follows that $\matr{C}$ must be positive as well. Next we define three matrices of variances in position and momentum, \begin{align}
\begin{split}
Q_{ij}&=\expt{q_iq_j}-\expt{q_i}\expt{q_j},\\
P_{ij}&=\expt{p_ip_j}-\expt{p_i}\expt{p_j},\\
S_{ij}&=\frac{1}{2}\expt{q_ip_j+p_jq_i}-\expt{q_i}\expt{p_j}.
\label{QPSdef}
\end{split} \end{align}
$\matr{Q}$ and $\matr{P}$ are symmetric, while all are real. Furthermore, since the system is translationally invariant we can assume $Q_{ij}=f(|i-j|)$ for some function $f$, and similarly for the other matrices. Thus $\matr{Q}$, $\matr{P}$, and $\matr{S}$ are Toeplitz matrices, potentially simplifying numerical computations. Toeplitz matrices are known to be central also in the study of quantum spin chains \cite{Jin2004}. These three matrices can be assumed known in a given model of a bosonic quantum system. We refer to the set of variables in the density matrix as $\Theta=\left\{\matr{A},\matr{C},\vkt{d}\right\}$, and the expectation value matrices as $\Xi=\left\{\matr{Q},\matr{P},\matr{S},\expt{\vkt{q}},\expt{\vkt{p}}\right\}$. A simple count shows that the two sets both have $2N^2+3N$ degrees of freedom, and thus the two sets may contain equal amounts of information. The expectation value matrices can be computed for a given density matrix through a straightforward calculation that computes expectation values of an operator $\hat{\mathcal O}$ as \[\expt{\hat{\mathcal O}}=\trace\hat{\mathcal O}\rho(\vkt q,\vkt q')=\iintd\vkt qd\vkt q'\delta(\vkt q-\vkt q')\hat{\mathcal
O}\rho(\vkt q,\vkt q').\] This means that the expectation value matrices $\Xi[\Theta]$ can be
computed in terms of the parameters in the density matrix, \begin{align}
\begin{split}
\matr{Q}&=\frac{1}{2}(\matr{A}'-\matr{C}')^{-1},\\
\matr{P}&=\matr{A}-(\matr{A}-\matr{C})\matr{Q}(\matr{A}-\matr{C})^\mathrm{T},\\
\matr{S}&=-\matr{Q}(\matr{A}''+\matr{C}''),\\
\expt{\vkt{q}}&=2\matr{Q}\vkt{d}',\\
\expt{\vkt{p}}&=-2(\matr{A}''-\matr{C}'')\matr{Q}\vkt{d}'+\vkt{d}''.
\label{ACtoQPS}
\end{split} \end{align} Note that since $\matr{A}'-\matr{C}'$ is invertible, $\matr{Q}$ is well defined and invertible. This set of matrix equations can then be inverted to yield $\Theta[\Xi]$, \begin{align}
\begin{split}
\matr{A}'&=\matr{P}+\frac{1}{4}\matr{Q}^{-1}-\matr{S}^\mathrm{T}\matr{Q}^{-1}\matr{S},\\
\matr{A}''&=-\frac{1}{2}\left(\matr{S}^\mathrm{T}\matr{Q}^{-1}+\matr{Q}^{-1}\matr{S}\right),\\
\matr{C}'&=\matr{P}-\frac{1}{4}\matr{Q}^{-1}-\matr{S}^\mathrm{T}\matr{Q}^{-1}\matr{S},\\
\matr{C}''&=\frac{1}{2}\left(\matr{S}^\mathrm{T}\matr{Q}^{-1}-\matr{Q}^{-1}\matr{S}\right),\\
\vkt{d}' &=\frac{1}{2}\matr{Q}^{-1}\expt{\vkt{q}},\\
\vkt{d}'' &=\left(\matr{A}''+\matr{C}''\right)\expt{\vkt{q}}+\expt{\vkt{p}}.
\label{QPStoAC}
\end{split} \end{align} We note that $\matr{S}=0$ implies that both $\matr{A}$ and $\matr{C}$ are real, and that in a state with $\expt{\vkt{q}}=\expt{\vkt{p}}=0$ we have $\vkt{d}=0$. This way it is simple to recover the density matrix given $\Xi$. A strategy for finding the nonlinear time evolution of the density matrix is sketched in Fig. \ref{fig:strategy}: the time evolution of $\Xi$ is much simpler to find than that of $\Theta$, and since the formulas above enable us to switch between the two sets easily, we can take the time evolution via $\Xi$ instead of acting directly on $\Theta$. \setlength{\unitlength}{2mm} \begin{figure}
\caption{Outline of our strategy to find the time evolution of $\Theta(t)$ by going through $\Xi(t)$.}
\label{fig:strategy}
\end{figure}
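The inversion between $\Theta$ and $\Xi$ can be checked numerically. The minimal sketch below round-trips Eqs. (\ref{ACtoQPS}) and (\ref{QPStoAC}) in the special case $\matr{A}''=\matr{C}''=0$ (hence $\matr{S}=0$), with arbitrarily chosen matrices satisfying the positivity requirements; it is a sanity check, not part of the derivation.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3

# Real symmetric A and C with A - C positive, so the state is normalizable
# (A'' = C'' = 0, hence S = 0, keeps the sketch short).
A = np.array([[2.0, 0.3, 0.0], [0.3, 2.0, 0.3], [0.0, 0.3, 2.0]])
C = 0.5 * np.eye(N)
d = rng.normal(size=N)            # real displacement vector d'

# Theta -> Xi, Eq. (ACtoQPS):
Q = 0.5 * np.linalg.inv(A - C)
P = A - (A - C) @ Q @ (A - C).T
q_mean = 2.0 * Q @ d

# Xi -> Theta, Eq. (QPStoAC); with S = 0 the formulas collapse to:
A_back = P + 0.25 * np.linalg.inv(Q)
C_back = P - 0.25 * np.linalg.inv(Q)
d_back = 0.5 * np.linalg.inv(Q) @ q_mean
```

The recovered `A_back`, `C_back`, and `d_back` reproduce the input parameters to machine precision.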
\subsection{Time evolution} The Gaussian shape of the density matrix is preserved under time evolution governed by the Lagrangian \begin{equation}
\mathcal L=\frac12\left(\dot q_n\dot
q_n-q_n\Omega_{nm}q_m+\xi_nq_n\right),
\label{KGLagrangian} \end{equation} with sum over repeated indices. The conjugate momentum is $p_n=\dot q_n$, while $\Omega$ is symmetric and $\xi$ is a real vector describing external forces. The equations of motion become $\ddot q_n+\Omega_{nm}q_m=\xi_n$, which implies coupled differential equations for the time evolution of the expectation value matrices \begin{align}
\begin{split}
\dot Q&=\mathcal T(S),\\
\dot P&=-\mathcal T(\Omega S),\\
\dot S&=P-\frac12Q\Omega,
\end{split} \end{align} where we have defined the symmetrizing operator $\mathcal T(A)=A+A^{\mathrm T}$. These relations specify the time evolution of the system, given some initial condition. Combining the first and third equations gives \[\ddot Q-2P+\frac12\{Q,\Omega\}=0,\] or, in the case of a diagonal $\Omega$, $\Omega_{ij}=\delta_{ij}\omega_i$, this reads \[\ddot Q_{ij}-2P_{ij}+\frac12(\omega_i+\omega_j)Q_{ij}=0.\]
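The system above is straightforward to integrate numerically. The sketch below propagates $(Q,P,S)$ with a standard RK4 step, using an arbitrary two-mode $\Omega$ and initial data (not taken from the text), and confirms that the symmetrized right-hand sides keep $Q$ and $P$ symmetric along the flow.

```python
import numpy as np

def rhs(Q, P, S, Omega):
    """Right-hand sides of the evolution equations for (Q, P, S)."""
    T = lambda M: M + M.T          # symmetrizing operator
    return T(S), -T(Omega @ S), P - 0.5 * Q @ Omega

def rk4_step(Q, P, S, Omega, h):
    k1 = rhs(Q, P, S, Omega)
    k2 = rhs(*(x + 0.5 * h * k for x, k in zip((Q, P, S), k1)), Omega)
    k3 = rhs(*(x + 0.5 * h * k for x, k in zip((Q, P, S), k2)), Omega)
    k4 = rhs(*(x + h * k for x, k in zip((Q, P, S), k3)), Omega)
    return tuple(x + h / 6.0 * (a + 2 * b + 2 * c + e)
                 for x, a, b, c, e in zip((Q, P, S), k1, k2, k3, k4))

# Arbitrary coupled two-mode example.
Omega = np.array([[2.0, -1.0], [-1.0, 2.0]])
Q, P, S = 0.5 * np.eye(2), 0.5 * np.eye(2), np.zeros((2, 2))
for _ in range(1000):               # integrate to t = 1 with h = 1e-3
    Q, P, S = rk4_step(Q, P, S, Omega, 1e-3)
```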
\section{Entanglement measures for Gaussian density matrices} \label{sec:entmeasures} The eigenvalues of Eq. (\ref{DM}) can be found explicitly as we will show in this section. To this end, consider what we will refer to as the single-particle density matrix \begin{align}
\begin{split}
\rho&^0(x,x';\eta,d)=e^{-d'^2/(1-\eta)}\sqrt{\frac{1-\eta}{\pi}}\,\\
&\times\exp\left[-\frac{1}{2}\left(x^2+x'^2\right)+\eta xx'+dx+d^*x'\right].
\label{simpleDM}
\end{split} \end{align} Here $d=d'+\mathrm{i} d''$ is a complex number, while $\eta$ is real and lies in the range $[0,1)$. The latter constraint is consistent with the aforementioned requirement that $\matr{A}'-\matr{C}'$ be positive. We will omit the two latter parameters in the argument list of $\rho^0$ when there is no risk of confusion. The eigenvalue equation of this density matrix is \begin{equation*}
\intd x'\rho^0(x,x')\Psi_n(x')=\lambda_n\Psi_n(x). \end{equation*} Also, the density matrix obeys \begin{equation}
\rho^0(x+x_0,x'+x_0)
=e^{\mathrm{i} d''x}\rho^0(x,x';\eta,0)e^{-\mathrm{i} d'' x'},
\label{rhod0} \end{equation} where $x_0=d'/(1-\eta)$. This means that the shifted eigenfunction $\tilde\Psi_n(x)=e^{\mathrm{i} d''(x-x_0)}\Psi_n(x-x_0)$ is also an eigenfunction of $\rho^0(x,x')$ with the same eigenvalue as $\Psi_n(x)$, and hence the eigenvalues are independent of the displacement $d$. Furthermore, Eq. (\ref{rhod0}) shows that all traces $\trace\rho^M$ are independent of $d$.
Now, to find the eigenvalues of the single-particle density matrix
when $d=0$, consider the Green's function
$\mathcal{G}(z,z';\tau)=\bra{z}e^{-\tau
H}\ket{z'}$, with $H$ the single harmonic oscillator Hamiltonian $H=\frac{p^2}{2}+\frac{1}{2}\omega^2 z^2$, and the states $\ket{z}$ the position eigenstates of the quantum harmonic oscillator. With $\Psi_n(z)=\braket{z}{\Psi_n}$, where $\ket{\Psi_n}$ are the energy eigenstates, the eigenvalue result for this kernel is \begin{equation*}
\int d z'\,\mathcal{G}(z,z';\tau)\Psi_n(z')=e^{-\omega\tau(n+1/2)}\Psi_n(z). \end{equation*} Furthermore, $\mathcal{G}$ must solve the initial value problem \begin{align*}
\begin{split}
\left(-\frac{\partial}{\partial\tau}-H\right)\mathcal{G}(z,z';\tau)=0,\\
\lim_{\tau\to 0}\mathcal{G}(z,z';\tau)=\delta(z-z'),
\end{split} \end{align*} which, through a scaling of variables, $x=z\sqrt{\omega\coth(\omega\tau)}$, has the solution \begin{align}
\begin{split}
\mathcal{G}(x,x';\tau)=&\sqrt{\frac{\omega}{2\pi\sinh(\omega\tau)}}\\
&\times\exp\left[-\frac{1}{2}\left(x^2+x'^2\right)+\frac{xx'}{\cosh(\omega\tau)}\right].
\label{fullG}
\end{split} \end{align} There also exists a complex solution, which we disregard as unphysical. The solution (\ref{fullG}) means that if we identify $\eta=1/\cosh(\omega\tau)$, then, after accounting for the relative normalization of $\rho^0$ and $\mathcal{G}$ and for the Jacobian of the scaling, $\rho^0$ has the infinite set of eigenvalues \begin{align}
\label{simpleEV}
\lambda_n = 2\sinh(\omega\tau/2)\,e^{-\omega\tau\left(n+1/2\right)}&=\lambda_0\,\xi^n\\
n&\in\left\{0,1,2,\dots\right\},\nonumber \end{align} where $\xi=e^{-\omega\tau}=\eta/(1+\sqrt{1-\eta^2})$ and $\lambda_0=2\sinh(\omega\tau/2)\,e^{-\omega\tau/2}=1-\xi$, in accordance with $\trace\rho^0=1$.
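The geometric structure of the spectrum can also be checked directly. In the Python sketch below (the value of $\eta$ is an arbitrary choice), the discretized kernel reproduces the ratio $\lambda_{n+1}/\lambda_n=\xi$, and, because $\rho^0$ as defined has unit trace, a leading eigenvalue equal to $1-\xi$:

```python
import numpy as np

eta = 0.5
xi = eta / (1.0 + np.sqrt(1.0 - eta ** 2))   # xi = e^{-omega tau}

# Discretize rho^0(x, x') for d = 0 on a uniform grid (Nystrom method);
# the kernel decays like a Gaussian, so a moderate grid is accurate.
x = np.linspace(-8.0, 8.0, 400)
dx = x[1] - x[0]
X, Xp = np.meshgrid(x, x, indexing="ij")
rho0 = np.sqrt((1.0 - eta) / np.pi) * np.exp(
    -0.5 * (X ** 2 + Xp ** 2) + eta * X * Xp)

# Spectrum of the integral operator f -> int rho0(x, x') f(x') dx'
lam = np.sort(np.linalg.eigvalsh(rho0 * dx))[::-1]
```

The sum of the computed eigenvalues reproduces the unit trace, and successive eigenvalues decay by the factor $\xi$.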
Hence we have identified the eigenvalues and eigenfunctions of our density matrix in the case $d=0$, which is sufficient to know the eigenvalues also in the $d\not=0$ case.
Now, let us again turn to our general density matrix (\ref{DM}), and show that it can be transformed into the simple form of Eq. (\ref{simpleDM}) \cite{Eisert2003}. In general, the matrix $\matr{A}''$ contributes nothing to traces of the density matrix, so $\matr{A}$ can be considered real. Take an orthogonal matrix $\matr{O}$ that diagonalizes $\matr{A}'$, i.e. $\matr{a}=\matr{O}^\mathrm{T}\matr{A}'\matr{O}$, and form the diagonal matrix $\sqrt{\matr{a}}$ whose diagonal entries are the square roots of the eigenvalues of $\matr{A}'$. Since $\matr{A}'$ is positive by assumption, this is well defined and real. Then we find the orthogonal matrix $\tilde{\matr{O}}$ that diagonalizes the matrix $\bar\eta$ defined as \begin{align}
\bar\eta&=\tilde{\matr{O}}^\mathrm{T}\sqrt{\matr{a}^{-1}}\matr{O}^\mathrm{T}\matr{C}\matr{O}\sqrt{\matr{a}^{-1}}\tilde{\matr{O}}. \end{align} Note that both $\matr{O}$ and $\tilde{\matr{O}}$ are real, since $\matr{C}$ is hermitian and $\matr{A}$ is symmetric. The coordinates and vectors $\vkt{d}$ transform as \begin{align*}
\begin{split}
\tilde{\vkt{q}}&=\tilde{\matr{O}}\sqrt{\matr{a}}\matr{O}^\mathrm{T}\vkt{q},\\
\tilde{\vkt{d}}&=\tilde{\matr{O}}\sqrt{\matr{a}^{-1}}\matr{O}^\mathrm{T}\vkt{d},
\end{split} \end{align*} which means that the density matrix (\ref{DM}) may be rewritten as \begin{align}
\tilde{\rho}(\vkt{q},\vkt{q}')
&=\prod_{i=1}^N\rho^0(q_i,q_i';\eta_i,\tilde{d}_i),
\label{rewrittenDM} \end{align} $\eta_i$ being the diagonal elements of $\bar\eta$ and $\tilde{\rho}(\vkt{q},\vkt{q}')=\rho(\tilde{\vkt{q}},\tilde{\vkt{q}}')$. In other words, the density matrix (\ref{DM}) may be expressed as a product of single-particle density matrices (\ref{simpleDM}). Thus each of the product terms in Eq. (\ref{rewrittenDM}) gives the infinite set of eigenvalues (\ref{simpleEV}), and we may write the eigenvalues for the total matrix as \begin{equation}
\lambda_{n_1,n_2,\dots,n_N}=\prod_{i=1}^N\lambda^{(i)}_{n_i}=\Lambda_0\prod_{i=1}^N\xi_i^{n_i}. \end{equation} Here we have defined $\Lambda_0=\prod_i\lambda^{(i)}_0$, where $\lambda_0\mapsto\lambda_0^{(i)}$, $\eta\mapsto \eta_i$, and $\xi\mapsto\xi_i$ from Eq. (\ref{simpleEV}), the index $i$ referring to particle number. $\Lambda_0$ can also be obtained from the normalization condition \[\trace{\rho}=\sum_{n_1=0}^\infty\cdots\sum_{n_N=0}^\infty\lambda_{n_1,\dots,n_N}=1,\] giving $\Lambda_0=\prod_{i=1}^N(1-\xi_i)$. This sets the stage for the calculation of entropy and traces over the density matrix.
\subsection*{Entanglement measures} Having obtained the density matrix eigenvalues, the entropy $S=-\trace\rho\log\rho$ is easily calculated, \begin{align}
S &=-\sum_{n_1=0}^\infty\cdots\sum_{n_N=0}^\infty\lambda_{n_1,\dots,n_N}\log\lambda_{n_1,\dots,n_N}\nonumber\\
&=-\sum_{i=1}^N\left[\log\left(1-\xi_i\right)+\frac{\xi_i\log\xi_i}{1-\xi_i}\right],
\label{S_sum} \end{align} a result that agrees exactly with that found by Srednicki \cite{Srednicki93}.
We can proceed to find exact formulas for the entanglement measures $E_M\equiv 1-\trace\rho^M$ where $M\geq2$, for which we find that \begin{equation}
E_M=1-\prod_{i=1}^N\frac{\left(1-\xi_i\right)^M}{1-\xi_i^M}.
\label{EM_xi} \end{equation}
Both entanglement measures are now straightforward to compute. Since $\matr{Q}$ is a Toeplitz matrix, the inversion involved can be performed efficiently with existing linear algebra packages, and the computation requires only two diagonalizations of real, symmetric $N\times N$ matrices, which is likewise numerically cheap. We will focus primarily on the entropy because of its analytical tractability.
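As an illustration, the following Python sketch (the function names are ours) evaluates \eqref{S_sum} and \eqref{EM_xi} for a given list of $\xi_i$ and cross-checks them against truncated brute-force sums over the product eigenvalues, written here in the unit-trace normalization $\lambda_{n_1,\dots,n_N}=\prod_i(1-\xi_i)\xi_i^{n_i}$ with indices $n_i\geq 0$:

```python
import numpy as np
from itertools import product

def entropy(xis):
    """Entropy S of Eq. (S_sum) from the per-mode ratios xi_i."""
    xis = np.asarray(xis, dtype=float)
    return float(-np.sum(np.log(1.0 - xis) + xis * np.log(xis) / (1.0 - xis)))

def E_M(xis, M):
    """Entanglement measure E_M = 1 - tr rho^M of Eq. (EM_xi)."""
    xis = np.asarray(xis, dtype=float)
    return float(1.0 - np.prod((1.0 - xis) ** M / (1.0 - xis ** M)))

# Brute-force cross-check over explicit eigenvalues, truncated at n_i < nmax;
# the truncation error is of order max(xi)^nmax and thus negligible here.
xis = [0.3, 0.15, 0.05]
nmax = 40
lams = np.array([np.prod([(1.0 - x) * x ** n for x, n in zip(xis, ns)])
                 for ns in product(range(nmax), repeat=len(xis))])
S_direct = float(-np.sum(lams * np.log(lams)))
E2_direct = float(1.0 - np.sum(lams ** 2))
```

The closed forms and the direct sums agree to within the truncation error of the brute-force enumeration.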
\section{Application} \label{sec:application} We now apply the above formalism to the quantum Klein-Gordon field $\phi_n$, defined by the Lagrangian (\ref{KGLagrangian}) with $\xi=0$ and $\Omega_{ij}=\delta_{ij}\kappa$ on $N$ lattice points with periodic boundary conditions \cite{Mandl&Shaw}. The lattice constant is denoted $a$, and the system size is thus $\Lambda=aN$. This field has the Fourier expansion in bosonic annihilation and creation operators $a_{k}$ and $a^\dag_{k}$, respectively, \begin{align}
\phi_n=\sum_k\sqrt{\frac{1}{2\Lambda\omega_k}}\left(a_ke^{-\mathrm{i}(kn-\omega_kt)}+a_k^\dag e^{\mathrm{i}(kn-\omega_kt)}\right) \end{align} and conjugate field \begin{equation}
\pi_n=-\mathrm{i}\sum_k\sqrt{\frac{\omega_k}{2\Lambda}}\left(a_ke^{-\mathrm{i}(kn-\omega_kt)}-a_k^\dag e^{\mathrm{i}(kn-\omega_kt)}\right). \end{equation} In this discrete field theory the dispersion relation is \[\omega_k^2=\frac4{a^2}\sin^2(k/2)+\kappa^2,\]
and the sum is over all allowed wave vectors $k$ in this space. We have the nonzero commutation rules (at equal time) \begin{align*}
[\phi_n,\pi_m]&=\mathrm{i}\,\delta_{nm},\\
[a_k,a_{k'}^\dag]&=\delta_{k,k'}, \end{align*} indicating the harmonic oscillator nature of the system, with positions $\phi_n$ and momenta $\pi_n$. When rescaling this theory it is imperative to keep $\Lambda$ constant in order to maintain any scaling invariance and hence any conformal invariance in the model. The $\Xi$-matrices of the ground state are the Toeplitz matrices \begin{align}
\begin{split}
Q_{mn}&=\expt{\phi_n\phi_m}=\frac{1}{2N}\sum_k\frac{1}{\omega_k}\,e^{\mathrm{i} k(m-n)},\label{Q_n}\\
P_{mn}&=\frac{1}{2N}\sum_k\omega_k\,e^{\mathrm{i} k(m-n)},\\
S_{mn}&=0,
\end{split} \end{align} with the sums running over the wave numbers $k=2\pi j/N$ for integers $j=0,\dots,N-1$. $\matr{Q}$ and $\matr{P}$ are thus essentially finite Fourier transforms of $\omega_k^{\pm1}$. Now, since $\matr{S}$ vanishes, we can conclude that both $\matr{A}$ and $\matr C$ are real. As shown above, this is all we need to recover the density matrix and to calculate the traces.
In the massless limit, $\kappa\to0$, $Q_{mn}$ diverges due to the $k=0$ (zero mode) term in the sum (\ref{Q_n}). We can exclude this zero mode by instead summing over the wave numbers $k=\pi(2j+\alpha)/N$, with integer $j$, choosing $\alpha=0~(1)$ to compute with (without) the zero mode. This corresponds to using (anti)periodic boundary conditions on the field $\phi$.
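The numerical recipe of this section can be condensed as follows (a Python sketch; the helper names are ours). Since $\matr S=0$, we take the standard equivalent route through the symplectic eigenvalues $\nu_i$ of the reduced state, $\nu_i^2=\mathrm{spec}(\matr Q_A\matr P_A)$ with $\xi_i=(\nu_i-\tfrac12)/(\nu_i+\tfrac12)$; the entropy sum below is \eqref{S_sum} rewritten in terms of the $\nu_i$:

```python
import numpy as np

def correlations(N, kappa, a=1.0, alpha=0):
    """Ground-state Toeplitz correlators Q_mn, P_mn of Eq. (Q_n);
    alpha = 1 uses antiperiodic momenta, i.e. omits the zero mode."""
    j = np.arange(N)
    k = np.pi * (2 * j + alpha) / N
    omega = np.sqrt((4.0 / a ** 2) * np.sin(k / 2.0) ** 2 + kappa ** 2)
    diff = np.subtract.outer(np.arange(N), np.arange(N))        # m - n
    phase = np.cos(k[:, None, None] * diff[None, :, :])
    Q = np.sum(phase / omega[:, None, None], axis=0) / (2.0 * N)
    P = np.sum(phase * omega[:, None, None], axis=0) / (2.0 * N)
    return Q, P

def entropy(Q, P, inside):
    """Entanglement entropy of the sites listed in `inside` (S = 0 case)."""
    QA = Q[np.ix_(inside, inside)]
    PA = P[np.ix_(inside, inside)]
    nu = np.sqrt(np.abs(np.linalg.eigvals(QA @ PA).real))
    ap = nu + 0.5
    am = np.clip(nu - 0.5, 1e-15, None)   # guard the pure-mode limit nu = 1/2
    return float(np.sum(ap * np.log(ap) - am * np.log(am)))

Q, P = correlations(N=16, kappa=1.0)
S_half = entropy(Q, P, np.arange(8))      # half the chain: positive entropy
S_full = entropy(Q, P, np.arange(16))     # whole chain: pure state
```

For the full system $\matr Q\matr P=\tfrac14\matr I$ mode by mode, so all $\nu_i=\tfrac12$ and the entropy vanishes, as it must for a pure state; tracing out half the chain yields a strictly positive entropy, symmetric under interchange of the two halves.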
When finding the entropy we define some of the points as ``inside'' (region $\mathcal{A}$) and some as ``outside'' (region $\mathcal{B}$), and trace over the outside to find the geometric entropy of the inside region. Formally, this amounts to calculating the $\Xi$ matrices for the $\sigma N$ inside oscillators, $\sigma$ being the fraction of the entire system that $\mathcal A$ constitutes. When the $\Xi$s are known, we can compute the density matrix and the entropy for the state. Clearly, the entropy is symmetric with respect to interchange of $\mathcal A$ and $\mathcal B$, so any entanglement measure will be symmetric around $\sigma=1/2$. Hence we expect a maximal entanglement at this half size, and we can consider the half size entanglement with respect to $\kappa$ and $N$. Results for $E_2$ at half size are shown in Fig. \ref{fig:E2_kappa}. \begin{figure}
\caption{$E_2$ as function of $\kappa$ for three
different system sizes. The transition from an entangled state in
the massless limit to a product state in the massive limit is
clearly shown. Here, and in Fig. \ref{fig:S_kappa_a0}, $\kappa$ is in
units of inverse length $[\phi^{-1}]$.}
\label{fig:E2_kappa}
\end{figure} Hence we conclude that the massive $\kappa\to\infty$ system is nonentangled, while there is a transition to a maximally entangled state in the massless limit $\kappa\to0$. Also, the entanglement is larger for larger systems, and the transition occurs at a larger mass in a larger system: the correlations are greater in a large system, and the mass $\kappa$ must be larger to suppress them.
\begin{figure}\caption{Half size entropy $S$ as a function of $\kappa$, with ($\alpha=0$) and without ($\alpha=1$) the zero mode.}\label{fig:S_kappa_a0}
\end{figure} We now consider the entanglement entropy. The half size measure for both $\alpha=0$ and $\alpha=1$ is shown in Fig. \ref{fig:S_kappa_a0}. We see that when the zero mode is included, the entropy diverges in the massless limit, while without the zero mode it converges to some system-dependent value. In Fig. \ref{fig:S_N} we see how the half size entropy diverges logarithmically with system size for small $\kappa$. Most remarkably, this divergence occurs regardless of the zero mode, and the correct scaling factor $c/3$ is reproduced. \begin{figure}
\caption{$S_\mathrm{max}$ with $N$ for
various $\kappa$. The straight lines shown all rise with slope
$c/3=1/3$. The bisected points are the $\alpha=0$ points, from
top to bottom, $\kappa=10^{-4}$, $\kappa=10^{-2}$, $\kappa=1$, and $\kappa=10^2$.
The circles are for $\alpha=1$ and $\kappa=0$.}
\label{fig:S_N}
\end{figure}
Finally, we look at the second feature of the conformal signature, namely the $\log\sin$ shape of a finite system. For the $\alpha=0$ case this is shown in Fig. \ref{fig:S_s_a0}, and we again see a good characteristic of the conformal system in a massless system. For the massive system, the entanglement saturates, and fits the signature only at small $\sigma$, which is also observed earlier in noncritical quantum spin chains \cite{Vidal:2002rm}. \begin{figure}
\caption{$S(\sigma)$ for some values of
$\kappa$ with $\alpha=0$. Here, $N=100$($+$) and $N=10$($\bigcirc$). From top to
bottom, $\kappa=10^{-4}$, $\kappa=10^{-2}$, $\kappa=1$, and
$\kappa=10^2$. The full lines are
$f(\sigma)=\frac{1}{3}\log_2(\sin\pi\sigma)+a$, with
$a$ chosen to
fit the lines at the ends. For large $\kappa$, the entanglement
saturates and does not obey the conformal signature. In the massless
limit the conformal signature is obeyed, even at $N=10$.}
\label{fig:S_s_a0}
\end{figure} When the zero mode is omitted, however, the $\log\sin$ signature is not present, as the system is identical to the $\alpha=0$ case for some massive $\kappa$, a state that is not conformally invariant. It is nevertheless notable that the logarithmic divergence with system size still fits the conformal prediction even though the fixed-size signature does not and the state is not conformally invariant.
\begin{figure}
\caption{The largest entropy contributions in the
sum $S=\sum_ns_n$, with ordered terms. The graph shows the terms for
system size $N=100$, $\alpha=0$ ($+$), and $\alpha=1$ ($\times$)
and also system size $N=25$, $\alpha=0$ ($*$), and $\alpha=1$
($\Box$). All values are for an essentially massless theory with $\kappa=10^{-3}$. }
\label{fig:eigvals}
\end{figure} It is valuable to note that the conformal signature in the entanglement entropy is present even in small systems, such as $N=10$. This has been seen in quantum spin chains earlier, there enabling efficient identification of criticality \cite{SOS1}.
As a final feature, we investigate which modes, or terms, contribute to the entanglement entropy in the expansion $S=\sum_ns_n$ where the terms are defined by the sum (\ref{S_sum}). The individual terms are shown in Fig. \ref{fig:eigvals}, where we see that for the conformally invariant case, the eigenvalues fall off faster than exponentially, which indicates that only very few terms in the sum are needed to compute the entropy. Indeed, to compute the entropy to within an error $\pm10^{-6}$ one needs only eight out of a possible 50 terms in the expansion. Also, the figure shows that for the nonconformal state without the zero mode, the eigenvalues are paired: in a translationally invariant but not scale invariant state, any mode with nonzero momentum is degenerate with the mode of opposite momentum.
\section{Conclusion} We have shown how to compute the entanglement entropy of a chain of bosonic harmonic oscillators, using scalings and rotations of the density matrix to put it in a single-particle form. The results show what we call conformal signatures in the massless limit. Moreover, the results show that even a nonconformal state can show a logarithmic divergence as predicted by conformal field theory, but not the $\log\sin$ signature that the author believes to be a uniquely conformal feature.
\section{Acknowledgments} The author wants to thank Professor K\aa re Olaussen for initiating the research and providing invaluable guidance. The Department of Physics at the University of Troms\o, is thanked for their hospitality.
\end{document}
\begin{document}
\title{Kronecker limit functions and an extension of the Rohrlich-Jensen formula} \author{James Cogdell \and Jay Jorgenson \footnote{The second named author acknowledges grant support from several PSC-CUNY Awards, which are jointly funded by the Professional Staff Congress and The City University of New York.}\and Lejla Smajlovi\'{c}} \maketitle
\begin{abstract}\noindent In \cite{Ro84} Rohrlich proved a modular analogue of Jensen's formula. Under certain conditions, the Rohrlich-Jensen formula expresses an integral of the log-norm $\log \Vert f \Vert$ of a $\text{\rm PSL}(2,\ZZ)$ modular form $f$ in terms of the Dedekind Delta function evaluated at the divisor of $f$. In \cite{BK19} the authors re-interpreted the Rohrlich-Jensen formula as evaluating a regularized inner product of $\log \Vert f \Vert$ and extended the result to compute a regularized inner product of $\log \Vert f \Vert$ with what amounts to powers of the Hauptmoduli of $\text{\rm PSL}(2,\ZZ)$. In the present article, we revisit the Rohrlich-Jensen formula and prove that it can be viewed as a regularized inner product of special values of two Poincar\'e series, one of which is the Niebur-Poincar\'e series and the other is the resolvent kernel of the Laplacian. The regularized inner product can be seen as a type of Maass-Selberg relation. In this form, we develop a Rohrlich-Jensen formula associated to any Fuchsian group $\Gamma$ of the first kind with one cusp by employing a type of Kronecker limit formula associated to the resolvent kernel. We present two examples of our main result: First, when $\Gamma$ is the full modular group $\text{\rm PSL}(2,\ZZ)$, thus reproving the theorems from \cite{BK19}; and second when $\Gamma$ is an Atkin-Lehner group $\Gamma_{0}(N)^+$, where explicit computations are given for certain genus zero, one and two levels. \end{abstract}
\vskip .15in \section{Introduction and statement of results}
\subsection{The Poisson-Jensen formula}
Let $D_{R} = \{z =x+iy\in {\mathbb C}: \vert z \vert < R\}$ be the disc of radius $R$ centered at the origin in the complex plane $\CC$. Let $F$ be a non-constant meromorphic function on the closure
$\overline{D_{R}}$ of $D_{R}$. Denote by $c_{F}$ the leading non-zero
coefficient of $F$ at zero, meaning that
for some integer $m$ we have that
$F(z) = c_{F}z^{m} + O(z^{m+1})$ as $z$ approaches zero. For any $a \in D_{R}$, let $n_{F}(a)$ denote
the order of $F$ at $a$; there are a finite number of points $a$ for which $n_{F}(a) \neq 0$.
With this, Jensen's formula, as stated on page 341 of \cite{La99}, asserts that \begin{equation}\label{Jensen} \frac{1}{2\pi}\int\limits_{0}^{2\pi}\log\vert F(Re^{i\theta})\vert d\theta + \sum\limits_{a \in D_{R}} n_{F}(a) \log (\vert a \vert/R) + n_{F}(0) \log (1/R) = \log \vert c_{F}\vert. \end{equation} One can consider the action of a M\"obius transformation which preserves $D_{R}$ and seek to determine the resulting expression from \eqref{Jensen}. Such a consideration leads to the Poisson-Jensen formula, and we refer the reader to page 161 of \cite{La87} for a statement and proof.
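Formula \eqref{Jensen} is easy to verify numerically. The Python sketch below (the radius and zeros are arbitrary illustrative choices) takes $F(z)=(z-a)(z-b)$ with no zero at the origin, so that $m=0$, $c_F=F(0)=ab$, and the $n_F(0)$ term drops out:

```python
import numpy as np

R = 2.0
a, b = 0.5, -0.3 + 0.4j                      # both zeros lie inside D_R

theta = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
z = R * np.exp(1j * theta)
# (1/2pi) int_0^{2pi} log|F(R e^{i theta})| d theta via the trapezoidal rule,
# which for a smooth periodic integrand is just the mean of the samples.
avg = float(np.mean(np.log(np.abs((z - a) * (z - b)))))

lhs = avg + np.log(abs(a) / R) + np.log(abs(b) / R)
rhs = np.log(abs(a * b))                     # log |c_F|
```

The trapezoidal rule converges spectrally fast here, since the integrand is analytic in a neighborhood of the circle, so the two sides agree to near machine precision.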
On their own, the Jensen formula and the Poisson-Jensen formula paved the way toward Nevanlinna theory, which in its most elementary interpretation establishes subtle growth estimates for meromorphic functions; see Chapter VI of \cite{La99}. Going further, Nevanlinna theory provided motivation for Vojta's conjectures whose insight into arithmetic algebraic geometry is profound. In particular, page 34 of \cite{Vo87} contains a type of ``dictionary'' which translates between Nevanlinna theory and number theory, where Vojta asserts that Jensen's formula should be viewed as analogous to the Artin-Whaples product formula from class field theory.
\subsection{A modular generalization}
In \cite{Ro84} Rohrlich proved what he aptly called a modular version of Jensen's formula. We now shall describe Rohrlich's result.
Let $f$ be a meromorphic function on the upper half plane $\HH$ which is invariant with respect to the action of the full modular group $\mathrm{PSL}(2,\mathbb{Z})$. Set $\mathcal{F}$ to be the ``usual'' fundamental domain of the quotient $\mathrm{PSL}(2,\mathbb{Z})\backslash \HH$, and let $d\mu$ denote the area form of the hyperbolic metric. Assume that $f$ does not have a pole at the cusp $\infty$ of $\mathcal{F}$, and assume further that the Fourier expansion of $f$ at $\infty$ has its constant term equal to one. Let $P(w)$ be the Kronecker limit function associated to the parabolic Eisenstein series associated to $\mathrm{PSL}(2,\mathbb{Z})$; below we will write $P(w)$ in terms of the Dedekind Delta function, but for now we want to keep the concept of a Kronecker limit function in the conversation. With all this, the Rohrlich-Jensen formula is the statement that \begin{equation} \label{rohrl thm}
\frac{1}{2\pi}\int\limits_{\mathrm{PSL}(2,\mathbb{Z})\backslash \mathbb{H}} \log|f(z)|d\mu(z) + \sum_{w\in \mathcal{F}} \frac{\mathrm{ord}_w(f)}{\mathrm{ord}(w)}P(w) =0. \end{equation}
In this expression, $\mathrm{ord}_w(f)$ denotes the order of $f$ at $w$ as a meromorphic function, and $\mathrm{ord}(w)$ denotes the order of the stabilizer of $w$ under the action of $\mathrm{PSL}(2,\mathbb{Z})$ on $\HH$. As a means by which one can see beyond the above setting, one can view \eqref{rohrl thm} as evaluating the inner product $$
\langle 1,\log|f(z)| \rangle=\int\limits_{\mathrm{PSL}(2,\mathbb{Z})\backslash \mathbb{H}} 1\cdot \log|f(z)|d\mu(z) $$ within the Hilbert space of $L^{2}$ functions on $\mathrm{PSL}(2,\mathbb{Z})\backslash \HH$.
There are various directions in which \eqref{rohrl thm} has been extended. In \cite{Ro84}, Rohrlich described the analogue of \eqref{rohrl thm} for general Fuchsian groups of the first kind and for meromorphic modular forms $f$ of non-zero weight; see page 19 of \cite{Ro84}. In \cite{HIvPT19} the authors studied the quotient of hyperbolic three space when acted upon by the discrete group $\mathrm{PSL}(2,\mathcal{O}_K)$ where $\mathcal{O}_K$ denotes the ring of integers of an imaginary quadratic field $K$. In that setting, the function $\log \vert f \vert$ is replaced by a function which is harmonic at all but a finite number of points and at those points the function has prescribed singularities. As in \cite{Ro84}, the analogue of \eqref{rohrl thm} involves a function $P$ which is constructed from a type of Kronecker limit formula.
In \cite{BK19} the authors returned to the setting of $\mathrm{PSL}(2,\mathbb{Z})$ acting on $\HH$. Let $q_{z}=e^{2\pi i z}$ be the standard local coordinate near $\infty$ of $\mathrm{PSL}(2,\mathbb{Z})\backslash \HH$. The Hauptmodul $j(z)$ is the unique $\mathrm{PSL}(2,\mathbb{Z})$ invariant holomorphic function on $\HH$ whose expansion near
$\infty$ is $j(z) = q_{z}^{-1} + o(q_{z}^{-1})$ as $z$ approaches $\infty$. Let $T_{n}$ denote the $n$-th Hecke operator and set $j_{n}(z) = j|T_n (z)$. The main results of \cite{BK19} are the derivation of formulas for the regularized scalar product $\langle j_n(z),\log (({\mathrm{Im}} (z))^k|f(z)|) \rangle$ where $f$ is a weight $2k$ meromorphic modular form with respect to $\mathrm{PSL}(2,\mathbb{Z})$. Below we will discuss further the formulas from \cite{BK19} and describe the way in which their results are natural extensions of \eqref{rohrl thm}.
\subsection{Revisiting Rohrlich's theorem}
The purpose of this article is to extend the point of view that the Rohrlich-Jensen formula is the evaluation of a particular type of inner product. To do so, we shall revisit the role of each of the two terms
$j|T_n (z)$ and $\log (({\mathrm{Im}} (z))^k|f(z)|)$.
The function $j|T_n (z)$ can be characterized as the unique holomorphic function which is $\mathrm{PSL}(2,\mathbb{Z})$ invariant on $\HH$ and whose expansion near $\infty$ is $q_{z}^{-n} + o(q_{z}^{-1})$. These properties hold for the special value $s=1$ of the Niebur-Poincar\'e series $F_{-n}^{\Gamma}(z,s)$, which is defined in \cite{Ni73} for any Fuchsian group $\Gamma$ of the first kind with one cusp and discussed in section \ref{sect_NP-series} below. As proved in \cite{Ni73}, for any $m\in\mathbb{N}$, the Niebur-Poincar\'e series $F_{m}^{\Gamma}(z,s)$ is an eigenfunction of the hyperbolic Laplacian $\Delta_{\hyp}$; specifically, we have that $$ \Delta_{\hyp} F_{m}^{\Gamma}(z,s) = s(s-1)F_{m}^{\Gamma}(z,s). $$ Also, $F_{m}^{\Gamma}(z,s)$ is orthogonal to constant functions.
Furthermore, if $\Gamma = \mathrm{PSL}(2,\mathbb{Z})$, then for any positive integer $n$ there is an explicitly computable constant $c_{n}$ such that \begin{equation}\label{j_via_F} F_{-n}^{\mathrm{PSL}(2,\mathbb{Z})}(z,1) = \frac{1}{2\pi\sqrt{n}}j_{n}(z) + c_{n}. \end{equation} As a result, the Rohrlich-Jensen formula proved in \cite{BK19}, when combined with Rohrlich's formula from \cite{Ro84}, reduces to computing the regularized inner product of
$F_{-n}^{\mathrm{PSL}(2,\mathbb{Z})}(z,1)$ with $\log (({\mathrm{Im}} (z))^k|f(z)|)$.
As for the term $\log (({\mathrm{Im}} (z))^k|f(z)|)$, we begin by recalling Proposition 12 from \cite{JvPS19}. Let $2k\geq 4$ be any even positive integer, and let $f$ be a weight $2k$ meromorphic form which is $\Gamma$ invariant and with $q$-expansion at $\infty$ that is normalized so its constant term is equal to one. Set $\Vert f\Vert(z) = y^{k}\vert f(z) \vert$, where $z=x+iy$. Let $\mathcal{E}^{\mathrm{ell}}_{\Gamma,w}(z,s)$ be the elliptic Eisenstein series associated to the aforementioned data; a summary of the relevant properties of $\mathcal{E}^{\mathrm{ell}}_{\Gamma,w}(z,s)$ is given in section \ref{ell_Eisen_series} below. Then, in \cite{JvPS19} it is proved that one has the asymptotic relation \begin{equation} \label{ell kroneck limit one cusp} \sum_{w\in \mathcal{F}_\Gamma} \mathrm{ord}_w(f) \mathcal{E}^{\mathrm{ell}}_{\Gamma,w}(z,s)=
-s\log\left( |f(z)| |\eta_{\Gamma,\infty}^4(z)|^{-k}\right) + O(s^2) \,\,\,\,\, \textrm{\rm as $s \to 0$} \end{equation} where $\mathcal{F}_\Gamma$ is the fundamental domain for the action of $\Gamma$ on $\mathbb{H}$ and $\eta_{\Gamma,\infty}(z)$ is the analogue of the classical eta function for the modular group, see the Kronecker limit formula \eqref{KronLimitPArGen} for the parabolic Eisenstein series. With this, formula \eqref{ell kroneck limit one cusp} can be written as \begin{equation} \label{ell kroneck limit one cusp2} \log\left( \Vert f\Vert (z) \right) = kP_{\Gamma}(z) - \sum_{w\in \mathcal{F}_\Gamma} \mathrm{ord}_w(f) \lim_{s\to 0} \frac{1}{s} \mathcal{E}^{\mathrm{ell}}_{\Gamma,w}(z,s), \end{equation}
where $P_{\Gamma}(z)=\log(|\eta_{\Gamma,\infty}^4(z)|{\mathrm{Im}} (z))$ is the Kronecker limit function associated to the parabolic Eisenstein series $\mathcal{E}^{\mathrm{par}}_{\Gamma,\infty}(z,s)$; the precise normalizations and expressions defining $\mathcal{E}^{\mathrm{par}}_{\Gamma,\infty}(z,s)$ will be clarified below.
Following \cite{CJS20}, one can recast \eqref{ell kroneck limit one cusp2} in terms of the resolvent kernel, which we now shall undertake.
The resolvent kernel, also called the automorphic Green's function, $G_s^{\Gamma}(z,w)$ is the integral kernel which for almost all $s \in \CC$ inverts the operator $\Delta_{\hyp} + s(s-1)$. In other words, $$ \Delta_{\hyp} G_s^{\Gamma}(z,w) = s(1-s) G_s^{\Gamma}(z,w). $$ The resolvent kernel is closely related to the elliptic Eisenstein series; see \cite{vP16} as well as \cite{CJS20}. Specifically, from Corollary 7.4 of \cite{vP16}, after taking into account a sign difference in our normalization, we have that \begin{equation}\label{Ell Green connection} \mathrm{ord}(w) \mathcal{E}^{\mathrm{ell}}_{\Gamma,w}(z,s) = -\frac{2^{s+1}\sqrt{\pi} \Gamma(s+1/2)}{\Gamma(s)}G_s^{\Gamma}(z,w) +O(s^2) \,\,\,\,\, \textrm{\rm as $s \to 0$} \end{equation} for all $z,w \in\mathbb{H}$ with $z\neq \gamma w$ when $\gamma\in\Gamma$. It is now evident that one can express $\log\left( \Vert f \Vert(z) \right)$ as a type of Kronecker limit function. Indeed, upon using the functional equation for the Green's function, we will prove below the following result. Under certain general conditions the form $f$, as described above, can be realized through a type of factorization theorem, namely that \begin{align}\label{log norm basic} \log\left( \Vert f\Vert (z) \right)&=-2k + 2\pi \sum_{w\in \mathcal{F}_{\Gamma}} \frac{\mathrm{ord}_w(f)}{\mathrm{ord}(w)} \lim_{s\to 1}\left(G_s^{\Gamma}(z,w) + \mathcal{E}_{\Gamma,\infty}^{\mathrm{par}}(z,s)\right)\notag \\ &= 2\pi \sum_{w\in \mathcal{F}_\Gamma} \frac{\mathrm{ord}_w(f)}{\mathrm{ord}(w)} \left[ \lim_{s\to 1}\left(G_s^{\Gamma}(z,w) + \mathcal{E}_{\Gamma,\infty}^{\mathrm{par}}(z,s)\right) - \frac{2}{\mathrm{vol}_{\hyp}(\Gamma \backslash \HH)} \right]. \end{align}
With all this, it is evident that one can view the inner product realization of the Rohrlich-Jensen formula as a special value of the inner product of the Niebur-Poincar\'e series $F_{m}^\Gamma(z,s)$ and the resolvent kernel $G_{s}^{\Gamma}(z,w)$ plus the parabolic Eisenstein series $\mathcal{E}_{\Gamma,\infty}^{\mathrm{par}}(z,s)$. Furthermore, because all terms are eigenfunctions of the Laplacian, one can seek to compute the inner product in hand in a manner similar to that which yields the Maass-Selberg formula.
\subsection{Our main results}
Unless otherwise explicitly stated, we will assume for the remainder of this article that $\Gamma$ is any Fuchsian group of the first kind with one cusp. By conjugating $\Gamma$, if necessary, we may assume that the cusp is at $\infty$, with the cuspidal width equal to one. The group $\Gamma$ will be arbitrary, but fixed, throughout this article, so, for the sake of brevity, in the sequel, we will suppress the index $\Gamma$ in the notation for Eisenstein series, the Niebur-Poincar\'e series, the Kronecker limit function, the fundamental domain and the resolvent kernel. When $\Gamma$ is taken to be the modular group or the Atkin-Lehner group, that will be indicated in the notation.
With the above discussion, we have established that one manner in which the Rohrlich-Jensen formula can be understood is through the study of the regularized inner product \begin{equation}\label{RJ_integral} \langle F_{-n}(\cdot,1), \overline{\lim_{s\to 1} \left(G_s(\cdot,w) + \mathcal{E}_\infty^{\mathrm{par}}(\cdot,s)\right)} \rangle, \end{equation} which is defined as follows. Since $\Gamma$ has one cusp at $\infty$, one can construct a (Ford) fundamental domain $\mathcal{F}$ of the action of $\Gamma$ on $\HH$. Let $M = \Gamma \backslash \HH$. A cuspidal neighborhood $\mathcal{F}_\infty(Y)$ of $\infty$ is given by $0 < x \leq 1$ and $y \geq Y$, where $z=x+iy$, for some sufficiently large $Y \in \RR$. (We recall that we have normalized the cusp to be of width one.) Let $\mathcal{F}(Y) = \mathcal{F}\setminus \mathcal{F}_\infty(Y)$. Then, we define \eqref{RJ_integral} to be $$ \lim_{Y\to \infty} \int\limits_{\mathcal{F}(Y)}F_{-n}(z,1)\lim_{s\to 1} \left(G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(z,s)\right)d\mu_{\hyp}(z) $$ where $d\mu_{\hyp}(z)$ denotes the hyperbolic volume element. The function $G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(z,s)$ is unbounded as $z \to w$. However, the asymptotic growth of the function is logarithmic, hence integrable, so it is not necessary to regularize the integral in \eqref{RJ_integral} in a neighborhood containing $w$. The need to regularize the inner product \eqref{RJ_integral} stems solely from the exponential growth of the factor $F_{-n}(z,1)$ as $z\to\infty$.
Our first main result of this article is the following theorem.
\begin{theorem}\label{thm:main} For any positive integer $n$ and any point $w\in \mathcal{F}$ \begin{equation}\label{main f-la} \langle F_{-n}(\cdot,1), \overline{\lim_{s\to 1} \left(G_s(\cdot,w) + \mathcal{E}_\infty^{\mathrm{par}}(\cdot,s)\right)} \rangle
= - \frac{\partial}{\partial s} F_{-n}(w,s) \Big|_{s=1}.
\end{equation} \end{theorem}
We can combine Theorem \ref{thm:main} with the factorization theorem \eqref{log norm basic} and properties of $F_{-n}(z,1)$ proved in \cite{Ni73} and obtain the following extension of the Rohrlich-Jensen formula.
\begin{corollary}\label{Rohr-Jensen} In addition to the notation above, assume that the even weight $2k\geq 0$ meromorphic form $f$ has been normalized so its $q$-expansion at $\infty$ has constant term equal to $1$. Then we have that
\begin{equation}\label{main f-la - corollary}
\langle F_{-n}(\cdot ,1), \log\Vert f\Vert \rangle = - 2\pi \sum_{w\in \mathcal{F}} \frac{\mathrm{ord}_w(f)}{\mathrm{ord}(w)}\frac{\partial}{\partial s}\left. F_{-n}(w,s) \right|_{s=1}.
\end{equation} \end{corollary}
Let $g$ be a $\Gamma$ invariant analytic function which necessarily has a pole at $\infty$. As such, there is a positive integer $K$ and a set of complex numbers $\{a_{n}\}_{n=1}^{K}$ such that $$ g(z) = \sum_{n=1}^K a_n q_z^{-n} + O(1) \,\,\,\,\, \textrm{as $z \rightarrow \infty$.} $$ It is proved in \cite{Ni73} that \begin{equation}\label{g expr} g(z) = \sum_{n=1}^K 2\pi \sqrt{n}a_n F_{-n}(z,1) + c(g) \end{equation} for some constant $c(g)$ depending only upon $g$. With this, we can combine Corollary \ref{Rohr-Jensen} and the Theorem on page 19 of \cite{Ro84} to obtain the following result.
\begin{corollary} \label{cor:j-function} With notation as above, there is a constant $\beta$, defined by the Laurent expansion of $\mathcal{E}_\infty^{\mathrm{par}}(z,s)$ near $s=1$, such that \begin{multline}\label{main f-la - corollary 2} \langle g, \log\Vert f\Vert \rangle = - 2\pi \sum_{w\in \mathcal{F}} \frac{\mathrm{ord}_w(f)}{\mathrm{ord}(w)}
\Bigg(2\pi \sum_{n=1}^K \sqrt{n}a_n\frac{\partial}{\partial s} F_{-n}(w,s) \Big|_{s=1} \\ + c(g)(P(w) - \beta \vol_{\hyp}(M) +2) \Bigg). \end{multline} \end{corollary}
The constant $\beta$ is given in \eqref{KronLimitPArGen}. We refer the reader to equation \eqref{KronLimitPArGen} for further details regarding the normalizations which define $\beta$ and the parabolic Kronecker limit function $P$.
Finally, we will consider the generating function of the normalized series constructed from the right-hand side of \eqref{main f-la}. Specifically, we will prove the following identity.
\begin{theorem} \label{thm:generating series} With notation as above, the generating series $$
\sum_{n\geq 1}2\pi \sqrt{n} \frac{\partial}{\partial s} F_{-n}(w,s) \Big|_{s=1}q_z^n $$ is, in the $z$ variable, the holomorphic part of the weight two biharmonic Maass form $$ \mathcal{G}_w(z):=i\frac{\partial}{\partial z} \left( \frac{\partial}{\partial s}
\left( G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)\right) \Big|_{s=1} \right). $$ \end{theorem} Note that a weight two biharmonic Maass form is a function which satisfies the weight two modularity in $z$ and which is annihilated by $\Delta_2^2=(\xi_0\circ \xi_2)^2$, where, classically, $\xi_\kappa := 2iy^\kappa \overline{\frac{\partial}{\partial \overline{z}}}$. It is clear from the definition that $\mathcal{G}_w(z)$ satisfies the weight two modularity in the $z$ variable. In section \ref{sect proof of thm 4} we will prove that $(\xi_0\circ \xi_2)^2\mathcal{G}_w(z)=0$.
In the case $\Gamma = \text{\rm PSL}(2,\ZZ)$, our results will generalize the main theorems from \cite{BK19}, as we will discuss below.
\subsection{Outline of the paper}
In section 2 we will establish notation and recall certain results from the literature. There are two specific examples of Poincar\'e series which are particularly important for our study: the Niebur-Poincar\'e series and the resolvent kernel. Both series are defined, and their basic properties presented, in section 3. In section 4 we state the Kronecker limit formulas associated to parabolic and elliptic Eisenstein series, and we prove the factorization theorem \eqref{log norm basic}. The proofs of the main results listed above will be given in section 5.
To illustrate our results, various examples are given in section 6. Our first example is when $\Gamma = \text{\rm PSL}(2,\ZZ)$ where, as claimed above, our results yield the main theorems of \cite{BK19}. We then turn to the case when $\Gamma$ is an Atkin-Lehner group $\Gamma_0(N)^+$ for square-free level $N$. The first examples are when the genus of $\Gamma_0(N)^+$ is zero and when the function $g$ in Corollary \ref{cor:j-function} is the Hauptmodul $j_N^+(z)$. The next two examples we present are for levels $N=37$ and $N=103$. For these levels the genus of the quotient by $\Gamma_0(N)^+$ is one and two, respectively. In these cases, certain generators of the corresponding function fields were constructed in \cite{JST13}. Consequently, we are able to employ the results from \cite{JST13} and fully develop Corollary \ref{cor:j-function}.
\section{Background material}
\subsection{Basic notation} \label{notation} Let $\Gamma\subset\text{\rm PSL}(2,\mathbb{R})$ denote a Fuchsian group of the first kind acting by fractional linear transformations on the hyperbolic upper half-plane $\mathbb{H}:=\{z=x+iy\in\mathbb{C}\,
|\,x,y\in\mathbb{R};\,y>0\}$. We let $M:=\Gamma\backslash\mathbb{H}$, which is a finite volume hyperbolic Riemann surface, and denote by $p:\mathbb{H}\longrightarrow M$ the natural projection. We assume that $M$ has $e_{\Gamma}$ elliptic fixed points and one cusp at $\infty$ of width one. By an abuse of notation, we also say that $\Gamma$ has a cusp at $\infty$ of width one, meaning that the stabilizer $\Gamma_\infty$ of $\infty$ is generated by the matrix $\bigl(\begin{smallmatrix} 1&1\\0&1\end{smallmatrix}\bigr)$. We identify $M$ locally with its universal cover $\mathbb{H}$. By $\mathcal{F}$ we denote the ``usual'' (Ford) fundamental domain for $\Gamma$ acting on $\mathbb H$.
We let $\mu_{\mathrm{hyp}}$ denote the hyperbolic metric on $M$, which is compatible with the complex structure of $M$, and has constant negative curvature equal to minus one. The hyperbolic line element $ds^{2}_{\hyp}$, resp.~the hyperbolic Laplacian $\Delta_{\hyp}$ acting on functions, are given in the coordinate $z=x+iy$ on $\mathbb{H}$ by \begin{align*} ds^{2}_{\hyp}:=\frac{dx^{2}+dy^{2}}{y^{2}},\quad\textrm{resp.} \quad\Delta_{\hyp}:=-y^{2}\left(\frac{\partial^{2}}{\partial x^{2}}+\frac{\partial^{2}}{\partial y^{2}}\right). \end{align*} By $d_{\mathrm{hyp}}(z,w)$ we denote the hyperbolic distance between the two points $z\in\mathbb{H}$ and $w\in\mathbb{H}$. Our normalization of the hyperbolic Laplacian is different from the one considered in \cite{Ni73} and \cite{He83} where the Laplacian is taken with the plus sign.
\subsection{Modular forms}
Following \cite{Se73}, we define a weakly modular form $f$ of even weight $2k$ for $k \geq 0$ associated to $\Gamma$ to be a function $f$ which is meromorphic on $\mathbb H$ and satisfies the transformation property \begin{equation}\label{transf prop} f\left(\frac{az+b}{cz+d}\right) = (cz+d)^{2k}f(z), \quad\textrm{for any $\begin{pmatrix}a&b\\c&d\end{pmatrix} \in \Gamma$.} \end{equation}
In the setting of this paper, any weakly modular form $f$ will satisfy the relation $f(z+1)=f(z)$, so that for some positive integer $N$ we can write $$ f(z) = \sum\limits_{n=-N}^{\infty}a_{n}q_z^{n}, \quad\text{ where } q_z =e(z)= e^{2\pi iz}. $$ If $a_{n} = 0$ for all $n < 0$, then $f$ is said to be holomorphic at the cusp at $\infty$. A holomorphic modular form with respect to $\Gamma$ is a weakly modular form which is holomorphic on $\mathbb H$ and at all the cusps of $\Gamma$.
When the weight $k$ is zero, the transformation property \eqref{transf prop} indicates that the function $f$ is invariant with respect to the action of elements of the group $\Gamma$, so it may be viewed as a meromorphic function on the surface $M=\Gamma\backslash \mathbb{H}$. In other words, a meromorphic function on $M$ is a weakly modular form of weight $0$.
For any two weight $2k$ weakly modular forms $f$ and $g$ associated to $\Gamma$, with integrable singularities at finitely many points in $\mathcal{F}$, the generalized inner product $\langle \cdot,\, \cdot \rangle$ is defined as \begin{equation} \label{def:inner prod} \langle f,g\rangle = \lim_{Y\to \infty} \int\limits_{\mathcal{F}(Y)}f(z) \overline{g(z)}(\text{\rm Im}(z))^{2k}d\mu_{\hyp}(z) \end{equation} where the integration is taken over the portion $\mathcal{F}(Y)$ of the fundamental domain $\mathcal{F}$ equal to $\mathcal{F}\setminus \mathcal{F}_\infty(Y)$.
\subsection{Atkin-Lehner groups} \label{sect Atkin Leh groups}
Let $N=p_1\cdot\ldots\cdot p_r$ be a square-free positive integer, including the case $N=1$. The subset of $\text{\rm SL}(2,\mathbb{R})$, defined by \begin{align*} \Gamma_0(N)^+:=\left\{ \frac{1}{\sqrt{e}}\begin{pmatrix}a&b\\c&d\end{pmatrix}\in
\text{\rm SL}(2,\mathbb{R}): \,\,\, ad-bc=e, \,\,\, a,b,c,d,e\in\mathbb{Z}, \,\,\, e\mid N,\ e\mid a,
\ e\mid d,\ N\mid c \right\} \end{align*} is an arithmetic subgroup of $\text{\rm SL}(2,\mathbb{R})$. We use the terminology Atkin-Lehner groups of level $N$ to describe $\Gamma_0(N)^+$
in part because these groups are obtained by adding all Atkin-Lehner involutions to the congruence group $\Gamma_0(N)$, see \cite{AtLeh70}. Let $\{\pm \textrm{Id}\}$ denote the set of two elements where $\textrm{Id}$ is the identity matrix. In general, if $\Gamma$ is a subgroup of $\text{\rm SL}(2,\mathbb{R})$, we let $\overline{\Gamma} := \Gamma /\{\pm \textrm{Id}\}$ denote its projection into $\textrm{PSL}(2,\mathbb{R})$.
Set $Y_N^{+}:=\overline{\Gamma_0(N)^+} \backslash \mathbb{H}$. According to \cite{Cum04}, for any square-free $N$ the quotient space $Y_{N}^{+}$ has one cusp at $\infty$ with the cusp width equal to one. The spaces $Y_{N}^{+}$ will be used in the last section where we give examples of our results for generators of function fields of meromorphic functions on $Y_N^{+}$.
\subsection{Generators of function fields of Atkin-Lehner groups of small genus}
An explicit construction of generators of function fields of all meromorphic functions on $Y_N^{+}$ with genus $g_{N,+}\leq 3$ was given in \cite{JST13}.
When $g_{N,+}=0$, the function field of meromorphic functions on $Y_N^+$ is generated by a single function, the Hauptmodul $j_N^+(z)$, which is normalized so that its $q$-expansion is of the form $q_z^{-1}+O(q_z)$. The Hauptmodul $j_N^+(z)$ appears in ``Monstrous Moonshine'' and was investigated in many papers, starting with Conway and Norton \cite{CN79}. The action of the $m$-th Hecke operator $T_m$ on $j_N^+(z)$ produces a meromorphic form on
$Y_N^{+}$ with the $q$-expansion $j_N^+|T_{m}(z)= q_z^{-m} + O(q_z)$.
When $g_{N,+}\geq 1$, the function field associated to $Y_N^+$ is generated by two functions $x_N^+(z)$ and $y_N^+(z)$. Stemming from the results in \cite{JST13}, we have that for $g_{N,+}\leq 3$ the generators $x_N^+(z)$ and $y_N^+(z)$ can be chosen so that their $q$-expansions are of the form $$ x_N^+(z)=q_z^{-a}+\sum_{j=1}^{a-1}a_jq_z^{-j}+O(q_z) \quad \text{and} \quad y_N^+(z)=q_z^{-b}+\sum_{j=1}^{b-1}b_jq_z^{-j}+O(q_z) $$ where $a,b$ are positive integers with $a\leq 1+g_{N,+}$, and $b\leq 2+g_{N,+}$. Furthermore, for $g_{N,+} \leq 3$, it is shown in \cite{JST13} that all coefficients in the $q$-expansion for $x_N^+(z)$ and $y_N^+(z)$ are integers. For all such $N$, the precise values of these coefficients out to large order were computed, and the results are available at \cite{jst url}.
\section{Two Poincar\'e series}
In this section we will define the Niebur-Poincar\'e series $F_{m}(z,s)$ and the resolvent kernel, also referred to as the automorphic Green's function $G_s(z,w)$. We refer the reader to \cite{Ni73} for additional information regarding $F_{m}(z,s)$ and to \cite{He83} and \cite{Iwa02} and references therein for further details regarding $G_s(z,w)$. As stated above, we will suppress the group $\Gamma$ from the notation.
\subsection{Niebur-Poincar\'e series}\label{sect_NP-series}
We start with the definition and properties of the Niebur-Poincar\'e series $F_{m}(z,s)$ associated to a co-finite Fuchsian group with one cusp; then we will specialize results to the setting of Atkin-Lehner groups.
\subsubsection{Niebur-Poincar\'e series associated to a co-finite Fuchsian group with one cusp}
Let $m$ be a non-zero integer, $z=x+iy\in\mathbb{H}$, and $s\in\mathbb{C}$ with ${\mathrm{Re}}(s)>1$. Recall the notation $e(x):=\exp(2\pi i x)$, and let $I_{s-1/2}$ denote the modified $I$-Bessel function of the first kind; see, for example, formula (B.32) in Appendix B.4 of \cite{Iwa02}. The Niebur-Poincar\'e series $F_{m}(z,s)$ is defined by the series \begin{equation}\label{Def:Niebur Poinc series}
F_m(z,s)=F_m^\Gamma(z,s):= \sum_{\gamma\in\Gamma_\infty \backslash \Gamma} e(m{\mathrm{Re}}(\gamma z))({\mathrm{Im}}(\gamma z))^{1/2}I_{s-1/2}(2\pi |m| {\mathrm{Im}}(\gamma z)). \end{equation} For fixed $m$ and $z$, the series \eqref{Def:Niebur Poinc series} converges absolutely and uniformly on any compact subset of the half plane ${\mathrm{Re}}(s)>1$. Moreover, $\Delta_{\hyp}F_m(z,s) = s(1-s) F_m(z,s)$ for all $s\in\mathbb{C}$ in the half plane ${\mathrm{Re}}(s)>1$. From Theorem 5 of \cite{Ni73}, we have that for any non-zero integer $m$, the function $F_m(z,s)$ admits a meromorphic continuation to the whole complex plane $s\in\CC$. Moreover, $F_m(z,s)$ is holomorphic at $s=1$ and, according to the spectral expansion given in Theorem 5 of \cite{Ni73}, $F_m(z,1)$ is orthogonal to constant functions, meaning that $$ \langle F_m(z,1), 1 \rangle=0. $$
For our purposes, it is necessary to employ the Fourier expansion of $F_{m}(z,s)$ in the cusp $\infty$. The Fourier expansion is proved in \cite{Ni73} and involves Kloosterman sums $S(m,n;c)$, which we now define. For any integers $m$ and $n$, and real number $c$, define $$ S(m,n;c)=S_\Gamma(m,n;c):= \sum_{\bigl(\begin{smallmatrix} a&\ast\\c&d\end{smallmatrix}\bigr)\in \Gamma_\infty \diagdown \Gamma \diagup \Gamma_\infty } e\left( \frac{ma + nd}{c}\right). $$ For ${\mathrm{Re}}(s)>1$ and $z=x+iy\in \mathbb{H}$, the Fourier expansion of $F_m(z,s)$ is given by \begin{equation}\label{Four exp Nieb}
F_m(z,s)=e(mx)y^{1/2}I_{s-1/2}(2\pi |m|y) + \sum_{k=-\infty}^{\infty}b_k(y,s;m)e(kx), \end{equation} where $$
b_0(y,s;m) = \frac{y^{1-s}}{(2s-1)\Gamma(s)}2\pi^s |m|^{s-1/2} \sum_{c>0} S(m,0;c)c^{-2s}=\frac{y^{1-s}}{(2s-1)}B_0(s;m) $$ and, for $k\neq 0$ $$
b_k(y,s;m)=B_k(s;m)y^{1/2}K_{s-1/2}(2\pi |m|y), $$ with $$ B_k(s;m)= 2 \sum_{c>0}S(m,k;c)c^{-1}\cdot \left\{
\begin{array}{ll}
J_{2s-1}\left(\frac{4\pi}{c} \sqrt{mk}\right), & \textrm{\rm if \,}mk>0 \\
I_{2s-1}\left(\frac{4\pi}{c} \sqrt{|mk|}\right), & \textrm{\rm if \,} mk<0.
\end{array}
\right. $$ In the above expression, $J_{2s-1}$ denotes the $J$-Bessel function and $K_{s-1/2}$ is the modified Bessel function; see, for example, formula (B.28) in \cite{Iwa02} for $J_{2s-1}$ and formula (B.34) of \cite{Iwa02} for $K_{s-1/2}$.
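For the full modular group $\Gamma = \text{\rm PSL}(2,\ZZ)$, the double-coset sum defining $S(m,n;c)$ reduces to the classical Kloosterman sum $\sum_{ad\equiv 1\,(\mathrm{mod}\,c)} e\bigl((ma+nd)/c\bigr)$, a standard fact. The following Python sketch (function names are ours, not from the text) evaluates small examples of this classical sum by brute force.

```python
import cmath
from math import gcd

def e(x):
    """e(x) = exp(2*pi*i*x), matching the notation of the text."""
    return cmath.exp(2j * cmath.pi * x)

def kloosterman(m, n, c):
    """Classical Kloosterman sum S(m, n; c) for PSL(2, Z):
    sum over d mod c with gcd(d, c) = 1 of e((m*a + n*d)/c), where a*d = 1 mod c."""
    total = 0.0
    for d in range(1, c + 1):
        if gcd(d, c) == 1:
            a = pow(d, -1, c)  # modular inverse of d mod c (Python 3.8+)
            total += e((m * a + n * d) / c)
    return total
```

For instance, $S(0,0;c)$ collapses to Euler's totient $\varphi(c)$, and $S(1,1;3) = e(2/3) + e(4/3) = -1$, both of which the brute-force sum reproduces to floating-point accuracy.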
According to the proof of Theorem 6 from \cite{Ni73}, the Fourier expansion \eqref{Four exp Nieb} extends by the principle of analytic continuation to the case when $s=1$, hence putting $B_k(1;m):= \lim_{s\downarrow 1} B_k(s;m)$, we have \begin{equation}\label{Four exp Nieb at 1}
F_m(z,1)=\frac{\sinh(2\pi|m|y)}{\pi \sqrt{|m|}}e(mx) + B_0(1;m)+ \sum_{k\in\mathbb{Z}\setminus\{0\}}\frac{1}{2 \sqrt{|k|}} e^{-2\pi|k|y}B_k(1;m)e(kx). \end{equation} It is clear from \eqref{Four exp Nieb at 1} that for $n>0$ one has that $$ F_{-n}(z,1) = \frac{1}{2\pi\sqrt{n}}q_{z}^{-n} + O(1) \,\,\,\,\,\textrm{\rm as $z \rightarrow \infty$.} $$ Moreover, applying $\frac{\partial }{\partial s}$ to the Fourier expansion \eqref{Four exp Nieb}, taking $s=1$ and reasoning analogously to the proof of Lemma 4.3 (1), p.~19 of \cite{BK19}
we immediately deduce the following crude bound \begin{equation} \label{eq: N-P deriv bound}
\left.\frac{\partial}{\partial s} F_{-n}(z,s)\right|_{s=1} \ll \exp\left( 2\pi n {\mathrm{Im}}(z)\right), \quad \text{as} \quad {\mathrm{Im}}(z) \to \infty. \end{equation}
We note that the value of the derivative of the Niebur-Poincar\'e series at $s=1$ satisfies a differential equation, namely that \begin{align} \notag
\Delta_{\hyp}\left(\frac{\partial}{\partial s}\left. F_{-n}(z,s) \right|_{s=1} \right) &=\lim_{s\to 1}
\Delta_{\hyp} \left(\frac{F_{-n}(z,s) - F_{-n}(z,1)}{(s-1)}\right) \\&=\lim_{s\to 1} \left(\frac{s(1-s)F_{-n}(z,s) -0}{(s-1)}\right) = -F_{-n}(z,1). \label{delta of deriv of N-P} \end{align}
\subsubsection{Fourier expansion when $\Gamma$ is an Atkin-Lehner group}
One can explicitly evaluate $B_0(1;m)$ for $m > 0$ when $\Gamma$ is an Atkin-Lehner group. Set $\Gamma=\overline{\Gamma_0(N)^+}$, where $N$ is squarefree, which we express as $N=\prod\limits_{\nu=1}^r p_\nu$. Let $B_{0,N}^+(1;m)$ denote the coefficient $B_0(1;m)$ for $\overline{\Gamma_0(N)^+}$.
From Theorem 8 and Proposition 9 of \cite{JST13} we get that \begin{equation}\label{B0 for A-L groups} B_{0,N}^+(1;m)= \frac{12\sigma(m)}{\pi\sqrt{m}}\prod\limits_{\nu=1}^r \left(1- \frac{p_\nu^{\alpha_{p_\nu}(m)+1}(p_\nu -1) }{\left(p_\nu^{\alpha_{p_\nu}(m)+1} - 1\right)(p_\nu+1)}\right) , \end{equation} where $\sigma(m)$ denotes the sum of divisors of a positive integer $m$ and $\alpha_p(m)$ is the largest integer such that $p^{\alpha_p(m)}$ divides $m$. These expressions will be used in our explicit examples in section \ref{sect:examples} below.
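To make \eqref{B0 for A-L groups} concrete, the following Python sketch evaluates $B_{0,N}^+(1;m)$ directly from the stated formula. The function names are ours; note that for $N=1$ (so $r=0$) the product over $\nu$ is empty and the expression reduces to $12\sigma(m)/(\pi\sqrt{m})$.

```python
import math

def sigma(m):
    """Sum of divisors of the positive integer m."""
    return sum(d for d in range(1, m + 1) if m % d == 0)

def alpha(p, m):
    """Largest integer a such that p**a divides m."""
    a = 0
    while m % p == 0:
        m //= p
        a += 1
    return a

def B0_plus(primes, m):
    """B_{0,N}^+(1;m) from the displayed formula; N = product of the distinct primes given."""
    prod = 1.0
    for p in primes:
        ap1 = alpha(p, m) + 1
        prod *= 1 - (p**ap1 * (p - 1)) / ((p**ap1 - 1) * (p + 1))
    return 12 * sigma(m) / (math.pi * math.sqrt(m)) * prod
```

For example, `B0_plus([], 1)` returns $12/\pi$, the value for the full modular group with $m=1$.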
\subsection{Automorphic Green's function} \label{sec: aut Green}
The automorphic Green's function, also called the resolvent kernel, for the Laplacian on $M$ is defined on page 31 of \cite{He83}. In the notation of \cite{He83}, let $\chi$ be the identity character, $z,w\in \mathcal{F}$ with $z\neq w$, and $s\in\mathbb{C}$ with ${\mathrm{Re}}(s)>1$. Formally, consider the series $$ G_s(z,w)= \sum_{\gamma\in\Gamma} k_s(\gamma z,w) $$ with $$
k_s(z,w):=-\frac{\Gamma(s)^2}{4\pi \Gamma(2s)}\left[1-\left| \frac{z-w}{z-\overline{w}}\right|^2\right]^s F\left(s,s;2s;1-\left| \frac{z-w}{z-\overline{w}}\right|^2\right) $$ and where $F(\alpha,\beta;\gamma;u)$ is the classical hypergeometric function. We should point out that the normalization we are using, which follows \cite{He83}, differs from the normalization for the Green's function in Chapter 5 of \cite{Iwa02}; the two normalizations differ by a minus sign. With this said, it is proved in Proposition 6.5 on p.~33 of \cite{He83} that the series which defines $G_{s}(z,w)$ converges uniformly and absolutely on compact subsets of $(z,w,s) \in \mathcal{F} \times \mathcal{F}\times \{s\in\mathbb{C} :{\mathrm{Re}}(s)>1\}$.
Furthermore, for all $s\in\mathbb{C}$ with ${\mathrm{Re}}(s)>1$, and all $z,w \in\mathbb{H}$ with $z\neq \gamma w$ for $\gamma\in\Gamma$, the function $G_s(z,w)$ is an eigenfunction of $\Delta_{\hyp}$ associated to the eigenvalue $s(1-s)$.
Combining formulas 9.134.1. and 8.703. from \cite{GR07} and applying the identity $$
\cosh(d_{\hyp}(z,w))=\left(2-\left[1-\left| \frac{z-w}{z-\overline{w}}\right|^2\right]\right)\left(1-\left| \frac{z-w}{z-\overline{w}}\right|^2\right)^{-1} $$ we deduce that $$k_s(z,w)=-\frac{1}{2\pi}Q^0_{s-1}(\cosh(d_{\hyp}(z,w))),$$ where $Q_\nu^{\mu}$ is the associated Legendre function as defined by formula 8.703 in \cite{GR07}, with $\nu=s-1$ and $\mu=0$.
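The displayed identity can be checked numerically against the standard distance formula $\cosh(d_{\hyp}(z,w)) = 1 + |z-w|^2/(2\,{\mathrm{Im}}(z)\,{\mathrm{Im}}(w))$; the short Python sketch below (names are ours) does this for a sample pair of points in the upper half-plane.

```python
def cross_ratio_term(z, w):
    """u = 1 - |(z - w)/(z - conj(w))|^2, the quantity appearing in k_s(z, w)."""
    return 1 - abs((z - w) / (z - w.conjugate()))**2

def cosh_dist(z, w):
    """cosh of the hyperbolic distance: 1 + |z - w|^2 / (2 Im(z) Im(w))."""
    return 1 + abs(z - w)**2 / (2 * z.imag * w.imag)

# Verify cosh(d_hyp(z, w)) = (2 - u) / u for a sample pair of points.
z, w = 1 + 2j, 0.5 + 1j
u = cross_ratio_term(z, w)
assert abs((2 - u) / u - cosh_dist(z, w)) < 1e-12
```

The algebra behind the check: with $u = 1-\bigl|\tfrac{z-w}{z-\overline{w}}\bigr|^2$ one has $u = 4\,{\mathrm{Im}}(z)\,{\mathrm{Im}}(w)/|z-\overline{w}|^2$, from which $(2-u)/u = 1 + |z-w|^2/(2\,{\mathrm{Im}}(z)\,{\mathrm{Im}}(w))$ follows since $|z-\overline{w}|^2 = |z-w|^2 + 4\,{\mathrm{Im}}(z)\,{\mathrm{Im}}(w)$.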
Now, we can combine Theorem 4 of \cite{Ni73} with Theorem 5.3 of \cite{Iwa02}, to deduce the Fourier expansion of the automorphic Green function in terms of the Niebur-Poincar\'e series. Specifically, let $w\in \mathcal{F}$ be fixed. Assume $z \in \mathcal{F}$ with $y={\mathrm{Im}}(z) > \max\{{\mathrm{Im}}(\gamma w): \gamma\in \Gamma\}$, and assume $s\in\mathbb{C}$ with ${\mathrm{Re}}(s)> 1$. Then $G_{s}(z,w)$ admits the expansion \begin{equation}\label{Four exp Green}
G_s(z,w)=-\frac{y^{1-s}}{2s-1}\mathcal{E}_\infty^{\mathrm{par}}(w,s)-\sum_{k\in\mathbb{Z}\smallsetminus \{0\}} y^{1/2}K_{s-1/2}(2\pi |k| y) F_{-k}(w,s)e(kx) \end{equation} where $\mathcal{E}_\infty^{\mathrm{par}}(w,s)$ is the parabolic Eisenstein series associated to the cusp at $\infty$ of $\Gamma$, see the next section for its full description.
The function $G_s(z,w)$ is unbounded as $z\to w$ and, according to Proposition 6.5 of \cite{He83}, we have the asymptotic behavior $$
G_s(z,w)=\frac{\mathrm{ord}(w)}{2\pi}\log|z-w|+O(1),\quad \text{as}\quad z\to w. $$
\section{Eisenstein series and their Kronecker limit formulas}
The purpose of this section is two-fold. First, we state the definitions of parabolic and elliptic Eisenstein series as well as their associated Kronecker limit formulas. Specific examples of the parabolic Kronecker limit formulas are recalled from \cite{JST13}. Second, we prove the factorization theorem for meromorphic forms in terms of elliptic Kronecker limit functions, as stated in \eqref{ell kroneck limit one cusp2}.
\subsection{Parabolic Kronecker limit functions}
Associated to the cusp at $\infty$ of $\Gamma$ one has a parabolic Eisenstein series ${\cal E}^{\mathrm{par}}_{\infty}(z,s)$. Let $\Gamma_{\infty}$ denote the stabilizer subgroup within $\Gamma$ of $\infty$. For $z\in \mathbb{H}$ and $s \in \mathbb{C}$ with $\textrm{Re}(s) > 1$, ${\cal E}^{\mathrm{par}}_{\infty}(z,s)$ is defined by the series \begin{equation*} {\cal E}^{\mathrm{par}}_{\infty}(z,s) = \sum\limits_{\gamma \in \Gamma_{\infty}\backslash \Gamma}\textrm{Im}(\gamma z)^{s}. \end{equation*} It is well-known that ${\cal E}^{\mathrm{par}}_{\infty}(z,s)$ admits a meromorphic continuation to all $s\in \CC$ and a functional equation in $s$.
For us, the Kronecker limit formula means the determination of the constant term in the Laurent expansion of ${\cal E}^{\mathrm{par}}_{\infty}(z,s)$ at $s=1$. Classically, Kronecker's limit formula is the assertion that for $\Gamma = \textrm{PSL}(2,\mathbb{Z})$ one has that \begin{equation}\label{PSL2_KLF} \mathcal{E}^{\mathrm{par}}_{\infty}(z,s)= \frac{3}{\pi(s-1)}
-\frac{1}{2\pi}\log\bigl(|\Delta(z)|{\mathrm{Im}}(z)^{6}\bigr)+C+O(s-1) \,\,\,\text{\rm as} \,\,\, s \rightarrow 1, \end{equation} where $C=6(1-12\,\zeta'(-1)-\log(4\pi))/\pi$ and $\Delta(z)$ is Dedekind's Delta function, which is defined by \begin{equation}\label{PSL2_Delta} \Delta(z) = \left[q_{z}^{1/24}\prod\limits_{n=1}^{\infty}\left(1 - q_{z}^{n}\right)\right]^{24} = \eta(z)^{24}. \end{equation} We refer to \cite{Siegel80} for a proof of \eqref{PSL2_KLF}, though the above formulation follows the normalization from \cite{JST13}.
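As a quick consistency check on \eqref{PSL2_Delta}, one can expand the product numerically. The Python sketch below (a sketch under our own naming, not part of the text) computes the first few $q$-coefficients of $\Delta(z)$, which are the Ramanujan $\tau$-values $1, -24, 252, -1472, \ldots$

```python
def delta_coeffs(prec):
    """Coefficients of q_z^1, ..., q_z^prec in Delta(z) = q_z * prod_{n>=1} (1 - q_z^n)^24."""
    poly = [0] * prec          # poly[k] = coefficient of q^k in the product, truncated at degree prec - 1
    poly[0] = 1
    for n in range(1, prec):
        for _ in range(24):    # multiply in place by (1 - q^n), 24 times
            for k in range(prec - 1, n - 1, -1):
                poly[k] -= poly[k - n]
    # The leading factor q_z shifts every exponent up by one.
    return poly
```

The in-place update `poly[k] -= poly[k - n]`, with `k` descending, is the standard trick for multiplying a truncated power series by $(1 - q^n)$ without allocating a second array.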
For general Fuchsian groups of the first kind, Goldstein \cite{Go73} studied analogues of Kronecker's limit formula associated to parabolic Eisenstein series. After a slight renormalization and trivial generalization, Theorem 3-1 from \cite{Go73} asserts that the parabolic Eisenstein series $\mathcal{E}^{\mathrm{par}}_{\infty}(z,s)$ admits the Laurent expansion \begin{equation} \label{KronLimitPArGen} \mathcal{E}^{\mathrm{par}}_{\infty}(z,s)= \frac{1}{\vol_{\hyp}(M) (s-1)} + \beta- \frac{1}{\vol_{\hyp}(M)} \log (\vert\eta_{\infty}^4(z)\vert {\mathrm{Im}}(z)) + O(s-1), \end{equation} as $s \to 1$, where $\beta=\beta_{\Gamma}$ is a certain real constant depending only on the group $\Gamma$. As the notation suggests, the function $\eta_{\infty}(z)$ is a holomorphic form for $\Gamma$ and can be viewed as a generalization of the eta function $\eta(z)$ which is defined in \eqref{PSL2_Delta} for the full modular group.
By employing the functional equation for the parabolic Eisenstein series, as stated in Theorem 6.5 of \cite{Iwa02}, one can re-write the Kronecker limit formula as stating that \begin{equation} \label{KronLimas s to 0} \mathcal{E}^{\mathrm{par}}_{\infty}(z,s)= 1+ \log (\vert \eta_{\infty}^4(z)\vert {\mathrm{Im}}(z))\cdot s + O(s^2) \quad\text{ as } s \to 0, \end{equation} see Corollary 3 of \cite{JvPS19}. In this formulation, we will call the function $$ P(z)=P_{\Gamma}(z):=\log (\vert \eta_{\infty}^4(z)\vert {\mathrm{Im}}(z)) $$ the parabolic Kronecker limit function of $\Gamma$.
\subsection{Atkin-Lehner groups} \label{sect Atkin Leh groups_KLF}
Let $N=p_1\cdot \ldots \cdot p_r$ be a positive squarefree number, which includes the possibility that $N=1$, and set $$ \ell_N = 2^{1-r}\textrm{lcm}\Big(4,\ 2^{r-1}\frac{24}{(24,\sigma(N))}\Big) $$ where $\textrm{lcm}$ stands for the least common multiple of two numbers. In \cite{JST13}, Theorem 16, it is proved that \begin{equation}\label{DeltaN} \Delta_N(z):=\left( \prod_{v \mid N} \eta(v z) \right)^{\ell_N} \end{equation} is a weight $k_N=2^{r-1} \ell_N$ holomorphic form for $\Gamma_0(N)^+$ vanishing only at the cusp. By the valence formula, the order of vanishing of $\Delta_N(z)$ at the cusp is $\nu_N:=k_N \vol_{\hyp}(Y_N^{+})/(4\pi)$ where $\vol_{\hyp}(Y_N^{+})=\pi\sigma(N)/(3\cdot 2^r)$ is the hyperbolic volume of the surface $Y_N^{+}$.
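The quantities $\ell_N$, $k_N$, and $\nu_N$ can be evaluated mechanically from the formulas above; the Python sketch below (our own naming, assuming the inner argument of the lcm is integral, as it is for squarefree $N$) does so. For $N=1$ it recovers $\Delta_1 = \eta^{24}$ of weight $12$ with $\nu_1 = 1$.

```python
from fractions import Fraction
from math import gcd, prod

def sigma(n):
    """Sum of divisors of the positive integer n."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

def lcm(a, b):
    return a * b // gcd(a, b)

def ell_and_weight(primes):
    """Return (ell_N, k_N, nu_N) for squarefree N = product of the distinct primes given."""
    r = len(primes)
    N = prod(primes)  # N = 1 when r = 0
    inner = Fraction(2)**(r - 1) * Fraction(24, gcd(24, sigma(N)))
    ell = Fraction(2)**(1 - r) * lcm(4, int(inner))   # inner is integral for squarefree N
    k = Fraction(2)**(r - 1) * ell                    # weight of Delta_N
    nu = k * Fraction(sigma(N), 12 * 2**r)            # k_N * vol / (4*pi), vol = pi*sigma(N)/(3*2^r)
    return int(ell), int(k), int(nu)
```

For instance, `ell_and_weight([2])` evaluates the formulas at $N=2$, where $\Delta_2(z) = (\eta(z)\eta(2z))^{\ell_2}$.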
The Kronecker limit formula \eqref{KronLimitPArGen} for the parabolic Eisenstein series $\mathcal{E}^{\mathrm{par},N}_{\infty}(z,s)$ associated to $Y_N^{+}$ reads as \begin{equation} \label{KronLimitPArGen - level N} \mathcal{E}^{\mathrm{par},N}_{\infty}(z,s)= \frac{1}{\vol_{\hyp}(Y_N^{+}) (s-1)} + \beta_N - \frac{1}{\vol_{\hyp}(Y_N^{+})}P_N(z) + O(s-1) \end{equation} as $s \to 1$. From Example 7 and Example 4 of \cite{JvPS19} we have the explicit evaluations of $\beta_{N}$ and $P_{N}(z)$. Namely, \begin{equation} \label{betaN} \beta_N=- \frac{1}{\vol_{\hyp} (Y_N^{+}) }\left( \sum_{j=1}^{r} \frac{(p_j -1)\log p_j}{2(p_j+1)}- \log N + 2\log (4\pi) + 24\zeta'(-1) - 2\right) \end{equation} and the parabolic Kronecker limit function $P_N(z)$ is given by $$ P_N(z)= \log\left( \sqrt[2^r]{\prod_{v \mid N} \vert \eta(vz)\vert ^4} \cdot {\mathrm{Im}}(z) \right). $$
\subsection{Elliptic Kronecker limit functions}\label{ell_Eisen_series}
Elliptic subgroups of $\Gamma$ have finite order and a unique fixed point within $\mathbb H$. For all but a finite number of $w \in \mathcal{F}$, the order of the elliptic subgroup $\Gamma_{w}$ which fixes $w$ is one. For $z\in \mathbb{H}$ with $z\not=w$ and $s \in \mathbb{C}$ with $\textrm{Re}(s) > 1$, the elliptic Eisenstein series ${\cal E}^{\textrm{ell}}_{w}(z,s)$ is defined by the series \begin{equation}\label{ell_eisen} {\cal E}^{\textrm{ell}}_{w}(z,s) =\sum\limits_{\gamma \in \Gamma_{w}\backslash \Gamma} \sinh(d_{\mathrm{hyp}}(\gamma z, w))^{-s} = \sum\limits_{\gamma \in \Gamma_{w}\backslash \Gamma}
\left( \frac{2\,\textrm{Im}(w)\textrm{Im}(\gamma z)}{|\gamma z-w|\,|\gamma z-\overline{w}|} \right)^s. \end{equation} It was first shown in \cite{vP10} that \eqref{ell_eisen} admits a meromorphic continuation to all $s \in \CC$.
The analogue of the Kronecker limit formula for ${\cal E}^{\textrm{ell}}_{w}(z,s)$ was first proved in \cite{vP10}; see also \cite{JvPS19}. In the setting of this paper, it is shown in \cite{vP10} that for any $w\in\mathcal{F}$ the series \eqref{ell_eisen} admits the Laurent expansion \begin{multline}\label{Kronecker_elliptic} \mathrm{ord}(w)\,\mathcal{E}^{\mathrm{ell}}_{w}(z,s)- \frac{2^{s}\sqrt{\pi}\,\Gamma(s-\frac{1}{2})}{\Gamma(s)}\mathcal{E}^{\mathrm{par}}_{\infty}(w,1-s) \,\mathcal{E}^{\mathrm{par}}_{\infty}(z,s)= \\ =-\frac{2\pi}{\vol_{\hyp}(M)}
-\frac{2\pi}{\vol_{\hyp}(M)}\log\bigl(|H_{\Gamma}(z,w)|^{\mathrm{ord}(w)}{\mathrm{Im}}(z)\bigr)\cdot s+O(s^2)\quad\textrm{as $s \rightarrow 0$.} \end{multline} As a function of $z$, $H(z,w):=H_{\Gamma}(z,w)$ is holomorphic on $\mathbb{H}$ and uniquely determined up to multiplication by a complex constant of absolute value one; in addition, $H(z,w)$ is an automorphic form with a non-trivial multiplier system, which depends on $w$, with respect to $\Gamma$ acting on $z$. The function $H(z,w)$ vanishes if and only if $z=\gamma w$ for some $\gamma\in\Gamma$. We call the function $$
E_w(z)=E_{w,\Gamma}(z):=\log\bigl(|H(z,w)|^{\mathrm{ord}(w)}{\mathrm{Im}}(z)\bigr) $$ the elliptic Kronecker limit function of $\Gamma$ at $w$.
\subsection{A factorization theorem}
We can now prove equation \eqref{ell kroneck limit one cusp2}.
\begin{proposition} \label{prop: factorization} With notation as above, let $f$ be a weight $2k$ meromorphic form on $\mathbb{H}$ with $q$-expansion at $\infty$ given by \begin{equation} \label{q exp. of f_2k} f(z)= 1+ \sum_{n=1}^{\infty} b_{f}(n)q_z^n. \end{equation} Let $\mathrm{ord}_w(f)$ denote the order of $f$ at $w$ and define the function $$ H_{f}(z):= \prod_{w \in \mathcal{F}} H(z,w)^{\mathrm{ord}_w(f)} $$ where $H(z,w)=H_{\Gamma}(z,w)$ is given in \eqref{Kronecker_elliptic}. Then there exists a complex constant $c_{f}$ such that \begin{equation} \label{factorization fla} f(z) = c_{f}H_{f}(z). \end{equation} Furthermore, $$ \abs{c_{f}} = \exp \left(-\frac{2\pi}{\vol_{\hyp}(M)} \sum_{w\in \mathcal{F}} \frac{\mathrm{ord}_w(f)}{\mathrm{ord}(w)} \left( 2-\log 2 + P(w)- \beta\vol_{\hyp}(M)\right) \right ), $$ where $P(w)$ and $\beta$ are defined through the parabolic Kronecker limit function \eqref{KronLimitPArGen}. \end{proposition}
\begin{proof} The proof closely follows the proof of Theorem 9 from \cite{JvPS19}. Specifically, following the first part of the proof almost verbatim, we conclude that the quotient $$ F_f(z):=\frac{H_f(z)}{f(z)} $$ is a non-vanishing holomorphic function on $M$ which is bounded and non-zero at the cusp at $\infty$. Hence, $\log \vert F_{f}(z)\vert$ is $L^{2}$ on $M$. From its spectral expansion and the fact that $\log \vert F_{f}(z)\vert$ is harmonic, one concludes $\log \vert F_{f}(z)\vert$ is constant, hence so is $F_{f}(z)$. The evaluation of the constant is obtained by considering the limiting behavior as $z$ approaches $\infty$, which is obtained by using the asymptotic behavior of $H(z,w)$ as ${\mathrm{Im}}(z)\to\infty$, as given in Proposition 6 of \cite{JvPS19}. \end{proof}
By following the proof of Proposition 12 from \cite{JvPS19} we obtain \eqref{ell kroneck limit one cusp}, and hence \eqref{ell kroneck limit one cusp2}, for meromorphic forms $f$ on $\mathbb{H}$ with $q$-expansion \eqref{q exp. of f_2k}. We leave the verification of this simple argument to the reader.
\section{Proofs of main results}
\subsection{Proof of Theorem \ref{thm:main}}
Let $Y>1$ be sufficiently large so that the cuspidal neighborhood $\mathcal{F}_{\infty}(Y)$ of the cusp $\infty$ in $\mathcal{F}$ is of the form $\{z \in {\mathbb H}: 0 < x < 1, y > Y\}$. For $s\in\mathbb{C}$ with ${\mathrm{Re}}(s)>1$, and arbitrary, but fixed $w\in\mathcal{F}$, we then have that \begin{align*}
\int\limits_{\mathcal{F}(Y)}\Delta_{\hyp}(F_{-n}(z,1))&\left(G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)\right)d\mu_{\hyp}(z) \\ &
-\int\limits_{\mathcal{F}(Y)}F_{-n}(z,1)\Delta_{\hyp} \left(G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)\right)d\mu_{\hyp}(z) \\ &= -s(1-s)\int\limits_{\mathcal{F}(Y)}F_{-n}(z,1)\left(G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)\right)d\mu_{\hyp}(z). \end{align*} Actually, the first summand on the left-hand side is zero since $F_{-n}(z,1)$ is harmonic, that is, $\Delta_{\hyp}F_{-n}(z,1)=0$; however, this judicious form of the number zero is significant since we will use the method behind the Maass-Selberg theorem to
study the left-hand side of the above equation. Before this, note that the integrand on the right-hand side of the above equation is holomorphic at $s=1$. As a result, we can write \begin{align*} \frac{\partial}{\partial s}&\left.\left(-s(1-s)\int\limits_{\mathcal{F}(Y)}F_{-n}(z,1)\left(G_s(z,w) +
\mathcal{E}_\infty^{\mathrm{par}}(w,s)\right)d\mu_{\hyp}(z)\right)\right|_{s=1}\\&= \int\limits_{\mathcal{F}(Y)}F_{-n}(z,1)\lim_{s\to 1}\left(G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)\right)d\mu_{\hyp}(z). \end{align*} Therefore, \begin{align}\label{M-S starting f-la} \langle F_{-n}(z,1), &\overline{\lim_{s\to 1} \left(G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)\right)} \rangle \notag \\&=\lim_{Y\to \infty} \int\limits_{\mathcal{F}(Y)}F_{-n}(z,1)\lim_{s\to 1} \left(G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)\right)d\mu_{\hyp}(z) \notag \\&= \lim_{Y\to \infty} \left[\frac{\partial}{\partial s}\left( \int\limits_{\mathcal{F}(Y)}\Delta_{\hyp}(F_{-n}(z,1))\left(G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)\right)d\mu_{\hyp}(z) \right. \right.\notag \\&-
\left.\left. \left. \int\limits_{\mathcal{F}(Y)}F_{-n}(z,1)\Delta_{\hyp} \left(G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)\right)d\mu_{\hyp}(z) \right) \right|_{s=1}\right] \end{align}
The quantity on the right-hand side of \eqref{M-S starting f-la} is set up for an application of Green's theorem as in the proof of the Maass-Selberg relations for the Eisenstein series. As described on page 89 of \cite{Iwa02}, when applying Green's theorem to each term on the right-hand side of \eqref{M-S starting f-la} for fixed $Y$, the resulting boundary terms on the sides of the fundamental domain, which are identified by $\Gamma$, will sum to zero. As such, we get that \begin{align}\label{M-S interm f-la} \langle F_{-n}(z,1),& \overline{\lim_{s\to 1} \left(G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)\right)} \rangle \notag \\&= \lim_{Y\to \infty} \left[\frac{\partial}{\partial s}\left( \int\limits_{0}^{1}\frac{\partial}{\partial y}F_{-n}(z,1)\left(G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)\right)dx \right. \right. \notag\\ &- \left.\left. \left. \int\limits_{0}^{1}F_{-n}(z,1)\frac{\partial}{\partial y} \left(G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)\right)dx
\right) \right|_{s=1}\right], \end{align} where functions of $z$ and its derivatives with respect to $y={\mathrm{Im}}(z)$ are evaluated at $z=x+iY$.
In order to compute the difference of the two integrals of the right-hand side of \eqref{M-S interm f-la}, we will use the Fourier expansions \eqref{Four exp Nieb at 1} and \eqref{Four exp Green} of the series $F_{-n}(z,1)$ and $G_s(z,w)$ respectively. It will be more convenient to write the first coefficient in the expansion \eqref{Four exp Nieb at 1} as $e(-nx)\sqrt{y}I_{\tfrac{1}{2}}(2\pi n y)$, as in \eqref{Four exp Nieb}.
Specifically, since the exponential functions $e(-nx)$ are orthogonal for different values of $n$, we get that \eqref{M-S interm f-la} is equal to \begin{multline*}
-F_{-n}(w,s)\sqrt{Y} \left(\frac{\partial}{\partial y}\left.\left(\sqrt{y}I_{\tfrac{1}{2}}(2\pi n y)\right)\right|_{y=Y}\cdot K_{s-\tfrac{1}{2}}(2\pi n Y) \right. \\ \left.- I_{\tfrac{1}{2}}(2\pi n Y)\cdot \frac{\partial}{\partial y}\left.\left(\sqrt{y}K_{s-
\tfrac{1}{2}}(2\pi n y) \right)\right|_{y=Y}\right) \end{multline*} \begin{multline*} +B_0(1;-n)(1-s)\frac{Y^{-s}}{2s-1}\mathcal{E}_\infty^{\mathrm{par}}(w,s) \\+ \sum_{j\in\mathbb{Z}\smallsetminus\{0\}}F_j(w,s) \left( b_j(Y,1;-n)\cdot \frac{\partial}{\partial y}\left.\left(\sqrt{y}K_{s-
\tfrac{1}{2}}(2\pi |j| y) \right)\right|_{y=Y} \right. \\ \left.-\frac{\partial}{\partial y}\left.b_j(y,1;-n)\right|_{y=Y}\cdot \sqrt{Y}
K_{s-\tfrac{1}{2}}(2\pi |j| Y) \right) =T_1(Y,s;w) + T_2(Y,s;w)+T_3(Y,s;w), \end{multline*} where the last equality above provides the definitions of the functions $T_{1}$, $T_{2}$ and $T_{3}$. Therefore, from \eqref{M-S interm f-la} we conclude that \begin{multline}\label{Three terms pre-comp}
\langle F_{-n}(z,1), \overline{\lim_{s\to 1} \left(G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)\right)} \rangle = \\ \lim_{Y\to \infty}
\left[\left.\frac{\partial}{\partial s}\left( T_1(Y,s;w) + T_2(Y,s;w)+T_3(Y,s;w) \right)\right|_{s=1}\right]. \end{multline} We will treat each of the three terms on the right-hand side of \eqref{Three terms pre-comp} separately.
To evaluate the term $T_{1}$ in \eqref{Three terms pre-comp}, we apply formulas 8.486.2 and 8.486.11 of \cite{GR07} in order to compute derivatives of the Bessel functions. In doing so, we conclude that $$ T_1(Y,s;w)=-\frac{X}{2}F_{-n}(w,s)\left[K_{s-\tfrac{1}{2}}(X)(I_{-\tfrac{1}{2}}(X) + I_{\tfrac{3}{2}}(X)) + I_{\tfrac{1}{2}}(X)(K_{s-\tfrac{3}{2}}(X) + K_{s+\tfrac{1}{2}}(X)) \right], $$ where we set $X=2\pi n Y$. Next, we express $K_{s+\tfrac{1}{2}} (X)$ in terms of $K_{s-\tfrac{1}{2}} (X)$ and $K_{s-\tfrac{3}{2}}(X)$, using formula 8.485.10 from \cite{GR07} to get $$ K_{s+\tfrac{1}{2}} (X)= K_{s-\tfrac{3}{2}} (X) + \frac{2s-1}{X}K_{s-\tfrac{1}{2}} (X). $$ Then, applying formula 8.486.21 from \cite{GR07}, we deduce that \begin{align*}
\frac{\partial}{\partial s}&\left.\left[K_{s-\tfrac{1}{2}}(X)(I_{-\tfrac{1}{2}}(X) + I_{\tfrac{3}{2}}(X)) + I_{\tfrac{1}{2}}(X)(K_{s-\tfrac{3}{2}}(X) + K_{s+\tfrac{1}{2}}(X)) \right]\right|_{s=1}\\&= \sqrt{\frac{\pi}{2X}}e^X\mathrm{Ei}(-2X)\left[ -(I_{-\tfrac{1}{2}}(X) + I_{\tfrac{3}{2}}(X)) + \sqrt{\frac{2}{\pi X}} (2-1/X) \sinh (X) \right]+\frac{2}{X^2}e^{-X}\sinh (X), \end{align*} where $\mathrm{Ei}(x)$ denotes the exponential integral; see section 8.2 of \cite{GR07}. Continuing, we now employ formula (B.36) from \cite{Iwa02} which asserts certain asymptotic behavior of the $I$-Bessel function as $X\to\infty$; we are interested in the cases when $\nu=-1/2$ and when $\nu=3/2$. This result, together with the bound $\mathrm{Ei}(-2X)\leq e^{-2X}/(2X) $, which follows from the expression 8.212.10 from \cite{GR07} for $\mathrm{Ei}(-x)$ with $x>0$, yields that $$ \lim_{X\to\infty}\frac{X}{2}\frac{\partial}{\partial s}\left.\left[K_{s-\tfrac{1}{2}}(X)(I_{-\tfrac{1}{2}}(X) +
I_{\tfrac{3}{2}}(X)) + I_{\tfrac{1}{2}}(X)(K_{s-\tfrac{3}{2}}(X) + K_{s+\tfrac{1}{2}}(X)) \right]\right|_{s=1} =0. $$ Therefore, \begin{multline*}
\lim_{Y\to \infty}\frac{\partial}{\partial s}\left. T_1(Y,s;w)\right|_{s=1}= - \frac{\partial}{\partial s}\left. F_{-n}(w,s)\right|_{s=1}\cdot \\\cdot\lim_{X\to \infty}\frac{X}{2}\left[K_{s-\tfrac{1}{2}}(X)(I_{-\tfrac{1}{2}}(X) + I_{\tfrac{3}{2}}(X)) + I_{\tfrac{1}{2}}(X)(K_{s-\tfrac{3}{2}}(X) + K_{s+\tfrac{1}{2}}(X)) \right]. \end{multline*} Finally, by applying (B.36) from \cite{Iwa02} again, we deduce that $$ \lim_{X\to \infty}\frac{X}{2}\left[K_{s-\tfrac{1}{2}}(X)(I_{-\tfrac{1}{2}}(X) + I_{\tfrac{3}{2}}(X)) + I_{\tfrac{1}{2}}(X) (K_{s-\tfrac{3}{2}}(X) + K_{s+\tfrac{1}{2}}(X)) \right]=1. $$ Hence \begin{equation}\label{term1}
\lim_{Y\to \infty}\frac{\partial}{\partial s}\left. T_1(Y,s;w)\right|_{s=1}= - \frac{\partial}{\partial s}\left. F_{-n}(w,s)\right|_{s=1}. \end{equation}
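As a numerical sanity check of the Bessel-function limits used in the evaluation of $T_{1}$ (an illustration only, not part of the proof): at $s=1$ all the Bessel functions involved have half-integer order and hence elementary closed forms, so the recurrence and the limit value $1$ claimed above can be checked directly in a few lines of Python. In fact, with these closed forms the bracketed expression equals $1$ already for moderate $X$.

```python
import math

# Standard closed forms for half-integer Bessel functions,
# used only to check the limits computed in the text at s = 1.
def I_bessel(nu, x):
    c = math.sqrt(2.0 / (math.pi * x))
    if nu == 0.5:
        return c * math.sinh(x)
    if nu == -0.5:
        return c * math.cosh(x)
    if nu == 1.5:
        return c * (math.cosh(x) - math.sinh(x) / x)
    raise ValueError(nu)

def K_bessel(nu, x):
    c = math.sqrt(math.pi / (2.0 * x))
    if abs(nu) == 0.5:                       # K_{1/2} = K_{-1/2}
        return c * math.exp(-x)
    if nu == 1.5:
        return c * math.exp(-x) * (1.0 + 1.0 / x)
    raise ValueError(nu)

# Recurrence K_{s+1/2}(X) = K_{s-3/2}(X) + ((2s-1)/X) K_{s-1/2}(X) at s = 1:
X = 7.3
assert abs(K_bessel(1.5, X) - (K_bessel(-0.5, X) + K_bessel(0.5, X) / X)) < 1e-12

# The bracket whose limit as X -> infinity is asserted to be 1 (at s = 1):
def bracket(X):
    return (X / 2.0) * (
        K_bessel(0.5, X) * (I_bessel(-0.5, X) + I_bessel(1.5, X))
        + I_bessel(0.5, X) * (K_bessel(-0.5, X) + K_bessel(1.5, X)))

for X in (5.0, 20.0, 100.0):
    assert abs(bracket(X) - 1.0) < 1e-9
```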
As for the term $T_{2}$ in \eqref{Three terms pre-comp}, let us use the Laurent series expansion \eqref{KronLimitPArGen} of $\mathcal{E}_\infty^{\mathrm{par}}(w,s)$, from which one easily deduces that $$
\frac{\partial}{\partial s}\left.(s-1)\frac{Y^{-s}}{2s-1}\mathcal{E}_\infty^{\mathrm{par}}(w,s)\right|_{s=1} = \frac{1}{Y}\left( \beta - \frac{P(w)+2+\log Y}{\vol_{\hyp}(M)}\right). $$ Therefore \begin{equation}\label{term2}
\lim_{Y\to \infty}\frac{\partial}{\partial s}\left. T_2(Y,s;w)\right|_{s=1}=0. \end{equation}
It remains to study the term $T_{3}$ in \eqref{Three terms pre-comp}. Let us set $g(s,y,k):=\sqrt{y}K_{s-\tfrac{1}{2}}(2\pi k y)$ for a positive integer $k$. Then $b_j(y,1;-n)=B_j(1;-n)g(1,y,|n|)$ and \begin{multline*}
T_3(Y,s;w)=\sum_{j\in\mathbb{Z}\smallsetminus\{0\}} B_j(1;-n) F_j(w,s) \left( g(1,Y,|n|) \frac{\partial}{\partial y}\left.g(s,y,|j|)\right|_{y=Y} \right. \\ - \left. g(s,Y,|j|)\frac{\partial}{\partial y}\left.g(1,y,|n|)\right|_{y=Y} \right). \end{multline*}
For positive integers $m$ and $\ell$ let us define $$
G(s,Y,m,\ell):=g(1,Y,m) \frac{\partial}{\partial y}\left.g(s,y,\ell)\right|_{y=Y}- g(s,Y,\ell)\frac{\partial}{\partial y}\left.g(1,y,m)\right|_{y=Y}. $$
Applying formula 8.486.11 from \cite{GR07} to differentiate the $K$-Bessel function, together with formula 8.486.10 to express $K_{s+\tfrac{1}{2}}(2\pi |j|Y)$, we arrive at \begin{multline*}
G(s,Y,|n|,|j|) =\frac{\pi Y}{2}K_{s-\tfrac{1}{2}}(2\pi |j| Y)K_{\tfrac{1}{2}}(2\pi |n| Y) \cdot \\ \cdot \left( |n|(K_{-\tfrac{1}{2}}(2\pi |n|Y) + K_{\tfrac{3}{2}}(2\pi |n|Y)) - |j|\left(2K_{s-\tfrac{3}{2}}(2\pi |j|Y) +\frac{2s-1}{2\pi |j| Y} K_{s-\tfrac{1}{2}}(2\pi |j|Y)\right)\right). \end{multline*}
Now, we combine the bound (B.36) from \cite{Iwa02} with the evaluation of the derivative $\frac{\partial}{\partial \nu} K_{\nu}$ at $\nu=\pm 1/2$ (formula 8.486.21 of \cite{GR07}) and the bound $|\mathrm{Ei}(-4\pi |j|Y)| \leq \exp(-4\pi |j|Y)/(4\pi |j|Y)$ for the exponential integral function to deduce the following crude bound $$
\max\left\{G(s,Y,|n|,|j|), \left.\frac{\partial }{\partial s} G(s,Y,|n|,|j|)\right|_{s=1}\right\}\ll (|n| + |j|)\exp(-2\pi Y (|n|+|j|)), \text{ as } Y\to +\infty, $$
where the implied constant is independent of $Y$ and $j$.
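The exponential-integral bound invoked above, $|\mathrm{Ei}(-x)| \leq e^{-x}/x$ for $x>0$, follows from $-\mathrm{Ei}(-x)=\int_x^\infty e^{-t}t^{-1}\,dt \leq x^{-1}\int_x^\infty e^{-t}\,dt$. As an aside, it can also be checked numerically; the following sketch (illustrative only) approximates $E_1(x)=-\mathrm{Ei}(-x)$ by a crude trapezoidal rule, truncating the integral at $t=x+40$, where the integrand is negligible for the values of $x$ used.

```python
import math

# E1(x) = -Ei(-x) = int_x^infty exp(-t)/t dt, via composite trapezoidal rule
def E1(x, tail=40.0, n=100000):
    f = lambda t: math.exp(-t) / t
    a, b = x, x + tail
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return s * h

# the bound |Ei(-x)| = E1(x) <= exp(-x)/x used in the estimate above
for x in (1.0, 2.0, 5.0, 10.0):
    assert E1(x) <= math.exp(-x) / x
```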
This, together with the bound \eqref{eq: N-P deriv bound} and the Fourier expansion \eqref{Four exp Nieb at 1} yields $$
\frac{\partial}{\partial s} T_3(Y,s;w)\Big|_{s=1}\ll
\sum_{j\in\mathbb{Z}\smallsetminus\{0\}}(|n| + |j|) | B_j(1;-n)| \exp\left( -2\pi Y (|n|+|j|) + 2\pi|j| {\mathrm{Im}}(w)\right). $$ It remains to estimate the sum on the right-hand side of the above equation as $Y\to\infty$. The bounds for the Kloosterman sum zeta function, as stated on page 75 of \cite{Iwa02}, yield bounds for $B_j(1;-n)$ for $j\neq 0$. Specifically, one has that $$
B_j(1;-n)\ll \exp\left(\frac{4\pi \sqrt{|jn|}}{c_\Gamma}\right), $$ where $c_\Gamma$ is a certain positive constant depending on the group $\Gamma$; in fact, $c_{\Gamma}$ is equal to the minimal positive lower-left entry of a matrix from $\Gamma$. Also, the implied constant in the bound for $B_j(1;-n)$ is independent of $j$. Therefore $$
\frac{\partial}{\partial s}\left. T_3(Y,s;w)\right|_{s=1}\ll \sum_{j\in\mathbb{Z}\smallsetminus\{0\}} (|n| + |j|)\exp\left( -2\pi \left( (|j|+|n|)Y - 2\sqrt{|jn|}/c_\Gamma -|j| {\mathrm{Im}}(w) \right) \right). $$ For $Y > 2{\mathrm{Im}}(w) + 2\sqrt{n}/c_\Gamma$, this series over $j$ is uniformly convergent and is $o(1)$ as $Y\to\infty$. In other words, \begin{equation}\label{T3_limit}
\lim_{Y\to\infty}\frac{\partial}{\partial s}\left. T_3(Y,s;w)\right|_{s=1} =0. \end{equation} Combining \eqref{T3_limit} with \eqref{Three terms pre-comp}, \eqref{term1} and \eqref{term2}, we have that \begin{equation*}
\langle F_{-n}(z,1), \overline{\lim_{s\to 1} \left(G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)\right)} \rangle = \frac{\partial}{\partial s}\left. F_{-n}(w,s) \right|_{s=1}, \end{equation*} which completes the proof of \eqref{main f-la}.
\subsection{Proof of Corollary \ref{Rohr-Jensen}}
The proof of Corollary \ref{Rohr-Jensen} is a combination of Theorem \ref{thm:main} and the factorization theorem as stated in Proposition \ref{prop: factorization}. The details are as follows.
To begin, we shall prove formula \eqref{log norm basic}. Starting with \eqref{ell kroneck limit one cusp2}, which we write as $$
\log\left( y^k |f(z)| \right) = kP(z) - \sum_{w\in \mathcal{F}} \frac{\mathrm{ord}_w(f)}{\mathrm{ord}(w)} \lim_{s\to 0} \frac{1}{s} \mathrm{ord}(w)\mathcal{E}^{\mathrm{ell}}_{w}(z,s), $$ we can express $\lim_{s\to 0} \frac{1}{s}\mathrm{ord}(w) \mathcal{E}^{\mathrm{ell}}_{w}(z,s)$ in terms of the resolvent kernel. Specifically, using \eqref{Ell Green connection}, we have that \begin{equation} \label{ell kroneck limit one cusp3}
\log\left( y^k |f(z)| \right)=kP(z) + \sum_{w\in \mathcal{F}} \frac{\mathrm{ord}_w(f)}{\mathrm{ord}(w)} \lim_{s\to 0} \left(\frac{2^{s}\sqrt{\pi} \Gamma(s-1/2)}{\Gamma(s+1)}(2s-1)G_s(z,w)\right). \end{equation} By applying the functional equation for the Green's function, see Theorem 3.5 of \cite{He83} on pages 250--251, we get \begin{align*} \lim_{s\to 0}\frac{2^{s}\sqrt{\pi} \Gamma(s-1/2)}{\Gamma(s+1)}(2s-1)G_s(z,w) = \lim_{s\to 1}&\left(\frac{2^{1-s}\sqrt{\pi} \Gamma(1/2-s)}{\Gamma(2-s)}\left((1-2s)G_s(z,w)\right.\right. \\&- \left.\left.\frac{}{}\mathcal{E}_\infty^{\mathrm{par}}(z,1-s)\mathcal{E}_\infty^{\mathrm{par}}(w,s)\right)\right). \end{align*} From the Kronecker limit formula \eqref{KronLimas s to 0} and standard Taylor series expansion of the gamma function we immediately deduce that \begin{multline*} \lim_{s\to 0}\frac{2^{s}\sqrt{\pi} \Gamma(s-1/2)}{\Gamma(s+1)}(2s-1)G_s(z,w)= \lim_{s\to 1}2\pi (-1+(s-1)(2-\log 2))\cdot \\ \cdot \left[2(1-s)G_s(z,w) - (G_s(z,w)+\mathcal{E}_\infty^{\mathrm{par}}(w,s)) - P(z)(1-s)\mathcal{E}_\infty^{\mathrm{par}}(w,s)\right]. \end{multline*} According to \cite{Iwa02}, p. 106, the point $s=1$ is the simple pole of $G_s(z,w)$ with the residue $-1/\vol_{\hyp}(M)$ (note that our $G_s(z,w)$ differs from the automorphic Green's function from \cite{Iwa02} by a factor of $-1$). 
Therefore, the Kronecker limit formula \eqref{KronLimitPArGen} yields the following equation \begin{align} \label{Limit of G as s to 0} \lim_{s\to 0}\frac{2^{s}\sqrt{\pi} \Gamma(s-1/2)}{\Gamma(s+1)}(2s-1)G_s(z,w)&= -\frac{2\pi}{\vol_{\hyp}(M)}P(z)- \frac{4\pi}{\vol_{\hyp}(M)} \\& + 2\pi\lim_{s\to 1}\left(G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)\right).\notag \end{align} Recall that the classical Riemann-Roch theorem implies that $$ k\frac{\vol_{\hyp}(M)}{2\pi}=\sum_{w\in \mathcal{F}} \frac{\mathrm{ord}_w(f)}{\mathrm{ord}(w)}; $$ hence, after multiplying \eqref{Limit of G as s to 0} by $\frac{\mathrm{ord}_w(f)}{\mathrm{ord}(w)} $ and summing over all $w\in\mathcal{F}$ in \eqref{ell kroneck limit one cusp3}, we arrive at \eqref{log norm basic}, as claimed.
Having proved \eqref{log norm basic}, observe that the left-hand side of \eqref{log norm basic} is real valued. As proved in \cite{Ni73}, $F_{-n}(z,1)$ is orthogonal to constant functions. Therefore, in order to prove \eqref{main f-la - corollary} one simply applies \eqref{main f-la}, which was established above.
\subsection{Proof of Corollary \ref{cor:j-function}} In order to prove \eqref{main f-la - corollary 2}, it suffices to compute $\langle 1, \overline{\lim_{s\to 1}(G_s(z,w) + \mathcal{E}_{\infty}^{\mathrm{par}}(w,s))} \rangle $, which we will write as $$ \int\limits_{\mathcal{F}}\lim_{s\to 1}\left(G_s(z,w) + \frac{1}{\vol_{\hyp}(M)(s-1)}+ \mathcal{E}_{\infty}^{\mathrm{par}}(w,s)-\frac{1}{\vol_{\hyp}(M)(s-1)}\right)d\mu_{\hyp}(z). $$ From its spectral expansion, the function $\lim_{s\to 1}\left(G_s(z,w) + \frac{1}{\vol_{\hyp}(M)(s-1)}\right)$ is $L^2$ on $\mathcal{F}$ and orthogonal to constant functions. Therefore, by using the Laurent series expansion \eqref{KronLimitPArGen}, we get that $$ \langle 1, \overline{\lim_{s\to 1}(G_s(z,w) + \mathcal{E}_{\infty}^{\mathrm{par}}(w,s))} \rangle =\vol_{\hyp}(M)\left(\beta - \frac{P(w)}{\vol_{\hyp}(M)}\right), $$ which completes the proof.
\subsection{Proof of Theorem \ref{thm:generating series}}\label{sect proof of thm 4}
Our starting point is the Fourier expansion of the sum $G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)$. Namely, for ${\mathrm{Re}}(s)>1$ and ${\mathrm{Im}}(w)$ sufficiently large, we have that \begin{align}\label{G+Epar} G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)&=\left(1-\frac{y^{1-s}}{2s-1}\right)\mathcal{E}_\infty^{\mathrm{par}}(w,s) \notag
\\&-\sum_{k\in\mathbb{Z}\setminus\{0\}} \sqrt{y}K_{s-\tfrac{1}{2}}(2\pi |k| y)F_{-k}(w,s)e(kx). \end{align}
If ${\mathrm{Im}}(z)$ is sufficiently large, the exponential decay of $K_{s-\tfrac{1}{2}}(2\pi |k| y)$ is sufficient to ensure that the right-hand side of \eqref{G+Epar} is holomorphic at $s=1$. The Laurent series expansion of $\mathcal{E}_\infty^{\mathrm{par}}(w,s)$, combined with the expansions $y^{1-s}=1+(1-s)\log y + \tfrac{1}{2}(1-s)^2 \log^2y + O((1-s)^3)$ and $(2s-1)^{-1}= (1+2(s-1))^{-1} = 1-2(s-1)+4(s-1)^2 + O((s-1)^3)$, yields \begin{multline*}
\frac{\partial}{\partial s}\left.\left(1-\frac{y^{1-s}}{2s-1}\right)\mathcal{E}_\infty^{\mathrm{par}}(w,s) \right|_{s=1} =\frac{1}{\vol_{\hyp}(M)}\left[ -4+2\beta\vol_{\hyp}(M)-2P(w)\right. \\ \left. + \log y \left(\beta\mathrm{vol}_{\hyp}(M) - P(w)-2\right) - \tfrac{1}{2}\log^2y \right]. \end{multline*} Additionally, for ${\mathrm{Im}}(z)$ sufficiently large, the series on the right-hand side of \eqref{G+Epar} is a uniformly convergent series of functions which are holomorphic at $s=1$. As such, we may differentiate the series term by term. By employing formulas 8.469.3 and 8.486.21 of \cite{GR07}, we deduce for $k\neq0$ that \begin{multline*}
\frac{\partial}{\partial s}\left. \left(\sqrt{y}K_{s-\tfrac{1}{2}}(2\pi |k| y)F_{-k}(w,s) \right) \right|_{s=1}=\frac{e^{-2\pi|k|y}}{2\sqrt{|k|}}\cdot \\ \cdot\left[ \frac{\partial}{\partial s}\left. F_{-k}(w,s) \right|_{s=1}- F_{-k}(w,1)e^{4\pi|k|y} \mathrm{Ei}(-4\pi |k|y)\right], \end{multline*} where $\mathrm{Ei}(x)$ denotes the exponential integral function; see section 8.21 of \cite{GR07}. From this, we get the expression that \begin{multline*}
\frac{\partial}{\partial s}\left(G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)\right)\Big|_{s=1} = (\log y+2) \left(\beta - \frac{P(w)+2}
{\mathrm{vol}_{\hyp}(M)}\right) - \frac{\log^2 y}{2\mathrm{vol}_{\hyp}(M)} \\- \sum_{k\in\mathbb{Z}\setminus\{0\}}\frac{1}{2\sqrt{|k|}}\left[ \frac{\partial}{\partial s}
\left. F_{-k}(w,s) \right|_{s=1}- F_{-k}(w,1)e^{4\pi|k|y} \mathrm{Ei}(-4\pi |k|y)\right]e^{2\pi i kx - 2\pi|k|y}. \end{multline*} Let us now compute the derivative $\frac{\partial}{\partial z}$ of the above expression. After multiplying by $i=\sqrt{-1}$, we get that \begin{multline*}
\mathcal{G}_w(z)=\frac{1}{y}\left(\beta - \frac{P(w)+2}{\mathrm{vol}_{\hyp}(M)}\right) - \frac{\log y}{y \mathrm{vol}_{\hyp}(M)} +\sum_{k\geq 1} 2\pi \sqrt{k} \frac{\partial}{\partial s}\left. F_{-k}(w,s) \right|_{s=1}q_z^k \\ +\sum_{k\geq 1}\frac{F_{-k}(w,1)}{2\sqrt{k}y}q_z^k -\sum_{k\leq -1} 2\pi \sqrt{|k|} F_{-k}(w,1)
\mathrm{Ei}(4\pi ky) q_z^k + \sum_{k\leq -1}\frac{F_{-k}(w,1)}{2\sqrt{|k|}y}e^{2\pi i k(x-iy)}. \end{multline*}
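As a quick illustrative check of the elementary Taylor expansions used at the start of this subsection (not part of the argument), one can verify numerically, with $\varepsilon = s-1$, that $y^{1-s}=y^{-\varepsilon}$ and $(2s-1)^{-1}=(1+2\varepsilon)^{-1}$ agree with their second-order truncations up to an $O(\varepsilon^3)$ error:

```python
import math

# second-order truncations of the expansions used above, with eps = s - 1
def y_pow_trunc(y, eps):
    L = math.log(y)
    return 1.0 - eps * L + 0.5 * (eps * L) ** 2      # y^{-eps} to second order

def inv_trunc(eps):
    return 1.0 - 2.0 * eps + 4.0 * eps ** 2          # (1+2*eps)^{-1} to second order

y, eps = 3.7, 1e-3
assert abs(y ** (-eps) - y_pow_trunc(y, eps)) < 1e-8       # error is O(eps^3)
assert abs(1.0 / (1.0 + 2.0 * eps) - inv_trunc(eps)) < 1e-7
```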
The proof of the assertion that $\sum_{k\geq 1} 2\pi \sqrt{k} \frac{\partial}{\partial s}\left. F_{-k}(w,s) \right|_{s=1}q_z^k$ is the holomorphic part of $\mathcal{G}_w(z)$ follows from the uniqueness of the analytic continuation in $z$.
It remains to prove that $\mathcal{G}_w(z)$ is a weight two biharmonic Maass form. Since $\mathcal{G}_w(z)$ is obtained by taking the derivative $\frac{\partial}{\partial z}$ of a $\Gamma$-invariant function, it is clear that $\mathcal{G}_w(z)$ has weight two in $z$. Moreover, the straightforward computation that $$
iy^2 \frac{\partial}{\partial \bar z}\mathcal{G}_w(z)=\Delta_{\hyp}\left(\frac{\partial}{\partial s}\left(G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)\right)\Big|_{s=1}\right)=-
\lim_{s\to 1} \left(G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)\right),
$$ combined with the fact that $\Delta_{\hyp}\left( \lim_{s\to 1} \left(G_s(z,w) + \mathcal{E}_\infty^{\mathrm{par}}(w,s)\right)\right)=0$ proves that $\mathcal{G}_w(z)$ is biharmonic.
\section{Examples }\label{sect:examples}
\subsection{The full modular group}
Throughout this subsection, let $\Gamma=\mathrm{PSL}(2,\mathbb{Z})$, in which case the parabolic Kronecker limit function $P(w)$ can be expressed, in the notation of \cite{BK19}, as $$P(w)=P_{\mathrm{PSL}(2,\mathbb{Z})}(w)=\log(|\eta(w)|^4 \cdot {\mathrm{Im}}(w))=\mathbbm{j}(w)-1,$$ where $\eta(w)$ is Dedekind's eta function and the last equality follows from the definition of $\mathbbm{j}_0(w)=\mathbbm{j}(w)$ given on p. 1 of \cite{BK19}.
In this setting, Corollary \ref{Rohr-Jensen}, when combined with \eqref{j_via_F} and Rohrlich's theorem \eqref{rohrl thm}, yields that \begin{equation}\label{prod with jn}
\langle j_n,\log||f||\rangle =2\pi\sqrt{n}\left( - 2\pi \sum_{w\in \mathcal{F}} \frac{\mathrm{ord}_w(f)}{\mathrm{ord}(w)}\left(\frac{\partial}{\partial s}\left. F_{-n}(w,s) \right|_{s=1} -c_nP(w)\right)\right). \end{equation} Moreover, equating the constant terms in the Fourier series expansions for $F_{-n}(z,1)$ and $j_n(z)$, one easily deduces that $2\pi\sqrt{n}c_n=24\sigma(n)$. This proves Theorem 1.2 of \cite{BK19} and shows that, in the notation of \cite{BK19} one has \begin{equation}\label{jn for BK}
\mathbbm{j}_n(w)=2\pi\sqrt{n}\frac{\partial}{\partial s}\left. F_{-n}(w,s) \right|_{s=1} -24\sigma(n)P(w), \end{equation} an identity which provides a description of $\mathbbm{j}_n(w)$, for $n\geq 1$ different from the one given by formula (3.10) of \cite{BK19}.
Furthermore, from the identity \eqref{delta of deriv of N-P}, combined with the fact that $ \Delta_{\hyp}P(w)=1$, which is a straightforward implication of the Kronecker limit formula \eqref{KronLimitPArGen}, it follows that $$\Delta_{\hyp}\mathbbm{j}_n(w)= 2\pi\sqrt{n}\left( F_{-n}(w,1)-c_n \right)=j_n(w),$$ which agrees with formula (3.10) of \cite{BK19}.
Reasoning as above, we easily see that Theorem 1.3 of \cite{BK19} follows from Corollary \ref{cor:j-function} with $g(z)=j_n(z)$.
Finally, in view of \eqref{prod with jn}, Theorem \ref{thm:generating series} is closely related to the first part of Theorem 1.4 of \cite{BK19}. Namely, for large enough ${\mathrm{Im}}(z)$, in the notation of \cite{BK19} \begin{align*} \mathbb{H}_w(z)&=\sum_{n\geq 0} \mathbbm{j}_n(w) q_z^n=\mathbbm{j}_0(w)+\sum_{n\geq 1}\left(2\pi\sqrt{n}\frac{\partial}{\partial s}\left. F_{-n}(w,s)
\right|_{s=1} -24\sigma(n)P(w)\right)q_z^n\\&=1+P(w)\left(1-24\sum_{n\geq 1}\sigma(n)q_z^n\right) + \sum_{n\geq 1} 2\pi \sqrt{n} \frac{\partial}{\partial s}
\left. F_{-n}(w,s) \right|_{s=1}q_z^n. \end{align*}
Theorem \ref{thm:generating series} implies that the function $\mathbb{H}_w(z)$ is the holomorphic part of the weight two biharmonic Maass form $$\widehat{\mathbb{H}}_w(z)=P(w)\widehat{E}_2(z)+\mathcal{G}_w(z),$$ where $$ \widehat{E}_2(z)=1-24\sum_{n\geq 1}\sigma(n)q_z^n - \frac{3}{\pi y} $$ is the weight two completed Eisenstein series for the full modular group.
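For illustration (and as a trivially checkable aside), the first few coefficients $-24\sigma(n)$ of the $q$-expansion of the holomorphic part $1-24\sum_{n\geq 1}\sigma(n)q_z^n$ of $\widehat{E}_2(z)$ displayed above can be tabulated directly:

```python
# divisor sum sigma(n) = sum of the positive divisors of n
def sigma(n):
    return sum(d for d in range(1, n + 1) if n % d == 0)

# coefficients of 1 - 24*sum_{n>=1} sigma(n) q^n, up to q^5
coeffs = [1] + [-24 * sigma(n) for n in range(1, 6)]
assert coeffs == [1, -24, -72, -96, -168, -144]
```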
\subsection{Genus zero Atkin-Lehner groups}
Let $N=\prod_{\nu=1}^r p_\nu$ be a positive square-free integer which is one of the $44$ possible values for which the quotient space $Y_{N}^{+} =\overline{ \Gamma_{0}^{+}(N)}\backslash \HH$ has genus zero; see \cite{Cum04}, as well as \cite{JST14}, for a list of such $N$. Let $\Delta_{N}(z)$ be the Kronecker limit function on $Y_{N}^{+}$ associated to the parabolic Eisenstein series; it is given by formula \eqref{DeltaN} above.
In the notation of Section \ref{sect Atkin Leh groups_KLF}, the function $\Delta_N(z)(j_N^+(z)-j_N^+(w))^{\nu_N}$ is the weight $k_N=2^{r-1}\ell_N$ holomorphic modular form whose constant term in its $q$-expansion is $1$. Furthermore, this function vanishes only at the point $z=w$, and, by the Riemann-Roch formula, its order of vanishing is equal to $k_N \vol_{\hyp}(Y_N^{+})\cdot\mathrm{ord}(w)/(4\pi )$.
When $N=1$, one has $k_1=12$, $\ell_1=24$, $\nu_1=1$ and $\vol_{\hyp}(Y_N^{+})=\pi/3$, hence $\Delta_1(z)(j_1^+(z)-j_1^+(w))^{\nu_1}$ equals the prime form $(\Delta(z)(j(z)-j(w)))^{1/\mathrm{ord}(w)}$ taken to the power $\mathrm{ord}(w)$; see page 3 of \cite{BK19}.
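The Riemann-Roch count just described is easy to check numerically; the following sketch (an illustration only) evaluates the multiplier $k_N \vol_{\hyp}(Y_N^{+})/(4\pi)$ of $\mathrm{ord}(w)$ for the two sets of values quoted in this paper:

```python
import math

# multiplier of ord(w) in the Riemann-Roch vanishing-order count
def order_factor(k, vol):
    return k * vol / (4.0 * math.pi)

# N = 1: weight k_1 = 12 and vol_hyp = pi/3 give the factor 1,
# matching the order ord(w) of vanishing of Delta(z)(j(z) - j(w))
assert abs(order_factor(12, math.pi / 3.0) - 1.0) < 1e-12

# N = 37 (used later in the paper): a weight 24 form with
# vol_hyp = 19*pi/3 gives the factor 38
assert abs(order_factor(24, 19.0 * math.pi / 3.0) - 38.0) < 1e-12
```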
For any integer $m>1$ the $q$-expansion of the form $j_N^+|T_{m}(z)$ is $q_z^{-m}+O(q_z)$; hence there exists a constant $C_{m,N}$ such that
$j_N^+|T_{m}(z)=2\pi \sqrt{m} F_{-m}(z,1) + C_{m,N}$. The constant $C_{m,N}$ can be explicitly evaluated in terms of $m$ and $N$ by equating the constant terms in the $q$-expansions. Upon doing so, one obtains, using equation \eqref{B0 for A-L groups}, that \begin{align*} C_{m,N}=-2\pi \sqrt{m} B_{0,N}^+(1;-m)&=-24\sigma(m)\prod\limits_{\nu=1}^r \left(1- \frac{p_\nu^{\alpha_{p_\nu}(m)+1}(p_\nu -1) }{\left(p_\nu^{\alpha_{p_\nu}(m)+1} - 1\right)(p_\nu+1)}\right)\\&=-24\sigma(m) \prod\limits_{\nu=1}^r \left(1- \kappa_{m}(p_\nu)\right), \end{align*} where we simplified the notation by denoting the second term in the product over $\nu$ by $\kappa_m(p_\nu)$. We can now apply Corollary \ref{cor:j-function} with $$
g(z)= j_N^+|T_{m}(z)=2\pi \sqrt{m} F_{-m}(z,1)-24\sigma(m)\prod\limits_{\nu=1}^r \left(1- \kappa_{m}(p_\nu)\right) $$ and $f(z)=\Delta_N(z)(j_N^+(z)-j_N^+(w))^{\nu_N}$. Corollary \ref{cor:j-function} becomes the statement that \begin{align*}
\langle j_N^+|T_{m}(z), &\log(y^{\tfrac{k_N}{2}} |\Delta_N(z)(j_N^+(z)-j_N^+(w))^{\nu_N}|) \rangle
\\& = -k_N \vol_{\hyp}(Y_N^{+})\left[\pi\sqrt{m}\left.\frac{\partial}{\partial s}F_{-m}(w,s)\right|_{s=1} \right. \\& \left. +12\sigma(m)\prod\limits_{\nu=1}^r
\left(1- \kappa_{m}(p_\nu)\right) \left(\beta_N\vol_{\hyp}(Y_N^{+})-\log\left( |\Delta_N(w)|^{2/k_N} \cdot {\mathrm{Im}} (w)\right) -2\right)\right], \end{align*} where $\beta_N$ is given by \eqref{betaN}. In this form, we have obtained an alternate proof and generalization of formula (1.2) from \cite{BK19}, which is the special case $N=1$.
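The constant $C_{m,N}$ above is elementary to evaluate; the following sketch (illustrative only) implements the displayed formula in exact rational arithmetic and checks it against the $N=1$ case, where the product over $\nu$ is empty:

```python
from fractions import Fraction

def sigma(m):
    return sum(d for d in range(1, m + 1) if m % d == 0)

def alpha(m, p):                 # p-adic valuation alpha_p(m)
    a = 0
    while m % p == 0:
        m //= p
        a += 1
    return a

def kappa(m, p):                 # the factor kappa_m(p) defined above
    a = alpha(m, p)
    return Fraction(p ** (a + 1) * (p - 1), (p ** (a + 1) - 1) * (p + 1))

def C(m, primes):                # C_{m,N} for square-free N = prod(primes)
    prod = Fraction(1)
    for p in primes:
        prod *= 1 - kappa(m, p)
    return -24 * sigma(m) * prod

# N = 1 is the empty product, recovering C_{m,1} = -24*sigma(m):
assert C(5, []) == -144
# for p not dividing m one has alpha_p(m) = 0, so kappa_m(p) = p/(p+1):
assert kappa(5, 2) == Fraction(2, 3)
```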
\subsection{A genus one example}
Let us consider the case when $\Gamma = \overline{\Gamma_{0}(37)^{+}}$. The choice of $N=37$ is significant since this level corresponds to the smallest square-free integer $N$ such that $Y_{N}^{+}$ is genus one. From Proposition 11 of \cite{JST13}, we have that $\vol_{\hyp}(Y_{37}^{+})= 19\pi/3$ and $$ \beta_{37}=\frac{3}{19\pi}\left(\frac{10}{19}\log37+2-2\log(4\pi)-24\zeta'(-1)\right). $$ The function field generators are $x_{37}^+(z)=q_z^{-2} + 2q_z^{-1}+ O(q_z)$ and $y_{37}^+(z)=q_z^{-3} + 3q_z^{-1}+ O(q_z)$, as displayed in Table 5 of \cite{JST13}. The generators $x_{37}^+(z)$ and $y_{37}^+(z)$ satisfy the cubic relation $y^2 - x^3 + 6xy - 6x^2 + 41y + 49x + 300 = 0$.
The functions $x_{37}^+(z)$ and $y_{37}^+(z)$ can be expressed in terms of the Niebur-Poincar\'e series by comparing their $q$-expansions. The resulting expressions are \begin{align*} x_{37}^+(z)&=2\pi[\sqrt{2}F_{-2}(z,1)+ 2F_{-1}(z,1)]-2\pi(\sqrt{2}B_{0,37}^+(1;-2)+2B_{0,37}^+(1;-1))\\&=2\pi[\sqrt{2}F_{-2}(z,1)+ 2F_{-1}(z,1)]-\frac{60}{19} \end{align*} and \begin{align*} y_{37}^+(z)&=2\pi[\sqrt{3}F_{-3}(z,1)+ 3F_{-1}(z,1)]-2\pi(\sqrt{3}B_{0,37}^+(1;-3)+3B_{0,37}^+(1;-1))\\&=2\pi[\sqrt{3}F_{-3}(z,1)+ 3F_{-1}(z,1)]-\frac{84}{19}. \end{align*}
It is important to note that $x_{37}^+(z)$ has a pole of order two at $z=\infty$, i.e., its $q$-expansion begins with $q_z^{-2}$.
As such, $x_{37}^+(z)$ is a linear transformation of the Weierstrass $\wp$-function, in the coordinates of the upper half plane, associated to the elliptic curve obtained by compactifying the space $Y_{37}^{+}$. Hence, there are three distinct points $\{w\}$ on $Y_{37}^{+}$, corresponding to the two torsion points under the group law, such that $x_{37}^+(z)-x_{37}^+(w)$ vanishes as a function of $z$ only when $z=w$. The order of vanishing necessarily is equal to two. The cusp form $\Delta_{37}(z)$ vanishes at $\infty$ to order $19$. Therefore, for such $w$, the form $$ f_{37,w}(z)=\Delta_{37}^2(z)(x_{37}^+(z)-x_{37}^+(w))^{19} $$ is a weight $2k_{37}=24$ holomorphic form. The constant term in its $q$-expansion is equal to $1$, and $f_{37,w}(z)$ vanishes for points $z \in \mathcal{F}$ only when $z=w$. The order of vanishing of $f_{37,w}(z)$ at $z=w$ is $38\cdot \mathrm{ord}(w)$.
With all this, we can apply Corollary \ref{cor:j-function}. The resulting formulas are that \begin{align*} \langle x_{37}^+, \log(\Vert f_{37,w}\Vert ) \rangle &= -152\pi^2 \left(\frac{\partial}{\partial s}\left. (\sqrt{2}F_{-2}(w,s) +
2F_{-1}(w,s))\right|_{s=1}\right)\\& +240\pi\left(\log\left(|\eta(w)\eta(37w)|^2\cdot{\mathrm{Im}} (w)\right) -\frac{10}{19}\log37+2\log(4\pi) +24\zeta'(-1)\right) \end{align*} and \begin{multline*}
\langle y_{37}^+, \log (\Vert f_{37,w}\Vert ) \rangle = - 152\pi^2 \left(\frac{\partial}{\partial s}\left. (\sqrt{3}F_{-3}(w,s) +3F_{-1}(w,s))\right|_{s=1}\right)\\ + 336\pi\left(\log\left(|\eta(w)\eta(37w)|^2\cdot{\mathrm{Im}} (w)\right) -\frac{10}{19}\log37+2\log(4\pi) +24\zeta'(-1)\right). \end{multline*}
Of course, one does not need to assume that $w$ corresponds to a two torsion point. In general, Corollary \ref{cor:j-function} yields an expression where the right-hand side is a sum of two terms, and the corresponding factor in front would be one-half of the factors above.
\subsection{A genus two example}
Consider the level $N=103$. In this case, $\vol_{\hyp}(Y_{103}^{+})= 52\pi/3$ and the function field generators are $x_{103}^+(z)=q_z^{-3} + q_z^{-1} + O(q_z)$ and $y_{103}^+(z)=q_z^{-4} + 3q_z^{-2} + 3q_z^{-1} + O(q_z)$, as displayed in Table 7 of \cite{JST13}. The generators $x_{103}^+(z)$ and $y_{103}^+(z)$ satisfy the polynomial relation $y^3 - x^4 - 5yx^2 - 9x^3 + 16y^2 - 21yx - 60x^2 + 65y - 164x + 18 = 0$. The surface $Y_{103}^{+}$ has genus two.
From Theorem 6 of \cite{Ni73}, we can write $x_{103}^+(z)$ and $y_{103}^+(z)$ in terms of the Niebur-Poincar\'e series. Explicitly, we have that \begin{align*} x_{103}^+(z)&=2\pi[\sqrt{3}F_{-3}(z,1)+ F_{-1}(z,1)]-2\pi(\sqrt{3}B_{0,103}^+(1;-3)+B_{0,103}^+(1;-1))\\&=2\pi[\sqrt{3}F_{-3}(z,1)+ F_{-1}(z,1)]-\frac{15}{13} \end{align*} and \begin{align*} y_{103}^+(z)&=2\pi[\sqrt{4}F_{-4}(z,1)+ 3\sqrt{2}F_{-2}(z,1) +3F_{-1}(z,1)]\\&-2\pi(\sqrt{4}B_{0,103}^+(1;-4)+3\sqrt{2}B_{0,103}^+(1;-2)+3B_{0,103}^+(1;-1))\\&=2\pi[2F_{-4}(z,1)+ 3\sqrt{2}F_{-2}(z,1) +3F_{-1}(z,1)]-\frac{57}{13}. \end{align*}
The order of vanishing of $\Delta_{103}(z)$ at the cusp is $\nu_{103}=(12\cdot 52\pi/3)/(4\pi)=52$. Therefore, for an arbitrary, fixed $w\in\HH$, the form $$f_{103,w}(z)=\Delta_{103}^3(z)(x_{103}^+(z)-x_{103}^+(w))^{52}$$ is the weight $3k_{103}=36$ holomorphic form which has constant term in the $q$-expansion equal to $1$. Let $\{w_{1}, w_{2}, w_{3}\}$ be the three, not necessarily distinct, points in the fundamental domain $\mathcal{F}$ where $(x_{103}^+(z)-x_{103}^+(w))$ vanishes. One of the points $w_{j}$ is equal to $w$. The form $f_{103,w_j}(z)$ vanishes at $z=w_{j}$ to order $52 \cdot\mathrm{ord}(w_j)$, $j=1,2,3$.
From Section \ref{sect Atkin Leh groups_KLF}, we have that $$ \beta_{103}=\frac{3}{52\pi}\left(\frac{53}{104}\log103+2-2\log(4\pi)-24\zeta'(-1)\right) $$
and $P_{103}(z)=\log\left(|\eta(z)\eta(103z)|^2\cdot {\mathrm{Im}} (z)\right)$. Let us now apply Corollary \ref{cor:j-function} with $g(z)= x_{103}^+(z)$, in which case $c(g)=-15/13$. In doing so, we get that
\begin{align*} \langle x_{103}^+, \log (\Vert f_{103,w}\Vert ) \rangle &= - 208\pi^2
\sum\limits_{j=1}^{3}\left(\frac{\partial}{\partial s}\left. (\sqrt{3}F_{-3}(w_j,s) +F_{-1}(w_j,s))\right|_{s=1}\right)
\\& +120\pi\sum\limits_{j=1}^{3}\left(\log\left(|\eta(w_j)\eta(103w_j)|^2\cdot{\mathrm{Im}} (w_{j})\right)\right) \\&-360\pi\left(\frac{53}{104}\log103-2\log(4\pi) -24\zeta'(-1)\right). \end{align*} Similarly, we can take $g(z)= y_{103}^+(z)$, in which case $c(g)=-57/13$ and we get that \begin{align*} \langle y_{103}^+, \log (\Vert f_{103,w}\Vert ) \rangle &= - 208\pi^2 \sum\limits_{j=1}^{3}
\left(\frac{\partial}{\partial s}\left. (2F_{-4}(w_{j},s)+ 3\sqrt{2}F_{-2}(w_{j},s) +3F_{-1}(w_{j},s))\right|_{s=1}\right)
\\& +456\pi\sum\limits_{j=1}^{3}\left(\log\left(|\eta(w_{j})\eta(103w_{j})|^2\cdot{\mathrm{Im}} (w_{j})\right)\right) \\& -1368\pi\left(\frac{53}{104}\log103-2\log(4\pi) -24\zeta'(-1)\right). \end{align*}
\subsection{An alternative formulation} In the above discussion, we have written the constant $\beta$ and the Kronecker limit function $P$ separately. However, it should be pointed out that in all instances these terms appear in the combination $\beta \vol_{\hyp}(M)- P(z)$. From \eqref{KronLimitPArGen}, we can write $$ \beta \vol_{\hyp}(M)- P(z) = \vol_{\hyp}(M)\, \textrm{\rm CT}_{s=1} \mathcal{E}^{\mathrm{par}}_{\infty}(z,s), $$ where $\textrm{\rm CT}_{s=1}$ denotes the constant term in the Laurent expansion at $s=1$. It is possible that such a notational change can provide additional insight into the formulas presented above.
\noindent James Cogdell \\ Department of Mathematics \\ Ohio State University \\ 231 W. 18th Ave \\ Columbus, OH 43210, U.S.A. \\ e-mail: cogdell@math.ohio-state.edu
\noindent Jay Jorgenson \\ Department of Mathematics \\ The City College of New York \\ Convent Avenue at 138th Street \\ New York, NY 10031 U.S.A. \\ e-mail: jjorgenson@mindspring.com
\noindent Lejla Smajlovi\'c \\ Department of Mathematics \\ University of Sarajevo\\ Zmaja od Bosne 35, 71 000 Sarajevo\\ Bosnia and Herzegovina\\ e-mail: lejlas@pmf.unsa.ba \end{document}
\begin{document}
\title{Pre-derivations and description of non-strongly nilpotent filiform Leibniz algebras } \author{K.K. Abdurasulov, A.Kh. Khudoyberdiyev, M. Ladra and A.M. Sattarov} \address{[A.Kh. Khudoyberdiyev] National University of Uzbekistan, Institute of Mathematics Academy of Sciences of Uzbekistan, Tashkent, 100170, Uzbekistan.} \email{khabror@mail.ru} \address{[K.K. Abdurasulov -- A.M. Sattarov] Institute of Mathematics Academy of Sciences of Uzbekistan, Tashkent, 100170, Uzbekistan.} \email{abdurasulov0505@mail.ru --- saloberdi90@mail.ru} \address{[M. Ladra] Department of Mathematics, Institute of Mathematics, University of Santiago de Compostela, 15782, Spain.} \email {manuel.ladra@usc.es}
\subjclass[2010]{17A32, 17A36, 17B30}
\keywords{Lie algebra, Leibniz algebra, derivation, pre-derivation, nilpotency, characteristically nilpotent algebra, strongly nilpotent algebra}
\begin{abstract} In this paper we investigate pre-derivations of filiform Leibniz algebras. Recall that the set of filiform Leibniz algebras of fixed dimension can be decomposed into three disjoint families. We describe the pre-derivations of filiform Leibniz algebras for the first and second families. We find sufficient conditions under which filiform Leibniz algebras are strongly nilpotent. Moreover, for the first and second families, we give a description of the characteristically nilpotent algebras which are non-strongly nilpotent. \end{abstract}
\maketitle
\section{Introduction}
The notion of Leibniz algebra was introduced in \cite{Lod} as a non-antisymmetric generalization of Lie algebras. During the last 25 years, the theory of Leibniz algebras has been actively studied and many results of the theory of Lie algebras have been extended to Leibniz algebras. Since the study of derivations and automorphisms of a Lie algebra plays an essential role in the structure theory of algebras, it is a natural question whether the corresponding results for Lie algebras can be extended to more general objects.
In 1955, Jacobson \cite{Jac2} proved that a Lie algebra over a field of characteristic zero admitting a non-singular (invertible) derivation is nilpotent. However, the related question of whether the converse of this statement holds remained open until the work of Dixmier and Lister \cite{Dix}, where an example of a nilpotent Lie algebra all of whose derivations are nilpotent (and hence singular) was constructed. Such Lie algebras are called characteristically nilpotent Lie algebras. The papers \cite{CaNu, Kha1}, among others, are devoted to the investigation of characteristically nilpotent Lie algebras.
An analogue of Jacobson's result for Leibniz algebras was proved in \cite{LaRiTu}. Moreover, it was shown that, similarly to the case of Lie algebras, the converse of this statement does not hold and the notion of a characteristically nilpotent Lie algebra can be extended to Leibniz algebras \cite{KhLO, Omi1}. The study of derivations of Lie algebras led to a natural generalization: pre-derivations of Lie algebras \cite{Mul}; the pre-derivations form the Lie algebra corresponding to the group of pre-automorphisms of a Lie algebra.
A pre-derivation of a Lie algebra is just a derivation of the Lie triple system induced by it. Research on pre-derivations has been related to Lie algebra degenerations, Lie triple systems and bi-invariant semi-Riemannian metrics on Lie groups \cite{Burd}. In \cite{Baj} it was proved that Jacobson's result is also true in terms of pre-derivations. In analogy with the example of Dixmier and Lister, several examples of nilpotent Lie algebras all of whose pre-derivations are nilpotent were presented in \cite{Burd}. Such Lie algebras are called strongly nilpotent.
In \cite{Moens}, a generalized notion of derivations and pre-derivations of Lie algebras was given. These derivations are defined as Leibniz-derivations of order $k$, and Moens proved that a finite-dimensional Lie algebra over a field of characteristic zero is nilpotent if and only if it admits an invertible Leibniz-derivation. Further, an analogue of Moens' result was shown for alternative, Jordan, Zinbiel, Malcev and Leibniz algebras \cite{FKhO,Kayg1,Kayg2}.
Since a Leibniz-derivation of order $3$ of a Leibniz algebra is a pre-derivation, it is natural to define the notion of strongly nilpotent Leibniz algebras. It should be noted that the class of strongly nilpotent Leibniz algebras is a subclass of the characteristically nilpotent Leibniz algebras. Thus, one approach to the classification of nilpotent Leibniz algebras considers the subclass of Leibniz algebras in which any Leibniz-derivation of order $k$ is nilpotent and any algebra admits a non-nilpotent Leibniz-derivation of order $k+1$. In the case $k=1$ we have the class of non-characteristically nilpotent Leibniz algebras. This class of filiform Leibniz algebras was classified in \cite{KhLO}.
This paper is devoted to the study of such algebras in the case $k=2$, i.e., the class of characteristically nilpotent filiform Leibniz algebras which are non-strongly nilpotent. It is known that the class of all filiform Leibniz algebras splits into three non-intersecting families \cite{AyOm2,GoOm}, where one of the families contains the filiform Lie algebras and the other two families arise from naturally graded non-Lie filiform Leibniz algebras. An isomorphism criterion for these two families of filiform Leibniz algebras was given in \cite{GoOm}. Note that some classes of finite-dimensional filiform Leibniz algebras, as well as algebras of dimension less than $10$, were classified in \cite{AAOK, GoJiKh,OmRa,RaSo}.
In order to achieve our goal, we have organized the paper as follows. In Section~\ref{S:prel}, we present necessary definitions and results that will be used in the rest of the paper. In Section~\ref{S1} we describe pre-derivations of filiform Leibniz algebras of the first and second families.
Finally, in Section~\ref{S2}, we give the description of
characteristically nilpotent filiform Leibniz algebras which are non-strongly nilpotent.
Throughout the paper, all the spaces and algebras are assumed to be finite-dimensional.
\section{Preliminaries}\label{S:prel}
In this section we give necessary definitions and preliminary results.
\begin{defn} An algebra $(L,[-,-])$ over a field $F$ is called a (right) Leibniz algebra if for any $x,y,z\in L$, the so-called Leibniz identity \[ \big[[x,y],z\big]=\big[[x,z],y\big]+\big[x,[y,z]\big] \] holds. \end{defn}
Every Lie algebra is a Leibniz algebra, but the bracket in a Leibniz algebra does not need to be skew-symmetric.
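For example, the two-dimensional algebra with basis $\{e_1,e_2\}$ and the single nonzero product $[e_1,e_1]=e_2$ is a Leibniz algebra which is not a Lie algebra. This can be checked mechanically from the structure constants; the following is a small illustrative sketch in Python (a toy example, not one of the algebras classified below):

```python
from itertools import product

# Two-dimensional Leibniz algebra: the only nonzero product is [e1, e1] = e2.
def bracket(x, y):
    # [x, y] for coefficient vectors in the basis {e1, e2}
    return [0, x[0] * y[0]]

E = [[1, 0], [0, 1]]  # basis vectors e1, e2

# Right Leibniz identity [[x,y],z] = [[x,z],y] + [x,[y,z]] on all basis triples
for x, y, z in product(E, repeat=3):
    lhs = bracket(bracket(x, y), z)
    rhs = [a + b for a, b in zip(bracket(bracket(x, z), y),
                                 bracket(x, bracket(y, z)))]
    assert lhs == rhs

# Not skew-symmetric: [e1, e1] = e2 is nonzero, so this is not a Lie algebra.
assert bracket(E[0], E[0]) == [0, 1]
print("Leibniz but not Lie")
```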
Note that a derivation of a Leibniz algebra $L$ is a linear transformation $d$ such that \[d([x,y])=[d(x),y]+[x, d(y)]\] for any $x, y\in L$.
Pre-derivations of Leibniz algebras are a generalization of derivations, defined as follows.
\begin{defn} A linear transformation $P$ of the Leibniz algebra $L$ is called a pre-derivation if for any $x, y, z\in L$, \[P([[x,y],z])=[[P(x),y],z]+[[x,P(y)],z]+[[x,y],P(z)].\] \end{defn}
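Consider again the two-dimensional Leibniz algebra with the single nonzero product $[e_1,e_1]=e_2$ (a toy example, not among the algebras classified below). There every triple product $[[x,y],z]$ vanishes, so every linear map is a pre-derivation, while a derivation $d$ with $d(e_1)=e_1$ must satisfy $d(e_2)=2e_2$; hence the identity map separates the two notions. A quick computational check (Python sketch):

```python
from itertools import product

# Toy Leibniz algebra: basis {e1, e2}, only nonzero product [e1, e1] = e2.
def bracket(x, y):
    return [0, x[0] * y[0]]

E = [[1, 0], [0, 1]]
P = lambda v: list(v)  # the identity map

# P satisfies the pre-derivation identity (every [[x,y],z] vanishes here)
for x, y, z in product(E, repeat=3):
    lhs = P(bracket(bracket(x, y), z))
    rhs = [sum(t) for t in zip(bracket(bracket(P(x), y), z),
                               bracket(bracket(x, P(y)), z),
                               bracket(bracket(x, y), P(z)))]
    assert lhs == rhs

# ... but P is not a derivation: P([e1,e1]) = e2, while
# [P(e1), e1] + [e1, P(e1)] = 2 e2.
x = E[0]
assert P(bracket(x, x)) == [0, 1]
assert [a + b for a, b in zip(bracket(P(x), x), bracket(x, P(x)))] == [0, 2]
print("identity map: pre-derivation, but not a derivation")
```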
For a given Leibniz algebra $L$ we consider the lower central series: \[ L^1=L,\qquad L^{k+1}=[L^k,L^1], \qquad k \geq 1. \]
\begin{defn} A Leibniz algebra $L$ is called nilpotent if there exists $s\in\mathbb N $ such that $L^s=0$. \end{defn}
A nilpotent Leibniz algebra is called \emph{characteristically nilpotent} if all its derivations are nilpotent. We say that a Leibniz algebra is \emph{strongly nilpotent} if any pre-derivation is nilpotent.
Since any derivation of a Leibniz algebra is a pre-derivation, it follows that a strongly nilpotent Leibniz algebra is characteristically nilpotent. Examples of characteristically nilpotent, but non-strongly nilpotent Leibniz algebras can be found in \cite{FKhO, KhLO, Omi1}.
\begin{defn} A Leibniz algebra $L$ is said to be filiform if $\dim L^i=n-i$, where $n=\dim L$ and $2\leq i \leq n$. \end{defn}
The following theorem decomposes all $n$-dimensional filiform Leibniz algebras into three families.
\begin{thm}[\cite{AyOm2,GoOm}]
Any $n$-dimensional complex filiform Leibniz algebra admits a basis $\{e_1, e_2, \dots, e_n\}$
such that the table of multiplication of the algebra has one of the following forms:
$F_1(\alpha_4, \dots, \alpha_n,\theta)=\left\{\begin{array}{ll} [e_1,e_1]=e_3, \\[1mm] [e_i,e_1]=e_{i+1},& 2\leq i \leq n-1,\\[1mm] [e_1,e_2]=\sum\limits_{t=4}^{n-1}\alpha_te_t+\theta e_n,&\\[1mm] [e_j,e_2]=\sum\limits_{t=j+2}^{n}\alpha_{t-j+2}e_t,& 2\leq j\leq n-2.\\[1mm] \end{array}\right.$ \\[1mm]
$F_2(\beta_4, \dots, \beta_n,\gamma)=\left\{\begin{array}{ll} [e_1,e_1]=e_{3}, \\[1mm] [e_i,e_1]=e_{i+1}, & \ 3\leq i \leq {n-1},\\[1mm] [e_1,e_2]=\sum\limits_{t=4}^{n}\beta_te_{t}, \\[1mm] [e_2,e_2]= \gamma e_{n},\\[1mm] [e_j,e_2]=\sum\limits_{t=j+2}^{n}\beta_{t-j+2}e_t, & \ 3\leq j \leq {n-2}, \end{array} \right.$ \\
$F_3(\theta_1,\theta_2,\theta_3)= \left\{\begin{array}{lll} [e_i,e_1]=e_{i+1}, & 2\leq i \leq {n-1},\\[1mm] [e_1,e_i]=-e_{i+1}, & 3\leq i \leq {n-1}, \\[1mm] [e_1,e_1]=\theta_1e_n, & \\[1mm] [e_1,e_2]=-e_3+\theta_2e_n, & \\[1mm] [e_2,e_2]=\theta_3e_n, & \\[1mm] [e_i,e_j]=-[e_j,e_i] \in \langle e_{i+j+1}, e_{i+j+2}, \dots , e_n\rangle, & 2\leq i < j \leq {n-1},\\[1mm] [e_i,e_{n+1-i}]=-[e_{n+1-i},e_i]=\alpha (-1)^{i+1}e_n, & 2\leq i\leq n-1, \end{array} \right.$
\noindent where all omitted products are equal to zero and $\alpha\in\{0,1\}$ for even $n$ and $\alpha=0$ for odd $n$. \end{thm}
It is easy to see that the algebras of the first and second families are non-Lie algebras. Note that if $(\theta_1, \theta_2, \theta_3) = (0,0,0)$, then an algebra of the third family is a Lie algebra.
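The tables above can be checked mechanically. The following sketch (Python, exact arithmetic) builds $F_1(\alpha_4,\dots,\alpha_n,\theta)$ for $n=7$ with arbitrarily chosen parameter values, and verifies both the right Leibniz identity and the filiform condition $\dim L^i = n-i$:

```python
from fractions import Fraction
from itertools import product

def f1_table(n, alpha, theta):
    """Structure constants of F_1(alpha_4, ..., alpha_n, theta); keys are
    1-based pairs (i, j), values map a basis index k to the coefficient
    of e_k in [e_i, e_j]."""
    mul = {}
    def add(i, j, k, c):
        if c:
            d = mul.setdefault((i, j), {})
            d[k] = d.get(k, 0) + c
    add(1, 1, 3, 1)                       # [e1, e1] = e3
    for i in range(2, n):                 # [e_i, e1] = e_{i+1}, 2 <= i <= n-1
        add(i, 1, i + 1, 1)
    for t in range(4, n):                 # [e1, e2] = sum alpha_t e_t + theta e_n
        add(1, 2, t, alpha[t])
    add(1, 2, n, theta)
    for j in range(2, n - 1):             # [e_j, e2] = sum alpha_{t-j+2} e_t
        for t in range(j + 2, n + 1):
            add(j, 2, t, alpha[t - j + 2])
    return mul

n = 7
alpha = {4: 1, 5: -2, 6: 3, 7: 2}         # arbitrarily chosen parameters
theta = 5
mul = f1_table(n, alpha, theta)

def bracket(x, y):
    out = [Fraction(0)] * n
    for (i, j), prods in mul.items():
        for k, c in prods.items():
            out[k - 1] += x[i - 1] * y[j - 1] * c
    return out

def e(i):
    return [Fraction(int(t == i)) for t in range(1, n + 1)]

def span_basis(vectors):
    """Incremental Gaussian elimination; returns a basis of the span."""
    rows = []
    for v in vectors:
        v = list(v)
        for b in rows:
            p = next(i for i, c in enumerate(b) if c)
            if v[p]:
                f = v[p] / b[p]
                v = [a - f * c for a, c in zip(v, b)]
        if any(v):
            rows.append(v)
    return rows

# Right Leibniz identity on all basis triples
for i, j, k in product(range(1, n + 1), repeat=3):
    lhs = bracket(bracket(e(i), e(j)), e(k))
    rhs = [a + b for a, b in zip(bracket(bracket(e(i), e(k)), e(j)),
                                 bracket(e(i), bracket(e(j), e(k))))]
    assert lhs == rhs, (i, j, k)

# Filiform condition: dim L^i = n - i for 2 <= i <= n
Lk = [e(i) for i in range(1, n + 1)]      # L^1 = L
for i in range(2, n + 1):
    Lk = span_basis([bracket(v, e(j)) for v in Lk for j in range(1, n + 1)])
    assert len(Lk) == n - i, i
print("F_1 is a filiform Leibniz algebra for n = 7")
```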
Further we shall need the notion of Catalan numbers. The
$p$-th Catalan numbers were defined in \cite{HiPe} by the formula \[C^{p}_{n} = \frac {1} {(p-1)n+1} \ \binom{pn}{n}\,.\]
It should be noted that the $p$-th Catalan numbers satisfy the following identity: \begin{equation} \label{E:comb} \sum\limits_{k=1}^{n} C_k^p C_{n+1-k}^p = \frac {2n} {(p-1)n+p+1}C_{n+1}^p\,. \end{equation}
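Identity \eqref{E:comb} can be checked numerically over exact rationals; a short sketch (the index convention $C^p_0=1$ is assumed):

```python
from fractions import Fraction
from math import comb

def fuss_catalan(p, n):
    """p-th Catalan number C^p_n = binom(pn, n) / ((p-1)n + 1), with C^p_0 = 1."""
    return Fraction(comb(p * n, n), (p - 1) * n + 1)

# Convolution identity, checked exactly:
#   sum_{k=1}^{n} C^p_k C^p_{n+1-k} = 2n / ((p-1)n + p + 1) * C^p_{n+1}
for p in range(2, 6):
    for n in range(1, 12):
        lhs = sum(fuss_catalan(p, k) * fuss_catalan(p, n + 1 - k)
                  for k in range(1, n + 1))
        rhs = Fraction(2 * n, (p - 1) * n + p + 1) * fuss_catalan(p, n + 1)
        assert lhs == rhs, (p, n)
print("identity verified for p = 2..5, n = 1..11")
```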
\section{Pre-derivations of filiform Leibniz algebras}\label{S1}
In this section we give the description of pre-derivations of filiform Leibniz algebras. First, we consider the filiform Leibniz algebras from the first family.
Let $L$ be an $n$-dimensional filiform Leibniz algebra from the family $F_1(\alpha_4, \dots, \alpha_n, \theta )$.
\begin{prop}\label{prop1} The pre-derivations of the filiform Leibniz algebras from the family $F_1(\alpha_4,\alpha_5,\ldots, \alpha_{n}, \theta)$ have the following form: \begin{align*} P(e_1)&=\sum\limits_{t=1}^{n}a_te_t,\quad P(e_2)=(a_1+a_2)e_2+\sum\limits_{t=3}^{n-2}a_te_t+b_{n-1}e_{n-1}+b_ne_n, \quad P(e_3)=\sum\limits_{t=2}^{n}c_te_t,\\ P(e_{2i})&=((2i-1)a_1+a_2)e_{2i}+\sum\limits_{t=2i+1}^{n}(a_{t-2i+2}+(2i-2)a_2\alpha_{t-2i+3})e_t,\qquad \qquad \qquad 2\leq i\leq \left\lfloor\frac{n}{2}\right\rfloor,\\ P(e_{2i+1})&=c_2e_{2i}+((2i-2)a_1+c_3)e_{2i+1}+\sum\limits_{t=2i+2}^{n}(c_{t-2i+2}+(2i-2)a_2\alpha_{t-2i+2})e_t,\qquad 2\leq i\leq \left\lfloor\frac{n-1}{2}\right\rfloor, \end{align*} where $\lfloor a\rfloor$ is the integer part of the real number $a$ and \begin{equation}\label{eq0}\left\{\begin{array}{ll} (1+(-1)^n)c_2=0,\quad c_2\alpha_t=0,& 4\leq t\leq n-1, \\[1mm] (a_1-a_2)\alpha_4=0,\quad (3a_1-c_3)\alpha_4=0,\\[1mm] \sum\limits_{t=3}^{k}(a_{2k-2t+3}-c_{2k-2t+4}+a_2\alpha_{2k-2t+4})\alpha_{2t-2}=0,&3\leq k\leq \lfloor\frac{n-1}{2}\rfloor,\\[1mm] (2a_1+a_2-c_3)\alpha_{2k}+\sum\limits_{t=3}^{k}(a_{2k-2t+4}-c_{2k-2t+5}+a_2\alpha_{2k-2t+5})\alpha_{2t-2}=0,& 3\leq k\leq \lfloor\frac{n}{2}\rfloor-1,\\[1mm] (a_2-(k-3)a_1)\alpha_{k}=\frac{k-1}{2}a_2\sum\limits_{t=5}^{k}\alpha_{t-1}\alpha_{k-t+4},& 5\leq k\leq n-2,\\[1mm] (a_2-(n-4)a_1)\alpha_{n-1}=a_2\sum\limits_{t=3}^{\frac{n-2}{2}}(2t-3)\alpha_{n-2t+3}\alpha_{2t-1}\\[1mm] \quad\quad\quad\quad\quad\quad\quad\quad\quad +\sum\limits_{t=2}^{\frac{n-2}{2}}(c_{n-2t+2}-a_{n-2t+1}+(2t-3)a_2\alpha_{n-2t+2})\alpha_{2t},& n\quad \text {is even}, \\[1mm] (2a_2-c_3-(n-6)a_1)\alpha_{n-1}=a_2\sum\limits_{t=3}^{\frac{n-1}{2}}(2t-3)\alpha_{n-2t+3}\alpha_{2t-1} +\\[1mm] \quad\quad\quad\quad\quad\quad\quad\quad\quad +\sum\limits_{t=2}^{\frac{n-3}{2}}(c_{n-2t+2}-a_{n-2t+1}+(2t-3)a_2\alpha_{n-2t+2})\alpha_{2t},& n\quad \text {is odd.} \end{array}\right. \end{equation} \end{prop}
\begin{proof} Let $L$ be a filiform Leibniz algebra from the family $F_1(\alpha_4,\alpha_5,\ldots, \alpha_{n}, \theta)$ and let $P \colon L \rightarrow L$ be a pre-derivation of $L$. Put \[P(e_1)=\sum\limits_{t=1}^{n}a_te_t, \quad P(e_2)=\sum\limits_{t=1}^{n}b_te_t, \quad P(e_3)=\sum\limits_{t=1}^{n}c_te_t.\]
From \begin{align*} 0& =P([[e_1,e_1],e_3])=[[P(e_1),e_1],e_3]+[[e_1,P(e_1)],e_3]+[[e_1,e_1],P(e_3)]\\ & = [e_3, \sum\limits_{t=1}^{n}c_te_t] = c_1e_4 + c_2\sum\limits_{t=4}^{n-1}\alpha_{t}e_{t+1}, \end{align*} we have \[ c_1=0,\qquad c_2\alpha_t=0,\quad 4\leq t\leq n-1.\]
By the property of pre-derivation, we have \begin{align*} P(e_4)&=P([[e_1,e_1],e_1])=[[P(e_1),e_1],e_1]+[[e_1,P(e_1)],e_1]+[[e_1,e_1],P(e_1)]\\ & =[(a_1+a_2)e_3+\sum\limits_{t=4}^{n}a_{t-1}e_t,e_1]+[a_1e_3+a_2\big(\sum\limits_{t=4}^{n-1}\alpha_te_t+\theta e_n\big),e_1]+a_1e_4+ a_2\sum\limits_{t=5}^{n}\alpha_{t-1}e_t\\ & =(3a_1+a_2)e_4+\sum\limits_{t=5}^{n}(a_{t-2}+2a_2\alpha_{t-1})e_t. \end{align*}
On the other hand, \begin{align*} P(e_4)&=P([[e_2,e_1],e_1])=[[P(e_2),e_1],e_1]+[[e_2,P(e_1)],e_1]+[[e_2,e_1],P(e_1)] \\ &=[(b_1+b_2)e_3+\sum\limits_{t=4}^{n}b_{t-1}e_t,e_1]+[a_1e_3+a_2\sum\limits_{t=4}^{n}\alpha_te_t,e_1]+a_1e_4+ a_2\sum\limits_{t=5}^{n}\alpha_{t-1}e_t\\ &=(2a_1+b_1+b_2)e_4+\sum\limits_{t=5}^{n}(b_{t-2}+2a_2\alpha_{t-1})e_t. \end{align*}
Comparing the coefficients at the basis elements, we have \[b_1+b_2=a_1+a_2, \qquad b_{t}=a_t,\quad 3\leq t\leq n-2.\]
Using the property of pre-derivation, we get \begin{align*} P(e_5)&=P([[e_3,e_1],e_1])=[[P(e_3),e_1],e_1]+[[e_3,P(e_1)],e_1]+[[e_3,e_1],P(e_1)]\\ &=[c_2e_3+\sum\limits_{t=4}^{n}c_{t-1}e_t,e_1]+[a_1e_4+a_2\sum\limits_{t=5}^{n}\alpha_{t-1}e_t,e_1]+a_1e_5+ a_2\sum\limits_{t=6}^{n}\alpha_{t-2}e_t\\ &=c_2e_4+(2a_1+c_3)e_5+\sum\limits_{t=6}^{n}(c_{t-2}+2a_2\alpha_{t-2})e_t. \end{align*}
Similarly, from the equality \[P(e_{j+2})=P([[e_j,e_1],e_1])=[[P(e_j),e_1],e_1]+[[e_j,P(e_1)],e_1]+[[e_j,e_1],P(e_1)], \quad 3\le j\le n-2,\] inductively, we derive \begin{align*} P(e_{2i})&=((2i-1)a_1+a_2)e_{2i}+\sum\limits_{t=2i+1}^{n}(a_{t-2i+2}+(2i-2)a_2\alpha_{t-2i+3})e_t,&& 2\leq i\leq \left\lfloor\frac{n}{2}\right\rfloor.\\ P(e_{2i+1})&=c_2e_{2i}+((2i-2)a_1+c_3)e_{2i+1}+\sum\limits_{t=2i+2}^{n}(c_{t-2i+2}+(2i-2)a_2\alpha_{t-2i+2})e_t,&& 2\leq i\leq \left\lfloor\frac{n-1}{2}\right\rfloor. \end{align*}
Moreover, when $n$ is even, using the pre-derivation property for the triple $\{e_{n-1},e_1,e_1\}$ we deduce $c_2=0$; when $n$ is odd, expanding $P([[e_{n-1},e_1],e_1])$ gives an identity. Thus we get \[(1 + (-1)^n)c_2 =0.\]
Consider \begin{multline*}
P([[e_1,e_1],e_2])=P([e_3,e_2])=P\Big(\sum\limits_{t=5}^{n}\alpha_{t-1}e_t\Big)\\ =\sum\limits_{t=2}^{\lfloor\frac{n-1}{2}\rfloor}\Big[c_2e_{2t}+((2t-2)a_1+c_3)e_{2t+1}+\sum\limits_{k=2t+2}^{n}(c_{k-2t+2}+(2t-2)a_2\alpha_{k-2t+2})e_k\Big]\alpha_{2t}\\ +\sum\limits_{t=3}^{\lfloor\frac{n}{2}\rfloor}\Big[((2t-1)a_1+a_2)e_{2t}+\sum\limits_{k=2t+1}^{n}(a_{k-2t+2}+(2t-2)a_2\alpha_{k-2t+3})e_k\Big]\alpha_{2t-1}\\ =\sum\limits_{k=2}^{\lfloor\frac{n-1}{2}\rfloor}((2k-2)a_1+c_3)\alpha_{2k}e_{2k+1}+\sum\limits_{k=6}^{n}\sum\limits_{t=2}^{\lfloor\frac{k-2}{2}\rfloor}(c_{k-2t+2}+(2t-2)a_2\alpha_{k-2t+2})\alpha_{2t}e_k\\ +\sum\limits_{k=3}^{\lfloor\frac{n}{2}\rfloor}((2k-1)a_1+a_2)\alpha_{2k-1}e_{2k}+\sum\limits_{k=7}^{n}\sum\limits_{t=3}^{\lfloor\frac{k-1}{2}\rfloor}(a_{k-2t+2}+(2t-2)a_2\alpha_{k-2t+3})\alpha_{2t-1}e_k\\ =(2a_1+c_3)\alpha_4e_5+\sum\limits_{k=3}^{\lfloor\frac{n}{2}\rfloor}\Big[((2k-1)a_1+a_2)\alpha_{2k-1}+\sum\limits_{t=2}^{k-1}(c_{2k-2t+2}+(2t-2)a_2\alpha_{2k-2t+2})\alpha_{2t}\\ +\sum\limits_{t=3}^{k-1}(a_{2k-2t+2}+(2t-2)a_2\alpha_{2k-2t+3})\alpha_{2t-1}\Big]e_{2k}\\ +\sum\limits_{k=3}^{\lfloor\frac{n-1}{2}\rfloor}\Big[((2k-2)a_1+c_3)\alpha_{2k}+\sum\limits_{t=2}^{k-1}(c_{2k-2t+3}+(2t-2)a_2\alpha_{2k-2t+3})\alpha_{2t}\\ +\sum\limits_{t=3}^{k}(a_{2k-2t+3}+(2t-2)a_2\alpha_{2k-2t+4})\alpha_{2t-1}\Big]e_{2k+1}. \end{multline*}
On the other hand, using the property of pre-derivation, we get \begin{multline*} P([[e_1,e_1],e_2])=[[P(e_1),e_1],e_2]+[[e_1,P(e_1)],e_2]+[[e_1,e_1],P(e_2)]\\ =[(a_1+a_2)e_3+\sum\limits_{t=4}^{n}a_{t-1}e_t,e_2]+[a_1e_3+a_2\Big(\sum\limits_{t=4}^{n-1}\alpha_te_t+\theta e_n \Big),e_2]+b_1e_4+ b_2\sum\limits_{t=5}^{n}\alpha_{t-1}e_t\\ =(a_1+a_2)\sum\limits_{t=5}^{n}\alpha_{t-1}e_t+\sum\limits_{t=4}^{n-2}a_{t-1}\sum\limits_{k=t+2}^{n}\alpha_{k-t+2}e_k\\ + a_1\sum\limits_{t=5}^{n}\alpha_{t-1}e_t+a_2\sum\limits_{t=4}^{n-2}\alpha_{t}\sum\limits_{k=t+2}^{n}\alpha_{k-t+2}e_k+b_1e_4+ b_2\sum\limits_{t=5}^{n}\alpha_{t-1}e_t\\ =b_1e_4+(2a_1+a_2+b_2)\alpha_{4}e_5+\sum\limits_{k=3}^{\lfloor\frac{n}{2}\rfloor}\Big[(2a_1+a_2+b_2)\alpha_{2k-1}+\sum\limits_{t=4}^{2k-2}(a_{t-1}+a_2\alpha_t)\alpha_{2k-t+2}\Big]e_{2k}\\ +\sum\limits_{k=3}^{\lfloor\frac{n-1}{2}\rfloor}\Big[(2a_1+a_2+b_2)\alpha_{2k}+\sum\limits_{t=4}^{2k-1}(a_{t-1}+a_2\alpha_t)\alpha_{2k-t+3}\Big]e_{2k+1}. \end{multline*}
Comparing the coefficients at the basic elements we have \begin{equation}\label{eq1}\left\{\begin{array}{cc} b_1=0,\\[1mm] (2a_1+a_2+b_2)\alpha_4=(2a_1+c_3)\alpha_4, \end{array}\right. \end{equation} \begin{equation}\label{eq2}3\leq k\leq \left\lfloor\frac{n}{2}\right\rfloor\left\{\begin{array}{cc} (2a_1+a_2+b_2)\alpha_{2k-1}+\sum\limits_{t=4}^{2k-2}(a_{t-1}+a_2\alpha_t)\alpha_{2k-t+2}\\[1mm] =((2k-1)a_1+a_2)\alpha_{2k-1}+\sum\limits_{t=2}^{k-1}(c_{2k-2t+2}+(2t-2)a_2\alpha_{2k-2t+2})\alpha_{2t}+\\[1mm] +\sum\limits_{t=3}^{k-1}(a_{2k-2t+2}+(2t-2)a_2\alpha_{2k-2t+3})\alpha_{2t-1}, \end{array}\right. \end{equation} \begin{equation}\label{eq3}3\leq k\leq \left\lfloor\frac{n-1}{2}\right\rfloor\left\{\begin{array}{cc} (2a_1+a_2+b_2)\alpha_{2k}+\sum\limits_{t=4}^{2k-1}(a_{t-1}+a_2\alpha_t)\alpha_{2k-t+3}\\[1mm] =((2k-2)a_1+c_3)\alpha_{2k}+\sum\limits_{t=2}^{k-1}(c_{2k-2t+3}+(2t-2)a_2\alpha_{2k-2t+3})\alpha_{2t}+\\[1mm] +\sum\limits_{t=3}^{k}(a_{2k-2t+3}+(2t-2)a_2\alpha_{2k-2t+4})\alpha_{2t-1}. \end{array}\right. \end{equation}
Now, we consider \begin{multline*} P([[e_3,e_1],e_2])=P([e_4,e_2])=P\Big(\sum\limits_{t=6}^{n}\alpha_{t-2}e_t\Big)\\ =\sum\limits_{t=3}^{\lfloor\frac{n-1}{2}\rfloor}\Big[((2t-2)a_1+c_3)e_{2t+1}+\sum\limits_{k=2t+2}^{n}(c_{k-2t+2}+(2t-2)a_2\alpha_{k-2t+2})e_k\Big]\alpha_{2t-1}\\ +\sum\limits_{t=3}^{\lfloor\frac{n}{2}\rfloor}\Big[((2t-1)a_1+a_2)e_{2t}+\sum\limits_{k=2t+1}^{n}(a_{k-2t+2}+(2t-2)a_2\alpha_{k-2t+3})e_k\Big]\alpha_{2t-2}\\ =\sum\limits_{k=3}^{\lfloor\frac{n-1}{2}\rfloor}((2k-2)a_1+c_3)\alpha_{2k-1}e_{2k+1}+\sum\limits_{k=8}^{n}\sum\limits_{t=3}^{\lfloor\frac{k-2}{2}\rfloor}(c_{k-2t+2}+(2t-2)a_2\alpha_{k-2t+2})\alpha_{2t-1}e_k\\ +\sum\limits_{k=3}^{\lfloor\frac{n}{2}\rfloor}((2k-1)a_1+a_2)\alpha_{2k-2}e_{2k}+\sum\limits_{k=7}^{n}\sum\limits_{t=3}^{\lfloor\frac{k-1}{2}\rfloor}(a_{k-2t+2}+(2t-2)a_2\alpha_{k-2t+3})\alpha_{2t-2}e_k\\ =(5a_1+a_2)\alpha_4e_6+\sum\limits_{k=3}^{\lfloor\frac{n-1}{2}\rfloor}\Big[((2k-2)a_1+c_3)\alpha_{2k-1}+\sum\limits_{t=3}^{k-1}(c_{2k-2t+3}+(2t-2)a_2\alpha_{2k-2t+3})\alpha_{2t-1} \displaybreak \\ +\sum\limits_{t=3}^{k}(a_{2k-2t+3}+(2t-2)a_2\alpha_{2k-2t+4})\alpha_{2t-2}\Big]e_{2k+1} \\ +\sum\limits_{k=4}^{\lfloor\frac{n}{2}\rfloor}\Big[((2k-1)a_1+a_2)\alpha_{2k-2}+\sum\limits_{t=3}^{k-1}(c_{2k-2t+2}+(2t-2)a_2\alpha_{2k-2t+2})\alpha_{2t-1}\\ +\sum\limits_{t=3}^{k-1}(a_{2k-2t+2}+(2t-2)a_2\alpha_{2k-2t+3})\alpha_{2t-2}\Big]e_{2k}. \end{multline*}
On the other hand, \begin{multline*} P([[e_3,e_1],e_2])=[[P(e_3),e_1],e_2]+[[e_3,P(e_1)],e_2]+[[e_3,e_1],P(e_2)]\\ =[\sum\limits_{t=3}^{n}c_{t-1}e_t,e_2]+[a_1e_4+a_2\sum\limits_{t=5}^{n}\alpha_{t-1}e_t,e_2]+b_2\sum\limits_{t=6}^{n}\alpha_{t-2}e_t\\ =\sum\limits_{t=4}^{n-2}c_{t-1}\sum\limits_{k=t+2}^{n}\alpha_{k-t+2}e_k+ a_1\sum\limits_{t=6}^{n}\alpha_{t-2}e_t+a_2\sum\limits_{t=5}^{n-2}\alpha_{t-1}\sum\limits_{k=t+2}^{n}\alpha_{k-t+2}e_k+b_2\sum\limits_{t=6}^{n}\alpha_{t-2}e_t\\ =(2a_1+a_2+c_3)\alpha_{4}e_6+\sum\limits_{k=7}^{n}\Big[(2a_1+a_2+c_3)\alpha_{k-2}+\sum\limits_{t=5}^{k-2}(c_{t-1}+a_2\alpha_{t-1})\alpha_{k-t+2}\Big]e_k\\ =(2a_1+a_2+c_3)\alpha_{4}e_6+\sum\limits_{k=3}^{\lfloor\frac{n-1}{2}\rfloor} \Big[(2a_1+a_2+c_3)\alpha_{2k-1}+\sum\limits_{t=5}^{2k-1}(c_{t-1}+a_2\alpha_{t-1})\alpha_{2k-t+3}\Big]e_{2k+1}\\ +\sum\limits_{k=4}^{\lfloor\frac{n}{2}\rfloor}\Big[(2a_1+a_2+c_3)\alpha_{2k-2}+\sum\limits_{t=5}^{2k-2}(c_{t-1}+a_2\alpha_{t-1})\alpha_{2k-t+2}\Big]e_{2k}. \end{multline*}
Comparing the coefficients at the basic elements we get \begin{equation}\label{eq4} (2a_1+a_2+c_3)\alpha_{4}=(5a_1+a_2)\alpha_4 \end{equation} \begin{equation}\label{eq5}3\leq k\leq \left\lfloor\frac{n-1}{2}\right\rfloor\left\{\begin{array}{cc} (2a_1+a_2+c_3)\alpha_{2k-1}+\sum\limits_{t=5}^{2k-1}(c_{t-1}+a_2\alpha_{t-1})\alpha_{2k-t+3}\\[1mm] =((2k-2)a_1+c_3)\alpha_{2k-1}+\sum\limits_{t=3}^{k-1}(c_{2k-2t+3}+(2t-2)a_2\alpha_{2k-2t+3})\alpha_{2t-1}\\[1mm] +\sum\limits_{t=3}^{k}(a_{2k-2t+3}+(2t-2)a_2\alpha_{2k-2t+4})\alpha_{2t-2}, \end{array}\right. \end{equation} \begin{equation}\label{eq6}4\leq k\leq \left\lfloor\frac{n}{2}\right\rfloor\left\{\begin{array}{cc} (2a_1+a_2+c_3)\alpha_{2k-2}+\sum\limits_{t=5}^{2k-2}(c_{t-1}+a_2\alpha_{t-1})\alpha_{2k-t+2}\\[1mm] =((2k-1)a_1+a_2)\alpha_{2k-2}+\sum\limits_{t=3}^{k-1}(c_{2k-2t+2}+(2t-2)a_2\alpha_{2k-2t+2})\alpha_{2t-1}+\\[1mm] +\sum\limits_{t=3}^{k-1}(a_{2k-2t+2}+(2t-2)a_2\alpha_{2k-2t+3})\alpha_{2t-2}. \end{array}\right. \end{equation}
Since $b_1+b_2 = a_1 + a_2$, from equalities \eqref{eq1} and \eqref{eq4} we obtain \[(a_1-a_2)\alpha_4=0,\quad (3a_1-c_3)\alpha_4=0.\]
Subtracting equalities \eqref{eq2} and \eqref{eq5}, we obtain \[\sum\limits_{t=3}^{k}(a_{2k-2t+3}-c_{2k-2t+4}+a_2\alpha_{2k-2t+4})\alpha_{2t-2}=0, \quad 3 \leq k \leq \left\lfloor\frac{n-1} 2\right\rfloor.\]
Summing these equalities, we get \[\alpha_{k}(a_2-(k-3)a_1)=\frac{k-1}{2}a_2\sum\limits_{t=5}^{k}\alpha_{t-1}\alpha_{k-t+4}\quad \text{for odd } k,\quad 5 \leq k \leq n-2.\]
Similarly from equalities \eqref{eq3} and \eqref{eq6} we obtain \[(2a_1+a_2-c_3)\alpha_{2k}+\sum\limits_{t=3}^{k}(a_{2k-2t+4}-c_{2k-2t+5}+a_2\alpha_{2k-2t+5})\alpha_{2t-2}=0, \quad 3 \leq k \leq \left\lfloor\frac{n} 2\right\rfloor -1 \] and \[\alpha_{k}(a_2-(k-3)a_1)=\frac{k-1}{2}a_2\sum\limits_{t=5}^{k}\alpha_{t-1}\alpha_{k-t+4}\quad \text{for even } k, \quad 5 \leq k \leq n-2.\]
From equalities \eqref{eq2} and \eqref{eq3} in the cases $2k=n$ and $2k=n-1$, respectively, we obtain the last two restrictions in \eqref{eq0}.
Considering the pre-derivation property for $P([[e_i, e_1], e_2])$, $4 \leq i \leq n$, and $P([[e_i, e_2], e_2])$, $3 \leq i \leq n$, we obtain either the same restrictions or identities. \end{proof}
In the following proposition we give the descriptions of pre-derivation of algebras from the second class of filiform Leibniz algebras. \begin{prop}\label{prop32} The pre-derivations of the filiform Leibniz algebras from the family $F_2(\beta_4,\beta_5,\ldots, \beta_{n}, \gamma)$ have the following form: \begin{align*} P(e_1)&=\sum\limits_{t=1}^{n}a_te_t, \qquad P(e_2)=b_2e_2+b_{n-1}e_{n-1}+b_ne_n, \qquad P(e_3)=\sum\limits_{t=2}^{n}c_te_t,\\ P(e_{2i})&=(2i-1)a_1e_{2i}+\sum\limits_{t=2i+1}^{n}(a_{t-2i+2}+(2i-2)a_2\beta_{t-2i+3})e_t,&& 2\leq i\leq \left\lfloor\frac{n}{2}\right\rfloor,\\ P(e_{2i+1})&=((2i-2)a_1+c_3)e_{2i+1}+\sum\limits_{t=2i+2}^{n}(c_{t-2i+2}+(2i-2)a_2\beta_{t-2i+2})e_t, && 2\leq i\leq \left\lfloor\frac{n-1}{2}\right\rfloor, \end{align*} where \begin{equation}\label{eq3.8} \left\{\begin{array}{ll} (c_3-2a_1)\beta_4=0, \quad (b_2-2a_1)\beta_4=0,\quad c_2\beta_t=0,& 4\leq t\leq n-1,\\[1mm] \sum\limits_{t=3}^{k}(a_{2k-2t+3}-c_{2k-2t+4}+a_2\beta_{2k-2t+4})\beta_{2t-2}=0, & 3\leq k\leq \lfloor\frac{n-1}{2}\rfloor,\\[1mm] (c_3-2a_1)\beta_{2k}=\sum\limits_{t=3}^{k}(a_{2k-2t+4}-c_{2k-2t+5}+a_2\beta_{2k-2t+5})\beta_{2t-2}, & 3\leq k\leq \lfloor\frac{n}{2}\rfloor-1,\\[1mm] (b_2-(k-2)a_1)\beta_{k}=\frac{k-1}{2}a_2\sum\limits_{t=5}^{k}\beta_{t-1}\beta_{k-t+4}, & 5\leq k\leq n-2,\\[1mm] (b_2-c_3-(n-5)a_1)\beta_{n-1}=a_2\sum\limits_{t=3}^{\frac{n-1}{2}}(2t-3)\beta_{n-2t+3}\beta_{2t-1}+\\[1mm] \quad\quad\quad\quad\quad\quad\quad\quad +\sum\limits_{t=2}^{\frac{n-3}{2}}(c_{n-2t+2}-a_{n-2t+1}+(2t-3)a_2\beta_{n-2t+2})\beta_{2t}, & n\quad \text {is odd,}\\[1mm] (b_2-(n-3)a_1)\beta_{n-1}=a_2\sum\limits_{t=3}^{\frac{n-2}{2}}(2t-3)\beta_{n-2t+3}\beta_{2t-1}+\\[1mm] \quad\quad\quad\quad\quad\quad\quad\quad+\sum\limits_{t=2}^{\frac{n-2}{2}}(c_{n-2t+2}-a_{n-2t+1}+(2t-3)a_2\beta_{n-2t+2})\beta_{2t},& n\quad \text {is even}. \end{array}\right. \end{equation} \end{prop}
\begin{proof} Let $P$ be a pre-derivation of a filiform Leibniz algebra $L$ from the second class. Put \[P(e_1)=\sum\limits_{t=1}^{n}a_te_t, \quad P(e_2)=\sum\limits_{t=1}^{n}b_te_t, \quad P(e_3)=\sum\limits_{t=1}^{n}c_te_t.\]
Consider the property of pre-derivation \begin{align*} P([[e_2,e_1],e_1])&=[[P(e_2),e_1],e_1]+[[e_2,P(e_1)],e_1]+[[e_2,e_1],P(e_1)]\\ &=[[\sum\limits_{t=1}^{n}b_te_t,e_1],e_1]+[[e_2,\sum\limits_{t=1}^{n}a_te_t],e_1]\\ &=[b_1e_3+\sum\limits_{t=3}^{n-1}b_{t}e_{t+1},e_1]=b_1e_4+\sum\limits_{t=5}^{n}b_{t-2}e_{t}. \end{align*}
On the other hand, $P([[e_2,e_1],e_1])=0$, since $[e_2, e_1]=0$. Thus, we have \[ b_1=0, \quad b_t=0, \quad 3\leq t\leq n-2.\] Hence, $P(e_2)=b_2e_2+b_{n-1}e_{n-1}+b_ne_n$.
From \begin{align*} 0&=P([[e_1,e_1],e_3])=[[P(e_1),e_1],e_3]+[[e_1,P(e_1)],e_3]+[[e_1,e_1],P(e_3)]\\ &=[e_3,\sum\limits_{t=1}^{n}c_te_t]=c_1e_4+c_2\sum\limits_{t=5}^{n}\beta_{t-1}e_t, \end{align*} we get \[c_1=0, \quad c_2\beta_t=0,\quad 4\leq t\leq n-1.\]
Considering the property of pre-derivation for the triples $\{e_1, e_1, e_1\}$ and $\{e_i, e_1, e_1\}$ for $3 \leq i \leq n-2$, inductively we obtain \begin{align*} P(e_{2i})&=(2i-1)a_1e_{2i}+\sum\limits_{t=2i+1}^{n}(a_{t-2i+2}+(2i-2)a_2\beta_{t-2i+3})e_t, && 2\leq i\leq \left\lfloor\frac{n}{2}\right\rfloor.\\ P(e_{2i+1})&=((2i-2)a_1+c_3)e_{2i+1}+\sum\limits_{t=2i+2}^{n}(c_{t-2i+2}+(2i-2)a_2\beta_{t-2i+2})e_t, && 2\leq i\leq \left\lfloor\frac{n-1}{2}\right\rfloor. \end{align*}
Now, we consider
\begin{align*} P([[e_1,e_1],e_2])&=[[P(e_1),e_1],e_2]+[[e_1,P(e_1)],e_2]+[[e_1,e_1],P(e_2)]\\ &=[[\sum\limits_{t=1}^{n}a_te_t,e_1],e_2]+[[e_1,\sum\limits_{t=1}^{n}a_te_t],e_2]+[e_3,b_2e_2+b_{n-1}e_{n-1}+b_ne_n]\\ &=(2a_1+b_2)\beta_4e_5+\sum\limits_{k=3}^{\lfloor\frac{n}{2}\rfloor}\Big[(2a_1+b_2)\beta_{2k-1}+\sum\limits_{t=4}^{2k-2}(a_{t-1}+a_2\beta_t)\beta_{2k-t+2}\Big]e_{2k}\\ &\quad +\sum\limits_{k=3}^{\lfloor\frac{n-1}{2}\rfloor}\Big[(2a_1+b_2)\beta_{2k}+\sum\limits_{t=4}^{2k-1}(a_{t-1}+a_2\beta_t)\beta_{2k-t+3}\Big]e_{2k+1}. \end{align*}
On the other hand, \begin{align*} P([[e_1,e_1],e_2])&=P([e_3,e_2])=P\Big(\sum\limits_{t=5}^{n}\beta_{t-1}e_t\Big)\\ & =\beta_4(2a_1+c_3)e_5 +\sum\limits_{k=3}^{\lfloor\frac{n}{2}\rfloor}\Big[\beta_{2k-1}(2k-1)a_1 +\sum\limits_{t=2}^{k-1}\beta_{2t}(c_{2k-2t+2}+(2t-2)a_2\beta_{2k-2t+2})\\
& \qquad \qquad \qquad \qquad \qquad +\sum\limits_{t=3}^{k-1}\beta_{2t-1}(a_{2k-2t+2}+(2t-2)a_2\beta_{2k-2t+3})\Big]e_{2k}\\ & \qquad +\sum\limits_{k=3}^{\lfloor\frac{n-1}{2}\rfloor}\Big[\beta_{2k}((2k-2)a_1+c_3)+ \sum\limits_{t=2}^{k-1}\beta_{2t}(c_{2k-2t+3}+(2t-2)a_2\beta_{2k-2t+3})\\ & \qquad \qquad \qquad +\sum\limits_{t=3}^{k}\beta_{2t-1}(a_{2k-2t+3}+(2t-2)a_2\beta_{2k-2t+4})\Big]e_{2k+1}. \end{align*} Comparing the coefficients at the basis elements, we have \begin{equation}\label{eq3.9}(2a_1+b_2)\beta_4=(2a_1+c_3)\beta_4.\end{equation} \begin{equation}\label{eq3.10} 3\leq k\leq \left\lfloor\frac{n}{2}\right\rfloor \left\{\begin{array}{c} (2a_1+b_2)\beta_{2k-1}+\sum\limits_{t=4}^{2k-2}(a_{t-1}+a_2\beta_t)\beta_{2k-t+2}\\[1mm] =\beta_{2k-1}(2k-1)a_1+\sum\limits_{t=2}^{k-1}\beta_{2t}(c_{2k-2t+2}+(2t-2)a_2\beta_{2k-2t+2})\\[1mm] +\sum\limits_{t=3}^{k-1}\beta_{2t-1}(a_{2k-2t+2}+(2t-2)a_2\beta_{2k-2t+3}), \end{array}\right.\end{equation} \begin{equation}\label{eq3.11} 3\leq k\leq \left\lfloor\frac{n-1}{2}\right\rfloor \left\{\begin{array}{c}(2a_1+b_2)\beta_{2k}+\sum\limits_{t=4}^{2k-1}(a_{t-1}+a_2\beta_t)\beta_{2k-t+3}\\[1mm] =\beta_{2k}((2k-2)a_1+c_3)+\sum\limits_{t=2}^{k-1}\beta_{2t}(c_{2k-2t+3}+(2t-2)a_2\beta_{2k-2t+3})\\[1mm] +\sum\limits_{t=3}^{k}\beta_{2t-1}(a_{2k-2t+3}+(2t-2)a_2\beta_{2k-2t+4}). \end{array}\right. \end{equation}
Analogously, from the equality \[P([e_4, e_2]) = P([[e_3,e_1],e_2])=[[P(e_3),e_1],e_2]+[[e_3,P(e_1)],e_2]+[[e_3,e_1],P(e_2)],\] we obtain the following restrictions: \begin{equation}\label{eq3.12}(a_1+b_2+c_3)\beta_{4}=5a_1\beta_4,\end{equation} \begin{equation}\label{eq3.13}3\leq k\leq \left\lfloor\frac{n-1}{2}\right\rfloor\left\{\begin{array}{c}(a_1+b_2+c_3)\beta_{2k-1}+\sum\limits_{t=5}^{2k-1}(c_{t-1}+a_2\beta_{t-1})\beta_{2k-t+3}\\[1mm] =\beta_{2k-1}((2k-2)a_1+c_3)+\sum\limits_{t=3}^{k-1}\beta_{2t-1}(c_{2k-2t+3}+(2t-2)a_2\beta_{2k-2t+3})\\[1mm] +\sum\limits_{t=3}^{k}\beta_{2t-2}(a_{2k-2t+3}+(2t-2)a_2\beta_{2k-2t+4}),\end{array}\right.\end{equation} \begin{equation}\label{eq3.14}4\leq k\leq \left\lfloor\frac{n}{2}\right\rfloor\left\{\begin{array}{c}(a_1+b_2+c_3)\beta_{2k-2}+\sum\limits_{t=5}^{2k-2}(c_{t-1}+a_2\beta_{t-1})\beta_{2k-t+2}\\[1mm] =\beta_{2k-2}(2k-1)a_1+\sum\limits_{t=3}^{k-1}\beta_{2t-1}(c_{2k-2t+2}+(2t-2)a_2\beta_{2k-2t+2})\\[1mm] +\sum\limits_{t=3}^{k-1}\beta_{2t-2}(a_{2k-2t+2}+(2t-2)a_2\beta_{2k-2t+3}).\end{array}\right.\end{equation}
It is not difficult to see that from \eqref{eq3.9} and \eqref{eq3.12} we have \[(c_3-2a_1)\beta_4=0, \quad (b_2-2a_1)\beta_4=0.\]
Similarly to the proof of Proposition \ref{prop1}, summing and subtracting equalities \eqref{eq3.10} and \eqref{eq3.13}, we obtain \[\sum\limits_{t=3}^{k}(a_{2k-2t+3}-c_{2k-2t+4}+a_2\beta_{2k-2t+4})\beta_{2t-2}=0, \quad 3 \leq k \leq \left\lfloor\frac {n-1} 2\right\rfloor\] and \[(b_2-(k-2)a_1)\beta_{k}=\frac{k-1}{2}a_2\sum\limits_{t=5}^{k}\beta_{t-1}\beta_{k-t+4}\quad \text{for odd } k, \quad 5 \leq k \leq n-2.\]
From equalities \eqref{eq3.11} and \eqref{eq3.14} we have \[(c_3-2a_1)\beta_{2k}=\sum\limits_{t=3}^{k}(a_{2k-2t+4}-c_{2k-2t+5}+a_2\beta_{2k-2t+5})\beta_{2t-2}, \quad 3 \leq k \leq \left\lfloor\frac {n} 2\right\rfloor -1,\] and \[(b_2-(k-2)a_1)\beta_{k}=\frac{k-1}{2}a_2\sum\limits_{t=5}^{k}\beta_{t-1}\beta_{k-t+4}\quad \text{for even } k, \quad 5 \leq k \leq n-2.\]
From equalities \eqref{eq3.10} and \eqref{eq3.11} in the cases $2k=n-1$ and $2k=n$, respectively, we obtain the last two restrictions in \eqref{eq3.8}.
Considering the pre-derivation property for $P([[e_i, e_1], e_2])$ for $4 \leq i \leq n$ and $P([[e_i, e_2], e_2])$ for $3 \leq i \leq n$, we obtain either the same restrictions or identities. \end{proof}
\section{Strongly nilpotent filiform Leibniz algebras}\label{S2}
In this section we describe the non-strongly nilpotent filiform Leibniz algebras of the first and second families. The investigation of strong nilpotency for the third family of filiform Leibniz algebras reduces to the corresponding problem for Lie algebras.
First, we consider the case of filiform Leibniz algebras from the first family. From Proposition \ref{prop1} it follows that if there exist parameters $(a_1, a_2, c_2, c_3) \neq (0, 0, 0, 0)$ satisfying the restrictions \eqref{eq0}, then a filiform Leibniz algebra of the first family is non-strongly nilpotent; otherwise it is strongly nilpotent.
\begin{prop} \label{prop41} Let $L(\alpha_4, \alpha_5, \dots, \alpha_n, \theta)$ be a filiform Leibniz algebra of the first family. If $\alpha_4 = \alpha_5 = \dots =\alpha_{n-1} =0$, then $L$ is non-strongly nilpotent. \end{prop}
\begin{proof} It is immediate that, if $\alpha_4 = \alpha_5 = \dots =\alpha_{n-1} =0$, then the restriction \eqref{eq0} holds for any values of $a_1, a_2, c_3$. Thus $L$ admits a non-nilpotent pre-derivation, which implies that $L$ is non-strongly nilpotent. \end{proof}
It is obvious that any algebra from the family $F_1(0,\dots, 0, \alpha_n, \theta)$ is isomorphic to one of the following four algebras: \[F_1(0,\dots, 0, 0, 0), \quad F_1(0,\dots, 0, 0, 1),\quad F_1(0,\dots, 0, 1, 0), \quad F_1(0,\dots, 0, 1, 1).\]
Remark that the algebras $F_1(0,\dots, 0, 0, 0)$, $F_1(0,\dots, 0, 0, 1)$ and $ F_1(0,\dots, 0, 1, 1)$
are non-characteristically nilpotent (see \cite{KhLO}). The algebra $F_1(0,\dots, 0, 1, 0)$ is characteristically nilpotent,
but non-strongly nilpotent.
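For instance, for $n=6$ the algebra $F_1(0,0,1,0)$ admits the non-nilpotent pre-derivation $P=\mathrm{diag}(1,1,0,3,2,5)$, obtained from Proposition \ref{prop1} by taking $a_1=1$ and all remaining parameters equal to zero. A small computational check (a Python sketch; the multiplication table is hard-coded from the theorem above):

```python
from itertools import product

# F_1(0, 0, 1, 0), n = 6: nonzero products are [e1,e1] = e3, [e2,e2] = e6,
# and [e_i, e1] = e_{i+1} for 2 <= i <= 5 (1-based keys).
n = 6
MUL = {(1, 1): {3: 1}, (2, 2): {6: 1}}
for i in range(2, n):
    MUL[(i, 1)] = {i + 1: 1}

def bracket(x, y):
    out = [0] * n
    for (i, j), prods in MUL.items():
        for k, c in prods.items():
            out[k - 1] += x[i - 1] * y[j - 1] * c
    return out

def e(i):
    v = [0] * n
    v[i - 1] = 1
    return v

# Candidate pre-derivation: P = diag(1, 1, 0, 3, 2, 5) in the basis e_1..e_6
diag = [1, 1, 0, 3, 2, 5]
P = lambda v: [d * c for d, c in zip(diag, v)]

# Pre-derivation identity on all basis triples
for i, j, k in product(range(1, n + 1), repeat=3):
    x, y, z = e(i), e(j), e(k)
    lhs = P(bracket(bracket(x, y), z))
    rhs = [sum(t) for t in zip(bracket(bracket(P(x), y), z),
                               bracket(bracket(x, P(y)), z),
                               bracket(bracket(x, y), P(z)))]
    assert lhs == rhs, (i, j, k)
print("P = diag(1, 1, 0, 3, 2, 5) is a (non-nilpotent) pre-derivation")
```

Since $P$ is diagonal with nonzero eigenvalues, it is not nilpotent, in line with the algebra being non-strongly nilpotent.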
Now we consider the case of $\alpha_i\neq 0$ for some $i \ (4 \leq i \leq n-1)$. Then from \eqref{eq0} we have that $c_2=0$.
\begin{thm}\label{thm41} Let $L$ be a filiform Leibniz algebra from the family $F_1(\alpha_4, \alpha_5, \dots, \alpha_n, \theta)$ and let $n$ be even.
Then $L$ is non-strongly nilpotent if and only if the parameters $(\alpha_4, \alpha_5, \alpha_6,\dots, \alpha_{n-1}, \alpha_n, \theta)$ take one of the following forms:
i) $\alpha_4 \neq 0$ and $\alpha_k = (-1)^kC_{k-4}^2\alpha_4^{k-3}, \quad 5 \leq k \leq n-2$;
ii) $\left\{\begin{array}{lll}\alpha_{(2s-3)t+3} = (-1)^{t+1}C_{t}^{2s-2}\alpha_{2s}^{t}, & 3 \leq s \leq \frac{n-2}2, & 1 \leq t \leq \lfloor\frac {n-5} {2s-3}\rfloor, \\[1mm] \alpha_j =0, & j \neq (2s-3)t+3, & 4 \leq j \leq n-2; \end{array}\right.$
iii) $\alpha_{2i}=0$, for $2 \leq i \leq \frac {n-2} 2$;
\noindent where $C^{p}_{n} = \frac {1} {(p-1)n+1} \dbinom{pn}{n}$ is the $p$-th Catalan number. \end{thm}
\begin{proof} From Proposition \ref{prop1} we have \begin{equation}\label{eq4.1}\left\{\begin{array}{ll} (a_1-a_2)\alpha_4=0,\quad (3a_1-c_3)\alpha_4=0,\\[1mm] \sum\limits_{t=3}^{k}(a_{2k-2t+3}-c_{2k-2t+4}+a_2\alpha_{2k-2t+4})\alpha_{2t-2}=0,&3\leq k\leq \frac{n-2}{2},\\[1mm] (2a_1+a_2-c_3)\alpha_{2k}+\sum\limits_{t=3}^{k}(a_{2k-2t+4}-c_{2k-2t+5}+a_2\alpha_{2k-2t+5})\alpha_{2t-2}=0,& 3\leq k\leq \frac{n-2}{2},\\[1mm] (a_2-(k-3)a_1)\alpha_{k}=\frac{k-1}{2}a_2\sum\limits_{t=5}^{k}\alpha_{t-1}\alpha_{k-t+4},& 5\leq k\leq n-2,\\[1mm] (a_2-(n-4)a_1)\alpha_{n-1}= a_2\sum\limits_{t=3}^{\frac{n-2}{2}}(2t-3)\alpha_{n-2t+3}\alpha_{2t-1}\\[1mm] \quad\quad\quad\quad\quad\quad\quad\quad+\sum\limits_{t=2}^{\frac{n-2}{2}}(c_{n-2t+2}-a_{n-2t+1}+(2t-3)a_2\alpha_{n-2t+2})\alpha_{2t}. \end{array}\right. \end{equation}
\textbf{Case 1.} If $\alpha_4 \neq 0$, then from the first two equalities of \eqref{eq4.1} we get $a_2 =a_1$, $c_3=3a_1$ and from the next two equalities of \eqref{eq4.1} we obtain \[c_i = a_{i-1} + a_1 \alpha_i, \qquad 4 \leq i \leq n-3.\]
Since $a_2 =a_1$, $c_3=3a_1$, we get that $L$ is non-strongly nilpotent if and only if $a_1 \neq 0$. Therefore we have \begin{align*} \alpha_{k}&=\frac{k-1}{2(4-k)}\sum\limits_{t=5}^{k}\alpha_{t-1}\alpha_{k-t+4}, && 5\leq k\leq n-2,\\ (5-n)a_1\alpha_{n-1}& = (c_{n-2}-a_{n-3} + a_1\alpha_{n-2})\alpha_4 +a_1\sum\limits_{t=5}^{n-2}(t-2)\alpha_{n-t+2}\alpha_{t}. \end{align*}
Using equality \eqref{E:comb} we get that \[\alpha_k =(-1)^kC_{k-4}^2\alpha_4^{k-3}, \qquad 5 \leq k \leq n-2\]
and \[c_{n-2} = \frac{1}{\alpha_4}\Big((5-n)a_1\alpha_{n-1} - a_1\sum\limits_{t=5}^{n-2}(t-2)\alpha_{n-t+2}\alpha_{t} \Big) +a_{n-3} - a_1\alpha_{n-2}.\]
Note that the parameters $\alpha_{n-1}, \alpha_{n}$ and $\theta$ are free and we have the case \textit{i}).
\textbf{Case 2.} Let $\alpha_{2s} \neq 0$ for some $s \ (3 \leq s \leq \frac {n-2} 2)$ and $\alpha_{2i} = 0$ for $2 \leq i \leq s-1$. Then, similarly to the previous case, from equality \eqref{eq4.1} we get \[(2a_1+a_2 - c_3)\alpha_{2s}=0, \qquad (a_2 - (2s-3) a_1)\alpha_{2s}=0,\] which imply $a_2=(2s-3) a_1$, $c_3 = (2s-1) a_1$ and \[c_i = a_{i-1} + a_1 \alpha_i, \qquad 4 \leq i \leq n-2s+1.\]
If $L$ is non-strongly nilpotent then $a_1 \neq 0$. Consequently, we have
\[\alpha_{k}=\frac{(k-1)(2s-3)}{2(2s-k)}\sum\limits_{t=5}^{k}\alpha_{t-1}\alpha_{k-t+4},\qquad 5\leq k\leq n-2.\]
Using $\alpha_{2i} = 0$ for $2 \leq i \leq s-1$ and applying formula \eqref{E:comb}, inductively on $t$ we obtain \begin{align*} \alpha_j &= 0, \qquad \qquad j \neq (2s-3)t+3, \qquad 4 \leq j \leq n-2,\\ \alpha_{(2s-3)t+3} &= (-1)^{t+1}C_{t}^{2s-2}\alpha_{2s}^{t}, \qquad \qquad \qquad \quad 1 \leq t \leq \left\lfloor\frac {n-5} {2s-3}\right\rfloor. \end{align*} From the last equality of \eqref{eq4.1} we have \[c_{n-2s+2} = \frac{1}{\alpha_{2s}}\Big((2s+1-n)a_1\alpha_{n-1} - (2s-3)a_1\sum\limits_{t=2s+1}^{n-2s+2}(t-2)\alpha_{n-t+2}\alpha_{t} \Big) +a_{n-2s+1} - (2s-3)^2a_1\alpha_{n-2s}.\]
The parameters $\alpha_{n-1}$, $\alpha_n$ and $\theta$ may take any values and we obtain case \textit{ii}).
\textbf{Case 3.} Let $\alpha_{2i}=0$ for all $i \ (2 \leq i \leq \frac {n-2} 2)$. Then the first four equalities of \eqref{eq4.1} hold and from the last equalities we have \begin{align*} &\alpha_5(a_2-2a_1) =0, \qquad \alpha_{2s+1}(a_2-(2s-2)a_1) =sa_2\sum\limits_{t=3}^s\alpha_{2t-1}\alpha_{2s+5-2t}, \quad 3 \leq s \leq \frac {n-2} 2.\\ &(a_2-(n-4)a_1)\alpha_{n-1}= a_2\sum\limits_{t=3}^{\frac{n-2}{2}}(2t-3)\alpha_{n-2t+3}\alpha_{2t-1}. \end{align*}
Taking $a_1=a_2=0$ and $c_3\neq 0$, the previous equalities hold for any values of $\alpha_{2s+1}$. Since $c_3\neq 0$, this algebra is non-strongly nilpotent and we obtain the case \textit{iii}). \end{proof}
\begin{thm}\label{thm42} Let $L$ be a filiform Leibniz algebra from the family $F_1(\alpha_4, \alpha_5, \dots, \alpha_n, \theta)$ and let $n$ be odd.
$L$ is non-strongly nilpotent if and only if the parameters $(\alpha_4, \alpha_5, \alpha_6,\dots, \alpha_{n-1}, \alpha_n, \theta)$ are one of the following values:
i) $\alpha_4 \neq 0$ and $\alpha_k = (-1)^kC_{k-4}^2\alpha_4^{k-3}, \quad 5 \leq k \leq n-2$;
ii) $\left\{\begin{array}{lll}\alpha_{(2s-3)t+3} = (-1)^{t+1}C_{t}^{2s-2}\alpha_{2s}^{t}, & 3 \leq s \leq \frac {n-3} 2,& 1 \leq t \leq \lfloor\frac {n-5} {2s-3}\rfloor,\\[1mm] \alpha_j = 0, & j \neq (2s-3)t+3, & 4 \leq j \leq n-2;\end{array}\right.$
iii) $\alpha_{2i}=0$, for $2 \leq i \leq \frac {n-1} 2$;
iv) $\left\{\begin{array}{lll}\alpha_{(2s-2)t+3} = (-1)^{t+1}C_{t}^{2s-1}\alpha_{2s+1}^{t}, & 2 \leq s \leq \frac {n-3} 2, & 1 \leq t \leq \lfloor\frac {n-5} {2s-2}\rfloor, \\[1mm] \alpha_j = 0, & j \neq (2s-2)t+3, & 4 \leq j \leq n-2. \end{array}\right.$ \end{thm}
\begin{proof} From Proposition \ref{prop1} we have
\begin{equation}\label{eq4.5} \left\{\begin{aligned} & (a_1-a_2)\alpha_4=0,\qquad \qquad (3a_1-c_3)\alpha_4=0,\\ & \sum\limits_{t=3}^{k}(a_{2k-2t+3}-c_{2k-2t+4}+a_2\alpha_{2k-2t+4})\alpha_{2t-2}=0, \qquad \qquad \qquad \qquad \qquad \quad 3\leq k\leq \frac{n-1}{2},\\ &(2a_1+a_2-c_3)\alpha_{2k}+\sum\limits_{t=3}^{k}(a_{2k-2t+4}-c_{2k-2t+5}+a_2\alpha_{2k-2t+5})\alpha_{2t-2}=0, \qquad 3\leq k\leq \frac{n-3}{2},\\ &(a_2-(k-3)a_1)\alpha_{k}=\frac{k-1}{2}a_2\sum\limits_{t=5}^{k}\alpha_{t-1}\alpha_{k-t+4}, \qquad \qquad \qquad \qquad \qquad \qquad \quad 5\leq k\leq n-2,\\ & (2a_2-c_3-(n-6)a_1)\alpha_{n-1}=a_2\sum\limits_{t=3}^{\frac{n-1}{2}}(2t-3)\alpha_{n-2t+3}\alpha_{2t-1}\\ & \qquad \qquad \qquad \qquad \qquad \qquad \quad +\sum\limits_{t=2}^{\frac{n-3}{2}}(c_{n-2t+2}-a_{n-2t+1}+(2t-3)a_2\alpha_{n-2t+2})\alpha_{2t} \end{aligned}\right. \end{equation}
\textbf{Case 1.} Let $\alpha_{2s} \neq 0$ for some $s \ (2 \leq s \leq \frac {n-3} 2)$ and $\alpha_{2i} = 0$ for $2 \leq i \leq s-1$.
Then similarly to the proof of Theorem \ref{thm41} we get \[a_2=(2s-3) a_1,\qquad c_3 = (2s-1) a_1, \qquad c_i = a_{i-1} + a_1 \alpha_i, \qquad 4 \leq i \leq n-2s+1.\]
Consequently we have \[\alpha_{k}=\frac{(k-1)(2s-3)}{2(2s-k)}\sum\limits_{t=5}^{k}\alpha_{t-1}\alpha_{k-t+4},\quad 5\leq k\leq n-2.\]
Applying formula \eqref{E:comb} inductively on $t$ we obtain \[\alpha_{(2s-3)t+3} = (-1)^{t+1}C_{t}^{2s-2}\alpha_{2s}^{t}, \qquad 1 \leq t \leq \left\lfloor\frac {n-5} {2s-3}\right\rfloor,\] \[\alpha_j = 0, \quad j \neq (2s-3)t+3, \quad 4 \leq j \leq n-2.\]
From the last equality of \eqref{eq4.5} we have \[c_{n-2s+2} = \frac{1}{\alpha_{2s}}\Big((2s+1-n)a_1\alpha_{n-1} - (2s-3)a_1\sum\limits_{t=2s+1}^{n-2s+2}(t-2)\alpha_{n-t+2}\alpha_{t} \Big) +a_{n-2s+1} - (2s-3)^2a_1\alpha_{n-2s}.\]
Thus, we have the cases \textit{i}) and \textit{ii}).
\textbf{Case 2.} Let $\alpha_{2i}=0$ for all $i \ (2 \leq i \leq \frac {n-3} 2)$. Then the first four equalities of \eqref{eq4.5} hold and from the last two equalities we have \begin{equation}\label{eq.forodd} \begin{aligned} &\alpha_5(a_2-2a_1) =0, \qquad \alpha_{2s+1}(a_2-(2s-2)a_1) =sa_2\sum\limits_{t=3}^s\alpha_{2t-1}\alpha_{2s+5-2t}, \quad 3 \leq s \leq \frac {n-3} 2. \\ &(2a_2-c_3-(n-6)a_1)\alpha_{n-1}=0. \end{aligned} \end{equation}
If $\alpha_{n-1}=0$, then taking $a_1=a_2=0$ and $c_3 \neq 0$, we have a non-nilpotent pre-derivation for any values of $\alpha_{2i+1}$, and so we have the case \textit{iii}).
If $\alpha_{n-1}\neq0$, then $c_3 = 2a_2+(n-6)a_1$ and we obtain that the non-nilpotency of pre-derivations depends on the parameters $\alpha_{2i+1}$.
Let $\alpha_{2s+1}$ be the first non-vanishing parameter among $\{\alpha_5,\alpha_7, \dots, \alpha_{n-4},\alpha_{n-2}\}$. Then we get \[a_2 = (2s-2)a_1.\]
Applying formula \eqref{E:comb} inductively on $t$ from \eqref{eq.forodd} we obtain \[\alpha_{(2s-2)t+3} = (-1)^{t+1}C_{t}^{2s-1}\alpha_{2s+1}^{t}, \qquad 1 \leq t \leq \left\lfloor\frac {n-5} {2s-2}\right\rfloor\]
and \[\alpha_j = 0, \qquad j \neq (2s-2)t+3, \quad 4 \leq j \leq n-2.\]
Therefore, we have the case \textit{iv}). \end{proof}
Now we give the classification of non-strongly nilpotent filiform Leibniz algebras from the second class.
\begin{prop} Let $L(\beta_4, \beta_5, \dots, \beta_n, \gamma)$ be a filiform Leibniz algebra of the second family. If $\beta_4 = \beta_5 = \dots =\beta_{n-1} =0$, then $L$ is non-strongly nilpotent. \end{prop}
\begin{proof} Analogously to the proof of Proposition \ref{prop41}. \end{proof}
It is obvious that any algebra from the family $F_2(0,\dots, 0, \beta_n, \gamma)$ is isomorphic to one of the algebras $F_2(0,\dots, 0, 0, 0)$, $ F_2(0,\dots, 0, 1, 0)$, $F_2(0,\dots, 0, 0, 1)$. Note that these algebras are non-characteristically nilpotent (see \cite{KhLO}).
Now we consider the case of $\beta_i\neq 0$ for some $i \ (4 \leq i \leq n-1)$. Then from \eqref{eq0} we have that $c_2=0$.
\begin{thm}\label{SC.n.even} Let $L$ be an $n$-dimensional complex non-strongly nilpotent filiform Leibniz algebra from the family $F_2(\beta_4,\ldots,\beta_n,\gamma)$ and let $n$ be even. Then $L$ is isomorphic to one of the following algebras: \begin{align*} & F_2^{2s}(0,\dots, 0, \beta_{2s}, 0 \dots, 0 , \beta_{n-1},\beta_n,\gamma), \qquad \beta_{2s}=1, \quad 2 \leq s \leq \frac{n-2} 2,\\ & F_2(0, \beta_5, 0, \beta_7, 0 , \dots, 0, \beta_{n-1}, \beta_n, \gamma). \end{align*} \end{thm}
\begin{proof}
From Proposition \ref{prop32} we have:
\begin{equation}\label{eq4.8}
\left\{\begin{array}{ll} (c_3-2a_1)\beta_4=0, \quad (b_2-2a_1)\beta_4=0,& \\[1mm] \sum\limits_{t=3}^{k}(a_{2k-2t+3}-c_{2k-2t+4}+a_2\beta_{2k-2t+4})\beta_{2t-2}=0, & 3\leq k\leq \frac{n-2}{2},\\[1mm] (c_3-2a_1)\beta_{2k}=\sum\limits_{t=3}^{k}(a_{2k-2t+4}-c_{2k-2t+5}+a_2\beta_{2k-2t+5})\beta_{2t-2}, & 3\leq k\leq \frac{n-2}{2},\\[1mm] (b_2-(k-2)a_1)\beta_{k}=\frac{k-1}{2}a_2\sum\limits_{t=5}^{k}\beta_{t-1}\beta_{k-t+4}, & 5\leq k\leq n-2,\\[1mm] (b_2-(n-3)a_1)\beta_{n-1}=a_2\sum\limits_{t=3}^{\frac{n-2}{2}}(2t-3)\beta_{n-2t+3}\beta_{2t-1}\\[1mm] \qquad\qquad\qquad\qquad\qquad \ \ + \sum\limits_{t=2}^{\frac{n-2}{2}}(c_{n-2t+2}-a_{n-2t+1}+(2t-3)a_2\beta_{n-2t+2})\beta_{2t}, \end{array}\right. \end{equation}
\textbf{Case 1.} Let $\beta_4\neq 0$, then from \eqref{eq4.8} we have \begin{align*}
c_3&=b_2=2a_1, \qquad c_{i} =a_{i-1}+a_2\beta_i,\qquad \quad 4 \leq i \leq n-3,\\ (4-k)a_1\beta_{k}&=\frac{k-1}{2}a_2\sum\limits_{t=5}^{k}\beta_{t-1}\beta_{k-t+4},\qquad \qquad \qquad 5\leq k\leq n-2,\\ (5-n)a_1\beta_{n-1}&=(c_{n-2}-a_{n-3}+a_2\beta_{n-2})\beta_{4}+a_2\sum\limits_{t=5}^{n-2}(t-2)\beta_{n-t+2}\beta_t. \end{align*}
Since $L$ is non-strongly nilpotent, we get $a_1\neq0$ and \[ \beta_5=-\frac{2a_2}{a_1}\beta_4^2, \qquad \beta_{k}=\frac{(k-1)a_2}{2(4-k)a_1}\sum\limits_{t=5}^{k}\beta_{t-1}\beta_{k-t+4},\quad 6\leq k\leq n-2. \]
From the isomorphism criterion given in [10, Theorem 4.4] we have that if two algebras from the family $F_2(\beta_4,\ldots,\beta_n,\gamma)$ are isomorphic, then there exist $A,B,D \in \mathbb{C}$, such that \[\beta_4'=\frac{D}{A^2}\beta_4,\qquad \beta_5'=\frac{D}{A^3}(\beta_5-\frac{2B}{A}\beta_4^2),\] where $\beta_i$ and $\beta_i'$ are parameters of the first and second algebras, respectively.
Putting $D=\frac{A^2}{\beta_4}$ and $B=\frac{A\beta_5}{2\beta_4^2}$ we obtain $\beta_4'=1$, $\beta_5'=0$. Therefore, we have shown that if $L$ is a non-strongly nilpotent algebra from the family $F_2(\beta_4,\ldots,\beta_n,\gamma)$ with $\beta_4 \neq 0$, then we may always suppose \[\beta_4=1, \qquad \beta_5=0.\]
Moreover, from $\beta_5=-\frac{2a_2}{a_1}\beta_4^2$, we obtain $a_2=0$, which implies $\beta_k=0$ for $6\leq k\leq n-2$ and \[c_{n-2} = a_{n-3} + \frac{(5-n)a_1\beta_{n-1}}{\beta_4}.\]
Thus, $L$ is isomorphic to the algebra $F_2(1,0,\ldots,0,\beta_{n-1},\beta_n,\gamma)$.
\textbf{Case 2.} Let $\beta_{2s}\neq 0$ for some $s \ (3 \leq s \leq \frac{n-2} 2)$ and $\beta_{2i}=0$ for $2 \leq i \leq s-1$. Then we have \begin{align*} b_2&=(2s-2)a_1, \quad c_3 = 2a_1, \quad c_{i} =a_{i-1}+a_2\beta_i,\quad 4 \leq i \leq n-2s+1,\\ (2s-k)a_1\beta_{k}& =\frac{k-1}{2}a_2\sum\limits_{t=5}^{k}\beta_{t-1}\beta_{k-t+4},\qquad \qquad \qquad \qquad \quad 5\leq k\leq n-2,\\ (2s+1-n)a_1\beta_{n-1}&=(c_{n-2s+2}-a_{n-2s+1}+a_2\beta_{n-2s+2})\beta_{2s}+a_2\sum\limits_{t=5}^{n-2}(t-2)\beta_{n-t+2}\beta_t. \end{align*}
Since $L$ is non-strongly nilpotent, we get $a_1\neq0$, which implies $\beta_5=\dots=\beta_{2s-1}=0$. Moreover, we have
\[\beta_{4s-3} = -\frac{(2s-2)a_2}{(2s-3)a_1}\beta_{2s}^2.\]
From the isomorphism criterion for filiform Leibniz algebras of the second class, we obtain \[\beta_{2s}'=\frac{D}{A^{2s-2}}\beta_{2s},\qquad \beta_{4s-3}'=\frac{D}{A^{4s-2}}\left(\beta_{4s-3}-\frac{(2s-2)B}{A}\beta_{2s}^2\right).\]
Putting $D=\frac{A^{2s-2}}{\beta_{2s}}$ and $B=\frac{A\beta_{4s-3}}{(2s-2)\beta_{2s}^2}$, we have $\beta_{2s}'=1$, $\beta_{4s-3}'=0$. Therefore, we have shown that if $L$ is a non-strongly nilpotent algebra from the family $F_2(\beta_4,\ldots,\beta_n,\gamma)$ with $\beta_{2s} \neq 0$ and $\beta_{2i} = 0$ for $2 \leq i \leq s-1$, then we may always suppose \[\beta_{2s}=1, \quad \beta_{4s-3}=0.\]
Moreover, from $\beta_{4s-3} = -\frac{(2s-2)a_2}{(2s-3)a_1}\beta_{2s}^2$, we obtain $a_2=0$, which implies $\beta_k=0$ for $2s+1\leq k\leq n-2$ and \[c_{n-2s+2} = a_{n-2s+1} + \frac{(2s+1-n)a_1\beta_{n-1}}{\beta_{2s}}.\]
Thus, $L$ is isomorphic to the algebra \[F_2^{2s}(0,\dots, 0, \beta_{2s}, 0 \dots, 0 , \beta_{n-1},\beta_n,\gamma), \quad \beta_{2s}=1.\]
\textbf{Case 3.} Let $\beta_{2s}= 0$ for $2 \leq s \leq \frac {n-2} 2$, then we have \begin{align*} (b_2-(k-2)a_1)\beta_{k}& =\frac{k-1}{2}a_2\sum\limits_{t=5}^{k}\beta_{t-1}\beta_{k-t+4}, && 5\leq k\leq n-2,\\ (b_2-(n-3)a_1)\beta_{n-1}& =a_2\sum\limits_{t=3}^{\frac{n-2}{2}}(2t-3)\beta_{n-2t+3}\beta_{2t-1}. \end{align*}
Taking $a_1=a_2=0$ and $c_3\neq 0$, the previous equalities hold for any values of $\beta_{2s+1}$. Since $c_3\neq 0$, this algebra is non-strongly nilpotent. Therefore, we obtain the algebra $F_2(0, \beta_5, 0, \beta_7, 0 , \dots, 0, \beta_{n-1}, \beta_n, \gamma)$. \end{proof}
\begin{thm} Let $L$ be an $n$-dimensional complex non-strongly nilpotent filiform Leibniz algebra from the family $F_2(\beta_4,\ldots,\beta_n,\gamma)$ and let $n$ be odd. Then $L$ is isomorphic to one of the following algebras: \begin{align*} &F_2^{j}(0,\dots, 0, \beta_{j}, 0 \dots, 0 , \beta_{n-1},\beta_n,\gamma), \qquad \beta_{j}=1, \quad 4 \leq j \leq n-2,\\ & F_2(0, \beta_5, 0, \beta_7, 0 , \dots, 0, \beta_{n-2},0, \beta_n, \gamma). \end{align*} \end{thm}
\begin{proof} Analogously to the proofs of Theorems \ref{thm42} and \ref{SC.n.even}. \end{proof}
Now, we consider a Leibniz algebra $L$ from the third family $F_3(\theta_1,\theta_2,\theta_3)$. Note that $L$ is a parametric algebra with parameters $\theta_1,\theta_2,\theta_3$ and $\alpha_{i,j}^k$. The parameters $\alpha_{i,j}^k$ arise from the products $[e_i, e_j]$ for $i < j \leq n-1$. In the case $\theta_1 = \theta_2 = \theta_3 =0$, the algebra $L$ is a Lie algebra.
In the next proposition we assert that the strong nilpotency of $L(\theta_1,\theta_2,\theta_3)$ is equivalent to the strong nilpotency of $L(0,0,0)$.
\begin{prop} An algebra $L(\theta_1,\theta_2,\theta_3)$ from the family $F_3(\theta_1,\theta_2,\theta_3)$ is strongly nilpotent if and only if the algebra $L(0,0,0)$ is strongly nilpotent. \end{prop} \begin{proof} Note that the parameters $\theta_1,\theta_2,\theta_3$ appear only in the multiplications $[e_1,e_1], [e_1,e_2], [e_2,e_2]$. Since $[P(x),y],\ [x,P(y)], \ [x,y]\in L^2$ and $e_1,e_2\not \in L^2$, the parameters $\theta_i$ do not take part in the identity \[P([[x,y],z])=[[P(x),y],z]+[[x,P(y)],z]+[[x,y],P(z)],\quad x,y,z \in L.\]
Therefore, the spaces of pre-derivations of algebras $L(\theta_1,\theta_2,\theta_3)$ and $L(0,0,0)$ coincide. \end{proof}
\textbf{Acknowledgments.} This work was supported by Agencia Estatal de Investigaci\'on (Spain), grant MTM2016-79661-P (European FEDER support included, UE).
The second named author was supported by IMU/CDC Grant (Abel Visiting Scholar Program), and he would like to acknowledge the hospitality of the University of Santiago de Compostela (Spain).
\end{document}
\begin{document}
\title{Quantum force estimation in arbitrary non-Markovian Gaussian baths}
\author{C.L. Latune$^1$, I. Sinayskiy$^{1,2}$, F. Petruccione$^{1,2}$} \affiliation{$^1$Quantum Research Group, School of Chemistry and Physics, University of KwaZulu-Natal, Durban, KwaZulu-Natal, 4001, South Africa\\ $^2$National Institute for Theoretical Physics (NITheP), KwaZulu-Natal, 4001, South Africa}
\date{\today} \begin{abstract} The force estimation problem in quantum metrology with an arbitrary non-Markovian Gaussian bath is considered. No assumptions are made on the bath spectrum or on the strength of its coupling to the probe. Considering the natural global unitary evolution of both bath and probe, and assuming an initial global Gaussian state, we are able to solve the main issues of any quantum metrological problem: the best achievable precision determined by the quantum Fisher information, the best initial state and the best measurement. Studying the short-time behavior and comparing it to regular Markovian dynamics, we observe an increase of the quantum Fisher information. We emphasize that this phenomenon is due to the ability to perform measurements below the correlation time of the bath, activating non-Markovian effects. This has important consequences for the sequential preparation-and-measurement scenario, as the quantum Fisher information becomes unbounded when the initial probe mean energy goes to infinity, whereas its Markovian counterpart remains bounded by a constant. The long-time behavior shows the complexity and potential variety of non-Markovian effects, somewhere between the exponential decay characteristic of Markovian dynamics and the sinusoidal oscillations characteristic of resonant narrow bands. \end{abstract}
\maketitle
\section{Introduction} Quantum metrology has brought a revolution in parameter estimation theory. The exploitation of subtle quantum resources such as entanglement and squeezing offers new ways of circumventing noise without requiring brute-force energy increases (often not possible or not desirable). An important example is the detection of gravitational waves, where quantum metrology allows one to reduce the leading residual noise in the detectors, the almost irreducible quantum fluctuations, by introducing squeezed light (a reduction of 28\% of the shot noise without additional noise \cite{natphot}). Note that the recent observation of gravitational waves from a binary black hole merger \cite{ligo} was realized with coherent light, but the next-generation experiment should use squeezed light. In such an experiment the strain of the gravitational wave is encoded in a phase difference between the electromagnetic fields running in the two arms of an interferometer. The resulting detection task is the estimation of a rotation in phase space.
Here, we focus on force estimation, which corresponds to the estimation of a displacement in phase space. High precision force estimation can be implemented in several setups like optomechanics \cite{impulsiveforce, microopto, aspel}, trapped ions \cite{biercuck}, atomic force microscopy \cite{ieee} and other harmonic oscillator systems \cite{nanotube}, useful for testing or exploring fundamental physical phenomena \cite{singlespin, pt, ca, planck,nori}, and in biology \cite{biology}.
The treatment of the force estimation problem with a realistic environment including non-Markovian noise was pioneered in \cite{euroj}. Most of the publications dealing with non-Markovian noise concern two-level-system quantum metrology \cite{plenio,matsuzaki,dd}.
In this paper we treat the force estimation problem in the presence of an environment consisting of a collection of harmonic oscillators, without any assumption on the bath spectrum and assuming linear coupling of arbitrary strength with the probe. We develop a new approach to study the non-Markovian effects of the bath. Ideally the dynamical parameter to be estimated is encoded in the system, or $probe$, via unitary evolution. For a realistic process, noise has to be added. Rather than constructing and solving a dissipative master equation, obtained eventually from several approximations on the probe-environment coupling and initial state, we consider the global unitary evolution of the system and its environment. In doing so we avoid the traditional Markovian and weak-coupling approximations and the resolution of a master equation. We start directly from a physical purification of the probe dynamics determined by the bath properties and bath-probe coupling coefficients. We also avoid problems related to potential initial correlations between probe and bath, which can yield non-completely positive maps when tracing out the bath \cite{pechukas}.
In this work, effects which arise exclusively from the dynamics at times smaller than the bath correlation time are called non-Markovian. In other words, being able to make measurements at time scales below the bath correlation time can potentially unveil new effects that we call non-Markovian. This notion of non-Markovianity meets its classical counterpart since it refers to the memory of the bath, defined as the correlation time.
How this is related to more elaborate definitions, which intend to characterize non-Markovianity by short-, medium- or long-time effects, is a highly non-trivial question. Hereafter we refer to non-Markovian noise as noise appearing below the correlation time of the bath.
The parameter estimation protocol is the following. The probe system is a harmonic oscillator $S$. After a time window $[t_0;t]$ of force sensing (interaction between the probe and the force), the probe is measured and the force is estimated from the measurement output, given that the initial state is known. The function which generates an estimate from the measurement outputs is called an estimator. To obtain a better estimate the process has to be repeated several times and the estimate updated via the chosen estimator. For a given measurement the maximal achievable precision of the estimation is determined by the so-called Fisher information \cite{fisher,cramer,rao}. We denote by $F$ the force intensity, $\rho_S(t,t_0,F)$ the probe state just before the measurement, $\{\Pi_m\}_m$ a POVM (positive-operator-valued measure) describing the measurement, and $p(m|F) = {\rm Tr}[\rho_S(t,t_0,F)\Pi_m]$ the conditional probability of getting the measurement output $m$. The Fisher information associated to this estimation experiment is \cite{fisher} \begin{equation}
{\cal F}(\{\Pi_m\}_m,F) = \int dm \frac{1}{p(m|F)}\left[\frac{\partial p(m|F)}{\partial F}\right]^2. \end{equation} The Fisher information, as well as the precision of the force estimation, depends strongly on the initial state of the probe and on the measurement applied. The maximal Fisher information over all possible POVMs is called the quantum Fisher information (QFI), ${\cal F}_Q(F) = {\rm Max}_{\{\Pi_m\}_m}\{{\cal F}(\{\Pi_m\}_m,F)\}$. The uncertainty of the estimation is characterised by the mean square of the error $\delta F$ between the estimate $F_{est}$ and the real value $F$ and is bounded from below by the Cramer-Rao bound \cite{cramer,rao}: \begin{equation}\label{qcrb} \langle\delta^2 F\rangle:= \langle(F_{est}-F)^2\rangle \geq [\nu {\cal F}_Q(F)]^{-1}, \end{equation} where $\nu$ is the number of independent repetitions of the experiment and $\langle...\rangle$ is an ensemble average. This lower bound is saturated for efficient and unbiased estimators \cite{fisher,caves,bcm96} and when best measurements are realised. Identifying best measurements and efficient estimators is in general a complex task. Except for special situations, there is no efficient estimator for finite $\nu$ \cite{bcm96}. The inequality \eqref{qcrb} is sometimes referred to as local estimation, not only because in principle the lower bound depends on the value of the parameter $F$, and consequently characterises the precision only at this value, but also because estimators which saturate the bound may depend on the value of the unknown parameter, and their efficiency may require that the range of the possible values for the unknown parameter is known in advance. However, because we choose a linear interaction between the force and the probe, the QFI does not depend on $F$, and the lower bound of the error function (see \cite{tsang}) would remain the same as the actual Cramer-Rao bound.
Note also that the hypothesis of initialising the probe and bath in a global Gaussian state guarantees that the distributions generated by quadrature measurements are Gaussian, so that the maximum likelihood estimator \cite{fisher,braunstein} is efficient already from the first measurement ($\nu=1$).
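To make this concrete: for a Gaussian outcome distribution whose mean depends linearly on $F$ (the situation produced by quadrature measurements on Gaussian states), the integral definition of the Fisher information reduces to (slope)$^2$/variance. A small numerical sketch, with illustrative values of the slope and variance that are not taken from the paper:

```python
import numpy as np

# Classical Fisher information of a Gaussian outcome distribution
# p(m|F) = N(k*F, sigma^2), computed from the integral definition;
# the analytic value is k^2 / sigma^2 (all numbers are illustrative).
k, sigma, F = 2.0, 0.5, 1.3

m = np.linspace(k*F - 10*sigma, k*F + 10*sigma, 200001)
dm = m[1] - m[0]

def p(m, F):
    return np.exp(-(m - k*F)**2 / (2*sigma**2)) / (sigma*np.sqrt(2*np.pi))

dF = 1e-5
dpdF = (p(m, F + dF) - p(m, F - dF)) / (2*dF)   # finite-difference dp/dF
fisher_numeric = np.sum(dpdF**2 / p(m, F)) * dm
print(fisher_numeric, k**2 / sigma**2)          # both ≈ 16.0
```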
The QFI determines the maximal achievable precision, making it the central quantity in most works in quantum metrology.\\
In the following we derive an analytic expression of the QFI and determine the best initial state and the best measurement, with the only assumption being the Gaussianity of the initial global state of the probe and environment. From this result we show that, surprisingly, at short times $t-t_0$ below the bath correlation time, the QFI is equal up to order 3 in $t-t_0$ to the QFI for noiseless evolution, whereas the QFI within the Markovian approximation is equal to the noiseless expression only up to order 2 in $t-t_0$. This interesting feature is responsible for a huge difference between Markovian and non-Markovian noise regimes in what we call the sequential preparation-and-measurement scenario, a repetition of a sequence of probe state preparation, sensing of the force for an adequate duration $\tau$, and measurement (Section \ref{seqmes}). We show that with growing initial energy of the optimally prepared probe, the optimal duration of the protocol step diminishes, so that by entering timescales below the bath correlation time one may benefit from the non-Markovian effects. As a result, the QFI goes to infinity (and the uncertainty of the estimation to zero), whereas it remains limited by a constant for protocol-step timescales larger than the bath correlation time, where the probe experiences Markovian noise. \\ In \cite{dd} the authors observe that the QFI for frequency estimation by qubits can diverge with growing initial energy. Their observation points in the same direction as ours, but they conclude that this super-classical precision is not related to non-Markovianity since they do not observe back-flow of information from the bath to the probe. Here we show that the same super-classical precision is attainable in continuous variable systems, and without specific hypotheses on the noise as in \cite{dd}, just assuming Gaussianity of the bath (which includes ohmic, sub-ohmic or super-ohmic spectra, a "general environment" as it is called in \cite{paz}).
Although back-flow of information is a witness of non-Markovianity, it is not a universal criterion. In \cite{dd} it is not possible to analyse directly the bath memory properties because the bath does not appear explicitly. In our model we can directly compare the bath dynamics timescales with the measurement timescale and conclude that the super-classical precision occurs when the measurement timescale enters the bath memory timescale, where non-Markovian noise arises.
\section{Global dynamics}\label{global} As mentioned in the introduction the harmonic oscillator $S$ is linearly coupled to the force to be estimated. The force is modulated by a time function $\zeta(t)$ assumed to be known. The global unitary evolution of the probe $S$ and the bath $B$ is denoted by $U(t,t_0,F)$. We use hereafter the following notations: \begin{eqnarray} X(\theta)&:=&\frac{1}{\sqrt{2}}(a^{\dag}e^{i\theta} + a e^{-i\theta}),\nonumber\\ X(\theta) [t,t_0,F]&:=&U^{\dag}(t,t_0,F)X(\theta)U(t,t_0,F),\nonumber\\ \big\langle X(\theta)[t,t_0,F]\big\rangle_0&:=&{\rm Tr}_{SB}\{\rho_{SB}^0 X(\theta)[t,t_0,F]\},\nonumber\\ \big\langle\Delta^2 X(\theta)[t,t_0]\big\rangle_0&:=& \big\langle \{X(\theta)[t,t_0]\}^2\big\rangle_0 -\big\langle X(\theta)[t,t_0]\big\rangle_0^2.\nonumber \end{eqnarray}
Note that, defined in this way, $X(\theta)[t,t_0]$ is an observable on the joint Hilbert space of probe and bath. By taking the expectation value in the state $\rho_{SB}^0$ one recovers the usual expression for the expectation value of a probe observable. Note also that the quantity inside the parentheses (here $\theta$) and $[t,t_0]$ are independent: the first designates the quadrature angle, which in the following will be chosen according to the instant of measurement $t$, and the second designates the time evolution of this same quadrature between $t_0$ and $t$.
The global Hamiltonian is given by: \begin{eqnarray} H(t,F)/\hbar &=& \omega_0 a^{\dag} a - F\frac{\omega_0}{\sqrt{2}}\zeta(t)\left(a^{\dag}+a\right) + \sum_n \omega_n b_n^{\dag}b_n \nonumber\\ &&- ia^{\dag}\sum_n K_n b_n +ia\sum_n K_n^{*}b_n^{\dag} , \end{eqnarray} where $F$ is the force intensity, the parameter to estimate. We consider that the environment consists of a collection of harmonic oscillators coupled to $S$ via the coefficients $K_n$. We show (Appendix \ref{evolution}) that the global unitary evolution $U(t,t_0,F)$ can be written in the following form \begin{eqnarray}\label{evolop}
U(t,t_0,F) = &e^{-\frac{i}{\hbar}(t-t_0)H_0}&e^{iF|D(t,t_0)|X(\phi^D_{t,t_0})} R(t,t_0){\cal B}(t,t_0),\nonumber\\
\end{eqnarray} where $t_0$ designates the instant of time of the beginning of the interaction of the probe with the bath and the force. Expression \eqref{evolop} shows that we can split $U(t,t_0,F)$ into a succession of four operations (see details in Appendix \ref{evolution}). First, a probe-bath mixing ${\cal B}(t,t_0)$. Then, a displacement proportional to $F$, implemented by the operator $R(t,t_0)$, in the phase space of the bath. Next, a second displacement, given by $e^{iF|D(t,t_0)|X(\phi^D_{t,t_0})}$ and taking place in the phase space of $S$, with $D(t,t_0) = \omega_0\int_{t_0}^tdu \zeta(u) e^{i\omega_0(u-t_0)}G(t,u) $, $\phi^D_{t,t_0}=\arg{D(t,t_0)}$, and the complex function $G(t,u)$ describing the bath response. Its explicit expression is similar to a Dyson series and is shown at the beginning of Appendix \ref{evolution}. The last operation is the free evolution $e^{-\frac{i}{\hbar}(t-t_0)H_0}$ with $H_0/\hbar=\omega_0 a^{\dag} a+ \sum_n \omega_n b_n^{\dag}b_n$.
Expression \eqref{evolop} is in itself an important and interesting result owing to its simple form. It allows the exact derivation of probe and bath observables in the Heisenberg picture and of the corresponding moments (Appendix \ref{evolution}). The usual treatment would require much more work to derive these quantities: establishing an exact non-Markovian master equation and then solving it. With our method we also have the possibility of deriving these quantities without the hypothesis of an initially uncorrelated bath and probe, which would hardly be possible with a master-equation treatment \cite{pechukas}. However, this possibility is not exploited in the present work since we focus on the best initial state, which is a pure state.
Note that the expression \eqref{evolop} is similar - without $R(t,t_0)$ - to the form found in \cite{pra}, where the noise is treated via a Markovian master equation. Note also that \eqref{evolop} is obtained without any approximation regarding the strength of the coupling with the bath or the bath spectral density.
\section{Derivation of the metrological quantities for Gaussian states} In order to access the QFI of $S$ related to the parameter $F$ we have to maximize the Fisher information ${\cal F}(\{\Pi_m\}_m,F)$ over all physical measurements $\{\Pi_m\}_m$. Unfortunately this is not tractable. Efforts have been made to find alternatives \cite{caves, paris, bruno, nicim}, but they are still hard to apply for arbitrary dynamics and states. We restrict ourselves to Gaussian states which are more easily accessible experimentally and for which \cite{pinel, monras, jiang} have derived explicit expressions of the QFI. As we will see in the following, this important assumption also guarantees that the best measurement is a quadrature measurement and that the maximum likelihood estimator is efficient implying that the quantum Cramer-Rao bound Eq. \eqref{qcrb} is saturated for any number of repetitions of the experiment ($\nu \geq 1$) \cite{fisher,braunstein}. Assuming that the initial global state of the probe and bath $\rho_{SB}^0$ is Gaussian implies that at any time the probe state $\rho_S(t,t_0)$ is Gaussian too since the global Hamiltonian $H(t,F)$ is quadratic and the partial trace is a Gaussian operation. We are now able to derive an expression of the QFI thanks to results from \cite{caves} and expressions for fidelity between Gaussian states \cite{scutaru}. One can also use the derivations in \cite{pinel, monras, jiang} which are applications of the results in \cite{caves} to Gaussian states. As expected, all these expressions of the QFI for Gaussian states are functions of the first and second moment of the quadrature operators. As already mentioned at the end of Section \ref{global} these quantities are highly non-trivial to derive without approximations. Here we do so (Appendix \ref{evolution}) thanks to the simple form of \eqref{evolop}. 
Since in this problem of force estimation the parameter $F$ to be estimated controls only the amplitude of the displacement in the phase space, the expression for QFI derived in \cite{pinel} for Gaussian states can be simplified to \begin{eqnarray}\label{firstqfi}
{\cal F}_Q (t,t_0) &=& \frac{ |D(t,t_0)|^2}{{\rm Det}{\bf\Sigma}(t,t_0)}\big\langle\Delta^2 X(\phi^D_{t,t_0} - \omega_0(t-t_0))[t,t_0]\big\rangle_0,\nonumber\\ \end{eqnarray} where ${\rm Det}{\bf \Sigma}(t,t_0)$ is the determinant of the covariance matrix ${\bf \Sigma}(t,t_0)$ of $\rho_S(t,t_0)$. Note that the use of the result of \cite{pinel} requires that the initial probe state is Gaussian, but this does not exclude correlation with the bath. The minimum condition is indeed that $\rho_{SB}^0$ be Gaussian. For $\rho_S(t,t_0)$ being Gaussian, the determinant ${\rm Det}{\bf \Sigma}(t,t_0)$ is equal to the product of the extremal quadrature variances. Let's call $\theta_m^t$ the angle of the maximal quadrature variance at time $t$ so that we have ${\rm Det}{\bf \Sigma}(t,t_0) = \big\langle\Delta^2 X(\theta_m^t)[t,t_0]\big\rangle_0 \times\big\langle\Delta^2 X(\theta_m^t+\pi/2)[t,t_0]\big\rangle_0$. The angle $\theta_m^t$ is a function of $t$ and of $\theta_m^0$, the angle of the maximal quadrature variance of the initial state at $t_0$. If the value of $t$ is predetermined, meaning that we choose the length of the sensing window before the beginning of the experiment (as for sequential preparation-and-measurement scenario, Section \ref{seqmes}), $\theta_m^t$ is uniquely determined by $\theta_m^0$ and so there exists one $\theta_m^0 \in[0,\pi[$ such that $\theta_m^t = \phi^D_{t,t_0} - \omega_0(t-t_0)$. As shown in Appendix \ref{evolution}, for the probe and bath initially uncorrelated, the simple following relation holds $\theta_m^t = \theta_m^0 + \phi_{t,t_0}^G -\omega_0(t-t_0)$, where $\phi_{t,t_0}^G:=\arg{G(t,t_0)}$. Under these conditions, preparing the probe with $\theta_m^0 = \phi_{t,t_0}^D-\phi_{t,t_0}^G$ guarantees that $\theta_m^t = \phi^D_{t,t_0} - \omega_0(t-t_0)$, and consequently the expression of the QFI reduces to \begin{equation}\label{qfipvar}
{\cal F}_Q(t,t_0) = \frac{ |D(t,t_0)|^2}{\Big\langle\Delta^2 P(\phi^D_{t,t_0} - \omega_0(t-t_0))[t,t_0]\Big\rangle_0}. \end{equation} This expression is remarkable for its simplicity and its similarity to the noiseless and Markovian dynamics expressions \cite{pra}. Note that the condition $\theta_m^0 = \phi_{t,t_0}^D-\phi_{t,t_0}^G$ is obviously satisfied for any circularly symmetric state, such as coherent or Fock states.
However, more important than writing the QFI in a simple form is identifying the best initial state. We already know that it belongs to the pure states (by convexity of the QFI), which in particular implies that the best state is not correlated with the bath. Maximizing \eqref{firstqfi} implies maximizing the variance $\big\langle\Delta^2 X(\phi^D_{t,t_0} - \omega_0(t-t_0))[t,t_0]\big\rangle_0$ (given a limited initial mean energy $E$) and minimizing ${\rm Det}{\bf \Sigma}(t,t_0)$, which is already minimal, equal to $1/4$, if $S$ is initialized in a pure state. In Appendix \ref{evolution} we show that the best state is the squeezed state $\hat S[\mu(t,t_0)]|0\rangle$ where $|0\rangle$ is the ground state of the harmonic oscillator, $\hat S[\mu(t,t_0)] = \exp{\left(\frac{\mu(t,t_0)}{2} a^{\dag 2} - \frac{\mu^{*}(t,t_0)}{2} a^{2}\right)}$ is a squeezing operator, with $\mu(t,t_0)=re^{2i(\phi^D_{t,t_0}-\phi^G_{t,t_0})}$ and $r=\frac{1}{2}\ln{[2(E+\sqrt{E^2-1/4})]}$. One can see that the best state is squeezed along the quadrature $P(\phi_{t,t_0}^D-\phi_{t,t_0}^G)$, implying that the condition $\theta_m^0 = \phi_{t,t_0}^D-\phi_{t,t_0}^G$ is satisfied and that the form of the QFI for the best state is also \eqref{qfipvar}. Substituting the variance of the quadrature $P(\phi^D_{t,t_0} - \omega_0(t-t_0))[t,t_0,F]$ by its expression for the best state gives \begin{widetext} \begin{eqnarray}\label{qfibeststate}
{\cal F}_Q(t,t_0) = \frac{ |D(t,t_0)|^2}{|G(t,t_0)|^2\left[4(E+\sqrt{E^2-1/4})\right]^{-1}+\sum_n |K_n|^2\left(N_n + \frac{1}{2}\right) \Big|\int_{t_0}^tds G(t,s) e^{i(\omega_0-\omega_n)s}\Big|^2}, \end{eqnarray} \end{widetext}
where $N_n := {\rm Tr}[\rho_B^0b_n^{\dag}b_n]$. This expression shows the transition between noiseless ($K_n=0$ and $G(t,t_0)=1$) and noisy quantum metrology. Without noise we recover the so-called Heisenberg limit, characterized by a linear dependence on $E$ and leading to unbounded precision for growing $E$, as in noiseless classical parameter estimation. For noisy quantum metrology, as time passes the influence of the bath grows ($|G(t,t_0)| \leq 1$) and can eventually dominate the initial preparation conditions ($|G(t,t_0)| \ll 1$), spoiling efforts to recover unbounded precision. In the broad band limit with the rotating wave approximation, $G(t,t_0)$ tends to $0$ at long times (see Appendix \ref{bnband}), erasing the dependence on initial conditions and recovering the Markovian behavior \cite{pra}. We also show in Appendix \ref{bnband} that the resonant narrow band limit reduces $G(t,t_0)$ to a cosine, resulting in an indefinite succession of forward and backward flows of information between $S$ and the bath along with an indefinite dependence on the initial conditions, contrasting with the Markovian behavior. However, as we show in the following, the long time behavior of $G(t,t_0)$ cannot be determined in general without explicit expressions of $K_n$, but a behavior between these two extremes of total loss and total recovery of initial information is expected. \\
To complete the protocol of best estimation we determine in Appendix \ref{quadmeas} that the projective measurement onto the quadrature $P(\phi^D_{t,t_0} -\omega_0(t-t_0))$ yields a {\it Fisher information} equal to the right hand side of \eqref{qfipvar}. One concludes that whenever the {\it QFI} is given by \eqref{qfipvar}, in other words whenever the initial state is prepared in a Gaussian state with the maximal variance of the quadrature occurring for $\theta_m^0 = \phi_{t,t_0}^D-\phi_{t,t_0}^G$ (this includes the best state), the best measurement is the projection onto the quadrature $P(\phi^D_{t,t_0} -\omega_0(t-t_0))$. This useful result also shows that the best measurement generates Gaussian distributions, which implies that the maximum likelihood estimator is efficient for an arbitrary number of repetitions of the experiment \cite{fisher,braunstein}. One can show that in this problem the maximum likelihood estimator is the simple average of the outcomes (successively obtained after each repetition of the experiment) of the projective measurement just described above.
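As an illustrative numerical sanity check (with placeholder values of $|D|$ and of the variance, not taken from the physical model), one can verify that Gaussian measurement outcomes with mean $F|D|$ and fixed variance carry a classical Fisher information equal to $|D|^2$ divided by the variance, as on the right hand side of \eqref{qfipvar}:

```python
import numpy as np

# Sanity check with made-up numbers |D| and sigma2: for Gaussian outcomes
# x ~ N(mu(F), sigma2) with mu(F) = F*|D|, the classical Fisher information
# E[(d/dF log p)^2] equals (d mu/dF)^2 / sigma2 = |D|^2 / sigma2.
absD, sigma2, F = 1.7, 0.3, 0.5
rng = np.random.default_rng(0)
x = rng.normal(F * absD, np.sqrt(sigma2), size=200_000)

# Monte Carlo estimate of E[(d/dF log p(x|F))^2]
score = (x - F * absD) * absD / sigma2   # derivative of the Gaussian log-likelihood
fisher_mc = np.mean(score**2)

fisher_exact = absD**2 / sigma2
print(fisher_mc, fisher_exact)           # agree within Monte Carlo error
```

The Monte Carlo estimate converges to the analytic value $|D|^2/\sigma^2$, consistent with the efficiency of the quadrature measurement discussed above.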
Interestingly, we can also treat the situation where the instants at which the force begins and ends are not known exactly. Let's call $t_i$ and $t_f$ such instants so that $\zeta(t)=0$ for $t\leq t_i$ and $t\geq t_f$. The only change in the previous expressions is $D(t,t_0) = \omega_0\int_{t_i}^{t_f}du \zeta(u) e^{i\omega_0(u-t_0)}G(t,u) = D(t_f,t_i)$. As expected, initializing the experiment before the force begins and measuring after the force stops degrades the precision of the estimation, since $\big\langle \Delta^2 P(\phi^D_{t_f,t_i} -\omega_0(t_f-t_i))[t_f,t_i]\big\rangle \leq \big\langle \Delta^2 P(\phi^D_{t_f,t_i} -\omega_0(t-t_0))[t,t_0]\big\rangle$. This is one of the arguments for opting for the sequential preparation-and-measurement scenario which, as shown in Appendix \ref{seqmeasurement}, avoids the potential problem of knowing $t_i$ and $t_f$.\\
\section{Short and long time behavior}\label{shortandlong}
We assume that we can perform a measurement at a time $t$ such that $t-t_0$ is much smaller than $\omega_0^{-1}$, $\Omega_p^{-1}$, and $|\chi_q|^{-1}$, for all $p\geq2$ and $q\geq1$ (see Appendix \ref{considtimescale}). Under this condition one can legitimately expand Eq. \eqref{qfipvar} around $t_0$: \begin{eqnarray} {\cal F}_Q(t,t_0) &=& \frac{\omega_0^2(t-t_0)^2}{\left\langle \Delta^2 P\left[\phi_{t,t_0}^{D_0}\right]\right\rangle_0 }\left[\zeta^2(t_0) +\zeta(t_0)\dot{\zeta}(t_0)(t-t_0)\right]\nonumber\\
&&+ {\cal O}[\omega_0^2{\cal K}^2(t-t_0)^4],\label{fexpanded} \end{eqnarray}
where $\phi^{D_0}_{t,t_0}:=\arg{D_0(t,t_0)}$ with $D_0(t,t_0):=\lim_{|K_n|\rightarrow0} D(t,t_0)$, the noiseless value of $D(t,t_0)$ (Appendix \ref{considtimescale}). Consequently the bath dependence appears only at the 4th order in $(t-t_0)$, through the coefficient ${\cal K}^2:=\sum_n |K_n|^2$. One can show from \eqref{firstqfi} that this property is not merely an artefact of some particular initial states but in fact remains valid for any initial Gaussian state. This means that, up to order $3$ in $(t-t_0)$, the QFI is equal to the QFI without any noise. This is a surprising and interesting result, reminiscent of the quantum Zeno effect. We show in Appendix \ref{bathcorrelation} that the evolution time scales of the bath correlation function are of order $\Omega_p^{-1}$ and $|\chi_q|^{-1}$, for all $p\geq2$ and $q\geq1$. This implies that the expansion \eqref{fexpanded} is valid if the measurement is performed at time scales below the correlation time of the bath. \\ If on the contrary we cannot perform measurements below the correlation time of the bath, the expansion \eqref{fexpanded} is no longer valid (the first terms are no longer dominant and higher terms have to be taken into account). We show in Appendix \ref{shorttime} that in such a situation the short time expansion contains a bath contribution at the 3rd order. The same phenomenon happens for Markovian dynamics.\\
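A small numerical sketch (with a toy modulation $\zeta(u)=1+u$ and an arbitrary $\omega_0$, chosen here only for illustration) confirms the leading terms of the numerator in the noiseless limit $G=1$: one expects $|D|^2 = \omega_0^2[\zeta^2(t_0)\tau^2 + \zeta(t_0)\dot\zeta(t_0)\tau^3] + {\cal O}(\tau^4)$, with corrections only at fourth order, as in \eqref{fexpanded}:

```python
import numpy as np

# Toy check of the short-time numerator |D|^2 in the noiseless limit G = 1,
# with zeta(u) = 1 + u (so zeta(t_0) = zeta'(t_0) = 1 at t_0 = 0) and an
# arbitrary omega_0: expect omega_0^2 (tau^2 + tau^3) up to O(tau^4).
w0, tau = 2.0, 0.01
u = np.linspace(0.0, tau, 10_001)
f = (1 + u) * np.exp(1j * w0 * u)
D = w0 * np.sum((f[:-1] + f[1:]) / 2 * np.diff(u))  # trapezoidal quadrature
absD2 = abs(D) ** 2

predicted = w0**2 * (tau**2 + tau**3)
print(absD2, predicted)  # agree up to O(tau^4)
```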
We show in Section \ref{seqmes} that this short time behavior is responsible for a large increase (and even a change of scaling) of the QFI in sequential preparation-and-measurement protocols when the optimal time interval between measurements becomes smaller than the bath correlation time scale. \\
The long time analysis is in general much more complicated, due to $G(t,t_0)$. Looking first at the long time behavior of the first term of $G(t,t_0)$, one finds that its real part tends to its Markovian equivalent $-\gamma(t-t_0)/2$, while its imaginary part is more involved but cancels out if $g(\omega)|K(\omega)|^2$ is symmetrical with respect to $\omega_0$ (see Appendix \ref{longtime}). However, one can show that even in the limit of $t-t_0$ much bigger than all time scales involved in the problem, the second term in the expansion of $G(t,t_0)$ is different from its Markovian equivalent, $\gamma^2(t-t_0)^2/8$. So the Markovian behavior does not seem to be recovered in the long time limit, as is sometimes claimed \cite{euroj}. Note that for arbitrary $g(\omega)|K(\omega)|^2$ the imaginary part does not cancel out, and its contribution can give rise to diverse long time behaviors far from the Markovian one, as for instance in the extreme case of the resonant narrow band (Appendix \ref{bnband}).
\section{Sequential preparation-and-measurement scenario}\label{seqmes}
Let $[t_0,t_0 + T]$ be the supposed time window available for the sensing of the force. In order to increase the Fisher information about $F$ one can repeat the measurement $\nu$ times after sensing time intervals of $\tau$ such that $\nu \tau =T$. This process significantly increases the precision of the estimation when the window duration $T$ is bigger than the relaxation time of the probe. The time interval $\tau$ must be chosen adequately: there are two competing quantities growing with time, the information about $F$ in the probe state and the environment-induced fluctuation of the probe state. We now determine this optimal time interval $\tau_{\mathrm{opt}}$.\\
The total QFI is just the sum of each Fisher information generated after each measurement at time $t_k:=t_0+k\tau$: \begin{eqnarray}\label{fseq} {\cal F}_Q^{\mathrm{Seq}}(T,\tau) &=& \sum_{k=0}^{\nu-1} {\cal F}_Q(t_{k+1},t_k)\nonumber\\
&=& \sum_{k=0}^{\nu -1}\frac{|D(t_{k+1},t_k)|^2}{\langle \Delta^2 P(\phi^D_{t_{k+1},t_k} -\omega_0\tau)[t_{k+1},t_k]\rangle_0}. \end{eqnarray}
The second line is valid if after each measurement the probe is prepared in the best state $\hat S[\mu(t_{k+1},t_k)]|0\rangle$. This is what we call the sequential preparation-and-measurement scenario. Once we choose $\tau$ we can evaluate (at least numerically) $\phi^D_{t_{k+1},t_k}$ and prepare the state $\hat S[\mu(t_{k+1},t_k)]|0\rangle$. The challenge is to prepare it in a time interval much smaller than the evolution timescale of the harmonic oscillator, so that the time needed for the state preparation can be neglected. With this assumption the variances $\langle \Delta^2 P(\phi^D_{t_{k+1},t_k} -\omega_0\tau)[t_{k+1},t_k,F]\rangle$ become $k$-independent (see Appendix \ref{seqmeasurement}) and can be taken out of the sum in Eq. \eqref{fseq}.\\
To continue the analytic analysis we assume that we are looking for a small optimal time interval $\tau_{\mathrm{opt}}$. Assuming that in Eq. \eqref{fseq} $\tau$ is much smaller than $\omega_0^{-1}$, $\Omega_p^{-1}$, and $|\chi_q|^{-1}$, for all $p\geq2$ and $q\geq1$ (see Appendix \ref{considtimescale}), we can legitimately derive a small time expansion (shown in Appendix \ref{seqmeasurement}). From it we deduce
\begin{eqnarray}\label{optimaltime} \tau_{\mathrm{opt}}=\frac{1}{2\sqrt{3{\cal N}}}{\cal E}^{-1/2} + {\cal O}({\cal E}^{-3/2}),
\end{eqnarray} with ${\cal E}:=\left(E+\sqrt{E^2-\frac{1}{4}}\right)$ and ${\cal N} := \sum_n|K_n|^2(N_n+1/2)$. The second order term is detailed in Appendix \ref{seqmeasurement}. This result illustrates the close relation between the optimal time and the initial energy $E$: when $E$ goes to infinity, the initial fluctuation of the probe state (for the chosen quadrature) goes to zero and $\tau_{\mathrm{opt}}$ tends to zero. One can derive a sufficient condition on the initial energy $E$ which guarantees that $\tau_{\rm opt} \leq \omega_0^{-1}$, $\Omega_p^{-1}$, and $|\chi_q|^{-1}$, for all $p\geq2$ and $q\geq1$ (see Appendix \ref{considtimescale}). Importantly for experimental implementation, the leading term of Eq. \eqref{optimaltime} depends neither on the total available sensing window $[t_0,t_0+T]$ nor on the force time modulation $\zeta(t)$. For a Markovian bath, $\tau_{\mathrm{opt}}$ is instead proportional to ${\cal E}^{-1/3}$ \cite{pra}.\\
The corresponding total QFI is \begin{eqnarray} &&{\cal F}_Q^{\mathrm{Seq}}(T,\tau_{opt})= \frac{\sqrt{3}\xi(T,t_0)}{2\sqrt{\cal N}}{\cal E}^{1/2} + {\cal O}({\cal E}^{-1/2}),
\end{eqnarray} with $\xi(T,t_0):=\int_{t_0}^{t_0+T}dt\zeta^2(t)$. The second order term is also detailed in Appendix \ref{seqmeasurement}. The total quantum Fisher information scales as $E^{1/2}$, and as $E_{\mathrm{tot}}^{1/3}$ in terms of the total energy $E_{\mathrm{tot}}:=\nu_{\mathrm{opt}}E = TE/\tau_{\mathrm{opt}}$. This is valid when the initial energy invested in the squeezing of the probe is sufficient (see Eq. \eqref{optimaltime}) that the optimal time becomes smaller than $\Omega_p^{-1}$ and $|\chi_q|^{-1}$, for all $p\geq2$ and $q\geq1$, which correspond to the timescales of the bath correlation function (Appendix \ref{bathcorrelation}). If these conditions are not fulfilled, the above results are no longer valid. Instead, when $\tau$ is bigger than the bath correlation timescale, the short time expansion of Eq. \eqref{fseq} changes (Appendix \ref{nmvsm}) and the total QFI becomes bounded by a constant (Appendix \ref{seqmeasurement}), as for Markovian dynamics \cite{pra}.
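The scaling just stated can be checked with a toy numerical fit (all prefactors below are placeholders; only the exponents matter): with $\tau_{\mathrm{opt}}\propto E^{-1/2}$ and a per-window-optimized total QFI $\propto E^{1/2}$, the total energy $E_{\mathrm{tot}}=(T/\tau_{\mathrm{opt}})E$ scales as $E^{3/2}$, so the QFI grows as $E_{\mathrm{tot}}^{1/3}$:

```python
import numpy as np

# Illustrative scaling check (arbitrary prefactors, T = 1): tau_opt ~ E^(-1/2)
# and F_Q^Seq ~ E^(1/2) imply E_tot = (T/tau_opt)*E ~ E^(3/2),
# hence F_Q^Seq ~ E_tot^(1/3).
E = np.array([1e2, 1e3, 1e4, 1e5])
tau_opt = 0.5 * E**-0.5          # leading term, constants absorbed
fq = 2.0 * E**0.5                # leading term of the total QFI
e_tot = (1.0 / tau_opt) * E      # nu = T/tau_opt repetitions, energy E each

# fitted log-log slope of F_Q versus E_tot: expect 1/3
slope = np.polyfit(np.log(e_tot), np.log(fq), 1)[0]
print(round(slope, 3))           # → 0.333
```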
Only one previous work exhibits similar tendencies for non-Markovian noise effects in a harmonic oscillator probe \cite{euroj}. Here, we show that these results are general properties which depend neither on the force time modulation, nor on the bath spectrum, coupling coefficients, or coupling strength. In \cite{dd} the authors reach similar conclusions for frequency estimation with qubits, but treat a special kind of noise which commutes with the parameter encoding transformation. \\
We show in Appendix \ref{seqmeasurement} that contrary to the single measurement procedure the sequential preparation-and-measurement scenario does not generate loss of information due to an experimental time window $[t_0,t_0+T]$ potentially larger than the one during which the force is actually non-null.
\section{Conclusion} We have shown the striking difference between Markovian and non-Markovian noise from the perspective of quantum metrology, stemming from the ability to perform measurements within the correlation time of the bath, which activates non-Markovian effects. Our results suggest that we can consider at least three situations. Firstly, when measurements are performed within the correlation time of the bath, leading to the super-classical scaling of the QFI. Secondly, when measurements are performed on a time scale between the bath correlation time and the time scale $\gamma^{-1}$ emerging from the Markovian approximation, leading to the bounded increase of the QFI discussed in \cite{pra}. Finally, when measurements are performed on a time scale bigger than $\gamma^{-1}$, leading to no significant increase with respect to the one-measurement strategy, and carrying the whole burden of the noise. The transition between super-classical scaling and bounded QFI should occur continuously, and should happen when the measurement timescale is comparable with the bath correlation timescale. In this intermediate situation we cannot conclude about the behavior of the QFI. \\ One should keep in mind that the ability to perform measurements at time scales shorter than the bath correlation timescale is not enough; it is also necessary that the energy invested in the initial squeezing of the probe state be big enough so that the optimal time interval $\tau_{\mathrm{opt}}$ goes below the bath correlation timescale. \\
In \cite{screport,zeno} the authors show an anticorrelation between the quantum Zeno effect and metrological improvement. This is because the estimated parameter depends on the internal dynamics (free evolution of the probe), so that when the evolution is frozen by the quantum Zeno effect, the information about the parameter encoded into the probe is also frozen. In our problem the action of the external force is not affected by the measurements or by the quantum Zeno effect, so that the information flow about $F$ from outside into the probe is not frozen.\\ Our method using the global unitary evolution proved efficient for calculating the main dynamical quantities, and even the QFI for initial Gaussian states, with no other assumptions. The whole difficulty of the exact evolution is concentrated in the bath response function $G(t,t_0)$. It seems that $G(t,t_0)$ captures by itself the nature of the noise, being monotonic for Markovian noise and non-monotonic in the resonant narrow band limit; the real behavior of $G(t,t_0)$ lies in between. The short time behavior of $G(t,t_0)$ is also related to the quantum Zeno effect. This relationship will be investigated in a future work. \\ We can treat in the same way the probe-bath coupling without the rotating wave approximation, $(a^{\dag} + a)(B^{\dag}+B)$. The expression of the QFI is also \eqref{firstqfi} but with a more complex function $G(t,u)$. Squeezing effects from the bath appear, making the identification of the best state and best measurement more difficult. \\ An interesting perspective is to adapt these results to waveform estimation \cite{tsang1,tsang2}.
\begin{widetext} \appendix \numberwithin{equation}{section}
\section{Operator evolution, first and second moment of probe quadrature operator}\label{evolution} We first split the operator evolution in a free evolution term and interaction picture operator. Then, we take the probe-bath interaction term apart using the formula \begin{eqnarray} &&\exp{\left\{{\cal T}\int_{t_0}^t du [A(u) + B(u)]\right\}} = \exp{\left\{{\cal T}\int_{t_0}^t du \left[e^{{\cal T}\int_{u}^t dsA(s)}B(u)e^{-{\cal A}\int_{u}^t dsA(s)}\right]\right\}} \exp{\left\{{\cal T}\int_{t_0}^t du A(u)\right\}}, \end{eqnarray} where ${\cal T}$ is the time ordering operator and ${\cal A}$ is the anti-chronological ordering operator, applying it with $A(u)$ being the probe-bath coupling term and $B(u)$ the force coupling term. It yields \begin{eqnarray} U(t,t_0,F) &=& e^{-iH_0(t-t_0)/\hbar}\exp{\left\{iF\frac{\omega_0}{\sqrt{2}}{\cal T}\int_{t_0}^t du \zeta(u) \left[ e^{i\omega_0(u-t_0)}{\cal B}(t,u)a^{\dag} {\cal B}^{\dag}(t,u) + c.c.\right]\right\}}{\cal B}(t,t_0), \label{uintermed} \end{eqnarray} where $H_0/\hbar = \omega_0 a^{\dag}a + \sum_n \omega_n b_n^{\dag} b_n$, ${\cal B}(t,u) = \exp{{\cal T}\int_u^tds \left[a_0(s)B_0^{\dag}(s) -a_0^{\dag}(s)B_0(s)\right]}$, $a_0(s):=e^{-i\omega_0(s-t_0)}a$ and $B_0(s):=\sum_n K_ne^{-i\omega_n(s-t_0)}b_n$. Then from the Baker-Hausdorff formula one gets \begin{equation}\label{btransfo} {\cal B}(t,u)a^{\dag}{\cal B}^{\dag}(t,u) = a^{\dag}G(t,u) + \int_{u}^tds ~e^{-i\omega_0(s-t_0)}G(s,u)B_0^{\dag}(s), \end{equation} with \begin{eqnarray}\label{gsm}
&&G(t,u):= 1+ \sum_{k=1}^{\infty}(-1)^k\int_{u}^tds_1\int_{u}^{s_1}ds_2...\int_{u}^{s_{2k-1}}ds_{2k}e^{i\omega_0(s_1-s_2+...+s_{2k-1}-s_{2k})}[B_0(s_1),B_0^{\dag}(s_2)]...[B_0(s_{2k-1}),B_0^{\dag}(s_{2k})]\nonumber\\
&&= 1+\sum_{k=1}^{\infty}(-1)^k\sum_{n_1,...,n_k}|K_{n_1}|^2...|K_{n_k}|^2\int_{u}^tds_1\int_{u}^{s_1}ds_2...\int_{u}^{s_{2k-1}}ds_{2k} e^{i(\omega_0-\omega_{n_1})(s_1-s_2)}...e^{i(\omega_0-\omega_{n_k})(s_{2k-1}-s_{2k})}. \end{eqnarray}
One can show that in fact $G(t,u)$ is just a function of $t-u$, $G(t,u)=G(t-u)$. At the beginning of Appendix \ref{shorttime} we derive an integro-differential equation satisfied by $G$, which can be solved by Laplace transform; in the present problem, however, we only need the short time expansion of $G$.\\
Inserting \eqref{btransfo} in \eqref{uintermed}, taking apart the probe terms from the bath terms and calculating the time ordered integral, one obtains, up to an irrelevant phase factor, \begin{eqnarray}\label{evolopapp}
U(t,t_0,F) = &e^{-i\frac{t-t_0}{\hbar}H_0}&e^{iF|D(t,t_0)|X(\phi^D_{t,t_0})} R(t,t_0){\cal B}(t,t_0), \end{eqnarray} where $D(t,t_0) = \omega_0\int_{t_0}^tdu \zeta(u) e^{i\omega_0(u-t_0)}G(t,u) $, $\phi^D_{t,t_0}=\arg{D(t,t_0)}$, $X(\phi^D_{t,t_0})=\frac{1}{\sqrt{2}}(a^{\dag}e^{i\phi^D_{t,t_0}} + a e^{-i\phi^D_{t,t_0}})$, and \begin{equation} R(t,t_0)=e^{iF\frac{\omega_0}{\sqrt{2}}{\cal T}\int_{t_0}^t du \int_u^t ds\zeta(u) \left[e^{i\omega_0(u-s)}G(s,u)B_0^{\dag}(s) + c.c.\right]}. \end{equation}
The time ordered integral can be also calculated in $R(t,t_0)$ and leads to the following expression:
\begin{equation} R(t,t_0) = \Pi_n e^{i F |D_n(t,t_0)| X_n(\phi^{D_n}_{t,t_0})} \end{equation} with
\begin{equation} D_n(t,t_0) = \omega_0 K_n^{*}\int_{t_0}^t du~\zeta(u)e^{i\omega_0(u-t_0)}\int_u^t ds~e^{i(\omega_n-\omega_0)(s-t_0)}G(s,u), \end{equation}
$\phi^{D_n}_{t,t_0} = \arg{D_n(t,t_0)}$, and $X_n(\phi^{D_n}_{t,t_0}) = \left(b_n^{\dag}e^{i\phi^{D_n}_{t,t_0}} +b_n e^{-i\phi^{D_n}_{t,t_0}}\right)/\sqrt{2}$.\\
From \eqref{evolopapp} the exact expression in the Heisenberg picture of the probe quadrature $X(\theta)=\frac{1}{\sqrt{2}}(a^{\dag}e^{i\theta}+ae^{-i\theta})$ can be easily derived: \begin{eqnarray} X(\theta)[t,t_0,F] &=& U_{SB}^{\dag}(t,t_0,F)X(\theta)U_{SB}(t,t_0,F)\nonumber\\ &=& \frac{1}{\sqrt{2}} {\cal B}^{\dag}(t,t_0)\Bigg\{e^{i\theta}e^{i\omega_0(t-t_0)}\Big[a^{\dag}-i\frac{F}{\sqrt{2}}D^{*}(t,t_0)\Big] + e^{-i\theta}e^{-i\omega_0 (t-t_0)}\Big[a+i\frac{F}{\sqrt{2}}D(t,t_0)\Big]\Bigg\}{\cal B}(t,t_0)\nonumber\\ &=& \frac{1}{\sqrt{2}} \Bigg\{e^{i\theta}e^{i\omega_0(t-t_0)}\Big[a^{\dag} G^{*}(t,t_0) -\int_{t_0}^tds e^{-i\omega_0(s-t_0)}G^{*}(t,s)B_0^{\dag}(s)-i\frac{F}{\sqrt{2}}D^{*}(t,t_0)\Big] +c.c.\Bigg\},\label{xquad} \end{eqnarray}
For the mean value of $X(\theta)[t,t_0,F]$, defined as $\langle X(\theta)[t,t_0,F]\rangle = {\rm Tr}_{SB}\{\rho_{SB}^0X(\theta)[t,t_0,F]\}$, we find \begin{eqnarray}
\langle X(\theta)[t,t_0,F]\rangle &=& \frac{e^{i\theta}e^{i\omega_0(t-t_0)}}{\sqrt{2}}G^{*}(t,t_0) \langle a^{\dag}\rangle_0 +\frac{e^{-i\theta}e^{-i\omega_0(t-t_0)}}{\sqrt{2}} G(t,t_0) \langle a\rangle_0 +F|D(t,t_0)|\sin[\theta +\omega_0(t-t_0)-\phi^D_{t,t_0}] \nonumber\\
&=& |G(t,t_0)| \big\langle X[\theta+\omega_0(t-t_0) -\phi_{t,t_0}^G]\big\rangle_0 +F|D(t,t_0)|\sin[\theta +\omega_0(t-t_0)-\phi^D_{t,t_0}] , \end{eqnarray} where the subscript 0 means that the expectation value is taken for the state $\rho_{SB}^0$. We assume also that ${\rm Tr}_{SB}\{\rho_B^0 B_0(s)\} = {\rm Tr}_{SB}\{\rho_B^0 B_0^{\dag}(s)\} =0$.\\
In order to obtain a simple expression for the variance we make the assumption that the probe and the bath are initially uncorrelated so that $\rho_{SB}^0 = \rho_S^0\rho_B^0$. The variance is \begin{eqnarray} \langle \Delta^2 X(\theta)[t,t_0]\rangle &=& \Bigg\langle \left[\Delta\left(\frac{1}{\sqrt{2}}e^{i\theta + i\omega_0(t-t_0)} G^{*}(t,t_0)a^{\dag} + c.c.\right)\right]^2\Bigg\rangle_0 \nonumber\\
&+& \int_{t_0}^tds\int^t_{t_0}ds' G^{*}(t,s') G(t,s)\sum_n |K_n|^2 e^{i(\omega_0-\omega_n)(s-s')}\left(N_n + \frac{1}{2}\right)\nonumber\\
&=& |G(t,t_0)|^2\Big\langle \Delta^2 X\big[\theta +\omega_0(t-t_0)-\phi^G_{t,t_0}\big]\Big\rangle_0 +\sum_n |K_n|^2\left(N_n + \frac{1}{2}\right) \Bigg|\int_{t_0}^tds G(t,s) e^{i(\omega_0-\omega_n)s}\Bigg|^2\nonumber\\\label{genvariance} \end{eqnarray} with $N_n := \langle b_n^{\dag}b_n\rangle$ and $\phi^G_{t,t_0}:=\arg G(t,t_0)$. Since the second term in \eqref{genvariance} does not depend on initial conditions, the variance $\langle \Delta^2 X(\theta)[t,t_0]\rangle $ is maximal whenever $\Big\langle \Delta^2 X\big[\theta +\omega_0(t-t_0)-\phi^G_{t,t_0}\big]\Big\rangle_0 $ is maximal. This gives the relation between $\theta_m^t$ the angle of the maximal quadrature variance at time $t$, and $\theta_m^0$ the angle of the maximal quadrature variance at time $t_0$: \begin{equation} \theta_m^t = \theta_m^0 + \phi_{t,t_0}^G -\omega_0(t-t_0). \end{equation}
The variance of $P(\phi^D_{t,t_0}-\omega_0(t-t_0))[t,t_0]$ is obtained by setting $\theta=\phi^D_{t,t_0}-\omega_0(t-t_0) + \pi/2$ in \eqref{genvariance}: \begin{equation}\label{variancepbest}
\big\langle \Delta^2 P(\phi^D_{t,t_0}-\omega_0(t-t_0))[t,t_0]\big\rangle = |G(t,t_0)|^2\big\langle \Delta^2 P\big[\phi^D_{t,t_0}-\phi^G_{t,t_0}\big]\big\rangle_0 +\sum_n |K_n|^2\left(N_n + \frac{1}{2}\right) \Bigg|\int_{t_0}^tds G(t,s) e^{i(\omega_0-\omega_n)s}\Bigg|^2,\nonumber\\
\end{equation} and is minimized when $\big\langle \Delta^2 P\big[\phi^D_{t,t_0}-\phi^G_{t,t_0}\big]\big\rangle_0$ is minimized. According to \cite{pra}, the state that realizes this minimization, given an initial mean energy $E$, is a pure squeezed state with the squeezed quadrature being $P\big[\phi^D_{t,t_0}-\phi^G_{t,t_0}\big]$. It can be written as $\hat S[\mu(t,t_0)]|0\rangle$ where $|0\rangle$ is the ground state of the harmonic oscillator, $\hat S[\mu(t,t_0)] = \exp{\left(\frac{\mu(t,t_0)}{2} a^{\dag 2} - \frac{\mu^{*}(t,t_0)}{2} a^{2}\right)}$ with $\mu(t,t_0)=re^{2i(\phi^D_{t,t_0}-\phi^G_{t,t_0})}$ and $r=\frac{1}{2}\ln{[2(E+\sqrt{E^2-1/4})]}$. The corresponding minimal variance is: \begin{equation}
\langle 0|\hat S^{\dag}[\mu(t,t_0)] \left[\Delta P(\phi^D_{t,t_0}-\phi^G_{t,t_0})\right]^2\hat S[\mu(t,t_0)]|0\rangle =\frac{1}{4} \left(E+\sqrt{E^2-1/4}\right)^{-1} , \end{equation} and \begin{equation}
\langle 0|\hat S^{\dag}[\mu(t,t_0)] \left[\Delta X(\phi^D_{t,t_0}-\phi^G_{t,t_0})\right]^2\hat S[\mu(t,t_0)]|0\rangle = \left(E+\sqrt{E^2-1/4}\right) , \end{equation} is maximal.\\
The determinant of the covariance matrix ${\rm Det}{\bf \Sigma}(t,t_0)$ can be written for any quadrature $X(\theta)[t,t_0]$ and $P(\theta)[t,t_0]$ as: \begin{equation} {\rm Det}{\bf \Sigma}(t,t_0) = \langle \{\Delta X(\theta)[t,t_0]\}^2\rangle_0 \langle\{\Delta P(\theta)[t,t_0]\}^2\rangle_0 - \frac{1}{4}\Big\langle \Delta X(\theta)[t,t_0]\Delta P(\theta)[t,t_0] +\Delta P(\theta)[t,t_0]\Delta X(\theta)[t,t_0]\Big\rangle_0^2. \end{equation}
Using \eqref{xquad}, \eqref{genvariance} and conjugated expressions one finds \begin{equation}
{\rm Det}{\bf \Sigma}(t,t_0) = |G(t,t_0)|^4{\rm Det}{\bf \Sigma}^0 + |G(t,t_0)|^2\langle \Delta a\Delta a^{\dag} +\Delta a^{\dag} \Delta a\rangle_0 n_B(t,t_0) + n_B^2(t,t_0),
\end{equation} where ${\bf \Sigma}^0$ is the initial covariance matrix of $S$, $\Delta a := a- \langle a\rangle_0$, and the noise term from the bath is noted $n_B(t,t_0) : = \sum_n |K_n|^2\left(N_n + \frac{1}{2}\right) \Big|\int_{t_0}^tds G(t,s) e^{i(\omega_0-\omega_n)s}\Big|^2$. As expected ${\rm Det}{\bf \Sigma}(t,t_0)$ does not depend on $\theta$ and is minimal when $S$ is initialized in a pure state. \\
\section{Narrow band limit, broad band limit and Markovian dynamics}\label{bnband} The narrow band limit is obtained, for instance, by retaining only one mode of the bath and taking the coupling coefficients of the other modes to zero. One can also convert the discrete bath to a continuum of modes and take the mode distribution to be a delta function. In either case we are left with a single mode of frequency $\omega$ and coupling coefficient $K_0$. The function $G(t,t_0)$ simplifies to: \begin{eqnarray}
&&G(t,t_0)= 1+\sum_{k=1}^{\infty}(-1)^k|K_0|^{2k}\int_{t_0}^tds_1\int_{t_0}^{s_1}ds_2...\int_{t_0}^{s_{2k-1}}ds_{2k} e^{i(\omega_0-\omega)(s_1-s_2+...+s_{2k-1}-s_{2k})}, \end{eqnarray} and if we assume also that the bath mode is resonant with the probe, \begin{eqnarray}
G(t,t_0)&=& 1+\sum_{k=1}^{\infty}(-1)^k|K_0|^{2k}\int_{t_0}^tds_1\int_{t_0}^{s_1}ds_2...\int_{t_0}^{s_{2k-1}}ds_{2k} 1\nonumber\\
&=& 1+\sum_{k=1}^{\infty}(-1)^k|K_0|^{2k}\frac{(t-t_0)^{2k}}{(2k)!}\nonumber\\
&=&\cos{|K_0|(t-t_0)}. \end{eqnarray}
We recover a sinusoidal behavior, meaning that the information is just going to the bath and coming back entirely to the probe periodically. \\
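As a quick numerical sanity check (with arbitrary values of $|K_0|$ and $t-t_0$), the truncated series above indeed sums to the cosine:

```python
from math import cos, factorial

# Check the resonant narrow band result: the series
# G = 1 + sum_{k>=1} (-1)^k |K_0|^(2k) (t-t_0)^(2k) / (2k)!
# sums to cos[|K_0| (t-t_0)]; K0 and dt are arbitrary test values.
K0, dt = 0.8, 2.5
G = 1.0 + sum((-1) ** k * (K0 * dt) ** (2 * k) / factorial(2 * k)
              for k in range(1, 40))
print(G, cos(K0 * dt))  # the two agree
```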
We take now the broad band limit. As we will see one needs more assumptions to recover the Markovian dynamics described in \cite{pra}. \\
Firstly the bath modes are taken to form a continuum, \begin{equation}
\sum_n |K_n|^2 \rightarrow \int_0^{\omega_c} d\omega g(\omega)|K(\omega)|^2 , \end{equation} where $g(\omega)$ is the bath mode distribution and $\omega_c$ is a cut-off (for instance one can consider that the experiment takes place in a limited volume). We now assume that $g(\omega)$ and $K(\omega)$ are mode independent. We have then \begin{eqnarray}
[B_0(s_{2k-1}),B_0^{\dag}(s_{2k})] &=& \sum_n |K_n|^2 e^{-i\omega_n(s_{2k-1}-s_{2k})}\nonumber\\
&=& \int_0^{\omega_c}d\omega g(\omega)|K(\omega)|^2e^{-i\omega(s_{2k-1}-s_{2k})}\nonumber\\
&=& g|K|^2 \int_0^{\omega_c}d\omega e^{-i\omega(s_{2k-1}-s_{2k})}\nonumber\\
&=&g|K|^2 \left[\pi \delta(s_{2k-1}-s_{2k}) -i {\cal P}\frac{1}{s_{2k-1}-s_{2k}}\right],\label{pb}
\end{eqnarray} with the third line corresponding to mode-independent coupling coefficients and spectral density, and the fourth line to the limit of $\omega_c$ going to infinity. The Markovian dynamics corresponds to the real part of \eqref{pb}. The imaginary part generates non-convergent terms when integrating $s_{2k}$ from $u$ to $s_{2k-1}$. So one should keep the cut-off $\omega_c$ finite, and face the subsequent integrations. To recover the Markovian dynamics one needs one more assumption in order to discard the imaginary part. This assumption is equivalent to the rotating wave approximation. Instead of integrating the modes from $0$ to $\omega_c$ we integrate from $-\omega_c$ to $\omega_c$ with $g(-\omega)=g(\omega)$ and $|K(-\omega)|=|K(\omega)|$. Then the imaginary part of \eqref{pb} cancels out and we end up with $[B_0(s_{2k-1}),B_0^{\dag}(s_{2k})] =g|K|^2 \pi \delta(s_{2k-1}-s_{2k})$. The integration from $-\omega_c$ to $\omega_c$ can be justified through the rotating wave approximation: the negative frequencies are far from resonance and thus contribute very little as soon as $t-t_0 \gg (\omega_0 -\omega)^{-1}$ (with $\omega \in [-\omega_c;0]$) \cite{gardiner}. We conclude that we can legitimately discard the imaginary part of \eqref{pb} as soon as we are interested in times bigger than $\omega_0^{-1}$. \\
We show now that substituting only the real part of \eqref{pb} in the expression of $G(t,t_0)$ we recover the Markovian behavior \cite{pra}:
\begin{eqnarray} G(t,t_0) &=& 1 +\sum_{k=1}^{\infty}(-1)^k \frac{\pi^k}{2^k}g^k|K|^{2k}\int_{t_0}^t ds_1\int_{t_0}^{s_1}ds_3...\int_{t_0}^{s_{2k-3}}ds_{2k-1}\nonumber\\
&=& 1+\sum_{k=1}^{\infty}\frac{(-\pi g|K|^2/2)^k}{k!}(t-t_0)^k\nonumber\\ &=& e^{-\gamma(t-t_0)/2},
\end{eqnarray} where $\gamma = \pi g |K|^2$. Applying this result to the variance of $P(\theta)$ one finds \begin{eqnarray}
\langle \Delta^2 P(\theta)[t,t_0]\rangle &=& e^{-\gamma(t-t_0)}\Bigg\langle \left[\Delta\left(\frac{i}{\sqrt{2}}e^{i\theta + i\omega_0(t-t_0)} a^{\dag} - c.c.\right)\right]^2\Bigg\rangle_0 +\sum_n |K_n|^2\left(N_n + \frac{1}{2}\right) \Bigg|\int_{t_0}^tds e^{-\gamma(t-s)/2} e^{i(\omega_0-\omega_n)s}\Bigg|^2\nonumber\\ &=& e^{-\gamma(t-t_0)}\Bigg\langle \left[\Delta\left(\frac{i}{\sqrt{2}}e^{i\theta + i\omega_0(t-t_0)} a^{\dag} - c.c.\right)\right]^2\Bigg\rangle_0 \nonumber\\
&&+\int_0^{\infty}d\omega g(\omega)|K(\omega)|^2\left(N(\omega) + \frac{1}{2}\right) \int_{t_0}^tds \int_{t_0}^t ds'e^{-\gamma(2t-s-s')/2} e^{i(\omega_0-\omega)(s-s')}\nonumber\\
&=& e^{-\gamma(t-t_0)}\big\langle \left\{\Delta P[\theta +\omega_0(t-t_0)]\right\}^2\big\rangle_0 +g|K|^2\left(N + \frac{1}{2}\right) \int_{t_0}^tds \int_{t_0}^t ds'e^{-\gamma(2t-s-s')/2} \pi\delta(s-s')\nonumber\\ &=& e^{-\gamma(t-t_0)}\big\langle \left\{\Delta P[\theta +\omega_0(t-t_0)]\right\}^2\big\rangle_0 +\left(N + \frac{1}{2}\right) \left(1-e^{-\gamma(t-t_0)}\right), \end{eqnarray} where we also assume that the mean excitation number $N(\omega)$ is mode-independent, $N(\omega)=N$. \\
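The exponential form of $G$ obtained above in the broad band limit can be checked numerically in the same way as the narrow band cosine (arbitrary test values of $\gamma$ and $t-t_0$):

```python
from math import exp, factorial

# Check the broad band result: after the delta-function collapse, the k-th
# nested integral contributes (t-t_0)^k / k!, so
# G = sum_k (-gamma (t-t_0)/2)^k / k! = exp(-gamma (t-t_0)/2).
gamma, dt = 1.3, 2.0
G = sum((-gamma * dt / 2) ** k / factorial(k) for k in range(40))
print(G, exp(-gamma * dt / 2))  # the two agree
```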
The Fisher information from the best quadrature measurement becomes \begin{equation}
{\cal F}_{P(\phi^D_{t,t_0} -\omega_0(t-t_0))}(t,t_0) = \frac{\omega_0^2\big|\int_{t_0}^tdu \zeta(u) e^{i\omega_0(u-t_0)}e^{-\gamma(t-u)/2}\big|^2}{\langle \Delta^2 P(\phi^D_{t,t_0} -\omega_0(t-t_0))[t,t_0]\rangle}=\frac{\omega_0^2\big|\int_{t_0}^tdu \zeta(u) e^{i\omega_0(u-t_0)}e^{-\gamma(t-u)/2}\big|^2}{e^{-\gamma(t-t_0)}\big\langle \left\{\Delta P(\phi^D_{t,t_0})\right\}^2\big\rangle_0 +\left(N + \frac{1}{2}\right) \left(1-e^{-\gamma(t-t_0)}\right)},\label{markovqfi} \end{equation} which is precisely the expression found in \cite{pra} for a Markovian dynamics.\\
In conclusion, the Markovian approximation involves more than just the broad band limit: it also requires the mode-independence of the coupling coefficients and of the spectrum, as well as the rotating wave approximation. The latter is valid as soon as the times of interest $t-t_0$ are much larger than $\omega_0^{-1}$. This contributes to the fact that the Markovian approximation generally fails to describe the real dynamics at short times.
\section{Quadrature measurement}\label{quadmeas}
We consider the measurement of the quadrature $P(\theta) = \frac{i}{\sqrt{2}}(e^{i\theta}a^{\dag} - e^{-i\theta}a)$. The output is $p$ and the probe is projected on the eigenstate $|p,\theta\rangle$ such that $P(\theta) |p,\theta\rangle =p |p,\theta\rangle$. One POVM corresponding to such an ideal projective measurement is $\{|p,\theta\rangle\langle p,\theta|\}_{p \in {\mathcal R}}$. The output conditional distribution is ${\cal P}(p,\theta|F) = {\rm Tr}_S\{\rho_S(t,t_0,F)|p,\theta\rangle \langle p,\theta|\}$, where $\rho_S(t,t_0,F)={\rm Tr}_B\{U(t,t_0,F)\rho_{SB}^0U^{\dag}(t,t_0,F)\}$ is the state of the probe at the instant $t$, after interacting with the force of amplitude $F$ and with the bath since the instant $t_0$. The eigenstates of $P(\theta)$ cannot be normalized, so the direct calculation of ${\cal P}(p,\theta|F)$ is not straightforward. However, the global Hamiltonian is quadratic in the probe and bath operators, so the global evolution is a Gaussian operation. Hence, if the initial state $\rho_{SB}^0$ is Gaussian, the final state is Gaussian too. The reduced state of $S$ is also Gaussian (partial tracing is a Gaussian operation), and it is characterized solely by the average $\langle P(\theta)[t,t_0,F]\rangle := {\rm Tr}_S\{\rho_S(t,t_0,F)P(\theta)\}$ and the variance $\langle \Delta^2 P(\theta)[t,t_0]\rangle := {\rm Tr}_S\{\rho_S(t,t_0,F)[P(\theta)-\langle P(\theta)[t,t_0,F]\rangle]^2\}$:
\begin{eqnarray}
{\cal P}(p,\theta|F) &=& \frac{1}{\sqrt{2\pi \big\langle \Delta^2 P(\theta)[t,t_0]\big\rangle}}\exp{\Big\{-\frac{1}{2\big\langle \Delta^2 P(\theta)[t,t_0]\big\rangle}\big\{p - \big\langle P(\theta)[t,t_0,F]\big\rangle\big\}^2\Big\}}\nonumber\\ &=& \frac{1}{\sqrt{2\pi \big\langle \Delta^2 P(\theta)[t,t_0]\big\rangle}}\nonumber\\
&&\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\! \times\exp{\Bigg\{-\frac{\Big\{p -\frac{ie^{i\theta}}{\sqrt{2}} e^{i\omega_0(t-t_0)}G^{*}(t,t_0) \langle a^{\dag}\rangle_0 +\frac{ie^{-i\theta}}{\sqrt{2}}e^{-i\omega_0(t-t_0)}G(t,t_0) \langle a\rangle_0 -F|D(t,t_0)|\cos[\theta +\omega_0(t-t_0)-\phi^D_{t,t_0}] \Big\}^2}{2\big\langle \Delta^2 P(\theta)[t,t_0]\big\rangle}\Bigg\}}.\nonumber\\ \end{eqnarray} The derivation of the expression of $\langle P(\theta)[t,t_0,F]\rangle$ is shown in Appendix \ref{evolution}.\\
When the parameter to estimate is described by a Gaussian distribution, the Fisher information is equal to the inverse of its variance. We are in such a situation here, and the variance of the distribution ${\cal P}(p,\theta|F)$, seen as a distribution of the parameter $F$, is $\langle \Delta^2 P(\theta)[t,t_0]\rangle\left\{|D(t,t_0)|^2\cos^2\left[\theta +\omega_0(t-t_0)-\phi^D_{t,t_0}\right]\right\}^{-1}$.\\
So the Fisher information corresponding to the measurement of $P(\theta)$ is \begin{equation}
{\cal F}_{P(\theta)}(t,t_0) = \frac{|D(t,t_0)|^2\cos^2[\theta +\omega_0(t-t_0)-\phi^D_{t,t_0}]}{\langle \Delta^2 P(\theta)[t,t_0]\rangle}. \end{equation}
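As a quick numerical sanity check of the Gaussian Fisher-information rule used here (for a Gaussian likelihood whose mean is linear in the parameter $F$ and whose variance is $F$-independent, the Fisher information is $(\partial_F \mathrm{mean})^2/\mathrm{variance}$), the following Python sketch evaluates ${\cal F}$ by quadrature; the numbers $m$, $c$, $\sigma$ are purely illustrative assumptions, not quantities from the text.

```python
import numpy as np

# Gaussian likelihood p(x|F) = N(m*F + c, sigma^2); analytic Fisher info m^2/sigma^2.
m, c, sigma, F = 1.7, 0.3, 0.8, 0.5     # illustrative values (assumptions)
x = np.linspace(-20.0, 20.0, 400001)
dx = x[1] - x[0]

def pdf(x, F):
    mu = m * F + c
    return np.exp(-(x - mu)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

dF = 1e-5                                # finite-difference step for the score
score = (np.log(pdf(x, F + dF)) - np.log(pdf(x, F - dF))) / (2 * dF)
fisher_num = np.sum(score**2 * pdf(x, F)) * dx   # E[(d/dF log p)^2]
fisher_ana = m**2 / sigma**2
print(fisher_num, fisher_ana)
```

The same identification, with mean $\propto |D(t,t_0)|\cos[\theta+\omega_0(t-t_0)-\phi^D_{t,t_0}]\,F$ and variance $\langle \Delta^2 P(\theta)[t,t_0]\rangle$, yields the expression above.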
One can easily see that the best quadrature measurement is for the angle $\theta = \phi^D_{t,t_0} -\omega_0(t-t_0)$ such that $\cos^2[\theta +\omega_0(t-t_0)-\phi^D_{t,t_0}] =1$ so that the Fisher information from the best quadrature measurement is \begin{eqnarray}\label{fibestmeasurement}
{\cal F}_{P(\phi^D_{t,t_0} -\omega_0(t-t_0))}(t,t_0) =\frac{|D(t,t_0)|^2}{\big\langle \Delta^2 P(\phi^D_{t,t_0} -\omega_0(t-t_0))[t,t_0]\big\rangle} \end{eqnarray} and it is exactly the expression of the quantum Fisher information in Eq. (6) of the main text.
\section{Short time behavior}\label{shorttime} \subsection{Defining the short time regime}\label{considtimescale} One can show that the function $G(t,t_0)$ satisfies the following integro-differential equation: \begin{equation}\label{eqdiff}
\dot{G}(t,t_0):=\frac{d}{dt}G(t,t_0) = -\int_{t_0}^tds\sum_n|K_n|^2e^{i(\omega_0-\omega_n)(t-s)}G(s,t_0). \end{equation}
One could solve this equation by Laplace transform, but here we only need the short time behavior.\\
From this relation one can derive the successive derivatives evaluated at $t=t_0$, for any integer $p\geq2$, \begin{equation}
\frac{d^p}{dt^p}G(t,t_0)_{|_{t=t_0}} =- \sum_n|K_n|^2\sum_{l=0}^{p-2}i^{p-2-l}(\omega_0-\omega_n)^{p-2-l}\frac{d^{l}}{dt^{l}}G(0), \end{equation}
with the notation $\frac{d^{l}}{dt^{l}}G(0):=\frac{d^l}{dt^l}G(t,t_0)_{|_{t=t_0}}$, which makes explicit the fact that $G(t,t_0)$ is a function of $t-t_0$ only, as mentioned in Appendix \ref{evolution}. For $p=0$ and $p=1$ we have $G(0)=1$ and $\frac{d}{dt}G(0)=0$. Consequently, the successive derivatives of $G(t,t_0)$ are sums and products of terms $\sum_n|K_n|^2i^l(\omega_0-\omega_n)^l$, with the powers of the $|K_n|$-factors and $(\omega_0-\omega_n)$-factors summing up to the order of the derivative. One can conclude that the evolution time scales of $G(t,t_0)$ are of the order $\Omega_p^{-1}$, with $\Omega_p$ defined for all $p\geq 2$ as \begin{equation}
\Omega_p:=\left|\sum_n|K_n|^2(\omega_0-\omega_n)^{p-2}\right|^{1/p}. \end{equation}
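For concreteness, the first two nontrivial derivatives produced by the recursion, using $G(0)=1$ and $\frac{d}{dt}G(0)=0$, are

```latex
\begin{align*}
\frac{d^2}{dt^2}G(0) &= -\sum_n|K_n|^2\,G(0) = -\sum_n|K_n|^2,\\
\frac{d^3}{dt^3}G(0) &= -\sum_n|K_n|^2\left[i(\omega_0-\omega_n)\,G(0)
  + \frac{d}{dt}G(0)\right] = i\sum_n|K_n|^2(\omega_n-\omega_0),
\end{align*}
```

which are precisely (up to the factorials) the second- and third-order Taylor coefficients appearing in the short time expansion below.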
This gives us a condition for the validity of the expansion of $G(t,t_0)$ around $t=t_0$: when $t-t_0$ is much smaller than $\Omega_p^{-1}$, $\forall p\geq 2$, one can retain the first terms of this expansion: \begin{equation}\label{gexp}
G(t,t_0)= 1 -\frac{{\cal K}^2}{2}(t-t_0)^2+i\frac{(t-t_0)^3}{6}\sum_n|K_n|^2(\omega_n-\omega_0) + {\cal O}\{[\Omega_4(t-t_0)]^4\},
\end{equation} where ${\cal K}^2:=\sum_n|K_n|^2$.\\
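As a hedged numerical illustration of this expansion, the following Python sketch integrates the integro-differential equation \eqref{eqdiff} for a small, asymmetric discrete bath (the couplings and frequencies are illustrative assumptions) and compares the result with the cubic expansion above; the memory integral is removed exactly by the auxiliary variables $y_n(t):=\int_{t_0}^t e^{i(\omega_0-\omega_n)(t-s)}G(s)\,ds$, which obey $\dot y_n = G + i(\omega_0-\omega_n)y_n$ while $\dot G = -\sum_n|K_n|^2 y_n$.

```python
import numpy as np

w0 = 5.0
wn = w0 + np.array([-1.0, -0.4, 0.1, 0.6, 1.2])   # assumed mode frequencies
K2 = 0.09 * np.ones_like(wn)                      # assumed couplings |K_n|^2

def rhs(state):
    # state = (G, y_1, ..., y_n): exact local reformulation of Eq. (eqdiff)
    G, y = state[0], state[1:]
    return np.concatenate(([-np.sum(K2 * y)], G + 1j * (w0 - wn) * y))

state = np.concatenate(([1.0 + 0j], np.zeros(len(wn), dtype=complex)))
dt, nsteps = 1e-4, 1000
for _ in range(nsteps):                           # fixed-step RK4
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * dt * k1)
    k3 = rhs(state + 0.5 * dt * k2)
    k4 = rhs(state + dt * k3)
    state = state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

t = nsteps * dt                                   # t - t0 = 0.1 << Omega_p^{-1}
G_num = state[0]
G_series = 1 - np.sum(K2) * t**2 / 2 + 1j * t**3 / 6 * np.sum(K2 * (wn - w0))
print(abs(G_num - G_series))                      # residual is fourth order in t - t0
```

The residual is dominated by the neglected fourth-order term, consistent with the ${\cal O}\{[\Omega_4(t-t_0)]^4\}$ estimate.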
Note that a slow time scale $\gamma^{-1}$ can also emerge at long times (Appendix \ref{longtime}) or under the Markovian approximation (Appendix \ref{bnband}).\\
One can expand $D(t,t_0)$ under the same conditions, additionally requiring $t-t_0$ to be much smaller than $\omega_0^{-1}$ and than the evolution time scale of $\zeta(t)$: \begin{equation} D(t,t_0)=\omega_0 (t-t_0)\zeta(t_0) + \omega_0\frac{(t-t_0)^2}{2}[i\omega_0\zeta(t_0)+\dot{\zeta}(t_0)] + {\cal O}[\omega_0{\cal K}^2(t-t_0)^3]. \end{equation}
Finally, in order to expand \eqref{qfipvar}, we also have to consider the expansion of $\big\langle \Delta^2 P(\phi^D_{t,t_0} -\omega_0(t-t_0))[t,t_0]\big\rangle$. From Eq. \eqref{variancepbest} we already have an expression available, but we will use the following form in order to simplify the considerations on time scales: \begin{equation}
\big\langle \Delta^2 P(\phi^D_{t,t_0} -\omega_0(t-t_0))[t,t_0]\big\rangle = |G(t,t_0)|^2\Big\langle \Delta^2 P\big[\phi^D_{t,t_0}-\phi^G_{t,t_0}\big]\Big\rangle_0 + \int_{t_0}^tds\int_{t_0}^tds' G(t,s)G^{*}(t,s')C^0(s-s'),\label{newvar} \end{equation} where
\begin{equation} C^0(s-s'):=e^{i\omega_0(s-s')}\frac{1}{2}{\rm Tr}_B\{\rho_B^0[B_0(s)B_0^{\dag}(s') + B_0^{\dag}(s')B_0(s)]\}=\sum_n|K_n|^2(N_n+1/2)e^{i(\omega_0-\omega_n)(s-s')}. \end{equation}
To retain only the first terms of the short time expansion of $\big\langle \Delta^2 P(\phi^D_{t,t_0} -\omega_0(t-t_0))[t,t_0]\big\rangle$, one has to consider times $t-t_0$ much smaller than the $\Omega_p^{-1}$, but also much smaller than the evolution time scale of $C^0(s-s')$. Note that although the phase of the quadrature variance carries an explicit time dependence, the actual value of $\Big\langle \Delta^2 P\big[\phi^D_{t,t_0}-\phi^G_{t,t_0}\big]\Big\rangle_0$ depends only on the initial probe state; we will not, however, need this property. From the short time expansions of $D(t,t_0)$ and $G(t,t_0)$ one can see that the bath contribution appears only at third order in $(t-t_0)$ for $\phi^G_{t,t_0} = \arg{G(t,t_0)}$, and at second order in $(t-t_0)$ for $\phi^D_{t,t_0}=\arg{D(t,t_0)}$. The variance can then be expanded in the following way: \begin{equation} \Big\langle \Delta^2 P\big[\phi^D_{t,t_0}-\phi^G_{t,t_0}\big]\Big\rangle_0 = \Big\langle \Delta^2 P\big[\phi^{D_0}_{ t,t_0}\big]\Big\rangle_0 + {\cal O}({\cal K}^2(t-t_0)^2), \end{equation} where \begin{equation} D_0(t,t_0) := \omega_0\int_{t_0}^tdu \zeta(u) e^{i\omega_0(u-t_0)} , \end{equation} is the coefficient $D(t,t_0)$ in the noiseless situation ($K_n \rightarrow 0$ $\forall n$). \\
Regarding the second term in Eq. \eqref{newvar}, one can re-write the Taylor expansion of $C^0(s-s')$ around $0$ in the following form
\begin{equation} C^0(s-s') = \sum_n |K_n|^2(N_n+1/2)\left[1+ \sum_{p=1}^{\infty} \frac{[i\chi_p(s-s')]^p}{p!}\right],
\end{equation} where the frequencies defining the evolution time scales $|\chi_p|^{-1}$ are given by $\chi_p := \left[\sum_n\frac{|K_n|^2(N_n+1/2)}{\cal N}(\omega_0-\omega_n)^p\right]^{1/p}$ and ${\cal N}:=\sum_n|K_n|^2(N_n+1/2)$. Note that if the $\Omega_p$ converge then the $\chi_p$ converge too, since the prefactor $\frac{|K_n|^2(N_n+1/2)}{\cal N}$ within the sum is bounded. \\
Recapping the above considerations on time scales, as long as $(t-t_0)$ is smaller than $\omega_0^{-1}$, $|\chi_q|^{-1}$, $q\geq1$, and $\Omega_p^{-1}$, $p\geq2$, we can write \begin{equation} \big\langle \Delta^2 P(\phi^D_{t,t_0} -\omega_0(t-t_0))[t,t_0]\big\rangle = \Big\langle \Delta^2 P\big[\phi^{D_0}_{ t,t_0}\big]\Big\rangle_0 + {\cal O}[({\cal K}^2+{\cal N}^2)(t-t_0)^2], \end{equation} and one can then expand $G(t,t_0)$, $D(t,t_0)$, $\big\langle \Delta^2 P(\phi^D_{t,t_0} -\omega_0(t-t_0))[t,t_0]\big\rangle$ and the QFI (Eq. \eqref{qfipvar}), finally obtaining \begin{eqnarray} {\cal F}_{P(\phi^D_{t,t_0} -\omega_0(t-t_0))}(t,t_0) = \frac{\omega_0^2}{\big\langle \Delta^2 P\big[\phi_{t,t_0}^{D_0}\big]\big\rangle_0 }\{\zeta^2(t_0)(t-t_0)^2 +\zeta(t_0)\dot{\zeta}(t_0)(t-t_0)^3 + {\cal O}[\omega_0^2{\cal K}^2(t-t_0)^4]\}.\label{stexp} \end{eqnarray}
\subsection{Non-Markovian effects at short times}\label{nmvsm} In the above subsection we derived the conditions of validity of the short time expansion \eqref{stexp} of the QFI. We show in Appendix \ref{bathcorrelation} that these time scales also correspond to the evolution time scales of the bath correlation function. If those conditions on $t-t_0$ cannot be fulfilled, meaning that no measurement can be performed below the bath correlation time, the higher orders of the expansion \eqref{stexp} cannot be neglected and its first terms are no longer significant. The correct expansion is then centered not at $t_0$ but at $t_0+t_c$, where $t_c$ represents the bath correlation time. In such an expansion, the first derivative of $G$ is taken at $t_0+t_c$ and no longer vanishes, yielding a first order bath dependent term in the short time expansion of $G(t,t_0)$ and in the denominator of \eqref{qfipvar}, as for Markovian dynamics (Appendix \ref{bnband}). As a comparison, Markovian dynamics can be sketched as the impossibility of accessing the correlation time of the bath, i.e. as taking $t_c$ to zero. We show at the end of Appendix \ref{seqmeasurement} that the presence of a first order term in the denominator of \eqref{qfipvar} dramatically changes the QFI behavior in the sequential preparation-and-measurement scenario: the QFI becomes bounded by a constant, irrespective of the energy invested in the squeezing of the probe initial state.\\
As a matter of comparison, we give here the short time expansion of \eqref{qfipvar} for Markovian dynamics (obtained from Eq. \eqref{markovqfi}), valid for $t-t_0$ smaller than $\omega_0^{-1}$, $\gamma^{-1}$ (defined in Appendix \ref{bnband}) and the evolution time scale of $\zeta(t)$: \begin{eqnarray} {\cal F}^M_Q(t,t_0) = &\frac{\omega_0^2(t-t_0)^2}{\big\langle \Delta^2 P\big[\phi_{t,t_0}^D\big]\big\rangle_0 }&\left\{\zeta^2(t_0)+\left[\zeta(t_0)\dot{\zeta}(t_0)+\frac{\gamma}{2}\zeta^2(t_0)-\frac{\gamma\left(n_T+\frac{1}{2}\right)}{\big\langle \Delta^2 P\big[\phi_{t,t_0}^D\big]\big\rangle_0} \zeta^2(t_0)\right](t-t_0)\right\}\nonumber\\ &+& {\cal O}[\omega_0^2(t-t_0)^4]. \end{eqnarray}
The bath contribution also enters at third order and always reduces the amount of information, since $\gamma/2-\gamma\left(n_T+\frac{1}{2}\right) \Big\langle \Big\{\Delta P\big[\phi_{t,t_0}^D\big]\Big\}^2\Big\rangle_0^{-1}$ is always strictly negative (even smaller than $-\gamma/2$). This happens because the only contribution to the first derivative of $G(t,t_0)$ at $t=t_0$ comes from $-\sum_n|K_n|^2\int_{t_0}^t ds e^{i(\omega_0-\omega_n)(t-s)}$ (see Eq. \eqref{eqdiff}). If one takes the broad band limit together with the rotating wave approximation one ends up with $-\gamma/2$ (see Appendix \ref{bnband}). The short time behavior of the QFI for Markovian dynamics is therefore qualitatively different from the one derived above.\\
Hence the appearance of the bath contribution only at fourth order is a particularity of the expansion \eqref{stexp}; it comes from measurements at time scales below the correlation time of the bath, justifying its classification as a non-Markovian effect.
\section{Long time behavior}\label{longtime} We are interested in the long time behavior of ${\cal G}_1(t,t_0)$, the first term of the sum in the expression \eqref{gsm} of $G(t,t_0)$: \begin{eqnarray} {\cal G}_1(t,t_0) &:=& - \int_{t_0}^tds_1\int_{t_0}^{s_1}ds_2e^{i\omega_0(s_1-s_2)}\left[B_0(s_1),B_0^{\dag}(s_2)\right]\label{g1}\\
&=& -\sum_n |K_n|^2 \left[-\frac{i(t-t_0)}{\omega_n-\omega_0} +\frac{1-e^{-i(\omega_n -\omega_0)(t-t_0)}}{(\omega_n-\omega_0)^2}\right]\nonumber\\
&=& - \sum_n |K_n|^2 \left[\frac{1-\cos{\{(\omega_n -\omega_0)(t-t_0)\}}}{(\omega_n-\omega_0)^2} -i\frac{t-t_0}{\omega_n-\omega_0}+i\frac{\sin{\{(\omega_n -\omega_0)(t-t_0)\}}}{(\omega_n-\omega_0)^2}\right]. \end{eqnarray}
One can show that when $t-t_0$ goes to infinity the real part of the integrand tends, in the continuum limit, to \begin{equation} \frac{1-\cos{\{(\omega -\omega_0)(t-t_0)\}}}{(\omega-\omega_0)^2} \rightarrow \frac{\pi}{2} (t-t_0)\delta(\omega-\omega_0). \end{equation}
The long time behavior of the real part of ${\cal G}_1(t,t_0)$ reproduces the Markovian behavior since we recover a $\delta$-function. Substituting in the expression of ${\cal G}_1(t,t_0)$ we find \begin{eqnarray}
\Re{{\cal G}_1(t,t_0)} &=& -\sum_n |K_n|^2 \frac{1-\cos{\{(\omega_n -\omega_0)(t-t_0)\}}}{(\omega_n-\omega_0)^2}\nonumber\\
&=& -\int_0^{\infty}d\omega g(\omega)|K(\omega)|^2 \frac{1-\cos{\{(\omega -\omega_0)(t-t_0)\}}}{(\omega-\omega_0)^2}\nonumber\\
&&\rightarrow -\frac{\pi}{2}(t-t_0)g(\omega_0)|K(\omega_0)|^2. \end{eqnarray}
In the second line we replaced the discrete sum over the bath modes by a continuous distribution in order to carry out the integration.
Note that the Markovian limit gives a similar result, ${\cal G}_1(t,t_0)\rightarrow -\gamma(t-t_0)/2=-\pi(t-t_0)g|K|^2/2$. So for the real part of ${\cal G}_1(t,t_0)$ the long time limit agrees with the Markov approximation. The imaginary part, however, is not so simple: the same treatment as for the real part leads to an indeterminate form. Expanding the sine in the imaginary part in a power series one obtains the following expression: \begin{eqnarray} \Im{{\cal G}_1(t,t_0)} =\sum_{p=0}^{\infty}(-1)^{p+1}\frac{(t-t_0)^{2p+3}}{(2p+3)!}\big\langle(\omega_0-\omega)^{2p+1}\big\rangle,
\end{eqnarray} where $\big\langle(\omega_0-\omega)^{2p+1}\big\rangle = \int_0^{+\infty} d\omega g(\omega)|K(\omega)|^2(\omega_0-\omega)^{2p+1}$. The sum is expected to converge since the imaginary part $\Im{{\cal G}_1(t,t_0)}$ is finite (as can be seen from expression \eqref{g1}). Note that if $g(\omega)|K(\omega)|^2$ is symmetric with respect to $\omega_0$ the imaginary part $\Im{{\cal G}_1(t,t_0)}$ vanishes.
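As a quick numerical cross-check of this resummation for a discrete bath (so that $\langle(\omega_0-\omega)^{2p+1}\rangle \to \sum_n|K_n|^2(\omega_0-\omega_n)^{2p+1}$), the following Python sketch compares the series with the closed form of Eq. \eqref{g1}; the asymmetric couplings and frequencies are illustrative assumptions chosen so that $\Im{\cal G}_1$ does not vanish.

```python
import numpy as np
from math import factorial

w0 = 5.0
wn = np.array([4.2, 4.8, 5.1, 5.9, 6.5])       # assumed mode frequencies
K2 = np.array([0.05, 0.08, 0.1, 0.07, 0.04])   # assumed |K_n|^2
T = 0.3                                        # t - t0

d = wn - w0
# Closed form of Im G1 read off from the explicit expression of G1
im_closed = np.sum(K2 * (T / d - np.sin(d * T) / d**2))
# Power-series resummation, truncated at 20 terms (rapidly convergent here)
im_series = sum((-1)**(p + 1) * T**(2 * p + 3) / factorial(2 * p + 3)
                * np.sum(K2 * (w0 - wn)**(2 * p + 1)) for p in range(20))
print(im_closed, im_series)
```

The two expressions agree to machine precision, as expected from the term-by-term expansion of the sine.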
\section{Sequential preparation-and-measurement scenario}\label{seqmeasurement} We analyze the variance of the quadrature $P(\phi^D_{t_{k+1},t_k} -\omega_0\tau)$ after the probe has interacted with the force and the bath from $t_k:=t_0+k\tau$ to $t_{k+1}:=t_0+(k+1)\tau$. We use the expression \eqref{newvar} of the variance introduced in Appendix \ref{considtimescale}: \begin{eqnarray}
\langle \Delta^2 P(\phi^D_{t_{k+1},t_k} -\omega_0\tau)[t_{k+1},t_k]\rangle &=&|G(t_{k+1},t_k)|^2\Big\langle \Delta^2 P\big[\phi^D_{t_{k+1},t_k}-\phi^G_{t_{k+1},t_k}\big]\Big\rangle_0 \nonumber\\ & +& \int_{t_k}^{t_{k+1}}ds\int_{t_k}^{t_{k+1}}ds' G(t_{k+1},s)G^{*}(t_{k+1},s')C^0(s-s').\label{seqvar} \end{eqnarray}
One can show that $G(t,u) =G(t-u)$, yielding $G(t_{k+1},t_k) = G(\tau)$, and that the double integral depends only on $t_{k+1}-t_k$, allowing us to rewrite \eqref{seqvar} as \begin{eqnarray}
\langle \Delta^2 P(\phi^D_{t_{k+1},t_k} -\omega_0\tau)[t_{k+1},t_k]\rangle &=&|G(\tau)|^2\Big\langle \Delta^2 P\big[\phi^D_{t_{k+1},t_k}-\phi^G_{\tau}\big]\Big\rangle_0 +\int_0^{\tau}ds\int_0^{\tau}ds' G(\tau-s)G^{*}(\tau-s')C^0(s-s'),\nonumber\\
\end{eqnarray} where $\phi^G_\tau :=\arg{G(\tau)}$. As discussed in Section V of the main text we make the assumption that the probe is prepared in the best state $\hat S[\mu(t_{k+1},t_k)]|0\rangle$ after each measurement. Thanks to this assumption, which corresponds to the best strategy, the expression of the variance simplifies to \begin{eqnarray}\label{varpappendix}
\langle \Delta^2 P(\phi^D_{t_{k+1},t_k} -\omega_0\tau)[t_{k+1},t_k]\rangle &=&\frac{1}{4} \big|G(\tau)\big|^2\left(E+\sqrt{E^2-1/4}\right)^{-1} +\int_0^{\tau}ds\int_0^{\tau}ds' G(\tau-s)G^{*}(\tau-s')C^0(s-s'),\nonumber\\
\end{eqnarray} since $\langle 0|\hat S^{\dag}[\mu(t_{k+1},t_k)] \left[\Delta P(\phi^D_{t_{k+1},t_k}-\phi^G_\tau)\right]^2\hat S[\mu(t_{k+1},t_k)]|0\rangle =\frac{1}{4} \left(E+\sqrt{E^2-1/4}\right)^{-1} $. The expression \eqref{varpappendix} depends only on $\tau$ and no longer on $k$, so that the denominator in Eq. \eqref{fseq} can be taken out of the sum.\\
Assuming now that $\tau$ is much smaller than all the time scales involved in $D(t_k+\tau,t_k)$, that is, much smaller than $\omega_0^{-1}$, $\Omega_p^{-1}$ for $p\geq2$ (see Appendix \ref{considtimescale}), and the evolution time scale of $\zeta(t)$, we can expand $D(t_k+\tau,t_k)$ to second order: \begin{equation} D(t_k+\tau, t_k) = \omega_0\tau\zeta(t_k)+ \omega_0\frac{\tau^2}{2}\left[\dot{\zeta}(t_k) + i\omega_0\zeta(t_k) \right]+ {\cal O}(\tau^3), \end{equation}
where the dot denotes the time derivative. For $|D(t_k+\tau,t_k)|^2$ we have: \begin{equation}\label{exp}
|D(t_k+\tau,t_k)|^2 = \omega_0^2 \tau^2\left[\zeta^2(t_k)\left(1+\omega_0^2\frac{\tau^2}{4}\right) +\tau\zeta(t_k)\dot{\zeta}(t_k)+ \frac{\tau^2}{4}\dot{\zeta}^2(t_k) \right]+ {\cal O}(\tau^5). \end{equation}
The Euler-Maclaurin formula relates the sum $\sum_{k=0}^{\nu-1}|D(t_k+\tau,t_k)|^2$ to the integral $\int_{t_0}^{t_0+T}dt A(t)$, where $A(t)$ denotes the expansion \eqref{exp} of $|D(t+\tau,t)|^2$: \begin{eqnarray}\label{exp1}
\sum_{k=0}^{\nu-1}|D(t_k+\tau,t_k)|^2 = \omega_0^2\tau \int_{t_0}^{t_0+T}dt \zeta^2(t) &+& \omega_0^2\tau^3 \left\{\frac{1}{4}\int_{t_0}^{t_0+T}dt[\dot{\zeta}^2(t) +\omega_0^2\zeta^2(t)] -\frac{1}{3}[\zeta(t_0+T)\dot{\zeta}(t_0+T) -\zeta(t_0)\dot{\zeta}(t_0)]\right\} \nonumber\\ &+&{\cal O}(\tau^4). \end{eqnarray}
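The leading term of this relation can be checked numerically. The Python sketch below evaluates the noiseless coefficient $D_0$ on each sub-interval by quadrature and compares $\sum_k|D_0|^2$ with $\omega_0^2\tau\int\zeta^2$; the force profile $\zeta(t)$ and all parameter values are illustrative assumptions.

```python
import numpy as np

w0, t0, T = 0.5, 0.0, 10.0
zeta = lambda t: np.sin(0.2 * t) + 1.2            # assumed slow force modulation

def trap(y, x):
    # simple trapezoidal rule (works for complex integrands)
    return np.sum((y[1:] + y[:-1]) / 2 * np.diff(x))

def D0(tk, tau, n=201):
    # noiseless D(t_k + tau, t_k) = w0 int_{t_k}^{t_k+tau} zeta(u) e^{i w0 (u-t_k)} du
    u = np.linspace(tk, tk + tau, n)
    return w0 * trap(zeta(u) * np.exp(1j * w0 * (u - tk)), u)

tau = 0.01
nu = int(round(T / tau))                          # number of sub-intervals
total = sum(abs(D0(t0 + k * tau, tau))**2 for k in range(nu))

u = np.linspace(t0, t0 + T, 20001)
leading = w0**2 * tau * trap(zeta(u)**2, u)       # leading Euler-Maclaurin term
print(total, leading)
```

For this slow modulation the two agree up to the expected ${\cal O}(\tau^2)$ relative corrections.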
We also expand \eqref{varpappendix} to the third order in $\tau$: \begin{equation}\label{exp2} \langle \Delta^2 P(\phi^D_{t_{k+1},t_k} -\omega_0\tau)[t_{k+1},t_k]\rangle = \frac{1}{4}\left(1- \tau^2{\cal K}^2 \right)\left(E+\sqrt{E^2-\frac{1}{4}}\right)^{-1} + \tau^2{\cal N} + {\cal O}(\tau^4), \end{equation}
remembering that this is valid if $\tau$ is smaller than $\Omega_p^{-1}$, $p\geq2$, and $|\chi_q|^{-1}$, $q\geq 1$, with ${\cal K}^2:=\sum_n|K_n|^2$ and ${\cal N}:=\sum_n|K_n|^2(N_n+1/2)$ (see Appendix \ref{considtimescale}).
Substituting the expressions \eqref{exp1} and \eqref{exp2} in the total quantum Fisher information we have \begin{eqnarray} {\cal F}_Q^{Seq}(T,\tau) &=& \sum_{k=0}^{\nu-1} {\cal F}_Q(t_{k+1},t_k)\nonumber\\
&=& \sum_{k=0}^{\nu -1}\frac{|D(t_{k+1},t_k)|^2}{\langle \Delta^2 P(\phi^D_{t_{k+1},t_k} -\omega_0\tau)[t_{k+1},t_k]\rangle}\nonumber\\ & =&\omega_0^2\frac{\xi(T,t_0) \tau + {\cal C}(T,t_0)\tau^3 +{\cal O}(\tau^4)}{\frac{{\cal E}^{-1}}{4} + \tau^2\left({\cal N}-\frac{{\cal E}^{-1}}{4}{\cal K}^2\right) + {\cal O}(\tau^4)},\label{fseqexp} \end{eqnarray} where ${\cal E}:=\left(E+\sqrt{E^2-\frac{1}{4}}\right)$, $\xi(T,t_0):=\int_{t_0}^{t_0+T}dt\zeta^2(t)$, and ${\cal C}(T,t_0):= \frac{1}{4}\int_{t_0}^{t_0+T}dt[\dot{\zeta}^2(t) +\omega_0^2\zeta^2(t)] -\frac{1}{3}[\zeta(t_0+T)\dot{\zeta}(t_0+T) -\zeta(t_0)\dot{\zeta}(t_0)]$, which simplifies to ${\cal C}(T,t_0)= \frac{1}{4}\int_{t_0}^{t_0+T}dt[\dot{\zeta}^2(t) +\omega_0^2\zeta^2(t)]$ if the force vanishes at the beginning and at the end of the sensing window. \\
From the expansion \eqref{fseqexp} one can easily find the optimal time interval $\tau_{\mathrm{opt}}$: \begin{eqnarray} \tau_{\mathrm{opt}}=& \frac{1}{2\sqrt{3{\cal N}}}{\cal E}^{-1/2} &+ \frac{{\cal C}(T,t_0)+\xi{\cal K}^2}{16\sqrt{3}{\cal N}^{3/2}\xi(T,t_0)}{\cal E}^{-3/2} + {\cal O}({\cal E}^{-5/2}).\nonumber \end{eqnarray}
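The ${\cal E}^{-1/2}$ scaling of $\tau_{\mathrm{opt}}$ can be verified numerically from the truncated expansion \eqref{fseqexp}; in the Python sketch below, $\xi$, ${\cal C}$, ${\cal N}$ and ${\cal K}^2$ are set to illustrative values (assumptions), so only the scaling with ${\cal E}$, not the prefactor, is examined.

```python
import numpy as np

xi, C, Ncal, Ksq = 1.0, 1.0, 1.0, 1.0            # illustrative constants

def tau_opt(E):
    Einv = 1.0 / (E + np.sqrt(E**2 - 0.25))      # {cal E}^{-1}
    a = Einv / 4.0
    b = Ncal - a * Ksq
    # locate the interior maximum of the truncated F^{Seq}(T, tau) on a fine grid
    tau = np.linspace(1e-6, 20.0 * np.sqrt(a / b), 200001)
    F = (xi * tau + C * tau**3) / (a + b * tau**2)
    return tau[np.argmax(F)]

r = tau_opt(1e8) / tau_opt(1e4)
print(r)   # close to (1e4 / 1e8)^{1/2} = 1e-2, confirming tau_opt ~ E^{-1/2}
```

Quadrupling the search window and refining the grid leaves the result unchanged, so the scaling is not a discretization artifact.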
This result confirms the announced correlation between $\tau_{\mathrm{opt}}$ going to zero and $E$ going to infinity. Interestingly, the leading term depends neither on the total available sensing window $[t_0,t_0+T]$ nor on the force time modulation $\zeta(t)$. For a Markovian bath the dependence of $\tau_{\mathrm{opt}}$ is in ${\cal E}^{-1/3}$ \cite{pra}.\\
The corresponding total quantum Fisher information is \begin{eqnarray} &&{\cal F}_Q^{\mathrm{Seq}}(T,\tau_{\mathrm{opt}})= \frac{\sqrt{3}\xi(T,t_0)}{2\sqrt{\cal N}}{\cal E}^{1/2} + \frac{\sqrt{3}}{32{\cal N}^{3/2}}\left[2{\cal K}^2\xi(T,t_0) + \frac{7}{3}{\cal C}(T,t_0)\right]{\cal E}^{-1/2} + {\cal O}({\cal E}^{-3/2}).\nonumber \end{eqnarray}
If the effective time window during which the force is non-zero is $[t_i,t_f] \subset [t_0,t_0+T]$, then we have $\xi(T,t_0)=\int_{t_i}^{t_f}dt\zeta^2(t) =\xi(t_f-t_i,t_i) $ and ${\cal C}(T,t_0)= \frac{1}{4}\int_{t_i}^{t_f}dt[\dot{\zeta}^2(t) +\omega_0^2\zeta^2(t)] = {\cal C}(t_f-t_i,t_i)$, so the total quantum Fisher information is just equal to ${\cal F}_Q^{Seq}(t_f-t_i,t_i)$: there is no penalty in starting and stopping the sensing beyond the actual time window of the force application. The exact knowledge of $t_i$ and $t_f$ is not necessary for the sequential preparation-and-measurement scenario.\\
As a matter of comparison, we give the asymptotic behavior of the optimal time interval and of the corresponding QFI when a term of first order in $\tau$ appears in the denominator of \eqref{qfipvar}. This happens when the time interval $\tau$ between measurements is larger than the evolution time scales of the bath correlation function (see Appendix \ref{nmvsm}), or when the dynamics is Markovian (which obviously implies that $\tau$ is larger than those time scales). In such situations Eq. \eqref{fseqexp} becomes
\begin{eqnarray} {\cal F}_Q^{Seq}(T,\tau) & =&\omega_0^2\frac{\xi(T,t_0) \tau + {\cal C}(T,t_0)\tau^3 +{\cal O}(\tau^4)}{\frac{{\cal E}^{-1}}{4} + A\tau + {\cal O}(\tau^2)}, \end{eqnarray} where $A$ is a coefficient appearing in the situations described above ($A=\gamma(n_T+1/2)$ for Markovian dynamics), yielding \begin{equation} \tau_{\mathrm{opt}} = \frac{1}{8A}{\cal E}^{-1/2} \end{equation} and a bounded QFI, \begin{equation} {\cal F}_Q^{\mathrm{Seq}}(T,\tau_{\mathrm{opt}}) = \frac{\xi(T,t_0)}{3A} + {\cal O}({\cal E}^{-1}), \end{equation} equivalent to the result in \cite{pra} for Markovian dynamics.
\section{Bath correlation function and correlation time}\label{bathcorrelation} The bath correlation function can be defined by the following expression:
\begin{equation} C(t,t_0|t',t_0):=\frac{1}{2}{\rm Tr}_{SB}\{\rho_{SB}^0[B(t,t_0,F)B^{\dag}(t',t_0,F)+B^{\dag}(t',t_0,F)B(t,t_0,F)]\} - {\rm Tr}_{SB}[\rho_{SB}^0B(t,t_0,F)]{\rm Tr}_{SB}[\rho_{SB}^0B^{\dag}(t',t_0,F)] \end{equation} where $B(t,t_0,F):=U^{\dag}(t,t_0,F)BU(t,t_0,F)$.\\
One can derive the following useful expression for $B(t,t_0,F)$: \begin{equation} B(t,t_0,F) = B_0(t) -a_0(t)\dot{G}(t,t_0) + \int_{t_0}^t ds \dot{G}(t,s)B_0(s)e^{-i\omega_0(t-s)} +i\frac{F}{\sqrt{2}}\sum_n K_n e^{-i\omega_n(t-t_0)}D_n(t,t_0), \end{equation} where $\dot{G}(t,s):= \frac{d}{dt}G(t,s)$, and $D_n(t,t_0)$ is defined in Appendix \ref{evolution}. \\
One gets for the bath correlation function, assuming that $\forall n$, ${\rm Tr}_B[\rho_B^0b_n]={\rm Tr}_B[\rho_B^0b_n^{\dag}]=0$, \begin{equation}\label{bathcofct}
C(t,t_0|t',t_0) = C_{b}(t,t_0|t',t_0)+C_{I}(t,t_0|t',t_0), \end{equation} where the first part
\begin{equation} C_{b}(t,t_0|t',t_0)=e^{-i\omega_0(t-t')}C^0(t-t'), \end{equation} corresponds to the bare correlation function of the bath in the absence of interaction with the probe (it also corresponds to the Born approximation), and the second part,
\begin{eqnarray} C_I(t,t_0|t',t_0)&=& e^{-i\omega_0(t-t')}\Bigg\{\int_{t_0}^{t'}dsC^0(t-s)\dot{G}^{*}(t',s) +\int_{t_0}^tdsC^0(s-t')\dot{G}(t,s)\nonumber\\
&+&\left[\frac{1}{2}{\rm Tr}_S[\rho^0_S(a^{\dag}a+aa^{\dag})]-|{\rm Tr}_S(\rho_s^0a)|^2\right]\dot{G}(t,t_0)\dot{G}^{*}(t',t_0) + \int_{t_0}^tds\int_{t_0}^{t'}ds' C^0(s-s')\dot{G}(t,s)\dot{G}^{*}(t',s')\Bigg\},\nonumber\\ \end{eqnarray}
gathers second and higher order terms arising from the interaction with the probe, and involves the derivative $\dot{G}(t,s)$ of the response function of the bath. The function $C^0(t-t')$ is defined in Appendix \ref{considtimescale}, and $N_n={\rm Tr}_B[\rho_B^0b_n^{\dag}b_n]$. \\
Note that if one looks at the bath correlations at the beginning of the interaction between the bath and the probe, the correlation function reduces to ($t'\rightarrow t_0$ in \eqref{bathcofct})
\begin{eqnarray} C(t,t_0|t_0,t_0) &=& e^{-i\omega_0(t-t_0)}\left[C^0(t-t_0) +\int_{t_0}^tdsC^0(s-t_0)\dot{G}(t,s)\right]. \end{eqnarray}
As detailed in Appendix \ref{considtimescale}, the evolution time scales of $G(t,t_0)$ and $C^0(t-t')$ are of the order of $\Omega_p^{-1}$, $p\geq2$, and $|\chi_q|^{-1}$, $q\geq1$; since \eqref{bathcofct} depends only on these two functions, the evolution time scale of the bath correlation function is also of this order. This is an important conclusion since it shows that the short time effects considered in this work occur within the bath correlation time, justifying their classification as non-Markovian effects.\\
Note finally that under the traditional Markovian approximation, including the broad band limit, the rotating wave approximation and the Born approximation, the bath correlation function $C(t,t_0|t',t_0)$ becomes a Dirac delta function $\delta(t-t')$, implying that the correlation time is zero and that no measurement can be performed within it.
\end{widetext}
\end{document} |
\begin{document}
\title{Comparative Dynamical Study of a Bound Entangled State} \author{Suprabhat Sinha$^\ast$\footnote[0] {$^\ast$suprabhatsinha64@gmail.com} \\
\textit{School of Computer Science, Engineering and Applications, \\ D Y Patil International University, Akurdi, Pune-411044, India}}
\begin{abstract}
Bound entangled states carry noisy entanglement that is very hard to distill, yet their usefulness has been demonstrated in several applications. This article presents a comparative dynamical study of an open quantum system for one of the bound entangled states proposed by Bennett \textit{et al.} The study is conducted under the influence of the Heisenberg, bi-linear bi-quadratic and Dzyaloshinskii–Moriya (DM) interactions. During the study, an auxiliary qutrit interacts with one of the qutrits of the selected two-qutrit bound entangled state through the different interactions. The computable cross-norm or realignment (CCNR) criterion is used to detect the bound entanglement of the state, and the negativity is applied to measure the free entanglement. From this three-fold study it is observed that, although the auxiliary qutrit plays a significant role during the interaction, its probability amplitude does not affect the open quantum system. Further, it is found that the Dzyaloshinskii–Moriya (DM) interaction performs best among all the interactions in activating the chosen bound entangled state.
\end{abstract}
\maketitle
\section{Introduction} Quantum entanglement is one of the cornerstones of quantum theory and plays a major role in developing the applications of quantum computation and quantum information theory \cite{NL-CH, QE, NG}. In most applications of quantum computation and quantum information processing, maximally entangled quantum states are preferred for perfect execution. Quantum entangled states can be divided into two types: free entangled quantum states and bound entangled quantum states \cite{HO1}. Free entangled quantum states are distillable, and pure entanglement can be extracted from them easily. For this reason, these states can be effectively utilized for quantum computation and information processing. A wide range of applications of free entangled quantum states has been investigated in quantum teleportation \cite{QT1, QT2, QT3}, quantum cryptography \cite{QC1, QC2}, quantum sensing \cite{QS}, quantum imaging \cite{QI}, quantum games \cite{QG} and many other domains. On the other hand, bound entangled quantum states are noisy entangled quantum states from which it is very hard to distill pure entanglement. For this reason, bound entangled quantum states are generally avoided in quantum computation and information processing. Nevertheless, some recent studies have found that bound entangled quantum states may be useful for distributing quantum keys securely \cite{KEY}, hiding quantum data \cite{DATA}, concentrating remote quantum information \cite{RE}, reducing communication complexity \cite{COM} and so on. It has also been shown that, supplemented with some free entanglement, a bound entangled state can enhance the teleportation fidelity \cite{QT4, QT5}.
The term `Bound Entanglement' was first introduced by Horodecki \textit{et al.}, who also proposed the first bound entangled quantum state \cite{HO2}. Since then many authors, including Bennett \textit{et al.} \cite{BT1} and Jurkowski \textit{et al.} \cite{JKI1}, have contributed to constructing different bound entangled quantum states. Recently, experimental manifestation and distillation of bound entanglement have been realized \cite{E1, E2, E3, E4}. From the application point of view, the evolutionary dynamics and the distillation of bound entangled quantum states are as important as their construction. Different authors have proposed different methods for the dynamical analysis and distillation of different bound entangled quantum states. Guo-Qiang \textit{et al.} have shown the time evolution of a bound entangled quantum state provided by Horodecki \textit{et al.} under the bi-linear bi-quadratic interaction \cite{BI}, and offered a way to free bound entanglement. Baghbanzadeh \textit{et al.} have presented the distillation of three bound entangled quantum states proposed by Horodecki \textit{et al.} and Bennett \textit{et al.} using weak measurements \cite{WK}. Sharma \textit{et al.} have also studied one of the Horodecki \textit{et al.} bound entangled quantum states, but they used the Dzyaloshinskii–Moriya (DM) interaction to show time evolution and distillation \cite{K1}. Recently, they have taken their work one step further and repeated their study for a bound entangled quantum state provided by Jurkowski \textit{et al.} \cite{K2}. During these studies, different tools are used to detect and measure entanglement. At present, several mathematical tools are available for the characterization, detection, and measurement of free entanglement in bipartite quantum systems \cite{EM}. Negativity is one of these measures, and a good tool for the quantification of free entanglement \cite{N}.
On the other hand, the characterization and detection of bound entanglement is still an open problem, although some criteria have already been developed to detect it, such as the separability criterion, the realignment criterion, and the computable cross-norm or realignment (CCNR) criterion \cite{C1, C2, C3, C4, C5}.
In the current study, a comparative dynamical analysis and the distillation of a $3\times3$ dimensional bipartite bound entangled quantum state proposed by Bennett \textit{et al.} are investigated. The analysis is explored with the help of an auxiliary qutrit under three different physical interactions: Heisenberg \cite{H1,H11,H12}, bi-linear bi-quadratic \cite{H2,H21} and DM interaction \cite{DM1,DM2,DM3}. The selected interactions have already shown their efficacy in the field of quantum computation and information \cite{K3, K4, K5, K6, K7, K8, K9, K10, K11, K12, K13}. To the best of the author's knowledge, this type of study under the chosen interactions with the considered bound entangled state is missing in the literature. In this study, the CCNR criterion is used to detect the bound entanglement of the system and the negativity is used to quantify and measure the free entanglement of the system.
The paper is organized as follows. The selected bound entangled state, negativity, and the CCNR criterion are discussed in section 2. Section 3 deals with the unitary dynamics and the Hamiltonians of the different interactions in the open quantum system. In section 4, the dynamical behavior under the different interactions is studied. The last section presents the conclusions.
\section{Bound entangled state, Negativity and CCNR criterion} In the current section, the bound entangled state proposed by Bennett \textit{et al.} \cite{BT1}, the negativity, and the CCNR criterion are discussed. The state is a $3\times3$ dimensional bipartite state of two qutrits $A$ and $B$. The density matrix of this bound entangled state can be written as,
\begin{equation} \rho_{AB}=\frac{1}{4} \left( (I \otimes I)-\sum_{i=0}^4\vert \psi_i\rangle \langle\psi_i \vert \right). \label{bes} \end{equation} Here $I$ is the $3\times3$ dimensional identity matrix, \begin{eqnarray*} \vert\psi_0\rangle=\frac{1}{\sqrt{2}} \vert 0\rangle (\vert 0\rangle-\vert1\rangle),\quad \vert\psi_1\rangle=\frac{1}{\sqrt{2}} (\vert0\rangle-\vert1\rangle)\vert2\rangle,\quad \vert\psi_2\rangle=\frac{1}{\sqrt{2}}\vert2\rangle (\vert1\rangle-\vert2\rangle), \end{eqnarray*} \begin{eqnarray*}
\vert\psi_3 \rangle=\frac{1}{\sqrt{2}}(\vert1\rangle-\vert2\rangle)\vert0\rangle \quad \text{and} \quad \vert\psi_4\rangle=\frac{1}{3} (\vert0\rangle+\vert1\rangle+\vert2\rangle)(\vert0\rangle+\vert1\rangle+\vert2\rangle). \end{eqnarray*}
Negativity and the CCNR criterion are the primary tools used to detect and quantify entanglement in the present work: the negativity quantifies the free entanglement, while the CCNR criterion detects the bound entanglement of the system. CCNR is a simple yet strong separability criterion for a density matrix, capable of detecting a wide range of bound entangled states. The negativity $(N)$ and the CCNR criterion are defined as below, \begin{equation}
N=\frac{(\left \|\rho_{AB}^{T}\right \|-1)}{2} \label{N} \end{equation} and \begin{equation}
CCNR=\left\|(\rho_{AB}-\rho_{A}\otimes \rho_{B})^R\right\|-\sqrt{(1-Tr \rho_{A}^{2}) (1-Tr \rho_{B}^{2})}. \label{CCNR} \end{equation}
Here $\|\cdot\|$, $(\cdot)^{T}$ and $(\cdot)^{R}$ denote the trace norm, the partial transpose and the realignment of a matrix, respectively. Further, $\rho_{A}$ and $\rho_{B}$ are the reduced density matrices of qutrit $A$ and qutrit $B$, and $\rho_{AB}$ is the density matrix of the bound entangled state, expressed as, \begin{eqnarray*} \rho_{A}=Tr_{BC}(\rho_{ABC}), \quad \rho_{B}=Tr_{AC}(\rho_{ABC})\quad \text{and} \quad \rho_{AB}=Tr_{C}(\rho_{ABC}), \end{eqnarray*} where $\rho_{ABC}$ is the density matrix of the full system including the auxiliary qutrit $C$ introduced in the next section.
For a given state, $N>0$ or $CCNR>0$ implies that the state is entangled; in particular, $N=0$ together with $CCNR>0$ indicates bound entanglement, while $N>0$ corresponds to free entanglement.
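The definitions above can be checked numerically. The following minimal sketch (an illustration, not part of the original analysis; the function and variable names are ours, and the standard row-block vectorization convention is assumed for the realignment) builds the Bennett \textit{et al.} state of Eq.\ref{bes} with NumPy and evaluates the negativity of Eq.\ref{N} and the CCNR quantity of Eq.\ref{CCNR}:

```python
import numpy as np

def ket(i, d=3):
    v = np.zeros(d)
    v[i] = 1.0
    return v

# The five orthonormal "Tiles" product vectors |psi_0>, ..., |psi_4> of Eq. (bes)
u = (ket(0) + ket(1) + ket(2)) / np.sqrt(3)
psis = [
    np.kron(ket(0), (ket(0) - ket(1)) / np.sqrt(2)),
    np.kron((ket(0) - ket(1)) / np.sqrt(2), ket(2)),
    np.kron(ket(2), (ket(1) - ket(2)) / np.sqrt(2)),
    np.kron((ket(1) - ket(2)) / np.sqrt(2), ket(0)),
    np.kron(u, u),
]

# rho_AB = (1/4)(I (x) I - sum_i |psi_i><psi_i|): a rank-4 PPT entangled state
rho = (np.eye(9) - sum(np.outer(v, v) for v in psis)) / 4.0

def trace_norm(M):
    return np.linalg.svd(M, compute_uv=False).sum()

def partial_transpose(M, d=3):
    # transpose the second (B) subsystem
    return M.reshape(d, d, d, d).transpose(0, 3, 2, 1).reshape(d * d, d * d)

def realign(M, d=3):
    # R(M)_{(i,k),(j,l)} = M_{(i,j),(k,l)}
    return M.reshape(d, d, d, d).transpose(0, 2, 1, 3).reshape(d * d, d * d)

def negativity(M, d=3):
    # Eq. (N), with (.)^T the partial transpose
    return (trace_norm(partial_transpose(M, d)) - 1) / 2

def ccnr(M, d=3):
    # Eq. (CCNR): realigned correlation part minus the local purity term
    R4 = M.reshape(d, d, d, d)
    rho_A = np.trace(R4, axis1=1, axis2=3)
    rho_B = np.trace(R4, axis1=0, axis2=2)
    lhs = trace_norm(realign(M - np.kron(rho_A, rho_B), d))
    return lhs - np.sqrt((1 - np.trace(rho_A @ rho_A)) * (1 - np.trace(rho_B @ rho_B)))
```

For the bound entangled state one expects $N=0$, since the state is PPT by construction, while the CCNR quantity remains positive (cf. the discussion of figure 1 at $J=0$); for the maximally entangled two-qutrit state the same functions give $N=1$ and $CCNR=2$.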
\section{Unitary dynamics and Hamiltonian of different interactions} In this section, the interaction between the closed system and the auxiliary qutrit is presented. Further, the unitary dynamics of the system and the Hamiltonians of the different interactions are also discussed. Throughout the present study, it is assumed that the auxiliary qutrit $(C)$ interacts with one of the qutrits of the pair in the closed system through one of three interactions (Heisenberg, bi-linear bi-quadratic, and DM). Here the closed system is the bound entangled state proposed by Bennett \textit{et al.} and consists of two qutrits ($A$ and $B$). It is further assumed that the interaction takes place only in the Z-direction, between qutrit $A$ and the auxiliary qutrit $C$. The state vector of the additional auxiliary qutrit $(C)$ can be expressed as, \begin{equation} \vert C\rangle=\alpha\vert0\rangle+\beta\vert1\rangle+\gamma\vert2\rangle\label{eqt} \end{equation} with the normalization condition, \begin{equation} \vert\alpha\vert^{2}+\vert\beta\vert^{2}+\vert\gamma\vert^{2}=1.\label{nc} \end{equation} Assuming real probability amplitudes, so that $\rho_{C}=\vert C\rangle\langle C\vert$ is real and symmetric, the density matrix of qutrit $C$ reads as, \begin{equation} \rho_{C}=\left[ \begin{array}{ccc} \vert\alpha\vert^2&\alpha\beta&\alpha\gamma \\ \alpha\beta&\vert\beta\vert^2&\beta\gamma \\ \alpha\gamma&\beta\gamma&\vert\gamma\vert^2 \\ \end{array} \right ]. \ \ \ \label{edm'} \end{equation} Using Eq.\ref{nc}, the above density matrix can be written as, \begin{equation} \rho_{C}=\left[ \begin{array}{ccc} \vert\alpha\vert^2&\alpha\beta&\alpha\sqrt{1-\vert\alpha\vert^2-\vert\beta\vert^2} \\ \alpha\beta&\vert\beta\vert^2&\beta\sqrt{1-\vert\alpha\vert^2-\vert\beta\vert^2} \\ \alpha\sqrt{1-\vert\alpha\vert^2-\vert\beta\vert^2} &\beta\sqrt{1-\vert\alpha\vert^2-\vert\beta\vert^2} & 1-\vert\alpha\vert^2-\vert\beta\vert^2 \\ \end{array} \right ]. 
\ \ \ \label{edm} \end{equation} With the auxiliary qutrit attached, the initial density matrix of the open system is the product state, \begin{equation} \rho_{ABC}(0)=\rho_{AB} \otimes \rho_{C}. \label{sdm} \end{equation} Since the auxiliary qutrit $(C)$ is assumed to interact with qutrit $A$ of the bound entangled closed system, the Hamiltonian of the interacting system can be written as, \begin{equation} H=H_{AB}+H_{AC}^{int}. \label{H'} \end{equation} Here $H_{AB}$ is the Hamiltonian of qutrits $A$ and $B$, and $H_{AC}^{int}$ is the interaction Hamiltonian of qutrits $A$ and $C$. Since qutrit $A$ and qutrit $B$ are taken to be uncoupled, $H_{AB}$ is zero and the Hamiltonian of the open quantum system becomes, \begin{equation} H=H_{AC}^{int}. \label{H''} \end{equation}
According to the postulates of quantum mechanics, the unitary time evolution of a physical system is governed by the time-dependent Schr\"odinger equation, \begin{equation} i\hbar\frac{d}{dt}\vert\psi(t)\rangle=H\vert\psi(t)\rangle, \end{equation} whose solution is expressed as, \begin{equation} \vert\psi(t)\rangle=e^{\frac{-iHt}{\hbar}}\vert\psi(0)\rangle.\label{se} \end{equation} In terms of the density matrix, Eq.\ref{se} becomes, \begin{equation} \rho(t)=U(t) \cdot \rho(0) \cdot U(t)^{\dagger}\label{tm1}. \end{equation} Here $U(t)=e^{\frac{-iHt}{\hbar}}$ is the unitary matrix known as the `Time Evolution Operator'. To simplify the present study, $\hbar$ is set to 1, and using Eqs.\ref{sdm} and \ref{tm1} the time-evolved density matrix of the open system can be written as, \begin{equation} \rho_{ABC}(t)=U(t) \cdot \rho_{ABC}(0) \cdot U(t)^{\dagger}\label{tsdm}. \end{equation} This time-evolved density matrix is used to explain the dynamics of the open system with different interactions in the next section.
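The evolution of Eqs.\ref{tm1} and \ref{tsdm} can be sketched numerically. The snippet below (our own illustration; it assumes the diagonal Gell-Mann matrix $\lambda_3$ for the qutrit $\sigma^z$, a convention not spelled out in the text, and uses SciPy's matrix exponential in place of the analytic $U(t)$) propagates a product state of qutrit $A$ and the auxiliary qutrit $C$:

```python
import numpy as np
from scipy.linalg import expm

# Qutrit sigma^z, taken here as the diagonal Gell-Mann matrix lambda_3 (assumption)
sz = np.diag([1.0, -1.0, 0.0])

J = 0.5
H = -J * np.kron(sz, sz)  # Heisenberg ZZ coupling between A and C, cf. Eq. (h1)

def evolve(rho0, H, t, hbar=1.0):
    """rho(t) = U(t) rho(0) U(t)^dagger with U(t) = exp(-iHt/hbar), Eq. (tm1)."""
    U = expm(-1j * H * t / hbar)
    return U @ rho0 @ U.conj().T

# Product initial state: qutrit A in |0><0|, auxiliary qutrit C with real
# amplitudes alpha = beta = gamma = 1/sqrt(3), cf. Eqs. (eqt) and (sdm)
c = np.full(3, 1.0) / np.sqrt(3)
rho0 = np.kron(np.diag([1.0, 0.0, 0.0]), np.outer(c, c))
rho_t = evolve(rho0, H, t=2.0)
```

Unitarity guarantees that the trace, Hermiticity and purity of the density matrix are preserved along the evolution, which serves as a basic consistency check of the implementation.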
Heisenberg interaction is one of the most commonly considered interactions in the scientific community, and the entanglement dynamics of several quantum states have been explored under it. The Hamiltonian of the open system under this interaction is taken to be the one-dimensional XXX model without an external magnetic field and without boundary conditions. Under these assumptions, the Heisenberg Hamiltonian of a quantum system can be expressed as, \begin{equation} H_{1}=-J \cdot \sum\limits_{j=1}^N\sigma_j \cdot \sigma_{j+1}\label{h1'}. \end{equation} As the interaction is assumed to happen in the Z-direction between qutrit $A$ and the auxiliary qutrit $C$, the Heisenberg Hamiltonian of Eq.$\ref{h1'}$ reduces to, \begin{equation} H_{1}=-J \cdot (\sigma_A^z \otimes \sigma_C^z)\label{h1}. \end{equation} Here $J$ is the coupling constant and $\sigma_A^z$ and $\sigma_C^z$ are the Gell-Mann matrices of qutrit $A$ and qutrit $C$ respectively.
Bi-linear bi-quadratic interaction is the non-linear extension of the Heisenberg interaction. Under the same assumptions as for the Heisenberg Hamiltonian, the bi-linear bi-quadratic Hamiltonian of a quantum system can be written as, \begin{equation} H_{2}=-J \cdot \sum\limits_{j=1}^N\left[\left(\sigma_j \cdot \sigma_{j+1}\right)+\left(\sigma_j\cdot \sigma_{j+1}\right)^2\right]\label{h2'}. \end{equation} Since this interaction also occurs in the Z-direction between qutrit $A$ and the auxiliary qutrit $C$, the bi-linear bi-quadratic Hamiltonian of Eq.$\ref{h2'}$ can be rewritten as, \begin{equation} H_{2}=-J \cdot \left[\left(\sigma_A^z\otimes \sigma_C^z\right)+\left(\sigma_A^z\otimes \sigma_C^z\right)^2\right]\label{h2}. \end{equation} Here $J$ is the coupling constant and $\sigma_A^z$ and $\sigma_C^z$ are the Gell-Mann matrices of qutrit $A$ and qutrit $C$, as in the Heisenberg Hamiltonian. Note that $\left(\sigma_A^z\otimes \sigma_C^z\right)^2$ is the non-linear term that extends the Heisenberg Hamiltonian.
DM interaction plays an important role in quantum computation and quantum information. This interaction has several useful properties and, in many cases, it enhances the entanglement in a physical system. As prolonged entanglement is necessary to execute quantum applications, the DM interaction is of particular interest to the quantum information community. The DM interaction Hamiltonian of a quantum system can be written as, \begin{equation} H_{3}=\vec{D} \cdot (\vec{\sigma_1} \times \vec{\sigma_2}). \label{h3'} \end{equation} Here $\vec{D}$ is a vector whose components represent the interaction strength along the corresponding directions, and $\vec{\sigma_1}$ and $\vec{\sigma_2}$ denote the Pauli vectors. With the interaction between qutrit $A$ and the auxiliary qutrit $C$ taken along the Z-direction, the DM interaction Hamiltonian of Eq.$\ref{h3'}$ can be expressed as, \begin{equation} H_{3}=D \cdot (\sigma_A^x \otimes \sigma_C^y - \sigma_A^y \otimes \sigma_C^x). \label{h3} \end{equation} Here $D$ is the interaction strength along the Z-direction and $\sigma_A^x$, $\sigma_A^y$ and $\sigma_C^x$, $\sigma_C^y$ are the Gell-Mann matrices of qutrit $A$ and qutrit $C$ respectively.
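For concreteness, the three interaction Hamiltonians of Eqs.\ref{h1}, \ref{h2} and \ref{h3} can be written down explicitly. The sketch below assumes the standard Gell-Mann matrices $\lambda_1$, $\lambda_2$, $\lambda_3$ for $\sigma^x$, $\sigma^y$, $\sigma^z$ (a convention the text does not spell out):

```python
import numpy as np

# Qutrit sigma^x, sigma^y, sigma^z, taken as the standard Gell-Mann
# matrices lambda_1, lambda_2, lambda_3 (an assumed convention)
sx = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
sy = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]], dtype=complex)
sz = np.diag([1.0, -1.0, 0.0]).astype(complex)

def H_heisenberg(J):
    # Eq. (h1): ZZ coupling between qutrit A and the auxiliary qutrit C
    return -J * np.kron(sz, sz)

def H_bilinear_biquadratic(J):
    # Eq. (h2): Heisenberg term plus its square (the non-linear extension)
    zz = np.kron(sz, sz)
    return -J * (zz + zz @ zz)

def H_dm(D):
    # Eq. (h3): DM interaction with strength D along the Z-direction
    return D * (np.kron(sx, sy) - np.kron(sy, sx))
```

All three operators are Hermitian, and the bi-linear bi-quadratic Hamiltonian differs from the Heisenberg one exactly by the quadratic term, which matches the structure described above.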
\section{Dynamics of open quantum system} In this section, the dynamics of the open quantum system are explored using the time-evolved density matrix of the system given in Eq.\ref{tsdm}. The analysis uses the negativity and the CCNR criterion for the detection and quantification of entanglement, as discussed previously. The open quantum system is formed through the interaction between qutrit $A$ of the closed system and the auxiliary qutrit $C$, as mentioned before. Three different interactions between the qutrits are considered, and the results are discussed in three consecutive cases in the following subsections.
\begin{figure*}
\caption{Plot of negativity $(N)$ and CCNR vs. time $(t/T)$ under the Heisenberg interaction for different values of the coupling constant $J$}
\label{f1}
\end{figure*}
\subsection*{Case 1: Dynamics under Heisenberg interaction} In this case, it is considered that qutrit $A$ of the closed system and the auxiliary qutrit $C$ interact through the Heisenberg interaction. The open quantum system dynamics of the bound entangled state are examined under the Heisenberg Hamiltonian for different values of the coupling constant $(J)$ in the range $0 \leq J \leq 1$, and the outcomes are shown in figure $\ref{f1}$. In the figure, the solid red line indicates the negativity (N) and the dotted blue line represents the CCNR criterion of the system; this convention is followed throughout the article.
The figure shows that in the initial condition, when there is no interaction in the system $(J=0)$, the negativity (N) of the state is zero but the CCNR criterion is positive. This implies that initially the state is bound entangled and no free entanglement exists. As the Heisenberg interaction is introduced into the system, the negativity of the state increases in an oscillatory pattern and free entanglement is produced in the state. The CCNR criterion follows the negativity and increases in the same pattern due to the interaction. It is noted that this oscillatory pattern does not exhibit exact sinusoidal behavior and the produced free entanglement frequently drops back to zero, indicating that the sustainability of the free entanglement is low. Further, the frequency of this oscillation depends on the value of the coupling constant $(J)$: as $J$ increases, the frequency of the oscillation increases, as can be seen in figure $\ref{f1}$.
\subsection*{Case 2: Dynamics under bi-linear bi-quadratic interaction}
\begin{figure*}
\caption{Plot of negativity $(N)$ and CCNR vs. time $(t/T)$ under the bi-linear bi-quadratic interaction for different values of the coupling constant $J$}
\label{f2}
\end{figure*}
The current case deals with the assumption that the interaction between qutrit $A$ and the auxiliary qutrit $C$ takes place through the bi-linear bi-quadratic Hamiltonian. The dynamical behavior of the open quantum system under this interaction is displayed in figure $\ref{f2}$ for selected values of the coupling constant $(J)$ in the range $0 \leq J \leq 1$. The figure shows that when there is no interaction in the system $(J=0)$, the state has zero negativity (N) and no free entanglement, but, as in the previous case, the positive CCNR criterion indicates that bound entanglement exists in the state. When the interaction is applied to the system, the behavior of case 1 is almost repeated: the negativity and the CCNR criterion increase in an oscillatory manner and free entanglement is generated in the state. In this case, however, the oscillation shows exact sinusoidal behavior and the free entanglement drops to zero more frequently than in the previous case, due to the presence of the non-linear term in the Hamiltonian. This shows that the sustainability of the generated free entanglement under the bi-linear bi-quadratic interaction is lower than under the Heisenberg interaction. The oscillation frequency follows the previous case and increases with the coupling constant $(J)$, as displayed in figure $\ref{f2}$.
\subsection*{Case 3: Dynamics under DM interaction}
\begin{figure*}
\caption{Plot of negativity $(N)$ and CCNR vs. time $(t/T)$ under the DM interaction for different values of the interaction strength $D$}
\label{f3}
\end{figure*}
This case is framed by considering the DM interaction between qutrit $A$ and the auxiliary qutrit $C$. The dynamics of the open system under DM interaction are examined for different values of the interaction strength $(D)$ in the range $0 \leq D \leq 1$, and the results are plotted in figure $\ref{f3}$. Analyzing the results, it is noticed that in the starting condition $(D=0)$ the system behaves as in the previous two cases, i.e. there is no free entanglement in the state, but bound entanglement exists. After introducing the DM interaction into the system, free entanglement rises in the state. A non-sinusoidal behavior of the entanglement is again observed, and the oscillation is more distorted than in case 1. However, in the current case the generated free entanglement is sustained for a long time in comparison to the other two interactions.
\section{Conclusion} In the current article, comparative open quantum system dynamics of the $3 \times 3$ dimensional bipartite bound entangled state proposed by Bennett \textit{et al.} have been studied. The study has explored the dynamical analysis under the influence of three different interactions: Heisenberg, bi-linear bi-quadratic, and DM interaction. In the respective cases, the impact of each interaction has been discussed in detail. After analyzing all the cases, it has been found that the Heisenberg and bi-linear bi-quadratic interactions produce oscillatory free entanglement for the bound entangled state, as measured by the negativity, while the DM interaction performs better at sustaining the free entanglement over a long period, although with a more distorted, less oscillatory pattern. It is noted that the probability amplitudes of the auxiliary state have not played any role in the production of this free entanglement. In short, it can be stated that the chosen bound entangled state is activated by the introduction of all the interactions mentioned in the article. Regarding the quality of the activation, however, one has to be specific, since the selected bound entangled state proposed by Bennett \textit{et al.} is quite robust against the Heisenberg and bi-linear bi-quadratic interactions. The study can be continued on the mentioned bound entangled state to explore the influence of other interactions.
\end{document}
\begin{document}
\title[CKN, remainders and superweights for $L^{p}$-weighted Hardy inequalities]
{Extended Caffarelli-Kohn-Nirenberg inequalities, and remainders, stability, and superweights for $L^{p}$-weighted Hardy inequalities}
\author[M. Ruzhansky]{Michael Ruzhansky} \address{
Michael Ruzhansky:
\endgraf
Department of Mathematics
\endgraf
Imperial College London
\endgraf
180 Queen's Gate, London SW7 2AZ
\endgraf
United Kingdom
\endgraf
{\it E-mail address} {\rm m.ruzhansky@imperial.ac.uk}
} \author[D. Suragan]{Durvudkhan Suragan} \address{
Durvudkhan Suragan:
\endgraf
Institute of Mathematics and Mathematical Modelling
\endgraf
125 Pushkin str.
\endgraf
050010 Almaty
\endgraf
Kazakhstan
\endgraf
{\it E-mail address} {\rm suragan@math.kz}
} \author[N. Yessirkegenov]{Nurgissa Yessirkegenov} \address{
Nurgissa Yessirkegenov:
\endgraf
Institute of Mathematics and Mathematical Modelling
\endgraf
125 Pushkin str.
\endgraf
050010 Almaty
\endgraf
Kazakhstan
\endgraf
and
\endgraf
Department of Mathematics
\endgraf
Imperial College London
\endgraf
180 Queen's Gate, London SW7 2AZ
\endgraf
United Kingdom
\endgraf
{\it E-mail address} {\rm n.yessirkegenov15@imperial.ac.uk}
}
\thanks{The authors were supported in parts by the EPSRC
grant EP/K039407/1 and by the Leverhulme Grant RPG-2014-02,
as well as by the MESRK grant 5127/GF4. No new data was collected or generated during the course of research.}
\keywords{Hardy inequality, weighted Hardy inequality, Caffarelli-Kohn-Nirenberg inequality, remainder term, homogeneous Lie group.}
\subjclass[2010]{22E30, 43A80}
\begin{abstract}
In this paper we give an extension of the classical Caffarelli-Kohn-Nirenberg inequalities: we show that
for $1<p,q<\infty$, $0<r<\infty$ with $p+q\geq r$, $\delta\in[0,1]\cap\left[\frac{r-q}{r},\frac{p}{r}\right]$ with $\frac{\delta r}{p}+\frac{(1-\delta)r}{q}=1$ and $a$, $b$, $c\in\mathbb{R}$ with $c=\delta(a-1)+b(1-\delta)$, and for all functions $f\in C_{0}^{\infty}(\mathbb R^{n}\backslash\{0\})$ we have
$$
\||x|^{c}f\|_{L^{r}(\mathbb R^{n})}
\leq \left|\frac{p}{n-p(1-a)}\right|^{\delta} \left\||x|^{a}\nabla f\right\|^{\delta}_{L^{p}(\mathbb R^{n})}
\left\||x|^{b}f\right\|^{1-\delta}_{L^{q}(\mathbb R^{n})} $$
for $n\neq p(1-a)$, where the constant $\left|\frac{p}{n-p(1-a)}\right|^{\delta}$ is sharp for $p=q$ with $a-b=1$ or $p\neq q$ with $p(1-a)+bq\neq0$.
In the critical case $n=p(1-a)$ we have
$$
\left\||x|^{c}f\right\|_{L^{r}(\mathbb R^{n})}
\leq p^{\delta} \left\||x|^{a}\log|x|\nabla f\right\|^{\delta}_{L^{p}(\mathbb R^{n})}
\left\||x|^{b}f\right\|^{1-\delta}_{L^{q}(\mathbb R^{n})}. $$
Moreover, we also obtain anisotropic versions of these inequalities which can be conveniently formulated in the language of Folland and Stein's homogeneous groups. Consequently, we obtain remainder estimates for $L^{p}$-weighted Hardy inequalities on homogeneous groups, which are also new in the Euclidean setting of $\mathbb R^{n}$. The critical Hardy inequalities of logarithmic type and uncertainty type principles on homogeneous groups are obtained. Moreover, we investigate another improved version of $L^{p}$-weighted Hardy inequalities involving a distance and stability estimates. The relation between the critical and the subcritical Hardy inequalities on homogeneous groups is also investigated. We also establish sharp Hardy type inequalities in $L^{p}$, $1<p<\infty$, with superweights, i.e. with the weights of the form $\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m}}$ allowing for different choices of $\alpha$ and $\beta$. There are two reasons why we call the appearing weights the superweights: the arbitrariness of the choice of any homogeneous quasi-norm and a wide range of parameters.
\end{abstract}
\maketitle
\tableofcontents
\section{Introduction} \label{SEC:intro}
The aim of this paper is to give an extension of the classical Caffarelli-Kohn-Nirenberg (CKN) inequalities \cite{CKN84} with respect to ranges of parameters and to investigate the remainders and stability of the weighted $L^p$-Hardy inequalities. Moreover, our methods also provide sharp constants for the CKN inequality for known ranges of parameters as well as give an improvement by replacing the full gradient by the radial derivative. We also obtain the critical case of the CKN inequality with logarithmic terms, and investigate the remainders and other properties in the case when CKN inequalities reduce to the weighted Hardy inequalities. For the latter, we also establish $L^p$ weighted Hardy inequalities with more general weights of the form $\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m}}$, allowing for different choices of $m$, $\alpha$ and $\beta$.
\subsection{Extended Caffarelli-Kohn-Nirenberg inequalities}
Let us recall the classical Caffarelli-Kohn-Nirenberg inequality \cite{CKN84}:
\begin{thm}\label{clas_CKN} Let $n\in\mathbb{N}$ and let $p$, $q$, $r$, $a$, $b$, $d$, $\delta\in \mathbb{R}$ such that $p,q\geq1$, $r>0$, $0\leq\delta\leq1$, and \begin{equation}\label{clas_CKN0} \frac{1}{p}+\frac{a}{n},\, \frac{1}{q}+\frac{b}{n},\, \frac{1}{r}+\frac{c}{n}>0 \end{equation} where $c=\delta d + (1-\delta) b$. Then there exists a positive constant $C$ such that \begin{equation}\label{clas_CKN1}
\||x|^{c}f\|_{L^{r}(\mathbb R^{n})}\leq C \||x|^{a}|\nabla f|\|^{\delta}_{L^{p}(\mathbb R^{n})} \||x|^{b}f\|^{1-\delta}_{L^{q}(\mathbb R^{n})} \end{equation} holds for all $f\in C_{0}^{\infty}(\mathbb R^{n})$, if and only if the following conditions hold: \begin{equation}\label{clas_CKN2} \frac{1}{r}+\frac{c}{n}=\delta \left(\frac{1}{p}+\frac{a-1}{n}\right)+(1-\delta)\left(\frac{1}{q}+\frac{b}{n}\right), \end{equation} \begin{equation}\label{clas_CKN3} a-d\geq 0 \quad {\rm if} \quad \delta>0, \end{equation} \begin{equation}\label{clas_CKN4} a-d\leq 1 \quad {\rm if} \quad \delta>0 \quad {\rm and} \quad \frac{1}{r}+\frac{c}{n}=\frac{1}{p}+\frac{a-1}{n}. \end{equation} \end{thm}
The first aim of this paper is to extend the CKN-inequalities for general functions with respect to widening the range of indices \eqref{clas_CKN0}. Moreover, another improvement will be achieved by replacing the full gradient $\nabla f$ in \eqref{clas_CKN1} by the radial derivative $\mathcal{R}f=\frac{\partial f}{\partial r}$. It turns out that such improved versions can be established with sharp constants, and hold both in the isotropic and anisotropic settings.
To compare with Theorem \ref{clas_CKN} let us first formulate the isotropic version of our extension in the usual setting of $\mathbb R^{n}$.
\begin{thm}\label{clas_CKN-2} Let $n\in\mathbb{N}$, $1<p,q<\infty$, $0<r<\infty$, with $p+q\geq r$, $\delta\in[0,1]\cap\left[\frac{r-q}{r},\frac{p}{r}\right]$ and $a$, $b$, $c\in\mathbb{R}$. Assume that $\frac{\delta r}{p}+\frac{(1-\delta)r}{q}=1$, $c=\delta(a-1)+b(1-\delta)$. If $n\neq p(1-a)$, then for any function $f\in C_{0}^{\infty}(\mathbb{R}^{n}\backslash\{0\})$ we have \begin{equation}\label{EQ:aq}
\||x|^{c}f\|_{L^{r}(\mathbb R^{n})}
\leq \left|\frac{p}{n-p(1-a)}\right|^{\delta} \left\||x|^{a}\left(\frac{x}{|x|}\cdot\nabla f\right)\right\|^{\delta}_{L^{p}(\mathbb R^{n})}
\left\||x|^{b}f\right\|^{1-\delta}_{L^{q}(\mathbb R^{n})}. \end{equation}
In the critical case $n=p(1-a)$ for any function $f\in C_{0}^{\infty}(\mathbb{R}^{n}\backslash\{0\})$ we have \begin{equation}\label{CKN_rem3a}
\left\||x|^{c}f\right\|_{L^{r}(\mathbb R^{n})}
\leq p^{\delta} \left\||x|^{a}\log|x|\left(\frac{x}{|x|}\cdot\nabla f\right)\right\|^{\delta}_{L^{p}(\mathbb R^{n})}
\left\||x|^{b}f\right\|^{1-\delta}_{L^{q}(\mathbb R^{n})}, \end{equation}
for any homogeneous quasi-norm $|\cdot|$. If $|\cdot|$ is the Euclidean norm on $\mathbb R^{n}$, inequalities \eqref{EQ:aq} and \eqref{CKN_rem3a} imply, respectively, \begin{equation}\label{EQ:aqe}
\||x|^{c}f\|_{L^{r}(\mathbb R^{n})}
\leq \left|\frac{p}{n-p(1-a)}\right|^{\delta} \left\||x|^{a}\nabla f\right\|^{\delta}_{L^{p}(\mathbb R^{n})}
\left\||x|^{b}f\right\|^{1-\delta}_{L^{q}(\mathbb R^{n})} \end{equation} for $n\neq p(1-a)$, and \begin{equation}\label{CKN_rem3ae}
\left\||x|^{c}f\right\|_{L^{r}(\mathbb R^{n})}
\leq p^{\delta} \left\||x|^{a}\log|x|\nabla f\right\|^{\delta}_{L^{p}(\mathbb R^{n})}
\left\||x|^{b}f\right\|^{1-\delta}_{L^{q}(\mathbb R^{n})}, \end{equation}
for $n=p(1-a)$. The inequality \eqref{EQ:aq} holds for any homogeneous quasi-norm $|\cdot|$, and the constant
$\left|\frac{p}{n-p(1-a)}\right|^{\delta}$ is sharp for $p=q$ with $a-b=1$ or for $p\neq q$ with $p(1-a)+bq\neq0$. Furthermore, the constants $\left|\frac{p}{n-p(1-a)}\right|^{\delta}$ and $p^{\delta}$ are sharp for $\delta=0$ and $\delta=1$. \end{thm}
Note that if the conditions \eqref{clas_CKN0}
hold, then the inequality \eqref{EQ:aqe} is contained in the family of Caffarelli-Kohn-Nirenberg inequalities in Theorem \ref{clas_CKN}. However, already in this case, if we require $p=q$ with $a-b=1$ or $p\neq q$ with $p(1-a)+bq\neq0$, then \eqref{EQ:aqe} yields the inequality \eqref{clas_CKN1} with sharp constant. Moreover, the constants $\left|\frac{p}{n-p(1-a)}\right|^{\delta}$ and $p^{\delta}$ are sharp for $\delta=0$ or $\delta=1$. Our conditions $\frac{\delta r}{p}+\frac{(1-\delta)r}{q}=1$ and $c=\delta(a-1)+b(1-\delta)$ imply the condition \eqref{clas_CKN2} of Theorem \ref{clas_CKN}, as well as \eqref{clas_CKN3}-\eqref{clas_CKN4}, which are all necessary for having estimates of this type, at least under the conditions \eqref{clas_CKN0}.
If the conditions \eqref{clas_CKN0} are not satisfied, then the inequality \eqref{EQ:aqe} is not covered by Theorem \ref{clas_CKN}. So, this gives an extension of Theorem \ref{clas_CKN} with respect to the range of parameters. Let us give an example:
\begin{exa}\label{CKN_example} Let us take $1<p=q=r<\infty$, $a=-\frac{n-2p}{p}$, $b=-\frac{n}{p}$ and $c=-\frac{n-\delta p}{p}$. Then by \eqref{EQ:aqe}, for all $f\in C_{0}^{\infty}(\mathbb{R}^{n}\backslash\{0\})$ we have the inequality \begin{equation}\label{CKN_example1}
\left\|\frac{f}{|x|^{\frac{n-\delta p}{p}}}\right\|_{L^{p}(\mathbb R^{n})}\leq
\left\|\frac{\nabla f}{|x|^{\frac{n-2p}{p}}}\right\|^{\delta}_{L^{p}(\mathbb R^{n})}
\left\|\frac{f}{|x|^{\frac{n}{p}}}\right\|^{1-\delta}_{L^{p}(\mathbb R^{n})}, \quad 1<p<\infty, \;0\leq\delta\leq1, \end{equation} where $\nabla$ is the standard gradient in $\mathbb R^{n}$. Since we have $$\frac{1}{q}+\frac{b}{n}=\frac{1}{p}+\frac{1}{n}\left(-\frac{n}{p}\right)=0,$$
we see that \eqref{clas_CKN0} fails, so that the inequality \eqref{CKN_example1} is not covered by Theorem \ref{clas_CKN}. Moreover, in this case, $p=q$ with $a-b=1$ hold true, so that the constant $\left|\frac{p}{n-p(1-a)}\right|^{\delta}=1$ in the inequality \eqref{CKN_example1} is sharp. \end{exa}
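Inequality \eqref{EQ:aqe} can also be sanity-checked numerically. The sketch below is illustrative only (our own test, not part of the proofs): it takes the radial Gaussian $f(x)=e^{-|x|^2}$, which is not compactly supported away from the origin but makes all the radial integrals converge, and evaluates both sides for $n=3$, $p=q=r=2$, $\delta=\frac12$, $a=b=0$, so that $c=-\frac12$ and the constant is $\left|\frac{p}{n-p(1-a)}\right|^{\delta}=\sqrt{2}$:

```python
import numpy as np
from scipy.integrate import quad

# Parameters satisfying delta*r/p + (1-delta)*r/q = 1 and c = delta*(a-1) + b*(1-delta)
n, p, q, r, delta, a, b = 3, 2.0, 2.0, 2.0, 0.5, 0.0, 0.0
c = delta * (a - 1) + b * (1 - delta)              # = -1/2

f = lambda s: np.exp(-s ** 2)                      # f(x) = exp(-|x|^2), radial
df = lambda s: 2 * s * np.exp(-s ** 2)             # |nabla f| = |f'(s)| for radial f

sphere = 4 * np.pi                                 # surface measure of S^2 in R^3

# Weighted L^r, L^p, L^q norms as one-dimensional radial integrals
Lr = (sphere * quad(lambda s: s ** (c * r + n - 1) * f(s) ** r, 0, np.inf)[0]) ** (1 / r)
Lp = (sphere * quad(lambda s: s ** (a * p + n - 1) * df(s) ** p, 0, np.inf)[0]) ** (1 / p)
Lq = (sphere * quad(lambda s: s ** (b * q + n - 1) * f(s) ** q, 0, np.inf)[0]) ** (1 / q)

C = abs(p / (n - p * (1 - a))) ** delta            # sharp constant, sqrt(2) here
lhs = Lr
rhs = C * Lp ** delta * Lq ** (1 - delta)
```

Here the left-hand side equals $\pi^{1/2}$ exactly, comfortably below the right-hand side, consistent with the statement of the inequality for this admissible choice of parameters.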
Although these results are new already in the usual setting of $\mathbb R^{n}$, our techniques apply well also to anisotropic structures. Consequently, it is convenient to work in the setting of homogeneous groups developed by Folland and Stein \cite{FS-Hardy}, with an idea of emphasising general results of harmonic analysis depending only on the group and dilation structures. In particular, in this way we obtain results on the anisotropic $\mathbb R^{n}$, on the Heisenberg group, general stratified groups, graded groups, etc. In the special case of stratified groups (or homogeneous Carnot groups), other formulations using the horizontal gradient are possible, and we refer to \cite{Ruzhansky-Suragan:Layers} and especially to \cite{Ruzhansky-Suragan:JDE} for versions of such results and the discussion of the corresponding literature.
The improved versions of the Caffarelli-Kohn-Nirenberg inequality for radially symmetric functions with respect to the range of parameters were investigated in \cite{NDD12}. In \cite{ZhHD15} and \cite{HZh11}, weighted Hardy type inequalities were obtained for the generalised Baouendi-Grushin vector fields, which for $\gamma=0$ reduce to the standard gradient in $\mathbb R^{n}$. We also refer to \cite{HNZh11}, \cite{Han15} for weighted Hardy inequalities on the Heisenberg group, to \cite{HZhD11} and \cite{ZhHD14} on the H-type groups, and a recent paper \cite{Yacoub17} on Lie groups of polynomial growth, as well as to references therein.
In Section \ref{SEC:prelim} we very briefly recall the necessary notions and fix the notation in more detail. Assuming the notation there, Theorem \ref{clas_CKN-2} is the special case of the following theorem that we prove in this paper:
\begin{thm}\label{THM:CKN-i}
Let $\mathbb{G}$ be a homogeneous group of homogeneous dimension $Q$. Let $|\cdot|$ be an arbitrary homogeneous quasi-norm on $\mathbb{G}$. Let $1<p,q<\infty$, $0<r<\infty$ with $p+q\geq r$, $\delta\in[0,1]\cap\left[\frac{r-q}{r},\frac{p}{r}\right]$ and $a$, $b$, $c\in\mathbb{R}$. Assume that $\frac{\delta r}{p}+\frac{(1-\delta)r}{q}=1$ and $c=\delta(a-1)+b(1-\delta)$. Then for all $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$ we have the following Caffarelli-Kohn-Nirenberg type inequalities, with $\mathcal{R}:=\frac{d}{d|x|}$ being the radial derivative: If $Q\neq p(1-a)$, then $$
\||x|^{c}f\|_{L^{r}(\mathbb{G})}
\leq \left|\frac{p}{Q-p(1-a)}\right|^{\delta} \left\||x|^{a}\mathcal{R}f\right\|^{\delta}_{L^{p}(\mathbb{G})}
\left\||x|^{b}f\right\|^{1-\delta}_{L^{q}(\mathbb{G})}, $$
where the constant $\left|\frac{p}{Q-p(1-a)}\right|^{\delta}$ is sharp for $p=q$ with $a-b=1$ or $p\neq q$ with $p(1-a)+bq\neq0$. If $Q=p(1-a)$, then $$
\left\||x|^{c}f\right\|_{L^{r}(\mathbb{G})}
\leq p^{\delta} \left\||x|^{a}\log|x|\mathcal{R}f\right\|^{\delta}_{L^{p}(\mathbb{G})}
\left\||x|^{b}f\right\|^{1-\delta}_{L^{q}(\mathbb{G})}.
$$ Moreover, the constants $\left|\frac{p}{Q-p(1-a)}\right|^{\delta}$ and $p^{\delta}$ are sharp for $\delta=0$ and $\delta=1$. \end{thm}
\subsection{$L^{p}$-weighted Hardy inequalities}
Let us recall the following $L^{p}$-weighted Hardy inequality \begin{equation}\label{Lp_Hardy}
\int_{\mathbb R^{n}}\frac{|\nabla f(x)|^{p}}{|x|^{\alpha p}}dx\geq\left(\frac{n-p-\alpha p}{p}\right)^{p}\int_{\mathbb R^{n}}\frac{|f(x)|^{p}}{|x|^{(\alpha+1)p}}dx \end{equation} for every function $f\in C_{0}^{\infty}(\mathbb R^{n})$, where $-\infty<\alpha<\frac{n-p}{p}$ and $2\leq p<n$. The inequality \eqref{Lp_Hardy} is a special case of the Caffarelli-Kohn-Nirenberg inequalities \cite{CKN84}, recalled also in Theorem \ref{clas_CKN}. Since in this paper we are also interested in remainder estimates for the $L^{p}$-weighted Hardy inequality, let us introduce known results in this direction. Overall, the study of remainders in Hardy and other related inequalities is a classical topic going back to \cite{Brez1, Brez2, BV97}.
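As a quick numerical sanity check of \eqref{Lp_Hardy} (illustrative only, and not part of the analysis), one can take the radial Gaussian $f(x)=e^{-|x|^2}$ with the admissible parameters $n=3$, $p=2$, $\alpha=0$, for which the constant is $\left(\frac{n-p-\alpha p}{p}\right)^{p}=\frac{1}{4}$:

```python
import numpy as np
from scipy.integrate import quad

# Classical Hardy inequality in R^3: int |grad f|^2 dx >= (1/4) int f^2/|x|^2 dx
n, p, alpha = 3, 2, 0.0
const = ((n - p - alpha * p) / p) ** p             # = 1/4

f = lambda s: np.exp(-s ** 2)                      # radial test function
df = lambda s: 2 * s * np.exp(-s ** 2)             # |grad f| for radial f

sphere = 4 * np.pi                                 # surface measure of S^2
lhs = sphere * quad(lambda s: df(s) ** p * s ** (n - 1 - alpha * p), 0, np.inf)[0]
rhs = const * sphere * quad(lambda s: f(s) ** p * s ** (n - 1 - (alpha + 1) * p),
                            0, np.inf)[0]
```

Both sides evaluate in closed form, $\frac{3\pi^{3/2}}{2\sqrt{2}}$ on the left and $\frac{\pi}{2}\sqrt{\frac{\pi}{2}}$ on the right, so the inequality holds with room to spare for this test function.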
Ghoussoub and Moradifam \cite{GM08} proved that there exists no strictly positive function $V\in C^{1}(0,\infty)$ such that the inequality
$$\int_{\mathbb R^{n}}|\nabla f|^{2}dx\geq\left(\frac{n-2}{2}\right)^{2}\int_{\mathbb R^{n}}\frac{|f|^{2}}{|x|^{2}}dx+\int_{\mathbb R^{n}}V(|x|)|f|^{2}dx$$ holds for any $f\in W^{1,2}(\mathbb R^{n})$. Cianchi and Ferone \cite {CF08} showed that for all $1<p<n$ there exists a constant $C=C(p,n)$ such that
$$\int_{\mathbb R^{n}}|\nabla f|^{p}dx\geq\left(\frac{n-p}{p}\right)^{p}\int_{\mathbb R^{n}}\frac{|f|^{p}}{|x|^{p}}dx\,(1+Cd_{p}(f)^{2p^{*}})$$
holds for all real-valued weakly differentiable functions $f$ in $\mathbb R^{n}$, decaying to zero at infinity, such that $|\nabla f|\in L^{p}(\mathbb R^{n})$. Here
$$d_{p}(f)=\underset{c\in \mathbb{R}}{\rm inf}\frac{\|f-c|x|^{-\frac{n-p}{p}}\|_{L^{p^{*},\infty}(\mathbb R^{n})}}{\|f\|_{L^{p^{*},p}(\mathbb R^{n})}}$$ with $p^{*}=\frac{np}{n-p}$, and $L^{\tau, \sigma}(\mathbb R^{n})$ is the Lorentz space for $0<\tau\leq \infty$ and $1\leq\sigma\leq\infty$. In the case of a bounded domain $\Omega$, Wang and Willem \cite{WW03} for $p=2$ and Abdellaoui, Colorado and Peral \cite{ACP05} for $1<p<\infty$ investigated improved versions of \eqref{Lp_Hardy} (see also \cite{ST15a} and \cite{ST15b} for more details).
For discussions of the above inequalities in more general Lie group settings we refer to the recent papers \cite{Ruzhansky-Suragan:Layers}, \cite{Ruzhansky-Suragan:squares} and \cite{Ruzhansky-Suragan:JDE}, as well as to the references therein.
Sometimes improved versions of different inequalities, or remainder estimates, are referred to as the stability of the inequality when the estimates depend on certain distances: see, e.g., \cite{BJOS16} for the stability of trace theorems and \cite{CFW13} for the stability of Sobolev inequalities.
We also note that Sano and Takahashi obtained improved versions of \eqref{Lp_Hardy}, in \cite{ST15a} for $\Omega=\mathbb R^{n}$ and $\alpha=0$, and then in \cite{ST15b} for any $-\infty<\alpha<\frac{n-p}{p}$: Let $n\geq 3$, $2\leq p<n$ and $-\infty<\alpha<\frac{n-p}{p}$. Let $N\in \mathbb{N}$, $t\in (0,1)$, $\gamma<\min\{1-t, \frac{p-N}{p}\}$ and $\delta=N-n+\frac{N}{1-t-\gamma}\left(\gamma+\frac{n-p-\alpha p}{p}\right)$. Then there exists a constant $C>0$ such that the inequality
$$\int_{\mathbb R^{n}}\frac{|\nabla f|^{p}}{|x|^{\alpha p}}dx-\left(\frac{n-p-\alpha p}{p}\right)^{p}\int_{\mathbb R^{n}}\frac{|f|^{p}}{|x|^{p(\alpha+1)}}dx
\geq C\frac{\left(\int_{\mathbb R^{n}}|f|^{\frac{N}{1-t-\gamma}}|x|^{\delta}dx\right)^{\frac{p(1-t-\gamma)}{Nt}}}
{\left(\int_{\mathbb R^{n}}|f|^{p}|x|^{-\alpha p}dx\right)^{\frac{1-t}{t}}} $$ holds for any radially symmetric function $f\in W_{0,\alpha}^{1,p}(\mathbb{R}^{n})$, $f\neq 0$.
For the convenience of the reader we now briefly summarise the main results of this part of the paper, formulating them directly in the anisotropic cases following the notation recalled in Section \ref{SEC:prelim}. Thus, we show that for a homogeneous group $\mathbb{G}$ of homogeneous dimension $Q$ and any homogeneous quasi-norm $|\cdot|$ we have the following results: \begin{itemize} \item ({\bf Remainder estimates for the $L^{p}$-weighted Hardy inequality}) Let $2\leq p<Q$, $-\infty<\alpha<\frac{Q-p}{p}$ and $\delta_{1}=Q-p-\alpha p-\frac{Q+pb}{p}$, $\delta_{2}=Q-p-\alpha p-\frac{bp}{p-1}$ for any $b\in\mathbb{R}$. Then for all functions $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$ we have
$$\int_{\mathbb{G}}\frac{|\mathcal{R}f(x)|^{p}}{|x|^{\alpha p}}dx-\left(\frac{Q-p-\alpha p}{p}\right)^{p}\int_{\mathbb{G}}\frac{|f(x)|^{p}}{|x|^{(\alpha+1)p}}dx$$ $$
\geq C_{p} \frac{\left(\int_{\mathbb{G}}|f(x)|^{p}|x|^{\delta_{1}}dx\right)^{p}}
{\left(\int_{\mathbb{G}}|f(x)|^{p}|x|^{\delta_{2}}dx\right)^{p-1}}, $$ where $C_{p}=c_{p}
\left|\frac{Q(p-1)-pb}{p^{2}}\right|^{p}$, $\mathcal{R}:=\frac{d}{d|x|}$ is the radial derivative and $c_{p}=\underset{0<t\leq1/2}{\rm min}((1-t)^{p}-t^{p}+pt^{p-1})$. This family of estimates is new already in the standard setting of $\mathbb R^{n}$. \item ({\bf Stability of Hardy inequalities}) Let $2\leq p<Q$ and $-\infty<\alpha<\frac{Q-p}{p}$. Then for all radial functions $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$ we have the stability estimate
$$\int_{\mathbb{G}}\frac{|\mathcal{R}f(x)|^{p}}{|x|^{\alpha p}}dx-\left(\frac{Q-p-\alpha p}{p}\right)^{p}\int_{\mathbb{G}}\frac{|f(x)|^{p}}{|x|^{(\alpha+1)p}}dx$$ $$ \geq c_{p}\left(\frac{p-1}{p}\right)^{p}\sup_{R>0}d_{R}(f,c_{f}(R)f_{\alpha})^{p}, $$
where $c_{f}(R)=R^{\frac{Q-p-\alpha p}{p}}\widetilde{f}(R)$ with $f(x)=\widetilde{f}(r)$, $r=|x|$,
$\mathcal{R}:=\frac{d}{d|x|}$ is the radial derivative, $c_{p}$ is defined in Lemma \ref{FrS}, $f_{\alpha}$ and $d_{R}(\cdot,\cdot)$ are defined in \eqref{aremterm7} and \eqref{aremterm8}, respectively.
\item ({\bf Critical Hardy inequalities of logarithmic type}) Let $1<\gamma<\infty$ and let $\max\{1,\gamma-1\}<p<\infty$. Then for all $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$ and all $R>0$ we have
$$\frac{p}{\gamma-1}\left\|
\frac{\mathcal{R}f}{|x|^{\frac{Q-p}{p}}\left(\log\frac{R}{|x|}\right)^{\frac{\gamma-p}{p}}} \right\|_{L^{p}(\mathbb{G})} \geq
\left\|\frac{f-f_{R}}{|x|^{\frac{Q}{p}}\left(\log\frac{R}{|x|}\right)^{\frac{\gamma}{p}}}\right\|_{L^{p}(\mathbb{G})}, $$
where $f_{R}=f\left(R\frac{x}{|x|}\right)$, $\mathcal{R}:=\frac{d}{d|x|}$ is the radial derivative, and the constant $\frac{p}{\gamma-1}$ is optimal. In the abelian case, this result was obtained in \cite{MOW15}. In the case $\gamma=p$ this result on the homogeneous group was proved in \cite{Ruzhansky-Suragan:critical}. \item ({\bf Uncertainty inequalities}) Let $1<p<\infty$ and $q>1$ be such that $\frac{1}{p}+\frac{1}{q}=\frac{1}{2}$. Let $1<\gamma<\infty$ and $\max\{1,\gamma-1\}<p<\infty$. Then for any $R>0$ and $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$ we have the uncertainty inequalities $$
\left\|
\frac{\mathcal{R}f}{|x|^{\frac{Q-p}{p}}\left(\log\frac{R}{|x|}\right)^{\frac{\gamma-p}{p}}} \right\|_{L^{p}(\mathbb{G})}\|f\|_{L^{q}(\mathbb{G})}
\geq\frac{\gamma-1}{p}\left\|\frac{f(f-f_{R})}
{|x|^{\frac{Q}{p}}\left(\log\frac{R}{|x|}\right)^{\frac{\gamma}{p}}} \right\|_{L^{2}(\mathbb{G})}, $$
where $\mathcal{R}:=\frac{d}{d|x|}$ is the radial derivative (see \eqref{EQ:Euler}). Moreover, $$
\left\|
\frac{\mathcal{R}f}{|x|^{\frac{Q-p}{p}}\left(\log\frac{R}{|x|}\right)^{\frac{\gamma-p}{p}}} \right\|_{L^{p}(\mathbb{G})}\left\|\frac{f-f_{R}}
{|x|^{\frac{Q}{p'}}\left(\log\frac{R}{|x|}\right)^{2-\frac{\gamma}{p}}}
\right\|_{L^{p'}(\mathbb{G})}$$$$\geq\frac{\gamma-1}{p}\left\|\frac{f-f_{R}}
{|x|^{\frac{Q}{2}}\log\frac{R}{|x|}}\right\|^{2}_{L^{2}(\mathbb{G})} $$ holds for $\frac{1}{p}+\frac{1}{p'}=1$.
\item ({\bf Relation between critical and subcritical Hardy inequalities})
Let $Q\geq m+1$, $m\geq2$. Let $|\cdot|$ be a homogeneous quasi-norm. Then for any nonnegative radial function $g\in C_{0}^{1}(B^{m}(0,R)\backslash\{0\})$, there exists a nonnegative radial function $f\in C_{0}^{1}(B^{Q}(0,1)\backslash\{0\})$ such that
$$\int_{B^{Q}(0,1)}|\mathcal{R}f|^{m}dx-\left(\frac{Q-m}{m}\right)^{m}\int_{B^{Q}(0,1)}\frac{|f|^{m}}{|x|^{m}}dx$$ $$
=\frac{|\sigma|}
{|\widetilde{\sigma}|}\left(\frac{Q-m}{m-1}\right)^{m-1}$$
$$\times\left(\int_{B^{m}(0,R)}|\mathcal{R}g|^{m}dz-
\left(\frac{m-1}{m}\right)^{m}\int_{B^{m}(0,R)}\frac{|g|^{m}}{|z|^{m}\left(\log\frac{Re}{|z|}\right)^{m}}dz\right) $$
holds true, where $\mathcal{R}:=\frac{d}{d|x|}$ is the radial derivative, and $|\sigma|$ and $|\widetilde{\sigma}|$ are the $Q-1$ and $m-1$ dimensional surface measures of the corresponding unit spheres, respectively. \end{itemize}
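The critical Hardy inequality of logarithmic type listed above can also be checked numerically in the abelian case $\mathbb{G}=(\mathbb R^{3},+)$ for a radial test function. The sketch below (our own choice of parameters and test function, purely illustrative) takes $Q=3$, $p=\gamma=2$ and $R=1$, with $f(r)=\sin^{2}(\pi r)$ supported in the unit ball, so that $f_{R}=f(1)=0$; this profile does not vanish near the origin as $C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$ requires, but all integrals converge, which suffices for a numerical check:

```python
import numpy as np

trapz = np.trapezoid if hasattr(np, "trapezoid") else np.trapz

# Critical log-Hardy check in R^3 (Q=3), p = gamma = 2, R = 1, f_R = f(1) = 0.
# With gamma = p the log-weight in the gradient term disappears, and the
# claimed constant p/(gamma-1) equals 2.
rad = np.linspace(1e-8, 1 - 1e-8, 1_000_000)
f = np.sin(np.pi * rad)**2
df = np.pi * np.sin(2 * np.pi * rad)   # f'(r) = pi*sin(2*pi*r)
w = 4 * np.pi * rad**2                 # polar-decomposition factor in R^3

lhs = 2.0 * np.sqrt(trapz(df**2 / rad * w, rad))            # 2 ||Rf / |x|^{1/2}||_2
rhs = np.sqrt(trapz(f**2 / (rad**3 * np.log(1 / rad)**2) * w, rad))
assert lhs >= rhs > 0
```

For this profile the left-hand side is $2\pi^{3/2}\approx 11.14$, roughly twice the right-hand side.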
\subsection{$L^p$-Hardy inequalities with superweights}
The classical Hardy inequalities and their extensions, such as the Caffarelli-Kohn-Nirenberg inequalities, usually involve the weights of the form $\frac{1}{|x|^{m}}$. In this paper, we also consider the weights of the form $\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m}}$ allowing for different choices of $\alpha$ and $\beta$. If $\alpha=0$ or $\beta=0$, this reduces to traditional weights. So, we are interested in the case when $\alpha\beta\not=0$ and, in fact, we obtain two families of inequalities depending on whether $\alpha\beta>0$ or $\alpha\beta<0$. Moreover, $|\cdot|$ in these expressions can be an arbitrary homogeneous quasi-norm and the constants for the obtained inequalities are sharp. The freedom in choosing the parameters $\alpha,\beta, a,b,m$ and a quasi-norm led us to call these weights `superweights' in this context.
Again, the obtained estimates will include both the isotropic and anisotropic settings of $\mathbb R^{n}$, for which our range of obtained estimates appears also to be new. Namely, already in the Euclidean case of $\mathbb R^{n}$ with the Euclidean norm, they extend the inequalities that have been known for $p=2$ for some range of parameters from \cite{GM11} to the full range of $1<p<\infty$.
Thus, we can again work in the setting of homogeneous groups. To summarise, on a homogeneous group $\mathbb{G}$
with homogeneous dimension $Q$ for any homogeneous quasi-norm $|\cdot|$ on $\mathbb{G}$, all $a,b>0$ and $1<p<\infty$ we prove that \begin{itemize} \item If $\alpha \beta>0$ and $pm\leq Q-p$, then for all $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$, we have \begin{equation}\label{intro1} \frac{Q-pm-p}{p}
\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m+1}}f\right\|_{L^{p}(\mathbb{G})}
\leq\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m}}\mathcal{R}f\right\|_{L^{p}(\mathbb{G})} . \end{equation} If $Q\neq pm+p$, then the constant $\frac{Q-pm-p}{p}$ is sharp.
\item If $\alpha \beta<0$ and $pm-\alpha\beta\leq Q-p$, then for all $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$, we have \begin{equation} \label{intro2} \frac{Q-pm+\alpha\beta-p}{p}
\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m+1}}f\right\|_{L^{p}(\mathbb{G})}
\leq\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m}}\mathcal{R}f\right\|_{L^{p}(\mathbb{G})} . \end{equation} If $Q\neq pm+p-\alpha\beta$, then the constant $\frac{Q-pm+\alpha\beta-p}{p}$ is sharp. \end{itemize}
As noted before, the weights in the inequalities \eqref{intro1} and \eqref{intro2} are called superweights since the constants $\frac{Q-pm-p}{p}$ in \eqref{intro1} and $\frac{Q-pm+\alpha\beta-p}{p}$ in \eqref{intro2} are sharp for an arbitrary homogeneous quasi-norm $|\cdot|$ on $\mathbb{G}$ and a wide range of choices of the allowed parameters $\alpha, \beta, a, b$ and $m$. Directly from the inequalities \eqref{intro1} and \eqref{intro2}, choosing different $\alpha, \beta, a, b, m$ and $Q$, one can obtain a number of Hardy type inequalities which have various consequences and applications. For instance, in the Abelian (isotropic or anisotropic) case ${\mathbb G}=(\mathbb R^{n},+)$, we have
$Q=n$, so for any quasi-norm $|\cdot|$ on $\mathbb R^{n}$, all $a,b>0$ and $1<p<\infty$ these imply new inequalities. Thus, if $\alpha \beta>0$ and $pm\leq n-p$, then for all $f\in C_{0}^{\infty}(\mathbb R^{n}\backslash\{0\})$, we have \begin{equation}\label{intro3} \frac{n-pm-p}{p}
\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m+1}}f\right\|_{L^{p}(\mathbb R^{n})}
\leq\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m}}\frac{df}{d|x|}\right\|_{L^{p}(\mathbb R^{n})} \end{equation} with the constant being sharp for $n\neq pm+p$.
If $\alpha \beta<0$ and $pm-\alpha\beta\leq n-p$, then for all $f\in C_{0}^{\infty}(\mathbb R^{n}\backslash\{0\})$, we have \begin{equation} \label{intro4} \frac{n-pm+\alpha\beta-p}{p}
\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m+1}}f\right\|_{L^{p}(\mathbb R^{n})}
\leq\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m}}\frac{df}{d|x|} \right\|_{L^{p}(\mathbb R^{n})} \end{equation}
with the sharp constant for $n\neq pm+p-\alpha\beta$. In the case of the standard Euclidean distance $|x|=\sqrt{x^{2}_{1}+\ldots+x^{2}_{n}}$, by using the Schwarz inequality in the inequalities \eqref{intro3} and \eqref{intro4} we obtain that if $\alpha \beta>0$ and $pm\leq n-p$, then for all $f\in C_{0}^{\infty}(\mathbb R^{n}\backslash\{0\})$ \begin{equation}\label{intro5} \frac{n-pm-p}{p}
\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m+1}}f\right\|_{L^{p}(\mathbb R^{n})}
\leq\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m}}\nabla f\right\|_{L^{p}(\mathbb R^{n})} \end{equation} with the constant sharp for $n\neq pm+p$.
If $\alpha \beta<0$ and $pm-\alpha\beta\leq n-p$, then for all $f\in C_{0}^{\infty}(\mathbb R^{n}\backslash\{0\})$, we have \begin{equation} \label{intro6} \frac{n-pm+\alpha\beta-p}{p}
\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m+1}}f\right\|_{L^{p}(\mathbb R^{n})}
\leq\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m}}\nabla f\right\|_{L^{p}(\mathbb R^{n})} \end{equation} with the sharp constant for $n\neq pm+p-\alpha\beta$. The $L^{2}$-versions, that is, the cases $p=2$ of the inequalities \eqref{intro5} and \eqref{intro6}, were obtained in \cite{GM11}. We shall also note that these inequalities have interesting applications in the theory of ODEs (see \cite[Theorem 2.1]{GM11}).
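The superweight inequality \eqref{intro3} is easy to verify numerically for radial functions, where the gradient coincides with the radial derivative. The following sketch uses parameters chosen by us for illustration only: $n=5$, $p=2$, $m=1$, $a=b=1$, $\alpha=1$, $\beta=2$ (so $\alpha\beta>0$ and $pm=2\leq n-p=3$), with a Gaussian test function:

```python
import numpy as np
from math import gamma, pi

trapz = np.trapezoid if hasattr(np, "trapezoid") else np.trapz

# Superweight Hardy check in R^5 (Q = n = 5): p = 2, m = 1, a = b = 1,
# alpha = 1, beta = 2.  The constant is (n - pm - p)/p = 1/2 and the
# superweight is (a + b|x|^alpha)^{beta/p} = 1 + r.
n, p, m = 5, 2, 1
rad = np.linspace(1e-6, 12, 400_000)
f = np.exp(-rad**2)              # radial Gaussian test function (our choice)
df = -2 * rad * f                # radial derivative
surf = 2 * pi**(n / 2) / gamma(n / 2)   # surface area of the unit sphere in R^5
w = surf * rad**(n - 1)
sw = 1 + rad                      # superweight (1 + r)^{2/2}

lhs = 0.5 * np.sqrt(trapz((sw * f / rad**(m + 1))**2 * w, rad))
rhs = np.sqrt(trapz((sw * df / rad**m)**2 * w, rad))
assert rhs >= lhs > 0
```

The left-hand side comes out below 3 and the right-hand side above 7 for this Gaussian, consistent with \eqref{intro3}.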
In Section \ref{SEC:2} we state the main result of this part and give a short proof. Some higher order versions of the obtained inequalities are briefly discussed in Section \ref{Sec3}.
In Section \ref{SEC:prelim} we briefly recall the main concepts of homogeneous groups and fix the notation. In Section \ref{SEC:critHardy} we present critical Hardy inequalities of logarithmic type and uncertainty type principles on homogeneous groups. The remainder estimates for $L^{p}$-weighted Hardy inequalities on homogeneous groups are proved in Section \ref{SEC:rem_estimates}. Moreover, in Section \ref{stab} we also investigate another improved version of $L^{p}$-weighted Hardy inequalities involving a distance. In Section \ref{SEC:crit_subcrit_con} the relation between the critical and the subcritical Hardy inequalities on homogeneous groups is investigated. In Section \ref{SEC:CKN} we introduce Caffarelli-Kohn-Nirenberg type inequalities on homogeneous groups and prove their extended version.
\section{Preliminaries} \label{SEC:prelim}
In this section we very briefly recall the necessary notation concerning the setting of homogeneous groups, following Folland and Stein \cite{FS-Hardy} as well as the recent treatise \cite{FR}. We also recall a few other facts that will be used in the proofs. A connected simply connected Lie group $\mathbb G$ is called a {\em homogeneous group} if its Lie algebra $\mathfrak{g}$ is equipped with a family of dilations of the form $$D_{\lambda}={\rm Exp}(A \,{\rm ln}\lambda)=\sum_{k=0}^{\infty} \frac{1}{k!}({\rm ln}(\lambda) A)^{k},$$ where $A$ is a diagonalisable positive linear operator on $\mathfrak{g}$, and every $D_{\lambda}$ is a morphism of $\mathfrak{g}$, that is, $$[D_{\lambda}X, D_{\lambda}Y]=D_{\lambda}[X,Y] \quad \textrm{for all } X,Y\in \mathfrak{g} \textrm{ and } \lambda>0.$$ We recall that $Q := {\rm Tr}\,A$ is called the homogeneous dimension of $\mathbb G$.
A homogeneous group is a nilpotent Lie group, and the exponential mapping $\exp_{\mathbb G}:\mathfrak g\to\mathbb G$ of such a group is a global diffeomorphism. In particular, the dilations of $\mathfrak g$ induce a dilation structure on $\mathbb{G}$ itself, which we denote by $D_{\lambda}x$ or simply by $\lambda x$.
Then we have \begin{equation}
|D_{\lambda}(S)|=\lambda^{Q}|S| \quad {\rm and}\quad \int_{\mathbb{G}}f(\lambda x) dx=\lambda^{-Q}\int_{\mathbb{G}}f(x)dx. \end{equation}
Here $dx$ denotes the Haar measure on the homogeneous group $\mathbb{G}$ and $|S|$ denotes the volume of a measurable set $S\subset \mathbb{G}$. The Haar measure on a homogeneous group $\mathbb{G}$ is simply the Lebesgue measure of the underlying space $\mathbb R^{n}$ (see, for example, \cite[Proposition 1.6.6]{FR}).
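In the abelian case $\mathbb{G}=(\mathbb R^{n},+)$ with the standard dilations we have $Q=n$ and the Haar measure is the Lebesgue measure, so the scaling identity above reads $\int f(\lambda x)\,dx=\lambda^{-n}\int f(x)\,dx$. A minimal numerical check of this identity (for $n=1$ and a Gaussian, chosen here purely for illustration):

```python
import numpy as np

trapz = np.trapezoid if hasattr(np, "trapezoid") else np.trapz

# Scaling of the (Lebesgue = Haar) measure under dilation x -> lambda*x in R^1:
# int f(lambda*x) dx = lambda^{-1} int f(x) dx.
x = np.linspace(-10.0, 10.0, 100_001)
f = np.exp(-x**2)
lam = 2.5
lhs = trapz(np.exp(-(lam * x)**2), x)   # int f(lambda x) dx
rhs = lam**(-1.0) * trapz(f, x)          # lambda^{-Q} int f dx, Q = n = 1
assert abs(lhs - rhs) < 1e-6
```

Both sides equal $\sqrt{\pi}/\lambda$ for the Gaussian, up to quadrature error.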
Let $|\cdot|$ be a homogeneous quasi-norm on $\mathbb G$. Then the quasi-ball centred at $x\in\mathbb{G}$ with radius $R > 0$ is defined by
$$B(x,R):=\{y\in \mathbb{G}: |x^{-1}y|<R\}.$$ The following notation will also be used in this paper:
$$B^{c}(x,R):=\{y\in \mathbb{G}: |x^{-1}y|\geq R\}.$$ We refer to \cite{FS-Hardy} for the proof of the following important polar decomposition on homogeneous Lie groups, which can be also found in \cite[Section 3.1.7]{FR}: there is a (unique) positive Borel measure $\sigma$ on the unit quasi-sphere \begin{equation}\label{EQ:sphere}
\mathfrak S:=\{x\in \mathbb{G}:\,|x|=1\}, \end{equation} so that for every $f\in L^{1}(\mathbb{G})$ we have \begin{equation}\label{EQ:polar} \int_{\mathbb{G}}f(x)dx=\int_{0}^{\infty} \int_{\mathfrak S}f(ry)r^{Q-1}d\sigma(y)dr. \end{equation} Let us now fix a basis $\{X_{1},\ldots,X_{n}\}$ of a Lie algebra $\mathfrak{g}$ such that $$AX_{k}=\nu_{k}X_{k}$$ for every $k$, so that the matrix $A$ can be taken to be $A={\rm diag} (\nu_{1},\ldots,\nu_{n}).$ Then every $X_{k}$ is homogeneous of degree $\nu_{k}$ and $$ Q=\nu_{1}+\cdots+\nu_{n}. $$ The decomposition of ${\exp}_{\mathbb{G}}^{-1}(x)$ in $\mathfrak g$ defines the vector $$e(x)=(e_{1}(x),\ldots,e_{n}(x))$$ by the formula $${\exp}_{\mathbb{G}}^{-1}(x)=e(x)\cdot \nabla\equiv\sum_{j=1}^{n}e_{j}(x)X_{j},$$ where $\nabla=(X_{1},\ldots,X_{n})$. It implies the equality $$x={\exp}_{\mathbb{G}}\left(e_{1}(x)X_{1}+\ldots+e_{n}(x)X_{n}\right).$$ Taking into account the homogeneity and denoting $x=ry,\,y\in \mathfrak S,$ one has $$ e(x)=e(ry)=(r^{\nu_{1}}e_{1}(y),\ldots,r^{\nu_{n}}e_{n}(y)). $$ So we have \begin{equation*}
\frac{d}{d|x|}f(x)=\frac{d}{dr}f(ry)=
\frac{d}{dr}f({\exp}_{\mathbb{G}} \left(r^{\nu_{1}}e_{1}(y)X_{1}+\ldots +r^{\nu_{n}}e_{n}(y)X_{n}\right)). \end{equation*} We use the notation \begin{equation}\label{EQ:Euler} \mathcal{R} :=\frac{d}{dr}, \end{equation} that is, \begin{equation}\label{dfdr}
\frac{d}{d|x|}f(x)=\mathcal{R}f(x), \;\forall x\in \mathbb G, \end{equation}
for any homogeneous quasi-norm $|x|$ on $\mathbb G$. Let us recall the following lemma, which will be used in our proof. \begin{lem}[\cite{FrS08}] \label{FrS} Let $p\geq2$ and let $a$, $b$ be real numbers. Then there exists $c_{p}>0$ such that
$$|a-b|^{p}\geq|a|^{p}-p|a|^{p-2}ab+c_{p}|b|^{p}$$ holds, where $c_{p}=\underset{0<t\leq1/2}{\rm min}((1-t)^{p}-t^{p}+pt^{p-1})$ is sharp in this inequality. \end{lem} We will also use the following anisotropic Caffarelli-Kohn-Nirenberg type inequality (see \cite{ORS16} and \cite{Ruzhansky-Suragan:L2-CKN}):
\begin{thm}[\cite{ORS16}] \label{CKN}
Let $\mathbb{G}$ be a homogeneous group of homogeneous dimension $Q$. Let $|\cdot|$ be a homogeneous quasi-norm. Let $a,b\in\mathbb{R}$, and $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$. Then we have \begin{equation}\label{CKN1}
\left|\frac{Q-(a+b+1)}{p}\right|\int_{\mathbb{G}}\frac{|f|^{p}}{|x|^{a+b+1}}dx\leq\left(\int_{\mathbb{G}}
\frac{|\mathcal{R}f|^{p}}{|x|^{ap}}dx\right)^{\frac{1}{p}}\left(\int_{\mathbb{G}}
\frac{|f|^{p}}{|x|^{\frac{bp}{p-1}}}dx\right)^{\frac{p-1}{p}}, \end{equation}
where $\mathcal{R}$ is defined in \eqref{EQ:Euler}, $1<p<\infty$, and the constant $\left|\frac{Q-(a+b+1)}{p}\right|$ is sharp. \end{thm}
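The elementary inequality of Lemma \ref{FrS} is easy to probe numerically: the sharp constant $c_{p}$ can be computed by minimising over a fine grid in $t$, and the inequality tested on random real pairs $(a,b)$. A small sketch for $p=3$ (the exponent is our choice; the seed and ranges are arbitrary):

```python
import numpy as np

# Lemma FrS for p = 3:  |a-b|^p >= |a|^p - p|a|^{p-2} a b + c_p |b|^p,
# with c_p = min_{0 < t <= 1/2} ((1-t)^p - t^p + p t^{p-1}).
p = 3.0
t = np.linspace(1e-6, 0.5, 1_000_000)
c_p = np.min((1 - t)**p - t**p + p * t**(p - 1))   # ~0.586 for p = 3

rng = np.random.default_rng(0)
a = rng.uniform(-5, 5, 100_000)
b = rng.uniform(-5, 5, 100_000)
lhs = np.abs(a - b)**p
rhs = np.abs(a)**p - p * np.abs(a)**(p - 2) * a * b + c_p * np.abs(b)**p
assert np.all(lhs >= rhs - 1e-9)
```

For $p=2$ the same formula gives $c_{2}=1$ and the inequality becomes the identity $(a-b)^{2}=a^{2}-2ab+b^{2}$.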
\section{On remainder estimates of anisotropic $L^{p}$-weighted Hardy inequalities} \label{SEC:rem_estimates}
In this section we obtain a family of remainder estimates for the weighted $L^p$-Hardy inequalities, with the freedom of choosing a parameter $b\in\mathbb R$. The obtained remainder estimates are new already in the standard setting of $\mathbb R^n$.
\begin{thm}\label{aremterm}
Let $\mathbb{G}$ be a homogeneous group of homogeneous dimension $Q\geq3$. Let $|\cdot|$ be a homogeneous quasi-norm. Let $2\leq p<Q$, $-\infty<\alpha<\frac{Q-p}{p}$ and $\delta_{1}=Q-p-\alpha p-\frac{Q+pb}{p}$, $\delta_{2}=Q-p-\alpha p-\frac{bp}{p-1}$ for any $b\in\mathbb{R}$. Then for all functions $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$ we have
$$\int_{\mathbb{G}}\frac{|\mathcal{R}f(x)|^{p}}{|x|^{\alpha p}}dx-\left(\frac{Q-p-\alpha p}{p}\right)^{p}\int_{\mathbb{G}}\frac{|f(x)|^{p}}{|x|^{(\alpha+1)p}}dx$$ \begin{equation}\label{aremterm1}
\geq C_{p} \frac{\left(\int_{\mathbb{G}}|f(x)|^{p}|x|^{\delta_{1}}dx\right)^{p}}
{\left(\int_{\mathbb{G}}|f(x)|^{p}|x|^{\delta_{2}}dx\right)^{p-1}}, \end{equation} where $\mathcal{R}$ is defined in \eqref{EQ:Euler}, $C_{p}=c_{p}
\left|\frac{Q(p-1)-pb}{p^{2}}\right|^{p}$ and $c_{p}=\underset{0<t\leq1/2}{\rm min}((1-t)^{p}-t^{p}+pt^{p-1})$. \end{thm} \begin{rem}\label{aremterm_rem1} Since the inequality \eqref{aremterm1} holds for any $b\in\mathbb{R}$, choosing $b=\frac{Q(p-1)}{p}$ so that $C_{p}=0$, we obtain the $L^{p}$-weighted Hardy inequalities on homogeneous groups: \begin{multline}\label{aremterm13}
\int_{\mathbb{G}}\frac{|\mathcal{R}f(x)|^{p}}{|x|^{\alpha p}}dx\geq\left(\frac{Q-p-\alpha p}{p}\right)^{p}\int_{\mathbb{G}}\frac{|f(x)|^{p}}{|x|^{(\alpha+1)p}}dx, \\ -\infty<\alpha<\frac{Q-p}{p},\;\; 2\leq p<Q, \end{multline} for all functions $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$. In the abelian case $\mathbb{G}=(\mathbb R^{n},+)$ with $Q=n$, the inequality \eqref{aremterm13} gives the $L^{p}$-weighted Hardy inequalities for any quasi-norm on $\mathbb R^{n}$: For any function $f\in C_{0}^{\infty}(\mathbb R^{n}\backslash\{0\})$ we have
$$\int_{\mathbb R^{n}}\left|\frac{x}{|x|}\cdot\nabla f(x)\right|^{p}|x|^{-\alpha p}dx\geq\left(\frac{n-p-\alpha p}{p}\right)^{p}\int_{\mathbb R^{n}}\frac{|f(x)|^{p}}{|x|^{p(\alpha+1)}}dx,$$
where $-\infty<\alpha<\frac{n-p}{p}$ and $2\leq p<n$. By Schwarz's inequality with the standard Euclidean distance $|x|=\sqrt{x_{1}^{2}+x_{2}^{2}+\ldots+x_{n}^{2}}$, we obtain the Euclidean form of the $L^{p}$-weighted Hardy inequalities on $\mathbb R^{n}$: \begin{multline*}
\int_{\mathbb R^{n}}\frac{|\nabla f(x)|^{p}}{|x|^{\alpha p}}dx\geq\left(\frac{n-p-\alpha p}{p}\right)^{p}\int_{\mathbb R^{n}}\frac{|f(x)|^{p}}{|x|^{(\alpha+1)p}}dx, \\ -\infty<\alpha<\frac{n-p}{p},\;\; 2\leq p<n, \end{multline*} for any function $f\in C_{0}^{\infty}(\mathbb R^{n}\backslash\{0\})$, where $\nabla$ is the standard gradient in $\mathbb R^{n}$. \end{rem} \begin{rem}\label{aremterm_rem2} We also note that in the abelian case, \eqref{aremterm1} implies a new remainder estimate for any quasi-norm on $\mathbb R^{n}$: For any function $f\in C_{0}^{\infty}(\mathbb R^{n}\backslash\{0\})$ and for any $b\in\mathbb{R}$, we obtain
$$\int_{\mathbb R^{n}}\left|\frac{x}{|x|}\cdot\nabla f(x)\right|^{p}|x|^{-\alpha p}dx-\left(\frac{n-p-\alpha p}{p}\right)^{p}\int_{\mathbb R^{n}}\frac{|f(x)|^{p}}{|x|^{p(\alpha+1)}}dx$$ \begin{equation}\label{aremterm11}
\geq C_{p} \frac{\left(\int_{\mathbb R^{n}}|f(x)|^{p}|x|^{\delta_{1}}dx\right)^{p}}
{\left(\int_{\mathbb R^{n}}|f(x)|^{p}|x|^{\delta_{2}}dx\right)^{p-1}},\quad 2\leq p<n,\;-\infty<\alpha<\frac{n-p}{p}. \end{equation} As in Remark \ref{aremterm_rem1}, by Schwarz's inequality with the standard Euclidean distance, we obtain the Euclidean version of the remainder estimate for $L^{p}$-weighted Hardy inequalities:
$$\int_{\mathbb R^{n}}\frac{\left|\nabla f(x)\right|^{p}}{|x|^{\alpha p}}dx-\left(\frac{n-p-\alpha p}{p}\right)^{p}\int_{\mathbb R^{n}}\frac{|f(x)|^{p}}{|x|^{(\alpha+1)p}}dx$$ \begin{equation}\label{aremterm12}
\geq C_{p} \frac{\left(\int_{\mathbb R^{n}}|f(x)|^{p}|x|^{\delta_{1}}dx\right)^{p}}
{\left(\int_{\mathbb R^{n}}|f(x)|^{p}|x|^{\delta_{2}}dx\right)^{p-1}},\quad 2\leq p<n,\;-\infty<\alpha<\frac{n-p}{p}, \end{equation} for every function $f\in C_{0}^{\infty}(\mathbb R^{n}\backslash\{0\})$ and for any $b\in\mathbb{R}$, where $\nabla$ is the standard gradient in $\mathbb R^{n}$.
Thus, we note that the remainder estimate \eqref{aremterm12} is new already in the standard setting of $\mathbb R^{n}$. \end{rem}
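The Euclidean remainder estimate \eqref{aremterm12} can be illustrated numerically for a radial function. The sketch below fixes $n=3$, $p=2$, $\alpha=0$ and $b=0$ (our choices), for which $c_{2}=1$, $C_{2}=(3/4)^{2}$, $\delta_{1}=-1/2$ and $\delta_{2}=1$, and uses a Gaussian test function:

```python
import numpy as np

trapz = np.trapezoid if hasattr(np, "trapezoid") else np.trapz

# Remainder estimate in R^3: n = 3, p = 2, alpha = 0, b = 0, so that
# C_2 = c_2 |n(p-1) - pb|^p / p^{2p} = (3/4)^2, delta_1 = -1/2, delta_2 = 1.
n, p = 3, 2
rad = np.linspace(1e-6, 12, 400_000)
f = np.exp(-rad**2)          # radial Gaussian test function (our choice)
df = -2 * rad * f
w = 4 * np.pi * rad**2       # polar factor in R^3

hardy_gap = trapz(df**2 * w, rad) \
    - ((n - p) / p)**2 * trapz(f**2 / rad**2 * w, rad)
C2 = (3 / 4)**2
remainder = C2 * trapz(f**2 * rad**(-0.5) * w, rad)**2 \
    / trapz(f**2 * rad * w, rad)
assert hardy_gap >= remainder > 0
```

For this Gaussian the Hardy gap is roughly $3.9$ while the remainder term is roughly $2.1$, so the estimate holds with a visible margin.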
\begin{proof}[Proof of Theorem \ref{aremterm}] First let us show the statement of Theorem \ref{aremterm} for a radial function $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$. Since $f$ is radial, $f$ can be represented as $f(x)=\widetilde{f}(|x|)$. Following an idea of Brezis and V\'{a}zquez \cite{BV97}, we define \begin{equation}\label{aremterm2} \widetilde{g}(r)=r^{\frac{Q-p-\alpha p}{p}}\widetilde{f}(r). \end{equation}
Since $\widetilde{f}=\widetilde{f}(r)\in C_{0}^{\infty}(0,\infty)$ and $\alpha<\frac{Q-p}{p}$, we obtain $\widetilde{g}(0)=0$ and $\widetilde{g}(+\infty)=0$. We set $g(x)=\widetilde{g}(|x|)$ for $x\in \mathbb{G}$. Introducing polar coordinates $(r,y)=(|x|, \frac{x}{|x|})\in (0,\infty)\times\mathfrak S$ on $\mathbb{G}$ and using \eqref{EQ:polar}, we have
$$J:=\int_{\mathbb{G}}|\mathcal{R}f|^{p}|x|^{-\alpha p}dx-\left(\frac{Q-p-\alpha p}{p}\right)^{p}\int_{\mathbb{G}}\frac{|f|^{p}}{|x|^{p(\alpha+1)}}dx$$
$$=|\sigma|\int_{0}^{\infty}\left|\frac{d}{dr}\widetilde{f}(r)\right|^{p}r^{-\alpha p+Q-1}dr-|\sigma|
\left(\frac{Q-p-\alpha p}{p}\right)^{p}\int_{0}^{\infty}|\widetilde{f}(r)|^{p}r^{-p(\alpha+1)+Q-1}dr$$
$$=|\sigma|\int_{0}^{\infty}\left|\left(\frac{Q-p-\alpha p}{p}\right)r^{-\frac{Q-\alpha p}{p}}\widetilde{g}(r)
-r^{-\frac{Q-p-\alpha p}{p}}\frac{d}{dr}\widetilde{g}(r)\right|^{p}r^{Q-1-\alpha p}dr$$
$$-|\sigma|\left(\frac{Q-p-\alpha p}{p}\right)^{p}\int_{0}^{\infty}|\widetilde{g}(r)|^{p}r^{-1}dr,$$
where $|\sigma|$ is the $Q-1$ dimensional surface measure of the unit quasi-sphere. Here applying Lemma \ref{FrS} to the integrand of the first term in the last expression above, we get
$$\left|\left(\frac{Q-p-\alpha p}{p}\right)r^{-\frac{Q-\alpha p}{p}}\widetilde{g}(r)-r^{-\frac{Q-p-\alpha p}{p}}\frac{d}{dr}\widetilde{g}(r)\right|^{p}r^{Q-1-\alpha p}$$
$$\geq\left(\left(\frac{Q-p-\alpha p}{p}\right)^{p}r^{-Q+\alpha p}|\widetilde{g}(r)|^{p}\right)r^{Q-1-\alpha p}$$
$$-p\left(\frac{Q-p-\alpha p}{p}\right)^{p-1}|\widetilde{g}(r)|^{p-2}\widetilde{g}(r)\frac{d}{dr}\widetilde{g}(r) r^{-(\frac{Q-\alpha p}{p})(p-1)}r^{-(\frac{Q-p-\alpha p}{p})}r^{Q-1-\alpha p}$$
$$+c_{p}\left|\frac{d}{dr}\widetilde{g}(r)\right|^{p}r^{-Q+p+\alpha p}r^{Q-1-\alpha p}$$
$$=\left(\frac{Q-p-\alpha p}{p}\right)^{p}r^{-1}|\widetilde{g}(r)|^{p}-p\left(\frac{Q-p-\alpha p}{p}\right)^{p-1}|\widetilde{g}(r)|^{p-2}\widetilde{g}(r)\frac{d}{dr}\widetilde{g}(r)$$
$$+c_{p}\left|\frac{d}{dr}\widetilde{g}(r)\right|^{p}r^{p-1}.$$ Since $\widetilde{g}(0)=\widetilde{g}(+\infty)=0$ and $p\geq2$, we note that
$$p\int_{0}^{\infty}|\widetilde{g}(r)|^{p-2}\widetilde{g}(r)\frac{d}{dr}\widetilde{g}(r)dr=\int_{0}^{\infty}
\frac{d}{dr}(|\widetilde{g}(r)|^{p})dr=0.$$ This gives a \enquote{ground state representation} (\cite{FrS08}) of the Hardy difference $J$: \begin{equation}\label{aremterm3}
J\geq c_{p} |\sigma|\int_{0}^{\infty}\left|\frac{d}{dr}\widetilde{g}(r)\right|^{p}r^{p-1}dr=
c_{p}\int_{\mathbb{G}}|\mathcal{R}g(x)|^{p}|x|^{p-Q}dx. \end{equation} Putting $a=\frac{Q-p}{p}$ in \eqref{CKN1}, we obtain for any $b\in\mathbb{R}$, that \begin{multline}\label{CKN2}
\left|\frac{Q(p-1)-pb}{p^{2}}\right|\int_{\mathbb{G}}|g|^{p}
|x|^{-\frac{Q+pb}{p}}dx \\ \leq\left(\int_{\mathbb{G}}
|\mathcal{R}g|^{p}|x|^{p-Q}dx\right)^{\frac{1}{p}}\left(\int_{\mathbb{G}}
|g|^{p}|x|^{-\frac{bp}{p-1}}dx\right)^{\frac{p-1}{p}}. \end{multline} It gives that
\begin{equation}\label{aremterm4}J\geq c_{p}\int_{\mathbb{G}}|\mathcal{R}g(x)|^{p}|x|^{p-Q}dx\geq c_{p}
\left|\frac{Q(p-1)-pb}{p^{2}}\right|^{p}\frac{\left(\int_{\mathbb{G}}|g|^{p}|x|^{-\frac{Q+pb}{p}} dx\right)^{p}}{\left(\int_{\mathbb{G}}
|g|^{p}|x|^{-\frac{bp}{p-1}}dx\right)^{p-1}}.\end{equation}
Taking into account that $g(x)=\widetilde{g}(|x|)$, $x\in\mathbb{G}$, and \eqref{aremterm2}, one calculates
$$\int_{\mathbb{G}}|x|^{-\frac{Q+pb}{p}}|g(x)|^{p}dx=|\sigma|\int_{0}^{\infty}r^{Q-p-\alpha p}|\widetilde{f}(r)|^{p}r^{-\frac{Q+pb}{p}}r^{Q-1}dr $$
$$=\int_{\mathbb{G}}|f(x)|^{p}|x|^{Q-p-\alpha p-\frac{Q+pb}{p}}dx=
\int_{\mathbb{G}}|f(x)|^{p}|x|^{\delta_{1}}dx.$$ On the other hand,
$$\int_{\mathbb{G}}|x|^{-\frac{bp}{p-1}}|g(x)|^{p}dx=|\sigma|\int_{0}^{\infty}r^{Q-p-\alpha p}|\widetilde{f}(r)|^{p}r^{-\frac{bp}{p-1}}r^{Q-1}dr$$
$$=\int_{\mathbb{G}}|f(x)|^{p}|x|^{Q-p-\alpha p-\frac{bp}{p-1}}dx=\int_{\mathbb{G}}|f(x)|^{p}|x|^{\delta_{2}}dx.$$ Putting these into \eqref{aremterm4}, we obtain \begin{equation}\label{aremterm4_01}J\geq c_{p}
\left|\frac{Q(p-1)-pb}{p^{2}}\right|^{p}
\frac{\left(\int_{\mathbb{G}}|f(x)|^{p}|x|^{\delta_{1}}dx\right)^{p}}
{\left(\int_{\mathbb{G}}|f(x)|^{p}|x|^{\delta_{2}}dx\right)^{p-1}}. \end{equation} Now let us prove the statement for non-radial functions. For a non-radial function $f$ we consider the radial function
\begin{equation}\label{aremterm4_1}U(r)=\left(\frac{1}{|\sigma|}\int_{\mathfrak S}|f(ry)|^{p}d\sigma(y)\right)^{\frac{1}{p}}.\end{equation} Using the H\"{o}lder inequality, we calculate
$$\frac{d}{dr}U(r)=\frac{1}{p}\left(\frac{1}{|\sigma|}\int_{\mathfrak S}|f(ry)|^{p}d\sigma(y)\right)^{\frac{1}{p}-1}
\frac{1}{|\sigma|}\int_{\mathfrak S}p|f(ry)|^{p-2}f(ry)\overline{\frac{d}{dr}f(ry)}d\sigma(y)$$
$$\leq\left(\frac{1}{|\sigma|}\int_{\mathfrak S}|f(ry)|^{p}d\sigma(y)\right)^{\frac{1}{p}-1}
\frac{1}{|\sigma|}\int_{\mathfrak S}|f(ry)|^{p-1}\left|\frac{d}{dr}f(ry)\right|d\sigma(y)$$
$$\leq \left(\frac{1}{|\sigma|}\int_{\mathfrak S}|f(ry)|^{p}d\sigma(y)\right)^{\frac{1}{p}-1}
\frac{1}{|\sigma|}\left(\int_{\mathfrak S}\left|\frac{d}{dr}f(ry)\right|^{p}d\sigma(y)\right)^{\frac{1}{p}}
\left(\int_{\mathfrak S}|f(ry)|^{p}d\sigma(y)\right)^{\frac{p-1}{p}}$$
$$=\left(\frac{1}{|\sigma|}\int_{\mathfrak S}\left|\frac{d}{dr}f(ry)\right|^{p}d\sigma(y)\right)^{\frac{1}{p}}.$$ Thus, we have $$
\frac{d}{dr}U(r)\leq\left(\frac{1}{|\sigma|}\int_{\mathfrak S}\left|\frac{d}{dr}f(ry)\right|^{p}d\sigma(y)\right)^{\frac{1}{p}}. $$ It follows that
$$|\sigma|\int_{0}^{\infty}\left|\frac{d}{dr}U(r)\right|^{p}r^{Q-1-\alpha p}dr\leq
|\sigma|\int_{0}^{\infty}\frac{1}{|\sigma|}\int_{\mathfrak S}\left|\frac{d}{dr}f(ry)\right|^{p}r^{Q-1-\alpha p}d\sigma(y)dr$$
$$=\int_{\mathbb{G}}\left|\mathcal{R}f\right|^{p}|x|^{-\alpha p}dx,$$ that is, \begin{equation}\label{aremterm5}
\int_{\mathbb{G}}\left|\mathcal{R}U\right|^{p}|x|^{-\alpha p}dx\leq\int_{\mathbb{G}}\left|\mathcal{R}f\right|^{p}|x|^{-\alpha p}dx. \end{equation} In view of \eqref{aremterm4_1}, we obtain
$$\int_{\mathbb{G}}|U(|x|)|^{p}|x|^{\theta}dx=|\sigma|\int_{0}^{\infty}|U(r)|^{p}r^{\theta+Q-1}dr$$
\begin{equation}\label{aremterm6}=|\sigma|\int_{0}^{\infty}\frac{1}{|\sigma|}\int_{\mathfrak S}|f(ry)|^{p}d\sigma(y)r^{\theta+Q-1}dr=
\int_{\mathbb{G}}|f(x)|^{p}|x|^{\theta}dx \end{equation} for any $\theta\in\mathbb{R}$. Then, it is easy to see that \eqref{aremterm5} and \eqref{aremterm6} imply that \eqref{aremterm1} holds also for all non-radial functions. \end{proof}
\section{Stability of anisotropic $L^{p}$-weighted Hardy inequalities} \label{stab} In this section we establish a remainder estimate in the $L^{p}$-weighted Hardy inequality involving the distance to the set of extremisers: estimates of such type are known as stability estimates in the literature. Let us denote \begin{equation}\label{aremterm7}
f_{\alpha}(x)=|x|^{-\frac{Q-p-\alpha p}{p}} \end{equation} for $-\infty<\alpha<\frac{Q-p}{p}$, and we set \begin{equation}\label{aremterm8}
d_{R}(f,g):=\left(\int_{\mathbb{G}}\frac{|f(x)-g(x)|^{p}}{\left|\log\frac{R}{|x|}\right|^{p}|x|^{(\alpha+1)p}}dx\right)^{\frac{1}{p}} \end{equation} for functions $f$ and $g$ for which the integral in \eqref{aremterm8} is finite. \begin{thm}\label{aremterm_thm}
Let $\mathbb{G}$ be a homogeneous group of homogeneous dimension $Q\geq3$. Let $|\cdot|$ be a homogeneous quasi-norm. Let $2\leq p<Q$ and $-\infty<\alpha<\frac{Q-p}{p}$. Then for all radial functions $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$ we have
$$\int_{\mathbb{G}}\frac{|\mathcal{R}f|^{p}}{|x|^{\alpha p}}dx-\left(\frac{Q-p-\alpha p}{p}\right)^{p}\int_{\mathbb{G}}\frac{|f|^{p}}{|x|^{(\alpha+1)p}}dx$$ \begin{equation}\label{aremterm9} \geq c_{p}\left(\frac{p-1}{p}\right)^{p}\sup_{R>0}d_{R}(f,c_{f}(R)f_{\alpha})^{p}, \end{equation}
where $c_{f}(R)=R^{\frac{Q-p-\alpha p}{p}}\widetilde{f}(R)$ with $f(x)=\widetilde{f}(r)$, $r=|x|$, $\mathcal{R}:=\frac{d}{d|x|}$ is the radial derivative, $c_{p}$ is defined in Lemma \ref{FrS}, and $f_{\alpha}$ and $d_{R}(\cdot,\cdot)$ are defined in \eqref{aremterm7} and \eqref{aremterm8}, respectively. \end{thm} \begin{proof}[Proof of Theorem \ref{aremterm_thm}] Since $p\geq2$, as in \eqref{aremterm3} in the proof of Theorem \ref{aremterm}, we have
$$J(f)=\int_{\mathbb{G}}|\mathcal{R}f|^{p}|x|^{-\alpha p}dx-\left(\frac{Q-p-\alpha p}{p}\right)^{p}\int_{\mathbb{G}}\frac{|f|^{p}}{|x|^{(\alpha+1)p}}dx$$ \begin{equation}\label{aremterm10}
\geq c_{p}|\sigma|\int_{0}^{\infty}\left|\frac{d}{dr}\widetilde{g}\right|^{p}r^{p-1}dr=
c_{p}\int_{\mathbb{G}}\left|\frac{d}{dr}g\right|^{p}|x|^{p-Q}dx. \end{equation} By Theorem 3.1 in \cite{Ruzhansky-Suragan:critical} or Remark \ref{ScalHardyrem} with $\gamma=p$, we obtain
$$J(f)\geq c_{p}\int_{\mathbb{G}}|\mathcal{R}g|^{p}|x|^{p-Q}dx\geq c_{p}\left(\frac{p-1}{p}\right)^{p}\int_{\mathbb{G}}\frac{\left|g(x)-g(\frac{Rx}{|x|})\right|^{p}}{\left|
\log\frac{R}{|x|}\right|^{p}|x|^{Q}}dx$$
$$=c_{p}\left(\frac{p-1}{p}\right)^{p}\int_{\mathbb{G}}\frac{\left||x|^{\frac{Q-p-\alpha p}{p}}f(x)-R^{\frac{Q-p-\alpha p}{p}}f(\frac{Rx}{|x|})\right|^{p}}{\left|
\log\frac{R}{|x|}\right|^{p}|x|^{Q}}dx$$
for any $R>0$. Here using $f(x)=\widetilde{f}(r)$, $r=|x|$, one calculates
$$J(f)\geq c_{p}\left(\frac{p-1}{p}\right)^{p}\int_{\mathbb{G}}\frac{\left|f(x)-R^{\frac{Q-p-\alpha p}{p}}\widetilde{f}(R)|x|^{-\frac{Q-p-\alpha p}{p}}\right|^{p}}{\left|
\log\frac{R}{|x|}\right|^{p}|x|^{(\alpha+1)p}}dx$$
$$=c_{p}\left(\frac{p-1}{p}\right)^{p}\int_{\mathbb{G}}\frac{\left|f(x)-c_{f}(R)|x|^{-\frac{Q-p-\alpha p}{p}}\right|^{p}}{\left|
\log\frac{R}{|x|}\right|^{p}|x|^{(\alpha+1)p}}dx,$$ yielding \eqref{aremterm9}. \end{proof}
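As a concrete illustration of Theorem \ref{aremterm_thm}, we record the special case $p=2$, $\alpha=0$, with $c_{2}$ as in Lemma \ref{FrS}, in which \eqref{aremterm9} reads

```latex
\begin{equation*}
\int_{\mathbb{G}}|\mathcal{R}f|^{2}dx-\left(\frac{Q-2}{2}\right)^{2}\int_{\mathbb{G}}\frac{|f|^{2}}{|x|^{2}}dx
\geq\frac{c_{2}}{4}\sup_{R>0}\int_{\mathbb{G}}
\frac{\left|f(x)-c_{f}(R)|x|^{-\frac{Q-2}{2}}\right|^{2}}{\left|\log\frac{R}{|x|}\right|^{2}|x|^{2}}dx.
\end{equation*}
```

Here the remainder measures the distance of $f$ to the family of classical Hardy extremiser profiles $c\,|x|^{-\frac{Q-2}{2}}$.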
\section{Critical Hardy inequalities of logarithmic type and uncertainty principle} \label{SEC:critHardy} In this section, we present critical Hardy inequalities of logarithmic type on the homogeneous group $\mathbb{G}$. In the abelian isotropic case, the following result was obtained in \cite{MOW15}. In the case $\gamma=p$ this result on the homogeneous group was proved in \cite{Ruzhansky-Suragan:critical}.
\begin{thm}\label{ScalHardy} Let $\mathbb{G}$ be a homogeneous group of homogeneous dimension $Q$. Let $|\cdot|$ be a homogeneous quasi-norm. Let $1<\gamma<\infty$ and $\max\{1,\gamma-1\}<p<\infty$. Then for all $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$ and all $R>0$ we have
\begin{equation}\label{ScalHardy1}\left\|\frac{f-f_{R}}{|x|^{\frac{Q}{p}}
\left(\log\frac{R}{|x|}\right)^{\frac{\gamma}{p}}}\right\|_{L^{p}(\mathbb{G})}
\leq\frac{p}{\gamma-1}\left\|
\frac{\mathcal{R}f}{|x|^{\frac{Q-p}{p}}\left(\log\frac{R}{|x|}\right)^{\frac{\gamma-p}{p}}} \right\|_{L^{p}(\mathbb{G})}, \end{equation}
where $f_{R}(x)=f\left(R\frac{x}{|x|}\right)$, $\mathcal{R}$ is defined in \eqref{EQ:Euler}, and the constant $\frac{p}{\gamma-1}$ is optimal. \end{thm} \begin{proof}[Proof of Theorem \ref{ScalHardy}] First, let us consider the integrals in \eqref{ScalHardy1} restricted to
$B(0, R)$. Introducing polar coordinates $(r,y)=(|x|, \frac{x}{|x|})\in (0,\infty)\times\mathfrak S$ on $\mathbb{G}$, where $\mathfrak S$ is the unit quasi-sphere as in \eqref{EQ:sphere}, and using \eqref{EQ:polar}, we have
$$\int_{B(0,R)}\frac{|f(x)-f_{R}(x)|^{p}}
{|x|^{Q}\left|\log\frac{R}{|x|}\right|^{\gamma}}dx$$
$$=\int_{0}^{R}\int_{\mathfrak S}\frac{|f(ry)-f(Ry)|^{p}}{r^{Q}\left(\log\frac{R}{r}\right)^{\gamma}}r^{Q-1}d\sigma(y)dr$$
$$=\int_{0}^{R}\frac{d}{dr}\left(\frac{1}{(\gamma-1)\left(\log\frac{R}{r}\right)^{\gamma-1}}\int_{\mathfrak S}|f(ry)-f(Ry)|^{p}d\sigma(y)\right)dr$$ $$-\frac{p}{\gamma-1}{\rm Re}\int_{0}^{R}\left(\log\frac{R}{r}\right)^{-\gamma+1}
\int_{\mathfrak S}|f(ry)-f(Ry)|^{p-2}(f(ry)-f(Ry))\overline{\frac{df(ry)}{dr}}d\sigma(y)dr$$
$$=-\frac{p}{\gamma-1}{\rm Re}\int_{0}^{R}\left(\log\frac{R}{r}\right)^{-\gamma+1}\int_{\mathfrak S}|f(ry)-f(Ry)|^{p-2} (f(ry)-f(Ry))\overline{\frac{df(ry)}{dr}}d\sigma(y)dr,$$ where $p-\gamma+1>0$, so that the boundary term at $r=R$ vanishes due to the inequalities
$$|f(ry)-f(Ry)|\leq C(R-r),\qquad \log\frac{R}{r}\geq\frac{R-r}{R},$$ which give the bound $$\frac{1}{(\gamma-1)\left(\log\frac{R}{r}\right)^{\gamma-1}}\int_{\mathfrak S}|f(ry)-f(Ry)|^{p}d\sigma(y)\leq\frac{C^{p}|\sigma|R^{\gamma-1}}{\gamma-1}(R-r)^{p-\gamma+1}\rightarrow0\;\;{\rm as}\;\;r\rightarrow R.$$ Then, by the H\"{o}lder inequality, we get
$$\int_{0}^{R}\int_{\mathfrak S}\frac{|f(ry)-f(Ry)|^{p}}{r\left(\log\frac{R}{r}\right)^{\gamma}}d\sigma(y)dr$$
$$=-\frac{p}{\gamma-1}{\rm Re}\int_{0}^{R}\left(\log\frac{R}{r}\right)^{-\gamma+1}\int_{\mathfrak S}|f(ry)-f(Ry)|^{p-2} (f(ry)-f(Ry))\overline{\frac{df(ry)}{dr}}d\sigma(y)dr$$
$$\leq\frac{p}{\gamma-1}\int_{0}^{R}\left(\log\frac{R}{r}\right)^{-\gamma+1}\int_{\mathfrak S}|f(ry)-f(Ry)|^{p-1}\left|\frac{df(ry)}{dr}\right| d\sigma(y)dr$$
$$\leq\frac{p}{\gamma-1}\left(\int_{0}^{R}\int_{\mathfrak S}\frac{|f(ry)-f(Ry)|^{p}}{r\left(\log\frac{R}{r}\right) ^{\gamma}}d\sigma(y)dr\right)^{\frac{p-1}{p}}$$ $$\times\left(\int_{0}^{R}\int_{\mathfrak S}r^{p-1}\left(\log\frac{R}{r}\right)^{p-\gamma}
\left|\frac{df(ry)}{dr}\right|^{p}d\sigma(y)dr\right)^{\frac{1}{p}}.$$ Thus we obtain
$$\left(\int_{B(0,R)}\frac{\left|f(x)-f_{R}(x)\right|^{p}}{|x|^{Q}\left|\log\frac{R}{|x|}\right|^{\gamma}} dx\right)^{\frac{1}{p}}$$
\begin{equation}\label{ScalHardy2}\leq\frac{p}{\gamma-1}\left(\int_{B(0,R)}|x|^{p-Q}\left|\log\frac{R}{|x|}\right|^{p-\gamma}
\left|\mathcal{R}f(x)\right|^{p}dx\right)^{\frac{1}{p}}. \end{equation} Similarly, we have
$$\left(\int_{B^{c}(0,R)}\frac{\left|f(x)-f_{R}(x)\right|^{p}}{|x|^{Q}\left|\log\frac{R}{|x|}\right|^{\gamma}} dx\right)^{\frac{1}{p}}$$
\begin{equation}\label{ScalHardy3}\leq\frac{p}{\gamma-1}\left(\int_{B^{c}(0,R)}|x|^{p-Q}\left|\log\frac{R}{|x|}\right|
^{p-\gamma}\left|\mathcal{R}f(x)\right|^{p}dx\right)^{\frac{1}{p}}. \end{equation} The inequalities \eqref{ScalHardy2} and \eqref{ScalHardy3} imply \eqref{ScalHardy1}.
Now let us prove the optimality of the constant $\frac{p}{\gamma-1}$ in \eqref{ScalHardy1}. The inequality \eqref{ScalHardy1} gives that \begin{equation}\label{optim1}
\left(\int_{B(0,R)}\frac{|f(x)|^{p}}{|x|^{Q}\left|\log\frac{R}{|x|}\right|^{\gamma}}dx\right)^{\frac{1}{p}}
\leq \frac{p}{\gamma-1}\left(\int_{B(0,R)}|x|^{p-Q}\left|\log\frac{R}{|x|}\right|^{p-\gamma}|\mathcal{R}f(x)|^{p}dx\right)^{\frac{1}{p}}. \end{equation} It is enough to prove the optimality of the constant $\frac{p}{\gamma-1}$ in \eqref{optim1}. As in the abelian case (see \cite[Section 3]{MOW15}), we define the following sequence of functions $$f_{k}(x):=\begin{cases}
(\log(kR))^{\frac{\gamma-1}{p}}, \;\;\;{\rm when}\;\;\;|x|\leq\frac{1}{k},\\
(\log\frac{R}{|x|})^{\frac{\gamma-1}{p}}, \;\;\;{\rm when}\;\;\;\frac{1}{k}\leq|x|\leq\frac{R}{2},\\
\frac{2}{R}(\log2)^{\frac{\gamma-1}{p}}(R-|x|), \;\;\;{\rm when}\;\;\;\frac{R}{2}\leq|x|\leq R \end{cases}$$
for large $k\in \mathbb{N}$. Letting $\widetilde{f}_{k}(r):=f_{k}(x)$ with $r=|x|\geq0$, we get $$\frac{d}{dr}\widetilde{f}_{k}(r)=\begin{cases} 0, \;\;\;{\rm when}\;\;\;r<\frac{1}{k},\\ -\frac{\gamma-1}{p}r^{-1}(\log\frac{R}{r})^{\frac{\gamma-1}{p}-1}, \;\;\;{\rm when}\;\;\;\frac{1}{k}<r<\frac{R}{2},\\ -\frac{2}{R}(\log2)^{\frac{\gamma-1}{p}}, \;\;\;{\rm when}\;\;\;\frac{R}{2}<r<R. \end{cases}$$
Denoting by $|\sigma|$ the $(Q-1)$-dimensional surface measure of the unit sphere, by a direct calculation one has \begin{multline*}
\int_{B(0,R)}|x|^{p-Q}
\left|\log\frac{R}{|x|}\right|^{p-\gamma}\left|\mathcal{R}f_{k}(x)\right| ^{p}dx
=|\sigma|\int_{0}^{R}r^{p-1}\left|\log\frac{R}{r}\right|^{p-\gamma}
\left|\frac{d}{dr}\widetilde{f}_{k}(r)\right|^{p}dr\\
=|\sigma|\left(\frac{\gamma-1}{p}\right)^{p}\int_{\frac{1}{k}}^{\frac{R}{2}}r^{-1}\left(\log\frac{ R}{r}\right)^{-1}dr
+|\sigma|(\log2)^{\gamma-1}\left(\frac{2}{R}\right)^{p} \int_{\frac{R}{2}}^{R}r^{p-1}\left(\log \frac{R}{r}\right)^{p-\gamma}dr\end{multline*}
\begin{equation} \label{optim2}
=|\sigma|\left(\frac{\gamma-1}{p}\right)^{p}\left((\log(\log kR))-\log(\log 2)\right)+C_{\gamma,p}, \end{equation}
where $$C_{\gamma,p}:=2^{p}(\log2)^{\gamma-1}|\sigma|\int_{0}^{\log2}s^{p-\gamma} e^{-ps}ds. $$ Since $p-\gamma+1>0$, we get $C_{\gamma,p}<+\infty$. On the other hand, we see $$
\int_{B(0,R)}\frac{|f_{k}(x)|^{p}}{|x|^{Q}\left|\log\frac{R}{|x|}\right|^{\gamma}}dx
=|\sigma|\int_{0}^{R}\frac{|\widetilde{f}_{k}(r)|^{p}}{r\left|\log\frac{R}{r}\right|^{\gamma}}dr
$$$$=|\sigma|(\log(kR))^{\gamma-1}\int_{0}^{\frac{1}{k}}r^{-1} \left(\log\frac{R}{r}\right)^{-\gamma}dr
+|\sigma|\int_{\frac{1}{k}}^{\frac{R}{2}}r^{-1}
\left(\log\frac{R}{r}\right)^{-1}dr$$$$+|\sigma|(\log2)^{\gamma-1}\left(\frac{2}{R}\right)^{p} \int_{\frac{R}{2}}^{R}r^{-1} (R-r)^{p}\left(\log\frac{R}{r}\right)^{-\gamma}dr $$ \begin{equation}\label{optim3}
=\frac{|\sigma|}{\gamma-1}+|\sigma|(\log(\log(kR))-\log(\log(2))) +C_{R,\gamma,p}, \end{equation} where $$
C_{R,\gamma,p}:=(\log2)^{\gamma-1}\left(\frac{2}{R}\right)^{p}|\sigma|\int_{\frac{R}{2}}^{R}r^{-1} (R-r)^{p}\left(\log\frac{R}{r}\right)^{-\gamma}dr. $$ The inequality $\log\frac{R}{r}\geq \frac{R-r}{R}$ for all $0<r\leq R$ and the assumption $p-\gamma>-1$ imply that $C_{R,\gamma,p}<+\infty$. Then, by \eqref{optim2} and \eqref{optim3}, we have \begin{multline*}
\left(\int_{B(0,R)}|x|^{p-Q}
\left|\log\frac{R}{|x|}\right|^{p-\gamma}\left|\mathcal{R}f_{k}(x)\right|
^{p}dx\right)\\ \times \left(\int_{B(0,R)}\frac{|f_{k}(x)|^{p}}{|x|^{Q}\left|\log\frac{R}{|x|}\right|^{\gamma}}dx\right)^{-1} \rightarrow \left(\frac{\gamma-1}{p}\right)^{p} \end{multline*} as $k \rightarrow \infty$, which implies that the constant $\frac{p}{\gamma-1}$ in \eqref{optim1} is optimal. \end{proof} \begin{cor}[{\rm Uncertainty type principle on $\mathbb{G}$}]\label{ScalHardycor} Let $1<p<\infty$ and $q>1$ be such that $\frac{1}{p}+\frac{1}{q}=\frac{1}{2}$. Let $1<\gamma<\infty$ and $\max\{1,\gamma-1\}<p<\infty$. Then for any $R>0$ and $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$ we have \begin{equation}\label{ScalHardycor1}
\left\|\frac{\mathcal{R}f}{|x|^{\frac{Q-p}{p}}\left(\log\frac{R}{|x|}\right)^{\frac{\gamma-p}{p}}} \right\|_{L^{p}(\mathbb{G})}\|f\|_{L^{q}(\mathbb{G})}
\geq\frac{\gamma-1}{p}\left\|\frac{f(f-f_{R})}
{|x|^{\frac{Q}{p}}\left(\log\frac{R}{|x|}\right)^{\frac{\gamma}{p}}} \right\|_{L^{2}(\mathbb{G})}. \end{equation} Moreover, \begin{multline}\label{ScalHardycor2}
\left\|\frac{\mathcal{R}f}{|x|^{\frac{Q-p}{p}}\left(\log\frac{R}{|x|}\right)^{\frac{\gamma-p}{p}}} \right\|_{L^{p}(\mathbb{G})}\left\|\frac{f-f_{R}}
{|x|^{\frac{Q}{p'}}\left(\log\frac{R}{|x|}\right)^{2-\frac{\gamma}{p}}}
\right\|_{L^{p'}(\mathbb{G})}\geq\frac{\gamma-1}{p}\left\|\frac{f-f_{R}}
{|x|^{\frac{Q}{2}}\log\frac{R}{|x|}}\right\|^{2}_{L^{2}(\mathbb{G})} \end{multline} holds for $\frac{1}{p}+\frac{1}{p'}=1$. \end{cor} \begin{proof}[Proof of Corollary \ref{ScalHardycor}] By \eqref{ScalHardy1}, we have
$$\left\|\frac{\mathcal{R}f}{|x|^{\frac{Q-p}{p}}\left(\log\frac{R}{|x|}\right)^{\frac{\gamma-p}{p}}} \right\|_{L^{p}(\mathbb{G})}\|f\|_{L^{q}(\mathbb{G})}\geq\frac{\gamma-1}{p}
\left\|\frac{f-f_{R}}{|x|^{\frac{Q}{p}}\left(\log\frac{R}{|x|}\right)^{\frac{\gamma}{p}}}\right\|_{L^{p}(\mathbb{G})}
\|f\|_{L^{q}(\mathbb{G})}$$ $$=\frac{\gamma-1}{p}
\left(\int_{\mathbb{G}}\left|\frac{f(x)-f_{R}(x)}{|x|^{\frac{Q}{p}}\left(\log\frac{R}{|x|}\right)^{\frac{\gamma}{p}}}
\right|^{2\frac{p}{2}}dx\right)^{\frac{1}{2}\frac{2}{p}}
\left(\int_{\mathbb{G}}|f(x)|^{2\frac{q}{2}}dx\right)^{\frac{1}{2}\frac{2}{q}},$$ and using the H\"{o}lder inequality, we obtain
$$\left\||x|^{\frac{p-Q}{p}}\left(\log\frac{R}{|x|}\right)^{\frac{p-\gamma}{p}}
\mathcal{R}f \right\|_{L^{p}(\mathbb{G})}\|f\|_{L^{q}(\mathbb{G})}$$
$$\geq\frac{\gamma-1}{p}\left(\int_{\mathbb{G}}\left|\frac{f(x)(f(x)-f_{R}(x))}
{|x|^{\frac{Q}{p}}\left(\log\frac{R}{|x|}\right)^{\frac{\gamma}{p}}}\right|^{2}dx\right)^{\frac{1}{2}}
=\frac{\gamma-1}{p}\left\|\frac{f(f-f_{R})}
{|x|^{\frac{Q}{p}}\left(\log\frac{R}{|x|}\right)^{\frac{\gamma}{p}}} \right\|_{L^{2}(\mathbb{G})}.$$ Similarly, one can prove \eqref{ScalHardycor2}. \end{proof} \begin{rem}\label{ScalHardyrem} When $\gamma=p$, Theorem \ref{ScalHardy} gives \cite[Theorem 3.1]{Ruzhansky-Suragan:critical}: \begin{equation}\label{ScalHardy4}
\left\|\frac{f-f_{R}}{|x|^{\frac{Q}{p}}\log\frac{R}{|x|}}\right\|_{L^{p}(\mathbb{G})}
\leq\frac{p}{p-1}\left\||x|^{\frac{p-Q}{p}}
\mathcal{R}f \right\|_{L^{p}(\mathbb{G})}, \;\;1<p<\infty, \end{equation} for all $R>0$. \end{rem}
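For illustration, taking $p=2$ in \eqref{ScalHardy4} gives the $L^{2}$-critical Hardy inequality

```latex
\begin{equation*}
\left\|\frac{f-f_{R}}{|x|^{\frac{Q}{2}}\log\frac{R}{|x|}}\right\|_{L^{2}(\mathbb{G})}
\leq2\left\||x|^{\frac{2-Q}{2}}\mathcal{R}f\right\|_{L^{2}(\mathbb{G})},\qquad R>0,
\end{equation*}
```

with the sharp constant $2=\frac{p}{p-1}\big|_{p=2}$.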
\section{Critical and subcritical Hardy inequalities} \label{SEC:crit_subcrit_con}
In this section, we study the relation between the critical and the subcritical Hardy inequalities on homogeneous groups.
\begin{prop}\label{crit_subcrit_pr2}
Let $\mathbb{G}$ be a homogeneous group of homogeneous dimension $Q\geq 3$ and $Q\geq m+1$, $m\geq2$. Let $|\cdot|$ be a homogeneous quasi-norm. Then for any nonnegative radial function $g\in C_{0}^{1}(B^{m}(0,R)\backslash\{0\})$, there exists a nonnegative radial function $f\in C_{0}^{1}(B^{Q}(0,1)\backslash\{0\})$ such that
$$\int_{B^{Q}(0,1)}|\mathcal{R}f|^{m}dx-\left(\frac{Q-m}{m}\right)^{m}\int_{B^{Q}(0,1)}\frac{|f|^{m}}{|x|^{m}}dx$$ \begin{equation}\label{crit_subcrit3}
=\frac{|\sigma|}
{|\widetilde{\sigma}|}\left(\frac{Q-m}{m-1}\right)^{m-1}\left(\int_{B^{m}(0,R)}|\mathcal{R}g|^{m}dz-
\left(\frac{m-1}{m}\right)^{m}\int_{B^{m}(0,R)}\frac{|g|^{m}}{|z|^{m}\left(\log\frac{Re}{|z|}\right)^{m}}dz\right) \end{equation}
holds true, where $\mathcal{R}$ is defined in \eqref{EQ:Euler}, and $|\sigma|$ and $|\widetilde{\sigma}|$ are the $(Q-1)$- and $(m-1)$-dimensional surface measures of the unit spheres, respectively. \end{prop}
\begin{proof}[Proof of Proposition \ref{crit_subcrit_pr2}] Let $r=|x|$, $x\in\mathbb{G}$ and $s=|z|$, $z\in\mathbb{\widetilde{G}}$, where $\mathbb{\widetilde{G}}$ is a homogeneous group of homogeneous dimension $m$. Let us define a radial function $f=f(x)\in C_{0}^{1}(B^{Q}(0,1)\backslash\{0\})$ for a nonnegative radial function $g=g(z)\in C_{0}^{1}(B^{m}(0,R)\backslash\{0\})$: \begin{equation}\label{crit_subcrit4} f(r)=g(s(r)), \end{equation} where $s(r)=R\exp(1-r^{-\frac{Q-m}{m-1}})$, that is, $$r^{-\frac{Q-m}{m-1}}=\log\frac{Re}{s}, \;s'(r)=\frac{Q-m}{m-1}r^{-\frac{Q-m}{m-1}-1}s(r).$$
Here we see that $s'(r)>0$ for $r\in(0,1]$ and $s(0)=0$, $s(1)=R$. Since $g(s)\equiv0$ near $s=R$, we also note that $f\equiv0$ near $r=1$. Then a direct calculation shows
$$\int_{B^{Q}(0,1)}|\mathcal{R}f|^{m}dx-\left(\frac{Q-m}{m}\right)^{m}\int_{B^{Q}(0,1)}\frac{|f|^{m}}{|x|^{m}}dx$$
$$=|\sigma|\int_{0}^{1}|f'(r)|^{m}r^{Q-1}dr-\left(\frac{Q-m}{m}\right)^{m}|\sigma|\int_{0}^{1}f^{m}(r)r^{Q-m-1}dr$$
$$=|\sigma|\int_{0}^{R}|g'(s)s'(r(s))|^{m}r^{Q-1}(s)\frac{ds}{s'(r(s))}-\left(\frac{Q-m}{m}\right)^{m}|\sigma|\int_{0}^{R}g^{m}(s) r^{Q-m-1}(s)\frac{ds}{s'(r(s))}$$
$$=|\sigma|\left(\frac{Q-m}{m-1}\right)^{m-1}\int_{0}^{R}|g'(s)|^{m}s^{m-1}ds-\left(\frac{Q-m}{m}\right)^{m}\frac{m-1}{Q-m}
|\sigma|\int_{0}^{R}\frac{g^{m}(s)}{s\left(\log\frac{Re}{s}\right)^{m}}ds$$
$$=\frac{|\sigma|}{|\widetilde{\sigma}|}\left(\frac{Q-m}{m-1}\right)^{m-1}\left(\int_{B^{m}(0,R)}|\mathcal{R}g|^{m}dz-
\left(\frac{m-1}{m}\right)^{m}\int_{B^{m}(0,R)}\frac{|g|^{m}}{|z|^{m}\left(\log\frac{Re}{|z|}\right)^{m}}dz\right),$$ yielding \eqref{crit_subcrit3}. \end{proof}
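To illustrate Proposition \ref{crit_subcrit_pr2}, consider the case $Q=3$, $m=2$, where $\frac{Q-m}{m-1}=1$, so that $s(r)=Re^{1-\frac{1}{r}}$ and \eqref{crit_subcrit3} becomes

```latex
\begin{equation*}
\int_{B^{3}(0,1)}|\mathcal{R}f|^{2}dx-\frac{1}{4}\int_{B^{3}(0,1)}\frac{|f|^{2}}{|x|^{2}}dx
=\frac{|\sigma|}{|\widetilde{\sigma}|}\left(\int_{B^{2}(0,R)}|\mathcal{R}g|^{2}dz
-\frac{1}{4}\int_{B^{2}(0,R)}\frac{|g|^{2}}{|z|^{2}\left(\log\frac{Re}{|z|}\right)^{2}}dz\right).
\end{equation*}
```

Thus the subcritical Hardy remainder in $B^{3}$ equals, up to the constant $\frac{|\sigma|}{|\widetilde{\sigma}|}$, the critical Hardy remainder in $B^{2}$.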
\section{Extended Caffarelli-Kohn-Nirenberg inequalities}\label{SEC:CKN}
In this section, we introduce new Caffarelli-Kohn-Nirenberg type inequalities in the Euclidean setting of $\mathbb R^{n}$ as well as on homogeneous groups. For the convenience of the reader we recall Theorem \ref{THM:CKN-i} and then also explain how it implies Theorem \ref{clas_CKN-2}:
\begin{thm}\label{CKN_thm}
Let $\mathbb{G}$ be a homogeneous group of homogeneous dimension $Q$. Let $|\cdot|$ be a homogeneous quasi-norm. Let $1<p,q<\infty$, $0<r<\infty$ with $p+q\geq r$ and $\delta\in[0,1]\cap\left[\frac{r-q}{r},\frac{p}{r}\right]$ and $a$, $b$, $c\in\mathbb{R}$. Assume that $\frac{\delta r}{p}+\frac{(1-\delta)r}{q}=1$ and $c=\delta(a-1)+b(1-\delta)$. Then we have the following Caffarelli-Kohn-Nirenberg type inequalities for all $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$:
If $Q\neq p(1-a)$, then \begin{equation}\label{CKN_thm2}
\||x|^{c}f\|_{L^{r}(\mathbb{G})}
\leq \left|\frac{p}{Q-p(1-a)}\right|^{\delta} \left\||x|^{a}\mathcal{R}f\right\|^{\delta}_{L^{p}(\mathbb{G})}
\left\||x|^{b}f\right\|^{1-\delta}_{L^{q}(\mathbb{G})}. \end{equation} If $Q=p(1-a)$, then \begin{equation}\label{CKN_thm1}
\left\||x|^{c}f\right\|_{L^{r}(\mathbb{G})}
\leq p^{\delta} \left\||x|^{a}\log|x|\mathcal{R}f\right\|^{\delta}_{L^{p}(\mathbb{G})}
\left\||x|^{b}f\right\|^{1-\delta}_{L^{q}(\mathbb{G})}. \end{equation}
The constant in the inequality \eqref{CKN_thm2} is sharp for $p=q$ with $a-b=1$ or $p\neq q$ with $p(1-a)+bq\neq0$. Moreover, the constants in \eqref{CKN_thm2} and \eqref{CKN_thm1} are sharp for $\delta=0$ or $\delta=1$. Here $\mathcal{R}:=\frac{d}{d|x|}$ is the radial derivative. \end{thm} \begin{rem} Our conditions $\frac{\delta r}{p}+\frac{(1-\delta)r}{q}=1$ and $c=\delta(a-1)+b(1-\delta)$ imply the condition \eqref{clas_CKN2} of Theorem \ref{clas_CKN}, and in our case $a-d=1$. \end{rem} \begin{rem}\label{CKN_rem1}
In the abelian case $\mathbb{G}=(\mathbb R^{n},+)$ and $Q=n$, \eqref{CKN_thm1} implies a new type of Caffarelli-Kohn-Nirenberg inequality for any quasi-norm on $\mathbb R^{n}$: Let $1<p,q<\infty$, $0<r<\infty$ with $p+q\geq r$ and $\delta\in[0,1]\cap\left[\frac{r-q}{r},\frac{p}{r}\right]$ and $a$, $b$, $c\in\mathbb{R}$. Assume that $\frac{\delta r}{p}+\frac{(1-\delta)r}{q}=1$, $n=p(1-a)$ and $c=\delta(a-1)+b(1-\delta)$. Then we have the Caffarelli-Kohn-Nirenberg type inequality for any function $f\in C_{0}^{\infty}(\mathbb{R}^{n}\backslash\{0\})$ and for any homogeneous quasi-norm $|\cdot|$: \begin{equation}\label{CKN_rem2}
\left\||x|^{c}f\right\|_{L^{r}(\mathbb R^{n})}
\leq p^{\delta} \left\||x|^{a}\log|x|\left(\frac{x}{|x|}\cdot\nabla f\right)\right\|^{\delta}_{L^{p}(\mathbb R^{n})}
\left\||x|^{b}f\right\|^{1-\delta}_{L^{q}(\mathbb R^{n})}. \end{equation}
By the Schwarz inequality with the standard Euclidean distance given by $|x|=\sqrt{x_{1}^{2}+x_{2}^{2}+...+x_{n}^{2}}$, we obtain the Euclidean form of the Caffarelli-Kohn-Nirenberg type inequality: \begin{equation}\label{CKN_rem3}
\left\||x|^{c}f\right\|_{L^{r}(\mathbb R^{n})}
\leq p^{\delta} \left\||x|^{a}\log|x|\nabla f\right\|^{\delta}_{L^{p}(\mathbb R^{n})}
\left\||x|^{b}f\right\|^{1-\delta}_{L^{q}(\mathbb R^{n})}, \end{equation}
where $\nabla$ is the standard gradient in $\mathbb R^{n}$. Similarly, we write the inequality \eqref{CKN_thm2} in the abelian case: Let $1<p,q<\infty$, $0<r<\infty$ with $p+q\geq r$ and $\delta\in[0,1]\cap\left[\frac{r-q}{r},\frac{p}{r}\right]$ and $a$, $b$, $c\in\mathbb{R}$. Assume that $\frac{\delta r}{p}+\frac{(1-\delta)r}{q}=1$, $n\neq p(1-a)$ and $c=\delta(a-1)+b(1-\delta)$. Then we have the Caffarelli-Kohn-Nirenberg type inequality for any function $f\in C_{0}^{\infty}(\mathbb{R}^{n}\backslash\{0\})$ and for any homogeneous quasi-norm $|\cdot|$: \begin{equation}
\||x|^{c}f\|_{L^{r}(\mathbb R^{n})}
\leq \left|\frac{p}{n-p(1-a)}\right|^{\delta} \left\||x|^{a}\left(\frac{x}{|x|}\cdot\nabla f\right)\right\|^{\delta}_{L^{p}(\mathbb R^{n})}
\left\||x|^{b}f\right\|^{1-\delta}_{L^{q}(\mathbb R^{n})}. \end{equation}
Then, using the Schwarz inequality with the standard Euclidean distance given by $|x|=\sqrt{x_{1}^{2}+x_{2}^{2}+...+x_{n}^{2}}$, we obtain the Euclidean form of the Caffarelli-Kohn-Nirenberg type inequality: \begin{equation}\label{CKN_rem3_1}
\||x|^{c}f\|_{L^{r}(\mathbb R^{n})}
\leq \left|\frac{p}{n-p(1-a)}\right|^{\delta} \left\||x|^{a}\nabla f\right\|^{\delta}_{L^{p}(\mathbb R^{n})}
\left\||x|^{b}f\right\|^{1-\delta}_{L^{q}(\mathbb R^{n})}. \end{equation} Note that if \begin{equation}\label{CKNrem1}\frac{1}{p}+\frac{a}{n}>0, \;\frac{1}{q}+\frac{b}{n}>0 \;\;{\rm and}\;\; \frac{1}{r}+\frac{c}{n}>0
\end{equation} hold, then the inequality \eqref{CKN_rem3_1} is contained in the family of Caffarelli-Kohn-Nirenberg inequalities \cite{CKN84}. In this case, if we require $p=q$ with $a-b=1$ or $p\neq q$ with $p(1-a)+bq\neq0$, then we obtain the inequality \eqref{CKN_rem3_1} with the sharp constant. Moreover, the constants $\left|\frac{p}{n-p(1-a)}\right|^{\delta}$ and $p^{\delta}$ are sharp for $\delta=0$ or $\delta=1$. If \eqref{CKNrem1} is not satisfied, then the inequality \eqref{CKN_rem3_1} is not covered by Theorem \ref{clas_CKN} because condition \eqref{clas_CKN0} fails, so that we obtain the Caffarelli-Kohn-Nirenberg type inequality in a range of parameters not covered by \cite{CKN84}.
Thus, the inequalities \eqref{CKN_rem3} and \eqref{CKN_rem3_1} are new already in the abelian case and, moreover, \eqref{CKN_thm2} and \eqref{CKN_thm1} hold for any choice of homogeneous quasi-norm. \end{rem}
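As a concrete instance of \eqref{CKN_rem3}, take $n=2$, $p=r=2$, $a=0$ and $\delta=1$ (so that $n=p(1-a)$ and $c=a-1=-1$); then the $q$-factor drops out and we obtain a critical Hardy inequality in the plane:

```latex
\begin{equation*}
\left\|\frac{f}{|x|}\right\|_{L^{2}(\mathbb{R}^{2})}
\leq2\left\|\log|x|\,\nabla f\right\|_{L^{2}(\mathbb{R}^{2})},
\qquad f\in C_{0}^{\infty}(\mathbb{R}^{2}\backslash\{0\}).
\end{equation*}
```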
The proof of Theorem \ref{CKN_thm} will be based on the following family of weighted Hardy inequalities that was obtained in \cite[Theorem 3.4]{RSY16}, where $\mathbb{E}=|x|\mathcal{R}$ is the Euler operator.
\begin{thm}[\cite{RSY16}]\label{L_p_weighted_th}
Let $\mathbb{G}$ be a homogeneous group of homogeneous dimension $Q$ and let $\alpha\in \mathbb{R}$. Then for all complex-valued functions $f\in C^{\infty}_{0}(\mathbb{G}\backslash\{0\}),$ $1<p<\infty,$ and any homogeneous quasi-norm $|\cdot|$ on $\mathbb{G}$ for $\alpha p \neq Q$ we have \begin{equation}\label{L_p_weighted}
\left\|\frac{f}{|x|^{\alpha}}\right\|_{L^{p}(\mathbb{G})}\leq
\left|\frac{p}{Q-\alpha p}\right|\left\|\frac{1}{|x|^{\alpha}}\mathbb{E} f\right\|_{L^{p}(\mathbb{G})}. \end{equation}
The constant $\left|\frac{p}{Q-\alpha p}\right|$ in \eqref{L_p_weighted} is sharp. For $\alpha p=Q$ we have \begin{equation}\label{L_p_weighted_log}
\left\|\frac{f}{|x|^{\frac{Q}{p}}}\right\|_{L^{p}(\mathbb{G})}\leq p\left\|\frac{\log|x|}{|x|^{\frac{Q}{p}}}\mathbb{E} f\right\|_{L^{p}(\mathbb{G})}, \end{equation} where the constant $p$ is sharp. \end{thm}
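For orientation, note that taking $\alpha=1$ in \eqref{L_p_weighted} and using $\mathbb{E}=|x|\mathcal{R}$ recovers the classical $L^{p}$-Hardy inequality on $\mathbb{G}$ with its sharp constant:

```latex
\begin{equation*}
\left\|\frac{f}{|x|}\right\|_{L^{p}(\mathbb{G})}
\leq\frac{p}{Q-p}\left\|\mathcal{R}f\right\|_{L^{p}(\mathbb{G})},
\qquad 1<p<Q.
\end{equation*}
```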
We briefly recall its proof, both for the convenience of the reader and because it will be useful in our argument.
\begin{proof}[Proof of Theorem \ref{L_p_weighted_th}] Using integration by parts, for $\alpha p \neq Q$ we obtain \begin{multline*}
\int_{\mathbb{G}}\frac{|f(x)|^{p}}{|x|^{\alpha p}}dx=\int_{0}^{\infty}\int_{\mathfrak S}|f(ry)|^{p}r^{Q-1-\alpha p}d\sigma(y)dr\\
=-\frac{p}{Q-\alpha p}\int_{0}^{\infty} r^{Q-\alpha p} {\rm Re} \int_{\mathfrak S}|f(ry)|^{p-2} f(ry) \overline{\frac{df(ry)}{dr}}d\sigma(y)dr\\
\leq \left|\frac{p}{Q-\alpha p}\right|\int_{\mathbb{G}}\frac{|\mathbb{E}f(x)||f(x)|^{p-1}}{|x|^{\alpha p}}dx=
\left|\frac{p}{Q-\alpha p}\right|\int_{\mathbb{G}}\frac{|\mathbb{E}f(x)||f(x)|^{p-1}}{|x|^{\alpha+\alpha (p-1)}}dx. \end{multline*} By H\"{o}lder's inequality, it follows that $$
\int_{\mathbb{G}}\frac{|f(x)|^{p}}{|x|^{\alpha p}}dx\leq \left|\frac{p}{Q-\alpha p}\right|\left(\int_{\mathbb{G}}\frac{|\mathbb{E}f(x)|^{p}}{|x|^{\alpha p}}dx\right)
^{\frac{1}{p}}\left(\int_{\mathbb{G}}\frac{|f(x)|^{p}}{|x|^{\alpha p}}dx\right)^{\frac{p-1}{p}}, $$ which gives \eqref{L_p_weighted}.
Now we show the sharpness of the constant. We need to check the equality condition in the above H\"older inequality. Let us consider the function \begin{equation}\label{2}
g(x)=\frac{1}{|x|^{C}}, \end{equation} where $C\in\mathbb{R}, C\neq 0$ and $\alpha p\neq Q$. Then by a direct calculation we obtain \begin{equation}\label{Holder_eq1}
\left|\frac{1}{C}\right|^{p}\left(\frac{|\mathbb{E}g(x)|}{|x|^{\alpha }}\right)^{p}=\left(\frac{|g(x)|^{p-1}}
{|x|^{\alpha (p-1)}}\right)^{\frac{p}{p-1}}, \end{equation}
which satisfies the equality condition in H\"older's inequality. This gives the sharpness of the constant $\left|\frac{p}{Q-\alpha p}\right|$ in \eqref{L_p_weighted}.
Now let us prove \eqref{L_p_weighted_log}. Using integration by parts, we have \begin{multline*}
\int_{\mathbb{G}}\frac{|f(x)|^{p}}{|x|^{Q}}dx=\int_{0}^{\infty}\int_{\mathfrak S}|f(ry)|^{p}r^{Q-1-Q}d\sigma(y)dr\\
=-p\int_{0}^{\infty} \log r {\rm Re} \int_{\mathfrak S}|f(ry)|^{p-2} f(ry) \overline{\frac{df(ry)}{dr}}d\sigma(y)dr\\
\leq p \int_{\mathbb{G}}\frac{|\mathbb{E}f(x)||f(x)|^{p-1}}{|x|^{Q}}|\log|x||dx=
p\int_{\mathbb{G}}\frac{|\mathbb{E}f(x)||\log|x||}{|x|^{\frac{Q}{p}}}\frac{|f(x)|^{p-1}}{|x|^{\frac{Q(p-1)}{p}}}dx. \end{multline*} By H\"{o}lder's inequality, it follows that $$
\int_{\mathbb{G}}\frac{|f(x)|^{p}}{|x|^{Q}}dx\leq p\left(\int_{\mathbb{G}}\frac{|\mathbb{E}f(x)|^{p}|\log|x||^{p}}{|x|^{Q}}dx\right)
^{\frac{1}{p}}\left(\int_{\mathbb{G}}\frac{|f(x)|^{p}}{|x|^{Q}}dx\right)^{\frac{p-1}{p}}, $$ which gives \eqref{L_p_weighted_log}.
Now we show the sharpness of the constant. We need to check the equality condition in the above H\"older inequality. Let us consider the function
$$h(x)=(\log|x|)^{C},$$ where $C\in\mathbb{R}$ and $C\neq 0$. Then by a direct calculation we obtain \begin{equation}\label{Holder_eq2}
\left|\frac{1}{C}\right|^{p}\left(\frac{|\mathbb{E}h(x)||\log|x||}{|x|^{\frac{Q}{p}}}\right)^{p}=\left(\frac{|h(x)|^{p-1}}
{|x|^{\frac{Q (p-1)}{p}}}\right)^{\frac{p}{p-1}}, \end{equation} which satisfies the equality condition in H\"older's inequality. This gives the sharpness of the constant $p$ in \eqref{L_p_weighted_log}. \end{proof}
We are now ready to prove Theorem \ref{CKN_thm}.
\begin{proof}[Proof of Theorem \ref{CKN_thm}] {\bf Case $\delta=0$}. In this case, we have $q=r$ and $b=c$ by $\frac{\delta r}{p}+\frac{(1-\delta)r}{q}=1$ and $c=\delta(a-1)+b(1-\delta)$, respectively. Then, the inequalities \eqref{CKN_thm2} and \eqref{CKN_thm1} are equivalent to the trivial estimate $$
\||x|^{b}f\|_{L^{q}(\mathbb{G})}
\leq \left\||x|^{b}f\right\|_{L^{q}(\mathbb{G})}. $$ {\bf Case $\delta=1$}. Notice that in this case, $p=r$ and $a-1=c$. By Theorem \ref{L_p_weighted_th}, we have for $Q+pc=Q+p(a-1)\neq0$ the inequality
$$\||x|^{c}f\|_{L^{r}(\mathbb{G})}\leq \left|\frac{p}{Q+pc}\right|\||x|^{c}\mathbb{E}f\|_{L^{r}(\mathbb{G})},$$
where $\mathbb{E}=|x|\mathcal{R}$ is the Euler operator. Taking this into account, we get
$$\||x|^{c}f\|_{L^{r}(\mathbb{G})}\leq \left|\frac{p}{Q+pc}\right|\||x|^{c+1}\mathcal{R}f\|_{L^{r}(\mathbb{G})}$$
$$=\left|\frac{p}{Q-p(1-a)}\right|\||x|^{a}\mathcal{R}f\|_{L^{p}(\mathbb{G})},$$ which implies \eqref{CKN_thm2}. For $Q+pc=Q+p(a-1)=0$ by Theorem \ref{L_p_weighted_th} we obtain
$$\||x|^{c}f\|_{L^{r}(\mathbb{G})}\leq p\||x|^{c}\log|x|\mathbb{E}f\|_{L^{r}(\mathbb{G})}=p\||x|^{c+1}\log|x|\mathcal{R}f\|_{L^{r}(\mathbb{G})}$$
$$=p\||x|^{a}\log|x|\mathcal{R}f\|_{L^{p}(\mathbb{G})}$$ which gives \eqref{CKN_thm1}. In this case, the constants in \eqref{CKN_thm2} and \eqref{CKN_thm1} are sharp, since the constants in Theorem \ref{L_p_weighted_th} are sharp.
{\bf Case $\delta\in(0,1)\cap\left[\frac{r-q}{r},\frac{p}{r}\right]$}. Taking into account $c=\delta(a-1)+b(1-\delta)$, a direct calculation gives $$\||x|^{c}f\|_{L^{r}(\mathbb{G})}=
\left(\int_{\mathbb{G}}|x|^{cr}|f(x)|^{r}dx\right)^{\frac{1}{r}}
=\left(\int_{\mathbb{G}}\frac{|f(x)|^{\delta r}}{|x|^{\delta r (1-a)}}\cdot \frac{|f(x)|^{(1-\delta)r}}{|x|^{-br(1-\delta)}}dx\right)^{\frac{1}{r}}.$$ Since $\delta\in(0,1)\cap\left[\frac{r-q}{r},\frac{p}{r}\right]$ and $p+q\geq r$, using H\"{o}lder's inequality with $\frac{\delta r}{p}+\frac{(1-\delta)r}{q}=1$, we obtain
$$\||x|^{c}f\|_{L^{r}(\mathbb{G})}
\leq \left(\int_{\mathbb{G}}\frac{|f(x)|^{p}}{|x|^{p(1-a)}}dx\right)^{\frac{\delta}{p}}
\left(\int_{\mathbb{G}}\frac{|f(x)|^{q}}{|x|^{-bq}}dx\right)^{\frac{1-\delta}{q}}$$
\begin{equation}\label{CKN_thm1_1}=\left\|\frac{f}{|x|^{1-a}}\right\|^{\delta}_{L^{p}(\mathbb{G})}
\left\|\frac{f}{|x|^{-b}}\right\|^{1-\delta}_{L^{q}(\mathbb{G})}. \end{equation} Here we note that when $p=q$ and $a-b=1$, the equality condition in H\"{o}lder's inequality holds for any function. We also note that in the case $p\neq q$ the function \begin{equation}\label{Holder_eq_2}
h(x)=|x|^{\frac{1}{(p-q)}\left(p(1-a)+bq\right)} \end{equation} satisfies H\"{o}lder's equality condition:
$$\frac{|h|^{p}}{|x|^{p(1-a)}}=\frac{|h|^{q}}{|x|^{-bq}}.$$ If $Q\neq p(1-a)$, then by Theorem \ref{L_p_weighted_th}, we have
$$\left\|\frac{f}{|x|^{1-a}}\right\|^{\delta}_{L^{p}(\mathbb{G})}\leq
\left|\frac{p}{Q-p(1-a)}\right|^{\delta} \left\|\frac{\mathbb{E}f}{|x|^{1-a}}\right\|^{\delta}_{L^{p}(\mathbb{G})}$$ \begin{equation}\label{CKN_aux}=
\left|\frac{p}{Q-p(1-a)}\right|^{\delta} \left\|\frac{\mathcal{R}f}{|x|^{-a}}\right\|^{\delta}_{L^{p}(\mathbb{G})}, \;\;1<p<\infty. \end{equation} Putting this in \eqref{CKN_thm1_1}, one has
$$\||x|^{c}f\|_{L^{r}(\mathbb{G})}\leq
\left|\frac{p}{Q-p(1-a)}\right|^{\delta} \left\|\frac{\mathcal{R}f}{|x|^{-a}}\right\|^{\delta}_{L^{p}(\mathbb{G})}
\left\|\frac{f}{|x|^{-b}}\right\|^{1-\delta}_{L^{q}(\mathbb{G})}.$$ We note that in the case $p=q$, $a-b=1$, the equality condition in H\"older's inequality for \eqref{CKN_thm1_1} and \eqref{CKN_aux} holds for $g(x)$ in \eqref{2}. Moreover, in the case $p\neq q$, $p(1-a)+bq\neq0$, it holds for $h(x)$ in \eqref{Holder_eq_2}. Therefore, the constant in \eqref{CKN_thm2} is sharp when $p=q$, $a-b=1$ or $p\neq q$, $p(1-a)+bq\neq0$.
Now let us consider the case $Q=p(1-a)$. Using Theorem \ref{L_p_weighted_th}, one has
$$\left\|\frac{f}{|x|^{1-a}}\right\|^{\delta}_{L^{p}(\mathbb{G})}\leq p^{\delta} \left\|\frac{\log|x|}{|x|^{1-a}}\mathbb{E}f\right\|^{\delta}_{L^{p}(\mathbb{G})}, \;\;1<p<\infty.$$ Then, putting this in \eqref{CKN_thm1_1}, we obtain
$$\||x|^{c}f\|_{L^{r}(\mathbb{G})}\leq p^{\delta} \left\|\frac{\log|x|}{|x|^{1-a}}\mathbb{E}f\right\|^{\delta}_{L^{p}(\mathbb{G})}
\left\|\frac{f}{|x|^{-b}}\right\|^{1-\delta}_{L^{q}(\mathbb{G})} $$
$$=p^{\delta} \left\|\frac{\log|x|}{|x|^{-a}}\mathcal{R}f\right\|^{\delta}_{L^{p}(\mathbb{G})}
\left\|\frac{f}{|x|^{-b}}\right\|^{1-\delta}_{L^{q}(\mathbb{G})}.$$ \end{proof}
\section{$L^{p}$-Hardy inequalities with super weights} \label{SEC:2}
We now discuss versions of Hardy inequalities with more general weights, that we call superweights. The following is the main result of this section.
\begin{thm}\label{1}
Let $\mathbb{G}$ be a homogeneous group of homogeneous dimension $Q$ and let $|\cdot|$ be a homogeneous quasi-norm on $\mathbb{G}$. Let $a,b>0$ and $1<p<\infty, Q\geq1$. \begin{itemize} \item[(i)] If $\alpha \beta>0$ and $pm\leq Q-p$, then for all $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$, we have \begin{equation}\label{Lpweighted1} \frac{Q-pm-p}{p}
\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m+1}}f\right\|_{L^{p}(\mathbb{G})}
\leq\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m}}\mathcal{R}f\right\|_{L^{p}(\mathbb{G})} . \end{equation} If $Q\neq pm+p$, then the constant $\frac{Q-pm-p}{p}$ is sharp.
\item[(ii)] If $\alpha \beta<0$ and $pm-\alpha\beta\leq Q-p$, then for all $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$, we have \begin{equation}\label{Lpweighted2} \frac{Q-pm+\alpha\beta-p}{p}
\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m+1}}f\right\|_{L^{p}(\mathbb{G})}
\leq\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m}}\mathcal{R}f\right\|_{L^{p}(\mathbb{G})} . \end{equation} If $Q\neq pm+p-\alpha\beta$, then the constant $\frac{Q-pm+\alpha\beta-p}{p}$ is sharp. \end{itemize} \end{thm}
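For example, taking $a=b=1$, $\alpha=2$, $\beta=p$ and $m=0$ in Part (i) (so that $\alpha\beta=2p>0$ and the condition $pm\leq Q-p$ reduces to $p\leq Q$), the inequality \eqref{Lpweighted1} becomes

```latex
\begin{equation*}
\frac{Q-p}{p}\left\|\frac{1+|x|^{2}}{|x|}f\right\|_{L^{p}(\mathbb{G})}
\leq\left\|(1+|x|^{2})\,\mathcal{R}f\right\|_{L^{p}(\mathbb{G})},
\end{equation*}
```

with the sharp constant $\frac{Q-p}{p}$ whenever $Q\neq p$.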
\begin{proof}[Proof of Theorem \ref{1}] We may assume that $Q\neq pm+p$ since for $Q=pm+p$ there is nothing to prove. Introducing polar coordinates $(r,y)=(|x|, \frac{x}{|x|})\in (0,\infty)\times\mathfrak S$ on $\mathbb{G}$, where $\mathfrak S$ is the unit quasi-sphere \begin{equation}\label{EQ:sphere}
\mathfrak S:=\{x\in \mathbb{G}:\,|x|=1\}, \end{equation} and using the polar decomposition on homogeneous groups (see, for example, \cite{FS-Hardy} or \cite{FR}) and integrating by parts, we get \begin{equation}\label{eqp} \int_{\mathbb{G}}
\frac{(a+b|x|^{\alpha})^{\beta}}{|x|^{pm+p}}|f(x)|^{p}dx =\int_{0}^{\infty}\int_{\mathfrak S}
\frac{(a+br^{\alpha})^{\beta}}{r^{pm+p}}|f(ry)|^{p} r^{Q-1}d\sigma(y)dr. \end{equation} (i) Since $a,b>0$, $\alpha \beta>0$ and $m<\frac{Q-p}{p}$ we obtain $$ \int_{\mathbb{G}}
\frac{(a+b|x|^{\alpha})^{\beta}}{|x|^{pm+p}}|f(x)|^{p}dx$$ $$\leq\int_{0}^{\infty}\int_{\mathfrak S}
(a+br^{\alpha})^{\beta}r^{Q-1-pm-p}\left(\frac{\alpha\beta b r^{\alpha}}{(a+br^{\alpha})(Q-pm-p)}+1\right)|f(ry)|^{p} d\sigma(y)dr $$ $$=\int_{0}^{\infty}\int_{\mathfrak S}
\frac{d}{dr}\left(\frac{(a+br^{\alpha})^{\beta}r^{Q-pm-p}}{Q-pm-p}\right)|f(ry)|^{p} d\sigma(y)dr$$ $$ =-\frac{p}{Q-pm-p}\int_{0}^{\infty}(a+br^{\alpha})^{\beta}r^{Q-pm-p} \,{\rm Re}\int_{\mathfrak S}
|f(ry)|^{p-2} f(ry) \overline{\frac{df(ry)}{dr}}d\sigma(y)dr $$
$$\leq \left|\frac{p}{Q-pm-p}\right|\int_{\mathbb{G}}\frac{(a+b|x|^{\alpha})^{\beta}|\mathcal{R}f(x)||f(x)|^{p-1}}{|x|^{pm+p-1}}dx $$ $$=\frac{p}{Q-pm-p}
\int_{\mathbb{G}}\frac{(a+b|x|^{\alpha})^
{\frac{\beta(p-1)}{p}}|f(x)|^{p-1}}{|x|^{(m+1)(p-1)}}
\frac{(a+b|x|^{\alpha})^
{\frac{\beta}{p}}}{|x|^{m}}|\mathcal{R}f(x)|dx.$$ By H\"{o}lder's inequality, it follows that $$ \int_{\mathbb{G}}
\frac{(a+b|x|^{\alpha})^{\beta}}{|x|^{pm+p}}|f(x)|^{p}dx$$
$$\leq\frac{p}{Q-pm-p}\left(\int_{\mathbb{G}}\frac{(a+b|x|^{\alpha})^{\beta}}{|x|^{pm+p}}|f(x)|^{p}dx\right)^\frac{p-1}{p}
\left(\int_{\mathbb{G}}\frac{(a+b|x|^{\alpha})^{\beta}}{|x|^{pm}}|\mathcal{R}f(x)|^{p}dx\right)^\frac{1}{p}, $$ which gives \eqref{Lpweighted1}.
Now we show the sharpness of the constant by checking the equality condition in the above application of H\"older's inequality. Let us consider the function
$$g(x)=|x|^{C}, $$ where $C\in\mathbb{R}, C\neq 0$ and $Q\neq pm+p$. Then by a direct calculation we obtain \begin{equation}\label{Holder_eq1}
\left|\frac{1}{C}\right|^{p}\left(\frac{(a+b|x|^{\alpha})^
{\frac{\beta}{p}}|\mathcal{R}g(x)|}{|x|^{m}}\right)^{p}=\left(\frac{(a+b|x|^{\alpha})^
{\frac{\beta(p-1)}{p}}|g(x)|^{p-1}}
{|x|^{(m+1) (p-1)}}\right)^{\frac{p}{p-1}}, \end{equation} which satisfies the equality condition in H\"older's inequality. This gives the sharpness of the constant $\frac{Q-pm-p}{p}$ in \eqref{Lpweighted1}.
Let us now prove Part (ii). Here we may also assume that $Q\neq pm+p-\alpha\beta$, since for $Q=pm+p-\alpha\beta$ there is nothing to prove. Using the polar decomposition, we again have the equality \eqref{eqp}. Since $\alpha \beta<0$ and $pm-\alpha\beta<Q-p$, we obtain $$ \int_{\mathbb{G}}
\frac{(a+b|x|^{\alpha})^{\beta}}{|x|^{pm+p}}|f(x)|^{p}dx$$ $$\leq\int_{0}^{\infty}\int_{\mathfrak S} (a+br^{\alpha})^{\beta}r^{Q-1-pm-p}\left(\frac{b r^{\alpha}}{a+br^{\alpha}}+\frac{a}{a+br^{\alpha}}
\cdot\frac{Q-pm-p}{Q-pm-p+\alpha\beta}\right)|f(ry)|^{p} d\sigma(y)dr $$ $$=\int_{0}^{\infty}\int_{\mathfrak S} \frac{(a+br^{\alpha})^{\beta}r^{Q-1-pm-p}}{Q-pm-p+\alpha\beta}
\left(\frac{\alpha \beta b r^{\alpha}}{a+br^{\alpha}}+Q-pm-p\right)|f(ry)|^{p} d\sigma(y)dr $$ $$=\int_{0}^{\infty}\int_{\mathfrak S} \frac{d}{dr}\left(\frac{(a+br^{\alpha})^{\beta}
r^{Q-pm-p}}{Q-pm-p+\alpha\beta}\right)|f(ry)|^{p} d\sigma(y)dr$$ $$ =-\frac{p}{Q-pm-p+\alpha\beta}\int_{0}^{\infty}(a+br^{\alpha})^{\beta}r^{Q-pm-p} \,{\rm Re}\int_{\mathfrak S}
|f(ry)|^{p-2} f(ry) \overline{\frac{df(ry)}{dr}}d\sigma(y)dr $$
$$\leq \left|\frac{p}{Q-pm-p+\alpha\beta}\right|\int_{\mathbb{G}}\frac{(a+b|x|^{\alpha})^{\beta}|\mathcal{R}f(x)||f(x)|^{p-1}}{|x|^{pm+p-1}}dx $$ $$=\frac{p}{Q-pm-p+\alpha\beta}
\int_{\mathbb{G}}\frac{(a+b|x|^{\alpha})^
{\frac{\beta(p-1)}{p}}|f(x)|^{p-1}}{|x|^{(m+1)(p-1)}}
\frac{(a+b|x|^{\alpha})^
{\frac{\beta}{p}}}{|x|^{m}}|\mathcal{R}f(x)|dx.$$ By H\"{o}lder's inequality, it follows that $$ \int_{\mathbb{G}}
\frac{(a+b|x|^{\alpha})^{\beta}}{|x|^{pm+p}}|f(x)|^{p}dx$$
$$\leq\frac{p}{Q-pm-p+\alpha\beta}\left(\int_{\mathbb{G}}\frac{(a+b|x|^{\alpha})^{\beta}}{|x|^{pm+p}}|f(x)|^{p}dx\right)^\frac{p-1}{p}
\left(\int_{\mathbb{G}}\frac{(a+b|x|^{\alpha})^{\beta}}{|x|^{pm}}|\mathcal{R}f(x)|^{p}dx\right)^\frac{1}{p}, $$ which gives \eqref{Lpweighted2}.
Now we show the sharpness of the constant by checking the equality condition in the above application of H\"older's inequality. Let us consider the function
$$h(x)=|x|^{C}, $$ where $C\in\mathbb{R}, C\neq 0$ and $Q\neq pm+p-\alpha\beta$. Then by a direct calculation we obtain \begin{equation}\label{Holder_eq2}
\left|\frac{1}{C}\right|^{p}\left(\frac{(a+b|x|^{\alpha})^
{\frac{\beta}{p}}|\mathcal{R}h(x)|}{|x|^{m}}\right)^{p}=\left(\frac{(a+b|x|^{\alpha})^
{\frac{\beta(p-1)}{p}}|h(x)|^{p-1}}
{|x|^{(m+1) (p-1)}}\right)^{\frac{p}{p-1}}, \end{equation} which satisfies the equality condition in H\"older's inequality. This gives the sharpness of the constant $\frac{Q-pm-p+\alpha\beta}{p}$ in \eqref{Lpweighted2}. \end{proof}
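Let us briefly illustrate Part (i) by a formal limiting case. Fixing $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$ and $a=1$, and letting $b\to0^{+}$, the weight $(1+b|x|^{\alpha})^{\frac{\beta}{p}}$ converges to $1$ uniformly on the support of $f$, so that \eqref{Lpweighted1} formally recovers the unweighted $L^{p}$-Hardy inequality $$\frac{Q-pm-p}{p}\left\|\frac{f}{|x|^{m+1}}\right\|_{L^{p}(\mathbb{G})}\leq\left\|\frac{\mathcal{R}f}{|x|^{m}}\right\|_{L^{p}(\mathbb{G})},$$ with the same constant.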
\section{Higher order inequalities} \label{Sec3} In this section we present higher order $L^{p}$-Hardy type inequalities with super weights, obtained by iterating the inequalities \eqref{Lpweighted1} and \eqref{Lpweighted2} established above. \begin{thm}\label{2}
Let $\mathbb{G}$ be a homogeneous group of homogeneous dimension $Q$ and let $|\cdot|$ be a homogeneous quasi-norm on $\mathbb{G}$. Let $a,b>0$ and $1<p<\infty, Q\geq1, k\in \mathbb{N}$. \begin{itemize} \item[(i)] If $\alpha \beta>0$ and $pm\leq Q-p$, then for all $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$, we have \begin{equation}\label{Lpweighted3} \left[\prod_{j=0}^{k-1}\left(\frac{Q-p}{p}-(m+j)\right)\right]
\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m+k}}f\right\|_{L^{p}(\mathbb{G})}
\leq\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m}}\mathcal{R}^{k}f\right\|_{L^{p}(\mathbb{G})} . \end{equation}
\item[(ii)] If $\alpha \beta<0$ and $pm-\alpha\beta\leq Q-p$, then for all $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$, we have \begin{equation}\label{Lpweighted4} \left[\prod_{j=0}^{k-1}\left(\frac{Q-p+\alpha\beta}{p}-(m+j)\right)\right]
\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m+k}}f\right\|_{L^{p}(\mathbb{G})}
\leq\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m}}\mathcal{R}^{k}f\right\|_{L^{p}(\mathbb{G})} . \end{equation} \end{itemize} \end{thm} In the case of $k=1$ \eqref{Lpweighted3} gives inequality \eqref{Lpweighted1} and \eqref{Lpweighted4} gives inequality \eqref{Lpweighted2}. \begin{proof}[Proof of Theorem \ref{2}] We can iterate \eqref{Lpweighted1}, that is, we have \begin{equation}\label{Lpiterate0} \frac{Q-pm-p}{p}
\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m+1}}f\right\|_{L^{p}(\mathbb{G})}
\leq\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m}}\mathcal{R}f\right\|_{L^{p}(\mathbb{G})}. \end{equation} In \eqref{Lpiterate0} replacing $f$ by $\mathcal{R}f$ we obtain \begin{equation}\label{Lpiterate1} \frac{Q-pm-p}{p}
\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m+1}}\mathcal{R}f\right\|_{L^{p}(\mathbb{G})}
\leq\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m}}\mathcal{R}^{2}f\right\|_{L^{p}(\mathbb{G})}. \end{equation} On the other hand, replacing $m$ by $m+1$, \eqref{Lpiterate0} gives \begin{equation}\label{Lpiterate2} \frac{Q-p(m+1)-p}{p}
\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m+2}}f\right\|_{L^{p}(\mathbb{G})}
\leq\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m+1}}\mathcal{R}f\right\|_{L^{p}(\mathbb{G})} . \end{equation} Combining this with \eqref{Lpiterate1} we obtain \begin{multline*}\label{Lpiterate3} \frac{Q-pm-p}{p}\cdot\frac{Q-p(m+1)-p}{p}
\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m+2}}f\right\|_{L^{p}(\mathbb{G})} \\
\leq\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m}}\mathcal{R}^{2}f\right\|_{L^{p}(\mathbb{G})}. \end{multline*} This iteration process gives \begin{equation*}\label{Lpweighted5} \prod_{j=0}^{k-1}\left(\frac{Q-p}{p}-(m+j)\right)
\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m+k}}f\right\|_{L^{p}(\mathbb{G})}
\leq\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m}}\mathcal{R}^{k}f\right\|_{L^{p}(\mathbb{G})} . \end{equation*} Similarly, we have for $\alpha \beta<0$, $pm-\alpha\beta\leq Q-p$ and $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$ \begin{equation*}\label{Lpweighted6} \prod_{j=0}^{k-1}\left(\frac{Q-p+\alpha\beta}{p}-(m+j)\right)
\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m+k}}f\right\|_{L^{p}(\mathbb{G})}
\leq\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{m}}\mathcal{R}^{k}f\right\|_{L^{p}(\mathbb{G})} , \end{equation*} completing the proof. \end{proof}
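For instance, taking $m=0$ and $k=2$ in \eqref{Lpweighted3}, the product of constants equals $\frac{Q-p}{p}\cdot\frac{Q-2p}{p}$, and we obtain, for $\alpha\beta>0$, $p\leq Q$ and all $f\in C_{0}^{\infty}(\mathbb{G}\backslash\{0\})$, the weighted Rellich-type inequality for the second order radial operator $\mathcal{R}^{2}$: $$\frac{(Q-p)(Q-2p)}{p^{2}}\left\|\frac{(a+b|x|^{\alpha})^{\frac{\beta}{p}}}{|x|^{2}}f\right\|_{L^{p}(\mathbb{G})}\leq\left\|(a+b|x|^{\alpha})^{\frac{\beta}{p}}\mathcal{R}^{2}f\right\|_{L^{p}(\mathbb{G})}.$$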
\end{document}
\begin{document}
\title{Ideal operators and higher indescribability}
\author[Brent Cody]{Brent Cody} \address[Brent Cody]{ Virginia Commonwealth University, Department of Mathematics and Applied Mathematics, 1015 Floyd Avenue, PO Box 842014, Richmond, Virginia 23284, United States } \email[B. ~Cody]{bmcody@vcu.edu} \urladdr{http://www.people.vcu.edu/~bmcody/}
\author[Peter Holy]{Peter Holy} \address[Peter Holy]{ University of Udine, Via delle Scienze 206, 33100 Udine, Italy } \email[P. ~Holy]{pholy@math.uni-bonn.de}
\begin{abstract}
We investigate properties of the ineffability and the Ramsey operator, and a common generalization of those that was introduced by the second author, with respect to higher indescribability, as introduced by the first author. This extends earlier investigations on the ineffability operator by James Baumgartner, and on the Ramsey operator by Qi Feng, by Philip Welch et al. and by the first author. \end{abstract}
\subjclass[2010]{Primary 03E55; Secondary 03E05}
\keywords{}
\maketitle
\section{Introduction}\label{section_introduction}
In the set theoretic literature, some of the most popular large cardinals have been equipped with canonical ideals, and sometimes also with certain operators on ideals. Early examples of such \emph{large cardinal operators} are the \emph{ineffability operator} $\mathcal I$ due to Baumgartner in \cite{MR0540770}, and the \emph{Ramsey operator} $\mathcal R$ that was introduced and extensively studied by Feng in \cite{MR1077260}.
In the present paper, we want to analyze the interplay of certain large cardinal operators, and in particular the operators ${\mathcal I}$ and ${\mathcal R}$, with a notion of higher indescribability that was introduced by the first author in \cite{CodyHigherIndescribability}, and which extends a notion of Bagaria from \cite{MR3894041}. Let us note that Sharpe and Welch also introduced a notion of higher indescribability \cite[Definition 3.21]{MR2817562}, but the relationship between the notion we use in the present paper and that of \cite{MR2817562} is not currently known.
In the remainder of this section, we recall the definitions of the operators ${\mathcal I}$ and ${\mathcal R}$, and the \emph{strongly Ramsey subset operator} ${\mathcal S}$ (the latter has been first introduced in \cite{HolyLCOandEE}, where it is denoted as $\mathbf{T}_{\mathrm{cl}}$) that is associated to the large cardinal notion of strong Ramseyness, as introduced in \cite{MR2830415}. In Section \ref{section:indescribability}, we review infinitary second order formulas, and their associated notions of indescribability, and we show that many of their basic properties can be established by simple arguments that make use of generic ultrapowers. In Sections \ref{section_generalizing_baumgartner} and \ref{section:indescribability_from_homogeneity}, we generalize results of Baumgartner \cite{MR0384553,MR0540770} to the context of higher indescribability. For example, we show that if $\kappa$ is a subtle cardinal then there are many cardinals $\alpha<\kappa$ which are $\Pi^1_\xi$-indescribable for all $\xi<\alpha^+$. Furthermore, if one assumes the ideal ${\mathcal I}^{\gamma+1}([\kappa]^{<\kappa})$ associated to the $\gamma+1$-ineffability of a cardinal $\kappa$ is nontrivial, where $\kappa\leq\gamma<\kappa^+$, then for any fixed bijection $b:\kappa\to\gamma$ the set \[\{\alpha<\kappa\mid \alpha\in{\mathcal I}^{\text{o.t.}(b[\alpha])}(\Pi^1_\xi(\alpha))^+\}\] is in the filter dual to ${\mathcal I}^{\gamma+1}([\kappa]^{<\kappa})$ (see Corollary \ref{corollary_below_gamma_plus_1_almost_ineffability}). In Section \ref{section:basic}, we provide two basic lemmas on iterations of the ineffability and the Ramsey operator, which can be viewed as generalizations of the following standard facts: whenever a set has stationarily-many stationary initial segments it must be stationary and if a cardinal is weakly compact then the set of smaller cardinals which are not weakly compact is a weakly compact set. 
In Section \ref{section:expressibility}, we show that the ideals associated to higher indescribability, and the results of applying our ideal operators to these ideals, can be described by certain infinitary second order formulas. In Section \ref{section_generalized operators}, we review a uniform framework for large cardinal operators from \cite{HolyLCOandEE}, that in particular includes the operators ${\mathcal I}$, ${\mathcal R}$ and ${\mathcal S}$.\footnote{For readers who are only interested in the operators ${\mathcal I}$ and ${\mathcal R}$ (and perhaps also ${\mathcal S}$), it should be possible to skip Section \ref{section_generalized operators}, and only look up some of its relevant bits when necessary. In fact, it is only in Section \ref{section_generating} that we will make use of these generalized operators. We will thus provide some further information regarding this towards the beginning of Section \ref{section_generating}.} In Section \ref{section:preoperators}, we review the notion of pre-operators. In Section \ref{section_generating}, we combine all of the ingredients to verify yet another generalization of results by Baumgartner, which we extend both to the context of higher indescribability and to the above-mentioned framework for large cardinal operators. For example, suppose $\xi<\kappa^+$ and let ${\mathcal O}$ be the ineffability operator ${\mathcal I}$ or the Ramsey operator ${\mathcal R}$. Then ${\mathcal O}^\omega(\Pi^1_\xi(\kappa))={\mathcal O}^\omega(\Pi^1_{\xi+1}(\kappa))$, but ${\mathcal O}^n(\Pi^1_\xi(\kappa))\subsetneq{\mathcal O}^n(\Pi^1_{\xi+1}(\kappa))$ for all $n<\omega$ with $\kappa\in{\mathcal O}^n(\Pi^1_{\xi+1}(\kappa))^+$ (see Corollary \ref{corollary_collapse} and Corollary \ref{corollary_proper_containment_from_xi_to_xi_plus_one}). In Section \ref{Cody_results}, we comment on some partially problematic results of the first author from \cite{CodyHigherIndescribability}. 
In particular, let us point the reader to a simple question involving the Ramsey operator and $\Pi^1_1$-indescribability, namely Question \ref{question_simple}, which has so far resisted all attempts at a resolution.
Without further mention, we will require all ideals to be ideals on some regular and uncountable cardinal $\kappa$, and to be supersets of the bounded ideal on $\kappa$. For any ideal $I$, $I^+$ denotes the collection of $I$-positive sets, that is, those subsets of $\kappa$ which are not in $I$, while $I^*$ denotes the filter that is dual to $I$, that is, the collection of complements of sets in $I$. We will often introduce ideals by defining the collection of their positive sets when this is more convenient.
The definition of the Ramsey operator that is provided below is not the original definition from \cite{MR1077260}, but a version that is known to be equivalent \cite[Proposition 2.8]{MR4206111}. Recall that for a set of ordinals $A$, an \emph{$A$-list} is a sequence $\langle a_\alpha\mid \alpha\in A\rangle$ such that $a_\alpha\subseteq \alpha$ for any $\alpha\in A$, and that a set $H\subseteq A$ is homogeneous for $\langle a_\alpha\mid \alpha\in A\rangle$ in case $a_\alpha=a_\beta\cap \alpha$ whenever $\alpha<\beta$ are both in $H$. Recall also that $H\subseteq\kappa$ is homogeneous for a regressive function $c\colon[\kappa]^{<\omega}\to\kappa$ in case $c\upharpoonright[H]^n$ is constant for every $n<\omega$.
\begin{definition}
Let $I$ be an ideal on $\kappa$.
\begin{itemize}
\item Given a $\kappa$-list $\vec a$, we define the \emph{local instance of ${\mathcal I}$ at $\vec{a}$}, \[\mathcal I^{\vec a}(I)^+=\{x\subseteq\kappa\mid\exists H\in I^+\ H\subseteq x\textrm{ is homogeneous for }\vec a\},\] and let $\mathcal I(I)^+=\bigcap\{\mathcal I^{\vec a}(I)^+\mid\vec a$ is a $\kappa$-list$\}$.
\item Given a regressive function $c\colon[\kappa]^{<\omega}\to\kappa$, we define the \emph{local instance of ${\mathcal R}$ at $c$}, \[\mathcal R^c(I)^+=\{x\subseteq\kappa\mid\exists H\in I^+\ H\subseteq x\textrm{ is homogeneous for }c\},\] and let $\mathcal R(I)^+=\bigcap\{\mathcal R^c(I)^+\mid c\colon[\kappa]^{<\omega}\to\kappa\textrm{ regressive}\}$.
\end{itemize} \end{definition}
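Let us note two immediate consequences of these definitions. Since any homogeneous set $H\in I^+$ with $H\subseteq x$ witnesses in particular that $x\in I^+$, we have $\mathcal I(I)^+\subseteq I^+$ and $\mathcal R(I)^+\subseteq I^+$, that is, $I\subseteq\mathcal I(I)$ and $I\subseteq\mathcal R(I)$. Moreover, both operators are monotone: $$I\subseteq J\quad\Longrightarrow\quad \mathcal I(I)\subseteq\mathcal I(J)\ \textrm{ and }\ \mathcal R(I)\subseteq\mathcal R(J),$$ since $I\subseteq J$ implies $J^+\subseteq I^+$.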
Feng has shown that ${\mathcal R}(I)$ is a normal ideal on $\kappa$ for any ideal $I$ on $\kappa$, and an analogous result for the operator ${\mathcal I}$ is essentially due to Baumgartner (see also our Lemma \ref{lemma:ineffability_normal} below). Some of the classical large cardinal ideals (see \cite{MR0384553}) are directly generated by these two large cardinal operators.
\begin{fact}\label{fact:basics}\quad
\begin{enumerate}
\item If $\kappa$ is weakly ineffable, then $\mathcal I([\kappa]^{<\kappa})$ is the weakly ineffable ideal on $\kappa$.
\item If $\kappa$ is ineffable, then $\mathcal I({\mathop{\rm NS}}_\kappa)$ is the ineffable ideal on $\kappa$.
\item If $\kappa$ is Ramsey, then $\mathcal R([\kappa]^{<\kappa})$ is the Ramsey ideal on $\kappa$.
\item If $\kappa$ is ineffably Ramsey, then $\mathcal R({\mathop{\rm NS}}_\kappa)$ is the ineffably Ramsey ideal on $\kappa$.
\end{enumerate} \end{fact}
In order to define the strongly Ramsey subset operator ${\mathcal S}$, let us recall that $M$ is a \emph{$\kappa$-model} if $M\supseteq\kappa+1$ is a transitive model of ${\rm ZFC}^-$ of size $\kappa$ with $M^{<\kappa}\subseteq M$. An \emph{$M$-ultrafilter $U$ on $\kappa$} is a filter $U\subseteq P(\kappa)^M$ which measures all subsets of $\kappa$ in $M$. Also recall that an $M$-ultrafilter $U$ on $\kappa$ is \emph{$\kappa$-amenable for $M$} if whenever $\mathcal A\in M$ is a $\kappa$-sized collection of subsets of $\kappa$ in $M$, then $\mathcal A\cap U\in M$.
\begin{definition} Let $I$ be an ideal on $\kappa$. Given a set $a\subseteq\kappa$, we define the \emph{local instance of ${\mathcal S}$ at $a$} by letting $x\in{\mathcal S}^a(I)^+$ if and only if $x\subseteq\kappa$ and there is a $\kappa$-model $M$ with $a\in M$ and there is a $\kappa$-amenable $M$-normal $M$-ultrafilter $U$ on $\kappa$ such that $U\subseteq I^+$ and $x\in U$. We let \[{\mathcal S}(I)^+=\bigcap\{{\mathcal S}^a(I)^+\mid a\subseteq\kappa\}.\] \end{definition}
It is easy to see that $\kappa$ is strongly Ramsey (as defined in \cite{MR2830415}) if and only if $\kappa\in{\mathcal S}([\kappa]^{<\kappa})^+$, and furthermore, when $\kappa\in{\mathcal S}(I)^+$, it follows that ${\mathcal S}(I)$ is a nontrivial normal ideal on $\kappa$. If $\kappa$ is strongly Ramsey, then ${\mathcal S}([\kappa]^{<\kappa})$ is the \emph{strongly Ramsey ideal} on $\kappa$, as introduced in \cite{MR4156888}.
In this paper, we will also investigate properties of \emph{iterated large cardinal operators}. If $\mathcal O$ is a large cardinal operator, and $I$ is an ideal, we define $\mathcal O^\gamma(I)$ inductively, setting $\mathcal O^0(I)=I$, $\mathcal O^{\gamma+1}(I)=\mathcal O(\mathcal O^\gamma(I))$, and $\mathcal O^\gamma(I)=\bigcup_{\delta<\gamma}\mathcal O^\delta(I)$ when $\gamma$ is a limit ordinal.
\section{Review of higher indescribability}\label{section:indescribability}
\subsection{On the notion of $\Pi^1_\xi$- and $\Sigma^1_\xi$-formulas}
The following definition differs slightly from that of Bagaria \cite[Definition 4.1]{MR3894041} in that we allow for $\Pi^1_\xi$- and $\Sigma^1_\xi$-formulas to contain parameters of various kinds.
\begin{definition}\label{definition_over} Suppose $\kappa$ is a regular cardinal. We define the notions of $\Pi^1_\xi$- and $\Sigma^1_\xi$-formula over $V_\kappa$, for all ordinals $\xi$, as follows. \begin{enumerate} \item A formula $\varphi$ is $\Pi^1_0$, or equivalently $\Sigma^1_0$, over $V_\kappa$ if it is a first order formula in the language of set theory; however, we allow for free variables and parameters from $V_\kappa$ of two types, namely of first and of second order. \item A formula $\varphi$ is $\Pi^1_{\xi+1}$ over $V_\kappa$ if it is of the form $\forall X_{k_1}\cdots\forall X_{k_m}\psi$ where $\psi$ is $\Sigma^1_\xi$ over $V_\kappa$ and $m\in\omega$. Similarly, $\varphi$ is $\Sigma^1_{\xi+1}$ over $V_\kappa$ if it is of the form $\exists X_{k_1}\cdots\exists X_{k_m}\psi$ where $\psi$ is $\Pi^1_\xi$ over $V_\kappa$ and $m\in\omega$.\footnote{We follow the convention that uppercase letters represent second order variables, while lower case letters represent first order variables. Thus, in the above, all quantifiers displayed are understood to be second order quantifiers, i.e., quantifiers over subsets of $V_\kappa$.}
\item When $\xi$ is a limit ordinal, a formula $\varphi$, with finitely many second-order free variables and finitely many second-order parameters, is $\Pi^1_\xi$ over $V_\kappa$ if it is of the form \[\bigwedge_{\zeta<\xi}\varphi_\zeta\] where $\varphi_\zeta$ is $\Pi^1_\zeta$ over $V_\kappa$ for all $\zeta<\xi$. Similarly, $\varphi$ is $\Sigma^1_\xi$ if it is of the form \[\bigvee_{\zeta<\xi}\varphi_\zeta\] where $\varphi_\zeta$ is $\Sigma^1_\zeta$ over $V_\kappa$ for all $\zeta<\xi$. \end{enumerate} \end{definition}
Before we can introduce the concept of higher indescribability, which is based on these formula classes, we will need to review a number of further preliminaries.
\subsection{Canonical functions}
For a regular uncountable cardinal $\kappa$, the definition of the $\Pi^1_\xi$-indes\-cribability of a set $S\subseteq\kappa$, where $\kappa\leq\xi<\kappa^+$, that was introduced in \cite{CodyHigherIndescribability}, uses a sequence of functions $\<F^\kappa_\xi\mid\xi<\kappa^+\rangle$, referred to as a \emph{sequence of canonical reflection functions at $\kappa$}, which is defined as follows. If $\xi<\kappa$ then we let $F^\kappa_\xi(\alpha)=\xi$ for all $\alpha\in \kappa$. If $\xi\in\kappa^+\setminus\kappa$, fix a bijection $b_{\kappa,\xi}:\kappa\to\xi$ and let $F^\kappa_\xi(\alpha)=b_{\kappa,\xi}[\alpha]$ for all $\alpha<\kappa$. Notice that for each $\xi<\kappa^+$, the definition of the $\xi^{th}$ canonical reflection function $F^\kappa_\xi$ is independent, modulo the nonstationary ideal, of which bijection $b_{\kappa,\xi}$ is chosen. That is, if $b^1_{\kappa,\xi},b^2_{\kappa,\xi}:\kappa\to\xi$ are two bijections, then the set $\{\alpha<\kappa\mid b^1_{\kappa,\xi}[\alpha]=b^2_{\kappa,\xi}[\alpha]\}$ contains a club subset of $\kappa$.
We obtain a sequence of canonical functions $\<f^\kappa_\xi\mid\xi<\kappa^+\rangle$ at $\kappa$ by letting $f^\kappa_\xi(\alpha)=\mathop{\rm ot}\nolimits(F^\kappa_\xi(\alpha))$ be the transitive collapse of $F^\kappa_\xi(\alpha)$, for all $\xi<\kappa^+$ and all $\alpha<\kappa$. For all such $\alpha$ and $\xi$, let $\pi^\kappa_{\xi,\alpha}:F^\kappa_\xi(\alpha)\to f^\kappa_\xi(\alpha)$ be the transitive collapsing map of $F^\kappa_\xi(\alpha)$. We will assume a fixed choice of these objects throughout the paper.
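To illustrate these definitions with one possible choice of bijections, note that taking $b_{\kappa,\kappa}=\mathrm{id}_\kappa$ gives $F^\kappa_\kappa(\alpha)=b_{\kappa,\kappa}[\alpha]=\alpha$ and hence $f^\kappa_\kappa(\alpha)=\alpha$ for all $\alpha<\kappa$, so that, for this choice, the $\kappa^{th}$ canonical function is the identity function on $\kappa$. Similarly, choosing $b_{\kappa,\kappa+1}:\kappa\to\kappa+1$ with $b_{\kappa,\kappa+1}(0)=\kappa$ and $b_{\kappa,\kappa+1}(1+\eta)=\eta$ for $\eta<\kappa$, we obtain $$F^\kappa_{\kappa+1}(\alpha)=\alpha\cup\{\kappa\}\quad\text{and}\quad f^\kappa_{\kappa+1}(\alpha)=\alpha+1$$ for every infinite $\alpha<\kappa$.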
Intuitively, $\xi$ is to $\kappa$ as $f^\kappa_\xi(\alpha)$ is to $\alpha$, and one can think of $f^\kappa_\xi(\alpha)$ as being $\alpha$'s version of $\xi$ in the sense that when some property involving $\kappa$ and $\xi$ is reflected down to $\alpha$, the statement will be about $\alpha$ and $f^\kappa_\xi(\alpha)$. Notice that (see \cite[Proposition~2.1]{CodyHigherIndescribability}) if $I$ is a normal ideal on $\kappa$, $G$ is generic for $\mathcal P(\kappa)/I$, and $j\colon V\to\mathop{\rm Ult}$, with $\mathop{\rm Ult}=V^\kappa/G$, is the corresponding generic ultrapower embedding, then, for all $\xi<\kappa^+$, the $\xi^{th}$ canonical function $f^\kappa_\xi$ represents the ordinal $\xi$ in $\mathop{\rm Ult}$, that is, $j(f^\kappa_\xi)(\kappa)=[f^\kappa_\xi]_G=\xi$. Similarly, for all $\xi<\kappa^+$, the $\xi^{th}$ canonical reflection function $F^\kappa_\xi$ represents $j"\xi$ in $\mathop{\rm Ult}$, that is, $j(F^\kappa_\xi)(\kappa)=[F^\kappa_\xi]_G=j"\xi$. Some background material on generic ultrapowers may be found in \cite{MR2768692}, but we will only need very little.
Throughout the rest of our paper, with respect to a regular and uncountable cardinal $\kappa$, let $G$ denote an arbitrary generic filter for $\mathcal P(\kappa)/{\mathop{\rm NS}}_\kappa$ over $V$, and let $j\colon V\to \mathop{\rm Ult}$ be the corresponding generic ultrapower embedding with critical point~$\kappa$. We may sometimes make the extra assumption that $G$ contains some particular stationary subset of $\kappa$ as an element. Note that $\mathop{\rm Ult}$ may not be well-founded, but it is so up to $\kappa^+$ (as calculated in $V$), and also that $H(\kappa^+)\subseteq\mathop{\rm Ult}$ and that $H(\kappa)=H(\kappa)^{\mathop{\rm Ult}}$ in case $\kappa$ is inaccessible.
With respect to the objects that we have fixed at the beginning of this section, at the level of $j(\kappa)$ in $\mathop{\rm Ult}$, we will always be using the sequence of bijections \[\langle b_{j(\kappa),\xi}\mid j(\kappa)\le\xi<j(\kappa)^+\rangle=j(\langle b_{\kappa,\xi}\mid\kappa\le\xi<\kappa^+\rangle)\] to define the sequences of canonical (reflection) functions that we use.
The following proposition is an easy folklore observation, and will be used to show that the provability of certain statements about generic ultrapowers induces (ground model) statements about canonical (reflection) functions to hold on a club.
\begin{proposition}\label{framework} Suppose $\kappa$ is a regular uncountable cardinal, $S\subseteq\kappa$ and whenever $G$ is generic for $P(\kappa)/{\mathop{\rm NS}}_\kappa$ it follows that $\kappa\in j(S)$ where $j:V\to \mathop{\rm Ult}$ is the corresponding generic ultrapower. Then $S$ contains a club subset of $\kappa$ in $V$.
\end{proposition} \begin{proof} For the sake of contradiction, suppose $S$ does not contain a club subset of $\kappa$ in $V$. Then $T=\kappa\setminus S$ is stationary and we may let $G$ be generic for $P(\kappa)/{\mathop{\rm NS}}_\kappa$ with $T\in G$. Then $\kappa\in j(T)$, but this contradicts our assumption that $\kappa\in j(S)$. \end{proof}
We will need the following lemma later on, which was also presented in \cite[Lemma 2.7 and Lemma 2.8]{CodyHigherIndescribability}, together with easy elementary proofs. For the sake of completeness, and since we will often make use of similar more difficult arguments later on, we would like to provide an even easier proof that makes use of generic ultrapower representations.\footnote{It is fairly straightforward to find generic ultrapower proofs for many further results on canonical (reflection) functions, for example for all the results that are provided in \cite[Section 2]{CodyHigherIndescribability}. We will however not need any such further results in this paper.}
\begin{lemma}\label{lemma_can} Suppose $\kappa$ is a regular cardinal. For all $\xi<\kappa^+$ the following hold. \begin{enumerate} \item If $\xi$ is a limit ordinal, then the set \[D_0=\{\alpha<\kappa\mid\text{$f^\kappa_\xi(\alpha)$ is a limit ordinal}\}\] is a club subset of $\kappa$. \item The set \[D_1=\{\alpha<\kappa\mid f^\kappa_{\xi+1}(\alpha)=f^\kappa_\xi(\alpha)+1\}\] contains a club subset of $\kappa$. \end{enumerate} \end{lemma} \begin{proof}
\begin{enumerate}
\item It is easy to see that $D_0$ is closed below $\kappa$. Let $j:V\to\mathop{\rm Ult}$ be any generic ultrapower obtained by forcing with $P(\kappa)/{\mathop{\rm NS}}_\kappa$. Then $j(f^\kappa_\xi)(\kappa)=\xi$ is a limit ordinal and hence $\kappa\in j(D_0)$. By Proposition \ref{framework}, $D_0$ contains a club, and is thus in particular unbounded in $\kappa$; being also closed, $D_0$ is itself a club subset of $\kappa$.
\item Using that $j(f^\kappa_{\xi+1})(\kappa)=\xi+1$, it follows that $\kappa\in j(D_1)$, and the result thus follows by Proposition \ref{framework}.
\end{enumerate} \end{proof}
\subsection{Restrictions of formulas}
Let $\kappa$ be a regular and uncountable cardinal throughout. When defining the $\Pi^1_\xi$-indescribability of sets $S\subseteq\kappa$ where $\kappa\leq\xi<\kappa^+$, one cannot simply demand that every $\Pi^1_\xi$-sentence which is true in $V_\kappa$ must be true in $V_\alpha$ for some $\alpha\in S$ because, for example, there are $\Pi^1_\kappa$ sentences with no first or second-order parameters that are true in $V_\kappa$ but which are false in $V_\alpha$ for all $\alpha<\kappa$ (see \cite[Section 1]{CodyHigherIndescribability}). However, one can demand that whenever a $\Pi^1_\xi$-sentence $\varphi$ holds in $V_\kappa$, there must be some $\alpha\in S$ such that a canonically defined restriction $\varphi\mathrm{|}^\kappa_\alpha$ is true in $V_\alpha$. Although we summarize the required background here, one may consult \cite[Sections 3 and 4]{CodyHigherIndescribability} for more information on such canonically defined restrictions of formulas.
In the following, when we talk about either $\Pi^1_\xi$-formulas or $\Sigma^1_\xi$-formulas over $V_\kappa$, for some $\xi<\kappa^+$, we mean formulas which are of that \emph{exact} complexity, and not any simpler one, and we say that these formulas \emph{are of complexity $\xi$}. We also treat such formulas as set theoretic objects, and thus we tacitly assume some reasonable and natural coding of these formulas, and interchangeably use these formulas on the meta level as well as the object level. In particular, we assume that any $\Pi^1_\xi$-formula or $\Sigma^1_\xi$-formula over $V_\kappa$ is coded as an element of $H(\kappa^+)$.
\begin{definition} By induction on $\xi<\kappa^+$, we define $\varphi\mathrm{|}^\kappa_\alpha$ for all $\Pi^1_\xi$ formulas $\varphi$ over $V_\kappa$ and all regular $\alpha<\kappa$ as follows.\footnote{This is essentially the same definition as in \cite{CodyHigherIndescribability}, however we are being somewhat more careful with respect to the set theoretic representation of formulas here.} First assume that $\xi<\kappa$. If \[\varphi=\varphi(X_1,\ldots,X_m,A_1,\ldots,A_n),\] with free second order variables $X_1,\ldots,X_m$ and second order parameters $A_1,\ldots,A_n$, such that $\alpha>\xi$, and all first order parameters of $\varphi$ are elements of $V_\alpha$, then we define \[\varphi\mathrm{|}^\kappa_\alpha=\varphi(X_1,\ldots,X_m,A_1\cap V_\alpha,\ldots,A_n\cap V_\alpha),\] and we leave $\varphi\mathrm{|}^\kappa_\alpha$ undefined otherwise.
If $\xi=\zeta+1$ is a successor ordinal and $\varphi=\forall X_{k_1}\ldots\forall X_{k_m}\psi$ is $\Pi^1_{\zeta+1}$ over $V_\kappa$, then we define \[\varphi\mathrm{|}^\kappa_\alpha=\forall X_{k_1}\ldots\forall X_{k_m}(\psi\mathrm{|}^\kappa_\alpha)\] in case $\psi\mathrm{|}^\kappa_\alpha$ is defined, and leave $\varphi\mathrm{|}^\kappa_\alpha$ undefined otherwise. We define $\varphi\mathrm{|}^\kappa_\alpha$ analogously when $\varphi$ is $\Sigma^1_{\zeta+1}$.
If $\xi\in\kappa^+\setminus\kappa$ is a limit ordinal, and \[\varphi=\bigwedge_{\zeta<\xi}\psi_\zeta\] is $\Pi^1_\xi$ over $V_\kappa$, then we define \[\varphi\mathrm{|}^\kappa_\alpha=\bigwedge_{\zeta\in f^\kappa_\xi(\alpha)}\psi_{(\pi^\kappa_{\xi,\alpha})^{-1}(\zeta)}\mathrm{|}^\kappa_\alpha\] in case $\psi_{(\pi^\kappa_{\xi,\alpha})^{-1}(\zeta)}\mathrm{|}^\kappa_\alpha$ is a $\Pi^1_\zeta$-formula over $V_\alpha$ for every $\zeta\in f^\kappa_\xi(\alpha)$. We leave $\varphi\mathrm{|}^\kappa_\alpha$ undefined otherwise. We define $\varphi\mathrm{|}^\kappa_\alpha$ similarly when $\xi\in\kappa^+\setminus\kappa$ is a limit ordinal and $\varphi$ is $\Sigma^1_\xi$. \end{definition}
Note that by a simple induction on formula complexity, we obtain the following.
\begin{observation}\label{observation:definedcomplexity}
If $\varphi$ is a $\Pi^1_\xi$- or $\Sigma^1_\xi$-formula over $V_\kappa$, and $\alpha<\kappa$ is regular, then whenever $\varphi\mathrm{|}^\kappa_\alpha$ is defined, it is a $\Pi^1_{f^\kappa_\xi(\alpha)}$- or $\Sigma^1_{f^\kappa_\xi(\alpha)}$-formula over $V_\alpha$ respectively. \end{observation}
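For example, if $\varphi=\forall X\,\psi(X,A)$ is $\Pi^1_1$ over $V_\kappa$, where $\psi$ is first order with second order parameter $A\subseteq V_\kappa$, and all first order parameters of $\psi$ are elements of $V_\alpha$ for some regular $\alpha<\kappa$, then $$\varphi\mathrm{|}^\kappa_\alpha=\forall X\,\psi(X,A\cap V_\alpha),$$ which is a $\Pi^1_1$-formula over $V_\alpha$, in accordance with Observation \ref{observation:definedcomplexity}, since $f^\kappa_1(\alpha)=1$.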
\begin{remark}\label{remark_coding} We will need the following properties of our coding of formulas. We will leave it to our readers to check that any reasonable coding of formulas has these properties. Assume that $\varphi$ is either a $\Pi^1_\xi$- or $\Sigma^1_\xi$-formula over $V_\kappa$ for some $\xi<\kappa^+$.
\begin{enumerate}
\item If $\xi<\kappa$, and $A_1,\ldots,A_n$ are all second order parameters appearing in $\varphi$, then \[j(\varphi(A_1,\ldots,A_n))=\varphi(j(A_1),\ldots,j(A_n)).\]
\item $j(\forall X\,\varphi)=\forall X\,j(\varphi)$.
\item If $\xi\ge\kappa$ is a limit ordinal, and $\varphi$ is either of the form $\varphi=\bigwedge_{\zeta<\xi}\psi_\zeta$, or of the form $\bigvee_{\zeta<\xi}\psi_\zeta$, let $\vec\psi=\langle\psi_\zeta\mid\zeta<\xi\rangle$. Then, \[j(\varphi)=\bigwedge_{\zeta<j(\xi)}j(\vec\psi)_\zeta\quad\textrm{or}\quad j(\varphi)=\bigvee_{\zeta<j(\xi)}j(\vec\psi)_\zeta\] respectively. \end{enumerate} \end{remark}
We will need the following.
\begin{observation}\label{observation:piisjinverse}
Let $\kappa$ be a regular uncountable cardinal, and let $\xi<\kappa^+$. Let $\vec\pi=\langle\pi^\kappa_{\xi,\alpha}\mid\alpha<\kappa\rangle$. Then $j(\vec\pi)_\kappa^{-1}=j\upharpoonright\xi$. \end{observation} \begin{proof}
Since each $\pi^\kappa_{\xi,\alpha}$ is the transitive collapse of $F^\kappa_\xi(\alpha)$, it follows by elementarity that $j(\vec\pi)_\kappa$ is the transitive collapse of $j(F^\kappa_\xi)(\kappa)=j"\xi$. Hence, the image of $j(\vec\pi)_\kappa$ is $\xi$, and its inverse is thus clearly identical to $j\upharpoonright\xi$. \end{proof}
The proof of the next lemma is essentially the same as the first part of the proof of \cite[Proposition 3.8]{CodyHigherIndescribability}. Regarding the assumption of the next lemma, and also of some later results, note that $\kappa$ is regular in $\mathop{\rm Ult}$ if and only if $G$ contains the set of regular cardinals below $\kappa$. This is of course only possible if that latter set is a stationary subset of $\kappa$, i.e., if $\kappa$ is weakly Mahlo.
\begin{lemma}\label{lemma_restrictionvsj}
If $\varphi$ is either a $\Pi^1_\xi$- or $\Sigma^1_\xi$-formula over $V_\kappa$ for some $\xi<\kappa^+$ and $\kappa$ is regular in $\mathop{\rm Ult}$, then in $\mathop{\rm Ult}$, \[j(\varphi)\mathrm{|}^{j(\kappa)}_\kappa=\varphi.\] \end{lemma} \begin{proof}
Regularity of $\kappa$ in $\mathop{\rm Ult}$ is needed so that $j(\varphi)\mathrm{|}^{j(\kappa)}_\kappa$ could possibly be defined in $\mathop{\rm Ult}$.
The proof proceeds by induction on $\xi<\kappa^+$. The case when $\xi<\kappa$ is easy, for then by Remark \ref{remark_coding}(1), \[j(\varphi(A_1,\ldots,A_n))=\varphi(j(A_1),\ldots,j(A_n)),\] and thus, $j(\varphi)\mathrm{|}^{j(\kappa)}_\kappa=\varphi$ by the definition of the restriction operation in this case. Successor steps above $\kappa$ are easily treated as well, for by Remark \ref{remark_coding}(2), in this case, \[j(\forall X\psi(X))=\forall X j(\psi(X)).\]
At limit steps $\xi\ge\kappa$, if $\varphi=\bigwedge_{\zeta<\xi}\psi_\zeta$ is a $\Pi^1_\xi$-formula, let $\vec\psi=\langle\psi_\zeta\mid\zeta<\xi\rangle$, and let $\vec\pi=\langle\pi^\kappa_{\xi,\alpha}\mid\alpha<\kappa\rangle$. Then, by Remark \ref{remark_coding}(3), $j(\varphi)=\bigwedge_{\zeta<j(\xi)}j(\vec\psi)_\zeta$, and therefore \[j(\varphi)\mathrm{|}^{j(\kappa)}_\kappa=\bigwedge_{\zeta\in f^{j(\kappa)}_{j(\xi)}(\kappa)}j(\vec\psi)_{j(\vec\pi)_\kappa^{-1}(\zeta)}\mathrm{|}^{j(\kappa)}_\kappa=\bigwedge_{\zeta\in\xi}(j(\psi_\zeta))\mathrm{|}^{j(\kappa)}_\kappa=\varphi,\] using that $f^{j(\kappa)}_{j(\xi)}(\kappa)=j(f^\kappa_\xi)(\kappa)=\xi$ by our choice of canonical functions at the level of $j(\kappa)$ in $\mathop{\rm Ult}$, and by Observation \ref{observation:piisjinverse}.
The case when $\varphi$ is a $\Sigma^1_\xi$-formula is treated in exactly the same way. \end{proof}
A neat feature, which could also be seen as a possible motivation for our restriction operation, is now the following.
\begin{lemma}\label{lemma_formularepresentation}
Assume that $\kappa$ is regular in $\mathop{\rm Ult}$, that $\varphi$ is either a $\Pi^1_\xi$- or $\Sigma^1_\xi$-formula over $V_\kappa$ for some $\xi<\kappa^+$, and that $\Phi\colon\kappa\to V_\kappa$ is a function with $\Phi(\alpha)=\varphi\mathrm{|}^\kappa_\alpha$ for every regular $\alpha<\kappa$. Then, $\Phi$ represents $\varphi$ in $\mathop{\rm Ult}$. That is, $j(\Phi)(\kappa)=[\Phi]_G=\varphi$. \end{lemma} \begin{proof}
Note that $j(\Phi)(\kappa)=j(\varphi)\mathrm{|}^{j(\kappa)}_\kappa=\varphi$ by Lemma \ref{lemma_restrictionvsj}. \end{proof}
The following was essentially shown as \cite[Lemma 3.6]{CodyHigherIndescribability} using an elementary proof, and becomes almost trivial with a generic ultrapower argument.
\begin{lemma}\label{lemma_restriction_is_nice}
Suppose $\kappa$ is weakly Mahlo. For any $\xi<\kappa^+$, if $\varphi$ is a $\Pi^1_\xi$- or $\Sigma^1_\xi$-formula over $V_\kappa$, then there is a club subset $C$ of $\kappa$ such that for any regular $\alpha\in C$, $\varphi\mathrm{|}^\kappa_\alpha$ is defined, and therefore a $\Pi^1_{f^\kappa_\xi(\alpha)}$- or $\Sigma^1_{f^\kappa_\xi(\alpha)}$-formula over $V_\alpha$ respectively by Observation \ref{observation:definedcomplexity}. \end{lemma} \begin{proof}
Assume for a contradiction that the conclusion of the lemma fails. This means that there is a stationary set $T$ consisting of regular and uncountable cardinals $\alpha$ such that $\varphi\mathrm{|}^\kappa_\alpha$ is not defined. Assume that $T\in G$. Then, $\kappa\in j(T)$, and therefore $\kappa$ is regular in $\mathop{\rm Ult}$, however $j(\varphi)\mathrm{|}^{j(\kappa)}_\kappa$ is not defined in $\mathop{\rm Ult}$. But, by Lemma~\ref{lemma_restrictionvsj}, $j(\varphi)\mathrm{|}^{j(\kappa)}_\kappa=\varphi$, which clearly yields a contradiction. \end{proof}
We will need the following property of restrictions of formulas, which is established using an argument similar to that of Lemma \ref{lemma_restrictionvsj}.
\begin{lemma}\label{lemma_formularestrictions}
Suppose that $\kappa$ is regular in $\mathop{\rm Ult}$. If $\varphi$ is either a $\Pi^1_\xi$- or $\Sigma^1_\xi$-formula over $V_\kappa$ for some $\xi<\kappa^+$, and $\alpha<\kappa$ is regular such that $\varphi\mathrm{|}^\kappa_\alpha$ is defined, then \[j(\varphi)\mathrm{|}^{j(\kappa)}_\alpha=\varphi\mathrm{|}^\kappa_\alpha,\] with the former being calculated in $\mathop{\rm Ult}$, and the latter being calculated in $V$. \end{lemma} \begin{proof}
By induction on $\xi<\kappa^+$. This is immediate in case $\xi<\kappa$, for then by Remark \ref{remark_coding}(1), $j(\varphi(A_1,\ldots,A_n))=\varphi(j(A_1),\ldots,j(A_n))$, and thus $j(\varphi)\mathrm{|}^{j(\kappa)}_\alpha=\varphi\mathrm{|}^\kappa_\alpha$ by the definition of the restriction operation in this case. It is also immediate for successor steps above $\kappa$, for then by Remark \ref{remark_coding}(2), $j(\forall\vec X\psi)=\forall\vec X j(\psi)$.
At limit steps $\xi\ge\kappa$, if $\varphi=\bigwedge_{\zeta<\xi}\psi_\zeta$ is a $\Pi^1_\xi$-formula, let $\vec\psi=\langle\psi_\zeta\mid\zeta<\xi\rangle$, and let $\vec\pi=\langle\pi^\kappa_{\xi,\alpha}\mid\alpha<\kappa\rangle$. Then, by Remark \ref{remark_coding}(3), $j(\varphi)=\bigwedge_{\zeta<j(\xi)}j(\vec\psi)_\zeta$, and therefore, assuming for now that $j(\varphi)\mathrm{|}^{j(\kappa)}_\alpha$ is defined, \[j(\varphi)\mathrm{|}^{j(\kappa)}_\alpha=\bigwedge_{\zeta\in j(f^\kappa_\xi)(\alpha)}j(\vec\psi)_{j(\vec\pi)_\alpha^{-1}(\zeta)}\mathrm{|}^{j(\kappa)}_\alpha=\bigwedge_{\zeta\in j(f^\kappa_\xi)(\alpha)}j(\psi_{j^{-1}(j(\vec\pi)_\alpha^{-1}(\zeta))})\mathrm{|}^{j(\kappa)}_\alpha,\] using that $j(\vec\pi)_\alpha^{-1}[j(f^\kappa_\xi)(\alpha)]=j(F^\kappa_\xi)(\alpha)\subseteq j(F^\kappa_\xi)(\kappa)=j"\xi$. By our inductive hypothesis, for each $\gamma\in\xi$ and every regular $\alpha<\kappa$, $j(\psi_\gamma)\mathrm{|}^{j(\kappa)}_\alpha=\psi_\gamma\mathrm{|}^\kappa_\alpha$. Thus, \[j(\varphi)\mathrm{|}^{j(\kappa)}_\alpha=\bigwedge_{\zeta\in j(f^\kappa_\xi)(\alpha)}\psi_{j^{-1}(j(\vec\pi)_\alpha^{-1}(\zeta))}\mathrm{|}^\kappa_\alpha.\] Now, \[\varphi\mathrm{|}^\kappa_\alpha=\bigwedge_{\zeta\in f^\kappa_\xi(\alpha)}\psi_{(\pi^\kappa_{\xi,\alpha})^{-1}(\zeta)}\mathrm{|}^\kappa_\alpha.\] Since $\alpha<\kappa$ we have $j(f^\kappa_\xi)(\alpha)=f^\kappa_\xi(\alpha)$, and furthermore \[(\pi^\kappa_{\xi,\alpha})^{-1}[f^\kappa_\xi(\alpha)]=F^\kappa_\xi(\alpha)=(j^{-1}\circ j(\vec\pi)_\alpha^{-1})[j(f^\kappa_\xi)(\alpha)],\] showing the above restrictions of $\varphi$ and of $j(\varphi)$ to be equal,\footnote{Being somewhat more careful here, this in fact also uses that the maps $\pi^\kappa_{\xi,\alpha}$, $j$, and $j(\vec\pi)_\alpha$ are order-preserving, so that both of the above conjunctions are taken of the same formulas \emph{in the same order}.} and thus in particular also showing that $j(\varphi)\mathrm{|}^{j(\kappa)}_\alpha$ is defined, as desired.
The case when $\varphi$ is a $\Sigma^1_\xi$-formula is treated in exactly the same way. \end{proof}
We can now easily deduce the following, which was originally shown as \cite[Proposition 5.7]{CodyHigherIndescribability}.
\begin{proposition}\label{proposition_double_restriction} Suppose $\kappa$ is weakly Mahlo, and $\xi<\kappa^+$. For any formula $\varphi$ which is either $\Pi^1_\xi$ or $\Sigma^1_\xi$ over $V_\kappa$, there is a club $D\subseteq\kappa$ such that for all regular uncountable $\alpha\in D$, $\varphi\mathrm{|}^\kappa_\alpha$ is defined, and the set $D_\alpha$ of all ordinals $\beta<\alpha$ such that $(\varphi\mathrm{|}^\kappa_\alpha)\mathrm{|}^\alpha_\beta$ is defined and $(\varphi\mathrm{|}^\kappa_\alpha)\mathrm{|}^\alpha_\beta=\varphi\mathrm{|}^\kappa_\beta$, is in the club filter on $\alpha$. \end{proposition} \begin{proof}
Assume for a contradiction that the conclusion of the proposition fails. By Lemma \ref{lemma_restriction_is_nice}, this means that there is a stationary set $T$ consisting of regular and uncountable cardinals $\alpha$ such that the set $D_\alpha$ has stationary complement $E_\alpha\subseteq\alpha$. Using Lemma \ref{lemma_restriction_is_nice} once again, we may assume that $(\varphi\mathrm{|}^\kappa_\alpha)\mathrm{|}^\alpha_\beta$ is defined for every $\alpha\in T$ and every $\beta\in E_\alpha$. Let $\vec E$ denote the sequence $\langle E_\alpha\mid\alpha\in T\rangle$. Assume that $T\in G$. Then, $\kappa\in j(T)$, and thus $j(\vec E)_\kappa$ is stationary in $\mathop{\rm Ult}$. But, \[j(\vec E)_\kappa=\{\beta<\kappa\mid(j(\varphi)\mathrm{|}^{j(\kappa)}_\kappa)\mathrm{|}^\kappa_\beta\ne j(\varphi)\mathrm{|}^{j(\kappa)}_\beta\}.\]
Note that by Lemma \ref{lemma_formularepresentation}, $j(\varphi)\mathrm{|}^{j(\kappa)}_\kappa=\varphi$. But then, by Lemma \ref{lemma_restriction_is_nice} and Lemma \ref{lemma_formularestrictions}, $j(\vec E)_\kappa$ is nonstationary in $\mathop{\rm Ult}$, which gives our desired contradiction. \end{proof}
\subsection{Higher indescribability}
The notion of $\Pi^1_\xi$-indescribability of (subsets of) a cardinal $\kappa$ when $\xi<\kappa$ was introduced by Joan Bagaria in \cite{MR3894041}, and was extended by the first author as follows.
\begin{definition}[{\cite[Definition 3.4]{CodyHigherIndescribability}}]\label{definition_indescribability} Suppose $\kappa$ is a cardinal and $\xi<\kappa^+$. A set $S\subseteq\kappa$ is \emph{$\Pi^1_\xi$-indescribable} if for every $\Pi^1_\xi$ sentence $\varphi$ over $V_\kappa$, if $V_\kappa\models\varphi$ then there is some $\alpha\in S$ such that $\varphi\mathrm{|}^\kappa_\alpha$ is defined, and $V_\alpha\models\varphi\mathrm{|}^\kappa_\alpha$. \end{definition}
Note that the value of any particular $\varphi\mathrm{|}^\kappa_\alpha$ depends on our choice of bijections $b_{\kappa,\xi}$, however using Lemma \ref{lemma_restrictionvsj} and Proposition \ref{framework}, it is easy to see that for any two choices of sequences $\langle b_{\kappa,\xi}\mid\xi<\kappa^+\rangle$, the corresponding $\varphi\mathrm{|}^\kappa_\alpha$'s agree on a club, and thus in particular the above notion of higher indescribability is independent of that choice.\footnote{This was also shown using elementary proofs as \cite[Lemma 3.3]{CodyHigherIndescribability} and \cite[Lemma 3.7]{CodyHigherIndescribability}.}
Note also that $S\subseteq\kappa$ is $\Pi^1_0$-indescribable if and only if $S$ is a stationary subset of $\kappa$. We will say that $S\subseteq\kappa$ is $\Pi^1_{-1}$-indescribable in case $S$ is an unbounded subset of $\kappa$. For $\xi\in\{-1\}\cup\kappa^+$, we let $\Pi^1_\xi(\kappa)^+$ be the collection of all $\Pi^1_\xi$-indescribable subsets of $\kappa$. It was shown by the first author in \cite[Theorem 5.5]{CodyHigherIndescribability} that if $\kappa$ is a cardinal, $\xi<\kappa^+$, and $\kappa$ is $\Pi^1_\xi$-indescribable, then $\Pi^1_\xi(\kappa)$ is a nontrivial normal ideal on $\kappa$.
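Here and below, we follow the usual conventions for an ideal $I$ on $\kappa$: $I^+=\mathcal P(\kappa)\setminus I$ denotes the collection of \emph{$I$-positive} sets, and $I^*=\{\kappa\setminus A\mid A\in I\}$ denotes the dual filter. Thus, in the notation just introduced, \[\Pi^1_\xi(\kappa)=\{A\subseteq\kappa\mid A\text{ is not $\Pi^1_\xi$-indescribable}\},\qquad\Pi^1_\xi(\kappa)^*=\{A\subseteq\kappa\mid\kappa\setminus A\in\Pi^1_\xi(\kappa)\}.\]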
\section{Generalizations of a result of Baumgartner}\label{section_generalizing_baumgartner}
A key result from Baumgartner's \cite{MR0384553} is the following theorem, which indicates the strength of subtlety. Recall that $S\subseteq\kappa$ is \emph{subtle} in case whenever $\vec S=\<S_\alpha\mid\alpha\in S\rangle$ is an $S$-list and $C\subseteq\kappa$ is a club, there are $\alpha<\beta$, both in $S\cap C$, such that $S_\alpha=S_\beta\cap\alpha$.
\begin{theorem}[Baumgartner]\cite[Theorem 4.1]{MR0384553}\label{theorem_generalizing_Baumgartner}
Suppose $S\subseteq\kappa$ is subtle and $\vec S=\<S_\alpha\mid\alpha\in S\rangle$ is an $S$-list. Let \[A=\{\alpha\in S\mid(\exists X\subseteq S\cap\alpha)(\forall \eta<\omega\ X\in\Pi^1_\eta(\alpha)^+)\land(X\cup\{\alpha\}\text{ is hom. for }\vec{S})\}.\] Then, $S\setminus A$ is not subtle. \end{theorem}
In this section, we want to provide a strengthening of Baumgartner's theorem with respect to higher indescribability, and then apply this to obtain a related result on iterations of the ineffability operator ${\mathcal I}$.
\subsection{Coding formulas}
When $\kappa$ is inaccessible, we will need a sort of improved coding of $\Pi^1_\xi$- and $\Sigma^1_\xi$-formulas over $V_\kappa$ for $\xi<\kappa^+$, with the property that such formulas over $V_\kappa$ are coded as subsets of $\kappa$.
\begin{definition} Suppose $\kappa$ is an inaccessible cardinal, and fix a bijection $b^\kappa\colon V_\kappa\to\kappa$.\footnote{The purpose of this bijection will be the coding of parameters of our formulas, and it is only for its existence here that we use the inaccessibility of $\kappa$.} For each $\kappa\le\xi<\kappa^+$, fix a well-ordering \[R^\kappa_\xi\subseteq\kappa\times\kappa\] of $\kappa$ such that $\mathop{\rm ot}\nolimits(\kappa,R^\kappa_\xi)=\xi$, and let \[b_{\kappa,\xi}\colon\kappa\to\xi\] be the bijection derived from $R^\kappa_\xi$. We call the above the \emph{coding parameters at $\kappa$}. In what follows, by induction on formula complexity, we define a coding function ${\rm code}^\kappa$ such that whenever $\varphi$ is a $\Pi^1_\xi$- or $\Sigma^1_\xi$-formula over $V_\kappa$ for some $\xi<\kappa^+$, ${\rm code}^\kappa(\varphi)$ is a subset of $\kappa$.
If $\xi<\kappa$, we let ${\rm code}^\kappa(\varphi)$ be a subset of $\kappa$ coding $\varphi$ in some reasonable way, making use of the bijection $b^\kappa$ to code the parameters of $\varphi$. In particular, we require that the first slice of the code satisfies $({\rm code}^\kappa(\varphi))_0=\varnothing$,\footnote{So that we can distinguish this basic case from the later cases.} that the slices with finite indices code the second order parameters of $\varphi$, that only boundedly many nonempty slices are used to code $\varphi$, and that all such slices with infinite index are bounded subsets of $\kappa$.
Let $\varphi$ be a $\Pi^1_\xi$- or $\Sigma^1_\xi$-formula over $V_\kappa$ for some $\kappa\le\xi<\kappa^+$, and assume that we have inductively defined ${\rm code}^\kappa(\psi)$ whenever $\psi$ is of lower complexity. We define ${\rm code}^\kappa(\varphi)$ to be a subset of $\kappa$ as follows. \begin{itemize} \item Suppose $\xi=\zeta+1$ is a successor ordinal. If $\varphi=\forall X_{k_0}\ldots\forall X_{k_m}\psi$ is $\Pi^1_{\zeta+1}$ with $m\in\omega$ and $\psi$ being $\Sigma^1_\zeta$, we define \[({\rm code}^\kappa(\varphi))_0=\{k_0,\ldots,k_m,\omega\}\] and if $\varphi=\exists X_{k_0}\ldots\exists X_{k_m}\psi$ is $\Sigma^1_{\zeta+1}$ with $m\in\omega$ and $\psi$ being $\Pi^1_\zeta$, we define \[({\rm code}^\kappa(\varphi))_0=\{k_0,\ldots,k_m,\omega+1\}.\] In either case, we let \[({\rm code}^\kappa(\varphi))_1={\rm code}^\kappa(\psi),\] and for $1<\nu<\kappa$, we let \[({\rm code}^\kappa(\varphi))_\nu=\varnothing.\] \item Suppose $\xi$ is a limit ordinal. If $\varphi=\bigwedge_{\zeta<\xi}\psi_\zeta$ is $\Pi^1_\xi$, we let \[({\rm code}^\kappa(\varphi))_0=\{0\},\] and if $\varphi=\bigvee_{\zeta<\xi}\psi_\zeta$ is $\Sigma^1_\xi$, we let \[({\rm code}^\kappa(\varphi))_0=\{1\}.\] In either case, we let \[({\rm code}^\kappa(\varphi))_1=\Gamma[R^\kappa_\xi],\] where $\Gamma$ denotes the G\"odel pairing function, and for all $\nu<\kappa$ we let \[({\rm code}^\kappa(\varphi))_{2+\nu}={\rm code}^\kappa(\psi_{b_{\kappa,\xi}(\nu)}).\] \end{itemize} \end{definition}
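Both the coding above and the proof of Lemma \ref{lemma:ineffability_normal} below make use of the G\"odel pairing function $\Gamma$, under which pairs of ordinals are well-ordered first by maximum and then lexicographically. As a purely illustrative aside, the restriction of $\Gamma$ to the natural numbers can be sketched as follows (the function names are ours, and only the finite fragment of the pairing is shown):

```python
from math import isqrt

# Goedel pairing: pairs (a, b) of ordinals are well-ordered first by
# max(a, b), then lexicographically, and Gamma(a, b) is the position of
# (a, b) in this ordering.  Restricted to the natural numbers this gives
# a bijection between N x N and N.  (Illustrative names only.)

def gamma(a, b):
    m = max(a, b)
    # Exactly m * m pairs have maximum below m; among the pairs with
    # maximum exactly m, (0, m), ..., (m-1, m) precede (m, 0), ..., (m, m).
    return m * m + (a if a < m else m + b)

def gamma_inverse(n):
    # Recover the unique pair (a, b) with gamma(a, b) == n.
    m = isqrt(n)          # m = max(a, b)
    r = n - m * m
    return (r, m) if r < m else (m, r - m)
```

One can check on this finite fragment that all pairs with maximum below $m$ are enumerated before any pair with maximum $m$; it is this monotonicity that makes the closure points of $\Gamma$ below $\kappa$ form a club, as used in the proof of Lemma \ref{lemma:ineffability_normal}.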
Fix a sequence of coding parameters at $\alpha$ for every inaccessible $\alpha\le\kappa$, such that the sequence $\langle b^\alpha\mid\alpha\le\kappa\rangle$ is $\subseteq$-increasing. Note that, using the $j$-images of our coding parameters to code in $\mathop{\rm Ult}$, by elementarity, for any relevant formula $\varphi$ over $V_\kappa$, we have \[j({\rm code}^\kappa(\varphi))={\rm code}^{j(\kappa)}(j(\varphi)).\]
\begin{lemma}\label{lemma_code_of_a_restriction} Suppose $\kappa$ is Mahlo. If $\varphi$ is a $\Pi^1_\xi$- or $\Sigma^1_\xi$-formula over $V_\kappa$ for some $\xi<\kappa^+$, $H\subseteq\kappa$ is stationary and consists only of regular cardinals, and for each $\alpha\in H$, there is a $\Pi^1_{f^\kappa_\xi(\alpha)}$- or $\Sigma^1_{f^\kappa_\xi(\alpha)}$-formula $\varphi^\alpha$ over $V_\alpha$ respectively, such that \[{\rm code}^\alpha(\varphi^\alpha)={\rm code}^\kappa(\varphi)\cap\alpha,\] then there is a club $C\subseteq\kappa$ such that for each $\alpha\in H\cap C$, $\varphi\mathrm{|}^\kappa_\alpha$ is defined, and \[{\rm code}^\kappa(\varphi)\cap\alpha={\rm code}^\alpha(\varphi\mathrm{|}^\kappa_\alpha).\] \end{lemma} \begin{proof}
Suppose for a contradiction that the conclusion of the lemma fails. Using Lemma \ref{lemma_restriction_is_nice}, this means that there is a stationary set $T\subseteq H$ such that for every $\alpha\in T$, \[{\rm code}^\kappa(\varphi)\cap\alpha\ne{\rm code}^\alpha(\varphi\mathrm{|}^\kappa_\alpha).\] Assume $T\in G$. Then, $\kappa\in j(T)$, hence in $\mathop{\rm Ult}$, $\kappa$ is regular and \[{\rm code}^{j(\kappa)}(j(\varphi))\cap\kappa\ne{\rm code}^\kappa(j(\varphi)\mathrm{|}^{j(\kappa)}_\kappa).\]
But by Lemma \ref{lemma_restrictionvsj}, this means that in $\mathop{\rm Ult}$,
\begin{align}{\rm code}^{j(\kappa)}(j(\varphi))\cap\kappa\ne{\rm code}^\kappa(\varphi).\label{equation_code}\end{align}
Let us show that (\ref{equation_code}) is false, thus yielding our desired contradiction. First, let us consider the case in which $\xi<\kappa$. We have $j(\varphi(A_1,\ldots,A_n))=\varphi(j(A_1),\ldots,j(A_n))$ in this case, and by our choice of bijections we see that in $\mathop{\rm Ult}$, $b^{j(\kappa)}\supseteq b^\kappa$ and hence first order parameters are coded in the same way. Furthermore, by our choice of \emph{reasonable coding}, we observe that ${\rm code}^{j(\kappa)}(j(\varphi))\cap\kappa$ and ${\rm code}^\kappa(\varphi)$ have the same slices and hence ${\rm code}^{j(\kappa)}(j(\varphi))\cap\kappa= {\rm code}^\kappa(\varphi)$.
Let us inductively look at the cases when $\xi\ge\kappa$.
The successor ordinal case is immediate, comparing all slices of ${\rm code}^{j(\kappa)}(j(\varphi))\cap\kappa$ and of ${\rm code}^\kappa(\varphi)$.
Assume now that $\xi$ is a limit ordinal. We will again be comparing the slices of ${\rm code}^{j(\kappa)}(j(\varphi))\cap\kappa$ and of ${\rm code}^\kappa(\varphi)$. The slices with index $0$ clearly agree. Our assumption, which we haven't used yet, yields a formula $\psi$ of complexity $\xi$ such that in $\mathop{\rm Ult}$, \[{\rm code}^\kappa(\psi)={\rm code}^{j(\kappa)}(j(\varphi))\cap\kappa.\] The slices with index $1$ thus agree between ${\rm code}^\kappa(\psi)$ and ${\rm code}^\kappa(\varphi)$, for they are coding the same well-ordering, and hence they also agree for our desired formulas. The remaining slices agree inductively, contradicting (\ref{equation_code}) as desired. \end{proof}
\subsection{Generalizing Baumgartner's lemma to higher indescribability}\label{subsection_generalizing_Baumgertner}
In this section, we provide the promised strengthening of Theorem \ref{theorem_generalizing_Baumgartner}.
\begin{theorem}\label{theorem_baumgartner} Suppose $S\subseteq\kappa$ is subtle and $\vec{S}=\<S_\alpha\mid\alpha\in S\rangle$ is an $S$-list. Let \[A=\{\alpha\in S\mid(\exists X\subseteq S\cap\alpha)(\forall \eta<\alpha^+\ X\in\Pi^1_\eta(\alpha)^+)\land(X\cup\{\alpha\}\text{ is hom. for }\vec{S})\}.\] Then, $S\setminus A$ is not subtle. \end{theorem}
\begin{proof} Suppose $S\subseteq\kappa$ is subtle, $\vec{S}$ is an $S$-list, and suppose for a contradiction that $S\setminus A$ is subtle. Using that the set of inaccessible cardinals below $\kappa$ is in the subtle filter on $\kappa$, we may assume that all elements of $S$ are inaccessible. We will also assume our canonical functions at $\kappa$ to be based on the bijections $b_{\kappa,\xi}$ used to define the coding at $\kappa$ in the above.
Suppose $\beta\in S\setminus A$. Let $B_\beta=\{\alpha\in S\cap\beta\mid S_\alpha=S_\beta\cap\alpha\}$. Since $B_\beta\cup\{\beta\}$ is homogeneous for $\vec{S}$, it follows that for some limit ordinal $\xi_\beta$ with $\beta\le\xi_\beta<\beta^+$, the set $B_\beta$ is not $\Pi^1_{\xi_\beta}$-indescribable in $\beta$. Let $\varphi^\beta$ thus be a $\Pi^1_{\xi_\beta}$ sentence over $V_\beta$ such that $V_\beta\models\varphi^\beta$, and for all $\alpha\in B_\beta$, we have that $V_\alpha\not\models\varphi^\beta\mathrm{|}^\beta_\alpha$ whenever $\varphi^\beta\mathrm{|}^\beta_\alpha$ is defined.
For each $\beta\in S\setminus A$, let $E_\beta$ code the pair \[\langle S_\beta,{\rm code}^\beta(\varphi^\beta)\rangle\] as a subset of $\beta$ in a natural way. This defines an $(S\setminus A)$-list $\vec{E}=\<E_\beta\mid\beta\in S\setminus A\rangle$. By Theorem \ref{theorem_generalizing_Baumgartner}, there is a Mahlo cardinal $\beta\in S\setminus A$ and a stationary set $H\subseteq (S\setminus A)\cap\beta$ such that $H\cup\{\beta\}$ is homogeneous for $\vec{E}$. Let $\varphi=\varphi^\beta$, and let $\xi=\xi_\beta$.
Recall that for each $\alpha\in S\setminus A$, we have $({\rm code}^\alpha(\varphi^\alpha))_1=\Gamma[R^\alpha_{\xi_\alpha}]$, and thus the homogeneity of $H\cup\{\beta\}$ implies that for all $\alpha\in H$, we have \[R^\beta_\xi\cap(\alpha\times\alpha)=R^\alpha_{\xi_\alpha},\] and therefore \[\xi_\alpha=\mathop{\rm ot}\nolimits(\alpha,R^\alpha_{\xi_\alpha})=\mathop{\rm ot}\nolimits(\alpha,R^\beta_\xi\cap(\alpha\times\alpha))=\mathop{\rm ot}\nolimits(b_{\beta,\xi}[\alpha])=f^\beta_\xi(\alpha).\]
Thus, for each $\alpha\in H$, we have a $\Pi^1_{f^\beta_\xi(\alpha)}$-formula $\varphi^\alpha$ over $V_\alpha$ such that \[{\rm code}^\alpha(\varphi^\alpha)={\rm code}^\beta(\varphi)\cap\alpha.\] Therefore, by Lemma \ref{lemma_code_of_a_restriction}, there is a club $C\subseteq\beta$ such that $\varphi\mathrm{|}^\beta_\alpha$ is defined, and \[{\rm code}^\beta(\varphi)\cap\alpha={\rm code}^\alpha(\varphi\mathrm{|}^\beta_\alpha)\] for all $\alpha\in H\cap C\ne\varnothing$. Fix some $\alpha\in H\cap C$. We have \[{\rm code}^\alpha(\varphi^\alpha)={\rm code}^\beta(\varphi)\cap\alpha={\rm code}^\alpha(\varphi\mathrm{|}^\beta_\alpha),\] and hence $\varphi^\alpha=\varphi\mathrm{|}^\beta_\alpha$. However, since $S_\beta\cap\alpha=S_\alpha$ as well by the homogeneity of $H$, we have $\alpha\in B_\beta$, and hence $V_\alpha\not\models\varphi\mathrm{|}^\beta_\alpha$, contradicting that $V_\alpha\models\varphi^\alpha$. \end{proof}
The following is now immediate from Theorem \ref{theorem_baumgartner}.
\begin{corollary}\label{corollary_below_a_subtle_cardinal} Suppose $\kappa$ is subtle. Then, the set \[\{\alpha<\kappa\mid(\forall\eta<\alpha^+)\ \text{$\alpha$ is $\Pi^1_\eta$-indescribable}\}\] is in the subtle filter on $\kappa$.
{$\Box$} \end{corollary}
\subsection{Pushing Baumgartner's lemma up the ineffability hierarchy}\label{subsection_pushing_Baumgertner}
In this section, we provide our promised application of Theorem \ref{theorem_baumgartner} to the ineffability hierarchy, showing that iterated applications of the ineffability operator yield strong generalizations of the consequences of subtlety described in Theorem \ref{theorem_baumgartner}. We will also apply this result in order to obtain further results on the ineffability hierarchy later on in our paper. We will first need an easy auxiliary lemma, which is the analogue of \cite[Theorem 2.1]{MR1077260} for the ineffability operator. This result is essentially due to Baumgartner (two particular instances are mentioned as \cite[Theorem 2.3 and Theorem 2.4]{MR0384553}), and can be seen to follow from a combination of several results from \cite{HolyLCOandEE}, in particular its \cite[Proposition 10.2]{HolyLCOandEE}. We prefer, however, to provide a self-contained proof, which is a minor adaptation of the proof of \cite[Theorem 2.2]{MR0384553}.
\begin{lemma}\label{lemma:ineffability_normal}
If $I$ is an ideal on $\kappa$, then ${\mathcal I}(I)$ is a normal ideal on $\kappa$. \end{lemma} \begin{proof}
The only nontrivial property is normality. Note first that ${\mathcal I}(I)\supseteq{\mathcal I}([\kappa]^{<\kappa})\supseteq{\mathop{\rm NS}}_\kappa$, where the latter is due to an easy argument that may be found within the proof of \cite[Theorem 2.3]{MR0384553}.
Assume now that $A\in{\mathcal I}(I)^+$. Let $C$ be the club set of all ordinals below $\kappa$ that are closed under the G\"odel pairing function $\Gamma$. By the above, it follows that $A\cap C\in{\mathcal I}(I)^+$, and we may thus assume that every element of $A$ is closed under G\"odel pairing.
Assume that $f\colon A\to\kappa$ is regressive, and let $A_\alpha=f^{-1}(\alpha)$ for every $\alpha<\kappa$. Assume for a contradiction that $A_\alpha\in{\mathcal I}(I)$ for every $\alpha<\kappa$. Thus, for every $\alpha<\kappa$, we may fix an $A_\alpha$-list $\vec A_\alpha=\<a^\alpha_\beta\mid\beta\in A_\alpha\rangle$ which has no homogeneous set in $I^+$.
Let $\vec A=\<a_\beta\mid\beta\in A\rangle$ be an $A$-list defined by letting, for every $\beta\in A$, $a_\beta$ code both $\{f(\beta)\}$ and $a^{f(\beta)}_\beta$, using G\"odel pairing. Since $A\in{\mathcal I}(I)^+$, there is $H\in I^+$ that is homogeneous for $\vec A$. It follows that $H$ is homogeneous for $f$, and we let $\alpha$ be the value of $f$ on $H$. It then follows that $H$ is homogeneous for $\vec A_\alpha$. This yields our desired contradiction. \end{proof}
We need another auxiliary observation, which has already been used by Baumgartner in \cite{MR0384553}, but which we couldn't find a proof of in the set-theoretic literature. For the convenience of our readers, we would therefore like to provide the easy argument.
\begin{observation}
For any cardinal $\kappa$, the subtle ideal on $\kappa$ is contained in the weakly ineffable ideal on $\kappa$. \end{observation} \begin{proof}
Assume that $A\subseteq\kappa$ is not an element of the weakly ineffable ideal ${\mathcal I}([\kappa]^{<\kappa})$ on $\kappa$, that $\vec a$ is an $A$-list, and let $C$ be a club subset of $\kappa$. By Lemma \ref{lemma:ineffability_normal}, $A\cap C\in{\mathcal I}([\kappa]^{<\kappa})^+$. It follows that we can find $H\subseteq A\cap C$ of size $\kappa$ that is homogeneous for $\vec a$; any two ordinals $\alpha<\beta$ in $H$ then satisfy $a_\alpha=a_\beta\cap\alpha$. This shows that $A$ is subtle. \end{proof}
\begin{theorem}\label{theorem_pushing_Baumgartner} Suppose $\gamma<\kappa^+$, $S\in {\mathcal I}^{\gamma+1}([\kappa]^{<\kappa})^+$ and $\vec{S}=\<S_\alpha\mid\alpha\in S\rangle$ is an $S$-list. Let $A$ be the set of all ordinals $\alpha\in S$ such that \[\exists X\subseteq S\cap\alpha\left[(\forall \xi<\alpha^+\ X\in {\mathcal I}^{f^\kappa_\gamma(\alpha)}(\Pi^1_\xi(\alpha))^+) \land (X\cup\{\alpha\}\text{ is hom. for }\vec{S})\right].\] Then, $S\setminus A\in{\mathcal I}^{\gamma+1}([\kappa]^{<\kappa})$. \end{theorem}
\begin{proof} We proceed by induction on $\gamma<\kappa^+$. When $\gamma=0$, the result follows directly from Theorem \ref{theorem_baumgartner}, because the subtle ideal is contained in the weakly ineffable ideal ${\mathcal I}([\kappa]^{<\kappa})$, and $f^\kappa_0(\alpha)=0$ for all $\alpha<\kappa$.
Suppose $\gamma=\delta+1<\kappa^+$ is a successor ordinal, and suppose for a contradiction that $S\setminus A\in{\mathcal I}^{\delta+2}([\kappa]^{<\kappa})^+$. Let $C=\{\alpha<\kappa\mid f^\kappa_{\delta+1}(\alpha)=f^\kappa_\delta(\alpha)+1\}$ be the club subset of $\kappa$ obtained from Lemma \ref{lemma_can}. Then, the set \[E=\{\alpha\in S\setminus A\mid \alpha\text{ is inaccessible}\}\cap C\] is in ${\mathcal I}^{\delta+2}([\kappa]^{<\kappa})^+$. For each $\alpha\in E$, let $B_\alpha=\{\beta\in S\cap\alpha\mid S_\beta=S_\alpha\cap\beta\}$. Since $B_\alpha\cup\{\alpha\}$ is homogeneous for $\vec{S}$ and $\alpha\in S\setminus A$, there is an ordinal $\xi_\alpha<\alpha^+$ such that $B_\alpha\in{\mathcal I}^{f^\kappa_\delta(\alpha)+1}(\Pi^1_{\xi_\alpha}(\alpha))$, and hence we may fix a $B_\alpha$-list $\vec{B}^\alpha=\<b^\alpha_\beta\mid\beta\in B_\alpha\rangle$ such that $\vec{B}^\alpha$ has no homogeneous set in ${\mathcal I}^{f^\kappa_\delta(\alpha)}(\Pi^1_{\xi_\alpha}(\alpha))^+$.
For $\alpha\in E$, let $E_\alpha$ code the triple $\langle S_\alpha,B_\alpha,\vec{B}^\alpha\rangle$ as a subset of $\alpha$ in a natural way. This defines an $E$-list $\vec{E}=\<E_\alpha\mid\alpha\in E\rangle$. Since $E\in {\mathcal I}^{\delta+2}([\kappa]^{<\kappa})^+$, we may fix $H\in \mathcal P(E)\cap{\mathcal I}^{\delta+1}([\kappa]^{<\kappa})^+$ which is homogeneous for $\vec{E}$. It follows that $H$ is homogeneous for $\<S_\alpha\mid\alpha\in E\rangle$, $\<B_\alpha\mid\alpha\in E\rangle$ and $\langle\vec{B}^\alpha\mid\alpha\in E\rangle$. We let $D=\bigcup_{\alpha\in H} S_\alpha$, $B=\bigcup_{\alpha\in H}B_\alpha$ and $\vec{B}=\bigcup_{\alpha\in H}\vec{B}^\alpha=\<b_\alpha\mid\alpha\in B\rangle$. Since $B=\{\alpha<\kappa\mid S_\alpha=D\cap\alpha\}$, it follows that $H\subseteq B$.
Let $A_0$ be the set of all ordinals $\alpha\in H$ such that \[\exists X\subseteq H\cap\alpha\left[(\forall\xi<\alpha^+\ X\in {\mathcal I}^{f^\kappa_\delta(\alpha)}(\Pi^1_\xi(\alpha))^+) \land (X\cup\{\alpha\}\text{ is hom. for }\vec{B})\right].\] By our inductive hypothesis, $H\setminus A_0\in{\mathcal I}^{\delta+1}([\kappa]^{<\kappa})$, and hence $A_0\in {\mathcal I}^{\delta+1}([\kappa]^{<\kappa})^+$. Thus, we may fix an $\alpha\in A_0$. Since $\alpha\in H$, it follows by homogeneity that $\vec{B}\upharpoonright (H\cap\alpha)=\vec{B}^\alpha\upharpoonright H$. But by the definition of $A_0$, and since $\xi_\alpha<\alpha^+$, there is some $X\in \mathcal P(H\cap\alpha)\cap{\mathcal I}^{f^\kappa_\delta(\alpha)}(\Pi^1_{\xi_\alpha}(\alpha))^+$ which is homogeneous for $\vec{B}^\alpha$, which is a contradiction.
Now let us suppose $\gamma<\kappa^+$ is a limit ordinal, and suppose again for a contradiction that $S\setminus A\in{\mathcal I}^{\gamma+1}([\kappa]^{<\kappa})^+$. Since by Lemma \ref{lemma_can}, the set \[C=\{\alpha<\kappa\mid f^\kappa_\gamma(\alpha)\text{ is a limit ordinal and } f^\kappa_\gamma(\alpha)=\bigcup_{\delta\in F^\kappa_\gamma(\alpha)} f^\kappa_\delta(\alpha) \}\] is in the club filter on $\kappa$, it follows that the set \[E=\{\alpha\in S\setminus A\mid\text{$\alpha$ is inaccessible}\}\cap C\] is in ${\mathcal I}^{\gamma+1}([\kappa]^{<\kappa})^+$.
For each $\alpha\in E$, let $B_\alpha=\{\beta\in S\cap\alpha\mid S_\beta=S_\alpha\cap\beta\}$. Since $B_\alpha\cup\{\alpha\}$ is homogeneous for $\vec{S}$, and $\alpha\in S\setminus A$, there is some $\xi_\alpha<\alpha^+$ such that $B_\alpha\in{\mathcal I}^{f^\kappa_\gamma(\alpha)}(\Pi^1_{\xi_\alpha}(\alpha))$. Since $\alpha\in C$, we have \[B_\alpha\in{\mathcal I}^{f^\kappa_\gamma(\alpha)}(\Pi^1_{\xi_\alpha}(\alpha))=\bigcup_{\delta\in F^\kappa_\gamma(\alpha)}{\mathcal I}^{f^\kappa_\delta(\alpha)}(\Pi^1_{\xi_\alpha}(\alpha))=\bigcup_{\beta<\alpha}{\mathcal I}^{f^\kappa_{b_{\kappa,\gamma}(\beta)}(\alpha)}(\Pi^1_{\xi_\alpha}(\alpha)).\] Using that $f^\kappa_\gamma(\alpha)$ is a limit ordinal once again, we may choose an ordinal $g(\alpha)<\alpha$ such that \[B_\alpha\in{\mathcal I}^{f^\kappa_{b_{\kappa,\gamma}(g(\alpha))}(\alpha)+1}(\Pi^1_{\xi_\alpha}(\alpha)).\] This defines a regressive function $g:E\to\kappa$, and by the normality of ${\mathcal I}^{\gamma+1}([\kappa]^{<\kappa})^+$ that follows from Lemma \ref{lemma:ineffability_normal}, there is an $E^*\in \mathcal P(E)\cap {\mathcal I}^{\gamma+1}([\kappa]^{<\kappa})^+$ and some $\beta_0<\kappa$ such that $g(\alpha)=\beta_0$ for all $\alpha\in E^*$. Let $\nu=b_{\kappa,\gamma}(\beta_0)$ and notice that for all $\alpha\in E^*$, \[B_\alpha\in{\mathcal I}^{f^\kappa_{\nu}(\alpha)+1}(\Pi^1_{\xi_\alpha}(\alpha)).\] For each $\alpha\in E^*$, we fix a $B_\alpha$-list $\vec{B}^\alpha=\<b^\alpha_\beta\mid\beta\in B_\alpha\rangle$ such that $\vec{B}^\alpha$ has no homogeneous set in ${\mathcal I}^{f^\kappa_{\nu}(\alpha)}(\Pi^1_{\xi_\alpha}(\alpha))^+$.
Now we define an $E^*$-list by letting $E^*_\alpha$ code the triple $\langle S_\alpha,B_\alpha,\vec{B}^\alpha\rangle$ as a subset of $\alpha$ in a natural way, for all $\alpha\in E^*$. This defines $\vec{E}^*=\<E^*_\alpha\mid\alpha\in E^*\rangle$. Since $E^*\in {\mathcal I}^{\gamma+1}([\kappa]^{<\kappa})^+$, we may fix an $H\in \mathcal P(E^*)\cap{\mathcal I}^\gamma([\kappa]^{<\kappa})^+$ which is homogeneous for $\vec{E}^*$. Then, $H$ is homogeneous for $\<S_\alpha\mid\alpha\in S\rangle$, $\<B_\alpha\mid\alpha\in E^*\rangle$ and $\langle\vec{B}^\alpha\mid\alpha\in E^*\rangle$. We let $D=\bigcup_{\alpha\in H}S_\alpha$, $B=\bigcup_{\alpha\in H}B_\alpha$ and $\vec{B}=\bigcup_{\alpha\in H}\vec{B}^\alpha=\<b_\alpha\mid\alpha\in B\rangle$. Since $B=\{\alpha<\kappa\mid S_\alpha=D\cap\alpha\}$, it follows that $H\subseteq B$.
Now since $\nu<\gamma$, we have $H\in {\mathcal I}^\gamma([\kappa]^{<\kappa})^+\subseteq {\mathcal I}^{\nu+1}([\kappa]^{<\kappa})^+$, and we may apply the inductive hypothesis to the $H$-list $\vec{B}\upharpoonright H$. Let $A_0$ be the set of all ordinals $\alpha\in H$ such that \[\exists X\subseteq H\cap\alpha\left[(\forall\xi<\alpha^+\ X\in{\mathcal I}^{f^\kappa_{\nu}(\alpha)}(\Pi^1_\xi(\alpha))^+)\land (X\cup\{\alpha\}\text{ is hom. for }\vec{B}\upharpoonright H)\right].\] It follows that $H\setminus A_0\in{\mathcal I}^{\nu+1}([\kappa]^{<\kappa})$, which implies that $A_0\in{\mathcal I}^{\nu+1}([\kappa]^{<\kappa})^+$. Fix $\alpha\in A_0$. Since $\alpha\in H$, homogeneity implies that $\vec{B}\upharpoonright (H\cap\alpha)=\vec{B}^\alpha\upharpoonright H$. But, by the definition of $A_0$, and the fact that $\xi_\alpha<\alpha^+$, it follows that there is some $X\in \mathcal P(H\cap\alpha)\cap {\mathcal I}^{f^\kappa_{\nu}(\alpha)}(\Pi^1_{\xi_\alpha}(\alpha))^+$ which is homogeneous for $\vec{B}^\alpha$, a contradiction. \end{proof}
The following is now immediate from Theorem \ref{theorem_pushing_Baumgartner}.
\begin{corollary}\label{corollary_below_gamma_plus_1_almost_ineffability} Suppose $\kappa\in{\mathcal I}^{\gamma+1}([\kappa]^{<\kappa})^+$ where $\gamma<\kappa^+$. Then the set \[\{\alpha<\kappa\mid (\forall\xi<\alpha^+)\ \alpha\in{\mathcal I}^{f^\kappa_\gamma(\alpha)}(\Pi^1_\xi(\alpha))^+\}\] is in the filter ${\mathcal I}^{\gamma+1}([\kappa]^{<\kappa})^*$.
{$\Box$} \end{corollary}
\subsection{A version of Baumgartner's lemma for the strongly Ramsey ideal}
Recall that in Section \ref{section_introduction}, we introduced the strongly Ramsey subset operator $\mathcal{S}$. Next, we show that Baumgartner's lemma can, in a sense, be generalized to the \emph{strongly Ramsey ideal} ${\mathcal S}([\kappa]^{<\kappa})^+$.
\begin{theorem}\label{theorem_baumgartners_lemma_for_the_strongly_ramsey_ideal} Suppose $S\in{\mathcal S}([\kappa]^{<\kappa})^+$ and $f:[S]^{<\omega}\to\kappa$ is a regressive function. Let \[A=\{\alpha\in S\mid(\exists X\subseteq S\cap \alpha)(\forall\eta<\alpha^+\ X\in\Pi^1_\eta(\alpha)^+)\land(\text{$X$ is hom. for $f$})\}.\] Then, $S\setminus A\in{\mathcal S}([\kappa]^{<\kappa})$. \end{theorem}
\begin{proof} Suppose $S\setminus A\in{\mathcal S}([\kappa]^{<\kappa})^+$. Let $M$ be a $\kappa$-model with $S\setminus A,f\in M$ and let $U\subseteq[\kappa]^\kappa$ be a $\kappa$-amenable $M$-normal $M$-ultrafilter such that $S\setminus A\in U$. Let $j:M\to N$ be the usual elementary embedding obtained from $U$ such that $N$ is transitive.
Since $U$ is $\kappa$-amenable, it follows that for every $B\subseteq\kappa^n\times\kappa$ in $M$, the set $\{\vec{\alpha}\in\kappa^n\mid B_{\vec{\alpha}}\in U\}$ is in $M$, where $B_{\vec{\alpha}}=\{\beta<\kappa\mid\vec{\alpha}\mathbin{{}^\smallfrown}\beta\in B\}$. Thus, we can define the \emph{product ultrafilters} $U^n$ on $P(\kappa^n)^M$ by induction as follows. For $B\subseteq\kappa^n\times\kappa$, we let $B\in U^{n+1}=U^n\times U$ if and only if $B\in M$ and $\{\vec{\alpha}\in\kappa^n\mid B_{\vec{\alpha}}\in U\}\in U^n$. For each $n\in\omega\setminus \{0\}$, we let $j_{U^n}:M\to N_{U^n}$ be the ultrapower of $M$ by $U^n$ and note that it follows from \cite[Proposition 2.32]{MR2710923} that $N_{U^n}$ is well-founded. Furthermore, by \cite[Lemma 2.31]{MR2710923}, we have $j_{U^{n+1}}=j_{j_{U^n}(U)}\circ j_{U^n}$, where $j_{j_{U^n}(U)}$ is the ultrapower of $N_{U^n}$ by $j_{U^n}(U)$ (for more details on product ultrafilters one may consult \cite[Chapter 2]{MR2710923} or \cite{MR2830415}).
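Unwinding the recursion at its first step (with $U^1=U$), the definition reads, for $B\subseteq\kappa\times\kappa$ with $B\in M$: \[B\in U^2\quad\text{if and only if}\quad\{\alpha<\kappa\mid B_\alpha\in U\}\in U,\qquad\text{where } B_\alpha=\{\beta<\kappa\mid(\alpha,\beta)\in B\}.\] Here, $\kappa$-amenability is used precisely to guarantee that the set $\{\alpha<\kappa\mid B_\alpha\in U\}$ belongs to $M$, so that it can in turn be measured by $U$.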
For each $n\in\omega\setminus \{0\}$, let $f_n=f\upharpoonright[S]^n$. Since $f$ is regressive, it follows by elementarity that the ordinal $\gamma_1=j_{U^1}(f)(\{\kappa\})$ is less than $\kappa$. Furthermore, for each $n<\omega$ the ordinal $\gamma_{n+2}=j_{U^{n+2}}(f)(\{\kappa,j_U(\kappa),j_{U^2}(\kappa),\ldots,j_{U^{n+1}}(\kappa)\})$ is less than $\kappa$. Fix $n\in\omega\setminus\{0\}$. Let $A_n=\{\vec{\alpha}\in[S]^n\mid f_n(\vec{\alpha})=\gamma_n\}$. By \cite[Lemma 2.34]{MR2710923}, there is a $B_n\in U$ such that for all $\beta_1<\cdots<\beta_n$ in $B_n$ we have $(\beta_1,\ldots,\beta_n)\in A_n$, that is, $f_n(\beta_1,\ldots,\beta_n)=\gamma_n$. Hence $B_n$ is homogeneous for $f_n$.
Clearly $B=\bigcap_{n<\omega}B_n$ is homogeneous for $f$, and since $M$ is a $\kappa$-model and $U$ is $M$-normal, it follows that $B\in U$ and hence $\kappa\in j(B)$. Now we have $B\in N$ and furthermore, $N$ thinks that $B$ is homogeneous for $f$. But, since $\kappa\in j(S\setminus A)$, it follows that $N$ thinks that there are no subsets of $S$ which are both $\Pi^1_\eta$-indescribable in $\kappa$ for all $\eta<(\kappa^+)^N$ and homogeneous for $f$. Hence, in $N$, there must be some $\eta<(\kappa^+)^N$ such that $B$ is not $\Pi^1_\eta$-indescribable in $\kappa$. Working in $N$, fix a $\Pi^1_\eta$-sentence $\varphi$ over $V_\kappa$ that is true in $V_\kappa$ such that for all $\alpha\in B$ we have $V_\alpha\models\lnot\varphi\mathrm{|}^\kappa_\alpha$. Since $M$ and $N$ are both $\kappa$-models and since $P(\kappa)^M=P(\kappa)^N$, it follows that $(V_\kappa\models\varphi)^M$ and \[((\forall\alpha\in B)\ V_\alpha\models\lnot\varphi\mathrm{|}^\kappa_\alpha)^M.\] Hence by elementarity, \[((\forall\alpha\in j(B))\ V_\alpha\models\lnot j(\varphi)\mathrm{|}^{j(\kappa)}_\alpha)^N.\] But this is a contradiction because $\kappa\in j(B)$ and $(V_\kappa\models\varphi)^N$ where $j(\varphi)\mathrm{|}^{j(\kappa)}_\kappa=\varphi$ (see the proof of Lemma \ref{lemma_restrictionvsj}). \end{proof}
\section{Indescribability from homogeneity}\label{section:indescribability_from_homogeneity}
Extending \cite[Lemma 7.1]{MR0384553} and \cite[Lemma 5.1]{MR4206111}, we show that for all $\xi<\kappa^+$, $S\in{\mathcal I}(\Pi^1_\xi(\kappa))^+$ implies $S\in\Pi^1_{\xi+2}(\kappa)^+$. Let us note that the following lemma has precursors in the work of Welch et al. (see \cite[Corollary 3.24]{MR2817562} and \cite{brickhill-welch}).
\begin{lemma}\label{lemma_indescribability_from_homogeneity} Suppose $S\subseteq\kappa$, $\xi<\kappa^+$, and for every $S$-list $\vec{S}$, there is a set $H\in \mathcal P(S)\cap\bigcap_{\zeta\in\{-1\}\cup\xi}\Pi^1_\zeta(\kappa)^+$ that is homogeneous for $\vec{S}$. Then, $S$ is a $\Pi^1_{\xi+1}$-indescribable subset of $\kappa$. \end{lemma}
\begin{proof} The case in which $\xi<\kappa$ is handled by \cite[Lemma 2.20]{MR4206111}. The case in which $\kappa<\xi<\kappa^+$ and $\xi$ is a successor ordinal is similar (see the corresponding case in \cite[Lemma 2.20]{MR4206111}), and is thus left to the reader.
Suppose $\kappa\leq\xi<\kappa^+$, $\xi$ is a limit ordinal, and every $S$-list has a homogeneous set $H\in \mathcal P(S)\cap\bigcap_{\zeta<\xi}\Pi^1_\zeta(\kappa)^+$. Suppose for a contradiction that $S$ is not $\Pi^1_{\xi+1}$-indescribable. Let \[\varphi=\forall X\left(\bigvee_{\zeta<\xi}\psi_\zeta\right)\] be $\Pi^1_{\xi+1}$ over $V_\kappa$, such that $V_\kappa\models\varphi$, and such that for all $\alpha\in S$, we have $V_\alpha\not\models\varphi\mathrm{|}^\kappa_\alpha$ whenever $\varphi\mathrm{|}^\kappa_\alpha$ is defined. Fix a bijection $b:V_\kappa\to\kappa$, let \[C=\{\alpha<\kappa\mid b\upharpoonright\alpha:V_\alpha\to\alpha\text{ is a bijection and }\varphi\mathrm{|}^\kappa_\alpha\text{ is defined}\},\] and note that $C$ is a club subset of $\kappa$ by Lemma \ref{lemma_restriction_is_nice}. For each $\alpha\in S\cap C$, the sentence $\varphi\mathrm{|}^\kappa_\alpha$ thus is $\Pi^1_{f^\kappa_{\xi+1}(\alpha)}$ over $V_\alpha$, and hence $V_\alpha\models\lnot\varphi\mathrm{|}^\kappa_\alpha$, where \[\lnot\varphi\mathrm{|}^\kappa_\alpha=\exists X\left(\bigwedge_{\zeta\in f^\kappa_\xi(\alpha)}\lnot\psi_{(\pi^\kappa_{\xi,\alpha})^{-1}(\zeta)}\mathrm{|}^\kappa_\alpha\right).\] For each $\alpha\in S\cap C$, let $T_\alpha\subseteq V_\alpha$ be such that \begin{align}V_\alpha\models\bigwedge_{\zeta\in f^\kappa_\xi(\alpha)}(\lnot\psi_{(\pi^\kappa_{\xi,\alpha})^{-1}(\zeta)}\mathrm{|}^\kappa_\alpha)(T_\alpha).\label{equation_ind_from_hom} \end{align} For $\alpha\in S\setminus C$ let $T_\alpha\subseteq\alpha$ be arbitrary. Now we define an $S$-list $\vec{S}=\<S_\alpha\mid\alpha\in S\rangle$ where $S_\alpha=b[T_\alpha]$ for all $\alpha\in S$. Let $H\in \mathcal P(S)\cap\bigcap_{\zeta<\xi}\Pi^1_\zeta(\kappa)^+$ be ho\-mo\-ge\-ne\-ous for $\vec{S}$. Notice that $H\cap C\in\bigcap_{\zeta<\xi}\Pi^1_\zeta(\kappa)^+$, let $R=\bigcup_{\alpha\in H\cap C}S_\alpha$, and let $T=b^{-1}[R]$. 
Since $V_\kappa\models\varphi$, we have $V_\kappa\models\psi_\zeta(T)$ for some fixed $\zeta<\xi$. Since the set \[H\cap C\cap\{\alpha<\kappa\mid \zeta\in F^\kappa_\xi(\alpha)\}\] is $\Pi^1_\zeta$-indescribable in $\kappa$, it follows that for some $\alpha\in H\cap C$ with $\zeta\in F^\kappa_\xi(\alpha)$ we have $V_\alpha\models(\psi_\zeta\mathrm{|}^\kappa_\alpha)(T\cap V_\alpha)$. By homogeneity, we have $R\cap\alpha=S_\alpha$, and hence $T\cap V_\alpha=T_\alpha$. This implies that $V_\alpha \models(\psi_\zeta\mathrm{|}^\kappa_\alpha)(T_\alpha)$, but this contradicts (\ref{equation_ind_from_hom}). \end{proof}
It was shown in \cite[Proposition 3.8]{CodyHigherIndescribability} that every measurable cardinal $\kappa$ is $\Pi^1_\xi$-indescribable for every $\xi<\kappa^+$. We want to make use of Lemma \ref{lemma_indescribability_from_homogeneity} in order to provide a better upper bound. Recall that a cardinal $\kappa$ is \emph{completely ineffable} if there is a collection $\mathcal S$ of stationary subsets of $\kappa$ that is closed under the taking of supersets (such a collection is called a \emph{stationary class}), such that whenever $S\in\mathcal S$ and $\vec S$ is an $S$-list, there is a set $H\in\mathcal S$ that is homogeneous for $\vec S$.
\begin{proposition}
If $\kappa$ is completely ineffable, then $\kappa$ is $\Pi^1_\xi$-indescribable for every $\xi<\kappa^+$. \end{proposition} \begin{proof}
Let $\mathcal T$ be the union of all stationary classes witnessing that $\kappa$ is completely ineffable. It is easy to see that $\mathcal T$ itself is a stationary class witnessing that $\kappa$ is completely ineffable, and also that $I=\mathcal P(\kappa)\setminus\mathcal T$ is an ideal on $\kappa$ -- in fact, $I$ is what is called the \emph{completely ineffable ideal} on $\kappa$, as defined in \cite{MR918427}. Note that by the very definition of ${\mathcal I}$, we have ${\mathcal I}(I)=I$. Using Lemma \ref{lemma_indescribability_from_homogeneity}, and recalling that $\Pi^1_0(\kappa)\subseteq I$ is the nonstationary ideal on $\kappa$, a straightforward induction now shows that $\kappa$ is $\Pi^1_\xi$-indescribable for every $\xi<\kappa^+$. \end{proof}
\section{Some properties of the ineffability and the Ramsey operator}\label{section:basic}
In this section, we will provide two lemmas about the ineffability and the Ramsey operator which will be required later on, but which should also be of independent interest. For the Ramsey operator, when $\gamma$ and $\xi$ are both less than $\kappa$, these are due to the first author in \cite[Lemma 3.1 and Lemma 3.2]{MR4206111}.
\begin{lemma}\label{lemma_pos_union_of_pos_sets_is_pos} Let ${\mathcal O}\in\{{\mathcal I},{\mathcal R}\}$. Suppose $\kappa$ is a cardinal, $\gamma<\kappa^+$, $\xi\in\{-1\}\cup\kappa^+$, $S\in{\mathcal O}^\gamma(\Pi^1_\xi(\kappa))^+$, and $S_\alpha\in{\mathcal O}^{f^\kappa_\gamma(\alpha)}(\Pi^1_{f^\kappa_\xi(\alpha)}(\alpha))^+$ for each $\alpha\in S$. Then $\bigcup_{\alpha\in S}S_\alpha\in{\mathcal O}^\gamma(\Pi^1_\xi(\kappa))^+$. \end{lemma}
\begin{proof} Let us assume that ${\mathcal O}={\mathcal I}$; when ${\mathcal O}={\mathcal R}$ the proof is essentially the same, only one must replace lists by regressive functions. We proceed by induction on $\gamma$. Suppose $\gamma=0$, fix $\xi<\kappa^+$, $S\in\Pi^1_\xi(\kappa)^+$ and let $S_\alpha\in\Pi^1_{f^\kappa_\xi(\alpha)}(\alpha)^+$ for all $\alpha\in S$. Fix a $\Pi^1_\xi$ sentence $\varphi$ over $V_\kappa$ such that $V_\kappa\models\varphi$. By Lemma \ref{lemma_restriction_is_nice}, the set \[C=\{\alpha<\kappa\mid\varphi\mathrm{|}^\kappa_\alpha\text{ is defined, and hence $\Pi^1_{f^\kappa_\xi(\alpha)}$ over $V_\alpha$}\}\] is in the club filter on $\kappa$. By Proposition \ref{proposition_double_restriction}, there is a club subset $D$ of $\kappa$ such that for all regular uncountable $\alpha\in D$, the set $D_\alpha$ of all ordinals $\beta<\alpha$ for which $(\varphi\mathrm{|}^\kappa_\alpha)\mathrm{|}^\alpha_\beta$ is defined and $(\varphi\mathrm{|}^\kappa_\alpha)\mathrm{|}^\alpha_\beta=\varphi\mathrm{|}^\kappa_\beta$ is in the club filter on $\alpha$. Since $S\cap C\cap D\in\Pi^1_\xi(\kappa)^+$, we may fix an $\alpha\in S\cap C\cap D$ such that $V_\alpha\models\varphi\mathrm{|}^\kappa_\alpha$. Now, since $S_\alpha\cap D_\alpha\in\Pi^1_{f^\kappa_\xi(\alpha)}(\alpha)^+$ and $\varphi\mathrm{|}^\kappa_\alpha$ is $\Pi^1_{f^\kappa_\xi(\alpha)}$ over $V_\alpha$, we may fix $\beta\in S_\alpha\cap D_\alpha$ such that $V_\beta\models(\varphi\mathrm{|}^\kappa_\alpha)\mathrm{|}^\alpha_\beta$. Since $\beta\in D_\alpha$ implies $(\varphi\mathrm{|}^\kappa_\alpha)\mathrm{|}^\alpha_\beta=\varphi\mathrm{|}^\kappa_\beta$, we have $V_\beta\models\varphi\mathrm{|}^\kappa_\beta$. Thus, $\bigcup_{\alpha\in S}S_\alpha$ is a $\Pi^1_\xi$-indescribable subset of $\kappa$.
Suppose $\gamma<\kappa^+$ is a limit ordinal. Fix $\xi<\kappa^+$, $S\in{\mathcal I}^\gamma(\Pi^1_\xi(\kappa))^+$ and let $S_\alpha\in {\mathcal I}^{f^\kappa_\gamma(\alpha)}(\Pi^1_{f^\kappa_\xi(\alpha)}(\alpha))^+$ for all $\alpha\in S$. It suffices to show that $\bigcup_{\alpha\in S}S_\alpha\in{\mathcal I}^\delta(\Pi^1_\xi(\kappa))^+$ for all $\delta<\gamma$. Fix $\delta<\gamma$. Since $\delta<\gamma$ in any generic ultrapower $\mathop{\rm Ult}$ obtained by forcing with $P(\kappa)/{\mathop{\rm NS}}_\kappa$, the set \[C=\{\alpha<\kappa\mid f^\kappa_\delta(\alpha)<f^\kappa_\gamma(\alpha)\}\] is in the club filter on $\kappa$. Thus, $S\cap C\in{\mathcal I}^\delta(\Pi^1_\xi(\kappa))^+$, and for each $\alpha\in S\cap C$, we have $S_\alpha\in{\mathcal I}^{f^\kappa_\delta(\alpha)}(\Pi^1_{f^\kappa_\xi(\alpha)}(\alpha))^+$. By our inductive hypothesis, we have $\bigcup_{\alpha\in S\cap C}S_\alpha\in{\mathcal I}^\delta(\Pi^1_\xi(\kappa))^+$, and since $\bigcup_{\alpha\in S\cap C}S_\alpha\subseteq\bigcup_{\alpha\in S}S_\alpha$, it follows that $\bigcup_{\alpha\in S}S_\alpha\in{\mathcal I}^\delta(\Pi^1_\xi(\kappa))^+$. As $\delta<\gamma$ was arbitrary, we conclude that $\bigcup_{\alpha\in S}S_\alpha\in{\mathcal I}^\gamma(\Pi^1_\xi(\kappa))^+$.
Suppose $\gamma=\delta+1$ is a successor ordinal. Fix $\xi<\kappa^+$, $S\in{\mathcal I}^{\delta+1}(\Pi^1_\xi(\kappa))^+$ and let $S_\alpha\in {\mathcal I}^{f^\kappa_{\delta+1}(\alpha)}(\Pi^1_{f^\kappa_\xi(\alpha)}(\alpha))^+$ for each $\alpha\in S$. Let $T=\bigcup_{\alpha\in S}S_\alpha$. Fix a $T$-list $\vec{T}=\<T_\alpha\mid\alpha\in T\rangle$. We must show that there is a homogeneous set for $\vec{T}$ in ${\mathcal I}^\delta(\Pi^1_\xi(\kappa))^+$. By Lemma \ref{lemma_can}, the set \[C=\{\alpha<\kappa\mid f^\kappa_{\delta+1}(\alpha)=f^\kappa_\delta(\alpha)+1\}\] is in the club filter on $\kappa$. Thus $S\cap C\in{\mathcal I}^{\delta+1}(\Pi^1_\xi(\kappa))^+$. For each $\alpha\in S\cap C$, the $S_\alpha$-list $\vec{T}\upharpoonright S_\alpha$ has a homogeneous set $H_\alpha\in \mathcal P(S_\alpha)\cap{\mathcal I}^{f^\kappa_\delta(\alpha)}(\Pi^1_{f^\kappa_\xi(\alpha)}(\alpha))^+$. Let $H\in{\mathcal I}^\delta(\Pi^1_\xi(\kappa))^+$ be homogeneous for the $(S\cap C)$-list $\<H_\alpha\mid\alpha\in S\cap C\rangle$. By our inductive hypothesis, $\bigcup_{\alpha\in H}H_\alpha\in{\mathcal I}^\delta(\Pi^1_\xi(\kappa))^+$, and it is easy to see that this set is homogeneous for $\vec{T}$. \end{proof}
\begin{lemma}\label{lemma_set_of_nons_is_positive} Let ${\mathcal O}\in\{{\mathcal I},{\mathcal R}\}$. Suppose $\kappa$ is a cardinal, $\gamma<\kappa^+$ and $\xi\in\{-1\}\cup\kappa^+$. If $\kappa\in{\mathcal O}^\gamma(\Pi^1_\xi(\kappa))^+$, then the set \[S_\kappa=\{\alpha<\kappa\mid\alpha\in{\mathcal O}^{f^\kappa_\gamma(\alpha)}(\Pi^1_{f^\kappa_\xi(\alpha)}(\alpha))\}\] is in ${\mathcal O}^\gamma(\Pi^1_\xi(\kappa))^+$. \end{lemma}
\begin{proof} Assume for a contradiction that the statement of the lemma does not hold true, and let $\kappa$ be the least counterexample: the least cardinal for which there are $\gamma<\kappa^+$ and $\xi\in\{-1\}\cup\kappa^+$ such that $\kappa\in{\mathcal O}^\gamma(\Pi^1_\xi(\kappa))^+$ and $S:=S_\kappa\in{\mathcal O}^\gamma(\Pi^1_\xi(\kappa))$. Then, $\kappa\setminus S\in{\mathcal O}^\gamma(\Pi^1_\xi(\kappa))^+$. For each $\alpha\in\kappa\setminus S$, we have $\alpha\in{\mathcal O}^{f^\kappa_\gamma(\alpha)}(\Pi^1_{f^\kappa_\xi(\alpha)}(\alpha))^+$, and by the minimality of $\kappa$, the set $S_\alpha=S\cap\alpha$ is in ${\mathcal O}^{f^\kappa_\gamma(\alpha)}(\Pi^1_{f^\kappa_\xi(\alpha)}(\alpha))^+$. Thus, by Lemma \ref{lemma_pos_union_of_pos_sets_is_pos}, the set $S=\bigcup_{\alpha\in\kappa\setminus S}S_\alpha$ is in ${\mathcal O}^\gamma(\Pi^1_\xi(\kappa))^+$, a contradiction. \end{proof}
Next, we provide a result for the strongly Ramsey ideal which is analogous to the base case of Lemma \ref{lemma_set_of_nons_is_positive}. This result follows from more general results in \cite[Lemma 14.2]{MR4156888} (with the core argument being \cite[Lemma 9.15]{MR4156888}), however we would like to provide a proof for the particular case of strongly Ramsey cardinals, also in order to allow for the discussion of possible generalizations that follows in the remark below.
\begin{lemma}[Holy-L\"ucke]\label{lemma_set_of_nons_for_strongly_ramsey_ideal} For every cardinal $\kappa$, if $\kappa\in{\mathcal S}([\kappa]^{<\kappa})^+$, then the set \[T=\{\alpha<\kappa\mid\alpha\in{\mathcal S}([\alpha]^{<\alpha})\}\] is in ${\mathcal S}([\kappa]^{<\kappa})^+$. \end{lemma} \begin{proof} Suppose the result is false and let $\kappa$ be the least counterexample. Then $\kappa$ is strongly Ramsey and $T\in{\mathcal S}([\kappa]^{<\kappa})$. This implies $\kappa\setminus T\in{\mathcal S}([\kappa]^{<\kappa})^*$, and hence there is an $A_T\subseteq\kappa$ such that whenever $M$ is a $\kappa$-model with $A_T,\kappa\setminus T\in M$ and $U\subseteq[\kappa]^\kappa$ is a $\kappa$-amenable $M$-normal $M$-ultrafilter on $\kappa$, it must follow that $\kappa\setminus T\in U$. Fix such an $M$ and $U$, and let $j:M\to N$ be the ultrapower embedding obtained from $U$. Notice that $\kappa\in j(\kappa\setminus T)$, and hence $\kappa$ is strongly Ramsey in $N$.
By our assumption on $\kappa$, it follows that for all $\alpha<\kappa$, if $\alpha$ is strongly Ramsey, then the set \[T\cap\alpha=\{\beta<\alpha\mid\beta\in{\mathcal S}([\beta]^{<\beta})\}\] is in ${\mathcal S}([\alpha]^{<\alpha})^+$. Since $M$ is a $\kappa$-model, this statement also holds in $M$. So, since $\kappa$ is strongly Ramsey in $N$, it follows by elementarity that in $N$, the set $j(T)\cap\kappa=T$ is in $({\mathcal S}([\kappa]^{<\kappa})^+)^N$. Working in $N$, we let $\bar M$ be a $\kappa$-model with $A_T,T\in \bar M$ and we let $\bar U$ be a $\kappa$-amenable $\bar M$-normal $\bar M$-ultrafilter on $\kappa$ with $\bar U\subseteq ([\kappa]^\kappa)^N$ and $T\in \bar U$. Since $N$ is a $\kappa$-model, it follows that in $V$ the set $\bar M$ is a $\kappa$-model with $A_T,T\in \bar M$, $\bar U$ is a $\kappa$-amenable $\bar M$-normal $\bar M$-ultrafilter on $\kappa$ with $\bar U\subseteq[\kappa]^\kappa$, and $T\in \bar U$. This contradicts the definition of $A_T$. \end{proof}
\begin{remark} Let us note that we do not know whether a version of Lemma \ref{lemma_set_of_nons_for_strongly_ramsey_ideal} holds for the ideal ${\mathcal S}^2([\kappa]^{<\kappa})$. Suppose $\kappa\in{\mathcal S}^2([\kappa]^{<\kappa})^+$ and let $T=\{\alpha<\kappa\mid\alpha\in{\mathcal S}^2([\alpha]^{<\alpha})\}$. Does it follow that $T\in{\mathcal S}^2([\kappa]^{<\kappa})^+$? If we try to generalize the proof of Lemma \ref{lemma_set_of_nons_for_strongly_ramsey_ideal} to this situation, we would like to show that if a $\kappa$-model $N$ thinks that $\bar M$ is a $\kappa$-model and $\bar U$ is a $\kappa$-amenable $\bar M$-normal $\bar M$-ultrafilter with $\bar U\subseteq({\mathcal S}([\kappa]^{<\kappa})^+)^N$, then in $V$ we have $\bar U\subseteq({\mathcal S}([\kappa]^{<\kappa})^+)^V$. However, we do not see how to prove this. One would want to show that $({\mathcal S}([\kappa]^{<\kappa})^+)^N\subseteq({\mathcal S}([\kappa]^{<\kappa})^+)^V$, but this seems problematic because $P(\kappa)^N\subsetneq P(\kappa)^V$. \end{remark}
\section{Expressibility results}\label{section:expressibility}
First, let us recall an expressibility result for higher indescribability due to the first author, which extends results of Bagaria from \cite{MR3894041}.
\begin{theorem}[{\cite[Theorem 5.8]{CodyHigherIndescribability}}]\label{theorem_expressing_indescribability} Suppose $\kappa>\omega$ is regular and $\xi<\kappa^+$. Then, there is a $\Pi^1_{\xi+1}$ formula $\Phi$ over $V_\kappa$ and a club $C\subseteq\kappa$ such that for all $S\subseteq\kappa$ we have \[\text{$S$ is a $\Pi^1_\xi$-indescribable subset of $\kappa$ if and only if $V_\kappa\models\Phi(S)$}\] and for all regular $\alpha\in C$, we have \[\text{$S\cap\alpha$ is a $\Pi^1_{f^\kappa_\xi(\alpha)}$-indescribable subset of $\alpha$ if and only if $V_\alpha\models\Phi(S)\mathrm{|}^\kappa_\alpha$}.\] \end{theorem}
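For orientation, consider the base case $\xi=0$: since $\Pi^1_0(\kappa)$ is the nonstationary ideal, a set $S\subseteq\kappa$ is a $\Pi^1_0$-indescribable subset of $\kappa$ if and only if it is stationary, and (modulo the precise definition of the restriction operation $\mathrm{|}^\kappa_\alpha$) a formula as in the theorem can be taken to be \[\Phi(S)\;=\;\forall C\,\bigl(\text{``$C$ is a club subset of $\kappa$''}\;\rightarrow\;S\cap C\neq\emptyset\bigr),\] which is $\Pi^1_1$ over $V_\kappa$, and whose restriction to $V_\alpha$ expresses the stationarity of $S\cap\alpha$ in $\alpha$ for regular $\alpha$.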
Note that, within our usual generic ultrapower setup, using Lemma \ref{lemma_restrictionvsj}, the existence of a club $C$ as in the second statement of Theorem \ref{theorem_expressing_indescribability} is equivalent to the first statement of that theorem holding in the generic ultrapower $\mathop{\rm Ult}$. This could be used to extract a fairly simple proof of the second statement from the original proof of the first statement provided in \cite{CodyHigherIndescribability}. However, since carrying this out in detail would involve going through quite a lot of material from \cite{CodyHigherIndescribability}, we leave this task to the interested reader.
We will next need an easy lemma, whose proof, via a standard closing-off argument, is left to the reader as well.
\begin{lemma}\label{lemma_closing_off} Suppose $\kappa$ is a regular cardinal and $\gamma,\gamma'<\kappa^+$ are ordinals such that $\gamma\leq\gamma'$. If $f:\gamma\to\gamma'$ is any function then the set \[\{\alpha<\kappa\mid f[F^\kappa_\gamma(\alpha)]\subseteq F^\kappa_{\gamma'}(\alpha)\}\] is in the club filter on $\kappa$. \end{lemma}
Building on Theorem \ref{theorem_expressing_indescribability} and \cite[Lemma 5.1]{MR4206111}, we obtain the following.
\begin{lemma}\label{lemma_complexity} Suppose $\kappa$ is a regular cardinal, $\gamma<\kappa^+$, and $\xi\in\{-1\}\cup\kappa^+$. Let $\mathcal{O}\in\{\mathcal{I},\mathcal{R}\}$ be either the ineffability operator or the Ramsey operator. Then, there is a $\Pi^1_{\xi+1+2\gamma}$ formula $\Theta^\kappa_{\gamma,\xi}$ over $V_\kappa$ and a club subset $C^\kappa_{\gamma,\xi}$ of $\kappa$ such that for all $S\subseteq\kappa$ we have \[S\in{\mathcal O}^\gamma(\Pi^1_\xi(\kappa))^+\text{ if and only if }V_\kappa\models\Theta^\kappa_{\gamma,\xi}(S)\] and for all regular cardinals $\alpha\in C^\kappa_{\gamma,\xi}$ we have \[S\cap\alpha\in {\mathcal O}^{f^\kappa_\gamma(\alpha)}(\Pi^1_{f^\kappa_\xi(\alpha)}(\alpha))^+\text{ if and only if }V_\alpha\models\Theta^\kappa_{\gamma,\xi}(S)\mathrm{|}^\kappa_\alpha.\] \end{lemma} \begin{proof} For the final statement, we will make use of our usual generic ultrapower setup once again: Using Lemma \ref{lemma_restrictionvsj}, it easily follows that the existence of a club as in the second statement of Lemma \ref{lemma_complexity} is equivalent to its first statement holding in any generic ultrapower $\mathop{\rm Ult}$ obtained by forcing with $P(\kappa)/{\mathop{\rm NS}}_\kappa$.\footnote{Of course, when we refer to the operators ${\mathcal I}$ or ${\mathcal R}$ in $\mathop{\rm Ult}$, these should be the ineffability or the Ramsey operator as defined in $\mathop{\rm Ult}$, respectively. Also, notice that since $j(\Theta^\kappa_{\gamma,\xi}(X))\mathrm{|}^{j(\kappa)}_\kappa=\Theta^\kappa_{\gamma,\xi}(X)\in \mathop{\rm Ult}$, it follows that the statement $(S\in{\mathcal O}^\gamma(\Pi^1_\xi(\kappa))^+\text{ if and only if }V_\kappa\models\Theta^\kappa_{\gamma,\xi}(S))^{\text{Ult}}$ makes sense.} We thus proceed by induction on $\gamma<\kappa^+$ to verify the first statement in $V$ and in $\mathop{\rm Ult}$ simultaneously.
Let us consider the case in which $\mathcal{O}=\mathcal{I}$; the case in which $\mathcal{O}=\mathcal{R}$ is similar. If $\gamma=0$, then for all $\xi<\kappa^+$, we have ${\mathcal I}^\gamma(\Pi^1_\xi(\kappa))^+=\Pi^1_\xi(\kappa)^+$, and the result follows directly from Theorem \ref{theorem_expressing_indescribability} and the comments made afterwards (regarding the case of the generic ultrapower $\mathop{\rm Ult}$).
Suppose $\gamma=\delta+1$, and that the result holds for $\delta$. Fix $\xi<\kappa^+$. Then, there is a $\Pi^1_{\xi+1+2\delta}$-formula $\Theta^\kappa_{\delta,\xi}$ over $V_\kappa$ such that both in $V$ and in $\mathop{\rm Ult}$, for all $S\subseteq\kappa$, we have \begin{align}S\in{\mathcal I}^{\delta}(\Pi^1_\xi(\kappa))^+\text{ if and only if }V_\kappa\models\Theta^\kappa_{\delta,\xi}(S).\label{equation_gamma_less_than_omega_at_kappa} \end{align} We simply define $\Theta^\kappa_{\gamma,\xi}$ to be the $\Pi^1_{\xi+1+2\gamma}$-formula over $V_\kappa$ which asserts that every $X$-list has a homogeneous set $Y$ such that $\Theta^\kappa_{\delta,\xi}(Y)$ holds. It is clear that this formula is as desired both in $V$ and in $\mathop{\rm Ult}$.
Suppose now that $\gamma$ is a limit ordinal, and that the result holds for all ordinals $\delta<\gamma$. By definition, we have $X\in{\mathcal I}^\gamma(\Pi^1_\xi(\kappa))^+$ if and only if $X\in{\mathcal I}^\delta(\Pi^1_\xi(\kappa))^+$ for all $\delta<\gamma$. Note that $\xi+1+2\gamma=\xi+\gamma$ in this case. We define a $\Pi^1_{\xi+\gamma}$-formula \[\Theta^\kappa_{\gamma,\xi}=\bigwedge_{\zeta<\xi+\gamma}\psi^\kappa_\zeta\] as follows. For each $\zeta<\xi+\gamma$, if it exists, define $\delta_\zeta$ to be the greatest ordinal $\delta<\gamma$ such that $\xi+1+2\delta\leq\zeta$ and let $\psi^\kappa_\zeta=\Theta^\kappa_{\delta_\zeta,\xi}$. Otherwise, let $\psi^\kappa_\zeta$ be the formula ``$0=0$''. Since the sequence $\vec{\delta}=\langle\delta_\zeta\mid\zeta<\xi+\gamma\rangle$ is cofinal in $\gamma$, it follows that both in $V$ and in $\mathop{\rm Ult}$, for all $S\subseteq\kappa$, \[S\in{\mathcal I}^\gamma(\Pi^1_\xi(\kappa))^+\text{ if and only if }V_\kappa\models\Theta^\kappa_{\gamma,\xi}(S).\] \end{proof}
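To illustrate the limit case, consider for instance $\xi=0$ and $\gamma=\omega$, so that $\xi+\gamma=\omega$. For $\zeta<\omega$, the ordinal $\delta_\zeta$ is the greatest $\delta<\omega$ with $1+2\delta\leq\zeta$; thus $\delta_\zeta$ is undefined for $\zeta=0$, and $\delta_\zeta=\lfloor(\zeta-1)/2\rfloor$ for $\zeta\geq1$. The resulting formula is \[\Theta^\kappa_{\omega,0}\;=\;(0=0)\wedge\Theta^\kappa_{0,0}\wedge\Theta^\kappa_{0,0}\wedge\Theta^\kappa_{1,0}\wedge\Theta^\kappa_{1,0}\wedge\Theta^\kappa_{2,0}\wedge\cdots,\] and the sequence $\langle\delta_\zeta\mid 1\leq\zeta<\omega\rangle=\langle 0,0,1,1,2,2,\ldots\rangle$ is indeed cofinal in $\omega$, as required.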
\section{A framework for large cardinal operators}\label{section_generalized operators}
In this section, we review a framework for large cardinal operators that was introduced by the second author \cite{HolyLCOandEE}, which in particular fits the ineffability operator $\mathcal I$, the Ramsey operator $\mathcal R$, and the strongly Ramsey subset operator $\mathcal S$ (the latter was denoted as $\mathbf T_{\mathrm{cl}}$ in \cite{HolyLCOandEE}). This framework builds on statements about the existence of certain ultrafilters for small models of set theory, and is itself based on a framework for the characterization of large cardinal ideals that was introduced in \cite{MR4156888}. In the present paper, we apply this framework from \cite{HolyLCOandEE}, verifying a number of results on the relationship between higher indescribability and large cardinal operators in a uniform way. In particular, we thus obtain a number of new results on the relationship between higher indescribability and the operators ${\mathcal I}$ and ${\mathcal R}$, and also ${\mathcal S}$. For readers only interested in these examples, our framework is still useful, for it provides uniform arguments that work for each of these operators. We will also mention (see Remark \ref{remark_more_examples}) two additional operators, introduced by the second author \cite{HolyLCOandEE}, that fit into this framework: the $\mathbf T_\omega^\kappa$-Ramsey subset operator~$\mathbf T$ that is connected to the notion of $\mathbf T_\omega^\kappa$-Ramsey cardinals introduced in \cite{MR4156888}, and the $\mathbf{wf}^\kappa_\omega$-Ramsey subset operator $\mathbf{wf}$ that is connected to the notion of weakly Ramsey cardinals from \cite{MR2830415}, to which our results thus apply.
Let us assume throughout this section that $\kappa$ denotes an inaccessible cardinal, and that $I$ denotes an ideal on $\kappa$. Recall that an $M$-ultrafilter $U$ on $\kappa$ is \emph{$\kappa$-amenable for $M$} if whenever $\mathcal A\in M$ is a $\kappa$-sized collection of subsets of $\kappa$ in $M$, then $\mathcal A\cap U\in M$. We next provide the definition of the \emph{model version} ${\mathcal I}_{mod}$ of the ineffability operator, as introduced in \cite{HolyLCOandEE}.
\begin{definition} \begin{itemize}
\item For any $y\subseteq\kappa$, we first define the local instance of ${\mathcal I}_{mod}$ at $y$, by letting $x\in{\mathcal I}_{mod}^y(I)^+$ if there is a transitive weak $\kappa$-model $M$ with $y\in M$, and an $M$-ultrafilter $U$ on $\kappa$ with $x\in U$, such that every diagonal intersection of $U$ is in $I^+$ -- we abbreviate this latter property of $U$ and of $I$ by stating that $\Delta U\in I^+$.\footnote{\label{footnote:diagonalintersections}Since permuting the input of a diagonal intersection only changes its output by a non-stationary set (see \cite[Lemma 1.3.3]{MR0460120}), if $I\supseteq{\mathop{\rm NS}}_\kappa$, rather than requiring that every diagonal intersection of $U$ be in $I^+$, it equivalently suffices to require one (arbitrary) diagonal intersection of $U$ to be in $I^+$.}
\item We let ${\mathcal I}_{mod}(I)^+=\bigcap_{y\subseteq\kappa}{\mathcal I}_{mod}^y(I)^+$. \end{itemize} \end{definition}
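In the definition above, a \emph{diagonal intersection of $U$} is, as usual, a set of the form \[\Delta_{\alpha<\kappa}A_\alpha\;=\;\{\beta<\kappa\mid\beta\in A_\alpha\text{ for all }\alpha<\beta\},\] where $\langle A_\alpha\mid\alpha<\kappa\rangle$ enumerates $U$ (possibly with repetitions); such an enumeration exists since $U\subseteq P(\kappa)^M$ and $M$, being a weak $\kappa$-model, has size $\kappa$. As noted in the footnote above, when $I\supseteq{\mathop{\rm NS}}_\kappa$, the choice of enumeration is immaterial for the condition $\Delta U\in I^+$.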
\begin{proposition}[{\cite[Proposition 2.5]{HolyLCOandEE}}]\label{proposition:ineffableideal2}
Let $I\supseteq{\mathop{\rm NS}}_\kappa$ be an ideal on $\kappa$. Then, ${\mathcal I}_{mod}(I)=\mathcal I(I)$. \end{proposition}
We also provide the \emph{model version} of the Ramsey operator from \cite{HolyLCOandEE}.
\begin{definition}
\begin{itemize}
\item For any $y\subseteq\kappa$, we first define the local instance of ${\mathcal R}_{mod}$ at $y$, by letting $x\in{\mathcal R}_{mod}^y(I)^+$ if there is a transitive weak $\kappa$-model $M$ with $y\in M$, and an $M$-normal $M$-ultrafilter $U$ on $\kappa$ with $x\in U$ that is $\kappa$-amenable for $M$, such that every countable intersection of elements of $U$ is in $I^+$.
\item We let ${\mathcal R}_{mod}(I)^+=\bigcap_{y\subseteq\kappa}{\mathcal R}_{mod}^y(I)^+$.
\end{itemize} \end{definition}
The Ramsey operator and its model version were shown to be equivalent in \cite{MR2817562}. See also \cite{HolyLCOandEE}.
\begin{theorem}[{Sharpe and Welch \cite{MR2817562}}]\label{theorem:ramseymodels}
For any ideal $I$, \[{\mathcal R}_{mod}(I)=\mathcal R(I).\] \end{theorem}
Taking the above characterizations of the ineffability and of the Ramsey operator as an inspiration, a framework for large cardinal operators was developed in \cite{HolyLCOandEE}, which we would now like to review.
\begin{definition}\label{definition:abstractoperator}
Let $\Psi(M,U)$ and $\Omega(U,I)$ be parameter-free first order formulae such that ${\rm ZFC}$ proves that for any ideal $I$ on a regular uncountable cardinal $\kappa$, any transitive weak $\kappa$-model $M$ and any $M$-ultrafilter $U$ on $\kappa$,
\begin{itemize}
\item $\Omega(U,I)$ implies that $U\subseteq I^+$, and
\item for any ideal $J$ on $\kappa$, $\left[I\supseteq J\,\land\,\Omega(U,I)\right]\to\Omega(U,J)$.
\end{itemize}
Let us say that a pair of formulas $\langle\Psi,\Omega\rangle$ satisfying the above is \emph{regular}.
We define an ideal operator $\mathfrak O\Psi\Omega$ as follows. For any ideal $I$ on $\kappa$ and $y\subseteq\kappa$, we first define a local instance by letting
\begin{itemize}
\item $x\in\mathfrak O\Psi\Omega^y(I)^+$ if there exists a transitive weak $\kappa$-model $M$ with $y\in M$ and an $M$-ultrafilter $U$ on $\kappa$ with $x\in U$ such that $\Psi(M,U)$ and $\Omega(U,I)$ hold, and we let
\item $\mathfrak O\Psi\Omega(I)^+=\bigcap_{y\subseteq\kappa}\mathfrak O\Psi\Omega^y(I)^+$.
\end{itemize}
\end{definition}
Let us remark that, since we assume $\kappa$ to be inaccessible, we could additionally require that $M\supseteq V_\kappa$ in the above: given any $y\subseteq\kappa$, we can easily find $y'\subseteq\kappa$ such that $y'\in M$ implies both that $y\in M$ and that $V_\kappa\subseteq M$.
Let us check how the examples we saw so far fit into these schemes:
\begin{itemize}
\item If $\Psi(M,U)$ is trivial, and $\Omega(U,I)$ denotes the property that $\Delta U{\in}I^+$, then $\mathfrak O\Psi\Omega$ is the model version ${\mathcal I}_{mod}$ of the ineffability operator.
\item If $\Psi(M,U)$ denotes the property that $U$ is $M$-normal and $\kappa$-amenable for $M$, and $\Omega(U,I)$ denotes the property that every countable intersection of elements of $U$ is in $I^+$, then $\mathfrak O\Psi\Omega$ is (the model version ${\mathcal R}_{mod}$ of) the Ramsey operator.
\item If $\Psi(M,U)$ denotes the property that $M$ is closed under ${<}\kappa$-sequences, $U$ is $M$-normal and $U$ is $\kappa$-amenable for $M$, and $\Omega(U,I)$ denotes the property $U\subseteq I^+$, then $\mathfrak O\Psi\Omega$ is the strongly Ramsey subset operator ${\mathcal S}$. \end{itemize}
\begin{proposition}\cite[Proposition 10.2]{HolyLCOandEE}\label{proposition:abstractbasic}
Assume that $\langle\Psi,\Omega\rangle$ is regular, and that $I\supseteq J$ are ideals on $\kappa$. Then, the following hold.
\begin{itemize}
\item $\mathfrak O\Psi\Omega(I)\supseteq I$ is an ideal on $\kappa$.
\item $\mathfrak O\Psi\Omega(I)\supseteq\mathfrak O\Psi\Omega(J)$.
\item If for any transitive weak $\kappa$-model $M$ and any $M$-ultrafilter $U$ on $\kappa$, the conjunction $\Psi(M,U)\,\land\,\Omega(U,I)$ implies that $U$ is $M$-normal, then $\mathfrak O\Psi\Omega(I)$ is normal.
\item In particular, if $I\supseteq{\mathop{\rm NS}}_\kappa$, then $\Delta U\in I^+$ implies that $U$ is $M$-normal.
\item If $\langle \Psi',\Omega'\rangle$ is regular as well, and $\Psi'(M,U)\land\Omega'(U,I)$ implies $\Psi(M,U)\land\Omega(U,I)$ for any transitive weak $\kappa$-model $M$ and any $M$-ultrafilter $U$ on $\kappa$, then $\mathfrak O\Psi'\Omega'(I)\supseteq\mathfrak O\Psi\Omega(I)$.
\end{itemize} \end{proposition}
A crucial property of ideal operators is \emph{ineffability}, as introduced in \cite{HolyLCOandEE}.
\begin{definition} Let $\langle\Psi,\Omega\rangle$ be a pair of formulas, and let $\mathcal O$ be an ideal operator. \begin{itemize}
\item The pair $\langle\Psi,\Omega\rangle$ is \emph{ineffable} in case ${\rm ZFC}$ proves that for any ideal $I$ on a regular uncountable cardinal $\kappa$, any transitive weak $\kappa$-model $M$ and any $M$-ultrafilter $U$ on $\kappa$, $\Psi(M,U)\,\land\,\Omega(U,I)$ implies that for every $A\in U$, every $A$-list $\vec a\in M$ has a homogeneous set in $I^+$.
\item The operator $\mathcal O$ is \emph{ineffable} in case ${\rm ZFC}$ proves that for any ideal $I$ on a regular uncountable cardinal $\kappa$, whenever $A\in\mathcal O(I)^+$ and $\vec a$ is an $A$-list, then $\vec a$ has a homogeneous set in $I^+$.
\end{itemize} \end{definition} Note that by the above, the ineffability operator ${\mathcal I}$ is ineffable. But also, if $\mathcal O$ can be characterized to be of the form $\mathcal O=\mathfrak O\Psi\Omega$ for some ineffable pair of formulas $\langle\Psi,\Omega\rangle$, then $\mathcal O$ is ineffable.
\begin{observation}\label{observation:regularity}\cite[Observation 10.4]{HolyLCOandEE}
Let $\langle\Psi,\Omega\rangle$ be regular, and let $\mathcal O$ be the operator $\mathfrak O\Psi\Omega$. Then,
\begin{itemize}
\item If $\Psi(M,U)$ ${\rm ZFC}$-provably implies that $U$ is $\kappa$-amenable for $M$ and contains all club subsets of $\kappa$ in $M$ as elements, then $\mathcal O$ is ineffable.
\item If $\Omega(U,I)$ ${\rm ZFC}$-provably implies that $\Delta U\in I^+$, then $\mathcal O$ is ineffable.
\item If $\mathcal O$ is ineffable, then for any ideal $I$ on a regular uncountable cardinal $\kappa$, $\mathcal O(I)\supseteq\mathcal I(I)\supseteq{\mathop{\rm NS}}_\kappa$.
\item If ${\rm ZFC}$ proves that for any ideal $I$ on a regular and uncountable cardinal $\kappa$, $\mathcal O(I)\supseteq\mathcal I(I)$, then $\mathcal O$ is ineffable.
\end{itemize} \end{observation}
In particular, the above implies that the operators $\mathcal R$ and ${\mathcal S}$ are ineffable.
Given an ordinal $\beta<\kappa^+$, let us use the notation \[\Pi^1_{<\beta}(\kappa)=\bigcup_{\xi\in\{-1\}\cup\beta}\Pi^1_\xi(\kappa).\] The next corollary is immediate from Lemma \ref{lemma_indescribability_from_homogeneity} together with a straightforward induction on $\gamma$.
\begin{corollary}\label{corollary:levelofhomogeneity}
Assume that $\mathcal O$ is ineffable, $\gamma,\beta<\kappa^+$ are ordinals, and $I\supseteq\Pi^1_{<\beta}(\kappa)$. Then, \[\mathcal O^\gamma(I)\supseteq\Pi^1_{<(\beta+2\gamma)}(\kappa). \] \end{corollary}
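Let us briefly indicate the successor step of this induction. If $\mathcal O^\gamma(I)\supseteq\Pi^1_{<(\beta+2\gamma)}(\kappa)$, then applying Lemma \ref{lemma_indescribability_from_homogeneity} to the ideal $\mathcal O^\gamma(I)$, together with the ineffability of $\mathcal O$, yields
\[\mathcal O^{\gamma+1}(I)=\mathcal O(\mathcal O^\gamma(I))\supseteq\Pi^1_{<(\beta+2\gamma+2)}(\kappa)=\Pi^1_{<(\beta+2(\gamma+1))}(\kappa).\]
Limit steps are immediate, since $\Pi^1_{<(\beta+2\lambda)}(\kappa)=\bigcup_{\gamma<\lambda}\Pi^1_{<(\beta+2\gamma)}(\kappa)$ whenever $\lambda$ is a limit ordinal.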
We will now review material from \cite[Section 13]{HolyLCOandEE} on coding weak $\kappa$-models $M$ and $M$-ultrafilters $U$ on $\kappa$ as subsets of $V_\kappa$. These definitions are tailored so that any transitive weak $\kappa$-model that can be coded will have to be a superset of $V_\kappa$, with elements $x$ of $V_\kappa$ being coded as ordered pairs of the form $\langle 0,x\rangle$, and we code $\kappa$ by $0$.
\begin{definition}
We say that $\mathcal M\subseteq V_\kappa$ is a \emph{code for a transitive weak $\kappa$-model} if $\mathcal M$ has the following properties:
\begin{itemize}
\item $\mathcal M$ is a binary relation on $V_\kappa$, such that $\mathop{\rm dom}(\mathcal M)=V_\kappa$,
\item for all $x,y\in V_\kappa$, $\langle 0,x\rangle\mathcal M\langle 0,y\rangle$ if and only if $x\in y$,
\item for all $x$, $x\,\mathcal M\,0\iff \exists y\in\kappa\ x=\langle 0,y\rangle$,
\item $\mathcal M$ is well-founded and extensional, and
\item $\langle V_\kappa,\mathcal M\rangle\models{\rm ZFC}^-$.
\end{itemize}
Note that the weak $\kappa$-model that is coded here is the model $M$ such that $\langle M,\in\rangle$ is the transitive collapse of $\langle V_\kappa,\mathcal M\rangle$. On the other hand, any transitive weak $\kappa$-model $M\supseteq V_\kappa$ has a code as described above, using a suitable bijection between $M$ and $V_\kappa$. Let $\pi_{\mathcal M}$ denote the transitive collapsing map of $\langle V_\kappa,\mathcal M\rangle$. If $X=\pi_{\mathcal M}(x)$, we say that $x$ \emph{is the code of} $X$ (within $\mathcal M$). \end{definition}
Using standard arguments (see \cite[Lemma 12.2]{HolyLCOandEE}), it is easy to see that the property that $\mathcal M$ is a code for a transitive weak $\kappa$-model is a $\Delta^1_1$-property over $\langle V_\kappa,\mathcal M\rangle$. Note also that we can easily shift between subsets $X$ of $V_\kappa$ in $M$ and their codes within $\mathcal M$ using the fact that for $X\subseteq V_\kappa$ in $M$ and $x\in V_\kappa$, the property $\pi_{\mathcal M}^{-1}(X)=x$ is equivalent to the first-order sentence $\forall y\ \left[\langle 0,y\rangle\mathcal M x\longleftrightarrow y\in X\right]$ in $\langle V_\kappa,\in,\mathcal M,X\rangle$.
Next, we want to define what it means to code an $M$-ultrafilter on $\kappa$, which is easily seen to be a $\Delta^1_1$-property over $\langle V_\kappa,\mathcal M,\mathcal U\rangle$.
\begin{definition}
Given a code $\mathcal M$ for a transitive weak $\kappa$-model $M$, we say that $\mathcal U\subseteq V_\kappa$ is a \emph{code for an $M$-ultrafilter on $\kappa$} if $\langle V_\kappa,\mathcal M,\mathcal U\rangle$ thinks that $\mathcal U$ is an ultrafilter on $0$ (note that our setup is so that $0$ codes $\kappa$). \end{definition}
For our desired applications, we will need our operators to satisfy some properties of simple definability that were introduced in \cite{HolyLCOandEE}.
\begin{definition}\label{definition_simple} Let $\langle\Psi,\Omega\rangle$ be a pair of formulas, and let $\mathcal O$ be an ideal operator. \begin{itemize}
\item $\langle\Psi,\Omega\rangle$ is \emph{simple} in case ${\rm ZFC}$ proves the following:
\begin{enumerate}
\item[(a)] whenever $M$ is a transitive weak $\kappa$-model, and $U$ is an $M$-ultrafilter on $\kappa$, then $\Psi(M,U)$ translates to a $\Delta^1_1$-property of any pair of codes $\langle\mathcal M,\mathcal U\rangle$ for $\langle M,U\rangle$ over $V_\kappa$, and
\item[(b)] whenever the property $X\in I^+$ is definable over $V_\kappa$ by a $\Pi^1_\beta$-formula $\varphi(X)$ for some $0<\beta<\kappa$, then $\Omega(U,I)$ translates to a $\Pi^1_\beta$-property of any code $\mathcal U$ of $U$ over $V_\kappa$.
\end{enumerate}
\item $\langle\Psi,\Omega\rangle$ is \emph{always simple} in case ${\rm ZFC}$ additionally proves that if in (b), the property $X\in I^+$ is first order definable over $V_\kappa$, then $\Omega(U,I)$ translates to a $\Delta^1_1$-property of any code $\mathcal U$ of $U$ over $V_\kappa$.
\item $\mathcal O$ is \emph{simple} or \emph{always simple} in case ${\rm ZFC}$ proves that $\mathcal O$ can be characterized in the form $\mathcal O=\mathfrak O\Psi\Omega$ for some pair of formulas $\langle\Psi,\Omega\rangle$ that is simple or always simple respectively.
\end{itemize} \end{definition}
Definition \ref{definition_simple}(a) is immediate if $\Psi$ can be expressed as a first order property of the structure $\langle M,\in,U\rangle$. For example, this is the case when $\Psi(M,U)$ denotes the statement that $U$ is $\kappa$-amenable for $M$.
The property that $U$ is countably complete translates to the following first order statement about $\mathcal{U}$ over $V_\kappa$: for any countable sequence $\langle u_i\mid i<\omega\rangle$ of elements of $\mathcal U$,\footnote{Since $\kappa$ is assumed to be inaccessible (regular and uncountable suffices), these countable sequences are elements of $V_\kappa$.} there is $x$ such that $x\mathcal M u_i$ for every $i<\omega$.
The statement that $M$ is closed under ${<}\kappa$-sequences translates to the following first order statement about $\mathcal M$ over $V_\kappa$: $\forall p\,\exists t\,\forall x\ \left(x\,\mathcal M\,t\ \iff\ x\in p\right)$, that is, for every $p\in V_\kappa$ there is some $t$ whose $\mathcal M$-elements are exactly the members of $p$, so that $\{\pi_{\mathcal M}(x)\mid x\in p\}\in M$.
For other examples, see \cite{HolyLCOandEE}.
Let us now look at some examples in which Definition \ref{definition_simple}(b) holds. \begin{itemize}
\item If $\Omega(U,I)$ denotes the statement that $U\subseteq I^+$, then this translates to the statement that $\forall x\in\mathcal U\,\forall X\ [\pi_{\mathcal M}^{-1}(X)=x\to\varphi(X)]$, where $\varphi$ is a formula defining $I^+$ over $V_\kappa$.
\item If $\Omega(U,I)$ denotes the property that countable intersections from $U$ are in $I^+$, then this translates to the statement that for any countable sequence $\langle u_\beta\mid\beta<\omega\rangle$ of $\mathcal M$-elements of $\mathcal U$, \[\varphi(\{\alpha<\kappa\mid\forall\beta<\omega\ \langle 0,\alpha\rangle\,\mathcal M\,u_\beta\}).\]
\item If $\Omega(U,I)$ denotes the property that $\Delta U\in I^+$, then this translates to the statement that for any $\kappa$-enumeration $\langle u_\beta\mid\beta<\kappa\rangle$ of the $\mathcal M$-elements of $\mathcal U$, \[\varphi(\{\alpha<\kappa\mid\forall\beta<\alpha\ \langle 0,\alpha\rangle\,\mathcal M\,u_\beta\}).\] \end{itemize} If the property $X\in I^+$ is first order definable, observe that we obtain a $\Delta^1_1$-statement in the first two cases above, for we can equivalently rephrase the above to use existential rather than universal second order quantifiers. However this does not work in the third case (see the remarks made in Footnote \ref{footnote:diagonalintersections}). In particular, this means that the Ramsey operator and the strongly Ramsey subset operator are always simple, while (the model version of) the ineffability operator is simple.
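To spell out this rephrasing in the first case: every $x\in\mathcal U$ is the code of exactly one subset of $\kappa$ in $M$, so the statement in the first item can equivalently be written with an existential second order quantifier as
\[\forall x\in\mathcal U\ \exists X\ \left[\pi_{\mathcal M}^{-1}(X)=x\,\land\,\varphi(X)\right].\]
When $\varphi$ is first order, the original form of the statement is $\Pi^1_1$ while the rewritten form is $\Sigma^1_1$, and hence the translation of $\Omega(U,I)$ is $\Delta^1_1$.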
\begin{remark}\label{remark_more_examples} Further examples of operators that are both ineffable and always simple have been introduced in \cite[Section 12 and Section 13]{HolyLCOandEE}, including in particular the $\mathbf T_\omega^\kappa$-Ramsey subset operator $\mathbf T$, and the $\mathbf{wf}^\kappa_\omega$-Ramsey subset operator~$\mathbf{wf}$. All of our results on ineffable always simple operators that follow will thus apply to these operators as well. \end{remark}
As a first application, we want to show that Lemma \ref{lemma_complexity} can be extended to work for our framework, and we want to generalize it even further by considering ideals other than the indescribability ideals (which are particular instances of the below by Theorem \ref{theorem_expressing_indescribability}). Note that the lemma below does not include the case of applying the ineffability operator to the bounded ideal, which however is already handled as a special case of Lemma \ref{lemma_complexity}.
\begin{lemma}\label{lemma_complexity extended}
Suppose $\kappa$ is a regular cardinal, $\gamma,\xi<\kappa^+$ are ordinals with $\xi>0$, $I$ is an ideal on $\kappa$ such that $I^+$ is $\Pi^1_\xi$-definable over $V_\kappa$, $I$ is represented by $\langle I_\alpha\mid\alpha<\kappa\rangle$ in $\mathop{\rm Ult}$,\footnote{When we refer to a definable ideal $I$ in $\mathop{\rm Ult}$, we mean the version of $I$ that is obtained by applying that definition in $\mathop{\rm Ult}$. Strictly speaking, we should thus require that this definition ${\rm ZFC}$-provably yields an ideal. This will clearly hold in all relevant cases.} and that $\mathcal O=\mathfrak O\Psi\Omega$ is simple. Then, there is a $\Pi^1_{\xi+2\gamma}$-formula $\Theta^\kappa_{\gamma,\xi}(X)$ over $V_\kappa$ and a club subset $C^\kappa_{\gamma,\xi}$ of $\kappa$ such that for all $S\subseteq\kappa$, we have \[S\in{\mathcal O}^\gamma(I)^+\text{ if and only if }V_\kappa\models\Theta^\kappa_{\gamma,\xi}(S)\] and for all regular cardinals $\alpha\in C^\kappa_{\gamma,\xi}$, we have \[S\cap\alpha\in {\mathcal O}^{f^\kappa_\gamma(\alpha)}(I_\alpha)^+\text{ if and only if }V_\alpha\models\Theta^\kappa_{\gamma,\xi}(S)\mathrm{|}^\kappa_\alpha.\] If $\mathcal O$ is always simple, then the above also holds for $\xi=0$. \end{lemma} \begin{proof} The second statement is handled as usual, namely it is equivalent to the first statement holding in all generic ultrapowers $\mathop{\rm Ult}$,\footnote{As before, when we refer to the operator $\mathcal O$ in $\mathop{\rm Ult}$, we mean the operator $\mathfrak O\Psi\Omega$ in the sense of $\mathop{\rm Ult}$.} obtained by forcing with $P(\kappa)/{\mathop{\rm NS}}_\kappa$. Thus it suffices to verify the first statement both in $V$ and in $\mathop{\rm Ult}$. We do so by induction on $\gamma<\kappa^+$. The case when $\gamma=0$ is immediate from our assumption. The case when $\gamma$ is a limit ordinal is handled as in the proof of Lemma \ref{lemma_complexity}.
Suppose $\gamma=\delta+1$ and the result holds for $\delta$. Then, there is a $\Pi^1_{\xi+2\delta}$-formula $\Theta^\kappa_{\delta,\xi}(X)$ over $V_\kappa$ such that both in $V$ and in $\mathop{\rm Ult}$, for all $S\subseteq\kappa$, we have \begin{align}S\in\mathcal O^{\delta}(I)^+\text{ if and only if }V_\kappa\models\Theta^\kappa_{\delta,\xi}(S).\label{equation_gamma_less_than_omega_at_kappa_extended} \end{align} We simply define $\Theta^\kappa_{\gamma,\xi}(X)$ to be the $\Pi^1_{\xi+2\gamma}$-formula over $V_\kappa$ which asserts that for every (code $\mathcal M$ for a) transitive weak $\kappa$-model $M$ there is (a code $\mathcal U$ for) an $M$-ultrafilter $U$ on $\kappa$ with $X\in U$ such that $\Psi(M,U)$ and $\Omega(U,\mathcal O^\delta(I))$ hold. Since $\mathcal O$ is simple, it follows that this formula is as desired, both in $V$ and in $\mathop{\rm Ult}$. Clearly, if $\mathcal O$ is always simple, this works also in case $\xi=0$. \end{proof}
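Let us record, schematically, the quantifier count behind the claimed complexity in this successor step. Up to the $\Delta^1_1$-statements expressing that $\mathcal M$ and $\mathcal U$ are codes, we have
\[\Theta^\kappa_{\gamma,\xi}(X)\ \equiv\ \forall\mathcal M\,\exists\mathcal U\ \left[\,\Psi(M,U)\,\land\,\Omega(U,\mathcal O^\delta(I))\,\right],\]
where, by the simplicity of $\mathcal O$ and the induction hypothesis, $\Psi$ translates to a $\Delta^1_1$-property and $\Omega$ to a $\Pi^1_{\xi+2\delta}$-property of the codes over $V_\kappa$. A universal followed by an existential second order quantifier in front of a $\Pi^1_{\xi+2\delta}$-matrix yields a $\Pi^1_{\xi+2\delta+2}=\Pi^1_{\xi+2\gamma}$-formula, as claimed.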
\section{Pre-operators}\label{section:preoperators}
Our ideal operators are defined via local instances that are parametrized by certain objects. Given a cardinal $\kappa$, we refer to the collection of all such objects on $\kappa$ as the \emph{object type at $\kappa$} of such an operator $\mathcal O$, and denote this by $\mathcal T(\mathcal O,\kappa)$. The object type $\mathcal T(\mathcal I,\kappa)$ of the ineffability operator at $\kappa$ is the collection of all $\kappa$-lists, the object type $\mathcal T(\mathcal R,\kappa)$ of the Ramsey operator at $\kappa$ is the collection of all regressive functions $c\colon[\kappa]^{<\omega}\to\kappa$, and the object type of our model based operators at $\kappa$ is simply the powerset of $\kappa$.
Each object type $\mathcal T$ at $\kappa$ comes with an associated restriction operator, which, given some $y\in\mathcal T$ and some $\alpha<\kappa$, outputs its natural restriction $y\restr\alpha$. The following definition should come as no surprise.
\begin{definition} Suppose $\kappa$ is a cardinal and $\alpha<\kappa$.
\begin{itemize}
\item If $\mathcal T=\mathcal P(\kappa)$ and $y\in\mathcal T$, then $y\restr\alpha=y\cap\alpha$.
\item If $\mathcal T$ is the collection of all $\kappa$-lists and $y\in\mathcal T$, then $y\restr\alpha$ is the restriction of $y$ to the domain $\alpha$, i.e.\ the initial segment of length $\alpha$ of the $\kappa$-sequence $y$.
\item If $\mathcal T$ is the collection of all regressive functions $c\colon[\kappa]^{<\omega}\to\kappa$ and $y\in\mathcal T$, then $y\restr\alpha$ is the restriction of $y$ to the domain $[\alpha]^{<\omega}$.
\end{itemize} \end{definition}
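Let us note an immediate coherence property of these restriction operators, verified separately in each of the three cases above: whenever $\beta<\alpha<\kappa$ and $y\in\mathcal T$,
\[(y\restr\alpha)\restr\beta=y\restr\beta.\]
For instance, for a $\kappa$-list $y$, both sides equal the initial segment of length $\beta$ of $y$.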
Each ideal operator $\mathcal O$ with local instances has an associated \emph{pre-operator}.
\begin{definition}
Given an ideal operator $\mathcal O$ together with local instances $\mathcal O^y$ at $\kappa$ for $y\in\mathcal T(\mathcal O,\kappa)$, we define its \emph{associated pre-operator} $\mathcal O_0$ as follows. Given an ideal $I$ on $\kappa$ such that $I^+$ is definable by a $\Pi^1_\xi$-formula over $V_\kappa$ for some $\xi<\kappa^+$, and such that $I$ (in the sense of $\mathop{\rm Ult}$) is represented by $\langle I_\alpha\mid\alpha<\kappa\rangle$ in $\mathop{\rm Ult}$,
\[\mathcal {O}_0(I)^+=\{x\!\subseteq\!\kappa\mid\forall y\in\mathcal T(\mathcal O,\kappa)\,\forall C\!\subseteq\!\kappa\,\textrm{club }\exists\alpha\!\in\!x\ x\cap C\cap\alpha\in\mathcal O^{y\restr\alpha}(I_\alpha)^+\},\] where $\alpha$ is understood to range over regular uncountable cardinals. \end{definition}
${\mathcal I}_0$ is the \emph{subtle operator}, and ${\mathcal R}_0$ is the \emph{pre-Ramsey operator}.
\begin{remark}\label{remark_subtle_ideal} Notice that by Theorem \ref{theorem_baumgartner}, ${\mathcal I}_0([\kappa]^{<\kappa})={\mathcal I}_0(\Pi^1_\xi(\kappa))$ is the \emph{subtle ideal} on $\kappa$ for any $\xi<\kappa^+$, that is, the collection of all subsets of $\kappa$ which are not subtle. ${\mathcal R}_0([\kappa]^{<\kappa})$ is the \emph{pre-Ramsey ideal} on $\kappa$. Since we do not know whether an analogue of Theorem \ref{theorem_baumgartner} holds for the pre-Ramsey operator (see Question \ref{question:ramseylikeineffable} below), we do not know whether ${\mathcal R}_0(\Pi^1_\xi(\kappa))={\mathcal R}_0([\kappa]^{<\kappa})$ for all (or any) $\xi<\kappa^+$. \end{remark}
The second author has shown in \cite{HolyLCOandEE} that the subtle and the pre-Ramsey operators are equivalent to their respective model versions (for ideals containing the nonstationary ideal in case of the subtle operator).
\begin{theorem}\cite[Theorem 7.3 and Theorem 9.1]{HolyLCOandEE}
Whenever $I\supseteq{\mathop{\rm NS}}_\kappa$, \[{\mathcal I}_0(I)=({\mathcal I}_{mod})_0(I),\] and for arbitrary ideals $I$ on $\kappa$, \[{\mathcal R}_0(I)=({\mathcal R}_{mod})_0(I).\] \end{theorem}
\section{Generating ideals}\label{section_generating}
In this section, we analyze the interplay between our generalized operators and ideals of higher indescribability. Such an analysis for the Ramsey operator ${\mathcal R}$ and the ideals $\Pi^1_\xi(\kappa)$ for $\xi<\kappa$ has been performed by the first author in \cite{MR4206111}. Let us start this section by citing a classical result of Baumgartner, which we will extend afterwards. Given ideals $I$ and $J$ on $\kappa$, $\overline{I\cup J}$ denotes the collection of all sets $X\cup Y$ for which $X\in I$ and $Y\in J$.
\begin{theorem}\label{Baumgartner_basic}\cite[Section 7]{MR0384553}
For all cardinals $\kappa$ and all $\xi\in\{-1\}\cup\omega$, $\kappa\in{\mathcal I}(\Pi^1_\xi(\kappa))^+$ if and only if \begin{enumerate} \item $\kappa\in{\mathcal I}_0(\Pi^1_\xi(\kappa))^+\cap\Pi^1_{\xi+2}(\kappa)^+$ and \item the ideal $\overline{{\mathcal I}_0(\Pi^1_\xi(\kappa))\cup\Pi^1_{\xi+2}(\kappa)}$ is nontrivial and equals ${\mathcal I}(\Pi^1_\xi(\kappa))$. \end{enumerate}
Moreover, (2) is necessary in the above characterization of $\kappa\in{\mathcal I}(\Pi^1_\xi(\kappa))^+$, for the least $\Pi^1_{\xi+2}$-indescribable cardinal $\kappa$ such that $\kappa\in{\mathcal I}_0(\Pi^1_\xi(\kappa))^+$ is strictly below the least cardinal $\kappa$ for which $\kappa\in{\mathcal I}(\Pi^1_\xi(\kappa))^+$ (if such exists). \end{theorem}
We will frequently use the following.
\begin{remark}\label{remark_ideal_generated} Suppose $I_0$, $I_1$ and $J$ are ideals on $\kappa$. In order to prove that $J=\overline{I_0\cup I_1}$, part of what we must show is that $J\supseteq\overline{I_0\cup I_1}$, or in other words $J^+\subseteq\overline{I_0\cup I_1}^+$. Notice that we may obtain a chain of equivalences directly from the definitions involved: \begin{align*} J^+\subseteq\overline{I_0\cup I_1}^+ &\iff \overline{I_0\cup I_1}\subseteq J \\
&\iff I_0\cup I_1\subseteq J\\
&\iff J^+\subseteq I_0^+\cap I_1^+. \end{align*} \end{remark}
In the following, we extend Baumgartner's result to simple ineffable operators, and to ideals of higher indescribability. For readers who are only interested in the operators ${\mathcal I}$ and ${\mathcal R}$, it should be possible to read this section without having read Section~\ref{section_generalized operators} in full detail. In this case, it is only relevant to know that both ${\mathcal I}$ and ${\mathcal R}$ are ineffable and simple, that ${\mathcal R}$ is always simple, and that ${\mathcal I}$ and ${\mathcal R}$ are \emph{monotonic} in the sense that if $\mathcal O\in\{{\mathcal I},{\mathcal R}\}$, and $I\subseteq J$ are both ideals on a cardinal $\kappa$, then ${\mathcal O}(I)\subseteq{\mathcal O}(J)$. It should then be easy to read the present section, perhaps checking some relevant bits of Section~\ref{section_generalized operators} when needed. Let us remind our readers of the following, which provides a fairly large class of ideals that Theorem \ref{theorem_generating_iterates} applies to. It is immediate from Corollary \ref{corollary:levelofhomogeneity} and from Lemma \ref{lemma_complexity extended}.
\begin{observation}\label{observation:appliesto}
Assume that $\mathcal O$ is ineffable and simple, and that $\Pi^1_{<\xi}(\kappa)\subseteq I$.
\begin{itemize}
\item If $\gamma<\kappa^+$, $0<\xi<\kappa^+$, and $I^+$ is $\Pi^1_\xi$-definable over $V_\kappa$,\footnote{This is the case in particular if $I=\Pi^1_{<\xi}(\kappa)$.} then $\Pi^1_{<(\xi+2\gamma)}(\kappa)\subseteq\mathcal O^\gamma(I)$, and the latter ideal is $\Pi^1_{\xi+2\gamma}$-definable over $V_\kappa$.
\item If $\xi=0$, the same holds true if either $\mathcal O$ is always simple, or if $\gamma\ge\omega$ and $I^+$ is $\Pi^1_n$-definable for some $n<\omega$.\footnote{This is the case in particular if $I=[\kappa]^{<\kappa}$.}
{$\Box$}
\end{itemize} \end{observation}
The conclusion of Theorem \ref{theorem_generating_iterates} in case ${\mathcal O}={\mathcal R}$ and $I=\Pi^1_\xi(\kappa)$ for $\xi<\omega$ is due to Feng in \cite[Theorem 4.8]{MR1077260}, and has been extended to $\xi<\kappa$ by the first author in \cite[Corollary 6.2]{MR4206111}.
\begin{theorem}\label{theorem_generating_iterates}
Assume that $I$ is an ideal on $\kappa$, $\mathcal O=\mathfrak O\Psi\Omega$ is ineffable and simple, $\gamma,\xi<\kappa^+$, $\xi>0$, $\Pi^1_{<\xi}(\kappa)\subseteq I$, $I^+$ is $\Pi^1_\xi$-definable over $V_\kappa$ and $\kappa\in{\mathcal O}^{\gamma+1}(I)^+$. Then, \[{\mathcal O}^{\gamma+1}(I)=\overline{{\mathcal O}_0({\mathcal O}^\gamma(I))\cup\Pi^1_{\xi+2\gamma+1}(\kappa)}.\] If either $\mathcal O=\mathcal I$, $\mathcal O$ is ineffable and always simple, or $\gamma\ge\omega$ in the above, then the conclusion also holds in case $\xi=0$. \end{theorem} \begin{proof} Let us first treat the cases when $\mathcal O$ is either simple or always simple. Let $J=\overline{{\mathcal O}_0({\mathcal O}^\gamma(I))\cup\Pi^1_{\xi+2\gamma+1}(\kappa)}$, and assume that $I$, in the sense of $\mathop{\rm Ult}$, is represented by $\langle I_\alpha\mid\alpha<\kappa\rangle$ in $\mathop{\rm Ult}$.
Suppose $S\in J^+$, and for the sake of a contradiction, suppose $S\in{\mathcal O}^{\gamma+1}(I)$. Let $y\subseteq\kappa$ be such that whenever $M$ is a transitive weak $\kappa$-model with $y\in M$ and $U$ is an $M$-ultrafilter on $\kappa$ such that $\Psi(M,U)$ and $\Omega(U,\mathcal O^\gamma(I))$ hold, then $S\not\in U$. Using that $\mathcal O$ is simple, or always simple in case $\xi=0$, and using Lemma \ref{lemma_complexity extended}, this property of $y$ and of $S$ can be expressed by a natural $\Pi^1_{\xi+2\gamma+1}$-formula $\varphi(y,S)$ over $V_\kappa$, and there is a club $C\subseteq\kappa$ such that for all regular $\alpha\in C$, \[V_\alpha\models\varphi(y,S)\mathrm{|}^\kappa_\alpha\] if and only if whenever $M$ is a transitive weak $\alpha$-model with $y\cap\alpha\in M$ and $U$ is an $M$-ultrafilter on $\alpha$ such that $\Psi(M,U)$ and $\Omega(U,\mathcal O^{f^\kappa_\gamma(\alpha)}(I_\alpha))$ hold, then $S\cap\alpha\not\in U$.
Since $V_\kappa\models\varphi(y,S)$, the set \[D=\{\alpha<\kappa\mid V_\alpha\models\varphi(y,S)\mathrm{|}^\kappa_\alpha\}\] is in the filter $\Pi^1_{\xi+2\gamma+1}(\kappa)^*$. Since $S\notin J$, $S$ is not the union of a set in ${\mathcal O}_0({\mathcal O}^\gamma(I))$ and a set in $\Pi^1_{\xi+2\gamma+1}(\kappa)$. Since $S=(S\cap C\cap D)\cup (S\setminus (C\cap D))$ and $S\setminus (C\cap D)\in\Pi^1_{\xi+2\gamma+1}(\kappa)$, we see that $S\cap C\cap D\in{\mathcal O}_0({\mathcal O}^\gamma(I))^+$. Thus, by definition of ${\mathcal O}_0$, there is some ordinal $\alpha\in S\cap C\cap D$ for which there exists a transitive weak $\alpha$-model $M$ with $y\cap\alpha\in M$ and an $M$-ultrafilter $U$ on $\alpha$ such that $\Psi(M,U)$ and $\Omega(U,{\mathcal O}^{f^\kappa_\gamma(\alpha)}(I_\alpha))$ hold, and such that $S\cap C\cap D\cap\alpha\in U$. However, since $\alpha\in C\cap D$ we have $V_\alpha\models\varphi(y,S)\mathrm{|}^\kappa_\alpha$, contradicting the above.
Now suppose $S\in{\mathcal O}^{\gamma+1}(I)^+$. Since $\mathcal O$ is ineffable, and by our assumption that $\Pi^1_{<\xi}(\kappa)\subseteq I$, this implies that $S\in\Pi^1_{\xi+2\gamma+1}(\kappa)^+$ by Corollary \ref{corollary:levelofhomogeneity}. Let us show that $S\in{\mathcal O}_0({\mathcal O}^\gamma(I))^+$. Suppose $y\subseteq\kappa$ and fix a club subset $C$ of $\kappa$. By the third item in Observation \ref{observation:regularity}, it follows that $S\cap C\in{\mathcal O}^{\gamma+1}(I)^+$, and thus there is a weak $\kappa$-model $M\supseteq V_\kappa$ with $y\in M$, and an $M$-ultrafilter $U$ on $\kappa$ such that $\Psi(M,U)$ and $\Omega(U,{\mathcal O}^\gamma(I))$ hold, and such that $S\cap C\in U$. By our assumptions and using Lemma \ref{lemma_complexity extended}, this property of $S\cap C$, $y$, and the codes $\mathcal M$ and $\mathcal U$ of $M$ and $U$ respectively is expressible by a $\Pi^1_{\xi+2\gamma}$-formula $\varphi(S\cap C,y,\mathcal M,\mathcal U)$ over $V_\kappa$, which additionally states that $\mathcal M$ is a code for a weak $\kappa$-model $M$, and that $\mathcal U$ is a code for an $M$-ultrafilter on $\kappa$. Moreover, using Lemma \ref{lemma_complexity extended}, there is a club subset $D$ of $\kappa$ such that for $\alpha\in D$, $\varphi(S\cap C,y,\mathcal M,\mathcal U)\mathrm{|}^\kappa_\alpha$ expresses the corresponding property over $V_\alpha$, namely that $\mathcal M\cap V_\alpha$ is a code for a weak $\alpha$-model $\bar M$, that $\mathcal U\cap V_\alpha$ is a code for an $\bar M$-ultrafilter $\bar U$ on $\alpha$, that $y\cap\alpha\in\bar M$, $\Psi(\bar M,\bar U)$ and $\Omega(\bar U,{\mathcal O}^{f^\kappa_\gamma(\alpha)}(I_\alpha))$ hold, and that $S\cap C\cap\alpha\in\bar U$. Since $S\cap C\cap D$ is $\Pi^1_{\xi+2\gamma+1}$-indescribable, there is some $\alpha\in S\cap C\cap D$ such that $V_\alpha\models\varphi(S\cap C,y,\mathcal M,\mathcal U)\mathrm{|}^\kappa_\alpha$. Thus, $S\in{\mathcal O}_0({\mathcal O}^\gamma(I))^+$.
When ${\mathcal O}={\mathcal I}$ (and $\xi=0$), note that the case when $\gamma=0$ is handled by Theorem \ref{Baumgartner_basic}. For $\gamma\geq1$, note that if $\gamma=1+\bar\gamma$, we have ${\mathcal I}^\gamma(I)={\mathcal I}_{mod}^{\bar\gamma}({\mathcal I}(I))$, and that ${\mathcal I}(I)$ has the properties that $\Pi^1_1(\kappa)\subseteq{\mathcal I}(I)$ and that ${\mathcal I}(I)^+$ is $\Pi^1_2$-definable over $V_\kappa$. We can now apply the main case of the theorem using the operator $\mathcal O={\mathcal I}_{mod}$.
Similarly, if $\gamma=\omega+\delta\ge\omega$ (and $\xi=0$), the desired conclusion of the theorem can be rewritten as \[{\mathcal O}^{\delta+1}({\mathcal O}^\omega(I))=\overline{{\mathcal O}_0({\mathcal O}^\delta({\mathcal O}^\omega(I)))\cup\Pi^1_{\xi+2\gamma+1}(\kappa)},\] and we can deduce this conclusion from applying the theorem to the ideal ${\mathcal O}^\omega(I)$, using Observation \ref{observation:appliesto}. \end{proof}
\subsection{On finite iterates of operators}
By Remark \ref{remark_subtle_ideal}, we have ${\mathcal I}_0(\Pi^1_\xi(\kappa))={\mathcal I}_0([\kappa]^{<\kappa})$ for any $\xi<\kappa^+$, and hence we easily obtain the following corollary of Theorem \ref{theorem_generating_iterates}.
\begin{corollary}\label{corollary_ineffabledowntozero} Suppose $\kappa\in{\mathcal I}(\Pi^1_\xi(\kappa))^+$ where $\xi\in\{-1\}\cup\kappa^+$. Then \[{\mathcal I}(\Pi^1_\xi(\kappa))=\overline{{\mathcal I}_0([\kappa]^{<\kappa})\cup\Pi^1_{\xi+2}(\kappa)}.\] \end{corollary}
We can obtain a variant of Theorem \ref{theorem_generating_iterates} for finite iterates of operators as follows.
\begin{corollary}\label{corollary_generating_finite_iterates}
Assume that $I$ is an ideal on $\kappa$, $\mathcal O$ is ineffable and simple, $\gamma<\omega$, $0<\xi<\kappa^+$, $\Pi^1_{<\xi}(\kappa)\subseteq I$, and $I^+$ is $\Pi^1_\xi$-definable over $V_\kappa$. Then, \[{\mathcal O}^{\gamma+1}(I)=\overline{{\mathcal O}_0({\mathcal O}^\gamma(I))\cup{\mathcal O}^\gamma(\Pi^1_{\xi+1}(\kappa))}.\] If either $\mathcal O=\mathcal I$ or $\mathcal O$ is always simple, then the above conclusion also holds in case $\xi=0$. \end{corollary} \begin{proof}
On the one hand, by our assumptions and Corollary \ref{corollary:levelofhomogeneity}, ${\mathcal O}(I)\supseteq\Pi^1_{\xi+1}(\kappa)$, and therefore, ${\mathcal O}^{\gamma+1}(I)\supseteq{\mathcal O}^\gamma(\Pi^1_{\xi+1}(\kappa))$ by the monotonicity of ${\mathcal O}$ (see Proposition \ref{proposition:abstractbasic}). On the other hand, ${\mathcal O}^\gamma(\Pi^1_{\xi+1}(\kappa))\supseteq\Pi^1_{\xi+2\gamma+1}(\kappa)$. Thus, the result follows immediately from Theorem \ref{theorem_generating_iterates}. \end{proof}
There is also a sort of analogue of the above for infinite iterates of operators. This has been worked out for the Ramsey operator in \cite[Theorem 7.8]{MR4206111}, and can analogously be performed for our generalized operators. We will leave all details to the interested reader.
As an easy corollary of Corollary \ref{corollary:levelofhomogeneity}, again using the monotonicity of our operators, we obtain the following generalization of \cite[Corollary 6.8]{MR4206111}:
\begin{corollary}\label{corollary_collapse} If ${\mathcal O}$ is ineffable, $\xi\in\{-1\}\cup\kappa^+$ and $n<\omega$, then \[{\mathcal O}^\omega(\Pi^1_\xi(\kappa))={\mathcal O}^\omega(\Pi^1_{\xi+n}(\kappa)).\] \end{corollary}
The next corollary is a starting point in relating assumptions of the form $\kappa\in{\mathcal O}^\gamma(\Pi^1_\xi(\kappa))$, for different $\gamma$ and $\xi$ below $\kappa^+$, with respect to consistency strength.
\begin{corollary}\label{corollary_xi_to_xi_plus_one_hierarchy} Assume that $\mathcal O$ is ineffable and simple. Suppose $\gamma<\omega$, $\xi<\kappa^+$, and $\kappa\in{\mathcal O}^\gamma(\Pi^1_\xi(\kappa))^+$. If $S\in{\mathcal O}^\delta(\Pi^1_{\zeta}(\kappa))^+$ where $\zeta+1+2\delta\leq\xi+2\gamma$, then \[T=\{\alpha<\kappa\textrm{ regular}\mid S\cap\alpha\in{\mathcal O}^{f^\kappa_\delta(\alpha)}(\Pi^1_{f^\kappa_{\zeta}(\alpha)}(\alpha))^+\}\in{\mathcal O}^\gamma(\Pi^1_\xi(\kappa))^*.\] If either $\mathcal O=\mathcal I$, or $\mathcal O$ is ineffable and always simple, then the above conclusion also holds in case $\xi=-1$. \end{corollary} \begin{proof} The fact that $S\in{\mathcal O}^\delta(\Pi^1_\zeta(\kappa))^+$ is expressible by a $\Pi^1_{\zeta+1+2\delta}$-formula $\Theta$ over $V_\kappa$ by Lemma \ref{lemma_complexity extended}, or by Lemma \ref{lemma_complexity} in case ${\mathcal O}={\mathcal I}$. Let $C$ be the corresponding club obtained from the relevant lemma. Since $\zeta+1+2\delta\leq\xi+2\gamma$, we have $\Pi^1_{\zeta+1+2\delta}(\kappa)\subseteq\Pi^1_{\xi+2\gamma}(\kappa)\subseteq{\mathcal O}^\gamma(\Pi^1_\xi(\kappa))$ by Corollary \ref{corollary:levelofhomogeneity}. The set \[\{\alpha\in C\textrm{ regular}\mid V_\alpha\models\Theta(S)\mathrm{|}^\kappa_\alpha\}\] is contained in $T$ and is in $\Pi^1_{\zeta+1+2\delta}(\kappa)^*\subseteq{\mathcal O}^\gamma(\Pi^1_\xi(\kappa))^*$. Therefore, $T\in{\mathcal O}^\gamma(\Pi^1_\xi(\kappa))^*$, as desired. \end{proof}
We do not know whether the next result on the proper containment of certain ideals generated by applications of ${\mathcal I}$ and ${\mathcal R}$ generalizes to our framework of operators, for we do not know whether Lemma \ref{lemma_set_of_nons_is_positive} does.
\begin{corollary}\label{corollary_proper_containment_from_xi_to_xi_plus_one} Let ${\mathcal O}\in\{{\mathcal I},{\mathcal R}\}$. Suppose $\gamma<\omega$, $\xi\in\{-1\}\cup\kappa^+$ and $\kappa\in{\mathcal O}^\gamma(\Pi^1_{\xi+1}(\kappa))^+$. Then, \[{\mathcal O}^\gamma(\Pi^1_\xi(\kappa))\subsetneq{\mathcal O}^\gamma(\Pi^1_{\xi+1}(\kappa)).\] \end{corollary}
\begin{proof} Clearly ${\mathcal O}^\gamma(\Pi^1_\xi(\kappa))\subseteq{\mathcal O}^\gamma(\Pi^1_{\xi+1}(\kappa))$, so we just need to show that the containment is proper. Let $S=\{\alpha<\kappa\mid\alpha\in{\mathcal O}^{f^\kappa_\gamma(\alpha)}(\Pi^1_{f^\kappa_\xi(\alpha)}(\alpha))\}$. Then $S\in{\mathcal O}^\gamma(\Pi^1_\xi(\kappa))^+$ by Lemma \ref{lemma_set_of_nons_is_positive}, and Corollary \ref{corollary_xi_to_xi_plus_one_hierarchy} implies that $S\in{\mathcal O}^\gamma(\Pi^1_{\xi+1}(\kappa))$. \end{proof}
We can show yet another form of proper containment of ideals when ${\mathcal O}={\mathcal I}$. An analogous result for the operator ${\mathcal R}$ was claimed by the first author in \cite{MR4206111}, see our Question \ref{question:propercontainment} below.
\begin{corollary}\label{corollary_proper_containment_from_baumgartner} Suppose $\gamma<\kappa^+$, $\xi\in\{-1\}\cup\kappa^+$ and $\kappa\in{\mathcal I}^\gamma(\Pi^1_{\xi+2}(\kappa))^+$. Then, \[{\mathcal I}^\gamma(\Pi^1_{\xi+2}(\kappa))\subsetneq{\mathcal I}^{\gamma+1}(\Pi^1_\xi(\kappa)).\]\footnote{Let us remark that by Corollary \ref{corollary_collapse}, the case when $\gamma\ge\omega$ is in fact trivial, and the result would hold for arbitrary ineffable operators from our framework in this case.} \end{corollary}
\begin{proof} The inclusion itself is immediate, since $\Pi^1_{\xi+2}(\kappa)\subseteq{\mathcal I}(\Pi^1_\xi(\kappa))$ by the ineffability of ${\mathcal I}$, and it only remains to verify its properness. Since $\kappa\in{\mathcal I}^\gamma(\Pi^1_{\xi+2}(\kappa))^+$, it follows by Lemma \ref{lemma_set_of_nons_is_positive} that the set \[S=\{\alpha<\kappa\mid\alpha\in{\mathcal I}^{f^\kappa_\gamma(\alpha)}(\Pi^1_{f^\kappa_{\xi+2}(\alpha)}(\alpha))\}\] is in ${\mathcal I}^\gamma(\Pi^1_{\xi+2}(\kappa))^+$. From Corollary \ref{corollary_below_gamma_plus_1_almost_ineffability}, it follows that the set \[C=\{\alpha<\kappa\mid(\forall\eta<\alpha^+)\ \alpha\in{\mathcal I}^{f^\kappa_\gamma(\alpha)}(\Pi^1_\eta(\alpha))^+\}\] is in the filter ${\mathcal I}^{\gamma+1}([\kappa]^{<\kappa})^*$. Since $C\subseteq\kappa\setminus S$, we see that $\kappa\setminus S\in {\mathcal I}^{\gamma+1}([\kappa]^{<\kappa})^*\subseteq{\mathcal I}^{\gamma+1}(\Pi^1_\xi(\kappa))^*$. Hence, this implies that $S\in{\mathcal I}^{\gamma+1}(\Pi^1_\xi(\kappa))\setminus{\mathcal I}^\gamma(\Pi^1_{\xi+2}(\kappa))$. \end{proof}
The next corollary extends Baumgartner's observation that the use of ideals is necessary in Theorem \ref{Baumgartner_basic}.
\begin{corollary}\label{corollary_characterization}
Assume that $\mathcal O$ is ineffable and simple, $\gamma<\omega$, $\xi<\kappa$, and $I=\Pi^1_{<\xi}(\kappa)$. Then, $\kappa\in{\mathcal O}^{\gamma+1}(I)^+$ if and only if \begin{enumerate} \item $\kappa\in {\mathcal O}_0({\mathcal O}^\gamma(I))^+\cap\Pi^1_{\xi+2\gamma+1}(\kappa)^+$ and \item the ideal $\overline{{\mathcal O}_0({\mathcal O}^\gamma(I))\cup\Pi^1_{\xi+2\gamma+1}(\kappa)}$ is nontrivial and equals ${\mathcal O}^{\gamma+1}(I)$. \end{enumerate} If either $\mathcal O=\mathcal I$, or $\mathcal O$ is ineffable and always simple in the above, then the conclusion also holds in case $\xi=0$.
Moreover, (2) is necessary in the above characterization, that is, the least $\Pi^1_{\xi+2\gamma+1}$-indescribable cardinal $\kappa$ that satisfies $\kappa\in{\mathcal O}_0({\mathcal O}^\gamma(I))^+$ is strictly below the least cardinal $\kappa$ that satisfies $\kappa\in{\mathcal O}^{\gamma+1}(I)^+$. \end{corollary} \begin{proof}
Note that $\kappa$ being $\Pi^1_{\xi+2\gamma+1}$-indescribable and $\kappa\in{\mathcal O}_0({\mathcal O}^\gamma(I))^+$ are $\Pi^1_{\xi+2\gamma+2}$-properties over $V_\kappa$, and $\kappa\in{\mathcal O}^{\gamma+1}(I)^+$ implies that $\kappa$ is $\Pi^1_{<(\xi+1+2\gamma+2)}$-indescribable by Corollary \ref{corollary:levelofhomogeneity}, and hence $\kappa$ is $\Pi^1_{\xi+2\gamma+2}$-indescribable, using that $\gamma$ is finite (so that $\xi+2\gamma+2<\xi+1+2\gamma+2$). Now since $\xi<\kappa$, this yields some $\xi<\alpha<\kappa$ such that $\alpha\in{\mathcal O}_0({\mathcal O}^\gamma(\Pi^1_\xi(\alpha)))^+$ and $\alpha\in\Pi^1_{\xi+2\gamma+1}(\alpha)^+$. \end{proof}
In the above, one could obtain analogous results when $\kappa\le\xi<\kappa^+$; however, the statement that is reflected down from $\kappa$ to $\alpha$ will change, as $\xi$ will be reflected down to $f^\kappa_\xi(\alpha)$. This still yields a satisfactory analogue of Corollary \ref{corollary_characterization} when $\kappa\leq\xi<\kappa^+$ and $\xi$ is definable from $\kappa$ (for example, if $\xi=\kappa$, or $\xi=\kappa+\kappa$ etc.). We leave the easy and straightforward details to the interested reader.
\subsection{On infinite iterates of operators}
We would like to use Theorem \ref{theorem_generating_iterates} to prove an analogue of Corollary \ref{corollary_xi_to_xi_plus_one_hierarchy} for infinite $\gamma$, which would, in a sense, say that the strength of the hypothesis ``$\kappa\in{\mathcal O}^\gamma(\Pi^1_\xi(\kappa))^+$'' increases as $\xi$ increases. However, there is an added complication, as illustrated in Corollary \ref{corollary_collapse}, which is that if $\xi_0<\xi_1<\kappa^+$, it may be that $\kappa\in{\mathcal O}^\gamma(\Pi^1_{\xi_0}(\kappa))^+$ is equivalent to $\kappa\in{\mathcal O}^\gamma(\Pi^1_{\xi_1}(\kappa))^+$, if $\gamma$ is large enough. In the next theorem, we determine the least $\gamma$ for which this occurs when ${\mathcal O}\in\{{\mathcal I},{\mathcal R}\}$. Let us note that we do not know how to verify this leastness for operators other than ${\mathcal I}$ and ${\mathcal R}$; even though the other statements of the theorem below in fact hold for all simple ineffable operators, we therefore only state the result for these two operators.
\begin{theorem} Suppose $\kappa$ is a cardinal, $\xi_0<\xi_1$ are in $\{-1\}\cup\kappa^+$ and ${\mathcal O}\in\{{\mathcal I},{\mathcal R}\}$. Then, the ideal chains \[\langle{\mathcal O}^\gamma(\Pi^1_{\xi_0}(\kappa))\mid\gamma<\kappa^+\rangle\textrm{ and }\langle{\mathcal O}^\gamma(\Pi^1_{\xi_1}(\kappa))\mid\gamma<\kappa^+\rangle\] are eventually equal. Moreover, letting $\delta=\mathop{\rm ot}\nolimits(\xi_1\setminus\xi_0)\cdot\omega$, if the ideal ${\mathcal O}^\delta(\Pi^1_{\xi_1}(\kappa))$ is nontrivial, then $\delta$ is the least ordinal such that \[{\mathcal O}^\delta(\Pi^1_{\xi_0}(\kappa))={\mathcal O}^\delta(\Pi^1_{\xi_1}(\kappa)).\] \end{theorem}
\begin{proof} First, let us show that ${\mathcal O}^\delta(\Pi^1_{\xi_0}(\kappa))={\mathcal O}^\delta(\Pi^1_{\xi_1}(\kappa))$, where $\delta=\mathop{\rm ot}\nolimits(\xi_1\setminus\xi_0)\cdot\omega$. Since $\xi_0<\xi_1$, it is clear that ${\mathcal O}^\delta(\Pi^1_{\xi_0}(\kappa))\subseteq{\mathcal O}^\delta(\Pi^1_{\xi_1}(\kappa))$. Let us show that ${\mathcal O}^\delta(\Pi^1_{\xi_0}(\kappa))\supseteq{\mathcal O}^\delta(\Pi^1_{\xi_1}(\kappa))$. If $\sigma=\mathop{\rm ot}\nolimits(\xi_1\setminus\xi_0)=n$ is finite, then $\delta=n\cdot\omega=\omega$ and the result follows from Corollary \ref{corollary_collapse}. Suppose $\sigma\geq\omega$. Then $\delta=\sigma\cdot\omega$ is a limit of limit ordinals. Thus, it will suffice to show that ${\mathcal O}^\eta(\Pi^1_{\xi_1}(\kappa))\subseteq{\mathcal O}^\delta(\Pi^1_{\xi_0}(\kappa))$ for all limit ordinals $\eta<\delta$. Fix a limit ordinal $\eta<\delta$. By Corollary \ref{corollary:levelofhomogeneity}, we have \begin{align} \Pi^1_{\xi_1}(\kappa)=\Pi^1_{\xi_0+\sigma}(\kappa)\subseteq{\mathcal O}^{\sigma+1}(\Pi^1_{\xi_0}(\kappa)).\label{equation_least} \end{align} Applying the operator ${\mathcal O}$ $\eta$-many times to (\ref{equation_least}) yields \[{\mathcal O}^\eta(\Pi^1_{\xi_1}(\kappa))\subseteq{\mathcal O}^{\sigma+1+\eta}(\Pi^1_{\xi_0}(\kappa))\subseteq{\mathcal O}^{\delta}(\Pi^1_{\xi_0}(\kappa)),\] where the final subset relation follows since $\sigma+1+\eta<\delta$.
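Here, the inequality $\sigma+1+\eta<\delta$ used in the last step can be verified directly: since $\eta<\delta=\sigma\cdot\omega$, we may fix $n<\omega$ with $\eta<\sigma\cdot n$, and then, using that $\sigma$ is infinite (so that $1+\sigma\cdot n=\sigma\cdot n$), \[\sigma+1+\eta<\sigma+1+\sigma\cdot n=\sigma+\sigma\cdot n=\sigma\cdot(n+1)<\sigma\cdot\omega=\delta.\]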
Next, let us show that if $\eta<\delta$, then ${\mathcal O}^\eta(\Pi^1_{\xi_0}(\kappa))\subsetneq{\mathcal O}^\eta(\Pi^1_{\xi_1}(\kappa))$. If $\sigma=\mathop{\rm ot}\nolimits(\xi_1\setminus\xi_0)$ is finite, in which case $\delta=\omega$, then the result follows from Corollary \ref{corollary_proper_containment_from_xi_to_xi_plus_one}. On the other hand, if $\sigma$ is infinite, then $\delta=\sigma\cdot\omega$ is a limit of limit ordinals. Let $\nu$ be a limit ordinal with $\eta\le\nu<\delta$. It suffices to show that ${\mathcal O}^{\nu+1}(\Pi^1_{\xi_0}(\kappa))\subsetneq{\mathcal O}^{\nu+1}(\Pi^1_{\xi_1}(\kappa))$, for this contradicts ${\mathcal O}^\eta(\Pi^1_{\xi_0}(\kappa))={\mathcal O}^\eta(\Pi^1_{\xi_1}(\kappa))$. Let \[S=\left\{\alpha<\kappa\mid\alpha\in{\mathcal O}^{f^\kappa_{\nu+1}(\alpha)}(\Pi^1_{f^\kappa_{\xi_0}(\alpha)}(\alpha))\right\}.\] Since $\kappa\in{\mathcal O}^\delta(\Pi^1_{\xi_1}(\kappa))^+$, it follows from Lemma \ref{lemma_set_of_nons_is_positive} that $S\notin{\mathcal O}^{\nu+1}(\Pi^1_{\xi_0}(\kappa))$. Furthermore, the fact that $S\notin{\mathcal O}^{\nu+1}(\Pi^1_{\xi_0}(\kappa))$ is expressible by a $\Pi^1_{\xi_0+\nu+2}$-sentence $\Theta$ over $V_\kappa$, by Lemma \ref{lemma_complexity}. Let $C$ be the corresponding club subset of $\kappa$ obtained from that lemma. It follows that the set \[D=\{\alpha\in C\mid V_\alpha\models\Theta(S)\mathrm{|}^\kappa_\alpha\}\] is in the filter $\Pi^1_{\xi_0+\nu+2}(\kappa)^*$ and is contained in $\kappa\setminus S$. Hence, $S\in\Pi^1_{\xi_0+\nu+2}(\kappa)$. By Corollary \ref{corollary:levelofhomogeneity}, since $\xi_1=\xi_0+\sigma$, it follows that $\Pi^1_{\xi_0+\sigma+\nu+1}(\kappa)\subseteq{\mathcal O}^{\nu+1}(\Pi^1_{\xi_1}(\kappa))$. Since $\nu<\delta=\sigma\cdot\omega$, it follows that $\xi_0+\nu+2<\xi_0+\sigma+\nu+1$ and thus $\Pi^1_{\xi_0+\nu+2}(\kappa)\subseteq{\mathcal O}^{\nu+1}(\Pi^1_{\xi_1}(\kappa))$. Together with the above, this implies that $S\in{\mathcal O}^{\nu+1}(\Pi^1_{\xi_1}(\kappa))$. 
\end{proof}
Next, extending Corollary \ref{corollary_xi_to_xi_plus_one_hierarchy} to infinite iterates of operators, we show that for ineffable and simple operators ${\mathcal O}$, $\gamma<\kappa^+$ and $\xi_0<\xi_1$ in $\kappa^+\setminus\omega$, the hypothesis $\kappa\in{\mathcal O}^\gamma(\Pi^1_{\xi_1}(\kappa))^+$ implies that there are many $\alpha<\kappa$ which satisfy $\alpha\in{\mathcal O}^{f^\kappa_\gamma(\alpha)}(\Pi^1_{f^\kappa_{\xi_0}(\alpha)}(\alpha))^+$, \emph{assuming $\xi_0$ and $\xi_1$ are far enough apart}. Thus, the hypotheses of the form $\kappa\in{\mathcal O}^\gamma(\Pi^1_\xi(\kappa))^+$ for (certain) $\xi<\kappa^+$ provide a strictly increasing hierarchy of length $\kappa^+$.
\begin{theorem} Suppose ${\mathcal O}$ is ineffable and simple, $\kappa$ is a cardinal, $\omega\le\xi_0<\xi_1$ are in $\{-1\}\cup\kappa^+$ and $\gamma<\mathop{\rm ot}\nolimits(\xi_1\setminus\xi_0)\cdot\omega$. If $\kappa\in{\mathcal O}^\gamma(\Pi^1_{\xi_1}(\kappa))^+$, then the set
\[\{\alpha<\kappa\mid\alpha\in{\mathcal O}^{f^\kappa_\gamma(\alpha)}(\Pi^1_{f^\kappa_{\xi_0}(\alpha)}(\alpha))^+\}\] is in ${\mathcal O}^\gamma(\Pi^1_{\xi_1}(\kappa))^*$. \end{theorem} \begin{proof}
Since $\kappa\in{\mathcal O}^\gamma(\Pi^1_{\xi_1}(\kappa))^+$ and $\xi_0<\xi_1$, we have $\kappa\in{\mathcal O}^\gamma(\Pi^1_{\xi_0}(\kappa))^+$, which is expressible by a $\Pi^1_{\xi_0+1+2\gamma}$-formula $\Theta$ over $V_\kappa$ by Lemma \ref{lemma_complexity extended}. Let $C$ be the corresponding club subset of $\kappa$ obtained from that lemma. Since $\gamma<\mathop{\rm ot}\nolimits(\xi_1\setminus\xi_0)\cdot\omega$, it follows that $\xi_0+1+2\gamma<\xi_1+1+2\gamma$. Now by Corollary \ref{corollary:levelofhomogeneity}, we see that $\Pi^1_{<(\xi_1+1+2\gamma)}(\kappa)\subseteq{\mathcal O}^\gamma(\Pi^1_{\xi_1}(\kappa))$, and thus, the set \[D=\{\alpha\in C\mid V_\alpha\models\Theta(\kappa)\mathrm{|}^\kappa_\alpha\}\subseteq\{\alpha<\kappa\mid\alpha\in{\mathcal O}^{f^\kappa_\gamma(\alpha)}(\Pi^1_{f^\kappa_{\xi_0}(\alpha)}(\alpha))^+\}\] is in ${\mathcal O}^\gamma(\Pi^1_{\xi_1}(\kappa))^*$. \end{proof}
\section{On some results of the first author}\label{Cody_results}
In \cite[Theorem 4.1]{MR4206111}, the first author claimed the following: If $S\in\mathcal R([\kappa]^{<\kappa})^+$, then \[T=\{\alpha<\kappa\mid\forall\xi<\alpha\ S\cap\alpha\in\Pi^1_\xi(\alpha)^+\}\in\mathcal R([\kappa]^{<\kappa})^*.\]
The proof that is provided, however, is slightly flawed, and in fact only yields a somewhat weaker result, namely a weak form of the analogue of Theorem \ref{theorem_pushing_Baumgartner} for the Ramsey operator $\mathcal R$ rather than the ineffability operator $\mathcal I$, in the special case when $\gamma=0$ (see below). We first provide a counterexample to the above statement that is claimed in \cite{MR4206111}, and then follow it with a corrected version of that theorem. We then briefly discuss the consequences that this has for other results of \cite{MR4206111}. To begin with, we need an auxiliary result.
\begin{lemma}
If $\kappa$ is a measurable cardinal, then
\[\{\alpha<\kappa\mid\alpha\textrm{ is not Ramsey}\}\not\in\mathcal R([\kappa]^{<\kappa})^*.\] \end{lemma} \begin{proof}
By Theorem \ref{theorem:ramseymodels}, $\mathcal R([\kappa]^{<\kappa})^*={\mathcal R}_{mod}([\kappa]^{<\kappa})^*$. Let $U^*$ be a normal ultrafilter on $\kappa$ witnessing its measurability, and let $A\subseteq\kappa$ be arbitrary. Let $M^*\prec H((2^\kappa)^+)$ have size $\kappa$ with $A,U^*\in M^*$ and such that $\kappa+1\subseteq M^*$, let $M$ be the transitive collapse of $M^*$, and let $U$ be the image of $U^*$ under the collapsing map. Then, $U$ is $M$-normal, $\kappa$-amenable for $M$ and countably complete, and since $\kappa$ is Ramsey in the ultrapower of $V$ by $U^*$, it is also Ramsey in the ultrapower of $M$ by $U$, and hence $\{\alpha<\kappa\mid\alpha$ is Ramsey$\}\in U$. This shows that $\{\alpha<\kappa\mid\alpha$ is not Ramsey$\}\not\in{\mathcal R}_{mod}([\kappa]^{<\kappa})^*$. \end{proof}
\begin{counterexample}
Assume that $\kappa$ is a Ramsey cardinal such that \[S=\{\alpha<\kappa\mid\alpha\textrm{ is not Ramsey}\}\not\in\mathcal R([\kappa]^{<\kappa})^*;\] by the above lemma, this holds whenever $\kappa$ is measurable.
Then,
\begin{align*} T&=\{\alpha<\kappa\mid\forall\xi<\alpha\ S\cap\alpha\in\Pi^1_\xi(\alpha)^+\}\subseteq\{\alpha<\kappa\mid S\cap\alpha\in\Pi^1_2(\alpha)^+\}\\ &=\{\alpha<\kappa\mid\{\beta<\alpha\mid\beta\textrm{ is not Ramsey}\}\in\Pi^1_2(\alpha)^+\}. \end{align*}
But since being Ramsey is a $\Pi^1_2$-property, if $\alpha$ is a Ramsey cardinal, then every set in $\Pi^1_2(\alpha)^+$ contains a Ramsey cardinal. Hence the latter set, and thus also $T$, is contained in $S$. This shows that $T\not\in\mathcal R([\kappa]^{<\kappa})^*$. \end{counterexample}
The following seems to be exactly the statement that is shown to hold true by the proof of \cite[Theorem 4.1]{MR4206111}.
\begin{theorem}\label{theorem:codycorrect}
If $\kappa$ is a cardinal, $S\in\mathcal R([\kappa]^{<\kappa})^+$, and \[T=\{\alpha<\kappa\mid\forall\xi<\alpha\ S\cap\alpha\in\Pi^1_\xi(\alpha)^+\},\] then $S\setminus T\in\mathcal R([\kappa]^{<\kappa})$. \end{theorem}
As in the case when $\gamma=0$ in the proof of Theorem \ref{theorem_pushing_Baumgartner}, this result now follows directly from Theorem \ref{theorem_baumgartner}, because the subtle ideal is contained in the Ramsey ideal ${\mathcal R}([\kappa]^{<\kappa})$. Note that Theorem \ref{theorem_baumgartner} in fact yields the stronger statement that if \[T^*=\{\alpha<\kappa\mid\forall\xi<\alpha^+\ S\cap\alpha\in\Pi^1_\xi(\alpha)^+\},\] then $S\setminus T^*\in\mathcal R([\kappa]^{<\kappa})$.
The next result that is claimed in \cite{MR4206111} is its \cite[Theorem 4.2]{MR4206111}, which suffers from the same kind of problem as its \cite[Theorem 4.1]{MR4206111} (and is now seen to be wrong, for it includes the base case when $\alpha=0$, which is \cite[Theorem 4.1]{MR4206111}). However, if its statement is modified according to the modification of \cite[Theorem 4.1]{MR4206111} that we provided in Theorem \ref{theorem:codycorrect} above, it is not clear whether Cody's argument can be adapted to work. Let us thus state what might be a good candidate for a corrected version of \cite[Theorem 4.2]{MR4206111} as an open question.\footnote{In fact, we ask a strong version of this question, allowing for $\xi<\alpha^+$ rather than just $\xi<\alpha$.}
\begin{question}\label{question:ramseylikeineffable2}
Assume that $\kappa$ is a cardinal, that $\gamma<\kappa$ is an ordinal, that $S\in\mathcal R^{\gamma+1}([\kappa]^{<\kappa})^+$, and that $T=\{\alpha<\kappa\mid\forall\xi<\alpha^+\ S\cap\alpha\in\mathcal R^\gamma(\Pi^1_\xi(\alpha))^+\}$. Does it follow that $S\setminus T\in\mathcal R^{\gamma+1}([\kappa]^{<\kappa})$?\footnote{If one tries to adapt the proof of \cite[Theorem 4.2]{MR4206111} in a seemingly obvious way, making use of the notation from that proof, one defines sets $X$ and $H$ in $\mathcal R^{\alpha_0+1}([\kappa]^{<\kappa})^+$, and $C$, as in Cody's argument, but then, the inductive conclusion is that $H\setminus C$ is in $\mathcal R^{\alpha_0+1}([\kappa]^{<\kappa})$, rather than Cody's inductive conclusion that $C\in\mathcal R^{\alpha_0+1}([\kappa]^{<\kappa})^*$. Now our weaker conclusion doesn't seem to allow us to derive that $X\cap C\in\mathcal R^{\alpha_0+1}([\kappa]^{<\kappa})^+$.} \end{question}
\cite[Theorem 4.2]{MR4206111} is then used to deduce \cite[Corollary 4.3]{MR4206111}, which we would like to pose as yet another open question, since it now seems unclear how to prove the statement below when $\gamma>0$ (its instance for $\gamma=0$ however follows directly from Theorem \ref{theorem:codycorrect}).
\begin{question}\label{question:pushinguptheramseyhierarchy}
Assume that $\kappa\in\mathcal R^{\gamma+1}([\kappa]^{<\kappa})^+$. Does it follow that \[\{\alpha<\kappa\mid\forall\xi<\alpha\ \alpha\in\mathcal R^\gamma(\Pi^1_\xi(\alpha))^+\}\in\mathcal R^{\gamma+1}([\kappa]^{<\kappa})^*?\] \end{question}
In the remainder of \cite{MR4206111}, the results from its Section 4 are only used in a few places. The first result that becomes unclear is \cite[Theorem 6.7 (2)]{MR4206111} (except for the case when $m=1$, for the proof of which the case when $\gamma=0$ in Question \ref{question:pushinguptheramseyhierarchy} suffices), which we thus state as an open question.
\begin{question}\label{question:propercontainment}
Suppose $1<m<\omega$ and $\xi<\kappa$.
If $\kappa\in{\mathcal R}^m(\Pi^1_\xi(\kappa))^+$, does it follow that the inclusion \[{\mathcal R}^{m-1}(\Pi^1_{\xi+2}(\kappa))\subseteq{\mathcal R}^m(\Pi^1_\xi(\kappa))\] is a proper inclusion? \end{question}
The only other result from \cite{MR4206111} that becomes unclear is (the properness of the containments in) \cite[Theorem 7.9]{MR4206111}, which is essentially a version of \cite[Theorem 6.7 (2)]{MR4206111} (and thus would yield a version of Question \ref{question:propercontainment}) for infinite $m$.
In order to answer Question \ref{question:ramseylikeineffable2} in the affirmative, it seems one would need to proceed by induction on $\gamma$ to prove a statement similar to that of Theorem \ref{theorem_pushing_Baumgartner}, but with the ineffable operator ${\mathcal I}$ replaced with the Ramsey operator ${\mathcal R}$ and the $S$-list $\vec{S}$ replaced with a regressive function. This suggests the following.
\begin{question}\label{question_Baumgartner_for_Ramsey_hierarchy} Suppose $\gamma<\kappa^+$, $S\in {\mathcal R}^{\gamma+1}([\kappa]^{<\kappa})^+$ and $f:[S]^{<\omega}\to \kappa$ is a regressive function. Let $A$ be the set of all ordinals $\alpha\in S$ such that \[\exists X\subseteq S\cap\alpha\left[(\forall \xi<\alpha^+\ X\in {\mathcal R}^{f^\kappa_\gamma(\alpha)}(\Pi^1_\xi(\alpha))^+) \land (X\cup\{\alpha\}\text{ is homogeneous for }f)\right].\] Does it follow that $S\setminus A\in{\mathcal R}^{\gamma+1}([\kappa]^{<\kappa})$? \end{question} Notice that in order to address Question \ref{question_Baumgartner_for_Ramsey_hierarchy}, one might attempt an argument similar to that of Theorem \ref{theorem_baumgartners_lemma_for_the_strongly_ramsey_ideal}, using Ramseyness embeddings instead of strong Ramseyness embeddings. However, the elementary embedding characterization of Ramseyness involves weak $\kappa$-models which are not in general closed under $\omega$-sequences, and therefore, in the context of the proof of Theorem \ref{theorem_baumgartners_lemma_for_the_strongly_ramsey_ideal}, if one only assumes that $M$ is a weak $\kappa$-model, there is no reason to expect that the sequence $\langle B_n\mid n<\omega\rangle$ is in $M$, and hence $B=\bigcap_{n<\omega}B_n$ may not be in $M$.
Feng \cite{MR1077260} showed that the Ramsey operator can be characterized using $(\omega,S)$-sequences. Recall that for any set $S$ of ordinals, an \emph{$(\omega,S)$-sequence} is a sequence $\vec S$ of the form $\vec S=\langle S_{\alpha_1\ldots\alpha_n}\mid 1\le n<\omega,\,\alpha_1<\ldots<\alpha_n,\,\alpha_1,\ldots,\alpha_n\in S\rangle$, where each $S_{\alpha_1\ldots\alpha_n}\subseteq\alpha_1$. We say that $H\subseteq S$ is \emph{homogeneous} for $\vec S$ if for all $n>0$, and all $\alpha_1<\ldots<\alpha_n$ and $\beta_1<\ldots<\beta_n$ from $H$, if $\alpha_1\le\beta_1$, then $S_{\alpha_1\ldots\alpha_n}=S_{\beta_1\ldots\beta_n}\cap\alpha_1$. Feng proved that for any ideal $I$ on a regular cardinal $\kappa$ we have $S\in{\mathcal R}(I)^+$ if and only if every $(\omega,S)$-sequence has a homogeneous set $H\in P(S)\cap I^+$. Thus, in Question \ref{question_Baumgartner_for_Ramsey_hierarchy} one may replace the regressive function with an $(\omega,S)$-sequence if desired.
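For instance, taking $n=1$ in the definition, homogeneity of $H$ for $\vec S$ implies that the sets $S_\alpha$ for $\alpha\in H$ cohere: \[\alpha\le\beta\textrm{ in }H\quad\Longrightarrow\quad S_\alpha=S_\beta\cap\alpha,\] so that the single set $\bigcup_{\alpha\in H}S_\alpha$ has trace $S_\alpha$ on every $\alpha\in H$.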
It seems that in order to handle the base case ($\gamma=0$) of Question \ref{question_Baumgartner_for_Ramsey_hierarchy}, one would want to address the following question about the pre-Ramsey ideal; this is in analogy to the fact that the base case of Theorem \ref{theorem_pushing_Baumgartner} follows from the corresponding result, namely Theorem \ref{theorem_generalizing_Baumgartner}, about the subtle ideal. It is straightforward to check that the pre-Ramsey ideal can be characterized in terms of $(\omega,S)$-sequences, so let us formulate the question as follows.
\begin{question}\label{question:ramseylikeineffable}
If $S\in\mathcal R_0([\kappa]^{<\kappa})^+$, $\vec a$ is an $(\omega,S)$-sequence, and \[A=\{\alpha\!\in\!S\mid\exists X\!\subseteq\!S\cap\alpha\,\forall\xi<\alpha^+\ X\!\in\!\Pi^1_\xi(\alpha)^+\,\land\,X\!\cup\!\{\alpha\}\textrm{ is homogeneous for }\vec a\},\] does it follow that $S\setminus A\in\mathcal R_0([\kappa]^{<\kappa})$? \end{question}
Furthermore, note that in Theorem \ref{theorem_generalizing_Baumgartner}, we are only stating a particular instance of Baumgartner's original result, for it is not only about subtlety, but about \emph{$n$-subtlety} for any particular $n<\omega$; here $n$-subtlety is a property which resembles subtlety but is formulated in terms of $(n,S)$-sequences (see \cite{MR0384553}). Since pre-Ramseyness is, in a certain sense, simultaneous $n$-subtlety for all $n<\omega$, one could hope for Baumgartner's argument to somehow be adaptable to the context of our Question \ref{question:ramseylikeineffable}, and thus answer it positively. However, our attempts to do so have as yet been unsuccessful.
Let us close by posing the simplest version of Question \ref{question_Baumgartner_for_Ramsey_hierarchy} which remains open. \begin{question}\label{question_simple} Is the hypothesis ``$\exists\kappa\ \kappa\in{\mathcal R}^2([\kappa]^{<\kappa})^+$'' stronger in consistency strength than ``$\exists\kappa\ \kappa\in{\mathcal R}(\Pi^1_1(\kappa))^+$''?
\end{question}
\end{document}
\begin{document}
\title{On weak solutions to a fractional Hardy--H\'enon equation:\\ Part I: Nonexistence}
\author{ Shoichi Hasegawa\\ Department of Mathematics, School of Fundamental Science and Engineering,\\ Waseda University,\\ 3-4-1 Okubo, Shinjuku-ku, Tokyo 169-8555, Japan \\ \\ Norihisa Ikoma\\ Department of Mathematics, Faculty of Science and Technology,\\ Keio University,\\ 3-14-1 Hiyoshi, Kohoku-ku, Yokohama, Kanagawa 223-8522, Japan \\ \\ Tatsuki Kawakami\\ Applied Mathematics and Informatics Course, Faculty of Advanced Science and Technology,\\ Ryukoku University,\\ 1-5 Yokotani, Seta Oe-cho, Otsu, Shiga 520-2194, Japan } \date{} \maketitle
\begin{abstract} This paper and \cite{HIK-20} treat the existence and nonexistence of stable (resp. stable outside a compact set) weak solutions to a fractional Hardy--H\'enon equation
$(-\Delta)^s u = |x|^\ell |u|^{p-1} u$ in $\mathbb{R}^N$, where $0 < s < 1$, $\ell > -2s$, $p>1$, $N \geq 1$ and $N > 2s$. In this paper, the nonexistence part is proved for the Joseph--Lundgren subcritical case. \end{abstract}
\noindent \textbf{Keywords}: fractional Hardy--H\'enon equation, stable (stable outside a compact set) solutions, Liouville type theorem.
\noindent \textbf{AMS Mathematics Subject Classification System 2020}: 35R11, 35J61, 35B33, 35B53, 35D30.
\section{Introduction} \label{section:1}
In this paper and \cite{HIK-20}, we consider a fractional Hardy--H\'enon equation \begin{equation} \label{eq:1.1}
(-\Delta)^s u=|x|^\ell |u|^{p-1}u\qquad\mbox{in}\quad{\mathbb R}^N \end{equation} and throughout this paper, we always assume the following condition on $s,\ell,p,N$:
\begin{equation}\label{eq:1.2}
0 < s < 1, \quad \ell>-2s, \quad p>1,\quad N\ge1,\quad N>2s.
\end{equation} In \eqref{eq:1.1}, $(-\Delta)^s$ is the fractional Laplacian, which is defined for any $\varphi\in C^\infty_c (\mathbb R^N)$ by
\[
(-\Delta)^s\varphi(x)
:= C_{N,s}\mbox{P.V.}\int_{\mathbb R^N}\frac{\varphi(x)-\varphi(y)}{|x-y|^{N+2s}}\,dy
=C_{N,s}\lim_{\varepsilon\to0}\int_{|x-y|>\varepsilon}\frac{\varphi(x)-\varphi(y)}{|x-y|^{N+2s}}\,dy \] for $x\in \mathbb R^N$, where P.V. stands for the Cauchy principal value integral, \begin{equation} \label{eq:1.3} C_{N,s}:=2^{2s}s(1-s)\pi^{-\frac{N}{2}}\frac{\Gamma(\frac{N+2s}{2})}{\Gamma(2-s)} \end{equation} and $\Gamma(z)$ is the gamma function.
This paper and \cite{HIK-20} are motivated by previous work in \cite{Farina,DDG,DDW,H} and we shall study the existence and nonexistence of stable solutions to \eqref{eq:1.1}. Farina \cite{Farina} studied \eqref{eq:1.1} in the case $s=1$, $\ell = 0$ and $N \geq 2$ and he showed the existence ($ p \geq p_c(N) $) and nonexistence ($1 < p < p_c(N)$) of stable (resp. stable outside a compact set) solutions where the Joseph--Lundgren exponent $p_c(N)$ is defined by
\[
p_c(N) := \left\{\begin{aligned}
&\frac{ (N-2)^2 - 4N + 8 \sqrt{N-1} }{(N-2)(N-10)}
& &\text{if } N \geq 11,
\\
&\infty & &\text{if } 1 \leq N \leq 10.
\end{aligned}\right.
\] Next, Dancer, Du and Guo \cite[Theorem 1.2]{DDG} and Wang and Ye \cite[Theorem 1.7]{WY-12} studied the case $\ell > -2 $ and $N \geq 2 $ and showed the existence and nonexistence of stable and finite Morse index solutions in $H^1_{\rm loc} (\mathbb{R}^N) \cap L^\infty_{\rm loc}(\mathbb{R}^N)$. We remark that in \cite{WY-12}, they treated a weaker class of solutions than those in $H^1_{\rm loc} (\mathbb{R}^N) \cap L^\infty_{\rm loc} (\mathbb{R}^N)$. The threshold on $p$ is given by
\[
p_+(N,\ell) := \left\{\begin{aligned}
& \frac{ (N-2)^2 - 2 (\ell + 2) (\ell + N) + 2\sqrt{ (\ell +2)^3 (\ell + 2 N - 2) } }
{(N-2) (N-4 \ell - 10) } & &\text{if} \ N > 10 + 4 \ell ,
\\
& \infty & &\text{if} \ 2 \leq N \leq 10 + 4 \ell.
\end{aligned}\right.
\]
On the other hand, the case $s=1/2$, $\ell = 0$ and $N \geq 2$ was treated in Chipot, Chleb\'{\i}k, Fila and Shafrir \cite{CCFS} as an extension problem, and it is shown there that there exists a positive radial solution to \eqref{eq:1.1} for $p \geq \frac{N+1}{N-1} = p_S(N,0)$, where $p_S(N,\ell)$ is defined in \eqref{eq:1.6}. Harada \cite{H} considered the same case $s=1/2$, $\ell = 0$ and $N \geq 2$, introduced the notion corresponding to the Joseph--Lundgren exponent $p_c(N)$ and proved the existence of a family of layered positive radial solutions when $p$ is Joseph--Lundgren supercritical or critical. In \cite{H}, the subcritical case is also treated. D\'{a}vila, Dupaigne and Wei \cite{DDW} dealt with the case $\ell = 0$ and $0<s<1$, and proved the existence and nonexistence of stable (resp. stable outside a compact set) solutions of \eqref{eq:1.1}. We remark that in \cite{DDW}, they treated solutions
$u \in C^{2\sigma} (\mathbb{R}^N) \cap L^1( \mathbb{R}^N , (1+|x|)^{-N-2s}dx )$ (the sign of the weight in \cite[Theorem 1.1]{DDW} might be a misprint)
where $\sigma > s$ and
\[
L^q(\mathbb{R}^N , w(x)dx) := \Set{ u : \mathbb{R}^N \to \mathbb{R} | \| u \|_{L^q(\mathbb{R}^N , w(x)dx)}
:= \left( \int_{ \mathbb{R}^N} |u(x)|^q w(x) \,dx \right)^{\frac{1}{q}} < \infty }.
\] However, in order to make their argument work, it seems appropriate to assume
$ u \in C^{2\sigma} (\mathbb{R}^N) \cap L^2 (\mathbb{R}^N , (1+|x|)^{-N-2s}dx ) $. For this point, see Remark \ref{Remark:2.4} and \cite[Lemmata 2.1--2.4]{DDW}.
Notice that $L^2(\mathbb{R}^N, (1+|x|)^{-N-2s}dx ) \subset L^1(\mathbb{R}^N , (1+|x|)^{-N-2s}dx)$
since $(1+|x|)^{-N-2s} \in L^1(\mathbb{R}^N)$.
We also refer to Li and Bao \cite{LB-19} for the study of positive solutions of \eqref{eq:1.1} with singularity at $x=0$ in the case $-2s < \ell \leq 0$ and $1 < p \leq p_S(N,\ell)$.
The aim of this paper and \cite{HIK-20} is to extend the results of \cite{DDW,H} to the case $\ell \neq 0$ and to establish a result which is a fractional counterpart of \cite{DDG}. In this paper, we establish the nonexistence result. On the other hand, in \cite{HIK-20}, we will consider the existence result and study properties of solutions.
After submitting this paper, we learned of Barrios and Quaas \cite{BQ-20} and the references therein from Alexander Quaas. In \cite{BQ-20} and Dai and Qin \cite{DQ}, the nonexistence of positive solutions (and nonnegative nontrivial solutions) of \eqref{eq:1.1} was studied for $\ell \in (-2s,\infty)$ and $0<p<p_S(N,\ell)$. On the other hand, Yang \cite{Ya-15} considered the existence of a positive solution of \eqref{eq:1.1} via a minimizing problem for $p=p_S(N,\ell)$. Finally, Fazly--Wei \cite{FW-16} considered \eqref{eq:1.1} for the case $0<\ell$ and $1 < p \leq p_S(N,\ell)$. They proved the nonexistence of stable solutions for $1<p< p_S(N,\ell)$, and for $p=p_S(N,\ell)$, they obtained the same result under a finite energy condition on solutions. See also the comments after Theorem \ref{Theorem:1.1}.
We first introduce the notion of solutions of \eqref{eq:1.1}. \begin{definition} \label{Definition:1.1} Suppose \eqref{eq:1.2}. We say that $u$ is a \emph{solution} of \eqref{eq:1.1}
if $u$ satisfies $u\in H^s_{\mathrm{loc}}(\mathbb R^N)\cap L^\infty_{\rm loc}(\mathbb R^N) \cap L^1(\mathbb{R}^N, (1+|x|)^{-N-2s}dx) $ and \begin{equation} \label{eq:1.4} \langle u,\varphi\rangle_{\dot H^s(\mathbb R^N)}
=\int_{\mathbb R^N}|x|^\ell |u|^{p-1}u\varphi\, dx \quad\mbox{for all}\,\,\, \varphi\in C^\infty_c(\mathbb R^N) \end{equation} where \begin{equation} \label{eq:1.5} \langle u,\varphi\rangle_{\dot H^s(\mathbb R^N)}
:=\frac{C_{N,s}}{2}\int_{\mathbb R^N\times\mathbb R^N}\frac{(u(x)-u(y))(\varphi(x)-\varphi(y))}{|x-y|^{N+2s}}\,dx\,dy \end{equation} and $C_{N,s}$ is the constant defined by \eqref{eq:1.3}. Remark that
$|x|^\ell |u(x)|^{p-1} u(x) \in L^1_{\rm loc} (\mathbb{R}^N)$ due to $u \in L^\infty_{\rm loc}(\mathbb{R}^N)$ and \eqref{eq:1.2}. For $\Omega \subset \mathbb{R}^N$, we also set
\[
\| u \|_{\dot{H}^s (\Omega) }
:=
\left( \frac{C_{N,s}}{2} \int_{ \Omega \times \Omega }
\frac{ |u(x) - u(y)|^2 }{|x-y|^{N+2s}} \, dx \, dy
\right)^{ \frac{1}{2} }.
\] \end{definition}
\begin{remark} \label{Remark:1.1} In Section \ref{section:2}, we will see that
\begin{enumerate}
\item
$\left\langle u, \varphi \right\rangle_{\dot{H}^s(\mathbb{R}^N)} \in \mathbb{R} $
for any $u \in H^s_{\mathrm{loc}}(\mathbb{R}^N) \cap L^1(\mathbb{R}^N, (1+|x|)^{-N-2s}dx) $
and $\varphi \in C^\infty_c (\mathbb{R}^N)$.
\item
we may replace $C^\infty_{c}(\mathbb{R}^N)$ in Definition \ref{Definition:1.1} by $C^1_c(\mathbb{R}^N)$.
\item
our solution $u$ satisfies \eqref{eq:1.1} in the distribution sense, that is,
\[
\int_{\mathbb{R}^N} u \left( -\Delta \right)^s\varphi dx
= \left\langle u , \varphi \right\rangle_{\dot{H}^s(\mathbb{R}^N)} = \int_{\mathbb{R}^N} |x|^\ell |u|^{p-1} u \varphi \, dx
\quad \text{for every $\varphi \in C^\infty_c(\mathbb{R}^N)$}.
\]
\end{enumerate} For the details, see Lemma \ref{Lemma:2.1}. \end{remark}
In order to state our main result and for later use, following \cite{DDW}, we introduce some notation. We put
\[
\begin{aligned}
B_R &:= \Set{ x \in \mathbb{R}^N | \, |x| <R }, & S_R &:= \partial B_R = \Set{x \in \mathbb{R}^N | \, |x| =R},
\\
B_R^+ &:= \Set{ (x,t) \in \mathbb{R}^{N+1}_+ | \, \left| (x,t) \right|<R }, &
S_R^+ &:= \partial B_R^+ = \Set{ (x,t) \in \mathbb{R}^{N+1}_+ | \, \left| (x,t) \right| = R },
\end{aligned}
\] and $B_R^c:=\mathbb R^N\setminus B_R$. For $N\ge1$, $s\in(0,1)$ and $\ell>-2s$, we write
\begin{equation}\label{eq:1.6}
p_S(N,\ell) := \frac{N+2s+2\ell}{N-2s} \in (1,\infty).
\end{equation} Note that $p_S(N,0)$ corresponds to the critical exponent of the fractional Sobolev inequality $H^s(\mathbb{R}^N) \subset L^{p_S(N,0)} (\mathbb{R}^N)$. Next, for $\alpha\in[0,(N-2s)/2)$, we set \begin{equation} \label{eq:1.7} \lambda(\alpha) :=2^{2s}\frac{\Gamma(\frac{N+2s+2\alpha}{4})\,\Gamma(\frac{N+2s-2\alpha}{4})} {\Gamma(\frac{N-2s-2\alpha}{4})\,\Gamma(\frac{N-2s+2\alpha}{4})}. \end{equation} It is known that
\begin{equation}\label{eq:1.8}
\text{the function $\alpha\mapsto \lambda(\alpha)$ is strictly decreasing}
\end{equation} and $\lambda(\alpha)\to0$ as $\alpha \nearrow (N-2s)/2$ (see, e.g. Frank, Lieb and Seiringer \cite[Lemma 3.2]{FLS-08} and D\'avila, Dupaigne and Montenegro \cite[Appendix]{DDM}).
\begin{remark}
Let $v_\alpha(x) := |x|^{ - \left( \frac{N-2s}{2} - \alpha \right) }$ for $ 0 \leq \alpha < \frac{N-2s}{2}$. According to Fall \cite[Lemma 4.1]{Fall}, the constant $\lambda(\alpha)$ appears in the equation
\[
(-\Delta)^{s} v_\alpha = \lambda(\alpha) |x|^{-2s} v_\alpha \quad
\text{in} \ \mathbb{R}^N\setminus\{0\}.
\]
\end{remark}
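Since $\lambda(\alpha)$ is given by the explicit Gamma-function formula \eqref{eq:1.7}, the monotonicity \eqref{eq:1.8} and the limit $\lambda(\alpha)\to 0$ as $\alpha \nearrow (N-2s)/2$ can be checked numerically. The following Python sketch does so for the illustrative choice $N=5$, $s=1/2$ (so $(N-2s)/2 = 2$ and $\lambda(0)=\pi/2$); it is a sanity check only, not part of the proof:

```python
from math import gamma, pi

N, s = 5, 0.5  # illustrative values; (N - 2s)/2 = 2

def lam(alpha):
    # lambda(alpha) as defined in (1.7), valid for 0 <= alpha < (N - 2s)/2
    return 2**(2*s) * gamma((N + 2*s + 2*alpha) / 4) * gamma((N + 2*s - 2*alpha) / 4) \
        / (gamma((N - 2*s - 2*alpha) / 4) * gamma((N - 2*s + 2*alpha) / 4))

vals = [lam(a) for a in (0.0, 0.5, 1.0, 1.5, 1.9, 1.99)]
assert all(v1 > v2 for v1, v2 in zip(vals, vals[1:]))  # strictly decreasing, cf. (1.8)
assert vals[-1] < 0.05 * vals[0]                       # lambda(alpha) -> 0 near (N - 2s)/2
assert abs(vals[0] - pi / 2) < 1e-9                    # lambda(0) = 2 Gamma(3/2)^2 = pi/2 here
```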
Finally, we introduce the notions of stable solutions, solutions stable outside a compact set, and solutions with Morse index equal to $K$:
\begin{definition}\label{Definition:1.2}
Let $u \in H^s_{\mathrm{loc}} (\mathbb{R}^N) \cap L^\infty_{\rm loc}(\mathbb{R}^N) \cap L^1(\mathbb{R}^N, (1+|x|)^{-N-2s}dx) $
be a solution of \eqref{eq:1.1}.
We say that $u$ is \emph{stable} if $u$ satisfies
\begin{equation}\label{eq:1.9}
p \int_{\mathbb{R}^N} |x|^\ell |u|^{p-1} \varphi^2 \, dx
\leq \| \varphi \|_{\dot{H}^s(\mathbb{R}^N)}^2 \quad \text{for every $\varphi \in C^\infty_c(\mathbb{R}^N )$}.
\end{equation}
On the other hand, $u$ is called \emph{stable outside a compact set} if
there exists an $R_0 \geq 0$ such that
\begin{equation}
\label{eq:1.10}
p\int_{\mathbb R^N}|x|^\ell|u|^{p-1}\varphi^2\,dx\le \|\varphi\|^2_{\dot H^s(\mathbb R^N)}
\quad \text{for every $\varphi \in C^\infty_c(\mathbb{R}^N \setminus \overline{B_{R_0}} )$}.
\end{equation}
Finally, a solution $u$ is said to have a
\emph{Morse index equal to $K$} provided $K$ is the maximal dimension of subspaces
$Z \subset C^\infty_c ( \mathbb{R}^N)$ with
\[
\| \varphi \|_{\dot{H}^s(\mathbb{R}^N)}^2 - p \int_{ \mathbb{R}^N} |x|^\ell |u|^{p-1} \varphi^2 \, dx < 0
\quad \text{for each $\varphi \in Z \setminus \{0\}$}.
\]
\end{definition}
\begin{remark} \label{Remark:1.3}
\begin{enumerate}
\item
By a density argument, in Definition \ref{Definition:1.2}, we may replace $C_c^\infty(\mathbb{R}^N)$
and $C^\infty_c(\mathbb{R}^N \setminus \overline{ B_{R_0}} )$
by $C_c^1(\mathbb{R}^N)$ and $C^1_c( \mathbb{R}^N \setminus \overline{ B_{R_0} } )$.
In addition, \eqref{eq:1.10} remains true
for $\varphi \in C^1_c(\mathbb{R}^N)$ with $\varphi \equiv 0$ on $B_{R_0}$.
\item
As in \cite[Remark 1]{Farina}, we may check that if a solution $u$ has a finite Morse index,
then $u$ is stable outside a compact set.
\end{enumerate}
\end{remark}
The following is the main result of this paper:
\begin{theorem} \label{Theorem:1.1} Suppose \eqref{eq:1.2} and let
$u\in H^s_{\mathrm{loc}}(\mathbb R^N)\cap L^\infty_{\rm loc}(\mathbb R^N) \cap L^2(\mathbb{R}^N, (1+|x|)^{-N-2s}dx) $ be a solution of \eqref{eq:1.1} which is stable outside a compact set. \begin{itemize} \item[\rm(i)] If $1<p<p_S(N,\ell)$, then $u\equiv0$. \item[\rm(ii)] If $p=p_S(N,\ell)$, then $u$ has finite energy, that is
\[
\|u\|^2_{\dot H^s(\mathbb R^N)}=\int_{\mathbb R^N}|x|^\ell|u|^{p+1}\,dx<+\infty.
\] Furthermore, if $u$ is stable, then $u\equiv 0$. \item[\rm(iii)] If $p_S(N,\ell)<p$ with \begin{equation} \label{eq:1.11} p\,\lambda\bigg(\frac{N-2s}{2}-\frac{2s+\ell}{p-1}\bigg)>\lambda(0), \end{equation} then $u\equiv0$, where $\lambda(\alpha)$ is the function given by \eqref{eq:1.7}. \end{itemize} \end{theorem}
As mentioned above, it might be necessary to suppose $u \in L^2(\mathbb{R}^N, (1+|x|)^{-N-2s} dx)$ in \cite{DDW} and also in \cite{FW-16}, since a similar argument was used in \cite{FW-16}. Taking this point into consideration, we succeed in extending the nonexistence part of \cite{DDW} and \cite{FW-16} to the case $\ell \in (-2s, 0)$. Furthermore, Theorem \ref{Theorem:1.1} may be regarded as a fractional version of a part of \cite[Theorem 1.2]{DDG}. We remark that we deal with weak solutions, while in \cite{DDM,FW-16} classical solutions were studied. On the other hand, when $p_S(N,\ell) < p$ holds and \eqref{eq:1.11} fails to hold, we will show the existence of stable solutions in \cite{HIK-20} and study the properties of those solutions. By Remark \ref{Remark:1.3}, Theorem \ref{Theorem:1.1} also asserts that there is no solution of \eqref{eq:1.1} with finite Morse index when $1<p<p_S(N,\ell)$ or $p_S(N,\ell) < p $ with \eqref{eq:1.11}. When $p=p_S(N,\ell)$, we find that any solution $u$ of \eqref{eq:1.1} with finite Morse index
satisfies $u \in \dot{H}^s (\mathbb{R}^N) \cap L^{p+1} (\mathbb{R}^N , |x|^{\ell} dx )$.
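Condition \eqref{eq:1.11} can be explored numerically from the explicit formulas \eqref{eq:1.6} and \eqref{eq:1.7}. The Python sketch below (with the illustrative values $N=10$, $s=1/2$, $\ell=0$; a sanity check only) confirms that the argument of $\lambda$ in \eqref{eq:1.11} vanishes at $p = p_S(N,\ell)$, so \eqref{eq:1.11} holds just above $p_S(N,\ell)$, while it fails for large $p$, consistent with the existence regime studied in \cite{HIK-20}:

```python
from math import gamma

N, s, ell = 10, 0.5, 0.0  # illustrative values

def lam(alpha):
    # lambda(alpha) from (1.7)
    return 2**(2*s) * gamma((N + 2*s + 2*alpha) / 4) * gamma((N + 2*s - 2*alpha) / 4) \
        / (gamma((N - 2*s - 2*alpha) / 4) * gamma((N - 2*s + 2*alpha) / 4))

p_S = (N + 2*s + 2*ell) / (N - 2*s)  # critical exponent (1.6); = 11/9 here

def margin(p):
    # left-hand side minus right-hand side of (1.11)
    alpha = (N - 2*s) / 2 - (2*s + ell) / (p - 1)
    return p * lam(alpha) - lam(0)

# at p = p_S the argument of lambda is 0, so the margin equals (p_S - 1) * lam(0) > 0
assert abs((N - 2*s) / 2 - (2*s + ell) / (p_S - 1)) < 1e-9
assert margin(1.05 * p_S) > 0   # (1.11) holds slightly above p_S
assert margin(20.0) < 0         # (1.11) fails for large p
```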
The proof of Theorem \ref{Theorem:1.1} is basically similar to \cite{DDW}. However, in \cite{DDW}, they use the fact that solutions are of class $C^1$ or smooth, for instance, see \cite[the proofs of Theorem 1.1 for $1 < p \leq p_S(n)$ and Theorem 1.4, and at the end of proof of Theorem 1.1 for $p_S(n) < p$]{DDW}.
On the other hand, \eqref{eq:1.1} contains the term $|x|^\ell$ and, especially in the case $ \ell < 0$, solutions of \eqref{eq:1.1} are not of class $C^1$ at the origin. Therefore, we need some modifications in the argument. In this paper, we first prove a local Pohozaev type identity as in Fall and Felli \cite{FF} and exploit it to show Theorem \ref{Theorem:1.1} for $1 < p \leq p_S(N,\ell)$. In addition, to show the monotonicity formula (Lemma \ref{Lemma:4.2}), we use the idea in \cite[section 3]{FF}, where the Almgren type frequency was studied.
This paper is organized as follows. In subsection \ref{section:2.1}, we investigate the properties of the $s$-harmonic extension
of functions in $H^s_{\mathrm{loc}}(\mathbb{R}^N) \cap L^1( \mathbb{R}^N , (1+|x|)^{-N-2s} dx )$, that is, functions satisfying the extension problem. Subsection \ref{section:2.2} is devoted to the proof of a local Pohozaev identity, and the energy estimate is carried out in subsection \ref{section:2.3}. Section \ref{section:3} contains the proof of Theorem \ref{Theorem:1.1} for $ 1 < p \leq p_S(N,\ell)$, and in section \ref{section:4} we deal with the case $ p_S(N,\ell) < p$.
\section{Preliminaries} \label{section:2}
This section is divided into three subsections. In subsection \ref{section:2.1}, we show properties of functions which belong to
$H^s_{\mathrm{loc}}(\mathbb R^N) \cap L^1(\mathbb{R}^N, (1+|x|)^{-N-2s}dx) $, and give a relationship between a solution $u$ of \eqref{eq:1.1} and $s$-harmonic functions. In subsection \ref{section:2.2}, we recall local regularity estimates for the extension problem. These estimates are useful for establishing the Pohozaev identity in Proposition~\ref{Proposition:2.2}. Furthermore, applying an argument similar to the one in \cite{DDW}, we also give energy estimates for solutions of \eqref{eq:1.1} in subsection \ref{section:2.3}.
Throughout this paper, the letter $C$ denotes generic positive constants whose values may change from line to line, and even within the same line. Furthermore, we write $X : = (x,t) \in \mathbb{R}^{N+1}_+$.
\subsection{Remark on notion of weak solutions} \label{section:2.1}
We first prove properties of functions which belong to
$ H^s_{\mathrm{loc}}(\mathbb R^N) \cap L^1(\mathbb{R}^N, (1+|x|)^{-N-2s}dx) $.
\begin{lemma} \label{Lemma:2.1}
Let $u\in H^s_{\mathrm{loc}}(\mathbb R^N) \cap L^1(\mathbb{R}^N, (1+|x|)^{-N-2s}dx)$. Then the following hold.
\begin{itemize} \item[\rm(i)] For any $\psi \in C^\infty_c(\mathbb{R}^N)$, $\left\langle u , \psi \right\rangle_{\dot{H}^s(\mathbb{R}^N)} \in \mathbb{R}$ and
\begin{equation}\label{eq:2.1}
\begin{aligned}
&\left| \left\langle u , \psi \right\rangle_{\dot{H}^s(\mathbb{R}^N)} \right|
\\
\leq \ & C(N,s)
\left\{
\| u \|_{H^s(B_{2R})} \| \psi \|_{H^s(B_{2R})}
+ \| u \|_{L^1( \mathbb{R}^N, (1+|x|)^{-N-2s} dx )}
\| \psi \|_{L^1(B_{2R})}
\right\}
\end{aligned}
\end{equation} where $\supp \psi \subset B_R$ with $R \geq 1$ and $C(N,s)$ is a constant depending on $N$ and $s$. In addition, let $\varphi_1\in C^\infty_c(\mathbb R^N)$ with $\varphi_1(x)\equiv1$ in $B_1$ and $\varphi_1(x)\equiv0$ in $B_2^c$, and set $\varphi_n(x):=\varphi_1(n^{-1}x)$. Then for any $\psi\in C^\infty_c(\mathbb R^N)$, \begin{equation*} \langle \varphi_nu,\psi\rangle_{\dot H^s(\mathbb R^N)}\to \langle u,\psi\rangle_{\dot H^s(\mathbb R^N)} \quad{\rm as}\quad n\to\infty. \end{equation*} In particular,
\begin{equation}\label{eq:2.2}
\left\langle u , \psi \right\rangle_{\dot{H}^s(\mathbb{R}^N)} = \int_{\mathbb{R}^N} u (-\Delta)^s \psi \, dx \quad
\text{for each $\psi \in C^\infty_{c}(\mathbb{R}^N)$}.
\end{equation} \item[\rm(ii)] Put \begin{equation} \label{eq:2.3}
P_s (x,t):= p_{N,s}\frac{t^{2s}}{\left( |x|^2 + t^2 \right)^{\frac{N+2s}{2}}},\quad U(x,t) := (P_s (\cdot, t)*u ) (x) \end{equation}
where $p_{N,s}>0$ is chosen so that $\| P_s(\cdot, t) \|_{L^1(\mathbb{R}^N)} = 1$. Then \begin{equation} \label{eq:2.4}
-\diver \left( t^{1-2s} \nabla U \right)=0\quad{\rm in}\,\,\,\mathbb R^{N+1}_+, \quad
U(x,0)=u(x),
\quad
U \in H^1_{\mathrm{loc}} \left( \overline{\mathbb{R}^{N+1}_+} ,t^{1-2s}dX \right) \end{equation} and for each $\psi\in C^\infty_c(\overline{\mathbb R^{N+1}_+})$ with $\partial_t\psi(x,0)=0$, \begin{equation} \label{eq:2.5}
\begin{aligned}
-\lim_{t\to+0}\int_{\mathbb R^N}t^{1-2s}\partial_tU(x,t)\psi(x,t)\,dx
&=
\kappa_s\int_{\mathbb R^N}u(x)(-\Delta)^s\psi(x,0)\,dx
\\
&= \kappa_s \left\langle u , \psi (\cdot, 0) \right\rangle_{\dot{H}^s(\mathbb{R}^N)}.
\end{aligned} \end{equation} Here
\[
\begin{aligned}
& H^1_{\mathrm{loc}} \left( \overline{\mathbb{R}^{N+1}_+} ,t^{1-2s}dX \right)
\\
:= \ & \Set{ V : \mathbb{R}^{N+1}_+ \to \mathbb{R} |
\int_{B_R^+} t^{1-2s} \left\{ |\nabla V|^2 + V^2 \right\} dX < \infty
\ \ \text{for all $R > 0$}}
\end{aligned}
\] and \begin{equation} \label{eq:2.6} \kappa_s:=\frac{\Gamma(1-s)}{2^{2s-1}\Gamma(s)}. \end{equation} \end{itemize} \end{lemma}
\begin{remark}
\label{Remark:2.1}
\begin{enumerate}
\item
In Lemma \ref{Lemma:2.2} and Remark \ref{Remark:2.2},
we will see that \eqref{eq:2.5} holds for every $\psi \in C^\infty_c ( \overline{ \mathbb{R}^{N+1}_+} )$
if we assume that $u$ is a solution of \eqref{eq:1.1}.
\item
Here we collect properties on the Poisson kernel $P_s(x,t)$.
For the properties below, see, for instance, \cite{CS,DiNPV-12,FF,FF-15,JLX}.
First of all, it is known that the Fourier transform of $P_s(x,t)$ is given by
$\widehat{P}_s(\xi,t) = \theta_0 \left( 2\pi |\xi| t \right)$ where
\[
\begin{aligned}
& \wh{v} (\xi) := \int_{ \mathbb{R}^N} v(x) e^{ - 2\pi i x \cdot \xi } \, dx
\quad \text{for $v \in L^1 (\mathbb{R}^N)$},
\\
&\theta_0(t) : = \frac{2}{\Gamma(s)} \left( \frac{t}{2} \right)^s
K_s(t), \quad \theta_0'' + \frac{1-2s}{t} \theta_0' - \theta_0 = 0 \quad
\text{in} \ (0,\infty), \quad
\theta_0 (0) = 1,
\end{aligned}
\]
$K_s(t)$ is the modified Bessel function of the second kind with order $s$ and
\[
\kappa_s = \int_{0}^\infty t^{1-2s} \left\{ (\theta_0'(t))^2 + \theta_0^2(t) \right\} \,dt
= \frac{\Gamma(1-s)}{2^{2s-1} \Gamma(s) }.
\]
By these properties, it is possible to prove that for each $v \in H^s(\mathbb{R}^N)$,
\[
\int_{\mathbb{R}^{N+1}_+} t^{1-2s} \left| \nabla ( P_s (\cdot, t) \ast v )(x) \right|^2 \, dX
= \kappa_s \int_{\mathbb{R}^N} \left( 4\pi^2|\xi|^2 \right)^s \left| \wh{v} (\xi) \right|^2 \, d \xi
= \kappa_s \| v \|_{\dot{H}^s(\mathbb{R}^N)}^2
\]
and for $U(x,t) = (P_s(\cdot , t) \ast u)(x)$ and $u \in \dot{H}^s (\mathbb{R}^N)$,
\begin{equation}\label{eq:2.7}
\int_{\mathbb{R}^{N+1}_+} t^{1-2s} |\nabla U|^2 \, dX = \kappa_s \| u \|_{\dot{H}^s(\mathbb{R}^N)}^2.
\end{equation}
Furthermore, for each $\zeta \in C^1_c(\overline{\mathbb{R}^{N+1}_+})$,
\begin{equation}\label{eq:2.8}
\kappa_s \| \zeta(\cdot,0) \|_{\dot{H}^s(\mathbb{R}^N)}^2
= \int_{ \mathbb{R}^{N+1}_+} t^{1-2s} | \nabla ( P_s (\cdot, t) \ast \zeta(\cdot, 0) ) (x) |^2 \, dX
\leq \int_{ \mathbb{R}^{N+1}_+} t^{1-2s} | \nabla \zeta |^2 \, dX.
\end{equation}
Finally, for $\varphi \in C^\infty_{c} (\mathbb{R}^N)$,
\[
- \lim_{t \to +0} t^{1-2s} \partial_t
\left( P_s( \cdot, t ) \ast \varphi \right) (x) = \kappa_s \left( -\Delta \right)^s \varphi (x)
\quad \text{for any $x \in \mathbb{R}^N$}.
\]
\end{enumerate} \end{remark}
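The normalization in \eqref{eq:2.3} is consistent for all $t>0$ because the substitution $x = tz$ shows that $\int_{\mathbb{R}^N} t^{2s} (|x|^2+t^2)^{-\frac{N+2s}{2}} \, dx$ is independent of $t$, so a single constant $p_{N,s}$ makes $\| P_s(\cdot,t) \|_{L^1(\mathbb{R}^N)} = 1$ for every $t$. A minimal numerical sketch of this scaling invariance (pure Python, with the illustrative choice $N=1$, $s=3/4$; not part of the argument):

```python
def unnormalized_mass(t, s=0.75, L=200.0, n=200_000):
    # Midpoint rule for \int_{-L}^{L} t^{2s} / (x^2 + t^2)^{(1+2s)/2} dx  (N = 1)
    h = 2 * L / n
    return sum(t**(2*s) / (((-L + (i + 0.5) * h)**2 + t * t)**((1 + 2*s) / 2))
               for i in range(n)) * h

m1, m3 = unnormalized_mass(1.0), unnormalized_mass(3.0)
assert abs(m1 - m3) < 5e-3 * m1  # mass is (numerically) independent of t
```

The common value is $\int_{\mathbb{R}} (z^2+1)^{-\frac{1+2s}{2}}\,dz$, which is exactly $1/p_{1,s}$.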
\begin{proof}[Proof of Lemma \ref{Lemma:2.1}] (i) We first show $\left\langle u, \psi \right\rangle_{\dot{H}^s(\mathbb{R}^N)} \in \mathbb{R}$ and \eqref{eq:2.1}. Let $\psi \in C^\infty_{c}(\mathbb{R}^N)$ with $\supp \psi \subset B_R$ and $R \geq 1$.
Since $\psi \equiv 0$ on $B_R^c$ and $|x-y| \geq |y|/2$ for $x \in B_R$ and $y \in B_{2R}^c$, we see from $R \geq 1$ that
\[
\frac{1+|y|}{|x-y|} \leq 2\, \frac{1+|y|}{|y|} \leq 4
\quad \text{for each $x \in B_R$ and $y \in B^c_{2R}$}
\] and
\[
\begin{aligned}
& \int_{\mathbb{R}^N \times \mathbb{R}^N} \frac{\left| u(x) - u(y) \right| \left| \psi(x) - \psi(y) \right|}
{|x-y|^{N+2s}} \, dx \, d y
\\
= \ &
\left( \int_{B_{2R} \times B_{2R} } + 2 \int_{B_{2R} \times B_{2R}^c} \right)
\frac{\left| u(x) - u(y) \right| \left| \psi(x) - \psi(y) \right|}{|x-y|^{N+2s}} \, dx \, d y
\\
\leq \ &
\frac{2}{C_{N,s}} \| u \|_{\dot{H}^s(B_{2R})} \| \psi \|_{\dot{H}^s (B_{2R})}
\\
& \qquad
+ 2 \int_{ B_{2R}} dx
\int_{|y| \geq 2R}
\left( |u(x)| + |u(y)| \right) |\psi(x)| \left( 1 + |y| \right)^{-N-2s}
\left( \frac{1+|y|}{|x-y|} \right)^{N+2s} dy
\\
\leq \ &
\frac{2}{C_{N,s}} \| u \|_{\dot{H}^s(B_{2R})} \| \psi \|_{\dot{H}^s (B_{2R})}
\\
& \qquad
+ C (N,s)
\int_{ B_{2R}} dx \int_{ |y| \geq 2R}
\left( |u(x)| + |u(y)| \right) |\psi(x)| (1+|y|)^{-N-2s} \, dy
\\
\leq \ &
\frac{2}{C_{N,s}} \| u \|_{\dot{H}^s(B_{2R})} \| \psi \|_{\dot{H}^s (B_{2R})}
\\
&\quad
+ C (N,s)
\left\{ \| u \|_{L^2(B_{2R})} \| \psi \|_{L^2(B_{2R})}
+ \| \psi \|_{L^1(B_{2R})} \| u \|_{L^1(\mathbb{R}^N, (1+|x|)^{-N-2s} dx )}
\right\}
\end{aligned}
\] where $C_{N,s}$ is the constant given by \eqref{eq:1.3}.
Since $\| u \|_{H^s(B_{2R})} < \infty$ due to $u \in H^s_{\mathrm{loc}} (\mathbb{R}^N)$, we observe that $\left\langle u, \psi \right\rangle_{\dot{H}^s(\mathbb{R}^N)} \in \mathbb{R}$ and \eqref{eq:2.1} holds.
The assertion $\left\langle \varphi_n u , \psi \right\rangle_{\dot{H}^s(\mathbb{R}^N)} \to \left\langle u , \psi \right\rangle_{\dot{H}^s(\mathbb{R}^N)}$ as $n \to \infty$
follows from \eqref{eq:2.1} and $\| \varphi_n u - u \|_{L^1(\mathbb{R}^N , (1+|x|)^{-N-2s} dx )} \to 0$.
Finally we prove \eqref{eq:2.2}. We remark that due to Fall and Weth \cite[Lemma 2.1]{FW} and $\supp \psi \subset B_R$, there exists a $C_R>0$ such that
\[
\left| \int_{|x-y| > \varepsilon }
\frac{\psi(x) - \psi(y) }{|x-y|^{N+2s}} \, dy
\right| \leq C_R \| \psi \|_{C^2(\mathbb{R}^N)} \left( 1 + |x| \right)^{-N-2s}
\quad \text{for all $x \in \mathbb{R}^N$ and $\varepsilon \in (0,1)$}.
\]
Therefore, $\int_{\mathbb{R}^N} u (-\Delta )^s \psi \, dx \in \mathbb{R}$ by $u \in L^1(\mathbb{R}^N, (1+|x|)^{-N-2s}dx)$. Moreover, by the dominated convergence theorem,
\[
\begin{aligned}
\left\langle u, \psi \right\rangle_{\dot{H}^s(\mathbb{R}^N)}
&= \frac{C_{N,s}}{2} \lim_{\varepsilon \to 0}
\int_{ |x-y|> \varepsilon }
\frac{ \left( u(x) - u(y) \right) \left( \psi(x) - \psi (y) \right) }{|x-y|^{N+2s}} dx dy
\\
&= \lim_{\varepsilon \to 0} C_{N,s} \int_{\mathbb{R}^N}
\left[ u(x) \int_{|x-y| > \varepsilon} \frac{\psi(x) - \psi (y) }{|x-y|^{N+2s}} dy \right] dx
= \int_{ \mathbb{R}^N} u(x) \left( - \Delta \right)^s \psi (x) dx.
\end{aligned}
\] Hence, (i) holds.
(ii) Notice that $U$ is well-defined thanks to $u \in L^1(\mathbb{R}^N, (1+|x|)^{-N-2s}dx)$. We prove \eqref{eq:2.4}. The assertion $ -\diver (t^{1-2s} \nabla U) = 0$ in $\mathbb R^{N+1}_+$ follows from the fact $ -\diver (t^{1-2s} \nabla P_s ) = 0 $ in $\mathbb R^{N+1}_+$. It is also easily seen that $P_s (x, t) \to \delta_0$ as $t \to +0$, hence, $U(x,0) = u(x)$ for $u \in C^\infty_{c} (\mathbb{R}^N)$.
For general $u \in H^s_{\mathrm{loc}} (\mathbb{R}^N) \cap L^1(\mathbb{R}^N,(1+|x|)^{-N-2s}dx) $, the assertion follows from $U \in H^1_{\mathrm{loc}} ( \overline{ \mathbb{R}^{N+1}_+ }, t^{1-2s}dX ) $ (this will be proved below) and the existence of the trace operator $H^1_{\mathrm{loc}} (\overline{ \mathbb{R}^{N+1}_+}, t^{1-2s}dX ) \to H^s_{\mathrm{loc}} (\mathbb{R}^N)$ (see, for instance, Lions \cite{L-59}, and Demengel and Demengel \cite{DD-12}).
In order to prove $U \in H^1_{\mathrm{loc}} ( \overline{\mathbb{R}^{N+1}_+}, t^{1-2s}dX )$, we will show $U \in H^1( B_R \times (0,R),t^{1-2s}dX )$ for each $R > 0$. To this end, let $\varphi_1$ be as in (i), $n_0 > 2R$ and write
\[
U(x,t) = (P_s(\cdot, t)*u ) (x)
= ( P_s(\cdot,t)*( \varphi_{n_0} u + (1-\varphi_{n_0}) u ) ) (x)
=: U_1(x,t) + U_2(x,t).
\]
For $U_1$, we have $\nabla U_1 \in L^2(\mathbb{R}^{N+1}_+,t^{1-2s}dX)$ due to $\varphi_{n_0} u \in H^s(\mathbb{R}^N)$ and Remark \ref{Remark:2.1}.
In addition, Young's inequality and the fact $\| P_s (\cdot , t) \|_{L^1(\mathbb{R}^N)} = 1$ imply
\[
\begin{aligned}
\int_{0}^R dt\int_{\mathbb{R}^N} t^{1-2s}\left(P_s (\cdot , t) \ast (\varphi_{n_0} u )(x)\right)^2 \, dx
&\leq \int_{0}^R t^{1-2s} \| P_s (\cdot ,t ) \|_{L^1(\mathbb{R}^N)}^2 \| \varphi_{n_0} u \|_{L^2(\mathbb{R}^N)}^2
\, dt
\\
& = C_{R,s} \| \varphi_{n_0} u \|_{L^2(\mathbb{R}^N)}^2.
\end{aligned}
\] Hence, $U_1 \in H^1(B_R \times (0,R),t^{1-2s}dX)$ for every $R > 0$.
On the other hand, for $U_2$, thanks to $n_0 > 2R$, we may write
\[ U_2(x,t)
=p_{N,s} \int_{|y|\ge{2R}} \frac{t^{2s}}{\left( |x-y|^2 + t^2 \right)^{\frac{N+2s}{2}}}(1-\varphi_{n_0}(y)) u(y)\,dy.
\]
Noting $|x-y|\ge |y|/2$ for all $|y| \ge n_0$ and $|x| \le R$, we see that
\[
\left| U_2(x,t) \right| \leq
C t^{2s} \int_{ |y| \geq n_0} |u(y)| (1+|y|)^{-N-2s} \left( \frac{1+|y|}{|x-y|} \right)^{N+2s} dy
\leq C t^{2s} \| u \|_{L^1(\mathbb{R}^N,(1+|x|)^{-N-2s}dx)}
\] and that \begin{align*}
|\nabla U_2(x,t)| &
\le C \int_{|y| \geq n_0}
\left( \frac{t^{2s-1}}{|x-y|^{N+2s}} + \frac{t^{2s}}{|x-y|^{N+1+2s}} \right) |u(y)| \, dy \\ &
\le C \| u \|_{L^1(\mathbb{R}^N,(1+|x|)^{-N-2s}dx)} \left( t^{2s-1} + t^{2s} \right) \end{align*} for all $(x,t) \in B_R \times (0,R)$. This yields $U_2 \in H^1(B_R \times (0,R),t^{1-2s}dX)$, which implies $U=U_1 +U_2 \in H^1_{\mathrm{loc}} \left( \overline{\mathbb{R}^{N+1}_+},t^{1-2s}dX \right)$.
Next we prove \eqref{eq:2.5}. Let $\psi \in C^\infty_c(\overline{\mathbb R^{N+1}_+})$ with $\mathrm{supp}\, \psi \subset B_{R/2} \times [0,R] $ and $\partial_t \psi(x,0) = 0$. Then, by \eqref{eq:2.3} and Fubini's theorem, we have \begin{equation} \label{eq:2.9} \begin{aligned} \hspace{-7pt} -\int_{\mathbb R^N}t^{1-2s}\partial_t U(x,t)\psi(x,t)\,dx &= -t^{1-2s}\int_{\mathbb R^N}\partial_t
\left( \int_{\mathbb R^N} \frac{ p_{N,s}t^{2s} u(y) }{(|x-y|^2+t^2)^{\frac{N+2s}{2}}}\,dy \right)\,\psi(x,t)\,dx \\ & = - t^{1-2s}\int_{\mathbb R^N}
\partial_t \left[ \int_{\mathbb R^N} \frac{p_{N,s} t^{2s} \psi(x,t) }{(|x-y|^{2}+t^2)^{\frac{N+2s}{2}}}\,dx \right]u(y)\,dy \\ & \hspace{1cm}
+ t^{1-2s} \int_{\mathbb R^N}\left[\int_{\mathbb R^N} \frac{p_{N,s} t^{2s}\partial_t \psi(x,t) }{ (|x-y|^2 + t^2)^{\frac{N+2s}{2}}}\,dx\right] u(y)\,dy \\ & =\int_{\mathbb R^N}\bigg(I_1(y,t)+I_2(y,t)\bigg)u(y)\, dy \end{aligned} \end{equation} where \begin{align*}
I_1(y,t)&:=- t^{1-2s}\partial_t \left[ \int_{\mathbb R^N} \frac{p_{N,s} t^{2s} \psi(x,t) }{(|x-y|^{2}+t^2)^{\frac{N+2s}{2}}}\,dx \right], \\
I_2(y,t)&:=t^{1-2s}\int_{\mathbb R^N} \frac{p_{N,s} t^{2s}\partial_t \psi(x,t) }{ (|x-y|^2 + t^2)^{\frac{N+2s}{2}}}\,dx. \end{align*} From $\psi (x,0) \in C^\infty_c(\mathbb{R}^N)$ and Remark \ref{Remark:2.1}, we observe that
\begin{equation}\label{eq:2.10}
-\lim_{t\to +0}t^{1-2s}\partial_t (P_s(\cdot,t) \ast \psi(\cdot,0) ) (x)=\kappa_s(-\Delta)^s\psi(x,0)
\quad \text{for $x \in \mathbb{R}^N$}.
\end{equation} Notice also that \begin{equation} \label{eq:2.11} \begin{aligned} & I_1(y,t) \\ = \ &- t^{1-2s} \left\{\partial_t \left[ \int_{\mathbb R^N} \frac{p_{N,s} t^{2s} \left(\psi(x,t) - \psi(x,0) \right) }
{(|x-y|^2+t^2)^{\frac{N+2s}{2}}}\,dx \right]\right. \\ & \hspace{6cm} + \left.
\partial_t \left[ \int_{\mathbb R^N} \frac{p_{N,s} t^{2s} \psi(x,0) }{(|x-y|^{2}+t^2)^{\frac{N+2s}{2}}}\,dx \right] \right\} \\ = \ & - I_3(y,t) - t^{1-2s}\partial_t (P_s(\cdot,t) \ast \psi(\cdot,0) ) (y) \end{aligned} \end{equation} where
\[ I_3(y,t):=-t^{1-2s}\partial_t \left[ \int_{\mathbb R^N} \frac{p_{N,s} t^{2s} \left(\psi(x,t) - \psi(x,0) \right) }
{(|x-y|^{2}+t^2)^{\frac{N+2s}{2}}}\,dx \right].
\] Thus, if we may prove that for all $y \in \mathbb{R}^N$
\begin{align}
\label{align:2.12}
&\lim_{t\to+0}\bigg(\left|I_2(y,t)\right|+ \left|I_3(y,t)\right|\bigg)=0 \quad
\text{strongly in $L^\infty(\mathbb{R}^N)$},
\\
\label{align:2.13}
&
\begin{aligned}
&|I_2(y,t)|+|I_3(y,t)|+|t^{1-2s}\partial_t (P_s(\cdot,t) \ast \psi(\cdot,0) ) (y)|\le C|y|^{-N-2s}
\\
&\hspace{6cm}\text{for each $|y| \geq 2R$ and $t \in (0,1]$},
\end{aligned}
\end{align} then we have \eqref{eq:2.5} by applying the dominated convergence theorem with
$u \in H^s_{\mathrm{loc}} (\mathbb{R}^N) \cap L^1(\mathbb{R}^N,(1+|x|)^{-N-2s}dx) $, and \eqref{eq:2.10}--\eqref{align:2.13} to \eqref{eq:2.9}.
We first deal with \eqref{align:2.12}. Recall that $ {\rm supp}\, \psi \subset B_{R}$ and $\partial_t \psi (x,0) \equiv 0$. Then,
\[
\begin{aligned}
&
\frac{\psi(x,t) - \psi(x,0)}{t}
= \int_{0}^{1} \partial_t \psi(x,t\theta) \,d\theta
= \int_0^1 \left( \partial_t \psi (x,t \theta) - \partial_t \psi (x,0) \right) d \theta
=:\Psi (x,t) \in C^\infty(\overline{\mathbb R^{N+1}_+}),
\\
&
\Psi(x,0)=0, \quad |\Psi(x,t)| \le \varphi_R(x) t,
\quad
\left| \partial_t \Psi(x,t) \right| \le \varphi_R(x)
\quad \text{for each $(x,t) \in \mathbb R^N \times [0,1]$}
\end{aligned}
\] where $\varphi_R \in C_c(B_{3R/2})$.
Hence, we observe from $(|x-y|^2 + t^2)^{1/2} \geq t$ that \begin{equation} \label{eq:2.14} \begin{aligned} &
|I_3(y,t)| \\ = \ &
\left|t^{1-2s} \partial_t
\left[ \int_{\mathbb R^N} \frac{p_{N,s} t^{1+2s} \Psi(x,t) }{(|x-y|^2+t^2)^{\frac{N+2s}{2}}} \,dx \right]
\right| \\ \le \ &
C \int_{\mathbb R^N} t \left| \frac{ \Psi(x,t) }{(|x-y|^2+t^2)^{\frac{N+2s}{2}}} \right| \,dx
+ t^2 \int_{\mathbb R^N} \left| \frac{p_{N,s} \partial_t\Psi(x,t) }{(|x-y|^2+t^2)^{\frac{N+2s}{2}}} \right| \,dx \\ & + C t^2 \int_{ \mathbb{R}^N}
\left| \frac{\Psi(x,t)}{ (|x-y|^2 + t^2)^{ \frac{N+2s}{2} + \frac{1}{2} } } \right| \, dx \\ \le \ &
C \int_{\mathbb R^N} t^2 \frac{\varphi_R(x) }{(|x-y|^2+t^2)^{\frac{N+2s}{2}}} \,dx
= t^{2-2s} \int_{\mathbb R^N} \frac{\varphi_R(y-tz) }{(|z|^2 +1)^{\frac{N+2s}{2}}} \,dz
\leq C \| \varphi_R \|_{L^\infty(\mathbb{R}^N)} t^{2-2s} \end{aligned} \end{equation} where $C> 0$ is independent of $y \in \mathbb{R}^N$. Hence, $I_3(\cdot,t) \to 0$ in $L^\infty(\mathbb{R}^N)$ as $ t \to 0$.
Since $|\partial_t \psi(x,t) | \le t\varphi_R(x)$ for all $(x,t) \in \mathbb R^N \times [0,1]$ due to $\partial_t\psi(x,0)=0$, in a similar way, we can check that $I_2(\cdot,t) \to 0$ in $L^\infty(\mathbb{R}^N)$ as $ t \to 0$ and \eqref{align:2.12} holds.
Next, we treat \eqref{align:2.13}.
Let $|y| \geq 2R$ and consider $I_3$. Noting $|x-y|\ge|y|-|x|\ge |y|/4 $ for all $|y|\ge 2R$ and $|x|\le 3R/2$, by \eqref{eq:2.14} we see that \begin{equation} \label{eq:2.15} \begin{aligned}
|I_3(y,t)| & \le
C\int_{\mathbb R^N} \frac{t^2 \varphi_R(x) }{( |x-y|^2 + t^2 )^{\frac{N+2s}{2}}} \,dx \\ &
\le C\int_{|x|\le{3R/2}} t^2 \| \varphi_R \|_{L^\infty(\mathbb{R}^N)} |x- y|^{-N-2s} \, dx
\le C t^2 |y|^{-N-2s}. \end{aligned} \end{equation} In a similar way, we may prove \begin{equation} \label{eq:2.16}
|I_2(y,t)|\le C t^2 |y|^{-N-2s}. \end{equation} Furthermore, applying the change of variables and the integration by parts and noting $\supp \psi \subset B_R$, we obtain \begin{equation} \label{eq:2.17} \begin{aligned} t^{1-2s}\partial_t (P_s(\cdot,t) \ast \psi(\cdot,0) ) (y) & =
t^{1-2s} \partial_t \left[ \int_{\mathbb R^N} \frac{p_{N,s} \psi(y-tz,0) }{(|z|^2+1)^{\frac{N+2s}{2}}} \,dz \right] \\ & =
-t^{1-2s} \int_{\mathbb R^N} \frac{p_{N,s} \nabla_y \psi(y-tz,0) \cdot z }{(|z|^2+1)^{\frac{N+2s}{2}}} \,dz \\ & =
\int_{|x|\le R}\frac{p_{N,s} \nabla_x \psi(x,0) \cdot (x-y) }{(|x-y|^2+t^2)^{\frac{N+2s}{2}}} \,dx \\ &
=-\int_{|x|\le R}
p_{N,s} \psi(x,0) \mathrm{div}_x \left[ \frac{x-y}{(|x-y|^2+t^2)^{\frac{N+2s}{2}}} \right] \,dx. \end{aligned} \end{equation}
Since it is easily seen that for $|y| \ge 2R$
\[
\int_{|x|\le R} |\psi(x,0) |
\left| \mathrm{div}_x \left[ \frac{x-y}{(|x-y|^2+t^2)^{\frac{N+2s}{2}}} \right] \right| \,dx
\le C\int_{|x|\le R} \left| x - y \right|^{ -N-2s } \,dx \le C |y|^{-N-2s},
\] by \eqref{eq:2.17}, we have
\[
\left| t^{1-2s} \partial_t \left( P_s(\cdot , t) \ast \psi (\cdot , 0) \right) (y) \right| \leq C |y|^{-N-2s}.
\] This together with \eqref{eq:2.15} and \eqref{eq:2.16} implies \eqref{align:2.13}, which completes the proof of Lemma~\ref{Lemma:2.1}. \end{proof}
\par Applying Lemma~\ref{Lemma:2.1}, we have the following.
\begin{lemma} \label{Lemma:2.2}
Let $u \in H^s_{\mathrm{loc}} (\mathbb{R}^N) \cap L^\infty_{\rm loc} (\mathbb{R}^N) \cap L^1(\mathbb{R}^N, (1+|x|)^{-N-2s} dx )$ be a solution of \eqref{eq:1.1} under \eqref{eq:1.2}, and $U$ be the function given in \eqref{eq:2.3}. Then \begin{equation} \label{eq:2.18}
\begin{aligned}
\int_{\mathbb R^{N+1}_+}t^{1-2s}\nabla U \cdot\nabla\psi\,dX
&=\kappa_s\int_{\mathbb R^N}|x|^\ell|u|^{p-1}u\psi(x,0)\, dx
= \kappa_s \left\langle u , \psi (\cdot,0) \right\rangle_{\dot{H}^s(\mathbb{R}^N)}
\end{aligned} \end{equation} for every $ \psi \in C^1_c ( \overline{\mathbb{R}^{N+1}_+} )$ where $\kappa_s$ is the constant given in \eqref{eq:2.6}. \end{lemma}
\begin{remark}\label{Remark:2.2}
Since $U \in C^\infty(\mathbb{R}^{N+1}_+)$, by \eqref{eq:2.4}, \eqref{eq:2.18} and the integration by parts,
for every $\psi \in C^\infty_{c}( \overline{\mathbb{R}^{N+1}_+} )$, we have
\[
\begin{aligned}
- \lim_{\tau \to +0} \int_{\mathbb{R}^N} \tau^{1-2s} \partial_t U (x,\tau) \psi(x,\tau) \, dx
&= \lim_{\tau \to +0} \int_{ \mathbb{R}^N \times (\tau,\infty)} t^{1-2s} \nabla U \cdot \nabla \psi \, dX
\\
&= \kappa_s \int_{\mathbb{R}^N} |x|^\ell |u|^{p-1} u \psi(x,0) \, dx
= \kappa_s \left\langle u , \psi (\cdot, 0) \right\rangle_{\dot{H}^s(\mathbb{R}^N)}.
\end{aligned}
\]
Hence, for solutions $u \in \dot{H}^s (\mathbb{R}^N) \cap L^\infty_{\rm loc} (\mathbb{R}^N) \cap L^1(\mathbb{R}^N,(1+|x|)^{-N-2s}dx) $
of \eqref{eq:1.1}, the corresponding extension problem is
\[
\left\{\begin{aligned}
-\diver \left( t^{1-2s} \nabla U \right) &=0 & &\text{in} \ \mathbb{R}^{N+1}_+, \\
U(x,0) &= u(x) & &\text{on} \ \mathbb{R}^N,\\
- \lim_{t \to +0} t^{1-2s} \partial_t U(x,t) &= \kappa_s |x|^\ell |u|^{p-1} u
& &\text{on} \ \mathbb{R}^N.
\end{aligned}\right.
\]
\end{remark}
\begin{proof}[Proof of Lemma \ref{Lemma:2.2}] We first prove \eqref{eq:2.18} for $\psi \in C^\infty_c ( \overline{\mathbb{R}^{N+1}_+} )$ with $\partial_t \psi(x,0) \equiv 0$ on $\mathbb{R}^N$. Let $\varepsilon>0$, $u$ be a solution of \eqref{eq:1.1} and $\psi \in C^\infty_c ( \overline{ \mathbb{R}^{N+1}_+ } )$ with $\partial_t \psi(x,0) = 0$. Then we have \begin{align*} 0 & = \int_{\mathbb R^N \times (\varepsilon,\infty)} - \diver \left(t^{1-2s} \nabla U \right) \psi (x,t) \,dX \\ & = \int_{\mathbb R^N \times (\varepsilon,\infty)} t^{1-2s} \nabla U \cdot \nabla \psi \,dX + \int_{\mathbb R^N} \varepsilon^{1-2s} \partial_t U(x,\varepsilon) \psi(x,\varepsilon) \,dx. \end{align*} By letting $\varepsilon \to 0$ and using \eqref{eq:2.5}, it follows that \[ \int_{\mathbb R^{N+1}_+} t^{1-2s} \nabla U \cdot \nabla \psi \,dX = \kappa_s \int_{\mathbb R^N} u(x) (-\Delta)^s \psi(x,0) \,dx. \] Since $u$ is a solution of \eqref{eq:1.1}, Lemma \ref{Lemma:2.1} yields
\[
\int_{\mathbb R^{N+1}_+} t^{1-2s} \nabla U \cdot \nabla \psi \,dX
= \kappa_s \left\langle u , \psi (\cdot, 0) \right\rangle_{\dot{H}^s(\mathbb{R}^N)}
= \kappa_s \int_{\mathbb{R}^N} |x|^\ell |u|^{p-1} u \psi(x,0) \, dx
\] for every $\psi \in C^\infty_c (\overline{\mathbb{R}^{N+1}_+} )$ with $\partial_t \psi (x,0) \equiv 0$.
Next, we show \eqref{eq:2.18} for $\psi \in C^\infty_c ( \overline{\mathbb{R}^{N+1}_+} )$. Let $\psi \in C^\infty_c ( \overline{\mathbb R^{N+1}_+} )$ and $(\rho_\varepsilon(t))_\varepsilon$ be a mollifier in $t$ with $\rho_\varepsilon(-t) = \rho_\varepsilon(t)$. Set
\[ \Psi(x,t) := \left\{ \begin{aligned} & \psi(x,t) & &\text{if $t\geq 0$}, \\ & \psi(x,-t) & &\text{if $t < 0$}. \end{aligned} \right.
\] Then $\Psi \in C^\infty( \mathbb R^{N+1} \setminus \{ t=0 \} ) \cap W^{1,\infty} (\mathbb R^{N+1})$ and $\Psi(\cdot,t) \in C^\infty_c(\mathbb R^N)$ for every $t\in \mathbb{R}$. Define $\Psi_\varepsilon$ by
\[ \Psi_\varepsilon(x,t) := \int_{\mathbb R} \rho_\varepsilon (t-\tau) \Psi(x,\tau) \,d\tau = \int_{\mathbb R} \rho_\varepsilon (\tau) \Psi(x, t - \tau) \,d\tau.
\] It is easily seen that $\Psi_\varepsilon \in C^\infty_c ( \mathbb R^{N+1} )$ with $\partial_t \Psi_\varepsilon(x,0) = 0$ thanks to the symmetry of $\Psi$ and $\rho_\varepsilon$ in $t$. Since it holds that for any $k\in\mathbb N$ and $t_1>0$
\begin{equation}\label{eq:2.19}
\lim_{\varepsilon\to 0}\bigg(\| \Psi_\varepsilon (\cdot, 0) - \psi(\cdot,0) \|_{C^k(\mathbb R^N)}
+ \sup_{ t \ge t_1 >0 } \| \Psi_\varepsilon (\cdot, t) - \psi(\cdot,t) \|_{C^k(\mathbb R^N)}\bigg)=0
\end{equation}
and $\| \Psi_\varepsilon \|_{W^{1,\infty}(\mathbb R^{N+1}_+)} \le M_0$ for some $M_0>0$, we deduce that \begin{equation} \label{eq:2.20} \Psi_\varepsilon \to \psi \quad \text{strongly in $H^1(\mathbb R^{N+1}_+, t^{1-2s}dX)$}. \end{equation} Therefore, from
\[ \int_{\mathbb R^{N+1}_+} t^{1-2s} \nabla U \cdot \nabla \Psi_\varepsilon \,dX
= \kappa_s \int_{\mathbb R^N} |x|^\ell |u|^{p-1}u \Psi_\varepsilon(x,0) \,dx
\] with \eqref{eq:2.19}, \eqref{eq:2.20} and $u \in L^\infty_{\rm loc} (\mathbb{R}^N)$, letting $\varepsilon \to 0$ we obtain \eqref{eq:2.18} for all $\psi \in C^\infty_c ( \overline{\mathbb{R}^{N+1}_+} )$. Finally, since functions in $C^1_c ( \overline{ \mathbb{R}^{N+1}_+} )$ may be approximated by functions in $C^\infty_c ( \overline{ \mathbb{R}^{N+1}_+} )$ in the $C^1( \overline{ \mathbb{R}^{N+1}_+} )$ sense, \eqref{eq:2.18} holds for every $\psi \in C^1_c (\overline{ \mathbb{R}^{N+1}_+} )$, which completes the proof. \end{proof}
\subsection{Local Regularity and the Pohozaev identity} \label{section:2.2}
In this subsection we recall local regularity estimates for the extension problem in Remark \ref{Remark:2.2}, which are taken from \cite[Section~3.1]{FF} (see also \cite{CaSi,JLX}). Furthermore, we prove the Pohozaev identity for solutions to the extension problem.
\par We first recall local regularity estimates.
\begin{proposition} \label{Proposition:2.1} {\rm(Fall and Felli \cite[Proposition~3.2, Lemma~3.3]{FF}, Jin, Li and Xiong \cite[Proposition~2.6]{JLX}, Cabr\`e and Sire \cite[Lemma~4.5]{CaSi})} Let $R_0>0$, $x_0 \in \mathbb{R}^N$, $g(x,u) : B_{4R_0}(x_0) \times \mathbb{R} \to \mathbb{R} $ and let $W \in H^1(B^+_{4R_0} (x_0,0), t^{1-2s}dX)$ be a weak solution to
\[
\left\{\begin{aligned}
-\diver \left( t^{1-2s} \nabla W \right) &= 0
& &\mathrm{in} \ B_{4R_0}^+(x_0,0),
\\
- \lim_{t \to +0} t^{1-2s} \partial_t W (x,t)
&= \kappa_s g(x, W(x,0) ) & &\mathrm{on} \ B_{4R_0}(x_0),
\end{aligned}\right.
\] that is, for all $\varphi \in C^\infty_{c}( B_{4R_0}^+(x_0,0) \cup B_{4R_0} (x_0) )$,
\[
\int_{B_{4R_0}^+(x_0)} t^{1-2s} \nabla W \cdot \nabla \varphi \, d X
= \kappa_s \int_{B_{4R_0}(x_0)} g \left( x,W(x,0) \right) \varphi(x,0) \, dx.
\] \begin{enumerate}
\item[{\rm (i)}]
Suppose that $g(x,u) := a(x) u + b(x)$ with $a,b \in L^q(B_{4R_0}(x_0))$ for some $q > N/(2s)$.
For any $\mu > 0$ there exists a $C=C(N,s,q, \mu , \| a \|_{L^q(B_{4R_0} (x_0) )} )$ such that
\[
\left\| W \right\|_{L^\infty ( B_{2R_0}^+(x_0,0) ) }
\leq C \left[ \| W \|_{L^{\mu} ( B_{3R_0}^+(x_0,0) ) } + \| b \|_{L^{q}(B_{4R_0} (x_0) )} \right].
\]
In addition, there exists an $\alpha \in (0,1)$ such that $W \in C^{\alpha} ( \overline{B_{R_0}^+(x_0,0)} )$
and
\[
\| W \|_{C^{\alpha} ( \overline{B_{R_0}^+(x_0,0)} ) }
\leq C \left[ \| W \|_{L^\infty (B_{2R_0}^+(x_0,0)) } + \| b \|_{L^q(B_{4R_0}(x_0))} \right].
\]
\item[{\rm (ii)}]
Suppose that $W \in C^\alpha ( \overline{B_{2R_0}^+ (x_0,0) } )$ and
$g(x,u ) \in C^1( \overline{B_{4R_0} (x_0)} \times \mathbb{R} )$ for some $\alpha \in (0,1)$.
Then there exist $\beta \in (0,1)$ and
$C = C(N,s, \| g \|_{C^1( \overline{B_{2R_0}^+(x_0,0)} \times [ -A , A ] )})$
where $A := \| W \|_{C^\alpha( \overline{B_{2R_0}^+ (x_0,0)} ) }$ such that
$\nabla_x W \in C^\beta ( \overline{B_{R_0}^+(x_0,0)})$ and
$ t^{1-2s} \partial_t W \in C^\beta ( \overline{B_{R_0}^+(x_0,0)})$ with
\[
\left\| \nabla_x W \right\|_{C^\beta ( \overline{B_{R_0}^+(x_0,0)})}
+ \left\| t^{1-2s} \partial_t W \right\|_{C^\beta ( \overline{B_{R_0}^+(x_0,0)})}
\leq C.
\] \end{enumerate} \end{proposition}
For solutions of \eqref{eq:1.1}, we have
\begin{lemma} \label{Lemma:2.3}
Let $u \in H^s_{\mathrm{loc}} (\mathbb{R}^N) \cap L^\infty_{\rm loc} (\mathbb{R}^N) \cap L^1(\mathbb{R}^N, (1+|x|)^{-N-2s}dx) $ be a solution of \eqref{eq:1.1} under \eqref{eq:1.2} and $U$ be the function given in \eqref{eq:2.3}. Then for each $R>1$ there exists an $\alpha_R \in (0,1)$ such that $U \in C^{\alpha_R} ( \overline{B_R^+} )$ and $\nabla_x U , t^{1-2s} \partial_t U \in C^{\alpha_R} ( \overline{(B_R \setminus B_{1/R}) \times (0,R)} )$. As a consequence, combined with Remark \ref{Remark:2.2},
\[
- \lim_{t\to+0}t^{1-2s} \partial_t U(x,t)
=\kappa_s |x|^\ell |u(x)|^{p-1} u(x)\quad\mbox{in}\quad C_{\rm loc} (\mathbb{R}^N \setminus \{0\}).
\] \end{lemma}
\begin{proof} By $u \in L^\infty_{\rm loc} (\mathbb{R}^N)$ and \eqref{eq:1.2}, we find some $q > N/(2s)$ such that
$a(x) := |x|^\ell |u(x)|^{p-1} \in L^q(B_{4R})$ for each $R > 0$. Thus, we may apply Proposition \ref{Proposition:2.1} (i) to $U$ with this $a(x)$, and there exists an $\alpha_R \in (0,1)$ such that $U \in C^{\alpha_R}( \overline{ B_R^+} )$.
Next, notice that $g(x , u ) := |x|^\ell | u |^{p-1} u \in C^1( \overline{ B_R\setminus B_{1/R}} \times \mathbb{R} )$. Therefore, we may apply Proposition \ref{Proposition:2.1} (ii) and obtain the desired result.
\end{proof}
Next we prove the following Pohozaev identity.
\begin{proposition} \label{Proposition:2.2}
Let $u \in H^s_{\mathrm{loc}} (\mathbb{R}^N) \cap L^\infty_{\rm loc} (\mathbb{R}^N) \cap L^1(\mathbb{R}^N, (1+|x|)^{-N-2s}dx) $ be a solution of \eqref{eq:1.1} with \eqref{eq:1.2}, and $U$ be the function given in \eqref{eq:2.3}. Then for all $R>0$, there holds \begin{equation} \label{eq:2.21} \begin{aligned} &
-\frac{N-2s}{2}\left[\int_{B^+_R}t^{1-2s}|\nabla U|^2\,dX
-\frac{2\kappa_s}{N-2s}\frac{N+\ell}{p+1}\int_{B_R}|x|^\ell|u|^{p+1}\,dx\right] \\ & \quad
+\frac{R}{2}\left[\int_{S^+_R}t^{1-2s}|\nabla U|^2\,dS
-\frac{2\kappa_s}{p+1}\int_{S_R}|x|^\ell|u|^{p+1}\,d \omega \right]
=R\int_{S^+_R}t^{1-2s}\bigg|\frac{\partial U}{\partial \nu}\bigg|^2\,dS \end{aligned} \end{equation} and \begin{equation} \label{eq:2.22}
\int_{B^+_R}t^{1-2s}|\nabla U|^2\,dX
-\kappa_s\int_{B_R}|x|^\ell|u|^{p+1}\,dx =\int_{S^+_R}t^{1-2s}\frac{\partial U}{\partial \nu}U\,dS. \end{equation}
Here $\nu = X/|X|$ is the unit outer normal vector of $S_R^+$ at $X$ and $\kappa_s$ the constant given in \eqref{eq:2.6}. \end{proposition}
\begin{proof} We follow the argument in \cite[Proof of Theorem~3.7]{FF}. Let $u$ be a solution of \eqref{eq:1.1} and $U$ the function given in \eqref{eq:2.3}, and we take any $R>0$. Then, by \eqref{eq:2.4} we have \begin{equation} \label{eq:2.23}
\frac{N-2s}{2}t^{1-2s}|\nabla U|^2
= \diver \left( \frac{1}{2}t^{1-2s}|\nabla U|^2X-t^{1-2s}(X\cdot\nabla U)\nabla U \right) \end{equation} for $X\in B^+_R$. Let $\rho<R$. Then, integrating \eqref{eq:2.23} over the set
\[ \mathcal O_\delta:= (B^+_R\setminus\overline{B^+_\rho})\cap
\Set{X = (x,t) \in \mathbb R^{N+1}_+ | t>\delta}
\] with $\delta \in (0,\rho)$ and writing $B_{R,\rho, \delta} := B_{ \sqrt{R^2-\delta^2} } \setminus B_{ \sqrt{\rho^2 - \delta^2} } \subset \mathbb{R}^N$, we have \begin{equation} \label{eq:2.24} \begin{aligned}
\frac{N-2s}{2}\int_{\mathcal O_\delta}t^{1-2s}|\nabla U|^2\,dX
&= \int_{\mathcal O_\delta} \diver \left( \frac{1}{2}t^{1-2s}|\nabla U|^2X-t^{1-2s} (X\cdot\nabla U)\nabla U \right)\,dX \\ &= -\frac{1}{2}\delta^{2-2s}\int_{ B_{R,\rho,\delta} }
|\nabla U(x,\delta)|^2\,dx
+\delta^{2-2s}\int_{ B_{R,\rho,\delta} } |U_t(x,\delta)|^2\,dx \\ & \quad +\delta^{1-2s}\int_{ B_{R,\rho,\delta} } (x\cdot\nabla_xU(x,\delta))U_t(x,\delta)\,dx \\
& \quad +\frac{R}{2}\int_{S^+_R\cap [t>\delta] }t^{1-2s}|\nabla U|^2\,dS
-R\int_{S^+_R\cap [t>\delta ]}t^{1-2s}\bigg|\frac{\partial U}{\partial\nu}\bigg|^2\,dS \\ & \quad
-\frac{\rho}{2}\int_{S^+_\rho\cap [t>\delta] }t^{1-2s}|\nabla U|^2\,dS
+\rho\int_{S^+_\rho\cap [t>\delta] }t^{1-2s}\bigg|\frac{\partial U}{\partial\nu}\bigg|^2\,dS. \end{aligned} \end{equation} Now we claim that there exists a sequence $\delta_n\to0$ such that \begin{equation} \label{eq:2.25}
\lim_{n\to\infty}\bigg[\frac{1}{2}\delta_n^{2-2s}\int_{B_R}|\nabla U(x,\delta_n)|^2\,dx
+\delta_n^{2-2s}\int_{B_R}|U_t(x,\delta_n)|^2 \,dx \bigg]=0. \end{equation} In fact, if there is no such sequence, then there exists a $C>0$ such that
\[
\liminf_{\delta\to0}\bigg[\frac{1}{2}\delta^{2-2s}\int_{B_R}|\nabla U(x,\delta)|^2\,dx
+\delta^{2-2s}\int_{B_R}|U_t(x,\delta)|^2 \, dx \bigg]\ge C
\] and thus there exists a $\delta_0>0$ such that for all $\delta \in (0,\delta_0)$,
\[
\frac{1}{2}\delta^{1-2s}\int_{B_R}|\nabla U(x,\delta)|^2\,dx
+\delta^{1-2s}\int_{B_R}|U_t(x,\delta)|^2 \, dx \ge\frac{C}{2\delta}.
\] Since $U\in H^1_{\mathrm{loc}} ( \overline{\mathbb R^{N+1}_+} ,t^{1-2s}dX)$, integrating the above inequality in $\delta$ over $(0,\delta_0)$, we have a contradiction and \eqref{eq:2.25} holds.
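Indeed, the left-hand side of the last inequality is integrable in $\delta$ over $(0,\delta_0)$, while the right-hand side is not:
\[
\int_0^{\delta_0} \left[ \frac{1}{2}\delta^{1-2s}\int_{B_R}|\nabla U(x,\delta)|^2\,dx
+\delta^{1-2s}\int_{B_R}|U_t(x,\delta)|^2 \,dx \right] d\delta
\leq \frac{3}{2} \int_{B_R \times (0,\delta_0)} t^{1-2s} |\nabla U|^2 \,dX < \infty,
\]
whereas $\int_0^{\delta_0} (2\delta)^{-1} C \, d\delta = \infty$.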
On the other hand, by Lemma \ref{Lemma:2.3}, we see that \begin{equation} \label{eq:2.26} \begin{aligned} \lim_{\delta\to0}\delta^{1-2s}\int_{ B_{R,\rho,\delta} } (x\cdot\nabla_xU(x,\delta))U_t(x,\delta)\,dx
=-\kappa_s\int_{B_R\setminus B_\rho}(x\cdot\nabla_xu)|x|^\ell|u|^{p-1}u\,dx. \end{aligned} \end{equation} By \eqref{eq:2.24}--\eqref{eq:2.26} and replacing $\mathcal O_\delta$ with $\mathcal O_{\delta_n}$ for a sequence $\delta_n\to 0$, we conclude that \begin{equation} \label{eq:2.27} \begin{aligned} \hspace{-5pt}
\frac{N-2s}{2}\int_{B^+_R\setminus B^+_\rho}t^{1-2s}|\nabla U|^2\,dX & =
-\kappa_s\int_{B_R\setminus B_\rho}(x\cdot\nabla_xu)|x|^\ell|u|^{p-1}u\,dx \\ & \qquad
+\frac{R}{2}\int_{S^+_R}t^{1-2s}|\nabla U|^2\,dS
-R\int_{S^+_R}t^{1-2s}\left|\frac{\partial U}{\partial\nu}\right|^2\,dS \\ & \qquad\quad
-\frac{\rho}{2}\int_{S^+_\rho}t^{1-2s}|\nabla U|^2\,dS
+\rho\int_{S^+_\rho}t^{1-2s}\left|\frac{\partial U}{\partial\nu}\right|^2\,dS. \end{aligned} \end{equation} Furthermore, integration by parts yields \begin{equation} \label{eq:2.28} \begin{aligned} &
- \kappa_s \int_{B_R\setminus B_\rho}(x\cdot\nabla_xu)|x|^\ell|u|^{p-1}u\,dx \\
= \ & - \kappa_s \int_{B_R\setminus B_\rho}|x|^\ell x\cdot\nabla_x \left(\frac{|u|^{p+1}}{p+1} \right)\,dx \\
= \ & \kappa_s \frac{N+\ell}{p+1}\int_{B_R\setminus B_\rho}|x|^\ell|u|^{p+1}\,dx \\ &\hspace{3cm}
-\frac{\kappa_s}{p+1}\int_{S_R}|x|^{\ell+1}|u|^{p+1}\, d \omega
+\frac{\kappa_s}{p+1}\int_{S_\rho}|x|^{\ell+1}|u|^{p+1}\,d \omega. \end{aligned} \end{equation} Similarly to \eqref{eq:2.25}, since $U\in H^1_{\mathrm{loc}} ( \overline{\mathbb R^{N+1}_+} ,t^{1-2s}dX)$ and $u\in L^\infty_{\rm loc}(\mathbb R^N)$, by \eqref{eq:1.2}, we may prove that there exists a sequence $\rho_n\to0$ such that
\[
\lim_{n\to\infty}\left[\rho_n\int_{S^+_{\rho_n}}t^{1-2s}|\nabla U|^2\,dS
+\rho_n\int_{S^+_{\rho_n}}t^{1-2s}\left|\frac{\partial U}{\partial\nu}\right|^2\,dS
+\int_{S_{\rho_n}}|x|^{\ell+1}|u|^{p+1}\,d \omega \right]=0.
\] Therefore, taking $\rho=\rho_n$ and letting $n\to\infty$ in \eqref{eq:2.27} and \eqref{eq:2.28}, we have \eqref{eq:2.21}.
On the other hand, since
\[
- \diver \left( t^{1-2s} \nabla U \right) \varphi = 0 \quad
\text{in} \ \widetilde{\mathcal{O}}_\delta := B_R^+ \cap \Set{ X \in \mathbb{R}^{N+1}_+ | t > \delta }
\] for any $\varphi \in C^\infty ( \overline{ B_R^+ } )$, integration by parts gives
\[
\int_{ \widetilde{\mathcal{O}}_\delta }
t^{1-2s} \nabla U \cdot \nabla \varphi \, dX
= \int_{S_R^+ \cap [ t > \delta ] } t^{1-2s} \frac{\partial U}{\partial \nu} \varphi \, d S
- \int_{B_{\sqrt{R^2-\delta^2}}} \delta^{1-2s} \frac{\partial U}{\partial t} (x,\delta)
\varphi (x,\delta) \, d x.
\] For the last term, by decomposing $\varphi$ into $\varphi (x,t) = \zeta(x,t) \varphi (x,t) + (1-\zeta(x,t)) \varphi (x,t)$ where $\zeta \in C^\infty_c( \overline{B_{R/2}^+} )$ with $\zeta \equiv 1$ on $B_{R/4}^+$ and noting $\zeta(x,t) \varphi(x,t) \in C^\infty_{c} ( \overline{ \mathbb{R}^{N+1}_+ } )$, Remark \ref{Remark:2.2} and Lemma \ref{Lemma:2.3} yield
\[
- \int_{B_{\sqrt{R^2-\delta^2}}} \delta^{1-2s} \frac{\partial U}{\partial t} (x,\delta)
\varphi (x,\delta) \, d x \to
\kappa_s \int_{B_R} |x|^\ell |u|^{p-1} u \varphi (x,0) \, dx
\quad \text{as $\delta \to 0$}.
\] Therefore, for every $\varphi\in C^\infty(\overline{B^+_R})$,
\[ \int_{B^+_R}t^{1-2s}\nabla U\cdot\nabla\varphi\,dX =\int_{S^+_R}t^{1-2s}\frac{\partial U}{\partial\nu}\varphi\,dS
+\kappa_s\int_{B_R}|x|^\ell|u|^{p-1}u \varphi(x,0) \,dx.
\] From the fact that $U \in H^1_{\mathrm{loc}} (\overline{\mathbb{R}^{N+1}_+},t^{1-2s}dX)$ can be approximated by functions in $C^\infty(\overline{B^+_R})$ in the $H^1(B^+_R,t^{1-2s}dX)$ sense, by setting $\varphi = U$ in the above, we obtain \eqref{eq:2.22} and Proposition~\ref{Proposition:2.2} follows. \end{proof}
\subsection{Energy estimates} \label{section:2.3}
We first show several lemmata by following \cite[Lemma~2.2 and Corollary~2.3]{DDW}.
\begin{lemma} \label{Lemma:2.4} For $\zeta \in W^{1,\infty}(\mathbb{R}^N) \cap H^1(\mathbb{R}^N) $, define
\begin{equation}
\label{eq:2.29}
\rho(\zeta; x):=\int_{\mathbb R^N}\frac{(\zeta(x)-\zeta(y))^2}{|x-y|^{N+2s}}\, dy.
\end{equation} Then there exists a $C = C(N,s)>0$ such that
\begin{equation}\label{eq:2.30}
\begin{aligned}
\rho \left( \zeta ; x \right)
&\leq
C
\left\{ \left( 1 + |x| \right)^2 \| \nabla \zeta \|_{L^{\infty} (\Omega(|x|)) }^2
+ \| \zeta \|_{L^\infty( \Omega (|x|) )}^2
\right.
\\
& \hspace{3cm}
\left.+ \left( 1 + |x| \right)^{-N} \| \zeta \|_{L^2(\mathbb{R}^N)}^2
\right\} \left( 1 + |x| \right)^{-2s}
\qquad \text{for all $|x| \geq 1$},
\end{aligned}
\end{equation} and
\begin{equation}\label{eq:2.31}
\rho \left( \zeta ; x\right) \leq
C \left( 1 + \| \zeta \|_{W^{1,\infty} (\mathbb{R}^N) }^2 \right)
\quad \text{for all $|x| \leq 1$}
\end{equation} where
\[
\Omega (|x|) := \Set{ y \in \mathbb{R}^N | \, |y| \geq \frac{|x|}{2} }.
\] \end{lemma}
\begin{proof} The proof is basically the same as the one in \cite[Lemma 2.2]{DDW}.
We treat the case $|x| \geq 1$ and put
\[
\begin{aligned}
D_1 &:= \Set{ y \in \mathbb{R}^N | \, |y-x| \leq \frac{|x|}{2} },
& D_2 &:= \Set{ y \in \mathbb{R}^N | \, \frac{|x|}{2} \leq |y-x| \leq 2 |x| },
\\
D_3 &:= \Set{ y \in \mathbb{R}^N | \, 2|x| \leq |y-x| }.
\end{aligned}
\] For $D_1$,
notice that $D_{1}$ is convex and $D_1 \subset \Omega (|x|)$. Since it follows from $\zeta \in W^{1,\infty} (\mathbb{R}^N)$ that \[
|\zeta(x)-\zeta(y)|\le \| \nabla \zeta \|_{L^\infty( D_1 )} |x-y|
\leq \| \nabla \zeta \|_{L^\infty (\Omega (|x|)) } |x-y| , \] we have \begin{equation} \label{eq:2.32} \begin{aligned}
\int_{D_1}\frac{(\zeta(x)-\zeta(y))^2}{|x-y|^{N+2s}}\,dy &
\le C \| \nabla \zeta \|_{L^\infty (\Omega (|x|) )}^2 \int_{D_1} |x-y|^{2-N-2s}\,dy \\ &
\leq C \| \nabla \zeta \|_{L^\infty (\Omega(|x|))}^2 (1+|x|)^{2-2s}. \end{aligned} \end{equation}
For $y\in D_2$, by $|x| \geq 1$, it holds that \begin{equation} \label{eq:2.33} \begin{aligned}
\int_{D_2}\frac{(\zeta(x)-\zeta(y))^2}{|x-y|^{N+2s}}\,dy & \le
C|x|^{-N-2s}\int_{|y|\le 3|x|}(\zeta(x)^2+\zeta(y)^2)\, dy \\ & \leq
C \zeta(x)^2 |x|^{-2s} + C |x|^{-N-2s} \| \zeta \|_{L^2(\mathbb{R}^N)}^2 \\ &\leq
C \| \zeta \|_{L^\infty (\Omega(|x|))}^2 (1+|x|)^{-2s} + C \| \zeta \|_{L^2(\mathbb{R}^N)}^2 (1+|x|)^{-N-2s}. \end{aligned} \end{equation}
For $y \in D_3$, since $|y| \geq |x|$, we have \begin{equation}\label{eq:2.34} \begin{aligned}
\int_{D_3}\frac{(\zeta(x)-\zeta(y))^2}{|x-y|^{N+2s}}\,dy &
\le C \| \zeta \|_{L^\infty (\Omega (|x|))}^2 \int_{D_3}|x-y|^{-N-2s}\, dy \\ &
\leq C \| \zeta \|_{L^\infty (\Omega (|x|))}^2 (1+|x|)^{-2s}. \end{aligned} \end{equation}
Putting \eqref{eq:2.32}--\eqref{eq:2.34} together, we have \eqref{eq:2.30} for $|x| \geq 1$.
On the other hand, for $|x| \leq 1$, by dividing $\mathbb{R}^N$ into $B_2$ and $B_2^c$, and arguing as in \eqref{eq:2.32} and \eqref{eq:2.34}, we obtain \eqref{eq:2.31}. We omit the details; this proves Lemma \ref{Lemma:2.4}. \end{proof}
\begin{lemma} \label{Lemma:2.5} For $m>N/2$, set
\begin{equation}
\label{eq:2.35}
\eta(x) :=(1+|x|^2)^{-\frac{m}{2}}.
\end{equation} Let $R\ge R_0\ge1$ and $\psi\in C^\infty(\mathbb R^N)$ such that $0\le \psi\le 1$, $\psi\equiv 0$ on $B_1$ and $\psi\equiv 1$ on $B_2^c$. Define $\eta_R$ and $\rho_R$ by \begin{equation} \label{eq:2.36} \eta_R(x) :=\eta\bigg(\frac{x}{R}\bigg)\psi\bigg(\frac{x}{R_0}\bigg), \quad \rho_R(x) := \rho \left( \eta_R; x \right). \end{equation} Then there exists a constant $C=C(N,s,m,R_0)>0$ such that \begin{equation} \label{eq:2.37} \rho_R(x) \leq \left\{\begin{aligned}
& C\eta \left(\frac{x}{R} \right)^2|x|^{-N-2s}+2R^{-2s}\rho \left(\eta; \frac{x}{R}\right)
& & \text{if} \ |x|\ge 3R_0,\\
& C + 2 R^{-2s} \rho \left( \eta ; \frac{x}{R} \right)
& & \text{if} \ |x| \leq 3R_0.
\end{aligned}\right. \end{equation} \end{lemma}
\begin{remark} \label{Remark:2.3}
By Lemma \ref{Lemma:2.4} and \cite[Lemma 2.2]{DDW}, there exist two constants $c,C>0$ such that
\begin{equation*}
c \left( 1 + |x| \right)^{-N-2s} \leq
\rho (\eta ;x) \leq C \left( 1 + |x| \right)^{-N-2s} \quad \text{for all $x \in \mathbb{R}^N$}.
\end{equation*}
In addition, later (see \eqref{eq:2.58}) we shall also prove that
for all sufficiently large $R>0$ and $x \in \mathbb{R}^N$,
\[
0 < c_R (1+|x|)^{-N-2s} \leq \rho_R(x).
\] \end{remark}
\begin{proof}[Proof of Lemma \ref{Lemma:2.5}] Applying Young's inequality with the definition of $\eta_R$, we have
\begin{equation}\label{eq:2.38}
\begin{aligned}
&\rho_R(x)
\leq 2 \eta\left( \frac{x}{R} \right)^2
\int_{\mathbb{R}^N} \frac{\left( \psi \left( \frac{x}{R_0} \right) - \psi \left( \frac{y}{R_0} \right)
\right)^2}{|x-y|^{N+2s}} \, dy
\\
&\hspace{4cm}
+ 2 \int_{\mathbb{R}^N} \psi \left( \frac{y}{R_0} \right)^2
\frac{ \left( \eta \left( \frac{x}{R} \right) - \eta \left( \frac{y}{R} \right) \right)^2 }{|x-y|^{N+2s}} \, dy.
\end{aligned}
\end{equation}
For the first term, if $|x|\ge 3R_0$, then $|x-y| \geq |x|/3$ for any $ y \in B_{2R_0}$ and
\[
\int_{\mathbb{R}^N} \frac{\left( \psi \left( \frac{x}{R_0} \right) - \psi \left( \frac{y}{R_0} \right)
\right)^2}{|x-y|^{N+2s}} \, dy
\leq \int_{B_{2R_0}} |x-y|^{-N-2s} \, dy \leq C_{R_0} |x|^{-N-2s}
\]
and if $|x| \leq 3R_0$, then we see
\[
\begin{aligned}
\int_{\mathbb{R}^N} \frac{\left( \psi \left( \frac{x}{R_0} \right) - \psi \left( \frac{y}{R_0} \right)
\right)^2}{|x-y|^{N+2s}} \, dy
&\leq
\left( \int_{B_{R_0} (x) } + \int_{B_{R_0}^c (x)} \right)
\frac{\left( \psi \left( \frac{x}{R_0} \right) - \psi \left( \frac{y}{R_0} \right)
\right)^2}{|x-y|^{N+2s}} \, dy
\\
&\leq R_0^{-2} \| \psi \|_{C^1(\mathbb{R}^N)}^2 \int_{|z| \leq R_0} |z|^{-N-2s+2} \, dz
+ \int_{|z| \geq R_0} |z|^{-N-2s} \, dz.
\end{aligned}
\] Since
\[
\int_{\mathbb{R}^N} \psi \left( \frac{y}{R_0} \right)^2
\frac{ \left( \eta \left( \frac{x}{R} \right) - \eta \left( \frac{y}{R} \right) \right)^2 }{|x-y|^{N+2s}} \, dy
\leq
\int_{\mathbb{R}^N}
\frac{ \left( \eta \left( \frac{x}{R} \right) - \eta \left( \frac{y}{R} \right) \right)^2 }{|x-y|^{N+2s}} \, dy
= R^{-2s} \rho \left( \eta ; \frac{x}{R} \right),
\] by \eqref{eq:2.38}, we have \eqref{eq:2.37}.
\end{proof}
\begin{lemma} \label{Lemma:2.6}
Let $u \in H^s_{\mathrm{loc}} (\mathbb{R}^N) \cap L^\infty_{\rm loc} (\mathbb{R}^N) \cap L^2(\mathbb{R}^N,(1+|x|)^{-N-2s}dx) $ be a solution of \eqref{eq:1.1} under \eqref{eq:1.2}. Assume that $u$ is stable outside $B_{R_0}$. Let $\zeta\in C^1(\mathbb{R}^N)$ satisfy $\zeta \equiv 0 $ on $B_{R_0}$ and
\begin{equation}\label{eq:2.39}
|x| \left| \nabla \zeta (x) \right| + |\zeta (x)| \leq C\left( 1+|x| \right)^{- m }
\quad \text{for all $x \in \mathbb{R}^N$}
\end{equation} for some $m > N/2$. Then \begin{equation} \label{eq:2.40}
\int_{\mathbb R^N}|x|^{\ell}|u|^{p+1}\zeta^2\, dx
+\frac{1}{p}\|u\zeta\|_{\dot H^s(\mathbb R^N)}^2 \le \frac{C_{N,s}}{p-1}\int_{\mathbb R^N}u (x)^2\rho(\zeta;x)\, dx \end{equation} where $C_{N,s}$ is the constant given by \eqref{eq:1.3}. \end{lemma}
\begin{remark}\label{Remark:2.4}
Later, we shall use Lemma \ref{Lemma:2.6} with $\zeta (x) = \eta_R (x)$,
and we require the right-hand side of \eqref{eq:2.40} to be finite;
see Lemma \ref{Lemma:2.7} and the end of its proof.
In view of Remark \ref{Remark:2.3}, this is why we impose the condition
$u \in L^2(\mathbb{R}^N,(1+|x|)^{-N-2s}dx)$.
\end{remark}
\begin{proof} We follow the argument in \cite[Lemma~2.1]{DDW}. Note that the right-hand side in \eqref{eq:2.40} is finite due to \eqref{eq:2.30}, \eqref{eq:2.31}, \eqref{eq:2.39}
and $u \in L^2(\mathbb{R}^N,(1+|x|)^{-N-2s}dx) $. We first treat the case $\zeta \in C^1_c ( \mathbb{R}^N)$ where $\zeta \equiv 0$ on $B_{R_0}$. Since $u$ is a solution of \eqref{eq:1.1}, by Lemma~\ref{Lemma:2.3}, we see that $u\in C^1(\mathbb R^N\setminus\{0\})$. Since $u\zeta^2\in C^1_c(\mathbb R^N)$, by Remark~\ref{Remark:1.1} we can take $\varphi=u\zeta^2$ as a test function in \eqref{eq:1.4}, and by \eqref{eq:1.5} we have \begin{equation*} \begin{aligned}
\int_{\mathbb R^N}|x|^\ell|u|^{p+1}\zeta^2\,dx &= \left\langle u , u \zeta^2 \right\rangle_{\dot{H}^s(\mathbb{R}^N)} \\ & =\frac{C_{N,s}}{2}
\int_{\mathbb R^N \times \mathbb{R}^N} \frac{(u(x)-u(y))(u(x)\zeta(x)^2-u(y)\zeta(y)^2)}{|x-y|^{N+2s}}\,dx\,dy \\ & =\frac{C_{N,s}}{2}\int_{\mathbb R^N \times \mathbb{R}^N }
\frac{u(x)^2\zeta(x)^2-u(x)u(y)(\zeta(x)^2+\zeta(y)^2)+u(y)^2\zeta(y)^2}{|x-y|^{N+2s}}\,dx\,dy \\ & =\frac{C_{N,s}}{2}\int_{\mathbb R^N \times \mathbb{R}^N}
\frac{(u(x)\zeta(x)-u(y)\zeta(y))^2-(\zeta(x)-\zeta(y))^2u(x)u(y)}{|x-y|^{N+2s}}\,dx\,dy \\ &
=\|u\zeta\|^2_{\dot H^s(\mathbb R^N)}-\frac{C_{N,s}}{2}\int_{\mathbb R^N\times \mathbb{R}^N}
\frac{(\zeta(x)-\zeta(y))^2u(x)u(y)}{|x-y|^{N+2s}}\,dx\,dy. \end{aligned} \end{equation*} Applying the fundamental inequality $2ab\le a^2+b^2$ with \eqref{eq:2.29}, we deduce that \begin{equation} \label{eq:2.41} \begin{aligned} &
\|u\zeta\|^2_{\dot H^s(\mathbb R^N)}-\int_{\mathbb R^N}|x|^\ell|u|^{p+1}\zeta^2\,dx \\ \le \ & \frac{C_{N,s}}{4} \left( \int_{\mathbb R^N \times \mathbb{R}^N}
\frac{(\zeta(x)-\zeta(y))^2}{|x-y|^{N+2s}}u(x)^2\,dx\,dy +\int_{\mathbb R^N \times \mathbb{R}^N}
\frac{(\zeta(x)-\zeta(y))^2}{|x-y|^{N+2s}}u(y)^2\,dx\,dy\right) \\ = \ & \frac{C_{N,s}}{2}\int_{\mathbb R^N}u(x)^2\rho(\zeta;x)\,dx. \end{aligned} \end{equation} Since $u$ is stable outside $B_{R_0}$, by \eqref{eq:1.10} with $\varphi=u\zeta $ (see Remark \ref{Remark:1.3}) and \eqref{eq:2.41}, we have
\begin{equation}\label{eq:2.42}
(p-1)\int_{\mathbb R^N}|x|^\ell|u|^{p+1}\zeta^2\,dx
\leq
\| u \zeta \|_{\dot{H}^s(\mathbb{R}^N)}^2 - \int_{\mathbb{R}^N} |x|^\ell |u|^{p+1} \zeta^2 \, dx
\leq
\frac{C_{N,s}}{2}\int_{\mathbb R^N}u(x)^2\rho(\zeta;x)\,dx.
\end{equation} Hence, by \eqref{eq:2.41} and \eqref{eq:2.42},
\[
\begin{aligned}
\frac{1}{p} \| u \zeta \|_{\dot{H}^s(\mathbb{R}^N)}^2
& \leq \frac{1}{p} \int_{ \mathbb{R}^N} |x|^\ell |u|^{p+1} \zeta^2 \, dx
+ \frac{C_{N,s}}{2p} \int_{ \mathbb{R}^N} u(x)^2 \rho (\zeta;x) \, dx
\\
& \leq \frac{C_{N,s}}{2(p-1)} \int_{ \mathbb{R}^N} u^2 \rho (\zeta;x) \, dx.
\end{aligned}
\] This together with \eqref{eq:2.42} implies \eqref{eq:2.40} for $\zeta \in C^1_{c} (\mathbb{R}^N)$ with $ \zeta \equiv 0$ on $B_{R_0}$.
Next, let $\zeta \in C^1(\mathbb{R}^N)$ satisfy $\zeta \equiv 0$ on $B_{R_0}$ and \eqref{eq:2.39}. Let $(\varphi_n)_{n}$ be a sequence of cut-off functions in Lemma \ref{Lemma:2.1} and set $\zeta_n := \varphi_n \zeta \in C^1_c(\mathbb{R}^N)$. It is easily seen that $(\zeta_n)_n$ satisfies \eqref{eq:2.39} uniformly with respect to $n$, namely, the constant $C$ in \eqref{eq:2.39} may be chosen independently of $n$. Exploiting this fact with \eqref{eq:2.30} and \eqref{eq:2.31}, we observe that, as $n \to \infty$ and with a constant $C>0$ independent of $x \in \mathbb{R}^N$ and $n$,
\[
\rho(\zeta_n;x) \to \rho(\zeta;x), \quad
\left| \rho(\zeta_n;x) \right| \leq C \left( 1 + |x| \right)^{-N-2s}.
\] Therefore, from \eqref{eq:2.40} with $\zeta_n$, we find that $( \zeta_n u )_n$ is bounded in $\dot{H}^s(\mathbb{R}^N)$ and it is not difficult to see
$ |u(x)|^{p+1} \zeta_n (x)^2 \nearrow |u(x)|^{p+1} \zeta(x)^2$ for each $x \in \mathbb{R}^N$ and $\zeta_n u \rightharpoonup \zeta u$ weakly in $\dot{H}^s(\mathbb{R}^N)$. Hence, from the monotone convergence theorem, the weak lower semicontinuity of norm,
the fact $u \in L^2(\mathbb{R}^N,(1+|x|)^{-N-2s}dx)$, \eqref{eq:2.40} with $\zeta_n$ and the dominated convergence theorem, it follows that \eqref{eq:2.40} holds for each $\zeta \in C^1(\mathbb{R}^N)$ with \eqref{eq:2.39} and $\zeta \equiv 0$ on $B_{R_0}$. This completes the proof.
\end{proof}
By using Lemmata~\ref{Lemma:2.5} and~\ref{Lemma:2.6}, we have the following.
\begin{lemma} \label{Lemma:2.7}
Let $u \in H^s_{\mathrm{loc}} (\mathbb{R}^N) \cap L^\infty_{\rm loc} (\mathbb{R}^N) \cap L^2(\mathbb{R}^N,(1+|x|)^{-N-2s}dx) $ be a solution of \eqref{eq:1.1} with \eqref{eq:1.2}, which is stable outside $B_{R_0}$, and let $\rho_R$ be the function given in Lemma~$\ref{Lemma:2.5}$ with \begin{equation} \label{eq:2.43} m\in\left(\frac{N}{2} , \, \frac{N+s(p+1) + \ell}{2} \right). \end{equation} Then there exists a constant $C=C(N,s,\ell,m,p,R_0)>0$ such that \begin{equation} \label{eq:2.44} \int_{\mathbb R^N}u^2\rho_R\, dx\le C\left( \int_{B_{3R_0}} u^2 \rho_R \, dx + R^{N-\frac{2}{p-1}(s(p+1)+\ell)}+1\right) \end{equation} for all $R\ge 3R_0$. \end{lemma}
\begin{remark} \label{Remark:2.5} Due to \eqref{eq:1.2}, $ 0 < s (p+1) + \ell$ holds and we may choose an $m$ satisfying \eqref{eq:2.43}. \end{remark}
\begin{proof}[Proof of Lemma \ref{Lemma:2.7}] We basically follow the idea in \cite[Lemma 2.4]{DDW} and let $R \geq 3R_0$. First, by H\"older's inequality,
\begin{equation}\label{eq:2.45}
\begin{aligned}
&\int_{\mathbb{R}^N} u^2 \rho_R \, dx
\\
=\ & \int_{B_{3R_0}} u^2 \rho_R \, dx
+ \int_{B_{3R_0}^c}
u^2 \rho_R
\left( |x|^{\frac{\ell}{2}} \eta_R \right)^{\frac{4}{p+1}}
\left( |x|^{\frac{\ell}{2}} \eta_R \right)^{ - \frac{4}{p+1} } \, dx
\\
\leq \ & \int_{B_{3R_0}} u^2 \rho_R \, dx
+ \left( \int_{B_{3R_0}^c} |x|^\ell |u|^{p+1} \eta_R^2 \, dx \right)^{\frac{2}{p+1}}
\left( \int_{B_{3R_0}^c} |x|^{ -\frac{2\ell}{p-1} } \eta_R^{ -\frac{4}{p-1} } \rho_R^{ \frac{p+1}{p-1} }
\, dx \right)^{ \frac{p-1}{p+1} }.
\end{aligned}
\end{equation}
For the case $3R_0\le|x|\le R$, by Lemma~\ref{Lemma:2.4} with \eqref{eq:2.35}, \eqref{eq:2.36} and Remark \ref{Remark:2.3}, we have \begin{equation} \label{eq:2.46} 2^{-\frac{m}{2}}=\eta(1)\le \eta_R(x)\le \eta\bigg(\frac{3R_0}{R}\bigg)\le 1 \qquad\mbox{and}\qquad \rho\bigg(\eta;\frac{x}{R}\bigg)\le C. \end{equation} Then, by \eqref{eq:2.37} we obtain \[
\rho_R(x)\le C(|x|^{-N-2s}+R^{-2s})
\quad \text{for all $3R_0 \leq |x| \leq R$}. \] This together with \eqref{eq:2.46} yields \begin{equation} \label{eq:2.47} \begin{aligned} &
\int_{B_R\setminus B_{3R_0}}|x|^{-\frac{2\ell}{p-1}}\rho_R^{\frac{p+1}{p-1}}\eta_R^{-\frac{4}{p-1}}\,dx \\ \le\ & C\int_{3R_0}^Rr^{-\frac{2\ell}{p-1}}\bigg(r^{-(N+2s)}+R^{-2s}\bigg)^{\frac{p+1}{p-1}}r^{N-1}\, dr \\ \le \ & C\int_{3R_0}^Rr^{N-1-\frac{2\ell}{p-1}-(N+2s)\frac{p+1}{p-1}}\, dr+C R^{-2s\frac{p+1}{p-1}}\int_{3R_0}^Rr^{N-1-\frac{2\ell}{p-1}}\, dr \\ \le \ & C\int_{3R_0}^\infty r^{-1-\frac{2}{p-1}(N+\ell+s(p+1))}\, dr+C R^{-2s\frac{p+1}{p-1}}\int_{3R_0}^Rr^{N-1-\frac{2\ell}{p-1}}\, dr \\ \le \ & C+C R^{-2s\frac{p+1}{p-1}}\int_{3R_0}^Rr^{N-1-\frac{2\ell}{p-1}}\, dr. \end{aligned} \end{equation} Since \[
\begin{aligned}
R^{-2s\frac{p+1}{p-1}}\int_{3R_0}^Rr^{N-1-\frac{2\ell}{p-1}}\, dr
&\leq
\left\{\begin{aligned}
& C R^{-2s \frac{p+1}{p-1} } & &
\text{if} \ N - \frac{2\ell}{p-1} < 0,\\
& C R^{ - 2 s \frac{p+1}{p-1} } \left( 1 + \log R \right) & &
\text{if} \ N - \frac{2\ell}{p-1} = 0,\\
& C R^{ N - \frac{2}{p-1} \left( s(p+1) + \ell \right) } & &
\text{if} \ N - \frac{2\ell}{p-1} > 0,
\end{aligned}\right.
\\
&\le
\left\{
\begin{aligned}
& C & &\mbox{if}\quad N-\frac{2\ell}{p-1}\le0,
\\
&C R^{N-\frac{2}{p-1}(s(p+1)+\ell)}
& &\mbox{if}\quad N-\frac{2\ell}{p-1}>0,
\end{aligned}
\right.
\end{aligned} \] by \eqref{eq:2.47}, we see that \begin{equation} \label{eq:2.48}
\int_{B_R\setminus B_{3R_0}}|x|^{-\frac{2\ell}{p-1}}\rho_R^{\frac{p+1}{p-1}}\eta_R^{-\frac{4}{p-1}}\,dx \le C+CR^{N-\frac{2}{p-1}(s(p+1)+\ell)}. \end{equation}
On the other hand, for the case $|x|\ge R\ge 3R_0$, by \eqref{eq:2.35} and $R^2+|x|^2 \leq 2|x|^2$, we obtain \begin{align*}
\eta\bigg(\frac{x}{R}\bigg)^2|x|^{-N-2s}
\le C\eta(1)^2(R^2+|x|^2)^{-\frac{N}{2}-s}
&\le CR^{-N-2s}\bigg(1+\frac{|x|^2}{R^2}\bigg)^{-\frac{N}{2}-s} \\
&\le CR^{-2s}\bigg(1+\frac{|x|^2}{R^2}\bigg)^{-\frac{N}{2}-s}. \end{align*} This together with \eqref{eq:2.30}, \eqref{eq:2.37} and Remark \ref{Remark:2.3} implies that
\[
\rho_R(x)\le CR^{-2s}\bigg(1+\frac{|x|^2}{R^2}\bigg)^{-\frac{N}{2}-s}
\quad \mbox{for each} \ |x| \geq R.
\]
Thus, \eqref{eq:2.35}, \eqref{eq:2.36} and the fact $|x| \geq R$ give \begin{equation} \label{eq:2.49}
\begin{aligned}
&|x|^{-\frac{2\ell}{p-1}}\rho_R(x)^{\frac{p+1}{p-1}}\eta_R(x)^{-\frac{4}{p-1}}
\\
\leq \ &
C \left( R^2+|x|^2 \right)^{-\frac{\ell}{p-1}}
\left\{ R^{-2s} \left( 1 + \frac{|x|^2}{R^2} \right)^{- \frac{N+2s}{2} } \right\}^{ \frac{p+1}{p-1} }
\left( 1 + \frac{|x|^2}{R^2} \right)^{ \frac{m}{2} \frac{4}{p-1} }
\\
\leq \ & CR^{-2s\frac{p+1}{p-1}-\frac{2\ell}{p-1}}
\left(1+\frac{|x|^2}{R^2}\right)^{-\frac{N+2s}{2}\frac{p+1}{p-1}+\frac{2m}{p-1} - \frac{\ell}{p-1} }.
\end{aligned} \end{equation} Since it follows from \eqref{eq:2.43} that \[ \alpha_{ N,s,p,m,\ell } := -\frac{N+2s}{2}\frac{p+1}{p-1}+\frac{2m}{p-1} - \frac{\ell}{p-1} < -\frac{N}{2}, \] by \eqref{eq:2.49} we have \begin{equation} \label{eq:2.50} \begin{aligned} &
\int_{B_R^c}|x|^{-\frac{2\ell}{p-1}}\rho_R^{\frac{p+1}{p-1}}\eta_R^{-\frac{4}{p-1}}\,dx \\ \le \ & CR^{-2s\frac{p+1}{p-1}-\frac{2\ell}{p-1}}\int_R^\infty
\left(1+\frac{r^2}{R^2}\right)^{\alpha_{ N,s,p,m,\ell } }r^{N-1}\, dr \\ = \ & C R^{- 2s \frac{p+1}{p-1} - \frac{2\ell}{p-1} + N-1 } \int_{R}^{\infty} \left( 1 + \frac{r^2}{R^2} \right)^{ \alpha_{ N,s,p,m,\ell } } \left( \frac{r}{R} \right)^{N-1} \, dr \\ \le \ & CR^{N-\frac{2}{p-1}(s(p+1)+\ell)}. \end{aligned} \end{equation} Combining \eqref{eq:2.48} and \eqref{eq:2.50}, we obtain \begin{equation} \label{eq:2.51}
\int_{B_{3R_0}^c} |x|^{-\frac{2\ell}{p-1}}\rho_R^{\frac{p+1}{p-1}}\eta_R^{-\frac{4}{p-1}}\,dx \le C\left(R^{N-\frac{2}{p-1}(s(p+1)+\ell)}+1\right). \end{equation}
Now we substitute \eqref{eq:2.51} into \eqref{eq:2.45} and infer from \eqref{eq:2.40} with $\zeta= \eta_R$ that
\[
\begin{aligned}
&\int_{\mathbb{R}^N} u^2 \rho_R \, dx
\\
\leq \ &
\int_{B_{3R_0}} u^2 \rho_R \, dx
+ C \left( \int_{ B_{3R_0}^c } |x|^\ell |u|^{p+1} \eta_R^2 \, dx \right)^{\frac{2}{p+1}}
\left(R^{N-\frac{2}{p-1}(s(p+1)+\ell)}+1\right)^{\frac{p-1}{p+1}}
\\
\leq \ &
\int_{B_{3R_0}} u^2 \rho_R \, dx
+ C \left( \int_{\mathbb{R}^N } u^2 \rho_R \, dx \right)^{\frac{2}{p+1}}
\left(R^{N-\frac{2}{p-1}(s(p+1)+\ell)}+1\right)^{\frac{p-1}{p+1}}
\end{aligned}
\] for $R\ge 3R_0$. Dividing both sides by $ \left( \int_{\mathbb{R}^N } u^2 \rho_R \, dx \right)^{\frac{2}{p+1}} < \infty $ and noting
\[
\left( \int_{ B_{3R_0} } u^2 \rho_R \, dx \right)
\left( \int_{ \mathbb{R}^N} u^2 \rho_R \, dx \right)^{ - \frac{2}{p+1} }
\leq \left( \int_{ B_{3R_0} } u^2 \rho_R \, dx \right)^{\frac{p-1}{p+1}},
\] we have \eqref{eq:2.44} and Lemma~\ref{Lemma:2.7} follows. \end{proof}
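We also record the elementary step behind the conclusion of the proof above. Writing $X := \int_{\mathbb{R}^N} u^2 \rho_R \, dx$, $A := \int_{B_{3R_0}} u^2 \rho_R \, dx$ and $B := \left( R^{N-\frac{2}{p-1}(s(p+1)+\ell)} + 1 \right)^{\frac{p-1}{p+1}}$, the last display reads $X \leq A + C X^{\frac{2}{p+1}} B$. If $X>0$, dividing by $X^{\frac{2}{p+1}}$ and using $A \leq X$ yields
\[
X^{\frac{p-1}{p+1}} \leq A X^{-\frac{2}{p+1}} + C B \leq A^{\frac{p-1}{p+1}} + C B,
\]
and raising both sides to the power $\frac{p+1}{p-1}$ gives \eqref{eq:2.44}.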
For the supercritical case, we have the following energy estimates for the function $U$, which is given in \eqref{eq:2.3}.
\begin{lemma} \label{Lemma:2.8} Assume $p_S(N,\ell) < p $ and \eqref{eq:1.2}.
Let $u \in H^s_{\mathrm{loc}} (\mathbb{R}^N) \cap L^\infty_{\rm loc} (\mathbb{R}^N) \cap L^2(\mathbb{R}^N, (1+|x|)^{-N-2s}dx) $ be a solution of \eqref{eq:1.1} which is stable outside $B_{R_0}$ and $U$ be the function given in \eqref{eq:2.3}. Then there exists a $C=C(N,p,s,\ell,R_0,u)>0$ such that \begin{equation} \label{eq:2.52} \int_{B^+_R}t^{1-2s}U^2\, dX\le CR^{N+2(1-s)-\frac{2(2s+\ell)}{p-1}} \end{equation} for all $R\ge 3R_0$. \end{lemma}
\begin{remark}
If $ p_S(N,\ell) < p $, then
\begin{equation}\label{eq:2.53}
N - \frac{2}{p-1} \left( s ( p+ 1) + \ell \right) > 0.
\end{equation}
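Indeed, assuming that the critical exponent takes the standard H\'enon--Sobolev form $p_S(N,\ell) = \frac{N+2s+2\ell}{N-2s}$ with $N > 2s$ (this form is not restated in the present section), \eqref{eq:2.53} follows from the chain of equivalences
\[
\begin{aligned}
N - \frac{2}{p-1} \left( s(p+1) + \ell \right) > 0
& \iff N(p-1) > 2s(p+1) + 2\ell \\
& \iff p(N-2s) > N + 2s + 2\ell
\iff p > p_S(N,\ell).
\end{aligned}
\]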
\end{remark}
\begin{proof}[Proof of Lemma \ref{Lemma:2.8}] Let $\zeta_{R_0} \in C^\infty_c(\mathbb{R}^N)$ with $0 \leq \zeta_{R_0} \leq 1$, $\zeta_{R_0} \equiv 1$ on $B_{3R_0}$ and $\zeta_{R_0} \equiv 0$ on $B_{4R_0}^c$. We decompose $u$ as
\[
u(x) = \zeta_{R_0} (x) u(x) + (1 - \zeta_{R_0} (x)) u(x) = : v(x) + w(x).
\] Notice that $v \in H^s(\mathbb{R}^N)$ with $\supp v \subset \overline{B_{4R_0}}$ and $w \in H^s_{\rm loc} (\mathbb{R}^N)$. Recalling \eqref{eq:2.3}, we also decompose $U$ as
\[
U(x,t) = (P_s(\cdot, t) \ast u) (x)
= (P_s (\cdot , t) \ast v ) (x) + (P_s (\cdot , t) \ast w) (x)
= : V(x,t) + W(x,t).
\]
We first estimate $V(x,t)$. By Young's inequality and $\| P_s (\cdot , t) \|_{L^1(\mathbb{R}^N)} = 1$, it follows that
\[
\left\| V(\cdot, t) \right\|_{L^2(\mathbb{R}^N)} \leq \| P_s (\cdot , t) \|_{L^1(\mathbb{R}^N)} \| v \|_{L^2(\mathbb{R}^N)}
= \| v \|_{L^2(\mathbb{R}^N)} \quad \text{for each $t \in (0,\infty)$}.
\] Therefore,
\[
\begin{aligned}
\int_{B_R^+} t^{1-2s} |V|^2 \, d X
& \leq \int_{0}^{R} dt \int_{\mathbb{R}^N} t^{1-2s} |V(x,t)|^2 \, dx
\leq \int_0^R t^{1-2s} \| v \|_{L^2(\mathbb{R}^N)}^2 dt
= C R^{2-2s}.
\end{aligned}
\] From
\[
2-2s \leq N + 2 (1-s) - \frac{2(2s+\ell)}{p-1},
\] which holds by \eqref{eq:2.53} since $s(p+1) \geq 2s$ for $p > 1$, we infer that
\begin{equation}\label{eq:2.54}
\int_{B^+_R} t^{1-2s} |V|^2 \, dX \leq C R^{N + 2 (1-s) - \frac{2(2s+\ell)}{p-1}}.
\end{equation}
Next, we consider $W(x,t)$. By H\"older's inequality,
\begin{equation}\label{eq:2.55}
\begin{aligned}
& \int_{B_R^+} t^{1-2s} |W|^2 \, dX
\\
= \ &
\int_{ B_R^+} t^{1-2s} \left( \int_{\mathbb{R}^N} (P_s(x-y,t))^{1/2} (P_s(x-y,t))^{1/2} w(y) \, dy \right)^2 \, dX
\\
\leq \ & C \int_{ B_R^+} dX t^{1-2s} \int_{\mathbb R^N}w(y)^2\frac{t^{2s}}{(|x-y|^2+t^2)^{\frac{N+2s}{2}}}\, dy
\\
\leq \ & C \int_{ B_R^+} dX t^{1-2s}
\left( \int_{|x-y| \leq 3R} + \int_{ |x-y|> 3R} \right)
w(y)^2\frac{t^{2s}}{(|x-y|^2+t^2)^{\frac{N+2s}{2}}}\, dy.
\end{aligned}
\end{equation}
For $y \in \mathbb{R}^N$ with $|x-y| \leq 3R$, since it follows from \eqref{eq:1.2} that
\[
- \frac{N+2s}{2} + 1 < 0 \quad \text{if} \ N \geq 2, \quad
- \frac{N+2s}{2} + 1 = \frac{1-2s}{2} > 0 \quad \text{if} \ N = 1,
\] we see that
\[
\begin{aligned}
&\int_{ B_R^+} dX t^{1-2s} \int_{ |x-y| \leq 3R}
w(y)^2 \frac{t^{2s}}{(|x-y|^2+t^2)^{\frac{N+2s}{2}}}\, dy
\\
\leq & \
\int_0^R dt \int_{B_R} dx
\int_{|x-y| \leq 3 R}
w(y)^2\frac{t}{(|x-y|^2+t^2)^{\frac{N+2s}{2}}}\, dy
\\
= & \
\int_{ B_R} dx \int_{ |x-y| \leq 3R} dy
\int_{0}^R w(y)^2
\frac{1}{2-N-2s} \frac{\partial}{\partial t}
\left( |x-y|^2 + t^2 \right)^{ \frac{2 - (N+2s)}{2} }
\, dt
\\
= \ & \frac{1}{2 - N - 2s} \int_{ B_R} dx \int_{ |x-y| \leq 3R}
w(y)^2
\left\{ \left( |x-y|^2 + R^2 \right)^{ \frac{2-N-2s}{2} } - |x-y|^{ 2 - N -2s } \right\} \, dy
\\
\leq \ &
\left\{\begin{aligned}
& \frac{1}{1 - 2s } \int_{ B_R} dx \int_{ |x-y| \leq 3R}
w(y)^2 \left( |x-y|^2 + R^2 \right)^{ \frac{1-2s}{2} } \, dy
& & \text{when $N = 1$},
\\
& \frac{1}{N+2s -2 }\int_{ B_R} dx \int_{ |x-y| \leq 3R}
w(y)^2 |x-y|^{ 2-N-2s } \, dy
& & \text{when $N \geq 2$}.
\end{aligned}\right.
\end{aligned}
\]
When $N = 1$, by $\set{y \in \mathbb{R}^N | |x-y| \leq 3R} \subset B_{4R}$ for each $x \in B_R$, we have
\[
\begin{aligned}
\int_{ B_R} dx \int_{ |x-y| \leq 3R}
w(y)^2 \left( |x-y|^2 + R^2 \right)^{ \frac{1-2s}{2} } \, dy
& \leq C R^{1-2s} \int_{ B_R} dx \int_{ |x-y| \leq 3R} w(y)^2 \, dy
\\
& \leq C R^{2-2s} \int_{ B_{4R}} w(y)^2 \, dy.
\end{aligned}
\] On the other hand, when $N \geq 2$, since $B_R(y) \subset B_{5R}$ for each $y \in B_{4R}$, we have
\[
\begin{aligned}
\int_{ B_R} dx \int_{ |x-y| \leq 3R}
w(y)^2 |x-y|^{ 2-N-2s } \, dy
& \leq C \int_{ B_R} dx \int_{ B_{4R}} w(y)^2 |x-y|^{2-N-2s } \, dy
\\
& = C \int_{ B_{4R}} dy \int_{ B_R (y) } w(y)^2 |z|^{2-N-2s} \, dz
\\
& \leq C \int_{ B_{4R}} dy \int_{ B_{5R}} w(y)^2 |z|^{2-N-2s} \, d z
\\
& = C R^{2 - 2s} \int_{ B_{4R}} w(y)^2 \,dy,
\end{aligned}
\] hence, for $N \geq 1$ and $R \geq 3R_0$,
\[
\int_{ B_R^+} dX t^{1-2s} \int_{ |x-y| \leq 3R}
w(y)^2 \frac{t^{2s}}{(|x-y|^2+t^2)^{\frac{N+2s}{2}}}\, dy
\leq C R^{2-2s} \int_{ B_{4R}} w(y)^2 \, dy.
\]
Notice that $w \equiv 0$ on $B_{3R_0}$, $|w| \leq |u|$
and $0<c \leq \eta_R(x)$ for any $3R_0 \leq |x| \leq 4R$. By Lemma \ref{Lemma:2.6} and $N - 2\ell / (p-1) > 0$ due to $p > p_S(N,\ell)$, we may argue as in \eqref{eq:2.45} and \eqref{eq:2.47} to obtain
\[
\begin{aligned}
& \int_{ B_R^+} dX t^{1-2s} \int_{ |x-y| \leq 3R}
w(y)^2 \frac{t^{2s}}{(|x-y|^2+t^2)^{\frac{N+2s}{2}}}\, dy
\\
\leq \ & C R^{2-2s} \int_{ 3R_0 \leq |x| \leq 4R}
u^2 \left( |x|^{\frac{\ell}{2}} \eta_R \right)^{\frac{4}{p+1}}
\left( |x|^{\frac{\ell}{2}} \eta_R \right)^{-\frac{4}{p+1}} \, dx
\\
\leq \ & CR^{2-2s}
\left( \int_{\mathbb{R}^N} |x|^\ell |u|^{p+1} \eta_R^2 \, dx \right)^{\frac{2}{p+1}}
\left( \int_{3R_0 \leq |x| \leq 4R} |x|^{-\frac{2\ell}{p-1}} \,dx
\right)^{\frac{p-1}{p+1}}
\\
\leq \ & C R^{ 2(1-s) + (N-\frac{2\ell}{p-1})\frac{p-1}{p+1} }
\left( \int_{ \mathbb{R}^N} u^2 \rho_R \, dx \right)^{\frac{2}{p+1}}.
\end{aligned}
\] Furthermore, by \eqref{eq:2.53} and Lemma~\ref{Lemma:2.7}, enlarging $C$ if necessary, we obtain \begin{equation}\label{eq:2.56} \int_{\mathbb R^N}u^2\rho_R\,dx\le CR^{N-\frac{2}{p-1}(s(p+1)+\ell)}, \end{equation} which yields \begin{equation} \label{eq:2.57}
\int_{ B_R^+ } dX t^{1-2s} \int_{ |x-y| \leq 3R}
w(y)^2 \frac{t^{2s}}{(|x-y|^2+t^2)^{\frac{N+2s}{2}}}\, dy \le CR^{N+2(1-s)-\frac{2(2s+\ell)}{p-1}}. \end{equation}
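For the reader's convenience, the exponent in \eqref{eq:2.57} comes from the elementary arithmetic
\[
2(1-s) + \left( N - \frac{2\ell}{p-1} \right) \frac{p-1}{p+1}
+ \frac{2}{p+1} \left( N - \frac{2}{p-1} \left( s(p+1) + \ell \right) \right)
= N + 2(1-s) - \frac{2(2s+\ell)}{p-1},
\]
where the two $N$-terms combine to $N$ because $\frac{p-1}{p+1} + \frac{2}{p+1} = 1$, the $s$-terms give $-\frac{4s}{p-1}$, and the $\ell$-terms give $-\frac{2\ell}{p+1} - \frac{4\ell}{(p-1)(p+1)} = -\frac{2\ell}{p-1}$.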
Next, we consider the second term in \eqref{eq:2.55}, namely the case $|x-y| > 3R$.
Since $|x-y|\ge |y|-|x| \ge |y|-R\ge |y|/2$ and $B_{3R}^c(x) \subset B_{2R}^c$ for $x\in B_R$ and $y\in B^c_{2R}$, we have
\[
\begin{aligned}
&\int_{ B_R^+} dX t^{1-2s} \int_{ |x-y| > 3R}
w(y)^2 \frac{t^{2s}}{(|x-y|^2+t^2)^{\frac{N+2s}{2}}}\, dy
\\
\leq \ & \int_{ B_R} dx \int_{ |x-y| > 3R} dy
\int_0^R w(y)^2 t |x-y|^{-N-2s} \, dt
\\
\leq \ & R^2 \int_{ B_R} dx \int_{ B_{3R}^c(x)}
w(y)^2 |x-y|^{-N-2s} \, dy
\\
\leq \ & C R^2 \int_{ B_R} dx \int_{ B_{3R}^c(x)}
w(y)^2 |y|^{-N-2s} \, dy
\\
\leq \ & C R^{2+N} \int_{ |z| \geq 2R} w(z)^2 |z|^{-N-2s} \, dz.
\end{aligned}
\] On the other hand, from the definition of $\eta_R$ and $\rho_R$, it follows that
for $|x| \geq 2 R \geq 6 R_0$ and $\boldsymbol{e}_1 = (1,0,\ldots,0) \in \mathbb{R}^N$,
\begin{equation}\label{eq:2.58}
\begin{aligned}
\rho_R(x) = \int_{ \mathbb{R}^N} \frac{ ( \eta_R(x) - \eta_R (y) )^2 }{|x-y|^{N+2s}} \, dy
&\geq \int_{ |y| \geq 2 R_0 }
\frac{ \left( \eta\left( \frac{x}{R} \right) - \eta \left(\frac{y}{R}\right) \right)^2 }{|x-y|^{N+2s}} \, dy
\\
&= R^{-2s} \int_{ |z| \geq 2 R^{-1} R_0 }
\frac{ \left( \eta\left(\frac{x}{R}\right) - \eta(z) \right)^2 }{|R^{-1} x - z|^{N+2s} } \, dz
\\
& \geq R^{-2s} \int_{ |z- \boldsymbol{e}_1 | < \frac{1}{3}}
\frac{ \left( \eta\left(\frac{x}{R}\right) - \eta(z) \right)^2 }{|R^{-1} x - z|^{N+2s} } \, dz
\\
&\geq C_0 R^{-2s} \left| R^{-1} x \right|^{-N-2s}
= C_0 R^{N} |x|^{-N-2s}
\end{aligned}
\end{equation}
for some $C_0>0$. Thus, noting $u \equiv w$ on $|y| \geq 2R$ and \eqref{eq:2.56}, we obtain
\[
\begin{aligned}
R^{2+N} \int_{ |y| \geq 2R} w(y)^2 |y|^{-N-2s} \, dy
& \leq C R^2 \int_{ |y| \geq 2R} u^2 \rho_R \, dy
\leq CR^{N+ 2 (1-s) - \frac{2(2s+\ell)}{p-1} },
\end{aligned}
\] which implies
\begin{equation}\label{eq:2.59}
\int_{ B_R^+} dX t^{1-2s} \int_{ |x-y| > 3R}
w(y)^2 \frac{t^{2s}}{(|x-y|^2+t^2)^{\frac{N+2s}{2}}}\, dy
\leq CR^{N+ 2 (1-s) - \frac{2(2s+\ell)}{p-1} }.
\end{equation} Substituting \eqref{eq:2.57} and \eqref{eq:2.59} into \eqref{eq:2.55}, we see that
\[
\int_{ B_R^+} t^{1-2s} W^2 \, dX \leq CR^{N+ 2 (1-s) - \frac{2(2s+\ell)}{p-1} }.
\] This with \eqref{eq:2.54} and $U=V+W$ completes the proof of Lemma \ref{Lemma:2.8}.
\end{proof}
\begin{lemma} \label{Lemma:2.9} Assume the same conditions as in Lemma~$\ref{Lemma:2.8}$. Then there exists a $C=C(N,p,s,\ell,R_0,u)>0$ such that \begin{equation} \label{eq:2.60}
\int_{B^+_R}t^{1-2s}|\nabla U|^2\, dX
+\int_{B_R}|x|^\ell|u|^{p+1}\,dx \le CR^{N-\frac{2}{p-1}(s(p+1)+\ell)} \end{equation} for all $R\ge 3R_0$. \end{lemma}
\begin{proof} We first prove the weighted $L^{p+1}$ estimate for $u$. Let $\eta_R$ and $\rho_R$ be the functions given in Lemma~\ref{Lemma:2.5}. Then, since $u\in L^\infty_{\rm loc}(\mathbb R^N)$ with \eqref{eq:1.2}, it holds that \begin{equation} \label{eq:2.61}
\int_{B_{2R_0}}|x|^\ell|u|^{p+1}\, dx\le C. \end{equation} Furthermore, since $p_S(N,\ell) < p$ implies \eqref{eq:2.53}, applying Lemmata~\ref{Lemma:2.6} and \ref{Lemma:2.7}, and noting $\eta_R \geq c_0>0$ on $B_R \setminus B_{2R_0}$, we see that for all $R \geq 3R_0$, \begin{align*}
\int_{B_R\setminus B_{2R_0}}|x|^\ell|u|^{p+1}\, dx & \le
C\int_{\mathbb R^N}|x|^\ell|u|^{p+1}\eta_R^2 \,dx \\ & \le C\int_{\mathbb R^N}u^2\rho_R\,dx \le C\bigg(1+R^{N-\frac{2}{p-1}(s(p+1)+\ell)}\bigg) \le CR^{N-\frac{2}{p-1}(s(p+1)+\ell)}. \end{align*} This together with \eqref{eq:2.61} yields \begin{equation} \label{eq:2.62}
\int_{B_R}|x|^\ell|u|^{p+1}\,dx \le CR^{N-\frac{2}{p-1}(s(p+1)+\ell)} \quad \text{for each $R \geq 3R_0$}. \end{equation}
Next we take a cut-off function $\zeta\in C^\infty_c (\overline{\mathbb R^{N+1}_+} \setminus \overline{B^+_{R_0}} )$ such that
\begin{equation}\label{eq:2.63}
\zeta\equiv
\left\{
\begin{array}{ll}
1& \mbox{on}\quad B^+_R\setminus B^+_{2R_0},
\\
0& \mbox{on}\quad B^+_{R_0}\cup( \mathbb R^{N+1}_+ \setminus B^+_{2R}),
\end{array}
\right.
\qquad
|\nabla \zeta|\le CR^{-1}
\quad\mbox{on}\quad B^+_{2R}\setminus B^+_{R}.
\end{equation} Then, taking $\psi = U\zeta^2 \in C^1_c ( \overline{ \mathbb{R}^{N+1}_+} \setminus \overline{ B^+_{R_0}} )$ as a test function in \eqref{eq:2.18}, we obtain \begin{equation} \label{eq:2.64} \begin{aligned}
\kappa_s\int_{ \mathbb R^N }|x|^\ell|u|^{p+1}\zeta(x,0)^2\,dx & =\int_{\mathbb R^{N+1}_+}t^{1-2s}\nabla U\cdot\nabla(U\zeta^2)\,dX \\ &
=\int_{\mathbb R^{N+1}_+}t^{1-2s}\{|\nabla(U\zeta)|^2-U^2|\nabla\zeta|^2\}\,dX. \end{aligned} \end{equation} Since $u$ is stable outside $B_{R_0}$ and $U=u$ on $\partial\mathbb R^{N+1}_+$, we see from \eqref{eq:2.8} that $U$ is stable outside $B^+_{R_0}$, that is, for any $\psi \in C^1_c( \overline{ \mathbb{R}^{N+1}_+ } \setminus \overline{B^+_{R_0}} )$, \begin{equation} \label{eq:2.65} \begin{aligned}
p\kappa_s\int_{\mathbb{R}^N}|x|^\ell|U(x,0)|^{p-1}\psi(x,0)^2\,dx &=
p\kappa_s\int_{\mathbb{R}^N}|x|^\ell|u|^{p-1}\psi(x,0)^2\,dx \\
& \leq \kappa_s \| \psi (\cdot, 0) \|_{\dot{H}^s(\mathbb{R}^N)}^2
\le\int_{\mathbb R^{N+1}_+}t^{1-2s}|\nabla \psi|^2\,dX. \end{aligned} \end{equation} By \eqref{eq:2.64} and \eqref{eq:2.65} with $\psi=U\zeta \in C^1_c( \overline{ \mathbb{R}^{N+1}_+} \setminus \overline{B^+_{R_0}} )$, we have \[
\int_{\mathbb R^{N+1}_+}t^{1-2s}\{|\nabla(U\zeta)|^2-U^2|\nabla\zeta|^2\}\,dX \leq
\frac{1}{p}\int_{\mathbb R^{N+1}_+}t^{1-2s}|\nabla(U\zeta)|^2\, dX, \] which implies \begin{equation} \label{eq:2.66}
\int_{\mathbb R^{N+1}_+}t^{1-2s}|\nabla(U\zeta)|^2\,dX \le
\frac{p}{p-1}\int_{\mathbb R^{N+1}_+}t^{1-2s}U^2|\nabla\zeta|^2\,dX. \end{equation} By \eqref{eq:2.66}, \eqref{eq:2.63} and \eqref{eq:2.52}, we see that \begin{equation} \label{eq:2.67} \begin{aligned}
\int_{B^+_R\setminus B^+_{2R_0}}t^{1-2s}|\nabla U|^2\,dX &\le
\int_{\mathbb R^{N+1}_+}t^{1-2s}|\nabla(U\zeta)|^2\,dX \\ &
\le C\int_{\mathbb R^{N+1}_+}t^{1-2s}U^2|\nabla\zeta|^2\,dX \\ & \le C \left( \int_{B^+_{2R_0} \setminus B_{R_0}^+ } t^{1-2s} U^2 \, dX + R^{-2}\int_{ B^+_{2R} \setminus B_R^+ }t^{1-2s}U^2\,dX \right) \\ & \le C \int_{B^+_{2R_0} \setminus B_{R_0}^+ } t^{1-2s} U^2 \, dX + CR^{N-\frac{2}{p-1}(s(p+1)+\ell)} \end{aligned} \end{equation} for all $R\ge3R_0$.
On the other hand, it follows from $U\in H^1_{\mathrm{loc}} ( \overline{\mathbb{R}^{N+1}_+},t^{1-2s}dX)$ due to Lemma \ref{Lemma:2.1} that \[
\int_{B^+_{2R_0}}t^{1-2s} \left( |\nabla U|^2 + U^2 \right)\,dX\le C. \] This together with \eqref{eq:2.62} and \eqref{eq:2.67} yields \eqref{eq:2.60}, thus Lemma~\ref{Lemma:2.9} follows.
\end{proof}
\section{The subcritical and critical case} \label{section:3}
In this section, we prove Theorem~\ref{Theorem:1.1} for the subcritical and critical case, that is, $1 < p \leq p_S(N,\ell)$.
\begin{proof}[Proof of Theorem~\ref{Theorem:1.1} for $1<p\le p_S(N,\ell)$]
Let $u \in H^s_{\mathrm{loc}} (\mathbb{R}^N) \cap L^\infty_{\rm loc} (\mathbb{R}^N) \cap L^2( \mathbb{R}^N , (1+|x|)^{-N-2s} dx ) $ be a solution of \eqref{eq:1.1} which is stable outside $B_{R_0}$. Note that, as $R \to \infty$, $\eta_R(x) \to \psi(R_0^{-1} x)$ for each $x \in \mathbb{R}^N$. By Lemma \ref{Lemma:2.5}, $(\rho_R)_{R\ge R_0}$ is bounded in $L^\infty(\mathbb{R}^N)$, and we may check
\[
\rho_R(x) = \int_{ \mathbb{R}^N} \frac{ \left( \eta_R(x) - \eta_R(y) \right)^2 }{|x-y|^{N+2s}} \, dy
\to \int_{ \mathbb{R}^N} \frac{ \left( \psi(R_0^{-1} x) - \psi ( R_0^{-1} y ) \right)^2 }{|x-y|^{N+2s}} \, dy
=: \rho_\infty (x).
\]
Next, from Lemmata \ref{Lemma:2.6} and \ref{Lemma:2.7} and the assumption $1 < p \leq p_S(N,\ell)$, it follows that
$( u \eta_R^{2/(p+1)} )_{R \geq 3R_0}$ is bounded in $L^{p+1} (\mathbb{R}^N,|x|^\ell dx)$ and $(u \eta_R)_{R \geq R_0}$ is bounded in $\dot{H}^s(\mathbb{R}^N)$. Since
\[
u(x) \eta_R(x)^{ \frac{2}{p+1} } \to u(x) \psi \left( R_0^{-1} x \right)^{\frac{2}{p+1}}, \quad
u(x) \eta_R(x) \to u(x) \psi \left( R_0^{-1} x \right)
\quad \text{for each $x \in \mathbb{R}^N$},
\] we infer that
\[
\begin{aligned}
&u \eta_R^{\frac{2}{p+1}} \rightharpoonup u \psi \left( R_0^{-1} \cdot \right)^{\frac{2}{p+1}}
& &\text{weakly in} \ L^{p+1} (\mathbb{R}^N, |x|^\ell dx),
\\
& u \eta_R \rightharpoonup u \psi \left( R_0^{-1} \cdot \right)
& &\text{weakly in} \ \dot{H}^s(\mathbb{R}^N).
\end{aligned}
\]
In particular, we deduce that $u \in \dot{H}^s(\mathbb{R}^N) \cap L^{p+1}(\mathbb{R}^N,|x|^\ell dx)$.
Since $\varphi_n u \to u$ strongly in $\dot{H}^s(\mathbb{R}^N)$, where $(\varphi_n)_n$ is the sequence appearing in Lemma \ref{Lemma:2.1}, and since $u \in L^\infty_{\rm loc}(\mathbb{R}^N)$, we may use $\varphi_n u$ as a test function in \eqref{eq:1.4}:
\[
\int_{ \mathbb{R}^N} |x|^\ell |u|^{p+1} \varphi_n \, dx
= \left\langle u , \varphi_n u \right\rangle_{\dot{H}^s(\mathbb{R}^N)}.
\] Letting $n \to \infty$, we obtain \begin{equation} \label{eq:3.1}
\int_{\mathbb R^N}|x|^\ell|u|^{p+1}\, dx=\|u\|_{\dot H^s(\mathbb R^N)}^2. \end{equation} Thus, the former assertion of Theorem \ref{Theorem:1.1}(ii) is proved.
For the latter assertion of Theorem \ref{Theorem:1.1} (ii), assume that $p = p_S(N,\ell)$ and $u$ is stable. By the same argument as above, we can apply the stability inequality \eqref{eq:1.9} with the test function $\varphi=u$: \[
p\int_{\mathbb R^N}|x|^\ell|u|^{p+1}\, dx\le\|u\|_{\dot H^s(\mathbb R^N)}^2. \] This contradicts \eqref{eq:3.1} unless $u\equiv0$. So it remains to prove the subcritical case.
Since $u\in\dot H^s(\mathbb R^N) $, notice that $\nabla U\in L^2(\mathbb R^{N+1}_+,t^{1-2s}dX)$ thanks to Remark \ref{Remark:2.1}.
Then, similarly to \eqref{eq:2.25} with $u \in L^{p+1}(\mathbb{R}^N, |x|^\ell dx)$, we claim that there exists a sequence $R_n\to\infty$ such that \begin{equation} \label{eq:3.2} \lim_{n\to\infty}R_n \left[
\int_{S^+_{R_n}}t^{1-2s}|\nabla U|^2\,dS
+\int_{S_{R_n}}|x|^\ell|u|^{p+1}\, d\omega
+\int_{S^+_{R_n}}t^{1-2s}\left|\frac{\partial U}{\partial \nu}\right|^2\,dS\right]=0. \end{equation} By \eqref{eq:2.21}, \eqref{eq:3.2} and replacing $R$ with $R_n$ for a sequence $R_n\to\infty$, we conclude that \[
\int_{\mathbb R^{N+1}_+}t^{1-2s}|\nabla U|^2\,dX
=\frac{2\kappa_s}{N-2s}\frac{N+\ell}{p+1}\int_{\mathbb{R}^N}|x|^\ell|u|^{p+1}\,dx. \] This together with \eqref{eq:2.7} yields the following Pohozaev identity \[
\frac{N+\ell}{p+1}\int_{\mathbb R^N}|x|^\ell|u|^{p+1}\, dx=\frac{N-2s}{2}\|u\|_{\dot H^s(\mathbb R^N)}^2. \] Combining this identity with \eqref{eq:3.1}, we obtain $\left( \frac{N+\ell}{p+1} - \frac{N-2s}{2} \right) \int_{\mathbb R^N}|x|^\ell|u|^{p+1}\, dx = 0$. Since $\frac{N+\ell}{p+1} > \frac{N-2s}{2}$ when $p < p_S(N,\ell)$, we observe that $u\equiv0$ for $p<p_S(N,\ell)$, and the proof of Theorem~\ref{Theorem:1.1} is completed for $ 1 < p \leq p_S(N,\ell)$.
\end{proof}
\section{The supercritical case} \label{section:4}
In this section, we essentially follow the argument in \cite{DDW}. However, due to the regularity issue of $U$ around $0$ in $\overline{ \mathbb{R}^{N+1}_+ }$, we prove the monotonicity formula (Lemma \ref{Lemma:4.2}) via the argument in \cite[section 3]{FF}, and then prove Theorem \ref{Theorem:1.1} for $p_S(N,\ell) < p$.
For $X\in \mathbb R^{N+1}_+$, we use the following notation:
\[
r := |X|, \quad \sigma := \frac{X}{|X|} \in S^+_1, \quad
\sigma_{N+1} := \frac{t}{|X|}.
\]
Let $u \in H^s_{\mathrm{loc}} (\mathbb{R}^N) \cap L^\infty_{\rm loc} (\mathbb{R}^N) \cap L^1(\mathbb{R}^N, (1+|x|)^{-N-2s}dx) $ be a solution of \eqref{eq:1.1} and $U$ be the function given in \eqref{eq:2.3}. For every $\lambda>0$, we define \begin{equation} \label{eq:4.1}
D(U;\lambda):=\lambda^{-(N-2s)}\left[\frac{1}{2}\int_{B^+_\lambda}t^{1-2s}|\nabla U|^2\,dX
-\frac{\kappa_s}{p+1}\int_{B_\lambda}|x|^\ell|u|^{p+1}\,dx\right] \end{equation} and \begin{equation} \label{eq:4.2} H(U;\lambda):=\lambda^{-(N+1-2s)}\int_{S^+_\lambda}t^{1-2s}U^2\,dS =\int_{S^+_1} \sigma_{N+1}^{1-2s}U(\lambda \sigma )^2\,dS. \end{equation}
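For clarity, we note that the second equality in \eqref{eq:4.2} is a direct change of variables: parametrizing $S^+_\lambda$ by $X = \lambda \sigma$ with $\sigma \in S^+_1$, we have $t = \lambda \sigma_{N+1}$ and the surface measure scales as $\lambda^N$, so that
\[
\int_{S^+_\lambda} t^{1-2s} U^2 \, dS
= \lambda^{N+1-2s} \int_{S^+_1} \sigma_{N+1}^{1-2s} U(\lambda \sigma)^2 \, dS,
\]
and the factor $\lambda^{-(N+1-2s)}$ in \eqref{eq:4.2} cancels this scaling.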
\begin{lemma} \label{Lemma:4.1} As a function of $\lambda$, $D,H \in C^1((0,\infty))$ and
\[
\partial_{\lambda} D(U;\lambda)
=\lambda^{-(N-2s)}\int_{S^+_\lambda}t^{1-2s} \left|\frac{\partial U}{\partial\nu} \right|^2\,dS
-\lambda^{-(N+1-2s)}\kappa_s\frac{2s+\ell}{p+1}\int_{B_\lambda}|x|^\ell|u|^{p+1}\,dx
\] and
\[
\begin{aligned}
\partial_{\lambda} H (U;\lambda)
&=2\lambda^{-(N+1-2s)}\int_{S^+_\lambda}t^{1-2s}U\frac{\partial U}{\partial \nu}\,dS
\\
&= 2\lambda^{-(N+1-2s)} \left[ \int_{B^+_\lambda}t^{1-2s}|\nabla U|^2\,dX
-\kappa_s\int_{B_\lambda}|x|^\ell|u|^{p+1}\,dx \right].
\end{aligned}
\] \end{lemma}
\begin{proof} We first remark that by \eqref{eq:2.3} and Lemma \ref{Lemma:2.3}, $U \in C( \overline{ \mathbb{R}^{N+1}_+} )$, $\nabla_x U \in C( \overline{ \mathbb{R}^{N+1}_+ } \setminus \{0\} )$ and $V := t^{1-2s} \partial_t U \in C ( \overline{ \mathbb{R}^{N+1}_+ } \setminus \{0\} )$. Hence, as a function of $\lambda$,
\[
\lambda \mapsto \int_{ B_\lambda} |x|^\ell |u|^{p+1} \, dx \in C^1( (0,\infty) ).
\] On the other hand, since
\[
\begin{aligned}
t^{1-2s} \left| \nabla U \right|^2
= t^{1-2s} \left| \nabla_x U \right|^2 + t^{2s-1} V^2
\end{aligned}
\] and $t^{1-2s} \in L^1_{\rm loc} ( \overline{ \mathbb{R}^{N+1}_+} )$, it is easy to see that
\[
\lambda \mapsto \int_{ S_\lambda^+} t^{1-2s} \left| \nabla U \right|^2 \, dS
\in C((0,\infty)), \quad
\lambda \mapsto \int_{ B_\lambda^+ } t^{1-2s} \left| \nabla U \right|^2 \, dX
\in C^1( (0,\infty) ),
\] which yields $D(U;\lambda) \in C^1((0,\infty))$.
On the other hand, for any $0< \lambda_1 < \lambda_2 < \infty$, there exists a $C_{\lambda_1,\lambda_2}>0$ such that for every $\lambda \in [\lambda_1,\lambda_2]$ and $\sigma \in S^+_1$
\[
\begin{aligned}
\sigma_{N+1}^{1-2s}\left| \partial_{\lambda} ( U(\lambda \sigma ) )^2 \right|
\leq 2 \sigma_{N+1}^{1-2s} \left| U(\lambda \sigma) \right| \left| \nabla U(\lambda \sigma ) \right|
&\leq 2 \left| U(\lambda \sigma ) \right|
\left( \sigma_{N+1}^{1-2s}
\left| \nabla_x U(\lambda \sigma ) \right| + \left| V (\lambda \sigma) \right| \right)
\\
& \leq C_{\lambda_1,\lambda_2} (1 + \sigma_{N+1}^{1-2s}).
\end{aligned}
\] Hence, the dominated convergence theorem gives $H (U;\lambda) \in C^1((0,\infty))$.
Next we compute the derivatives of $D$ and $H$. Direct computations and \eqref{eq:2.21} give
\[
\begin{aligned}
&\partial_{\lambda} D(U;\lambda)
\\
= \ & - (N-2s) \lambda^{-(N-2s)-1} \left[
\frac{1}{2} \int_{ B_\lambda^+ } t^{1-2s} | \nabla U |^2 \, dX
- \frac{\kappa_s}{p+1} \int_{ B_\lambda} |x|^\ell |u|^{p+1} \, dx
\right]
\\
& \quad + \lambda^{-(N-2s)-1} \lambda
\left[ \frac{1}{2} \int_{ S_\lambda^+} t^{1-2s} |\nabla U|^2 \, dS
- \frac{\kappa_s}{p+1} \int_{ S_\lambda} |x|^\ell |u|^{p+1} \, d\omega \right]
\\
=\ & - (N-2s) \lambda^{-(N-2s)-1} \left[
\frac{1}{2} \int_{ B_\lambda^+ } t^{1-2s} | \nabla U |^2 \, dX
- \frac{\kappa_s}{p+1} \int_{ B_\lambda} |x|^\ell |u|^{p+1} \, dx
\right]
\\
& \quad + \lambda^{-(N-2s)-1}
\left[ \frac{N-2s}{2} \int_{ B_\lambda^+} t^{1-2s} |\nabla U|^2 \, dX
- \kappa_s \frac{N+\ell}{p+1} \int_{ B_\lambda} |x|^\ell |u|^{p+1} \, dx
\right.
\\
& \hspace{3cm} \left.
+ \lambda \int_{ S_\lambda^+} t^{1-2s} \left| \frac{\partial U}{\partial \nu} \right|^2 \, dS \right]
\\
= \ & \lambda^{-(N-2s)} \int_{ S_\lambda^+} t^{1-2s} \left| \frac{\partial U}{\partial \nu} \right|^2 \, dS
- \lambda^{-(N+1-2s)}\kappa_s\frac{2s+\ell}{p+1}\int_{B_\lambda}|x|^\ell|u|^{p+1}\,dx.
\end{aligned}
\] For $H$, we compute similarly by using \eqref{eq:2.22} and
$\nabla U(X) \cdot (X/|X|) = \partial U / \partial \nu$:
\[
\begin{aligned}
\partial_{\lambda} H(U;\lambda)
&= \int_{ S^+_1}
\sigma_{N+1}^{1-2s} 2 U(\lambda \sigma) \nabla U (\lambda \sigma) \cdot \sigma \, d S
\\
&= 2 \lambda^{-N-1+2s} \int_{ S_\lambda^+}
t^{1-2s} U \frac{\partial U}{\partial \nu} \, dS
\\
&= 2 \lambda^{-N-1+2s}
\left[ \int_{ B_\lambda^+ } t^{1-2s} |\nabla U|^2 \, dX
- \kappa_s \int_{ B_\lambda} |x|^\ell |u|^{p+1} \, dx \right].
\end{aligned}
\] Hence, we complete the proof.
\end{proof}
Applying Lemma~\ref{Lemma:4.1}, we prove the following monotonicity formula (cf. \cite[Theorem 1.4]{DDW}).
\begin{lemma} \label{Lemma:4.2} For $\lambda>0$, define $E(U;\lambda)$ by \begin{equation} \label{eq:4.3} E(U;\lambda):=\lambda^{\frac{2(2s+\ell)}{p-1}} \left(D(U;\lambda)+\frac{2s+\ell}{2(p-1)}H(U;\lambda) \right). \end{equation} Then it holds that \begin{equation} \label{eq:4.4} \partial_{\lambda} E(U;\lambda) =\lambda^{\frac{2}{p-1}(s(p+1)+\ell)-N}
\int_{S^+_\lambda} t^{1-2s} \left|\frac{2s+\ell}{p-1}\frac{U}{\lambda}+\frac{\partial U}{\partial r}\right|^2\,dS. \end{equation} \end{lemma}
\begin{proof} Put \begin{equation} \label{eq:4.5} \gamma:=\frac{2(2s+\ell)}{p-1}. \end{equation} By \eqref{eq:4.3} and \eqref{eq:4.5}, we have \begin{equation} \label{eq:4.6} \begin{aligned} \partial_{\lambda} E(U;\lambda) & =\gamma \lambda^{\gamma-1}\left(D(U;\lambda)+\frac{\gamma}{4}H(U;\lambda)\right) +\lambda^{\gamma}\left( \partial_{\lambda} D(U;\lambda)+\frac{\gamma}{4} \partial_{\lambda} H(U;\lambda)\right) \\ & =\lambda^{\gamma-1} \left(\gamma D(U;\lambda)+\frac{\gamma^2}{4}H(U;\lambda)+\lambda \partial_{\lambda} D(U;\lambda) +\frac{\gamma\lambda}{4} \partial_{\lambda}H(U;\lambda) \right). \end{aligned} \end{equation} Since it follows from \eqref{eq:4.5} that \[ \bigg(\frac{1}{2}-\frac{1}{p+1}\bigg)\gamma-\frac{2s+\ell}{p+1} =\frac{p-1}{2(p+1)}\frac{2(2s+\ell)}{p-1}-\frac{2s+\ell}{p+1}=0, \] by Lemma~\ref{Lemma:4.1} and \eqref{eq:2.22}, we see that \begin{align*} & \lambda^{N-2s} \left( \gamma D(U;\lambda)+\frac{\gamma^2}{4}H(U;\lambda) +\lambda \partial_{\lambda} D(U;\lambda) +\frac{\gamma \lambda}{4} \partial_{\lambda}H(U;\lambda)\right) \\ = \ &
\gamma \left[\frac{1}{2}\int_{B^+_\lambda}t^{1-2s}|\nabla U|^2\,dX
-\frac{\kappa_s}{p+1}\int_{B_\lambda}|x|^\ell|u|^{p+1}\,dx \right] +\frac{\gamma^2}{4}\lambda^{-1}\int_{S^+_\lambda}t^{1-2s}U^2\,dS \\ &\quad
+\lambda\int_{S^+_\lambda}t^{1-2s} \left|\frac{\partial U}{\partial\nu} \right|^2\,dS
-\kappa_s\frac{2s+\ell}{p+1}\int_{B_\lambda}|x|^\ell|u|^{p+1}\,dx +\frac{\gamma}{2}\int_{S^+_\lambda}t^{1-2s}U\frac{\partial U}{\partial\nu}\,dS \\ = \ & \lambda \left[ \int_{ S_\lambda^+} t^{1-2s} \left\{ \frac{\gamma^2}{4} \left( \frac{U}{\lambda} \right)^2 + \gamma \frac{U}{\lambda} \frac{\partial U}{\partial \nu} + \left( \frac{\partial U}{\partial \nu} \right)^2
\right\} \, dS \right]
\\
&\quad +\kappa_s\bigg\{\bigg(\frac{1}{2}-\frac{1}{p+1}\bigg) \gamma-\frac{2s+\ell}{p+1}\bigg\}
\int_{B_\lambda}|x|^\ell|u|^{p+1}\,dx \\
= \ &\lambda\int_{S^+_\lambda}t^{1-2s}\bigg|\frac{\gamma}{2}\frac{U}{\lambda}+\frac{\partial U}{\partial\nu}\bigg|^2\,dS. \end{align*} This together with \eqref{eq:4.6} and $\partial U/\partial\nu=\partial U/\partial r$ on $S^+_\lambda$ implies \eqref{eq:4.4}.
\end{proof}
Similarly to \cite[Theorem~5.1]{DDW}, we prove a nonexistence result for solutions which have a special form and are stable outside $B_{R_0}^+$.
\begin{lemma} \label{Lemma:4.3} Let $R_0>0$ and $p_S(N,\ell)<p$. Suppose \eqref{eq:1.11} and that $W$ satisfies the following:
\begin{equation}\label{eq:4.7}
\left\{\begin{aligned}
& W(X) = r^{ - \frac{2s+\ell}{p-1} } \psi (\sigma) \in H^1_{\mathrm{loc}} ( \overline{ \mathbb{R}^{N+1}_+} ,t^{1-2s} dX ) ,
\\
&\psi(\omega,0):=\psi|_{ \partial S^{+}_1} \in L^{p+1} ( \partial S^{+}_1 ),
\\
& \int_{ \mathbb{R}^{N+1}_+} t^{1-2s} \nabla W \cdot \nabla \Phi \, dX
= \kappa_s \int_{ \mathbb{R}^N} |x|^\ell |W(x,0)|^{p-1} W(x,0) \Phi(x,0) \, dx
\\
&\hspace{5cm} \text{for each $\Phi \in C^1_c( \overline{ \mathbb{R}^{N+1}_+ } )$},
\\
& \kappa_s p \int_{\mathbb R^N} |x|^\ell |W(x,0)|^{p-1} \Phi(x,0)^2 \, dx
\leq \int_{ \mathbb{R}^{N+1}_+} t^{1-2s} \left| \nabla \Phi \right|^2 \, dX
\\
&\hspace{5cm} \text{for each $\Phi \in C^1_c( \overline{ \mathbb{R}^{N+1}_+ } \setminus \overline{ B^+_{R_0}} )$}.
\end{aligned}\right.
\end{equation} Then $W \equiv 0$. \end{lemma}
\begin{remark}\label{Remark:4.1}
\begin{enumerate}
\item
The function $W$ in Lemma \ref{Lemma:4.3} is not necessarily defined through the form
$W=P_s(\cdot, t) \ast u$ where $u$ is a solution of \eqref{eq:1.1}.
\item
By $p_S(N,\ell)<p$, we have
\[
\ell - \frac{p}{p-1} (2s+\ell) = - \frac{2sp + \ell }{p-1} > -N,
\quad |x|^\ell |W(x,0)|^{p-1} W(x,0) \in L^1_{\rm loc} (\mathbb{R}^N).
\]
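Indeed, since $|W(x,0)| = |x|^{-\frac{2s+\ell}{p-1}} |\psi(x/|x|,0)|$, the density $|x|^\ell |W(x,0)|^{p-1} W(x,0)$ behaves like $|x|^{-\frac{2sp+\ell}{p-1}}$ near the origin, and together with $\psi(\cdot,0) \in L^{p+1}(\partial S^+_1)$ from \eqref{eq:4.7}, the local integrability follows from $-\frac{2sp+\ell}{p-1} > -N$. The latter inequality is equivalent to $p(N-2s) > N + \ell$; assuming the standard form $p_S(N,\ell) = \frac{N+2s+2\ell}{N-2s}$ and $2s + \ell > 0$ (as \eqref{eq:1.11} should guarantee), this indeed holds for $p > p_S(N,\ell)$.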
\item
Set
\[
\begin{aligned}
H^1( S^+_1 , \sigma_{N+1}^{1-2s}dS )
&:= \overline{ C^1 ( \overline{ S^+_1 } ) }^{ \| \cdot \|_{H^1(S^+_1 , \sigma_{N+1}^{1-2s}dS)} },
\\
\| u \|_{H^1(S^+_1, \sigma_{N+1}^{1-2s}dS)}^2
&:= \int_{ S^+_1 } \sigma_{N+1}^{1-2s}
\left[ \left| \nabla_{S^+_1} u \right|^2 + u^2 \right] dS
\end{aligned}
\]
where $\nabla_{ S^+_1 }$ stands for the standard gradient on the unit sphere in $\mathbb{R}^{N+1}$.
From \cite[Lemma 2.2]{FF}, there exists a trace operator
$H^1(S_1^+,\sigma_{N+1}^{1-2s}dS) \to L^2( \partial S^+_1 )$.
\item
Since
\[
- \diver \left( t^{1-2s} \nabla W \right) = 0 \quad \text{in} \ \mathbb{R}^{N+1}_+, \quad
W \in H^1_{\mathrm{loc}} \left( \overline{ \mathbb{R}^{N+1}_+ },t^{1-2s}dX \right),
\]
elliptic regularity yields $W = r^{ - \frac{2s+\ell}{p-1} } \psi(\sigma) \in C^\infty (\mathbb{R}^{N+1}_+)$.
In addition, from $W \in H^1_{\mathrm{loc}} ( \overline{ \mathbb{R}^{N+1}_+},t^{1-2s}dX)$,
we see $\psi \in H^1(S^+_1,\sigma_{N+1}^{1-2s}dS) $. Next, for $k \geq 1$, consider
\[
W_k(X) := \max \left\{ -k , \, \min \left\{ |X|^{ \frac{2s+\ell}{p-1} } W(X), k \right\} \right\}.
\]
Then
\[
W_k \in H^1 ( \overline{ B^+_2 } \setminus \overline{ B^+_{1/2} },t^{1-2s}dX) \cap
L^\infty ( \overline{ B^+_2 } \setminus \overline{ B^+_{1/2} } ), \quad
\left| W_k(X) \right| \leq \left| X \right|^{ \frac{2s+\ell}{p-1} } \left| W(X) \right|.
\]
From this fact, we may find $( \psi_k )_k$ satisfying
\begin{equation}\label{eq:4.8}
\begin{aligned}
& \psi_k \in H^1( S^+_1,\sigma_{N+1}^{1-2s}dS) \cap L^\infty(S^+_1), \quad
\left| \psi_k(\sigma) \right| \leq \left| \psi(\sigma) \right|
\quad \text{for any $\sigma \in S^+_1$},
\\
&
\left\| \psi_k - \psi \right\|_{H^1(S^+_1,\sigma_{N+1}^{1-2s}dS)} \to 0, \quad
\psi_k (\omega,0) \to \psi(\omega,0) \quad
\text{strongly in } L^{p+1} (\partial S^+_1).
\end{aligned}
\end{equation}
\item
If $\varphi \in H^1(S^+_1,\sigma_{N+1}^{1-2s}dS) \cap L^\infty(S_1^+)$, then
we may find $(\varphi_k) \subset C^1( \overline{S^+_1} )$ such that
\begin{equation}\label{eq:4.9}
\sup_{k \geq 1} \| \varphi_k \|_{L^\infty( S^+_1 )} < \infty, \quad
\| \varphi_k - \varphi \|_{H^1( S^+_1,\sigma_{N+1}^{1-2s}dS )} \to 0.
\end{equation}
From the trace operator, we also have $\varphi_k(\omega,0) \to \varphi (\omega,0)$
in $L^2(\partial S^+_1)$.
\end{enumerate}
\end{remark}
Even though the proof of Lemma~\ref{Lemma:4.3} is similar to that of \cite[Theorem~5.1]{DDW}, we give it here for the sake of completeness. Before proving Lemma \ref{Lemma:4.3}, we recall \cite[Lemma 2.1]{FF}:
\begin{lemma}\label{Lemma:4.4}
For $v (X) = f(r) \psi(\sigma) \in C^\infty (\mathbb{R}^{N+1}_+)$,
\[
- \diver \left( t^{1-2s} \nabla v \right) =
-r^{-N} \left( r^{N+1-2s} f_r(r) \right)_r \sigma_{N+1}^{1-2s} \psi(\sigma)
- r^{-1-2s} f(r) \diver_{S^+_1} \left( \sigma_{N+1}^{1-2s} \nabla_{S^+_1} \psi \right)
\]
where $\diver_{S^+_1}$ is the standard
divergence operator on the unit sphere in $\mathbb{R}^{N+1}$.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{Lemma:4.3}] Let $W = r^{ - \frac{2s+\ell}{p-1} } \psi (\sigma) $ be as in the statement. We divide our arguments into several steps.
\noindent \textbf{Step 1:} \textsl{$\psi$ satisfies
\[
\left\{\begin{aligned}
&- \diver_{S^+_1} \left( \sigma_{N+1}^{1-2s} \nabla_{S^+_1} \psi \right)
+ \beta \sigma_{N+1}^{1-2s} \psi = 0 & &\mathrm{in} \ S^+_1,
\\
&- \lim_{ \sigma_{N+1} \to +0} \sigma_{N+1}^{1-2s} \partial_{\sigma_{N+1}} \psi
= \kappa_s |\psi |^{p-1} \psi & &\mathrm{on} \ \partial S^+_1=S_1
\end{aligned}\right.
\] where $\beta := \frac{2s+\ell}{p-1} \left( N - \frac{2sp+\ell}{p-1} \right)$, namely, for each $\varphi \in H^1( S^+_1,\sigma_{N+1}^{1-2s}dS) \cap L^\infty(S^+_1) $,
\begin{equation}\label{eq:4.10}
\int_{ S^+_1} \sigma_{N+1}^{1-2s} \nabla_{S^+_1} \psi \cdot \nabla_{S^+_1} \varphi
+ \beta \sigma_{N+1}^{1-2s} \psi \varphi \, d S
= \kappa_s \int_{ \partial S^+_1} |\psi|^{p-1} \psi \varphi
\, d \omega.
\end{equation} Furthermore, we may also choose $\varphi = \psi$ in \eqref{eq:4.10} and obtain
\begin{equation}
\label{eq:4.11}
\int_{S^+_1} \sigma_{N+1}^{1-2s}|\nabla_{S^+_1}\psi|^2
+\beta\sigma_{N+1}^{1-2s}\psi^2 \,dS
=\kappa_s\int_{ \partial S^+_1 }|\psi|^{p+1}\,d\omega.
\end{equation} }
\begin{proof} For \eqref{eq:4.10}, by \eqref{eq:4.9}, $\psi(\omega,0) \in L^{p+1}(\partial S^+_1)$ and the dominated convergence theorem, it is enough to prove it for $\varphi \in C^1( \overline{S^+_1} )$.
For $V(X) = V(r \sigma) \in C^1_c( \overline{ \mathbb{R}^{N+1}_+ } )$, notice that
\begin{equation}\label{eq:4.12}
\begin{aligned}
\nabla V(X)& = \partial_r V( r \sigma ) \sigma + r^{-1} \nabla_{S^+_1} V,
\\
\nabla W(X)& = r^{ - \frac{2s+\ell}{p-1} - 1 }
\left[ - \frac{2s+\ell}{p-1} \psi(\sigma) \sigma + \nabla_{S^+_1} \psi(\sigma) \right].
\end{aligned}
\end{equation} We see from \eqref{eq:4.7}, \eqref{eq:4.12} and $\sigma \cdot \nabla_{S^+_1} h (\sigma) = 0$ for functions $h$ on $S^+_1$ that
\begin{equation}\label{eq:4.13}
\begin{aligned}
&\kappa_s \int_{ \mathbb{R}^N} |x|^{ - \frac{2sp + \ell}{p-1} } |\psi(x/|x|,0)|^{p-1} \psi(x/|x|,0) V(x,0) \, dx
\\
= \ &
\int_{ \mathbb{R}^{N+1}_+} t^{1-2s} \nabla W \cdot \nabla V \, dX
\\
= \ &
\int_{ \mathbb{R}^{N+1}_+} t^{1-2s} r^{ - \frac{2s+\ell}{p-1} - 1 }
\left[ - \frac{2s+\ell}{p-1} \partial_rV (r \sigma) \psi(\sigma )
+ r^{-1} \nabla_{S^+_1} V \cdot \nabla_{S^+_1} \psi
\right] \, dX.
\end{aligned}
\end{equation} If we choose $V$ as $V(X) = \eta(r) \varphi(\sigma)$ where $\eta \in C^1_{c} ([0,\infty))$ and $\varphi \in C^1 ( \overline{ S^+_1} )$, then \eqref{eq:4.13} is rewritten as
\begin{equation}\label{eq:4.14}
\begin{aligned}
&\kappa_s \left( \int_0^\infty r^{ - \frac{2sp+\ell}{p-1} + N - 1 } \eta (r) \,dr \right)
\left( \int_{ \partial S^+_1} |\psi( \omega , 0 )|^{p-1} \psi (\omega, 0) \varphi (\omega,0) \, d \omega
\right)
\\
= \ & - \frac{2s+\ell}{p-1} \left( \int_0^\infty r^{ N - \frac{2sp+\ell}{p-1}}
\eta'(r) \, d r \right)
\left( \int_{ S^+_1} \sigma_{N+1}^{1-2s} \varphi(\sigma) \psi(\sigma) \, d S \right)
\\
& \quad + \left( \int_0^\infty r^{ - \frac{2sp+\ell}{p-1} + N -1 } \eta(r) \, dr \right)
\left( \int_{ S^+_1} \sigma_{N+1}^{1-2s} \nabla_{S^+_1} \varphi \cdot \nabla_{S^+_1} \psi \, d S \right).
\end{aligned}
\end{equation} Since $p_S(N,\ell) < p$ yields $ - \frac{2sp+\ell}{p-1} + N > 0$, it follows from integration by parts that
\[
-\frac{2s+\ell}{p-1} \int_0^\infty r^{N-\frac{2sp+\ell}{p-1}} \eta'(r) \, dr
= \beta \int_0^\infty r^{N-\frac{2sp+\ell}{p-1}-1} \eta(r) \, dr.
\] Thus, by choosing $\eta \geq 0$ with $\eta \not\equiv 0$, \eqref{eq:4.14} implies
\[
\kappa_s \int_{ \partial S^+_1} |\psi(\omega,0)|^{p-1} \psi(\omega,0) \varphi (\omega,0) \, d\omega
= \int_{ S^+_1}\sigma_{N+1}^{1-2s} \nabla_{S^+_1} \psi \cdot \nabla_{S^+_1} \varphi
+ \beta \sigma_{N+1}^{1-2s} \psi \varphi \, d S
\] for every $\varphi \in C^1( \overline{ S^+_1} )$. Hence, \eqref{eq:4.10} holds.
For \eqref{eq:4.11}, take $(\psi_k)_k$ satisfying \eqref{eq:4.8}. Then \eqref{eq:4.10} holds for $\varphi = \psi_k$. Thanks to $\psi(\omega,0) \in L^{p+1} (\partial S_1^+)$ and the dominated convergence theorem, letting $k \to \infty$, we obtain \eqref{eq:4.11}. \end{proof}
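For the reader's convenience, the integration by parts used above can be written out explicitly; the only ingredients are $\eta \in C^1_c([0,\infty))$ and the positivity of the exponent, which removes both boundary terms. A brief sketch:

```latex
% Set a := N - (2sp+\ell)/(p-1) > 0, so that r^a \eta(r) vanishes at
% r = 0 (since a > 0) and for large r (since \eta has compact support).
\[
  -\frac{2s+\ell}{p-1}\int_0^\infty r^{a}\,\eta'(r)\,dr
  = \frac{2s+\ell}{p-1}\,a\int_0^\infty r^{a-1}\,\eta(r)\,dr,
\]
% and the resulting constant is
\[
  \frac{2s+\ell}{p-1}\left(N-\frac{2sp+\ell}{p-1}\right)
  = \frac{2s+\ell}{p-1}\left(N-2s-\frac{2s+\ell}{p-1}\right)
  = \beta,
\]
% using (2sp+\ell)/(p-1) = 2s + (2s+\ell)/(p-1).
```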
\noindent \textbf{Step 2:} \textsl{For every $\varphi \in H^1( S^+_1,\sigma_{N+1}^{1-2s}dS) \cap L^\infty(S^+_1)$,
\begin{equation}
\label{eq:4.15}
\kappa_sp\int_{ \partial S^+_1 }|\psi|^{p-1}\varphi^2\,d \omega
\le \int_{ S^+_1 }
\sigma_{N+1}^{1-2s} |\nabla_{S^+_1}\varphi|^2\,dS
+\left(\frac{N-2s}{2}\right)^2\int_{ S^+_1} \sigma_{N+1}^{1-2s}\varphi^2\,dS.
\end{equation} }
\begin{proof} It is enough to treat the case $\varphi \in C^1( \overline{ S^+_1} )$ due to \eqref{eq:4.9} as in Step 1. We recall the stability in \eqref{eq:4.7}: for any $\phi\in C^1_c(\overline{\mathbb R^{N+1}_+}\setminus \overline{B^+_{R_0}})$, \begin{equation} \label{eq:4.16}
\kappa_sp\int_{\partial\mathbb R^{N+1}_+}|x|^\ell|W|^{p-1} \phi(x,0)^2\,dx
\le \int_{\mathbb R^{N+1}_+}t^{1-2s}|\nabla \phi|^2\,dX. \end{equation} For $0< \varepsilon \ll 1$, we choose $\tau_\varepsilon$ and a standard cutoff function $\eta_\varepsilon\in C^1_c( (0,\infty) )$ such that \begin{equation} \label{eq:4.17} \tau_\varepsilon := \frac{1}{\sqrt{-\log \varepsilon}} \to 0, \qquad \begin{aligned} &\chi_{(R_0+2 \tau_\varepsilon , \, R_0+ \varepsilon^{-1} )}(r)\le \eta_\varepsilon(r) \le \chi_{(R_0 + \tau_\varepsilon , \, R_0+2 \varepsilon^{-1} )} (r), \\ &
|\eta_\varepsilon'(r)|\le C \tau_\varepsilon^{-1} \quad \mbox{for}\quad r\in(R_0+ \tau_\varepsilon , \, R_0+ 2 \tau_\varepsilon), \\ &
|\eta_\varepsilon'(r)|\le C\varepsilon \quad \mbox{for}\quad r\in(R_0+ \varepsilon^{-1} , \, R_0+2 \varepsilon^{-1}) \end{aligned} \end{equation} where $\chi_A(r)$ is the characteristic function of $A \subset (0,\infty)$. For $\varphi\in C^1( \overline{S^+_1} )$, we put \begin{equation} \label{eq:4.18} \phi(X)=r^{-\frac{N-2s}{2}}\eta_\varepsilon(r)\varphi(\sigma)\qquad\mbox{for}\quad X = r \sigma \in\mathbb R^{N+1}_+. \end{equation} Since $W=r^{ - \frac{2s+\ell}{p-1} } \psi (\sigma)$, we have \begin{equation} \label{eq:4.19}
\int_{\partial\mathbb R^{N+1}_+}|x|^\ell|W|^{p-1}\phi^2\,dx =\left(\int_0^\infty r^{-1}\eta_\varepsilon^2\,dr\right)
\left(\int_{ \partial S^+_1 }|\psi|^{p-1}\varphi^2\,d\omega\right). \end{equation}
On the other hand, by \eqref{eq:4.18}, we see that \begin{equation} \label{eq:4.20} \begin{aligned}
|\nabla\phi (X)|^2 & = \left( \left(r^{-\frac{N-2s}{2}}\eta_\varepsilon \right)' \right)^2\varphi^2
+r^{-2}\left(r^{-\frac{N-2s}{2}}\eta_\varepsilon\right)^2 |\nabla_{S^+_1}\varphi|^2 \\ &
=\left[ \left(\frac{N-2s}{2} \right)^2\varphi^2+ | \nabla_{S^+_1}\varphi |^2\right] r^{-2-(N-2s)}\eta_\varepsilon^2 \\ &\qquad +r^{-(N-2s)}(\eta_\varepsilon')^2\varphi^2-(N-2s)r^{-1-(N-2s)}\eta_\varepsilon\eta_\varepsilon'\varphi^2. \end{aligned} \end{equation} Since it follows from \eqref{eq:4.17} that \begin{equation*}
\begin{aligned}
&\int_0^\infty r^{N+1-2s}r^{-(N-2s)}(\eta_\varepsilon')^2\,dr
\leq
C \tau_\varepsilon^{-2} \int_{R_0+\tau_\varepsilon}^{R_0+2\tau_\varepsilon} r \, dr
+C\varepsilon^2\int_{R_0 + 1/\varepsilon}^{R_0 + 2/\varepsilon} r\,dr
\leq C \left( \tau_\varepsilon^{-1} + 1 \right),
\\
&
\left| \int_0^\infty r^{N+1-2s}r^{-1-(N-2s)}\eta_\varepsilon\eta_\varepsilon'\,dr \right|
\leq C \tau_\varepsilon^{-1} \int_{R_0 + \tau_\varepsilon }^{R_0 + 2 \tau_\varepsilon } \,dr
+C\varepsilon\int_{R_0 + 1/\varepsilon}^{R_0 + 2 / \varepsilon} \,dr
\le C,
\end{aligned} \end{equation*} by \eqref{eq:4.20} we have \begin{equation} \label{eq:4.21} \begin{aligned} &
\int_{\mathbb R^{N+1}_+}t^{1-2s}|\nabla \phi|^2\,dX \\ = \ & \int_0^\infty\, dr\, \int_{ S^+_1 } r^N ( r \sigma_{N+1})^{1-2s} \left\{ \left[ \left(\frac{N-2s}{2} \right)^2\varphi^2
+|\nabla_{S^+_1}\varphi|^2\right]r^{-2-(N-2s)}\eta_\varepsilon^2 \right. \\ &\hspace{4cm} + r^{-(N-2s)}(\eta_\varepsilon')^2\varphi^2-(N-2s)r^{-1-(N-2s)}\eta_\varepsilon\eta_\varepsilon'\varphi^2\Bigg\} \,dS \\ \le \ & \left(\int_0^\infty r^{-1}\eta_\varepsilon^2\,dr\right) \left(\int_{S^+_1} \sigma_{N+1}^{1-2s}
\left[ \left( \frac{N-2s}{2} \right)^2\varphi^2+|\nabla_{S^+_1}\varphi|^2\right]\,dS\right) +C \left( \tau_\varepsilon^{-1} + 1 \right). \end{aligned} \end{equation} Finally, remark that
\[
\int_0^\infty r^{-1} \eta_\varepsilon^2 \, dr
\geq \int_{R_0 + 2 \tau_\varepsilon}^{R_0+\varepsilon^{-1}} r^{-1} \, dr
= \log \left( R_0 + \varepsilon^{-1} \right) - \log \left( R_0 + 2 \tau_\varepsilon \right)
\geq \frac{\tau_\varepsilon^{-2}}{2}
\] holds for sufficiently small $\varepsilon$. Therefore, substituting \eqref{eq:4.19} and \eqref{eq:4.21} into \eqref{eq:4.16}, dividing by $\int_0^\infty r^{-1} \eta_\varepsilon^2 \, dr $ and taking $\varepsilon\to0$, we obtain \eqref{eq:4.15}. \end{proof}
\noindent \textbf{Step 3:} \textsl{For $\alpha\in [0 , \frac{N-2s}{2} )$, set
\[
v_\alpha(x) := |x|^{ - \left( \frac{N-2s}{2} - \alpha \right) }, \quad
V_\alpha (X) := ( P_s (\cdot, t) \ast v_\alpha ) (x).
\] Then for each $X \in \mathbb{R}^{N+1}_+$ and $\lambda > 0$, \begin{equation} \label{eq:4.22} V_\alpha(\lambda X)=\lambda^{-\frac{N-2s}{2}+\alpha}V_\alpha(X) \end{equation} and $\phi_{\alpha}(\sigma) := V_\alpha ( \sigma) = V_\alpha (r^{-1} X) \in C( \overline{ S^+_1 } ) \cap C^1( S^+_1 ) \cap H^1( S^+_1,\sigma_{N+1}^{1-2s}dS) $. Moreover, $\phi_{\alpha} > 0$ in $\overline{S^+_1}$, and for any $\varphi \in H^1(S^+_1,\sigma_{N+1}^{1-2s}dS) \cap L^\infty(S^+_1)$, \begin{equation} \label{eq:4.23}
\begin{aligned}
&
\int_{S_1^+} \sigma_{N+1}^{1-2s} |\nabla_{S^+_1}\varphi |^2\,dS
+ \left[ \left(\frac{N-2s}{2}\right)^2-\alpha^2\right]
\int_{S^+_1} \sigma_{N+1}^{1-2s}\varphi^2\,dS
\\
= \ &
\kappa_s \lambda(\alpha) \int_{\partial S^+_1} \varphi^2\, d\omega
+\int_{S^+_1} \sigma_{N+1}^{1-2s}\phi_\alpha^2
\left|\nabla_{S^+_1} \left( \frac{\varphi}{\phi_\alpha} \right) \right|^2\,dS
\end{aligned} \end{equation} and
\begin{equation}
\label{eq:4.24}
0 \leq \alpha_1 \leq \alpha_2 < \frac{N-2s}{2} \quad \Rightarrow\quad
\phi_{\alpha_1} \le \phi_{\alpha_2} \qquad \mathrm{in} \quad S^+_1.
\end{equation} }
\begin{proof} By direct computation, we may check \eqref{eq:4.22}. For the assertion $\phi_{\alpha} \in C( \overline{ S^+_1 } ) \cap C^1( S^+_1 ) \cap H^1( S^+_1,\sigma_{N+1}^{1-2s}dS)$, we first remark that $V_\alpha(X)
= (P_s(\cdot,t) \ast v_\alpha )(x)
\in C^\infty(\mathbb{R}^{N+1}_+)$. Since $\phi_\alpha=V_\alpha|_{ S^+_1}$, it suffices to prove
\begin{equation}\label{eq:4.25}
V_\alpha, \ t^{1-2s} \partial_tV_\alpha, \ \nabla_x V_\alpha \in C
\left( \overline{ N^+_{1/4} (\partial S^+_1)} \right), \quad
N^+_r(A) := \Set{ X \in \mathbb{R}^{N+1}_+ | \, \dist \left( X, A \right) < r }.
\end{equation} To this end, decompose $v_\alpha = v_{\alpha,1} + v_{\alpha,2}$ where $ \supp v_{\alpha,1} \subset B_{2} \setminus B_{1/4}$ and $v_{\alpha,1} \equiv v_\alpha$ on $B_{3/2} \setminus B_{1/2}$, and set $V_{\alpha,i} (X) := (P_s(\cdot, t) \ast v_{\alpha,i})(x)$. Since $v_{\alpha,1} \in C^\infty_c(\mathbb{R}^N)$ and $N_{1/4} (\partial S^+_1 ) \cap \supp v_{\alpha,2} = \emptyset$ where
$N_r(A) := \set{x \in \mathbb{R}^N |\, \dist (x,A) < r }$, it is not difficult to show \eqref{eq:4.25} and $\phi_{\alpha} \in C( \overline{ S^+_1 } ) \cap C^1( S^+_1 ) \cap H^1( S^+_1,\sigma_{N+1}^{1-2s}dS)$.
The assertion $\phi_{\alpha} > 0$ in $\overline{ S^+_1}$ follows from $v_\alpha > 0$ in $\mathbb{R}^N \setminus \{0\}$ and the definition of $V_\alpha$.
For \eqref{eq:4.23} and \eqref{eq:4.24}, remark that
in \cite[Lemma~4.1]{Fall}, it is proved that $(-\Delta)^s v_\alpha = \lambda(\alpha) |x|^{-2s} v_\alpha$ in $\mathbb{R}^N$, where $\lambda(\alpha)$ appears in \eqref{eq:1.7}. Hence, by the properties of $P_s(x,t)$ and $v_\alpha$, we may check that for each $\varphi \in C^\infty_c(\mathbb{R}^N \setminus \set{0} )$,
\begin{equation}\label{eq:4.26}
- \lim_{t \to +0} \int_{ \mathbb{R}^N} t^{1-2s} \partial_t V_\alpha (X) \varphi (x) \, d x
= \kappa_s \int_{ \mathbb{R}^N} v_\alpha (-\Delta)^s \varphi \, d x
= \kappa_s \int_{ \mathbb{R}^N} \lambda (\alpha) |x|^{-2s} v_\alpha \varphi \, dx.
\end{equation}
For any fixed $\omega \in \partial S^+_1$, we consider a curve
\[
\gamma_{\omega} (\tau ) :=
\begin{pmatrix}
\sqrt{1-\tau^{2}} \, \omega \\ \tau
\end{pmatrix}
\in S^+_1.
\] Then $\phi_\alpha ( \gamma_{\omega} (\tau) ) = V_\alpha (\gamma_{\omega} (\tau))$ and
\[
\frac{d}{d \tau} V_\alpha ( \gamma_{\omega} (\tau ) )
= \nabla V_\alpha ( \gamma_{\omega} (\tau) ) \cdot
\begin{pmatrix}
- \frac{\tau}{\sqrt{1-\tau^2}} \omega \\ 1
\end{pmatrix}
= - \frac{\tau}{\sqrt{1-\tau^2}} \nabla_x V_\alpha( \gamma_{\omega} (\tau) ) \cdot \omega
+ \partial_t V_\alpha( \gamma_{\omega} (\tau) ).
\] Combining this fact with \eqref{eq:4.25}, $v_\alpha(1) = 1$ and \eqref{eq:4.26}, we deduce that
\begin{equation}\label{eq:4.27}
-\lim_{\sigma_{N+1}\to0} \sigma_{N+1}^{1-2s} \partial_{\sigma_{N+1}} \phi_\alpha (\sigma)
=
\kappa_s\lambda(\alpha)\qquad\mbox{on}\quad \partial S^+_1.
\end{equation} Due to \eqref{eq:4.22}, we notice that \[ V_\alpha(X)=r^{-\frac{N-2s}{2}+\alpha}\phi_\alpha(\sigma). \] Furthermore, since $0=- \diver ( t^{1-2s} \nabla V_\alpha)$ in $\mathbb{R}^{N+1}_+$ and $V_\alpha=v_\alpha$ on $\partial\mathbb R^{N+1}_+\setminus\{0\}$, we have $\phi_\alpha=v_\alpha=1$ on $\partial S^+_1$, and by Lemma \ref{Lemma:4.4} with $f(r) = r^{ - \frac{N-2s}{2} + \alpha }$ and $\psi (\sigma) = \phi_{\alpha}(\sigma)$, $\phi_{\alpha}$ is a solution of \begin{equation} \label{eq:4.28} \left\{\begin{aligned}
- \diver_{S^+_1} \left( \sigma_{N+1}^{1-2s} \nabla_{S^+_1} \phi_\alpha \right) + \left[ \left( \frac{N-2s}{2}\right)^2-\alpha^2\right] \sigma_{N+1}^{1-2s}\phi_\alpha &= 0 & &\mbox{in}\quad S^+_1, \\ \phi_\alpha&=1 & & \mbox{on}\quad \partial S^+_1. \end{aligned} \right. \end{equation}
Now we prove \eqref{eq:4.23}. Since $\phi_{\alpha} \in C( \overline{ S^+_1} )$ and $\phi_{\alpha} > 0$ in $ \overline{ S^+_1}$, for every $\varphi \in H^1(S^+_1,\sigma_{N+1}^{1-2s}dS) \cap L^\infty (S^+_1)$ and $(\varphi_k)_k$ with \eqref{eq:4.9}, it follows that
\[
\frac{\varphi_k}{\phi_{\alpha}} \to \frac{\varphi}{\phi_{\alpha}} \quad
\text{strongly in} \ H^1(S^+_1,\sigma_{N+1}^{1-2s}dS), \quad
\frac{\varphi}{\phi_{\alpha}} \in H^1(S^+_1,\sigma_{N+1}^{1-2s}dS).
\] Hence, it suffices to show \eqref{eq:4.23} for $\varphi \in C^1( \overline{ S^+_1 } )$. For $\varphi\in C^1( \overline{S^+_1} )$, notice that
\[
\begin{aligned}
\nabla_{S^+_1} \phi_\alpha \cdot \nabla_{S^+_1}
\left( \frac{\varphi^2}{\phi_\alpha} \right)
&= \nabla_{S^+_1} \phi_{\alpha} \cdot
\left[ \frac{2 \varphi \nabla_{S^+_1} \varphi}{\phi_{\alpha}}
- \frac{\varphi^2 \nabla_{S^+_1} \phi_\alpha}{\phi_{\alpha}^2} \right]
= |\nabla_{S^+_1} \varphi |^2
- \left|\nabla_{S^+_1} \left(\frac{\varphi}{\phi_\alpha} \right) \right|^2
\phi_\alpha^2.
\end{aligned}
\] Thus, multiplying \eqref{eq:4.28} by $\varphi^2/\phi_\alpha$, we see from \eqref{eq:4.27} that \eqref{eq:4.23} holds.
Finally, for $0 \leq \alpha_1 \leq \alpha_2 < \frac{N-2s}{2}$, we infer from $0< \phi_{\alpha}$ that
\[
\begin{aligned}
- \diver_{S^+_1} \left( \sigma_{N+1}^{1-2s} \nabla_{S^+_1} \phi_{\alpha_1} \right)
&=
- \left[ \left( \frac{N-2s}{2} \right)^2 - \alpha_1^2 \right] \sigma_{N+1}^{1-2s} \phi_{\alpha_1}
\\
&\leq - \left[ \left(\frac{N-2s}{2}\right)^2 - \alpha_2^2 \right] \sigma_{N+1}^{1-2s}\phi_{\alpha_1}
\quad\mbox{on}\quad S^+_1,
\end{aligned}
\] which yields
\[
- \diver_{S^+_1}
\left( \sigma_{N+1}^{1-2s} \nabla_{ S^+_1 } \left( \phi_{\alpha_2} - \phi_{\alpha_1} \right) \right)
+ \left[ \left(\frac{N-2s}{2}\right)^2 - \alpha_2^2 \right] \sigma_{N+1}^{1-2s}
\left( \phi_{\alpha_2} - \phi_{\alpha_1} \right) \geq 0.
\] Multiplying this inequality by $( \phi_{\alpha_2} - \phi_{\alpha_1} )_- :=\max\{0,-(\phi_{\alpha_2} - \phi_{\alpha_1})\}\in H^1( S^+_1,\sigma_{N+1}^{1-2s}dS)$ and integrating it over $S^+_1$, by $\phi_{\alpha_1} = 1 = \phi_{\alpha_2}$ on $\partial S^+_1$ and $\left( \frac{N-2s}{2} \right)^2 - \alpha^2_2 > 0$, we deduce that $ ( \phi_{\alpha_2} - \phi_{\alpha_1} )_- \equiv 0$, hence, \eqref{eq:4.24} holds. \end{proof}
\noindent \textbf{Step 4: } \textsl{Conclusion}
Now we are ready to prove the assertion of Lemma \ref{Lemma:4.3}. Since $p_S(N,\ell)<p$ and $\ell>-2s$ thanks to \eqref{eq:1.2}, we may set
\begin{equation*}
\widetilde{\alpha}=\frac{N-2s}{2}-\frac{2s+\ell}{p-1} \in \left( 0 , \frac{N-2s}{2} \right).
\end{equation*} By this choice of $\widetilde{\alpha}$, we see that \begin{equation} \label{eq:4.29} \left(\frac{N-2s}{2}\right)^2-\widetilde{\alpha}^2=\frac{2s+\ell}{p-1}\left(N-2s-\frac{2s+\ell}{p-1}\right)=\beta \end{equation} where $\beta$ appears in Step 1. Let $(\psi_k)_k$ be the functions in \eqref{eq:4.8} and notice that $\phi_{0} / \phi_{\widetilde{\alpha}} \in C( \overline{ S^+_1 } ) \cap H^1(S^+_1,\sigma_{N+1}^{1-2s}dS)$, $\phi_0=\phi_{\widetilde{\alpha}}=1$ on $\partial S^+_1$ and $\psi_k \phi_0 / \phi_{\widetilde{\alpha}} \in H^1( S^+_1,\sigma_{N+1}^{1-2s}dS) \cap L^\infty(S^+_1) $. Hence, \eqref{eq:4.15} and \eqref{eq:4.23} with $\varphi = \psi_k \phi_0 / \phi_{\widetilde{\alpha}}$ and $\alpha = 0$ give
\begin{equation*}
\begin{aligned}
&
\kappa_s p\int_{\partial S^+_1} |\psi|^{p-1} \psi_k^2 \,d \omega
\\
\leq \ &
\int_{S^+_1}
\sigma_{N+1}^{1-2s}
\left| \nabla_{S^+_1} \left( \frac{\psi_k \phi_0}{\phi_{\widetilde{\alpha}}} \right) \right|^2
\, dS
+ \left( \frac{N-2s}{2} \right)^2
\int_{S^+_1} \sigma_{N+1}^{1-2s} \left( \frac{\psi_k \phi_0}{\phi_{\widetilde{\alpha}}} \right)^2\,dS
\\
= \ & \kappa_s \lambda(0) \int_{ \partial S^+_1} \psi_k^2 \, d \omega
+ \int_{ S^+_1} \sigma_{N+1}^{1-2s} \phi_{0}^2
\left| \nabla_{ S^+_1 } \left( \frac{\psi_k}{\phi_{\widetilde{\alpha}}} \right) \right|^2 \,dS. \end{aligned} \end{equation*} This together with \eqref{eq:4.24} implies \begin{equation} \label{eq:4.30}
\kappa_s p \int_{\partial S^+_1} |\psi|^{p-1} \psi_k^2 \,d \omega \le \kappa_s \lambda(0) \int_{\partial S^+_1} \psi^2_k\,d \omega +\int_{ S^+_1 } \sigma_{N+1}^{1-2s} \phi_{\widetilde{\alpha}}^2
\left|\nabla_{ S^+_1} \left(\frac{\psi_k }{\phi_{\widetilde{\alpha}}} \right) \right|^2\,dS. \end{equation} Substituting \eqref{eq:4.23} with $\varphi = \psi_k$ and $\alpha = \widetilde{\alpha}$ into \eqref{eq:4.30}, we observe from \eqref{eq:4.29} that \begin{equation}\label{eq:4.31} \begin{aligned} &
\kappa_sp\int_{\partial S^+_1} |\psi|^{p-1} \psi_k^2 \,d \omega \\ \le \ & \kappa_s \lambda(0) \int_{\partial S^+_1} \psi^2_k \,d \omega
+ \int_{ S^+_1 } \sigma_{N+1}^{1-2s}|\nabla_{ S^+_1 }\psi_k|^2\,dS
\\
&\hspace{1.5cm}
+\left[ \left( \frac{N-2s}{2}\right)^2-\widetilde{\alpha}^2\right]
\int_{S^+_1} \sigma_{N+1}^{1-2s}\psi^2_k \,dS
-\kappa_s\lambda( \widetilde{\alpha} )\int_{\partial S^+_1}\psi^2_k \,d \omega \\ = \ & \kappa_s \left( \lambda(0) - \lambda(\widetilde{\alpha}) \right) \int_{\partial S^+_1} \psi^2_k \,d \omega
+ \int_{S^+_1} \sigma_{N+1}^{1-2s}|\nabla_{S^+_1}\psi_k |^2\,dS +\beta\int_{ S^+_1 } \sigma_{N+1}^{1-2s}\psi_k^2\,dS. \end{aligned} \end{equation}
On the other hand, by \eqref{eq:4.23} with $\varphi = \psi_k$ and $\alpha = \widetilde{\alpha}$, we have
\[
\begin{aligned}
\int_{ S^+_1} \sigma_{N+1}^{1-2s} | \nabla_{ S^+_1 } \psi_k |^2 \, d S
+ \beta \int_{S^+_1} \sigma_{N+1}^{1-2s} \psi_k^2 \, dS
\geq
\kappa_s \lambda(\widetilde{\alpha})
\int_{ \partial S^+_1} \psi_k^2 \, d \omega.
\end{aligned}
\] From \eqref{eq:4.31} and the fact $\lambda (0) > \lambda(\widetilde{\alpha})$ due to \eqref{eq:1.8}, it follows that
\begin{equation*}
\kappa_s p \int_{ \partial S^+_1} | \psi |^{p-1} \psi_k^2 \, d \omega
\leq \frac{\lambda (0)}{\lambda(\widetilde{\alpha})}
\left\{ \int_{ S^+_1} \sigma_{N+1}^{1-2s} | \nabla_{ S^+_1 } \psi_k |^2 \, d S
+ \beta \int_{S^+_1} \sigma_{N+1}^{1-2s} \psi_k^2 \, dS\right\}.
\end{equation*} Letting $k \to \infty$ and noting \eqref{eq:4.8} and \eqref{eq:4.11}, we obtain
\[
\kappa_s p \int_{ \partial S^+_1} | \psi |^{p+1} \, d \omega
\leq \frac{\lambda (0)}{\lambda(\widetilde{\alpha})}
\kappa_s \int_{ \partial S^+_1} |\psi|^{p+1} \, d \omega.
\] Thus, we obtain $\lambda(\widetilde{\alpha})p\le \lambda(0)$ unless $\psi\equiv 0$. Therefore, if \eqref{eq:1.11} holds, namely $\lambda(\widetilde{\alpha})p> \lambda(0)$, then $\psi\equiv0$, and by $W=r^{ -\frac{2s+\ell}{p-1} } \psi$, we have $W\equiv 0$. Hence, Lemma~\ref{Lemma:4.3} follows.
\end{proof}
Now we are ready to prove Theorem~\ref{Theorem:1.1} for $p_S(N,\ell)<p$. Following \cite{DDW}, we use a blow-down analysis.
\begin{proof}[Proof of Theorem~\ref{Theorem:1.1} for $p_S(N,\ell)<p$]
Assume \eqref{eq:1.2} and \eqref{eq:1.11}. Let $u \in H^s_{\mathrm{loc}} (\mathbb{R}^N) \cap L^\infty_{\rm loc} (\mathbb{R}^N) \cap L^2(\mathbb{R}^N,(1+|x|)^{-N-2s}dx) $ be a solution of \eqref{eq:1.1} which is stable outside $B_{R_0}$ and let $U$ be the function given in \eqref{eq:2.3}. Recall $D$, $H$ and $E$ in \eqref{eq:4.1}, \eqref{eq:4.2} and \eqref{eq:4.3}, respectively. Then, by Lemma~\ref{Lemma:2.9} we see that \begin{equation} \label{eq:4.32} \lambda^{\frac{2(2s+\ell)}{p-1}}D(U;\lambda)
\le C\lambda^{\frac{2}{p-1}(s(p+1)+\ell)-N}\left(\int_{B^+_\lambda}t^{1-2s}|\nabla U|^2\, dX
+\int_{B_\lambda}|x|^\ell|u|^{p+1}\,dx\right) \le C \end{equation} for $\lambda\ge 3 R_0$. Since $E$ is nondecreasing with respect to $\lambda$ due to Lemma \ref{Lemma:4.2}, by \eqref{eq:4.32} and Lemma \ref{Lemma:2.8}, for $\lambda \geq 3R_0$, we have \begin{equation*} \begin{aligned} E(U;\lambda) & \le \lambda^{-1} \int_\lambda^{2\lambda}E(U;\xi)\,d\xi \\ & = \lambda^{-1} \int_\lambda^{2\lambda} \xi^{\frac{2(2s+\ell)}{p-1}} \left[D(U;\xi)+\frac{2s+\ell}{2(p-1)}H(U;\xi) \right]\,d\xi \\ &\leq C + C \lambda^{-1} \int_{ \lambda}^{ 2\lambda} d \xi \, \xi^{ \frac{2(2s+\ell)}{p-1} - N - 1 + 2s } \int_{ S_\xi^+} t^{1-2s} U^2 \, dS \\ & \le C+C\lambda^{ \frac{2}{p-1}(2s+\ell) + 2s -N-2 }\int_{\lambda}^{2\lambda} d\xi \int_{S^+_\xi}t^{1-2s}U^2\,dS \\ & \le C+C\lambda^{ \frac{2}{p-1}(2s+\ell) + 2s -N-2 }\int_{B_{2\lambda}^+ }t^{1-2s}U^2\,dX \\ & \le C+C\lambda^{ \frac{2}{p-1}(2s+\ell) + 2s -N-2 } \lambda^{N + 2 (1-s) - \frac{2(2s+\ell)}{p-1} } \le C. \end{aligned} \end{equation*} This implies that \begin{equation} \label{eq:4.33} \lim_{\lambda\to\infty}E(U;\lambda)<+\infty. \end{equation}
On the other hand, for $X\in\mathbb R^{N+1}_+$, let \[ V_\lambda (X):=\lambda^{\frac{2s+\ell}{p-1}}U(\lambda X). \] Then it is easy to check that
\begin{equation}\label{eq:4.34}
\begin{aligned}
&V_\lambda(X) =
\left( P_s(\cdot, t) \ast \left( \lambda^{ \frac{2s+\ell}{p-1} } u \left( \lambda \cdot \right) \right) \right) (x),
\\
&
- \lim_{t \to +0} t^{1-2s} \partial_t V_\lambda(x,t) = \kappa_s |x|^\ell \left| V_\lambda(x,0) \right|^{p-1}
V_\lambda (x,0),
\\
&\lambda^{\frac{2(2s+\ell)}{p-1}}D(U;\lambda R)=D(V_\lambda ;R),
\,\,\, \lambda^{ \frac{2(2s+\ell)}{p-1} } H(U;\lambda R) = H(V_\lambda; R), \,\,\,
E(U;\lambda R)=E(V_\lambda;R)
\end{aligned}
\end{equation} for each $\lambda \geq 3R_0$. Since $u$ is stable outside $B_{R_0}$, as in the proof of Lemma \ref{Lemma:2.9} (see \eqref{eq:2.65}), by \eqref{eq:2.8}, $U$ is stable outside $B_{R_0}^+$. Therefore, for every $\psi \in C^1_c( \overline{ \mathbb{R}^{N+1}_+} \setminus \overline{ B^+_{\lambda^{-1} R_0 } } )$,
\begin{equation}\label{eq:4.35}
\begin{aligned}
p \kappa_s \int_{ \mathbb{R}^N} |x|^\ell | V_\lambda(x,0) |^{p-1} \psi(x,0)^2 \, dx
&= p \kappa_s \lambda^{2s-N} \int_{ \mathbb{R}^N} |x|^\ell |u(x)|^{p-1} \psi( \lambda^{-1} x,0 )^2 \, dx
\\
&\leq \lambda^{2s-N} \int_{ \mathbb{R}^{N+1}_+} t^{1-2s} \left| \nabla ( \psi(\lambda^{-1} X) ) \right|^2 \, dX
\\
&= \int_{ \mathbb{R}^{N+1}_+} t^{1-2s} \left| \nabla \psi \right|^2 \, dX,
\end{aligned}
\end{equation} which implies that $V_\lambda$ is stable outside $B^+_{\lambda^{-1} R_0 }$. Furthermore, by \eqref{eq:2.52} and \eqref{eq:2.60}, $(V_\lambda)_{\lambda \geq 3 R_0}$ is bounded in $H^1_{\mathrm{loc}}(\overline{ \mathbb{R}^{N+1}_+},t^{1-2s}dX)$ and
$(V_\lambda(x, 0))_{\lambda \geq 3R_0}$ is bounded in $L^{p+1}_{\rm loc} (\mathbb{R}^N,|x|^\ell dx)$.
Now let $(\lambda_i)_{i=1}^\infty$ satisfy $\lambda_i \to \infty$ and $V_{\lambda_i} \rightharpoonup U_\infty$ weakly in $H^1_{\mathrm{loc}} ( \overline{\mathbb{R}^{N+1}_+},t^{1-2s}dX)$. Thanks to the above fact, without loss of generality, we may also assume that
\begin{equation}\label{eq:4.36}
U_\infty(x,0) \in L^{p+1}_{\rm loc} ( \mathbb{R}^N,|x|^\ell dx).
\end{equation} We claim that
\begin{align}
&V_{\lambda_i} (x,0) \to U_\infty(x,0) & &\text{strongly in } L^q_{\rm loc} (\mathbb{R}^N)
\quad \text{for $1 \leq q < \frac{2N}{N-2s}$},
\label{align:4.37}
\\
&V_{\lambda_i}(X) \to U_\infty(X) & &\text{strongly in } L^2_{\rm loc} ( \overline{ \mathbb{R}^{N+1}_+},t^{1-2s}dX).
\label{align:4.38}
\end{align} Due to the boundedness of the trace operator from $H^1( B_R \times (0,R),t^{1-2s}dX)$ to $H^s(B_R)$ for each $R$ (see \cite{DD-12}), we also have $V_{\lambda_i} (x,0) \rightharpoonup U_\infty (x,0)$ weakly in $H^s(B_R)$. By the compactness of the embedding $H^s(B_R) \subset L^q(B_R)$ where $1 \leq q < 2N/(N-2s)$, we get \eqref{align:4.37}. For \eqref{align:4.38}, since $H^1( B_R \times (R^{-1},R), t^{1-2s}dX) = H^1( B_R \times (R^{-1},R) )$, we first remark that \begin{equation} \label{eq:4.39} V_{\lambda_i} \to U_\infty \qquad\mbox{strongly in $L^2_{\rm loc} (\mathbb{R}^{N+1}_+,t^{1-2s}dX)$}. \end{equation} Around $t = 0$, we notice that for $\psi \in C^1( \overline{ \mathbb{R}^{N+1}_+})$,
\[
\begin{aligned}
\left| \psi(x,t) \right|
&\leq
\left| \psi (x,0) \right|
+ \int_0^t \tau^{\frac{2s-1}{2}} \tau^{\frac{1-2s}{2}} \left| \partial_t \psi (x,\tau) \right| \, d \tau
\\
&\leq |\psi(x,0)| + \left[ \frac{1}{2s} t^{2s} \right]^{1/2}
\left( \int_0^t \tau^{1-2s} \left| \partial_t \psi (x, \tau ) \right|^2 \, d \tau \right)^{1/2}.
\end{aligned}
\] Therefore,
\begin{equation}\label{eq:4.40}
\begin{aligned}
&
\int_{B_R \times (0,T)} t^{1-2s} \left| \psi (X) \right|^2 \, dX
\\
&\leq 2 \int_{B_R \times (0,T)} t^{1-2s} \left[ |\psi(x,0)|^2
+ \frac{t^{2s}}{s} \int_0^t \tau^{1-2s} |\partial_t \psi (x,\tau) |^2 \, d \tau \right] dX
\\
& \leq C_s\left[ T^{2-2s} \| \psi(\cdot, 0) \|_{L^2(B_R)}^2
+T^2 \int_{B_R\times(0,T)} t^{1-2s} |\partial_t \psi (X)|^2 \, dX \right].
\end{aligned}
\end{equation} By a density argument, \eqref{eq:4.40} holds for every $\psi \in H^1_{\mathrm{loc}} (\overline{\mathbb{R}^{N+1}_+},t^{1-2s}dX)$. Thus, by \eqref{eq:4.39},
\[
\begin{aligned}
&
\limsup_{i \to \infty} \int_{ B_R \times (0,R) } t^{1-2s} | V_{\lambda_i} - U_\infty |^2 \, dX
\\
= \ & \limsup_{i \to \infty} \left( \int_{B_R \times (0,T)} + \int_{B_R \times (T,R)} \right)
t^{1-2s} | V_{\lambda_i} - U_\infty |^2 \, dX
\leq C (T^{2-2s} + T^2 ).
\end{aligned}
\] Since $T \in (0,R)$ is arbitrary and $C$ is independent of $T$, \eqref{align:4.38} holds.
Next, we shall prove that $U_\infty$ satisfies \eqref{eq:4.7}. For the third property in \eqref{eq:4.7}, we observe from \eqref{eq:2.18} and \eqref{eq:4.34} that for each $\varphi \in C^1_c( \overline{ \mathbb{R}^{N+1}_+} )$,
\[
\int_{ \mathbb{R}^{N+1}_+} t^{1-2s} \nabla V_{\lambda_i} \cdot \nabla \varphi \, dX
= \kappa_s \int_{ \mathbb{R}^N} |x|^\ell |V_{\lambda_i} (x,0)|^{p-1} V_{\lambda_i} (x,0) \varphi (x,0) \, dx.
\] By \eqref{align:4.37}, we may also suppose that $V_{\lambda_i} (x,0) \to U_\infty(x,0)$ for a.a. $x \in \mathbb{R}^N$.
Since $(V_{\lambda_i}(x,0))_i$ is bounded in $L^{p+1}_{\rm loc} (\mathbb{R}^N,|x|^\ell dx)$, a variant of Strauss' lemma (see Strauss \cite[Compactness Lemma 2]{Str-77} and Berestycki and Lions \cite[Theorem A.I]{BL}) and the fact $V_{\lambda_i} \rightharpoonup U_\infty$ weakly in $H^1_{\mathrm{loc}}( \overline{ \mathbb{R}^{N+1}_+}, t^{1-2s}dX)$ give
\[
\int_{ \mathbb{R}^{N+1}_+} t^{1-2s} \nabla U_\infty \cdot \nabla \varphi \, dX
= \kappa_s \int_{ \mathbb{R}^N} |x|^\ell |U_{\infty} (x,0)|^{p-1} U_{\infty} (x,0) \varphi (x,0) \, dx.
\] In a similar way, by \eqref{eq:4.35}, we also observe that $U_\infty$ is stable outside $B_\varepsilon^+$ for any $\varepsilon > 0$, that is, for each $\psi \in C^1_c( \overline{ \mathbb{R}^{N+1}_+} \setminus \overline{ B^+_{\varepsilon}} )$,
\[
\kappa_s p \int_{ \mathbb{R}^N} |x|^\ell |U_\infty(x,0)|^{p-1} \psi(x,0)^2 \, dx
\leq \int_{ \mathbb{R}^{N+1}_+} t^{1-2s} |\nabla \psi|^2 \, dX.
\]
Finally, we prove $U_\infty(X) = r^{- \frac{2s+\ell}{p-1}} U_\infty( r^{-1} X )$. If this is true, then \eqref{eq:4.36} gives $\psi(\omega, 0) = U_\infty(x/r,0) \in L^{p+1}(\partial S_1^+)$ and Lemma \ref{Lemma:4.3} is applicable for $U_\infty$. Remark that \eqref{eq:4.34} implies
\begin{equation}\label{eq:4.41}
\left( - \Delta \right)^s V_\lambda ( x,0) = |x|^\ell \left| V_\lambda (x,0) \right|^{p-1} V_\lambda (x,0)
\quad \text{in} \ \mathbb{R}^N
\end{equation} and Proposition \ref{Proposition:2.2} and Lemma \ref{Lemma:4.2} hold for $V_\lambda$. Hence, for $R_2>R_1>0$, by \eqref{eq:4.33}, \eqref{eq:4.34} and Lemma \ref{Lemma:4.2}, we have \begin{equation}\label{eq:4.42} \begin{aligned} 0 =\lim_{i\to\infty}\bigg\{E(U;\lambda_iR_2)-E(U;\lambda_iR_1)\bigg\} &=\lim_{i\to\infty}\bigg\{E(V_{\lambda_i};R_2)-E(V_{\lambda_i};R_1)\bigg\} \\ & \ge\liminf_{i\to\infty}\int_{R_1}^{R_2}\frac{\partial}{\partial r}E(V_{\lambda_i};r)\,dr. \end{aligned} \end{equation} This together with \eqref{eq:4.4}, \eqref{align:4.38} and the weak lower semicontinuity of norms yields \begin{equation} \label{eq:4.43} \begin{aligned} 0 & \ge \liminf_{i\to\infty}\int_{R_1}^{R_2}r^{\frac{2}{p-1}(s(p+1)+\ell)-N} \left( \int_{S^+_r}t^{1-2s}\left(\frac{2s+\ell}{p-1}\frac{V_{\lambda_i}}{r} +\frac{\partial V_{\lambda_i}}{\partial r}\right)^2\, dS \right)dr \\ & = \liminf_{i\to\infty}\int_{B^+_{R_2}\setminus B^+_{R_1}}t^{1-2s}r^{\frac{2}{p-1}(s(p+1)+\ell)-N} \left(\frac{2s+\ell}{p-1}\frac{V_{\lambda_i}}{r}+\frac{\partial V_{\lambda_i}}{\partial r}\right)^2\,dX \\ & \ge \int_{B^+_{R_2}\setminus B^+_{R_1}}t^{1-2s}r^{\frac{2}{p-1}(s(p+1)+\ell)-N} \left(\frac{2s+\ell}{p-1}\frac{U_\infty}{r}+\frac{\partial U_\infty}{\partial r}\right)^2\,dX. \end{aligned} \end{equation} Noting that $U_\infty \in C^\infty(\mathbb{R}^{N+1}_+)$ thanks to $\mathrm{div} (t^{1-2s} \nabla U_\infty) = 0$ and elliptic regularity, by the arbitrariness of $R_1$ and $R_2$, we have
\[
0 =
\frac{\partial U_\infty}{\partial r}+\frac{2s+\ell}{p-1}\frac{U_\infty}{r}
= r^{ - \frac{2s+\ell}{p-1} } \frac{\partial}{\partial r}\left( r^{ \frac{2s+\ell}{p-1} } U_\infty \right)
\quad \mbox{in}\quad \mathbb R^{N+1}_+.
\] Integrating this equality with respect to $r$, we obtain
\[
U_\infty(X)=r^{-\frac{2s+\ell}{p-1}}U_\infty(r^{-1}X)
\] and hence, $U_\infty$ satisfies \eqref{eq:4.7}.
It follows from Lemma~\ref{Lemma:4.3} that $U_\infty\equiv0$. Since the weak limit does not depend on choices of subsequences, we infer that $V_\lambda \rightharpoonup 0$ weakly in $H^1_{\mathrm{loc}} ( \overline{ \mathbb{R}^{N+1}_+},t^{1-2s}dX)$ and from \eqref{align:4.38} that
\begin{equation}\label{eq:4.44}
\int_{ B_{2R}^+ } t^{1-2s} |V_\lambda|^2 \, dX \to 0 \quad \mbox{as}\quad \lambda \to \infty.
\end{equation} Recalling \eqref{eq:4.34}, \eqref{eq:4.35}, \eqref{eq:4.41} and the proof of \eqref{eq:2.66} in Lemma \ref{Lemma:2.9}, we see
\begin{equation}\label{eq:4.45}
\int_{\mathbb R^{N+1}_+}t^{1-2s}|\nabla(V_\lambda \zeta)|^2\,dX
\leq
\frac{p}{p-1}\int_{\mathbb R^{N+1}_+}t^{1-2s}|V_{\lambda}|^2|\nabla\zeta|^2\,dX
\end{equation} where $\zeta \in C^1_c( \overline{ \mathbb{R}^{N+1}_+} )$ satisfies
\[
\zeta \equiv 1\ \ \text{in} \ B_{R}^+ \setminus B_{r}^+, \quad
\zeta \equiv 0 \quad \text{in} \ B_{r/2}^+ \cup ( \mathbb{R}^{N+1}_+ \setminus B_{2R}^+ )
\quad \mbox{for}\quad \lambda^{-1} R_0 < r < R.
\] From \eqref{eq:4.44}, \eqref{eq:4.45} and the property of $\zeta$, we observe that for any $0 < r < R$,
\begin{equation}\label{eq:4.46}
\lim_{ \lambda \to \infty}
\int_{ B_R^+ \setminus B_r^+ } t^{1-2s} \left| \nabla V_{\lambda} \right|^2 \, dX = 0.
\end{equation} Furthermore, by \eqref{eq:2.64} with $V_\lambda$, \eqref{eq:4.44} and \eqref{eq:4.45}, for each $0 < r < R$,
\begin{equation}\label{eq:4.47}
\lim_{ \lambda \to \infty}\int_{B_R \setminus B_r} |x|^\ell |V_{\lambda}(x,0)|^{p+1} \, dx = 0.
\end{equation}
Next, we shall prove $E(U;\lambda) \to 0$ as $\lambda \to \infty$. In view of \eqref{eq:4.34}, for each $\varepsilon \in (0,1)$, we have \begin{equation}\label{eq:4.48} \begin{aligned} \lambda^{\frac{2(2s+\ell)}{p-1}}D(U;\lambda) = \ &D(V_\lambda ;1) \\
= \ &\frac{1}{2}\int_{B^+_\varepsilon}t^{1-2s}|\nabla V_\lambda|^2\,dX
-\frac{\kappa_s}{p+1}\int_{B_\varepsilon}|x|^\ell|V_\lambda (x,0)|^{p+1}\,dx \\ & \quad
+\frac{1}{2}\int_{B^+_1\setminus B^+_\varepsilon}t^{1-2s}|\nabla V_\lambda|^2\,dX
-\frac{\kappa_s}{p+1}\int_{B_1\setminus B_\varepsilon}|x|^\ell|V_\lambda (x,0) |^{p+1}\,dx \\ = \ & \varepsilon^{N-2s} D(V_\lambda; \varepsilon)
+\frac{1}{2}\int_{B^+_1\setminus B^+_\varepsilon}t^{1-2s}|\nabla V_\lambda|^2\,dX \\ &\quad
-\frac{\kappa_s}{p+1}\int_{B_1\setminus B_\varepsilon}|x|^\ell|V_\lambda (x,0) |^{p+1}\,dx \\ = \ &\varepsilon^{N-\frac{2}{p-1}(s(p+1)+\ell)} \left[ (\lambda\varepsilon)^{\frac{2(2s+\ell)}{p-1}} D(U;\lambda\varepsilon)\right] \\ & \quad
+\frac{1}{2}\int_{B^+_1\setminus B^+_\varepsilon}t^{1-2s}|\nabla V_\lambda|^2\,dX
-\frac{\kappa_s}{p+1}\int_{B_1\setminus B_\varepsilon}|x|^\ell|V_\lambda (x,0) |^{p+1}\,dx. \end{aligned} \end{equation} By \eqref{eq:4.32} and \eqref{eq:4.46}--\eqref{eq:4.48}, we see that
\[
\begin{aligned}
&\limsup_{ \lambda \to \infty}
\left| \lambda^{\frac{2(2s+\ell)}{p-1}}D(U;\lambda) \right|
\\
= \ &
\limsup_{ \lambda \to \infty}
\left| \varepsilon^{N-\frac{2}{p-1}(s(p+1)+\ell)}
\left[ (\lambda\varepsilon)^{\frac{2(2s+\ell)}{p-1}} D(U;\lambda\varepsilon)\right]
\right.
\\
& \hspace{3cm}
\left. +\frac{1}{2}\int_{B^+_1\setminus B^+_\varepsilon}t^{1-2s}|\nabla V_\lambda|^2\,dX
-\frac{\kappa_s}{p+1}\int_{B_1\setminus B_\varepsilon}|x|^\ell|V_\lambda (x,0) |^{p+1}\,dx
\right|
\\
\leq \ & C_0 \varepsilon^{N-\frac{2}{p-1}(s(p+1)+\ell)}
\end{aligned}
\] for some $C_0>0$. Since $\varepsilon \in (0,1)$ is arbitrary and $N - \frac{2}{p-1} ( (p+1) s + \ell ) > 0$, we obtain \begin{equation} \label{eq:4.49} \lim_{\lambda\to\infty}\lambda^{\frac{2(2s+\ell)}{p-1}}D(U;\lambda) = 0. \end{equation} On the other hand, from \eqref{eq:4.44} it follows that
\[
0 = \lim_{ \lambda \to \infty} \int_{B_2^+} t^{1-2s} |V_\lambda|^2 \,dX
= \lim_{\lambda\to\infty} \int_0^2 dr \int_{S_r^+} t^{1-2s} |V_\lambda|^2 \, dS.
\] Passing to a subsequence $(\lambda_i)$, we have
\[
\int_{S_r^+} t^{1-2s} |V_{\lambda_i}|^2 \, dS \to 0 \quad \text{for a.a. $r \in (0,2)$}.
\] Therefore, there exists an $r_0 \in (0,2)$ such that \eqref{eq:4.34} gives
\[
\lambda_i^{ \frac{2(2s+\ell)}{p-1} } H(U; r_0 \lambda_i)
= H(V_{\lambda_i} ; r_0 ) = r_0^{ -(N+1-2s) } \int_{ S_{r_0}^+} t^{1-2s} |V_{\lambda_i}|^2 \,dS \to 0.
\] With \eqref{eq:4.34}, \eqref{eq:4.49} and the monotonicity of $E(U;\lambda)$, we have
\begin{equation}\label{eq:4.50}
\lim_{ \lambda \to \infty} E(U;\lambda)
= \lim_{ i \to \infty} E(U;\lambda_i r_0)= 0.
\end{equation}
We shall prove $U \equiv 0$. To this end, we first show that $E(U;\lambda) \to 0$ as $\lambda \to 0$. Since $U \in C ( \overline{\mathbb{R}^{N+1}_+} )$ holds by Lemma \ref{Lemma:2.3}, as $\lambda \to 0$, we have
\begin{equation}\label{eq:4.51}
0 \leq \lambda^{\frac{2(2s+\ell)}{p-1}} H(U;\lambda)
= \lambda^{ \frac{2(2s+\ell)}{p-1} }
\int_{ S^+_1} \sigma_{N+1}^{1-2s} U(\lambda \sigma)^2 \, dS
\leq C \| U \|_{ L^\infty( B_1^+ ) }^2 \lambda^{ \frac{2(2s+\ell)}{p-1} } \to 0
\end{equation} and by \eqref{eq:1.2},
\begin{equation}\label{eq:4.52}
\lambda^{ \frac{2(2s+\ell)}{p-1} - (N-2s)} \int_{ B_\lambda} |x|^\ell |u|^{p+1} \, dx
\leq C \| u \|_{ L^\infty(B_1) }^{p+1} \lambda^{ \frac{2(2s+\ell)}{p-1} + 2s + \ell } \to 0.
\end{equation} By \eqref{eq:4.50} and the monotonicity of $E(U;\lambda)$, we have $E(U;\lambda) \leq 0$ for all $\lambda \in (0,\infty)$. In view of this fact with \eqref{eq:4.51} and \eqref{eq:4.52}, it follows that
\[
\begin{aligned}
&\limsup_{\lambda \to + 0} \frac{\lambda^{ \frac{2(2s+\ell)}{p-1} - (N-2s) } }{2}
\int_{B^+_\lambda} t^{1-2s} |\nabla U|^2 \, dX
\\
= \ & \limsup_{\lambda \to + 0}
\left[ E(U;\lambda) + \frac{\lambda^{ \frac{2(2s+\ell)}{p-1} - (N-2s) } \kappa_s }{p+1}
\int_{B_\lambda} |x|^\ell |u|^{p+1} \, dx -
\lambda^{ \frac{2(2s+\ell)}{p-1} } \frac{2s+\ell}{2(p-1)} H(U;\lambda)
\right]
\\
\leq \ & 0,
\end{aligned}
\] which yields
\begin{equation}\label{eq:4.53}
\lim_{\lambda \to +0}
\lambda^{ \frac{2(2s+\ell)}{p-1} - (N-2s)} \int_{ B_\lambda^+} t^{1-2s} |\nabla U|^2 \, dX
= 0.
\end{equation} Now, \eqref{eq:4.51}--\eqref{eq:4.53} yield $E(U;\lambda) \to 0$ as $\lambda \to 0$. Therefore, $E(U; \lambda) \equiv 0$ thanks to \eqref{eq:4.50} and the monotonicity of $E$. Applying an argument similar to \eqref{eq:4.42} and \eqref{eq:4.43}, we see that $U(X) = r^{ - \frac{2s+\ell}{p-1} } U( r^{-1} X ) $. In addition, since $U\in C ( \overline{\mathbb{R}^{N+1}_+}) $, $\psi(\omega,0) = U(\omega,0) \in L^\infty(\partial S^+_1)$ and $u$ (respectively $U$) is stable outside $B_{R_0}$ (respectively $B_{R_0}^+$), \eqref{eq:4.7} is satisfied and it follows from Lemma~\ref{Lemma:4.3} that $U\equiv 0$. This completes the proof. \end{proof}
\noindent \textbf{Acknowledgment.} The authors would like to express their sincere gratitude to the anonymous referee for his/her careful reading and useful comments. They also would like to thank Alexander Quaas for letting them know of \cite{BQ-20}. The first author (S.H.) was supported by JSPS KAKENHI Grant Number JP20J01191. The second author (N.I.) was supported by JSPS KAKENHI Grant Numbers JP17H02851, JP19H01797, and JP19K03590. The third author (T.K.) was supported by JSPS KAKENHI Grant Numbers JP19H05599 and JP16K17629.
\end{document}
\begin{document}
\vspace*{0ex} \begin{center} {\Large\bf A Hamiltonian structure of the Isobe--Kakinuma model \\[0.5ex] for water waves} \end{center}
\begin{center} {\em Dedicated to the late Professor Walter L. Craig} \end{center}
\begin{center} Vincent Duch\^ene and Tatsuo Iguchi \end{center}
\begin{abstract} We consider the Isobe--Kakinuma model for water waves, which is obtained as the system of Euler--Lagrange equations for a Lagrangian approximating Luke's Lagrangian for water waves. We show that the Isobe--Kakinuma model also enjoys a Hamiltonian structure analogous to the one exhibited by V. E. Zakharov on the full water wave problem and, moreover, that the Hamiltonian of the Isobe--Kakinuma model is a higher order shallow water approximation to the one of the full water wave problem. \end{abstract}
\section{Introduction} \label{section:intro}
We consider a model for the motion of water in a moving domain of the $(n+1)$-dimensional Euclidean space. The water wave problem is mathematically formulated as a free boundary problem for an irrotational flow of an inviscid, incompressible, and homogeneous fluid under a vertical gravitational field. Let $t$ be the time, $\boldsymbol{x}=(x_1,\ldots,x_n)$ the horizontal spatial coordinates, and $z$ the vertical spatial coordinate. We assume that the water surface and the bottom are represented as $z=\eta({\boldsymbol x},t)$ and $z=-h+b({\boldsymbol x})$, respectively, where $\eta({\boldsymbol x},t)$ is the surface elevation, $h$ is the mean depth, and $b({\boldsymbol x})$ represents the bottom topography. We denote by $\Omega(t)$, $\Gamma(t)$, and $\Sigma$ the water region, the water surface, and the bottom of the water at time $t$, respectively. Then, the motion of the water is described by the velocity potential $\Phi({\boldsymbol x},z,t)$ satisfying Laplace's equation
\begin{equation}\label{intro:Laplace} \Delta\Phi + \partial_z^2\Phi = 0 \makebox[3em]{in} \Omega(t), \; t>0, \end{equation}
where $\Delta=\partial_{x_1}^2+\cdots+\partial_{x_n}^2$. The boundary conditions on the water surface are given by
\begin{equation}\label{intro:BC1} \begin{cases}
\partial_t\eta + \nabla\Phi\cdot\nabla\eta - \partial_z\Phi = 0 & \mbox{on}\quad \Gamma(t), \; t>0, \\
\displaystyle
\partial_t\Phi +\frac12 \bigl( |\nabla\Phi|^2 + (\partial_z\Phi)^2 \bigr) + g\eta = 0
& \mbox{on}\quad \Gamma(t), \; t>0, \end{cases} \end{equation}
where $\nabla=(\partial_{x_1},\ldots,\partial_{x_n})^{\rm T}$, and $g$ is the gravitational constant. The first equation is the kinematic condition on the water surface and the second one is Bernoulli's equation. Finally, the boundary condition on the bottom of the water is given by
\begin{equation}\label{intro:BC2} \nabla\Phi\cdot\nabla b - \partial_z\Phi = 0 \makebox[3em]{on} \Sigma, \; t>0, \end{equation}
which is the kinematic condition on the fixed and impermeable bottom. These are the basic equations for the water wave problem.
We put
\begin{equation}\label{intro:CV} \phi({\boldsymbol x},t) = \Phi({\boldsymbol x},\eta({\boldsymbol x},t),t), \end{equation}
which is the trace of the velocity potential on the water surface. Then, the basic equations for water waves~\eqref{intro:Laplace}--\eqref{intro:BC2} are transformed equivalently into
\begin{equation}\label{intro:ZCS} \begin{cases}
\partial_t\eta-\Lambda(\eta,b)\phi = 0 & \mbox{on}\quad \mathbf{R}^n, \; t>0, \\[.5ex]
\partial_t\phi + g\eta + \dfrac12|\nabla\phi|^2
- \dfrac12\dfrac{\bigl(\Lambda(\eta,b)\phi + \nabla\eta\cdot\nabla\phi\bigr)^2}{1+|\nabla\eta|^2} = 0
& \mbox{on}\quad \mathbf{R}^n, \; t>0, \end{cases} \end{equation}
where $\Lambda(\eta,b)$ is the Dirichlet-to-Neumann map for Laplace's equation. Namely, it is defined by
\[
\Lambda(\eta,b)\phi= (\partial_z\Phi)|_{z=\eta} - \nabla\eta\cdot(\nabla\Phi)|_{z=\eta}, \] where $\Phi$ is the unique solution to the boundary value problem of Laplace's equation \eqref{intro:Laplace} under the boundary conditions \eqref{intro:BC2}--\eqref{intro:CV}.
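We note in passing the classical explicit formula in the simplest configuration: over a flat bottom ($b=0$) and for the fluid at rest ($\eta=0$), $\Lambda(0,0)$ is the Fourier multiplier
\[
\widehat{\Lambda(0,0)\phi}(\boldsymbol{\xi}) = |\boldsymbol{\xi}|\tanh( h|\boldsymbol{\xi}| )\,\widehat{\phi}(\boldsymbol{\xi}),
\]
as is seen by solving \eqref{intro:Laplace} in the flat strip $\{-h<z<0\}$: the bounded solution satisfying $\partial_z\Phi=0$ at $z=-h$ and $\Phi|_{z=0}=\phi$ has Fourier transform $\widehat{\Phi}(\boldsymbol{\xi},z)=\cosh\bigl((z+h)|\boldsymbol{\xi}|\bigr)\cosh(h|\boldsymbol{\xi}|)^{-1}\widehat{\phi}(\boldsymbol{\xi})$.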
As is well-known, the water wave problem has a conserved energy $E=E_{\rm kin} + E_{\rm pot}$, where $E_{\rm kin}$ is the kinetic energy
\begin{align*} E_{\rm kin}
&= \frac{1}{2}\rho\iint_{\Omega(t)} \bigl( |\nabla\Phi({\boldsymbol x},z,t)|^2
+ (\partial_z\Phi({\boldsymbol x},z,t))^2 \bigr)\,{\rm d}{\boldsymbol x}{\rm d}z \\ &= \frac{1}{2}\rho\int_{\mathbf{R}^n}\phi({\boldsymbol x},t)(\Lambda(\eta,b)\phi)({\boldsymbol x},t)\,{\rm d}{\boldsymbol x}, \end{align*}
and $E_{\rm pot}$ is the potential energy
\[ E_{\rm pot} = \frac{1}{2}\rho g\int_{\mathbf{R}^n}\eta({\boldsymbol x},t)^2\,{\rm d}{\boldsymbol x} \]
due to the gravity. Here, $\rho$ is a constant density of the water.
V. E. Zakharov~\cite{Zakharov1968} found that the water wave system has a Hamiltonian structure and $\eta$ and $\phi$ are the canonical variables. The Hamiltonian $\mathscr{H}$ is essentially the total energy, that is, $\mathscr{H} = \frac{1}{\rho}E$. He showed that the basic equations for water waves \eqref{intro:Laplace}--\eqref{intro:BC2} are transformed equivalently into Hamilton's canonical equations
\[ \partial_t\eta = \frac{\delta\mathscr{H}}{\delta\phi}, \quad \partial_t\phi = -\frac{\delta\mathscr{H}}{\delta\eta}. \]
Although V. E. Zakharov did not use explicitly the Dirichlet-to-Neumann map $\Lambda(\eta,b)$, the above canonical equations are exactly the same as \eqref{intro:ZCS}. W. Craig and C. Sulem~\cite{CraigSulem1993} introduced the Dirichlet-to-Neumann map explicitly and derived \eqref{intro:ZCS}. Therefore, nowadays \eqref{intro:ZCS} is often called the Zakharov--Craig--Sulem formulation of the water wave problem. Since then, W. Craig and his collaborators \cite{CraigGroves1994,CraigGroves2000, CraigGuyenneKalisch2005, CraigGuyenneNichollsSulem2005, CraigGuyenneSulem2010, CraigGuyenneSulem2012} have used the Hamiltonian structure of the water wave problem in order to analyze long-wave and modulation approximations. Let us also mention the recent work of W. Craig~\cite{Craig2016}, which generalizes the Hamiltonian formulation of water waves described above to a general coordinatization of the free surface allowing overturning wave profiles.
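For completeness, we recall why the first canonical equation coincides with the first equation of \eqref{intro:ZCS}. Writing $\mathscr{H}(\eta,\phi) = \frac{1}{2}\int_{\mathbf{R}^n}\phi\,\Lambda(\eta,b)\phi\,{\rm d}{\boldsymbol x} + \frac{g}{2}\int_{\mathbf{R}^n}\eta^2\,{\rm d}{\boldsymbol x}$ and using the self-adjointness of $\Lambda(\eta,b)$ in $L^2(\mathbf{R}^n)$, one finds
\[
\frac{\delta\mathscr{H}}{\delta\phi} = \Lambda(\eta,b)\phi,
\]
so that $\partial_t\eta = \delta\mathscr{H}/\delta\phi$ is precisely the first equation of \eqref{intro:ZCS}; the variation with respect to $\eta$, which involves the shape derivative of $\Lambda(\eta,b)$, recovers the second equation.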
On the other hand, as was shown by J. C. Luke~\cite{Luke1967}, the water wave problem also has a variational structure. His Lagrangian density is of the form
\begin{equation}\label{intro:Luke's Lagrangian} \mathscr{L}(\Phi,\eta) = \int_{-h+b({\boldsymbol x})}^{\eta({\boldsymbol x},t)}\biggl(
\partial_t\Phi({\boldsymbol x},z,t) + \frac12\bigl( |\nabla\Phi({\boldsymbol x},z,t)|^2
+ (\partial_z\Phi({\boldsymbol x},z,t))^2 \bigr) \biggr)\,{\rm d}z
+ \frac12g\bigl(\eta(\boldsymbol{x},t)\bigr)^2
\end{equation}
and the action function is given by
\[ \mathscr{J}(\Phi,\eta)
= \int_{t_0}^{t_1}\!\!\!\int_{\mathbf{R}^n}\mathscr{L}(\Phi,\eta)\,{\rm d}{\boldsymbol x}\,{\rm d}t. \]
In fact, the corresponding Euler--Lagrange equations are exactly the basic equations for water waves \eqref{intro:Laplace}--\eqref{intro:BC2}. We refer to J. W. Miles~\cite{Miles1977} for the relation between Zakharov's Hamiltonian and Luke's Lagrangian.
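The variation with respect to $\eta$ is immediate to check: only the upper limit of the $z$-integral and the last term of \eqref{intro:Luke's Lagrangian} depend on $\eta$, so that stationarity requires
\[
\Bigl( \partial_t\Phi + \frac12\bigl( |\nabla\Phi|^2 + (\partial_z\Phi)^2 \bigr) \Bigr)\Big|_{z=\eta} + g\eta = 0,
\]
which is Bernoulli's equation in \eqref{intro:BC1}; the variation with respect to $\Phi$ yields, after integration by parts, Laplace's equation \eqref{intro:Laplace} together with the kinematic conditions in \eqref{intro:BC1} and \eqref{intro:BC2}.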
M. Isobe~\cite{Isobe1994, Isobe1994-2} and T. Kakinuma~\cite{Kakinuma2000, Kakinuma2001, Kakinuma2003} obtained a family of systems of equations after replacing the velocity potential $\Phi$ in Luke's Lagrangian by
\[ \Phi^{\rm app}({\boldsymbol x},z,t) = \sum_{i=0}^N\Psi_i(z;b)\phi_i({\boldsymbol x},t), \]
where $\{\Psi_i\}$ is a given appropriate function system in the vertical coordinate $z$ and may depend on the bottom topography $b$ and $(\phi_0,\phi_1,\ldots,\phi_N)$ are unknown variables. The Isobe--Kakinuma model is a system of Euler--Lagrange equations corresponding to the action function
\begin{equation}\label{intro:IK-action} \mathscr{J}^{\rm app}(\phi_0,\phi_1,\ldots,\phi_N,\eta)
= \int_{t_0}^{t_1}\!\!\!\int_{\mathbf{R}^n}\mathscr{L}(\Phi^{\rm app},\eta)
\,{\rm d}{\boldsymbol x}{\rm d}t. \end{equation}
We have to choose the function system $\{\Psi_i\}$ carefully for the Isobe--Kakinuma model to produce good approximations to the water wave problem. One possible choice is the bases of the Taylor series of the velocity potential $\Phi({\boldsymbol x},z,t)$ with respect to the vertical spatial coordinate $z$ around the bottom. Such an expansion has been already used by J. Boussinesq~\cite{Boussinesq1872} in the case of the flat bottom and, for instance, by C. C. Mei and B. Le M\'ehaut\'e~\cite{MeiLeMehaute1966} for general bottom topographies. The corresponding choice of the function system is given by
\[ \Psi_i(z;b)
= \begin{cases}
(z + h)^{2i} & \mbox{in the case of the flat bottom}, \\
(z + h - b({\boldsymbol x}))^i & \mbox{in the case of the variable bottom}.
\end{cases} \]
Here we note that the latter choice is also valid in the case of the flat bottom. However, it turns out that the terms of odd degree do not play any important role in that case, so that the former choice is more adequate. In order to treat both cases at the same time, we adopt the approximation
\begin{equation}\label{intro:app} \Phi^{\rm app}({\boldsymbol x},z,t)
= \sum_{i=0}^N(z+h-b({\boldsymbol x}))^{p_i}\phi_i({\boldsymbol x},t), \end{equation}
where $p_0,p_1,\ldots,p_N$ are nonnegative integers satisfying $0=p_0<p_1<\cdots<p_N$. Plugging~\eqref{intro:app} into the action function~\eqref{intro:IK-action}, the corresponding Euler--Lagrange equation yields the Isobe--Kakinuma model of the form
\begin{equation}\label{intro:IK model} \left\{
\begin{array}{l}
\displaystyle
H^{p_i} \partial_t \eta + \sum_{j=0}^N \left\{ \nabla \cdot \left(
\frac{1}{p_i+p_j+1} H^{p_i+p_j+1} \nabla\phi_j
- \frac{p_j}{p_i+p_j} H^{p_i+p_j} \phi_j \nabla b \right) \right.\\
\displaystyle\phantom{ H^{p_i} \partial_t \eta + \sum_{j=0}^N \biggl\{ }
+ \left. \frac{p_i}{p_i+p_j} H^{p_i+p_j} \nabla b \cdot \nabla\phi_j
- \frac{p_ip_j}{p_i+p_j-1} H^{p_i+p_j-1} (1 + |\nabla b|^2) \phi_j \right\} = 0 \\
\makebox[28em]{}\mbox{for}\quad i=0,1,\ldots,N, \\[1ex]
\displaystyle
\sum_{j=0}^N H^{p_j} \partial_t \phi_j + g\eta
+ \frac12 \left\{ \left| \sum_{j=0}^N ( H^{p_j}\nabla\phi_j - p_j H^{p_j-1}\phi_j\nabla b ) \right|^2
+ \left( \sum_{j=0}^N p_j H^{p_j-1} \phi_j \right)^2 \right\} = 0,
\end{array} \right. \end{equation}
where $H({\boldsymbol x},t) = h + \eta({\boldsymbol x},t) - b({\boldsymbol x})$ is the depth of the water. Here and in what follows we use the notational convention $0/0=0$. This system consists of $(N+1)$ evolution equations for $\eta$ and only one evolution equation for $(N+1)$ unknowns $(\phi_0,\phi_1,\ldots,\phi_N)$, so that this is an overdetermined and underdetermined composite system. However, the total number of the unknowns is equal to the total number of the equations.
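As an illustration, consider the lowest order case $N=0$, so that $p_0=0$ and $\Phi^{\rm app}=\phi_0$. Then every term carrying a factor $p_i$ or $p_j$ vanishes by the convention $0/0=0$, and \eqref{intro:IK model} reduces to
\[
\begin{cases}
\partial_t\eta + \nabla\cdot( H\nabla\phi_0 ) = 0, \\[0.5ex]
\partial_t\phi_0 + g\eta + \dfrac12|\nabla\phi_0|^2 = 0,
\end{cases}
\]
which are the classical shallow water equations written in terms of the velocity potential.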
The main purpose of this paper is to show that the Isobe--Kakinuma model~\eqref{intro:IK model} also enjoys a canonical Hamiltonian structure which is analogous to the one of the water waves problem. In particular, the Hamiltonian is a higher order shallow water approximation of the original Hamiltonian of the water waves problem.
\noindent {\bf Acknowledgement} \\ T. I. was partially supported by JSPS KAKENHI Grant Number JP17K18742 and JP17H02856.
\section{Preliminaries} \label{section:IK}
Since the hypersurface $t=0$ in the space-time $\mathbf{R}^n\times\mathbf{R}$ is characteristic for the Isobe--Kakinuma model~\eqref{intro:IK model}, the initial value problem to the model is not solvable in general. In fact, if the problem has a solution $(\eta,\phi_0,\ldots,\phi_N)$, then by eliminating the time derivative $\partial_t\eta$ from the equations we see that the solution has to satisfy the relations
\begin{align}\label{IK:comp} & H^{p_i}\sum_{j=0}^N\nabla\cdot\left(
\frac{1}{p_j+1}H^{p_j+1}\nabla\phi_j
-H^{p_j}\phi_j\nabla b\right) \nonumber \\ &= \sum_{j=0}^N\left\{\nabla\cdot\left(
\frac{1}{p_i+p_j+1}H^{p_i+p_j+1}\nabla\phi_j
-\frac{p_j}{p_i+p_j}H^{p_i+p_j}\phi_j\nabla b\right)\right. \\ &\phantom{ =\sum_{j=0}^N\biggl\{ }
\displaystyle\left.
+\frac{p_i}{p_i+p_j}H^{p_i+p_j}\nabla b\cdot\nabla\phi_j
-\frac{p_ip_j}{p_i+p_j-1}H^{p_i+p_j-1}(1+|\nabla b|^2)\phi_j\right\} \nonumber \end{align}
for $i=1,\ldots,N$. Therefore, the initial data have to satisfy these relations in order to allow the existence of a solution. Y. Murakami and T. Iguchi~\cite{MurakamiIguchi2015} and R. Nemoto and T. Iguchi~\cite{NemotoIguchi2018} showed that the initial value problem to the Isobe--Kakinuma model~\eqref{intro:IK model} is well-posed locally in time in a class of initial data for which the relations~\eqref{IK:comp} and a generalized Rayleigh--Taylor sign condition are satisfied. Moreover, T. Iguchi~\cite{Iguchi2018-1, Iguchi2018-2} showed that the Isobe--Kakinuma model~\eqref{intro:IK model} is a higher order shallow water approximation for the water wave problem in the strongly nonlinear regime. The Isobe--Kakinuma model~\eqref{intro:IK model} also has a conserved energy, which is the total energy given by
\begin{align} E^{\mbox{\rm\scriptsize IK}}(\eta,\boldsymbol{\phi})
&= \frac12\rho\iint_{\Omega(t)}\bigl( |\nabla\Phi^{\rm app}({\boldsymbol x},z,t)|^2
+ (\partial_z\Phi^{\rm app}({\boldsymbol x},z,t))^2 \bigr)\,
{\rm d}{\boldsymbol x}{\rm d}z
+ \frac12\rho g\int_{\mathbf{R}^n}\eta({\boldsymbol x},t)^2\,{\rm d}{\boldsymbol x} \nonumber \\ &= \frac{\rho}{2}\int_{\mathbf{R}^n}\left\{
\sum_{i,j=0}^N\left(
\frac{1}{p_i+p_j+1}H^{p_i+p_j+1}\nabla\phi_i\cdot\nabla\phi_j
-\frac{2p_i}{p_i+p_j}H^{p_i+p_j}\phi_i\nabla b\cdot\nabla\phi_j \right. \right. \nonumber\\ & \left. \phantom{\sum_{i,j=0}^N}\makebox[6em]{}
+ \frac{p_ip_j}{p_i+p_j-1}H^{p_i+p_j-1}(1+|\nabla b|^2)\phi_i\phi_j\biggr) + g\eta^2 \right\}
{\rm d}{\boldsymbol x},
\label{IK:energy} \end{align}
where $\boldsymbol{\phi}=(\phi_0,\phi_1,\ldots,\phi_N)^{\rm T}$.
We introduce second order differential operators $L_{ij}=L_{ij}(H,b)$ $(i,j=0,1,\ldots,N)$ depending on the water depth $H$ and the bottom topography $b$ by
\begin{align} L_{ij}\psi_j &= -\nabla\cdot\biggl(
\frac{1}{p_i+p_j+1}H^{p_i+p_j+1}\nabla\psi_j
-\frac{p_j}{p_i+p_j}H^{p_i+p_j}\psi_j\nabla b\biggr) \nonumber \\[0.5ex] &\quad\,
-\frac{p_i}{p_i+p_j}H^{p_i+p_j}\nabla b\cdot\nabla\psi_j
+\frac{p_ip_j}{p_i+p_j-1}H^{p_i+p_j-1}(1+|\nabla b|^2)\psi_j.
\label{IK:L} \end{align}
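These operators enjoy a symmetry that can be checked directly: for sufficiently smooth and decaying functions $\psi_i,\psi_j$, integration by parts gives
\begin{align*}
(L_{ij}\psi_j,\psi_i)_{L^2}
&= \int_{\mathbf{R}^n}\biggl(
\frac{1}{p_i+p_j+1}H^{p_i+p_j+1}\nabla\psi_i\cdot\nabla\psi_j
-\frac{p_j}{p_i+p_j}H^{p_i+p_j}\psi_j\nabla b\cdot\nabla\psi_i \\
&\qquad
-\frac{p_i}{p_i+p_j}H^{p_i+p_j}\psi_i\nabla b\cdot\nabla\psi_j
+\frac{p_ip_j}{p_i+p_j-1}H^{p_i+p_j-1}(1+|\nabla b|^2)\psi_i\psi_j
\biggr)\,{\rm d}{\boldsymbol x},
\end{align*}
an expression invariant under the exchange $(i,\psi_i)\leftrightarrow(j,\psi_j)$, consistently with the second expression of the energy in \eqref{IK:energy}.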
Then, we have $L_{ij}^*=L_{ji}$, where $L_{ij}^*$ is the adjoint operator of $L_{ij}$ in $L^2(\mathbf{R}^n)$. Moreover, the Isobe--Kakinuma model~\eqref{intro:IK model} and the relations~\eqref{IK:comp} can be written simply as
\begin{equation}\label{IK:IK model} \left\{
\begin{array}{l}
\displaystyle
H^{p_i} \partial_t \eta - \sum_{j=0}^N L_{ij}(H,b) \phi_j = 0 \qquad\mbox{for}\quad i=0,1,\ldots,N, \\
\displaystyle
\sum_{j=0}^N H^{p_j} \partial_t \phi_j + g\eta
+ \frac12\bigl( |(\nabla\Phi^{\rm app})|_{z=\eta}|^2
+ ((\partial_z\Phi^{\rm app})|_{z=\eta})^2 \bigr) = 0
\end{array} \right. \end{equation}
and
\begin{equation}\label{IK:comp2} \sum_{j=0}^N ( L_{ij}(H,b) - H^{p_i} L_{0j}(H,b) ) \phi_j = 0 \qquad\mbox{for}\quad i=1,\ldots,N, \end{equation}
respectively. It is easy to calculate the variational derivative of the energy function $E^{\mbox{\rm\scriptsize IK}}(\eta,\boldsymbol{\phi})$ and to obtain
\begin{equation}\label{IK:deltaE} \begin{cases} \displaystyle \frac{1}{\rho} \delta_{\phi_i} E^{\mbox{\rm\scriptsize IK}}
= \sum_{j=0}^N L_{ij}(H,b)\phi_j \quad\mbox{for}\quad i=0,1,\ldots,N, \\[3ex] \displaystyle \frac{1}{\rho} \delta_{\eta} E^{\mbox{\rm\scriptsize IK}}
= \frac12\bigl(|(\nabla\Phi^{\rm app})|_{z=\eta}|^2
+ ((\partial_z\Phi^{\rm app})|_{z=\eta})^2 \bigr) + g\eta. \end{cases} \end{equation}
Therefore, introducing $\boldsymbol{l}(H) = (H^{p_0},H^{p_1},\ldots,H^{p_N})^{\rm T}$, the Isobe--Kakinuma model~\eqref{intro:IK model} can also be written simply as
\begin{equation}\label{IK:IK model2} \begin{pmatrix}
0 & -\boldsymbol{l}(H)^{\rm T} \\
\boldsymbol{l}(H) & O \end{pmatrix} \partial_t \begin{pmatrix}
\eta \\
\boldsymbol{\phi} \end{pmatrix} = \frac{1}{\rho} \begin{pmatrix}
\delta_{\eta} E^{\mbox{\rm\scriptsize IK}} \\
\delta_{\boldsymbol{\phi}} E^{\mbox{\rm\scriptsize IK}} \end{pmatrix}. \end{equation}
In view of~\eqref{IK:comp2} we introduce also linear operators $\mathcal{L}_i = \mathcal{L}_i(H,b)$ $(i = 1,\ldots,N)$ depending on the water depth $H$ and the bottom topography $b$, and acting on $\mbox{\boldmath$\varphi$} = (\varphi_0,\varphi_1,\ldots,\varphi_N)^{\rm T}$ by
\begin{equation}\label{IK:scrLi} \mathcal{L}_i \mbox{\boldmath$\varphi$} = \sum_{j=0}^N \bigl( L_{ij}(H,b) - H^{p_i} L_{0j}(H,b) \bigr) \varphi_j
\quad\mbox{for}\quad i=1,\ldots,N, \end{equation}
and put $\mathcal{L} \mbox{\boldmath$\varphi$}
= (\mathcal{L}_1 \mbox{\boldmath$\varphi$},\ldots,\mathcal{L}_N \mbox{\boldmath$\varphi$})^{\rm T}$. Then, the conditions~\eqref{IK:comp} can be written simply as
\begin{equation}\label{IK:comp3}
\mathcal{L}(H,b)\boldsymbol{\phi} = {\bf 0} . \end{equation}
For later use, we also put $L=L(H,b)=(L_{ij}(H,b))_{0\leq i,j\leq N}$ and define $L_0=L_0(H,b)$ by
\begin{equation}\label{IK:L0} L_0(H,b)\boldsymbol{\varphi} = \sum_{j=0}^NL_{0j}(H,b)\varphi_j. \end{equation}
Then, the conditions~\eqref{IK:comp} are also equivalent to
\begin{equation}\label{IK:comp4} L(H,b)\boldsymbol{\phi} = \bigl(L_0(H,b)\boldsymbol{\phi}\bigr)\boldsymbol{l}(H). \end{equation}
Now, for given functions $F_0$ and $\boldsymbol{F}=(F_1,\ldots,F_N)^{\rm T}$ we consider the equations
\begin{equation}\label{IK:elliptic} \begin{cases}
\boldsymbol{l}(H)\cdot \boldsymbol{\varphi} =F_0,\\
\mathcal{L}(H,b)\boldsymbol{\varphi} = \boldsymbol{F}.
\end{cases} \end{equation}
Let $W^{m,p}=W^{m,p}(\mathbf{R}^n)$ be the $L^p$-based Sobolev space of order $m$ on $\mathbf{R}^n$ and $H^m=W^{m,2}$. The norms of the Sobolev space $H^m$ and of a Banach space $\mathscr{X}$
are denoted by $\|\cdot\|_m$ and by $\|\cdot\|_\mathscr{X}$, respectively. Set $\mathring{H}^m=\{\phi \,;\, \nabla\phi\in H^{m-1} \}$. The following lemma was proved in~\cite{NemotoIguchi2018}.
\begin{lemma}\label{IK:lem1} Let $h,c_0,M$ be positive constants and $m$ an integer such that $m>\frac{n}{2}+1$. There exists a positive constant $C$ such that if $\eta$ and $b$ satisfy
\begin{equation}\label{IK:cond} \left\{
\begin{array}{l}
\|\eta\|_m+\|b\|_{W^{m,\infty}} \leq M, \\[0.5ex]
c_0\leq H({\boldsymbol x})=h+\eta({\boldsymbol x})-b({\boldsymbol x}) \quad\mbox{for}\quad {\boldsymbol x}\in\mathbf{R}^n,
\end{array} \right. \end{equation}
then for any $F_0\in \mathring{H}^k$ and $\mbox{\boldmath$F$}=(F_1,\ldots,F_N)^{\rm T}\in (H^{k-2})^N$ with $1\leq k\leq m$ there exists a unique solution $\mbox{\boldmath$\varphi$}=(\varphi_0,\varphi_1,\ldots,\varphi_N)^{\rm T}\in \mathring{H}^k\times (H^{k})^N$ to~\eqref{IK:elliptic}. Moreover, the solution satisfies
\[
\|\nabla\varphi_0\|_{k-1}+\|(\varphi_1,\ldots,\varphi_N)\|_k
\leq C\;(\|\nabla F_0\|_{k-1}+\|(F_1,\ldots,F_N)\|_{k-2}). \]
\end{lemma}
\section{Hamiltonian structure} \label{section:HS}
In the following, we will fix $b\in W^{m,\infty}$ with $m>\frac{n}{2}+1$. Let $(\eta,\phi_0,\ldots,\phi_N)$ be a solution to the Isobe--Kakinuma model~\eqref{intro:IK model}. As we will see later, the canonical variables of the Isobe--Kakinuma model are the surface elevation $\eta$ and the trace of the approximated velocity potential on the water surface
\begin{equation}\label{H:CV}
\phi = \Phi^{\rm app}|_{z=\eta} = \sum_{j=0}^NH^{p_j}\phi_j
= \boldsymbol{l}(H)\cdot\boldsymbol{\phi}. \end{equation}
Then, the relations~\eqref{IK:comp} and the above equation are written in the simple form
\begin{equation}\label{H:eqCV} \begin{cases} \boldsymbol{l}(H)\cdot \boldsymbol{\phi} =\phi,\\ \mathcal{L}(H,b)\boldsymbol{\phi} = {\bf 0}.
\end{cases} \end{equation}
Therefore, it follows from Lemma~\ref{IK:lem1} that once the canonical variables $(\eta,\phi)$ are given in an appropriate class of functions, $\boldsymbol{\phi}=(\phi_0,\phi_1,\ldots,\phi_N)^{\rm T}$ can be determined uniquely. In other words, the variables $(\phi_0,\phi_1,\ldots,\phi_N)$ depend on the canonical variables $(\eta,\phi)$; furthermore, they depend on $\phi$ linearly, so that we can write
\[ \boldsymbol{\phi} = {\bf S}(\eta,b)\phi \]
with a linear operator ${\bf S}(\eta,b)$ depending on $\eta$ and $b$. Since we fixed $b$, we simply write ${\bf S}(\eta)$ in place of ${\bf S}(\eta,b)$ for simplicity.
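In the simplest case $N=0$, the operator ${\bf S}(\eta)$ is trivial: \eqref{H:eqCV} reduces to the single equation $\phi_0=\phi$, so that ${\bf S}(\eta)\phi=\phi$ and, by \eqref{IK:energy} and the convention $0/0=0$, the energy in terms of $(\eta,\phi)$ is
\[
E^{\mbox{\rm\scriptsize IK}}(\eta,\phi) = \frac{\rho}{2}\int_{\mathbf{R}^n}\bigl( H|\nabla\phi|^2 + g\eta^2 \bigr)\,{\rm d}{\boldsymbol x},
\]
that is, the shallow water energy. For $N\geq1$, by contrast, ${\bf S}(\eta)$ is a genuinely nonlocal operator determined by the elliptic problem \eqref{H:eqCV}.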
We proceed to analyze this operator ${\bf S}(\eta)$ more precisely. We put
\[ U^m_b = \{ \eta \in H^m \,;\,
\inf_{{\boldsymbol x}\in\mathbf{R}^n}(h+\eta({\boldsymbol x})-b({\boldsymbol x}))>0 \}, \]
which is an open set in $H^m$. For Banach spaces $\mathscr{X}$ and $\mathscr{Y}$, we denote by $B(\mathscr{X};\mathscr{Y})$ the set of all linear and bounded operators from $\mathscr{X}$ into $\mathscr{Y}$. In view of \eqref{IK:comp4}, \eqref{H:CV}, and Lemma~\ref{IK:lem1}, we easily obtain the following lemma.
\begin{lemma}\label{H:lem1} Let $m$ be an integer such that $m>\frac{n}{2}+1$ and $b\in W^{m,\infty}$. For each $\eta \in U^m_b$ and for $k=1,2,\ldots,m$, the linear operator
\[ {\bf S}(\eta) : \mathring{H}^k \ni\phi \mapsto \boldsymbol{\phi} \in\mathring{H}^k\times(H^k)^N \]
is defined, where $\boldsymbol{\phi}=(\phi_0,\phi_1,\ldots,\phi_N)^{\rm T}$ is the unique solution to \eqref{H:eqCV}. Moreover, we have ${\bf S}(\eta) \in B(\mathring{H}^k;\mathring{H}^k\times(H^k)^N)$ and
\[
L(H,b)\boldsymbol{\phi} = \bigl(L_0(H,b)\boldsymbol{\phi}\bigr)\boldsymbol{l}(H). \] \end{lemma}
Formally, the Fr\'echet derivative $D_{\eta}{\bf S}(\eta)[\dot{\eta}]$ of ${\bf S}(\eta)$ with respect to $\eta$ is given by
\begin{equation}\label{H:Frechet} \begin{cases}
\boldsymbol{l}(H)\cdot\dot{\boldsymbol{\psi}}
= -\bigl( \boldsymbol{l}'(H)\cdot\boldsymbol{\phi} \bigr)\dot{\eta}, \\
\mathcal{L}(H,b)\dot{\boldsymbol{\psi}} = -D_H\mathcal{L}(H,b)[\dot{\eta}]\boldsymbol{\phi}, \end{cases} \end{equation}
with $\boldsymbol{\phi}={\bf S}(\eta)\phi$ and $\dot{\boldsymbol{\psi}}=D_{\eta}{\bf S}(\eta)[\dot{\eta}]\phi$, where $\boldsymbol{l}'(H)\cdot\boldsymbol{\phi} =\sum_{j=1}^Np_jH^{p_j-1}\phi_j$,
\[ D_H\mathcal{L}_i(H,b)[\dot{\eta}]\boldsymbol{\phi} = \sum_{j=0}^N\bigl( D_HL_{ij}(H,b)[\dot{\eta}] - H^{p_i}D_HL_{0j}(H,b)[\dot{\eta}]
- p_iH^{p_i-1}\dot{\eta}L_{0j}(H,b) \bigr)\phi_j, \]
and
\begin{align*} D_H L_{ij}(H,b)[\dot{\eta}]\phi_j &= -\nabla\cdot\bigl\{ \dot{\eta}(
H^{p_i+p_j}\nabla\phi_j
-p_j H^{p_i+p_j-1}\phi_j\nabla b ) \bigr\} \nonumber \\[0.5ex] &\quad\,
+ \dot{\eta}\bigl\{ -p_i H^{p_i+p_j-1}\nabla b\cdot\nabla\phi_j
+ p_ip_j H^{p_i+p_j-2}(1+|\nabla b|^2)\phi_j \bigr\}. \end{align*}
By using these equations together with Lemma~\ref{IK:lem1} and standard arguments, we can justify the Fr\'echet differentiability of ${\bf S}(\eta)$ with respect to $\eta$. More precisely, we have the following lemma.
\begin{lemma}\label{H:lem2} Let $m$ be an integer such that $m>\frac{n}{2}+1$ and $b\in W^{m,\infty}$. Then, the map $U^m_b \ni \eta \mapsto {\bf S}(\eta) \in B(\mathring{H}^k;\mathring{H}^k\times(H^k)^N)$ is Fr\'echet differentiable for $k=1,2,\ldots,m$, and~\eqref{H:Frechet} holds. \end{lemma}
As mentioned before, the Isobe--Kakinuma model~\eqref{intro:IK model} has a conserved quantity $E^{\mbox{\rm\scriptsize IK}}(\eta,\boldsymbol{\phi})$ given by~\eqref{IK:energy}, which is the total energy. Now, we define a Hamiltonian $\mathscr{H}^{\mbox{\rm\scriptsize IK}}(\eta,\phi)$ to the Isobe--Kakinuma model by
\begin{equation}\label{H:H} \mathscr{H}^{\mbox{\rm\scriptsize IK}}(\eta,\phi)
= \frac{1}{\rho}E^{\mbox{\rm\scriptsize IK}}(\eta,{\bf S}(\eta)\phi), \end{equation}
which is essentially the total energy in terms of the canonical variables $(\eta,\phi)$.
\begin{lemma}\label{H:lem3} Let $m$ be an integer such that $m>\frac{n}{2}+1$ and $b\in W^{m,\infty}$. Then, the map $U^m_b \times\mathring{H}^1 \ni (\eta,\phi) \mapsto \mathscr{H}^{\mbox{\rm\scriptsize IK}}(\eta,\phi)\in\mathbf{R}$ is Fr\'echet differentiable and the variational derivatives of the Hamiltonian are
\[ \begin{cases} \delta_\phi\mathscr{H}^{\mbox{\rm\scriptsize IK}}(\eta,\phi) = L_0(H,b)\boldsymbol{\phi}, \\ \delta_\eta\mathscr{H}^{\mbox{\rm\scriptsize IK}}(\eta,\phi)
= \frac{1}{\rho}(\delta_\eta E^{\mbox{\rm\scriptsize IK}})(\eta,\boldsymbol{\phi})
-(\boldsymbol{l}'(H)\cdot\boldsymbol{\phi})L_0(H,b)\boldsymbol{\phi}, \end{cases} \]
where $\boldsymbol{\phi}={\bf S}(\eta)\phi$. \end{lemma}
\begin{proof}[{\bf Proof}.] Let us calculate Fr\'echet derivatives of the Hamiltonian $\mathscr{H}^{\mbox{\rm\scriptsize IK}}(\eta,\phi)$. Let us consider first $U^m_b \times H^2 \ni (\eta,\phi) \mapsto \mathscr{H}^{\mbox{\rm\scriptsize IK}}(\eta,\phi) $. For any $\dot{\phi}\in {H}^2$, we see that
\begin{align*} D_\phi\mathscr{H}^{\mbox{\rm\scriptsize IK}}(\eta,\phi)[\dot{\phi}] &= \frac{1}{\rho}(D_{\boldsymbol{\phi}} E^{\mbox{\rm\scriptsize IK}})
(\eta,{\bf S}(\eta)\phi)[{\bf S}(\eta)\dot{\phi}] \\ &= \frac{1}{\rho}( (\delta_{\boldsymbol{\phi}} E^{\mbox{\rm\scriptsize IK}})(\eta,\boldsymbol{\phi}),
{\bf S}(\eta)\dot{\phi})_{L^2} \\ &= (L(H,b)\boldsymbol{\phi},{\bf S}(\eta)\dot{\phi})_{L^2} \\ &= (\bigl(L_0(H,b)\boldsymbol{\phi}\bigr)\boldsymbol{l}(H), {\bf S}(\eta)\dot{\phi})_{L^2} \\ &= (L_0(H,b)\boldsymbol{\phi}, \boldsymbol{l}(H)\cdot {\bf S}(\eta)\dot{\phi})_{L^2} \\ &= ( L_0(H,b)\boldsymbol{\phi}, \dot{\phi})_{L^2}, \end{align*}
where we used \eqref{IK:deltaE} and Lemma~\ref{H:lem1}. The above calculations are also valid when $(\phi,\dot{\phi})\in \mathring{H}^1\times \mathring{H}^1$, provided we replace the $L^2$ inner products with the $\mathscr{X}^\prime$--$\mathscr{X}$ duality product where $\mathscr{X} = \mathring{H}^1\times (H^1)^N$ for the first lines, and $\mathscr{X}= \mathring{H}^1$ for the last line. This gives the first equation of the lemma.
Similarly, for any $(\eta,\phi)\in U_b^m\times \mathring{H}^2$ and $\dot{\eta}\in H^m$ we see that
\begin{align*} D_\eta\mathscr{H}^{\mbox{\rm\scriptsize IK}}(\eta,\phi)[\dot{\eta}] = \frac{1}{\rho}(D_\eta E^{\mbox{\rm\scriptsize IK}})(\eta,{\bf S}(\eta)\phi)[\dot{\eta}]
+ \frac{1}{\rho}(D_{\boldsymbol{\phi}} E^{\mbox{\rm\scriptsize IK}})(\eta,{\bf S}(\eta)\phi)
[D_\eta {\bf S}(\eta)[\dot{\eta}]\phi]. \end{align*}
Here, we have
\begin{align*} \frac{1}{\rho}(D_{\boldsymbol{\phi}} E^{\mbox{\rm\scriptsize IK}})(\eta,{\bf S}(\eta)\phi)
[D_\eta {\bf S}(\eta)[\dot{\eta}]\phi] &= \frac{1}{\rho}( (\delta_{\boldsymbol{\phi}} E^{\mbox{\rm\scriptsize IK}})(\eta,\boldsymbol{\phi}),
D_\eta {\bf S}(\eta)[\dot{\eta}]\phi)_{L^2} \\ &= ( L(H,b)\boldsymbol{\phi}, D_\eta {\bf S}(\eta)[\dot{\eta}]\phi)_{L^2} \\ &= ( L_0(H,b)\boldsymbol{\phi}, \boldsymbol{l}(H)\cdot D_\eta {\bf S}(\eta)[\dot{\eta}]\phi)_{L^2} \\ &= -( L_0(H,b)\boldsymbol{\phi}, (\boldsymbol{l}'(H)\cdot {\bf S}(\eta)\phi) \dot{\eta})_{L^2} \\ &= - ( (\boldsymbol{l}'(H)\cdot\boldsymbol{\phi})L_0(H,b)\boldsymbol{\phi}, \dot{\eta})_{L^2}, \end{align*}
where we used the identity
\[ \boldsymbol{l}(H)\cdot D_\eta {\bf S}(\eta)[\dot{\eta}]\phi
+ (\boldsymbol{l}'(H)\cdot {\bf S}(\eta)\phi) \dot{\eta} = 0, \]
stemming from~\eqref{H:Frechet}. Again, the above identities are still valid for $(\eta,\phi)\in U_b^m\times \mathring{H}^1$ provided we replace the $L^2$ inner products with suitable duality products. This concludes the proof of the Fr\'echet differentiability, and the second equation of the lemma. \end{proof}
Now, we are ready to show our main result in this section.
\begin{theorem} Let $m$ be an integer such that $m>\frac{n}{2}+1$ and $b\in W^{m,\infty}$. Then, the Isobe--Kakinuma model~\eqref{intro:IK model} is equivalent to Hamilton's canonical equations
\begin{equation}\label{H:CF} \partial_t\eta = \frac{\delta\mathscr{H}^{\mbox{\rm\scriptsize IK}}}{\delta\phi}, \quad \partial_t\phi = -\frac{\delta\mathscr{H}^{\mbox{\rm\scriptsize IK}}}{\delta\eta}, \end{equation}
with $\mathscr{H}^{\mbox{\rm\scriptsize IK}}$ defined in~\eqref{H:H} as long as $\eta(\cdot,t) \in U^m_b$ and $\phi(\cdot,t)\in \mathring{H}^1$. More precisely, for any regular solution $(\eta,\boldsymbol{\phi})$ to the Isobe--Kakinuma model \eqref{intro:IK model}, if we define $\phi$ by~\eqref{H:CV}, then $(\eta,\phi)$ satisfies Hamilton's canonical equations~\eqref{H:CF}. Conversely, for any regular solution $(\eta,\phi)$ to Hamilton's canonical equations~\eqref{H:CF}, if we define $\boldsymbol{\phi}$ by $\boldsymbol{\phi}={\bf S}(\eta)\phi$, then $(\eta,\boldsymbol{\phi})$ satisfies the Isobe--Kakinuma model~\eqref{intro:IK model}. \end{theorem}
\begin{proof}[{\bf Proof}.] Suppose that $(\eta,\boldsymbol{\phi})$ is a solution to the Isobe--Kakinuma model~\eqref{intro:IK model}. Then, it satisfies~\eqref{IK:IK model2}, particularly, we have
\begin{equation}\label{H:eq1} \partial_t\eta = L_0(H,b)\boldsymbol{\phi}. \end{equation}
It follows from~\eqref{H:CV} and~\eqref{IK:IK model2} that
\begin{align*} \partial_t\phi &= \boldsymbol{l}(H)\cdot\partial_t\boldsymbol{\phi}
+ (\boldsymbol{l}'(H)\cdot\boldsymbol{\phi})\partial_t\eta \\ &= -\frac{1}{\rho} (\delta_\eta E^{\mbox{\rm\scriptsize IK}})(\eta,\boldsymbol{\phi})
+ (\boldsymbol{l}'(H)\cdot\boldsymbol{\phi})L_0(H,b)\boldsymbol{\phi}. \end{align*}
These equations together with Lemma~\ref{H:lem3} show that $(\eta,\phi)$ satisfies~\eqref{H:CF}.
Conversely, suppose that $(\eta,\phi)$ satisfies Hamilton's canonical equations~\eqref{H:CF} and put $\boldsymbol{\phi}={\bf S}(\eta)\phi$. Then, it follows from~\eqref{H:CF} and Lemma~\ref{H:lem3} that we have~\eqref{H:eq1}. This fact and Lemma~\ref{H:lem1} imply the equation
\[ \boldsymbol{l}(H)\partial_t\eta = L(H,b)\boldsymbol{\phi}
= \frac{1}{\rho}\delta_{\boldsymbol{\phi}}E^{\mbox{\rm\scriptsize IK}}(\eta,\boldsymbol{\phi}). \]
We see also that
\begin{align*} \boldsymbol{l}(H)\cdot\partial_t\boldsymbol{\phi} &= \partial_t\phi - (\boldsymbol{l}'(H)\cdot\boldsymbol{\phi})\partial_t\eta
= - \frac{1}{\rho}\delta_{\eta}E^{\mbox{\rm\scriptsize IK}}(\eta,\boldsymbol{\phi}), \end{align*}
where we used~\eqref{H:CF} and Lemma~\ref{H:lem3}. Therefore, $(\eta,\boldsymbol{\phi})$ satisfies~\eqref{IK:IK model2}, that is, the Isobe--Kakinuma model~\eqref{intro:IK model}. \end{proof}
\section{Consistency} \label{section:C}
As mentioned above, it was shown in~\cite{Iguchi2018-1, Iguchi2018-2} that the Isobe--Kakinuma model \eqref{intro:IK model} is a higher order shallow water approximation for the water wave problem in the strongly nonlinear regime. In this section, we will show that the canonical Hamiltonian structure exhibited in the previous section is consistent with this approximation, in the sense that the Hamiltonian of the Isobe--Kakinuma model, $\mathscr{H}^{\mbox{\rm\scriptsize IK}}(\eta,\phi)$, approximates the Hamiltonian of the water wave problem, $\mathscr{H}(\eta,\phi)$, in the shallow water regime.
In order to provide quantitative results, we first rewrite the equations in a nondimensional form. Let $\lambda$ be the typical wavelength of the wave. Recalling that $h$ is the mean depth, we introduce the nondimensional aspect ratio
\[ \delta = \frac{h}{\lambda} , \]
measuring the shallowness of the water. We then rescale the physical variables by
\[ {\boldsymbol x} = \lambda\tilde {\boldsymbol x}, \quad z = h\tilde z, \quad t = \frac{\lambda}{\sqrt{gh}}\tilde t, \quad \eta = h\tilde\eta, \quad b= h\tilde b, \quad \Phi=\lambda\sqrt{gh}\tilde\Phi. \]
Under this rescaling, after dropping the tildes for the sake of readability, the basic equations for water waves~\eqref{intro:Laplace}--\eqref{intro:BC2} are rewritten in the non-dimensional form
\begin{equation}\label{C:Laplace}
\Delta\Phi+\delta^{-2}\partial_z^2\Phi=0 \makebox[3em]{in} \Omega(t), \; t>0, \end{equation}
\begin{equation}\label{C:BC1} \begin{cases}
\partial_t\eta + \nabla\Phi\cdot\nabla\eta - \delta^{-2}\partial_z\Phi = 0
& \mbox{on}\quad \Gamma(t), \; t>0, \\
\displaystyle
\partial_t\Phi + \frac12\bigl( |\nabla\Phi|^2 + \delta^{-2}(\partial_z\Phi)^2 \bigr) + \eta = 0
& \mbox{on}\quad \Gamma(t), \; t>0, \end{cases} \end{equation}
\begin{equation}\label{C:BC2} \nabla\Phi\cdot\nabla b - \delta^{-2} \partial_z\Phi = 0 \makebox[3em]{on} \Sigma, \; t>0, \end{equation}
where $\Omega(t)$, $\Gamma(t)$, and $\Sigma$ denote the rescaled water region, water surface, and bottom of the water at time $t$, respectively. Specifically, the rescaled water surface and bottom of the water are represented as $z=\eta(\boldsymbol{x},t)$ and $z=-1+b(\boldsymbol{x})$, respectively. The corresponding dimensionless Zakharov--Craig--Sulem formulation is
\begin{equation}\label{C:ZCS} \begin{cases}
\partial_t\eta-\Lambda^\delta(\eta,b)\phi = 0 & \mbox{on}\quad \mathbf{R}^n, \; t>0, \\[.5ex]
\partial_t\phi + \eta + \dfrac12|\nabla\phi|^2
- \dfrac{\delta^2}2\dfrac{\bigl(\Lambda^\delta(\eta,b)\phi + \nabla\eta\cdot\nabla\phi\bigr)^2
}{1+\delta^2|\nabla\eta|^2} = 0 & \mbox{on}\quad \mathbf{R}^n, \; t>0, \end{cases} \end{equation}
where
\begin{equation}\label{C:CV} \phi({\boldsymbol x},t) = \Phi({\boldsymbol x},\eta({\boldsymbol x},t),t) \end{equation}
is the trace of the velocity potential on the water surface, and $\Lambda^\delta(\eta,b)$ is the dimensionless Dirichlet-to-Neumann map for Laplace's equation, namely, it is defined by
\[
\Lambda^\delta(\eta,b)\phi = \delta^{-2} (\partial_z\Phi)|_{z=\eta} - \nabla\eta\cdot(\nabla\Phi)|_{z=\eta}, \]
where $\Phi$ is the unique solution to the boundary value problem of the scaled Laplace's equation~\eqref{C:Laplace} under the boundary conditions \eqref{C:BC2} and \eqref{C:CV}. With this rescaling and definitions, the Hamiltonian of the water wave system is given by
\[ \mathscr{H}^\delta(\eta,\phi)
= \frac12 \iint_{\Omega(t)} \bigl( |\nabla\Phi|^2 + \delta^{-2}(\partial_z\Phi)^2 \bigr)
\,{\rm d}{\boldsymbol x}{\rm d}z + \frac{1}{2}\int_{\mathbf{R}^n}\eta^2 \,{\rm d}{\boldsymbol x} . \]
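For the reader's convenience (this short computation is not part of the derivation above), the factor $\delta^{-2}$ in \eqref{C:Laplace} can be recovered directly from the chain rule. With ${\boldsymbol x}=\lambda\tilde{\boldsymbol x}$, $z=h\tilde z$, and $\Phi=\lambda\sqrt{gh}\,\tilde\Phi$, we have
\[
\Delta\Phi = \frac{\sqrt{gh}}{\lambda}\,\tilde\Delta\tilde\Phi, \qquad
\partial_z^2\Phi = \frac{\lambda\sqrt{gh}}{h^2}\,\partial_{\tilde z}^2\tilde\Phi,
\]
so that the dimensional Laplace equation $\Delta\Phi+\partial_z^2\Phi=0$ becomes
\[
\frac{\sqrt{gh}}{\lambda}\Bigl( \tilde\Delta\tilde\Phi + \frac{\lambda^2}{h^2}\,\partial_{\tilde z}^2\tilde\Phi \Bigr)=0,
\]
which is \eqref{C:Laplace} since $\lambda^2/h^2=\delta^{-2}$.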
In order to rewrite the Isobe--Kakinuma model~\eqref{intro:IK model} in dimensionless form, we need to rescale the unknown variables $(\phi_0,\phi_1,\ldots,\phi_N)$, depending on the choice of function system $\{\Psi_i\}$. In view of~\eqref{intro:app}, we rescale them by
\[ \phi_i = \frac{\lambda\sqrt{gh}}{\lambda^{p_i}} \tilde \phi_i \qquad\mbox{for}\quad i=0,1,\ldots,N, \]
so that
\begin{equation}\label{C:app} \Phi^{\rm app}({\boldsymbol x}, z, t) = \lambda\sqrt{gh}\;\tilde\Phi^{\rm app}(\tilde{\boldsymbol x},\tilde z,\tilde t) = \lambda\sqrt{gh}\;\biggl(\sum_{i=0}^N\delta^{p_i}(\tilde z+1-\tilde b(\tilde{\boldsymbol x}))^{p_i}
\tilde\phi_i(\tilde{\boldsymbol x},\tilde t)\biggr). \end{equation}
As before, we will henceforth drop the tildes for the sake of readability. It is also convenient to introduce the notation
\[ \phi_i^\delta=\delta^{p_i}\tilde\phi_i \qquad\mbox{for}\quad i=0,1,\ldots,N, \]
so that the Isobe--Kakinuma model~\eqref{intro:IK model} in rescaled variables is
\begin{equation}\label{C:IK model} \left\{
\begin{array}{l}
\displaystyle
H^{p_i} \partial_t \eta + \sum_{j=0}^N \left\{ \nabla \cdot \left(
\frac{1}{p_i+p_j+1} H^{p_i+p_j+1} \nabla\phi_j^\delta
- \frac{p_j}{p_i+p_j} H^{p_i+p_j} \phi_j^\delta \nabla b \right) \right.\\
\displaystyle\phantom{ H^{p_i} \partial_t \eta + \sum_{j=0}^N \biggl\{ }
+ \left. \frac{p_i}{p_i+p_j} H^{p_i+p_j} \nabla b \cdot \nabla\phi_j^\delta
- \frac{p_ip_j}{p_i+p_j-1} H^{p_i+p_j-1} (\delta^{-2} + |\nabla b|^2) \phi_j^\delta \right\} = 0 \\
\makebox[28em]{}\mbox{for}\quad i=0,1,\ldots,N, \\[1ex]
\displaystyle
\sum_{j=0}^N H^{p_j} \partial_t \phi_j^\delta + \eta
+ \frac12 \left\{ \left| \sum_{j=0}^N
( H^{p_j}\nabla\phi_j^\delta - p_j H^{p_j-1}\phi_j^\delta\nabla b ) \right|^2
+ \delta^{-2}\left( \sum_{j=0}^N p_j H^{p_j-1} \phi_j^\delta \right)^2 \right\} = 0,
\end{array} \right. \end{equation}
where $H({\boldsymbol x},t) = 1 + \eta({\boldsymbol x},t) - b({\boldsymbol x})$. We also use the notations $\boldsymbol{\phi}^\delta=(\phi_0^\delta,\phi_1^\delta,\dots,\phi_N^\delta)^{\rm T}$ and $L^\delta=L^\delta(H,b)=(L_{ij}^\delta(H,b))_{0\leq i,j\leq N}$, where
\begin{align} L_{ij}^\delta\psi_j &= -\nabla\cdot\biggl(
\frac{1}{p_i+p_j+1}H^{p_i+p_j+1}\nabla\psi_j
-\frac{p_j}{p_i+p_j}H^{p_i+p_j}\psi_j\nabla b\biggr) \nonumber \\[0.5ex] &\quad\,
-\frac{p_i}{p_i+p_j}H^{p_i+p_j}\nabla b\cdot\nabla\psi_j
+\frac{p_ip_j}{p_i+p_j-1}H^{p_i+p_j-1}(\delta^{-2}+|\nabla b|^2)\psi_j.
\label{C:L} \end{align}
Then, \eqref{C:IK model} can be written in a compact form
\begin{equation}\label{C:IK model2} \begin{pmatrix}
0 & -\boldsymbol{l}(H)^{\rm T} \\
\boldsymbol{l}(H) & O \end{pmatrix} \partial_t \begin{pmatrix}
\eta \\
\boldsymbol{\phi}^\delta \end{pmatrix} = \begin{pmatrix}
\delta_{\eta} E^{\mbox{\rm\scriptsize IK},\delta} \\
\delta_{\boldsymbol{\phi}^\delta} E^{\mbox{\rm\scriptsize IK},\delta} \end{pmatrix}, \end{equation}
where
\begin{align} E^{\mbox{\rm\scriptsize IK},\delta}(\eta,\boldsymbol{\phi}^\delta) &= \frac{1}{2}\int_{\mathbf{R}^n}\left\{
\sum_{i,j=0}^N\left(
\frac{1}{p_i+p_j+1}H^{p_i+p_j+1}\nabla\phi_i^\delta\cdot\nabla\phi_j^\delta
-\frac{2p_i}{p_i+p_j}H^{p_i+p_j}\phi_i^\delta\nabla b\cdot\nabla\phi_j^\delta \right. \right. \nonumber \\ & \left. \phantom{\sum_{i,j=0}^N}\makebox[6em]{}
+ \frac{p_ip_j}{p_i+p_j-1}H^{p_i+p_j-1}(\delta^{-2}+|\nabla b|^2)\phi_i^\delta\phi_j^\delta\biggr)
+ \eta^2\right\}{\rm d}{\boldsymbol x}. \end{align}
Then, we define the Hamiltonian
\[ \mathscr{H}^{\mbox{\rm\scriptsize IK},\delta}(\eta,\phi) = E^{\mbox{\rm\scriptsize IK},\delta}(\eta,\boldsymbol{\phi}^\delta), \]
where $\boldsymbol{\phi}^\delta$ is the solution to
\begin{equation}\label{C:S} \begin{cases}
\boldsymbol{l}(H)\cdot\boldsymbol{\phi}^\delta = \phi, \\
L^\delta(H,b)\boldsymbol{\phi}^\delta
= \bigl(L_0^\delta(H,b)\boldsymbol{\phi}^\delta\bigr)\boldsymbol{l}(H). \end{cases} \end{equation}
Here, we used the notation $L_0^\delta=(L_{00}^\delta,\dots,L_{0N}^\delta)$. We recall that $\boldsymbol{\phi}^\delta$ is uniquely determined by~\eqref{C:S} thanks to Lemma~\ref{H:lem1}.
To analyze the consistency of the Hamiltonian in the shallow water regime, we will further restrict ourselves to the following two cases:
\begin{itemize} \item[(H1)] In the case of the flat bottom $b({\boldsymbol x})\equiv0$, $p_i=2i$ for $i=0,1,\ldots,N$.
\item[(H2)] In the case with general bottom topographies, $p_i=i$ for $i=0,1,\ldots,N$. \end{itemize}
We are now in a position to state the consistency of the Hamiltonian of the Isobe--Kakinuma model with respect to Zakharov's Hamiltonian of the water wave problem in the shallow water regime.
\begin{theorem}\label{C:consistency-Hamiltonian} Let $c_0,M$ be positive constants and $m>\frac{n}{2}+1$ an integer such that $m\geq 4(N+1)$ in the case {\rm (H1)} and $m\geq 4([\frac{N}{2}]+1)$ in the case {\rm (H2)}. There exists a positive constant $C$ such that if $\eta\in H^m$ and $b\in W^{m+1,\infty}$ satisfy
\[ \begin{cases}
\|\eta\|_m + \|b\|_{W^{m+1,\infty}} \leq M, \\
c_0\leq H({\boldsymbol x}) = 1+\eta({\boldsymbol x})-b({\boldsymbol x})
\quad\mbox{for}\quad {\boldsymbol x}\in\mathbf{R}^n, \end{cases} \]
then for any $\delta\in(0,1]$ and any $\phi\in \mathring{H}^{m}$, we have
\[
|\mathscr{H}^\delta(\eta,\phi) -\mathscr{H}^{\mbox{\rm\scriptsize IK},\delta}(\eta,\phi) | \leq \begin{cases}
C\|\nabla\phi\|_{4N+3}\|\nabla\phi\|_{0}\, \delta^{4N+2} & \text{ in the case {\rm (H1)}}, \\
C\|\nabla \phi\|_{4[\frac{N}{2}]+3}\|\nabla\phi\|_{0}\, \delta^{4[\frac{N}{2}]+2}
& \text{ in the case {\rm (H2)}}. \end{cases} \] \end{theorem}
\begin{remark} Theorem 2.4 in~\cite{Iguchi2018-2} in fact states the stronger result that the difference between exact solutions of the water wave problem obtained in \cite{Iguchi2009,Lannes2013} and the corresponding solutions of the Isobe--Kakinuma model is bounded with the same order of precision as above on the relevant timescale. \end{remark}
\begin{remark} It is important to notice that the order of the approximation given in Theorem~\ref{C:consistency-Hamiltonian} is greater than what we could expect based on~\eqref{C:app}, and in particular greater than the one obtained when using the Boussinesq expansion in the flat bottom case (H1):
\[ \phi_{\rm B}(\tilde t,\tilde{\boldsymbol x})
= \tilde\Phi^{\rm app}_{\rm B}(\tilde{\boldsymbol x}, \eta(\tilde{\boldsymbol x}, \tilde t),\tilde t)
\quad \mbox{with} \quad \tilde\Phi^{\rm app}_{\rm B}(\tilde{\boldsymbol x}, \tilde z, \tilde t)
= \sum_{i=0}^N\delta^{2i}(\tilde z+1)^{2i} \frac{(-\Delta)^i\phi_0(\tilde{\boldsymbol x},\tilde t)}{(2i)!} \]
where $\phi_0$ is the trace of the velocity potential at the bottom. Here we can only expect that the approximation is valid up to an error of order $O(\delta^{2N+2})$, which coincides with the precision of Theorem~\ref{C:consistency-Hamiltonian} only when $N=0$. When $N=0$, we recover that the Saint-Venant or shallow-water equations provide approximate solutions with precision $O(\delta^2)$; see~\cite{Iguchi2009, Lannes2013}. \end{remark}
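For illustration, we record the simplest instance of the bound (a direct specialization, not part of the original statement): taking $N=0$ in case (H1), so that $p_0=0$ and the approximation \eqref{C:app} is independent of the vertical variable, Theorem~\ref{C:consistency-Hamiltonian} reads
\[
|\mathscr{H}^\delta(\eta,\phi) -\mathscr{H}^{\mbox{\rm\scriptsize IK},\delta}(\eta,\phi)|
\leq C\|\nabla\phi\|_{3}\|\nabla\phi\|_{0}\,\delta^{2},
\]
provided $m\geq 4$, in accordance with the $O(\delta^2)$ precision of the shallow water equations mentioned above.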
\begin{proof}[{\bf Proof of Theorem \ref{C:consistency-Hamiltonian}}.] We slightly modify the strategy of~\cite{Iguchi2018-2}. We first notice that
\begin{align*} &\mathscr{H}^\delta(\eta,\phi)
= \frac12\iint_{\Omega}\bigl( |\nabla\Phi|^2 + \delta^{-2}(\partial_z\Phi)^2 \bigr){\rm d}\boldsymbol{x}{\rm d}z
+ \frac12\int_{\mathbf{R}^n}\eta^2{\rm d}\boldsymbol{x}, \\ &\mathscr{H}^{\mbox{\rm\scriptsize IK},\delta}(\eta,\phi)
= \frac12\iint_{\Omega}\bigl( |\nabla\Phi^{\rm app}|^2
+ \delta^{-2}(\partial_z\Phi^{\rm app})^2 \bigr){\rm d}\boldsymbol{x}{\rm d}z
+ \frac12\int_{\mathbf{R}^n}\eta^2{\rm d}\boldsymbol{x}, \end{align*}
where $\Phi$ is the unique solution to the boundary value problem of the scaled Laplace's equation~\eqref{C:Laplace} under the boundary conditions \eqref{C:BC2} and \eqref{C:CV}, and the approximate velocity potential $\Phi^{\rm app}$ is defined by
\[ \Phi^{\rm app}({\boldsymbol x},z) = \sum_{i=0}^{N} ( z+1-b(\boldsymbol{x}))^{p_i}\phi_i^\delta({\boldsymbol x}), \] where ${\boldsymbol{\phi}}^\delta=({\phi}_0^\delta,{\phi}_1^\delta,\dots,{\phi}_{N}^\delta)^{\rm T}$ is the solution to
\[ \begin{cases} \displaystyle
\sum_{i=0}^{N} H^{p_i} {\phi}_{i}^\delta = \phi, \\[3ex] \displaystyle
\sum_{j=0}^{N} {L}_{ij}^\delta(H,b){{\phi}}_j^\delta
= H^{p_i}\sum_{j=0}^{N} {L}_{0j}^\delta(H,b){{\phi}}_j^\delta
\makebox[4em]{for} i=0,1,\dots,N. \end{cases} \]
We will denote with tildes, as in~\cite{Iguchi2018-2}, the functions obtained when replacing $N$ with $2N+2$. Hence, $\widetilde{\boldsymbol{\phi}}^\delta = (\widetilde{\phi}_0^\delta,\widetilde{\phi}_1^\delta,\dots,\widetilde{\phi}_{2N+2}^\delta)^{\rm T}$ is the solution to
\[ \begin{cases} \displaystyle
\sum_{i=0}^{2N+2} H^{p_i} \widetilde{\phi}_{i}^\delta = \phi, \\[3ex] \displaystyle
\sum_{j=0}^{2N+2} L_{ij}^\delta(H,b)\widetilde{{\phi}}_j^\delta
= H^{p_i}\sum_{j=0}^{2N+2} L_{0j}^\delta(H,b)\widetilde{{\phi}}_j^\delta
\makebox[4em]{for} i=0,1,\dots,2N+2. \end{cases} \]
We also introduce, as in \cite{Iguchi2018-2}, a modified approximate velocity potential $\widetilde{\Phi}^{\rm app}$ by
\begin{equation}\label{C:def-Phiapp} \widetilde{\Phi}^{\rm app}({\boldsymbol x},z) = \sum_{i=0}^{2N+2} ( z+1-b(\boldsymbol{x}))^{p_i}\widetilde\phi_i^\delta({\boldsymbol x}), \end{equation}
and set $\Phi^{\rm res} = \Phi-\widetilde{\Phi}^{\rm app}$ and $\boldsymbol{\varphi}^\delta = (\varphi_0^\delta,\varphi_1^\delta,\ldots,\varphi_N^\delta)^{\rm T}$ with $\varphi_j^\delta=\phi_j^\delta-\widetilde{\phi}_j^\delta$ for $j=0,1,\ldots,N$. Then, we decompose the difference $\mathscr{H}^\delta - \mathscr{H}^{\mbox{\rm\scriptsize IK},\delta}$ as
\begin{align*} &\mathscr{H}^\delta(\eta,\phi) - \mathscr{H}^{\mbox{\rm\scriptsize IK},\delta}(\eta,\phi) \\
&= \frac12\iint_{\Omega}\bigl\{ \bigl( |\nabla\Phi|^2 + \delta^{-2}(\partial_z\Phi)^2 \bigr)
- \bigl( |\nabla\widetilde{\Phi}^{\rm app}|^2 + \delta^{-2}(\partial_z\widetilde{\Phi}^{\rm app})^2 \bigr)
\bigr\}{\rm d}\boldsymbol{x}{\rm d}z \\ &\quad\;
+ \frac12\iint_{\Omega}\bigl\{ \bigl( |\nabla\widetilde{\Phi}^{\rm app}|^2
+ \delta^{-2}(\partial_z\widetilde{\Phi}^{\rm app})^2 \bigr)-\bigl( |\nabla{\Phi}^{\rm app}|^2
+ \delta^{-2}(\partial_z{\Phi}^{\rm app})^2 \bigr)\bigr\}{\rm d}\boldsymbol{x}{\rm d}z \\ &= I_1 + I_2. \end{align*}
We first evaluate $I_1$. It is easy to see that
\begin{align}
|I_1| &\leq \frac12\bigl\{
\|\nabla\Phi^{\rm res}\|_{L^2(\Omega)}\bigl(
\|\nabla\Phi\|_{L^2(\Omega)} + \|\nabla\widetilde{\Phi}^{\rm app}\|_{L^2(\Omega)} \bigr) \nonumber \\ &\quad\;\label{C:I1}
+ \delta^{-2}\|\partial_z\Phi^{\rm res}\|_{L^2(\Omega)}\bigl(
\|\partial_z\Phi\|_{L^2(\Omega)} + \|\partial_z\widetilde{\Phi}^{\rm app}\|_{L^2(\Omega)} \bigr) \bigr\}. \end{align}
By using \cite[Lemma~8.1]{Iguchi2018-2} with $k=0$ as well as~\cite[Lemma~6.4]{Iguchi2018-2} with $(k,j)=(0,2N+1)$ in the case (H1) and~\cite[Lemma~6.9]{Iguchi2018-2} with $(k,j)=(0,2[\frac{N}{2}]+1)$ in the case (H2), we find
\[
\|\nabla\Phi^{\rm res}\|_{L^2(\Omega)} + \delta^{-1}\|\partial_z\Phi^{\rm res}\|_{L^2(\Omega)} \leq \begin{cases}
C\|\nabla\phi\|_{4N+3}\;\delta^{4N+3} & \text{ in the case (H1)}, \\
C\|\nabla \phi\|_{4[\frac{N}{2}]+3}\;\delta^{4[\frac{N}{2}]+3} & \text{ in the case (H2)}, \end{cases} \]
provided that $m\geq 4(N+1)$ in the case (H1), and $m\geq 4([\frac{N}{2}]+1)$ in the case (H2). Here and in what follows, $C$ denotes a positive constant depending on $N$, $m$, $c_0$, and $M$, which changes from line to line. On the other hand, it follows from elliptic estimates given in \cite{Iguchi2009, Lannes2013} that
\[
\|\nabla\Phi\|_{L^2(\Omega)} + \delta^{-1}\|\partial_z\Phi\|_{L^2(\Omega)} \leq C\|\nabla\phi\|_0. \]
Moreover, by the definition \eqref{C:def-Phiapp} and using~\cite[Lemma~3.4]{Iguchi2018-2} with $k=0$, we see that
\begin{align*}
\|\nabla\widetilde{\Phi}^{\rm app}\|_{L^2(\Omega)}
+ \delta^{-1}\|\partial_z\widetilde{\Phi}^{\rm app}\|_{L^2(\Omega)}
&\leq C\bigl( \|\nabla\widetilde{\phi}_0^\delta\|_0
+ \|(\widetilde{\phi}_1^\delta,\ldots,\widetilde{\phi}_{2N+2}^\delta)\|_1
+ \delta^{-1}\|(\widetilde{\phi}_1^\delta,\ldots,\widetilde{\phi}_{2N+2}^\delta)\|_0 \bigr) \\
&\leq C\bigl( \|\nabla\widetilde{\phi}_0^\delta\|_0
+ \delta^{-1}\|(1-\delta^2\Delta)^{\frac12}
(\widetilde{\phi}_1^\delta,\ldots,\widetilde{\phi}_{2N+2}^\delta)\|_0 \bigr) \\
&\leq C\|\nabla\phi\|_0. \end{align*}
Plugging the above estimates into~\eqref{C:I1}, we obtain
\begin{equation}\label{C:I1.5}
|I_1| \leq \begin{cases}
C\|\nabla\phi\|_{4N+3} \|\nabla\phi\|_0 \, \delta^{4N+3} & \text{ in the case (H1)}, \\
C\|\nabla \phi\|_{4[\frac{N}{2}]+3} \|\nabla\phi\|_0 \, \delta^{4[\frac{N}{2}]+3} & \text{ in the case (H2)}, \end{cases} \end{equation}
provided that $m\geq 4(N+1)$ in the case (H1), and $m\geq 4([\frac{N}{2}]+1)$ in the case (H2).
We proceed to evaluate $I_2$ by noticing that, after the calculations in~\cite[p.~2009]{Iguchi2018-2},
\begin{align*} I_2 &= \frac12\sum_{i=0}^{2N+2}\sum_{j=0}^{2N+2}
\bigl( L_{ij}^\delta(H,b)\widetilde{\phi}_j^\delta,\widetilde{\phi}_i^\delta \bigr)_{L^2}
- \frac12\sum_{i=0}^N\sum_{j=0}^N
\bigl( L_{ij}^\delta(H,b)\phi_j^\delta,\phi_i^\delta \bigr)_{L^2} \\ &= \frac12\sum_{j=0}^{N}\sum_{i=N+1}^{2N+2} \big(
(L_{ij}^\delta(H,b) - H^{p_i}L_{0j}^\delta(H,b))\varphi_j^\delta, \widetilde{\phi}_{i}^\delta \big)_{L^2} \\ &\quad\;
- \frac12\sum_{j=N+1}^{2N+2}\sum_{i=N+1}^{2N+2} \big(
(L_{ij}^\delta(H,b) - H^{p_i}L_{0j}^\delta(H,b))\widetilde{{\phi}}_j^\delta,
\widetilde{\phi}_{i}^\delta \big)_{L^2}. \end{align*}
Therefore,
\begin{align*}
|I_2| &\leq C \bigl\{
\|\boldsymbol{\varphi}^\delta\|_{2N^*+3}
+ \| (\widetilde{\phi}_{N+1}^\delta,\ldots,\widetilde{\phi}_{2N+2}^\delta) \|_{2N^*+3} \\ &\quad\;
+ \delta^{-2} \bigl( \|\boldsymbol{\varphi}^\delta\|_{2N^*+1}
+ \| (\widetilde{\phi}_{N+1}^\delta,\ldots,\widetilde{\phi}_{2N+2}^\delta) \|_{2N^*+1}\bigr) \bigr\}
\| (\widetilde{\phi}_{N+1}^\delta,\ldots,\widetilde{\phi}_{2N+2}^\delta) \|_{-(2N^*+1)} \end{align*}
with $N^*=N$ in the case (H1) and $N^*=[\frac{N}{2}]$ in the case (H2).
Using~\cite[Lemma~6.2]{Iguchi2018-2} with $(k,j) = (2N+3,N),(2N+1,N+1),(-2N-1,N+1)$ in the case (H1) and~\cite[Lemma~6.7]{Iguchi2018-2} with $(k,j) = (2[\frac{N}{2}]+3,[\frac{N}{2}]), (2[\frac{N}{2}]+1,[\frac{N}{2}]+1),(-2[\frac{N}{2}]-1,[\frac{N}{2}]+1)$ in the case (H2), we obtain
\begin{equation}\label{C:I2}
|I_2| \leq \begin{cases}
C\|\nabla\phi\|_{4N+2}\|\nabla\phi\|_0\;\delta^{4N+2} & \text{ in the case (H1)}, \\
C\|\nabla\phi\|_{4[\frac{N}{2}]+2}\|\nabla\phi\|_0\;\delta^{4[\frac{N}{2}]+2} & \text{ in the case (H2)}, \end{cases} \end{equation} provided that $m\geq 4N+3$ in the case (H1), and $m\geq 4[\frac{N}{2}]+3$ in the case (H2). Now, \eqref{C:I1.5} and \eqref{C:I2} give the desired estimate. \end{proof}
Vincent Duch\^ene \par {\sc Institut de Recherche Math\'ematique de Rennes} \par {\sc Univ Rennes}, CNRS, IRMAR -- UMR 6625 \par {\sc F-35000 Rennes, France} \par E-mail: \texttt{vincent.duchene@univ-rennes1.fr}
Tatsuo Iguchi \par {\sc Department of Mathematics} \par {\sc Faculty of Science and Technology, Keio University} \par {\sc 3-14-1 Hiyoshi, Kohoku-ku, Yokohama, 223-8522, Japan} \par E-mail: \texttt{iguchi@math.keio.ac.jp}
\end{document}
\begin{document}
\title{ Invariant subspaces for commuting operators in a real Banach space}
\author{ Victor Lomonosov and Victor Shulman}
\maketitle
{\bf Abstract.}
It is proved that a commutative algebra $A$ of operators in a reflexive real Banach space has an invariant subspace if each operator $T\in A$ satisfies the condition
$$\|1- \varepsilon T^2\|_e \le 1 + o(\varepsilon) \text{ when } \varepsilon\searrow 0,$$
where $\|\cdot\|_e$ is the essential norm. This implies the existence of an invariant subspace for every commutative family of essentially selfadjoint operators in a real Hilbert space.
\section{Introduction }
One of the best-known unsolved problems in invariant subspace theory is the question of the existence of a (non-trivial, closed) invariant subspace for an operator $T$ with compact imaginary part (= essentially selfadjoint operator = compact perturbation of a selfadjoint operator). There are many papers on this subject; we only mention that the answer is affirmative for perturbations by operators from the Schatten--von Neumann class $\mathfrak{S}_p$ (Livshits \cite{MSL} for $p=1$, Sahnovich \cite{Sah} for $p=2$, Gohberg and Krein \cite{GK}, Macaev \cite{M1}, Schwartz \cite{Sch} --- for arbitrary $p$), or, more generally, from the Macaev ideal (Macaev \cite{M2}). But the general question is still open.
It was proved in \cite{Lom2} that an essentially self-adjoint operator in a complex Hilbert space has an invariant real subspace. Then in \cite{Lom3} the following general theorem of Burnside type was proved: \begin{theorem}\label{L1}
Suppose that an algebra $A$ of bounded operators in a (real or complex) Banach space $X$ is not dense in the algebra $\mathcal{B}(X)$ of all bounded operators on $X$ with respect to the weak operator topology (WOT). Then there are non-zero $x\in X^{**}, y\in X^*$, such that \begin{equation}\label{ineq}
|(x,T^*y)| \le \|T\|_e \text{ for all } T\in A, \end{equation}
where $\|T\|_e$ is the essential norm of $~T$, that is $\|T\|_e = \inf\{\|T+K\|: K\in \mathcal{K}(X)\}$. Here $\mathcal{K}(X)$ is the ideal of all compact operators in $X$. \end{theorem}
Using this result and developing a special variational technique, Simonic \cite{Sc} obtained significant progress on the topic: he proved that each essentially selfadjoint operator in a real Hilbert space has an invariant subspace. Deep results based on Theorem \ref{L1} were then proved by Atzmon \cite{A}, Atzmon, Godefroy and Kalton \cite{AGK}, Grivaux \cite{SofGr} and other mathematicians. Here we show that every commutative family of essentially selfadjoint operators in a real Hilbert space has an invariant subspace, and consider some analogs of this result for operators in Banach spaces. Our proof is very simple and short but it depends heavily on Theorem \ref{L1}. More precisely, we use the following improvement of Theorem \ref{L1} obtained in \cite{Lom4}:
\begin{theorem}\label{L4}
If an algebra $A\subset \mathcal{B}(X)$ is not WOT-dense in $\mathcal{B}(X)$, then there are non-zero vectors $x\in X^{**}, y\in X^*$, such that $(x,y) \geq 0$ and \begin{equation}\label{ineq2}
|(x,T^*y)| \le \|T\|_e (x,y), \text{ for all } T\in A. \end{equation} \end{theorem}
Let us mention that if $A$ contains a non-zero compact operator then by \cite{Lom0} $A$ has a non-trivial invariant subspace $L$ in $X$ (so $x$ can be chosen from $L$ and $y$ can be chosen from $L^\bot$).
The original proof of Theorem \ref{L1} was substantially simplified by Lindstrom and Shluchtermann \cite{LSc}. For completeness, we give at the end of the paper a short proof of Theorem \ref{L4}, unifying arguments from \cite{LSc} and \cite{Lom4} (and correcting some inaccuracies in \cite{Lom4}) --- in this form it has not been published before.
\section{Main results }
In this section $X$ is a real Banach space (complex spaces are considered as real ones). The standard epimorphism from $\mathcal{B}(X)$ to the Calkin algebra $\mathcal{C}(X) = \mathcal{B}(X)/\mathcal{K}(X)$ is denoted by $\pi$.
Let us say that an element $a$ of a unital normed algebra is {\it positive}, if $\|1-\varepsilon a\|\le 1+o(\varepsilon)$ for $\varepsilon > 0$, $\varepsilon\to 0$. And let us say that an element $a$ is {\it real}, if $a^2$ is positive. An operator $T\in \mathcal{B}(X)$ is {\it essentially real}, if $\pi(T)$ is a real element of $\mathcal{C}(X)$.
Clearly all selfadjoint operators in a Hilbert space are real. It is not difficult to check that Hermitian operators in a complex Banach space (defined by the condition $\|\exp(itT)\| = 1$ for $t\in \mathbb{R}$) are real. Many other operators, for instance, all involutions and all nilpotents of index two are also real. So we can see that the class of essentially real operators is very wide.
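The last two claims are easily verified from the definition; we include the one-line checks for completeness. For an involution $T^2=1$, and for a nilpotent of index two $T^2=0$, so for $0<\varepsilon<1$
\[
\|1-\varepsilon T^2\| = 1-\varepsilon \le 1
\qquad\text{and}\qquad
\|1-\varepsilon T^2\| = \|1\| = 1,
\]
respectively; in both cases $T^2$ is positive. Similarly, for a selfadjoint operator $T$ in a Hilbert space, the operator $1-\varepsilon T^2$ is selfadjoint and
\[
\|1-\varepsilon T^2\| = \sup\{|1-\varepsilon\lambda^2|: \lambda\in\sigma(T)\} \le 1
\qquad\text{whenever}\quad 0<\varepsilon\le \|T\|^{-2}.
\]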
\begin{theorem}\label{LB} If $A\subset \mathcal{B}(X)$ is a commutative algebra of essentially real operators, then there exists a non-trivial closed subspace of $X^*$, invariant for the algebra $A^*= \{T^*: T \in A\}$. \end{theorem}
\begin{proof} Note that the set of all positive elements of a Banach algebra is a convex cone. Moreover, this cone is norm-closed. Indeed, let $a = \lim_{n\to \infty}a_n$ where all $a_n$ are positive. If $a$ is not positive then there are a sequence $\varepsilon_n \to 0$ and a number $C> 0$ such that $\|1 - \varepsilon_n a\| > 1 + C\varepsilon_n$ for all $n$. Taking $k$ with $\|a-a_k\| < C/2$, we get that $\|1 - \varepsilon_na_k\| > 1 + C\varepsilon_n - \|a-a_k\|\varepsilon_n > 1 + (C/2)\varepsilon_n$, contradicting the positivity of $a_k$.
Hence the set of real elements is also norm closed. Applying this to $\mathcal{C}(X)$ we see that the set of essentially real operators is norm-closed as well. This allows us to assume that the algebra $A$ is norm-closed. Obviously we may assume also that $A$ is unital. Therefore $\exp(T)\in A$, for each $T\in A$.
Since $A$ is commutative it is not WOT-dense in $\mathcal{B}(X)$. Applying Theorem \ref{L4}, we find non-zero vectors $x\in X^{**}, y\in X^*$, such that the condition (\ref{ineq2}) holds.
Therefore, for $T\in A$ and $\varepsilon \searrow 0$, we have
$$(x,(1-\varepsilon (T^2)^*)y) \le \|1-\varepsilon T^2\|_e(x,y) \le (1+o(\varepsilon))(x,y),$$ hence $-\varepsilon(x, (T^2)^*y) \le o(\varepsilon)(x,y)$ and $(x,(T^2)^*y)\ge 0$, because $(x,y)\ge 0$.
Since $\exp(T) = (\exp(T/2))^2$, we get that \begin{equation}\label{1}
(x,\exp(T^*)y) \ge 0 \text{ for }T\in A.
\end{equation}
Let $K$ be the closed convex envelope of the set $M = \{\exp(T^*)y: T\in A\}\subset X^*$. Since $M$ is invariant under all operators $\exp(T^*)$, the same is true for $K$.
Since $K\neq X^*$ (by (\ref{1})) and $K$ is not a singleton (otherwise we trivially have an invariant subspace $\mathbb{R}y$), the boundary $\partial K$ of $K$ is not a singleton.
By the Bishop-Phelps theorem \cite{BPh}, the set of support points is dense in $\partial K$, so there is a non-zero functional $x_0\in X^{**}$ supporting $K$ in some point $0\neq y_0\in K$. That is
$$(x_0,y_0) \le (x_0,z) \text{ for all } z\in K.$$
By the above arguments, $\exp(T^*)y_0\in K$ for $T\in A$, therefore $$ (x_0,y_0) \le (x_0,\exp(T^*)y_0) \text{ for all } T\in A.$$
It follows that for each $T\in A$, the function $\phi(t) = (x_0,\exp(tT^*)y_0)$ has a minimum at $t = 0$. For this reason $\phi'(0) = 0$ and
$(x_0,T^*y_0) = 0$. Hence the subspace $A^*y_0$ is not dense in $X^*$, and its norm-closure $\overline{A^*y_0}$ is a non-trivial invariant subspace. \end{proof}
\begin{corollary}\label{LBref} A commutative algebra of essentially real operators in a reflexive real Banach space has a non-trivial invariant subspace. \end{corollary} Since the algebra generated by a commutative family of essentially selfadjoint operators consists of essentially selfadjoint operators we get the following result:
\begin{corollary}\label{L} Any commutative family of essentially selfadjoint operators in a real Hilbert space has a non-trivial invariant subspace. \end{corollary}
Returning to criteria for individual operators, let us consider the class $(E)$ of operators $T$ such that all polynomials $p(T)$ of $T$ are essentially real.
\begin{corollary}\label{LB2} Each operator $T\in (E)$ in a reflexive real Banach space has a non-trivial invariant subspace. \end{corollary}
Atzmon, Godefroy and Kalton \cite{AGK} introduced the class $(S)$ of all operators, satisfying the condition \begin{equation}\label{S}
\|p(T)\|_e \le \sup\{|p(t)|: t\in L\}, \text{ for each polynomial } p, \end{equation}
where $L$ is a compact subset of $\mathbb{R}$. It was proved in \cite{AGK} that all operators in $(S)$ have invariant subspaces. It is not difficult to see that $(S)\subset (E)$ (indeed, if $T\in (S)$ then $\|1- \varepsilon p(T)^2\|_e \le \sup \{ |1-\varepsilon p(t)^2|: t\in L\} \le 1$ if $\varepsilon$ is sufficiently small), so this result follows from Corollary \ref{LB2}.
\section{Proof of Theorem \ref{L4}}
Without loss of generality one can assume that $A$ is norm-closed. Since the algebra $A$ is not WOT-dense in $\mathcal{B}(X)$, the algebra $A^*$ is not WOT-dense in $\mathcal{B}(X^*)$. Suppose, aiming at the contrary, that $A^*$ is transitive (has no invariant subspaces). Set $F = \{T^*\in A^*: \|T\|_e < 1\}$ and fix $\varepsilon \in (0, \frac{1}{10})$. Choose $y_0\in X^*$ with $\|y_0\| = 3$ and let $S$ be the ball $\{y\in X^*: \|y-y_0\| \le 2\}$.
Let us first suppose that $Fy$ is dense in $X^*$ for each non-zero $y\in X^*$. Then the same is true for $\varepsilon Fy$. It follows that for every $y\in S$ there exists $T^*_y\in \varepsilon F$ with $\|T^*_yy - y_0\|<1$. By the definition of $F$, $T^*_y = R^*_y + K^*_y$, where $\|R_y\| < \varepsilon$, $K_y \in \mathcal{K}(X)$. Thus $\|K^*_yy - y_0\| \le \|T^*_yy-y_0\| + \|R^*_yy\| < 1 + 5\varepsilon$ (since $\|y\| \leq 5$, for each $y\in S$). Let $\tau$ denote the (relative) *-weak topology of $S$; then compactness of $K_y$ implies that $K_y^*$ continuously maps $(S,\tau)$ to $(X^*, \|\cdot\|)$. Therefore, for each $y\in S$, there is a $\tau$-neighborhood $V_y$ of $y$ such that $\|K^*_yz - y_0\| < 1 +5\varepsilon$, for each $z\in V_y$, and $\|T^*_yz-y_0\|< 1+5\varepsilon + 5\varepsilon < 2$. In other words, $T^*_y$ maps $V_y$ to $S$.
The sets $V_y$, $y\in S$, form an open covering of $S$. Since $S$ is $\tau$-compact there is a finite subcovering $\{V_{y_i}: 1\le i\le n\}$. Let $\{\varphi_i: 1\le i\le n\}$ be a partition of unity related to this subcovering. We define a $\tau$-continuous map $\Phi: S \to S$ by $\Phi(y) = \sum_{i=1}^n\varphi_i(y)T^*_{y_i}(y)$. By Tychonoff's fixed point theorem, $\Phi$ has a fixed point $z\in S$. This means that $T^*z = z$, where $T^* = \sum_{i=1}^n\varphi_i(z)T^*_{y_i}$. Since the set $\varepsilon F$ is convex we get that $T^* \in \varepsilon F$ and $\|T^*\|_e \leq \|T\|_e < \frac{1}{10}$.
For this reason 1 is an eigenvalue of $T^*$ exceeding $\|T^*\|_e$ and hence it is an isolated point in $\sigma(T^*)$. The corresponding Riesz projection is of finite rank and belongs to $A^*$. But it is well known (see e.g. \cite[Theorem 8.2]{RR}), that a transitive algebra containing a non-zero finite rank operator is $WOT$-dense in the algebra of all operators. Since $A^*$ is not $WOT$-dense in $\mathcal{B}(X^*)$, we obtain a contradiction.
Hence there exists $y_0\in X^*$ such that $Fy_0$ is not norm-dense in $X^*$. Let $V$ be the norm-closure of $Fy_0$. If $V = \{0\}$ then we have the inequality (\ref{ineq2}) with $y=y_0$ and any non-zero $x$.
If $V \neq \{0\}$ then $V$ is a norm-closed convex proper subset of $X^*$ containing more than one point. By the Bishop--Phelps Theorem \cite{BPh}, there are $0\neq y \in V$ and $0\neq x \in X^{**}$ such that $Re(x,y) = \sup \{Re(x,w): w\in V\}$. Since the set $V$ is invariant under multiplication by any number $t$ with $|t| \le 1$, we have $(x,y) \geq 0$ and $(x,y) \geq |(x,w)|$ for all $ w\in V$. Since $F^2\subset F$, we have that $Fy\subset V$ and $|(x,T^*y)| \le (x,y)$, for any $T^*\in F$. Therefore the inequality $|(x,T^*y)|\le \|T\|_e(x,y)$ holds for all $T\in A$. $~~\blacksquare$
\noindent Dept of Math.\\ \noindent Kent State University\\ \noindent Kent OH 44242, USA \\ lomonoso@math.kent.edu
\noindent Dept of Math.\\ \noindent Vologda State University\\ \noindent Vologda 160000, Russia\\ shulman.victor80@gmail.com
\end{document}
\begin{document}
\pagestyle{myheadings} \markright{{\small{\sc F\"uredi, Jiang, Kostochka, Mubayi, and Verstra\"ete: Hypergraph blowups of trees}}}
\title{
Extremal problems for hypergraph blowups of trees\footnote{This project started at SQuaRE workshop at the American Institute of Mathematics.}}
\author{ {\large{Zolt\'an F\"uredi}}\thanks{ \footnotesize {Alfr\'ed R\'enyi Institute of Mathematics, Budapest, Hungary. E-mail: \texttt{z-furedi@illinois.edu}. Research partially supported by grant KH130371 from the National Research, Development and Innovation Office NKFIH.}} \and {\large{Tao Jiang}}\thanks{ \footnotesize {Department of Mathematics, Miami University, Oxford, OH 45056, E-mail: \texttt{jiangt@miamioh.edu}. Research partially supported by National Science Foundation award DMS-1400249.}} \and {\large{Alexandr Kostochka}}\thanks{ \footnotesize {University of Illinois at Urbana--Champaign, Urbana, IL 61801
and Sobolev Institute of Mathematics, Novosibirsk 630090, Russia. E-mail: \texttt {kostochk@math.uiuc.edu}.
Research supported in part by NSF grant DMS-1600592 and by grants 18-01-00353A and 19-01-00682
of the Russian Foundation for Basic Research. }} \and {\large{Dhruv Mubayi}}\thanks{ \footnotesize {Department of Mathematics, Statistics, and Computer Science, University of Illinois at Chicago, Chicago, IL 60607. E-mail: \texttt{mubayi@uic.edu.} Research partially supported by NSF awards DMS-1300138 and 1763317.}} \and{\large{Jacques Verstra\"ete}}\thanks{Department of Mathematics, University of California at San Diego, 9500 Gilman Drive, La Jolla, California 92093-0112, USA. E-mail: {\tt jverstra@math.ucsd.edu.} Research supported by NSF awards DMS-1556524 and DMS-1800332.}}
\date{March 2, 2020} \maketitle
\begin{abstract}
In this paper we present a novel approach in extremal set theory which may be viewed as an asymmetric version of Katona's permutation method. We use it to find more Tur\'an numbers of hypergraphs in the Erd\H{o}s--Ko--Rado range.
An $(a,b)$-path $P$ of length $2k-1$ consists of $2k-1$ sets of size $r=a+b$ as follows. Take $k$ pairwise disjoint $a$-element sets $A_0, A_2, \dots, A_{2k-2}$ and another $k$ pairwise disjoint $b$-element sets
$B_1, B_3, \dots, B_{2k-1}$ and order them linearly as $A_0, B_1, A_2, B_3, A_4,\dots$. Define the (hyper)edges of $P_{2k-1}(a,b)$ as the sets of the form $A_i\cup B_{i+1}$ and $B_j\cup A_{j+1}$. The members of $P$ can be represented as $r$-element intervals of the $(ak+bk)$-element underlying set.
Our main result is about hypergraphs that are blowups of trees, and implies that for fixed $k,a,b$, as $n\to \infty$ \[ {\rm{ex}}_r(n,P_{2k-1}(a,b)) = (k - 1){n \choose r - 1} + o(n^{r - 1}).\]
This generalizes the Erd\H os--Gallai theorem for graphs which is the case of $a=b=1$. We also determine the asymptotics when $a+b$ is even; the remaining cases are still open. \end{abstract}
\section{Paths}
\subsection{Definitions concerning $r$-uniform hypergraphs, Two constructions} An $r$-uniform hypergraph, or simply {\em $r$-graph}, is a family of $r$-element subsets of a finite set. We associate an $r$-graph $F$ with its edge set and call its vertex set $V(F)$. Usually we take $V(F)=[n]$, where $[n]$ is the set of first $n$ integers, $[n]:=\{ 1, 2, 3,\dots, n\}$. We also use the notation $F\subseteq \binom{[n]}{r}$. For a hypergraph $H$, a vertex subset $C$ of $H$ that intersects all edges of $H$ is called a {\em vertex cover} of $H$. Let $\tau(H)$ be the minimum size of a vertex cover of $H$. Let $\Psi_c(n,r)$ be the $r$-graph with vertex set $[n]$ consisting of all $r$-edges meeting $[c]$. Then $\Psi$ has the maximum number of $r$-sets such that $\tau(\Psi)\leq c$. When $r$ and $c$ are fixed and $n\to \infty$, \begin{equation}\label{eq:psi}
|\Psi_c(n,r)|= \binom{n}{r}- \binom{n-c}{r} =\, c{n \choose r - 1} + o(n^{r - 1}).
\end{equation}
A {\em crosscut} of a hypergraph $H$ is a set $X \subset V(H)$ such that $|e \cap X| = 1$ for all $e \in H$. Not all hypergraphs have crosscuts. Let $\sigma(H)$ denote the smallest size of a crosscut in a hypergraph $H$ with at least one crosscut. Clearly $\tau(H)\le \sigma(H)$, since a crosscut is a vertex cover. Here strict inequality may hold, as shown by a double star whose adjacent centers have high degrees. Define $ \Psi^1_{c}(n,r):=\{ E\subset [n] : |E|=r, |E \cap [c] |= 1\}$, so it consists of all $r$-sets intersecting a fixed $c$-element subset of $V(H)$ at {\em exactly} one vertex. Then for large enough $n$, $\Psi^1$ has the maximum number of $r$-sets such that $\sigma(\Psi^1)\leq c$. Let us refer to this hypergraph as the {\em crosscut construction}. When $r$ and $c$ are fixed and $n\to \infty$, \begin{equation}\label{eq:psi1}
|\Psi^1_c(n,r)|= c \binom{n-c}{r-1} =\, c{n \choose r - 1} + o(n^{r - 1}).
\end{equation}
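As a sanity check (not part of the original argument), the exact counts in \eqref{eq:psi} and \eqref{eq:psi1} can be verified by brute-force enumeration; the following Python sketch does so for one arbitrarily chosen triple $(n,r,c)$:

```python
from itertools import combinations
from math import comb

def psi(n, r, c):
    # all r-subsets of [n] = {1, ..., n} meeting [c] = {1, ..., c}
    return [e for e in combinations(range(1, n + 1), r)
            if any(v <= c for v in e)]

def psi1(n, r, c):
    # all r-subsets of [n] meeting [c] in exactly one vertex
    return [e for e in combinations(range(1, n + 1), r)
            if sum(v <= c for v in e) == 1]

n, r, c = 9, 3, 2
assert len(psi(n, r, c)) == comb(n, r) - comb(n - c, r)
assert len(psi1(n, r, c)) == c * comb(n - c, r - 1)
```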
Given an $r$-graph $F$, let ${\rm{ex}}_r(n,F)$ denote the maximum number of edges in an $r$-graph on $n$ vertices that does not contain a copy of $F$ (if the uniformity is obvious from context, we may omit the subscript $r$). Crosscuts were introduced in~\cite{FF} to get the following obvious lower bounds \begin{equation}\label{eq3}
{\rm{ex}}(n,F) \geq |\Psi_{\tau(F)-1}(n,r)|,\quad \text{and if a crosscut exists, then }\enskip {\rm{ex}}(n,F)
\geq |\Psi^1_{\sigma(F)-1}(n,r)|. \end{equation}
{\bf Notation.} If $H$ is a hypergraph and $e \subset V(H)$, the {\em neighborhood} of $e$ is $\Gamma_H(e) = \{f\setminus e : e \subseteq f$, $f \in H\}$ and the {\em degree} of $e$ is
$d_H(e) = |\Gamma_H(e)|$. For an integer $ p$, let the {\em $p$-shadow}, $\partial_p H$, be the collection of $p$-sets that lie in some edge of $H$. If $H$ is an $r$-graph, then the $(r-1)$-shadow of $H$ is simply called the {\em shadow} and is denoted by $\partial H$.
Whenever we write $f(n)\sim g(n)$, we always mean $\lim _{n \to \infty }f(n)/g(n)=1$ while the other variables of $f$ and $g$ are fixed. This is the case even if the variable $n$ is not indicated.
{\bf Aims of this paper.} We have two aims. First, to find more Tur\'an numbers (or estimates) of hypergraphs in the Erd\H{o}s--Ko--Rado range. We are especially interested in cases when the excluded configuration is ``dense'', i.e., it has only a few vertices of degree one. Second, we present an asymmetric version of Katona's permutation method, in which we first solve (estimate) the problem only on a well-chosen substructure. The $(a,b)$-blowups of trees and paths are good examples for both of our aims.
\subsection{Paths in graphs} A fundamental result in extremal graph theory is the Erd\H os--Gallai Theorem~\cite{EG}, that \begin{equation}\label{eq:2}
{\rm{ex}}_2(n,P_\ell) \leq \frac{1}{2}(\ell - 1)n, \end{equation} where $P_\ell$ is the $\ell$-{\it edge path}. (Warning! This is a non-standard notation). Equality holds in~\eqref{eq:2} if and only if $\ell$ divides $n$ and all connected
components of the extremal graph $G$ are $\ell$-vertex complete graphs. The Tur\'an function ${\rm{ex}}(n,P_\ell)$ was determined exactly for every $\ell$ and $n$ by Faudree and Schelp~\cite{FaudreeSchelpJCT75} and independently by Kopylov~\cite{Kopylov}. Let $n\equiv r$ {\rm (mod $\ell$)}, $0\le r< \ell$. Then $ {\rm{ex}}(n ,P_\ell)= \frac{1}{2} (\ell-1)n - \frac{1}{2}r(\ell-r)$. They also described the extremal graphs which are either
\break\phantom{~}\hskip\parindent
--- vertex disjoint unions of $\lfloor n/\ell \rfloor$ complete graphs $K_{\ell}$ and a $K_r$, or
\break\phantom{~}\hskip\parindent
--- $\ell$ is odd, $\ell= 2k-1$, and $r=k$ or $k-1$. Then other extremal graphs with completely different structure can be obtained by taking a vertex disjoint union of $m$ copies of $K_{\ell}$ ($0\le m < \lfloor n/\ell\rfloor $) and a copy of $\Psi_{k-1}(n-m\ell, 2)$, i.e., an $(n-m\ell)$-vertex graph with a $(k-1)$-set meeting all edges.
This variety of extremal graphs makes the solution difficult.
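As a quick sanity check (ours, not part of the paper), one can confirm that the first family of extremal graphs above attains the Faudree--Schelp/Kopylov value: $\lfloor n/\ell\rfloor$ vertex-disjoint copies of $K_\ell$ together with one $K_r$ have exactly $\frac{1}{2}(\ell-1)n - \frac{1}{2}r(\ell-r)$ edges, where $n\equiv r \pmod\ell$. A short brute-force Python sketch over small parameters:

```python
from math import comb

def faudree_schelp(n, l):
    # (l-1)n/2 - r(l-r)/2 with n = r (mod l), 0 <= r < l
    r = n % l
    return ((l - 1) * n - r * (l - r)) // 2

def clique_construction(n, l):
    # edges of floor(n/l) disjoint copies of K_l plus one K_{n mod l}
    q, r = divmod(n, l)
    return q * comb(l, 2) + comb(r, 2)

# the two expressions agree identically
for n in range(1, 40):
    for l in range(2, 10):
        assert faudree_schelp(n, l) == clique_construction(n, l)
```

The agreement is an exact algebraic identity, as expanding $(\ell-1)(q\ell+r) - r(\ell-r)$ shows; the loop merely confirms it numerically.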
We generalize these theorems for some hypergraph paths and trees.
\subsection{Paths in hypergraphs}
{\bf Paths of length $2$.} \quad Two $r$-sets with intersection size $b$ can be considered as a hypergraph path $P_2(a,b)$ of length two, where $a+b=r$, and $1\leq a, b\leq r-1$. If $H\subset \binom{[n]}{r}$ is $P_2(1,r-1)$-free then the obvious inequality $r |H|=|\partial(H)|\leq \binom{n}{r-1}$ yields the upper bound in the following result: \begin{equation}\label{eq:3}
\frac{1}{r}\binom{n}{r-1}-O(n^{r-2})< P(n, r, r-1)={\rm{ex}}_r(n, P_2(1,r-1)) \leq \frac{1}{r}\binom{n}{r-1}.
\end{equation} Here, for any given $r$, equality holds if $n$ is sufficiently large ($n > n_0(r)$) and certain divisibility conditions are satisfied (see Keevash~\cite{Keevash}).
The case $b=1$ was solved asymptotically by Frankl~\cite{Frankl1977} and the general case was handled in~\cite{FF85}. \begin{equation}\label{eq:4}
{\rm{ex}}_r(n, P_2(a,b)) = \Theta\left( n^{\max\{ a-1,b \}}\right).
\end{equation} Here the right-hand side of~\eqref{eq:4} is $o(n^{r-1})$ (for $1\leq a, b\leq r-1$).
Two disjoint $r$-sets can be considered as a $P_2(r,0)$ so~\eqref{eq:4} also holds for $a=r$ since
the maximum size of an intersecting family of $r$-sets is $\binom{n-1}{r-1}$ for $n\geq 2r$
by the Erd\H{o}s--Ko--Rado theorem~\cite{EKR}.
{\bf Definition.}\quad Suppose that $a,b,\ell$ are positive integers, $r=a+b$. The $(a,b)$-{\em path} $P_{\ell}(a,b)$ {\em of length} $\ell$ is an $r$-uniform hypergraph obtained from a (graph) path $P_\ell$ by blowing up its vertices to $a$-sets and $b$-sets. More precisely,
an $(a,b)$-path $P_{\ell}(a,b)$ of length $2k-1$ consists of $2k-1$ sets of size $r=a+b$ as follows. Take $2k$ pairwise disjoint sets $A_0, A_2, \dots, A_{2k-2}$ with $|A_i|=a$ and
$B_1, B_3, \dots, B_{2k-1}$ with $|B_j|=b$ and define the (hyper)edges of $P_{2k-1}(a,b)$ as the sets of the form $A_i\cup B_{i+1}$ and $B_j\cup A_{j+1}$. If the $ak+bk$ elements are ordered linearly, then the members of $P$ can be represented as intervals of length $r$. By adding one more set $A_{2k}$ to the underlying set together with the hyperedge $B_{2k-1}\cup A_{2k}$ we obtain the
$(a,b)$-path of even length, $P_{2k}(a,b)$.
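To make the definition concrete, the following Python sketch (ours, not the authors') builds $P_{2k-1}(a,b)$ as consecutive blocks of the $(ak+bk)$-element underlying set and checks the stated properties: $2k-1$ edges of size $r$, each an $r$-element interval, with consecutive edges meeting in a full $A$- or $B$-block:

```python
def ab_path(l, a, b):
    """Edges of the (a,b)-path P_l(a,b): path vertices are blown up
    alternately to a-sets A_0, A_2, ... and b-sets B_1, B_3, ...,
    laid out as consecutive blocks of {0, 1, 2, ...}."""
    sizes = [a if i % 2 == 0 else b for i in range(l + 1)]
    blocks, start = [], 0
    for s in sizes:
        blocks.append(frozenset(range(start, start + s)))
        start += s
    # edge i is the union of two consecutive blocks
    return [blocks[i] | blocks[i + 1] for i in range(l)]

k, a, b = 3, 3, 2
P = ab_path(2 * k - 1, a, b)
assert len(P) == 2 * k - 1                           # 2k-1 edges
assert all(len(e) == a + b for e in P)               # each of size r
assert all(max(e) - min(e) + 1 == a + b for e in P)  # r-element intervals
for i in range(len(P) - 1):   # consecutive edges share a full block
    assert len(P[i] & P[i + 1]) == (b if (i + 1) % 2 == 1 else a)
```

Appending one further block of size $a$ and the corresponding final edge yields $P_{2k}(a,b)$, exactly as in the definition.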
\begin{center}
\includegraphics[scale=0.3]{P532.eps} \quad\quad \includegraphics[scale=0.3]{P523.eps} \quad\quad\quad $P_5(3,2)=P_5(2,3)$.\end{center}
While $P_{2k-1}(a,b)=P_{2k-1}(b,a)$ we have that $P_{2k}(a,b)\neq P_{2k}(b,a)$\enskip for $a\neq b$.
\begin{center}
\includegraphics[scale=0.3]{P632.eps} \quad\quad \includegraphics[scale=0.3]{P623.eps} \end{center}
${}$ \vskip -0.8cm
{\bf On $(a,b)$-paths of length $3$.} \\ In the case $\ell=3$ an $(a,b)$-path has three $r$-sets, two of which are disjoint and together cover the third in a prescribed way. For given $1\leq a,b <r$, $r=a+b$ and for $n> n_2(r)$, F\"uredi and \"Ozkahya~\cite{FurOzk} showed that \begin{equation*}
{\rm{ex}}_r(n, P_3(a,b)) = \binom{n-1}{r-1}.
\end{equation*}
{\bf Longer paths.} \\ Our first goal is to prove a nontrivial extension of the Erd\H{o}s--Gallai Theorem~\eqref{eq:2} for $r$-graphs.
There are several ways to define a hypergraph path $P$. One of the most difficult cases appears to be the case when $P$ is a {\em tight path} of length $\ell$, namely the $r$-graph $Tight\, P_\ell^r$ with edges $\{1,2,\dots,r\},\{2,3,\dots,r+1\},\dots,\{\ell,\ell+1,\dots,\ell+r-1\}$.
The best known results~\cite{FJKMV_Zigzag} for this special case are
\[ \frac{\ell-1}{r}{n \choose r-1} \leq {\rm{ex}}_r(n,Tight\, P_\ell^r) \leq \left\{\begin{array}{ll}
\frac{\ell-1}{2}{n \choose r - 1} & \mbox{ if }r\mbox{ is even,} \\
\frac{1}{2}(\ell + \lfloor \frac{\ell-1}{r}\rfloor){n \choose r - 1} & \mbox{ if }r\mbox{ is odd,}
\end{array}\right.\] where the lower bound holds as long as certain designs exist.
Another possibility is the $r$-uniform {\em loose path} (also called {\em linear path}) $Lin\, P_\ell^r$, which is obtained from $P^2_\ell$ by enlarging each edge with a new set of $(r-2)$
vertices such that these new $(r-2)$-sets are pairwise disjoint (so $|V(Lin\, P^r_\ell)|= \ell(r-1)+1$). Recently, the authors of~\cite{FJS, KMV} determined ${\rm{ex}}_r(n, Lin\, P_\ell^r)$ exactly for large $n$, extending work of Frankl~\cite{Frankl1977} who solved the case $\ell=2$ by answering a question of Erd\H os and S\'os~\cite{S} (see~\cite{KMW} for a solution for all $n$ when $r=4$).
Here we consider the $(a,b)$-blowup of $P_\ell$. Since the case $\ell=2$ behaves somewhat differently, see~\eqref{eq:3} and~\eqref{eq:4},
we only discuss the case $\ell\geq 3$.
Suppose that $a+b=r$, $a,b\geq 1$, $r\geq 3$ and suppose that $\ell\in \{ 2k-1, 2k\}$, $\ell\geq 4$. Furthermore, suppose that these values are fixed and $n\to \infty$ or $n> n_3(r,k)$. Recall that $ \Psi_{k-1}(n,r)=\{ E\subset [n] : |E|=r,\ E \cap [k-1] \not= \emptyset\}$. We have the lower bound \begin{eqnarray*}
{\rm{ex}}_r(n,P_{2k}(a,b)) &\geq & {\rm{ex}}_r(n,P_{2k-1}(a,b)) \\ &\geq& |\Psi_{k-1}(n,r)| =\binom{n}{r}-\binom{n-k+1}{r} =(k - 1){n \choose r - 1}+ o(n^{r- 1}) .
\end{eqnarray*}
Our main result (Theorem~\ref{mainexact}) implies that here equality holds
for at least 75\% of the cases. \begin{theorem} \label{th:path} Suppose that $a+b=r$, $a,b\geq 1$, $k\geq 2$. Concerning the Tur\'an number of $P_{\ell}(a,b)$, the $(a,b)$-blowup of a path of length $\ell$, we have \begin{gather*} {\rm{ex}}_r(n,P_{2k-1}(a,b)) = { (k - 1){n \choose r - 1}+ o(n^{r - 1}) }\quad {\rm for \, \, all \,\, odd }\,\,\, r,
\\ {{\rm{ex}}_r(n,P_{2k}(a,b)) } = (k - 1){n \choose r - 1}+ o(n^{r - 1}) \quad {\rm for }\,\,\, a>b.
\end{gather*} Moreover, if $a\neq b$, $a,b\geq 2$, $\ell=2k-1$, then $\Psi_{k-1}(n,r)$ is the only extremal family. \end{theorem}
The remaining cases ($\ell$ is even and $a\leq b$) are still open.
\begin{conjecture} $\Psi_{k-1}(n,r)$ gives the correct asymptotic of the Tur\'an number in all the above cases.
\end{conjecture}
\section{Trees blown up, our main results}
Generalizing the Erd\H{o}s--Gallai Theorem~\eqref{eq:2},
Ajtai, Koml\'os, Simonovits and Szemer\'{e}di~\cite{AKSS} claimed a proof of the Erd\H{o}s--S\'{o}s Conjecture~\cite{ErdosSos}, showing that if $T$ is any tree with $\ell$ edges, where $\ell$ is large enough, then for all $n$, \[ {\rm{ex}}_2(n,T) \leq \frac{1}{2}(\ell - 1)n.\]
A more general conjecture due to Kalai (see in~\cite{FF}) is about the extremal number for hypergraph trees. A hypergraph $T$ is a {\em forest} if it consists of edges $e_1,e_2,\dots,e_\ell$ ordered so that for every $1<i\leq \ell$, there is $1\leq i'<i$ such that $e_i \cap
(\bigcup_{j < i} e_j) \subseteq e_{i'}$. A connected forest is called a {\em tree}. If $T$ is $r$-uniform and for each $i>1$, $|e_i\cap (\bigcup_{j<i} e_j)|=r-1$, then we say that $T$ is a {\em tight tree}.
\begin{conjecture} {\bf (Kalai)} Let $T$ be an $r$-uniform tight tree with $\ell$ edges. Then \begin{equation*}
{\rm{ex}}_r(n,T) \leq \frac{\ell - 1}{r}{n \choose r - 1}.
\end{equation*} \end{conjecture}
When $r = 2$, this is precisely the Erd\H{o}s--S\'{o}s Conjecture. A simple greedy argument shows the following. \begin{proposition}\label{general-bound}
If $T$ is an $r$-uniform tight tree with $\ell$ edges and $G$ is an $r$-graph on $[n]$ not containing $T$, then $|G|\leq (\ell-1)|\partial(G)|$. \end{proposition} Here $\partial(G)$ is the family of $(r-1)$-sets that lie in some edge of $G$. We obtain \begin{equation*}
{\rm{ex}}_r(n,T) \leq (\ell - 1){n \choose r - 1}.
\end{equation*}
Our goal is to prove a nontrivial extension of the Erd\H{o}s--Gallai Theorem and the Erd\H{o}s--S\'{o}s Conjecture for $r$-graphs. To define the hypergraph trees we study in this paper, we make the following more general definition:
\begin{definition} Let $s,t, a,b > 0$ be integers,
$r=a + b$, and let $H = H(U,V)$ denote a bipartite graph with parts $U = \{u_1,u_2,\dots,u_s\}$
and $V = \{v_1,v_2,\dots,v_t\}$. Let $U_1, \dots, U_s$ and $V_1, \dots, V_t$ be pairwise disjoint sets, such that $|U_i|=a$ and $|V_j|=b$ for all $i,j$. So $\left| \bigcup U_i\cup V_j \right| = as+bt$.
The {\em $(a,b)$-blowup} of $H$, denoted by $H(a,b)$, is the $r$-uniform hypergraph with edge set \[ H(a,b):= \{U_i \cup V_j : u_iv_j \in E(H)\}\] \end{definition}
\vskip -0.2cm
\includegraphics[height=0.9in]{tree53} $\quad \Longrightarrow\quad $
\includegraphics[height=1.2in]{tree53_42}
Since an $(a,b)$-blowup of a bipartite graph $H$ always has a crosscut,
$\sigma(H(a,b))$ is well defined. Since deleting a vertex cover from a bipartite graph leaves an independent set, each crosscut in a connected bipartite graph is one of its parts, so
$\sigma(\mathcal{H})=\min\{s,t\}$. Then the crosscut construction~\eqref{eq:psi1}, $ \Psi^1_{\sigma-1}(n,r):=\{ E\subset [n] : |E|=r, |E \cap [\sigma-1] |= 1\}$, yields that \begin{equation}\label{eq:cclower} (\sigma -1){n \choose r - 1} + o(n^{r - 1})
= (\sigma-1) \binom{n-\sigma +1}{r-1} = |\Psi^1_{\sigma -1}(n,r)|\leq {\rm{ex}}_r(n,H(a,b)).
\end{equation}
Let $\mathcal{T}_{s,t}$ denote the family of
trees $T$ with parts $U$ and $V$ where $|U|= s$ and $|V| = t$. We frequently say that $T$ is a tree with $s+t$ vertices. Let $\mathcal{T}_{s,t}(a,b)$ denote the family of $(a,b)$-blowups of
trees $T\in \mathcal{T}_{s,t}$. We frequently suppose that $a\geq b$ (but not always).
We investigate the problem of determining when crosscut constructions are asymptotically extremal for $(a,b)$-blowups of trees. For other instances of hypergraph trees for which the crosscut constructions are asymptotically extremal,
see~\cite{KMV2}. Our main result is the following theorem.
\begin{theorem}\label{th:main} Suppose $r \geq 3$, $s,t \geq 2$, $a + b = r$, $b < a < r$. {Let $T$ be a tree on $s+t$ vertices and let ${\mathcal T}=T(a,b)$ be its $(a,b)$-blowup. } Then (as $n\to \infty$) any $\mathcal{T}$-free $n$-vertex $r$-graph $H$ satisfies
\[ |H| \leq (t - 1){n \choose r-1} + o(n^{r-1}).\] This is asymptotically sharp whenever $t\leq s$. \end{theorem} Indeed, in the case $t\leq s$ we have $\sigma(\mathcal{T})=t$ and~\eqref{eq:cclower} provides
a matching lower bound.
A vertex $x$ of $T\in \mathcal{T}_{s,t}$ is called a {\em critical leaf} if $\sigma(T\setminus x)< \sigma(T)$. In the case $t\leq s$ this simply means that $\deg_T(x)=1$ and $x\in V$. (Similarly, a {\em critical leaf} of $\mathcal{T}=T(a,b) \in \mathcal{T}_{s,t}(a,b)$ with $t\leq s$ is a $b$-set $V_j$ in the part of size $t$ whose degree in $\mathcal{T}$ is one). If such a vertex exists then we have a more precise upper bound.
\begin{theorem} \label{mainexact} Suppose $r \geq 5$, $2\leq t \leq s$, $a + b = r$, $b < a < r-1$. {Let $T$ be a tree on $s+t$ vertices and let ${\mathcal T}=T(a,b)$ be its $(a,b)$-blowup. } Suppose that $T$ has a critical leaf. Then for large enough $n$ ($n > n_0(T)$) \[ {\rm{ex}}(n,\mathcal{T}) \le {n \choose r} - {n - t + 1 \choose r}.\] If, in addition, $\tau(\mathcal{T})=t$, then equality holds above and the only example achieving the bound is $\Psi_{t-1}(n,r)$. \end{theorem}
Since $\tau(\Psi_{t-1}(n,r))=t-1$, no $r$-graph $F$ with $\tau(F)\geq t$ is contained in $ \Psi_{t-1}(n,r)$.
\section{Asymptotics}
In this section we prove the asymptotic version of our main results, i.e., Theorem~\ref{th:main}.
\subsection{Definition of templates and a lemma.}
Throughout this section, $\mathcal{T} \in \mathcal{T}_{s,t}(a,b)$ and we suppose $\mathcal{T}$ is an $(a,b)$-blowup of a tree $T$. If $H$ is an $r$-graph, then an {\em $(a,b)$-template in $H$} is a pair $(A,B)$ where $A$ is an $a$-uniform hypergraph on $V(H)$, $B$ is a $b$-uniform matching on $V(H)$, and $V(A) \cap V(B) = \emptyset$. Define the bipartite graph \[ H_0 = H_0(A,B) = \{(e,f) \in A \times B : e \cup f \in H\}\] and let $H_1 = H_1(A,B) = \{e \cup f : (e,f) \in H_0\}
\subset H$. By construction, $|H_0| = |H_1|$. We claim that {\em if $A$ and $B$ are both matchings and $H_1(A,B)$ is $\mathcal{T}$-free, then} \begin{equation}\label{template1}
|H_1(A,B)| \leq (t - 1)|A| + (s - 1)|B|. \end{equation} Indeed, otherwise
$|H_0(A,B)|=|H_1(A,B)| > (t - 1)|A| + (s - 1)|B|$ and $H_0$ has a minimal induced subgraph $H'_0(A',B')$
satisfying $|H'_0(A',B')|> (t - 1)|A'| + (s - 1)|B'|$. By minimality,
$H'_0$ has
minimum degree at least $t$ in $A'$ and minimum degree at least $s$ in $B'$. This is
sufficient to greedily construct a copy of $T$ in $H'_0$. Since $H_1$ is an $(a,b)$-blowup of $H_0\supseteq H'_0$, this shows $\mathcal{T} \subset H_1$.
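The greedy step in the proof of \eqref{template1} — extend an embedded subtree one vertex at a time, using that $A'$-side degrees are at least $t$ and $B'$-side degrees at least $s$ — can be sketched in Python as follows. This is our illustration of the argument on hypothetical data, not code from the paper:

```python
from collections import deque

def greedy_embed(tree_adj, side, hostA, hostB, host_adj):
    """Greedily embed a tree (adjacency dict tree_adj, side[x] in {0,1})
    into a bipartite host graph: side-0 vertices go into hostA, side-1
    into hostB.  Succeeds whenever every hostA-degree is >= the number of
    side-1 tree vertices and every hostB-degree is >= the number of
    side-0 tree vertices, exactly as in the text's degree argument."""
    root = next(iter(tree_adj))
    order, seen, queue = [], {root}, deque([root])
    while queue:                       # BFS: every non-root vertex is
        x = queue.popleft()            # processed after some neighbor
        order.append(x)
        for y in tree_adj[x]:
            if y not in seen:
                seen.add(y)
                queue.append(y)
    emb, used = {}, set()
    for x in order:
        parent = next((y for y in tree_adj[x] if y in emb), None)
        candidates = (hostA if side[x] == 0 else hostB) if parent is None \
            else host_adj[emb[parent]]
        # a free candidate exists: at most (part size - 1) images are used
        emb[x] = next(c for c in candidates if c not in used)
        used.add(emb[x])
    return emb

# host: K_{4,4} minus a perfect matching, so all degrees equal 3
hostA = [f"a{i}" for i in range(4)]
hostB = [f"b{i}" for i in range(4)]
host_adj = {u: [v for j, v in enumerate(hostB) if j != i]
            for i, u in enumerate(hostA)}
host_adj.update({v: [u for i, u in enumerate(hostA) if i != j]
                 for j, v in enumerate(hostB)})
# tree: a path on 4 vertices, parts of sizes s = t = 2, so degree 3 suffices
tree_adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
side = {x: x % 2 for x in tree_adj}
emb = greedy_embed(tree_adj, side, hostA, hostB, host_adj)
assert len(set(emb.values())) == 4                     # injective
assert all(emb[y] in host_adj[emb[x]]                  # edges preserved
           for x in tree_adj for y in tree_adj[x])
```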
We now prove a version of (\ref{template1}) for templates, i.e., in the case when $A$ may not be a matching:
\begin{lemma}\label{eglift} Let $\delta > 0$ and let $\mathcal{T} \in \mathcal{T}_{s,t}(a,b)$. Let $H$ be a $\mathcal{T}$-free $r$-graph containing an $(a,b)$-template $(A,B)$. If $B = B^0 \sqcup B^1$ and $d_H(e) \leq \delta n^b$ for every $a$-set $e \subset V(H) \backslash V(B^1)$, then \begin{equation}\label{template2}
|H_1(A,B)| \leq (t - 1) |A| + as n^{a - 1} (\delta|B^0| + |B^1|). \end{equation} \end{lemma}
\begin{proof} Let $\beta_0 = a s \delta n^{a - 1}$ and $\beta_1 = as n^{a - 1}$. Let $H_1 = H_1(A,B)$ and $H_0 = H_0(A,B)$ and suppose
$|H_1| \geq (t - 1)|A| + \beta_0 |B^0| + \beta_1 |B^1|$. By deleting vertices of $H_0$, we may assume \begin{equation}\label{1124} \mbox{\em
$d_{H_0}(e) \geq t$ for all $e \in A$ and for $i \in \{0,1\}$, $d_{H_0}(e) > \beta_i$ for all $e \in B^i$. } \end{equation} Suppose
$\mathcal{T}$ is a blowup of a tree $T$, where $T$ has a unique bipartition $(U,V)$ with $|U|=s, |V|=t$. We call an embedding of the $(a,b)$-blowup of a subtree $T'$ of $T$ in $H_1(A,B)$ a {\em feasible embedding} if the $a$-sets corresponding to vertices in $U$ are mapped to members of $A$ and the $b$-sets corresponding to vertices in $V$ are mapped to members of $B$. It suffices to prove that any feasible embedding $h$ of the $(a,b)$-blowup of any proper subtree $T'$ of $T$ can be extended to a feasible embedding $h'$ of the $(a,b)$-blowup of a subtree of $T$ that strictly contains $T'$.
Let $T'$ be given. Then there exists an edge $xy$ in $T$ with $x\in V(T')$ and $y\notin V(T')$. Let $h$ be a feasible embedding of the $(a,b)$-blowup $\mathcal{T'}$ of $T'$ in $H_1(A,B)$. First suppose that $x\in U$. Let $e$ denote the image under $h$ of the $a$-set in $\mathcal{T'}$ that corresponds to $x$. By our assumption $e\in A$, so by~\eqref{1124} we have $d_{H_0}(e)\geq t$. Thus
$|\Gamma_{H_1}(e)|\geq t$. Since $\Gamma_{H_1}(e)\subseteq B$ is a matching of size at least $t$ and the $b$-sets corresponding to $V - \{y\}$ are mapped to at most $t-1$ members of $B$, there exists $f\in \Gamma_{H_1}(e)$ such that $f\cap V(h(\mathcal{T'}))=\emptyset$. We can extend $h$ to a feasible embedding of $T'\cup xy$ by mapping the $b$-set in $\mathcal{T}$ corresponding to $y$ to $f$.
Next, suppose $x\in V$. Let $e$ denote the image under $h$ of the $b$-set in $\mathcal{T'}$ that corresponds to $x$. If there exists $f\in \Gamma_{H_1}(e)-V(h(\mathcal{T'}))$, then $h(\mathcal{T'})\cup \{e\cup f\}$ is a feasible embedding of $T'\cup xy$. Hence we may assume that no such $f$ exists. If $e \in B^0$, then we estimate $d_{H_0}(e)$ by adding $a-b$ new vertices, one from $V(h(\mathcal{T'}))$ and all outside $V(B^1)$. This yields
\[ d_{H_0}(e) \leq |V(h(\mathcal{T'})) \cap V(A)| \cdot n^{a - b - 1} \cdot \delta n^{b} \leq as \delta n^{a - 1} = \beta_0,\] a contradiction to~\eqref{1124}. Note it is crucial here that $b < a$. Similarly, if $e \in B^1$, then
\[ d_{H_0}(e) \leq |V(h(\mathcal{T'})) \cap V(A)| \cdot n^{a - 1} \leq as n^{a - 1} = \beta_1.\] This contradicts $d_{H_0}(e) > \beta_1$ for $e \in B^1$. Hence we have shown that each feasible embedding of $\mathcal{T'}$ can be extended. This completes the proof. \end{proof}
\subsection{Proof of Theorem \ref{th:main}.}
In a few places in the proof we will use the following elementary fact or a slight variant of it. Let $e$ be a fixed edge in $\binom{[n]}{p}$ and $H$ a $p$-graph on at most $n$ vertices. Let $L$ be a copy of $H$ in $\binom{[n]}{p}$ chosen uniformly at random among all copies of $H$. Then $P(e\in L)=|H|/\binom{n}{p}$.
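This identity can be confirmed by exhaustive enumeration on a toy instance: since a permutation is injective, the events $\pi(f)=e$ for distinct $f\in H$ are disjoint, and averaging over all permutations of $[n]$ gives exactly $|H|/\binom{n}{p}$. The following Python sketch (illustrative only, with an arbitrarily chosen small $H$) checks this:

```python
from itertools import permutations
from fractions import Fraction
from math import comb, factorial

n, p = 5, 2
H = [frozenset(f) for f in [{0, 1}, {1, 2}, {3, 4}]]   # a small 2-graph
e = frozenset({0, 2})                                   # a fixed p-set

hits = 0
for pi in permutations(range(n)):   # each permutation yields a copy of H
    image = {frozenset(pi[v] for v in f) for f in H}
    hits += e in image
# the exact probability equals |H| / C(n, p)
assert Fraction(hits, factorial(n)) == Fraction(len(H), comb(n, p))
```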
Let $m$ be an integer satisfying $m > r^r$ and $m = o(\sqrt{n})$. Let $f(m) = m^{-1/r} n^{r-1} + m^2 n^{r - 2}$. We show that if $H$ is $\mathcal{T}$-free for some $\mathcal{T} \in \mathcal{T}_{s,t}(a,b)$, then
\[ |H| \leq (t - 1){n \choose r - 1} + O(f(m)).\] In particular, taking $m = n^{1/3}$, we obtain
\[ |H| \leq (t - 1){n \choose r - 1} + O(n^{r - 1 - 1/(3r)}).\]
In our arguments below, for convenience, we assume $b$ divides $n$, since assuming so has no effect on the asymptotic bound we want to establish. Let $D = \{e \in {V(H) \choose a} : d_H(e) \geq n^{b}/m\}$ and $L$ be a smallest vertex cover of $D$, meaning that every set in $D$ intersects $L$. We claim \begin{equation}\label{L=O}
|L| = O(m). \end{equation}
Indeed, if $|L| \geq as m$, then $D$
has a matching $M$ of size $s m$. Each set in $M$ forms an edge of $H$ with at least $n^b/m$ different $b$-sets, and at most $a|M|n^{b-1}=asmn^{b-1}$ of these $b$-sets intersect $V(M)$. By averaging, there is a matching $N$ of $b$-sets disjoint from $V(M)$ such that
\[ |H_0(M,N)| \geq \frac{|M|(n^b/m - asmn^{b-1})}{{n-1 \choose b-1}} > |M| \cdot \frac{n}{m} - |M| \cdot as m.\] Since $n$ is large and $m = o(\sqrt n)$, this is at least
\[ (t - 1)|M| + \Bigl(\frac{n}{m} - t + 1 - asm\Bigr)|M|
\geq (t - 1)|M| + (s - 1)n > (t - 1)|M| + (s - 1)|N|.\] By (\ref{template1}), we conclude that $\mathcal{T} \subset H_1(M,N) \subset H$, a contradiction. This proves~\eqref{L=O}.
Let $G = \{e \in H : |e \cap L| \leq 1\}$, so that \begin{equation}\label{fbound}
|G| \geq |H| - |L|^2 n^{r-2} \geq |H| - O(m^2 n^{r - 2}). \end{equation} Let $R \subset V(G) \backslash L$ be a set whose elements are chosen independently with probability $\alpha = m^{-1/r}$, and $A = {R \choose a}$. Let $P$ be a random partition of $V(G)$ into $b$-sets. Let $B$ denote the set of $b$-sets in $P$ that are disjoint from $R$, and let $H_1 = H_1(A,B)$. If $B^0 = \{e \in B : e \cap L =
\emptyset\}$ and $B^1 = \{e \in B : |e \cap L| \geq 1\}$, then by (\ref{template2}) with $\delta = 1/m$, and using
$|B^1| \leq |L|$,
\[ |H_1| \leq (t - 1)|A| + O(n^{a-1}|B^0|/m) + O(n^{a-1}|L|).\]
Taking expectations over all choices of $R$ and $P$ and using~\eqref{L=O} and $|B^0| \leq n$, we get \begin{equation}\label{eq1}
E(|H_1|) \leq (t - 1)\alpha^a {n \choose a} + O(n^a/m). \end{equation}
For $i \in \{0,1\}$, let $G_i = \{e \in G : |e \cap L| = i\}$ and note $G = G_0 \cup G_1$. We observe that for an edge $e \in G_0$, \[ P(e \in H_1) = \frac{{r \choose b} \alpha^a (1 - \alpha)^b}{{n - 1 \choose b - 1}} := p_0\] and for an edge $e \in G_1$, \[ P(e \in H_1) = \frac{{r - 1 \choose b - 1} \alpha^a (1 - \alpha)^{b - 1}}{{n - 1 \choose b - 1}} := p_1.\] Since $\alpha = m^{-1/r} < 1/r$ and $b \le (r-1)/2$, $$p_0 = \frac{r}{b}(1 - \alpha)p_1 > 2p_1.
$$ Therefore \begin{eqnarray}
E(|H_1|)\geq p_0|G_0| + p_1|G_1| &=& (p_0 - p_1)|G_0| + p_1|G| > p_1|G|=
\frac{\alpha^a (r - 1)!(1-\alpha)^{b-1}}{a!n^{b - 1}}|G|.\label{upper2} \end{eqnarray} Combining this with (\ref{eq1}), using $(1-\alpha)^{b-1} = 1 - O(m^{-1/r})$ and after some simplification, we find \begin{eqnarray*}
|G| &\leq& (t - 1){n \choose r - 1} + O(\alpha n^{r - 1}) + O(n^{r - 1}/\alpha^a m) \\ &\leq& (t - 1){n \choose r - 1} + O(m^{-1/r}n^{r - 1}). \end{eqnarray*}
Together with (\ref{fbound}), this gives the required bound on $|H|$. \qed
In fact, the proof of Theorem \ref{th:main} yields more than the theorem claims. We have the following fact.
\begin{corollary}\label{mainprime} Let $0<\gamma< 1/t$, $b < a < r$, $a+b=r$, $t \leq s$. Let $n$ be sufficiently large, $r^r < m \le n^{\gamma}$ and $f(m) = m^{-1/r} n^{r-1} + m^2 n^{r - 2}$. Let $\mathcal{T} \in \mathcal{T}_{s,t}(a,b)$ and $H$ be an $n$-vertex $\mathcal{T}$-free $r$-graph. If \begin{equation}\label{asymptotic}
|H| = (t - 1){n \choose r - 1} + O(f(m)) \end{equation}
then some $F \subset H$ with $|F| = |H| - O(f(m))$ has a crosscut $L$ of size $O(m)$. \end{corollary}
\begin{proof}
If $|H| = (t - 1){n \choose r - 1} + O(f(m))$, then the upper and lower bounds for $E(|H_1|)$ given by
(\ref{eq1}) and (\ref{upper2}) differ by $O(n^a/m)$. By (\ref{upper2}) they also differ by at least $(p_0 - p_1)|G_0|$ so
\[ (p_0 - p_1)|G_0| = O(n^a/m).\]
Using $p_0 > (1 + 1/r)p_1$, we get $p_1|G_0| = O(n^a/m) $ and this shows $|G_0| = O(f(m))$. Setting $F = G_1$,
$L$ is a crosscut of $F$ and $|F| = |H| - O(f(m))$. \end{proof}
\section{Stability}
The aim of this section is to prove the following stability theorem. It is important throughout this section that $t \leq s$, so that for $\mathcal{T} \in \mathcal{T}_{s,t}(a,b)$, we have $\sigma(\mathcal{T}) = t$ and therefore $\Psi^1_{t-1}(n,r)$ does not contain
$\mathcal{T}$. The following theorem says that if $H$ is a $\mathcal{T}$-free $r$-graph on $n$ vertices and $|H|
\sim |\Psi_{t-1}(n,r)|$, then $H$ is obtained by adding or deleting $o(n^{r - 1})$ edges from $\Psi_{t-1}(n,r)$.
\begin{theorem}\label{th8} Let $\mathcal{T} \in \mathcal{T}_{s,t}(a,b)$, where $b < a < r - 1$, $t \leq s$. Let $H$ be a
$\mathcal{T}$-free $n$-vertex $r$-graph with $|H| \sim (t - 1){n \choose r - 1}$. If $\mathcal{T}$ has a critical leaf, then there exists a set $S$ of $t
- 1$ vertices of $H$ such that $|H - S| = o(n^{r - 1})$. \end{theorem}
\subsection{Degrees of sets.} By Corollary~\ref{mainprime} with $r^r < m = o(n^{1/(t+1)})$ there exists
$F \subset H$ such that $|F| \sim |H|$ and $F$ has a crosscut $L$ of size $O(m)$. Our first claim says that most elements of $\partial F$ have degree $t - 1$ in $F$.
{\bf Claim 1.} {\em There are ${n \choose r-1} - o(n^{r-1})$ sets $e \in \partial F - L$ such that $d_F(e) = t - 1$.}
{\bf Proof.} Suppose $\ell$ sets $e \in \partial F - L$ have $d_F(e) \geq t$. By the definition of $L$, $\Gamma_F(e)\subseteq L$ for each $e \in \partial F - L$. Let $Z$ be a crosscut of
$\mathcal{T}$ with $|Z| = t$ contained in $B$ and let $\mathcal{T}^* = \{e \backslash Z : e \in \mathcal{T}\}$. Then $\mathcal{T}^*$ is an $(a,b-1)$-blowup of $T$. Proposition~\ref{general-bound} implies $${\rm{ex}}(n, \mathcal{T}^*) < (s + t)n^{r-2}.$$
By the pigeonhole principle, there exists a set $S
\subset L$ with $|S| = t$ such that at least $k =
\ell/|L|^{t}$ sets $e \in \partial F - L$ have $\Gamma_F(e) \supseteq S$. If $k > {\rm{ex}}(n,\mathcal{T}^*)$, then $\mathcal{T}^* \subset \partial F - L$ and for all $e \in \mathcal{T}^*$, $\Gamma_F(e) \supseteq S$. Now we can lift $\mathcal{T}^*$ to $\mathcal{T} \subset F$ via $S$. Indeed, we can greedily enlarge each of the $(b - 1)$-sets that form $\mathcal{T}^*$ to a $b$-set by adding an element of $S$. This contradicts the assumption that $H$ is $\mathcal{T}$-free. We therefore suppose that
\[ \ell/|L|^{t} = k \leq {\rm{ex}}(n,\mathcal{T}^*) \leq (s+t)n^{r-2}\]
which gives $\ell \leq (s+t)|L|^{t}n^{r-2} = O(n^{r -
2}m^t)$. As $|F| \sim |H| \sim (t - 1){n \choose r - 1}$, and the number of $(r-1)$-sets in $V(F)-L$ is at most ${n \choose r-1}$, the average degree of sets in $\partial F - L$ is at least $t - 1 - o(1)$. We have already argued that at most $O(n^{r -2}m^t)$ of these sets have degree larger than $t - 1$. Furthermore, every set in $\partial F - L$ has degree at most $|L| = O(m)$. Hence, if the number of sets in $\partial F - L$ of degree at most $t-2$ is $x$, then we have the inequality $$(t-1){n \choose r - 1}-x+m\, O(n^{r -2}m^t)\geq (t-1){n \choose r - 1}(1-o(1)).$$ Since $m^{t+1} n^{r -2}=o(n^{r-1})$,
we conclude that $x=o\left({n \choose r - 1}\right)$. This yields the claim. \qed
\subsection{Proof of Theorem~\ref{th8}}
Let $S_1,S_2,\dots,S_k$ be an enumeration of the $(t -
1)$-element subsets of $L$, and let $F_i$ denote the family of $(r - 1)$-element sets $e$ in $V(F) \backslash L$ such that $\Gamma_F(e) = S_i$. By Claim~1, $|F_1 \cup F_2
\cup \dots \cup F_k| \sim {n - |L| \choose r - 1}$. Suppose $k \geq 2$. By definition, for $i \neq j$, $F_i \cap F_j = \emptyset$.
Therefore,
\[ \sum_{i = 1}^k |F_i| \sim {n \choose r - 1}.\]
For each $i \in [k]$, if $|F_i| = o(n^{r - 1}/k)$, let
$G_i$ be an empty $(r - 1)$-graph; if $|F_i| = \Omega(n^{r-1}/k)$, then delete edges of $F_i$ containing $a$-sets or $b$-sets of ``small'' degree until we obtain either an empty $(r - 1)$-graph or an $(r - 1)$-graph $G_i$ such that \begin{equation}\label{1125} \mbox{\em $d_{G_i}(e) > r(s + t)n^{r - 2 - a}$ $\forall$ $a$-set $e \in\partial_a G_i$, and $d_{G_i}(f) > r(s + t)n^{r - 2 - b}$ $\forall$ $b$-set $f \in \partial_b G_i$.} \end{equation}
By construction, $|G_i| \geq |F_i| - 2r(s + t)n^{r - 2}$ and since $|F_i| = \Omega(n^{r-1}/k)$
and $k \leq |L|^t \leq O(m^t) = o(n)$, whenever $G_i$ is non-empty we have
\[ |G_i| = (1 - o(1))|F_i|.\]
We conclude that if $G = \bigcup G_i$ then $|G| = (1 -
o(1))|F| \sim {n \choose r - 1}$ and \begin{equation} \label{Gi-sum}
\sum_{i = 1}^k |G_i| \sim {n \choose r - 1}. \end{equation}
{\bf Claim 2.} {\em For $i \neq j$, $\partial_a G_i \cap \partial_a G_j = \emptyset$.}
{\it Proof.} Let $W$ be a tree obtained from the tree $T$ by deleting a leaf vertex $x$ with unique neighbor $y \in T$, such that $x$ is in the part of $T$ of size $t$. Suppose some $a$-set $e$ is contained in $\partial_a G_i \cap \partial_a G_j$. By~\eqref{1125}, we can greedily grow $W(a,b-1)$ in $G_j$ such that $e$ is the blowup of $y$. By adding one vertex of $S_j$ to each $(b-1)$-set in $W(a,b-1)$, we obtain $W(a,b)$. Now there exists $x' \in S_i \backslash S_j$. Since $d_{G_i}(e) > r(s + t)n^{r - 2 - a}$, there exists an edge $f \in G_i$ containing $e$, such that $f \cap V(W(a,b-1)) = \emptyset$, and therefore $f \cup \{x'\} \in F$ together with $W(a,b)$ gives a copy of $T(a,b)$, with $f \backslash e$ the blowup of $x$. This proves the claim. $\Box$
Now we prove Theorem~\ref{th8}. Since $a\leq r-2$, by Claim~2, for all $i\neq j$,
$\partial_{r-2} G_i \cap \partial_{r-2} G_j=\emptyset$. Without loss of generality, suppose that for some $0\leq p\leq k$, $|G_1|\geq |G_2|\geq \ldots\geq |G_p|\geq 1$
and $G_i=\emptyset$ for $p+1\leq i\leq k$. For each $i\in [p]$, let $y_i\geq r-1$ denote the real number such that $|G_i|=\binom{y_i}{r-1}$. Then $y_1\geq y_2\geq \cdots\geq y_p$. By the Lov\'asz form of the Kruskal-Katona theorem, for each $i\in [p]$, $|\partial_{r-2}(G_i)|\geq \binom{y_i}{r-2}$. By the disjointness of the $\partial_{r-2}(G_i)$'s, we have $$\sum_{i=1}^p \binom{y_i}{r-2}\leq \binom{n}{r-2}.$$ For each $i\in [p]$, since $\binom{y_i}{r-1}=\frac{y_i-r+2}{r-1}\binom{y_i}{r-2}\leq \frac{y_1-r+2}{r-1}\binom{y_i}{r-2}$, by \eqref{Gi-sum} we have
$$(1-o(1))\binom{n}{r-1}\leq \sum_{i=1}^p |G_i|=\sum_{i=1}^p \binom{y_i}{r-1}\leq \frac{y_1-r+2}{r-1}\sum_{i=1}^p \binom{y_i}{r-2}\leq \frac{y_1-r+2}{r-1}\binom{n}{r-2}.$$
From this, we get $y_1\geq n-o(n)$. Hence $|F_1|\geq |G_1|=\binom{y_1}{r-1}\geq \binom{n}{r-1}-o(n^{r-1})$, so there exists $S=S_1\subset L$ such that at least $(t-1)\binom{n}{r-1}-o(n^{r-1})$ edges of $F$ consist of one vertex in $S$ and $r-1$ vertices outside $S$. $\Box$
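For completeness, the deduction of $y_1\geq n-o(n)$ from the final display in the proof above uses only the identity $\binom{n}{r-1}=\frac{n-r+2}{r-1}\binom{n}{r-2}$; dividing the chain of inequalities through by $\binom{n}{r-2}$ gives
\[
(1-o(1))\,\frac{n-r+2}{r-1}\;\leq\;\frac{y_1-r+2}{r-1},
\qquad\text{i.e.}\qquad
y_1 \;\geq\; (1-o(1))(n-r+2)+r-2 \;=\; n-o(n).
\]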
\section{Exact results}
The aim of this section is to prove the following theorem, which completes the proof of Theorem \ref{mainexact}:
\begin{theorem} Let $t \leq s$, $b < a < r - 1$ with $a + b = r$ and $\mathcal{T} \in \mathcal{T}_{s,t}(a,b)$ such that $\mathcal{T}$ has a critical leaf and $\tau(\mathcal{T})=t$. If $n$ is large and $H$ is a
$\mathcal{T}$-free $n$-vertex $r$-graph with $|H| \geq {n \choose r} - {n - t + 1 \choose r}$, then $H \cong \Psi_{t-1}(n,r)$. \end{theorem}
To prove this, we aim to show that the set $S$ given by Theorem~\ref{th8}
is a vertex cover of $H$. We prove the following consequence of Claim~1:
{\bf Claim~3.} {\em Let $\Delta_u = (t - 1){n-u \choose r - 1 - u}$. Then for each $\delta > 0$, there exists $G
\subset F$ with $|G| \sim |F|$ such that for any $u$-set $e \subset V(G)$ with $u < r$ and $d_G(e)> 0$,} either \begin{center} \begin{tabular}{lp{5in}}
(i) & $|e \cap S| = 0$ and $d_G(e) \geq (1 - \delta)\Delta_u$ or \\
(ii) & $|e \cap S| = 1$ and $d_G(e) \geq r(s+t) n^{r - 1 - u}$.\\ \end{tabular} \end{center}
\begin{proof} Let $S$ be the $(t-1)$-set given by Theorem~\ref{th8} and $K$ be the set of edges of $F$ containing some $e \in
\partial F - S$ with $d_F(e) = t - 1$. By Claim~1, $|K|
\sim |F|$. Also, every $r$-set in $K$ has one point in $S$ and $r - 1$ points in $V(K) \backslash S$. Since $d_K(e) = t - 1$ for all $e \in \partial K - S$, every $u$-set in $V(K) \backslash S$ has degree at most $\Delta_u$ in $K$.
We repeatedly delete edges from $K$ as follows. Suppose at some stage of the deletion we have a hypergraph $K'$. If there exists a $u$-set $e$ for some $u<r$ such that \begin{center} \begin{tabular}{lp{5in}}
(i') & $|e \cap S| = 0$ and $d_{K'}(e) < (1 - \delta)\Delta_u$ or \\
(ii') & $|e \cap S| = 1$ and $d_{K'}(e) < r(s+t) n^{r - 1 - u}$\\ \end{tabular} \end{center}
then delete all edges of $K'$ containing $e$. Let $G$ be the hypergraph obtained at the end of this process. We shall prove $|G| \sim |K|$. To this end, suppose that
$|G| = |K| - \eta(t - 1){n \choose r - 1}$; we show that $\eta = o(1)$, which completes the proof. Consider two cases.
{\bf Case 1.} {\em At least $\frac{\eta}{2}(t - 1){n \choose r - 1}$ edges of $K$ were deleted due to (ii').}
In this case, there exists $u < r$ such that the set $H'$ of edges of $K$ deleted due to (ii') on $u$-sets satisfies
$|H'| \geq \frac{\eta}{2r}(t - 1){n \choose r - 1}$. Then by (ii'), and since the number of $u$-sets with one vertex in $S$ is $|S|{n - |S| \choose u - 1}$,
\[ |H'| \leq |S|{n - |S| \choose u - 1} \cdot r(s+t) n^{r - 1 - u}
< |S|r(s+t) n^{r - 2}.\] Since $|H'| \geq
\frac{\eta}{2r}{n \choose r - 1}$ and $|S| = t-1$, this gives $\eta = o(1)$.
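For concreteness, the rate at which $\eta$ vanishes in Case~1 can be made explicit: combining the two displayed bounds with $|S|=t-1$ yields
\[
\frac{\eta}{2r}\binom{n}{r-1}\;\leq\;|H'|\;<\;(t-1)\,r(s+t)\,n^{r-2},
\qquad\text{so}\qquad
\eta\;<\;\frac{2r^{2}(s+t)(t-1)\,n^{r-2}}{\binom{n}{r-1}}\;=\;O\!\left(\frac{1}{n}\right),
\]
since $\binom{n}{r-1}=\Theta(n^{r-1})$ for fixed $r$.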
{\bf Case 2.} {\em At least $\frac{\eta}{2}(t - 1){n \choose r - 1}$ edges of $K$ were deleted due to (i').}
In this case, there exists $u < r$ such that the set $H'$ of edges of $K$ deleted due to (i') on $u$-sets satisfies
$|H'| \geq \frac{\eta}{2r}(t - 1){n \choose r - 1}$. Let $U_1$ be the set of $u$-sets in $V(K) \backslash S$ on which edges of $K$ were deleted due to (i'), and let $U_2$ be the remaining $u$-sets in $V(K) \backslash S$. Then
\[ |U_1| > \frac{|H'|}{(1-\delta)\Delta_u} \geq \frac{\eta (t - 1){n \choose r - 1}}{2r(t - 1){n \choose r - 1 - u}}.\] If $n$ is large enough, then this is at least $\frac{\eta}{4r{r - 1 \choose u}}{n \choose u}$. Let $\gamma = \frac{\eta}{4r{r - 1 \choose u}}$. Then \begin{eqnarray*}
|K|{r - 1 \choose u} &=& \sum_{e \in {V(K) \backslash S \choose u}} d_K(e) \\ &=& \sum_{e \in U_1} d_K(e) + \sum_{e \in U_2} d_K(e) \\
&\leq& |U_1| (1 - \delta)\Delta_u
+ |U_2|\Delta_u \\ &\leq& \gamma(1 - \delta) {n \choose u}\Delta_u + (1 - \gamma){n \choose u}\Delta_u \; = \; (1 - \gamma \delta){n \choose u}\Delta_u. \end{eqnarray*}
Here we used $|U_1| + |U_2| \leq {n \choose u}$. Therefore
\[ |K| \leq (1 - \gamma \delta) \frac{{n \choose u} \Delta_u}{{r - 1 \choose u}} = (1 - \gamma \delta)(t
- 1){n \choose r - 1}.\] Since $|K| \sim |F| \sim (t - 1){n \choose r - 1}$, $\gamma \delta = o(1)$. Since $\delta > 0$ and $\gamma = \frac{\eta}{4r{r - 1 \choose u}}$, this implies $\eta = o(1)$, as required. \end{proof}
Let $\mathcal{T} \in \mathcal{T}_{s,t}(a,b)$ have a critical leaf with $\tau(\mathcal{T})=t \le s$, $a+b=r$, $b<a<r-1$, and let $H$ be a $\mathcal{T}$-free $n$-vertex $r$-graph with
$|H| \geq {n \choose r} - {n - t + 1 \choose r}$. We aim to show that $S$ is a vertex cover of $H$, which gives $H \cong \Psi_{t-1}(n,r)$, as required. To this end, let
$H_i = \{e \in H : |e \cap S| = i\}$. So we have to show $H_0 = \emptyset$.
Since $\mathcal{T}$ has a critical leaf, there is a $b$-set $e'$ of $\mathcal{T}$ in the part of size $t$ with $d_{\mathcal{T}}(e')=1$. Let $\mathcal{T'}$ be the tree obtained from $\mathcal{T}$ by deleting the edge containing $e'$. So $V(\mathcal{T'})$ has one part comprising $t-1$ sets, each of size $b$ and the other part comprising $s$ sets, each of size $a$. It has a crosscut of size $t-1$ by picking one vertex from each of the $b$-sets above.
Let ${\mathcal K}^1$ be the set of $r$-sets of $[n]$ that have exactly one vertex in $S$. A subfamily $T \subset {\mathcal K}^1$ is a {\em potential tree} if \newline\quad 1) $T \cong \mathcal{T}'$, \newline\quad 2) the $t-1$ vertices of $S$ play the role of the crosscut vertices of $\mathcal{T}'$ described above, \newline\quad 3) $e_0$ is an $a$-set in $V(T)$ with $e_0 \in \partial_a H_0$, \newline\quad 4) $e_0 \subset e \in H_0$, and \newline\quad 5) $T \cup \{e\}$ is a copy of $\mathcal{T}$.
Fix an $a$-set $e_0 \in \partial_a H_0$ and suppose $e_0 \subset e \in H_0$. If $T \subset H_1$ is a potential tree as described above, then $T \cup \{e\}$ is a copy of $\mathcal{T}$ in $H$, a contradiction. So for each such potential tree $T$, there exists $f \in T-H_1$. Let us call this a {\em missing edge}. Let $m = as+bt-b$ be the number of vertices of each potential tree. The number of potential trees containing a fixed missing edge $f$ is at most
\[ {n-|S|-(a+b-1) \choose m-|S|-(a+b-1)} \cdot c({\mathcal T}),\] where $c({\mathcal T})$ is the number of ways we can put a potential tree using $f$ into the set $M$ with
$|M|=m$ and
$(S\cup f)\subset M\subset [n]$,
(note that $|f\cap S|=1$).
On the other hand, each $e_0\in \partial_a H_0$ and each subset $M'$ with $|M'|=m$ and
$S\subset M'\subset ([n]-e_0)$ together carry at least one potential tree, so the total number of potential trees is at least
\[ |\partial_a H_0| {n - |S|-a\choose m-|S|-a}.\]
It follows that the number of missing edges is at least $c|\partial_a H_0| n^{b - 1}$ for some $c> 0$. Therefore
\[ |H| = |H_0| + |H_1| + |H_2| + \dots + |H_r| \leq {n \choose r} - {n - t + 1 \choose r} + |H_0| - c |\partial_a H_0| n^{b - 1}.\]
By Proposition~\ref{general-bound} and the fact that $\mathcal{T}$ is contained in a tight tree on $V(\mathcal{T})$, $|H_0| < c'|\partial H_0|$ for some constant $c'$.
Next, we observe that $\partial H_0 \cap \partial G = \emptyset$, for otherwise we can use Claim~3 to greedily build a copy of $\mathcal{T}$
using the edge of $H_0$, and whose remaining edges form a copy of $\mathcal{T}'$ and come from $G$. In particular, since $|\partial G|
\sim {n \choose r - 1}$, $|\partial H_0| = o(n^{r - 1})$. Writing $|\partial H_0| = {x \choose r - 1}$ for some real $x$, we have $|\partial_a H_0| \geq {x \choose a}$, by the Kruskal-Katona Theorem. Therefore
\[ |H_0| - c |\partial_a H_0| n^{b - 1} \leq c'|\partial H_0| - c |\partial_a H_0| n^{b - 1} \leq c'{x \choose r - 1} - c n^{b - 1} {x \choose a}.\]
Since $x = o(n)$, for large enough $n$ the above expression is negative, unless $|\partial H_0| =
|\partial_a H_0| = 0$. We have shown that if $|H| \geq {n \choose r} - {n - t+ 1 \choose r}$, then $H_0 =
\emptyset$ and $|H| = {n \choose r} - {n - t + 1 \choose r}$, as required. \qed
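For completeness, the negativity of $c'\binom{x}{r-1}-c\,n^{b-1}\binom{x}{a}$ for $x=o(n)$ can be checked directly. Since $a+b=r$,
\[
\binom{x}{r-1}\;=\;\binom{x}{a}\prod_{i=a}^{r-2}\frac{x-i}{i+1}\;\leq\;\binom{x}{a}\,x^{\,b-1},
\]
so $c'\binom{x}{r-1}\leq c'x^{b-1}\binom{x}{a}$. As $b\geq 2$ in the present setting, $x=o(n)$ gives $c'x^{b-1}<c\,n^{b-1}$ for all large $n$, and the expression is negative unless $\binom{x}{a}$, and hence $|\partial_a H_0|$, vanishes.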
\section{Concluding remarks}\label{se:7}
In this paper we determined for $b \leq a < r$ the asymptotic behavior of ${\rm{ex}}_r(n,\mathcal{T})$ when $\mathcal{T} \in \mathcal{T}_{s,t}(a,b)$ is an $(a,b)$-blowup of a tree $T$ with parts of sizes $s$ and $t$ where $s \geq t$ and $\sigma(\mathcal{T})=t$. The extremal problem appears to be more difficult when $s < t$, in which case the smallest crosscut of $\mathcal{T}$ has size $s$. We pose Conjecture \ref{general}, which covers all cases except $a = r - 1$.
\begin{conjecture}\label{general}
If $\mathcal{T} \in \mathcal{T}_{s,t}(a,b)$ where $b \leq a < r - 1$, $\sigma = \sigma(\mathcal{T}) = \min\{s,t\}$, and $H$ is a $\mathcal{T}$-free $n$-vertex $r$-graph, then for large enough $n$, $|H| \leq (\sigma - 1){n \choose r - 1} + o(n^{r - 1})$, with equality only if $H$ is isomorphic to a hypergraph obtained from $\Psi_{\sigma -1}(n,r)$ by adding or deleting $o(n^{r - 1})$ edges. \end{conjecture}
{\bf The case $a=r-1$.} \quad If $t > s$ (and $n\geq |V(\mathcal{T})|$), then $\Psi^1_{t-1}(n,r)$ contains $\mathcal{T}$, so Conjecture~\ref{general} does not hold. Since $\Psi_{s-1}^1(n,r)$ does not contain $\mathcal{T}$, it is natural to ask whether $\Psi^1_{s-1}(n,r)$ is (asymptotically) extremal for $\mathcal{T}$. In some cases when $a=r-1$ this is certainly not so, because certain Steiner systems do not contain a blowup of a star $K_{1,t}$ and are denser than $\Psi_{s-1}(n,r)$. More precisely, let $T$ be a tree on $s+t$ vertices and let ${\mathcal T}=T(a,b)$ be its $(a,b)$-blowup. Suppose $a=r-1$ and let $\lambda= \max_{x\in U} \deg_T(x)$.
Then ${\rm{ex}}(n,\mathcal{T})$ is at least the number of edges in a Steiner $(n,r,r-1,\lambda-1)$-system -- an $r$-graph on $n$ vertices in which each $(r - 1)$-set is contained in exactly $\lambda - 1$ edges. In this case, ${\rm{ex}}(n,T(r-1,1)) \geq \frac{\lambda-1}{r}{n \choose r - 1}$ for infinitely many $n$ (due to the existence of such designs~\cite{Keevash}), whereas $\sigma(T) = s$, which can be much smaller than $\frac{\lambda-1}{r}$.
{\bf No stability for $a = r - 1$.}\quad It is important in the above proof that $a \neq r - 1$. If $a = r - 1$, then there is no stability theorem: consider for instance an $(r-1,1)$-blowup $\mathcal{T}$ of a path with four edges. Let $H$ be the $n$-vertex $r$-graph constructed as follows. Let $V(H) = [n]$, let $G_1 \sqcup G_2$ be a partition of the edge set of the complete $(r - 1)$-graph on $\{ 3,4, \dots, n\}$, and let
$H$ consist of the edges $e \cup \{i\}$ such that
$e \in G_i$, for $i \in \{1,2\}$. Then $|H| = {n - 2 \choose r-1}$ and $H$ does not contain $\mathcal{T}$.
{\bf The case $a=b=r/2$.}\quad Let $T$ be a tree on $s+t$ vertices then for ${\mathcal T}=T(r/2,r/2)$ one can use an argument of Frankl~\cite{Frankl} (applied by many others, see~\cite{MV}) to prove that \begin{equation}\label{eq:r/2}
{\rm{ex}}_r(n,{\mathcal T}) \leq \frac{{\rm{ex}}(\lfloor 2n/r\rfloor, T)}{\binom{\lfloor 2n/r \rfloor}{2}} {n\choose r}
\sim \frac{{\rm{ex}}(\lfloor 2n/r\rfloor, T)}{\lfloor 2n/r \rfloor} {n\choose r-1} . \end{equation} Indeed, similarly to the idea of templates, given a ${\mathcal T}$-free $r$-graph $H$ on $n$ vertices, take a random partition of $[n]$ into $r/2$-sets (where for simplicity $r/2$ divides $n$), and consider only those $r$-edges of $H$ which are unions of two partition classes. This subfamily consists of at most ${\rm{ex}}(2n/r, T)$ edges of $H$, out of the possible $\binom{2n/r}{2}$.
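The averaging behind~\eqref{eq:r/2} can be made explicit. Let $n'=2n/r$, let $\Pi$ be a uniformly random partition of $[n]$ into $n'$ classes of size $r/2$, and let $X$ count the edges of $H$ that are unions of two classes of $\Pi$. By symmetry every $r$-set of $[n]$ has the same probability $q$ of being such a union, and since exactly $\binom{n'}{2}$ $r$-sets arise as unions of two classes, $q\binom{n}{r}=\binom{n'}{2}$. Contracting each class to a single vertex turns the surviving edges into a $T$-free graph on $n'$ vertices, so $X\leq {\rm{ex}}(n',T)$ always, and therefore
\[
|H|\cdot\frac{\binom{n'}{2}}{\binom{n}{r}}\;=\;\mathbb{E}[X]\;\leq\;{\rm{ex}}(n',T),
\qquad\text{i.e.}\qquad
|H|\;\leq\;\frac{{\rm{ex}}(n',T)}{\binom{n'}{2}}\binom{n}{r}.
\]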
The bound is asymptotically tight, due to $\Psi^1_{t-1}(n,r)$,
if $\sigma({\mathcal T})=t$ and $T$ has $2t-1$ edges. So the inequality~\eqref{eq:r/2} completes the proof of Theorem~\ref{th:path}
showing that $ {\rm{ex}}_r\left(n, P_{2k-1}\left(\frac{r}{2},\frac{r}{2}\right)\right) \sim (k-1)\binom{n}{r-1}$
(the other cases follow from Theorems~\ref{th:main} and~\ref{mainexact}). It also gives a better upper bound for paths of even length: ${\rm{ex}}_r\left(n, P_{2k}\left(\frac{r}{2},\frac{r}{2}\right)\right) \leq (1+o(1))\left(k-\frac{1}{2}\right)\binom{n}{r-1}$.
However, the proof of~\eqref{eq:r/2} does not reveal the extremal structure.
{\bf The case of forests.}\quad Many of our ideas can be generalized for the case of ${\mathcal T}=F(a,b)$, when $F$ is a {\em forest},
but we do not have a general conjecture.
\begin{problem}\label{forest} Let $a,b\geq 1$ and let $F$ be a forest on $s+t$ vertices. Determine $\lim_{n\to \infty} {\rm{ex}}(n, F(a,b))\binom{n}{r-1}^{-1}$.
\end{problem}
{\bf Other bipartite graphs.}\quad The class of $(a,b)$-blowups of bipartite graphs contains well-studied instances including blowups of complete bipartite graphs. In particular, F\"{u}redi~\cite{Furedi} made the following conjecture for blowups of a 4-cycle. Let $\mathcal{C}_4^r=\{C_4(a,b): a+b=r, a,b>0\}$.
\begin{conjecture}[\cite{Furedi}] If $r \geq 3$ then $ {\rm{ex}}(n,\mathcal{C}_4^r) \sim \dbinom{n}{r - 1}$. \end{conjecture}
The current record is due to Pikhurko and the last author~\cite{PV}, who showed \[ {\rm{ex}}_r(n,\mathcal{C}_4^r) \lesssim \left(1 + \frac{2}{\sqrt{r}}\right){n \choose r - 1}\] and ${\rm{ex}}_3(n,C_4(2,1)) \lesssim \frac{13}{9}{n \choose 2}$. When $G$ is an even cycle of length six or more, it is only known~\cite{JL} that ${\rm{ex}}_r(n,G(a,b))=\Theta(n^{r-1})$ and
the asymptotic behavior of ${\rm{ex}}_r(n,G(a,b))$ is not known. One can show, however, that for $F = K_{s,t}(a,b)$ with $a + b = r$, $b \leq a$, and $t$ sufficiently large as a function of $s$ and $r$, \[ {\rm{ex}}_r(n,F) = \Theta(n^{r - \frac{1}{s}})\] via a randomized algebraic construction.
\small
\end{document}
\begin{document}
\author[A. T. Bernardino \and D. Pellegrino \and J.B. Seoane-Sep\'{u}lveda \and M.L.V. Souza]{A.T. Bernardino \and D. Pellegrino\textsuperscript{*} \and J.B. Seoane-Sep\'{u}lveda\textsuperscript{**} \and M.L.V. Souza}
\address{Centro de Ensino Superior do Serid\'{o},\newline\indent Universidade Federal do Rio Grande do Norte, \newline\indent Rua Joaquim Greg\'{o}rio, S/N - Penedo, \newline\indent Caic\'{o}, 59300-000, Brazil.} \email{thiagobernardino@yahoo.com.br}
\address{Departamento de Matem\'{a}tica,\newline\indent Universidade Federal da Para\'{\i}ba,\newline\indent 58.051-900 - Jo\~{a}o Pessoa, Brazil.} \email{pellegrino@pq.cnpq.br} \thanks{\textsuperscript{*}Supported by CNPq Grant 301237/2009-3.}
\address{Departamento de An\'{a}lisis Matem\'{a}tico,\newline\indent Facultad de Ciencias Matem\'{a}ticas, \newline\indent Plaza de Ciencias 3, \newline\indent Universidad Complutense de Madrid,\newline\indent Madrid, 28040, Spain.} \email{jseoane@mat.ucm.es} \thanks{\textsuperscript{**}Supported by the Spanish Ministry of Science and Innovation, grant MTM2009-07848.}
\address{Departamento de Matem\'{a}tica/ICENE,\newline\indent UFTM - Universidade Federal do Tri\^angulo Mineiro,\newline\indent Rua Get\'ulio Guarit\'a, 159, \newline\indent CEP 38.025-440 - Uberaba-MG, Brazil.} \email{marcelalvsouza@gmail.com}
\subjclass[2010]{46G25, 47H60, 47B10} \keywords{Absolutely summing operators, coherent ideals, compatible ideals, Banach polynomial ideals} \title[Absolutely summing operators revisited]{Absolutely summing operators revisited: new directions in the nonlinear theory} \begin{abstract} In the last decades many authors have become interested in the study of multilinear and polynomial generalizations of families of operator ideals (such as, for instance, the ideal of absolutely summing operators). However, these generalizations must keep the essence of the given operator ideal and there seems not to be a universal method to achieve this. The main task of this paper is to discuss, study, and introduce multilinear and polynomial extensions of the aforementioned operator ideals taking into account the already existing methods of evaluating the adequacy of such generalizations. Besides this subject's intrinsic mathematical interest, the main motivation is our belief (based on facts that shall be presented) that some of the already existing approaches are not adequate. \end{abstract} \maketitle
\section{Introduction and historical background}
A well-known fact from an undergraduate Analysis course states that, in $\mathbb{R}$, a series converges absolutely if and only if it is unconditionally convergent; this result was proved by J.P.G.L. Dirichlet in 1829. For infinite-dimensional Banach spaces the situation is quite different: on the one hand, for the $\ell_{p}$ spaces with $1<p<\infty$, for example, it is quite easy to construct an unconditionally convergent series which fails to be absolutely convergent. On the other hand, for $\ell_{1}$ and some other Banach spaces the answer to this problem is far from straightforward. The special case of $\ell_{1}$ was solved in 1947 by M.S. Macphail \cite{Mac} through a very elaborate construction.
The question of whether every infinite-dimensional Banach space has an unconditionally convergent series which fails to be absolutely convergent was raised by Banach \cite[p. 40]{Banach32} (see also Problem 122 in the Scottish Book \cite{Mau}, proposed by S. Mazur and W. Orlicz). In 1950, A. Dvoretzky and C.A. Rogers \cite{DR} answered this question in the affirmative:
\textbf{Theorem} (Dvoretzky-Rogers, 1950). The classes of unconditionally convergent and absolutely convergent series in a Banach space $E$ coincide if and only if $\dim E<\infty.$
The above result attracted the attention of A. Grothendieck, who soon presented a different proof of it in his Ph.D. dissertation \cite{Gro1955}. Grothendieck's famous R\'{e}sum\'{e} \cite{gro} (see also \cite{dies01} for a modern and thorough study) and \cite{Gro1955} are, essentially, the beginning of the theory of absolutely summing operators. More precisely, in view of the striking Dvoretzky-Rogers result, it was natural to investigate the linear operators that transform unconditionally convergent series into absolutely convergent series; this was the birth of the notion of absolutely summing operators (a linear operator $u:E\rightarrow F$ is absolutely summing if ${\textstyle\sum} u(x_{j})$ is absolutely convergent whenever ${\textstyle\sum} x_{j}$ is unconditionally convergent). Soon after, Grothendieck proved a quite surprising result asserting that every continuous linear operator from $\ell_{1}$ to $\ell_{2}$ (or to any Hilbert space) is absolutely summing (this kind of result is now called a \emph{coincidence theorem}). This result is a consequence of an intriguing inequality which Grothendieck himself called \textquotedblleft the fundamental theorem of the metric theory of tensor products\textquotedblright. Grothendieck's inequality has important applications (\cite{AAA, FFF}) and still holds some mysteries, such as the precise value of Grothendieck's constant. For recent work on estimates for Grothendieck's constant we refer to \cite{NN}.
The modern notion of absolutely $(p;q)$-summing operators was introduced in the 1960s by A. Pietsch \cite{stu} and by B. Mitiagin and A. Pe\l czy\'{n}ski \cite{MPel}. Besides its intrinsic mathematical interest and deep motivation, it has proven to be a very important tool in general Banach space theory. For instance, using the theory of absolutely summing operators one can show that every normalized unconditional basis of $\ell_{1}$ is equivalent to the unit vector basis of $\ell_{1}$, and also that, for $1<p<\infty$, there is a normalized unconditional basis of $\ell_{p}$ which is not equivalent to the unit vector basis of $\ell_{p}$.
Throughout this paper $\mathbb{N}$ represents the set of all positive integers and $\mathbb{N}_{m}:=\{1,...,m\}$. Also, $E,E_{1},\ldots,E_{n},F,G,G_{1} ,...,G_{n},H$ will stand for Banach spaces over $\mathbb{K}=\mathbb{R}$ or $\mathbb{C}$, the topological dual of $E$ is represented by $E^{\ast}$ and $B_{E^{\ast}}$ denotes its closed unit ball. The symbol $W\left( B_{E^{\ast} }\right) $ represents the probability measures in the Borel sets of $B_{E^{\ast}}$ with the weak-star topology. We will denote the space of all continuous $n$-linear operators from $E_{1}\times\cdots\times E_{n}$ into $F$ by $\mathcal{L}(E_{1},\ldots,E_{n};F) $ or $\mathcal{L}_{n}(E_{1},\ldots ,E_{n};F).$ Also, we recall that an $n$-homogeneous polynomial $P:E\rightarrow F$ is a map so that $P(x)=\check{P}(x,\ldots,x),$ where $\check{P}$ represents the unique symmetric $n$-linear map associated to $P$. The corresponding space (endowed with the sup norm) is represented by $\mathcal{P}(^{n}E;F)$. For the theory of polynomials and multilinear operators acting on Banach spaces we refer to \cite{Dineen, Mu}.
For $0<p<\infty$, the space of all sequences $\left( x_{j}\right) _{j=1}^{\infty}$ in $E$ such that $\left( \varphi\left( x_{j}\right) \right) _{j=1}^{\infty}\in\ell_{p}$, for every $\varphi\in E^{\ast}$ is denoted by $\ell_{p}^{w}\left( E\right) .$ When endowed with the norm ($p$-norm if $0<p<1 $) \[ \left\Vert \left( x_{j}\right) _{j=1}^{\infty}\right\Vert _{w,p} {\small :=}\sup\{({\textstyle\sum\limits_{j=1}^{\infty}}\left\vert \varphi\left( x_{j}\right) \right\vert ^{p})^{1/p}:\varphi\in B_{E^{\ast} }\}{\small ,} \] the space $\ell_{p}^{w}\left( E\right) $ is complete. We recall that if $0<q\leq p<\infty$ a continuous linear operator $u:E\rightarrow F$ is absolutely $(p;q)$-summing if $\left( u(x_{j})\right) _{j=1}^{\infty}\in \ell_{p}\left( F\right) $ whenever $\left( x_{j}\right) _{j=1}^{\infty} \in\ell_{q}^{w}\left( E\right) .$ In this case we write $u\in\Pi _{(p;q)}(E;F)$. For $p=q=1$ this notion coincides with the concept of absolutely summing operator. For classical results on absolutely summing operators we refer to \cite{dies0, pisier, T1} and references therein (recent results can also be checked in \cite{PellZ, Lima, Ok}). The concept of absolutely summing operators has some natural linear extensions such as the notions of mixing $(p;q)$-summing operators (due to A. Pietsch and B. Maurey) and $(p;q;r)$-summing operators (due to A. Pietsch). It is worth mentioning that these concepts were not just constructed to simply generalize the notion of absolutely $(p;q)$-summing operators; these notions have their particular reasons to be investigated (see \cite[p. 359]{hist}).
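A standard example illustrating the gap between $\ell_{p}(E)$ and $\ell_{p}^{w}(E)$, and hence the content of the summing property: in $E=\ell_{2}$ the unit vector basis $(e_{j})_{j=1}^{\infty}$ satisfies
\[
\left\Vert (e_{j})_{j=1}^{\infty}\right\Vert _{w,2}=\sup_{\varphi\in B_{\ell_{2}}}\Big(\sum_{j=1}^{\infty}|\varphi(e_{j})|^{2}\Big)^{1/2}=\sup_{\varphi\in B_{\ell_{2}}}\left\Vert \varphi\right\Vert _{2}=1,
\]
so $(e_{j})_{j=1}^{\infty}\in\ell_{2}^{w}(\ell_{2})$, whereas $\sum_{j}\Vert e_{j}\Vert^{2}=\infty$, so $(e_{j})_{j=1}^{\infty}\notin\ell_{2}(\ell_{2})$. In particular, the identity operator on $\ell_{2}$ is not absolutely $(2;2)$-summing.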
In the 1980's, Pietsch \cite{Pie} suggested a multilinear approach to the theory of absolutely summing operators and, more generally, to the theory of operator ideals. Since then, several authors were attracted by the subject and also non-multilinear approaches have appeared (see \cite{CDo, Chen, Junek, Nach00, MST, adv}). The adequate way of lifting the notion of a given operator ideal to the multilinear and polynomial settings is a delicate matter. For example, in the case of the ideal of absolutely summing linear operators, there are several different approaches to the polynomial and multilinear contexts (see \cite{adv, ppaa} and references therein). The abstract notions of (global) holomorphy types (see \cite{BBJMs, Nachbin}), coherent and compatible ideals (see \cite{CDM09}) shed some light on what kind of approach is more adequate.
Recently, in 2003, the notion of multiple summing multilinear operators (and polynomials) was introduced (see \cite{Matos, pv}) but, as a matter of fact, the origin of this notion dates back to \cite{bh, lit, Ram}. Several indicators from the theory of summing operators and from the theory of (multi-) ideals show that this is one of the most adequate approaches to the nonlinear theory of absolutely summing operators. For results on multiple summing multilinear operators we refer to \cite{Na, df, davidstudia, pv, PopJM, Popa}.
Notwithstanding the quick success of the theory of multiple summing multilinear operators, some recent papers related to multilinear summability seem to have overlooked its advantages. More precisely, the multilinear notions of mixing summing operators and absolutely $\left( p;q;r\right) $-summing multilinear operators were introduced following a different perspective (see \cite{achour, Carlos Alberto}). The point is that these approaches do not carry over the essence of the respective linear concepts, a shortcoming clearly exposed by the notions of coherence, compatibility and holomorphy types.
In this paper we present multilinear and polynomial notions of absolutely $\left( p;q;r\right) $-summing operators and mixing summing operators which follow the philosophy of the idea of multiple summability. Among other results, the adequacy of our approach is evaluated by proving that our new definitions provide coherent sequences, compatible and also (global) holomorphy types.
Below we recall the notions of mixing summing operators and absolutely $\left( p;q;r\right) $-summing operators.
\subsection{Mixing summing operators\label{xxzzz}}
Let $0<p\leq s\leq\infty$ and let $r$ be such that $\frac{1}{r}+\frac{1}{s}=\frac {1}{p}.$ A sequence $(x_{i})_{i=1}^{\infty}$ in $E$ is $(s;p)$-mixed summable if it admits a representation \[ x_{i}=\tau_{i}y_{i},\quad i\in\mathbb{N}, \] with $(\tau_{i})_{i=1}^{\infty}\in\ell_{r}$ and $(y_{i})_{i=1}^{\infty}\in \ell_{s}^{w}(E)$.
In this case, consider \[ \left\Vert \left( x_{i}\right) _{i=1}^{\infty}\right\Vert _{mx(s,p)} :=\inf\left\{ \left\Vert \left( \tau_{i}\right) _{i=1}^{\infty}\right\Vert _{r}\left\Vert \left( y_{i}\right) _{i=1}^{\infty}\right\Vert _{w,s} \right\} , \] where the infimum is taken over all possible representations of $\left( x_{i}\right) _{i=1}^{\infty}$ in the above form. The space of all $(s;p)$-mixed summable sequences in $E$ is represented by $\ell_{(s,p)} ^{mx}(E).$ It is not difficult to prove that $\ell_{(s,p)}^{mx}(E)$ is a complete normed ($p$-normed if $0<p<1$) space.
It is immediate that, for $0<p\leq s\leq\infty,$ one always has
\begin{itemize} \item $\ell_{p}(E)\subset\ell_{(s,p)}^{mx}(E)\subset\ell_{p}^{w}(E)$ with \begin{equation} \left\Vert \left( z_{j}\right) _{j=1}^{\infty}\right\Vert _{w,p} \leq\left\Vert \left( z_{j}\right) _{j=1}^{\infty}\right\Vert _{mx(s,p)} \leq\left\Vert \left( z_{j}\right) _{j=1}^{\infty}\right\Vert _{p}, \label{prop} \end{equation}
\item $\ell_{p}^{w}(E)=\ell_{(p,p)}^{mx}(E)$ and $\ell_{p}(E)=\ell _{(\infty,p)}^{mx}(E)$ isometrically. \end{itemize}
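The inclusion $\ell_{p}(E)\subset\ell_{(s,p)}^{mx}(E)$, together with the right-hand inequality in (\ref{prop}), can be verified by an explicit splitting (for $s<\infty$; the case $s=\infty$ is the isometry noted above). Given $(x_{j})_{j=1}^{\infty}\in\ell_{p}(E)$ with every $x_{j}\neq0$ (zero terms are handled trivially), set $\tau_{j}=\Vert x_{j}\Vert^{p/r}$ and $y_{j}=x_{j}/\tau_{j}$, where $\frac{1}{r}+\frac{1}{s}=\frac{1}{p}$. Since $1-\frac{p}{r}=\frac{p}{s}$,
\[
\sum_{j}\tau_{j}^{r}=\sum_{j}\Vert x_{j}\Vert^{p}<\infty
\qquad\text{and}\qquad
\sum_{j}\Vert y_{j}\Vert^{s}=\sum_{j}\Vert x_{j}\Vert^{p}<\infty,
\]
so $(\tau_{j})_{j=1}^{\infty}\in\ell_{r}$ and $(y_{j})_{j=1}^{\infty}\in\ell_{s}(E)\subset\ell_{s}^{w}(E)$, and moreover
\[
\left\Vert (\tau_{j})\right\Vert _{r}\left\Vert (y_{j})\right\Vert _{w,s}\leq\left\Vert (x_{j})\right\Vert _{p}^{p/r}\left\Vert (x_{j})\right\Vert _{p}^{p/s}=\left\Vert (x_{j})\right\Vert _{p}.
\]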
Let us now recall the linear concept of mixing summing linear operators (see \cite{pp1}):
Let $0<p\leq s\leq\infty.$ A continuous linear operator $u:E\rightarrow F$ is mixing $(s,p)$-summing ($u\in\Pi_{mx(s,p)}(E;F)$) if there exists a constant $\sigma\geq0$ such that \begin{equation} \left\Vert \left( u(x_{j})\right) _{j=1}^{m}\right\Vert _{mx(s,p)}\leq \sigma\left\Vert (x_{j})_{j=1}^{m}\right\Vert _{w,p} \label{rst} \end{equation} for all $x_{1},\ldots,x_{m}\in E$ and $m\in\mathbb{N}.$ The infimum of all such constants $\sigma$ is represented by $\pi_{mx(s,p)}(u).$
The terminology \textquotedblleft mixing\textquotedblright\ is motivated by the fact that a continuous linear operator $u:E\rightarrow F$ is $\left( s,p\right) $-mixing summing precisely when $u$ maps every weakly $p$-summable sequence $\left( x_{i}\right) _{i=1}^{\infty}$ in $E$ into a sequence which can be written as a product $\left( \tau_{i}y_{i}\right) _{i=1}^{\infty}$ of an absolutely $r$-summable scalar sequence $\left( \tau_{i}\right) _{i=1}^{\infty}$ and a weakly $s$-summable sequence $\left( y_{i}\right) _{i=1}^{\infty}$ in $F$, where $\frac{1}{s}+\frac{1}{r}=\frac{1}{p}.$ Many of the classical results on mixing summing operators are due to B. Maurey \cite{Maurey}, and the theory has proven rich enough to be investigated in its own right (see \cite[Section 32]{Flore}).
\subsection{Absolutely $(p;q;r)$-summing operators\label{Sub2}}
The concept of absolutely $(p;q;r)$-summing linear operators is due to A. Pietsch \cite{pp0, pp1}. If $0<p,q<\infty$ and $0<r\leq\infty$ with \[ \frac{1}{p}\leq\frac{1}{q}+\frac{1}{r}, \] a continuous linear operator $u:E\rightarrow F$ is absolutely $\left( p;q;r\right) $-summing ($u\in\Pi_{as\left( p;q;r\right) }\left( E;F\right) $) if there is a constant $C>0$ such that \begin{equation} \left( \sum_{j=1}^{m}\left\vert \varphi_{j}\left( u\left( x_{j}\right) \right) \right\vert ^{p}\right) ^{\frac{1}{p}}\leq C\left\Vert \left( x_{j}\right) _{j=1}^{m}\right\Vert _{w,q}\left\Vert \left( \varphi _{j}\right) _{j=1}^{m}\right\Vert _{w,r} \label{1807} \end{equation} for every positive integer $m$ and all $x_{1},\ldots,x_{m}$ in $E$ and $\varphi_{1},\ldots,\varphi_{m}$ in $F^{\ast}$. When $r=\infty$, we recover the classical notion of absolutely $(p;q)$-summing operators. For details we refer to \cite{Lap, pp1, hist}.
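The assertion that $r=\infty$ recovers the $(p;q)$-summing case can be seen directly: $\Vert(\varphi_{j})_{j=1}^{m}\Vert_{w,\infty}=\sup_{j\leq m}\Vert\varphi_{j}\Vert$, and by the Hahn-Banach theorem we may choose norm-one functionals with $\varphi_{j}(u(x_{j}))=\Vert u(x_{j})\Vert$, so that (\ref{1807}) becomes
\[
\left( \sum_{j=1}^{m}\Vert u(x_{j})\Vert^{p}\right) ^{\frac{1}{p}}\leq C\left\Vert \left( x_{j}\right) _{j=1}^{m}\right\Vert _{w,q};
\]
conversely, $|\varphi_{j}(u(x_{j}))|\leq\Vert\varphi_{j}\Vert\,\Vert u(x_{j})\Vert$ shows that this inequality implies (\ref{1807}) with $r=\infty$.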
The space of all continuous linear operators from $E$ to $F$ that are absolutely $\left( p;q;r\right) $-summing is denoted by $\Pi_{as\left( p;q;r\right) }\left( E;F\right) $. The infimum of the constants $C$ satisfying the inequality (\ref{1807}) defines a norm ($p$-norm if $0<p<1$) on $\Pi_{as\left( p;q;r\right) }\left( E;F\right) ,$ denoted by $\pi_{\left( p;q;r\right) }(u).$ If $r=\infty$ we use the classical notation for absolutely $\left( p;q\right) $-summing operators, $\Pi_{\left( p;q\right) }\left( E;F\right) $ and $\pi_{\left( p;q\right) }$ for the norm.
If we allowed $\frac{1}{p}>\frac{1}{q}+\frac{1}{r}$ we would have $\Pi _{as\left( p;q;r\right) }\left( E;F\right) =\left\{ 0\right\} $ (see \cite[p. 196]{djt}); for this reason, we require $\frac{1}{p}\leq\frac {1}{q}+\frac{1}{r}$ in the definition above.
\subsection{Operator ideals, multi-ideals and polynomial ideals}
The theory of operator ideals goes back to J.W. Calkin \cite{cal}, H. Weyl \cite{we} and further work of A.\ Grothendieck \cite{grote}. However, it was only in the 1970s, with the work of A. Pietsch \cite{pp1}, that the theory was organized in its modern presentation (see also \cite{ddjj, HP}). For historical details we suggest \cite{hist} and for applications we refer to \cite{ddjj}.
An operator ideal $\mathcal{I}$ is a subclass of the class $\mathcal{L}_{1}$ of all continuous linear operators between Banach spaces such that for all Banach spaces $E$ and $F$ its components \[ \mathcal{I}(E;F):=\mathcal{L}_{1}(E;F)\cap\mathcal{I} \] satisfy the following:
(Oa) $\mathcal{I}(E;F)$ is a linear subspace of $\mathcal{L}_{1}(E;F)$ which contains the finite rank operators.
(Ob) If $u\in\mathcal{I}(E;F)$, $v\in\mathcal{L}_{1}(G;E)$ and $w\in \mathcal{L}_{1}(F;H)$, then $w\circ u\circ v\in\mathcal{I}(G;H)$.
The operator ideal is called a normed operator ideal if there is a function $\Vert\cdot\Vert_{\mathcal{I}}\colon\mathcal{I}\longrightarrow\lbrack 0,\infty)$ satisfying
(Ob1) $\Vert\cdot\Vert_{\mathcal{I}}$ restricted to $\mathcal{I}(E;F)$ is a norm, for all Banach spaces $E$, $F$.
(Ob2) $\Vert P_{1}\colon\mathbb{K}\longrightarrow\mathbb{K}:P_{1} (\lambda)=\lambda\Vert_{\mathcal{I}}=1.$
(Ob3) If $u\in\mathcal{I}(E;F)$, $v\in\mathcal{L}_{1}(G;E)$ and $w\in \mathcal{L}_{1}(F;H)$, then \[ \Vert w\circ u\circ v\Vert_{\mathcal{I}}\leq\Vert w\Vert\Vert u\Vert _{\mathcal{I}}\Vert v\Vert. \] When $\mathcal{I}(E;F)$ with the norm above is always complete, $\mathcal{I}$ is called a Banach operator ideal.
Absolutely summing operators and the two related concepts mentioned above are examples of operator ideals. Other examples include the ideals of compact, weakly compact, and strictly singular operators.
The notion of multi-ideals is also due to Pietsch \cite{Pie}. For each positive integer $n$, let $\mathcal{L}_{n}$ denote the class of all continuous $n$-linear operators between Banach spaces. An ideal of multilinear mappings (or multi-ideal) $\mathcal{M}$ is a subclass of the class $\mathcal{L}=
{\textstyle\bigcup\limits_{n=1}^{\infty}}
\mathcal{L}_{n}$ of all continuous multilinear operators between Banach spaces such that for every positive integer $n$ and all Banach spaces $E_{1},\ldots,E_{n}$ and $F$, the components \[ \mathcal{M}_{n}(E_{1},\ldots,E_{n};F):=\mathcal{L}_{n}(E_{1},\ldots ,E_{n};F)\cap\mathcal{M} \] satisfy:
(Ma) $\mathcal{M}_{n}(E_{1},\ldots,E_{n};F)$ is a linear subspace of $\mathcal{L}_{n}(E_{1},\ldots,E_{n};F)$ which contains the $n$-linear mappings of finite type.
(Mb) If $T\in\mathcal{M}_{n}(E_{1},\ldots,E_{n};F)$, $u_{j}\in\mathcal{L} _{1}(G_{j};E_{j})$ for $j=1,\ldots,n$ and $v\in\mathcal{L}_{1}(F;H)$, then \[ v\circ T\circ(u_{1},\ldots,u_{n})\in\mathcal{M}_{n}(G_{1},\ldots,G_{n};H). \]
Moreover, $\mathcal{M}$ is a (quasi-) normed multi-ideal if there is a function $\Vert\cdot\Vert_{\mathcal{M}}\colon\mathcal{M}\longrightarrow \lbrack0,\infty)$ satisfying
(Mb1) $\Vert\cdot\Vert_{\mathcal{M}}$ restricted to $\mathcal{M}_{n} (E_{1},\ldots,E_{n};F)$ is a (quasi-) norm, for all Banach spaces $E_{1},\ldots,E_{n}$ and $F.$
(Mb2) $\Vert T_{n}\colon\mathbb{K}^{n}\longrightarrow\mathbb{K}:T_{n} (\lambda_{1},\ldots,\lambda_{n})=\lambda_{1}\cdots\lambda_{n}\Vert _{\mathcal{M}}=1$ for all $n$,
(Mb3) If $T\in\mathcal{M}_{n}(E_{1},\ldots,E_{n};F)$, $u_{j}\in\mathcal{L} _{1}(G_{j};E_{j})$ for $j=1,\ldots,n$ and $v\in\mathcal{L}_{1}(F;H)$, then \[ \Vert v\circ T\circ(u_{1},\ldots,u_{n})\Vert_{\mathcal{M}}\leq\Vert v\Vert\Vert T\Vert_{\mathcal{M}}\Vert u_{1}\Vert\cdots\Vert u_{n}\Vert. \] When all the components $\mathcal{M}_{n}(E_{1},\ldots,E_{n};F)$ are complete under this (quasi-) norm, $\mathcal{M}$ is called a (quasi-) Banach multi-ideal. For a fixed multi-ideal $\mathcal{M}$ and a positive integer $n$, the class \[ \mathcal{M}_{n}:=\cup_{E_{1},\ldots,E_{n},F}\mathcal{M}_{n}\left( E_{1} ,\ldots,E_{n};F\right) \] is called an ideal of $n$-linear mappings.
Similarly, for each positive integer $n$, let $\mathcal{P}_{n}$ denote the class of all continuous $n$-homogeneous polynomials between Banach spaces. A polynomial ideal $\mathcal{Q}$ is a subclass of the class $\mathcal{P}= {\textstyle\bigcup\limits_{n=1}^{\infty}} \mathcal{P}_{n}$ of all continuous homogeneous polynomials between Banach spaces so that for all $n\in\mathbb{N} $ and all Banach spaces $E$ and $F$, the components \[ \mathcal{Q}_{n}\left( ^{n}E;F\right) :=\mathcal{P}_{n}\left( ^{n} E;F\right) \cap\mathcal{Q} \] satisfy:
(Pa) $\mathcal{Q}_{n}\left( ^{n}E;F\right) $ is a linear subspace of $\mathcal{P}_{n}\left( ^{n}E;F\right) $ which contains the finite-type polynomials.
(Pb) If $u\in\mathcal{L}_{1}\left( G;E\right) $, $P\in\mathcal{Q}_{n}\left( ^{n}E;F\right) $ and $w\in\mathcal{L}_{1}\left( F;H\right) $, then \[ w\circ P\circ u\in\mathcal{Q}_{n}\left( ^{n}G;H\right) . \]
If there exists a map $\left\Vert \cdot\right\Vert _{\mathcal{Q}} :\mathcal{Q}\rightarrow\lbrack0,\infty\lbrack$ satisfying
(Pb1) $\left\Vert \cdot\right\Vert _{\mathcal{Q}}$ restricted to $\mathcal{Q}_{n}(^{n}E;F)$ is a (quasi-) norm for all Banach spaces $E$ and $F$ and all $n$;
(Pb2) $\left\Vert P_{n}:\mathbb{K}\rightarrow\mathbb{K};\text{ }P_{n}\left( \lambda\right) =\lambda^{n}\right\Vert _{\mathcal{Q}}=1$ for all $n$;
(Pb3) If $u\in\mathcal{L}_{1}(G;E)$, $P\in\mathcal{Q}_{n}(^{n}E;F)$ and $w\in\mathcal{L}_{1}(F;H),$ then \[ \left\Vert w\circ P\circ u\right\Vert _{\mathcal{Q}}\leq\left\Vert w\right\Vert \left\Vert P\right\Vert _{\mathcal{Q}}\left\Vert u\right\Vert ^{n}, \] $\mathcal{Q}$ is called a (quasi-) normed polynomial ideal. If all components $\mathcal{Q}_{n}\left( ^{n}E;F\right) $ are complete, $\left( \mathcal{Q},\left\Vert \cdot\right\Vert _{\mathcal{Q}}\right) $ is called a (quasi-) Banach ideal of polynomials (or (quasi-) Banach polynomial ideal). For a fixed ideal of polynomials $\mathcal{Q}$ and $n\in\mathbb{N}$, the class \[ \mathcal{Q}_{n}:=\cup_{E,F}\mathcal{Q}_{n}\left( ^{n}E;F\right) \] is called an ideal of $n$-homogeneous polynomials.
A crucial question in the theory of Banach polynomial ideals (and multi-ideals) is the following:
\begin{quote} \textit{Given an operator ideal, is there a natural method to define a related multi-ideal and polynomial ideal without losing its essence?} \end{quote}
As mentioned before, a given operator ideal in general has several different possible extensions to multi-ideals and polynomial ideals. In the attempt to single out which approaches are better than others, the notions of coherence and compatibility (and, in some sense, holomorphy types) are quite helpful.
In the last decades several authors have investigated multilinear and polynomial generalizations of certain operator ideals, such as the ideal of absolutely summing operators. But the search for the correct approach is not an easy task: the generalizations must keep the essence of the given operator ideal, and there seems to be no universal recipe for this.
The main goal of this paper is to discuss and introduce multilinear and polynomial extensions of the aforementioned operator ideals (from Subsections \ref{xxzzz} and \ref{Sub2}) taking into account the existent methods of evaluating the adequacy of such generalizations. Besides the intrinsic mathematical interest of the subject, the main motivation of this paper is that we believe (based on concrete facts) that the previous approaches were not adequate.
\section{Coherence and compatibility}
The notions of coherent sequences of polynomial ideals and compatible polynomial ideals, which we recall below, are important tools for evaluating polynomial extensions of a given operator ideal. The essence of these concepts rests in the search for harmony between the levels of homogeneity ($n$-linearity) of a polynomial ideal and its connections (compatibility) with the case of linear operators ($n=1$). In what follows, if $P\in\mathcal{P}\left( ^{n}E;F\right) $ and $a\in E$, then $P_{a^{k}}\in\mathcal{P} \left( ^{n-k}E;F\right) $ is defined by \[ P_{a^{k}}(x):=\check{P}(a,\ldots,a,x,\ldots,x), \] where $\check{P}$ denotes the unique symmetric $n$-linear mapping associated with $P$ and $a$ appears $k$ times.
\begin{definition} [Compatible ideals, \cite{CDM09}]\label{IdeaisCompativeis}Let $\mathcal{U}$ be a normed ideal of linear operators. A normed ideal of $n$-homogeneous polynomials $\mathcal{U}_{n}$ is compatible with $\mathcal{U}$ if there exist positive constants $\alpha_{1}$ and $\alpha_{2}$ such that for all Banach spaces $E$ and $F$ the following conditions hold:
$\left( i\right) $ For each $P\in\mathcal{U}_{n}\left( E;F\right) $ and $a\in E$, $P_{a^{n-1}}$ belongs to $\mathcal{U}\left( E;F\right) $ and \[ \left\Vert P_{a^{n-1}}\right\Vert _{\mathcal{U}\left( E;F\right) }\leq \alpha_{1}\left\Vert P\right\Vert _{\mathcal{U}_{n}\left( E;F\right) }\left\Vert a\right\Vert ^{n-1}. \]
$\left( ii\right) $ For each $T\in\mathcal{U}\left( E;F\right) $ and $\gamma\in E^{\ast}$, $\gamma^{n-1}T$ belongs to $\mathcal{U}_{n}\left( E;F\right) $ and \[ \left\Vert \gamma^{n-1}T\right\Vert _{\mathcal{U}_{n}\left( E;F\right) } \leq\alpha_{2}\left\Vert \gamma\right\Vert ^{n-1}\left\Vert T\right\Vert _{\mathcal{U}\left( E;F\right) }. \]
\end{definition}
For the sake of simplicity, we will sometimes write \textquotedblleft the sequence $\left( \mathcal{U}_{n}\right) _{n=1}^{\infty}$ is compatible with $\mathcal{U}$\textquotedblright\ instead of writing \textquotedblleft $\mathcal{U}_{n}$ is compatible with $\mathcal{U}$ for every $n$ \textquotedblright. Besides, when we write \textquotedblleft the sequence $\left( \mathcal{U}_{n}\right) _{n=1}^{\infty}$ fails to be compatible with $\mathcal{U}$\textquotedblright\ we are saying that at least for some $n$, the ideal $\mathcal{U}_{n}$ is not compatible with $\mathcal{U}$.
\begin{definition} [Coherent sequence of polynomial ideals \cite{CDM09}]\label{IdeaisCoerentes} Consider the sequence $\left( \mathcal{U}_{k}\right) _{k=1}^{N}$, where for each $k$, $\mathcal{U}_{k}$ is an ideal of $k$-homogeneous polynomials and $N$ is possibly infinite. The sequence $\left( \mathcal{U}_{k}\right) _{k=1}^{N}$ is a coherent sequence of polynomial ideals if there exist positive constants $\beta_{1}$ and $\beta_{2}$ such that for all Banach spaces $E$ and $F$ the following conditions hold for $k\in\{1,\ldots,N-1\}$:
$\left( i\right) $ For each $P\in\mathcal{U}_{k+1}\left( E;F\right) $ and $a\in E$, $P_{a}$ belongs to $\mathcal{U}_{k}\left( E;F\right) $ and \[ \left\Vert P_{a}\right\Vert _{\mathcal{U}_{k}\left( E;F\right) }\leq \beta_{1}\left\Vert P\right\Vert _{\mathcal{U}_{k+1}\left( E;F\right) }\left\Vert a\right\Vert . \]
$\left( ii\right) $ For each $P\in\mathcal{U}_{k}\left( E;F\right) $ and $\gamma\in E^{\ast}$, $\gamma P$ belongs to $\mathcal{U}_{k+1}\left( E;F\right) $ and \[ \left\Vert \gamma P\right\Vert _{\mathcal{U}_{k+1}\left( E;F\right) } \leq\beta_{2}\left\Vert \gamma\right\Vert \left\Vert P\right\Vert _{\mathcal{U}_{k}\left( E;F\right) }. \]
\end{definition}
\section{The first multilinear and polynomial approaches to summability}
In 1989, R. Alencar and M.C. Matos \cite{am} explored the following concept of absolutely summing multilinear operators, which was essentially introduced by Pietsch:
\begin{definition} \label{pro}Let $p,p_{1},\ldots,p_{n}\in(0,\infty),$ with $\frac{1}{p}\leq \frac{1}{p_{1}}+\cdots+\frac{1}{p_{n}}.$ A mapping $T\in\mathcal{L} (E_{1},\ldots,E_{n};F)$ is absolutely $(p;p_{1},\ldots,p_{n})$ -summing\textbf{\ (}or\textbf{\ }$(p;p_{1},\ldots,p_{n})$-summing\textbf{)} if there exists a $C\geq0$ such that \begin{equation} \left( \overset{m}{\underset{i=1}{\sum}}\left\Vert T(x_{i}^{(1)},\ldots ,x_{i}^{(n)})\right\Vert ^{p}\right) ^{\frac{1}{p}}\leq C\overset {n}{\underset{k=1}{\prod}}\left\Vert \left( x_{j}^{(k)}\right) _{j=1} ^{m}\right\Vert _{w,p_{k}} \label{llo} \end{equation}
for every $m\in\mathbb{N}$ and $x_{i}^{(k)}\in E_{k},$ with $\left( i,k\right) \in\left\{ 1,\ldots,m\right\} \times\left\{ 1,\ldots,n\right\} $. Analogously an $n$-homogeneous polynomial $P\in\mathcal{P}(^{n}E;F)$ is\textbf{\ }absolutely\textbf{\ }$(p;q)$-summing if there exists a constant $C\geq0$ such that \[ \left( \sum\limits_{j=1}^{m}\left\Vert P\left( x_{j}\right) \right\Vert ^{p}\right) ^{\frac{1}{p}}\leq C\left\Vert \left( x_{j}\right) _{j=1} ^{m}\right\Vert _{w,q}^{n} \] for all $m\in\mathbb{N}$ and $x_{j}\in E,$ with $j=1,\ldots,m.$ \end{definition}
The space of all $n$-linear operators satisfying (\ref{llo}) will be denoted by $\mathcal{L}_{as(p;p_{1},\ldots,p_{n})}(E_{1},\ldots,E_{n};F).$ When $p_{1}=\cdots=p_{n}=q,$ we simply write $\mathcal{L}_{as(p;q)}(E_{1} ,\ldots,E_{n};F)$. For $n=1$ we use the classical notation $\Pi_{(p;q)}$ instead of $\mathcal{L}_{as(p;q)}.$ For polynomials we write $\mathcal{P} _{as(p;q)}(^{n}E;F).$
For other approaches we mention \cite{port, Choi, df, Dimant} and references therein. The successful notion of multiple summing multilinear operators will be discussed in Section \ref{MMMA}.
In the case of mixing summing operators, the multilinear/polynomial theory was investigated by C.A. Soares in his Ph.D. dissertation \cite{Carlos Alberto}. However, the definition considered in \cite{Carlos Alberto} is an extension of Definition \ref{pro} and, as happens with the concept of absolutely summing multilinear operators, it inherits its weaknesses.
\begin{definition} Let $0<q\leq s\leq\infty$ and $0<p_{1},\ldots,p_{n}\leq\infty.$ An $n$-linear operator $T\in\mathcal{L}(E_{1},\ldots,E_{n};F)$ is\textbf{\ }$(s,q;p_{1} ,\ldots,p_{n})$-mixing summing if there exists a constant $\sigma\geq0$ such that \begin{equation} \left\Vert \left( T(x_{j}^{(1)},\ldots,x_{j}^{(n)})\right) _{j=1} ^{m}\right\Vert _{mx(s,q)}\leq\sigma\prod_{k=1}^{n}\left\Vert (x_{j} ^{(k)})_{j=1}^{m}\right\Vert _{w,p_{k}} \label{xxc} \end{equation} for every $m\in\mathbb{N}$ $,$ $x_{1}^{(1)},\ldots,x_{m}^{(1)}\in E_{1} ,\ldots,x_{1}^{(n)},\ldots,x_{m}^{(n)}\in E_{n}.$ Analogously $P\in \mathcal{P}(^{n}E;F)$ is\textbf{\ }mixing\textbf{\ }$(s,q;p)$-summing if there exists a constant $C\geq0$ such that \[ \left\Vert \left( P(x_{j})\right) _{j=1}^{m}\right\Vert _{mx(s,q)}\leq C\left\Vert \left( x_{j}\right) _{j=1}^{m}\right\Vert _{w,p}^{n} \] for all $m\in\mathbb{N}$ and $x_{j}\in E,$ with $j=1,\ldots,m.$ \end{definition}
If $p_{1}=\cdots=p_{n}=p,$ the operator $T$ is said to be $(s,q;p)$-mixing summing.
The following multilinear generalization of $(p;q;r)$-summing operators was recently introduced by D. Achour \cite{achour}:
\begin{definition} \label{ach}Let $0<p,q_{1},\ldots,q_{n}<\infty$ and $0<r\leq\infty$ with \[ \frac{1}{p}\leq\frac{1}{q_{1}}+\cdots+\frac{1}{q_{n}}+\frac{1}{r}. \] An $n$-linear map $T$ $\in\mathcal{L}{(E_{1},\ldots,E_{n};F)}$ is absolutely $(p;q_{1},\ldots,q_{n};r)$-summing if there is a $C\geq0$ so that \begin{equation} \left( \sum\limits_{j=1}^{m}\left\vert \varphi_{j}\left( T\left( x_{j}^{(1)},\ldots,x_{j}^{(n)}\right) \right) \right\vert ^{p}\right) ^{\frac{1}{p}}\leq C\left\Vert \left( \varphi_{j}\right) _{j=1} ^{m}\right\Vert _{w,r}\prod_{i=1}^{n}\left\Vert \left( x_{j}^{(i)}\right) _{j=1}^{m}\right\Vert _{w,q_{i}} \label{des} \end{equation} for all $m\in\mathbb{N}$, $\varphi_{j}\in F^{\ast}$ and $x_{j}^{(i)}\in E_{i},$ with $\left( i,j\right) \in\{1,\ldots,n\}\times\{1,\ldots,m\}.$ Analogously an $n$-homogeneous polynomial $P\in\mathcal{P}(^{n}E;F)$ is\textbf{\ }absolutely\textbf{\ }$(p;q;r)$-summing if there exists a constant $C\geq0$ such that \[ \left( \sum\limits_{j=1}^{m}\left\vert \varphi_{j}\left( P\left( x_{j}\right) \right) \right\vert ^{p}\right) ^{\frac{1}{p}}\leq C\left\Vert \left( \varphi_{j}\right) _{j=1}^{m}\right\Vert _{w,r}\left\Vert \left( x_{j}\right) _{j=1}^{m}\right\Vert _{w,q}^{n} \] for all $m\in\mathbb{N}$, $\varphi_{j}\in F^{\ast}$ and $x_{j}\in E,$ with $j=1,\ldots,m.$ \end{definition}
We denote{\ the space of all absolutely }$(p;q_{1},\ldots,q_{n};r)$-summing $n$-linear operators by \[ \mathcal{L}_{as(p;q_{1},\ldots,q_{n};r)}\left( {E_{1},\ldots,E_{n};F}\right) . \] When $q_{1}=\cdots=q_{n}=q$ we just write $\mathcal{L}_{as(p;q;r)}\left( {E_{1},\ldots,E_{n};F}\right) $. When $r=\infty$ we recover the notion of absolutely $\left( p;q_{1}, \ldots,q_{n}\right) $-summing multilinear mappings $\mathcal{L}_{as(p;q_{1},\ldots,q_{n})}$ due to Alencar and Matos \cite{am}. More precisely, \begin{equation} \mathcal{L}_{as(p;q_{1},\ldots,q_{n};\infty)}=\mathcal{L}_{as(p;q_{1} ,\ldots,q_{n})}. \label{s32} \end{equation}
If $\frac{1}{p}>\frac{1}{q_{1}}+\cdots+\frac{1}{q_{n}}+\frac{1}{r}$ and $T$ is absolutely $(p;q_{1},\ldots,q_{n};r)$-summing, then $T=0$. It is not difficult to prove that \begin{equation} \mathcal{L}_{as\left( p;q_{1},\ldots,q_{n}\right) }\left( E_{1} ,\ldots,E_{n};F\right) \subset\mathcal{L}_{as\left( p;q_{1},\ldots ,q_{n};r\right) }\left( E_{1},\ldots,E_{n};F\right) \label{mov} \end{equation} for all Banach spaces $E_{1},\ldots,E_{n},F$ and $r>0$.
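For the reader's convenience, we sketch the one-line estimate behind (\ref{mov}): since $\left\Vert \left( \varphi_{j}\right) _{j=1}^{m}\right\Vert _{w,r}\geq\sup_{1\leq j\leq m}\left\Vert \varphi_{j}\right\Vert $, if $T$ satisfies (\ref{llo}) (with $p_{k}=q_{k}$) with constant $C$, then \[ \left( \sum_{j=1}^{m}\left\vert \varphi_{j}\left( T\left( x_{j}^{(1)},\ldots,x_{j}^{(n)}\right) \right) \right\vert ^{p}\right) ^{\frac{1}{p}}\leq\sup_{1\leq j\leq m}\left\Vert \varphi_{j}\right\Vert \left( \sum_{j=1}^{m}\left\Vert T\left( x_{j}^{(1)},\ldots,x_{j}^{(n)}\right) \right\Vert ^{p}\right) ^{\frac{1}{p}}\leq C\left\Vert \left( \varphi_{j}\right) _{j=1}^{m}\right\Vert _{w,r}\prod_{i=1}^{n}\left\Vert \left( x_{j}^{(i)}\right) _{j=1}^{m}\right\Vert _{w,q_{i}}, \] so $T$ satisfies (\ref{des}) with the same constant.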
\subsection{The lack of coherence and compatibility}
The class of absolutely $\left( p;q\right) $-summing $n$-homogeneous polynomials will be denoted by $\mathcal{P}_{as(p;q)}^{n}.$ As before, the space of all $n$-homogeneous polynomials $P:E\rightarrow F$ in $\mathcal{P} _{as(p;q)}^{n}$ is represented by $\mathcal{P}_{as(p;q)}\left( ^{n} E;F\right) .$ The notions of absolutely $\left( p;q;r\right) $-summing polynomials and mixing summing polynomials are denoted in a similar way.
It can be easily seen that $\left( \mathcal{P}_{as(p;q)}^{n}\right) _{n=1}^{\infty}$ in general fails to be coherent and compatible with $\Pi_{as(p;q)}$. In fact for any positive integer $n\geq2$ and any real number $1\leq p\leq2$ we know that \[ \mathcal{P}_{as(1;1)}\left( ^{n}\ell_{p};F\right) =\mathcal{P}\left( ^{n}\ell_{p};F\right) \] for all Banach spaces $F$. This result is an obvious deviation from the spirit of the linear ideal of absolutely summing operators since \[ \Pi_{as(1;1)}\left( \ell_{p};F\right) =\mathcal{L}\left( \ell_{p};F\right) \] if and only if $p=1$ and $F$ is a Hilbert space (see \cite{LP}). This situation also proves that $\left( \mathcal{P}_{as(1;1)}^{n}\right) _{n=1}^{\infty}$ is not coherent or compatible with $\Pi_{as(1;1)}.$ We also know that $\left( \mathcal{P}_{as(p;q)}^{n}\right) _{n=1}^{\infty}$ in general is not a (global) holomorphy type.
Since $\mathcal{P}_{as\left( p;q;\infty\right) }^{n}=\mathcal{P}_{as\left( p;q\right) }^{n}$ and $\mathcal{P}_{mxs\left( \infty;p\right) } ^{n}=\mathcal{P}_{as\left( p;p\right) }^{n}$, these deficiencies of $\left( \mathcal{P}_{as(1;1)}^{n}\right) _{n=1}^{\infty}$ are inherited by the polynomial analogues of the concepts of Subsections \ref{xxzzz} and \ref{Sub2}. These deficiencies will be fixed by the alternative concepts introduced in the next sections.
\section{Multiple summing multilinear operators: the ``nice prototype''\label{MMMA}}
Multiple $(p;q)$-summing multilinear operators were introduced in 2003 \cite{Matos, pv}. The origins of this notion date back to the 1930s, with Littlewood's $4/3$ inequality \cite{lit}, which asserts that \[ \left( \sum\limits_{i,j=1}^{N}\left\vert T(e_{i},e_{j})\right\vert ^{\frac {4}{3}}\right) ^{\frac{3}{4}}\leq\sqrt{2}\left\Vert T\right\Vert \] for every bilinear form $T:\ell_{\infty}^{N}\times\ell_{\infty}^{N} \rightarrow\mathbb{K}$ and every positive integer $N.$ In 1931 H.F. Bohnenblust and E. Hille \cite{bh} provided a deep generalization of this result to multilinear mappings: for every positive integer $n$ there is a $C_{n}>0$ so that \[ \left( \sum\limits_{i_{1},\ldots,i_{n}=1}^{N}\left\vert T(e_{i_{1}} ,\ldots,e_{i_{n}})\right\vert ^{\frac{2n}{n+1}}\right) ^{\frac{n+1}{2n}}\leq C_{n}\left\Vert T\right\Vert \] for every $n$-linear mapping $T:\ell_{\infty}^{N}\times\cdots\times \ell_{\infty}^{N}\rightarrow\mathbb{C}$ and every positive integer $N$. This result has important applications in operator theory in Banach spaces, harmonic analysis, complex analysis and analytic number theory. For recent advances related to the Bohnenblust-Hille inequality we refer to \cite{an, df,DPell, Mun, Mu2}.
In his Ph.D. dissertation, D. P\'{e}rez-Garc\'{\i}a \cite{Da} remarked that the Bohnenblust-Hille inequality can be viewed as a result of the theory of multiple summing operators.
\begin{theorem} [Bohnenblust-Hille]\label{ytr}If $E_{1},\ldots,E_{n}$ are Banach spaces and $T\in\mathcal{L}(E_{1},\ldots,E_{n};\mathbb{K}),$ then there exists a constant $C_{n}\geq0$ such that \begin{equation} \left( \sum_{j_{1},\ldots,j_{n}=1}^{N}\left\vert T(x_{j_{1}}^{(1)} ,\ldots,x_{j_{n}}^{(n)})\right\vert ^{\frac{2n}{n+1}}\right) ^{\frac{n+1} {2n}}\leq C_{n}\prod_{k=1}^{n}\left\Vert (x_{j}^{(k)})_{j=1}^{N}\right\Vert _{w,1} \label{juo} \end{equation} for every positive integer $N$ and $x_{j}^{(k)}\in E_{k}$, $k=1,\ldots,n$ and $j=1,\ldots,N.$ \end{theorem}
The inequality above can be regarded as a result in the theory of multiple summing multilinear operators. Recall that for $1\leq q_{1},\ldots,q_{n}\leq p<\infty,$ an $n$-linear operator $T:E_{1}\times\cdots\times E_{n}\rightarrow F$ is multiple\emph{\ }$(p;q_{1},\ldots,q_{n})$-summing ($T\in\mathcal{L} _{mas(p;q_{1},\ldots,q_{n})}(E_{1},\ldots,E_{n};F)$) if there exists $C>0$ such that \begin{equation} \left( \sum_{j_{1},\ldots,j_{n}=1}^{\infty}\Vert T(x_{j_{1}}^{(1)} ,\ldots,x_{j_{n}}^{(n)})\Vert^{p}\right) ^{1/p}\leq C\prod\limits_{k=1} ^{n}\Vert(x_{j}^{(k)})_{j=1}^{\infty}\Vert_{w,q_{k}}\text{ } \label{jup2} \end{equation} for every $(x_{j}^{(k)})_{j=1}^{\infty}\in\ell_{q_{k}}^{w}(E_{k})$, $k=1,\ldots,n$.
The infimum of all $C$ satisfying (\ref{jup2}), denoted by $\left\Vert T\right\Vert _{(p;q_{1},\ldots,q_{n})},$ defines a complete norm if $p\geq1$ ($p$-norm if $p\in(0,1)$) on $\mathcal{L}_{mas(p;q_{1},\ldots,q_{n})} (E_{1},\ldots,E_{n};F).$ If $q_{1}=\cdots=q_{n}=q$ we just write $(p;q),$ and when $p=q$ we replace $\left( p;q\right) $ by $p$. For $n=1$ this concept also coincides with the classical notion of absolutely summing linear operators and, for this reason, we keep the usual notation $\pi_{(p;q)}\left( T\right) $ instead of $\left\Vert T\right\Vert _{(p;q)}$ for the norm of $T.$ The essence of the notion of multiple summing multilinear operators, for bilinear operators, can also be traced back to \cite{Ram}. For recent results in the theory of multiple summing operators we refer to \cite{BBJP, se, davidstudia, PopJM} and references therein.
\section{Multiple $\left( p;q_{1},\ldots,q_{n};r\right) $-summing multilinear operators\label{ss33}}
In this section we introduce the notion of multiple $\left( p;q_{1} ,\ldots,q_{n};r\right) $-summing multilinear operators and, as we shall see in the next sections, the polynomial version of this concept is coherent and compatible with the (linear) operator ideal of $(p;q;r)$-summing operators.
\begin{definition} Let $m\in\mathbb{N},p,r,q_{1},\ldots,q_{n}\geq1$ and $E_{1},\ldots,E_{n},F$ be Banach spaces. A continuous multilinear operator $T:E_{1}\times\cdots\times E_{n}\rightarrow F$ is multiple $\left( p;q_{1},\ldots,q_{n};r\right) $-summing when \[ \left( \varphi_{j_{1}\ldots j_{n}}\left( T\left( x_{j_{1}}^{\left( 1\right) },\ldots,x_{j_{n}}^{\left( n\right) }\right) \right) \right) _{j_{1},\ldots,j_{n}\in\mathbb{N}}\in\ell_{p}\left( \mathbb{N}^{n}\right) \] whenever $\left( x_{j}^{\left( i\right) }\right) _{j=1}^{\infty}\in \ell_{q_{i}}^{w}\left( E_{i}\right) ,i=1,\ldots,n$ and $\left( \varphi_{j_{1}\ldots j_{n}}\right) _{j_{1},\ldots,j_{n}\in\mathbb{N}}\in \ell_{r}^{w}\left( F^{\ast},\mathbb{N}^{n}\right) .$ \end{definition}
Sometimes we shall simply write $j\in\mathbb{N}^{n}$ to denote $j=(j_{1} ,\ldots,j_{n})\in\mathbb{N}^{n}.$ The vector space formed by the multiple $\left( p;q_{1},\ldots,q_{n};r\right) $-summing multilinear operators from $E_{1}\times\cdots\times E_{n}$ to $F$ will be denoted by $\mathcal{L} _{mas\left( p;q_{1},\ldots,q_{n};r\right) }\left( E_{1},\ldots ,E_{n};F\right) $. When $q_{1}=\cdots=q_{n}=q$, we simply write $\mathcal{L}_{mas\left( p;q;r\right) }\left( E_{1},\ldots,E_{n};F\right) $.
As with other similar classes, the class $\mathcal{L}_{mas\left( p;q_{1},\ldots,q_{n};r\right) }\left( E_{1},\ldots,E_{n};F\right) $ has a characterization by means of inequalities:
\begin{theorem} \label{1.2}The following assertions are equivalent for $T\in\mathcal{L}\left( E_{1},\ldots,E_{n};F\right) $:
\begin{itemize} \item[(i)] $T\in\mathcal{L}_{mas\left( p;q_{1},\ldots,q_{n};r\right) }\left( E_{1},\ldots,E_{n};F\right) ;$
\item[(ii)] There is a $C\geq0$ such that \begin{align} & \left( \sum_{j_{1},\ldots,j_{n}=1}^{\infty}\left\vert \varphi_{j_{1}\ldots j_{n}}\left( T\left( x_{j_{1}}^{\left( 1\right) },\ldots,x_{j_{n} }^{\left( n\right) }\right) \right) \right\vert ^{p}\right) ^{\frac{1} {p}}\label{11:06}\\ & \leq C\left\Vert \left( \varphi_{j_{1}\ldots j_{n}}\right) _{j_{1} ,...,j_{n}\in\mathbb{N}}\right\Vert _{w,r}\prod_{i=1}^{n}\left\Vert \left( x_{j}^{\left( i\right) }\right) _{j=1}^{\infty}\right\Vert _{w,q_{i} }\nonumber \end{align} whenever $\left( x_{j}^{\left( i\right) }\right) _{j=1}^{\infty}\in \ell_{q_{i}}^{w}\left( E_{i}\right) ,i=1,\ldots,n$ and $\left( \varphi_{j_{1}\ldots j_{n}}\right) _{j\in\mathbb{N}^{n}}\in\ell_{r} ^{w}\left( F^{\ast},\mathbb{N}^{n}\right) ;$
\item[(iii)] There is a $C\geq0$ such that \begin{align*} & \left( \sum_{j_{1},\ldots,j_{n}=1}^{m}\left\vert \varphi_{j_{1}\ldots j_{n}}\left( T\left( x_{j_{1}}^{\left( 1\right) },\ldots,x_{j_{n} }^{\left( n\right) }\right) \right) \right\vert ^{p}\right) ^{\frac{1} {p}}\\ & \leq C\left\Vert \left( \varphi_{j_{1}\ldots j_{n}}\right) _{j_{1} ,...,j_{n}\in\mathbb{N}_{m}}\right\Vert _{w,r}\prod_{i=1}^{n}\left\Vert \left( x_{j}^{\left( i\right) }\right) _{j=1}^{m}\right\Vert _{w,q_{i}} \end{align*} for all $m\in\mathbb{N},$ $x_{1}^{\left( i\right) },\ldots,x_{m}^{\left( i\right) }\in E_{i},i=1,\ldots,n$ and $\left( \varphi_{j_{1}\ldots j_{n} }\right) _{j\in\mathbb{N}_{m}^{n}}\in\ell_{r}^{w}\left( F^{\ast} ,\mathbb{N}_{m}^{n}\right) .$ \end{itemize}
The infimum of all $C$ satisfying (\ref{11:06}) defines a norm in $\mathcal{L}_{mas\left( p;q_{1},\ldots,q_{n};r\right) }\left( E_{1} ,\ldots,E_{n};F\right) .$ \end{theorem}
Similarly to (\ref{mov}), it can also be proved that \begin{equation} \mathcal{L}_{mas\left( p;q_{1},\ldots,q_{n}\right) }\subset\mathcal{L} _{mas\left( p;q_{1},\ldots,q_{n};r\right) } \label{inc222} \end{equation} for all $r>0.$ From Theorem \ref{1.2} we can conclude that if \[ \frac{1}{p}>\frac{1}{q_{i}}+\frac{1}{r} \] for some $i$, then $\mathcal{L}_{mas\left( p;q_{1},\ldots,q_{n};r\right) }\left( E_{1},\ldots,E_{n};F\right) =\left\{ 0\right\} $. In fact, one first proves that if $T\in\mathcal{L}_{mas\left( p;q_{1},\ldots,q_{n} ;r\right) }\left( E_{1},\ldots,E_{n};F\right) ,$ then, for any $a\in E_{1} $, the map \begin{equation} T_{a}:E_{2}\times\cdots\times E_{n}\longrightarrow F:T_{a}\left( x_{2} ,\ldots,x_{n}\right) =T\left( a,x_{2},\ldots,x_{n}\right) \label{wsq} \end{equation} is multiple $\left( p;q_{2},\ldots,q_{n};r\right) $-summing, with \begin{equation} \left\Vert T_{a}\right\Vert _{mas\left( p;q_{2},\ldots,q_{n};r\right) } \leq\left\Vert a\right\Vert \left\Vert T\right\Vert _{mas\left( p;q_{1},\ldots,q_{n};r\right) }. \label{wsa} \end{equation} Now suppose, say, that $\frac{1}{p}>\frac{1}{q_{1}}+\frac{1}{r}.$ Applying (\ref{wsq}) repeatedly (together with its obvious analogues for the other coordinates), we see that if $T\in\mathcal{L}_{mas\left( p;q_{1},\ldots,q_{n};r\right) }\left( E_{1},\ldots,E_{n};F\right) $, then $T_{a_{2},\ldots,a_{n}}\in\mathcal{L} _{as\left( p;q_{1};r\right) }\left( E_{1};F\right) $ for all $a_{2}\in E_{2},\ldots,a_{n}\in E_{n}$. Since $\mathcal{L}_{as\left( p;q_{1};r\right) }\left( E_{1};F\right) =\left\{ 0\right\} $ in this case, it follows that $T_{a_{2},\ldots,a_{n}}=0$ and hence $T=0.$ So, in order to avoid trivialities, we shall suppose $\frac{1} {p}\leq\frac{1}{q_{i}}+\frac{1}{r}$ for all $i.$
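For completeness, we indicate how (\ref{wsa}) can be obtained from Theorem \ref{1.2}; the auxiliary sequences below are introduced only for this argument. Given $\left( x_{j}^{\left( i\right) }\right) _{j=1}^{\infty}\in\ell_{q_{i}}^{w}\left( E_{i}\right) $, $i=2,\ldots,n$, and $\left( \varphi_{j_{2}\ldots j_{n}}\right) \in\ell_{r}^{w}\left( F^{\ast},\mathbb{N}^{n-1}\right) $, apply (\ref{11:06}) to the sequence $x_{1}^{(1)}=a$, $x_{j}^{(1)}=0$ for $j\geq2$ (so that $\left\Vert \left( x_{j}^{(1)}\right) _{j=1}^{\infty}\right\Vert _{w,q_{1}}=\left\Vert a\right\Vert $), and to the functionals $\psi_{j_{1}j_{2}\ldots j_{n}}=\varphi_{j_{2}\ldots j_{n}}$ if $j_{1}=1$ and $\psi_{j_{1}j_{2}\ldots j_{n}}=0$ otherwise, which satisfy $\left\Vert \left( \psi_{j}\right) _{j\in\mathbb{N}^{n}}\right\Vert _{w,r}=\left\Vert \left( \varphi_{j_{2}\ldots j_{n}}\right) \right\Vert _{w,r}$. All the terms with $j_{1}\neq1$ vanish and we obtain \[ \left( \sum_{j_{2},\ldots,j_{n}=1}^{\infty}\left\vert \varphi_{j_{2}\ldots j_{n}}\left( T_{a}\left( x_{j_{2}}^{\left( 2\right) },\ldots,x_{j_{n}}^{\left( n\right) }\right) \right) \right\vert ^{p}\right) ^{\frac{1}{p}}\leq\left\Vert T\right\Vert _{mas\left( p;q_{1},\ldots,q_{n};r\right) }\left\Vert a\right\Vert \left\Vert \left( \varphi_{j_{2}\ldots j_{n}}\right) \right\Vert _{w,r}\prod_{i=2}^{n}\left\Vert \left( x_{j}^{\left( i\right) }\right) _{j=1}^{\infty}\right\Vert _{w,q_{i}}, \] which is precisely (\ref{wsa}).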
\subsection{Coherence and compatibility \label{ss44}}
Standard calculations show that \[ \left( \mathcal{L}_{mas\left( p;q_{1},\ldots,q_{n};r\right) },\left\Vert \cdot\right\Vert _{mas\left( p;q_{1},\ldots,q_{n};r\right) }\right) \] is a Banach multi-ideal. If $\mathcal{M}$ is a (quasi-) normed ideal of multilinear mappings, the class \[ \mathcal{P}_{\mathcal{M}}=\left\{ P\in\mathcal{P}^{n};\check{P}\in \mathcal{M},n\in\mathbb{N}\right\} \text{,} \] with $\left\Vert P\right\Vert _{\mathcal{P}_{\mathcal{M}}}:=\left\Vert \check{P}\right\Vert _{\mathcal{M}},$ is a (quasi-) normed ideal of polynomials, called the polynomial ideal generated by $\mathcal{M}$. If $\mathcal{M}$ is (quasi-) Banach, then $\mathcal{P}_{\mathcal{M}}$ is (quasi-) Banach (see \cite[p. 46]{BBJMs}).
Thus, the class \[ \mathcal{P}_{mas\left( p;q;r\right) }^{n}=\left\{ P\in\mathcal{P} ^{n};\check{P}\in\mathcal{L}_{mas\left( p;q;r\right) }^{n}\right\} , \] with \[ \left\Vert P\right\Vert _{\mathcal{P}_{mas\left( p;q;r\right) }^{n} }:=\left\Vert \check{P}\right\Vert _{mas\left( p;q;r\right) }, \] is a Banach polynomial ideal.
\begin{theorem} $\left( \mathcal{P}_{mas\left( p;q;r\right) }^{n},\left\Vert .\right\Vert _{\mathcal{P}_{mas\left( p;q;r\right) }^{n}}\right) _{n=1}^{\infty}$ is coherent and, for each fixed $n$, compatible with $\mathcal{L}_{mas\left( p;q;r\right) }$. \end{theorem}
\begin{proof} If $P\in\mathcal{P}_{mas\left( p;q;r\right) }^{n}\left( ^{n}E;F\right) $ and $a\in E$, then $\check{P}\in\mathcal{L}_{mas\left( p;q;r\right) } ^{n}\left( ^{n}E;F\right) $ and, from (\ref{wsq}) and (\ref{wsa}), $\check{P}_{a}\in\mathcal{L}_{mas\left( p;q;r\right) }^{n-1}\left( ^{n-1}E;F\right) .$ Hence $P_{a}\in\mathcal{P}_{mas\left( p;q;r\right) }^{n-1}\left( ^{n-1}E;F\right) $ with \[ \left\Vert P_{a}\right\Vert _{\mathcal{P}_{mas\left( p;q;r\right) }^{n-1} }\leq\left\Vert a\right\Vert \left\Vert P\right\Vert _{\mathcal{P}_{mas\left( p;q;r\right) }^{n}}. \] Let $\gamma\in E^{\ast}.$ Note that \[ \left( \gamma P\right) ^{\vee}\left( x_{1},\ldots,x_{n+1}\right) =\frac {1}{n+1}\sum_{k=1}^{n+1}\gamma\left( x_{k}\right) \check{P}\left( x_{1},\overset{\left[ k\right] }{\ldots},x_{n+1}\right) , \] where $\overset{\left[ k\right] }{\ldots}$ means that the $k$-th coordinate is missing.
Let $m\in\mathbb{N}$, $x_{j}^{(k)}\in E$, with $j=1,\ldots,m$ and $k=1,\ldots,n+1;$ let $\varphi_{j_{1}\ldots j_{n+1}}\in F^{\ast}$ with $j_{1},\ldots,j_{n+1}=1,\ldots,m.$ Using the triangle inequality we have \begin{align*} & \left( \sum_{j_{1},\ldots,j_{n+1}=1}^{m}\left\vert \varphi_{j_{1}\ldots j_{n+1}}\left( \left( \gamma P\right) ^{\vee}\left( x_{j_{1}}^{\left( 1\right) },\ldots,x_{j_{n+1}}^{\left( n+1\right) }\right) \right) \right\vert ^{p}\right) ^{\frac{1}{p}}\\ & =\left( \sum_{j_{1},\ldots,j_{n+1}=1}^{m}\left\vert \varphi_{j_{1}\ldots j_{n+1}}\left( \frac{1}{n+1}\sum_{k=1}^{n+1}\gamma\left( x_{j_{k}}^{\left( k\right) }\right) \check{P}\left( x_{j_{1}}^{\left( 1\right) } ,\overset{\left[ k\right] }{\ldots},x_{j_{n+1}}^{\left( n+1\right) }\right) \right) \right\vert ^{p}\right) ^{\frac{1}{p}}\\ & =\frac{1}{n+1}\left( \sum_{j_{1},\ldots,j_{n+1}=1}^{m}\left\vert \varphi_{j_{1}\ldots j_{n+1}}\left( \sum_{k=1}^{n+1}\gamma\left( x_{j_{k} }^{\left( k\right) }\right) \check{P}\left( x_{j_{1}}^{\left( 1\right) },\overset{\left[ k\right] }{\ldots},x_{j_{n+1}}^{\left( n+1\right) }\right) \right) \right\vert ^{p}\right) ^{\frac{1}{p}}\\ & =\frac{1}{n+1}\left( \sum_{j_{1},\ldots,j_{n+1}=1}^{m}\left\vert \sum_{k=1}^{n+1}\varphi_{j_{1}\ldots j_{n+1}}\left( \gamma\left( x_{j_{k} }^{\left( k\right) }\right) \check{P}\left( x_{j_{1}}^{\left( 1\right) },\overset{\left[ k\right] }{\ldots},x_{j_{n+1}}^{\left( n+1\right) }\right) \right) \right\vert ^{p}\right) ^{\frac{1}{p}}\\ & \leq\frac{1}{n+1}\left( \sum_{j_{1},\ldots,j_{n+1}=1}^{m}\left( \sum_{k=1}^{n+1}\left\vert \varphi_{j_{1}\ldots j_{n+1}}\left( \gamma\left( x_{j_{k}}^{\left( k\right) }\right) \check{P}\left( x_{j_{1}}^{\left( 1\right) },\overset{\left[ k\right] }{\ldots},x_{j_{n+1}}^{\left( n+1\right) }\right) \right) \right\vert \right) ^{p}\right) ^{\frac{1} {p}}\\ & =\frac{1}{n+1}\left\Vert \left( \sum_{k=1}^{n+1}\left\vert \varphi _{j_{1}\ldots j_{n+1}}\left( \gamma\left( x_{j_{k}}^{\left( k\right) }\right) 
\check{P}\left( x_{j_{1}}^{\left( 1\right) },\overset{\left[ k\right] }{\ldots},x_{j_{n+1}}^{\left( n+1\right) }\right) \right) \right\vert \right) _{j_{1},\ldots,j_{n+1}=1}^{m}\right\Vert _{p}\\ & =(\ast). \end{align*} Thus, from the Minkowski inequality we have \begin{align} (\ast) & =\frac{1}{n+1}\left\Vert \sum_{k=1}^{n+1}\left( \left\vert \varphi _{j_{1}\ldots j_{n+1}}\left( \gamma\left( x_{j_{k}}^{\left( k\right) }\right) \check{P}\left( x_{j_{1}}^{\left( 1\right) },\overset{\left[ k\right] }{\ldots},x_{j_{n+1}}^{\left( n+1\right) }\right) \right) \right\vert \right) _{j_{1},\ldots,j_{n+1}=1}^{m}\right\Vert _{p}\\ & \leq\frac{1}{n+1}\sum_{k=1}^{n+1}\left\Vert \left( \left\vert \varphi_{j_{1}\ldots j_{n+1}}\left( \gamma\left( x_{j_{k}}^{\left( k\right) }\right) \check{P}\left( x_{j_{1}}^{\left( 1\right) } ,\overset{\left[ k\right] }{\ldots},x_{j_{n+1}}^{\left( n+1\right) }\right) \right) \right\vert \right) _{j_{1},\ldots,j_{n+1}=1} ^{m}\right\Vert _{p}\nonumber\\ & =\frac{1}{n+1}\sum_{k=1}^{n+1}\left( \sum_{j_{1},\ldots,j_{n+1}=1} ^{m}\left\vert \varphi_{j_{1}\ldots j_{n+1}}\left( \gamma\left( x_{j_{k} }^{\left( k\right) }\right) \check{P}\left( x_{j_{1}}^{\left( 1\right) },\overset{\left[ k\right] }{\ldots},x_{j_{n+1}}^{\left( n+1\right) }\right) \right) \right\vert ^{p}\right) ^{\frac{1}{p}}\nonumber\\ & =\frac{1}{n+1}\left[ \left( \sum_{j_{1},\ldots,j_{n+1}=1}^{m}\left\vert \varphi_{j_{1}\ldots j_{n+1}}\left( \check{P}\left( \gamma\left( x_{j_{1} }^{\left( 1\right) }\right) x_{j_{2}}^{\left( 2\right) },\ldots ,x_{j_{n+1}}^{\left( n+1\right) }\right) \right) \right\vert ^{p}\right) ^{\frac{1}{p}}+\cdots\right. \nonumber\\ & \left. \cdots+\left( \sum_{j_{1},\ldots,j_{n+1}=1}^{m}\left\vert \varphi_{j_{1}\ldots j_{n+1}}\left( \check{P}\left( \gamma\left( x_{j_{n+1}}^{\left( n+1\right) }\right) x_{j_{1}}^{\left( 1\right) },\ldots,x_{j_{n}}^{\left( n\right) }\right) \right) \right\vert ^{p}\right) ^{\frac{1}{p}}\right] .\nonumber
\end{align}
Hence \begin{align} & \left( \sum_{j_{1},\ldots,j_{n+1}=1}^{m}\left\vert \varphi_{j_{1}\ldots j_{n+1}}\left( \left( \gamma P\right) ^{\vee}\left( x_{j_{1}}^{\left( 1\right) },\ldots,x_{j_{n+1}}^{\left( n+1\right) }\right) \right) \right\vert ^{p}\right) ^{\frac{1}{p}}\label{estta}\\ & \leq\frac{1}{n+1}\left[ \left( \sum_{j_{1},\ldots,j_{n+1}=1} ^{m}\left\vert \varphi_{j_{1}\ldots j_{n+1}}\left( \check{P}\left( \gamma\left( x_{j_{1}}^{\left( 1\right) }\right) x_{j_{2}}^{\left( 2\right) },\ldots,x_{j_{n+1}}^{\left( n+1\right) }\right) \right) \right\vert ^{p}\right) ^{\frac{1}{p}}+\cdots\right. \nonumber\\ & \left. \cdots+\left( \sum_{j_{1},\ldots,j_{n+1}=1}^{m}\left\vert \varphi_{j_{1}\ldots j_{n+1}}\left( \check{P}\left( \gamma\left( x_{j_{n+1}}^{\left( n+1\right) }\right) x_{j_{1}}^{\left( 1\right) },\ldots,x_{j_{n}}^{\left( n\right) }\right) \right) \right\vert ^{p}\right) ^{\frac{1}{p}}\right] .\nonumber \end{align}
Note that each one of the $n+1$ terms of (\ref{estta}) can be re-written as \[ \left( \sum_{j_{2}=1}^{m^{2}}\sum_{j_{3},\ldots,j_{n+1}=1}^{m}\left\vert \widetilde{\varphi}_{j_{2}\ldots j_{n+1}}\left( \check{P}\left( z_{j_{2} }^{(2)},\ldots,z_{j_{n+1}}^{\left( n+1\right) }\right) \right) \right\vert ^{p}\right) ^{\frac{1}{p}} \] for adequate choices of $\widetilde{\varphi}_{j_{2}\ldots j_{n+1}}$ and $z_{j_{k}}^{(k)}$, with $k=2,\ldots,n+1.$
In fact, for \[ \left( \sum_{j_{1},\ldots,j_{n+1}=1}^{m}\left\vert \varphi_{j_{1}\ldots j_{n+1}}\left( \check{P}\left( \gamma\left( x_{j_{1}}^{\left( 1\right) }\right) x_{j_{2}}^{\left( 2\right) },\ldots,x_{j_{n+1}}^{\left( n+1\right) }\right) \right) \right\vert ^{p}\right) ^{\frac{1}{p}}, \] we choose \[ \left\{ \begin{array} [c]{c} z_{j_{2}}^{\left( 2\right) }=\gamma\left( x_{1}^{\left( 1\right) }\right) x_{j_{2}}^{\left( 2\right) }\text{ for all }j_{2}=1,\ldots.,m,\\ z_{m+j_{2}}^{\left( 2\right) }=\gamma\left( x_{2}^{\left( 1\right) }\right) x_{j_{2}}^{\left( 2\right) }\text{ for all }j_{2}=1,\ldots.,m,\\ \vdots\\ z_{\left( m-1\right) m+j_{2}}^{\left( 2\right) }=\gamma\left( x_{m}^{\left( 1\right) }\right) x_{j_{2}}^{\left( 2\right) }\text{ for all }j_{2}=1,\ldots.,m,\\ z_{j_{i}}^{(i)}=x_{j_{i}}^{\left( i\right) }\text{ for all }j_{i} =1,\ldots,m,i=3,\ldots,n+1 \end{array} \right. \] and \[ \left\{ \begin{array} [c]{c} \widetilde{\varphi}_{j_{2},\ldots.j_{n+1}}=\varphi_{1j_{2}\ldots j_{n+1} }\text{ for all }j_{2}=1,\ldots.,m,\\ \widetilde{\varphi}_{m+j_{2},\ldots.j_{n+1}}=\varphi_{2j_{2}\ldots j_{n+1} }\text{ for all }j_{2}=1,\ldots.,m,\\ \vdots\\ \widetilde{\varphi}_{(m-1)m+j_{2},\ldots.j_{n+1}}=\varphi_{mj_{2}\ldots j_{n+1}}\text{ for all }j_{2}=1,\ldots.,m. \end{array} \right. \] For these choices one can check that \begin{align*} & \left( \sum_{j_{1},\ldots,j_{n+1}=1}^{m}\left\vert \varphi_{j_{1}\ldots j_{n+1}}\left( \check{P}\left( \gamma\left( x_{j_{1}}^{\left( 1\right) }\right) x_{j_{2}}^{\left( 2\right) },\ldots,x_{j_{n+1}}^{\left( n+1\right) }\right) \right) \right\vert ^{p}\right) ^{\frac{1}{p}}\\ & =\left( \sum_{j_{2}=1}^{m^{2}}\sum_{j_{3},\ldots,j_{n+1}=1}^{m}\left\vert \widetilde{\varphi}_{j_{2}\ldots j_{n+1}}\left( \check{P}\left( z_{j_{2} }^{(2)},\ldots,z_{j_{n+1}}^{\left( n+1\right) }\right) \right) \right\vert ^{p}\right) ^{\frac{1}{p}} \end{align*} and the other cases are similar. 
Then \begin{align*} & \left( \sum_{j_{1},\ldots,j_{n+1}=1}^{m}\left\vert \varphi_{j_{1}\ldots j_{n+1}}\left( \check{P}\left( \gamma\left( x_{j_{1}}^{\left( 1\right) }\right) x_{j_{2}}^{\left( 2\right) },\ldots,x_{j_{n+1}}^{\left( n+1\right) }\right) \right) \right\vert ^{p}\right) ^{\frac{1}{p}}\\ & =\left( \sum_{j_{2},\ldots,j_{n+1}=1}^{m^{2},m,\ldots,m}\left\vert \widetilde{\varphi}_{j_{2}\ldots j_{n+1}}\left( \check{P}\left( z_{j_{2} }^{(2)},\ldots,z_{j_{n+1}}^{\left( n+1\right) }\right) \right) \right\vert ^{p}\right) ^{\frac{1}{p}}\\ & \leq\left\Vert \check{P}\right\Vert _{mas\left( p;q;r\right) }\left\Vert \left( \widetilde{\varphi}_{j_{2}\ldots j_{n+1}}\right) _{j_{2} ,\ldots,j_{n+1}}^{m^{2},m,\ldots,m}\right\Vert _{w,r}\left\Vert \left( z_{j_{2}}^{\left( 2\right) }\right) _{j_{2}=1}^{m^{2}}\right\Vert _{w,q}\prod_{i=3}^{n+1}\left\Vert \left( z_{j_{i}}^{\left( i\right) }\right) _{j_{i}=1}^{m}\right\Vert _{w,q}\\ & =\left\Vert \check{P}\right\Vert _{mas\left( p;q;r\right) }\left\Vert \left( \varphi_{j_{1}\ldots j_{n+1}}\right) _{j\in\mathbb{N}_{m}^{n+1} }\right\Vert _{w,r}\left\Vert \left( \gamma\left( x_{j_{1}}^{\left( 1\right) }\right) x_{j_{2}}^{\left( 2\right) }\right) _{j_{1},j_{2} =1}^{m}\right\Vert _{w,q}\prod_{i=3}^{n+1}\left\Vert \left( x_{j}^{\left( i\right) }\right) _{j=1}^{m}\right\Vert _{w,q}. 
\end{align*} Since \begin{align*} & \left\Vert \left( \gamma\left( x_{j_{1}}^{\left( 1\right) }\right) x_{j_{2}}^{\left( 2\right) }\right) _{j_{1},j_{2}=1}^{m}\right\Vert _{w,q}\\ & \leq\left\Vert \left( \gamma\left( x_{j_{1}}^{\left( 1\right) }\right) \right) _{j_{1}=1}^{m}\right\Vert _{\infty}\sup_{\left\Vert \varphi \right\Vert \leq1}\left( \sum_{j=1}^{m}\left\vert \varphi\left( x_{j_{2} }^{\left( 2\right) }\right) \right\vert ^{q}\right) ^{\frac{1}{q}}\\ & \leq\left\Vert \left( \gamma\left( x_{j_{1}}^{\left( 1\right) }\right) \right) _{j_{1}=1}^{m}\right\Vert _{q}\left\Vert \left( x_{j_{2}}^{\left( 2\right) }\right) _{j_{2}=1}^{m}\right\Vert _{w,q}\\ & \leq\left\Vert \gamma\right\Vert \left\Vert \left( x_{j_{1}}^{\left( 1\right) }\right) _{j_{1}=1}^{m}\right\Vert _{w,q}\left\Vert \left( x_{j_{2}}^{\left( 2\right) }\right) _{j_{2}=1}^{m}\right\Vert _{w,q}, \end{align*} we have \begin{align*} & \left( \sum_{j_{1},\ldots,j_{n+1}=1}^{m}\left\vert \varphi_{j_{1}\ldots j_{n+1}}\left( \check{P}\left( \gamma\left( x_{j_{1}}^{\left( 1\right) }\right) x_{j_{2}}^{\left( 2\right) },\ldots,x_{j_{n+1}}^{\left( n+1\right) }\right) \right) \right\vert ^{p}\right) ^{\frac{1}{p}}\\ & \leq\left\Vert \gamma\right\Vert \left\Vert \check{P}\right\Vert _{mas\left( p;q;r\right) }\left\Vert \left( \varphi_{j_{1}\ldots j_{n+1} }\right) _{j\in\mathbb{N}_{m}^{n+1}}\right\Vert _{w,r}\prod_{i=1} ^{n+1}\left\Vert \left( x_{j}^{\left( i\right) }\right) _{j=1} ^{m}\right\Vert _{w,q}. 
\end{align*} Using the same idea for the other $n$ terms of (\ref{estta}), we obtain \begin{align*} & \left( \sum_{j_{1},\ldots,j_{n+1}=1}^{m}\left\vert \varphi_{j_{1}\ldots j_{n+1}}\left( \check{P}\left( \gamma\left( x_{j_{2}}^{\left( 2\right) }\right) x_{j_{1}}^{\left( 1\right) },x_{j_{3}}^{\left( 3\right) } \ldots,x_{j_{n+1}}^{\left( n+1\right) }\right) \right) \right\vert ^{p}\right) ^{\frac{1}{p}}\\ & \leq\left\Vert \gamma\right\Vert \left\Vert \check{P}\right\Vert _{mas\left( p;q;r\right) }\left\Vert \left( \varphi_{j_{1}\ldots j_{n+1} }\right) _{j\in\mathbb{N}_{m}^{n+1}}\right\Vert _{w,r}\prod_{i=1} ^{n+1}\left\Vert \left( x_{j}^{\left( i\right) }\right) _{j=1} ^{m}\right\Vert _{w,q}, \end{align*} \[ \vdots \] \begin{align*} & \left( \sum_{j_{1},\ldots,j_{n+1}=1}^{m}\left\vert \varphi_{j_{1}\ldots j_{n+1}}\left( \check{P}\left( \gamma\left( x_{j_{n+1}}^{\left( n+1\right) }\right) x_{j_{1}}^{\left( 1\right) },x_{j_{2}}^{\left( 2\right) }\ldots,x_{j_{n}}^{n}\right) \right) \right\vert ^{p}\right) ^{\frac{1}{p}}\\ & \leq\left\Vert \gamma\right\Vert \left\Vert \check{P}\right\Vert _{mas\left( p;q;r\right) }\left\Vert \left( \varphi_{j_{1}\ldots j_{n+1} }\right) _{j\in\mathbb{N}_{m}^{n+1}}\right\Vert _{w,r}\prod_{i=1} ^{n+1}\left\Vert \left( x_{j}^{\left( i\right) }\right) _{j=1} ^{m}\right\Vert _{w,q}. \end{align*} Therefore \begin{align*} & \left( \sum_{j_{1},\ldots,j_{n+1}=1}^{m}\left\vert \varphi_{j_{1}\ldots j_{n+1}}\left( \left( \gamma P\right) ^{\vee}\left( x_{j_{1}}^{\left( 1\right) },\ldots,x_{j_{n+1}}^{\left( n+1\right) }\right) \right) \right\vert ^{p}\right) ^{\frac{1}{p}}\\ & \leq\frac{1}{n+1}\left[ \left\Vert \gamma\right\Vert \left\Vert \check {P}\right\Vert _{mas\left( p;q;r\right) }\left\Vert \left( \varphi _{j_{1}\ldots j_{n+1}}\right) _{j\in\mathbb{N}_{m}^{n+1}}\right\Vert _{w,r}\prod_{i=1}^{n+1}\left\Vert \left( x_{j}^{\left( i\right) }\right) _{j=1}^{m}\right\Vert _{w,q}+\cdots\right. \\ & \left. 
\cdots+\left\Vert \gamma\right\Vert \left\Vert \check{P}\right\Vert _{mas\left( p;q;r\right) }\left\Vert \left( \varphi_{j_{1}\ldots j_{n+1} }\right) _{j\in\mathbb{N}_{m}^{n+1}}\right\Vert _{w,r}\prod_{i=1} ^{n+1}\left\Vert \left( x_{j}^{\left( i\right) }\right) _{j=1} ^{m}\right\Vert _{w,q}\right] \\ & =\left\Vert \gamma\right\Vert \left\Vert \check{P}\right\Vert _{mas\left( p;q;r\right) }\left\Vert \left( \varphi_{j_{1}\ldots j_{n+1}}\right) _{j\in\mathbb{N}_{m}^{n+1}}\right\Vert _{w,r}\prod_{i=1}^{n+1}\left\Vert \left( x_{j}^{\left( i\right) }\right) _{j=1}^{m}\right\Vert _{w,q}. \end{align*} Finally we conclude that $\gamma P$ is multiple $\left( p;q;r\right) $-summing and \begin{align*} \left\Vert \gamma P\right\Vert _{\mathcal{P}_{mas\left( p;q;r\right) } ^{n+1}} & \leq\left\Vert \gamma\right\Vert \left\Vert \check{P}\right\Vert _{mas\left( p;q;r\right) }\\ & =\left\Vert \gamma\right\Vert \left\Vert P\right\Vert _{\mathcal{P} _{mas\left( p;q;r\right) }^{n}}. \end{align*} The items (i) and (ii) from Definition \ref{IdeaisCompativeis} are obtained in a similar way. \end{proof}
\section{Multiple mixing summing operators}
In this section we introduce the notion of multiple mixing summing multilinear operators (and polynomials) which is coherent and compatible with the respective operator ideal. As another indicator that this is a correct approach to nonlinear mixing summability, we prove a quotient theorem for multilinear operators similar to the one for mixing summing linear operators.
\begin{definition} Let $0<p_{1},\ldots,p_{n}\leq q\leq s<\infty$. An $n$-linear operator $A\in\mathcal{L}(E_{1},\ldots,E_{n};F)$ is multiple $(s,q;p_{1},\ldots,p_{n} )$-mixing summing if there exists a constant $\sigma\geq0$ such that \begin{equation} \left\Vert \left( A(x_{j_{1}}^{(1)},\ldots,x_{j_{n}}^{(n)})\right) _{j_{1},\ldots,j_{n}=1}^{m}\right\Vert _{mx(s,q)}\leq\sigma\prod_{k=1} ^{n}\left\Vert (x_{j}^{(k)})_{j=1}^{m}\right\Vert _{w,p_{k}} \label{III} \end{equation} for every $m\in\mathbb{N}$ and all $x_{1}^{(1)},\ldots,x_{m}^{(1)}\in E_{1} ,\ldots,x_{1}^{(n)},\ldots,x_{m}^{(n)}\in E_{n}.$ \end{definition}
In this case we define \[ \left\Vert A\right\Vert _{mx(s,q;p_{1},\ldots,p_{n})}=\inf\sigma. \] If $p_{1}=\cdots=p_{n}=p,$ we say that $A$ is multiple $(s,q;p)$-mixing summing. The space of all multiple $(s,q;p_{1},\ldots,p_{n})$-mixing summing operators is denoted by $\Pi_{mx(s,q;p_{1},\ldots,p_{n})}.$
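Note that, for $n=1$, inequality (\ref{III}) reads \[ \left\Vert \left( A(x_{j})\right) _{j=1}^{m}\right\Vert _{mx(s,q)}\leq \sigma\left\Vert (x_{j})_{j=1}^{m}\right\Vert _{w,p_{1}}, \] which, for $p_{1}=q$, is precisely the classical definition of an $(s,q)$-mixing summing linear operator (see \cite{pp1}); thus the multiple notion extends the linear one.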
In order to avoid trivialities in the definition of multiple $(s,q;p_{1} ,\ldots,p_{n})$-mixing summing operators, we assume that $p_{k}\leq q$ for all $k=1,\ldots,n.$ In fact, one can check that if $T\in\mathcal{L}(E_{1} ,\ldots,E_{n};F)$ is multiple $(s,q;p_{1},\ldots,p_{n})$-mixing summing and $q<p_{k}$ for some $k,$ then $T=0.$
The following result, whose standard proof we omit, characterizes multiple $(s,q;p_{1},\ldots,p_{n})$-mixing summing operators as those which map adequate weakly summable sequences into adequate mixed summable sequences:
\begin{proposition} \label{Primeira Prop}Let $0<p_{1},\ldots,p_{n}\leq q\leq s<\infty.$ An operator $A\in\mathcal{L}(E_{1},\ldots,E_{n};F)$ is multiple $(s,q;p_{1} ,\ldots,p_{n} )$-mixing summing if, and only if, \[ \left( A(x_{j_{1}}^{(1)},\ldots,x_{j_{n}}^{(n)})\right) _{j_{1},\ldots,j_{n} =1}^{\infty}\in\ell_{(s,q)}^{mx}\left( F,\mathbb{N}^{n}\right) \] regardless of the choice of $(x_{i}^{(1)})_{i=1}^{\infty}\in\ell_{p_{1}} ^{w}(E_{1}),\ldots,$ $(x_{i}^{(n)})_{i=1}^{\infty}$ $\in\ell_{p_{n}}^{w} (E_{n}).$ \end{proposition}
In fact the proof of the previous proposition also shows that $A$ is multiple $(s,q;p_{1},\ldots,p_{n})$-mixing summing if, and only if, the $n$-linear operator \[ \tilde{A}\left( (x_{i}^{(1)})_{i=1}^{\infty},\ldots,(x_{i}^{(n)} )_{i=1}^{\infty}\right) =\left( A(x_{j_{1}}^{(1)},\ldots,x_{j_{n}} ^{(n)})\right) _{j_{1},...,j_{n}=1}^{\infty} \] belongs to $\mathcal{L}(\ell_{p_{1}}^{w}(E_{1}),\ldots,\ell_{p_{n}}^{w} (E_{n});\ell_{(s,q)}^{mx}\left( F,\mathbb{N}^{n}\right) )$. Moreover \[ \left\Vert A\right\Vert _{mx(s,q;p_{1},\ldots,p_{n})}=\left\Vert \tilde {A}\right\Vert . \] The main result of this section (Theorem \ref{criterio}) is a consequence of the following powerful characterization of mixed summable sequences due to Maurey \cite{Maurey} (see also \cite[16.4.3]{pp1}):
\begin{theorem} [Maurey]\label{caracter} Let $0<q<s<\infty.$ A sequence $\left( z_{j}\right) _{j=1}^{\infty}$ in $E$ is mixed $(s,q)$-summable if, and only if, \[ \left( \left( \int_{B_{E^{\ast}}}\left\vert \left\langle \varphi ,z_{j}\right\rangle \right\vert ^{s}d\mu(\varphi)\right) ^{\frac{1}{s} }\right) _{j=1}^{\infty}\in\ell_{q}\text{ whenever }\mu\in W(B_{E^{\ast}}). \] Moreover, \[ \left\Vert \left( z_{j}\right) _{j=1}^{\infty}\right\Vert _{mx(s,q)} =\sup_{\mu\in W(B_{E^{\ast}})}\left( \sum_{j=1}^{\infty}\left( \int_{B_{E^{\ast}}}\left\vert \left\langle \varphi,z_{j}\right\rangle \right\vert ^{s}d\mu(\varphi)\right) ^{\frac{q}{s}}\right) ^{\frac{1}{q}}. \] \end{theorem}

The next theorem shows that our concept has a characterization similar to the linear case (see \cite{Flore}):
\begin{theorem} \label{criterio}Let $0<p_{1},\ldots,p_{n}\leq q\leq s<\infty.$ An operator $A\in\mathcal{L}(E_{1},\ldots,E_{n};F)$ is multiple $(s,q;p_{1},\ldots,p_{n})$ mixing summing if, and only if, there is a constant $\sigma\geq0$ such that \begin{align} & \left( \sum_{j_{1},\ldots,j_{n}=1}^{m}\left( \sum_{j=1}^{k}\left\vert \left\langle \varphi_{j},A(x_{j_{1}}^{(1)},\ldots,x_{j_{n}}^{(n)} )\right\rangle \right\vert ^{s}\right) ^{\frac{q}{s}}\right) ^{\frac{1}{q} }\label{jj}\\ & \leq\sigma\prod_{l=1}^{n}\left\Vert (x_{i}^{(l)})_{i=1}^{m}\right\Vert _{w,p_{l}}\left\Vert (\varphi_{j})_{j=1}^{k}\right\Vert _{s}\nonumber \end{align} for all $k,m\in\mathbb{N}$, $x_{i}^{(l)}\in E_{l};$ $i=1,\ldots,m,$ $l=1,\ldots,n$ and $\varphi_{j}\in F^{\ast}$ with $j=1,\ldots,k.$ Furthermore, \[ \left\Vert A\right\Vert _{mx(s,q;p_{1},\ldots,p_{n})}=\inf\sigma. \]
\end{theorem}
\begin{proof} We split the proof into two cases.
(i) Case $s=q.$
From (\ref{jj}) we conclude that \[ \left( \sum_{j_{1},\ldots,j_{n}=1}^{m}\left\vert \left\langle \varphi ,A(x_{j_{1}}^{(1)},\ldots,x_{j_{n}}^{(n)})\right\rangle \right\vert ^{q}\right) ^{\frac{1}{q}}\leq\sigma\prod_{l=1}^{n}\left\Vert (x_{i} ^{(l)})_{i=1}^{m}\right\Vert _{w,p_{l}} \] for all $\varphi\in B_{F^{\ast}}.$ Thus \begin{equation} \left\Vert \left( A(x_{j_{1}}^{(1)},\ldots,x_{j_{n}}^{(n)})\right) _{j_{1},...,j_{n}\in\mathbb{N}_{m}}\right\Vert _{w,q}\leq\sigma\prod_{l=1} ^{n}\left\Vert (x_{i}^{(l)})_{i=1}^{m}\right\Vert _{w,p_{l}} \label{I} \end{equation} and so by Theorem \ref{caracter} and by (\ref{I}) we obtain \begin{align*} & \left\Vert \left( A(x_{j_{1}}^{(1)},\ldots,x_{j_{n}}^{(n)})\right) _{j_{1},...,j_{n}\in\mathbb{N}_{m}}\right\Vert _{mx(q,q)}\\ & =\sup_{\mu\in W(B_{F^{\ast}})}\left( \sum_{j_{1},\ldots,j_{n}=1} ^{m}\left( \int_{B_{F^{\ast}}}\left\vert \left\langle \varphi,A(x_{j_{1} }^{(1)},\ldots,x_{j_{n}}^{(n)})\right\rangle \right\vert ^{q}d\mu (\varphi)\right) ^{\frac{q}{q}}\right) ^{\frac{1}{q}}\\ & \leq\sup_{\mu\in W(B_{F^{\ast}})}\left( \int_{B_{F^{\ast}}}\sup_{\psi\in B_{F^{\ast}}}\sum_{j_{1},\ldots,j_{n}=1}^{m}\left\vert \left\langle \psi,A(x_{j_{1}}^{(1)},\ldots,x_{j_{n}}^{(n)})\right\rangle \right\vert ^{q}d\mu(\varphi)\right) ^{\frac{1}{q}}\\ & =\sup_{\mu\in W(B_{F^{\ast}})}\left( \int_{B_{F^{\ast}}}\left\Vert \left( A(x_{j_{1}}^{(1)},\ldots,x_{j_{n}}^{(n)})\right) _{j_{1},...,j_{n} \in\mathbb{N}_{m}}\right\Vert _{w,q}^{q}d\mu(\varphi)\right) ^{\frac{1}{q}}\\ & \leq\left\Vert \left( A(x_{j_{1}}^{(1)},\ldots,x_{j_{n}}^{(n)})\right) _{j_{1},...,j_{n}\in\mathbb{N}_{m}}\right\Vert _{w,q}\\ & \leq\sigma\prod_{l=1}^{n}\left\Vert (x_{i}^{(l)})_{i=1}^{m}\right\Vert _{w,p_{l}}. \end{align*} Hence, $A\in\Pi_{mx(q,q;p_{1},\ldots,p_{n})}(E_{1},\ldots,E_{n};F)$ and $\left\Vert A\right\Vert _{mx(q,q;p_{1},\ldots,p_{n})}\leq\sigma$.
Conversely, suppose that $A\in\Pi_{mx(q,q;p_{1},\ldots,p_{n})}(E_{1} ,\ldots,E_{n};F)$. Given \[ x_{1}^{(1)},\ldots,x_{m}^{(1)}\in E_{1},\ldots,x_{1}^{(n)},\ldots,x_{m} ^{(n)}\in E_{n} \] and $\varphi_{1},\ldots,\varphi_{k}\in F^{\ast}$, if \[ A(x_{j_{1}}^{(1)},\ldots,x_{j_{n}}^{(n)})=\tau_{j_{1},\ldots,j_{n}} .y_{j_{1},\ldots,j_{n}}, \] where $(\tau_{j_{1},\ldots,j_{n}})_{j_{1},...,j_{n}\in\mathbb{N}}\in \ell_{\infty}$ and $(y_{j_{1},\ldots,j_{n}})_{j_{1},...,j_{n}\in\mathbb{N}} \in\ell_{q}^{w}(F;\mathbb{N}^{n})$ we have \begin{align*} & \left( \sum_{j_{1},\ldots,j_{n}=1}^{m}\left( \sum_{j=1}^{k}\left\vert \left\langle \varphi_{j},A(x_{j_{1}}^{(1)},\ldots,x_{j_{n}}^{(n)} )\right\rangle \right\vert ^{q}\right) ^{\frac{q}{q}}\right) ^{\frac{1}{q} }\\ & =\left( \sum_{j=1}^{k}\left( \left\Vert \varphi_{j}\right\Vert ^{q} \sum_{j_{1},\ldots,j_{n}=1}^{m}\left\vert \left\langle \frac{\varphi_{j} }{\left\Vert \varphi_{j}\right\Vert },\tau_{j_{1},\ldots,j_{n}}y_{j_{1} ,\ldots,j_{n}}\right\rangle \right\vert ^{q}\right) \right) ^{\frac{1}{q}}\\ & =\left( \sum_{j=1}^{k}\left\Vert \varphi_{j}\right\Vert ^{q}\right) ^{\frac{1}{q}}\left( \sum_{j_{1},\ldots,j_{n}=1}^{m}\left\vert \tau _{j_{1},\ldots,j_{n}}\right\vert ^{q}\left\vert \left\langle \frac{\varphi _{j}}{\left\Vert \varphi_{j}\right\Vert },y_{j_{1},\ldots,j_{n}}\right\rangle \right\vert ^{q}\right) ^{\frac{1}{q}}\\ & \leq\left\Vert (\varphi_{j})_{j=1}^{k}\right\Vert _{q}\left\Vert (\tau_{j_{1},\ldots,j_{n}})_{j\in\mathbb{N}^{n}}\right\Vert _{\infty }\left\Vert (y_{j_{1},\ldots,j_{n}})_{j\in\mathbb{N}^{n}}\right\Vert _{w,q}. 
\end{align*} Taking the infimum in both sides, we obtain \begin{align*} & \left( \sum_{j_{1},\ldots,j_{n}=1}^{m}\left( \sum_{j=1}^{k}\left\vert \left\langle \varphi_{j},A(x_{j_{1}}^{(1)},\ldots,x_{j_{n}}^{(n)} )\right\rangle \right\vert ^{q}\right) ^{\frac{q}{q}}\right) ^{\frac{1}{q} }\\ & \leq\left\Vert (\varphi_{j})_{j=1}^{k}\right\Vert _{q}\left\Vert \left( A(x_{j_{1}}^{(1)},\ldots,x_{j_{n}}^{(n)})\right) _{j\in\mathbb{N}_{m}^{n} }\right\Vert _{m,(q,q)}\\ & \leq\left\Vert (\varphi_{j})_{j=1}^{k}\right\Vert _{q}\left\Vert A\right\Vert _{mx(q,q;p_{1},\ldots,p_{n})}\prod_{l=1}^{n}\left\Vert (x_{i}^{(l)})_{i=1}^{m}\right\Vert _{w,p_{l}}. \end{align*} Therefore $\inf\sigma\leq\left\Vert A\right\Vert _{mx(q,q;p_{1},\ldots,p_{n} )}$ and with the last inequality we obtain \[ \left\Vert A\right\Vert _{mx(q,q;p_{1},\ldots,p_{n})}=\inf\sigma. \]
(ii) Case $s>q.$
Let $A\in\Pi_{mx(s,q;p_{1},\ldots,p_{n})}(E_{1},\ldots,E_{n};F).$ Given $0\neq\varphi_{1},\ldots,\varphi_{k}\in F^{\ast}$ we define the probability measure \[ \nu=\sum_{j=1}^{k}\nu_{j}\delta_{j},\text{ where }\nu_{j}=\frac{\left\Vert \varphi_{j}\right\Vert ^{s}}{\sum_{j=1}^{k}\left\Vert \varphi_{j}\right\Vert ^{s}} \] and $\delta_{j}$ is the Dirac measure at the point $\tilde{\varphi}_{j} =\frac{\varphi_{j}}{\left\Vert \varphi_{j}\right\Vert }.$
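Note that $\nu$ is indeed a probability measure on $B_{F^{\ast}}$, since \[ \nu\left( B_{F^{\ast}}\right) =\sum_{j=1}^{k}\nu_{j}=\frac{\sum_{j=1} ^{k}\left\Vert \varphi_{j}\right\Vert ^{s}}{\sum_{j=1}^{k}\left\Vert \varphi_{j}\right\Vert ^{s}}=1. \]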
For $x_{1}^{(1)},\ldots,x_{m}^{(1)}\in E_{1},\ldots$, $x_{1}^{(n)} ,\ldots,x_{m}^{(n)}\in E_{n}$, note that \begin{align*} & \int_{B_{F^{\ast}}}\left\vert \left\langle \varphi,A(x_{j_{1}}^{(1)} ,\ldots,x_{j_{n}}^{(n)})\right\rangle \right\vert ^{s}d\nu(\varphi)\\ & =\sum_{j=1}^{k}\left\vert \left\langle \tilde{\varphi_{j}},A(x_{j_{1} }^{(1)},\ldots,x_{j_{n}}^{(n)})\right\rangle \right\vert ^{s}\nu (\tilde{\varphi_{j}})\\ & =\sum_{j=1}^{k}\left\vert \left\langle \frac{\varphi_{j}}{\left\Vert \varphi_{j}\right\Vert },A(x_{j_{1}}^{(1)},\ldots,x_{j_{n}}^{(n)} )\right\rangle \right\vert ^{s}.\nu_{j}.\delta_{j}(\tilde{\varphi_{j}})\\ & =\sum_{j=1}^{k}\left\vert \left\langle \frac{\varphi_{j}}{\left\Vert \varphi_{j}\right\Vert },A(x_{j_{1}}^{(1)},\ldots,x_{j_{n}}^{(n)} )\right\rangle \right\vert ^{s}.\frac{\left\Vert \varphi_{j}\right\Vert ^{s} }{\sum_{j=1}^{k}\left\Vert \varphi_{j}\right\Vert ^{s}}\\ & =\frac{1}{\left\Vert (\varphi_{j})_{j=1}^{k}\right\Vert _{s}^{s}}\sum _{j=1}^{k}\left\vert \left\langle \varphi_{j},A(x_{j_{1}}^{(1)},\ldots ,x_{j_{n}}^{(n)})\right\rangle \right\vert ^{s}. \end{align*}
From the previous equalities and from Theorem \ref{caracter} we have \begin{align*} & \left( \sum_{j_{1},\ldots,j_{n}=1}^{m}\left( \sum_{j=1}^{k}\left\vert \left\langle \varphi_{j},A(x_{j_{1}}^{(1)},\ldots,x_{j_{n}}^{(n)} )\right\rangle \right\vert ^{s}\right) ^{\frac{q}{s}}\right) ^{\frac{1}{q} }\\ & =\left( \sum_{j_{1},\ldots,j_{n}=1}^{m}\left( \int_{B_{F^{\ast}} }\left\vert \left\langle \varphi,A(x_{j_{1}}^{(1)},\ldots,x_{j_{n}} ^{(n)})\right\rangle \right\vert ^{s}d\nu(\varphi)\right) ^{\frac{q}{s} }\right) ^{\frac{1}{q}}\left\Vert (\varphi_{j})_{j=1}^{k}\right\Vert _{s}\\ & \leq\left\Vert \left( A(x_{j_{1}}^{(1)},\ldots,x_{j_{n}}^{(n)})\right) _{j\in\mathbb{N}_{m}^{n}}\right\Vert _{m,(s,q)}\left\Vert (\varphi_{j} )_{j=1}^{k}\right\Vert _{s}\\ & \leq\left\Vert A\right\Vert _{mx(s,q;p_{1},\ldots,p_{n})}\prod_{l=1} ^{n}\left\Vert (x_{i}^{(l)})_{i=1}^{m}\right\Vert _{w,p_{l}}\left\Vert (\varphi_{j})_{j=1}^{k}\right\Vert _{s}. \end{align*} and we obtain (\ref{jj}) with $\inf\sigma\leq\left\Vert A\right\Vert _{mx(s,q;p_{1},\ldots,p_{n})}.$
Conversely, with the same idea and using (\ref{jj}), given a discrete probability measure $\nu=\sum _{i=1}^{k}\nu_{i}\delta_{i}$ on $B_{F^{\ast} }$ we obtain \begin{align*} & \left( \sum_{j_{1},\ldots,j_{n}=1}^{m}\left( \int_{B_{F^{\ast}} }\left\vert \left\langle \varphi,A(x_{j_{1}}^{(1)},\ldots,x_{j_{n}} ^{(n)})\right\rangle \right\vert ^{s}d\nu(\varphi)\right) ^{\frac{q}{s} }\right) ^{\frac{1}{q}}\\ & =\left( \sum_{j_{1},\ldots,j_{n}=1}^{m}\left( \sum_{j=1}^{k}\left\vert \left\langle \nu_{j}^{\frac{1}{s}}\varphi_{j},A(x_{j_{1}}^{(1)},\ldots ,x_{j_{n}}^{(n)})\right\rangle \right\vert ^{s}\right) ^{\frac{q}{s}}\right) ^{\frac{1}{q}}\\ & \leq\sigma\prod_{l=1}^{n}\left\Vert (x_{i}^{(l)})_{i=1}^{m}\right\Vert _{w,p_{l}}\left\Vert (\nu_{j}^{\frac{1}{s}}\varphi_{j})_{j=1}^{k}\right\Vert _{s}\\ & \leq\sigma\prod_{l=1}^{n}\left\Vert (x_{i}^{(l)})_{i=1}^{m}\right\Vert _{w,p_{l}}. \end{align*} The previous inequality holds for every $\nu\in W(B_{F^{\ast}})$, since the discrete probability measures are dense in $W(B_{F^{\ast}})$. Therefore, from Theorem \ref{caracter} we obtain \[ \left\Vert \left( A(x_{j_{1}}^{(1)},\ldots,x_{j_{n}}^{(n)})\right) _{j\in\mathbb{N}_{m}^{n}}\right\Vert _{mx(s,q)}\leq\sigma\prod_{l=1} ^{n}\left\Vert (x_{i}^{(l)})_{i=1}^{m}\right\Vert _{w,p_{l}}, \] for all $m\in\mathbb{N}$ and \[ \left\Vert A\right\Vert _{mx(s,q;p_{1},\ldots,p_{n})}=\inf\sigma. \]
\end{proof}
\subsection{A quotient theorem}
For linear operators, $S\in\mathcal{L}\left( E;F\right) $ is $\left( s,p\right) $-mixing summing if and only if $TS$ is absolutely $p$-summing for all $T\in\Pi_{s}\left( F;G\right) $. In other words \[ \Pi_{mx\left( s,p\right) }\left( E;F\right) =\left( \Pi_{s}\left( F;G\right) \right) ^{-1}\circ\Pi_{p}\left( E;G\right) . \]
For details we refer to \cite[Section 32]{Flore} and \cite{pp1}. In this section we show that our approach provides a perfect multilinear extension of this result. We show that the following assertions are equivalent:
\begin{itemize} \item $T\in\mathcal{L}(E_{1},\ldots,E_{n};F)$ is multiple $\left( s,q;p_{1} ,\ldots,p_{n}\right) $-mixing summing.
\item $u\circ T\in\mathcal{L}_{mas(q;p_{1},\ldots ,p_{n})}(E_{1},\ldots ,E_{n};G)$ for every Banach space $G$ and all $u\in\Pi_{s}\left( F;G\right) .$ \end{itemize}
Using a different notation, we will show the following quotient theorem: \begin{equation} \Pi_{mx\left( s,q;p_{1},\ldots,p_{n}\right) }\left( E_{1},\ldots ,E_{n};F\right) =\left( \Pi_{s}\left( F;G\right) \right) ^{-1} \circ\mathcal{L}_{mas\left( q;p_{1},\ldots,p_{n}\right) }\left( E_{1},\ldots,E_{n};G\right) \label{dtt} \end{equation} for all $E_{1},\ldots,E_{n},F$ and $G.$
The quotient theorem (\ref{dtt}) is a direct consequence of the forthcoming Propositions \ref{pp9} and \ref{pp99}. First we need the following lemma:
\begin{lemma} \label{jlk}Let $A\in\mathcal{L}(E_{1},\ldots,E_{n};F)$ be such that \[ u\circ A\in\mathcal{L}_{mas(p;p_{1},\ldots,p_{n})}(E_{1},\ldots,E_{n};G) \] for every Banach space $G$ and all $u\in\Pi_{r}(F;G).$ Then there is a constant $C\geq0$ such that \begin{equation} \left\Vert u\circ A\right\Vert _{(p;p_{1},\ldots,p_{n})}\leq C\pi _{r}(u).\label{gtr} \end{equation}
\end{lemma}
\begin{proof} Suppose that (\ref{gtr}) fails. Then for each positive integer $k$ there exist a Banach space $F_{k}$ and $u_{k}\in\Pi_{r}(F;F_{k})$ such that \[ \pi_{r}(u_{k})\leq\frac{1}{2^{k}}\text{ and }\left\Vert u_{k}\circ A\right\Vert _{(p;p_{1},\ldots,p_{n})}\geq k. \] Let $J_{k}:F_{k}\rightarrow\ell_{2}\left( \left( F_{k}\right) _{k=1}^{\infty}\right) $ and $Q_{j}:\ell_{2}\left( \left( F_{k}\right) _{k=1}^{\infty}\right) \rightarrow F_{j}$ be the canonical maps for all positive integers $j,k$. Since \[ \pi_{r}\left( \sum\limits_{k=n_{1}}^{n_{2}}J_{k}\circ u_{k}\right) \leq \sum\limits_{k=n_{1}}^{n_{2}}\pi_{r}\left( J_{k}\circ u_{k}\right) \leq \sum\limits_{k=n_{1}}^{n_{2}}\pi_{r}\left( u_{k}\right) \leq\sum \limits_{k=n_{1}}^{n_{2}}\frac{1}{2^{k}}, \] the partial sums form a Cauchy sequence in the complete space $\Pi_{r}(F;\ell_{2}\left( \left( F_{k}\right) _{k=1}^{\infty}\right) )$, and it follows that \[ u:=\sum\limits_{j=1}^{\infty}J_{j}\circ u_{j}\in\Pi_{r}(F;\ell_{2}\left( \left( F_{k}\right) _{k=1}^{\infty}\right) ). \] Since $u_{k}=Q_{k}\circ u$, we thus have \[ k\leq\left\Vert u_{k}\circ A\right\Vert _{(p;p_{1},\ldots,p_{n})}=\left\Vert Q_{k}\circ u\circ A\right\Vert _{(p;p_{1},\ldots,p_{n})}\leq\left\Vert u\circ A\right\Vert _{(p;p_{1},\ldots,p_{n})}, \] a contradiction. \end{proof}
\begin{proposition} \label{pp9}If $A\in\mathcal{L}(E_{1},\ldots,E_{n};F)$ is such that $u\circ A\in\mathcal{L}_{mas(q;p_{1},\ldots,p_{n})}(E_{1},\ldots,E_{n};G)$ for every Banach space $G$ and all $u\in\Pi_{s}\left( F;G\right) ,$ then \[ A\in\Pi_{mx(s,q;p_{1},\ldots,p_{n})}(E_{1},\ldots,E_{n};F). \]
\end{proposition}
\begin{proof} Let $x_{i}^{(j)}\in E_{j}$ with $\left( i,j\right) \in\left\{ 1,\ldots,m\right\} \times\left\{ 1,\ldots,n\right\} $ and let $\varphi_{1},\ldots,\varphi_{k}\in F^{\ast}.$ Consider $S:F\rightarrow\ell_{s}^{k}$ defined by \[ S(y)=\left( \varphi_{j}\left( y\right) \right) _{j=1}^{k}. \] It is not difficult to show that \[ \pi_{s}\left( S\right) \leq\left\Vert \left( \varphi_{j}\right) _{j=1} ^{k}\right\Vert _{s}. \] Since $S\circ A\in\mathcal{L}_{mas(q;p_{1},\ldots,p_{n})}(E_{1},\ldots ,E_{n};\ell_{s}^{k})$, invoking Lemma \ref{jlk} there is a constant $C>0$ such that \begin{align*} & \left( \sum_{j_{1},\ldots,j_{n}=1}^{m}\left( \sum_{j=1}^{k}\left\vert \left\langle \varphi_{j},A(x_{j_{1}}^{(1)},\ldots,x_{j_{n}}^{(n)} )\right\rangle \right\vert ^{s}\right) ^{\frac{q}{s}}\right) ^{\frac{1}{q} }\\ & =\left( \sum_{j_{1},\ldots,j_{n}=1}^{m}\left\Vert S\circ A\left( x_{j_{1}}^{(1)},\ldots,x_{j_{n}}^{(n)}\right) \right\Vert _{s}^{q}\right) ^{\frac{1}{q}}\\ & \leq\left\Vert S\circ A\right\Vert _{(q;p_{1},\ldots,p_{n})} \prod\limits_{j=1}^{n}\left\Vert \left( x_{i}^{(j)}\right) _{i=1} ^{m}\right\Vert _{w,p_{j}}\\ & \leq C\pi_{s}\left( S\right) \prod\limits_{j=1}^{n}\left\Vert \left( x_{i}^{(j)}\right) _{i=1}^{m}\right\Vert _{w,p_{j}}\\ & \leq C\left\Vert \left( \varphi_{j}\right) _{j=1}^{k}\right\Vert _{s}\prod\limits_{j=1}^{n}\left\Vert \left( x_{i}^{(j)}\right) _{i=1} ^{m}\right\Vert _{w,p_{j}}. \end{align*} Hence (\ref{jj}) holds and, by Theorem \ref{criterio}, $A\in\Pi_{mx(s,q;p_{1},\ldots,p_{n})}(E_{1},\ldots,E_{n};F).$
\end{proof}
\begin{proposition} \label{pp99}If $A\in\Pi_{mx(s,q;p_{1},\ldots,p_{n})}(E_{1},\ldots,E_{n};F),$ then \begin{equation} u\circ A\in\Pi_{(q;p_{1},\ldots,p_{n})}(E_{1},\ldots,E_{n};G) \label{ggr} \end{equation} and \begin{equation} \left\Vert u\circ A\right\Vert _{(q;p_{1},\ldots,p_{n})}\leq\pi_{s}\left( u\right) \left\Vert A\right\Vert _{mx(s,q;p_{1},\ldots,p_{n})} \label{ggs} \end{equation} for all $u\in\Pi_{s}\left( F;G\right) .$ \end{proposition}
\begin{proof} Let $x_{i}^{(j)}\in E_{j}$ with $\left( i,j\right) \in\left\{ 1,\ldots,m\right\} \times\left\{ 1,\ldots,n\right\} $ and let $r$ be given by $\frac{1}{r}=\frac{1}{q}-\frac{1}{s}.$ Given $\varepsilon >0$ there are $\tau_{j_{1},\ldots,j_{n}}\in\mathbb{K}$ and $y_{j_{1},\ldots,j_{n}}\in F$ such that \[ A(x_{j_{1}}^{(1)},\ldots,x_{j_{n}}^{(n)})=\tau_{j_{1},\ldots,j_{n}} y_{j_{1},\ldots,j_{n}} \] and \begin{align*} & \left\Vert \left( \tau_{j_{1},\ldots ,j_{n}}\right) _{j_{1},\ldots ,j_{n}=1}^{m}\right\Vert _{r}\left\Vert \left( y_{j_{1},\ldots,j_{n}}\right) _{j_{1},\ldots,j_{n}=1}^{m}\right\Vert _{w,s}\\ & <\left( 1+\varepsilon\right) \left\Vert \left( A(x_{j_{1}}^{(1)} ,\ldots,x_{j_{n}}^{(n)})\right) _{j_{1},\ldots,j_{n}=1}^{m}\right\Vert _{mx\left( s,q\right) }\\ & \leq\left( 1+\varepsilon\right) \left\Vert A\right\Vert _{mx(s,q;p_{1} ,\ldots,p_{n})}\prod\limits_{j=1}^{n}\left\Vert \left( x_{i}^{(j)}\right) _{i=1}^{m}\right\Vert _{w,p_{j}}. \end{align*} Hence, using H\"{o}lder's Inequality we obtain \begin{align*} & \left\Vert \left( u\circ A(x_{j_{1}}^{(1)},\ldots,x_{j_{n}}^{(n)})\right) _{j_{1},\ldots,j_{n}=1}^{m}\right\Vert _{q}\\ & \leq\left\Vert \left( \tau_{j_{1},\ldots,j_{n}}\right) _{j_{1} ,\ldots,j_{n}=1}^{m}\right\Vert _{r}\left\Vert \left( u\left( y_{j_{1} ,\ldots,j_{n}}\right) \right) _{j_{1},\ldots,j_{n}=1}^{m}\right\Vert _{s}\\ & \leq\left( 1+\varepsilon\right) \pi_{s}\left( u\right) \left\Vert A\right\Vert _{mx(s,q;p_{1},\ldots,p_{n})}\prod\limits_{j=1}^{n}\left\Vert \left( x_{i}^{(j)}\right) _{i=1}^{m}\right\Vert _{w,p_{j}} \end{align*} and letting $\varepsilon\rightarrow0$ we get (\ref{ggr}) and (\ref{ggs}). \end{proof}
\subsection{Coherence and compatibility}
The polynomial version of multiple mixing summing operators can be stated by using the symmetric multilinear operator associated with each polynomial:
\begin{definition} Let $0<p\leq s<\infty.$ A polynomial $P\in\mathcal{P}(^{n}E;F)$ is multiple $(s,p)$-mixing summing if $\check{P}$ is multiple $(s,p;p)$-mixing summing. Besides, \[ \left\Vert P\right\Vert _{mx(s,p)}:=\left\Vert \check{P}\right\Vert _{mx(s,p;p)}. \]
\end{definition}
The following proposition, whose proof is standard, shows that, as happens for multiple summing multilinear operators, coincidence results for multiple mixing summing multilinear operators imply coincidence results for smaller degrees:
\begin{proposition} If $\mathcal{L}(E_{1},\ldots,E_{n};F)=\Pi_{mx(s,q;p_{1},\ldots,p_{n})} (E_{1},\ldots,E_{n};F)$ then \[ \mathcal{L}(E_{k_{1}},\ldots,E_{k_{j}};F)=\Pi_{mx(s,q;p_{k_{1}},\ldots ,p_{k_{j}})}(E_{k_{1}},\ldots,E_{k_{j}};F) \] whenever $1\leq j<n$ and $\left\{ k_{1}<\cdots<k_{j}\right\} \subset\left\{ 1,\ldots,n\right\} $. \end{proposition}
Similarly to the previous section, one can show that $\left( \mathcal{P} _{mx(s,p)}^{n},\left\Vert \cdot\right\Vert _{mx(s,p)}\right) _{n=1}^{\infty}$ is coherent and, for each $n$, compatible with the operator ideal $\left( \Pi_{mx(s,p)},\pi_{mx(s,p)}\right) $. For example, we prove item (i) of Definition \ref{IdeaisCoerentes}:
\begin{proposition} If $P\in\mathcal{P}_{mx(s,p)}(^{n}E;F)$ and $a\in E$, then $P_{a} \in\mathcal{P}_{mx(s,p)}(^{n-1}E;F)$ and \[ \left\Vert P_{a}\right\Vert _{mx(s,p)}\leq\left\Vert P\right\Vert _{mx(s,p)}\left\Vert a\right\Vert . \]
\end{proposition}
\begin{proof} Since $\check{P}\in\Pi_{mx(s,p)}(^{n}E;F)$ we have \[ \left( \sum_{j_{1},\ldots,j_{n}=1}^{m}\left( \sum_{j=1}^{k}\left\vert \left\langle \varphi_{j},\check{P}(x_{j_{1}}^{(1)},\ldots,x_{j_{n}} ^{(n)})\right\rangle \right\vert ^{s}\right) ^{\frac{p}{s}}\right) ^{\frac{1}{p}}\leq\sigma\prod_{l=1}^{n}\left\Vert (x_{i}^{(l)})_{i=1} ^{m}\right\Vert _{w,p}\left\Vert (\varphi_{j})_{j=1}^{k}\right\Vert _{s}. \] and by choosing $x_{1}^{(n)}=a$ and $x_{j}^{(n)}=0$ for $j>1$ we have \begin{align*} & \left( \sum_{j_{1},\ldots,j_{n-1}=1}^{m}\left( \sum_{j=1}^{k}\left\vert \left\langle \varphi_{j},\check{P}_{a}(x_{j_{1}}^{(1)},\ldots,x_{j_{n-1} }^{(n-1)})\right\rangle \right\vert ^{s}\right) ^{\frac{p}{s}}\right) ^{\frac{1}{p}}\\ & =\left( \sum_{j_{1},\ldots,j_{n-1}=1}^{m}\left( \sum_{j=1}^{k}\left\vert \left\langle \varphi_{j},\check{P}(x_{j_{1}}^{(1)},\ldots,x_{j_{n-1}} ^{(n-1)},a)\right\rangle \right\vert ^{s}\right) ^{\frac{p}{s}}\right) ^{\frac{1}{p}}\\ & =\left( \sum_{j_{1},\ldots,j_{n}=1}^{m}\left( \sum_{j=1}^{k}\left\vert \left\langle \varphi_{j},\check{P}(x_{j_{1}}^{(1)},\ldots,x_{j_{n}} ^{(n)})\right\rangle \right\vert ^{s}\right) ^{\frac{p}{s}}\right) ^{\frac{1}{p}}\\ & \leq\left\Vert P\right\Vert _{mx(s,p)}\left\Vert a\right\Vert \prod _{l=1}^{n-1}\left\Vert (x_{i}^{(l)})_{i=1}^{m}\right\Vert _{w,p}\left\Vert (\varphi_{j})_{j=1}^{k}\right\Vert _{s}. \end{align*}
\end{proof}
\section{Final comments and directions for further research}
The concepts of multiple mixing summing and multiple $\left( p;q;r_{1} ,\ldots,r_{n}\right) $-summing polynomials/multilinear operators, as natural extensions of the notion of multiple summing multilinear operators, can be further investigated in several directions: coincidence theorems, generalizations to holomorphic mappings, or inclusion theorems, among others.
The study of coincidence theorems may follow the lines of \cite{Na} combined with the results from the respective linear theories; the study of holomorphic mappings may follow \cite{Junek} and for inclusion theorems \cite{davidstudia} is certainly a good source of inspiration.
We encourage the interested reader to investigate other variants of mixing summability and $\left( p;q;r_{1},\ldots,r_{n}\right) $-summability following the lines given in \cite{jgdd, Nach00, MST, parc}.
\end{document} |
\begin{document}
\title{Some examples of BL-algebras using commutative rings} \author{Cristina Flaut and Dana Piciu} \date{} \maketitle
\begin{abstract} BL-algebras are algebraic structures corresponding to Hajek's basic fuzzy logic.
The aim of this paper is to analyze the structure of BL-algebras using commutative rings. For computational reasons, we are especially interested in the finite case. We present new ways to generate finite BL-algebras using commutative rings and we give summarizing statistics.
Furthermore, we investigate BL-rings, i.e., commutative rings whose lattice of ideals can be equipped with a structure of BL-algebra. A new characterization for these rings and their connections to other classes of rings are established. Also, we give examples of finite BL-rings whose lattice of ideals is not an MV-algebra, and using these rings we construct BL-algebras with $2^{r}+1$ elements, $r\geq 2,$ and all BL-chains with $k$ elements, $k\geq 4.$
\textbf{Keywords:} commutative ring, BL-ring, ideal, residuated lattice, MV-algebra, BL-algebra.
\textbf{AMS Subject Classification 2010:} 03G10, 03G25, 06A06, 06D05, 08C05, 06F35. \end{abstract}
\section{\textbf{Introduction}}
The origin of residuated lattices is in Mathematical Logic. They were introduced by Dilworth and Ward in the papers \cite{[Di; 38]}, \cite{[WD; 39]}.
The study of residuated lattices originated in the 1930s in the context of ring theory, with the study of ring ideals. It is known that the lattice of ideals of a commutative ring is a residuated lattice, see \cite{[Bl; 53]}. Several researchers (\cite{[Bl; 53]}, \cite{[BN; 09]}, \cite{[LN; 18]}, \cite{[TT; 22]}, etc.) have been interested in this construction.
Two important subvarieties of residuated lattices are BL-algebras (corresponding to Hajek's logic, see \cite{[H; 98]}) and MV-algebras (corresponding to \L ukasiewicz many-valued logic, see \cite{[COM; 00]}, \cite{[CHA; 58]}). Rings for which the lattice of ideals is a BL-algebra are called BL-rings; they were introduced in \cite{[LN; 18]}.
In this paper, we obtain a description of BL-rings, using a new characterization of BL-algebras, given in Theorem \ref{Theorem_1}, i.e., residuated lattices $L$ in which $[x\odot (x\rightarrow y)]\rightarrow z=(x\rightarrow z)\vee (y\rightarrow z)$ for every $x,y,z\in L.$ Then, BL-rings are unitary and commutative rings $A$ with the property that $(K:[I\otimes (J:I)])=(K:I)+(K:J),$ \textit{for every }$I,J,K\in Id(A),$ see Corollary \ref{Corollary 3.6}.
Also, we show that the class of BL-rings contains other known classes of commutative rings: rings which are principal ideal domains and some types of finite unitary commutative rings, see Theorem \ref{Theorem_2}, Corollary \ref {Corollary_2_1} and Corollary \ref{Corollary _3_4}.
One of the recent applications of BCK-algebras is in Coding Theory. For computational reasons, we are interested in this paper in finding ways to generate finite BL-algebras using finite commutative rings, since a computationally tractable solution is to consider algebras with a small number of elements. First, we give examples of finite BL-rings whose lattice of ideals is not an MV-algebra. Using these rings we construct BL-algebras with $2^{r}+1$ elements, $r\geq 2$ (see Proposition \ref{Proposition10}) and BL-chains with $k\geq 4$ elements (see Proposition \ref{Proposition11}). Also, in Theorem \ref{Theorem_4}, we present a way to generate all (up to an isomorphism) finite BL-algebras with $2\leq n\leq 5$ elements, by using the ordinal product of residuated lattices, and we present summarizing statistics.
\section{\textbf{Preliminaries}}
\begin{definition} \label{Definition_1} (\cite{[Di; 38]}, \cite{[WD; 39]})\textbf{\ \ }A \emph{ (commutative) residuated lattice} \ is an algebra $(L,\wedge ,\vee ,\odot ,\rightarrow ,0,1)$ such that:
(LR1) $\ (L,\wedge ,\vee ,0,1)$ is a bounded lattice;
(LR2) $\ \ (L,\odot ,1)$ is a commutative ordered monoid;
(LR3) $\ z\leq x\rightarrow y$\ iff $x\odot z\leq y,$\ for all $x,y,z\in L.$ \end{definition}
The property (LR3) is called\emph{\ residuation}, where $\leq $ is the partial order of the lattice $\ (L,\wedge ,\vee ,0,1).$
In a residuated lattice an additional operation is defined: for $x\in L,$ we denote $x^{\ast }=x\rightarrow 0.$
\begin{example} \label{Example_1} (\cite{[T; 99]}) Let $(\mathcal{B},\wedge ,\vee ,^{\prime },0,1)$ be a \emph{Boolean algebra.} \ If we define for every $x,y\in \mathcal{B},x\odot y=x\wedge y$ and $x\rightarrow y=x^{\prime }\vee y,$ then $(\mathcal{B},\wedge ,\vee ,\odot ,\rightarrow ,0,1)$ becomes a residuated lattice. \end{example}
\begin{example} \label{Example_2} It is known that, for a commutative unitary ring $A,$ if we denote by $Id\left( A\right) $ the set of all ideals, then for $I,J\in Id\left( A\right) $, the following sets \begin{equation*} I+J=<I\cup J>=\{i+j,\ i\in I,j\in J\}\text{, } \end{equation*} \begin{equation*} I\otimes J=\{\underset{k=1}{\overset{n}{\sum }}i_{k}j_{k},\text{ }i_{k}\in I,j_{k}\in J\}\text{, } \end{equation*} \begin{equation*} \left( I:J\right) =\{x\in A,x\cdot J\subseteq I\}\text{,} \end{equation*} \begin{equation*} Ann\left( I\right) =\left( \mathbf{0}:I\right) \text{, where }\mathbf{0}=<0>, \text{ } \end{equation*} are also ideals of $A,$ called sum, product, quotient and annihilator, see \cite{[BP; 02]}. With these notations, $(Id(A),\cap ,+,\otimes ,\rightarrow ,0=\{0\},1=A)$ is a residuated lattice in which the order relation is $\subseteq $ and $I\rightarrow J=(J:I),$ for every $I,J\in Id(A)$, see \cite{[TT; 22]}. \end{example}
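The residuated-lattice structure in Example \ref{Example_2} can be checked by brute force for a small ring. The following Python sketch (the helper names \texttt{ideal}, \texttt{prod} and \texttt{quot} are ours, introduced only for illustration) enumerates the ideals of $\mathbb{Z}_{12}$ as subsets and verifies the residuation property (LR3), with $I\rightarrow J=(J:I)$:

```python
# Brute-force check of Example 2 for A = Z_12 (illustrative sketch;
# the helper names below are ours, not notation from the paper).
n = 12
R = range(n)

def ideal(gens):
    """Smallest subset of Z_n containing gens, closed under + and ring multiples."""
    s = {0} | {(g * r) % n for g in gens for r in R}
    while True:
        t = {(a + b) % n for a in s for b in s}  # close under addition
        if t == s:
            return frozenset(s)
        s = t

ideals = {ideal([d]) for d in R}          # Z_n is principal: every ideal is (d)

def prod(I, J):                           # I (x) J: ideal generated by products
    return ideal([(i * j) % n for i in I for j in J])

def quot(I, J):                           # (I : J) = {x : x.J is a subset of I}
    return frozenset(x for x in R if all((x * j) % n in I for j in J))

# Residuation (LR3) with I -> J = (J : I):  K <= (J : I)  iff  I (x) K <= J.
assert all((K <= quot(J, I)) == (prod(I, K) <= J)
           for I in ideals for J in ideals for K in ideals)
print(len(ideals))  # 6 ideals: one (d) per divisor d of 12
```

Since $\mathbb{Z}_{12}$ is principal, the six ideals found are exactly $(d)$ for the divisors $d$ of $12$.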
In a residuated lattice $(L,\wedge ,\vee ,\odot ,\rightarrow ,0,1)$ we consider the following identities:
\begin{equation*} (prel)\qquad (x\rightarrow y)\vee (y\rightarrow x)=1\qquad \text{ (\textit{ prelinearity)}}; \end{equation*} \begin{equation*} (div)\qquad x\odot (x\rightarrow y)=x\wedge y\qquad \text{ (\textit{ divisibility)}}. \end{equation*}
\begin{definition} \label{Definition_2} (\cite{[NL; 03]}, \cite{[I; 09]}, \cite{[T; 99]}) \ A residuated lattice $L$ is called \emph{a BL-algebra}\textit{\ }if $L$ satisfies both the $(prel)$ and the $(div)$ condition. \end{definition}
A \emph{BL-chain }is a totally ordered BL-algebra, i.e., a BL-algebra such that its lattice order is total.
\begin{definition} \label{Definition_3} (\cite{[COM; 00]}, \cite{[CHA; 58]}) An \emph{MV-algebra } is an algebra $\left( L,\oplus ,^{\ast },0\right) $ satisfying the following axioms:
(MV1) $\left( L,\oplus ,0\right) $ \ is an abelian monoid;
(MV2) $(x^{\ast })^{\ast }=x;$
(MV3) $x\oplus 0^{\ast }=0^{\ast };$
(MV4) $\left( x^{\ast }\oplus y\right) ^{\ast }\oplus y=$ $\left( y^{\ast }\oplus x\right) ^{\ast }\oplus x$, for all $x,y\in L.$ \end{definition}
In fact, a residuated lattice $L$ is an MV-algebra iff it satisfies the additional condition: \begin{equation*} (x\rightarrow y)\rightarrow y=(y\rightarrow x)\rightarrow x, \end{equation*} for every $x,y\in L,$ see \cite{[T; 99]}.
\begin{remark} \label{Remark_1} (\cite{[T; 99]}) \ If in a BL-algebra $L$ we have $x^{\ast \ast }=x,$ for every $x\in L$, and for $x,y\in L$ we denote \begin{equation*} x\oplus y=(x^{\ast }\odot y^{\ast })^{\ast }, \end{equation*} then we obtain an MV-algebra $(L,\oplus ,^{\ast },0).$ Conversely, if $(L,\oplus ,^{\ast },0)$ is an MV\textit{-algebra}, then $(L,\wedge ,\vee ,\odot ,\rightarrow ,0,1)$ becomes a BL-algebra, in which for $x,y\in L:$ \begin{equation*} x\odot y=(x^{\ast }\oplus y^{\ast })^{\ast }, \end{equation*} \begin{equation*} x\rightarrow y=x^{\ast }\oplus y,1=0^{\ast }, \end{equation*} \begin{equation*} x\vee y=(x\rightarrow y)\rightarrow y=(y\rightarrow x)\rightarrow x\text{ and }x\wedge y=(x^{\ast }\vee y^{\ast })^{\ast }. \end{equation*} In fact, MV-algebras are exactly the involutive BL-algebras. \end{remark}
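As a finite sanity check of this characterization, consider the \L ukasiewicz chain on $\{0,1,\ldots ,m-1\}$, with $x\odot y=\max (0,x+y-(m-1))$ and $x\rightarrow y=\min (m-1,m-1-x+y)$; this is a standard MV-chain, and the Python sketch below (our encoding, not notation from the paper) confirms the MV condition and the derived join formula $x\vee y=(x\rightarrow y)\rightarrow y$:

```python
# MV condition on the m-element Lukasiewicz chain (illustrative sketch).
m = 5
top = m - 1
C = range(m)

def odot(x, y): return max(0, x + y - top)   # Lukasiewicz t-norm
def imp(x, y): return min(top, top - x + y)  # its residuum

# residuation: z <= imp(x, y)  iff  odot(x, z) <= y
assert all((z <= imp(x, y)) == (odot(x, z) <= y)
           for x in C for y in C for z in C)
# the MV condition, and the derived join formula x v y = (x -> y) -> y
assert all(imp(imp(x, y), y) == imp(imp(y, x), x) == max(x, y)
           for x in C for y in C)
print("MV condition holds on the", m, "element chain")
```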
\begin{example} \label{Example_3}\emph{(}\cite{[I; 09]}\emph{) }\textit{\ }We give an example of a finite BL-algebra which is not an MV-algebra. Let $ L=\{0,a,b,c,1\}$ and define on $L$ the following operations: \begin{equation*}
\begin{array}{c|ccccc} \rightarrow & 0 & c & a & b & 1 \\ \hline 0 & 1 & 1 & 1 & 1 & 1 \\ c & 0 & 1 & 1 & 1 & 1 \\ a & 0 & b & 1 & b & 1 \\ b & 0 & a & a & 1 & 1 \\ 1 & 0 & c & a & b & 1 \end{array} ,\hspace{5mm}
\begin{array}{c|ccccc} \odot & 0 & c & a & b & 1 \\ \hline 0 & 0 & 0 & 0 & 0 & 0 \\ c & 0 & c & c & c & c \\ a & 0 & c & a & c & a \\ b & 0 & c & c & b & b \\ 1 & 0 & c & a & b & 1 \end{array} . \end{equation*} \end{example}
We have $0\leq c\leq a,b\leq 1,$ but $a$ and $b$ are incomparable, hence $L$ is a BL-algebra which is not a chain. We remark that $x^{\ast \ast }=1,$ for every $x\in L,x\neq 0,$ so the involution condition $x^{\ast \ast }=x$ fails and $L$ is not an MV-algebra.
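Because Example \ref{Example_3} is given by finite tables, its claims can be verified mechanically. In the Python sketch below (the encoding and helper names are ours), the order is recovered from $x\leq y$ iff $x\rightarrow y=1$, and the checks confirm $(prel)$, $(div)$, and the failure of the MV condition $(x\rightarrow y)\rightarrow y=(y\rightarrow x)\rightarrow x$ at $(x,y)=(c,0)$:

```python
# Mechanical check of Example 3 (encoding and helper names are ours).
E = ['0', 'c', 'a', 'b', '1']
def table(rows):
    return {(x, y): rows[i][j] for i, x in enumerate(E) for j, y in enumerate(E)}

IMP = table([['1', '1', '1', '1', '1'],     # rows/columns in the order 0, c, a, b, 1
             ['0', '1', '1', '1', '1'],
             ['0', 'b', '1', 'b', '1'],
             ['0', 'a', 'a', '1', '1'],
             ['0', 'c', 'a', 'b', '1']])
ODOT = table([['0', '0', '0', '0', '0'],
              ['0', 'c', 'c', 'c', 'c'],
              ['0', 'c', 'a', 'c', 'a'],
              ['0', 'c', 'c', 'b', 'b'],
              ['0', 'c', 'a', 'b', '1']])

def le(x, y): return IMP[(x, y)] == '1'     # order: x <= y iff x -> y = 1
def join(x, y):
    ub = [z for z in E if le(x, z) and le(y, z)]
    return next(z for z in ub if all(le(z, w) for w in ub))
def meet(x, y):
    lb = [z for z in E if le(z, x) and le(z, y)]
    return next(z for z in lb if all(le(w, z) for w in lb))

assert all(join(IMP[(x, y)], IMP[(y, x)]) == '1' for x in E for y in E)  # (prel)
assert all(ODOT[(x, IMP[(x, y)])] == meet(x, y) for x in E for y in E)   # (div)
# the MV condition fails, e.g. at (x, y) = ('c', '0'):
assert IMP[(IMP[('c', '0')], '0')] != IMP[(IMP[('0', 'c')], 'c')]
```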
\section{BL-rings}
It is known that if $A$ is a commutative unitary ring, then $(Id(A),\cap ,+,\otimes ,\rightarrow ,0=\{0\},1=A)$ is a residuated lattice, see \cite{[TT; 22]}.
\begin{definition} \label{Definition_4} (\cite{[LN; 18]}) \ A commutative ring whose lattice of ideals is a BL-algebra is called \emph{a BL-ring.} \end{definition}
BL-rings are closed under finite direct products, arbitrary direct sums and homomorphic images, see \cite{[LN; 18]}.
In the following, using the connections between BL-algebras and BL-rings, we give a new characterization of commutative and unitary rings for which the lattice of ideals is a BL-algebra.
\begin{proposition} \label{Proposition_3} (\cite{[I; 09]}) \textit{\ Let }$(L,\vee ,\wedge ,\odot ,\rightarrow ,0,1)$\textit{\ be a residuated lattice. Then we have the equivalences:}
\textit{(i) }$L\ $satisfies $(prel)$\textit{\ condition;}
\textit{(ii) }$\ \ (x\wedge y)\rightarrow z=(x\rightarrow z)\vee (y\rightarrow z),$ \textit{for every \thinspace }$x,y,z\in L.$ \end{proposition}
\begin{lemma} \label{Lemma_0} \textit{\ Let }$(L,\vee ,\wedge ,\odot ,\rightarrow ,0,1)$ \textit{\ be a residuated lattice. The following assertions are equivalent:}
\textit{(i) }$L$ satisfies $(prel)$\textit{\ condition;}
\textit{(ii) For every }$x,y,z\in L,$ if $x\wedge y\leq z,$ then $(x\rightarrow z)\vee (y\rightarrow z)=1.$\textit{\ } \end{lemma}
\begin{proof} $(i)\Rightarrow (ii).$ Follows from Proposition \ref{Proposition_3}.
$(ii)\Rightarrow (i).$ $\ $Using (ii), for $z=$ $x\wedge y$ we deduce that $ 1=(x\rightarrow (x\wedge y))\vee (y\rightarrow (x\wedge y))=[(x\rightarrow x)\wedge (x\rightarrow y)]\vee \lbrack (y\rightarrow x)\wedge (y\rightarrow y)]$ $=(x\rightarrow y)\vee (y\rightarrow x),$ so, $L$ satisfies $(prel)$ \textit{\ condition.} \end{proof}
\begin{lemma} \label{Lemma_1} \textit{\ Let }$(L,\vee ,\wedge ,\odot ,\rightarrow ,0,1)$ \textit{\ be a residuated lattice. The following assertions are equivalent:}
\textit{(i) }$L$ satisfies $(div)$\textit{\ condition;}
\textit{(ii) For every }$x,y,z\in L,$ if $x\odot (x\rightarrow y)\leq z,$ then $x\wedge y\leq z.$\textit{\ } \end{lemma}
\begin{proof} $(i)\Rightarrow (ii).$ Obviously.
$(ii)\Rightarrow (i).$ $\ $Using (ii), for $z=$ $x\odot (x\rightarrow y)$ we deduce that $x\wedge y\leq x\odot (x\rightarrow y).$ Since in a residuated lattice, $x\odot (x\rightarrow y)\leq x\wedge y$ we deduce that $L$ satisfies $(div)$\textit{\ condition.} \end{proof}
Using Lemma \ref{Lemma_0} and Lemma \ref{Lemma_1} we deduce that:
\begin{proposition} \label{Proposition_4} \textit{Let }$(L,\vee ,\wedge ,\odot ,\rightarrow ,0,1) $\textit{\ be a residuated lattice. The following assertions are equivalent:}
(i) $L$ is a BL-algebra;
(ii) For every $x,y,z\in L,$ if $x\odot (x\rightarrow y)\leq z,$ then $ (x\rightarrow z)\vee (y\rightarrow z)=1;$
(iii) $[x\odot (x\rightarrow y)]\rightarrow z=(x\rightarrow z)\vee (y\rightarrow z),$ for every $x,y,z\in L.$ \end{proposition}
\begin{proof} $(i)\Rightarrow (ii).$ Let $x,y,z\in L$ such that $x\odot (x\rightarrow y)\leq z.$ Since every BL-algebra satisfies $(div)$\textit{\ condition, }by Lemma \ref{Lemma_1}, we deduce that $x\wedge y\leq z.$ Since every BL-algebra satisfies $(prel)$\textit{\ condition, }following Lemma \ref {Lemma_0}, we deduce that $1=(x\rightarrow z)\vee (y\rightarrow z).$
$(ii)\Rightarrow (i).$ First, we prove that $L$ satisfies condition (ii) from Lemma \ref{Lemma_0}. So, let $x,y,z\in L$ such that $x\wedge y\leq z.$ Thus, $(x\wedge y)\rightarrow z=1.$ Since $x\odot (x\rightarrow y)\leq x\wedge y,$ we deduce that $1=(x\wedge y)\rightarrow z\leq (x\odot (x\rightarrow y))\rightarrow z.$ Then, $x\odot (x\rightarrow y)\leq z.$ By hypothesis, $(x\rightarrow z)\vee (y\rightarrow z)=1.$
To prove that $L$ verifies condition $(ii)$ from Lemma \ref{Lemma_1}, let $ x,y,z\in L$ such that $x\odot (x\rightarrow y)\leq z.$ By hypothesis, we deduce that, $(x\rightarrow z)\vee (y\rightarrow z)=1.$ Since $(x\rightarrow z)\vee (y\rightarrow z)\leq (x\wedge y)\rightarrow z$ we obtain $(x\wedge y)\rightarrow z=1,$ that is, $x\wedge y\leq z.$\textit{\ }
$(iii)\Rightarrow (ii).$ Obviously.
$(ii)\Rightarrow (iii).$ If we denote $t=[x\odot (x\rightarrow y)]\rightarrow z,$ we have $1=t\rightarrow t=t\rightarrow \lbrack (x\odot (x\rightarrow y))\rightarrow z]=[x\odot (x\rightarrow y)]\rightarrow (t\rightarrow z)$, hence, $x\odot (x\rightarrow y)\leq t\rightarrow z.$
By hypothesis, we deduce that, $(x\rightarrow (t\rightarrow z))\vee (y\rightarrow (t\rightarrow z))=1.$
Then $1=(t\rightarrow (x\rightarrow z))\vee (t\rightarrow (y\rightarrow z))\leq t\rightarrow \lbrack (x\rightarrow z)\vee (y\rightarrow z)].$ Thus, $ t\leq (x\rightarrow z)\vee (y\rightarrow z).$
But $(x\rightarrow z)\vee (y\rightarrow z)\leq (x\wedge y)\rightarrow z\leq \lbrack x\odot (x\rightarrow y)]\rightarrow z=t.$
We conclude that $t=(x\rightarrow z)\vee (y\rightarrow z),$ that is, $ [x\odot (x\rightarrow y)]\rightarrow z=(x\rightarrow z)\vee (y\rightarrow z), $ for every $x,y,z\in L.$ \end{proof}
Using Proposition \ref{Proposition_4} we obtain a new characterization of BL-algebras:
\begin{theorem} \label{Theorem_1}\textit{A residuated lattice }$L$ is a BL-algebra if and only if for every $x,y,z\in L,$ \begin{equation*} \lbrack x\odot (x\rightarrow y)]\rightarrow z=(x\rightarrow z)\vee (y\rightarrow z). \end{equation*} \end{theorem}
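The identity of Theorem \ref{Theorem_1} can be spot-checked on a standard finite BL-chain, namely the G\"{o}del chain $0<1<\cdots <m-1$ with $x\odot y=\min (x,y)$ and $x\rightarrow y$ equal to the top element if $x\leq y$ and to $y$ otherwise. A short Python sketch (our encoding, for illustration only):

```python
# Spot-check of [x * (x -> y)] -> z = (x -> z) v (y -> z)
# on a finite Godel chain (illustrative sketch).
m = 7
top = m - 1
C = range(m)

def imp(x, y): return top if x <= y else y   # Godel implication
# on a chain, odot is min and the join is max
assert all(imp(min(x, imp(x, y)), z) == max(imp(x, z), imp(y, z))
           for x in C for y in C for z in C)
print("identity holds for all", m ** 3, "triples")
```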
Using this result, we can give a new description for BL-rings:
\begin{corollary} \label{Corollary 3.6}\textit{\ Let }$A$\textit{\ be a commutative and unitary ring. The following assertions are equivalent:}
\textit{(i) }$\ A$ \textit{is a BL-ring;}
\textit{(ii) \ }$(K:[I\otimes (J:I)])=(K:I)+(K:J),$ \textit{for every }$I,J,K\in Id(A).$ \end{corollary}
\begin{theorem} \label{Theorem_2}\textit{Let }$A$\textit{\ be a commutative ring which is a principal ideal domain. Then} $A$\textit{\ is a BL-ring.} \end{theorem}
\begin{proof} Since $A$ is a principal ideal domain, let $I=<a>$ and $J=<b>$ be the principal non-zero ideals generated by $a,b\in A\backslash \{0\}$.
If $d=$\textit{gcd}$\{a,b\}$, then $d=a\cdot \alpha +b\cdot \beta ,$ for some $\alpha ,\beta \in A$, $a=a_{1}d$ and $b=b_{1}d$, with $1=$\textit{gcd}$\{a_{1},b_{1}\}$. Thus, $I+J=<d>,$ $\ I\cap J=<ab/d>,$ $I\otimes J=<ab>$ and $\left( I:J\right) =<a_{1}>.$
The condition $(prel)$ is satisfied: $(I:J)+(J:I)=<a_{1}>+<b_{1}>=<1>=A$ and $(div)$ is also satisfied: $J\otimes (I:J)=<b>\otimes <a_{1}>=<ab/d>=I\cap J$.
If $I=\{0\}$, since $A$ is an integral domain, we have that $\left( \mathbf{0 }:J\right) +(J:\mathbf{0})=Ann(J)+A=A$ and $J\otimes (\mathbf{0}:J)=J\otimes Ann(J)=\mathbf{0=0}\cap J=$ $\mathbf{0}\otimes (J:\mathbf{0}),$ for every $ J\in Id\left( A\right) \backslash \{0\}.$
Moreover, we remark that $\left( I:\left( I:J\right) \right) =\left( J:\left( J:I\right) \right) =I+J$, for every non-zero ideals \textit{\ }$ I,J\in Id(A).$ Also, since $A$ is an integral domain, we obtain $ Ann(Ann(I))=A,$ for every $I\in Id\left( A\right) \backslash \{0\}.$ We conclude that $Id(A)$ is a BL-algebra which is not an MV-algebra. \end{proof}
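For $A=\mathbb{Z}$, the ideal computations in the proof above reduce to divisor arithmetic on positive generators: $<a>+<b>=<\gcd (a,b)>$, $<a>\cap <b>=<\mathrm{lcm}(a,b)>$, $<a>\otimes <b>=<ab>$ and $(<a>:<b>)=<a/\gcd (a,b)>$. The Python sketch below (our encoding of non-zero ideals by positive generators) checks $(prel)$ and $(div)$ over a range of generator pairs:

```python
from math import gcd

# Non-zero ideals of Z encoded by positive generators (illustrative sketch):
# (a)+(b) = (gcd), (a) n (b) = (lcm), (a)(x)(b) = (ab), ((a):(b)) = (a/gcd(a,b)).
def quot(a, b): return a // gcd(a, b)

for a in range(1, 40):
    for b in range(1, 40):
        # (prel): ((a):(b)) + ((b):(a)) = Z, i.e. the two generators are coprime
        assert gcd(quot(a, b), quot(b, a)) == 1
        # (div): (b) (x) ((a):(b)) = (a) n (b) = (lcm(a, b))
        assert b * quot(a, b) == a * b // gcd(a, b)
print("(prel) and (div) verified on all pairs")
```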
\begin{corollary} \label{Corollary_2_1}\textit{A ring factor of a principal ideal domain is a BL-ring. } \end{corollary}
\begin{proof} Using Theorem \ref{Theorem_2} and the fact that BL-rings are closed under homomorphic images, see \cite{[LN; 18]}. Moreover, we remark that, in fact, if $\mathcal{A}$ \textit{is a ring factor of a principal ideal domain, then }$(Id(\mathcal{A}),\cap ,+,\otimes ,\rightarrow ,0=\{0\},1=\mathcal{A})$ \textit{\ }is an MV-algebra, see \cite{[FP; 22]}. \end{proof}
\begin{corollary} \label{Corollary _3_4} \textit{\ A finite commutative unitary ring of the form }$A=\mathbb{Z}_{k_{1}}\times \mathbb{Z}_{k_{2}}\times ...\times \mathbb{Z}_{k_{r}}$ (direct product of rings, equipped with componentwise operations), \textit{where} $k_{i}=p_{i}^{\alpha _{i}}$, $p_{i}$ \textit{is a prime number, with }$p_{i}\neq p_{j}$ for $i\neq j,$\textit{\ is a BL-ring.} \end{corollary}
\begin{proof} We apply Corollary \ref{Corollary_2_1} using the fact that BL-rings are closed under finite direct products, see \cite{[LN; 18]}.
Moreover, we remark that if $\ A$ is a finite commutative unitary ring of the above form, then $Id\left( A\right) =$ $Id(\mathbb{Z}_{k_{1}})\times Id( \mathbb{Z}_{k_{2}})\times ...\times Id(\mathbb{Z}_{k_{r}})$\ is an MV-algebra $(Id\left( A\right) ,\oplus ,^{\ast },0=\{0\})$ in which \begin{equation*} I\oplus J=Ann(Ann(I)\otimes Ann(J))\text{ and }I^{\ast }=Ann(I) \end{equation*} for every $I,J\in Id\left( A\right) $, since, $Ann(Ann(I))=I,$ see \cite {[FP; 22]}. \end{proof}
\begin{example} \label{Remark_2} 1) Following Theorem \ref{Theorem_2}, the ring of integers $(\mathbb{Z},\mathbb{+},\cdot )$ is a BL-ring in which $(Id(\mathbb{Z}),\cap ,+,\otimes ,\rightarrow ,0=\{0\},1=\mathbb{Z})$ is not an MV-algebra. Indeed, since $\mathbb{Z}$ is a principal ideal domain, we have $Ann\left( Ann\left( I\right) \right) =\mathbb{Z}$, for every $I\in Id\left( \mathbb{Z}\right) \backslash \{0\}.$
2) Let $K$ be a field and $K\left[ X\right] $ be the polynomial ring. For $ f\in K\left[ X\right] $, the quotient ring $A=K\left[ X\right] /\left( f\right) $ is a BL-ring. Indeed, the lattice of ideals of this\ ring is an MV-algebra, see \cite{[FP; 22]}. \end{example}
\section{Examples of BL-algebras using commutative rings}
In this section we present ways to generate finite\ BL-algebras using finite commutative rings.
First, we give examples of finite BL-rings whose lattice of ideals is not an MV-algebra. Using these rings we construct BL-algebras with $2^{r}+1$ elements, $r\geq 2$ (see Proposition \ref{Proposition10}) and BL-chains with $k\geq 4$ elements (see Proposition \ref{Proposition11}).
We recall that, in \cite{[FP; 22]}, we proved the following result:
\begin{proposition} \label{Proposition1} (\cite{[FP; 22]}) \textit{If} $A$ \textit{is a finite commutative unitary ring of the form }$A=\mathbb{Z}_{k_{1}}\times \mathbb{Z}_{k_{2}}\times ...\times \mathbb{Z}_{k_{r}}$ (direct product of rings, equipped with componentwise operations), \textit{where} $k_{i}=p_{i}^{\alpha _{i}}$, $p_{i}$ \textit{is a prime number, with }$p_{i}\neq p_{j}$ for $i\neq j,$ \textit{and} $Id\left( A\right) $ \textit{denotes the set of all ideals of the ring} $A$, \textit{then} $\left( Id\left( A\right) ,\vee ,\wedge ,\odot ,\rightarrow ,0,1\right) $ \textit{is an MV-algebra, where the order relation is} $\subseteq $, $I\odot J=I\otimes J$, $I^{\ast }=Ann(I)$, $I\rightarrow J=\left( J:I\right) $, $I\vee J=I+J$, $I\wedge J=I\cap J$, $0=\{0\}$ \textit{and} $1=A$. \textit{The set} $Id\left( A\right) $ \textit{has} $\mathcal{N}_{A}=\overset{r}{\underset{i=1}{\prod }}\left( \alpha _{i}+1\right) $ \textit{elements}. \end{proposition}
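Proposition \ref{Proposition1} can be illustrated for $A=\mathbb{Z}_{12}=\mathbb{Z}_{2^{2}\cdot 3}$: the brute-force Python sketch below (helper names are ours, for illustration) finds the $\mathcal{N}_{A}=(2+1)(1+1)=6$ ideals and checks the involution $Ann(Ann(I))=I$:

```python
# Illustration of Proposition 1 for A = Z_12 (helper names are ours).
n = 12                       # 12 = 2^2 * 3, so N_A = (2+1)*(1+1) = 6 ideals
R = range(n)

def ideal(d):                # Z_n is principal: every ideal is (d)
    return frozenset((d * r) % n for r in R)

ideals = {ideal(d) for d in R}

def ann(I):                  # Ann(I) = (0 : I)
    return frozenset(x for x in R if all((x * i) % n == 0 for i in I))

assert len(ideals) == 6
assert all(ann(ann(I)) == I for I in ideals)   # I** = I: an MV-algebra
print("Id(Z_12) is involutive with", len(ideals), "elements")
```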
In the following, we give examples of finite BL-rings whose lattice of ideals is not an MV-algebra.
\begin{definition} \label{Definition2} (\cite{[BP; 02]}) Let $R$ be a commutative unitary ring. An ideal $M$ of the ring $R$ is \textit{maximal} if it is maximal, with respect to set inclusion, amongst all proper ideals of the ring $R$. That means, there is no proper ideal of $R$, other than $M$ itself, containing $M$. An ideal $J$ of the ring $R$ is a \textit{minimal ideal} if it is a nonzero ideal which contains no other nonzero ideals. A commutative \textit{local ring} $R$ is a ring with a unique maximal ideal. \end{definition}
\begin{example} \label{Example3} (i) A field $F$ is a local ring, with $\{0\}$ the maximal ideal in this ring.
(ii) In $\left( \mathbb{Z}_{8},+,\cdot \right) $, the ideal $J=\{\widehat{0}, \widehat{4}\}$ is a minimal ideal and the ideal $M=\{\widehat{0},\widehat{2}, \widehat{4},\widehat{6}\}$ is the maximal ideal. \end{example}
\begin{remark} \label{Remark4} Let $R$ be a local ring with maximal ideal $M$. Then, the quotient ring $R[X]/(X^{n})$, with $n$ a positive integer, is local. Indeed, the unique maximal ideal of the ring $R[X]/(X^{n})$ is $\overrightarrow{M}=\{\overrightarrow{f}\in R[X]/(X^{n})\mid f\in R[X],f=a_{0}+a_{1}X+...+a_{n-1}X^{n-1}$, with $a_{0}\in M\}$. For other details, the reader is referred to \cite{[La; 01]}. \end{remark}
In the following, we consider the ring $(\mathbb{Z}_{n},+,\cdot )$ with $ n=p_{1}p_{2}...p_{r},p_{1},p_{2},...,p_{r}$ being distinct prime numbers, $ r\geq 2,$ and the factor ring $R=\mathbb{Z}_{n}[X]/\left( X^{2}\right) .$
\begin{remark} \label{Remark5} (i)\textbf{\ }With the above notations, in the ring $(\mathbb{Z}_{n},+,\cdot )$, the ideals $M_{p_{i}}=\left( \widehat{p}_{i}\right) $ generated by the $\widehat{p}_{i}$ are maximal. The ideals of $\mathbb{Z}_{n}$ are of the form $I_{d}=\left( \widehat{d}\right) $, where $d$ is a divisor of $n$.
(ii) Each element of $\mathbb{Z}_{n}-\{M_{p_{1}}\cup M_{p_{2}}\cup ...\cup M_{p_{r}}\}$ is invertible. Indeed, if $\widehat{x}\in \mathbb{Z}_{n}-\{M_{p_{1}}\cup M_{p_{2}}\cup ...\cup M_{p_{r}}\}$, we have \textit{gcd}\thinspace $\{x,n\}=1$, therefore $\widehat{x}$ is an invertible element. \end{remark}
\begin{proposition} \label{Proposition6} \textit{i)} \textit{With the above notations, the factor ring} $R=\mathbb{Z}_{n}[X]/\left( X^{2}\right) $ \textit{has} $2^{r}+1 $ \textit{ideals, including} $\{0\}$ \textit{and} $R$.
\textit{ii) For} $\widehat{\gamma }\in \mathbb{Z}_{n}-\{M_{p_{1}}\cup M_{p_{2}}\cup ...\cup M_{p_{r}}\}$\textit{, the element} $X+\widehat{\gamma }$ \textit{is an invertible element in} $R$. \end{proposition}
\begin{proof} (i) Indeed, the ideals are: $J_{p_{i}}=\left( \widehat{\alpha }X+\widehat{\alpha }_{i}\right) ,\widehat{\alpha }_{i}\in M_{p\,_{i}},i\in \{1,2,...,r\},$ which are maximal, $J_{d}=\left( \widehat{\beta }X+\widehat{\beta }_{d}\right) ,\widehat{\beta }_{d}\in I_{d},I_{d}$ not maximal, $\widehat{\alpha },\widehat{\beta }\in R,d\neq n$, where $d$ is a proper divisor of $n$, the ideal $\left( X\right) $, for $d=n$, and $\left( 0\right) $. Therefore, we have $\binom{r}{0}$ ideals for the ideal $\left( X\right) $, $\binom{r}{1}$ ideals $J_{p_{i}}$, $\binom{r}{2}$ ideals $J_{p_{i}p_{j}}$, $p_{i}\neq p_{j}$,..., $\binom{r}{r}$ ideals for the ideal $R$, for $d=1$, resulting in a total of $2^{r}+1,$ if we add the ideal $\left( 0\right) $.
(ii) Since $\widehat{\gamma }\in \mathbb{Z}_{n}-\{M_{p_{1}}\cup M_{p_{2}}\cup ...\cup M_{p_{r}}\}$, we have that $\widehat{\gamma }$ is invertible, with inverse $\widehat{\delta }$. Therefore, $\left( X+\widehat{\gamma }\right) (-\widehat{\delta }^{2}X+\widehat{\delta })=-\widehat{\delta }^{2}X^{2}+(\widehat{\delta }-\widehat{\gamma }\widehat{\delta }^{2})X+\widehat{\gamma }\widehat{\delta }=1$, since $X^{2}=0$ and $\widehat{\gamma }\widehat{\delta }=1$. It follows that $X+\widehat{\gamma }$ is invertible, therefore $\left( X+\widehat{\gamma }\right) =R$. \end{proof}
Since for any commutative unitary ring its lattice of ideals is a residuated lattice (see \cite{[TT; 22]}), in particular, for the unitary and commutative ring $A=\mathbb{Z}_{n}[X]/\left( X^{2}\right) $, we have that $(Id(\mathbb{Z}_{n}[X]/\left( X^{2}\right) ),\cap ,+,\otimes ,\rightarrow ,0=\{0\},1=A)$ is a residuated lattice with $2^{r}+1$ elements.
\begin{remark} \label{Remark7} As we remarked above, the ideals in the ring $R=\mathbb{Z} _{n}[X]/\left( X^{2}\right) $ are:
(i) $\left( 0\right) ;$
(ii) of the form $J_{d}=\left( \widehat{\alpha }X+\widehat{\beta } _{d}\right) ,\widehat{\alpha }\in R,\widehat{\beta }_{d}\in I_{d},$ where $d$ is a proper divisor of $n=p_{1}p_{2}...p_{r},p_{1},p_{2},...,p_{r}$ being distinct prime numbers, $r\geq 2,$ by using the notations from Remark \ref {Remark5}. If $I_{d}=\left( \widehat{p_{i}}\right) ,$ then $J_{d}$ is denoted $J_{p_{i}}$ and it is a maximal ideal in $R=\mathbb{Z}_{n}[X]/\left( X^{2}\right) $ ;
(iii) The ring $R,$ if $d=1$;
(iv) $\left( X\right) ,$ if $d=n$. \end{remark}
\begin{remark} \label{Remark8} We remark that for all two nonzero ideals of the above ring $ R$, $I$ and $J$, we have $\left( X\right) \subseteq I\cap J$ and the ideal $ \left( X\right) $ is the only minimal ideal of $\mathbb{Z}_{n}[X]/\left( X^{2}\right) $. \end{remark}
\begin{remark} \label{Remark9} For a divisor $d\neq 1$ of $n$, let $D_{d}=\{p\in \{p_{1},p_{2},...,p_{r}\}$ such that $p$ divides $d\}$, so that $d=\prod\limits_{p\in D_{d}}p$.
(1) We have $J_{d_{1}}\cap J_{d_{2}}=J_{d_{1}}\otimes J_{d_{2}}=J_{d_{3}}$, where $D_{d_{3}}=\{p\in D_{d_{1}}\cup D_{d_{2}},d_{3}=\prod p\}$, for $ d_{1},d_{2}$ proper divisors.
If $d_{1}=1,$ we have $R\otimes J_{d_{2}}=J_{d_{2}}=R\cap J_{d_{2}}$.
If $d_{1}=n$, $d_{2}\neq n$, we have $\left( X\right) \otimes J_{d_{2}}=\left( X\right) \cap J_{d_{2}}=\left( X\right) $. If $d_{2}=n,$ we have $\left( X\right) \otimes \left( X\right) =\left( 0\right) $.
(2) We have $(J_{d_{1}}:J_{d_{2}})=J_{d_{3}}$, with $ D_{d_{3}}=D_{d_{1}}-D_{d_{2}}$. Indeed, $(J_{d_{1}}:J_{d_{2}})=\{y\in R,y\cdot J_{d_{2}}\subseteq J_{d_{1}}\}=J_{d_{3}}$, for $d_{1},d_{2}$ proper divisors$.$
If $J_{d_{1}}=\left( 0\right) ,$ we have $(0:J_{d_{2}})=\left( 0\right) $. Indeed, if $(0:J_{d_{2}})=J\neq \left( 0\right) $, it follows that $J\otimes J_{d_{2}}=\left( 0\right) $. But, from the above, $J\otimes J_{d_{2}}=J\cap J_{d_{2}}\neq \left( 0\right) $, which is false.
If $J_{d_{2}}=\left( 0\right) $, we have $(J_{d_{1}}:0)=R$.
If $d_{1}=1$, we have $\left( R:J_{d_{2}}\right) =R$ and $\left( J_{d_{2}}:R\right) =J_{d_{2}}.$
If $d_{1}=n$, $d_{2}\neq n$, we have $J_{d_{1}}=\left( X\right) $, therefore $(J_{d_{1}}:J_{d_{2}})=J_{d_{1}}=\left( X\right) $. If $d_{1}\neq n$, $ d_{2}=n$, we have $J_{d_{2}}=\left( X\right) $, therefore $ (J_{d_{1}}:J_{d_{2}})=R$. If $d_{1}=d_{2}=n$, we have $J_{d_{1}}=J_{d_{2}}= \left( X\right) $ and $(J_{d_{1}}:J_{d_{2}})=R$. \end{remark}
\begin{proposition} \label{Proposition10} (\textit{i)} \textit{For} $n\geq 2$\textit{, with the above notations, the residuated lattice} $(Id(\mathbb{Z}_{n}[X]/\left( X^{2}\right) ),\cap ,+,\otimes ,\rightarrow ,0=\{0\},1=R),R=\mathbb{Z}_{n}[X]/\left( X^{2}\right) ,$ \ \textit{is a BL-algebra with} $2^{r}+1$ \textit{elements.}
(\textit{ii)} \textit{By using the notations from Remark \ref{Remark7}, we have that} $(Id_{p_{i}}(\mathbb{Z}_{n}[X]/\left( X^{2}\right) ),\cap ,+,\otimes ,\rightarrow ,0=\{0\},1=R),$\textit{where} $Id_{p_{i}}(\mathbb{Z}_{n}[X]/\left( X^{2}\right) )=\{\left( 0\right) ,J_{p_{i}},R\},~$\textit{is a BL-sublattice of the lattice} $Id(\mathbb{Z}_{n}[X]/\left( X^{2}\right) )$, \textit{having }$3$ \textit{elements.} \end{proposition}
\begin{proof} (i) First, we will prove the $\left( prel\right) $ condition: \begin{equation*} \left( I\rightarrow J\right) \vee \left( J\rightarrow I\right) =(J:I)\vee (I:J)=\mathbb{Z}_{n}[X]/\left( X^{2}\right) , \end{equation*} for every $I,J\in Id(\mathbb{Z}_{n}[X]/\left( X^{2}\right) )$.
\textit{Case 1.} If $d_{1}$ and $d_{2}$ are proper divisors of $n$, we have $\left( J_{d_{1}}\rightarrow J_{d_{2}}\right) \vee \left( J_{d_{2}}\rightarrow J_{d_{1}}\right) =(J_{d_{2}}:J_{d_{1}})\vee (J_{d_{1}}:J_{d_{2}})=J_{d_{4}}\vee J_{d_{5}},$ where $D_{d_{4}}=D_{d_{1}}-D_{d_{2}}$ and $D_{d_{5}}=D_{d_{2}}-D_{d_{1}}$. We remark that $D_{d_{4}}\cap D_{d_{5}}=\emptyset ,$ hence \textit{gcd}\thinspace $\{d_{4},d_{5}\}=1$. From here, it follows that there exist integers $a$ and $b$ such that $ad_{4}+bd_{5}=1$. We obtain that $J_{d_{4}}\vee J_{d_{5}}=<J_{d_{4}}\cup J_{d_{5}}>=R$, from Proposition \ref{Proposition6}, ii).
\textit{Case 2.} If $d_{1}$ is a proper divisor of $n$ and $d_{2}=n$, we have $J_{d_{2}}=\left( X\right) $. Therefore, $\left( J_{d_{1}}\rightarrow J_{d_{2}}\right) \vee \left( J_{d_{2}}\rightarrow J_{d_{1}}\right) =(J_{d_{2}}:J_{d_{1}})\vee (J_{d_{1}}:J_{d_{2}})=J_{d_{2}}\vee R=R$, by using Remark \ref{Remark9}.
\textit{Case 3.} If $d_{1}$ is a proper divisor of $n$ and $J_{d_{2}}=\left( 0\right) $, we have $\left( J_{d_{1}}\rightarrow J_{d_{2}}\right) \vee \left( J_{d_{2}}\rightarrow J_{d_{1}}\right) =(0:J_{d_{1}})\vee (J_{d_{1}}:0)=0\vee R=R$, by using Remark \ref{Remark9}.
\textit{Case 4. }If $d_{1}$ is a proper divisor of $n$ and $J_{d_{2}}=R$, it is clear. From here, we have that the condition $\left( prel\right) $ is satisfied.
Now, we prove condition $\left( div\right) :$ \begin{equation*} I\otimes \left( I\rightarrow J\right) =I\otimes \left( J:I\right) =I\cap J, \end{equation*} for every $I,J\in Id(\mathbb{Z}_{n}[X]/\left( X^{2}\right) )$.
\textit{Case 1.} If $d_{1}$ and $d_{2}$ are proper divisors of $n$, we have $ J_{d_{1}}\otimes \left( J_{d_{2}}:J_{d_{1}}\right) =J_{d_{1}}\otimes J_{d_{3}}=J_{d_{4}}=J_{d_{1}}\cap J_{d_{2}},$ since $ D_{d_{3}}=D_{d_{2}}-D_{d_{1}}$ and $D_{d_{4}}=\{p\in D_{d_{1}}\cup D_{d_{3}},d_{4}=\prod p\}=\{p\in D_{d_{1}}\cup D_{d_{2}},d_{4}=\prod p\}$.
\textit{Case 2.} If $d_{1}$ is a proper divisor of $n$ and $d_{2}=n$, we have $J_{d_{2}}=\left( X\right) $. \ We obtain $J_{d_{1}}\otimes \left( J_{d_{2}}:J_{d_{1}}\right) =J_{d_{1}}\otimes \left( \left( X\right) :J_{d_{1}}\right) =J_{d_{1}}\otimes \left( X\right) =J_{d_{1}}\cap \left( X\right) $ since $\left( X\right) \subset J_{d_{1}}$.
\textit{Case 3.} If $d_{1}=n$ and \thinspace $d_{2}$ is a proper divisor of $ n$, we have $J_{d_{1}}=\left( X\right) $. \ We obtain $J_{d_{1}}\otimes \left( J_{d_{2}}:J_{d_{1}}\right) =\left( X\right) \otimes \left( J_{d_{2}}:\left( X\right) \right) =\left( X\right) \otimes R=\left( X\right) =J_{d_{2}}\cap \left( X\right) $ since $\left( X\right) \subset J_{d_{2}}$.
\textit{Case 4.} If $d_{1}$ is a proper divisor of $n$ and $J_{d_{2}}=\left( 0\right) $, we have $J_{d_{1}}\otimes \left( J_{d_{2}}:J_{d_{1}}\right) =J_{d_{1}}\otimes \left( 0:J_{d_{1}}\right) =J_{d_{1}}\otimes \left( 0\right) =\left( 0\right) =J_{d_{1}}\cap \left( 0\right) ,$ from Remark \ref {Remark9}.
\textit{Case 5.} If $J_{d_{1}}=\left( 0\right) $ and $d_{2}$ is a proper divisor of $n$, we have $J_{d_{1}}\otimes \left( J_{d_{2}}:J_{d_{1}}\right) =0\otimes \left( J_{d_{2}}:0\right) =0.$
\textit{Case 6. }If $d_{1}$ is a proper divisor of $n$ and $J_{d_{2}}=R$, we have $J_{d_{1}}\otimes \left( J_{d_{2}}:J_{d_{1}}\right) =J_{d_{1}}\otimes \left( R:J_{d_{1}}\right) =J_{d_{1}}\otimes R=J_{d_{1}}$. If $\ J_{d_{1}}=R$ and $d_{2}$ is a proper divisor of $n$, we have $J_{d_{1}}\otimes \left( J_{d_{2}}:J_{d_{1}}\right) =R\otimes \left( J_{d_{2}}:R\right) =R\otimes J_{d_{2}}=J_{d_{2}}$. From here, we have that the condition $\left( div\right) $ is satisfied and the proposition is proved.
(ii) It is clear that $J_{p_{i}}\odot J_{p_{i}}=J_{p_{i}}\otimes J_{p_{i}}=J_{p_{i}}$, and we obtain the following tables:
\begin{equation*}
\begin{tabular}{l|lll} $\rightarrow $ & $O$ & $J_{p_{i}}$ & $R$ \\ \hline $O$ & $R$ & $R$ & $R$ \\ $J_{p_{i}}$ & $O$ & $R$ & $R$ \\ $R$ & $O$ & $J_{p_{i}}$ & $R$ \end{tabular} \ \ \
\begin{tabular}{l|lll} $\odot $ & $O$ & $J_{p_{i}}$ & $R$ \\ \hline $O$ & $O$ & $O$ & $O$ \\ $J_{p_{i}}$ & $O$ & $J_{p_{i}}$ & $J_{p_{i}}$ \\ $R$ & $O$ & $J_{p_{i}}$ & $R$ \end{tabular} \ \ , \end{equation*} therefore $\{O,J_{p_{i}},R\}$ is a BL-algebra of order $3.$ \end{proof}
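The two tables above can also be checked mechanically. The following sketch (our addition, not part of the original proof) encodes the chain $O<J_{p_{i}}<R$ as $0<1<2$, so that meet on the chain is simply `min`, and replays the $(prel)$ and $(div)$ conditions:

```python
# Machine check of the order-3 tables above, encoding O = 0, J_{p_i} = 1, R = 2.
# On a chain, the meet x ^ y is min(x, y) and the join is max(x, y).
IMP = {(0, 0): 2, (0, 1): 2, (0, 2): 2,
       (1, 0): 0, (1, 1): 2, (1, 2): 2,
       (2, 0): 0, (2, 1): 1, (2, 2): 2}
ODOT = {(0, 0): 0, (0, 1): 0, (0, 2): 0,
        (1, 0): 0, (1, 1): 1, (1, 2): 1,
        (2, 0): 0, (2, 1): 1, (2, 2): 2}
els = [0, 1, 2]
# (prel): (x -> y) v (y -> x) = 1 for all x, y
prel = all(max(IMP[(x, y)], IMP[(y, x)]) == 2 for x in els for y in els)
# (div): x . (x -> y) = x ^ y for all x, y
div = all(ODOT[(x, IMP[(x, y)])] == min(x, y) for x in els for y in els)
print(prel, div)  # True True
```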
\begin{proposition} \label{Proposition11} \textit{Let} $n=p^{r}$ \textit{with} $p$ \textit{a prime number,} $p\geq 2,$ $r$ \textit{a positive integer,} $r\geq 2$. \textit{We consider the ring} $R=\mathbb{Z}_{n}[X]/\left( X^{2}\right) $. \textit{The set} $(Id(\mathbb{Z}_{n}[X]/\left( X^{2}\right) ),\cap ,+,\otimes ,\rightarrow ,0=\{0\},1=R)$ \textit{is a BL-chain with} $r+2$ \textit{elements. In this way, for a given positive integer }$k\geq 4$\textit{, we can construct a BL-chain with }$k$ \textit{elements.} \end{proposition}
\begin{proof} The ideals in $\mathbb{Z}_{n}$ are of the form $\left( 0\right) \subseteq \left( p^{r-1}\right) \subseteq \left( p^{r-2}\right) \subseteq ...\subseteq \left( p\right) \subseteq \mathbb{Z}_{n}$, and the ideal $\left( p\right) $ is the only maximal ideal of $\mathbb{Z}_{n}$. The ideals in the ring $R$ are $\left( 0\right) \subseteq \left( X\right) \subseteq \left( \alpha _{r-1}X+\beta _{r-1}\right) \subseteq \left( \alpha _{r-2}X+\beta _{r-2}\right) \subseteq ...\subseteq \left( \alpha _{1}X+\beta _{1}\right) \subseteq R,$ where $\alpha _{i}\in \mathbb{Z}_{n},i\in \{1,...,r-1\},$ $\beta _{r-1}\in \left( p^{r-1}\right) ,\beta _{r-2}\in \left( p^{r-2}\right) ,...,\beta _{1}\in \left( p\right) ,$ therefore $r+2$ ideals in all. We denote these ideals by $\left( 0\right) ,\left( X\right) ,I_{p^{r-1}},I_{p^{r-2}},...,I_{p},R$, with $I_{p}$ the only maximal ideal in $R$.
First, we will prove $\left( prel\right) $ condition: \begin{equation*} \left( I\rightarrow J\right) \vee \left( J\rightarrow I\right) =(J:I)\vee (I:J)=\mathbb{Z}_{n}[X]/\left( X^{2}\right) , \end{equation*} for every $I,J\in Id(\mathbb{Z}_{n}[X]/\left( X^{2}\right) )$.
\textit{Case 1.} We suppose that $I$ and $J$ are proper ideals and $ I\subseteq J$. We have $\left( I\rightarrow J\right) \vee \left( J\rightarrow I\right) =(J:I)\vee (I:J)=R\vee I=R$.
\textit{Case 2.} $I=\left( 0\right) $ and $J$ is a proper ideal. We have $\left( I\rightarrow J\right) \vee \left( J\rightarrow I\right) =(J:\left( 0\right) )\vee (\left( 0\right) :J)=R$. Therefore, the condition $\left( prel\right) $ is satisfied.
Now, we prove $\left( div\right) $ condition: \begin{equation*} I\otimes \left( I\rightarrow J\right) =I\otimes \left( J:I\right) =I\cap J, \end{equation*} for every $I,J\in Id(\mathbb{Z}_{n}[X]/\left( X^{2}\right) )$.
\textit{Case 1.} We suppose that $I$ and $J$ are proper ideals and $ I\subseteq J$. We have $I\otimes \left( I\rightarrow J\right) =I\otimes \left( J:I\right) =I\otimes R=I=I\cap J$. If $J\subseteq I$, we have $ I\otimes \left( I\rightarrow J\right) =I\otimes \left( J:I\right) =I\otimes J=J=I\cap J$.
\textit{Case 2.} $I=\left( 0\right) $ and $J$ is a proper ideal. We have $\left( 0\right) \otimes \left( \left( 0\right) \rightarrow J\right) =\left( 0\right) \otimes \left( J:\left( 0\right) \right) =\left( 0\right) =I\cap J$. If $I\neq \left( X\right) $ is a proper ideal and $J=\left( 0\right) $, we have $I\otimes \left( I\rightarrow J\right) =I\otimes \left( \left( 0\right) :I\right) =I\otimes \left( 0\right) =\left( 0\right) $. If $I=\left( X\right) $ and $J=\left( 0\right) $, we have $I\otimes \left( I\rightarrow J\right) =\left( X\right) \otimes \left( \left( 0\right) :\left( X\right) \right) =\left( X\right) \otimes \left( X\right) =\left( 0\right) $ and $\left( 0\right) \cap \left( X\right) =\left( 0\right) $. From here, we have that the condition $\left( div\right) $ is satisfied and the proposition is proved. \end{proof}
\begin{example} \label{Example12} In Proposition \ref{Proposition10}, we take $n=2\cdot 3$, therefore the ideals of $\mathbb{Z}_{6}$ are $\left( 0\right) ,\left( 2\right) ,\left( 3\right) ,\mathbb{Z}_{6}$, with $\left( 2\right) $ and $\left( 3\right) $ maximal ideals. The ring $\mathbb{Z}_{6}[X]/\left( X^{2}\right) $ has $5$ ideals: $O=\left( 0\right) \subset A=\left( X\right) ,B=\left( \alpha X+\beta \right) ,C=\left( \gamma X+\delta \right) ,E=\mathbb{Z}_{6}[X]/\left( X^{2}\right) $, with $\alpha ,\gamma \in \mathbb{Z}_{6},\beta \in \left( 2\right) $ and $\delta \in \left( 3\right) $. From the following tables, we have a BL-structure on $Id(\mathbb{Z}_{6}[X]/\left( X^{2}\right) )$:
\begin{equation*}
\begin{tabular}{l|lllll} $\rightarrow $ & $O$ & $A$ & $B$ & $C$ & $E$ \\ \hline $O$ & $E$ & $E$ & $E$ & $E$ & $E$ \\ $A$ & $A$ & $E$ & $E$ & $E$ & $E$ \\ $B$ & $O$ & $A$ & $E$ & $C$ & $E$ \\ $C$ & $O$ & $A$ & $B$ & $E$ & $E$ \\ $E$ & $O$ & $A$ & $B$ & $C$ & $E$ \end{tabular} \ \ \
\begin{tabular}{l|lllll} $\odot $ & $O$ & $A$ & $B$ & $C$ & $E$ \\ \hline $O$ & $O$ & $O$ & $O$ & $O$ & $O$ \\ $A$ & $O$ & $O$ & $A$ & $A$ & $A$ \\ $B$ & $O$ & $A$ & $B$ & $A$ & $B$ \\ $C$ & $O$ & $A$ & $A$ & $C$ & $C$ \\ $E$ & $O$ & $A$ & $B$ & $C$ & $E$ \end{tabular} \ \ . \end{equation*}
From Proposition \ref{Proposition10}, if we consider $J_{p_{i}}=$ $B$, we have the following BL-algebra of order $3$:
\begin{equation*}
\begin{tabular}{l|lll} $\rightarrow $ & $O$ & $B$ & $E$ \\ \hline $O$ & $E$ & $E$ & $E$ \\ $B$ & $O$ & $E$ & $E$ \\ $E$ & $O$ & $B$ & $E$ \end{tabular} \ \ \
\begin{tabular}{l|lll} $\odot $ & $O$ & $B$ & $E$ \\ \hline $O$ & $O$ & $O$ & $O$ \\ $B$ & $O$ & $B$ & $B$ \\ $E$ & $O$ & $B$ & $E$ \end{tabular} \ \ . \end{equation*} \end{example}
\begin{example} \label{Example13} In Proposition \ref{Proposition10}, we take $n=2\cdot 3\cdot 5$, therefore the ideals of the ring $\mathbb{Z}_{30}$ are $\left( 0\right) ,\left( 2\right) ,\left( 3\right) ,\left( 5\right) ,\left( 6\right) ,\left( 10\right) ,\left( 15\right) ,\mathbb{Z}_{30}$, with $\left( 2\right) ,\left( 3\right) $ and $\left( 5\right) $ maximal ideals. The ring $\mathbb{Z}_{30}[X]/\left( X^{2}\right) $ has $9$ ideals: $O=\left( 0\right) \subset A=\left( X\right) ,B=\left( \alpha _{1}X+\beta _{1}\right) ,C=\left( \alpha _{2}X+\beta _{2}\right) ,$\newline $D=\left( \alpha _{3}X+\beta _{3}\right) ,E=\left( \alpha _{4}X+\beta _{4}\right) ,F=\left( \alpha _{5}X+\beta _{5}\right) ,G=\left( \alpha _{6}X+\beta _{6}\right) ,R=\mathbb{Z}_{30}[X]/\left( X^{2}\right) $, with $\alpha _{i}\in \mathbb{Z}_{30},i\in \{1,2,3,4,5,6\},\beta _{1}\in \left( 6\right) $, $\beta _{2}\in \left( 10\right) ,\beta _{3}\in \left( 15\right) ,\beta _{4}\in \left( 2\right) ,\beta _{5}\in \left( 3\right) $ and $\beta _{6}\in \left( 5\right) $. The ideals $E,F$ and $G$ are maximal. From the following tables, we have a BL-structure on $Id(\mathbb{Z}_{30}[X]/\left( X^{2}\right) )$: \begin{equation*}
\begin{tabular}{l|lllllllll} $\rightarrow $ & $O$ & $A$ & $B$ & $C$ & $D$ & $E$ & $F$ & $G$ & $R$ \\ \hline $O$ & $R$ & $R$ & $R$ & $R$ & $R$ & $R$ & $R$ & $R$ & $R$ \\ $A$ & $A$ & $R$ & $R$ & $R$ & $R$ & $R$ & $R$ & $R$ & $R$ \\ $B$ & $O$ & $A$ & $R$ & $C$ & $D$ & $R$ & $R$ & $G$ & $R$ \\ $C$ & $O$ & $A$ & $B$ & $R$ & $D$ & $R$ & $F$ & $R$ & $R$ \\ $D$ & $O$ & $A$ & $B$ & $C$ & $R$ & $E$ & $R$ & $R$ & $R$ \\ $E$ & $O$ & $A$ & $B$ & $C$ & $D$ & $R$ & $F$ & $G$ & $R$ \\ $F$ & $O$ & $A$ & $B$ & $C$ & $D$ & $E$ & $R$ & $G$ & $R$ \\ $G$ & $O$ & $A$ & $B$ & $C$ & $D$ & $E$ & $F$ & $R$ & $R$ \\ $R$ & $O$ & $A$ & $B$ & $C$ & $D$ & $E$ & $F$ & $G$ & $R$ \end{tabular} \ \
\begin{tabular}{l|lllllllll} $\odot $ & $O$ & $A$ & $B$ & $C$ & $D$ & $E$ & $F$ & $G$ & $R$ \\ \hline $O$ & $O$ & $O$ & $O$ & $O$ & $O$ & $O$ & $O$ & $O$ & $O$ \\ $A$ & $O$ & $O$ & $A$ & $A$ & $A$ & $A$ & $A$ & $A$ & $A$ \\ $B$ & $O$ & $A$ & $B$ & $A$ & $A$ & $B$ & $B$ & $A$ & $B$ \\ $C$ & $O$ & $A$ & $A$ & $C$ & $A$ & $C$ & $C$ & $A$ & $C$ \\ $D$ & $O$ & $A$ & $A$ & $A$ & $D$ & $A$ & $D$ & $D$ & $D$ \\ $E$ & $O$ & $A$ & $B$ & $C$ & $A$ & $E$ & $A$ & $A$ & $E$ \\ $F$ & $O$ & $A$ & $B$ & $A$ & $D$ & $A$ & $F$ & $A$ & $F$ \\ $G$ & $O$ & $A$ & $A$ & $C$ & $D$ & $A$ & $A$ & $G$ & $G$ \\ $R$ & $O$ & $A$ & $B$ & $C$ & $D$ & $E$ & $F$ & $G$ & $R$ \end{tabular} \text{.} \end{equation*} \end{example}
\begin{example} \label{Example14} In Proposition \ref{Proposition11}, we consider $p=2,r=2$. The ideals in $\left( \mathbb{Z}_{4},+,\cdot \right) $ are $\left( 0\right) \subset \left( 2\right) \subset \mathbb{Z}_{4}$ and $\mathbb{Z}_{4}$ is a local ring. The ring $\mathbb{Z}_{4}[X]/\left( X^{2}\right) $ has $4$ ideals: $O=\left( 0\right) \subset A=\left( X\right) \subset B=\left( \alpha X+\beta \right) \subset E=\mathbb{Z}_{4}[X]/\left( X^{2}\right) $, with $\alpha \in \mathbb{Z}_{4}$ and $\beta \in \left( 2\right) $. From the following tables, we have a BL-structure for $Id(\mathbb{Z}_{4}[X]/\left( X^{2}\right) )$:
\begin{equation*}
\begin{tabular}{l|llll} $\rightarrow $ & $O$ & $A$ & $B$ & $E$ \\ \hline $O$ & $E$ & $E$ & $E$ & $E$ \\ $A$ & $A$ & $E$ & $E$ & $E$ \\ $B$ & $O$ & $A$ & $E$ & $E$ \\ $E$ & $O$ & $A$ & $B$ & $E$ \end{tabular} \ \ \
\begin{tabular}{l|llll} $\odot $ & $O$ & $A$ & $B$ & $E$ \\ \hline $O$ & $O$ & $O$ & $O$ & $O$ \\ $A$ & $O$ & $O$ & $A$ & $A$ \\ $B$ & $O$ & $A$ & $A$ & $B$ \\ $E$ & $O$ & $A$ & $B$ & $E$ \end{tabular} \ \ . \end{equation*} \end{example}
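As with the order-3 case, the order-4 tables of Example \ref{Example14} can be verified mechanically. This sketch (our addition) encodes the chain of ideals $O<A<B<E$ as $0<1<2<3$ and checks the $(prel)$ and $(div)$ conditions:

```python
# Machine check of the tables in Example 14, encoding O=0, A=1, B=2, E=3.
# IMP[x][y] is x -> y and ODOT[x][y] is x . y, read off from the two tables.
O, A, B, E = 0, 1, 2, 3
IMP = [[E, E, E, E], [A, E, E, E], [O, A, E, E], [O, A, B, E]]
ODOT = [[O, O, O, O], [O, O, A, A], [O, A, A, B], [O, A, B, E]]
els = [O, A, B, E]
prel = all(max(IMP[x][y], IMP[y][x]) == E for x in els for y in els)
div = all(ODOT[x][IMP[x][y]] == min(x, y) for x in els for y in els)
print(prel, div)  # True True
```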
\begin{example} \label{Example15} In Proposition \ref{Proposition11}, we consider $p=2,r=3$. The ideals in $\left( \mathbb{Z}_{8},+,\cdot \right) $ are $\left( 0\right) \subset \left( 4\right) \subset \left( 2\right) \subset \mathbb{Z}_{8}$. The ring $\mathbb{Z}_{8}[X]/\left( X^{2}\right) $ has $5$ ideals: $O=\left( 0\right) \subset A=\left( X\right) \subset B=\left( \alpha X+\beta \right) \subset C=\left( \gamma X+\delta \right) \subset E=\mathbb{Z}_{8}$, with $ \alpha ,\gamma \in $ $\mathbb{Z}_{8},\beta \in \left( 4\right) $ and $\delta \in \left( 2\right) $. From the following tables, we have a BL-structure for $Id(\mathbb{Z}_{8}[X]/\left( X^{2}\right) )$: \end{example}
\begin{equation*}
\begin{tabular}{l|lllll} $\rightarrow $ & $O$ & $A$ & $B$ & $C$ & $E$ \\ \hline $O$ & $E$ & $E$ & $E$ & $E$ & $E$ \\ $A$ & $A$ & $E$ & $E$ & $E$ & $E$ \\ $B$ & $O$ & $A$ & $E$ & $E$ & $E$ \\ $C$ & $O$ & $A$ & $B$ & $E$ & $E$ \\ $E$ & $O$ & $A$ & $B$ & $C$ & $E$ \end{tabular} \ \ \
\begin{tabular}{l|lllll} $\odot $ & $O$ & $A$ & $B$ & $C$ & $E$ \\ \hline $O$ & $O$ & $O$ & $O$ & $O$ & $O$ \\ $A$ & $O$ & $O$ & $A$ & $A$ & $A$ \\ $B$ & $O$ & $A$ & $A$ & $A$ & $B$ \\ $C$ & $O$ & $A$ & $A$ & $B$ & $C$ \\ $E$ & $O$ & $A$ & $B$ & $C$ & $E$ \end{tabular} \ \ . \end{equation*} In the following, we present a way to generate new finite\ BL-algebras using the ordinal product of residuated lattices.
We recall that, in \cite{[I; 09]}, Iorgulescu studied the influence of the conditions $(prel)$ and $(div)$ on the ordinal product of two residuated lattices.
It is known that, if $\mathcal{L}_{1}=(L_{1},\wedge _{1},\vee _{1},\odot _{1},\rightarrow _{1},0_{1},1_{1})$ and $\mathcal{L}_{2}=(L_{2},\wedge _{2},\vee _{2},\odot _{2},\rightarrow _{2},0_{2},1_{2})$ are two residuated lattices such that $1_{1}=0_{2}$ and $(L_{1}\backslash \{1_{1}\})\cap (L_{2}\backslash \{0_{2}\})=\emptyset ,$ then the ordinal product of $\mathcal{L}_{1}$ and $\mathcal{L}_{2}$ is the residuated lattice $\mathcal{L}_{1}\boxtimes \mathcal{L}_{2}=(L_{1}\cup L_{2},\wedge ,\vee ,\odot ,\rightarrow ,0,1)$ where
\begin{equation*} 0=0_{1}\text{ and }1=1_{2}, \end{equation*} \begin{equation*} x\leq y\text{ if }(x,y\in L_{1}\text{ and }x\leq _{1}y)\text{ or }(x,y\in L_{2}\text{ and }x\leq _{2}y)\text{ or }(x\in L_{1}\text{ and }y\in L_{2}) \text{ ,} \end{equation*}
\begin{equation*} x\wedge y=\left\{ \begin{array}{c} x\wedge _{1}y,\text{ if }x,y\in L_{1}, \\ x\wedge _{2}y,\text{ if }x,y\in L_{2}, \\ x,\text{ if }x\in L_{1}\text{ and }y\in L_{2} \end{array} \right. \end{equation*} \begin{equation*} x\vee y=\left\{ \begin{array}{c} x\vee _{1}y,\text{ if }x,y\in L_{1}, \\ x\vee _{2}y,\text{ if }x,y\in L_{2}, \\ y,\text{ if }x\in L_{1}\text{ and }y\in L_{2} \end{array} \right. \end{equation*} \begin{equation*} x\rightarrow y=\left\{ \begin{array}{c} 1,\text{ if }x\leq y, \\ x\rightarrow _{i}y,\text{ if }x\nleq y,\text{ }x,y\in L_{i},\text{ }i=1,2, \\ y,\text{ if }x\nleq y,\text{ }x\in L_{2},\text{ }y\in L_{1}\backslash \{1_{1}\}. \end{array} \right. \end{equation*} \begin{equation*} x\odot y=\left\{ \begin{array}{c} x\odot _{1}y,\text{ if }x,y\in L_{1}, \\ x\odot _{2}y,\text{ if }x,y\in L_{2}, \\ x,\text{ if }x\in L_{1}\backslash \{1_{1}\}\text{ and }y\in L_{2}. \end{array} \right. \end{equation*}
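For finite chains, the definition above is straightforward to implement. The following sketch (our addition; the representation of a chain as a triple and the helper name are our own) glues two residuated chains at $1_{1}=0_{2}$ and, applied to two copies of the 2-element chain $Id(\mathbb{Z}_{2})$, reproduces the order-3 BL-chain of Example \ref{Example12}:

```python
# A sketch of the ordinal product L1 ⊠ L2 for finite residuated chains.
# A chain is a triple (elements in increasing order, implication table,
# product table); the tables are dicts on pairs, with the implication table
# only needed on pairs (x, y) with x > y. The top of L1 must equal the
# bottom of L2 (the identification 1_1 = 0_2).
def ordinal_product(L1, L2):
    (E1, I1, M1), (E2, I2, M2) = L1, L2
    assert E1[-1] == E2[0]            # identification 1_1 = 0_2
    E = E1 + E2[1:]                   # carrier L1 ∪ L2
    pos = {x: i for i, x in enumerate(E)}
    in1, in2 = set(E1), set(E2)

    def imp(x, y):
        if pos[x] <= pos[y]:
            return E[-1]              # x <= y  =>  x -> y = 1
        if x in in2 and y in in2:
            return I2[(x, y)]
        if x in in1 and y in in1:
            return I1[(x, y)]
        return y                      # x in L2, y in L1 \ {1_1}

    def mul(x, y):
        if x in in2 and y in in2:
            return M2[(x, y)]
        if x in in1 and y in in1:
            return M1[(x, y)]
        return x if x in in1 else y   # the L1 \ {1_1} factor absorbs

    return E, imp, mul

# Two copies of the 2-element chain Id(Z_2), glued at the middle element 'a'.
c1 = (['0', 'a'], {('a', '0'): '0'},
      {('0', '0'): '0', ('0', 'a'): '0', ('a', '0'): '0', ('a', 'a'): 'a'})
c2 = (['a', '1'], {('1', 'a'): 'a'},
      {('a', 'a'): 'a', ('a', '1'): 'a', ('1', 'a'): 'a', ('1', '1'): '1'})
E, imp, mul = ordinal_product(c1, c2)
print(E, imp('1', 'a'), mul('a', '1'))   # the order-3 chain 0 < a < 1
```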
The ordinal product is associative but not commutative; see \cite{[I; 09]}.
\begin{proposition} \label{Proposition_5} ( \cite{[I; 09]}, Corollary 3.5.10) Let $\mathcal{L} _{1}$ and $\mathcal{L}_{2}$ be BL-algebras.
\textit{(i) }\ If $\mathcal{L}_{1}$ is a chain, then the ordinal product $ \mathcal{L}_{1}\boxtimes \mathcal{L}_{2}$ \textit{\ is a BL-algebra.}
\textit{(ii) }$\ \ $If $\mathcal{L}_{1}$ is not a chain, then the ordinal product $\mathcal{L}_{1}\boxtimes \mathcal{L}_{2}$ \textit{\ is only a residuated lattice satisfying (div) condition.} \end{proposition}
\begin{remark} \label{Remark_3} (i) An ordinal product of two BL-chains is a BL-chain. Indeed, using the definition of the implication in an ordinal product, for every $x,y$ we have $x\rightarrow y=1$ or $y\rightarrow x=1.$
(ii) An ordinal product of two MV-algebras is a BL-algebra which is not an MV-algebra. Indeed, if $\mathcal{L}_{1}$ and $\mathcal{L}_{2}$ are two MV-algebras, using Proposition \ref{Proposition_5}, the residuated lattice $ \mathcal{L}_{1}\boxtimes \mathcal{L}_{2}$ \textit{\ is a BL-algebra in which, we have} $(1_{1})^{\ast \ast }=(1_{1}\rightarrow 0_{1})^{\ast }=(0_{1})^{\ast }=0_{1}\rightarrow 0_{1}=1=1_{2}\neq 1_{1},$ so, $\mathcal{L} _{1}\boxtimes \mathcal{L}_{2}$ \textit{\ is not an MV-algebra.} \end{remark}
For a natural number $n\geq 2$ we consider the decompositions (which are not unique) of $n$ into factors greater than $1$, and we denote by $\pi (n)$ the number of all such decompositions. Obviously, if $n$ is prime, then $\pi (n)=0.$
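For illustration:
\begin{equation*}
\pi (4)=1\ (4=2\cdot 2),\qquad \pi (6)=1\ (6=2\cdot 3),\qquad \pi (8)=2\ (8=2\cdot 4=2\cdot 2\cdot 2),
\end{equation*}
so, for instance, there are $\pi (8)+1=3$ non-isomorphic MV-algebras with $8$ elements, in agreement with the last row of Table 1 below.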
We recall that an MV-algebra is finite iff it is isomorphic to a finite product of MV-chains, see \cite{[HR; 99]}. Using this result, in \cite{[FP; 22]} we showed that for every natural number $n\geq 2$ there are $\pi (n)+1$ non-isomorphic MV-algebras with $n$ elements, of which only one is a chain. Moreover, all finite MV-algebras (up to isomorphism) with $n$ elements correspond to finite commutative rings $A$ in which $\left\vert Id(A)\right\vert =n.$
In \textbf{Table 1} we briefly describe a way to generate finite MV-algebras $M$ with $2\leq n\leq 8$ elements using commutative rings, see \cite{[FP; 22]}.
\begin{equation*} \text{\textbf{Table 1:}} \end{equation*}
\textbf{\ } \begin{tabular}{lll} $\left\vert M\right\vert \mathbf{=n}$ & \textbf{Nr of MV} & \textbf{Rings which generates MV} \\ $n=2$ & $1$ & $Id(\mathbb{Z}_{2})$ (chain) \\ $n=3$ & $1$ & $Id(\mathbb{Z}_{4})$ (chain) \\ $n=4$ & $2$ & $Id(\mathbb{Z}_{8})$ (chain) and $Id(\mathbb{Z}_{2}\times \mathbb{Z}_{2})$ \\ $n=5$ & $1$ & $Id(\mathbb{Z}_{16})$ (chain) \\ $n=6$ & $2$ & $Id(\mathbb{Z}_{32})$ (chain) and $Id(\mathbb{Z}_{2}\times \mathbb{Z}_{4})$ \\ $n=7$ & $1$ & $Id(\mathbb{Z}_{64})$ (chain) \\ $n=8$ & $3$ & $Id(\mathbb{Z}_{128})$ (chain) and $Id(\mathbb{Z}_{2}\times \mathbb{Z}_{8})$ and $Id(\mathbb{Z}_{2}\times \mathbb{Z}_{2}\times \mathbb{Z} _{2})$ \end{tabular}
Using the construction of ordinal product, Proposition \ref{Proposition_5} and Remark \ref{Remark_3}, we can generate BL-algebras (which are not MV-algebras) using commutative rings.
\begin{example} \label{Example_5} In \cite{[FP; 22]} we show that there is one MV-algebra with 3 elements (up to an isomorphism), see Table 1. This MV-algebra is isomorphic to $Id(\mathbb{Z}_{4})$ and is a chain. To generate a BL-chain with 3 elements (which is not an MV-algebra) using the ordinal product, we must consider only the MV-algebra with 2 elements (which is in fact a Boolean algebra). In the commutative ring $(\mathbb{Z}_{2},+,\cdot )$ the ideals are $Id(\mathbb{Z}_{2})=\{\{\widehat{0}\},$ $\mathbb{Z}_{2}\}$. Using Proposition \ref{Proposition1}, $(Id\left( \mathbb{Z}_{2}\right) ,\cap ,+,\otimes ,\rightarrow ,0=\{0\},1=\mathbb{Z}_{2})$ is an MV-chain with the following operations: \begin{equation*} \text{ }
\begin{tabular}{l|ll} $\rightarrow $ & $\{\widehat{0}\}$ & $\mathbb{Z}_{2}$ \\ \hline $\{\widehat{0}\}$ & $\mathbb{Z}_{2}$ & $\mathbb{Z}_{2}$ \\ $\mathbb{Z}_{2}$ & $\{\widehat{0}\}$ & $\mathbb{Z}_{2}$ \end{tabular} ;\text{ }
\begin{tabular}{l|ll} $\otimes =\cap $ & $\{\widehat{0}\}$ & $\mathbb{Z}_{2}$ \\ \hline $\{\widehat{0}\}$ & $\{\widehat{0}\}$ & $\{\widehat{0}\}$ \\ $\mathbb{Z}_{2}$ & $\{\widehat{0}\}$ & $\mathbb{Z}_{2}$ \end{tabular} \text{ and }
\begin{tabular}{l|ll} $+$ & $\{\widehat{0}\}$ & $\mathbb{Z}_{2}$ \\ \hline $\{\widehat{0}\}$ & $\{\widehat{0}\}$ & $\mathbb{Z}_{2}$ \\ $\mathbb{Z}_{2}$ & $\mathbb{Z}_{2}$ & $\mathbb{Z}_{2}$ \end{tabular} . \end{equation*} Now we consider two MV-algebras isomorphic to $Id\left( \mathbb{Z}_{2}\right) $, denoted $\mathcal{L}_{1}=(L_{1}=\{0,a\},\wedge ,\vee ,\odot ,\rightarrow ,0,a)$ and $\mathcal{L}_{2}=(L_{2}=\{a,1\},\wedge ,\vee ,\odot ,\rightarrow ,a,1).$ Using Proposition \ref{Proposition_5} we can construct the BL-algebra $\mathcal{L}_{1}\boxtimes \mathcal{L}_{2}=(L_{1}\cup L_{2}=\{0,a,1\},\wedge ,\vee ,\odot ,\rightarrow ,0,1)$ with $0\leq a\leq 1$ and the following operations: \begin{equation*}
\begin{tabular}{l|lll} $\rightarrow $ & $0$ & $a$ & $1$ \\ \hline $0$ & $1$ & $1$ & $1$ \\ $a$ & $0$ & $1$ & $1$ \\ $1$ & $0$ & $a$ & $1$ \end{tabular} \text{ and }
\begin{tabular}{l|lll} $\odot $ & $0$ & $a$ & $1$ \\ \hline $0$ & $0$ & $0$ & $0$ \\ $a$ & $0$ & $a$ & $a$ \\ $1$ & $0$ & $a$ & $1$ \end{tabular} \text{ ,} \end{equation*} obtaining the same BL-algebra of order $3$ as in Example \ref{Example12}. \end{example}
\begin{example} \label{Example_6} To generate the non-linearly ordered BL-algebra with 5 elements from Example \ref{Example_3}, we consider the commutative rings $\ ( \mathbb{Z}_{2},+,\cdot )$ and $(\mathbb{Z}_{2}\times \mathbb{Z}_{2},+,\cdot ).$ For $\mathbb{Z}_{2}\times \mathbb{Z}_{2}=\{\left( \widehat{0},\widehat{0} \right) ,\left( \widehat{0},\widehat{1}\right) ,\left( \widehat{1},\widehat{0 }\right) ,\left( \widehat{1},\widehat{1}\right) \}$, we obtain the lattice $ Id\left( \mathbb{Z}_{2}\times \mathbb{Z}_{2}\right) =\{\left( \widehat{0}, \widehat{0}\right) ,\{\left( \widehat{0},\widehat{0}\right) ,\left( \widehat{ 0},\widehat{1}\right) \},\{\left( \widehat{0},\widehat{0}\right) ,\left( \widehat{1},\widehat{0}\right) \},\mathbb{Z}_{2}\times \mathbb{Z} _{2}\}=\{O,R,B,E\}$, which is an MV-algebra $(Id\left( \mathbb{Z}_{2}\times \mathbb{Z}_{2}\right) ,\cap ,+,\otimes \rightarrow ,0=\{\left( \widehat{0}, \widehat{0}\right) \},1=\mathbb{Z}_{2}\times \mathbb{Z}_{2}),$ see Proposition \ref{Proposition1}. In $Id\left( \mathbb{Z}_{2}\times \mathbb{Z} _{2}\right) $ we have the following operations: \begin{equation*}
\begin{tabular}{l|llll} $\rightarrow $ & $O$ & $R$ & $B$ & $E$ \\ \hline $O$ & $E$ & $E$ & $E$ & $E$ \\ $R$ & $B$ & $E$ & $B$ & $E$ \\ $B$ & $R$ & $R$ & $E$ & $E$ \\ $E$ & $O$ & $R$ & $B$ & $E$ \end{tabular} ,\text{ }
\begin{tabular}{l|llll} $\otimes =\cap $ & $O$ & $R$ & $B$ & $E$ \\ \hline $O$ & $O$ & $O$ & $O$ & $O$ \\ $R$ & $O$ & $R$ & $O$ & $R$ \\ $B$ & $O$ & $O$ & $B$ & $B$ \\ $E$ & $O$ & $R$ & $B$ & $E$ \end{tabular} \text{ and }
\begin{tabular}{l|llll} $+$ & $O$ & $R$ & $B$ & $E$ \\ \hline $O$ & $O$ & $R$ & $B$ & $E$ \\ $R$ & $R$ & $R$ & $E$ & $E$ \\ $B$ & $B$ & $E$ & $B$ & $E$ \\ $E$ & $E$ & $E$ & $E$ & $E$ \end{tabular} . \end{equation*} If we consider two MV-algebras isomorphic with $(Id\left( \mathbb{Z} _{2}\right) ,\cap ,+,\otimes \rightarrow ,0=\{0\},1=\mathbb{Z}_{2})$ and $ (Id\left( \mathbb{Z}_{2}\times \mathbb{Z}_{2}\right) ,\cap ,+,\otimes \rightarrow ,0=\{\left( \widehat{0},\widehat{0}\right) \},1=\mathbb{Z} _{2}\times \mathbb{Z}_{2}),$ denoted by $\mathcal{L}_{1}=(L_{1}=\{0,c\}, \wedge _{1},\vee _{1},\odot _{1},\rightarrow _{1},0,c)$ and $\mathcal{L} _{2}=(L_{2}=\{c,a,b,1\},\wedge _{2},\vee _{2},\odot _{2},\rightarrow _{2},c,1),$ using Proposition \ref{Proposition_5} we generate the BL-algebra $\mathcal{L}_{1}\boxtimes \mathcal{L}_{2}=(L_{1}\cup L_{2}=\{0,c,a,b,1\},\wedge ,\vee ,\odot ,\rightarrow ,0,1)$ from Example \ref {Example_3}. \end{example}
From Proposition \ref{Proposition_5} and Remark \ref{Remark_3}, we deduce that:
\begin{proposition} \label{Remark_4} (i) To generate a BL-algebra with $n$ elements as an ordinal product $\mathcal{L}_{1}\boxtimes \mathcal{L}_{2}$ of two BL-algebras $\mathcal{L}_{1}$ and $\mathcal{L}_{2}$ we have the following possibilities: \begin{equation*} \mathcal{L}_{1}\text{ is a BL-chain with }i\text{ elements and }\mathcal{L}_{2}\text{ is a BL-algebra with }j\text{ elements, } \end{equation*} and \begin{equation*} \mathcal{L}_{1}\text{ is a BL-chain with }j\text{ elements and }\mathcal{L}_{2}\text{ is a BL-algebra with }i\text{ elements, } \end{equation*} or \begin{equation*} \mathcal{L}_{1}\text{ is a BL-chain with }k\text{ elements and }\mathcal{L}_{2}\text{ is a BL-algebra with }k\text{ elements, } \end{equation*} for $i,j\geq 2,$ $i+j=n+1,$ $i<j$ and $k\geq 2,$ $k=\frac{n+1}{2}\in \mathbb{N},$
(ii) To generate a BL-chain with $n$ elements as the ordinal product $\mathcal{L}_{1}\boxtimes \mathcal{L}_{2}$ of two BL-algebras $\mathcal{L}_{1}$ and $\mathcal{L}_{2}$ we have the following possibilities: \begin{equation*} \mathcal{L}_{1}\text{ is a BL-chain with }i\text{ elements and }\mathcal{L}_{2}\text{ is a BL-chain with }j\text{ elements, } \end{equation*} and \begin{equation*} \mathcal{L}_{1}\text{ is a BL-chain with }j\text{ elements and }\mathcal{L}_{2}\text{ is a BL-chain with }i\text{ elements, } \end{equation*} or \begin{equation*} \mathcal{L}_{1}\text{ is a BL-chain with }k\text{ elements and }\mathcal{L}_{2}\text{ is a BL-chain with }k\text{ elements, } \end{equation*} for $i,j\geq 2,$ $i+j=n+1,$ $i<j$ and $k\geq 2,$ $k=\frac{n+1}{2}\in \mathbb{N}.$ \end{proposition}
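For example, for $n=5$ the constraints $i,j\geq 2$, $i+j=6$, $i<j$ give only $(i,j)=(2,4)$, and $k=\frac{5+1}{2}=3\in \mathbb{N}$. So a BL-algebra with $5$ elements arises as a BL-chain with $2$ elements $\boxtimes$ a BL-algebra with $4$ elements, a BL-chain with $4$ elements $\boxtimes$ a BL-algebra with $2$ elements, or a BL-chain with $3$ elements $\boxtimes$ a BL-algebra with $3$ elements: exactly the three possibilities used in Case $n=5$ of the proof of Theorem \ref{Theorem_4} below.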
We make the following notations: \begin{equation*} \mathcal{BL}_{n}=\text{the set of BL-algebras with }n\text{ elements} \end{equation*} \begin{equation*} \mathcal{BL}_{n}(c)=\text{the set of BL-chains with }n\text{ elements} \end{equation*} \begin{equation*} \mathcal{MV}_{n}=\text{the set of MV-algebras with }n\text{ elements} \end{equation*} \begin{equation*} \mathcal{MV}_{n}(c)=\text{the set of MV-chains with }n\text{ elements.} \end{equation*}
\begin{theorem} \label{Theorem_4} (i) All finite BL-algebras (up to an isomorphism) with $2\leq n\leq 5$ elements which are not MV-algebras can be generated using the ordinal product of BL-algebras.
(ii) The number of non-isomorphic BL-algebras with $n$ elements (with $2\leq n\leq 5$) is \begin{equation*} \left\vert \mathcal{BL}_{2}\right\vert =\left\vert \mathcal{MV} _{2}\right\vert =\pi (2)+1, \end{equation*} \begin{equation*} \left\vert \mathcal{BL}_{3}\right\vert =\left\vert \mathcal{MV} _{3}\right\vert +\left\vert \mathcal{BL}_{2}\right\vert =\pi (3)+\pi (2)+2, \end{equation*} \begin{equation*} \left\vert \mathcal{BL}_{4}\right\vert =\left\vert \mathcal{MV} _{4}\right\vert +\left\vert \mathcal{BL}_{3}\right\vert +\left\vert \mathcal{ BL}_{2}\right\vert =\pi (4)+\pi (3)+2\cdot \pi (2)+4, \end{equation*} \begin{equation*} \left\vert \mathcal{BL}_{5}\right\vert =\left\vert \mathcal{MV} _{5}\right\vert +\left\vert \mathcal{BL}_{4}\right\vert +\left\vert \mathcal{ BL}_{3}\right\vert +\left\vert \mathcal{BL}_{2}\right\vert = \end{equation*} \begin{equation*} =\pi (5)+\pi (4)+2\cdot \pi (3)+4\cdot \pi (2)+8. \end{equation*} \end{theorem}
\begin{proof} From Proposition \ref{Proposition_5} and Remark \ref{Remark_3}, we remark that using the ordinal product of two BL-algebras we can generate only BL-algebras which are not MV-algebras.
We generate all BL-algebras with $n$ elements ($2\leq n\leq 5)$ which are not MV-algebras.
\textbf{Case }$n=2.$
There is obviously only one BL-algebra (up to an isomorphism), isomorphic to \begin{equation*} (Id(\mathbb{Z}_{2}),\cap ,+,\otimes ,\rightarrow ,0=\{0\},1=\mathbb{Z}_{2}). \end{equation*} In fact, this residuated lattice is a BL-chain and is the only MV-algebra with $2$ elements. We deduce that \begin{equation*} \left\vert \mathcal{MV}_{2}\right\vert =\left\vert \mathcal{BL} _{2}\right\vert =\pi (2)+1=1 \end{equation*} \begin{equation*} \left\vert \mathcal{MV}_{2}(c)\right\vert =\left\vert \mathcal{BL} _{2}(c)\right\vert =1. \end{equation*} \textbf{Case }$n=3.$
Using Proposition \ref{Remark_4}, to generate a BL-algebra with $3$ elements as an ordinal product $\mathcal{L}_{1}\boxtimes \mathcal{L}_{2}$ of $\ $two BL-algebras $\mathcal{L}_{1}$ and $\mathcal{L}_{2}$ we must consider: \begin{equation*} \mathcal{L}_{1}\text{ a BL-chain with }2\text{ elements and }\mathcal{L}_{2} \text{ a BL-algebra with }2\text{ elements. } \end{equation*} Since there is only one BL-algebra with $2$ elements and it is a chain, we obtain the BL-algebra \begin{equation*} Id(\mathbb{Z}_{2})\boxtimes Id(\mathbb{Z}_{2}), \end{equation*} which is a chain.
We deduce that \begin{equation*} \left\vert \mathcal{MV}_{3}\right\vert =\pi (3)+1\text{ and }\left\vert \mathcal{BL}_{3}\right\vert =\left\vert \mathcal{MV}_{3}\right\vert +1\cdot \left\vert \mathcal{BL}_{2}\right\vert =\pi (3)+\pi (2)+2=2 \end{equation*} \begin{equation*} \left\vert \mathcal{MV}_{3}(c)\right\vert =1\text{ and }\left\vert \mathcal{ BL}_{3}(c)\right\vert =\left\vert \mathcal{MV}_{3}(c)\right\vert +1=1+1=2. \end{equation*} We remark that $\left\vert \mathcal{BL}_{3}\right\vert =\left\vert \mathcal{ MV}_{3}\right\vert +\left\vert \mathcal{BL}_{2}\right\vert .$
\textbf{Case }$n=4.$
Using Proposition \ref{Remark_4}, to generate a BL-algebra with $4$ elements as the ordinal product $\mathcal{L}_{1}\boxtimes \mathcal{L}_{2}$ of two BL-algebras $\mathcal{L}_{1}$ and $\mathcal{L}_{2}$ we must consider: \begin{equation*} \mathcal{L}_{1}\text{ is a BL-chain with }2\text{ elements and }\mathcal{L} _{2}\text{ is a BL-algebra with }3\text{ elements, } \end{equation*} \begin{equation*} \mathcal{L}_{1}\text{ is a BL-chain with }3\text{ elements and }\mathcal{L} _{2}\text{ is a BL-algebra with }2\text{ elements. } \end{equation*} We obtain the following BL-algebras: \begin{equation*} Id(\mathbb{Z}_{2})\boxtimes Id(\mathbb{Z}_{4})\text{ and }Id(\mathbb{Z} _{2})\boxtimes (Id(\mathbb{Z}_{2})\boxtimes Id(\mathbb{Z}_{2}))\text{ } \end{equation*} and \begin{equation*} Id(\mathbb{Z}_{4})\boxtimes Id(\mathbb{Z}_{2})\text{ and }(Id(\mathbb{Z} _{2})\boxtimes Id(\mathbb{Z}_{2}))\boxtimes Id(\mathbb{Z}_{2}). \end{equation*}
Since $\boxtimes $ is associative, we obtain 3 BL-algebras which are chains, from Remark \ref{Remark_3}.
We deduce that \begin{equation*} \left\vert \mathcal{MV}_{4}\right\vert =\pi (4)+1\text{ } \end{equation*} \begin{equation*} \left\vert \mathcal{BL}_{4}\right\vert =\left\vert \mathcal{MV} _{4}\right\vert +1\cdot \left\vert \mathcal{BL}_{3}\right\vert +2\cdot \left\vert \mathcal{BL}_{2}\right\vert -1=\pi (4)+\pi (3)+2\cdot \pi (2)+4=5 \end{equation*} \begin{equation*} \left\vert \mathcal{MV}_{4}(c)\right\vert =1\text{ and }\left\vert \mathcal{ BL}_{4}(c)\right\vert =\left\vert \mathcal{MV}_{4}(c)\right\vert +3=1+3=4. \end{equation*}
We remark that $\left\vert \mathcal{BL}_{4}\right\vert =\left\vert \mathcal{ MV}_{4}\right\vert +\left\vert \mathcal{BL}_{3}\right\vert +\left\vert \mathcal{BL}_{2}\right\vert .$
\textbf{Case }$n=5.$
To generate a BL-algebra with $5$ elements as the ordinal product $\mathcal{L}_{1}\boxtimes \mathcal{L}_{2}$ of two BL-algebras $\mathcal{L}_{1}$ and $\mathcal{L}_{2}$ we must consider: \begin{equation*} \mathcal{L}_{1}\text{ is a BL-chain with }2\text{ elements and }\mathcal{L} _{2}\text{ is a BL-algebra with }4\text{ elements, } \end{equation*} \begin{equation*} \mathcal{L}_{1}\text{ is a BL-chain with }4\text{ elements and }\mathcal{L} _{2}\text{ is a BL-algebra with }2\text{ elements, } \end{equation*} \begin{equation*} \mathcal{L}_{1}\text{ is a BL-chain with }3\text{ elements and }\mathcal{L} _{2}\text{ is a BL-algebra with }3\text{ elements.} \end{equation*} We obtain the following BL-algebras: \begin{eqnarray*} &&Id(\mathbb{Z}_{2})\boxtimes Id(\mathbb{Z}_{8}),\text{ }Id(\mathbb{Z} _{2})\boxtimes Id(\mathbb{Z}_{2}\times \mathbb{Z}_{2}),\text{ }Id(\mathbb{Z} _{2})\boxtimes (Id(\mathbb{Z}_{2})\boxtimes Id(\mathbb{Z}_{4})), \\ &&Id(\mathbb{Z}_{2})\boxtimes \lbrack Id(\mathbb{Z}_{4})\boxtimes Id(\mathbb{ Z}_{2})]\text{ and }Id(\mathbb{Z}_{2})\boxtimes \lbrack Id(\mathbb{Z} _{2})\boxtimes (Id(\mathbb{Z}_{2})\boxtimes Id(\mathbb{Z}_{2}))] \end{eqnarray*}
and \begin{eqnarray*} &&Id(\mathbb{Z}_{8})\boxtimes Id(\mathbb{Z}_{2}),[Id(\mathbb{Z} _{2})\boxtimes Id(\mathbb{Z}_{4})]\boxtimes Id(\mathbb{Z}_{2}), \\ &&[Id(\mathbb{Z}_{4})\boxtimes Id(\mathbb{Z}_{2})]\boxtimes Id(\mathbb{Z} _{2})\text{ and }[Id(\mathbb{Z}_{2})\boxtimes (Id(\mathbb{Z}_{2})\boxtimes Id(\mathbb{Z}_{2}))]\boxtimes Id(\mathbb{Z}_{2}) \end{eqnarray*}
and \begin{eqnarray*} &&Id(\mathbb{Z}_{4})\boxtimes Id(\mathbb{Z}_{4}),\text{ }[Id(\mathbb{Z} _{2})\boxtimes Id(\mathbb{Z}_{2})]\boxtimes \lbrack Id(\mathbb{Z} _{2})\boxtimes Id(\mathbb{Z}_{2})], \\ &&Id(\mathbb{Z}_{4})\boxtimes \lbrack Id(\mathbb{Z}_{2})\boxtimes Id(\mathbb{ Z}_{2})]\text{ and }[Id(\mathbb{Z}_{2})\boxtimes Id(\mathbb{Z} _{2})]\boxtimes Id(\mathbb{Z}_{4}). \end{eqnarray*}
Since $\boxtimes $ is associative, we have $Id(\mathbb{Z}_{2})\boxtimes \lbrack Id(\mathbb{Z}_{4})\boxtimes Id(\mathbb{Z}_{2})]=[Id(\mathbb{Z}_{2})\boxtimes Id(\mathbb{Z}_{4})]\boxtimes Id(\mathbb{Z}_{2}),$ $Id(\mathbb{Z}_{2})\boxtimes \lbrack Id(\mathbb{Z}_{2})\boxtimes (Id(\mathbb{Z}_{2})\boxtimes Id(\mathbb{Z}_{2}))]=[Id(\mathbb{Z}_{2})\boxtimes (Id(\mathbb{Z}_{2})\boxtimes Id(\mathbb{Z}_{2}))]\boxtimes Id(\mathbb{Z}_{2})=[Id(\mathbb{Z}_{2})\boxtimes Id(\mathbb{Z}_{2})]\boxtimes \lbrack Id(\mathbb{Z}_{2})\boxtimes Id(\mathbb{Z}_{2})],$ $[Id(\mathbb{Z}_{4})\boxtimes Id(\mathbb{Z}_{2})]\boxtimes Id(\mathbb{Z}_{2})=Id(\mathbb{Z}_{4})\boxtimes \lbrack Id(\mathbb{Z}_{2})\boxtimes Id(\mathbb{Z}_{2})]$ and $Id(\mathbb{Z}_{2})\boxtimes (Id(\mathbb{Z}_{2})\boxtimes Id(\mathbb{Z}_{4}))=[Id(\mathbb{Z}_{2})\boxtimes Id(\mathbb{Z}_{2})]\boxtimes Id(\mathbb{Z}_{4}).$
We obtain 8 BL-algebras of which 7 are chains, from Remark \ref{Remark_3}.
We deduce that \begin{equation*} \left\vert \mathcal{MV}_{5}\right\vert =\pi (5)+1=1\text{ and }\left\vert \mathcal{BL}_{5}\right\vert =9=\left\vert \mathcal{MV}_{5}\right\vert +\left\vert \mathcal{BL}_{4}\right\vert +\left\vert \mathcal{BL} _{3}\right\vert +\left\vert \mathcal{BL}_{2}\right\vert \end{equation*} \begin{equation*} \left\vert \mathcal{MV}_{5}(c)\right\vert =1\text{ and }\left\vert \mathcal{ BL}_{5}(c)\right\vert =8. \end{equation*} \end{proof}
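As a consistency check, since $\pi (2)=\pi (3)=\pi (5)=0$ and $\pi (4)=1$, the formulas in Theorem \ref{Theorem_4} give
\begin{equation*}
\left\vert \mathcal{BL}_{2}\right\vert =1,\qquad \left\vert \mathcal{BL}_{3}\right\vert =2,\qquad \left\vert \mathcal{BL}_{4}\right\vert =5,\qquad \left\vert \mathcal{BL}_{5}\right\vert =9,
\end{equation*}
in agreement with the counts obtained in the proof above and with Table 3.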
\textbf{Table 2} presents a brief summary of the structure of the BL-algebras $L$ with $2\leq n\leq 5$ elements:
\begin{equation*} \text{\textbf{Table 2:}} \end{equation*}
\textbf{\ } \begin{tabular}{lll} $\left\vert L\right\vert \mathbf{=n}$ & \textbf{Nr of BL-alg } & \textbf{ Structure } \\ $n=2$ & $1$ & $\left\{ Id(\mathbb{Z}_{2})\text{ (chain, MV)}\right. $ \\ $n=3$ & $2$ & $\left\{ \begin{array}{c} Id(\mathbb{Z}_{4})\text{ (chain, MV)} \\ Id(\mathbb{Z}_{2})\boxtimes Id(\mathbb{Z}_{2})\text{ (chain)} \end{array} \right. $ \\ $n=4$ & $5$ & $\left\{ \begin{array}{c} Id(\mathbb{Z}_{8})\text{ (chain, MV)} \\ Id(\mathbb{Z}_{2}\times \mathbb{Z}_{2})\text{ (MV)} \\ Id(\mathbb{Z}_{2})\boxtimes Id(\mathbb{Z}_{4})\text{ (chain)} \\ Id(\mathbb{Z}_{4})\boxtimes Id(\mathbb{Z}_{2})\text{ \ (chain)} \\ Id(\mathbb{Z}_{2})\boxtimes (Id(\mathbb{Z}_{2})\boxtimes Id(\mathbb{Z}_{2})) \text{ (chain)} \end{array} \right. $ \\ $n=5$ & $9$ & $\left\{ \begin{array}{c} Id(\mathbb{Z}_{16})\text{ (chain, MV)} \\ Id(\mathbb{Z}_{2})\boxtimes Id(\mathbb{Z}_{8})\text{ (chain)} \\ Id(\mathbb{Z}_{2})\boxtimes Id(\mathbb{Z}_{2}\times \mathbb{Z}_{2}) \\ Id(\mathbb{Z}_{2})\boxtimes (Id(\mathbb{Z}_{2})\boxtimes Id(\mathbb{Z}_{4})) \text{ (chain)} \\ Id(\mathbb{Z}_{2})\boxtimes (Id(\mathbb{Z}_{4})\boxtimes Id(\mathbb{Z}_{2})) \text{ (chain)} \\ Id(\mathbb{Z}_{2})\boxtimes (Id(\mathbb{Z}_{2})\boxtimes (Id(\mathbb{Z} _{2})\boxtimes Id(\mathbb{Z}_{2})))\text{ (chain)} \\ Id(\mathbb{Z}_{8})\boxtimes Id(\mathbb{Z}_{2})\text{ (chain)} \\ (Id(\mathbb{Z}_{4})\boxtimes Id(\mathbb{Z}_{2}))\boxtimes Id(\mathbb{Z}_{2}) \text{ (chain)} \\ Id(\mathbb{Z}_{4})\boxtimes Id(\mathbb{Z}_{4})\text{ (chain)} \end{array} \right. $ \end{tabular}
Finally, \textbf{Table 3 }presents a summary of the numbers of MV-algebras, MV-chains, BL-algebras and BL-chains with $n\leq 5$ elements:
\begin{equation*} \text{\textbf{Table 3}} \end{equation*}
\textbf{\ \ } \begin{tabular}{lllll} & $n=2$ & $n=3$ & $n=4$ & $n=5$ \\ MV-algebras & $1$ & $1$ & $2$ & $1$ \\ MV-chains & $1$ & $1$ & $1$ & $1$ \\ BL-algebras & $1$ & $2$ & $5$ & $9$ \\ BL-chains & $1$ & $2$ & $4$ & $8$ \end{tabular} .
Cristina Flaut
{\small Faculty of Mathematics and Computer Science, Ovidius University,}
{\small Bd. Mamaia 124, 900527, Constan\c{t}a, Rom\^{a}nia,}
{\small \ http://www.univ-ovidius.ro/math/}
{\small e-mail: cflaut@univ-ovidius.ro; cristina\_flaut@yahoo.com}
Dana Piciu
{\small Faculty of \ Science, University of Craiova, }
{\small A.I. Cuza Street, 13, 200585, Craiova, Romania,}
{\small http://www.math.ucv.ro/dep\_mate/}
{\small e-mail: dana.piciu@edu.ucv.ro, piciudanamarina@yahoo.com}
\end{document}
\begin{document}
\title{Some applications of smooth bilinear forms with Kloosterman
sums}
\author{Valentin Blomer} \address{Mathematisches Institut, Universit\"at G\"ottingen,
Bunsenstr. 3-5, 37073 G\"ottingen, Germany} \email{vblomer@math.uni-goettingen.de}
\author{\'Etienne Fouvry} \address{Laboratoire de Math\'ematiques d'Orsay, Universit\' e Paris--Saclay \\
91405 Orsay \\France} \email{etienne.fouvry@math.u-psud.fr}
\author{Emmanuel Kowalski} \address{ETH Z\"urich -- D-MATH\\
R\"amistrasse 101\\
CH-8092 Z\"urich\\
Switzerland} \email{kowalski@math.ethz.ch}
\author{Philippe Michel} \address{EPF Lausanne, Chaire TAN, Station 8, CH-1015
Lausanne, Switzerland } \email{philippe.michel@epfl.ch}
\author{Djordje Mili\'cevi\'c}
\address{Department of Mathematics, Bryn Mawr College, 101 North Merion Avenue, Bryn Mawr, PA 19010-2899, U.S.A.}
\curraddr{Max-Planck-Institut f\"ur Mathematik, Vivatsgasse 7, D-53111 Bonn, Germany}
\email{dmilicevic@brynmawr.edu}
\thanks{V.\ B.\ was partially supported by the
Volkswagen Foundation. \'E.\ F.\ thanks ETH Z\"urich and EPF Lausanne
for financial
support. Ph. M. was partially supported by the SNF (grant
200021-137488) and the ERC (Advanced Research Grant 228304). V.\ B.,
Ph.\ M.\ and E.\ K.\ were also partially supported by a DFG-SNF lead
agency program grant (grant 200021L\_153647). D. M. was partially
supported by the NSF (Grant
DMS-1503629) and ARC (through Grant DP130100674).}
\subjclass[2010]{11M06, 11F11, 11L05, 11L40, 11F72, 11T23}
\keywords{$L$-functions, modular forms, shifted convolution sums,
Kloosterman sums, incomplete exponential sums}
\begin{abstract}
We revisit a recent bound of I. Shparlinski and T. P. Zhang on
bilinear forms with Kloosterman sums, and prove an extension for
correlation sums of Kloosterman sums against Fourier coefficients of
modular forms. We use these bounds to improve on earlier results on
sums of Kloosterman sums along the primes and on the error term of
the fourth moment of Dirichlet $L$-functions. \end{abstract}
\maketitle
\setcounter{tocdepth}{1}
\section{Statement of results}\label{intro}
\subsection{Preliminaries}
This note is motivated by a recent result of I. E. Shparlinski and T. P. Zhang \cite{SZ} concerning bilinear forms with Kloosterman sums. Given a prime $q$ and $m\in{\mathbf{F}_q}$, let \[ \Kl(m;q):=\frac{1}{\sqrt{q}}\sum_{\substack{x,y\in{\mathbf{F}^\times_q}\\ xy=1}}e_q(y+mx) \] denote the normalized Kloosterman sum, where $e_q (x) =\exp( 2 \pi i x /q)$. Shparlinski and Zhang (\cite[Theorem 3.1]{SZ}) proved the following theorem.
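As a numerical sanity check (an editorial illustration, not part of the paper), the normalized Kloosterman sums can be computed directly from the definition; for any small prime one can then observe the Weil bound $|\Kl(m;q)|\leqslant 2$ and the fact that the sums are real. The prime $q=13$ below is an arbitrary choice.

```python
import cmath

def kloosterman(m, q):
    """Normalized Kloosterman sum Kl(m;q) = q^{-1/2} * sum_{xy=1} e_q(y + m*x)."""
    total = 0.0 + 0.0j
    for y in range(1, q):
        x = pow(y, -1, q)  # x = y^{-1} mod q, so that xy = 1 in F_q^*
        total += cmath.exp(2j * cmath.pi * ((y + m * x) % q) / q)
    return total / q ** 0.5

q = 13  # small illustrative prime
values = [kloosterman(m, q) for m in range(1, q)]
assert all(abs(v) <= 2 + 1e-9 for v in values)   # Weil bound |Kl(m;q)| <= 2
assert all(abs(v.imag) < 1e-9 for v in values)   # Kl(m;q) is real
```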
\begin{theorem}[Shparlinski--Zhang]\label{thmSZ}
Let $q$ be a prime number and let $\mathcal{M},\mathcal{N}\subset [1,q-1]$ be
intervals of lengths $M,N\geqslant 1$. Then we have \begin{equation}\label{SZbound1}
\sumsum_{m\in{\mathcal{M}},n\in\mathcal{N}}\Kl(mn;q)\ll_{\varepsilon} q^{\varepsilon}\Bigl( q^{1/2}+\frac{MN}{q^{1/2}} \Bigr) \end{equation} for any $\varepsilon > 0$, where the implied constant depends only on $\varepsilon$. \end{theorem}
In light of the Weil bound for Kloosterman sums $|\Kl(m;q)|\leqslant 2$, the estimate \eqref{SZbound1} is non-trivial as long as $MN$ is a bit larger than $q^{1/2}$. On the other hand, if $M$ or $N$ is close to $q$, other methods (e.g.\ the completion method) become more efficient. In particular, the restriction that $M$ and $N$ are $\leqslant q$ is not really restrictive for applications.
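The cancellation expressed by \eqref{SZbound1} is easy to observe numerically. The following sketch (our own illustration; the prime $q=101$ and interval lengths $M=N=10$ are arbitrary) compares the incomplete bilinear sum with the trivial bound $2MN$ coming from the termwise Weil bound.

```python
import cmath

def kloosterman(m, q):
    # normalized Kloosterman sum, computed directly from the definition
    return sum(cmath.exp(2j * cmath.pi * ((y + m * pow(y, -1, q)) % q) / q)
               for y in range(1, q)) / q ** 0.5

q, M, N = 101, 10, 10   # illustrative prime and interval lengths
S = sum(kloosterman(m * n, q) for m in range(1, M + 1) for n in range(1, N + 1))

trivial = 2 * M * N                       # termwise Weil bound
sz_shape = q ** 0.5 + M * N / q ** 0.5    # shape of the Shparlinski--Zhang bound
assert abs(S) < 0.75 * trivial            # visible cancellation beyond the trivial bound
print(abs(S), trivial, sz_shape)
```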
The aim of this paper is two-fold. On the one hand, we put Theorem \ref{thmSZ} into a slightly more general context in Propositions~\ref{propSZ} and \ref{propIScuspidal}; viewing it as a correlation estimate for Kloosterman sums and a divisor function (which itself is a Fourier coefficient of an Eisenstein series), it turns out to be a consequence of a version of the Voronoi summation formula. On the other hand, we give two applications of independent interest to the fourth moment of Dirichlet $L$-functions in Theorem~\ref{472} and sums of Kloosterman sums over primes in Theorem~\ref{Klpq}; these applications are discussed in Subsection \ref{appl}.
\subsection{Variations on a theme} Our first result is a smoothed version of the bound \eqref{SZbound1}. To state it, we use the following class of smoothing functions. For a modulus $q\geqslant 1$ and a parameter $Q\geqslant 1$, we will consider functions satisfying the following conditions: \begin{equation}\label{Wbound} \begin{split}
W\colon [0,+\infty[\to\mathbf{C}\text{ is smooth, }
\mathrm{Supp}(W)\subset [1/2,2],\\
W^{(j)}(x)\ll_{j, \varepsilon} \bigl(q^{\varepsilon}Q\bigr)^j\,
\text{ for any $x\geqslant 0$, $j\geqslant 0$ and $\varepsilon>0$.} \end{split} \end{equation}
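For concreteness, one standard way to manufacture a function in the class \eqref{Wbound} with $Q=1$ is a fixed $C^\infty$ bump supported in $[1/2,2]$; the following construction is our illustration, not one used in the paper (for a fixed bump, all derivative bounds hold with constants depending only on $j$).

```python
import math

def bump(x):
    """A C^infinity function supported in [1/2, 2]: one admissible choice of W
    in (Wbound) with Q = 1."""
    if x <= 0.5 or x >= 2.0:
        return 0.0
    # classical smooth cutoff exp(-1/((x - 1/2)(2 - x)))
    return math.exp(-1.0 / ((x - 0.5) * (2.0 - x)))

assert bump(0.4) == 0.0 and bump(2.5) == 0.0   # support contained in [1/2, 2]
assert bump(1.0) > 0.0                          # nontrivial inside the support
```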
\begin{proposition}\label{propSZ} Let $q$ be a prime number and let
$Q\geqslant 1$ be a real number. Let $W_1,W_2$ be functions
satisfying~\eqref{Wbound}. For any $M,N\geqslant 1$ and any integer $a$
coprime with $q$, we have
\begin{equation}\label{SZboundsmooth} \sumsum_{m,n}W_1\Bigl(\frac{m}M\Bigr)W_2\Bigl(\frac{n}N\Bigr)\Kl(amn;q)\ll_\varepsilon (qQ)^{\varepsilon}\, Q^2\,\Bigl(q^{1/2}+\frac{MN}{q^{1/2}}\Bigr). \end{equation} Furthermore, if $W_3$ also satisfies~\eqref{Wbound}, then for any $Y\geqslant 1$, we have \begin{equation}\label{SZboundsmooth1}
\sumsum_{m,n}W_1\Bigl(\frac{m}M\Bigr)
W_2\Bigl(\frac{n}N\Bigr)W_3\Bigl( \frac{mn}{Y}\Bigr)\Kl(amn;q)\ll_\varepsilon
(qQ)^{\varepsilon}\, Q^2\,\Bigl(q^{1/2}+\frac{MN}{q^{1/2}}\Bigr). \end{equation} In both cases, the implied constant depends only on $\varepsilon$. \end{proposition}
The inequalities \eqref{SZboundsmooth} and \eqref{SZboundsmooth1} could be easily deduced from the result of Shparlinski and Zhang by summation by parts with respect to the variables $m$ and $n$. In \S \ref{par2}, we will give an alternative proof based on~\cite[Prop.~2.2]{FKMd3}. The $Q$-dependence in Proposition~\ref{propSZ} is presented in a compact form well suited for our applications but it is not fully optimized otherwise (in particular, for Theorem~\ref{472} we will be using $Q=q^{\varepsilon}$); our proof actually yields a better $Q$-dependence in some other ranges.
We can view the bounds \eqref{SZboundsmooth} and \eqref{SZboundsmooth1} essentially as sums over a single variable weighted by the divisor function $d$. The advantage of our proof of Proposition \ref{propSZ} is that it provides naturally an automorphic generalization, where the divisor function is replaced with Fourier coefficients of modular forms.
\begin{proposition}\label{propIScuspidal} Let $(\lambda_f(n))_{n\geqslant 1}$ be
the Hecke eigenvalues of a holomorphic cuspidal Hecke eigenform $f$
of level $1$, normalized so that $|\lambda_f(n)|\leqslant d(n)$. Let $q$ be a
prime number, and let $W$ be a function satisfying~\eqref{Wbound}
with $Q=1$. Let $a$ be an integer coprime to $q$. For any $N\geqslant 1$
and any $\varepsilon>0$, we have \begin{equation}\label{SZcusp}
\sum_{n\geqslant 1}\lambda_f(n)\Kl(an;q)
W\Bigl(\frac{n}N\Bigr)\ll_{\varepsilon, f} (qN)^{\varepsilon}\Bigl(q^{1/2}+\frac{N}{q^{1/2}}\Bigr) \end{equation} where the implied constant depends only on $f$ and $\varepsilon$. \end{proposition}
\begin{remark}
This is by no means the most general statement that may be proved
along these lines. \end{remark}
As pointed out in \cite{SZ}, the estimates \eqref{SZbound1} and \eqref{SZboundsmooth} are significant improvements of the bound \begin{equation}\label{FKMbound1}
\sumsum_{m,n}W_1\Bigl(\frac{m}M\Bigr)W_2\Bigl(\frac{n}N\Bigr)\Kl(amn;q)\ll_{\varepsilon, Q}
q^{\varepsilon}MN\Bigl(1+\frac{q}{MN}\Bigr)^{1/2}q^{-1/8}, \end{equation} and likewise the estimate \eqref{SZcusp} improves significantly over \begin{equation}\label{FKMbound2} \sum_{n\geqslant 1}\lambda_f(n)\Kl(an;q)W\Bigl(\frac{n}N\Bigr)\ll_{\varepsilon,Q}
q^{\varepsilon}N\Bigl(1+\frac{q}{N}\Bigr)^{1/2}q^{-1/8}, \end{equation} both of which were obtained by Fouvry, Kowalski and Michel as special cases of \cite[Thm.~1.16]{FKM2} and \cite[Thm.~1.2]{FKM1}.
\subsection{Applications}\label{appl} The bounds \eqref{FKMbound1} and \eqref{FKMbound2} have been applied recently in a number of problems, and the bounds \eqref{SZbound1} and \eqref{SZboundsmooth} lead to further improvements. The main source for these improvements is the new input of Proposition \ref{propSZ}, but a bit of extra work is necessary.
As a first application, we can improve our work on the error term for the fourth moment of Dirichlet $L$-functions $L(s, \chi)$ of characters $\chi$ to a prime modulus $q$ (\cite[Theorem 1.1]{445}).
\begin{theorem}\label{472}
There exists a polynomial $P_4\in \mathbf{R}[X]$ of degree $4$, such that \[
\frac{1}{q-1}\sum_{\chi\mods q}|L(\chi,1/2)|^4=P_4(\log q)+O(q^{-1/20+\varepsilon}) \] for all primes $q$, where the implied constant depends only on $\varepsilon>0$. If the Ramanujan--Petersson conjecture holds for Fourier coefficients of Hecke--Maa{\ss} forms of level $1$, then the exponent $1/20$ can be replaced by $1/16$. \end{theorem}
\begin{remark} In \cite[Theorem 1.1]{445}, the exponents were
respectively $1/32$ (unconditionally) and $1/24$ (assuming the
Ramanujan--Petersson conjecture). The first breakthrough in this
respect is due to M.\ Young \cite{MY} who obtained an asymptotic
formula with exponents $5/512$ (resp.\ $1/80$). \end{remark}
\begin{remark}
The proof of Theorem \ref{472} follows the same lines as \cite[\S
6.3]{445}, except that instead of the bound \cite [(5.5)]{445}
(i.e.\ \eqref{FKMbound1} above) we use Proposition \ref{propSZ}. It
is of some interest to record here in outline how this improved
exponent arises. The problem of the fourth moment leads to
evaluating non-trivially the shifted convolution type sum \begin{equation}\label{eqquadrismooth} \sumsum_\stacksum{m\asymp M,n\asymp N}{m\equiv n\mods q}d(m)d(n)\approx \sumsum_\stacksum{m_1,m_2,n_1,n_2}{m_1m_2\equiv n_1n_2\mods q}1 \end{equation} with $d$ the usual divisor function and with $MN=M_1M_2N_1N_2\approx q^2$. The spectral theory of automorphic forms provides a good error term when $M$ and $N$ are relatively close in the logarithmic scale. Otherwise, assuming that $N=N_1N_2\geqslant M=M_1M_2$, we apply the Poisson summation formula to both variables $n_1$ and $n_2$ (equivalently, the Voronoi summation formula applied to the variable $n=n_1n_2$), getting two variables of dual size $n_1^*\sim q/N_1$ and $n_2^*\sim q/N_2$ and a smooth quadrilinear sum of Kloosterman sums \[ \sumsum_{m_1,m_2,n^*_1,n^*_2}\Kl(m_1m_2n_1^*n_2^*;q), \] which is evaluated by various means, in particular using the smooth bilinear sum bound \eqref{SZboundsmooth}. In our specific case, the bound \eqref{SZboundsmooth} amounts to applying the Poisson formula to two of the four variables $m_1,m_2,n^*_1,n^*_2$. This leads back to a sum of the type \eqref{eqquadrismooth}, which is then bounded trivially. This argument is not circular, and allows for an improvement, because we (implicitly) apply the process to variables different from the ones we started from (for instance to $m_1$ and $n_1^*$ instead of $n_1^*$ and $n_2^*$). \end{remark}
Our second application is an improvement of the first bound in \cite[Cor.~1.13] {FKM2} for Klooster\-man sums over primes in short intervals:
\begin{theorem}\label{Klpq} Let $q$ be a prime number.
Let $Q\geqslant 1$ be a parameter and let $W$ be a function
satisfying~\eqref{Wbound}. Then for every $X$ such that
$2\leqslant X\leqslant q$ and every $\varepsilon >0$, we have
\begin{equation}\label{evening2}
\sum_{p\text{ prime }}
W \Bigl( \frac{p}{X}\Bigr) \Kl (p;q) \ll_{\varepsilon}
q^{1/4+\varepsilon} Q^{1/2} X^{2/3}.
\end{equation}
\par
In addition, for every prime $q$, every $X$ such that $2\leqslant X\leqslant q$
and every $\varepsilon >0$, we have
\begin{equation}\label{evening1}
\sum_\stacksum{p\leqslant X}{p\text{ prime}}\Kl(p;q)\ll_{\varepsilon} q^{1/6+\varepsilon}\, X^{7/9}.
\end{equation}
In both cases, the implicit constant depends only on $\varepsilon$. \end{theorem}
\begin{remark} The range where these bounds are non-trivial is the
same as that in~\cite[Cor.~1.13]{FKM2}, namely the length of
summation $X$ should be greater than $q^{3/4 +\varepsilon}$ if $Q$
is fixed. The improvement therefore lies in the greater
cancellation in this allowed range. For instance, when $X=q$, we
gain a factor $q^{1/18 -\varepsilon}$ over the trivial bound for
the sum appearing in \eqref{evening1} instead of
$q^{1/48 -\varepsilon}$ in \cite [Corollary 1.13]{FKM2}.
\end{remark}
\subsection*{Acknowledgement.} We would like to thank the referee for very useful suggestions that improved the presentation of the paper.
\section{Correlation sums of Kloosterman sums and divisor-like
functions}\label{par2}
In this section, we revisit Theorem \ref{thmSZ} and establish Proposition \ref{propSZ}. The idea behind the proof of Theorem \ref{thmSZ} is that after applying the completion method twice over the $m$ and $n$ variables, the Kloosterman sum $\Kl(amn;q)$ is transformed into the Dirac type function $q^{1/2}\delta_{mn\equiv a\mods q}$, and taking the congruence condition into account one saves (in the most favourable situation) a factor $q^{1/2}/q=q^{-1/2}$ over the trivial bound.
In our smoothed setting, the completion method is replaced by two applications of the Poisson summation formula or more precisely by a single application of the \emph{tempered Voronoi summation formula} of Deshouillers and Iwaniec, in the form established in \cite[Prop.~2.2]{FKMd3}.
Let $q$ be a prime number, and let $K\colon\mathbf{Z}\to \mathbf{C}$ be a $q$-periodic function. The \emph{normalized Fourier transform} of $K$ is the $q$-periodic function on $\mathbf{Z}$ defined by \begin{equation*}
\fourier{K}(h) = \frac{1}{\sqrt{q}}\sum_{n\bmod q} K(n) e_q(
{hn}) \end{equation*} and the \emph{Voronoi transform} of $K$ is the $q$-periodic function on $\mathbf{Z}$ defined by \[ \bessel{K}(n) = \frac{1}{\sqrt{q}}\sum_{\substack{h\bmod q\\(h,q) =1}} \fourier{K}(h) e_q ({\overline h n} ). \]
\begin{proposition}[Tempered Voronoi formula modulo
primes]\label{Voronoigeneral0}
Let $q$ be a prime number, let $K\, :\ \mathbf{Z} \longrightarrow \mathbf{C}$ be a
$q$-periodic function, and let $G$ be a smooth function on $\mathbf{R}^2$
with compact support and Fourier transform denoted by $\widehat G$. We have \begin{equation}\label{voronoi} \sumsum_{m,n\in\mathbf{Z}} K(mn) G(m,n)=\frac{\fourier{K}(0)}{\sqrt{q}} \ \sumsum_{m,n\in\mathbf{Z}} G(m,n)+ \frac{1}{q}\ \sumsum_{m,n\in\mathbf{Z}} \bessel{K}(mn)\fourier{G}\Bigl(\frac{m}q,\frac nq\Bigr). \end{equation} \end{proposition}
The key point is that when $K$ is a (multiplicatively shifted) Kloosterman sum, then its Voronoi transform $\bessel{K}$ is a normalized delta-function:
\begin{lemma}\label{immediate} For $(a,q)=1$ and $K(n)=\Kl(an;q)$ one has \[ \widehat K(h)=\begin{cases}0&\text{if $q\mid h$},\\ e_q(-a\ov h) &\text{if $q\nmid h$,} \end{cases} \] and \[ \bessel{K}(n)= \begin{cases}\displaystyle{\frac{q-1}{q^{1/2}}}&\text{if $n\equiv a \bmod q$},\\
\displaystyle{-\frac{1}{q^{1/2}}}&\text{otherwise}. \end{cases} \] \end{lemma}
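The identities of Lemma \ref{immediate} are easy to check numerically. The following sketch (our own; the values $q=13$ and $a=5$ are arbitrary choices with $(a,q)=1$) computes the normalized Fourier and Voronoi transforms from their definitions and compares them with the closed forms of the lemma.

```python
import cmath

q, a = 13, 5  # illustrative prime and residue coprime to q

def e_q(t):
    return cmath.exp(2j * cmath.pi * (t % q) / q)

def Kl(m):
    # normalized Kloosterman sum Kl(m;q)
    return sum(e_q(y + m * pow(y, -1, q)) for y in range(1, q)) / q ** 0.5

K = [Kl(a * n) for n in range(q)]                        # K(n) = Kl(an;q)
Khat = [sum(K[n] * e_q(h * n) for n in range(q)) / q ** 0.5
        for h in range(q)]                               # normalized Fourier transform
Kcheck = [sum(Khat[h] * e_q(pow(h, -1, q) * n) for h in range(1, q)) / q ** 0.5
          for n in range(q)]                             # Voronoi transform

assert abs(Khat[0]) < 1e-9                               # hat K vanishes at h = 0
assert all(abs(Khat[h] - e_q(-a * pow(h, -1, q))) < 1e-9 for h in range(1, q))
assert all(abs(Kcheck[n] - ((q - 1) / q ** 0.5 if n % q == a else -1 / q ** 0.5)) < 1e-9
           for n in range(q))                            # delta-function shape
```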
This lemma is proved by an immediate computation. We now begin with the proof of \eqref{SZboundsmooth}. Let $q$ be a prime and let $W$ be a function satisfying~(\ref{Wbound}). By integration by parts, we then have \[ \widehat W (t) \ll_{j,\varepsilon} \min \bigl( 1, q^{j\varepsilon}\vert t/Q\vert^{-j}\bigr) \] for $t\in \mathbf{R}$ and for any integer $j\geqslant 0$ and $\varepsilon >0$, where the implied constant depends only on $j$ and $\varepsilon$.
Defining $G(m,n)=W_1(m/M) W_2(n/N)$, we deduce that for any $A$ and any $\varepsilon>0$, we have \begin{equation}\label{G}
\widehat G\Bigl(\frac{m}q,\frac{n}q\Bigr)=M\widehat W_1\Bigl(\frac{mM}q\Bigr)N\widehat W_2\Bigl(\frac{nN}q\Bigr)\ll_{\varepsilon,A} q^\varepsilon MN\Bigl(1+\frac{\vert m\vert M}{qQ}\Bigr)^{-A}\Bigl(1+\frac{\vert n\vert N}{qQ}\Bigr)^{-A}. \end{equation} We next apply the Voronoi formula, Proposition~\ref{Voronoigeneral0}, with $K(n)=\Kl(an;q)$ to the left-hand side of \eqref{SZboundsmooth}. The first term on the right-hand side of \eqref{voronoi} vanishes since $\fourier{K}(0) = 0$. By Lemma \ref{immediate} and \eqref{G}, the contribution of $mn \not\equiv a$ (mod $q$) in the second term is of order at most \begin{align*}
\frac{MN}{q^{3/2-\varepsilon}} \sum_{m, n\in\mathbf{Z}} \Bigl(1+\frac{\vert m\vert M}{qQ}\Bigr)^{-2}\Bigl(1+\frac{\vert n\vert N}{qQ}\Bigr)^{-2}& \ll \frac{MN}{q^{3/2-\varepsilon}}\Bigl(1 + \frac{qQ}{M}\Bigr) \Bigl(1 + \frac{qQ}{N}\Bigr) \\ &\ll q^{\varepsilon}\Bigl(\frac{MN}{q^{3/2}} + \frac{(M+N)Q}{q^{1/2}} + q^{1/2}Q^2\Bigr). \end{align*}
Similarly, the remaining terms $mn \equiv a$ (mod $q$) are, up to a constant, bounded by \[ q^\varepsilon\frac{MN}{q^{1/2}} \sum_{n \equiv a\, (\text{mod }q)} d(n) \Bigl(1 + \frac{n MN}{q^2Q^2}\Bigr)^{-2} \ll (q^2Q)^{\varepsilon} \left(\frac{MN}{q^{1/2}} + Q^2 q^{1/2}\right). \] This completes the proof of \eqref{SZboundsmooth}.
Next, we prove \eqref{SZboundsmooth1}. We may suppose that \begin{equation*} MN/8 <Y <8 MN, \end{equation*} since otherwise the sum of interest is empty. Then we see that for $M/2 < x<2M$ and $N/2<y<2N$, we have the inequalities
\[ \frac{\partial^{i+j} W_3(xy/Y)}{\partial x^i \, \partial y^j} \ll_{\varepsilon, i, j} (q^\varepsilon Q)^{i+j} M^{-i}N^{-j} \] for all non-negative integers $i, j$. Hence the function $ G(x,y) = W_1(x/M) W_2(y/N) W_3(xy/Y)$ satisfies the inequalities \[ \frac{\partial^{i+j} G(x,y)}{\partial x^i \partial y^j}\ll_{\varepsilon,i,j} (q^\varepsilon Q)^{i+j} x^{-i} y^{-j}, \] for $x$, $y >0$, $\varepsilon >0$ and integers $i, j \geqslant 0$. By repeated integration by parts of the definition of the Fourier transform \[ \widehat G (u,v)= \int_{-\infty}^{\infty} \int_{-\infty}^\infty G(x,y) e( -ux-vy) dx\, dy, \] we obtain the bound \[ \widehat G\Bigl( \frac{m}{q}, \frac{n}{q}\Bigr) \ll_{\varepsilon, A} q^\varepsilon MN \Bigl( 1 +\frac{\vert m\vert M}{qQ}\Bigr)^{-A}
\Bigl( 1 +\frac{\vert n\vert N}{qQ}\Bigr)^{-A} \] for any $A$ and any $\varepsilon > 0$, analogously to \eqref{G}. The end of the proof of \eqref{SZboundsmooth1} is now similar to \eqref{SZboundsmooth}.
For future reference we record the following bound for type II sums of Kloosterman sums \cite[Thm.~1.17]{FKM2}.
\begin{proposition}\label{proptypeII} Let $q$ be a prime number.
Let $1\leqslant M,N\leqslant q$ and $(\alpha_m)$, $(\beta_n)$ be sequences of
complex numbers supported in $[M,2M]$ and $[N,2N]$ respectively. Let
either $Q=1$ and $W$ be the constant function $1$, or $Q\geqslant 1$ and $W$
be a function satisfying~\eqref{Wbound}. Then, for every
$\varepsilon >0$, we have \[ \sumsum_{m,n}\alpha_m \beta_n \Kl (mn;q)W \Bigl(\frac{mn}{Y}\Bigr)\ll_\varepsilon \Vert \mathbf \alpha\Vert_2 \, \Vert \mathbf \beta \Vert_2\, (MN)^{1/2}\Bigl( \frac1M+ Q\frac{q^{1/2+
\varepsilon}}{N} \Bigr)^{1/2}. \] \end{proposition}
This is a special case of \cite[Thm.~1.17]{FKM2} when $W$ is the constant $1$. For smooth $W$, the same proof applies, except that we apply partial summation in \cite[(3.2)]{FKM2} if $m_1 \not= m_2$ to remove the weight $W(m_1n/Y) W(m_2n/Y)$; this produces a factor $Q$ which, after taking square roots, yields the above bound.
\section{Correlation sums of Kloosterman sums and Hecke eigenvalues}
In this section we prove Proposition \ref{propIScuspidal}. We replace the tempered Voronoi summation formula by the Voronoi summation formula for cusp forms, which we state in a form suited to our purpose.
\begin{proposition}[Voronoi summation formula for cusp forms with arithmetic weights modulo
primes]\label{prvoronoi}
Let $q $ be a prime. Let $W$ be a smooth function compactly supported in
$]0,\infty[$ and let $f$ be a holomorphic cuspidal Hecke eigenform
of level $1$ and weight $k$. Let $\varepsilon(f)=\pm 1$ denote the sign of
the functional equation of the Hecke $L$-function $L(f,s)$ and let
\[
\widetilde W(y)=\int_{0}^\infty W(u)\mathcal{J}_k(4\pi\sqrt{ uy})du, \] where \begin{equation*}
\mathcal{J}_k(u) =
2\pi i^kJ_{k-1}(u).
\end{equation*} Then, for any $q$-periodic arithmetic function $K\colon \mathbf{Z}\to\mathbf{C}$, we have \[ \sum_{n\geqslant 1}\lambda_f(n)K(n)W\Bigl(\frac nN\Bigr)= \frac{\widehat K(0)}{q^{1/2}}\sum_{n\geqslant 1} \lambda_f(n)W\Bigl(\frac nN\Bigr)+ \varepsilon(f)\frac{N}{q} \sum_{n\geqslant 1}\lambda_f(n)\widecheck K( n)\widetilde W\Bigl(\frac{nN}{q^2}\Bigr). \] In particular, for $a$ coprime to $q$, we have \begin{multline*}
\sum_{n\geqslant 1} \lambda_f(n)\Kl(an;q)W\Bigl(\frac nN\Bigr)=
\varepsilon(f)\frac{N}{q^{1/2}} \sum_{n\equiv a \mods q
}\lambda_f(n)\widetilde W\Bigl(\frac{nN}{q^2}\Bigr)
- \varepsilon(f)\frac{N}{q^{3/2}} \sum_{n\geqslant 1 }\lambda_f(n)\widetilde
W\Bigl(\frac{nN}{q^2}\Bigr).
\end{multline*} \end{proposition}
\begin{proof}
We expand $K(n)$ into additive characters \[ K(n)=\frac{1}{q^{1/2}}\sum_{a\mods q}\widehat K(a)e_q(-an) \] and apply the classical summation formula \[ \sum_{n\geqslant 1} \lambda_f(n)W\Bigl(\frac{n}N\Bigr)e\Bigl(-\frac{an}{q}\Bigr) = \varepsilon(f)\frac{N}{q} \sum_{n\geqslant
1}\lambda_f(n)e\Bigl(\frac{\overline{a}n}{q}\Bigr) \widetilde W\Bigl(\frac{Nn}{q^2}\Bigr), \] valid for all $N>0$ and all $a$ coprime to $q$ (\cite[Theorem A.4]{KMVDMJ}). \end{proof}
We can now easily prove Proposition \ref{propIScuspidal}: integration by parts shows that for any $A\geqslant 0$ and $\varepsilon>0$ we have \[\widetilde W\Bigl(\frac{nN}{q^2}\Bigr)\ll_{k,A,\varepsilon}
q^\varepsilon\Bigl(1+\frac{nN}{q^2}\Bigr)^{-A}\] (see \cite[Lemma 2.4]{445}), so that (using Deligne's bound $|\lambda_f(n)|\leqslant d(n)\ll_\varepsilon n^\varepsilon$), we get \[ \sum_{n} \lambda_f(n)\Kl(an;q)W\Bigl(\frac nN\Bigr)\ll_{\varepsilon,k} (qN)^\varepsilon \Bigl(q^{1/2}+\frac{N}{q^{1/2}}\Bigr). \]
\section{Application to the fourth moment of Dirichlet \texorpdfstring{$L$-functions}{L-functions}} In this section we prove Theorem \ref{472}. The general strategy of the proof has been explained in detail in our paper \cite{445}. We assume some familiarity with this paper, and refer in particular to \cite[\S 1.2, \S 6.1, \S 6.3]{445} for notations.
We begin with the unconditional bound. Let \begin{multline*}
B_{E, E}^{\pm}(M,N) =\frac{1}{(MN)^{1/2}}\sum_{\substack{m\equiv\pm n\mods q \\ m \not= n}}{d(m)d(n)}W_1\Bigl(\frac{m}M\Bigr)W_2\Bigl(\frac{n}N\Bigr)\\
- \frac{1}{q(MN)^{1/2}} \sum_{m, n} d(m) d(n)
W_1\Bigl(\frac{m}M\Bigr)W_2\Bigl(\frac{n}N\Bigr). \end{multline*} Our objective is to prove that for $\eta=1/20$ one has \begin{equation}\label{eqgoal} B_{E,E}^{\pm}(M,N)-\mathrm{MT}^{od,\pm}_{E,E}(M,N)=:\mathrm{ET}_{E,E}^{\pm}(M,N)\ll_\varepsilon q^{-\eta+o(1)}, \end{equation} where $\mathrm{MT}^{od,\pm}_{E,E}(M,N)$ is a suitable main term (described in \cite{MY}) and $M,N$ range over a set of $O(\log^2q)$ real numbers satisfying \[1\leqslant M\leqslant N,\ MN\leqslant q^{2+o(1)}\] (the first bound is by symmetry, the second is the length of the approximate functional equation). We set \[N^*=q^2/N,\ M=q^\mu,\ N=q^\nu,\ \nu^*=2-\nu,\] so that \[ 0\leqslant \mu\leqslant\nu,\quad -\varepsilon \leqslant \nu^*-\mu. \]
In view of the bound \cite[(3.18)]{445}, which reads $$\ET^{\pm}_{E,E}(M,N) \ll q^{\varepsilon}\Bigl( \frac{N}{qM}\Bigr)^{1/4} \left(1+ \Bigl( \frac{N}{qM}\Bigr)^{1/4} \right)$$ and which is proved using spectral theory, we may also assume that \begin{equation}\label{4eta} \mu+\nu^*\leqslant 1+4\eta \end{equation} for otherwise \eqref{eqgoal} is certainly true. Proceeding in the same way as in \cite[\S 6.3]{445}, we apply Voronoi summation to reduce the problem to the following bounds for $O(\log^4 q)$ sums of the shape \begin{multline*}
S^\pm(M_1,M_2,M_3,M_4)=\frac{1}{(qMN^*)^{1/2}}\sumsum_{m_1,m_2,m_3,m_4}
W_1\left(\frac{m_1}{M_1}\right)W_2\left(\frac{m_2}{M_2}\right) \\ \times W_3\left(\frac{m_3}{M_3}\right)W_4\left(\frac{m_4}{M_4}\right)
\Kl(\pm m_1m_2m_3m_4;q)\ll q^{-\eta+o(1)}, \end{multline*} where the $W_i$ satisfy~(\ref{Wbound}) with $Q=q^{\varepsilon}$, and the $M_i$ written in the shape $M_i=q^{\mu_i}$, $i=1,2,3,4$, satisfy \[ \mu_1\leqslant \mu_2\leqslant \mu_3\leqslant\mu_4,\quad 0\leqslant
\mu_1+\mu_2+\mu_3+\mu_4 = \mu+\nu', \quad \nu' \leqslant \nu^*. \] By the trivial bound for Kloosterman sums (and recalling \eqref{4eta}), we may assume that \begin{equation}\label{munu*range} 1-2\eta\leqslant \mu+\nu'\leqslant \mu+\nu^*\leqslant 1+4\eta, \end{equation} for otherwise \eqref{eqgoal} is true.
We use the same strategy as in \cite[\S 6.3]{445}, except that we replace \cite[(5.5)]{445} by Proposition \ref{propSZ}. Thus, if the largest variables $m_3,m_4$ are large enough, we apply \eqref{SZboundsmooth} to them (fixing $m_1,m_2$); otherwise, we find it more beneficial to group the variables differently, producing a bilinear sum of Kloosterman sums to which we apply Proposition \ref{proptypeII}.
Explicitly, using \eqref{SZboundsmooth} we obtain that \begin{align*}
S^\pm(M_1,M_2,M_3,M_4)
&\ll q^{o(1)} \frac{M_1M_2}{(qMN^*)^{1/2}}\Bigl(q^{1/2}+\frac{M_3M_4}{q^{1/2}}\Bigr)\\
& \ll
q^{o(1)}\Bigl(\sqrt{\frac{M_1M_2}{M_3M_4}}+\frac{(MN')^{1/2}}q\Bigr)
\ll q^{o(1)}\Bigl(\sqrt{\frac{M_1M_2}{M_3M_4}}+q^{-\eta}\Bigr) \end{align*} since $q^{\frac12(1+4\eta)-1}\leqslant q^{-\eta}$. We may therefore assume that \begin{equation}\label{SZcondition} 0\leqslant \mu_3+\mu_4-(\mu_1+\mu_2)\leqslant 2\eta. \end{equation} We now apply Proposition \ref{proptypeII} with ${\tt M}=M_4$ and ${\tt N}=M_1M_2M_3$ so that ${\tt MN}=q^{\mu+\nu'}\leqslant MN^*$ and derive \[ S^\pm(M_1,M_2,M_3,M_4)\ll q^{o(1)}\big(q^{\frac{\mu_1+\mu_2+\mu_3-1}2}+q^{-\frac{1}4+\frac{\mu_4}2}\big). \] We claim that under the current assumptions both exponents on the right hand side are $\leqslant - \eta$, which completes the proof. Indeed, since $\mu_4\geqslant \mu_i$ for $i=1$, $2$, $3$, we obtain by \eqref{munu*range} that \[ \Bigl(1+\frac13\Bigr)(\mu_1+\mu_2+\mu_3)\leqslant \mu_1+\mu_2+\mu_3+\mu_4\leqslant 1+4\eta\implies \mu_1+\mu_2+\mu_3\leqslant \frac34+3\eta, \] hence \[ {\frac{\mu_1+\mu_2+\mu_3-1}2}\leqslant {-\frac{1}8+\frac{3}2\eta}\leqslant {-\eta}. \] Moreover, by \eqref{SZcondition} and \eqref{munu*range} (since $\mu_1\leqslant \mu_2\leqslant \mu_3\leqslant \mu_4$) we have \[ \mu_4\leqslant 2\eta+\mu_1+\mu_2-\mu_3\leqslant 2\eta+\mu_1\leqslant 2\eta+\frac{1}3(1+4\eta-\mu_4)=\frac{1}3+\frac{10}3\eta-\frac13\mu_4,\] which implies that $\mu_4\leqslant \frac14+\frac52\eta,$ and so \[ -\frac{1}4+\frac{\mu_4}2\leqslant-\frac18+\frac54\eta\leqslant-\eta. \]
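The exponent bookkeeping at the end of this argument is elementary but fiddly; as a sanity check (ours, using exact rational arithmetic), one can verify that both exponents are indeed $\leqslant -\eta$ for $\eta=1/20$, with equality in the first case.

```python
from fractions import Fraction as F

eta = F(1, 20)

# From (munu*range) and mu_4 >= mu_i: mu_1 + mu_2 + mu_3 <= 3/4 + 3*eta,
# so the first exponent (mu_1 + mu_2 + mu_3 - 1)/2 is at most:
e1 = (F(3, 4) + 3 * eta - 1) / 2
assert e1 == -F(1, 8) + F(3, 2) * eta and e1 <= -eta

# From (SZcondition): mu_4 <= 1/3 + (10/3)*eta - mu_4/3, i.e. mu_4 <= 1/4 + (5/2)*eta,
# so the second exponent -1/4 + mu_4/2 is at most:
mu4 = F(1, 4) + F(5, 2) * eta
e2 = -F(1, 4) + mu4 / 2
assert e2 == -F(1, 8) + F(5, 4) * eta and e2 <= -eta
```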
If the Ramanujan--Petersson conjecture is available, we can use \cite[(1.7)]{445} with $\theta = 0$ in place of \cite[(3.2)]{445} and replace \eqref{4eta} with $\mu+\nu^*\leqslant 1+2\eta.$ Then the same strategy leads to the numerical value $\eta = 1/16$.
\section{Sums of Kloosterman sums along the primes: proof of Theorem \ref{Klpq}}
\subsection{Proof of inequality \eqref{evening2}}
We now recall the main ideas of the proof of \cite[Thm.~1.5]{FKM2},
since our proof will follow the same path until the moment we use
Proposition \ref{propSZ}. We will incorporate some shortcuts and
combinatorial improvements to \cite{FKM2}, mainly due to the
assumption $X \leqslant q$. By \cite[p.~1711--1716]{FKM2}, we are
reduced to proving the same bound as \eqref{evening2} for the sum \[ \mathcal S_{W,X} (\Lambda, \Kl):= \sum_n \Lambda (n) \Kl (n;q) W \Bigl( \frac{n}{X} \Bigr), \] where $\Lambda$ is the von Mangoldt function. We now apply Heath-Brown's identity \cite{HB} with integer parameter $J\geqslant 2$. This decomposes $\mathcal S_{W,X} (\Lambda, \Kl)$ into a linear combination, with coefficients bounded by $O_J (\log X)$, of $O(\log^{2J} X)$ sums of the shape \begin{multline}\label{HBdecomp}
\Sigma (\uple{M}, \uple{N})=\underset{m_1, \dots, m_J}{\sum\cdots
\sum}
\alpha_1(m_1) \alpha_2(m_2) \cdots \alpha_J (m_J)\\
\times \underset{n_1, \dots, n_J}{\sum \cdots\sum} V_1
\Bigl(\frac{n_1}{N_1}\Bigr) \cdots V_J \Bigl(\frac{n_J}{N_J}\Bigr) W \Bigl(
\frac{m_1\cdots m_J n_1\cdots n_J}{X}\Bigr) \Kl (m_1\cdots
m_Jn_1\cdots n_J;q) \end{multline} where \begin{itemize} \item $\uple{M}= (M_1, \dots, M_J)$, $\uple{N} =(N_1, \dots, N_J)$
are $J$-tuples of parameters in $[1/2, 2X]^{2J}$ which satisfy \begin{equation}\label{restr} N_1 \geqslant N_2 \geqslant \cdots \geqslant N_J, \ \ M_i \leqslant X^{1/J}, \ \ M_1\cdots M_J N_1 \cdots N_J\asymp_J X; \end{equation} \item the arithmetic functions $m\mapsto \alpha_i (m)$ are bounded and
supported in $[M_i/2, 2M_i]$; \item the smooth functions $x\mapsto V_i (x)$ satisfy~(\ref{Wbound})
with parameter $Q$. \end{itemize}
It now remains to study the sum $\Sigma (\uple{M}, \uple{N})$ defined in \eqref{HBdecomp} for every $(\uple{M}, \uple{N})$ as above. We estimate $\Sigma(\uple{M},\uple{N})$ in two ways.
Our first method is to bound $\Sigma (\uple{M}, \uple{N})$ by applying \eqref{SZboundsmooth1} to the largest smooth variables $n_1$ and $n_2$ in $\Sigma(\uple{M},\uple{N})$ and a trivial summation over the other variables. We obtain \begin{equation*} \Sigma(\uple{M},\uple{N}) \ll q^\varepsilon Q^2 X \Bigl( \frac{q^{1/2}}{N_1 N_2}+ \frac{1}{q^{1/2}}\Bigr), \end{equation*} which, by \eqref{restr} and the assumption $X\leqslant q$, simplifies into \begin{equation}\label{ineq3} \Sigma(\uple{M},\uple{N}) \ll q^\varepsilon Q^2 X\ \bigl( q^{1/2}/ (N_1N_2)\bigr). \end{equation}
Our second method is to apply Proposition \ref{proptypeII} to $\Sigma (\uple{M}, \uple{N})$; in this way we obtain \begin{equation}\label{ineq1} \Sigma (\uple{M}, \uple{N}) \ll q^\varepsilon Q^{1/2}\,X\, \Bigl( \frac{1}{M^{1/2}} +\frac{q^{1/4}}{(X/M)^{1/2}}\Bigr) \end{equation} for any factorization \[ M_1\cdots M_JN_1\cdots N_J =M\times N. \]
We now have to combine \eqref{ineq3} and \eqref{ineq1} in an optimal way to bound $\Sigma (\uple{M}, \uple{N})$. We follow the same presentation as in \cite[\S 4.2]{FKM2}. We introduce the real numbers $\kappa$, $x$, $\mu_i$, $\nu_j$, $1\leqslant i,j\leqslant J$, defined by \[ Q=q^\kappa,\ X=q^x,\ M_i=q^{\mu_i},\ N_j=q^{\nu_j} \] and we set \[( \uple{m}, \uple{n}) =(\mu_1, \dots, \mu_J, \nu_1, \dots, \nu_J) \in [0, x]^{2J}. \]
The conditions \eqref{restr} are reinterpreted as \begin{equation}\label{restr1}
\sum_i \mu_i +\sum_j \nu_j =x\leqslant 1, \quad \mu_i \leqslant x/J, \quad \nu_1 \geqslant \nu_2\geqslant \cdots \geqslant \nu_J. \end{equation}
According to \eqref{ineq3} and \eqref{ineq1}, we introduce the function (compare with \cite[definition (4.5)]{FKM2}) $\eta (\uple{m}, \uple{n})$ defined by \begin{equation}\label{defeta} \eta (\uple{m}, \uple{n}):= \max\Bigl\{
(\nu_1+\nu_2) -\frac{1}{2}-2 \kappa \ ;\, \max_\sigma \min \Bigl( \frac{\sigma}{2}, \frac{x-\sigma}{2} -\frac{1}{4} \Bigr) -\frac{\kappa}{2} \Bigr\}, \end{equation} where $\sigma$ ranges over all possible sub-sums of the $\mu_i$ and $\nu_j$ for $1 \leqslant i, j \leqslant J$, that is, over the sums \[ \sigma =\sum_{i \in \mathcal I} \mu_i +\sum_{j \in \mathcal J} \nu_j, \] for $\mathcal I $ and $\mathcal J$ ranging over all possible subsets of $\{1, \dots, J\}$.
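The function $\eta(\uple{m},\uple{n})$ of \eqref{defeta} is straightforward to evaluate by enumerating all subset sums. The sketch below is our own illustration, using exact rationals and a small $J=3$ for readability (the proof takes $J=10$); the two toy tuples and their values are hand-checkable.

```python
from fractions import Fraction as F

def eta_value(mu, nu, x, kappa):
    """eta(m, n) from (defeta): max of the bilinear exponent and the best
    type-II exponent over all subsums sigma of the mu_i and nu_j."""
    vals = list(mu) + list(nu)
    subsums = {F(0)}
    for v in vals:                       # build all 2^(2J) subset sums
        subsums |= {s + v for s in subsums}
    bilinear = (nu[0] + nu[1]) - F(1, 2) - 2 * kappa
    type2 = max(min(s / 2, (x - s) / 2 - F(1, 4)) for s in subsums) - kappa / 2
    return max(bilinear, type2)

# toy example, J = 3, x = 1, kappa = 0: all mass on nu_1, so bilinear term wins
assert eta_value([F(0)] * 3, [F(1), F(0), F(0)], F(1), F(0)) == F(1, 2)
# mass split evenly over nu_1, nu_2, nu_3: bilinear term 1/6 beats type-II term 1/12
assert eta_value([F(0)] * 3, [F(1, 3)] * 3, F(1), F(0)) == F(1, 6)
```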
With these conventions, as a consequence of \eqref{ineq3} and \eqref{ineq1} we have the inequality \[ \Sigma(\uple{M},\uple{N}) \ll (qQ)^\varepsilon q^{-\eta (\uple{m}, \uple{n})}\,X, \]
and finally, summing over all possible $(\uple M, \uple N)$, we have the inequality \begin{equation}\label{793} \mathcal S_{W,X} (\Lambda, \Kl) \ll (qQ)^\varepsilon \, q^{-\eta}\, X, \end{equation} where \[ \eta = \min_{(\uple{m}, \uple{n})} \eta (\uple{m}, \uple{n}), \] the minimum being taken over the $( \uple{m}, \uple{n})$ satisfying \eqref{restr1}.
The estimate~\eqref{evening2} is trivial for $x<3/4$, so we may assume that $3/4 \leqslant x \leqslant 1$. For $\varepsilon >0$ sufficiently small, let $\mathcal I_x$ be the interval \[ \mathcal I_x = [x/6-\varepsilon, x/3+\varepsilon], \] and choose $J=10$ to apply Heath-Brown's identity.
We now consider two different cases in the combinatorics of $(\uple{m}, \uple{n})$. \begin{itemize} \item If $(\uple{m}, \uple{n})$ contains a subsum $\sigma \in \mathcal I_x$, then, by \eqref{defeta}, we have the inequality \[ \eta (\uple{m}, \uple{n}) \geqslant \min \Bigl(\frac{x/6}{2}, \frac{x-x/3}{2}-\frac{1}{4}\Bigr)-\frac{\kappa}{2}-\frac{\varepsilon}{2}, \] which simplifies into \begin{equation}\label{firstbound} \eta (\uple{m}, \uple{n}) \geqslant \frac{x}{3} -\frac{1}{4} -\frac{\kappa}{2} -\frac{\varepsilon}{2}. \end{equation} \item If $(\uple{m}, \uple{n})$ contains no subsum $\sigma \in \mathcal I_x$, then the sum of all the $\mu_i$ and $\nu_j$ which are less than $x/6-\varepsilon$ is also less than $x/6-\varepsilon$ (this is a consequence of the inequality $2(x/6-\varepsilon) < x/3 +\varepsilon$). In light of~\eqref{restr1}, this includes all $\mu_i$, and so some $\nu_j$ must be greater than $x/3+\varepsilon$. On the other hand, since $3 (x/3 +\varepsilon) > x$, we deduce that at most two $\nu_i$ (more precisely, $\nu_1$ or $\nu_1$ and $\nu_2$) are greater than $x/3 +\varepsilon$. Combining these remarks, we deduce the inequality \[ \nu_1+\nu_2 \geqslant x-(x/6 -\varepsilon) = 5x/6 +\varepsilon, \] which implies, by \eqref{defeta}, the inequality \begin{equation}\label{secondbound} \eta (\uple{m}, \uple{n}) \geqslant \frac{5x}{6}- \frac{1}{2} -2 \kappa -\varepsilon. \end{equation} \end{itemize} By \eqref{793}, \eqref{firstbound} and \eqref{secondbound}, we deduce the inequality \begin{equation}\label{828} \mathcal S_{W,X} (\Lambda, \Kl) \ll (qQ)^\varepsilon \bigl( q^{1/4} Q^{1/2} X^{2/3} + q^{1/2} Q^2 X^{1/6} \bigr). \end{equation} In the above upper bound, the first term is larger than the second one if and only if $Q<q^{-1/6} X^{1/3}$, and in this case, we have $Q^\varepsilon < q^\varepsilon$. 
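As a sanity check (ours, not part of the paper), the threshold $Q=q^{-1/6}X^{1/3}$ can be recovered by comparing the $q$-exponents of the two terms of \eqref{828} after writing $Q=q^\kappa$ and $X=q^x$; the following sympy sketch does this.

```python
import sympy as sp

# Compare the q-exponents of the two terms of the bound
#   q^{1/4} Q^{1/2} X^{2/3} + q^{1/2} Q^2 X^{1/6},  with Q = q^kappa, X = q^x.
kappa, x = sp.symbols('kappa x', real=True)
first = sp.Rational(1, 4) + kappa / 2 + 2 * x / 3   # exponent of the first term
second = sp.Rational(1, 2) + 2 * kappa + x / 6      # exponent of the second term

# The two terms balance exactly at kappa = x/3 - 1/6, i.e. Q = q^{-1/6} X^{1/3};
# the first term dominates precisely when kappa < x/3 - 1/6.
sol = sp.solve(sp.Eq(first, second), kappa)
assert len(sol) == 1
assert sp.simplify(sol[0] - (x / 3 - sp.Rational(1, 6))) == 0
```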
However, when $Q\geqslant q^{-1/6} X^{1/3}$, the bound \eqref{evening2} is trivial, since we have \[ q^{1/4} Q^{1/2} X^{2/3}\geqslant q^{1/4} (q^{-1/6} X^{1/3})^{1/2} X^{2/3}= q^{1/6} X^{5/6} \geqslant X, \] because we suppose $X\leqslant q.$ In conclusion, we may drop the second term on the right-hand side of \eqref{828}. This remark completes the proof of \eqref{evening2}.
\subsection{Proof of inequality \eqref{evening1}} The proof mimics the proof appearing in \cite[\S 4.3]{FKM2}. By a simple subdivision, it is sufficient to prove the inequality \begin{equation}\label{836}
\sum_\stacksum{X<p\leqslant\frac32X}{p\text{ prime}}\Kl(p;q)\ll q^{1/6+\varepsilon}\, X^{7/9}. \end{equation} Let $\Delta <1/2$ be some parameter, let $W$ be a smooth function defined on $[0, +\infty[$ such that \[ \supp (W) \subset [1-\Delta, \textstyle \frac{3}{2}+\Delta], \ 0\leqslant W \leqslant 1, \ W(x) =1 \text{ for } 1 \leqslant x \leqslant \frac{3}{2}, \] and such that the derivatives satisfy \[ x^j W^{(j)} (x)\ll_j Q^j, \] with $Q=\Delta^{-1}$. By applying \eqref{evening2}, we have \begin{align*}
\sum_\stacksum{X<p\leqslant \frac{3}{2}X}{p\text{ prime}}\Kl(p;q)& \ll \Delta X + 1 + \Bigl\vert\, \sum_p W \Bigl( \frac{p}{X}\Bigr) \Kl (p;q)\, \Bigr\vert\\
& \ll \Delta X + q^{1/4 +\varepsilon} Q^{1/2} X^{2/3} \ll q^{1/6 +\varepsilon} X^{7/9},
\end{align*} by the choice $\Delta =q^{1/6} X^{-2/9} < 1/2$ (the claim is trivial if $q^{1/6} \geqslant \frac{1}{2}X^{2/9}$). This completes the proof of \eqref{836}.
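The choice $\Delta=q^{1/6}X^{-2/9}$ arises from balancing the two terms $\Delta X$ and $q^{1/4}\Delta^{-1/2}X^{2/3}$ (with $Q=\Delta^{-1}$); the following sympy sketch (ours, not part of the paper) verifies the balance and the resulting common value $q^{1/6}X^{7/9}$.

```python
import sympy as sp

# With Q = Delta^{-1}, balance Delta * X against q^{1/4} Delta^{-1/2} X^{2/3}.
q, X = sp.symbols('q X', positive=True)
Delta = q**sp.Rational(1, 6) * X**sp.Rational(-2, 9)   # the choice made in the proof

lhs = Delta * X
rhs = q**sp.Rational(1, 4) * Delta**sp.Rational(-1, 2) * X**sp.Rational(2, 3)
assert sp.simplify(lhs - rhs) == 0                     # the two terms agree
assert sp.simplify(lhs - q**sp.Rational(1, 6) * X**sp.Rational(7, 9)) == 0
```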
\begin{bibdiv}
\begin{biblist}
\bib{445}{article}{
author={V. Blomer},
author={\' E. Fouvry},
author={E. Kowalski},
author={Ph. Michel},
author={D. Mili\'cevi\'c},
title={On moments of twisted $L$-functions},
journal={Amer. J. Math.},
note={to appear, \url{arXiv:1411.4467}},
}
\bib{FKM2}{article}{
author={{\'E}. Fouvry },
author={E. Kowalski},
author={Ph. Michel},
title={Algebraic trace functions over the primes},
journal={Duke Math. J.},
volume={163},
date={2014},
number={9},
pages={1683--1736},
}
\bib{FKM1}{article}{
author={{\'E}. Fouvry},
author={E. Kowalski},
author={Ph. Michel},
title={Algebraic twists of modular forms and Hecke orbits},
journal={Geom. Funct. Anal.},
volume={25},
date={2015},
number={2},
pages={580--657},
}
\bib{FKMd3}{article}{
author={{\'E}. Fouvry},
author={E. Kowalski },
author={Ph. Michel },
title={On the exponent of distribution of the ternary divisor function},
journal={Mathematika},
volume={61},
date={2015},
number={1},
pages={121--144},
}
\bib{HB}{article}{
author = {D. R. Heath-Brown},
title = {Prime numbers in short intervals and a generalized Vaughan identity},
journal={Canad. J. Math.},
volume={34},
date={1982},
pages={1365--1377},}
\bib{KMVDMJ}{article}{
author={E. Kowalski },
author={Ph. Michel},
author={J. VanderKam},
title={Rankin-Selberg $L$-functions in the level aspect},
journal={Duke Math. J.},
volume={114},
date={2002},
number={1},
pages={123--191},
}
\bib{SZ}{article}{
author={I. Shparlinski},
author={T. P. Zhang},
title={Cancellations amongst Kloosterman sums},
journal={Acta Arith.},
note={to appear, \url{arXiv:1601.05123}}, }
\bib{MY}{article}{
author={M. P. Young},
title={The fourth moment of Dirichlet $L$-functions},
journal={Ann. of Math. (2)},
pages={1--50}, date={2011}, volume={173}, number={1}, }
\end{biblist}
\end{bibdiv}
\end{document}
\begin{document}
\begin{abstract} In this article we consider surfaces in the product space $\h^2\times \r$ of the hyperbolic plane $\h^2$ with the real line. The main results are: a description of some geometric properties of minimal graphs; new examples of complete minimal graphs; the classification of umbilical surfaces. \end{abstract}
\title{A note on surfaces in $\h^2\times\r$}
\author{Stefano Montaldo} \address{Universit\`a degli Studi di Cagliari\\ Dipartimento di Matematica e Informatica\\ Via Ospedale 72\\ 09124 Cagliari, Italia} \email{montaldo@unica.it}
\author{Irene I. Onnis} \thanks{Work partially supported by GNSAGA and INdAM, Italy} \address{Departamento de Matem\'{a}tica, C.P. 6065\\ IMECC, UNICAMP, 13081-970, Campinas, SP\\ Brasil} \email{onnis@ime.unicamp.br}
\subjclass{53C42, 53A10}
\keywords{Minimal surfaces, graphs, umbilical immersion, Gauss map}
\maketitle
\section{Introduction}
In the last decade the study of the geometry of surfaces in the three-dimensional Thurston geometries has grown considerably. One reason is that these spaces can be endowed with a complete metric having a large isometry group; another, more recent, is the announced proof of Thurston's geometrization conjecture, which ensures the dominant role of these spaces among the three-dimensional geometries.
Leaving aside the space forms $\r^3$, $\s^3$ and $\h^3$, among the remaining five Thurston geometries the Heisenberg space is probably the most studied, and the geometry of its surfaces is well understood. In recent years the study of the geometry of surfaces in the two product spaces $\h^2\times\r$ and $\s^2\times\r$ has been growing very rapidly, and the interest is mainly focused on minimal and constant mean curvature surfaces \cite{AR,cad,FM,MO,MO2,MR,NR,NR1,R,S,ST}.
The purpose of this paper is first to investigate some geometric properties of minimal graphs in $\h^2\times\r$ (Proposition~\ref{pro-geo} and Theorem~\ref{teo-min}) and to produce some new examples, including complete ones. In the last part (Theorem~\ref{uffa}) we classify the umbilical surfaces in $\h^2\times\r$, giving their explicit local parametrizations.
For completeness we recall some basic notions on $\h^2\times\r$. Let $\h^{2}$ be the upper half-plane model $\{(x,y)\in\r^2\,|\,y>0\}$ of the hyperbolic plane endowed with the metric $g_{\tiny\h}=(dx^2+dy^2)/y^2$, of constant Gauss curvature $-1$. The space $\h^2$, with the group structure derived from the composition of proper affine maps, is a Lie group and the metric $g_{\tiny\h}$ is left invariant. Therefore the product $\h^2\times\r$ is a Lie group with the left invariant product metric $$ g=\frac{dx^2+dy^2}{y^2}+dz^2. $$ With respect to the metric $g$ an orthonormal basis of left invariant vector fields is \begin{equation}\label{campi} E_1=y\frac{\partial}{\partial x},\qquad E_2=y\frac{\partial}{\partial y},\qquad E_3=\frac{\partial}{\partial z}, \end{equation} and the nonzero components of the Christoffel symbols are: \begin{equation}\label{chri} \Gamma_{12}^1= \Gamma_{21}^1=\Gamma_{22}^2=-\frac{1}{y},\qquad \Gamma_{11}^2=\frac{1}{y}. \end{equation}
\section{Minimal graphs} The natural parametrization of a graph $\mathcal{M}$ in $\h^2\times\r$ is $$ \phi(x,y)=(x,y,f(x,y)),\qquad (x,y) \in \Omega, $$ where the domain $\Omega\subseteq \h^2$ is relatively compact, with a differentiable boundary and $f:\Omega\to\r$ is a $C^2$-function. The unit normal $\xi$ to $\mathcal{M}$ is given by \begin{equation}\label{xi} \xi(x,y)=-\frac{f_x}{w\,y}\,E_1-\frac{f_y}{w\,y}\,E_2+\frac{1}{w\,y^2}\,E_3, \end{equation} where $w=\tfrac{1}{y^2}\sqrt{y^2(f_x^2+f_y^2)+1}$. The coefficients of the induced metric $h=\phi^{\ast}g$ are $$ E=f_x^2+\frac{1}{y^2},\qquad F=f_x f_y,\qquad G=f_y^2+\frac{1}{y^2}, $$ while the coefficients of the second fundamental form are given by \begin{equation}\label{lmn} L=\frac{y f_{xx}-f_y}{w\, y^3},\qquad M=\frac{y f_{xy}+f_x}{w \,y^3},\qquad N=\frac{y f_{yy}+f_y}{w\, y^3}. \end{equation} The mean curvature function is then \begin{equation}\label{acca}
H=\frac{y^2}{2}\,\Div\Big(\frac{\nabla f}{\sqrt{1+y^2\,|\nabla f|^2}}\Big)=\frac{1}{2}\,\Div_{\tiny\h^2}\Big(\frac{\nabla f }{w}\Big), \end{equation} where $\nabla$ and $\Div$ stand for the Euclidean gradient and the Euclidean divergence, while $\Div_{\tiny\h^2}$ is the divergence in $(\h^2,g_{\tiny\h})$. The equation $H=0$ is called the {\it minimal surfaces equation} in $\h^2\times\r$, and can also be written as \begin{equation}\label{eq1} (1+y^2f_y^2)\,f_{xx}-y\,(f_x^2+f_y^2)\,f_y-2y^2\,f_x\,f_y\,f_{xy}+(1+y^2f_x^2)\,f_{yy}=0. \end{equation} This equation was first derived by B.~Nelli and H.~Rosenberg in \cite{NR}, where they showed that in $\h^2\times\r$ there exist minimal surfaces of catenoid type, helicoid type and Scherk type. Moreover, they proved that Bernstein's theorem fails, that is, there exist complete minimal graphs in $\h^2\times\r$ whose Gauss map has rank different from zero.
The first geometric property of minimal graphs is that, as in the Euclidean case (see \cite{GDG}), solutions of \eqref{eq1} define graphs of ``minimal'' area.
\begin{theorem}\label{teo-min} If $f$ satisfies the minimal surfaces equation \eqref{eq1} in $\Omega$ and $f$ extends continuously to $\overline{\Omega}$, then the area of the surface $\mathcal{M}$, defined by $f$, is less than or equal to the area of any other surface $\mathcal{\widetilde{M}}$ defined by a function $\widetilde{f}$ in $\Omega$ taking the same values as $f$ on $\partial\Omega$. Moreover, equality holds if and only if $f$ and $\widetilde{f}$ coincide on $\Omega$. \end{theorem} \begin{proof} In the domain $\Omega\times\r$ of $\h^2\times\r$, consider the unit vector field $V(x,y,z)$ given by $$ V(x,y,z):=-\frac{f_x}{w\,y}\,E_1-\frac{f_y}{w\,y}\,E_2+\frac{1}{w\,y^2}\,E_3. $$ Writing $V=V^i\,(\partial/\partial x_i)$ and denoting by $\Divv$
the divergence of $\h^2\times\r$, we have $$
\Divv V=-y^2\,\Div\Big(\frac{\nabla f}{\sqrt{1+y^2\,|\nabla f|^2}}\Big). $$ Since $f(x,y)$ satisfies Equation~{\eqref{eq1}}, it follows that $$ \Divv V\equiv 0 \qquad\text{on}\quad \Omega\times\r. $$ The surfaces $\mathcal{M}$ and $\mathcal{\widetilde{M}}$ have the same boundary, and therefore $\mathcal{M}\cup\mathcal{\widetilde{M}}$ is an oriented boundary of an open set $\Theta$ in $\Omega\times\r$. Denoting by $\eta$ the unit normal corresponding to the positive orientation on $\mathcal{M}\cup\mathcal{\widetilde{M}}$ and using the Divergence Theorem, we have: \begin{equation}\label{dive}
0=\int_\Theta \Divv
V=\int_{\mathcal{M}\cup\mathcal{\widetilde{M}}}g(V,\eta)\,dA\,. \end{equation} From the definition of the vector $V$ and from {\eqref{xi}}, it follows that $$ V\equiv \eta \qquad\text{on}\quad \mathcal{M}, $$ hence, from {\eqref{dive}}, and since $V$ and $\eta$ are both unit vector fields, it follows that \begin{align*} A(\mathcal{M})=\int_{\mathcal{M}}g(V,\eta)\,dA=\int_{\mathcal{\widetilde{M}}}g(V,-\eta)\,dA \leq \int_{\mathcal{\widetilde{M}}}dA=A(\mathcal{\widetilde{M}}). \end{align*} Furthermore, equality holds if and only if $g(V,-\eta)=1$, that is, if and only if $\widetilde{f}_x=f_x$ and $\widetilde{f}_y=f_y$ on $\Omega$. Finally, since
$f_{|{\partial \Omega}}=\widetilde{f}_{|{\partial \Omega}}$, we must have that $\widetilde{f}(x,y)=f(x,y)$ for all $(x,y)\in\Omega$. \end{proof}
In the following we show some solutions of Equation~{\eqref{eq1}}. \begin{example}\label{planoplano} If a solution of {\eqref{eq1}} has the form $f(x,y)=\varphi(x)$, we have ${\varphi}''(x)=0$ and thus $f(x,y)=ax+b$, with $a,b\in\r$. These are the only minimal planes in $\h^2\times\r$ that can be described as graphs.
If instead we look for solutions of type $f(x,y)=\psi (y)$, then Equation~{\eqref{eq1}} assumes the form ${\psi}''(y)-y\, {\psi}'(y)^3=0$, and, by integration, we get $$ \psi (y)=\arcsin (a\, y)+b,\qquad 0<y\leq 1/a\,,\quad a,b\in\r,\quad a>0. $$ \end{example}
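Both families of solutions in this example can be checked directly against \eqref{eq1}; the following sympy sketch (ours, not part of the paper) substitutes them into the minimal surfaces equation.

```python
import sympy as sp

# Residual of the minimal surfaces equation
#   (1 + y^2 f_y^2) f_xx - y (f_x^2 + f_y^2) f_y
#     - 2 y^2 f_x f_y f_xy + (1 + y^2 f_x^2) f_yy.
x, y, a, b = sp.symbols('x y a b', positive=True)

def minimal_residual(f):
    fx, fy = sp.diff(f, x), sp.diff(f, y)
    return ((1 + y**2 * fy**2) * sp.diff(f, x, 2)
            - y * (fx**2 + fy**2) * fy
            - 2 * y**2 * fx * fy * sp.diff(f, x, y)
            + (1 + y**2 * fx**2) * sp.diff(f, y, 2))

assert sp.simplify(minimal_residual(a * x + b)) == 0           # planes z = ax + b
assert sp.simplify(minimal_residual(sp.asin(a * y) + b)) == 0  # z = arcsin(a y) + b
```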
\begin{example}\label{funnel} As in the Euclidean space, we can find interesting examples of minimal graphs by seeking radial solutions of {\eqref{eq1}} of type $f(x,y)=h(x^2+y^2)$. In this case we obtain $z\,h''(z)+h'(z)=0$, thus the desired function is $$ f(x,y)=a\,\ln(x^2+y^2)+b,\qquad a,b\in\r. $$ This surface, called the {\it funnel surface}, defines a complete minimal graph. We observe that the Gauss map of this surface is of rank $1$. On the right hand side of Figure~\ref{gauss} there is a plot of the image, under the Gauss map, of the funnel surface, which is plotted on the left hand side. \begin{figure}
\caption{Complete minimal graph of rank $1$ (left) and the image of its Gauss map (right).}
\label{gauss}
\end{figure} \end{example}
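As an illustrative check (ours, not part of the paper), one can verify with sympy both the reduced ODE and the fact that the funnel surface satisfies the minimal surfaces equation \eqref{eq1}.

```python
import sympy as sp

# Check that h(z) = a*log(z) + b solves z h'' + h' = 0, and that the funnel
# surface f = a*log(x^2 + y^2) + b solves the minimal surfaces equation.
x, y, z = sp.symbols('x y z', positive=True)
a, b = sp.symbols('a b', real=True)

h = a * sp.log(z) + b
assert sp.simplify(z * sp.diff(h, z, 2) + sp.diff(h, z)) == 0

f = a * sp.log(x**2 + y**2) + b
fx, fy = sp.diff(f, x), sp.diff(f, y)
residual = ((1 + y**2 * fy**2) * sp.diff(f, x, 2)
            - y * (fx**2 + fy**2) * fy
            - 2 * y**2 * fx * fy * sp.diff(f, x, y)
            + (1 + y**2 * fx**2) * sp.diff(f, y, 2))
assert sp.simplify(residual) == 0
```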
\begin{example}\label{exnuova} Let $f(x,y)$ be a solution of the minimal surfaces equation of type $$ f(x,y)=\frac{a(x)}{x^2+y^2}, $$ where $a(x)$ is a real function. Then, {\eqref{eq1}} gives $$ [(x^2+y^2)^4+4\,y^4\,a(x)^2]\,a''(x)-4(x^2+y^2)^3\,[x\,a'(x)-a(x)]=0, $$ of which a solution is $a(x)=c\, x$, with $c\in\r$. The corresponding minimal function is \begin{equation}\label{eq-min-r2} f(x,y)=\frac{c\,x}{x^2+y^2},\qquad c\in\r, \end{equation} which produces the minimal graph plotted in Figure~\ref{nuovaa} (left). We observe that the Gauss map of this complete graph is of rank $2$.
This example can be generalized by considering, for a given real function $h$, a solution of \eqref{eq1} of type $f(x,y)=h\big(\tfrac{x}{x^2+y^2}\big)$ or of type $f(x,y)=h\big(\tfrac{y}{x^2+y^2}\big)$. In the first case we essentially find (up to translations) the example given by \eqref{eq-min-r2}. In the second case we obtain $h''(z)-z\,h'(z)^3=0$ and, therefore, $$ h(z)=\arcsin (a z)+b,\qquad a,b\in\r. $$ The corresponding minimal function does not define a complete graph. A plot of this surface is given in Figure~\ref{nuovaa} (right).
\begin{figure}
\caption{Minimal graphs of rank $2$: a complete one (left) and a non complete one (right).}
\label{nuovaa}
\end{figure} \end{example}
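A quick sympy verification (ours, not part of the paper) that \eqref{eq-min-r2} indeed satisfies the minimal surfaces equation \eqref{eq1}:

```python
import sympy as sp

# Check that f = c*x/(x^2 + y^2) solves the minimal surfaces equation.
x, y, c = sp.symbols('x y c', positive=True)

f = c * x / (x**2 + y**2)
fx, fy = sp.diff(f, x), sp.diff(f, y)
residual = ((1 + y**2 * fy**2) * sp.diff(f, x, 2)
            - y * (fx**2 + fy**2) * fy
            - 2 * y**2 * fx * fy * sp.diff(f, x, y)
            + (1 + y**2 * fx**2) * sp.diff(f, y, 2))
assert sp.simplify(residual) == 0
```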
\section{Minimality and harmonicity} In this section we study the relations between the minimality of a surface $\mathcal{M}$ in $\h^2\times\r$, defined as the graph of a differentiable function $f$, and the harmonicity of $f$. A first property about minimal graphs in $\h^2\times\r$ is given by the following:
\begin{proposition} Let $\mathcal{M}\subset \h^2\times\r$ be a graph of a $C^2$-function $f$, defined in a domain $\Omega$ of $\h^2$. If $\mathcal{M}$ is a minimal graph then $f$ is harmonic with respect to the induced metric $h=\phi^{\ast}g$, where $\phi(x,y)=(x,y,f(x,y)), (x,y)\in\Omega$, is a global parametrization of $\mathcal{M}$. \end{proposition} \begin{proof} We use (see, for example, \cite{ES}) the fact that $\mathcal{M}$ is a minimal surface if and only if
$\phi:(\Omega,h)\to\h^2\times\r$ is harmonic, that is, $\phi$ satisfies the system $$ \Delta_{h}\phi^{\alpha}+h^{ij}\,\Gamma_{\beta\gamma}^{\alpha}\,\frac{\partial \phi^{\beta}}{\partial x^i} \frac{\partial \phi^{\gamma}}{\partial x^j}=0, \quad \alpha=1,2,3, $$ where $\Delta_{h}$ is the Laplace-Beltrami operator with respect to $h$. Since $\Gamma_{\beta\gamma}^{3}=0$ for all $\beta,\gamma\in\{1,2,3\}$ (see \eqref{chri}), it follows that $\Delta_h f=0$. \end{proof}
As an immediate consequence of the above proposition, there exist no compact minimal surfaces without boundary in the product $\h^2\times\r$: on such a surface the height function would be harmonic, hence constant by compactness, so the surface would lie in a horizontal slice $\h^2\times\{c\}$, which contains no compact surface without boundary.
The next result gives a link between the harmonicity of $f$ and the geometry of the level curves.
\begin{proposition}\label{pro-geo} Let $\mathcal{M}\subset \h^2\times\r$ be a minimal surface defined as the graph of a non-constant differentiable function $f:\Omega\subseteq\h^2\to\r$. Then, the level curves of $f$ are pre-geodesics of $\h^2$ if and only if $f$ is harmonic with respect to the flat Laplacian. \end{proposition} \begin{proof} Let $f$ be a solution of {\eqref{eq1}} and let $\gamma(t)=(x(t),y(t))$ be a parametrization of a level curve of $f$. If the function $x(t)$ is constant, then $f_y=0$ and thus, using Example \ref{planoplano}, the surface $\mathcal{M}$ is a piece of the plane $z=ax+b$, $a,b\in\r$.
Thus we can assume that there exists a point $t_0$ such that $x'(t)\neq 0$ in a neighborhood of $t_0$. Therefore, we can parametrize $\gamma$ as $\gamma(x)=(x,y(x))$, with $y(x)>0$, in a neighborhood of $\gamma(t_0)$. It follows that $$ \gamma'(x)=\frac{E_1}{y(x)}+\frac{y'(x)}{y(x)}\, E_2, $$ and $$ \nabla_{\gamma'}\gamma'=-\frac{2y'(x)}{y(x)^2}\,E_1+ \Big(\frac{y(x)y''(x)-y'(x)^2+1}{y(x)^2}\Big)\,E_2. $$ The geodesic curvature $k_g$ (in $\h^2$) of $\gamma$ is then $$
k_g=\frac{g(\nabla_{\gamma'}\gamma',J\gamma')}{\|\gamma'\|^3} =\frac{y(x)y''(x)+y'(x)^2+1}{(1+y'(x)^2)^{3/2}}. $$ Using $$
y'(x)=-\frac{f_x(x,y(x))}{f_y(x,y(x))} $$ and $$ y''(x)=-\frac{f_{xx}+2f_{xy}y'+f_{yy}(y')^2}{f_y}, $$ the minimal surfaces equation~\eqref{eq1} can be written as $$ \Delta f=y(x)\,k_g\,|\nabla f|_{\tiny \r^2}^3, $$ which completes the proof. \end{proof}
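The proposition can be illustrated on the earlier examples: the level curves of the funnel surface and of \eqref{eq-min-r2} are Euclidean circles centred on the line $y=0$, hence geodesics of $\h^2$, and accordingly both functions are harmonic for the flat Laplacian. A sympy check (ours, not part of the paper):

```python
import sympy as sp

# The minimal graphs f = a*log(x^2+y^2) and f = c*x/(x^2+y^2) have level
# curves that are Euclidean circles centred on the boundary line y = 0,
# hence geodesics of H^2; by the proposition they must be harmonic for
# the flat Laplacian, which we verify.
x, y = sp.symbols('x y', positive=True)
a, c = sp.symbols('a c', real=True)

def flat_laplacian(f):
    return sp.diff(f, x, 2) + sp.diff(f, y, 2)

assert sp.simplify(flat_laplacian(a * sp.log(x**2 + y**2))) == 0
assert sp.simplify(flat_laplacian(c * x / (x**2 + y**2))) == 0
```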
\section{Umbilical surfaces of $\h^2\times\r$} We start this section by studying the totally geodesic surfaces of $\h^2\times\r$. For this we need the following lemma. \begin{lemma}\label{lemver} Let $\mathcal{M}$ be a regular surface in $\h^2\times\r$. Then, there exists an open dense set in $\mathcal{M}$ whose connected components admit one of the following parametrizations: $$\begin{aligned} X(u,v)&=(u,v,f(u,v)),\;\quad v>0,\\ Y(u,v)&=(u,a(u),v),\;\qquad a(u)>0,\\ Z(u,v)&=(c,u,v),\;\qquad u>0. \end{aligned} $$ \end{lemma} \begin{proof} A detailed proof can be found in \cite{O}. \end{proof} The parametrizations $Y(u,v)$ and $Z(u,v)$ of Lemma~\ref{lemver} define surfaces that we shall call {\it vertical surfaces}.
\begin{theorem}\label{pintus} The totally geodesic surfaces of $\h^2\times\r$ are the horizontal planes $z=c$, $c\in\r$, and the vertical cylinders over the geodesics of $\h^2$. \end{theorem} \begin{proof} We start proving that the only totally geodesic graphs are the horizontal planes $z=c$, $c\in\r$. Let $\mathcal{M}$ be a totally geodesic surface defined as the graph of a differentiable function $f$. From {\eqref{lmn}}, it follows that \begin{equation}\label{sal} \left\{ \begin{aligned} f_{xx}&=\frac{f_y}{y},\\ f_{yy}&=-\frac{f_y}{y},\\ f_{xy}&=-\frac{f_x}{y}. \end{aligned}\right. \end{equation} First, observe that $f(x,y)=c$, $c\in\r$, satisfies System~\eqref{sal} and, therefore, defines a totally geodesic surface. Then, since $f_x=0$ if and only if $f_y=0$, there exist no totally geodesic
graphs defined by a (non-constant) function $f$ that depends only on one variable. Thus, we can suppose that $f_x\neq 0$ and $f_y\neq 0$. From the second and third equations of {\eqref{sal}}, we find that there exist two functions $p(x)$ and $q(x)$ such that $f_y={p(x)}/{y}$ and $f_x={q(x)}/{y}$. Now, substituting into the first equation of {\eqref{sal}}, we obtain $$ y\,q'(x)-p(x)=0, $$ which is impossible: it would force $q'\equiv 0$ and $p\equiv 0$, contradicting $f_y\neq 0$.
To complete the proof, observe that the vertical cylinders over the geodesics of $\h^2$ are totally geodesic surfaces of $\h^2\times\r$. In the following, we prove that these cylinders are the only vertical surfaces that are totally geodesic in $\h^2\times\r$. Let $\mathcal{M}$ be a vertical surface. From Lemma~\ref{lemver}, it follows that either $\mathcal{M}$ is the totally geodesic plane $x=c$, with $y>0$, or it is parametrized by \begin{equation}\label{1fo} X(u,v)=(u,a(u),v),\qquad a(u)>0. \end{equation} The unit normal to the surface $\mathcal{M}$, defined by \eqref{1fo}, is \begin{equation}\label{xi1} \xi=\frac{{a}'}{\sqrt{1+({a'})^2}}\,E_1-\frac{1}{\sqrt{1+({a'})^2}}\,E_2. \end{equation} It is then straightforward to compute that $$ \nabla_{X_u}\xi=\bigg[\Big(\frac{{a'}}{\sqrt{1+({a'})^2}}\Big)_u+ \frac{1}{a\,\sqrt{1+({a'})^2}}\bigg]\,E_1+ \bigg[\frac{{a'}}{a\,\sqrt{1+({a'})^2}}- \Big(\frac{1}{\sqrt{1+({a'})^2}}\Big)_u\bigg]\,E_2, $$ and $$ \nabla_{X_v}\xi=0, $$ hence $M=N=0$. Consequently, $\mathcal{M}$ is totally geodesic if and only if $$ L=-g(\nabla_{X_u}\xi,X_u)=0, $$ that is, the function $a(u)$ satisfies the following ODE: $$ a\,{a''}+({a'})^2+1=0. $$ This implies that $$ a(u)=\sqrt{-u^2+2\,c_1\,u+c_2},\qquad c_1,c_2\in\r\quad\text{with}\quad c_1^2+c_2>0, $$ and the curve $(u,a(u))$ is the geodesic of $\h^2$ given by the upper semi-circle with center at $(c_1,0)$ and radius $\sqrt{c_1^2+c_2}$. This completes the proof. \end{proof}
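The ODE $a\,a''+(a')^2+1=0$ and its solution can be checked symbolically; the following sympy sketch (ours, not part of the paper) confirms that the stated $a(u)$ solves it.

```python
import sympy as sp

# Check that a(u) = sqrt(-u^2 + 2 c1 u + c2) solves a a'' + (a')^2 + 1 = 0,
# so the curve (u, a(u)) is a geodesic half-circle of H^2 centred at (c1, 0).
u, c1, c2 = sp.symbols('u c1 c2', real=True)
a = sp.sqrt(-u**2 + 2 * c1 * u + c2)
ode = a * sp.diff(a, u, 2) + sp.diff(a, u)**2 + 1
assert sp.simplify(ode) == 0
```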
\begin{remark} From the proof of Theorem~\ref{pintus} it follows that, if $\mathcal{M}$ is a vertical surface, then $F=M=N=0$ and so the mean curvature is given by $H={L}/{2 E}.$ Thus a vertical surface is minimal if and only if it is totally geodesic. \end{remark}
We are now ready to state the main result of this section.
\begin{theorem}\label{uffa}
The umbilical surfaces of $\h^2\times\r$ are: \begin{enumerate} \item[i)] the totally geodesic surfaces given in Theorem~\ref{pintus}; \item[ii)] the surface given as the graph of the function $$ f(x,y)=\arctan\Big(\frac{\lambda(x,y)}{\sqrt{j-\lambda(x,y)^2}}\Big)+c, \qquad c\in\r$$ where $$ \lambda(x,y)=\frac{1}{y}\,\Big[\frac{c_1}{2} (x^2+y^2)+c_2\,x-c_3\Big],\qquad c_1,c_2,c_3\in\r, $$ and $$ j=1-c_2^2-2\,c_1\,c_3>0. $$ \end{enumerate}
\end{theorem} \begin{proof} Let $\mathcal{M}$ be an umbilical surface of $\h^2\times\r$ parametrized by $$ \phi(x,y)=(x,y,f(x,y)), \quad (x,y)\in\Omega\subseteq\h^2. $$ Let $p\in \mathcal{M}$ and let $\xi$ be a unit normal vector field defined in some neighborhood $U$ of $p$. Since $\mathcal{M}$ is umbilical, by definition, there exists a function $\lambda: U\rightarrow \r$ such that the shape operator $A$ satisfies $A_{\xi}=\lambda I$ in $U$. The expression of $A_{\xi}$ with respect to the coordinate basis $\{\phi_x,\phi_y\}$ is $$ A_\xi (p)=\begin{pmatrix} \Big(\displaystyle{\frac{f_x}{w}\Big)_x-\frac{f_y}{y\,w}} & \qquad y\Big(\displaystyle{\frac{f_x}{y\, w}\Big)_y }\\ &\\ \Big(\displaystyle{\frac{f_y}{w}\Big)_x+\frac{f_x}{y\,w}} &\qquad y\Big(\displaystyle{\frac{f_y}{y\,
w}}\Big)_y \end{pmatrix}. $$ Thus $\mathcal{M}$ is umbilical if and only if \begin{equation}\label{matriz} \left\{ \begin{aligned} \Big(\frac{f_x}{w\,y}\Big)_y &=0,\\ \Big(\frac{f_y}{w}\Big)_x &=-\frac{f_x}{ w\,y},\\ \Big(\frac{f_x}{w}\Big)_x &=\Big(\frac{f_y}{w}\Big)_y. \end{aligned}\right. \end{equation} The first and second equations of \eqref{matriz} imply that there exist two functions $a(x)$ and $b(y)$ such that $$ \frac{f_x}{y w}=a(x),\qquad \frac{f_y}{w}=-\int a(x)\, dx + b(y). $$ Then, from the third equation of \eqref{matriz}, we conclude that $$ a(x)=c_1 x + c_2,\qquad b(y)=\frac{c_1}{2} y^2 +c_3, $$ where $c_1,c_2,c_3\in\r$. Thus ${f_y}/{w}=\frac{c_1}{2}(y^2-x^2)-c_2 x +c_3$, which implies that \begin{equation}\label{eq-lambda} \lambda(x,y)=y\Big(\frac{f_y}{y w}\Big)_y= \frac{1}{y}\Big[ \frac{c_1}{2}(x^2+y^2)+c_2 x - c_3\Big]. \end{equation} From the Codazzi equation for umbilical surfaces (see, for example, \cite{Daj}) $$ (\R(\phi_x,\phi_y)\xi)^\top= \lambda_y \phi_x- \lambda_x \phi_y, $$ and using $$ (\R(\phi_x,\phi_y)\xi)^\top=\frac{f_y}{w y^3} E_1- \frac{f_x}{w y^3} E_2, $$ it follows that \begin{equation}\label{ung} \left\{ \begin{aligned} \lambda_x&=\frac{f_x}{w\,y^2},\\ \lambda_y&=\frac{f_y}{w \,y^2}. \end{aligned} \right. \end{equation} Now, using the identities \begin{align*}\Big(\frac{1}{w\,y^2}\Big)_y&=-y\Big[f_x\,\Big(\frac{f_x}{w\,y}\Big)_y+f_y\,\Big(\frac{f_y}{w\,y}\Big)_y\Big] \\\Big(\frac{1}{w\,y^2}\Big)_x&=-\Big[f_x\,\Big(\frac{f_x}{w}\Big)_x+f_y\,\Big(\frac{f_y}{w}\Big)_x\Big], \end{align*} and \eqref{eq-lambda}, a simple calculation gives \begin{equation}\label{eq-simple} y^2\,w=\frac{1}{\sqrt{j-\lambda^2}}, \end{equation} where $j=1-c_2^2-2\,c_1\,c_3>0$. Substituting \eqref{eq-simple} in the first equation of {\eqref{ung}} we have $$ f(x,y)=\int\frac{\lambda_x}{\sqrt{j-\lambda^2}}\,dx= \arctan\Big(\frac{\lambda}{\sqrt{j-\lambda^2}}\Big)+h(y), $$ for a certain function $h(y)$.
From the second equation of {\eqref{ung}}, we conclude that $h(y)$ is constant.
To complete the proof, we must study the case when $\mathcal{M}$ is an umbilical vertical surface. From Lemma~\ref{lemver}, it follows that either $\mathcal{M}$ is the totally geodesic plane $x=c$, or it is given by: $$ X(u,v)=(u,a(u),v),\qquad a(u)>0. $$ In the latter case, we have $A_{\xi}X_v=-\nabla_{X_v}\xi=0$ and, therefore, $\lambda=0$. This concludes the proof of the theorem. \end{proof}
As an example of an umbilical surface, taking $c_1=c_2=0$ we obtain the graph given by $$ f(x,y)= -\arcsin\big(\frac{c_3}{y}\big)+c,\qquad y\geq
|c_3|>0. $$ In Figure~\ref{ombelicale} there is a plot of this surface for $c_3=-1$. \begin{figure}
\caption{\small Umbilical surface of $\h^2\times\r$.}
\label{ombelicale}
\end{figure}
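For this example the umbilicity conditions \eqref{matriz} can be checked directly: since $f_x=0$, the first two equations hold trivially, and the third reduces to $f_y/w$ being constant. A sympy sketch (ours, not part of the paper), working with squares to avoid branch issues with the radicals:

```python
import sympy as sp

# For f = -arcsin(c3/y) + c one has f_x = 0, so the first two equations of
# the umbilicity system hold trivially; the third reduces to f_y/w being
# constant, which we verify via (f_y/w)^2 = c3^2 (both sides are rational).
x, y, c, c3 = sp.symbols('x y c c3', positive=True)

f = -sp.asin(c3 / y) + c
fx, fy = sp.diff(f, x), sp.diff(f, y)
w = sp.sqrt(y**2 * (fx**2 + fy**2) + 1) / y**2

assert fx == 0
assert sp.simplify(fy**2 / w**2 - c3**2) == 0
```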
\noindent{\bf Acknowledgements}. The authors wish to thank Francesco Mercuri for valuable conversations during the preparation of this paper.
\end{document}